this post was submitted on 24 Aug 2024
38 points (95.2% liked)

Linux

I am seeking advice regarding my ebook collection on a Linux system, which is stored on an external drive and sorted into categories. However, there are still many unsorted ebooks. I have tried using Calibre for organization, but it creates duplicate files during import on my main drive where I don't want to keep any media. I would like to:

  • Use Calibre's automatic organization (tags, etc.) without duplicating files
  • Maintain my existing folder structure while using Calibre
  • Automatically sort the remaining ebooks into my existing categories/folder structure

Because my collection is very large, I am considering using symlinks to preserve the existing folder structure, if there is a simple way to automate creating them.

Regarding automatic sorting by category, I am looking for a solution that doesn't require manual organization or a significant time investment. I'm wondering if there's a way to extract metadata based on file hashes or any other method that doesn't involve manual work. Most of the files should have title and author metadata, but some won't.

Has anyone encountered a similar problem and found a solution? I would appreciate any suggestions for tools, scripts, or workflows that might help. Thank you in advance for any advice!

you are viewing a single comment's thread
[–] [email protected] 3 points 3 weeks ago (2 children)

I tried to ingest a four-terabyte epub library once. Even getting the data ingested with the author and title in the right spots was almost impossible. If the duplicates were only slightly wrong it would be a different story, but the duplicates are often misspellings or alternate spellings.

Realistically the best thing you can do is get an output of file name, title, and author, and hand-dedupe, but even then you're going to have to be careful about quality, language, and all kinds of other strange issues you run into with large libraries.

In the end I gave up and only stored what I really wanted and would realistically ever need, and that was small enough to hand-cull.

[–] [email protected] 2 points 3 weeks ago (1 children)

When I had to match against misspellings I found Levenshtein distance to be most useful.

https://en.wikipedia.org/wiki/Levenshtein_distance?wprov=sfla1
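For reference, the distance is just the minimum number of single-character edits between two strings, and it fits in a few lines of plain Python (the function name is mine; for millions of comparisons a compiled package like python-Levenshtein will be much faster):

```python
def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# levenshtein("kitten", "sitting") -> 3
```

A small distance threshold (say, 1-2 edits relative to title length) works well for catching misspelled duplicates.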

[–] [email protected] 2 points 3 weeks ago

Yeah, I wrote some Python once to give me voice control over a Plex server. Distance algorithms did okay, but I got a lot better results out of FuzzyWuzzy.

It's kind of a dicey prospect when you're doing deduplication, though; a bad match means deleting something you wanted to keep.

Then you run into problems with things like Harry Potter and the Philosopher's Stone versus Harry Potter and the Sorcerer's Stone. Depending on how badly your database has degenerated, you can even end up with words out of order and all kinds of crazy crap. If the database is truly large, just truing it up can be unreasonably time-consuming.
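The words-out-of-order case is what FuzzyWuzzy's token_sort_ratio is for: sort the words in each string before comparing. A rough stdlib-only approximation using difflib (helper name is mine):

```python
from difflib import SequenceMatcher

def token_sort_ratio(a, b):
    """Similarity of two titles on a 0-100 scale, ignoring word order and
    case -- a rough stand-in for FuzzyWuzzy's token_sort_ratio."""
    norm = lambda s: " ".join(sorted(s.lower().split()))
    return round(SequenceMatcher(None, norm(a), norm(b)).ratio() * 100)

# token_sort_ratio("Stone Harry", "harry stone") -> 100
```

The two Harry Potter titles still score high but below 100, so you'd need a threshold (and probably a manual review queue) rather than trusting any cutoff blindly.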

I was pretty amazed at all the different versions of string search and comparison algorithms.