After upgrading my internet connection, I immediately noticed that my HDD tops out at 40 MB/s, bottlenecking download speed in qBittorrent. Is it possible to use an SSD as a catch drive for a 12 TB HDD, so it uses SSD speeds when downloading and moves files to the HDD later on? If yes, does it make sense? Is anyone using anything similar? Would 512 GB be enough, or could I benefit from a 2 TB SSD?

The HDD is just for Jellyfin (movies/shows), not in RAID; I don't need a backup for that drive and can afford to risk the data, if that matters at all

All suggestions are welcome, Thx in advance

EDIT: I obviously have upset some of you, which wasn't my intention; I'm sorry about that. I love to tinker and learn new things, but I could live with much lower speeds tho… Please don't hate me if I couldn't understand your comment or wasn't clear with my question.

The HDD being the bottleneck at 40 MB/s was a wrong assumption (found that out in the meantime). I'm still trying to figure out what made the download that slow, but I'm interested in learning about the main question anyway. I just thought I was experiencing the same issue as many people today: having faster internet than storage. Some of you provided solutions I will look into, but I need time for that, and I also have to fix whatever else is causing my issue.

Keep this community awesome because it is <3

  • ShortN0te@lemmy.ml

    40 MB/s is very, very low, even for an HDD. I would definitely want to debug why it's that low.

    Yes, it's possible. Filesystems like ZFS, btrfs, etc. support that.

    • catloaf@lemm.ee

      It’s probably a 5400rpm drive, and/or SMR. Both are going to make it slower.

      • Markaos@lemmy.one

        In my very limited experience with my 5400rpm SMR WD disk, it’s perfectly capable of writing at over 100 MB/s until its cache runs out, then it pretty much dies until it has time to properly write the data, rinse and repeat.

        40 MB/s sustained is weird (but maybe it’s just a different firmware? I think my disk was able to actually sustain 60 MB/s for a few hours when I limited the write speed, 40 could be a conservative setting that doesn’t even slowly fill the cache)

    • acosmichippo@lemmy.world

      agreed, I think there is something else going on here. Test the write speed with another application; I doubt the drive actually maxes out at 40 MB/s unless it's severely fragmented or failing.

      incidentally, what OP wants is how most people set up Unraid servers: an SSD cache takes incoming files for write speed, then at a later time the OS moves the files to the spinning disk array.

    • rambos@lemm.eeOP

      It's the cheapest drive I could find (refurbished Seagate from Amazon); I thought that's the reason for it being slow, but I wasn't aware it was that low. I'm also getting 25-40 MB/s (200-320 Mbps) when copying files from this drive over the network. Streaming works great, so it's not too slow at all. Is there a better way of debugging this? What speeds can I expect from a good drive, or the best drive?

      I'll research more about btrfs and ZFS, thx

        • rambos@lemm.eeOP

          Yeah, but I need to figure out how to see the transfer speed using SSH. Sorry, noob here :)

            • rambos@lemm.eeOP

              I have managed to copy with rsync and I'm getting 180 MB/s. I guess my initial assumption was wrong; the HDD is obviously not the bottleneck here, it can get close to my ISP speed. Thank you for pointing this out, I'll do more testing these days. I'm kinda shocked because I never knew an HDD could be that fast. Gonna reread all the comments as well
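
              For reference, the invocation looked roughly like this (host and paths here are placeholders, not my real ones); the --progress flag is what prints the live transfer rate:

                rsync -ah --progress user@server:/mnt/media/test.mkv /tmp/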

              • ShortN0te@lemmy.ml

                The limitation of HDDs was never sequential read/write when it comes to day-to-day use on a PC.

                The huge difference compared to an SSD shows up when data is written or read non-sequentially, often referred to as random I/O.

              • not_fond_of_reddit@lemm.ee

                The cool thing about rsync is that it goes ”BRRRRRRRRR!” like a warthog… the plane… and it can saturate the receiving drive or array, depending on your network and client. And getting 180 with rsync on a SATA drive, you can't really hope for more.

                And you can run a quick 'n' dirty test using dd:

              $> dd if=/dev/zero of=1g-testfile bs=1G count=1
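
                To aim the test at a specific drive, point of= at a file on that drive's mount point (the path below is a placeholder), and add conv=fdatasync so dd reports the disk's speed rather than the RAM write cache:

              $> dd if=/dev/zero of=/mnt/media/testfile bs=1G count=1 conv=fdatasync status=progress
              $> rm /mnt/media/testfile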

                • rambos@lemm.eeOP

                  Thx. I've seen dd commands in guides on how to test drive speed, but I'm not sure how to specify which drive I want to test. I see I could change "if" and "of", but I don't trust myself enough to use my own modified commands before understanding them better. Will read more about that. Honestly I'm surprised drive speed testing isn't easier, but it's probably just me still being a noob xD

  • johntash@eviltoast.org

    Unraid has this with their cache pools. ZFS can also be configured to have a cache drive for writes.

    You can also DIY with something like mergerfs and separate file systems.

    • rambos@lemm.eeOP

      I've heard about all of these before, gonna do more research. Thank you

    • rambos@lemm.eeOP

      Are you also talking about the incomplete directory in qbit? It doesn't make it faster afaik, but I might be wrong. I haven't tried anything yet, wanted to check whether it's something usual or not worth it at all. Got zero experience with using an SSD as a catch drive, it just made sense to me

      • braindefragger@lemmy.world

        Yes, if the temporary directory where the files are being downloaded (incomplete folder) is on the SSD, then it will be faster, especially if you’ve identified a cheap HDD as your bottleneck.

        Unless you are incorrect about the HDD being the bottleneck.

        • rambos@lemm.eeOP

          Yeah it will be faster, but it's an extra step before the files become available on the HDD.

          Even if my HDD were super fast and healthy, would it still be a bottleneck for 2 Gbps fiber? I'll deffo play with the HDD more to find max speeds, wasn't paying attention before because it felt normal to me

          • braindefragger@lemmy.world

            Of course it’s an additional step. But it will download faster. Which was what you asked for, specifically in your post above.

            If you write directly to your HDD, it will take longer to download. If you write to your (faster?) SSD, the download will be faster, but yeah, processing has another step of copying.

            I’m sorry, but I have no idea what you’re asking.

            Best of luck.

            • rambos@lemm.eeOP

              Yeah feels like that lol. Thx anyway, have a nice day dude

        • acosmichippo@lemmy.world

          what OP wants is to download the file to a SSD, be able to use it on the SSD for a time, and then have the file moved to spinning disk later when they don’t need to wait for it.

          this is just adding an extra step to the process before the file can be available to use. you’re just saving the copying to the HDD until the very end of the torrent.

          • braindefragger@lemmy.world

            Yeah, of course it is. Because that's what OP asked for. I don't see (use it for a bit first and then automatically copy it over).

            I see:

            Is it possible to use an SSD as a catch drive for a 12 TB HDD, so it uses SSD speeds when downloading and moves files to the HDD later on?

            I assumed OP wanted Faster Download Speeds > Time to Access File

            You know what. I don’t care. This whole post is ridiculous.

      • DaGeek247@fedia.io

        Yeah, I use the incomplete folder location as a cache drive for my downloads as well. Works quite nicely. It also keeps the incomplete ISOs out of Jellyfin until they're actually ready to watch, so, bonus.

        If it's not going faster for you, there's probably something else that's broken.

        • rambos@lemm.eeOP

          It will download faster to the SSD, but then I have to wait for the files to be moved to the HDD before they get imported into the media server. I'm not after big numbers in qbit, I just want to start watching faster, if possible. Sorry, I'm probably not explaining well, and I'm not sure if I'm asking for something that even makes sense

          • DaGeek247@fedia.io

            qbittorrent moves the completed files to the assigned location literally as soon as it is done.

            • acosmichippo@lemmy.world

              but if the disk is actually bottlenecking at 40 MB/s, it will still take time to copy from the SSD. That plus the initial download to the SSD will just end up being more time than downloading to the spinning disk at 40 MB/s in the first place.

              • Terrasque@infosec.pub

                I doubt the disk will bottleneck at 40 MB/s when doing sequential writes. Torrent downloads are usually heavy random writes, which is the worst thing you can do to an HDD.

              • DaGeek247@fedia.io

                That’s not how hard drives work, and it doesn’t take into account that OP might want to download more than one thing at a time.

                Hard drives are fastest when they are moving large single files. SSDs are way better than hard drives at lots of small random reads/writes. Setting qbittorrent up so that all the random writes inherent to downloading a torrent go to a small SSD, and then moving the finished file over to the big hard drive with a single long write operation, is how you make both devices perform at their best.

  • slazer2au@lemmy.world

    You can, and qBittorrent has this functionality built in. You set your in-progress download folder to the SSD, then set the move-when-completed location to your HDD.

    As for the size, that would depend on how much you are downloading.

    • rambos@lemm.eeOP

      But that would first download to the SSD, then move to the HDD, and only then become available (arr import) on the Jellyfin server, making it slower than not using the SSD. Am I missing something?

      • BombOmOm@lemmy.world

        The biggest thing is you have changed a random write into a linear write, something HDDs are significantly better at. The torrent is downloading little pieces from all over the place, requiring the HDD to move its head all over the place to write them. But when simply copying off the SSD, it keeps the head in roughly one place and just writes linearly, utilizing its maximum write speed.

        I would say try it out, see if it helps.

        Also, if the HDD is having to do other tasks at the same time, that will slow it down, as the head can only ever be in one place.
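
        Rough numbers, using figures from elsewhere in this thread (40 MB/s torrent-style writes, ~180 MB/s sequential, a 2 Gbps ≈ 240 MB/s line) for a 30 GB download: straight to the HDD takes about 30000 / 40 ≈ 750 s, or 12.5 minutes. Downloading to the SSD first takes about 30000 / 240 ≈ 125 s, plus a 30000 / 180 ≈ 165 s sequential copy, so roughly 5 minutes end to end.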

  • Maxy@lemmy.blahaj.zone

    qBittorrent has exactly the option you’re looking for; I believe it’s called “incomplete download path” in the settings, letting you store incomplete downloads at a temporary path and move them to their regular location when the download finishes. Aside from the download speed improvement, this will also lead to less fragmentation on your HDD (which might be part of the reason why it is so slow when downloading directly to it). Pre-allocating space could have the same effect, but I would recommend only using one of these two solutions at once (pre-allocating space on your SSD would only waste space).
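
    If you'd rather set it outside the GUI, the same option lives in qBittorrent's config file. A sketch from memory (treat the section and key names as assumptions, and check your own file before editing; the paths are placeholders):

      # ~/.config/qBittorrent/qBittorrent.conf
      [BitTorrent]
      Session\TempPathEnabled=true
      Session\TempPath=/mnt/ssd/incomplete
      Session\DefaultSavePath=/mnt/hdd/media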

    • rambos@lemm.eeOP

      But that would first download to the SSD, then move to the HDD, and only then become available (arr import) on the Jellyfin server, making it slower than not using the SSD. Am I missing something?

      • braindefragger@lemmy.world

        Is it possible to use an SSD as a catch drive for a 12 TB HDD, so it uses SSD speeds when downloading and moves files to the HDD later on?

        Is that not what you asked for?

        • rambos@lemm.eeOP

          Well yes, but I was hoping the files could be available (imported to the media server) before they are moved to the HDD. Import is not possible from the incomplete directory, if I understood that correctly (*arr stack)

          • catloaf@lemm.ee

            You would have to add both directories to your library.

      • Maxy@lemmy.blahaj.zone

        It depends what you’re optimising for. If you want a single (relatively small) download to be available on your HDD as fast as possible, then your current setup might be better (optimising for lower latency). However, if you want to be maxing out your internet speed at all times and increasing your HDD speeds by making the copy sequential (optimising for throughput), then the setup with the catch drive will be better. Keep in mind that a HDD’s sequential write performance is significantly higher than its random write performance, so copying a large file in one go will be faster than copying a whole bunch of random chunks in a random order (like torrents do). You can check the difference for yourself by doing a disk benchmark and comparing the sequential vs random writes of your drive.
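
        A sketch of such a benchmark with fio (the filename path is a placeholder; point it at the HDD's mount):

          # sequential writes, large blocks
          fio --name=seq --filename=/mnt/hdd/fio.test --rw=write --bs=1M --size=2G --direct=1
          # random writes, small blocks (closer to how torrents land on disk)
          fio --name=rand --filename=/mnt/hdd/fio.test --rw=randwrite --bs=16k --size=2G --direct=1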

        • rambos@lemm.eeOP

          Thank you. The files I download are usually 5-30 GB in size. I don’t need to max out my internet speed, I just want to get the files into the media library ASAP after requesting a download manually (happens maybe a few times a week).

          It makes sense. I’ll test sequential and random write performance, and maybe even try the SSD setup since I have the hardware available.

          At first I wasn’t aware that my speed was super low for an HDD, so I was looking for some magic solution with SSD speeds and HDD storage that might not even exist. I have to do more testing for sure

  • capital@lemmy.world

    I do this with mergerfs.

    I then periodically use their prewritten scripts to move things off the cache and to the backing drives.

    I should say it’s not really caching, but it effectively takes care of this issue. Bonus: all that storage isn’t just used for cache but also for long-term storage. For me, that’s a better value proposition.
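
    The mover itself can be as simple as find plus rsync, along the lines of the example scripts in the mergerfs docs. A rough sketch (paths are the /mnt/ssd1 cache and /mnt/stor pool from the fstab I post below; the 7-day age cutoff is arbitrary):

      # move anything not accessed for 7+ days from the SSD cache to the backing pool
      cd /mnt/ssd1 && find . -type f -atime +7 -print0 \
        | rsync -a --remove-source-files --files-from=- --from0 . /mnt/stor/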

      • schizo@forum.uncomfortable.business

        <3 mergerfs and <3 my setup, but just a warning: make sure you read the documentation and ensure you’ve got all the proper options set in your fstab entry for the mergerfs mount.

        There’s a lot of stuff in there that can interact weirdly with various pieces of software and lead to the most insane debug sessions because, well, why would a drive mount break other software (in my case it was qbittorrent in docker when an upgrade required me to change the mount options to not include direct_io).

        • capital@lemmy.world

          Yeah that was fun times.

          Luckily, thanks to using docker, it was easy enough to “pin” a working version in the compose file while I figured out what just broke.

          For everyone’s reference, here’s my fstab to give you an idea of what works with linuxserver.io’s qbittorrent

          ## Media disks setup for mergerfs and snapraid
          
          # Map cache to 1TB SSD
          /dev/disk/by-id/ata-Samsung_SSD_860_EVO_1TB_S3Z8NB0K820469N-part1 /mnt/ssd1 xfs defaults 0 0
          
          # Map storage and parity. All spinning disks.
          /dev/disk/by-id/ata-WDC_WD100EZAZ-11TDBA0_JEK39X4N-part1 /mnt/par1         xfs defaults 0 0
          /dev/disk/by-id/ata-WDC_WD100EZAZ-11TDBA0_JEK3TY5N-part1 /mnt/disk01       xfs defaults 0 0
          /dev/disk/by-id/ata-WDC_WD100EZAZ-11TDBA0_JEK4806N-part1 /mnt/disk02       xfs defaults 0 0
          /dev/disk/by-id/ata-WDC_WD100EZAZ-11TDBA0_JEK4H0RN-part1 /mnt/disk03       xfs defaults 0 0
          /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4XFT0TS-part1 /mnt/disk04 xfs defaults 0 0
          /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4XFT1YS-part1 /mnt/disk05 xfs defaults 0 0
          /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4XFT3EK-part1 /mnt/disk06 xfs defaults 0 0
          /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N6CKJJ6P-part1 /mnt/disk07 xfs defaults 0 0
          
          # Setup mergerfs backing pool
          /mnt/disk* /mnt/stor fuse.mergerfs defaults,nonempty,allow_other,use_ino,inodecalc=path-hash,cache.files=off,moveonenospc=true,dropcacheonclose=true,link_cow=true,minfreespace=1000G,category.create=pfrd,fsname=mergerfs 0 0
          
          # Setup mergerfs caching pool
          /mnt/ssd1:/mnt/disk* /mnt/cstor fuse.mergerfs defaults,nonempty,allow_other,use_ino,inodecalc=path-hash,cache.files=partial,moveonenospc=ff,dropcacheonclose=true,minfreespace=10G,category.create=ff,fsname=cachemergerfs 0 0
          
          • schizo@forum.uncomfortable.business

            Yeah, it took me FOREVER to finally land on a useful search result for WTF was going on (thanks Google, you pile of junk!) because the impact was that everything looked perfectly fine, you just… couldn’t download anything?

            No errors, no faults, nothing in the logs, just adding anything resulted in absolutely nothing happening.

            Really freaking weird.

  • Possibly linux@lemmy.zip

    bcachefs will fill this role someday.

    For now there is ZFS, which has a cache drive option. Keep in mind it will absolutely destroy the cache drive by wearing out the flash.

    You could also look into ZFS special disks. However, if you are going that way already, you might as well get a bunch of disks.
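
    For reference, the attach commands look roughly like this (pool name and device paths are placeholders):

      # add an SSD as a read cache (L2ARC)
      zpool add tank cache /dev/disk/by-id/ssd-part1
      # add a mirrored special vdev for metadata and small blocks
      zpool add tank special mirror /dev/disk/by-id/ssd-a /dev/disk/by-id/ssd-b

    Note that L2ARC accelerates reads; for async writes ZFS has no true write-back SSD cache, and a SLOG only helps synchronous writes.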

    • rambos@lemm.eeOP

      I’ll look into ZFS, but in the meantime I found out my HDD is probably not the bottleneck. Still want to learn about this, so thanks for your comment

  • Mister Bean@lemmy.dbzer0.com

    Depends on the filesystem. I know for a fact that ZFS supports SSD caches (in the form of L2ARC and SLOG), and I believe that LVM does something similar (although I’ve never used it).

    As for the size, it really depends how big the downloads are; if you’re not downloading the biggest 4K movies in existence, then you should be fine with something reasonably small like a 250 or 500 GB SSD (although I’d always recommend going higher because of durability and speed)

    • rambos@lemm.eeOP

      Thx. I use ext4 right now. I might consider reformatting, but there are so many new words to research before deciding that. I’ve heard about ZFS, but I’m not sure it’s right for me since I only have 16 GB of RAM.

      Downloads are 100-200 GB max, but less than 40 GB most of the time. I have the 512 GB one in use and a 2 TB SSD not in use, can swap them if needed

  • nitrolife@rekabu.ru

    I used LVM with an SSD cache for a few years, but from time to time I had problems with load after a reboot. Reboots aside, everything worked great with LVM RAID + LVM cache. The cache can be configured without RAID, and you can add or remove a cache at any time. Docs: https://man.archlinux.org/man/lvmcache.7
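
    For reference, attaching a cache with the cachevol flow looks roughly like this (the VG and LV names are placeholders, not from a real setup):

      # carve a cache LV out of the SSD (its PV must already be in the same VG)
      lvcreate -n cache0 -L 400G vg0 /dev/sdb1
      # attach it as a cache to the HDD-backed LV
      lvconvert --type cache --cachevol cache0 vg0/media
      # detach again later with: lvconvert --splitcache vg0/media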

    • rambos@lemm.eeOP

      Thx, I’ll check it out

  • Decronym@lemmy.decronym.xyzB

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    LVM             (Linux) Logical Volume Manager for filesystem mapping
    RAID            Redundant Array of Independent Disks for mass storage
    SSD             Solid State Drive mass storage
    ZFS             Solaris/Linux filesystem focusing on data integrity

    [Thread #938 for this sub, first seen 27th Aug 2024, 13:05]

  • lemmylommy@lemmy.world

    Any HDD should be able to get at least 100 MB/s sequential write speed. Unfortunately torrent writes are usually very random, which just kills HDD performance. Multiple parallel downloads or concurrent playback from the same disk will only make it worse.

    Using a SSD for temporary files will absolutely help. It should be big enough to hold all the files you are downloading at any one time.

    You could also try to find a write cache setting that works for you. That way, what would usually be many small writes can be combined into bigger chunks in memory before they are sent to storage. Depending on how much RAM is available, I would start at 1 GB or so; if it is still bottlenecking, try increasing or decreasing it until it improves. Of course, always stay within the range of free RAM.

    Back when I was torrenting (ages ago) write cache helped a lot. It should be somewhere in the settings menu.
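
    If your client build doesn’t expose such a knob (I believe qBittorrent builds on libtorrent 2.x dropped the disk cache setting), the OS write-back cache can play a similar role. A Linux sketch, with the sizes being the part to tune:

      # allow ~1 GB of dirty pages to accumulate before writeback kicks in
      sudo sysctl -w vm.dirty_bytes=1073741824
      sudo sysctl -w vm.dirty_background_bytes=536870912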

    • osaerisxero@kbin.melroy.org

      My solution to this was to put the default download folder on an NVMe drive and then move the torrent to a storage HDD after completion

    • rambos@lemm.eeOP

      Oh, you are talking about torrent client settings? I could spare 1-2 GB of RAM, but not more than that (got 16 GB in total). I see this might help a lot, but would I still be limited by the HDD max write speed? Using an SSD for temporary files sounds great, but waiting for files to be copied to the HDD would slow it down, if I understood correctly

  • Lost_My_Mind@lemmy.world

    Great that you have a catch drive. I assume the data drive manages everything. So I’m going to call that the manager drive.

    Now you just need:

    A 1st base drive.

    A 2nd base drive.

    A 3rd base drive.

    A shortstop drive.

    A left field drive.

    A center field drive.

    A right field drive.

    About 3-4 starting drives.

    A half dozen reliever drives.

    A closer drive.

    A hitting coach drive.

    And a couple of base running coach drives!

    Got yourself a baseball team!