For people authoring original content who may end up having the only copy of a given piece of news-relevant data in their possession, using a lossy compression method to back it up sort of defeats the purpose. This isn’t stashing your old DVD collection; this is trying to back up privileged professional data.
Raw high-def video and image files?
But yeah, there’s unlimited and then there’s kinda pushing the limits of what’s reasonable. 233 TB is more than the contents of some orgs’ datacenters.
Wait, journalist, 233 terabytes? Just what in the fuck did his life’s work consist of?
npm packages in Docker
Liftoff app cache folder.
That would take 2 years to upload.
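The rough math checks out, assuming something like a 30 Mbps residential uplink (an assumed figure, not from the thread):

```python
# Back-of-envelope: upload time for 233 TB.
# The uplink speed is an assumption; real-world speeds vary widely.
data_tb = 233
uplink_mbps = 30  # assumed sustained upload speed, megabits/second

data_bits = data_tb * 1e12 * 8               # decimal TB -> bits
seconds = data_bits / (uplink_mbps * 1e6)
print(f"{seconds / (365 * 24 * 3600):.1f} years")  # ~2.0 years
```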
raw recorded video
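For scale, uncompressed 4K piles up fast; a rough sketch, where the format parameters (resolution, frame rate, bit depth) are assumptions for illustration:

```python
# Why raw video gets huge: uncompressed 4K data rate.
# All format parameters here are illustrative assumptions.
width, height = 3840, 2160
fps = 24
bits_per_pixel = 20  # roughly 10-bit with 4:2:2 chroma subsampling

bits_per_sec = width * height * bits_per_pixel * fps
tb_per_hour = bits_per_sec * 3600 / 8 / 1e12
print(f"{tb_per_hour:.2f} TB/hour")                   # ~1.8 TB/hour
print(f"{233 / tb_per_hour:.0f} hours fills 233 TB")  # ~130 hours
```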
It’s simply stupid not to compress to H.265 before uploading it.
Tell me you don’t know shit about professional video production without telling me you don’t know shit about professional video production.
that’s not what videographers do with their raw footage
https://x265.readthedocs.io/en/master/lossless.html
https://trac.ffmpeg.org/wiki/Encode/H.265#Losslessencoding
I want to clarify that it supports lossless compression as well.
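Something like this, going off the ffmpeg wiki recipe linked above (filenames are placeholders; needs an ffmpeg build with libx265):

```python
# Lossless H.265 encode via ffmpeg/libx265, per the wiki recipe above.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "input.mov",             # placeholder source file
    "-c:v", "libx265",
    "-x265-params", "lossless=1",  # mathematically lossless mode
    "-c:a", "copy",                # pass audio through untouched
    "output.mkv",                  # placeholder output
], check=True)
```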
If it’s good, it’s good 😊
A Call of Duty update.
That’s only one map though; where’s the rest?
No, not my Garry’s Mod lol
My node_modules folder
Log files from a local SQL server.
JPGs
Yeah, 233 TB isn’t that wild. There are show-a-day YouTube production companies with a team of editors running for years off a petabyte.
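That tracks, assuming on the order of 1 TB of raw footage per shooting day (an assumed figure, not from the thread):

```python
# Sanity check: how long a petabyte lasts a daily-show production.
# The per-day footage volume is an illustrative assumption.
tb_per_day = 1.0       # assumed raw footage per production day
petabyte_tb = 1000
days = petabyte_tb / tb_per_day
print(f"{days / 365:.1f} years")  # ~2.7 years of daily shooting
```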
Certainly not impossible, but probably more of an article in the making than an actual journalist in distress.