Most of the “tech” YouTube world is based on presenting mostly useless consumer products as if they were technological advancements.
Most of their SaaS advertisers could be replaced by a “docker compose up”; the hardware ones are, most of the time, just regular tools with one or two gimmicks.
The way to make money advertising on Linux is by misleading business people into buying useless enterprise services.
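For what it’s worth, the “docker compose up” point isn’t hypothetical for a lot of what gets advertised. A minimal sketch, where the service name and image are placeholders for whatever notes/file-sync product is being pitched:

```yaml
# Hypothetical self-hosted replacement for a paid notes/sync SaaS.
# "example/notes-app" is a placeholder image, not a real project.
services:
  notes:
    image: example/notes-app:latest
    ports:
      - "8080:8080"        # expose the web UI on the host
    volumes:
      - ./data:/app/data   # persist app data next to the compose file
    restart: unless-stopped
```

One `docker compose up -d` and you have the core of what many of those subscriptions sell, minus the support and uptime guarantees discussed below.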
Enterprise services exist so that client companies have someone to blame contractually when there’s an issue, instead of themselves. That’s very valuable.
Plus, support is pretty nice to have.
Only because it actually takes the real work off the backs of the sysadmins.
Exactly why banks almost always use some form of corporate UNIX-based OS for this or that. Shit hits the fan --> blame the other guy. You can’t do that with community-based distros; even Debian offers no guarantee whatsoever.
You are off your rocker if you think most SaaS products can be replaced by Docker 🤣
There is a big gap between running Jellyfin in your basement and securely and reliably maintaining services.
SAAS is a scam developed by venture capital to make their otherwise nominally profitable tech gambits able to bilk clients of cash on a scale not even Barnum could fathom.
👌👍
It’s funny that you use that as a selling point.
In my experience, almost no outage happens because of hardware failure. Most outages happen because of bad configuration and/or expired certs, which in turn are a symptom of too much complexity.
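On expired certs specifically: the failure mode is common because nothing watches the date. A minimal sketch of the kind of check that catches it — the timestamp format matches what `openssl x509 -noout -enddate` prints, and the 30-day threshold is an arbitrary choice:

```python
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Days remaining on a cert, given an `openssl x509 -noout -enddate`
    style timestamp like 'Jun  1 12:00:00 2031 GMT'."""
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y GMT")
    return (expiry.replace(tzinfo=timezone.utc) - now).days

# A cert expiring Jan 1 2030, checked on Jan 1 2029: 365 days left.
remaining = days_until_expiry("Jan  1 00:00:00 2030 GMT",
                              datetime(2029, 1, 1, tzinfo=timezone.utc))
print(remaining)  # 365
if remaining < 30:  # arbitrary alert threshold
    print("renew the cert NOW")
```

In practice you’d feed it the live cert (e.g. via `openssl s_client`) from a cron job, which is all it takes to prevent this whole class of outage.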
Imagine thinking availability is all you need to do.
Your experience must be extremely limited.
Is there 🤔? I’ve seen things in production you wouldn’t believe. Rigs from the stone age: a 30+ year old DEC still running their version of UNIX, with people saving files on tapes. Why? It’s how it has always been done 🤷. A firewall/router configured back in 2001 (no one’s touched it since). An Ubuntu 12.04 install running a black-box VM that no one knows what it’s actually for, except that it was needed back in 2012 for something related to upgrading the network… so don’t touch it cuz shit might stop working.
Trust me, I’ve seen homelabs that are far better maintained than real-world production stuff. If you’re talking about the 0.2% of companies/banks that actually take care of their infrastructure, they are the exception, not the norm.
Homelabs will always be better maintained. In most cases it’s a one-man show, and the documentation can be slight hints that will help you remember the process when you need it.
Most of the documentation for my homelab server is a README file in the folder next to the docker compose file. At work I’m forced to write a lengthy explanation in Confluence as to why things are the way they are.
If there is documentation… subcontractors come and go; some leave documentation, others don’t.
Most SaaS products, no; most of the software I saw advertised on those kinds of channels, yes.
So you’re telling me all those products built on top of Docker are !!MILITARY GRADE!!?