Hi all, I’ve been venturing into this amazing self-hosting hobby for months, and for the last couple of days I’ve been reading and trying to understand Kubernetes a bit more. I followed this article:

https://theselfhostingblog.com/posts/setting-up-a-kubernetes-cluster-using-raspberry-pis-k3s-and-portainer/

It walks you through setting up the lightweight Kubernetes distribution (K3s) and using Portainer as your management dashboard, and it works flawlessly. As you guys can see, I’m just using two nodes at the moment.

I’m using Helm to install packages, and ArtifactHub to find ready-to-use repositories to add into Portainer’s Helm section (still in beta, but it works flawlessly). I’ve installed some packages and the apps work just as I expected. However, there seems to be a shortage of ready-to-use charts compared to plain Docker. With Plex, for example, the only way I got it running in K3s was through KubeSail, which offers an unofficial apps section that includes Plex and tons of other well-known apps. Strangely enough, although they’re labeled unofficial, they work perfectly when installed, though Portainer labels all apps installed from KubeSail as external.

Now I think I get the use of Kubernetes: it’s to have several nodes to use as resources for your apps, and it also acts like a load balancer, so if one node fails your services/apps can keep on running? (Like RAID for hard disks?)

Although it was fun learning at least the basics of Kubernetes with my two nodes, is it really necessary to go full-blown Kubernetes? Or is Docker just fine for the majority of us homelab self-hosting folks?

And is what I’m learning here the same as in enterprise environments? At least the basics?

  • Eufalconimorph@discuss.tchncs.de · 8 months ago

    Kubernetes adds a lot of complexity. In return, it allows various teams in your company to work mostly independently, so that your software stack can mirror your org chart better. It trades latency for scalability (adds network calls to things that could have been local function calls). If your “home lab” isn’t serving millions of users, you don’t need Kubernetes to run it.

    That said, you might be using your home lab partly as practice for a job at a large company where the tradeoffs of Kubernetes make sense (or at least someone thought they made sense and started using it, which is more common). That means using it at home can provide valuable self training, since you can screw around and not take down the production cluster for anyone other than yourself.

  • pusillanimouslist@alien.top · 8 months ago

    No, it is not worth it. The benefits of k8s really kick in at scale, which none of us really reach. Most of us would be well served by Proxmox or similar.

    But then again, if we were all reasonable people most of us wouldn’t have a homelab either.

    Anyways, I run K3s. It’s overkill, but that’s fine. But god, Helm. Most of the problems I’ve had with my Kubernetes setup have been half-baked, abandonware Helm charts not supported by the project in question. I’m going through the process of removing every instance of Helm where the chart isn’t created first-party.

  • johntellsall@alien.top · 8 months ago

    I’m a DevOps professional and adore Kubernetes. I have a CKAD cert and professional experience…

    No

    For my homelab, I’m putting most stuff at the Proxmox layer (e.g. Nextcloud, Kubernetes, NFS storage). I’m putting a few things in Kubernetes, but at this point it’s just a testbed (e.g. Argo CD). At some point I’ll put up a second, “production” K8s cluster and run apps in there forever.

    I’ve been doing setups with Ansible and a little Terraform, and it’s great. I can build, tweak, and rebuild really quickly without having to go in and hand-adjust one little thing at a time. It’s fantastic for my confidence that the stack works exactly as I want it.
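    To give a feel for what that looks like, here’s a minimal Ansible playbook sketch. The host group, package, and path names are just placeholders, not my actual setup:

    ```yaml
    # Hypothetical rebuild-from-scratch playbook; all names are placeholders.
    - name: Rebuild my app host
      hosts: homelab
      become: true
      tasks:
        - name: Install base packages
          ansible.builtin.apt:
            name: [docker.io, nfs-common]
            state: present

        - name: Deploy service config from a template
          ansible.builtin.template:
            src: templates/app.conf.j2
            dest: /etc/myapp/app.conf
    ```

    Running the same playbook twice is a no-op when nothing changed, which is what makes the rebuild-and-tweak loop so quick.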

  • borg286@alien.top · 8 months ago

    10+ years at Google as an SRE. While Borg != k8s, I’ve seen my fair share of platforms come and go. The trend seems to reward shifts towards declarative automation rather than imperative orchestration models. In the programming world you’ll hear the term idempotent; similar idea. There is no substitute or wrapper that can take something imperative and make it declarative without tons of work. Ansible is imperative: if something goes wrong, it’s easiest to nuke everything and try again. K8s is the culmination of various imperative automation systems at Google, attempts at replacing them with declarative ones, trying again, and then finally starting afresh with an open-source version of Borg.

    Not many companies need the scale of Google, with thousands of engineers modifying production through hardened interfaces that force developers to write their applications in an opinionated way (stateful applications must use a StatefulSet, dynamic configuration should go into a ConfigMap, command-line arguments are kept separate from the command being executed and from the environment variables, LoadBalancers are distinct from, and an implementation detail of, Services…).

    But with the good foundation that k8s provides and imposes, you set yourself up to let the infrastructure team not care about what is running on which hardware. They can focus on hardware, networking, disk swap-outs… Ops can focus on service uptime, readiness and liveness probes, standardized monitoring/logging, traffic routing, and rollouts. Devs can focus on writing code. These standards reduce the leakage that often happens between these three groups.

    Taking declarative to the next level, you build CI/CD pipelines that can take the yaml files in a GitHub repo and automatically push them. At the next level you want to account for importing templates and standard libraries, so you look at Kustomize, until you realize it doesn’t give you the building blocks you need. You then adopt more declarative models where the source code (both the Java and the json/yaml config files) is built, and the artifacts of that build step are what get fed into k8s, making your GitHub repo the source of truth. Then all production fiddling is done with PRs rather than by clicking buttons in an imperative way on some UI.

    The more you see automation tools, the more you realize that declarative offers a more robust interface that can be glued to other declarative systems, albeit adding yet another layer of abstraction. This complexity is often not streamlined enough for people on this subreddit, or for lots of people writing self-hosted apps. Helm is about as streamlined and exhaustive as you’re going to get.

    I agree with many here that learning k8s makes the most sense if you need it for your job, or if you have hopes of getting into the DevOps field.
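    A minimal sketch of the push-on-merge pipeline I described, using GitHub Actions as one example runner. It assumes a kubeconfig is stored as a repo secret, and every name here is illustrative:

    ```yaml
    # Hypothetical CI sketch: apply manifests on every push to main.
    name: deploy
    on:
      push:
        branches: [main]
    jobs:
      apply:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Apply manifests from the repo (source of truth)
            run: |
              echo "$KUBECONFIG_DATA" > kubeconfig
              KUBECONFIG=./kubeconfig kubectl apply -k ./manifests
            env:
              KUBECONFIG_DATA: ${{ secrets.KUBECONFIG }}
    ```

    The point is only the shape: merge a PR, and the declared state in the repo gets pushed to the cluster; nobody clicks buttons.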

      • borg286@alien.top · 8 months ago

        As an SRE I rarely touch customer-facing stuff, but from what I’ve seen of the devs, they are often several layers removed from the public docs. Most are simply focused on their own cog. For this reason I am gravitating towards the projects they’ve open-sourced (k8s, gRPC, Bazel) and building from more of a clean slate. I’d much prefer open-source components that I can fit into a k8s cluster over lock-in to some cloud service. Those services solve some nice problems, but I’d like to be able to run things locally if I want. For example, I’d much prefer to have my pub/sub stack rely on Redis Streams rather than GCP Pub/Sub. Redis has such a small footprint, scales to 16k nodes, and given how fast it is, that is way more ceiling than I need. GCP’s UI is nice, but at the end of the day I’m going to be editing a config file and letting my CI/CD pipeline roll it out, rather than going to the GCP console and clicking buttons. But that’s just me.

    • zkhcohen@alien.top · 8 months ago

      As a DevOps engineer, this is an exceptionally good answer to OP’s question.

      • chunkyfen@alien.top · 8 months ago

        Heya, I was wondering: how should someone strive to focus on a declarative method? What are the first steps? Thank you.

        • Nekadim@alien.top · 8 months ago

          Ansible and Terraform are examples of software that lets you manage your hardware in a declarative style without adding unnecessary complexity for homelabs.

          Even if you need to orchestrate something on your machine, you can use HashiCorp Nomad. It is way easier to spin up and manage, and it can even orchestrate executions of plain binaries, contrary to k8s, which can only orchestrate workloads in containers (or VMs, with some plugins).

    • analcocoacream@alien.top · 8 months ago

      “There is no substitute or wrapper that can take imperative and make it declarative without tons of work. Ansible is imperative where if something goes wrong it is easiest to nuke then try again”

      I am currently setting up my home server using Ansible, and I’d say 50% of my time/energy goes into making it as idempotent as possible. Things like: OK, I want my service started, but I want it restarted if its configuration changed, etc.

      The main downside with k8s, though, is that I don’t think you can do much low-level/privileged stuff, like setting up a VPN for a single container, or accessing devices for monitoring.

  • Traditional_Wafer_20@alien.top · 8 months ago

    K8s is not worth it for the average homelab user. But the whole point of self-hosting is doing way too complicated stuff for fun, so…

  • duckofdeath87@alien.top · 8 months ago

    Do you have multiple physical machines and want to be able to turn them off while keeping full uptime? If not, I don’t think it’s worth it. It’s a really amazing system, and if you want to learn, go for it, but it’s hard to justify running on just one server.

  • king_hreidmar@alien.top · 8 months ago

    K8s can allow you to build a reliable and mostly self-sufficient suite of tools for your homelab. There is a lot of upfront cost to get there. However, I’d argue k8s isn’t actually all that much more complex than running individual Docker containers: in both cases you need an understanding of networking, containers, proxies, databases, and declarative config of some form or another. K8s just provides primitives that make it really easy to build complex container projects up declaratively. That doesn’t mean it has to be complex.

    I run 5 or 6 different services with individual backing Postgres DBs. I source the containers from Docker Hub, just as you would in Docker. Certbot will auto-deploy certs for any service I set up this way, and HAProxy will auto-add domains and upstreams for them too. When I want to set up a new service, I often just copy an existing service manifest and do a find-and-replace with the new service name. At that point I can usually just apply the manifest and wait 5 minutes; my service will be up, available on the internet, and already have SSL certs.

    I’ll add: if you have a really complex project with tons of microservices, you can deploy a Helm chart for it in two commands, even with minimal or no knowledge of how it should be set up.
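    For a sense of scale, one of those copy-paste service manifests looks roughly like this. The app name, image, and ports are placeholders, and my real ones also carry the cert/proxy annotations:

    ```yaml
    # Minimal Deployment + Service pair; find-and-replace "myapp" for a new service.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 1
      selector:
        matchLabels: {app: myapp}
      template:
        metadata:
          labels: {app: myapp}
        spec:
          containers:
            - name: myapp
              image: docker.io/library/myapp:latest   # placeholder image
              ports: [{containerPort: 8080}]
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector: {app: myapp}
      ports: [{port: 80, targetPort: 8080}]
    ```

    One `kubectl apply -f` on that file and the cluster handles scheduling, restarts, and routing.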

  • fjch1997@alien.top · 8 months ago

    Can someone link to that Adolf Hitler rant video about containers running in containers running in a “lightweight” VM?

  • lestrenched@alien.top · 8 months ago

    I plan to use Podman at home, since I just have one node and I don’t care that much about HA (what would I even do HA with? VMs?).

    If you have multiple nodes for an HA setup, sure, go right ahead. It will be a massive learning curve, though. But so are most things in life. I think everyone can learn a lot by running Kubernetes (godly complex networking, in my opinion).

  • unableToHuman@alien.top · 8 months ago

    So here’s my take. I’m not a DevOps guy professionally. I started my homelab with Docker. The problem was that the number of things I was hosting kept growing, and I was worried about overloading the machine. I had a few other machines lying around that I decided to pull into a k3s cluster, and I somehow love it.

    My entire homelab is now expressed as IaC and lives in a GitHub repo with CI/CD. Any changes I make to the repo are automatically deployed to the cluster. If I need to take down a machine, I don’t need to worry about loss of service. I also use Velero for backups: if things go wrong, a few commands and my entire cluster is fully restored.

    Now, I can easily agree that Kubernetes is overkill for a homelab, but it offers real convenience in terms of administration. With Docker I still had to deploy everything via Portainer, which I hadn’t found a way to automate, and backup/restore was not fully automated either: you could back up the data, but you had to manually redeploy your apps and then restore data into them. At least, that was what I could implement. With Kubernetes everything is fully in code and controlled by the GitHub repo.

    Granted, the learning curve is steep; it took me 3 months to fully port my system to k3s. Also, for general apps check out https://bjw-s.github.io/helm-charts/docs/ You can use that chart to make a Helm chart for any app that can be deployed via docker compose. So I just create my own Helm charts for apps that only have Docker instructions and deploy those.

    TLDR; learning curve is steep but there are a few gains in terms of IaC administration and ability to leverage multiple machines
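    As an illustration, a values file for that app-template chart translating a simple docker-compose service has roughly this shape. The image and port are placeholders, and the exact schema depends on the chart version, so check its docs rather than copying this:

    ```yaml
    # Illustrative values.yaml for the bjw-s app-template chart (schema varies by version).
    controllers:
      main:
        containers:
          main:
            image:
              repository: ghcr.io/example/someapp   # placeholder image
              tag: "1.0.0"
    service:
      main:
        controller: main
        ports:
          http:
            port: 8080
    ```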

  • Franceesios@alien.top (OP) · 8 months ago

    Thanks, all, for the great feedback. I will definitely be investing more time into learning Kubernetes; the more I read your comments, the more ideas I’m having for my own use case in my homelab.

  • thbb@alien.top · 8 months ago

    I have maintained my web sites, mail servers, family cloud, home media center, backup solution and a few other services for over 20 years.

    My configuration is quite stable, and I appreciate maintaining, by hand, the tight-knit links between all services, so as to make it easier to let the configurations and services slowly evolve over time (migrating from sendmail to Postfix, httpd to Apache to nginx, ownCloud to Nextcloud, CVS to Git, etc.).

    I wouldn’t recommend the extra sophistication of containers and orchestration of services for this type of usage, which is very stable and meant to evolve slowly to stay current. Those extra layers are meant to serve fast-evolving environments with frequent updates, continuous delivery, and multiple maintainers. They’d be a burden for my usage.

    However, if you plan to use your home server to learn and acquire new skills you want to put on your resume, then it’s definitely worth the effort.

  • Franceesios@alien.top (OP) · 8 months ago

    105 upvotes… I honestly did not think that my question would spark this many replies. I truly appreciate you guys!

  • falcorns_balls@alien.top · 8 months ago

    I had 3 R640s at home churning Kubernetes for about a year. The main benefit for me was that it was an enjoyable and fun learning experience. Ultimately I received no real benefit in availability, due to city power issues and me not wanting to shell out for a UPS large enough to prevent a cluster disaster. Because of the energy use and the heat, I scaled back to running a bunch of Docker loads on a NUC Extreme and one of my R640s. Of course, you could also run k3s on a string of Raspberry Pis or something.