Self Proclaimed Internet user and Administrator of Reddthat

  • 28 Posts
  • 25 Comments
Joined 1 year ago
Cake day: June 6th, 2023






  • The downvotes you can see (on this post) are from accounts on your instance then. As this post is semi-inflammatory, it is highly likely to have garnered some downvotes.

    Edit: I guess I was wrong regarding the logic of how downvotes work when we block them, as the HTTP request (used to?) return an error when responding to a downvote. I’ll have to look at it again. The only way the count was/is 15 is if:

    • we kept track of downvotes and sent out the activities notification
    • your instance got the notifications from other instances about our post (which is not how Lemmy works, unless I’m seriously misunderstanding it).


  • Bah! I totally forgot that they have the new “efficiency” cores…

    Performance Cores: 6 Cores, 12 Threads, 2.5 GHz Base, 4.8 GHz Turbo
    Efficient Cores: 8 Cores, 8 Threads, 1.8 GHz Base, 3.5 GHz Turbo

    Hmmm, I’d still say it’s totally worth it because the 12500 only has 6 cores (12 threads) total. You are getting 8 extra cores/threads.

    Linux/docker/any OS will make use of 8 extra cores regardless of the workload. Sure, they might not be as performant on the lower end, but a process that can run 20 threads will pretty much always beat one limited to 12.


  • I always look at ongoing costs rather than upfront, and mostly that’s the TDP, which is exactly the same. So I would agree with your sentiment. The major cost is performing the upgrade itself.

    Single-thread performance has a small increase, 5% or so, but you have double the number of threads. So your two dozen (24) docker containers could have a thread per container! This could benefit you a lot if you were running anywhere near 100% or have long-running multithreaded jobs.

    If I had the disposable money and I thought I could sell the 12th gen CPU, then maybe. But I’m still rocking some old E3-12xx v3 Xeons, which probably cost me more per year than what you will pay to upgrade!







  • I use Wasabi storage, which is more expensive as they have a minimum space allotment, but because my servers are in Aus I had issues with Backblaze B2 storage and the latency. (I was dealing with 200-300 ms of network latency AU -> US, plus the time that Backblaze takes to store the data.)
    At that time lemmy/pictrs was not as optimised as it is now, so it’s much better these days.

    Backblaze comes out WAY cheaper per month if you have servers in the US/EU, as close to their regions as possible, but they also charge you for API calls.

    As part of an object storage / CDN setup, remember you might also have to pay egress charges. Cloudflare is part of the “Bandwidth Alliance”, but that isn’t applicable here as pictrs needs to present the images via its own domain (such as cdn.reddthat.com). So you’ll still want a CDN in front, which means you only pay for egress once instead of every time someone loads an image.


    • Minio’s free tier is host-it-yourself, whereas their paid tiers are for actual storage hosters, with a minimum of 100TB/month @ $10/TB.
    • B2 has no minimum, and egress + hosting of 20GB cost me… ($0.29)
      • Note: They still haven’t billed me because I haven’t passed the $1 mark yet!
    • Wasabi (my current choice) is $7/TB (AU; US is $6/TB iirc) with a minimum of 1TB/month, without egress or API charges.

    Reddthat has… looks up 150GB of object storage now.

    I would recommend B2 if you are starting out and are in the US/EU; Wasabi in all other regions, with a CDN in front (and if you don’t mind burning a little cash for peace of mind).


  • Oh that’s super nice!
    That’s why I asked if they were hiring. Sounds like a nice place to work. I work at a job that doesn’t allow me to have non-work time during my allotted on-work time, i.e. 9-5.

    Obviously I agree that this is definitely on the riskier side of the line; whether it crosses into NSFW territory is unfortunately dependent on everyone’s own definition.

    See the other comment(s) from Red that go into detail regarding what is going to be done regarding the future of the community and the riskiness.

    Removing the post at this time would not be for the benefit of the community when we can have active discussions on what we want the community to become.
    Unfortunately I have a feeling that some of the commenters here are from All and are not active subscribers. So they may not be used to seeing content like this. They also may not understand the difference between instances and the nuances with federation.

    Edit: This is the comment from Red regarding what we are going to do: https://reddthat.com/comment/2250572





  • that’s only an issue if you’re telling nginx the internal IP of the container instead of the container names

    Oh how naive, I thought so too. Nope.

    If you have an nginx container (swag) that is inside the docker network, without a resolver 127... configuration line, then upon initial loading of the container it will resolve all upstreams. In this case yours are sab and sonarr, which resolve to 127.99.99.1 and 127.99.99.2 respectively (for example purposes). These are kept in memory, and are not resolved again until a reload happens on the container.

    Let’s say sab was a service that could scale out to multiple containers. You would now have two containers called sab and one sonarr. The IP resolutions are 127.99.99.1 (sab), 127.99.99.2 (sonarr), 127.99.99.3 (sab).
    Nginx will never forward a packet to 127.99.99.3, because as far as nginx is concerned the hostname sab only resolves to 127.99.99.1. Thus, the 2nd sab container will never get any traffic.

    Of course this wouldn’t matter in your use case, as sab and sonarr are not able to have high availability. BUT, let’s say your two containers restarted/crashed at the same time and they swapped IPs/got new IPs because docker decided the old ones were still in use.

    Swag thinks sab = 127.99.99.1 and sonarr = 127.99.99.2. In reality, sonarr is now 127.99.99.3 and sab is 127.99.99.4. So you launch http://sonarr.local and get greeted with a “sonarr is down” message. That is why the resolver lines around the web say to have the ttl=5s, to enforce an always-updating DNS name.

    This issue is exactly what happened here: https://reddthat.com/comment/1853904
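
    For reference, this is roughly what those lines look like. Treat it as a minimal sketch: “sab” and port 8080 are just placeholder upstream values, and 127.0.0.11 is Docker’s embedded DNS server.

      # Resolved once, at startup/reload: nginx caches sab's IP in memory.
      # location / {
      #     proxy_pass http://sab:8080;
      # }

      # Re-resolved at request time: the resolver directive plus a variable
      # makes nginx ask Docker's DNS again once the cached answer is older
      # than 5s (the "ttl=5s" above corresponds to nginx's valid= parameter).
      resolver 127.0.0.11 valid=5s ipv6=off;
      location / {
          set $upstream_sab sab;
          proxy_pass http://$upstream_sab:8080;
      }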

    I know nginx

    Oh don’t get me wrong, nginx/Swag/NPM are all great! I’ve been trialling NPM myself. But the more I use nginx with docker, the more I think maybe I should look into this k8s or k3s thing, given the amount of networking issues I end up getting and the hours I spend dealing with them… It might just be worth it in the end :D

    /rant


  • I get hit by this all the time.
    The worst thing is when docker containers scale up/down, so they get new IPs.
    The proxies (mostly nginx) only do DNS resolution at startup, which is why they say to add this resolver configuration to your nginx: it forces a re-validation every 30 seconds.

    You’ll have containerA/B/C with IPs 172.20.0.[2-4], all with a hostname of “container”. Then if you add a new container (scale=4), containerD comes up with 172.20.0.5.
    Your nginx container still resolves “container” to [2-4] and will never resolve to the new container unless you restart the nginx container, or you have this resolver configuration (which forces a fresh resolution after 30 seconds).
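
    For anyone following along, a minimal sketch of what I mean (the hostname “container” and the port are placeholders; 127.0.0.11 is Docker’s embedded DNS):

      resolver 127.0.0.11 valid=30s ipv6=off;   # re-query Docker's DNS every 30 seconds
      location / {
          set $upstream container;              # a variable defers resolution to request time
          proxy_pass http://$upstream:80;
      }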

    This one feature makes me hate using nginx as a reverse proxy for containers, but it’s still more intuitive than having to write constant traefik middlewares just so I can have everything the way I want it.

    This hit us at Reddthat recently, and was part of the reason why we had some downtime. The UI containers were scaling out with load, but the proxy wasn’t resolving the new containers and was still sending traffic to the original container. :eyeroll: