
  • That’s kinda a weird take, since the private server model was the only model until about 10 years ago. Companies definitely know it. It’s just not financially efficient compared to benefiting from economies of scale with hosting. Plus, with hosted servers, you don’t lose a ton of money or piss off players if you over- or underestimate how popular the game will be.

    Had they gone with private servers here, they would have lost even more money than they already have. The problem here is they spent too much money on a game no one wanted to play, chasing a fad that ended before it launched.



  • I actually looked into this; part of the explanation is that in the 80s, Sweden entered a public/private partnership to subsidize the purchase of home computers, which would otherwise have been prohibitively expensive. This helped create a relatively wide local consumer base for entertainment software, and gave the country a jump start on computer literacy and software development.


  • I think to some extent it’s a matter of scale, though. If I advertise something as a calculator capable of doing all math, and it can only do one problem, it is so drastically far away from its intended purpose that the meaning kinda breaks down. I don’t think it would be wrong to say “it malfunctions in 99.999999% of use cases” but it would be easier to say that it just doesn’t work.

    Continuing (and torturing) that analogy: if we did the disgusting work of precomputing every two-number math problem for integers from -1,000,000 to 1,000,000, I think you could say you had a (really shitty and slow) calculator, one which “malfunctions” for numbers outside that range if you don’t specify the limitation ahead of time. That’s not crazy different from software which has issues with max_int or small buffers (there’s a toy sketch of this at the end of the comment).

    If there had only ever been one instance of an LLM hallucinating, I think we could pretty safely call that a malfunction (and we wouldn’t be having this conversation). If it happens 0.000001% of the time, I think we could still call it a malfunction and say it performs better than a lot of software. At 99.999% of the time, it’d be better to say that it just doesn’t work. I don’t think there is, or even needs to be, some unified understanding of where the line between them sits.

    Really, my point is that there are enough things to criticize about LLMs and people’s use of them; this seems like a really silly one to try and push.
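
    To make the calculator analogy concrete, here’s a toy sketch (made-up, illustrative code only, not anything real): a lookup-style “calculator” that only has answers inside the precomputed range and quietly wraps around outside it, roughly the way fixed-width integer math misbehaves past max_int.

    ```python
    # Toy "calculator" for the analogy: correct only inside the advertised
    # range, and it quietly "malfunctions" outside it.

    LIMIT = 1_000_000  # the precomputed range: -1,000,000 to 1,000,000

    def lookup_add(a: int, b: int) -> int:
        if -LIMIT <= a <= LIMIT and -LIMIT <= b <= LIMIT:
            return a + b  # stands in for reading the precomputed table entry
        # Outside the range there's no table entry; emulate a 32-bit wraparound
        # to mimic a max_int-style failure instead of raising an error.
        return (a + b + 2**31) % 2**32 - 2**31

    print(lookup_add(2, 2))                          # 4: inside the range, looks fine
    print(lookup_add(2_000_000_000, 2_000_000_000))  # outside it: wraps to -294967296
    ```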