• 0 Posts
  • 31 Comments
Joined 1 year ago
Cake day: July 7th, 2023








  • CLAs can be abusive, but not necessarily. Apache Foundation contributors need to sign CLAs, which essentially codify in contract form the terms of the Apache 2.0 license. It’s a precaution in case some jurisdiction doesn’t uphold the passive licensing scheme used otherwise. There’s also a relicensing clause, but it’s restricted to licenses in the same spirit; they can’t close the source.




  • I was in a similar position and moved to Proton. Their native Linux support is rudimentary, but nobody else provides a better privacy-respecting option. Their web apps work well, though, and the email client uses local storage APIs for offline use and search.

    I do use Mega for cloud storage though; they’re end-to-end encrypted and have solid Linux (both GUI and a really nice CLI) and mobile support.


  • After doing some Meta/Facebook VR development in my job, the lack of popularity made more and more sense. In brief, they’re both incredibly incompetent and transparently greedy.

    I’m honestly baffled how they could spend so many tens of billions of dollars and have such bad software; it is completely bug-ridden. You’ll hit a bug, research it, and find out it’s a major known bug that has gone unfixed for literal years. They care so little that they couldn’t be bothered to update the Oculus branding to Meta for over 3 years in various software tools and libraries.

    Their greed might be the more salient aspect preventing adoption, though. They transparently wanted to be the gatekeepers to everything “metaverse” related, a business model that is now explicitly illegal in the EU after years of being merely very sketchy. They are straight-up hostile to anyone else trying to implement enterprise or business features. Concrete example: fleet management software, aka MDM. There are third-party tools that are cheaper and much more fully featured than Meta’s solution, but in the last year Meta has pushed hard to kick those third parties out of the ecosystem.

    I could go on, but in short, nobody in their right mind would build a major business on their ecosystem. Anyone serious would rather let Meta burn billions in R&D and come back later. Besides, not even Meta is able to make money in the area right now.






  • So this is probably another example of Google using instruments that are too blunt for AI. LLMs are very suggestible, and leading questions can severely bias responses. Most people using them without knowing a lot about the field will ask “bad” questions. So it likely has instructions to avoid “which is better” answers and instead provide pros and cons for the user to consider themselves.

    Edit: I don’t mean to excuse, just explain. If anything, the implication is that Google rushed it out after attempting to slap bandaids on serious problems. OpenAI and Anthropic, for example, have talked about how alignment training and human adjustment take up a majority of the development time. Since Google is in self-described emergency mode, cutting that process short seems a likely explanation.
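
    As a rough illustration of what I mean by a steering instruction (a hypothetical sketch using the OpenAI Python client, since that API is public; nothing here reflects Google’s actual internal setup), you can see how the same question, asked in a leading vs. neutral way, gets funneled into a pros-and-cons answer by a system prompt:

    ```python
    # Hypothetical sketch: a system instruction that suppresses "which is better"
    # verdicts and forces trade-off style answers, applied to a leading and a
    # neutral phrasing of the same question.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    leading = "Why is Python clearly better than Go for backend services?"
    neutral = "What are the trade-offs between Python and Go for backend services?"

    for prompt in (leading, neutral):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {
                    "role": "system",
                    "content": (
                        "Do not declare one option 'better'. "
                        "List pros and cons of each option and let the user decide."
                    ),
                },
                {"role": "user", "content": prompt},
            ],
        )
        print(prompt, "->", resp.choices[0].message.content[:200])
    ```

    Run both prompts without that system line and the leading phrasing will usually get a much more one-sided answer, which is the suggestibility problem these blunt instructions are trying to paper over.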