Yeah, they are ideally the same mailbox. I’d like a similar experience to Gmail, but with all the emails rehomed to my server.
Steam + Proton works for most games, but there are still rough edges you need to be prepared to deal with. In my experience, it’s typically older titles and games with anti-cheat that have the most trouble. Most of the time it just works; I even ran the Battle.net installer as an external Steam game with Proton enabled and was able to play Blizzard titles right away.
The biggest gap IMO is VR. If you have a VR headset that you use on your desktop and it’s important to you, stay on Windows. There is no realistic solution for VR integration on Linux yet. There are ways you can kinda get something to work with ALVR, but it’s incredibly janky and no dev will support it. There are rumors that Steam Link is being ported to Linux, but nothing official yet.
On balance, I’ve been incredibly happy with Mint since I switched last year. However, I do a decent amount of personal software development, and I’ve used Linux for two decades as a professional developer. I wouldn’t say the average Windows gamer would be happy dealing with the rough spots quite yet, but it’s like 95% of the way there these days. Linux has really grown up a lot in the last few years.
Ralph Nader saying that he thinks the death toll is over 200k is not a reasonable source to cite. The 30-50k estimates from most sources are already appallingly high. There’s an active contingent of Ben Shapiro types trying to convince everyone what Israel is doing is fine, don’t give them ammo to cast doubt on the official death count.
Not sure where that 200k number is from. The article you linked doesn’t say that and I haven’t seen a number that high reported anywhere myself. All the info I have seen bounds the estimates between 30k and 50k killed, either through active combat or through disease/malnourishment/injury.
https://www.aljazeera.com/news/longform/2023/10/9/israel-hamas-war-in-maps-and-charts-live-tracker
I’m sure there are plenty of Israelis who want to do this even if they won’t admit it to themselves, but this isn’t the final anything. The IDF has killed around 37,000 Palestinians out of ~2.3 million. That’s horrible, but nowhere near the “barely any left” stage.
A genocide on the scale of millions takes industrial effort to accomplish. I’m not saying it couldn’t happen, but given Israel’s reliance on foreign aid, current industrial capacity, and political position, it seems unlikely. My guess is Israel will take some more territory and the conflict (kinda tough to call the IDF bombing almost exclusively civilians a war) will peter out. Foreign aid will be allowed back in and Israel will put its mask back on.
Personally, I don’t see how this doesn’t end with half the Middle East actively going to war with Israel if they don’t stop soon. The only thing really keeping them safe is the US, and Israel has burned a lot of political capital here. Their leaders are awful, power-hungry shits, but they’re not stupid. If they don’t try to rebuild some of that capital, there’s every chance that Israel loses its lifeline.
What comes years after things die down, I don’t know. Gazan sentiment towards Israel was already overwhelmingly negative before this, but the IDF has never done anything on this scale before. I don’t think Israel can allow Gaza any type of self-governance for decades after this. This is beyond even post-WW2 Japan levels of destruction, and unlike Japan every nation around them is still on their side.
Yeah, I don’t fully understand why Nvidia cards have this problem on first setup with so many distros. On Windows, the default display driver can at least boot with reduced resolution on most cards made in the last 15 years until you install proper drivers. It seems like the Linux kernel and common desktop environments ought to be able to do the same.
Maybe this is better in the 6.x kernel, I haven’t tried it. I’m not too much of a tinkerer, so the bleeding edge doesn’t interest me. I just want a good shell, POSIX for personal coding projects, and the ability to play games on Steam. Mint is great for that once you get past the initial display driver issues.
I’ve been using Mint for about 6 months now and it works with Nvidia just fine, BUT the new user experience isn’t great. You have to use the nomodeset kernel option and then install the Nvidia drivers; otherwise you’ll boot to a black screen.
Helpful guide: https://forums.linuxmint.com/viewtopic.php?t=421550
You’re using “machine learning” interchangeably with “AI.” We’ve been doing ML for decades, but it’s not what most people would consider AI and it’s definitely not what I’m referring to when I say “AI winter.”
“Generative AI” is the more precise term for what most people are thinking of when they say “AI” today and it’s what is driving investments right now. It’s still very unclear what the actual value of this bubble is. There are tons of promises and a few clear use-cases, but not much proof on the ground of it being as wildly profitable as the industry is saying yet.
AI is not self-sustaining yet. Nvidia is doing well selling shovels, but most AI companies are not profitable. Stock prices and investor valuations are effectively bets on the future, not measurements of current success.
From this Forbes list of top AI companies, all but one make their money from something besides AI directly. Several of them rode the Web3 hype wave too, that didn’t make them Web3 companies.
We’re still in the early days of AI adoption and most reports of AI-driven profit increases should be taken with a large grain of salt. Some parts of AI are going to be useful, but that doesn’t mean another winter won’t come when the bubble bursts.
I didn’t say it wasn’t amazing, nor that it couldn’t be a component in a larger solution, but I don’t think LLMs work like our brains, and I think the current trend of throwing more tokens/parameters/training at LLMs is a dead end. They’re simulating the language area of human brains, sure, but there’s no reasoning or understanding in an LLM.
In most cases, the responses from well-trained models are great, but you can pretty easily see the cracks when you spend extended time with them on a topic. You’ll start to get oddly inconsistent answers the longer the conversation goes and the more branches you take. The best-fit line (it’s a crude metaphor, but I don’t think it’s wrong) starts fitting less and less well until the conversation completely falls apart. That’s generally called “hallucination,” but I’m not a fan of that term because it implies a lot about the model that isn’t really true.
You may have already read this, but if you haven’t: Stephen Wolfram wrote a great overview of how GPT works that isn’t too technical. There’s also a great sci-fi novel from 2006 called Blindsight that explores the way facsimiles of intelligence can exist without consciousness or even understanding, and I’ve found it to be a really interesting way to think about LLMs.
It’s possible to build a really good Chinese room that can pass the Turing test, and I think LLMs are exactly that. More tokens/parameters/training aren’t going to change that, they’ll just make them better Chinese rooms.
Maybe this comment will age poorly, but I think AGI is a long way off. LLMs are a dead-end, IMO. They are easy to improve with the tech we have today and they can be very useful, so there’s a ton of hype around them. They’re also easy to build tools around, so everyone in tech is trying to get their piece of AI now.
However, LLMs are chat interfaces for searching a large dataset, and that’s about it. Even the image generators are doing this; the dataset just happens to be visual. All of the results you get from a prompt are just queries into that data, even when you get a result that makes it seem intelligent. The model is finding a best-fit response based on billions of parameters, like a hyperdimensional regression analysis. In other words, it’s pattern matching.
A lot of people will say that’s intelligence, but it’s different: the LLM isn’t capable of understanding anything new; it can only generate a response from something in its training set. More parameters, better training, and larger context windows just refine the search results; they don’t make the LLM smarter.
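To be clear about what I mean by “best-fit response,” here’s a deliberately crude toy sketch in Rust. Everything in it (the hash-based “embedding,” the canned answers) is made up for illustration, and real transformers obviously don’t work like this; it’s only meant to show the retrieval-by-nearest-match idea the analogy is pointing at.

```rust
// Toy "best-fit lookup" responder: a caricature of the pattern-matching
// analogy above, NOT how an actual LLM works. Everything here (the
// hash-based "embedding", the canned responses) is invented for the example.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Turn text into a tiny fake "embedding" by hashing its words into buckets.
fn embed(text: &str, dims: usize) -> Vec<f32> {
    let mut v = vec![0.0; dims];
    for word in text.split_whitespace() {
        let mut h = DefaultHasher::new();
        word.to_lowercase().hash(&mut h);
        v[(h.finish() as usize) % dims] += 1.0;
    }
    v
}

// Euclidean distance between two embeddings.
fn dist(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| (x - y).powi(2)).sum::<f32>().sqrt()
}

fn main() {
    let dims = 16;
    // "Training set": prompts paired with canned responses.
    let corpus = [
        ("how do I install nvidia drivers on mint", "Use the Driver Manager after booting with nomodeset."),
        ("is proton good for gaming on linux", "Mostly yes, except anti-cheat titles and VR."),
        ("what is the capital of france", "Paris."),
    ];

    let query = "does gaming on linux work with proton";
    let q = embed(query, dims);

    // Return whichever stored response is the best fit for the prompt:
    // pure pattern matching, no understanding of the question.
    let best = corpus
        .iter()
        .min_by(|a, b| {
            dist(&embed(a.0, dims), &q)
                .partial_cmp(&dist(&embed(b.0, dims), &q))
                .unwrap()
        })
        .unwrap();

    println!("Q: {query}\nA: {}", best.1);
}
```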
AGI needs something new, we aren’t going to get there with any of the approaches used today. RemindMe! 5 years to see if this aged like wine or milk.
Hyperfixating on producing performant code by using Rust (which it only really delivers when you code in a very particular way) makes applications worse. Good API and system design are a lot easier when you aren’t constantly having to think about memory allocations and reference counting. Rust puts that dead-center of the developer experience with pointers/ownership/Arcs/Mutexes/etc., and for most webapps it just doesn’t matter how memory is allocated. It’s cognitive load for no reason.
The actual code running in the majority of webapps (including Lemmy) is not that complicated; you’re just applying some business logic and doing CRUD operations against datastores. It’s a lot more important to consider how your app interacts with its dependencies than how to make your business logic hyper-efficient. Your code is going to be waiting on network I/O and DB operations most of the time anyway.
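To make the “cognitive load” point concrete, here’s a contrived std-only Rust sketch of sharing one trivial piece of mutable state across concurrent “requests.” The handler and the in-memory “db” are made up for the example (a real webapp would use a framework), but the Arc/Mutex/ownership ceremony is the kind of thing you end up thinking about instead of your business logic:

```rust
// Contrived sketch of the ownership/Arc/Mutex ceremony described above.
// The "handler" and the in-memory "db" are hypothetical; a real webapp
// would use a framework, but the sharing pattern is the same.

use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

type Db = Arc<Mutex<HashMap<u64, String>>>;

// Even a trivial "create post" operation has to think about who owns the
// map, who can mutate it, and how long the lock is held.
fn create_post(db: &Db, id: u64, body: &str) {
    let mut guard = db.lock().expect("poisoned lock");
    guard.insert(id, body.to_string());
    // lock is released when `guard` goes out of scope
}

fn main() {
    let db: Db = Arc::new(Mutex::new(HashMap::new()));

    // Simulate two concurrent "requests"; each needs its own Arc clone.
    let handles: Vec<_> = (0..2)
        .map(|i| {
            let db = Arc::clone(&db);
            thread::spawn(move || create_post(&db, i, "hello"))
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    println!("{} posts stored", db.lock().unwrap().len());
}
```

In a garbage-collected language the equivalent is a dictionary and maybe a lock, and none of that bookkeeping leaks into the business logic.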
Hindsight is 20/20 and I’m not faulting anyone for not thinking through a personal project, but I don’t think Rust did Lemmy any favors. At the end of the day, it doesn’t matter how performant your code is if you make bad design and dependency choices. Rust makes it harder to see these bad choices because you have to spend so much time in the weeds.
To be clear, I’m not shitting on Rust. I’ve used it for a few projects and it’s great for apps where processing performance is important. It’s just not a good choice for most webapps; you’d be far better off in a higher-level language.
I wouldn’t shortchange how much lowering the barrier to entry can help. You have to fight Rust a lot to build anything complex, and that can have a chilling effect on contributions. This is not a dig at Rust; it has to force you to build things in a particular way because it has to guarantee memory safety at compile time. That isn’t to say that Rust’s approach is the only way to be sure your code is safe, mind you, just that its insistence on memory safety at compile time is constraining.
To be frank, this isn’t necessary most of the time, and Rust will force you to spend ages worrying about problems that may not apply to your project. Java gets a bad rap but it’s second only to Python in ease-of-use. When you’re working on an API-driven webapp, you really don’t need Rust’s efficiency as much as you need a well-defined architecture that people can easily contribute to.
I doubt it’ll magically fix everything on its own, but a combo of good contribution policies and a more approachable codebase might.
i ain’t won jack alot from the squattery
Why do you think ventilators made people worse? They only put people on ventilators when their O2 sats dropped so low they were going to die of oxygen deprivation.
Part of the reason these rules are similar is because AI-generated images look very dreamlike. The objects in the image are synthesized from a large corpus of real images. The synthesis is usually imperfect, but close enough that human brains can recognize it as the type of object that was intended from the prompt.
Mythical creatures are imaginary, and the descriptions obviously come from human brains rather than real life. If anyone “saw” a mythical creature, it would have been the brain’s best approximation of a shape the person was expecting to see. But, just like a dream, it wouldn’t be quite right. The brain would be filling in the gaps rather than correctly interpreting something in real life.
In reading this thread, I get the sense that some people don’t (or can’t) separate gameplay and story. Saying “this is a great game” has, to me, nothing to do with the story; the way a game plays can exist entirely outside a story. The two can work together well and create a fantastic experience, but “game” seems like it ought to refer to the thing you do since, you know, you’re playing it.
My personal favorite example of this is Outer Wilds. The thing you played was a platformer puzzle game and it was executed very well. The story drove the gameplay perfectly and was a fantastic mystery you solved as you played. As an experience, it was about perfect to me; the gameplay was fun and the story made everything you did meaningful.
I loved the story of TLoU and was thrilled when HBO adapted it. Honestly, it’s hard to imagine anyone enjoying the thing TLoU had you do separately from the story it was telling. It was basically “walk here, press X” most of the time with some brief interludes of clunky shooting and quicktime events.
I get the gameplay making the story more immersive, but there’s no reason the gameplay shouldn’t be judged on its own merit separately from the story.
This is an honest question, not a troll: what makes The Last of Us groundbreaking from a technical perspective? I played it and loved the story, but the gameplay was utterly boring to me. I got through the game entirely because I wanted to see the conclusion of the story, and when the HBO show came out I was thrilled because it meant I wouldn’t have to play a game I hated to see the story of TLoU 2.
It’s been years, but my recollection is the game was entirely on rails, mostly walking and talking with infrequent bursts of quicktime events and clunky shooting. What was groundbreaking about it?
People from East and Southeast Asia have been cultivating and eating soybeans as a staple food since before Babylon. I mean that literally; there is evidence of soybean cultivation in what is now China from like 7000 BC.
It’s tough to take a phrase like, “Soy makes men weak,” as anything other than racism when it puts down a quarter of the population of the planet. At best, it’s ignorance, but in my experience the people who hold this opinion don’t change their mind when you explain this to them.
It’s really more of a proxy setup that I’m looking for. With Thunderbird, you can get what I’m describing for a single client. But if I want access to those emails from several clients, there needs to be a shared server to access.
docker-mbsync might be a component I could use, but it doesn’t sound like there’s a ready-made solution for this today.