I literally pulled the original game out of a cereal box in 2010 and proceeded to have hours upon hours of fun with it. It was on one of those funny small CD-ROMs. Good times.
“Version” is definitely used commonly to describe two different … versions of the same thing, without implying that one is better than the other or supersedes it. There are two versions of the PS5, one with and one without a disc drive. There are many different versions of Windows, like Home or Enterprise. You can get hardcover or paperback versions of many books. Etc. Etc.
Hm. As long as you only interact with Lemmy through a (trusted) VPN, or even through Tor, you’re just as safe using Lemmy as you would be on any other website. Servers can always see your IP by default, and the owners of those servers can be coerced into giving it up by whatever external forces. If you hide your IP, they can’t. That’s pretty much it.
Sure, Patreon is great, but Patreon alone is not enough for most creators to make a living, considering how hard it is to get people to commit to monthly subscriptions.
Would you put blame on doctors for contributing to the opioid crisis?

I’m gonna assume by “contributing to the opioid crisis” you mean over-prescribing pain medication for the commissions? If so, that comparison is so far-fetched that it’s completely meaningless. You’re really going to compare that with independent creators having skippable ad reads that have to be clearly marked as such on content you get for free?
This is a bit unnecessarily tough on independent content creators… what exactly do you expect them to do? Make no money from their content? How would they be able to make a living?
I doubt it’d take long before they tried to stop you on different grounds, like impeding traffic or public nuisance or whatever.
Doomerism like this is fucking stupid and definitely leads to the wrong thing, which is to do nothing. If we’re already fucked, why even try? The truth is that IF we try, we very well might be able to avoid the worst. Which is worth fighting for.
The “Internet” and many foundations of networking originated in the US, but the Web, which is what I’d wager many think of when you say “the Internet”, was invented in Switzerland by a British man.
Honestly, I’ve worked with a few teams that use conventional commits, some even enforcing it through CI, and I don’t think I’ve ever thought “damn, I’m glad we’re doing this”. Granted, all the teams I’ve been on were working on user-facing products with rolling releases where main always = prod, and there was zero need for auto-generating changelogs or analyzing the git history in any way. In my experience, trying to roughly follow 1 feature / change per PR and then just squash-merging PRs to main is really just … totally fine, if that’s what you’re doing.
I guess what I’m trying to say is that while conv commits are neat and all, the overhead really isn’t always worth it. If you’re developing an SDK or OSS package and you need changelogs, sure. Other than that, really, what’s the point?
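For context, the CI enforcement mentioned above usually amounts to a regex check on the commit subject line. A minimal sketch in Python (the type list and pattern here are illustrative, not any particular team’s actual config):

```python
import re

# Illustrative subset of Conventional Commits types; real configs vary.
TYPES = ("feat", "fix", "docs", "refactor", "test", "chore")

# Shape: type(optional-scope)!: subject  -- "!" marks a breaking change.
PATTERN = re.compile(rf"^(?:{'|'.join(TYPES)})(?:\([\w-]+\))?!?: .+")

def is_conventional(subject: str) -> bool:
    """Return True if the commit subject line follows the convention."""
    return bool(PATTERN.match(subject))

print(is_conventional("feat(auth): add OAuth login"))  # True
print(is_conventional("fix!: handle empty payloads"))  # True
print(is_conventional("updated some stuff"))           # False
```

That’s roughly all the structure the convention buys you, which is why it mostly pays off when tooling consumes it (changelog generators, semver bumping) and not otherwise.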
I know. Just the “full-stack meta frameworks” part alone is enough to make any ADHD person nauseous.
But why? What’s bad about this?
I disagree. Geminispace is very usable without scripts
That’s great, I’m not saying that it’s impossible to make usable apps without JS. I’m saying that the capabilities of websites would be greatly reduced without JS being a thing. Sure, a forum can be served as fully static pages. But the web can support many more advanced use-cases than that.
If only one paradigm must remain, then naturally I pick mine. If not, then there’s no problem and I still shouldn’t care.
So you can see that other people have different needs to yours, but you think those shouldn’t be considered? We’re arguing about the internet. It’s a pretty diverse space.
For me it’s obvious that embeddable cross-platform applications as content inside hypertext are much better than turning a hypertext system into some overengineered crappy mess of a cross-platform application system.
Look, I’m not saying that the web is the most coherent platform to develop for or use, but it’s just where we’re at after decades of evolving needs needing to be met.
That said, embedded interactive content is absolutely not better than what we have now. For one, both Flash and Java Applets were mostly proprietary technologies, placing far too much trust in the corpos developing them. There were massive cross-platform compatibility problems, and neither was in any way designed for, or even ready for, a responsive web that displays well on different screen sizes. Accessibility was a big problem as well, since the embedded content required an entirely different accessibility paradigm from the HTML+CSS shell around it.
Today, the web can do everything Flash + Java Applets could do and more, except in a way that’s not proprietary but based on shared standards, one that’s backwards-compatible, builds on top of foundational technologies like HTML rather than around them, and can actually keep up with the plethora of different client devices we have today. And as for security: sure, maybe web browsers were pretty insecure back then generally, but I don’t see how you can argue that a system requiring third-party browser plug-ins, updated separately from the browser, can ever be a better basis for security than relying entirely on the (open-source!) JS engine of the browser for all interactivity.
I ask you for links and how many clicks and fucks it would take to make one with these, as opposed to back then. These are measurable, scientific things. Ergonomics is not a religion.
The idea that any old website builder back in the day was more “ergonomic” while even approaching the result quality and capabilities of any no-code homepage builder solution you can use today is just laughable. Sorry, but I don’t really feel the burden of proof here. And I’m not even a fan of site builders, I would almost prefer building my own site, but I recognize that they’re the only (viable) solution for the majority of people just looking for a casual website.
Besides — there’s nothing really preventing those old-school solutions from working today. If they’re so much better than modern offerings, why didn’t they survive?
Ah yes! “Just teach” the cat. Easy
So what does it say about us diverting from purely server-side scripted message boards with pure HTML and tables, and not a line of JS? Yes, let’s get back there please.
Ironically, proper SSR that has the server render the page as pure HTML & CSS is becoming more and more popular lately, thanks to full-stack meta frameworks that make it super easy. Of course, wanting to go back to having no JS is crazy — websites would lose almost all ability to make pages interactive, and that would be a huge step backwards, no matter how much nostalgia you feel for a time before widespread JS. Also, tables for layout fucking sucked in every possible way: for the dev, for the user, and for accessibility.
people want nice, dynamic, usable websites with lots of cool new features, people are social
That’s right, they do and they are.
By the way, we already had that with Flash and Java applets, and some of what I remember was still cooler than modern websites of the “web application” paradigm are now.
Flash and Java Applets were a disaster and a horrible attempt at interactivity, and everything we have today is miles ahead of them. I don’t even want to get into making arguments as to why because it’s so widely documented.
And we had personal webpages with real names and contacts and photos. And there were tools that let you make them easily.
There are vastly more usable and simple tools for making your own personal websites today!
I think it’s mostly just that phones by themselves absolutely suck as a form factor for pretty much everything but casual games.
Hillary: robots must follow the three laws of robotics
Bernie: robots can have a little evil
Absolutely not, time doesn’t give a shit about humans, and would happily pass without any conscious observer at all anywhere in the universe.
Doing that would tell you nothing about whether the browser might have un-patched, known vulnerabilities elsewhere.
How do you know this? Of course there are lots of reasons why they’d want to enforce minimum browser versions. But security might very well be one of them. Especially if you’re a bank, you probably feel bad about sending session tokens to a browser that potentially has known security vulnerabilities.
And sure, the user agent isn’t a sure way to tell whether a browser is outdated, but in 95% of cases it’s good enough, and people who know enough to understand the block shouldn’t apply to them can bypass it easily anyway.
Is there some story I missed about his family being assholes?
Am I tripping? They’re just saying that they think it’s bad that these kinds of big decisions are up for 9 people to decide. Like, “it’s bad that a court of 9 people has this much power”. I don’t see a “both sides” argument here at all, if anything what I see is a language barrier…