• Flying Squid@lemmy.world · 4 months ago

    But for that brief moment, we all got to laugh at it because it said to put glue on pizza.

    All worth it!

  • corroded@lemmy.world · 4 months ago

    The problem isn't the rise of "AI" so much as how we're using it.

    If a company wants to create a machine learning model that analyzes metrics on an automated production line and spits out parameters to improve the efficiency of their equipment, that's a great use of the technology. We don't need an LLM to produce a useless summary of what it thinks my question means when all I want is a page of search results.
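
    To make that concrete, here is a minimal sketch of the kind of model described above. It assumes a purely hypothetical log of line metrics ("line_metrics.csv") with made-up column names, and uses scikit-learn for the regressor:

    ```python
    # Hypothetical example: learn how machine settings relate to measured line
    # efficiency, then report the candidate settings predicted to run best.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("line_metrics.csv")                  # hypothetical historical log
    features = ["belt_speed", "oven_temp", "feed_rate"]   # made-up parameter names
    X, y = df[features], df["efficiency"]                 # observed efficiency metric

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("held-out R^2:", model.score(X_test, y_test))

    # Score a small grid of candidate settings and suggest the best-predicted one.
    candidates = pd.DataFrame(
        [(s, t, r) for s in (1.0, 1.2, 1.4) for t in (180, 200, 220) for r in (5, 10)],
        columns=features,
    )
    print("suggested parameters:\n", candidates.iloc[model.predict(candidates).argmax()])
    ```

    The model is scoped to one well-defined problem with its own data, which is exactly the kind of use being endorsed here.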

        • herrvogel@lemmy.world · 4 months ago

          Guns are made to kill. When someone gets killed by a gun, that's the gun being used for its primary intended purpose. They exist to cause serious harm. Causing damage is their entire reason for existing.

          Nobody designed LLMs with the purpose of using up as much power as possible. If you want something like that, look at proof-of-work cryptocurrencies, which were explicitly designed to be inefficient and wasteful.

          • baggachipz@sh.itjust.works · 4 months ago

            Ahh, see, but the gun people don’t say it’s solely to kill. They say it’s “a tool”. I guess it could be for hunting, or skeet shooting, or target practice. One could argue that they get more out of owning a gun than just killing people.

            But the result of gun ownership is also death where it wouldn’t have otherwise occurred. Yes, LLMs are a tool, but they also destroy the environment through enormous consumption of energy which is mostly created using non-renewable, polluting sources. Thus, LLM use is killing people, even if that’s not the intent.

            • herrvogel@lemmy.world · 4 months ago

              The difference remains, whatever people claim. Guns are weapons made to cause damage first and foremost, and tools second. LLMs are tools first and whatever else second. You can make a tool non-dangerous by using it properly, but you can't do that with a literal weapon. Danger, damage and harm are their entire reason to exist in the first place.

              • baggachipz@sh.itjust.works · 4 months ago

                Good point, but I also think that the intent does not necessarily affect the result. BTW, I also think guns shouldn't be a thing, except under very strict circumstances (military, licensed hunters). I also posit that the use of unlicensed LLMs by the general public is proving to be irresponsible. That is to say, a specific and worthy use case should be established and licensed before these "AI" tools are used.

          • 3ntranced@lemmy.world · 4 months ago

            One might argue it has killed more people in the past 30 years than all guns have throughout history.

    • FiniteBanjo@lemmy.today · 4 months ago

      That's fucking bullshit; the people developing it and shipping it as a product have been very clear and upfront about its intended uses, and none of them are ethical.

  • alienanimals@lemmy.world · 4 months ago

    This is a strawman argument. AI is a tool. Like any tool, it’s used for negative things and positive things. Focusing on just the negative is disingenuous at best. And focusing on AI’s climate impact while completely ignoring the big picture is asinine (the oil industry knew they were the primary cause of climate change more than 60 years ago).

    AI has many positive use-cases yet they are completely ignored by people who lack logic and rationality.

    AI is helping physicists speed up experiments into supernovae to better understand the universe.

    AI is helping doctors to expedite cancer screening rates.

    AI is powering robots that can do the dishes.

    AI is also helping to catch illegal fishing, tackle human trafficking, and track diseases.

    • UltraHamster64@lemmy.world (OP) · 4 months ago

      Yes, AI is a tool. And the person in the screenshot is criticizing generative, GPT-like and Midjourney-like AI, which has a massive impact on the climate and produces almost no useful results.

      In your examples, as far as I can see, they either train their own model (supernova research, illegal fishing) or heavily customize one and use it in close conjunction with people (cancer screening).

      So I think we're talking about two different things, and I want to clarify:

      AI as in a neural-network algorithm that can digest massive amounts of data and give meaningful results: absolutely useful, and I think that as time passes (and more grifters move on to other fields), more genuinely useful niches and cases will be solved with neural nets.

      But AI as in we-gonna-shove-this-bot-down-your-throat GPT-like bots, trained on all the data from the entire internet (mostly Reddit), that struggle with basic questions, hallucinate glue on pizza, generate six-fingered hands and are close to useless in any use case: absolutely abysmal, and not worth ruining our climate for.

    • Floey@lemm.ee · 4 months ago

      Obviously by AI they mean stuff like ChatGPT. An energy-intensive toy where the goal is to get it into the hands of as many paying customers as possible. And you're doing free PR for them by associating it with useful small-scale research projects. I don't think most researchers will want to associate their projects with AI now that the term has been poisoned, though they might have to because many bigwigs have been sucked into the hype. The term AI has existed nebulously since the beginning of computing, so whether we call one thing or another AI is basically personal taste. Companies like OpenAI have successfully attached their product to the term and created the strongest association, so ultimately, if you say AI in a contemporary context, a lot of people are hearing GPT-like.

      • AccountMaker@slrpnk.net · 4 months ago

        Yeah, but it doesn't really help that this is a community called "Fuck AI", made as "A place for all those who loathe machine-learning…". It's like saying "I loathe Dijkstra's algorithm". The term machine learning has been used since at least the 1950s, and it involves a lot of elegant mathematics which all essentially tries to optimize various functions in various ways. And yet, at least in the places I'm exposed to, people constantly present any instance of machine learning as useless, morally wrong, theft, ineffective compared to "traditional methods" and so on, to the point where I feel uneasy telling people that I'm doing research in that area, since there's so much hate towards the entire field, not just LLMs. It might be because of them, sure, but in my experience the popular hatred of AI is not limited to ChatGPT, corporations and the like.

        • markon@lemmy.world · 4 months ago

          It is a sad thing to see. The education system, especially here in the US, has really failed many people. I was always super curious and excited about machine intelligence from a young age; I was born in the mid-'90s. I've been dreaming about automating myself out of work so I could create and do what I love, spend more time with the people I love, and just explore and learn and grow. As a kid I noticed two things that made adults miserable:

          1. Overworked with too little pay and too little time off. Monetary stressors.
          2. Having kids they didn’t want but lied to themselves about. (Only some obviously)

          I went to school for CS and eventually had to drop out because of personal life and mental health struggles, but I'm still interested in joining the field and open source. People sometimes make me feel really sad and misunderstood, and discourage me from even bothering, because they're so negative. I know how we got here, but it's sad that the reaction is this predictable.

          By 2014-2015 I was watching a lot of machine learning videos on YouTube, playing with style GANs, etc. The fact that a computer could even "apply a style" and keep the image coherent was just endlessly fascinating, and it made for a lot of cool photos. Watching AlphaGo beat a world champion in Go using reinforcement learning and self-play was incredible. I just love it. I love future tech and I think we can have social and economic equity and much less wealth and income inequality with all this.

          A lot of people don't realize how much labor adds to the cost of what they buy, and there are only so many workers. With even today's LLMs fully implemented and realized as agents (which is coming about very quickly), things will slowly get cheaper and better, then likely more rapidly. Software development will be cheaper. Engineers, game designers and artists will bring to life incredible things that we haven't thought up yet, and will likely use these tools/entities to enhance their workflows. Yes, there will be less artist grunt work, and there will be effects on jobs. But it's not going to stop anyone from doing what they love however they like to do it. It's so odd to me.

          Cheers, and keep your head up. If we get this right I think people will change their tune, but probably not until they see economic and quality-of-life improvements. Though, I'd say machine learning and machine intelligence have added a great deal of value and opportunity to my life. I wish everyone a good future regardless of how you feel about this. I just hope people who aren't in the field, or weren't enthusiastic before, will at least remember there are a lot of real, kind, and extremely intelligent people working really hard on this stuff who truly believe they can make a substantial positive impact. We should be more trusting and open. It's really hard to do, and we can get burned, but most people want decent things for most others, despite disagreement and strife. We've made it this far. Let's go to the stars 🤠

      • Lifter@discuss.tchncs.de · 4 months ago

        Let's instead make an honest attempt to de-poison the term, rather than just giving in. It is indeed like saying "all math is bad" because math can be used in bad ways.

    • reddithalation@sopuli.xyz · 4 months ago

      But those are the cool, interesting, research-related AIs, not the venture-capital-hype LLMs that will gift us AGI any day now with just a bit more training data/compute.

      • Yprum@lemmy.world · 4 months ago

        But the reason the planet burns is how we generate the energy, not the fact that we use energy. I'm not defending all these fucked-up greedy corporations and their use of AI, machine learning, LLMs or whatever crap they are trying to get us to use whether we want it or not, but our real problem is energy generation, not consumption.

        • oo1@lemmings.world · 4 months ago

          Yeah, it's all the evil power companies' fault.

          I take it these AIs are coming up with the solution to the cheap, clean energy problem that has escaped organic intelligences for the past 50 years.

          I think the EV fanatics all learned about the magic electricity grid from the same people, but I still don't see all EVs being supplied with the photovoltaic system and secondary battery required to make them load- and shape-neutral.

          It's not my fault that all my unnecessary short-haul flights contribute to global warming; the airline should have invented a clean plane.

    • otto_von@lemmy.world · 4 months ago

      But these are other applications of AI. I think he meant LLMs. That would be like saying "fitting functions has many other applications and can be used for good".

    • riodoro1@lemmy.world · 4 months ago

      And focusing on AI’s climate impact while completely ignoring the big picture is asinine

      immediately goes to whataboutism and chooses big oil as an example. Pure gold.

      • alienanimals@lemmy.world · 4 months ago

        If you’re complaining about climate impact, looking at the big picture isn’t whataboutism. It’s the biggest part of the dataset which is important for anyone who actually cares about the issue.

        You’re complaining about a single cow fart, then when someone points out there are thousands of cars with greater emissions - you cry whataboutism. It’s clear you just want to have a knee-jerk reaction without rationally looking at the problem.

    • CompostMaterial@lemmy.world · 4 months ago

      If all fossil fuel power plants were converted to nuclear, then tech power consumption wouldn't even matter. Again, it was the oil industry that smeared nuclear power as unsafe.

      • Mothproof3007@programming.dev · 4 months ago

        If we had infinite money, plus infinite people with the required skills to design and build nuclear power plants, plus a magical method to build nuclear reactors in 2 months (or even instantly!), plus managed to convince public opinion that nuclear energy is actually fine, then the climate crisis would be only partially solved! Hurray! (This doesn't in and of itself solve food production and consumption, transportation, and other sources of land-use-change emissions; we'd need a whole lot more work on many other subjects.)

        In more serious terms (Net Zero research), nuclear isn't perfect, nor is it the be-all and end-all solution, but it IS, globally, a part of the solution for generating cleaner electricity and cutting emissions. However, since we don't have all the magical things I was listing earlier, its development runs into many roadblocks, and it turns out that wind and solar scale extremely well and integrate pretty well into grids, as long as we're willing to deploy the (mostly known) solutions to counter their variability (there are several examples of high integration rates in different settings). The issue is that all of this (both nuclear and renewables) demands a lot of investment in terms of money, of people with the required skill sets, and of educating public opinion that this is needed and desirable. And that's a MASSIVE challenge.

        Which is why, to get to the point, the enormous electricity use of AI is actually a problem: its additional power consumption is keeping fossil-fuel power plants running, or making them run more, at a time when emissions should be declining thanks to advances in low-carbon electricity production (mostly renewables). In general, it makes reaching Net Zero goals harder.

        • hedgehog@ttrpg.network · 4 months ago

          We can (but largely don't) recycle nuclear waste, completely negating the need for ultra-long-term (i.e., measured in thousands of years) storage and getting more overall energy relative to the waste that will end up in long-term (measured in hundreds of years) storage.

          That said, my understanding is that we have a plan for dealing with the waste, but it’s been awaiting formal review for a decade. This plan was already approved in 2002 but was shut down in 2010 for political reasons, not because of technical or safety concerns.

        • CompostMaterial@lemmy.world · 4 months ago

          I do. The same thing we have been doing with it this entire time: storing it in underground bunkers. Contrary to the propaganda, the volume of nuclear waste is rather small, and unlike pollution from fossil fuel plants it is easily contained, though much longer-lived. The benefits still outweigh the cost of managing disposal. The reality is that there is plenty of uninhabited land on the planet where nuclear waste can be stored and isolated for thousands of years. One day, hopefully, we will have fusion power, which won't generate waste. And perhaps someday we will also figure out how to permanently dispose of nuclear waste. In the meantime, storage is a fine solution that far outweighs polluting the atmosphere by burning things.

    • AwkwardLookMonkeyPuppet@lemmy.world · 4 months ago

      Most of the people on this website hate AI without even understanding it, and refuse to make an honest assessment of its capabilities, instead pretending that it's nothing more than a good autocorrect prediction engine.

      • rekorse@lemmy.world · 4 months ago

        Everyone makes their own risk analysis; a lot of people think that whatever you say it can do, it's not worth the overall cost.

        Unfortunately it's your problem to disentangle useful AI from predatory AI. It would probably make sense to just call it something else (a neural network, a new programming language, a new data analysis model), but then how would you trick investors?

    • raspberriesareyummy@lemmy.world · 4 months ago

      Bullshit take. OP's screenshot isn't about AI in general, it's about LLMs. They are absolutely doing more harm than good. And the examples you are quoting are also highly misleading at best:

      • science assistance: that's machine learning, not AI
      • helping doctors? Yes, again, machine learning. Expedite screening rates? That's horribly dangerous and will get people killed. What it could do is scan medical data that has already been seen by a qualified doctor / radiologist / scientist and re-submit it for a second opinion in case it "finds" a pattern.
      • powering robots that have moving parts: that's where you want actual AI, logical rules from sensor to action; putting deep learning or LLM bullshit in there is, again, fucking dangerous and will get people killed
      • helping to catch illegal fishing, etc.: again, deep learning, not AI.
      • Lifter@discuss.tchncs.de · 4 months ago

        You seem to be arguing against another strawman. OP didn't say they only dislike LLMs; the sub is even called "Fuck AI". And this thread is talking about AI in general.

        Machine learning is a subset of AI and always has been. Also, LLMs are a subset of machine learning. You are trying to split hairs, or at least pull a "no true Scotsman" on the above post.

          • raspberriesareyummy@lemmy.world · 4 months ago

            My bad for not seeing the sub's name before commenting. My points still stand, though.

            There's machine learning, and there's machine learning. Either way, pattern matching and statistics have nothing to do with intelligence beyond the actual pattern matching logic itself. Only morons call LLMs "AI". A simple rule like "if value > threshold then doSomething" is more AI than an LLM, because there's actual logic there. An LLM has no such logic behind its word prediction, but thanks to statistics it is able to fool many people (including myself, depending on the context) into believing it is intelligent. So that makes it dangerous, but not AI.
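
            As a toy illustration of the distinction being drawn here (not any real system), compare an explicit rule, whose logic is written down and inspectable, with a purely statistical next-word pick learned from a tiny text sample:

            ```python
            # Toy contrast: an explicit, inspectable rule vs. frequency-based word prediction.
            from collections import Counter

            def overheat_alarm(temp_c: float, threshold: float = 90.0) -> bool:
                # "if value > threshold then doSomething" -- the logic is explicit.
                return temp_c > threshold

            corpus = "the cat sat on the mat the cat ate".split()
            bigrams = Counter(zip(corpus, corpus[1:]))   # word-pair counts, nothing more

            def next_word(word: str) -> str:
                followers = {b: n for (a, b), n in bigrams.items() if a == word}
                return max(followers, key=followers.get) if followers else "<unknown>"

            print(overheat_alarm(95.3))  # True, because the rule says so
            print(next_word("cat"))      # whichever follower was most frequent in the corpus
            ```

            A real LLM replaces the bigram table with a huge neural network over tokens, but the prediction step is still statistical rather than rule-based.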

        • conciselyverbose@sh.itjust.works · 4 months ago

          ML didn’t aggressively claim the name AI as a buzzword to scam massive investment in trash. Someone talking about ML calls it ML.

          Someone talking about “AI” is almost certainly not referring to ML.

          • Lifter@discuss.tchncs.de · 4 months ago

            Companies have always simplified smart things and called them AI. AI is hotter than ever now, not only LLMs.

            And again, ML is a subset of AI, and LLMs are a subset of ML. With these definitions, everything is AI. Look up the definition of AI: it's just a collection of techniques for making computers do "smarter" things. It includes all of the above, e.g. "if this then that", but also more advanced mathematics, like statistical methods and ML. LLMs are one of those statistical models.

            • conciselyverbose@sh.itjust.works · 4 months ago

              It doesn’t matter how similar the underlying math is. LLMs and ML are wildly different in every way that matters.

              ML is taking a specific data set, in one specific problem space, to model a specific problem in that one specific space. It is inherently a limited application, because that’s what the math can do. It finds patterns better than our brains. It doesn’t reason. ML works.

              LLMs are taking a broad data set that's primarily junk and trying to solve far more complicated problems, generally, without any tools to do so. LLMs do not work. They confabulate.

              ML has been used heavily for a long time (because it’s not junk) and companies have never made a point of calling it AI. This AI bubble is all about the dumpster fire that is LLMs being wildly overused. Companies selling “AI” to investors aren’t doing tried and true ML.

              • Lifter@discuss.tchncs.de · 4 months ago

                Yeah, this bubble is mostly LLMs, but also deepfakes and other generative image algorithms. They are all ML. LLMs have some fame because people can't seem to realise that they're crap. They've definitely passed the Turing test while still being pretty much useless.

                There are many other useless ML algorithms. Just because you don’t like something doesn’t mean it doesn’t belong. ML has some good stuff and some bad stuff. The statement “ML works” doesn’t mean anything. It’s like saying “math works”.

                There have been many AI bubbles in the past, as well as slumps. Look up the term "AI winter". Most AI algorithms turn out not to really work except in a few niche applications. You are probably referring to those few when you say "ML works". Most AI projects fail, but some prevail. This goes for all tech though. So… tech works.

                What Microsoft is doing is they are trying to cast a wide net to see if they hit one of the few actual good applications for LLMs. Most of them will fail but there might be one or two really successful products. Good for them to have that kind of capital to just haphazardly try new features everywhere.

                • conciselyverbose@sh.itjust.works · 4 months ago

                  No, they’re not “all ML”. ML is the whole package, not one part of the algorithm.

                  Obviously if you apply any tech badly it isn't magic. ML does what it's intended to do, which is find the best model to approximate a specific phenomenon. But when it's applied correctly to an appropriately scoped problem, it does a good job.

                  LLMs do not do a good job at anything but telling you what language looks like, and all the investment is people trying to apply them to things they fundamentally cannot do. They are not capable of anything that resembles reasoning in any way, and that’s how the scam companies are pretending to use them.

    • rekorse@lemmy.world · 4 months ago

      We can do all those things without AI. Why do you care how fast it happens? If we could cure cancer twice as fast by grinding up baby animals would you do it?

      • Ookami38@sh.itjust.works · 4 months ago

        Probably not the best look to imply you want cancer treatment research to slow down simply because you don't like the tool used to do it. There's a lot of shit wrong with our current implementations of AI, but let's not completely throw the baby out with the bathwater, eh?

          • Ookami38@sh.itjust.works · 4 months ago

            why do you care how fast it happens

            I care how fast it happens because I don’t want it to slow down.

            • rekorse@lemmy.world · 4 months ago

              We can’t just use the fear of death to justify any means to prevent it. If we found out we could live eternally but had to destroy other creatures or humans to do so, we would consider that to be too high a cost.

              The cost of AI at the moment is just immoral. Even those who have found methods to deal with the costs are still benefiting from calling it AI, in the form of investments and marketing. Calling their work AI is worth money because of all of this fraudulent behavior.

              If I started growing and producing my own organic abuse free heroin and selling it, it would still be immoral because I’m benefiting from the economy created by the illegal market. I’m participating in that market despite my efforts.

              I've said before that if these companies doing the ethical AI stuff want to stop being criticized for being part of this AI nonsense, they should feel free to call it something else. AI is overly broad and applied incorrectly all the time as it is, and is mainly attached to things to draw money and interest that otherwise wouldn't exist.

              It's a way to signal to investors that there is a profit incentive to be focused on here.

              • Ookami38@sh.itjust.works · 4 months ago

                We can’t just use the fear of death to justify any means to prevent it. If we found out we could live eternally but had to destroy other creatures or humans to do so, we would consider that to be too high a cost.

                Sure, there are costs that are too high for anything.

                The cost for AI at the moment is just immoral. Even those who have found methods to deal with the costs, are still benefiting from calling it AI in the form of investments and marketing. Calling their work AI is worth money because of all of this fraudulent behavior.

                This is the part where it breaks down though. There’s nothing inherently immoral about AI. It’s not the concept of AI you have problems with. It’s the implementation. I hate a lot of the implementation, too. Shoehorning an AI into everything, using AI to justify a reduction in labor, that all sucks. The tool itself, though? Pretty fuckin awesome.

                If I started growing and producing my own organic abuse free heroin and selling it, it would still be immoral because I’m benefiting from the economy created by the illegal market. I’m participating in that market despite my efforts.

                Are we comparing this to cancer research still? If so that’s a bit of a WILD statement. It’s pretty close to the COVID vaccine denial mentality - because it was made using something I don’t like/fully understand, it must be bad.

                Ive said before that if these companies doing the ethical AI stuff want to stop being criticized for being part of this AI nonsense, feel free to call it something else. AI is overly broad and applied incorrectly all the time as it is anyways, and is mainly applied to things to draw money and interest that otherwise wouldnt exist.

                OK, let's go back to drugs, then. If we were making your organic, free-trade heroin but called it beroin so that we're not piggybacking off the heroin market, we're good? No, that doesn't make sense. Heroin will fuck up someone's life regardless of what you call it, how it was produced, etc. There's (virtually) no legitimate, useful application of heroin. Probably not one whose production we'd ever see broadly okayed.

                Conversely, you've already agreed that there are ethical uses and applications of AI. It doesn't matter what the name is; it's the same technology. AI has become the term for this technology, just like heroin has become the term for that drug, and it doesn't matter what else you want to call it, everyone already knows what you mean. Whatever you call it, its uses are still the same. Its impact is still the same.

                So yeah, if you just have a problem with, say, cancer researchers using AI, and would rather they use, idk, AGI or any of the other alternative names, I think you're missing the point.

                • rekorse@lemmy.world · 4 months ago

                  I'm not saying they shouldn't do the research at all, just that they should take steps to separate themselves from the awful practices of the big players right now.

                  We should be able to talk about advances in cancer research without having to have a discussion about how AI is going overall, including the shitty actors.

                  And, to be fair, most of the good projects you are defending do very publicly differentiate themselves and explain how they are more responsible. All I'm saying is that that is a good thing. Companies should be scrambling to distance themselves from OpenAI, Copilot, and whatever else the big tech companies have created.

      • pyre@lemmy.world · 4 months ago

        Yes. I love animals, but if my kid had a cold and I knew a puppy's breathing caused it, I would drown that puppy myself. Let alone finding a cure for fucking cancer.

        That being said, AI isn't doing that, and even if it were, I wouldn't trust the results.

        • rekorse@lemmy.world · 4 months ago

          Hey at least you owned the logical conclusion of your argument. I can respect that.

          I do disagree, but I'm also vegan, so that's probably why.

  • ZeroHora@lemmy.ml · 4 months ago

    That's not the entire picture: we are destroying our planet to generate bad art and fake titties, and to search a little bit faster but with the same chance of being entirely wrong as just googling it.

  • Th4tGuyII@fedia.io · 4 months ago

    In the grand scheme of things, I suspect we actually don't have that much power to stop the industrial machine.

    Even if every person on here, on Reddit, and every left-leaning social media revolted against the powers that be right now, we wouldn’t resolve anything. Not really. They’d send the military out, shoot us down (possibly quite literally), then go back to business as usual.

    Unless there's a business incentive to change our ways, capitalism will not follow; instead it'll do everything it can to resist that change. By the time there is enough economic incentive, it'll be far too late to be worth fixing.

    • MBM@lemmings.world · 4 months ago

      I mean, this isn't just a social media thing. It was part of the reason there was a writers' strike in Hollywood, and they did manage to accomplish something. I don't see why protests/strikes/politics would be useless here.

      • Th4tGuyII@fedia.io · 4 months ago

        You’re right, but I was making a point, as social media is most often where you hear people calling for revolution.

        I’ll agree that strikes can work, especially employment strikes - but that’s usually because there’s a specific, private entity to target, an employer to back into the metaphorical corner.

        As far as protesting/striking against the system goes, you need only look at the strikes and protests relating to Palestine to know what kind of force such a revolutionary strike would be met with.

    • Flying Squid@lemmy.world · 4 months ago

      A lot of people on Lemmy are expecting the glorious revolution to happen any time now and then we will live in whatever utopia they believe makes a utopia. Even if something like that happens, and I’m less certain by the day that it ever will, the result isn’t necessarily any better than what came before. And often worse.

      • Cornelius_Wangenheim@lemmy.world · 4 months ago

        It’ll almost certainly be worse. When revolutions happen, the people who seize power are the ones who were most prepared, organized and willing to exercise violence. Does that at all sound like leftists in the West?

        • Wilzax@lemmy.world · 4 months ago

          The only way to enact utopia is by making it so popular an idea that the propaganda machine gets drowned out. This is going to be a very long and slow process that may never end. But we can always aim for “not worse” and if we can do that, we can also aim for “a little better”. Anything faster than those baby steps feels really far from possible, but those baby steps are always worth taking.

          • ArmokGoB@lemmy.dbzer0.com (mod) · 4 months ago

            Wake me up when people found a solarpunk city-state with nuclear capability so that they don’t just get rolled over by the nearest superpower.

    • Lumisal@lemmy.world · 4 months ago

      See, the thing is, dead people don't buy as many things as live ones, so extreme capitalism doesn't want to kill you directly either. Slow poison is fine if it's profitable enough, but a fast, intentional bullet to their main customer base? Not so much.

    • mommykink@lemmy.world · 4 months ago

      Even if every person on here, on Reddit, and every left-leaning social media revolted against the powers that be right now, we wouldn’t resolve anything. Not really. They’d send the military out, shoot us down (possibly quite literally), then go back to business as usual.

      What are your thoughts on 2A and private gun ownership?

      • reddithalation@sopuli.xyz · 4 months ago

        The US military will always have more firepower than your group of armed civilians. Maybe good for defending against other armed civilians, but don't act like you could take on the military.

      • Th4tGuyII@fedia.io · 4 months ago

        That doesn’t really factor into anything.

        If the military backs the system, they’d win that fight as they’ll always be better armed.

        That’s why the founding fathers never wanted a standing military, because it took power away from the people - now more than ever.

  • MudMan@fedia.io · 4 months ago

    I mean, it also made the first image of a black hole, so there’s that part.

    I’d also flag that you shouldn’t use one of these to do basic sums, but in fairness the corporate shills are so desperate to find a sellable application that they’ve been pushing that sort of use super hard, so on that one I blame them.

      • MudMan@fedia.io · 4 months ago

        Machine learning tech is used in all sorts of data analysis and image refining.

        https://physics.aps.org/articles/v16/63

        I get that all this stuff is being sold as a Google search replacement, but a) it is not, and b) it is actually useful, when used correctly.

        • kibiz0r@midwest.social · 4 months ago

          This is why the term “AI” sucks so much. Even “machine learning” is kind of misleading.

          Large-scale statistical computing obviously has uses, especially for subjects that lend themselves well to statistical analysis of large and varied data sets, like astronomical observations.

          Sticking all of the text on the internet into a blender and expecting the resulting statistical weights to produce some kind of oracle is… Well, exactly what you’d expect the tech cultists to pivot to after crypto fell apart, tbh, but still incredibly dumb.

          Calling them both “AI” does a tremendous disservice to us all. But here we are, unable to escape the marketing.

          • MudMan@fedia.io · 4 months ago

            Yeah, it’s no oracle. But it IS fascinating how well it does language, and how close it sticks to plausible answers. It has uses, like narrowing down fuzzy queries, translation and other looser things that traditional algorithms struggle with.

            It’s definitely not a search engine or a calculator, though.

  • givesomefucks@lemmy.world · 4 months ago

    There’s literally no point.

    Like, humans aren't really the "smartest" animals. We're just the best at language and tool use. Other animals routinely demolish us in everything else measured on an IQ test.

    Pigeons get a bad rap for being stupid, but their brains are just different from ours. Their image and pattern recognition is so insane that they can recognize that words they've never seen aren't gibberish, just by letter structure.

    We weren't even trying to get them to do it. The researchers were just introducing new words and expected the pigeons to have to learn them, but the birds could already tell, despite never having seen the word before.

    Why the hell are we jumping straight to human consciousness as a goal when we don't even know what human consciousness is? It's like picking up Elden Ring and going straight for whatever the final boss is on your very first time playing the game. Maybe you'll eventually beat it. But why wouldn't you just start from the beginning and work your way up as the game gets harder?

    We should at least start with pigeons and get an artificial pigeon and work our way up.

    Like, that old Reddit repost about pigeon-guided bombs: that wasn't a Hail Mary, it was incredibly effective.

    • MudMan@fedia.io · 4 months ago

      Who’s jumping to human consciousness as a goal? LLMs aren’t human consciousness. The original post is demagoguery, but it’s not misrepresenting the mechanics. Chatbots already have more to do with your pigeons than with human consciousness.

      I hate that the stupidity about AGI some of these techbros are spouting is being taken at face value by critics of the tech.

    • Flying Squid@lemmy.world · 4 months ago

      Pigeons get a bad rap at being stupid

      Do they? I guess I haven’t encountered that much. I think about messenger pigeons in wars and such…

      Disgusting? Sure, I’ve heard that a lot. But I haven’t heard ‘stupid’ really as a word to describe pigeons.

      Anyway, I don’t disagree with you otherwise. My dogs are super stupid in my perception but I know which one of us would be better at following a trail after someone had left the scene. (Okay, maybe Charlie would still be too stupid to do that one, but Ghost could do it).

      • AnarchistArtificer@slrpnk.net · 4 months ago

        Something that blows my mind about dogs is that their sense of smell is so good that, when combined with routine, they use it to track time i.e. if their human leaves the house for 8 hours most days to go to work, the dog will be able to discern the difference between “human’s smell 7 hours after they left” and “human’s smell 8 hours after they left”, and learn that the latter means their human should be home soon. How awesome is that?!

    • JayTreeman@fedia.io · 4 months ago

      You might like the sci-fi YouTuber Isaac Arthur. He has a huge library, including a number of episodes that talk about intelligence.

    • Luvs2Spuj@lemmy.world · 4 months ago

      Right, but AI is a step away from a solution because of its outrageous energy costs.

      If we fix our energy problems, then it makes more sense to harvest the entirety of human knowledge and creativity to try to make the line go up.

    • ArmokGoB@lemmy.dbzer0.com (mod) · 4 months ago

      Here’s some research on how much energy various machine learning models use.

      In 2021, Google’s total electricity consumption was 18.3 TWh, with AI accounting for 10%–15% of this total.

      Let’s call it 10% to make it seem as energy-efficient as possible. That’s 1.83 TWh a year, or about 5 GWh a day. An average US home uses 10.5 MWh a year. You could power 476 US homes for a year, and still have some energy left over, with the amount of energy Google uses on their AI-powered search in a single day.
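
      A quick sketch re-running that arithmetic (all figures taken from the comment above):

      ```python
      # Back-of-the-envelope check of the numbers cited above.
      google_total_twh = 18.3        # Google's 2021 electricity use, as cited
      ai_share = 0.10                # lower-bound estimate for the AI share
      home_mwh_per_year = 10.5       # average annual US household consumption

      ai_twh_per_year = google_total_twh * ai_share          # 1.83 TWh/year
      ai_mwh_per_day = ai_twh_per_year * 1_000_000 / 365     # ~5,014 MWh, i.e. ~5 GWh/day
      homes_for_a_year = ai_mwh_per_day / home_mwh_per_year  # enough for the ~476 homes cited, with some left over

      print(f"{ai_twh_per_year:.2f} TWh/yr, {ai_mwh_per_day:,.0f} MWh/day, "
            f"~{homes_for_a_year:.0f} homes powered for a year")
      ```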

      • Yprum@lemmy.world · 4 months ago

        But then the problem is how Google uses AI, not AI itself. I can have an LLM running locally, for my own purposes, without consuming crazy amounts of energy.

        So blaming AI is absurd; we should blame OpenAI, Google, Amazon… This whole hatred of AI is absurd when it's not the real source of the problem. We should concentrate on blaming, and ideally punishing, companies for this kind of use (abuse, more like) of energy. Energy usage also is not an issue in itself, as long as we use adequate energy sources. If companies started deploying huge solar panel fields on top of their buildings and parking lots and whatnot to cover part of the energy use, we could all end up better off than before.

        • ArmokGoB@lemmy.dbzer0.com (mod) · 4 months ago

          I agree that we shouldn't blame the tools. I also believe that generative AI can be used for good, in the right hands. However, denying the negative impact these tools have is just as disingenuous as saying that the tools are only going to be used by fat cats and grifters looking to maximize profit.

          Also, did you know that you can just mod random people? It doesn’t even ask you. You just wake up one day as a moderator.

          • Yprum@lemmy.world · 4 months ago

            But is it the tool that has the negative impact, or is it the corporations that use the tool that have a negative impact? I think it is an important distinction, even more so when this kind of blaming-the-AI stuff sounds a lot like a distraction technique: "no, don't look at what has caused global warming for the last century, look at this tech that exploded over the last year and is consuming crazy amounts of energy". And having said that, I want to make clear that this doesn't mean the use of AI shouldn't be handled, discussed or criticised, as long as we don't fall into irrational blaming of a tool that has no such issue in itself.

            I didn’t know about the mod stuff, but also not sure why you mention it, am I going to find myself mod of some weird shit now? X)

            • ArmokGoB@lemmy.dbzer0.com (mod) · 4 months ago

              But is it the tool that has the negative impact or is it the corporations that use the tool with a negative impact?

              Running machine learning models is extremely computationally intensive. To my knowledge, it doesn't scale particularly well when you have a bunch of users making arbitrary requests. The energy problem is mostly to do with the number of users, rather than the fact that it's corporations doing it. This isn't to say that big tech doesn't create a bunch of other problems by controlling closed-source models.

    • Mango@lemmy.world · 4 months ago

      It's worth noting that you're commenting rationally in an echo chamber. You can't do that.

  • afraid_of_zombies@lemmy.world · 4 months ago

    No. Once it has identified it as a math problem, a different part of the code is called.

    Fucking morons with Twatter accounts