The best conversations I still have are with real people, but those are rare. With ChatGPT, I reliably have good conversations, whereas with people, it’s hit or miss, usually miss.

What AI does better:

  • It’s willing to discuss esoteric topics. Most humans prefer to talk about people and events.
  • It’s not driven by emotions or personal bias.
  • It doesn’t make mean, snide, sarcastic, ad hominem, or strawman responses.
  • It understands and responds to my actual view, even from a vague description, whereas humans often misunderstand me and argue against views I don’t hold.
  • It tells me when I’m wrong but without being a jerk about it.

Another noteworthy point is that I’m very likely on the autism spectrum, and my mind works differently from the average person’s, which probably explains, in part, why I struggle to stay interested in human-to-human interactions.

  • JamesStallion@sh.itjust.works · 1 hour ago

It carries the emotions and personal biases of the source material it was trained on.

    It sounds like you are training yourself to be a poor communicator, abandoning any effort to become more understandable to actual humans.

    • ContrarianTrail@lemm.ee (OP) · 36 minutes ago

      It sounds like you are training yourself to be a poor communicator, abandoning any effort to become more understandable to actual humans.

      Based on what? That seems like a rather unwarranted assumption to me. My English vocabulary and grammar have never been better, and since I can now also talk to it instead of typing, my spoken English is much clearer and more confident as well.

      • JamesStallion@sh.itjust.works · 6 minutes ago (edited)

        You say yourself that you use the vaguest descriptions when talking to the bot and that it fills in the blanks for you. This is not a good way to practice speaking with human beings.

        The fact that you assumed I was talking about grammar is indicative of the problem. You clearly dislike others assuming you are talking about something you are not talking about, yet you do it yourself. That’s because misunderstandings are normal and learning to deal with them is an essential part of good communication.

        • ContrarianTrail@lemm.ee (OP) · 1 minute ago

          You say yourself that you use the vaguest descriptions when talking to the bot and that it fills in the blanks for you

          Yes, because I’m not a native English speaker, and I’m far better at writing English than speaking it. If you transcribe my speech into text, it’s a horrible word salad, yet it still understands perfectly what I mean, and I don’t need to repeat myself endlessly or correct it on what I actually said. Contrast this with my discussions online, in writing, where I may spend 40 minutes spelling out an idea as clearly as I can and still be misunderstood by a huge number of people. Like right now.

  • Zerlyna@lemmy.world · 2 hours ago

    I talk with ChatGPT too sometimes, and I get where you’re coming from. However, it’s not always right either. It says it was updated in September but still refuses to commit to memory that Trump was convicted on 34 counts earlier this year. Why is that?

    • whaleross@lemmy.world · 2 hours ago (edited)

      Idk, I think that article is a bit hyperbolic and self-serving, written to validate the writers’ and readers’ sense of their own intelligence above others’. The lengthy exposition on cold reading is plain filler material for the topic, and yet it goes on.

      ChatGPT and LLMs have been a thing for a while now, and I doubt anyone technically literate believes one to be AI in the sense of an actual individual entity. It’s an interactive question-response machine that summarises what it knows about your query in flowing language, or even formatted as lists or tables or whatever at your request. Yes, it has deep, deep flaws, with holes and hallucinations, but for reasonable expectations it is brilliant. Just like a computer or its software, it can do what it can do. Nobody expects a word processor, image editor, or musical notation software to do more than what it can do. Even the world’s most regarded encyclopedias have limits, printed and interactive media alike.

      So I don’t see why people feel the need to keep patting themselves on the back for how clever they are by pointing out that LLMs are in fact not a real-world mystical oracle that knows everything. Maybe because they themselves were the ones thinking it was, and now they’re overcompensating to save face.

    • Lvxferre@mander.xyz · 3 hours ago

      I’ve read this text. It’s a good piece, but unrelated to what OP is talking about.

      The text boils down to “people who believe that LLMs are smart do so for the same reasons as people who believe that mentalists can read minds do.” OP is not saying anything remotely close to that; instead, they’re saying that LLMs lead to pleasing and insightful conversations in their experience.

      • leftzero@lemmynsfw.com · 2 hours ago (edited)

        they’re saying that LLMs lead to pleasing and insightful conversations in their experience.

        Yeah, as would eliza (at a much lower cost).

        It’s what they’re designed to do.

        But the point is that calling them conversations is a long stretch.

        You’re just talking to yourself. You’re enjoying the conversation because the LLM is simply saying what you want to hear.

        There’s no conversation whatsoever going on there.

        • ContrarianTrail@lemm.ee (OP) · 2 hours ago

          You’re gatekeeping what counts as a conversation now?

          I can take this even further. I can have better conversations literally with myself inside my own head than with some people online.

        • Lvxferre@mander.xyz · 2 hours ago

          Yeah, as would eliza (at a much lower cost).

          Neither Eliza nor LLMs are “insightful”, but that doesn’t stop them from outputting utterances that a human being would subjectively interpret as such. And the latter is considerably better at it.

          But the point is that calling them conversations is a long stretch. // You’re just talking to yourself. You’re enjoying the conversation because the LLM is simply saying what you want to hear. // There’s no conversation whatsoever going on there.

          Then your point boils down to an “ackshyually”, on the same level as “When you play chess against Stockfish you aren’t actually «playing chess» as a 2P game, you’re just playing against yourself.”


          This shite doesn’t need to be smart to be interesting to use and to fulfil some [not all] social needs. Especially in the case of autists (as OP mentioned being likely on the spectrum); I’m not an autist myself, but I lived with them long enough to know how the cookie crumbles for them: opening your mouth is like saying “please put words here, so you can screech at me afterwards”.

  • Rhaedas@fedia.io · 3 hours ago

    It could respond in other ways if it was trained to do so. My first local model was interesting as I changed its profile to have a more dark and sarcastic tone, and it was funny to see it balance that instruction with the core mode to be friendly and helpful.
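    For context, a “profile” like this on a local model is typically just a persona injected as a system prompt ahead of each user turn. A minimal, hypothetical sketch of that message layout (the persona text and function name here are made up for illustration, not from any specific tool):

    ```python
    def build_messages(profile: str, user_text: str) -> list[dict]:
        """Prepend the persona/profile as a system message, chat-API style."""
        return [
            {"role": "system", "content": profile},
            {"role": "user", "content": user_text},
        ]

    # A dark, sarcastic persona layered on top of the model's core
    # "friendly and helpful" tuning -- the two instructions then compete.
    profile = "You are helpful, but answer in a dark, sarcastic tone."
    messages = build_messages(profile, "Any tips for staying motivated?")
    ```

    The balancing act described above comes from the model weighing this system message against its baked-in instruction tuning.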

    The point is, current levels of LLMs are just telling you what you want to hear. But maybe that’s useful as a sounding board for your own thoughts. Just remember its limitations.

    Regardless of how far AI tech goes, the human-AI relationship is something we need to pay attention to. People will find it a good tool, like OP has, but it’s easy to get sucked into thinking it’s more than it is, and that can become a problem.

  • NegativeInf@lemmy.world · 3 hours ago

    It’s a mirror. I use it a lot for searching and summarizing. Most of its responses are heavily influenced by how you talk to it. You can even make it back up terrible assumptions with enough brute force.

    Just be careful.

  • Lvxferre@mander.xyz · 3 hours ago

    My impressions are completely different from yours, but that’s likely due to:

    1. LLM output is really easy to read as stating assumptions as fact (i.e. “vomiting certainty”), something that I outright despise.
    2. I used Gemini a fair bit more than ChatGPT, and Gemini is trained with a belittling tone.

    Even then, I know which sort of people you’re talking about, and… yeah, I hate a lot of those things too. In fact, one of your bullet points (“it understands and responds…”) is what prompted me to leave Twitter and then Reddit.

    • ContrarianTrail@lemm.ee (OP) · 2 hours ago

      It’s funny how, despite not actually understanding anything per se, it can still repeat back to me an idea that I just sloppily told it in broken English, and it does this better than I ever could. Alternatively, I could spend 45 minutes laying out my view as clearly as I can on an online forum, only to face a flood of replies from people who clearly did not understand the point I was trying to make.

      • Lvxferre@mander.xyz · 36 minutes ago

        I think the key here is implicatures: things that are implied or suggested without being explicitly said, often relying on context to be told apart. It’s situations like someone telling another person “it’s cold out there”, which in context might be interpreted as “we’re going out, so I suggest you wear warm clothes” or “please close the window for me”.

        LLMs model the grammatical layer of a language well, and struggle with the semantic layer (superficial meaning), but they don’t even try to model the pragmatic layer (deep meaning, where implicatures live). As such, they “interpret” everything you say literally, instead of going out of their way to misunderstand you.

        On the other hand, most people use implicatures all the time, and expect others to be using them all the time, even when there are none (I call this a “ghost implicature”; dunno if there’s an academic name for it). And since written communication already hides some of the contextual clues that an utterance is not to be taken literally, there’s a biiiig window for misunderstanding.

        [Sorry for nerding out about Linguistics. I can’t help it.]

        • ContrarianTrail@lemm.ee (OP) · 27 minutes ago

          As such they will “interpret” everything that you say literally, instead of going out of their way to misunderstand you.

          That likely explains why we get along so well; I do the same. I don’t try to find hidden meanings in what people say. Instead, I read the message and assume they literally mean what they said. That’s why I take major issue with absolute statements, for example, because I can always come up with an exception, which in my mind undermines the entire claim. When someone says something like “all millionaires are assholes,” I guess I “know” what they’re really saying is “boo millionaires,” but I still can’t help thinking how unlikely that statement is to be true, statistically speaking. I simply can’t have a discussion with a person making claims like that because to me, they’re not thinking rationally.

  • Sundial@lemm.ee · 4 hours ago

    Autism and social unawareness may be a factor. But points you made like the snide remarks one may also indicate that you’re having these conversations with assholes.

    • ContrarianTrail@lemm.ee (OP) · 4 hours ago

      Well, it’s a self-selecting group of people. I can’t comment on the ones who don’t respond to me, only on the ones who do, and for some reason the proportion of assholes in that group seems to be quite high. I just don’t feel like it’s warranted. While I do have a tendency to make controversial comments, I still try to be civil about it, and I don’t understand the need to be such a dick about it even when someone disagrees with me. I welcome disagreement and am more than willing to talk about it as long as it’s done in good faith.

        • Lvxferre@mander.xyz · 2 hours ago

          People do it all the time regardless of subject. For example, when discussing LLMs:

          • If you highlight that they’re useful, some assumer will eventually claim that you think they’re smart.
          • If you highlight that they are not smart, some other assumer will eventually claim that you think they’re useless.
          • If you say something like “they’re dumb but useful”, you’re bound to get some “I dun unrurrstand, r u against or for LLMs? I’m so confused…”, with both of the above screeching at you.
        • ContrarianTrail@lemm.ee (OP) · 3 hours ago

          My message history is open for anyone to read. In general I don’t discuss politics, but occasionally that too.

  • Praise Idleness@sh.itjust.works · 3 hours ago

    I know a bit more than most people about the inner workings of LLMs. I still occasionally have a conversation with it, like I would with a therapist, perhaps less open and all, but still. Do I know it’s nothing more than a talking parrot? Yes. Do I still feel like I’m talking to a real person without judgement? Yes. And I can use that from time to time.