I wanted to see if “Leo” could add value to my searches, so I gave it a test.

I asked it a question that I already knew the answer to, basically “give me a list of resources for XYZ”, and it gave me a list of “organizations” related to XYZ. FYI: XYZ is being used here as a filler example; the actual names are irrelevant.

The problem is, it seemingly pulled these resources out of its ass, because they DON’T EXIST.

I asked it for a website, then social media accounts, then how I could contact one of these organizations, and it provided all of them: non-existent social media accounts, a 555 phone number, and an email address that was simply made up. WTF??

I even told it that none of those links exist, and then it came up with:

“I apologize for the confusion. Unfortunately, it appears that the [REDACTED] does not have any social media accounts.”

Excuse me? Why the hell answer the question with fake social media links???

If this is what AI does, it’s quite literally dumber than the worst customer service person. LOL

Has this been your experience with ChatGPT or is only Leo a dumbass?

UPDATE: I continued to ask questions about this organization that it pulled out of thin air. At one point, it told me that it was a non-profit governed by a board of directors. When asked who was on the board of directors, it said there was no board of directors. OK.

Then I continued to ask questions, and it answered my question “when was the organization established” with… and I shit you not:

“Unfortunately, there is no XYZ Association. The concept of a XYZ Association is fictional and was created for the purpose of this scenario.” 🫠 🫠 🫠 🫠 🫠

  • pelotron@midwest.social

    My company has been doing an internal test of GitHub CoPilot to see if it’s worth a company-wide subscription. One of the guys leading it came out with a video showing what he’s been doing with it.

    In one example, he showed how it generates intellisense-like suggestions when writing code. “Here you can see CoPilot generates a snippet of code that does what I wanted! Well, almost … this method it wants to call doesn’t actually exist in our codebase.” Um, ok.

    Then, “Now I can ask it to write a unit test for this code. Look at that! A full blown test in just a couple seconds! Well… it is asserting against the wrong value here. Not sure where it got that number.” LOL

    Can’t wait to use AI to generate shitty code that I then have to debug.

    (Disclaimer, these tests are absolutely worth doing to see how useful these tools are, but to me the results so far are amusing at best.)

    • Showroom7561@lemmy.caOP

      The answers it provided were completely nonsensical! I honestly can’t believe how anyone could use it for anything but a laugh.

  • webghost0101@sopuli.xyz

    Honestly, I wouldn’t compare different models so directly; they all have different strengths and uses. There are only a few actually powerful models out there, and if the one you’re using is free, I wouldn’t expect more than a gimmick. The difference between ChatGPT 3.5 and 4 alone is huge. Bing, which supposedly uses GPT-4, is also crap.

    A lot depends on how you prompt. Garbage in = garbage out. AI is very easy to pick up, with a low barrier to entry, but it’s quite a challenge to master getting useful and factual results. Prompt engineering is becoming a new class of programming, in a way. Some tips for you: “Please search the web to verify your answer; if possible, provide a link to Wikipedia or other sources.” “Always answer using up-to-date, verifiable facts.” “Always say so clearly when no factual answer can be found; never make up an answer.”

    You can even do almost psychological manipulation: “Answer in the style of a Nobel Prize-winning professor who is a master of this subject.”

    “Please provide a high-quality result; this is important for my career.” (That one was found by the AI Explained YouTube channel.)
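
    For what it’s worth, here’s a rough sketch of how those instructions could be baked into a system prompt when calling a model through an API rather than a chat window. This assumes the OpenAI Python client; the model name, wording, and question are placeholders, not anything Leo or Brave actually exposes.

    ```python
    # Rough sketch: turning the anti-hallucination tips above into a system prompt.
    # Assumes the OpenAI Python client; model name and prompt wording are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "Always answer using up-to-date, verifiable facts. "
        "If possible, provide a link to Wikipedia or other sources. "
        "If no factual answer can be found, say so clearly; never make up an answer."
    )

    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Give me a list of resources for XYZ."},  # placeholder question
        ],
    )
    print(response.choices[0].message.content)
    ```

    None of this guarantees factual output, of course; it just raises the odds that the model admits when it doesn’t know something.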

    Once you manage to fine-tune your prompts for an engineered use case, it can repeatedly save massive amounts of time. For example, I have a GPT that is designed to help me write better prompts for other GPTs. Another one I gave the technical PDF for an API, and it is able to help me debug my own code.

    A lot of the AI you currently see is also just using other models like GPT-3 in the background. Always try to check what LLM an AI uses, and if you can’t find that info, that should already tell you something.

    One AI I can personally vouch for that is not bad while being free is https://www.perplexity.ai. Below, I simply asked whether Brave had an AI and what model it uses.

    Disclaimer: I ran out of time, so no, I did not fact-check these responses.

    [Screenshots 1 and 2: Perplexity’s responses]

    Llama 2 is impressive for its size, small enough to run on consumer hardware, but it is far from advanced in logical reasoning.

    • Showroom7561@lemmy.caOP

      One AI I can personally vouch for that is not bad while being free is https://www.perplexity.ai

      After some quick tests, this one appears much better than Leo, but I notice that it also uses a very limited (biased?) source list for some specific questions that really should have a broader answer drawing on more than one source. Still, at least the answers make sense. LOL

      The main draw with Leo is the private nature of the system, which differs from other AI chatbots as far as privacy and data collection are concerned.

      • webghost0101@sopuli.xyz

        With almost any non-local AI, data is most certainly being collected just to get the system working. Privacy is a matter of trusting the supplier, in this case Brave.

        I am not saying Brave isn’t trustworthy; there is some controversy around it, but I have used it myself for years and hold nothing against it. Still, it is something to consider.

        If you really want privacy, as well as full control, you can run Llama 2 fully locally on your own system and integrate it with anything you like. If you’re willing to cough up for a state-of-the-art graphics card, that is.

        https://www.fosai.xyz
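
        For the fully local route, here is a minimal sketch using the llama-cpp-python bindings. The model path is a placeholder for whatever GGUF weights you download (e.g. from Hugging Face), and n_gpu_layers depends on how much VRAM that graphics card gives you.

        ```python
        # Minimal sketch: running Llama 2 locally with llama-cpp-python.
        # The model path is a placeholder; GGUF weights must be downloaded separately.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./models/llama-2-13b-chat.Q4_K_M.gguf",  # placeholder path
            n_gpu_layers=-1,  # offload all layers to the GPU; use 0 for CPU-only
            n_ctx=4096,       # context window size
        )

        output = llm(
            "Q: Does Brave have an AI assistant, and what model does it use? A:",
            max_tokens=200,
            stop=["Q:"],
        )
        print(output["choices"][0]["text"])
        ```

        Nothing leaves your machine this way, which is about as private as it gets.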