I asked it how to use “etc” with three dots in an example where a brand-new sentence, starting with a capital letter, follows it.

It told me: “In standard usage, ‘etc.’ is typically followed by three dots and then continues with a lowercase letter. If you are starting a new sentence, you do not add additional dots after ‘etc.’”

Then I begged it to give me an example of that rule. One such as:
I love swimming, soccer, etc… I also love eating animals.

And it just couldn’t do that. It kept typing four dots, a single dot, or no dots at all, and it couldn’t even recognize what it had typed. Lol, try it yourself.

  • Identity3000@lemmy.world · 5 months ago

    So I can imagine there might be several things going on here, including the way in which you prompt it. Trying to dissect exactly what’s happening would require more info, and more experimentation.

    For the record, using three periods after “etc” is considered incorrect in formal writing. Grammar books are pretty unanimous in saying you should use only one period (“etc.”). If your prompt is asking it to “correctly” do something that its training data says is “incorrect”, I could see that causing problems.

    On the flip side, chatGPT isn’t designed to use/enforce “proper” grammar. It’s simply regurgitating patterns based on how people use it in the real world (again, based on the training data). If the training data has lots of inconsistencies and counter-examples, it would seem logical that it might show up in the output, too.

    And finally, it could just be a matter of how well you design the input prompt. In my own trial (also using the free 3.5 model), it was very consistent in using it properly, even though I had suggested it do otherwise:

    Input:

    Tell me how to use “etc” properly in a sentence, punctuating it with three dots. Give me multiple examples that show how to correctly use it at the end of a sentence and in the middle of a sentence.

    Output:

    Certainly! “Etc.” is an abbreviation for “et cetera,” which means “and so on” or “and others.” Here are examples of how to use it properly:

    At the end of a sentence:

    1. I enjoy various outdoor activities: hiking, biking, camping, etc.
    2. She has a wide range of skills: coding, graphic design, project management, etc.
    3. The conference covered topics like artificial intelligence, machine learning, cybersecurity, etc.

    In the middle of a sentence:

    1. The restaurant offers a variety of dishes, such as pizza, pasta, salads, etc., to cater to different preferences.
    2. Our team includes experts in software development, data analysis, user experience design, etc., ensuring a well-rounded approach to projects.
    3. The museum features art from different periods—Renaissance, Baroque, Romantic, etc.—providing a comprehensive cultural experience.
  • Klicnik@sh.itjust.works · 5 months ago

    Something I have noticed in GPT 3.5 is that something on the site itself, not necessarily GPT’s output, seems to change the text that GPT provides. I had an issue where I asked it for the links to the websites it was sourcing. I could see a longer URL pop up for just a second before it got shortened to underlined blue text which was not a link. I asked it multiple times, in multiple ways, to give me the URL again. Each time, the URL showed up for just a second before getting converted to a weird, non-URL format.

    I finally got the URLs it was trying to provide when I told it, “Please provide the raw HTML of your most recent response.” Then I looked at the “a href” sections and found the URLs it had been trying to provide.
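    For anyone who wants to do that last step programmatically rather than by eye, here’s a minimal sketch using Python’s standard `html.parser` module — the HTML snippet and URL are invented for illustration, not taken from an actual ChatGPT response:

    ```python
    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """Collect the href attribute of every <a> tag in an HTML string."""
        def __init__(self):
            super().__init__()
            self.urls = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.urls.append(value)

    # Hypothetical raw HTML, like what the model might return when asked for it
    raw_html = '<p>Source: <a href="https://example.com/article">underlined blue text</a></p>'

    parser = LinkExtractor()
    parser.feed(raw_html)
    print(parser.urls)  # ['https://example.com/article']
    ```

    Same idea as scanning the “a href” sections by hand, just automated.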

    I wonder if the site itself and not GPT is changing three or more consecutive periods to one as well as some of the other issues you have reported.

  • take6056@feddit.nl · 5 months ago

    It seems like ChatGPT can write, but from what I’ve understood about the technology, it always sounded more like it was taught to “speak”. Not with sounds, obviously, but the sentences are built without it necessarily knowing all the characters that make them up, like children do with speech before learning to write.

    I’m not a researcher on the topic, so I could’ve interpreted something wrong. I’d like to see Cunningham’s law proven right, if I did!

    • RiderExMachina@lemmy.ml · 5 months ago

      AI is basically just “heuristically, sentences usually look like this, and when this word is used in this context, the next word is usually…” And so on.

      There is no “thinking” behind ChatGPT, no real understanding of the topics it’s given. Just a computer that produces sentences in a way that makes humans want to humanize it.
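      That “the next word is usually…” intuition can be illustrated with a toy bigram model — a drastic simplification of what LLMs actually do, trained here on a made-up ten-word corpus:

      ```python
      from collections import Counter, defaultdict

      # Tiny invented corpus; real models train on vastly more text
      corpus = "i love swimming and i love soccer and i love eating".split()

      # Count which word follows which (a bigram table)
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def predict_next(word):
          """Return the continuation seen most often in training."""
          counts = following[word]
          return counts.most_common(1)[0][0] if counts else None

      print(predict_next("i"))  # 'love' — follows "i" three times in the corpus
      ```

      Scale the table up by billions of parameters and you get something much more fluent, but the comment’s point stands: it is pattern continuation, not comprehension.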

      • muntedcrocodile@lemmy.world · 5 months ago

        You can simulate “thinking” by having it generate a thought process. This is often used to make agents with LangChain that are significantly more intelligent than the base models, especially when given the capability to use tools.
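        One way to picture that “generate a thought process, then use a tool” pattern is a toy ReAct-style loop. To keep it self-contained, the model below is a hard-coded stand-in (not LangChain or a real LLM), and the single `calculator` tool is invented for the example:

        ```python
        import re

        def fake_model(prompt):
            """Stand-in for an LLM: emits a canned thought plus a tool call.
            A real agent would call an actual model API here."""
            if "Observation" not in prompt:
                return "Thought: I should compute this.\nAction: calculator[2+3]"
            return "Thought: I have the result.\nAnswer: 5"

        def calculator(expr):
            # Toy tool: only handles simple "a+b" expressions
            a, b = expr.split("+")
            return str(int(a) + int(b))

        def run_agent(question, max_steps=3):
            """Loop: model proposes an Action, we run the tool, feed back an Observation."""
            prompt = f"Question: {question}"
            for _ in range(max_steps):
                reply = fake_model(prompt)
                match = re.search(r"Action: calculator\[(.+?)\]", reply)
                if match:
                    observation = calculator(match.group(1))
                    prompt += f"\n{reply}\nObservation: {observation}"
                elif "Answer:" in reply:
                    return reply.split("Answer:")[1].strip()
            return None

        print(run_agent("What is 2+3?"))  # '5'
        ```

        The “intelligence” boost comes from the loop and the tools, not from the model suddenly understanding arithmetic.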

  • relevants@feddit.de · 5 months ago

    Considering you asked it to follow a rule that does not exist, it’s unsurprising it can’t do it “correctly”.

  • Artisian@lemmy.world · 5 months ago

    Surprised nobody mentioned this: most of these models use tokenization. They group characters into chunks like “ea”, “the”, and “anti”; they don’t pick which key to press for the text, they pick which bunch of keys to press. These chunks are called tokens. I believe there are tokens it just can’t output, or tokens that are extremely unlikely. I could imagine that “etc.” and “…” are tokens with relatively high probabilities, but perhaps “etc…” doesn’t break into a nice set of them (or the tokens it can be broken into all have extremely low weights for the model).