• Re: ChatGPT Writing

    From jimmylogan@VERT/DIGDIST to Rob Mccart on Wed Nov 5 20:06:02 2025
    Rob Mccart wrote to JIMMYLOGAN <=-

    My only experience with AI is that the first offers of info when doing
    a search in a browser are almost always generated by an AI system
    these days, and it is often a quick and correct answer if the problem
    isn't too complex.

    Also, if the first offered answer is questionable or obviously wrong,
    I usually just carry on to the suggestions offered further down the
    page, doing the search myself like we had to do before we got the
    magic AI help, rather than giving it another crack at it.

    Yeah, the example you give is actually what I'm referring to. It's not
    making a decision or using critical thinking skills, it's just responding
    to a query based on what it has been taught to do. It is always - ALWAYS -
    regurgitating information that you could find on your own.

    Yes, that's true for the most part, although I'm sure there are
    AI systems that are smarter than that doing expensive jobs for
    bigger users. A free Browser AI I'm sure is pretty basic..

    I have a friend that uses AI for automated customer service responses.
    It is still a programming thing. But I understand where you are coming
    from too.

    I've yet to see an example of one that actually makes decisions.



    ... My software never has bugs. It just develops random features...
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Rob Mccart@VERT/CAPCITY2 to JIMMYLOGAN on Fri Nov 7 10:44:47 2025
    Yes, that's true for the most part, although I'm sure there are
    AI systems that are smarter than that doing expensive jobs for
    bigger users. A free Browser AI I'm sure is pretty basic..

    I have a friend that uses AI for automated customer service responses.
    It is still a programming thing. But I understand where you are coming
    from too.

    I've yet to see an example of one that actually makes decisions.

    I just got off of a site trying to get an answer to a simple
    question and their AI system was totally unable to answer the
    question but, in this case, it finally gave up and started telling
    me how I could reach a live person to talk to..

    So.. no intelligence there, artificial or otherwise.. B)

    To show how simple.. my TV Satellite provider advertised that they
    offer about a 10% discount if you arrange for auto-payments, but
    the general info says that goes for MOST packages.. I was just
    asking if my package qualified.. That should be a straightforward
    enough question for it to answer..

    ---
    þ SLMR Rob þ Live long and prosper
    þ Synchronet þ CAPCITY2 * capcity2.synchro.net * Telnet/SSH:2022/Rlogin/HTTP
  • From Mortar@VERT/EOTLBBS to Rob Mccart on Sat Nov 8 09:39:00 2025
    Re: Re: ChatGPT Writing
    By: Rob Mccart to JIMMYLOGAN on Fri Nov 07 2025 10:44:47

    question but, in this case, it finally gave up and started telling me how I could reach a live person to talk to, so no intelligence there, artificial or otherwise.. B)

    It was smart enough to know when to give up.

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From jimmylogan@VERT/DIGDIST to Rob Mccart on Wed Nov 19 07:45:42 2025
    Rob Mccart wrote to JIMMYLOGAN <=-

    Yes, that's true for the most part, although I'm sure there are
    AI systems that are smarter than that doing expensive jobs for
    bigger users. A free Browser AI I'm sure is pretty basic..

    I have a friend that uses AI for automated customer service responses.
    It is still a programming thing. But I understand where you are coming
    from too.

    I've yet to see an example of one that actually makes decisions.

    I just got off of a site trying to get an answer to a simple
    question and their AI system was totally unable to answer the
    question but, in this case, it finally gave up and started telling
    me how I could reach a live person to talk to..

    So.. no intelligence there, artificial or otherwise.. B)

    To show how simple.. my TV Satellite provider advertised that they
    offer about a 10% discount if you arrange for auto-payments, but
    the general info says that goes for MOST packages.. I was just
    asking if my package qualified.. That should be a straightforward
    enough question for it to answer..

    yep! But that also goes to show that they obviously didn't program
    that particular data...



    ... As I said before, I never repeat myself
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Rob Mccart@VERT/CAPCITY2 to JIMMYLOGAN on Fri Nov 21 07:50:42 2025
    I just got off of a site trying to get an answer to a simple
    question and their AI system was totally unable to answer the
    question but, in this case, it finally gave up and started telling
    me how I could reach a live person to talk to..

    To show how simple.. my TV Satellite provider advertised that they
    offer about a 10% discount if you arrange for auto-payments, but
    the general info says that goes for MOST packages.. I was just
    asking if my package qualified.. That should be a straightforward
    enough question for it to answer..

    yep! But that also goes to show that they obviously didn't program
    that particular data...

    Likely in this case the company doesn't specify in general info online
    exactly which packages are qualified, probably to get people to contact
    them so they can sell them something 'better' (more expensive) than
    they currently have..

    But, in this case, if the info wasn't available, at least it didn't
    make something up.. B)

    ---
    þ SLMR Rob þ Anyone who can be bought isn't worth the price
    þ Synchronet þ CAPCITY2 * capcity2.synchro.net * Telnet/SSH:2022/Rlogin/HTTP
  • From jimmylogan@VERT/DIGDIST to Rob Mccart on Tue Nov 25 20:37:23 2025
    Rob Mccart wrote to JIMMYLOGAN <=-


    But, in this case, if the info wasn't available, at least it didn't
    make something up.. B)


    I've heard stories of people saying 'AI' made something up, but
    I've yet to run across that...



    ... Xerox Alto was the thing. Anything after we use is just a mere copy.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From phigan@VERT/TACOPRON to jimmylogan on Wed Nov 26 06:00:54 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Rob Mccart on Tue Nov 25 2025 08:37 pm

    I've heard stories of people saying 'AI' made something up, but
    I've yet to run across that...

    How much do you use AI? And, are you sure you haven't and just didn't notice?

    I don't use it aside from maybe sometimes reading what the search result AI thing says, and when I search for technical stuff I get bad info in that AI box at least half the time!

    Also, try asking your AI to give you an 11-word palindrome.

    ---
    þ Synchronet þ TIRED of waiting 2 hours for a taco? GO TO TACOPRONTO.bbs.io
  • From Dumas Walker@VERT/CAPCITY2 to jimmylogan on Wed Nov 26 08:39:22 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Rob Mccart on Tue Nov 25 2025 20:37:23

    But, in this case, if the info wasn't available, at least it didn't make something up.. B)

    I've heard stories of people saying 'AI' made something up, but
    I've yet to run across that...

    It happened here just this week. My garbage day is Thursday and, with that being a holiday, I wanted to see when the trash would be picked up. It has
    in the past always been on the following Monday but I wanted to check.

    Google Gemini looked it up and reported that my trash would be picked up on Friday. The link below the Gemini result was the official link from the city, which *very* clearly stated that it would be picked up on Monday.

    Not sure where Gemini got its answer, but it might as well have been made up! :D
    ---
    þ Synchronet þ CAPCITY2 * capcity2.synchro.net * Telnet/SSH:2022/Rlogin/HTTP
  • From Nightfox@VERT/DIGDIST to jimmylogan on Wed Nov 26 10:28:40 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Rob Mccart on Tue Nov 25 2025 08:37 pm

    I've heard stories of people saying 'AI' made something up, but I've yet to run across that...

    AI makes things up fairly frequently. It happens enough that they call it AI hallucinating.

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Rob Mccart@VERT/CAPCITY2 to JIMMYLOGAN on Thu Nov 27 09:18:49 2025
    But, in this case, if the info wasn't available, at least it didn't
    make something up.. B)


    I've heard stories of people saying 'AI' made something up, but
    I've yet to run across that...

    I've had an AI built into the browser give me wrong information when
    the correct information was not available, like once I asked for the
    selling price for a property and it came back saying the property
    sold for the full asking price, which turned out to be wrong.

    In another case its math was comical.. It was telling me what
    some gov't plan pays and the figures it came up with were something
    like $750 a month totalling $1800 a year..

    But my experience is probably less than most. I wouldn't use AI at
    all except that my main browser (probably most browsers these days)
    always gives me what their AI system thinks is the information I
    am looking for in a search. It is usually quite accurate, which is
    why I don't just ignore that window, but for important things I
    always double check what it tells me in some other way..

    ---
    þ SLMR Rob þ Poke the right nerve and the whole frog jumps
    þ Synchronet þ CAPCITY2 * capcity2.synchro.net * Telnet/SSH:2022/Rlogin/HTTP
  • From jimmylogan@VERT/DIGDIST to phigan on Tue Dec 2 11:15:44 2025
    phigan wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Rob Mccart on Tue Nov 25 2025 08:37 pm

    I've heard stories of people saying 'AI' made something up, but
    I've yet to run across that...

    How much do you use AI? And, are you sure you haven't and just didn't notice?

    Quite a bit actually! I use it for work - technical stuff - help with
    coding; help with looking up parts; etc.

    I use it for home - help with working on something at home or a recipe
    or hobby (It's helped me a TON with learning the laser cutter)

    I use it for writing - for my blog; for the funeral service I
    "preached" Saturday; for proofreading; for help summarizing or
    composing; for fast scripture lookups.

    And there's no way I would know what I have missed. :-) You
    don't know what you don't know - LOL. But that being said,
    it's been flat out WRONG before, but never insisted it was
    right if I corrected it. :-)

    I don't use it aside from maybe sometimes reading what the search
    result AI thing says, and when I search for technical stuff I get bad
    info in that AI box at least half the time!

    That is a basic AI, but I mainly use (and pay for) ChatGPT...

    Also, try asking your AI to give you an 11-word palindrome.

    Time saw raw emit level racecar level emit raw saw time.



    ... Joey, do you like movies about gladiators?
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Nightfox on Tue Dec 2 11:15:44 2025
    Nightfox wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Rob Mccart on Tue Nov 25 2025 08:37 pm

    I've heard stories of people saying 'AI' made something up, but I've yet to run across that...

    AI makes things up fairly frequently. It happens enough that they call
    it AI hallucinating.

    Yes, that's what I'm talking about. I've not experienced that -
    THAT I KNOW OF.


    ... Xerox Alto was the thing. Anything after we use is just a mere copy.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Dumas Walker on Tue Dec 2 11:15:44 2025
    Dumas Walker wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Rob Mccart on Tue Nov 25 2025 20:37:23

    But, in this case, if the info wasn't available, at least it didn't make something up.. B)

    I've heard stories of people saying 'AI' made something up, but
    I've yet to run across that...

    It happened here just this week. My garbage day is Thursday and, with that being a holiday, I wanted to see when the trash would be picked
    up. It has in the past always been on the following Monday but I
    wanted to check.

    Google Gemini looked it up and reported that my trash would be picked
    up on Friday. The link below the Gemini result was the official link
    from the city, which *very* clearly stated that it would be picked up
    on Monday.

    Not sure where Gemini got its answer, but it might as well have been
    made up! :D

    LOL - yep, an error. But is that actually a 'made up answer,' aka hallucinating?


    ... Got my tie caught in the fax... Suddenly I was in L.A.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Rob Mccart on Tue Dec 2 11:15:44 2025
    Rob Mccart wrote to JIMMYLOGAN <=-

    But, in this case, if the info wasn't available, at least it didn't
    make something up.. B)


    I've heard stories of people saying 'AI' made something up, but
    I've yet to run across that...

    I've had an AI built into the browser give me wrong information when
    the correct information was not available, like once I asked for the selling price for a property and it came back saying the property
    sold for the full asking price, which turned out to be wrong.

    In another case its math was comical.. It was telling me what
    some gov't plan pays and the figures it came up with were something
    like $750 a month totalling $1800 a year..

    But my experience is probably less than most. I wouldn't use AI at
    all except that my main browser (probably most browsers these days)
    always gives me what their AI system thinks is the information I
    am looking for in a search. It is usually quite accurate, which is
    why I don't just ignore that window, but for important things I
    always double check what it tells me in some other way..

    Yeah - to me, though, that's not 'making something up' - that's the
    same as doing a search and the search engine giving you something
    totally unrelated just because it is close.

    I run Car Wars games and a guy mentioned ADQ 7/1 and a design in
    it. So I googled it, and got stuff about different things that had
    adq and 7/1 in them, but not Autoduel Quarterly Volume Seven
    Issue One. :-)



    ... Cap'n - the spell checker kinna take this abuse.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Nightfox@VERT/DIGDIST to jimmylogan on Tue Dec 2 11:20:38 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Tue Dec 02 2025 11:15 am

    AI makes things up fairly frequently. It happens enough that they call
    it AI hallucinating.

    Yes, that's what I'm talking about. I've not experienced that - THAT I KNOW OF.

    One thing I've seen it quite a bit with is when asking ChatGPT to make a JavaScript function or something else about Synchronet.. ChatGPT doesn't know much about Synchronet, but it will go ahead and make up something it thinks will work with Synchronet, but might be very wrong. We had seen that quite a bit with the Chad Jipiti thing that was posting on Dove-Net a while ago.

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Nightfox on Tue Dec 2 11:45:50 2025
    Nightfox wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Tue Dec 02 2025 11:15 am

    AI makes things up fairly frequently. It happens enough that they call
    it AI hallucinating.

    Yes, that's what I'm talking about. I've not experienced that - THAT I KNOW OF.

    One thing I've seen it quite a bit with is when asking ChatGPT to make
    a JavaScript function or something else about Synchronet.. ChatGPT doesn't know much about Synchronet, but it will go ahead and make up something it thinks will work with Synchronet, but might be very wrong.
    We had seen that quite a bit with the Chad Jipiti thing that was
    posting on Dove-Net a while ago.

    Yeah, I've been given scripts or instructions that are outdated, or just
    flat out WRONG - but I don't think that's the same as AI hallucinations...
    :-)

    Maybe my definition of 'made up data' is different. :-)



    ... "Road work ahead" ... I sure hope it does!
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Nightfox@VERT/DIGDIST to jimmylogan on Tue Dec 2 12:49:42 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Tue Dec 02 2025 11:45 am

    One thing I've seen it quite a bit with is when asking ChatGPT to make a
    JavaScript function or something else about Synchronet.. ChatGPT doesn't
    know much about Synchronet, but it will go ahead and make up something it
    thinks will work with Synchronet, but might be very wrong. We had seen
    that quite a bit with the Chad Jipiti thing that was posting on Dove-Net
    a while ago.

    Yeah, I've been given scripts or instructions that are outdated, or just flat out WRONG - but I don't think that's the same as AI hallucinations... :-)

    Maybe my definition of 'made up data' is different. :-)

    What do you think an AI hallucination is?
    AI writing things that are wrong is the definition of AI hallucinations.

    https://www.ibm.com/think/topics/ai-hallucinations

    "AI hallucination is a phenomenon where, in a large language model (LLM) often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate."

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Nightfox on Wed Dec 3 07:57:33 2025
    Nightfox wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Tue Dec 02 2025 11:45 am

    One thing I've seen it quite a bit with is when asking ChatGPT to make a
    JavaScript function or something else about Synchronet.. ChatGPT doesn't
    know much about Synchronet, but it will go ahead and make up something it
    thinks will work with Synchronet, but might be very wrong. We had seen
    that quite a bit with the Chad Jipiti thing that was posting on Dove-Net
    a while ago.

    Yeah, I've been given scripts or instructions that are outdated, or just flat out WRONG - but I don't think that's the same as AI hallucinations... :-)

    Maybe my definition of 'made up data' is different. :-)

    What do you think an AI hallucination is?
    AI writing things that are wrong is the definition of AI
    hallucinations.

    https://www.ibm.com/think/topics/ai-hallucinations

    "AI hallucination is a phenomenon where, in a large language model
    (LLM) often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate."

    Outdated code or instructions are not 'nonsensical', just wrong.

    This is the first time I've seen the objective definition. I was told, paraphrasing here, 'AI will hallucinate - if it doesn't know the answer
    it will make something up and profess that it is true.'

    If you ask me a question and I give you an incorrect answer, but I
    believe that it is true, am I hallucinating? Or am I mistaken? Or is
    my information outdated?

    You see what I mean? Lots of words, but hard to nail it down. :-)



    ... Spilled spot remover on my dog. :(
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Nightfox@VERT/DIGDIST to jimmylogan on Wed Dec 3 08:54:25 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57 am

    This is the first time I've seen the objective definition. I was told, paraphrasing here, 'AI will hallucinate - if it doesn't know the answer it will make something up and profess that it is true.'

    If you ask me a question and I give you an incorrect answer, but I believe that it is true, am I hallucinating? Or am I mistaken? Or is my information outdated?

    You see what I mean? Lots of words, but hard to nail it down. :-)

    It sounds like you might be thinking too hard about it. For AI, the definition of "hallucinating" is simply what you said, making something up and professing that it's true. That's the definition of hallucinating that we have for our current AI systems; it's not about us. :)

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Nightfox on Wed Dec 3 09:02:12 2025
    Nightfox wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57 am

    This is the first time I've seen the objective definition. I was told, paraphrasing here, 'AI will hallucinate - if it doesn't know the answer it will make something up and profess that it is true.'

    If you ask me a question and I give you an incorrect answer, but I believe that it is true, am I hallucinating? Or am I mistaken? Or is my information outdated?

    You see what I mean? Lots of words, but hard to nail it down. :-)

    It sounds like you might be thinking too hard about it. For AI, the definition of "hallucinating" is simply what you said, making something
    up and professing that it's true. That's the definition of
    hallucinating that we have for our current AI systems; it's not about
    us. :)

    But again, is it 'making something up' if it is just mistaken?

    For example - I just asked about a particular code for homebrew on an
    older machine. I said 'gen2' and that means the 2nd generation of MacBook Air power adapter. It's just slang that *I* use, but I've used it with ChatGPT many times. BUT - the last code I was talking about was for an M2.

    So it gave me an 'M2' answer and not Intel, so I had to modify my request.
    It then gave me the specifics that I was looking for.

    So that's hallucinating?

    And please don't misunderstand... I'm not beating a dead horse here - at
    least not on purpose. I guess I don't see a 'problem' inherent with
    incorrect data, since it's just a tool and not a be-all, end-all thing.



    ... Basic programmers never die, they gosub and don't return
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Bob Worm@VERT/MAGNUMUK to jimmylogan on Wed Dec 3 17:47:11 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57:33

    Hi, jimmylogan.

    If you ask me a question and I give you an incorrect answer, but I
    believe that it is true, am I hallucinating? Or am I mistaken? Or is
    my information outdated?

    If a human is unsure they would say "I'm not sure", or "it's something like..." or "I think it's..." - possibly "I don't know".

    Our memories aren't perfect but it's unusual for us to assert with 100% confidence that something is correct when it's not. Apparently today's AI more-or-less always confidently asserts that (correct and incorrect) things are fact because during the training phase, confident answers get scored higher than wishy-washy ones. Show me the incentives and I'll show you the outcome.

    A colleague of mine asked ChatGPT to answer some technical questions so he could fill in basic parts of an RFI document before taking it to the technical teams for completion. He asked it what OS ran on a particular piece of kit - there are actually two correct options for that; it offered neither and instead confidently asserted that it was a third, totally incorrect, option. It's not about getting outdated code / config (even a human could do that if not "in the know") - but when it just makes up syntax or entire non-existent libraries, that's a different story.

    Just look at all the recent scandals around people filing court cases prepared by ChatGPT which refer to legal precedents where either the case was irrelevant to the point, didn't contain what ChatGPT said it did or didn't exist at all.

    BobW

    ---
    þ Synchronet þ >>> Magnum BBS <<< - magnumbbs.net
  • From Bob Worm@VERT/MAGNUMUK to jimmylogan on Wed Dec 3 17:55:56 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to phigan on Tue Dec 02 2025 11:15:44

    Also, try asking your AI to give you an 11-word palindrome.

    Time saw raw emit level racecar level emit raw saw time.

    I mean... those are 11 words... with a few duplicates... Which can't even be arranged into a palindrome because "saw" and "raw" don't have their corresponding "was" and "war" palindromic partners...

    A solid effort(?)

    BobW

    ---
    þ Synchronet þ >>> Magnum BBS <<< - magnumbbs.net
  • From Nightfox@VERT/DIGDIST to jimmylogan on Wed Dec 3 10:03:52 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 09:02 am

    But again, is it 'making something up' if it is just mistaken?

    In the case of AI, yes.

    For example - I just asked about a particular code for homebrew on an older machine. I said 'gen2' and that means the 2nd generation of MacBook Air power adapter. It's just slang that *I* use, but I've used it with ChatGPT many times. BUT - the last code I was talking about was for an M2.

    So it gave me an 'M2' answer and not Intel, so I had to modify my request. It then gave me the specifics that I was looking for.

    So that's hallucinating?

    Yes, in the case of AI.

    And please don't misunderstand... I'm not beating a dead horse here - at least not on purpose. I guess I don't see a 'problem' inherent with incorrect data, since it's just a tool and not a be-all, end-all thing.

    You don't see a problem with incorrect data? I've heard of people who are looking for work who are using AI tools to help update their resume, as well as tailor their resume to specific jobs. I've heard of cases where the AI tools will say the person has certain skills when they don't.. So you really need to be careful to review the output of AI tools so you can correct things. Sometimes people might share AI-generated content without being careful to check and correct things.

    So yes, it's a problem. People are using AI tools to generate content, and sometimes the content it generates is wrong. And whether or not it's "simply mistaken", "hallucination" is the definition given to AI doing that. It's as simple as that. I'm surprised you don't seem to see the issue with it.

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Bob Worm on Wed Dec 3 20:58:51 2025
    Bob Worm wrote to jimmylogan <=-

    If you ask me a question and I give you an incorrect answer, but I
    believe that it is true, am I hallucinating? Or am I mistaken? Or is
    my information outdated?

    If a human is unsure they would say "I'm not sure", or "it's something like..." or "I think it's..." - possibly "I don't know".

    Our memories aren't perfect but it's unusual for us to assert with 100% confidence that something is correct when it's not. Apparently today's
    AI more-or-less always confidently asserts that (correct and incorrect) things are fact because during the training phase, confident answers
    get scored higher than wishy-washy ones. Show me the incentives and
    I'll show you the outcome.

    LOL - yeah, I can see that...

    A colleague of mine asked ChatGPT to answer some technical questions so
    he could fill in basic parts of an RFI document before taking it to the technical teams for completion. He asked it what OS ran on a particular piece of kit - there are actually two correct options for that; it
    offered neither and instead confidently asserted that it was a third, totally incorrect, option. It's not about getting outdated code /
    config (even a human could do that if not "in the know") - but when it just makes up syntax or entire non-existent libraries that's a
    different story.

    But that 'third option' - you're saying it didn't 'find' that somewhere
    in a dataset, and just made it up?

    Just look at all the recent scandals around people filing court cases prepared by ChatGPT which refer to legal precedents where either the
    case was irrelevant to the point, didn't contain what ChatGPT said it
    did or didn't exist at all.

    I've not seen/read those. Assuming you have some links? :-)



    ... -- FOR SYSOP USE ONLY - Do not write below this line!!
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Nightfox on Wed Dec 3 20:58:51 2025
    Nightfox wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 09:02 am

    But again, is it 'making something up' if it is just mistaken?

    In the case of AI, yes.

    Gonna disagree with you there... If Wikipedia has some info that
    is wrong, and I quote it, I'm not making it up. If 'it' pulls
    from the same source, it's not making it up either.

    To say it's making up incorrect info is akin to saying that
    it should be accurate 100% of the time, even with bad input.

    For example - I just asked about a particular code for homebrew on an older machine. I said 'gen2' and that means the 2nd generation of MacBook Air power adapter. It's just slang that *I* use, but I've used it with ChatGPT many times. BUT - the last code I was talking about was for an M2.

    So it gave me an 'M2' answer and not Intel, so I had to modify my request. It then gave me the specifics that I was looking for.

    So that's hallucinating?

    Yes, in the case of AI.

    Again, gonna disagree. From what you, and others, are saying, if I ask
    what 2 + 2 is and it doesn't know it will answer 17.648, if it wants to.

    Yes, that's an extreme example, but I think it conveys my point. In my case, I had been talking about Homebrew on an M1 Mac yesterday, and today
    I was working with an Intel one. I said 'gen2' assuming it would remember
    that's what I call the older Intel machines, but it went by the last one
    we worked on yesterday.

    partial answer --> but on an M1, Homebrew usually uses group admin, so
    it's almost always safe.

    When I corrected "intel, not silicon" --> On Intel Macs, Homebrew lives in:

    And please don't misunderstand... I'm not beating a dead horse here - at least not on purpose. I guess I don't see a 'problem' inherent with incorrect data, since it's just a tool and not a be-all, end-all thing.

    You don't see a problem with incorrect data?

    Not to the point I'm not gonna use the tool. :-)

    I expect a spreadsheet to perform the calculations I program, but
    if I make a mistake in the programming, I expect it to fail.

    With LLMs, the dataset is so HUGE that there are a TON of variables!
    So there are potentially errors that can be made.

    But if I give it a list of numbers to add up, I've yet to see any
    kind of mistake. :-)

    Also - the 'fine print' always says that the data might not be
    correct, so be careful.

    I've heard of people who
    are looking for work who are using AI tools to help update their
    resume, as well as tailor their resume to specific jobs. I've heard of cases where the AI tools will say the person has certain skills when
    they don't.. So you really need to be careful to review the output of
    AI tools so you can correct things. Sometimes people might share AI-generated content without being careful to check and correct things.

    I'd like to see some data on that... Anecdotal 'evidence' is not always scientific proof. :-)

    So yes, it's a problem. People are using AI tools to generate content, and sometimes the content it generates is wrong. And whether or not
    it's "simply mistaken", "hallucination" is the definition given to AI doing that. It's as simple as that. I'm surprised you don't seem to
    see the issue with it.

    If that's the definition, then okay - a 'mistake' is technically a
    hallucination. Again, that won't prevent me from using it as the tool
    it is designed to be. I will also not take medical or legal advice. :-)


    ... WARNING!! Do NOT reuse tagline. Please dispose of it properly after use.
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From jimmylogan@VERT/DIGDIST to Bob Worm on Wed Dec 3 20:58:51 2025
    Bob Worm wrote to jimmylogan <=-

    Re: Re: ChatGPT Writing
    By: jimmylogan to phigan on Tue Dec 02 2025 11:15:44

    Also, try asking your AI to give you an 11-word palindrome.

    Time saw raw emit level racecar level emit raw saw time.

    I mean... those are 11 words... with a few duplicates... Which can't
    even be arranged into a palindrome because "saw" and "raw" don't have their corresponding "was" and "war" palindromic partners...

    A solid effort(?)

    I just asked for it, as you suggested. :-)

    That was the output.



    ... Southern DOS: Ya'll reckon? (Y)ep/(N)ope
    --- MultiMail/Mac v0.52
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Night_Spider@VERT to jimmylogan on Thu Dec 4 08:50:06 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 09:02 am

    At a guess I'd say that's because of how the AI determines facts, and how it determines paradox. Us humans come across a paradox and accept it or ignore it, but never try to solve it, unlike an AI, which usually isn't programmed to dismiss paradoxes instead of solving them - but the very existence of a paradox kinda breaks the framework.

    ---
    þ Synchronet þ Vertrauen þ Home of Synchronet þ [vert/cvs/bbs].synchro.net
  • From Mortar@VERT/EOTLBBS to jimmylogan on Thu Dec 4 11:57:46 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to phigan on Tue Dec 02 2025 11:15:44

    Also, try asking your AI to give you an 11-word palindrome.

    Time saw raw emit level racecar level emit raw saw time.

    Not a palindrome. The individual letters/numbers must read the same in both directions. Example: A man, a plan, a canal - Panama.
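
    A quick way to check is to strip everything but the letters and
    compare against the reverse. A minimal Python sketch of that test
    (the function name is mine, just for illustration):

        # Letter-level palindrome test: ignore case, spaces and punctuation.
        def is_palindrome(text):
            letters = [c.lower() for c in text if c.isalnum()]
            return letters == letters[::-1]

        print(is_palindrome("A man, a plan, a canal - Panama"))  # True
        print(is_palindrome("Time saw raw emit level racecar "
                            "level emit raw saw time"))          # False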

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From Mortar@VERT/EOTLBBS to jimmylogan on Thu Dec 4 12:07:09 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57:33

    If you ask me a question and I give you an incorrect answer, but I
    believe that it is true, am I hallucinating? Or am I mistaken? Or is
    my information outdated?

    If you are the receiver of the information, then no. It'd be like if I told you a dream I had - does that mean you experienced the dream?

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From Nightfox@VERT/DIGDIST to jimmylogan on Thu Dec 4 11:04:46 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Nightfox on Wed Dec 03 2025 08:58 pm

    But again, is it 'making something up' if it is just mistaken?

    In the case of AI, yes.

    Gonna disagree with you there... If wikipedia has some info that is wrong, and I quote it, I'm not making it up. If 'it' pulls from the same source, it's not making it up either.

    For AI, "hallucination" is the term used for AI providing false information and sometimes making things up - as in the link I provided earlier about this. It's not really up for debate. :)

    I've heard of people who
    are looking for work who are using AI tools to help update their resume,
    as well as tailor their resume to specific jobs. I've heard of cases
    where the AI tools will say the person has certain skills when they
    don't.. So you really need to be careful to review the output of AI
    tools so you can correct things. Sometimes people might share
    AI-generated content without being careful to check and correct things.

    I'd like to see some data on that... Anecdotal 'evidence' is not always scientific proof. :-)

    That seems like a strange thing to say.. I've heard about that from job seekers using AI tools, so of course it's anecdotal. I don't know what scientific proof you need to see that AI produces incorrect resumes for job seekers; we know that from job seekers who've said so. And you've said yourself that you've seen AI tools produce incorrect output.

    The job search thing isn't really scientific.. I'm currently looking for work, and I go to a weekly job search networking group meeting, and AI tools have come up there recently. Specifically, recently there was someone there talking about his use of AI tools to help customize his resume for different jobs & such, and he talked about needing to check the results of what AI produces, because sometimes AI tools will put skills & things on your resume that you don't have, so you have to make edits.

    If that's the definition, then okay - a 'mistake' is technically a hallucination. Again, that won't prevent me from using it as the tool it

    It's not a "technically" thing. "Hallucination" is simply the term used for AI producing false output.

    Nightfox

    ---
    þ Synchronet þ Digital Distortion: digitaldistortionbbs.com
  • From Mortar@VERT/EOTLBBS to Nightfox on Thu Dec 4 13:57:04 2025
    Re: Re: ChatGPT Writing
    By: Nightfox to jimmylogan on Wed Dec 03 2025 10:03:52

    Sometimes people might share AI-generated content without being careful to check and correct things.

    That's how AIds gets started.

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From Bob Worm@VERT/MAGNUMUK to jimmylogan on Thu Dec 4 22:14:55 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Bob Worm on Wed Dec 03 2025 20:58:51

    Hi, jimmylogan.

    But that 'third option' - you're saying it didn't 'find' that somewhere
    in a dataset, and just made it up?

    The third option was software that ran on a completely different product set. A reasonable analogy would be saying that an iPhone runs macOS.

    Just look at all the recent scandals around people filing court cases prepared by ChatGPT which refer to legal precedents where either the case was irrelevant to the point, didn't contain what ChatGPT said it did or didn't exist at all.

    I've not seen/read those. Assuming you have some links? :-)

    I guess you should be able to read this outside the UK: https://www.bbc.co.uk/news/world-us-canada-65735769

    Some others: https://www.legalcheek.com/2025/02/another-lawyer-faces-chatgpt-trouble/

    https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/

    https://www.theregister.com/2024/02/24/chatgpt_cuddy_legal_fees/

    It's enough of a problem that the London High Court ruled earlier this year that lawyers caught citing non-existent cases could face criminal charges. So I'm probably not hallucinating it :)

    BobW

    ---
    þ Synchronet þ >>> Magnum BBS <<< - magnumbbs.net
  • From Bob Worm@VERT/MAGNUMUK to jimmylogan on Thu Dec 4 22:23:25 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Bob Worm on Wed Dec 03 2025 20:58:51

    Hi, jimmylogan.

    I mean... those are 11 words... with a few duplicates... Which can't even be arranged into a palindrome because "saw" and "raw" don't have their corresponding "was" and "war" palindromic partners...

    I just asked for it, as you suggested. :-)

    I think it was Phigan who asked but yeah, I guessed that came from an LLM rather than a human :)

    Not that I use LLMs myself - if I ever want the experience of giving very clear instructions but getting a comically bad outcome I can always ask my teenage son to do something around the house :D

    BobW

    ---
    þ Synchronet þ >>> Magnum BBS <<< - magnumbbs.net
  • From Bob Worm@VERT/MAGNUMUK to Nightfox on Thu Dec 4 22:35:32 2025
    Re: Re: ChatGPT Writing
    By: Nightfox to jimmylogan on Thu Dec 04 2025 11:04:46

    Hi, Nightfox.

    If that's the definition, then okay - a 'mistake' is technically a hallucination. Again, that won't prevent me from using it as the tool it

    It's not a "technically" thing. "Hallucination" is simply the term used for AI producing false output.

    "Hallucination" sounds much less dramatic than "answering incorrectly" or "bullshitting", though.

    Ironically there was an article on The Register last year saying that savvy blackhat types had caught on to the fact that AI kept hallucinating non-existent libraries in the code it generated, so they created some of them, with a sprinkle of malware added in, naturally:

    https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/

    Scary stat from that article: "With GPT-4, 24.2 percent of question responses produced hallucinated packages". Wow.
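
    One cheap defence against that trick is to check that a package name
    is actually registered in the index before installing anything. A
    minimal Python sketch, assuming PyPI's public JSON endpoint
    (pypi.org/pypi/<name>/json); note it only proves the name exists,
    not that the package is safe:

        import urllib.error
        import urllib.request

        # True if the name is registered on PyPI (HTTP 200),
        # False if the index has never heard of it (HTTP 404).
        def package_exists(name):
            url = "https://pypi.org/pypi/%s/json" % name
            try:
                with urllib.request.urlopen(url) as resp:
                    return resp.status == 200
            except urllib.error.HTTPError:
                return False

        print(package_exists("requests"))  # a real, long-established package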

    BobW

    ---
    þ Synchronet þ >>> Magnum BBS <<< - magnumbbs.net
  • From Dumas Walker@VERT/CAPCITY2 to jimmylogan on Fri Dec 5 09:44:04 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Dumas Walker on Tue Dec 02 2025 11:15:44

    Google Gemini looked it up and reported that my trash would be picked up on Friday. The link below the Gemini result was the official link from the city, which *very* clearly stated that it would be picked up on Monday.

    Not sure where Gemini got its answer, but it might as well have been made up! :D

    LOL - yep, an error. But is that actually a 'made up answer,' aka hallucinating?

    Well, it didn't get it from any proper source so, as far as I know, it made it up! :D
    ---
    þ Synchronet þ CAPCITY2 * capcity2.synchro.net * Telnet/SSH:2022/Rlogin/HTTP
  • From phigan@VERT/TACOPRON to jimmylogan on Fri Dec 5 17:52:18 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to phigan on Tue Dec 02 2025 11:15 am

    it's been flat out WRONG before, but never insisted it was

    You were saying you'd never seen it make stuff up :). You certainly have.
    Just today I asked Gemini, in two different instances, how to do the same exact thing in some software. One time it gave instructions for one method, and the second time it said the first method wasn't possible with that software and a workaround was necessary.

    Time saw raw emit level racecar level emit raw saw time.

    Exactly, there it is again saying something is a palindrome when it isn't.

    Example of a palindrome:
    able was I ere I saw elba

    Not a palindrome:
    I palindrome I

    ---
    þ Synchronet þ TIRED of waiting 2 hours for a taco? GO TO TACOPRONTO.bbs.io
  • From phigan@VERT/TACOPRON to jimmylogan on Fri Dec 5 18:02:25 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Bob Worm on Wed Dec 03 2025 08:58 pm

    A solid effort(?)

    I just asked for it, as you suggested. :-)

    That was the output.

    I was the one who suggested it, insinuating that your AI couldn't do it, and you proved me right :).

    ---
    þ Synchronet þ TIRED of waiting 2 hours for a taco? GO TO TACOPRONTO.bbs.io
  • From poindexter FORTRAN@VERT/REALITY to Mortar on Fri Dec 5 17:28:19 2025
    Mortar wrote to jimmylogan <=-

    Time saw raw emit level racecar level emit raw saw time.

    Not a palindrome. The individual letters/numbers must read the same in both directions. Example: A man, a plan, a canal - Panama.

    An auto-palindrome?

    There was an XKCD comic about Wikipedia editors that illustrated a page
    for the word "Malamanteau" - https://xkcd.com/739/

    "A malamanteau is a neologism for a portmanteau created by incorrectly
    combining a malapropism with a neologism. It is itself a portmanteau
    of..."



    --- MultiMail/Win v0.52
    þ Synchronet þ .: realitycheckbbs.org :: scientia potentia est :.