Rob Mccart wrote to JIMMYLOGAN <=-
> making a decision or using critical thinking skills, it's just responding
My only experience with AI is that the first offers of info when doing
a search in a browser are almost always generated by an AI system
these days, and it is often a quick and correct answer if the problem
isn't too complex.
Also, if the first offered answer is questionable or obviously wrong,
I usually just carry on to the suggestions offered further down the page,
doing the search myself like we had to do before getting the magic AI
help, rather than giving it another crack at it.
Yeah, the example you give is actually what I'm referring to. It's not making a decision or using critical thinking skills, it's just responding.
Rob Mccart wrote to JIMMYLOGAN <=-
> It is still a programming thing. But I understand where you are coming from.
Yes, that's true for the most part, although I'm sure there are
AI systems that are smarter than that doing expensive jobs for
bigger users. A free Browser AI I'm sure is pretty basic..
I have a friend that uses AI for automated customer service responses.
I've yet to see an example of one that actually makes decisions.
I just got off of a site trying to get an answer to a simple
question and their AI system was totally unable to answer the
question but, in this case, it finally gave up and started telling
me how I could reach a live person to talk to..
So.. no intelligence there, artificial or otherwise.. B)
To show how simple.. my TV satellite provider advertised that they
offer about a 10% discount if you arrange for auto-payments, but
the general info says that goes for MOST packages.. I was just
asking if my package qualified.. That should be a straightforward
enough question for it to answer..
yep! But that also goes to show that they obviously didn't program that particular data...
Rob Mccart wrote to JIMMYLOGAN <=-
But, in this case, if the info wasn't available, at least it didn't
make something up.. B)
I've heard stories of people saying 'AI' made something up, but
I've yet to run across that...
phigan wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Rob Mccart on Tue Nov 25 2025 08:37 pm
I've heard stories of people saying 'AI' made something up, but
I've yet to run across that...
How much do you use AI? And, are you sure you haven't and just didn't notice?
I don't use it aside from maybe sometimes reading what the search
result AI thing says, and when I search for technical stuff I get bad
info in that AI box at least half the time!
Also, try asking your AI to give you an 11-word palindrome.
Nightfox wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Rob Mccart on Tue Nov 25 2025 08:37 pm
I've heard stories of people saying 'AI' made something up, but I've yet to run across that...
AI makes things up fairly frequently. It happens enough that they call
it AI hallucinating.
Dumas Walker wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Rob Mccart on Tue Nov 25 2025 20:37:23
But, in this case, if the info wasn't available, at least it didn't make something up.. B)
I've heard stories of people saying 'AI' made something up, but
I've yet to run across that...
It happened here just this week. My garbage day is Thursday and, with that being a holiday, I wanted to see when the trash would be picked
up. It has in the past always been on the following Monday but I
wanted to check.
Google Gemini looked it up and reported that my trash would be picked
up on Friday. The link below the Gemini result was the official link
from the city, which *very* clearly stated that it would be picked up
on Monday.
Not sure where Gemini got its answer, but it might as well have been
made up! :D
Rob Mccart wrote to JIMMYLOGAN <=-
But, in this case, if the info wasn't available, at least it didn't
make something up.. B)
I've heard stories of people saying 'AI' made something up, but
I've yet to run across that...
I've had an AI built into the browser give me wrong information when
the correct information was not available, like once I asked for the selling price for a property and it came back saying the property
sold for the full asking price, which turned out to be wrong.
In another case its math was comical.. It was telling me what
some gov't plan pays, and the figures it came up with were something
like $750 a month totalling $1800 a year..
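A quick sanity check of that arithmetic, in plain JavaScript (the
figures are from the post above; the variable name is made up):

  // Twelve monthly payments of $750 should total $9,000, not $1,800.
  var monthly = 750;
  console.log(monthly * 12);  // 9000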
But my experience is probably less than most. I wouldn't use AI at
all except that my main browser (probably most browsers these days)
always gives me what their AI system thinks is the information I
am looking for in a search. It is usually quite accurate, which is
why I don't just ignore that window, but for important things I
always double check what it tells me in some other way..
AI makes things up fairly frequently. It happens enough that they call
it AI hallucinating.
Yes, that's what I'm talking about. I've not experienced that - THAT I KNOW OF.
Nightfox wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Nightfox on Tue Dec 02 2025 11:15 am
AI makes things up fairly frequently. It happens enough that they call
it AI hallucinating.
Yes, that's what I'm talking about. I've not experienced that - THAT I KNOW OF.
One thing I've seen it quite a bit with is when asking ChatGPT to make
a JavaScript function or something else about Synchronet.. ChatGPT
doesn't know much about Synchronet, but it will go ahead and make up
something it thinks will work with Synchronet, but might be very wrong.
We had seen that quite a bit with the Chad Jipiti thing that was
posting on Dove-Net a while ago.
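A minimal sketch of that failure mode, in Synchronet-flavored
JavaScript: bbs.get_caller_stats() is deliberately invented here, the
kind of plausible-sounding method ChatGPT fabricates, while the bbs
and console globals and console.print() do exist in Synchronet's JS
object model:

  try {
      // Hypothetical method of the sort ChatGPT makes up -- it does not exist.
      var stats = bbs.get_caller_stats();
      console.print("Total calls: " + stats.total_calls + "\r\n");
  } catch (e) {
      // The script only finds out at runtime that the API was hallucinated.
      console.print("No such method: " + e + "\r\n");
  }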
One thing I've seen it quite a bit with is when asking ChatGPT to make a
JavaScript function or something else about Synchronet.. ChatGPT doesn't
know much about Synchronet, but it will go ahead and make up something it
thinks will work with Synchronet, but might be very wrong. We had seen
that quite a bit with the Chad Jipiti thing that was posting on Dove-Net
a while ago.
Yeah, I've been given script or instructions that are outdated, or just flat out WRONG - but I don't think that's the same as AI hallucinations... :-)
Maybe my definition of 'made up data' is different. :-)
Nightfox wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Nightfox on Tue Dec 02 2025 11:45 am
One thing I've seen it quite a bit with is when asking ChatGPT to make a
JavaScript function or something else about Synchronet.. ChatGPT doesn't
know much about Synchronet, but it will go ahead and make up something it
thinks will work with Synchronet, but might be very wrong. We had seen
that quite a bit with the Chad Jipiti thing that was posting on Dove-Net
a while ago.
Yeah, I've been given script or instructions that are outdated, or just flat out WRONG - but I don't think that's the same as AI hallucinations... :-)
Maybe my definition of 'made up data' is different. :-)
What do you think an AI hallucination is?
AI writing things that are wrong is the definition of AI
hallucinations.
https://www.ibm.com/think/topics/ai-hallucinations
"AI hallucination is a phenomenon where, in a large language model
(LLM) often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate."
This is the first time I've seen the objective definition. I was told, paraphrasing here, 'AI will hallucinate - if it doesn't know the answer it will make something up and profess that it is true.'
If you ask me a question and I give you an incorrect answer, but I believe that it is true, am I hallucinating? Or am I mistaken? Or is my information outdated?
You see what I mean? Lots of words, but hard to nail it down. :-)
Nightfox wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Nightfox on Wed Dec 03 2025 07:57 am
This is the first time I've seen the objective definition. I was told, paraphrasing here, 'AI will hallucinate - if it doesn't know the answer it will make something up and profess that it is true.'
If you ask me a question and I give you an incorrect answer, but I believe that it is true, am I hallucinating? Or am I mistaken? Or is my information outdated?
You see what I mean? Lots of words, but hard to nail it down. :-)
It sounds like you might be thinking too hard about it. For AI, the definition of "hallucinating" is simply what you said, making something
up and professing that it's true. That's the definition of
hallucinating that we have for our current AI systems; it's not about
us. :)
If you ask me a question and I give you an incorrect answer, but I
believe that it is true, am I hallucinating? Or am I mistaken? Or is
my information outdated?
Also, try asking your AI to give you an 11-word palindrome.
Time saw raw emit level racecar level emit raw saw time.
But again, is it 'making something up' if it is just mistaken?
For example - I just asked about a particular code for homebrew on an older machine. I said 'gen2' and that means the 2nd generation of MacBook Air power adapter. It's just a slang that *I* use, but I've used it with ChatGPT many times. BUT - the last code I was talking about was for an M2.
So it gave me an 'M2' answer and not Intel, so I had to modify my request. It then gave me the specifics that I was looking for.
So that's hallucinating?
And please don't misunderstand... I'm not beating a dead horse here - at least not on purpose. I guess I don't see a 'problem' inherent with incorrect data, since it's just a tool and not a be all - end all thing.
Bob Worm wrote to jimmylogan <=-
If you ask me a question and I give you an incorrect answer, but I
believe that it is true, am I hallucinating? Or am I mistaken? Or is
my information outdated?
If a human is unsure they would say "I'm not sure", or "it's something like..." or "I think it's..." - possibly "I don't know".
Our memories aren't perfect, but it's unusual for us to assert with 100% confidence that something is correct when it's not. Apparently today's
AI more-or-less always confidently asserts that things (correct and incorrect) are fact because during the training phase, confident answers
get scored higher than wishy-washy ones. Show me the incentives and
I'll show you the outcome.
A colleague of mine asked ChatGPT to answer some technical questions so
he could fill in basic parts of an RFI document before taking it to the technical teams for completion. He asked it what OS ran on a particular piece of kit - there are actually two correct options for that; it
offered neither and instead confidently asserted a third, totally incorrect, option. It's not about getting outdated code /
config (even a human could do that if not "in the know") - but when it just makes up syntax or entire non-existent libraries, that's a
different story.
Just look at all the recent scandals around people filing court cases prepared by ChatGPT which refer to legal precedents where either the
case was irrelevant to the point, didn't contain what ChatGPT said it
did or didn't exist at all.
Nightfox wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to Nightfox on Wed Dec 03 2025 09:02 am
But again, is it 'making something up' if it is just mistaken?
In the case of AI, yes.
For example - I just asked about a particular code for homebrew on an older machine. I said 'gen2' and that means the 2nd generation of MacBook Air power adapter. It's just a slang that *I* use, but I've used it with ChatGPT many times. BUT - the last code I was talking about was for an M2.
So it gave me an 'M2' answer and not Intel, so I had to modify my request. It then gave me the specifics that I was looking for.
So that's hallucinating?
Yes, in the case of AI.
And please don't misunderstand... I'm not beating a dead horse here - at least not on purpose. I guess I don't see a 'problem' inherent with incorrect data, since it's just a tool and not a be all - end all thing.
You don't see a problem with incorrect data?
I've heard of people who
are looking for work who are using AI tools to help update their
resume, as well as tailor their resume to specific jobs. I've heard of cases where the AI tools will say the person has certain skills when
they don't.. So you really need to be careful to review the output of
AI tools so you can correct things. Sometimes people might share AI-generated content without being careful to check and correct things.
So yes, it's a problem. People are using AI tools to generate content, and sometimes the content they generate is wrong. And whether or not
it's "simply mistaken", "hallucination" is the definition given to AI doing that. It's as simple as that. I'm surprised you don't seem to
see the issue with it.
Bob Worm wrote to jimmylogan <=-
Re: Re: ChatGPT Writing
By: jimmylogan to phigan on Tue Dec 02 2025 11:15:44
Also, try asking your AI to give you an 11-word palindrome.
Time saw raw emit level racecar level emit raw saw time.
I mean... those are 11 words... with a few duplicates... Which can't
even be arranged into a palindrome because "saw" and "raw" don't have their corresponding "was" and "war" palindromic partners...
A solid effort(?)
Also, try asking your AI to give you an 11-word palindrome.
Time saw raw emit level racecar level emit raw saw time.
If you ask me a question and I give you an incorrect answer, but I
believe that it is true, am I hallucinating? Or am I mistaken? Or is
my information outdated?
But again, is it 'making something up' if it is just mistaken?
In the case of AI, yes.
Gonna disagree with you there... If Wikipedia has some info that is wrong, and I quote it, I'm not making it up. If 'it' pulls from the same source, it's not making it up either.
I've heard of people who
are looking for work who are using AI tools to help update their resume,
as well as tailor their resume to specific jobs. I've heard of cases
where the AI tools will say the person has certain skills when they
don't.. So you really need to be careful to review the output of AI
tools so you can correct things. Sometimes people might share
AI-generated content without being careful to check and correct things.
I'd like to see some data on that... Anecdotal 'evidence' is not always scientific proof. :-)
If that's the definition, then okay - a 'mistake' is technically a hallucination. Again, that won't prevent me from using it as the tool it is.
But that 'third option' - you're saying it didn't 'find' that somewhere
in a dataset, and just made it up?
Just look at all the recent scandals around people filing court cases prepared by ChatGPT which refer to legal precedents where either the case was irrelevant to the point, didn't contain what ChatGPT said it did or didn't exist at all.
I've not seen/read those. Assuming you have some links? :-)
I mean... those are 11 words... with a few duplicates... Which can't even be arranged into a palindrome because "saw" and "raw" don't have their corresponding "was" and "war" palindromic partners...
I just asked for it, as you suggested. :-)
If that's the definition, then okay - a 'mistake' is technically a hallucination. Again, that won't prevent me from using it as the tool it is.
It's not a "technically" thing. "Hallucination" is simply the term used for AI producing false output.
Google Gemini looked it up and reported that my trash would be picked up on Friday. The link below the Gemini result was the official link from the city, which *very* clearly stated that it would be picked up on Monday.
Not sure where Gemini got its answer, but it might as well have been made up! :D
LOL - yep, an error. But is that actually a 'made up answer,' aka hallucinating?
it's been flat out WRONG before, but never insisted it was right.
Time saw raw emit level racecar level emit raw saw time.
A solid effort(?)
I just asked for it, as you suggested. :-)
That was the output.
Mortar wrote to jimmylogan <=-
Time saw raw emit level racecar level emit raw saw time.
Not a palindrome. The individual letters/numbers must read the same in both directions. Example: A man, a plan, a canal - Panama.
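A minimal letter-level check in plain JavaScript (function name mine)
bears both posts out: Mortar's example passes, the AI's eleven words
don't:

  // True if the letters and digits, ignoring case, spacing and
  // punctuation, read the same forwards and backwards.
  function isPalindrome(text) {
      var letters = text.toLowerCase().replace(/[^a-z0-9]/g, "");
      return letters === letters.split("").reverse().join("");
  }

  isPalindrome("A man, a plan, a canal - Panama");  // true
  isPalindrome("Time saw raw emit level racecar level emit raw saw time");  // false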