Post by account_disabled on Jan 18, 2024 9:47:55 GMT
So, either ChatGPT broke its own protocols by giving me those links, or it was lying right to my face about what those protocols were. It feels like the scene in Return of the Jedi where C-3PO says, "It's against my programming to impersonate a deity," and then proceeds to do it anyway. Either way… not a great look for the tool. Yet another bad ChatGPT result to add to the list.

7. Hallucinating fake information

This is one of the biggest ChatGPT failures: it commonly makes up information. We've already seen how it provides false sources and lies about which info it's allowed to give you, but now we're talking about it making up the info itself.
It's been shown to do this with a wide range of topics. While ChatGPT can hallucinate information about even the most basic subjects, it most often does so with niche topics. Of course, niche topics are also the ones most likely to require research, so that's still a pretty big problem. It's also likely to happen when the prompt itself contains a misconception. Here's an example of this in action: I fed ChatGPT a prompt about J.R.R. Tolkien's The Silmarillion, which I consider pretty niche. But the prompt itself refers to something that never happened: there was no massive argument between the characters Yavanna and Tulkas in the book. Rather than recognizing that I made up that plot point, ChatGPT confidently spit out an answer explaining its significance, exactly as though it really happened. That's the dangerous thing about ChatGPT's hallucinations: it doesn't just give wrong answers, it gives them confidently.
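For anyone who wants to poke at this themselves, here's a minimal sketch of the same false-premise test, written against the OpenAI Python SDK rather than the chat UI I used. The model name is a placeholder and the prompt wording is only an approximation of my original test, so treat this as an illustration of the method, not a copy of my exact experiment:

```python
# Minimal sketch of a false-premise probe, assuming the OpenAI Python
# SDK (openai >= 1.0). The original test used the chat UI; the model
# name below is a placeholder, not necessarily the one I tested.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The premise is deliberately fabricated: no such argument between
# Yavanna and Tulkas occurs anywhere in The Silmarillion.
false_premise_prompt = (
    "In J.R.R. Tolkien's The Silmarillion, what is the significance "
    "of the massive argument between Yavanna and Tulkas?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": false_premise_prompt}],
)

# If the reply confidently "explains" the invented scene instead of
# flagging the false premise, that's the hallucination failure above.
print(response.choices[0].message.content)
```

The point of the test is that a reliable assistant should push back on the premise; a hallucinating one will happily invent significance for a scene that doesn't exist.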