Researchers say that the model behind the chatbot fabricated a convincing-looking bogus database, but a forensic examination shows it doesn’t pass for authentic.
ChatGPT says whatever you ask it to say. More at 11.
It’s just modeling humans. I was only a lab TA for two semesters, and I caught so many fake data sets.
So… Software designed to make things up made something up when asked to make something up? Okay…
There was someone on the radio the other day talking about doing research with it for their show. They started by asking a simple math question, which it got wrong, and the conversation devolved from there: when asked whether a Nobel medal had ever been brought to space, ChatGPT started inventing anecdotes, and it ended up saying it didn’t know why it kept inventing anecdotes instead of finding reliable info!
All that to say, it doesn’t know what is and isn’t reliable information, so it builds answers based on what it interprets you might want to read.
Automation tool does automation.