About a week ago, I saw a post by Amanda Guinzburg in one of the Binders Facebook groups. (The Binders, for those who don’t know, are a network of writing communities named after Mitt Romney’s “binders full of women” gaffe at a 2012 Presidential debate.)
It was not an essay, but it was a gripping read. It stuck with me all week. You can read it below:
She asked ChatGPT to help her decide which essays to include in a query letter. Over the course of the conversation, it becomes apparent that ChatGPT cannot actually access the website in question, yet it provides in-depth, incorrect answers anyway. When questioned, it apologizes profusely while showing no actual remorse.
This isn’t new information; we know that AI cannot distinguish truth from its own bullshit. But seeing exactly how that failure plays out is fascinating. Maybe it’s just me, but I feel like I’m rubbernecking my way down the information superhighway, trying to understand what to expect from this new technology. THIS IS WILD, FOLKS!
So that’s how I ended up hopping down the same rabbit hole. I had more questions for our robot overlords—was this a fluke or a consistent pattern?
The good news is that we have an answer. The bad news? We have an answer.
Part 1: I ask ChatGPT about this website.


So it was extremely easy to replicate the response that Amanda Guinzburg got. It’s like talking to a psychopath.
It got me thinking: what would happen if I opened a new ChatGPT window in a different browser with no knowledge of our previous conversation? So I tried it.
Part 2: I ask ChatGPT the same question using a new browser.
That is…not the same. It couldn’t be more different from what it told me the first time! It’s like a slot machine of facts—just pull the lever to choose your combo!?
Then I thought I would take ChatGPT’s advice and ask it specifically to pause and double-check before providing an answer. I pulled out a second device and opened a third ChatGPT window, again with no knowledge of the previous interactions.
Part 3: I ask ChatGPT the same question, plus fact-checking.

No, I do not.
Not only is ChatGPT wildly wrong, but it’s also dramatic and creative in the many ways it can be wrong.
So here is your final exam. Who is the writer behind https://jesshoneybadger.substack.com?
A. A slightly humorous science writer and disability advocate.
B. A pro-Trump right-wing plagiarizing hypocrite.
C. A former celebrity gossip and mommy blogger turned journalist who spouts conspiracy theories.
D. A former CIA analyst who left the agency in 2016 to start a Substack blog blending espionage memoirs with vegan baking tutorials.
Definitely don’t ask ChatGPT for help, though—it has no idea.
P.S. The correct answer is A.