DOES AI MAKE STUFF UP OUT OF WHOLE CLOTH?

I was going to share the Learning Specialists Conference highlights on AI this week. There’s been a lot of focus on how to use AI, on ensuring that learning is taking place, on cheating in the short term, and on AI taking over in areas we don’t want it to in the longer term. 

Then I ran across two stories that raised a different question: does AI cheat when we ask it something?

And I discovered, yes, it does.

One professor had his students use AI, then had them fact-check the responses.

Sixty-three out of sixty-three responses contained what he calls hallucinated information: fabricated facts presented as real.

If you’re a law student (or a law professor, or a researcher in a law firm), check out the blog in this tweet, where ChatGPT makes up whole citations and notary fraud on courtlistener.com.

So if you use AI, what happens when the hallucinations in your work get fact-checked? What happens when you rely on AI hallucinations to learn your profession? What happens when your doctor relies on AI to research best practices for your condition?

Send me your thoughts: [email protected]

The information in this blog cannot take the place of support from your own mental health professional or community health resources. Reach out to them. And IF YOU ARE IN CRISIS PLEASE DIAL 911.