The Limits of ChatGPT and Why You Shouldn’t Be Afraid of AI

For Sarah, thanks for being a dedicated listener and fan of Top Tech Stories of the Week!

I assure you this article wasn’t written by ChatGPT.

ChatGPT has gotten a lot of hype in a very short span of time. Teachers are decrying the AI for destroying ‘the research paper’, while students the world over cheer. Lazy content writers are using the AI to write material primed by the hard work of humans. My mother, God bless her, even my mother is wondering if ChatGPT can fix her Netflix.

While many folks are losing their minds over ChatGPT, some of us are taking a more nuanced view of the technology and exploring ways to improve the human experience.

What is ChatGPT?

According to ChatGPT’s creators at OpenAI, ChatGPT is a language model that interacts with text in a conversational way. Essentially, you send ChatGPT a phrase or a question as a prompt. After you’ve prompted ChatGPT, the model responds in conversational text, the same way you would if I asked:

Got any ideas for a date night meal?

And ChatGPT responds:

Not Bad, Right?

We could go on and on like this indefinitely. ChatGPT accepts a prompt and regurgitates answers.

How Does ChatGPT Work?

ChatGPT is a machine learning model. In very, very simple terms, a machine learning model is fed a lot of data, and that data is processed using mathematical calculations to compute the likelihood a prompted set of words and phrases will generate a responding set of words and phrases.

This mathematical processing to compute these likelihoods, or probabilities, takes lots and lots of data and lots and lots of time. How much data?

300 billion words.

If I were to ask you (prompt):

How are you doing?

There’s a very high likelihood (probability) you’d respond with:

I’m doing alright.

You could also respond with:

  • I’m feeling fine.
  • I’m okay.
  • It’s cool.
  • Meh

ChatGPT generates a phrase using the highest probability, and that probability is highest because a lot of people’s writing in the training data contained these responses.
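To make that concrete, here’s a minimal sketch of the “pick the highest-probability response” idea. The list of responses is made up for illustration; real models work over billions of examples and score word-by-word rather than whole phrases.

```python
from collections import Counter

# Hypothetical training data: responses people have given to the same prompt.
observed_responses = [
    "I'm doing alright.", "I'm doing alright.", "I'm doing alright.",
    "I'm feeling fine.", "I'm okay.", "It's cool.", "Meh",
]

counts = Counter(observed_responses)
total = sum(counts.values())

# Each response's probability is just its share of the data.
probabilities = {resp: n / total for resp, n in counts.items()}

# The model "generates" by picking the highest-probability response.
best = max(probabilities, key=probabilities.get)
print(best)  # → I'm doing alright.
```

Because “I’m doing alright.” shows up most often in the data, it wins.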

The whole system of choosing prompts and computing responses is called training. When you hear AI designers or computer people talk about training a model, this is what they are doing. There are lots of ways to train a model, and I won’t get into them here. Just know training requires a combination of matrix math, simple math, and calculus.
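Here’s where the calculus comes in. A toy version of training: nudge a single parameter, step by step, in the direction that makes it fit the data better. Real models do exactly this, just over billions of parameters at once using matrix math.

```python
# Toy "training": one parameter p, nudged by the gradient of a loss.
# 1 = people answered "I'm doing alright.", 0 = they said something else.
data = [1, 1, 1, 0, 1]

p = 0.5   # initial guess at the probability
lr = 0.1  # learning rate: how big each nudge is

for _ in range(1000):
    # Gradient of the mean squared error between p and each observation.
    grad = sum(2 * (p - y) for y in data) / len(data)
    p -= lr * grad  # step downhill

print(round(p, 2))  # converges to the data's average: 0.8
```

After enough steps, p settles at 0.8, the fraction of people who answered that way. That is, in miniature, what training does.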

Back to Date Night Meal Suggestions

I could change my mind and ask ChatGPT for some vegetarian options. Let’s see what they say. (Is ChatGPT a they?)

Noice!

Not too shabby, huh? ChatGPT came through and gave me some meatless options. It even decided to skip all of the Beyond and Impossible fake meat. This is really clever, because ChatGPT is tracking my conversation and is using my previous prompts as part of building follow-on responses.
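The trick behind that conversation tracking is simple to sketch: every turn gets appended to a running transcript, and the whole transcript is fed back in with each new prompt. The `generate_reply` function below is a hypothetical stand-in for the actual model, not a real API.

```python
# Sketch of chat context: the model never "remembers" — it just re-reads
# the full transcript each turn. `generate_reply` is a placeholder.
def generate_reply(transcript):
    return "Here are some ideas..."  # a real model would respond from context

conversation = []

def ask(prompt):
    conversation.append({"role": "user", "content": prompt})
    reply = generate_reply(conversation)  # model sees ALL prior turns
    conversation.append({"role": "assistant", "content": reply})
    return reply

ask("Got any ideas for a date night meal?")
ask("What about vegetarian options?")  # builds on the first prompt
print(len(conversation))  # → 4: two prompts plus two replies
```

That’s why a follow-up like “some vegetarian options” works: the original date-night question is still sitting in the transcript the model reads.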

I was even able to get more specific in my asking by requesting some lighter fare. I didn’t just ask ChatGPT to specify a red sauce or an herb sauce, I asked it for something light, and it responded with some decent choices.

But How Did It Know What You Meant?

ChatGPT knows what I meant by light because, when the folks at OpenAI trained ChatGPT, the training data included lots of food articles relevant to my date night meal question. The writers of those articles used titles like: ‘Date Night Meal Ideas’, ‘Vegetarian Date Night Meal Ideas’, ‘Light Date Night Meals You Would Love’.

These articles included the names of dishes and colorful reviews indicating if they were light, heavy, delicious, tasty, and even romantic. ChatGPT used all of this information to mathematically figure out the most probable answers.

But take a look at what I’m asking for here. This is all factual information. Any old person who won’t let laziness get in the way can find this information on the Internet. What I’m really after is someone to recommend a meal that would be good. I want to take away the risk of choosing something I won’t like. Let’s see what happens if I ask for a recommendation.

Don’t Peek Behind the Curtain

And this is where the Civ and English teachers shouldn’t fret. ChatGPT falls back on its disclaimer of actually being a machine and not a human. It has no taste buds and has never been on a date. And ChatGPT just shoved those raggedy vegetable skewers on me. That was the first selection in the list! Lazy!

ChatGPT totally punted and didn’t take a risk.

The Limits of ChatGPT (and Other AIs for That Matter)

Provenance and trust are areas of my AI research, and this is where the current state of the art falls ridiculously short.

ChatGPT doesn’t know anything, it doesn’t question, it can’t decide, it can’t make meaningful recommendations…ChatGPT can’t even verify facts.

“It just runs programs.”

ChatGPT can only take information as input and return information as output. And that information is only as good as its sources and veracity. Its provenance.

Teachers assigning research papers need to rethink the rubric. Instead of assigning a paper on World War II to evaluate whether the student can communicate facts looked up in an encyclopedia or book, the assignment should force the student to think about the impacts of WW2 on present-day generations, how to prevent such atrocities from happening again, or how the stressors of WW2 influenced British culture. Themes require humanism: intuition, application of current culture, and some introspection. The assignments might be tougher to grade and admittedly lean more into the gray area, but you can’t ask ChatGPT for an insightful opinion. You can ask a kid for it. These assignments will be tougher for the students too.

Let Machines Do What Machines Do Best, Let Humans Do What Humans Do Best

Any fool can go look up information and restructure that same information in a paper. We’ve spent many decades training human beings to behave like machines. Our current education system is fixated on rote mechanics and training our kids to perform some function versus trying to think through the needs and challenges in our world. We’ve been treating people like cogs in the works, instead of capitalizing on the weird and mushy aspects of humanity.

I’d be doing you a disservice if I said you shouldn’t be concerned about AI. Blind adoption of AI technology would be dangerous for society. Letting the machines just go off on their own wouldn’t end well. There are a lot of laborious tasks machines can simply do better than we can, and honestly, I think that’s a good application of the technology. The machines should be making our lives easier, instead of the way things have been going.

AI is going to force us to be more creative and work harder at making new connections than ever before.

No new advance in technology has actually made life easier. The tech just helped us get more done in fewer hours, and our stress levels went up.

We could deploy these brave new technologies to flip that narrative, but we’ll need to think it through and make sure everyone benefits. Life could really be beautiful and full of meaning if we wanted it. For those looking to utilize these technologies, I only wish they would find meaningful ways of using them For the Benefit of All Mankind instead of sucking more money out of our wallets.

BONUS: I Ask ChatGPT for an Idea

There are a lot of hacks out there using ChatGPT to regenerate other people’s content, or to prime themselves for new ideas. Using the AI as a pump for new ideas is actually a decent use, but regenerating other people’s work is just intellectual laziness.

I’ve written a couple of books, and a comic. Right now, I’m working on a short fiction story around my favorite topics of psyche, time, and quantum. I asked ChatGPT for an idea around my story’s central location:

Meh

The damned thing sent me back a cliché. Of course the psychological aspects are going to be in the story, but a madman? An unknown force? More specific, huh? Rubbish.

If you’re using ChatGPT, please share what kind of mileage you’re getting out of it. If you think AI is the death of humanity, we’d love to know that as well.