I’ve been playing around with ChatGPT for the last few days. Not for general lulz, but to see if it might actually be usable for businesses or organizations.
My expectations were low. It’s exceeded them, but - with one exception (see below) - it still doesn’t meet the bar to tackle the use cases I was hoping it could address.
The Limits of ChatGPT (and Machine Learning in general)
Here are some of the key limitations I see with ChatGPT so far:
1. It’s derivative.
This is by definition and design.
It can only generate responses based on its training data.
Therefore, it can only generate responses that are derivative of content other people have created.
2. It’s limited in time coverage.
ChatGPT explicitly says it doesn’t have much information past 2021.
This is also inherent to the design. The training set will always have some limits.
This limitation can be reduced via more expansive technology (continuous training and such).
But the scope of the data set (the whole flaming internet) means it will likely never be completely responsive to current events.
At a minimum, it will be a long time before it’s anywhere near as responsive to current events as we humans are.
3. It’s non-specific.
ChatGPT returns results generalized from all the data in its training set.
If you ask it how to do something in a specific instance of some thing, it will return results generalized from how similar things are done across all instances of similar things.
And as a side note, how “similar things” may be defined is a very opaque part of the model. (It may well be opaque even to ChatGPT’s designers.)
For example, if you give it two nearly identical prompts along the lines of “explain how to do X in personal finance product Y” and “explain how to do X in personal finance product Z”, the results of both prompts are often:
Almost exactly the same for both products.
Wrong for both products.
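You can reproduce this yourself. Below is a minimal sketch using the official `openai` Python client; the client usage, model name, and prompts are my assumptions for illustration, with placeholder product names standing in for real products:

```python
# A minimal sketch of the "non-specific" failure mode, assuming the official
# `openai` Python package (v1+) and an OPENAI_API_KEY in the environment.
# "Product Y" and "Product Z" stand in for two real personal finance products.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; substitute whatever you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Two prompts that differ only in the product name.
answer_y = ask("Explain how to set up a recurring transfer in personal finance product Y.")
answer_z = ask("Explain how to set up a recurring transfer in personal finance product Z.")

print("--- Product Y ---\n" + answer_y + "\n")
print("--- Product Z ---\n" + answer_z)
# If the limitation described above holds, the two answers will read almost
# interchangeably, and neither will match either product's actual UI.
```

If the limitation holds, the two answers will read nearly interchangeably.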
4. It can’t analyze anything.
ChatGPT can only reproduce analysis that was already done by one or more humans.
This is a special case of #1, but it’s important to understand.
It bases its responses on the quantity of data it has on a particular topic.
This means that even when it returns analyses other people have done, it’s going to return the most common (and thus likely the shallowest) analysis.
Algorithms to surface unusual or original analyses are possible.
But that would require a specialized algorithm (created by a human) for just that purpose (i.e. not general AI, or even general ML).
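To make the “most common analysis wins” point concrete, here’s a toy sketch in plain Python. It’s a statistical caricature, not how ChatGPT actually works internally: it just shows that frequency-proportional sampling buries rare takes, while a purpose-built inverse-frequency reweighting (the kind of specialized, human-designed algorithm mentioned above) surfaces them:

```python
# A toy statistical caricature (NOT how ChatGPT works internally) of why
# frequency-weighted generation favors the most common analysis, and why
# surfacing rare analyses takes a deliberate, human-designed reweighting.
import random
from collections import Counter

# Hypothetical corpus: 90 copies of the common take, 9 of a deeper one,
# and a single genuinely original one.
corpus = ["common take"] * 90 + ["deeper take"] * 9 + ["original take"]
counts = Counter(corpus)
analyses = list(counts)

# Sampling in proportion to frequency: the common take dominates.
by_frequency = random.choices(analyses, weights=[counts[a] for a in analyses], k=10)
print("frequency-weighted:", Counter(by_frequency))

# A purpose-built inverse-frequency reweighting surfaces the rare takes.
by_rarity = random.choices(analyses, weights=[1 / counts[a] for a in analyses], k=10)
print("inverse-weighted:  ", Counter(by_rarity))
```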
It’s important to note that all of these limitations apply to all ML (Machine Learning) systems to some extent, not just ChatGPT.
No, AI bots aren’t coming for your job
Don’t get me wrong, ChatGPT does a whole huge bucket of things really, really well. And a first pass at ChatGPT returns some pretty impressive results. They could even seem a little frightening.
But the limitations listed above are pretty hefty. And that’s not even a complete list. And since most of these limitations are inherent to the nature of ML, they’re not going away.
So, people who are afraid that AI will become smarter than humans in the near future are fear-mongering themselves into premature cardiac arrest. We aren’t even close to stupid general AI - despite tens of thousands of scientists and billions of dollars chasing that goal for many decades.
In other words, AI that can outthink even the simplest of us is not likely to happen for decades. (My back-of-the-napkin calculations suggest no sooner than 2100 to even match human intelligence - but that’s a whole different discussion).
In the same vein, people who are worried that ChatGPT or its ilk will replace human writers or coders are similarly premature in their doom-saying.
If you doubt this, try this exercise.
Create prompts to ask ChatGPT to write an action scene for you in the voice of Tom Clancy.
Play with the prompts until you get something you think is good.
Go open a book written by the real Tom Clancy, find an action scene, and read it.
Convince me you can’t tell the difference.
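If you want to script the prompt-tinkering part of that exercise, a minimal harness might look like this (again assuming the official `openai` Python client and an API key; the prompt variants are just illustrative starting points):

```python
# A minimal harness for the prompt-tinkering steps, assuming the official
# `openai` Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

variants = [
    "Write a 300-word action scene in the voice of Tom Clancy.",
    "Write a 300-word submarine combat scene in the voice of Tom Clancy, "
    "heavy on accurate technical detail.",
    "Write a tense 300-word special-ops raid the way Tom Clancy would, "
    "complete with military jargon and hardware specifics.",
]

for i, prompt in enumerate(variants, start=1):
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"=== Variant {i} ===\n{reply.choices[0].message.content}\n")
```

Then do the reading step with the real book and judge for yourself.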
Good writers spend significant time and energy finding story lines and action sequences that no one has seen before. It’s part of the job, and writers who don’t do that work generate trope-filled blandly generic content (which doesn’t sit well with audiences). By its very nature, ChatGPT simply can’t do that.
Unless, of course…
However, we’re much closer to ML replacing human writers and coders than we are to general AI.
What ChatGPT and AI coding tools will do is replace human beings doing derivative work. Low-level copywriters and merely-adequate programmers have cause to be worried.
This is really easy to demonstrate. Ask ChatGPT to generate a blog article on almost any technical topic on the internet, and it does a bang-up job. In fact, I was able to get it to produce significantly better copy for my own product than one of my long-term paid copywriters; it took me less than 5 minutes.
So, ChatGPT isn’t coming for my copywriter’s job. The (maybe unsettling) reality is that ChatGPT already took it from him.
No, I’m not just a cold bastard; we all have to face reality. As it turns out, I write much better copy than my copywriter; the reason I hire a copywriter is to save myself the time of doing that work.
To put it another way, I’m not replacing my copywriter, I’m replacing me. And if I can get a machine to do that better than a human - and do it for free (or close) - I’d be a fool to keep paying the human.
This is why low-skilled copywriters and coders need to be worried. If you’re less skilled than an AI, you probably haven’t invested enough time in honing your skills. Realize you’re in an arms race, and start learning in-demand skills an AI can’t perform, or you’ll lose that race.
The Bigger Risks
There are bigger risks of this kind of AI. The biggest I see right now is that it will dumb down the humans we need to do the truly original work.
If someone with 10 years’ programming experience ends up writing 80% of their code with an AI, how much actual coding experience do they really have? And what happens when we need them to write an actual, original piece of code which - since it’s original - a coding AI simply can’t create?
The difference in code quality and reliability between someone with 2 years’ experience and someone with 10 years’ experience is dramatic. If someone writes 80% of their code with an AI, will they ever accumulate enough actual programming experience to match the coding skill of a modern-day programmer with 10 years’ experience?
This isn’t purely speculative; the same thing has happened almost everywhere automation has been widely adopted. For example, go find yourself someone who can build a chair for you from scratch - from carving the legs, back, pins, and other parts, to assembling them, to finishing the wood. A hundred years ago, that person would have been easy to find; but now that automation has replaced most of the need for those skills, it’s hard.
So, if you want a cookie-cutter chair, the stuff factories produce now may actually be better than the average chairs available 100 years ago. But if you want a chair that’s different in any way from the mass-produced molds, that’s much harder to get.
So, what happens to our society if/when truly new software and truly original writing is no longer available from millions of skilled practitioners, but becomes only practiced by a small number of artisans?
The End
Darn, I’m long-winded.