Since machines first learned to talk, in the form of chatbots, humans have been whispering about the inevitable demise of the humble programmer. And since that inception, machines have learned to talk incredibly well — well enough to fool (some) people into thinking that they are indeed alive and ready to start a 9-to-5 job.
ChatGPT is one of those machines. Or at least, a version of those machines. Announced in November of 2022, ChatGPT is a natural language interface built on the well-trained GPT-3.5 model, designed to mimic human conversation with its user. And it has become incredibly popular: it amassed over 1 million users within the first 5 days of launch and has even been discussed by the likes of Joe Rogan on his recent podcast.
Needless to say, I was one of those 1 million users. And after having spent a fair number of hours talking to the model about code and life, I can say that I was left rather impressed by the potential that I saw.
But I’m not fearful of losing my job just yet. In fact, far from it.
What it can do
GPT-3 itself is a language model with billions of parameters, a massive leap from its predecessor GPT-2, and it is designed to essentially hold natural conversation. ChatGPT is the framework that facilitates that communication through a chat-like interface, and it is open to the general public, so you can take it for a test run here.
Usually, learning what a chatbot is capable of requires a lot of trial and error. And it was no different with ChatGPT. In the beginning, I assumed that it could answer any and all questions about life; however, I quickly learned a few of its limitations by asking it simple questions.
I’ll start off with the capabilities that I personally saw firsthand, because I mainly focused on its code-writing skills and not much else. Questions such as the following still yield perfectly normal answers:
- What is 2 + 2?
- Who is Elon Musk?
- How much wood can a woodchuck chuck? etc.