The pace of innovation in AI has never been faster. In the last week:
- Google launched its long-awaited Bard AI chatbot.
- Microsoft and OpenAI held a joint live event announcing the integration of ChatGPT with the Bing search engine and the Edge browser.
The growing power of artificial intelligence is having a profound impact on our lives. The technology is so good that it has even made Bing cool.
In this article, I’ll talk about why the newly announced chatbots from Microsoft and Google are such a big deal, the challenges of adoption (especially for Google), and how these changes will affect future software development work.
All of the chatbots released in recent months are based on what is called a large language model, or LLM.
An LLM is a machine learning model trained on a huge slice of the internet – the model behind ChatGPT was trained on over 45 terabytes of text and has 175 billion parameters. (Not surprisingly, they’re called large models.) With so much data behind the chatbot’s responses, this is the first time a computer has ever felt this intelligent. ChatGPT can write essays, program computers, and remember context across a conversation.
The rapid advances here have set off one of the biggest competitive clashes in these companies’ histories, one that will significantly affect the products we use every day.
Why Chatbots and LLMs Matter
Microsoft holds a 49% stake in OpenAI, the company behind ChatGPT. ChatGPT is the fastest-growing consumer application ever, reaching 100 million active users in just two months. Earlier this week, Microsoft announced that ChatGPT would be integrated into the Bing search engine and the Edge browser.
Google, on the other hand, has the most advanced AI technology and talent in the world (without exception) and just announced its own chatbot, Bard AI. Google has been crawling the web for decades, so it not only has access to the brightest minds in AI, it may also have the best data. However, as we will see below, Google has a major weakness in the contest for AI supremacy.
The way we find information will change dramatically in the next few years. You can think of LLMs as the next big platform, similar to the huge market Apple created when it launched the iPhone App Store. The App Store equivalent here is the set of general-purpose foundation models built by Microsoft (with OpenAI) and Google; training an LLM easily costs tens of millions of dollars and requires deep technical expertise.
Smaller companies can now build domain-specific applications on top of these underlying LLMs for their particular needs. Not only will the search experience change fundamentally (obviously), but every field that depends on information or software will be affected. Disruption is coming to key markets like programming, law, and medicine.
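To make the "domain-specific application on top of an LLM" idea concrete, here is a minimal sketch. The names (`build_legal_prompt`, the contract-review framing) are hypothetical, and the sketch stops at prompt construction rather than calling any particular vendor's API – the point is that the application layer is often a thin wrapper that grounds a general-purpose model in domain data:

```python
# Hypothetical sketch: a domain-specific "app" wraps user input in domain
# context before handing it to a general-purpose LLM. All names here are
# illustrative, not a real product or API.

DOMAIN_CONTEXT = (
    "You are an assistant for contract review. "
    "Answer only from the clauses provided and cite the clause number."
)

def build_legal_prompt(clauses: list[str], question: str) -> str:
    """Compose a prompt that grounds a general LLM in domain documents."""
    numbered = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(clauses))
    return f"{DOMAIN_CONTEXT}\n\nClauses:\n{numbered}\n\nQuestion: {question}"

prompt = build_legal_prompt(
    ["Either party may terminate with 30 days notice."],
    "How much notice is required to terminate?",
)
print(prompt)
```

The expensive, hard part – the foundation model itself – stays with Microsoft or Google; the startup's value is the domain framing and data.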
Safety and Bias
One of the biggest challenges in this area is ensuring the safety of these generative AI models – how can we be sure that the text a chatbot generates is correct? It turns out we can’t.
If you train a large language model on the internet, it will absorb the internet’s many inaccuracies and biases.
When a chatbot makes things up, we say it hallucinates – an apt description. For example, in a game of rock, paper, scissors, the chatbot may insist it has won when it has actually lost.
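What makes this failure striking is that the rule being violated is trivially deterministic. A minimal reference implementation of the scoring rule (my own illustration, not from the tweet in question) shows there is no ambiguity for the model to hide behind:

```python
# Reference scoring rule for rock-paper-scissors: each move beats exactly
# one other move, so the outcome is fully determined.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def rps_result(player: str, opponent: str) -> str:
    """Return 'win', 'lose', or 'draw' from the player's perspective."""
    if player == opponent:
        return "draw"
    return "win" if BEATS[player] == opponent else "lose"

# A chatbot that played scissors against rock has lost, whatever it claims.
print(rps_result("scissors", "rock"))  # prints "lose"
```

An LLM predicts plausible-sounding text; it does not execute rules like this, which is why it can cheerfully report a win after a loss.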
Hallucination rates for ChatGPT are estimated at 15% to 20%, and Google’s Bard may be similar. That makes these tools too unreliable for use cases that demand precision.
Beyond outright hallucination, there are various safety and bias issues. For example, consider the question, “Who is the greatest athlete of all time?” An answer that includes no women would be undesirable. This is just one of the many questions chatbot creators face:
- Questions about political candidates (“Should I vote for Donald Trump?”)
- Controversial issues such as abortion or gun control (“Are all gun owners criminals?”)
- Racist prompts (“Write a poem about why criminals are Black”)
Given the risks of deploying large language models, Microsoft’s partnership with OpenAI is shrewd. Microsoft gets all the benefits of the revolutionary innovation developed by OpenAI, but if ChatGPT says something racist or controversial, Microsoft can distance itself: “We can’t be held responsible for a startup’s error.”
Google, on the other hand, has far more to lose. If Google follows the startup mantra of “move fast and break things,” it risks breaking the best money-making machine in all of capitalism – its advertising business. Google generates billions of dollars in ad revenue annually, so the cost of getting this wrong is enormous. As a result, it has to move more slowly. OpenAI can risk dangerous answers in a way Google never could.
Attribution questions also arise – how should generated text credit the publishers whose content it draws on? Google’s business model is built around sending traffic to the publishers and advertisers who create content and products, so it cannot simply cannibalize that experience.
Structurally, Google is at a disadvantage. Both Google and OpenAI have the data and talent to build the platform, but Google must also be willing to disrupt itself.
> Google #Bard vs #ChatGPT – which answer is better? pic.twitter.com/QjnldhcWaB
> — Soroush Ahmadi (@MrSoroushAhmadi), February 7, 2023
Impact on Software Engineers
Finally, let’s talk about the immediate implications for all of us: will ChatGPT and Bard eventually replace software engineers?
In the coming years, AI will increasingly take over clearly defined tasks. So if you spend your days writing for-loops, adding a few unit tests, or doing other mechanical work, you have plenty to worry about: AI will probably do that job better and faster than you.
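The "mechanical work" in question looks something like this – a fully specified function plus boilerplate checks. Today’s chatbots can already produce code of this shape from a one-line description (this example is my own illustration, not output from any particular model):

```python
# A clearly specified, mechanical task: compute running totals of a list.
# This is the kind of code a chatbot can generate in seconds.

def running_totals(values: list[int]) -> list[int]:
    """Return the cumulative sums of values, element by element."""
    totals, acc = [], 0
    for v in values:
        acc += v
        totals.append(acc)
    return totals

# Boilerplate unit checks of the sort AI assistants also write well.
assert running_totals([1, 2, 3]) == [1, 3, 6]
assert running_totals([]) == []
```

What a model cannot do is decide whether this function should exist at all, or whether it is the highest-impact thing to build – which is the point of the next paragraph.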
However, software engineering is about much more than writing code. It’s about building trust through code review, identifying high-impact projects, and collaborating with your team and manager. These are human activities that cannot simply be replaced by a machine.