Last Updated:
Google Bard & Person Holding Phone

Why Big Tech’s Gamble on AI Assistants Could Prove Risky

Today, leading tech companies have been unable to solve persisting issues with artificial intelligence language models that just won’t go away. 

Ever since the dawn of the boom in generative AI technology, tech companies across the globe have been working tirelessly to deliver an epic app to complement the technology. To begin with, it was online search, which had varied results. 

Next, it was digital assistants powered by AI or AI assistants. In late September 2023, several leading tech companies, namely Google, Meta, and OpenAI, unveiled several new features for their AI chatbots. They can now mimic a personal assistant by searching the web for users to retrieve the quickest, most accurate, and most relevant results. 

ChatGPT

In ChatGPT, for example, from OpenAI, the new feature enables users to converse with that chatbot like you would in a telephone call, to receive instant replies to the questions you ask in the form of a humanlike voice, albeit synthetic. It can scour the internet and trawl huge volumes of pages to find the results you’re looking for. 

Over at Google, the new feature will enable people to use the Bard chatbot (similar to ChatGPT), which is plugged into most of the Google products, such as YouTube, Maps, Docs, and Gmail, to ask questions regarding their own content. 

In other words, users can ask Bard to search for things by searching through emails or organizing their calendars. Bard can also pull up instant results from Google to answer any questions you ask. A similar thing is happening at Meta. 

The company recently announced that it will be able to use a wide range of chatbots (including celebrity AI avatars and regular AI chatbots) to bring up search results from Bing based on the questions you require answers for. This service/feature will soon be available in Messenger, Instagram, and WhatsApp. 

Some say that given the limitations of the technology, it’s one big gamble by the tech companies involved. They still haven’t solved some of the nagging problems plaguing AI language models, such as their tendency to fabricate answers or mislead in one way or another by giving inaccurate information. 

One of the other things that has been highlighted is that AI language models still have many safety and security flaws. The technology could impact countless people by enabling AI models to access personal data and other sensitive information found in private messages, calendars, and emails. 

Introducing this kind of technology would no doubt open up a can of worms and lead to a massive increase in cybercrimes being committed, including successful phishing scams on an unprecedented scale and other cyberattacks. 

We are not here to discuss some of today’s biggest security issues with AI language models. However, knowing that digital assistants powered by various AI models will have access to highly-prized sensitive data while being able to surf the internet at the same time, the technology becomes vulnerable to a certain type of cyberattack known as indirect prompt injection (meaning a vulnerability in large language models, LLMs).

A cybercriminal or bad actor, whatever you want to call them, can easily carry out such an attack, and there’s no way to remedy it. When this kind of attack is carried out, a third party is capable of inserting hidden text on a website page to alter it, which is designed to change the behavior of artificial intelligence. Cybercriminals can use emails or social media to send those victims to the websites with secret hidden text. 

When this occurs, the AI-powered tool could be tricked into letting the cybercriminal easily extract things like credit or debit card details. The opportunities for cybercriminals to carry out their crimes would be limitless if this new generation of AI models the tech companies are talking about integrating into emails and social media, for example, is fully implemented. 

When it comes to artificial intelligence’s tendency to fabricate things (also known as hallucinations), Google highlighted that Bard would be released only as an experiment and that users could easily cross-reference the answers it retrieves by searching on Google to see if the answers match. 

When fact-checking in this way and finding conflicting results, users can tap or click the thumbs-down button to give feedback. Doing so will help Bard improve by learning where it went wrong. However, this puts a burden on the user to identify issues.

Google pointed out that it has yet to solve the prompt injection issue. However, it employs other systems, such as spam filters, to pinpoint and extract any cyber threats it detects. The company also conducts red team exercises and adversarial testing to discover how bad actors might attack products built on language models. 

With all this in mind, whenever a new kind of digital technology is released, there are generally a number of bugs that need fixing. Even those who were most supportive of AI language model products in the early days have yet to be overly impressed by the current situation. 

One such example was that of a New York Times columnist, Kevin Roose, who pointed out that while the Google assistant was good at summarizing emails, it informed him about ones he didn’t even have in his inbox. 

Bard

Tech companies need to prove that the technology can be predictable, reliable, safe, and secure; otherwise, people won’t want to use it. To ensure cybercriminals cannot use the technology against them, leaders in the field mustn’t get complacent, especially when it concerns things like personal data. Nobody wants to have a cybercriminal trawling through their emails because of weak flaws in the AI tools they insist on delivering so we can all use them.