AI Text and Image-Generating Algorithms Could Lead to a Rise in Disinformation Campaigns
According to recent studies, the boom in generative Artificial Intelligence (AI) text- and image-generating algorithms could lead to a steep rise in disinformation campaigns and bring about a new international arms race. Left unchecked, the situation could rapidly spiral out of control.
Governments around the world are racing to incorporate the AI algorithms that power tools like ChatGPT into powerful new instruments of misinformation, which could trigger an arms race between the world's superpowers. The same technology, however, can also be put to positive uses by governments, not only harmful ones.
Researchers at RAND, a nonprofit think tank that advises the US government, even pointed out that a researcher for the Chinese military, known to have experience in information campaigns, has suggested how Large Language Models (LLMs) could be a useful tool in this particular field.
Many are alarmed at the new scale and power AI could bring to disinformation campaigns run by governments worldwide, although there is currently no evidence that the technology is being used this way. It could easily be used to create untold numbers of fake accounts that push a state's own narratives and agendas.
In previous years, when this kind of activity was used to interfere in foreign affairs, humans were still needed to run the disinformation campaigns. Powerful new AI algorithms that can generate images, text, and even video, however, would eliminate the need for thousands of human workers toiling away at their computers.
The technology can now produce convincing content without human intervention and post it across multiple fake social media accounts. Running a disinformation campaign with an AI-powered system would cost a fraction of what it takes to employ thousands of people to do the same job.
Because this technology is accessible to everyone, governments worldwide will certainly look to use and exploit it, and no doubt some already are. Another study found that whichever country masters the technology and uses it most effectively can expect a boost in economic prosperity, new military capabilities, and cultural influence, particularly if it is put to good use.
Used with bad intentions, however, it could leave the World Wide Web overrun with AI bots programmed to cause havoc through misinformation warfare. Avoiding that outcome would require human beings to interact directly by talking to one another.
The Special Competitive Studies Project (SCSP) has suggested that the United States government should pave the way forward by promoting transparency in this field. The aim would be to foster a sense of trust and hopefully encourage nations to collaborate on nurturing this technology for the benefit of everyone.
Generative AI carries many risks, and to avoid a thoroughly polluted internet, leaders in the field should openly discuss how nations ought to proceed with this technology. It must be a collaborative effort that everyone agrees upon, one that puts the technology to work for the betterment of society rather than for spreading disinformation that forces agendas and narratives upon people.