
European Union to Let 'Responsible' AI Startup Companies Train Models on its Supercomputers

Aleks P

The European Union (EU) recently proposed a new strategy to expand access to its high-performance computing (HPC) supercomputers, allowing startup companies deemed 'responsible' to use the resource to train artificial intelligence (AI) models.

There is one catch, however: any company looking to gain access to the European Union's HPC resources, which consist of petascale and pre-exascale supercomputers, must also sign up to the union's program on the governance of artificial intelligence.

Back in May, the EU also announced a stopgap measure: a set of voluntary standards and rules aimed at the industry developing and deploying artificial intelligence, to apply while more formal rules and regulations are worked out. The stated goal is to prepare organizations to comply with established AI rules within the next few years.

The EU, one of the most outward-oriented economies in the world, also has the Artificial Intelligence Act (AI Act) in the works. This risk-based legal framework for regulating applied artificial intelligence is currently being negotiated by the union's key co-legislators. Once finalized at some point over the coming years, it is likely to be embraced as a standard.

Additionally, the EU has begun working with the United States and other nations on an AI code of conduct, aiming to close the gaps in international legislation on AI governance while other countries continue to draft their own regulatory rules. For startups, one of the main draws of the new strategy is access to HPC resources.

An official Commission spokesperson recently confirmed the initiative, explaining that the intention is to expand the current policy, which already lets firms in the industry access the supercomputers through a proposals process called the EuroHPC Access Calls, by adding a new scheme that grants EU supercomputer capacity to responsible and ethical startups in the field of artificial intelligence.

European Commission President Ursula von der Leyen first announced the high-performance computing access initiative in this year's State of the Union address in mid-September 2023.

Warning about the extinction-level risk to humanity

Part of the State of the Union address focused on the possibility that AI-powered technology could pose an extinction-level threat to life as we know it, highlighting that AI is moving far faster than people in the field anticipated and could get out of hand. Development must be handled correctly to ensure the technology never moves beyond our control to the point where humanity is at risk.

Von der Leyen also took time to point out the many sectors that would benefit from artificial intelligence systems and tools, stating that some of the main improvements would come in addressing climate change, healthcare, and boosting productivity in general.

She also noted that the very real threats posed by rogue artificial intelligence are being closely monitored and discussed by experts, academics, and other AI developers and researchers in the field. Mitigating AI's extinction risk to humankind, she said, should be a global priority alongside other global-scale threats such as nuclear war and pandemics.

She also voiced her support for the European Union's recent efforts to pass comprehensive new laws on the governance of artificial intelligence, and hinted at forming a new body along the lines of the Intergovernmental Panel on Climate Change (IPCC), which would back global policymakers by arming them with the latest research and keeping them up to date on the science concerning the risks posed by artificial intelligence.

Von der Leyen believes a similar body for artificial intelligence is necessary to weigh up the benefits and potential risks that come with AI. It should be made up of independent experts, tech companies, and scientists from the field, enabling a rapid and internationally coordinated response and expanding existing efforts such as the G7 Hiroshima Process.

The Commission president's calls for more safety surrounding AI will remain a major focus, with the aim of closely monitoring and reducing the dangers stemming from automated systems, including liability issues, disinformation and discrimination, problems with bias, physical safety, and various other concerns.

One of those that welcomed the EU leadership's involvement in combating the existential threat artificial intelligence poses to humanity was Conjecture, a startup based in London, England, that focuses on AI safety.

The company called it an important step in the right direction that von der Leyen openly acknowledged that artificial intelligence, if left unchecked, could potentially be a threat to everyone on the planet. Even the CEOs of several leading companies that have developed the world's foremost AI models have admitted that AI poses such a threat.

With this in mind, there should be no concerted effort to pit nations against each other in a race to gain an edge in artificial intelligence over their rivals; that kind of behavior would only increase the risks AI poses.

The European Union's push for 'responsible' artificial intelligence

During the address, von der Leyen discussed guiding innovation, emphasizing the plan to expand startups' access to the union's high-performance computing supercomputers for model training, with more guidance on this to follow.

The European Union currently operates eight supercomputers hosted at research institutions across the continent. One of the pre-exascale machines, Leonardo, is located in Italy; another pre-exascale supercomputer, MareNostrum 5, is in Spain; and a third, LUMI, is in Finland.

According to current plans, two even more powerful exascale supercomputers are in the works: Jules Verne, to be sited in France, and Jupiter, in Germany.

Over the past few years, a great deal of investment has seen the European Union become a world leader in supercomputing capabilities, with three of the world's five biggest supercomputers located on the continent. Von der Leyen said that now is the right time to build on this with the new initiative that would widen access to the union's HPC (high-performance computing) supercomputers by letting 'responsible' startup companies use the resource to train AI models.

This will only be a small part of the union's efforts to help guide innovation. The EU hopes to expand further by entering into talks with more companies developing and deploying artificial intelligence systems and tools. Something similar is already happening in the US, where several leading AI tech companies have agreed to voluntary rules surrounding trust, security, and safety.

The Commission will continue to work closely with companies specializing in artificial intelligence so that they adhere to the principles set out by the AI Act of their own accord before the act comes into effect. Together, these efforts should help create a global standard for the safer, more ethical use of AI.

Under the previously mentioned access calls process, public administrations, industry, and scientific institutes already have access to the EuroHPC supercomputers. The current system requires them to apply and explain why they need extremely large allocations of compute time, data storage, and support resources, and how they plan to use them.

The EuroHPC JU (Joint Undertaking) system will also need to be tweaked accordingly to give AI startups and SMEs quicker access to these computers. 

The ethical standard already used to assess Horizon research projects' right to use EuroHPC supercomputers could similarly serve as a standard set of practices for candidates seeking easier access to high-performance computers under an artificial intelligence program.

Following the Commission president's announcement, the European Union's internal market commissioner, Thierry Breton, wrote in a LinkedIn blog post that the EU will launch the EU AI Startup Initiative to leverage one of the continent's main assets: its public high-performance computing infrastructure.

Access to the continent's current supercomputing infrastructure will help early-stage companies cut the time it takes to train their AI models from months or years down to weeks or days. It will also help them step up the role they play in safe and responsible artificial intelligence, in line with the values set out by the European Union, which could in turn help foster innovation in the field of AI.

Breton also mentioned a boost to AI research via the Horizon Europe research program and the European Partnership on AI, Data and Robotics, as well as the development of regulatory sandboxes under the AI Act once it comes into effect. It remains to be seen just how significant an advantage the EU initiative will give the selected startups by supplying HPC capacity for training AI models.

However, it shows a clear attempt by the European Union to use a resource that is currently in high demand to foster innovation in a way that aligns directly with EU values.

The drive for better AI governance

Breton also discussed supercharging the drive for more inclusive governance of artificial intelligence. He said everyone in the AI field should be involved, from consumers and policymakers to academic experts and NGOs, and from small startup companies to the major players and any other businesses using artificial intelligence across the EU's industrial ecosystems. All of these stakeholders will convene to discuss the matter at the European AI Alliance Assembly in November.

With this announcement, the UK government, which is also trying to establish itself as a global leader in artificial intelligence safety by organizing a similar AI summit around the same time, now looks set to face some regional competition.

At the time of writing, it's unclear who from the AI field is likely to show up at the UK AI Summit. The UK's effort did, however, receive quick and exuberant support from several leaders in AI, including a pledge of priority or early access to leading models for UK AI safety research from Anthropic, OpenAI, and Google DeepMind, made just a short time after meetings between the prime minister of the United Kingdom and the companies' CEOs.

Some have even read Breton's comment about involving everyone in AI governance, from consumers and policymakers to academic experts and NGOs, and from small startups to the major players and other businesses using AI across the EU's industrial ecosystems, as a jab at the UK's Big-Tech-backed approach.

The Commission launched the European AI Alliance as an online discussion forum in 2018, and it has since also carried out various workshops and in-person meetings. Since its inception, the union claims it has brought together thousands of stakeholders in an ongoing, open policy dialogue on artificial intelligence.

The High-Level Expert Group on AI stemmed from this forum and had a major impact on the Commission's policymaking when it outlined the AI Act. Although the AI Alliance's assembly has been meeting since 2019, the group hasn't convened for over 24 months, so Commissioner Breton deemed it necessary to reconvene it once more.

The European AI Alliance Assembly in November will be held at a crucial moment in the adoption process for the AI Act. Particular attention will be paid to the wider effort to foster excellence and trust in artificial intelligence, as well as the implementation of the AI Act and the AI Pact.