In a significant move, the President has issued an executive order setting the stage for the governance of generative artificial intelligence technologies, acting ahead of any formal legislation from Congress.
The directive outlines eight primary objectives: establishing new standards for AI safety and security; safeguarding individual privacy; advancing equity and civil rights; defending the interests of consumers, patients, and students; supporting workers; promoting innovation and competition; strengthening American leadership in AI; and ensuring the responsible and effective use of AI in government operations.
A range of federal departments has been charged with establishing benchmarks to guard against AI's potential misuse in biotechnology, devising reliable practices for authenticating digital content, and strengthening cybersecurity measures.
The National Institute of Standards and Technology (NIST) is tasked with developing protocols for rigorously testing AI systems before their public deployment. The Department of Energy and the Department of Homeland Security are instructed to address AI's potential threats to critical infrastructure and to manage risks in the chemical, biological, radiological, and nuclear domains, as well as cybersecurity concerns. Developers of large AI models, such as those built by OpenAI and Meta, are required to disclose the results of safety evaluations.
A senior administration official, speaking on condition of anonymity, provided insights during a briefing:
"The stance is not to retract AI models that are already accessible to the public," the official said. "However, these models will be subject to existing anti-discrimination regulations."
To further ensure the privacy of citizens, the administration is advocating for Congress to enact data privacy laws and is supporting the development of technologies that preserve privacy.
The directive includes measures to combat AI-driven discrimination, particularly in legal sentencing, parole decisions, and surveillance. It compels federal agencies to draft guidelines for landlords, federal benefit programs, and federal contracting to prevent AI from intensifying discriminatory practices.
Agencies have also been instructed to consider AI's implications for employment and to compile a report on its impact on the job market. The administration aims to bring more professionals into the AI sector and has announced the creation of a National AI Research Resource to provide essential data to students and AI researchers, along with technical support for smaller enterprises. It has also directed the swift recruitment of AI experts into government roles.
Prior to this, the administration had unveiled an 'AI Bill of Rights,' a set of guidelines for AI development. These guidelines were later translated into a series of agreements between the White House and major AI companies, including Meta, Google, OpenAI, Nvidia, and Adobe.
Nonetheless, an executive order is not an enduring statute and typically lasts only through the current presidency. While Congress continues to deliberate on AI regulation, some legislators have stressed the urgency of enacting AI-related laws before the end of the year.
Experts in the field view the executive order as a meaningful step toward establishing norms for generative AI.
Navrina Singh, the founder of Credo AI and a member of the National Artificial Intelligence Advisory Committee, said the executive order is a clear signal of the United States' commitment to the responsible stewardship of generative AI technologies.
"This interim step is prudent as we cannot anticipate initial policies to be flawless while legislative discussions are underway," Singh commented. "This truly underscores the prioritization of AI by the federal government."