
Chief AI officers, bias testing now required at US federal agencies

Agencies will be required to implement 'concrete safeguards' by Dec. 1



AI (Artificial Intelligence) letters placed on a computer motherboard in this illustration.
Image Credit: REUTERS

The White House will require federal agencies to test artificial intelligence tools for potential risks and designate officers to ensure oversight, actions intended to encourage responsible adoption of the emerging technology by the US government.

The Office of Management and Budget issued a government-wide policy Thursday to mitigate the threats posed by AI, including discrimination and privacy violations, and to increase transparency over how the government uses the technology, building on an executive order signed by President Joe Biden last year.

Agencies will be required to implement "concrete safeguards" by Dec. 1 when they use AI in ways that could affect Americans' rights or safety, according to a White House fact sheet. Chief AI officers will be appointed at federal agencies to coordinate the use of artificial intelligence across government and ensure implementation of OMB's guidance.

The safeguards will ensure agencies can give travelers the ability to opt out of Transportation Security Administration facial recognition without delays at airports, better prevent bias and disparities in health care and allow human oversight when using AI to root out fraud in government services, according to the fact sheet.

"President Biden and I intend that these domestic policies will serve as a model for global action," Vice President Kamala Harris told reporters, detailing the efforts. She said the US would "continue to call on all nations to follow our lead and put the public interest first" when promoting the use of AI.


Senior administration officials, who spoke on condition of anonymity to preview the measures, said high-risk systems would undergo rigorous testing under the new guidelines.

The Biden administration intends to hire at least 100 employees with a focus on artificial intelligence by this summer, according to the fact sheet. Some posts are expected to be political appointments that will not require Senate confirmation, according to a senior official.

Alexandra Reeve Givens, president and chief executive officer of the Center for Democracy and Technology, praised the guidance as a "really big step" to help ensure federal agencies are using AI responsibly.

"This is about responsible use of taxpayer dollars and agencies having rigorous processes in place when they're choosing to deploy a new technology that could significantly affect people," Givens said.

Federal agencies have used artificial intelligence technology for years and plan to increase that usage, according to a December Government Accountability Office report. Broader regulation of artificial intelligence, though, has stalled in Congress, leaving Biden instead to leverage the government's position as a top customer for technology to establish safeguards. The White House has also relied on voluntary commitments from major companies, which agreed to follow a set of principles on AI innovation.


The administration has faced pressure from civil rights groups, labor allies and other advocates who have urged safeguards to prevent harm.

Some AI software has been found to perpetuate bias. A Bloomberg News experiment found racial bias when using ChatGPT, a product of OpenAI Inc., to rank resumes. Elsewhere, in the criminal justice system, facial recognition has misidentified suspects and software has yielded unfair sentences for individuals.

Harris said the measures were developed in consultation with the private sector and public-interest groups.

Under the guidance, Americans will be able to seek remedies if they believe AI has led to false information or decisions about them. Agencies are being asked to publish a list of the AI systems in use, along with risk assessments and how those risks are being managed.

Waivers could be granted for software that doesn't comply with the administration's rules, but a justification must be published. Some uses may be withheld from disclosure because of their sensitivity, according to the fact sheet.
