Companies that own or operate critical infrastructure increasingly rely on artificial intelligence. Airports use A.I. in their security systems; water companies use it to predict pipe failures; and energy companies use it to project demand. On Thursday, the U.S. Department of Homeland Security will release new guidance for how such companies use the technology.
The document, a compilation of voluntary best practices, stems from an executive order that President Biden signed more than a year ago to create safeguards around A.I. Among other measures, it directed the Department of Homeland Security to create a board of experts from the private and public sectors to examine how best to protect critical infrastructure. The risks run the gamut from an airline meltdown to the exposure of confidential personal information.
Alejandro N. Mayorkas, the homeland security secretary, first convened the board in May. It includes Sam Altman, the chief executive of OpenAI; Jensen Huang, the chief executive of Nvidia; Sundar Pichai, the chief executive of Alphabet; and Vicki Hollub, the chief executive of Occidental Petroleum.
Given the broad range of companies whose executives worked to put it together, the guidance is general in scope. It encourages companies that provide cloud computing services, like Amazon, to monitor for suspicious activity and establish clear protocols for reporting it. It suggests that developers like OpenAI put in place strong privacy practices and look for potential biases. And for critical infrastructure owners and operators, like airlines, it encourages strong privacy practices and transparency around the use of A.I.
The 35-page document stops short of suggesting any formal metrics that could be used to help companies hold themselves accountable for complying with the guidelines, though it calls on legislators to supplement companies’ internal oversight mechanisms with regulation — a requirement that President Biden acknowledged was necessary when he issued his executive order.
“It’s a broad acknowledgment that we’re all responsible for our individual contributions to A.I. and the technology,” said Ed Bastian, the chief executive of Delta Air Lines, who is also on the board. “It’s something that, as the end user, we’ve been victims of candidly in the past.”
Mr. Bastian was referring to a flawed software update issued this summer by the cybersecurity company CrowdStrike that led to widespread technological disruptions. The outage, which affected Delta more than other carriers, highlighted operational vulnerabilities and cost Delta an estimated $500 million. He said he hoped the new guidance could help avoid a similarly disastrous problem.
“Putting out a framework that everyone sees each other’s accountability to the ecosystem sounds simple, but it’s a massive step in the right direction,” he said.
The board’s call to supplement the new guidance with regulation may not be answered any time soon. Such laws do not appear to be an immediate priority for President-elect Donald J. Trump. He has said he would revoke Mr. Biden’s executive order on artificial intelligence as part of his deregulatory agenda. One of his most urgent priorities for the Department of Homeland Security is cracking down on illegal immigrants.
Mr. Mayorkas said the framework would still be useful without enforcement mechanisms. He compared the work companies are doing to safeguard against the risks of A.I. to the work that they did when cybersecurity risks first emerged. He hopes they move faster.
“It took many companies, not all, but many companies, too much time to build governance regimes to address the breadth and depth of the cybersecurity challenge,” Mr. Mayorkas said.
“By calling this out in terms of a culture of safety, security and accountability in the framework, we seek to ensure a more accelerated uptake in the domain of A.I.”