Department of Commerce Establishes AI Advisory Committee
Seeks Top-Level Experts to Guide Privacy, Data Security Policies
The U.S. Department of Commerce this week announced the establishment of an artificial intelligence advisory committee set to counsel President Joe Biden and other federal agencies on issues ranging from privacy concerns to data security, along with global competition and inherent biases.
The Commerce Department is working with the National AI Initiative Office within the White House Office of Science and Technology Policy, or OSTP, on the formation of this National Artificial Intelligence Advisory Committee. Secretary of Commerce Gina Raimondo said this week that the group now seeks "top-level candidates" to advise on safeguards that can be incorporated into AI technologies.
A formal notice in the Federal Register says a subcommittee - co-created by the Commerce Department's National Institute of Standards and Technology - will specifically focus on AI-related law enforcement issues, which also include data security and privacy parameters. NIST will also provide wider administrative support for this effort.
The notice says the committee will examine the state of U.S. competitiveness in AI, along with workforce issues, legal rights and ways it can enhance government operations in cybersecurity, healthcare, infrastructure and disaster recovery.
While NIST will review potential nominees, Raimondo will select the committee chair and vice chair. The group will report regularly to the president and several congressional committees.
Caitlin Fennessy, research director at the International Association of Privacy Professionals, or IAPP, tells Information Security Media Group, "Groups like this can be quite impactful. This one [in particular] has potential to be more impactful … with clear ground rules and expectations, and public reporting duties to Congress."
'Engine for Growth'
Speaking about the committee, Raimondo says, "AI presents an enormous opportunity to tackle the biggest issues of our time, strengthen our technological competitiveness and be an engine for growth in nearly every sector of the economy. But we must be thoughtful, creative and wise in how we address the challenges that accompany these new technologies."
Eric Lander, White House science advisor and OSTP director, adds, "We have seen major advances in the design, development and use of AI, especially in the past several years. We must be sure that these advances are matched by similar progress in ensuring AI is trustworthy, and that it ensures fairness and protections for civil rights."
The committee stems from the National Artificial Intelligence Initiative Act of 2020, which provides nearly $6.5 billion over five years to increase funding for related research, education and standards development. The act directs the secretary of commerce to establish the committee in consultation with several other Cabinet members.
Commerce officials say the group will consist of "expert leaders from a broad and interdisciplinary range of AI-relevant disciplines," including academia, industry, nonprofits, civil society and federal laboratories. Members will offer counsel on research and development, ethics, standards, education, security and economic competitiveness.
IAPP's Fennessy tells ISMG that the formation of this group "recognizes that privacy and security are integral to the U.S. being a leader in the development of trustworthy AI." She expects senior-level stakeholders to "look pretty closely at privacy, security, training and workforce development" issues to "advance the ethical use of AI." She also expects the committee to consider specific, technical protective measures that can be built into these technologies.
Earlier this year, a report from the 15-member National Security Commission on Artificial Intelligence, an independent commission formed in 2018, said the U.S. is in danger of falling behind China and Russia in developing AI technologies and countering cybersecurity threats that could develop as AI use becomes more widespread (see: AI Supremacy: Russia, China Could Edge Out US, Experts Warn).
The lengthy report outlined ways to counter threats that leverage AI - including disinformation campaigns and cyberattacks by nation-states. Many of its 60 recommendations echoed findings from the Cyberspace Solarium Commission's 2020 report.
The commission also called on Congress to authorize the spending of billions of dollars to fund the development of AI, machine learning and other technologies to help the U.S. better compete as well as protect critical assets from security threats.
Eric Schmidt, the commission chair and former CEO of Google, and Robert O. Work, the commission vice chair, said in the report that the U.S. needs to create a holistic approach that balances AI development and countermeasures to fend off emerging threats.
Lauren Christopher, an associate professor of electrical and computer engineering at Indiana University-Purdue University Indianapolis who has studied the effects of AI, previously told ISMG that the commission's report correctly noted the U.S. needs to do more to keep pace with AI.
"The U.S. needs to invest more and have leadership strategies in place. There is much already done in the U.S. on AI topics, but much more to do," Christopher said at the time.
Additionally, citing wrongful arrests based on incorrect facial recognition matches, Democratic lawmakers, led by Sens. Edward Markey, D-Mass., and Jeff Merkley, D-Ore., have moved to ban the federal government from using facial recognition technology. If passed, their 2020 bill - the Facial Recognition and Biometric Technology Moratorium Act - would prohibit federal authorities from using the technology and other biometric tools.
The Security Industry Association, a trade association representing security solutions providers, declared strong opposition to the bill, saying it would "impose a blanket ban on most federal use of nearly all biometric and related image analytics technologies, incorrectly labeling all such technologies as surveillance regardless of application."
And this summer, NIST also announced that it is working to develop risk management guidance around the use of AI and machine learning - citing a need to secure the emerging technologies (see: NIST Works to Create AI Risk Management Framework).