President Joe Biden’s top science adviser is calling for a new “bill of rights” to defend against powerful new artificial intelligence technology.
The White House’s Office of Science and Technology Policy on Friday launched a fact-finding mission to look into facial recognition and other biometric tools used to identify people or assess their emotional or mental state and character.
Biden’s chief science adviser, Eric Lander, and Alondra Nelson, deputy director for science and society, also published an opinion piece in Wired magazine detailing the need to develop new safeguards against faulty and harmful uses of AI that could unfairly discriminate against people or infringe on their privacy.
“Enumerating rights is only a first step,” they wrote. “What might we do to protect them? Possibilities include the federal government refusing to buy software or technology products that fail to respect these rights, requiring federal contractors to use technologies that adhere to this ‘bill of rights,’ or adopting new laws and regulations to fill gaps.”
This isn’t the first time the Biden administration has expressed concern about harmful uses of AI, but it’s one of its most obvious steps toward doing something about it.
European regulators have already taken steps to rein in risky AI applications that could threaten people’s safety or rights. Lawmakers in the European Parliament this week took a stance in favor of banning biometric mass surveillance; although Tuesday’s vote does not bind any of the bloc’s member countries, it called for new rules prohibiting law enforcement from scanning facial features in public spaces.
Political leaders in Western democracies have said they want to balance the desire to tap AI’s economic and social potential with growing concerns about the reliability of tools that can track and profile individuals and make recommendations about who gets access to jobs, loans and educational opportunities.
A federal notice filed Friday seeks public comments from AI developers, experts and anyone affected by biometric data collection.
BSA, the software trade association backed by companies such as Microsoft, IBM, Oracle and Salesforce, said it welcomed the White House’s attention to combating AI bias, but is pushing for an approach that would require companies to make their own assessments of the risks posed by their AI applications and then show how they would mitigate those risks.
“It enables the good that everyone sees in AI but reduces the risk that it fosters discrimination and bias,” said Aaron Cooper, the group’s vice president of global policy.