Tuesday, 21 January 2020

Alphabet CEO backs temporary ban on facial-recognition technology

One concern is how AI could be used to create video or audio that shows anyone saying and doing anything, anywhere.
At the Bruegel European economic think-tank in Belgium, Alphabet's chief executive officer, Sundar Pichai, urges the US and EU to coordinate regulatory approaches on artificial intelligence [Geert Vanden Wijngaert/Bloomberg]
The chief executive of Google parent company Alphabet on Monday backed a European Union proposal to temporarily ban facial-recognition technology because of the possibility that it could be used for nefarious purposes.
"I think it is important that governments and regulations tackle it sooner rather than later and gives a framework for it," Sundar Pichai told a conference in Brussels, Belgium, organised by the think-tank Bruegel.

The European Commission, the EU executive, is taking a tougher line on artificial intelligence (AI) than the United States. According to an 18-page proposal paper seen by Reuters, the commission wants to strengthen existing regulations on privacy and data rights, including a moratorium of up to five years on the use of facial-recognition technology in public areas to give the EU time to work out how to prevent abuses, the paper said.
"It can be immediate, but maybe there's a waiting period before we really think about how it's being used," Pichai said. "It's up to governments to charter the course" for the use of such technology.
Pichai urged regulators to take a "proportionate approach" when drafting rules, days before the Commission is due to publish proposals on the issue.
Regulators are grappling with ways to govern AI, encouraging innovation while trying to curb potential misuse, as companies and law enforcement agencies increasingly adopt the technology.
There was no question that AI needed to be regulated, Pichai said, but rulemakers should tread carefully.
"Sensible regulation must also take a proportionate approach, balancing potential harms with social opportunities. This is especially true in areas that are high risk and high value," he said.
Regulators should tailor rules according to different sectors, Pichai said, citing medical devices and self-driving cars as examples that require different rules.
He urged governments to align their rules and agree on core values. Earlier this month, the US government published regulatory guidelines on AI aimed at limiting authorities' overreach and urged Europe to avoid an aggressive approach.
Pichai said it was important to be clear-eyed about what could go wrong with AI and that, while it promised huge benefits, there were real concerns about potential negative consequences.
One area of concern is so-called "deepfakes" - video or audio clips manipulated using AI that could potentially show any individual saying anything in any setting.
Pichai said Google had released open datasets to help the research community build better tools to detect such fakes.
The world's most popular internet search engine said last month that Google Cloud was not offering general-purpose facial-recognition application programming interfaces (APIs) while it established policy and technical safeguards.
SOURCE: REUTERS NEWS AGENCY
