Google says its A.I. won't be used for weapons, surveillance
BY ALI BRELAND
Google said Thursday that it would not let its artificial intelligence (A.I.) tools be used for deadly weapons or surveillance.
The tech giant made the pledge while unveiling its new A.I. principles, saying at the same time that it would continue to contract with the government and military.
“These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions,” Google CEO Sundar Pichai wrote in a post.
“We recognize that such powerful technology raises equally powerful questions about its use. As a leader in AI, we feel a deep responsibility to get this right,” he continued.
The company outlined seven principles for how it uses A.I., including avoiding “creating or reinforcing unfair bias” and proceeding “where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.”
Pichai explained that Google would not use its A.I. for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people” nor to support “technologies that gather or use information for surveillance violating internationally accepted norms of human rights.”
The principles come after massive backlash against Google’s work on Project Maven, a Pentagon drone program for which the company provided A.I. technology under contract.
The company announced last week that it will not renew its Project Maven contract amid pressure from employees and backlash from outside groups like the Tech Workers Coalition, a coalition of tech industry workers and labor and community organizers.
More than 4,000 Google employees signed a petition protesting Google’s contract, and some staffers resigned over it.
http://thehill.com/policy/technology/391271-google-says-its-ai-wont-be-used-for-weapons-in-new-principles
posted by Satish Sharma at 21:09