
The Tricky Ethics of Google's Cloud Ambitions

Google’s attempt to wrest more cloud computing dollars from market leaders Amazon and Microsoft got a new boss late last year. Next week, Thomas Kurian is expected to lay out his vision for the business at the company’s cloud computing conference, building on his predecessor’s strategy of emphasizing Google’s strength in artificial intelligence. That strategy is complicated by controversies over how Google and its clients use the powerful technology.

After employee protests over a Pentagon contract in which Google trained algorithms to interpret drone imagery, the cloud unit now subjects its—and its customers’—AI projects to ethical reviews. Those reviews have caused Google to turn away some business. “There have been things that we have said no to,” says Tracy Frey, director of AI strategy for Google Cloud, although she declines to say what. But this week, the company fueled criticism that those mechanisms can’t be trusted when it fumbled an attempt to introduce outside oversight of its AI development.

Google’s ethics reviews tap a range of experts. Frey says product managers, engineers, lawyers, and ethicists assess proposed new services against Google’s AI principles. Some new products announced next week will come with features or limitations added as a result. Last year, that process led Google not to launch a facial recognition service, something rivals Microsoft and Amazon have done. This week, more than 70 AI researchers—including nine who work at Google—signed an open letter calling on Amazon to stop selling the technology to law enforcement.

Frey says that tricky decisions over how—or whether—to release AI technology will become more common as the technology advances. In February, San Francisco research institute OpenAI said it would not release new software it created that is capable of generating surprisingly fluent text, because it might be used maliciously.
The episode was dismissed by some researchers as a stunt, but Frey says it provides a powerful example of the kind of restraint needed as AI technology gets more powerful. “We hope to be able to have that same courageous stance,” she says. Google said last year that it modified research on lip-reading software to minimize the risk of misuse. The technology could help the hard of hearing—or be used to infringe on privacy. Not everyone is convinced that Google itself can be trusted to make ethical decisions about its own technology and business.
