Google CEO Sundar Pichai said last week that concern about malicious applications of artificial intelligence is “very legitimate”.
In an interview with The Washington Post, Pichai said that AI tools will need built-in ethical guardrails and that companies must think deeply about how the technology can be abused.
“I think tech has to realize it just can’t build it and then fix it,” said Pichai, fresh from his testimony before House lawmakers. “I think that doesn’t work.”
Tech giants must ensure that artificial intelligence with agency of its own does not harm humanity, Pichai said.
The executive, who runs a company that uses AI in many of its products, including its powerful search engine, said he is optimistic about the technology’s long-term benefits, but his assessment of AI’s potential downsides parallels that of critics who have warned about the potential for misuse and abuse.
Advocates and technologists have warned about AI’s potential to embolden authoritarian regimes, enable mass surveillance, and spread misinformation, among other possibilities.
SpaceX and Tesla founder Elon Musk once said that AI could prove to be “more dangerous than nukes”.
Google’s work on Project Maven, a military AI program, triggered a protest from its employees and led the tech giant to announce that it would not continue the work once the contract expires in 2019.
Pichai said in the interview that governments around the world are still trying to understand AI’s effects and the potential need for regulation.
“Sometimes I worry people underestimate the scale of change that’s possible in the mid-to-long term, and I think the questions are actually pretty complex,” he told the Post.

Other technology companies, such as Microsoft, have embraced regulation of AI — both by the companies that create the technology and by the governments that oversee its use.