
Exclusive: Google Cancels Its AI Ethics Board in Response to Outcry


This week, Vox and other news outlets reported that Google’s newly created AI ethics board was falling apart amid controversy over several of its members.

Now it has officially fallen apart: it has been canceled. Google told Vox on Thursday that it is pulling the plug on the board.

The board survived for just over a week. Founded to guide the “responsible development of AI” at Google, it would have had eight members and met four times in 2019 to consider concerns about Google’s AI program. Those concerns include how AI can enable authoritarian states, how AI algorithms can produce disparate outcomes, whether to work on military applications of AI, and more. But the board ran into trouble from the start.

Thousands of Google employees signed a petition demanding the removal of board member Kay Coles James, president of the Heritage Foundation, over her comments about trans people and her organization’s skepticism about climate change.

At the same time, the inclusion of drone company CEO Dyan Gibbens reopened old divisions within the company over the use of Google’s AI for military applications.

Board member Alessandro Acquisti resigned. Another member, Joanna Bryson, defending her decision not to resign, said of James, “Believe it or not, I know worse about one of the others.” Other board members found themselves swamped with demands that they justify their decision to remain on the board.

On Thursday afternoon, a Google spokesperson told Vox that the company has decided to dissolve the panel, called the Advanced Technology External Advisory Council (ATEAC). Here is the company’s statement in full:

It’s become clear that in the current environment, ATEAC can’t function as we wanted. So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.

The panel was supposed to add external perspectives to the ongoing AI ethics work by Google engineers, all of which will continue. Hopefully, the cancellation of the board does not represent a retreat from Google’s AI ethics work, but a chance to consider how to engage outside stakeholders more constructively.

The board became a big liability for Google

The board’s credibility first took a hit when Alessandro Acquisti, a privacy researcher, announced on Twitter that he was stepping down, arguing: “While I’m devoted to research grappling with key ethical issues of fairness, rights and inclusion in AI, I don’t believe this is the right forum for me to engage in this important work.”

Meanwhile, the petition to remove Kay Coles James attracted more than 2,300 signatures from Google employees and showed no signs of losing steam.

As anger about the board intensified, its members were drawn into extended ethical debates over why they were on it, which could not have been what Google was hoping for. On Facebook, board member Luciano Floridi, a philosopher of ethics at Oxford, mused:

Asking for [Kay Coles James’s] advice was a serious error and sent the wrong message about the nature and goals of the entire ATEAC project. From an ethical perspective, Google has misjudged what it means to have representative views in a broader context. If Mrs Coles James does not resign, as I hope she does, and if Google does not remove her (https://medium.com/…/googlers-against-transphobia-and-hate-…), as I have personally recommended, the question is: what is the right moral stance to take in view of this grave error?

He ended up deciding to stay on the panel, but that was not the kind of ethical debate Google had hoped to spark, and it was becoming hard to imagine the two working together.

That wasn’t the only problem. A day ago, I argued that the board was not set up for success, infighting aside. AI ethics boards like Google’s, which are in vogue in Silicon Valley, largely do not appear equipped to solve, or even make progress on, the hard questions of ethical AI.

A role on Google’s AI board was an unpaid, toothless position that could not possibly, in four meetings over the course of a year, arrive at a clear understanding of everything Google does, let alone offer nuanced guidance on it. There are urgent ethical questions about the AI work Google is doing, and no real way the board could have addressed them satisfactorily. From the start, it was poorly designed for the goal.

Now it has been canceled.

Google still needs to figure out AI ethics – just not like this

Many of Google’s AI researchers are actively working to make AI fairer and more transparent, and clumsy missteps by management will not change that. A Google spokesperson pointed to several documents said to reflect Google’s AI ethics approach, from a detailed mission statement outlining kinds of research it will not pursue, to a look back at the start of the year on whether its AI work so far has served the social good, to detailed documents on how AI governance is handled.

Ideally, an external panel would complement that work, increasing accountability and helping ensure that every Google AI project receives appropriate scrutiny. Even before the infighting, the board was not set up to do that.

Google’s next attempt at external accountability will have to solve these problems. A better board might meet more often and involve more stakeholders. It would also make specific recommendations publicly and openly, and Google would say whether it had followed them and why.

It’s important that Google gets this right. AI capabilities keep advancing, leaving most Americans nervous about everything from automation to data security to catastrophic failures of advanced AI systems. Ethics and governance cannot be a sideshow for companies like Google, and they will be under intense scrutiny as they try to navigate the challenges they create.


Published by
Faela