A group of current and former employees at top artificial intelligence companies, including OpenAI and Google DeepMind, has come forward with an open letter warning about the absence of safety oversight for AI systems. The letter, released on Tuesday, also sets out recommendations for whistleblower protections and a "right to warn" about artificial intelligence.
The letter, signed by eleven current and former OpenAI employees and two from Google DeepMind, emphasizes the serious potential of AI technology to cause harm, and argues that this knowledge is concentrated inside the companies themselves: "AI companies have deep pre-public knowledge of what their systems can and cannot do, how well they can protect data and individuals, and the dangers inherent in different kinds of harms," the letter states. "Nevertheless, they have relatively soft commitments to report some of this information to governments and no commitments to do so to civil society. We do not believe that they can all be expected to voluntarily share it."
In response, OpenAI has defended its practices, pointing to features such as a tipline where people can report concerns and stating that it will not launch new technology without adequate safeguards in place. "As one of the most capable and the safest AI companies, we are happy with our work and will continue to approach the issue with science," an OpenAI spokesperson commented. Google has not yet commented on the matter.
The letter highlights the importance of openness and the responsibility that the fledgling field of artificial intelligence will need to take on. It lists four principles, one of which seeks to bar companies from compelling workers to sign agreements that prohibit them from speaking about risk-related AI topics. It also envisions a whistleblower system through which employees can raise concerns and complaints anonymously, directly to company boards.
"As long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public," the letter says. "Yet strict non-disclosure policies in essence deny us the possibility of expressing our concerns to anyone outside these organizations, let alone to other stakeholders."
The open letter comes after a series of recent resignations by OpenAI employees and claims about unusually strict non-disclosure policies; once again, it puts the issues of safety and transparency in the field of AI up for discussion.