Stanford University researchers say AI ethics practitioners report lacking institutional support at their companies.
Tech companies that have promised to support the ethical development of artificial intelligence (AI) are failing to live up to their pledges as safety takes a back seat to performance metrics and product launches, according to a new report by Stanford University researchers.
Despite publishing AI principles and employing social scientists and engineers to conduct research and develop technical solutions related to AI ethics, many private companies have yet to prioritise the adoption of ethical safeguards, Stanford’s Institute for Human-Centered Artificial Intelligence said in the report released on Thursday.
“Companies often ‘talk the talk’ of AI ethics but rarely ‘walk the walk’ by adequately resourcing and empowering teams that work on responsible AI,” researchers Sanna J Ali, Angele Christin, Andrew Smart and Riitta Katila said in the report titled Walking the Walk of AI Ethics in Technology Companies.
Drawing on the experiences of 25 “AI ethics practitioners”, the report said workers involved in promoting AI ethics complained of lacking institutional support and being siloed off from other teams within large organisations despite promises to the contrary.
Employees reported a culture of indifference or hostility driven by product managers who view ethics work as damaging to a company's productivity, revenue or product launch timelines, the report said.
“Being very loud about putting more brakes on [AI development] was a risky thing to do,” one person surveyed for the report said. “It was not built into the process.”
The report did not name the companies where the surveyed employees worked.
Governments and academics have expressed concerns about the speed of AI development, with ethical questions touching on everything from the use of private data to racial discrimination and copyright infringement.
Such concerns have grown louder since OpenAI’s release of ChatGPT last year and the subsequent development of rival platforms such as Google’s Gemini.
Employees told the Stanford researchers that ethical issues are often considered only very late in the development process, making it difficult to adjust new apps or software, and that ethical deliberations are frequently disrupted by the reorganisation of teams.
“Metrics around engagement or the performance of AI models are so highly prioritised that ethics-related recommendations that might negatively affect those metrics require irrefutable quantitative evidence,” the report said.
“Yet quantitative metrics of ethics or fairness are hard to come by and challenging to define given that companies’ existing data infrastructures are not tailored to such metrics.”