Wednesday, November 13, 2024

Tech firms failing to ‘walk the walk’ on ethical AI, report says

Stanford University researchers say AI ethics practitioners report lacking institutional support at their companies.

Tech companies that have promised to support the ethical development of artificial intelligence (AI) are failing to live up to their pledges as safety takes a back seat to performance metrics and product launches, according to a new report by Stanford University researchers.

Despite publishing AI principles and employing social scientists and engineers to conduct research and develop technical solutions related to AI ethics, many private companies have yet to prioritise the adoption of ethical safeguards, Stanford’s Institute for Human-Centered Artificial Intelligence said in the report released on Thursday.

“Companies often ‘talk the talk’ of AI ethics but rarely ‘walk the walk’ by adequately resourcing and empowering teams that work on responsible AI,” researchers Sanna J Ali, Angele Christin, Andrew Smart and Riitta Katila said in the report, titled Walking the Walk of AI Ethics in Technology Companies.

Drawing on the experiences of 25 “AI ethics practitioners”, the report said workers involved in promoting AI ethics complained of lacking institutional support and being siloed off from other teams within large organisations despite promises to the contrary.

Employees reported a culture of indifference or hostility due to product managers who see their work as damaging to a company’s productivity, revenue or product launch timeline, the report said.

“Being very loud about putting more brakes on [AI development] was a risky thing to do,” one person surveyed for the report said. “It was not built into the process.”

The report did not name the companies where the surveyed employees worked.

Governments and academics have expressed concerns about the speed of AI development, with ethical questions touching on everything from the use of private data to racial discrimination and copyright infringement.

Such concerns have grown louder since OpenAI’s release of ChatGPT last year and the subsequent development of rival platforms such as Google’s Gemini.

Employees told the Stanford researchers that ethical issues are often considered only very late in the game, making it difficult to adjust new apps or software, and that ethical considerations are frequently disrupted by the reorganisation of teams.

“Metrics around engagement or the performance of AI models are so highly prioritised that ethics-related recommendations that might negatively affect those metrics require irrefutable quantitative evidence,” the report said.

“Yet quantitative metrics of ethics or fairness are hard to come by and challenging to define given that companies’ existing data infrastructures are not tailored to such metrics.”
