
US highlights AI as risk to financial system for first time

Financial Stability Oversight Council says emerging technology poses ‘safety-and-soundness risks’ as well as benefits.

Financial regulators in the United States have named artificial intelligence (AI) as a risk to the financial system for the first time.

In its latest annual report, the Financial Stability Oversight Council said the growing use of AI in financial services is a “vulnerability” that should be monitored.

While AI offers the promise of reducing costs, improving efficiency, identifying more complex relationships and improving performance and accuracy, it can also “introduce certain risks, including safety-and-soundness risks like cyber and model risks,” the FSOC said in its annual report released on Thursday.

The FSOC, which was established in the wake of the 2008 financial crisis to identify excessive risks in the financial system, said developments in AI should be monitored to ensure that oversight mechanisms “account for emerging risks” while facilitating “efficiency and innovation”.

Authorities must also “deepen expertise and capacity” to monitor the field, the FSOC said.

US Treasury Secretary Janet Yellen, who chairs the FSOC, said that the uptake of AI may increase as the financial industry adopts emerging technologies, and that the council will play a role in monitoring "emerging risks".

“Supporting responsible innovation in this area can allow the financial system to reap benefits like increased efficiency, but there are also existing principles and rules for risk management that should be applied,” Yellen said.

US President Joe Biden in October issued a sweeping executive order on AI that focused largely on the technology’s potential implications for national security and discrimination.

Governments and academics worldwide have expressed concerns about the breakneck speed of AI development, amid ethical questions spanning individual privacy, national security and copyright infringement.

In a recent survey carried out by Stanford University researchers, tech workers involved in AI research warned that their employers were failing to put in place ethical safeguards despite their public pledges to prioritise safety.

Last week, European Union policymakers agreed on landmark legislation that will require AI developers to disclose data used to train their systems and carry out testing of high-risk products.

Source: Al Jazeera and news agencies
