SEC Chairman: AI May Lead to Next Financial Crisis

Securities and Exchange Commission (SEC) Chairman Gary Gensler has expressed significant concerns about the potential consequences of artificial intelligence (AI) for the financial system. In an interview with DealBook, Gensler outlined his views on how AI could become a systemic risk and why responsible regulation is needed.

AI as a Transformational Technology with Risks

Gensler sees AI as a transformational technology set to impact business and society. He co-wrote a paper in 2020 on deep learning and financial stability, concluding that a few AI companies would build foundational models that many businesses would rely on. This concentration could deepen interconnections across the economic system, making a financial crash more likely.

Gensler expects that the United States will most likely end up with two or three foundational AI models, increasing “herding” behavior. “This technology will be the center of future crises, future financial crises,” Gensler said. “It has to do with this powerful set of economics around scale and networks.”

Concerns About Concentration and Regulation

The SEC chief’s warnings extend to the potential conflicts of interest in AI models. The rise of meme stocks and retail trading apps has highlighted the power of predictive algorithms. Gensler questions whether companies using AI to study investor behavior are prioritizing user interests.

“You’re not supposed to put the adviser ahead of the investor, you’re not supposed to put the broker ahead of the investor,” Gensler emphasized. In response, the SEC proposed a rule on July 26, 2023, requiring platforms to eliminate conflicts of interest in their technology. The proposal targets conflicts that arise when investment advisers and broker-dealers use predictive data analytics to interact with investors.

Gensler emphasized that the rules, if adopted, would protect investors from conflicts of interest by ensuring that firms do not place their own interests ahead of investors’.

The proposal would require firms to analyze and then eliminate or neutralize conflicts that may emerge from using predictive analytics. The rules also include recordkeeping requirements to document compliance.

The question of legal liability for AI is also a matter of debate. Gensler believes companies should build in safe mechanisms and that deploying a chatbot like ChatGPT does not shift responsibility away from the firm. “There are humans that build the models that set up parameters,” he stated, emphasizing the duty of care and loyalty under the law.

Balancing Innovation with Responsibility

Gensler’s insights serve as a timely reminder of the importance of balancing innovation with responsibility. As AI continues to transform various sectors, including the financial system, his warnings underscore the need for careful regulation, oversight, and ethical considerations.

The SEC’s focus on AI’s potential risks reflects a growing awareness of the need for a comprehensive approach to ensure that technology serves the interests of investors and the broader economy, rather than creating new vulnerabilities.

