Misinformation created by artificial intelligence and disseminated on social media platforms is increasing the risk of bank runs, according to a recent British study, which argues that lenders must enhance their monitoring to detect when misleading information is influencing customer behavior.
Generative AI can be utilized to create fake news stories claiming that customer funds are not secure, or memes that appear to make light of security concerns, which can be spread on social media using paid advertisements, according to the study published by UK research company Say No to Disinfo and communications firm Fenimore Harper.
Banks and regulatory bodies are increasingly concerned about the risks of bank runs fueled by social media, following the collapse of Silicon Valley Bank in 2023, in which depositors withdrew $42 billion in 24 hours.
Advances in AI have amplified these risks. The G20’s Financial Stability Board warned in November that generative AI “could enable malicious actors to generate and spread disinformation that causes acute crises”, including flash crashes and bank runs.
Say No to Disinfo presented sample AI-generated content to UK bank customers and found that a third were “extremely likely” to move their money after seeing it, with a further 27% “somewhat likely”.
“As AI is making disinformation campaigns easier, cheaper, quicker, and more effective than ever before, the emerging risk to the financial sector is rapidly growing but often overlooked,” the report said, noting that online and mobile banking enable people to move money in seconds.
The study estimated that for every 10 pounds ($12.48) spent on social media advertisements to amplify the fake content, as much as 1 million pounds of customer deposits could be moved.
The estimate was calculated using the average deposits held by UK customers, the cost of social media advertisements, and estimates of how many people would see them.
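The report does not publish its underlying arithmetic, but a back-of-envelope sketch of that kind of estimate might look like the following. The cost-per-impression, share of viewers who act, and average balance figures below are illustrative assumptions, not numbers taken from the study.

```python
# Hypothetical sketch of the structure of the estimate described above: ad spend buys
# impressions, a fraction of viewers are customers who actually move their money, and
# each mover shifts an average deposit balance. All input values are assumptions.

def deposits_at_risk(ad_spend_gbp: float,
                     cost_per_impression_gbp: float,
                     share_likely_to_move: float,
                     avg_deposit_gbp: float) -> float:
    """Estimate customer deposits (GBP) that could be moved for a given ad spend."""
    impressions = ad_spend_gbp / cost_per_impression_gbp
    movers = impressions * share_likely_to_move
    return movers * avg_deposit_gbp

# Illustrative (assumed) figures only -- chosen to show how a £10 spend could,
# under these assumptions, plausibly reach the order of magnitude the report cites.
estimate = deposits_at_risk(
    ad_spend_gbp=10.0,               # the £10 ad spend referenced in the study
    cost_per_impression_gbp=0.005,   # assumed ~£5 per thousand impressions
    share_likely_to_move=0.05,       # assumed 5% of viewers actually move money
    avg_deposit_gbp=10_000.0,        # assumed average balance per customer
)
print(f"Estimated deposits at risk per £10 of ads: £{estimate:,.0f}")
```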
Banks need to track mentions in the media and on social platforms, and that tracking should be integrated with the systems that monitor withdrawals, so that lenders can identify when malicious information is affecting customer behavior, the researchers said.
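The researchers do not prescribe an implementation, but one hypothetical way to link the two data streams is to flag time windows in which negative mentions and withdrawals both spike well above their baselines. The data structure, thresholds, and figures below are assumptions for illustration only.

```python
# Minimal sketch of the integration described above: compare a feed of negative
# social-media mentions against hourly withdrawal volumes and flag windows where
# both spike together. Field names, baselines, and the spike factor are assumptions.

from dataclasses import dataclass

@dataclass
class HourlyWindow:
    hour: str                 # e.g. "2025-02-14T10:00"
    negative_mentions: int    # count of flagged posts mentioning the bank
    withdrawals_gbp: float    # total outflow in that hour

def flag_suspicious_windows(windows: list[HourlyWindow],
                            mention_baseline: float,
                            withdrawal_baseline_gbp: float,
                            spike_factor: float = 3.0) -> list[HourlyWindow]:
    """Return windows where mentions and withdrawals both exceed spike_factor x baseline."""
    return [
        w for w in windows
        if w.negative_mentions > spike_factor * mention_baseline
        and w.withdrawals_gbp > spike_factor * withdrawal_baseline_gbp
    ]

# Example usage with made-up baselines and one obviously anomalous hour.
history = [
    HourlyWindow("2025-02-14T09:00", negative_mentions=12, withdrawals_gbp=250_000),
    HourlyWindow("2025-02-14T10:00", negative_mentions=480, withdrawals_gbp=2_900_000),
]
alerts = flag_suspicious_windows(history, mention_baseline=15, withdrawal_baseline_gbp=300_000)
for w in alerts:
    print(f"ALERT {w.hour}: {w.negative_mentions} mentions, £{w.withdrawals_gbp:,.0f} withdrawn")
```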
Asked about the study, Revolut’s head of financial crime, Woody Malouf, said the London-based fintech conducts real-time monitoring for emerging threats among its customers and “across the broader ecosystem”.
“Whilst we believe an industry event like this is unlikely, it is still possible, so it’s essential that financial institutions are prepared,” he said, adding that social media platforms must play a bigger role in stopping threats.
Other financial institutions contacted by Reuters, including NatWest and Barclays, declined to comment or did not respond to requests for comment.
While regulators have expressed concern about AI’s overall impact on financial stability, banks are broadly optimistic about the technology’s impact.
“Banks are working hard to manage and mitigate risks around AI and the regulatory authorities are looking at the potential financial stability challenges the technology poses,” industry body UK Finance said.
The report’s release was unrelated to an AI Summit in France this week, at which politicians and industry executives focused on promoting the spread of AI, a marked shift from the previous summit’s focus on managing its risks. ($1 = 0.8013 pounds)