Uploaded: 2024-06-15
AI Chatbot Fools Scammers & Scores Money-Laundering Intel

https://www.darkreading.com/cyber-risk/ai-chatbot-fools-scammers-and-scores-money-laundering-intel

Responding to scammers' emails and text messages typically has been the fodder of threat researchers, YouTube stunts, and even comedians.

Yet one experiment using conversational AI to answer spam messages and engage fraudsters in conversations has shown that large language models (LLMs) can interact with cybercriminals, gleaning threat intelligence by diving down the rabbit hole of financial fraud — an effort that usually requires a human threat analyst.

Over the past two years, researchers at UK-based fraud-defense firm Netcraft used a chatbot based on OpenAI's ChatGPT to respond to scams and convince cybercriminals to part with sensitive information: specifically, bank account numbers used to transfer stolen money, at more than 600 financial institutions spanning 73 countries.

Overall, the technique allows threat analysts to extract more details about the infrastructure used by cybercriminals to con people out of their money, says Robert Duncan, vice president of product strategy for Netcraft.

"We're effectively using AI to emulate a victim, so we play along with the scam to get to the ultimate goal, which typically [for the scammer] is to receive money in some form," he says. "It's proven remarkably robust at adapting to different types of criminal activity ... changing behavior between something like a romance scam, which might last months, [and] advanced fee fraud — where you get to the end of it very quickly."

As international fraud rings profit from scams — especially romance and investment fraud operating out of cyber-scam centers in Southeast Asia — defenders are searching for ways to expose cybercriminals' financial and infrastructure components and shut them down. Countries such as the United Arab Emirates have embarked on partnerships to develop AI in ways that can improve cybersecurity. Using AI chatbots could shift the technological advantage from attackers back to defenders, a form of proactive cyber defense.

Personas With Local Languages

Netcraft's research shows that AI chatbots could help curb cybercrime by forcing cybercriminals to work harder. Currently, cybercriminals and fraudsters use mass email and text-messaging campaigns to cast a wide net, hoping to catch a few credulous victims from whom to steal money.

The two-year research project uncovered thousands of accounts linked to fraudsters. While Duncan would not reveal the names of the banks, the scammers' accounts were mainly in the United States and the United Kingdom — likely because the personas donned by the AI chatbots were from those regions as well. Financial fraud works better when using bank accounts in the same country as the victim, he says.

The company is already seeing that distribution change, however, as it adds more languages to its chatbot's capabilities.

"When we spin up some new personas in Italy, we're now seeing more Italian accounts coming in, so it's really a function of where we're running these personas and what language we're having them speak in," he says.

The promise of using AI chatbots to engage with scammers and cybercriminals is that machines can conduct such conversations at scale. Netcraft has bet on the technology as a way to acquire threat intelligence that would not otherwise be available, announcing its Conversational Scam Intelligence service at the RSA Conference in May.

AI on AI

Typically, scammers attempt to convince victims to buy cryptocurrency or gift cards as the preferred payment method but eventually hand over bank account information, according to Netcraft. The goal in using an AI chatbot is to keep the conversation going long enough to reach those milestones. In the average conversation, the cybercriminal sends 32 messages and the chatbot issues 15 replies.
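
Keeping a conversation going "long enough to reach those milestones" implies the system can recognize when a message crosses one. A minimal, hypothetical classifier for the payment milestones named above might look like this (the labels and keyword patterns are invented for illustration):

```python
import re

# Invented milestone labels and keywords, for illustration only: detect what
# form of payment a scammer's message is steering the "victim" toward.
MILESTONES = {
    "crypto": re.compile(r"\b(bitcoin|btc|crypto|wallet)\b", re.I),
    "gift_card": re.compile(r"\bgift\s*cards?\b", re.I),
    "bank_details": re.compile(r"\b(iban|account number|sort code|routing)\b", re.I),
}

def classify(message):
    """Return the first milestone a message matches, or None."""
    for label, pattern in MILESTONES.items():
        if pattern.search(message):
            return label
    return None

print(classify("Please buy two Apple gift cards"))
print(classify("Send the fee to account number 12345678"))
```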

When the AI chatbot system succeeds, it can harvest important threat data from cybercriminals. In one case, a scammer promising an inheritance of $5 million to the "victim" sent information on 17 different accounts at 12 different banks in an attempt to complete the transfer of an initial fee. Other fraudsters have impersonated specific banks, such as Deutsche Bank and the Central Bank of Nigeria, to convince the "victim" to transfer money. The chatbot duly collected all the information.
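
Harvesting account details from free-form scam messages can be approximated with pattern matching plus validation. The sketch below (not Netcraft's method) pulls IBAN-shaped tokens from a message and keeps only those passing the standard mod-97 check; the sample IBAN is the published ISO example.

```python
import re

# IBAN-shaped tokens: two letters, two check digits, 11-30 more characters.
IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")

def iban_valid(iban):
    """ISO mod-97 check: rotate the first four chars to the end, map letters
    to numbers (A=10 ... Z=35); the result must be congruent to 1 mod 97."""
    rearranged = iban[4:] + iban[:4]
    digits = "".join(str(int(c, 36)) for c in rearranged)
    return int(digits) % 97 == 1

def harvest_ibans(message):
    """Extract validated IBANs from a free-form message."""
    return [m for m in IBAN_RE.findall(message) if iban_valid(m)]

# The published ISO example IBAN, embedded in a scam-style message.
print(harvest_ibans("Wire the release fee to GB82WEST12345698765432 today."))
```

The checksum step matters in practice: it filters out tracking numbers and other account-shaped noise before anything is logged as intelligence.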

While Netcraft's current focus for the experiment is to gain in-depth threat intelligence, the platform could be operationalized to engage fraudsters on a larger scale, flipping the current asymmetry between attackers and defenders. Rather than attackers using automation to increase the workload on defenders, a conversational system could engage cybercriminals widely, forcing them to figure out which conversations are real and which are not.
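
Engaging fraudsters at scale is largely an I/O-concurrency problem: each conversation spends most of its time waiting on the scammer. A toy sketch with asyncio, where the three-round exchange and the sleep are placeholders for real mail and model latency:

```python
import asyncio

# Toy concurrency sketch: each conversation mostly waits on I/O (mail, LLM),
# so many can be interleaved on a single event loop.
async def engage(scammer_id):
    replies = 0
    for _ in range(3):          # placeholder for a multi-round exchange
        await asyncio.sleep(0)  # placeholder for mail/LLM latency
        replies += 1
    return scammer_id, replies

async def main():
    # Run 100 independent "conversations" concurrently.
    return await asyncio.gather(*(engage(i) for i in range(100)))

results = asyncio.run(main())
print(len(results))
```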

Such an approach holds promise, especially since attackers are starting to adopt AI in new ways as well, Duncan says.

"We've definitely seen indicators that attackers are sending texts that resemble the type of texts that ChatGPT puts out," he says. "Again, it's very hard to be certain, but we would be very surprised if we weren't already talking back to AI, and essentially we have an AI-on-AI conversation."