Spotting Insider Trading, Financial Fraud, Misconduct: There's An App For That
University team and industry leader develop AI that can spot likely insider-trading or other rogue chatter in emails and phone calls
Insider trading victimizes everyone, and it's on the rise.
Now Stevens Institute of Technology is fighting back, partnering with Accenture to develop new artificial intelligence (AI) that promises to reliably spot signs of potentially illegal behavior and illicit conduct.
"The data is very good, so far," says electrical and computer engineering (ECE) professor Rajarathnam Chandramouli, part of a Stevens team developing new text- and audio-mining algorithms together with Accenture. "The financial industry is very, very interested in our solution. Now it's a matter of refinements and further testing once our tool is fully integrated into the Accenture Insights Platform."
Cracking down on financial crime
In 2014, Preet Bharara, then the U.S. Attorney for the Southern District of New York, told FRONTLINE that insider trading and market abuse were "rampant" in U.S. securities markets, stretching from coast to coast and reaching well beyond finance into the tech, pharmaceutical and other sectors, with perpetrators ranging from executives and traders to Fortune 500 employees and IT experts.
And these insider trades harm the entire financial system by robbing individual and institutional investors of the confidence to invest. Once capital is removed from the market, panic can set in.
"The importance of finding something out in real time can’t be overstated," Bharara added, "because if you find out about something in real time, you’re more likely to have documents that have not been destroyed; you’re more likely to have evidence that hasn’t been taken away; you’re more likely to find witnesses who haven’t wandered away and who might be willing to talk to you."
Around that time, Stevens ECE professors Chandramouli and K.P. Subbalakshmi hit on an idea to retool their lie-detecting technology Jaasuz — which can accurately scan emails and other communications for signs of deception — for financial applications. Working with Ph.D. candidate Zongru Shao, a Stevens Innovation & Entrepreneurship Fellow, they quickly proved the concept and began refining the algorithm.
Accenture, a longtime Stevens partner in diverse research areas including advanced risk analytics, blockchain financial technology and insurance risk management, supported the project nearly from the outset — recently broadening its scope and extending support for a second year.
"The reason we have been investing in this is simple: because we truly believe in it," says Sharad Sachdev, a Managing Director with Accenture and analytics lead in the firm's Insurance Strategy and Artificial Intelligence Practice.
"As we work to develop leading analytics tools for our clients to detect complex typologies such as rogue trading, market manipulation and abuse, we see this as a potentially powerful weapon," adds Constantine Boyadjiev, North American Fraud, Risk, and Compliance Analytics lead for Accenture Digital and a longtime company liaison with the university. "There's no one single silver bullet for clients to detect fraud and misconduct; you need an entire quiver of arrows. And this could become a very powerful new arrow in the quiver."
Hunting for patterns, code words, emotions
To develop new tools, the Stevens team first fed huge quantities of deceptive email texts and normal chatter — such as the roughly 500,000 emails placed in the public record during the investigation of the Enron scandal — into an AI-based analytic engine built on some of Jaasuz's blueprints. The researchers then tested and expanded this approach to specifically analyze insider-trading email communications.
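The article doesn't publish the team's code or model; purely as an illustration of this step, the sketch below trains a simple text classifier on a tiny, made-up email set. The data, labels and model choice (TF-IDF features plus logistic regression) are assumptions for the sketch, not the Stevens/Accenture system.

```python
# Minimal sketch: learning to separate suspicious from ordinary email text.
# The messages and labels below are invented; a real corpus would be Enron-scale.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

emails = [
    ("Please confirm the Q3 numbers before the board call.", 0),
    ("Lunch at noon? The cafeteria has the potato soup again.", 0),
    ("Reminder: compliance training is due Friday.", 0),
    ("I'd like to buy a thousand frequent-flyer miles today.", 1),
    ("Need a kilo of potatoes delivered before the announcement.", 1),
    ("Move it quietly before the news hits the wire.", 1),
]
texts, labels = zip(*emails)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0, stratify=labels
)

# Word n-grams capture phrasing patterns, not just topic words.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_vec, y_train)
print(classification_report(y_test, clf.predict(X_test_vec)))
```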
The software began picking out patterns in speech and coded words in conversation, among other signals.
"We had an example, in an insider trading case involving collusion, where someone says, for example, 'I'd like to buy a thousand frequent-flyer miles' or 'I'd like to buy a kilo of potatoes' and that was code for an insider transaction," says Subbalakshmi. "Our algorithm can pick out these sorts of patterns using statistical natural language processing context models."
The AI separates legitimate sales orders and conversations from suspicious ones by looking for context.
"A conversation about buying potatoes could be about buying potatoes," says Shao. "But if that conversation takes place suddenly, midday in a trading office, without any surrounding conversations about going to the store or shopping — and then the employee immediately goes offline and takes a break to make a private call — your suspicions are raised. Also, the AI algorithm learns over time to make fewer mistakes."
What's most impressive is that the technology can spot a few occurrences of bad chatter in a huge volume of innocuous conversations — like picking a dangerous needle out of a haystack.
"Traditional machine-learning algorithms actually don't work that well for the sort of case where you have millions of emails and just a few suspicious ones," explains Subbalakshmi. "So we had to develop a new one that would."
New areas of exploration include audio, calls, emotions
Now that the algorithm can detect suspicious or risky conversations via email, the teams are working to develop a new tool to scan the audio of recorded phone conversations for the same sorts of risk factors and combine those insights with text-mining scores.
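How the audio and text insights will be combined isn't specified in the article; one simple design — sketched below under that assumption — is to fuse a per-channel text-mining score and an audio-derived score into a single behavioral risk score. The weights and data structure are placeholders.

```python
# Assumed score-fusion sketch, not the team's design: combine text and audio
# risk scores for the same employee/time window into one number.
from dataclasses import dataclass

@dataclass
class ChannelScores:
    text_score: float   # 0..1 risk from email/chat mining
    audio_score: float  # 0..1 risk from analyzed phone-call audio

def combined_risk(scores: ChannelScores, w_text: float = 0.6, w_audio: float = 0.4) -> float:
    """Weighted fusion; the weights are illustrative, not calibrated values."""
    return w_text * scores.text_score + w_audio * scores.audio_score

print(combined_risk(ChannelScores(text_score=0.2, audio_score=0.9)))  # 0.48
```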
Future iterations of the technology will also factor in sentiments and emotions expressed in emails and calls between traders, adds Boyadjiev, further enhancing the accuracy of predicting risky behavior.
"We can already identify 12 emotions in these text and voice communications with our algorithms," notes Chandramouli. "Institutions may find they can then combine behavioral risk scores computed by our mathematical algorithm with some business logic — let's say an identified employee suddenly downloads a huge file from the institution's database — to further identify suspicious behavior or risk."
Financial institutions are already expressing interest in the technology for compliance and risk applications, and the Stevens/Accenture group is now seeking a patent on it.
"The deployment of this technology could also far exceed the scope of the financial services sector," notes Boyadjiev, "with potential applications across many far-reaching domains including government — for example, for use in surveillance by regulatory bodies or intelligence agencies — and the medical / behavioral health arena, where it could aid the early detection of mental diseases and disorders.
To learn more about Stevens research and media availability of faculty, contact Media Relations Manager Kat Cutler ([email protected]) at 201.216.5139 or 603.799.8076.