A hybrid system combining a text filter with an artificial intelligence-based scanner is picking up around 1,500 “high-risk” cases of technology-facilitated abuse every year, the Commonwealth Bank of Australia (CBA) has revealed as it announced plans to publish its proprietary AI models for free.
In the banking context, technology-facilitated financial abuse occurs when a person sends a transfer to their victim (now as easy as entering the recipient’s email address or mobile number) and uses the free-text message field to include harassing, threatening, or offensive messages.
“Financial abuse occurs when money is used to gain control over a partner,” CBA group customer advocate Angela MacMillan said, but “sadly we see that perpetrators use all kinds of ways to circumvent existing measures such as using the messaging field to send offensive or threatening messages.”
Many perpetrators would effectively use the bank transfer system as a messaging service, sending multiple transactions worth $1 or less simply to harass victims they may have been barred from contacting through more conventional means.
“Any payment that contains a free text field to be completed by the sender and viewed by the recipient can be a vehicle for criminal communication,” AUSTRAC has previously warned in a guide about the practice. The issue became a focus for CBA after an isolated 2020 case led bank staff to conduct a three-month audit of the messages customers were sending through its payments system.
That audit ultimately identified more than 8,000 bank customers who had received messages ranging from jokes and profanities to what general manager of community and customer vulnerability Catherine Fitzpatrick called “serious threats and clear references to domestic and family violence”.
Using technology to fight back
That finding drove the bank to update its acceptable use policy, warning perpetrators that such behaviour could lead it to block their transactions or close their accounts. In 2021, it deployed a filter to block transfers whose messages included offensive or abusive keywords.
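In rough terms, a keyword filter of this kind might look like the sketch below; the blocklist and matching rule are illustrative assumptions, not CBA’s actual rules.

```python
import re

# Hypothetical blocklist: placeholder terms only, not CBA's actual list.
BLOCKLIST = {"threat", "kill", "worthless"}

def is_blocked(message: str) -> bool:
    """Return True if a free-text transfer description contains a blocked keyword."""
    tokens = re.findall(r"[a-z']+", message.lower())
    return any(token in BLOCKLIST for token in tokens)

# A transfer whose description trips the filter would be rejected before processing.
print(is_blocked("rent for march"))      # False: no blocklisted token
print(is_blocked("you are worthless"))   # True: contains "worthless"
```

A pure keyword match is easy to evade with misspellings and paraphrase, which is one reason the bank layered an AI model on top, as described next.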
CBA also partnered with AI specialist H2O.ai to build a platform that uses AI to evaluate the overall sense and intent of a message’s language.
Since then, the systems have scanned millions of funds transfers across CBA’s payment processing systems, helping the bank protect would-be victims of technology-facilitated abuse.
The models have now been published on GitHub as Abuse in Transaction Description Detection (AITD), a fully documented standard that uses a defined workflow and natural language processing (NLP) models to evaluate the emotions, toxicity, and sentiment of messages sent across a banking network.
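The AITD repository contains the bank’s own documented workflow and models. Purely as a loose illustration of that kind of NLP scoring, the sketch below runs a transfer description through generic, publicly available sentiment and toxicity classifiers from the Hugging Face hub; these are stand-ins, not the AITD models themselves.

```python
# Loose illustration only: generic Hugging Face pipelines stand in for the
# published AITD models, which define their own workflow and training.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default pretrained sentiment model
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def score_message(message: str) -> dict:
    """Score a free-text transfer description for sentiment and toxicity."""
    return {
        "sentiment": sentiment(message)[0],  # e.g. {'label': 'NEGATIVE', 'score': ...}
        "toxicity": toxicity(message)[0],    # e.g. {'label': 'toxic', 'score': ...}
    }

print(score_message("you can run but you can't hide"))
```

In a production setting, scores like these would feed downstream rules that decide whether a message is flagged for human review.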
“By sharing our source code and model with any bank in the world, it will help financial institutions have better visibility of technology-facilitated abuse,” MacMillan said. “This can help to inform action the bank may choose to take to help protect customers.”
In September, CBA took the project a step further by expanding the platform, which has been developed and refined in line with the eSafety Commissioner’s Safety by Design framework, in a pilot with NSW Police that asks customers whether they would like police to become involved when multiple abusive transfer messages are identified.
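The pilot’s criteria for prompting a customer have not been detailed publicly; one plausible shape for that escalation logic, with an assumed threshold, is sketched below.

```python
from collections import Counter

# Assumed threshold: the pilot's actual criteria are not public.
ESCALATION_THRESHOLD = 3

flagged_counts: Counter = Counter()

def record_flagged_message(sender_id: str, recipient_id: str) -> bool:
    """Record a flagged transfer message and return True when the recipient
    should be asked whether they want police to become involved."""
    flagged_counts[(sender_id, recipient_id)] += 1
    return flagged_counts[(sender_id, recipient_id)] >= ESCALATION_THRESHOLD
```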
That partnership “means critical information can now be shared when financial transactions are being used to threaten, harass or intimidate victims of domestic violence,” Australian Banking Association CEO Anna Bligh said, noting that the trial would “provide valuable insights for police services and other banks about how to better combat the scourge of domestic violence.”
Building on a 2020 Deloitte study that found more than 623,100 Australians (nearly 1 in 30 women and 1 in 50 men) were subjected to financial abuse, costing victims $5.7 billion, CBA in early 2022 launched a formal campaign against financial abuse called Next Chapter, which also includes a support hotline (1800 222 387) for customers and non-customers alike.