‘People are just not worried about being scammed’

Image source, Clark Hoefnagels

Image caption, Clark Hoefnagels has created an AI-powered tool that spots fraudulent emails

  • Author, Jane Wakefield
  • Role, Technology journalist

When Clark Hoefnagels’ grandmother was defrauded out of $27,000 (£21,000) last year, he felt compelled to do something about it.

“I felt like my family was vulnerable and I had to do something to protect them,” he says.

“There was a sense of responsibility, being the one who handles all things tech for my family.”

As part of his efforts, Mr. Hoefnagels, who lives in Ontario, Canada, ran the scam or “phishing” emails his grandmother received through the popular AI chatbot ChatGPT.

He was curious to see if it would recognize them as fake, and it did so immediately.

This gave rise to an idea that has since grown into a business called Catch. It is an AI system that is trained to spot fraudulent emails.

Currently compatible with Google’s Gmail, Catch scans incoming email and highlights any it deems to be fake or potentially fake.
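The article does not describe how Catch works under the hood, but the general approach of asking a large language model to flag a suspicious message can be sketched roughly as below. This is a hypothetical illustration only: the model name, prompt and use of the OpenAI Python client are assumptions, not details of Catch’s actual system.

```python
# Hypothetical illustration only: Catch's actual implementation is not described
# in the article. This sketch assumes the official OpenAI Python client and a
# generic chat model; the prompt and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def looks_like_phishing(email_text: str) -> bool:
    """Ask a language model whether an email looks like a scam."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, not what Catch uses
        messages=[
            {"role": "system",
             "content": "You are a fraud analyst. Answer only YES or NO."},
            {"role": "user",
             "content": f"Is the following email likely a phishing scam?\n\n{email_text}"},
        ],
    )
    answer = response.choices[0].message.content.strip().upper()
    return answer.startswith("YES")
```

In practice, a tool like Catch would also need access to the Gmail inbox to read incoming mail, plus a way to label or quarantine anything the model flags.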

AI tools such as ChatGPT, Google Gemini, Claude and Microsoft Copilot are also known as generative AI. This is because they can generate new content.

At first that meant generating a text response to a question, a request or a conversational prompt. But generative AI applications can now increasingly create photos and images, produce voice content, compose music and draft documents.

People across all walks of life and industries are increasingly using such AI to improve their work. Unfortunately, so are scammers.

In fact, a product sold on the dark web called FraudGPT allows criminals to generate content for a variety of scams, from phishing emails that impersonate banks to custom scam pages designed to steal personal information.

Even more troubling is the use of voice cloning, which can be used to convince relatives that a loved one needs financial help, or even, in some cases, that the person has been kidnapped and a ransom needs to be paid.

There are some pretty alarming statistics about the scale of the growing AI fraud problem.

Reports of artificial intelligence tools being used to try to defraud banking systems rose by 84% in 2022, according to the latest data from UK anti-fraud organization Cifas.

The situation is similar in the US, where a report this month said that artificial intelligence “has led to a significant increase in the sophistication of cybercrime”.

Image source, Getty Images

Image caption, Studies show that fraudsters are increasingly using AI

Given this increased global threat, you would imagine that Mr. Hoefnagels’ Catch product would be popular with members of the public. Unfortunately, this was not the case.

“People don’t want that,” he says. “We’ve learned that people aren’t worried about fraud, even after they’ve been scammed.

“We talked to a guy who lost $15,000, and we told him we would have caught the email, but he wasn’t interested. People are not interested in any level of protection.”

Mr. Hoefnagels adds that this particular man simply did not think it would happen to him again.

The group that is worried about being scammed, he says, is older people. Yet instead of buying protection, he says their fears are more often assuaged with very low-tech tactics, such as their children telling them simply not to respond to anything.

Mr. Hoefnagels says he fully understands this approach. “After what happened to my grandmother, we said ‘don’t answer the phone unless the number is in your contacts, and don’t go on email anymore’.”

As a result of the apathy Catch has faced, Mr. Hoefnagels says he is now winding down the business while looking for a potential buyer.


While individuals can be blasé about fraud, banks cannot afford to be, especially as fraudsters increasingly turn to AI.

Two-thirds of financial firms now see AI-driven fraud as a “growing threat”, according to a global survey in January.

Fortunately, banks are now increasingly using AI to fight back.

AI-powered software made by Norwegian start-up Strise has been helping European banks spot fraudulent transactions and money laundering since 2022. It automatically and quickly screens millions of transactions a day.

“There are a lot of pieces of the puzzle that you need to put together, and the AI software makes it possible to automate the checks,” says Strise co-founder Marit Rødevand.

“It’s a very complicated job, and compliance teams have been drastically understaffed in recent years, but AI can help bring that information together very quickly.”

Ms. Rødevand adds that the challenge is staying a step ahead of the criminals. “A criminal doesn’t have to care about the law or compliance. And they’re also good at sharing data, whereas banks can’t because of regulations, so criminals can jump on new technology more quickly.”

Image source, Marit Rødevand

Image caption, Marit Rødevand says the battle for companies like hers is to stay ahead of cybercriminals

Featurespace, another tech company that makes AI software to help banks fight fraud, says its technology works by noticing things that are out of the ordinary.

“We don’t track the behavior of fraudsters, instead we track the behavior of the real customer,” says Martina King, CEO of the Anglo-American company.

“We build a statistical profile around what normal looks like. We can see, based on the data the bank has, if something is normal behavior or if it is anomalous and irregular.”
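Featurespace has not published the details of its models, but the underlying idea of flagging behaviour that deviates from a statistical profile of “normal” can be illustrated with a very simple sketch. The z-score threshold and transaction-amount profile below are illustrative assumptions, not the company’s method.

```python
# A minimal sketch of the "statistical profile of normal" idea, not Featurespace's
# actual method: flag a transaction that sits far outside a customer's usual
# spending pattern, using a simple z-score threshold (an illustrative assumption).
from statistics import mean, stdev


def is_anomalous(history: list[float], amount: float, threshold: float = 3.0) -> bool:
    """Return True if `amount` is more than `threshold` standard deviations
    from the customer's historical average transaction size."""
    if len(history) < 2:
        return False  # not enough data to build a profile
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold


# Example: a customer who normally spends 20-60 suddenly sends 5,000.
past = [25.0, 40.0, 32.5, 55.0, 19.99, 60.0]
print(is_anomalous(past, 5000.0))  # True
```

Real systems profile far more than transaction amounts, such as timing, location, device and counterparties, and keep updating the profile as new behaviour comes in.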

The company says it now works with banks such as HSBC, NatWest and TSB, and has contracts in 27 different countries.

Back in Ontario, Mr. Hoefnagels says that while he was initially frustrated that more members of the public didn’t realize the growing risk of fraud, he now understands that people just don’t think it’s going to happen to them.

“It made me more empathetic to individuals, and [I now] try to push companies and governments more instead.”
