Amidst a flood of artificial intelligence (AI) apps designed to launch humanity to new heights—or cash in before the market bubble bursts—many are debating whether AI is a friend or foe.
In the right hands, AI can produce awe-inspiring results. But in the wrong hands, it can be a powerful tool for stealing our money and data.
Finder.com shares how financial scammers are using AI and how you can protect yourself from their schemes.
Eyewitness testimony ain’t what it used to be. Thanks to deepfake technology, people now have to question if Tom Cruise really danced in a bathrobe on his front lawn or if Trump and Biden actually went on a beach vacation together.
But when it comes to swindling people out of their hard-earned money, deepfake impersonations and phishing are no laughing matter.
Using software designed to learn like the human brain does, deepfake technology analyzes videos, images and audio of real people and recreates their likeness—sometimes without their permission.
The software began moving into the mainstream after Generative Adversarial Networks (GANs) were introduced in 2014, which enabled computers to recognize and reproduce detailed patterns.
It didn’t take long for scammers to jump on the breakthrough technology’s money-making potential.
In 2019, one of the first known cases of deepfake scamming occurred when the CEO of an energy firm based in the U.K. was scammed out of €220,000 (about US$243,000 at the time) after receiving a deepfaked phone call from the head of the firm’s parent company in Germany.
The CEO recognized the “melody” and “subtle German accent” of the caller’s voice and didn’t flag anything as suspicious until noticing that a follow-up call was from Austria.
Deepfake financial scams can take the form of “loved ones” phoning for emergency cash transfers, fraudsters impersonating managers during business calls to gain access to employee payment details, or fake government representatives demanding payment of taxes and other fines.
With scammers harder than ever to catch, how do you prevent bad actors from getting hold of your money?
Once upon a time, internet buzz was a sure sign that something or someone was relevant and possibly monetizable. Real market and cultural value was identified through online clicks, views, likes, shares, subscribers and follower counts.
It soon became apparent that there are many ways to make money online.
Eager to leverage their brands and line their pockets, internet users started unearthing every conceivable way of gaming the system. It became possible to “buy your way” to the top of the algorithm by purchasing online engagement from real people and programming bots to interact with content like real humans.
AI takes fake engagement to a whole new level by effortlessly creating websites that appear to be owned by legitimate organizations and by producing fake online reviews and social media accounts that can be tough to distinguish from the real thing.
Masked by a “trustworthy” online presence, scammers convince unsuspecting individuals and businesses to pour money into products, causes or opportunities that don’t exist.
False activity has become so ubiquitous online that an idea began circulating that the internet is now controlled by bots and contains very little, if any, real human activity—a conspiracy known as the “dead internet theory.”
True or not, thanks to AI, the possibility of a bot-controlled web doesn’t seem too far off.
These days, the market is a befuddling mix of classic and next-gen investments. Private equity and crypto are now worked into many individual investors’ portfolios alongside stocks, bonds and high-interest savings accounts.
The proliferation of nouveau riche influencers and successful side hustlers has stimulated the public’s appetite for seemingly easy money. Pair that with the futuristic gleam of AI, and you unfortunately have a ripe environment for investment scams.
Sixty-six-year-old Abigail Ruvalcaba transferred over $430,000 in cash, gift cards and bitcoin to scammers impersonating General Hospital TV star Steve Burton online.
A good portion of the money came from selling her home, which Abigail was persuaded to do after more than a year of interacting with videos of the actor over Facebook and WhatsApp. Unbeknownst to her, these videos were generated by AI.
But not all AI financial scams operate behind the scenes. The technology can also be a Trojan horse placed front-and-center to capture the interest of trend-chasing investors.
In April 2025, Shaukat Shamim was sentenced to 30 months in prison for fraudulently raising millions of dollars for his AI software company, YouPlus. Shamim falsely claimed his software could predict video marketing outcomes, and he altered bank statements to persuade investors that YouPlus had received deposits from major businesses like Coca-Cola, Kraft and Netflix.
Altogether, YouPlus raised around $17 million from the time it was founded in 2013 to Shamim’s resignation in 2019.
Smart investing involves spotting high-potential opportunities, but just as important is steering clear of opportunities that come with too many red flags to ignore.
Many people are feeling a financial squeeze these days. Drawing on a number of studies, Newsweek reports that consumer confidence in the U.S. is down this year. Up north, Finder.com reports that almost one in three Canadians are “slightly confident” or “not confident at all” in their financial future.
While AI can be used to exploit people and their money, it can also be leveraged to make smarter decisions and grow wealth faster.
AI is a double-edged sword that becomes more useful as it gets to know us better. Unfortunately, that also makes it incredibly skilled in the art of deception.
When it comes to your money, pay attention to signs that people or businesses aren’t what they claim to be. And remember, if an opportunity sounds too good to be true, it probably is.
This story was produced by Finder.com and reviewed and distributed by Stacker.