The first time you see a convincing deepfake of a central banker, you can’t quite believe what you’re seeing. Then you do. Then you begin to wonder how anyone is supposed to tell the real from the fake. That, in essence, is the question keeping regulators in Frankfurt, London, and Washington up at night.
A few months ago, fraudsters released videos of Bank of Italy governor Fabio Panetta endorsing investment products he had nothing to do with. The Bank tried to get ahead of the story, filed a complaint, and issued warnings. By then, however, the videos had done their work, spreading through social media and messaging apps and collecting clicks from viewers who trusted the face on the screen. Cases like these suggest that the traditional rule of finance, that recognition confers authority, has quietly ceased to be secure.
| Item | Detail |
| --- | --- |
| Threat Category | AI-generated synthetic media used in financial fraud |
| First Major Academic Breakthrough | Generative Adversarial Networks (GANs), 2014 |
| Notable Case 1 | Hong Kong multinational, video-conference fraud |
| Loss in Hong Kong Case | Approximately HK$200 million (≈US$25.6 million) |
| Notable Case 2 | Misuse of Bank of Italy Governor Fabio Panetta’s image |
| Year of Panetta Incident | 2026 |
| Singapore Incident (2025) | Finance director duped by AI-generated CFO; ~$499,000 wired |
| Key UK Legislation | Economic Crime and Corporate Transparency Act (ECCTA) |
| Effective Date of “Failure to Prevent Fraud” | September 2025 |
| Corporate Governance Update | Provision 29, effective January 2026 |
| Penalty for Non-Compliance | Unlimited fines for large firms |
| Projected Trend | Synthetic media losses projected to triple by 2027 |
In some ways, the 2024 incident in Hong Kong was even stranger. An employee of a multinational corporation joined what appeared to be a routine video conference with the CFO and several coworkers. Every other person on the call was artificial. The gestures, the voices, the small, familiar texture of corporate life, even the slight impatience of a senior executive in a hurry: all of it was fake. By the time anyone realized, around HK$200 million, about US$25.6 million, had already been transferred across five local bank accounts.
It is difficult to ignore how cheap attacks of this kind have become. For a while, deepfakes (faces swapped onto movie clips, voices stitched together for comic effect) were mostly an online novelty. The academic foundation was laid in 2014 with generative adversarial networks. Somewhere between the open-source releases and the emergence of “deepfake-as-a-service” platforms, however, the technology slipped out of anyone’s control. With a laptop and a few minutes of public video, a reasonably competent con artist can now create a convincing impersonation of nearly any executive on the planet.

The SEC has been watching all of this with growing discomfort. American regulators have not yet suffered a Panetta-level humiliation, but the conditions are in place. Markets move on rumor. A convincing fake video of a Federal Reserve official, a Fortune 500 CEO, or even a mid-cap CFO, released at the right moment, could trigger billions of dollars in trades before anyone realized it was fake. Investors appear to assume that verification systems will catch up. Whether they will is still an open question.
Britain has moved faster, perhaps reading the same tea leaves. The Economic Crime and Corporate Transparency Act introduced a “failure to prevent fraud” offense, which took effect in September 2025. Large companies that cannot demonstrate they took reasonable precautions face unlimited fines. Provision 29 of the corporate governance code, effective January 2026, goes a step further, mandating board-level statements on the effectiveness of internal controls over cyber and fraud channels. Matt Flegg of K2 Integrity has argued that this regulatory shift, rather than any particular fraud, is the real story.
Slowly and unevenly, the response is taking shape inside compliance departments. Callback protocols. Multi-person approvals for transfers above set thresholds. Tabletop exercises in which finance teams practice spotting voice and video anomalies. Some businesses have adopted the “VOICE” checklist (verify, observe, involve, confirm, escalate), a cumbersome acronym that nevertheless captures the new reality. Seeing no longer equates to believing. Neither does hearing.
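The logic behind those controls is simple enough to sketch in code. The following is a minimal, hypothetical illustration (all names, thresholds, and approver counts are invented for this example, not drawn from any firm's actual policy): a transfer above a threshold is released only after sign-off from multiple distinct people plus an out-of-band callback check, so a single convincing voice or face on a call is never sufficient on its own.

```python
# Hypothetical sketch of a payment-approval gate combining the controls
# described above: multi-person sign-off over a threshold plus an
# out-of-band callback verification. All names and limits are illustrative.

from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 50_000   # transfers above this amount need extra controls
REQUIRED_APPROVERS = 2        # distinct people who must sign off


@dataclass
class TransferRequest:
    amount: float
    requester: str
    approvers: set = field(default_factory=set)
    callback_verified: bool = False  # confirmed via a known phone number,
                                     # not the channel the request came in on


def approve(req: TransferRequest, approver: str) -> None:
    """Record an approval; the requester cannot approve their own transfer."""
    if approver != req.requester:
        req.approvers.add(approver)


def may_release(req: TransferRequest) -> bool:
    """Release funds only when every required control is satisfied."""
    if req.amount <= APPROVAL_THRESHOLD:
        return True  # small transfers follow the normal flow
    return len(req.approvers) >= REQUIRED_APPROVERS and req.callback_verified
```

The key design point is that the controls are conjunctive: a deepfaked CFO on a video call can supply at most one of the required signals, and the callback check runs over a channel the attacker does not control.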
What remains after reading the case files is a subtle erosion of trust. Not in spectacular collapses, but in small compromises: an extra phone call, a second signature, a meeting no one can quite trust anymore. Deepfakes have not yet disrupted the financial system. They have made everyone a little slower, a little less certain of what they just saw, a little more suspicious. Regulators are still trying to determine whether that is enough in a market built on confidence and speed.
