In late July 2025, OpenAI CEO Sam Altman stood before an audience of central bankers and regulators in Washington and issued a warning: a fraud crisis is coming. This crisis is emerging not through malware, brute-force attacks, or zero-day exploits. It is instead unfolding through the manipulation of human trust. “It is crazy,” he said, “that we are still using voiceprint authentication.” With current AI capabilities, Altman argued, synthetic audio and video are not only good enough to deceive, but have already begun to undermine the foundational assumptions of digital identity.

The warning coincided with a sustained rise in impersonation attacks against banks, insurance firms, and managed service providers. In many cases, these intrusions involve no malware or technical exploit at all; they succeed by mimicking trusted voices during routine interactions.

The shift in threat model is significant. For decades, cybersecurity has concentrated on the perimeter: protecting servers, endpoints, and credentials. As generative AI tools grow more accessible and realistic, the boundary of trust is shifting, and fraud now frequently exploits the context and delivery of communication rather than system vulnerabilities.

One of the clearest examples of video-based deception occurred in 2024, when staff at the Hong Kong branch of the engineering firm Arup were tricked into transferring the equivalent of 25 million U.S. dollars. The attackers used deepfake video to impersonate the company’s CFO on a group video call, supported by fake emails and WhatsApp messages that mimicked other senior executives. Hong Kong police confirmed the use of AI-generated video in the incident, which was widely reported across international media. The case stands as one of the most vivid illustrations of how trust in video presence alone can be exploited with off-the-shelf generative tools.

While the Arup incident occurred overseas, similar patterns have emerged in North America. At Clorox, a widely reported 2023 breach disrupted operations for months after attackers used social engineering to gain access through Cognizant, one of its service providers. The adversary group later identified as Scattered Spider bypassed technical controls by convincing the help desk to reset credentials. More recently, Marks & Spencer suffered a compromise affecting internal systems; though the company has not disclosed full details, initial assessments point to a communication-based vulnerability rather than a conventional intrusion. These incidents underscore the broader trend: cyberattacks are not becoming less frequent or more predictable. They are evolving quietly, strategically, and often without malware.

Altman’s remarks reflect a growing concern across cybersecurity, finance, and IT services: conventional authentication signals are no longer reliable. Caller ID can be spoofed, voiceprints can be cloned, and live video may offer no definitive proof of presence. The longstanding reliance on surface-level familiarity is increasingly inadequate.

For any organization that handles sensitive communication or system access, this moment requires more than awareness. It calls for a reassessment of how identity is verified during sensitive interactions, particularly those that take place over phone, chat, or video. These concerns are no longer hypothetical; they are drawn from observed patterns of compromise.

Some organizations are turning to layered authentication, combining identity verification that does not rely on voice with device fingerprinting and ephemeral communication channels that retain no data once an interaction is complete. Others are beginning to restrict approvals and credential requests to secure portals, where identity can be verified with high assurance.

The most forward-leaning institutions are rethinking how they manage communication inside their own walls. Platforms like Slack, Microsoft Teams, and help desk ticketing systems often serve as hubs for sensitive activity, whether that’s a password reset, an account approval, or a vendor access request. These tools are convenient, but they are not secure by default. When enhanced with identity verification that goes beyond voice, and paired with communication layers that leave no retrievable trace, they become significantly harder to exploit. Rather than relying on the perceived safety of internal channels, organizations are beginning to insist on verifiable, ephemeral interactions for high-risk workflows, as sketched below.
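To make that concrete, here is a minimal sketch, in Python, of what a non-voice verification gate for a help-desk reset could look like. It is illustrative only: `send_push` is a hypothetical stand-in for whatever pre-registered second channel (an authenticator app, a secure portal) an organization actually uses, and the expiry window is an assumed value.

```python
import hmac
import secrets
import time

# Illustrative sketch: gate a help-desk reset behind an out-of-band,
# non-voice challenge. A cloned voice on the phone cannot answer a
# code delivered to a pre-registered device.

CHALLENGE_TTL_SECONDS = 120  # assumption: challenges expire quickly

_pending: dict[str, tuple[str, float]] = {}  # employee_id -> (code, issued_at)

def issue_challenge(employee_id: str, send_push) -> None:
    """Send a one-time code over a channel the caller cannot spoof."""
    code = f"{secrets.randbelow(10**6):06d}"
    _pending[employee_id] = (code, time.monotonic())
    send_push(employee_id, code)  # hypothetical delivery hook

def verify_challenge(employee_id: str, submitted: str) -> bool:
    """Approve the request only if the right code comes back in time."""
    entry = _pending.pop(employee_id, None)  # single use: pop, never reuse
    if entry is None:
        return False
    code, issued_at = entry
    if time.monotonic() - issued_at > CHALLENGE_TTL_SECONDS:
        return False
    # Constant-time comparison avoids leaking digits via timing.
    return hmac.compare_digest(code.encode(), submitted.encode())
```

The same gate can sit in front of wire approvals or vendor access requests; the point is that the confirming channel is bound to a verified identity rather than to a voice or a caller ID.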

These types of safeguards are neither complex nor experimental. They rely on principles that have long been understood: verify before trusting, and minimize exposure when trust is granted. Secure communication environments that do not depend on voice recognition and that do not retain messages or files beyond their intended use significantly reduce the risk of impersonation and data exfiltration. In the event of a breach, there is less sensitive material for attackers to access, and fewer opportunities for them to exploit a communication trail.
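As a rough illustration of the second principle, the sketch below (again Python, with hypothetical names) implements read-once, time-limited message storage: content disappears the moment it is read or when its retention window lapses, so a later breach finds no communication trail to mine.

```python
import time
from dataclasses import dataclass, field

# Illustrative sketch of "minimize exposure": a store that never keeps
# content past a short retention window. All names are hypothetical.

@dataclass
class EphemeralStore:
    ttl_seconds: float = 300.0  # assumption: five-minute retention
    _items: dict[str, tuple[str, float]] = field(default_factory=dict)

    def put(self, key: str, message: str) -> None:
        """Store a message with an absolute expiry time."""
        self._items[key] = (message, time.monotonic() + self.ttl_seconds)

    def get(self, key: str) -> str | None:
        """Return the message at most once, deleting it on read."""
        entry = self._items.pop(key, None)
        if entry is None:
            return None
        message, expires_at = entry
        return message if time.monotonic() <= expires_at else None

    def purge(self) -> None:
        """Drop anything past its expiry, even if it was never read."""
        now = time.monotonic()
        self._items = {k: v for k, v in self._items.items() if v[1] > now}
```

In production this behavior lives in the communication platform itself rather than in application code; the sketch simply makes the design point that retention policy, not encryption alone, determines what an attacker can recover after a breach.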

Altman’s remarks underscore an ongoing failure to adapt authentication methods to the capabilities of modern deception. The problem is not isolated to any single tool or tactic. It lies in the broader structural vulnerability created when identity is assumed without formal verification.

If your organization handles sensitive approvals or system access, those interactions are now prime targets for AI-driven impersonation. Traceless integrates with your existing tools in under 10 minutes, adding identity verification and ephemeral messaging that make these attacks significantly harder to pull off. Book a demo to see how it works.