Why Trust-Based Attacks Are Surging
In 2023, Clorox reported in court filings that attackers posing as company representatives contacted its IT service provider, Cognizant, and successfully obtained privileged system access. The impostors cited legitimate-sounding reasons for their requests, supported them with procedural details, and were granted the credentials without additional verification. The incident was part of a broader breach that disrupted Clorox’s operations and underscored an unsettling truth: credibility, not urgency, is now the strongest tactic for many social engineering attacks.
This was not a hurried phishing attempt but a carefully constructed pretext. The case reflects a growing shift in attacker tactics, where intrusions succeed by fitting neatly into expected workflows rather than attempting to break them. This evolution has been driven by attackers’ ability to gather, synthesize, and apply context in ways that closely mirror legitimate business activity.
From Urgency to Plausibility
Traditional phishing relies on panic. Pretexting takes the opposite approach, building trust over time and embedding requests within believable narratives. Attackers might pose as auditors, vendors, or IT staff, equipped with forged credentials, tailored jargon, and supporting documents. The requests arrive in the right format, reference real stakeholders, and align with known schedules.
Generative AI has made these narratives more convincing. It can produce natural-sounding correspondence, mimic voices and faces, and generate documentation that passes casual inspection. Reconnaissance that once took attackers weeks (scraping press releases, regulatory filings, and staff social media) can now be condensed into hours. These details feed into AI systems that not only craft convincing written messages, but also power deepfake phone and video calls in which the attacker appears to be a trusted figure such as a CEO or regulator. In some documented cases, these deepfake calls have included accurate references to ongoing projects or recent internal events, making the deception extremely difficult to detect. This preparation makes the fabricated persona far more persuasive, fitting neatly into an organization’s operational context and increasing the likelihood that staff will comply with malicious requests.
Scattered Spider, also tracked as UNC3944 and Muddled Libra, has repeatedly used this approach. Threat intelligence reporting from CISA, Google’s Mandiant, and others documents how the group targets IT service desks with phone-based pretexts, impersonating employees or support staff to persuade agents to reset passwords or re-enroll MFA for privileged accounts. Once these resets are approved, the attackers quickly register their own devices, pivot to administrative access, and expand their intrusion. These operations exploit procedural trust rather than technical flaws, making them difficult to detect with traditional security controls. In both the Clorox breach and Scattered Spider’s campaigns, the attackers avoided suspicion not by bypassing defenses, but by blending in.
While Scattered Spider is prominent, they are not alone in exploiting help-desk workflows. Lapsus$, a group responsible for breaches at major telecom and technology companies, has also used pretexting to trick service-desk staff into resetting credentials. These cases highlight that the tactic is now part of a broader threat landscape, used by both financially motivated groups and state-aligned actors.
Why Traditional Defenses Fall Short
Training employees to spot typos, mismatched domains, and urgent demands remains useful for low-level phishing. Yet these cues are absent in well-executed pretexting. A request framed as part of normal operations, delivered through familiar channels, rarely triggers alarm.
This means that even security-aware employees may process a request without hesitation, particularly when it is framed as time-sensitive but routine.
Firewalls and endpoint detection systems are designed to block malicious code, not maliciously framed requests from apparently trusted sources. Even multi-factor authentication can be subverted if the approver believes the action to be legitimate.
Attackers study the target organization’s internal processes, tools, and communication styles so that their requests appear procedural rather than exceptional.
Addressing this requires a shift from awareness to verification. The key question is no longer whether a message appears suspicious, but whether the requester’s identity and intent can be independently confirmed. Some organizations are embedding identity verification into the workflows most often exploited, ensuring that sensitive requests pass through secure, out-of-band checks before being fulfilled.
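The out-of-band check described above can be sketched in a few lines. This is a minimal illustration, not a prescription: the function names, the six-digit code format, and the delivery channel are all assumptions, and a real deployment would sit behind an identity provider rather than an in-memory dictionary.

```python
import hmac
import secrets
import time

# Hypothetical sketch: a sensitive request is held until the requester echoes
# back a one-time code delivered over a second, pre-registered channel
# (e.g. a callback to a phone number on file), never over the channel the
# request arrived on.

CHALLENGE_TTL = 300  # seconds a challenge stays valid

_pending = {}  # request_id -> (code, expires_at)

def issue_challenge(request_id: str) -> str:
    """Create a one-time code for a pending sensitive request."""
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit numeric code
    _pending[request_id] = (code, time.monotonic() + CHALLENGE_TTL)
    # Delivery via the out-of-band channel would happen here.
    return code

def verify_challenge(request_id: str, submitted: str) -> bool:
    """Approve the request only if the echoed code matches, is fresh, and is unused."""
    entry = _pending.pop(request_id, None)  # single use: removed on first attempt
    if entry is None:
        return False
    code, expires_at = entry
    if time.monotonic() > expires_at:
        return False
    # Constant-time comparison avoids leaking digits via response timing.
    return hmac.compare_digest(code, submitted)
```

The key property is that the code travels over a channel the attacker does not control, so a convincing pretext on the original channel is not enough on its own.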
Equally important is limiting data persistence. If sensitive files or messages vanish once their purpose is served, the window for exploitation narrows. Ephemeral, point-to-point transfers can significantly reduce the impact of a breach, even if an attacker gains temporary access.
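The ephemeral-transfer idea can likewise be sketched as a store where every payload is single-read and expires on a timer. This is an assumption-laden toy (the class name and API are invented for illustration); production systems would add encryption at rest and durable audit logging of the metadata, not the content.

```python
import time
from typing import Optional

class EphemeralStore:
    """Toy single-read store: payloads vanish on first read or when their TTL lapses."""

    def __init__(self):
        self._items = {}  # key -> (payload, expires_at)

    def put(self, key: str, payload: bytes, ttl_seconds: float) -> None:
        """Store a payload that self-destructs after ttl_seconds."""
        self._items[key] = (payload, time.monotonic() + ttl_seconds)

    def read_once(self, key: str) -> Optional[bytes]:
        """Return the payload at most once, and never after expiry."""
        item = self._items.pop(key, None)  # removed on first access, read or not
        if item is None:
            return None
        payload, expires_at = item
        if time.monotonic() > expires_at:
            return None  # expired; the pop above already deleted it
        return payload
```

Because nothing persists past its first legitimate use, an attacker who gains access later finds an empty store rather than an archive of sensitive transfers.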
Building Resilience into Operations
Verification should be a standard step, not an exception. Help desks can require secure identity checks for any request involving account changes, access escalations, or profile edits. Vendor and partner communications that touch sensitive systems can be subject to automated challenges, regardless of the requester’s history.
Organizations with complex vendor ecosystems can benefit from extending verification protocols to third-party access requests. Attackers often exploit the assumption that a known partner is inherently trustworthy. By applying the same scrutiny to external and internal requests, companies can close a common gap in their defenses.
Culturally, verification must be seen not as a sign of mistrust, but as a safeguard embedded in normal procedure. This is easier to achieve when the process is seamless, integrated into existing tools, and minimally disruptive. Leaders can set this tone by normalizing verification in their own communications, demonstrating that no one is exempt from security protocols.
Pretexting in the age of AI is not about forcing the door open; it is about being welcomed inside. By making identity and intent verification routine, and by ensuring that sensitive information leaves no lasting trace, organizations can blunt the effectiveness of these attacks. The attackers have already adapted. It is time for defenders to do the same.
If your organization handles sensitive approvals or system access, those interactions are now prime targets for AI-driven impersonation. Traceless integrates with your existing tools in under 10 minutes, adding identity verification and ephemeral messaging that make these attacks significantly harder to pull off. Book a demo to see how it works.
