In 2025, a sequence of third-party data theft incidents affecting Salesforce customers exposed a failure mode that many senior engineers had anticipated but few organizations had fully operationalized against.
These incidents were not caused by a vulnerability in Salesforce's core infrastructure. There was no privilege escalation, no authentication bypass, and no exploitation of internal Salesforce systems. Instead, attackers operated almost entirely within the boundaries of intended platform extensibility, abusing delegated trust, OAuth tokens, and third-party integrations exactly as designed.
This post reconstructs the incident class as a single, coherent narrative. It synthesizes public disclosures, threat intelligence reporting, and victim-authored postmortems, then moves beyond mechanics to examine what these incidents reveal about platform architecture, detection limits, and organizational learning in an era increasingly shaped by AI-powered integrations.
Throughout 2025, multiple organizations reported unauthorized access and data exfiltration from their Salesforce environments. While the affected companies differed in size, industry, and maturity, the incidents shared a strikingly consistent structure:

- Initial access through social engineering or through OAuth tokens stolen from third-party integrations
- API activity that was valid by design and closely resembled normal automation
- Rapid, often automated bulk data exfiltration
- Detection that lagged because nothing tripped authentication or volume alarms
Salesforce responded by revoking tokens, disabling affected applications, issuing security advisories, and notifying impacted customers. Salesforce consistently emphasized that its core platform was not breached and that the incidents resulted from third-party compromise and customer-level authorization decisions.
That framing is technically correct. It is also incomplete.
Although these incidents share a common failure mode involving delegated trust and OAuth abuse, they were not executed by a single threat actor or a single campaign.
Two primary threat clusters were involved.
The first cluster, tracked as UNC6040 and associated with the ShinyHunters extortion ecosystem, conducted large-scale social engineering and vishing campaigns. Their primary access vector involved tricking employees into installing or authorizing malicious, rebranded Salesforce utilities that closely resembled legitimate tools such as Salesforce Data Loader. Once authorized, these tools operated with explicitly granted user permissions and appeared normal from Salesforce's perspective.
The second cluster, tracked as UNC6395, executed a supply-chain-style attack by abusing stolen OAuth tokens associated with trusted third-party integrations, most notably the Salesloft Drift application. Using these tokens, UNC6395 gained API-level access to Salesforce customer environments and conducted rapid, automated data exfiltration.
These operations were distinct in execution and tooling. However, subsequent reporting in late 2025, including incidents involving Gainsight integrations, suggests overlap in infrastructure and monetization within the broader ShinyHunters ecosystem.
As recently as November 2025, Salesforce disclosed another OAuth token compromise involving Gainsight-published applications, suggesting the attack pattern remains active and the threat actor ecosystem continues to evolve.
The distinction matters. Defenses against vishing and malicious installers do little to stop OAuth token abuse. Conversely, OAuth governance alone does not mitigate social engineering at the user boundary.
Across investigations, two access paths dominated.
First, employees were socially engineered into authorizing malicious applications that mimicked legitimate Salesforce tooling. These applications requested permissions that users believed were routine, granting attackers durable access.
Second, attackers obtained OAuth access tokens from compromised third-party integrations. These tokens were reused across hundreds of Salesforce environments, enabling broad data access without triggering authentication alarms.
In both cases, access was legitimate by design.
One of the most consequential access paths involved the Salesloft Drift integration. Attackers used stolen OAuth tokens associated with this application to access Salesforce APIs across hundreds of customer environments.
A critical point must be stated explicitly: public reporting has not established how the tokens were originally obtained. Plausible explanations include a compromise of Salesloft's own infrastructure, a compromise of the Drift application or its hosting environment, and token theft elsewhere in the integration chain. This uncertainty is not incidental. It limits the ability to generalize upstream defensive lessons and reinforces the need for robust downstream controls, because token abuse may be the first observable signal regardless of how initial compromise occurred.
From Salesforce's perspective, the API calls were valid: they presented legitimate tokens, originated from an authorized connected app, and stayed within the scopes customers had granted.
This highlights a fundamental challenge.
The platform did not fail at authentication. It failed at distinguishing intent.
Traditional security controls are poorly equipped to differentiate legitimate automation from malicious exploitation when both operate within the same permission envelope.
Detection was delayed in many cases because attacker behavior closely resembled normal automation. API volumes were within expected ranges, and access originated from known integrations.
Attackers associated with UNC6395 used Salesforce Bulk API 2.0 to exfiltrate data rapidly, in some cases completing extraction in under four minutes. After execution, they deleted the bulk jobs to obscure evidence.
Despite this, Cloudflare was able to reconstruct the attack from residual telemetry, including API event logs and query records that survived the deletion of the bulk jobs themselves.
Organizations with Salesforce Event Monitoring enabled were materially better positioned to reconstruct activity than those relying solely on standard audit logs.
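The create-then-delete pattern described above is detectable in principle. Below is a minimal sketch of that detection idea, assuming simplified, hypothetical event-log rows (the field names `action`, `job_id`, and `timestamp` are illustrative; real Salesforce Event Monitoring schemas differ):

```python
from datetime import datetime, timedelta

def flag_short_lived_bulk_jobs(events, max_lifetime_minutes=10):
    """Return IDs of bulk jobs created and then deleted within a suspiciously
    short window -- the extract-and-clean-up pattern seen in these incidents."""
    created, deleted = {}, {}
    for e in events:
        ts = datetime.fromisoformat(e["timestamp"])
        if e["action"] == "bulk_job_created":
            created[e["job_id"]] = ts
        elif e["action"] == "bulk_job_deleted":
            deleted[e["job_id"]] = ts
    flagged = []
    for job_id, t_created in created.items():
        t_deleted = deleted.get(job_id)
        if t_deleted and t_deleted - t_created <= timedelta(minutes=max_lifetime_minutes):
            flagged.append(job_id)
    return flagged

# Hypothetical log rows: one job extracted and deleted in four minutes.
events = [
    {"job_id": "750A", "action": "bulk_job_created", "timestamp": "2025-08-09T10:00:00"},
    {"job_id": "750A", "action": "bulk_job_deleted", "timestamp": "2025-08-09T10:04:00"},
    {"job_id": "750B", "action": "bulk_job_created", "timestamp": "2025-08-09T09:00:00"},
]
print(flag_short_lived_bulk_jobs(events))  # → ['750A']
```

The point is not the specific threshold but the shape of the rule: correlating job lifecycle events catches cleanup behavior that per-event inspection misses.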
These incidents exposed a recurring problem: OAuth tokens function as infrastructure identities, not user accounts, yet they are often governed with far less rigor.
In Salesforce environments, commonly used scopes such as full, api, and refresh_token often grant broader access than administrators realize. Tokens frequently lack meaningful expiration and are not rotated with the same discipline as human credentials.
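Auditing for these over-broad grants is mechanical once an inventory exists. The sketch below assumes a hypothetical connected-app inventory format; the scope names themselves are real Salesforce OAuth scopes:

```python
# Scopes that grant broad, high-blast-radius access in Salesforce.
BROAD_SCOPES = {"full", "api", "refresh_token"}

def audit_scopes(apps):
    """Flag connected apps whose granted scopes include broad grants."""
    findings = []
    for app in apps:
        risky = sorted(BROAD_SCOPES & set(app["scopes"]))
        if risky:
            findings.append({"app": app["name"], "risky_scopes": risky})
    return findings

# Hypothetical inventory records for illustration.
apps = [
    {"name": "chat-widget", "scopes": ["api", "refresh_token"]},
    {"name": "billing-sync", "scopes": ["custom_permissions"]},
]
print(audit_scopes(apps))
```

A report like this does not decide anything by itself, but it turns "scopes frequently grant broader access than administrators realize" into a reviewable list.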
Cloudflare's postmortem revealed that its Drift integration had read access to Salesforce Cases. From a product perspective, this seemed reasonable. From a risk perspective, it dramatically expanded the blast radius once the token was compromised.
This raises uncomfortable but necessary questions:

- Why did a chat integration need read access to support cases?
- Who approved that scope, and when was it last reviewed?
- What else could the token have reached if the attackers had kept looking?
Cloudflare ultimately moved to aggressive credential rotation and token invalidation. The remediation worked, but it was reactive and costly.
The deeper lesson is that OAuth governance must be treated as an operational system, not a configuration checkbox.
Salesforce revoked compromised tokens, disabled affected applications, issued advisories, and notified customers. Salesforce emphasized the shared responsibility model, reinforcing the importance of MFA enforcement, app permission review, and employee training.
Salesforce did not publish a single, unified postmortem reconstructing the incidents across customers, vendors, and threat actors.
However, this does not mean detailed analysis was absent.
Some of the most rigorous postmortems came from affected customers. Cloudflare published a minute-by-minute forensic reconstruction. Google and multiple security firms released deep threat intelligence analyses.
The gap was not transparency. The gap was synthesis.
AI-driven integrations exacerbate every failure mode described above.
AI assistants and copilots typically request broad, cross-object access to provide context-aware functionality. This is true for first-party offerings such as Salesforce Einstein GPT and for third-party AI tools that ingest CRM data for summarization, forecasting, or decision support.
These integrations differ from traditional tooling in two ways. First, they request broad, cross-object access as a baseline rather than as an exception. Second, their access is continuous and automated, so malicious traversal looks like expected behavior rather than an anomaly.
Consider a concrete example: an AI assistant that summarizes customer support history to accelerate case resolution requires read access across Cases, Contacts, and potentially Accounts. If compromised, that access pattern—continuous, cross-object, automated—is indistinguishable from normal operation. There is no anomaly to detect because the behavior is the product.
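The indistinguishability claim can be made concrete with a toy anomaly score (object names are illustrative; real behavioral detection is more sophisticated, but suffers the same limitation):

```python
def anomaly_score(baseline, observed):
    """Fraction of observed object accesses outside the learned baseline.
    A toy stand-in for behavioral anomaly detection."""
    if not observed:
        return 0.0
    novel = observed - baseline
    return len(novel) / len(observed)

# An AI assistant's legitimate baseline already spans the sensitive objects.
ai_baseline = {"Case", "Contact", "Account"}

# Exfiltration through that integration touches the same objects: score 0.0.
print(anomaly_score(ai_baseline, {"Case", "Contact", "Account"}))  # → 0.0

# A narrowly-scoped tool going off-script is at least visible: score 0.5.
print(anomaly_score({"Case"}, {"Case", "Account"}))  # → 0.5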
While there are no widely disclosed incidents yet involving compromised AI integrations in Salesforce environments, the risk profile is structurally similar and arguably worse. When an AI integration is compromised, its access patterns are difficult to distinguish from normal operation, and its data traversal is expected rather than anomalous.
This is extrapolation, but grounded extrapolation.
OAuth is easy to describe and easy to misunderstand. It is not authentication, and it is not a login system; it is a way to delegate authority. When it works well, it allows a system to say: this external actor can do this specific thing, for this specific purpose, for a limited time.
In practice, OAuth tokens behave less like temporary permissions and more like infrastructure keys.
They sit outside normal identity review cycles. They are rarely rotated. They often outlive the people who approved them. They accumulate permissions incrementally as products evolve. And because they do not fail loudly when misused, they rarely trigger the kinds of alerts engineers are trained to respond to.
The risk compounds quietly.
Over-privileged scopes granted for convenience remain in place long after the original use case has changed. Refresh tokens with no practical expiration continue to authorize access indefinitely. Third-party applications installed for one-off evaluations persist as long-lived principals. Shadow IT turns delegation into sprawl.
None of this looks like a vulnerability scan finding. There is no CVE. There is no red dashboard. There is just an expanding surface of valid access that nobody is actively reasoning about.
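What that expanding surface looks like can be sketched as a simple audit over a token inventory. The record fields below (`last_rotated`, `approver_active`, `expires`) are assumptions for illustration, not a real API:

```python
from datetime import date, timedelta

def find_ungoverned_tokens(tokens, today, max_age_days=90):
    """Flag tokens that have outlived rotation policy, their approver, or both."""
    stale_cutoff = today - timedelta(days=max_age_days)
    findings = []
    for t in tokens:
        reasons = []
        if t["last_rotated"] < stale_cutoff:
            reasons.append("not rotated within policy")
        if not t["approver_active"]:
            reasons.append("approver no longer with the organization")
        if t["expires"] is None:
            reasons.append("no expiration set")
        if reasons:
            findings.append((t["name"], reasons))
    return findings

# Hypothetical inventory: one long-forgotten integration, one healthy one.
tokens = [
    {"name": "drift-integration", "last_rotated": date(2023, 1, 15),
     "approver_active": False, "expires": None},
    {"name": "ci-deploy", "last_rotated": date(2025, 8, 1),
     "approver_active": True, "expires": date(2026, 1, 1)},
]
print(find_ungoverned_tokens(tokens, today=date(2025, 9, 1)))
```

None of these checks is sophisticated. The failure mode is that, in most organizations, nothing runs them at all.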
That is what makes OAuth-related incidents feel surprising when they happen, even though the conditions for failure were present for years.
The shared responsibility model is useful. It answers an important question: who owns what.
Salesforce owns the platform. Customers own their integrations. Vendors own their applications. Everyone can point to a box on the diagram.
But accountability does not automatically produce understanding.
In real incidents, teams often respond by proving compliance rather than interrogating assumptions. The work becomes about showing that MFA was enabled, that policies existed, that guidance was followed. This is necessary, but it is not sufficient.
Learning requires feedback loops. It requires asking why certain permissions felt reasonable at the time. It requires examining how incentives pushed teams toward broader access. It requires admitting where convenience quietly overrode caution.
Shared responsibility models are static. Learning is dynamic.
Without explicit mechanisms for reflection, retrospection, and knowledge reuse, organizations satisfy the model and still repeat the failure.
This is not a metaphor. It is an operational reality.
When an organization experiences an OAuth-related incident, the response pattern is predictable. Engineers scramble to understand token behavior. Security teams write new detection logic. Playbooks are drafted. Policies are tightened. Tools are built.
And then the incident fades.
The knowledge lives in a postmortem document, a handful of dashboards, and the heads of a few people who were close to the response. Six months later, a different team installs a new integration. The same questions are asked again. The same mistakes are made again.
Not because teams are careless, but because the memory is not accessible at the moment it matters.
Detection logic is rebuilt from scratch. Remediation playbooks are reinvented. Governance policies are written reactively. Valuable lessons remain fragmented across internal documents, vendor blogs, and threat intelligence reports that are rarely consulted under pressure.
This is why OAuth token abuse keeps reappearing across organizations that otherwise have strong security programs.
The problem is not lack of intelligence. The problem is lack of durable, queryable memory.
Until incident knowledge is treated as shared infrastructure rather than historical record, each organization will continue to learn the same lessons independently, at production scale.
That is the deeper signal in the Salesforce incidents of 2025.
Modern software systems are assembled, not built. Integrations are how work gets done. They save time, accelerate delivery, and let teams focus on core value instead of rebuilding infrastructure.
But there is an uncomfortable truth that the Salesforce incidents make explicit.
The moment you install a third-party application, enable a connected app, or authorize an OAuth integration, you have extended your trust boundary. That external code now runs with your permissions, not the vendor's.
It does not matter that you did not write the code. It does not matter that it lives on someone else's infrastructure. From a data access and risk perspective, it is acting as you.
This has several concrete implications:

- The vendor's security posture is now part of your security posture.
- Scope review is your responsibility, not the vendor's.
- Integration inventory and lifecycle management belong in your production change process.
- Revocation must be something you can execute quickly, because the access is yours even when the compromise is not.
This is not a reason to avoid integrations. It is a reason to treat them as production dependencies, subject to the same scrutiny as internal services.
Vendor vetting, scope review, and integration lifecycle management are not procurement concerns. They are engineering concerns.
Most systems are well-instrumented. Most incidents are still confusing.
The difference is context.
During the Salesforce incidents, attackers used valid tokens and standard APIs. The logs showed what happened, but not why it mattered. An API call without business intent is just a row in a table.
Semantic context changes the response entirely.
Contextual logging answers questions like:

- Which integration made this call, and on whose behalf?
- What business purpose does that integration serve?
- Is this access pattern consistent with that purpose?
Without that context, engineers are forced to reconstruct meaning under pressure by correlating multiple data sources manually. This slows response, increases uncertainty, and expands blast radius.
Semantic context also changes alerting. Instead of triggering on volume or thresholds, systems can trigger on unexpected relationships. A chat widget reading support cases might be expected. The same widget exporting large volumes of account data is not.
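That relationship-based rule can be sketched in a few lines. The declared-purpose mapping and integration names below are hypothetical, for illustration only:

```python
# Declared purposes map each integration to the objects it is expected to touch.
DECLARED_ACCESS = {
    "chat-widget": {"Case", "Contact"},
}

def check_access(integration, objects_accessed):
    """Alert on relationships, not volume: any object outside the declared
    purpose is unexpected, regardless of how many rows were read."""
    expected = DECLARED_ACCESS.get(integration, set())
    unexpected = sorted(set(objects_accessed) - expected)
    return {"integration": integration,
            "unexpected_objects": unexpected,
            "alert": bool(unexpected)}

# Reading support cases is consistent with the widget's purpose:
print(check_access("chat-widget", ["Case"]))
# Touching account data is not, even at low volume:
print(check_access("chat-widget", ["Case", "Account"]))
```

Note that the rule fires on a single out-of-purpose read; a volume threshold would have stayed silent.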
The Salesforce incidents illustrate a simple truth: observability that cannot explain intent is insufficient for modern, API-driven systems.
Tokens are often treated as configuration artifacts. Something you set once, document lightly, and revisit only after an incident.
In practice, tokens behave like infrastructure.
They are depended on by multiple systems. They are embedded into automation pipelines. They are assumed to be stable and always available. When they break or are revoked unexpectedly, production workflows fail.
Treating token governance as infrastructure means:

- Maintaining an inventory of tokens, their scopes, and their owners
- Enforcing expiration and routine rotation, with tested revocation procedures
- Reviewing scopes when products change, not only when incidents happen
- Monitoring token usage the way you monitor any other production dependency
In the Salesforce incidents, long-lived tokens with broad scopes created silent, durable access paths that bypassed MFA and perimeter controls entirely.
This was not a hygiene failure. It was an infrastructure design failure.
If tokens are critical enough to take your system down when revoked, they are critical enough to be governed with the same rigor as databases, networks, and identity systems.
Every organization involved in the 2025 Salesforce incidents learned something valuable.
Cloudflare learned what OAuth-based CRM exfiltration looks like in practice. They learned which logs survive deletion. They learned how to scan exfiltrated data for embedded secrets. They built new tooling and processes.
The problem is not that learning did not happen.
The problem is that it happened locally.
Without deliberate institutionalization, this knowledge remains trapped in postmortems, Slack threads, and the memories of people who happened to be on call. When those people move on, the learning decays.
Institutional learning requires structure:

- Postmortems that are searchable and reusable, not filed and forgotten
- Detection logic and playbooks stored where the next team can find them
- Explicit ownership for carrying lessons across team boundaries
This is not about documentation for compliance. It is about reducing the cognitive load during the next incident by ensuring that the organization does not start from zero every time.
The Salesforce incidents show that attackers rely on repetition. Defense must rely on memory.
Organizations that scale securely are not the ones with the fewest incidents. They are the ones that remember their incidents best.
The Salesforce incidents make one thing clear: knowing the right lessons is not enough. The hard part is making those lessons available at the moment decisions are made, under real operational pressure.
This is where COEhub is intentionally different from postmortem repositories, ticket systems, or static documentation.
COEhub treats incidents as living operational knowledge, not historical artifacts.
Below is how each of the core learnings is translated into concrete capability.
In COEhub, integrations are modeled explicitly as risk-bearing entities.
Each integration is associated with:

- The scopes it has been granted and the objects those scopes can reach
- The team or owner who approved it, and when
- Its history of incidents, scope changes, and reviews
When an engineer authorizes a new integration or expands scopes, COEhub can surface:

- Past incidents involving similar integrations or scope combinations
- The effective blast radius of the requested permissions
- The questions previous reviewers asked and the decisions they reached
This moves integration review from tribal knowledge to repeatable engineering judgment.
COEhub does not replace logs. It adds meaning to them.
During an incident, COEhub correlates:

- The alert with the integration responsible and its declared purpose
- Current token activity with historical usage baselines
- The emerging pattern with prior incidents that looked the same
Instead of asking, "Why is this API call here?", teams can ask, "Which integration is this, what is it supposed to do, and when have we seen this pattern before?"
This shifts response from raw log inspection to pattern recognition under time pressure.
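A precedent lookup of this kind can be sketched abstractly. Everything below is hypothetical and illustrates only the shape of the idea, not COEhub's actual data model:

```python
def signature(integration_type, scopes, objects):
    """Order-insensitive access-pattern signature for an alert or incident."""
    return (integration_type, frozenset(scopes), frozenset(objects))

def find_precedents(corpus, alert):
    """Return summaries of past incidents whose signature matches the live alert."""
    sig = signature(alert["integration_type"], alert["scopes"], alert["objects"])
    return [inc["summary"] for inc in corpus if inc["signature"] == sig]

# A toy incident corpus with one prior OAuth-abuse case.
corpus = [
    {"signature": signature("chat", ["api", "refresh_token"], ["Case", "Account"]),
     "summary": "2025 OAuth token abuse: bulk read of Cases via chat integration"},
]
alert = {"integration_type": "chat", "scopes": ["refresh_token", "api"],
         "objects": ["Account", "Case"]}
print(find_precedents(corpus, alert))
```

The frozensets make matching order-insensitive, so the live alert matches the stored incident even though scopes and objects arrive in a different order.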
COEhub tracks token-related incidents as first-class signals.
Over time, it builds an internal corpus of:

- Which token configurations have been involved in incidents
- Which scopes were abused, and how
- Which rotation and expiration policies held up under attack
This enables teams to evolve token policy based on evidence, not best-effort guidelines.
Instead of debating rotation frequency in the abstract, teams can ask, "Which of our current tokens match the profile of tokens that have been abused before?"
That is infrastructure thinking applied to governance.
COEhub enforces structure where most organizations rely on storytelling.
Each incident captured in COEhub includes:

- The integrations, tokens, and scopes involved
- The detection signals that fired, and the ones that should have
- The remediation steps taken and their outcomes
This structure allows incidents to be:

- Searched when a similar alert fires
- Compared across teams and across time
- Reused as input to reviews, policies, and detection logic
Knowledge stops being owned by individuals and starts being owned by the organization.
The difference institutional memory makes is easiest to see during an active incident.
Below is a concrete comparison based on patterns observed in OAuth-related incidents like the Salesforce cases.
Without institutional memory, a security alert fires indicating unusual Salesforce API activity.
Engineers ask: What is this token? Who owns this integration? Is this normal?
Actions unfold slowly:

- Logs are pulled from multiple systems and correlated by hand
- Owners are tracked down through chat threads and org charts
- Scope and blast radius are reconstructed from scratch
Response characteristics: hours to establish basic context, high uncertainty about blast radius, and broad, conservative revocation that disrupts legitimate workflows.
Each incident feels novel, even when it is not.
With institutional memory, the same alert fires.
Within minutes, COEhub surfaces:

- The integration behind the activity, its owner, and its declared purpose
- Prior incidents with the same access pattern
- The remediation playbook that worked last time
Engineers ask different questions: not "What is this?" but "How does this differ from the last time we saw it?"
Response characteristics: minutes to context instead of hours, targeted revocation instead of blanket resets, and confidence grounded in precedent.
The incident is still stressful, but it is not unfamiliar.
The Salesforce incidents show that:

- Valid, authorized access can still be hostile
- Detection lags when attacker behavior matches expected automation
- Lessons decay unless they are deliberately retained and made queryable
The differentiator is not whether an organization experiences incidents.
The differentiator is whether the organization recognizes itself inside the incident while it is happening.
COEhub exists to make that recognition possible.
Not by predicting the future, but by ensuring the past is never lost.
The Salesforce third-party data theft incidents of 2025 were not extraordinary because of technical novelty. They were extraordinary because they revealed how fragile institutional memory remains in modern SaaS ecosystems.
Security failures increasingly occur at the seams, not the cores.
The organizations that adapt will not be the ones that never have incidents. They will be the ones that remember them, learn from them, and operationalize that learning before the next OAuth token is issued.
Memory, when engineered deliberately, becomes resilience.