
Digital harm runs on
broken systems and
bad actors.
Most only see one.

I've spent 12 years mapping both — across blockchains and borders, inside platforms, and within the regulatory blind spots adversaries call home. I help law firms, regulators, tech companies, and government clients see the full picture and act on it.

Let's talk about your case
Career
U.S. Army · NSA · eBay · Meta · SoFi · Starling Lab
Featured in
Reuters · WIRED · Ars Technica · NPR · BuzzFeed News · Stanford

Regulatory blind spots
aren't accidents.
They're being
weaponized.

A Chinese crime syndicate registered shell companies in Colorado for $50, enrolled fraudulently as an SEC investment adviser, and laundered tens of millions from U.S. victims. Every step exploited a documented regulatory gap. The regulators knew. They chose not to fix it.

This is the pattern. Adversaries don't outmaneuver defenses — they read the policy documents, find where enforcement stops, and build there. AI deepfakes, encrypted social engineering, shell networks, fraudulent registrations: none of it is technically sophisticated. It's institutionally sophisticated. The threat actors have generalized. The defenses mostly haven't.

How I can help

I work with a small number of clients at a time. Here's where I tend to add the most value.

For law firms & legal institutions

From raw evidence to findings that hold up.

  • Digital fraud, platform harm, and financial crime investigations
  • Blockchain transaction analysis and tracing
  • Dark web intelligence and OSINT
  • Expert evidence across federal, state, and international jurisdictions
  • Technical findings translated for courts and non-technical decision-makers
What I bring: investigative rigor, chain-of-custody discipline, and Mandarin-language capability.
For fintechs, exchanges, DeFi & DAOs

Strategy and execution for the threats you can't outrun.

  • Fraud and financial crime program buildouts from scratch
  • Detection gap evaluation and threat assessments
  • Targeted investigations into specific threat actors or patterns
  • Fractional leadership for fraud strategy and AML functions
  • AML/CFT compliance strategy and crypto forensics
What I bring: 12+ years of program-building, hands-on blockchain analytics, AML/CFT expertise, firsthand knowledge of how transnational crypto fraud operations are actually structured.
For digital platforms & AI companies

When the threat is your own ecosystem.

  • Trust and safety strategy and program design
  • AI system red teaming against real-world adversarial tactics
  • Fraud and abuse vector identification before exploitation
  • Intelligence collection from cybercrime forums and adversarial networks
  • Content moderation policy advisory
What I bring: red team experience against live AI systems, behavioral intelligence from Meta-scale operations, governance advisory grounded in Stanford policy research.
For government & policy makers

Bridging policy and adversarial reality.

  • Regulatory gap mapping and enforcement blind spot analysis
  • Investigative briefings with documented evidence
  • AI governance framework development
  • Cross-jurisdictional threat intelligence
  • Policy advisory for oversight bodies, advocacy groups, and agencies
What I bring: firsthand intelligence on regulatory arbitrage being systematically weaponized, cross-jurisdictional investigative experience, AI governance research applicable to regulatory design.
Engagements I take
  • Consulting on specific investigations or cases
  • Retained advisory (ongoing strategic access)
  • Fractional leadership (interim fraud strategy or T&S)
  • Expert review and litigation support
  • Red team and adversarial assessment
  • Policy advisory and regulatory framework development

I don't do volume. I work on cases that matter.

Selected work
Platform Intelligence · Encrypted Environments · Attribution

First-of-its-kind attribution of pig-butchering syndicates inside encrypted messaging platforms.

Designed WhatsApp's first Trust & Safety Investigations program — building behavioral attribution and syndicate mapping methodology from scratch in an environment where message content is invisible. Achieved the first documented attribution of organized pig-butchering syndicates operating at scale on WhatsApp, producing findings that drove law enforcement referrals and the platform's first coordinated financial crime enforcement actions.

Meta / WhatsApp · First-of-its-kind investigative methodology

Behavioral Intelligence · Election Security · Mass Atrocity Risk

Forecasting election violence from platform behavioral signals.

I developed risk forecasting frameworks for 15+ high-risk elections at Meta, identifying behavioral precursors to mass atrocity events in Ethiopia, Afghanistan, and India. This work — detecting coordinated network behaviors that correlate with off-platform violence — improved detection model accuracy by 60% and directly informed Meta's first EU DSA systemic risk assessment.

Meta Newsroom, November 2021 ↗

AI Disinformation · Investigative Journalism · Platform Accountability

How a local newspaper's website became an AI-generated clickbait factory.

A small Iowa newspaper's website was hijacked and rebuilt as an AI-generated disinformation engine — one of the first documented cases of AI content systematically replacing local journalism at scale. I investigated the operation; findings published in WIRED.

WIRED, February 2024 ↗

AI Systems · Red Team · Content Moderation Policy

Finding the gaps in AI platforms before the bad actors do.

Red-teamed Synthesia's AI avatar content moderation system — mapping how malicious actors across 10 cybercrime forums and 12 platforms perceive and exploit AI-generated video. Surfaced abuse vectors the system wasn't catching and drove direct policy and safety upgrades.

Synthesia, August 2024 ↗

Research & policy work

Alongside investigative work, I research the governance frameworks that determine whether AI and digital systems cause harm or prevent it — and where the gaps between those frameworks and adversarial reality actually live.

Academic Paper · Stanford INTLPOL 245B

No Free Lunches: How the Global South Can Govern Open-Source AI

What legal tools do low-capacity states actually have to govern open-source models like LLaMA, DeepSeek, and Mistral? Examines the AU Continental AI Strategy, Brazil's AI Bill, and Kenya's AI Code of Practice against the hard constraints of Global North licensing.

Stanford University · International Policy Program · 2025
27 pages, based on primary sources.
Download PDF ↗
Policy Research · GIZ-Commissioned

Accelerating Open Source AI in India

GIZ-commissioned policy research on compute infrastructure barriers, digital sovereignty, and open-source AI adoption in India — proposing national compute scaling and goal-based government allocation, based on expert interviews across government, practitioners, and civil society.

Digital Futures Lab / GIZ (Germany) · 2025
Supervised by Francis Fukuyama and Erik Jensen, Stanford University. Co-authored with Kevin Klyman et al. Research contributed to DFL's February 2026 institutional brief.
Team report ↗ DFL final brief ↗
Policy Lab Report · Stanford Law

Dissociation from Companies Engaged in Misleading Communications

Stanford Law and Policy Lab report on when universities should dissociate from fossil fuel companies based on disinformation conduct — developed from Stanford's Committee on Funding for Energy Research, supervised by Paul Brest and Noah Diffenbaugh.

Stanford Law School · Law and Policy Lab · Spring 2025
Co-authored with Eeshan Chaturvedi, Arman Hedayat, Tianyi Huang, Nora Swidey, and Emma Wang.
Stanford Law ↗
About
Sandeep Abraham

I work where
your adversaries
meet your governance.

I'm a digital threat strategist in the San Francisco Bay Area — mapping the systems, networks, and incentives that enable digital harm at scale, and producing findings that drive legal, regulatory, and operational responses.

Previously, I spent five years as an NSA technical linguist hunting Pacific Command's highest-priority targets. From there: architecting StubHub's fraud intelligence program at eBay; designing WhatsApp's first Trust & Safety Investigations program and leading crisis response at Meta; and most recently building the fraud hunting function at SoFi.

I hold a Master of International Policy from Stanford University, where I'm also a Journalism Fellow at the Starling Lab for Data Integrity and have contributed to AI policy research under Francis Fukuyama. I'm a service-disabled Army veteran — and I bring the same tenacity to this work that I had in uniform.

When I'm not working, I'm in Fremont with my wife and daughter, probably hunting down the best boba in the Bay.

Starling Lab Fellow, Stanford · Stanford M.A. International Policy · CFE — Certified Fraud Examiner · CAMS — AML Specialist · U.S. Army Veteran · TS/SCI (former) · Chainalysis Certified
Risky Business Solutions

Tony Eastin and I founded Risky Business Solutions in 2023 after spending years hunting influence operations and dismantling fraud networks inside the world's largest tech platforms — and before that, doing it for U.S. military and intelligence agencies.

The firm focuses on AI-driven deception, crypto financial crime, and platform abuse — serving national law firms, state Attorney General offices, technology companies, and development organizations that need the kind of investigative rigor and risk frameworks that used to be reserved for the biggest budgets. We make them accessible to the organizations that need them most.

riskybusiness.solutions ↗

Press & media
Reuters
Meta is earning a fortune on a deluge of fraudulent ads, documents show ↗
Nov 2025
Ars Technica
Bombshell report exposes how Meta relied on scam ad profits to fund AI ↗
Nov 2025
NPR · On Point
Why are scam ads everywhere online? ↗
Nov 2025
The Independent
Meta makes millions from scam ads as internal documents reveal scale of problem ↗
Nov 2025
WIRED
How a Small Iowa Newspaper's Website Became an AI-Generated Clickbait Factory ↗
Feb 2024
Unit21 · Risky Business
FinCrime Ops Tour — San Francisco · Featured Speaker ↗
Oct 2025
Integrity Institute
Unleashing the Potential of Generative AI in Integrity, Trust & Safety Work ↗
Jun 2023
BuzzFeed News
Stop the Steal and Patriot Party: The Growth and Mitigation of an Adversarial Harmful Movement ↗
Apr 2021
Recorded Future · Podcast
StubHub Leverages Empathy and Emotional Intelligence for Threat Hunting ↗
Apr 2019

Occasional writing on digital harm,
AI governance, and adversarial intelligence.

Investigation updates, policy analysis, and field notes — when there's something worth saying. Subscribe to get dispatches when they go live.

Coming soon

If you're dealing with
digital harm —
let's talk.

I take a small number of engagements at a time. The best way to start is a short conversation about what you're working on. I'll tell you quickly whether I'm the right fit, and if I'm not, I'll try to point you in the right direction.

For law firms and legal institutions: investigations and privileged communications handled appropriately.

For fintechs, exchanges, DeFi, and DAOs: retained advisory and select fractional engagements.

For digital platforms and AI companies: trust and safety strategy, red team engagements, fraud program buildouts.

For government, policy makers, and advocacy organizations: policy advisory, framework development, and investigative briefings.