I've spent 12 years mapping both — across blockchains and borders, inside platforms, and within the regulatory blind spots adversaries call home. I help law firms, regulators, tech companies, and government clients see the full picture and act on it.
A Chinese crime syndicate registered shell companies in Colorado for $50, enrolled fraudulently as an SEC investment adviser, and laundered tens of millions from U.S. victims. Every step exploited a documented regulatory gap. The regulators knew. They chose not to fix it.
This is the pattern. Adversaries don't outmaneuver defenses — they read the policy documents, find where enforcement stops, and build there. AI deepfakes, encrypted social engineering, shell networks, fraudulent registrations: none of it is technically sophisticated. It's institutionally sophisticated. The threat actors have generalized. The defenses mostly haven't.
I work with a small number of clients at a time. Here's where I tend to add the most value.
I don't do volume. I work on cases that matter.
Designed WhatsApp's first Trust & Safety Investigations program — building behavioral attribution and syndicate mapping methodology from scratch in an environment where message content is invisible. Achieved the first documented attribution of organized pig-butchering syndicates operating at scale on WhatsApp, producing findings that drove law enforcement referrals and the platform's first coordinated financial crime enforcement actions.
Meta / WhatsApp · First-of-its-kind investigative methodology
I developed risk forecasting frameworks for 15+ high-risk elections at Meta, identifying behavioral precursors to mass atrocity events in Ethiopia, Afghanistan, and India. This work — detecting coordinated network behaviors that correlate with off-platform violence — improved detection model accuracy by 60% and directly informed Meta's first EU DSA systemic risk assessment.
A small Iowa newspaper's website was hijacked and rebuilt as an AI-generated disinformation engine — one of the first documented cases of AI content systematically replacing local journalism at scale. I investigated the operation; findings published in WIRED.
Red-teamed Synthesia's AI avatar content moderation system — mapping how malicious actors across 10 cybercrime forums and 12 platforms perceive and exploit AI-generated video. Surfaced abuse vectors the system wasn't catching and drove direct policy and safety upgrades.
Alongside investigative work, I research the governance frameworks that determine whether AI and digital systems cause harm or prevent it — and where the gaps between those frameworks and adversarial reality actually live.
What legal tools do low-capacity states actually have to govern open-source models like LLaMA, DeepSeek, and Mistral? This research examines the AU Continental AI Strategy, Brazil's AI Bill, and Kenya's AI Code of Practice against the hard constraints of Global North licensing.
GIZ-commissioned policy research on compute infrastructure barriers, digital sovereignty, and open source AI adoption in India — proposing national compute scaling and goal-based government allocation, based on expert interviews with government officials, practitioners, and civil society.
Stanford Law and Policy Lab report on when universities should dissociate from fossil fuel companies based on disinformation conduct — developed from Stanford's Committee on Funding for Energy Research, supervised by Paul Brest and Noah Diffenbaugh.
I'm a digital threat strategist in the San Francisco Bay Area — mapping the systems, networks, and incentives that enable digital harm at scale, and producing findings that drive legal, regulatory, and operational responses.
Previously, I spent five years as an NSA technical linguist hunting Pacific Command's highest-priority targets. From there: architecting StubHub's fraud intelligence program at eBay; designing WhatsApp's first Trust & Safety Investigations program and leading crisis response at Meta; and most recently building the fraud hunting function at SoFi.
I hold a Master of International Policy from Stanford University, where I'm also a Journalism Fellow at the Starling Lab for Data Integrity and have contributed to AI policy research under Francis Fukuyama. I'm a service-disabled Army veteran, and I bring the same tenacity to this work that I had in uniform.
When I'm not working, I'm in Fremont with my wife and daughter, probably hunting down the best boba in the Bay.
Tony Eastin and I founded Risky Business Solutions in 2023 after spending years hunting influence operations and dismantling fraud networks inside the world's largest tech platforms — and before that, doing it for U.S. military and intelligence agencies.
The firm focuses on AI-driven deception, crypto financial crime, and platform abuse — serving national law firms, state Attorney General offices, technology companies, and development organizations that need the kind of investigative rigor and risk frameworks that used to be reserved for the biggest budgets. We make them accessible to the organizations that need them most.
Investigation updates, policy analysis, and field notes — when there's something worth saying. Subscriptions open when dispatches go live.
Coming soon
I take a small number of engagements at a time. The best way to start is a short conversation about what you're working on. I'll tell you quickly whether I'm the right fit, and if I'm not, I'll try to point you in the right direction.