The FTC reports that romance scams cost US consumers over $1.3 billion in 2024, the largest category of consumer fraud loss after investment scams. Most of those scams started on a dating app or social platform that failed to detect a fake profile at signup. Here's how to actually catch them at the front door.
Dating-app trust is a fragile thing. Every match a user has with a romance scammer, a catfish, or an AI-generated profile knocks confidence down a little. Long-term, it's the only metric that matters: do real users believe they're going to meet real people here? When the answer drifts toward "no," retention curves go off a cliff.
The depressing part is that the bulk of fake-profile damage is preventable at signup, with signals the app already has access to in the first 200ms.
Who's faking profiles, and why
Fake profiles fall into four buckets, in roughly increasing order of damage per profile:
- Spam-and-redirect bots — automated signups whose only goal is to get a chat going and drop a Telegram link to OnlyFans, a crypto pump, or a phishing site. Volume play. The profile is "live" for under 48 hours before being banned.
- Cam-girl funnels — a real person operates dozens of profiles to redirect matches to paid platforms. Often run from a content farm in Southeast Asia or Eastern Europe.
- Romance scammers — long-con operators who maintain a believable profile for weeks before pivoting to "I'm stuck overseas, can you Western Union $500." Average loss per victim is in the thousands.
- AI-generated catfish — the new top tier. ChatGPT-class models for chat, Stable Diffusion / Midjourney for photos, sometimes a real-time face filter for video calls. Indistinguishable to humans on first pass.
What unites all four: at signup time, they leave network and device fingerprints that real users typically don't.
The signals fake profiles share
Datacenter or residential-proxy origin
Real dating-app signups overwhelmingly come from mobile carrier IPs (LTE/5G) or residential broadband. Scammer farms run on cloud VMs (Hetzner, OVH, AWS, Linode) or rent residential proxies (BrightData, Smartproxy, IPRoyal) to hide their real location. Per-IP reputation is fooled by residential proxies. Per-network-pattern detection isn't.
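A minimal sketch of how a handler might branch on these origin signals. `isDatacenterIP` is an assumed field name for illustration; `isResidentialProxy` and `isVPN` match the verdict details used in the integration sketch later in this post:

```javascript
// Classify signup origin from the evaluate verdict's network signals.
function classifyOrigin(d) {
  if (d.isDatacenterIP) return 'datacenter';     // Hetzner/OVH/AWS-style cloud VM
  if (d.isResidentialProxy) return 'resi-proxy'; // BrightData-style rented exit
  if (d.isVPN) return 'vpn';
  return 'direct';                               // carrier LTE/5G or home broadband
}
```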
Antidetect-browser fingerprints
Operating dozens of profiles from the same machine requires every profile to look like it lives on a separate device. That's exactly what Kameleo, GoLogin, AdsPower, and Multilogin sell. The profile says "iPhone 15 in Atlanta", the browser surface is a perfect-looking iOS Safari fingerprint, but the underlying signals are inconsistent in ways that browser-fingerprint matching can't see but a tampering score can.
Headless / scripted signups
The lowest tier of scammer just wires up Puppeteer or Playwright against your signup form. They get rate-limited fast, but they do enough damage in the first hour to be worth it for them. (See: detecting Puppeteer & Playwright.)
Velocity / clustering
Even when each individual profile looks plausible, the same operator is creating 30 of them. Sentinel's persistent visitor ID survives clearing cookies, switching IPs, and reinstalling the app. From your point of view, what looked like 30 unrelated signups becomes one repeat visitor — extremely suspicious in a context where each real user signs up exactly once.
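A minimal sketch of the clustering check, keyed by the persistent visitor ID. The in-memory map is for illustration only; production code would want something like Redis with a TTL:

```javascript
// One operator, many profiles: count accounts created per visitor ID.
const signupsByVisitor = new Map();

function recordSignup(visitorId) {
  const n = (signupsByVisitor.get(visitorId) || 0) + 1;
  signupsByVisitor.set(visitorId, n);
  return n; // how many accounts this "device" has created so far
}

// Real users sign up once; several accounts from one device is an operator.
function isClusterOperator(visitorId, threshold = 3) {
  return (signupsByVisitor.get(visitorId) || 0) >= threshold;
}
```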
Phone / email re-use patterns
A fresh Gmail address that was created 2 hours ago, paired with a VOIP number from Twilio, on a residential proxy IP, with a tampered browser fingerprint, is not a real user. Each signal is weak alone, devastating combined.
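Combining them can be as blunt as counting co-occurrences. A sketch with illustrative thresholds; `emailAgeHours` and `isVoipNumber` are hypothetical fields you would source from your own enrichment, not from the verdict:

```javascript
// Three weak signals: tolerable alone, disqualifying together.
function weakSignalCount(s) {
  let n = 0;
  if (typeof s.emailAgeHours === 'number' && s.emailAgeHours < 24) n++; // mailbox created today
  if (s.isVoipNumber) n++;       // Twilio-style VOIP line, not a carrier number
  if (s.isResidentialProxy) n++; // rented residential exit
  return n;
}

const needsExtraVerification = (s) => weakSignalCount(s) >= 2;
```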
Where to put detection in the funnel
The cheapest place to catch a fake profile is at signup, before the profile photo upload, before the bio is written, before the first match. Catching them later is an order of magnitude harder and an order of magnitude more expensive in trust damage to other users.
The right set of touchpoints:
- Signup form load — the SDK fires asynchronously here, collects fingerprints, has the token ready by the time the user clicks "Create account".
- Signup submit — server-side /v1/evaluate call. Block obvious automation outright; flag medium-risk signups for additional verification.
- Photo upload — second checkpoint, since some operators bypass the signup screen with API replay. Re-verify the token here.
- First message sent — third checkpoint. Even a sophisticated operator that got past signup tends to start spamming on the chat surface, where behavioral signals are abundant and highly discriminating.
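One way to keep the three checkpoints consistent is a single decision function that the signup, upload, and chat handlers all call. A sketch: the checkpoint names and thresholds are illustrative, while the verdict fields match the evaluate response used in the integration sketch below:

```javascript
// Map (checkpoint, verdict details) -> action. Thresholds are illustrative.
function checkpointAction(checkpoint, d) {
  if (d.isBot) return 'block'; // scripted signup: reject at every surface
  const risky = d.isAntidetectBrowser || d.tamperingScore > 0.6;
  if (!risky) return 'allow';
  // Risky but not provably automated: degrade instead of blocking,
  // so the operator can't binary-search your rules.
  return checkpoint === 'first_message' ? 'quarantine' : 'verify';
}
```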
Integration sketch
Add the SDK to your signup screen / app:
<script async src="https://fp.sntlhq.com/agent"></script>
On signup submission server-side:
// Node 18+ (fetch is global). Express-style handler;
// createAccount is your own persistence layer.
const verdict = await fetch('https://sntlhq.com/v1/evaluate', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer sk_live_YOUR_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ token: req.body.sentinelToken })
}).then(r => r.json());

const d = verdict.details;

// Score 0-100. Higher = more trustworthy.
let trust = 100;
if (d.isBot) trust -= 100;                 // scripted / headless signup
if (d.isAntidetectBrowser) trust -= 60;    // Kameleo, GoLogin, AdsPower, ...
if (d.isResidentialProxy) trust -= 25;
if (d.isVPN) trust -= 15;
if (d.tamperingScore > 0.6) trust -= 30;   // spoofed fingerprint surface
if (d.repeatVisitorCount > 1) trust -= 40; // same device, multiple signups

if (trust <= 0) {
  // Generic error body: don't tell the operator which signal tripped.
  return res.status(403).json({ error: 'signup_unavailable' });
}

await createAccount({
  ...userInput,
  trustScore: trust, // always 1-100 here; non-positive scores were rejected above
  sentinelVisitorId: verdict.visitorId
});
Now downstream features can read trustScore:
- Score 80–100: full unrestricted experience.
- Score 40–79: require photo verification before messaging strangers; daily message quota.
- Score 0–39: shadow-restrict — profile visible only to the user themselves until they pass a second verification step. They never know they're throttled, so the operator can't iterate.
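The tiers above reduce to a small mapping function (the tier names here are illustrative, not product terminology):

```javascript
// trustScore (0-100, computed at signup) -> experience tier.
function tierFor(trustScore) {
  if (trustScore >= 80) return 'full';    // unrestricted experience
  if (trustScore >= 40) return 'limited'; // photo verification + message quota
  return 'shadow-restricted';             // profile visible only to themselves
}
```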
What this changes
Apps that put this kind of tiered trust score in front of account creation typically see:
- 50–80% reduction in fake-profile reports from real users in the first 30 days.
- Romance-scam financial loss reports drop sharply — most scams need at least one chat to begin, and shadow-restricted accounts never get there.
- Real-user 7-day retention goes up measurably because their early experience contains fewer obvious bots.
- Trust & Safety team workload drops because the bulk of "ban this profile" tickets stop being generated.
Get started
Free key at sntlhq.com/signup — 1,000 requests/hour, no card. Drop the SDK on your signup screen and the verify call before createAccount, and you'll see the fake-profile baseline change inside the first day. Node SDK at @sentinelsup/sdk.