Every product launch creates a moment of high-value scarcity. Fraudsters have built a systematic playbook around exploiting that moment — gaming referral programs, hoarding early access slots, stacking launch bonuses, and scalping limited drops. Robinhood's referral fraud cost the company tens of millions of dollars. Bots have reportedly driven the overwhelming majority of launch-minute traffic on sought-after Nike SNKRS drops. The defense requires understanding the exact mechanics — because standard rate-limiting and email verification stop almost none of it.
This post maps the five primary launch-abuse patterns, how they're executed, and the signals that actually catch them. It's written for product, growth, and engineering teams who are about to launch something — or who are already bleeding quietly from launch abuse they haven't fully measured.
Pattern 1: Referral program self-referral at scale
Referral programs create a financial incentive per new signup that traces back to an existing user. The fraud is structurally simple: create many fake "new" accounts and refer them from a controlling "referrer" account. Each fake account generates a payout to the fraudster.
The Robinhood case is the canonical example. Robinhood's referral program paid $5–$200 in free stock per successful referral (the stock value was randomized within that range per referral). Fraud rings created thousands of referral accounts per operator, using:
- Disposable email services (Guerrilla Mail, Mailinator, and thousands of domain-per-day throwaway SMTP providers) for each new account's email address
- Google Voice and VoIP number pools for phone verification steps
- Residential proxies to give each account a distinct IP from the referrer and from each other
- Antidetect browsers to rotate device fingerprints, since Robinhood's app collected device signals to try to detect duplicates
At scale, a single fraud ring operating 2,000 accounts at the $5 floor earned $10,000 per referral cycle. At the $200 ceiling (rare, but available), a well-run ring could net $400,000 from 2,000 accounts. Industry estimates for Robinhood's total referral fraud exposure over the program's lifetime are in the tens of millions of dollars — and that's a single program at one company. Revolut, Cash App, and Chime all ran similar programs during their growth phases and faced structurally identical attacks.
Pattern 2: Waitlist slot hoarding and resale
High-demand early-access waitlists — for new fintech products, closed betas, exclusive communities, AI tools with limited capacity — become grey markets when scarcity is real and desire is high. The fraud pattern:
- An operator bot monitors the waitlist signup URL for availability
- When the waitlist opens, the bot auto-registers 50–500 slots using distinct email addresses, proxied IPs, and rotated device fingerprints
- Each slot is a real invite code (or position number) that grants access to the product
- The hoarded slots are listed for resale on Discord servers, Telegram groups, and specialized grey-market forums at $5–$100 per slot
The product company's problem is threefold. First, legitimate users wanting early access are pushed to the back of a queue that fraudsters have front-loaded. Second, the early access community — often seeded deliberately to generate word-of-mouth — is diluted with disengaged resellers who have no actual interest in the product. Third, the "organic demand" metrics (waitlist size, early conversion rate) are inflated and misleading for planning purposes.
NFT mint waitlists are a particularly acute version of this problem. During the 2021–2022 NFT bull market, whitelist spot hoarding was so systematic that some collections estimated 60–70% of their whitelist was held by bots and resellers. The pattern has migrated forward into AI tools, limited SaaS beta programs, and hardware pre-orders.
Pattern 3: "First N users get X" promotion exploitation
Growth teams routinely deploy acquisition mechanics like "the first 500 users to sign up get 3 months free" or "first 1,000 customers get lifetime pricing." These promotions are designed to create urgency and reward early adopters. They are also trivially farmable.
A single operator with a working automation setup can claim all 500 or 1,000 promotional slots in under a minute if no fraud controls gate the signup. The accounts sit dormant after claiming the promotion (since there's no genuine usage intent), or in some cases the promo-entitled accounts are themselves sold: lifetime plan access sold for 20–40% of the product's standard annual price is a meaningful margin if you're running hundreds of accounts.
The 2023 wave of AI tools offering "free lifetime access to the first X users" was exploited systematically. Several well-known AI writing and productivity tools reported that within days of launch, their "free lifetime" cohort was almost entirely inert — no usage, no product engagement — while the accounts claiming paid plans were the actual product users. The lifetime accounts had been hoarded and sold.
Pattern 4: Multi-accounting to stack launch bonuses across SaaS tools
SaaS tools commonly offer extended free trials or enhanced features for new accounts during a launch period. Multi-account operators create multiple accounts per person — under different emails, different identities if KYC is light, different devices — to stack these bonuses. The primary use cases:
- Agency resale: a freelancer or agency creates 10–20 "client" accounts to give each client access to a premium trial without paying — all managed under a central antidetect browser setup that keeps the accounts appearing unrelated
- Internal stacking: one company creating multiple accounts to extend their trial period indefinitely (each new account is a fresh trial)
- Competitive intelligence: a competitor creating many trial accounts to access all product features, export all content, and cancel before conversion — systematically, using automation
The SaaS trial abuse problem is significant because it distorts the core growth metric: trial-to-paid conversion rate. If 30% of your "trial" accounts are multi-accounts with no intent to convert, your genuine conversion rate is substantially higher than it looks — and your product and growth decisions are being made on polluted data.
Pattern 5: Scalping bots on limited product drops
Sneaker drops, game console releases, limited-edition merchandise, and NFT mints all share a characteristic: supply is artificially constrained and resale value is immediate. Scalping bots automate the entire purchasing flow to claim inventory before human users can complete a single checkout step.
Nike SNKRS is the most documented example. During the Travis Scott Air Jordan 1 drop in 2019, Nike received over 2 million entries in under 10 minutes for roughly 5,000 pairs — a ratio consistent with heavy automated inflation. For the Air Dior Jordan 1 in 2020, 5 million entries competed for 8,000 pairs. Nike's own post-mortems from that era acknowledged that bot traffic represented the substantial majority of launch-minute requests on high-demand releases.
The PlayStation 5 launch in November 2020 is the retail equivalent. Retailers reported that within seconds of inventory going live, bot-operated accounts had purchased allocation limits across hundreds of simultaneous sessions, routing each checkout through a different residential proxy IP to evade per-IP purchase limits. A single scalping operation that day could net $100–$200 per unit in immediate resale margin, and the bots completed purchases in milliseconds.
The same pattern applies to game console drops, graphics cards (GPU scalping was particularly acute during the 2020–2022 crypto mining boom), event tickets, and limited-run collectibles. The technology stack is identical: residential proxies, antidetect browsers, automated checkout toolkits (AIO bots: Supreme Bot, Kodai, Stellar, NSB), and inventory monitoring to trigger the run at the exact moment availability opens.
How residential proxies and antidetect browsers make it invisible
Standard defenses — IP rate limiting, email verification, CAPTCHA — all fail against the residential proxy and antidetect browser combination because they check the wrong layer:
- IP rate limiting blocks clients that exceed N requests per IP per hour. Residential proxies give each request a distinct residential IP, so the limit never fires.
- Email verification confirms that the address can receive mail. Disposable email services and VoIP SMS pools receive mail and texts just fine, so verification passes.
- CAPTCHAs are outsourced to solving services at roughly $0.001 per solve. A $100 budget buys 100,000 solves — enough to complete a full launch-day attack with change left over.
- Device fingerprinting (naive) checks a cookie or a basic user-agent string. Antidetect browsers rotate canvas fingerprints, WebGL hashes, font sets, audio fingerprints, and hardware signatures. Naive fingerprinting doesn't catch the rotation.
The residential proxy and antidetect browser combination attacks the identification layer from both directions simultaneously: the network address looks residential and distinct, and the device fingerprint looks like a different consumer device. Against controls that rely on either IP or basic device ID, this combination is effectively invisible.
The signals that betray launch-abuse bots
Timing clustering at inventory open
Human users arrive at a product drop over a natural distribution — some at exactly the announced time, more in the following 30–120 seconds as they navigate from wherever they were. Bot fleets arrive in tight clusters with inter-request times measured in milliseconds. The timing distribution of requests in the first 60 seconds of a limited-inventory event is a powerful discriminating signal: human arrival curves are smooth; bot arrival curves spike at the exact millisecond of inventory availability and have internal timing patterns consistent with scripted parallelism.
Device fingerprint reuse across accounts
Antidetect browsers are good but not perfect. The same physical hardware running Multilogin or Kameleo produces profiles that rotate across many surface signals, but deep signals — hardware concurrency, screen geometry, AudioContext characteristics, and the WebGL unmasked vendor and renderer strings — are harder to spoof consistently. A visitorId derived from these signals can cluster multiple "distinct" antidetect profiles back to the same physical device, revealing multi-account rings that look unrelated at the surface level.
Additionally, antidetect browser detection itself is a strong signal. Multilogin, Kameleo, GoLogin, and Dolphin{anty} leave detectable artifacts in the browser environment that legitimate consumer devices don't produce. An antidetect browser at a product drop is nearly always a bot or a scalper — not a legitimate consumer.
IP subnet clustering
Residential proxy providers assign exit nodes from their available pool. At very high attack volumes, a large-scale operation exhausts nearby exits before routing further afield, producing a disproportionate representation from certain /24 subnets even though each exit is technically a different IP. Subnet clustering analysis — the percentage of requests in a time window that share the first three octets of their source IP — surfaces large fleet operations that appear distributed at the individual IP level.
Behavioral absence
Human users at a product launch page scroll, pause, zoom on images, re-read descriptions, and hover over multiple options before purchasing. Bot automation goes directly from page load to form submission to checkout — zero intermediate interaction, near-zero dwell time, no scroll events, no hover events. The behavioral signal isn't just timing; it's the complete absence of the exploratory behavior that characterizes genuine consumer intent.
How Sentinel blocks launch abuse
The detection architecture for launch events requires faster-than-human evaluation, because the entire attack window can be 30–120 seconds for a limited drop. Sentinel's sub-40ms evaluation latency is designed specifically for this: the API call completes before the attacker's checkout request completes.
```javascript
// Launch day protection — evaluate every request at inventory access.
// `db` and `getQueuePosition` are application-defined helpers.
app.post('/drops/:dropId/claim', async (req, res) => {
  const { dropId } = req.params;

  const result = await fetch('https://api.sntlhq.com/v1/evaluate', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.SENTINEL_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      visitorId: req.body.visitorId,                            // from Sentinel JS snippet
      ip: req.headers['x-forwarded-for']?.split(',')[0].trim(), // first hop = client IP
      userAgent: req.headers['user-agent']
    })
  }).then(r => r.json());

  const {
    riskScore,          // 0–100 composite
    residentialProxy,   // true if known proxy exit
    antidetectBrowser,  // true if Multilogin / Kameleo / GoLogin / Dolphin{anty}
    automationDetected, // true if Selenium / Puppeteer / Playwright
    visitorId           // stable device identifier for cluster indexing
  } = result;

  // Hard block: automation or antidetect almost always indicates a scalper
  if (automationDetected || antidetectBrowser) {
    return res.status(403).json({ error: 'access_denied' });
  }

  // Residential proxy with elevated risk score: queue entry, not checkout
  if (residentialProxy && riskScore > 60) {
    return res.json({ status: 'queued', position: await getQueuePosition(visitorId) });
  }

  // Check this device hasn't already claimed this drop
  const priorClaim = await db.checkDropClaim(visitorId, dropId);
  if (priorClaim) {
    return res.status(409).json({ error: 'already_claimed' });
  }

  // Normal user: allow purchase
  await db.recordDropClaim(visitorId, dropId);
  return res.json({ status: 'allowed' });
});
```
The visitorId cluster check at the last step is the most important element for multi-accounting scenarios: even if the bot operator successfully presents a low risk score by using a warmed residential proxy, the visitorId persists the device identity across sessions and catches the same device claiming the drop multiple times under different accounts.
For referral program protection, the same visitorId check runs at signup to flag cases where the referrer and the referred user share a device — the clearest possible signal of self-referral. No legitimate referral occurs when two "different people" use the exact same physical device to sign up.
The fundamental insight is that launch abuse exploits the moment of maximum scarcity using the moment of maximum traffic. The controls need to operate at sub-second latency, check signals that residential proxies can't spoof, and cluster accounts by device rather than by IP or email.
Product launches are high-stakes enough that a fraud event during the launch window doesn't just cause financial loss — it poisons growth metrics, seeds the community with disengaged actors, and can permanently damage user trust. The engineering investment in launch-day fraud controls has an ROI that compounds across every future launch the company runs.