If your headless detection still pivots on navigator.webdriver, you're catching the bots that didn't read the Stack Overflow answer. Modern Puppeteer, Playwright, and Selenium stacks ship with stealth plugins that patch every classical detection vector before the page even loads. The signals that still work in 2026 are not on navigator. They're in the input physics, the GPU rendering pipeline, the timing of synthesised events, and the entropy of the keystroke stream. Here's what to instrument.
Headless browsers are the workhorse of automation: scrapers, credential stuffers, scalper bots, click farms, ad-fraud rings, content harvesters. Every one of those threats begins with a Chromium process driven via the Chrome DevTools Protocol (CDP). If you can detect the CDP-driven session, you can stop the threat before it executes. If you can't, every other defense is downstream of an attacker who can already script your interface.
The detection arms race, briefly
The classical headless signals — and what defeated each:
- navigator.webdriver === true — solved 2018 by puppeteer-extra-plugin-stealth.
- window.chrome.runtime missing — solved 2019 by stealth plugins, undetected-chromedriver, and browser launchers.
- Plugin array empty — solved 2019.
- Permissions API inconsistencies (Notification.permission) — solved 2020.
- Languages mismatch — solved 2020.
- Iframe contentWindow.chrome === undefined — solved 2021.
- WebGL VENDOR / RENDERER spoofing — defeated by stealth plugins that override the WebGLRenderingContext.getParameter return values.
- Headless markers in the User-Agent string — defeated by Chromium build flags and runtime patching.
Every static-property check has a stealth-plugin counter. The detection axis has to move off static properties entirely.
What still works in 2026
1. Input-event entropy
This is the most reliable single signal we have. Real human input — even very fast input — produces continuous streams of events with predictable physical characteristics:
- Mouse: mousemove events arrive as a dense stream at millisecond-scale intervals, with sub-pixel coordinates and acceleration profiles that follow Fitts's-law-like curves toward targets.
- Touch: multiple touchmove events with pressure variation between events, tilt-angle data when supported, and finger-area variation.
- Keyboard: keydown and keyup typically separated by 30–120 ms; modifier-key combinations show physical-impossibility patterns when faked (Shift+x with Shift released before x is released is normal; the inverse is suspicious).
CDP-driven mouse movement, even via page.mouse.move() with stealth plugins, produces synthesised events with characteristic anomalies: integer-pixel coordinates only, perfectly straight or perfectly curved trajectories with no jitter, identical inter-event timings on every move regardless of distance. Real users' mouse movements are noisy in a particular way that's very hard to fake without explicitly modeling it.
Detection rule: a session that performs a meaningful action (login, signup, checkout) without a single mouse-move event preceding it is bot-by-default. Add the entropy check on top — the mousemove stream should have a Shannon entropy above a threshold. CDP-injected events fail it almost universally.
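A minimal sketch of that rule, assuming a quantise-then-Shannon-entropy approach over movement deltas; the bucketing scheme and the 2.0-bit threshold are illustrative assumptions to calibrate against your own traffic, not published values.

```typescript
// Minimal sketch of the mousemove-entropy check. Bucket scheme and the
// 2.0-bit threshold are illustrative assumptions, not calibrated values.
type MoveSample = { x: number; y: number; t: number };

const samples: MoveSample[] = [];

document.addEventListener("mousemove", (e) => {
  samples.push({ x: e.clientX, y: e.clientY, t: e.timeStamp });
});

// Shannon entropy (bits) of a sequence of discrete symbols.
function shannonEntropy(symbols: string[]): number {
  const counts = new Map<string, number>();
  for (const s of symbols) counts.set(s, (counts.get(s) ?? 0) + 1);
  let h = 0;
  for (const c of counts.values()) {
    const p = c / symbols.length;
    h -= p * Math.log2(p);
  }
  return h;
}

// Quantise each step into a coarse (direction, speed) bucket, then measure
// the entropy of the bucket sequence.
function mouseMoveEntropy(moves: MoveSample[]): number {
  const buckets: string[] = [];
  for (let i = 1; i < moves.length; i++) {
    const dx = moves[i].x - moves[i - 1].x;
    const dy = moves[i].y - moves[i - 1].y;
    const dt = Math.max(moves[i].t - moves[i - 1].t, 0.1);
    const angle = Math.round((Math.atan2(dy, dx) / Math.PI) * 8); // coarse direction bucket
    const speed = Math.round(Math.log2(Math.hypot(dx, dy) / dt + 1)); // log-scaled speed bucket
    buckets.push(`${angle}:${speed}`);
  }
  return shannonEntropy(buckets);
}

// Bot-by-default rule from the text: no movement before the action, or a
// stream whose entropy sits below the (assumed) threshold.
function looksAutomated(): boolean {
  if (samples.length === 0) return true;
  return mouseMoveEntropy(samples) < 2.0; // assumed threshold
}
```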
2. CDP runtime detection
The Chrome DevTools Protocol leaves a small but persistent runtime signature regardless of stealth patching. Calling console.debug with a %c-formatted argument, or with an object whose getter is only exercised while a DevTools/CDP client is attached, produces a measurable timing delta. Multiple frameworks have published similar techniques (Hero's CDP-detection, ADTrustLab's research). The gap is small (~0.3 ms) but stable, and combined with input entropy it's a strong joint signal.
This signal does need to be re-checked every few months — Chromium's CDP team patches the most-public detections — but as of May 2026 several variants still work reliably and the underlying gap is fundamental enough that it's unlikely to fully close.
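Two probe sketches in that spirit: the first is a published getter-serialisation variant (an Error whose stack getter is touched when a CDP client with the Runtime domain enabled serialises console arguments), the second is the timing-delta idea from the text. Both are assumptions about one way to implement the check, and both need the periodic re-verification described above.

```typescript
// Getter-serialisation probe: when a CDP client has the Runtime domain
// enabled, Chromium serialises console arguments to emit
// Runtime.consoleAPICalled, which touches the Error's stack getter.
// Treat this as one variant that may be patched in future Chromium builds.
function probeCdpRuntime(): boolean {
  let touched = false;
  const err = new Error();
  Object.defineProperty(err, "stack", {
    configurable: true,
    get() {
      touched = true;
      return "";
    },
  });
  console.debug(err);
  return touched;
}

// Timing variant: the same call is measurably slower when a CDP consumer is
// attached. The ~0.3 ms figure is the article's observation; the repetition
// count here is an assumption.
function probeCdpTiming(iterations = 50): number {
  const t0 = performance.now();
  for (let i = 0; i < iterations; i++) console.debug("%c", "");
  return (performance.now() - t0) / iterations; // mean ms per call
}
```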
3. GPU / canvas rendering signature
Headless Chrome's GPU pipeline differs from headful Chrome's in measurable ways even when running with --use-gl=swiftshader or with hardware acceleration emulation. Specific differences:
- Subpixel positioning of text in canvas rendering deviates from headful by stable, machine-predictable values.
- Bezier curve aliasing patterns in canvas paths show characteristic differences.
- WebGL readPixels() output for a controlled scene differs from headful by stable byte-level deltas.
Maintaining a hash table of "canvas rendering of a known calibration scene → CDP-driven Chromium" matches is one of the cleanest detection surfaces. Stealth plugins don't touch the GPU pipeline because doing so would require swapping in a fake renderer, which itself produces detectable inconsistencies.
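A sketch of one way to build that calibration hash using the 2D canvas path; the scene contents, the SHA-256 digest, and the lookup against a table of known headless renders are assumptions about how you'd operationalise the signal. A WebGL readPixels() variant of the same idea works analogously.

```typescript
// Sketch of a calibration-scene hash: draw a fixed scene, read the pixels
// back, and hash them for lookup against known CDP-driven Chromium renders.
async function canvasCalibrationHash(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = 256;
  canvas.height = 64;
  const ctx = canvas.getContext("2d");
  if (!ctx) return "no-2d-context";

  // Fixed scene: antialiased text plus a bezier path, both sensitive to the
  // subpixel-positioning and aliasing differences described above.
  ctx.textBaseline = "alphabetic";
  ctx.font = "16px Arial";
  ctx.fillStyle = "#4a90d9";
  ctx.fillText("calibration-scene \u2603 1.7320508", 2.5, 20.5);
  ctx.strokeStyle = "#d94a4a";
  ctx.beginPath();
  ctx.moveTo(10.25, 40.75);
  ctx.bezierCurveTo(60.5, 10.1, 180.9, 60.3, 250.6, 30.2);
  ctx.stroke();

  // Hash the raw pixel buffer; stable per renderer, divergent across pipelines.
  const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
  const digest = await crypto.subtle.digest("SHA-256", pixels);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```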
4. Timing of high-precision APIs
The Performance API and requestAnimationFrame behave differently when the browser is being driven via CDP, especially when the page is "headless" without a real compositor:
- requestAnimationFrame callbacks fire on a precise vsync-aligned cadence in headful (typically 60 Hz). In headless, the cadence is software-generated and exhibits a different jitter profile.
- performance.now() resolution varies by browser mode and security context. Headless mode can show non-default resolution patterns.
Useful as supporting signals, less reliable as primaries — modern stealth tooling does try to spoof the rAF cadence, with mixed success.
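A small sketch for measuring the rAF jitter profile; the 60-frame sample size is an arbitrary assumption, and whatever threshold you apply to the resulting jitter should come from your own headful baselines.

```typescript
// Sample frame-to-frame requestAnimationFrame deltas and report their mean
// and standard deviation (the "jitter" referenced in the telemetry below).
function measureRafJitter(frames = 60): Promise<{ meanMs: number; jitterMs: number }> {
  return new Promise((resolve) => {
    const deltas: number[] = [];
    let last = 0;
    function tick(ts: number) {
      if (last > 0) deltas.push(ts - last);
      last = ts;
      if (deltas.length < frames) {
        requestAnimationFrame(tick);
      } else {
        const mean = deltas.reduce((a, b) => a + b, 0) / deltas.length;
        const variance =
          deltas.reduce((a, b) => a + (b - mean) ** 2, 0) / deltas.length;
        resolve({ meanMs: mean, jitterMs: Math.sqrt(variance) });
      }
    }
    requestAnimationFrame(tick);
  });
}
```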
5. Behavioral baselines vs the population
Per-session and per-action baselines that real users vary on but bots don't:
- Typing dwell-time variance. Real users vary their key dwell time wildly across keys; bot scripts don't, even when randomised (see the sketch after this list).
- Form-fill order entropy. Real users tab around, click around, edit fields out of order. Bots tend to fill in source-order.
- Scroll-event distribution. Real users produce trackpad/mouse-wheel scrolls in characteristic bursty patterns; programmatic scroll is essentially absent or perfectly uniform.
- Page-dwell time distribution. Bots exhibit bimodal dwell distributions — either nanoseconds (script raced ahead) or exactly the configured wait time. Real users are continuous.
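A sketch instrumenting two of these baselines, key dwell-time variance and form-fill order; the focusin selector and the choice to key dwell times by e.code are illustrative choices, not a prescribed implementation.

```typescript
// Collect key dwell times (keydown -> keyup per physical key) and the order
// in which form fields receive focus.
const dwellTimes: number[] = [];
const keyDownAt = new Map<string, number>();
const fillOrder: string[] = [];

document.addEventListener("keydown", (e) => keyDownAt.set(e.code, e.timeStamp));
document.addEventListener("keyup", (e) => {
  const down = keyDownAt.get(e.code);
  if (down !== undefined) dwellTimes.push(e.timeStamp - down);
});

// Record which field gained focus, in order; bots tend to reproduce the
// form's source order with no revisits or out-of-order edits.
document.addEventListener("focusin", (e) => {
  const el = e.target as HTMLElement | null;
  if (el?.matches("input, textarea, select")) fillOrder.push(el.id || el.tagName);
});

// Standard deviation of dwell times; near-zero variance is the bot tell.
function dwellStdDev(): number {
  if (dwellTimes.length < 2) return 0;
  const mean = dwellTimes.reduce((a, b) => a + b, 0) / dwellTimes.length;
  const variance =
    dwellTimes.reduce((a, b) => a + (b - mean) ** 2, 0) / dwellTimes.length;
  return Math.sqrt(variance);
}
```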
What to log per session
The minimum useful client-side telemetry, captured before the action you care about (login, signup, checkout):
```json
{
  "input": {
    "mouseMoveCount": 42,
    "mouseMoveEntropy": 4.13,
    "mouseTrajectorySmoothness": 0.78,
    "touchEventCount": 0,
    "keyDwellMeanMs": 84,
    "keyDwellStdDev": 33,
    "scrollEventCount": 7,
    "scrollEventEntropy": 2.6
  },
  "render": {
    "canvasFingerprint": "f3a9...",
    "webglRendererHash": "b21c...",
    "rafJitterMs": 0.42
  },
  "timing": {
    "performanceNowResolutionMs": 0.005,
    "loadTimeMs": 1842
  },
  "navigator": {
    "webdriver": false,
    "languages": ["en-US", "en"],
    "platform": "Win32",
    "hardwareConcurrency": 8
  }
}
```
The first block (input) is by itself a strong-enough signal to power the detection in most cases. The render block is the high-confidence supporting signal. The timing block is a tie-breaker. The navigator block is for the bots who don't bother with stealth plugins — it catches them cheaply and lets you allocate resources to the harder cases.
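One way to capture the payload at the right moment, sketched with navigator.sendBeacon so the telemetry survives the navigation a form submit triggers; the /telemetry endpoint, the form#login selector, and the buildTelemetry() helper (which would assemble the JSON shape above from collectors like the ones sketched earlier) are all hypothetical.

```typescript
// Hypothetical helper that assembles the telemetry object shown above from
// the input / render / timing / navigator collectors.
declare function buildTelemetry(): Record<string, unknown>;

// Ship the payload when the protected action fires; sendBeacon is queued by
// the browser and survives the navigation that usually follows a submit.
document.querySelector("form#login")?.addEventListener("submit", () => {
  const payload = new Blob([JSON.stringify(buildTelemetry())], {
    type: "application/json",
  });
  navigator.sendBeacon("/telemetry", payload); // endpoint is an assumption
});
```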
Patterns to deploy carefully
Don't block on a single signal
Every individual signal has false positives. Real users on touch-only devices have zero mousemove events. Real users with bursty trackpads have low scroll entropy. Real users on integrated GPUs have non-standard canvas signatures. Block on a vector, not a scalar — the joint distribution is what catches automation reliably while keeping false-positive rates under 0.1%.
Score, then gate
Return a 0–1 confidence that the session is automation, then route to actions (a routing sketch follows the list):
- 0.0–0.3: allow.
- 0.3–0.6: step-up (CAPTCHA, email confirm, 2FA challenge).
- 0.6–0.9: hard challenge or honeypot route.
- 0.9–1.0: block, log, blacklist visitorId.
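A routing sketch under those bands; the signal weights are placeholder assumptions rather than fitted values, and in production the score would come from a model trained on labelled traffic rather than a hand-weighted sum.

```typescript
type Verdict = "allow" | "step-up" | "hard-challenge" | "block";

// Map the composite score onto the action bands above.
function gate(score: number): Verdict {
  if (score < 0.3) return "allow";
  if (score < 0.6) return "step-up";        // CAPTCHA, email confirm, 2FA
  if (score < 0.9) return "hard-challenge"; // or honeypot route
  return "block";                           // log and blacklist visitorId
}

// Joint score over several signals rather than a single hard cut-off.
function score(signals: {
  inputEntropyLow: boolean;
  cdpProbeHit: boolean;
  canvasHashKnownHeadless: boolean;
  rafJitterAnomalous: boolean;
}): number {
  let s = 0;
  if (signals.inputEntropyLow) s += 0.4;         // assumed weight
  if (signals.cdpProbeHit) s += 0.35;            // assumed weight
  if (signals.canvasHashKnownHeadless) s += 0.2; // assumed weight
  if (signals.rafJitterAnomalous) s += 0.1;      // assumed weight
  return Math.min(s, 1);
}
```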
Beware of headless-Chromium-as-a-service
BrowserBase, Browserless, ScrapingBee, and similar providers run hardened headless instances with stealth-grade configurations and rotating residential proxies. They're harder to detect than self-hosted Puppeteer because the operators have invested in fingerprint normalisation. The behavioral signals (input entropy, action timing) still catch them; the static-property signals do not.
How Sentinel handles this
The automationDetected field in the device-intel response is a composite that runs the signals above (and ~15 others) in a single client-side SDK with no measurable performance impact. The browserTampering float catches the harder cases where the automation framework is well-stealthed but the underlying engine is still patched. Combined with residentialProxy and antidetectBrowser, the joint detection rate against modern bot stacks runs in the 95%+ range with sub-0.1% false-positive rate on real consumer traffic.
Free tier (1,000 requests/hour, no card) is enough to instrument the highest-value paths — login, signup, checkout, password-reset — and watch the automation signal in production. Most teams find their existing CAPTCHA-only defense was missing 30–60% of real headless traffic. The Puppeteer/Playwright-specific deep-dive covers framework-level detection in more depth.
Headless detection isn't a single check anymore. It's a behavioral and rendering-level analysis that has to assume the attacker patched everything cosmetic. The signals that still work in 2026 are the ones the attacker can't patch without rewriting the underlying browser engine — and that's a much harder lift than installing a stealth plugin.