v2 shows a checkbox or image puzzle. v3 returns a score between 0.0 and 1.0 and lets you decide what to do with it. Both are still maintained, both have working secret keys, and both have a long list of teams who picked the wrong one because they assumed v3 was just "the newer version of v2." It isn't. They're different tools with different failure modes.

What each one does

reCAPTCHA v2 is a verification widget. The user clicks a checkbox or solves an image grid. The browser receives a token. You send the token to your server, your server calls Google's siteverify endpoint, and you get back success: true/false. The contract is binary.
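That server-side step is where most broken integrations fail, so it's worth making concrete. A minimal sketch of the v2 flow, assuming the standard siteverify contract (POST form fields `secret`, `response`, optional `remoteip`; JSON body with a `success` flag):

```python
# Minimal sketch of server-side v2 verification. Trust only an explicit
# success flag in the parsed body; everything else is a fail.
import json
import urllib.parse
import urllib.request

SITEVERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def verify_v2_token(secret, token, remote_ip=None):
    """POST the client token to Google and return the parsed JSON body."""
    fields = {"secret": secret, "response": token}
    if remote_ip:
        fields["remoteip"] = remote_ip
    data = urllib.parse.urlencode(fields).encode()
    with urllib.request.urlopen(SITEVERIFY_URL, data=data, timeout=5) as resp:
        return json.load(resp)

def is_human(siteverify_body):
    """v2's contract is binary: only a literal true counts as a pass."""
    return siteverify_body.get("success") is True
```

Note the `is True` check: a missing, null, or string-valued field is treated as a fail, which is the only safe reading of a binary contract.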

reCAPTCHA v3 is a scoring service. There is no widget. A script runs on every page, watches user behaviour, and when you call grecaptcha.execute() you get a token. Your server verifies the token and Google returns a score from 0 (probably bot) to 1 (probably human). You decide the threshold, the action, and what to do for borderline scores. The contract is analog.
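Because the contract is analog, the server-side logic for v3 has to make three separate checks: token validity, action binding, and score. A sketch of that decision function, assuming the documented v3 siteverify body (`success`, `score`, `action`); the 0.5 default threshold here is illustrative, not a recommendation:

```python
# Hedged sketch of v3 server-side handling. The siteverify endpoint is the
# same as v2; v3 bodies add "score" and "action". You own the threshold.
def v3_decision(body, expected_action, threshold=0.5):
    """Return 'allow', 'challenge', or 'reject' for a verified v3 body."""
    if body.get("success") is not True:
        return "reject"      # token invalid, expired, or malformed
    if body.get("action") != expected_action:
        return "reject"      # token was minted for a different action
    score = body.get("score", 0.0)
    return "allow" if score >= threshold else "challenge"
```

The action check matters: without it, a bot can harvest a high-scoring token on a harmless page and replay it against your signup endpoint.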

| Aspect | v2 | v3 |
| --- | --- | --- |
| User-facing UI | Checkbox or image puzzle | None |
| Output | Pass/fail | Score 0.0–1.0 |
| Decision logic | Vendor decides | You decide |
| Friction | Visible | Zero |
| False-positive cost | High (user is annoyed) | Low if threshold is permissive |
| False-negative cost | Bot solved a puzzle | Bot got a high score |
| Coverage | About 10M sites | About 1.2M sites |

When v2 is the right pick

Use v2 when you want a definitive yes/no on a small number of high-stakes actions. Account signup, password reset, contact forms with email side-effects, abuse-report endpoints. The friction is the point: you're explicitly saying "the cost of one extra second of user time is acceptable to filter bots out of this endpoint."

v2 also makes sense when your traffic includes a lot of unauthenticated users you can't reliably score. v3 needs behavioural history; on a fresh-session, no-cookie visit, the score will be middling and not useful. v2 just asks them to click a box.

When v3 is the right pick

Use v3 when you want passive risk scoring across the whole site rather than a hard gate on specific actions. Page-view abuse, scraping, account takeover signals — anywhere you'd benefit from a score per request without disrupting the user. The classic deployment is "score every request, log scores below 0.5, block scores below 0.2, and challenge scores between 0.2 and 0.5 with a v2 fallback."
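That tiered deployment reduces to a small routing function. A sketch using the thresholds from the description above — both cutoffs are starting points you'd tune against your own score distribution:

```python
# Tiered routing on a v3 score, per the classic deployment described above.
# The 0.2 / 0.5 cutoffs are illustrative; tune them against real traffic.
def route_by_score(score):
    """Map a v3 score to an action tier."""
    if score < 0.2:
        return "block"          # confident bot: reject outright
    if score < 0.5:
        return "challenge_v2"   # ambiguous: fall back to a visible v2 widget
    return "allow"              # likely human: no friction
```

The "log scores below 0.5" part of the deployment is the piece most teams skip — and without that logged distribution, there's no data to tune the cutoffs with.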

The catch is that v3 requires you to write the decision logic. Most teams who deploy v3 set a threshold of 0.5 and never look at the data again, which means they get worse coverage than v2 with more complexity. v3 is a tool for teams who will actually tune it.

Where both fall short

Both v2 and v3 send all telemetry to Google. That is a real privacy and compliance question in the EU; German data protection authorities and several European courts have flagged reCAPTCHA cookies under GDPR. If your privacy policy doesn't already disclose Google as a data processor, a v2/v3 deployment is a problem.

Both versions are also vulnerable to industrial solver services. A v2 puzzle costs about $1–$2 per 1,000 solves on the open market. v3 is harder to attack at scale but trivially defeated by a real Chrome browser running with a residential proxy — the sort of setup that scraping operations use anyway.

Modern bot operations don't care which version you use. They care whether the verification is bound to action context, whether the token is single-use, and whether your server validates it at all. Many sites still don't, which is why bot traffic patterns look the same regardless of v2 vs v3 deployment.
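Action binding and single-use enforcement are your responsibility regardless of version. A sketch of the server-side replay check, using an in-memory store for illustration (a real deployment would use Redis or similar; the 120-second window is a hypothetical choice, not a reCAPTCHA constant):

```python
# Sketch of single-use token enforcement with action binding.
# _seen is an in-memory stand-in for a shared store like Redis.
import hashlib
import time

_seen = {}        # token digest -> first-seen timestamp
TOKEN_TTL = 120   # seconds to remember a token; hypothetical window

def accept_once(token, action, expected_action):
    """Accept a token only once, and only for the action it was minted for."""
    if action != expected_action:
        return False                      # token bound to the wrong action
    digest = hashlib.sha256(token.encode()).hexdigest()
    now = time.time()
    for k in [k for k, t in _seen.items() if now - t > TOKEN_TTL]:
        del _seen[k]                      # purge expired entries
    if digest in _seen:
        return False                      # replay: token already spent
    _seen[digest] = now
    return True
```

This check runs after siteverify succeeds; siteverify alone does not stop a token from being captured once and replayed against a different endpoint.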

What a fairer comparison looks like

If you're choosing between v2 and v3 today, the better question is: do you want a vendor to make decisions for you (v2) or to give you signals you'll act on (v3)? If you're not going to tune v3 thresholds — if you'd set 0.5 and forget it — pick v2.

If neither model fits — for example, you want behavioural scoring with no Google dependency, server-bound action context, and a single SDK across web/mobile — vendors like CaptchaLa provide a tiered model where most sessions pass silently, ambiguous sessions get a light interaction, and only high-risk sessions see a puzzle. The token is verified server-side at apiv1.captcha.la/v1/validate against the originating IP and action.
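A hypothetical sketch of that server-side validation call. Only the endpoint URL comes from the text above; the field names (`token`, `ip`, `action`) and the JSON request shape are assumptions modeled on typical token-validation APIs, not documented CaptchaLa behavior:

```python
# Hypothetical sketch of server-side validation against CaptchaLa.
# Endpoint URL is from the text; payload field names are assumptions.
import json
import urllib.request

VALIDATE_URL = "https://apiv1.captcha.la/v1/validate"

def build_payload(token, client_ip, action):
    # Bind the token to the originating IP and action, as described above.
    return {"token": token, "ip": client_ip, "action": action}

def validate(token, client_ip, action):
    """POST the token plus its binding context and return the parsed body."""
    req = urllib.request.Request(
        VALIDATE_URL,
        data=json.dumps(build_payload(token, client_ip, action)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)
```

Consult the vendor's actual API reference before relying on any of these field names.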

Recap

v2 is a friction-based verifier. v3 is a scoring service. They're not "the old one and the new one" — they're different products that solve different problems, and a lot of integrations are wrong because someone defaulted to v3 because the number is bigger. Pick based on whether you want a decision or a signal, and whether your team will actually use the signal.

Articles are CC BY 4.0 — feel free to quote with attribution