- cross-posted to:
- latestagecapitalism@lemmygrad.ml
- cybersecurity@sh.itjust.works
Research Findings:
- reCAPTCHA v2 is not effective in preventing bots and fraud, despite its intended purpose
- reCAPTCHA v2 can be defeated by bots 70-100% of the time
- reCAPTCHA v3, the latest version, is also vulnerable to attacks and has been beaten 97% of the time
- reCAPTCHA interactions impose a significant cost on users, with an estimated 819 million hours of human time spent on reCAPTCHA over 13 years, which corresponds to at least $6.1 billion USD in wages
- Google has potentially profited $888 billion from cookies [created by reCAPTCHA sessions] and $8.75–32.3 billion per sale of its total labeled data set
- Google should bear the cost of detecting bots, rather than shifting it to users
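The $6.1 billion figure above is simple arithmetic on the 819 million hours. As a back-of-envelope check (the hourly wage here is an assumption inferred by dividing the quoted dollar figure by the quoted hours; the paper's exact wage assumption isn't stated in this thread):

```python
# Sanity-check the paper's wage estimate.
# assumed_hourly_wage is inferred from the quoted figures
# (6.1e9 USD / 819e6 hours), not stated in the thread.
hours_spent = 819_000_000        # human hours on reCAPTCHA over 13 years
assumed_hourly_wage = 7.45       # USD/hour (assumption)

total_wages = hours_spent * assumed_hourly_wage
print(f"Estimated wage cost: ${total_wages / 1e9:.1f} billion")
# prints "Estimated wage cost: $6.1 billion"
```

Any wage near the US federal minimum produces a figure in the same ballpark, which is why the paper hedges with "at least."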
“The conclusion can be extended that the true purpose of reCAPTCHA v2 is a free image-labeling labor and tracking cookie farm for advertising and data profit masquerading as a security service,” the paper declares.
In a statement provided to The Register after this story was filed, a Google spokesperson said: “reCAPTCHA user data is not used for any other purpose than to improve the reCAPTCHA service, which the terms of service make clear. Further, a majority of our user base have moved to reCAPTCHA v3, which improves fraud detection with invisible scoring. Even if a site were still on the previous generation of the product, reCAPTCHA v2 visual challenge images are all pre-labeled and user input plays no role in image labeling.”
how do you get the metric of 70-100% of the time?
the best bots doing it 70-100% of the time is very different to the kind of bot your average spammer will have access to
Did you read the article or the TL;DR in the post body?
So yeah, while these are research numbers, it wouldn’t be surprising if many larger bots have access to ways around that - especially since those numbers are from 2016 and 2019 respectively. Surely it is even easier nowadays.
that doesn’t answer the question?
I’d argue “bespoke system, deployed in a very limited context, built by researchers at the top of their field” is kind of out of reach for most people? And any bot network automatically becomes easier to detect the further you scale it up
the cost of just paying humans to break these is already at or below pennies per challenge