- cross-posted to:
- technology@lemmy.zip
- technology@lemmy.world
Anyone who has been surfing the web for a while is probably used to clicking through a CAPTCHA grid of street images, identifying everyday objects to prove that they’re a human and not an automated bot. Now, though, new research claims that locally run bots using specially trained image-recognition models can match human-level performance in this style of CAPTCHA, achieving a 100 percent success rate despite being decidedly not human.
ETH Zurich PhD student Andreas Plesner and his colleagues’ new research, available as a pre-print paper, focuses on Google’s ReCAPTCHA v2, which challenges users to identify which street images in a grid contain items like bicycles, crosswalks, mountains, stairs, or traffic lights. Google began phasing that system out years ago in favor of an “invisible” reCAPTCHA v3 that analyzes user interactions rather than offering an explicit challenge.
Despite this, the older reCAPTCHA v2 is still used by millions of websites. And even sites that use the updated reCAPTCHA v3 will sometimes use reCAPTCHA v2 as a fallback when the updated system gives a user a low “human” confidence rating.
When it’s asking for motorcycles but it’s clearly a scooter
Or, like, “there’s the bottom 10% of a traffic light in this one. Do I click that box? Is that supposed to count?”
What they are doing is comparing your answer and seeing if it is consistent with how it has been answered previously. They realize that not everyone is going to give the exact same answer, so as long as you answer it in a way that enough other people have answered it, it should let you in.
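A minimal sketch of that consensus idea, assuming a 3×3 grid where cells are numbered 0–8 and each prior answer is the set of cells a user clicked (the function name, threshold, and grading rule are all hypothetical illustrations, not Google's actual algorithm):

```python
# Hypothetical consensus-based CAPTCHA grading: accept an answer if it
# mostly agrees with how previous users answered the same challenge.
from collections import Counter

def consensus_grade(answer, prior_answers, agreement_threshold=0.6):
    """Accept `answer` (a set of clicked cells 0-8) if, for enough cells,
    the user's click/no-click choice matches the majority of prior users."""
    n = len(prior_answers)
    # How often each cell was selected by previous users.
    click_counts = Counter()
    for prev in prior_answers:
        click_counts.update(prev)

    agreement = 0
    for cell in range(9):
        rate = click_counts[cell] / n  # fraction of users who clicked this cell
        clicked = cell in answer
        # Agree if the user clicked a cell most people clicked,
        # or skipped a cell most people skipped.
        if (clicked and rate >= 0.5) or (not clicked and rate < 0.5):
            agreement += 1
    return agreement / 9 >= agreement_threshold

prior = [{0, 1, 4}, {0, 1}, {0, 1, 4}, {0, 4}]
print(consensus_grade({0, 1, 4}, prior))  # matches the crowd -> True
print(consensus_grade({5, 7, 8}, prior))  # contradicts the crowd -> False
```

The tolerance is what lets two honest users who disagree on a borderline square both pass, while an answer that contradicts the crowd on most cells fails.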
I’ll usually go with the minimum number of clicks that I think will get me through, since I’m lazy, and it’ll also at times slow down how fast you can click, which is annoying.
I’ll also answer them wrong if I think it’s a mistake that enough other people will make. “Yes… that RV over there is a bus…”
They are also overly US-centric.
One of the questions asks you to click on only the school buses. I had to Google how you tell the difference between a school bus and not a school bus.
Also is it a crosswalk if it’s at an intersection or is it only a crosswalk if it’s in the middle of a road somewhere?
The questions either need to be culturally neutral or they need to be adapted to where they detect the user is coming from; the first option seems easier.
Interesting. Do you not have school buses, or are school buses not distinctly marked? How do kids get to school when it’s beyond walking distance?
You know, regular buses
School buses and regular buses look completely different. What do those look like in your country?
Same as any bus
Does the backside of a traffic light even count? What about those strange traffic lights that have more border than light?
How about “do they want just the bulbs or the pole holding it up?”
That tip of a handlebar that makes you wonder if that square counts or not.
Or the square with the driver in it: does it classify the driver as part of the motorcycle?
Does it count when the AI driving the car clips it?
I had one with one of those motorcycles with the long handlebars. Apparently those aren’t part of the bike, but the dude’s foot holding it up is.
I think the reason AI are better than humans is that the AI is just as stupid as the image classifier.
Worse is when it’s asking for crosswalks and it’s clearly a rumble strip.