United Kingdom, AI, Crime, Government

How Facial Recognition Tech Is Being Used In London By Shops - and Police (bbc.co.uk) 98

"Within less than a minute, I'm approached by a store worker who comes up to me and says, 'You're a thief, you need to leave the store'."

That's a quote from the BBC by a wrongly accused customer who was flagged by a facial-recognition system called Facewatch. "She says after her bag was searched she was led out of the shop, and told she was banned from all stores using the technology."

Facewatch later wrote to her and acknowledged it had made an error — but declined to comment on the incident in the BBC's report: [Facewatch] did say its technology helped to prevent crime and protect frontline workers. Home Bargains, too, declined to comment. It's not just retailers who are turning to the technology... [I]n east London, we joined the police as they positioned a modified white van on the high street. Cameras attached to its roof captured thousands of images of people's faces. If they matched people on a police watchlist, officers would speak to them and potentially arrest them...

On the day we were filming, the Metropolitan Police said they made six arrests with the assistance of the tech... The BBC spoke to several people approached by the police who confirmed that they had been correctly identified by the system — 192 arrests have been made so far this year as a result of it.

Lindsey Chiswick, director of intelligence for the Met, told the BBC that "It takes less than a second for the technology to create a biometric image of a person's face, assess it against the bespoke watchlist and automatically delete it when there is no match."

"That is the correct and acceptable way to do it," writes long-time Slashdot reader Baron_Yam, "without infringing unnecessarily on the freedoms of the average citizen. Just tell me they have appropriate rules, effective oversight, and a penalty system with teeth to catch and punish the inevitable violators."

But one critic of the tech complains to the BBC that everyone scanned automatically joins "a digital police line-up," while the article adds that others "liken the process to a supermarket checkout — where your face becomes a bar code." And "The error count is much higher once someone is actually flagged. One in 40 alerts so far this year has been a false positive..."

Thanks to Slashdot reader Bruce66423 for sharing the article.
  • by Alain Williams ( 2972 ) <addw@phcomp.co.uk> on Saturday June 01, 2024 @05:47PM (#64516241) Homepage

    Facewatch admitted their mistake. Sara suffered as a result (cried, etc). She is due compensation.

    Mr Thompson might also want to sue the police. Did they destroy the fingerprints and copy of his passport that they had taken?

    • by Anonymous Coward

      People should boycott stores that use that garbage

      • by mjwx ( 966435 )

        People should boycott stores that use that garbage

        It's Home Bargains; people with any self-respect already avoid it.

    • (I guess, referring to something in TFA, which I'm not motivated enough to read. I already hate and avoid London.)

      Mr Thompson might also want to sue the police. Did they destroy the fingerprints and copy of his passport that they had taken?

      Why would the police destroy a passport? They don't own it. If they seize one (with a court order, or brought in as lost property, or from a search of a 3rd party) they return it to the Home Office - their supervising ministry, whose property it always has been and alwa

    • There is probably a lawsuit coming, hence no comments.
    • by AmiMoJo ( 196126 )

      There's probably a claim under the Equality Act too, given these things rarely work very well for dark skin.

    • by mjwx ( 966435 )

      Facewatch admitted their mistake. Sara suffered as a result (cried, etc). She is due compensation.

      Mr Thompson might also want to sue the police. Did they destroy the fingerprints and copy of his passport that they had taken?

      In the UK there is hardly any need... The conclusion is so foregone they'll look to compensate without a lawsuit.

      Over here, if your flight arrives more than 2 hours late you're entitled to compensation (4 hours if it's a long-haul flight).

      GDPR means they would have had to destroy all Personally Identifiable Information, they'd be risking huge fines if they didn't.

  • by swillden ( 191260 ) <shawn-ds@willden.org> on Saturday June 01, 2024 @06:17PM (#64516289) Journal

    It's basically impossible to build a system like this that works, and likely always will be.

    The core problem is an interesting little math problem called the "Birthday Problem" or the "Birthday Paradox". In its original form, it's about the probability that some two people at a party have the same birthday. Suppose you're at a party where you and your buddy know no one (so neither of you knows anyone's birthdays), and your buddy says "I'll bet you $100 that at least two people at this party share a birthday". Should you take that bet? If there are 23 or more people in the room, you should not, because the odds are >50% that at least two people share a birthday. This is counterintuitive because we think "366 possible birthdays, 30 people, that's a lot more birthdays than people, so odds of a shared birthday are low." But what you actually need to think about is how many pairs of people are at the party, because that's the number of possibilities for a match. With 30 people that's 30*28 = 870 pairs.
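    A quick sanity check of those odds in Python; this is a throwaway back-of-the-envelope script, not anything from a real system:

        def p_shared_birthday(people, days=365):
            # Probability that all birthdays are distinct, then complement it.
            p_distinct = 1.0
            for i in range(people):
                p_distinct *= (days - i) / days
            return 1.0 - p_distinct

        print(p_shared_birthday(23))  # ~0.507: past even odds at just 23 people
        print(p_shared_birthday(30))  # ~0.706: at a 30-person party, your buddy usually wins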

    How does this relate to face recognition or other biometric systems? Biometric matching algorithms are threshold algorithms, not binary-output. That is, they compute how close face A is to face B, according to some complicated metric, and if the distance is below a threshold, we call them a match. But this fuzziness in matching means that the algorithm effectively partitions the space of all faces into a bunch of similarity pigeonholes (this is oversimplified, but a useful approximation), where the number of pigeonholes is roughly the inverse of the false accept rate.
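    In sketch form, the comparison step looks something like this (the embedding format and the 0.6 threshold are invented for illustration; real systems use learned face embeddings):

        def distance(a, b):
            # Euclidean distance between two face embeddings (lists of floats).
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

        MATCH_THRESHOLD = 0.6  # hypothetical; looser means more false accepts, tighter means more misses

        def is_match(face_a, face_b):
            # Threshold decision: "close enough" is declared the same person.
            return distance(face_a, face_b) < MATCH_THRESHOLD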

    Likewise, the birthday question partitions people into 366 pigeonholes. The birthday "false accept rate" (FAR) is roughly 1 in 366 (not quite that because Feb 29 is less likely than other days, but you get the idea).

    For a false accept rate (FAR) of 1 in n, you can estimate the number of entries in the system at which some pair of entries will falsely match each other with >50% probability at about sqrt(n). This means that if you have an FAR of 1:50,000 (which is at the better end of what commercial systems do), you can only put just over 200 people in the database before false accepts happen with more-than-even odds. 200.
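    You can check that estimate directly; the 1:50,000 FAR is the figure assumed above:

        FAR = 1 / 50_000

        def p_any_false_match(entries, far=FAR):
            # Chance that at least one of the C(entries, 2) pairs falsely matches.
            pairs = entries * (entries - 1) // 2
            return 1 - (1 - far) ** pairs

        print(p_any_false_match(224))  # ~0.39 at sqrt(50,000) entries
        print(p_any_false_match(264))  # ~0.50: even odds arrive around 1.18 * sqrt(n)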

    In this case, their actual database of faces is probably fairly small, but the space of faces that can be matched against those faces is very large. Tens of millions. Supposing they have 200 thieves in the database and 10M non-thieves, the probability that any random non-thief matches a thief is one minus the probability that the system doesn't get a false match on any of the 200, so 1-(1-p)^200, where p is the FAR. For p = 1/50,000, about 0.4% of non-thieves will match some thief. Since the base rate is 200 thieves out of 10M non-thieves (0.002%), this means that false matches will outnumber true matches by 200:1.
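    The same arithmetic as a script, using the numbers assumed in that paragraph (200 watchlisted thieves, 10M other shoppers, FAR of 1:50,000):

        FAR = 1 / 50_000
        WATCHLIST = 200
        SHOPPERS = 10_000_000

        p_flagged = 1 - (1 - FAR) ** WATCHLIST   # ~0.004, i.e. ~0.4% of innocents flagged
        false_matches = SHOPPERS * p_flagged     # ~40,000 innocent people flagged
        true_matches = WATCHLIST                 # best case: every thief is spotted once
        print(false_matches / true_matches)      # ~200 false matches for every real one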

    If this system is seeing only one false match per 40 alerts, it's actually doing far better than this math predicts! Likely because their existing thief database is very small, and the system will get worse and worse as more thieves are added.

    Bottom line, this kind of thing just doesn't work. What could work, kinda sorta, is if the system showed the store employee the photo and personal info of the known thief, so they could check whether the person matched is actually the same one. This is how stuff like law enforcement fingerprint database queries work: every search returns a bunch of results and then the police first have a human expert look at the matches to throw out some of the false positives, and then they have some legwork to do to figure out if any of the remaining matches are the real deal.
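    To make the shape of that concrete: the query step returns a ranked candidate list for a human to review, not a verdict. A minimal sketch (the function and its arguments are invented for illustration):

        def candidate_matches(probe, database, dist, threshold):
            # Return every database entry within `threshold` of the probe,
            # best match first, for a human examiner to whittle down.
            # The system never auto-accuses based on the single closest hit.
            scored = [(dist(probe, entry), entry) for entry in database]
            return sorted((s for s in scored if s[0] < threshold), key=lambda s: s[0])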

    Of course, I said "kinda sorta" because such a system would result in lots of legitimate customers being harassed every time they go to a store. That will be bad for business.

    Big biometric matching systems without some sort of additional filter just don't work. Can't work.

    • With 30 people that's 30*28 = 870 pairs.

      That should be 30*29/2 = 435 pairs, obviously.

      Also, damn, I should have previewed. Apparently I failed to close a <i>.

      • Also, damn, I should have previewed. Apparently I failed to close a <i>.

        Don't worry about it. Your comment was both informative and insightful, thereby justifying all that extra emphasis... ;-)

    • Big biometric matching systems without some sort of additional filter just don't work. Can't work.

      The additional filter can be that the system provides the first name of the supposed thief.

      System: detects "Pete" -- a thief!
      Staff: Good morning sir, I'm Franck from customer support, what should I call you?
      Customer: Dave
      Staff: Nice to meet you Dave, can I quickly check your loyalty card or credit card?
      Customer: shows something issued to their name, e.g. David X (which is not Pete)
      Staff: Thank you Dave, enjoy your shopping.

    • What might work even better is capitalism. Legislate high compensation for falsely accusing someone, even higher for expelling or detaining someone based on a false positive identification. The stores will then talk to liability insurance companies, whose actuaries will explain to the businesses and the police how statistics (including the birthday problem) work, and how the process needs to be augmented to minimize the risk of payouts.
      • Well, that would be a way to make sure no one tries to deploy anything like this. But I don't think it's necessary. Businesses care more about not pissing off customers than they do about preventing theft, because theft losses are relatively small.
    • That's valid right now.
      Technology is advancing, though, and in a few years it will become more precise, to the point where false positives will fall dramatically.
      And, yes, filters are being used and polished continuously.

      • That's valid right now. Technology is advancing, though, and in a few years it will become more precise, to the point where false positives will fall dramatically.

        Very unlikely. The level of precision that's needed for large populations is insane. If you want to be able to get a false match rate of, say, 0.1% against a database of 100M people, you'd need a FAR of about 1:100,000,000,000.
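        That figure falls out of the same formula, solved for the per-comparison FAR:

            DATABASE = 100_000_000
            TARGET = 0.001  # desired overall false-match probability per probe

            # Solve 1 - (1 - far) ** DATABASE = TARGET for far.
            far = 1 - (1 - TARGET) ** (1 / DATABASE)
            print(1 / far)  # ~1e11: roughly 1 in 100 billion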

        • I think that's what they also said about fingerprints, way back when they were a novelty.
          And you don't need a database of 100M people, you need a list of people who are most likely to be in that specific place, at that specific time. I'd venture to say it's going to be less than 10K people, and that's a generous number, more valid for a large city such as London than for, say, Dorchester in Dorset.

          John the local thief would be in that list, but Jack the convicted thief who is in prison 5K miles away wou

          • I think that's what they also said about fingerprints, way back when they were a novelty.

            What makes you think that?

            When fingerprints were a novelty, automated matching systems didn't exist (computers didn't exist), so matching against databases of significant size was infeasibly labor-intensive and the implications of the Birthday Paradox didn't come up at all.

            And you don't need a database of 100M people, you need a list of people who are most likely to be in that specific place, at that specific time. I'd venture to say it's going to be less than 10K people, and that's a generous number, more valid for a large city such as London than for, say, Dorchester in Dorset.

            You can filter by likely presence, sure. But you still have to get the candidate database size down, and it has to be much smaller than you think.

            A few years ago, I worked on building a fingerprint-based authentication system for

            • There's a certain distance threshold beyond which thieving from a given remote area stops being worthwhile.
              A general store could therefore have its distance threshold reduced to walking distance, for example, whereas a jewelry store would have a different filter set, e.g. filtering out known pickpockets, etc.

              It's all about HOW the system is implemented.

  • London's homicide rate is the lowest it has ever been in history. It may not seem like that due to social media, but it's true. Ironically, the fact that crime is filmed and propagated on social media makes it seem like it's happening a lot more, when it's simply a result of more cameras and people wanting to peddle some narrative.

    • Buddy, the homicide rates for nearly all wealthy countries have been going down, down, down for decades, even centuries. London is not special in this regard. Even the US has a much lower homicide rate than it used to. It doesn't have anything to do with British people's incredible cowardice leading them to give up all their privacy and rights. That's just a weird British thing.

    • by drnb ( 2434720 )

      London's homicide rate is the lowest it has ever been in history.

      That could be due to better medical care. Look at attacks rather than fatalities to adjust for that variable.

  • The summary talks about the 1:40 false positive rate, suggesting that it's a reason to stop. While not negligible, 1:40 actually seems really good.

    I would expect police officers approaching people looking for suspect X to be wrong most of the time; a 2.5% error rate is a huge improvement.

    Race factors are always a concern in systems like this, but it's probably being monitored and addressed. It's also replacing a human system that is known to have substantial racial biases.

    • The real question is: how do you catch and deal with false positives? Perform no additional checks and level a false accusation, like in TFA? Then 1:40 is not so good. But if the penalty for each false accusation is, say, cutting off one of the Facewatch CEO's toes, then you can be sure that every hit flagged by the system will be checked and double-checked by a human operator. And perhaps that is what it takes to make these systems acceptable: to always have a human verify an alert from the system. I
      • how do you catch and deal with false positives?

        With diplomacy and staff training. First, the system should provide a name for the potentially identified thief, say "Pete" (my example above). A staff member goes over, calls them by that name, and sees what happens. If they say "sorry, I'm not Pete, I'm Dave" and show any proof of not being Pete, you just say "oh, my bad, have a good day" and nothing happened: they don't even know why you called them Pete, and they were never shamed in public or falsely accused.

        What should not happen is to treat the face recognition

        • Thieves are pretty brassy as a rule; your average clerk won't be able to tell a thief from a false positive. And you might be exposing them to potential violence by asking them to confront people.

          What you will certainly get is good customers turning into former customers when you treat them like criminals.

            And you might be exposing them to potential violence by asking them to confront people.

            Darn, mate, the place you live in must be very bleak.
            Around here, I can hardly imagine someone having the potential to become violent if being politely confronted - like asking for name or ID. And I live in a third world country.

            • Normal people, no. But a thief trying not to get caught? Most likely they're going to run, but they might assault the clerk in their path. And you have to ask if the risk is worth it for minimum wage.

              Most stores already have a 'do not engage' policy: give the video to the cops and let them deal with it after the fact.

              • Ah, I now understand what you meant.
                Still, thieves who rely on stealth would be less likely to be aggressive.

          • The purpose of the system is to detect thieves, and it is apparently correct 39 times out of 40. One solution is to monitor them silently using the cameras. But if the shop manager does not want to accept the 39 thieves in the shop and decides to send them away, the staff will have to confront them. In TFA, the situation you describe is what happened: someone was rudely confronted, called a thief, and asked to leave. Unfortunately, this person was the 1-in-40 false positive.

            I propose an improvement of the workflow

  • ... for the technology to create a biometric image of a person's face, assess it against the bespoke watchlist and automatically delete it when there is no match."

    Which is why they want everybody's face, so there's always a match. That match, including time and place, is never deleted, same as any time your details are taken for whatever spurious reason. So there's a few false positives, who fucking cares? They don't.

  • by io333 ( 574963 ) on Saturday June 01, 2024 @06:57PM (#64516357)

    Just like the SciFi stuff I read as a kid.

    Can we have more please? Can we have drones with miniguns and this recognition tech zooming around, instantly eliminating people? How amazing can it get!!!!!

  • "Lindsey Chiswick, director of intelligence for the Met, told the BBC that "It takes less than a second for the technology to create a biometric image of a person's face, assess it against the bespoke watchlist and automatically delete it when there is no match."

    Bullshit. Once scanned, it is in their database forever, just waiting for the "thief" bit to be set. She was scanned, the bit was accidentally set, and it will likely happen to her again. Because it is in their software, and the police have it as well.

  • I've posted this before [slashdot.org], and I'm posting it again, and I'll continue to post it until everyone here understands the danger of facial recognition software.

    Let's pretend for a moment that facial recognition software is 99.9% accurate. It's not, but for sake of argument, let's go with it.

    Now, pretend you walk into a store, the software flags you as a criminal who shoplifted from the store earlier, and you get arrested. If that store received 5,000 visitors that day, what are the odds you were a false positive, i.e. you were flagged as a criminal but weren't actually the one who committed the crime?

    Answer: 80%.

    Why? The average Joe wrongly assumes a 99.9% accuracy rate means that 99.9% of the time the person flagged is the criminal. It does not. It means the system will classify -any- face correctly 99.9% of the time; flagging a face as "not the criminal" counts toward that accuracy score just as much as "is the criminal" does. So, out of 5,000 people, on average 0.1%, or about five innocent people, will be flagged by the system as "criminal", alongside the one actual criminal. This means that, if you are flagged as the criminal, there's roughly a 5-in-6 chance, about 80%, that you are not the criminal.
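    Spelled out as arithmetic (assuming one real shoplifter among the 5,000 visitors, with everyone else subject to the 0.1% error rate):

        VISITORS = 5_000
        FALSE_RATE = 0.001  # the hypothetical "99.9% accurate" system

        false_flags = (VISITORS - 1) * FALSE_RATE        # ~5 innocent people flagged
        true_flags = 1                                   # the actual shoplifter
        print(false_flags / (false_flags + true_flags))  # ~0.83: flagged, yet most likely innocent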

    But here's the kicker: Facial recognition software is being used to convict people. This is the real crime. This technology is an emperor wearing no clothes, but everyone from prosecutors to juries pretends that it is authoritative. This technology is not 99.9% accurate; at best, it's about 99% accurate, as long as the individual is Caucasian and male. At worst, it's about 65% accurate. [aclu-mn.org] We are endangering the stability of society by taking something this inaccurate and using it to convict people of crimes.

    • But here's the kicker: Facial recognition software is being used to convict people.

      No, it is not.
      It is SOMETIMES used in conjunction with many, many other pieces of evidence, but it is not the main prosecution piece, at all.
      Stop spewing nonsense.

  • just keep racking up those false positives and it will happen
  • "It takes less than a second for the technology to create a biometric image of a person's face, assess it against the bespoke watchlist and automatically delete it when there is no match"

    Yeah, I believe this; never ever has a system with the capability to collect data been abused. No image is deleted; everything is used to train whatever models they've developed, and so on.

    And, this being the brexit shithole, you know that law enforcement has access to this, too.

    Hey, Winston, do your physical jerks properly, ffs!

  • by bsdetector101 ( 6345122 ) on Sunday June 02, 2024 @06:43AM (#64517185)
    Read the book "1984"
  • I can't imagine why anyone would want to live in or visit such a police state.
    My god, it's appalling!
    I'm sure that there are plenty of 'law and order' safety babies, but ten million of them?

