
Emotion Recognition Tech Should Be Banned, Says an AI Research Institute (bbc.com) 65

An anonymous reader quotes a report from the BBC: A leading research centre has called for new laws to restrict the use of emotion-detecting tech. The AI Now Institute says the field is "built on markedly shaky foundations." Despite this, systems are on sale to help vet job seekers, test criminal suspects for signs of deception, and set insurance prices. It wants such software to be banned from use in important decisions that affect people's lives and/or determine their access to opportunities. The US-based body has found support in the UK from the founder of a company developing its own emotional-response technologies -- but it cautioned that any restrictions would need to be nuanced enough not to hamper all work being done in the area.

AI Now refers to the technology by its formal name, affect recognition, in its annual report. It says the sector is undergoing a period of significant growth and could already be worth as much as $20 billion. "It claims to read, if you will, our inner-emotional states by interpreting the micro-expressions on our face, the tone of our voice or even the way that we walk," explained co-founder Prof Kate Crawford. "It's being used everywhere, from how do you hire the perfect employee through to assessing patient pain, through to tracking which students seem to be paying attention in class. At the same time as these technologies are being rolled out, large numbers of studies are showing that there is... no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks."
"Prof Crawford suggested that part of the problem was that some firms were basing their software on the work of Paul Ekman, a psychologist who proposed in the 1960s that there were only six basic emotions expressed via facial emotions," reports the BBC. "But, she added, subsequent studies had demonstrated there was far greater variability, both in terms of the number of emotional states and the way that people expressed them."

Comments Filter:
  • Bottle the genie (Score:5, Insightful)

    by alvinrod ( 889928 ) on Friday December 13, 2019 @10:47PM (#59518048)
    I don't think anyone is going to be able to bottle the genie at this point because the allure is there and, like any supposed panacea, people will seek it out. If it's bunk, the companies that cheat and employ it will make bad decisions and go out of business as a result. If it's effective, then the companies that cheat and use it will have an advantage and the only way to stay in the game will be to use it.

    Has anyone even bothered to try testing it to find out whether or not it's worth a damn?
    • I think the biggest problem is that it may become pretty accurate. The possible applications are pretty terrifying, especially when combined with widespread surveillance technology.

      It's so easy to sell as a way to catch pedophiles - just look for people with the wrong expressions when they are looking at children or pictures of children. Just to be safe they should be brought in and tested - "treated" if necessary.

      Then of course it's terrorists.

    • If it's bunk, the companies that cheat and employ it will make bad decisions and go out of business as a result. If it's effective, then the companies that cheat and use it will have an advantage and the only way to stay in the game will be to use it.

      Another possibility: those who don't positively correlate with facial expression/emotion will be Darwin'd out of the gene pool, only leaving those who are easy for deep learning to recognize emotionally as the "winners" of evolution.

    • The point is probable cause, used to cover up unfair and illegal practices: age discrimination, squashing unionization efforts, and of course phony lie detectors used to obtain bogus warrants. All are made possible by this tech whether it works or not.

      Also, why should we leave this up to the free market? Worst case the market fails us and results in bad practices. Best case thousands of people have their lives destroyed by this tech until the companies go out of business. And as mentioned above the bad actors won't
    • by Livius ( 318358 )

      These kinds of algorithmic assessments are almost certainly going to provide *some* information, but it will not be completely reliable. In principle, that doesn't matter, because some people are more perceptive than others about emotional states, and we understand that their intuition may be valuable but will not be infallible. The difference is that people will not understand how to assess the reliability of machine-generated evaluations and may blindly give them far too much credibility.
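As a worked illustration of that last point, with entirely hypothetical numbers, here is the base-rate arithmetic that a reader of a confident-looking score tends to skip:

```python
# Hypothetical numbers only: why an imperfect emotion/deception detector
# is easy to over-trust when the trait it screens for is rare.
prevalence = 0.01            # 1% of screened people actually have the trait
sensitivity = 0.90           # the detector flags 90% of true cases
false_positive_rate = 0.10   # ...and wrongly flags 10% of everyone else

true_pos = prevalence * sensitivity                   # 0.009
false_pos = (1 - prevalence) * false_positive_rate    # 0.099
precision = true_pos / (true_pos + false_pos)

print(f"Share of flagged people who are real cases: {precision:.1%}")  # ~8.3%
# A "90% accurate" tool whose flags are wrong more than nine times out of
# ten -- the reliability assessment most users of such scores never make.
```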

    • This company sells products used for tech support call centers to identify the emotions of callers:

      https://www.affectiva.com/what... [affectiva.com]

      Your WiFi signals can be used to read your heart rate variability, which is related to emotional state (a small sketch of the standard HRV calculation follows this comment):

      http://eqradio.csail.mit.edu/f... [mit.edu]

      There has been a lot of research on stress measurement using wearables; Santosh Kumar did some of the recent groundbreaking research:

      https://www.buzzfeednews.com/a... [buzzfeednews.com]

      Heart rate variability, breathing, and galvanic skin response senso
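To make the heart-rate-variability point concrete, below is a minimal sketch of RMSSD, one of the standard HRV statistics such systems compute from inter-beat (RR) intervals. The interval values are invented for illustration; lower variability is conventionally read as higher stress or arousal.

```python
# Minimal sketch, not any product's algorithm: RMSSD, a standard
# heart-rate-variability statistic computed from inter-beat (RR) intervals.
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR interval streams, e.g. from a wearable or -- as in the
# EQ-Radio work linked above -- recovered from reflected RF signals.
relaxed  = [812, 845, 790, 860, 805, 838, 795]
stressed = [780, 778, 783, 776, 781, 779, 782]

print("relaxed RMSSD :", round(rmssd(relaxed), 1))   # higher variability
print("stressed RMSSD:", round(rmssd(stressed), 1))  # lower variability
```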

    • by jythie ( 914043 )
      They sell to law enforcement, an institution which, when using bunk techniques, tends to double down and support it for decades. HR at big companies is not much better... unless a lawsuit forces them to drop a shaky filtering method, they will keep using it for a LONG time.
    • by hey! ( 33014 )

      Has regulation become that unthinkable?

    • by gweihir ( 88907 )

      I do agree. Mostly. Even if it is bunk, it may be around for a long, long time. Just think of this being used to target ads. Ads do not work and are basically known to not work. Yet billions are poured into them every year. If this thing can be used to keep the ad-scam (advertisers like Google scamming the companies that pay for ads) going longer or stronger, it will be pushed for all it's worth.

      As to whether it works or not, it surely will work to some degree. The model is far too simplistic, deception is p

  • That is a game of Whack-A-Mole. By the time you outlaw the emotion-reading AI, they start selling a handwriting AI, then when that also proves a failure, they put in an AI that analyses how you sit. Etc. Etc.

    • Seriously. That's it. Book it. Done.
    • Maybe what we really need is to redistribute ownership of land and the means of production. Then if the local inbred aristocrat wants to buy a magic mind-reading AI box, that dystopian bullshit won't affect the common man's access to housing and a livelihood.

    • It isn't like whack-a-mole at all. You're simply describing a process that will take multiple steps, but the individual steps don't have to be redone again and again like in whack-a-mole.

      Each time you ban an application of the technology, that application stays banned and they have to try some other bullshit that is less likely to do what they wanted. It seems like they'd be fighting a losing battle, because there are only a limited number of inputs available in the situations where they want to use it.

  • by Gravis Zero ( 934156 ) on Friday December 13, 2019 @10:53PM (#59518058)

    There are plenty of people who have a reduced or flat affect [wikipedia.org] as a result of psychiatric illness or the treatment thereof, which tabloids have identified/labeled as "resting bitch face". Making a decision based on someone's facial affect would therefore be a violation of disability discrimination laws in the US, and I'm sure the UK has similar laws.

    Honestly, I think this kind of stuff is more creepy than threatening, but it definitely shouldn't be used to determine how someone feels beyond a consumer product trying to better interface with its owner.

    • by AHuxley ( 892839 )
      She's a pilot now...
    • There are plenty of people who have a reduced or flat affect as a result of psychiatric illness or the treatment thereof ...

      Also from several physical illnesses that can cause partial paralysis of the facial muscles.

      For instance: I know of a person who had a childhood bout with Graves' disease. (It was undertreated - to the point that one eye popped out and had to be reinstalled surgically.) This, along with the reconstructive surgery, paralyzed many of the muscles around the eye, which are used in emotiona

    • There are plenty of people who have a reduced or flat affect [wikipedia.org] as a result of psychiatric illness or the treatment thereof which tabloids have identified/labeled as "resting bitch face"...

      Care to confirm this tabloid theory of yours? From what I can tell, reduced/flat affect is NOT the same thing as "resting bitch face", whose very name implies that certain facial expressions are present only at rest. In other words, those same people are otherwise capable of normal emotional expression, they just don't "wear" a semi-permanent smile on their face 24/7.

      I've also never heard of "resting bitch face" being directly caused by a psychological disorder. There's also a damn good chance our wo

      • There are plenty of people who have a reduced or flat affect [wikipedia.org] as a result of psychiatric illness or the treatment thereof which tabloids have identified/labeled as "resting bitch face"...

        Care to confirm this tabloid theory of yours? From what I can tell, reduced/flat affect is NOT the same thing as "resting bitch face", whose very name implies that certain facial expressions are present only at rest. In other words, those same people are otherwise capable of normal emotional expression, they just don't "wear" a semi-permanent smile on their face 24/7.

        I've also never heard of "resting bitch face" being directly caused by a psychological disorder. There's also a damn good chance our woke-ass society would have come up with a far more tolerant name IF it were actually linked to any type of legitimate medical condition, instead of just referring to the victims as "bitches".

        Either way, thanks to you all for teaching me a new term ("resting bitch face") ... somehow had never heard that one before ...

      • In other words, those same people are otherwise capable of normal emotional expression, they just don't "wear" a semi-permanent smile on their face 24/7.

        Alas, people with a reduced or flat affect are absolutely capable of facial emotional expression, but it's not as reflexive, at least not in all situations for all emotions. Do not underestimate the complexity of psychological conditions just because their description is simplistic.

        There's also a damn good chance our woke-ass society would have come up with a far more tolerant name IF it were actually linked to any type of legitimate medical condition, instead of just referring to the victims as "bitches".

        Considering the origin of its name is not entirely certain but it was popularized by a comedic parody [wikipedia.org], I'm not surprised by the name. Also, again, psychiatric conditions are fiendishly complex and weren't even recognized to exist until last cen

    • by PPH ( 736903 )

      First of all, I don't think RBF qualifies as a disability. True, there may be some people who suffer from neurological maladies that prevent the proper expression of emotions. Are those disabilities? Good question. Also, my reaction to another human being will vary quite a bit from my reaction to a machine. I guess I'm one of those boomers that grew up interacting in 3D.

      But this brings up an interesting point: No emotional recognition system (human or machine) would work very well in the absence of some sort of s

    • People make decisions on the basis of their read of another person's emotions constantly. And they are constantly wrong to widely varying degrees depending on the innate capabilities of the individual reading the emotions. Despite that, the ability to read emotions is critical in human interaction.

      We haven't sought to ban those who can't read emotions well from public interactions or from using their abilities to make decisions like whether or not to market a product to any particular individual or how to a

  • If we ban this technology then how are we supposed to find replicants hiding among the human population?

    Seriously though, this tech cannot be banned. If someone finds value in this then they will use it. It might be difficult to hide a high resolution camera for this now but they don't have to, just hold interviews by teleconference. People will be willing to put their face up close to a camera, then split off the feed to the Voight-Kampff emotion detection machine.

    All we are seeing here is an admission

  • by melted ( 227442 ) on Friday December 13, 2019 @11:36PM (#59518096) Homepage

    How is it a "leading research centre" if nobody knows about it? Sounds like a bunch of useless cranks "researching" the "ethical aspects" of something they don't even understand.

    • How is it a "leading research centre" if nobody knows about it?

      Wait, what? You thought you had to be famous to study leading questions? Does everything have a gatekeeper now?

  • So this seems like a new-tech version of the pre-screening questionnaires in vogue (I've been out of the job hunt for a while - do companies still use these?) that would try to weed out undesirable potential employees by asking vague questions multiple ways and comparing the results. As I recall, the final effect was elevating the worst liars as promising employees. Dribble some science woo to make it seem plausible, and you too can gauge a person's worth without all that messy vetting.

    I'm also struck by the clutching of pearl

    • Comrade - America pioneered the Social Credit Score. Parastate companies like Experian, TransUnion, et al. were giving unaccountable, life-altering, black box "just trust us!" scores to our peons back when China was still just dreaming about it. We are the original cybernetic totalitarians.

  • Not sure I see the problem here. They are only teaching it to do exactly what humans also do in those same situations, and yes, people often suck at this too. As long as the AI isn't making a completely independent decision, but rather just pointing out what it suspects, then it is a useful tool.
  • No such thing exists, and certainly not at the interviewer phase. The only thing perfect is the HR firm creaming off talent extras, and likely wrecking team dynamics by having mono-cultures of drone employees. Any perfect employee will leave if they sense a better deal, or know they have a stronger wage review position. Perfect will vary as lifestyle upheavals and maturity develop. Sometimes unlikely people rise to the top. There is no magic sauce, especially when low-skill jobs are being stocked with over-qu
  • by BAReFO0t ( 6240524 ) on Saturday December 14, 2019 @06:04AM (#59518380)

    Too "unprofessional".
    Makes you look too much like a homo sapiens. Distasteful. (Exclamation marks: Banned too, since the 10th edition of the minimalist handbible. Emoticons: Don't even think about it. *monotone beep boop*)

    • I can tell there is a really long story behind this short post, but please, please don't tell it.

  • What could possibly go wrong?

  • by LordWabbit2 ( 2440804 ) on Saturday December 14, 2019 @07:25AM (#59518444)
    Headline says BANNED - first sentence says restricted.
    And then to cap it off, it goes on to say that the entire thing is about as reliable as phrenology, so there's actually nothing to ban.
    • Headline says BANNED - first sentence says restricted.

      And then to cap it off, it goes on to say that the entire thing is about as reliable as phrenology, so there's actually nothing to ban.

      What happens when you discover that phrenology is banned or restricted (depending on location) entirely because it isn't reliable? Does it all start to make sense, or does your head just explode?

      BTW, it doesn't say anything about it not working and there being nothing to ban. Instead, it says creepy stuff about how it will make us "safer," somehow.

  • See the recently published book Duped by Timothy R. Levine. Diagnosis and prediction are tricky.

    AI is being used medically for tumour recognition in imaging. I'm sure that at first its accuracy wasn't so great. Today it rivals human experts. The big difference is that it costs to cut you open to validate a finding. But that doesn't mean AI techniques should be ignored.

    There are those who question the value of widespread PSA testing and mammograms. That's why the treatment for questionable results is watchful

  • Maybe if you were looking for a good salesperson, where personality counts. The problem here is companies are getting lazy and implementing prejudicial (yes, pre-judging people) solutions that use AI, which can both be considered discrimination and miss great opportunities to hire people who take direction and learn skills well. Even a dog has capabilities to sense deception and questionable behavior, based on things that cannot be detected by a machine. Our landlords have used that stink test for
  • Has anyone seen the first episode of "The World According to Jeff Goldblum" on Disney+? https://www.youtube.com/watch?v=P7aN9OkNl7U
    It's about sneakers, but about 3/4 of the way through we see the setup that Adidas uses to detect emotions of sports figures when they see a shoe for the first time. It seems sports figures have trouble voicing how they honestly feel, but the face does not lie. Jeff Goldblum decided to test it. He is an actor, after all, and should be able to produce such emotions on demand.

    • "I would put this tech as better than a lie detector." You do realize that this isn't saying much, right?

      Also, I suspect (but don't know) that an emotion that an actor portrays is easier to detect than a real emotion that a person is feeling, simply because the actor is *supposed* to convey that emotion. If this is true, then the fact that this AI could detect an emotion portrayed by an actor does not mean it could reliably detect an emotion in the real world.

      • It is going to be a long time before it can detect "I'm smiling because you just said something really awful and I'm frightened."
