[Image (uc3.co), captioned: "I figure that responses in the U.S. would be very similar."]
by Dan Burns
Jul 26, 2019, 6:30 PM

“Emotional AI” is going to be the greatest, right?

As with everything else in tech, the first, and so far quite possibly the only, thing you’ve seen or heard about “emotional AI” is how wonderfully, gloriously great it all is, with no downside at all. You know, like social media has worked out to be. Uh, not so fast.

An ACLU report published June 13, “The Dawn of Robot Surveillance,” says AI-aided video surveillance “won’t just record us, but will also make judgments about us based on their understanding of our actions, emotions, skin color, clothing, voice, and more. These automated ‘video analytics’ technologies threaten to fundamentally change the nature of surveillance.”

…More than identifying actions, video analytics allow computers to understand what’s going on in a video: They can flag people based on their clothing or behavior, identify people’s emotions through body language and behavior, and find people who are acting “unusual” based on everyone else around them. Those same Amazon in-store cameras can analyze customer sentiment. Other systems can describe what’s happening in a video scene.

Computers can also identify people. AIs are getting better at identifying people in those videos. Facial recognition technology is improving all the time, made easier by the enormous stockpile of tagged photographs we give to Facebook and other social media sites, and the photos governments collect in the process of issuing ID cards and driver’s licenses. The technology already exists to automatically identify everyone a camera “sees” in real time. Even without video identification, we can be identified by the unique information continuously broadcasted by the smartphones we carry with us everywhere, or by our laptops or Bluetooth-connected devices. Police have been tracking phones for years, and this practice can now be combined with video analytics.
(Vice)
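That claim that “the technology already exists” is not hyperbole. By way of illustration, here is a minimal sketch of the camera-to-identity matching the article describes, assuming the freely available open-source face_recognition Python library; the image filenames are hypothetical placeholders, and a real deployment would run this over live video frames rather than saved images.

```python
# A minimal sketch of the identification pipeline described above,
# using the open-source face_recognition library
# (pip install face_recognition; it requires dlib).
# The filenames below are hypothetical stand-ins.
import face_recognition

# A "known" face, e.g., from a tagged social media photo.
# face_encodings() returns one 128-number encoding per detected face;
# this assumes the photo contains at least one detectable face.
known_image = face_recognition.load_image_file("tagged_photo.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# A frame captured from a surveillance camera.
frame = face_recognition.load_image_file("camera_frame.jpg")

# Find every face in the frame and compare each against the known face.
for encoding in face_recognition.face_encodings(frame):
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    if match:
        print("Identified a known person in this frame.")
```

Roughly a dozen lines against a free library. The hard part of mass identification was never the code; it is the stockpile of tagged photographs, which, as the article notes, we have already handed over.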

So, how concerned should any knowledgeable person of enlightened sentiments (that is, among other things, a typical Left.mn reader) choose to be? Especially given the kind of subject matter you expect to find on this website, about how this technology could be used politically?

Certainly there are plenty of indications that people in general are not willing to go along with absolutely anything Big Tech tries to foist on us. “Glassholes” didn’t get far. I have yet to see anyone actually wearing a computer watch in public, though undoubtedly in places where I’m not likely to be found it’s known to happen. And, despite the endless hype about the “Internet of Things,” I know few people who have even troubled to put any of their TVs online, much less their coffeemakers or garage door openers.

On the other hand, I’ve certainly noticed that I’m among the minority that still uses cash rather than swiping a card in the grocery store. Partly that’s habit, but partly it’s that I don’t particularly want The Man to know the what, when, and where of everything I buy. (OK, call me paranoid. And I am indeed a Pynchon fan. So be it.) In that case, most people do seem to consider the convenience well worth any privacy or other concerns.

My own sense is that most people want, or will want, “emotional AI” on a tight rein, once acquainted with the issues discussed in the quoted article. (Unless they live in a more or less totalitarian state where they have little to no choice.) But helping to see that it is tightly and correctly regulated requires remaining knowledgeable and engaged. And a whole lot of us Americans still can’t even be troubled to get off our magnificently rock-hard sculpted behinds and vote once every two years.

This isn’t especially relevant to the above, but if you want to read the all-time classic science fiction story of AI gone very, very wrong, dig this.
