AI versus Marshy #47: emotional edition


Hello and welcome to another edition of AI versus Marshy.

This is the newsletter that strives to make sense of the AI hype and give you ways to apply its lessons today.

This week goes a bit differently because we’re going deep on a topic - emotion detection.

I started investigating and it demanded its own edition, so let’s move on - like soothing a tantrum - and get on with it.

Thanks 🙏

-Marshy

Emotions triggered?

Via LinkedIn.

This story starts with an assertion from my friend Dave.

“Emotional detection via computer vision is a deeply fraught area in AI. We should not be incorporating it into the way we see ourselves or others (and definitely not into AI workflows)”

Oh dear.

I’ve been nerding out on Hume, and I love some of its thinking about linking education to emotional engagement - I think there’s real potential here for ADHD and knowledge workers.

Is this actually bad tech?

Dave was referencing a video by Morten Rand-Hendriksen.

I don’t know Morten, but his video was rebutting this week’s GPT-4o demo from OpenAI:

Videos like these were then cut into shorter clips to highlight specific features.

Morten argues that facial emotion detection doesn’t work.

It’s facial phrenology (read: a scam), and the examples he cites are:

  • himself (it consistently misreads his emotions, given his neurodiversity)
  • that facial recognition doesn’t work
  • and that different cultures express emotions differently

So first things first - discussing the ethics of AI is good for all of us.

Without these discussions there are real risks that we’ll get swept away by hype, big tech will eat us, or an AGI will turn us into slaves like in The Matrix.

We need people like Morten to challenge this technology - otherwise we’re just accepting information prima facie (oh look, I studied Law for five seconds).

But raising challenges doesn’t automatically make your assertions right either.

So does emotional recognition with AI work?

I don’t think this is a fair question.

Do humans even know what our emotions are doing?

Solving for emotional intelligence

30 years ago, if a large-scale, dramatic and upsetting event occurred, we’d see it on TV, maybe read about it in the newspaper, process it over the next few days, and go back to our normal modes of living.

Today, if a dramatic and upsetting event is captured on video - we might see it on our phones and TVs from 18 different angles before the end of the day.

The footage is shocking and we’re seeing more of it.

Here’s a completely different example.

The Body Keeps the Score, published in 2014, looks at the impact of childhood and complex trauma on people over time.

It unpacks a lot of treatment methods and some of the challenges people have with bringing their traumas into everyday interactions.

The science on this is strong.

These traumas and people’s emotional responses colour everything - dating, drug and alcohol use, financial stability, likelihood of suicide and more.

One of the most damning stats from the book refers to ACE scores - Adverse Childhood Experiences.

The more you have (out of 10), the more likely life is going to be hard.

These links are relatively fresh in the health and science world (< 10 years old) and we still don’t know how to address this in everyday life.

But collectively, our understanding of the impacts is still in its early days. Some corners of the US point to childhood trauma as a health issue - reduce the trauma and you reduce the load on the medical system.

Locally, interest in this area has grown over the last 10 years - here’s what we see on Google trends:

The reason I’m pointing to the impacts of technology and childhood trauma on our emotional wellbeing is because we’re not sure what’s going on as humans ourselves.

Which leads us back into areas of AI that are trying to build up our emotional intelligence.

What if we could understand ourselves better?

I saw Nicole Gibson at a TSN event last year.

She runs Love Out Loud and another project called InTruth - an emotional monitoring platform for detecting your emotional patterns over time.

Understanding when you’re dysregulated, or when you’re really enjoying yourself could be really useful.

The app’s goal is to focus on patterns, not snap calls. Understanding those patterns over time is what matters - not the accuracy of an emotional read at any given moment.

This brings me back to Morten’s arguments about AI not being accurate.

It’s not.

And we’re not too good at this as humans either.

What AI does is collect data trends over time and use that to be predictive - not accurate.

AI can’t be accurate if we’re not.

But if there’s a pattern of something and we know to look into it more - that’s a good thing.
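To make that idea concrete, here’s a minimal sketch (entirely hypothetical - the "stress" scores are made-up numbers, not output from any real emotion-detection API) of why patterns beat snap calls: individual per-moment reads are noisy, but a rolling average over many reads surfaces a trend worth looking into.

```python
from collections import deque

def rolling_average(scores, window=5):
    """Smooth noisy per-moment readings into a trend line."""
    buf = deque(maxlen=window)  # keeps only the most recent `window` reads
    trend = []
    for s in scores:
        buf.append(s)
        trend.append(sum(buf) / len(buf))
    return trend

# Hypothetical per-moment "stress" reads on a 0-1 scale: individually all over the place...
reads = [0.2, 0.8, 0.3, 0.9, 0.4, 0.85, 0.35, 0.9]
# ...but the smoothed trend swings far less than any single read does.
print(rolling_average(reads, window=4))
```

Any single read here could be badly wrong - and that’s fine, because it’s the trend, not the snapshot, that you’d act on.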

One of my favourite books is Pamela Meyer’s Liespotting - I read it after watching her TED talk, which has one of the most powerful openings I’ve seen in any presentation.

The book goes into much more detail about spotting lies - it’s not about aha moments - it’s listening to and reading the cues and unpacking the story further to investigate the clues that make up a lie.

For example - asking someone to retell a story starting from different points in time is easy for someone recalling what actually happened - they’re just remembering things in a different order.

However - it’s tricky for someone who made up a story, because they’ve invented the story with a sequence of events.

I feel like emotional detection in AI is similar here - we’re getting additional clues and cues towards how someone is feeling - which could be life-changing in areas like crisis support.

I do think we need to keep having these discussions and listening to challenges about the technology.

I just don’t believe it’s a case of “we shouldn’t use this” and that it doesn’t work.

That was a different gear for AI versus Marshy this week but I was keen to unpack it.

Let me know what you think - do you agree or disagree?

A reader sent me this article on AI’s impact on water resources so there might be another edition like this coming soon.

Remember people - the way we learn to handle AI better is by working with it better ourselves.

We’ve got this!

-Marshy 💪
