AI vs. Marshy

AI versus Marshy #47: emotional edition

Published about 1 month ago • 4 min read

Hello and welcome to another edition of AI versus Marshy.

This is the newsletter that strives to make sense of AI hype and gives you ways to apply lessons today.

This week goes a bit differently because we’re going deep on a topic - emotion detection.

I started investigating and it demanded its own edition, so let's take a deep breath - like soothing a tantrum - and get on with it.

Thanks 🙏


Emotions triggered?

Via LinkedIn.

This story starts with an assertion from my friend Dave.

“Emotional detection via computer vision is a deeply fraught area in AI. We should not be incorporating it into the way we see ourselves or others (and definitely not into AI workflows)”

Oh dear.

I’ve been nerding out on Hume, and I love some of the thinking about linking education to emotional engagement - I think there’s real potential here for ADHD and for knowledge workers.

Is this actually bad tech?

Dave was referencing a video by Morten Rand-Hendriksen.

I do not know Morten, but his video was rebutting this week’s ChatGPT4o demo:

Videos like these were then cut into shorter clips to highlight specific features.

Morten argues that facial emotion detection doesn’t work.

It’s facial phrenology (see: a scam), and the examples he cites are:

  • himself - given his neurodiversity, it consistently gets his emotions wrong
  • that facial recognition does not work
  • that different cultures have different expressions

So first things first - discussing the ethics of AI is good for all of us.

Without these discussions there are real risks that we’ll get swept away by hype, big tech will eat us, or an AGI will turn us into slaves like in The Matrix.

We need people like Morten to make challenges to this technology - otherwise we’re just accepting information prima facie (oh look I studied Law for five seconds).

But making challenges doesn’t automatically make your assertions right either.

So does emotional recognition with AI work?

I don’t think this is a fair question.

Do humans even know what our emotions are doing?

Solving for emotional intelligence

Thirty years ago, if a large-scale, dramatic and upsetting event occurred, we’d see it on TV, maybe read about it in the newspaper, process it over the next few days, and then go back to our normal lives.

Today, if a dramatic and upsetting event is captured on video - we might see it on our phones and TVs from 18 different angles before the end of the day.

The footage is shocking and we’re seeing more of it.

Here’s a completely different example.

The Body Keeps the Score, written by Bessel van der Kolk in 2014, looks at the impact of childhood and complex trauma on people over time.

It unpacks a lot of treatment methods and some of the challenges people have with bringing their traumas into everyday interactions.

The science on this is strong.

These traumas and people’s emotional responses colour everything - dating, drug and alcohol use, financial stability, likelihood of suicide and more.

One of the most damning stats from the book refers to ACE scores - Adverse Childhood Experiences.

The more you have (out of 10), the harder life is likely to be.

These links are relatively fresh in the health and science world (less than 10 years old), and we still don’t know how to address them in everyday life.

But collectively, our understanding of the impacts is still in its early days. Some corners of the US treat childhood trauma as a public health issue - reduce the trauma and you reduce the load on the medical system.

Locally, interest in this area has grown over the last 10 years - here’s what we see on Google Trends:

The reason I’m pointing to the impacts of technology and childhood trauma on our emotional wellbeing is because we’re not sure what’s going on as humans ourselves.

Which leads us back into areas of AI that are trying to build up our emotional intelligence.

What if we could understand ourselves better?

I saw Nicole Gibson at a TSN event last year.

She runs Love Out Loud and another project called InTruth - an emotional monitoring platform for detecting your emotional patterns over time.

Understanding when you’re dysregulated, or when you’re really enjoying yourself could be really useful.

The app’s goal is to focus on patterns, not snap calls - understanding these patterns over time is key, not the accuracy of any single emotional read.

This brings me back to Morten’s arguments about AI not being accurate.

It’s not.

And we’re not too good at this as humans either.

What AI does is collect data trends over time and use that to be predictive - not accurate.

AI can’t be accurate if we’re not.

But if there’s a pattern of something and we know to look into it more - that’s a good thing.
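That "patterns over snapshots" idea can be sketched with a toy example. This is purely illustrative - the `rolling_pattern` helper and the readings below are made up, not from Hume, InTruth, or any real emotion-detection API. The point is just that individual readings can be noisy and wrong while a rolling average still surfaces a trend worth looking into.

```python
from statistics import mean

def rolling_pattern(scores, window=5):
    """Smooth noisy per-moment emotion scores into a trend.

    `scores` is a list of confidence values (0.0-1.0) for one
    emotion label, one per observation. Any single reading may be
    off, but the smoothed sequence can still reveal a pattern.
    """
    return [
        mean(scores[max(0, i - window + 1) : i + 1])
        for i in range(len(scores))
    ]

# Hypothetical per-moment "stress" readings - jumpy and unreliable.
readings = [0.2, 0.9, 0.1, 0.3, 0.8, 0.7, 0.9, 0.8, 0.95, 0.9]
trend = rolling_pattern(readings, window=4)

# The smoothed values at the end sit well above those at the start,
# suggesting a rising pattern worth a closer look - even though no
# single reading was trustworthy on its own.
print(trend[-1] > trend[0])  # → True
```

None of this makes any one emotional read accurate - it just turns a pile of unreliable snapshots into a signal you can choose to investigate.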

One of my favourite books is Pamela Meyer’s Liespotting - I read it after watching her TED talk, which has one of the most powerful openings I’ve seen in any presentation.

The book goes into much more detail about spotting lies - it’s not about “aha” moments, but about listening to and reading the cues, then unpacking the story further to investigate the clues that make up a lie.

For example - asking someone to retell a story from different points in time is easy for someone recalling what actually happened - they’re just remembering events in a different order.

However, it’s tricky for someone who made the story up, because they invented it as a fixed sequence of events.

I feel like emotional detection in AI is similar here - we’re getting additional clues and cues towards how someone is feeling - which could be life-changing in areas like crisis support.

I do think we need to keep having these discussions and listening to challenges about the technology.

I just don’t believe it’s a case of “we shouldn’t use this” and that it doesn’t work.

That was a different gear for AI versus Marshy this week but I was keen to unpack it.

Let me know what you think - do you agree or disagree?

A reader sent me this article on AI’s impact on water resources so there might be another edition like this coming soon.

Remember people - the way we learn to handle AI better is by working with it better ourselves.

We’ve got this!

-Marshy 💪

AI vs. Marshy

by Luke "Marshy" Marshall

Growth marketer meets biggest technological advancement in our lives. Learn about AI in a way that doesn't overwhelm. Add a splash of strap yourself in and be prepared.
