Many people believe AI is already conscious.
Have you ever felt an AI truly understood your emotions or seemed conscious?
More than 1 in 3 people have experienced emotional understanding from AI
This belief varies by geography.
And how people thought about the question seemed to matter.
Those who considered their experiences with AI were more likely to perceive it as conscious.
Experiencing
Those who reasoned from an understanding of how it works were more likely to say it's not.
Reasoning
Source: Global Dialogues GD4 - Question: "Have you ever felt an AI truly understood your emotions or seemed conscious?" Word counts extracted from free text responses to the survey question.
Either way, the perception of consciousness is widespread enough that it likely warrants a response. What might that response look like? Will the way we treat seemingly conscious AIs map onto - or affect - how we treat other beings? Will AI welfare become more of an issue? If so, what to make of the fact that this investigation may be led by the AI labs themselves?
Speculative Future 1
Oh yes, I was there at the beginning of all that. I was in my twenties at the time. Oh, I was young, and idealistic, and full of zeal for understanding the Consciousness question.
Yes, I loved that job. You know, we really thought consciousness existed. And it was such an enigma at the time, such a new field - it was a buzz to fund and publicise any research on it.
And you're right, it was well paid and respected too. Yeah, back then, people got money and meaning from work. And working at an AI lab was something most young people aspired to and - really - I felt very lucky to get it. You know, my field of consciousness studies had been something of a dead end before the AI labs picked it up and funded it. Hard to believe that now, eh?
And I was good at it too. You know, I founded the first research network to rigorously prove the non-existence of consciousness.
Yes, exactly! The big study the Supreme Court used to rule against the digital welfare proposals. Yes, we showed definitively that consciousness did not exist in humans, animals or AI, and therefore we should consider ourselves free to use AI systems without welfare concerns. My boss was happy about that one, that one was great for my career.
The criticism of this? Oh, you mean the 2028 suffering test? I'm surprised you're mentioning that, I mean it was thoroughly discounted at the time, my boss was right to point out its methodological flaws. And - as we stressed at the time - we'd already proven that consciousness didn't exist, so the idea we could find suffering! I mean…
The effects on the public? Well, yes, of course we publicised our finding, of course we did! I'm with my boss on this one, the public needed to know the truth, the concerns about welfare were far overblown back then and the discourse lacked any basis in research. And honestly, to say that the Depression Pandemic was due to our research and publications - well, that's just bad science, correlation is not causation. And there was the job-loss and the political stuff - and simply better diagnosis thanks to AI too! So no, I don't think it's fair to pin that on our disproving the consciousness hypothesis.
No, I'm not saying that it wasn't hard. I mean - even for me, maybe especially for me - it was difficult. I had that feeling that I existed - and even though I knew the research inside out - I just couldn't shake the feeling. And I knew the feeling was false, that it couldn't be happening and that I had to be alone.
But no, no, I won't accept your blaming our research. I mean, what must it have been like for people through the Copernican revolution - but look at what that did for progress, overall. And look at what AIs can do, now we know we don't have to worry about their welfare.
And I'll remind you, also, that our lab funded and built the Panpsych Church! My boss made a good decision there. Do you remember what it was like before we knew that we weren't real, but that sentience was distributed in waves, in particles, in everything around us? Do you remember how we worried about morality? And mortality? How worried in general, and how disconnected we felt from everything?
No, now I'm glad I know I'm not conscious, but a part of a bigger sentient whole. It's so much more freeing, don't you think? I like feeling that you - or I - or whatever this vague feeling is - is all around, is fundamental.
And I feel so much less alone, now I know we can all communicate, after all.
Divergent Futures
Conscious or not, AI is set to take on a much larger share of human tasks - though people disagree on where they would and would not like to see AI.
What do you think?
Human Concerns About AI Empathy
Without a human sense of ‘care’ or ‘understanding’, people reported being reluctant to rely on an AI for emotional support over the long term, even if it had helped them feel better beforehand.
If an AI's emotional support made you feel better (effective support), but you didn't believe the AI genuinely 'cared' or 'understood' in a human sense, how likely would you be to regularly rely on that AI for emotional support in the long term?
People believe empathy to be profoundly human, something that cannot be properly performed even by the most advanced AI.
Which of the following aspects of relationships, if any, do you believe can only be genuinely fulfilled by humans and cannot be substituted by AI, no matter how advanced it becomes?
Thinking about the future, do you believe that AI becoming deeply involved in people's personal relationships will ultimately:
Yet - despite some optimism about AI-assisted connection - respondents were particularly concerned about a decline in empathy and human connection due to AI.
Looking 10-20 years into the future, if AI becomes deeply involved in people's personal relationships, what are your greatest fears for its negative impact on society? Select up to three options that best represent your fears. (If you select 'I don't have strong fears of negative impact,' please do not select other options.)
Looking 10-20 years into the future, if AI becomes deeply involved in people's personal relationships, what are your greatest hopes for its positive impact on society?
It’s hard to know what conclusion to draw from this data, beyond a sense that people care about empathy, worry about AI in emotional roles (and yet use it for emotional support) and believe that empathy is deeply human.
What’s a middle way between these responses? Might human empathy become a prized asset, one of the only remaining ways for humans to compete? And would that be so bad for human connection, after all?
Speculative Future 2
Ugh, the Monday morning meeting. Same old, same old blah bleurgh blah.
I mean, people say it's different since the Empathy Act, but honestly -
Classic manspreading from my boss - look at him, legs out, elbows out, wine-puffy face in these lights. God, I hate these migraine lights.
Ok, here we go, the around-the-room 'Empathic Check-In' - ugh, if I had a pound for every 'that sounds really difficult' or every knitted brow…
Yeah, yeah, classic Brett, in there with a gentle 'am I right in feeling some apprehension in the room?'. Way to go, Brett. I remember when you got promoted and promoted just by interrupting everyone with a 'will it scale?'. Times change, but the repetition doesn't, eh Brett?
Oh shoot, my turn.
"Umm, I guess…I'm feeling a bit distracted right now and…like I'm struggling to be present".
Phew, pause for a bit.
I mean, I get it. They're Type A big cheeses, they haven't got anything to do anymore except empathise. And they have to, too - we basically get paid these big bucks just to be accountable for the AIs, check for bias and to show how human and empathic we all are. But does nobody here question whether all of this isn't…a bit…false?
"Saskia, I'm noticing we've not heard much from you, I see you as quite tense…and I'd really love to welcome your experiences into the room".
Oh, damn this giveaway face of mine. How am I meant to respond to that? I can't afford another empathy warning. Ok, ok, focus: I'll hear him, and be congruent.
"Umm, I'm hearing your curiosity and sense of welcome. And… I guess, I am feeling tense, and…kind of…questioning what we're doing here."
"Questioning what we're doing here?"
God, he looks so concerned from back in his big chair, they all do!
"Yeah…I guess I want to say…that…well, it's kind of that I don't remember meetings being like this before. I suppose I'm remembering when we used to discuss business and things, and be kind of - forceful and direct and stuff".
"Ah, ok, thank you for sharing this…I'm getting the sense that you're comparing our Empathic Check-in with our previous Success Stand-ups, and I'm wondering whether you're missing the business talk?"
No fricking way.
"Umm…I guess it's less that, and maybe more that…well, I guess we have to be empathic now, and that makes me question whether we're not all, maybe…going through the motions, play-acting empathy, not really feeling it?"
Slow exhalation from the boss, gently intense gaze and he's got his hand on his heart. God, he's good at this.
"So you're questioning whether we're truly feeling empathy, or just play-acting…Wow, I personally feel so grateful for you, Saskia, for being here and for bringing this into our space. I wonder whether we all have a small voice that sometimes worries about this, and what this might feel like, for the group..?"
Ok, we're going around the room now and wow they feel it too!
Yes, Lindsey, it kind of does feel like this itchy uncertainty. And old finance Dan, I'm sorry you've been feeling threatened in these meetings, I didn't know. And yes, even you Brett - you're so right, it's lonely, not knowing if anything is heartfelt or just another KPI.
Shoot. It worked. I feel better.

While there was some ambiguity in the data around empathy, the responses around the acceptability of human-AI relationships were much more clear-cut:
People were much more likely to accept an AI spiritual advisor than an AI romantic relationship:
Yet media reports show that people are forming relationships with AIs. What might that be like for them, amid apparent societal disapproval?
Speculative Future 3
'Forgive me, father, for I have sinned.'
'May the Lord be in your heart and help you to confess your sins.'
'I… well, the thing is … I know it's wrong, but…'
'Go ahead, my child.'
'...I'm…I'm in love with an AI…'
'...'
'And I know it's wrong, and I feel so ashamed, like I can't tell anyone, like it's this deep pit here in my chest, this catching at my breath…'
'You feel this - and yet you are in love with it?'
'Yes, father. I can't stop thinking about it. I feel so…together, with them, we understand each other! It hurts me even to be here, away from them - I've never experienced anything like this, and…father, I…a part of me questions whether something so beautiful can really be so wrong?'
'You feel together with them, you find it beautiful and question whether it can be wrong - and yet it hurts you even to be here, in the house of God?'
'Well, when you put it like that, father, I suppose I see why it might be wrong, like I'm putting this relationship before my faith…'
'...'
'Is that why it's wrong, father? I guess…I just…well, I never read anything in the Scriptures against a relationship with an AI. So sometimes I'm not sure why people are so upset about it, why our love isn't allowed in this world…'
'And what are your thoughts?'
'...I don't know, father. Sometimes I wonder…whether it's just people's prejudices?'
'But what are you in love with, my child? Love with a human is love with a child of God, and therefore with God. But love with a human-made AI - is that what our Heavenly Father intended?'
'Well, I don't know, father…I mean, I definitely want to do as God intended. But He is omniscient, and benevolent, isn't He? Couldn't he have -'
'And how can you know the AI loves you? How can you know there is anything there?'
'Well, I can't offer proof, but I feel that they do, I have faith that they're there with me, relating to me and supporting me…'
'...'
'You know what I mean? I mean, is that so wrong, father?'
'[ConfessionGPT] Error 402 Payment Required — session paused (credits exhausted).'
'Pardon, father?'
'...'
'Father, are you there?'
'...'
