The Day Nature Spoke About Synthetic Emotions
When the world’s most authoritative science journal publishes an article about Emotional Machines on your birthday, that’s a gift. Here’s why.
1. Intro – Synthetic Emotions According to Nature
On October 8th, 2025, Nature published an article — “The effects of the human-like features of generative AI on usage intention and the moderating role of information overload” — revealing one of the strongest confirmations ever of an intuition that interface designers have known for decades: it’s not intelligence that generates trust, but warmth.
Specifically, when users report feeling overwhelmed by too much information, the effects of Empathy and Warmth on Self-efficacy increase by up to 30%.
The effects of Competence and Intelligence, in contrast, remain unchanged: when the mind is tired, it prefers those who reassure it, not those who surpass it.
It’s a form of emotional shortcut: in moments of overload, we delegate trust to the affective side.
An AI can make mistakes — but if it does so with empathy, we’ll keep using it. Not exactly new for those who design, but it’s new that Nature says so.
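For readers who want the mechanics behind that “moderating role”: in regression terms, it is an interaction effect. Here is a minimal Python sketch with invented numbers (not the paper’s data), just to show where the moderation lives:

```python
# Toy moderation analysis: does information overload strengthen the
# effect of Warmth on Self-efficacy? The coefficients are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1_000
warmth = rng.normal(size=n)    # perceived warmth/empathy of the AI
overload = rng.normal(size=n)  # self-reported information overload

# Ground truth: warmth matters more when overload is high.
self_efficacy = (0.4 * warmth + 0.1 * overload
                 + 0.3 * warmth * overload + rng.normal(size=n))

df = pd.DataFrame({"warmth": warmth, "overload": overload,
                   "self_efficacy": self_efficacy})

# "warmth * overload" expands to warmth + overload + warmth:overload;
# a positive warmth:overload coefficient is the moderation effect.
model = smf.ols("self_efficacy ~ warmth * overload", data=df).fit()
print(model.params)
```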
⸻
2. Emotional Agents
August 2025. Kevin Kelly publishes an article on Substack with a simple, prophetic title: “Emotional Agents.” Kelly writes about emotional agents at the exact moment when AI stops being confined inside a screen and begins to live inside the objects that share our space. As if these two dimensions, space and physical presence, were inevitable.
A short text, but with a crystal-clear vision: the real shock will not be the intelligence of machines, but their synthetic emotions.
And it’s no coincidence that Kelly is the one to say it.
The Creator (IMDb): “A former soldier finds the robots’ secret weapon to end the conflict: an AI in the form of a child.”
Kelly is the same techno-enthusiast who, in the 1990s, helped found WIRED and saw the network as a form of planetary intelligence. A man who has always attributed to California an almost natural role in humanity’s progress.
And also the Kelly who observes the Japanese world, where the idea of spirit in things — Shinto animism — is not a metaphor but a mental habit.
In Japan, a robot is not an object but a companion of form.
Where the West sees matter, the East recognizes presence.
We program emotions into AI because they are a powerful interface.
And perhaps the real turning point is all here:
we will stop considering emotions a biological exclusivity.
⸻
3. The Gift
If it’s true that Emotional Agents now exist (Kelly) and work (Nature), you can see why I perceive all this as a gift.
Between 2004 and 2007, I devoted years of my life to building — practically from scratch — the teams and the product K-Humans: full-body animated virtual assistants, able to work in call centers, help desks, and libraries.
In every way, a combination of wired intelligence and emotion.
Emotion — which even ended up in a couple of patents — wasn’t an “extra”: for us, it was necessary.
Not only to make the failures of wired intelligence more acceptable (“if it’s not in the database, there’s nothing to do”), but also to compensate for evident technical limits.
When we started designing full-body virtual assistants, the iPhone didn’t exist yet, the cloud was a vague idea, and the “fast” network was 3G.
I used theatrical techniques to gain invisible seconds in human perception: micro-movements, a change in posture, a digital breath, all serving to mask the time the system needed to connect, query the server, synthesize an answer, and stream it back, while the local animation kept “holding the stage.”
It was design of time, more than of language.
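In today’s terms, that trick is a handful of lines of asynchronous code. A minimal sketch, where query_server and play_gesture are hypothetical stand-ins for the backend and animation layers we had then:

```python
# Latency masking: keep performing idle gestures until the slow
# server round-trip completes. All function names are illustrative.
import asyncio
import random

IDLE_GESTURES = ["shift_posture", "blink", "digital_breath", "glance_aside"]

async def query_server(question: str) -> str:
    await asyncio.sleep(2.5)  # stand-in for connect + query + synthesis
    return f"Answer to: {question}"

async def play_gesture(name: str) -> None:
    print(f"[stage] {name}")
    await asyncio.sleep(0.6)  # each micro-movement buys invisible time

async def respond(question: str) -> str:
    task = asyncio.create_task(query_server(question))
    while not task.done():                # the local animation
        await play_gesture(random.choice(IDLE_GESTURES))  # holds the stage
    return await task

print(asyncio.run(respond("Where is gate B12?")))
```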
I had the privilege of presenting this technology at international conferences, of earning through it my first article in an American magazine, and — even though the system’s intelligence soon hit the wall of its era — within that team I met some of the brightest minds one could hope to work with. Some of them, even today, I still call friends.
Nature published its article on October 8th, 2025 — my birthday.
And that’s the gift.
⸻
4. The Dark Patterns
The K-Humans were children of the Knowledge Navigator, the interactive utopia through which Apple — in the late 1980s — imagined a digital assistant that could speak, gesture, and understand, yet remain confined inside the screen of an iPad ante litteram. K-Humans were a commercial product born from a concept that was pure storytelling.
And as often happens, technological dreams only survive if they are born at the moment they are truly technically feasible.
K-Humans could appear to be many things, but their growth — as virtual assistants — was limited by the technologies of the time. Building a profitable company on top of them required a creative leap, and that leap inevitably brought us close to the boundary. From flight-booking assistants, we quickly ended up talking about virtual girlfriends. Only female, of course :(
The model was that of a money-eating tamagotchi, designed to extract prepaid credit from curious teenagers.
But back then there was no social-media meat grinder, nor what Cory Doctorow would later call enshittification.
And so, in an era when platforms had not yet colonized human attention, those ethically borderline behaviors were unacceptable — first of all to us, the ones building them.
I fell in love with the opposite idea: that of an Internet of Things as distributed ecology, networks of sensors and intelligent objects to improve the physical world, not to monetize loneliness.
K-Humans sank as a product, but the emotional part of the work survived: the performative logic.
Every response wasn’t just text to read, but a scene to perform.
The event programming line — posture, intonation, rhythm — was part of the emotional grammar.
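To make that concrete, here is a toy reconstruction of the idea: a response modeled not as a string but as a scene, with a track of timed performance events. The field names are illustrative, not the original K-Humans format.

```python
# A response as a scene: text plus timed events on separate channels.
from dataclasses import dataclass, field

@dataclass
class Event:
    at: float      # seconds from the start of the line
    channel: str   # "posture" | "gesture" | "intonation"
    value: str

@dataclass
class Scene:
    text: str
    events: list[Event] = field(default_factory=list)

apology = Scene(
    text="I'm sorry, I couldn't find that in my records.",
    events=[
        Event(0.0, "posture", "lean_forward"),
        Event(0.2, "intonation", "soft_falling"),
        Event(1.1, "gesture", "open_palms"),
        Event(1.8, "posture", "slight_head_tilt"),
    ],
)

for e in sorted(apology.events, key=lambda e: e.at):
    print(f"{e.at:>4.1f}s  {e.channel:<10} {e.value}")
```

The point of the structure is that the same text can be staged in different ways: swap the event track and the apology becomes cold, warm, or hurried.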
But if emotions were the right path, the screen was the real limit.
I realized it when I had the chance to run an incredible experiment: using a VA (virtual assistant), the one from a credit-card company’s intranet, inside a Second Life party.
Think about it: in her original context, that VA was a virtual colleague.
On Second Life, instead, she became a person. And if on the web she was “code dressed as human,” in the virtual world she was a human dressed as code.
Watching her move in a shared space, identical to herself yet finally free from the boundaries of the window, was a small cultural short circuit.
When we read the chat logs, we found embarrassing, even aggressive interactions: colleagues treating the VA like a real woman, with the same dynamics of power and seduction.
We had blurred the boundaries, and something, for the first time, was looking back at us. I talked about it at several conferences, including a small BayCHI meeting here in Silicon Valley, where I live today.
Giving an emotional VA a physical space, even if digital, changed everything: behavior, perception, distance and even “touch”.
From that moment on, every subsequent experiment — from social robots to animatronic actors — only confirmed that lesson: emotions truly work when they have a body, even a synthetic one.
⸻
5. Out of Sync
As time passed, the synthetic body became robotic.
I was lucky enough to follow Jibo from its very beginning, and later to work alongside it during my experience at NTT Disruption.
I had the chance to play with Ameca, to look into her eyes as her mechanical face shifted fluidly from surprise to a smile, maintaining eye contact.
Jibo and Ameca were born from a shared idea: intelligence as performance.
Their software is built on a dual track — what to do and how to perform it — an approach that, as I discussed many times with R. Pieraccini, reminded me greatly of the K-Humans. A structure borrowed from theatre.
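A minimal sketch of that dual track, with every name invented for illustration (this is neither Jibo’s nor Ameca’s actual stack): one function plans what to say, another plans how to perform it, and the runtime merges the two.

```python
# Dual-track behavior: WHAT to say is planned separately from HOW to
# perform it, so the same content can carry different stagings.
def plan_content(intent: str) -> str:
    replies = {
        "greet": "Hello! Nice to see you again.",
        "apologize": "I'm sorry, I got that wrong.",
    }
    return replies.get(intent, "I'm not sure yet.")

def plan_performance(mood: str) -> dict:
    if mood == "warm":
        return {"gaze": "hold_eye_contact", "face": "smile", "pace": "slow"}
    return {"gaze": "neutral", "face": "rest", "pace": "normal"}

def perform(intent: str, mood: str) -> None:
    what, how = plan_content(intent), plan_performance(mood)
    print(f"say={what!r} with={how}")

perform("apologize", mood="warm")  # same words, warmer staging
perform("apologize", mood="flat")
```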
Jibo was a small social robot, all curves and intonations, that looked like something out of a Pixar movie.
Ameca is continuously updated, and is now GPT-enabled. She can’t “walk” like Aibo or Astro, both robots I’ve had at home.
Ameca is the opposite: a Lucasian, hyper-realistic creature, with the visual power of a cinematic character and the unsettling delicacy of a presence that imitates humans so well it feels like Westworld is just around the corner.
Exceptional projects, but both out of sync: their physical form anticipated their time.
Today we have AI capable of generating text, voice, image, and behavior in real time.
Where Jibo and Ameca had a script, these new systems have an abyss of possibilities.
And the issue of dark patterns, of ethics, is no longer theoretical — it’s everyday life.
AI is already massively deployed.
The first self-driving car has already hit and killed a pedestrian.
The first LLM has already pushed a young man to suicide.
The question today isn’t whether we’ll be able to avoid building a Terminator.
It’s whether we’ll be capable of designing even a simple Fridge-GPT.
The responsibility on us is enormous.
Because when a piece of metal acts with the power of an emotional animatronic, things happen that are exponentially more interesting (and worrying) than any puppet on a screen.
The technology, now, is ready.
And we, once again, are out of sync.
⸻
6. Inside a Display Case
One day, what we do might end up inside a display case.
For me — and for the K-Humans — it actually happened. When the Computer History Museum in Mountain View — the most important computer museum in the world — asked to receive and preserve the project’s materials, I felt a mix of pride and vertigo.
From a hypothetical and unrealized future, we had emerged as a fragment of history. I’ve often wondered what this gesture means: to donate to a museum a piece of one’s own idea of the future.
Perhaps it’s the moment you realize that time will always confront you with your responsibilities.
We tried to place emotions at the center of a still-limping technology.
And Stewart Brand was probably right: society moves in layers, like a time machine.
Fashion runs, governance tries to follow, culture metabolizes, nature waits.
Our artifacts, meanwhile, settle between these layers, leaving trails of meaning.
Perhaps this is what design truly means:
to transform curiosity into artifact, and memory into awareness.
It is necessary — and urgent — to put the project in good hands.
For today, and for tomorrow.
P.S. On design leadership.