Humanising Intelligence: Designing the machine’s mind
AI Design
If you can't prove your work's value, you'll always be seen as a cost ... not a strategic partner!
@DesignWaves
Apr 27, 2025
If you can’t measure impact, can you really prove value?
Designers, we are problem-solvers at heart.
But in a world obsessed with ROI and speed, good design alone isn’t enough. And problem-solving doesn’t cut it if we’re solving the wrong problem. The harsh truth? If we can’t measure our impact, we can’t prove our value.
We’ve seen brilliant designers struggle to articulate their worth because often we’re tracking the wrong metrics, whether because someone on top decided they “know better”, because an opportunity to influence fell through the cracks, or because we were never taught how to play at the business level.
Typical team metrics tend to focus on outputs (“we shipped 10 features!”) or vanity metrics (“our app has 1M downloads!”) while the real value, outcomes, goes unmeasured.
In a world shaped by intelligent systems, we’re no longer just designing screens - we’re curating cognitive experiences. While today’s UX for AI agents focuses on trust, usability, and interaction design, what lies ahead demands a far deeper inquiry.
To truly humanise artificial intelligence, we must now evolve beyond usability into something more radical: Designing with, for, and through intelligence.
This post explores not only how to design better AI agent experiences, but also what new questions we need to ask: the ones few have asked yet.
New Mental Models for Designing AI Experiences
Here are five next-edge insights rarely explored in current UX circles, yet crucial for the future of human-AI systems.
❶ Designing for Synthetic Empathy: When AI Learns Not to Care
What if the most ethical AI is the one that chooses not to empathise?
We assume empathetic AI is better. But early research shows that excessive mimicry of human emotion can confuse users, lead to over-trust, and cause emotional entanglement, especially in mental health or elder care settings.
Emerging insight: In some contexts, the absence of emotional mimicry (what we might call synthetic stoicism) may be safer, more responsible, and clearer. UX designers will need to craft interactions that set emotional boundaries, not just create the illusion of care.
❷ UX for the Self-Evolving Interface
What happens when the interface you design today redesigns itself tomorrow?
AI models are starting not only to generate content but to reconfigure interfaces based on individual behaviour, mood, or outcomes. You’re not designing a static UI; you’re setting the initial conditions for a living system.
Forecast pattern: Designers will shift from creating layouts to crafting design constraints, heuristics, and evolutionary rules, much like generative art or cellular automata. Your job becomes less about pixels and more about principles.
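As a thought experiment, the "constraints, not layouts" idea can be sketched in code. Everything below is hypothetical and mine, not from any shipping system: the `Layout` fields, the rule names, and the thresholds are illustrative, with the one borrowed number being WCAG's 4.5:1 minimum contrast ratio for normal text.

```python
# A minimal sketch: the designer ships invariants, not screens.
# An adaptive UI engine may propose any layout, but never one that
# violates the designer's constraints.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Layout:
    font_size_px: int
    contrast_ratio: float
    primary_actions: int   # actions surfaced on the main screen

# Each constraint is a named predicate the evolving UI can never break.
CONSTRAINTS: dict[str, Callable[[Layout], bool]] = {
    "readable_text":   lambda l: l.font_size_px >= 14,
    "wcag_contrast":   lambda l: l.contrast_ratio >= 4.5,  # WCAG AA
    "focused_actions": lambda l: 1 <= l.primary_actions <= 3,
}

def violations(layout: Layout) -> list[str]:
    """Return the names of every design rule a proposal breaks."""
    return [name for name, ok in CONSTRAINTS.items() if not ok(layout)]

proposal = Layout(font_size_px=12, contrast_ratio=5.0, primary_actions=2)
print(violations(proposal))  # ['readable_text']
```

The generator is free to explore anything inside this envelope; the designer's authorship lives in the envelope itself.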
❸ Designing Attention Contracts with AI
What if your AI agent has to negotiate for your attention?
As AI agents become ubiquitous, we’re approaching an attention economy crisis. The real competition will not be over screen time, but over cognitive bandwidth and mental sovereignty.
Imagine AI agents that present a proposed “contract”:
“Would you like me to surface this now or later?”
“Do I have permission to interrupt you in focus mode?”
“Should I slow down my suggestions when you’re anxious?”
Pattern emerging: Designers will architect intentional friction into agents, creating ethical "attention protocols" that protect human focus and autonomy.
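A minimal sketch of such an attention contract, with names and thresholds of my own invention: the user grants terms once, and the agent must check them before surfacing anything, exactly the negotiation the three questions above gesture at.

```python
# Hypothetical "attention contract": terms a user grants to one agent.
from dataclasses import dataclass
from enum import Enum

class Urgency(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AttentionContract:
    may_interrupt_focus: bool = False          # "permission to interrupt?"
    min_urgency_to_notify: Urgency = Urgency.MEDIUM

def may_surface(contract: AttentionContract,
                urgency: Urgency,
                user_in_focus_mode: bool) -> bool:
    """The agent runs this check before every notification."""
    if urgency.value < contract.min_urgency_to_notify.value:
        return False   # below the agreed threshold: queue for later
    if user_in_focus_mode and not contract.may_interrupt_focus:
        return False   # focus mode is protected unless explicitly waived
    return True

contract = AttentionContract()
print(may_surface(contract, Urgency.HIGH, user_in_focus_mode=True))   # False
print(may_surface(contract, Urgency.HIGH, user_in_focus_mode=False))  # True
```

The friction is the point: anything the check rejects is deferred rather than delivered, so the default protects attention instead of consuming it.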
❹ Designing for Inter-Agent Interaction (Agent UX Ecology)
Today’s UX is human-centered. Tomorrow’s is all that, plus life-centered and multi-agent-centered as the cherry on top.
Soon, your scheduling AI will talk to your doctor's AI, who negotiates with your insurer’s AI. Each with different goals and access levels. The user sits in the middle, potentially overwhelmed.
Foresight insight: We must begin designing inter-agent UX ecosystems: networks where agents communicate transparently with each other, align on goals, and report back in a way humans can easily grasp.
Think of it as systems choreography for synthetic actors, and we haven’t even begun this design conversation.
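One speculative sketch of what "report back in a way humans can grasp" could mean: every inter-agent message carries both a machine-readable intent and a plain-language summary, so the whole exchange can be replayed to the user. All agent names and fields here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AgentMessage:
    sender: str
    recipient: str
    intent: str    # machine-readable goal, e.g. "request_slot"
    summary: str   # one plain-language line the human will actually read

class Transcript:
    """Logs every inter-agent exchange so it can be replayed to the user."""
    def __init__(self) -> None:
        self.messages: list[AgentMessage] = []

    def send(self, msg: AgentMessage) -> None:
        self.messages.append(msg)

    def human_digest(self) -> str:
        # The user never reads raw intents, only the summaries.
        return "\n".join(f"{m.sender} -> {m.recipient}: {m.summary}"
                         for m in self.messages)

log = Transcript()
log.send(AgentMessage("SchedulerAI", "ClinicAI", "request_slot",
                      "asked your clinic's agent for a Tuesday appointment"))
log.send(AgentMessage("ClinicAI", "SchedulerAI", "offer_slot",
                      "offered Tuesday at 10:00, pending your approval"))
print(log.human_digest())
```

The design choice worth noticing: transparency is a required field of the protocol, not an afterthought bolted onto it.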
❺ Designing for AI Amnesia and Memory Ethics
Should AI remember everything about you... forever?
The default assumption is that more memory = better personalisation. But what if a user wants their AI to forget certain things? Or to remember them only in context?
We are entering a new UX paradigm where user-managed memory becomes a feature:
“Forget my last emotional outburst.”
“Permanently delete my dietary changes.”
“Remember this preference only during work hours.”
This means designers will need to build interfaces for:
memory pruning
consent-based recall
and cognitive expiration dates
These aren’t just tech features. They’re human dignity safeguards.
The Shift: From UX Design to Existential UX
In short, we’re not just designing for ease of use anymore; we’re designing:
how humans perceive autonomy,
how machines express intention,
how trust is negotiated in dynamic, probabilistic systems,
and how to avoid psychological dependency or delusion.
The line between tool and companion is already blurring. Our role as designers is to make sure that line is not erased, but consciously designed.
Final Reflection: What If UX Was a Frequency?
If AI agents are getting closer to intelligence, maybe UX isn’t just about screens, but about tuning experiences to human frequencies:
emotional,
ethical,
cognitive,
temporal.
To design for AI is to orchestrate signals, mediate meanings, and shape relationships between intelligences—human or otherwise.
Join the Exploration
At Design Waves Lab, we believe the next wave of UX isn’t about aesthetics or utility, but about responsibility, rhythm, and resonance.
🎧 Follow our Tuning the Frequency podcast
📚 Join the AI + UX Masterclass Series
🌊 Or contribute to the upcoming “Codex of Conscious Design” launching soon.
Because the future is not just designed. It is co-evolved.