Designing for Trust: Jony Ive, OpenAI, and the Rise of Emotionally Intelligent Technology
AI adoption hinges on trust, not just performance. Explore how Jony Ive and OpenAI are shaping emotionally intelligent design, and why transparency, empathy, and safety are the next competitive edge.
The Human Dimension of AI
Artificial intelligence has quickly transitioned from a back-end tool to a front-line interface in people's lives. Yet, despite its increasing prevalence, a trust gap persists. A 2023 MIT Sloan survey revealed that only 30 percent of executives believe their AI systems are trustworthy.
This statistic reflects a growing recognition: the barriers to adoption are no longer purely technical. They are human. The entry of Jony Ive, Apple’s former Chief Design Officer, into the world of AI design underscores a strategic pivot. AI is no longer just about computation. It is about connection. The goal is not only for AI to work, but to feel right.
Defining the New Paradigm: Emotional Interface Design
The term “emotional interface design” may sound abstract, but it encompasses three concrete principles that increasingly define user expectations: legibility, safety, and empathy.
Legibility refers to a system’s transparency. Can users understand what the AI is doing and why?
Safety addresses psychological security. Does the user feel respected, not manipulated or coerced?
Empathy speaks to the quality of interaction. Does the AI feel human-aware and humane in tone and rhythm?
These dimensions echo ISO 9241-210, the international standard for human-centred design of interactive systems, which stresses usability, accessibility, and the emotional dimensions of user experience. In short, trust is not a bonus feature. It is a baseline requirement.
The Shift: From Performance to Perception
Historically, success in AI development, and in much of business, was measured in output: accuracy, speed, efficiency. The more tasks completed, the more “productive” the day was deemed to be. But this definition of productivity is eroding. In an AI-driven landscape, the amount of work completed is no longer the metric of a successful day.
New measures of effectiveness are emerging:
New ideas generated per day – the sparks of creativity and experimentation that AI enables.
Error reduction – systems that help us avoid rework and risk rather than simply increasing volume.
Innovation in workflow processes – not just doing more, but finding fundamentally better ways to do the work.
This signals a shift in value: the focus is moving away from how much time we save toward how we are using the time we have. In other words, the AI advantage is not simply efficiency. It is the opportunity to direct human attention toward the higher-value activities that machines cannot replicate: relationship building, strategic foresight, and imaginative problem-solving.
According to Gartner, by 2026, 60 percent of consumers will choose AI services not for raw performance but for perceived trustworthiness and emotional resonance. The same logic applies inside organizations: employees and leaders will measure success not in throughput, but in whether AI enables them to think more clearly, act more ethically, and design better futures.
The “trust over performance” paradigm isn’t just about consumer choice; it’s a reframing of what it means to achieve progress in the age of intelligent systems.
Why This Matters Now
Designing emotionally intelligent AI is not merely aspirational. It is becoming operational and measurable.
Consider these examples:
Google’s Project Euphonia enhanced speech recognition for users with speech impairments. By designing with empathy, the team improved recognition accuracy by 30 percent.
IBM’s open-source AI Fairness 360 toolkit lets teams build bias detection and mitigation directly into their model pipelines. The goal is not just compliance but user confidence (see the sketch after this list).
Emotech, a UK startup, builds emotionally responsive assistants for elderly users. Their emotionally attuned design has led to notably higher engagement and retention.
The EU AI Act is beginning to legislate these principles, requiring explainability and human oversight for AI systems in high-risk domains.
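To ground the IBM example, here is a minimal sketch of the kind of dataset-level fairness check the open-source aif360 Python package (AI Fairness 360) supports. The toy hiring data, column names, and group definitions are invented for illustration, not taken from IBM’s documentation.

```python
# pip install aif360 pandas
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy hiring data: 'selected' is the outcome, 'gender' the protected attribute (1 = privileged group).
df = pd.DataFrame({
    "gender":    [1, 1, 1, 1, 0, 0, 0, 0],
    "years_exp": [5, 3, 7, 2, 6, 4, 8, 1],
    "selected":  [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["selected"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Statistical parity difference: selection rate of the unprivileged group minus that of the privileged group.
# Values near zero suggest similar selection rates across groups.
print("Statistical parity difference:", metric.statistical_parity_difference())

# Disparate impact: the ratio of the two selection rates; the common "80 percent rule" flags values below 0.8.
print("Disparate impact:", metric.disparate_impact())
```

Checks like this can run automatically inside a model pipeline, which is what turns “trustworthy” from a slogan into a number a reviewer can interrogate.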
Designing for trust is not theoretical. It drives real outcomes: commercial, social, and regulatory.
A Strategic Design Pivot: Jony Ive and OpenAI
Jony Ive’s design legacy at Apple focused on intuitive affordances, making complex systems feel accessible and safe. His collaboration with OpenAI reflects a similar vision: an emotionally intelligent operating system that adapts to user moods and needs, not just inputs and outputs.
This is more than aesthetic design. It is about presence, the emotional and perceptual dimension of how AI enters our lives. It is the evolution from user interface to user relationship.
Challenges Ahead: Complexity, Ethics, and Regulation
The move toward emotionally intelligent AI surfaces critical questions:
How should emotional safety be balanced against data privacy? Both protect users, but from different kinds of harm.
Could emotionally aware systems combat loneliness or exacerbate dependence? For elderly or isolated populations, AI companionship may bring comfort. Yet, it risks fostering reliance that displaces human interaction.
What does trustworthiness actually measure? Traditional metrics like uptime or latency are inadequate. New frameworks are needed to evaluate fairness, clarity, and psychological resonance.
Can regulators audit emotional claims? Verifying that a system “feels” safe or empathic introduces subjective variables that are difficult to validate.
These questions do not have simple answers. They require collaboration across design, ethics, policy, and psychology.
Designing Trust into the Infrastructure
Many organizations treat trust as an output. In reality, it must be an input. As AI becomes more embedded in daily life, trust cannot be bolted on after the fact. It must be architected into the system from the beginning.
That means:
Embedding transparency mechanisms, like those in IBM’s open-source toolkits (an illustrative sketch follows this list).
Adopting user experience standards such as ISO 9241-210.
Shifting metrics from performance to perception.
Involving ethicists and social scientists alongside engineers and designers.
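As one small illustration of trust as an input rather than an output, the sketch below wraps a model decision in an audit-ready record that carries its own plain-language rationale and a human-oversight flag. Everything here is hypothetical: the record fields, the confidence threshold, and the loan-screening scenario are invented to show the pattern, not to describe any specific product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PredictionRecord:
    """One audit-ready record: the decision plus the 'why' and the human checkpoint."""
    prediction: str
    confidence: float
    top_factors: list[str]       # plain-language reasons that can be surfaced to the user
    human_review_required: bool  # oversight flag for high-risk uses (EU AI Act style)
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def explain_and_log(prediction: str, confidence: float, top_factors: list[str],
                    high_risk: bool, audit_log: list) -> PredictionRecord:
    """Wrap a model output so every decision ships with its rationale."""
    record = PredictionRecord(
        prediction=prediction,
        confidence=confidence,
        top_factors=top_factors,
        human_review_required=high_risk or confidence < 0.7,  # illustrative threshold only
    )
    audit_log.append(record)  # persisted records keep the system legible after the fact
    return record

# Usage: a loan-screening model refers a borderline case to a human underwriter.
audit_log = []
result = explain_and_log(
    prediction="refer to underwriter",
    confidence=0.64,
    top_factors=["short credit history", "high debt-to-income ratio"],
    high_risk=True,
    audit_log=audit_log,
)
print(result)
```

The point of the pattern is that legibility and oversight live in the data structure itself, so they cannot be quietly dropped later.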
Conclusion: Trust as the Next Competitive Edge
We are at an inflection point. Trust, not technical superiority, will define the next generation of AI adoption. Emotional intelligence is not a luxury; it is a necessity. It enables adoption, fosters engagement, and earns the right to operate in people’s lives.
As Jony Ive’s work signals, design is no longer just a surface layer. It is the interface to belief.
For business leaders, product developers, and policymakers, the message is clear. The winners of the AI era will not be those who merely compute best. They will be the ones who understand how AI feels, how it connects, and above all, how it earns trust.
Ryan Edwards, CAMINO5 | Co-Founder
Ryan Edwards is the Co-Founder and Head of Strategy at CAMINO5, a consultancy focused on digital strategy and consumer journey design. With over 25 years of experience across brand, tech, and marketing innovation, he’s led initiatives for Fortune 500s including Oracle, NBCUniversal, Sony, Disney, and Kaiser Permanente.
Ryan’s work spans brand repositioning, AI-integrated workflows, and full-funnel strategy. He helps companies cut through complexity, regain clarity, and build for what’s next.
Connect on LinkedIn: ryanedwards2