From principle to practice: Making AI ethics practical, not performative
Online panel
Most organizations now agree that responsible AI matters. The harder challenge is operational: how do you actually build responsibility into products, teams, and decision-making before problems appear in the market?
For many companies, AI ethics still lives at the level of principles and policy statements. But responsible AI only becomes real when it influences how systems are designed, how teams make decisions, and how organizations align incentives around trust and long-term value.
This was the focus of a recent conversation hosted by AREA 17 and moderated by founder and CEO George Eid. The discussion brought together leaders working across AI policy, product design, digital forensics, and information integrity to explore a practical question: How do we design systems so that responsible behavior becomes the right business decision?
Watch the recording to learn more.
Three insights from the conversation
1. Responsibility is becoming a business constraint
One of the biggest barriers to AI adoption today is trust. Organizations and users hesitate to adopt systems they believe could leak data, generate harmful outputs, or introduce unacceptable risk.
That means responsible AI is increasingly tied to business performance. Products that demonstrate safeguards, transparency, and accountability are more likely to be adopted and sustained over time.
In this sense, responsibility is not only a moral issue. It is a strategic one.
2. Responsible AI must be built into the product lifecycle
In many organizations, ethical considerations are addressed late in the process, often during legal review or post-launch governance. But by that stage, the most important design decisions have already been made.
Responsible AI instead requires attention across the full lifecycle: from design and data selection to deployment, monitoring, and iteration. Because AI systems evolve over time, responsibility cannot be a final checkpoint. It must be a continuous practice embedded in how products are built and maintained.
3. Designing AI increasingly means designing behavior
Traditional digital design focused on interfaces: screens, buttons, and user flows. AI systems introduce a different challenge. Designers and product teams are now shaping how systems behave—how they respond, generate outputs, and make decisions in real-world situations.
Those behaviors are influenced by training data, prompts, evaluation frameworks, and guardrails. In that sense, AI design is becoming a form of behavior design. Product teams are no longer just designing what users see. They are shaping how systems act.
For organizations building or integrating AI, the real challenge is translating principles into everyday product decisions. The organizations that succeed will be those that make safety, accountability, and trust part of how the business performs, not just how it communicates.
Panelists
Sasha Rubel, Head of AI Policy at Amazon Web Services for EMEA, a member of the OECD Network of AI Experts and of Ireland’s AI Advisory Council. Previously led UNESCO’s work on AI ethics.
Emmanuelle Saliba, Chief Investigative Officer at GetReal Security, a pioneer in social verification, and an award-winning journalist previously at NBC, ABC, and Euronews.
Elizabeth Laraki, Design Partner at Electric Capital and a product design leader who helped shape platforms used by billions, including Google Maps, YouTube, and Facebook.
Paz Pérez, a systems and behavior designer who previously built the Gen AI for UX Teams program at Google. Founder of the consultancy Nothing Fancy AI.
Brigitte Perrin, Founder of Earth Info Hub, an initiative focused on restoring trust in climate and environmental information. Former communications lead at the World Meteorological Organization.