
When we talk about AI and automation, we often rush to the technical frontier — performance, safety, optimization. Yet beneath these algorithms lies an ethical layer that shapes how technology interacts with people and society. In a recent episode of Urban Innovate TALKS, I sat down with Dr. Katie Evans, philosopher, consultant, and author of UNESCO’s graphic novel Inside AI: An Algorithmic Adventure, to explore how moral reasoning, science fiction, and public transport come together in the design of intelligent systems.

AI as a “Place,” Not a Person

We are quick to humanize AI. As Katie puts it:

“Having a cute little AI robot that explains itself and justifies itself kind of humanizes it. And from a scientific perspective, that’s not accurate.”

She invites us instead to imagine AI as an artificial place — a space in which we navigate choices, not a being with intent. This subtle shift matters: once we stop attributing motives to machines, we can focus on what we, as humans, decide to encode in them.

For me, that image resonates deeply. If AI is a place, then pulsur — the tool my team and I are developing — is a city map of perceptions. It’s not an agent, it’s a compass that helps transit agencies understand where people stand, how they feel, and what values guide their mobility choices.

Why Machines’ Mistakes Hurt More

Every road accident is tragic. Yet when an autonomous vehicle errs, public outrage spikes far beyond what human error provokes. Why? Katie links it back to the mythology of perfection:

“Since the beginning of science fiction, if you think of Asimov and even the ancient Greek tradition, the idea was never that the machine or the automated automata was fallible. It was perfect. And what went wrong was that it overapplied this perfection to our fallible human morals.”

We expect technology to exceed us — to be flawless. But driving, she reminds us, isn’t just about performance metrics. People give up agency when they hand the wheel to automation: spontaneity, freedom, even the simple joy of driving. That emotional cost explains why “just being safer” isn’t enough for public acceptance.

Public Transport: A Moral Petri Dish

Public transport, Katie notes, “has always been a petri dish for moral progress.” From Rosa Parks’ bus to today’s debates on automation, it’s a stage where social values are tested in public.

In our discussion, we asked: what happens when jobs such as bus driving evolve or disappear? Katie recalled philosopher James H. Moor's question: Are there decisions computers should never make?

Excerpt from James H. Moor's 1979 article "Are There Decisions Computers Should Never Make?", a foundational piece in computer ethics introducing the idea of clear and fuzzy standards.

She explained that technology operates between clear standards (like chess, where outcomes are measurable) and fuzzy standards (like choosing a career, where multiple answers can be right).

“Almost every real-world deployment of AI is a fuzzy-standard situation,” she said. “What’s right depends on the values you’re trying to maximize.”

That’s where ethics turns into governance. Replacing humans with automation isn’t only an efficiency decision; it’s a value statement about what we prioritize — safety, cost, flexibility, or dignity of work.
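To make that distinction concrete, here is a small illustrative sketch in Python. The route data, weights, and scoring function are invented for this post, not taken from any real system: the same two routes rank differently depending on whether the scoring is asked to favour speed or step-free access. In a fuzzy-standard setting there is no single measurable "right" answer, only a defensible choice of weights.

```python
# Hypothetical sketch of a "fuzzy standard": the best route depends on
# which values you choose to weight, not on one measurable outcome.
routes = {
    "express_bus": {"travel_min": 18, "cost_eur": 3.0, "step_free": False},
    "tram_plus_walk": {"travel_min": 26, "cost_eur": 2.0, "step_free": True},
}

def score(route, weights):
    """Lower is better; the ranking changes with the value weights."""
    return (weights["time"] * route["travel_min"]
            + weights["cost"] * route["cost_eur"]
            + weights["access"] * (0 if route["step_free"] else 10))

speed_first = {"time": 1.0, "cost": 0.5, "access": 0.1}
access_first = {"time": 0.3, "cost": 0.5, "access": 2.0}

for weights in (speed_first, access_first):
    best = min(routes, key=lambda name: score(routes[name], weights))
    print(weights, "->", best)
```

Run it and the "best" route flips between the two weightings, which is exactly the point: the value statement lives in the weights, not in the algorithm.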

Convenience vs. Privacy: The New Urban Trade-off

Our cities are quietly shifting toward fully traceable mobility: ride-hailing, door-to-door AVs, ticketless travel. I asked Katie if anonymity will vanish from the public realm. Her answer was cautious but clear:

“We are shifting toward a no-anonymity, no-privacy, totally trackable environment… and it shows no sign of stopping.”

Safety and convenience often justify this shift, yet it redefines what “public” means. As we pursue seamless mobility, we must ask: How much privacy are we willing to lose for the sake of convenience?

Designing Tools like pulsur: Justification Over Perfection

When we turned to pulsur, Urban Innovate’s AI-based sentiment analysis tool for transit agencies, Katie reframed the challenge:

“You cannot expect anymore to find a perfect solution for your problem. You have to expect to provide really good justification.”

She advised developers to assess not just safety risks but societal impact:

  • What do users stand to lose if misclassified?
  • How do we uphold fairness when decisions affect access, opportunity, or agency?
  • Are we using people’s opinions in good faith?

The answer, she said, is not moral perfection but transparency and justification — being able to explain why certain values were prioritized and to whom they are accountable.
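As a thought experiment rather than a description of how pulsur actually works, here is a minimal Python sketch of what "justification over perfection" could look like in practice. All names, cue words, and thresholds are hypothetical: every classification carries a confidence score, a plain-language rationale, and a flag that routes uncertain cases to a human reviewer instead of pretending to a perfect answer.

```python
# Illustrative sketch only; not pulsur's actual implementation.
# Each output carries a justification and a human-review flag,
# so the system can be questioned rather than merely trusted.
from dataclasses import dataclass

@dataclass
class SentimentResult:
    label: str             # e.g. "positive", "negative", "neutral"
    confidence: float      # 0.0 to 1.0
    justification: str     # why the system produced this label
    needs_human_review: bool

NEGATIVE_CUES = {"unsafe", "late", "dirty", "crowded"}
POSITIVE_CUES = {"reliable", "clean", "friendly", "fast"}

def classify_feedback(text: str, review_threshold: float = 0.7) -> SentimentResult:
    """Toy keyword classifier: the point is the justification, not the model."""
    words = set(text.lower().split())
    neg = len(words & NEGATIVE_CUES)
    pos = len(words & POSITIVE_CUES)
    if neg + pos == 0:
        return SentimentResult("neutral", 0.5, "No known cue words found.", True)
    label = "negative" if neg >= pos else "positive"
    confidence = max(neg, pos) / (neg + pos)
    matched = sorted(words & (NEGATIVE_CUES | POSITIVE_CUES))
    return SentimentResult(label, confidence,
                           f"Matched cue words: {matched}",
                           confidence < review_threshold)

if __name__ == "__main__":
    print(classify_feedback("The bus was late and crowded but the driver was friendly"))
```

The classifier itself is deliberately trivial; what matters is that every result can be audited, challenged, and, when uncertain, handed back to a person.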

From Ethics to Action

Katie closed with two reminders every technologist should carry:

“There is no such thing as neutral technology. Sometimes we have the temptation to improve humanity with technology — instead, we should focus on improving the services and products of humanity.”

These lines summarize the new frontier of responsible innovation. As cities and agencies adopt AI-driven systems, the real test won’t be how intelligent our machines are, but how honest we are about the values they serve.


To explore the full discussion in depth, you can watch the complete webinar “The Ethics Layer of AI – Philosophy, Morality and the Designs of AVs” now available online on Urban Innovate TALKS.

Further Reading & Listening

Henriette Cornet

Stay in the loop

Subscribe to our free newsletter.

Related Articles