AI as Societal Infrastructure
2025-08-01

AI is everywhere now, deciding what appears online, shaping cities, and influencing healthcare, finance, and education. Yet ordinary people still have almost no say in how these systems are created or governed.
Why This Matters
AI is still largely in the hands of a few companies and institutions. The public is treated as an audience, not as a co-author of the technology that will shape everyday life. That imbalance risks hardening bias, sidelining communities, and draining public trust.
If AI is going to be as essential as roads, schools, or public parks, then it must be designed and run with the same spirit of shared ownership and public accountability.
What Is The Right to AI?
I began this work in 2022 with more than thirty community organizations across Montréal. The original effort focused on public space, inclusion, and emerging technologies. It has since grown into a broader nonprofit movement with a simple conviction: AI should not merely serve the public; it should answer to the public.
- I host workshops where communities test, critique, and question real AI systems, surfacing both promise and harm.
- I publish research on how the public can shape AI at every stage, from data stewardship to deployment and oversight.
- I work with policymakers so that the people most affected by automated systems are no longer the last to be consulted.
The Paper
The paper borrows from Lefebvre’s Right to the City and applies that logic to AI. I argue that AI now operates as social infrastructure and should be treated as a shared resource rather than a private black box. The paper addresses generative agents, mass data extraction, and conflicts across values and cultures, then turns toward practical forms of democratic governance.
The heart of it is simple: participation must matter. It should change how data is governed, how models are trained, and whether certain systems are deployed at all.
A Ladder of Participation
- At the bottom rung, people are just consumers, clicking “accept” and occasionally filling out feedback forms, while decisions stay centralized.
- A step up, organizations invite limited input and offer more transparency, but the real power stays with them.
- Government-controlled models add regulations and consultations, which help, but can miss the nuance of local realities.
- At the top is citizen control: community assemblies, data trusts, and shared ownership of AI systems, where everyday people and experts decide together on risks, goals, and oversight.
What I Have Learned
I have learned that early involvement matters, but continuity matters even more. Technical expertise is not enough on its own; lived experience changes what problems are visible and what risks count. I have also learned that participation fails when it is unfunded. Without training, accessibility, time, and institutional backing, even good intentions collapse into symbolism.
What Comes Next
I want to see stronger public AI literacy, better tools for everyday participation, local AI councils that can grow from advisory bodies into decision-making ones, and community-run data trusts backed by real audits. I also want to see AI systems localized to the cultures and contexts in which they operate, rather than exported as if one model fits every place.
How to Help
- Volunteer to bring the Right to AI into your community.
- Join research projects exploring new ways to involve the public in AI governance.
- Collaborate on workshops, studies, or pilot programs.
Contact: contact@therighttoai.org
Explore More
The Right to AI — nonprofit
About The Right to AI
Book: The Right to AI
Tags: AI Governance · Participation · Pluralism · Open Source · Data Stewardship · Workshops · Montréal