Build AI With People, Not For Them
2025-07-31

A practical framework for building AI with communities in the room from the beginning, not after the damage is done.
Why This Matters
Too much AI is still built about people, not with them.
Co-Producing AI changes that dynamic by treating communities as equal partners at every stage, from defining the problem to deciding when to retire a system.
This isn’t about token feedback sessions or vague promises to “circle back.” It’s about shared power, open choices, and long-term accountability.
The People-First Lifecycle
I structured the playbook around five connected phases.
Co-Framing is where it starts. Communities and developers work together to define the problem, understand who is most affected, and decide who has a voice—and even veto power—before any model is trained.
In Co-Design, decisions about data, models, and interfaces happen in the open. Participatory prototyping weighs trade-offs between accuracy, privacy, explainability, and cost in real time, with all stakeholders at the table.
Co-Implementation brings full transparency to training and fine-tuning. Model cards, dataset summaries, and error logs are published for public review.
During Co-Deployment, systems go live with clear rules: how to raise issues, how to prevent scope creep, and when to roll things back.
Finally, Co-Maintenance ensures the process does not end at launch. Systems are regularly audited not just for technical drift, but for ethical health and participatory strength. When features change, communities are asked to consent again.
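The five phases above can be sketched as a gated sequence in which no phase transition happens without community sign-off. The phase names come from the playbook; everything else here, including the class, method names, and sign-off mechanics, is a hypothetical illustration, not part of the framework itself.

```python
# Hypothetical sketch of the five-phase lifecycle as a gated sequence.
# Phase names come from the playbook; the gating mechanics are illustrative.

PHASES = ["co-framing", "co-design", "co-implementation",
          "co-deployment", "co-maintenance"]

class Lifecycle:
    """Advance through phases only with explicit community sign-off."""

    def __init__(self):
        self.index = 0
        self.log = []  # public audit trail of phase decisions

    @property
    def phase(self) -> str:
        return PHASES[self.index]

    def advance(self, community_signoff: bool, note: str = "") -> str:
        if not community_signoff:
            raise PermissionError(f"Cannot leave {self.phase!r}: no community sign-off")
        self.log.append((self.phase, note))
        # Co-maintenance is ongoing: stay in the final phase and keep auditing.
        if self.index < len(PHASES) - 1:
            self.index += 1
        return self.phase

lc = Lifecycle()
lc.advance(True, "problem framed with affected communities")
print(lc.phase)  # co-design
```

The design choice the sketch encodes is the one the text insists on: power sits at the gate, so a missing sign-off blocks progress rather than merely logging a complaint, and the final phase never terminates.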
What I Learned Along the Way
Shifting real decision-making power to the people most affected builds deeper trust. Feedback has to be continuous, not ceremonial. Privacy has to be tailored to context, not forced into a single mold. And meaningful participation costs money: travel, childcare, translation, accessibility, and the time required to do the work properly.
What the Framework Produces
By the end of this process, teams walk away with a governance charter that includes real appeal rights; a public model card and dataset description shaped by community input; a recourse and transparency portal with release notes and audit logs; and an audit schedule that addresses both technology and ethics.
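The deliverables listed above could be tracked as a simple structured checklist. The artifact names mirror the text; the field names, class, and completeness check below are hypothetical conveniences, not prescribed by the playbook.

```python
# Hypothetical checklist of the framework's deliverables; field and class
# names are illustrative, not part of the playbook itself.
from dataclasses import dataclass

@dataclass
class GovernanceArtifacts:
    governance_charter: bool = False    # includes real appeal rights
    public_model_card: bool = False     # shaped by community input
    dataset_description: bool = False
    recourse_portal: bool = False       # release notes and audit logs
    audit_schedule: bool = False        # covers technology and ethics

    def complete(self) -> bool:
        """True only when every artifact has been produced."""
        return all(vars(self).values())

artifacts = GovernanceArtifacts(governance_charter=True)
print(artifacts.complete())  # False: four artifacts still outstanding
```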
How This Came Together
The playbook grew out of four multidisciplinary workshops in Montréal in 2024, involving 20 experts from research, industry, and civil society. I also grounded it in a scoping review of 76 key works across computer science, social science, the humanities, and policy, spanning 2013 to 2024.
Why This Approach Stands Apart
Global AI guidelines, from the Montréal Declaration to IEEE EAD, NIST AI RMF, and EU Trustworthy AI, offer principles. This framework adds the missing how: practical checkpoints, governance routines, and a commitment to ongoing co-maintenance.
Visuals
Five phases connected by continuous feedback and shared accountability.
Design vs. Co-design.
Where It Works Best
This approach is especially useful in AI governance, where lofty principles have to survive contact with institutions, budgets, and deadlines. It is most urgent for high-stakes systems in health, finance, and public services, where trust is not optional. It also offers a way to push organizations away from participation-washing and toward genuine, resourced involvement.
Tags: AI Governance · Participatory AI · Co-design · Design Justice · Expansive Learning · DEI