Policy
We build the rules that should exist for AI, and we take them directly to the people who write the laws.
The Problem: AI Is Being Deployed Without Oversight
When a company wants to build a new factory, it has to complete an environmental impact review. Regulators check whether the factory will pollute the water, harm wildlife, or affect nearby communities. There is a process, and there are consequences for skipping it.
When a technology company deploys a new AI system that affects millions of people, there is no equivalent process. No review. No public input. No assessment of who might be harmed. The system launches, users deal with the consequences, and regulators play catch-up, if they act at all. Neurodivergent people, elderly users, disabled individuals, and others who depend on technology for daily functioning are often the most affected and the least consulted.
Our Solution: The Civic AI Accountability Standard (CAAS)
We built a framework to fix this. The Civic AI Accountability Standard (CAAS) adapts the logic of environmental impact review to AI deployment. It gives legislators, regulators, and advocacy organizations a structured way to evaluate AI systems before they launch and to hold companies accountable after the fact.
The framework has two main tools. The first, the Algorithmic Civic Impact Review (ACIR), is a checklist that companies would complete before deploying a large-scale AI system. It evaluates the system across seven areas, including a dedicated section on how the system will affect vulnerable populations such as disabled, elderly, and neurodivergent users. The second, the Civic Technology Integrity Review (CTIR), is designed for use after deployment. Congressional staff, regulators, and attorneys can use it to evaluate whether an AI system is actually doing what the company promised, and whether it is causing harm that was not disclosed.
We also developed an eight-question diagnostic tool that measures whether a technology platform is building community trust or eroding it. When we have applied it to real-world AI deployments, the results have consistently shown patterns of harm that affect cognitively vulnerable users most severely.
The entire framework is free to use and in the public domain: anyone can use it, adapt it, or cite it without restriction. It is published in full at CAASNow.org.
Model Law: The Civic AI Accountability Act of 2026
We did not stop at a framework. We wrote a model law that any state or federal legislator can introduce. The Civic AI Accountability Act of 2026 would require companies to complete a civic impact review before deploying large-scale AI systems, mandate specific assessment of effects on vulnerable populations (including neurodivergent users), and create enforcement mechanisms for post-deployment audits. The full text is available at CAASNow.org and is free for any legislative office to use or adapt.
Who We Have Contacted
This work is not sitting on a shelf. The complete CAAS research packet, including the framework, the model law, supporting research, and a personal constituent letter from the founder, has been formally delivered to the offices of Senator Tammy Baldwin (D-WI), Senator Bernie Sanders (I-VT), Senator Elizabeth Warren (D-MA), and Representative Alexandria Ocasio-Cortez (D-NY-14). Our founder has an existing relationship with Senator Baldwin’s office, which has previously acted on his behalf in a congressional inquiry with the Social Security Administration.
Fighting Back on Behalf of Consumers
Beyond working with legislators, we also take direct action when companies harm neurodivergent consumers. We file formal complaints, write demand letters, and produce detailed research documenting unfair billing practices, cancellation barriers, misleading marketing, data privacy violations, and inadequate diagnostic services. We approach each case as both a service to the individual affected and a contribution to the public record that regulators and attorneys can draw on.
If you have questions about our policy work, want to discuss the CAAS framework, or have a media inquiry, contact hello@ndmind.org or call 920-288-2007.