
A city council meeting in a small town. A motion passed. A line item signed. A system installed. Maybe it’s a predictive model that promises to make emergency dispatch faster. Maybe it’s a tool that determines eligibility for housing benefits. The people in the room go home thinking they’ve done their jobs. And they have.
But the next morning, a video goes viral. A grandmother denied food assistance. A teenager flagged for surveillance. One line of code, buried deep in the decision logic, carries the weight of centuries of inequality, and now it’s everywhere. No government decision is truly local anymore. Not in the age of TikTok. Not when livestreams and YouTube clips can turn bureaucratic procedure into social media spectacle. What was once a zoning call, a procurement contract, or an automation rollout is now a meme, a movement, a mess. The distance between city hall and global headlines is measured in milliseconds.
Behind every administrative act now looms the presence of artificial intelligence. It may be invisible to most—the procurement officer clicking through software options, the case manager relying on a risk score—but its influence is not. AI systems are quietly scripting the futures of communities, counties, cities, countries, and continents. Sometimes invisibly. Sometimes catastrophically. And while AI is optimizing the human experience and reimagining business models across disciplines and industries, it is also creating new ethical and safety challenges as fast as it solves old ones. These systems mimic human cognition. They give our computers a human-like feel. But they do not give them human judgment.
It was inevitable: we’ve entered an era where the algorithm has become a public servant. The question is, who does it serve? Budget forecasts shaped by predictive models. Service delivery prioritized by scoring systems. Procurement driven by recommendation engines. And behind it all, a promise: faster, smarter, cheaper governance. Real-time service delivery excellence.
But promises aren’t policies. Systems don’t solve politics. And algorithms, in the hands of government, become more than math; they become moral actors. Consider the child welfare algorithm in Pennsylvania that assigned risk scores to families, a tool critics found disproportionately flagged poor families and families of color for investigation. These weren’t technical glitches. They were devastating design decisions and bad algorithmic policy that punished the poor, the marginalized, the historically underserved.
Several high-profile scandals have revealed the dangers of using AI and algorithmic tools. In 2016, ProPublica exposed racial bias in the COMPAS recidivism algorithm, which disproportionately labeled Black defendants as high-risk, raising concerns about fairness and transparency in sentencing.
In 2020, Detroit police wrongfully arrested Robert Williams, a Black man, due to a faulty facial recognition match, highlighting the technology’s high error rates for people of color and sparking national outrage. Similarly, the LAPD’s use of the predictive policing tool PredPol led to the over-surveillance of Black and Latino communities without proven benefits before the department ended the program in 2020. Together, these cases illustrate how opaque, data-driven systems can reinforce systemic bias, undermine due process, and erode public trust when deployed without accountability.
In July 2025, the federal government issued its AI Action Plan under Executive Order 14179: Removing Barriers to American Leadership in Artificial Intelligence. National in scope, but its weight falls squarely on local shoulders. City and county governments aren’t just implementers of AI policy. They are test beds. Sandboxes. Risk zones. First responders. And the plan asks a lot:
- Train a new AI-ready workforce—from electricians to data technicians.
- Adopt ethical procurement processes, including bias audits and impact reviews.
- Oversee AI infrastructure, balancing economic gains with environmental responsibility.
- Align local regulations with national standards or risk losing federal funding.
- Engage the public not just in deployment, but in design.
In theory, these are steps forward. In practice, they are seismic shifts in how local governance operates. It’s easy to talk about AI in the abstract. Easier still to sell it as salvation. But in the real world, the city manager of a southern town might wake up to find a software-triggered benefits cutoff that hits hundreds of residents. The deputy commissioner may have to explain to a mother why her son was misidentified by facial recognition, wrongfully arrested, traumatized. The mayor may be in conflict with a police chief who swears the AI tool works even as community trust collapses.
AI doesn’t just automate decisions. It codifies judgment. It scales error. It erases nuance, the very thing local government is designed to understand. And it does all this behind closed doors, proprietary code, trade secrets, and black-box algorithms that even public servants can’t access, let alone explain.
So how do you govern the invisible? The outsourced? The unaccountable? You start with clarity. You build capacity. You take stock.
Know Your Systems.
Conduct an AI inventory. Know where algorithms live—budgeting, hiring, emergency response, permitting, service delivery. Audit for vulnerabilities. Deploy due diligence. Think duty of care. Consider a duty to warn. Always uphold due process.
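What might such an inventory record? A minimal sketch in Python follows; the field names, the example system, and the review rule are illustrative assumptions, not a standard schema, and any real registry should reflect your own departments, vendors, and policies.

```python
# A minimal sketch of an AI inventory record for a local government.
# All field names and the example below are hypothetical.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class AISystemRecord:
    name: str                            # e.g., "Benefits eligibility screener"
    vendor: str                          # who built and maintains it
    department: str                      # budgeting, hiring, emergency response, permitting...
    purpose: str                         # what decision or recommendation it supports
    decision_role: str                   # "advisory" or "automated"
    data_sources: List[str] = field(default_factory=list)
    last_bias_audit: Optional[date] = None
    explainable_to_staff: bool = False   # can staff explain outcomes to residents?
    appeal_process_documented: bool = False

def needs_review(record: AISystemRecord, today: date) -> bool:
    """Flag systems that warrant due-diligence attention: no bias audit
    in the past year, or automated decisions with no documented appeal path."""
    stale_audit = (
        record.last_bias_audit is None
        or (today - record.last_bias_audit).days > 365
    )
    no_due_process = (
        record.decision_role == "automated"
        and not record.appeal_process_documented
    )
    return stale_audit or no_due_process

# Example: an automated screener with no audit on file gets flagged.
screener = AISystemRecord(
    name="Benefits eligibility screener",
    vendor="Example Vendor Inc.",
    department="Human Services",
    purpose="Prioritize benefit applications",
    decision_role="automated",
)
print(needs_review(screener, date.today()))  # True
```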
Set Procurement Standards.
Don’t buy what you can’t govern. Every AI contract should include transparency, auditability, and explainability requirements, along with impact assessments, change management strategies, crisis response plans, and business continuity plans.
Establish Oversight.
Form interdisciplinary committees: IT, legal, HR, equity officers, and—most importantly—residents and community stakeholders. Interdisciplinary teams are a competitive advantage. Remember: if AI has limits, you have a duty to warn.
Invest in Literacy.
Your staff doesn’t need to be engineers, but they do need to understand what these systems do—and what they hide.
Engage the Public.
Hold town halls. Run workshops. Share surveys. Host design sessions. Let people see and shape the systems that shape their lives.
Because AI, for all its power, doesn’t see context. It doesn’t see history. It doesn’t see the mother working three jobs or the teenager raising his siblings. It sees data points. Probabilities. Risk scores. And it moves fast. But its decisions aren’t just metrics. They shape human lives.
Indeed, the algorithm is now a public servant. But it is one without accountability, unless we build it in. One without empathy, unless we demand it. One without understanding, unless we code it in and ensure good governance of all AI systems.
Every algorithm carries a deeper story: a story about trust. About confidence. About access, equity, and opportunity. But trust is fragile. It may not survive another scandal rooted in a system no one understands or a decision no one can explain.
AI is not just a tool for outsourcing responsibility. It is a test of our values. The public servant of the future may be an algorithm. But only principled, prepared, proactive, and ethically vigilant leaders can ensure that AI is trustworthy, designed safely and responsibly with the public interest in mind, and serves the public good.

PROFESSOR RENÉE CUMMINGS, a 2023 VentureBeat AI Innovator Award winner, is an AI, data, and tech ethicist, and the first data activist-in-residence at the University of Virginia’s School of Data Science, where she was named professor of practice in Data Science.