From data drudgery to AI speed: How one UNC School of Government project can serve as a roadmap for leveraging AI safely

Have you ever wondered how fast your town fixes its reported potholes? The School of Government has a resource that can help answer that very question.

Established in 1995, the North Carolina Benchmarking Project helps municipalities compare their service and performance trends with other participating units. Each year, 16 partner municipalities submit performance data across 11 services to the UNC School of Government, which hosts strategy sessions each November to analyze the data and discuss shared challenges and best practices. The insights from these sessions and the project's data dashboard are synthesized into an annual report.

But in 2024, the reporting process looked a little different. Under the leadership of NC Benchmarking Project Director and School faculty member Obed Pasha, the project team integrated artificial intelligence (AI) into two critical workflows: data auditing and report development. The results? More than 300 hours saved, fewer errors, and stronger outcomes.

Now, in a public management bulletin co-authored by Pasha, the team is sharing exactly how they did it—offering a practical roadmap for other public sector organizations considering adopting AI in their own processes.

Pasha’s team comprised UNC MPA students Christopher L. Cole II, Noah Ellington, Avangelyne Padilla, and Kirsten Tucker, with graduate Keegan Huynh leading the students on the project.

Tackling Data Auditing with AI

The first challenge involved auditing massive amounts of data. The dataset includes 873 performance metrics collected over four years across 11 departments and 16 municipalities—totaling more than 614,000 unique data points.

"AI integration through Microsoft Copilot helped us save over 100 hours of data audit," Pasha explained. "Instead of manually observing the changes in over 600,000 data points, AI identified unusual changes in data in minutes if the data exceeded upper and lower limits."

The project team partnered with the University’s AI club, AI @ UNC, to develop an automated system that flags data anomalies—like an extra zero or misplaced decimal—using multiple methods, including z-scores, historical comparisons, and peer municipality comparisons. Critically, the system explains why each data point was flagged, avoiding the "black box" problem common in AI applications.

"The automated flagging and accompanying explanations allow our team to review flagged data more efficiently," the bulletin notes, "and enable partner municipalities to understand the reasons behind flagged data."
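The bulletin describes the flagging logic only at a high level, but the combination of methods it names—hard upper and lower limits, z-scores against a municipality's own history, and comparisons with peer municipalities—can be illustrated with a short sketch. The function below is hypothetical (the metric names, limits, and figures are invented for illustration, and the team's actual system was built with Microsoft Copilot), but it shows how each flag can carry a plain-language explanation rather than a bare score.

```python
# Hypothetical sketch of multi-method anomaly flagging with explanations.
# Not the project's actual implementation; all names and numbers are invented.
from statistics import mean, stdev

def flag_value(town, value, history, peers,
               lower=0.0, upper=None, z_cutoff=3.0):
    """Return plain-language reasons a submitted value looks unusual."""
    reasons = []
    # 1. Hard limits: catch impossible values (an extra zero, a negative count).
    if value < lower:
        reasons.append(f"{value} is below the allowed minimum of {lower}")
    if upper is not None and value > upper:
        reasons.append(f"{value} exceeds the allowed maximum of {upper}")
    # 2. Historical comparison: z-score against the town's own prior years.
    if len(history) >= 2 and stdev(history) > 0:
        z = (value - mean(history)) / stdev(history)
        if abs(z) > z_cutoff:
            reasons.append(f"z-score {z:.1f} vs. {town}'s prior years")
    # 3. Peer comparison: z-score against other municipalities this year.
    if len(peers) >= 2 and stdev(peers) > 0:
        z = (value - mean(peers)) / stdev(peers)
        if abs(z) > z_cutoff:
            reasons.append(f"z-score {z:.1f} vs. peer municipalities")
    return reasons

# A misplaced decimal (431.0 instead of 43.1) trips both comparisons.
history = [42.0, 44.5, 43.1, 45.0]   # the town's own prior years
peers = [41.0, 46.2, 39.8, 44.4]     # other municipalities, same year
print(flag_value("Example Town", 431.0, history, peers))
```

Because every flag carries its reason, a reviewer (or the submitting municipality) can see at a glance whether a flagged value is a genuine outlier or a data-entry slip.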

AI as a Writing Partner

The second application focused on report development. The team recorded more than 30 hours of conversations with more than 200 local government staff members across multiple strategy sessions, generating flip charts, sticky notes, transcripts, and meeting notes. At the annual Performance Strategy Sessions, held at the School of Government, participants gathered by service department to discuss strategies and define outcomes. The Asphalt Maintenance session, for example, focused on critical challenges including workforce shortages, infrastructure coordination, roadway safety, response efficiency, and long-term pavement quality; participants examined emerging trends, shared successful strategies, and explored both short-term solutions and long-term reforms to enhance department performance.

Summarizing these conversations and identifying coherent themes is generally labor-intensive. But by carefully training an AI model on previous reports, style guides, and structured prompts, the team transformed the process into a much quicker one.

"This process helped produce a strong and consistent report last year and saved us over 200 hours of work," Pasha said.

The bulletin details the step-by-step approach: training the AI with context, developing outlines, generating drafts section by section, and creating a separate fact-checking process to verify every example and quote against transcripts.

Prompts to train the AI included the following:

“I will now upload some example reports from the past two years. Use these reports as context for guiding the way we write the new report. Please analyze the uploaded example reports to identify key sections, structure, and formatting conventions, level of analysis, detail, tone, and formatting for drafting the 2025 benchmarking report.”
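The separate fact-checking step—verifying every example and quote against the transcripts—is the part of the workflow most amenable to simple automation. The sketch below is a hypothetical illustration, not the team's actual process: it pulls each double-quoted passage out of a draft and checks whether it appears verbatim (ignoring case and whitespace) in the source transcripts, surfacing any quote a human reviewer must chase down.

```python
# Hypothetical sketch of a transcript fact-check pass for AI-drafted reports.
# Assumes straight double quotes in the draft; real drafts may need
# curly-quote normalization first.
import re

def verify_quotes(draft, transcripts):
    """Split a draft's quoted passages into (verified, unverified) lists.

    Comparison ignores case and collapses whitespace so line breaks in
    transcripts do not cause false misses.
    """
    def normalize(text):
        return re.sub(r"\s+", " ", text).lower().strip()

    corpus = normalize(" ".join(transcripts))
    quotes = re.findall(r'"([^"]+)"', draft)
    verified, unverified = [], []
    for q in quotes:
        (verified if normalize(q) in corpus else unverified).append(q)
    return verified, unverified

transcripts = ["Crew leader: We simply cannot hire enough equipment operators."]
draft = ('Staff said, "cannot hire enough equipment operators" '
         'and also "potholes fix themselves".')
good, suspect = verify_quotes(draft, transcripts)
print("verified:", good)
print("needs review:", suspect)
```

A check like this does not replace human review—a quote can be verbatim yet stripped of context—but it concentrates reviewer attention on the passages most likely to be hallucinated.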

Still Invaluable: The Human Element

Despite the efficiency gains, Pasha emphasizes that AI is a tool—a valuable tool, to be sure—but by no means a replacement for human judgment.

"AI is an excellent tool to synthesize and audit. However, local governments should be careful in using AI, since it sometimes produces hallucinations and requires human oversight," he cautioned.

The bulletin identifies specific challenges the team encountered, including AI occasionally generating plausible-sounding examples that did not exist in the transcripts and struggling to interpret abbreviations and shorthand notes. The team's solution? Rigorous validation protocols, dedicated fact-checking processes, and always treating AI outputs as first drafts requiring human review.

A Blueprint for Other Public Sector Organizations

What made this project noteworthy was not just the time saved but also the transparency. By documenting their process, challenges, and lessons learned, the Benchmarking Project offers other public sector organizations a practical example of AI adoption done thoughtfully—and with caution.

The bulletin includes specific prompts used, workflow details, and best practices for training AI models. Looking ahead, Pasha and his team plan to continue refining their AI systems, potentially incorporating predictive analytics to help municipalities anticipate emerging trends and proactively address service-delivery challenges.

For public officials wondering whether AI could help their work, the North Carolina Benchmarking Project offers an answer: yes—if you approach it with the right guardrails, training, and commitment to human oversight.

Published February 13, 2026