Human review and responsibility should be the ‘core feature’ of AI solutions, official says

Bloustein Local Government | News

Artificial intelligence has emerged as a tool to help agencies issue parking fines and tickets more efficiently, particularly as many cities have understaffed enforcement teams, but well-trained human reviewers remain critical to the approval process, experts say.

It can be tempting to incorporate an AI solution into parking enforcement, like an automated ticketing or fine system, but “everybody adopting any of these [artificial intelligence] technologies needs to address the risks … and develop appropriate risk reduction or mitigation strategies,” said Marc Pfeiffer, senior policy fellow at Rutgers University’s Center for Urban Policy Research.

“That’s where subject matter expertise becomes important. AI seems so confident, and the language [the tech] uses is intended to build confidence in you,” he said. That’s where being trained on how AI works and its limitations can help agency staff be more attuned to double-checking AI-enabled results or identifying potential errors that need further evaluation.

Without that expertise, more mistakes, like incorrectly issuing fines to drivers, can occur, “and there will be times when there is an egregious error made, and it’s going to snap back into the agency’s face,” Pfeiffer said.

Indeed, New York’s Metropolitan Transportation Authority faced backlash over its use of AI-enabled cameras on certain public buses, which mistakenly flagged and ticketed approximately 3,800 vehicles for blocking bus lanes in 2024. More than 870 of those tickets were issued to vehicles that were legally parked. Similar challenges unfolded in Alameda, California, after reports emerged that the Alameda-Contra Costa Transit District had incorrectly issued $110 tickets to some cars parked in legal spaces away from a bus stop.

Both agencies said the flagged tickets were reviewed by human staff, but such incidents underscore the value of proactive risk assessment and management before deploying AI for enforcement purposes.

Potential lawsuits, community backlash and other fallout are exactly the kinds of trouble that upfront prevention and risk management could have spared agencies, Pfeiffer explained.

Route Fifty, April 10, 2026