Law Enforcement Is Experimenting With AI-Assisted Policing: What You Need to Know

Police departments across the country are starting to use artificial intelligence to help complete their paperwork. 

AI can help officers save time writing reports and make their jobs more efficient, the Associated Press reports.

But experts warn that the new technology carries risks: it can introduce serious, sometimes life-changing mistakes into reports, promote bias and racism, and leak private information, Politico reports.

According to its website, Draft One, a new AI product from the technology company Axon, is designed to help police officers file reports in less time than conventional methods require.

“Today, officers spend as much as two-thirds of their day on paperwork,” Axon says on the website. “Our AI Research team is working to cut the time spent on paperwork by improving the efficiency and accuracy of report-writing and information analysis in law enforcement.”

For instance, Axon says its product gives officers the ability to automatically redact footage caught on body cams “so officers can share footage with the public faster while protecting the privacy of individuals captured in video.”

Its AI software allows supervisors to review footage and reports so they “can better understand compliance and provide feedback in order to improve training and police-community relations.”

Draft One has had the “most positive reaction” of any product the company has debuted on the market, Axon’s founder and CEO Rick Smith told the Associated Press. But, he added, “there’s certainly concerns.”

District attorneys want assurance that police officers didn't rely solely on an AI chatbot to write a report, since officers may have to testify in court about an alleged crime, Smith told the AP.

“They never want to get an officer on the stand who says, ‘Well, the AI wrote that, I didn’t,’” Smith told the outlet.

An AP investigation found “serious flaws” in another crime-fighting AI program, ShotSpotter, a gunshot detection tool that, according to its website, uses “sensors, algorithms and artificial intelligence” to classify 14 million sounds in its database as gunshots or something else.

In one case in which ShotSpotter evidence was presented in court, Michael Williams, an Illinois grandfather, was jailed for more than a year in 2021 after being accused of shooting and killing a man based on audio evidence prosecutors said they obtained from the AI tool, the AP reported in 2022.

The AI technology allegedly picked up a loud noise when Williams’ car drove through an intersection and determined that Williams had fired a shot at the passenger in his car, the AP reported.

But Williams told authorities that a person in another vehicle pulled up to his car and shot at his passenger, the AP reported. A judge ended up dismissing the case when prosecutors said they didn’t have enough evidence to proceed.

“I kept trying to figure out, how can they get away with using the technology like that against me?” Williams told the AP.

In a variety of other applications used by lawyers, “AI creates ‘hallucinations’ or fake cases, citations, and legal arguments that seem correct but do not actually exist,” an expert at the law firm Freeman Mathis & Gary wrote in an article on its website about risks and problems associated with generative AI in the legal profession.

This includes the accuracy of chatbots now being used by prosecutors and defense attorneys to sift through documents to detect relevant evidence, draft legal memoranda and devise complex litigation strategies. “A [2024] Stanford University study found that 75% of the answers generated by AI chatbots regarding a sample court ruling were incorrect,” the expert wrote.

“Further, AI cannot adequately address questions of law that implicate more than one practice area. For example, a legal issue implicating both immigration law and criminal law may yield an accurate answer for immigration law purposes but disregard any criminal law issues and implications.”

AI used by law enforcement may be helpful, but precautions and protections need to be in place, Jonathan Parham, a former police director in Rahway, New Jersey, told Politico.

“The AI should never replace the officer — it should enhance their operational competency,” Parham told Politico.
