AI & Law Enforcement image (created by James McGreggor via ChatGPT)

AI as a Witness: The Emerging Role of Artificial Intelligence in Law Enforcement

9 min read · May 12, 2025

As artificial intelligence (AI) continues to grow in adoption, its implications demand exploration, especially where humans are directly affected. When considering the impact AI has on people and society, law enforcement (LE) certainly demands close and continuous scrutiny. While it is easy to see the benefits that AI brings, such as streamlined operations, enhanced accuracy, increased quality, and reduced workloads, these gains are accompanied by ethical dilemmas and vulnerabilities. The challenge lies not just in identifying where AI can be helpful today without compromising fairness and accountability; the real challenge is addressing the long-term effects of leveraging AI: being pragmatic while still considering existential concerns. No technology before now has offered the ability to offload “thought tasks”, which is why we need to pause and take a little more time to consider the impact of every task we offload.

While highlighting areas with significant potential for positive impact, such as the “identification of vulnerable and exploited children, and police emergency call centres”, Interpol has stated that “current AI systems have limitations and risks that require awareness and careful consideration by the law enforcement community to either avoid or sufficiently mitigate the issues that can result from their use in police work.” In June of 2023, Interpol published the Toolkit for Responsible AI Innovation in Law Enforcement to support responsible and practical adoption of AI.

It is clear that AI offers numerous benefits to law enforcement agencies. Its potential to assist with report writing, perform data entry, or provide on-demand access to policies during field operations can substantially ease the burden on overworked officers and improve their effectiveness. There are several products in development and on the market today, ranging from multi-platform automation solutions to full-scale AI platforms.

Axon’s Draft One is an example of a product that offers many AI features, report writing being one of them, whereas other companies focus more on streamlining information flow, such as the products offered by Bravo Foxtrot and Smart Squad. Then you have companies like Cognate that focus more on larger-scale intelligence use cases, such as threat hunting and pattern recognition. Other companies, such as C3 and Motorola, also offer AI solutions for LE.

Not only does AI have the potential to ease the burden on police officers, it has the potential to improve the quality of information; however, we need to think deeply about where and how AI is used, because the risk of it negatively impacting people is not as innocuous as unnecessarily sending a vehicle to rework for a failed inspection (in manufacturing use cases), or writing a blog post that sounds inauthentic.

If you have been involved with or reading up on AI systems, you most likely have learned that bias in these systems can have serious negative effects on the usability of the models being trained. You may have also heard about reliability challenges within AI such as hallucinations (presenting inaccurate information as if it were correct), model drift (the decline in a model’s performance, i.e., precision and accuracy, over time), and bias amplification (which occurs when AI systems reinforce each other’s biases, leading to more pronounced discrimination). There are two other important concepts to be aware of when we think about leveraging AI in law enforcement, or any situation where some form of determination is being made based on human interaction: fluency bias and the anchoring effect.
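To make model drift concrete: in practice it is often caught by periodically comparing a model’s current performance against a baseline recorded at deployment. The sketch below is a minimal illustration of that idea; the threshold and accuracy numbers are assumptions for the example, not from any product mentioned in this article.

```python
# Minimal sketch of model-drift monitoring: compare current accuracy on an
# audited sample against the accuracy recorded at deployment, and flag the
# model when the drop exceeds a tolerance. The 0.05 threshold is illustrative.

def detect_drift(baseline_accuracy: float,
                 current_accuracy: float,
                 threshold: float = 0.05) -> bool:
    """Return True if performance has degraded by more than `threshold`."""
    return (baseline_accuracy - current_accuracy) > threshold

# Example: a report-classification model scored 0.91 at deployment but
# only 0.83 on this month's audited sample of reports.
if detect_drift(baseline_accuracy=0.91, current_accuracy=0.83):
    print("Drift detected: re-validate the model before further use.")
```

In a law enforcement context, the “audited sample” would be human-reviewed outputs; the key point is that drift is invisible unless someone is deliberately measuring for it.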

In the article The First Draft of Justice, Robert Atkinson provides an explanation of the anchoring effect: “Cognitive science tells us that once a draft is written — especially one that reads smoothly — we tend to stick to it. Even if we know it’s flawed, we rarely revise it as much as we should…once a version of events is down on paper, it becomes a reference point. We adjust around it, rather than reconsidering it from scratch.” He also describes fluency bias as a psychological pattern whereby humans are “more likely to believe information that’s easy to read, well-structured, and confidently phrased. Language that flows smoothly tends to feel more truthful — even when it isn’t.”

Considering that AI systems are designed to read smoothly and sound grammatically correct — we have all seen the numerous arguments around the em-dash — we need to think about how this may influence the audiences receiving this information (e.g., a jury). And how are these anti-patterns (e.g., bias amplification, model drift) affecting the models that are being trained to generate the reports in the first place?

This is not to say that AI cannot be used “on-scene” or in other supportive ways. Leveraging AI to complete discrete data-entry fields on forms, such as addresses, phone numbers, and dates and times, should be safe and provide significant value to officers. However, open-ended fields, where an officer needs to provide their assessment of the situation, are where leveraging AI presents risk. An alternative could simply be to switch to a transcription mode; this would still save time and reduce fatigue from manually typing or writing.

AI could also be used as a tool to help officers ensure accuracy and compliance, making sure that forms and reports are submitted on time and are complete. Using AI to influence positive behaviors and promote compliance is certainly a good use of AI. Truleo, an emerging product company, is an example of this approach; their goal is to “Improve morale and retention with automation and positive reinforcement” (Truleo.com). By providing officers in the field with functionality such as policy lookup, conversational note-taking, and en-route service summaries, as well as promoting positive response handling (e.g., de-escalations), police departments can ensure that their officers, especially new officers, are better prepared to handle calls and other incidents while reducing stress on the officers themselves. With reduced stress and a better understanding of policies and procedures, especially de-escalation techniques, hopefully the outcomes of officer responses will become more positive as well.

There are also product companies attempting to use AI in a more proactive manner, providing direct translation services and even threat assessment, but we need to remember that at their core, artificial intelligence systems — more specifically LLMs — are still just a series of computational networks, processing data and predicting the most likely next token (a unit of text) to form an expected response. AI is not really thinking; it is making predictions based on mathematical probabilities. Because of this, we also need to consider other situational factors such as vocal intonation and non-verbal input (i.e., facial expressions and silence). These are human factors that AI is still not able to fully contextualize.
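The “predicting the most likely token” point can be shown in a few lines. This is a toy sketch, not how any production LLM is implemented: the vocabulary and probabilities below are invented for illustration, but the mechanism — sampling the next token from a weighted probability distribution — is the same basic operation an LLM repeats to generate text.

```python
import random

# Toy next-token prediction: at each step a language model produces a
# probability distribution over candidate tokens and samples from it.
# There is no reasoning here, only weighted chance. Vocabulary and
# probabilities are made up for illustration.
next_token_probs = {
    "suspect": 0.40,
    "vehicle": 0.30,
    "officer": 0.20,
    "silence": 0.10,  # low weight: non-verbal context is hard to model
}

def sample_next_token(probs):
    """Pick one token, weighted by its predicted probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print("Predicted next token:", sample_next_token(next_token_probs))
```

Run it twice and you may get different tokens; the model has no notion of which answer is true, only of which is statistically likely given its training data.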

Think about how silence is used in everyday communication, then consider how silence is used in different cultures. If non-verbal cues mean different things in different cultures, how will they be interpreted by AI when interacting with international visitors?

As explained in the article Automating Silence? by Celeste Rodriguez Louro, “Humans are highly skilled at using silence in conversation through a process of socialisation refined as we move from the cradle to the grave.” She further explains that “The expectation that human answers to direct queries will necessarily be lexicalised (manifested as words) does not align with how humans communicate. In conversation, silence is used strategically to express feelings or ideas which are best left unsaid.”

Let’s consider something else: neurodivergence. How will AI handle situations where it is interacting with someone who may be considered neurodivergent (e.g., someone who has ADHD or is autistic)? What about someone who is non-verbal?

What about someone with diabetes? When someone’s blood sugar drops too low, and they are not able to immediately address the situation, they may exhibit symptoms ranging from mild shakiness, sweating, difficulty concentrating, and irritability, to more severe symptoms like confusion, slurred speech, and even loss of consciousness or aggression. What other behaviors does this mirror? Intoxication. Can AI pick up on this medical condition and address it correctly?

As we enter the dawn of AI, we need to think about it differently than how we have other technologies like mobile or the internet. With AI taking over activities that would require human thought, we need to take the time to think critically about the impact.

There is absolutely a need to provide officers with every asset possible to be able to effectively respond to situations, and AI can surely help with this. Some areas where I can see AI being valuable are…

  • Writing end of shift reports.
  • Completing vehicle inspection check lists.
  • Providing anomaly detection — useful during events such as crime scene investigation or during vehicle walk-arounds at shift change-over.
  • Auto-lookup of license plates or BOLO identifications.
  • Auto-assist emergency calls.
  • Providing additional training, especially community relations and de-escalation techniques, or even on how to write better reports.
  • Facial recognition — useful in missing-children cases or for open warrants.

What about for larger issues? Here are some additional things to consider…

Can AI be used for self-reflection by officers to help them better understand their own biases and train them to be more aware? Although not a law enforcement platform, companies like Cogbias.ai are helping customers understand the bias being introduced in the questions they ask.

Could AI also be used to identify patterns to prevent future incidents or help rebuild morale? Let’s say that one precinct, or district, starts showing degradation in the quality of reports, or in the ‘tone’ in which reports are written, coupled with an increase in turnover. Could there be a potential risk to morale? Could this be something that leads to erosion of trust within the community, or a higher likelihood of officers using more force than warranted? With proper insights, could commanders respond faster to morale issues, possibly preventing incidents within the community, or could they prevent officer self-harm through early intervention?

How can AI help support better community engagement and relationships? How can we leverage AI to help develop better relationships?

There are so many use-cases for AI in law enforcement — we just need to make sure that we are selecting the right ones.

AI is changing everything about the way we live, both directly and indirectly. It has the potential to make our lives better by offloading a lot of repetitive and low-value work, by increasing quality through anomaly detection and deviation handling, and by being an all-around knowledge assistant that is available 24–7; however, it has the potential for serious negative implications as well. Even with the most well-intentioned applications, with AI in the loop, we need to think about how it could be used by bad actors or how it might shape a future we have not even considered possible. The best way to do this is to be curious, listen, be transparent, and ask questions. You most likely have heard the saying “there are no stupid questions”; because of the near-infinite possibilities with AI, that statement is only amplified.

What do you think?

  1. Do you share the same concerns? Do you disagree? If so, why?
  2. What AI challenges are you seeing in your own profession?

Thanks for reading!

AI is an exciting and rapidly evolving technology — the more we question, learn, and share, the better off we will all be.

If you liked this article, please follow me (James McGreggor) on LinkedIn.

For help architecting modern digital solutions (including AI) please visit our company www.blueforgedigital.com.

Author’s Note:

This article was created through a process that leveraged generative AI for grammatical and organizational refinement to ensure clarity, correctness, and logical flow; all content and ideas were provided by the author, and the initial and final drafts were fully edited by the author.

The author has no affiliation with any organization described in this article.


Written by James McGreggor

I am a digital technologist and business strategist who believes in using technology for good.