Pre-Interview Hypothesis Generation: Another Way Agencies Can Use AI

An August 2025 newsletter from the National Children’s Alliance took a closer look at a recent study on the use of AI in the forensic interview process. The findings are intriguing, and they also raise a few questions about how to handle the new technology.

About the Large Language Models (LLMs)

Forensic interviews are designed to be neutral and focused on finding facts. Still, as humans, our experiences, training, and even unconscious biases inform the way we approach every interview. A recent study published in Psychology, Crime & Law, titled “Pre-interview hypothesis generation: large language models (LLMs),” shows how new technology can benefit child abuse investigations. The goal was to see whether AI could improve reliability in the investigative process by reducing bias and considering more angles for any given case.

In the study, researchers tested two different AI systems known as LLMs. These tools are trained on huge amounts of text data, which enables them to analyze content and generate human-like language. To test the LLMs, the researchers pitted them against three groups of human participants: child abuse investigation experts, psychologists, and “naïve” participants with no prior knowledge of child abuse investigations. Together, the pool included 21 experts, 60 psychologists, and 60 naïve participants.

Participants were asked to generate hypotheses and possible questions for case vignettes modeled after real-world situations. The results were striking. The first LLM, GPT-4, outperformed most human groups in both the number and comprehensiveness of hypotheses it generated. Not surprisingly, experts and psychologists scored second-best, while naïve participants performed worst.

Importantly, GPT-4’s hypotheses were more comprehensive than any other group’s list. They covered multiple explanations for suspected abuse, and that level of response raises an important question for agencies. Can AI tools like LLMs be a powerful support system in preparing interviews? If they help investigators think about possible explanations—while still maintaining objectivity—then shouldn’t there be ways for agencies to use them more?

Potential Concerns for Forensic Interviews

While the LLM assessment showed promising results and gave agencies reason to take note, the study also highlighted some cautions. First, generating too many hypotheses can overwhelm investigators, so a tool designed to help could end up creating more work than expected. Skilled interviewers must still be selective, focusing on hypotheses that can realistically be tested during an interview. AI doesn’t always deliver testable hypotheses; it may broaden perspectives, but human discretion is still essential.

Second, there’s a key difference in focus. Experts and psychologists tended to concentrate on incident details—information that can be checked and corroborated. On the other hand, LLMs (and naïve participants) often generated questions about mental state or family dynamics. While valuable, those areas are more speculative and less useful for confirming facts.

What’s more, we can’t avoid the legal considerations. If LLMs are used to help prepare interviews, their output could be considered discoverable in court, and prosecutors may need to be part of the process. For example, would interviewers need to be cross-examined on whether AI-generated hypotheses were included in discovery?

It’s clear that AI can support interview preparation in theory, but it’s no substitute for human expertise. If used, it has to be treated as a supplement to human judgment and flexibility. Still, as more and more studies come out, the potential is hard to deny. If there are ways for agencies to improve their systems, they’re definitely worth exploring!

Stay on the Cutting Edge—with iRecord’s Comprehensive Solutions

At iRecord, we are passionate about helping agencies adopt new solutions thoughtfully, always with the mission of serving those who protect and serve with accuracy and professionalism.

We partner with other industry leaders to deliver smart, forward-thinking solutions that enhance the way agencies work. It all has to work together, from the recording technology to evidence management and beyond. And as AI continues to advance, we are here to help agencies explore how tools like LLMs can be integrated responsibly into their workflows.

If you’re interested in exploring the power of AI in your agency—whether through LLMs, AI-assisted redactions, automatic transcriptions, or another tool—we’d love to talk! With the right solutions and partners, agencies can stay on the cutting edge while continuing to uphold the highest standards of care and justice. Let us be part of the conversation. Contact us today!

Contact Us