Manual research reviews often miss key data. Learn how the new RAISE framework ensures AI transparency while keeping human researchers accountable.

The technology might be disruptive, but our standards are non-negotiable. You can use AI, but the human researcher stays 100% accountable for every single finding.
Position Statement on Artificial Intelligence (AI) Use in Evidence Synthesis Across Cochrane, the Campbell Collaboration, JBI, and the Collaboration for Environmental Evidence (2025). https://onlinelibrary.wiley.com/doi/10.1002/cl2.70074


The human researcher remains 100% accountable for every finding, method, and piece of content generated, even if an AI tool was used. According to the joint position statement from Cochrane, Campbell, JBI, and the CEE, researchers cannot blame an algorithm for errors or "hallucinations." This "human-in-the-loop" model requires authors to co-sign every output and be prepared to answer for the integrity of the data, acting as the final filter for all machine-generated judgments.
RAISE stands for the Responsible use of AI in evidence SynthEsis. It is an ecosystem-wide framework designed to ensure transparency and rigor when advanced technologies are used in research, mapping out specific responsibilities for tool developers, methodologists, and publishers alike. For researchers, it mandates clear disclosure of how AI was used; for developers, it requires "giving the receipts" by providing public information about training data, potential biases, and system limitations.
Disclosure is required whenever AI makes or suggests a scientific judgment, such as screening abstracts, extracting data, or synthesizing qualitative findings. Low-risk tasks, such as using basic scripts for spelling and grammar checks, generally do not need to be listed. However, for more substantive tasks, researchers must provide a "biography" of the AI usage, including the tool's name, version, platform, and a justification for why that specific tool was appropriate for the research question.
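As a sketch, the disclosure details described above could be captured in a structured record like the following. The field names, tool name, and wording here are hypothetical illustrations, not a format prescribed by RAISE:

```python
from dataclasses import dataclass

@dataclass
class AIUseDisclosure:
    """Minimal record of one substantive AI use in a review (hypothetical format)."""
    tool_name: str       # e.g. the screening tool's product name
    version: str         # exact version used, for reproducibility
    platform: str        # where it ran (local, vendor-hosted API, etc.)
    task: str            # the scientific judgment the tool made or suggested
    justification: str   # why this tool was appropriate for the question

    def to_statement(self) -> str:
        """Render the record as a one-line disclosure sentence."""
        return (f"{self.task}: {self.tool_name} v{self.version} "
                f"({self.platform}). Rationale: {self.justification}")

# Example with entirely made-up values:
disclosure = AIUseDisclosure(
    tool_name="ExampleScreener",
    version="2.1",
    platform="vendor-hosted API",
    task="Title/abstract screening (second reviewer)",
    justification="Calibrated against dual human screening on a pilot sample.",
)
print(disclosure.to_statement())
```

A record like this keeps every required element (name, version, platform, justification) in one place, so the disclosure section of a manuscript can be generated rather than reconstructed from memory.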
To counter the "black box" nature of AI—where the logic between input and output is hidden—researchers must practice "active skepticism." This involves validating the tool through piloting or calibration, such as comparing AI results against human results on a small sample. Furthermore, researchers are encouraged to make their prompts, code, and datasets publicly available so that the logic used to reach a conclusion is visible and can be critiqued by the wider scientific community.
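One common way to run such a calibration is to have both the AI tool and a human screen the same pilot sample, then compute a chance-corrected agreement statistic such as Cohen's kappa. A minimal sketch, with illustrative labels (1 = include, 0 = exclude):

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two binary raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # proportion of records on which the raters actually agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # agreement expected by chance, from each rater's marginal rates
    p_yes_a = sum(rater_a) / n
    p_yes_b = sum(rater_b) / n
    expected = p_yes_a * p_yes_b + (1 - p_yes_a) * (1 - p_yes_b)
    return (observed - expected) / (1 - expected)

# Illustrative pilot: human vs. AI screening decisions on 10 abstracts
human = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
ai    = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]
print(round(cohens_kappa(human, ai), 3))  # → 0.8
```

A low kappa on the pilot signals that the tool's judgments diverge from the human standard and should not be trusted unsupervised on the full dataset.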
Researchers must weigh the gains in speed and efficiency against the potential risks of error and the "upfront investment" of time required to validate the tool. While AI can mitigate human error—such as the 13% risk of a single human missing a relevant study during screening—it can also introduce systemic biases based on its training data. Additionally, the framework encourages researchers to consider the environmental and social costs, such as the high energy consumption required to run large language models.
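To illustrate the error-mitigation side of that trade-off: if two screeners err independently (a simplifying assumption, since human and AI errors are rarely fully independent), the chance that a relevant study slips past both is the product of their individual miss rates:

```python
p_miss_single = 0.13      # single-reviewer miss rate cited above
# Both a human and an independent second screener miss the same study:
p_miss_both = p_miss_single ** 2
print(f"{p_miss_both:.2%}")  # → 1.69%
```

Under that assumption, pairing a calibrated AI screener with a human could cut the expected miss rate from 13% to under 2%, which is the kind of gain that must be weighed against validation effort and systemic bias risk.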
