AI in expert witness reporting: pragmatic adoption with professional guardrails
By Luci Lloyd, Managing Director and Paediatric Nursing Care Expert
AI is now part of the expert witness and medico-legal landscape, sometimes obvious and sometimes embedded in tools we already use. The question for us as experts and solicitors is no longer "if", but how it should be used safely, transparently and defensibly.
In my view, the best approach is neither avoidance nor enthusiasm without limits; it is measured acceptance, supported by clear policies and procedures, so that as experts we preserve what Courts rely on most: our independence, impartiality and reliability.
At JJ&A, AI and the tools available are a standing item at our monthly meetings, and anything we use internally is adopted with caution. We have begun developing AI policies and processes to support our employees and experts along the way, and we are also in discussions with an AI organisation about how AI and automation can assist us as a small business.
What JJ&A use AI for (and why)
I’m all for utilising AI, provided we are honest about its scope and disciplined about verification. I have been learning about and using AI in my own work for approximately 12 months, and I have found it can be helpful for:
- Grammar and readability
- Administrative tasks
- Research support
The key point, however, is that I treat AI as an assistive tool: I quality-assure everything, and I check sources at their primary origin. As Managing Director, I make sure I give the same advice to our employees and experts. I am far from an AI expert, but I recognise it has a place both at home and at work.
Why guardrails matter more in expert witness work than in routine practice
Expert witness reporting is not simply clinical documentation. It is evidence for the Court, and our expert duties require independence, objectivity, and accountability. Guidance aimed at expert witnesses is explicit that experts remain responsible for their reports and cannot avoid these duties by relying on AI.
The biggest risk: credibility damage from AI being wrong (hallucinations)
Generative AI can produce output that sounds plausible while being wrong, including fabricated references; these errors are known as hallucinations. I never treat AI as a substitute for reading the underlying evidence, and I always check the primary source.
A related risk is the perception (or reality) that an expert has not personally read the evidence and records provided. Commentary from expert witness training has noted the expectation that experts personally review the material they rely upon, because cross-examination will likely probe exactly what the expert read and how the conclusions were formed.
A simple risk framework I think is useful (for experts and instructing solicitors)
One of the most helpful developments in recent guidance is a risk-based approach: thinking in terms of low-risk, high-risk and prohibited uses. Here is how I apply it to expert witness reporting:
Low-risk (generally acceptable with oversight):
- Grammar/spell checking and style refinement that does not change meaning.
- Administrative structuring and generic templates without case facts.
- Administrative duties such as organising documents or scheduling work and meetings.
High-risk (use only with heightened controls and transparency):
- Summarisation of records where omissions could affect analysis.
- Any drafting that touches substantive content, reasoning, or key citations.
Prohibited (not compatible with expert duty, in my view):
- AI generating the expert’s opinion on breach of duty, causation, condition and prognosis, or quantum.
- Uploading confidential case data into systems without robust confidentiality safeguards.
Practical “guardrails” I would like to see become standard
To build shared confidence across Claimant and Defendant instructions, a simple set of norms would help:
- Purpose limitation: define what AI is used for (e.g., grammar/admin/research support) and what it is not used for (opinion-formation).
- Verification: no AI-provided fact, reference, or quotation is used without checking the primary source.
- Confidentiality: do not input identifiable case data into public AI tools; keep data governance front-and-centre.
- Auditability: keep a brief internal note of tools used and for what purpose, so the workflow can be explained if challenged.
- Disclosure where meaningful: if AI played a material role in any analysis or substantive content, discuss with instructing solicitors whether disclosure is appropriate.
Conclusion
AI has brought benefits to expert witness work, particularly in improving clarity, reducing administrative burden, and supporting efficient research. But the integrity of expert evidence depends on boundaries: the expert’s opinion must remain the expert’s; sources must be verified; confidentiality must be protected; and the workflow must be defensible under scrutiny. With shared guardrails, AI can be a genuine support to quality, rather than a risk to credibility.
References
The Academy of Experts: Guidance for Expert Witnesses on the use of AI (Jan 2026)
Expert Witness Institute: Why you must verify AI-generated content in your expert report (Jan 2026)
Courts and Tribunals Judiciary: AI Guidance for Judicial Office Holders (Oct 2025)
Law Society Gazette report of Mr Justice Waksman’s warning (Nov 2025)
Bush & Co: Expert Witnessing in the Age of AI: Mindful Advancements (Jun 2025)
Clyde & Co commentary on medico-legal CPD session referencing AI and expert expectations (Feb 2026)
Bond Solon: commentary on AI tools in litigation workflows (witness preparation) (Mar 2026)
McCollum Consultants: discussion of AI tools and expert duties (Nov 2025)

