AI Prompts, Legal Privilege, and Liability: A New World of Risks
Introduction: When “Ask Copilot” Becomes Evidence
Forbes recently published an article describing how OpenAI was required to help law enforcement identify a ChatGPT user as part of an investigation. This was not a subpoena, but a search warrant. It raises a question organizations should be considering as they adopt GenAI: what is the discoverability of, and liability attached to, AI prompts? (Forbes)
What is the discoverability of prompts from Microsoft Copilot or ChatGPT when employees use them to analyze, understand, or research decisions? Could those interactions be requested during litigation? As I thought about this, I came up with three plausible scenarios where this could become problematic.
Day To Day, Business as Usual: When AI “Advises” on Security Decisions
Imagine a user asks Copilot: “What would happen if we don’t implement this control, and an attacker compromises this system?” If Copilot replies that without this control the security program “could be seen as negligent,” that conversation is now stored, timestamped, and discoverable.
If leadership then decided not to deploy that control, the prompt could become evidence in a claim or other litigation, especially if the decision was never presented with its associated risks and documented.
Organizations that have rolled out copilots and other AI tools should instruct employees to treat any interactions the same as other forms of electronic communication. They should also work with legal counsel to understand the implications for attorney-client privilege before, during, and after a security incident as it relates to AI prompts.
Cyber Liability Insurance: AI Prompts and Claims
Cyber liability insurers already spend time understanding an organization's security program and control environment, and they have been known to deny claims or cancel policies when something is misrepresented or there is evidence of negligence. (Insurance Journal)
If prompts are, in fact, discoverable, then consider the following situations:
A record of a prompt where a user asked Copilot for a list of users without MFA. The results included service accounts and other accounts that may legitimately not need MFA. Now imagine an insurance questionnaire asking, "Do all users have MFA enabled?" Could the discrepancy be explained in court if it was not explained in the questionnaire? (A sketch of such a query appears after these scenarios.)
What if, as in the first scenario, a prompt advised that not deploying a certain control would be negligent, and the control was not deployed? Could that be grounds for a carrier denying a claim?
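To make the first scenario concrete, here is a minimal sketch of the kind of query an employee might run, or ask Copilot to run, against the Microsoft Graph authentication-methods report. The endpoint and the isMfaRegistered property exist in Graph v1.0; the token handling, the Reports.Read.All permission, and the "svc-" naming convention used to spot service accounts are assumptions for illustration.

```python
# Minimal sketch: list users without MFA registered, via Microsoft Graph.
# Assumes you already hold a bearer token (e.g. with Reports.Read.All);
# the "svc-" prefix used to flag service accounts is a hypothetical convention.
import requests

GRAPH_URL = ("https://graph.microsoft.com/v1.0/"
             "reports/authenticationMethods/userRegistrationDetails")

def users_without_mfa(token: str) -> list[dict]:
    """Return registration records for users with no MFA method registered."""
    headers = {"Authorization": f"Bearer {token}"}
    results, url = [], GRAPH_URL
    while url:  # follow @odata.nextLink paging until exhausted
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        results += [u for u in data["value"] if not u["isMfaRegistered"]]
        url = data.get("@odata.nextLink")
    return results

def looks_like_service_account(upn: str) -> bool:
    # Hypothetical convention: service accounts share a "svc-" prefix.
    return upn.lower().startswith("svc-")

if __name__ == "__main__":
    records = users_without_mfa(token="<access-token>")
    humans = [r for r in records
              if not looks_like_service_account(r["userPrincipalName"])]
    print(f"{len(records)} accounts lack MFA; "
          f"{len(humans)} appear to be human users.")
```

The point is not the script itself but the residue it leaves: the raw list includes accounts that may legitimately lack MFA, and the prompt and output that produced it may be retained and discoverable alongside the questionnaire answer.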
I haven't seen any cases where this has occurred yet, but consider the cost of recent breaches; Jaguar Land Rover's is approaching $2.5 billion. Insurers will be looking for reasons to deny claims. Will AI prompts give them an out?
The main takeaway, again, is to ensure that decisions are documented. Documentation should include timelines for when the associated risks were communicated, who made the decisions, and their acknowledgement of risk acceptance, preferably captured in a formal system. A sketch of what such a record might capture follows.
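As a minimal sketch of what a formal risk-acceptance record might capture, every field name and value below is an assumption for illustration, not a reference to any specific GRC product:

```python
# Hypothetical risk-acceptance record: the fields here are assumptions
# about what a formal system should capture, shown for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RiskAcceptance:
    risk_id: str                 # identifier in the risk register
    description: str             # the control gap being accepted
    communicated_on: datetime    # when the risk was presented to leadership
    decided_by: str              # who made the decision
    decision: str                # e.g. "accept", "remediate", "transfer"
    acknowledged_on: datetime    # when the decision maker signed off
    review_by: datetime          # when the acceptance must be revisited
    evidence: list[str] = field(default_factory=list)  # briefings, tickets

record = RiskAcceptance(
    risk_id="RISK-2025-041",
    description="MFA not enforced for legacy service portal",
    communicated_on=datetime(2025, 3, 3, tzinfo=timezone.utc),
    decided_by="CISO / VP Engineering",
    decision="accept",
    acknowledged_on=datetime(2025, 3, 10, tzinfo=timezone.utc),
    review_by=datetime(2025, 9, 10, tzinfo=timezone.utc),
)
```

A record like this gives a dated, attributable trail, so that a years-old prompt warning about a control gap is matched by evidence that the risk was communicated and formally accepted.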
During an Incident: Privilege, Discovery, and the Digital Paper Trail
I thought of two other scenarios that arise during incident response:
Past prompts become relevant, echoing the earlier scenarios: imagine that after a breach, investigators find AI prompts from weeks, months, or years earlier warning of the control failure that led to the incident. Those records could affect litigation, class-action exposure, or even denial of insurance claims.
Response prompts during an incident: as organizations build AI into their daily workflows, defenders or analysts may ask Copilot how to take actions during an incident. This can inadvertently create communications that are not privileged and are therefore discoverable.
I am not offering legal advice; guidance on attorney-client privilege should come from your attorneys. But if AI is not part of your legal guidance for privileged workflows, those interactions can be discoverable.
Make sure attorney-client privilege is addressed in your incident response playbooks and understood at all levels of the organization. I talk about this in our series on incident response preparedness. Ensure legal counsel is a close partner in all aspects of incident management, and consider how AI will be used during an incident.
Conclusion: AI Governance and the Impact on Discoverability
I cannot emphasize enough that all teams, including engineering, analysts, non-technical staff, and executives, need to understand how attorney-client privilege applies during an incident. How are your teams using AI prompts, and if that information were discovered and presented in court, how would it be perceived?
Summary of takeaways:
Review your AI governance and data retention policies
Document risk acceptance and security decisions
Educate teams on attorney-client privilege
The bottom line: copilots may help write code, summarize logs, or assess risks, but their legal implications remain largely untested and need to be considered as part of the bigger picture.