U.S. lawyers are increasingly warning clients not to treat artificial-intelligence chatbots as confidential sounding boards after a federal judge in New York ruled that a former financial-services executive could not withhold chatbot-generated materials from prosecutors in a securities-fraud case, Reuters reported Wednesday.
The decision has sharpened concerns that conversations with platforms such as ChatGPT and Claude may be discoverable in criminal investigations and civil litigation, because chatbots do not carry the legal protections that attach to communications with a lawyer. The lesson is especially acute in financial-crime matters, where defendants often face document-heavy probes and may be tempted to use AI tools to organize facts, test arguments, or draft summaries for counsel.
According to Reuters, Bradley Heppner, the former chair of bankrupt financial-services company GWG Holdings and founder of Beneficient, used Anthropic’s Claude to prepare reports about his case for his lawyers after being charged with securities and wire fraud. U.S. District Judge Jed Rakoff ruled in February that 31 Claude-generated documents had to be produced, writing that no attorney-client relationship exists, or could exist, between an AI user and a platform such as Claude.
Law firms are now broadly advising clients that sharing facts, legal strategy, or counsel’s advice with a chatbot could jeopardize privilege, Reuters said. The same risk could arise in other criminal cases where a defendant uses AI to discuss defense themes, organize timelines, or rehearse explanations outside counsel’s presence.
Several firms have urged clients to use caution, and some have begun adding contract language warning that disclosure of privileged communications to third-party AI platforms may waive attorney-client protection, according to the report.
The news outlet also pointed to a contrasting ruling from Michigan, where U.S. Magistrate Judge Anthony Patti held that a self-represented woman in an employment case did not have to turn over her ChatGPT chats, treating them instead as her own work product. That split underscores that the law remains unsettled. Still, the direction of travel is clear enough that lawyers are racing to set guardrails, including steering clients toward closed corporate AI systems and instructing them to state in prompts when research is being conducted at counsel’s direction.
As Reuters put it, many attorneys are falling back on an old rule for the AI era: do not talk about your case with anyone except your lawyer, including a chatbot.
Read more at Reuters
