Now, with advancements that allow users to input personal health information, OpenAI says, “ChatGPT can help you understand recent test results, prepare for appointments with your doctor, get advice on how to approach your diet and workout routine, or understand the tradeoffs of different insurance options based on your healthcare patterns.”
But as AI tools move deeper into the most intimate corners of users’ lives, concerns over data safety and privacy are rising, particularly when it comes to medical information, which carries a uniquely personal and sensitive weight. One medical expert, however, is bullish on the safety of personal data in ChatGPT Health.
Dr. Robert Wachter, chair of the department of medicine at the University of California, San Francisco, and author of A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future, said in a recent interview with Time that the central issue is ultimately “whether you trust OpenAI to keep to their word.”
Wachter said he “sort of” does trust them. OpenAI, he argued, has a reputation to uphold. “They have a really strong corporate interest in not screwing this up,” he told Time.
“If they want to get into sensitive topics like health, their brand is going to be dependent on you feeling comfortable doing this, and the first time there’s a data breach, it’s like, ‘Take my data out of there—I’m not sharing it with you anymore.’”
OpenAI appears to agree that the stakes are high. The company says ChatGPT Health builds on existing layered protection systems, including purpose-built encryption and isolation.