General Discussion
OpenAI launches ChatGPT Health, encouraging users to connect their medical records (The Verge) + Bluesky reactions
https://www.theverge.com/ai-artificial-intelligence/857640/openai-launches-chatgpt-health-connect-medical-records

But it's not intended for diagnosis or treatment.
by Hayden Field
Jan 7, 2026, 1:00 PM CST
OpenAI has been dropping hints this week about AI's role as a healthcare ally, and today the company is announcing a product to go along with that idea: ChatGPT Health.
ChatGPT Health is a sandboxed tab within ChatGPT that's designed for users to ask health-related questions in what the company describes as a more secure and personalized environment, with chat history and memory kept separate from the rest of ChatGPT. The company is encouraging users to connect their personal medical records and wellness apps, such as Apple Health, Peloton, MyFitnessPal, Weight Watchers, and Function, to get more personalized, grounded responses to their questions. It suggests connecting medical records so that ChatGPT can analyze lab results, visit summaries, and clinical history; MyFitnessPal and Weight Watchers for food guidance; Apple Health for health and fitness data, including movement, sleep, and activity patterns; and Function for insights into lab tests.
On the medical records front, OpenAI says it has partnered with b.well, which will provide back-end integration for users to upload their medical records; the company works with about 2.2 million providers. For now, ChatGPT Health requires users to sign up for a waitlist to request access, as it's starting with a beta group of early users, but the product will roll out gradually to all users regardless of subscription tier.
The company makes sure to mention in the blog post that ChatGPT Health is not intended for diagnosis or treatment, but it can't fully control how people end up using AI when they leave the chat. By the company's own admission, users in underserved rural communities send nearly 600,000 healthcare-related messages weekly, on average, and seven in 10 healthcare conversations in ChatGPT happen outside of normal clinic hours. In August, physicians published a report on a case of a man who was hospitalized for weeks with an 18th-century medical condition after taking ChatGPT's alleged advice to replace the salt in his diet with sodium bromide. Google's AI Overviews made headlines for weeks after launch over dangerous advice, such as putting glue on pizza, and a recent investigation by The Guardian found that the dangerous health advice has continued, with false advice about liver function tests and women's cancer tests, and recommended diets for those with pancreatic cancer.
-snip
Ran across this late last night while catching up with Bluesky posts. First saw it mentioned by science fiction writer John Scalzi, commenting on The Verge's Bluesky post about it. Scalzi's comment: "Today in Oh Hell No"
Some comments from other people in Scalzi's thread:
Oh, look, it's the long-awaited GOP health care plan. Give up your privacy and hope that your ailment is the most frequent one matching your symptoms, then get told to use whatever quack medicine paid the highest fee to OpenAI.
Today in "desperately searching for a way to monetize this boondoggle"
Aside from the idea of 'we are taking all of your data', if you have direct personal communication from a seemingly authoritative, human-sounding chatbot, people are going to take terrible advice from this whether or not the company line is 'not intended for diagnosis or treatment'.
It's the modern version of "I checked WebMD and apparently I have cancer," only this version will probably suggest you drink Clorox.
EDITING to add some of the Bluesky comments on The Verge's post about this:
if it's not intended for diagnosis or treatment, it's intended to sell you things and sell healthcare product and pharma companies the opportunities to sell you things. for the umpteenth time, no x a million billion
I work in IS for a good-sized health system. We've been reminded often: never, ever, ever put patient data into ChatGPT or any other AI chatbot, 'cause bad things will happen. My eye is twitching about private patient data being used to train LLMs.
We don't think the "you should kill yourself" bot needs sensitive medical data
This is the dumbest AI related crap I have seen this week, and there has been a LOT of dumb AI related crap this week already.
they aren't your actual medical provider and therefore are not bound by HIPAA in any way
this should reduce the nation's health costs by reducing the number of people who need health care
SheltieLover
(76,895 posts)

Ty for sharing!
highplainsdem
(60,061 posts)

SheltieLover
(76,895 posts)Dave Bowman
(6,683 posts)ibegurpard
(17,075 posts)n/t
dalton99a
(92,158 posts)

Response to dalton99a (Reply #4)
dalton99a
This message was self-deleted by its author.
highplainsdem
(60,061 posts)

The Madcap
(1,755 posts)

If I did provide my records, they would think I had six fingers on each hand and three arms. Not to mention that they would steal every penny I had. No way would I do this voluntarily.
snot
(11,523 posts)

Last edited Thu Jan 8, 2026, 09:07 PM - Edit history (2)

that prohibits cos. from collecting more info than is really needed in order to serve the customer, and imposes strict monetary penalties for breaches or misuse of data collected.
I've avoided giving up more info than I had to from the get-go, but my data's been stolen three times thanks to companies that were successfully hacked. The only remedy they offered was a service that, if I go to the trouble of signing up for it, notifies me when my data's found on the dark web. Well, guess what: I already know it's there, and apparently there's no way to get it off.
highplainsdem
(60,061 posts)

AI is now making the data gathering and exploitation of that data much worse.
snot
(11,523 posts)

I'm not sure I can think of ANY cos. that AREN'T doing it, except perhaps a few family-owned businesses.
I make some charitable donations toward the end of each year, and this time even the nonprofit organizations were requiring me to create an account with a bunch of personal info in order for me to donate to them!
Also infuriating to me are the pretenses that doing things on our cell phones, giving everyone our cell numbers, and using passwords maintained by some third party are going to help protect my security. One's cell is the LEAST secure, LEAST private component in anyone's digital system or environment; and the more other entities have my passwords, the less secure they are.
Or biometric IDs, as if those can't be stolen once a digital record of them is created???
The Big Bros. in this world want to be able to tie everything to one profile of each of us that they can readily connect to everything we do, and that they can simulate or shut down at will.
(Sorry, this stuff really aggravates me.)