I worked on one of the first wearable foundation models in 2018. The innovation of this 2025 paper from Apple is moving up to a higher level of abstraction: instead of training on raw sensor data (PPG, accelerometer), it trains on a time series of behavioral biomarkers derived from that data (e.g., HRV, resting heart rate, and so on).
They find high accuracy in detecting many conditions: diabetes (83%), heart failure (90%), sleep apnea (85%), etc.
What does an "accuracy" of 83% mean? Do 83% of predicted diabetes cases actually have diabetes, or were 83% of those who have diabetes flagged as such? That's the difference between precision and recall: you can improve one by sacrificing the other, so boiling performance down to one number is hard.
They use the area under the receiver operating characteristic (ROC) curve, which is a pretty standard way to boil that down to one number.
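As a minimal illustration of how the three numbers relate (toy values, not the paper's evaluation code), scikit-learn makes the comparison easy:

    from sklearn.metrics import precision_score, recall_score, roc_auc_score

    # Hypothetical ground truth and model scores, for illustration only.
    y_true = [0, 0, 0, 1, 1, 1, 0, 1]
    y_score = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9, 0.7, 0.3]
    y_pred = [1 if s >= 0.5 else 0 for s in y_score]  # depends on the 0.5 threshold

    print("precision:", precision_score(y_true, y_pred))  # of predicted positives, how many are real
    print("recall:", recall_score(y_true, y_pred))         # of real positives, how many were caught
    print("AUROC:", roc_auc_score(y_true, y_score))        # threshold-free ranking metric

Precision and recall move as you slide the threshold; AUROC summarizes ranking quality across all thresholds, which is why papers often report it as the single number.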
Insurers, and health insurance companies in particular, must be super interested in this research and its applications.
I'm sure they're also interested in the data. Imagine raising premiums based on conditions they detect from your wearables. That's why it's of utmost importance to secure biometric data.
There are so many companies across many industries that are salivating at the thought of everyone using wearables to monitor their "health" and getting their hands on that data, not to mention law enforcement, lawyers, and other government agencies.
Had the phrase "foundation model" become a term of art yet?
By 2018, the concept was definitely in the air since you had GPT-1 (2018) and BERT (2018). You could argue even Word2Vec (2013) had the core concept of pre-training on an unsupervised or self-supervised objective leading to performance on a downstream semantic task. However, the phrase "foundation model" wasn't coined until 2021, to my knowledge.
Reminds me of Jim Simons of Renaissance's advice when it comes to data science: sort first, then regress.
Not sort in the literal sense, right?
https://stats.stackexchange.com/questions/185507/what-happen...
Is anyone else surprised by how poor the results are for the vast majority of cases? The foundation model, which had access to sensor data and behavioral biomarkers, actually _underperformed_ the baseline predictor that just uses nonspecific demographic data in almost 10 areas.
In fact, even when the wearable foundation model was better, it was only marginally better.
I was expecting much more dramatic improvements with such rich data available.
I wonder how much of that is driven by poorly performing behavioral models. There was an HN article a few weeks back about a model that only had an accuracy of about 70% at determining whether someone was awake or asleep. I would guess that the secondary behavioral data used in this paper (like cardiovascular fitness) are much harder to predict from raw sensor data than being awake or asleep.
I worked with similar data in grad school. I'm not surprised. You can have a lot of data, but sometimes the signal (or signal quality) just isn't present in that haystack, and there's nothing you can do about it.
Sometimes you just have to use ultrasound or MRI or stick a camera in the body, because everything else might as well be reading tea leaves, and people generally demand very high accuracy when it comes to their health.
Cool way of integrating the two approaches. For those on mobile, I created an infographic that's a bit more accessible: https://studyvisuals.com/artificial-intelligence/beyond-sens...
I love this because I build in medtech, but the big problem is that there are no open weights and no open data.
You can export your own Apple Health XML data for your own use and processing, but if you want to build an application that requests that XML data from users, it likely crosses into clinical research territory, with data security policy requirements and de-identification needs.
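For the personal-export route, a minimal Python sketch (assuming the standard export.xml file that an Apple Health export produces, with a hypothetical local path) could look like:

    import xml.etree.ElementTree as ET
    import pandas as pd

    # Parse export.xml from an Apple Health export (path is hypothetical).
    # For multi-GB exports, ET.iterparse() is kinder to memory than parse().
    tree = ET.parse("apple_health_export/export.xml")
    root = tree.getroot()

    # Each <Record> element carries a type identifier, timestamps, and a value.
    records = [
        {
            "type": rec.get("type"),
            "start": rec.get("startDate"),
            "value": rec.get("value"),
        }
        for rec in root.iter("Record")
    ]
    df = pd.DataFrame(records)
    df["start"] = pd.to_datetime(df["start"])
    print(df["type"].value_counts().head())

That only covers your own data, of course; collecting the same export from other people is where the clinical-research obligations kick in.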
What is the best way for non-big-tech companies to buy such data for research and product development?
Some are free:
- aidlab.com/datasets
- physionet.org
data brokers.
Entrusting your health data to AI bros is... extremely ill-advised.
I don't even trust Apple themselves, who will sell your health data to any insurance company any minute now.
They might not sell "your" data outright, but that doesn't mean they won't sell inferences/assumptions that they make about you using your data.
The reality is that no matter how ethical the company you trust with that data is, you're still one hack or one pissed-off employee away from having it leaked. And all of that data is freely up for grabs to the state (whose three-letter agencies are likely collecting it wholesale) and open to subpoena in a lawsuit.
What do you base that suspicion on?
Thanks for posting this. This looks promising...
I have about 3-3.5 years' worth of Apple Health + Fitness data (via my Apple Watch) encompassing daily walks / workouts / runs / HIIT / weight + BMI / etc. I started collecting this religiously during the pandemic.
The exported Fitness data is ~3.5 GB.
I'm looking to do some longitudinal analysis - for my own purposes first, to see how certain indicators have evolved.
Has anyone done something similar? Perhaps in R, Python? Would love to do some tinkering. Any pointers appreciated!
Thanks!!
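For what it's worth, one way to start the longitudinal view in pandas, assuming the records have already been parsed into a DataFrame df as in the export.xml sketch above and using the standard HealthKit resting heart rate identifier:

    import pandas as pd
    import matplotlib.pyplot as plt

    # df comes from parsing export.xml, with "start" already converted to datetime.
    rhr = df[df["type"] == "HKQuantityTypeIdentifierRestingHeartRate"].copy()
    rhr["value"] = pd.to_numeric(rhr["value"], errors="coerce")

    # Weekly mean resting heart rate, plus a 4-week rolling trend.
    weekly = rhr.set_index("start")["value"].resample("W").mean()
    trend = weekly.rolling(4).mean()

    print(weekly.tail())
    trend.plot(title="Resting heart rate, 4-week rolling mean")
    plt.show()

The same pattern (filter by record type, resample, smooth) works for VO2Max, HRV, weight, and most of the other quantity types in the export.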
It might actually be worth writing your analysis in Swift with the actual HealthKit API and visualization libraries.
Bonus: when you’re done, you’ll have an app you can sell.
:thumbs_up.gif:
My sentiments, exactly.
Though I'm looking to scratch my own itch for now...
How hard is it to write in Swift?
FWIW, we're working on something similar (you wouldn't necessarily need to write R or Python). Feel free to email me at bmb@empirical.health and I can add you to a beta once we have it ready!
Thanks, I'll reach out.
I am curious to do my own analysis, for two main reasons:
- some data is confidential (I'd hate for it to leave my devices)
- wanna DIY / learn / iterate
Will ping you in any case. Thanks
Is there a way to run this on your own data? I’ve been wearing my Apple Watch for years and would love to be able to use it better.
Not yet -- this one is just a research study. Some of their previous research has made it into product features.
For example, Apple Watch VO2Max (cardio fitness) is based on a deep neural network published in 2023: https://www.empirical.health/blog/how-apple-watch-cardio-fit...
Apple's VO2Max measures are not based upon that deep neural network development, and empirical seems to be conflating a few things. And FWIW, just finding the actual paper is almost impossible as that same site has SEO-bombed Google so thoroughly you end up in the circular-reference empirical world where all of their pages reference each other as authorities.
Apple and Columbia did recently collaborate on a heart rate response model -- one which can be downloaded and trialed -- but that was not related to the development of their VO2Max calculations.
Apple is very secretive about how it calculates VO2Max, but it is likely a pretty simple calculation (e.g., how much your heart is responding relative to the level of activity inferred from your motion, type of exercise, and movements). The most detail they provide is in https://www.apple.com/healthcare/docs/site/Using_Apple_Watch..., which is mostly a validation that it provides decent enough accuracy.
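As a purely illustrative example of a "simple calculation" in that spirit (emphatically not Apple's actual method), the classic Uth et al. (2004) heart-rate-ratio estimate needs only maximum and resting heart rate:

    def vo2max_hr_ratio(hr_max: float, hr_rest: float) -> float:
        """Rough VO2max estimate (ml/kg/min) from the Uth et al. heart-rate
        ratio method. Illustrative only; not Apple's algorithm."""
        return 15.3 * hr_max / hr_rest

    # Hypothetical values for a reasonably fit adult: roughly 53 ml/kg/min.
    print(vo2max_hr_ratio(hr_max=190, hr_rest=55))

Apple's estimate presumably uses much richer inputs (GPS pace, workout type, heart rate response during outdoor walks/runs), but the point stands that a useful estimate does not require anything exotic.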
Apple has been reporting VO2max for a very long time (since well before 2023). I wonder what the accuracy was back then? Maybe they should offer users the option to re-compute those past numbers with the latest and greatest algorithm.
Interesting to see contrastive loss instead of a reconstruction loss.
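For context, a generic InfoNCE-style contrastive objective (a common formulation, not necessarily the exact loss used in the paper) can be sketched in a few lines of PyTorch:

    import torch
    import torch.nn.functional as F

    def info_nce_loss(z1, z2, temperature=0.1):
        """Generic InfoNCE contrastive loss between two batches of embeddings,
        e.g. two augmented views of the same biomarker time series."""
        z1 = F.normalize(z1, dim=-1)
        z2 = F.normalize(z2, dim=-1)
        logits = z1 @ z2.T / temperature      # pairwise cosine similarities
        labels = torch.arange(z1.size(0))     # matching pairs sit on the diagonal
        return F.cross_entropy(logits, labels)

    # Toy usage with random embeddings.
    loss = info_nce_loss(torch.randn(8, 64), torch.randn(8, 64))

The intuition for choosing this over reconstruction: the model only has to learn representations in which the two views of the same person/week are closer than views of different people, rather than reproducing noisy raw biomarker values exactly.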
Has anyone seen the weights published, or even an API released?
In the paper, they say they can't release the weights due to terms of consent with study participants (this is from the Apple Heart and Movement study).
Can someone explain what "wearable foundation" means?
It's a "Foundation Model" for wearable devices. So "wearable" describes where it is to be used, rather than describing "foundation".