Comment by brandonb

3 days ago

I worked on one of the first wearable foundation models in 2018. The innovation of this 2025 paper from Apple is moving up to a higher level of abstraction: instead of training on raw sensor data (PPG, accelerometer), it trains on a time series of behavioral biomarkers derived from that data (e.g., HRV, resting heart rate, and so on).
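
To make the abstraction difference concrete, here's a toy sketch (my own illustration, not the paper's actual feature set or pipeline): a raw-sensor model ingests tens of millions of PPG samples per week, while a biomarker model ingests a handful of derived values per day.

```python
import numpy as np

rng = np.random.default_rng(0)

# Raw-sensor input: a week of PPG at 64 Hz -> tens of millions of samples.
# (64 Hz is a typical wearable PPG rate, used here only for scale.)
raw_ppg = rng.standard_normal(64 * 60 * 60 * 24 * 7)

# Biomarker input: one short vector per day. The columns
# [resting HR, HRV in ms, step count, sleep hours] are illustrative
# guesses, and the values are fabricated.
week_of_biomarkers = np.array([
    [62.0, 48.0, 8500, 7.2],
    [61.0, 52.0, 9100, 6.8],
    [63.0, 45.0, 7800, 7.5],
    [60.0, 55.0, 10200, 8.0],
    [64.0, 43.0, 6900, 6.5],
    [61.0, 50.0, 9500, 7.1],
    [62.0, 49.0, 8800, 7.4],
])

print(raw_ppg.shape)             # (38707200,)
print(week_of_biomarkers.shape)  # (7, 4)
```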

They find high accuracy in detecting many conditions: diabetes (83%), heart failure (90%), sleep apnea (85%), etc.

What does an "accuracy" of 83% mean? Do 83% of predicted diabetes cases actually have diabetes (precision)? Or did 83% of those who have diabetes get flagged as such (recall)? You can improve one by sacrificing the other, so boiling performance down to a single number is hard.
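
Here's a minimal sketch of that trade-off (the labels and risk scores are made up for illustration): raising the decision threshold increases precision but lowers recall.

```python
# Toy example: precision vs. recall as the decision threshold moves.
# labels: 1 = actually has the condition; scores: model's risk estimates.
labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.35, 0.1]

def precision_recall(labels, scores, threshold):
    tp = sum(1 for y, s in zip(labels, scores) if s >= threshold and y == 1)
    fp = sum(1 for y, s in zip(labels, scores) if s >= threshold and y == 0)
    fn = sum(1 for y, s in zip(labels, scores) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

for t in (0.3, 0.5, 0.7):
    p, r = precision_recall(labels, scores, t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
# threshold=0.3: precision=0.57, recall=1.00
# threshold=0.5: precision=0.60, recall=0.75
# threshold=0.7: precision=0.67, recall=0.50
```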

Insurance companies, and health insurers in particular, must be super interested in this research and its applications.

  • I'm sure they're also interested in the data. Imagine raising premiums based on conditions they detect from your wearables. That's why securing biometric data is of the utmost importance.

  • There are so many companies across many industries salivating at the thought of everyone using wearables to monitor their "health" so they can get their hands on that data. And not just companies: law enforcement, lawyers, and other government agencies too.

Had the phrase "foundation model" become a term of art yet?

  • By 2018, the concept was definitely in the air, since you had GPT-1 (2018) and BERT (2018). You could argue even Word2Vec (2013) captured the core idea: pre-training on an unsupervised or self-supervised objective improves performance on downstream semantic tasks. However, the phrase "foundation model" itself wasn't coined until 2021, to my knowledge.

    • I guess I just find the whole "foundation model" phrasing to be designed to pat the backs of the "winners," who would of course be those with the most money. I'm sure there are foundation models from groups other than, e.g., OpenAI, but the origins felt egotistical, and asserting that you made one prior to the phrase's inception only feels more so.

      Had you merely called it an early instance of pretraining, I'd be fine with it.