Comment by manmal

17 hours ago

They are retrained every 12-24 months and constantly receive new or updated reinforcement learning layers. New concepts are not the problem. The problem is outdated information in the training data, like the crappy old Postgres syntax that fills most of the Stack Overflow corpus.
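For a concrete illustration of the kind of thing meant here (the `settings` table is made up for the example): `INSERT ... ON CONFLICT` only shipped in Postgres 9.5 (2016), so a large share of older Stack Overflow answers still teach the two-statement upsert workaround:

```sql
-- Pre-9.5 upsert pattern, common in old Stack Overflow answers:
-- UPDATE first, then INSERT if nothing matched (racy under concurrency
-- without extra locking or a retry loop).
UPDATE settings SET value = 'dark' WHERE key = 'theme';
INSERT INTO settings (key, value)
SELECT 'theme', 'dark'
WHERE NOT EXISTS (SELECT 1 FROM settings WHERE key = 'theme');

-- Modern equivalent since Postgres 9.5: one atomic statement
-- (requires a unique constraint or index on "key").
INSERT INTO settings (key, value)
VALUES ('theme', 'dark')
ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value;
```

A model trained mostly on the pre-2016 answers will keep suggesting the first pattern.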

> They are retrained every 12-24 months and constantly getting new/updated reinforcement learning layers

This is true now, but it can't stay true, given the enormous cost of training. Inference is expensive enough as it is; the training runs are funded entirely by venture capital "startup" money, and pretty much everyone expects that funding to go away sooner or later.

You can't plan a business around something that volatile.

  • GPT-5.1 was trained on data more than 15 months old, IIRC, and it wasn’t that bad. Adding new layers isn’t that expensive.