carlmr (1 year ago):
Isn't that the exact point of o1, that it has time to think for itself without reprompting?

arthurcolle (1 year ago):
Yeah, but they aren't letting you see the useful chain-of-thought reasoning that is crucial to train a good model. Everyone will replicate this over the next 6 months.

optimalsolver (1 year ago):
> Everyone will replicate this over next 6 months

Not without a billion dollars' worth of compute, they won't.

arthurcolle (1 year ago):
Are you sure it's a billion? That helps with estimating the training run.