Comment by SXX
2 hours ago
My guess is they mean Google creates those summaries via tool use, rather than trying to filter the actual chain of thought at the API level or return errors if the model starts leaking it.
If you work with big contexts in AI Studio (like 600,000-900,000 tokens), it sometimes just breaks down on its own and starts returning raw CoT without any prompt hacking whatsoever.
I believe that if you intentionally tried to expose it, that would be pretty easy to achieve.