Comment by hoechst

3 days ago

documents like https://github.com/obra/superpowers/blob/main/skills/testing... are very confusing to read as a human. "skills" in this project generally don't seem to follow a set format and just look like what you would get when prompting an LLM to "write a markdown doc that step by step describes how to do X" (which is what actually happened according to the blog post).

idk, but if you already assume that the LLM knows what TDD is (it probably ingested ~100 whole books about it), why are we feeding a short (and imo confusing) version of that back to it before the actual prompt?

i feel like a lot of projects like this that are supposed to give LLMs "superpowers" or whatever by prompt engineering are operating on the wrong assumption that LLMs are self-learning and can be made 10x smarter just by adding a bit of magic text that the LLM itself produced before the actual prompt.

ofc context matters and if i have a repetitive task, i write down my constraints and requirements and paste that in before every prompt that fits this task. but that's just part of the specific context of what i'm trying to do. it's not giving the LLM superpowers, it's just providing context.

i've read a few posts like this now, but what i am always missing is actual examples of how it produces objectively better results compared to just prompting without the whole "you have skill X" thing.

I fully agree. I’ve been running codex with GPT Pro (5o-codex-high) for a few weeks now, and it really just boils down to context.

I’ve found the most helpful things for me are just voice to Whisper to LLMs, managing token usage effectively and restarting chats when necessary, and giving it quantified ways to check when its work is done (say, AI unit tests with APIs or playwright tests). Also, every file I own is markdown haha.

And obviously having different AI chats for specialized tasks (the way the math works on these models makes this have much better results!)

All of this has allowed me to still be in the PM role like he said, but without burning down a needless forest on having it reevaluate things in its training set lol. But why would we go back to vendor lock in with Claude? Not to mention how much more powerful 5o-codex-high is, it’s not even close

The good thing about what he said is getting AI to work with AI, I have found this to be incredibly useful in prompting and segmenting out roles

Especially with some of the more generic skills like https://github.com/obra/superpowers-skills/blob/main/skills/... and https://github.com/obra/superpowers-skills/blob/main/skills/...: it seems like they're general enough that they'd be better off in the main prompt. I'd be interested to see when claude actually decides to pull them in

  • Also the format seems quite badly written. I.e., those “quick references” are actually examples, several generic sentences are repeated multiple times in different wording across sections, etc.

Everything is just context, of course. Every time I see a blog post on "the nine types of agentic memory" or some such I have a similar reaction.

I would say that systems like this are about getting the agent to correctly choose precisely the right context snippet for the exact subtask it's doing at a given point within a larger workflow. Obviously you could also do that manually, but that doesn't scale to running many agents in parallel, or running autonomously for longer durations.
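That snippet-selection step can be sketched as a tiny retrieval pass over the skill library. This is just naive keyword-overlap scoring, and the skill names and texts below are made-up placeholders, not the actual superpowers files:

```python
def score(subtask: str, snippet: str) -> int:
    # Naive relevance: count how many distinct snippet words appear in the subtask.
    words = set(subtask.lower().split())
    return sum(1 for w in set(snippet.lower().split()) if w in words)

def pick_snippet(subtask: str, skills: dict[str, str]) -> str:
    # Prepend only the best-matching skill to the prompt for this subtask,
    # rather than dumping the whole skill library into context.
    best = max(skills, key=lambda name: score(subtask, skills[name]))
    return skills[best]

# Hypothetical skill library (placeholder contents for illustration).
skills = {
    "tdd": "write a failing test first then make the test pass",
    "debugging": "reproduce the bug then bisect to isolate the cause",
}
```

Real systems let the model itself do this routing (or use embeddings instead of keyword overlap), but the shape is the same: per-subtask context injection instead of one giant prompt.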