
Comment by xcubic

9 hours ago

> I am not following, can you give a concrete example of your workflow?

In my agent file I explain that I have a static analyzer which generates a call graph. On startup, the agent runs `~/.agent/tools/__callgraph__/generate_callgraph.py`.

It then gets to see `callgraph.current.md`, and in subsequent sessions `callgraph.diff.md`.
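The diff step can be sketched roughly like this (file names and layout are hypothetical; the real paths live under `~/.agent/tools/__callgraph__/`):

```python
import difflib
from pathlib import Path

# Hypothetical file names for illustration.
CURRENT = Path("callgraph.current.md")
PREVIOUS = Path("callgraph.previous.md")
DIFF = Path("callgraph.diff.md")

def write_diff() -> None:
    """Diff the freshly generated callgraph against last session's snapshot,
    then store the current graph as the baseline for the next session."""
    old = PREVIOUS.read_text().splitlines(keepends=True) if PREVIOUS.exists() else []
    new = CURRENT.read_text().splitlines(keepends=True)
    diff = difflib.unified_diff(old, new, fromfile="previous", tofile="current")
    DIFF.write_text("".join(diff))
    PREVIOUS.write_text("".join(new))
```

On a fresh project the diff is simply the whole graph; afterwards the agent only needs to read what changed.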

Here is an example of some output that I currently have in `callgraph.current.md`:

  ## src/components/Header.tsx

  - **export Header({ ... }: Props)** (start 9, end 54) → `useAuth`

  ## src/components/HelpTooltip.tsx

  - **export HelpTooltip({ ... }: Props)** (start 15, end 42) → (none)

  ## src/components/ResultsTable.tsx

  - **getHeaderLabel(col: { id: string; columnDef: { header?: unknown } }): string** (start 45, end 51) → (none)
  - **getCellValue(colId: string, original: KeywordResult): string** (start 53, end 74) → (none)
  - **export ResultsTable({ ... }: Props)** (start 76, end 406) → `getCellValue`

  ## src/components/SettingsDrawer.tsx

  - **export SettingsDrawer({ ... }: Props)** (start 158, end 336) → (none)

For example: `ResultsTable` calls `getCellValue`.

In these cases it's just one function, but you also have entries like:

  **export Dashboard()** (start 36, end 635) → `getTopKeywords`, `normalizeText`, `searchKeywords`, `searchKeywordsMulti`, `searchSemantic`, `searchSemanticMulti`

For the Python version it also gives the parameters and their types. I think the next thing I'd need to do is include self-defined type definitions. Doing things this way lets an LLM read very little yet still reason reasonably well about what the code does. The caveat is that this only works if you abstracted your code well; if you didn't, the LLM doesn't know your implementation.

I probably should also add return types.
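For the Python side, the core extraction (parameters, return annotations, line spans, and called names) can be sketched with the standard `ast` module; function and file names here are made up for illustration:

```python
import ast

def summarize(source: str, path: str) -> str:
    """Emit one markdown section per file: each top-level function with its
    typed parameters, return type, line span, and the names it calls."""
    tree = ast.parse(source)
    out = [f"## {path}", ""]
    for node in tree.body:
        if not isinstance(node, ast.FunctionDef):
            continue
        # Parameters with their annotations, e.g. "q: str".
        params = ", ".join(
            a.arg + (f": {ast.unparse(a.annotation)}" if a.annotation else "")
            for a in node.args.args
        )
        ret = f" -> {ast.unparse(node.returns)}" if node.returns else ""
        # Plain-name calls inside the body; attribute calls are skipped here.
        calls = sorted({
            c.func.id for c in ast.walk(node)
            if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)
        })
        callees = ", ".join(f"`{c}`" for c in calls) or "(none)"
        out.append(
            f"- **{node.name}({params}){ret}** "
            f"(start {node.lineno}, end {node.end_lineno}) → {callees}"
        )
    return "\n".join(out)
```

This is only a sketch: a real version would also resolve `self.foo()`-style calls, imports, and cross-file references, which is where most of the work is.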