
Comment by jondwillis

9 days ago

Trying to avoid the things already mentioned:

- Opaque training data (and provenance thereof… where’s my cut of the profits for my share of the data?)

- Closed-source frontier models, with a profit motive to build moats and pull up ladders (e.g. reasoning tokens hidden so they can’t be used as training data)

- Opaque alignment (see above)

- Overfitting to in-context examples, e.g. syntax and structure are often copied from the examples even with contrary prompting

- Cloud models (seemingly) changing behavior even on pinned versions (see the probe sketch after this list)

- Over-dependence: “oops! I didn’t have to learn, so I didn’t. My internet is out, so now I feel the lack.”
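
A minimal sketch of how one might watch for that drift, assuming the official OpenAI Python client; the model snapshot, prompt, and seed are illustrative placeholders. Even with a dated snapshot, temperature 0, and a fixed seed, the API’s `system_fingerprint` can change when the backend configuration does, which is one observable sign that “pinned” is not fully deterministic:

```python
# Sketch: probe a pinned model snapshot for behavioral drift.
# Assumes the official OpenAI Python client (openai>=1.0); the model
# name, prompt, and seed below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PINNED_MODEL = "gpt-4o-2024-08-06"  # a dated snapshot, not a floating alias
PROBE_PROMPT = "List the first five prime numbers."

def probe() -> tuple[str | None, str | None]:
    resp = client.chat.completions.create(
        model=PINNED_MODEL,
        messages=[{"role": "user", "content": PROBE_PROMPT}],
        temperature=0,  # minimize sampling variance
        seed=42,        # best-effort determinism; not guaranteed
    )
    # system_fingerprint identifies the backend configuration; OpenAI
    # documents that it can change even when the model name does not.
    return resp.choices[0].message.content, resp.system_fingerprint

if __name__ == "__main__":
    text, fingerprint = probe()
    print(f"fingerprint={fingerprint}")
    print(text)
```

Run this on a schedule and diff the outputs and fingerprints: a changed fingerprint or answer on identical inputs is exactly the kind of silent change the bullet above complains about, despite the “pinned” version string.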