Comment by no_wizard

20 hours ago

I work on an application that uses AI to index and evaluate any given corpus of knowledge (papers, knowledge bases, etc.), and it has been a huge help here. I know it's because we are dealing with what is effectively structured data that can be well classified once identified, and we have relatively straightforward ways of doing that identification. The real magic is when the finely tuned AI starts correctly stitching together pieces of information that previously didn't appear to be related; that is the secret sauce, beyond simply indexing for search.
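A toy sketch of the kind of stitching I mean (all names and data here are hypothetical, and this is crude entity-overlap linking, not what our actual model does):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical mini-corpus: doc id -> set of entities identified in it.
corpus = {
    "paper_a": {"protein folding", "transformers"},
    "paper_b": {"transformers", "attention"},
    "kb_note": {"protein folding", "drug discovery"},
}

def link_documents(corpus):
    """Connect documents that share at least one identified entity."""
    by_entity = defaultdict(set)
    for doc, entities in corpus.items():
        for ent in entities:
            by_entity[ent].add(doc)
    links = set()
    for docs in by_entity.values():
        links.update(frozenset(p) for p in combinations(sorted(docs), 2))
    return links

links = link_documents(corpus)
# paper_a links to paper_b (shared: transformers) and to kb_note
# (shared: protein folding), surfacing a connection between two
# documents that share no entities directly.
```

Once identification is solid, even this naive overlap step surfaces non-obvious relationships; the tuned model does the same thing at a much fuzzier semantic level.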

Code is similar: programming languages have well-known rules. Couple that with proper identification and pattern matching, and that's how you get these generated prototypes[0] done via so-called 'vibe coding' (not the biggest fan of the term, but I digress).

I think these are early signs that this generation of LLMs, at least, is likely to augment many existing roles rather than strictly replace them. Productivity will increase considerably once the tools are well understood and scoped to the task.

[0]: They really are prototypes. You will eventually hit walls if you have an LLM generate the code without understanding it yourself.