Comment by onesphere

3 years ago

We have a corpus, or database, of programs that encode logic but run no simulation, so it represents knowledge for solving a problem, yet all we control are the parameters (inputs). In this case the input is itself functional, logical content (a program) describing how the details of the corpus resolve. The model works through the corpus's integrated logic, and our output is an interpretation of that individual program.

Now our task is to swap out this entire database for something similar, but not exactly the same. The output becomes the input to this new corpus. The individual program persists, but everything else is the next generation. With a little bookkeeping, the programs do our will...
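The loop described above can be sketched in a few lines of Python. Everything here is hypothetical scaffolding: `interpret` stands in for whatever model resolves a program against a corpus, and the corpora are toy dicts, just to make the output-becomes-input handoff concrete.

```python
def interpret(corpus: dict, program: str) -> str:
    """Resolve a program against a corpus of logic (toy stand-in)."""
    return corpus.get(program, program)

def run_generations(corpora: list, program: str) -> list:
    """Feed each generation's output in as the next generation's input,
    keeping a log (the bookkeeping) of every intermediate result."""
    log = [program]
    for corpus in corpora:
        program = interpret(corpus, program)
        log.append(program)
    return log

# Two "generations" of corpora, similar but not identical:
gen1 = {"seed": "draft"}
gen2 = {"draft": "final"}

print(run_generations([gen1, gen2], "seed"))  # ['seed', 'draft', 'final']
```

The individual program is just the value threaded through the loop; each swapped-in corpus is the "next generation" acting on it.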

Don't think I quite follow. Is the new program (operating on the output of the earlier program) supposed to reason about why you are seeing the result that you are seeing? Or is it doing more post-processing to make the earlier output directly consumable by your corporate systems?

  • The new program’s purpose could be to do more post-processing to make the interpretation of that earlier program directly consumable (inter-generationally), or it could simply start producing more problems to solve.

    • Gotcha! That makes sense. I would recommend looking at LangChain, though, as it does a good job of modeling multi-stage learning / inference pipelines.
