Comment by sminchev

3 days ago

Some things that I know how to do, I just run myself. If starting the tests is a bash command, I ask the AI to create a bash script that does this, and then I run it myself. Same with build, deploy, and other similar tasks. For less important tasks, I use a different model, like GLM, which is cheaper. Then I save the result of, say, a bug analysis or a code review, and ask my main model (Opus) to read the document and execute the task. This way I use the cheaper model to do the analysis, and my expensive model to execute the tasks.
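To make the handoff concrete, here is a minimal runnable sketch of the idea. The `cheap_model` and `main_model` functions are stubs standing in for whatever CLI wrappers you use to call GLM and Opus — they are placeholders I made up for illustration, not real tools or the commenter's actual setup:

```shell
#!/usr/bin/env sh
# Sketch of the two-model handoff: the cheap model writes an analysis
# document, the expensive model reads it and executes the task.
set -eu

cheap_model() {   # stand-in for a call to the cheaper model (e.g. GLM)
    printf '1. reproduce the bug\n2. fix off-by-one in parser\n'
}

main_model() {    # stand-in for the Opus call that executes the plan
    echo "executing plan from $1"
}

cheap_model > analysis.md   # cheap model does the analysis, saved to a file
main_model analysis.md      # expensive model reads the file and does the work
```

The point of the file in the middle is that the expensive model never has to re-read the whole codebase; it only consumes the short plan the cheap model already produced.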