Comment by phillipcarter

2 years ago

Although we're not using Claude in production (yet), it's a regular part of our testing when we build new features with LLMs. Part of the reason we haven't used it (yet) is that OpenAI got more certifications faster, so we went to market with them. Their API has only gotten better and more reliable since, and it's cheap. But now that Claude is in AWS Bedrock, that opens up some things for us that were previously closed.

In my experience, my exact prompt (modulo a few tiny tweaks) works just as well in development with Claude Instant as it does with GPT 3.5. And it's just as fast!

Makes sense, as Claude Instant is likely better than 3.5.

  • I dunno about that. GPT 3.5 is extremely good. I'd wager that most apps that use RAG to pass context in and get JSON (or some other structured output) out, which you can then hand to another part of your product, don't need GPT 4 or anything equally powerful.
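For what it's worth, the pattern described here (RAG context in, JSON out, handed to the rest of the product) is mostly model-agnostic glue. A minimal sketch of that glue might look like this; the function names and JSON shape are my own illustration, not any particular product's API:

```python
import json


def build_rag_messages(question, retrieved_chunks):
    """Assemble a chat prompt that passes retrieved context in
    and asks the model for a JSON object back."""
    context = "\n\n".join(retrieved_chunks)
    return [
        {
            "role": "system",
            "content": (
                "Answer using only the context below. Reply with a JSON "
                'object of the form {"answer": str, "sources": [int]}.\n\n'
                f"Context:\n{context}"
            ),
        },
        {"role": "user", "content": question},
    ]


def parse_model_json(reply):
    """Pull the JSON object out of the model's reply, tolerating
    stray prose before or after it; return None if nothing parses."""
    start, end = reply.find("{"), reply.rfind("}")
    if start == -1 or end == -1:
        return None
    try:
        return json.loads(reply[start : end + 1])
    except json.JSONDecodeError:
        return None
```

The messages list can go to whichever chat-completion endpoint you're testing (GPT 3.5, Claude Instant, or Claude via Bedrock), which is why swapping models with only tiny prompt tweaks tends to work: all the product-specific logic lives on either side of the call.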