What do you like about it? Compared to GPT-3.5, Claude Instant seems to be the same or worse in quality according to both human and automated benchmarks, but also more expensive. It seems undifferentiated. And I would rather use Mixtral than either of those in most cases, since Mixtral often outperforms GPT-3.5 and can be run on my own hardware.
Data extraction, mostly. It supports long documents, has cheaper input tokens than GPT-3.5 Turbo, and when I ask it to stick to the document's information it doesn't try to fill in the gaps with its trained knowledge.
Sure, you can't have a chat with it or expect it to do high-level reasoning, but it has enough to make the basic deductions needed for grounded answers.
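In case it helps make the "stick to the document" approach concrete, here is a minimal sketch of that kind of grounded extraction call. It assumes the Anthropic Python SDK's Messages API and the claude-instant-1.2 model id; the document text, system prompt, and extracted fields are made-up placeholders, not something from this thread.

```python
# Hypothetical sketch: grounded data extraction with Claude Instant via the
# Anthropic Python SDK. Model id, document, and fields are illustrative only.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

document = "ACME Corp invoice #1042, issued 2024-01-15, total due $1,250.00."

system_prompt = (
    "Answer using only information stated in the provided document. "
    "If a field is not present in the document, reply with 'not stated' "
    "instead of guessing from prior knowledge."
)

message = client.messages.create(
    model="claude-instant-1.2",  # assumed Claude Instant model id
    max_tokens=256,
    system=system_prompt,
    messages=[{
        "role": "user",
        "content": f"Document:\n{document}\n\n"
                   "Extract the invoice number, issue date, and total as JSON.",
    }],
)

print(message.content[0].text)
```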
We have Claude Instant on the models page: https://artificialanalysis.ai/models. You can add it via the selector at the top right of each card where it says '9 Selected' (below the highlight charts).
Ah cool, I was on mobile and didn't see the selector.
Definitely agree with your point on Claude Instant though. Much less than half the price and much higher throughput/speed for a relatively small quality decrease (this varies by how 'quality' is measured and by use case).