Comment by nomel

13 hours ago

Not intuitive at all. Models are not equally capable just because they were trained on the same data; the model architecture as a whole matters enormously. To reduce capability you can cut layers, disable tool use or thinking, quantize the weights, etc. This is trivially proven by a cursory glance in the rough direction of any set of benchmarks (or by actual use).

Using a small model as a classifier ("there might be a vulnerability here") is probably reasonable, provided you have a model capable of proving it. Many companies are attempting this without the verification step, and the resulting nonsense noise is getting AI vulnerability checkers banned left and right.
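The flag-then-verify pattern described above can be sketched as a two-stage pipeline. This is a toy illustration, not anyone's actual product: `classify_fast` and `verify_thorough` are hypothetical stand-ins for a cheap small-model pass and an expensive confirming pass, implemented here as simple string heuristics.

```python
def classify_fast(snippet):
    # Cheap first pass: flag anything that *might* be a vulnerability.
    # Stands in for a small, noisy model; high recall, low precision.
    suspicious = ("strcpy", "eval(", "os.system")
    return any(tok in snippet for tok in suspicious)

def verify_thorough(snippet):
    # Expensive second pass: only accept findings the stronger checker
    # confirms. Toy rule: untrusted input must actually reach the sink.
    return "user_input" in snippet

def scan(snippets):
    # Report only candidates that survive both stages, cutting the noise
    # that gets verification-free checkers banned.
    return [s for s in snippets if classify_fast(s) and verify_thorough(s)]

candidates = [
    'os.system(user_input)',   # flagged by classifier, confirmed by verifier
    'os.system("ls")',         # flagged, but rejected by the verifier
    'print("hello")',          # never flagged
]
print(scan(candidates))  # -> ['os.system(user_input)']
```

Without `verify_thorough`, all three of the first two flagged snippets would be reported; the verification step is what separates a usable tool from a noise generator.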