Comment by fbilhaut
6 hours ago
Indeed. What's interesting (and not so common) about models like GLiNER is that they are far lighter than LLMs while preserving comparable (if not better) quality and some zero-shot ability (here, with respect to the entity classes). That matters, because most "traditional" approaches (including Transformer-based ones) are more or less supervised at fine-tuning time. Icing on the cake: you avoid all the problems that arise when you try to get structured output from an LLM (in this case, structured NER output).