Comment by wongarsu
9 hours ago
If you just want to ingest varied data into a consistent format, qwen2.5vl:7b works well (better than qwen3vl in my use cases). The ollama version is quantized, perfectly adequate, and runs on normal consumer hardware (even more so if you don't care about speeds that feel interactive).
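As a rough sketch of what that kind of ingestion could look like, here is one way to call a locally running Ollama server over its documented `/api/chat` REST endpoint and ask the model for a fixed JSON record. The prompt, image path, and output fields are illustrative, and it assumes the model has already been pulled and the server is on its default port.

```python
import base64
import json

import requests

# Assumes `ollama pull qwen2.5vl:7b` has been run and the server is
# listening on Ollama's default port 11434.
OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "qwen2.5vl:7b"


def extract_fields(image_path: str) -> dict:
    """Ask the vision model to turn one document image into a consistent JSON record."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    payload = {
        "model": MODEL,
        "stream": False,
        "format": "json",  # Ollama's JSON mode keeps the output machine-parseable
        "messages": [
            {
                "role": "user",
                "content": (
                    "Extract the title, date, and total amount from this document. "
                    "Respond only with JSON using the keys: title, date, total."
                ),
                "images": [image_b64],  # images are passed as base64 strings
            }
        ],
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=300)
    resp.raise_for_status()
    return json.loads(resp.json()["message"]["content"])


if __name__ == "__main__":
    print(extract_fields("invoice.png"))  # hypothetical input file
```

The same request works against any other vision model tag you have pulled; only the `model` field changes.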