Comment by adam_patarino 12 hours ago
Or you could use a local model where you're not constrained by tokens. Like rig.ai
dostick 11 hours ago
How is your offering different from local ollama?
adam_patarino 9 hours ago
It's batteries included. No config.
We also fine-tuned and did RL on our model, developed a custom context engine, trained an embedding model, and modified MLX to improve inference.
Everything is built to work with each other, so it's more like an Apple product than Linux: less config, but better optimized for the task.
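For context on the comparison: local Ollama exposes an HTTP API on localhost:11434, and "config" there mostly means picking a model and shaping the request yourself. A minimal sketch of what that request body looks like ("llama3" is a placeholder model name, not something from this thread):

```python
import json

# Ollama's local server listens on http://localhost:11434 by default.
# This builds the JSON body for its /api/generate endpoint.
# "llama3" is an assumed/placeholder model name.
payload = {
    "model": "llama3",
    "prompt": "Why is the sky blue?",
    "stream": False,  # ask for a single response instead of a token stream
}
body = json.dumps(payload)

# To actually send it you would need a running Ollama server, e.g.:
# urllib.request.urlopen(
#     urllib.request.Request(
#         "http://localhost:11434/api/generate",
#         data=body.encode(),
#         headers={"Content-Type": "application/json"},
#     )
# )
```

That model/prompt/stream plumbing is roughly the kind of setup a "batteries included, no config" product would be hiding from the user.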