adam_patarino 10 hours ago
Or you could use a local model where you're not constrained by tokens. Like rig.ai

dostick 9 hours ago
How is your offering different from local ollama?
adam_patarino 7 hours ago
It's batteries-included. No config.
We also fine-tuned and did RL on our model, developed a custom context engine, trained an embedding model, and modified MLX to improve inference.
Everything is built to work together. So it's more like an Apple product than Linux: less config, but better optimized for the task.