Comment by ainch
1 day ago
As someone in ML who's interested in performance, I'm keen for Mojo to succeed - especially the prospect of mixing GPU and CPU code in the same language. But I do wonder if the changes they're making will dissuade Python devs. The last time I booted it up, I tried to do some basic string manipulation just to test stuff out, but spent an hour puzzling out why `var x = 'hello'; print(x[3])` didn't work, and neither did `len(x)` (turns out they'd opted for more specific byte-vs-codepoint representations, but the docs contradicted the actual implementation).
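For reference, plain Python draws the same byte-vs-codepoint distinction, just with codepoints as the indexing default, which is roughly what I expected Mojo to mirror:

```python
s = "café"
# Python indexes strings by Unicode codepoint...
assert len(s) == 4
assert s[3] == "é"
# ...while the UTF-8 byte representation is longer ("é" is 2 bytes).
assert len(s.encode("utf-8")) == 5
```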
Hopefully they get Mojo to a good place for more general ML, but at the moment it still feels quite limited - they've actually deprecated some of the nice builtins they had for Tensors etc... For now I'll stick with JAX and check in periodically, fingers crossed.
I still don’t understand why we lack a language that will take uncomplicated computation heavy code and turn it into SIMD / multi thread / multiprocessing / GPU code with minimal additional syntax.
Surely this is the sort of thing compiler / language design nerds dream about?
It doesn’t have to guarantee efficiency or provide cutting edge performance in any context … it should just exist!
My understanding is that we could build such a language … but it hasn't caught the fancy of someone who could do it.
Still a bit early but I'm working on kiwi, a k-dialect that can lower to Apple MLX.
Currently supports CPU and GPU on macOS and CPU on linux.
https://kiwilang.com
https://github.com/kiwi-array-lang/kiwi
Kiwi runs computations on small dense arrays in its own runtime; when they are larger it lowers to MLX on CPU, and eventually to MLX on GPU when it's worth it.
As a user you don't have to change any code, you just write k.
I'm sure there are other languages designed to take advantage of modern GPUs.
But even with just SIMD you can get quite far with array-oriented code, and many array-language implementations will make use of it (BQN, ngn/growler/k, goal; ktye k has a version with SIMD support, …)
Thanks for sharing, this is neat!
I’ve yet to find a language that does SIMD / multithreading / GPU with minimal tweaks, let alone multiprocessing.
Both ahead-of-time and JIT compilers often perform autovectorization of tight loops. The problem is that many hot loops are not simple loops, and in particular a lot of source code is written with sequential dependencies that can't be modeled in SIMD code. Unless undefined behavior in C/C++ gives them license, most compilers will refuse to autovectorize when doing so would slightly change your code's behavior in hard-to-understand ways.
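A toy sketch of the distinction, in NumPy-flavoured Python (function names are just illustrative): an elementwise map has independent iterations, while a loop-carried dependency like an exponential moving average forces sequential evaluation:

```python
import numpy as np

def scale(a):
    # Each output element depends only on its own input element,
    # so this maps cleanly onto SIMD lanes.
    return a * 2.0

def ewma(a, alpha=0.5):
    # Loop-carried dependency: step i needs the result of step i-1,
    # and float rounding means the operations can't be freely reordered,
    # so a compiler can't naively turn this into SIMD code.
    out = np.empty_like(a)
    acc = 0.0
    for i, x in enumerate(a):
        acc = alpha * x + (1.0 - alpha) * acc
        out[i] = acc
    return out
```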
Surely a high-level language can own the contract of making sane choices about when to auto-vectorize and when not to (or just auto-vectorize inefficiently; that's fine too!)
3 replies →
Intel's ISPC is a compiler for a C-superset language that targets CPU SIMD and GPUs.
A beautiful find! It's, what, 12+ years old at this point?
Definitely the closest thing so far (it doesn't do multiprocessing), but it does seem to do SIMD / multithreading and GPU auto-parallelizing!
Any idea why it’s so little known?
If you're happy with NumPy's API, then surely JAX is exactly what you're looking for.
JAX can't do what Numba can do, for example. I just want one way to write simple, math-y code like you normally would and have it automagically converted to run with one of the above approaches.
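Concretely (a toy sketch, with illustrative names): the "write it like you normally would" version is a plain loop, which Numba's `@njit` can compile directly, while NumPy/JAX-style frameworks want you to rewrite it in whole-array form first:

```python
import numpy as np

def smooth_loop(a):
    # The "natural" imperative version: Numba can compile this loop as-is;
    # JAX would require restructuring it into pure array operations.
    out = a.copy()
    for i in range(1, len(a) - 1):
        out[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0
    return out

def smooth_array(a):
    # The whole-array rewrite that NumPy/JAX-style frameworks expect.
    out = a.copy()
    out[1:-1] = (a[:-2] + a[1:-1] + a[2:]) / 3.0
    return out
```

Both compute the same result; the complaint is that you have to do the rewrite yourself instead of the compiler doing it for you.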
That’s what compilers and high level languages are supposed to be for!
>I still don’t understand why we lack a language that will take uncomplicated computation heavy code and turn it into SIMD / multi thread / multiprocessing / GPU code with minimal additional syntax.
It already (partly) exists: the D language. By default it's garbage collected (GC), but it can also be programmed without GC, or in a hybrid style. It's modern, backward compatible with C, and included in GCC.
D's linear algebra system, Mir GLAS, is a standalone BLAS implementation written directly in D [2]. It was already shown to be faster than other widely used conventional BLAS libraries like OpenBLAS back in 2016, about ten years ago!
The popular OpenBLAS includes the Fortran-based LAPACK (yes, you read that right: Fortran), and it is used by almost all current data-processing languages: Matlab, Julia, Rust, and also Mojo [1].
Interestingly, there is a very early-stage standalone BLAS implementation written directly in Mojo, namely mojoBLAS, similar to Mir GLAS, started very recently [3].
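For a sense of where these libraries sit in the stack: in NumPy, the `@` operator typically dispatches matrix multiplication to whatever BLAS GEMM routine the NumPy build links against (often OpenBLAS):

```python
import numpy as np

# np.matmul / the @ operator delegate to the GEMM routine of whatever
# BLAS library this NumPy build was compiled against (often OpenBLAS).
a = np.arange(6.0).reshape(2, 3)
b = np.arange(6.0).reshape(3, 2)
c = a @ b  # a 2x2 result computed by the underlying BLAS
```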
>Surely this is the sort of thing compiler / language design nerds dream about?
You can say this again.
Especially on the GC side of a programming language, since the SIMD / multi-thread / multiprocessing / GPU machinery can be abstracted away.
Actually, someone recently proposed VGC, a virtualized garbage collector for Python implemented in C++ for heterogeneous GC [4],[5]. However, the current evaluation excludes JIT compilation, AOT optimization, SIMD acceleration, and GPU offloading.
[1] OpenBLAS:
https://en.wikipedia.org/wiki/OpenBLAS
[2] Numeric age for D: Mir GLAS is faster than OpenBLAS and Eigen:
http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/...
[3] mojoBLAS:
https://github.com/shivasankarka/mojoBLAS
[4] Virtual Garbage Collector (VGC): A Zone-Based Garbage Collection Architecture for Python's Parallel Runtime:
https://arxiv.org/abs/2512.23768
[5] VGC-for-arxiv:
https://github.com/Abdullahlab-n/VGC-for-arxiv
I don't think Mojo depends on OpenBLAS or any other BLAS implementation. I remember that they took a lot of pride in the early days in how linalg primitives like matmul, written completely in Mojo, were faster than MKL, OpenBLAS, and other implementations.
Delightful, thank you! Would love to see a version of D that auto-vectorizes to Vulkan or something.
Mojo is cool but I just don't understand the python backwards compat thing. They're holding themselves back with that.
All the flaws I can think of in Kotlin are due to the Java compatibility. They could've made it work here by being more explicit, but the way it currently works seems doomed.
> All the flaws I can think of in Kotlin are due to the Java compatibility.
All the use of Kotlin in industry is due to Java compatibility. Otherwise Kotlin would have ~0% market share.
Mojo is NOT Python compatible (although they initially wanted it to be). So they got all downsides without the upsides.
4 replies →
There is unfortunately likely a lot of truth to this. I like Kotlin, but, anecdotally, I've only ever chosen it because I needed the JVM.
I'm pretty sure that they have decided that backwards-compat is not the best path for Mojo. Matter of fact, the following is the _last_ item on the roadmap on the home page:
> Supporting more of Python's dynamic features like classes, inheritance, and untyped variables to maximize compatibility with Python code.
What's more, note how it says "to maximize compatibility" not "to achieve full compatibility."
Same story with C and Objective-C, C and C++, JavaScript and TypeScript, Java and Scala, Java and Clojure,.....
Yes, the underlying platform they based their compatibility on is the reason they got some design flaws, some more than others.
However, that compatibility is the reason they won wide adoption in the first place.
They coulda made it Scala!
> Mojo is cool but I just don't understand the python backwards compat thing. They're holding themselves back with that.
In reality I think they've dropped that pretty hard. Literally you can't even get the length of a string with `len(s)` in the latest release. They also removed negative indexing, which I find baffling and frustrating. The roadmap does say they don't intend to have any "syntax sugar" until later in the implementation, but negative indexing is such a core part of what makes Python so much nicer to work with compared to say C++...
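For reference, the Python sugar in question is just length-relative indexing:

```python
s = "hello"
# Negative indices count from the end: s[-k] is sugar for s[len(s) - k].
assert s[-1] == s[len(s) - 1] == "o"
# It composes with slicing, too.
assert s[-4:] == "ello"
```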
>As someone in ML who's interested in performance, I'm keen for Mojo to succeed - especially the prospect of mixing GPU and CPU code in the same language. But I do wonder if the changes they're making will dissuade Python devs.
Unless it's open-sourced, it's a moot point, as most Python devs won't come anyway.
https://mojolang.org/docs/roadmap/#contributing-to-mojo
> We're committed to open-sourcing all of Mojo, but the language is still very young and we believe a tight-knit group of engineers with a common vision moves faster than a community-driven effort. So we will continue to plan and prioritize the Mojo roadmap within Modular until more of its internal architecture is fleshed out.
I hope they stick to their original promise. And the 1.0 release would be a great time to deliver this.
> but the language is still very young and we believe a tight-knit group of engineers with a common vision moves faster than a community-driven effort.
This is a false dichotomy.
For years Golang was developed in the open but strictly moved on the vision of its creators rather than being "community-driven". Many other venerable open source projects don't involve the community in serious strategy discussions. The community mainly acts as a bug finder/fixer. Mojo could do the same: be open source but choose its own priorities internally.
I'm guessing that Modular is still looking for a monetization strategy for Mojo. Keeping important parts of Mojo proprietary at this stage surely helps (nothing wrong with that).
But I feel the era of the proprietary programming language play is over. Unless you also make hardware (which the Mojo folks don't), it's going to be tough.
Indeed, I agree with this 100%.
1 reply →
Open source does not mean open community. You can just throw tarballs over the wall.
This is exactly how the open sourcing of Swift went, so I imagine it will be the same.
> We're committed to open-sourcing all of Mojo
Translated from corporatese, it means "it will never happen."
4 replies →
This is a bit ironic, given that people seem to have no problem using CUDA all over the place... Plus they promise to open source with the 1.0 release. We'll see...
I don't see irony there. We're locked into CUDA due to past decisions, and with new decisions we don't want to repeat that mistake.
CUDA won because AMD and Intel made a mess of OpenCL, and Khronos had no vision to support anything beyond a C99 dialect until it was too late.
It doesn't matter that it was closed when the alternatives were much worse.
3 replies →
I'm really not sure that's true... I can't think of a single Python dev I've worked with who cared about open source. All they cared about was the language being easy and free to use.
The people who write the libraries care; why do you think we're writing ML code in Python and not MATLAB?
3 replies →
I think the plan is to open-source the compiler with 1.0, which is expected this summer, so in ~3-4 months' time.
It does almost seem like they're trying to recreate the Nim programming language in this regard.