Comment by skrrtww
9 hours ago
This looks like slop. The README is full of emojis and kind of incoherent, there are no implementation details, it claims to have a Metal backend that doesn't seem to exist, etc.
The dependency list is also...something: https://github.com/J-x-Z/cocoa-way/tree/main/vendor
This is definitely not worth using. It doesn't even say what hypervisor it's using. Is it using QEMU? Docker? Podman? Lima? Colima?
And also this chart is super weird:
A standard VM will always be the easiest to set up by far, and latency should be the same across all four; after all, it's a VM running on your local machine. Honestly, I don't even know what it means by "Latency".
I also looked at some of the code, and it's using OpenGL 3.3 Core, which is... super old. But it makes sense in the context of this being LLM-generated, since most of its training data is probably OpenGL 3.3 Core code.
Overall this project is very strange. It makes me feel more confident in my skills; AI isn't all that great. It's all hype. Sure, you can get to the frontpage of HN, and if you're Peter Steinberger you can get acquired by OpenAI for a billion dollars. But that's about it. The code isn't getting any better.
This reminds me of that C-compiler-in-Rust publicity stunt by Anthropic. There's no substance. It's just a headline.
While I agree with the rest of your comment, they do mention they use OrbStack as their hypervisor in their demo video.
Gotcha, thanks for that info. Yeah, that's insane: you have to read the description of a YouTube video to understand what a project on GitHub is doing. There's no architecture documentation here.