
Comment by simonw

19 hours ago

Clarification added later: One of my key interests at the moment is finding ways to run untrusted code from users (or generated by LLMs) in a robust sandbox from a Python application. MicroQuickJS looked like a very strong contender on that front, so I fired up Claude Code to try that out and build some prototypes.

I had Claude Code for web figure out how to run this in a bunch of different ways this morning - I have working prototypes of calling it as a Python FFI library (via ctypes), as a Python compiled module and compiled to WebAssembly and called from Deno and Node.js and Pyodide and Wasmtime https://github.com/simonw/research/blob/main/mquickjs-sandbo...

PR and prompt I used here: https://github.com/simonw/research/pull/50 - using this pattern: https://simonwillison.net/2025/Nov/6/async-code-research/
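The ctypes route can be sketched roughly as follows. Note the library name `libmquickjs.so` and the `mjs_*` symbol names are illustrative placeholders, not the engine's actual API (the real bindings are in the linked repo); the function simply returns None when the shared library isn't present.

```python
import ctypes

def eval_js(source, lib_path="./libmquickjs.so"):
    """Evaluate a JavaScript string via a shared-library build of the
    engine, returning the result as a string, or None if the library
    cannot be loaded. The symbol names (mjs_new_context, mjs_eval,
    mjs_free_context) are hypothetical placeholders for illustration.
    """
    try:
        lib = ctypes.CDLL(lib_path)
    except OSError:
        return None  # library not built or not on this machine

    # Declare signatures so ctypes marshals arguments correctly.
    lib.mjs_new_context.restype = ctypes.c_void_p
    lib.mjs_eval.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
    lib.mjs_eval.restype = ctypes.c_char_p
    lib.mjs_free_context.argtypes = [ctypes.c_void_p]

    ctx = lib.mjs_new_context()
    try:
        result = lib.mjs_eval(ctx, source.encode("utf-8"))
        return result.decode("utf-8") if result is not None else None
    finally:
        lib.mjs_free_context(ctx)

print(eval_js("1 + 1"))  # None unless libmquickjs.so has been built
```

The same declare-signatures-then-call pattern applies whatever the real entry points turn out to be; the compiled-module and WASM routes trade this boilerplate for a build step.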

Down to -4. Is this generic LLM-dislike, or a reaction to perceived over-self-promotion, or something else?

No matter how much you hate LLM stuff I think it's useful to know that there's a working proof of concept of this library compiled to WASM and working as a Python library.

I didn't plan to share this on HN but then MicroQuickJS showed up on the homepage so I figured people might find it useful.

(If I hadn't disclosed I'd used Claude for this I imagine I wouldn't have had any down-votes here.)

  • I think many subscribe to this philosophy: https://distantprovince.by/posts/its-rude-to-show-ai-output-...

    Your GitHub research/ links are an interesting case of this. On one hand, late AI adopters may appreciate your example prompts and outputs. But it feels like trivially reproducible noise to expert LLM users, especially if they are unaware of your reputation for substantive work.

    The HN AI pushback then drowns out your true message in favor of squashing perceived AI fluff.

    • Yeah, I agree that it's rude to show AI output to people... in most cases (and 100% if you don't disclose it.)

      My simonw/research GitHub repo is deliberately separate from everything else I do because it's entirely AI-generated. I wrote about that here: https://simonwillison.net/2025/Nov/6/async-code-research/#th...

      This particular case is a very solid use-case for that approach, though. There are a ton of important questions to answer: can it run in WebAssembly? How does it differ from regular JavaScript? Is it safe to use as a sandbox against attacks like runaway regular expressions?

      Those questions can be answered by having Claude Code crunch along, produce and execute a couple of dozen files of code and report back on the results.

      I think the knee-jerk reaction pushing back against this is understandable. I'd encourage people not to miss out on the substance.


  • Thank you for sharing.

    A lot of HN people got cut by AI in one way or another, so they seem to have personal beefs with AI. I am talking about not only job shortages but also general humbling of the bloated egos.

    • > I am talking about not only job shortages but also general humbling of the bloated egos.

      I'm gonna give you the benefit of the doubt here. Most of us do not dislike genAI because we were fired or "humbled". Most of us dislike it because of a) the terrible environmental impacts, b) the terrible economic impacts, and c) the general non-production-readiness of the results once you get past common, well-solved problems.

      Your stated understanding comes off a little bit like "they just don't like it because they're jealous".

    • I'm constantly encountering this "bloated ego" argument every time the narrative is steered to prevent monetary losses for AI companies.

      Especially so when it concerns AI theft of human music and visual art.

      "Those pompous artists, who do they think they are? We'll rob them of their egos".

      The problem is that these ego-accusations don't quite come from egoless entities.

  • It is because you keep over-promoting AI almost every day of the week in the HN comments.

    In this particular case AI has nothing to do with Fabrice Bellard.

    We can have something different on HN like what Fabrice Bellard is up to.

    You can continue AI posting as normal in the coming days.

    • Forget about the AI bit. Do you think it's interesting that MicroQuickJS can be used from Python via FFI or as a compiled module, and can also be compiled to WebAssembly and called from Node.js and Deno and from Pyodide running in a browser?

      ... and that it provides a useful sandbox in that you can robustly limit both the memory and time allowed, including limiting expensive regular expression evaluation?

      I included the AI bit because it would have been dishonest not to disclose how I used AI to figure this all out.
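The time-limit side of that sandboxing follows the interrupt-handler pattern engines like QuickJS expose: the host registers a callback the interpreter polls during evaluation, and signalling an interrupt aborts the script. A minimal pure-Python analogue of that pattern, using `sys.settrace` as the polling hook (illustration of the concept only; a trace hook is not a real security boundary):

```python
import sys
import time

class SandboxTimeout(Exception):
    """Raised when traced code exceeds its wall-clock deadline."""

def run_with_deadline(fn, seconds):
    """Run fn(), aborting with SandboxTimeout past the deadline.

    Plays the role of an engine's interrupt handler: the tracer is
    polled on call/line events, and raising from it aborts the
    traced code, just as returning an interrupt flag aborts a
    QuickJS evaluation.
    """
    deadline = time.monotonic() + seconds

    def tracer(frame, event, arg):
        if time.monotonic() > deadline:
            raise SandboxTimeout("deadline exceeded")
        return tracer  # keep tracing nested frames

    sys.settrace(tracer)
    try:
        return fn()
    finally:
        sys.settrace(None)
```

A memory ceiling works the same way conceptually: the engine's allocator checks a host-supplied limit on every allocation and fails the script when it is exceeded, rather than the host trying to police the script from outside.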


  • I was hoping you experimented with this! I am right there with you, hoping for an easier wasm sandbox for LLMs.

    (Keep posting please. Downvotes due to mentioning LLMs will be perceived as a quaint historic artifact in the not so distant future…)

    • On the contrary, it's quite possible that LLMs themselves will be perceived as a quaint historic artifact and join the ranks of mechanical Turks, zeppelins, Segways, Google Glass and blockchains.

  • I think the people interacting with this post are just more likely to appreciate the raw craftsmanship and talent of an individual like Bellard, and coincidentally might be more critical of the machinery that in their perception devalues it. I count myself among them, but didn’t downvote, as I generally think your content is of high quality.

  • Your tireless experimenting (and especially documenting) is valuable and I love to see all of it. The avant garde nature of your recent work will draw the occasional flurry of disdain from more jaded types, but I doubt many HN regulars would think you had anything but good intentions! Guess I am basically just saying.. keep it up.

  • I didn't downvote you. You're one of "the AI guys" to me on HN. The content of your post is fine, too, but, even if it was sketch, I'd've given you the benefit of the doubt.

Look at how others embed QuickJS and restrict its runtime for sensitive workloads [1]; the approach here should be similar.

But there are other ways, e.g. run the logic isolated within gvisor/firecracker/kata.

[1] github.com/microsoft/CCF under src/js/core

What is the purpose of compiling this to WebAssembly? Which WebAssembly runtimes exist where there is not already an easily accessible (and substantially faster) JS execution environment? I know wasmtime exists and is not tied to a JS engine like basically every other WebAssembly implementation, but users of wasmtime aren't barred from also depending on v8 or JSC. Usually WebAssembly is used to provide sandboxing, something a JS execution environment is already designed to provide, and is only needed when the code that requires sandboxing is native code, not JavaScript. It sounds like a good way to waste a lot of performance for some additional sandboxing, but I can't imagine why you would design a system that way if you could choose a different (already available and higher-performance) sandbox.

  • I want to build features - both client- and server-side - where users can provide JavaScript code that I then execute safely.

    Just having a WebAssembly engine available isn't enough for this - something has to take that user-provided string of JavaScript and execute it within a safe sandbox.

    Generally that means you need a JavaScript interpreter that has itself been compiled to WebAssembly. I've experimented with QuickJS itself for that in the past - demo here: https://tools.simonwillison.net/quickjs - but MicroQuickJS may be interesting as a smaller alternative.

    If there's a better option than that I'd love to hear about it!

    • This is generally the purpose of JavaScript execution environments like v8 or JSC (or QuickJS, although I understand not trusting that as a sandbox to the same degree). They are specifically intended for executing untrusted scripts (e.g. in web browsers). WebAssembly's sandboxing comes from JS sandboxing, since it was originally a feature of the same programs for the same reasons. Wrapping one sandbox in another is what I'm surprised by.


  • As I noted in another comment Figma has used QuickJS to run JS inside Wasm ever since a security vulnerability was discovered in their previous implementation.

    In a browser environment it's much easier to sandbox Wasm successfully than to sandbox JS.

    • That’s very interesting! Have they documented the reasoning for that approach? I would have expected iframes to be both a simpler and a faster sandboxing mechanism, especially in compute-bound cases. Maybe the communication overhead is too high in their workload?

      EDIT: found this from your other comment: https://www.figma.com/blog/an-update-on-plugin-security/ they do not address any alternatives considered.