Comment by chc4

8 months ago

24-bit integers and floats, no array datatype, and a maximum 4 GB heap of nodes are very harsh restrictions, especially for any workloads that would actually want to be running on a GPU. The limitations in the HVM2 whitepaper about unsound evaluation around closures, and infinite loops because it evaluates both sides of a conditional without any short-circuiting, are also extremely concerning.

Before you reply "these are things we can address in the future": that doesn't matter. Everyone can address everything in the future. These are currently hard technical barriers to its use, with no way of knowing the level of effort they will require or the knock-on effects, especially since some of these issues have been "we can fix that later" for ten years.

I also highly recommend changing your benchmark numbers from "interactions per second" to a standard measurement like FLOPS. No one else on earth knows how many of those interactions are pure overhead from your evaluation semantics rather than useful work. They come across as an attempt to wow an audience with big numbers, not an apples-to-apples comparison with other languages.

So use a metric that makes absolutely no sense in the given domain, instead of one that is completely correct, sensible, accurate, established in the literature, and vastly superior in context? What even is a FLOPS in the context of Interaction Net evaluation? These things aren't even interchangeable.

  • The fact that you don’t know the answer to this question, and don’t even seem to think it is relevant, is chilling.

    People want to be able to ground your work—which you are claiming is the “parallel future of computation”—in something familiar. Insulting them and telling them their concerns are irrelevant just isn’t going to work.

    I would urge you to think about what a standard comparison versus Haskell would look like. Presumably it would be something that dealt with a large state space, but also top-down computation (something you couldn't easily do with matrices). A big example might be taking a giant Haskell benchmark (given the setting of inets it seems like a natural fit) that is implemented in a fairly optimal way, both algorithmically and with respect to performance, and comparing directly on large inputs.

    Sorry to trash on you here, not trying to come across as insulting, but I agree that “reductions per second” is meaningless without a nuanced understanding of the potentially massive encoding blowup that compilation introduces.

    We want to believe, but the claims here are big

    • I really appreciate the feedback, but the claim is that the performance scales linearly with cores, and it does. Also that it runs on GPUs, and it does. Yet asking what its "floating point operations per second" would be is nonsense, because it is not doing floating point computations; it is doing interactions. Thus the "interactions per second" term, which I didn't invent; it is the term used in the domain's literature.

      I truly want to help here, but that is like asking us to tell you how many gallops per second our car does. It just makes no sense in context. If I did invent some conversion, I would be lying, and that would be much worse than using an unfamiliar term. The way to compare across languages is to benchmark and report on time, which is like "horsepower" in that sense, as it applies to both domains.
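The cross-language comparison proposed above, report wall-clock time on the same workload rather than a runtime-specific counter, can be sketched minimally. The workload (naive recursive Fibonacci) and all names here are illustrative assumptions, not taken from HVM's benchmark suite:

```python
import time

def fib(n: int) -> int:
    # Deliberately naive recursive workload, a common cross-language
    # benchmark for functional runtimes (illustrative choice).
    return n if n < 2 else fib(n - 1) + fib(n - 2)

start = time.perf_counter()
result = fib(25)
elapsed = time.perf_counter() - start

# Wall-clock time is directly comparable across languages and runtimes,
# unlike implementation-specific counters such as "interactions per second".
print(f"fib(25) = {result} in {elapsed:.4f}s")
```

The same program, timed the same way in Haskell, C, or HVM, yields numbers that can be compared directly, which is the point both sides of the thread seem to agree on.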