Comment by antonvs

19 days ago

Traditional FP has had functional equivalents to iterators since before most imperative languages existed. LISP had a map function (MAPCAR) in its earliest versions, in the 1950s. Later that was generalized to folds, and the underlying structures were generalized from linked lists to arbitrary “traversable” types, including unbounded streams.
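
For anyone who wants the shape of that lineage in modern terms, here's a rough Python sketch - not LISP, just the nearest familiar thing:

```python
from functools import reduce
from itertools import count, islice

xs = [1.0, 2.0, 3.0]

# map: apply a function to every element (MAPCAR's modern descendant)
squares = list(map(lambda x: x * x, xs))           # [1.0, 4.0, 9.0]

# fold: the generalisation - collapse a structure with a binary operation
total = reduce(lambda acc, x: acc + x, xs, 0.0)    # 6.0

# the same map works over an unbounded stream, consumed lazily
stream = map(lambda x: x * x, count(1))            # 1, 4, 9, 16, ...
first_five = list(islice(stream, 5))               # [1, 4, 9, 16, 25]
```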

The language in the OP is a special-purpose language for data parallelism, targeting GPUs, and explicitly described as “not intended to replace existing general-purpose languages” (a quote from the language’s home page). As such, it has requirements and constraints that most languages don’t have. Looking at its design through a general-purpose-language lens doesn’t necessarily make sense.

That's not really the lens I'm looking at it through. It's just entertaining that we're still discussing array<->function equivalence in the year of our lord 2026, long after every mainstream language supports said equivalence in practice.
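
To be clear about the “in practice” reading I mean: an array is just a function from valid indices to values, and a function over a finite index range can be tabulated back into an array. A throwaway Python sketch of that reading:

```python
# an array read as a function from indices to values
a = [10, 20, 30]
f = lambda i: a[i]              # f(1) == 20

# a function over a finite index range tabulated into an array
g = lambda i: i * i
b = [g(i) for i in range(3)]    # [0, 1, 4]
```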

  • I suspect there are two points you haven't fully understood:

    1. The equivalence being discussed is not supported in "every mainstream language" in practice. If you disagree, read https://news.ycombinator.com/item?id=46699933 for a good overview of the equivalence in question and explain how you think mainstream languages support that.

    2. The current discussion is in the context of a language targeting CUDA. Currently, very few languages aside from C++ have good CUDA support, and C++ certainly doesn't achieve that by having its arrays be equivalent to functions "in practice" or in any other sense.

    Just as an example of what OP is addressing, FTA:

    > "To allow for efficient defunctionalisation, Futhark imposes restrictions on how functions can be used; for example banning returning them from branches. These restrictions are not (and ought not be!) imposed on arrays, and so unification is not possible. Also, in Futhark an array type such as [n]f64 explicitly indicates its size (and consequently the valid indices), which can even be extracted at run time. This is not possible with functions, and making it possible requires us to move further towards dependent types - which may of course be a good idea anyway."

    As such, it seems to me your comments about this are wildly off the mark.

    • > very few languages aside from C++ have good CUDA support

      CUDA happens to be (loosely) source-compatible with C++, but I'm not sure that's the same as saying C++ has good CUDA support. The majority of C++ code does not compile to CUDA (although the inverse is often true).

      > C++ certainly doesn't achieve that by having its arrays be equivalent to functions "in practice" or in any other sense

      The syntax may not be unified, but what else do you think iterators are for? They are an abstraction to let us ignore pesky details like the underlying storage of arrays, and instead treat them like any other generator function. This is perhaps more evident in a language like Python where generators and iterators are entirely interchangeable.
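
      For instance, a consumer written against the iterator protocol can't tell a stored array from values generated on demand - a quick Python sketch:

      ```python
      def consume(it):
          # sees only the iterator protocol, never the underlying storage
          return sum(x * x for x in it)

      stored = [1, 2, 3, 4]        # an actual array sitting in memory

      def generated():             # computed on demand, nothing stored
          for n in range(1, 5):
              yield n

      assert consume(stored) == consume(generated())   # 30 either way
      ```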

      > in Futhark an array type such as [n]f64 explicitly indicates its size (and consequently the valid indices), which can even be extracted at run time. This is not possible with functions

      These are specific oddities of Futhark - we have languages (e.g. C/C++) where the size of an array is not knowable, and we have languages where the range of inputs to a function is knowable (at least for numeric inputs, e.g. Ada).
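
      To put the contrast in question in concrete Python terms - the size travels with the array value, while a plain function carries no queryable description of its valid inputs (Ada-style range types being the counterexample above):

      ```python
      xs = [0.0] * 8
      print(len(xs))          # 8 - the size is a runtime-queryable property of the value

      f = lambda i: xs[i]     # the same data viewed as a function...
      # ...but nothing on f itself says which i are valid; len(f) raises TypeError
      ```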

      > Futhark imposes restrictions on how functions can be used; for example banning returning them from branches. These restrictions are not (and ought not be!) imposed on arrays

      Again, this is a case of Futhark's own design decisions restricting it. This is only a problem because their arrays carry around runtime size information - if they didn't have that, one wouldn't be able to usefully return them from branches anyway. Alternatively, there are plenty of ML-family languages where you can return a function from a branch.
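
      Returning a function from a branch is routine in those languages; a trivial Python rendering of the shape (the ML versions look essentially the same, just statically typed):

      ```python
      def pick(use_step):
          # two different functions returned from the two branches
          if use_step:
              return lambda x: 0.0 if x < 0 else 1.0
          return lambda x: 2.0 * x

      f = pick(True)
      print(f(-1.0), f(3.0))   # 0.0 1.0
      ```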