Comment by tossandthrow
3 days ago
Arguably FP really is about eliminating side effects.
The research grew out of lambda calculus, where a computation is defined in terms of functions (remember: Functional programming).
Side effects can only be realized by exposing them through the runtime, the standard library, etc. How one does that is a value judgement, but if a term is not idempotent, then arguably you no longer have a functional programming language.
You gotta ask the question: why does FP care about eliminating side effects? There are two possible answers:
1. It's just something weird that FP people like to do; or
2. It's in service of a larger goal, the ability to reason about programs.
If you take the reasonable answer---number 2---then the conclusion is that effects are not a problem so long as you can still reason about programs containing them. Linear / affine types (like Rust's borrow checker) and effect systems are different ways to accommodate effects into a language and still retain some ability to reason about programs.
No practical program can be written without effects, so they must be in a language somewhere.
More here: https://noelwelsh.com/posts/what-and-why-fp/
> No practical program can be written without effects, so they must be in a language somewhere.
Or rather, very few. It is like programming languages that trade Turing-completeness for provability, but worse.
In theory, one could imagine a program that adds 2 matrices in a purely functional manner, and you would have to skip outputting the result to stay side-effect-free. Yet it is running on a computer, so the program does affect the computer's internal state, notably the RAM in which the result is visible somewhere. One could dump that memory from outside the program/process itself to get the result of the computation. That would be quite weird, but on the other hand, normal programs sometimes do something like that by communicating through shared memory.
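(A minimal Haskell sketch of that thought experiment, with names of my own invention like `addMatrices`: the sums are computed and, once forced, sit somewhere in the process's memory, but the program never writes them out.)

```haskell
-- Purely functional matrix addition: the result ends up in memory once it
-- is forced, but the program never outputs anything.
type Matrix = [[Int]]

addMatrices :: Matrix -> Matrix -> Matrix
addMatrices = zipWith (zipWith (+))

main :: IO ()
main =
  let result = addMatrices [[1, 2], [3, 4]] [[5, 6], [7, 8]]
  in  sum (map sum result) `seq` return ()  -- force the additions, print nothing
```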
It seems that the notion of side effects must be understood relative to a predefined system, just like in physics. One wouldn't count heat dissipation or power consumption as a side effect of such a program, although side-channel attackers have a word to say about this.
(from your link:) > Both languages allow mutation but it's up to us to use it appropriately.
This is the crux of the problem. If you add a C example to your TypeScript and Scala examples, people will throw stones at you for that statement, out of instinct. Mutation is "considered harmful" by some because it can be accidentally misused; the intent of banning it is to prevent that accidental misuse.
> It seems that the notion of side effects must be understood relative to a predefined system, just like in physics. One wouldn't count heat dissipation or power consumption as a side effect of such a program, although side-channel attackers have a word to say about this.
Absolutely! When you really dig into it, the concept of an effect is quite ill-defined. It comes down to whatever some observer considers important. For example, from the point of view of substitution, quicksort and bubble sort are equivalent, but most people would argue that they are very much not.
The preface of https://www.proquest.com/openview/32fcc8064e57c82a696956000b... is quite interesting.
If you start with lambda calculus you don't have effects in the first place, so there's nothing to eliminate. Lambda calculus and friends are perfectly reasonable languages for computation in the sense of calculation.
A better way to think about general-purpose functional programming is that it's a way to add effects to a calculation-oriented foundation. The challenge is to keep the expressiveness, flexibility and useful properties of lambda calculus while extending it to writing interactive, optimizable real-world programs.
To retain referential transparency, we basically need to ensure that a function, given the same arguments, always returns the same result.
A simple way around this is to never give the same value to a function twice - i.e., use uniqueness types, which is the approach taken by Clean. A uniqueness type, by definition, can never be used more than once, so functions which take a uniqueness type as an argument are referentially transparent.
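Roughly, sketched in Haskell syntax (a sketch only - Clean's type system actually enforces the single-use discipline, and these functions are stubs that perform no real I/O):

```haskell
-- Clean-style explicit world passing, sketched in Haskell. In Clean the
-- world type is unique, so reusing a world value is a type error; here
-- the single-use discipline is only a convention.
data World = World

readLine :: World -> (String, World)
readLine w = ("stub input", w)   -- stub: pretend we read a line

writeLine :: String -> World -> World
writeLine _ w = w                -- stub: pretend we wrote a line

echo :: World -> World
echo w0 =
  let (s, w1) = readLine w0      -- w0 is consumed exactly once
  in  writeLine s w1             -- w1 is consumed exactly once
```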
In Haskell, you never directly call a function with side effects - you only ever bind effects into the chain of actions that ultimately becomes `main`.
Functions with (global) side effects return a value of type `IO a`, and the behavior of IO is fully encapsulated by the monadic operations.
`return` lifts a pure value into `IO`, and `bind` sequences `IO` operations. Importantly, there cannot exist any function of type `IO a -> a` which escapes `IO`, as this would violate referential transparency. Since every effect must return `IO`, and the only thing we can do with an `IO` value is bind it, the eventual result of running the program must be an `IO` value, hence `main` returns a value of type `IO ()`.
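For example (a minimal sketch using only the Prelude):

```haskell
-- Sequencing effects with bind (>>=); nothing here ever escapes IO.
main :: IO ()
main =
  getLine >>= \name ->                          -- run getLine, pass its result on
  return ("Hello, " ++ name) >>= \greeting ->   -- return lifts a pure String into IO
  putStrLn greeting                             -- this final IO () is main's result
```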
So bind encapsulates side effects, effectively using a strategy similar to Clean's, where each `IO a` is a wrapper around some `State# RealWorld -> (# State# RealWorld, a #)`. Bind takes a value of type `IO a` as its first argument, consumes the input `State# RealWorld` value and extracts a value of type `a`, then feeds this value to the next function in the sequence of binds, which returns a new value of type `IO b` carrying a new `State# RealWorld`. Since `bind` enforces a linear sequencing of operations, each `RealWorld` is effectively a unique value that is never used more than once - even though uniqueness types themselves are absent from Haskell.
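The real definition lives in GHC's internals and uses unboxed types; a boxed toy model of the same shape (with made-up names `IO'`, `bindIO'`) looks roughly like this:

```haskell
-- Toy model of GHC's IO: a function that threads a world token.
data RealWorld' = RealWorld'

newtype IO' a = IO' (RealWorld' -> (RealWorld', a))

returnIO' :: a -> IO' a
returnIO' x = IO' (\w -> (w, x))

-- bind runs the first action, uses its result to pick the next one, and
-- passes the new world token along, so each token is used exactly once.
bindIO' :: IO' a -> (a -> IO' b) -> IO' b
bindIO' (IO' m) k = IO' (\w0 ->
  let (w1, a) = m w0
      IO' n   = k a
  in  n w1)
```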
Effect systems in FP languages give you precisely that: referential transparency.
Funny you mention that because that's exactly how my talk at ZuriHac 2025 started, video published just yesterday:
https://www.youtube.com/watch?v=RsTuy1jXQ6Y
Lisp always had side effects and mutability, and it's the canonical FP language, directly inspired by lambda calculus. To be fair, before Haskellers figured out monads, nobody even knew of any way to make an FP language that's both pure and useful.
> and it's the canonical FP language
Don't tell that to the Haskell crowd. I feel like at least half the time this topic comes up on HN someone jumps out of the woodwork claiming lisp isn't fp because it doesn't do something or other that Haskell does.
I doubt it's Haskellers saying that. In my experience Haskellers don't even think about Lisp at all.