Comment by corethree
2 years ago
Well, once you introduce points into the theoretical world of functions, composition no longer fully works; it stops making sense.
While a program in practice can have "points", an algebra of functions should be a theory like number theory: number theory deals with numbers only, and function theory deals with functions only.
I'm not so strict on this in practice; the function must eventually be called, and in the end the compositions converge onto a point. But if you want to apply algebraic theory to your functions, they need to be point-free.
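To make the contrast concrete, here is a minimal Haskell sketch (the language is my choice for illustration, nothing from the thread): the same function written in pointed style, where the argument appears explicitly, and in point-free style, where only functions and composition appear and algebraic laws like the associativity of `(.)` can be applied directly.

```haskell
-- Pointed style: the argument xs (the "point") is named explicitly.
sumOfSquares :: [Int] -> Int
sumOfSquares xs = sum (map (^ 2) xs)

-- Point-free style: only functions and composition, no named argument.
-- This is the form an "algebra of functions" can manipulate directly,
-- e.g. rewriting via (f . g) . h == f . (g . h) without mentioning inputs.
sumOfSquares' :: [Int] -> Int
sumOfSquares' = sum . map (^ 2)

main :: IO ()
main = print (sumOfSquares [1, 2, 3], sumOfSquares' [1, 2, 3])  -- prints (14,14)
```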
You can apply algebra to your functions while also acknowledging that there are interesting objects other than functions. It's called logic, and in my opinion, logic is really just algebra.
Number theory is not just numbers, by the way. It has plenty of functions as well, for example the Riemann zeta function.
>You can apply algebra to your functions while also acknowledging that there are interesting objects other than functions. It's called logic, and in my opinion, logic is really just algebra.
Every formal theory is built on logic; I haven't heard of any illogical formal theories. Logic isn't just algebra, it's everything.
I acknowledge there are interesting primitives other than functions. But does it make sense to formulate a theory with linked lists and dependency injection as its core primitives, and then build everything else in terms of those two concepts? That seems arbitrary. More than likely there are more fundamental building blocks we can use to develop a theory.
If we want to make a theory of computation, what is the core primitive that computes? A function. So it makes sense to use it as the core primitive of the theory.
>Number theory is not just numbers, by the way. It has plenty of functions as well, for example the Riemann zeta function.
How should I put this? In number theory, you have numbers as the instantiated primitive. Then you have rules for composing those numbers to form other numbers. Those rules are called "functions".
If we have a "function theory" of computation, the instantiated primitive is a function. Then we need rules for composing those functions to form other functions. Those rules are also called "functions".
Note the deliberate repetition in the two paragraphs above; it might clarify what I'm talking about. In a sense, the word "function" in the second paragraph is more meta... a function of functions.
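A short Haskell sketch of that meta level (the names are mine for illustration): `compose` is itself an ordinary function whose arguments and result are all functions, so the rule for combining the primitives lives at the same level as the primitives themselves.

```haskell
-- compose is a "function of functions": its two arguments and its
-- result are all functions, just as the rules of number theory take
-- numbers to numbers.
compose :: (b -> c) -> (a -> b) -> (a -> c)
compose f g = \x -> f (g x)

double, increment :: Int -> Int
double = (* 2)
increment = (+ 1)

main :: IO ()
main = print (compose double increment 10)  -- double (increment 10) == 22
```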
Anyway, the point here is that we want to formulate a theory with the fewest primitives and axiomatic concepts.
> Anyway, the point here is that we want to formulate a theory with the fewest primitives and axiomatic concepts.
A theory only makes sense with respect to some logic. So first, you need to define which logic you are using. That logic is your theory with the fewest primitives and axiomatic concepts.
Now, once you have that, you can turn your attention to other things built on top of it. If you are interested in functions only, sure: you can, for example, study the lambda calculus.
But don't pretend you are doing this so that you can reason about computer programs better. Sure, learning more about functions is useful, but numbers are also very important for computer programs. So are trees. And graphs. Expressing all of these just in terms of functions can be a fun exercise (pun intended, Church numerals anyone?). But instead of trying to express the important concepts of your program as functions and then studying them, maybe your time is better spent studying those concepts directly.
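Since the pun came up, here is a Haskell sketch of Church numerals, numbers expressed purely as functions; the names (`suc`, `toInt`) are illustrative, not standard library functions.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church numeral n is the function that applies its argument n times.
type Church = forall a. (a -> a) -> a -> a

zero :: Church
zero _ z = z

suc :: Church -> Church
suc n f z = f (n f z)

add :: Church -> Church -> Church
add m n f z = m f (n f z)

-- Convert back to an ordinary Int by counting applications of (+ 1).
toInt :: Church -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (add (suc zero) (suc (suc zero))))  -- prints 3
```

It works, but it also illustrates the point above: the encoding is a fun exercise, not obviously the clearest way to reason about the numbers themselves.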