Comment by nimih
7 hours ago
I remember having a similar sort of realization early in my career, when I was trying to implement some horribly convoluted business logic in SQL (I no longer remember the actual details, just the epiphany that resulted; I think it had something to do with proration of insurance premiums and commissions). I realized that if I simply precomputed the value of the function in question and shoved it into a table (requiring "only" a couple million rows or so), I could use a join in place of function application and be done with it. An obvious idea in retrospect, but the sudden dredging up, from deep within my memory, of the set-theoretic formulation of functions as simply collections of tuples was certainly startling at the time.
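In sketch form, the trick looks something like this (the table and column names here are hypothetical stand-ins, not the original system's, and the "expensive function" is assumed to have a small enough input domain to enumerate):

    -- Precompute the expensive function over its whole input domain.
    CREATE TABLE proration_factors (
        days_in_force INT NOT NULL,
        plan_id       INT NOT NULL,
        factor        DECIMAL(12,8) NOT NULL,
        PRIMARY KEY (days_in_force, plan_id)
    );

    -- Function application then becomes a join:
    SELECT p.policy_id,
           p.annual_premium * f.factor AS prorated_premium
    FROM policies p
    JOIN proration_factors f
      ON f.days_in_force = p.days_in_force
     AND f.plan_id       = p.plan_id;

The function is literally its set of tuples: each row is one (input, output) pair, and the join performs application by matching on the inputs.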
BTW this is extremely common in life insurance systems, where premiums, provisions, surrender values, and the like depend on formulas applied to mortality tables. The underlying data are themselves just tables indexed by age from 0 to 100, so many formulas end up with only about 100 possible outputs and are precomputed (or about 200 for combined probabilities, or for gender-specific tables).
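Concretely, the pattern might look like this (the layout and the formula are placeholders for illustration, not real actuarial math):

    -- Hypothetical mortality table: one row per age from 0 to 100.
    CREATE TABLE mortality (
        age INT PRIMARY KEY,
        qx  DECIMAL(8,6) NOT NULL  -- probability of death within the year
    );

    -- Evaluate the formula once per age; the result is a ~101-row table
    -- that every downstream query can simply join against.
    CREATE TABLE premium_factor_by_age AS
    SELECT age,
           1000 * qx / (1 - qx) AS premium_factor  -- placeholder formula
    FROM mortality;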
I've seen this as a "solution" to implementing a function for Fibonacci numbers. The array of all the Fibonacci numbers that fit into a 32-bit integer is not particularly large, so sticking it into a static local variable is easy to do.
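The same idea in SQL terms, to keep with the table theme (the commenter presumably means a static array in C or similar): assuming signed 32-bit integers, the entire function is just the 47 rows fib(0) through fib(46), which a recursive CTE can populate once.

    CREATE TABLE fib (n INT PRIMARY KEY, value INT NOT NULL);

    -- fib(46) = 1836311903 is the largest Fibonacci number below 2^31 - 1;
    -- the CTE accumulates in BIGINT so the intermediate fib(47) can't overflow.
    WITH RECURSIVE f(n, a, b) AS (
        SELECT 0, CAST(0 AS BIGINT), CAST(1 AS BIGINT)
        UNION ALL
        SELECT n + 1, b, a + b FROM f WHERE n < 46
    )
    INSERT INTO fib SELECT n, CAST(a AS INT) FROM f;

    -- "Calling" the function:
    SELECT value FROM fib WHERE n = 30;  -- 832040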