Comment by esafak
2 days ago
No, they are not. Model outputs can be discretized but the model parameters (excluding hyperparameters) are typically continuous. That's why we can use gradient descent.
Where are the model parameters stored and how are they represented?
On disk or in memory, as multidimensional arrays ("tensors" in ML speak).
Do we agree that these memories consist of a finite # of bits?
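A tiny sketch of the point being debated: the parameters occupy a finite number of bits (e.g. 32 per float32 value), yet for optimization we treat them as continuous and take gradient steps. The toy loss and step size here are illustrative assumptions, not anything from the thread.

```python
import numpy as np

# A toy "model": one weight vector stored as a float32 tensor.
rng = np.random.default_rng(0)
w = rng.normal(size=3).astype(np.float32)

# Each float32 parameter is 32 bits, so the storage is a finite # of bits:
total_bits = w.size * w.itemsize * 8
print(total_bits)  # 3 params * 32 bits = 96

# Yet for training we treat w as continuous: one gradient descent step
# on a simple quadratic loss L(w) = ||w - target||^2.
target = np.ones(3, dtype=np.float32)
loss_before = float(np.sum((w - target) ** 2))
grad = 2 * (w - target)   # analytic gradient of the loss
w -= 0.1 * grad           # small continuous-valued update
loss_after = float(np.sum((w - target) ** 2))
```

Both things are true at once: the representation is discrete (finite bits), while the optimization model of the parameters is continuous.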