
Comment by ndriscoll

4 days ago

'10 / 3 = 3' is either bad notation or wrong. It's not true under any usual definition of 10, 3, / or =. '3 = 3.0' on the other hand is perfectly reasonable in many circumstances. If you think 10/3 can equal 3 but not 3.0, you are either confused or confusing or both. What you mean to write is '≈', and when you do that, it's obvious that 3 and 3.0 are both usable in that sentence.

It is perfectly reasonable to define 3: ℕ = succ(succ(succ(zero))). It's also perfectly reasonable to define 3: ℝ as the image of succ(succ(succ(zero))): ℕ under the canonical embedding. Or you can define 3: ℚ with the obvious element. You can also define 3.0: ℚ or 3.0: ℝ as the obvious elements. If you were really a deviant, I suppose you could even define 3.0: ℕ, and people would roll their eyes, but everyone would understand you. Obviously, there are reasonable ways to define things so that `3 = 3.0` is a meaningful sentence (typechecks) and also literally true.
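The point that `3 = 3.0` can typecheck and be literally true can be made concrete. A minimal sketch in Lean 4 syntax (the inductive type is hand-rolled for illustration; the commented example assumes Mathlib's ℝ and its `norm_num` tactic):

```
-- Peano-style naturals, defined from scratch for illustration.
inductive Nat' where
  | zero : Nat'
  | succ : Nat' → Nat'

-- 3 : ℕ as succ(succ(succ(zero))).
def three : Nat' := .succ (.succ (.succ .zero))

-- In Lean itself, the literals 3 and 3.0 both elaborate to elements
-- of whatever type is expected (via OfNat / OfScientific), so with
-- Mathlib one can state and prove:
--
--   example : (3 : ℝ) = 3.0 := by norm_num
```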

Again, different conventions are used in different contexts. The "user" of mathematics should pick the conventions and notations that make sense for what they're doing to communicate what they're trying to say. That itself is an important lesson. The sigfig convention you learned in middle school isn't the word of God.

Not being aware enough of these things to be capable of musing about them is, I suppose, another issue with our education system.

If I ask for someone for 3 of something and they give me 3.001 of it, it's whatever. If I ask someone for 3.000 of something and they give me 3.001 of it, it's out of spec.
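One common reading of that convention, treating the implied tolerance as half a unit in the last written place, can be sketched as follows (this half-unit rule is an assumption for illustration, not a citation of any standard):

```python
def implied_tolerance(spec: str) -> float:
    """Implied tolerance: half a unit in the last written decimal place."""
    decimals = len(spec.split(".")[1]) if "." in spec else 0
    return 0.5 * 10 ** (-decimals)

# Asking for "3": anything within +/- 0.5, so 3.001 is fine.
print(implied_tolerance("3"))      # 0.5
# Asking for "3.000": +/- 0.0005, so 3.001 is out of spec.
print(implied_tolerance("3.000"))  # 0.0005
```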

  • Admittedly I only did this in school and it's been over 10 years, but I recall that when doing engineering drawings, we'd specify ± (or separate lower/upper tolerances in some situations). Using decimal places to indicate uncertainty was not a thing I believe I did after high school. Does any actual professional use decimal places and not an explicit ±?

    Similarly, we calculated those ± values using the chain rule/uncertainty propagation, not with the simple decimal place rules you learn as a kid. I assume no one serious uses the child rules when CAD software can just as easily use the real ones.
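The chain-rule propagation mentioned above can be sketched in a few lines: combine each input's uncertainty weighted by the partial derivative, in quadrature. The function names and the plate-area example are illustrative, and the partials are estimated numerically rather than symbolically:

```python
import math

def propagate(f, values, sigmas, h=1e-6):
    """First-order uncertainty propagation: combine sigma_i * df/dx_i
    in quadrature, with the partials estimated by forward differences."""
    f0 = f(*values)
    total = 0.0
    for i, sigma in enumerate(sigmas):
        bumped = list(values)
        bumped[i] += h
        total += (((f(*bumped) - f0) / h) * sigma) ** 2
    return math.sqrt(total)

# Area of a plate measured as 3.000 +/- 0.001 by 2.000 +/- 0.001:
area = lambda length, width: length * width
print(propagate(area, [3.000, 2.000], [0.001, 0.001]))  # ~0.0036
```

For the product case this reproduces the textbook formula sqrt((W·δL)² + (L·δW)²), which is what a decimal-place rule can only approximate.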

    • > we'd specify ± (or separate lower/upper tolerances in some situations)

      > we calculated those ± values using the chain rule/uncertainty propagation

      Yes, that's common in detailed engineering documents. It still doesn't change the fact that if I ask for 3.000 and you give me 3.001, I'm not going to consider that in-spec despite not having given a ±. It's assumed that if I wrote it out to that decimal place, I care about that level of precision.

      > Using decimal points to indicate uncertainty was not a thing I believe I did after high school

      Well, I'd imagine that since the topic of the lesson was understanding whole numbers at a basic level, this was a class at the high school level or below, more likely elementary or middle school. You know, the time when you did use decimal places to indicate precision. This person wasn't talking about losing points at their engineering job.