Comment by dilap
21 hours ago
This has already been explained many times, but it's so much fun I'll do it again. :-)
So: The way Go presents it is confusing, but this behavior makes sense, is correct, will never be changed, and is undoubtedly depended on by correct programs.
The confusing thing for people used to C++ or C# or Java or Python or most other languages is that in Go nil is a perfectly valid pointer receiver for a method to have. The method resolution lookup happens statically at compile time, and as long as the method doesn't try to deref the pointer, all good.
It still works if you assign to an interface.
package main

import "fmt"

type Dog struct{}
type Cat struct{}

type Animal interface {
    MakeNoise()
}

func (*Dog) MakeNoise() { fmt.Println("bark") }
func (*Cat) MakeNoise() { fmt.Println("meow") }

func main() {
    var d *Dog = nil
    var c *Cat = nil
    var i Animal = d
    var j Animal = c

    d.MakeNoise()
    c.MakeNoise()
    i.MakeNoise()
    j.MakeNoise()
}
This will print
bark
meow
bark
meow
But the interface method lookup can't happen at compile time. So an interface value is actually a pair -- a pointer to the type, and the instance value. The type half is not nil, hence the interface value is something like (*Dog, nil) and (*Cat, nil) in each case, which is not the interface zero value, (nil, nil).
But it's super confusing, because Go silently coerces a nil pointer into a non-nil (type, nil) interface value. There's probably some naming or syntax way to make this clearer.
But the behavior is completely reasonable.
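Here's a minimal, self-contained sketch of the comparison that trips everyone up:

package main

import "fmt"

type Dog struct{}

type Animal interface{ MakeNoise() }

func (*Dog) MakeNoise() { fmt.Println("bark") }

func main() {
    var d *Dog = nil
    var i Animal = d
    fmt.Println(d == nil) // true: the pointer itself is nil
    fmt.Println(i == nil) // false: i holds (*Dog, nil), not (nil, nil)
}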
The underlying reason, which you hint at, is that in Go (unlike Python, Java, C#… even C++) the “type” of an “object” is not stored alongside the object.
A struct{a, b int32} takes 8 bytes of memory. It doesn't use any extra bytes to “know” its type, to point to a vtable of “methods,” to store a lock, or any other object “header.”
Dynamic dispatch in Go uses interfaces, which are fat pointers that store both the type and a pointer to the object.
With this design it's only natural that you can have nil pointers, nil interfaces (no type and no pointer), and typed interfaces to a nil pointer.
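A quick sketch to make that concrete (assuming a 64-bit platform; the unsafe.Sizeof numbers would differ on 32-bit):

package main

import (
    "fmt"
    "unsafe"
)

type pair struct{ a, b int32 }

func main() {
    p := pair{}
    var i interface{} = &p
    fmt.Println(unsafe.Sizeof(p)) // 8: just the two int32s, no hidden object header
    fmt.Println(unsafe.Sizeof(i)) // 16: the fat pointer -- a type word plus a data word
}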
This may be a bad design decision, and it may be confusing. It's also the reason data races can corrupt memory: a racing write to a two-word interface or slice value can be torn.
But saying, as the author, “The reason for the difference boils down to again, not thinking, just typing” is just lazy.
Just as lazy as arguing Go is bad for portability.
I've written Go code that uses syscalls extensively and runs on two dozen different platforms, and found it far more sensible than the C approach.
Yeah, I totally agree -- given Go's design, the behavior makes sense (and changing the behavior just to make it more familiar to users of languages that fundamentally work differently would be silly).
However, the non-intuitive punning of nil is unfortunate.
I'm not sure what the ideal design would be.
Perhaps just making an interface not comparable to nil at all, but instead to something like `unset`.
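In the meantime, a workaround you sometimes see in the wild is a reflect-based helper (a sketch; the name isNil is just illustrative) that treats a typed nil the way callers usually expect:

package main

import (
    "fmt"
    "reflect"
)

// isNil reports whether i is nil or wraps a nil pointer/map/slice/chan/func.
func isNil(i interface{}) bool {
    if i == nil {
        return true
    }
    v := reflect.ValueOf(i)
    switch v.Kind() {
    case reflect.Ptr, reflect.Map, reflect.Slice, reflect.Chan, reflect.Func:
        return v.IsNil()
    }
    return false
}

func main() {
    var p *int
    var i interface{} = p
    fmt.Println(i == nil) // false: the confusing (type, nil) case
    fmt.Println(isNil(i)) // true: what most callers actually wanted
}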
Still, it's a sharp edge you hit once and then understand. I'm surprised people get so bothered by it... it's not something that impairs your use of the language once you're proficient.
(E.g. complaints about nil existing at all, or error handling, are much more relatable!)
(Side note, Go did fix the scoping of captured variables in for/range loops, which was a backward-incompatible change, but they justified it by empirically showing it fixed more bugs than it caused (very reasonable). C# made the same change with the same justification earlier, which was the inspiration for Go.)
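For reference, the bug class that change fixed looks like this (a sketch; with Go 1.22+ the loop variable is per-iteration):

package main

import "fmt"

func main() {
    var funcs []func()
    for _, s := range []string{"a", "b", "c"} {
        funcs = append(funcs, func() { fmt.Println(s) })
    }
    for _, f := range funcs {
        f() // before Go 1.22: "c" three times; Go 1.22+: "a", "b", "c"
    }
}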
And this issue was known from Lisps for 50+ years... if only we could somehow learn from other languages' mistakes.
Yeah, it blew my mind when I first learned Go had this problem -- like, people have already tripped over this many times! I was pleasantly surprised to see them fix it though.
I deeply, seriously believe that you should have written the words "It's super confusing," meditated on that for a minute, and then left it at that. It is super confusing. That's it. Nothing else matters. I understand why it is the way it is. I'm not stupid. As you said: it's super confusing, which is relevant when you're picking languages other people at your company (interns, juniors) have to write in.
> “The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.”
It's a sharp edge you trip over once, and it makes sense once you think about it; it's not like you need a PhD to understand it!
I do think there's probably some more elegant syntax or naming convention that would have made it less confusing.