
Comment by blixt

1 day ago

I've been using Go more or less in every full-time job I've had since pre-1.0. It's simple for people on the team to pick up the basics, it generally chugs along (I'm rarely worried about updating to the latest version of Go), it has most useful things built in, it compiles fast. Concurrency is tricky but if you spend some time with it, it's nice to express data flow in Go. The type system is most of the time very convenient, if sometimes a bit verbose. Just all-around a trusty tool in the belt.

But I can't help but agree with a lot of points in this article. Go was designed by some old-school folks that maybe stuck a bit too hard to their principles, losing sight of the practical conveniences. That said, it's a _feeling_ I have, and maybe Go would be much worse if it had solved all these quirks. To be fair, I see more leniency in fixing quirks in the last few years, like at some point I didn't think we'd ever see generics, or custom iterators, etc.

The points about RAM and portability seem mostly like personal grievances though. If it was better, that would be nice, of course. But the GC in Go is very unlikely to cause issues in most programs even at very large scale, and it's not that hard to debug. And Go runs on most platforms anyone could ever wish to ship their software on.

But yeah the whole error / nil situation still bothers me. I find myself wishing for Result[Ok, Err] and Optional[T] quite often.

> Go was designed by some old-school folks that maybe stuck a bit too hard to their principles, losing sight of the practical conveniences.

I'd say that it's entirely the other way around: they stuck to the practical convenience of solving the problem that they had in front of them, quickly, instead of analyzing the problem from first principles, and solving the problem correctly (or using a solution that was Not Invented Here).

Go's filesystem API is the perfect example. You need to open files? Great, we'll create

  func Open(name string) (*File, error)

function, you can open files now, done. What if the file name is not valid UTF-8, though? Who cares, hasn't happened to me in the first 5 years I used Go.

  • > Who cares, hasn't happened to me in the first 5 years I used Go.

    This is the mindset that makes me want to throttle the golang authors.

    Golang makes it easy to do the dumb, wrong, incorrect thing that looks like it works 99.7% of the time. How can that be wrong? It works in almost all cases!

    The problem is that your code is littered with these situations everywhere. You don’t think to test for them, it’s worked on all the data you fed it so far, and then you run into situations like the GP’s where you lose data because golang didn’t bother to think carefully about some API impedance mismatch, can’t even express it anyway, and just drops things on the floor when it happens.

    So now your user has irrecoverably lost data, there’s a bug in your bug tracker, and you and everyone else who uses go has to solve for yet another stupid footgun that should have been obvious from the start and can never be fixed upstream.

    And you, and every other golang programmer, gets a steady and never-ending stream of these types of issues, randomly selected for, for the lifetime of your program. Which one will bite you tomorrow? No idea! But the more and more people who use it, the more data you feed it, the more clients with off-the-beaten-track use-cases, the more and more it happens.

    Oops, non-UTF-8 filename. Oops, can’t detect the difference between an empty string in some JSON and a nil one. Oops, handed out a pointer and something got mutated out from under me. Oops, forgot to defer. Oops, maps aren’t thread-safe. Oops, maps don’t have a sane zero value. And on and on and fucking on and it never goddamn ends.

    And it could have, if only Rob Pike and co. didn’t just ship literally the first thing they wrote with zero forethought.

    • > Golang makes it easy to do the dumb, wrong, incorrect thing that looks like it works 99.7% of the time. How can that be wrong? It works in almost all cases!

      my favorite example of this was the go authors refusing to add monotonic time into the standard library because they confidently misunderstood its necessity

      (presumably because clocks at google don't ever step)

      then after some huge outages (due to leap seconds) they finally added it

      now the libraries are a complete mess because the original clock/time abstractions weren't built with the concept of multiple clocks

      and every go program written is littered with terrible bugs due to use of the wrong clock

      https://github.com/golang/go/issues/12914 (https://github.com/golang/go/issues/12914#issuecomment-15075... might qualify for the worst comment ever)

      1 reply →

    • I can count the number of times I've been bitten by such things in over 10 years of professional Go on fewer hands than the times I've been bitten in just the last three weeks by half-assed Java.

      7 replies →

  • While the general question about string encoding is fine, unfortunately in a general-purpose and cross-platform language, a file interface that enforces Unicode correctness is actively broken, in that there are files out in the world it will be unable to interact with. If your language is enforcing that, and it doesn't have a fallback to a bag of bytes, it is broken, you just haven't encountered it. Go is correct on this specific API. I'm not celebrating that fact here, nor do I expect the Go designers are either, but it's still correct.

    • This is one of those things that kind of bugs me about, say, OsStr / OsString in Rust. In theory, it’s a very nice, principled approach to strings (must be UTF-8) and filenames (arbitrary bytes, almost, on Linux & Mac). In practice, the ergonomics around OsStr are horrible. They are missing most of the API that normal strings have… it seems like manipulating them is an afterthought, and it was assumed that people would treat them as opaque (which is wrong).

      Go’s more chaotic approach to allow strings to have non-Unicode contents is IMO more ergonomic. You validate that strings are UTF-8 at the place where you care that they are UTF-8. (So I’m agreeing.)

      11 replies →

  • Much more egregious is the fact that the API allows returning both an error and a valid file handle. That may be documented to not happen. But look at the Read method instead. It can return both an error and a length you need to handle at the same time.

    • The Read() method is certainly an exception rather than the rule. The common convention is to return a nil value upon encountering an error unless there's real value in returning both, e.g. for a partial read that failed in the end but still produced some non-empty result. It's a rare occasion, yes, but if you absolutely have to handle this case you can. Otherwise you typically ignore the result if err != nil. It's a mess, true, but the real world is also quite messy unfortunately, and Go acknowledges that.
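
      A minimal sketch of that convention in use, assuming `r` is an io.Reader, `io` is imported, and `process` stands in for whatever you do with the bytes:

          buf := make([]byte, 4096)
          for {
              n, err := r.Read(buf)
              if n > 0 {
                  process(buf[:n]) // consume the bytes first; they are valid even if err != nil
              }
              if err == io.EOF {
                  break // clean end of input
              }
              if err != nil {
                  return err // the partial result above has already been handled
              }
          }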

      1 reply →

  • > What if the file name is not valid UTF-8

    Nothing? Neither Go nor the OS require file names to be UTF-8, I believe

    • > Nothing?

      It breaks. Which is weird because you can create a string which isn't valid UTF-8 (eg "\xbd\xb2\x3d\xbc\x20\xe2\x8c\x98") and print it out with no trouble; you just can't pass it to e.g. `os.Create` or `os.Open`.

      (Bash and a variety of other utils will also complain about it not being valid UTF-8; neovim won't save a file under that name; etc.)

      8 replies →

    • Well, Windows is an odd beast when 8-bit file names are used. If done naively, you can’t express all valid filenames even with broken UTF-8, and non-valid-Unicode filenames cannot be encoded to UTF-8 without loss or some weird convention.

      You can do something like WTF-8 (not a misspelling, alas) to make it bidirectional. Rust does this under the hood but doesn’t expose the internal representation.

      7 replies →

  • Note that Go strings can be invalid UTF-8; they dropped panicking on encountering an invalid UTF-8 string before 1.0, I think.

    • This also epitomizes the issue. What's the point of having a `string` type at all, if it doesn't allow you to make any extra assumptions about the contents beyond `[]byte`? The answer is that they planned to make conversion to `string` error out when it's invalid UTF-8, and then assume that `string`s are valid UTF-8, but then it caused problems elsewhere, so they dropped it for immediate practical convenience.
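
      A small illustration of how little the type buys you (a sketch, assuming `fmt` and `unicode/utf8` are imported):

          s := string([]byte{0xff, 0xfe, 'h', 'i'}) // converts with no validation at all
          fmt.Println(utf8.ValidString(s))          // false
          fmt.Println(len(s))                       // 4: bytes, not runes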

      63 replies →

  • > What if the file name is not valid UTF-8, though

    They could support passing filename as `string | []byte`. But wait, go does not even have union types.

    • But []byte, or a wrapper like Path, is enough, if strings are easily convertible into it. Rust does it that way via the AsRef<T> trait.

  • If the filename is not valid UTF-8, Golang can still open the file without a problem, as long as your filesystem doesn't attempt to be clever. Linux ext4fs and Go both consider filenames to be binary strings except that they cannot contain NULs.

    This is one of the minor errors in the post.
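
    A quick way to see that on Linux (a sketch; assumes the os and log imports, and the byte values are arbitrary):

        name := string([]byte{'f', 0xff, 'o'}) // not valid UTF-8
        f, err := os.Create(name)              // the bytes are handed to the kernel as-is
        if err != nil {
            log.Fatal(err) // doesn't happen on ext4; the call simply succeeds
        }
        f.Close() // ls will show the file, with the odd byte escaped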

  • > they stuck to the practical convenience of solving the problem that they had in front of them, quickly, instead of analyzing the problem from first principles, and solving the problem correctly (or using a solution that was Not Invented Here).

    I've said this before, but much of Go's design looks like it's imitating the C++ style at Google. When I see comments where people say they like something about Go, it's often an idiom that showed up first in the C++ macros or tooling.

    I used to check this before I left Google, and I'm sure it's becoming less true over time. But to me it looks like the idea of Go was basically "what if we created a Python-like compiled language that was easier to onboard than C++ but which still had our C++ ergonomics?"

  • > What if the file name is not valid UTF-8, though?

    Then make it valid UTF-8. If you try to solve the long tail of issues in a commonly used function of the library it's going to cause a lot of pain. This approach is better. If someone has a weird problem like file names with invalid characters, they can solve it themselves, even publish a package. Why complicate 100% of uses for solving 0.01% of issues?

    • > Then make it valid UTF-8.

      I think you misunderstand. How do you do that for a file that exists on disk that's trying to be read? Rename it for them? They may not like that.

I recently started writing Go for a new job, after 20 years of not touching a compiled language for something serious (I've done DevKitArm dev. as a hobby).

I know it's mostly a matter of taste, but darn, it feels horrible. And there are no default parameter values, and the error handling smells bad, and no real stack trace in production. And the "object orientation" syntax, adding some ugly reference to each function. And the pointers...

It took me back to my C/C++ days. Like programming with 25 year old technology from back when I was in university in 1999.

  • And then people are amazed that it achieves compile times compiled languages were already doing on PCs running at 10 MHz within the constraints of 640 KB (TB, TP, Modula-2, Clipper, QB).

    • > [some] compiled languages were already doing on PCs running at 10 MHz within the constraints of 640 KB

      Many compiled languages are very slow to compile however, especially for large projects, C++ and rust being the usual examples.

      13 replies →

    • That's a bit unfair to the modern compilers - there are a lot more standards to adhere to, more (micro)architectures, frontends need to plug into IRs into optimisers into codegen, etc. Some of it is self-inflicted: do you need yet another 0.01% optimisation? At the cost of maintainability, or even correctness? (Hello, UB.) But most of it is just computers evolving.

      But those are not rules. If you're doing stuff for fun, check out QBE <https://c9x.me/compile/> or Plan 9 C <https://plan9.io/sys/doc/comp.html> (which Go was derived from!)

      2 replies →

  • If you want a nice modern compiled language, try Kotlin. It's not ideal, but it's very ergonomic and has very reasonable compile times (to JVM, I did not play with native compilation). People also praise Nim for being nice towards the developer, but I don't have any first-hand experience with it.

    • I have only used Kotlin on the JVM. You're saying there's a way to avoid the JVM and build binaries with it? Gotta go look that up. The problem with Kotlin is not the language but finding jobs using it can be spotty. "Kotlin specialist" isn't really a thing at all. You can find more Golang and Python jobs than Kotlin.

  • But it's not--Go is a thoroughly modern language, minus a few things as noted in this discussion. I've written quite a few APIs for corporate clients using it and they are doing great.

> Go was designed by some old-school folks that maybe stuck a bit too hard to their principles, losing sight of the practical conveniences.

It often feels like the two principles they stuck/stick to are "what makes writing the compiler easier" and "what makes compilation fast". And those are good goals, but they're only barely developer-oriented.

  • Not sure it was only that. I remember a lot of "we're not Java" in the discussions around it. I always had the feeling they were rejecting certain ideas like exceptions and generics more out of principle than from any practical analysis.

    Like, yes, those ideas have frequently been driven too far and have led to their own pain points. But people also seem to frequently rediscover that removing them entirely will lead to pain, too.

    • Ian Lance Taylor, a big proponent of generics, wrote a lot about the difficulties of adding generics to Golang. I bet the initial team just had to cut the scope and produce a working language, as simple as possible while still practically useful. Easy concurrency was the goal, so they basically took most of Modula-2 plus ideas from Oberon (and elsewhere), removed all the "fluff" (like arrays indexable by enumeration types, etc), added GC, and that was plenty enough.

      1 reply →

  • I recall that one of the primary reasons they built Go was because of the half-day compile times Google's C++ code was reaching.

  • I am reminded when I read "barely developer oriented" that this comes from Google, who run compute and compilers at Ludicrous Scale. It doesn't seem strange that they might optimize (at least in part) for compiler speed and simplicity.

  • Ah well you know, the kids want new stuff. They don't actually care about getting work done.

  • What makes compilation fast is a good goal at places with large code bases and long build times. Maybe it makes less sense in smaller startups with a few 100k LOC.

My feeling is that in terms of developer ergonomics, it nailed the “very opinionated, very standard, one way of doing things” part. It is a joy to work on a large microservices architecture and not have a different style in each repo, or to avoid formatting discussions because the formatter is built in.

The issue is that it was a bit outdated in the choice of _which_ things to choose as the one Go way. People expect a map/filter method rather than a loop with off-by-one risks, a type system with the smartness of TypeScript (if less featured and more heavily enforced), less annoying error handling, and so on.

I get that it’s tough to implement some of those features without opening the way to a lot of “creativity” in the bad sense. But I feel like go is sometimes a hard sell for this reason, for young devs whose mother language is JavaScript and not C.

  • > The issue is that it was a bit outdated in the choice of _which_ things to choose as the one Go way

    I agree with this. I feel like Go was a very smart choice to create a new language to be easy and practical and have great tooling, and not to be experimental or super ambitious in any particular direction, only trusting established programming patterns. It's just weird that they missed some things that had been pretty well hashed out by 2009.

    Map/filter/etc. are a perfect example. I remember around 2000 the average programmer thought map and filter were pointlessly weird and exotic. Why not use a for loop like a normal human? Ten years later the average programmer was like, for loops are hard to read and are perfect hiding places for bugs, I can't believe we used to use them even for simple things like map, filter, and foreach.

    By 2010, even Java had decided that it needed to add its "stream API" and lambda functions, because no matter how awful they looked when bolted onto Java, it was still an improvement in clarity and simplicity.

    Somehow Go missed this step forward the industry had taken and decided to double down on "for." Go's different flavors of for are a significant improvement over the C/C++/Java for loop, but I think it would have been more in line with the conservative, pragmatic philosophy of Go to adopt the proven solution that the industry was converging on.

    • Go Generics provides all of this. Prior to generics, you could have filter, map, reduce etc but you needed to implement them yourself in a library/pkg, once for each type.

      After Go added generics in version 1.18, you can just import someone else's generic implementations of whatever of these functions you want and use them all throughout your code and never think about it. It's no longer a problem.
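
      For example, a generic Map is only a few lines (written here for illustration, not taken from any particular package):

          func Map[T, U any](in []T, f func(T) U) []U {
              out := make([]U, 0, len(in))
              for _, v := range in {
                  out = append(out, f(v))
              }
              return out
          }

          // usage: ids := Map(users, func(u User) int { return u.ID })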

      1 reply →

  • > People expect a map/filter method

    Do they? After too many functional battles I started practicing what I'm jokingly calling "Debugging-Driven Development": just like TDD keeps design decisions in mind to allow for testability from the get-go, this makes me write code that will be trivially easy to debug (especially printf-guided debugging and step-by-step execution debugging).

    Like, adding a printf in the middle of a for loop, without even needing to understand the logic of the loop. Just make a new line and write a printf. I grew tired of all those tight chains of code that iterate beautifully but later when in a hurry at 3am on a Sunday are hell to decompose and debug.

    • I'm not a hard defender of functional programming in general, mind you.

      It's just that a ridiculous number of steps in real-world problems can be summarised as 'reshape this data', 'give me a subset of this set', or 'aggregate this data by this field'.

      Loops are, IMO, very bad at expressing those common concepts briefly and clearly. They take a lot of screen space, usually accessory variables, and it isn't immediately clear from just seeing a for block what you're about to do - "I'm about to iterate" isn't useful information to me as a reader; are you transforming data, selecting it, aggregating it?

      The consequence is that you usually end up with tons of lines like

      userIds = getIdsfromUsers(users);

      where the function is just burying a loop. Compare to:

      userIds = users.pluck('id')

      and you save the buried utility function somewhere else.

    • Rust has `.inspect()` for iterators, which achieves your printf debugging needs. Granted, it's a bit harder for an actual debugger, but support's quite good for now.

    • I'll agree that explicit loops are easier to debug, but that comes at the cost of being harder to write _and_ read (need to keep state in my head) _and_ being more bug-prone (because mutability).

      I think it's a bad trade-off; most languages out there are moving away from it.

      7 replies →

    • Just use a real debugger. You can step into closures and stuff.

      I assume, anyway. Maybe the Go debugger is kind of shitty, I don't know. But in PHP with xdebug you just use all the fancy array_* methods and then step through your closures or callables with the debugger.

    • This depends on the language and IDE. Intellij Java debugger is excellent at stream debugging.

  • The lack of stack traces in Go is diabolical given all the effort we have to put in by manually passing every error.

> Concurrency is tricky

The Go language and its runtime is the only system I know that is able to handle concurrency with multicore CPUs seamlessly within the language, using the CSP-like (goroutine/channel) formalism which is easy to reason with.
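
A minimal sketch of that style (contrived, but the shape is the point; assumes `fmt` is imported): fan work out to goroutines, collect results over a channel.

    results := make(chan int)
    for i := 0; i < 4; i++ {
        go func(n int) {
            results <- n * n // each goroutine sends its result when done
        }(i)
    }
    for i := 0; i < 4; i++ {
        fmt.Println(<-results) // receives block until some worker has sent
    }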

Python is a mess with the GIL and async libraries that are hard to reason with. C, C++, Java etc need external libraries to implement threading which can't be reasoned with in the context of the language itself.

So, Go is a perfect fit for the http server (or service) use case and in my experience there is no parallel.

  • > So, Go is a perfect fit for the http server (or service) use case and in my experience there is no parallel.

    Elixir handling 2 million websocket connections on a single machine back in 2015 would like to have a word.[1] This is largely thanks to the Erlang runtime it sits atop.

    Having written some tricky Go (I implemented Raft for a class) and a lot of Elixir (professional development), it is my experience that Go's concurrency model works for a few cases but largely sucks in others, and it is way easier to write footguns in Go than it ought to be.

    [1]: https://phoenixframework.org/blog/the-road-to-2-million-webs...

    • I worked in both Elixir and Go. I still think Elixir is best for concurrency.

      I recently realized that there is no easy way to "bubble up a goroutine error", and I wrote some code to make sure that was possible, and that's when I realized, as usual, that I'm rewriting part of the OTP library.

      The whole supervisor mechanism is so valuable for concurrency.

  • > Java etc need external libraries to implement threading

    Java does not need external libraries to implement threading, it's baked into the language and its standard libraries.

  • > Java etc need external libraries to implement threading which can't be reasoned with in the context of the language itself

    What do you mean by this for Java? The library is the runtime that ships with Java, and while they're OS threads under the hood, the abstraction isn't all that leaky, and it doesn't feel like they're actually outside the JVM.

    Working with them can be a bit clunky, though.

    • Also, Java is one of the only languages with actually decent concurrent data structures right out of the box.

    • I think parent means they're (mostly) not supported via keywords. But you can use Kotlin and get that.

  • With all due respect, there are many languages in popular use that can do this, in many cases better than golang.

    I believe it’s the only system you know. But it’s far from the only one.

    • There's not that many. C/C++ and Rust all map to OS threads and don't have CSP type concurrency built in.

      In Go's category, there's Java, Haskell, OCaml, Julia, Nim, Crystal, Pony...

      Dynamic languages are more likely to have green threads but aren't Go replacements.

      5 replies →

  • Unless we consider the JDK an external library. Speaking of libraries, Java's concurrency containers are truly powerful yet can be safely used by so many engineers. I don't think Go's ecosystem is even close.

  • > using the CSP-like (goroutine/channel) formalism which is easy to reason with

    I thought it was a seldom mentioned fact in Go that CSP systems are impossible to reason about outside of toy projects so everyone uses mutexes and such for systemic coordination.

    I'm not sure I've even seen channels in a production application used for anything more than stopping a goroutine, collecting workgroup results, or something equally localized.

    • There are also atomic operations (sync/atomic) and higher-level abstractions built on atomics and/or mutexes (semaphores, sync.Once, sync.WaitGroup/errgroup.Group, etc.). I've used these and seen them used by others.

      But yeah, the CSP model is mostly dead. I think the language authors' insistence that goroutines should not be addressable or even preemptible from user code makes this inevitable.

      Practical Go concurrency owes more to its green threads and colorless functions than its channels.
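
      For instance, the errgroup flavour of that fan-out-and-collect pattern (a sketch using golang.org/x/sync/errgroup; `urls` and `fetch` are placeholders):

          var g errgroup.Group
          for _, u := range urls {
              u := u // redundant on Go 1.22+, harmless before
              g.Go(func() error {
                  return fetch(u)
              })
          }
          if err := g.Wait(); err != nil {
              return err // the first non-nil error from any of the goroutines
          }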

      1 reply →

  • Go is such a good fit for multi-core, especially given that it is not even memory safe under data races...

    • It is rare to encounter this in practice, and it does get picked up by the race detector (which you have to consciously enable). But the language designers chose not to address it, so I think it's a valid criticism. [1]

      Once you know about it, though, it's easy to avoid. I do think, especially given that the CSP features of Go are downplayed nowadays, this should be addressed more prominently in the docs, with the more realistic solutions presented (atomics, mutexes).
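
      In practice the realistic fix is usually the boring one below (a sketch; the type and field names are arbitrary):

          type counters struct {
              mu sync.Mutex
              m  map[string]int
          }

          func newCounters() *counters {
              return &counters{m: make(map[string]int)}
          }

          func (c *counters) inc(key string) {
              c.mu.Lock() // without this, concurrent writers can corrupt the map
              defer c.mu.Unlock()
              c.m[key]++
          }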

      It could also potentially be addressed using 128-bit atomics, at least for strings and interfaces (whereas slices are too big, taking up 3 words). The idea of adding general 128-bit atomic support is on their radar [2] and there already exists a package for it [3], but I don't think strings or interfaces meet the alignment requirements.

      [1]: https://research.swtch.com/gorace

      [2]: https://github.com/golang/go/issues/61236

      [3]: https://pkg.go.dev/github.com/CAFxX/atomic128

> Just all-around a trusty tool in the belt

I agree.

The Go std-lib is fantastic.

Also no dependency-hell with Go, unlike with Python. Just ship an oven-ready binary.

And what's the alternative?

Java? Licensing sagas requiring the use of divergent forks. Plus Go is easier to work with, perhaps especially for server-side deployments.

Zig? Rust? Complex learning curve. And having to choose e.g. Rust crates re-introduces dependency hell and the potential for supply-chain attacks.

  • > Java? Licensing sagas requiring the use of divergent forks. Plus Go is easier to work with, perhaps especially for server-side deployments

    Yeah, these are sagas only, because there is basically one, single, completely free implementation anyone uses on the server-side and it's OpenJDK, which was made 100% open-source and the reference implementation by Oracle. Basically all of Corretto, AdoptOpenJDK, etc are just builds of the exact same repository.

    People bringing this whole license topic up can't be taken seriously, it's like saying that Linux is proprietary because you can pay for support at Red Hat..

    • > People bringing this whole license topic up can't be taken seriously

      So you mean all those universities and other places that have been forced to spend $$$ on licenses under the new regime also can't be taken seriously? Are you saying none of them took advice and had nobody on staff to tell them OpenJDK exists?

      Regarding your Linux comment, some of us are old enough to remember the SCO saga.

      Sadly Oracle have deeper pockets to pay more lawyers than SCO ever did ....

      9 replies →

  • You forgot D. In a world where D exists, it's hard to understand why Go needed to be created. None of the critiques in this post is an issue in D. If the effort Google put into Go had gone into making D better, I think D today would be the best language you could use. But as it is, D has had very little investment (by that I mean actual developer time spent on making it better, cleaning it up, writing tools) and it shows.

    • I don't think the languages are comparable. Go tries to stay simple (whatever that means), while D is a kitchen-sink language.

  • > Rust crates re-introduces dependency hell and the potential for supply-chain attacks.

    I’m only a casual user of both but how are rust crates meaningfully different from go’s dependency management?

    • Go has a big, high quality standard library with most of what one might need. Means you have to bring in and manage (and trust) far fewer third party dependencies, and you can work faster because you’re not spending a bunch of time figuring out what the crate of the week is for basic functionality.

      10 replies →

    • I think it's because go's community sticks close to the standard library:

      e.g. IIRC Rust has multiple ways of handling strings while Go has (to a big extent) only one (thanks to the GC)

      2 replies →

  • uv + the new way of adding the required packages in the comments is pretty good.

    you can go `uv run script.py` and it'll automatically fetch the libraries and run the script in a virtual environment.

    Still no match for Go though, shipping a single cross-compiled binary is a joy. And with a bit of trickery you can even bundle in your whole static website in it :) Works great when you're building business logic with a simple UI on top.
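
    The bundling bit is mostly the embed package these days plus the GOOS/GOARCH variables for cross-compiling (a sketch; assumes a ./static directory next to the source and the embed, log and net/http imports):

        //go:embed static
        var site embed.FS

        func main() {
            // embedded files are served under /static/..., matching their paths
            http.Handle("/", http.FileServer(http.FS(site)))
            log.Fatal(http.ListenAndServe(":8080", nil))
        }

        // cross-compile from any machine:
        //   GOOS=linux GOARCH=amd64 go build -o app .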

    • I've been out of the Python game for a while but I'm not surprised there is yet another tool on the market to handle this.

      You really come to appreciate it when these batteries are included with the language itself. That Go binary will _always_ run, but that Python project won't build in a few years.

      4 replies →

    • > you can go `uv run script.py` and it'll automatically fetch the libraries and run the script in a virtual environment.

      Yeah, but you still have to install `uv` as a pre-requisite.

      And you still end up with a virtual environment full of dependency hell.

      And then of course we all remember that whole messy era when Python 2 transitioned to Python 3, and then deferred it, and deferred it again....

      You make a fair point, of course it is technically possible to make it (slightly) "cleaner". But I'll still take the Go binary thanks. ;-)

      1 reply →

  • This just makes it even more frustrating to me. Everything good about go is more about the tooling and ecosystem but the language itself is not very good. I wish this effort had been put into a better language.

    • Go has transparent async io and a very nice M:N threading model that makes writing http servers using epoll very simple and efficient.

      The ergonomics for this use case are better than in any language I ever used.
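
      A minimal sketch of what that looks like, using only the standard library (the handler body is arbitrary):

          http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
              // each request runs on its own goroutine; blocking here
              // (a DB call, a sleep, ...) doesn't block other connections
              fmt.Fprintln(w, "hello")
          })
          log.Fatal(http.ListenAndServe(":8080", nil))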

      1 reply →

    • > I wish this effort had been put into a better language.

      But it is being put. Read newsletters like "The Go Blog" and "Go Weekly". It's been improving constantly. Language changes require lots of time to be done right, but the language is evolving.

  • > Rust crates re-introduces [...] potential for supply-chain attacks.

    I have absolutely no idea how go would solve this problem, and in fact I don't think it does at all.

    > The Go std-lib is fantastic.

    I have seen worse, but I would still not call it decent considering this is a fairly new language that could have done a lot more.

    I am going to ignore the incredible amount of asinine and downright wrong stuff in many of the most popular libraries (even the basic ones maintained by google) since you are talking only about the stdlib.

    Off the top of my head, I found inconsistent tagging management for structs (json defaults, omitzero vs omitempty), not even errors on tag typos, the reader/writer pattern that forces you to write custom connectors between the two, bzip2 has a reader and no writer, the context linked list for K/V. Just look at the consistency of the interfaces in the "encoding" pkg and cry, the package `hash` should actually be `checksum`. Why do `strconv.Atoi`/`Itoa` still exist? Time.Add() vs Time.Sub()...
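
    To spell out that particular pair (a sketch; `omitzero` only arrived with Go 1.24's encoding/json):

        type Event struct {
            Count int       `json:"count,omitempty"` // omitted when 0
            When  time.Time `json:"when,omitzero"`   // omitted when zero; omitempty would still emit it
        }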

    It's chock full of inconsistencies. It forces me to look at the documentation every single time I don't use something for more than a couple of days. No, the autocomplete with the 2-line documentation does not include the potential pitfalls that are explained at the top of the package only.

    And please don't get me started on the wrappers I had to write around stuff in the net library to make it a bit more consistent or just less plain wrong. net/url.Parse!!! I said don't make me start on this package! nil vs NoBody! ARGH!

    None of this is stuff at the language level (of which there is plenty to say).

    None of it is a dealbreaker per se, but it adds attrition and becomes death by a billion cuts.

    I don't even trust any parser written in go anymore, I always try to come up with corner cases to check how it reacts, and I am often surprised by most of them.

    Sure, there are worse languages and libraries. Still not something I would pick up in 2025 for a new project.

  • > std-lib

    Yes, my favourite is the `time` package. It's just so elegant how it's just a number under there; the nominal type system truly shines. And using it is a treat. What do you mean I can do `+= 8*time.Hour` :D

    • Unfortunately it doesn't have error handling, so when you do += 8 hours and it fails, it won't return a Go error, it won't throw a Go exception, it just silently does the wrong thing (clamps the duration) and hopes you don't notice...

      It's simplistic and that's nice for small tools or scripts, but at scale it becomes really brittle since none of the edge cases are handled

      3 replies →

    • As long as you don’t need to do `hours := 8` and `+= hours * time.Hour`. Incredibly the only way to get that multiplication to work is to cast `hours` to a `time.Duration`.

      In Go, `int * Duration = error`, but `Duration * Duration = Duration`!
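
      Spelled out (a sketch):

          d := 8 * time.Hour                   // fine: 8 is an untyped constant
          hours := 8                           // hours is an int variable now
          // d = hours * time.Hour             // compile error: mismatched types int and time.Duration
          d = time.Duration(hours) * time.Hour // the cast the parent mentions
          _ = d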

      2 replies →

People tend to refer to the bit where Discord rewrote a bit of their stack in Rust because Go GC pauses were causing issues.

The code was on the hot path of their central routing server handling billions (with a B) of messages a second, or something crazy like that.

You're not building Discord, the GC will most likely never be even a blip in your metrics. The GC is just fine.

  • I get you can specifically write code that does not malloc, but I'm curious at scale if there are heap management / fragmentation and compression issues that are equivalent to GC pause issues.

    I don't have a lot of experience with the malloc languages at scale, but I do know that heap fragmentation and GC fragmentation are very similar problems.

    There are techniques in GC languages to avoid GC like arena allocation and stuff like that, generally considered non-idiomatic.

"Concurrency is tricky"

This tends to be true for most languages, even the ones with easier concurrency support. Using it correctly is the tricky part.

I have no real problem with the portability. The area I see Go shining in is stuff like AWS Lambda where you want fast execution and aren't distributing the code to user systems.

> The type system is most of the time very convenient

In what universe?

  • In mine. It's Just Fine.

    Is it the best or most robust or can you do fancy shit with it? No

    But it works well enough to release reliable software along with the massive linter framework that's built on top of Go.

I find Result[] and Optional[] somewhat overrated, but nil does bother me. However, nil isn't going to go away (what else is going to be the default value for pointers and interfaces, and not break existing code?). I think something like a non-nilable type annotation/declaration would be all Go needs.

  • Yeah maybe they're overrated, but they seem like the agreed-upon set of types to avoid null and to standardize error handling (with some support for nice sugars like Rust's ? operator).

    I quite often see devs introducing them in other languages like TypeScript, but it just doesn't work as well when it's introduced in userland (usually you just end up with a small island of the codebase following this standard).

    • Typescript has another way of dealing with null/undefined: it's in the type definition, and you can't use a value that's potentially null/undefined. Using Optional<T> in Typescript is, IMO, weird. Typescript also has exceptions...

      I think they only work if the language is built around it. In Rust, it works, because you just can't deref an Optional type without matching it, and the matching mechanism is much more general than that. But in other languages, it just becomes a wart.

      As I said, some kind of type annotation would be most go-like, e.g.

          func f(ptr PtrToData?) int { ... }
      

      You would only be allowed to touch *ptr inside an if ptr != nil { ... }. There's a linter from uber (nilaway) that works like that, except for the type annotation. That proposal would break existing code, so perhaps something like an explicit marker for non-nil pointers is needed instead (but that's not very ergonomic, alas).

  • Yeah default values are one of Go's original sins, and it's far too late to roll those back. I don't think there are even many benefits—`int i;` is not meaningfully better than `int i = 0;`. If it's struct initialization they were worried about, well, just write a constructor.

    Go has chosen explicit over implicit everywhere except initialization—the one place where I really needed "explicit."

    • It makes types very predictable though: a var int is always a valid int no matter what, where or how. How would you design the type system and semantics around initialization and declarations without defaults? Just allow uninitialized values like in C? That’s basically default values with extra steps and bonus security holes. An expansion of the type system to account for PossiblyUndefined<t>? That feels like a significant complication, but maybe someone made it work…
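
      Concretely, the zero values in question (a sketch, `fmt` assumed):

          var n int            // 0
          var s string         // ""
          var xs []int         // nil, but append(xs, 1) works fine
          var m map[string]int // nil: reads give the zero value, writes panic
          fmt.Println(n, s, len(xs), m == nil)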

Golang is great for problem classes where you really, really can't do away with tracing GC. That's a rare case perhaps, but it exists nonetheless. Most GC languages don't have the kind of high-performance concurrent GC that you get out of the box with Golang, and the minimum RAM requirements are quite low as well. (You can of course provide more RAM to try and increase overall throughput, and you probably should - but you don't have to. That makes it a great fit for running on small cloud VM's, where RAM itself can be at a premium.)

  • Java's GCs are a generation ahead, though, in both throughput-oriented and latency-sensitive workloads [1]. Though Go's GC did/does get a few improvements and it is much better than it was a few years ago.

    [1] ZGC has basically decoupled the heap size from the pause time, at that point you get longer pauses from the OS scheduler than from GC.

    • Do you have a source for this? My understanding is Go's GC is much better optimized for low latency.

> But yeah the whole error / nil situation still bothers me. I find myself wishing for Result[Ok, Err] and Optional[T] quite often.

I got insta-rejected in an interview when I said this in response to the interview panel's question about 'thoughts about golang'.

Like, they said 'interview is over' and showed me the (virtual) door. I was stunned lol. This was during peak golang mania. Not sure what happened to rancherlabs.

  • They probably thought you weren't going to be a good fit for writing idiomatic Go. One of the things many people praise Go for is its standard style across codebases, if you don't like it, you're liable to try and write code that uses different patterns, which is painful for everyone involved.

  • Some workplaces explicitly test cultural closeness to their philosophy of work (language, architecture, etc).

    It’s part trying to keep a common direction and part fear that dislike of their tech risks the hire not staying for long.

    I don’t agree with this approach, don’t get me wrong, but I’ve seen it done and it might explain your experience.

> I find myself wishing for Optional[T] quite often.

Well, so long as you don't care about compatibility with the broad ecosystem, you can write a perfectly fine Optional yourself:

    type Optional[Value any] struct {
     value  Value
     exists bool
    }

    // New empty.
    func New[Value any]() Optional[Value] {}

    // New of value.
    func Of[Value any](value Value) Optional[Value] {}

    // New of pointer.
    func OfPointer[Value any](value *Value) Optional[Value] {}

    // Only general way to get the value.
    func (o Optional[Value]) Get() (Value, bool) {}

    // Get value or panic.
    func (o Optional[Value]) MustGet() Value {}

    // Get value or default.
    func (o Optional[Value]) GetOrElse(defaultValue Value) Value {}

    // JSON support.
    func (o Optional[Value]) MarshalJSON() ([]byte, error) {}
    func (o *Optional[Value]) UnmarshalJSON(data []byte) error {}

    // DB support.
    func (o *Optional[Value]) Scan(value any) error {}
    func (o Optional[Value]) Value() (driver.Value, error) {}

But you probably do care about compatibility with everyone else, so... yeah it really sucks that the Go way of dealing with optionality is slinging pointers around.

  • You can write `Optional`, sure, but you can't un-write `nil`, which is what I really want. I use `Optional<T>` in Java as much as I can, and it hasn't saved me from NullPointerException.

    • You're not being very precise about your exact issues. `nil` isn't anywhere near as much of an issue in Go as it is in Java because not everything is a reference to an object. A struct cannot be nil, etc. In Java you can literally just `return null` instead of an `Optional<T>`, not so in Go.

      There aren't many possibilities for nil errors in Go once you eliminate the self-harm of abusing pointers to represent optionality.

  • There's some other issues, too.

    For JSON, you can't encode Optional[T] as nothing at all. It has to encode to something, which usually means null. But when you decode, the absence of the field means UnmarshalJSON doesn't get called at all. This typically results in the default value, which of course you would then re-encode as null. So if you round-trip your JSON, you get a materially different output than input (this matters for some other languages/libraries). Maybe the new encoding/json/v2 library fixes this, I haven't looked yet.

    Also, I would usually want Optional[T]{value:nil,exists:true} to be impossible regardless of T. But Go's type system is too limited to express this restriction, or even to express a way for a function to enforce this restriction, without resorting to reflection, and reflection has a type erasure problem making it hard to get right even then! So you'd have to write a bunch of different constructors: one for all primitive types and strings; one each for pointers, maps, and slices; three for channels (chan T, <-chan T, chan<- T); and finally one for interfaces, which has to use reflection.

    • For JSON I just marshal to/from:

          {
           "value": "value",
           "exists": true
          }
      

      For nil, that's interesting. I've never run into issues there, so I never considered it.

      1 reply →

> Concurrency is tricky but

You hear that Rob Pike? LOL. All those years he shat on Java, it was so irritating. (Yes schadenfreude /g)

The remarkable thing to me about Go is that it was created relatively recently, and the collective mindshare of our industry knew better about these sorts of issues. It would be like inventing a modern record player today with fancy new records that can't be damaged and last forever. Great... but why the fuck are we doing that? We should not be writing low level code like this with all of the boilerplate, verbosity, footguns. Build high level languages that perform like low level languages.

I shouldn't fault the creators. They did what they did, and that is all well and good. I am more shocked by the way it has exploded in adoption.

Would love to see a coffeescript for golang.