Comment by metadat
3 years ago
> I would absolutely love to discover the original code review for this and why this was chosen as a default. If the PRs from 2011 are any indication, it was probably to get unit tests to pass faster. If you know why this is the default, I’d love to hear about it!
Please hold while I pick my fallen jaw up off the floor.
The parents of the Internet work at Google. How could this defect make it to production and live for 12+ years in the wild? I guess nothing fixes itself, but this shatters the myth of Google(r) superiority. It turns out people are universally entities comprised of sloppy, error-prone wetware.
At the very least there should be a comment in caps, and a note in the documentation, describing why this default was chosen and in what circumstances it's ill-advised. I'm not claiming to be remarkably exceptional, and even I include such information on the first pass when writing the initial code (my rule: any unusual or non-standard default deserves at least a minimal explanation, to spare whoever comes later). (Full disclosure: I was rejected after round 1 of Google code screens 3 times, though I have been hired at other FAANG-like companies.)
Yeesh.
p.s. Be sure to brace yourself before reading https://news.ycombinator.com/item?id=34179426#34180015
> It turns out people are universally entities comprised of sloppy, error-prone wetware.
The line from Agent K in 'Men In Black' comes to mind here.
At more jobs than not, I left behind at least one 3+ month old PR of stability changes that I was 'not allowed to merge because we didn't have the bandwidth to regression-test (or do the cross-ecosystem library update)'. Yes, I made sure to explain to my colleagues why I made those changes, and why I was mentioning them, before I left.
Most eventually got applied.
> (I've been rejected after round 1 of Google code screens 3 times, though have been hired to other FAANG-like companies). Sheesh.
I've found that the companies that hire based on quality-of-bullshitting sometimes pay more, but are far less satisfying than companies that hire on quality-of-language-lawyering (i.e. you understand the caveats of a given solution rather than sugar coating them).
> Please hold while I pick my fallen jaw up off the floor.
> p.s. Be sure to brace yourself before reading https://news.ycombinator.com/item?id=34179426#34180015
Both of these snide comments assume that the speculative explanations are correct, which they very well may not be.
Google's interview bar is set so that they don't have to fire too many bad hires; it's not about being superior (they err on the side of caution when hiring).
This might change now in this downturn, but when I was working at Google in 2008, we were the only tech company where nobody was fired because of the recession (there were offices closed, and people had the option to relocate, although not everybody took that option).
If you compare it with Facebook, they just fired a lot of people.
In short: you probably just got unlucky; you should try again when you can.
Google designs for Google. In their world everyone uses a latest gen MacBook with maxed out RAM on gigabit fiber.
The default is gLinux; most of the company uses Chromebooks.
First half yes, second half no. Everyone quickly finds out that Chromebooks can't hack it spec-wise, even for simple Chrome Remote Desktop.
The "On gigabit fiber" part is true, though.
Most engineers have work desktops that run gLinux, and they also have MacBooks.
Google has more end users on slow networks and old devices than almost anyone. Throttle your browser with the browser tools and see what loads quicker, google.com or a website of your choice. Once you've loaded google.com, do a search.
Does it matter from the server point of view?
How can you call it a defect when it might have been a deliberate decision? Your whole post sounds like you're upset Google didn't hire you lmao
The entire post is embarrassing and makes me think that Google made the correct decision. Also, it seems that people that want to change the default behaviour can simply use the TCPConn.SetNoDelay function.
Decisions deserve documentation (because a footgun warning is preferable to spontaneous unintended penetration).
It is documented. https://pkg.go.dev/net#TCPConn.SetNoDelay
> The default is true (no delay), meaning that data is sent as soon as possible after a Write.
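For anyone who does want Nagle's algorithm back, flipping the default is a one-liner. A minimal sketch (the dial address is just a placeholder):

    package main

    import (
        "log"
        "net"
    )

    func main() {
        // Dial returns a net.Conn; for "tcp" it is backed by a *net.TCPConn.
        conn, err := net.Dial("tcp", "example.com:80")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Go sets TCP_NODELAY by default; passing false opts this
        // connection back into Nagle's algorithm (small writes coalesced).
        if tc, ok := conn.(*net.TCPConn); ok {
            if err := tc.SetNoDelay(false); err != nil {
                log.Fatal(err)
            }
        }
    }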
Huh, really? There is a public API to change the behavior; that's about it. There would be a million pages of documentation by now if every decision needed documenting.
It’s not a defect, and it’s not unusual to enable TCP_NODELAY.
As a default, it’s a design decision. It’s documented in the Golang Net library.
I remember learning all of this stuff in 1997 at my first Java job and witnessing the same shock and horror at TCP_NODELAY being disabled (!) by default, when most server developers had to enable it to get any reasonable latency for their RPC-type apps, because most clients had delayed TCP ACKs on by default. Which should never be combined with Nagle's algorithm!
This Internet folklore gets relearned by every new generation. Golang’s default has decades of experience in building server software behind the decision to enable it. As many other threads here have explained, including Nagle himself.
> The parents of the Internet work at Google. How could this defect make it to production and live for 12+ years in the wild?
Google is a big company; the “parents of the internet”, insofar as they work at Google, probably work nowhere near this, in terms of scope of work.
It would be naive to think corporate incentives don't influence code and protocols:
> HTTP/3 was standardized 6 months ago, and Google has been using it widely for years, but it's not supported by Go.
> WebTransport originally had a P2P/ICE component, but no longer does.
> HTTP/3 doesn't even have an option to work without certificate authorities.
> HTTP/3 doesn't even have an option to work without certificate authorities.
Unencrypted HTTP is dead for any serious purpose. Any remaining use is legacy, like code written in Basic.
With Let's Encrypt on one hand, and single-binary utilities for running your own local CA on the other, this should pose no problem.
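For a plain Go service on a public hostname, automated issuance can be roughly this small. A sketch assuming the golang.org/x/crypto/acme/autocert package, with the hostname and cache directory as placeholders:

    package main

    import (
        "net/http"

        "golang.org/x/crypto/acme/autocert"
    )

    func main() {
        // Manager obtains and renews certificates from Let's Encrypt
        // and caches them on disk between restarts.
        m := &autocert.Manager{
            Prompt:     autocert.AcceptTOS,
            Cache:      autocert.DirCache("/var/cache/certs"),
            HostPolicy: autocert.HostWhitelist("example.com"),
        }

        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello over TLS\n"))
        })

        srv := &http.Server{
            Addr:      ":443",
            Handler:   mux,
            TLSConfig: m.TLSConfig(),
        }

        // Empty cert/key paths: certificates come from TLSConfig.GetCertificate.
        srv.ListenAndServeTLS("", "")
    }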
> this should pose no problem.
It poses a stack of problems a foot high.
Some random examples:
Docker, Kubernetes, etc... use HTTP by default. Not HTTPS or HTTP/3. Unencrypted HTTP 1.1! This is because containers are snapshots and can't contain certificates. Injecting certificates is a pain in the butt, because there is no standardised mechanism for it.
Okay! You inserted a certificate! For... what name? Is it the "site host name", or the "server name"? Either one you pick will be wrong for something. Many web apps expect to see a host header on the backend that matches the frontend, and will poop themselves if you give them a per-machine (or per-container) certificate. I've seen cloud load balancers that have the opposite problem and expect valid per-machine certificates!
If you pick per-machine certificates, then by definition you have to man-in-the-middle, which breaks a handful of apps that require (and enforce!) end-to-end cryptography.
Okay, fine, you have Let's Encrypt issuing per-site certificates, automatically, via your public endpoint. Nothing could be easier! Right up until someone in secops says that you also need to make the non-production sites have "private endpoints". Now you need two distinct mechanisms for certificate issuance, one internal only and one public. Double the fun.
It just goes on and on: You'll also likely have to deal with CDNs, API gateways, Lambda/Functions, S3 / blob accounts, legacy virtual machines, management endpoints, infrastructure consoles, and so on. Some of these have integrated issuance/renewal capability, some don't. Some break because of your DNS CAA records. Some don't. Some send notifications before expiry, some don't. And so forth...
As a random example, I recently had to deal with a GIS product that shall not be named that requires an HTTPS REST API to set or change its certificates. Yes. You heard me. HTTPS. To set a valid certificate, you first have to automate against an HTTPS endpoint with an invalid certificate, restart the service, do a multi-minute wait in a retry loop, and then continue the automation. Failure to handle any one of the dozen failure scenarios and corner cases will lead to a dead service that won't start at all. Fun stuff.
Automated certificate issuance for complex architectures is definitely not a solved problem in general.
> this shatters the myth of Google(r) superiority. It turns out people are universally entities comprised of sloppy, error-prone wetware.
Golang was created with the specific goal of sidestepping what had become a bureaucratic C++ "readability" process within Google, so yes. Goodhart's law in action.
The problem with C++ is not getting readability, it's the footguns! Footguns everywhere! Plus the compile times.
That’s not at all true. Go has readability as well.
It has it now. For a long, long time, readability for Golang at Google was "Read some of the other Go code out there. Try to make it look like that."
(I don't have enough historical knowledge to comment on the notion that Go was invented to sidestep the need to get C++ readability for more team members, though.)
Googlers' network environment would be extremely good, so it's not weird.
I think one of the most insightful things I've learned in life is that books, movies, articles, etc. have warped my perception of the "elites." When you split hairs, there is certainly a difference in skill/knowledge _but_ at the end of the day, everyone will make mistakes. (error-prone wetware, haha)
I totally get it though. I mean, as a recent example, look at FTX. I knew SBF and was close to working for Alameda (didn't want to go to Hong Kong tho). Over the years I thought that I was an idiot for missing out and that everyone there was a genius. Turns out they weren't and not only that _everyone_ got taken for a ride. VCs throwing money, celebrities signing to say anything, politicians shaking hands, etc.
Funny, I did see a leaked text when Elon was trying to buy Twitter, SBF was trying to be part of it and someone didn't actually think he had the money, so maybe someone saw the BS.
All that aside tho, yea, this is something I forget and "re-learn" all the time. A bit concerning if you think about it too much! I wonder if that's the same for other fields of work. I mean, if there was an attack on a power grid, how many people in the US would even know _how_ to fix it? Are the systems legacy? I've seen some code bases where one file could be deleted and it would take tons of hours to even figure out what went wrong, lol.
There's nothing elite about being a programmer at any of the big tech companies. It's software engineering and design. It's the same everywhere, just different problem domains.
I've worked with some of the highest ranking people in multiple large tech companies. The truth is there is no "elite". CTOs of the biggest companies in the world are just like you and me.
> There's nothing elite about being a programmer at any of the big tech companies. It's software engineering and design. It's the same everywhere
I just can't agree with this. I have worked with tons of companies and generally, the "sweet-spot" is new mid-sized firms. There is a considerable difference in quality, on almost every metric when working with a bad firm. I've worked with a Fortune 10 company and it was one of the worst applications of "software and design" I've ever seen.
1000 layers of bureaucracy and relatively bad salaries. I'm not looking to speak ill of anyone but we shouldn't pretend you can hire an army of top notch SDEs for bottom of the barrel pay.
The result is a mess.
> I've worked with some of the highest ranking people in multiple large tech companies. The truth is there is no "elite". CTOs of the biggest companies in the world are just like you and me.
I can certainly agree with this in a sense. Everyone makes mistakes. Nobody is "genius" like you see in movies. However, there is a difference in skill and experience (save nepotism or pure luck). If you want to say we all have the same potential, I 100% agree. As it stands though, if you took the "average" developer and I mean truly the _average_, not skewed by personal experience, the average FAANG dev is going to be "better."
I mean, look at how many programmers can't fizzbuzz.