Comment by eigenvalue

2 years ago

Glad to see Ilya is back in a position to contribute to advancing AI. I wonder how they are going to manage to pay the kinds of compensation packages that truly gifted AI researchers can make now from other companies that are more commercially oriented. Perhaps they can find people who are ideologically driven and/or already financially independent. It's also hard to see how they will be able to access enough compute now that others are spending many billions to build huge new GPU data centers. You sort of need at least the promise/hope of future revenue in a reasonable time frame to marshal the kinds of resources it takes to really compete today with the big AI super labs.

> compensation packages that truly gifted AI researchers can make now

I guess it depends on your definition of "truly gifted" but, working in this space, I've found that there is very little correlation between comp and quality of AI research. There are absolutely some brilliant people working for big names and making serious money, but there are also plenty of really talented people working for smaller startups doing incredible work while getting paid less, academics making very little, and even the occasional "hobbyist" making nothing and churning out great work while hiding behind an anime girl avatar.

OpenAI clearly has some talented people, but there's also a bunch of the typical "TC optimization" crowd in there these days. The fact that so many were willing to resign with sama if necessary seems driven more by fear of losing their nice compensation packages than by any obsession with doing top-tier research.

  • Two people I knew recently left Google to join OpenAI. They were solid L5 engineers on the verge of being promoted to L6, and their TC is now $900k. And they are not even doing AI research, just general backend infra. You don't need to be gifted, just good. And of course I can't really fault them for joining a company for the purpose of optimizing TC.

  • "...even the occasional "hobbyist" making nothing and churning out great work while hiding behind an anime girl avatar."

    The people I often have the most respect for.

  • Definitely true of even normal software engineering. My experience has been the opposite of expectations: TC-creep has infected the industry to an irreparable degree, and the most talented people I've ever worked around or with are in boring, medium-sized enterprises in the midwest US or Australia. You'll probably never hear of them, and every big tech company would absolutely love to hire them but just can't figure out an interview process that tells them apart from the TC grifters.

    TC is actually totally uncorrelated with the quality of talent you can hire, beyond some low number that pretty much any funded startup could pay. Businesses hate to hear this, because money is easy to turn the dial up on; but most have no idea how to turn the dial up on what really matters to high talent individuals. Fortunately, I doubt Ilya will have any problem with that.

    • I find this hard to believe having worked in multiple enterprises and in the FAANG world.

      In my anecdotal experience, I can only think of one or two examples of someone from the enterprise world who I would consider outstanding.

      The overall quality of engineers is much higher at the FAANG companies.

    • Perfect sort of thing to say to get lots of upvotes, but absolutely false in my experience at both enterprise and big tech.

Academic compensation is different from what you'd typically see discussed on Hacker News. Likewise, academic performance is evaluated differently than what you'd expect as a software engineer. Ultimately, everyone cares about scientific impact, so academic compensation relies on name and recognition far more than money. Personally, I care about the performance of the researchers (i.e., their publications), the institution's larger research program (and its resources), and the institution's commitment to my research (e.g., fellowships and tenure). I want to do science for my entire career, so I prioritize longevity rather than a quick buck.

I’ll add that the lack of compute resources was a far worse problem early in the deep learning research boom, but the market has adjusted and most researchers are able to be productive with existing compute infrastructure.

  • But wouldn't the focus on "safety first" sort of preclude them from giving their researchers the unfettered right to publish their work however and whenever they see fit? Isn't the idea to basically try to solve the problems in secret and only release things when they have high confidence in the safety properties?

    If I were a researcher, I think I'd care more about ensuring that I get credit for any important theoretical discoveries I make. This is something that LeCun is constantly stressing and I think people underestimate this drive. Of course, there might be enough researchers today who are sufficiently scared of bad AI safety outcomes that they're willing to subordinate their own ego and professional drive to the "greater good" of society (at least in their own mind).

    • If you're working on superintelligence I don't think you'd be worried about not getting credit due to a lack of publications, of all things. If it works, it's the sort of thing that gets you in the history books.

In the endgame, a "non-safe" superintelligence seems easier to create, so like any other technology, some people will create it (even if just because they can't make it safe). And in a world with multiple superintelligent agents, how can the safe ones "win"? It seems like a safe AI is at an inherent disadvantage for survival.

  • The current intelligences of the world (us) have organized their civilization such that conforming members of society are the norm and criminals are the outcasts. Certainly not a perfect system, but something along those lines for the most part.

    I like to think AGIs will decide to do that too.

    • I disagree that civilization is organized along the lines of conformists and criminals. Rather, I would argue that the current intelligences of the world have primarily organized civilization in such a way that a small percentage of its members control the vast majority of all human resources, and the bottom 50% control almost nothing[0].

      I would hope that AGI would prioritize humanity itself, but since it's likely to be created and/or controlled by a subset of that same very small percentage of humans, I'm not hopeful.

      [0] https://en.wikipedia.org/wiki/Wealth_inequality_in_the_Unite...

    • It's a beautiful system, wherein "criminality" can be used to label and control any and all persons who disagree with the whim of the incumbent class.

      Perhaps this isn't a system we should be trying to emulate with a technology that promises to free us of our current inefficiencies or miseries.

    • Considering the U.S. is likely to have a convicted felon as its next president, I don't agree with this characterization.

> Perhaps they can find people who are ideologically driven

Given the nature of their mission, this shouldn't be too terribly difficult; many gifted researchers do not go to the highest bidder.

Generally, the mindset that makes the best engineers is an obsession with solving hard problems. Anecdotally, there's not a lot of overlap between the best engineers I know and the best-paid engineers I know. The best engineers I know are too obsessed with solving problems to be sidetracked by the salary game. The best-paid engineers I know are great engineers, but they spend a large amount of time playing the salary game, bouncing between companies, and always doing the work that looks best on a resume rather than the best work they know how to do.

Great analysis, but you're missing two key factors IMO:

1. People who honestly think AGI is here aren't thinking about their careers in the typical sense at all. It's sorta ethical/"ideological", but it's mostly just practical.

2. People who honestly think AGI is here are fucking terrified right now, and were already treating Ilya as a spiritual center after Altman's coup (quite possibly an unearned title, but oh well, that's history for ya). A rallying cry like this -- so clearly aimed at the big picture instead of marketing that they don't even need CSS -- will be seen as a do-or-die moment by many, I think. There are only so many "general industry continues to go in the direction experts recommend against; corporate consolidation continues!" headlines an ethical engineer can take before snapping and trying to take on Goliath, odds be damned.

They will be able to pay their researchers the same way every other startup in the space is doing it – by raising an absurd amount of money.

My guess is they will work on a protocol to drive safety, with the view that every material player will use it (or be regulated and required to use it), which could lead to a very robust business model.

I assume that OpenAI and others will support this effort (comp, training, etc.), and that they will be very well positioned to offer comparable $$$ packages, leverage resources, and so on.

Daniel Gross (with his partner Nat Friedman) invested $100M into Magic alone.

I don't think SSI will struggle to raise money.

I think they will easily find enough capable altruistic people for this mission.

Are you seriously asking how the most talented AI researcher of the last decade will be able to recruit other researchers? Ilya saw the potential of deep learning way before other machine learning academics.

Last I checked, researcher salaries haven't even reached software engineer levels.

  • The kind of AI researchers being discussed here likely make an order of magnitude more than run of the mill "software engineers".

    • You're comparing top names with run of the mill engineers maybe, which isn't fair.

      And maybe you need to discover talent rather than buy talent from the previous generation.

    • Unless you know something I don’t, that’s not the case. It also makes sense: engineers are far more portable, and scarcity isn’t an issue (many ML PhDs find engineering positions).

  • That is incredibly untrue and has been for years in the AI/ML space at many startups and at Amazon, Google, Facebook, etc. Good ML researchers have been making a good amount more for a while (source: I've hired both and been involved in leveling and pay discussions for years).