Meta Superintelligence Labs' first paper is about RAG

2 days ago (paddedinputs.substack.com)

https://arxiv.org/abs/2509.01092

This has nothing to do with superintelligence; it's just that the people who were working on the paper prior to the re-org happened to publish after the name change.

Though it is notable that, contrary to many predictions (on HN and Twitter) that Meta would stop publishing papers and become like other AI labs (e.g. OpenAI), they have continued their rapid pace of releasing papers AND open source models.

  • What model(s) have Meta released since the Lab re-org?

    Also, that wasn't based purely on hearsay; Zuck explicitly said:

    > We believe the benefits of superintelligence should be shared with the world as broadly as possible. That said, superintelligence will raise novel safety concerns. We'll need to be rigorous about mitigating these risks and careful about what we choose to open source. Still, we believe that building a free society requires that we aim to empower people as much as possible. [0]

    [0]: https://www.meta.com/superintelligence/

  • Still, I think the optics matter... the fact that Meta's still putting out technical work (and open sourcing it) after the restructure says a lot about where they want to position themselves

  • Open weights models, not open source. And even the weights are under a specific license that is not as permissive as Apache 2.0.

    • This is the right terminology. Model weights are literally compiled binary data; they are the output of an algorithm run on a bunch of source data. That training dataset is the "source" of the model. Training data (or the scripts used to generate it) is human-readable and modifiable, like source code. Binary weights are not.

      4 replies →

    • I'm not a lawyer, but I believe that the weights aren't subject to copyright. So, you can use them outside of Meta's license agreement provided you get them from somewhere else.

It's kinda funny: Meta has long had some of the best in the field, but left them untapped. I really think if they just took a step back, stopped being so metric-focused, and let their people freely explore, they'd be winning the AI race. But with this new team, I feel like Meta mostly hired the people who are really good at gaming the system. The people who care more about the money than the research.

A bit of this is true at every major lab. There's tons of untapped potential. But these organizations are very risk averse. I mean, why not continue with the strategy that got us to the point we're at in the first place? Labs used to hire researchers and give them a lot of free rein. But those times ended, and AI progress also slowed down. Maybe if you want to get ahead you gotta stop thinking like everyone else.

Well Meta... you can "hold me hostage" for a lot cheaper than those guys. I'm sure this is true for hundreds of passionate ML researchers. I'd take a huge pay cut to have autonomy and resources. I know for a fact there are many working at Meta right now who would do the same. So maybe if you're going to throw money at the problem, diversify a bit and look back at what made SV what it is today and what made AI take leaps forward.

  • My theory is that as more people compete, the top candidates become those who are best at gaming the system rather than actually being the best. Someone has probably studied this. My only evidence is job applications for GAFAM and Tinder tho.

    • I've spent most of my career working, chatting and hanging out with what might be best described as "passionate weirdos" in various quantitative areas of research. I say "weirdos" because they're people driven by an obsession with a topic, but don't always fit the mold by having the ideal combination of background, credentials and personality to land them on a big tech company research team.

      The other day I was spending some time with a researcher from DeepMind and I was surprised to find that while they were sharp and curious to an extent, nearly every ounce of energy they expended on research was strategic. They didn't write about research they were fascinated by; they wrote and researched on topics they strategically felt had the highest probability of getting into a major conference in a short period of time and earning them a promotion. While I was a bit disappointed, I certainly didn't judge them, because they are just playing the game. This person probably earns more than many rooms of smart, passionate people I've been in, and that money isn't for smarts alone; it's for appealing to the interests of people with the money.

      You can see this very clearly by comparing the work being done in the LLM space to that being done in the Image/Video diffusion model space. There's much more money in LLMs right now, and the field is flooded with papers on any random topic. If you dive in, most of them are not reproducible or make very questionable conclusions based on the data they present, but that's not of very much concern so long as the paper can be added to a CV.

      In the stable diffusion world it's mostly people driven by personal interest (usually very non-commercial personal interests), and you see tons of innovation in that field but almost no papers. In fact, if you really want to understand a lot of the most novel work coming out of the image generation world, you often need to dig into PRs made by anonymous users with anime-themed profile pics.

      The bummer of course is that there are very hard limits on what any researcher can do with a home GPU training setup. It does lead to creative solutions to problems, but I can't help but wonder what the world would look like if more of these people had even a fraction of the resources available exclusively to people playing the game.

      17 replies →

    • But there is no way to know who is truly the 'best'. The people who position and market themselves to be viewed as the best are the only ones who even have a chance to be viewed as such. So if you're a great researcher but don't project yourself that way, no one will ever know you're a great researcher (except for the other great researchers, who aren't really invested in communicating how great you are). The system seems to incentivize people to optimize not only their output but also their image. This isn't a bad thing per se, but it is sort of antithetical to the whole shoulders-of-giants ethos of science.

      2 replies →

    • Yeah I think this is a general principle. Just look at the quality of US presidents over time, or generations of top physicists. I guess it’s just a numbers game: the number of genuinely interested people is relatively constant while the number of gamers grows with the compensation and perceived status of the activity. So when compensation and perceived status skyrockets the ratio between those numbers changes drastically.

      1 reply →

    • It is pretty simple - if the rewards are great enough and the objective difficult enough, at some point it becomes more efficient to kneecap your competitors rather than to try to outrun them.

      I genuinely think science would be better served if scientists got paid modest salaries to pursue their own research interests and all results became public domain. So many universities now fancy themselves startup factories, and startups are great for some things, no doubt, but I don't think pure research is always served by this strategy.

    • I would categorize people into two broad extremes: 1) those who don't care two hoots about what others or the system expect of them, and in that sense are authentic, and 2) those who only care about what others or the system expect of them, and in that sense are not authentic. There is a spectrum in between.

    • That's what happens at the top of most competitive domains. Just take a look at pro sports; guys are looking for millimeters to shave off, and they turn to "playing the game" rather than merely improving athletic performance. Watch a football game (either kind) and a not-small portion of the action is guys trying to draw penalties or exploit the rules to get an edge.

    • Anytime a system gets hyper-competitive and the stakes are high, it starts selecting for people who are good at playing the system rather than just excelling at the underlying skill

    • This is an interesting theory. I think there is something to it. It is really hard to do good in a competitive environment. Very constrained.

    • I have seen absolutely incredible, best in the world type engineers, much smarter than myself, get fired from my FAANG because of the performance games.

      I persist because I'm fantastic at politics while being good enough to do my job. Feels weird man.

  • > Labs used to hire researchers and give them a lot of free rein.

    I can't think of it ever really paying off. Bell Labs is the best example: amazing research that was unrelated to the core business of the parent company. Microsoft Research is another great one. Lots of interesting research that... got MS some nerd points? But it has materialized into very, very few actual products and revenue streams. Moving AI research forward doesn't help Meta build any moats or revenue streams. It just progresses our collective knowledge.

    On the "human progress" scale it's fantastic to put lots of smart people in a room and let them do their thing. But from a business perspective it seems to almost never pay off. Waiting on the irrational charity of businesses executive is probably not the best way to structure thing.

    I'd tell them to go become academics.. but all the academics I know are just busy herding their students and attending meetings

    • Perhaps these companies just end up with so much money that they can't possibly find ways to spend all of it rationally on purely product-driven work, and just end up funding projects with no clear business case.

      1 reply →

    • It paid off for PARC, iirc the laser printer justified lots of other things that Xerox didn't profit from but turned out to be incredibly important.

    • The problem here is management expecting researchers to dump out actionable insights like a chicken laying eggs. Researchers exist so that you can rifle through their notes and steal ideas.

      1 reply →

    • Indeed. And it feels like there is this untold in-between where, if you belong to an unknown applied AI team, you don’t have to deal with academia’s yak shaving, you don’t have to deal with Meta’s politics, and you end up single-handedly inventing TRMs.

    • How many patents did that research result in that paid off in terms of use, licensing and royalties?

    •   > I can't think of it ever really paying off
      

      Sure worked for Bell Labs

      Also it is what big tech was doing until LLMs hit the scene

      So I'm not sure what you mean by it never paying off. We were doing it right up until one of those things seemed to pay off, and then we hyper-focused on it. I actually think this is a terrible thing we frequently do in tech. We find promise in a piece of tech and hyper-focus on it. Specifically, we hyper-focus on how to monetize it, which ends up stunting the technology because it hasn't had time to mature, and we're trying to monetize the alpha product instead of trying to get that thing to beta.

        > But from a business perspective it seems to almost never pay off.
      

      So this is actually what I'm trying to argue. It actually does pay off. It has paid off. Seriously, look again at Silicon Valley and how we got to where we are today. And look at how things changed in the last decade...

      Why is it that we like off-the-wall thinkers? That programmers used to be known as a bunch of nerds and weirdos? How many companies were started out of garages (Apple)? How many started as open source projects (Android)? Why did Google start giving work-lifestyle perks and 20% time?

      So I don't know what you're talking about. It has frequently paid off. Does it always pay off? Of course not! It frequently fails! But that is pretty true for everything. Maybe the company stocks are doing great[0], but let's be honest, the products are not. Look at the last 20 years and compare it to the 20 years before that. The last 20 years have been much slower. Now maybe it is a coincidence, but the biggest innovation in the last 20 years has been in AI, and from 2012 to 2021 there were a lot of nice free-rein AI research jobs at these big tech companies where researchers got paid well, had a lot of autonomy in research, and had a lot of resources at their disposal. It really might be a coincidence, but a number of times things like this have happened in history and they tend to be fairly productive. So idk, you be the judge. It's hard to conclude that this is definitely what creates success, but I find it hard to rule it out.

        > I'd tell them to go become academics.. but all the academics I know are just busy herding their students and attending meetings
      

      Same problem, different step of the ladder

      [0] https://news.ycombinator.com/item?id=45555175

  • I always wonder about that. Those $100M mathematicians... how can they have room to think under Meta's crushing IMPACT pressure?

    • For just 10% of that money, a $100M mathematician can hire 10 $1M mathematicians or a whole math department at some European university to do the work and the thinking for them, and thus beat any pressure while resting and vesting on the remaining 90%.

      2 replies →

  • The money chase is real. You can kind of tell who's in it for the comp package vs. who'd be doing the same work on a laptop in their garage if that's what it took

  • AI progress has slowed down?! By what metric?

    Quite the statement for anybody who follows developments (without excluding xAI).

  • winning the AI race? Meta? Oh that was a good one. Zuck is a follower not a leader. It is in his DNA

  • > I really think if they just took a step back and stop being so metric focused and let their people freely explore then they'd be win..

    This is very true, and not just in AI.

    I think if they weren’t so metric focused they probably wouldn’t have hit so much bad publicity and scandal too.

  • "Maybe if you want to get ahead you gotta stop thinking like everyone else"

    Well for starters you need a leader who can rally the troops who "think(s) different" - something like a S Jobs.

    That person doesn't seem to exist in the industry right now.

  • I thought Alex Wang was a very curious choice. There are so many foundational AI labs with interesting CEOs... I get that Wang is remarkable in his own right, but he basically just built MTurk and timed the bubble.

    Doesn't really scream CEO of AGI to me.

    • A lot of people also don't know that many of the well-known papers are just variations on small-time papers with a fuck ton more compute thrown at the problem. Probably the feature that correlates most strongly with being a successful researcher is compute. Many have taken this to claim that the GPU-poor can't contribute, but that ignores so many other valid explanations... and we wonder why innovation has slowed... It's also weird because if compute were all you need, then there's a much cheaper option than what Zuck paid. But he's paying for fame.

      10 replies →

    • The reporting at the time said that he was Mark’s 5th choice or similar. It is fairly clear he would have preferred Ilya, Murati, Mark Chen, and perhaps others, but they said no, and Alex Wang was the first one to say yes.

      13 replies →

    • Alexandr Wang is not interesting, and is a few steps short of a fraud whom Mark had to bail out because he was so heavily co-invested.

      Shareholders should be livid if they knew a single thing about what was going on.

      2 replies →

A great idea, bypassing as much conversion as possible between vector space and natural language tokens. Reminds me of a discussion about having AIs “talk” to each other using vector space.

There was an interesting quote, “plain old BM25 from 1994 outperforms vector search on recall”, that is super relevant to what I did yesterday. I am trying to use small local models more often, and yesterday I wrote Common Lisp code that uses a large corpus of text and a user query or prompt to construct a fairly concise one-shot prompt with selected context from the text corpus. This is RAG, and I used both BM25 and vector-embedding matching. I added the code and an example as a new chapter in my CL book yesterday afternoon (link directly to the new material: https://leanpub.com/lovinglisp/read#leanpub-auto-autocontext...). BM25 is fast. This is new code, and I will certainly be experimenting more with it, but as-is it is useful when working with small local LLMs.
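
For anyone curious what the BM25 half of that looks like outside of Common Lisp, here is a minimal Python sketch of BM25-based context selection (using the rank_bm25 package; the corpus and query are made up for illustration, not taken from the book chapter):

    from rank_bm25 import BM25Okapi

    # Hypothetical corpus: pre-chunked pieces of a larger text.
    chunks = [
        "BM25 is a bag-of-words ranking function dating back to 1994.",
        "Vector embeddings map text to points in a semantic space.",
        "RAG augments an LLM prompt with retrieved context passages.",
    ]

    # BM25 operates on tokenized text; whitespace splitting is the crudest option.
    bm25 = BM25Okapi([c.lower().split() for c in chunks])

    query = "how does retrieval augmented generation work"
    scores = bm25.get_scores(query.lower().split())

    # Keep the top-k chunks and splice them into a concise one-shot prompt.
    top_k = sorted(range(len(chunks)), key=lambda i: scores[i], reverse=True)[:2]
    context = "\n".join(chunks[i] for i in top_k)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    print(prompt)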

One thing I don't get about the ever-recurring RAG discussions and the hype men proclaiming "RAG is dead" is that people seem to be talking about wholly different things. My mental model is that what is called RAG can be either:

- a predefined document store / document chunk store where every chunk gets a vector embedding, and a lookup decides what gets pulled into context, so as not to have to pull in whole classes of documents and fill the context up

- the web-search-like features in LLM chat interfaces, where they do keyword search and pull relevant documents into context, but somehow only ephemerally, with the full documents not taking up context for the rest of the thread (unsure about this, did I understand it right?).

With the new models with million-plus-token context windows, some were arguing that we can just throw whole books into the context non-ephemerally, but doesn't that significantly reduce the diversity of possible sources we can include at once if we hard-commit to everything staying in context forever? I guess it might help with consistency? But is the mechanism by which we decide what to keep in context not still some kind of RAG, just with larger chunks of whole documents instead of only parts?

I'd be ecstatic if someone who really knows their stuff could clear this up for me.

  • Technically, RAG is anything that augments generation with external search. However, it often has a narrower meaning: "uses a vector DB."

    Throwing everything into one large context window is often impractical - it takes much more time to process, and many models struggle to find information accurately if too much is going on in the context window ("lost in the middle").

    The "classic" RAG still has its place when you want low latency (or you're limited by VRAM) and the results are already good enough.

  • We can't throw in infinite things in the context though.

    My impression is that GPT-5 gets confused, not quite right away, but after a couple of pages it has no idea. It doesn't take pages on pages before it forgets things.

    • I’m currently experimenting with prompts of ~300k tokens for a certain classification task and I think I might be able to make it work. GPT5 chokes but Gemini 2.5 Pro is showing promise. Jury’s still out and I might change my tune in a couple of weeks.

      1 reply →

  • > My mental model is that what is called RAG can either be:

    RAG is confusing, because if you look at the words making up the acronym RAG, it seems like it could be either of the things you mentioned. But it originally referred to a specific technique of embeddings + vector search - this was the way it was used in the ML article that defined the term, and this is the way most people in the industry actually use the term.

    It annoys me, because I think it should refer to all techniques of augmenting, but in practice it's often not used that way.

    There are reasons that make the "embeddings" idea special - namely, it's a relatively new technique that fits LLMs very well, because it's semantic search - meaning it works on "the same input" as LLMs do, which is a free-text query. (As opposed to traditional lookups that work on keyword search or similar.)

    As for whether RAG is dead - if you mean specifically vector embeddings and semantic search, it's possible, because you could theoretically use other techniques for augmentation, e.g. an agent that understands a user question about a codebase and uses grep/find/etc. to look for the information, or composes a search query to search the internet for something. But it's definitely not going to die in that second sense of "we need some way to augment the LLM's knowledge before text generation"; that will probably always be relevant, as you say.
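
    (As a toy illustration of that grep-style augmentation, a Python sketch; the repo path and keywords are hypothetical, and it assumes a Unix grep on the PATH:)

        import subprocess

        def grep_context(keywords, repo_path, max_lines=40):
            """Crude lexical retrieval: grep the codebase and return matching lines."""
            pattern = "|".join(keywords)
            result = subprocess.run(
                ["grep", "-rniE", pattern, repo_path],
                capture_output=True, text=True, check=False,
            )
            return "\n".join(result.stdout.splitlines()[:max_lines])

        # context = grep_context(["retry", "timeout"], "./src")
        # prompt = f"Using this code context:\n{context}\n\nExplain how retries are configured."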

  • The answer is adaptability.

    In both cases, for "Question Answering", it's about similarity search, but there are two main orthogonal differences between RAG and non-RAG:

    - Knowing the question at the time of index building

    - Higher-order features: the ability to compare fetched documents with one another and refine the question

    Non-RAG, a.k.a. a multi-layer (non-causal) transformer with infinite context, is the more generic version. It is fully differentiable, meaning you can use machine learning to learn how to non-RAG better. Each layer of the transformer can use the previous layer to reason and refine the similarity search. (A causal transformer knows the question by the time it is fed the question, and can choose to focus its attention on different parts of the previously computed features of the provided documents, but it may benefit from having some reflection tokens, or better: being given the question before being presented with the documents, provided you've trained it to answer like that.)

    RAG is an approximation of the generic case to make it faster and cheaper. It usually breaks end-to-end differentiability by using external tools, so if you want to use machine learning to learn how to RAG better you will need some variant of reinforcement learning, which is slower to learn things. RAG usually doesn't know the question at the time of index building, and documents are treated independently of each other, so there are no (automatic) higher-order features (embeddings are fixed).

    A third usual approximation is to feed the output of RAG into non-RAG, to hopefully get the best of both worlds. You can learn the non-RAG-given-RAG part with machine learning (if you train it on conversations where it used RAG), but the RAG part won't improve by itself.

    Non-RAG needs to learn, so it needs a big training dataset, but fortunately it can pick up question-answer pairs in an unsupervised fashion when you feed it the whole web, and you only need a small instruction-tuning and preference-optimization dataset to shape it to your needs. If performance isn't what you expect in a specific case, you can provide more specific examples and retrain the model until it gets it, giving you better performance for the case you were interested in. You can improve the best case, but it's hard to improve the worst case.

    RAG gives you more control over what you feed the model, but the content needs to be provided in a more structured way. You can prevent worst cases more easily, but it's hard to improve the good case.

this was really weird to read:

> But RAG is a very real world, practical topic for something as significant as a new lab’s first paper.

I would expect exactly the opposite - that a new lab would put out a few random papers that happen to be in areas their researchers were interested in and already working on, and once people had been working together a while and developed some synergy they would maybe come out with something really groundbreaking.

do people really view a "first paper" as something deeply significant and weighty? because that just seems like a good way to get bogged down in trying to second guess whether any given paper was good enough to be your all-important debut!

  • As an academic I would expect the same as you, and no, to my knowledge "first paper" is meaningless, at least in academia. Most people's first paper is some small contribution to what their PhD supervisor is doing at the time, where the student tries their best at writing but it ends up so heavily edited that probably 90% of the final text comes from the supervisor :) So typically first papers don't define or represent a researcher. When you start you just don't have the experience to have a great idea and carry it through to a good paper.

    Of course here we are talking about a lab, not an individual person, but still I haven't heard of first papers being considered special in any way, even for labs.

Can we have a more informative, less clickbaity, title?

This was a very obvious next step, I played around with implementing something similar at one point.

In general we need to make it simpler for LLMs to take in different forms of embeddings. At least frameworks that simplify it.

I am not surprised, because the culture at Meta is not at all, even in the slightest, to focus on science for the sake of it. It’s actively purged out of you. The focus is on metrics and how the bottom line is impacted. So this is in line with that.

  • It’s not that simple. I worked at a supplier of Meta and they paid us large NREs to fund our exploratory work

  • Yeah, and this problem is near impossible to fix once it has infested the culture of the firm.

    • It's not always a bad thing though; in this case they looked for a practical win and found one, because impractical wins can't make them money.

  • "People are using our service more!" turns out to be a horrible metric when they outright lie to you (x has sent you a message! - when no message exists)

This is not work by any of the high profile new hires, in case folks are confused.

I am not sure if I understand things correctly.

I came to believe that LLMs work with token embeddings. Is REFRAG then just "something" in front of the LLM, with the decoder being the RL policy that expands only some chunk embeddings into token embeddings that can be fed to the LLM? Or does REFRAG need you to 'tune' the LLM to be able to work with both token embeddings and chunk embeddings?

Seems very incremental and very far from the pompous 'superintelligence' goal.

  • If you can collapse "retrieve this complex chunk when it is needed" into a single token, what else can you put into a token?

    "Send this through the math coprocessor." "Validate against the checklist." "Call out to an agent for X." "Recheck against input stream Y." And so on.

    Retrieval augmentation is only one of many uses for this. If this winds up with better integration with agents, it is very possible that the whole is more than the sum of its parts.

  • Think about it this way; they are encoding whole "thoughts" or "ideas" as single tokens.

    It's effectively a multimodal model, which handles "concept" tokens alongside "language" tokens and "image" tokens.

    A really big conceptual step, actually, IMO.

  • It’s unlikely that the existing LLM architecture will evolve into anything that resembles superintelligence any more than it does already.

    Which means that modifications to the architecture, and combining it with other components and approaches, are the next likely step. This paper fits that.

  • A 30 fold improvement seems a tad more than incremental.

    • I can start brushing my teeth 30 times faster but it won't change my life. This is nice for RAG but it's a very localized improvement. And 30× sounds big, but it's also just a single order of magnitude.

      2 replies →

So this looks essentially like continuous prompting (see prefix tuning) with RL-driven selection of what to present as tokens and what as continuous inputs (embeddings).

I couldn't immediately see in their graphs/tables any comparison against simple lexical/statistical context compression, such as candidate selection of chunks using TF-IDF, word overlap, etc. For most of us in industry, we need these quick wins that give us performance equivalent to sending a huge amount of information to the LLM while compressing it by 10x.
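
For reference, the kind of cheap lexical baseline I have in mind is only a few lines, e.g. TF-IDF chunk selection with scikit-learn (a sketch with toy data, not something the paper evaluates):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def select_chunks(chunks, query, budget=2):
        """Keep only the chunks most lexically similar to the query."""
        vec = TfidfVectorizer(stop_words="english")
        doc_matrix = vec.fit_transform(chunks)          # one sparse row per chunk
        sims = cosine_similarity(vec.transform([query]), doc_matrix).ravel()
        keep = sorted(sims.argsort()[::-1][:budget])    # top-k, original order preserved
        return [chunks[i] for i in keep]

    chunks = [
        "Quarterly revenue grew 12% year over year.",
        "The office cafeteria menu changes every Monday.",
        "Operating margin improved due to lower cloud costs.",
        "The annual holiday party is scheduled for December.",
    ]
    print(select_chunks(chunks, "how did revenue and operating margin change?"))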

This was inevitable. You can't keep training LLMs and expect that to be the answer to the evolution of AI. Yes, that will happen and we'll keep creating new, more refined, and bigger models, but it's like DNA or the cortex of the brain. After that you need systems that essentially "live" for years digesting information and develop a more refined way to process, store, and retrieve it. Compression of RAG was also inevitable. It's like the B-tree index of a database.

The thing is, we're probably one or two iterations away from being good enough on the RAG pipeline, and then we'll need to focus more on the other pieces of sensory input that need to be connected and processed at higher throughput. Right now it's not fast or efficient enough.

This is where the likes of Google will shine. They are probably two decades ahead of everyone on internal technology, and there is some team with the breakthrough, but it hasn't seen the light of day yet. What's coming out of DeepMind is really a forced effort at productization and publication of work in a consumable format, but internally they are likely way ahead. I don't have as much faith in Meta's efforts despite seeing things like this. Quite frankly, the people doing the work should move to more honourable companies, not feed crack addiction in the form of Meta's universe.

  • exactly. the real focus internally is working on new architectures. there is no other possibility.

Did a "superintelligence" lab publish a superintelligence related paper with no results for intelligence? What measured improvements did this proposal make in their LLM's intelligence?

I hate articles that don't define their acronyms! Lazy? Intentionally exclusive?

So that others don't also have to look it up, it's Retrieval-Augmented Generation (RAG).

They even say it's "a topic that we didn’t expect"... so... perhaps many people wouldn't have heard of it?

Interesting. All developers I know who tinkered around with embeddings and vector similarity scoring were instantly hooked. The efficiency of computing the embeddings once and then reusing as many times as needed, comparing the vectors with a cheap <30-line function is extremely appealing. Not to mention the indexing capabilities to make it work at scale.

IMO vector embedding is the most important innovation in computing of the last decade. There's something magical about it. These people deserve some kind of prize. The idea that you can reduce almost any intricate concept including whole paragraphs to a fixed-size vector which encapsulates its meaning and proximity to other concepts across a large number of dimensions is pure genius.

  • Vector embedding is not an invention of the last decade. Featurization in ML goes back to the 60s - even deep learning-based featurization is decades old at a minimum. Like everything else in ML this became much more useful with data and compute scale

  • If you take the embedding for king, subtract the embedding for male, add the embedding for female, and lookup the closest embedding you get queen.

    The fact that simple vector addition and subtraction can encode the concept of royalty and gender (among all sorts of others) is kind of magic to me.
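
    That analogy is easy to try yourself; a quick Python sketch with gensim and pretrained GloVe vectors (the first call downloads the vectors, and the exact neighbors depend on which set you pick):

        import gensim.downloader as api

        # Pretrained 50-dimensional GloVe word vectors.
        vectors = api.load("glove-wiki-gigaword-50")

        # king - man + woman ~= queen, via plain vector arithmetic on the embeddings.
        print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))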

  • Vector embeddings are slightly interesting because they come pre-trained with large amounts of data.

    But similar ways to reduce huge numbers of dimensions to a much smaller set of "interesting" dimensions have been known for a long time.

    Examples include principal component analysis / singular value decomposition, which was the first big breakthrough in face recognition (in the early 90s), and was also used in latent semantic indexing, the Netflix Prize, and a large pile of other things. And the underlying technique was invented in 1901.

    Dimensionality reduction is cool, and vector embedding is definitely an interesting way to do it (at significant computational cost).
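
    (The latent-semantic-indexing variant is still a handful of lines with scikit-learn; a sketch with toy documents:)

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD

        docs = [
            "the cat sat on the mat",
            "a dog chased the cat",
            "stock prices fell sharply today",
            "markets dropped on inflation fears",
        ]
        term_doc = TfidfVectorizer().fit_transform(docs)   # sparse term-document matrix
        lsa = TruncatedSVD(n_components=2)                 # classic LSI: truncated SVD
        doc_vecs = lsa.fit_transform(term_doc)             # each doc reduced to 2 latent dims
        print(doc_vecs)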

  • Vector embeddings are so overhyped. They're decent as a secondary signal, but they're expensive to compute and fragile. BM25-based solutions are more robust and have WAY lower latency, at the cost of some accuracy loss vs hybrid solutions. You can get the majority of the lift of hybrid solutions from ingest-time semantic expansion / reverse-HyDE-style input annotation combined with sparse BM25 retrieval, at a fraction of the computational cost.

    • But embeddings are much cheaper to compute than inference, and you only have to compute them once for any piece of content and can reuse them many times.

  • The idea of reducing language to mere bits, in general, sounds like it would violate the Godel/Turing theorems about computability.

Refreshing (and slightly unexpected) to see Meta Superintelligence start with something this practical instead of a headline-grabbing new model

> the core insight here is actually: if embeddings are generated by layers within the LLM, it makes no sense to convert them back to natural language, just for another LLM to compress those tokens back to embeddings.

Doesn't this tie the two layers together in a way that they can't evolve separately?

> Long awaited first paper from Meta Superintelligence Labs is not a model layer innovation. What does this mean?

It means you're reading into it too much and need to be let down, gently, from the hype train.

I find it absurd that, compared to the past, large companies now have higher stock prices and more cash on hand than ever before, yet nearly every AI lab inside these companies is facing greater pressure than ever and being asked to generate short-term profits. In the midst of AI's unprecedented boom, the research environment and atmosphere in the industry seem to have worsened.

A great post, it starts with this:

TL;DR

• MSI’s first paper, REFRAG, is about a new way to do RAG.

• This slightly modified LLM converts most retrieved document chunks into compact, LLM-aligned chunk embeddings that the LLM can consume directly.

• A lightweight policy (trained with RL) decides which chunk embeddings should be expanded back into full tokens under a budget; the LLM runs normally on this mixed input.

• The net effect is far less KV cache and attention cost, much faster first-byte latency and higher throughput, while preserving perplexity and task accuracy in benchmarks.

I wish more long posts followed this model of a scientific paper.
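
To make the mixed-input idea in that TL;DR concrete, here is a toy numpy sketch of how I read it: chunk embeddings projected into the decoder's input space, with a policy choosing which chunks get expanded back into full token embeddings. All names, shapes, and the scoring rule are made up for illustration; this is the shape of the idea, not the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    D_MODEL = 64            # decoder embedding width (made up)
    TOKENS_PER_CHUNK = 16   # one chunk embedding stands in for 16 token embeddings
    EMBED_TABLE = rng.normal(size=(1000, D_MODEL))  # stand-in token embedding table

    def embed_tokens(token_ids):
        return EMBED_TABLE[token_ids]

    def encode_chunk(token_ids):
        """Stand-in for the lightweight chunk encoder + projection to D_MODEL."""
        return embed_tokens(token_ids).mean(axis=0)

    def policy_scores(chunk_embs, query_emb):
        """Toy relevance policy (REFRAG trains this part with RL)."""
        return chunk_embs @ query_emb

    # Retrieved chunks and a user query, as made-up token ids.
    chunks = [rng.integers(0, 1000, TOKENS_PER_CHUNK) for _ in range(8)]
    query_ids = rng.integers(0, 1000, 12)
    query_emb = embed_tokens(query_ids).mean(axis=0)

    chunk_embs = np.stack([encode_chunk(c) for c in chunks])             # (8, D_MODEL)
    expand = set(np.argsort(-policy_scores(chunk_embs, query_emb))[:2])  # budget: 2 chunks

    # Decoder input: expanded chunks contribute full token embeddings,
    # the rest contribute a single compressed embedding each, then the query tokens.
    rows = [embed_tokens(c) if i in expand else chunk_embs[i:i + 1]
            for i, c in enumerate(chunks)]
    rows.append(embed_tokens(query_ids))
    decoder_input = np.concatenate(rows, axis=0)

    print(decoder_input.shape)  # (50, 64) instead of (140, 64) with full expansion

The KV-cache and attention savings the TL;DR describes come from the decoder seeing 50 input rows here instead of 140; the expansion budget is the knob that trades accuracy against latency.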

Working in big tech it's pretty wild to see how integral AI has become to our work internally, vs the public perception of it. People are NOT prepared.

  • 1. Hyperbolic statement about LLM capabilities with no concrete examples

    2. Wild claim that the companies that sell LLMs are actually downplaying their capabilities instead of hyping them

    • Personal experience here at a FAANG: there has been a considerable increase in 1. teams exploring how to leverage LLMs for coding; 2. teams/orgs that have already standardized some of their processes for working with LLMs (MCP servers, standardized creation of agents.md files, etc.); and 3. teams actively using LLMs for coding new features, documenting code, increasing test coverage, code reviews, etc.

      Again, personal, experience, but in my team ~40-50% of the PRs are generated by Codex.

      3 replies →

  • I've heard of one study that said AI slows developers down, even when they think it's helping.

    https://www.infoworld.com/article/4061078/the-productivity-p...

    • It is true sometimes, but other times it saves hours. We're all still in the learning stage of how best to use these new tools, and their capabilities are growing constantly.

    • AI may slow coding a bit but dramatically reduces cognitive load.

      The real value of AI isn't in helping coding. It's in having a human-like intelligence to automate processes. I can't get into details but my team is doing things that I couldn't dream of three years ago.

      1 reply →

  • Not prepared for what? Seems like the rest of the world is desperate to be shown the way to unlock something of value?

    • I think at this point it's software devs looking for the value unlock.

      Non-software devs are actually making functional programs for themselves for the first time ever. The value is crazy.

      6 replies →