
Comment by LuxBennu

3 days ago

The title is misleading — there's no trained 100B model, just an inference framework that claims to handle one. But the engineering is worth paying attention to. I run quantized 70B models locally (M2 Max 96GB, llama.cpp + LiteLLM), and memory bandwidth is always the bottleneck. The 1.58-bit approach is interesting because ternary weights turn matmuls into additions — a fundamentally different compute profile on commodity CPUs. If 5-7 tok/s on a single CPU for 100B-class models is reproducible, that's a real milestone for on-device inference. Framework is ready. Now we need someone to actually train the model.
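
To make the "matmuls into additions" point concrete, here's a toy sketch (plain Python, nothing like the real bitnet.cpp kernels, just the arithmetic idea):

```python
# Toy example: with ternary weights {-1, 0, +1} a dot product needs no
# multiplies at all -- only adds, subtracts, and skips.
def ternary_dot(weights, activations):
    acc = 0.0
    for w, x in zip(weights, activations):
        if w == 1:
            acc += x      # multiply by +1 becomes an add
        elif w == -1:
            acc -= x      # multiply by -1 becomes a subtract
        # w == 0 contributes nothing and is skipped entirely
    return acc

print(ternary_dot([1, -1, 0, 1], [0.5, 2.0, 3.0, -1.0]))  # -2.5
```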

> Framework is ready. Now we need someone to actually train the model.

If Microslop aren't gonna train the model themselves to prove their own thesis, why would others? They've had 2 years (I think?) to prove BitNet in at least some way; are you really saying they haven't tried so far?

Personally, that makes me a bit wary of taking what they say at face value. Why wouldn't they train and publish a model themselves if this actually led to worthwhile results?

  • Because this is Microsoft: experimenting and failing is not encouraged; taking less risky bets and getting promoted is. Also, no customer asked them for a 1-bit model, so no PM prioritized it.

    But that doesn't mean the idea is worthless.

    You could have said the same about Transformers: Google released it but didn't move forward with it, and it turned out to be a great idea.

    • > You could have said the same about Transformers: Google released it but didn't move forward with it,

      I don't think you can. Google looked at the research results and continued researching Transformers and related technologies, because they saw the value, particularly in translation. The directions to take are part of the original paper; give it a read, it's relatively approachable for a machine learning paper :)

      Sure, it took OpenAI to turn it into an "assistant" that answered questions, but it's not like Google was completely sleeping on the Transformer; they just had other research directions to pursue first.

      > But that doesn't mean the idea is worthless.

      I agree, it isn't; hope that's not how my message read :) But ideas that don't actually pan out in reality are rather less useful than ideas that do pan out once put into practice. The root commenter seems to be saying "This is a great idea, it's all ready, the only missing piece is for someone to do the training and it'll pan out!", which I'm a bit skeptical about, since it's been two years since they introduced the idea.

      7 replies →

    • > You could have said the same about Transformers: Google released it but didn't move forward with it, and it turned out to be a great idea

      Google released Transformers as research because they invented them while improving Google Translate. They had been running them for customers for years.

      Beyond that, they had publicly used transformer-based LMs (MUM) integrated into search before GPT-3 (pre-chat mode) was even trained. They were shipping transformer models that generated text for years before the ChatGPT moment. Being live on the Google SERP is probably the widest deployment a technology can have today.

      Transformers are also used widely in ASR technologies, like Google Assistant, which of course was available to hundreds of millions of users.

      Finally, they had experimental LLMs available privately to employees, as well as various released research initiatives (Meena, LaMDA, PaLM, BERT, etc.) and other experiments; they just didn't productize everything (but see the earlier points). They even experimented with scaling (see the "Chinchilla scaling laws").

  • The most benign answer would be that they don't want to further support an emerging competitor to OpenAI, which they have significant business ties to. I think the more likely answer, which you hinted at, is that the utility of the model falls apart as scale increases. They see the approach as a dead end, so they are throwing the scraps out to the stray dogs.

    • Not to mention Microsoft's investments in Nvidia and other GPU-adjacent/dependent companies!

      A successful ternary model would basically erase all that value overnight. In fact, the entire stock market could crash!

      Think about it: this is Microsoft we're talking about! They're a convicted monopolist with a history of manipulating the market for IT goods and services. I wouldn't put it past them to refuse to invest in training a ternary model, or even to buy up ternary startups just to shut them down.

      Want to make some easy money? Start a business training a ternary model and make an offer to Microsoft. I bet they'll buy you out for at least a few million even if you don't have a product yet!

      1 reply →

  • Rest assured, all the big players (OpenAI, Google, DeepSeek, etc.) have run countless experiments with 4-, 3-, 2-, 1.58-, and 1-bit weights, and with various sparsity factors and shapes. This barrel has been scraped to the bottom.

    • I have doubts about this. Perhaps the closed models have, but I wouldn't be so sure for the open ones.

      GLM 5, for example, runs 16-bit weights natively. That makes their 755B model 1.5 TB in size, and its 40B active parameters come to ~80 GB per token.

      Compare this to Kimi K2.5: a 1T model, but with 4-bit weights (int4), which makes the model ~560 GB and its 32B active parameters ~16 GB.

      Sure, GLM 5 is the stronger model, but is that worth paying for with 2-3x longer generation times? What about needing 2-3x more memory?

      I think this barrel's bottom really hasn't been scraped.

The title being misleading matters, too, because this has landed on the front page, and the claimed 100B model would be the only notable part of this submission.

The "new" on huggingface banner has weights that were uploaded 11 months ago, and it's 2B params. Work on this in the repo is 2 years old.

The amount of publicity compared to the anemic delivery for BitNet is impressive.

I've also always thought that it's an interesting opportunity for custom hardware. Two-bit addition is incredibly cheap in hardware, especially compared to anything involving floating point. You could build huge vector instructions on the cheap, connect them to the fastest memory you can buy, and you'd have a capable inference chip.

You'd still need full GPUs for training, but for inference the hardware would be orders of magnitude simpler than what Nvidia is making.
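
As a rough illustration of why the memory side gets so cheap, here's a sketch of 2-bit packing, four ternary weights per byte (the encoding is made up for illustration, not bitnet.cpp's actual format):

```python
# Illustrative 2-bit packing of ternary weights, four per byte.
# Hypothetical encoding (not bitnet.cpp's real format): 00 -> 0, 01 -> +1, 10 -> -1.
ENCODE = {0: 0b00, 1: 0b01, -1: 0b10}
DECODE = {code: trit for trit, code in ENCODE.items()}

def pack(trits):
    out = bytearray()
    for i in range(0, len(trits), 4):
        byte = 0
        for j, t in enumerate(trits[i:i + 4]):
            byte |= ENCODE[t] << (2 * j)  # 2 bits per weight
        out.append(byte)
    return bytes(out)

def unpack(data, n):
    return [DECODE[(data[i // 4] >> (2 * (i % 4))) & 0b11] for i in range(n)]

w = [1, -1, 0, 1, -1, 0, 0, 1]
assert unpack(pack(w), len(w)) == w  # 8 weights round-trip through 2 bytes
```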

  • These are trits, which provide their own efficiencies.

    Interestingly, a trit x float multiplier is cheaper than a trit x integer multiplier in hardware if you're willing to ignore things like NaNs.

    0 and 1 are trivial: just a mux for identity and zero. And because floats are sign-magnitude, multiplying by -1 is just an inverter on the sign bit, whereas for two's-complement integers you need a bitwise inverter plus a full incrementer.
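
    The sign-bit trick is easy to emulate in software, if you want to see it; here's a sketch (Python just to show the bit-level view; the hardware version is literally one inverter):

    ```python
    import struct

    # Multiply a float32 by -1 as a single bit flip: IEEE-754 floats are
    # sign-magnitude, so negation is XOR-ing the sign bit (bit 31).
    def negate_via_sign_bit(x):
        bits, = struct.unpack("<I", struct.pack("<f", x))
        return struct.unpack("<f", struct.pack("<I", bits ^ 0x8000_0000))[0]

    print(negate_via_sign_bit(3.5))    # -3.5
    print(negate_via_sign_bit(-0.25))  # 0.25
    ```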

  • You only need GPUs if you assume the training is gradient descent. Genetic algorithms, or anything else that can handle the nonlinearities, would be fine, and possibly fast enough to be interesting.

The text is misleading too. 5-7 tok/s is not reading speed; it's a tad slower. For me, at least, and I am an experienced reader, though not especially schooled in speed reading.

I happened to "live" at 7.0-7.5 tok/s output speed for a while, and it is an annoying experience, the equivalent of walking behind someone slightly slower on a sidewalk. I dealt with it by deliberately looking away for a minute until output was "buffered" and only then starting to read.

For any local setup I'd try to reach 10 tok/s. Sacrifice some KV cache and shove a few more layers onto your GPU; it's worth it.

> a fundamentally different compute profile on commodity CPUs

In what way? On modern processors, a fused multiply-add (FMA) instruction generally has the exact same execution throughput as a basic addition instruction.

  • You drop the memory-throughput requirements because of the packed representation of the bits, so the FMA itself can become the bottleneck, and you bypass the need to upconvert the bits to whatever floating-point format the FMA instruction expects.

    Typically, for 1-bit matmuls you can get away with XORs and popcounts, which should have a better throughput profile than FMA once you account for the SIMD width of the inputs/outputs.
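
    A scalar sketch of the XOR/popcount trick (assuming the usual encoding of -1/+1 as 0/1 bits; real kernels do this across whole SIMD registers):

    ```python
    # 1-bit dot product via XOR + popcount (Python 3.10+ for int.bit_count()).
    # Encode -1 as bit 0 and +1 as bit 1; matching bits contribute +1,
    # mismatched bits contribute -1, so dot = n - 2 * popcount(a ^ b).
    def binary_dot(a, b, n):
        return n - 2 * (a ^ b).bit_count()

    # a = [+1, -1, +1, +1] -> 0b1101 (LSB first), b = [+1, +1, -1, +1] -> 0b1011
    print(binary_dot(0b1101, 0b1011, 4))  # 0  (= 1 - 1 - 1 + 1)
    ```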

  • The win is in how many weights you process per instruction and how much data you load.

    So it's not that individual ops are faster — it's that the packed representation lets each instruction do more useful work, and you're moving far less data from memory to do it.
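
    Some back-of-the-envelope math on the data-movement side (the bandwidth figure is illustrative, and it assumes a dense model where every weight is streamed from RAM once per token):

    ```python
    # Ceiling on decode speed when generation is purely bandwidth-bound.
    PARAMS = 100e9     # dense 100B-parameter model (assumption)
    BANDWIDTH = 100e9  # bytes/s; illustrative figure for a desktop CPU

    for name, bits in [("fp16", 16), ("int4", 4), ("2-bit packed ternary", 2)]:
        model_bytes = PARAMS * bits / 8
        print(f"{name:>20}: {model_bytes / 1e9:5.0f} GB -> <= {BANDWIDTH / model_bytes:.1f} tok/s")
    ```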

  • BitNet encoding is more information-dense per byte, perhaps? CPUs have slow buses, so it would eke more use out of the bandwidth?

> memory bandwidth is always the bottleneck

I'm hoping that today's complaints are tomorrow's innovations. Think back to when a 1 MB hard drive cost $100,000, or when Gates supposedly said 640 KB was enough.

Perhaps someone in the (chip) industry can comment on what RAM manufacturers are doing at the moment: better, faster, larger? Or is there not much headroom left, and it's down to motherboard manufacturers and volume?

  • Chip speed has increased faster than memory speed for a long time now, leaving DRAM behind. GDDR was good for a while but is no longer sufficient; HBM is what's used now.

    The last logical step of this process would be figuring out how to mix CPU transistors with RAM capacitors on the same die, as opposed to merely stacking separate chips in the same package.

    A related stopgap is the AI startup (I forget which) making accelerators out of giant chips full of SRAM. Not a cost-effective approach outside of ML.

  • We have faster memory, it's just all used in data center cards you can't buy (and can't afford to buy).

    AMD actually used HBM2 memory in their Radeon VII card back in 2019 (!!) for $700. It had 16 GB of HBM2 memory with 1 TB/s throughput.

    The RTX 5080, in comparison, also has 16 GB of VRAM, but was released in 2025 and has 960 GB/s of throughput. The RTX 5090 does have an edge at 1.8 TB/s of bandwidth and 32 GB of VRAM, but it also costs several times more. Imagine if GPUs had gone down the Radeon VII's path.

    That being said, the data center cards from both are monstrous.

    The Nvidia B200 has 180 GB of VRAM (2x 90 GB) offering 8.2 TB/s of bandwidth (2x 4.1 TB/s), released in 2024. It just costs as much as a car, but that doesn't matter, because AFAIK you can't even buy them individually. I think you need to buy a server system from Nvidia or Dell that comes with like 8 of them and costs around $600k.

    AMD has the MI series, e.g. the MI325X: 288 GB of VRAM doing 10 TB/s of bandwidth, released in 2024. Same story as Nvidia: buy from an OEM that will sell you a full system with 8x of these (and if you do get your hands on one, you need a special motherboard, since they don't do PCIe). Supposedly a lot cheaper than Nvidia, but still probably $250k.

    These aren't even the latest and greatest from either company; the B300 and MI355X are even better.

    It's a shame about the socket on the MI-series GPUs (and the Nvidia ones too). The MI200 and MI250X would be pretty cool to get second-hand; they are 64 GB and 128 GB VRAM GPUs, but since they use the OAM socket you need a special motherboard to run them. They're from 2021, so in a few years they will likely be replaced, but as a regular Joe you likely can't use them.

    The systems exist, you just can't have them, but you can rent them in the cloud at about $2-4 per hour per GPU.

  • For larger contexts, the bottleneck is probably token prefill instead of memory bandwidth. Supposedly prefill is faster on the M5+ GPUs, but still a big hurdle for pre-M5 chips.

  • It might be advantageous to have a different memory structure altogether, bespoke to the specific task.

Yes, I had to read it over twice; it does strike me as odd that there wasn't a base model to work with.

But it seems the biggest model available is 10B? Somewhat unusual, and it does make me wonder just how challenging it will be to train a model in the 100B order of magnitude.

  • Approximately as challenging as training a regular 100B model from scratch. Maybe a bit more challenging, because there's less experience with it.

    The key insight of the BitNet paper was that using their custom BitLinear layer instead of normal Linear layers (along with some further training and architecture changes) leads to much, much better results than quantizing an existing model down to 1.58 bits. So you end up doing a full training run in bf16 precision with the specially adapted model architecture.
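
    A minimal PyTorch-flavored sketch of the idea (illustrative only; the real BitLinear also quantizes activations and adds normalization, per the paper):

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BitLinear(nn.Linear):
        """Sketch of BitNet b1.58-style training: master weights stay in full
        precision, the forward pass sees ternary weights, and a straight-through
        estimator routes gradients back to the master weights."""
        def forward(self, x):
            w = self.weight
            scale = w.abs().mean().clamp(min=1e-5)                # absmean scale
            w_ternary = (w / scale).round().clamp(-1, 1) * scale  # {-1, 0, +1} * scale
            w_q = w + (w_ternary - w).detach()  # STE: ternary forward, identity backward
            return F.linear(x, w_q, self.bias)

    layer = BitLinear(64, 32)
    out = layer(torch.randn(8, 64))
    out.sum().backward()  # gradients flow to the full-precision master weights
    ```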

  • What's unusual about it? It seems pretty standard to train small models to validate an approach, and then show that training scales with model size up to 8B-14B parameter models, which is what they did.

There are 1-bit-average GGUFs of large models; not perfect quality, but they will hold a conversation. These days there is also quantized finetuning to heal the damage.

It comes from (intentionally?) misleading docs: https://github.com/microsoft/BitNet/issues/391

(only suggesting that it's intentional because it's been there so long)

  • That issue appears to be the one that's wrong. From the technical report:

    > We evaluated bitnet.cpp in terms of both inference speed and energy cost. Comprehensive tests were conducted on models with various parameter sizes, ranging from 125M to 100B. specific configurations for each model are detailed in the Appendix A.

      Thanks for pointing that out. I'll ask the issue creator if they've considered that. It would be nice if the maintainer would handle it (sigh) and link to the actual models used for testing (double sigh).

      2 replies →

LLM account

  • I browsed through the user's history and can confirm this statement. I know there are users who say they used em-dashes even before the rise of ChatGPT, and HN statistics support that; one prominent example is dang.

    However, this user uses — in almost all of his posts, and he was posting at a rate of about one comment per minute across multiple different topics.

  • Hmm, the user joined in 2019 but had no submissions or comments until just 40 minutes ago (at least judging by the lack of a second page?), and all the comments are on AI-related submissions. The benefit of the doubt says it would have to be a very dedicated lurker, or a dormant account they remembered they had.

    Edit: oh, I just recalled that dang restricted Show HNs the other day to non-new accounts (possibly with some other thresholds). I wonder if word got out and some are filling accounts with activity.

    • There has been a shift in the AI accounts; they use Show HN less now. This started before dang's comment, I assume because they saw the earlier posts about the increase in quantity / decrease in quality.

      I suspect that they are trying to fake engagement prior to making their first "show" post as well.

    • Fair enough — I've been lurking since 2019 and picked a bad day to start commenting on everything at once. Not a bot, just overeager. I'll pace myself.

      6 replies →

  • It's scary: without the em-dashes and the rapid-fire commenting, who would ever realize this account is a bot? Two easy-to-fix things, and after that it'd be very difficult to tell.

    It's not a question of whether there are other bots out there, but of what percentage of comments on HN and elsewhere are bot-generated right now. That number will only increase if nothing is done.

  • Looks like gradual disempowerment is already happening: the minority of humans who are capable of spotting AI content are losing the struggle for attention on all the major social networks.

  • Funnily enough, I now involuntarily take RTFA as a slight slop signal, because all these accounts dutifully read the article before commenting, unlike most HNers, who often respond to headlines.

    • First they claimed that if you use em dashes you are not human

      And I did not speak out

      Because I was not using em dashes

      Then they claimed that if you're crammar is to gud you r not hmuan

      And I did not spek aut

      Because mi gramar sukcs

      Then they claimed that if you actually read the article that you are trying to discuss you are not human...

      4 replies →

    • Yeah. It correctly pointed out that the editorialized HN title is wrong: there is no 100B model.

  • I would love to understand the thought process behind this. I'm sure it's a fun experiment, to see if it's possible and so on... but what tangible benefit could there be to burning tokens to spam comments on every post?

Check out the new Qwen coder model.

Also, aren't there different affinities for 8-bit vs 4-bit inference?

> I run quantized 70B models locally (M2 Max 96GB, llama.cpp + LiteLLM), and memory bandwidth is always the bottleneck.

I imagine you got 96 GB because you thought you'd be running models locally? Did you not know that "Unified Memory" is marketing speak?