Comment by freakynit
20 hours ago
Holy cow, their chat app demo!!! For the first time I thought I had mistakenly pasted the answer. It was literally in the blink of an eye!!
I asked it to design a submarine for my cat, and literally the instant my finger touched Return the answer was there. And that is factoring in the round-trip time for the data too. Crazy.
The answer wasn't dumb like the ones others are getting, either. It was pretty comprehensive and useful.
It's incredible how many people are commenting here without having read the article. They've completely missed the point.
With this speed, you can keep looping and generating code until it passes all tests. If you have tests.
Generate lots of solutions and mix and match. This opens up a new way of looking at LLMs.
Not just looping, you could do a parallel graph search of the solution-space until you hit one that works.
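A minimal sketch of that loop, with a parallel fan-out per round. `generate` and `run_tests` here are hypothetical hooks for whatever fast inference endpoint and test suite you have; nothing in this is the demo's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def solve(task, generate, run_tests, rounds=20, fanout=8):
    """Keep sampling candidate solutions until one passes the tests.

    `generate(prompt)` and `run_tests(code)` are hypothetical hooks: the first
    calls a fast inference endpoint, the second runs your test suite and
    returns True on success.
    """
    prompt = f"Write a solution for: {task}"
    for _ in range(rounds):
        # Fan out several candidates per round; at millisecond-scale inference
        # latency the test run, not the model, becomes the bottleneck.
        with ThreadPoolExecutor(max_workers=fanout) as pool:
            candidates = list(pool.map(lambda _: generate(prompt), range(fanout)))
        for code in candidates:
            if run_tests(code):
                return code
    return None  # no candidate passed within the budget
```

The "mix and match" step would slot in between rounds, e.g. by feeding the closest-to-passing candidates back into the prompt.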
The Infinite Monkey Theorem just reached its peak
You could also parse prompts into an AST, run inference, run evals, then optimise the prompts with something like a genetic algorithm.
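Roughly, that loop could look like the sketch below. `generate`, `run_evals`, and `mutate` are hypothetical hooks, and the AST-level prompt parsing is left out:

```python
import random

def optimize_prompt(seed_prompt, generate, run_evals, mutate,
                    population=16, generations=10, keep=4):
    """Mutation-only genetic search over prompt variants.

    Hypothetical hooks: `generate(prompt)` calls the model, `run_evals(output)`
    returns a score (higher is better), and `mutate(prompt)` produces a
    variant, e.g. by asking the model itself to rewrite the prompt.
    """
    pool = [seed_prompt] + [mutate(seed_prompt) for _ in range(population - 1)]
    for _ in range(generations):
        ranked = sorted(pool, key=lambda p: run_evals(generate(p)), reverse=True)
        parents = ranked[:keep]                      # selection
        children = [mutate(random.choice(parents))   # mutation
                    for _ in range(population - keep)]
        pool = parents + children
    return max(pool, key=lambda p: run_evals(generate(p)))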
And then it's slow again to finally find a correct answer...
It won't find the correct answer. Garbage in, garbage out.
Agreed, this is exciting, and has me thinking about completely different orchestrator patterns. You could begin to approach the solution space much more like a traditional optimization strategy such as CMA-ES. Rather than expect the first answer to be correct, you diverge wildly before converging.
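As one sketch of that idea, you could let CMA-ES drive the sampling parameters (temperature and top-p) while your evals supply the fitness signal. The `cma` package is real; `generate` and `score` are hypothetical hooks, and this is illustrative rather than anything the vendor ships:

```python
import cma  # pip install cma

def tune_sampling(task, generate, score, sigma0=0.3, budget=200):
    """Diverge-then-converge over sampling parameters with CMA-ES.

    Hypothetical hooks: `generate(prompt, temperature, top_p)` calls the
    inference endpoint, `score(output)` returns a fitness (higher is better).
    """
    es = cma.CMAEvolutionStrategy([0.8, 0.9], sigma0)  # start: temp=0.8, top_p=0.9
    evals = 0
    while not es.stop() and evals < budget:
        params = es.ask()
        costs = []
        for temp, top_p in params:
            temp = min(max(temp, 0.0), 2.0)            # clip to valid ranges
            top_p = min(max(top_p, 0.05), 1.0)
            out = generate(f"Solve: {task}", temperature=temp, top_p=top_p)
            costs.append(-score(out))                  # CMA-ES minimizes
            evals += 1
        es.tell(params, costs)
    return es.result.xbest  # best (temperature, top_p) found
```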
This is what people already do with “ralph” loops using the top coding models. It’s slow relative to this, but still very fast compared to hand-coding.
This doesn't work. The model outputs the most probable tokens. Running it again and asking for less probable tokens just gives you the same thing, only with more errors.
Do you not have experience with agents solving problems? They already successfully do this. They try different things until they get a solution.
> It was literally in a blink of an eye.!!
It's not even close. A blink takes the eye 100-400 ms. This thing takes under 30 ms to process a small query, roughly 10 times faster.
OK investors, time to pull out of OpenAI and move all your money to ChatJimmy.
A related argument I raised a few days back on HN:
What's the moat for these giant data centers being built with hundreds of billions of dollars of Nvidia chips?
If such chips can be built this easily and offer this insane level of performance at 10x the efficiency, then one thing is certain: more such startups are coming... and with them, an entire new ecosystem.
RAM hoarding is, AFAICT, the moat.
I think their hope is that they’ll have the “brand name” and expertise to have a good head start when real inference hardware comes out. It does seem very strange, though, to pour all this massive infrastructure investment into what is ultimately going to be useless prototyping hardware.
You'd still need those giant data centers for training new frontier models. These Taalas chips, if they work, seem to do the job of inference well, but training will still require general-purpose GPU compute.
If I am not mistaken, this chip was built specifically for the Llama 8B model. Nvidia chips are general-purpose.
Nvidia bought all the capacity so their competitors can't be manufactured at scale.
You mean Nvidia?
I dunno, it pretty quickly got stuck; the "attach file" didn't seem to work, and when I asked "can you see the attachment" it replied to my first message rather than my question.
It’s Llama 3.1 8B. No vision, not smart. It’s just a technical demo.
Why is everyone seemingly incapable of understanding this? What is going on here? It's like AI doomers consistently have the foresight of a rat. Yeah, no shit it sucks, it's running Llama 3 8B, but they're completely incapable of extrapolation.
Hmm... I had tried a simple chat conversation without file attachments.
I got 16,000 tokens per second ahaha
I get nothing, no replies to anything.
Maybe the HN and Reddit crowd have overloaded them lol
That… what…
Well, it got all 10 incorrect when I asked for the top 10 catchphrases from a character in Plato's books. It mistook the baddie for Socrates.
Well yeah, they're running a small, older model. That's not really the point. This approach can be used for better, larger, newer models.
Fast, but stupid.
The question is not about how fast it is. The real question(s) are:
(This also assumes that diffusion LLMs will get faster)
The blog answers all those questions. It says they're working on fabbing a reasoning model this summer. It also says how long they think they need to fab new models, and that the chips support LoRAs and tweaking context window size.
I don't get these posts about ChatJimmy's intelligence. It's a heavily quantized Llama 3, using a custom quantization scheme because that was state of the art when they started. They claim they can update quickly (so I wonder why they didn't wait a few more months tbh and fab a newer model). Llama 3 wasn't very smart but so what, a lot of LLM use cases don't need smart, they need fast and cheap.
Apparently they can also run DeepSeek R1, and they have benchmarks for that. New models only require a couple of new masks, so they're flexible.
LLMs can't count. They need tool use to answer these questions accurately.
That particular one can't count without using external tools. Others can, and do.
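For what it's worth, the tool-use pattern is tiny. A bare-bones sketch, where `generate` is a hypothetical model call and the JSON tool-call format is made up for illustration, not any particular vendor's API:

```python
import json

def count_occurrences(text: str, substring: str) -> int:
    # Exact counting is trivial for code and unreliable for a token predictor.
    return text.count(substring)

TOOLS = {"count_occurrences": count_occurrences}

def answer(question: str, generate) -> str:
    """`generate(prompt)` is a hypothetical call to the model."""
    prompt = (
        "Answer the question. If it needs counting, reply ONLY with JSON like "
        '{"tool": "count_occurrences", "args": {"text": "...", "substring": "..."}}.\n'
        f"Question: {question}"
    )
    reply = generate(prompt)
    try:
        call = json.loads(reply)
        result = TOOLS[call["tool"]](**call["args"])
        return generate(f"{prompt}\nTool result: {result}\nNow give the final answer.")
    except (ValueError, KeyError, TypeError):
        return reply  # model answered directly, or emitted malformed JSON
```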
I asked, “What are the newest restaurants in New York City?”
Jimmy replied with, “2022 and 2023 openings:”
0_0
Well, technically its answer is correct when you consider its knowledge cutoff date... it just gave you a generic, always-right answer :)
ChatJimmy is running Llama 3.1.
It's super fast but also super inaccurate; I would say not even GPT-3 level.
That's because it's Llama 3 8B.
There are a lot of people here who are completely missing the point. What is it called when you judge an idea at a single point in time without seemingly being able to imagine 5 seconds into the future?
“static evaluation”
It is incredibly fast, on that I agree, but even simple queries I tried got very inaccurate answers. Which makes sense: it's essentially a trade-off of how much time you give it to "think", but if it's fast to the point where it has no accuracy, I'm not sure I see the appeal.
The hardwired model is Llama 3.1 8B, which is a lightweight model from two years ago. Unlike other models, it doesn't use "reasoning": the time between question and answer is spent predicting the next tokens. It doesn't run faster because it uses less time to "think"; it runs faster because its weights are hardwired into the chip rather than loaded from memory. A larger model running on a larger hardwired chip would run about as fast and get far more accurate results. That's what this proof of concept shows.
I see, that's very cool, that's the context I was missing, thanks a lot for explaining.
If it's incredibly fast at a 2022 state of the art level of accuracy, then surely it's only a matter of time until it's incredibly fast at a 2026 level of accuracy.
Yeah, this is mind-blowing speed. Imagine this with Opus 4.6 or GPT-5.2. Probably coming soon.
Why do you assume this?
I can produce total gibberish even faster; that doesn't mean I produce Einstein-level thought if I slow down.
I think it might be pretty good for translation, especially when fed small chunks of the content at a time so it doesn't lose track on longer texts.
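Something like this, as a sketch. `generate` is a hypothetical hook for the fast inference endpoint, and the chunking is naive paragraph packing:

```python
def translate(text: str, target_lang: str, generate, max_chars: int = 1500) -> str:
    """Translate a long document in small chunks so the model keeps context.

    `generate(prompt)` is a hypothetical call to the inference endpoint.
    """
    chunks, current = [], ""
    for para in text.split("\n\n"):
        # Pack paragraphs into chunks of at most max_chars characters.
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return "\n\n".join(
        generate(f"Translate into {target_lang}. Output only the translation:\n\n{c}")
        for c in chunks
    )
```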