Great stuff.
It wouldn't be surprising if the RP2350 eventually gets officially certified to run above its maximum supported clock at launch (150MHz), though obviously nothing close to 800MHz. That happened with the RP2040[1], which nominally supported 133MHz at launch but is now rated for 200MHz (the SDK still defaults to 125MHz for compatibility, but getting 200MHz is as simple as toggling a config flag[2]).
[1] https://www.tomshardware.com/raspberry-pi/the-raspberry-pi-p...
[2] https://github.com/raspberrypi/pico-sdk/releases/tag/2.1.1
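For anyone curious, the config-flag route looks roughly like this (a sketch against the pico-sdk; PICO_USE_FASTEST_SUPPORTED_CLOCK is the build-time define from the SDK 2.1.1 release notes, and set_sys_clock_khz is the runtime equivalent):

    // Build-time route, in CMakeLists.txt:
    //   target_compile_definitions(my_app PRIVATE PICO_USE_FASTEST_SUPPORTED_CLOCK=1)
    //
    // Runtime route:
    #include "pico/stdlib.h"
    #include "hardware/clocks.h"

    int main() {
        // Request a 200MHz system clock; with required=true the SDK
        // panics if no exact PLL configuration exists for that frequency.
        set_sys_clock_khz(200 * 1000, true);
        stdio_init_all();
        // ... rest of the application
    }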
The 300MHz, 400MHz, and 500MHz points requiring only 1.1V, 1.3V, and 1.5V, with only the last one getting slightly above body temperature even with no cooling, seem like something that maybe shouldn't be "officially" supported, but could at least be mentioned in an official blog post or the docs. Getting 3x+ the performance with a few config changes is noteworthy. It would be interesting to run an experiment to see whether there's any measurable degradation in stability, or increased likelihood of failure, at those settings compared to a stock unit running the same workload for the same time.
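For reference, the knobs involved look something like this (a sketch only; these settings exceed the datasheet, and vreg_set_voltage tops out at the 1.30V step without further unlocking, so the 500MHz/1.5V point needs more than what's shown here):

    #include "pico/stdlib.h"
    #include "hardware/vreg.h"
    #include "hardware/clocks.h"

    int main() {
        // Raise the core voltage first and let the regulator settle,
        // then raise the clock. 1.30V is the highest standard vreg step.
        vreg_set_voltage(VREG_VOLTAGE_1_30);
        sleep_ms(10);
        set_sys_clock_khz(400 * 1000, true);  // the 400MHz point from the post
        stdio_init_all();
        // ... run the workload under test
    }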
All of their reliability testing and validation happens at the lower voltages and speeds. I doubt they'd include anything in the official docs lest they be accused of officially endorsing something that might later turn out to reduce longevity.
When pushing clock speeds, things get nondeterministic...
Here is an idea for a CPU designer...
Observe that you can get way more performance (increased clock speed) or more performance per watt (lower core voltage) if you are happy to lose reliability.
Also observe that many CPUs do superscalar, out-of-order execution, which requires the ability to backtrack; this is normally implemented with a queue and a 'commit' phase.
Finally, observe that verifying this commit queue is a fully parallelizable operation, and can therefore be checked slower and in a more power-efficient way.
So, here's the idea. You run a blazingly fast superscalar CPU, well past the safe clock speed limits, so that it makes hundreds of computation or flow control mistakes per second. You have slow but parallel verification circuitry that checks the execution trace. Whenever a mistake is made, you put a pipeline bubble in the main CPU, clear the commit queue, insert the correct result from the verification system, and continue - just like you would with a branch misprediction.
This happening a few hundred times per second would have a negligible impact on performance: with a 100-cycle 'reset' penalty, 100 errors per second costs 100 * 100 = 10,000 cycles, a tiny fraction of the 4 billion cycles per second available at 4GHz.
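A back-of-the-envelope sketch of that overhead (the numbers are the illustrative ones from above, not measurements):

    #include <stdio.h>

    int main(void) {
        const double clock_hz       = 4e9;  // 4GHz core
        const double errors_per_sec = 100;  // mis-executions caught by the verifier
        const double penalty_cycles = 100;  // flush/refill cost per rollback

        double wasted   = errors_per_sec * penalty_cycles;  // cycles lost per second
        double overhead = wasted / clock_hz;                // fraction of throughput
        printf("overhead: %.6f%%\n", overhead * 100);       // ~0.00025%
        return 0;
    }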
The main fast CPU could also make deliberate mistakes - for example assuming floats aren't NaN, assuming division won't be by zero, etc. Trimming off rarely used logic makes the core smaller, making it easier to make it even faster or more power efficient (since wire length determines power consumption per bit).
I think you might like this:
https://www.usenix.org/system/files/1309_14-17_mickens.pdf
You could run an LLM like this, and the temperature parameter would become an actual thing...
Totally logical, especially with some sort of thermal mass, as you can throttle down the clock when quiet to cool down afterwards. I used this concept in my first sci-fi novel, where the AI was aware of its temperature for these reasons. I run the Pico 2 board in my MP3 jukebox at 250MHz; it has been on for several weeks without missing a beat (pun intended)
LLMs are memory-bandwidth bound, so a higher core frequency would not help much.
How do we know if a computation is a mistake? Do we verify every computation?
If so, then:
That seems like it would slow the overall computation to no more than the rate at which these computations can be verified.
That makes the verifier the ultimate bottleneck, and the other (fast, expensive -- like an NHRA drag car) pipeline becomes vestigial since it can't be trusted anyway.
Well, the point is that verification can run in parallel, so if you can verify at 500MHz and have twenty of these units, you can run the core at 10GHz. Minus, of course, the fixed per-instruction verification latency, which gets more and more negligible the more parallel you go. Of course there is lots of overhead in that too, as GPUs painfully show.
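A sketch of that sizing argument (illustrative numbers; the per-instruction check latency is made up, and each verifier is assumed to check roughly one instruction per cycle):

    #include <stdio.h>

    int main(void) {
        const double core_hz       = 10e9;   // speculative core clock
        const double verify_hz     = 500e6;  // one verifier's checking rate
        const double check_latency = 40;     // verifier cycles per instruction (made up)

        // Throughput: the verifier pool must keep up with the core's issue rate.
        double units = core_hz / verify_hz;  // 20 units
        // Latency: it only delays commit, so it stays hidden as long as the
        // commit queue holds at least this many in-flight instructions.
        double queue_depth = check_latency * (core_hz / verify_hz);

        printf("verifier units: %.0f, min commit-queue depth: %.0f\n",
               units, queue_depth);
        return 0;
    }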
> if you are happy to lose reliability.
The only problem here is that reliability is a statistical thing. You might be lucky, you might not.
Side channel attacks don't stand a chance!
You've never had WHEA errors... or PLL issues on CPU C-state transitions...
Both the RP2040 and the RP2350 are amazing value these days with most other electronics increasing in price. Plus you can run FUZIX on them for the UNIX feel.
Mmh... I think the LicheeRV Nano arguably offers more value.
Around 20 bucks for the WiFi variant. 1GHz, 256MB RAM, USB OTG, GPIO, and full Linux support, while drawing less than 1W without any power optimizations; it even supports <$15 2.8" LCDs out of the box.
And Rust can be compiled for it...
https://github.com/scpcom/LicheeSG-Nano-Build/
Take a look at the `best-practise.md`.
It is also the base board of NanoKVM[1]
1: https://github.com/sipeed/NanoKVM
I think the ace up the sleeve is PIO; I've seen so many weird and wonderful use cases for the Pico/RP-chips enabled by this feature, that don't seem replicable on other $1-class microcontrollers.
Amazing value indeed!
That said: it's a bit sad there's so little (if anything) in the space between microcontrollers & feature-packed Linux-capable SoCs.
I mean: these days a multi-core, 64-bit CPU & a few GBs of RAM seem to be the absolute minimum for smartphones, tablets etc, let alone desktop-style work. But remember that around Y2K, masses of people were using single-core, sub-1GHz CPUs with a few hundred MB of RAM or less. And running full-featured GUIs, Quake 1/2/3 & co, web surfing etc on that. GUIs have even been done on sub-1MB RAM machines.
Microcontrollers otoh seem to top out at ~512KB RAM. I for one would love a part with integrated:
# Multi-core, but 32-bit CPU. 8+ cores cost 'nothing' in this context.
# Say, 8MB+ RAM (up to a couple hundred MB)
# Simple 2D graphics, maybe a blitter, some sound hw etc
# A few options for display output. Like DisplayPort & VGA.
Read: relatively low complexity, but with the speed & power-efficient integration of modern ICs. The RP2350pc goes in this direction, but just isn't (quite) there.
IIRC, you can use up to 16 MB of PSRAM with RP2350. Maybe up to 32 MB, not sure.
Many dev boards provide 8 MB PSRAM.
You might like the ESP32-P4
Eh, it's really not, when you consider that the ESP32 exists. It has PCNT units for encoders, RMT LED drivers, 18 ADC channels instead of four, a ULP coprocessor and various low-power modes, not to mention WiFi integrated into the SoC itself, not optional on the carrier board. And it's like half the price on top of all that. It's not even close.
The PIO units on the RP2040 are... overrated. Very hard to configure, badly documented, and there are only 8 state machines total. WS2812 control from the Pico has been unreliable at best in my experience.
They are just different tools; both have their uses. I wouldn't really put either above the other by default.
> And it's like half the price on top of all that. It's not even close.
A reel of 3,400 RP2350 units costs $0.80 each, while a single unit is $1.10. The RP2040 is $0.70 each in a similar size reel. Are you sure about your figures, or are you perhaps comparing development boards rather than SoCs? If you’re certain, could I have a reference for ESP32s being sold at $0.35 each (or single quantities at $0.55)?
PIO units may be tricky to configure, but they're incredibly versatile. If you aren't comfortable writing PIO code yourself, you can always rely on third-party libraries. Driving HDMI? Check. Supporting an obscure, 40-year-old protocol that nothing else handles? Check. The possibilities are endless.
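For instance, the usual WS2812-over-PIO pattern looks something like this (a sketch following pico-examples; ws2812_program_init comes from the header pioasm generates from that example's .pio file, and the pin number is arbitrary):

    #include "pico/stdlib.h"
    #include "hardware/pio.h"
    #include "ws2812.pio.h"  // generated by pioasm from pico-examples' ws2812.pio

    int main() {
        PIO pio = pio0;
        uint sm = 0;
        uint offset = pio_add_program(pio, &ws2812_program);
        // 800kHz signalling on GPIO 2, RGB (not RGBW) pixels
        ws2812_program_init(pio, sm, offset, 2, 800000, false);

        while (true) {
            // The state machine shifts out the top 24 bits as G/R/B.
            pio_sm_put_blocking(pio, sm, 0x00220000u << 8u);  // dim green
            sleep_ms(500);
            pio_sm_put_blocking(pio, sm, 0x00002200u << 8u);  // dim red
            sleep_ms(500);
        }
    }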
I find it hard to believe the RP2040 would have any issues driving WS2812s, provided everything is correctly designed and configured. Do you have any references for that?
It’s amusing to contemplate energy per cycle as one clocks higher and higher — the usual formula has the energy per cycle scaling roughly as voltage squared.
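Plugging in the voltage points from the write-up makes it concrete (a rough sketch: dynamic energy per cycle ~ C*V^2, ignoring leakage):

    #include <stdio.h>

    int main(void) {
        // With switched capacitance C fixed, the energy-per-cycle ratio
        // between two operating points is just (V2/V1)^2.
        const double v_300mhz = 1.1, v_500mhz = 1.5;  // volts, from the write-up
        double ratio = (v_500mhz / v_300mhz) * (v_500mhz / v_300mhz);
        // ~1.86x the energy per cycle; power also scales with f, so the
        // 500MHz point burns roughly (500/300) * 1.86 ~= 3.1x the dynamic power.
        printf("energy/cycle ratio: %.2f\n", ratio);
        return 0;
    }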
I recently turned turbo off on a small, lightly loaded Intel server. This reduced power by about a factor of 2, core temperature by 30-40C, and allowed running the fans much quieter. I’m baffled as to why the CPU didn’t do this on its own. (Apple gets these details right. Intel, not so much.)
It reduced the temperature by 30°? So it originally was "lightly loaded" and running at 60-70° C?
More like 80-90 before and around 50 afterward.
This is a boring NVR workload with a bit of GPU usage, with total system utilization around 10% with turbo off. Apparently the default behavior is to turbo from the normal ~3GHz up to 5.4GHz, and I don’t know why the results were quite so poor.
This is an i9-13900H (Minisforum MS-01) machine, so maybe it has some weird tuning for gaming workloads? Still seems a bit pathetic. I have not tried monitoring the voltages with turbo on and off to understand exactly why it’s performing quite so inefficiently.
Haha — this was a fun day! It's honestly surprising how robust the RP2350 was under such extreme experimentation. Mike's write-up walks through pushing the core voltages far beyond stock limits and dry-ice cooling to see what the silicon could handle.
Credit where it's due: Mike is a wizard. He's been involved in some of our more adventurous tinkering, and his input on the more complex areas of our product software has been invaluable. Check out his GitHub for some really interesting projects: https://github.com/MichaelBell
Blatant plug: We have a wide range of boards based on the RP2350 for all sorts of projects! https://shop.pimoroni.com/collections/pico :-)
It might actually be better to cool from the bottom, since the pads probably conduct heat better than the chip package material.
I bet if you designed a custom board it could do a little better.
As we become acclimated to nondeterministic responses from computers, it may not even matter if some of that comes from the hardware.
Eventually it will be seen as a feature.
What I love about the Pico overclock story is that, sure, not 870MHz, but otherwise you can basically take for granted that it is rock solid at 300MHz without any cooling, and many units at 400MHz too.
Interesting post. Curious what I can run on a RPi Pico 2 W, since I recently got my hands on one.
When do we get MCUs that run in the GHz range?
https://www.renesas.com/en/products/ra8d2
This is some harmless, stupid fun.
I remember pushing an i7 920 on dry ice with acetone back in the day... also volt-modding an nForce 2 chipset to crank the bus clock for an Opteron 144. So cool!
Well, hope no one tries to deploy overclocked Raspberry Pi hardware in production... especially for kiosk-style applications where they're in a metal box in the sun.
They're unstable enough at stock if taken outside an air conditioned room.
The post is about a microcontroller that sips a fraction of a watt under sane conditions. Cooling its CPU cores is not a problem for real-world applications. You have to bypass the internal voltage regulator and crank up the voltage even more before heat becomes an issue.
This is about the Raspberry Pi Pico 2 (based on the RP2350), not the original Raspberry Pi.
And is it better with bad cooling?