This has always annoyed the piss out of me. It wouldn't have been Bill's or Microsoft's call to make, in the first place. The hardware memory map is not set by software.
The 640K limitation derives from the 1MB address space of the IBM PC, and as the name implies, IBM did the hardware design. They did it around a particular Intel chip, which had a 1MB address space. IBM could've put in hardware support for bank switching (as some EMS/XMS add-in cards later did), they could've used a chip with more than 20 address lines, they could've done a lot of things.
But they didn't. IBM wasn't designing a mainframe-killer, they were designing a personal computer. It was competing with 16k and 64k 8-bit machines, and the first IBM PCs shipped with 64k and later 128k of RAM. Using the top 384K for peripherals and allocating 640K for programs must've seemed insanely generous at the time. But whoever made that decision, it was on the hardware side, not anyone at Microsoft.
But whoever made that decision, it was on the hardware side, not anyone at Microsoft.
Bill Gates' famous comment isn't really a decision though. As it's usually cited it's just an opinion - '640Kb should be enough for anyone' means "I don't think any programs will need more than that!". If someone at IBM decided that there should be 640Kb of available RAM for programs it's believable that Bill might have simply been agreeing with them.
The main thing that's annoying about the quote is that it's trotted out as an example of how Bill was wrong, as if being wrong is something terrible that he should be ashamed of decades later. That's nonsense. Being wrong is fine so long as you change your mind when you understand that you are wrong.
It wasn't even wrong. There wasn't a need for that much memory on a desktop then. You could already do wonders with 64KB on an 8bit micro: edit text, run spreadsheets, play games. You would need more for multimedia or web surfing, but that was still in the future.
19 replies →
At one of the Microsoft company meetings in the mid-to-late 2000s (I recall it was at Safeco Field), he claimed that when IBM was developing the PC, he tried to convince them to use an MC68000 instead of the 8088. He said going with the 8088 set the industry back ten years. Assuming he wasn't making the story up, it's hard to imagine him making that quote or even agreeing with it.
21 replies →
>If someone at IBM decided that there should be 640Kb of available RAM for programs it's believable that Bill might have simply been agreeing with them.
Yeah, "640k should be enough for anybody (specifically in the context of a conversation about current hardware and software, and not necessarily in perpetuity)"
To follow up on that: as if stating anything that’s perfectly acceptable today but not years from now is somehow outrageous.
“8 core CPUs or 32GB of RAM are more than enough for gaming” (c) 2018
Facts change and opinions change with them. Wonder if Moore’s law will get the same treatment now.
1 reply →
No, the point of the 'quote' is people trying to make a point that 'even Bill Gates' can't predict the future need for technology requirements expanding.
Of course that's wrong, and most of the time it is used by people who don't understand the trade offs made in incremental advancements in tech standards.
I'm not even sure he was as wrong as people claim. They used to do a lot with very limited memory. Now, modern computers "need" hundreds of megabytes for a chat application, and slow down for the latest Gmail revamp. The quote serves as a reminder of how well we used to economize on memory.
I don't think the point is that Bill Gates should be ashamed. It's a reminder how fast technology changes and that the assumptions we make today could be invalidated just as quickly.
He didn't understand he was wrong, and he never learned the lesson. Years later, when Microsoft was launching its Zune audio player, he played the fool on stage with a bunch of people, acting as if he had never heard of the iPod.
And the first PC design was deliberately spec-reduced in order not to impact on sales of the dedicated word processor, the IBM Displaywriter.
The 8086 was thought too powerful and would compete against existing IBM products, so the 8088 was chosen. Other changes to expansion and bus architecture were on the same basis.
The PC was not supposed to be the benchmark design on which the entire future of computing was built. If you were looking for one of those, you'd have been better off starting with an Amiga. :)
I was doing OS coding in those days - and writing both drivers and apps on top of Windows. People forget what was happening back then.
Hardware had removed the 640K memory limit - but Windows stuck with it for years afterward. Not for any technical reason, but because they dominated the market, and neither needed to change nor wanted to change - no matter how hard that made things for developers.
I worked with Microsoft guys, and they were very blunt about not wanting to change their working OS code if they didn't need to - so Bill held back the entire industry for years.
MS has always been famously conservative about making sure old software still functions on newer iterations. They launched a completely new OS (NT) precisely so they wouldn't have to kill true DOS compatibility, and they put special-case code in their OS to recognize particular applications and ensure they kept working.
It's easy to say they set back "the industry" for years if we define the industry as chip design, but I think it's just as easy to make the case that they greatly expanded the industry over this time by giving people a platform-and-OS combo that could continue running software from a few years back even after every component had been upgraded. In my opinion, that consumer confidence likely outweighed the costs: it allowed real economies of scale to be reached and spurred the whole industry forward on the back of massive demand.
8 replies →
That's right. My first PC had 512K (and a NEC V40 CPU, 80188-compatible). The first time I met someone at school who owned a PC with 640K, I found it a really weird number, coming as I did from an 8-bit ZX Spectrum with 128K of RAM.
To your point:
Entire businesses have been run on systems that have 512k of RAM.
The IBM Series 1 had up to 128k: it was designed by Don Estridge, more famously known as the father of the PC...
It was a software limit, inasmuch as the software insisted on running in real mode even though by 1989 you had a modern 32-bit CPU with all the features needed to run Windows 10 today.
Nice try Bill. We all know you said it. :)
> But whoever made that decision, it was on the hardware side, not anyone at Microsoft.
The hardware supported more than 1MB, as later CPUs proved. However, MS-DOS didn't really support it, as EMS/XMS proved:
https://www.filfre.net/2017/04/the-640-k-barrier/
I'd say that you've got it backwards, in my opinion. And in the opinion of that fine article I linked.
"The hardware" did not support more than 1 MB. The 8086/8088 had a 20-bit address space (reflected both in the segmentation model and the address bus), so that limits it to 1 MB.
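To make the 1 MB figure concrete, here's a quick sketch (illustrative Python, not production code) of how real-mode segment:offset addressing works on an 8086 with its 20 address lines:

```python
# Real-mode 8086 address calculation: physical = segment * 16 + offset.
# Segment and offset are both 16-bit values, and the result is truncated
# to 20 bits because the chip only has 20 address lines.

def real_mode_address(segment: int, offset: int) -> int:
    """Physical address as the 8086 computes it (with 20-bit wraparound)."""
    return ((segment << 4) + offset) & 0xFFFFF

# Highest addressable byte: 1 MB - 1.
assert real_mode_address(0xFFFF, 0x000F) == 0xFFFFF
# 0xFFFF:0x0010 wraps around to address 0 on an 8086 (no A20 line).
assert real_mode_address(0xFFFF, 0x0010) == 0x00000
```

The wraparound in the last line is exactly the behavior the later A20 gate hack existed to control.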
8 replies →
A somewhat more extensive research of that quote:
https://quoteinvestigator.com/2011/09/08/640k-enough/
Perhaps the most damning quote: "I have to say that in 1981, making those decisions, I felt like I was providing enough freedom for 10 years. That is, a move from 64k to 640k felt like something that would last a great deal of time. Well, it didn't..." -Bill Gates
While he didn't use the exact words of the title quote, there are multiple sources which indicate he was indeed surprised at how fast applications grew in memory requirements/usage. Thus, I rate it as "partly true".
A lot of it was probably caused by the switch from hand-crafted low-level code to libraries and compilers, which usually take up more RAM for comparable features. While such tools can sometimes produce compact code, that's often not considered economical by publishers.
Linus Torvalds, on the other hand, did indeed say "Anybody who needs more than 64Mb/task - tough cookies" in the original Linux announcement email chain.
Well Linus also said his kernel won't be big and professional like GNU's.
Which it isn't. :-)
This is not really the same thing, though. That's not saying "64MB per task should be enough for anybody"; it's arguably the opposite. In his original announcement he clearly introduces his project as a somewhat limited OS.
The way I'd read that, "tough cookies" doesn't mean it isn't a valid need. It just means it isn't supported.
like garrosh says, "times change"
source: https://www.youtube.com/watch?v=oxY89F5oU-I
For both Linus and Bill, anything they said has to be remembered in context. At the time, 640K was "YUUGE" (always say it right). For Linus releasing Linux, a 64Mb task was "YUUGE" (again, just make sure you always say it right).
Garrosh is referring to time travel.
"Modern operating systems can now take advantage of that seemingly vast potential memory. But even 32 bits of address space won't prove adequate as time goes on."
But 64 bits should be enough for anybody.
Other people have pointed out how much memory 64 bits provides, but the real pain will come when we go from 48-bit to full 64-bit addressing. 48 bits already provides 256TB of address space. Right now amd64 only uses the bottom 48 bits of a pointer, and some systems take advantage of that for things like NaN-boxing. I don't really know the numbers, but losing the ability to NaN-box will probably come with a performance hit.
NaN-boxed or tagged pointers have to be processed before use anyway - amd64 enforces canonical pointers, so even though the high bits are unused, bits 63:48 must be sign-extended copies of bit 47.
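For readers unfamiliar with the trick: NaN-boxing hides a 48-bit pointer in the payload bits of an IEEE 754 quiet NaN. A rough pure-Python illustration follows (a real VM does this with raw bit casts; the sign-extension step on unboxing is exactly what the canonical-pointer rule demands):

```python
import struct

QNAN = 0x7FF8_0000_0000_0000  # quiet-NaN pattern: exponent all ones + quiet bit
PTR_MASK = (1 << 48) - 1      # the low 48 bits carry the pointer payload

def box_pointer(ptr: int) -> float:
    """Pack a 48-bit pointer into a NaN's payload bits."""
    assert ptr == ptr & PTR_MASK, "pointer must fit in 48 bits"
    return struct.unpack("<d", struct.pack("<Q", QNAN | ptr))[0]

def unbox_pointer(value: float) -> int:
    """Recover the 48-bit payload, then sign-extend bit 47 so the
    result is canonical as amd64 requires (bits 63:48 = bit 47)."""
    bits = struct.unpack("<Q", struct.pack("<d", value))[0]
    ptr = bits & PTR_MASK
    if ptr & (1 << 47):  # kernel-half address: fill the high bits with ones
        ptr |= ~PTR_MASK & 0xFFFF_FFFF_FFFF_FFFF
    return ptr

p = 0x0000_7F12_3456_7890  # a plausible user-space pointer (hypothetical value)
assert unbox_pointer(box_pointer(p)) == p
```

The extra mask-and-extend on every unbox is the kind of "processing before use" the parent comment refers to.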
AArch64, on the other hand...
3 replies →
NaN-boxing isn't the only strategy for type tagging. We've been using tagged pointers in 32-bit land for decades. I doubt the perf hit here is going to be significant.
1 reply →
Exponentials being what they are, the 32 -> 64 jump buys us a lot of headroom.
IIRC from the ZFS promo blurb, the energy required to flip 2^128 bits is enough to boil the oceans (assuming a maximally efficient process).
I don't quite reach that number. According to Landauer's principle the minimum energy required to flip a bit is 0.0172 eV. Times 2^128 is ~10^18 J. The first google result for boiling all oceans puts it around 10^29 J. Even at realistic energy values for current hardware (10 fJ), we are still 5 orders of magnitude short of boiling all oceans.
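The parent's arithmetic seems right; here is a quick sanity check (all constants approximate, and the ocean figure is just the one quoted above):

```python
import math

EV = 1.602e-19           # joules per electronvolt
LANDAUER = 0.0172        # eV per bit flip: k*T*ln(2) near room temperature
BITS = 2 ** 128

landauer_total = BITS * LANDAUER * EV   # theoretical minimum energy
realistic_total = BITS * 10e-15         # at ~10 fJ per bit on real hardware
OCEAN_BOIL = 1e29                       # J, the figure quoted in the parent

# The Landauer minimum comes to ~1e18 J, as the parent says.
assert 1e17 < landauer_total < 1e19
# Even the "realistic" figure falls short of boiling the oceans
# by roughly 4-5 orders of magnitude.
assert 4 < math.log10(OCEAN_BOIL / realistic_total) < 5
```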
3 replies →
16 million terabytes will be a lot of RAM for at least a couple of decades.
Hey, some of us could even run our favourite Electron app at that point
1 reply →
Moving beyond a 64-bit architecture would allow manipulation of memory address spaces larger than 16 exabytes. Even high-end servers today do not contain more than 1TB of physical memory. It would take several decades, if not more, for memory densities to become high enough to approach the boundary that 64-bit addressing imposes.
So yes, highly likely for a little while..
You can have an AWS instance with up to 12 TB of RAM: https://aws.amazon.com/blogs/aws/now-available-amazon-ec2-hi...
Current x86 architectures only use 48 bits for addressing. That’s still enough to address ~280TB of RAM. I expect that to change before upgrading to 128-bit pointers.
1 reply →
People where I work have workstations with more than 1TB of ram.
I think the value of a larger pointer (immediately) would be tags, not memory density, just as it was with 64-bit systems -- my old Alpha only had 256mb of ram, and maxed out at 512mb IIRC.
6 replies →
> Even high-end servers today do not contain more than 1TB of physical memory.
??? I know of multiple high end servers with 1TiB+ memory footprints. What is your definition of high end?
Here, have a look at HP's page if you don't believe me: https://www.hpe.com/us/en/product-catalog/servers/proliant-s...
And I'd hardly consider those "high end", there are much larger memory servers out there if you want them.
64-bit addressing is much more than enough for many decades to come or even forever!
In fairness, what we have right now is 48-bit addressing, not 64-bit.
I've already run into cases of programs figuring that a "64-bit" address space is basically infinite and running out of address space by trying to have tens to hundreds of thousands of 4GB memory mappings (not necessarily backed by RAM).
For local addressing inside a computer 16 exabytes is enough.
For global or large system level addressing 128 to 256-bits should be enough.
(note: you may want local 128-bit virtual addressing for other reasons than accessing more memory)
"(note: you may want local 128-bit virtual addressing for other reasons than accessing more memory)"
Yes, that's why IPv6 is as big as it is. We could fit pretty comfortably into 64 bits for a long time, really, if you assume we fairly tightly pack everybody in. 128 bits is really to give us some routing headroom, not because anyone seriously thinks we're going to use IPv6 on 2^128 distinct targets any time soon. (By the time we have a "galactic internet" it sure won't be using IPv6, not because it's bad or wrong but just because we're going to need something designed to handle the very different challenges involved.)
1 reply →
I think there are roughly 2^166 atoms on earth (translating 10^50 if I didn't mess up my maths). So clearly we don't need 256 bit addressing for any system!
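The conversion checks out; a two-line sanity check (the atom count is the common ballpark estimate):

```python
import math

ATOMS_ON_EARTH = 1.33e50   # common order-of-magnitude estimate

# The parent's conversion: 2^166 ~ 10^50, since 166 * log10(2) ~ 49.97.
assert math.isclose(166 * math.log10(2), 50, abs_tol=0.1)
# So ~170 bits comfortably give every atom on Earth a unique address,
# and 256-bit addressing overshoots that by a factor of ~2^90.
assert 2 ** 170 > ATOMS_ON_EARTH
assert (2 ** 256) // (2 ** 166) == 2 ** 90
```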
1 reply →
You will regret this.
Slightly off-topic, but it's cool to see David Mikkelson posting in that Usenet group as "Snopes," as that's where he first used the name, even before launching the urban myth-busting site in the '90s.
I had no idea Snopes was that old; that was my takeaway from this as well. We've always assumed it wasn't an actual quote from Gates, but I only knew Snopes as the website.
Snopes wasn't originally called "Snopes," was it? I seem to remember it as something else, like "urbanlegend" or something. Maybe the movie bought the name?
Wow... in a post debunking the claim that the comment ever happened, a lot of people here still take it as a fact.
Now, what do you guys think about "Al Gore said he invented the internet"?
Ah, the wonders of a post-truth world...
3MB of RAM should be enough for anybody to get a ticket out of the Sprawl.
Gibson's prose has a short shelf life. That's why he doesn't write future-set stories anymore.
In the film of Johnny Mnemonic, the screenplay of which was penned by Gibson himself, Johnny must act as a courier for 320 GB by using a brain implant -- in 2025. It's 2018 in the real world, and data smugglers today would probably get farther by swallowing toy balloons with two or three microSD cards in them than by going the whole invasive wipe-your-childhood-memories brain implant route (if such a route were available).
Not true that Gibson's prose has a short shelf life. Neuromancer is in my view a timeless masterpiece, and Gibson's most recent book is set in the future.
18 replies →
Star Trek: The Next Generation referred to data sizes as "quads", which has the advantage that it has actually future-proofed the series a bit. It's fairly clear that whatever a "quad" is, it isn't just two bits smashed together, but what it actually is, who knows.
1 reply →
If you try to make your stories too future-proof you end up making it look like just magic. It may also become harder to relate to.
See Numenera [0] though, set a billion years in the future, while still keeping a sense of familiarity.
[0] https://numenera.com
8 replies →
3MB of hot, hot ram
I find it quite funny that there's a parallel discussion as to whether or not everything in a modern circuit board is implemented with NAND gates.
While today, 16GB of ram and 1TB of storage will soon not be enough for a modest computer.
https://en.wikipedia.org/wiki/Wirth%27s_law
Although I'm sure hardware companies will find a way to push deep learning features into home computers, which will obviously require more and more transistors.
Today in 2018 you need 16GB. Back in 1998 it was 16MB and 20 years before that it was 16KB.
That should tell you how much you'll need in 2038.
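Taken literally, the rule of thumb above is a 1000x jump every 20 years. Extrapolating it (with no claim the trend will actually hold):

```python
KB = 1024

def typical_ram_bytes(year: int, base_year: int = 1978, base: int = 16 * KB) -> int:
    """Extrapolate from 16 KB in 1978, multiplying by 1000 every 20 years."""
    return base * 1000 ** ((year - base_year) // 20)

assert typical_ram_bytes(1998) == 16 * KB * 1000       # ~16 MB
assert typical_ram_bytes(2018) == 16 * KB * 1000 ** 2  # ~16 GB
# By this (very rough) trend, 2038 lands around 16 TB.
assert typical_ram_bytes(2038) == 16 * KB * 1000 ** 3
```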
Out of curiosity how did this chain end up on google groups? Where was the original and how was google able to ingest it?
Thanks!
https://en.wikipedia.org/wiki/Google_Groups#Deja_News
Thank you!
This reply down the thread wins it for me:
> I always thought he was talking about his monthly bonus, not computer memory...
Ha!
soon after that, it became his daily bonus...
https://books.google.it/books?id=2C4EAAAAMBAJ&q=%22nobody+wo...
the fact that a direct quote doesn't exist doesn't mean that it wasn't the sentiment at the time.
That it was the sentiment at the time is one thing, but it doesn't make it right to attribute a quote to someone who didn't say it, even if the quote matches the general sentiment, and even if that someone agreed with it.
nor does it mean it was even bill gates' sentiment.
Doesn't matter. We all know there is no reason anyone would want a computer in their home.
This scene https://www.youtube.com/watch?v=XXBxV6-zamM#t=1h28s (at 1:00:28) shows why people would think that.
No one actually wanted a computer in their home.
They did want an interactive TV set.
Oh, that explains why there's no market for more than five computers in the world.
1 reply →
If smartphones are not considered a computer, then that probably is the case for the majority of people today.
Enlighten me. How can we twist concepts so that a smartphone is not a computer?
4 replies →
that's so last century... now we believe that 'nobody needs a quantum computer at home'
1024 qubits is surely enough for everything and everyone
I think Gates overestimated https://en.wikipedia.org/wiki/Swappiness
Hah. Reading this is quite amusing. Tricksters and pranksters were just as prevalent on the internet back then as they are now.
"Tricksters and pranksters" AKA good old fashioned trolling. It's a shame the modern definition of "troll" seems to have warped to include those who tell people to kill themselves on social media etc.
Having watched a 4K movie, I can confidently say 640K should be enough for anybody.
Well, let's be scientific about this. If Bill Gates is willing to write me a check for 640K, I'll be happy to test his theory for him. ;-)
LMAO
Yeah, I don't think it's a literal quote, unless it was in a very specific context.
It was making fun of the 640K limitation of MS-DOS, and the implicit reasoning behind that hard limit.
Of course, this reminds me of the OS/2 2.0 fiasco.
I have 16384000K of ram and I could use more. However, my use case might be different from someone browsing the web.
:-)
Sure, Bill. Of course you didn't say something like that as it would make you sound totally clueless to future generations. You wouldn't want a howler of a mistake like that to be your calling card, would you? So, yeah, of course you didn't say it.