This week in 1988, Robert Morris unleashed his eponymous worm

1 day ago (tomshardware.com)

I followed his course 6.5840 on distributed systems (https://pdos.csail.mit.edu/6.824/, YouTube videos at https://youtube.com/playlist?list=PLrw6a1wE39_tb2fErI4-WkMbs...) and completed the labs. One day, out of curiosity, I looked up his name. Then I realized what a legend he is.

Great course by the way.

  • RTM was my TA at MIT for a CS/systems engineering course. It took the students until we did an assignment about the worm to realize who he was IIRC. The students thought it was very cool, but even then, as a TA covering the assignment, he didn't really talk about it.

    • He was also a TA at Harvard with Trevor Blackwell for CS 148 (computer networking, taught by H T Kung) at the time. I remember taking that with them in 1995.

  • His dad was a legend as well: chief scientist at the NSA.

    • Which is why he was able to survive taking the fall alone, and let Paul Graham go on to have an illustrious career of picking fights with obscure bloggers and saying dumb things about women in tech.

  • Would be cool if he added a session on how to hack distributed systems in 1988...

    • Account "guest" with no password was provided by default back then, to help others do some work remotely, debug connection issues, or chat with admins.

    • Honestly, there was not very much security back in those days. So much relied on trusting the Internet "community" not to abuse it.

    • > Would be cool if he added a session on how to hack distributed systems in 1988...

      username: field

      password: technician

When I worked at Convex, there was an unnatural mania that fingerd be disabled and all sendmail patches be applied as quickly as possible. When I asked why, the answer started with "well... a couple of years ago there was this guy from the east coast who worked here for a year..."

The 10% number is completely made up. According to Paul Graham, "I was there when this statistic was cooked up, and this was the recipe: someone guessed that there were about 60,000 computers attached to the Internet, and that the worm might have infected ten percent of them."

  • That figure is probably UUCP mostly not live connected hosts. I could be wrong, but 60k hosts that you could telnet to sounds like a lot of ducking hosts back then. I was there too, in my late teens. God bless PG.

    • Yeah, and a 'host' back then wasn't a cheap PC or something; they tended to be $30,000 workstations or $300,000 servers, at tech companies and universities only, and mostly in the US. 60k sounds like a lot for those days. It grew massively from the early 90s.

      Even UUCP was still really fringe and those weren't actually connected hosts on tcp/ip. They had their own dialup mail exchange protocol similar to fidonet.

      6 replies →

A good account is With Microscope and Tweezers: The Worm from MIT's Perspective [1], published in CACM a few months after the event. Notice it was the worm.

I was an intern at IBM in '88 and they shut down the (IIRC) two internet gateways to their corporate network (VNET) while people figured out what was going on. News moved slowly back then, and the idea of self-replicating software was unusual, although IBM had had its own replicator the previous year [2].

[1] https://www.cs.columbia.edu/~gskc/security/rochlis89microsco...

[2] https://en.wikipedia.org/wiki/Christmas_Tree_EXEC

  • >the idea of self-replicating software was unusual

    Floppy-based viruses were well established and quite common.

    • True. I should probably have qualified that to something like "independently self-replicating". Floppy-disk based viruses obviously still required humans in the transmission path, whereas the Morris Worm and its successors were novel in that they used the internet and worked without human intervention.

      Memories of adding an illicit McAfee to autoexec.bat on my boot floppies...

    • Yes. We ran non-networked Mac computer rooms at university, and having a good antivirus was an absolute must. Infections spread through floppies.

      The Mac's ease of use, compared to the PC, also made it the juiciest virus target.

I'm pretty sure Paul Graham was directly involved in this story (not in any bad, culpable way, but enough that, were a film to be made about it, a well-known actor would be cast for his part).

https://news.ycombinator.com/item?id=38020635

  • Out of curiosity, why do you think this?

    • There's contemporaneous reporting. It's in Katie Hafner and John Markoff's book! A friend of Morris', named Paul, has a role in the aftermath of the worm.

      I'm not dunking on Paul Graham here. If you know anything about me, if anything, this is a point in his favor. :)

      7 replies →

That was one scary exciting day (source: was running machines at MIT at the time)

  • That day our tech chief at the time came running and told us about the worm, and that apparently our country managed to avoid it because the news spread quickly enough that one guy simply unplugged the whole country from the Internet - there was only a single connection back then. (!)

  • I remember that day was sooooooooooo quiet on Usenet.

    Not much was happening in the Eng and CS buildings on campus (except for those that had to deal with the worm).

  • Good times, good times. I was in a Stanford computer lab when everything started to get very, very slow.

From the Wikipedia article:

Clifford Stoll, author of The Cuckoo's Egg, wrote that "Rumors have it that [Morris] worked with a friend or two at Harvard's computing department (Harvard student Paul Graham sent him mail asking for 'Any news on the brilliant project')".

Has pg commented on this?

  • PG spoke about the worm a bit in an interview here: https://aletteraday.substack.com/p/letter-85-paul-graham-and...

    Some quotes from that:

    > The worm, no one would have ever known that the worm existed, except there was a bug in it. That was the problem. The worm itself was absolutely harmless. But there was a bug in the code that controlled the number of copies that would spread to a given computer. And so the computer would get like 100 copies of the worm running on it, back in the day, when having 100 processes running on your computer would be enough to crash it.

    >he called me and told me what had happened.

    • I suppose the notion that you could just distribute untested software onto an unlimited number of other people's computers without consent wasn't yet considered unethical, and therefore the worm was perceived to be absolutely harmless by rtm and pg. Just some minor details they couldn't possibly have seen back then.

      3 replies →

  • A bit of an aside from The Cuckoo’s Egg;

    It’s been a long time since I read the book, but IIRC Cliff visited with Robert Morris (rtm’s dad) at the NSA when he traveled to Washington DC, and I think the worm and rtm are mentioned after he meets with the elder Robert.

I was a student part-time administrator/systems programmer at the Purdue Engineering Computer Network at the time. Our OS installs had enough local mods (and we had enough non-VAX, non-Sun architectures) that we were immune to some of the worm's modalities, but the sendmail debug mode exploit at least still caused a lot of consternation.

  • Diversity is security! I wish more people understood that. It may be more difficult to manage a bunch of diverse systems, but they are much more resilient to attacks.

    • I don't think that's proven out, like, at all; measure it against the returns on hardening mainstream platforms. The "monoculture" security thing has always been overblown, not least because you're never going to get an ecology where you have enough diversity to matter. Having 3 mainstream desktop or phone options is only marginally better than having just 1, and you're never going to have 20.

      2 replies →

Morris’s program wasn’t meant to be malicious, but it accidentally became a turning point in cybersecurity history. Much of what we now know as security research, red teaming, and even the “gray hat” culture can be traced back to that moment.

  • I'll note that phrack magazine predates the worm by 3 years. Wargames, the movie, predates it by 5 years. 2600 by 4 years. Mitnick started having fun around 9 years earlier.

    I'm not so sure the Morris worm was the turning point.

I expected some info on its functioning. The goal was to gauge the size of the Internet, but how? Why did it fail? I guess Wikipedia to the rescue.

I used to keep a vt100 at the head of my bed, roll over and check on things a few times at night. 3 am and everything is screwed. Can't really log in anyplace, or start any jobs. The bus doesn't run until 5:30, so I just get dressed and walk across the bridge to the lab. The visitors center isn't open, so I just sneak through the exit by the guardhouse. They're civilian contractors; they either don't see me, or recognize me and don't care.

Since it's all locked up, I just reboot the big vax single user - that takes about 10 minutes so I also start on a couple of the suns. You have to realize that everything including desktops runs sendmail in this era, and when some of these machines come up they are ok for a sec and then sendmail starts really eating into the cpu.

I'm pretty bleary-eyed but I walk around restarting everything single-user and taking sendmail out of the rc scripts. The TMC applications engineer comes in around 7 and gets me a cup of coffee. He manages to get someone to pick up in Cambridge and they tell him that's happening everywhere.

I assume you all know that Robert Morris is one of the YC (and Viaweb) cofounders? [1] Together with Paul Graham, Jessica Livingston, and Trevor Blackwell.

[1] https://en.wikipedia.org/wiki/Robert_Tappan_Morris

I remember this event as one of the few times that the Internet made the mainstream news in the eighties. After the fact I talked with some network people at Michigan and Michigan State and it was not a very good day for them. They also wanted jail time for him, which did not happen.

Thankfully the security holes in C that allowed the Morris worm to exist have been taken care of by WG14 since then.

  • The future isn't evenly distributed. I recently discovered an actively developed software project that had a ton of helper functions based on the design of `gets` with the same vulnerability. Surprisingly not all C/C++ developers have learned yet to recoil in horror at seeing a buffer pointer being passed around without a length. (C++'s std::span was very convenient for fixing the issue by letting the buffer pointer and length be kept together, exactly like Go and Rust slices.)

    • > Surprisingly not all C/C++ developers have learned yet to recoil in horror at seeing a buffer pointer being passed around without a length.

      As someone who wasn't taught better (partly due to not picking CS as a career stream), are there any languages which avoid such vulnerability issues? Does something like rust help with this?

      2 replies →

I find it funny that:

1) He released it from MIT to avoid suspicion.

2) After he was convicted, he went from Cornell to Harvard to complete his Ph.D.

3) He became an assistant professor at MIT after that.

He had to be really spectacular/have crazy connections to still be able to finish his training at a top program and get a job at the institution he tried to frame.

  • One of my favourite quiet jokes is the "Editorial Board" list for The Annals of Improbable Research¹ where RTM is listed under Computer Science. Asterisks after each name denote qualifications, RTM's being "Convicted Felon"

    ---

    ¹Awarders of the Ig Nobel prize

  • Have you read any of his papers? Morris was not fucking around.

    • Please expand?

      He was and is very smart. This is not disputed. He was 23 at the time. Not exactly a child.

      The worm was surprisingly elaborate, containing three separate remote exploits.

      It probably took a few weeks to build and test.

      So sabotaging thousands of at the time very expensive network connected computers was a very deliberate action.

      I posit that he likely did it to become famous and perhaps even successful, feeling safe with his dad’s position. And it worked. He did not end up in prison. He ended up cofounding Viaweb and YCombinator.

      Unironically a great role model for YC. :/

      23 replies →

  • You know his dad ran research at the NSA right?

    His dad's also a badass and super fun to talk to. Never talked to the son though, but I'd love to some day.

    • I talked to the son at one of the early (~2008) YC dinners. Actually found him more approachable than PG or most YC founders; RTM is a nerd in the "cares a whole lot about esoteric mathematics" way, which I found a refreshing change from the "take over the world" vibe that I got from a lot of the rest of YC.

      Interesting random factoid: RTM's research in the early 2000s was on Chord [1], one of the earliest distributed hash tables. Chord inspired Kademlia [2], which later went on to power Limewire, Ethereum, and IPFS. So his research at MIT actually has had a bigger impact in terms of collected market cap than most YC startups have.

      [1] https://en.wikipedia.org/wiki/Chord_(peer-to-peer)

      [2] https://en.wikipedia.org/wiki/Kademlia

    • RTM Jr is a very nice person, obviously very smart, but also has a good sense of humor and is friendly and approachable. We overlapped as C.S. grad students at Harvard for several years.

    • I did not. That actually makes everything make much more sense. I was even wondering how he got out of jail time for something like this and just thought he had amazing lawyers.

      9 replies →

  • > tried to frame.

    MIT really respects good hacks and good hackers. It was probably more effective than sending in some PDF of a paper.

    • >MIT really respects good hacks and good hackers.

      Oooof in light of Aaron Swartz. He plugged directly into a network switch that was in an unlocked and unlabelled room at MIT so he could download faster and faced "charges of breaking and entering with intent, grand larceny, and unauthorized access to a computer network".

      MIT really didn't lift a finger for this either.

      >Swartz's attorneys requested that all pretrial discovery documents be made public, a move which MIT opposed

      https://en.wikipedia.org/wiki/Aaron_Swartz

      5 replies →

Funnily enough, just a few weeks before that, REM released their eponymous album. Perhaps Morris was inspired by that?

Wikipedia says the Morris worm went out on 1988 Nov 2. No idea why they would publish the article on 2025 Nov 4 with that title.

I remember that the Boston Museum of Science used to have a floppy disk on display with the Morris worm on it.

Oh, those memories!

He was sentenced to pay $10,050; today he would not get away that easily, I guess...

Another thing I didn't know (citing Wikipedia):

"In 1995, Morris cofounded Viaweb with Paul Graham, a start-up company that made software for building online stores. It would go on to be sold to Yahoo for $49 million, which renamed the software Yahoo! Store."

and (same source):

"He is a longtime friend and collaborator of Paul Graham. Along with cofounding two companies with him, Graham dedicated his book ANSI Common Lisp to Morris and named the programming language that generates the online stores' web pages RTML (Robert T. Morris Language) in his honor."

I’m still waiting for the first runaway autonomous botnet.

  • Currently AI doesn't work very well on hardware separated by hundreds of milliseconds of latency and slow network links. Both the training and inference are slow.

    However I think this is a solvable problem, and I started solving it a while ago with decent results:

    https://github.com/Hello1024/shared-tensor

    When someone gets this working well, I could totally see a distributed AI being tasked with expanding its own pool of compute nodes by worming into things, developing new exploits, and sucking up more training data.

    • Couldn’t an AI write and deploy a botnet much like a human does today? With a small, centralized inference core.

      It doesn’t need to be fully decentralized, the control plane just needs some redundancy

  • It's kind of surprising that it hasn't happened already, outside of iot junk. Seems like computer OSs just got so secure that it's become impractical to deploy a widespread exploit. And everything moved to scamming instead.

> the internet in 1988

60k computers ( mostly at institutions ) in 20 countries

  • Everything was slower though. Turkey, as a whole country, had one 9600 bps link to Bitnet at the time. The Internet was accessed through Bitnet gateways. Systems (CPUs and I/O in general) were also much slower.

    • Slower and unstable. I spent a lot of my freshman year in college on Bitnet chat and IIRC about every 30 minutes there would be a "netsplit" and a bunch of folks in the chat would disappear. Maybe it was our university's connection, which I think was direct to UIUC. I've posted here before that back then I thought Bitnet chat was magical. Things like being in a chat room with students in Berlin while the wall was falling felt so futuristic to me.

    • Much slower. Most campuses in the US were connected with 56K dedicated lines. The NSF backbone had just upgraded to T1.

    • ftp.wustl.edu would manage about 1 KBps and I was sitting one hop away from it at UIUC.

      Insomnia paid off a lot back then.

The Morris worm is certainly the more historically important one but AFAIK nothing has ever beaten SQL Slammer (2003) for sheer sleekness and propagation speed: 376 bytes, sent as UDP packets to randomly generated IP addresses as fast as the network interface could pump them out. Infected all susceptible hosts on the entire Internet within 10 minutes. Thankfully, that was only MSSQL servers and, being that sleek, it had no persistence mechanism. So turning the machine off and on again removed the infection completely.

>However, the pioneering Morris worm malware wasn’t made with malice, says an FBI retrospective on the “programming error.” It was designed to gauge the size of the Internet, resulting in a classic case of unintended consequences.

Had RTM actually RTM'd, the world might be a bit different than it is today.

  • Well, sort of. RTM underestimated the effect of exponential growth, and thought that he would in effect have an account on all of the connected systems, without permission. He evidently didn't intend to use this power for evil, just to see if it could be done.

    He did do us all a service; people back then didn't seem to realize that buffer overflows were a security risk. The model people had then, including my old boss at one of my first jobs in the early 80s, was that if you fed a program invalid input and it crashed, this was your fault because the program had a specification or documentation and you didn't comply with it.

    • Interestingly, it took another 7 years for stack overflows to be taken seriously, despite a fairly complete proof of concept widely written about. For years, pretty much everybody slept on buffer overflows of all sorts; if you found an IFS expansion bug in an SUID, you'd only talk about it on hushed private mailing lists with vendor security contacts, but nobody gave a shit about overflows.

      It was Thomas Lopatic and 8lgm that really lit a fire under this (though likely they were inspired by Morris' work). Lopatic wrote the first public modern stack overflow exploit, for HPUX NCSA httpd, in 1995. Later that year, 8lgm teased (but didn't publish --- which was a big departure for them) a remote stack overflow in Sendmail 8.6.12 (it's important to understand what a big deal Sendmail vectors were at the time).

      That 8lgm tease was what set Dave Goldsmith, Elias Levy, San Mehat, and Pieter Zatko (and presumably a bunch of other people I just don't know) off POC'ing the first wave of public stack overflow vulnerabilities. In the 9-18 months surrounding that work, you could look at basically any piece of privileged code, be it a remote service or an SUID binary or a kernel driver, and instantly spot overflows. It was the popularization with model exploits and articles like "Smashing The Stack" that really raised the alarm people took seriously.

      That 7 year gap is really wild when you think about it, because during that time period, during which people jealously guarded fairly dumb bugs, like an errant pipe filter input to the calendar manager service that ran by default on SunOS shelling out to commands, you could have owned literally any system on the Internet, so prevalent were the bugs. And people blew them off!

      I wrote a thread about this on Twitter back in the day, and Neil Woods from 8lgm responded... with the 8.6.12 exploit!

      https://x.com/tqbf/status/1328433106563588097

      5 replies →

I was logged into brillig.umd.edu (University of Maryland's Vax 8600) that night, frustrated that my emacs kept getting paged out, rhythmically typing ^A ^E ^A ^E to wiggle the cursor around to keep it paged in while I thought.

I ps aux'ed and saw a hell of a lot of sendmail demons running, but didn't realize till the next morning that we were actively under attack, being repeatedly but unsuccessfully finger daemon gets(3) buffer overflowed, and repeatedly and successfully sendmail daemon DEBUG'ed.

RTM's big mistake was not checking to see if a machine was already infected before re-infecting it and recursing, otherwise nobody would have noticed and he would have owned the entire internet.

What's funny is that UMD was on MILNET via NSA's "secret" IMP 57 at Fort Meade, so RTM's worm was attacking us through his daddy's own MILNET PSN (Packet Switching Node)!

https://news.ycombinator.com/item?id=31822138

    From: Dennis G. Perry <PERRY@vax.darpa.mil>
    Date: Apr 6, 1987, 3:19 PM

    Jordan, you are right in your assumptions that people will get annoyed
    that what happened was allowed to happen.

    By the way, I am the program manager of the Arpanet in the Information
    Science and Technology Office of DARPA, located in Roslin (Arlington), not
    the Pentagon. [...]

Here's my story of The Night of The Worm:

https://www.ee.torontomu.ca/~elf/hack/internet-worm.html

>The Sendmail Attack:

>In the sendmail attack, the worm opens a TCP connection to another machine's sendmail (the SMTP port), invokes debug mode, and sends an RCPT TO that requests its data be piped through a shell. That data, a shell script (first-stage bootstrap), creates a temporary second-stage bootstrap file called x$$,l1.c (where '$$' is the current process ID). This is a small (40-line) C program.

>The first-stage bootstrap compiles this program with the local cc and executes it with arguments giving the Internet hostid/socket/password of where it just came from. The second-stage bootstrap (the compiled C program) sucks over two object files, x$$,vax.o and x$$,sun3.o, from the attacking host. It has an array for 20 file names (presumably for 20 different machines), but only two (vax and sun) were compiled into this code. It then figures out whether it's running under BSD or SunOS and links the appropriate file against the C library to produce an executable program called /usr/tmp/sh - so it looks like the Bourne shell to anyone who looked there.

>The Fingerd Attack:

>In the fingerd attack, it tries to infiltrate systems via a bug in fingerd, the finger daemon. Apparently this is where most of its success was (not in sendmail, as was originally reported). When fingerd is connected to, it reads its arguments from a pipe, but doesn't limit how much it reads. If it reads more than the internal 512-byte buffer allowed, it writes past the end of its stack. After the stack is a command to be executed ("/usr/ucb/finger") that actually does the work. On a VAX, the worm knew how much further from the stack it had to clobber to get to this command, which it replaced with the command "/bin/sh" (the Bourne shell). So instead of the finger command being executed, a shell was started with no arguments. Since this is run in the context of the finger daemon, stdin and stdout are connected to the network socket, and all the files were sucked over just like the shell that sendmail provided.

It's a little shocking to me that there haven't been more things like this.

While we're much more conscientious and better at security than we were way back then, things are certainly not totally secure.

The best answer I have is the same as what a bio professor told me once about designer plagues: it hasn't happened because nobody's done it. The capability is out there, and the vulnerability is out there.

(Someone will chime in about COVID lab leak theories, but even if that's true that's not what I mean. If that happened it was the worst industrial accident in history, not an intentional designer plague.)

  • >The best answer I have is the same as what a bio professor told me once about designer plagues: it hasn't happened because nobody's done it. The capability is out there, and the vulnerability is out there.

    I could be wrong, but I've come to believe that despite the hype they have very little capability.

  • After things like

    https://en.wikipedia.org/wiki/Blaster_(computer_worm)

    https://en.wikipedia.org/wiki/SQL_Slammer

    https://en.wikipedia.org/wiki/Sasser_(computer_worm)

    Bill Gates sent out the "Trustworthy Computing" memo to harden Windows and make it somewhat secure.

    Essentially, Windows used to be trivial to exploit, in that every single service was by default exposed to the web, full of very trivial buffer overflows that dovetailed nicely into remote code execution.

    Since then, Windows has stopped exposing everything to the internet by default and added a firewall, fixed most buffer overflows in entry points of these services, and made it substantially harder to turn most vulnerabilities into the kind of remote code execution you would use to make simple worms.

    >better at security than we were way back then

    In some ways this is dramatically understated. Now the majority of malware comes from getting people to click on links, targeted attacks that drop it, piggyback riding in on infected downloads, and other forms of just getting the victim to run your code. Worms and botnets are either something you "willingly" install through "free" VPNs, or target absolutely broken and insecure routers.

    The days where simply plugging a computer into the internet would result in you immediately trying to infect 100 other computers with no interaction are pretty much gone. For all the bitching about forced updates and UAC and other security measures, they basically work.

  • To a fairly significant extent, the Morris worm is why there haven't been more; it did prompt something of a culture shift away from trusting users to trusting mechanisms, mostly by prompting people to realise that the internet wasn't only going to be in the hands of a set of people who were one or two degrees of separation apart. It didn't make sense to assume people would treat it with reverence like a giant beautiful shared space.

    It's most obviously paralleled by Samy Kamkar's MySpace worm, which exploited fairly similar too-much-trust territory.

    • I imagine the heterogeneity of modern computing environments, the number of 'layers' in any system, and the sheer size of the modern Internet all also make it harder to scale.

[flagged]

  • 6,000+, and those machines served many others (back then there were tens of thousands of machines on the Internet, but probably 10x as many that were connected to these by relays that handled email or Usenet traffic).

      • Also worth remembering that especially with Internet-connected computers almost everything was multiuser. You did work on the Internet from a shell on a shared Unix server, not from a laptop.

      1 reply →

Hypothetically, if the M$ cloud ecosystem got completely obliterated (including backups), would customers switch? Or is the lock-in as complete as it is with operating system customers?