Comment by tiborsaas

17 hours ago

> We see that that’s quite a long line. Mail servers don’t like that

Why do mail servers care about how long a line is? Why don't they just let the client reading the mail worry about wrapping the lines?

SMTP is a line-based protocol, including the part that transfers the message body.

The server needs to parse the message headers, so it can't be an opaque blob. If the client uses IMAP, the server needs to fully parse the message. The only alternative is POP3, where the client downloads all messages as blobs and you can only read your email from one location, which made sense in the year 2000 but not now that everyone has several devices.

  • But everything after the headers can (almost) be treated as a blob. Just copy buffers while taking care to track CRLF and check whether what follows is a space. In fact, you have to do that anyway, because line folding is allowed in headers as well! And this technique of chunking long lines has been around since the '70s, when people started writing parsers on top of hand-crafted buffered I/O. (A sketch of the unfolding logic follows this list.)

  • Hey, POP3 still makes sense. Having a local copy of your emails is useful.
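
A minimal Python sketch of the chunking/unfolding idea mentioned above (my illustration, not from any real mail parser): read raw header lines and treat a line that starts with whitespace as a continuation of the previous field.

    def read_headers(lines):
        """Yield unfolded header fields from an iterable of raw lines."""
        current = None
        for line in lines:
            line = line.rstrip("\r\n")
            if line == "":                 # blank line ends the header block
                break
            if line[0] in " \t":           # folded: belongs to previous field
                current += " " + line.strip()
            else:
                if current is not None:
                    yield current
                current = line
        if current is not None:
            yield current

    raw = [
        "Subject: a header folded\r\n",
        "  across two lines\r\n",
        "From: me@example.com\r\n",
        "\r\n",
        "body starts here\r\n",
    ]
    print(list(read_headers(raw)))
    # ['Subject: a header folded across two lines', 'From: me@example.com']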

Mails are (or used to be) processed line by line, typically using fixed-length buffers. This avoids dynamic memory allocation and having to write a streaming parser. RFC 821 eventually capped the line length at 1000 characters, including the CRLF.
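
A small Python sketch of that idea (my illustration, not code from any real server): enforce the limit while reading, so an overlong line is rejected instead of buffered without bound.

    import io

    LIMIT = 1000  # RFC 821: max text line length, including the CRLF

    def read_line(stream):
        line = stream.readline(LIMIT)      # reads at most LIMIT bytes
        if len(line) == LIMIT and not line.endswith(b"\n"):
            raise ValueError("500 Line too long")
        return line

    buf = io.BytesIO(b"MAIL FROM:<me@example.com>\r\n" + b"A" * 2000 + b"\r\n")
    print(read_line(buf))                  # first line fits within the limit
    try:
        read_line(buf)
    except ValueError as err:
        print(err)                         # 500 Line too long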

Given a mechanism for soft line breaks, breaking at well below 80 characters increases compatibility with older mail software and makes the raw email more convenient to read in a terminal.

This is also why MIME Base64 typically inserts line breaks after 76 characters.
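
Python's standard library follows the same 76-character convention; a quick demonstration (my example, nothing from the RFC itself):

    import base64

    data = bytes(range(256))                    # arbitrary binary payload
    wrapped = base64.encodebytes(data)          # inserts "\n" every 76 chars
    print(max(len(line) for line in wrapped.splitlines()))   # 76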

  • In the early days, many (if not most) people also read their email on terminals (or printers) with 80-column lines, so breaking lines at around 72 characters was considered good email etiquette (it left room for a later quoting prefix ">" without exceeding 80 columns).

    • One of the technical marvels of the day was mail and Usenet clients that could properly render quoted text from infinite, never-ending flame wars!

I don't think kids today realize how little memory we had when SMTP was designed.

For example, the PDP-11 (early 1970s), which was shared among dozens of concurrent users, had 512 kilobytes of RAM. The VAX-11 (late 1970s) might have as much as 2 megabytes.

Programmers were literally counting bytes to write programs.

  • I assure you we were not, at least it wasn’t really necessary. Virtual Memory is a powerful drug.

    • My point is that bytes mattered. If you could put a year in 2 bytes instead of 4, you did. If you could shrink the TCP header by packing fields, you did. And if you could limit SMTP memory use by specifying a 1000-byte limit, then that's what you did.

      Every programmer I know from that era knew how big things were in bytes, because it mattered.

      Also, not all PDP-11 systems had VM. And the designers of SMTP certainly did not expect that it would only run on systems with VM.

This is how email work(ed) over SMTP. Each command sent would get a 2xx-class response (success) or a 4xx/5xx-class response (failure). Sound familiar?

    telnet smtp.mailserver.com 25
    HELO
    MAIL FROM: me@foo.com
    RCPT TO: you@bar.com
    DATA
    blah blah blah
    how's it going?
    talk to you later!
    .
    QUIT

  • For anyone who wants to try this against a modern server:

        openssl s_client -connect smtp.mailserver.com:smtps -crlf
        220 smtp.mailserver.com ESMTP Postfix (Debian/GNU)
        EHLO example.com
        250-smtp.mailserver.com
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-AUTH PLAIN LOGIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250-DSN
        250-SMTPUTF8
        250 CHUNKING
    
        MAIL FROM:me@example.com
        250 2.1.0 Ok
    
        RCPT TO:postmaster
        250 2.1.5 Ok
    
        DATA
        354 End data with <CR><LF>.<CR><LF>
    
        Hi
        .
        250 2.0.0 Ok: queued as BADA579CCB
    
        QUIT
        221 2.0.0 Bye
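
    A rough equivalent of the same dialogue using Python's smtplib, for anyone who'd rather script it (the hostname is a placeholder, as above):

        import smtplib

        # SMTP_SSL speaks TLS on the smtps port (465), like the openssl
        # invocation above; sendmail() issues MAIL FROM, RCPT TO, and DATA.
        with smtplib.SMTP_SSL("smtp.mailserver.com") as smtp:
            smtp.ehlo("example.com")
            smtp.sendmail("me@example.com", ["postmaster"],
                          b"Subject: test\r\n\r\nHi\r\n")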

  • This brings back some fun memories from the 1990s when this was exactly how we would send prank emails.

    • Yep! And also, if you included a blank line and then the headers for a new email at the bottom of your message, you could tell the server: hey, here comes another email for you to process!

      If you were typing into a feedback form powered by something from Matt’s Script Archive, there was about a 95% chance you could trivially get it to send out multiple emails to other parties for every one email sent to the site’s owner.
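
      The classic fix (a hypothetical sketch, not Matt's actual code) was simply to refuse CR or LF in any user-supplied header value:

          def safe_header(value):
              # A bare CR or LF in a header value is what lets an attacker
              # smuggle extra headers (or whole extra messages) into DATA.
              if "\r" in value or "\n" in value:
                  raise ValueError("header injection attempt")
              return value

          safe_header("A perfectly normal subject")       # fine
          try:
              safe_header("hi\r\nBcc: victim@example.com")
          except ValueError as err:
              print(err)                                  # header injection attempt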

  • I like how SMTP was at least honest in calling it the "receipt to" address and not the "sender" address.

    Edit: wrong.

    • RCPT TO specifies the destination (recipient) address; the "sender" is what is written in MAIL FROM.

      However, what most mail programs show as sender and recipient is neither of these: they show the From: and To: headers contained in the message instead.


Back in the '80s and '90s it was common to use static buffers to simplify implementation: you allocate a fixed-size buffer and reject a message if it has a line longer than the buffer. The SMTP RFC specifies a 1000-character limit (including \r\n), but it's common to wrap at around 78 characters so the source is easy to examine (on a small screen).

The simplest reason: mail servers have long had features that send the mail client a substring of the text content without transferring the entire thing, like the Gmail inbox view before you open any one message.

I suspect this is relevant because quoted-printable was only a useful encoding for MIME types like plain text and HTML (the human-readable email body), not binary ones (e.g. attachments, images, videos). Mail servers can (if they want) effectively treat the binary types as opaque blobs, while the text types can be read for more efficient transfer of message listings to the client.
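
For the curious, here is what quoted-printable does to a text body, using Python's quopri module (just an illustration):

    import quopri

    body = ("Héllo, wörld! " * 10).encode("utf-8")
    encoded = quopri.encodestring(body)
    print(encoded.decode("ascii"))
    # Non-ASCII bytes become =C3=A9 and so on, and long lines get wrapped
    # with soft breaks ("=" at end of line), so the body stays line-based
    # and mostly human-readable, unlike base64.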

As far as I can remember, most mail servers were fairly sane about that sort of thing, even back in the '90s when this stuff was introduced. However, there were always these more or less motivated fears about some server somewhere running on ancient IBM hardware, using EBCDIC encoding and truncating everything to 72 characters because its model of the world was based on punched cards. So standards were written to handle all those bizarre systems. And I am sure that there is someone on HN who actually used one of those servers...

RFC 822 explicitly says it is for readability on systems with simple display software. Given that the protocol is from 1982 and systems back then had between 4 and 16 KB of RAM in total, it might have made sense to give the lower-end thin-client systems of the day something preprocessed.

  • Also, it is an easy way to stop a denial-of-service attack. If you let an unbounded amount into that field, I can remotely overflow your system's memory. The mail system can just error out and hang up on the person attempting the attack instead of crashing.

    • Surely you don't need the message to be broken up into lines just for that. Just read until a threshold is reached and then close the connection.

  • You could expect a lot more (512kB, 1MB, 2MB) in an internet-connected machine running Unix or VMS.

Keep in mind that in ye olden days, email was not a worldwide communication method. It was more typical for it to be an internal-only mail system, running on whatever legacy mainframe your org had, and working within whatever constraints that forced. So in the 90s when the internet began to expand, and email to external organizations became a bigger thing, you were just as concerned with compatibility with all those legacy terminal-based mail programs, which led to different choices when engineering the systems.

  • This is incorrect

    • Are you certain? Not OP, but a huge chunk of early RFCs was about how to let giant IBM systems talk to everyone else, specifying everything from character sets (nearly universally “7-bit ASCII”) to end of line/message characters. Otherwise, IBM would’ve tried to make EBCDIC the default for everything.

      For instance, consider FTP’s text mode, which was primarily a way to accidentally corrupt your download when you forgot to type “bin” first, but was also handy for getting human readable files from one incompatible system to another.
