High Performance SSH/SCP

6 days ago (psc.edu)

The fact that sftp is not the fastest protocol is well known to rclone users.

The main problem is that it packetizes the data and waits for responses, effectively re-implementing the TCP window inside a TCP stream. You can only have so many packets outstanding in the standard SFTP implementation (64 is the default) and the buffers are quite small (32k by default), which gives about 2MB of total outstanding data. The highest transfer rate you can achieve therefore depends on the latency of the link. If you have 100 ms of latency then you can send at most about 20 MB/s, which is roughly 160 Mbit/s - nowhere near filling a fast wide pipe.
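
As a back-of-the-envelope check (using the default 64 requests of 32 KiB mentioned above):

    # 64 outstanding requests x 32 KiB per request = the data in flight
    echo $(( 64 * 32 * 1024 ))            # 2097152 bytes (~2 MiB)
    # divide by a 100 ms round trip to get the throughput ceiling
    echo $(( 64 * 32 * 1024 * 10 ))       # 20971520 bytes/s (~21 MB/s)
    echo $(( 64 * 32 * 1024 * 10 * 8 ))   # 167772160 bits/s (~168 Mbit/s)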

You can tweak the buffer size (up to 256k I think) and the number of outstanding requests, but you hit limits in the popular servers quite quickly.
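
With the stock OpenSSH sftp client those two knobs are exposed directly - exact maximums depend on the client and server versions, so treat the numbers as illustrative (user@host and the paths are placeholders):

    # -B sets the per-request buffer size, -R the number of outstanding requests
    sftp -B 262144 -R 256 user@host:/remote/path/bigfile /local/dir/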

To mitigate this, rclone lets you do multipart concurrent uploads and downloads to sftp, so you can run several streams at that per-stream rate in parallel, which helps.
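
Combining those tweaks with rclone's multipart transfers looks roughly like this (flag names as of recent rclone versions - check rclone help flags against your build; the remote name and paths are placeholders):

    # bigger chunks, more in-flight requests, and several concurrent streams per file
    rclone copy remote:path/to/data /local/data \
        --sftp-chunk-size 255Ki \
        --sftp-concurrency 128 \
        --multi-thread-streams 8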

The fastest protocols are the TLS/HTTP based ones which stream data. They open up the TCP window properly, and the kernel and networking stack are well optimized for this use. Webdav is a good example.

  • "(sftp) packetizes the data and waits for responses, effectively re-implementing the TCP window inside a TCP stream."

    Why is it designed this way? What problems is it supposed to solve?

    • Here is some speculation:

      SFTP was designed as a remote file system access protocol rather than a way to transfer a single file like scp.

      I suspect that the root of the problem is that SFTP works over a single SSH channel. SSH connections can have multiple channels but usually the server binds a single channel to a single executable so it makes sense to use only a single channel.

      Everything flows from that decision - packetisation becomes necessary, because otherwise you would have to wait for all the files to transfer before you could do anything else (eg list a directory), and that is no good for remote filesystem access.

      Perhaps the packets could have been streamed but the way it works is more like an RPC protocol with requests and responses. Each request has a serial number which is copied to the response. This means the client can have many requests in-flight.

      There was a proposal for rclone to use scp for the data connections. So we'd use sftp for the day-to-day file listings, creating directories etc, but do actual file transfers with scp. Scp uses one SSH channel per file, so it doesn't suffer from the same problems as sftp. I think we abandoned that idea though, as many sftp servers aren't configured to allow scp as well. Also, modern versions of OpenSSH (since OpenSSH 9.0, released April 2022) use SFTP instead of scp anyway. This was done to fix various vulnerabilities in scp, as I understand it.

    • Notably, the SFTP specification was never completed. We're working off of draft specs, and presumably these issues wouldn't have made it into a final version.

    • Because that is a poor characterization of the problem.

      It just has an in-flight message/queue limit, like basically every other communication protocol. You can only buffer so many messages, and reserve so much space for responses, before you run out of room. The problem is just that the default amount of buffering is very low and is not adaptive to the available space/bandwidth.


  • When you are limited to SSH as the transport, you can still do better than scp or sftp by using rsync with --rsh="ssh ...".

    Besides being faster, with rsync and the right command options (see the example at the end of this comment) you can be certain that it makes exact file copies, together with any file metadata, even between different operating systems and file systems.

    I have not checked whether all the bugs of scp and sftp have been fixed in recent years, but some years ago there were cases where scp and sftp silently lost some file metadata without warning (e.g. high-precision timestamps, which were truncated, or extended file attributes).

    I use ssh every day, but it has been decades since I last used scp or sftp, except when I have to connect to a server that I cannot control and where rsync happens not to be installed. Even on such servers, if I can add an executable to my home directory, I first copy an rsync binary there with scp, then do any other copies with that rsync.
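
    For reference, an invocation along these lines preserves about as much metadata as the platforms allow (flags vary a little between rsync versions and operating systems):

        # -a = recursive + perms/times/owner, -H hard links, -A ACLs, -X xattrs
        rsync -aHAX --info=progress2 --rsh="ssh -p 22" /src/dir/ user@host:/dst/dir/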

    • I have the opposite opinion and experience: a simple file copy is pretty trivial with scp, but with rsync it's a goddamn lottery. Too many options, too many possible modes, and thus I am never sure the outcome will meet my expectations.

  • > The fastest protocols are the TLS/HTTP based ones which stream data.

    I think maybe you are referring to QUIC [0]? It'd be interesting to see some userspace clients/servers for QUIC that compete with Aspera's FASP [1] and operate on a point to point basis like scp. Both use UDP to decrease the overhead of TCP.

    0. https://en.wikipedia.org/wiki/QUIC

    1. https://en.wikipedia.org/wiki/Fast_and_Secure_Protocol

    • Available QUIC implementations are very slow. MsQUIC is one of the fastest and can only reach a meager ~7 Gb/s [1]. Most commercial implementations sit in the 2-4 Gb/s range.

      To be fair, that is not really a problem of the protocol, just the implementations. You can comfortably drive 10x that bandwidth with a reasonable design.

      [1] https://microsoft.github.io/msquic/


    • We've been looking at using QUIC as the transport layer in HPN-SSH. It's more of a pain than you might think because it breaks the SSH authentication paradigm and requires QUIC-layer encryption - so a naive implementation would end up encrypting the data twice. I don't want to do that. Mostly what we are thinking about doing is changing the channel multiplexing for bulk data transfers in order to avoid the overhead and buffer issues. If we can rely entirely on TCP for that then we should get even better performance.


    • Actually the fastest ones in my experience are the HTTP/1.x ones. HTTP/2 is generally slower in rclone, though I think that is the fault of the stdlib not opening more connections. I haven't really tried QUIC.

      I just think for streaming lots of data quickly HTTP/1.x plus TLS plus TCP has received many more engineering hours of optimization than any other combo.


  • Besides limiting the length and number of outstanding IO requests, SFTP also rides on top of SSH, which also has a limited window size.

Any chance this work can be upstreamed into mainline SSH? I'd love to have better performance for SSH, but I'm probably not going to install and remember to use this just for the few times it would be relevant.

  • I doubt this would ever be accepted upstream. That said, if one wants speed, play around with lftp [1]. It has a mirror subsystem that can replicate much of rsync's functionality against a chrooted, sftp-only destination, and it can use multiple TCP/SFTP streams both across a batch and per file, meaning one can saturate just about any upstream. I have used this for transferring massive postgres backups; because I am paranoid when using applications that automatically do multipart transfers, I include a checksum file for the source and then verify the destination files.

    The only downside I have found with lftp is that, since there is no corresponding daemon on the destination (as rsync has), directory enumeration can be slow if there are a lot of nested sub-directories. Oh, and the syntax is a little odd, for me anyway. I always have to look at my existing scripts when setting up new automation.

    Demo to play with, download only. Try different values. This will be faster on your servers, especially anything within the data-center.

        ssh mirror@mirror.newsdump.org # do this once to accept key as ssh-keyscan will choke on my big banner
    
        mkdir -p /dev/shm/test && cd /dev/shm/test
    
        # --parallel = number of files transferred at once, --use-pget = connections (segments) per file
        lftp -u mirror, -e "mirror --parallel=4 --use-pget=8 --no-perms --verbose /pub/big_file_test/ /dev/shm/test;bye" sftp://mirror.newsdump.org
    

    For automation add --loop to repeat job until nothing has changed.

    [1] - https://linux.die.net/man/1/lftp

    • The normal answer that I have heard to the performance problems in the conversion from scp to sftp is to use rsync.

      The design of sftp is such that it cannot exploit "TCP sliding windows" to maximize bandwidth on high-latency connections. Thus, the migration from scp to sftp has involved a performance loss, which is well-known.

      https://daniel.haxx.se/blog/2010/12/08/making-sftp-transfers...

      The rsync suggestion is not a workable answer, as OpenBSD has reimplemented the rsync protocol in a new codebase:

      https://www.openrsync.org/

      An attempt to combine the BSD-licensed rsync with OpenSSH would likely see it stripped out of GPL-focused implementations, where the original GPL release has long standing.

      It would be more straightforward to design a new SFTP implementation that implements sliding windows.

      I understand (but have not measured) that forcibly reverting to the original scp protocol will also raise performance in high-latency conditions. Doing so does reintroduce an attack surface; it should not be the default transfer method, and it demands thoughtful care.

      https://lwn.net/Articles/835962/
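
      For what it's worth, on OpenSSH releases where scp defaults to the SFTP backend, the old protocol is still reachable from the client if you accept that trade-off:

          # -O forces the legacy scp protocol instead of the SFTP backend
          scp -O bigfile user@host:/tmp/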


    • Wow, I hadn't heard of this before. You're saying it can "chunk" large files when operating against a remote sftp-subsystem (OpenSSH)?

      I often find myself needing to move a single large file rather than many smaller ones but TCP overhead and latency will always keep speeds down.


  • Also, upstream is extremely well audited. That's a huge benefit I don't want to lose by using a fork.

    • I do want to say that HPN-SSH is also well audited; you can see the results of CI tests on the GitHub repo. We also do fuzz testing, static analysis, extensive code reviews, and functionality testing. We build directly on top of OpenSSH and work with them when we can. We don't touch the authentication code, and the parallel ciphers are built directly on top of OpenSSL.

      I've been developing it for 20+ years and if you have any specific questions I'd be happy to answer them.

  • I'm the lead developer. I can go into this a bit more when I get back from an appointment, if people are interested.

    • I’m interested. Mainly to update the documentation on it for Gentoo; people have asked about it over the years. Also, TIL that HN apparently has a sort of account dormancy status, which it appears you are in.


  • OpenSSH is from the people at OpenBSD, which means performance improvements have to be carefully vetted against bugs, and, judging by the fact that they're still on FFS and lack TRIM support in 2025, that will not happen.

    • There's nothing inherently slow about UFS2; the theoretical performance profile should be nearly identical to Ext4. For basic filesystem operations UFS2 and Ext4 will often be faster than more modern filesystems.

      OpenBSD's filesystem operations are slow not because of UFS2, but because they simply haven't been optimized up and down the stack the way Ext4 has been on Linux or UFS2 on FreeBSD. And of course, OpenBSD's implementation doesn't have a journal (both UFS and Ext had journaling bolted on late in life), so filesystem checks (triggered on an unclean shutdown or after N boots) can take a long time, which often causes people to think their system has frozen or didn't come up. That user-interface problem notwithstanding, UFS2 is extremely robust. OpenBSD is very conservative about optimizations, especially when they increase code complexity, and particularly for subsystems the project doesn't have the time to give the necessary attention.


  • I admittedly don't really know how SSH is built, but it looks to me like the patch that "makes" it HPN-SSH is already carried in some package trees [1]; it's just not applied by default? Nixpkgs seems to allow you to build the pkg with the patch [2].

    [1] https://github.com/freebsd/freebsd-ports/blob/main/security/...

    [2] https://github.com/NixOS/nixpkgs/blob/d85ef06512a3afbd6f9082...

  • There’s a third party ZFS utility (zrepl, I think) that solves this in a nice way: ssh is used as a control channel to coordinate a new TLS connection over which the actual data is sent. It is considerably faster, apparently.
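
    The same split can be sketched by hand, minus the TLS, just to see the idea: ssh only coordinates, and the bulk data goes over a separate plain TCP stream (unencrypted toy example; nc flag syntax varies between netcat variants):

        # start a throwaway listener on the remote side via ssh, then stream to it directly
        ssh user@host 'nc -l 9000 > /dst/bigfile' &
        sleep 1        # give the listener a moment to come up
        nc host 9000 < /src/bigfile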

  • Unlikely. These patches have been carried out-of-tree for over a decade precisely because upstream OpenSSH won't accept them.

    • More than 2 decades at this point. The primary reason is that the full patch set would be a burden for them to integrate, and they don't prioritize performance for bulk data transfers. Which is perfectly understandable from their perspective. HPN-SSH builds on the expertise of OpenSSH and we follow their work closely - when they make a new release we incorporate it and follow with our own release inside of a week or two (depending on how long the code review and functionality/regression testing takes). We focus on throughput performance, which involves receive buffer normalization, private key cipher speed, code optimization, and so forth. We tend to stay clear of anything involving authentication, and we never roll our own when it comes to the ciphers.

  • Depending on your hardware architecture and security needs, fiddling with ciphers in mainline might improve speed.
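
    For example - assuming pv is installed and "host" stands in for your server - you can list what your client supports and benchmark a couple of ciphers end to end:

        # list supported ciphers, then compare two of them on a bulk stream
        ssh -Q cipher
        ssh -c aes128-gcm@openssh.com host 'dd if=/dev/zero bs=1M count=1000 2>/dev/null' | pv > /dev/null
        ssh -c chacha20-poly1305@openssh.com host 'dd if=/dev/zero bs=1M count=1000 2>/dev/null' | pv > /dev/null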

This has been around for years (since at least the mid-2000s). Gentoo used to have this patchset available as a USE flag on net-misc/openssh, but some time ago it was moved to net-misc/openssh-contrib (also configurable by USE flag).

There are some minor usability bugs, and I think both endpoints need to have it installed to take advantage of it. I remember asking ages ago why it wasn’t upstreamed; there were reasons…

  • To be honest, there was a period around 2010 or 2012 when I simply wasn't maintaining it as well as I should have been. I wouldn't have upstreamed it then either. That's changed a lot since then.

    As an aside - you only really need HPN-SSH on the receiving side of the bulk data to get the buffer normalization performance benefits. It turns out the bottleneck is almost entirely on the receiver, and the client will send out data as quickly as you like. At least it was like that until OpenSSH 8.8. At that point changes were made such that the client would crash if the send buffer exceeded 16MB. So we had to limit OpenSSH-to-HPN-SSH flows to a maximum of 16MB of receive space. Which is annoying, but that's still going to be a win for a lot of users.

This is very cool and I think I'll give it a try, though I'm wary about using a forked SSH, so I would love to see things land upstream.

I've been using mosh now for over a decade and it is amazing. Add on rsync for file transfers and I've felt pretty set. If you haven't checked out mosh, you should definitely do so!

It's not clear if you need it on both ends to get an advantage?

  • The bottleneck in SSH is entirely on the receiving side. So as long as the receiver is using HPN-SSH you will see some performance improvement if the BDP of the path exceeds 2MB. Note: because of changes made to OpenSSH in 8.8, the maximum buffer with OpenSSH as the sender is 16MB. In an HPN-to-HPN connection the maximum receive buffer is 128MB.
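
    Worth noting that the kernel's TCP receive buffers need headroom too, or the window still can't grow to match the BDP. On Linux that looks roughly like this (values are examples, not a recommendation):

        # min / default / max receive buffer in bytes; autotuning grows up to the max
        sysctl net.ipv4.tcp_rmem
        sudo sysctl -w net.ipv4.tcp_rmem="4096 131072 134217728"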

SFTP is tunneled over SSH (OpenSSH or another implementation); the -p flag specifies the port, 22 by default, though a different port (e.g. 10901) can be configured for the TCP connection in ~/.ssh/config.

I don't think it comes as a surprise that you can improve performance by re-implementing ciphers, but what is the security trade-off? Many times, well audited implementations of ciphers are intentionally less performant in order to operate in constant time and avoid side channel attacks. Is it even possible to do constant time operations while being multithreaded?

The only change I see here that is probably harmless and a speed boost is using AES-NI for AES-CTR. This should probably be an upstream patch. The rest is more iffy.
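
A quick way to sanity-check the AES-NI part on your own hardware (Linux paths shown; openssl speed measures the raw primitive, not SSH as a whole):

    grep -m1 -o -w aes /proc/cpuinfo   # prints "aes" if the CPU advertises AES-NI
    openssl speed -evp aes-128-ctr     # the EVP path picks up AES-NI automatically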

  • The parallel ciphers are built using OpenSSL primitives. We aren't reimplementing the cipher itself in any way. Since counter ciphers use an atomically increasing counter, you can precompute the blocks in advance. Which is what we do - we have a cache of keystream data that is precomputed, and we pull the correct block off as needed - this gets around the need to have the application compute the blocks serially, which can be a bottleneck at higher throughput rates.

    The main performance improvement is from the buffer normalization. This can provide, on the right path, a 100x improvement in throughput performance without any compromise in security.
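
    A toy illustration of the keystream idea (made-up key/IV, nothing to do with how HPN-SSH actually manages its cache): in counter mode the keystream depends only on the key, IV, and block counter, so running zeros through the cipher ahead of time yields exactly the bytes that later get XORed with the plaintext.

        # precompute 1 MiB of AES-CTR keystream by encrypting zeros
        key=00112233445566778899aabbccddeeff
        iv=000102030405060708090a0b0c0d0e0f
        head -c 1048576 /dev/zero | openssl enc -aes-128-ctr -K "$key" -iv "$iv" > keystream.bin
        # encrypting real data with the same key/IV gives plaintext XOR keystream.bin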