I have a few observations about this article.
Generally, try not to use SCP. It is a crufty old program inherited from the Berkeley R-utilities, but newer OpenSSH releases have rewritten it to use sftp-server on the back end instead. Behavior differs wildly between these implementations.
The backend SCP changes are documented here:
https://lwn.net/Articles/835962/
If you need something that SFTP cannot do, then use tar on both sides.
PuTTY has had pscp prefer the sftp-server backend for many years, long anticipating the eventual abandonment of the old protocol. Their pscp implementation is a better drop-in replacement than the OpenSSH solutions.
The allure of SCP is retry on failure, which is somewhat more difficult with SFTP:

until scp source.txt user@target:dir/
do echo target down; sleep 300
done

Converting that to pscp is much easier than converting it to SFTP.
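The same pattern generalizes; here is a minimal sketch that wraps any transfer command in a retry loop with a cap, so a dead target cannot loop forever (retry_transfer is a made-up helper name; the pscp invocation in the usage comment is illustrative):

```shell
# Hypothetical helper: retry a transfer command until it succeeds,
# giving up after a maximum number of attempts.
retry_transfer() {
  rt_cmd=$1
  rt_max=${2:-5}
  rt_tries=0
  until $rt_cmd
  do
    rt_tries=$((rt_tries + 1))
    if [ "$rt_tries" -ge "$rt_max" ]; then
      echo "giving up after $rt_tries tries"
      return 1
    fi
    echo "target down"
    sleep 1
  done
}

# e.g.: retry_transfer "pscp -sftp source.txt user@target:dir/" 10
```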
I also have an older rhel5 system where I am running tinysshd to get better SSH crypto. Due to upgrades, NFS is now squashing everything to nobody, so I had to disable precisely these permission checks to let users log in with their authorized_keys. I can post the code if anybody is curious.
The SCP protocol is fine and convenient as long as people understand that the remote file arguments are server-side shell code, with all the consequences that implies.
You get the benefit of being able to e.g. get your last download off your desktop to your laptop like this:

scp -TO desktop:'downloads/*(oc[1])' .
or this if you're on bash:

scp -TO desktop:'$(ls -t downloads/* | head -1)' .
or pull a file from a very nested project dir for which you have set up dynamic named directories (or shell variables if you're on bash).
Just don't pull files from an SCP server that may be malicious. Use it on trusted servers only. If you do the following in your home dir:

scp -TOr malicious:foo/ .

it may overwrite .ssh/authorized_keys, .zshrc, etc., because `foo/` is server-side shell code.
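You can see the mechanism locally without any remote host: classic scp hands the file argument to the remote shell unquoted, so command substitution runs on the server's side of the connection. A stand-in sketch, using `sh -c` in place of the remote shell:

```shell
# What the remote side effectively does with `scp host:'$(...)' .`:
# the argument passes through a shell, so substitutions expand there.
remote_arg='$(echo pwned).txt'
cmdline=$(sh -c "echo scp -f $remote_arg")
echo "$cmdline"   # prints: scp -f pwned.txt
```

A hostile server controls that expansion, and therefore controls which names land in your working directory.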
> If you need something that SFTP cannot do, then use tar on both sides.
No reason to make things inconvenient between personal, trusted computers, just because there may be malicious servers out there where one has no reason to SCP.
I occasionally use `scp` around my network and have for years. It works great and its simple interface is easy to remember. I don't want to sftp if I have to use tar on both sides. I might type rsync, but then I remember something about how a trailing slash will cause the command to behave differently the second time. I just don't need yet another syntax I'll misremember. As long as scp is in my distro's repositories, I'll be using it.
Easy to remember: if you never use trailing slashes, it will just work every time.
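For reference, the trailing-slash rule in question can be demonstrated locally in a few lines (assumes rsync is installed; the temp-dir names are arbitrary):

```shell
# rsync trailing-slash semantics: no slash copies the directory itself,
# a trailing slash copies only its contents.
tmp=$(mktemp -d)
mkdir -p "$tmp/src" "$tmp/a" "$tmp/b"
touch "$tmp/src/file"

rsync -a "$tmp/src"  "$tmp/a/"   # creates a/src/file
rsync -a "$tmp/src/" "$tmp/b/"   # creates b/file, no b/src
```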
> Their pscp implementation is a better drop-in replacement than the OpenSSH solutions.
What makes it a better drop in replacement?
Several reasons.
-PuTTY pscp allows raw passwords on the command line, or from a file. OpenSSH is unreasonable in refusing to do this.
-Scripting can adapt to a .netrc easily; OpenSSH will never do this.
-Modern OpenSSH is a nightmare when using legacy crypto, while pscp is fluid. There is nothing wrong with hmac-md5, and no reason to refuse it. I will take PuTTY or dropbear in a heartbeat over these burned bridges and workarounds.
https://www.openssh.org/legacy.html
-pscp does not link to dozens of libraries as ssh/scp do, so it is easier to build with fewer dependencies. ldd shows 23 libraries for ssh and scp on rhel9, versus 3 for PuTTY [package obtained from EPEL].
-pscp strongly leans to SFTP on the backend and can be directed to use it exclusively, so there is no ambiguity.
-Using pscp with a retry on failure is much easier than sftp -b.
-The wacky cipher control on rhel8 does not impact the PuTTY tools.
That is an extensive list.
>If you need something that SFTP cannot do, then use tar on both sides.
Wouldn't tar do the exact same thing to that file's permissions?
Likely, but maintaining hard links is more of what I was thinking.
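The hard-link point is easy to see locally; the SFTP protocol has no way to express that two names share an inode, while tar records and recreates the link. A sketch (over the network, the middle of the pipe would be `ssh user@host 'tar -C dest -xf -'`, with a hypothetical host name):

```shell
# tar preserves hard links across the pipe; SFTP cannot.
tmp=$(mktemp -d)
mkdir "$tmp/src" "$tmp/dst"
echo data > "$tmp/src/a"
ln "$tmp/src/a" "$tmp/src/b"      # second name for the same inode

tar -C "$tmp/src" -cf - . | tar -C "$tmp/dst" -xf -

stat -c %h "$tmp/dst/a"           # link count is 2 on the far side too
```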
You sound so wise and produce excellent references, but in the next breath you show NFS in use?
signed, -confused
What would you use for remote mounting filesystems? I don't know of any that are simply superior (w/o caveats/tradeoffs).
Why is it so self-evident that NFS is bad?
I upvoted you, and yes, cleartext NFS is a concern.
I had it wrapped in stunnel TLS, but I ripped that out recently as I am retiring and the new staff is simply not capable of maintaining that configuration.
My users were yelling, and the patch to tinysshd to omit all permissions checks silenced the complaints. No, it's not pretty.
This is a useful tip!
but also... who has a dir with 777 permissions? Is that something people do nowadays?
My guess would be mounting an NTFS partition - with ntfs-3g it will load everything as 777 just by default, since it can’t translate the permissions.
I've seen users who have every file set to 777. They do it to "avoid permissions issues"
Well, everybody has 1777 as /tmp (with the sticky bit).
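Easy to verify; on a stock Linux box this prints the sticky-bit mode:

```shell
# The octal mode of /tmp: sticky bit + world-writable (typically 1777).
stat -c '%a %n' /tmp
```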
[dead]
I assume using `./*` rather than `.` in the `scp` command would have worked around the issue?
Using './*' would have avoided this in most shells, because ordinary globbing excludes dotfiles, so .ssh and authorized_keys are not matched. In my experience scp is brittle for bulk syncs, so I run `rsync -a --exclude='.ssh' --dry-run ./ user@host:~/target` to verify before I commit the changes. I also keep an out-of-band recovery path (a temporary deploy key, a nonprivileged rescue user, or console access) as the only reliable way to avoid being locked out at 3AM.
The problem was not scp'ing the .ssh/ directory. The problem was scp'ing a directory whose permissions were 777 and "mapping" it (for lack of a better term) onto a remote directory, which happened to be the home directory. The remote home directory therefore had its permissions changed to 777, which OpenSSH deemed "too open", refusing to use any file in it.
Yes, since it would’ve copied the globbed files, rather than the current directory itself.
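The difference is just glob expansion; the shell replaces `./*` with the directory's visible entries and never the directory itself:

```shell
# `./*` expands to the entries inside the directory, skipping dotfiles;
# `.` is the directory itself, so its own mode rides along with scp -r.
tmp=$(mktemp -d)
touch "$tmp/visible" "$tmp/.hidden"
( cd "$tmp" && echo ./* )    # prints: ./visible
```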
I accidentally nuked my hosted server's network stack with a config error... my bigger mistake was generating a massive random password for the root account... the remote terminal management console didn't support pasting, and the default config only gave you like 30s to log in... not fun at all.
Script all the things, double-check your scripts... always be backing up.
> the remote terminal management console didn't support pasting and the default config only gave you like 30s to login
I would have used AutoHotkey or something similar in such a scenario.
Also a gentle reminder that backups without periodic drills are just binary blobs. I had an instance where, for some reason, my Borg backups were corrupted; I only caught it through periodic drills.
You did not transfer the files within a directory. You transferred the directory itself, via `.`. That is why scp changed the permissions of your home directory itself; if you instead had transferred via `*` I am sure you would not have had this problem.
Ah, file permissions. My old friend. Good thing this happened on a 'local' server and not a remote VPS.
Done stupid stuff like this enough times that I just use tar, and also make a sandbox directory to receive it, to double-check what's going to happen, before un-tar'ing it again into the intended destination and/or doing a manual move.
Too many burned fingers to not do this little dance almost every other time.
Actually, I lied, I just use rsync like an insane person.
It's nice to see people sharing their mistakes too.
Related: In my Bash logout script I have a chmod that fixes authorized_keys. It won't help with scp because that's non-interactive, but it has helped the other 999 times I've forgotten to clean up the mess I made during an ssh session.
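For the curious, such a logout hook might look like this (fix_ssh_perms is a made-up name; drop the body into ~/.bash_logout). The chmod errors are silenced so a missing .ssh produces no noise at logout:

```shell
# Sketch: re-tighten the usual suspects that OpenSSH checks at login.
fix_ssh_perms() {
  fp_home=${1:-$HOME}
  chmod 700 "$fp_home/.ssh"                 2>/dev/null
  chmod 600 "$fp_home/.ssh/authorized_keys" 2>/dev/null
  chmod 755 "$fp_home"                      2>/dev/null  # undo an accidental 777
  return 0
}
```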
Getting locked out of a server must be a canonical experience in the sysadmin journey, like checking the logs to see you are being attacked as soon as you're online, or trying to build your own Linux from scratch without bloat.
tl;dr: If you scp -r to your homedir, expect scp to copy not just files and directories but their permissions as well (which I think isn't all that surprising).
It's not supposed to do that unless it's newly creating the destination, or you supplied the -p flag to preserve permissions... that's what the entire issue is about; it's a bug that was fixed in 10.3.
I wouldn't even expect it on newly created stuff without the -p flag. Normal cp doesn't do it.
[dead]
When I load the site in my (slightly older) Firefox I just get random junk and gibberish (Markov-chain-generated nonsense?)
<bleep> that nonsense!
I suspect you're hitting the page where they're running https://iocaine.madhouse-project.org/
Perhaps you got bot flagged or something
That URL gives me a 418 I'm a teapot error with no body. I'm guessing they don't like my VPN.