If you have a quarter of a million files to copy from one server to another, and you can't do it recursively because the directory layout on the two machines doesn't match, you might lazily write a script that fires off one scp job per file:
> scp source/path/file user@host:dest/path/
And that would be fine, but it means opening and closing a connection between the servers a quarter of a million times, with each one taking a second or so. That adds up to about two days of pure connection overhead.
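The lazy script might look something like this. It's only a sketch: `source/path`, `dest/path` and `user@host` are placeholders, and the `echo` makes it a dry run so you can eyeball the commands before letting them loose.

```shell
#!/bin/sh
# For every regular file under source/path, fire one scp job.
# Remove the 'echo' to actually run the copies.
find source/path -type f | while IFS= read -r f; do
    echo scp "$f" "user@host:dest/path/"
done
```

Each iteration is a fresh scp process, and so a fresh SSH connection, which is exactly the overhead described above.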
It turns out you can open a master connection, though, and every subsequent connection to the same host will channel its data through it, skipping that open/close delay.
> ssh -M user@host "sleep 100h" >/dev/null &
Ironically, we tell it to sleep so that it will stay awake for a long time.
For this to work, our ~/.ssh/config needs an entry telling ssh what path to use for the control socket shared between the sessions:
> Host hostname.com
> ControlPath ~user/.ssh/ctrl-%u@%h-%p-%r
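As an aside, OpenSSH can also manage the master connection for you: with `ControlMaster auto` and `ControlPersist`, the first ssh or scp to a host spins up a master in the background and later ones reuse it, no manual sleep job required. A sketch of such a config entry, with `hostname.com` standing in for the real host:

```
Host hostname.com
    ControlMaster auto
    ControlPath ~/.ssh/ctrl-%r@%h-%p
    ControlPersist 100h
```

Here `%r`, `%h` and `%p` expand to the remote user, host and port, keeping the socket path unique per destination.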
Then you can run the same simple quarter-million copy lines and it'll finish two days faster!
It'll still take a couple of weeks, though. Especially since I can't even leave it running at weekends. Like now, when I'm about to boot into Windows.
Still, today I learned about ssh master connections.