I am using scp and seeing file transfers far, far below my wire speed. With the scp default settings I get ~300kB/s on both a 100Mb/s network and a 10Mb/s network. Using ssh protocol version 1 and blowfish I get 500kB/s-1MB/s, again on both networks. I am running scp under Cygwin on a 1GHz Centrino x86 machine (Windows XP Pro SP2). The process monitor reports the CPU on the sending machine at around 50%.

Transferring a 200MB movie (avi) I see the following stats:

    $ scp 200MB-movie.avi foo@bar:/a/b
    200MB-movie.avi   2%  4848KB 480.8KB/s   06:00

I see this with any destination machine on two different LANs: a Suse64 AMD64 with no load and a RAID5 array, a P133 FreeBSD box with no load and a single IDE drive, and a Linux box with moderate load.

If I force scp to use ssh protocol version 1 and blowfish, I get better performance, but still far below wire speed:

    $ scp -1 -c blowfish 200MB-movie.avi foo@bar:/a/b
    200MB-movie.avi   7%    14MB 813.9KB/s   03:21 ETA

I ran top on one of the destination machines (the Suse64 AMD64) and the CPU was under 10%. I do not have a .ssh config file.

    $ ssh -v
    OpenSSH_3.8.1p1, OpenSSL 0.9.7d 17 Mar 2004

Any suggestions or workarounds appreciated.

--Pat / zippy@cs.brandeis.edu
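One way to narrow down whether the destination disk is the bottleneck is to stream the file over ssh to /dev/null instead of to a file. This is only a diagnostic sketch; the host and file name are taken from the transcript above:

```shell
# Send the file through ssh but discard it on the far side, so the
# destination's disk (e.g. the RAID5 array) is excluded from the timing.
# 'foo@bar' and '200MB-movie.avi' are the example names from the report.
time cat 200MB-movie.avi | ssh foo@bar 'cat > /dev/null'
```

If this runs much faster than the scp transfer, the bottleneck is on the write side rather than in the cipher or the network.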
Try Ciphers=arcfour; it's usually faster than blowfish or aes if you're CPU bound. Also, make sure you have compression disabled. With my laptop (XP SP1, 1.3GHz Pentium M, OpenSSH 3.9p1) on a 100Mb/s ethernet I get 1.5MB/s (aes128-cbc, 15% CPU) and 1.6MB/s (arcfour, 10% CPU); in my case it appears to be IO bound. How fast can you push data across your network without SSH?
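The suggestions above can be tried per-transfer from the command line. A sketch, reusing the host and file names from the original report (and assuming arcfour is enabled on both ends):

```shell
# Select the arcfour cipher and explicitly disable compression for one run:
scp -c arcfour -o Compression=no 200MB-movie.avi foo@bar:/a/b

# To measure raw TCP throughput without SSH, netcat can be used
# (assumes nc is installed on both hosts; port 5001 is arbitrary, and
# some netcat variants take the listen port without -p):
#   on the receiver:  nc -l -p 5001 > /dev/null
#   on the sender:    dd if=/dev/zero bs=1M count=100 | nc bar 5001
```

Comparing the dd/nc figure with the scp figure separates network limits from SSH overhead.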
No reply == closed bug. BTW, testing from a Linux host to an OpenBSD host on a 100Mb/s network got 5.5MB/s (i.e. over half of wire speed). Neither host is particularly fast (500MHz-ish) and the transfer was CPU bound. Perhaps Cygwin limits the speed someplace (maybe disk, network, or pipe IO?). It would be interesting to compare the performance of a native Windows client (e.g. PuTTY). BTW2: RAID5 writes are CPU intensive (calculating parity), so if you're using software RAID5 that won't be helping either (and I'm not sure whether the CPU time for that gets charged back to the writing process).
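For the native-client comparison, PuTTY's pscp accepts scp-like syntax. A sketch, with the destination reused from the report above:

```shell
# pscp ships with PuTTY and runs natively on Windows, bypassing Cygwin's
# pipe and socket emulation. Note -C would enable compression, so it is
# deliberately left off here.
pscp 200MB-movie.avi foo@bar:/a/b
```

If pscp reaches wire speed where Cygwin scp does not, that points at a Cygwin IO limitation rather than at OpenSSH itself.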
Change all RESOLVED bugs to CLOSED, with the exception of the ones fixed post-4.4.