Bug 3067 - Fails to unlink ControlMaster socket early enough, confuses other clients
Summary: Fails to unlink ControlMaster socket early enough, confuses other clients
Status: NEW
Alias: None
Product: Portable OpenSSH
Classification: Unclassified
Component: ssh
Version: 7.9p1
Hardware: Other Linux
Importance: P5 normal
Assignee: Assigned to nobody
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2019-09-07 04:32 AEST by Paul Evans
Modified: 2020-07-10 14:01 AEST
CC List: 1 user

See Also:


Attachments

Description Paul Evans 2019-09-07 04:32:17 AEST
(from https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=877020)

TL;DR: ssh(1) must unlink the local socket _before_ attempting any more
  network traffic, otherwise a broken TCP socket stalls the entire teardown.

-

I make heavy use of the shared control sockets to multiplex multiple
shells, sftp, and other commands down a single TCP connection to remote
servers.

  ControlPath ~/var/run/ssh-master-%r@%h:%p.sock
  ControlPersist 1s
  ControlMaster auto

In this setup, under stable networking all works nicely.
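
For what it's worth, whether a master currently owns one of these
sockets can be checked with ssh's control command (the host name below
is just an example):

  $ ssh -O check -oControlPath=~/var/run/ssh-master-%r@%h:%p.sock example.com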

However, my machine is a laptop, and sometimes due to mobile data, wifi,
ethernet cable swapping, or other issues, my IP address and hence routing
change. After such a change, all existing TCP sockets are now unusable
and must be closed and reopened.

Simply closing all ssh clients is insufficient here, because the client
tries to perform a controlled shutdown of the TCP socket *first* and
will only unlink(2) the control master socket from the local filesystem
after it has done this. By ordering the operations thus, the client
stalls trying to perform this controlled TCP shutdown over now-invalid
networking, and never gets around to removing the local unix socket. New
ssh clients then try to use this stale socket and similarly stall.
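
To illustrate (user and host names are just examples): the stale socket
is still present after the change, so a fresh client attaches to it and
hangs rather than opening a new TCP connection:

  $ ls ~/var/run/
  ssh-master-paul@example.com:22.sock
  $ ssh example.com    # hangs: the stale master's TCP connection is dead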

The correct order of operations ought to be that the control master's
local socket is unlinked *before* any further traffic is attempted, thus
restoring the user's "turn it off and on again" approach to fixing the
problem, namely killing all their clients and starting a new one.

---

Additionally, I should add: a workaround for this is to simply

  $ rm ~/var/run/ssh-master-*.sock

after closing the previous master clients and before starting them up
afresh. The lack of an existing unix socket causes the first new client
to take master control again, and everything resumes fine.
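
Spelled out in full (host name again just an example), the recovery
sequence looks like:

  # close/kill the stalled clients first, then clear the stale sockets
  $ rm -f ~/var/run/ssh-master-*.sock
  # the next client finds no socket, takes master control, and opens a
  # fresh TCP connection
  $ ssh example.com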
Comment 1 Damien Miller 2020-07-10 14:01:57 AEST
I think you have two options here:

First, you can explicitly shut down multiplexed sessions that have a stalled underlying TCP connection. E.g. I use:

for x in ~/.ssh/ctl-* ; do test -r "$x" && ssh -Fnone -Ostop -oControlPath="$x" dummy ; done

Second, you can set a protocol-level health check to automatically kill unresponsive sessions, e.g. by adding the following to ~/.ssh/config:

ServerAliveInterval 2m
ServerAliveCountMax 3

This will terminate any unresponsive connection after six minutes (three missed keepalives at two-minute intervals).
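
If you only want this on the multiplexed hosts, the same settings can be
combined with the multiplexing options under a Host block in
~/.ssh/config (the host pattern and ControlPath are just examples):

Host *.example.com
    ControlMaster auto
    ControlPath ~/.ssh/ctl-%r@%h:%p
    ControlPersist 1s
    ServerAliveInterval 2m
    ServerAliveCountMax 3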