If one pool goes down, all pools become inaccessible
There seems to be an issue with ZFS's handling of lost pools. If one pool becomes inaccessible for some reason (say, a loose cable to the storage), all pools become inaccessible, and as a result logging into the system fails, both locally and remotely. I'm currently seeing this after starting a scrub of a troublesome pool (see #1109), and I've seen it before, both in testing and in production. To reproduce, take a healthy RAIDz1 pool and yank two drives.
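For anyone without spare hardware to yank drives from, the same failure can probably be simulated with file-backed vdevs. This is a sketch, not a verified reproducer: the pool name, file paths, and sizes are made up, it needs root and ZFS installed, and `dd` over a backing file is just one way to fake a dead disk.

```shell
#!/bin/sh
# Hypothetical repro with file vdevs instead of physical drives.
truncate -s 256M /tmp/vdev1 /tmp/vdev2 /tmp/vdev3
zpool create testpool raidz1 /tmp/vdev1 /tmp/vdev2 /tmp/vdev3

# "Yank two drives": fault one vdev, then clobber a second one's
# backing file so raidz1 loses more redundancy than it can absorb.
zpool offline -f testpool /tmp/vdev1
dd if=/dev/zero of=/tmp/vdev2 bs=1M count=256 conv=notrunc

# Scrub to force ZFS to notice the destroyed vdev.
zpool scrub testpool
zpool status testpool
```

If the report holds, once `testpool` faults, commands touching any other pool (and logins, which presumably block on a ZFS mount) should hang as well.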