One Weird Trick to Make NFS Work

I am by no means an expert on NFS. I border on being NFS-literate, in fact. But I excel at encountering errors. Here, I will attempt to document some of the ones I’ve figured out how to fix / work around.

mount.nfs: access denied by server

This is the error that led to the joking “one weird trick” (referencing those spammy ads) title of this post.

I keep running into errors like this:
$ sudo mount -t nfs winterfell:/home/j /mnt/j
mount.nfs: access denied by server while mounting winterfell:/home/j

I think the error can be caused by a number of things. From the client, run showmount -e winterfell (where winterfell, of course, is the name of the server you’re trying to mount a volume from) and check that it (a) returns anything at all (meaning you can at least reach the server), (b) lists the share matching the one you’re trying to mount, and (c) shows an IP mask that matches the address you’re connecting from.
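
For reference, healthy output looks something like this (the share and netmask here are hypothetical, chosen to match the mount command above):

$ showmount -e winterfell
Export list for winterfell:
/home/j 192.168.1.0/24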

If (a) fails, check iptables (make sure port 2049, at least, is open). If (b) or (c) fails, fix it in /etc/exports, and reload the config with exportfs -r.
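
For the record, a hypothetical /etc/exports entry matching the mount above would look like:

/home/j 192.168.1.0/24(rw,sync)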

If everything above looks normal but you still can’t mount the share, it’s probably this: NFSv4 appears to require functional reverse DNS, and it will fail without telling you much about what’s going on. I’m still a bit in the dark on the details. You might try using the IP address instead of a hostname. If that doesn’t work, the “one weird trick” is to fall back to NFSv3 (or even NFSv2, but don’t), which doesn’t have this requirement. Something like this:

sudo mount -t nfs -o nfsvers=3 winterfell:/home/j /mnt/j

If that succeeds, you’ve found your problem.

mount_nfs: can’t mount $share from $hostname onto $mountpoint: Operation not permitted

I keep getting this one when trying to mount things on a Mac client.

This is fixed by adding the -o resvport flag to your mount command, e.g.:

sudo mount -t nfs -o resvport winterfell:/home/j /mnt/j

IIUC, what’s happening here is that the server requires clients to connect from a reserved (<1024) port, which only superusers can bind to, but the Mac client doesn’t do that by default. -o resvport tells it to bind to a reserved port, and you’re golden.
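
If you control the server and would rather fix this on that end, exports have an insecure option that relaxes the reserved-port requirement (the default is secure). A hypothetical /etc/exports line:

/home/j 192.168.1.0/24(rw,sync,insecure)

Reload with exportfs -r, and the plain mount command (without -o resvport) should work.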

Hopefully someone else will find these tips useful. (That “someone else” may be me in six months’ time…)

ssh tunneling for fun and profit

ssh is one of those things that’s useful for way more than meets the eye. Here’s a handy feature to add to your bag of tricks: you can tunnel traffic from your machine to a remote machine through an intermediate host running an ssh server.

Where this is often useful is in setups where you want to access a system on a private LAN, but it’s behind a firewall or bastion host (running ssh). You could connect if you were on the LAN on the other side, but you’re not.

It looks something like this:

You  -->  virtlab-cloud-04 (ssh bastion)  -->  192.168.1.2 (private LAN)

The magical command here is something like this:

ssh -NfL 8080:192.168.1.2:80 root@virtlab-cloud-04

That would map localhost:8080 (on the machine where you’re running this command, i.e., your computer, or “You” in the diagram) to 192.168.1.2’s port 80, but it connects to 192.168.1.2’s port 80 _through_ a host named “virtlab-cloud-04”, which you’ve ssh’ed into as root. (You do not need to be root for this to work.) As for the other flags: -L is what sets up the forward itself, -N tells ssh not to run a remote command, and -f drops it into the background once it has authenticated.
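
Once the tunnel is up, anything you aim at localhost:8080 lands on 192.168.1.2’s port 80. A quick sanity check:

curl http://localhost:8080/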

So, maybe you’re on your laptop at an airport hotspot, and 192.168.1.2 is the IP of a home system. You can map a port on it to your laptop by ssh’ing through a Linux box at home that’s reachable over ssh.

At a previous employer, I used this to manage our SAN via its (awful) web-based UI on our production network. The SAN was obviously not reachable over the Internet, but I could map its web UI to localhost:8080 on my desktop through a bastion host we had.

SSH Tip: Hash known_hosts names

I picked up a little book called SSH Mastery the other day. It’s a fairly short read, but quite interesting.

It mentioned one tip that happened to solve something that has always bothered me: ssh keeps a ~/.ssh/known_hosts file with the host keys of all the machines you’ve connected to previously. That’s good for SSH, since it can verify that a host’s key hasn’t changed since you last connected, but it’s also a privacy and security risk to have a file listing every server you have access to. Not exactly something that keeps me up at night, but a sub-optimal situation.

The book mentions that ssh can easily be told to record a hash of each hostname instead, with the directive HashKnownHosts yes. (Note that it’s not retroactive, though ssh-keygen can hash the existing entries; see below.)
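
Concretely, that means adding this to ~/.ssh/config (or /etc/ssh/ssh_config for the whole system):

HashKnownHosts yes

And to hash the entries you already have:

ssh-keygen -H

That rewrites known_hosts with hashed names and leaves the originals in known_hosts.old, which you’ll want to remove once you’ve verified the result.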

The only downside is that this makes it impossible to skim known_hosts and prune entries for systems you no longer care about, though that probably won’t save you more than a few kB of disk space.
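
You can still remove a specific system’s entry from a hashed file, as long as you remember its name (old-server.example.com below is just a placeholder):

ssh-keygen -R old-server.example.com

You just can’t skim the file for candidates anymore.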

Simple disposable VMs with snap-guest

Have you ever wished you could easily spin up a virtual machine for a little testing? Something quick, but something you could (optionally) throw away when you were done?

Of course you have. And I think snap-guest is the answer to your dreams (mine, too!). It allows you to set up a “base” image, and then easily spin up copy-on-write copies of it.

Installation

You can follow the installation instructions in the README, though note the trap: the symlink syntax there is backwards (ln -s takes the target first, then the link name). You may also want to copy the script into /usr/bin instead of symlinking it into /usr/local/bin if you use sudo, since sudo’s default secure_path often leaves out /usr/local/bin. With that set up, I built two base virtual machines: one RHEL 6.3, the other Fedora 17. (I plan to set up more soon.)

The “base” VMs are something you should set up, shut down, and never touch again: the copy-on-write guests are backed by the base image, so changing it underneath them will corrupt them. Here’s what I did after the base install:

  • yum update
  • yum install ntp (the package is ntp, even though the service is ntpd), set it up with working servers, and chkconfig ntpd on
  • Set up EPEL
  • yum install bash-completion git screen telnet (telnet is for checking ports, not insecure logins!)
  • Add a non-privileged user
  • I added repos for Aeolus, but did not install anything from them for the base image.
  • Disable smartd, enable acpid
  • Allow incoming traffic on ports 22, 80, 443, 3000 in the firewall
  • Set up Avahi: yum install avahi, chkconfig avahi-daemon on, and open UDP port 5353 in the firewall. Do the same on your desktop, and edit /etc/nsswitch.conf’s “hosts:” line to read “hosts: files mdns4_minimal [NOTFOUND=return] dns myhostname”. Now ssh vm-hostname.local will “just work”. (Thanks, eck, for this trick! See the sketch after this list.)
  • Clean things out for provisioning of guests: touch /.unconfigure; yum clean all; rm -rf /etc/ssh/ssh_host_*; poweroff
  • In hindsight, it might have been worthwhile to set up a basic local LDAP server on the guest so that I could test Conductor against it when needed.
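
Here’s the Avahi bullet from above spelled out as commands, assuming a RHEL 6-era guest with iptables (mdns4_minimal is provided by the nss-mdns package, which on RHEL lives in EPEL):

yum install avahi nss-mdns
chkconfig avahi-daemon on
service avahi-daemon start
iptables -I INPUT -p udp --dport 5353 -j ACCEPT
service iptables save

Then, on the desktop doing the resolving, set the hosts line in /etc/nsswitch.conf:

hosts: files mdns4_minimal [NOTFOUND=return] dns myhostname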

When the machine shuts down, you shouldn’t boot it again, unless you are prepared to wipe out any derivative guests.
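
If you’re curious what that relationship looks like on disk, qemu-img can show you a guest’s backing chain (the path below is a guess; adjust it to wherever your images actually live):

qemu-img info /var/lib/libvirt/images/test_f17_guest.img

The output should include a “backing file” line pointing at the base image, which is exactly why the base has to stay frozen.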

Usage

I ended up passing a few more options than are strictly required, because I didn’t love all of the defaults:

sudo /usr/local/bin/snap-guest -b Fedora-17-base -t test_f17_guest -m 2048 -c4 -n bridge=br0

This will clone the “Fedora-17-base” image, starting a “test_f17_guest” VM. -m 2048 tells it to use 2048MB of RAM instead of the default 800MB, -c4 gives it 4 cores, and -n bridge=br0 brings it up on my host’s br0 bridged interface for networking. Obviously, customize all of this as required.

Note that the system will come up with a hostname matching whatever you used with -t. If you set up Avahi as I outlined above, you should be able to “ssh test_f17_guest.local” and log right in.

I still have some kinks to work out, like network interfaces coming up under different names. But I think this is going to be immensely useful going forward. Historically, needing to test a patch on RHEL, or finding a clean Fedora system to test an upstream patch on to rule out issues with my local setup, has been a real timesink. Now it takes about 10 seconds to make a cloned guest, and under a minute for it to boot. I can re-use guests, or just trash them when I’m done.