A brief analysis of why sshd is not required to run in a Docker container


When people start using Docker, they often ask, "How do I get into the container?" and others answer, "Run an SSH server in your container." But as you will learn from this post, you do not need to run an SSH daemon to get into your container at all. Unless, of course, your container is an SSH server.

Running an SSH server feels like the obvious choice because it provides an easy way into the container. Almost everyone in our company has used SSH at least once, a large percentage of us use it every day, and we are familiar with public and private keys, password-less logins, key agents, and sometimes even port forwarding and other less common features. For that reason, it's not surprising that people recommend you run SSH in a container. But you should think it over.

If you are building a Docker image for a Redis server or a Java web service, I will ask you the following questions:

What do you need SSH for? Generally speaking, you may want to take a backup, check the logs, restart the process, adjust the configuration, or perhaps debug the server with gdb, strace, or similar tools. We'll look at how to do each of these things without SSH.

How do you manage your keys and passwords? Generally speaking, you either bake them into your image or put them in a volume. Think about what you would do if you had to update those keys or passwords. If you bake them into the image, you need to rebuild the image, redeploy it, and restart the container. That's not the end of the world, but it's not a great way to do it. It is much better to put them in a volume and manage them through that volume. This method works, but it has serious drawbacks. You must make sure that the container does not have write permission to the volume; otherwise the container could corrupt the keys (making it impossible for you to get into the container later), and the situation is even worse if the volume is shared with multiple containers. If we didn't use SSH, wouldn't that be one less thing to worry about?
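For example, if you do keep keys in a volume, you can at least mount it read-only so the container cannot corrupt them. A minimal sketch (the host path and image name here are made up):

# Mount a hypothetical host directory holding the keys as a read-only volume
docker run -d -v /opt/ssh-keys:/etc/ssh/keys:ro my-ssh-enabled-image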

How do you manage security upgrades? The SSH server is quite secure, but security issues still turn up, and when they do you will have to upgrade every container that runs SSH. That means a lot of rebuilding and restarting. In other words, even if you run a simple, compact memcached service, you still have to keep its SSH server patched, or one small weakness can compromise the whole thing. So again, if we didn't use SSH, wouldn't that be one less thing to worry about?

Do you only need to "install an SSH server" to do this? Of course not. You also need to install a process manager, such as Monit or Supervisor, because Docker itself watches only one process. If you need to run multiple processes, you have to add a layer on top of them to supervise them. In other words, you're complicating a simple problem. And if your application stops (whether it exits normally or crashes), you will have to dig that information out of your process manager's logs instead of getting it directly from Docker.

You may be responsible for putting your application in a container, but should you also be responsible for managing the access policy and the security restrictions? In a small organization, that may not matter. But in a large organization, if you're the one responsible for setting up the application container, there is probably a different person responsible for defining the remote access policy. Your company probably has strict policies about who can get access and how, along with audit trail requirements. In that environment, you won't be allowed to throw an SSH server into your container.

But how do I...

Back up my data?

Your data should live in a volume. You can then run another container with the --volumes-from option and share that volume with the first container. The advantage: if you need to install a new tool (such as s3cmd, to move your backups to long-term or permanent storage), you can do so in this dedicated backup container rather than in the main service container. That's neat.
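A minimal sketch of this pattern, assuming a Redis container that keeps its data in /var/lib/redis (names and paths are illustrative):

# Start the service with its data in a volume
CID=$(docker run -d -v /var/lib/redis redis)
# Run a throwaway backup container that shares the volume
# and streams a tarball of the data to the host
docker run --rm --volumes-from $CID ubuntu \
  tar czf - /var/lib/redis > redis-backup.tar.gz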

Check the logs?

Use a volume again! If you write all your logs to a specific directory, and that directory is a volume, you can start another "log inspection" container (with --volumes-from, remember?) and do what you need in it. If you need special tools (or just a fancy ack-grep), you can install them in this container and keep the main container's environment pristine.
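For example, assuming the service writes its logs under /var/log/myapp (a made-up path):

# Inspect the logs from a separate container sharing the log volume
docker run --rm --volumes-from $CID ubuntu \
  tail -n 100 /var/log/myapp/access.log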

Restart the service?

Basically every service can be restarted by a signal. When you run /etc/init.d/foo restart or service foo restart, that really just sends a specific signal to a process. You can send that signal with docker kill -s <signal>. Some services don't listen for signals but accept commands on a specific socket instead. If it is a TCP socket, you just need to connect over the network. If it is a UNIX socket, you can use a volume again: set up the container so the service's control socket lives in a specific directory, and make that directory a volume. Then launch a new container with access to that volume, and it can use the UNIX socket.
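For instance, many daemons (such as nginx) reload their configuration on SIGHUP, so a sketch of a signal-based restart might be:

# Send SIGHUP to the container's main process (PID 1 inside the container)
docker kill -s HUP <container_name_or_ID>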

"But this is too complicated!" -not really. Suppose you called foo servcie on/var run/foo sock creates a socket, and need you run fooctl restart to complete restart. Just use -v /var/run(or add VOLUME /var/run to the Docker file) to launch this service. When you want to restart, use the -- volumes-from option and override the command to start the same image. Like this:


# Starting the service 
CID=$(docker run -d -v /var/run fooservice) 
# Restarting the service with a sidekick container 
docker run --volumes-from $CID fooservice fooctl restart 

Simple!

Modify my configuration file?

If you are making a persistent configuration change, you'd better put it in the image, because if you start another container, the service will come back up with the old configuration and your change will be lost. So no SSH access needed for that! "But I need to change the configuration during the lifetime of the service, like adding a new virtual host!" In that case, you need to use... wait for it... a volume! The configuration should live in a volume, and that volume should be shared with a special-purpose "configuration editor" container. You can use anything you like in this container: SSH plus your favorite editor, a web service that accepts API calls, or a scheduled job that pulls information from an external source, and so on. This also separates concerns: one container runs the service, another handles configuration updates. "But I was making a temporary change because I was testing different values!" In that case, see the next section!
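A minimal sketch of this "configuration editor" pattern, reusing the hypothetical fooservice from above (paths are illustrative):

# Start the service with its configuration directory in a volume
CID=$(docker run -d -v /etc/fooservice fooservice)
# Edit the configuration from a throwaway container sharing that volume
docker run --rm -it --volumes-from $CID busybox vi /etc/fooservice/foo.conf
# Then ask the service to reload, e.g. with a signal
docker kill -s HUP $CID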

Debug my application?

This is probably the only scenario where you genuinely need to get into the container: to run gdb, strace, tweak a configuration, and so on. In this case, you want nsenter.

Introducing nsenter

nsenter is a small tool for entering namespaces. Technically, it can enter existing namespaces, or spawn a process in a new set of namespaces. "What are namespaces?" They are an essential building block of containers. Simply put: with nsenter you can get into an existing container, even though that container runs neither SSH nor any special-purpose daemon.

Where do you get nsenter?

Check out jpetazzo/nsenter on GitHub. The simple way to install it is:


docker run -v /usr/local/bin:/target jpetazzo/nsenter 

It will install nsenter into /usr/local/bin, and you can use it immediately.

nsenter is also available in your distribution (in the util-linux package).

How to use it?

First, find the PID of the container you want to enter:


PID=$(docker inspect --format '{{.State.Pid}}' <container_name_or_ID>) 

Then enter the container:


nsenter --target $PID --mount --uts --ipc --net --pid 

This gives you a shell inside the container. If you want to run a particular script or program in an automated manner, add it as an argument to nsenter. It works a bit like chroot, except that it uses containers instead of plain directories.
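For example, to run a single command instead of an interactive shell:

# Run a one-off command inside the container's namespaces
nsenter --target $PID --mount --uts --ipc --net --pid -- ps aux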

How about remote access?

If you need to enter a container from a remote host, there are (at least) two ways:

SSH into the Docker host and use nsenter;

SSH into the Docker host using a special key that is restricted to a specific command (namely, nsenter).

The first method is relatively simple, but it requires root access to the Docker host (not great from a security standpoint). The second method uses the command= pattern in SSH's authorized_keys file. You are probably familiar with the "classical" authorized_keys file, which looks like this:

ssh-rsa AAAAB3N...QOID== jpetazzo@tarrasque

(Of course, a real key is much longer and typically wraps across several lines on screen.) You can also force a specific command to run. If you want to be able to check the available memory on a remote host via an SSH key, but don't want to hand out full shell access, you can put the following in the authorized_keys file:


command="free" ssh-rsa AAAAB3N ... QOID== jpetazzo@tarrasque 

Now, when you connect with that specific key, instead of getting a shell, you execute the free command, and you can't do anything else. (In practice, you would probably also add no-port-forwarding; see the authorized_keys(5) manpage for more information.) The key to this mechanism is the separation of responsibilities: Alice puts services inside containers; she doesn't have to deal with remote access, logins, and so on. Betty adds an SSH layer for the special cases (debugging strange problems). Charlotte takes care of logins. And so on.
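Following the free example above, a hedged sketch of such an entry might look like this, where /usr/local/bin/enter-foo-container is a hypothetical wrapper script on the Docker host that resolves the container's PID and invokes nsenter:

command="/usr/local/bin/enter-foo-container",no-port-forwarding ssh-rsa AAAAB3N...QOID== jpetazzo@tarrasque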

Conclusion

Is it really Wrong (with a capital W) to run an SSH server in a container? To be honest, it's not that bad. It is even extremely convenient when you don't have access to the Docker host but still need a shell in the container. But as we saw here, there are many ways to get all the features we want without running an SSH server in the container, with a much cleaner architecture. Docker allows you to use whatever workflow works best for you. But before you rush into the "my container is really a small VPS" mindset, be aware that other solutions exist, so you can make an informed decision.

