Re: process supervisor - considerations for docker

From: Gorka Lertxundi <glertxundi_at_gmail.com>
Date: Thu, 26 Feb 2015 09:31:09 +0100

Hi,

My name is Gorka, not Gornak! It seems I suddenly discovered that I was
born in Eastern Europe! hehe :)

I'll answer both of you in one mixed reply, so try not to get confused.

Let's go,

> But Gornak - I must say that your new ubuntu base image really seem *a
> > lot* better than the phusion/baseimage one. It is fantastic and an
> > excellent job you have done there and you continue to update with new
> > versions of s6, etc. Can't really say thank you enough for that.
>

Thanks!

I think if anybody were to start up a new baseimage project, Alpine is
> the way to go, hands-down. Tiny, efficient images.


Wow, I hadn't heard about Alpine Linux. What would differentiate it from,
for example, busybox with opkg? https://github.com/progrium/busybox. Busybox
is battle-tested, and having a package manager in it seems like the right
way to go.

The problem with these 'not as mainstream as Ubuntu' distros is the smaller
community around them. That community discovers things you probably wouldn't
be aware of on your own: bugfixes, fast security updates, and so on. So my
main concern about the image is not its size but how easily it can be kept
up to date and secure, no matter its size. Even so, as you probably know,
Docker stores images incrementally, so the base image is stored only once
and all app-specific images are layered on top of it.

It is always the result of a trade-off between ease of use, size and
maintainability.


> > Great work Gorka for providing these linux x86_64 binaries on Github
> releases.
> > This was exactly the kind of thing I was hoping for / looking for in
> > regards to that aspect.
>

As I said in my last email, I'll try to keep them updated.

> Right so I was half-expecting this kind of response (and from John
> > Regan too). In my initial post I could not think of a concise-enough
> > way to demonstrate and explain my reasoning behind that specific
> > request. At least not without entering into a whole other big long
> > discussion that would have detracted / derailed from some of the other
> > important considerations and discussion points in respect to docker.
> >
> > Basically without that capability (which I am aware goes against
> > convention for process supervisors that occupy pid 1). Then you are
> > forcing docker users to choose an XOR (exclusive-OR) between either
> > using s6 process supervision or the ability to specify command line
> > arguments to their docker containers (via ENTRYPOINT and/or CMD).
> > Which essentially is like a breakage of those ENTRYPOINT and CMD
> > features of docker. At least that is my understanding how pretty much
> > all of these process supervisors behave. And not any criticism
> > levelled at s6 alone. Since you would not typically expect this
> > feature anyway (before we had containerisation etc.). It is very
> > docker-specific.
> >
> > Both of you seem to have stated effectively that you don't really see
> > such a pressing reason why it is needed.
> >
> > So then it's another thing entirely for me to explain why and convince
> > you guys there are good reasons for it being important to be able to
> > continue to use CMD and ENTRYPOINT for specifying command line
> > arguments still remains an important thing after adding a process
> > supervisor. There are actually many different reasons for why that is
> > desirable (that I can think of right now). But that's another
> > discussion and case for me to make to you.
> >
> > I would be happy to go into that aspect further. Perhaps off-the
> > mailing list is a better idea. To then come back here again when that
> > discussion is over and concluded with a short summary. But I don't
> > want to waste anyone's time so please reply and indicate if you would
> > really like for me to go into more depth with better justifications
> > for why we need that particular feature.
>

I don't think it must be one or the other. With CMD [ "/init" ] you can:
* start your supervisor by default: docker run your-image
* get access to the container directly, without any s6 process started:
docker run your-image /bin/bash
* run a custom script and supervise it: docker run your-image /init
/your-custom-script
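As a sketch of how those modes can coexist, here is a minimal /init-style wrapper. This is hypothetical, not taken from Gorka's image: the s6-svscan invocation, the /etc/s6 scan directory, and the /tmp/init-demo path are all assumptions for illustration.

```shell
# Hypothetical /init wrapper: with no arguments it starts the supervision
# tree; with arguments it execs them instead, so docker's CMD/ENTRYPOINT
# override semantics keep working.
cat > /tmp/init-demo <<'EOF'
#!/bin/sh
if [ "$#" -eq 0 ]; then
  # Default case (docker run your-image): become the supervisor.
  exec s6-svscan /etc/s6
fi
# Override case (docker run your-image /init some-command ...): run that.
exec "$@"
EOF
chmod +x /tmp/init-demo

# The override path can be exercised even without s6 installed:
/tmp/init-demo echo "custom command ran"
```

The key point is the final exec "$@": whatever argv docker passes after /init replaces the wrapper process, so no supervision is forced on the user.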


> > Would appreciate coming back to how we can do this later on. After I
> > have made a more convincing case for why it's actually needed. My
> > naive assumption, not knowing any of s6 yet: It should be that simply
> > passing on an argv[] array ought to be possible. And perhaps without
> > too many extra hassles or loops to jump through.
>

I would appreciate those use-cases! :-)


> > >>
> https://github.com/glerchundi/container-base/blob/master/rootfs/etc/s6/.s6-svscan/finish#L23-L31
> >
> > That is awesome ^^. Like the White Whale of moby dick. It really felt
> > good to see this here in your base image. Really appreciate that.
>

With some changes to fit my Docker base image's needs, but it was taken from
Laurent's examples in the s6 tarball, so thanks Laurent :-)


> > Nope. At least for the initial guidelines I gave - the container has
> > only 1 supervised process (not 2). As per the official docker
> > guidelines. Having a 2nd or 3rd etc supervised process means optional
> > behaviour...
> >
> So this is something I actually disagree with - it seems like
> everybody gets hung up on how many processes the container is running.
> Like I mentioned in a previous email, people start talking about "the
> Docker way" means "one container, one process."
>
> I think the correct approach is one *task* per container. The
> container should do one thing - serve web pages, be a chat service,
> run a webapp, and so on. Who cares if you need one process or many
> processes to get the task done?
>
> A pretty good example is running something like GitLab. You wind up
> running a web server and an SSH daemon in the same container, but
> they form a single, logical service.
>
> To clarify, I'm against running ssh as a "let the user log in and poke
> around" service like Phusion does. But in GitLab, it makes sense - the
> users add their SSH keys via the web interface, and only use it for
> cloning/pushing/pulling/etc, not interactively.
>
> There is a limit to what kinds of processes should be in a container.
> Again, using GitLab - you need to run a MySQL or PostgreSQL database
> for GitLab, but that's really a whole different task and should be in
> a different container.
>
> I *do* agree with implementing optional features, though. In my
> images, if you link a container to a logstash container, then the
> logstash-forwarder will kick in and collect logs. If you don't, it
> doesn't start the logstash-forwarder process.
>

Totally agree. I'm with John.

One Docker container means one service, not one process. For example, a PHP
web server, which is composed of nginx and php-fpm.

This is a great discussion!