Re: [announce] s6-rc: a s6-based service manager for Unix systems

From: Avery Payne <>
Date: Thu, 24 Sep 2015 14:06:28 -0700

On 9/24/2015 12:29 PM, Laurent Bercot wrote:
> On 24/09/2015 18:45, Avery Payne wrote:
>> This is probably the only real problem in the arrangement I'm using
>> then. The entire premise is "the support scripts live where the
>> definitions live and are called with relative pathing, allowing the
>> definitions to be located anywhere".
> My point of view is that service directories have their place on a
> RAM filesystem, so they should not include heavy data. Stuff like
> support scripts is read-only: it doesn't have to be in RAM.
> You can have local data with relative pathing... in your own
> service directory. As soon as you exit your service directory,
> it's global data.
Ah. As it is designed right now, and based on your definition, my .run/
directory is global data. This is where we slightly disagree: the .run/
directory was meant to be movable, not installed at a fixed location,
and everything in it is relative to the installation, so it is more
"local" than "global". That's ok, I think we can work this out.

A side note: there really isn't a lot of heavy data. In fact, I feel
ashamed to admit that there will be a lot of wasted block space because
each file is either a symlink (which is a few characters), or a fixed
file that has less than 80 bytes of data. There's a lot of empty space
in this arrangement, but that's ok. For a laptop, desktop, or server,
it's a few megabytes of install on a gargantuan multi-gigabyte hard
drive. For an embedded system, all of that empty block space will get
compressed and squeezed into something much smaller and denser. Either
way, the empty space ends up not being so bad.

>> This was done precisely to
>> avoid the mess of "you MUST have your scripts/support scripts live at
>> this exact path", which was originally inspired by a message exchange
>> we had months ago about separating policy from mechanism. For my
>> definitions to work, those four dot-directories have to live where
>> the run-time definitions live. In this case, your compiled directory
>> is the equivalent of a run-time directory.
>> I am certainly not averse to changing their names or locations - the
>> entire purpose is to have a set of definitions that work out of the
>> box, and I need to make it do whatever is required to achieve that -
>> but I cringe at the concept that I would need to use an absolute
>> path, because I'll end up making everything brittle as a result. I'm
>> open to suggestions on how to fix this.
> I'd suggest something like a BASEDIR environment variable, defaulting
> to "." or "../svcdef" or whatever the relative path for your support
> scripts normally is. All your calls to the support scripts would then
> start with ${BASEDIR}. Is it envisionable?
Yes and no. I already have a default set of environment settings,
including a PATH, that are included in a .env/ directory. I could
include BASEDIR as well and do some shenanigans with that to make this
work. As part of the use-s6-rc script, I can call upon the definition
of BASEDIR and re-link all of the definitions in the source directory to
point to the absolute path that contains the .run/ directory. The
problem remains that you are asking me to make all my symlinks
non-relative, and while I can do that (and yes, everything will work) it
"feels" brittle to me.
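For illustration, a minimal sketch of the BASEDIR idea (the variable
name and default come from Laurent's suggestion; the rest is
hypothetical):

```shell
#!/bin/sh
# Hypothetical fragment of a run script: honor an admin-supplied BASEDIR,
# falling back to the usual relative location of the support scripts.
BASEDIR="${BASEDIR:-../svcdef}"
echo "support scripts resolved from: ${BASEDIR}/.run"
# the real script would then end with something like:
#   exec "${BASEDIR}/.run/run" "$@"
```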

With just the relative links, supporting an end-user is easy -
everything is at a known location, and if it isn't, well, "there's your
problem right there", we just need to place these directories at the
correct location and it's all fixed. With fixed pathing, I now have to
figure out where everything is hiding, and if the install was munged
badly, or worse, the user did something completely off the wall, then
who knows where anything is. It muddies the waters when trying to
figure out what is going wrong.

s6-rc gets around this because it's very explicit about where the source
and compiled definitions are, no problem there. But my support scripts
work with relative pathing by design. By the way, when I say support
script, I really mean the main run script for every definition. More
info on that at the bottom of this message...

>> The only magic needed is that the support directories have to be
>> located in the same runtime location, so that the relative path links
>> will align and everything will be found. The compiled definitions
>> are the equivalent of a runtime location, therefore, they must live
>> there.
> Nope, it doesn't work that way. s6-rc relocates the service directories
> at run time, and it only knows how to handle service directories. If
> you have extra data, put it either in a service directory or completely
> outside s6-rc's reach.
There may be a (rather ugly) workaround I can do for this, pointing
(definition)/run file to (definition)/data/run, which in turn points to
a fixed location. More in a moment...

>> (a) stay a symlink pointing to a directory in /data, in this case the
>> directory is named /data/1.0.0 or (b) dereference /env entirely,
>> turning /env into a real directory filled with the contents of the
>> files in data/1.0.0 so that the files are retained. Either solution
>> works.
> Currently, (a) happens. I don't guarantee it will stay the same, but
> it should, and if it ever changes, it will change to (b). So that will
> not be a problem.
Cool. That means that the version of options passed to the daemon can
still be controlled and maintained by the admin and/or package
maintainer. This was the issue that no-one (on the mailing list)
believed was a problem; but if you are a packager, just how do you
deal with changing options, especially if your distribution allows you
to roll backwards as well as roll forwards? What if you have two
versions of the same software installed (see Debian's weird approach to
handling postgresql)? How do you deal with that?

Having the options match a certain version means you only care when you
rev your software. If you roll back, the old version of the options is
still there, and you simply point to it instead. Rolling
forward is the same, change a symlink to point to the envdir that holds
the correct options. Without the symlink, we are replacing each file
for each change, every time. With the symlink, no file really changes,
unless a new set of envdir settings shows up. In the Debian PostgreSQL
example, you could clone the postgresql definition, and have two
definitions, one for the old version and one for the new. The contents
of the definitions will be identical except for one thing: env/ points
to a different directory. Nothing else would change.
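A rough sketch of that symlink flip (paths and version numbers are made
up for illustration):

```shell
#!/bin/sh
# Two envdirs hold the options for two releases; env is just a symlink
# selecting the active one.  Rolling forward or back changes one link,
# and no option file is ever rewritten.
demo=$(mktemp -d)
cd "$demo"
mkdir -p data/9.3 data/9.4
echo 5432 > data/9.3/PGPORT
echo 5433 > data/9.4/PGPORT
ln -s data/9.3 env            # definition currently runs the old version
rm -f env; ln -s data/9.4 env # "roll forward": only the symlink changed
cat env/PGPORT                # -> 5433
```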

And yes, I ran into this when attempting to convert the settings
published at, and found out that some options had
disappeared, had changed meaning, or were no longer required, because
the examples given were based on very old versions of the software; and
over time, those versions had "command option drift", adding and
dropping options. Command options that shift and change over time are
a maintenance headache, a small one for the end-user but a big one for
the distribution riding herd on hundreds of definitions.

>> It doesn't matter if they have a wrapper, as long as the terminus of
>> that chain ends up calling ../.run/run at some point.
> Honestly, if you're interested in converting your supervision-scripts
> to the s6-rc format, I think it will be easier to work on an automatic
> offline converter that takes your scripts and rearranges them in a
> format that plays nice with s6-rc-compile than to try and port your
> whole directory structure. It's not like you're going to need all the
> compatibility layers if you know your target is s6-rc, hence s6.
That's the purpose of the yet-to-be written use-s6-rc script. It would
follow steps 1 through 7 in my prior message, attempting to reformat all
of the definitions to what is needed for s6/s6-rc, and make it available
in the source directory that s6-rc expects to use prior to being
compiled. I'm not trying to circumvent s6-rc-compile; rather, think of
use-s6-rc as a pre-compiler to reformat my existing arrangement into
something palatable, placing it into the source directory that is used
by s6-rc-compile.

We're in agreement here, no problem.

>> One of the stated goals of the project is that there will be
>> no such thing as a separate /run file in each definition
> Can you clarify what benefit you're getting from that approach?
1. Maintainability. There is only one /run file to maintain for the
entire project. As a by-product, any bug fixes in that one file
automatically extend to every definition, "fixing" all of them. A
recent example: I took the advice of people on the mailing list and
moved peer-based startup out of the script, into a separate
script that is called from it. I only had to write this once,
instead of 100+ times (i.e. for each and every definition). Another
example: I have a bug in the setup of a /var/run (or /run or whatever it
is you are using) directory, causing it to not be set with the correct
group permissions. Fixing this bug fixes it for every definition...yet
not a single definition was changed. Having this approach also fixes a
problem on the programming side of things - namely, that the programmer
will not scale as well as the machine will.

2. Flexibility. (When I speak of this, I am talking in the context of
the contents of either svcdef/ or your source directory only, not the
compiled directory.) As I mentioned, you can switch away from using
shell scripts entirely and use execline, and when you do, all of your
definitions are moved to using execline. Again, the definitions don't
change; they remain the same. Only a *single* symlink in .run/ changes,
instead of all of the links in all of the definitions. If you have some
weird situation where you want to mix and match (service A runs with
execline but service B throws a temper tantrum and requires a shell)
then you can modify the symlink of that one definition to point to the
shell launcher.
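A sketch of that single-symlink switch (launcher file names are
illustrative):

```shell
#!/bin/sh
# The shared .run/run is itself a symlink selecting the project-wide
# launcher; flipping it moves every definition at once.
d=$(mktemp -d)
cd "$d"
mkdir .run
printf '%s\n' '#!/bin/sh' 'echo shell launcher' > .run/run.sh
printf '%s\n' '#!/command/execlineb -P' 'echo execline launcher' > .run/run.el
ln -s run.sh .run/run           # everything launches via the shell...
rm -f .run/run
ln -s run.el .run/run           # ...one change, everything uses execline
readlink .run/run               # -> run.el
```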

3. Compatibility. While there are many systems that support just using
a single /run file, there are systems that don't. Both perp and nosh
come to mind, as perp has internal procedures based on a passed
parameter, and nosh has multiple script files based on the intended
action. I have an untested shim for perp based on the shim described at
the perp website. I'm still thinking over how to support nosh. If
necessary, I will be able to get away with just writing perp and nosh
specific scripts, and more importantly, the definitions still don't
change; there will be just one script for each "function" that perp and
nosh require; and yes, they will appear as symlinks pointing to a master
script in each case. I could even include those in the definition, and
s6-rc will ignore them during the compile process, omitting them
completely - which is exactly what I want to have happen.

4. Separation of concern. How you launch your daemon, what supervision
suite you use, or for that matter the system manager you use, does not
matter. I am only focused on (a) being able to wedge the definitions
into whatever arrangement is present, (b) the correct chainload that a
daemon needs, (c) the correct options to be passed to said daemon, and
(d) any ephemeral things like /var/run/(service) that need to be set up at
runtime. To make this possible, I had to abstract away the run script,
logging, the chainload programs, and even where the definitions live.
The only solution I could come up with that didn't require a hard-coded
path was to use a relative path. Because the definitions (generally
speaking in all cases) "live together" at the same location in svcdef/,
it made sense, at the time, to encode the path to use the same parent
directory. So, svcdef/ is just a directory filled with definitions, and
it also happens to have a .run/ directory inside of it, where the run
script lives. Every definition that has a /run file, well, that file is
actually just a symlink to ../.run/run in the svcdef/ directory. There
literally are no run files in my definitions (well that's not entirely
true, I have about 8 scripts that need to be converted to this format,
but that's one of the goals).
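The layout described above can be sketched like this (the definition
names are invented; only .run/ and the ../.run/run link come from the
message):

```shell
#!/bin/sh
# Build a toy svcdef/: one master run script, and per-definition run
# "files" that are really relative symlinks into .run/.
top=$(mktemp -d)
cd "$top"
mkdir -p svcdef/.run svcdef/cron svcdef/sshd
printf '%s\n' '#!/bin/sh' 'echo master run script' > svcdef/.run/run
ln -s ../.run/run svcdef/cron/run
ln -s ../.run/run svcdef/sshd/run
readlink svcdef/sshd/run        # -> ../.run/run
sh svcdef/cron/run              # resolves through the link to the master
```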

5. Portability. I can't rely on the type of shell used, the
distribution, or even the kernel used. I can't rely on much of anything
other than (a) there has to be some kind of shell for the admin to
manipulate this, (b) the filesystem supports . and .. directories,
allowing me to traverse a directory tree without knowing the path in
advance, and (c) the filesystem supports symlinks. Finally, the
settings for the daemons are stored in envdir format, because it was the
one format that I could be assured of having available for each and
every supervision suite. Even when it isn't available, there are ways
to work around this, typically by just reading the files.
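The "just reading the files" fallback might look like this (a
simplification: a real envdir tool also handles empty files and embedded
newlines differently, and the variable names here are invented):

```shell
#!/bin/sh
# Load an envdir into the environment without an envdir(8)-style tool:
# each file name becomes a variable, its first line becomes the value.
d=$(mktemp -d)
mkdir "$d/env"
echo "/var/lib/demo" > "$d/env/DATADIR"
echo "5432"          > "$d/env/PORT"
for f in "$d"/env/*; do
  read -r value < "$f"
  export "$(basename "$f")=$value"
done
echo "$DATADIR $PORT"           # -> /var/lib/demo 5432
```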

So here's the new plan:

The source directory will be defined in the SOURCEDIR variable, which
will be stored inside of svcdef/.env (SOURCEDIR will be equivalent in
spirit to your suggestion of BASEDIR). This actually solves a problem
that I've been thinking about for supporting things like anopa. Now
SOURCEDIR will always refer to wherever the original definitions are
meant to live (as opposed to the ones used at runtime). Thanks.

The *contents* of svcdef/, not the svcdef/ directory itself, will be
copied into the source directory. This is what I was trying to say in
the original message. ;) The copy will include the .bin, .run, .log,
and .env directories, which are the support directories, and they will
live alongside the definitions at $SOURCEDIR. (Sorry about mentioning
.finish; I forgot that I removed it.)

Every definition's /run file will be re-linked to point to the .run/
directory in the *source* directory. This will turn all of my relative
paths into fixed paths, but at a known location.
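That re-linking step might be sketched as follows (the tree is a
mock-up; only SOURCEDIR and the .run/ layout come from the plan above):

```shell
#!/bin/sh
# Replace each definition's relative run symlink with an absolute one
# pointing into the source directory's .run/.
SOURCEDIR=$(mktemp -d)
mkdir -p "$SOURCEDIR/.run" "$SOURCEDIR/sshd"
printf '%s\n' '#!/bin/sh' 'echo run' > "$SOURCEDIR/.run/run"
ln -s ../.run/run "$SOURCEDIR/sshd/run"    # the shipped, relative link
for def in "$SOURCEDIR"/*/; do
  [ -L "${def}run" ] || continue
  rm -f "${def}run"
  ln -s "$SOURCEDIR/.run/run" "${def}run"  # fixed path, known location
done
readlink "$SOURCEDIR/sshd/run"             # now absolute
```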

The above three steps will be mixed into the original 7 steps I
described, in the correct sequence.

The compiled versions will end up with each definition having a /run
symlink that points to the .run/run stored in the *source* directory.
If you decide to dereference the link and end up copying the contents of
.run/run instead, things continue to work because /run will be a real
file and not just a symlink to the master $SOURCEDIR/.run/run file.

Another option: write use-s6-rc to extract all of the /env settings,
dereference all of my /run symlinks, do some other misc. grunt
work, and stuff the results into the source directory. This works too,
because the intent remains the same - provide a fairly complete set of
definitions out of the box, that work for the user (sysadmin) and the
distribution (packager).
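The dereferencing part of that option could be as simple as this (mock
tree; cp follows the source symlink when copying):

```shell
#!/bin/sh
# Turn each definition's run symlink into a real file, so the result
# carries no links back to the master script at all.
src=$(mktemp -d)
mkdir -p "$src/.run" "$src/web"
printf '%s\n' '#!/bin/sh' 'echo run' > "$src/.run/run"
ln -s ../.run/run "$src/web/run"
for def in "$src"/*/; do
  [ -L "${def}run" ] || continue
  cp "${def}run" "${def}run.tmp"   # cp dereferences the source symlink
  mv "${def}run.tmp" "${def}run"   # ...and mv replaces the link itself
done
[ -L "$src/web/run" ] || echo "run is now a regular file"
```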
Received on Thu Sep 24 2015 - 21:06:28 UTC

This archive was generated by hypermail 2.3.0 : Sun May 09 2021 - 19:38:49 UTC