anopa: init system/service manager built around s6

From: Olivier Brunel <>
Date: Fri, 10 Apr 2015 22:48:10 +0200

Hey there,

So for some time now, I've been meaning to make changes to my system and
move to something based around a supervision suite, specifically s6.

To do so, I needed something to take care of the whole boot process and
such, and eventually decided to make my own init system/service manager,
which would take care of that and help with dependencies a bit.

Since what I have now is starting to look like something, I thought I'd
share, so here we go: introducing anopa.

anopa is a bunch of tools/scripts, built around s6, aimed at providing an
init system and service manager for Linux systems. Assuming you're
familiar with Laurent's description of the three stages of init[1],
anopa provides execline scripts for the different stages; that is:

- stage1 is the init that gets things ready, i.e. creates the
runtime service repository (and s6-svscan scandir), then execs into
s6-svscan (as PID 1)

- stage2 is the script that blocks until s6-svscan is ready (or really,
until the catch-all logger is running) and triggers starting all services

- stage3 handles the reboot/poweroff procedure

- and there's also stage0 & stage4, which are meant to be used as
init/shutdown respectively inside an initramfs

s6 works with services, defined via their servicedirs. Since it is built
around s6, anopa uses the same principle; however, a service from
anopa's point of view can either be a one-shot, i.e. a process to start
once on boot (during stage 2) and that's it (possibly with a counterpart
to be run on shutdown, during stage 3), or a long-run, i.e. a process to
be started during stage 2, and to stay up until stage 3 is initiated.

Therefore, there are two kinds of services, and of servicedirs: one-shot
and long-run. A servicedir is simply a directory in which the
definition of the service is held.

Long-run services require a file named run to be present, which will be
the long-running process launched & supervised. The rule is therefore:
if a servicedir contains a run file, it defines a long-run service;
otherwise it defines a one-shot one.
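The run-file rule can be sketched in plain sh (throwaway paths and
hypothetical service names, not anopa's actual repository):

```shell
#!/bin/sh
repo=$(mktemp -d)

# Long-run servicedir: contains a run file, the supervised process.
mkdir "$repo/sshd"
printf '#!/bin/execlineb -P\n/usr/sbin/sshd -D\n' > "$repo/sshd/run"

# One-shot servicedir: no run file; just a start script, run once.
mkdir "$repo/hostname"
printf '#!/bin/execlineb -P\nhostname myhost\n' > "$repo/hostname/start"

# Classify each servicedir by the rule: run file present => long-run.
for dir in "$repo"/*/; do
  name=$(basename "$dir")
  if [ -f "$dir/run" ]; then
    echo "$name: long-run"
  else
    echo "$name: one-shot"
  fi
done
```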

One-shot services instead have a start script to be run when the service
is started, and a stop script to be run when it is stopped. A one-shot
service can have only a start script, only a stop script, both, or
neither; in the latter case, it can be used simply to order things.

Both one-shot & long-run servicedirs can also include directories needs,
wants, after, and/or before to define dependencies/ordering. In short,
each of those contains empty regular files whose names are the names of
services.

Anything in after means the current service will only be started after
those services; anything in before means it'll be started before they
are. Those are ordering directives only, i.e. if the mentioned services
aren't being started, the directives are ignored.

wants allows specifying services to be automatically started (without
any ordering), while needs is a special case: the listed service will be
automatically started, it will also be added into after, and should it
fail to start, the current service won't be started but will be put in
error (dependency failed) instead.
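A small sketch of those declarations (hypothetical service names; the
directories and empty files are the whole mechanism):

```shell
#!/bin/sh
repo=$(mktemp -d)
mkdir -p "$repo/httpd/needs" "$repo/httpd/wants" "$repo/httpd/after"

# httpd needs the network: auto-start it, order after it, and put
# httpd in error if it fails to start.
touch "$repo/httpd/needs/networking"
# httpd merely wants this service started too, with no ordering implied.
touch "$repo/httpd/wants/warm-cache"
# pure ordering: if syslogd is being started as well, start it first.
touch "$repo/httpd/after/syslogd"
```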

There's a bit more description in the man pages, but that's the basic idea.

To get more into how things are meant to work with anopa: during stage1
the runtime service repository is created. This isn't just a simple copy
of files, but features some extra little things; more on that in a
moment. Then s6-svscan is started, but since all services have a down
file, nothing (but the catch-all logger) is actually started.
That logger unblocks stage2, which has anopa start everything,
one-shot & long-run, using the specified ordering.
stage3 does the stopping: to e.g. reboot, there's a script that simply
calls `s6-svscanctl -rb`, leaving stage3 (which s6-svscan execs into) to
take care of stopping everything in (reverse) order.

Back to creating the runtime repo: it's not just a copy of source
directories to a tmpfs (/run), but actually features a few things.

The way this works in anopa is to use aa-enable(1) to create the
service repository, as well as the scandir. This is done using source
directories, and optionally merging in configuration directories. The
idea is that you will not create your service repository manually, but
instead with aa-enable(1), from pre-established definitions.

The service repository will contain all (enabled) servicedirs, both
one-shots and long-runs. For long-run servicedirs, symlinks will be put
into a directory .scandir, which s6-svscan will use as its scandir.
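A minimal sketch of that step (plain sh with throwaway paths, not
aa-enable's actual code): only servicedirs with a run file get a link in
.scandir, so s6-svscan only ever sees long-runs.

```shell
#!/bin/sh
repo=$(mktemp -d)
mkdir -p "$repo/.scandir" "$repo/sshd" "$repo/hostname"
touch "$repo/sshd/run"        # long-run
touch "$repo/hostname/start"  # one-shot

# Symlink every long-run servicedir into .scandir for s6-svscan.
for dir in "$repo"/*/; do
  if [ -f "$dir/run" ]; then
    ln -s "$dir" "$repo/.scandir/$(basename "$dir")"
  fi
done
```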

A typical organization would work like the following:

- /usr/lib/services
Source directory containing available servicedirs. This is where
servicedirs from packages would be installed.

- /etc/anopa/services
Source directory containing available servicedirs. This is where the
administrator can define their own servicedirs, either because they're
not provided by packages, or to be used instead of packaged ones.

- /etc/anopa/enabled
List directory containing either empty regular files or directories,
whose names are the names of services to enable on boot.

When creating the runtime repository, that directory is read, and for
each service named there, the corresponding servicedir is copied from a
source directory into the runtime repository.
If the entry was a directory, its content is then merged/copied over
into the newly created servicedir, allowing easy service-specific
configuration.

Source directories are looked up in order, thus allowing the
administrator to provide not only their own servicedirs, but also their
own version of a given servicedir.
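The lookup order can be sketched like so (a minimal re-implementation in
plain sh under a throwaway root, not aa-enable itself; getty/sshd are
example names): the first source directory that has the servicedir wins.

```shell
#!/bin/sh
root=$(mktemp -d)
mkdir -p "$root/etc/anopa/services/getty" \
         "$root/usr/lib/services/getty" \
         "$root/usr/lib/services/sshd" \
         "$root/run/services"
echo admin > "$root/etc/anopa/services/getty/run"  # admin's override
echo pkg   > "$root/usr/lib/services/getty/run"    # packaged version
echo pkg   > "$root/usr/lib/services/sshd/run"

# For each enabled service, copy from the first source dir that has it.
for svc in getty sshd; do
  for src in "$root/etc/anopa/services" "$root/usr/lib/services"; do
    if [ -d "$src/$svc" ]; then
      cp -R "$src/$svc" "$root/run/services/$svc"
      break
    fi
  done
done
# getty comes from /etc (admin override), sshd from /usr/lib.
```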

Then there's a notion of instances, where in source directories you can
have a service foo@ but will enable e.g. foo@bar.
This would have the servicedir foo@ copied as foo@bar, which can be
useful to have more than one service of the same "kind" (e.g. getty@tty1
and getty@tty2).
A little execline helper is provided to get the variables SERVICE_NAME
(e.g. getty@tty1), SERVICE (e.g. getty) and INSTANCE (e.g. tty1) to help
deal with things easily.

Also, any file whose name ends with an @ in one of
needs/wants/after/before will be renamed with the instance name; e.g. if
you have a file wants/set-numlock@ inside getty@, it will be copied over
as getty@tty1/wants/set-numlock@tty1 automatically.
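Sketched in plain sh (a hypothetical stand-in for what anopa does, under
throwaway paths), the instantiation plus the trailing-@ rename looks
like this:

```shell
#!/bin/sh
src=$(mktemp -d); repo=$(mktemp -d)
mkdir -p "$src/getty@/wants"
touch "$src/getty@/run" "$src/getty@/wants/set-numlock@"

instance=tty1
# Copy the template servicedir getty@ as the instance getty@tty1.
cp -R "$src/getty@" "$repo/getty@$instance"
# Any dependency file ending in @ gets the instance name appended.
for f in "$repo/getty@$instance/wants"/*@; do
  [ -e "$f" ] && mv "$f" "$f$instance"
done
```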

There's also a special case for the logger. In a servicedir (in a source
directory), an empty file log means the servicedir log will be copied in
its place; and if log is a directory, it is processed as a config folder
(i.e. first the servicedir log is copied, then the original/config dir
is merged/copied over). This makes it very easy to use the same logger
for all services while only defining it once, and still be able to set
service-specific options (or a brand new run script, if needed).

To help further, if a file run-args exists inside such a log dir, it
won't be copied, but its content will be appended to the run file, thus
making it easy to add a script for s6-log, for instance.
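The run-args behavior can be illustrated as follows (hypothetical merge
step in plain sh; the s6-log arguments are just an example):

```shell
#!/bin/sh
log=$(mktemp -d)
# A shared logger's run file, ending just before its arguments...
printf '#!/bin/execlineb -P\ns6-log' > "$log/run"
# ...and a service-specific run-args supplying those arguments.
printf ' -b n10 /var/log/demo\n' > "$log/run-args"

# run-args is not copied as a file: its content is appended to run.
cat "$log/run-args" >> "$log/run"
rm "$log/run-args"
```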

Lastly, any file (in config dirs) whose name starts with a dash won't be
copied, but the file (without the dash) in the created servicedir will
be removed; and any file whose name starts with a plus sign won't be
copied, but will have its content appended to the file (without the
plus) in the created servicedir.
Of course anopa is really a bunch of tools & scripts, so feel free to
use it differently or whatever :)

I realize this might not all be very clear, and I have yet to talk about
what happens when services are started and how things are handled (e.g.
re: timeouts, one-shot output/password input, etc), but I feel I've been
long enough already; hopefully anopa comes with man pages for its
different tools that should explain how things work. (If not, feel free
to ask.)

Obviously it depends on s6 and execline, as well as skalibs.

Source code is on github[2], official site is here[3], in case it can be
useful to some.

