Re: [Announce] s6.rc: a distribution-friendly init/rc framework (long, off-topic)

From: Avery Payne <avery.p.payne_at_gmail.com>
Date: Fri, 23 Mar 2018 13:30:08 -0700

>
> I see that s6.rc comes with a lot of pre-written scripts, from acpid
> to wpa_supplicant. Like Avery's supervision-scripts package, this is
> something that I think goes above and beyond simple "policy": this is
> seriously the beginning of a distribution initiative. I have no wish
> at all to do this outside of a distribution, and am convinced that
> the software base must come first, and then service-specific
> scripts must be written in coordination with distributions that use
> it; that is what I plan to do for s6-frontend in coordination with
> Adélie or Alpine (which are the most likely to use it first). But there
> is a large gray area here: what is "reusable policy" (RP) and what is
> "distribution-specific" (DS)? For instance, look at the way the
> network is brought up - in s6.rc, in OpenRC, in sysvinit scripts,
> in systemd. Is bringing up the network RP or DS? If it involves
> choosing between several external software packages that provide
> equivalent functionality, is it okay to hardcode a dependency, or
> should we provide flexibility (with a complexity cost)?
>
> This is very much the kind of discussion that I think is important
> to have, that I would like to have in the relatively near future, and
> since more and more people are getting experience with semi-packaging
> external software, and flirting with the line between software
> development and distro integration - be it Olivier, Avery, Casper,
> Jonathan, or others - I think we're now in a good position to have it.
>
>
I'm still thinking this over, especially the distribution-specific
dependencies. The tl;dr version: we are really dealing with the
intersection of settings specific to the supervisor, the distribution's
policy (in the form of naming-by-path, environment settings, file
locations, etc.), and the options needed for the version of the daemon
used. If you can account for all three, you should be able to generate
consistent run scripts.

The launch of a simple longrun process is nearly (but not entirely)
universal. What I typically see in more than 90% of cases is the following
sequence (a sketch of such a run script follows the list):

1. designation of the scripting environment in the shebang, to enforce
consistency.
2. clearing and resetting the environment state.
3. if needed, capturing STDERR for in-line logging.
4. if needed, running any pre-start programs to create preconditions
(example: UUID generation prior to launching udev).
5. if needed, creation of a run directory at the distribution-approved
location.
6. if needed, ownership and permission changes to the run directory.
7. as needed, chain loading helper programs, with dependencies on path.
8. as needed, chain loading environment variables.
9. specification of the daemon to run, with dependencies on path.
10. settings as appropriate for the version of the daemon used, with
dependencies on path.
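
As a purely illustrative example, here is roughly what those steps can
look like in a plain /bin/sh run script. The daemon name, user, and paths
below are hypothetical placeholders, not recommendations:

  #!/bin/sh
  # (1) the shebang above pins the scripting environment

  # (3) capture stderr so it is logged in-line with stdout
  exec 2>&1

  # (4) pre-start work the daemon expects (illustrative only)
  [ -s /etc/mydaemon/uuid ] || uuidgen > /etc/mydaemon/uuid

  # (5) run directory at the distribution-approved location
  mkdir -p /run/mydaemon
  # (6) ownership and permission changes to the run directory
  chown mydaemon:mydaemon /run/mydaemon
  chmod 0750 /run/mydaemon

  # (2) reset the environment, (7)(8) chain-load helpers/environment as
  # needed, then (9)(10) the daemon itself with version-appropriate flags;
  # in plain sh the environment reset folds naturally into the final exec.
  exec env -i PATH=/usr/sbin:/usr/bin:/sbin:/bin \
    /usr/sbin/mydaemond --foreground --config /etc/mydaemon/mydaemon.conf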

The few processes that can't follow this pattern typically have either a
design flaw or a very elaborate launch process. Either of those requires a
"special" run file anyway, so they are already exceptions.

The following issues arise because distributions impose their own policy:

* The type of logging used, which can vary quite a bit
* The various names of the supervisory programs
* The path of the daemon itself
* The path of the daemon's settings file and/or directory
* Naming conventions for devices, especially network devices
* How to deal with "instances" of a service
* Handling of failures in the finish file
* Changes in valid option flags between different versions of a daemon

Notice that the first four could easily be turned into parameters. Device
names I don't have an answer for - yet. Instances are going to depend on
the supervisor and system-state mechanism used, and frankly, I think they
are beyond the scope of the project. I don't have an answer for the finish
file at this time because that behavior is dictated by distribution
policy; it too is out of scope. The last one, option flags, can be
overcome by making them a parameter as well.
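
To make that concrete, here is one entirely hypothetical shape such
parameters could take, written as a shell-sourceable settings file; the
variable names and values are only placeholders showing that these items
reduce to simple key/value pairs:

  # dist.conf -- hypothetical distribution/environment settings
  LOGGING_STYLE=inline                  # inline | logger process | syslog
  SUPERVISE_CMD=s6-supervise            # or runsv, supervise, ...
  DAEMON_PATH=/usr/sbin/mydaemond
  DAEMON_CONF=/etc/mydaemon/mydaemon.conf
  DAEMON_FLAGS="--foreground"           # flags valid for the packaged version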

The idea that we could turn the bulk of policy into parametric settings
that are isolated from the code is why I have not been as concerned about
separation of policy. I've been messing around with using two parametric
files plus simple macro expansion of the 10 longrun steps listed above to
build the run files as needed. You would use it like this (a rough sketch
of the expansion step follows the list):

1. You download the source code.
2. You specify a supervisor in a settings file, which in turn provides all
of the correct names for the various programs.
3. You specify a distribution "environment" in a settings file, which
provides path information, device naming, etc.
4. You run a build process to create all of the run files, which
incorporate the correct values based on the settings from the prior two
files.
5. You run a final build step that installs the files into a "svcdef"
directory containing all of the definitions ready to use; this corresponds
to the s6 approach of a definition directory that does not contain the
active run-time state. (Of course, it doesn't have to be named "svcdef"...)

At some point this year, once I find a consistent and non-obscure way to
build run files, I will turn my attention back to supervision-scripts and
scrap the code while keeping the settings.

Hopefully this gives some food for thought, and also some potential
answers to "the distribution policy problem".