Re: init.d script names

From: Avery Payne <>
Date: Thu, 2 Oct 2014 18:36:08 -0700

On Thu, Oct 2, 2014 at 3:55 PM, Laurent Bercot <>
> Yeah, unfortunately that's not going to happen. You might get close for
> some
> services, maybe even a majority, but in the general case there's no direct
> mapping from a System V script to a supervision run script.

Won't stop me from trying. Even if only 10% are "direct maps", that's
approximately 100+ scripts I don't need to write.

I've already come up with an interesting plan to template out the entire
logging aspect. Default system-wide logging methods with your choice of
back-ends and per-service manual overrides, here we come... Also helps
that I'm being a bit lazy about this, I really don't want to write or copy
1,000+ log/run files. :) If you want details, let me know.
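To sketch what I have in mind (everything here is invented for illustration:
the directory layout, the function name, and the choice of svlogd as the
system-wide default back-end), the template generator could be as dumb as:

```shell
#!/bin/sh
# Hypothetical sketch: stamp out a default log/run for every service
# directory that doesn't already carry a per-service manual override.
gen_log_runs() {
  svdir=$1    # e.g. /etc/sv
  for svc in "$svdir"/*/; do
    [ -d "$svc" ] || continue
    [ -e "${svc}log/run" ] && continue   # manual override wins
    mkdir -p "${svc}log"
    name=$(basename "$svc")
    {
      printf '#!/bin/sh\n'
      printf 'exec svlogd -tt /var/log/%s\n' "$name"
    } > "${svc}log/run"
    chmod +x "${svc}log/run"
  done
}
```

Swapping the back-end then means regenerating from one template instead of
hand-editing 1,000+ log/run files.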

> The supervision frameworks we have are *process* supervision frameworks.
> They can monitor long-running processes, and that's it. That's already
> great,
> but nowadays machine state is more complex than that. A service can be
> represented by something else than a process - for instance, is a network
> card up or down ? - and real service management is a superset of process
> supervision.

I've noticed this in a roundabout way but haven't taken the time to ponder
it. After reading your description, the problem has been clarified and I
suspect my approach is probably not going to fit well. But I might have a
temporary solution.

Setting aside your spot-on comment about process supervision, let's take
the network interface example. From my humble point of view, using a
script to bring the interface up and then leave it "running" is a passive
approach. There is a reasonable assurance that the interface will remain
up, granted, but there is no guarantee that it will *stay* up. If some
other program brings it down, or an administrator accidentally brings it
down, or even if the cable is unplugged, then the "passive" nature of this
approach will not catch it, and the state of the interface is out of sync
with the expectations of the framework.

My expectation is that the interface should be actively managed. Rather
than doing a one-time setup and assuming that the interface is up, it would
make more sense to me to have a process, e.g. dhcpcd5, manage the
interface. The program would actively notice that the interface is down,
having issues, etc. and attempt to respond. Instead of a promise of an
interface that is running, we have the assurance that a program is actively
monitoring and managing it. And whatever the process supervision
framework, you end up with framework overseeing -> network management
overseeing -> network interface.
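As a concrete sketch of that chain (the service directory name and the
interface are made up; -B is dhcpcd's real don't-daemonize flag), the run
script is just:

```shell
#!/bin/sh
# /etc/sv/dhcpcd-eth0/run (hypothetical) - keep dhcpcd in the foreground
# so the supervisor watches dhcpcd, and dhcpcd watches the interface.
exec 2>&1
exec dhcpcd -B eth0
```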

So, yeah, my approach is to *expect* active management of these passive
services when possible, by using processes. And yes, I'm cheating a little
bit by side-stepping the entire "process management as a subset of service
management" problem. So, for now, the issue is resolved in the planning
stage...but that doesn't mean it's over, and your argument may yet win out
over my best efforts to avoid it. A good example: how does one actively
manage iptable rules? There is no program I am aware of to do this, and
even if there were, it could conflict with any hand-edits that the
administrator would make prior to saving the rules to a file. Definitely a
problem I'll have to come back to.

> And to add more complexity on top, in a SysV scheme, you can mix
> one-time initialization and service startup. Which is not a bad thing
> per se, it's sometimes needed, but this does not fit at all in a
> runit-like scheme where you cleanly separate the one-time initialization
> from the launching and supervising of the daemons.

I think one-time setups can be accommodated by state tracking in the
service directory itself. Something like ...

  do_setup() {
    # setup stuff here
    if something_went_wrong; then
      return 1
    fi
    touch ./setup
  }

  [ -f ./setup ] || do_setup || exit 1

might be enough. As the system comes down, it can clear out all of the
setups with

for rmfile in /etc/sv/*/setup ; do
  rm -f "$rmfile"
done

Something along those lines, anyway; you get the idea.

> So yeah, there's no other way, for now, than doing manual conversion -
> you need to really study the SysV scripts to know exactly what they do,
> and split them into 1. one-time initialization, and 2. zero or more
> processes to be supervised. A lot of the time, it's going to be
> straightforward, but you just can't rely on it. You gave the example of
> Samba, but Samba isn't even that hard - it's just 2 processes. I'm sure
> some scripts out there are much more evil than that.

Already doing this now. I have a small script that extracts ALL of the
init.d scripts for examination. I then look at the "requirements" that are
baked into the scripts and go from there, only adding by hand the bits that
are needed. I also have a script that builds the bazillion service
directories needed, each with the same name as its init script. It's not
perfect, but since a project goal is to support the "SysV replacement shim"
feature, it's a necessary evil.
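A stripped-down sketch of that directory-builder (all names invented; the
real script does more) might look like:

```shell
#!/bin/sh
# Hypothetical sketch: one service directory per init.d script, same
# name, with a stub run file to be converted by hand later.
make_stubs() {
  initdir=$1 svdir=$2
  for script in "$initdir"/*; do
    [ -f "$script" ] || continue
    name=$(basename "$script")
    mkdir -p "$svdir/$name"
    [ -e "$svdir/$name/run" ] && continue
    {
      printf '#!/bin/sh\n'
      printf '# TODO: convert by hand from %s\n' "$script"
      printf 'exec false\n'   # refuse to "run" until converted
    } > "$svdir/$name/run"
    chmod +x "$svdir/$name/run"
  done
}
```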

The Samba problem is indeed simple, compared to some of the crooked mess
that is used to fire up an X11 display. Let's just say, if I can get that
problem licked, then I'll have a good chance at getting all the others to
come along for the ride.

Bad enough that lightdm needs dbus... (shakes head)

> In 2015, I plan to extend s6 to perform service management in a clean
> way. I believe it's the only way we can provide a plausible alternative
> to the _other_ "frameworks" out there, if you catch my drift.

I really need to sit down, cook up a VM, and start looking at the
arrangement you have in s6. I was hoping that some of the work I am doing
in runit might be portable to s6 but I won't know until I look at how it
works. When I have it cooked and I've had time to play with it, I'll get
back with whatever questions and crib notes I have.

With regard to "other frameworks", well... what I'm paid to use at work is
what I use, and that's just peachy by me, no problems at all. No sir.

But it doesn't mean I need to use it at home.
Received on Fri Oct 03 2014 - 01:36:08 UTC
