Hi everyone,
we have a fair number of services which allow (and occasionally require)
user interaction via a (built-in) shell. All the shell interaction is
supposed to be logged, in addition to all the messages the process issues
spontaneously. Since the interactive input has to be captured as well, we
cannot simply attach a logger to the process's stdout/stderr.
procServ is a process supervisor designed for such situations. It allows
an external process (conserver in our case) to attach to the service's
shell via a TCP or UNIX domain socket, and it can log everything it sees
(input and output) to a file or to stdout.
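For a quick manual check one can attach to such a socket directly; here is
a sketch using socat (the socket path and service name are made-up
examples, not our real layout):
```
# Connect the terminal to the service's shell on its UNIX domain socket.
# Interrupt socat (Ctrl-C) to detach; the service keeps running.
socat - UNIX-CONNECT:/run/procserv/myioc
```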
In the past we had recurring problems with processes that spew out an
extreme amount of messages, quickly filling up our local disks. Since
logrotate runs via cron, it cannot reliably prevent this: between two cron
runs a runaway process can write an unbounded amount of data. Thus,
inspired by process supervision suites à la daemontools, we are now using
a small shell wrapper script that pipes the output of the process into the
multilog tool from the daemontools package.
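The crucial difference from cron-driven rotation is that multilog enforces
its limits synchronously on every write, so disk usage stays bounded no
matter how fast the process produces output. A toy demonstration (numbers
and path made up):
```
# Feed multilog an endless stream; ./spamlog never grows beyond roughly
# 5 files of 10000 bytes each, however fast the input arrives (Ctrl-C to
# stop). multilog creates the directory if it does not exist.
yes 'spam spam spam' | multilog s10000 n5 ./spamlog
```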
Here is the script, slightly simplified. Most of the parameters are
passed via the environment.
```
#!/bin/sh
IOC=$1
# Run the service in the foreground (-f), log all I/O to stdout (-L-) with
# timestamps, and offer the interactive shell on a UNIX domain socket (-P);
# pipe the log stream into multilog, which caps the size and rotates.
/usr/bin/procServ -f -L- --logstamp --timefmt="$TIMEFMT" \
  -q -n "$IOC" --ignore=^D^C^] -P "unix:$RUNDIR/$IOC" -c "$BOOTDIR" "./$STCMD" \
  | /usr/bin/multilog "s$LOGSIZE" "n$LOGNUM" "$LOGDIR/$IOC"
```
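For reference, the environment it expects looks roughly like this; the
values below are hypothetical examples, not our production settings. With
s1000000 and n10, each service is capped at about 10 MB of logs.
```
# Hypothetical example values for the wrapper script's environment.
export TIMEFMT='%Y-%m-%d %H:%M:%S '   # procServ --timefmt (strftime format)
export RUNDIR=/run/procserv           # directory holding the UNIX sockets
export BOOTDIR=/opt/iocs/myioc        # working directory of the service
export STCMD=st.cmd                   # startup script, executed as ./st.cmd
export LOGSIZE=1000000                # multilog 's': max bytes per log file
export LOGNUM=10                      # multilog 'n': number of files kept
export LOGDIR=/var/log/services       # parent of the per-service log dirs
```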
So far this seems to do the job, but I have two questions:
1. Is there anything "bad" about this approach? Most supervision tools
have this sort of thing as a built-in feature, and I suspect there may be
a reason for that other than mere convenience.
2. Do any of the existing process supervision tools support what procServ
gives us with respect to interactive shell access from outside?
Cheers
Ben
-- 
I would rather have questions that cannot be answered, than answers that
cannot be questioned.  -- Richard Feynman