Re: from multiple sources to multiple destinations using s6-log
 
Hi Laurent,
The second option is exactly what I was looking for. A supervised s6-log process fits my use case perfectly.
Log reliability via the fd-holding daemon is new to me, so I need to read more about it. There are a lot of useful utilities in the skarnet.org software that had gone unnoticed by me!
Thanks again,
On Wed, Feb 18, 2015 at 11:16 AM, Laurent Bercot
<ska-supervision_at_skarnet.org> wrote:
> On 18/02/2015 09:12, Gorka Lertxundi wrote:
>> Imagine I have a single process that generates multiple kinds of logs - for example nginx, or even better mysql, which can generate three types of logs: general, error and slow-query. Piping all of them to the same destination so that s6-log can consume them would merge those distinct types into a single log processing chain.
>   Hi Gorka,
>   I'm not sure what your objective is:
>   1. Being able to pipe several log sources into the same pipe to a
> unique s6-log process, which would then select lines on a token given
> in the input, and log into a separate logdir for each original source
> (sketched below);
>   or
>   2. Being able to have several supervised s6-log processes all reading
> from the same service, one per log flux.
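>   (For the record, here is a minimal sketch of what 1 would look like:
> one s6-log reading everything, with selection directives switching
> logdirs on your "g:"/"e:"/"sq:" tokens. Untested, and the logdir paths
> are just examples.
>
>     s6-log -b -- \
>       '-.*' '+^g:'  /var/log/mysql/general \
>       '-.*' '+^e:'  /var/log/mysql/error \
>       '-.*' '+^sq:' /var/log/mysql/slow-query
>
> Selection state persists across actions, so each '-.*' clears it
> before the next token is selected.)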
>   I don't recommend going after 1. Generally, if you have several log
> sources, you don't want to merge them into a single destination - that's
> the syslogd design and is inefficient. The sources are different for a
> reason, and it's always easier to concatenate several sets of logs for
> analysis than it is to parse information in a single set to retrieve the
> origin of a log line. But from this diagram:
>> general    -> /dev/mypipe1 (prepend "g:")  --\                 /-- if g   s6-log /generalpath
>> error      -> /dev/mypipe2 (prepend "e:")  ---|-- /dev/stdout -|--- if e   s6-log /errorpath
>> slow-query -> /dev/mypipe3 (prepend "sq:") --/                 \-- if sq  s6-log /slow-query
> it looks like you're going after 2, and feel constrained by the need to
> pipe all the logs into the service's stdout for them to be picked up
> by a logger.
>   So my answer to 2 is: don't feel constrained. The "one logger per
> service" model isn't an absolute one, it's just a special treatment
> of the common case by s6-svscan to make it easier; you can have as
> many log fluxes as you need; there just won't be any specific help
> from s6-svscan.
>   - Create a supervised service (without its own logger!) that runs an
> s6-log process reading from a fifo, for every flux you have: i.e.
> a "mysqld general-path log" service, a "mysqld error-path log" service,
> and a "mysqld slow-query log" service.
>   - In your mysqld's run script, redirect each of your daemon's outputs
> to the appropriate fifo. (Both steps are sketched after this list.)
>   - There, you're done. But if you want the same guarantee against
> losing logs when you restart an s6-log process that s6-svscan gives
> you with a service's "native" logger, you should perform two more
> steps:
>     * Make sure you have an s6-fdholder-daemon service running.
>     * At some point in your initialization scripts, open each of your
> fifos for reading and store the descriptor into the fd-holding daemon
> (also sketched below).
>   Now your fifos will never close - unless you kill the fdholder without
> transferring its contents first, so don't do that before shutdown time.
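>   For illustration, each logger service's run script can be as simple
> as this - sh syntax, and the fifo and logdir paths are just examples:
>
>     #!/bin/sh
>     # run script for the "mysqld general-path log" service.
>     # The open() on the fifo blocks until a writer shows up,
>     # which is fine for a supervised service.
>     exec s6-log -b n20 s1000000 t /var/log/mysql/general \
>       < /run/mysqld/general.fifo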
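>   And the mysqld run script just points every log output at its fifo.
> The exact option names depend on your mysqld version, so double-check
> them:
>
>     #!/bin/sh
>     # run script for the mysqld service itself.
>     exec mysqld \
>       --log-error=/run/mysqld/error.fifo \
>       --general-log --general-log-file=/run/mysqld/general.fifo \
>       --slow-query-log --slow-query-log-file=/run/mysqld/slow-query.fifo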
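>   The fd-holding step can then go into your init scripts, once the
> s6-fdholder-daemon service is up. The socket path and identifiers
> below are just examples; also note that plain sh cannot open a fifo
> read-only without blocking, so this opens it read-write with <>,
> which is enough to keep a reader alive (execline's "redirfd -rnb"
> gets you a genuine reading fd if you prefer):
>
>     # give the fd-holder one descriptor per fifo, so the fifos
>     # never lose their last reader across s6-log restarts
>     for flux in general error slow-query ; do
>       s6-fdholder-store /service/fdholder/s "mysqld-$flux-fifo" \
>         0<> "/run/mysqld/$flux.fifo"
>     done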
>   Honestly, now that s6 can perform fd-holding, the specific logger
> handling performed by s6-svscan becomes largely irrelevant. I may even
> deprecate it in a very distant future. (Very distant! Please don't
> freak out.)
>   HTH,
> -- 
>   Laurent