Re: from multiple sources to multiple destinations using s6-log

From: Gorka Lertxundi <glertxundi_at_gmail.com>
Date: Wed, 18 Feb 2015 06:31:06 -0800 (PST)

Hi Laurent,

I apologize for the spam.

I'm trying to do what you explained in the last part of your answer, the part related to the fd holder.

From http://skarnet.org/software/s6/socket-activation.html:

1.- "Make sure you have an early supervision infrastructure running. Ideally, you would make s6-svscan your process 1."




Ok.




#> ps x | grep s6-svscan

  PID TTY      STAT   TIME COMMAND

    1 ?        Ss+    0:00 s6-svscan -t0 /etc/s6




2.- "Start an early fd-holding service. Let's say the fd-holding daemon is listening on socket /service/fdholder/s." 




Ok.

#> cat /etc/s6/fdholder/run
#!/bin/bash
exec s6-fdholder-daemon -i /etc/s6/fdholder/rules /etc/s6/fdholder/socket

#> ls-recursive /etc/s6/fdholder
/etc/s6/fdholder/rules/gid
/etc/s6/fdholder/rules/uid
/etc/s6/fdholder/rules/uid/default
/etc/s6/fdholder/rules/uid/default/allow
/etc/s6/fdholder/rules/uid/default/env/S6_FDHOLDER_LIST
/etc/s6/fdholder/rules/uid/default/env/S6_FDHOLDER_STORE_REGEX = ^.*$

#> ps x | grep fdholder
  PID TTY      STAT   TIME COMMAND
   53 ?        S+     0:00 s6-supervise fdholder
  762 ?        Ss     0:00 s6-fdholderd -i /etc/s6/fdholder/rules
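
(For clarity: the intent of that rules directory is to let any connecting uid store any identifier and list the store. If I've understood the access-rules format correctly, it is equivalent to roughly this:

#> mkdir -p /etc/s6/fdholder/rules/uid/default/env /etc/s6/fdholder/rules/gid
#> touch /etc/s6/fdholder/rules/uid/default/allow
#> touch /etc/s6/fdholder/rules/uid/default/env/S6_FDHOLDER_LIST
#> printf '^.*$' > /etc/s6/fdholder/rules/uid/default/env/S6_FDHOLDER_STORE_REGEX

where "allow" accepts connections for the default uid rule, S6_FDHOLDER_LIST is meant to permit listing, and the store regex is meant to permit storing any identifier.)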




3.- "For every Unix domain socket /my/socket you need to open, run s6-ipcserver-socketbinder /my/socket s6-fdholder-store /service/fdholder/s unix:/my/socket. You can do the same with INET domain sockets."




/*

#> s6-ipcserver-socketbinder /tmp/mysocket s6-fdholder-store /etc/s6/fdholder/socket MYSOCKET

s6-fdholder-storec: fatal: unable to store fd: Broken pipe










s6-fdholderd[20489]: segfault at 104fb ip 000000000040cd03 sp 00007fff8ffca460 error 4 in s6-fdholderd[400000+17000]

*/




#> s6-fdholder-list /etc/s6/fdholder/socket
#> echo $?
0

#> s6-ipcserver-socketbinder /tmp/mysocket s6-fdholder-store /etc/s6/fdholder/socket MYSOCKET
s6-fdholder-storec: fatal: unable to store fd: Operation not permitted
#> echo $?
1

#> rm /etc/s6/fdholder/rules/uid/default/env/S6_FDHOLDER_LIST
#> s6-fdholder-list /etc/s6/fdholder/socket
#> echo $?
0

#> rm /etc/s6/fdholder/rules/uid/default/env/S6_FDHOLDER_STORE_REGEX
#> s6-ipcserver-socketbinder /tmp/mysocket s6-fdholder-store /etc/s6/fdholder/socket MYSOCKET
s6-fdholder-storec: fatal: unable to store fd: Broken pipe
#> dmesg | tail -1
s6-fdholderd[23255]: segfault at f009 ip 000000000040cd03 sp 00007fff854aaa90 error 4 in s6-fdholderd[400000+17000]

Questions:

   - Am I missing something? The segfault suggests something is going really wrong here.

   - What would the steps be to create and use a pipe and forward log messages to it? (A rough sketch of what I have in mind is below.)

      1.- s6-mkfifodir or mkfifo?

      2.- How do I register it into the fdholder store?

          Obviously this won't work: s6-ipcserver-socketbinder /my/pipe s6-fdholder-store /etc/s6/fdholder/socket MYPIPE?

      3.- How do I pipe from the holder to s6-log?

          s6-fdholder-retrieve /etc/s6/fdholder/socket MYPIPE s6-log ... ?
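
To make the second question concrete, here is roughly the sequence I have in mind; the fifo path /run/mysql-error-pipe, the identifier MYPIPE, the service directory and the logdir are only placeholders, and I'm not sure these are the right tools, so please correct me:

#> mkfifo /run/mysql-error-pipe
#> redirfd -rnb 0 /run/mysql-error-pipe s6-fdholder-store /etc/s6/fdholder/socket MYPIPE

(I think redirfd -rnb opens the fifo for reading without blocking on a missing writer, then resets the fd to blocking, so its reading end can be handed to the fd holder. Is that the right idiom?)

Then a supervised logging service whose run script retrieves that fd as its stdin and execs into s6-log:

#> cat /etc/s6/mysql-error-log/run
#!/bin/bash
# retrieve the held reading end of the fifo as stdin, then log it
exec s6-fdholder-retrieve /etc/s6/fdholder/socket MYPIPE \
  s6-log t /var/log/mysql/error

And finally, in the mysqld run script, point the daemon's error output at the fifo, something like:

exec mysqld ... 2>/run/mysql-error-pipe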




Thanks in advance.

On Wed, Feb 18, 2015 at 11:44 AM, Gorka Lertxundi <glertxundi_at_gmail.com>
wrote:

> Hi Laurent,
> The second option is exactly what I was looking for. A supervised s6-log process perfectly fits in my use case.
> Log reliability via fdholder-daemon is new for me so I need to read more about it. A lot of useful utilities in skarnet* which have gone unnoticed for me!
> Thanks again,
> On Wed, Feb 18, 2015 at 11:16 AM, Laurent Bercot
> <ska-supervision_at_skarnet.org> wrote:
>> On 18/02/2015 09:12, Gorka Lertxundi wrote:
>>> Imagine I have a single process which generates multiple kinds of logs, for example, nginx, or even better mysql, which could generate three types of logs: general, error and slow-query. Piping all of them to the same destination in order to be eatable by s6-log will converge typologies into one log processing chain.
>> Hi Gorka,
>> I'm not sure what your objective is:
>> 1. Being able to pipe several log sources into the same pipe to a
>> unique s6-log process, which would then select lines on a token given
>> in the input, and log into a separate logdir for each original source;
>> or
>> 2. Being able to have several supervised s6-log processes all reading
>> from the same service, one per log flux.
>> I don't recommend going after 1. Generally, if you have several log
>> sources, you don't want to merge them into a single destination - that's
>> the syslogd design and is inefficient. The sources are different for a
>> reason, and it's always easier to concatenate several sets of logs for
>> analysis than it is to parse information in a single set to retrieve the
>> origin of a log line. But from this diagram:
>>> general -> /dev/mypipe1 (prepend "g:") --\ /-- if g s6-log /generalpath
>>> error -> /dev/mypipe2 (prepend "e:") ---|-- /dev/stdout -|--- if e s6-log /errorpath
>>> slow-query -> /dev/mypipe3 (prepend "sq:") --/ \-- if sq s6-log /slow-query
>> it looks like you're going after 2, and feel constrained by the need to
>> pipe all the logs into the service's stdout for them to be picked up
>> by a logger.
>> So my answer to 2 is: don't feel constrained. The "one logger per
>> service" model isn't an absolute one, it's just a special treatment
>> of the common case by s6-svscan to make it easier; you can have as
>> many log fluxes as you need, there just won't be any specific help
>> from s6-svscan.
>> - Create a supervised service (without its own logger!) that runs a
>> s6-log process reading from a fifo for every flux you have: i.e.
>> a "mysqld general-path log" service, a "mysqld error-path log" service,
>> and a "mysqld slow-query log" service.
>> - In your mysqld's run script, redirect the outputs of your daemon
>> to the appropriate fifo.
>> - There, you're done. But if you want the same guarantees of not
>> losing any logs when you restart a s6-log process as s6-svscan gives
>> you with a service's "native" logger, you should perform two more
>> steps:
>> * Make sure you have a s6-fdholder-daemon service running.
>> * At some point in your initialization scripts, open each of your
>> fifos for reading and store the descriptor into the fd-holding daemon.
>> Now your fifos will never close - unless you kill the fdholder without
>> transferring its contents first, so don't do that before shutdown time.
>> Honestly, now that s6 can perform fd-holding, the specific logger
>> handling performed by s6-svscan becomes largely irrelevant. I may even
>> deprecate it in a very distant future. (Very distant! Please don't
>> freak out.)
>> HTH,
>> --
>> Laurent