Re: tai confusion

From: Paul Jarc <>
Date: Wed, 07 Jan 2015 16:30:00 -0500

Laurent Bercot <> wrote:
> tai_from_sysclock(&a, (uint64)t + TAI_MAGIC)
> Then you have to copy micro/nanoseconds by hand, if applicable.

Ok. And then in the other direction, use sysclock_from_tai() and
subtract TAI_MAGIC?

> I want to hide the system clock from applications and provide them
> with TAI time instead, suitable for computations no matter what your
> system clock setting is.

Since you can't know every application's requirements, successfully
hiding the system clock means you would have to supply *all* the
interfaces that work with time values: *stat(), utimes(), localtime(),
mktime(), etc., not just time(). It seems like it would be less work
to just supply the conversion functions, and let each application
convert from/to whatever data source it uses.

> If you insist, I'll try to come up with something more intuitive.

I don't mind writing it myself. Would you like a patch or a git pull?

> And most of all, [tain_init() is] only ever useful with --enable-monotonic,
> which doesn't really make sense if you're using linear time in your
> applications, which is probably the case if you're skalibs-aware

I'm writing code that other people should be able to use, and I don't
want to put constraints on how they configure skalibs. I just want to
use the tai functions, including tain_now(), in a way that will work
for any self-consistent set of skalibs configuration options that
someone else may have set. Do I need tain_init() for that? If not,
then it sounds like it may not need to be part of the documented API.

Received on Wed Jan 07 2015 - 21:30:00 UTC
