[M3devel] idioms for tracking initialization state and raising errors?
Rodney M. Bates
rodney_bates at lcwb.coop
Fri Sep 2 22:48:50 CEST 2016
Yeah, I see your point. Putting the allocation inside serializes the
allocations for different mutexes, since there is only one global
initMutex.
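
For reference, here is a minimal sketch of the variant I had suggested,
with the allocation done while holding initLock. The procedure name
InitMutexSerialized is made up; the other names are taken from the
InitMutex code quoted further down. This is only a sketch of the idea,
not code that was ever committed.

  PROCEDURE InitMutexSerialized (mutex: Mutex) =
    (* Allocate while holding initLock: NewCriticalSection calls are
       serialized on the one global lock, but a lost race never creates
       and then deletes an unneeded CRITICAL_SECTION. *)
    VAR lock: PCRITICAL_SECTION := NIL;
    BEGIN
      IF mutex.initialized THEN RETURN END;
      EnterCriticalSection (ADR (initLock));
      (* Enter precedes TRY, so FINALLY can always Leave. *)
      TRY
        IF mutex.initialized THEN RETURN END; (* Lost the race; nothing was allocated. *)
        lock := NewCriticalSection ();
        IF lock = NIL THEN RETURN END;        (* Raise below, after leaving initLock. *)
        RTHeapRep.RegisterFinalCleanup (mutex, CleanMutex);
        mutex.lock := lock;
        mutex.initialized := TRUE;
      FINALLY
        LeaveCriticalSection (ADR (initLock));
        IF NOT mutex.initialized THEN (* Raise after leaving the critical section. *)
          RuntimeError.Raise (RuntimeError.T.OutOfMemory);
        END;
      END;
    END InitMutexSerialized;

The cleanup in FINALLY shrinks to leaving initLock, but, as conceded above,
every NewCriticalSection call for every mutex now goes through that one
global lock.
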
On 09/01/2016 07:07 PM, Jay K wrote:
>
> It is on the basis of efficiency that I do things this way.
>
>
> It is true that, when there is a race, we do extra work.
>
>
> But the idea is to do less work under any lock, to reduce contention
> and improve scaling.
>
>
> Keep in mind that the contention is on unrelated work,
> while the race condition is on related work.
>
>
> So the contention would occur more often than the race.
>
>
> i.e. we have n mutexes being entered by m threads, each for the first time.
> Let's say m > n.
> We allow m calls to pthread_mutex_new to proceed, perhaps in parallel,
> to the extent that they can: maybe not at all, maybe significantly.
> And then we throw out m - n of them.
>
>
> The alternative would be n serialized calls to pthread_mutex_new.
>
>
> This is a common pattern. I didn't make it up.
> Someone changed my code from your way to this way in the past
> for this reason. The pattern there was a little different, in that
> in the lost-race case the caller had to retry, because it didn't get
> a correct/coherent result. But even with the occasional retry, it was
> still preferable for scaling.
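
For clarity, here is a minimal sketch of the two orderings being compared.
All of the names (InitSketch, T, Alloc, Free, initMu, InitOutside,
InitInside) are made up for illustration; Alloc stands in for
pthread_mutex_new / NewCriticalSection and Free for the matching delete.

  MODULE InitSketch EXPORTS Main;  (* illustrative only *)

  TYPE T = REF RECORD field: REFANY := NIL END;

  VAR initMu := NEW (MUTEX);  (* the single global initialization lock *)

  PROCEDURE Alloc (): REFANY =  (* stand-in for pthread_mutex_new etc. *)
    BEGIN RETURN NEW (REF INTEGER) END Alloc;

  PROCEDURE Free (<*UNUSED*> r: REFANY) =  (* stand-in for the matching delete *)
    BEGIN (* traced storage; the collector reclaims it *) END Free;

  (* Allocate before taking the lock.  Up to m allocations can overlap; a
     thread that loses the race throws its extra object away, but the only
     work done while holding initMu is a test and an assignment. *)
  PROCEDURE InitOutside (obj: T) =
    VAR new := Alloc ();
    BEGIN
      LOCK initMu DO
        IF obj.field = NIL THEN obj.field := new; new := NIL END;
      END;
      IF new # NIL THEN Free (new) END; (* Lost the race: discard, outside the lock. *)
    END InitOutside;

  (* Allocate while holding the lock.  Nothing is ever wasted, but the n
     allocations for n distinct objects are serialized on initMu. *)
  PROCEDURE InitInside (obj: T) =
    BEGIN
      LOCK initMu DO
        IF obj.field = NIL THEN obj.field := Alloc () END;
      END;
    END InitInside;

  BEGIN
  END InitSketch.

Whether the overlapped Alloc calls actually run in parallel depends on the
underlying allocator, as noted below.
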
>
>
> In truth, the call to malloc will be somewhat serialized, depending on the underlying platform.
> As for the Windows analogs: InitializeCriticalSection is internally a bit serialized,
> and InitializeSRWLock is not at all.
> malloc often has a small thread-local free list to reduce contention.
>
> - Jay
>
>
>
> > Date: Thu, 1 Sep 2016 10:03:43 -0500
> > From: rodney_bates at lcwb.coop
> > To: hosking at purdue.edu; rodney.m.bates at acm.org
> > CC: jay.krell at cornell.edu; m3devel at elegosoft.com
> > Subject: Re: [M3devel] idioms for tracking initialization state and raising errors?
> >
> > OK, on that basis, I withdraw the suggestion about allocating the PCRITICAL_SECTION
> > while locked. In any case, the wasted allocate/delete would probably be rare.
> >
> > On 08/31/2016 11:18 PM, Hosking, Antony L wrote:
> > > I may have been suspicious about internal locking within pthread_mutex_new, but don’t recall precisely. It might even have been a deadlock bug that I encountered. In general I don’t like library calls inside the mutex blocks − just the state changes I am really trying to protect.
> > >
> > >> On 1 Sep 2016, at 11:40 AM, Rodney M. Bates <rodney_bates at lcwb.coop> wrote:
> > >>
> > >>
> > >>
> > >> On 08/31/2016 12:58 PM, Rodney M. Bates wrote:
> > >>>
> > >>>
> > >>> On 07/31/2016 04:07 AM, Jay K wrote:
> > >>>> Is this a correct idiom?
> > >>>> If so, I think it is fairly reasonable.
> > >>>>
> > >>>>
> > >>>> That is:
> > >>>> be very willing to "early return"
> > >>>> put all cleanup in finally
> > >>>> raise errors in finally
> > >>>>
> > >>>>
> > >>>> In particular, I don't want to repeat the cleanup.
> > >>>> I don't want to raise before leaving critical sections.
> > >>>>
> > >>>>
> > >>>> I don't intend this to raise within a raise, but only
> > >>>> from the "normal" exit of the finally block.
> > >>>>
> > >>>>
> > >>>> I also don't want extra local booleans, e.g. a raise: BOOLEAN to indicate failure.
> > >>>> Just the one flag, mutex.initialized, to indicate success.
> > >>>>
> > >>>>
> > >>>> PROCEDURE InitMutex (mutex: Mutex) =
> > >>>>   VAR
> > >>>>     lock: PCRITICAL_SECTION := NIL;
> > >>>>     locked: PCRITICAL_SECTION := NIL;
> > >>>>
> > >>>>   BEGIN
> > >>>>     IF mutex.initialized THEN RETURN END;
> > >>>>
> > >>>>     TRY
> > >>>>
> > >>>>       lock := NewCriticalSection();
> > >>>>       IF lock = NIL THEN RETURN; END;
> > >>>>
> > >>>
> > >>> ^I suggest moving the two statements above inside the critical
> > >>> section on initLock, after the inner check on mutex.initialized.
> > >>> That way, if we lose a race, we avoid creating and then deleting
> > >>> an unneeded CriticalSection, and we also eliminate the cleanup
> > >>> action in the code.
> > >>>
> > >>
> > >> Tony, is there some reason why, in ThreadPThread.m3, pthread_mutex_new can't be
> > >> done while holding initMu? In initMutex, it is done the way Jay proposed,
> > >> i.e., the allocation happens before getting initMu; if we lose a race, that
> > >> allocation turns out to be unnecessary and is freed slightly later, never used.
> > >>
> > >>
> > >>> BTW, I am assuming initLock is a single global CriticalSection?
> > >>>
> > >>>
> > >>>>       EnterCriticalSection(ADR(initLock));
> > >>>>       locked := ADR(initLock);
> > >>>>
> > >>>>       IF mutex.initialized THEN RETURN END;
> > >>>>
> > >>>>       (* We won the race. *)
> > >>>>       RTHeapRep.RegisterFinalCleanup (mutex, CleanMutex);
> > >>>>       mutex.lock := lock;
> > >>>>       lock := NIL;
> > >>>>       mutex.initialized := TRUE;
> > >>>>
> > >>>>     FINALLY
> > >>>>       IF locked # NIL THEN LeaveCriticalSection(locked); END;
> > >>>>       DelCriticalSection(lock);
> > >>>>       IF NOT mutex.initialized THEN (* Raise after leaving critical section. *)
> > >>>>         RuntimeError.Raise (RuntimeError.T.OutOfMemory);
> > >>>>       END;
> > >>>>     END;
> > >>>>
> > >>>>   END InitMutex;
> > >>>>
> > >>>>
> > >>>> Thank you,
> > >>>> - Jay
> > >>>>
> > >>>>
> > >>>>
> > >>>
> > >>
> > >
> >
--
Rodney Bates
rodney.m.bates at acm.org