From rodney_bates at lcwb.coop Thu Sep 1 03:40:04 2016 From: rodney_bates at lcwb.coop (Rodney M. Bates) Date: Wed, 31 Aug 2016 20:40:04 -0500 Subject: [M3devel] idioms for tracking initialization state and raising errors? In-Reply-To: <57C71ACD.1000202@lcwb.coop> References: <57C71ACD.1000202@lcwb.coop> Message-ID: <57C786F4.1010303@lcwb.coop> On 08/31/2016 12:58 PM, Rodney M. Bates wrote: > > > On 07/31/2016 04:07 AM, Jay K wrote: >> Is this a correct idiom? >> If so, I think it is fairly reasonable. >> >> >> That is: >> be very willing to "early return" >> put all cleanup in finally >> raise errors in finally >> >> >> In particular, I don't want to repeat the cleanup. >> I don't want to raise before leaving critical sections. >> >> >> I don't intend this to raise within a raise, but only >> from the "normal" exit of the finally block. >> >> >> I also don't want extra local booleans, i.e. raise: boolean to indicate failure. >> Just the one to indicate success. >> >> >> PROCEDURE InitMutex (mutex: Mutex) = >> VAR >> lock: PCRITICAL_SECTION := NIL; >> locked: PCRITICAL_SECTION := NIL; >> >> BEGIN >> IF mutex.initialized THEN RETURN END; >> >> TRY >> >> lock := NewCriticalSection(); >> IF lock = NIL THEN RETURN; END; >> > > ^I suggest moving the two statements above inside the critical > section on initLock, after the inner check on mutex.initialized. > That way, if we lose a race, it avoids executing creation then > deletion of an unneeded CriticalSection, and further eliminates > the cleanup action in the code. > Tony, is there some reason why, in ThreadPThread.m3, pthread_mutex_new can't be done while holding initMu? In initMutex, it is doing it the way Jay proposed, i.e., allocate before getting initMu, which (the allocation) could turn out unnecessary if we lose a race, then slightly later, freeing it, never used, if a lost race happened. > BTW, I am assuming initLock is a single global CriticalSection? 
> > >> EnterCriticalSection(ADR(initLock)); >> locked := ADR(initLock); >> >> IF mutex.initialized THEN RETURN END; >> >> (* We won the race. *) >> RTHeapRep.RegisterFinalCleanup (mutex, CleanMutex); >> mutex.lock := lock; >> lock := NIL; >> mutex.initialized := TRUE; >> >> FINALLY >> IF locked # NIL THEN LeaveCriticalSection(locked); END; >> DelCriticalSection(lock); >> IF NOT mutex.initialized THEN (* Raise after leaving critical section. *) >> RuntimeError.Raise (RuntimeError.T.OutOfMemory); >> END; >> END; >> >> END InitMutex; >> >> >> Thank you, >> - Jay >> >> >> _______________________________________________ >> M3devel mailing list >> M3devel at elegosoft.com >> https://m3lists.elegosoft.com/mailman/listinfo/m3devel >> > -- Rodney Bates rodney.m.bates at acm.org From nafkhami at elegosoft.com Thu Sep 1 10:10:40 2016 From: nafkhami at elegosoft.com (navid afkhami) Date: Thu, 1 Sep 2016 10:10:40 +0200 Subject: [M3devel] Mailing list Problem Message-ID: <4f97be9f-aa9d-4b19-8ddf-fb822c8299ae@elegosoft.com> Dear Subscriber, Unfortunately, we were facing some technical issues on our mail server, and some e-mails were not delivered. The issues are fixed now, and the m3 mailing lists are back to normal operation. We apologize for any inconvenience. Elegosoft Team -- Navid Afkhami IT Services & Support elego Software Solutions GmbH Gustav-Meyer-Allee 25 Building 12.3 (BIG) room 227 13355 Berlin, Germany phone +49 30 23 45 86 96 navid.afkhami at elegosoft.com fax +49 30 23 45 86 95 http://www.elegosoft.com Geschaeftsfuehrer: Olaf Wagner, Sitz Berlin Amtsgericht Berlin-Charlottenburg, HRB 77719, USt-IdNr: DE163214194 From rodney_bates at lcwb.coop Thu Sep 1 17:03:43 2016 From: rodney_bates at lcwb.coop (Rodney M. Bates) Date: Thu, 01 Sep 2016 10:03:43 -0500 Subject: [M3devel] idioms for tracking initialization state and raising errors?
In-Reply-To: <55429D34-10AB-440A-A1CF-CAB68B517CFF@purdue.edu> References: <57C71ACD.1000202@lcwb.coop> <57C786F4.1010303@lcwb.coop> <55429D34-10AB-440A-A1CF-CAB68B517CFF@purdue.edu> Message-ID: <57C8434F.5040307@lcwb.coop> OK, on that basis, I withdraw the suggestion about allocating the PCRITICAL_SECTION while locked. In any case, the wasted allocate/delete would probably be rare. On 08/31/2016 11:18 PM, Hosking, Antony L wrote: > I may have been suspicious about internal locking within pthread_mutex_new, but don't recall precisely. It might even have been a deadlock bug that I encountered. In general I don't like library calls inside the mutex blocks -- just the state changes I am really trying to protect. > >> On 1 Sep 2016, at 11:40 AM, Rodney M. Bates wrote: >> >> >> >> On 08/31/2016 12:58 PM, Rodney M. Bates wrote: >>> >>> >>> On 07/31/2016 04:07 AM, Jay K wrote: >>>> Is this a correct idiom? >>>> If so, I think it is fairly reasonable. >>>> >>>> >>>> That is: >>>> be very willing to "early return" >>>> put all cleanup in finally >>>> raise errors in finally >>>> >>>> >>>> In particular, I don't want to repeat the cleanup. >>>> I don't want to raise before leaving critical sections. >>>> >>>> >>>> I don't intend this to raise within a raise, but only >>>> from the "normal" exit of the finally block. >>>> >>>> >>>> I also don't want extra local booleans, i.e. raise: boolean to indicate failure. >>>> Just the one to indicate success. >>>> >>>> >>>> PROCEDURE InitMutex (mutex: Mutex) = >>>> VAR >>>> lock: PCRITICAL_SECTION := NIL; >>>> locked: PCRITICAL_SECTION := NIL; >>>> >>>> BEGIN >>>> IF mutex.initialized THEN RETURN END; >>>> >>>> TRY >>>> >>>> lock := NewCriticalSection(); >>>> IF lock = NIL THEN RETURN; END; >>>> >>> >>> ^I suggest moving the two statements above inside the critical >>> section on initLock, after the inner check on mutex.initialized.
>>> That way, if we lose a race, it avoids executing creation then >>> deletion of an unneeded CriticalSection, and further eliminates >>> the cleanup action in the code. >>> >> >> Tony, is there some reason why, in ThreadPThread.m3, pthread_mutex_new can't be >> done while holding initMu? In initMutex, it is doing it the way Jay proposed, >> i.e., allocate before getting initMu, which (the allocation) could turn out unnecessary >> if we lose a race, then slightly later, freeing it, never used, if a lost race happened. >> >> >>> BTW, I am assuming initLock is a single global CriticalSection? >>> >>> >>>> EnterCriticalSection(ADR(initLock)); >>>> locked := ADR(initLock); >>>> >>>> IF mutex.initialized THEN RETURN END; >>>> >>>> (* We won the race. *) >>>> RTHeapRep.RegisterFinalCleanup (mutex, CleanMutex); >>>> mutex.lock := lock; >>>> lock := NIL; >>>> mutex.initialized := TRUE; >>>> >>>> FINALLY >>>> IF locked # NIL THEN LeaveCriticalSection(locked); END; >>>> DelCriticalSection(lock); >>>> IF NOT mutex.initialized THEN (* Raise after leaving critical section. *) >>>> RuntimeError.Raise (RuntimeError.T.OutOfMemory); >>>> END; >>>> END; >>>> >>>> END InitMutex; >>>> >>>> >>>> Thank you, >>>> - Jay >>>> >>>> >>>> _______________________________________________ >>>> M3devel mailing list >>>> M3devel at elegosoft.com >>>> https://m3lists.elegosoft.com/mailman/listinfo/m3devel >>>> >>> >> >> -- >> Rodney Bates >> rodney.m.bates at acm.org >> _______________________________________________ >> M3devel mailing list >> M3devel at elegosoft.com >> https://m3lists.elegosoft.com/mailman/listinfo/m3devel > -- Rodney Bates rodney.m.bates at acm.org From jay.krell at cornell.edu Fri Sep 2 02:07:35 2016 From: jay.krell at cornell.edu (Jay K) Date: Fri, 2 Sep 2016 00:07:35 +0000 Subject: [M3devel] idioms for tracking initialization state and raising errors? 
In-Reply-To: <57C8434F.5040307@lcwb.coop> References: , <57C71ACD.1000202@lcwb.coop> <57C786F4.1010303@lcwb.coop>, <55429D34-10AB-440A-A1CF-CAB68B517CFF@purdue.edu>, <57C8434F.5040307@lcwb.coop> Message-ID: It is on the basis of efficiency that I do things this way. It is true that in a race condition we do extra work. But the idea is to do less work under any lock, to reduce contention and increase scaling. Keep in mind that the contention is on unrelated work, while the race condition is on related work. So the contention would occur more than the race. i.e. we have n mutexes being entered on m threads, for the first time. Let's say m > n. We allow m calls to pthread_mutex_new to proceed perhaps in parallel. To the extent that it can. Maybe not at all. Maybe significantly. And then we throw out m - n of them. The alternative would be n serialized calls to pthread_mutex_new. This is a common pattern. I didn't make it up. Someone changed my code from your way to this way in the past for this reason. The pattern there was a little different, in that in the lost race case, the caller had to retry, because it didn't get a correct/coherent result. But this occasional retry is still preferable for scaling. In truth, the call to malloc will be somewhat serialized, depending on underlying platform. And, in the Windows analogs, InitializeCriticalSection is internally a bit serialized, and InitializeSRWLock is not at all. malloc often has some thread-local small free list to reduce contention. - Jay > Date: Thu, 1 Sep 2016 10:03:43 -0500 > From: rodney_bates at lcwb.coop > To: hosking at purdue.edu; rodney.m.bates at acm.org > CC: jay.krell at cornell.edu; m3devel at elegosoft.com > Subject: Re: [M3devel] idioms for tracking initialization state and raising errors? > > OK, on that basis, I withdraw the suggestion about allocating the PCRITICAL_SECTION > while locked. In any case, the wasted allocate/delete would probably be rare.
> > On 08/31/2016 11:18 PM, Hosking, Antony L wrote: > > I may have been suspicious about internal locking within pthread_mutex_new, but don't recall precisely. It might even have been a deadlock bug that I encountered. In general I don't like library calls inside the mutex blocks -- just the state changes I am really trying to protect. > > > >> On 1 Sep 2016, at 11:40 AM, Rodney M. Bates wrote: > >> > >> > >> > >> On 08/31/2016 12:58 PM, Rodney M. Bates wrote: > >>> > >>> > >>> On 07/31/2016 04:07 AM, Jay K wrote: > >>>> Is this a correct idiom? > >>>> If so, I think it is fairly reasonable. > >>>> > >>>> > >>>> That is: > >>>> be very willing to "early return" > >>>> put all cleanup in finally > >>>> raise errors in finally > >>>> > >>>> > >>>> In particular, I don't want to repeat the cleanup. > >>>> I don't want to raise before leaving critical sections. > >>>> > >>>> > >>>> I don't intend this to raise within a raise, but only > >>>> from the "normal" exit of the finally block. > >>>> > >>>> > >>>> I also don't want extra local booleans, i.e. raise: boolean to indicate failure. > >>>> Just the one to indicate success. > >>>> > >>>> > >>>> PROCEDURE InitMutex (mutex: Mutex) = > >>>> VAR > >>>> lock: PCRITICAL_SECTION := NIL; > >>>> locked: PCRITICAL_SECTION := NIL; > >>>> > >>>> BEGIN > >>>> IF mutex.initialized THEN RETURN END; > >>>> > >>>> TRY > >>>> > >>>> lock := NewCriticalSection(); > >>>> IF lock = NIL THEN RETURN; END; > >>>> > >>> > >>> ^I suggest moving the two statements above inside the critical > >>> section on initLock, after the inner check on mutex.initialized. > >>> That way, if we lose a race, it avoids executing creation then > >>> deletion of an unneeded CriticalSection, and further eliminates > >>> the cleanup action in the code. > >>> > >> > >> Tony, is there some reason why, in ThreadPThread.m3, pthread_mutex_new can't be > >> done while holding initMu?
In initMutex, it is doing it the way Jay proposed, > >> i.e., allocate before getting initMu, which (the allocation) could turn out unnecessary > >> if we lose a race, then slightly later, freeing it, never used, if a lost race happened. > >> > >> > >>> BTW, I am assuming initLock is a single global CriticalSection? > >>> > >>> > >>>> EnterCriticalSection(ADR(initLock)); > >>>> locked := ADR(initLock); > >>>> > >>>> IF mutex.initialized THEN RETURN END; > >>>> > >>>> (* We won the race. *) > >>>> RTHeapRep.RegisterFinalCleanup (mutex, CleanMutex); > >>>> mutex.lock := lock; > >>>> lock := NIL; > >>>> mutex.initialized := TRUE; > >>>> > >>>> FINALLY > >>>> IF locked # NIL THEN LeaveCriticalSection(locked); END; > >>>> DelCriticalSection(lock); > >>>> IF NOT mutex.initialized THEN (* Raise after leaving critical section. *) > >>>> RuntimeError.Raise (RuntimeError.T.OutOfMemory); > >>>> END; > >>>> END; > >>>> > >>>> END InitMutex; > >>>> > >>>> > >>>> Thank you, > >>>> - Jay > >>>> > >>>> > >>>> _______________________________________________ > >>>> M3devel mailing list > >>>> M3devel at elegosoft.com > >>>> https://m3lists.elegosoft.com/mailman/listinfo/m3devel > >>>> > >>> > >> > >> -- > >> Rodney Bates > >> rodney.m.bates at acm.org > >> _______________________________________________ > >> M3devel mailing list > >> M3devel at elegosoft.com > >> https://m3lists.elegosoft.com/mailman/listinfo/m3devel > > > > -- > Rodney Bates > rodney.m.bates at acm.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From rodney_bates at lcwb.coop Fri Sep 2 22:48:50 2016 From: rodney_bates at lcwb.coop (Rodney M. Bates) Date: Fri, 02 Sep 2016 15:48:50 -0500 Subject: [M3devel] idioms for tracking initialization state and raising errors? 
In-Reply-To: References: , <57C71ACD.1000202@lcwb.coop> <57C786F4.1010303@lcwb.coop>, <55429D34-10AB-440A-A1CF-CAB68B517CFF@purdue.edu>, <57C8434F.5040307@lcwb.coop> Message-ID: <57C9E5B2.70108@lcwb.coop> Yea, I see your point. Putting the allocation inside serializes allocations for different mutexes, since there is only one global initMutex. On 09/01/2016 07:07 PM, Jay K wrote: > > It is on the basis of efficiency that I do things this way. > > > It is true that in a race condition we do extra work. > > > But the idea is to do less work under any lock, to reduce contention > and increase scaling. > > > Keep in mind that the contention is on unrelated work, > while the race condition is on related work. > > > So the contention would occur more than the race. > > > i.e. we have n mutexes being entered on m threads, for the first time. > Let's say m > n. > We allow m calls to pthread_mutex_new to proceed perhaps in parallel. > To the extent that it can. Maybe not at all. Maybe significantly. > And then we throw out m - n of them. > > > The alternative would be n serialized calls to pthread_mutex_new. > > > This is a common pattern. I didn't make it up. > Someone changed my code from your way to this way in the past > for this reason. The pattern there was a little different, in that > in the lost race case, the caller had to retry, because it didn't get > a correct/coherent result. But this occasional retry is still preferable > for scaling. > > > In truth, the call to malloc will be somewhat serialized, depending on underlying platform. > And, in the Windows analogs, InitializeCriticalSection is internally a bit serialized, > and InitializeSRWLock is not at all. > malloc often has some thread-local small free list to reduce contention.
> > - Jay > > > > > Date: Thu, 1 Sep 2016 10:03:43 -0500 > > From: rodney_bates at lcwb.coop > > To: hosking at purdue.edu; rodney.m.bates at acm.org > > CC: jay.krell at cornell.edu; m3devel at elegosoft.com > > Subject: Re: [M3devel] idioms for tracking initialization state and raising errors? > > > > OK, on that basis, I withdraw the suggestion about allocating the PCRITICAL_SECTION > > while locked. In any case, the wasted allocate/delete would probably be rare. > > > > On 08/31/2016 11:18 PM, Hosking, Antony L wrote: > > > I may have been suspicious about internal locking within pthread_mutex_new, but don't recall precisely. It might even have been a deadlock bug that I encountered. In general I don't like library calls inside the mutex blocks -- just the state changes I am really trying to protect. > > > > > >> On 1 Sep 2016, at 11:40 AM, Rodney M. Bates wrote: > > >> > > >> > > >> > > >> On 08/31/2016 12:58 PM, Rodney M. Bates wrote: > > >>> > > >>> > > >>> On 07/31/2016 04:07 AM, Jay K wrote: > > >>>> Is this a correct idiom? > > >>>> If so, I think it is fairly reasonable. > > >>>> > > >>>> > > >>>> That is: > > >>>> be very willing to "early return" > > >>>> put all cleanup in finally > > >>>> raise errors in finally > > >>>> > > >>>> > > >>>> In particular, I don't want to repeat the cleanup. > > >>>> I don't want to raise before leaving critical sections. > > >>>> > > >>>> > > >>>> I don't intend this to raise within a raise, but only > > >>>> from the "normal" exit of the finally block. > > >>>> > > >>>> > > >>>> I also don't want extra local booleans, i.e. raise: boolean to indicate failure. > > >>>> Just the one to indicate success.
> > >>>> > > >>>> > > >>>> PROCEDURE InitMutex (mutex: Mutex) = > > >>>> VAR > > >>>> lock: PCRITICAL_SECTION := NIL; > > >>>> locked: PCRITICAL_SECTION := NIL; > > >>>> > > >>>> BEGIN > > >>>> IF mutex.initialized THEN RETURN END; > > >>>> > > >>>> TRY > > >>>> > > >>>> lock := NewCriticalSection(); > > >>>> IF lock = NIL THEN RETURN; END; > > >>>> > > >>> > > >>> ^I suggest moving the two statements above inside the critical > > >>> section on initLock, after the inner check on mutex.initialized. > > >>> That way, if we lose a race, it avoids executing creation then > > >>> deletion of an unneeded CriticalSection, and further eliminates > > >>> the cleanup action in the code. > > >>> > > >> > > >> Tony, is there some reason why, in ThreadPThread.m3, pthread_mutex_new can't be > > >> done while holding initMu? In initMutex, it is doing it the way Jay proposed, > > >> i.e., allocate before getting initMu, which (the allocation) could turn out unnecessary > > >> if we lose a race, then slightly later, freeing it, never used, if a lost race happened. > > >> > > >> > > >>> BTW, I am assuming initLock is a single global CriticalSection? > > >>> > > >>> > > >>>> EnterCriticalSection(ADR(initLock)); > > >>>> locked := ADR(initLock); > > >>>> > > >>>> IF mutex.initialized THEN RETURN END; > > >>>> > > >>>> (* We won the race. *) > > >>>> RTHeapRep.RegisterFinalCleanup (mutex, CleanMutex); > > >>>> mutex.lock := lock; > > >>>> lock := NIL; > > >>>> mutex.initialized := TRUE; > > >>>> > > >>>> FINALLY > > >>>> IF locked # NIL THEN LeaveCriticalSection(locked); END; > > >>>> DelCriticalSection(lock); > > >>>> IF NOT mutex.initialized THEN (* Raise after leaving critical section. 
*) > > >>>> RuntimeError.Raise (RuntimeError.T.OutOfMemory); > > >>>> END; > > >>>> END; > > >>>> > > >>>> END InitMutex; > > >>>> > > >>>> > > >>>> Thank you, > > >>>> - Jay > > >>>> > > >>>> > > >>>> _______________________________________________ > > >>>> M3devel mailing list > > >>>> M3devel at elegosoft.com > > >>>> https://m3lists.elegosoft.com/mailman/listinfo/m3devel > > >>>> > > >>> > > >> > > >> -- > > >> Rodney Bates > > >> rodney.m.bates at acm.org > > >> _______________________________________________ > > >> M3devel mailing list > > >> M3devel at elegosoft.com > > >> https://m3lists.elegosoft.com/mailman/listinfo/m3devel > > > > > > > -- > > Rodney Bates > > rodney.m.bates at acm.org -- Rodney Bates rodney.m.bates at acm.org From wagner at elegosoft.com Thu Sep 29 12:24:41 2016 From: wagner at elegosoft.com (Olaf Wagner) Date: Thu, 29 Sep 2016 12:24:41 +0200 Subject: [M3devel] Supported Browsers of CM3-IDE In-Reply-To: <844a058f-581b-dc07-c20d-2974f6e2e5f8@gmx.de> References: <844a058f-581b-dc07-c20d-2974f6e2e5f8@gmx.de> Message-ID: <20160929122441.08fce38e58d3aee7b7c4a1e1@elegosoft.com> Forwarded to the developers list: On Thu, 29 Sep 2016 11:56:37 +0200 Wolfgang Keller <9103784 at gmx.de> wrote: > Hello > Is CM3 still supported? I work with Ubuntu 14.04 and try to get into > using Modula-3 and the CM3-IDE. I have run into several problems, of which I > pick one: > > Firefox browser unusable because the IDE server shuts down > > Can you offer any solution to this problem? Thank you very much!
> > Regards > - Wolfgang Keller > Software Developer > -- Olaf Wagner -- elego Software Solutions GmbH -- http://www.elegosoft.com Gustav-Meyer-Allee 25 / Gebäude 12, 13355 Berlin, Germany phone: +49 30 23 45 86 96 mobile: +49 177 2345 869 fax: +49 30 23 45 86 95 Geschäftsführer: Olaf Wagner | Sitz: Berlin Handelsregister: Amtsgericht Charlottenburg HRB 77719 | USt-IdNr: DE163214194 From rodney_bates at lcwb.coop Fri Sep 30 04:29:53 2016 From: rodney_bates at lcwb.coop (Rodney M. Bates) Date: Thu, 29 Sep 2016 21:29:53 -0500 Subject: [M3devel] Supported Browsers of CM3-IDE In-Reply-To: <20160929122441.08fce38e58d3aee7b7c4a1e1@elegosoft.com> References: <844a058f-581b-dc07-c20d-2974f6e2e5f8@gmx.de> <20160929122441.08fce38e58d3aee7b7c4a1e1@elegosoft.com> Message-ID: <57EDCE21.3090905@lcwb.coop> Commit c00008, which I just pushed, fixes this problem on my machine. But, holy cow, then it spent tens of minutes scanning what looks to be everything in my entire home directory, including lots of stuff that has nothing to do with Modula-3! Wolfgang, what is your next problem with it? On 09/29/2016 05:24 AM, Olaf Wagner wrote: > Forwarded to the developers list: > > On Thu, 29 Sep 2016 11:56:37 +0200 > Wolfgang Keller <9103784 at gmx.de> wrote: > >> Hello >> Is CM3 still supported? I work with Ubuntu 14.04 and try to get into >> using Modula-3 and the CM3-IDE. I have run into several problems, of which I >> pick one: >> >> Firefox browser unusable because the IDE server shuts down >> >> Can you offer any solution to this problem? Thank you very much! >> >> Regards >> - Wolfgang Keller >> Software Developer >> > > -- Rodney Bates rodney.m.bates at acm.org