[M3devel] Ho hum... AtFork...

Tony Hosking hosking at cs.purdue.edu
Wed Aug 13 18:28:50 CEST 2014


Which revision of ThreadPThread.m3 is this output from?

On Aug 13, 2014, at 2:30 AM, mika at async.caltech.edu wrote:

> 
> OK... deleted all the derived directories and rebuilt.  Result on FreeBSD is similar (or identical, hard to tell):
> 
> I get a partial deadlock:
> 
>  PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
> 48760 root        98    0 87836K 29104K CPU1    1  12:32  83.89% threadtest{threadtest}
> 48760 root        94    0 87836K 29104K CPU3    3  11:02  74.66% threadtest{threadtest}
> 48760 root        94    0 87836K 29104K CPU0    0  11:03  71.97% threadtest{threadtest}
> 48760 root        32    0 87836K 29104K umtxn   2   0:01   0.00% threadtest{threadtest}
> 48760 root        20    0 87836K 29104K umtxn   2   0:01   0.00% threadtest{threadtest}
> 48760 root        35    0 87836K 29104K uwait   2   0:01   0.00% threadtest{threadtest}
> 48760 root        27    0 87836K 29104K uwait   0   0:01   0.00% threadtest{threadtest}
> 48760 root        27    0 87836K 29104K umtxn   3   0:01   0.00% threadtest{threadtest}
> 48760 root        27    0 87836K 29104K umtxn   2   0:01   0.00% threadtest{threadtest}
> 48760 root        20    0 87836K 29104K umtxn   0   0:00   0.00% threadtest{threadtest}
> 48759 root        20    0 30276K 11064K wait    0   0:00   0.00% gdb
> 
> one of the dead threads is the main thread, so no output.  But three
> threads are still alive, interestingly enough.
> 
> Looks like the same issue as before:
> 
> Thread 5 (Thread 801807400 (LWP 100751/threadtest)):
> #0  0x0000000800ae089c in __error () from /lib/libthr.so.3
> #1  0x0000000800ad6261 in pthread_suspend_np () from /lib/libthr.so.3
> #2  0x00000000004511bc in ThreadPThread__SuspendThread (mt=Error accessing memory address 0x8000ff5faa28: Bad address.
> ) at ../src/thread/PTHREAD/ThreadFreeBSD.c:33
> #3  0x000000000044ed8e in ThreadPThread__StopThread (M3_DMxDjQ_act=Error accessing memory address 0x8000ff5faa48: Bad address.
> ) at ../src/thread/PTHREAD/ThreadPThread.m3:909
> #4  0x000000000044ef31 in ThreadPThread__StopWorld () at ../src/thread/PTHREAD/ThreadPThread.m3:948
> #5  0x000000000044e44e in RTThread__SuspendOthers () at ../src/thread/PTHREAD/ThreadPThread.m3:713
> #6  0x0000000000436386 in RTCollector__CollectSomeInStateZero () at ../src/runtime/common/RTCollector.m3:749
> #7  0x0000000000436331 in RTCollector__CollectSome () at ../src/runtime/common/RTCollector.m3:723
> #8  0x0000000000435ff9 in RTHeapRep__CollectEnough () at ../src/runtime/common/RTCollector.m3:657
> #9  0x0000000000433233 in RTAllocator__AllocTraced (M3_Cwb5VA_dataSize=Error accessing memory address 0x8000ff5fac88: Bad address.
> ) at ../src/runtime/common/RTAllocator.m3:367
> #10 0x0000000000432b55 in RTAllocator__GetOpenArray (M3_Eic7CK_def=Error accessing memory address 0x8000ff5fad48: Bad address.
> ) at ../src/runtime/common/RTAllocator.m3:296
> #11 0x0000000000431e8f in RTHooks__AllocateOpenArray (M3_AJWxb1_defn=Error accessing memory address 0x8000ff5fada8: Bad address.
> ) at ../src/runtime/common/RTAllocator.m3:143
> #12 0x00000000004052ea in Main__AApply (M3_AP7a1g_cl=Error accessing memory address 0x8000ff5fade8: Bad address.
> ) at ../src/Main.m3:283
> #13 0x000000000044d243 in ThreadPThread__RunThread (M3_DMxDjQ_me=Error accessing memory address 0x8000ff5faeb8: Bad address.
> ) at ../src/thread/PTHREAD/ThreadPThread.m3:449
> #14 0x000000000044cf00 in ThreadPThread__ThreadBase (M3_AJWxb1_param=Error accessing memory address 0x8000ff5faf68: Bad address.
> ) at ../src/thread/PTHREAD/ThreadPThread.m3:422
> #15 0x0000000800ad54a4 in pthread_create () from /lib/libthr.so.3
> #16 0x0000000000000000 in ?? ()
> 
> Interestingly the same test seems to work on AMD64_LINUX.  That is
> promising, at least.  But of course, software testing never reveals the
> absence of bugs.
> 
>     Mika
> 
> Peter McKinna writes:
>> 
>> Oh, one other thing that seemed to help me in testing this stuff, especially
>> things to do with m3core and libm3, which seem to get out of whack
>> sometimes: totally remove the target directory of m3core, then cm3;
>> cm3 -ship.  Same with libm3.  I'm not sure that clean actually removes the
>> targets, and I think there are leftover files that stuff things up, since I
>> would get a segv in FileRd for no obvious reason.
>> 
>> Peter
>> 
>> 
>> 
>> On Wed, Aug 13, 2014 at 3:33 PM, Peter McKinna <peter.mckinna at gmail.com>
>> wrote:
>> 
>>> That is weird. The thread tester works fine with the old version of
>>> ThreadPThread.m3 on Linux amd64, except that if you have the fork test in
>>> the mix of tests you see the pthread_mutex_destroy error every now and
>>> again, the fork bug in fact.
>>> I was testing the pthread changes Tony has made over the past few days
>>> using the thread tester program, until I stupidly rebuilt the compiler and
>>> was getting all sorts of hangs as it dragged in the new version of
>>> pthreads. So I rebuilt everything from a backup to get back to where I was
>>> with a decent compiler, then reintroduced the latest changes to
>>> ThreadPThread.m3, which yesterday was just hanging in sigsuspend but today
>>> is giving me an assertion failure at line 1387 in UnlockHeap. So it's a bit
>>> unstable, to say the least. Maybe there is something else wrong with FreeBSD.
>>> 
>>> Peter
>>> 
>>> 
>>> 
>>> On Wed, Aug 13, 2014 at 2:54 PM, <mika at async.caltech.edu> wrote:
>>> 
>>>> 
>>>> Yeah, OK, 1.262 makes a lot more sense to me, but I'm still not getting
>>>> things to work, and now I'm really baffled because I think I understand
>>>> this code!
>>>> 
>>>> What I did was I threw away all my changes and reverted ThreadPThread.m3
>>>> to 1.262 as you suggested.
>>>> 
>>>> Rebuilt the compiler with upgrade.sh
>>>> 
>>>> Then rebuilt the compiler again with itself.
>>>> 
>>>> Then I realcleaned the world and buildshipped it.  (I was afraid
>>>> that parseparams, also imported by the thread tester, would pollute
>>>> it somehow.)
>>>> 
>>>> When I look at the code in 1.262 it looks quite straightforward.  heapMu
>>>> is just a mutex, (Modula-3) mutexes are just (pthreads) mutexes.
>>>> Condition variables are mutexes too, but that's no big deal.
>>>> 
>>>> So, rebuild thread tester, run it:
>>>> 
>>>> new source -> compiling Main.m3
>>>> -> linking threadtest
>>>> root at rover:~mika/cm3-cvs-anon/cm3/m3-libs/m3core/tests/thread #
>>>> AMD64_FREEBSD/threadtest
>>>> Writing file...done
>>>> Creating read threads...done
>>>> Creating fork threads...done
>>>> Creating alloc threads...done
>>>> Creating lock threads...done
>>>> running...printing oldest/median age/newest
>>>> .
>>>> 
>>>> ***
>>>> *** runtime error:
>>>> ***    Segmentation violation - possible attempt to dereference
>>>> NIL.........laziest thread is 1407901189/1407901189/9 (tests: read
>>>> 1407901189/1407901189/1407901189 fork 1407901189/1407901189/1407901189
>>>> alloc 9/9/9 lock 1407901189/1407901189/9)
>>>> 
>>>> 
>>>> 
>>>> 
>>>> ^C
>>>> root at rover:~mika/cm3-cvs-anon/cm3/m3-libs/m3core/tests/thread # gdb !$
>>>> gdb AMD64_FREEBSD/threadtest
>>>> GNU gdb 6.1.1 [FreeBSD]
>>>> Copyright 2004 Free Software Foundation, Inc.
>>>> GDB is free software, covered by the GNU General Public License, and you
>>>> are
>>>> welcome to change it and/or distribute copies of it under certain
>>>> conditions.
>>>> Type "show copying" to see the conditions.
>>>> There is absolutely no warranty for GDB.  Type "show warranty" for
>>>> details.
>>>> This GDB was configured as "amd64-marcel-freebsd"...
>>>> (gdb) run
>>>> Starting program:
>>>> /big/home2/mika/2/cm3-cvs/cm3/m3-libs/m3core/tests/thread/AMD64_FREEBSD/threadtest
>>>> [New LWP 100121]
>>>> Writing file...done
>>>> Creating read threads...done
>>>> Creating fork threads...done
>>>> Creating alloc threads...done
>>>> Creating lock threads...done
>>>> running...printing oldest/median age/newest
>>>> .[New Thread 801807000 (LWP 100343/threadtest)]
>>>> [New Thread 801807c00 (LWP 100667/threadtest)]
>>>> [New Thread 801808800 (LWP 100816/threadtest)]
>>>> [New Thread 801807400 (LWP 100349/threadtest)]
>>>> [New Thread 801808400 (LWP 100815/threadtest)]
>>>> [New Thread 801807800 (LWP 100352/threadtest)]
>>>> [New Thread 801808c00 (LWP 100817/threadtest)]
>>>> [New Thread 801809000 (LWP 100818/threadtest)]
>>>> [New Thread 801809800 (LWP 100820/threadtest)]
>>>> [New Thread 801808000 (LWP 100678/threadtest)]
>>>> [New Thread 801806400 (LWP 100121/threadtest)]
>>>> [New Thread 801806800 (LWP 100341/threadtest)]
>>>> [New Thread 801806c00 (LWP 100342/threadtest)]
>>>> .ERROR: pthread_mutex_lock:11
>>>> 
>>>> Program received signal SIGABRT, Aborted.
>>>> [Switching to Thread 801809800 (LWP 100820/threadtest)]
>>>> 0x0000000800d5c26a in thr_kill () from /lib/libc.so.7
>>>> (gdb) where
>>>> #0  0x0000000800d5c26a in thr_kill () from /lib/libc.so.7
>>>> #1  0x0000000800e23ac9 in abort () from /lib/libc.so.7
>>>> #2  0x000000000045101f in ThreadPThread__pthread_mutex_lock (mutex=Error
>>>> accessing memory address 0x8000fe5f2d98: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThreadC.c:513
>>>> #3  0x000000000044b370 in ThreadPThread__LockMutex (M3_AYIbX3_m=Error
>>>> accessing memory address 0x8000fe5f2dc8: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:119
>>>> #4  0x0000000000405b5d in Main__LApply (M3_AP7a1g_cl=Error accessing
>>>> memory address 0x8000fe5f2df8: Bad address.
>>>> ) at ../src/Main.m3:319
>>>> #5  0x000000000044d243 in ThreadPThread__RunThread (M3_DMxDjQ_me=Error
>>>> accessing memory address 0x8000fe5f2eb8: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:449
>>>> #6  0x000000000044cf00 in ThreadPThread__ThreadBase
>>>> (M3_AJWxb1_param=Error accessing memory address 0x8000fe5f2f68: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:422
>>>> #7  0x0000000800ad54a4 in pthread_create () from /lib/libthr.so.3
>>>> #8  0x0000000000000000 in ?? ()
>>>> (gdb) set lang c
>>>> (gdb) thread apply all bt
>>>> ... not that interesting ...
>>>> 
>>>> The segfault is just a segfault, but error 11 is EDEADLK (locking against myself?)
>>>> 
>>>> ????? the code is very straightforward, as I said.
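>>>>
>>>> As a sanity check that error 11 here really is EDEADLK (an owner re-locking
>>>> a non-recursive mutex), this tiny standalone C program, with invented names
>>>> and not part of the tester, reproduces it using an explicitly error-checking
>>>> mutex (what a default-type mutex does in this situation is
>>>> implementation-defined):
>>>>
>>>> #include <errno.h>
>>>> #include <pthread.h>
>>>> #include <stdio.h>
>>>> #include <string.h>
>>>>
>>>> int main(void)
>>>> {
>>>>   pthread_mutexattr_t attr;
>>>>   pthread_mutex_t mu;
>>>>
>>>>   /* An error-checking mutex reports EDEADLK instead of hanging when the
>>>>    * owning thread tries to lock it a second time. */
>>>>   pthread_mutexattr_init(&attr);
>>>>   pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
>>>>   pthread_mutex_init(&mu, &attr);
>>>>
>>>>   pthread_mutex_lock(&mu);            /* first lock succeeds */
>>>>   int r = pthread_mutex_lock(&mu);    /* re-lock by the same thread */
>>>>   printf("second lock: %d (%s), EDEADLK=%d\n", r, strerror(r), EDEADLK);
>>>>
>>>>   pthread_mutex_unlock(&mu);
>>>>   pthread_mutex_destroy(&mu);
>>>>   pthread_mutexattr_destroy(&attr);
>>>>   return 0;
>>>> }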
>>>> 
>>>> I thought people had the thread tester working with pthreads?  Which set
>>>> of files, then?  Anyone on FreeBSD/amd64?
>>>> 
>>>> Could it be an issue with "volatile"?  Not even sure where to look.
>>>> 
>>>> The code calling the lock is just this:
>>>> 
>>>> PROCEDURE GetChar (rd: T): CHAR
>>>>  RAISES {EndOfFile, Failure, Alerted} =
>>>>  BEGIN
>>>>    LOCK rd DO
>>>>      RETURN FastGetChar(rd);
>>>>    END
>>>>  END GetChar;
>>>> 
>>>> No mysteries there...
>>>> 
>>>> Ah, is this the fork bug?  Looks like the Init routine is called on a
>>>> lot of NIL mutexes around the fork.  But the subprocesses aren't meant
>>>> to access the mutexes... hmm
>>>> 
>>>> Hmm, and it's not entirely a fork issue.  Even with "threadtest -tests
>>>> STD,-fork,-forktoomuch" it misbehaves.  Hangs....
>>>> 
>>>> Looks like a deadlock this time.
>>>> 
>>>> Some stacks...
>>>> 
>>>> Thread 4 (Thread 800c0b400 (LWP 100477/threadtest)):
>>>> #0  0x00000000004b4e2b in __vdso_gettimeofday ()
>>>> #1  0x00000000004ab8d2 in gettimeofday ()
>>>> #2  0x0000000000452e4b in TimePosix__Now () at
>>>> ../src/time/POSIX/TimePosixC.c:50
>>>> #3  0x0000000000452d72 in Time__Now () at
>>>> ../src/time/POSIX/TimePosix.m3:14
>>>> #4  0x00000000004029a3 in Main__LApply (M3_AP7a1g_cl=Error accessing
>>>> memory address 0x8000fedf6df8: Bad address.
>>>> ) at ../src/Main.m3:327
>>>> #5  0x0000000000449f93 in ThreadPThread__RunThread (M3_DMxDjQ_me=Error
>>>> accessing memory address 0x8000fedf6eb8: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:449
>>>> #6  0x0000000000449c50 in ThreadPThread__ThreadBase
>>>> (M3_AJWxb1_param=Error accessing memory address 0x8000fedf6f68: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:422
>>>> #7  0x000000000047c7d4 in thread_start ()
>>>> #8  0x0000000000000000 in ?? ()
>>>> 
>>>> Thread 3 (Thread 800c0b000 (LWP 100476/threadtest)):
>>>> #0  0x000000000047e53c in _umtx_op_err ()
>>>> #1  0x0000000000475f14 in __thr_umutex_lock ()
>>>> #2  0x0000000000479404 in mutex_lock_common ()
>>>> #3  0x000000000044dd15 in ThreadPThread__pthread_mutex_lock (mutex=Error
>>>> accessing memory address 0x8000feff7d98: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThreadC.c:506
>>>> #4  0x00000000004480c0 in ThreadPThread__LockMutex (M3_AYIbX3_m=Error
>>>> accessing memory address 0x8000feff7dc8: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:119
>>>> #5  0x00000000004028ad in Main__LApply (M3_AP7a1g_cl=Error accessing
>>>> memory address 0x8000feff7df8: Bad address.
>>>> ) at ../src/Main.m3:319
>>>> #6  0x0000000000449f93 in ThreadPThread__RunThread (M3_DMxDjQ_me=Error
>>>> accessing memory address 0x8000feff7eb8: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:449
>>>> #7  0x0000000000449c50 in ThreadPThread__ThreadBase
>>>> (M3_AJWxb1_param=Error accessing memory address 0x8000feff7f68: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:422
>>>> #8  0x000000000047c7d4 in thread_start ()
>>>> #9  0x0000000000000000 in ?? ()
>>>> 
>>>> Thread 6 (Thread 800c09800 (LWP 100470/threadtest)):
>>>> #0  0x000000000047e53c in _umtx_op_err ()
>>>> #1  0x0000000000475759 in suspend_common ()
>>>> #2  0x00000000004755c1 in pthread_suspend_np ()
>>>> #3  0x000000000044df0c in ThreadPThread__SuspendThread (mt=Error
>>>> accessing memory address 0x8000ffbfd6f8: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadFreeBSD.c:33
>>>> #4  0x000000000044bade in ThreadPThread__StopThread (M3_DMxDjQ_act=Error
>>>> accessing memory address 0x8000ffbfd718: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:909
>>>> #5  0x000000000044bc81 in ThreadPThread__StopWorld () at
>>>> ../src/thread/PTHREAD/ThreadPThread.m3:948
>>>> #6  0x000000000044b19e in RTThread__SuspendOthers () at
>>>> ../src/thread/PTHREAD/ThreadPThread.m3:713
>>>> #7  0x00000000004330d6 in RTCollector__CollectSomeInStateZero () at
>>>> ../src/runtime/common/RTCollector.m3:749
>>>> #8  0x0000000000433081 in RTCollector__CollectSome () at
>>>> ../src/runtime/common/RTCollector.m3:723
>>>> #9  0x0000000000432d49 in RTHeapRep__CollectEnough () at
>>>> ../src/runtime/common/RTCollector.m3:657
>>>> #10 0x000000000042ff83 in RTAllocator__AllocTraced
>>>> (M3_Cwb5VA_dataSize=Error accessing memory address 0x8000ffbfd958: Bad
>>>> address.
>>>> ) at ../src/runtime/common/RTAllocator.m3:367
>>>> #11 0x000000000042f8a5 in RTAllocator__GetOpenArray (M3_Eic7CK_def=Error
>>>> accessing memory address 0x8000ffbfda18: Bad address.
>>>> ) at ../src/runtime/common/RTAllocator.m3:296
>>>> #12 0x000000000042ebdf in RTHooks__AllocateOpenArray
>>>> (M3_AJWxb1_defn=Error accessing memory address 0x8000ffbfda78: Bad address.
>>>> ) at ../src/runtime/common/RTAllocator.m3:143
>>>> #13 0x000000000040ca4e in Rd__NextBuff (M3_EkTcCb_rd=Error accessing
>>>> memory address 0x8000ffbfdab8: Bad address.
>>>> ) at ../src/rw/Rd.m3:159
>>>> #14 0x000000000040cd43 in UnsafeRd__FastGetChar (M3_EkTcCb_rd=Error
>>>> accessing memory address 0x8000ffbfdb88: Bad address.
>>>> ) at ../src/rw/Rd.m3:187
>>>> #15 0x000000000040cc4a in Rd__GetChar (M3_EkTcCb_rd=Error accessing
>>>> memory address 0x8000ffbfdbd8: Bad address.
>>>> ) at ../src/rw/Rd.m3:176
>>>> #16 0x000000000040095c in Main__RApply (M3_AP7a1g_cl=Error accessing
>>>> memory address 0x8000ffbfdc58: Bad address.
>>>> ) at ../src/Main.m3:185
>>>> 
>>>> #17 0x0000000000449f93 in ThreadPThread__RunThread (M3_DMxDjQ_me=Error
>>>> accessing memory address 0x8000ffbfdeb8: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:449
>>>> #18 0x0000000000449c50 in ThreadPThread__ThreadBase
>>>> (M3_AJWxb1_param=Error accessing memory address 0x8000ffbfdf68: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:422
>>>> #19 0x000000000047c7d4 in thread_start ()
>>>> #20 0x0000000000000000 in ?? ()
>>>> 
>>>> Thread 5 (Thread 800c09400 (LWP 101035/threadtest)):
>>>> #0  0x000000000047e53c in _umtx_op_err ()
>>>> #1  0x0000000000475f14 in __thr_umutex_lock ()
>>>> #2  0x0000000000479404 in mutex_lock_common ()
>>>> #3  0x000000000044dd15 in ThreadPThread__pthread_mutex_lock (mutex=Error
>>>> accessing memory address 0x8000ffffc678: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThreadC.c:506
>>>> #4  0x000000000044d039 in RTOS__LockHeap () at
>>>> ../src/thread/PTHREAD/ThreadPThread.m3:1337
>>>> ---Type <return> to continue, or q <return> to quit---
>>>> #5  0x000000000042ff79 in RTAllocator__AllocTraced
>>>> (M3_Cwb5VA_dataSize=Error accessing memory address 0x8000ffffc6e8: Bad
>>>> address.
>>>> ) at ../src/runtime/common/RTAllocator.m3:365
>>>> #6  0x000000000042f15b in RTAllocator__GetTracedObj (M3_Eic7CK_def=Error
>>>> accessing memory address 0x8000ffffc7a8: Bad address.
>>>> ) at ../src/runtime/common/RTAllocator.m3:224
>>>> #7  0x000000000042eb23 in RTHooks__AllocateTracedObj
>>>> (M3_AJWxb1_defn=Error accessing memory address 0x8000ffffc7f8: Bad address.
>>>> ) at ../src/runtime/common/RTAllocator.m3:122
>>>> #8  0x000000000045fbe4 in TextCat__Flat (M3_Bd56fi_LText=Error accessing
>>>> memory address 0x8000ffffc858: Bad address.
>>>> ) at ../src/text/TextCat.m3:562
>>>> #9  0x000000000045ed5d in TextCat__Balance (M3_Bd56fi_LText=0x800c490b0,
>>>> M3_BUgnwf_LInfo=Error accessing memory address 0x8000ffffc8f8: Bad address.
>>>> ) at ../src/text/TextCat.m3:488
>>>> #10 0x000000000045d64f in RTHooks__Concat (M3_Bd56fi_t=Error accessing
>>>> memory address 0x8000ffffcbb8: Bad address.
>>>> ) at ../src/text/TextCat.m3:40
>>>> #11 0x000000000040639e in Main_M3 (M3_AcxOUs_mode=Error accessing memory
>>>> address 0x8000ffffcc38: Bad address.
>>>> ) at ../src/Main.m3:593
>>>> #12 0x000000000043d55d in RTLinker__RunMainBody (M3_DjPxE3_m=Error
>>>> accessing memory address 0x8000ffffcf88: Bad address.
>>>> ) at ../src/runtime/common/RTLinker.m3:408
>>>> #13 0x000000000043c8e8 in RTLinker__AddUnitI (M3_DjPxE3_m=Error accessing
>>>> memory address 0x8000ffffd008: Bad address.
>>>> ) at ../src/runtime/common/RTLinker.m3:115
>>>> #14 0x000000000043c97c in RTLinker__AddUnit (M3_DjPxE5_b=Error accessing
>>>> memory address 0x8000ffffd028: Bad address.
>>>> ) at ../src/runtime/common/RTLinker.m3:124
>>>> #15 0x00000000004004a6 in main (argc=Error accessing memory address
>>>> 0x8000ffffd07c: Bad address.
>>>> ) at _m3main.c:22
>>>> 
>>>> Thread 9 (Thread 800c0a400 (LWP 100473/threadtest)):
>>>> #0  0x000000000047e53c in _umtx_op_err ()
>>>> #1  0x0000000000477e8a in check_suspend ()
>>>> #2  0x00000000004780a2 in sigcancel_handler ()
>>>> #3  <signal handler called>
>>>> #4  0x000000000047e53c in _umtx_op_err ()
>>>> #5  0x0000000000475f14 in __thr_umutex_lock ()
>>>> #6  0x0000000000479404 in mutex_lock_common ()
>>>> #7  0x000000000044dd15 in ThreadPThread__pthread_mutex_lock (mutex=Error
>>>> accessing memory address 0x8000ff5fac18: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThreadC.c:506
>>>> #8  0x000000000044d039 in RTOS__LockHeap () at
>>>> ../src/thread/PTHREAD/ThreadPThread.m3:1337
>>>> #9  0x000000000042ff79 in RTAllocator__AllocTraced
>>>> (M3_Cwb5VA_dataSize=Error accessing memory address 0x8000ff5fac88: Bad
>>>> address.
>>>> ) at ../src/runtime/common/RTAllocator.m3:365
>>>> #10 0x000000000042f8a5 in RTAllocator__GetOpenArray (M3_Eic7CK_def=Error
>>>> accessing memory address 0x8000ff5fad48: Bad address.
>>>> ) at ../src/runtime/common/RTAllocator.m3:296
>>>> #11 0x000000000042ebdf in RTHooks__AllocateOpenArray
>>>> (M3_AJWxb1_defn=Error accessing memory address 0x8000ff5fada8: Bad address.
>>>> ) at ../src/runtime/common/RTAllocator.m3:143
>>>> #12 0x000000000040203a in Main__AApply (M3_AP7a1g_cl=Error accessing
>>>> memory address 0x8000ff5fade8: Bad address.
>>>> ) at ../src/Main.m3:283
>>>> #13 0x0000000000449f93 in ThreadPThread__RunThread (M3_DMxDjQ_me=Error
>>>> accessing memory address 0x8000ff5faeb8: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:449
>>>> #14 0x0000000000449c50 in ThreadPThread__ThreadBase
>>>> (M3_AJWxb1_param=Error accessing memory address 0x8000ff5faf68: Bad address.
>>>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:422
>>>> #15 0x000000000047c7d4 in thread_start ()
>>>> #16 0x0000000000000000 in ?? ()
>>>> 
>>>> (others are similar)
>>>> 
>>>> Hmm looks like a FreeBSD issue now.  It's here...
>>>> 
>>>> int
>>>> __cdecl
>>>> ThreadPThread__SuspendThread (m3_pthread_t mt)
>>>> {
>>>>  ThreadFreeBSD__Fatal(pthread_suspend_np(PTHREAD_FROM_M3(mt)),
>>>>                       "pthread_suspend_np");
>>>>  return 1;
>>>> }
>>>> 
>>>> Now this suspend can wait:
>>>> 
>>>> static int
>>>> suspend_common(struct pthread *curthread, struct pthread *thread,
>>>>        int waitok)
>>>> {
>>>>        uint32_t tmp;
>>>> 
>>>>        while (thread->state != PS_DEAD &&
>>>>              !(thread->flags & THR_FLAGS_SUSPENDED)) {
>>>>                thread->flags |= THR_FLAGS_NEED_SUSPEND;
>>>>                /* Thread is in creation. */
>>>>                if (thread->tid == TID_TERMINATED)
>>>>                        return (1);
>>>>                tmp = thread->cycle;
>>>>                _thr_send_sig(thread, SIGCANCEL);
>>>>                THR_THREAD_UNLOCK(curthread, thread);
>>>>                if (waitok) {
>>>>                        _thr_umtx_wait_uint(&thread->cycle, tmp, NULL, 0);  <==========
>>>>                        THR_THREAD_LOCK(curthread, thread);
>>>>                } else {
>>>>                        THR_THREAD_LOCK(curthread, thread);
>>>>                        return (0);
>>>>                }
>>>>        }
>>>> 
>>>>        return (1);
>>>> }
>>>> 
>>>> ... but what it can wait for I am not clear on.
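>>>>
>>>> My best guess is that it waits for the target thread to acknowledge the
>>>> suspension: the SIGCANCEL handler (the check_suspend frame in the backtraces
>>>> above) presumably bumps thread->cycle and wakes the umtx wait once the
>>>> target has actually parked.  A rough standalone analogue of that handshake,
>>>> with invented names and a sleep loop standing in for the umtx wait, would
>>>> look like this:
>>>>
>>>> /* Hypothetical sketch, not libthr code: the requester signals the target
>>>>  * and then waits for the target's handler to advance a generation counter,
>>>>  * much like _thr_umtx_wait_uint(&thread->cycle, tmp, NULL, 0) above. */
>>>> #include <pthread.h>
>>>> #include <signal.h>
>>>> #include <stdatomic.h>
>>>> #include <stdio.h>
>>>> #include <unistd.h>
>>>>
>>>> static atomic_uint cycle;              /* bumped by the target when it parks */
>>>>
>>>> static void ack_handler(int sig)       /* runs in the target thread */
>>>> {
>>>>   (void)sig;
>>>>   atomic_fetch_add(&cycle, 1);         /* acknowledge the suspend request */
>>>>   /* a real handler would now park (sigsuspend / umtx wait) until resumed */
>>>> }
>>>>
>>>> static void *worker(void *arg)
>>>> {
>>>>   (void)arg;
>>>>   for (;;) pause();                    /* just sit there and take signals */
>>>>   return NULL;
>>>> }
>>>>
>>>> int main(void)
>>>> {
>>>>   pthread_t target;
>>>>   signal(SIGUSR1, ack_handler);
>>>>   pthread_create(&target, NULL, worker, NULL);
>>>>
>>>>   unsigned tmp = atomic_load(&cycle);
>>>>   pthread_kill(target, SIGUSR1);       /* like _thr_send_sig(thread, SIGCANCEL) */
>>>>   while (atomic_load(&cycle) == tmp)   /* like _thr_umtx_wait_uint(&cycle, tmp, ...) */
>>>>     usleep(1000);
>>>>
>>>>   printf("target acknowledged the suspend request\n");
>>>>   return 0;
>>>> }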
>>>> 
>>>> Do things work better on Linux?
>>>> 
>>>> What is the status of the fork bug?  Can it be worked around by only
>>>> forking via Process.Create with no mutexes held (outside of all LOCK
>>>> blocks)?
>>>> 
>>>>     Mika
>>>> 
>>>> 
>>>> 
>>>> Peter McKinna writes:
>>>>> 
>>>>> Mika
>>>>> 
>>>>> I think you need to back out Tony's changes to fix the fork bug, at least
>>>>> for now.  You need to reload CVS version 1.262 of ThreadPThread.m3; if
>>>>> you're using git you're on your own.
>>>>> Do a cvs log ThreadPThread.m3 for an explanation of some of the design
>>>>> principles.
>>>>> Also, if you use gdb, you need to set lang c before backtraces, so that at
>>>>> least you can see address names and values; even if they are contorted, you
>>>>> can extract the M3 name in the parameter lists.  Also, in gdb, thread apply
>>>>> all bt gives all thread backtraces, which can be handy for seeing who's got
>>>>> the locks held.
>>>>> 
>>>>> Regards Peter
>>>>> 
>>>>> 
>>>>> 
>>>>> On Wed, Aug 13, 2014 at 12:14 PM, <mika at async.caltech.edu> wrote:
>>>>> 
>>>>>> 
>>>>>> Question... is there something odd about my pthreads?  Are pthreads
>>>>>> normally reentrant?  I didn't think so.
>>>>>> 
>>>>>> My compiler is much happier with the following changes I already outlined:
>>>>>> 
>>>>>> 1. a dirty, disgusting hack to keep from locking against myself going from
>>>>>> XWait with self.mutex locked to m.release().
>>>>>> 
>>>>>> 2. the change to LockHeap I described in previous email
>>>>>> 
>>>>>> But now...
>>>>>> 
>>>>>> (gdb) cont
>>>>>> Continuing.
>>>>>> ERROR: pthread_mutex_lock:11
>>>>>> ERROR: pthread_mutex_lock:11
>>>>>> 
>>>>>> Program received signal SIGABRT, Aborted.
>>>>>> 0x000000080107626a in thr_kill () from /lib/libc.so.7
>>>>>> (gdb) where
>>>>>> #0  0x000000080107626a in thr_kill () from /lib/libc.so.7
>>>>>> #1  0x000000080113dac9 in abort () from /lib/libc.so.7
>>>>>> #2  0x000000000071e37a in ThreadPThread__pthread_mutex_lock
>>>>>> (mutex=Error
>>>>>> accessing memory address 0x8000ffffb508: Bad address.
>>>>>> ) at ../src/thread/PTHREAD/ThreadPThreadC.c:543
>>>>>> #3  0x000000000071d48d in RTOS__LockHeap () at
>>>>>> ../src/thread/PTHREAD/ThreadPThread.m3:1377
>>>>>> #4  0x0000000000706b9d in RTHooks__CheckLoadTracedRef
>>>>>> (M3_Af40ku_ref=Error
>>>>>> accessing memory address 0x8000ffffb568: Bad address.
>>>>>> ) at ../src/runtime/common/RTCollector.m3:2234
>>>>>> #5  0x000000000071d284 in ThreadPThread__AtForkParent () at
>>>>>> ../src/thread/PTHREAD/ThreadPThread.m3:1348
>>>>>> #6  0x0000000800df8733 in fork () from /lib/libthr.so.3
>>>>>> #7  0x000000000070dd8b in RTProcess__Fork () at
>>>>>> ../src/runtime/common/RTProcessC.c:152
>>>>>> #8  0x00000000006c52f2 in ProcessPosixCommon__Create_ForkExec
>>>>>> (M3_Bd56fi_cmd=Error accessing memory address 0x8000ffffb6f8: Bad
>>>>>> address.
>>>>>> ) at ../src/os/POSIX/ProcessPosixCommon.m3:75
>>>>>> #9  0x00000000006c6c6c in Process__Create (M3_Bd56fi_cmd=Error
>>>>>> accessing
>>>>>> memory address 0x8000ffffb7f8: Bad address.
>>>>>> ) at ../src/os/POSIX/ProcessPosix.m3:21
>>>>>> #10 0x00000000004d6826 in QMachine__FulfilExecPromise
>>>>>> (M3_D6rRrg_ep=Error
>>>>>> accessing memory address 0x8000ffffb838: Bad address.
>>>>>> ) at ../src/QMachine.m3:1666
>>>>>> #11 0x00000000004d6220 in QMachine__ExecCommand (M3_An02H2_t=Error
>>>>>> accessing memory address 0x8000ffffb9f8: Bad address.
>>>>>> ) at ../src/QMachine.m3:1605
>>>>>> #12 0x00000000004d537e in QMachine__DoTryExec (M3_An02H2_t=Error
>>>>>> accessing
>>>>>> memory address 0x8000ffffbee8: Bad address.
>>>>>> ) at ../src/QMachine.m3:1476
>>>>>> 
>>>>>> What am I doing wrong here?
>>>>>> 
>>>>>> The error doesn't look unreasonable!  Looking more closely at the code:
>>>>>> 
>>>>>> First, AtForkPrepare has been called:
>>>>>> 
>>>>>> PROCEDURE AtForkPrepare() =
>>>>>>  VAR me := GetActivation();
>>>>>>      act: Activation;
>>>>>>  BEGIN
>>>>>>    PThreadLockMutex(slotsMu, ThisLine());
>>>>>>    PThreadLockMutex(perfMu, ThisLine());
>>>>>>    PThreadLockMutex(initMu, ThisLine()); (* InitMutex => RegisterFinalCleanup => LockHeap *)
>>>>>>    PThreadLockMutex(heapMu, ThisLine());
>>>>>>    PThreadLockMutex(activeMu, ThisLine()); (* LockHeap => SuspendOthers => activeMu *)
>>>>>>    (* Walk activations and lock all threads.
>>>>>>     * NOTE: We have initMu, activeMu, so slots won't change, conditions and
>>>>>>     * mutexes won't be initialized on-demand.
>>>>>>     *)
>>>>>>    act := me;
>>>>>>    REPEAT
>>>>>>      PThreadLockMutex(act.mutex, ThisLine());
>>>>>>      act := act.next;
>>>>>>    UNTIL act = me;
>>>>>>  END AtForkPrepare;
>>>>>> 
>>>>>> a postcondition of this routine is that heapMu is locked.
>>>>>> 
>>>>>> now we get into AtForkParent:
>>>>>> 
>>>>>> PROCEDURE AtForkParent() =
>>>>>>  VAR me := GetActivation();
>>>>>>      act: Activation;
>>>>>>      cond: Condition;
>>>>>>  BEGIN
>>>>>>    (* Walk activations and unlock all threads, conditions. *)
>>>>>>    act := me;
>>>>>>    REPEAT
>>>>>>      cond := slots[act.slot].join;
>>>>>>      IF cond # NIL THEN PThreadUnlockMutex(cond.mutex, ThisLine()) END;
>>>>>>      PThreadUnlockMutex(act.mutex, ThisLine());
>>>>>>      act := act.next;
>>>>>>    UNTIL act = me;
>>>>>>    PThreadUnlockMutex(activeMu, ThisLine());
>>>>>>    PThreadUnlockMutex(heapMu, ThisLine());
>>>>>>    PThreadUnlockMutex(initMu, ThisLine());
>>>>>>    PThreadUnlockMutex(perfMu, ThisLine());
>>>>>>    PThreadUnlockMutex(slotsMu, ThisLine());
>>>>>>  END AtForkParent;
>>>>>> 
>>>>>> We can see by inspecting the code that a necessary precondition for
>>>>>> this routine is that heapMu is locked!  (Since it's going to unlock it,
>>>>>> it had BETTER be locked on entry.)
>>>>>> 
>>>>>> But the cond := ... causes an RTHooks.CheckLoadTracedRef
>>>>>> 
>>>>>> which causes an RTOS.LockHeap
>>>>>> 
>>>>>> the code of which we just saw:
>>>>>> 
>>>>>> PROCEDURE LockHeap () =
>>>>>>  VAR self := pthread_self();
>>>>>>  BEGIN
>>>>>>    WITH r = pthread_mutex_lock(heapMu,ThisLine()) DO <*ASSERT r=0*> END;
>>>>>> ...
>>>>>> 
>>>>>> we try to lock heapMu.  kaboom!  No surprise there, really?
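>>>>>>
>>>>>> Boiled down to plain pthreads, the pattern is just this (a stripped-down
>>>>>> sketch with invented names, not the runtime's actual code): the prepare
>>>>>> handler leaves a mutex locked across fork(), and the parent handler then
>>>>>> trips a read barrier that tries to take that same non-recursive mutex
>>>>>> before the handler has released it:
>>>>>>
>>>>>> /* Stripped-down illustration, not the M3 runtime: an atfork parent
>>>>>>  * handler re-enters a mutex that its prepare handler left locked, which
>>>>>>  * is the heapMu self-lock described above. */
>>>>>> #include <pthread.h>
>>>>>> #include <stdio.h>
>>>>>> #include <string.h>
>>>>>> #include <sys/types.h>
>>>>>> #include <sys/wait.h>
>>>>>> #include <unistd.h>
>>>>>>
>>>>>> static pthread_mutex_t heap_mu;
>>>>>>
>>>>>> static void read_barrier(void)       /* stands in for CheckLoadTracedRef */
>>>>>> {
>>>>>>   int r = pthread_mutex_lock(&heap_mu);  /* stands in for RTOS.LockHeap */
>>>>>>   if (r != 0) {
>>>>>>     printf("ERROR: pthread_mutex_lock:%d (%s)\n", r, strerror(r));
>>>>>>     return;
>>>>>>   }
>>>>>>   pthread_mutex_unlock(&heap_mu);
>>>>>> }
>>>>>>
>>>>>> static void prepare(void) { pthread_mutex_lock(&heap_mu); }  /* AtForkPrepare */
>>>>>>
>>>>>> static void parent(void)                                     /* AtForkParent */
>>>>>> {
>>>>>>   read_barrier();                    /* oops: heap_mu is still held here */
>>>>>>   pthread_mutex_unlock(&heap_mu);
>>>>>> }
>>>>>>
>>>>>> static void child(void) { pthread_mutex_unlock(&heap_mu); }
>>>>>>
>>>>>> int main(void)
>>>>>> {
>>>>>>   pthread_mutexattr_t attr;
>>>>>>   pthread_mutexattr_init(&attr);
>>>>>>   pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
>>>>>>   pthread_mutex_init(&heap_mu, &attr);
>>>>>>
>>>>>>   pthread_atfork(prepare, parent, child);
>>>>>>   pid_t pid = fork();                /* runs prepare, then parent/child */
>>>>>>   if (pid == 0) _exit(0);
>>>>>>   waitpid(pid, NULL, 0);
>>>>>>   return 0;
>>>>>> }
>>>>>>
>>>>>> The explicit ERRORCHECK type just makes the re-lock fail deterministically
>>>>>> with EDEADLK (the error 11 above) instead of hanging; either way the parent
>>>>>> handler cannot safely touch anything that takes heapMu while heapMu is
>>>>>> still held.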
>>>>>> 
>>>>>> Am I going about this totally the wrong way?  Other people are running
>>>>>> Modula-3
>>>>>> with pthreads, right?  Right??  Somewhere out there in m3devel-land?
>>>>>> 
>>>>>>     Mika
>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>>> 
>>> 
>> 
>> --089e01183a1813188705007c3498
>> Content-Type: text/html; charset=UTF-8
>> Content-Transfer-Encoding: quoted-printable
>> 
>> <div dir=3D"ltr">Oh, one other thing that seemed to help me in testing this=
>> stuff especially things to do with m3core and libm3 which seem to get out =
>> of whack sometimes, is to totally remove the target directory of m3core the=
>> n cm3; cm3 -ship Same with libm3. I'm not sure that clean actually remo=
>> ves the targets and I think there are leftover files that stuff things up s=
>> ince I would get segv in FileRd for no obvious reason.<div>
>> <br></div><div>Peter</div><div><br></div></div><div class=3D"gmail_extra"><=
>> br><br><div class=3D"gmail_quote">On Wed, Aug 13, 2014 at 3:33 PM, Peter Mc=
>> Kinna <span dir=3D"ltr"><<a href=3D"mailto:peter.mckinna at gmail.com" targ=
>> et=3D"_blank">peter.mckinna at gmail.com</a>></span> wrote:<br>
>> <blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
>> x #ccc solid;padding-left:1ex"><div dir=3D"ltr">That is weird. The thread t=
>> ester works fine with the old version of ThreadPThread.m3 on linux amd_64 e=
>> xcept if you have the fork test in the mix of tests you see the pthread_mut=
>> ex_destroy every now and again, the fork bug in fact.=C2=A0<div>
>> 
>> I was testing the pthread changes Tony has made over the past few days usin=
>> g the thread tester program until I stupidly rebuilt the compiler and was g=
>> etting all sorts of hangs as it dragged in the new version of pthreads. So =
>> I rebuilt everything from a backup to get back where I was with a decent co=
>> mpiler, then reintroduced the latest changes to ThreadPThread.m3, which yes=
>> terday was just hanging in sigsuspend but today is giving me an assert fail=
>> ure at line 1387 in UnlockHeap. So its a bit unstable to say the least. May=
>> be there is something else wrong with freebsd.</div>
>> <span class=3D"HOEnZb"><font color=3D"#888888">
>> <div><br></div><div>Peter</div><div><br></div></font></span></div><div clas=
>> s=3D"HOEnZb"><div class=3D"h5"><div class=3D"gmail_extra"><br><br><div clas=
>> s=3D"gmail_quote">On Wed, Aug 13, 2014 at 2:54 PM,  <span dir=3D"ltr"><<=
>> a href=3D"mailto:mika at async.caltech.edu" target=3D"_blank">mika at async.calte=
>> ch.edu</a>></span> wrote:<br>
>> 
>> <blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
>> x #ccc solid;padding-left:1ex"><br>
>> Yeah OK 1.262 makes a lot more sense to me but I'm still not getting<br=
>>> 
>> things to work and now I'm really baffled because I think I understand<=
>> br>
>> this code!<br>
>> <br>
>> What I did was I threw away all my changes and reverted ThreadPThread.m3<br=
>>> 
>> to 1.262 as you suggested.<br>
>> <br>
>> Rebuilt the compiler with upgrade.sh<br>
>> <br>
>> Then rebuilt the compiler again with itself.<br>
>> <br>
>> Then I realcleaned the world and buildshipped it. =C2=A0(I was afraid<br>
>> that parseparams, also imported by the thread tester, would pollute<br>
>> it somehow.)<br>
>> <br>
>> When I look at the code in 1.262 it looks quite straightforward. =C2=A0heap=
>> Mu<br>
>> is just a mutex, (Modula-3) mutexes are just (pthreads) mutexes. =C2=A0Cond=
>> ition<br>
>> variables are mutexes too, but that's no big deal.<br>
>> <br>
>> So, rebuild thread tester, run it:<br>
>> <br>
>> new source -> compiling Main.m3<br>
>> =C2=A0-> linking threadtest<br>
>> root at rover:~mika/cm3-cvs-anon/cm3/m3-libs/m3core/tests/thread # AMD64_FREEB=
>> SD/threadtest<br>
>> Writing file...done<br>
>> Creating read threads...done<br>
>> Creating fork threads...done<br>
>> Creating alloc threads...done<br>
>> Creating lock threads...done<br>
>> running...printing oldest/median age/newest<br>
>> .<br>
>> <br>
>> ***<br>
>> *** runtime error:<br>
>> *** =C2=A0 =C2=A0Segmentation violation - possible attempt to dereference N=
>> IL.........laziest thread is 1407901189/<a href=3D"tel:1407901189%2F9" valu=
>> e=3D"+14079011899" target=3D"_blank">1407901189/9</a> (tests: read 14079011=
>> 89/1407901189/1407901189 fork 1407901189/1407901189/1407901189 alloc 9/9/9 =
>> lock 1407901189/1407901189/9)<br>
>> 
>> 
>> <br>
>> <br>
>> <br>
>> <br>
>> ^C<br>
>> root at rover:~mika/cm3-cvs-anon/cm3/m3-libs/m3core/tests/thread # gdb !$<br>
>> gdb AMD64_FREEBSD/threadtest<br>
>> GNU gdb 6.1.1 [FreeBSD]<br>
>> Copyright 2004 Free Software Foundation, Inc.<br>
>> GDB is free software, covered by the GNU General Public License, and you ar=
>> e<br>
>> welcome to change it and/or distribute copies of it under certain condition=
>> s.<br>
>> Type "show copying" to see the conditions.<br>
>> There is absolutely no warranty for GDB. =C2=A0Type "show warranty&quo=
>> t; for details.<br>
>> This GDB was configured as "amd64-marcel-freebsd"...<br>
>> (gdb) run<br>
>> Starting program: /big/home2/mika/2/cm3-cvs/cm3/m3-libs/m3core/tests/thread=
>> /AMD64_FREEBSD/threadtest<br>
>> [New LWP 100121]<br>
>> Writing file...done<br>
>> Creating read threads...done<br>
>> Creating fork threads...done<br>
>> Creating alloc threads...done<br>
>> Creating lock threads...done<br>
>> running...printing oldest/median age/newest<br>
>> .[New Thread 801807000 (LWP 100343/threadtest)]<br>
>> [New Thread 801807c00 (LWP 100667/threadtest)]<br>
>> [New Thread 801808800 (LWP 100816/threadtest)]<br>
>> [New Thread 801807400 (LWP 100349/threadtest)]<br>
>> [New Thread 801808400 (LWP 100815/threadtest)]<br>
>> [New Thread 801807800 (LWP 100352/threadtest)]<br>
>> [New Thread 801808c00 (LWP 100817/threadtest)]<br>
>> [New Thread 801809000 (LWP 100818/threadtest)]<br>
>> [New Thread 801809800 (LWP 100820/threadtest)]<br>
>> [New Thread 801808000 (LWP 100678/threadtest)]<br>
>> [New Thread 801806400 (LWP 100121/threadtest)]<br>
>> [New Thread 801806800 (LWP 100341/threadtest)]<br>
>> [New Thread 801806c00 (LWP 100342/threadtest)]<br>
>> .ERROR: pthread_mutex_lock:11<br>
>> <div><br>
>> Program received signal SIGABRT, Aborted.<br>
>> </div>[Switching to Thread 801809800 (LWP 100820/threadtest)]<br>
>> 0x0000000800d5c26a in thr_kill () from /lib/libc.so.7<br>
>> (gdb) where<br>
>> #0 =C2=A00x0000000800d5c26a in thr_kill () from /lib/libc.so.7<br>
>> #1 =C2=A00x0000000800e23ac9 in abort () from /lib/libc.so.7<br>
>> #2 =C2=A00x000000000045101f in ThreadPThread__pthread_mutex_lock (mutex=3DE=
>> rror accessing memory address 0x8000fe5f2d98: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThreadC.c:513<br>
>> #3 =C2=A00x000000000044b370 in ThreadPThread__LockMutex (M3_AYIbX3_m=3DErro=
>> r accessing memory address 0x8000fe5f2dc8: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:119<br>
>> #4 =C2=A00x0000000000405b5d in Main__LApply (M3_AP7a1g_cl=3DError accessing=
>> memory address 0x8000fe5f2df8: Bad address.<br>
>> ) at ../src/Main.m3:319<br>
>> #5 =C2=A00x000000000044d243 in ThreadPThread__RunThread (M3_DMxDjQ_me=3DErr=
>> or accessing memory address 0x8000fe5f2eb8: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:449<br>
>> #6 =C2=A00x000000000044cf00 in ThreadPThread__ThreadBase (M3_AJWxb1_param=
>> =3DError accessing memory address 0x8000fe5f2f68: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:422<br>
>> #7 =C2=A00x0000000800ad54a4 in pthread_create () from /lib/libthr.so.3<br>
>> #8 =C2=A00x0000000000000000 in ?? ()<br>
>> (gdb) set lang c<br>
>> (gdb) thread apply all bt<br>
>> ... not that interesting ...<br>
>> <br>
>> Segfault is segfault---error 11 is EDEADLK (locking against myself?)<br>
>> <br>
>> ????? the code is very straightforward, as I said.<br>
>> <br>
>> I thought people had the thread tester working with pthreads? =C2=A0Which s=
>> et of files, then? =C2=A0Anyone on FreeBSD/amd64?<br>
>> <br>
>> Could it be an issue with "volatile"? =C2=A0Not even sure where t=
>> o look.<br>
>> <br>
>> The code calling the lock is just this:<br>
>> <br>
>> PROCEDURE GetChar (rd: T): CHAR<br>
>> =C2=A0 RAISES {EndOfFile, Failure, Alerted} =3D<br>
>> =C2=A0 BEGIN<br>
>> =C2=A0 =C2=A0 LOCK rd DO<br>
>> =C2=A0 =C2=A0 =C2=A0 RETURN FastGetChar(rd);<br>
>> =C2=A0 =C2=A0 END<br>
>> =C2=A0 END GetChar;<br>
>> <br>
>> No mysteries there...<br>
>> <br>
>> Ah is this the fork bug? =C2=A0Looks like the Init routine is called on a<b=
>> r>
>> lot of NIL mutexes around the fork. =C2=A0But the subprocesses aren't m=
>> eant<br>
>> to accesses the mutexes... humm<br>
>> <br>
>> Hmm, and it's not entirely a fork issue. =C2=A0Even with "threadte=
>> st -tests STD,-fork,-forktoomuch" it misbehaves. =C2=A0Hangs....<br>
>> <br>
>> Looks like a deadlock this time.<br>
>> <br>
>> Some stacks...<br>
>> <br>
>> Thread 4 (Thread 800c0b400 (LWP 100477/threadtest)):<br>
>> #0 =C2=A00x00000000004b4e2b in __vdso_gettimeofday ()<br>
>> #1 =C2=A00x00000000004ab8d2 in gettimeofday ()<br>
>> #2 =C2=A00x0000000000452e4b in TimePosix__Now () at ../src/time/POSIX/TimeP=
>> osixC.c:50<br>
>> #3 =C2=A00x0000000000452d72 in Time__Now () at ../src/time/POSIX/TimePosix.=
>> m3:14<br>
>> #4 =C2=A00x00000000004029a3 in Main__LApply (M3_AP7a1g_cl=3DError accessing=
>> memory address 0x8000fedf6df8: Bad address.<br>
>> ) at ../src/Main.m3:327<br>
>> #5 =C2=A00x0000000000449f93 in ThreadPThread__RunThread (M3_DMxDjQ_me=3DErr=
>> or accessing memory address 0x8000fedf6eb8: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:449<br>
>> #6 =C2=A00x0000000000449c50 in ThreadPThread__ThreadBase (M3_AJWxb1_param=
>> =3DError accessing memory address 0x8000fedf6f68: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:422<br>
>> #7 =C2=A00x000000000047c7d4 in thread_start ()<br>
>> #8 =C2=A00x0000000000000000 in ?? ()<br>
>> <br>
>> Thread 3 (Thread 800c0b000 (LWP 100476/threadtest)):<br>
>> #0 =C2=A00x000000000047e53c in _umtx_op_err ()<br>
>> #1 =C2=A00x0000000000475f14 in __thr_umutex_lock ()<br>
>> #2 =C2=A00x0000000000479404 in mutex_lock_common ()<br>
>> #3 =C2=A00x000000000044dd15 in ThreadPThread__pthread_mutex_lock (mutex=3DE=
>> rror accessing memory address 0x8000feff7d98: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThreadC.c:506<br>
>> #4 =C2=A00x00000000004480c0 in ThreadPThread__LockMutex (M3_AYIbX3_m=3DErro=
>> r accessing memory address 0x8000feff7dc8: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:119<br>
>> #5 =C2=A00x00000000004028ad in Main__LApply (M3_AP7a1g_cl=3DError accessing=
>> memory address 0x8000feff7df8: Bad address.<br>
>> ) at ../src/Main.m3:319<br>
>> #6 =C2=A00x0000000000449f93 in ThreadPThread__RunThread (M3_DMxDjQ_me=3DErr=
>> or accessing memory address 0x8000feff7eb8: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:449<br>
>> #7 =C2=A00x0000000000449c50 in ThreadPThread__ThreadBase (M3_AJWxb1_param=
>> =3DError accessing memory address 0x8000feff7f68: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:422<br>
>> #8 =C2=A00x000000000047c7d4 in thread_start ()<br>
>> #9 =C2=A00x0000000000000000 in ?? ()<br>
>> <br>
>> Thread 6 (Thread 800c09800 (LWP 100470/threadtest)):<br>
>> #0 =C2=A00x000000000047e53c in _umtx_op_err ()<br>
>> #1 =C2=A00x0000000000475759 in suspend_common ()<br>
>> #2 =C2=A00x00000000004755c1 in pthread_suspend_np ()<br>
>> #3 =C2=A00x000000000044df0c in ThreadPThread__SuspendThread (mt=3DError acc=
>> essing memory address 0x8000ffbfd6f8: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadFreeBSD.c:33<br>
>> #4 =C2=A00x000000000044bade in ThreadPThread__StopThread (M3_DMxDjQ_act=3DE=
>> rror accessing memory address 0x8000ffbfd718: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:909<br>
>> #5 =C2=A00x000000000044bc81 in ThreadPThread__StopWorld () at ../src/thread=
>> /PTHREAD/ThreadPThread.m3:948<br>
>> #6 =C2=A00x000000000044b19e in RTThread__SuspendOthers () at ../src/thread/=
>> PTHREAD/ThreadPThread.m3:713<br>
>> #7 =C2=A00x00000000004330d6 in RTCollector__CollectSomeInStateZero () at ..=
>> /src/runtime/common/RTCollector.m3:749<br>
>> #8 =C2=A00x0000000000433081 in RTCollector__CollectSome () at ../src/runtim=
>> e/common/RTCollector.m3:723<br>
>> #9 =C2=A00x0000000000432d49 in RTHeapRep__CollectEnough () at ../src/runtim=
>> e/common/RTCollector.m3:657<br>
>> #10 0x000000000042ff83 in RTAllocator__AllocTraced (M3_Cwb5VA_dataSize=3DEr=
>> ror accessing memory address 0x8000ffbfd958: Bad address.<br>
>> ) at ../src/runtime/common/RTAllocator.m3:367<br>
>> #11 0x000000000042f8a5 in RTAllocator__GetOpenArray (M3_Eic7CK_def=3DError =
>> accessing memory address 0x8000ffbfda18: Bad address.<br>
>> ) at ../src/runtime/common/RTAllocator.m3:296<br>
>> #12 0x000000000042ebdf in RTHooks__AllocateOpenArray (M3_AJWxb1_defn=3DErro=
>> r accessing memory address 0x8000ffbfda78: Bad address.<br>
>> ) at ../src/runtime/common/RTAllocator.m3:143<br>
>> #13 0x000000000040ca4e in Rd__NextBuff (M3_EkTcCb_rd=3DError accessing memo=
>> ry address 0x8000ffbfdab8: Bad address.<br>
>> ) at ../src/rw/Rd.m3:159<br>
>> #14 0x000000000040cd43 in UnsafeRd__FastGetChar (M3_EkTcCb_rd=3DError acces=
>> sing memory address 0x8000ffbfdb88: Bad address.<br>
>> ) at ../src/rw/Rd.m3:187<br>
>> #15 0x000000000040cc4a in Rd__GetChar (M3_EkTcCb_rd=3DError accessing memor=
>> y address 0x8000ffbfdbd8: Bad address.<br>
>> ) at ../src/rw/Rd.m3:176<br>
>> #16 0x000000000040095c in Main__RApply (M3_AP7a1g_cl=3DError accessing memo=
>> ry address 0x8000ffbfdc58: Bad address.<br>
>> ) at ../src/Main.m3:185<br>
>> <br>
>> #17 0x0000000000449f93 in ThreadPThread__RunThread (M3_DMxDjQ_me=3DError ac=
>> cessing memory address 0x8000ffbfdeb8: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:449<br>
>> #18 0x0000000000449c50 in ThreadPThread__ThreadBase (M3_AJWxb1_param=3DErro=
>> r accessing memory address 0x8000ffbfdf68: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:422<br>
>> #19 0x000000000047c7d4 in thread_start ()<br>
>> #20 0x0000000000000000 in ?? ()<br>
>> <br>
>> Thread 5 (Thread 800c09400 (LWP 101035/threadtest)):<br>
>> #0 =C2=A00x000000000047e53c in _umtx_op_err ()<br>
>> #1 =C2=A00x0000000000475f14 in __thr_umutex_lock ()<br>
>> #2 =C2=A00x0000000000479404 in mutex_lock_common ()<br>
>> #3 =C2=A00x000000000044dd15 in ThreadPThread__pthread_mutex_lock (mutex=3DE=
>> rror accessing memory address 0x8000ffffc678: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThreadC.c:506<br>
>> #4 =C2=A00x000000000044d039 in RTOS__LockHeap () at ../src/thread/PTHREAD/T=
>> hreadPThread.m3:1337<br>
>> ---Type <return> to continue, or q <return> to quit---<br>
>> #5 =C2=A00x000000000042ff79 in RTAllocator__AllocTraced (M3_Cwb5VA_dataSize=
>> =3DError accessing memory address 0x8000ffffc6e8: Bad address.<br>
>> ) at ../src/runtime/common/RTAllocator.m3:365<br>
>> #6 =C2=A00x000000000042f15b in RTAllocator__GetTracedObj (M3_Eic7CK_def=3DE=
>> rror accessing memory address 0x8000ffffc7a8: Bad address.<br>
>> ) at ../src/runtime/common/RTAllocator.m3:224<br>
>> #7 =C2=A00x000000000042eb23 in RTHooks__AllocateTracedObj (M3_AJWxb1_defn=
>> =3DError accessing memory address 0x8000ffffc7f8: Bad address.<br>
>> ) at ../src/runtime/common/RTAllocator.m3:122<br>
>> #8 =C2=A00x000000000045fbe4 in TextCat__Flat (M3_Bd56fi_LText=3DError acces=
>> sing memory address 0x8000ffffc858: Bad address.<br>
>> ) at ../src/text/TextCat.m3:562<br>
>> #9 =C2=A00x000000000045ed5d in TextCat__Balance (M3_Bd56fi_LText=3D0x800c49=
>> 0b0, M3_BUgnwf_LInfo=3DError accessing memory address 0x8000ffffc8f8: Bad a=
>> ddress.<br>
>> ) at ../src/text/TextCat.m3:488<br>
>> #10 0x000000000045d64f in RTHooks__Concat (M3_Bd56fi_t=3DError accessing me=
>> mory address 0x8000ffffcbb8: Bad address.<br>
>> ) at ../src/text/TextCat.m3:40<br>
>> #11 0x000000000040639e in Main_M3 (M3_AcxOUs_mode=3DError accessing memory =
>> address 0x8000ffffcc38: Bad address.<br>
>> ) at ../src/Main.m3:593<br>
>> #12 0x000000000043d55d in RTLinker__RunMainBody (M3_DjPxE3_m=3DError access=
>> ing memory address 0x8000ffffcf88: Bad address.<br>
>> ) at ../src/runtime/common/RTLinker.m3:408<br>
>> #13 0x000000000043c8e8 in RTLinker__AddUnitI (M3_DjPxE3_m=3DError accessing=
>> memory address 0x8000ffffd008: Bad address.<br>
>> ) at ../src/runtime/common/RTLinker.m3:115<br>
>> #14 0x000000000043c97c in RTLinker__AddUnit (M3_DjPxE5_b=3DError accessing =
>> memory address 0x8000ffffd028: Bad address.<br>
>> ) at ../src/runtime/common/RTLinker.m3:124<br>
>> #15 0x00000000004004a6 in main (argc=3DError accessing memory address 0x800=
>> 0ffffd07c: Bad address.<br>
>> ) at _m3main.c:22<br>
>> <br>
>> Thread 9 (Thread 800c0a400 (LWP 100473/threadtest)):<br>
>> #0 =C2=A00x000000000047e53c in _umtx_op_err ()<br>
>> #1 =C2=A00x0000000000477e8a in check_suspend ()<br>
>> #2 =C2=A00x00000000004780a2 in sigcancel_handler ()<br>
>> #3 =C2=A0<signal handler called><br>
>> #4 =C2=A00x000000000047e53c in _umtx_op_err ()<br>
>> #5 =C2=A00x0000000000475f14 in __thr_umutex_lock ()<br>
>> #6 =C2=A00x0000000000479404 in mutex_lock_common ()<br>
>> #7 =C2=A00x000000000044dd15 in ThreadPThread__pthread_mutex_lock (mutex=3DE=
>> rror accessing memory address 0x8000ff5fac18: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThreadC.c:506<br>
>> #8 =C2=A00x000000000044d039 in RTOS__LockHeap () at ../src/thread/PTHREAD/T=
>> hreadPThread.m3:1337<br>
>> #9 =C2=A00x000000000042ff79 in RTAllocator__AllocTraced (M3_Cwb5VA_dataSize=
>> =3DError accessing memory address 0x8000ff5fac88: Bad address.<br>
>> ) at ../src/runtime/common/RTAllocator.m3:365<br>
>> #10 0x000000000042f8a5 in RTAllocator__GetOpenArray (M3_Eic7CK_def=3DError =
>> accessing memory address 0x8000ff5fad48: Bad address.<br>
>> ) at ../src/runtime/common/RTAllocator.m3:296<br>
>> #11 0x000000000042ebdf in RTHooks__AllocateOpenArray (M3_AJWxb1_defn=3DErro=
>> r accessing memory address 0x8000ff5fada8: Bad address.<br>
>> ) at ../src/runtime/common/RTAllocator.m3:143<br>
>> #12 0x000000000040203a in Main__AApply (M3_AP7a1g_cl=3DError accessing memo=
>> ry address 0x8000ff5fade8: Bad address.<br>
>> ) at ../src/Main.m3:283<br>
>> #13 0x0000000000449f93 in ThreadPThread__RunThread (M3_DMxDjQ_me=3DError ac=
>> cessing memory address 0x8000ff5faeb8: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:449<br>
>> #14 0x0000000000449c50 in ThreadPThread__ThreadBase (M3_AJWxb1_param=3DErro=
>> r accessing memory address 0x8000ff5faf68: Bad address.<br>
>> ) at ../src/thread/PTHREAD/ThreadPThread.m3:422<br>
>> #15 0x000000000047c7d4 in thread_start ()<br>
>> #16 0x0000000000000000 in ?? ()<br>
>> <br>
>> (others are similar)<br>
>> <br>
>> Hmm looks like a FreeBSD issue now. =C2=A0It's here...<br>
>> <br>
>> int<br>
>> __cdecl<br>
>> ThreadPThread__SuspendThread (m3_pthread_t mt)<br>
>> {<br>
>> =C2=A0 ThreadFreeBSD__Fatal(pthread_suspend_np(PTHREAD_FROM_M3(mt)), "=
>> pthread_suspend_np");<br>
>> =C2=A0 return 1;<br>
>> }<br>
>>
>> Now this suspend can wait:
>>
>> static int
>> suspend_common(struct pthread *curthread, struct pthread *thread,
>>         int waitok)
>> {
>>         uint32_t tmp;
>>
>>         while (thread->state != PS_DEAD &&
>>             !(thread->flags & THR_FLAGS_SUSPENDED)) {
>>                 thread->flags |= THR_FLAGS_NEED_SUSPEND;
>>                 /* Thread is in creation. */
>>                 if (thread->tid == TID_TERMINATED)
>>                         return (1);
>>                 tmp = thread->cycle;
>>                 _thr_send_sig(thread, SIGCANCEL);
>>                 THR_THREAD_UNLOCK(curthread, thread);
>>                 if (waitok) {
>>                         _thr_umtx_wait_uint(&thread->cycle, tmp, NULL, 0);   <==========
>>                         THR_THREAD_LOCK(curthread, thread);
>>                 } else {
>>                         THR_THREAD_LOCK(curthread, thread);
>>                         return (0);
>>                 }
>>         }
>>
>>         return (1);
>> }
>>
>> ... but what it can wait for I am not clear on.
>>
>> Do things work better on Linux?
>>
>> What is the status of the fork bug?  Can it be worked around by only forking via Process.Create with no mutexes held (outside of all LOCK blocks)?
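>>
>> (Something like the sketch below is the shape I have in mind -- a hand-written C illustration, not the runtime's code, and spawn_unlocked is a made-up name.  The point is only that nothing acquires a lock between fork() and exec(), and the child sticks to async-signal-safe calls before exec.)
>>
>> #include <sys/types.h>
>> #include <unistd.h>
>>
>> /* spawn_unlocked: fork+exec with no mutexes held across the fork. */
>> static void spawn_unlocked(char *const argv[])
>> {
>>   pid_t pid = fork();        /* caller guarantees no locks are held here */
>>   if (pid == 0) {
>>     execvp(argv[0], argv);   /* child: no locking, no allocation */
>>     _exit(127);              /* only reached if exec failed */
>>   }
>>   /* parent: take whatever locks it needs only after fork() returns */
>> }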
>>
>>      Mika
>>
>>
>>
>> Peter McKinna writes:
>> >Mika
>> >
>> >  I think you need to back out Tony's changes to fix the fork bug, at least
>> >for now.  You need to reload cvs version 1.262 of ThreadPThread.m3; if
>> >you're using git, you're on your own.
>> >  Do a cvs log ThreadPThread.m3 for an explanation of some of the design
>> >principles.
>> >  Also, if you use gdb, you need to set lang c before backtraces; then at
>> >least you can see address names and values, and even if they are contorted
>> >you can extract the M3 name in the parameter lists.  Also, in gdb, thread
>> >apply all bt gives all thread backtraces, which can be handy for seeing
>> >who's got the locks held.
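>> >  For instance (just spelling the two commands out; an illustrative
>> >session, not yours):
>> >
>> >  (gdb) set lang c
>> >  (gdb) thread apply all bt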
>> >
>> >Regards Peter
>> >
>> >
>> >
>> >On Wed, Aug 13, 2014 at 12:14 PM, <mika at async.caltech.edu> wrote:
>> >
>> >>
>> >> Question... is there something odd about my pthreads?  Are pthreads
>> >> normally reentrant?  I didn't think so.
>> >>
>> >> My compiler is much happier with the following changes I already outlined:
>> >>
>> >> 1. a dirty, disgusting hack to keep from locking against myself going from
>> >> XWait with self.mutex locked to m.release().
>> >>
>> >> 2. the change to LockHeap I described in previous email
>> >>
>> >> But now...
>> >>
>> >> (gdb) cont
>> >> Continuing.
>> >> ERROR: pthread_mutex_lock:11
>> >> ERROR: pthread_mutex_lock:11
>> >>
>> >> Program received signal SIGABRT, Aborted.
>> >> 0x000000080107626a in thr_kill () from /lib/libc.so.7
>> >> (gdb) where
>> >> #0  0x000000080107626a in thr_kill () from /lib/libc.so.7
>> >> #1  0x000000080113dac9 in abort () from /lib/libc.so.7
>> >> #2  0x000000000071e37a in ThreadPThread__pthread_mutex_lock (mutex=Error
>> >> accessing memory address 0x8000ffffb508: Bad address.
>> >> ) at ../src/thread/PTHREAD/ThreadPThreadC.c:543
>> >> #3  0x000000000071d48d in RTOS__LockHeap () at
>> >> ../src/thread/PTHREAD/ThreadPThread.m3:1377
>> >> #4  0x0000000000706b9d in RTHooks__CheckLoadTracedRef (M3_Af40ku_ref=Error
>> >> accessing memory address 0x8000ffffb568: Bad address.
>> >> ) at ../src/runtime/common/RTCollector.m3:2234
>> >> #5  0x000000000071d284 in ThreadPThread__AtForkParent () at
>> >> ../src/thread/PTHREAD/ThreadPThread.m3:1348
>> >> #6  0x0000000800df8733 in fork () from /lib/libthr.so.3
>> >> #7  0x000000000070dd8b in RTProcess__Fork () at
>> >> ../src/runtime/common/RTProcessC.c:152
>> >> #8  0x00000000006c52f2 in ProcessPosixCommon__Create_ForkExec
>> >> (M3_Bd56fi_cmd=Error accessing memory address 0x8000ffffb6f8: Bad address.
>> >> ) at ../src/os/POSIX/ProcessPosixCommon.m3:75
>> >> #9  0x00000000006c6c6c in Process__Create (M3_Bd56fi_cmd=Error accessing
>> >> memory address 0x8000ffffb7f8: Bad address.
>> >> ) at ../src/os/POSIX/ProcessPosix.m3:21
>> >> #10 0x00000000004d6826 in QMachine__FulfilExecPromise (M3_D6rRrg_ep=Error
>> >> accessing memory address 0x8000ffffb838: Bad address.
>> >> ) at ../src/QMachine.m3:1666
>> >> #11 0x00000000004d6220 in QMachine__ExecCommand (M3_An02H2_t=Error
>> >> accessing memory address 0x8000ffffb9f8: Bad address.
>> >> ) at ../src/QMachine.m3:1605
>> >> #12 0x00000000004d537e in QMachine__DoTryExec (M3_An02H2_t=Error accessing
>> >> memory address 0x8000ffffbee8: Bad address.
>> >> ) at ../src/QMachine.m3:1476
>> >>
>> >> What am I doing wrong here?
>> >>
>> >> The error doesn't look unreasonable!  Looking more closely at the code:
>> >>
>> >> First, AtForkPrepare has been called:
>> >>
>> >> PROCEDURE AtForkPrepare() =
>> >>   VAR me := GetActivation();
>> >>       act: Activation;
>> >>   BEGIN
>> >>     PThreadLockMutex(slotsMu, ThisLine());
>> >>     PThreadLockMutex(perfMu, ThisLine());
>> >>     PThreadLockMutex(initMu, ThisLine()); (* InitMutex => RegisterFinalCleanup => LockHeap *)
>> >>     PThreadLockMutex(heapMu, ThisLine());
>> >>     PThreadLockMutex(activeMu, ThisLine()); (* LockHeap => SuspendOthers => activeMu *)
>> >>     (* Walk activations and lock all threads.
>> >>      * NOTE: We have initMu, activeMu, so slots won't change, conditions and
>> >>      * mutexes won't be initialized on-demand.
>> >>      *)
>> >>     act := me;
>> >>     REPEAT
>> >>       PThreadLockMutex(act.mutex, ThisLine());
>> >>       act := act.next;
>> >>     UNTIL act = me;
>> >>   END AtForkPrepare;
>> >>
>> >> a postcondition of this routine is that heapMu is locked.
>> >>
>> >> now we get into AtForkParent:
>> >>
>> >> PROCEDURE AtForkParent() =
>> >>   VAR me := GetActivation();
>> >>       act: Activation;
>> >>       cond: Condition;
>> >>   BEGIN
>> >>     (* Walk activations and unlock all threads, conditions. *)
>> >>     act := me;
>> >>     REPEAT
>> >>       cond := slots[act.slot].join;
>> >>       IF cond # NIL THEN PThreadUnlockMutex(cond.mutex, ThisLine()) END;
>> >>       PThreadUnlockMutex(act.mutex, ThisLine());
>> >>       act := act.next;
>> >>     UNTIL act = me;
>> >>     PThreadUnlockMutex(activeMu, ThisLine());
>> >>     PThreadUnlockMutex(heapMu, ThisLine());
>> >>     PThreadUnlockMutex(initMu, ThisLine());
>> >>     PThreadUnlockMutex(perfMu, ThisLine());
>> >>     PThreadUnlockMutex(slotsMu, ThisLine());
>> >>   END AtForkParent;
>> >>
>> >> We can see by inspecting the code that a necessary precondition for
>> >> this routine is that heapMu is locked!  (Since it's going to unlock it,
>> >> it had BETTER be locked on entry.)
>> >>
>> >> But the cond := ... causes a RTHooks.CheckLoadTracedRef
>> >>
>> >> which causes an RTOS.LockHeap
>> >>
>> >> the code of which we just saw:
>> >>
>> >> PROCEDURE LockHeap () =
>> >>   VAR self := pthread_self();
>> >>   BEGIN
>> >>     WITH r = pthread_mutex_lock(heapMu,ThisLine()) DO <*ASSERT r=0*> END;
>> >> ...
>> >>
>> >> we try to lock heapMu.  kaboom!  No surprise there, really?
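>> >>
>> >> (For comparison, the discipline pthread_atfork(3) expects looks roughly
>> >> like the sketch below -- a hand-written illustration with made-up handler
>> >> names, not our runtime's code: prepare takes the locks, and the
>> >> parent/child handlers only release them; they must not allocate or
>> >> re-enter anything that takes the same locks.)
>> >>
>> >> #include <pthread.h>
>> >>
>> >> static pthread_mutex_t heap_mu = PTHREAD_MUTEX_INITIALIZER;
>> >>
>> >> static void prepare_handler(void) { pthread_mutex_lock(&heap_mu); }
>> >> static void parent_handler(void)  { pthread_mutex_unlock(&heap_mu); }  /* unlock only */
>> >> static void child_handler(void)   { pthread_mutex_unlock(&heap_mu); }  /* unlock only */
>> >>
>> >> static void install_fork_handlers(void)
>> >> {
>> >>     /* registered once; fork() then runs prepare/parent/child around itself */
>> >>     pthread_atfork(prepare_handler, parent_handler, child_handler);
>> >> }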
>> >>
>> >> Am I going about this totally the wrong way?  Other people are running
>> >> Modula-3 with pthreads, right?  Right??  Somewhere out there in
>> >> m3devel-land?
>> >>
>> >>      Mika
>> >>
>> >>
>> >




More information about the M3devel mailing list