<div dir="ltr">Mika<div><br></div><div> I think you need to back out Tony's changes to fix the fork bug, at least for now. You'll need to reload ThreadPThread.m3 from CVS revision 1.262. If you're using git, you're on your own.</div>
<div> Do a cvs log ThreadPThread.m3 for an explanation of some of the design principles. </div><div> Also, if you use gdb, you need to do "set lang c" before backtraces so you can at least see addresses, names, and values; even where they're contorted you can extract the M3 names from the parameter lists. And "thread apply all bt" in gdb gives backtraces for all threads, which is handy for seeing who's got the locks held.</div>
<div><br></div><div>Regards Peter</div><div><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Aug 13, 2014 at 12:14 PM, <span dir="ltr"><<a href="mailto:mika@async.caltech.edu" target="_blank">mika@async.caltech.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
Question... is there something odd about my pthreads? Are pthreads normally reentrant? I didn't think so.<br>
<br>
My compiler is much happier with the following changes I already outlined:<br>
<br>
1. a dirty, disgusting hack to keep from deadlocking against myself when going from XWait (with self.mutex locked) into m.release().<br>
<br>
2. the change to LockHeap I described in previous email<br>
<br>
But now...<br>
<br>
(gdb) cont<br>
Continuing.<br>
ERROR: pthread_mutex_lock:11<br>
ERROR: pthread_mutex_lock:11<br>
<br>
Program received signal SIGABRT, Aborted.<br>
0x000000080107626a in thr_kill () from /lib/libc.so.7<br>
(gdb) where<br>
#0 0x000000080107626a in thr_kill () from /lib/libc.so.7<br>
#1 0x000000080113dac9 in abort () from /lib/libc.so.7<br>
#2 0x000000000071e37a in ThreadPThread__pthread_mutex_lock (mutex=Error accessing memory address 0x8000ffffb508: Bad address.<br>
) at ../src/thread/PTHREAD/ThreadPThreadC.c:543<br>
#3 0x000000000071d48d in RTOS__LockHeap () at ../src/thread/PTHREAD/ThreadPThread.m3:1377<br>
#4 0x0000000000706b9d in RTHooks__CheckLoadTracedRef (M3_Af40ku_ref=Error accessing memory address 0x8000ffffb568: Bad address.<br>
) at ../src/runtime/common/RTCollector.m3:2234<br>
#5 0x000000000071d284 in ThreadPThread__AtForkParent () at ../src/thread/PTHREAD/ThreadPThread.m3:1348<br>
#6 0x0000000800df8733 in fork () from /lib/libthr.so.3<br>
#7 0x000000000070dd8b in RTProcess__Fork () at ../src/runtime/common/RTProcessC.c:152<br>
#8 0x00000000006c52f2 in ProcessPosixCommon__Create_ForkExec (M3_Bd56fi_cmd=Error accessing memory address 0x8000ffffb6f8: Bad address.<br>
) at ../src/os/POSIX/ProcessPosixCommon.m3:75<br>
#9 0x00000000006c6c6c in Process__Create (M3_Bd56fi_cmd=Error accessing memory address 0x8000ffffb7f8: Bad address.<br>
) at ../src/os/POSIX/ProcessPosix.m3:21<br>
#10 0x00000000004d6826 in QMachine__FulfilExecPromise (M3_D6rRrg_ep=Error accessing memory address 0x8000ffffb838: Bad address.<br>
) at ../src/QMachine.m3:1666<br>
#11 0x00000000004d6220 in QMachine__ExecCommand (M3_An02H2_t=Error accessing memory address 0x8000ffffb9f8: Bad address.<br>
) at ../src/QMachine.m3:1605<br>
#12 0x00000000004d537e in QMachine__DoTryExec (M3_An02H2_t=Error accessing memory address 0x8000ffffbee8: Bad address.<br>
) at ../src/QMachine.m3:1476<br>
<br>
What am I doing wrong here?<br>
<br>
The error doesn't look unreasonable! Looking more closely at the code:<br>
<br>
First, AtForkPrepare has been called:<br>
<br>
PROCEDURE AtForkPrepare() =<br>
  VAR me := GetActivation();<br>
      act: Activation;<br>
  BEGIN<br>
    PThreadLockMutex(slotsMu, ThisLine());<br>
    PThreadLockMutex(perfMu, ThisLine());<br>
    PThreadLockMutex(initMu, ThisLine()); (* InitMutex => RegisterFinalCleanup => LockHeap *)<br>
    PThreadLockMutex(heapMu, ThisLine());<br>
    PThreadLockMutex(activeMu, ThisLine()); (* LockHeap => SuspendOthers => activeMu *)<br>
    (* Walk activations and lock all threads.<br>
     * NOTE: We have initMu, activeMu, so slots won't change, conditions and<br>
     * mutexes won't be initialized on-demand.<br>
     *)<br>
    act := me;<br>
    REPEAT<br>
      PThreadLockMutex(act.mutex, ThisLine());<br>
      act := act.next;<br>
    UNTIL act = me;<br>
  END AtForkPrepare;<br>
<br>
A postcondition of this routine is that heapMu is locked.<br>
<br>
now we get into AtForkParent:<br>
<br>
PROCEDURE AtForkParent() =<br>
  VAR me := GetActivation();<br>
      act: Activation;<br>
      cond: Condition;<br>
  BEGIN<br>
    (* Walk activations and unlock all threads, conditions. *)<br>
    act := me;<br>
    REPEAT<br>
      cond := slots[act.slot].join;<br>
      IF cond # NIL THEN PThreadUnlockMutex(cond.mutex, ThisLine()) END;<br>
      PThreadUnlockMutex(act.mutex, ThisLine());<br>
      act := act.next;<br>
    UNTIL act = me;<br>
    PThreadUnlockMutex(activeMu, ThisLine());<br>
    PThreadUnlockMutex(heapMu, ThisLine());<br>
    PThreadUnlockMutex(initMu, ThisLine());<br>
    PThreadUnlockMutex(perfMu, ThisLine());<br>
    PThreadUnlockMutex(slotsMu, ThisLine());<br>
  END AtForkParent;<br>
<br>
We can see by inspecting the code that a necessary precondition for<br>
this routine is that heapMu is locked! (Since it's going to unlock it,<br>
it had BETTER be locked on entry.)<br>
<br>
But the cond := ... assignment causes an RTHooks.CheckLoadTracedRef call,<br>
<br>
which in turn calls RTOS.LockHeap,<br>
<br>
the code of which we just saw:<br>
<br>
PROCEDURE LockHeap () =<br>
  VAR self := pthread_self();<br>
  BEGIN<br>
    WITH r = pthread_mutex_lock(heapMu, ThisLine()) DO <*ASSERT r=0*> END;<br>
    ...<br>
<br>
we try to lock heapMu, which this same thread already holds. Kaboom! No surprise there, really.<br>
<br>
Am I going about this totally the wrong way? Other people are running Modula-3<br>
with pthreads, right? Right?? Somewhere out there in m3devel-land?<br>
<span><font color="#888888"><br>
Mika<br>
<br>
</font></span></blockquote></div><br></div></div>