[M3devel] naming convention unix vs. grumpyunix?

Jay K jay.krell at cornell.edu
Thu Jun 23 20:33:47 CEST 2016


Not really.
My goal is to be as close as possible to:
    tar xf foo.tar.gz
    mkdir bar # optional
    cd bar # or foo
    ../foo/configure
    make
    sudo make install
Nagging questions:
    Is there one foo.tar.gz for everyone, with autoconf picking the right part of it, or do people pick the "correct" one for their system?
    Are there a few such files -- target.tar.gz, the-rest-m3.tar.gz, maybe m3cc.tar.gz? This is how 3.6 was structured, though back then quake was written in C; not sure it matters.
    Do we first build cm3 and then the rest of the system using it, or do we just use "make" to build everything? (The build-cm3-first option is sketched below.)
I can imagine how to build everything from assembly using autoconf/libtool/make, but I only want to do that if cm3 also reuses the same infrastructure. Sometimes I also think of giving up on dynamic linking, since that is one of the bigger thorns.
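To make the build-cm3-first option concrete, here is a minimal sketch; the archive names, layout, and driver script are hypothetical, not an actual cm3 distribution:

    # Stage one: build a minimal cm3 from pre-generated per-target sources.
    tar xf cm3-boot.tar.gz        # hypothetical archive of assembly (or C) for cm3 only
    cd cm3-boot
    ./configure                   # probes cc/as/ld, selects the matching target directory
    make
    sudo make install

    # Stage two: the installed cm3 builds m3core, libm3, and the rest
    # from the target-independent Modula-3 sources.
    tar xf cm3-src.tar.gz         # hypothetical source archive
    cd cm3-src
    ./build-with-cm3.sh           # hypothetical driver that invokes cm3 per package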

 - Jay

From: dmuysers at hotmail.com
To: jay.krell at cornell.edu; hosking at purdue.edu
CC: m3devel at elegosoft.com
Subject: Re: [M3devel] naming convention unix vs. grumpyunix?
Date: Thu, 23 Jun 2016 08:33:00 +0200







Did you ever consider 0install as a means of distribution?


 

From: Jay K 
Sent: Thursday, June 23, 2016 2:38 AM
To: Hosking, Antony L 
Cc: m3devel 
Subject: Re: [M3devel] naming convention unix vs. grumpyunix?
 


This is a bit long and out of order, sorry.
The simple story is for us to get out of the platform-specific build system maintenance business, and reuse the broader portability work of other projects.
 
 
 
I've been wrestling with this in my head a long while. 
 
 
- I don't like maintaining the config files.
   It is hard to be an expert on dynamic linking across "many" operating systems, linkers, and versions.
 
 
- I don't like that, for example, an AIX port remains absent.
   And now I see AIX doesn't have $ORIGIN.
 
 
- It bothers me just slightly that we aren't portable
   to the older systems that lack $ORIGIN.

   $ORIGIN is nice if you are redistributing binaries
   that will be moved around, but it was never needed
   for self-built software, or software installed to
   an agreed-upon place, and it isn't supported in setuid
   and similar programs.
 
  (Aside -- and remember how bad it used to be?
   We used to distribute binaries with random hardcoded paths,
   and advise people to set LD_LIBRARY_PATH. Even for stuff people
   self-built, it wasn't good. So I did improve things,
   but I don't think it is worth us doing this ourselves.)
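   For illustration, roughly what the two approaches look like on an ELF system (a hedged sketch; flags vary by platform and linker, and $ORIGIN is ignored for setuid programs and absent on e.g. AIX):

     # relocatable: embed an $ORIGIN-relative rpath at link time, so the
     # binary finds its libraries wherever the whole tree is moved
     cc -o cm3 ... -Wl,-rpath,'$ORIGIN/../lib'

     # the old workaround: make the user point the loader at the libraries
     export LD_LIBRARY_PATH=/where/you/unpacked/cm3/lib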
 
 
- Our current bootstrap/cross-build story isn't automated enough. 
   And then, what should it look like? 
 
 
- Generating cmake or autoconf/automake/libtool input provides some potential answers.

   I'd really like to delegate to folks that did and will continue to port pretty much everywhere.
   Sometimes I think, hey, we can just do what we need ourselves, but then I see how much
   gnarly system-specific knowledge autotools/cmake delivers nicely to their users.
 
   
   I had a mental stumbling block for years with cmake/autotools but finally
   got over it. I have prototyped some simple uses, both with recursive
   make and non-recursive make.
 
 
   configure is a bit slow, but we'd have a very minimal one.
   The resulting make invocations are ok.
 
 
   I can almost just generate makefiles myself, but then, for example,
   I don't know much about "install". cmake/automake provide "install"
   without me knowing anything.
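   For instance, with automake-generated makefiles the whole install story comes along for free -- this is standard autotools behavior, not anything specific to our tree:

     ./configure --prefix=/usr/local
     make
     make install                     # copies programs/libraries into the prefix
     make install DESTDIR=/tmp/stage  # staged install, handy for packagers
     make uninstall                   # removes exactly what install put there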
 
 
   I don't really want to be an expert in make, compiler flags, linker flags,
   Posix portability gotchas, etc. -- ok, maybe at the libc/m3core level, but
   not so much at the make/sed/awk/sh level.
 
 
  There are a few details of autoconf/cmake/libtool I don't like, where the Modula-3
  build system is clearly and simply superior. And other areas where I'm not
  sure what is ideal.
 
 
  Where Modula-3 is clearly superior: in producing static and dynamic
  libraries, it only ever compiles each file once. cmake and libtool are pretty keen
  on compiling everything twice -- even with identical command lines.
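  Roughly what that looks like under libtool (illustrative; exact flags and paths vary by platform):

    # by default libtool compiles each source twice, once PIC for the shared
    # library and once non-PIC for the static archive:
    libtool --mode=compile cc -c foo.c
    #   cc -c foo.c -fPIC -DPIC -o .libs/foo.o   (goes into libfoo.so)
    #   cc -c foo.c -o foo.o                     (goes into libfoo.a)
    # configuring with --disable-static (or --disable-shared) drops one of the two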
 
 
  Where I'm not sure is our probing for libraries and the build_standalone feature.
  I think if we did things a little differently/better, we wouldn't even have cm3
  be standalone.
 
 
  I very much want to offer to users the:
    tar xf cm3...
    cd cm3... 
    configure 
    make 
    make install 
 
 
sort of experience. 
 
 
There are slight difficulties. 
configure probes the C compiler for what it produces.
Let's ignore C-backend and LLVM for now and consider cm3cg.
 
 
The likely best bootstrap format is assembly source, like the 3.6 release.

For just cm3/m3core/libm3, or the entire system? 
 
 
So configure probing vs. having on hand possibly just one assembly source is a bit of a misfit.
 
 
Perhaps configure would be tailored to hardcode what the distribution contains.
 
 
Or perhaps the distribution would contain "everything" and configure would pick the right one.
That is obviously wasteful, but these days maybe ok, and the result is easier
for people to install.
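A minimal sketch of that "ship everything, let configure pick" idea, assuming the C compiler reports its target (gcc/clang -dumpmachine) and a hypothetical per-target source layout:

    target=`${CC:-cc} -dumpmachine`          # e.g. x86_64-pc-linux-gnu
    case "$target" in
      x86_64*-linux*)   M3TARGET=AMD64_LINUX ;;
      x86_64*-darwin*)  M3TARGET=AMD64_DARWIN ;;
      i?86*-linux*)     M3TARGET=I386_LINUX ;;
      *) echo "no pre-generated sources for $target" >&2; exit 1 ;;
    esac
    # only boot/$M3TARGET/*.s (or *.c) then gets assembled and linked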
 
 
The C-generating backend doesn't fix this much or entirely, since the C is still target-specific.
Maybe we can fold the C down to just a few platforms, and then the idea of one
cross-platform distribution might work. Maybe eventually the generated C can speak
in "integer" and array/struct references, instead of front-end computed offsets,
but that is a ways off.
 
 
autotools/libtool also solve that problem where non-shipped binaries don't run.
Something we have hacked on by sprinkling build_standalone around.
I'm not sure if cmake fixes this.
 
 
I'm not sure they solve it the way I want though -- I'd like to have the uninstalled
paths hardcoded, then relink or otherwise binary-edit on install.
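A hedged sketch of that on an ELF target (patchelf is just one possible tool here; libtool's own answer is to relink the binary against the installed paths during "make install"):

    # build with the build-tree library path hardcoded, so it runs uninstalled
    cc -o cm3 ... -Wl,-rpath,"$PWD/lib"
    # at install time, edit the path rather than relinking
    install -c cm3 /usr/local/bin/cm3
    patchelf --set-rpath /usr/local/lib /usr/local/bin/cm3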
 
 
One thing I need to study a bit more is how to install all the extra stuff to the pkg directories.

As well... so many things... we have this structure:
   bin/foo
   lib/foo.so (did I do this? No matter, the layout is weird w/o it.)
   pkg/foo/TARGET/foo.so
 
 
I have always found it a little suspicious that binaries have implicit TARGET
but pkgs have explicit TARGET. I somewhat pine for a layout that can accommodate
all targets, including the bin directory.
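Purely for illustration, the sort of layout I mean, with TARGET explicit everywhere (made-up paths):

    bin/AMD64_LINUX/cm3
    lib/AMD64_LINUX/libm3.so
    pkg/libm3/AMD64_LINUX/libm3.so
    bin/PPC_DARWIN/cm3            # a second target coexisting in the same tree
    lib/PPC_DARWIN/libm3.so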
 
 
I suppose if bin and lib are what run, and pkg is only for building, this accommodates
unshipped cross builds nicely. But ideally you could have a runnable
PPC_DARWIN/I386_DARWIN/AMD64_DARWIN system all in one structure (caveat that
PPC_DARWIN doesn't work under Rosetta because of our preemptive suspend --
cooperative suspend would fix that).
 
 
- Jay






From: hosking at purdue.edu
To: jay.krell at cornell.edu
CC: m3devel at elegosoft.com
Subject: Re: [M3devel] naming convention unix vs. grumpyunix?
Date: Wed, 22 Jun 2016 21:19:12 +0000


Why import dependencies on make and automake?

Sent from my iPad

On Jun 22, 2016, at 9:41 PM, Jay K <jay.krell at cornell.edu> wrote:



  
  

  I propose making unix match grumpyunix and removing grumpyunix. 
   
  There is slight upside and should be no downside.
   
  The upside is that various tools -- make and automake -- know that .s is assembly and will assemble it.
   
  Is it a downside for the base name to change, foo.m3 => foo_m.s/foo_m.o vs. foo.m3 => foo.ms/foo.mo?
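  For example, make's built-in rules (GNU make and most others) already know how to assemble .s with no extra help, roughly:

      as -o foo_m.o foo_m.s        # what "make foo_m.o" runs for a .s source
      # a .ms/.mo scheme needs hand-written suffix rules in every makefile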
   
  I expect everything will just work.
   
  - Jay
   
  








_______________________________________________
M3devel mailing list
M3devel at elegosoft.com
https://m3lists.elegosoft.com/mailman/listinfo/m3devel