Building and Using A Cross Development Tool Chain
Robert Schiele
rschiele@uni-mannheim.de
Abstract

1 Motivation

other development machines that are of the same architecture and operating system, because you cannot mix up object files generated for different platforms.

• As a developer, you normally want to support multiple platforms, but in most cases you have a large number of fast machines for one platform and only a few slow machines for another one. If you used only the system compiler in that case, you would end up with long compilation times for those platforms where you only have a few slow machines.

• Last but not least, you often also want to build for a different glibc release etc. than the one installed on your system, for compatibility reasons. This is also not possible in all cases with a system compiler pre-configured for your system's binutils release and other system-specific parameters.

1.3 Compiling for a Foreign Platform

characteristics require them to be handled specially when used in a cross development tool chain. In section 3, we will show what must be done to build a complete cross development tool chain, along with some tricks to work around common problems. In section 4, we show how to integrate the cross development tool chain into build systems to gain a more efficient development tool chain. Finally, we will draw some conclusions from our thoughts in the last section.

2 How a Compiler Works

To understand how a compiler works, and thus what we have to set up for a cross compiler, we need to have a look at the C development tool chain. This is normally not a monolithic tool that is fed C sources and produces executables, but consists of a chain of tools, where each of these tools performs a specific transformation. An overview of this tool chain can be found in Figure 1. In the following, I will show those parts and explain what they do.
the front end and the back end, where the latter is not independent.

3 Building the tool chain

3.1 The Binutils

The simplest thing to start with is the binutils package, because they depend neither on the gcc compiler nor on the glibc of the destination platform. And we need them anyway when we want to build object files for the destination platform, which is obviously done for the glibc, but even gcc provides a library with some primitive functionality for operations that are too complex for the destination platform's processor to execute directly.

As long as there is no hard bug in the binutils package used, this step is quite unlikely to fail, as there are no dependencies on other tools of the tool chain we build. For the following parts we should expect some trouble because of intrinsic dependencies between gcc and glibc.

From this point on, we should add the bin/ directory from our installation directory to $PATH, as the following steps will need the tools installed there.
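The binutils build commands themselves are not reproduced in this excerpt; following the configure style used for gcc later in the text, a sketch might look like this (the binutils release, the out-of-tree build directory, and the option set are my assumptions, not the paper's literal commands):

```shell
# Sketch under assumptions: binutils 2.13.2 is an illustrative release
# from the same era as the gcc 3.2.3 used in the text.
mkdir build-binutils && cd build-binutils
../binutils-2.13.2/configure \
    --prefix=/local/cross \
    --target=powerpc-linux
make
make install
# Make the freshly installed cross tools (powerpc-linux-as, -ld, ...) visible:
export PATH=/local/cross/bin:$PATH
```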
the C library. This cycle in the dependency graph can be seen in Figure 2. We can resolve this cycle by introducing a simple C compiler that does not ship these additional libraries, so that we get the dependencies shown in Figure 3. But because of the reason mentioned above, for most configurations we cannot even build a simple C-only compiler. That means we can build the compiler itself, but the support libraries might fail. So we just start by doing

    CFLAGS="-O2 -Dinhibit_libc"
    ../gcc-3.2.3/configure
        --enable-languages=c
        --prefix=/local/cross
        --target=powerpc-linux
        --disable-nls
        --disable-multilib
        --disable-shared
        --enable-threads=single

and then starting the actual build with make. The configure command disables everything that is not absolutely necessary for building the C library, in order to keep the possible problems to a minimum. Sometimes it also helps to set the inhibit_libc macro to tell the compiler that there is no libc yet, so we add this as well. In case the build completes without an error, we are lucky and can just continue with building the C library, after doing a make install first.

files to the destination directory, by removing the failing parts from the makefiles and continuing the build afterwards, or by just touching the files that fail to build. The last option forces make to silently build and install corrupted libraries, but if we keep this in mind, it is not really problematic, as we can just rebuild the whole thing later and thus replace the broken parts with sane ones.

The simplest way of installing an incomplete compiler when using GNU make is calling make and make install with the additional parameter -k, so that make automatically continues on errors. This will then just skip the failing parts, i.e. the support libraries.

3.3 The C Library

After having built a simple C compiler, we can build the C library. It has already been said that this might need to be part of an iterative build process together with the compiler itself. To build the glibc we also need some kernel headers, so we unpack the kernel sources somewhere and do some basic configuration by typing

    make ARCH=ppc symlinks include/linux/version.h
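The glibc configure invocation itself is missing from this excerpt; reconstructed from the parameters discussed in the following paragraphs (a target --host, a target-specific --prefix, the kernel header location, disabled profiling, the linuxthreads add-on), it might look roughly like this. The glibc release number and exact paths are my assumptions, not the paper's original command:

```shell
# Sketch under assumptions: glibc 2.3.x option spellings, illustrative paths.
mkdir build-glibc && cd build-glibc
CC=powerpc-linux-gcc ../glibc-2.3.2/configure \
    --host=powerpc-linux \
    --prefix=/local/cross/powerpc-linux \
    --with-headers=/local/cross/powerpc-linux/include \
    --disable-profile \
    --enable-add-ons=linuxthreads
```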
and do the usual make and make install stuff.

Note that the --host parameter is different here from the tools, as the glibc should actually run on the target platform and not, like the tools, on the build host. The --prefix is also different, as the glibc has to be placed into the target-specific subdirectory within the installation directory, and not directly into the installation directory. Additionally, we have to tell configure where to find the kernel headers, and that we do not need profiling support but want the add-ons like linuxthreads enabled.

In case building the full glibc fails because building the C compiler was incomplete before, the same hints apply for installing the incomplete library that were explained for the incomplete compiler. Additionally, it might help to touch the file powerpc-linux/include/gnu/stubs.h within the installation directory, in case it does not exist yet. This file does not contain important information for building the simple C compiler, but for some platforms it just has to be there, because other files used during the build include it.

After installation of the glibc (even the incomplete one), we also have to install the kernel headers manually by copying include/linux to powerpc-linux/include/linux within the installation directory and include/asm-ppc to powerpc-linux/include/asm. The latest kernels also want include/asm-generic to be copied to powerpc-linux/include/asm-generic. Other systems than Linux might have similar requirements.

3.4 A Full-featured Compiler

After we have a complete C library, we can build the full-featured compiler. That means we now rebuild the compiler, but with all the languages and runtime libraries we want to have included.

With a complete C library, this should be no problem any more, so we should manage to do it by just typing

    ../gcc-3.2.3/configure
        --enable-languages=c,c++,f77,objc
        --prefix=/local/cross
        --disable-libgcj
        --with-gxx-include-dir=/local/cross/include/g++
        --with-system-zlib
        --enable-shared
        --enable-__cxa_atexit
        --target=powerpc-linux

and again doing the build and installation by make and make install.

4 Using the Tool Chain on a Cluster

We now have a full-featured cross development tool chain. We can use these tools by just putting the bin/ path where we installed them into the system's search path and calling them by the tool name with the platform name prefixed; e.g. for calling gcc as a cross compiler for platform powerpc-linux, we call powerpc-linux-gcc. The tools should behave in the same way the native tools on the host system do, except that they produce code for a different platform.
cations. There are various methods for doing so. In the following we will show two of them.

4.1 Using a Parallel Virtual Machine (PVM)

We gain the most scalability by dispatching all jobs that produce some workload to the nodes in the cluster. make is a wonderful tool to do so. A long time ago, Stephan Zimmermann implemented a tool called ppmake that behaved like a simple shell and distributed the commands to execute to the nodes of a cluster based on PVM. He stopped development of the tool in 1997. As I wanted to have some improvements for the tool, I agreed with him to put the tool under the GPL and started to implement some improvements. You can fetch the current development state from [ppm], but note that the documentation is really out of date and that I have also stopped further development, for several reasons.

If you want to use this tool, you just have to fetch the package, build it, and tell make to use this shell instead of the standard /bin/sh shell by setting the make variable SHELL to the ppmake executable. Obviously you have to set up a PVM cluster beforehand to make this work. Information on how to set up a PVM cluster can be found at [PVMa]. To gain something from your cluster you should also do parallel builds by specifying the parameter -j on the make command line.

The CVS head revision replaced ppmconnect by the integrated binary ppmake.

There is also a script provided in the package that does most of these things automatically, but I do not like the way this script handles the process, so I do not use it personally, and as such it has become a bit out of date.

Note that there is also a similar ongoing project [PVMb] by Jean Labrousse, which aims at integrating similar functionality directly into GNU make. You may want to consider looking at this project as well.

You should note that this approach requires all files used in the build process to be available on the whole cluster within a homogeneous file system structure, for example by placing them on an NFS server and mounting it on all nodes at the same place. Additionally, it is necessary that all commands used within the makefiles behave in the same way on all nodes of the cluster. Otherwise, you will get random results, which is most likely not what you want. This means you should always call the platform-specific compiler explicitly, e.g. powerpc-linux-gcc instead of gcc, and the same releases of the compiler, the linker and the libraries should be installed on all nodes.

4.2 Using distcc
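The mechanism ppmake hooks into can be demonstrated without a PVM cluster: make runs each recipe command through $(SHELL), so substituting a distributing shell transparently farms the commands out to the nodes. The stand-in wrapper below (my illustration, not part of ppmake) merely logs each dispatch and runs the command locally:

```shell
# A stand-in "distributing" shell: log what make asked for, then run it
# locally via /bin/sh. ppmake would send the command to a cluster node instead.
cat > dispatch-shell.sh <<'EOF'
#!/bin/sh
echo "dispatched: $*" >> dispatch.log
exec /bin/sh "$@"
EOF
chmod +x dispatch-shell.sh

# Two independent targets, so -j 2 can run them in parallel. The "&& true"
# forces GNU make to go through $(SHELL) instead of exec'ing the command
# directly, which is what a SHELL-based distributor relies on.
cat > Makefile <<'EOF'
all: a b
a: ; @echo building a && true
b: ; @echo building b && true
EOF

make SHELL="$(pwd)/dispatch-shell.sh" -j 2 all
```

After the run, dispatch.log contains one line per recipe command, which is exactly the interception point where ppmake substitutes remote execution.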
are directly executed on the system where the build process was invoked. Although this limits the amount of workload that really runs in parallel, this is in most cases not a real problem, as most build processes spend most of their time on compilation anyway.

The advantage of this approach is that you only need to have the cross compiler and assembler on each node. Include files and libraries are necessary only on the system on which the build is invoked.

Such an approach is implemented in Martin Pool's distcc package [dis]. This tool is a replacement for the gcc compiler driver. Preprocessing and linking are done almost the same way the standard compiler driver does them, but the actual compile and assemble jobs are distributed among various nodes on the network.

Although this solution obviously does not give the same amount of scalability, as not all jobs can be parallelized, it is for most situations the better solution, as from my experience it seems that many system administrators are not capable of installing a homogeneous build environment on a cluster of systems.

At least if you have a number of systems for office jobs idling almost all of the time, it is worth investing some time in building up such an infrastructure to use their CPU power for your build processes.

As this is a tutorial paper, its contents are intended for people who do not have extensive knowledge of the topic described, to help them understand it. If you think something is unclear, some information should be added, or you find an error, please send a mail to rschiele@uni-mannheim.de.

References

[ASU86] A.V. Aho, R. Sethi, and J.D. Ullman. Compilers: Principles, Techniques, and Tools. Addison-Wesley, Reading, MA, 1986.

[Bin] GNU Binutils. http://sources.redhat.com/binutils/.

[dis] distcc: a fast, free distributed C and C++ compiler. http://distcc.samba.org/.