// -*- mode:doc; -*-
// vim: set syntax=asciidoc:

[[configure]]
== Buildroot configuration

All the configuration options in +make *config+ have a help text
providing details about the option.

The +make *config+ commands also offer a search tool. Read the help
message in the different frontend menus to know how to use it:

* in _menuconfig_, the search tool is called by pressing +/+;
* in _xconfig_, the search tool is called by pressing +Ctrl+ + +f+.

The result of the search shows the help message of the matching items.
In _menuconfig_, numbers in the left column provide a shortcut to the
corresponding entry. Just type this number to directly jump to the
entry, or to the containing menu in case the entry is not selectable due
to a missing dependency.
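
For example, a minimal search session in _menuconfig_ could look like
the following (the +DROPBEAR+ keyword is only an illustration; any
configuration symbol or keyword can be searched for):

-----
$ make menuconfig
# press "/" and type a keyword, e.g. DROPBEAR
# the result list shows matching symbols such as BR2_PACKAGE_DROPBEAR,
# together with their help text, dependencies and location in the menus
-----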

Although the menu structure and the help text of the entries should be
sufficiently self-explanatory, a number of topics require additional
explanation that cannot easily be covered in the help text and are
therefore covered in the following sections.

=== Cross-compilation toolchain

A compilation toolchain is the set of tools that allows you to compile
code for your system. It consists of a compiler (in our case, +gcc+),
binary utils like assembler and linker (in our case, +binutils+) and a
C standard library (for example
http://www.gnu.org/software/libc/libc.html[GNU Libc],
http://www.uclibc-ng.org/[uClibc-ng]).

The system installed on your development station certainly already has
a compilation toolchain that you can use to compile an application
that runs on your system. If you're using a PC, your compilation
toolchain runs on an x86 processor and generates code for an x86
processor. Under most Linux systems, the compilation toolchain uses
the GNU libc (glibc) as the C standard library. This compilation
toolchain is called the "host compilation toolchain". The machine on
which it is running, and on which you're working, is called the "host
system" footnote:[This terminology differs from what is used by GNU
configure, where the host is the machine on which the application will
run (which is usually the same as target)].

The compilation toolchain is provided by your distribution, and
Buildroot has nothing to do with it (other than using it to build a
cross-compilation toolchain and other tools that are run on the
development host).

As said above, the compilation toolchain that comes with your system
runs on and generates code for the processor in your host system. As
your embedded system has a different processor, you need a
cross-compilation toolchain - a compilation toolchain that runs on
your _host system_ but generates code for your _target system_ (and
target processor). For example, if your host system uses x86 and your
target system uses ARM, the regular compilation toolchain on your host
runs on x86 and generates code for x86, while the cross-compilation
toolchain runs on x86 and generates code for ARM.

Buildroot provides two solutions for the cross-compilation toolchain:

 * The *internal toolchain backend*, called +Buildroot toolchain+ in
   the configuration interface.

 * The *external toolchain backend*, called +External toolchain+ in
   the configuration interface.

The choice between these two solutions is made using the +Toolchain
Type+ option in the +Toolchain+ menu. Once one solution has been
chosen, a number of configuration options appear; they are detailed in
the following sections.
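
In a saved +.config+ file, this choice shows up as one of two symbols,
roughly as sketched below (check the +Toolchain+ menu help text of your
Buildroot version for the exact symbol names):

-----
# Internal toolchain backend: Buildroot builds the toolchain itself
BR2_TOOLCHAIN_BUILDROOT=y

# ... or, external toolchain backend: reuse a pre-built toolchain
# BR2_TOOLCHAIN_EXTERNAL=y
-----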

[[internal-toolchain-backend]]
==== Internal toolchain backend

The _internal toolchain backend_ is the backend where Buildroot builds
a cross-compilation toolchain by itself, before building the userspace
applications and libraries for your target embedded system.

This backend supports several C libraries:
http://www.uclibc-ng.org[uClibc-ng],
http://www.gnu.org/software/libc/libc.html[glibc] and
http://www.musl-libc.org[musl].

Once you have selected this backend, a number of options appear. The
most important ones allow you to:

 * Change the version of the Linux kernel headers used to build the
   toolchain. This item deserves some explanation. In the process of
   building a cross-compilation toolchain, the C library is being
   built. This library provides the interface between userspace
   applications and the Linux kernel. In order to know how to "talk"
   to the Linux kernel, the C library needs to have access to the
   _Linux kernel headers_ (i.e. the +.h+ files from the kernel), which
   define the interface between userspace and the kernel (system
   calls, data structures, etc.). Since this interface is backward
   compatible, the version of the Linux kernel headers used to build
   your toolchain does not need to match _exactly_ the version of the
   Linux kernel you intend to run on your embedded system. It only
   needs to be equal to or older than the version of the Linux kernel
   you intend to run. If you use kernel headers that are more recent
   than the Linux kernel you run on your embedded system, then the C
   library might be using interfaces that are not provided by your
   Linux kernel.

 * Change the version of the GCC compiler, binutils and the C library.

 * Select a number of toolchain options (uClibc only): whether the
   toolchain should have RPC support (used mainly for NFS),
   wide-char support, locale support (for internationalization),
   C++ support or thread support. Depending on which options you choose,
   the number of userspace applications and libraries visible in
   Buildroot menus will change: many applications and libraries require
   certain toolchain options to be enabled. Most packages show a comment
   when a certain toolchain option is required to enable them. If
   needed, you can further refine the uClibc configuration by running
   +make uclibc-menuconfig+. Note however that all packages in Buildroot
   are tested against the default uClibc configuration bundled in
   Buildroot: if you deviate from this configuration by removing
   features from uClibc, some packages may no longer build. A sketch of
   the corresponding configuration symbols is shown after this list.
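
As an illustration, a typical internal-toolchain selection might end up
as a +.config+ fragment similar to the one below. This is a sketch
only: the symbol names are given from memory and may differ slightly
between Buildroot versions, so always check the help text of your
version.

-----
BR2_TOOLCHAIN_BUILDROOT=y
BR2_TOOLCHAIN_BUILDROOT_UCLIBC=y      # C library choice
BR2_TOOLCHAIN_BUILDROOT_WCHAR=y       # wide-char support
BR2_TOOLCHAIN_BUILDROOT_LOCALE=y      # locale support
BR2_TOOLCHAIN_BUILDROOT_CXX=y         # C++ support
-----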

It is worth noting that whenever one of those options is modified,
the entire toolchain and system must be rebuilt. See
xref:full-rebuild[].

Advantages of this backend:

* Well integrated with Buildroot
* Fast, only builds what's necessary

Drawbacks of this backend:

* Rebuilding the toolchain is needed when doing +make clean+, which
  takes time. If you're trying to reduce your build time, consider
  using the _External toolchain backend_.

[[external-toolchain-backend]]
==== External toolchain backend

The _external toolchain backend_ allows you to use existing pre-built
cross-compilation toolchains. Buildroot knows about a number of
well-known cross-compilation toolchains (from
http://www.linaro.org[Linaro] for ARM, and
http://www.mentor.com/embedded-software/sourcery-tools/sourcery-codebench/editions/lite-edition/[Sourcery
CodeBench] for ARM, x86-64, PowerPC, and MIPS) and is capable of
downloading them automatically, or it can be pointed to a custom
toolchain, either available for download or installed locally.

Then, you have three solutions to use an external toolchain:

* Use a predefined external toolchain profile, and let Buildroot
  download, extract and install the toolchain. Buildroot already knows
  about a few CodeSourcery and Linaro toolchains. Just select the
  toolchain profile in +Toolchain+ from the available ones. This is
  definitely the easiest solution.

* Use a predefined external toolchain profile, but instead of having
  Buildroot download and extract the toolchain, you can tell Buildroot
  where your toolchain is already installed on your system. Just
  select the toolchain profile in +Toolchain+ from the available
  ones, unselect +Download toolchain automatically+, and fill the
  +Toolchain path+ text entry with the path to your cross-compiling
  toolchain.

* Use a completely custom external toolchain. This is particularly
  useful for toolchains generated using crosstool-NG or with Buildroot
  itself. To do this, select the +Custom toolchain+ solution in the
  +Toolchain+ list. You need to fill in the +Toolchain path+, +Toolchain
  prefix+ and +External toolchain C library+ options. Then, you have
  to tell Buildroot what your external toolchain supports. If your
  external toolchain uses the 'glibc' library, you only have to tell
  whether your toolchain supports C\++ or not and whether it has
  built-in RPC support. If your external toolchain uses the 'uClibc'
  library, then you have to tell Buildroot if it supports RPC,
  wide-char, locale, program invocation, threads and C++.
  At the beginning of the build, Buildroot will tell you if
  the selected options do not match the toolchain configuration.
  A sample +.config+ fragment for this case is shown after this list.
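
As a sketch only (the symbol names are written from memory and may vary
between Buildroot versions, and the toolchain path and prefix below are
purely hypothetical), a custom external toolchain installed locally
could be described by a +.config+ fragment like:

-----
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
BR2_TOOLCHAIN_EXTERNAL_PREINSTALLED=y
BR2_TOOLCHAIN_EXTERNAL_PATH="/opt/my-toolchain"            # hypothetical path
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_PREFIX="arm-linux-gnueabihf" # tuple prefix of the tools
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_GLIBC=y                      # C library used by the toolchain
BR2_TOOLCHAIN_EXTERNAL_CXX=y                               # toolchain was built with C++ support
-----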

Our external toolchain support has been tested with toolchains from
CodeSourcery and Linaro, toolchains generated by
http://crosstool-ng.org[crosstool-NG], and toolchains generated by
Buildroot itself. In general, all toolchains that support the
'sysroot' feature should work. If not, do not hesitate to contact the
developers.

We do not support toolchains or SDKs generated by OpenEmbedded or
Yocto, because these toolchains are not pure toolchains (i.e. just the
compiler, binutils, the C and C++ libraries). Instead, these toolchains
come with a very large set of pre-compiled libraries and
programs. Therefore, Buildroot cannot import the 'sysroot' of the
toolchain, as it would contain hundreds of megabytes of pre-compiled
libraries that are normally built by Buildroot.

We also do not support using the distribution toolchain (i.e. the
gcc/binutils/C library installed by your distribution) as the
toolchain to build software for the target. This is because your
distribution toolchain is not a "pure" toolchain (i.e. only with the
C/C++ library), so we cannot import it properly into the Buildroot
build environment. So even if you are building a system for an x86 or
x86_64 target, you have to generate a cross-compilation toolchain with
Buildroot or crosstool-NG.

If you want to generate a custom toolchain for your project that can
be used as an external toolchain in Buildroot, our recommendation is
to build it either with Buildroot itself (see
xref:build-toolchain-with-buildroot[]) or with
http://crosstool-ng.org[crosstool-NG].

Advantages of this backend:

* Allows you to use well-known and well-tested cross-compilation
  toolchains.

* Avoids the build time of the cross-compilation toolchain, which is
  often very significant in the overall build time of an embedded
  Linux system.

Drawbacks of this backend:

* If your pre-built external toolchain has a bug, it may be hard to get
  a fix from the toolchain vendor, unless you build your external
  toolchain yourself using Buildroot or crosstool-NG.

[[build-toolchain-with-buildroot]]
==== Build an external toolchain with Buildroot

The Buildroot internal toolchain option can be used to create an
external toolchain. Here is a series of steps to build an internal
toolchain and package it up for reuse by Buildroot itself (or other
projects).

Create a new Buildroot configuration, with the following details (a
sample +.config+ fragment is shown after this list):

* Select the appropriate *Target options* for your target CPU
  architecture

* In the *Toolchain* menu, keep the default of *Buildroot toolchain*
  for *Toolchain type*, and configure your toolchain as desired

* In the *System configuration* menu, select *None* as the *Init
  system* and *none* as */bin/sh*

* In the *Target packages* menu, disable *BusyBox*

* In the *Filesystem images* menu, disable *tar the root filesystem*
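
For an ARM target, such a configuration might roughly translate into
the following +.config+ symbols. This is a sketch only: the
architecture symbol depends on your actual target, and the other symbol
names may vary between Buildroot versions.

-----
BR2_arm=y                              # target architecture (example)
BR2_TOOLCHAIN_BUILDROOT=y              # internal toolchain backend
BR2_INIT_NONE=y                        # Init system: None
BR2_SYSTEM_BIN_SH_NONE=y               # /bin/sh: none
# BR2_PACKAGE_BUSYBOX is not set
# BR2_TARGET_ROOTFS_TAR is not set
-----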

Then, we can trigger the build, and also ask Buildroot to generate an
SDK. This conveniently generates a tarball containing our toolchain:

-----
make sdk
-----

This produces the SDK tarball in +$(O)/images+, with a name similar to
+arm-buildroot-linux-uclibcgnueabi_sdk-buildroot.tar.gz+. Save this
tarball, as it is now the toolchain that you can re-use as an external
toolchain in other Buildroot projects.

In those other Buildroot projects, in the *Toolchain* menu (a matching
+.config+ sketch is shown after this list):

* Set *Toolchain type* to *External toolchain*

* Set *Toolchain* to *Custom toolchain*

* Set *Toolchain origin* to *Toolchain to be downloaded and installed*

* Set *Toolchain URL* to +file:///path/to/your/sdk/tarball.tar.gz+
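
Expressed as +.config+ symbols, this could look roughly like the
following (again a sketch from memory; the exact symbol names may
differ between Buildroot versions):

-----
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
BR2_TOOLCHAIN_EXTERNAL_DOWNLOAD=y
BR2_TOOLCHAIN_EXTERNAL_URL="file:///path/to/your/sdk/tarball.tar.gz"
-----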

===== External toolchain wrapper

When using an external toolchain, Buildroot generates a wrapper program
that transparently passes the appropriate options (according to the
configuration) to the external toolchain programs. In case you need to
debug this wrapper to check exactly what arguments are passed, you can
set the environment variable +BR2_DEBUG_WRAPPER+ to one of the
following values (see the example after this list):

* +0+, empty or not set: no debug

* +1+: trace all arguments on a single line

* +2+: trace one argument per line
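
For example, to have the wrapper trace one argument per line during a
build, the variable can simply be set in the environment of the +make+
invocation (any make target works the same way):

-----
BR2_DEBUG_WRAPPER=2 make
-----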

=== /dev management

On a Linux system, the +/dev+ directory contains special files, called
_device files_, that allow userspace applications to access the
hardware devices managed by the Linux kernel. Without these _device
files_, your userspace applications would not be able to use the
hardware devices, even if they are properly recognized by the Linux
kernel.

Under +System configuration+, +/dev management+, Buildroot offers four
different solutions to handle the +/dev+ directory:

 * The first solution is *Static using device table*. This is the old,
   classical way of handling device files in Linux. With this method,
   the device files are persistently stored in the root filesystem
   (i.e. they persist across reboots), and there is nothing that will
   automatically create and remove those device files when hardware
   devices are added or removed from the system. Buildroot therefore
   creates a standard set of device files using a _device table_, the
   default one being stored in +system/device_table_dev.txt+ in the
   Buildroot source code. This file is processed when Buildroot
   generates the final root filesystem image, and the _device files_
   are therefore not visible in the +output/target+ directory. The
   +BR2_ROOTFS_STATIC_DEVICE_TABLE+ option allows you to change the
   default device table used by Buildroot, or to add an additional
   device table, so that additional _device files_ are created by
   Buildroot during the build. So, if you use this method, and a
   _device file_ is missing in your system, you can, for example,
   create a +board/<yourcompany>/<yourproject>/device_table_dev.txt+
   file that contains the description of your additional _device
   files_, and then you can set +BR2_ROOTFS_STATIC_DEVICE_TABLE+ to
   +system/device_table_dev.txt
   board/<yourcompany>/<yourproject>/device_table_dev.txt+. For more
   details about the format of the device table file, see
   xref:makedev-syntax[] and the short example after this list.

 * The second solution is *Dynamic using devtmpfs only*. _devtmpfs_ is
   a virtual filesystem inside the Linux kernel that was introduced in
   kernel 2.6.32 (if you use an older kernel, it is not possible to use
   this option). When mounted in +/dev+, this virtual filesystem will
   automatically make _device files_ appear and disappear as hardware
   devices are added and removed from the system. This filesystem is
   not persistent across reboots: it is filled dynamically by the
   kernel. Using _devtmpfs_ requires the following kernel configuration
   options to be enabled: +CONFIG_DEVTMPFS+ and +CONFIG_DEVTMPFS_MOUNT+
   (see the kernel configuration fragment after this list). When
   Buildroot is in charge of building the Linux kernel for your
   embedded device, it makes sure that those two options are enabled.
   However, if you build your Linux kernel outside of Buildroot, then
   it is your responsibility to enable those two options (if you fail
   to do so, your Buildroot system will not boot).

 * The third solution is *Dynamic using devtmpfs + mdev*. This method
   also relies on the _devtmpfs_ virtual filesystem detailed above (so
   the requirement to have +CONFIG_DEVTMPFS+ and
   +CONFIG_DEVTMPFS_MOUNT+ enabled in the kernel configuration still
   applies), but adds the +mdev+ userspace utility on top of it. +mdev+
   is a program that is part of BusyBox and that the kernel will call
   every time a device is added or removed. Thanks to the
   +/etc/mdev.conf+ configuration file, +mdev+ can be configured to,
   for example, set specific permissions or ownership on a device file,
   call a script or application whenever a device appears or
   disappears, etc. Basically, it allows _userspace_ to react to device
   addition and removal events. +mdev+ can for example be used to
   automatically load kernel modules when devices appear on the
   system. +mdev+ is also important if you have devices that require
   firmware, as it will be responsible for pushing the firmware
   contents to the kernel. +mdev+ is a lightweight implementation (with
   fewer features) of +udev+. For more details about +mdev+ and the
   syntax of its configuration file, see
   http://git.busybox.net/busybox/tree/docs/mdev.txt.

 * The fourth solution is *Dynamic using devtmpfs + eudev*. This
   method also relies on the _devtmpfs_ virtual filesystem detailed
   above, but adds the +eudev+ userspace daemon on top of it. +eudev+
   is a daemon that runs in the background, and gets called by the
   kernel when a device gets added to or removed from the system. It is
   a more heavyweight solution than +mdev+, but provides more
   flexibility. +eudev+ is a standalone version of +udev+, the
   original userspace daemon used in most desktop Linux distributions,
   which is now part of Systemd. For more details, see
   http://en.wikipedia.org/wiki/Udev.
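
To illustrate the static device table format mentioned in the first
solution, an additional device table file could contain an entry along
these lines. This is a sketch only: +/dev/foo+, its major/minor numbers
and its permissions are hypothetical, and xref:makedev-syntax[]
documents the exact meaning of each column.

-----
# <name>   <type> <mode> <uid> <gid> <major> <minor> <start> <inc> <count>
/dev/foo    c      660    0     0     42      0       -       -     -
-----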
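
For the devtmpfs-based solutions, the two kernel options mentioned
above correspond to the following fragment of the Linux kernel
configuration:

-----
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
-----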

The Buildroot developers' recommendation is to start with the *Dynamic
using devtmpfs only* solution, until you need userspace to be notified
when devices are added/removed, or need firmware, in which case
*Dynamic using devtmpfs + mdev* is usually a good solution.

Note that if +systemd+ is chosen as init system, +/dev+ management will
be performed by the +udev+ program provided by +systemd+.

=== init system

The _init_ program is the first userspace program started by the
kernel (it carries the PID number 1), and is responsible for starting
the userspace services and programs (for example: web server,
graphical applications, other network servers, etc.).

Buildroot allows you to use three different types of init systems,
which can be chosen from +System configuration+, +Init system+:

 * The first solution is *BusyBox*. Amongst many programs, BusyBox has
   an implementation of a basic +init+ program, which is sufficient
   for most embedded systems. Enabling +BR2_INIT_BUSYBOX+ ensures that
   BusyBox builds and installs its +init+ program. This is the default
   solution in Buildroot. The BusyBox +init+ program will read the
   +/etc/inittab+ file at boot to know what to do. The syntax of this
   file can be found in
   http://git.busybox.net/busybox/tree/examples/inittab (note that
   BusyBox +inittab+ syntax is special: do not use random +inittab+
   documentation from the Internet to learn about BusyBox
   +inittab+). The default +inittab+ in Buildroot is stored in
   +system/skeleton/etc/inittab+. Apart from mounting a few important
   filesystems, the main job of the default inittab is to start the
   +/etc/init.d/rcS+ shell script and start a +getty+ program (which
   provides a login prompt). A small illustrative +inittab+ snippet is
   shown after this list.

 * The second solution is *systemV*. This solution uses the old,
   traditional _sysvinit_ program, packaged in Buildroot in
   +package/sysvinit+. This was the solution used in most desktop
   Linux distributions, until they switched to more recent
   alternatives such as Upstart or Systemd. +sysvinit+ also works with
   an +inittab+ file (which has a slightly different syntax than the
   one from BusyBox). The default +inittab+ installed with this init
   solution is located in +package/sysvinit/inittab+.

 * The third solution is *systemd*. +systemd+ is the new generation
   init system for Linux. It does far more than traditional _init_
   programs: it provides aggressive parallelization capabilities, uses
   socket and D-Bus activation for starting services, offers on-demand
   starting of daemons, keeps track of processes using Linux control
   groups, supports snapshotting and restoring of the system state,
   etc. +systemd+ will be useful on relatively complex embedded
   systems, for example the ones requiring D-Bus and services
   communicating with each other. It is worth noting that +systemd+
   brings in a fairly large number of big dependencies: +dbus+, +udev+
   and more. For more details about +systemd+, see
   http://www.freedesktop.org/wiki/Software/systemd.
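
For illustration, a BusyBox +inittab+ contains lines of the form
+<id>::<action>:<process>+. The snippet below is a simplified sketch in
the spirit of Buildroot's default +system/skeleton/etc/inittab+, not a
verbatim copy; in particular, the console device and +getty+ arguments
depend on your board:

-----
# id::action:process
::sysinit:/bin/mount -t proc proc /proc
::sysinit:/etc/init.d/rcS
console::respawn:/sbin/getty -L console 0 vt100
::shutdown:/bin/umount -a -r
-----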

The solution recommended by Buildroot developers is to use *BusyBox
init*, as it is sufficient for most embedded systems. *systemd* can be
used for more complex situations.
431