Learning Linux for embedded systems

Michael Eager is principal consultant at Eager Consulting in Palo Alto, Calif. He has over four decades of experience developing compilers, debuggers, and simulators for a wide range of processor architectures used in embedded systems. His current and former clients include major semiconductor companies and systems developers. Michael has been a member of the ISO C++ Standard Committee and ABI Committees for several processor architectures. He is chair of the Debugging Standards Committee for DWARF, a widely used debug data format. He is active in the open-source and Linux communities.

I was recently asked how a person with experience in embedded systems programming with 8-bit processors, such as PIC, as well as 32-bit processors, such as PowerPC, but no Linux experience, can learn how to use Embedded Linux.

What I always recommend to such an embedded systems programmer is this: Look at Embedded Linux as two parts, the embedded part and the Linux part. Let's consider the Linux part first.

Operating systems abound and the choices are many for an embedded system, both proprietary and open source. Linux is one of these choices. No matter what you use for your development host, whether Linux or Windows or Mac, you need to learn how to program using the target OS. In this respect, using Embedded Linux is not greatly different from using VxWorks, Windows CE, or another OS. You need an understanding of how the OS is designed, how to configure the OS, and how to program using its application programming interface (API).

A few factors make learning how to program Linux easier than other embedded OSes. You'll find many books and tutorials about Linux, as well as Unix from which it is derived – many more than for other OSes. Online resources for Linux are ample, while other OSes have a much smaller presence, or one driven by the OS manufacturer. Linux is open source, and you can read the code to get an understanding of exactly what the OS is doing, something that is often impossible with a proprietary OS distributed as binaries. (I certainly do not recommend reading Linux source to try to learn how to program Linux. That's like trying to learn to drive by studying how a car's transmission works.)

The most significant factor that sets Linux apart from other OSes is that the same kernel is used for all systems, from the smallest embedded boards, to desktop systems, to large server farms. This means that you can learn a large amount of Linux programming on your desktop, in an environment which is much more flexible than a target board, with all of the complexities of connecting to the target, downloading a test program, and running the test. All of the basic concepts and most APIs are the same for your desktop Linux and your Embedded Linux.

You could install a desktop Linux distribution on your development system, replacing your Windows or Mac system, but that may be a pretty large piece to bite off at one time, since you would likely need to configure email, learn new tools, and come up to speed with a different desktop interface. You could install Linux in a dual-boot environment, where you use the old environment for email, etc., and use the Linux system for learning. This can be pretty awkward, since you need to shut down one environment to bring up the other. Additionally, doing either within a corporate environment may be impractical or impossible. IT folks prefer supporting a known environment, not one that you have chosen.

An easier way is to create a virtual machine environment on your current development system. For Windows hosts, you can install VMware Player or VirtualBox, and on the Mac, you can install Parallels or VMware Fusion. Using a VM offers you much more flexibility. You can install a desktop Linux distribution, like Ubuntu or Fedora. You can use this distribution to become familiar with basic Linux concepts, learn the command shell and learn how to build and run programs. You can reconfigure the kernel or load drivers, without the concern that you'll crash your desktop system. You can build the entire kernel and application environment, similar to what you might do with a cross-development environment for an Embedded Linux target.

If your VM running Linux crashes, you simply restart the VM. The crash doesn't affect other things which you might be doing on your development system, such as reading a web page on how to build and install a driver, or writing an email to one of the many support mailing lists.

Some of the VM products have snapshot features that allow you to take a checkpoint of a known working configuration, to which you can roll back if you can't correct a crash easily. This snapshot is far easier than trying to rescue a crashing desktop system or an unresponsive target board.

A Linux VM running on your desktop is not a perfect model for an Embedded Linux environment. The VM emulates the hardware of a desktop system, with a limited set of devices that are unlikely to match a real embedded target. But our objective at this point is not modeling a real target (something we'll discuss later) but creating an environment where you can learn Linux concepts and programming easily.

This is the first step: Create a VM and install a desktop Linux distribution on the VM. We'll pick up from here in our next installment.

In the first part of this series, I outlined an approach to getting started with Embedded Linux for people with experience using non-Linux embedded systems. This starts with learning Linux in a desktop environment, running on a VMware or VirtualBox environment. One advantage that Linux has over other embedded OSes is that the same kernel is used on all systems, from smallest to largest, and many of the same utilities and libraries are available for both embedded and desktop environments.

Teaching Linux is far beyond the scope of a short article, but we can outline a road map to becoming acquainted with Linux on the desktop and talk about how this relates to Linux in an embedded environment. There are many good books which will introduce you to Linux. The important basic principles to familiarize yourself with are the command line, file system, directory structure, and process organization.

Most Linux system configuration and management is performed from the command line. On a desktop system, this means opening a terminal window and using the Bash shell. Commands start with the command name and usually accept options (generally preceded by a hyphen) followed by file names. Many command names are terse (like ls or cp), and can take a number of options, most of which you will rarely use. If you are familiar with the Windows CMD shell (or the MSDOS shell from which it evolved), a number of the commands are similar (like cd) but there frequently are subtle differences. At a minimum, you will need to know how to view files (cat, less, file), list and move between directories (ls, cd, pwd), and how to get help (man, info, apropos). My desktop Linux system has thousands of commands. You don't need to know more than a small number, but if there is something you want to do, it's likely there is a command to do it. The apropos command is a good way to find commands which might do what you want to do. Try running “man apropos” from the command line. You will also need to become familiar with an editor, such as vi, which can be used in a command shell.
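
A few minutes of experimenting in a terminal window will make these concrete. A first session might look something like this (output, of course, will vary from system to system):

  $ pwd                    # print the current working directory
  $ cd /tmp                # change to the /tmp directory
  $ ls -l                  # list files in long format
  $ less /etc/hostname     # view a file; press q to quit
  $ apropos compiler       # search the man pages by keyword
  $ man ls                 # read the manual page for ls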

On an Embedded Linux system, most likely you will not have a windowing system. Instead you will be interacting with BusyBox and the Ash shell, a small command line interpreter. BusyBox packages about 200 commands into a single executable program.

One of the design philosophies of Unix and Linux is its organization around a hierarchical file system. The root of this file system is named “/” and everything in the file system can be found starting here. Naturally, the file system holds regular files containing text or data, as well as programs. Additionally, it contains a variety of special “files” which may represent physical devices (like hard drives), interfaces created by drivers (such as virtual terminals), network connections, and more. Where other OSes may provide a programmatic interface to internal information about processes or memory, Linux provides a much simpler interface by representing this information as text files. The /proc directory, for example, contains a sub-directory for each currently running process which describes almost everything you might want to know about the process.
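
You can poke at this interface with nothing more than the file commands described above (an illustrative session; the values will differ on your system):

  $ cat /proc/cpuinfo        # processor description
  $ ls /proc/1               # everything about init, PID 1
  $ cat /proc/self/status    # status of the cat process itself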

The common directories are /boot, containing the boot program; /bin, containing essential programs; /sbin, containing programs usually run by the system administrator, root; /dev, containing devices (both real and virtual); /etc, containing system configuration files; /home, containing user files; /proc and /sys, with system information; /lib, containing libraries; /usr, containing not user files, but programs which may be run by users; /tmp, containing temporary files; and finally, /var, containing system logs. An Embedded Linux system will have the same organization, although occasionally some directories may be combined. It will have far fewer files than a desktop system.

Linux (and Unix) has a hierarchical process structure. The first process, init, has process ID (PID) one and is created by the Linux kernel when the system starts. Init, in turn, creates child processes which allow you to log into the system. These in turn start windowing systems or command shells, which in turn will spawn other processes. If you type “ps” into a command window, you will see a brief listing of the top level processes running in that window, usually just the ps command itself and the bash command line interpreter. Typing “ps -l” will give you more information, including the process ID of each process's parent (PPID), the user ID (UID) of the person running the program, and more. The “ps l” command will also print background processes. (A very few older commands inherited from Unix, like ps and tar, optionally omit the hyphen that precedes options. Unfortunately, for historical reasons, ps gives different output depending on whether you specify an option with or without the hyphen.) The “ps alx” command will give you a long list of every process running on your system, way more than you really want to know. (You might want to pipe this through less to page through the listing: “ps alx | less”.) You can also look through the /proc directory to see a different view of the processes on your system.

An Embedded Linux system has exactly the same process organization as your desktop system. The difference is that far fewer processes run on an embedded system than on a desktop system.

Wander around your Linux system running on the VM. Try listing files and running commands. Don't be afraid that you might break something; Linux is very robust. But you might take a snapshot of the system so that you can get back to a known stable system just in case.

Our next installment will talk about program development for Linux and Embedded Linux.

Just about every project is going to require using the GNU Compiler Collection (GCC), the GNU binary utilities (Binutils), and make, used to build programs and handle dependencies. Linux does have several IDEs (integrated development environments) like Eclipse, but they're all built around these command line tools. Unlike development on Windows, where using Visual Studio is the rule, many Linux developers do not use an IDE.

To compile a C program using gcc, write your program using your favorite text editor (vi, emacs, gedit, kwrite, etc.) and save it with a suffix of .c. In the following example, we use the standard first C program from K&R, saved as hello.c.
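
In case you don't have it handy, here's a minimal version (any equivalent program will do):

#include <stdio.h>

int main(void)
{
    printf("Hello world!\n");
    return 0;
}

Then enter the following command: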

  $ gcc -o hello -g -O1 hello.c  

This will invoke the C compiler to translate your program, and, if this succeeds without errors, it will go on to invoke the linker with the correct system libraries to create an executable named hello. (Other operating systems identify executables by a suffix, like .exe. Linux executables generally do not have a suffix. Instead a file system flag indicates that a file can be executed.) The name of the executable follows the -o option, the -g option says to generate debugging information, and the -O1 (that's letter O followed by digit 1) says to generate optimized code. GCC has a large number of different options, but these are the basics. For easier debugging, you may want to specify -O0 (letter O followed by digit 0) or omit the -O option to compile without optimization. If you have more than one file which you want to compile and link together, just list the source file names one after the other.

You might find that your Linux installation is missing some components, like GCC or the headers for the C library, which are not installed by default. If this is the case, you can use your system's package manager to add these components. On a Fedora system, this means using yum or perhaps the packagekit GUI; on an Ubuntu system, you would use apt-get or the synaptic GUI. These package managers will handle downloading and installing the component you request, as well as any dependencies that may be required.
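
For example (the package names shown are typical; check your distribution's repositories for the exact names):

  $ sudo yum install gcc gdb make              # Fedora
  $ sudo apt-get install build-essential gdb   # Ubuntu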

You can execute programs from the command line by entering the name of the program, if it is in a directory in your path list, or by giving the path to the file. For our example, we can do the following:

$ ./hello
 
Hello world! 

In this case, since our current directory is not listed in the $PATH environment variable, we use the dot (.) to indicate the current directory and then the file name, separated by a slash from the directory specification.

This might be a good time to use the GDB debugger to run your program. Whether you are doing kernel, driver, or application development, it's likely that you will need to debug your program using GDB. GDB has a command line interface and it is a good idea to learn the commands to do basic operations like printing variable values, setting breakpoints, and stepping through your program. There are several GUIs available for GDB. I use DDD, which now has a new maintainer after being dormant for a while. Other GUIs include the Eclipse CDT IDE, Insight, and even extensions to the Emacs text editor.

You can invoke gdb as follows:

$ gdb hello
 
[ start up messages from gdb ]
 
(gdb) break main
Breakpoint 1 at 0x400530: file hello.c, line 5.
 
(gdb) run
Starting program: /tmp/hello 
Breakpoint 1, main () at hello.c:5
5         printf ("Hello world!\n");
 
(gdb) cont
Continuing.
Hello world!
 
(gdb) quit

In the above example, the commands we typed follow the (gdb) prompt; the other lines are output from gdb. In addition to the initial startup messages from gdb, there may be some other messages about missing debugging information for system libraries or a message about the program exiting.

In an Embedded Linux environment, you will be using GCC, GDB, and make in ways which are similar to native Linux development. In most cases, you will use a different build of GCC and GDB which are targeted for the processor you are using. The program names may be different, for example, arm-none-eabi-gcc, which generates code for ARM using the EABI (Embedded ABI). Another difference is that you most likely will be working in a cross-development environment, where you compile on a host system and download your programs to a target system. If you have experience with embedded programming, working in a cross-development environment should be familiar to you. We'll talk about how this works with Embedded Linux in a future installment.

In the next installment, we'll talk about Linux applications, libraries, and the wide range of freely available software packages.

We are continuing our series on how to get started with Embedded Linux if you have experience with embedded systems, but no Linux experience.

Linux has a large number of libraries that can be used for application development, and many of these libraries can be used with embedded systems. You may need to use your package manager to install the library and the related development package, which contains the headers. Some libraries are available in both static and dynamic versions. The static libraries are linked with your program while dynamic libraries are separate, loaded as needed when your program is executed.
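
You can see the difference on your desktop system using the hello program from an earlier installment. By default, gcc links dynamically, and the ldd command lists the shared libraries an executable will load:

  $ gcc -o hello hello.c                  # dynamically linked (the default)
  $ ldd hello                             # lists libc and the dynamic loader
  $ gcc -static -o hello-static hello.c   # statically linked
  $ ldd hello-static                      # reports "not a dynamic executable"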

Whenever I write any but the simplest application, my go-to reference for many years has been Advanced Programming in the UNIX Environment by W. Richard Stevens. This reference, originally published in 1992 before Linux was developed, has been updated by Stephen A. Rago, with a third edition published this year. Linux adopted most of the APIs and interfaces of Unix, although the implementations may be different. Another reference is The Linux Programming Interface by Michael Kerrisk. At over 900 and 1,500 pages respectively, neither is going to end up as bedside reading, but if you need to know the nitty-gritty details of file or process manipulation, signals, threads, inter- or intra-process or network communication and synchronization, and much more, these are great places to start. There are also significant online resources as well as help forums like Stack Overflow.

There is an extensive Open Source infrastructure supporting Linux, both in desktop and embedded environments. The GNU project of the Free Software Foundation maintains a large number of widely used utility programs and libraries. You can download these packages from http://gnu.mirrors.pair.com/gnu, which should automatically connect you to a mirror near you. SourceForge has source for over 300,000 projects, many very substantial. Freecode also has thousands of open-source applications. The last destination on my short list is GitHub, which provides hosting for the code repositories of thousands of projects.

Most libraries or programs are built using the GNU make utility, along with bash scripts or support utilities like automake and autoconf. At its simplest, make checks which files need to be compiled and manages (using a Makefile written by the developer) the order in which these compilations are performed. Makefiles can be quite complex, with the Makefile invoking make to build subdirectories or invoking itself recursively. Automake is designed to generate Makefiles, identifying dependencies and invoking libtool, a utility to create shared libraries. Autoconf allows libraries or programs to be compiled for different targets and operating systems or with different options. All of these are beyond the scope of this overview, but O'Reilly has excellent books about make and autotools.
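
At its simplest, a handwritten Makefile is a list of targets, each with its dependencies and the commands which rebuild it. Here is a minimal sketch for the hello program from earlier (note that the command lines must be indented with a tab character, not spaces):

hello: hello.o
        gcc -g -o hello hello.o

hello.o: hello.c
        gcc -g -c hello.c

clean:
        rm -f hello hello.o

With this Makefile, running “make” recompiles hello.c only when it is newer than hello.o.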

The standard sequence for building most libraries or programs for your Linux system starts with downloading the sources, usually in the form of a tar file, possibly compressed with gzip, bzip2, or xz. If I'd like to build my own copy of diff, I would first download the diffutils package from a GNU mirror. Usually I'd use a browser to save the package, but I could also use the wget utility:

  $ cd /tmp
  $ wget ftp://ftp.gnu.org/gnu/diffutils/diffutils-3.3.tar.xz

Untar the file and cd to the resulting directory:

  $ tar xfv diffutils-3.3.tar.xz
  $ cd diffutils-3.3

(If the package has a .gz or .tgz suffix, you will need to add the “z” option after “xfv”. If it has the .bz2 suffix, add the “j” option. Recent versions of GNU tar will also detect the compression format automatically.)

Many packages have a README file. You should read this before you build. While most packages use a configure script generated by autoconf, not all do. The README will tell you how to build the package.

Building most packages, like the diffutils, is simple: you enter the following commands:

  $ ./configure
  $ make
  $ make install

The first command invokes configure, which will analyze your system and create a Makefile to build your library or program tailored for your system. The second command invokes make, which will compile and link the library or program in the work directory. The third command will install the library or program. Naturally, on occasion there will be errors in each of these steps. Configure may tell you that you need to install other packages, or perhaps the headers for installed libraries. The make step may stop with compile errors. The final step may try to install the library or program in a protected system directory. If this is the case, you can either run this last step as root, or prefix it with the sudo command to temporarily assume root privileges. Alternately, you can specify the --prefix option to configure and point to an unprotected directory:

   $ ./configure --prefix=~/mydiff

When you run “make install”, the program and any other files will be installed in the directory mentioned in the prefix option, in this case, mydiff in my home directory.
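
You can then run the newly installed program directly (assuming the usual autoconf layout, with executables in a bin subdirectory under the prefix):

   $ ~/mydiff/bin/diff --version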

With some caveats which we will discuss in the future, this means that libraries and programs which you build on your native x86 Linux environment can also be built for other processors such as ARM, PowerPC, or MIPS, or for other configurations of Linux, using many of the same tools and techniques.

Our next installment will talk about the Linux kernel, how it is configured, and how to build it.

Most embedded projects will require configuring the Linux kernel and perhaps developing device drivers to match your hardware. Since the same kernel is used in the desktop Linux system running in your virtual machine as on your embedded system, we can build a custom kernel and a simple driver on the desktop system in much the same way we will with a development board.

Each of the major distributions describes how they build a kernel. For Fedora, the description is on the Fedora Wiki and for Ubuntu it is available on the community site. Each of these will install the source used to build your VM system. They use programs like rpmbuild or scripts to make the process a bit easier, although perhaps not as transparent as it might be.

I recommend that you log into your VM and follow the scripts to build and install a new kernel. Before you install, you might take a snapshot of the VM just in case things go haywire and you want to get back to a known stable system.

When you follow these instructions, they will create a directory containing the kernel used to build the release you have installed. This is the “vanilla” kernel, which can be downloaded from kernel.org, plus a number of patches selected by the distribution. If you are using Fedora, as I am, you can find both the vanilla and patched kernels in ~/rpmbuild/BUILD/kernel-. Let's copy the patched kernel to a separate directory:

  $ cp -r ~/rpmbuild/BUILD/kernel*/linux* ~/linux

Older guides will frequently say to copy the kernel to /usr/src/linux, which is the traditional location for the kernel. You don't need to do this; the kernel can be built in any location. And usually /usr/src is a protected directory, only writable by root. You do not need to be root to build the kernel.

Let's take a brief tour of a few of the files and subdirectories under ~/linux:

  • Makefile. Used to control configuring and building the kernel
  • arch. Contains architecture-specific files
  • drivers. Sources for drivers included with the kernel
  • include. Headers used to build the kernel and drivers
  • kernel. The “core” target-independent parts of the kernel
  • README. A description of the directory, with make directives
  • scripts. Scripts used to build the kernel and drivers

There are several directories which contain parts of the kernel, like net (networking), mm (memory management), fs (file system), etc. And there's lots of documentation as well.

The arch directory contains sub-directories for more than two dozen different processor architectures, including the very popular ARM and Intel x86 processors and several which are much less well known. These directories contain the target-specific code which allows Linux to run on so many different processors. You'll find directories with the same names as under kernel which contain the target-specific code to support the target-independent functions.

Central to building the Linux kernel is the .config file. This is a hidden file (the initial dot says to hide it, but you can see it if you enter “ls -a”) containing the current configuration. There are sample config files under each architecture's configs directory. x86 has only two, one for 32-bit and the other for 64-bit. ARM has dozens for many different processor configurations. If you open arch/x86/configs/i386_defconfig, you will see something like this:

CONFIG_EXPERIMENTAL=y
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y

Each of the y's says that an option is enabled. A configuration option which is commented out or missing is not enabled.

It would be a daunting task to create one of these configuration files from scratch, and happily, we don't have to. The sources you downloaded for your VM included a .config, or perhaps there are instructions for copying the config file corresponding to your installation from the /boot directory. To get started building the kernel, you run

$ make oldconfig

If you want to see all of the options (and select which you want to enable or disable) using a simple text-based utility, run

$ make menuconfig

Go ahead and explore the GUI that is then displayed (Figure 1 below). The arrow keys allow you to move up or down the list. To set or unset an option, press the space bar. To go to a sub-menu (indicated by an → arrow), press Enter. The left and right arrows select the options on the bottom of the window. Selecting Help will tell you a bit about the option. When you want to exit a sub-menu, select Exit. At the top-level menu, select Save to save your changes to .config.

Figure 1. Linux kernel configuration interface. http://m.eet.com/media/1202288/Figure1-LinuxGui.jpg

Before we build our kernel, let's give it a (slightly) different name. If you open the Makefile in an editor, the first several lines will have the VERSION, PATCHLEVEL, SUBLEVEL, and EXTRAVERSION values specified. The first three will be the same as the kernel version. In my case, this was 3.11.9, the same values as on the name of the kernel source RPM I downloaded. Add your name or a number to EXTRAVERSION. This will be used to distinguish the newly built kernel from the installed kernel.
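
For example, with the 3.11.9 sources, the top of the Makefile might look like this (the -mje1 suffix is only an example; choose your own):

VERSION = 3
PATCHLEVEL = 11
SUBLEVEL = 9
EXTRAVERSION = -mje1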

To build the kernel, it's a good idea to start with a clean slate:

$ make clean

You don't need to do this if you wish to continue a build which stopped, perhaps because of a compile failure.

Compiling the kernel takes quite a bit of time, even on a fast system:

$ make

Only one line is printed for each compilation. If you want details (lots of details), add V=1 at the end of the make command.

Next, you need to build the loadable modules which were not linked into the kernel. This is much quicker:

$ make modules

Install the modules under /lib, then install the new kernel in /boot. These two commands need to be executed as root, since they modify protected system directories:

$ su
Password:
# make modules_install
# make install

Take a look at /lib/modules and /boot to see what the Makefile installed. You can now reboot your VM:

# reboot

Select your new kernel from the GRUB boot menu. Once the system reboots and you log in, you can check the version by running “uname -r”. You should see the version number of the new kernel, including your name or the number you added to EXTRAVERSION.

You might modify the configuration, rebuild and install the kernel. Update the EXTRAVERSION each time to help keep track of which kernel is which. Don't worry if you create a kernel which crashes. You can always select one of the older versions in the GRUB boot menu, or, if you took my advice to save a snapshot of the VM, you can restore from the snapshot and pick up where that left off.

There are some differences when building the Linux kernel for Embedded Linux, so we will revisit this later in the series. But the file organization, configuration, and overall process for building the Linux kernel is similar for a desktop Linux system and an Embedded Linux system.

Next time we will talk about building loadable modules, commonly used for device drivers.

In our last segment, we gave an overview of how the Linux kernel source is organized, how to configure the kernel, and how to build a new kernel with your choice of options. As you may have noticed, the result of building the kernel is that three files are installed in /boot. Each of these files has the version number appended to its name.

  • vmlinuz – the compressed kernel
  • initramfs (or initrd) – an initial file system used to boot the kernel
  • System.map – a list of exported kernel symbols and their addresses.

To explain, vmlinuz is the executable image of the kernel, which has been compressed using gzip (or another compression algorithm). It also contains a decompression stub that unpacks the kernel when it is loaded. You can decompress this by using extract-vmlinux which you can find in the kernel scripts directory.
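
If you are curious, the invocation looks something like this, assuming the kernel source tree from the previous installment is in ~/linux (extract-vmlinux writes the decompressed image to standard output):

  $ ~/linux/scripts/extract-vmlinux /boot/vmlinuz-$(uname -r) > /tmp/vmlinux
  $ file /tmp/vmlinux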

Temporary root file system initramfs (or its predecessor, initrd) is loaded into memory during the boot process. This root file system contains device drivers or other programs that might be needed to mount the root file system, but which may not be needed after the kernel is loaded. This allows one generic kernel to be created which can be run on a variety of systems that have different hardware.

The Linux kernel is designed as a monolithic system, with a single, large binary image. Everything making up the kernel is compiled and linked together to form this image. The exceptions to this are Linux Kernel Modules (LKMs), which are somewhat independent. LKMs are pieces of code that can be loaded into the kernel to extend its functionality. LKMs may be compiled either as part of the kernel build (when you select them as part of the kernel configuration) or they may be compiled separately. They may be linked into the kernel image (specified in the kernel configuration) or they can be loaded later, after the system boots and the kernel is running.

LKMs are used for several purposes, including supporting different file systems, adding new functionality or system calls, or, most commonly, adding support for new hardware. LKMs are tightly coupled to specific versions of the kernel and must be recompiled for each version of the kernel. You can see the list of LKMs which are installed on your Linux system by running the “lsmod” command or listing /proc/modules.

We are going to focus on the use of LKMs for device drivers, since this is the most common use in Embedded Linux. There are several classes of devices in Linux: Character, block, and network devices are the most common types. A character device is read or written as a sequential stream of bytes, like a keyboard or a printer. A block device is used to support a file system and is read or written as fixed size blocks. A network device is used to communicate with a different system sending or receiving data packets. Character and block devices are represented by file system entries under /dev; network interfaces have names like “eth0” for example.

Let's walk through building a simple LKM. We'll start with the simplest LKM, which we'll name lkm.c –

lkm.c
/*  Simple Linux Kernel Module */
 
#include <linux/kernel.h>   /* printk() */
#include <linux/module.h>
 
int init_module()
{
        printk(KERN_ALERT "LKM init\n");
 
        return 0;
}
 
void cleanup_module()
{
        printk(KERN_ALERT "LKM stopped\n");
}

This has a function, init_module( ), which is called when the LKM is loaded into the kernel, and another, cleanup_module( ), which is called when it is removed. Printk is similar to printf, except it directs output to the system log, /var/log/messages.

Our makefile, named Makefile, is also quite simple:

obj-m += lkm.o
 
KERNELDIR=/lib/modules/$(shell uname -r)/build
 
all:
        make -C ${KERNELDIR} M=$(PWD) modules
 
clean:
        make -C ${KERNELDIR} M=$(PWD) clean

The first line adds the name of our object file to the list of files to be compiled. We set KERNELDIR to point to the kernel build directory for the current version of the kernel. If you are running the kernel we built in the previous installment, this will be your build source directory. If you are building the LKM against a stock kernel, you may need to install the kernel development files. On Fedora, this is done with “yum install kernel-devel.” If you want to build the LKM for a different version of the kernel, you can replace “$(shell uname -r)” with the version number. Of course, you have to have built that version or installed its development files.
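
Alternatively, rather than editing the Makefile, you can override KERNELDIR on the make command line (substituting whichever kernel version you want to build against):

  $ make KERNELDIR=/lib/modules/3.12.8-200.fc19.x86_64/build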

When you run make, you will see something like the following:

$ make
make -C /lib/modules/3.12.8-200.fc19.x86_64/build M=/home/eager/lkm modules
make[1]: Entering directory `/usr/src/kernels/3.12.8-200.fc19.x86_64'
  CC [M]  /home/eager/lkm/lkm.o
  Building modules, stage 2.
  MODPOST 1 modules
  CC      /home/eager/lkm/lkm.mod.o
  LD [M]  /home/eager/lkm/lkm.ko
make[1]: Leaving directory `/usr/src/kernels/3.12.8-200.fc19.x86_64'

Our makefile sets up the parameters to tell the kernel build system what file(s) to compile and invokes the kernel Makefile to do the heavy lifting and actually build the LKM. The result is a file named lkm.ko, with the “ko” suffix indicating that this is a kernel module. We can use the “modinfo” command to see the information recorded with the module:

$ modinfo lkm.ko
filename:       /home/eager/lkm/lkm.ko
depends:        
vermagic:       3.12.8-200.fc19.x86_64 SMP mod_unload

We can load the module into the running kernel with the insmod command:

# insmod lkm.ko

Note that while we can compile our kernel module as an ordinary user (always a good idea), you need to be root to run insmod. You can run this command in another shell where you have assumed superuser privileges using the “su” command or you can prefix the insmod command with “sudo,” assuming that you have set up /etc/sudoers.

Unless there is an error, insmod doesn't issue any messages. To see that our module is loaded into the kernel, we use the lsmod command:

$ lsmod | grep lkm
lkm                    12426  0 

This indicates that our LKM is installed in the kernel, that its size is 12426 bytes, and that no other LKM depends on it. One LKM can depend on another, with a low level driver providing services to a higher level driver. You can see this if you run a command like:

$ lsmod | grep par
parport_pc             28048  0 
parport                40425  2 ppdev,parport_pc

Here we can see that the parport (parallel port) driver is used by two other drivers – ppdev and parport_pc.

We can remove an LKM from the kernel by using the rmmod command, again, as superuser:

# rmmod lkm.ko

If we take a look at the system log, we can see the messages issued when our LKM was initialized and when it was removed:

# tail -n 2 /var/log/messages
Jan 26 17:58:36 fedora19 kernel: [27688.186462] LKM init
Jan 26 18:00:14 fedora19 kernel: [27786.528267] LKM stopped

You can install the LKM into your kernel by copying it to the correct system directory. If you look at /lib/modules you will see a subdirectory for each kernel which you have installed. If you look under the directory for the currently executing kernel, /lib/modules/$(uname -r)/kernel you will see a number of subdirectories for various kinds of kernel modules, such as crypto, drivers, memory management, and sound. We're going to copy our LKM to the drivers/misc directory as root:

# cp lkm.ko /lib/modules/$(uname -r)/kernel/drivers/misc

Next we execute the depmod command which updates the module map files which are under /lib/modules/$(uname -r):

# depmod -a

This will create an entry in modules.dep which modprobe uses to find the LKM:

# modprobe lkm

We can also remove the module using modprobe:

# modprobe -r lkm

Modprobe is a higher level command and calls insmod and rmmod to do the grunt work. It understands and resolves module prerequisites, supports aliases, and is used by the kernel to load LKMs as needed.

You may have noticed something when you look at the syslog:

Jan 26 18:38:34 fedora19 kernel: [118400.641718] lkm: module license 'unspecified' taints kernel.
Jan 26 18:38:34 fedora19 kernel: [118400.641724] Disabling lock debugging due to kernel taint
Jan 26 18:38:34 fedora19 kernel: [118400.642077] lkm: module verification failed: signature and/or required key missing - tainting kernel
Jan 26 18:38:34 fedora19 kernel: [118400.642981] LKM init
Jan 26 18:41:38 fedora19 kernel: [118585.066364] LKM stopped

The kernel module loader checks whether the LKM we are loading has a license that is compatible with the Linux kernel. Our simple LKM doesn't mention licensing and, indeed, the message says it is “unspecified.” It also says it “taints” the kernel. This isn't some kind of bad smell, like milk that's too old. It means the kernel now contains code which is not Open Source. If you ask a kernel developer for help with a bug, and he or she notices the “tainted kernel” message, you are likely going to be asked to reproduce the problem without the LKM which taints the kernel. This is for a couple of reasons, one philosophical, the other practical. Linux kernel developers are interested in promoting Open Source, and looking at bugs in kernels which contain only Open Source and do not contain proprietary code is one way to further this goal. The practical reason is that the proprietary code may be the cause of the bug, and without access to the source it may be difficult or impossible to debug the kernel.

So much for a trivial kernel module; what about one which actually does something? In our next installment, we'll build a simple character device driver.

In the previous installment, we talked about Linux Kernel Modules (LKM) in general and wrote a minimal LKM. The most common use of LKM is to create a driver to support new hardware or to create a virtual device. The simplest virtual device is /dev/null, which discards all data written to it, and returns an immediate EOF when read. You can find the source for the /dev/null driver in linux/drivers/char/mem.c.

We're going to write a driver which creates a virtual device which will return a pseudo-random number when read. This is similar to the /dev/random virtual device, but simpler, and not to be used in real programs.

Here's the basic framework for our driver, named myrandom.c:

myrandom.c
/*  Random number virtual device. */
#include <linux/kernel.h>   /* printk() */
#include <linux/module.h>
 
#define DRIVER_AUTHOR "Michael J. Eager <eager@eagercon.com>"
#define DRIVER_DESC   "random number generator device"
 
/* Declare license and provide authorship and description. */
MODULE_LICENSE("GPL");
MODULE_AUTHOR(DRIVER_AUTHOR);
MODULE_DESCRIPTION(DRIVER_DESC);
 
/* Declare all functions. */
static int init_myrandom(void);
static void cleanup_myrandom(void);
 
static int init_myrandom(void)
{
  printk(KERN_ALERT "myrandom init\n");
 
  return 0;
}
 
static void cleanup_myrandom(void)
{
  printk(KERN_ALERT "myrandom stopped\n");
}
 
module_init(init_myrandom);
module_exit(cleanup_myrandom); 

This looks a lot like the simple LKM we created in the previous installment. All we need to change in the Makefile is replace lkm.o with myrandom.o.

We've added a MODULE_LICENSE macro, saying that this driver is licensed under the GPL, as well as a macro listing the author and a description. These will be displayed when you run modinfo on the myrandom.ko file. We also added two macros at the end, module_init and module_exit, which allow us to specify the name of the init and cleanup routines, rather than using the defaults. When you run insmod myrandom.ko (as root) and tail /var/log/messages, you will see the messages issued by the driver when it is loaded. Running rmmod myrandom will remove the driver, and add the stopped message to the system log.

Next, let's tell the kernel that we have support for our myrandom device:

Toward the top of the file, add:

#include <linux/fs.h>

And add some data structures:

static int Major;
static struct file_operations fops = {
};

Update the init and cleanup routines as shown below:

static int init_myrandom(void)
{
  printk(KERN_ALERT "myrandom init\n");
 
 
  Major = register_chrdev(0, "myrandom", &fops);
  if (Major < 0) {
    printk(KERN_ALERT "Registering myrandom device failed: %d\n",
	      Major);
    return Major;
  }
 
  printk("Myrandom assigned major number %d\n", Major);
 
  return 0;
}
 
static void cleanup_myrandom(void)
{
  /* Unregister myrandom device. */
  unregister_chrdev(Major, "myrandom");
  printk(KERN_ALERT "myrandom stopped\n");
}

When we load this driver using insmod, it will register a character device driver. Devices are represented as special files under /dev which have a major device number indicating a device class, and a minor device number indicating the specific device in that class. If you run cat /proc/devices you will see the list of device classes defined in your system, including the device number for myrandom. When we call register_chrdev, with the first argument of zero, we ask the kernel to assign an available major device number. Alternately, you can specify one which is not otherwise used.
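
For example, after loading the driver you can confirm the registration (the number will match what our init routine printed; on my system it was 249):

  $ grep myrandom /proc/devices
  249 myrandom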

Registering a device driver does not create the device file under /dev. There are a few ways to do this. We are going to use the simplest method and run the mknod command as root, and chmod to make it accessible by all users:

# mknod /dev/myrandom c 249 0
# chmod 0666 /dev/myrandom
# ls -l /dev/myrandom
crw-rw-rw- 1 root root 249, 0 Mar 25 08:00 /dev/myrandom

The kernel assigned my driver the major device number of 249; your number might be different.

Another way to create the device file is to call the kernel's mknod() function when we register the driver, which will do the same as the mknod command. A more elegant and flexible method is to use udev. Udev is a user-space daemon which is notified by the kernel when a new device is registered. It then uses rules in /etc/udev/rules.d to create device nodes.

We want our device driver to support the open, close, and read operations, and we want a write to our device to return an error. Let's define the routines which will do these operations:

static int open_myrandom(struct inode *, struct file *);
static int close_myrandom(struct inode *, struct file *);
static ssize_t read_myrandom(struct file *, char *, size_t, loff_t *);
static ssize_t write_myrandom(struct file *, const char *, size_t, loff_t *);
 
static int Major;
static struct file_operations fops = {
  .read = read_myrandom,
  .release = close_myrandom,
  .open = open_myrandom,
  .write = write_myrandom
};
 
static int myrandom_in_use = 0;
 
static int open_myrandom(struct inode *inode, struct file *file)
{
  if (myrandom_in_use)
    return -EBUSY;
  myrandom_in_use++;
  return 0;
}
 
static int close_myrandom(struct inode *inode, struct file *file)
{
  if (myrandom_in_use)
    myrandom_in_use--;
  return 0;
}
 
static ssize_t read_myrandom(struct file *filp, char *buf, size_t len, loff_t *ofs)
{
  return 0;
}
 
static ssize_t write_myrandom(struct file *filp, const char *buf, size_t len, loff_t *ofs)
{
  return 0;
}

The open routine increments an internal flag so that only one program can open the device at a time. A second program trying to open the device will receive a busy error. The close routine resets this flag. For the moment, the read and write routines are dummies which do nothing.

If you insmod the driver, you can use the dd command to open the device and read from it:

$ dd if=/dev/myrandom
0+0 records in
0+0 records out
0 bytes (0 B) copied, 4.5739e-05 s, 0.0 kB/s

The dd command opened the device and immediately received an EOF.

Transferring data

Let's complete this example driver by returning some data when we read the device:

Near the top of the file, add:

#include <asm/uaccess.h>   /* put_user() */
static unsigned char myrandom(void);
static int bytes_read;     /* bytes returned since the device was opened */

Let's create a simple random character generator which returns a random letter or number each time it is called:

/* Generate random letters and numbers.  Algorithm from Wikipedia. */
static unsigned char myrandom(void)
{
  static char letters[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
  static unsigned int m_w = 0x12345678;
  static unsigned int m_z = 0x87654321;
  unsigned int myrand;  /* unsigned keeps the shift and modulo well defined */
 
  m_z = 36969 * (m_z & 65535) + (m_z >> 16);
  m_w = 18000 * (m_w & 65535) + (m_w >> 16);
 
  myrand = (m_z << 16) + m_w;
  myrand = (myrand >> 16) % (sizeof(letters) - 1);  /* -1 skips the trailing NUL */
 
  return letters[myrand];
}

Let's flesh out the read routine:

static ssize_t read_myrandom(struct file *filp, char *buf, size_t len, loff_t *ofs)
{
  unsigned char rand_val;
  int count = 0;
 
  /* Return EOF when all bytes read. */
  if (bytes_read >= 100) 
    return 0;
 
  while (len-- > 0 && bytes_read++ < 100) {
    rand_val = myrandom();
    put_user(rand_val, buf++);
    count++;
  }
  /* Return number of bytes transferred. */
  return count;
}

Finally, in the open routine, let's zero out the count whenever the device is opened:

static int open_myrandom(struct inode *inode, struct file *file)
{
  if (myrandom_in_use)
    return -EBUSY;
  myrandom_in_use++;
  bytes_read = 0;
  return 0;
}

When we read /dev/myrandom, we will receive the number of random characters we requested, up to a limit of 100. When we write to the device, we get an immediate device-full error (in the full source, write_myrandom returns -ENOSPC rather than the dummy zero shown earlier).

$ dd if=/dev/myrandom bs=10 count=1
QFF5tmwf3i1+0 records in
1+0 records out
10 bytes (10 B) copied, 0.000252246 s, 39.6 kB/s
$ dd if=/dev/myrandom
e9LDb6se6jJZnrS6prxpbkyTwIaaTlU1YDaz7buUtbvDXw1hxSgImzTc84zF28SZqUtS6tfRO8kl1iQCXEXSGjOTftygRqzV0+1 records in
0+1 records out
100 bytes (100 B) copied, 0.000145579 s, 687 kB/s
$ dd if=/dev/zero of=/dev/myrandom
dd: writing to /dev/myrandom: No space left on device
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000461547 s, 0.0 kB/s

Note that since the string returned from /dev/myrandom is not zero terminated, the message from dd is appended to the output.

The full code is available for download as myrandom.c on Embedded.com.

Last comments about drivers

One topic we didn't discuss is passing parameters to a driver. This can be done using modprobe, either directly or by using /etc/modprobe.conf. In our example, we might have specified the random number seed. A real driver would perhaps have device addresses or options specified as parameters.
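
The kernel provides the module_param macro for declaring such parameters. Here is a sketch of how our driver might accept a seed (the seed variable and its use are hypothetical additions, not part of the downloadable source):

#include <linux/moduleparam.h>

static int seed = 0x12345678;   /* default seed value */
module_param(seed, int, 0444);  /* visible in /sys/module/myrandom/parameters */
MODULE_PARM_DESC(seed, "Initial seed for the random number generator");

The parameter could then be set when the module is loaded:

# insmod myrandom.ko seed=42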

We've only touched on the basics of writing a device driver and none of the hardware related issues. I recommend Linux Device Drivers by Alessandro Rubini, Jon Corbet and Greg Kroah-Hartman, published by O'Reilly. The 4th edition should be out in July, adding Jessica McKellar to the list of authors.

The traditional model for Embedded Linux (and all embedded system development) is cross-platform development. In this development style, you create your software on a powerful host system (like your desktop computer) and then transfer the binary image to a much smaller target system. Your host computer has many features which make it an excellent development environment: a fast processor, lots of memory and disk space, a big display, and all of the tools you need. You have documentation and access to the Internet for articles, support, or software.

Your target system is likely designed for a specific application, such as a router, media player, file server, or controller. The processor is generally limited in power and speed, selected to meet the requirements of the application. Often, the processor architecture for the target system is different from your host system, selected for low price or integrated peripherals. The target's system memory, both RAM and persistent, is limited, just the amount required to run the application programs. Connections from the target system to the “outside world” are only those needed to run the application. For example, a file server would have a network connection and connections for hard disks, but no display or keyboard connections.

Figure 1. Typical cross-development setup for working with an embedded board, in this case a Linksys WRT54G router. http://eetimes.com/ContentEETimes/Images/Design/Embedded/2014/0714/0714EmbEager08.1-big.png

You use cross-compilers on the host system to compile your kernel and application programs into a binary image that you transfer to the target system. The cross-compiler, as well as a cross-debugger, may be different versions than the host system compiler and debugger, designed to support the target processor. For the most part, they work exactly the same as the corresponding tools on the host system. Other than the cross-development tools, you use the host tools for everything else, from editing files, to building the kernel or applications. The versions of the kernel or application programs used for the target system may be different from those on the development system. Your development system may have a standard distribution, like Ubuntu or RHEL, while your target system has a newer kernel which is still under development and may not be completely stable.

There are some added complexities with the cross-platform development model. When you compile your program for the target, it is important that the system header files for the target are used, and not the system headers for the host. You can imagine the problems which might occur if one of your compiles incorrectly uses a host system header which defines an int to be 64 bits when the target uses 32-bit ints. Most of the time, the cross-compiler takes care of ensuring that the headers for the target system are used when compiling programs for the target.
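
The usual mechanism is a sysroot, a directory on the host which mirrors the target's root file system, holding the target's headers and libraries. A sketch of what this looks like on the command line (the toolchain name and path are examples; many cross-toolchains are configured with a built-in sysroot so you never need to specify it):

  $ arm-linux-gnueabihf-gcc --sysroot=/opt/target-rootfs -o hello hello.c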

You will need to build a complete file system, including the kernel, application programs, and all of the directories needed to populate a Linux file system. There are several tools which will help do this, such as buildroot, OpenEmbedded, or Yocto. We'll discuss these in a future article.

Another complexity is that the file system on the target may be different from the host. Many targets use flash memory for storage, which might use the Journaling Flash File System (JFFS) instead of the EXT file system used on hard drives on the host. You will need to convert the file system on the host into the binary format needed to transfer to the target and write to the flash memory.
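For example, if the target's flash holds a JFFS2 file system, a typical invocation to convert a staged root directory on the host into a flash image looks something like this (the directory name and options are illustrative; the erase-block size must match your flash part):

  $ mkfs.jffs2 -r rootfs/ -o rootfs.jffs2 -e 128KiB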

Depending on how the target boots, you might need to configure a TFTP server and DHCP server on your host. That usually means that you will use a different network connection on the host system to connect to the target and not the one you use to connect to the Internet. (Corporate IT departments are understandably unhappy when they discover rogue DHCP servers responding to requests on the corporate network.)
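One lightweight way to provide both services is dnsmasq, which combines DHCP and TFTP servers in a single daemon. A minimal configuration for a private lab network might look like this (the interface name, address range, and paths are examples):

  # /etc/dnsmasq.conf
  interface=eth1
  dhcp-range=192.168.2.100,192.168.2.150,12h
  enable-tftp
  tftp-root=/srv/tftp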

Finally, there's the problem of communication between the host system and the target system. There are many ways that you might create a connection between the host system and the target system. These include JTAG or another low-level connection to the processor, which might or might not allow you to transfer binary images. There's the venerable serial port, usually used as a system console, which remains pretty common on target hardware, even if they seem to be scarce on development systems. (You can use a USB to serial adapter, as shown in Figure 1.) Many target systems have network ports which allow you to connect them to a network. These targets might support booting from a binary image on the host using TFTP, rather than having to write the image to a flash file system, and you may be able to run with the target root file system on an NFS server.

We'll discuss all of these additional complexities in future articles.

The second model is self-hosted development: with this style of development, you are developing programs on the target itself. This is a reasonable approach when the target system has a powerful processor and adequate memory to build the kernel and applications. The source files for the kernel and applications, as well as build directories, might be on local hard drives or mounted over the network using NFS.

Figure 2. Self-hosted development using Raspberry Pi. http://eetimes.com/ContentEETimes/Images/Design/Embedded/2014/0714/0714EmbEager08.2-big.jpg

In the past, this development model was only used for very large and expensive embedded systems, such as the switch for a telephone central office. Now there are a number of embedded systems which package a fast processor, lots of RAM, flash memory for a file system, a video controller, and a wide range of peripherals on a small, inexpensive board. Popular examples include the BeagleBone and the Raspberry Pi. These boards have fast ARM processors (600MHz to 1GHz), ample RAM (256MB to 512MB), removable SD cards which serve as a file system of 4GB or more, USB ports to connect to a keyboard or mouse, and video output to a monitor. (Compare this with the hardware used in the first Linksys WRT54G router: 125MHz processor, 16MB RAM, 4MB flash memory.) In many respects, using one of these single-board computers is like working with a laptop computer on a board.

There are a number of advantages to this model, especially with the high-powered single-board computers. The first is that pre-packaged Linux distributions are available. In some cases, you have your choice of several distributions, such as Ubuntu, which are also available for desktop systems. Development tools (compiler, assembler, and debugger) for the target processor are contained in the distribution. Building a kernel or applications for the target is very much the same as building for a desktop or server Linux system. There's no way to accidentally mix host and target header files, since the compiler on the target only references the target header files.

There are disadvantages, as well. This model can only be used when the target system is one of these powerful systems. Building an entire root file system, with many programs and libraries, can take many hours even on a fast desktop system. On a slower target, like the Raspberry Pi, it might take days. These target systems may have much more hardware than the planned application actually requires. It may be difficult to pare down the distribution to only those components needed by the application, since the system has to support both development and the application environment.

We'll explore this self-hosted development in future articles as well.

There are ways to combine the benefits of both of these development models and at the same time avoid some of the disadvantages. You might build the root file system on a host using the cross-development model and serve it to the target using NFS, while doing driver or application development on the target using the self-hosted development model.

Another alternative is to emulate the target hardware on the host system using QEMU. QEMU is a processor and system emulator which has support for many different architectures and system designs. You run QEMU on your host development system and connect to it using a virtual network. This has the advantage of higher performance on the more powerful host system, while still using the target environment. Additionally, you can start QEMU and have it wait for GDB to connect, so that you can trace code from the very first instruction, something which may be difficult or impossible with a self-hosted target.
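
A sketch of what this looks like for an ARM target (the machine type, kernel image, and console argument are examples; -S makes QEMU wait before executing the first instruction, and -s listens for a GDB connection on port 1234):

  $ qemu-system-arm -M versatilepb -kernel zImage -append "console=ttyAMA0" -nographic -s -S

Then, in another window, connect with your cross GDB:

  $ arm-none-eabi-gdb vmlinux
  (gdb) target remote :1234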

As we discussed in the last installment, there are two common models for Embedded Linux development. These are cross-platform development and self-hosted development. In this installment, we are going to explore self-hosted development using the Raspberry Pi as a target.

Self-hosted development is an offshoot of developing on a host system, similar to what we have been doing with our VM installation in previous installments of this series. But there are several differences since the target system, like most embedded systems, has limited memory and a relatively slow processor.

The Raspberry Pi (Model B) is an inexpensive ($35) single-board computer (SBC) with a 700MHz Broadcom ARM processor, 512MB RAM, high definition video output supported by a GPU, USB connectors for keyboard and mouse, an SD memory card used for the file system, an Ethernet port, as well as significant expansion ability. There are many other single-board computers on the market, some of which may be better suited to a particular application. The RPi (as it is sometimes called) was designed for educational use, so it fits well with the purposes of this column. If you need a board for a commercial application, I recommend that you look at other SBCs which are designed for the rigorous environment required by most embedded systems and are better tailored to your application's specific requirements. The RPi has a tremendous support base, with a dedicated website (www.raspberrypi.org), magazines, many online blogs and how-to guides, and even videos.

I followed the Quick Start Guide on www.raspberrypi.org and its recommendations to use the pre-installed NOOBS SD card and install the Raspbian distribution, based on the popular Debian distribution. First, I connected the RPi as shown in the first photo below.

I followed the instructions on using the NOOBS SD card, which displays a number of available distributions when the RPi boots from the SD card. I selected Raspbian. This was mostly uneventful, but it takes a while. After Raspbian is installed, it needs to be configured. I made the mistake of specifying All Locales, rather than just the US locale, where I live, which took longer to configure than installing the distribution. Once Raspbian was done configuring, I was able to log in using the username “pi” and password “raspberry.” I discovered that the USB keyboard I was using was not recognized correctly: some keys displayed the wrong characters, such as the British pound symbol (£) rather than the character on the key cap. There may be some way to fix this using one of the Debian configuration tools, but my fix was to edit the default keyboard configuration:

$ sudo nano /etc/default/keyboard

Change XKBLAYOUT=“gb” to specify the correct country code, in my case, us.
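After the edit, the relevant lines of /etc/default/keyboard looked roughly like this; the values other than XKBLAYOUT are typical Debian defaults, not settings copied from my board:

XKBMODEL="pc105"
XKBLAYOUT="us"
XKBVARIANT=""
XKBOPTIONS=""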

To complete setting up the RPi, I needed to configure the Ethernet port. By default, Raspbian gets an IP address using DHCP. My DHCP server is configured to give the RPi the same IP address each time it submits a request. If you don't configure your DHCP server to assign the same address, it may assign a different address whenever the RPi starts, which can be confusing. Alternately, you can assign a static IP address to your RPi by following instructions found here: http://elinux.org/RPi_Setting_up_a_static_IP_in_Debian.
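In outline, the static-address approach changes the eth0 stanza in /etc/network/interfaces from DHCP to fixed values. The addresses below are examples that match my network; substitute your own:

auto eth0
iface eth0 inet static
    address 192.168.20.112
    netmask 255.255.255.0
    gateway 192.168.20.1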

Here's the RPi booted to a console login screen, powered by a small USB battery courtesy of the Yocto project:

Figure 1: http://www.embedded.com/ContentEETimes/Images/Design/Embedded/2014/1214/EMB1214_Eager09_Figure01.jpg
Figure 2: http://www.embedded.com/ContentEETimes/Images/Design/Embedded/2014/1214/EMB1214_Eager09_Figure02.jpg

When you log in to the RPi, you'll discover that it is a pretty complete distribution. In keeping with its educational goals, you will find that most of the development tools have already been installed. It normally boots up into command line mode, but you can start a GUI by running “startx.”

Let's build the simple driver from Part Seven. Raspbian comes with the Midori and NetSurf web browsers. I used NetSurf to visit Embedded.com and navigate to where I could download the ZIP file containing the source code for myrandom.c (http://www.embedded.com/design/embedded/source-code/4429789/myrandom-Linux-example-driver). I saved it in a subdirectory named myrandom and used “unzip myrandom.zip” to unpack it.

Using vi or nano, create a new Makefile in the same directory like the one listed in Part Six, replacing “lkm.o” with “myrandom.o”. (I don't recommend cutting-and-pasting the text from the web page; that picks up hidden characters.) Then I typed the command “make”, and the response was “make: Nothing to be done for 'all'.” You might remember from Part Six that I mentioned you might need to install the kernel development files for your stock kernel. If these are not installed, make doesn't know what to do.
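For reference, a minimal kernel-module Makefile along the lines described in Part Six would look something like this (a sketch; the Makefile in that installment may differ in detail):

# Build myrandom.ko against the headers for the running kernel
obj-m := myrandom.o

KDIR := /lib/modules/$(shell uname -r)/build

all:
	$(MAKE) -C $(KDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean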

We can use Debian's apt-get utility to install the kernel headers and build directory. But first, I wanted to make sure that I was running the latest version of Raspbian, so I executed the command “sudo rpi-update”, waited for it to finish updating files, and then rebooted the Raspberry Pi.

There are a number of web pages which discuss installing the Linux headers and the kernel sources. An easy way is to use the rpi-source script available on GitHub. To install, I followed a slightly modified set of the instructions on the rpi-source wiki (https://github.com/notro/rpi-source/wiki):

$ wget https://raw.githubusercontent.com/notro/rpi-source/master/rpi-source
$ sudo mv rpi-source /usr/bin/rpi-source
$ sudo chmod +x /usr/bin/rpi-source
$ rpi-source -q --tag-update

Before you run rpi-source, install the ncurses headers. The kernel build scripts require these headers. They can be installed using apt-get:

$ sudo apt-get install libncurses5-dev

Now run rpi-source:

$ rpi-source

As warned on the wiki, rpi-source complained that the installed version of GCC was different from the version needed to build the kernel. I followed the instructions, added the jessie source repository, and installed gcc-4.8:

$ echo "deb http://mirrordirector.raspbian.org/raspbian/ jessie main contrib non-free rpi" | sudo tee /etc/apt/sources.list.d/jessie.list
$ sudo apt-get update
$ sudo apt-get install -y gcc-4.8 g++-4.8
$ sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.6 20
$ sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 50
$ sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.6 20
$ sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.8 50

After updating GCC, I re-ran rpi-source. This ran for a while and downloaded almost 120MB of sources. Luckily, I have a fast Internet connection. rpi-source then built the support programs needed to build kernel modules. When this was done, rpi-source had created a source directory under /home/pi, created a link at /home/pi/linux pointing to it, and updated /lib/modules/`uname -r`/build to point to /home/pi/linux.
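A quick way to confirm the setup (my own check, not part of the wiki instructions) is to follow the symlinks:

$ readlink /lib/modules/$(uname -r)/build
/home/pi/linux
$ ls ~/linux/Makefile
/home/pi/linux/Makefile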

When all this setup was complete, I was able to revisit Part Seven, and follow the instructions there:

$ cd ~/myrandom
$ make
$ sudo insmod myrandom.ko
$ sudo grep myrandom /var/log/messages

As before, I created the /dev/myrandom node with the correct major number (not the same as before) listed in /var/log/messages. I ran the same small test of the driver. And it worked!
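For anyone following along, node creation looks like this; the major number 250 is an assumption, so use whatever number the log reports on your board:

$ sudo mknod /dev/myrandom c 250 0
$ sudo chmod 666 /dev/myrandom
$ head -c 16 /dev/myrandom | od -x

The read through head and od is just one quick way to exercise the driver; the test from Part Seven works equally well.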

The Raspberry Pi, like similar Single Board Computers (SBCs) such as the BeagleBone and many others, acts a lot like an underpowered laptop computer, with a separate keyboard and monitor. It's certainly possible to do application development or Linux kernel module development on an SBC. The advantage is that it is native development, like working on your desktop system. The disadvantage is that the resources available on an SBC are often limited, which can result in long build times or running out of space on the flash card.

Next time, we'll talk about a hybrid approach, where we build the Linux kernel for the Raspberry Pi on a fast host system, using cross-development tools.

* A new version of the Raspberry Pi, the Model B+, was announced as this article was being written. It looks like a good enhancement to the Model B, with a microSD card rather than the larger SD card, and additional USB ports. There should be no differences between using the Model B and the B+.

In the last installment, we set up a Raspberry Pi SBC (Single Board Computer) and used it to build a simple driver. This is an example of self-hosting, the development model in which we use an embedded target as our development platform.

We are going to take this one step further in this installment. First we will build the kernel on the Raspberry Pi, and then build it on a faster host system. Using a separate host for development is an example of the cross-development model.

Last time, we used the rpi-source script to download and install the headers required to build a kernel module on the RPi. The same developer, notro, who created that script has also created an rpi-build script which makes it easy to build the kernel. Instructions can be found on the GitHub page:

$ wget https://raw.githubusercontent.com/notro/rpi-build/master/rpi-build
$ sudo mv rpi-build /usr/bin/rpi-build
$ sudo chmod +x /usr/bin/rpi-build

While the rpi-source script was written in Python, the rpi-build script is written in Ruby, which needs to be installed before we can run it:

$ sudo apt-get update
$ sudo apt-get install ruby

The first time you run rpi-build, it will check for missing dependencies. Reply Y when asked whether you want to install these dependencies. After this, we can build the kernel using this script:

$ rpi-build use[stdlib] linux install

This is going to take a while. While the script downloads files and starts compiling the kernel files, let's bring up our VM running Fedora and look at building the Raspberry Pi kernel on a faster system.

We are going to use the git source control system to make a copy of the cross tool chain and kernel from the Raspberry Pi repository on github.com. First we need to install git as root:

$ su
Password: <enter root password>
# yum install git

And then we can download the tool chain and kernel source:

$ git clone --depth 1 git://github.com/raspberrypi/tools.git
$ git clone --depth 1 git://github.com/raspberrypi/linux.git

Let's get the Linux kernel ready to build:

$ cd ~/linux
$ make mrproper
$ export CCPREFIX=~/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian/bin/arm-linux-gnueabihf-

This selects the Linaro version of GCC-4.8.3 as our cross compiler. A cross compiler runs on one system, in this case our VM running on an x86-64 host system, and generates object code which will execute on a different processor, in this case ARM, and specifically the Broadcom BCM2708 processor used by the Raspberry Pi.
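A quick sanity check (my addition, not from the original instructions) is to ask the cross compiler to identify its target; it should report an ARM triplet rather than your host's:

$ ${CCPREFIX}gcc -dumpmachine
arm-linux-gnueabihf
$ ${CCPREFIX}gcc --version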

We are going to use the default Linux configuration for the Raspberry Pi:

$ cp arch/arm/configs/bcmrpi_defconfig .config
$ make ARCH=arm CROSS_COMPILE=${CCPREFIX} oldconfig

Alternately, we could have copied the configuration from the Raspberry Pi, which can be found in /proc/config.gz. Use gunzip to unpack this file and rename it from config to .config before running make oldconfig as shown above. When you do this, you will be asked about new options which have been added to the kernel but which are not mentioned in the .config file you are using. I simply hit return for all of these (many) options, selecting the default.
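Pulled together, fetching the board's configuration and building the kernel looks roughly like this. This is a sketch: it assumes the RPi is reachable over the network as pi@rpi, and -j4 merely runs four compile jobs in parallel:

$ scp pi@rpi:/proc/config.gz .
$ gunzip config.gz
$ mv config .config
$ make ARCH=arm CROSS_COMPILE=${CCPREFIX} oldconfig
$ make ARCH=arm CROSS_COMPILE=${CCPREFIX} -j4 Image modules

The resulting kernel is left in arch/arm/boot/Image, the file we copy to the target later in this series.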

We are continuing our series on how to get started using Embedded Linux. The previous installments can be found on the Open Mike blog.

In the last installment, we showed how to build a Linux kernel on a Raspberry Pi Single Board Computer (SBC), as well as build it on a much faster development system. There are some advantages to building on our target system, namely that we are using development tools and the environment on the target. When we develop on another system, we have to use cross compilers and a copy of the target environment, which makes this somewhat more complex. But a cross-development environment is usually much faster than an embedded target, with much more disk and memory space. With many embedded Linux projects, it simply isn't possible to build on the target.

We copied the kernel from the development system and wrote it to the SD card which we inserted in the Raspberry Pi. This worked, but it is awkward to move the SD card from the development system to the target, run some tests, then move it back when we want to make changes, then back to the target when we want to test these changes. Happily, the Raspberry Pi and almost all target boards have a way to talk with the outside world, either over a serial port or Ethernet. We'll use this to transfer files from the development system to the target, and even control the target from the development host.

The Raspbian distribution for the RPi comes with a complete set of network tools preinstalled. On the RPi console, enter the command “ip addr show”. You should see something like the following:

$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether b8:27:eb:bc:d0:81 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.112/24 brd 192.168.20.255 scope global eth0
       valid_lft forever preferred_lft forever

This tells you that the board is connected to the network and that the address for the external connection, eth0, is 192.168.20.112. That's the address my DHCP server assigned to the board. You will likely have a different address. I added the following line to the /etc/hosts file on my development system so that I could use the name “rpi” instead of using the full IP address:

192.168.20.112 raspberrypi raspberrypi.eagercon.com rpi

SSH stands for Secure Shell and it is the standard way to connect to remote systems using an encrypted connection. SSH is a collection of programs including ssh and scp. It is installed by default on the Raspberry Pi and we will use it to connect to the target by entering the following command on our development host:

$ ssh pi@192.168.20.112

or

$ ssh pi@rpi

You will be asked for the password for user “pi”. If you have not changed it, it is “raspberry”. Once this is accepted, you will be in a bash shell on the target. You can do anything from this shell that you would be able to do from the Raspberry Pi console. (You can also set up your target with a shared key so that you bypass the prompt for a password every time you connect.)
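Setting up the shared key takes two commands on the host, using standard OpenSSH tools (press return to accept the defaults when generating the key):

$ ssh-keygen -t rsa
$ ssh-copy-id pi@rpi

After this, ssh and scp connections to the target no longer prompt for a password.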

The first time that you use ssh, you will be asked to verify that the system that you are connecting to is the one you intended. You can answer “yes”. If you use ssh to connect to a system over the internet and you get this message unexpectedly, say after you have previously verified the system identity, there can be a couple of reasons. One is that you are not connecting to the system that you intended, in the worst case because someone has intercepted the connection. More likely is that the keys on the remote system have been regenerated.

By default, ssh starts a terminal session running bash. If you add a command at the end of the line, this command will be run on the remote system and then terminate. For example:

$ ssh pi@rpi df
pi@rpi's password: 
Filesystem     1K-blocks    Used Available Use% Mounted on
rootfs           6240824 4518044   1387688  77% /
/dev/root        6240824 4518044   1387688  77% /
devtmpfs          219768       0    219768   0% /dev
tmpfs              44788     224     44564   1% /run
tmpfs               5120       0      5120   0% /run/lock
tmpfs              89560       0     89560   0% /run/shm
/dev/mmcblk0p5     57288   25688     31600  45% /boot
$

SSH will also allow you to run graphic programs remotely if you specify the “-X” option on the ssh command. The program will be run on the target, but the window will be displayed on your host system.
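For example, this runs the LXTerminal program on the RPi with its window displayed on the host (it assumes an X server running on the host and that lxterminal is installed, as it is in Raspbian's desktop):

$ ssh -X pi@rpi lxterminal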

RSH is a package including the rsh and rcp programs. Similar to ssh, rsh allows you to connect to a remote system over Ethernet. The difference, and the reason that RSH is not usually installed by default in most distributions, is that no encryption is done on the data you enter or on that returned by the remote system. On systems where rsh is not installed, you may find that rsh is an alias for ssh. The command to connect using rsh is similar to that for ssh, except that you must specify the user id in a command line option:

$ rsh -l pi rpi

Similar to rsh, rlogin will also allow you to connect to a target:

$ rlogin -l pi rpi

Telnet is an older method of communicating with another system either over a serial line or over a network. It has fallen into disfavor in most applications, since all communication is in plain text, while SSH encrypts all messages. For development using an embedded Linux system connected to your development system, there is little reason to be concerned about someone monitoring your communications. Telnet is a much simpler program, and since it doesn't have to encrypt or decrypt messages it uses fewer CPU resources, which may be important for low-performance targets. An embedded Linux system which does not have enough CPU power or memory to run SSH can easily support telnet. You might have to install the telnet package on your development host, since by default most Linux distributions do not install it.
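To try it (my example; the package names are Debian's), install a telnet server on the target and connect from the host:

$ sudo apt-get install telnetd    # on the target
$ telnet rpi                      # from the host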

Both ssh and telnet allow you to control a target system from your host, even one which does not have a display or console attached. Or, as happened while writing this article, when the display abruptly stops working.

The ssh package contains a command, scp, which allows you to copy files between your development host system and a remote system. It works much like the cp command, but allows you to add the name of a remote system. Let's copy the kernel image which we built in the last installment to the /boot directory of our Raspberry Pi target:

$ cd ~/linux/arch/arm/boot
$ scp Image pi@rpi:/tmp
pi@rpi's password:
Image 100% 6226KB 778.3KB/s 00:00
$

The last line will be updated as the transfer is performed, telling you how much data has been copied and how long it will take to complete the transfer.

After copying the file to /tmp on the RPi, we can use the shell we opened to the RPi target to copy it to the /boot directory:

$ sudo cp /tmp/Image /boot/kernel.img

We could not copy the file directly to /boot on the Raspberry Pi because it is owned by root. The Debian distribution which is installed on the Raspberry Pi does not have a password for root, so the only way that you can copy the file is to use the “sudo” command shown above.

As with ssh, the data is encrypted before being sent and decrypted on the receiving end. Especially on low-performance targets, this can make data transfer slow.

The rsh package includes the rcp command, which is similar to scp except that the data is not encrypted. This means that it is more efficient than scp and can support faster data transfer, even on a low-performance target system. You will need to install it on the Raspberry Pi target:

$ sudo apt-get install xinetd rsh-server

You also need to add a line like the following to the /etc/hosts.equiv file, replacing <hostname> and <userid> with the hostname of your development system and your userid:

<hostname> <userid>

One way to append the line is:

$ echo "<hostname> <userid>" | sudo tee -a /etc/hosts.equiv

On the host system, we can now use rcp in the same way we used scp above:

$ rcp Image rpi:/tmp

Notice that we don't need to specify the user id or enter a password when using rcp.

Talking to the target isn't the most exciting thing in the world, but when you can't, it can be one of the most frustrating. Now that we can connect to the target, enter commands, and transfer files, we can move on to the topic of the next installment: application development.
