Tag Archives: ELC-E 2015

Debugging the Linux Kernel with GDB – Peter Griffin, Linaro / STMicroelectronics

Proprietary tools for debugging hardware like TRACE32 and DS-5 are very expensive. Open source debuggers like gdb lack ‘kernel debug’ features. This talk gives an overview of what exists in open source today, how we can make it better, and what the challenges are.

Ways to debug Linux with gdb:

  1. GDB remote protocol connecting to a kgdb stub, to QEMU running a kernel, or to a JTAG probe.
    1. kgdb: all kernel threads are enumerated in GDB. But since kgdb runs in-kernel, it’s less suitable for serious crashes. The kernel also has to be rebuilt to enable it, and serial or ethernet support is needed, so it only works on a functioning board.
    2. QEMU: no real hardware required, good for testing generic kernel code, good to test “Linux awareness” in the debugger (see below).
    3. JTAG via OpenOCD: supports many cheap FTDI JTAG dongles for ARM and MIPS. Allows you to inspect very broken systems (no serial/network) or to debug before the kgdb stub is functional. But it can be difficult to set up; it’s easier if you hot-attach, i.e. boot in the normal way and attach JTAG later.
  2. Generate a kernel dump (/proc/kcore or /proc/vmcore with recovery kernel) and debug offline. No live debugging possible, but good for debugging deployed systems.
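For the QEMU case, a minimal session might look like this (the machine type, image names and port are examples; `-s` starts a gdbserver on :1234 and `-S` halts the CPU at startup):

```
$ qemu-system-arm -machine vexpress-a9 -kernel zImage -s -S
$ gdb vmlinux
(gdb) target remote :1234
(gdb) break start_kernel
(gdb) continue
```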

Problems with gdb debugging (except via kgdb): ‘info threads’ shows only 1 thread per CPU, with no visibility of sleeping threads. ‘backtrace’ only shows the kernel backtrace; if the CPU is running in userspace you just get ???. To improve this, Linux awareness must be added:

  1. Task awareness: report task_structs as threads.
  2. Loadable module support
  3. OS helper commands, e.g. to extract dmesg buffer

Awareness can be added as a scripting extension (Python or Guile), in the stub (OpenOCD), or as a C extension. As a Python extension, the code can live in the kernel tree, so it can evolve together with the kernel. Jan Kiszka has implemented this: scripts/gdb/*, enabled with CONFIG_GDB_SCRIPTS. It adds extra gdb commands like lx-dmesg, lx-lsmod, lx-ps (list kernel threads), and convenience functions. However, it doesn’t give you per-task backtraces: no thread objects are created in the gdb context, and the GDB Python API would have to be extended to support that.
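A sketch of what using the in-tree scripts looks like (paths are examples; the kernel must be built with CONFIG_GDB_SCRIPTS and debug info):

```
$ gdb vmlinux
(gdb) add-auto-load-safe-path /path/to/kernel-build
(gdb) target remote :1234
(gdb) lx-dmesg
(gdb) lx-ps
(gdb) p $lx_current().pid
```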

The second option is to add the awareness in the stub. This is e.g. what kgdb does. OpenOCD has task awareness for RTOSes, and even for Linux, but it is disabled by default; add -rtos linux to the ‘target create’ command to enable it (though changes are still required to get it working). However, implementing kernel awareness in the stub means reimplementing it for each stub (QEMU, …), and it can’t be used to debug kernel dumps. The kernel data structures have to be parsed in OpenOCD itself, which adds a dependency on the kernel, since this information can’t be passed over the gdbremote protocol. There could be workarounds (extending the gdbremote protocol) but that’s work.
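A sketch of what enabling that looks like in an OpenOCD board config (the target name and chain position are board-specific examples):

```
# enable OpenOCD's Linux task awareness on an ARM target
target create mychip.cpu cortex_a -chain-position mychip.dap -rtos linux
```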

The third option is to add a C extension to gdb. This is what ST did for the ST Micro Connect 2 debugger, implemented in the GDB target model as patches on GDB; they asked Linaro to help upstream this support. The LKD (Linux Kernel Debugger) adds a layer on the target stack (which has to be explicitly loaded) that maps kernel task_structs to GDB threads. It is populated when the debugger takes over; for efficiency, it reads the entire task_struct at once. But it doesn’t (yet) support unwinding userland. It hooks into module_init and module_arch_cleanup to detect and handle module (un)loading, using the solib infrastructure in gdb. It also programs the MMU to be able to access the modules; for this the gdbremote protocol has to be extended. And it adds helper commands to view things like dmesg, a bunch of /proc files, and memory maps. It is clearly the most powerful approach, it supports kernel dumps as well, and an implementation already exists (including a test suite). But it creates a dependency between kernel and debugger (though the relevant structures normally don’t change much), i.e. it adds program-specific information to gdb.

Another extension already exists that does something similar for kernel dumps.

The next step is to port it to GDB 7.10 and publish it, removing the parts which overlap with the Python gdb extension and migrating more of it into Python. The preliminary response from the GDB community has been positive.

Rethinking the Core System – Bernhard Rosenkränzer, Linaro

Are alternatives to gcc, libstdc++ and glibc viable yet? And how do I use them?
uClibc as an alternative is already “traditional”.
binutils is still needed, especially ld. Alternatives: lld and mclink(?), but they’re not quite there yet. gold (also from binutils) is a viable alternative. It has some use outside of LTO: it has code folding options, and it is a little faster at linking. gas is sometimes needed because the LLVM assembler doesn’t support everything you still encounter in assembly code. Tools like nm need to deal with LLVM bitcode and gcc intermediate code (for LTO) in addition to traditional object files. This can be done with a wrapper script that determines the type of object file and calls into llvm or gcc.
gcc can mostly be replaced by clang. E.g. OpenMandriva 3 is almost fully built with clang 3.7. Problems are usually due to bad code or gcc extensions. Many patches have been upstreamed; a couple are still maintained out of tree in OpenMandriva. Packages that are really too difficult to handle are just built with gcc – which is possible as long as they use the same libc (it doesn’t matter who compiled it).
clang’s __GNUC__ macros are too conservative, so some projects will think the compiler version is too old to enable some features. The proper solution is to detect features instead.
Things to avoid to be compatible with both compilers:

  • Nested functions
  • Variable length arrays of structs and non-POD types
  • Empty structs
  • Array subscripts of type char (cast them to int)
  • Some reserved words (e.g. _Nullable is used in Qt)
  • Declared but undefined static functions or variables (error in clang, not even a warning in gcc)
  • Build gcc with --with-default-libstdcxx-abi=gcc4-compatible – gcc 5 changed the C++ ABI (and it will change again in the future)
  • C89-isms or C++98-isms, e.g. ‘extern inline’ – clang defaults to new standard versions, gcc defaults to C90.

It can be a good idea to just build with clang as an extra warnings pass, even if you build the final thing with gcc.

Use gcc or clang for new code? clang tends to be faster at compiling, its source code is easier to read, and it has backends for GPUs. gcc has better OpenMP support and supports more targets. Best solution: try both, test the results, and go for the one that produces the best executable.

musl is now a viable replacement for glibc. It’s not binary compatible with glibc. clang doesn’t support musl directly, but patches are available from OpenMandriva (just 5 patches). gcc trunk supports musl.

bionic is also a libc that is starting to be usable. It is highly optimised (especially for ARM). It still lacks SysV shared memory – so no X server. But you can take code from e.g. musl to implement that part.

To ensure libc compatibility:

  • Really include all headers you use, don’t rely on implicit inclusion through another header file (esp. with musl).
  • Avoid using deprecated API.
  • Don’t assume that _GNU_SOURCE, _BSD_SOURCE are defined.
  • __linux__ != __GLIBC__
  • Some functions may not exist, e.g. locale variants.

glibc supports most targets and almost everything is compatible with it. musl tends to be faster and smaller, without cruft. bionic is even smaller and faster (on ARM), but is designed for Android needs so you may have to import some functions from another libc. uClibc is even smaller, and it’s configurable, but doesn’t support aarch64.

libstdc++ can be replaced with LLVM’s libc++, but binary compatibility doesn’t exist. This is a problem for libraries: e.g. Qt linked with libc++, another application linked with libstdc++ – at shared library load time you get symbol clashes. So libc++ is only viable if you can rebuild everything.

To ensure libstdc++ compatibility:

  • Code for C++11 or C++14
  • Don’t assume one header includes another

libc++ is often a better choice: 50% smaller, with full C++14 support. Android is switching to it, but they have to, since STLport is unmaintained. But it’s almost exclusively used with clang, so only use it if you also compile with clang.

Cross-compiling with clang is easy because the compiler always supports all possible target backends. So you just need to point to the correct sysroot – except that --sysroot doesn’t work very well. You need a wrapper to add -nostdinc -isystem XXX -L XXX.
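A wrapper along these lines (the target triple and sysroot path are made-up examples):

```sh
#!/bin/sh
# Hypothetical clang cross wrapper: point the compiler at the sysroot
# explicitly, since --sysroot alone often isn't enough.
SYSROOT=/opt/arm-linux-gnueabihf/sysroot
exec clang --target=arm-linux-gnueabihf \
    --sysroot "$SYSROOT" \
    -nostdinc -isystem "$SYSROOT/usr/include" \
    -L "$SYSROOT/usr/lib" \
    "$@"
```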

When cross-compiling, the fact that it compiles doesn’t mean it works. One thing you can do is run the result in qemu.

CHIP – The World’s First Nine Dollar Computer – Hans de Goede, Red Hat

Hans works for Red Hat, which has nothing to do with C.H.I.P.; Hans is not affiliated with Next Thing Co in any way except as a user of the C.H.I.P.

C.H.I.P. is a full computer in 60×41mm. Core = Allwinner R8 (1GHz Cortex-A8, no virtualisation extensions, Mali 400) + 512MB DDR3 + 4GB NAND + RTL8723BS (802.11b/g/n 2.4GHz + BT 4.0). USB-A, 3.5mm headphone jack with mic in or composite video out, USB-B OTG, Li-Ion/LiPo battery connector (the charger is in the PMIC).
2 big headers: voltage, LCD/LVDS, CSI, 2x I2C, resistive touch, PWM out, UART. LCD pins can be configured for MMIO (external phy needed). CSI can be configured as SPI2 + MMC2. Roughly 60 pins can be configured as GPIO. One pin is “firmware recovery mode” which is needed to flash an image over USB.
It doesn’t have a good video output on the base board, so you need daughterboards with HDMI or VGA out. PocketCHIP has LCD + resistive touch + keyboard + a case that holds the C.H.I.P. Hans would like to see a daughterboard with wired ethernet. To identify daughterboards, the plan is to use 1-wire chips (which have a unique ID so no conflicts, but U-Boot doesn’t have a 1-wire stack).
The board consumes about 2A, so running on USB is difficult.
Mainline support: U-Boot is there, except for the NAND flash (only a simple read-only driver is available out of tree) – Free Electrons is working on mainlining a full NAND driver (cf. U-Boot). Mainline Linux is missing NAND, Wifi/BT (Realtek has an out-of-tree driver under GPL, but it’s extremely ugly; someone is working on cleaning it up), and video. The IP block for video encoding/decoding has been reverse-engineered, but nobody is working on a driver; Next Thing Co plans to have an out-of-tree driver that uses Allwinner’s Android binary blobs. Similar for the GPU. Video output: U-Boot has video output support, and the kernel can take this over through simplefb. Maxime Ripard is working on a KMS driver.
A lot of upstream work is done by the linux-sunxi community.
It runs a default Fedora 22 – that kernel and U-Boot already have Allwinner support.

Supporting Multi-Function Devices in the Linux Kernel: A Tour of the mfd, regmap and syscon APIs – Alexandre Belloni, Free Electrons

Multi-function devices are external peripherals or on-SoC hardware blocks that expose functionality handled by separate subsystems in the kernel. PMICs are a typical example: they often have e.g. an RTC and a LED controller in addition to the regulators.
The MFD subsystem handles such devices. It allows you to have 1 device and register it in multiple subsystems. It will also handle multiplexing the accesses to the bus, since the different subsystems will talk to the device concurrently. It could also handle clocks, configure it, handle variants, etc.
Using an MFD with separate functions has the advantage that you can reuse the individual function drivers for other MFD devices.
API: mfd_add_devices(parent, id, cells, n_devs, mem_base, irq_base, irq_domain), and mfd_remove_devices(dev)
struct mfd_cell is a huge struct; most entries are device-specific, but it also has a few of the usual device members, like name and of_compatible. The MFD driver first registers itself as a normal I2C driver, then creates each of its cells, then calls mfd_add_devices.
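A condensed sketch of that flow (the chip, cell names and compatibles are invented; error handling and the rest of the driver boilerplate are omitted):

```c
#include <linux/i2c.h>
#include <linux/mfd/core.h>

/* Hypothetical PMIC exposing an RTC and a regulator as MFD cells. */
static const struct mfd_cell foo_pmic_cells[] = {
	{ .name = "foo-rtc",       .of_compatible = "acme,foo-rtc" },
	{ .name = "foo-regulator", .of_compatible = "acme,foo-regulator" },
};

static int foo_pmic_probe(struct i2c_client *client,
			  const struct i2c_device_id *id)
{
	/* Register one child platform device per cell; each child is
	 * then bound by its function driver in its own subsystem. */
	return mfd_add_devices(&client->dev, PLATFORM_DEVID_AUTO,
			       foo_pmic_cells, ARRAY_SIZE(foo_pmic_cells),
			       NULL, 0, NULL);
}
```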
There is a device-specific header file, e.g. include/linux/mfd/tps6507x.h, that is used by all the individual function drivers. The function drivers are spread over the tree in their individual subsystems, e.g. tps6507x_pmic under regulator.
The cells can refer to pdata which is defined in the MFD driver and then used by the function drivers. But of course this is a bit of a return to board files, i.e. not using DT. In the DT, you have a definition of the MFD itself and a child node for each function.
Multiplexing register access: there are some registers that are shared between different functions, e.g. the watchdog and rtc enable bits could be in the same register, so these have to be accessed atomically. An easy way to do this is to use the regmap API (which was created for ASoC). You create it in the MFD driver and use it from the functions. regmap can use I2C, SPI, MMIO, SPMI, or you can pass your own accessors. It handles locking, can cache registers, can do endianness conversion, checks out-of-bound accesses, and handles IRQs in addition to registers. Register types: read only, write only, volatile, precious (= reading has side effects, e.g. clear-on-read, so regmap will only read it when explicitly asked).
regmap_init{_i2c,_spi}() with devm_ and _clk variants. To use: regmap_read(), regmap_write(), regmap_update_bits().
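A sketch of sharing a regmap between the MFD core and its functions (the register layout and names are invented):

```c
#include <linux/i2c.h>
#include <linux/regmap.h>

static const struct regmap_config foo_regmap_config = {
	.reg_bits = 8,
	.val_bits = 8,
	.max_register = 0x7f,
};

static int foo_setup_regmap(struct i2c_client *client)
{
	struct regmap *map;

	/* Created once in the MFD driver, then handed to the cells. */
	map = devm_regmap_init_i2c(client, &foo_regmap_config);
	if (IS_ERR(map))
		return PTR_ERR(map);

	/* Atomically set one bit in a register that another function
	 * driver (e.g. the watchdog) also touches. */
	return regmap_update_bits(map, 0x10, BIT(0), BIT(0));
}
```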
Sometimes an MFD supports only one simultaneous function. E.g. Atmel Flexcom can do either UART or SPI or I2C, but not all at the same time. So the mode is specified in the DT. The MFD driver then configures the hardware and populates the function driver. The function drivers can be reused because Atmel uses the same kind of IP blocks as in their previous chips.
SoCs sometimes have a set of registers with miscellaneous features that don’t relate to a specific IP. Clearly, there can’t be a functional driver for this. The syscon MFD driver handles this kind of situation. This driver uses the regmap API: when you request access to syscon, the regmap is created if it doesn’t exist yet, e.g. via syscon_regmap_lookup_by_compatible(). To avoid writing an MFD driver, you can use simple-mfd as the DT binding, which just makes sure that all the children are registered too. Then you only have to add a header file that defines the registers.
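A sketch of the lookup from a function driver (the compatible string and register offsets are examples):

```c
#include <linux/mfd/syscon.h>
#include <linux/regmap.h>

static int foo_tweak_soc_bit(void)
{
	struct regmap *map;

	/* The shared regmap is created on first lookup if needed. */
	map = syscon_regmap_lookup_by_compatible("acme,soc-ctrl");
	if (IS_ERR(map))
		return PTR_ERR(map);

	return regmap_update_bits(map, 0x04, BIT(3), BIT(3));
}
```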
There has been some pushback against patches using syscon, are there guidelines for when it’s appropriate? Alexandre would use it anytime that there are registers used by several drivers.
What is the support for run-time power management? You have to implement it both in the MFD driver (e.g. turn off global clock) and the function drivers (taking function-specific steps).
Could you have a regmap type that calls into userspace?

ELC-E / LinuxCon 2015 (Dublin) Wednesday keynotes

Getting it right – Martin Fink, HP: copyleft is good, please default to copyleft licenses.
Building the J-Core CPU as Open Hardware: Disruptive Open Source Principles Applied to Hardware and Software – Jeff Dionne, Smart Energy Instruments
Business Innovation within Huawei’s Service Provider Operations (SPO) Lab – David Mohally, Huawei

Linux Kernel SoC Support Mainlining Tips (By a Bunch of Other French People) – Thomas Petazzoni, Free Electrons

Free Electrons has been working on mainlining for Marvell, Allwinner and Atmel. With just 6 engineers, they have been in the top 20 contributor list every release for about three years. But it is exactly because they are a small team that they are so efficient: there are no communication issues or politics, no legal overhead, no long meetings. There is no internal review of the code, but sometimes two Free Electrons engineers comment on each other’s patches on the mailing list. Knowledge sharing is easy: the people working on different SoCs and different drivers can learn from each other since they’re in the same room. Motivation is another key property – for instance, they do a victory dance when something works. Through this motivation, they can follow reviews etc. at the time they happen, not just during working hours.

Unlike Free Electrons, SoC companies often don’t really understand the community. The people contributing really have to be part of the community; you can’t, as a company, set up a communication channel with another entity – it’s a relationship between people. Once you’re part of the community you nurture it, you’re ready for comments. You have to understand that the community doesn’t have the same goals as the SoC vendor: the SoC vendor wants to support its next chip; the community wants maintainability, collaboration between different subsystems, and continued support for older chips. Being part of the community also means conferences and networking – that is where you build up trust relationships. The biggest hurdle in mainlining is getting the attention of the maintainers: if you have already met them, have had discussions with them, and have proven that you will be there to fix any breakage, it will be easier for the maintainer to apply your patches. Free Electrons can also focus on mainlining: they are not working on products, not distracted by customer issues, and the boards have already been brought up so they know what the gotchas are.
There is no overhead of unrealistic planning or expectations. You need to have the right tools, ICT infra that works with you instead of against you.
But the number of patches is a poor measure of involvement. One reason that there are so many is that it’s almost all on specific drivers and CPUs. In core infrastructure, it is (and should be) much more difficult to get a patch accepted, so of course you get fewer “contributions”.

Fireside Chat – Linus Torvalds and Dirk Hohndel

Anything exciting this release? So far it seems pretty calm: no new arch, no new fs, just updates to arches and drivers.
Is security of the kernel also improving (cf. the Core Infrastructure Initiative)? We could do with tools to detect certain kinds of patterns. The kernel is special because any bug is a potential security bug. We could do better, but we’re not doing particularly badly. Scrutiny is unlikely to be the way to go, because a human simply can’t go over the 25M lines of code in wildly different drivers and make sense of it. So instead, we need analysis tools and automated testing.
Linus is excited about hardware, so he really likes to look at the evolution in CPUs. He’d like to see an ARM laptop, because he doesn’t play with dev boards anymore.
What more can we do to find more contributors? We already get good contributors, the problem is in maintainers. It’s fairly easy (though a bit scary) to submit patches. The next step is more difficult. Succession plans, maintainer teams (adopted by e.g. ARM). Linus is probably going to push for that, e.g. it reduces the threshold to become a maintainer. A drive-by contributor is not so very useful in the long run.
Everybody is talking about containers, is that something you find interesting? No. Containers have been a huge challenge. It’s interesting as long as it is about hardware interfaces, e.g. KVM and cgroups.
The kernel should be like tarmac, it shouldn’t be something that people should need to be worried about.
Linus has been making C++ commits in other projects, is it going to happen in the kernel too? No, it’s not going to happen.
In the kernel, the basic principle is that if we break an application, we will unbreak it. In other projects (e.g. core libraries) that principle is not followed, and that is just wrong. Because of this promise, users are much more likely to update their kernels because they know they can rely on it.
Would you support something like ksplice as a mainline feature? Yes, but it’s hard, so it will only be accepted if it really works. It will anyway only work for very specific changes; it’s never ever going to be possible to live-patch from 4.3 to 4.4.
Is there an open source project where Linus says “I really wish someone would do it”? Nothing specific, except that projects should not break their users (see above). He would prefer not to have to start another project himself; he prefers it when it has already been done.
What would you like to be different in Linux’s 25th year? Considering how well we’re doing, it’s ludicrous to say it should be even better. He still enjoys all these crazy people doing crazy things that he would never have thought of. The most surprising thing in those 24 years was getting users.
There is speculation that Linus considers becoming a professional photographer. That’s not true.
Does Guinness really taste better in Dublin? It wasn’t that different, because you can get fresh Guinness in Portland too.

Creating Open Hardware Tools – David Anders (prpplague), Intel

See http://elinux.org/Open_Tools

In science, open tools are created fairly often, since for experiments you often need specialised tools, and scientists share it with each other. E.g. Bunsen burner – Bunsen distributed it under an essentially open source license.
All commercial debugging tools that exist have limitations, e.g. limited OS support, price, features. Since it’s commercial, you can’t change it.
One of the first open source HW projects was LART. To be able to program it, they needed a JTAG dongle, so they also developed a parallel port JTAG dongle: Holly Gates.
Nowadays the Maker/Hacker community is booming. These people want to have a logic analyser, scope, and other tools, so there is a drive to create ones that are less expensive and where features can be added.
Logic Analysers: Open Workbench Logic Sniffer: FPGA based. Bus Pirate: PIC based – very limited but very cheap ($10). Saleae Logic: based on Cypress FX2 reference design. Also many Arduino based designs.
Oscilloscopes: handhelds, which are repurposed mp3 players – mostly not open source but there are a few that are (or at least where the schematics and firmware are released). Arduino-based. Kits for building your own oscilloscope.
Custom solutions, specific to the use case. E.g. the TI TFP410 converts parallel RGB to DVI; you can use this to create a DVI debug board. In general, bridge chips are a good way to create debug boards.
Shared tools. E.g. flashrom is intended to flash e.g. BIOS, led to creating little pieces of hardware to assist doing that. These have reference schematics, so if it doesn’t match your design you can use it. Similar for OpenOCD, sigrok.
To create your own simple hardware: KiCad (the UI is clunky but will improve now that it’s the primary development tool for CERN). Eagle CAD (proprietary, but there is a free version with license limitations). Altium, which David hasn’t tried because he’s not fond of cloud applications; its only real limitation is that you are only allowed to work on open hardware. Note that each package has its own format (there is an open interchange format, but don’t rely on it; there are also conversion tools, but they’re dodgy), though KiCad and Eagle CAD at least have fairly open formats. More expensive packages make it really difficult to do version control.
Once you create something, be sure to license it, otherwise it can’t be shared! Note that you can still make money off it even if it’s open source.
Still some serious challenges in the open hardware tools.
Displays: beyond a simple LED, 7-segment display or character LCD, it becomes very difficult to debug. Therefore, you will need more complex boards for debugging. This leads to heavier requirements on the design software: 4+ layers, differential lines (PCIe, SATA, HDMI), matched impedance, QFN and BGA packages (difficult to solder manually).
For e.g. differential pairs, an FPGA could be a solution, but the libraries to do the decoding are closed and restrictively licensed, including DRM restrictions.
If you can’t do soldering yourself, you can go to companies that take care of PCB + soldering, e.g. microfab.

Developers Care About the License: Using SPDX to Describe License Information – Jilayne Lovejoy, ARM

Jilayne previously worked for a company that did audit services, where she learned about all the ways that developers fail to provide license information in their code. If they did that better, some of these tools would not be necessary.
What developers really care about is sharing the code. So how do we go about that, given that code is by default protected by copyright? By giving permission. If you don’t do that (if you don’t specify a license), it’s not open source.
GitHub is notorious for not having licenses. When GitHub added a way to create a license file at repo creation time, the percentage of projects with a license jumped from 10% to 20%. Repos with a higher rating have a higher percentage of licenses.
How do we specify the license? LICENSE.txt. But the problem is that as the package travels downstream (e.g. as binaries), the file may get lost. In addition, many different components can be collected together, so the license may be hard to find. And when someone takes a few files out of the package, the package license gets lost as well. So it’s important to put a license notice in every file.
SPDX: human and machine readable format, focuses on capturing facts, not interpretation. It’s a pretty large standard, 72 different use cases were considered for defining it.
So how do you use it as a developer? First of all, use a short identifier from the SPDX License List, which includes guidelines to determine whether a license text matches. Put this identifier in every source file, e.g. SPDX-License-Identifier: xxxx, either with or without the actual short license reference. Of course, also include the full license text in the project. Cf. the POCO project.
What if more than one license applies? There is a license expression syntax, see appendix IV in the license list. Operators: AND, OR, WITH (= exceptions), + (= or later).
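What this can look like at the top of a C source file (the license choices are just examples):

```c
/* SPDX-License-Identifier: GPL-2.0 */

/* Expression examples using the operators above:
 *   SPDX-License-Identifier: MIT OR Apache-2.0
 *   SPDX-License-Identifier: GPL-2.0+
 *   SPDX-License-Identifier: GPL-2.0 WITH Classpath-exception-2.0
 */
```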
The second way to express the license is to provide an SPDX document. The SPDX file contains file checksums, so you need a tool to generate it: FOSSology (install it or use the unomaha instance), Wind River (submit through a website), Yocto (generates it during build), Debian (during build), a Maven plugin (during build), an Eclipse plugin (under development), DoSOCS (stores the info in a database), and more.
What SPDX by itself doesn’t solve is getting good data, i.e. do you trust the SPDX file you get from someone else?
If not all files are really used to generate the binary, SPDX does allow you to express dependencies between files (i.e., the binary and the corresponding source) and use this to interpret what the license of the binary is.

Portable Linux Lab: A Novel Approach to Teaching Programming in Schools – Emma Foley & Laura Reddy, Intel

When Emma was in secondary school, one of her teachers decided he was going to teach programming, even though they hardly had a computer – this is how she got into technology.
There is a lack of STEM (science, tech, engineering, math) professionals, there is not much computing done in schools, tech is not very well known and stereotyped, and there’s a lack of diversity.
The GIFT-ED (Girls influenced for technology in education) program is an 8-week mentoring initiative for 14-year-old female students. The goal is to make a career in STEM something normal. Why is it not considered normal now? Girls think it’s not interesting, that they wouldn’t be good at it, and that the people in it are not very nice. This program was a big success. However, it wasn’t really scalable. One problem was that the school computer labs were very restricted. It also wasn’t easy to continue independently, because it’s difficult to get started at home (they partly achieved that by teaching HTML/JavaScript, which the students could continue with at home). So maybe it’s better to use Linux instead of Windows, with a live USB; you also have a lot more things you can tackle that way: shell, Python, and even compiled languages (HTML doesn’t teach you that many computer skills). However, it is still staring at a screen and not real life. Hence the Portable Linux Lab:
Portable Linux Lab is a Galileo or Edison with a breadboard with LEDs and buttons, so something real is happening which keeps the motivation going. This also gives more opportunities for problem solving, because things can go wrong in the HW as well as SW. And it allows you to teach about electronics with the same thing. And it’s portable. And it gives them Linux experience, which will give them more confidence.
At an earlier age, Scratch or graphical programming in Arduino is probably better (spelling is an issue…).