
Summary of ELC-E 2014

I gave a presentation for my colleagues about what I learned at ELC-E, with a lot of references to the original material (i.e. the slides on the LF website).

Slides as PDF

The Beamer source, with a .key extension because wordpress.com doesn’t allow files with a .tex extension.

Open Source: A Job and an Adventure – Dawn Foster, Puppet Labs

Why do you want a job in open source?

  • Meet friends from around the world.
  • Travel opportunities: conferences, but also interacting with other companies.
  • Career opportunities: your work is visible, and you have a lot of connections.
  • Freedom, innovation, cooperation (even between competitors, e.g. puppet + chef do configuration management camp at FOSDEM).

How do you get there?

  • Start a new project, with something that you need. This is probably the hardest way to do it, because it requires creating a company and doing the business side of things, or you have to find a company to hire you to work on your project.
  • Participate in an existing project. This is the most common way – most companies who do significant work in open source hire from the community. Participation is not just from developers, but also blogging, documentation, …
  • Just join a company that does open source, in any role; you can transition into other jobs at that company.
  • Bring OSS in your current job.
  • Write and speak about your open source work, so potential employers see you. Sometimes it’s years later, but people know you.
  • Consulting: once you’re already relatively well known.
  • Documentation is one of the most common ways to get started participating in an open source project.
  • Be nice to people and to the community. What you do and say will be remembered. You never know whether the person you’re talking to might have a job for you in the future.
  • Networking, both for getting a new job and for improving your current situation. You can’t start doing this only when you’re already looking for a job. Basically it’s having interesting conversations with people. You shouldn’t think of it as a networking activity or as work. Talk with a wide variety of people.

One of the tricky things about OSS work is that you can work on it infinitely; there is always more to do. So you need to manage your time. Prioritize, delegate (e.g. wait before responding to a question on the mailing list; maybe someone else will respond). Documenting things also saves time, because it makes it easier to delegate and easier to respond to questions.

Software Update in Embedded Systems – Stefano Babic, DENX

Why is upgrading embedded SW different?

  • power failure
  • bad firmware
  • communication errors
  • additionally, there is often no direct access, so you need to recover automatically from failure
  • SW is not on a plain disk, but on a variety of media (NOR, NAND, eMMC, FPGA, …)

[Leaving out a lot of things that are so obvious to me that I didn’t want to write them down – see the slides.]

Take into account who will do the update. The mechanic may not even have a computer with him when he goes on-site! For instance, give him a USB stick, but remember to give feedback about failure.

Solutions for system upgrade

  • Bootloader: severely limited (few drivers, limited UI)
  • Package manager: not atomic, hard to know exactly what is installed, more places where things can go wrong; but advantage: smaller update images
  • Rescue image
  • From the application: requires a double copy of the application software to enable atomic updates; if there is a rescue system as well, then that one doesn’t get tested well…
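
The double-copy approach above can be sketched as follows. This is a hypothetical illustration (the class and names are mine, not from the talk): the new image is written to the inactive slot and verified there, and only then is the boot flag flipped, which is the single atomic step.

```python
# Hypothetical sketch (names are mine, not a real updater's code) of a
# double-copy (A/B) update: the new image goes to the inactive slot and
# is verified there; only then is the boot flag flipped, which is the
# single atomic step. A power failure at any point leaves the system
# bootable from the old slot.
import hashlib

class ABUpdater:
    def __init__(self):
        self.slots = {"A": b"v1-image", "B": b""}
        self.active = "A"  # boot flag read by the bootloader

    def update(self, image, expected_sha256):
        inactive = "B" if self.active == "A" else "A"
        self.slots[inactive] = image  # interruptible: active slot untouched
        if hashlib.sha256(self.slots[inactive]).hexdigest() != expected_sha256:
            return False  # verification failed, keep booting the old slot
        self.active = inactive  # atomic flip: new image becomes active
        return True

u = ABUpdater()
new_image = b"v2-image"
ok = u.update(new_image, hashlib.sha256(new_image).hexdigest())
```

The price of the atomicity is the double storage; that is the trade-off against the package-manager and rescue-image approaches above.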

The upgrade systems that are used in reality are 95% similar, so Stefano started swupdate for this common stuff. Features:

  • Can recover from failure: this is not really generic, but offers a toolbox in which you need to enable things, e.g. watchdog, bootcounter, …
  • Checks hardware and software compatibility
  • Checks image integrity, but not signatures!
  • Can repartition the storage
  • Local and remote upgrade possible
  • In case new features have to be added: a Lua interpreter, so it can be extended on the fly
  • Single image for multiple devices: a single release image applies to all devices in the system, which makes sure things stay consistent; each device extracts the part that is meant for it.
  • General API to interact with the UI and transport frontends (built-in or custom).
  • Possible to write a custom image parser in Lua.
  • Handler depending on the device/partition on which a sub-image has to be installed. A custom Lua handler is possible.
  • Mainly intended for rescue system scenario, but could be extended to double copy (needs change to the way bootloader flags are set).
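
The bootcounter mentioned in the recovery toolbox above works roughly like this. This is a hedged sketch (the constants and function are hypothetical, not swupdate's actual code): the bootloader increments a counter on every boot attempt, a healthy application resets it to zero, and after too many failed attempts the rescue system is selected.

```python
# Hedged sketch of the bootcounter idea (constants and function are
# hypothetical, not swupdate's actual code): the bootloader increments
# a counter on every boot attempt; a healthy application resets it to
# zero; once the limit is exceeded, the rescue system is selected.
BOOT_LIMIT = 3

def next_boot_target(bootcount):
    """Return (partition to boot, updated bootcount)."""
    bootcount += 1
    if bootcount > BOOT_LIMIT:
        return "rescue", bootcount  # give up on the main system
    return "main", bootcount

count = 0
target = None
for _ in range(4):  # four boots in a row where the app never resets count
    target, count = next_boot_target(count)
```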

https://github.com/sbabic/swupdate

Qubes OS – Joanna Rutkowska, Founder and CEO of Invisible Things Lab

QubesOS is a client OS (= desktop, phone, tablet, … currently desktop) that implements security through compartmentalization. It uses a hypervisor (Xen) to make that happen. The client must be secure, because if it is compromised there is no security at all: the client can see the keyboard and screen. Present client systems are really insecure. Attacks come through apps (e.g. the browser), from malicious applications, from USB devices, through the networking stack, and through filesystem metadata. Once attacked, the lack of GUI isolation makes it possible for the malware to see the sensitive information of another, still secure application. Note that these are the security challenges of desktop systems, which are in many ways different from the challenges on servers.

Just trying to find all the bugs is not going to work; there will always be bugs. A monolithic kernel is bad for security, because it all runs in the same TCB (Trusted Computing Base), so there is no isolation between a compromised Wifi stack and the rest of the kernel. The same goes for Xorg, network-manager, … . And making them run as non-root doesn’t really help, because it’s the user’s data that is important.

Qubes runs several parallel OSes on the same desktop using virtualization (Xen) to isolate domains, e.g. Secure, Home, Work, Random stuff. Why does virtualization help? Because it reduces the interfaces, which makes the attack surface a lot smaller. Still, because it is virtualized, it preserves compatibility with legacy apps and drivers. However, the VM-to-hypervisor interface is not the only critical one. The VMs still communicate with each other, e.g. because you do file sharing. This creates another leak path between the compartments. Essentially, virtualization doesn’t do much more than what the MMU allows you to do for inter-process isolation. BTW you also need an IOMMU so that the driver domains can be isolated as well. Bottom line: the inter-VM communication framework is also essential.

Of course there is a trade-off between security and usability. If the compartments are really isolated from each other, you can’t do much with the system. QubesOS tries to find a good balance.

Real Safe Times in the Jailhouse Hypervisor – Jan Kiszka, Siemens

Jailhouse is a hypervisor that allows running safety-critical tasks on a multicore system in parallel with Linux. Jailhouse tries to be simple rather than feature-complete, and concentrates on controlling access to resources (memory areas, CPUs, interrupts) rather than really virtualizing them, i.e. only one guest gets access to each resource. Isolation is really enforced; it’s not cooperative between the guests.

Jailhouse partitions an already booted system (load module and start daemon), after Linux is already running. It offloads work to Linux, e.g. booting, the configuration (no need to do this in the bootloader), control and monitoring. The disadvantage of this approach is that it is not easy to boot Linux in a cell (at least on x86, because there’s a lot of BIOS stuff going on there that jailhouse doesn’t support), and that your boot time becomes larger (Linux has to boot before you can start your RT cell).

The partitions are called cells in jailhouse. There is one root cell (Linux), and one or more other cells (the real-time, safety-critical part). Within a cell, anything goes, but it’s not possible to access resources from another cell, or to do things with a global effect (e.g. reset). This is symmetrical, so if the real-time cell crashes, it doesn’t bring down Linux. Since the root cell should not be able to misbehave and do something wrong to the RT cell either, any cell can lock down things (e.g. the possibility to do a shutdown) so that even Linux cannot modify them. It also provides the means to validate the cells, i.e. to check that what is running now is the same thing that you tested before.
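
The lock-down idea can be modelled with a toy example. This is entirely hypothetical, not Jailhouse's real API: every resource belongs to exactly one cell, and a cell can lock a resource it owns so that no cell, not even the root cell, can reassign it afterwards.

```python
# Toy model (entirely hypothetical, not Jailhouse's real API) of the
# lock-down idea: every resource belongs to exactly one cell, and a
# cell can lock a resource it owns so that no cell, not even the root
# cell, can reassign it afterwards.
class System:
    def __init__(self):
        self.owner = {"uart1": "rt-cell", "eth0": "root"}
        self.locked = set()

    def lock(self, cell, resource):
        if self.owner.get(resource) == cell:  # only the owner may lock
            self.locked.add(resource)

    def reassign(self, resource, new_owner):
        if resource in self.locked:
            return False  # refused, even for the root cell
        self.owner[resource] = new_owner
        return True

s = System()
s.lock("rt-cell", "uart1")
denied = s.reassign("uart1", "root")
```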

Jailhouse obviously can’t avoid hardware errors or errors in the RT software. It can however capture and forward hardware error reports.

Jailhouse is currently going through a certification process (a first review by TÜV has been completed).

Jailhouse initially focused on x86 with VT-x and VT-d. It supports direct interrupt delivery, so interrupts don’t have to go through the hypervisor first. Basically, there is no runtime overhead except the latency added by the IOMMU. Of course, communication between cells still has some overhead. AMD is in the process of adding support for AMD’s virtualization extensions. The ARMv7 port required almost no changes in the jailhouse core, but there’s still a lot of tweaking to be done. No plans for QorIQ.

The application in the RT cell will have to be adapted to deal with the fact that it doesn’t have access to all the devices it would normally have. For inter-processor communication, there are IPC interrupts and shared memory. They are considering implementing a virtual PCI device.

Jailhouse has a skeleton “inmate” for an OS-less application running in an RT cell. For more complex things, you can use an existing RTOS. In that case, you have to remove the platform bring-up stuff, replace any legacy BIOS/PIC/PIT based stuff, remap the timers to the ones that are made available by jailhouse, and add support for inter-cell I/O. A reference implementation has been done with RTEMS.

Debugging something that runs in a cell is a bit tricky. A hardware debugger may be hard to get, and emulation can be slow; it may even be impossible to emulate jailhouse. Therefore, KVM was extended to emulate things in the same way that jailhouse makes them available. This way, the KVM debugger can be used. Of course, in this environment there are no RT guarantees (interrupts are emulated!).

USB and the Real World – Alan Ott

This talk is about getting the best performance out of USB devices.

USB speeds: we’ll talk about full speed (12Mbps) and high speed (480Mbps).

A logical USB device has configurations, which have interfaces, which have endpoints. An endpoint is an addressable source/sink of data (unidirectional). Cf. a socket, but unidirectional. An interface is a related set of endpoints that together provide a function, e.g. mass storage, HID, … . Multiple interfaces in a configuration are active at the same time => a composite device. Of multiple configurations, only one is active at a time. Most devices have only one configuration.

There are 4 types of endpoints, and an endpoint has exactly one type. A control endpoint is mostly used during enumeration. Interrupt and bulk endpoints are used for the actual data. Interrupt is for small amounts of low-latency data; interrupt endpoints reserve bandwidth to guarantee latency. Bulk endpoints transfer large amounts of data but have no guarantees. An isochronous endpoint is for large amounts of time-sensitive data. It has no guarantees either; instead, the data is dropped if it is late.

Endpoint length = max amount of data per transfer, e.g. 64 bytes for a bulk full-speed endpoint.

Transaction = basic unit of data across the bus, up to the endpoint length; transfer = one or more transactions in order to move a chunk of data from one side to the other. A transfer is ended by a short transaction (less than endpoint length), or when the desired amount of data is reached – but that’s determined by the protocol, and e.g. a USB analyser may not know about that.
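
How a transfer decomposes into transactions can be sketched like this. It is a simplification, and whether a zero-length packet is needed when the data is an exact multiple of the endpoint length is protocol-specific; this sketch always sends one in that case.

```python
# Sketch of how a transfer decomposes into transactions: each
# transaction carries at most the endpoint length, and the transfer is
# terminated by a short transaction. When the data is an exact multiple
# of the endpoint length, this sketch appends a zero-length packet;
# real protocols may signal the end differently.
def split_transfer(data, ep_len=64):
    txns = [data[i:i + ep_len] for i in range(0, len(data), ep_len)]
    if not txns or len(txns[-1]) == ep_len:
        txns.append(b"")  # zero-length packet marks the end
    return txns

txns = split_transfer(b"x" * 130, ep_len=64)  # 64 + 64 + 2 bytes
```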

USB is controlled by the host, so the host always initiates transfers, and the host polls devices to check if they want to send data.

IN transaction = host sends IN token to device, if device has data it sends it, host sends ACK; if device does not have data, it sends NAK. NAK just means “not ready yet”, not an error. If the device NAKs, the host keeps on trying until it times out.

OUT: host sends OUT token, host sends data up to the endpoint length, device sends ACK or NAK. So the data is sent before the device responds at all. The host retries all of this until it times out.
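
The IN handshake described above can be simulated with a toy model (illustrative only): the host keeps sending IN tokens, and the device answers NAK ("not ready yet", not an error) until it has data, after which the host ACKs.

```python
# Toy simulation of the IN handshake (illustrative only): the host
# keeps sending IN tokens; the device answers NAK ("not ready yet",
# not an error) until it has data, after which the host ACKs.
def in_transaction(host_polls, data_ready_after):
    for attempt in range(1, host_polls + 1):
        if attempt > data_ready_after:
            return "ACK", attempt  # device sent data, host ACKs it
        # device NAKs this attempt; the host simply retries
    return "timeout", host_polls  # host gives up

status, attempts = in_transaction(host_polls=10, data_ready_after=2)
```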

IN and OUT are typically fully handled by hardware.

In Linux, the gadget framework for handling UDCs (USB Device Controllers) is largely separate from the host USB stack. Unlike OHCI/EHCI, the device interface is not standardized.

musb = IP block from Mentor

EG20T Platform Controller Hub = on embedded Intel SoMs

PIC32 = a non-Linux device, using the M-Stack USB stack developed by Alan.

Why do you make a USB device?

  1. Easy, well-supported connection to PC
  2. Make use of an existing device class so you don’t have to write drivers
  3. Want to connect to PC and move a lot of data quickly (where you control both host and device)

For cases 1 and 2, naive implementations can work. You can use configfs to dynamically create the USB device from userspace, with no kernel driver. But if you really need performance (case 3), you’re going to have to do something more.

Synchronous API: the HW will only put data on the bus while a transfer is active. So after the transfer completes, the bus sits idle until your software finally gets to the next iteration of the loop and starts the next transfer. Therefore, use the async API and submit multiple transfers. The HW will jump to the next transfer when the first one has finished. This is true both on the host and on the device side.
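
A back-of-the-envelope model makes the difference concrete. All the numbers here are made up for illustration: with the synchronous API the bus sits idle for the software turnaround after every transfer, while with queued transfers the HW starts the next one immediately.

```python
# Back-of-the-envelope model of why queued (async) transfers help: with
# the synchronous API the bus sits idle for the software turnaround
# after every transfer; with queued transfers the HW starts the next
# one immediately. All numbers are made up for illustration.
def throughput(transfer_bytes, wire_us, sw_gap_us, queued):
    per_transfer_us = wire_us + (0 if queued else sw_gap_us)
    return transfer_bytes / per_transfer_us  # bytes per microsecond

sync_tp = throughput(65536, wire_us=1500, sw_gap_us=500, queued=False)
async_tp = throughput(65536, wire_us=1500, sw_gap_us=500, queued=True)
```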

Transfers should be large enough. At high speed, the max endpoint length for bulk is 512 bytes – so try to use all of that to reduce overhead.

If you need to optimize, use a USB analyser to see what’s going on. But when looking at NAKs coming from a device that is too slow, don’t just count the NAKs, because the host controller will adapt to the latency of the device and wait a little before its next attempt.

Increasing the transfer size allows the USB controller to handle transactions back-to-back, avoiding any latency between them. Measured on a BeagleBone Black: the first transaction of a transfer has 40 µs of latency; after that it’s only 6 µs. 64 KB transfers seem to work well. However, with musb, it turns out that very large OUT transfers are actually a little bit slower, because the DMA is done at the transaction level.
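
Plugging the measured figures into a quick calculation (my arithmetic, based only on the numbers above) shows how larger transfers amortize the per-transfer cost:

```python
# Quick arithmetic with the BeagleBone Black figures from the talk
# (40 us for the first transaction of a transfer, 6 us for each
# subsequent one, 512-byte high-speed bulk transactions):
def transfer_time_us(total_bytes, ep_len=512, first_us=40, next_us=6):
    txns = -(-total_bytes // ep_len)  # ceiling division
    return first_us + (txns - 1) * next_us

small = transfer_time_us(512)    # 1 transaction
large = transfer_time_us(65536)  # 128 transactions
```

That is 40 µs for 512 bytes versus 802 µs for 64 KB, i.e. roughly six times less overhead per byte for the large transfer.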

However, since USB is message-based, it’s convenient to put each application message in its own transfer, because then you don’t have to add message boundaries yourself. Queuing up multiple messages into one transfer can also increase the latency.

Putting the protocol in the kernel rather than in userspace slightly increases performance (7%), because it avoids having the userspace boundary in the latency-sensitive transfer-to-transfer hand-off.

Multiple bulk endpoints could increase performance, because you get extra DMA concurrency. It makes the protocol more complex to manage, and it also depends on host performance.

A high-bandwidth interrupt endpoint gives you reserved bandwidth, endpoint length can go up to 3072 bytes at high speed. But if the bandwidth is not available, the device doesn’t enumerate! Same for isochronous which supports even larger endpoint lengths.
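
A quick calculation shows what that reserved bandwidth amounts to, assuming the high-speed figure from the talk of 3072 bytes per 125 µs microframe:

```python
# What the reserved bandwidth of a high-bandwidth interrupt endpoint
# amounts to at high speed, assuming the figure from the talk: up to
# 3072 bytes in every 125 us microframe.
max_bytes_per_microframe = 3072
microframes_per_second = 8000  # 1 s / 125 us
reserved_bps = max_bytes_per_microframe * microframes_per_second
```

That is 24,576,000 bytes per second, roughly 24.6 MB/s of guaranteed bandwidth, which explains why the device fails to enumerate when that bandwidth is not available.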

Remember that hubs have an influence: they translate between high speed and full speed, thereby hiding some of the latency when using the synchronous API.

Serial gadget is pretty suboptimal because it goes over the tty framework, which breaks it into small transfers.

To find performance issues in the kernel, use ftrace and kernelshark.

Finding Stupid Vulnerabilities in Binaries – Armijn Hemel, Tjaldur Software Governance Solutions

This talk is about finding obvious security bugs in embedded devices. He doesn’t say anything about which things you shouldn’t do – that is stuff that you should already know. Still, these obvious bugs are present in embedded devices (which are never updated).
