
Running the Processing environment on ARM SBCs – Gottfried Haider

Processing is a sketchbook and language (GPL, on GitHub) for learning how to code, primarily targeted at design and art schools. It is based on Java, but there are also p5.js and Processing.py ports. The code is just Java; Processing adds a lot of pre-imported libraries for drawing, interaction, motion, typography, …

The UI is similar to Arduino, because Arduino started from the Processing concepts.

Processing makes a lot of assumptions about the underlying OS; for example, its JNI use was not multi-arch aware but tied to x86. Gottfried ported Processing to the Raspberry Pi (and implicitly to any ARMv6 board) as part of a GSoC project. The RPi was chosen because it is cheap enough that you can leave it in an installation to keep showing it (e.g. as part of an art project), and it is much more powerful than an AVR. Other than the JNI work, the main task was porting the rendering to GLES2 instead of going through X.org, so they switched to NEWT as the platform-independent windowing system.

So now Processing 3.0.1 runs on the RPi with the closed-source GLES2 driver; Eric Anholt’s work to support it on the open DRM/Mesa driver is in progress. You can compile your application and run the jar file on the RPi. It also has a hardware I/O library to access the pin headers (serial port, I2C, SPI, LED, PWM). What is missing: software PWM, configuring pull-ups (i.e. userspace pinctrl), making the PWM class trigger uevents (bug?), and getting the GPIO number corresponding to a specific PWM channel. So this wishlist is all kernel stuff :-).
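
To put that kernel wishlist in context, here is a minimal C sketch (illustrative only, not Processing’s actual implementation) of the sysfs GPIO interface that userspace hardware I/O libraries like this typically sit on top of; GPIO 17 is an arbitrary example pin and the program needs root:

    /* Minimal sketch of the sysfs GPIO interface (illustrative, not
     * Processing's actual code). Run as root; GPIO 17 is arbitrary. */
    #include <stdio.h>
    #include <unistd.h>

    static void write_str(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (f) {
            fputs(value, f);
            fclose(f);
        }
    }

    int main(void)
    {
        write_str("/sys/class/gpio/export", "17");
        write_str("/sys/class/gpio/gpio17/direction", "out");

        for (int i = 0; i < 10; i++) {              /* blink the pin */
            write_str("/sys/class/gpio/gpio17/value", i % 2 ? "1" : "0");
            usleep(500000);
        }

        write_str("/sys/class/gpio/unexport", "17");
        return 0;
    }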

Developing an embedded JavaScript engine, V7 – Marko Mikulicic

v7 is an embedded JavaScript VM – embedded in the sense that you link it into another application (cf. Mongoose).

JavaScript is tough: for example, the truth table of the equality comparison is quite complicated. So why would you want such a scripting engine on your device, when there are alternatives like Lua? Because everybody knows a bit of JavaScript, and it is widely available and not stagnating. So having JS as the way your users extend your application makes sense.

Compared to other JavaScript VMs, v7’s goal is to be portable, small (both runtime footprint and code size), and reasonably fast. For this it uses some tricks, like a compacting GC and no reference counting. You can store snapshots (such as the parsed code) and mmap compiled bytecode (but that requires a port to your platform). On very small machines, the stack size limit makes it impossible to use a typical recursive parser, so instead the parser is based on coroutines with segmented stacks.

v7 is embedded, so you can execute a string and retrieve the resulting values. Values are not represented as a tagged union, because its size would be that of the largest member (typically a double) plus a type byte, which requires padding for struct alignment. Instead, a trick from the floating-point representation is used: a NaN only needs a few bits set, so all non-double types are encoded inside NaN values. This is done with macros rather than GCC extensions, so it works with any compiler.
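
As an illustration of the NaN-boxing trick, here is a hypothetical sketch in C (v7’s actual macros, tag layout and type set differ):

    /* Hypothetical NaN-boxing sketch: plain doubles are stored as-is, all
     * other types live inside the unused mantissa bits of a quiet NaN. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef uint64_t val_t;

    #define NAN_BITS     0x7ff8000000000000ULL   /* exponent + quiet bit */
    #define TAG_SHIFT    48
    #define TAG_MASK     0x0007000000000000ULL
    #define PAYLOAD_MASK 0x0000ffffffffffffULL

    enum { TAG_NONE = 0, TAG_NULL = 1, TAG_BOOL = 2, TAG_PTR = 3 };

    static val_t mk_double(double d)             /* doubles need no boxing */
    {
        val_t v;
        memcpy(&v, &d, sizeof v);
        return v;
    }

    static val_t mk_boxed(unsigned tag, uint64_t payload)
    {
        return NAN_BITS | ((val_t)tag << TAG_SHIFT) | (payload & PAYLOAD_MASK);
    }

    static int is_double(val_t v)
    {
        /* anything that is not one of our tagged NaNs is a plain double */
        return (v & NAN_BITS) != NAN_BITS ||
               ((v & TAG_MASK) >> TAG_SHIFT) == TAG_NONE;
    }

    int main(void)
    {
        val_t a = mk_double(3.14);
        val_t b = mk_boxed(TAG_BOOL, 1);
        printf("a is double: %d, b is double: %d\n", is_double(a), is_double(b));
        return 0;
    }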

To reduce the size of the AST, it is not constructed with pointers but as a stack (because you know how many operands each operator takes, and you can distinguish operators from values, so you can reconstruct the tree from just a flat array of operators and values).
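
A hypothetical sketch of that flat-AST idea (not v7’s actual encoding): since each operator’s arity is known, a prefix-ordered array of operators and literals is enough to walk the tree without any child pointers.

    /* Flat AST sketch: the tree for 1 + 2 * 3 is stored as a prefix-ordered
     * array; eval() recovers the structure because each operator's arity
     * (here always 2) is known. Illustrative only, not v7's real encoding. */
    #include <stdio.h>

    enum node { LIT, ADD, MUL };    /* LIT is followed inline by its value */

    static int eval(const int *ast, int *pos)
    {
        enum node n = (enum node)ast[(*pos)++];
        if (n == LIT)
            return ast[(*pos)++];
        int a = eval(ast, pos);     /* both remaining operators are binary */
        int b = eval(ast, pos);
        return n == ADD ? a + b : a * b;
    }

    int main(void)
    {
        int ast[] = { ADD, LIT, 1, MUL, LIT, 2, LIT, 3 };   /* 1 + 2 * 3 */
        int pos = 0;
        printf("%d\n", eval(ast, &pos));                    /* prints 7 */
        return 0;
    }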

The bytecode uses a simple stack-based programming model.

The only libc function it uses is malloc.

It’s been developed by 2 people in about one year.

It currently supports 32-bit and 64-bit architectures, and might be ported to 16-bit in the future.

Snowdrift.coop – sustainable funding for FLO projects – William Hale (Salt)

What are Free/Libre/Open (FLO) public goods? Two questions matter: excludability (can someone stop others from having access to it?) and rivalry (can multiple people make use of it at the same time?). Software is a club good: its use is non-rival, but you can exclude people (through law).

The snowdrift dilemma (from game theory): when the road is blocked by a snowdrift, I can do all of the work (which is good for everyone), or you can do the work, or we can do it together, or the work doesn’t get done. Free software is a lot like that. Proprietary software is like a toll road: it gives control to someone else. There are several techniques to make that happen, like law, DRM, secrecy, and platform lock-in.

Can we work together to make public goods that are good for the public? This is the principle behind snowdrift.coop. Snowdrift is a cooperative: all stakeholders get votes.

It uses mutual assurance: I only have to give if others do as well.

It uses network effects: you donate more if more people donate.

The developer(s) are kept informed so they know if they can quit their jobs.

Snowdrift is not limited to software: it can also sponsor art or other public goods.

It only supports robust projects, so there is a good guarantee that there will be results. The project must also serve the public interest, and all results must be made available under a public license (though you can choose which one).

Partnered with OSI and OIN.

It is still alpha. There is a working prototype. But to get to beta and handle real money, they need a lot more (financial) support.

How to design a Linux kernel API – Michael Kerrisk

Designing APIs wrong is an old problem – Michael showed the accept() syscall, which originates from BSD, as an example. But it’s again something at which Linux is best :-). The solution is more a matter of process than of technology.

When it comes to APIs, implementing them is the least of the problems – performance issues or bugs can be fixed later. The design of an API is much harder to fix, because changing it may break userspace, so thousands of userspace programmers may live with a mistake for decades. There are many kinds of kernel API: syscalls, pseudo-filesystems, signals, netlink, … This talk focuses on syscalls.

To design an API you have to think about how it is going to be (ab)used. For example, with POSIX message queues there is usually a reader that doesn’t do much else, so the queue is almost always nearly empty, and the code was optimised for that. Then someone increased the number of messages that can be in the queue, because their customer wanted to put a lot of stuff in it, and it turned out that the priority sorting performed really badly. This was fixed, but it shows that programmers will be very inventive about how they use your API. Also, the fix introduced new bugs. The problem is that there were no unit tests.

It also happens that refactorings or bugfixes in later versions of the kernel subtly change the behaviour of an API. Or even worse, the API doesn’t actually work as intended when initially released.

So to make a good API, unit tests are part of the solution: to catch regressions, but also to show how the API is expected to be used. They are needed not only when you add an API, but also when you refactor an existing one. Historically this was difficult because LTP lived out of tree and its tests were only added after an API had been released. Nowadays we have kselftest in-tree (though there isn’t much there yet; it only started in 2014).
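
As an example of the kind of small, self-contained test this is about (hypothetical, not taken from LTP or kselftest): check that pipe2() with O_CLOEXEC really sets the close-on-exec flag on both file descriptors.

    /* Hypothetical standalone test in the spirit of a kselftest:
     * verify that pipe2(O_CLOEXEC) sets FD_CLOEXEC on both ends. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];

        if (pipe2(fds, O_CLOEXEC) != 0) {
            perror("pipe2");
            return 1;
        }
        for (int i = 0; i < 2; i++) {
            int flags = fcntl(fds[i], F_GETFD);
            if (flags < 0 || !(flags & FD_CLOEXEC)) {
                fprintf(stderr, "FAIL: fd %d lacks FD_CLOEXEC\n", fds[i]);
                return 1;
            }
        }
        printf("PASS: pipe2(O_CLOEXEC) sets FD_CLOEXEC\n");
        return 0;
    }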

You don’t only need unit tests, but also a specification. It helps to write the tests and it helps reviewers. Where do you write it? In the commit message at minimum, but also in the man-pages project. A good man page is also a good basis for specifying tests.

A new API will only start being used more than half a year after it has been merged, when distros catch up and userspace can really use it. Catching bugs then is way too late, because it is difficult to still fix things, so the feedback loop should be shorter. First of all, write a detailed specification and example program(s) as early as possible, and distribute them as widely as possible. There is even the linux-api@kernel.org list to follow this, but also notify the C libraries, LTP, tracing projects, … LWN is a great way to publicise a new API. Note, however, that feedback will mean more work for you 🙂

Example applications are important because they make you realise how usable the API is. For example, the inotify API looks OK at first sight, but when you start writing a simple application that just mirrors the filesystem state in internal structures, you need more than 1000 lines of code. Adding the PID of the process that caused an event, for instance, would help eliminate duplicates when you caused the change yourself. Also, monitoring an entire tree is cumbersome, and monitoring move events is racy.
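
For reference, a minimal inotify skeleton (hypothetical) looks deceptively simple – the 1000+ lines come from recursively adding watches, handling moves and races, and keeping the mirrored state consistent:

    /* Minimal inotify skeleton (illustrative): watch one directory and print
     * raw events. A real mirror needs recursive watches, rename handling and
     * race recovery, which is where the complexity comes from. */
    #include <stdio.h>
    #include <sys/inotify.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));

        if (argc != 2) {
            fprintf(stderr, "usage: %s <directory>\n", argv[0]);
            return 1;
        }
        int fd = inotify_init1(0);
        if (fd < 0 || inotify_add_watch(fd, argv[1],
                IN_CREATE | IN_DELETE | IN_MODIFY |
                IN_MOVED_FROM | IN_MOVED_TO) < 0) {
            perror("inotify");
            return 1;
        }
        for (;;) {
            ssize_t len = read(fd, buf, sizeof buf);
            if (len <= 0)
                break;
            for (char *p = buf; p < buf + len; ) {
                struct inotify_event *ev = (struct inotify_event *)p;
                printf("mask=0x%08x name=%s\n", (unsigned)ev->mask,
                       ev->len ? ev->name : "(watched dir)");
                p += sizeof(*ev) + ev->len;
            }
        }
        return 0;
    }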

 

How containers work in Linux – James Bottomley

Hypervisors are based on emulating hardware interfaces; containers are about virtualising OS services. In other words, containers share a single kernel, while hypervisors and VMs have multiple separate kernels.

Containers have the advantage that a single kernel update benefits all guests – and the kernel is the most frequently updated component. Containers are also more elastic, because they are typically smaller and the kernel has a better view of what happens in the guest, so it can make better scheduling and resource-management decisions.

An OS container (e.g. LXD) contains a full OS including an init system – it basically pretends to be a hypervisor. An application container contains just an application, without the init system and without a large part of the shared libraries and tools.

In a hypervisor, adding memory is easy but removing it is difficult: you have to go through a complex balloon driver to free up memory from a cooperating guest OS. In a container, scaling is instantaneous – just adapt the limits. However, this is evolving, because there is more and more hardware support to help hypervisors achieve the same performance and elasticity as containers.

Containers can virtualise at different granularity levels; for example, containment of networking could be disabled. But the orchestration systems (Docker, LXC, VZ, …) don’t expose this possibility: they always do full virtualisation. However, granularity is the one thing where hypervisors can never be as good, so it is what container evangelists (i.e. James) should focus on.

There are two kernel concepts that make containers: cgroups and namespaces. All container systems use the same kernel API – originally there were out-of-tree patches adding a different kernel API for each container system, but at the 2011 Kernel Summit it was agreed to converge on a single set of APIs. So there is no repeat of the Xen/KVM split, where both hypervisor interfaces are still supported in the kernel today. But all of this is still very new, so it’s not going to work on many enterprise distros (RHEL6, SLES12).

cgroup subsystems (controllers): block I/O, CPU, devices, memory, network, freezer. Namespaces: network, IPC, mount, PID, UTS (hostname), user (fake root). The user namespace still has lots of problems. The cgroup and namespace APIs are, however, very difficult to use.

The cgroup filesystem is typically mounted under /sys/fs/cgroup, with a separate mount per subsystem (and symlinks for historical interfaces). Each subsystem exposes a number of controls. You add a container by making a directory – the control files magically appear in that directory. The tasks file contains the PIDs of the processes in that control group; once some PIDs are in there, they can all be manipulated together by writing to the control files in the group directory. The directories are also hierarchical, so you can make subgroups (which obviously can only contain PIDs that are also in the parent).
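
A hypothetical sketch of that flow, using the v1 memory controller (the group name "demo" and the 64 MB limit are made up; run as root):

    /* Hypothetical sketch: create a cgroup under the v1 memory controller,
     * cap its memory, and move the current process into it. Run as root. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void write_str(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (f) {
            fputs(value, f);
            fclose(f);
        }
    }

    int main(void)
    {
        char pid[32];

        /* Creating the directory makes the control files appear "magically". */
        mkdir("/sys/fs/cgroup/memory/demo", 0755);

        /* Limit the whole group to 64 MB. */
        write_str("/sys/fs/cgroup/memory/demo/memory.limit_in_bytes", "67108864");

        /* Writing our PID to the tasks file puts this process in the group. */
        snprintf(pid, sizeof pid, "%d", (int)getpid());
        write_str("/sys/fs/cgroup/memory/demo/tasks", pid);

        return 0;
    }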

To manipulate namespaces there are the unshare and nsenter tools from util-linux (built on the unshare(2) and setns(2) system calls). The namespaces of a process can be found in /proc/<pid>/ns, which contains a symlink per namespace; you can see which processes share a namespace by checking which of these symlinks point to the same object. To create a namespace, use unshare (for most namespace types this requires root). You can then bind-mount the namespace symlink onto an empty file; that file can be used with nsenter from a different process to enter the same namespace. To release the namespace, the process that created it must first exit, then the bind mount has to be removed, and then the temporary file. The namespace itself still exists until the last process in it has exited.
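
The same idea from C (a hypothetical sketch using the underlying unshare(2) call, with a UTS namespace so the effect is easy to see; run as root):

    /* Hypothetical sketch: create a new UTS (hostname) namespace with
     * unshare(2) and set a hostname that is only visible inside it. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char name[64];

        if (unshare(CLONE_NEWUTS) != 0) {   /* what the unshare tool does */
            perror("unshare");
            return 1;
        }

        /* Only this namespace sees the new hostname. */
        sethostname("inside-ns", strlen("inside-ns"));

        gethostname(name, sizeof name);
        printf("hostname here: %s (namespace: /proc/%d/ns/uts)\n",
               name, (int)getpid());
        return 0;
    }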

For network namespaces, ip has a subcommand to manipulate them: ip netns. To connect namespaces, you typically create a virtual ethernet (veth) pair and move one end into the namespace with ip link. This way you can do NFV.

 

 

Standardising booting on armv7 – Dennis Gilmore

Dennis is the lead Fedora release engineer and has a strong interest in ARM.

The goal is to simplify the on-ramp to U-Boot for new users and to simplify distribution support of these systems. When you come from a GRUB background, you get lost very quickly, because you need to know a bunch of weird details to be able to use U-Boot. For distros, it is a pain that you have to build a board-specific image.

You also need to wrap the images with mkimage.

But U-Boot also supports syslinux config files (because that’s what’s used for PXE boot).

For device trees, they have added an fdtdir option to specify a device tree directory, and U-Boot will look for the right dtb there. Currently this is based on a filename stored in the environment, but ideally it should be based on the ID string in the dtb [this was a comment from the audience].

Fedora currently builds about 40 different U-Boot images, but really the board vendors should do this, so that it becomes like a BIOS. The configs have been updated to enable a lot more boot options (ext2, ext3, FAT, … where specific boards typically select only one).

The extlinux.conf files are still board specific (e.g. they contain a console= entry and a rootfs reference), but they are generated by Anaconda – which already does the same thing for x86 anyway.
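
For illustration, a hypothetical extlinux.conf of the kind Anaconda generates could look roughly like this (paths, kernel version, console device and root device are all made up):

    # /boot/extlinux/extlinux.conf (illustrative; all values are made up)
    timeout 50
    menu title Fedora Boot Menu

    label Fedora armv7hl
        kernel /vmlinuz
        initrd /initramfs.img
        fdtdir /dtb/
        append console=ttyS0,115200 root=/dev/mmcblk0p3 ro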

There is no secure boot support at the moment.

Still to do: integrate with the menu system; output on video in parallel with serial; interactive editing of boot commands (cf. GRUB). Currently this is Fedora-specific; it would be good to have a cross-distro project that can also install and update U-Boot, and ideally make it cross-bootloader so it can be used for barebox as well.

Debian is on board with these ideas, but SUSE wants to use grub2/UEFI because several packages already rely on that.

A New Patchwork – Stephen Finucane

Currently, the overview shows subject, submitter, reviews and status. You can also download patches and create bundles of related patches which can be manipulated together.

Stephen has been very active on patchwork for 18 months.

The main new development at the moment is the checks feature, i.e. CI integration, which allows test results to be shown together with the patch.

There is also a design rework ongoing, using HTML5 and CSS.

When Stephen started working on patchwork, most of the code hadn’t been touched since 2008. He did a lot of cleanup: removing dead code, adding documentation, and adding support for Python 3 and Django 1.8.

The delegate feature has recently been updated with module identification, and there are patches on the list to improve it further.

For the future, we want the functionality of Gerrit and GitHub, but without being forced into that kind of workflow.

An important feature to be added is patch series support, allowing you to navigate between related patches and apply them together. The Message-Id, In-Reply-To and References headers allow the series to be identified; git and Mercurial create these headers. But things get difficult when there are new versions. They can be sent as a completely new series (easy to support), but sometimes someone else creates the v2, or the patches change a lot, and that is much harder for a tool to get right.

Patchwork currently uses an XML-RPC API, which is rather old-world; it should move to a REST API. But Patchwork has to keep working on old distros, so you can’t always rely on all packages being available. A REST API would make it easier to integrate with, e.g., a statistics dashboard.