Common Display Framework, Tomorrow’s Linux Kernel Display Architecture – Laurent Pinchart

CDF is a never-ending story: it started over a year ago, and people keep coming up with new ideas and improvements, but that is not helping the project make progress. This will be the last talk; next time it will either be “OK, it’s accepted in the kernel” or “Forget it”.

The display hardware, at its simplest, consists of a frame buffer and a hardware component that does scanout, i.e. reading pixels from the framebuffer and converting them to whatever electrical signals are needed to drive the display. Composition is the act of taking several images and creating one composed image that will be sent out of the system. Usually this is done by the same scanout hardware, for efficiency reasons. Composition can include scaling and other transformations, depending on the hardware.

In the past there was fbdev, but if you make anything new, go for KMS. KMS models the hardware as frame buffers that are read out by a CRTC (scanout + composition). That goes to an encoder over a bus, which encodes it to LVDS or HDMI or whatever display bus is used, and from there to connectors. You can have multiple CRTCs, multiple encoders, and multiple connectors attached to an encoder. So KMS models CRTCs, encoders and connectors and how they bind together. In an embedded system, the CRTC is usually on the SoC, but the encoder can be inside or outside the SoC. Another key concept is the mode (as in Kernel Mode Setting). The mode is the set of parameters that determines the display resolution and timings, so it includes blanking (front/back porch) and sync. Mode setting involves setting these parameters, as well as which connectors the output should be sent to. Inside KMS this is used to configure the CRTCs, encoders and connectors appropriately, as well as to verify that the requested settings are possible.
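
As a concrete illustration, this is roughly how a mode’s timings are encoded in the kernel: the field names below follow struct drm_display_mode from the DRM headers, and the values are the standard VESA 1024x768@60 timings – a minimal sketch, not an example taken from the talk.

```c
#include <drm/drm_crtc.h>

/*
 * Timings for 1024x768@60 (VESA). hsync_start - hdisplay is the
 * horizontal front porch, htotal - hsync_end the back porch; the
 * vertical fields work the same way.
 */
static const struct drm_display_mode example_mode = {
	.clock       = 65000,			/* pixel clock in kHz */
	.hdisplay    = 1024,			/* active pixels per line */
	.hsync_start = 1024 + 24,		/* + front porch */
	.hsync_end   = 1024 + 24 + 136,		/* + sync pulse */
	.htotal      = 1024 + 24 + 136 + 160,	/* + back porch */
	.vdisplay    = 768,
	.vsync_start = 768 + 3,
	.vsync_end   = 768 + 3 + 6,
	.vtotal      = 768 + 3 + 6 + 29,
};
```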

The Media Controller models the media-related hardware components and how they are connected. Every entity in this graph has a number of properties and a number of pads (inputs and outputs). Entities are connected through their pads, with a link. The link carries extra information, e.g. whether the link is active (= currently in use). The Media Controller doesn’t configure all this hardware; it just models it and exposes it to userspace, so other components don’t need to do that anymore.
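
To make this concrete: userspace can walk the entity graph with the MEDIA_IOC_ENUM_ENTITIES ioctl. A minimal sketch, assuming a media device node at /dev/media0:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/media.h>

int main(void)
{
	struct media_entity_desc desc;
	int fd = open("/dev/media0", O_RDONLY);	/* assumed device node */

	if (fd < 0)
		return 1;

	/* MEDIA_ENT_ID_FLAG_NEXT asks for the entity following 'id'. */
	memset(&desc, 0, sizeof(desc));
	desc.id = MEDIA_ENT_ID_FLAG_NEXT;
	while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &desc) == 0) {
		printf("entity %u: %s (%u pads, %u links)\n",
		       desc.id, desc.name, desc.pads, desc.links);
		desc.id |= MEDIA_ENT_ID_FLAG_NEXT;
	}

	close(fd);
	return 0;
}
```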

The origin of CDF was when Laurent tried to create DT bindings for a display device. But that meant he had to standardize these new bindings. So he looked at existing display drivers and how they configured their panels. It turned out that every SoC had its own panel drivers. So Laurent wanted to create a single panel model that would be used everywhere, which is a prerequisite for the DT bindings. That sounds simple, because most panels have just one resolution and one set of required timings. But with MIPI DBI or DSI there is a control bus with which you can configure the panel, and again, there were SoC-specific drivers for this. (This issue is not handled in the current proposal; it can be done separately.) Then he needed to write a KMS driver for a panel for which he already had an fbdev driver, so there should be a way to have only one driver for the panel and use it in both KMS and fbdev. And it’s not only panels: there are also encoders, HDMI transmitters, bridges, … (which are essentially different words for the same concept). Finally, there is not just a linear path from CRTC to panel; several paths are possible, so it’s a graph.
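
A hedged sketch of what such a single panel model could look like: one set of operations that both fbdev and KMS code can call, with the control bus hidden behind a generic device pointer. All names here are hypothetical and illustrative, not the actual CDF API.

```c
#include <linux/device.h>

struct drm_display_mode;
struct panel;

/* Hypothetical bus-agnostic panel operations; one driver per panel
 * implements these, regardless of whether fbdev or KMS calls them. */
struct panel_ops {
	int (*enable)(struct panel *panel);
	int (*disable)(struct panel *panel);
	/* Most panels report a single fixed mode here. */
	int (*get_modes)(struct panel *panel,
			 struct drm_display_mode *modes,
			 unsigned int max_modes);
};

struct panel {
	const struct panel_ops *ops;
	struct device *dev;	/* the control bus device: I2C, SPI, DSI, ... */
};
```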

An additional problem is the order of probing. In the device tree, you’d have the HDMI encoder as a device on the I2C bus, while the panel is located in a completely different place, so the order in which they’re probed is not defined.

One additional goal is to share the encoder drivers between KMS and V4L2, which are currently completely unrelated subsystems. But that’s for the future, after CDF has been accepted.

An additional challenge is that some panels can be controlled over two different buses, e.g. I2C and the video bus, and you need to use one or the other depending on what you want to do. CDF will also help in this situation.

Two more complications:

  1. Two separate IP blocks on the SoC can have a direct link in hardware: the video decoder can send data directly to the display unit, without going through memory. We currently can’t model that. It can even be worse on an FPGA, where the video decoder and display unit are implemented as a single device without any memory in between.
  2. KMS has a fixed pipeline: CRTC -> encoder -> connector. However, in reality you can have a more complex graph. Mapping it onto this simple model hides some of the configuration you could do, so it’s not ideal.

Laurent started with a panel driver. It is mostly an independent driver, like a usual I2C (or whatever bus) device driver, but it also connects to the display controller driver (called by fbdev or KMS).

In the second version, it was called CDF. The model was a single pipeline that ends with the panel. The calls come from the display controller and go to the next device (e.g. a transmitter), which in turn calls the next device (the panel). This received quite positive feedback, and patches were posted to convert a few existing display drivers.

But what about hardware that is not linear? Use the model of the Media Controller (but with pad renamed to port) – the display entity is actually an extension of media_entity. The display_entity struct contains a callback to handle asynchronous probing in the DT case. This callback is called when all the required devices have been probed. It is not possible to use deferred probing, because there can be circular dependencies. In the device tree, all the links between devices are expressed. These links are a kind of overlay graph: the DT itself is a tree structured according to control buses, while the display graph represents the data flow.
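
A sketch of what such an entity might look like; the structs below are illustrative, inspired by the description above rather than copied from the RFC.

```c
struct display_entity;

struct display_entity_ops {
	/*
	 * Called once all entities this one links to have been probed.
	 * Deferred probing cannot be used instead, because the links
	 * described in the device tree can form circular dependencies.
	 */
	int (*bound)(struct display_entity *entity);
	int (*enable)(struct display_entity *entity);
	int (*disable)(struct display_entity *entity);
};

struct display_port {
	struct display_entity *local;
	struct display_entity *remote;	/* the other end of the DT link */
};

struct display_entity {
	const struct display_entity_ops *ops;
	struct display_port *ports;	/* inputs and outputs */
	unsigned int num_ports;
};
```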

Laurent will now prepare RFCv4, which uses a completely different model. Instead, there will be a single piece of code (the pipeline controller) that knows about the whole graph and controls the entities. This way, the entity drivers can be very simple, because they only care about themselves; the pipeline controller handles the links. There will be a standard pipeline controller for normal linear pipelines; if you need a more complex pipeline, you’ll have to implement your own controller.
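
A minimal sketch of what the standard controller for a linear pipeline could look like, reusing the hypothetical display_entity from the sketch above; real hardware may require a specific enable order, which is exactly the kind of knowledge a custom controller would encode.

```c
/* Enable every entity of a linear pipeline, source to sink. The
 * entity drivers stay simple: sequencing and error handling live in
 * the controller, not in the entities. */
static int pipeline_enable(struct display_entity *entities[],
			   unsigned int num_entities)
{
	unsigned int i;
	int ret;

	for (i = 0; i < num_entities; i++) {
		if (!entities[i]->ops->enable)
			continue;
		ret = entities[i]->ops->enable(entities[i]);
		if (ret < 0)
			return ret;
	}

	return 0;
}
```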

People have asked Laurent to cut the whole problem up into smaller steps. He can do that, but then, when you look at a single step in isolation, the solution he chose doesn’t look appropriate for that particular step. This is one of the reasons why CDF is having trouble making progress.

CDF is a fully in-kernel model; it doesn’t change the userspace API of e.g. KMS. It may expose information the way the Media Controller does, but only as debug info. In the future, the KMS API will have to be changed as well, to be able to expose the more complex topologies, but that’s independent of CDF (though CDF may help).
