DRM = Direct Rendering Manager = framework for display-related drivers. Samsung wants to extend it with support for some picture-processing steps.
DRM and KMS
DRM supports both full-blown GPUs and simple graphics modules in SoCs. It gives userspace ioctl access to the hardware, as well as GEM (Graphics Execution Manager, a historic name) buffers. Kernel Mode Setting (KMS) takes care of configuring the display pipeline with hardware-independent ioctls, including atomic updates of the whole configuration (since kernel 4.2). It also implements the legacy fbdev API, which is used by, for example, the kernel console.
A frame buffer is a GEM object plus metadata: pixel format, width/height, stride; it is not necessarily a complete frame. There can be separate frame buffers for the desktop, a video overlay, and the cursor.
A plane is the hardware that scans out the frame buffer.
A CRTC (another historical name) mixes and blends planes. Its output goes to an encoder, which sends it out through a connector, e.g. HDMI.
All of the above are DRM objects. Each object has a unique ID, type, and properties. Properties are generic and typed.
In the atomic KMS API, the state of the pipeline is fully defined by the DRM objects and their property values.
IPP (Image Post Processing) is a memory-to-memory operation on an image buffer: crop, scale, colorspace conversion, rotation, flip. Exynos had custom DRM extensions (mainline since 3.8) to support it. Their userspace API is heavily based on the implementation of the hardware modules available in 2013. It's hard to understand, error-prone, and not fully implemented. There are no open-source clients that use this API, so Samsung started a rewrite.
A single operation has: a source and a destination buffer, the operation area inside each, and rotation and flip transformations.
Picture processing API: a memory-to-memory operation that does scaling, cropping, colorspace conversion, rotation, and flip. To be generic, it supports querying capabilities and hides the details of the underlying hardware. It fits into KMS, reusing DRM objects, properties, and framebuffers, and it allows future extensions. It was submitted to the linux-samsung-soc list in August 2016. It is currently called a Frame Buffer Processor, but that name is considered confusing and will change.
There are 3 new ioctls: GETFBPROCRESOURCES (get the number of FBP objects and their IDs), GETFBPROC (get the capabilities of an FBP), and FBPROC (apply an operation to a framebuffer). The capabilities are CROP, ROTATE, SCALE, CONVERT (colorspace conversion), and FB_MODIFIERS (?). The FBPROC operation flags are EVENT (register a callback for when the operation is finished), TEST_ONLY (don't actually perform the operation, just check whether the parameters are correct), and ASYNC. Properties on the FBP object define the operation to perform and are set using the normal KMS calls: SRC_FB_ID (the framebuffer to operate on), SRC_X, SRC_Y, SRC_W, SRC_H, and the same for DST. There are also DRM userspace wrappers around all of these ioctls.
Currently only implemented for Exynos.
Why not use V4L2? The problem is that changing any parameter in V4L2 requires a bunch of ioctls. However, V4L2 is actually implementing a single ioctl to configure the whole pipeline.
Where would this kind of API be used? In middleware, e.g. Wayland compositors or Xorg, and perhaps in GStreamer. For Wayland/Xorg, a problem is that the DRM API is very much focused on use by a single application.
If an image processing block really is part of the display pipeline, e.g. it outputs directly to a CRTC, it should instead be implemented as a Plane object, so that it sits at the right position in the pipeline in the DRM/KMS model.