.. SPDX-License-Identifier: GPL-2.0

.. include:: <isonum.txt>

===============================================================
Intel Image Processing Unit 3 (IPU3) Imaging Unit (ImgU) driver
===============================================================

Copyright |copy| 2018 Intel Corporation

Introduction
============

This file documents the Intel IPU3 (3rd generation Image Processing Unit)
Imaging Unit drivers located under drivers/media/pci/intel/ipu3 (CIO2) as well
as under drivers/staging/media/ipu3 (ImgU).

The Intel IPU3 found in certain Kaby Lake (as well as certain Sky Lake)
platforms (U/Y processor lines) is made up of two parts, namely the Imaging
Unit (ImgU) and the CIO2 device (MIPI CSI-2 receiver).

The CIO2 device receives the raw Bayer data from the sensors and outputs the
frames in a format that is specific to the IPU3 (for consumption by the IPU3
ImgU). The CIO2 driver is available as drivers/media/pci/intel/ipu3/ipu3-cio2*
and is enabled through the CONFIG_VIDEO_IPU3_CIO2 config option.

The Imaging Unit (ImgU) is responsible for processing images captured
by the IPU3 CIO2 device. The ImgU driver sources can be found under the
drivers/staging/media/ipu3 directory. The driver is enabled through the
CONFIG_VIDEO_IPU3_IMGU config option.

The two driver modules are named ipu3_csi2 and ipu3_imgu, respectively.

The drivers have been tested on Kaby Lake platforms (U/Y processor lines).

Both of the drivers implement V4L2, Media Controller and V4L2 sub-device
interfaces. The IPU3 CIO2 driver supports camera sensors connected to the CIO2
MIPI CSI-2 interfaces through V4L2 sub-device sensor drivers.

CIO2
====

The CIO2 is represented as a single V4L2 subdev, which provides a V4L2 subdev
interface to the user space. There is a video node for each CSI-2 receiver,
with a single media controller interface for the entire device.

The CIO2 contains four independent capture channels, each with its own MIPI
CSI-2 receiver and DMA engine. Each channel is modelled as a V4L2 sub-device
exposed to userspace as a V4L2 sub-device node and has two pads:

.. tabularcolumns:: |p{0.8cm}|p{4.0cm}|p{4.0cm}|

.. flat-table::

    * - pad
      - direction
      - purpose

    * - 0
      - sink
      - MIPI CSI-2 input, connected to the sensor subdev

    * - 1
      - source
      - Raw video capture, connected to the V4L2 video interface

The V4L2 video interfaces model the DMA engines. They are exposed to userspace
as V4L2 video device nodes.

Capturing frames in raw Bayer format
------------------------------------

The CIO2 MIPI CSI-2 receiver is used to capture frames (in packed raw Bayer
format) from the raw sensors connected to the CSI-2 ports. The captured frames
are used as input to the ImgU driver.

Image processing using the IPU3 ImgU requires tools such as raw2pnm [#f1]_ and
yavta [#f2]_ due to the following unique requirements and / or features
specific to the IPU3.

- The IPU3 CSI2 receiver outputs the captured frames from the sensor in packed
  raw Bayer format that is specific to the IPU3.

- Multiple video nodes have to be operated simultaneously.

Let us take the example of an ov5670 sensor connected to CSI2 port 0, for a
2592x1944 image capture.

Using the media controller APIs with media-ctl [#f3]_, the ov5670 sensor is
configured to send frames in packed raw Bayer format to the IPU3 CSI2
receiver.

.. code-block:: none

    # This example assumes /dev/media0 as the CIO2 media device
    export MDEV=/dev/media0

    # and that the ov5670 sensor is connected to i2c bus 10 with address 0x36
    export SDEV=$(media-ctl -d $MDEV -e "ov5670 10-0036")

    # Establish the link for the media devices using media-ctl
    media-ctl -d $MDEV -l "ov5670:0 -> ipu3-csi2 0:0[1]"

    # Set the format for the media devices
    media-ctl -d $MDEV -V "ov5670:0 [fmt:SGRBG10/2592x1944]"
    media-ctl -d $MDEV -V "ipu3-csi2 0:0 [fmt:SGRBG10/2592x1944]"
    media-ctl -d $MDEV -V "ipu3-csi2 0:1 [fmt:SGRBG10/2592x1944]"

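The pipeline setup above can also be scripted. A minimal sketch that generates
the same media-ctl invocations for a given sensor entity and resolution; the
entity names, i2c address and media device path are the assumptions from the
ov5670 example above:

```python
# Sketch: generate the media-ctl commands shown above for a given sensor
# entity and capture resolution. Entity names, the media device path and
# the SGRBG10 media bus code are assumptions from the ov5670 example.

def cio2_setup_cmds(mdev="/dev/media0", sensor="ov5670 10-0036",
                    fmt="SGRBG10", width=2592, height=1944, port=0):
    size = f"{fmt}/{width}x{height}"
    sensor_name = sensor.split()[0]      # entity name without the i2c address
    return [
        f'media-ctl -d {mdev} -l "{sensor_name}:0 -> ipu3-csi2 {port}:0[1]"',
        f'media-ctl -d {mdev} -V "{sensor_name}:0 [fmt:{size}]"',
        f'media-ctl -d {mdev} -V "ipu3-csi2 {port}:0 [fmt:{size}]"',
        f'media-ctl -d {mdev} -V "ipu3-csi2 {port}:1 [fmt:{size}]"',
    ]

for cmd in cio2_setup_cmds():
    print(cmd)
```

Such a generator keeps the sensor format and the two ipu3-csi2 pad formats in
sync, which the manual commands require but do not enforce.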
Once the media pipeline is configured, desired sensor-specific settings
(such as exposure and gain settings) can be set, using the yavta tool.

e.g.

.. code-block:: none

    yavta -w 0x009e0903 444 $SDEV
    yavta -w 0x009e0913 1024 $SDEV
    yavta -w 0x009e0911 2046 $SDEV

Once the desired sensor settings are set, frame captures can be done as below.

e.g.

.. code-block:: none

    yavta --data-prefix -u -c10 -n5 -I -s2592x1944 --file=/tmp/frame-#.bin \
          -f IPU3_SGRBG10 $(media-ctl -d $MDEV -e "ipu3-cio2 0")

With the above command, 10 frames are captured at 2592x1944 resolution in
SGRBG10 Bayer order and output in the packed IPU3_SGRBG10 format.

The captured frames are available as /tmp/frame-#.bin files.

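The frame files are in the IPU3 packed raw Bayer layout, where every 25 10-bit
samples occupy a 32-byte block (250 bits of data plus 6 bits of padding). Below
is a sketch of packing and unpacking one such block; the LSB-first,
little-endian bit order is an assumption here, so consult
:ref:`v4l2-pix-fmt-ipu3-sbggr10` for the authoritative layout before relying
on it:

```python
# Sketch: pack/unpack one 32-byte block of the IPU3 packed 10-bit raw
# Bayer format (25 samples per 32 bytes). The LSB-first bit order is an
# assumption; the pixel format documentation is authoritative.

def unpack_block(block: bytes):
    assert len(block) == 32
    word = int.from_bytes(block, "little")          # treat as 256-bit integer
    return [(word >> (10 * i)) & 0x3FF for i in range(25)]

def pack_block(samples):
    assert len(samples) == 25
    word = 0
    for i, s in enumerate(samples):
        word |= (s & 0x3FF) << (10 * i)             # 10 bits per sample
    return word.to_bytes(32, "little")

# Round-trip check with a synthetic ramp of 10-bit values
ramp = [(i * 41) % 1024 for i in range(25)]
assert unpack_block(pack_block(ramp)) == ramp
```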
ImgU
====

The ImgU is represented as two V4L2 subdevs, each of which provides a V4L2
subdev interface to the user space.

Each V4L2 subdev represents a pipe, which can support a maximum of 2 streams.
This helps to support advanced camera features like Continuous View Finder
(CVF) and Snapshot During Video (SDV).

The ImgU contains two independent pipes, each modelled as a V4L2 sub-device
exposed to userspace as a V4L2 sub-device node.

Each pipe has two sink pads and three source pads for the following purposes:

.. tabularcolumns:: |p{0.8cm}|p{4.0cm}|p{4.0cm}|

.. flat-table::

    * - pad
      - direction
      - purpose

    * - 0
      - sink
      - Input raw video stream

    * - 1
      - sink
      - Processing parameters

    * - 2
      - source
      - Output processed video stream

    * - 3
      - source
      - Output viewfinder video stream

    * - 4
      - source
      - 3A statistics

Each pad is connected to a corresponding V4L2 video interface, exposed to
userspace as a V4L2 video device node.

Device operation
----------------

With the ImgU, once the input video node ("ipu3-imgu 0/1":0, in
<entity>:<pad-number> format) is queued with a buffer (in packed raw Bayer
format), the ImgU starts processing the buffer and produces the video output
in YUV format and the statistics output on the respective output nodes. The
driver is expected to have buffers ready for all of the parameter, output and
statistics nodes when the input video node is queued with a buffer.

At a minimum, all of the input, main output, 3A statistics and viewfinder
video nodes should be enabled for the IPU3 to start image processing.

Each ImgU V4L2 subdev has the following set of video nodes.

input, output and viewfinder video nodes
----------------------------------------

The frames (in packed raw Bayer format specific to the IPU3) received by the
input video node are processed by the IPU3 Imaging Unit and are output to two
video nodes, each targeting a different purpose (main output and viewfinder
output).

Details on the Bayer format specific to the IPU3 can be found in
:ref:`v4l2-pix-fmt-ipu3-sbggr10`.

The driver supports the V4L2 Video Capture Interface as defined at
:ref:`devices`.

Only the multi-planar API is supported. More details can be found at
:ref:`planar-apis`.

Parameters video node
---------------------

The parameters video node receives the ImgU algorithm parameters that are used
to configure how the ImgU algorithms process the image.

Details on the processing parameters specific to the IPU3 can be found in
:ref:`v4l2-meta-fmt-params`.

3A statistics video node
------------------------

The 3A statistics video node is used by the ImgU driver to output the 3A (auto
focus, auto exposure and auto white balance) statistics for the frames that
are being processed by the ImgU to user space applications. User space
applications can use this statistics data to compute the desired algorithm
parameters for the ImgU.

Configuring the Intel IPU3
==========================

The IPU3 ImgU pipelines can be configured using the Media Controller, defined
at :ref:`media_controller`.

Running mode and firmware binary selection
------------------------------------------

The ImgU is driven by firmware. Currently, the ImgU firmware supports running
two pipes in time-sharing with a single frame of input data. Each pipe can run
in a certain mode - "VIDEO" or "STILL". "VIDEO" mode is commonly used for
video frame capture, and "STILL" is used for still frame capture. However, you
can also select "VIDEO" mode to capture still frames if you want to capture
images with less system load and power. In "STILL" mode, the ImgU will try to
use a smaller BDS factor and output a larger Bayer frame for further YUV
processing than in "VIDEO" mode, in order to get higher quality images.
Besides, "STILL" mode needs XNR3 for noise reduction, hence "STILL" mode needs
more power and memory bandwidth than "VIDEO" mode. TNR is enabled in "VIDEO"
mode and bypassed in "STILL" mode. The ImgU runs in "VIDEO" mode by default;
the user can use the V4L2 control V4L2_CID_INTEL_IPU3_MODE (currently defined
in drivers/staging/media/ipu3/include/intel-ipu3.h [#f5]_) to query and set
the running mode. For the user, there is no difference in buffer queueing
between the "VIDEO" and "STILL" modes: the mandatory input and main output
nodes should be enabled and buffers queued, while the statistics and
viewfinder queues are optional.

The firmware binary is selected according to the current running mode: log
messages such as "using binary if_to_osys_striped" or "using binary
if_to_osys_primary_striped" can be observed if you enable ImgU dynamic debug.
The binary if_to_osys_striped is selected for "VIDEO" mode and the binary
if_to_osys_primary_striped is selected for "STILL" mode.


Processing the image in raw Bayer format
----------------------------------------

Configuring ImgU V4L2 subdev for image processing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ImgU V4L2 subdevs have to be configured with the media controller APIs to
have all the video nodes set up correctly.

Let us take the "ipu3-imgu 0" subdev as an example.

.. code-block:: none

    media-ctl -d $MDEV -r
    media-ctl -d $MDEV -l '"ipu3-imgu 0 input":0 -> "ipu3-imgu 0":0[1]'
    media-ctl -d $MDEV -l '"ipu3-imgu 0":2 -> "ipu3-imgu 0 output":0[1]'
    media-ctl -d $MDEV -l '"ipu3-imgu 0":3 -> "ipu3-imgu 0 viewfinder":0[1]'
    media-ctl -d $MDEV -l '"ipu3-imgu 0":4 -> "ipu3-imgu 0 3a stat":0[1]'

Also the pipe mode of the corresponding V4L2 subdev should be set as desired
(e.g. 0 for video mode or 1 for still mode) through the control id 0x009819a1
as below.

.. code-block:: none

    yavta -w "0x009819A1 1" /dev/v4l-subdev7

Certain hardware blocks in the ImgU pipeline can change the frame resolution
by cropping or scaling. These hardware blocks include the Input Feeder (IF),
Bayer Down Scaler (BDS) and Geometric Distortion Correction (GDC). There is
also a block which can change the frame resolution - the YUV Scaler - but it
is only applicable to the secondary output.

RAW Bayer frames go through these ImgU pipeline hardware blocks and the final
processed image is output to DDR memory.

.. kernel-figure::  ipu3_rcb.svg
   :alt: ipu3 resolution blocks image

   IPU3 resolution change hardware blocks

**Input Feeder**

The Input Feeder gets the Bayer frame data from the sensor. It can enable
cropping of lines and columns from the frame and then store the pixels into
the device's internal pixel buffer, ready to be read out by the following
blocks.

**Bayer Down Scaler**

The Bayer Down Scaler is capable of performing image scaling in the Bayer
domain. The downscale factor can be configured from 1X to 1/4X in each axis
with configuration steps of 0.03125 (1/32).

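The valid BDS factors can be enumerated from that description. A small
illustrative sketch; the representation of each factor as k/32 follows
directly from the 1/32 step size, while the output-size rounding is an
assumption (real output sizes are also subject to the block's own alignment
rules):

```python
from fractions import Fraction

# Sketch: enumerate the Bayer Down Scaler factors, 1/4X (8/32) up to
# 1X (32/32), in steps of 1/32 as described above.
def bds_factors():
    return [Fraction(k, 32) for k in range(8, 33)]

# Illustrative only: truncation is an assumption, and real output sizes
# must also honor the block's alignment requirements.
def bds_output(width, factor):
    return int(width * factor)

assert bds_factors()[0] == Fraction(1, 4)   # 1/4X
assert bds_factors()[-1] == 1               # 1X
```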
**Geometric Distortion Correction**

Geometric Distortion Correction is used to perform correction of distortions
and image filtering. It needs some extra filter and envelope padding pixels to
work, so the input resolution of the GDC should be larger than the output
resolution.

**YUV Scaler**

The YUV Scaler is similar to the BDS, but it mainly does image down scaling in
the YUV domain. It can support up to 1/12X down scaling, but it cannot be
applied to the main output.

The ImgU V4L2 subdev has to be configured with the supported resolutions in
all the above hardware blocks, for a given input resolution. For a given
supported resolution for an input frame, the Input Feeder, Bayer Down Scaler
and GDC blocks should be configured with the supported resolutions, as each
hardware block has its own alignment requirement.

You must configure the output resolution of the hardware blocks carefully to
meet the hardware requirements along with keeping the maximum field of view.
The intermediate resolutions can be generated by a specific tool:

https://github.com/intel/intel-ipu3-pipecfg

This tool can be used to generate intermediate resolutions. More information
can be obtained by looking at the following IPU3 ImgU configuration table:

https://chromium.googlesource.com/chromiumos/overlays/board-overlays/+/master

Under the baseboard-poppy/media-libs/cros-camera-hal-configs-poppy/files/gcss
directory, graph_settings_ov5670.xml can be used as an example.

The following steps prepare the ImgU pipeline for the image processing.

1. The ImgU V4L2 subdev data format should be set by using the
   VIDIOC_SUBDEV_S_FMT on pad 0, using the GDC width and height obtained
   above.

2. The ImgU V4L2 subdev cropping should be set by using the
   VIDIOC_SUBDEV_S_SELECTION on pad 0, with V4L2_SEL_TGT_CROP as the target,
   using the input feeder height and width.

3. The ImgU V4L2 subdev composing should be set by using the
   VIDIOC_SUBDEV_S_SELECTION on pad 0, with V4L2_SEL_TGT_COMPOSE as the
   target, using the BDS height and width.

For the ov5670 example, for an input frame with a resolution of 2592x1944
(which is input to the ImgU subdev pad 0), the corresponding resolutions
for the input feeder, BDS and GDC are 2592x1944, 2592x1944 and 2560x1920
respectively.

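The relationship between these resolutions can be sanity-checked
programmatically. A sketch that encodes only the constraints stated above
(each stage's output must not exceed its input, and the GDC needs extra
filter/envelope padding so its output is smaller than its input here); the
helper name and structure are illustrative, not part of any real tool:

```python
# Sketch: check the IF -> BDS -> GDC resolution chain from the ov5670
# example. Only the constraints stated in the text are encoded here;
# the real per-block alignment rules are not modelled.

def check_chain(sensor, if_res, bds, gdc):
    w, h = sensor
    for name, (sw, sh) in (("IF", if_res), ("BDS", bds), ("GDC", gdc)):
        # No stage may upscale beyond its input resolution
        assert sw <= w and sh <= h, f"{name}: {sw}x{sh} exceeds {w}x{h}"
        w, h = sw, sh
    # The GDC consumes filter/envelope padding pixels, so its output is
    # strictly smaller than its input in this configuration.
    assert gdc[0] < bds[0] and gdc[1] < bds[1]
    return True

assert check_chain((2592, 1944), (2592, 1944), (2592, 1944), (2560, 1920))
```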
Once this is done, the received raw Bayer frames can be input to the ImgU
V4L2 subdev as below, using the open source application v4l2n [#f1]_.

For an image captured at 2592x1944 [#f4]_ resolution, with the desired output
resolution as 2560x1920 and the viewfinder resolution as 2560x1920, the
following v4l2n command can be used. This helps process the raw Bayer frames
and produces the desired results for the main output image and the viewfinder
output, in NV12 format.

.. code-block:: none

    v4l2n --pipe=4 --load=/tmp/frame-#.bin --open=/dev/video4 \
          --fmt=type:VIDEO_OUTPUT_MPLANE,width=2592,height=1944,pixelformat=0X47337069 \
          --reqbufs=type:VIDEO_OUTPUT_MPLANE,count:1 --pipe=1 \
          --output=/tmp/frames.out --open=/dev/video5 \
          --fmt=type:VIDEO_CAPTURE_MPLANE,width=2560,height=1920,pixelformat=NV12 \
          --reqbufs=type:VIDEO_CAPTURE_MPLANE,count:1 --pipe=2 \
          --output=/tmp/frames.vf --open=/dev/video6 \
          --fmt=type:VIDEO_CAPTURE_MPLANE,width=2560,height=1920,pixelformat=NV12 \
          --reqbufs=type:VIDEO_CAPTURE_MPLANE,count:1 --pipe=3 --open=/dev/video7 \
          --output=/tmp/frames.3A --fmt=type:META_CAPTURE,? \
          --reqbufs=count:1,type:META_CAPTURE --pipe=1,2,3,4 --stream=5

You can also use the yavta [#f2]_ command to do the same thing as above:

.. code-block:: none

    yavta --data-prefix -Bcapture-mplane -c10 -n5 -I -s2592x1944 \
          --file=frame-#.out -f NV12 /dev/video5 & \
    yavta --data-prefix -Bcapture-mplane -c10 -n5 -I -s2592x1944 \
          --file=frame-#.vf -f NV12 /dev/video6 & \
    yavta --data-prefix -Bmeta-capture -c10 -n5 -I \
          --file=frame-#.3a /dev/video7 & \
    yavta --data-prefix -Boutput-mplane -c10 -n5 -I -s2592x1944 \
          --file=/tmp/frame-in.cio2 -f IPU3_SGRBG10 /dev/video4

where the /dev/video4, /dev/video5, /dev/video6 and /dev/video7 devices point
to the input, output, viewfinder and 3A statistics video nodes respectively.

Converting the raw Bayer image into YUV domain
----------------------------------------------

The processed images after the above step can be converted to the YUV domain
as below.

Main output frames
~~~~~~~~~~~~~~~~~~

.. code-block:: none

    raw2pnm -x2560 -y1920 -fNV12 /tmp/frames.out /tmp/frames.out.ppm

where 2560x1920 is the output resolution and NV12 the video format, followed
by the input frame and the output PNM file.

Viewfinder output frames
~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: none

    raw2pnm -x2560 -y1920 -fNV12 /tmp/frames.vf /tmp/frames.vf.ppm

where 2560x1920 is the output resolution and NV12 the video format, followed
by the input frame and the output PNM file.

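The NV12 buffers converted above consist of an 8-bit luma plane followed by an
interleaved CbCr plane subsampled by 2 in each direction. A small sketch of
the expected file size and plane offsets, assuming tightly packed rows (no
per-row padding):

```python
# Sketch: NV12 buffer layout, assuming tightly packed rows.
def nv12_layout(width, height):
    y_size = width * height            # 8-bit luma plane
    uv_size = width * height // 2      # interleaved CbCr, 2x2 subsampled
    return {"y_offset": 0, "uv_offset": y_size, "total": y_size + uv_size}

layout = nv12_layout(2560, 1920)
assert layout["total"] == 2560 * 1920 * 3 // 2    # 7372800 bytes per frame
```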
Example user space code for IPU3
================================

User space code that configures and uses the IPU3 is available here:

https://chromium.googlesource.com/chromiumos/platform/arc-camera/+/master/

The source can be located under the hal/intel directory.

Overview of IPU3 pipeline
=========================

The IPU3 pipeline has a number of image processing stages, each of which takes
a set of parameters as input. The major stages of the pipeline are shown here:

.. kernel-render:: DOT
   :alt: IPU3 ImgU Pipeline
   :caption: IPU3 ImgU Pipeline Diagram

   digraph "IPU3 ImgU" {
       node [shape=box]
       splines="ortho"
       rankdir="LR"

       a [label="Raw pixels"]
       b [label="Bayer Downscaling"]
       c [label="Optical Black Correction"]
       d [label="Linearization"]
       e [label="Lens Shading Correction"]
       f [label="White Balance / Exposure / Focus Apply"]
       g [label="Bayer Noise Reduction"]
       h [label="ANR"]
       i [label="Demosaicing"]
       j [label="Color Correction Matrix"]
       k [label="Gamma correction"]
       l [label="Color Space Conversion"]
       m [label="Chroma Down Scaling"]
       n [label="Chromatic Noise Reduction"]
       o [label="Total Color Correction"]
       p [label="XNR3"]
       q [label="TNR"]
       r [label="DDR", style=filled, fillcolor=yellow, shape=cylinder]
       s [label="YUV Downscaling"]
       t [label="DDR", style=filled, fillcolor=yellow, shape=cylinder]

       { rank=same; a -> b -> c -> d -> e -> f -> g -> h -> i }
       { rank=same; j -> k -> l -> m -> n -> o -> p -> q -> s -> t}

       a -> j [style=invis, weight=10]
       i -> j
       q -> r
   }

The table below presents a description of the above algorithms.

======================== =======================================================
Name                     Description
======================== =======================================================
Optical Black Correction The Optical Black Correction block subtracts a
                         pre-defined value from the respective pixel values to
                         obtain better image quality.
                         Defined in struct ipu3_uapi_obgrid_param.
Linearization            This algo block uses linearization parameters to
                         address non-linearity sensor effects. The lookup
                         table is defined in
                         struct ipu3_uapi_isp_lin_vmem_params.
SHD                      Lens shading correction is used to correct spatial
                         non-uniformity of the pixel response due to optical
                         lens shading. This is done by applying a different
                         gain for each pixel. The gain, black level etc. are
                         configured in struct ipu3_uapi_shd_config_static.
BNR                      The Bayer noise reduction block removes image noise
                         by applying a bilateral filter.
                         See struct ipu3_uapi_bnr_static_config for details.
ANR                      Advanced Noise Reduction is a block based algorithm
                         that performs noise reduction in the Bayer domain.
                         The convolution matrix etc. can be found in
                         struct ipu3_uapi_anr_config.
DM                       Demosaicing converts raw sensor data in Bayer format
                         into RGB (Red, Green, Blue) presentation. It also
                         outputs an estimation of the Y channel for further
                         stream processing by the firmware. The struct is
                         defined as struct ipu3_uapi_dm_config.
Color Correction         The Color Correction algo transforms the sensor
                         specific color space to the standard "sRGB" color
                         space. This is done by applying a 3x3 matrix defined
                         in struct ipu3_uapi_ccm_mat_config.
Gamma correction         Gamma correction struct ipu3_uapi_gamma_config is a
                         basic non-linear tone mapping correction that is
                         applied per pixel for each pixel component.
CSC                      Color space conversion transforms each pixel from the
                         RGB primary presentation to YUV (Y: brightness,
                         UV: chrominance) presentation. This is done by
                         applying a 3x3 matrix defined in
                         struct ipu3_uapi_csc_mat_config.
CDS                      Chroma down sampling.
                         After the CSC is performed, the Chroma Down Sampling
                         is applied for a UV plane down sampling by a factor
                         of 2 in each direction for YUV 4:2:0 using a 4x2
                         configurable filter struct ipu3_uapi_cds_params.
CHNR                     Chroma noise reduction.
                         This block processes only the chrominance pixels and
                         performs noise reduction by cleaning the high
                         frequency noise.
                         See struct ipu3_uapi_yuvp1_chnr_config.
TCC                      Total color correction as defined in
                         struct ipu3_uapi_yuvp2_tcc_static_config.
XNR3                     eXtreme Noise Reduction V3 is the third revision of
                         the noise reduction algorithm used to improve image
                         quality. This removes the low frequency noise in the
                         captured image. Two related structs are defined:
                         struct ipu3_uapi_isp_xnr3_params for ISP data memory
                         and struct ipu3_uapi_isp_xnr3_vmem_params for vector
                         memory.
TNR                      The Temporal Noise Reduction block compares
                         successive frames in time to remove anomalies /
                         noise in pixel values.
                         struct ipu3_uapi_isp_tnr3_vmem_params and
                         struct ipu3_uapi_isp_tnr3_params are defined for ISP
                         vector and data memory respectively.
======================== =======================================================

Other often-encountered acronyms not listed in the above table:

	ACC
		Accelerator cluster
	AWB_FR
		Auto white balance filter response statistics
	BDS
		Bayer downscaler parameters
	CCM
		Color correction matrix coefficients
	IEFd
		Image enhancement filter directed
	Obgrid
		Optical black level compensation
	OSYS
		Output system configuration
	ROI
		Region of interest
	YDS
		Y down sampling
	YTM
		Y-tone mapping

A few stages of the pipeline will be executed by firmware running on the ISP
processor, while many others will use a set of fixed hardware blocks, also
called the accelerator cluster (ACC), to crunch pixel data and produce
statistics.

ACC parameters of individual algorithms, as defined by
struct ipu3_uapi_acc_param, can be chosen to be applied by the user
space through struct ipu3_uapi_flags embedded in the
struct ipu3_uapi_params structure. For parameters that are configured as
not enabled by the user space, the corresponding structs are ignored by the
driver, in which case the existing configuration of the algorithm will be
preserved.

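This enable-flag pattern can be modelled as follows. This is a purely
illustrative sketch of the selection semantics; the real interface is the C
structs in intel-ipu3.h, and the algorithm names used as keys here are
hypothetical, not actual struct field names:

```python
# Sketch of the parameter-selection pattern: each algorithm's parameter
# struct is consumed only when its corresponding "use" flag is set;
# otherwise the driver preserves the algorithm's existing configuration.
# Keys ("bnr", "shd") are hypothetical, not intel-ipu3.h field names.

def apply_params(current, new_params, use_flags):
    applied = dict(current)
    for name, enabled in use_flags.items():
        if enabled and name in new_params:
            applied[name] = new_params[name]   # take the new config
        # entries with a cleared flag are ignored: existing config kept
    return applied

current = {"bnr": "defaults", "shd": "defaults"}
updated = apply_params(current,
                       {"bnr": "tuned", "shd": "tuned"},
                       {"bnr": True, "shd": False})
assert updated == {"bnr": "tuned", "shd": "defaults"}
```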
References
==========

.. [#f5] drivers/staging/media/ipu3/include/intel-ipu3.h

.. [#f1] https://github.com/intel/nvt

.. [#f2] http://git.ideasonboard.org/yavta.git

.. [#f3] http://git.ideasonboard.org/?p=media-ctl.git;a=summary

.. [#f4] ImgU limitation requires an additional 16x16 for all input resolutions