.. SPDX-License-Identifier: GPL-2.0

================================================================
Intel Image Processing Unit 3 (IPU3) Imaging Unit (ImgU) driver
================================================================

This file documents the Intel IPU3 (3rd generation Image Processing Unit)

ImgU). The CIO2 driver is available as drivers/media/pci/intel/ipu3/ipu3-cio2*

Both of the drivers implement V4L2, Media Controller and V4L2 sub-device

MIPI CSI-2 interfaces through V4L2 sub-device sensor drivers.
interface to the user space. There is a video node for each CSI-2 receiver,

The CIO2 contains four independent capture channels, each with its own MIPI CSI-2
receiver and DMA engine. Each channel is modelled as a V4L2 sub-device exposed
to userspace as a V4L2 sub-device node and has two pads:
.. flat-table::

    * - pad
      - direction
      - purpose

    * - 0
      - sink
      - MIPI CSI-2 input, connected to the sensor subdev

    * - 1
      - source
      - Raw video capture, connected to the V4L2 video interface
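As a quick sanity check (an addition to the original text), the resulting CIO2
topology, including the per-receiver sub-devices and the capture video nodes
connected to them, can be printed with media-ctl; $MDEV is assumed to point at
the CIO2 media device node:

.. code-block:: none

    # Print the media controller topology of the CIO2 device, listing the
    # "ipu3-csi2 N" sub-devices, their pads and the connected video nodes.
    media-ctl -d $MDEV -p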
Capturing frames in raw Bayer format
------------------------------------
Image processing using IPU3 ImgU requires tools such as raw2pnm [#f1]_ and
yavta, due to the following IPU3-specific requirements:

- The IPU3 CSI2 receiver outputs the captured frames from the sensor in packed
  raw Bayer format that is specific to IPU3.

- Multiple video nodes have to be operated simultaneously.
Let us take the example of an ov5670 sensor connected to CSI2 port 0, for a
2592x1944 image capture.

Using the media controller APIs, the ov5670 sensor is configured to send
.. code-block:: none

    # and that ov5670 sensor is connected to i2c bus 10 with address 0x36
    export SDEV=$(media-ctl -d $MDEV -e "ov5670 10-0036")

    # Establish the link for the media devices using media-ctl [#f3]_
    media-ctl -d $MDEV -l "ov5670:0 -> ipu3-csi2 0:0[1]"

    media-ctl -d $MDEV -V "ov5670:0 [fmt:SGRBG10/2592x1944]"
    media-ctl -d $MDEV -V "ipu3-csi2 0:0 [fmt:SGRBG10/2592x1944]"
    media-ctl -d $MDEV -V "ipu3-csi2 0:1 [fmt:SGRBG10/2592x1944]"
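If desired, the formats actually negotiated on the CSI-2 receiver pads can be
read back for verification; this step is an illustrative addition and depends
on the media-ctl version providing the --get-v4l2 option:

.. code-block:: none

    # Read back the active format on the ipu3-csi2 sink and source pads
    media-ctl -d $MDEV --get-v4l2 "ipu3-csi2 0:0"
    media-ctl -d $MDEV --get-v4l2 "ipu3-csi2 0:1"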
Once the media pipeline is configured, desired sensor-specific settings can be
applied, for example with yavta:

.. code-block:: none

    yavta -w "0x009e0903 444" $SDEV
    yavta -w "0x009e0913 1024" $SDEV
    yavta -w "0x009e0911 2046" $SDEV
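The sensor's available controls, including the ones set above, and their
current values can also be listed with yavta; this readback step is an
illustrative addition to the original sequence:

.. code-block:: none

    # List the V4L2 controls exposed by the sensor sub-device
    yavta -l $SDEV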
Once the desired sensor settings are set, frame captures can be done as below.

.. code-block:: none

    yavta --data-prefix -u -c10 -n5 -I -s2592x1944 --file=/tmp/frame-#.bin \
          -f IPU3_SGRBG10 $(media-ctl -d $MDEV -e "ipu3-cio2 0")

The captured frames are available as /tmp/frame-#.bin files.
The ImgU contains two independent pipes, each modelled as a V4L2 sub-device
exposed to userspace as a V4L2 sub-device node.
.. flat-table::

    * - pad
      - direction
      - purpose

    * - 0
      - sink
      - Input raw video stream

    * - 1
      - sink
      - Processing parameters

    * - 2
      - source
      - Output processed video stream

    * - 3
      - source
      - Output viewfinder video stream

    * - 4
      - source
      - 3A statistics
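The video device nodes connected to these pads can be looked up by entity name
with media-ctl, in the same way as for the CIO2 capture node; the entity names
below match the link setup commands used later in this document, and $MDEV is
assumed to now point at the ImgU media device node:

.. code-block:: none

    # Resolve the /dev/video* nodes attached to the ImgU pipe 0 pads
    media-ctl -d $MDEV -e "ipu3-imgu 0 input"
    media-ctl -d $MDEV -e "ipu3-imgu 0 output"
    media-ctl -d $MDEV -e "ipu3-imgu 0 viewfinder"
    media-ctl -d $MDEV -e "ipu3-imgu 0 3a stat"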
Device operation
----------------

With ImgU, once the input video node ("ipu3-imgu 0/1":0, in
<entity>:<pad-number> format) is queued with a buffer (in packed raw Bayer

video nodes should be enabled for IPU3 to start image processing.
Input, output and viewfinder video nodes
----------------------------------------

:ref:`v4l2-pix-fmt-ipu3-sbggr10`.

Only the multi-planar API is supported. More details can be found at
:ref:`planar-apis`.
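As an illustration (an addition to the original walkthrough), the pixel formats
accepted by the ImgU input node, and thereby the packed raw Bayer format
mentioned above, can be listed with v4l2-ctl:

.. code-block:: none

    # List the (multi-planar) output formats accepted by the ImgU input node
    v4l2-ctl -d $(media-ctl -d $MDEV -e "ipu3-imgu 0 input") --list-formats-out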
Parameters video node
---------------------

to configure how the ImgU algorithms process the image.

:ref:`v4l2-meta-fmt-params`.
3A statistics video node
------------------------
Running mode and firmware binary selection
------------------------------------------

time-sharing with single input frame data. Each pipe can run in a certain mode -

(currently defined in drivers/staging/media/ipu3/include/intel-ipu3.h) to query

should be enabled and buffers need to be queued, the statistics and the viewfinder
Processing the image in raw Bayer format
----------------------------------------

Configuring ImgU V4L2 subdev for image processing

Let us take "ipu3-imgu 0" subdev as an example.
.. code-block:: none

    media-ctl -d $MDEV -r
    media-ctl -d $MDEV -l '"ipu3-imgu 0 input":0 -> "ipu3-imgu 0":0[1]'
    media-ctl -d $MDEV -l '"ipu3-imgu 0":2 -> "ipu3-imgu 0 output":0[1]'
    media-ctl -d $MDEV -l '"ipu3-imgu 0":3 -> "ipu3-imgu 0 viewfinder":0[1]'
    media-ctl -d $MDEV -l '"ipu3-imgu 0":4 -> "ipu3-imgu 0 3a stat":0[1]'
.. code-block:: none

    yavta -w "0x009819A1 1" /dev/v4l-subdev7
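The selected mode can be read back with yavta's control read option; this
readback step, and the assumption that /dev/v4l-subdev7 is the ImgU pipe
sub-device on the test system, are additions for illustration only:

.. code-block:: none

    # Read back the pipe mode control that was just set
    yavta -r 0x009819A1 /dev/v4l-subdev7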
There is also a block which can change the frame resolution - the YUV Scaler; it is

processed image output to the DDR memory.

.. kernel-figure:: ipu3_rcb.svg
    :alt: ipu3 resolution blocks image
The Input Feeder gets the Bayer frame data from the sensor; it can enable cropping

The Bayer Down Scaler is capable of performing image scaling in the Bayer domain; the

and image filtering. It needs some extra filter and envelope padding pixels to

The YUV Scaler is similar to the BDS, but it mainly does image down scaling in
intermediate resolutions can be generated by a specific tool -

https://github.com/intel/intel-ipu3-pipecfg

https://chromium.googlesource.com/chromiumos/overlays/board-overlays/+/master

Under baseboard-poppy/media-libs/cros-camera-hal-configs-poppy/files/gcss
The following steps prepare the ImgU pipeline for the image processing.

For an image captured with 2592x1944 [#f4]_ resolution, with desired output

the desired results for the main output image and the viewfinder output, in NV12
.. code-block:: none

    v4l2n --pipe=4 --load=/tmp/frame-#.bin --open=/dev/video4 \
    --fmt=type:VIDEO_OUTPUT_MPLANE,width=2592,height=1944,pixelformat=0X47337069 \
    --reqbufs=type:VIDEO_OUTPUT_MPLANE,count:1 --pipe=1 \
    --output=/tmp/frames.out --open=/dev/video5 \
    --fmt=type:VIDEO_CAPTURE_MPLANE,width=2560,height=1920,pixelformat=NV12 \
    --reqbufs=type:VIDEO_CAPTURE_MPLANE,count:1 --pipe=2 \
    --output=/tmp/frames.vf --open=/dev/video6 \
    --fmt=type:VIDEO_CAPTURE_MPLANE,width=2560,height=1920,pixelformat=NV12 \
    --reqbufs=type:VIDEO_CAPTURE_MPLANE,count:1 --pipe=3 --open=/dev/video7 \
    --output=/tmp/frames.3A --fmt=type:META_CAPTURE,? \
    --reqbufs=count:1,type:META_CAPTURE --pipe=1,2,3,4 --stream=5
.. code-block:: none

    yavta --data-prefix -Bcapture-mplane -c10 -n5 -I -s2592x1944 \
          --file=frame-#.out -f NV12 /dev/video5 & \
    yavta --data-prefix -Bcapture-mplane -c10 -n5 -I -s2592x1944 \
          --file=frame-#.vf -f NV12 /dev/video6 & \
    yavta --data-prefix -Bmeta-capture -c10 -n5 -I \
          --file=frame-#.3a /dev/video7 & \
    yavta --data-prefix -Boutput-mplane -c10 -n5 -I -s2592x1944 \
          --file=/tmp/frame-in.cio2 -f IPU3_SGRBG10 /dev/video4
Converting the raw Bayer image into YUV domain
----------------------------------------------

.. code-block:: none

    raw2pnm -x2560 -y1920 -fNV12 /tmp/frames.out /tmp/frames.out.ppm
.. code-block:: none

    raw2pnm -x2560 -y1920 -fNV12 /tmp/frames.vf /tmp/frames.vf.ppm
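If a PNM viewer is not at hand, the converted frames can further be turned into
PNG with a generic image tool such as ImageMagick; this convenience step is an
addition to the original text:

.. code-block:: none

    # Convert the PNM output of raw2pnm into PNG for easier viewing
    convert /tmp/frames.out.ppm /tmp/frames.out.png
    convert /tmp/frames.vf.ppm /tmp/frames.vf.png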
https://chromium.googlesource.com/chromiumos/platform/arc-camera/+/master/

The IPU3 pipeline has a number of image processing stages, each of which takes a
.. kernel-render:: DOT

    { rank=same; a -> b -> c -> d -> e -> f -> g -> h -> i }
    { rank=same; j -> k -> l -> m -> n -> o -> p -> q -> s -> t}

    a -> j [style=invis, weight=10]
    i -> j
    q -> r
Optical Black Correction Optical Black Correction block subtracts a pre-defined
image quality.
address non-linearity sensor effects. The Lookup table
non-uniformity of the pixel response due to optical
BNR Bayer noise reduction block removes image noise by
DM Demosaicing converts raw sensor data in Bayer format
Color Correction Color Correction algo transforms sensor specific color
basic non-linear tone mapping correction that is
noise reduction algorithm used to improve image
captured image. Two related structs are being defined,
Image enhancement filter directed
Y-tone mapping
.. [#f5] drivers/staging/media/ipu3/include/intel-ipu3.h

.. [#f3] http://git.ideasonboard.org/?p=media-ctl.git;a=summary