====================
DMA Engine API Guide
====================

Vinod Koul <vinod dot koul at intel.com>

.. note:: For DMA Engine usage in async_tx please see:
          ``Documentation/crypto/async-tx-api.rst``


Below is a guide for device driver writers on how to use the Slave-DMA API of
the DMA Engine. This is applicable only for slave DMA usage.

DMA usage
=========

The slave DMA usage consists of the following steps:

- Allocate a DMA slave channel

- Set slave and controller specific parameters

- Get a descriptor for transaction

- Submit the transaction

- Issue pending requests and wait for callback notification

The details of these operations are:

1. Allocate a DMA slave channel

   Channel allocation is slightly different in the slave DMA context:
   client drivers typically need a channel from a particular DMA
   controller only, and in some cases even a specific channel is desired.
   To request a channel, the dma_request_chan() API is used.

   Interface:

   .. code-block:: c

      struct dma_chan *dma_request_chan(struct device *dev, const char *name);

   This will find and return the ``name`` DMA channel associated with the
   ``dev`` device. The association is done via DT, ACPI or a board file based
   dma_slave_map matching table.

   A channel allocated via this interface is exclusive to the caller,
   until dma_release_channel() is called.

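   As an example, a driver with a DT/ACPI-provided "tx" channel might do the
   following (a minimal sketch; the channel name and the surrounding error
   handling are illustrative):

   .. code-block:: c

      struct dma_chan *chan;

      chan = dma_request_chan(dev, "tx");
      if (IS_ERR(chan))
              return PTR_ERR(chan);

      /* ... configure and use the channel ... */

      dma_release_channel(chan);
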
2. Set slave and controller specific parameters

   The next step is always to pass some specific information to the DMA
   driver. Most of the generic information which a slave DMA can use
   is in struct dma_slave_config. This allows the clients to specify
   DMA direction, DMA addresses, bus widths, DMA burst lengths etc.
   for the peripheral.

   If some DMA controllers have more parameters to be sent, then they
   should try to embed struct dma_slave_config in their controller
   specific structure. That gives flexibility to clients to pass more
   parameters, if required.

   Interface:

   .. code-block:: c

      int dmaengine_slave_config(struct dma_chan *chan,
			struct dma_slave_config *config)

   Please see the dma_slave_config structure definition in dmaengine.h
   for a detailed explanation of the struct members. Please note
   that the 'direction' member will be going away as it duplicates the
   direction given in the prepare call.

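   As an illustration, a client preparing memory-to-device transfers to a
   peripheral FIFO might fill the config like this (a sketch; the FIFO
   address, bus width and burst size are hypothetical values for some
   peripheral):

   .. code-block:: c

      struct dma_slave_config cfg = { };
      int ret;

      /* 'direction' is deliberately left unset; see the note above */
      cfg.dst_addr = fifo_dma_addr;	/* bus address of the peripheral FIFO */
      cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
      cfg.dst_maxburst = 8;

      ret = dmaengine_slave_config(chan, &cfg);
      if (ret)
              /* the channel cannot be configured as requested */
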
3. Get a descriptor for transaction

  For slave usage the various modes of slave transfers supported by the
  DMA-engine are:

  - slave_sg: DMA a list of scatter gather buffers from/to a peripheral

  - dma_cyclic: Perform a cyclic DMA operation from/to a peripheral till the
    operation is explicitly stopped.

  - interleaved_dma: This is common to Slave as well as M2M clients. For slave
    usage the address of the device's FIFO may already be known to the driver.
    Various types of operations could be expressed by setting
    appropriate values to the 'dma_interleaved_template' members. Cyclic
    interleaved DMA transfers are also possible if supported by the channel by
    setting the DMA_PREP_REPEAT transfer flag.

  A non-NULL return of this transfer API represents a "descriptor" for
  the given transaction.

  Interface:

  .. code-block:: c

     struct dma_async_tx_descriptor *dmaengine_prep_slave_sg(
		struct dma_chan *chan, struct scatterlist *sgl,
		unsigned int sg_len, enum dma_transfer_direction direction,
		unsigned long flags);

     struct dma_async_tx_descriptor *dmaengine_prep_dma_cyclic(
		struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
		size_t period_len, enum dma_transfer_direction direction,
		unsigned long flags);

     struct dma_async_tx_descriptor *dmaengine_prep_interleaved_dma(
		struct dma_chan *chan, struct dma_interleaved_template *xt,
		unsigned long flags);

  The peripheral driver is expected to have mapped the scatterlist for
  the DMA operation prior to calling dmaengine_prep_slave_sg(), and must
  keep the scatterlist mapped until the DMA operation has completed.
  The scatterlist must be mapped using the DMA struct device.
  If a mapping needs to be synchronized later, dma_sync_*_for_*() must be
  called using the DMA struct device, too.
  So, normal setup should look like this:

  .. code-block:: c

     /* dma_map_sg() takes an enum dma_data_direction matching the transfer */
     nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, dir);
     if (nr_sg == 0)
             /* error */

     desc = dmaengine_prep_slave_sg(chan, sgl, nr_sg, direction, flags);

  Once a descriptor has been obtained, the callback information can be
  added and the descriptor must then be submitted. Some DMA engine
  drivers may hold a spinlock between a successful preparation and
  submission, so it is important that these two operations are closely
  paired.

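  For example, the pairing might look like this (a sketch; dma_tx_done and
  the context pointer are hypothetical client code):

  .. code-block:: c

     desc = dmaengine_prep_slave_sg(chan, sgl, nr_sg, direction, flags);
     if (!desc)
             /* error */

     desc->callback = dma_tx_done;	/* optional completion callback */
     desc->callback_param = ctx;

     cookie = dmaengine_submit(desc);
     if (dma_submit_error(cookie))
             /* error */
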
  .. note::

     Although the async_tx API specifies that completion callback
     routines cannot submit any new operations, this is not the
     case for slave/cyclic DMA.

     For slave DMA, the subsequent transaction may not be available
     for submission prior to the callback function being invoked, so
     slave DMA callbacks are permitted to prepare and submit a new
     transaction.

     For cyclic DMA, a callback function may wish to terminate the
     DMA via dmaengine_terminate_async().

     Therefore, it is important that DMA engine drivers drop any
     locks before calling the callback function which may cause a
     deadlock.

     Note that callbacks will always be invoked from the DMA
     engine's tasklet, never from interrupt context.

  **Optional: per descriptor metadata**

  DMAengine provides two ways of supporting per-descriptor metadata.

  DESC_METADATA_CLIENT

    The metadata buffer is allocated/provided by the client driver and it is
    attached to the descriptor.

  .. code-block:: c

     int dmaengine_desc_attach_metadata(struct dma_async_tx_descriptor *desc,
				   void *data, size_t len);

  DESC_METADATA_ENGINE

    The metadata buffer is allocated/managed by the DMA driver. The client
    driver can ask for the pointer, maximum size and the currently used size of
    the metadata and can directly update or read it.

    Because the DMA driver manages the memory area containing the metadata,
    clients must make sure that they do not try to access or get the pointer
    after their transfer completion callback has run for the descriptor.
    If no completion callback has been defined for the transfer, then the
    metadata must not be accessed after issue_pending.
    In other words: if the aim is to read back metadata after the transfer is
    completed, then the client must use a completion callback.

  .. code-block:: c

     void *dmaengine_desc_get_metadata_ptr(struct dma_async_tx_descriptor *desc,
		size_t *payload_len, size_t *max_len);

     int dmaengine_desc_set_metadata_len(struct dma_async_tx_descriptor *desc,
		size_t payload_len);

  Client drivers can query if a given mode is supported with:

  .. code-block:: c

     bool dmaengine_is_metadata_mode_supported(struct dma_chan *chan,
		enum dma_desc_metadata_mode mode);

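  For example, a client intending to use client-managed metadata might verify
  support up front (a minimal sketch):

  .. code-block:: c

     if (!dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_CLIENT))
             /* fall back to operating without metadata */
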
  Depending on the mode in use, client drivers must follow a different flow.

  DESC_METADATA_CLIENT

    - DMA_MEM_TO_DEV / DMA_MEM_TO_MEM:

      1. prepare the descriptor (dmaengine_prep_*) and
         construct the metadata in the client's buffer
      2. use dmaengine_desc_attach_metadata() to attach the buffer to the
         descriptor
      3. submit the transfer

    - DMA_DEV_TO_MEM:

      1. prepare the descriptor (dmaengine_prep_*)
      2. use dmaengine_desc_attach_metadata() to attach the buffer to the
         descriptor
      3. submit the transfer
      4. when the transfer is completed, the metadata should be available in the
         attached buffer

  DESC_METADATA_ENGINE

    - DMA_MEM_TO_DEV / DMA_MEM_TO_MEM:

      1. prepare the descriptor (dmaengine_prep_*)
      2. use dmaengine_desc_get_metadata_ptr() to get the pointer to the
         engine's metadata area
      3. update the metadata at the pointer
      4. use dmaengine_desc_set_metadata_len() to tell the DMA engine the
         amount of data the client has placed into the metadata buffer
      5. submit the transfer

    - DMA_DEV_TO_MEM:

      1. prepare the descriptor (dmaengine_prep_*)
      2. submit the transfer
      3. on transfer completion, use dmaengine_desc_get_metadata_ptr() to get
         the pointer to the engine's metadata area
      4. read out the metadata from the pointer

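  As an example, the DESC_METADATA_CLIENT flow for DMA_MEM_TO_DEV could look
  like this (a sketch; meta_buf and meta_len are illustrative client-side
  names):

  .. code-block:: c

     desc = dmaengine_prep_slave_sg(chan, sgl, nr_sg, DMA_MEM_TO_DEV, flags);
     if (!desc)
             /* error */

     /* construct the metadata in the client's buffer, then attach it */
     ret = dmaengine_desc_attach_metadata(desc, meta_buf, meta_len);
     if (ret)
             /* error */

     cookie = dmaengine_submit(desc);
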
  .. note::

     When DESC_METADATA_ENGINE mode is used the metadata area for the descriptor
     is no longer valid after the transfer has been completed (valid up to the
     point when the completion callback returns, if used).

     Mixed use of DESC_METADATA_CLIENT / DESC_METADATA_ENGINE is not allowed;
     client drivers must use one of the two modes per descriptor.

4. Submit the transaction

   Once the descriptor has been prepared and the callback information
   added, it must be placed on the DMA engine driver's pending queue.

   Interface:

   .. code-block:: c

      dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)

   This returns a cookie that can be used to check the progress of DMA engine
   activity via other DMA engine calls not covered in this document.

   dmaengine_submit() will not start the DMA operation; it merely adds
   it to the pending queue. For this, see step 5, dma_async_issue_pending.

   .. note::

      After calling ``dmaengine_submit()`` the submitted transfer descriptor
      (``struct dma_async_tx_descriptor``) belongs to the DMA engine.
      Consequently, the client must consider the pointer to that descriptor
      invalid.

5. Issue pending DMA requests and wait for callback notification

   The transactions in the pending queue can be activated by calling the
   issue_pending API. If the channel is idle then the first transaction in
   the queue is started and subsequent ones are queued up.

   On completion of each DMA operation, the next in queue is started and
   a tasklet is triggered. The tasklet will then call the client driver's
   completion callback routine for notification, if set.

   Interface:

   .. code-block:: c

      void dma_async_issue_pending(struct dma_chan *chan);

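   Putting the last steps together, a client typically submits, issues and
   then waits for its completion callback (a sketch; my_dma_done is a
   hypothetical struct completion that the client's callback signals):

   .. code-block:: c

      cookie = dmaengine_submit(desc);
      if (dma_submit_error(cookie))
              /* error */

      dma_async_issue_pending(chan);
      wait_for_completion(&my_dma_done);	/* complete()d in the callback */
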
Further APIs
------------

1. Terminate APIs

   .. code-block:: c

      int dmaengine_terminate_sync(struct dma_chan *chan)
      int dmaengine_terminate_async(struct dma_chan *chan)
      int dmaengine_terminate_all(struct dma_chan *chan) /* DEPRECATED */

   This causes all activity for the DMA channel to be stopped, and may
   discard data in the DMA FIFO which hasn't been fully transferred.
   No callback functions will be called for any incomplete transfers.

   Two variants of this function are available.

   dmaengine_terminate_async() might not wait until the DMA has been fully
   stopped or until any running complete callbacks have finished. But it is
   possible to call dmaengine_terminate_async() from atomic context or from
   within a complete callback. dmaengine_synchronize() must be called before it
   is safe to free the memory accessed by the DMA transfer or free resources
   accessed from within the complete callback.

   dmaengine_terminate_sync() will wait for the transfer and any running
   complete callbacks to finish before it returns. But the function must not be
   called from atomic context or from within a complete callback.

   dmaengine_terminate_all() is deprecated and should not be used in new code.

2. Pause API

   .. code-block:: c

      int dmaengine_pause(struct dma_chan *chan)

   This pauses activity on the DMA channel without data loss.

3. Resume API

   .. code-block:: c

       int dmaengine_resume(struct dma_chan *chan)

   Resume a previously paused DMA channel. It is invalid to resume a
   channel which is not currently paused.

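   For instance, a client might bracket a peripheral reconfiguration with a
   pause/resume pair (a sketch; not every channel supports pause, so the
   return value must be checked):

   .. code-block:: c

      if (dmaengine_pause(chan) == 0) {
              /* ... reconfigure the peripheral ... */
              dmaengine_resume(chan);
      }
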
4. Check Txn complete

   .. code-block:: c

      enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
		dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)

   This can be used to check the status of the channel. Please see
   the documentation in include/linux/dmaengine.h for a more complete
   description of this API.

   This can be used in conjunction with dma_async_is_complete() and
   the cookie returned from dmaengine_submit() to check for
   completion of a specific DMA transaction.

   .. note::

      Not all DMA engine drivers can return reliable information for
      a running DMA channel. It is recommended that DMA engine users
      pause or stop (via dmaengine_terminate_all()) the channel before
      using this API.

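   For example (a sketch; 'cookie' is the value returned earlier by
   dmaengine_submit()):

   .. code-block:: c

      dma_cookie_t last, used;
      enum dma_status status;

      status = dma_async_is_tx_complete(chan, cookie, &last, &used);
      if (status == DMA_COMPLETE)
              /* the transaction identified by 'cookie' has finished */
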
5. Synchronize termination API

   .. code-block:: c

      void dmaengine_synchronize(struct dma_chan *chan)

   Synchronize the termination of the DMA channel to the current context.

   This function should be used after dmaengine_terminate_async() to synchronize
   the termination of the DMA channel to the current context. The function will
   wait for the transfer and any running complete callbacks to finish before it
   returns.

   If dmaengine_terminate_async() is used to stop the DMA channel this function
   must be called before it is safe to free memory accessed by previously
   submitted descriptors or to free any resources accessed within the complete
   callback of previously submitted descriptors.

   The behavior of this function is undefined if dma_async_issue_pending() has
   been called between dmaengine_terminate_async() and this function.
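
   A typical non-atomic teardown sequence therefore looks like this (a
   sketch):

   .. code-block:: c

      dmaengine_terminate_async(chan);
      /* ... */
      dmaengine_synchronize(chan);
      /* now it is safe to free buffers used by the terminated transfers */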