============================================
Dynamic DMA mapping using the generic device
============================================

:Author: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API.  For a more gentle introduction
to the API (and actual examples), see :doc:`/core-api/dma-api-howto`.

This API is split into two pieces.  Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines.  Unless you know that your driver absolutely has to support
non-consistent platforms (this is usually only legacy platforms) you
should only use the API described in part I.

Part I - dma_API
----------------

To get the dma_API, you must #include <linux/dma-mapping.h>.  This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform.  It can be
given to a device to use as a DMA source or target.  A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

::

	void *
	dma_alloc_coherent(struct device *dev, size_t size,
			   dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects.  (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).
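
As a brief illustration, a driver's probe path might allocate a descriptor
ring with dma_alloc_coherent() along the following lines.  This is a minimal
sketch: the foo_priv structure and RING_SIZE constant are hypothetical, and
dma_set_mask_and_coherent() is covered in Part Ic below::

	#include <linux/dma-mapping.h>

	#define RING_SIZE 4096	/* hypothetical descriptor ring size */

	static int foo_setup_dma(struct device *dev, struct foo_priv *priv)
	{
		/* Declare the device's addressing capability first. */
		if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
			return -EIO;

		priv->ring = dma_alloc_coherent(dev, RING_SIZE,
						&priv->ring_dma, GFP_KERNEL);
		if (!priv->ring)
			return -ENOMEM;

		/* priv->ring_dma can now be programmed into the device. */
		return 0;
	}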

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the ``GFP_`` flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

::

	void
	dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
			  dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated.  dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent().  cpu_addr must be the virtual address returned by
dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.


Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_API, you must #include <linux/dmapool.h>

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers.  Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools.  These work
much like a struct kmem_cache, except that they use the DMA-coherent
allocator, not __get_free_pages().  Also, they understand common hardware
constraints for alignment, like queue heads needing to be aligned on
N-byte boundaries.


::

	struct dma_pool *
	dma_pool_create(const char *name, struct device *dev,
			size_t size, size_t align, size_t alloc);

dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device.  It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent().  The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two).  If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.

::

	void *
	dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
			dma_addr_t *handle)

Wraps dma_pool_alloc() and also zeroes the returned memory if the
allocation attempt succeeded.
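
Taken together with the allocation and free routines described next, a
typical pool lifetime might look like this.  A minimal sketch: the pool
parameters and the foo_desc structure are hypothetical::

	#include <linux/dmapool.h>

	struct dma_pool *pool;
	struct foo_desc *desc;	/* hypothetical 64-byte descriptor */
	dma_addr_t desc_dma;

	/* 64-byte objects, 64-byte aligned, no boundary restriction. */
	pool = dma_pool_create("foo-desc", dev, 64, 64, 0);
	if (!pool)
		return -ENOMEM;

	desc = dma_pool_zalloc(pool, GFP_KERNEL, &desc_dma);
	if (!desc) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}

	/* ... hand desc_dma to the device, use desc from the CPU ... */

	dma_pool_free(pool, desc, desc_dma);
	dma_pool_destroy(pool);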

::

	void *
	dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
		       dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time.  Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking.  Like dma_alloc_coherent(), this returns two values: an
address usable by the CPU, and the DMA address usable by the pool's
device.

::

	void
	dma_pool_free(struct dma_pool *pool, void *vaddr,
		      dma_addr_t addr);

This puts memory back into the pool.  The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.

::

	void
	dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool.  It must be
called in a context which can sleep.  Make sure you've freed all allocated
memory back to the pool before you destroy it.


Part Ic - DMA addressing limitations
------------------------------------

::

	int
	dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.

::

	int
	dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

	int
	dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

	u64
	dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently.  Usually this means the returned mask
is the minimum required to cover all of memory.
Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask.  If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.

::

	size_t
	dma_max_mapping_size(struct device *dev);

Returns the maximum size of a mapping for the device.  The size parameter
of the mapping functions like dma_map_single(), dma_map_page() and
others should not be larger than the returned value.

::

	bool
	dma_need_sync(struct device *dev, dma_addr_t dma_addr);

Returns %true if dma_sync_single_for_{device,cpu} calls are required to
transfer memory ownership.  Returns %false if those calls can be skipped.

::

	unsigned long
	dma_get_merge_boundary(struct device *dev);

Returns the DMA merge boundary.  If the device cannot merge any of the
DMA address segments, the function returns 0.

Part Id - Streaming DMA mappings
--------------------------------

::

	dma_addr_t
	dma_map_single(struct device *dev, void *cpu_addr, size_t size,
		       enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.

The direction for both APIs may be converted freely by casting.
However the dma_API uses a strongly typed enumerator for its
direction:

======================= =============================================
DMA_NONE                no direction (used for debugging)
DMA_TO_DEVICE           data is going from the memory to the device
DMA_FROM_DEVICE         data is coming from the device to the memory
DMA_BIDIRECTIONAL       direction isn't known
======================= =============================================

.. note::

	Not all memory regions in a machine can be mapped by this API.
	Further, contiguous kernel virtual space may not be contiguous as
	physical memory.  Since this API does not provide any scatter/gather
	capability, it will fail if the user tries to map a non-physically
	contiguous piece of memory.  For this reason, memory to be mapped by
	this API should be obtained from sources which guarantee it to be
	physically contiguous (like kmalloc).

	Further, the DMA address of the memory must be within the
	dma_mask of the device (the dma_mask is a bit mask of the
	addressable region for the device, i.e., if the DMA address of
	the memory ANDed with the dma_mask is still equal to the DMA
	address, then the device can perform DMA to the memory).  To
	ensure that the memory allocated by kmalloc is within the dma_mask,
	the driver may specify various platform-dependent flags to restrict
	the DMA address range of the allocation (e.g., on x86, GFP_DMA
	guarantees that the allocation lies within the first 16MB of
	available DMA addresses, as required by ISA devices).

	Note also that the above constraints on physical contiguity and
	dma_mask may not apply if the platform has an IOMMU (a device which
	maps an I/O DMA address to a physical memory address).  However, to be
	portable, device driver writers may *not* assume that such an IOMMU
	exists.

.. warning::

	Memory coherency operates at a granularity called the cache
	line width.  In order for memory mapped by this API to operate
	correctly, the mapped region must begin exactly on a cache line
	boundary and end exactly on one (to prevent two separately mapped
	regions from sharing a single cache line).  Since the cache line size
	may not be known at compile time, the API will not enforce this
	requirement.  Therefore, it is recommended that driver writers who
	don't take special care to determine the cache line size at run time
	only map virtual regions that begin and end on page boundaries (which
	are guaranteed also to be cache line boundaries).

	DMA_TO_DEVICE synchronisation must be done after the last modification
	of the memory region by the software and before it is handed off to
	the device.  Once this primitive is used, memory covered by this
	primitive should be treated as read-only by the device.  If the device
	may write to it at any point, it should be DMA_BIDIRECTIONAL (see
	below).

	DMA_FROM_DEVICE synchronisation must be done before the driver
	accesses data that may be changed by the device.  This memory should
	be treated as read-only by the driver.  If the driver needs to write
	to it at any point, it should be DMA_BIDIRECTIONAL (see below).

	DMA_BIDIRECTIONAL requires special handling: it means that the driver
	isn't sure if the memory was modified before being handed off to the
	device and also isn't sure if the device will also modify it.
	Thus,
	you must always sync bidirectional memory twice: once before the
	memory is handed off to the device (to make sure all memory changes
	are flushed from the processor) and once before the data may be
	accessed after being used by the device (to make sure any processor
	cache lines are updated with data that the device may have changed).

::

	void
	dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
			 enum dma_data_direction direction)

Unmaps the region previously mapped.  All the parameters must be
identical to those passed in to (and returned by) the mapping API.

::

	dma_addr_t
	dma_map_page(struct device *dev, struct page *page,
		     unsigned long offset, size_t size,
		     enum dma_data_direction direction)

	void
	dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
		       enum dma_data_direction direction)

API for mapping and unmapping for pages.  All the notes and warnings
for the other mapping APIs apply here.  Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache line width is.

::

	dma_addr_t
	dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
			 enum dma_data_direction dir, unsigned long attrs)

	void
	dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
			   enum dma_data_direction dir, unsigned long attrs)

API for mapping and unmapping for MMIO resources.  All the notes and
warnings for the other mapping APIs apply here.  The API should only be
used to map device MMIO resources, mapping of RAM is not permitted.

::

	int
	dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
will fail to create a mapping.  A driver can check for these errors by testing
the returned DMA address with dma_mapping_error().  A non-zero return value
means the mapping could not be created and the driver should take appropriate
action (e.g. reduce current DMA mapping usage or delay and try again later).
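
The common streaming-DMA pattern, then, is map, check, use, unmap.  A
minimal sketch, where buf, len and the transmit step are hypothetical::

	dma_addr_t dma_addr;

	dma_addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_addr))
		return -ENOMEM;	/* or back off and retry later */

	/* ... tell the device to read len bytes at dma_addr ... */

	dma_unmap_single(dev, dma_addr, len, DMA_TO_DEVICE);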

::

	int
	dma_map_sg(struct device *dev, struct scatterlist *sg,
		   int nents, enum dma_data_direction direction)

Returns: the number of DMA address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that an sg list cannot be mapped again once it has been mapped.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail.  When it
does, 0 is returned and a driver must take appropriate action.  It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this::

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to.  On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

::

	void
	dma_unmap_sg(struct device *dev, struct scatterlist *sg,
		     int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed into the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.
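
In other words, the mapped count is used only for walking the DMA segments;
the unmap call takes the original entry count.  A minimal sketch continuing
the example above::

	int count;

	count = dma_map_sg(dev, sglist, nents, direction);
	if (!count)
		return -EIO;	/* must not just carry on */

	/* ... program the device using count segments ... */

	/* Unmap with the original nents, not count. */
	dma_unmap_sg(dev, sglist, nents, direction);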

::

	void
	dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
				size_t size,
				enum dma_data_direction direction)

	void
	dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
				   size_t size,
				   enum dma_data_direction direction)

	void
	dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
			    int nents,
			    enum dma_data_direction direction)

	void
	dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
			       int nents,
			       enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device.  With the sync_sg API, all the parameters must be the same
as those passed into the scatter/gather mapping API.  With the sync_single
API, you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.


.. note::

	You must do this:

	- Before reading values that have been written by DMA from the device
	  (use the DMA_FROM_DEVICE direction)
	- After writing values that will be written to the device using DMA
	  (use the DMA_TO_DEVICE direction)
	- Before *and* after handing memory to the device if the memory is
	  DMA_BIDIRECTIONAL

See also dma_map_single().

::

	dma_addr_t
	dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
			     enum dma_data_direction dir,
			     unsigned long attrs)

	void
	dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
			       size_t size, enum dma_data_direction dir,
			       unsigned long attrs)

	int
	dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
			 int nents, enum dma_data_direction dir,
			 unsigned long attrs)

	void
	dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
			   int nents, enum dma_data_direction dir,
			   unsigned long attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
dma_attrs.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in :doc:`/core-api/dma-attributes`.

If dma_attrs are 0, the semantics of each of these functions
is identical to those of the corresponding function
without the _attrs suffix.  As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the ``*_attrs`` functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA::

	#include <linux/dma-mapping.h>
	/* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
	 * documented in Documentation/core-api/dma-attributes.rst */
	...

	unsigned long attr = 0;
	attr |= DMA_ATTR_FOO;
	....
	n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attr);
	....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.::

	dma_addr_t whizco_dma_map_single_attrs(struct device *dev, void *cpu_addr,
					       size_t size, enum dma_data_direction dir,
					       unsigned long attrs)
	{
		....
		if (attrs & DMA_ATTR_FOO)
			/* twizzle the frobnozzle */
		....
	}


Part II - Non-coherent DMA allocations
--------------------------------------

These APIs allow allocating pages that are guaranteed to be DMA addressable
by the passed in device, but which need explicit management of memory
ownership for the kernel vs the device.

If you don't understand how cache line coherency works between a processor and
an I/O device, you should not be using this part of the API.

::

	void *
	dma_alloc_noncoherent(struct device *dev, size_t size,
			      dma_addr_t *dma_handle, enum dma_data_direction dir,
			      gfp_t gfp)

This routine allocates a region of <size> bytes of non-coherent memory.  It
returns a pointer to the allocated region (in the processor's virtual address
space) or NULL if the allocation failed.  The returned memory may or may not
be in the kernel direct mapping.  Drivers must not call virt_to_page on
the returned memory region.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

The dir parameter specifies whether data is read and/or written by the device,
see dma_map_single() for details.

The gfp parameter allows the caller to specify the ``GFP_`` flags (see
kmalloc()) for the allocation, but rejects flags used to specify a memory
zone such as GFP_DMA or GFP_HIGHMEM.

Before giving the memory to the device, dma_sync_single_for_device() needs
to be called, and before reading memory written by the device,
dma_sync_single_for_cpu(), just like for streaming DMA mappings that are
reused.

::

	void
	dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
			     dma_addr_t dma_handle, enum dma_data_direction dir)

Free a region of memory previously allocated using dma_alloc_noncoherent().
dev, size, dma_handle and dir must all be the same as those passed into
dma_alloc_noncoherent().  cpu_addr must be the virtual address returned by
dma_alloc_noncoherent().

::

	struct page *
	dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle,
			enum dma_data_direction dir, gfp_t gfp)

This routine allocates a region of <size> bytes of non-coherent memory.  It
returns a pointer to the first struct page for the region, or NULL if the
allocation failed.  The resulting struct page can be used for everything a
struct page is suitable for.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

The dir parameter specifies whether data is read and/or written by the device,
see dma_map_single() for details.

The gfp parameter allows the caller to specify the ``GFP_`` flags (see
kmalloc()) for the allocation, but rejects flags used to specify a memory
zone such as GFP_DMA or GFP_HIGHMEM.

Before giving the memory to the device, dma_sync_single_for_device() needs
to be called, and before reading memory written by the device,
dma_sync_single_for_cpu(), just like for streaming DMA mappings that are
reused.

::

	void
	dma_free_pages(struct device *dev, size_t size, struct page *page,
		       dma_addr_t dma_handle, enum dma_data_direction dir)

Free a region of memory previously allocated using dma_alloc_pages().
dev, size, dma_handle and dir must all be the same as those passed into
dma_alloc_pages().  page must be the pointer returned by
dma_alloc_pages().
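
Putting the ownership rules together, a non-coherent buffer used for
device-to-CPU transfers might be handled like this.  A minimal sketch:
BUF_SIZE and the receive step are hypothetical::

	#define BUF_SIZE 8192	/* hypothetical buffer size */

	void *buf;
	dma_addr_t dma_handle;

	buf = dma_alloc_noncoherent(dev, BUF_SIZE, &dma_handle,
				    DMA_FROM_DEVICE, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* Hand the buffer to the device... */
	dma_sync_single_for_device(dev, dma_handle, BUF_SIZE, DMA_FROM_DEVICE);
	/* ... device DMAs data in and signals completion ... */

	/* ... and take it back before the CPU reads it. */
	dma_sync_single_for_cpu(dev, dma_handle, BUF_SIZE, DMA_FROM_DEVICE);
	/* process(buf); */

	dma_free_noncoherent(dev, BUF_SIZE, buf, dma_handle, DMA_FROM_DEVICE);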

::

	int
	dma_get_cache_alignment(void)

Returns the processor cache alignment.  This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

.. note::

	This API may return a number *larger* than the actual cache
	line, but it will guarantee that one or more cache lines fit exactly
	into the width returned by this call.  It will also always be a power
	of two for easy alignment.


Part III - Debugging driver use of the DMA-API
----------------------------------------------

The DMA-API as described above has some constraints.  DMA addresses must be
released with the corresponding function with the same size for example.  With
the advent of hardware IOMMUs it becomes more and more important that drivers
do not violate those constraints.  In the worst case such a violation can
result in data corruption up to and including destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking code can
be compiled into the kernel which will tell the developer about those
violations.  If your architecture supports it, you can select the "Enable
debugging of DMA-API usage" option in your kernel configuration.  Enabling this
option has a performance impact.  Do not enable it in production kernels.

If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device.  If this
code detects an error it prints a warning message with some details into
your kernel log.
An example warning message may look like this::

	WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
		check_unmap+0x203/0x490()
	Hardware name:
	forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
		function [device address=0x00000000640444be] [size=66 bytes] [mapped as
		single] [unmapped as page]
	Modules linked in: nfsd exportfs bridge stp llc r8169
	Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
	Call Trace:
	<IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
	[<ffffffff80647b70>] _spin_unlock+0x10/0x30
	[<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
	[<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
	[<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
	[<ffffffff80252f96>] queue_work+0x56/0x60
	[<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
	[<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
	[<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
	[<ffffffff80235177>] find_busiest_group+0x207/0x8a0
	[<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
	[<ffffffff803c7ea3>] check_unmap+0x203/0x490
	[<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
	[<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
	[<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
	[<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
	[<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
	[<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
	[<ffffffff8020c093>] ret_from_intr+0x0/0xa
	<EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a stacktrace
of the DMA-API call which caused this warning.

By default, only the first error will result in a warning message.  All other
errors will only be silently counted.  This limitation exists to prevent the
code from flooding your kernel log.  To support debugging a device driver,
this can be disabled via debugfs.  See the debugfs interface documentation
below for details.

The debugfs directory for the DMA-API debugging code is called dma-api/.  In
this directory the following files can currently be found:

=============================== ===============================================
dma-api/all_errors              This file contains a numeric value.  If this
                                value is not equal to zero the debugging code
                                will print a warning for every error it finds
                                into the kernel log.  Be careful with this
                                option, as it can easily flood your logs.

dma-api/disabled                This read-only file contains the character 'Y'
                                if the debugging code is disabled.  This can
                                happen when it runs out of memory or if it was
                                disabled at boot time.

dma-api/dump                    This read-only file contains current DMA
                                mappings.

dma-api/error_count             This file is read-only and shows the total
                                number of errors found.

dma-api/num_errors              The number in this file shows how many
                                warnings will be printed to the kernel log
                                before it stops.  This number is initialized to
                                one at system boot and can be set by writing
                                into this file.

dma-api/min_free_entries        This read-only file can be read to get the
                                minimum number of free dma_debug_entries the
                                allocator has ever seen.  If this value goes
                                down to zero the code will attempt to increase
                                nr_total_entries to compensate.

dma-api/num_free_entries        The current number of free dma_debug_entries
                                in the allocator.

dma-api/nr_total_entries        The total number of dma_debug_entries in the
                                allocator, both free and used.

dma-api/driver_filter           You can write a name of a driver into this file
                                to limit the debug output to requests from that
                                particular driver.  Write an empty string to
                                that file to disable the filter and see
                                all errors again.
=============================== ===============================================

If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter.  This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime.  You have to reboot to do
so.

If you want to see debug messages only for a special device driver you can
specify the dma_debug_driver=<drivername> parameter.  This will enable the
driver filter at boot time.  The debug code will only print errors for that
driver afterwards.  This filter can be disabled or changed later using debugfs.

When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries and was unable to allocate more on-demand.  65536
entries are preallocated at boot - if this is too low for you, boot with
'dma_debug_entries=<your_desired_number>' to override the default.  Note
that the code allocates entries in batches, so the exact number of
preallocated entries may be greater than the actual number requested.
The
code will print to the kernel log each time it has dynamically allocated
as many entries as were initially preallocated.  This is to indicate that a
larger preallocation size may be appropriate, or if it happens continually
that a driver may be leaking mappings.

::

	void
	debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

The dma-debug interface debug_dma_mapping_error() helps to debug drivers
that fail to check for DMA mapping errors on addresses returned by the
dma_map_single() and dma_map_page() interfaces.  This interface clears a
flag set by debug_dma_map_page() to indicate that dma_mapping_error() has
been called by the driver.  When the driver does the unmap,
debug_dma_unmap() checks the flag and, if it is still set, prints a warning
message that includes the call trace leading up to the unmap.  This
interface can be called from dma_mapping_error() routines to enable DMA
mapping error check debugging.
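
In practice this means the usual error-checking pattern shown earlier also
satisfies dma-debug: calling dma_mapping_error() on every returned address
clears the flag before the unmap is checked.  A minimal sketch, with buf and
len hypothetical::

	dma_addr_t dma_addr;

	dma_addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_addr))	/* clears the dma-debug flag */
		return -ENOMEM;

	/* ... */

	dma_unmap_single(dev, dma_addr, len, DMA_TO_DEVICE);	/* no warning */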