=====================
DRM Memory Management
=====================

Modern Linux systems require large amounts of graphics memory to store
frame buffers, textures, vertices and other graphics-related data. Given
the very dynamic nature of much of that data, managing graphics memory
efficiently is crucial for the graphics stack and plays a central
role in the DRM infrastructure.

The DRM core includes two memory managers, namely Translation Table Maps
(TTM) and Graphics Execution Manager (GEM). TTM was the first DRM memory
manager to be developed and tried to be a one-size-fits-all
solution. It provides a single userspace API to accommodate the needs of
all hardware, supporting both Unified Memory Architecture (UMA) devices
and devices with dedicated video RAM (i.e. most discrete video cards).
This resulted in a large, complex piece of code that turned out to be
hard to use for driver development.

GEM started as an Intel-sponsored project in reaction to TTM's
complexity. Its design philosophy is completely different: instead of
providing a solution to every graphics memory-related problem, GEM
identified common code between drivers and created a support library to
share it. GEM has simpler initialization and execution requirements than
TTM, but has no video RAM management capabilities and is thus limited to
UMA devices.

The Translation Table Manager (TTM)
===================================

TTM design background and information belongs here.

TTM initialization
------------------

.. warning::

   This section is outdated.

Drivers wishing to support TTM must pass a filled :c:type:`ttm_bo_driver
<ttm_bo_driver>` structure to ttm_bo_device_init, together with an
initialized global reference to the memory manager.  The ttm_bo_driver
structure contains several fields with function pointers for
initializing the TTM, allocating and freeing memory, waiting for command
completion and fence synchronization, and memory migration.

The :c:type:`struct drm_global_reference <drm_global_reference>` is made
up of several fields:

.. code-block:: c

              struct drm_global_reference {
                      enum ttm_global_types global_type;
                      size_t size;
                      void *object;
                      int (*init) (struct drm_global_reference *);
                      void (*release) (struct drm_global_reference *);
              };


There should be one global reference structure for your memory manager
as a whole, and there will be others for each object created by the
memory manager at runtime. Your global TTM should have a type of
TTM_GLOBAL_TTM_MEM. The size field for the global object should be
sizeof(struct ttm_mem_global), and the init and release hooks should
point at your driver-specific init and release routines, which probably
eventually call ttm_mem_global_init and ttm_mem_global_release,
respectively.

Once your global TTM accounting structure is set up and initialized by
calling ttm_global_item_ref() on it, you need to create a buffer
object TTM to provide a pool for buffer object allocation by clients and
the kernel itself. The type of this object should be
TTM_GLOBAL_TTM_BO, and its size should be sizeof(struct
ttm_bo_global). Again, driver-specific init and release functions may
be provided, likely eventually calling ttm_bo_global_ref_init() and
ttm_bo_global_ref_release(), respectively. Also, like the previous
object, ttm_global_item_ref() is used to create an initial reference
count for the TTM, which will call your initialization function.

See the radeon_ttm.c file for an example of usage.
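
The init-on-first-reference behaviour described above can be illustrated
with a minimal userspace sketch. All types and functions below are
hypothetical stand-ins modelled on this (outdated) text, not the real
kernel API; the mock only shows how the init hook runs on the first
reference and the release hook on the last:

.. code-block:: c

	/*
	 * Userspace mock of the global-reference pattern; mock_* names are
	 * invented stand-ins for ttm_global_item_ref() and friends.
	 */
	#include <assert.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct mock_global_reference {
		size_t size;
		void *object;
		int refcount;
		int (*init)(struct mock_global_reference *);
		void (*release)(struct mock_global_reference *);
	};

	/* Simulates ttm_global_item_ref(): first reference runs the init hook. */
	static int mock_global_item_ref(struct mock_global_reference *ref)
	{
		if (ref->refcount++ == 0) {
			ref->object = calloc(1, ref->size);
			return ref->init(ref);
		}
		return 0;
	}

	/* Last unref runs the release hook and frees the backing object. */
	static void mock_global_item_unref(struct mock_global_reference *ref)
	{
		if (--ref->refcount == 0) {
			ref->release(ref);
			free(ref->object);
			ref->object = NULL;
		}
	}

	/* Driver-specific hooks, standing in for ttm_mem_global_init() etc. */
	static int driver_mem_global_init(struct mock_global_reference *ref)
	{
		printf("mem global initialized (%zu bytes)\n", ref->size);
		return 0;
	}

	static void driver_mem_global_release(struct mock_global_reference *ref)
	{
		(void)ref;
		printf("mem global released\n");
	}

	int main(void)
	{
		struct mock_global_reference mem_ref = {
			.size = 64, /* stand-in for sizeof(struct ttm_mem_global) */
			.init = driver_mem_global_init,
			.release = driver_mem_global_release,
		};

		assert(mock_global_item_ref(&mem_ref) == 0);  /* runs init */
		assert(mock_global_item_ref(&mem_ref) == 0);  /* only refcounts */
		assert(mem_ref.refcount == 2);

		mock_global_item_unref(&mem_ref);
		mock_global_item_unref(&mem_ref);             /* runs release */
		assert(mem_ref.object == NULL);
		return 0;
	}

A real driver registers one such reference per global object (memory
accounting and buffer-object pool), as the text describes.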

The Graphics Execution Manager (GEM)
====================================

The GEM design approach has resulted in a memory manager that doesn't
provide full coverage of all (or even all common) use cases in its
userspace or kernel API. GEM exposes a set of standard memory-related
operations to userspace and a set of helper functions to drivers, and
lets drivers implement hardware-specific operations with their own
private API.

The GEM userspace API is described in the `GEM - the Graphics Execution
Manager <http://lwn.net/Articles/283798/>`__ article on LWN. While
slightly outdated, the document provides a good overview of the GEM API
principles. Buffer allocation and read and write operations, described
as part of the common GEM API, are currently implemented using
driver-specific ioctls.

GEM is data-agnostic. It manages abstract buffer objects without knowing
what individual buffers contain. APIs that require knowledge of buffer
contents or purpose, such as buffer allocation or synchronization
primitives, are thus outside of the scope of GEM and must be implemented
using driver-specific ioctls.

On a fundamental level, GEM involves several operations:

-  Memory allocation and freeing
-  Command execution
-  Aperture management at command execution time

Buffer object allocation is relatively straightforward and largely
provided by Linux's shmem layer, which provides memory to back each
object.

Device-specific operations, such as command execution, pinning, buffer
read & write, mapping, and domain ownership transfers are left to
driver-specific ioctls.
GEM Initialization
------------------

Drivers that use GEM must set the DRIVER_GEM bit in the struct
:c:type:`struct drm_driver <drm_driver>` driver_features
field. The DRM core will then automatically initialize the GEM core
before calling the load operation. Behind the scenes, this will create a
DRM Memory Manager object which provides an address space pool for
object allocation.

In a KMS configuration, drivers need to allocate and initialize a
command ring buffer following core GEM initialization if required by the
hardware. UMA devices usually have what is called a "stolen" memory
region, which provides space for the initial framebuffer and large,
contiguous memory regions required by the device. This space is
typically not managed by GEM, and must be initialized separately into
its own DRM MM object.
GEM Objects Creation
--------------------

GEM splits creation of GEM objects and allocation of the memory that
backs them into two distinct operations.

GEM objects are represented by an instance of struct :c:type:`struct
drm_gem_object <drm_gem_object>`. Drivers usually need to
extend GEM objects with private information and thus create a
driver-specific GEM object structure type that embeds an instance of
struct :c:type:`struct drm_gem_object <drm_gem_object>`.

To create a GEM object, a driver allocates memory for an instance of its
specific GEM object type and initializes the embedded struct
:c:type:`struct drm_gem_object <drm_gem_object>` with a call
to drm_gem_object_init(). The function takes a pointer
to the DRM device, a pointer to the GEM object and the buffer object
size in bytes.

GEM uses shmem to allocate anonymous pageable memory.
drm_gem_object_init() will create a shmfs file of the
requested size and store it into the struct :c:type:`struct
drm_gem_object <drm_gem_object>` filp field. The memory is
used as either main storage for the object when the graphics hardware
uses system memory directly or as a backing store otherwise.

Drivers are responsible for the actual physical pages allocation by
calling shmem_read_mapping_page_gfp() for each page.
Note that they can decide to allocate pages when initializing the GEM
object, or to delay allocation until the memory is needed (for instance
when a page fault occurs as a result of a userspace memory access or
when the driver needs to start a DMA transfer involving the memory).

Anonymous pageable memory allocation is not always desired, for instance
when the hardware requires physically contiguous system memory as is
often the case in embedded devices. Drivers can create GEM objects with
no shmfs backing (called private GEM objects) by initializing them with a call
to drm_gem_private_object_init() instead of drm_gem_object_init(). Storage for
private GEM objects must be managed by drivers.
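
The embedding pattern described above can be sketched in a self-contained
userspace mock. ``struct mock_gem_object``, ``struct my_gem_object`` and
``to_my_obj()`` are invented for illustration; only the container_of
recovery trick mirrors what real drivers do with struct drm_gem_object:

.. code-block:: c

	/*
	 * Userspace sketch of embedding a base GEM object in a
	 * driver-specific structure and recovering the outer structure
	 * from a pointer to the embedded base.
	 */
	#include <assert.h>
	#include <stddef.h>

	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	struct mock_gem_object {
		size_t size;   /* stand-in for struct drm_gem_object */
	};

	struct my_gem_object {
		struct mock_gem_object base;  /* embedded base object */
		int tiling_mode;              /* driver-private state */
	};

	/* Recover the driver object from a pointer to the embedded base. */
	static struct my_gem_object *to_my_obj(struct mock_gem_object *obj)
	{
		return container_of(obj, struct my_gem_object, base);
	}

	int main(void)
	{
		struct my_gem_object obj = {
			.base = { .size = 4096 },
			.tiling_mode = 1,
		};
		struct mock_gem_object *base = &obj.base;

		/* GEM core code sees only "base"; the driver converts back. */
		assert(to_my_obj(base) == &obj);
		assert(to_my_obj(base)->tiling_mode == 1);
		return 0;
	}

In a real driver the embedded member is a struct drm_gem_object
initialized with drm_gem_object_init(), and the conversion helper is
typically a to_<driver>_bo()-style inline function.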

GEM Objects Lifetime
--------------------

All GEM objects are reference-counted by the GEM core. References can be
acquired and released by calling drm_gem_object_get() and drm_gem_object_put()
respectively.

When the last reference to a GEM object is released the GEM core calls
the :c:type:`struct drm_driver <drm_driver>` gem_free_object_unlocked
operation. That operation is mandatory for GEM-enabled drivers and must
free the GEM object and all associated resources.

``void (*gem_free_object) (struct drm_gem_object *obj);``

Drivers are responsible for freeing all GEM object resources. This includes the
resources created by the GEM core, which need to be released with
drm_gem_object_release().

GEM Objects Naming
------------------

Communication between userspace and the kernel refers to GEM objects
using local handles, global names or, more recently, file descriptors.
All of those are 32-bit integer values; the usual Linux kernel limits
apply to the file descriptors.

GEM handles are local to a DRM file. Applications get a handle to a GEM
object through a driver-specific ioctl, and can use that handle to refer
to the GEM object in other standard or driver-specific ioctls. Closing a
DRM file handle frees all its GEM handles and dereferences the
associated GEM objects.

To create a handle for a GEM object drivers call drm_gem_handle_create(). The
function takes a pointer to the DRM file and the GEM object and returns a
locally unique handle.  When the handle is no longer needed drivers delete it
with a call to drm_gem_handle_delete(). Finally the GEM object associated with a
handle can be retrieved by a call to drm_gem_object_lookup().

Handles don't take ownership of GEM objects, they only take a reference
to the object that will be dropped when the handle is destroyed. To
avoid leaking GEM objects, drivers must make sure they drop the
reference(s) they own (such as the initial reference taken at object
creation time) as appropriate, without any special consideration for the
handle. For example, in the particular case of combined GEM object and
handle creation in the implementation of the dumb_create operation,
drivers must drop the initial reference to the GEM object before
returning the handle.
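
The ownership rule above (the handle holds its own reference, and the
creation reference must be dropped) can be modelled with a hedged
userspace sketch. Every mock_* name below is an invented stand-in for the
drm_gem_object_get()/put() and drm_gem_handle_create()/delete() calls in
the text:

.. code-block:: c

	/* Userspace mock of GEM handle vs. object reference counting. */
	#include <assert.h>

	struct mock_gem_object {
		int refcount;
		int freed;
	};

	static void mock_obj_get(struct mock_gem_object *obj)
	{
		obj->refcount++;
	}

	static void mock_obj_put(struct mock_gem_object *obj)
	{
		if (--obj->refcount == 0)
			obj->freed = 1;  /* would call gem_free_object here */
	}

	/* Handle creation takes its own reference, like drm_gem_handle_create(). */
	static void mock_handle_create(struct mock_gem_object *obj)
	{
		mock_obj_get(obj);
	}

	/* Handle deletion drops that reference, like drm_gem_handle_delete(). */
	static void mock_handle_delete(struct mock_gem_object *obj)
	{
		mock_obj_put(obj);
	}

	int main(void)
	{
		/* Object creation starts with one (initial) reference. */
		struct mock_gem_object obj = { .refcount = 1 };

		mock_handle_create(&obj);  /* dumb_create: object + handle */
		mock_obj_put(&obj);        /* drop the initial reference */
		assert(!obj.freed);        /* the handle keeps the object alive */

		mock_handle_delete(&obj);  /* userspace closes the handle */
		assert(obj.freed);         /* last reference gone: object freed */
		return 0;
	}

If the initial reference were not dropped before returning from
dumb_create, deleting the handle would leave the count at one and the
object would leak, which is exactly the pitfall the text warns about.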

GEM names are similar in purpose to handles but are not local to DRM
files. They can be passed between processes to reference a GEM object
globally. Names can't be used directly to refer to objects in the DRM
API; applications must convert handles to names and names to handles
using the DRM_IOCTL_GEM_FLINK and DRM_IOCTL_GEM_OPEN ioctls
respectively. The conversion is handled by the DRM core without any
driver-specific support.

GEM also supports buffer sharing with dma-buf file descriptors through
PRIME. GEM-based drivers must use the provided helper functions to
implement the exporting and importing correctly. Since sharing
file descriptors is inherently more secure than the easily guessable and
global GEM names, it is the preferred buffer sharing mechanism. Sharing
buffers through GEM names is only supported for legacy userspace.
Furthermore, PRIME allows cross-device buffer sharing since it is
based on dma-bufs.

GEM Objects Mapping
-------------------

Because mapping operations are fairly heavyweight GEM favours
read/write-like access to buffers, implemented through driver-specific
ioctls, over mapping buffers to userspace. However, when random access
to the buffer is needed (to perform software rendering for instance),
direct access to the object can be more efficient.

The mmap system call can't be used directly to map GEM objects, as they
don't have their own file handle. Two alternative methods currently
co-exist to map GEM objects to userspace. The first method uses a
driver-specific ioctl to perform the mapping operation, calling
do_mmap() under the hood. This is often considered
dubious, seems to be discouraged for new GEM-enabled drivers, and will
thus not be described here.

The second method uses the mmap system call on the DRM file handle.

``void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset);``

DRM identifies the GEM object to be mapped by a fake offset
passed through the mmap offset argument. Prior to being mapped, a GEM
object must thus be associated with a fake offset. To do so, drivers
must call drm_gem_create_mmap_offset() on the object.

Once allocated, the fake offset value must be passed to the application
in a driver-specific way and can then be used as the mmap offset
argument.

The GEM core provides a helper method drm_gem_mmap() to
handle object mapping. The method can be set directly as the mmap file
operation handler. It will look up the GEM object based on the offset
value and set the VMA operations to the :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field. Note that drm_gem_mmap() doesn't map memory to
userspace, but relies on the driver-provided fault handler to map pages
individually.

To use drm_gem_mmap(), drivers must fill the struct :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field with a pointer to VM operations.

The VM operations structure is a :c:type:`struct vm_operations_struct <vm_operations_struct>`
made up of several fields, the most interesting ones being:

.. code-block:: c

	struct vm_operations_struct {
		void (*open)(struct vm_area_struct * area);
		void (*close)(struct vm_area_struct * area);
		vm_fault_t (*fault)(struct vm_fault *vmf);
	};


The open and close operations must update the GEM object reference
count. Drivers can use the drm_gem_vm_open() and drm_gem_vm_close() helper
functions directly as open and close handlers.

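The open/close pairing described above can be modelled with a short
userspace sketch; the mock_* names are invented stand-ins showing how
drm_gem_vm_open()/drm_gem_vm_close()-style handlers keep the GEM object
alive while any VMA still maps it:

.. code-block:: c

	/* Userspace mock of VMA open/close reference counting. */
	#include <assert.h>

	struct mock_gem_object {
		int refcount;
		int freed;
	};

	static void mock_obj_put(struct mock_gem_object *obj)
	{
		if (--obj->refcount == 0)
			obj->freed = 1;
	}

	/* Like drm_gem_vm_open(): a new VMA takes a reference on the object. */
	static void mock_vm_open(struct mock_gem_object *obj)
	{
		obj->refcount++;
	}

	/* Like drm_gem_vm_close(): a torn-down VMA drops its reference. */
	static void mock_vm_close(struct mock_gem_object *obj)
	{
		mock_obj_put(obj);
	}

	int main(void)
	{
		struct mock_gem_object obj = { .refcount = 1 }; /* handle ref */

		mock_vm_open(&obj);   /* mmap() creates the first VMA */
		mock_vm_open(&obj);   /* e.g. the VMA is split or duplicated */
		mock_obj_put(&obj);   /* userspace closes the handle */
		assert(!obj.freed);   /* mappings still keep the object alive */

		mock_vm_close(&obj);
		mock_vm_close(&obj);  /* last mapping gone: object is freed */
		assert(obj.freed);
		return 0;
	}

This is why the helpers must be wired as both open and close handlers:
an unbalanced pair would either leak the object or free it while a
mapping still exists.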
The fault operation handler is responsible for mapping individual pages
to userspace when a page fault occurs. Depending on the memory
allocation scheme, drivers can allocate pages at fault time, or can
decide to allocate memory for the GEM object at the time the object is
created.

Drivers that want to map the GEM object upfront instead of handling page
faults can implement their own mmap file operation handler.

For platforms without MMU the GEM core provides a helper method
drm_gem_cma_get_unmapped_area(). The mmap() routines will call this to get a
proposed address for the mapping.

To use drm_gem_cma_get_unmapped_area(), drivers must fill the struct
:c:type:`struct file_operations <file_operations>` get_unmapped_area field with
a pointer to drm_gem_cma_get_unmapped_area().

More detailed information about get_unmapped_area can be found in
Documentation/admin-guide/mm/nommu-mmap.rst

Memory Coherency
----------------

When mapped to the device or used in a command buffer, backing pages for
an object are flushed to memory and marked write combined so as to be
coherent with the GPU. Likewise, if the CPU accesses an object after the
GPU has finished rendering to the object, then the object must be made
coherent with the CPU's view of memory, usually involving GPU cache
flushing of various kinds. This core CPU<->GPU coherency management is
provided by a device-specific ioctl, which evaluates an object's current
domain and performs any necessary flushing or synchronization to put the
object into the desired coherency domain (note that the object may be
busy, i.e. an active render target; in that case, setting the domain
blocks the client and waits for rendering to complete before performing
any necessary flushing operations).

Command Execution
-----------------

Perhaps the most important GEM function for GPU devices is providing a
command execution interface to clients. Client programs construct
command buffers containing references to previously allocated memory
objects, and then submit them to GEM. At that point, GEM takes care to
bind all the objects into the GTT, execute the buffer, and provide
necessary synchronization between clients accessing the same buffers.
This often involves evicting some objects from the GTT and re-binding
others (a fairly expensive operation), and providing relocation support
which hides fixed GTT offsets from clients. Clients must take care not
to submit command buffers that reference more objects than can fit in
the GTT; otherwise, GEM will reject them and no rendering will occur.
Similarly, if several objects in the buffer require fence registers to
be allocated for correct rendering (e.g. 2D blits on pre-965 chips),
care must be taken not to require more fence registers than are
available to the client. Such resource management should be abstracted
from the client in libdrm.

GEM Function Reference
----------------------

.. kernel-doc:: include/drm/drm_gem.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem.c
   :export:

GEM CMA Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :doc: cma helpers

.. kernel-doc:: include/drm/drm_gem_cma_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_cma_helper.c
   :export:

GEM SHMEM Helper Function Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_shmem_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
   :export:

GEM VRAM Helper Functions Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_vram_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :export:

GEM TTM Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
   :doc: overview

.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
   :export:

VMA Offset Manager
==================

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :doc: vma offset manager

.. kernel-doc:: include/drm/drm_vma_manager.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :export:

.. _prime_buffer_sharing:

PRIME Buffer Sharing
====================

PRIME is the cross device buffer sharing framework in drm, originally
created for the OPTIMUS range of multi-gpu platforms. To userspace PRIME
buffers are dma-buf based file descriptors.

Overview and Lifetime Rules
---------------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: overview and lifetime rules

PRIME Helper Functions
----------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: PRIME Helpers

PRIME Function References
-------------------------

.. kernel-doc:: include/drm/drm_prime.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :export:

DRM MM Range Allocator
======================

Overview
--------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: Overview

LRU Scan/Eviction Support
-------------------------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: lru scan roster

DRM MM Range Allocator Function References
------------------------------------------

.. kernel-doc:: include/drm/drm_mm.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :export:

DRM Cache Handling
==================

.. kernel-doc:: drivers/gpu/drm/drm_cache.c
   :export:

DRM Sync Objects
================

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :doc: Overview

.. kernel-doc:: include/drm/drm_syncobj.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :export:

GPU Scheduler
=============

Overview
--------

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :doc: Overview

Scheduler Function References
-----------------------------

.. kernel-doc:: include/drm/gpu_scheduler.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :export: