Lines Matching +full:dma +full:- +full:requests
13 - Jens Axboe <jens.axboe@oracle.com>
14 - Suparna Bhattacharya <suparna@in.ibm.com>
19 - Nick Piggin <npiggin@kernel.dk>
34 - Jens Axboe <jens.axboe@oracle.com>
43 - Christoph Hellwig <hch@infradead.org>
44 - Arjan van de Ven <arjanv@redhat.com>
45 - Randy Dunlap <rdunlap@xenotime.net>
46 - Andre Hedrick <andre@linux-ide.org>
49 while it was still work-in-progress:
51 - David S. Miller <davem@redhat.com>
58 - Per-queue parameters
59 - Highmem I/O support
60 - I/O scheduler modularization
65 1.3.1 Pre-built commands
69 2.2 The bio struct in detail (multi-page io unit)
75 3.2.2 Setting up DMA scatterlists
85 6.1 Partition re-mapping handled by the generic block layer
111 ----------------------------------------------------------
113 Sophisticated devices with large built-in caches, intelligent i/o scheduling
114 optimizations, high memory DMA support, etc. may find some of the
123 Tuning at a per-queue level:
125 i. Per-queue limits/values exported to the generic layer by the driver
128 a per-queue level (e.g. maximum request size, maximum number of segments in
129 a scatter-gather list, logical block size)
147 - The request queue's max_sectors, which is a soft size in
151 - The request queue's max_hw_sectors, which is a hard limit
163 Maximum number of DMA segments the hardware can handle in a request;
164 defaults to 128 (host adapter limit, after DMA remapping).
176 - QUEUE_FLAG_CLUSTER (see 3.2.2)
177 - QUEUE_FLAG_QUEUED (see 3.2.4)
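The per-queue limits exported by a driver can be pictured as a small struct plus a single check. A minimal sketch, assuming simplified fields (the names `queue_limits_sketch` and `fits_queue_limits` are invented here for illustration, not the kernel's request_queue layout):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative mirror of the per-queue limits described above; the
 * field names are simplified, not actual kernel API. */
struct queue_limits_sketch {
	unsigned int max_sectors;	/* soft limit, tunable at runtime */
	unsigned int max_hw_sectors;	/* hard limit from the hardware */
	unsigned short max_segments;	/* scatter-gather entries per request */
};

/* Would an i/o of this size fit in a single request on this queue? */
static bool fits_queue_limits(const struct queue_limits_sketch *q,
			      unsigned int sectors, unsigned short segments)
{
	return sectors <= q->max_sectors &&
	       sectors <= q->max_hw_sectors &&
	       segments <= q->max_segments;
}
```

The generic layer applies exactly this kind of check when growing a request by merging, so the driver never sees an i/o exceeding what it advertised.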
180 ii. High-mem i/o capabilities are now considered the default
183 by default copyin/out i/o requests on high-memory buffers to low-memory buffers
191 In order to enable high-memory i/o where the device is capable of supporting
192 it, the pci dma mapping routines and associated data structures have now been
193 modified to accomplish a direct page -> bus translation, without requiring
195 -> bus translation). So this works uniformly for high-memory pages (which
197 low-memory pages.
199 Note: Please refer to :doc:`/core-api/dma-api-howto` for a discussion
200 on PCI high mem DMA aspects and mapping of scatter gather lists, and support
218 It is also possible that a bounce buffer may be allocated from high-memory
226 may need to abort DMA operations and revert to PIO for the transfer, in
232 memory for specific requests if so desired.
249 ------------------------------------------------
253 This comes from some of the high-performance database/middleware
280 requests in the queue. For example it allows reads for bringing in an
282 requests which haven't aged too much on the queue. Potentially this priority
285 requests. Some bits in the bi_opf flags field in the bio structure are
290 -----------------------------------------------------------------------
294 There are situations where high-level code needs to have direct access to
305 for specially crafted requests which such ioctl or diagnostics
307 can instead be used to directly insert such requests in the queue or preferably
314 the command, then such information is associated with the request->special
315 field (rather than misuse the request->buffer field which is meant for the
321 completion. Alternatively one could directly use the request->buffer field to
325 request->buffer, request->sector and request->nr_sectors or
326 request->current_nr_sectors fields themselves rather than using the block layer
335 handling direct requests easier for such drivers; also for drivers that
352 1.3.1 Pre-built Commands
355 A request can be created with a pre-built custom command to be sent directly
357 in the command bytes. (i.e. rq->cmd is now 16 bytes in size, and meant for
358 command pre-building, and the type of the request is now indicated
359 through rq->flags instead of via rq->cmd)
365 It can help to pre-build device commands for requests in advance.
366 Drivers can now specify a request prepare function (q->prep_rq_fn) that the
367 block layer would invoke to pre-build device commands for a given request,
370 (The prepare function would not be called for requests that have RQF_DONTPREP
374 Pre-building could possibly even be done early, i.e. before placing the
378 pre-building would be to do it whenever we fail to merge on a request.
381 the pre-builder hook can be invoked there.
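The prepare-function hook and the don't-prep flag interact as sketched below. This is an illustrative model, not kernel code: the struct layout, the `_sketch` names, and the READ(10) prep function are all simplified stand-ins for the `q->prep_rq_fn` / RQF_DONTPREP mechanism described above.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RQF_DONTPREP_SKETCH (1u << 0)	/* command already built, skip prep */

/* Simplified stand-in for struct request. */
struct request_sketch {
	unsigned int flags;
	uint64_t sector;
	uint8_t cmd[16];	/* pre-built device command bytes */
};

typedef int (*prep_rq_fn_sketch)(struct request_sketch *rq);

/* Hypothetical prep function: build a READ(10)-style command once. */
static int prep_read10(struct request_sketch *rq)
{
	memset(rq->cmd, 0, sizeof(rq->cmd));
	rq->cmd[0] = 0x28;			/* READ(10) opcode */
	rq->cmd[2] = (rq->sector >> 24) & 0xff;
	rq->cmd[3] = (rq->sector >> 16) & 0xff;
	rq->cmd[4] = (rq->sector >> 8) & 0xff;
	rq->cmd[5] = rq->sector & 0xff;
	rq->flags |= RQF_DONTPREP_SKETCH;	/* don't rebuild on requeue */
	return 0;
}

/* What the block layer would do before handing the request down. */
static void maybe_prep(struct request_sketch *rq, prep_rq_fn_sketch prep)
{
	if (!(rq->flags & RQF_DONTPREP_SKETCH))
		prep(rq);
}
```

Setting the flag inside the prep function is what makes a requeued request skip a second (wasted) prep pass.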
388 ---------------------------------------------------------
393 when it came to large i/o requests and readv/writev style operations, as it
394 forced such requests to be broken up into small chunks before being passed
404 1. Should be appropriate as a descriptor for both raw and buffered i/o -
408 2. Ability to represent high-memory buffers (which do not have a virtual
416 (including non-page aligned page fragments, as specified via readv/writev)
434 ------------------
465 unsigned short bi_hw_segments; /* segments after DMA remapping */
476 - Large i/os can be sent down in one go using a bio_vec list consisting
478 are represented in the zero-copy network code)
479 - Splitting of an i/o request across multiple devices (as in the case of
482 - A linked list of bios is used as before for unrelated merges [#]_ - this
484 - Code that traverses the req list can find all the segments of a bio
487 - Drivers which can't process a large bio in one shot can use the bi_iter
495 unrelated merges -- a request ends up containing two or more bios that
502 entries with their corresponding dma address mappings filled in at the
512 become possible. The pagebuf abstraction layer from SGI also uses multi-page
514 The same is true of Andrew Morton's work-in-progress multipage bio writeout
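The multi-page bio_vec idea above can be reduced to a minimal sketch: a vector of (page, offset, length) tuples that a driver walks segment by segment. The `_sketch` types here are illustrative stand-ins, far smaller than the real struct bio.

```c
#include <assert.h>

struct page_sketch { int id; };		/* stand-in for struct page */

/* One segment: a fragment of one page. */
struct bio_vec_sketch {
	struct page_sketch *bv_page;
	unsigned int bv_len;		/* bytes in this segment */
	unsigned int bv_offset;		/* offset within the page */
};

/* Minimal bio: a count plus the vec list. */
struct bio_sketch {
	unsigned short bi_vcnt;			/* number of bio_vecs */
	struct bio_vec_sketch *bi_io_vec;	/* the actual vec list */
};

/* Total bytes carried by one bio, walked segment by segment the way a
 * driver that can't take the whole bio in one shot would. */
static unsigned int bio_total_bytes(const struct bio_sketch *bio)
{
	unsigned int bytes = 0;
	for (unsigned short i = 0; i < bio->bi_vcnt; i++)
		bytes += bio->bi_io_vec[i].bv_len;
	return bytes;
}
```

Because segments are page fragments rather than virtually contiguous buffers, the same representation covers high-memory pages and readv/writev-style non-aligned fragments uniformly.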
518 ------------------------------------
538 Used by q->elv_next_request_fn
539 rq->queue is gone
544 unsigned long flags; /* also includes earlier rq->cmd settings */
552 /* Number of scatter-gather DMA addr+len pairs after
557 /* Number of scatter-gather addr+len pairs after
558 * physical and DMA remapping hardware coalescing is performed.
559 * This is the number of scatter-gather entries the driver
560 * will actually have to deal with after DMA mapping is done.
586 except that since we have multi-segment bios, current_nr_sectors refers
599 buffer, bio, bio->bi_iter fields too.
602 of the i/o buffer in cases where the buffer resides in low-memory. For high
613 ------------------
620 deadlock-free allocations during extreme VM load. For example, the VM
657 for a non-clone bio. There are 6 pools set up for different-size biovecs,
668 same bio_vec_list). This would typically be used for splitting i/o requests
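The deadlock-avoidance idea behind the pools mentioned above can be sketched as a reserve-backed allocator: try the normal allocator first, and only dip into a preallocated reserve when it fails. This toy model (all `_sketch` names invented) is not the kernel's mempool API, just the shape of it.

```c
#include <assert.h>
#include <stddef.h>

/* A small reserve of preallocated elements guarantees forward progress
 * when the normal allocator fails under VM load. */
struct mempool_sketch {
	void *reserve[4];	/* preallocated emergency elements */
	int nr_free;
};

/* Stand-in for an allocator that fails under memory pressure. */
static void *failing_alloc(size_t size) { (void)size; return NULL; }

static void *mempool_alloc_sketch(struct mempool_sketch *pool,
				  void *(*alloc)(size_t), size_t size)
{
	void *elem = alloc(size);
	if (elem)
		return elem;		/* normal path */
	if (pool->nr_free > 0)
		return pool->reserve[--pool->nr_free];	/* dip into reserve */
	return NULL;			/* reserve exhausted too */
}
```

The key property is that writeback needed to free memory can itself always allocate a bio, because the reserve is refilled as i/o completes rather than by new allocation.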
672 -------------------------------
688 I/O completion callbacks are per-bio rather than per-segment, so drivers
691 need to be reorganized to support multi-segment bios.
693 3.2.2 Setting up DMA scatterlists
708 - Prevents a clustered segment from crossing a 4GB mem boundary
709 - Avoids building segments that would exceed the number of physical
712 DMA remapping (hw_segments) (i.e. IOMMU aware limits).
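The clustering constraints above (physical contiguity, a maximum segment size, and not crossing a 4GB boundary) can be sketched as a predicate on two candidate segments. The constants and names are illustrative, assuming a 4GB boundary mask as described.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BOUNDARY_MASK_SKETCH 0xffffffffULL	/* 4GB boundary */

/* Would a segment starting at addr with this length straddle a 4GB
 * boundary? Same trick as masking the first and last byte addresses. */
static bool crosses_boundary(uint64_t addr, uint64_t len)
{
	return (addr | BOUNDARY_MASK_SKETCH) !=
	       ((addr + len - 1) | BOUNDARY_MASK_SKETCH);
}

/* May segment B (candidate follower of A) be clustered into A? */
static bool can_cluster(uint64_t a_addr, uint64_t a_len,
			uint64_t b_addr, uint64_t b_len,
			uint64_t max_segment_size)
{
	if (a_addr + a_len != b_addr)		/* must be physically contiguous */
		return false;
	if (a_len + b_len > max_segment_size)	/* respect hardware segment size */
		return false;
	return !crosses_boundary(a_addr, a_len + b_len);
}
```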
730 request can be kicked off) as before. With the introduction of multi-page
741 buffers) and expect only virtually mapped buffers, can access the rq->buffer
749 direct access requests which only specify rq->buffer without a valid rq->bio)
752 ------------------
765 maps the array to one or more multi-page bios, issuing submit_bio() to
777 So right now it wouldn't work for direct i/o on non-contiguous blocks.
789 Andrew Morton's multi-page bio patches attempt to issue multi-page
794 Christoph Hellwig had some code that uses bios for page-io (rather than
803 Direct access requests that do not contain bios would be submitted differently
820 TBD: In order for this to work, some changes are needed in the way multi-page
837 The generic dispatch queue is responsible for requeueing, handling non-fs
838 requests and all other subtleties.
841 requests. They can also choose to delay certain requests to improve
854 ----------------------
859 elevator_merge_fn called to query requests for merge with a bio
861 elevator_merge_req_fn called when two requests get merged. The one
877 that two *requests* can still be merged at later
882 elevator_dispatch_fn* fills the dispatch queue with ready requests.
883 I/O schedulers are free to postpone requests by
885 is non-zero. Once dispatched, I/O schedulers
886 are not allowed to manipulate the requests -
915 ----------------------------------------
917 All requests seen by I/O schedulers strictly follow one of the following three
920 set_req_fn ->
922 i. add_req_fn -> (merged_fn ->)* -> dispatch_fn -> activate_req_fn ->
923 (deactivate_req_fn -> activate_req_fn ->)* -> completed_req_fn
924 ii. add_req_fn -> (merged_fn ->)* -> merge_req_fn
927 -> put_req_fn
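Lifecycle (i) above is effectively a small state machine. A sketch of its legal transitions, with invented state names (this models only path (i), not the merge path (ii)):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative states for lifecycle (i): add_req_fn -> (merged_fn)* ->
 * dispatch_fn -> activate_req_fn -> (deactivate -> activate)* ->
 * completed_req_fn. Names are invented, not kernel code. */
enum rq_state { RQ_NEW, RQ_QUEUED, RQ_DISPATCHED, RQ_ACTIVE, RQ_DONE };

static bool rq_transition_ok(enum rq_state from, enum rq_state to)
{
	switch (from) {
	case RQ_NEW:		/* add_req_fn */
		return to == RQ_QUEUED;
	case RQ_QUEUED:		/* merged_fn* then dispatch_fn */
		return to == RQ_QUEUED || to == RQ_DISPATCHED;
	case RQ_DISPATCHED:	/* activate_req_fn */
		return to == RQ_ACTIVE;
	case RQ_ACTIVE:		/* deactivate_req_fn or completed_req_fn */
		return to == RQ_DISPATCHED || to == RQ_DONE;
	default:
		return false;
	}
}
```

The important invariant is the one stated above: once dispatched, the scheduler's only legal moves are the activate/deactivate pair and completion; it may not otherwise manipulate the request.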
930 --------------------------------
932 The generic i/o scheduler algorithm attempts to sort/merge/batch requests for
944 sorting and searching, and a fifo linked list for time-based searching. This
945 gives good scalability and good availability of information. Requests are
961 iii. Plugging the queue to batch requests in anticipation of opportunities for
965 that it collects up enough requests in the queue to be able to take
969 till it fills up with a few more requests, before starting to service
970 the requests. This provides an opportunity to merge/sort the requests before
985 multi-page bios being queued in one shot, we may not need to wait to merge
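The plugging behaviour described above reduces to a simple accumulate-then-drain model. A toy sketch (names invented, counters instead of real request lists):

```c
#include <assert.h>
#include <stdbool.h>

/* While the queue is plugged, submitted requests just accumulate,
 * giving later requests a chance to merge; unplugging drains them. */
struct plug_queue_sketch {
	bool plugged;
	int pending;	/* requests held back while plugged */
	int dispatched;	/* requests handed to the driver */
};

static void submit_sketch(struct plug_queue_sketch *q)
{
	if (q->plugged)
		q->pending++;	/* batch it, hope for a merge */
	else
		q->dispatched++;
}

static void unplug_sketch(struct plug_queue_sketch *q)
{
	q->dispatched += q->pending;	/* service the batched requests */
	q->pending = 0;
	q->plugged = false;
}
```

The trade-off is exactly the one discussed above: batching buys merge/sort opportunities at the cost of a small added latency before the first request is serviced.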
989 ----------------
993 priorities for example). See `*io_context` in block/ll_rw_blk.c, and as-iosched.c
1000 5.1 Granular Locking: io_request_lock replaced by a per-queue lock
1001 ------------------------------------------------------------------
1007 per-queue, with a provision for sharing a lock across queues if
1023 ----------------------------------------------------------------
1031 6.1 Partition re-mapping handled by the generic block layer
1032 -----------------------------------------------------------
1035 Now the generic block layer performs partition-remapping early and thus
1039 submit_bio_noacct even before invoking the queue specific ->submit_bio,
1049 Old-style drivers that just use CURRENT and ignore clustered requests,
1051 clustered requests, multi-page bios, etc for the driver.
1054 support scatter-gather changes should be minimal too.
1059 Drivers should use elv_next_request to pick up requests and are no longer
1061 (struct request->queue has been removed)
1065 it will loop and handle as many sectors (on a bio-segment granularity)
1068 Now bh->b_end_io is replaced by bio->bi_end_io, but most of the time the
1072 then it just needs to replace that with q->queue_lock instead.
1083 rq->rq_dev = mk_kdev(3, 5); /* /dev/hda5 */
1084 rq->sector = 0; /* first sector on hda5 */
1088 rq->rq_dev = mk_kdev(3, 0); /* /dev/hda */
1089 rq->sector = 123128; /* offset from start of disk */
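The hda5 to hda example above is, in the end, one addition: the generic layer adds the partition's start sector so the low-level driver only ever sees absolute disk offsets. As arithmetic (123128 is taken from the example; the function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Partition remapping: relative sector within the partition plus the
 * partition's start sector gives the absolute sector on the disk. */
static uint64_t remap_to_disk(uint64_t partition_start, uint64_t sector)
{
	return partition_start + sector;
}
```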
1091 As mentioned, there is no virtual mapping of a bio. For DMA, this is
1106 -----------------------------------------------------
1108 - orig kiobuf & raw i/o patches (now in 2.4 tree)
1109 - direct kiobuf based i/o to devices (no intermediate bh's)
1110 - page i/o using kiobuf
1111 - kiobuf splitting for lvm (mkp)
1112 - elevator support for kiobuf request merging (axboe)
1114 8.2. Zero-copy networking (Dave Miller)
1115 ---------------------------------------
1117 8.3. SGI XFS - pagebuf patches - use of kiobufs
1118 -----------------------------------------------
1119 8.4. Multi-page pioent patch for bio (Christoph Hellwig)
1120 --------------------------------------------------------
1121 8.5. Direct i/o implementation (Andrea Arcangeli) since 2.4.10-pre11
1122 --------------------------------------------------------------------
1124 -------------------------------------------------
1126 -----------------------------------------
1128 -------------------------------------------------------------------------------------
1133 ----------------------------------------
1135 ----------------------------------------------------------
1136 8.11. Block device in page cache patch (Andrea Arcangeli) - now in 2.4.10+
1137 ---------------------------------------------------------------------------
1138 8.12. Multiple block-size transfers for faster raw i/o (Shailabh Nagar, Badari)
1139 -------------------------------------------------------------------------------
1140 8.13 Priority based i/o scheduler - prepatches (Arjan van de Ven)
1141 ------------------------------------------------------------------
1143 --------------------------------------------
1144 8.15 Multi-page writeout and readahead patches (Andrew Morton)
1145 ---------------------------------------------------------------
1147 -----------------------------------------------------------------------
1153 ------------------------
1155 Larry McVoy (and subsequent discussions on lkml, and Linus' comments - Jan 2001
1158 ------------------------------------------
1160 On lkml between sct, linus, alan et al - Feb-March 2001 (many of the
1163 9.3 Discussions on mempool on lkml - Dec 2001.
1164 ----------------------------------------------