Lines Matching +full:tlb +full:- +full:split

/* SPDX-License-Identifier: GPL-2.0 */
 * struct iommu_flush_ops - IOMMU callbacks for TLB and page table management.
 *
 * @tlb_flush_all:  Synchronously invalidate the entire TLB context.
 * @tlb_flush_walk: Synchronously invalidate all intermediate TLB state
 *                  (sometimes referred to as the "walk cache") for a virtual
 *                  address range.
 * @tlb_add_page:   Optional callback to queue up leaf TLB invalidation for a
 *                  single page. IOMMUs that cannot batch TLB invalidation
 *                  operations efficiently will typically issue them here, but
 *                  others may defer the invalidation until iommu_iotlb_sync().
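
To make the callback contract concrete, here is a minimal sketch of how an IOMMU driver might populate these hooks. The my_iommu_* names and the low-level invalidation helpers are invented for the sketch; only the iommu_flush_ops prototypes come from the header.

#include <linux/io-pgtable.h>

/* Hypothetical hardware helpers, not part of the io-pgtable API. */
void my_iommu_inv_context(void *cookie);
void my_iommu_inv_range(void *cookie, unsigned long iova,
			size_t size, size_t granule);

static void my_iommu_tlb_flush_all(void *cookie)
{
	/* Full-context invalidation; may run in atomic context, so no sleeping. */
	my_iommu_inv_context(cookie);
}

static void my_iommu_tlb_flush_walk(unsigned long iova, size_t size,
				    size_t granule, void *cookie)
{
	/* Drop intermediate (walk-cache) entries covering [iova, iova + size). */
	my_iommu_inv_range(cookie, iova, size, granule);
}

static void my_iommu_tlb_add_page(struct iommu_iotlb_gather *gather,
				  unsigned long iova, size_t granule,
				  void *cookie)
{
	/* No batching in this sketch: invalidate the single leaf immediately. */
	my_iommu_inv_range(cookie, iova, granule, granule);
}

static const struct iommu_flush_ops my_iommu_flush_ops = {
	.tlb_flush_all	= my_iommu_tlb_flush_all,
	.tlb_flush_walk	= my_iommu_tlb_flush_walk,
	.tlb_add_page	= my_iommu_tlb_add_page,
};

A driver that can batch invalidations would instead record the range in the iommu_iotlb_gather and flush it later from its iotlb_sync() implementation.
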
 * struct io_pgtable_cfg - Configuration data for a set of page tables.
 *
 * @quirks: A bitmap of hardware quirks that require some special
 *          action by the low-level page table allocator.
 * @tlb:    TLB management callbacks for this set of tables.
 *
 * IO_PGTABLE_QUIRK_ARM_NS: Set NS and NSTABLE bits in stage 1 PTEs, for
 *	hardware which insists on validating them even in non-secure state
 *	where they should normally be ignored.
 * IO_PGTABLE_QUIRK_NO_PERMS: Map everything with full access, for hardware
 *	which does not implement the permissions of a given format, and/or
 *	requires some format-specific default value.
 * IO_PGTABLE_QUIRK_ARM_TTBR1: Configure the table (ARM LPAE format) for use
 *	in the upper half of a split address space.
	const struct iommu_flush_ops	*tlb;

	/* Low-level data specific to the table format */
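
For illustration, a configuration using the split-address-space quirk matched above might be filled in as below. The page sizes, address widths, and the my_* names are placeholders, not values taken from the header; my_iommu_flush_ops is the sketch shown earlier.

static void my_fill_cfg(struct io_pgtable_cfg *cfg, struct device *iommu_dev)
{
	*cfg = (struct io_pgtable_cfg) {
		.quirks		= IO_PGTABLE_QUIRK_ARM_TTBR1,	/* TTBR1 half */
		.pgsize_bitmap	= SZ_4K | SZ_2M | SZ_1G,	/* illustrative */
		.ias		= 48,		/* input (IOVA) address bits */
		.oas		= 48,		/* output (physical) address bits */
		.coherent_walk	= true,
		.tlb		= &my_iommu_flush_ops,	/* see sketch above */
		.iommu_dev	= iommu_dev,
	};
}
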
 * struct io_pgtable_ops - Page table manipulation API for IOMMU drivers.
 *
 * @map_sg: Map a scatter-gather list of physically contiguous memory
 *          chunks.
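
The member prototypes are not part of this match listing and have changed across kernel versions (gfp_t arguments, map_pages()/map_sg() variants), so the calls below are only a rough usage sketch under that assumption.

/* Map one 4K page and sanity-check the reverse translation. */
static int my_map_one_page(struct io_pgtable_ops *ops, unsigned long iova,
			   phys_addr_t paddr)
{
	int ret;

	/* Prototype assumed here: (ops, iova, paddr, size, prot, gfp). */
	ret = ops->map(ops, iova, paddr, SZ_4K, IOMMU_READ | IOMMU_WRITE,
		       GFP_KERNEL);
	if (ret)
		return ret;

	/* iova_to_phys() mirrors iommu_iova_to_phys() for this table. */
	WARN_ON(ops->iova_to_phys(ops, iova) != paddr);
	return 0;
}
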
 * alloc_io_pgtable_ops() - Allocate a page table allocator for use by an IOMMU.
 *
 * @cookie: An opaque token provided by the IOMMU driver and passed back to
 *          the callback routines in cfg->tlb.
 * free_io_pgtable_ops() - Free an io_pgtable_ops structure. The caller
 *                         *must* ensure that the page table is no longer
 *                         live, but the TLB can be dirty.
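
Putting the two together, a typical driver lifecycle looks roughly like this; struct my_domain and its fields are invented for the sketch, while the alloc/free calls and the ARM_64_LPAE_S1 format are real io-pgtable API.

struct my_domain {
	struct io_pgtable_ops	*pgtbl_ops;
	unsigned long		pgsize_bitmap;
};

static int my_domain_init(struct my_domain *dom, struct io_pgtable_cfg *cfg)
{
	/* The domain is the cookie handed back to the cfg->tlb callbacks. */
	dom->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1, cfg, dom);
	if (!dom->pgtbl_ops)
		return -ENOMEM;

	/* The allocator may have adjusted cfg, e.g. restricted the bitmap. */
	dom->pgsize_bitmap = cfg->pgsize_bitmap;
	return 0;
}

static void my_domain_destroy(struct my_domain *dom)
{
	/* Hardware must no longer walk the tables; a dirty TLB is fine. */
	free_io_pgtable_ops(dom->pgtbl_ops);
}
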
 * struct io_pgtable - Internal structure describing a set of page tables.
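
The matching lines below come from the inline TLB helpers; for context, struct io_pgtable itself holds roughly the following fields (reconstructed here, exact layout may differ by kernel version), which is what iop->cfg and iop->cookie refer to.

struct io_pgtable {
	enum io_pgtable_fmt	fmt;	/* the page table format */
	void			*cookie; /* driver token for the TLB callbacks */
	struct io_pgtable_cfg	cfg;	/* copy of the configuration */
	struct io_pgtable_ops	ops;	/* ops for this set of tables */
};

#define io_pgtable_ops_to_pgtable(x) container_of((x), struct io_pgtable, ops)
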
	/* in io_pgtable_tlb_flush_all(): */
	if (iop->cfg.tlb && iop->cfg.tlb->tlb_flush_all)
		iop->cfg.tlb->tlb_flush_all(iop->cookie);

	/* in io_pgtable_tlb_flush_walk(): */
	if (iop->cfg.tlb && iop->cfg.tlb->tlb_flush_walk)
		iop->cfg.tlb->tlb_flush_walk(iova, size, granule, iop->cookie);

	/* in io_pgtable_tlb_add_page(): */
	if (iop->cfg.tlb && iop->cfg.tlb->tlb_add_page)
		iop->cfg.tlb->tlb_add_page(gather, iova, granule, iop->cookie);
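
Each fragment above is the body of a small static inline wrapper; the middle one, reconstructed for context, has roughly this shape (the other two follow the same pattern with their own argument lists):

static inline void
io_pgtable_tlb_flush_walk(struct io_pgtable *iop, unsigned long iova,
			  size_t size, size_t granule)
{
	if (iop->cfg.tlb && iop->cfg.tlb->tlb_flush_walk)
		iop->cfg.tlb->tlb_flush_walk(iova, size, granule, iop->cookie);
}
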
 * struct io_pgtable_init_fns - Alloc/free a set of page tables for a
 *                              particular format.
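
The structure pairs a format's allocator with its matching free routine; its definition is roughly:

struct io_pgtable_init_fns {
	struct io_pgtable *(*alloc)(struct io_pgtable_cfg *cfg, void *cookie);
	void (*free)(struct io_pgtable *iop);
};

Each supported format (ARM LPAE, ARM v7s, and so on) provides one of these, and alloc_io_pgtable_ops() dispatches on its fmt argument to pick the right pair.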