.. _hugetlbpage:

=============
HugeTLB Pages
=============

Overview
========

The intent of this file is to give a brief summary of hugetlbpage support in
the Linux kernel.  This support is built on top of multiple page size support
that is provided by most modern architectures.  For example, x86 CPUs normally
support 4K and 2M (1G if architecturally supported) page sizes, the ia64
architecture supports multiple page sizes 4K, 8K, 64K, 256K, 1M, 4M, 16M,
256M and ppc64 supports 4K and 16M.  A TLB is a cache of virtual-to-physical
translations.  Typically this is a very scarce resource on a processor.
Operating systems try to make the best use of the limited number of TLB
resources.  This optimization is more critical now that bigger and bigger
physical memories (several GBs) are more readily available.

Users can use the huge page support in the Linux kernel by either using the
mmap system call or standard SYSV shared memory system calls (shmget, shmat).

First the Linux kernel needs to be built with the CONFIG_HUGETLBFS
(present under "File systems") and CONFIG_HUGETLB_PAGE (selected
automatically when CONFIG_HUGETLBFS is selected) configuration
options.
28*4882a593Smuzhiyun
29*4882a593SmuzhiyunThe ``/proc/meminfo`` file provides information about the total number of
30*4882a593Smuzhiyunpersistent hugetlb pages in the kernel's huge page pool.  It also displays
31*4882a593Smuzhiyundefault huge page size and information about the number of free, reserved
32*4882a593Smuzhiyunand surplus huge pages in the pool of huge pages of default size.
33*4882a593SmuzhiyunThe huge page size is needed for generating the proper alignment and
34*4882a593Smuzhiyunsize of the arguments to system calls that map huge page regions.
35*4882a593Smuzhiyun
36*4882a593SmuzhiyunThe output of ``cat /proc/meminfo`` will include lines like::
37*4882a593Smuzhiyun
38*4882a593Smuzhiyun	HugePages_Total: uuu
39*4882a593Smuzhiyun	HugePages_Free:  vvv
40*4882a593Smuzhiyun	HugePages_Rsvd:  www
41*4882a593Smuzhiyun	HugePages_Surp:  xxx
42*4882a593Smuzhiyun	Hugepagesize:    yyy kB
43*4882a593Smuzhiyun	Hugetlb:         zzz kB
44*4882a593Smuzhiyun
where:

HugePages_Total
	is the size of the pool of huge pages.
HugePages_Free
	is the number of huge pages in the pool that are not yet
	allocated.
HugePages_Rsvd
	is short for "reserved," and is the number of huge pages for
	which a commitment to allocate from the pool has been made,
	but no allocation has yet been made.  Reserved huge pages
	guarantee that an application will be able to allocate a
	huge page from the pool of huge pages at fault time.
HugePages_Surp
	is short for "surplus," and is the number of huge pages in
	the pool above the value in ``/proc/sys/vm/nr_hugepages``. The
	maximum number of surplus huge pages is controlled by
	``/proc/sys/vm/nr_overcommit_hugepages``.
Hugepagesize
	is the default huge page size (in kB).
Hugetlb
	is the total amount of memory (in kB) consumed by huge
	pages of all sizes.
	If huge pages of different sizes are in use, this number
	will exceed HugePages_Total \* Hugepagesize. To get more
	detailed information, please refer to
	``/sys/kernel/mm/hugepages`` (described below).
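
These counters can be combined in scripts: the difference between
HugePages_Total and HugePages_Free is the number of default-size pages
currently backing mappings.  In the sketch below, ``meminfo`` is a stand-in
shell function defined only for illustration (the counts are hypothetical);
on a live system, read ``/proc/meminfo`` directly instead::

	# Stand-in for /proc/meminfo with illustrative counter values.
	meminfo() { printf 'HugePages_Total: 20\nHugePages_Free: 14\nHugePages_Rsvd: 2\nHugePages_Surp: 0\nHugepagesize: 2048 kB\n'; }
	# Match on the first field to extract the counter values.
	total=$(meminfo | awk '$1 == "HugePages_Total:" {print $2}')
	free=$(meminfo | awk '$1 == "HugePages_Free:" {print $2}')
	# Pages currently backing mappings = total - free.
	echo "huge pages in use: $(( total - free ))"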


``/proc/filesystems`` should also show a filesystem of type "hugetlbfs"
configured in the kernel.

``/proc/sys/vm/nr_hugepages`` indicates the current number of "persistent" huge
pages in the kernel's huge page pool.  "Persistent" huge pages will be
returned to the huge page pool when freed by a task.  A user with root
privileges can dynamically allocate more or free some persistent huge pages
by increasing or decreasing the value of ``nr_hugepages``.

Pages that are used as huge pages are reserved inside the kernel and cannot
be used for other purposes.  Huge pages cannot be swapped out under
memory pressure.

Once a number of huge pages have been pre-allocated to the kernel huge page
pool, a user with appropriate privilege can use either the mmap system call
or shared memory system calls to use the huge pages.  See the discussion of
:ref:`Using Huge Pages <using_huge_pages>`, below.

The administrator can allocate persistent huge pages on the kernel boot
command line by specifying the "hugepages=N" parameter, where 'N' = the
number of huge pages requested.  This is the most reliable method of
allocating huge pages as memory has not yet become fragmented.

Some platforms support multiple huge page sizes.  To allocate huge pages
of a specific size, one must precede the huge pages boot command parameters
with a huge page size selection parameter "hugepagesz=<size>".  <size> must
be specified in bytes with an optional scale suffix [kKmMgG].  The default
huge page size may be selected with the "default_hugepagesz=<size>" boot
parameter.

Hugetlb boot command line parameter semantics

hugepagesz
	Specify a huge page size.  Used in conjunction with the hugepages
	parameter to preallocate a number of huge pages of the specified
	size.  Hence, hugepagesz and hugepages are typically specified in
	pairs such as::

		hugepagesz=2M hugepages=512

	hugepagesz can only be specified once on the command line for a
	specific huge page size.  Valid huge page sizes are architecture
	dependent.
hugepages
	Specify the number of huge pages to preallocate.  This typically
	follows a valid hugepagesz or default_hugepagesz parameter.  However,
	if hugepages is the first or only hugetlb command line parameter it
	implicitly specifies the number of huge pages of default size to
	allocate.  If the number of huge pages of default size is implicitly
	specified, it cannot be overridden by a hugepagesz,hugepages
	parameter pair for the default size.

	For example, on an architecture with 2M default huge page size::

		hugepages=256 hugepagesz=2M hugepages=512

	will result in 256 2M huge pages being allocated and a warning message
	indicating that the hugepages=512 parameter is ignored.  If a hugepages
	parameter is preceded by an invalid hugepagesz parameter, it will
	be ignored.
default_hugepagesz
	Specify the default huge page size.  This parameter can
	only be specified once on the command line.  default_hugepagesz can
	optionally be followed by the hugepages parameter to preallocate a
	specific number of huge pages of default size.  The number of default
	sized huge pages to preallocate can also be implicitly specified as
	mentioned in the hugepages section above.  Therefore, on an
	architecture with 2M default huge page size::

		hugepages=256
		default_hugepagesz=2M hugepages=256
		hugepages=256 default_hugepagesz=2M

	will all result in 256 2M huge pages being allocated.  Valid default
	huge page size is architecture dependent.

When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
indicates the current number of pre-allocated huge pages of the default size.
Thus, one can use the following command to dynamically allocate/deallocate
default sized persistent huge pages::

	echo 20 > /proc/sys/vm/nr_hugepages

This command will try to adjust the number of default sized huge pages in the
huge page pool to 20, allocating or freeing huge pages, as required.

On a NUMA platform, the kernel will attempt to distribute the huge page pool
over the set of allowed nodes specified by the NUMA memory policy of the
task that modifies ``nr_hugepages``. The default for the allowed nodes--when the
task has default memory policy--is all on-line nodes with memory.  Allowed
nodes with insufficient available, contiguous memory for a huge page will be
silently skipped when allocating persistent huge pages.  See the
:ref:`discussion below <mem_policy_and_hp_alloc>`
of the interaction of task memory policy, cpusets and per node attributes
with the allocation and freeing of persistent huge pages.

The success or failure of huge page allocation depends on the amount of
physically contiguous memory that is present in the system at the time of the
allocation attempt.  If the kernel is unable to allocate huge pages from
some nodes in a NUMA system, it will attempt to make up the difference by
allocating extra pages on other nodes with sufficient available contiguous
memory, if any.

System administrators may want to put this command in one of the local rc
init files.  This will enable the kernel to allocate huge pages early in
the boot process when the possibility of getting physically contiguous pages
is still very high.  Administrators can verify the number of huge pages
actually allocated by checking the sysctl or meminfo.  To check the per node
distribution of huge pages in a NUMA system, use::

	cat /sys/devices/system/node/node*/meminfo | fgrep Huge

``/proc/sys/vm/nr_overcommit_hugepages`` specifies how large the pool of
huge pages can grow, if more huge pages than ``/proc/sys/vm/nr_hugepages`` are
requested by applications.  Writing any non-zero value into this file
indicates that the hugetlb subsystem is allowed to try to obtain that
number of "surplus" huge pages from the kernel's normal page pool, when the
persistent huge page pool is exhausted. As these surplus huge pages become
unused, they are freed back to the kernel's normal page pool.

When increasing the huge page pool size via ``nr_hugepages``, any existing
surplus pages will first be promoted to persistent huge pages.  Then, additional
huge pages will be allocated, if necessary and if possible, to fulfill
the new persistent huge page pool size.

The administrator may shrink the pool of persistent huge pages for
the default huge page size by setting the ``nr_hugepages`` sysctl to a
smaller value.  The kernel will attempt to balance the freeing of huge pages
across all nodes in the memory policy of the task modifying ``nr_hugepages``.
Any free huge pages on the selected nodes will be freed back to the kernel's
normal page pool.

Caveat: Shrinking the persistent huge page pool via ``nr_hugepages`` such that
it becomes less than the number of huge pages in use will convert the balance
of the in-use huge pages to surplus huge pages.  This will occur even if
the number of surplus pages would exceed the overcommit value.  As long as
this condition holds--that is, until ``nr_hugepages+nr_overcommit_hugepages`` is
increased sufficiently, or the surplus huge pages go out of use and are freed--
no more surplus huge pages will be allowed to be allocated.
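
The conversion described in this caveat can be sketched numerically; the
counts below are hypothetical (50 huge pages in use, pool shrunk to a target
of 30)::

	in_use=50
	new_nr_hugepages=30
	# Pages that are in use but no longer covered by the persistent pool
	# become surplus, regardless of nr_overcommit_hugepages.
	surplus=$(( in_use > new_nr_hugepages ? in_use - new_nr_hugepages : 0 ))
	echo "surplus after shrink: $surplus"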

With run-time support for multiple huge page pools, much of the huge page
userspace interface in ``/proc/sys/vm`` has been duplicated in sysfs.
The ``/proc`` interfaces discussed above have been retained for backwards
compatibility. The root huge page control directory in sysfs is::

	/sys/kernel/mm/hugepages

For each huge page size supported by the running kernel, a subdirectory
will exist, of the form::

	hugepages-${size}kB

Inside each of these directories, the same set of files will exist::

	nr_hugepages
	nr_hugepages_mempolicy
	nr_overcommit_hugepages
	free_hugepages
	resv_hugepages
	surplus_hugepages

which function as described above for the default huge page-sized case.

.. _mem_policy_and_hp_alloc:

Interaction of Task Memory Policy with Huge Page Allocation/Freeing
===================================================================

Whether huge pages are allocated and freed via the ``/proc`` interface or
the sysfs interface using the ``nr_hugepages_mempolicy`` attribute, the
NUMA nodes from which huge pages are allocated or freed are controlled by the
NUMA memory policy of the task that modifies the ``nr_hugepages_mempolicy``
sysctl or attribute.  When the ``nr_hugepages`` attribute is used, mempolicy
is ignored.

The recommended method to allocate or free huge pages to/from the kernel
huge page pool, using the ``nr_hugepages`` example above, is::

    numactl --interleave <node-list> echo 20 \
				>/proc/sys/vm/nr_hugepages_mempolicy

or, more succinctly::

    numactl -m <node-list> echo 20 >/proc/sys/vm/nr_hugepages_mempolicy

This will allocate or free ``abs(20 - nr_hugepages)`` to or from the nodes
specified in <node-list>, depending on whether the number of persistent huge
pages is initially less than or greater than 20, respectively.  No huge pages
will be allocated nor freed on any node not included in the specified
<node-list>.
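
The ``abs(20 - nr_hugepages)`` delta can be illustrated with two hypothetical
starting counts: 12 persistent pages leads to 8 being allocated, while 30
leads to 10 being freed::

	target=20
	for current in 12 30; do
	    # Positive delta means the kernel allocates; negative means it frees.
	    delta=$(( target - current ))
	    if [ "$delta" -gt 0 ]; then
	        echo "current=$current: allocate $delta pages"
	    else
	        echo "current=$current: free $(( -delta )) pages"
	    fi
	done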

When adjusting the persistent hugepage count via ``nr_hugepages_mempolicy``, any
memory policy mode--bind, preferred, local or interleave--may be used.  The
resulting effect on persistent huge page allocation is as follows:

#. Regardless of mempolicy mode [see
   :ref:`Documentation/admin-guide/mm/numa_memory_policy.rst <numa_memory_policy>`],
   persistent huge pages will be distributed across the node or nodes
   specified in the mempolicy as if "interleave" had been specified.
   However, if a node in the policy does not contain sufficient contiguous
   memory for a huge page, the allocation will not "fallback" to the nearest
   neighbor node with sufficient contiguous memory.  To do this would cause
   undesirable imbalance in the distribution of the huge page pool, or
   possibly, allocation of persistent huge pages on nodes not allowed by
   the task's memory policy.

#. One or more nodes may be specified with the bind or interleave policy.
   If more than one node is specified with the preferred policy, only the
   lowest numeric id will be used.  Local policy will select the node where
   the task is running at the time the nodes_allowed mask is constructed.
   For local policy to be deterministic, the task must be bound to a cpu or
   cpus in a single node.  Otherwise, the task could be migrated to some
   other node at any time after launch and the resulting node will be
   indeterminate.  Thus, local policy is not very useful for this purpose.
   Any of the other mempolicy modes may be used to specify a single node.

#. The nodes allowed mask will be derived from any non-default task mempolicy,
   whether this policy was set explicitly by the task itself or one of its
   ancestors, such as numactl.  This means that if the task is invoked from a
   shell with non-default policy, that policy will be used.  One can specify a
   node list of "all" with numactl --interleave or --membind [-m] to achieve
   interleaving over all nodes in the system or cpuset.

#. Any task mempolicy specified--e.g., using numactl--will be constrained by
   the resource limits of any cpuset in which the task runs.  Thus, there will
   be no way for a task with non-default policy running in a cpuset with a
   subset of the system nodes to allocate huge pages outside the cpuset
   without first moving to a cpuset that contains all of the desired nodes.

#. Boot-time huge page allocation attempts to distribute the requested number
   of huge pages over all on-line nodes with memory.

Per Node Hugepages Attributes
=============================

A subset of the contents of the root huge page control directory in sysfs,
described above, will be replicated under the system device directory of each
NUMA node with memory in::

	/sys/devices/system/node/node[0-9]*/hugepages/

Under this directory, the subdirectory for each supported huge page size
contains the following attribute files::

	nr_hugepages
	free_hugepages
	surplus_hugepages

The ``free_`` and ``surplus_`` attribute files are read-only.  They return
the number of free and surplus [overcommitted] huge pages, respectively, on
the parent node.

The ``nr_hugepages`` attribute returns the total number of huge pages on the
specified node.  When this attribute is written, the number of persistent huge
pages on the parent node will be adjusted to the specified value, if sufficient
resources exist, regardless of the task's mempolicy or cpuset constraints.

Note that the number of overcommit and reserve pages remain global quantities,
as we don't know until fault time, when the faulting task's mempolicy is
applied, from which node the huge page allocation will be attempted.

.. _using_huge_pages:

Using Huge Pages
================

If user applications are going to request huge pages using the mmap system
call, then the system administrator needs to mount a file system of type
hugetlbfs::

  mount -t hugetlbfs \
	-o uid=<value>,gid=<value>,mode=<value>,pagesize=<value>,size=<value>,\
	min_size=<value>,nr_inodes=<value> none /mnt/huge

This command mounts a (pseudo) filesystem of type hugetlbfs on the directory
``/mnt/huge``.  Any file created on ``/mnt/huge`` uses huge pages.

The ``uid`` and ``gid`` options set the owner and group of the root of the
file system.  By default the ``uid`` and ``gid`` of the current process
are taken.

The ``mode`` option sets the mode of the root of the file system to value &
01777.  This value is given in octal.  By default the value 0755 is picked.

If the platform supports multiple huge page sizes, the ``pagesize`` option can
be used to specify the huge page size and associated pool. ``pagesize``
is specified in bytes. If ``pagesize`` is not specified the platform's
default huge page size and associated pool will be used.

The ``size`` option sets the maximum amount of memory (huge pages) allowed
for that filesystem (``/mnt/huge``). The ``size`` option can be specified
in bytes, or as a percentage of the specified huge page pool (``nr_hugepages``).
The size is rounded down to the nearest HPAGE_SIZE boundary.

The ``min_size`` option sets the minimum amount of memory (huge pages) allowed
for the filesystem. ``min_size`` can be specified in the same way as ``size``,
either bytes or a percentage of the huge page pool.
At mount time, the number of huge pages specified by ``min_size`` are reserved
for use by the filesystem.
If there are not enough free huge pages available, the mount will fail.
As huge pages are allocated to the filesystem and freed, the reserve count
is adjusted so that the sum of allocated and reserved huge pages is always
at least ``min_size``.
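
The reserve accounting can be sketched with hypothetical numbers: with
``min_size`` equal to 10 huge pages, the reserve shrinks as pages are
allocated to the filesystem, so the sum never drops below 10::

	min_size=10
	for allocated in 0 4 10 15; do
	    # Reserve covers whatever is still needed to reach min_size.
	    reserved=$(( allocated >= min_size ? 0 : min_size - allocated ))
	    echo "allocated=$allocated reserved=$reserved"
	done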

The option ``nr_inodes`` sets the maximum number of inodes that ``/mnt/huge``
can use.

If the ``size``, ``min_size`` or ``nr_inodes`` option is not provided on the
command line, then no limits are set.

For the ``pagesize``, ``size``, ``min_size`` and ``nr_inodes`` options, you
can use [G|g]/[M|m]/[K|k] to represent giga/mega/kilo.
For example, size=2K has the same meaning as size=2048.
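
The suffix handling can be mirrored in a small shell helper; ``to_bytes`` is
a hypothetical function written here for illustration, not part of
hugetlbfs::

	# Expand a K/M/G-suffixed option value into bytes.
	to_bytes() {
	    case $1 in
	        *[Kk]) echo $(( ${1%?} * 1024 )) ;;
	        *[Mm]) echo $(( ${1%?} * 1024 * 1024 )) ;;
	        *[Gg]) echo $(( ${1%?} * 1024 * 1024 * 1024 )) ;;
	        *)     echo "$1" ;;
	    esac
	}
	to_bytes 2K    # prints 2048, matching the example above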

While read system calls are supported on files that reside on hugetlb
file systems, write system calls are not.

Regular chown, chgrp, and chmod commands (with the right permissions) can be
used to change the file attributes on hugetlbfs.

Also, it is important to note that no such mount command is required if
applications are going to use only shmat/shmget system calls or mmap with
MAP_HUGETLB.  For an example of how to use mmap with MAP_HUGETLB see
:ref:`map_hugetlb <map_hugetlb>` below.

Users who wish to use hugetlb memory via a shared memory segment should be
members of a supplementary group, and the system administrator needs to
configure that gid into ``/proc/sys/vm/hugetlb_shm_group``.  It is possible
for the same or different applications to use any combination of mmaps and
shm* calls, though the mount of the filesystem will be required for using
mmap calls without MAP_HUGETLB.

Syscalls that operate on memory backed by hugetlb pages only have their lengths
aligned to the native page size of the processor; they will normally fail with
errno set to EINVAL or exclude hugetlb pages that extend beyond the length if
not hugepage aligned.  For example, munmap(2) will fail if memory is backed by
a hugetlb page and the length is smaller than the hugepage size.


Examples
========

.. _map_hugetlb:

``map_hugetlb``
	see tools/testing/selftests/vm/map_hugetlb.c

``hugepage-shm``
	see tools/testing/selftests/vm/hugepage-shm.c

``hugepage-mmap``
	see tools/testing/selftests/vm/hugepage-mmap.c

The `libhugetlbfs`_ library provides a wide range of userspace tools
to help with huge page usability, environment setup, and control.

.. _libhugetlbfs: https://github.com/libhugetlbfs/libhugetlbfs
429