===============================
Documentation for /proc/sys/vm/
===============================

kernel version 2.6.29

Copyright (c) 1998, 1999,  Rik van Riel <riel@nl.linux.org>

Copyright (c) 2008         Peter W. Morreale <pmorreale@novell.com>

For general info and legal blurb, please look in index.rst.

------------------------------------------------------------------------------

This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.6.29.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.

Currently, these files are in /proc/sys/vm:

- admin_reserve_kbytes
- block_dump
- compact_memory
- compaction_proactiveness
- compact_unevictable_allowed
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirtytime_expire_seconds
- dirty_writeback_centisecs
- drop_caches
- extfrag_threshold
- extra_free_kbytes
- highmem_is_dirtyable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- mmap_rnd_bits
- mmap_rnd_compat_bits
- nr_hugepages
- nr_hugepages_mempolicy
- nr_overcommit_hugepages
- nr_trim_pages         (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_kbytes
- overcommit_memory
- overcommit_ratio
- page-cluster
- panic_on_oom
- percpu_pagelist_fraction
- stat_interval
- stat_refresh
- numa_stat
- swappiness
- unprivileged_userfaultfd
- user_reserve_kbytes
- vfs_cache_pressure
- watermark_boost_factor
- watermark_scale_factor
- zone_reclaim_mode


admin_reserve_kbytes
====================

The amount of free memory in the system that should be reserved for users
with the capability cap_sys_admin.

admin_reserve_kbytes defaults to min(3% of free pages, 8MB)

That should provide enough for the admin to log in and kill a process,
if necessary, under the default overcommit 'guess' mode.

Systems running under overcommit 'never' should increase this to account
for the full Virtual Memory Size of programs used to recover. Otherwise,
root may not be able to log in to recover the system.

How do you calculate a minimum useful reserve?

sshd or login + bash (or some other shell) + top (or ps, kill, etc.)

For overcommit 'guess', we can sum resident set sizes (RSS).
On x86_64 this is about 8MB.

For overcommit 'never', we can take the max of their virtual sizes (VSZ)
and add the sum of their RSS.
On x86_64 this is about 128MB.

Changing this takes effect whenever an application requests memory.


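For example, to raise the reserve to the 128MB suggested for overcommit
'never' mode (a sketch; size it to the VSZ and RSS of your own recovery
tools)::

	echo 131072 > /proc/sys/vm/admin_reserve_kbytes
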
block_dump
==========

block_dump enables block I/O debugging when set to a nonzero value. More
information on block I/O debugging is in Documentation/admin-guide/laptops/laptop-mode.rst.


compact_memory
==============

Available only when CONFIG_COMPACTION is set. When 1 is written to the file,
all zones are compacted such that free memory is available in contiguous
blocks where possible. This can be important for example in the allocation of
huge pages although processes will also directly compact memory as required.

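For example, to trigger a one-off compaction of all zones::

	echo 1 > /proc/sys/vm/compact_memory
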
compaction_proactiveness
========================

This tunable takes a value in the range [0, 100] with a default value of
20. This tunable determines how aggressively compaction is done in the
background. Writing a non-zero value to this tunable will immediately
trigger proactive compaction. Setting it to 0 disables proactive compaction.

Note that compaction has a non-trivial system-wide impact as pages
belonging to different processes are moved around, which could also lead
to latency spikes in unsuspecting applications. The kernel employs
various heuristics to avoid wasting CPU cycles if it detects that
proactive compaction is not being effective.

Be careful when setting it to extreme values like 100, as that may
cause excessive background compaction activity.

compact_unevictable_allowed
===========================

Available only when CONFIG_COMPACTION is set. When set to 1, compaction is
allowed to examine the unevictable lru (mlocked pages) for pages to compact.
This should be used on systems where stalls for minor page faults are an
acceptable trade for large contiguous free memory.  Set to 0 to prevent
compaction from moving pages that are unevictable.  Default value is 1.
On CONFIG_PREEMPT_RT the default value is 0 in order to avoid a page fault,
due to compaction, which would block the task from becoming active until
the fault is resolved.


dirty_background_bytes
======================

Contains the amount of dirty memory at which the background kernel
flusher threads will start writeback.

Note:
  dirty_background_bytes is the counterpart of dirty_background_ratio. Only
  one of them may be specified at a time. When one sysctl is written it is
  immediately taken into account to evaluate the dirty memory limits and the
  other appears as 0 when read.


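For example, setting a 256MB byte limit takes effect immediately, and the
ratio interface then reads back as 0 (the value here is only illustrative)::

	echo 268435456 > /proc/sys/vm/dirty_background_bytes
	cat /proc/sys/vm/dirty_background_ratio
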
dirty_background_ratio
======================

Contains, as a percentage of total available memory that contains free pages
and reclaimable pages, the number of pages at which the background kernel
flusher threads will start writing out dirty data.

The total available memory is not equal to total system memory.


dirty_bytes
===========

Contains the amount of dirty memory at which a process generating disk writes
will itself start writeback.

Note: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be
specified at a time. When one sysctl is written it is immediately taken into
account to evaluate the dirty memory limits and the other appears as 0 when
read.

Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
value lower than this limit will be ignored and the old configuration will be
retained.


dirty_expire_centisecs
======================

This tunable is used to define when dirty data is old enough to be eligible
for writeout by the kernel flusher threads.  It is expressed in 100'ths
of a second.  Data which has been dirty in-memory for longer than this
interval will be written out next time a flusher thread wakes up.


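For example, to make dirty data eligible for writeout after 15 seconds::

	echo 1500 > /proc/sys/vm/dirty_expire_centisecs
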
dirty_ratio
===========

Contains, as a percentage of total available memory that contains free pages
and reclaimable pages, the number of pages at which a process which is
generating disk writes will itself start writing out dirty data.

The total available memory is not equal to total system memory.


dirtytime_expire_seconds
========================

When a lazytime inode is constantly having its pages dirtied, the inode with
an updated timestamp will never get a chance to be written out.  And, if the
only thing that has happened on the file system is a dirtytime inode caused
by an atime update, a worker will be scheduled to make sure that inode
eventually gets pushed out to disk.  This tunable is used to define when a
dirty inode is old enough to be eligible for writeback by the kernel flusher
threads.  It is also used as the interval at which the dirtytime_writeback
thread wakes up.


dirty_writeback_centisecs
=========================

The kernel flusher threads will periodically wake up and write `old` data
out to disk.  This tunable expresses the interval between those wakeups, in
100'ths of a second.

Setting this to zero disables periodic writeback altogether.


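For example, to wake the flusher threads every five seconds::

	echo 500 > /proc/sys/vm/dirty_writeback_centisecs
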
drop_caches
===========

Writing to this will cause the kernel to drop clean caches, as well as
reclaimable slab objects like dentries and inodes.  Once dropped, their
memory becomes free.

To free pagecache::

	echo 1 > /proc/sys/vm/drop_caches

To free reclaimable slab objects (includes dentries and inodes)::

	echo 2 > /proc/sys/vm/drop_caches

To free slab objects and pagecache::

	echo 3 > /proc/sys/vm/drop_caches

This is a non-destructive operation and will not free any dirty objects.
To increase the number of objects freed by this operation, the user may run
`sync` prior to writing to /proc/sys/vm/drop_caches.  This will minimize the
number of dirty objects on the system and create more candidates to be
dropped.

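For example, to write back dirty data first and then drop both slab
objects and the pagecache::

	sync; echo 3 > /proc/sys/vm/drop_caches
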
This file is not a means to control the growth of the various kernel caches
(inodes, dentries, pagecache, etc...)  These objects are automatically
reclaimed by the kernel when memory is needed elsewhere on the system.

Use of this file can cause performance problems.  Since it discards cached
objects, it may cost a significant amount of I/O and CPU to recreate the
dropped objects, especially if they were under heavy use.  Because of this,
use outside of a testing or debugging environment is not recommended.

You may see informational messages in your kernel log when this file is
used::

	cat (1234): drop_caches: 3

These are informational only.  They do not mean that anything is wrong
with your system.  To disable them, echo 4 (bit 2) into drop_caches.


extfrag_threshold
=================

This parameter affects whether the kernel will compact memory or direct
reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in
debugfs shows what the fragmentation index for each order is in each zone in
the system. Values tending towards 0 imply allocations would fail due to lack
of memory, values towards 1000 imply failures are due to fragmentation and -1
implies that the allocation will succeed as long as watermarks are met.

The kernel will not compact memory in a zone if the
fragmentation index is <= extfrag_threshold. The default value is 500.


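The current index values can be inspected before tuning, assuming debugfs
is mounted in its usual location::

	cat /sys/kernel/debug/extfrag/extfrag_index
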
highmem_is_dirtyable
====================

Available only for systems with CONFIG_HIGHMEM enabled (32-bit systems).

This parameter controls whether the high memory is considered for dirty
writers throttling.  This is not the case by default which means that
only the amount of memory directly visible/usable by the kernel can
be dirtied. As a result, on systems with a large amount of memory and
lowmem basically depleted, writers might be throttled too early and
streaming writes can get very slow.

Changing the value to non-zero would allow more memory to be dirtied
and thus allow writers to write more data which can be flushed to the
storage more effectively. Note this also comes with a risk of premature
OOM killer invocation because some writers (e.g. direct block device
writes) can only use the low memory and they can fill it up with dirty
data without any throttling.


extra_free_kbytes
=================

This parameter tells the VM to keep extra free memory between the threshold
where background reclaim (kswapd) kicks in, and the threshold where direct
reclaim (by allocating processes) kicks in.

This is useful for workloads that require low latency memory allocations
and have a bounded burstiness in memory allocations. For example, a
realtime application that receives and transmits network traffic
(causing in-kernel memory allocations) with a maximum total message burst
size of 200MB may need 200MB of extra free memory to avoid direct reclaim
related latencies.


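Following the example above, reserving an extra 200MB (204800kB) would
look like::

	echo 204800 > /proc/sys/vm/extra_free_kbytes
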
hugetlb_shm_group
=================

hugetlb_shm_group contains the group ID that is allowed to create SysV
shared memory segments using hugetlb pages.


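For example, to allow a hypothetical group with gid 1001 to create such
segments::

	echo 1001 > /proc/sys/vm/hugetlb_shm_group
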
laptop_mode
===========

laptop_mode is a knob that controls "laptop mode". All the things that are
controlled by this knob are discussed in Documentation/admin-guide/laptops/laptop-mode.rst.


legacy_va_layout
================

If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
will use the legacy (2.4) layout for all processes.


lowmem_reserve_ratio
====================

For some specialised workloads on highmem machines it is dangerous for
the kernel to allow process memory to be allocated from the "lowmem"
zone.  This is because that memory could then be pinned via the mlock()
system call, or by unavailability of swapspace.

And on large highmem machines this lack of reclaimable lowmem memory
can be fatal.

So the Linux page allocator has a mechanism which prevents allocations
which *could* use highmem from using too much lowmem.  This means that
a certain amount of lowmem is defended from the possibility of being
captured into pinned user memory.

(The same argument applies to the old 16 megabyte ISA DMA region.  This
mechanism will also defend that region from allocations which could use
highmem or lowmem).

The `lowmem_reserve_ratio` tunable determines how aggressive the kernel is
in defending these lower zones.

If you have a machine which uses highmem or ISA DMA and your
applications are using mlock(), or if you are running with no swap then
you probably should change the lowmem_reserve_ratio setting.

The lowmem_reserve_ratio is an array. You can see it by reading this file::

	% cat /proc/sys/vm/lowmem_reserve_ratio
	256     256     32

But these values are not used directly. The kernel calculates the number
of protection pages for each zone from them. These are shown as an array
of protection pages in /proc/zoneinfo like the following. (This is an
example from an x86-64 box.) Each zone has an array of protection pages
like this::

  Node 0, zone      DMA
    pages free     1355
          min      3
          low      3
          high     4
	:
	:
      numa_other   0
          protection: (0, 2004, 2004, 2004)
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    pagesets
      cpu: 0 pcp: 0
          :

These protection values are added to the watermark when judging whether a
zone should be used for page allocation or should be reclaimed.

In this example, if normal pages (index=2) are required from this DMA zone
and watermark[WMARK_HIGH] is used as the watermark, the kernel judges that
this zone should not be used because pages_free (1355) is smaller than
watermark + protection[2] (4 + 2004 = 2008). If this protection value were
0, this zone would be used for a normal page request. If the request is for
the DMA zone (index=0), protection[0] (=0) is used.

zone[i]'s protection[j] is calculated by the following expression::

  (i < j):
    zone[i]->protection[j]
    = (total sum of managed_pages from zone[i+1] to zone[j] on the node)
      / lowmem_reserve_ratio[i];
  (i = j):
    (should not be protected. = 0;)
  (i > j):
    (not necessary, but looks 0)

The default values of lowmem_reserve_ratio[i] are

    === ====================================
    256 (if zone[i] means DMA or DMA32 zone)
    32  (others)
    === ====================================

As the expression above shows, these values are the reciprocals of the
ratios. 256 means 1/256, so the number of protection pages becomes about
0.39% of the total managed pages of the higher zones on the node.

If you would like to protect more pages, smaller values are effective.
The minimum value is 1 (1/1 -> 100%). A value less than 1 completely
disables protection of the pages.


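The array can be rewritten by passing whitespace-separated values back (a
sketch; the ratios here are purely illustrative)::

	echo 128 128 16 > /proc/sys/vm/lowmem_reserve_ratio
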
max_map_count
=============

This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap, mprotect, and madvise, and also when loading
shared libraries.

While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.

The default value is 65536.


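For example, a map-hungry application such as a malloc debugger may need
the limit raised::

	echo 262144 > /proc/sys/vm/max_map_count
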
memory_failure_early_kill
=========================

Control how to kill processes when an uncorrected memory error (typically
a 2-bit error in a memory module) that cannot be handled by the kernel is
detected in the background by hardware. In some cases (like the page
still having a valid copy on disk) the kernel will handle the failure
transparently without affecting any applications. But if there is
no other up-to-date copy of the data, it will kill processes to prevent
any data corruption from propagating.

1: Kill all processes that have the corrupted and not reloadable page mapped
as soon as the corruption is detected.  Note this is not supported
for a few types of pages, like kernel internally allocated data or
the swap cache, but works for the majority of user pages.

0: Only unmap the corrupted page from all processes and only kill a process
who tries to access it.

The kill is done using a catchable SIGBUS with BUS_MCEERR_AO, so processes can
handle this if they want to.

This is only active on architectures/platforms with advanced machine
check handling and depends on the hardware capabilities.

Applications can override this setting individually with the PR_MCE_KILL
prctl.


memory_failure_recovery
=======================

Enable memory failure recovery (when supported by the platform).

1: Attempt recovery.

0: Always panic on a memory failure.


min_free_kbytes
===============

This is used to force the Linux VM to keep a minimum number
of kilobytes free.  The VM uses this number to compute a
watermark[WMARK_MIN] value for each lowmem zone in the system.
Each lowmem zone gets a number of reserved free pages based
proportionally on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024KB, your system will
become subtly broken, and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.


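For example, to inspect the current reserve and then raise it to 64MB
(sized to your workload; see the warnings above)::

	cat /proc/sys/vm/min_free_kbytes
	echo 65536 > /proc/sys/vm/min_free_kbytes
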
min_slab_ratio
==============

This is available only on NUMA kernels.

A percentage of the total pages in each zone.  On zone reclaim (i.e. when
fallback from the local zone occurs), slabs will be reclaimed if more
than this percentage of pages in a zone are reclaimable slab pages.
This ensures that the slab growth stays under control even in NUMA
systems that rarely perform global reclaim.

The default is 5 percent.

Note that slab reclaim is triggered in a per zone / node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.


min_unmapped_ratio
==================

This is available only on NUMA kernels.

This is a percentage of the total pages in each zone. Zone reclaim will
only occur if more than this percentage of pages are in a state that
zone_reclaim_mode allows to be reclaimed.

If zone_reclaim_mode has the value 4 OR'd, then the percentage is compared
against all file-backed unmapped pages including swapcache pages and tmpfs
files. Otherwise, only unmapped pages backed by normal files but not tmpfs
files and similar are considered.

The default is 1 percent.


mmap_min_addr
=============

This file indicates the amount of address space which a user process will
be restricted from mmapping.  Since kernel null dereference bugs could
accidentally operate based on the information in the first couple of pages
of memory, userspace processes should not be allowed to write to them.  By
default this value is set to 0 and no protections will be enforced by the
security module.  Setting this value to something like 64k will allow the
vast majority of applications to work correctly and provide defense in depth
against future potential kernel bugs.


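For example, to enforce the 64k floor suggested above::

	echo 65536 > /proc/sys/vm/mmap_min_addr
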
mmap_rnd_bits
=============

This value can be used to select the number of bits to use to
determine the random offset to the base address of vma regions
resulting from mmap allocations on architectures which support
tuning address space randomization.  This value will be bounded
by the architecture's minimum and maximum supported values.

This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_bits tunable.


mmap_rnd_compat_bits
====================

This value can be used to select the number of bits to use to
determine the random offset to the base address of vma regions
resulting from mmap allocations for applications run in
compatibility mode on architectures which support tuning address
space randomization.  This value will be bounded by the
architecture's minimum and maximum supported values.

This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_compat_bits tunable.


nr_hugepages
============

Change the minimum size of the hugepage pool.

See Documentation/admin-guide/mm/hugetlbpage.rst


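For example, to request a pool of 128 default-sized huge pages and verify
how many were actually allocated (allocation may fall short if memory is
fragmented)::

	echo 128 > /proc/sys/vm/nr_hugepages
	grep HugePages_Total /proc/meminfo
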
nr_hugepages_mempolicy
======================

Change the size of the hugepage pool at run-time on a specific
set of NUMA nodes.

See Documentation/admin-guide/mm/hugetlbpage.rst


nr_overcommit_hugepages
=======================

Change the maximum size of the hugepage pool. The maximum is
nr_hugepages + nr_overcommit_hugepages.

See Documentation/admin-guide/mm/hugetlbpage.rst


nr_trim_pages
=============

This is available only on NOMMU kernels.

This value adjusts the excess page trimming behaviour of power-of-2 aligned
NOMMU mmap allocations.

A value of 0 disables trimming of allocations entirely, while a value of 1
trims excess pages aggressively. Any value >= 1 acts as the watermark where
trimming of allocations is initiated.

The default value is 1.

See Documentation/admin-guide/mm/nommu-mmap.rst for more information.


numa_zonelist_order
===================

This sysctl is only for NUMA and it is deprecated. Anything but
Node order will fail!

'where the memory is allocated from' is controlled by zonelists.

(This documentation ignores ZONE_HIGHMEM/ZONE_DMA32 for a simple
explanation; you may be able to read ZONE_DMA as ZONE_DMA32.)

In the non-NUMA case, the zonelist for GFP_KERNEL is ordered as
ZONE_NORMAL -> ZONE_DMA.
This means that a memory allocation request for GFP_KERNEL will
get memory from ZONE_DMA only when ZONE_NORMAL is not available.

In the NUMA case, you can think of the following two types of order.
Assume a 2-node NUMA system; below is the zonelist of Node(0)'s GFP_KERNEL::

  (A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
  (B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.

Type(A) offers the best locality for processes on Node(0), but ZONE_DMA
will be used before ZONE_NORMAL is exhausted. This increases the possibility
of out-of-memory (OOM) in ZONE_DMA because ZONE_DMA tends to be small.

Type(B) cannot offer the best locality but is more robust against OOM of
the DMA zone.

Type(A) is called "Node" order. Type(B) is "Zone" order.

"Node order" orders the zonelists by node, then by zone within each node.
Specify "[Nn]ode" for node order.

"Zone order" orders the zonelists by zone type, then by node within each
zone.  Specify "[Zz]one" for zone order.

Specify "[Dd]efault" to request automatic configuration.

On 32-bit, the Normal zone needs to be preserved for allocations accessible
by the kernel, so "zone" order will be selected.

On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
order will be selected.

Default order is recommended unless this is causing problems for your
system/application.


oom_dump_tasks
==============

Enables a system-wide task dump (excluding kernel threads) to be produced
when the kernel performs an OOM-killing and includes such information as
pid, uid, tgid, vm size, rss, pgtables_bytes, swapents, oom_score_adj
score, and name.  This is helpful to determine why the OOM killer was
invoked, to identify the rogue task that caused it, and to determine why
the OOM killer chose the task it did to kill.

If this is set to zero, this information is suppressed.  On very
large systems with thousands of tasks it may not be feasible to dump
the memory state information for each one.  Such systems should not
be forced to incur a performance penalty in OOM conditions when the
information may not be desired.

If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.

The default value is 1 (enabled).


oom_kill_allocating_task
========================

This enables or disables killing the OOM-triggering task in
out-of-memory situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill.  This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition.  This avoids the expensive
tasklist scan.

If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.

The default value is 0.


overcommit_kbytes
=================

When overcommit_memory is set to 2, the committed address space is not
permitted to exceed swap plus this amount of physical RAM. See below.

Note: overcommit_kbytes is the counterpart of overcommit_ratio. Only one
of them may be specified at a time. Setting one disables the other (which
then appears as 0 when read).


overcommit_memory
=================

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel attempts to estimate the amount
of free memory left when userspace requests more memory.

When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.
Note that user_reserve_kbytes affects this policy.

This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.

The default value is 0.

See Documentation/vm/overcommit-accounting.rst and
mm/util.c::__vm_enough_memory() for more information.


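For example, to select the 'never overcommit' policy with a commit limit
of swap plus 80% of RAM (overcommit_ratio is described below)::

	echo 2 > /proc/sys/vm/overcommit_memory
	echo 80 > /proc/sys/vm/overcommit_ratio
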
overcommit_ratio
================

When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM.  See above.


page-cluster
============

page-cluster controls the number of pages up to which consecutive pages
are read in from swap in a single attempt. This is the swap counterpart
to page cache readahead.
The mentioned consecutivity is not in terms of virtual/physical addresses,
but consecutive in swap space - meaning that the pages were swapped out
together.

It is a logarithmic value - setting it to zero means "1 page", setting
it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
Zero disables swap readahead completely.

The default value is three (eight pages at a time).  There may be some
small benefits in tuning this to a different value if your workload is
swap-intensive.

Lower values mean lower latencies for initial faults, but at the same time
extra faults and I/O delays for following faults if they would have been
part of the consecutive pages that readahead would have brought in.


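For example, on low-latency swap devices such as zram, readahead is often
disabled entirely::

	echo 0 > /proc/sys/vm/page-cluster
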
panic_on_oom
============

This enables or disables the panic on out-of-memory feature.

If this is set to 0, the kernel will kill some rogue process via the
OOM killer.  Usually the OOM killer can kill a rogue process and the
system will survive.

If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits its allocations to certain nodes using
mempolicy or cpusets, and those nodes reach memory exhaustion, one
process may be killed by the OOM killer and no panic occurs, because
other nodes' memory may still be free and the system as a whole may
not be in a fatal state yet.

If this is set to 2, the kernel panics unconditionally even in the
above-mentioned cases. Even when the OOM happens under a memory
cgroup, the whole system panics.

The default value is 0.

1 and 2 are for failover of clustering. Please select either
according to your policy of failover.

panic_on_oom=2 combined with kdump gives you a very strong tool to
investigate why the OOM happened: you can obtain a memory snapshot.


percpu_pagelist_fraction
========================

This is the fraction of pages at most (high mark pcp->high) in each zone that
are allocated for each per cpu page list.  The min value for this is 8.  It
means that we don't allow more than 1/8th of pages in each zone to be
allocated in any single per_cpu_pagelist.  This entry only changes the value
of hot per cpu pagelists.  A user can specify a number like 100 to allocate
1/100th of each zone to each per cpu page list.

The batch value of each per cpu pagelist is also updated as a result.  It is
set to pcp->high/4.  The upper limit of batch is (PAGE_SHIFT * 8).

The initial value is zero.  The kernel does not use this value at boot time
to set the high water marks for each per cpu page list.  If the user writes
'0' to this sysctl, it will revert to this default behavior.


stat_interval
=============

The time interval at which vm statistics are updated.  The default
is 1 second.


stat_refresh
============

Any read or write (by root only) flushes all the per-cpu vm statistics
into their global totals, for more accurate reports when testing,
e.g.::

	cat /proc/sys/vm/stat_refresh /proc/meminfo

As a side-effect, it also checks for negative totals (elsewhere reported
as 0) and "fails" with EINVAL if any are found, with a warning in dmesg.
(At the time of writing, a few stats are known sometimes to be found
negative, with no ill effects: errors and warnings on these stats are
suppressed.)


numa_stat
=========

This interface allows runtime configuration of numa statistics.

When page allocation performance becomes a bottleneck and you can tolerate
some possible tool breakage and decreased numa counter precision, you can
do::

	echo 0 > /proc/sys/vm/numa_stat

When page allocation performance is not a bottleneck and you want all
tooling to work, you can do::

	echo 1 > /proc/sys/vm/numa_stat


swappiness
==========

This control is used to define the rough relative IO cost of swapping
and filesystem paging, as a value between 0 and 200. At 100, the VM
assumes equal IO cost and will thus apply memory pressure to the page
cache and swap-backed pages equally; lower values signify more
expensive swap IO, higher values indicate cheaper swap IO.

Keep in mind that filesystem IO patterns under memory pressure tend to
be more efficient than swap's random IO. An optimal value will require
experimentation and will also be workload-dependent.

The default value is 60.

For in-memory swap, like zram or zswap, as well as hybrid setups that
have swap on faster devices than the filesystem, values beyond 100 can
be considered. For example, if the random IO against the swap device
is on average 2x faster than IO from the filesystem, swappiness should
be 133 (x + 2x = 200, 2x = 133.33).

At 0, the kernel will not initiate swap until the amount of free and
file-backed pages is less than the high watermark in a zone.


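For example, for the hybrid setup described above, where swap IO is on
average twice as fast as filesystem IO::

	echo 133 > /proc/sys/vm/swappiness
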
unprivileged_userfaultfd
========================

This flag controls the mode in which unprivileged users can use the
userfaultfd system calls. Set this to 0 to restrict unprivileged users
to handle page faults in user mode only. In this case, users without
CAP_SYS_PTRACE must pass UFFD_USER_MODE_ONLY in order for userfaultfd to
succeed. Prohibiting use of userfaultfd for handling faults from kernel
mode may make certain vulnerabilities more difficult to exploit.

Set this to 1 to allow unprivileged users to use the userfaultfd system
calls without any restrictions.

The default value is 0.


user_reserve_kbytes
===================

When overcommit_memory is set to 2, "never overcommit" mode, reserve
min(3% of current process size, user_reserve_kbytes) of free memory.
This is intended to prevent a user from starting a single memory hogging
process, such that they cannot recover (kill the hog).

user_reserve_kbytes defaults to min(3% of the current process size, 128MB).

If this is reduced to zero, then the user will be allowed to allocate
all free memory with a single process, minus admin_reserve_kbytes.
Any subsequent attempts to execute a command will result in
"fork: Cannot allocate memory".

Changing this takes effect whenever an application requests memory.


vfs_cache_pressure
==================

This percentage value controls the tendency of the kernel to reclaim
the memory which is used for caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
swapcache reclaim.  Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.

Increasing vfs_cache_pressure significantly beyond 100 may have negative
performance impact. Reclaim code needs to take various locks to find freeable
directory and inode objects. With vfs_cache_pressure=1000, it will look for
ten times more freeable objects than there are.


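For example, to bias the kernel toward retaining dentry and inode caches::

	echo 50 > /proc/sys/vm/vfs_cache_pressure
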
watermark_boost_factor
======================

This factor controls the level of reclaim when memory is being fragmented.
It defines the percentage of the high watermark of a zone that will be
reclaimed if pages of different mobility are being mixed within pageblocks.
The intent is that compaction has less work to do in the future and to
increase the success rate of future high-order allocations such as SLUB
allocations, THP and hugetlbfs pages.

To make it sensible with respect to the watermark_scale_factor
parameter, the unit is in fractions of 10,000. The default value of
15,000 on !DISCONTIGMEM configurations means that up to 150% of the high
watermark will be reclaimed in the event of a pageblock being mixed due
to fragmentation. The level of reclaim is determined by the number of
fragmentation events that occurred in the recent past. If this value is
smaller than a pageblock then a pageblock's worth of pages will be reclaimed
(e.g.  2MB on 64-bit x86). A boost factor of 0 will disable the feature.


watermark_scale_factor
======================

This factor controls the aggressiveness of kswapd. It defines the
amount of memory left in a node/system before kswapd is woken up and
how much memory needs to be free before kswapd goes back to sleep.

The unit is in fractions of 10,000. The default value of 10 means the
distances between watermarks are 0.1% of the available memory in the
node/system. The maximum value is 1000, or 10% of memory.

A high rate of threads entering direct reclaim (allocstall) or kswapd
going to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate
that the number of free pages kswapd maintains for latency reasons is
too small for the allocation bursts occurring in the system. This knob
can then be used to tune kswapd aggressiveness accordingly.


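For example, to widen the gap between watermarks to 1% of node/system
memory::

	echo 100 > /proc/sys/vm/watermark_scale_factor
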
zone_reclaim_mode
=================

Zone_reclaim_mode allows someone to set more or less aggressive approaches to
reclaim memory when a zone runs out of memory. If it is set to zero then no
zone reclaim occurs. Allocations will be satisfied from other zones / nodes
in the system.

The value is an OR'ed combination of

=	===================================
1	Zone reclaim on
2	Zone reclaim writes dirty pages out
4	Zone reclaim swaps pages
=	===================================

zone_reclaim_mode is disabled by default.  For file servers or workloads
that benefit from having their data cached, zone_reclaim_mode should be
left disabled as the caching effect is likely to be more important than
data locality.

Consider enabling one or more zone_reclaim mode bits if it's known that the
workload is partitioned such that each partition fits within a NUMA node
and that accessing remote memory would cause a measurable performance
reduction.  The page allocator will take additional actions before
allocating off node pages.

Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes. Zone
reclaim will write out dirty pages if a zone fills up, thus effectively
throttling the process. This may decrease the performance of a single
process, since it can no longer use all of system memory to buffer its
outgoing writes, but it preserves the memory on other nodes so that the
performance of other processes running on other nodes will not be
affected.

Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.
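
For example, to enable plain zone reclaim (bit 1 only) for a workload
partitioned per NUMA node::

	echo 1 > /proc/sys/vm/zone_reclaim_mode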