================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2.  It describes all userland-visible aspects
of cgroup including core and specific controller behaviors.  All
future changes must be reflected in this document.  Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. HugeTLB
       5-8-1. HugeTLB Interface Files
     5-9. Misc
       5-9-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized.  The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers".  When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes.  A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup.  All threads of a process belong to the
same cgroup.  On creation, all processes are put in the cgroup that
the parent process belongs to at the time.  A process can be migrated
to another cgroup.  Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup.  All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups constituting the inclusive
sub-hierarchy of the cgroup.  When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further.  The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy.  The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies.  This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy.  Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use.  It is recommended to decide
the hierarchies and controller associations before starting to use the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible.  To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.

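For example, booting with the following kernel command-line parameter
keeps all controllers off the v1 hierarchies so that they are
available in v2; a comma-separated list of controller names can be
used instead of "all" to disable only specific controllers::

  cgroup_no_v1=all
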
cgroup v2 currently supports the following mount options.

  nsdelegate

        Consider cgroup namespaces as delegation boundaries.  This
        option is system wide and can only be set on mount or modified
        through remount from the init namespace.  The mount option is
        ignored on non-init namespace mounts.  Please refer to the
        Delegation section for details.

  memory_localevents

        Only populate memory.events with data for the current cgroup,
        and not any subtrees.  This is the legacy behaviour; the
        default behaviour without this option is to include subtree
        counts.  This option is system wide and can only be set on
        mount or modified through remount from the init namespace.
        The mount option is ignored on non-init namespace mounts.

  memory_recursiveprot

        Recursively apply memory.min and memory.low protection to
        entire subtrees, without requiring explicit downward
        propagation into leaf cgroups.  This allows protecting entire
        subtrees from one another, while retaining free competition
        within those subtrees.  This should have been the default
        behavior but is a mount-option to avoid regressing setups
        relying on the original semantics (e.g. specifying bogusly
        high 'bypass' protection values at higher tree levels).


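As an illustration, the options are passed at mount time in the usual
way; for example, to mount the hierarchy with namespace delegation
enabled::

  # mount -t cgroup2 -o nsdelegate none $MOUNT_POINT
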
Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure.  Each cgroup has a read-writable interface file
"cgroup.procs".  When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line.  The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file.  Only one process can be migrated
on a single write(2) call.  If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.

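For example, a minimal sketch of moving the current shell into a
hypothetical child cgroup (note that "$$" expands to the shell's own
PID)::

  # echo $$ > $CGROUP_NAME/cgroup.procs
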
When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation.  After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory.  Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy.  The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes.  By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread.  The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup.  The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy.  The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded.  Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file.  The
operation is one-way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again.  To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children.  The root is
  exempt from this requirement.

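A minimal sketch of setting up a threaded subtree under a hypothetical
domain cgroup, assuming the conditions above hold::

  # mkdir threaded-child
  # echo threaded > threaded-child/cgroup.type
  # cat threaded-child/cgroup.type
  threaded
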
Topology-wise, a cgroup can be in an invalid state.  Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains.  C can't be used until it is turned into a
threaded cgroup.  "cgroup.type" file will report "domain (invalid)" in
these cases.  Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup.  Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs".  While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree.  When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants.  All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups.  Each
threaded controller defines how such competitions are handled.


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it.  Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1.  poll and [id]notify
events are triggered when the value changes.  This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited.  The populated state updates and
notifications are recursive.  Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's would be 0.
After the one process in C exits, B and C's "populated" fields would
flip to "0" and file modified events will be generated on the
"cgroup.events" files of both cgroups.


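As a sketch, the state can be read directly from the file and, on
systems with the inotify-tools package installed, a monitoring helper
could block until the next change::

  # cat B/cgroup.events
  populated 1
  frozen 0
  # inotifywait -e modify B/cgroup.events
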
Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default.  Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled.  When multiple operations are specified as above, they either
all succeed or all fail.  If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy.  The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B.  As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups.  In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D.  Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D.  This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.


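Continuing the example, a sketch run from inside B of toggling "cpu"
and watching one of C's interface files appear and disappear::

  # echo "+cpu" > cgroup.subtree_control
  # ls C/cpu.weight
  C/cpu.weight
  # echo "-cpu" > cgroup.subtree_control
  # ls C/cpu.weight
  ls: cannot access 'C/cpu.weight': No such file or directory
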
Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent.  This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file.  A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own.  In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves.  This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction.  Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers.  How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control".  This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup.  To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.


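A minimal sketch of that transfer, moving every process of the current
cgroup into a hypothetical "leaf" child before enabling a domain
controller::

  # mkdir leaf
  # for pid in $(cat cgroup.procs); do echo $pid > leaf/cgroup.procs; done
  # echo "+memory" > cgroup.subtree_control
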
Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways.  First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

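A minimal sketch of the first method, delegating a hypothetical
sub-hierarchy to user U0::

  # mkdir delegated
  # chown U0 delegated delegated/cgroup.procs \
        delegated/cgroup.threads delegated/cgroup.subtree_control
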
Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them.  For the first method, this is
achieved by not granting access to these files.  For the second, the
kernel rejects writes to all files other than "cgroup.procs" and
"cgroup.subtree_control" on a namespace root from inside the
namespace.

The end results are equivalent for both delegation types.  Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent.  The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

For an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs".  U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation, so U0 does
not have write access to its "cgroup.procs" file and the write will be
denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration.  If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process.  This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged.  A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up.  Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its child cgroups occupy the same
directory and it is possible to create child cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot.  A controller's name is composed of lowercase letters and
'_'s but never begins with an '_' so it can be used as the prefix
character for collision avoidance.  Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases.  This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum.  As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving.  Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100.  This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.


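For example, a sketch with two hypothetical active children c1 and c2:
setting c1's weight to 200 while c2 keeps the default 100 gives c1
200 / (200 + 100) = 2/3 of the parent's CPU cycles and c2 the
remaining 1/3::

  # echo 200 > c1/cpu.weight
  # cat c2/cpu.weight
  100
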
Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is noop.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.


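For instance, a sketch limiting reads on a hypothetical device 8:16 to
2 MiB/s while leaving the other parameters unlimited::

  # echo "8:16 rbps=2097152" > io.max
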
Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels.  Protections can be hard guarantees or best effort
soft boundaries.  Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is
noop.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


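As a sketch, protecting roughly half a gigabyte of a workload's memory
from reclaim on a best-effort basis::

  # echo 512M > memory.low
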
Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource.  Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected.  Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

        VAL0\n
        VAL1\n
        ...

  Space separated values
  (when read-only or multiple values can be written at once)

        VAL0 VAL1 ...\n

  Flat keyed

        KEY0 VAL0\n
        KEY1 VAL1\n
        ...

  Nested keyed

        KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
        KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
        ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for the most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time.  For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds.  If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default.  The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively.  If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override.  Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
        A read-write single value file which exists on non-root
        cgroups.

        When read, it indicates the current type of the cgroup, which
        can be one of the following values.

        - "domain" : A normal valid domain cgroup.

        - "domain threaded" : A threaded domain cgroup which is
          serving as the root of a threaded subtree.

        - "domain invalid" : A cgroup which is in an invalid state.
          It can't be populated or have controllers enabled.  It may
          be allowed to become a threaded cgroup.

        - "threaded" : A threaded cgroup which is a member of a
          threaded subtree.

        A cgroup can be turned into a threaded cgroup by writing
        "threaded" to this file.

  cgroup.procs
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the PIDs of all processes which belong to
        the cgroup one-per-line.  The PIDs are not ordered and the
        same PID may show up more than once if the process got moved
        to another cgroup and then back or the PID got recycled while
        reading.

        A PID can be written to migrate the process associated with
        the PID to the cgroup.  The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.procs" file.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

        In a threaded cgroup, reading this file fails with EOPNOTSUPP
        as all the processes belong to the thread root.  Writing is
        supported and moves every thread of the process to the cgroup.

  cgroup.threads
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the TIDs of all threads which belong to
        the cgroup one-per-line.  The TIDs are not ordered and the
        same TID may show up more than once if the thread got moved to
        another cgroup and then back or the TID got recycled while
        reading.

        A TID can be written to migrate the thread associated with the
        TID to the cgroup.  The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.threads" file.

        - The cgroup that the thread is currently in must be in the
          same resource domain as the destination cgroup.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

  cgroup.controllers
        A read-only space separated values file which exists on all
        cgroups.

        It shows space separated list of all controllers available to
        the cgroup.  The controllers are not ordered.

  cgroup.subtree_control
        A read-write space separated values file which exists on all
        cgroups.  Starts out empty.

        When read, it shows space separated list of the controllers
        which are enabled to control resource distribution from the
        cgroup to its children.

        Space separated list of controllers prefixed with '+' or '-'
        can be written to enable or disable controllers.  A controller
        name prefixed with '+' enables the controller and '-'
        disables.  If a controller appears more than once on the list,
        the last one is effective.  When multiple enable and disable
        operations are specified, either all succeed or all fail.

  cgroup.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined.  Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          populated
                1 if the cgroup or its descendants contains any live
                processes; otherwise, 0.
          frozen
                1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
        A read-write single value file.  The default is "max".

        Maximum allowed number of descendant cgroups.
        If the actual number of descendants is equal or larger,
        an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
        A read-write single value file.  The default is "max".

        Maximum allowed descent depth below the current cgroup.
        If the actual descent depth is equal or larger,
        an attempt to create a new child cgroup will fail.

  cgroup.stat
        A read-only flat-keyed file with the following entries:

          nr_descendants
                Total number of visible descendant cgroups.

          nr_dying_descendants
                Total number of dying descendant cgroups.  A cgroup
                becomes dying after being deleted by a user.  The
                cgroup will remain in the dying state for some
                undefined time (which can depend on system load)
                before being completely destroyed.

                A process can't enter a dying cgroup under any
                circumstances, and a dying cgroup can't revive.

                A dying cgroup can consume system resources not
                exceeding limits, which were active at the moment of
                cgroup deletion.

  cgroup.freeze
        A read-write single value file which exists on non-root
        cgroups.  Allowed values are "0" and "1".  The default is "0".

        Writing "1" to the file causes freezing of the cgroup and all
        descendant cgroups.  This means that all belonging processes
        will be stopped and will not run until the cgroup is
        explicitly unfrozen.  Freezing of the cgroup may take some
        time; when this action is completed, the "frozen" value in the
        cgroup.events control file will be updated to "1" and the
        corresponding notification will be issued.

        A cgroup can be frozen either by its own settings, or by
        settings of any ancestor cgroups.  If any ancestor cgroup is
        frozen, the cgroup will remain frozen.

        Processes in the frozen cgroup can be killed by a fatal
        signal.  They also can enter and leave a frozen cgroup: either
        by an explicit move by a user, or if freezing of the cgroup
        races with fork().  If a process is moved to a frozen cgroup,
        it stops.  If a process is moved out of a frozen cgroup, it
        becomes running.

        The frozen status of a cgroup doesn't affect any cgroup tree
        operations: it's possible to delete a frozen (and empty)
        cgroup, as well as create new sub-cgroups.

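        As a sketch, freezing the cgroup and confirming completion
        through "cgroup.events"::

          # echo 1 > cgroup.freeze
          # cat cgroup.events
          populated 1
          frozen 1
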
Controllers
===========

CPU
---

The "cpu" controller regulates distribution of CPU cycles.  This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.

In all the above models, cycles distribution is defined only on a
temporal basis and it does not account for the frequency at which
tasks are executed.  The (optional) utilization clamping support
allows hinting the schedutil cpufreq governor about the minimum
desired frequency which should always be provided by a CPU, as well as
the maximum desired frequency, which should not be exceeded by a CPU.

WARNING: cgroup2 doesn't yet support control of realtime processes and
the cpu controller can only be enabled when all RT processes are in
the root cgroup.  Be aware that system management software may already
have placed RT processes into nonroot cgroups during the system boot
process, and these processes may need to be moved to the root cgroup
before the cpu controller can be enabled.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
        A read-only flat-keyed file.
        This file exists whether the controller is enabled or not.

        It always reports the following three stats:

        - usage_usec
        - user_usec
        - system_usec

        and the following three when the controller is enabled:

        - nr_periods
        - nr_throttled
        - throttled_usec

  cpu.weight
        A read-write single value file which exists on non-root
        cgroups.  The default is "100".

        The weight in the range [1, 10000].

  cpu.weight.nice
        A read-write single value file which exists on non-root
        cgroups.  The default is "0".

        The nice value is in the range [-20, 19].

        This interface file is an alternative interface for
        "cpu.weight" and allows reading and setting weight using the
        same values used by nice(2).  Because the range is smaller and
        granularity is coarser for the nice values, the read value is
        the closest approximation of the current weight.

  cpu.max
        A read-write two value file which exists on non-root cgroups.
        The default is "max 100000".

        The maximum bandwidth limit.  It's in the following format::

          $MAX $PERIOD

        which indicates that the group may consume up to $MAX in each
        $PERIOD duration.  "max" for $MAX indicates no limit.  If only
        one number is written, $MAX is updated.

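        For example, a sketch capping the cgroup at half a CPU by
        allowing 50000 usec of run time in every 100000 usec period::

          # echo "50000 100000" > cpu.max
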
  cpu.pressure
        A read-only nested-key file which exists on non-root cgroups.

        Shows pressure stall information for CPU.  See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.

  cpu.uclamp.min
        A read-write single value file which exists on non-root
        cgroups.  The default is "0", i.e. no utilization boosting.

        The requested minimum utilization (protection) as a percentage
        rational number, e.g. 12.34 for 12.34%.

        This interface allows reading and setting minimum utilization
        clamp values similar to sched_setattr(2).  This minimum
        utilization value is used to clamp the task specific minimum
        utilization clamp.

        The requested minimum utilization (protection) is always
        capped by the current value for the maximum utilization
        (limit), i.e. `cpu.uclamp.max`.

  cpu.uclamp.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max", i.e. no utilization capping.

        The requested maximum utilization (limit) as a percentage
        rational number, e.g. 98.76 for 98.76%.

        This interface allows reading and setting maximum utilization
        clamp values similar to sched_setattr(2).  This maximum
        utilization value is used to clamp the task specific maximum
        utilization clamp.


Memory
------

The "memory" controller regulates distribution of memory.  Memory is
stateful and implements both limit and protection models.  Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent.  Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes.  If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of memory currently being used by the cgroup
        and its descendants.

  memory.min
        A read-write single value file which exists on non-root
        cgroups.  The default is "0".

        Hard memory protection.  If the memory usage of a cgroup
        is within its effective min boundary, the cgroup's memory
        won't be reclaimed under any conditions.  If there is no
        unprotected reclaimable memory available, the OOM killer
        is invoked.  Above the effective min boundary (or
        effective low boundary if it is higher), pages are reclaimed
        proportionally to the overage, reducing reclaim pressure for
        smaller overages.

        The effective min boundary is limited by the memory.min
        values of all ancestor cgroups.  If there is memory.min
        overcommitment (the child cgroups require more protected
        memory than the parent will allow), then each child cgroup
        will get the part of the parent's protection proportional to
        its actual memory usage below memory.min.

        Putting more memory than generally available under this
        protection is discouraged and may lead to constant OOMs.

        If a memory cgroup is not populated with processes,
        its memory.min is ignored.

  memory.low
        A read-write single value file which exists on non-root
        cgroups.  The default is "0".

        Best-effort memory protection.  If the memory usage of a
        cgroup is within its effective low boundary, the cgroup's
        memory won't be reclaimed unless there is no reclaimable
        memory available in unprotected cgroups.  Above the effective
        low boundary (or effective min boundary if it is higher),
        pages are reclaimed proportionally to the overage, reducing
        reclaim pressure for smaller overages.

        The effective low boundary is limited by the memory.low
        values of all ancestor cgroups.  If there is memory.low
        overcommitment (the child cgroups require more protected
        memory than the parent will allow), then each child cgroup
        will get the part of the parent's protection proportional to
        its actual memory usage below memory.low.

        Putting more memory than generally available under this
        protection is discouraged.

  memory.high
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Memory usage throttle limit.  This is the main mechanism to
        control memory usage of a cgroup.  If a cgroup's usage goes
        over the high boundary, the processes of the cgroup are
        throttled and put under heavy reclaim pressure.

        Going over the high limit never invokes the OOM killer and
        under extreme conditions the limit may be breached.

  memory.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Memory usage hard limit.  This is the final protection
        mechanism.  If a cgroup's memory usage reaches this limit and
        can't be reduced, the OOM killer is invoked in the cgroup.
        Under certain circumstances, the usage may go over the limit
        temporarily.

        In the default configuration, regular 0-order allocations
        always succeed unless the OOM killer chooses the current task
        as a victim.

        Some kinds of allocations don't invoke the OOM killer.  The
        caller could retry them differently, return -ENOMEM to
        userspace, or silently ignore the failure in cases like disk
        readahead.

        This is the ultimate protection mechanism.  As long as the
        high limit is used and monitored properly, this limit's
        utility is limited to providing the final safety net.

  memory.oom.group
        A read-write single value file which exists on non-root
        cgroups.  The default value is "0".

        Determines whether the cgroup should be treated as
        an indivisible workload by the OOM killer.  If set,
        all tasks belonging to the cgroup or to its descendants
        (if the memory cgroup is not a leaf cgroup) are killed
        together or not at all.  This can be used to avoid
        partial kills to guarantee workload integrity.

        Tasks with the OOM protection (oom_score_adj set to -1000)
        are treated as an exception and are never killed.

        If the OOM killer is invoked in a cgroup, it's not going
        to kill any tasks outside of this cgroup, regardless of
        the memory.oom.group values of ancestor cgroups.

  memory.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined.  Unless specified
        otherwise, a value change in this file generates a file
        modified event.

        Note that all fields in this file are hierarchical and the
        file modified event can be generated due to an event down the
        hierarchy.  For the local events at the cgroup level see
        memory.events.local.

          low
                The number of times the cgroup is reclaimed due to
                high memory pressure even though its usage is under
                the low boundary.  This usually indicates that the low
                boundary is over-committed.

          high
                The number of times processes of the cgroup are
                throttled and routed to perform direct memory reclaim
                because the high memory boundary was exceeded.  For a
                cgroup whose memory usage is capped by the high limit
                rather than global memory pressure, this event's
                occurrences are expected.

          max
                The number of times the cgroup's memory usage was
                about to go over the max boundary.  If direct reclaim
                fails to bring it down, the cgroup goes to OOM state.

          oom
                The number of times the cgroup's memory usage reached
                the limit and allocation was about to fail.

                This event is not raised if the OOM killer is not
                considered as an option, e.g. for failed high-order
                allocations or if the caller asked not to retry
                attempts.

          oom_kill
                The number of processes belonging to this cgroup
                killed by any kind of OOM killer.

  memory.events.local
        Similar to memory.events but the fields in the file are local
        to the cgroup i.e. not hierarchical.  The file modified event
        generated on this file reflects only the local events.

  memory.stat
        A read-only flat-keyed file which exists on non-root cgroups.

        This breaks down the cgroup's memory footprint into different
        types of memory, type-specific details, and other information
        on the state and past events of the memory management system.

        All memory amounts are in bytes.

        The entries are ordered to be human readable, and new entries
        can show up in the middle.  Don't rely on items remaining in a
        fixed position; use the keys to look up specific values!

        If an entry has no per-node counter (and thus does not show up
        in memory.numa_stat), the 'npn' (non-per-node) tag is used to
        indicate that it will not appear in memory.numa_stat.

          anon
                Amount of memory used in anonymous mappings such as
                brk(), sbrk(), and mmap(MAP_ANONYMOUS).

          file
                Amount of memory used to cache filesystem data,
                including tmpfs and shared memory.

          kernel_stack
                Amount of memory allocated to kernel stacks.

          percpu(npn)
                Amount of memory used for storing per-cpu kernel
                data structures.

          sock(npn)
                Amount of memory used in network transmission
                buffers.

          shmem
                Amount of cached filesystem data that is swap-backed,
                such as tmpfs, shm segments, shared anonymous
                mmap()s.

          file_mapped
                Amount of cached filesystem data mapped with mmap().

          file_dirty
                Amount of cached filesystem data that was modified but
                not yet written back to disk.

          file_writeback
                Amount of cached filesystem data that was modified and
                is currently being written back to disk.

          anon_thp
                Amount of memory used in anonymous mappings backed by
                transparent hugepages.

          inactive_anon, active_anon, inactive_file, active_file, unevictable
                Amount of memory, swap-backed and filesystem-backed,
                on the internal memory management lists used by the
                page reclaim algorithm.

                As these represent internal list state (e.g. shmem
                pages are on anon memory management lists),
                inactive_foo + active_foo may not be equal to the
                value for the foo counter, since the foo counter is
                type-based, not list-based.

          slab_reclaimable
                Part of "slab" that might be reclaimed, such as
                dentries and inodes.

          slab_unreclaimable
                Part of "slab" that cannot be reclaimed on memory
                pressure.

          slab(npn)
                Amount of memory used for storing in-kernel data
                structures.

          workingset_refault_anon
                Number of refaults of previously evicted anonymous
                pages.

          workingset_refault_file
                Number of refaults of previously evicted file pages.

          workingset_activate_anon
                Number of refaulted anonymous pages that were
                immediately activated.

          workingset_activate_file
                Number of refaulted file pages that were immediately
                activated.

          workingset_restore_anon
                Number of restored anonymous pages which have been
                detected as an active workingset before they got
                reclaimed.

          workingset_restore_file
                Number of restored file pages which have been detected
                as an active workingset before they got reclaimed.

          workingset_nodereclaim
                Number of times a shadow node has been reclaimed.

          pgfault(npn)
                Total number of page faults incurred.

          pgmajfault(npn)
                Number of major page faults incurred.

          pgrefill(npn)
                Amount of scanned pages (in an active LRU list).

          pgscan(npn)
                Amount of scanned pages (in an inactive LRU list).

          pgsteal(npn)
                Amount of reclaimed pages.

          pgactivate(npn)
                Amount of pages moved to the active LRU list.

          pgdeactivate(npn)
                Amount of pages moved to the inactive LRU list.

          pglazyfree(npn)
                Amount of pages postponed to be freed under memory
                pressure.

          pglazyfreed(npn)
                Amount of reclaimed lazyfree pages.

          thp_fault_alloc(npn)
                Number of transparent hugepages which were allocated
                to satisfy a page fault.  This counter is not present
                when CONFIG_TRANSPARENT_HUGEPAGE is not set.

          thp_collapse_alloc(npn)
                Number of transparent hugepages which were allocated
                to allow collapsing an existing range of pages.  This
                counter is not present when
                CONFIG_TRANSPARENT_HUGEPAGE is not set.

  memory.numa_stat
        A read-only nested-keyed file which exists on non-root
        cgroups.

        This breaks down the cgroup's memory footprint into different
        types of memory, type-specific details, and other information
        per node on the state of the memory management system.

        This is useful for providing visibility into the NUMA locality
        information within a memcg since the pages are allowed to be
        allocated from any physical node.  One use case is evaluating
        application performance by combining this information with the
        application's CPU allocation.

        All memory amounts are in bytes.

        The output format of memory.numa_stat is::

          type N0=<bytes in node 0> N1=<bytes in node 1> ...

        The entries are ordered to be human readable, and new entries
        can show up in the middle.  Don't rely on items remaining in a
        fixed position; use the keys to look up specific values!

        The entries have the same meanings as in memory.stat.

1412  memory.swap.current
1413	A read-only single value file which exists on non-root
1414	cgroups.
1415
1416	The total amount of swap currently being used by the cgroup
1417	and its descendants.
1418
1419  memory.swap.high
1420	A read-write single value file which exists on non-root
1421	cgroups.  The default is "max".
1422
1423	Swap usage throttle limit.  If a cgroup's swap usage exceeds
1424	this limit, all its further allocations will be throttled to
1425	allow userspace to implement custom out-of-memory procedures.
1426
1427	This limit marks a point of no return for the cgroup. It is NOT
1428	designed to manage the amount of swapping a workload does
1429	during regular operation. Compare to memory.swap.max, which
1430	prohibits swapping past a set amount, but lets the cgroup
1431	continue unimpeded as long as other memory can be reclaimed.
1432
1433	Healthy workloads are not expected to reach this limit.
1434
1435  memory.swap.max
1436	A read-write single value file which exists on non-root
1437	cgroups.  The default is "max".
1438
1439	Swap usage hard limit.  If a cgroup's swap usage reaches this
1440	limit, anonymous memory of the cgroup will not be swapped out.
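
	For example, swap can be disabled entirely for a cgroup by
	writing 0 (a minimal sketch; whether this is desirable depends
	on the workload)::

	  # echo 0 > memory.swap.max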
1441
1442  memory.swap.events
1443	A read-only flat-keyed file which exists on non-root cgroups.
1444	The following entries are defined.  Unless specified
1445	otherwise, a value change in this file generates a file
1446	modified event.
1447
1448	  high
1449		The number of times the cgroup's swap usage was over
1450		the high threshold.
1451
1452	  max
1453		The number of times the cgroup's swap usage was about
1454		to go over the max boundary and swap allocation
1455		failed.
1456
1457	  fail
1458		The number of times swap allocation failed either
1459		because of running out of swap system-wide or max
1460		limit.
1461
	When "memory.swap.max" is reduced under the current usage, the
	existing swap
1463	entries are reclaimed gradually and the swap usage may stay
1464	higher than the limit for an extended period of time.  This
1465	reduces the impact on the workload and memory management.
1466
1467  memory.pressure
1468	A read-only nested-key file which exists on non-root cgroups.
1469
1470	Shows pressure stall information for memory. See
1471	:ref:`Documentation/accounting/psi.rst <psi>` for details.
1472
1473
1474Usage Guidelines
1475~~~~~~~~~~~~~~~~
1476
1477"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.
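
For example, a minimal sketch of over-committing two jobs on a system
with 8GB of memory (the paths and sizes are hypothetical)::

  # echo 6G > /sys/fs/cgroup/job1/memory.high
  # echo 6G > /sys/fs/cgroup/job2/memory.high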
1481
1482Because breach of the high limit doesn't trigger the OOM killer but
1483throttles the offending cgroup, a management agent has ample
1484opportunities to monitor and take appropriate actions such as granting
1485more memory or terminating the workload.
1486
1487Determining whether a cgroup has enough memory is not trivial as
1488memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
the network to a file can use all available memory but can also perform
just as well with a small amount of memory.  A measure of memory
pressure - how much the workload is being impacted due to lack of
memory - is necessary to determine whether a workload needs more
memory; for such a mechanism, see the "memory.pressure" file above and
:ref:`Documentation/accounting/psi.rst <psi>`.
1496
1497
1498Memory Ownership
1499~~~~~~~~~~~~~~~~
1500
1501A memory area is charged to the cgroup which instantiated it and stays
1502charged to the cgroup until the area is released.  Migrating a process
1503to a different cgroup doesn't move the memory usages that it
1504instantiated while in the previous cgroup to the new cgroup.
1505
1506A memory area may be used by processes belonging to different cgroups.
Which cgroup the area will be charged to is indeterminate; however,
1508over time, the memory area is likely to end up in a cgroup which has
1509enough memory allowance to avoid high reclaim pressure.
1510
1511If a cgroup sweeps a considerable amount of memory which is expected
1512to be accessed repeatedly by other cgroups, it may make sense to use
1513POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1514belonging to the affected files to ensure correct memory ownership.
1515
1516
1517IO
1518--
1519
The "io" controller regulates the distribution of IO resources.  This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; however, weight based distribution is available
only if an implementation which supports it, such as the IO cost
model based controller described below, is in use.
1525
1526
1527IO Interface Files
1528~~~~~~~~~~~~~~~~~~
1529
1530  io.stat
1531	A read-only nested-keyed file.
1532
1533	Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1534	The following nested keys are defined.
1535
1536	  ======	=====================
1537	  rbytes	Bytes read
1538	  wbytes	Bytes written
1539	  rios		Number of read IOs
1540	  wios		Number of write IOs
1541	  dbytes	Bytes discarded
1542	  dios		Number of discard IOs
1543	  ======	=====================
1544
1545	An example read output follows::
1546
1547	  8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1548	  8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
1549
1550  io.cost.qos
	A read-write nested-keyed file which exists only on the root
1552	cgroup.
1553
1554	This file configures the Quality of Service of the IO cost
1555	model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1556	currently implements "io.weight" proportional control.  Lines
1557	are keyed by $MAJ:$MIN device numbers and not ordered.  The
1558	line for a given device is populated on the first write for
1559	the device on "io.cost.qos" or "io.cost.model".  The following
1560	nested keys are defined.
1561
1562	  ======	=====================================
1563	  enable	Weight-based control enable
1564	  ctrl		"auto" or "user"
1565	  rpct		Read latency percentile    [0, 100]
1566	  rlat		Read latency threshold
1567	  wpct		Write latency percentile   [0, 100]
1568	  wlat		Write latency threshold
1569	  min		Minimum scaling percentage [1, 10000]
1570	  max		Maximum scaling percentage [1, 10000]
1571	  ======	=====================================
1572
1573	The controller is disabled by default and can be enabled by
1574	setting "enable" to 1.  "rpct" and "wpct" parameters default
1575	to zero and the controller uses internal device saturation
1576	state to adjust the overall IO rate between "min" and "max".
1577
1578	When a better control quality is needed, latency QoS
1579	parameters can be configured.  For example::
1580
	  8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00
1582
1583	shows that on sdb, the controller is enabled, will consider
1584	the device saturated if the 95th percentile of read completion
1585	latencies is above 75ms or write 150ms, and adjust the overall
1586	IO issue rate between 50% and 150% accordingly.
1587
1588	The lower the saturation point, the better the latency QoS at
1589	the cost of aggregate bandwidth.  The narrower the allowed
1590	adjustment range between "min" and "max", the more conformant
1591	to the cost model the IO behavior.  Note that the IO issue
1592	base rate may be far off from 100% and setting "min" and "max"
1593	blindly can lead to a significant loss of device capacity or
1594	control quality.  "min" and "max" are useful for regulating
	devices which show wide temporary behavior changes - e.g. an
	SSD which accepts writes at the line speed for a while and
1597	then completely stalls for multiple seconds.
1598
1599	When "ctrl" is "auto", the parameters are controlled by the
1600	kernel and may change automatically.  Setting "ctrl" to "user"
1601	or setting any of the percentile and latency parameters puts
1602	it into "user" mode and disables the automatic changes.  The
1603	automatic mode can be restored by setting "ctrl" to "auto".
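
	As a sketch, the example line above could be configured by
	writing the same nested key-value pairs (the device number and
	values are illustrative)::

	  # echo "8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00" > io.cost.qos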
1604
1605  io.cost.model
	A read-write nested-keyed file which exists only on the root
1607	cgroup.
1608
1609	This file configures the cost model of the IO cost model based
1610	controller (CONFIG_BLK_CGROUP_IOCOST) which currently
1611	implements "io.weight" proportional control.  Lines are keyed
1612	by $MAJ:$MIN device numbers and not ordered.  The line for a
1613	given device is populated on the first write for the device on
1614	"io.cost.qos" or "io.cost.model".  The following nested keys
1615	are defined.
1616
1617	  =====		================================
1618	  ctrl		"auto" or "user"
1619	  model		The cost model in use - "linear"
1620	  =====		================================
1621
	When "ctrl" is "auto", the kernel may change all parameters
	dynamically.  When "ctrl" is set to "user" or any other
	parameter is written to, "ctrl" becomes "user" and the
	automatic changes are disabled.
1626
1627	When "model" is "linear", the following model parameters are
1628	defined.
1629
1630	  =============	========================================
1631	  [r|w]bps	The maximum sequential IO throughput
1632	  [r|w]seqiops	The maximum 4k sequential IOs per second
1633	  [r|w]randiops	The maximum 4k random IOs per second
1634	  =============	========================================
1635
1636	From the above, the builtin linear model determines the base
1637	costs of a sequential and random IO and the cost coefficient
1638	for the IO size.  While simple, this model can cover most
1639	common device classes acceptably.
1640
1641	The IO cost model isn't expected to be accurate in absolute
1642	sense and is scaled to the device behavior dynamically.
1643
1644	If needed, tools/cgroup/iocost_coef_gen.py can be used to
1645	generate device-specific coefficients.
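
	As a sketch, a linear cost model could be configured with a
	write like the following (the device number and all values are
	illustrative)::

	  # echo "8:16 ctrl=user model=linear rbps=174019176 rseqiops=41708 rrandiops=370705 wbps=178075866 wseqiops=42705 wrandiops=378475" > io.cost.model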
1646
1647  io.weight
1648	A read-write flat-keyed file which exists on non-root cgroups.
1649	The default is "default 100".
1650
1651	The first line is the default weight applied to devices
1652	without specific override.  The rest are overrides keyed by
1653	$MAJ:$MIN device numbers and not ordered.  The weights are in
	the range [1, 10000] and specify the relative amount of IO
	time the cgroup can use in relation to its siblings.
1656
1657	The default weight can be updated by writing either "default
1658	$WEIGHT" or simply "$WEIGHT".  Overrides can be set by writing
1659	"$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
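
	For example, the read output shown below could be produced by
	the following writes (device numbers are illustrative)::

	  # echo "8:16 200" > io.weight
	  # echo "8:0 50" > io.weight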
1660
1661	An example read output follows::
1662
1663	  default 100
1664	  8:16 200
1665	  8:0 50
1666
1667  io.max
1668	A read-write nested-keyed file which exists on non-root
1669	cgroups.
1670
1671	BPS and IOPS based IO limit.  Lines are keyed by $MAJ:$MIN
1672	device numbers and not ordered.  The following nested keys are
1673	defined.
1674
1675	  =====		==================================
1676	  rbps		Max read bytes per second
1677	  wbps		Max write bytes per second
1678	  riops		Max read IO operations per second
1679	  wiops		Max write IO operations per second
1680	  =====		==================================
1681
1682	When writing, any number of nested key-value pairs can be
1683	specified in any order.  "max" can be specified as the value
1684	to remove a specific limit.  If the same key is specified
1685	multiple times, the outcome is undefined.
1686
1687	BPS and IOPS are measured in each IO direction and IOs are
	delayed if the limit is reached.  Temporary bursts are allowed.
1689
1690	Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
1691
1692	  echo "8:16 rbps=2097152 wiops=120" > io.max
1693
1694	Reading returns the following::
1695
1696	  8:16 rbps=2097152 wbps=max riops=max wiops=120
1697
1698	Write IOPS limit can be removed by writing the following::
1699
1700	  echo "8:16 wiops=max" > io.max
1701
1702	Reading now returns the following::
1703
1704	  8:16 rbps=2097152 wbps=max riops=max wiops=max
1705
1706  io.pressure
1707	A read-only nested-key file which exists on non-root cgroups.
1708
1709	Shows pressure stall information for IO. See
1710	:ref:`Documentation/accounting/psi.rst <psi>` for details.
1711
1712
1713Writeback
1714~~~~~~~~~
1715
1716Page cache is dirtied through buffered writes and shared mmaps and
1717written asynchronously to the backing filesystem by the writeback
1718mechanism.  Writeback sits between the memory and IO domains and
1719regulates the proportion of dirty memory by balancing dirtying and
1720write IOs.
1721
1722The io controller, in conjunction with the memory controller,
1723implements control of page cache writeback IOs.  The memory controller
1724defines the memory domain that dirty memory ratio is calculated and
1725maintained for and the io controller defines the io domain which
1726writes out dirty pages for the memory domain.  Both system-wide and
1727per-cgroup dirty memory states are examined and the more restrictive
1728of the two is enforced.
1729
1730cgroup writeback requires explicit support from the underlying
1731filesystem.  Currently, cgroup writeback is implemented on ext2, ext4,
1732btrfs, f2fs, and xfs.  On other filesystems, all writeback IOs are
1733attributed to the root cgroup.
1734
There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
page while writeback is tracked per inode.  For the purpose of
writeback, an
1738inode is assigned to a cgroup and all IO requests to write dirty pages
1739from the inode are attributed to that cgroup.
1740
1741As cgroup ownership for memory is tracked per page, there can be pages
1742which are associated with different cgroups than the one the inode is
1743associated with.  These are called foreign pages.  The writeback
1744constantly keeps track of foreign pages and, if a particular foreign
1745cgroup becomes the majority over a certain period of time, switches
1746the ownership of the inode to that cgroup.
1747
1748While this model is enough for most use cases where a given inode is
1749mostly dirtied by a single cgroup even when the main writing cgroup
1750changes over time, use cases where multiple cgroups write to a single
1751inode simultaneously are not supported well.  In such circumstances, a
1752significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
1754doesn't update it until the page is released, even if writeback
1755strictly follows page ownership, multiple cgroups dirtying overlapping
1756areas wouldn't work as expected.  It's recommended to avoid such usage
1757patterns.
1758
1759The sysctl knobs which affect writeback behavior are applied to cgroup
1760writeback as follows.
1761
1762  vm.dirty_background_ratio, vm.dirty_ratio
1763	These ratios apply the same to cgroup writeback with the
1764	amount of available memory capped by limits imposed by the
1765	memory controller and system-wide clean memory.
1766
1767  vm.dirty_background_bytes, vm.dirty_bytes
	For cgroup writeback, this is calculated as a ratio against
	the total available memory and applied the same way as
	vm.dirty[_background]_ratio.
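
	As an illustrative example, vm.dirty_bytes set to 400MB on a
	system with 4GB of available memory acts like roughly a 10%
	ratio, which is then applied against each cgroup's capped
	available memory as with vm.dirty[_background]_ratio.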
1771
1772
1773IO Latency
1774~~~~~~~~~~
1775
1776This is a cgroup v2 controller for IO workload protection.  You provide a group
1777with a latency target, and if the average latency exceeds that target the
1778controller will throttle any peers that have a lower latency target than the
1779protected workload.
1780
1781The limits are only applied at the peer level in the hierarchy.  This means that
1782in the diagram below, only groups A, B, and C will influence each other, and
1783groups D and F will influence each other.  Group G will influence nobody::
1784
1785			[root]
1786		/	   |		\
1787		A	   B		C
1788	       /  \        |
1789	      D    F	   G
1790
1791
1792So the ideal way to configure this is to set io.latency in groups A, B, and C.
1793Generally you do not want to set a value lower than the latency your device
1794supports.  Experiment to find the value that works best for your workload.
1795Start at higher than the expected latency for your device and watch the
1796avg_lat value in io.stat for your workload group to get an idea of the
1797latency you see during normal operation.  Use the avg_lat value as a basis for
1798your real setting, setting at 10-15% higher than the value in io.stat.
1799
1800How IO Latency Throttling Works
1801~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1802
io.latency is work conserving, so as long as everybody is meeting their latency
1804target the controller doesn't do anything.  Once a group starts missing its
1805target it begins throttling any peer group that has a higher target than itself.
1806This throttling takes 2 forms:
1807
1808- Queue depth throttling.  This is the number of outstanding IO's a group is
1809  allowed to have.  We will clamp down relatively quickly, starting at no limit
1810  and going all the way down to 1 IO at a time.
1811
1812- Artificial delay induction.  There are certain types of IO that cannot be
1813  throttled without possibly adversely affecting higher priority groups.  This
1814  includes swapping and metadata IO.  These types of IO are allowed to occur
1815  normally, however they are "charged" to the originating group.  If the
1816  originating group is being throttled you will see the use_delay and delay
  fields in io.stat increase.  The delay value is the number of microseconds
  being added to any process that runs in this group.  Because this number can
1819  grow quite large if there is a lot of swapping or metadata IO occurring we
1820  limit the individual delay events to 1 second at a time.
1821
1822Once the victimized group starts meeting its latency target again it will start
1823unthrottling any peer groups that were throttled previously.  If the victimized
1824group simply stops doing IO the global counter will unthrottle appropriately.
1825
1826IO Latency Interface Files
1827~~~~~~~~~~~~~~~~~~~~~~~~~~
1828
1829  io.latency
1830	This takes a similar format as the other controllers.
1831
		"MAJOR:MINOR target=<target time in microseconds>"
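
	For example, a 10ms latency target can be set on a
	(hypothetical) device 8:16 with::

	  echo "8:16 target=10000" > io.latency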
1833
1834  io.stat
1835	If the controller is enabled you will see extra stats in io.stat in
1836	addition to the normal ones.
1837
1838	  depth
1839		This is the current queue depth for the group.
1840
1841	  avg_lat
1842		This is an exponential moving average with a decay rate of 1/exp
1843		bound by the sampling interval.  The decay rate interval can be
1844		calculated by multiplying the win value in io.stat by the
1845		corresponding number of samples based on the win value.
1846
1847	  win
1848		The sampling window size in milliseconds.  This is the minimum
1849		duration of time between evaluation events.  Windows only elapse
1850		with IO activity.  Idle periods extend the most recent window.
1851
1852IO Priority
1853~~~~~~~~~~~
1854
A single attribute controls the behavior of the I/O priority cgroup
policy, namely the io.prio.class attribute.  The following values are
accepted for that attribute:
1858
1859  no-change
1860	Do not modify the I/O priority class.
1861
1862  none-to-rt
1863	For requests that do not have an I/O priority class (NONE),
1864	change the I/O priority class into RT. Do not modify
1865	the I/O priority class of other requests.
1866
1867  restrict-to-be
1868	For requests that do not have an I/O priority class or that have I/O
1869	priority class RT, change it into BE. Do not modify the I/O priority
1870	class of requests that have priority class IDLE.
1871
1872  idle
1873	Change the I/O priority class of all requests into IDLE, the lowest
1874	I/O priority class.
1875
1876The following numerical values are associated with the I/O priority policies:
1877
+----------------+---+
| no-change      | 0 |
+----------------+---+
| none-to-rt     | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+
1887
1888The numerical value that corresponds to each I/O priority class is as follows:
1889
1890+-------------------------------+---+
1891| IOPRIO_CLASS_NONE             | 0 |
1892+-------------------------------+---+
1893| IOPRIO_CLASS_RT (real-time)   | 1 |
1894+-------------------------------+---+
1895| IOPRIO_CLASS_BE (best effort) | 2 |
1896+-------------------------------+---+
1897| IOPRIO_CLASS_IDLE             | 3 |
1898+-------------------------------+---+
1899
1900The algorithm to set the I/O priority class for a request is as follows:
1901
1902- Translate the I/O priority class policy into a number.
1903- Change the request I/O priority class into the maximum of the I/O priority
1904  class policy number and the numerical I/O priority class.
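
For example (an illustrative case): a request with I/O priority class
RT (1) issued under the restrict-to-be (2) policy ends up with class
max(2, 1) = 2, i.e. IOPRIO_CLASS_BE.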
1905
1906PID
1907---
1908
1909The process number controller is used to allow a cgroup to stop any
1910new tasks from being fork()'d or clone()'d after a specified limit is
1911reached.
1912
1913The number of tasks in a cgroup can be exhausted in ways which other
1914controllers cannot prevent, thus warranting its own controller.  For
1915example, a fork bomb is likely to exhaust the number of tasks before
1916hitting memory restrictions.
1917
1918Note that PIDs used in this controller refer to TIDs, process IDs as
1919used by the kernel.
1920
1921
1922PID Interface Files
1923~~~~~~~~~~~~~~~~~~~
1924
1925  pids.max
1926	A read-write single value file which exists on non-root
1927	cgroups.  The default is "max".
1928
1929	Hard limit of number of processes.
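
	For example, a cgroup can be limited to 100 processes with (the
	value is hypothetical)::

	  # echo 100 > pids.max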
1930
1931  pids.current
1932	A read-only single value file which exists on all cgroups.
1933
1934	The number of processes currently in the cgroup and its
1935	descendants.
1936
1937Organisational operations are not blocked by cgroup policies, so it is
1938possible to have pids.current > pids.max.  This can be done by either
1939setting the limit to be smaller than pids.current, or attaching enough
1940processes to the cgroup such that pids.current is larger than
1941pids.max.  However, it is not possible to violate a cgroup PID policy
1942through fork() or clone(). These will return -EAGAIN if the creation
1943of a new process would cause a cgroup policy to be violated.
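
For example (a hypothetical shell session), lowering the limit below
the current count is allowed and does not affect existing processes::

  # cat pids.current
  5
  # echo 2 > pids.max
  # cat pids.current
  5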
1944
1945
1946Cpuset
1947------
1948
1949The "cpuset" controller provides a mechanism for constraining
1950the CPU and memory node placement of tasks to only the resources
1951specified in the cpuset interface files in a task's current cgroup.
1952This is especially valuable on large NUMA systems where placing jobs
1953on properly sized subsets of the systems with careful processor and
1954memory placement to reduce cross-node memory access and contention
1955can improve overall system performance.
1956
The "cpuset" controller is hierarchical.  That means a cgroup cannot
use CPUs or memory nodes which are not allowed in its parent.
1959
1960
1961Cpuset Interface Files
1962~~~~~~~~~~~~~~~~~~~~~~
1963
1964  cpuset.cpus
1965	A read-write multiple values file which exists on non-root
1966	cpuset-enabled cgroups.
1967
1968	It lists the requested CPUs to be used by tasks within this
1969	cgroup.  The actual list of CPUs to be granted, however, is
	subject to constraints imposed by its parent and can differ
1971	from the requested CPUs.
1972
1973	The CPU numbers are comma-separated numbers or ranges.
1974	For example::
1975
1976	  # cat cpuset.cpus
1977	  0-4,6,8-10
1978
1979	An empty value indicates that the cgroup is using the same
1980	setting as the nearest cgroup ancestor with a non-empty
1981	"cpuset.cpus" or all the available CPUs if none is found.
1982
1983	The value of "cpuset.cpus" stays constant until the next update
1984	and won't be affected by any CPU hotplug events.
1985
1986  cpuset.cpus.effective
1987	A read-only multiple values file which exists on all
1988	cpuset-enabled cgroups.
1989
1990	It lists the onlined CPUs that are actually granted to this
1991	cgroup by its parent.  These CPUs are allowed to be used by
1992	tasks within the current cgroup.
1993
1994	If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
	all the CPUs from the parent cgroup that are available to
1996	be used by this cgroup.  Otherwise, it should be a subset of
1997	"cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
1998	can be granted.  In this case, it will be treated just like an
1999	empty "cpuset.cpus".
2000
2001	Its value will be affected by CPU hotplug events.
2002
2003  cpuset.mems
2004	A read-write multiple values file which exists on non-root
2005	cpuset-enabled cgroups.
2006
2007	It lists the requested memory nodes to be used by tasks within
2008	this cgroup.  The actual list of memory nodes granted, however,
	is subject to constraints imposed by its parent and can differ
2010	from the requested memory nodes.
2011
2012	The memory node numbers are comma-separated numbers or ranges.
2013	For example::
2014
2015	  # cat cpuset.mems
2016	  0-1,3
2017
2018	An empty value indicates that the cgroup is using the same
2019	setting as the nearest cgroup ancestor with a non-empty
2020	"cpuset.mems" or all the available memory nodes if none
2021	is found.
2022
	The value of "cpuset.mems" stays constant until the next update
	and won't be affected by any memory node hotplug events.
2025
2026  cpuset.mems.effective
2027	A read-only multiple values file which exists on all
2028	cpuset-enabled cgroups.
2029
2030	It lists the onlined memory nodes that are actually granted to
2031	this cgroup by its parent. These memory nodes are allowed to
2032	be used by tasks within the current cgroup.
2033
2034	If "cpuset.mems" is empty, it shows all the memory nodes from the
	parent cgroup that are available to be used by this cgroup.
2036	Otherwise, it should be a subset of "cpuset.mems" unless none of
2037	the memory nodes listed in "cpuset.mems" can be granted.  In this
2038	case, it will be treated just like an empty "cpuset.mems".
2039
	Its value will be affected by memory node hotplug events.
2041
2042  cpuset.cpus.partition
2043	A read-write single value file which exists on non-root
2044	cpuset-enabled cgroups.  This flag is owned by the parent cgroup
2045	and is not delegatable.
2046
	It accepts only the following input values when written to.

	  "root"   - a partition root
	  "member" - a non-root member of a partition
2051
2052	When set to be a partition root, the current cgroup is the
2053	root of a new partition or scheduling domain that comprises
2054	itself and all its descendants except those that are separate
2055	partition roots themselves and their descendants.  The root
2056	cgroup is always a partition root.
2057
2058	There are constraints on where a partition root can be set.
2059	It can only be set in a cgroup if all the following conditions
2060	are true.
2061
	1) The "cpuset.cpus" is not empty and the list of CPUs is
	   exclusive, i.e. they are not shared by any of its siblings.
2064	2) The parent cgroup is a partition root.
2065	3) The "cpuset.cpus" is also a proper subset of the parent's
2066	   "cpuset.cpus.effective".
	4) There are no child cgroups with cpuset enabled.  This is
	   for eliminating corner cases that have to be handled if such
	   a condition is allowed.
2070
2071	Setting it to partition root will take the CPUs away from the
2072	effective CPUs of the parent cgroup.  Once it is set, this
2073	file cannot be reverted back to "member" if there are any child
2074	cgroups with cpuset enabled.
2075
2076	A parent partition cannot distribute all its CPUs to its
2077	child partitions.  There must be at least one cpu left in the
2078	parent partition.
2079
	After becoming a partition root, changes to "cpuset.cpus" are
	generally allowed as long as the first condition above holds,
	the change does not take away all the CPUs from the parent
	partition, and the new "cpuset.cpus" value is a superset of its
	children's "cpuset.cpus" values.
2085
2086	Sometimes, external factors like changes to ancestors'
2087	"cpuset.cpus" or cpu hotplug can cause the state of the partition
	root to change.  On read, the "cpuset.cpus.partition" file
2089	can show the following values.
2090
2091	"member"       Non-root member of a partition
2092	"root"         Partition root
2093	"root invalid" Invalid partition root
2094
2095	It is a partition root if the first 2 partition root conditions
2096	above are true and at least one CPU from "cpuset.cpus" is
2097	granted by the parent cgroup.
2098
	A partition root can become invalid if none of the CPUs requested
2100	in "cpuset.cpus" can be granted by the parent cgroup or the
2101	parent cgroup is no longer a partition root itself.  In this
2102	case, it is not a real partition even though the restriction
2103	of the first partition root condition above will still apply.
2104	The cpu affinity of all the tasks in the cgroup will then be
2105	associated with CPUs in the nearest ancestor partition.
2106
2107	An invalid partition root can be transitioned back to a
2108	real partition root if at least one of the requested CPUs
2109	can now be granted by its parent.  In this case, the cpu
2110	affinity of all the tasks in the formerly invalid partition
	will be associated with the CPUs of the newly formed partition.
2112	Changing the partition state of an invalid partition root to
2113	"member" is always allowed even if child cpusets are present.
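
	For example, assuming the conditions above are satisfied, a
	cgroup can be turned into a partition root and back with::

	  # echo root > cpuset.cpus.partition
	  # echo member > cpuset.cpus.partition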
2114
2115
2116Device controller
2117-----------------
2118
The device controller manages access to device files.  It includes
both the creation of new device files (using mknod) and access to
existing device files.
2122
2123Cgroup v2 device controller has no interface files and is implemented
2124on top of cgroup BPF. To control access to device files, a user may
2125create bpf programs of the BPF_CGROUP_DEVICE type and attach them
2126to cgroups. On an attempt to access a device file, corresponding
2127BPF programs will be executed, and depending on the return value
2128the attempt will succeed or fail with -EPERM.
2129
2130A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx
2131structure, which describes the device access attempt: access type
2132(mknod/read/write) and device (type, major and minor numbers).
2133If the program returns 0, the attempt fails with -EPERM, otherwise
2134it succeeds.
2135
An example of a BPF_CGROUP_DEVICE program may be found in the kernel
2137source tree in the tools/testing/selftests/bpf/dev_cgroup.c file.
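
As a sketch, a program which has already been loaded and pinned (the
pin path and cgroup path are hypothetical) can be attached with
bpftool::

  # bpftool cgroup attach /sys/fs/cgroup/cg1 device pinned /sys/fs/bpf/dev_prog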
2138
2139
2140RDMA
2141----
2142
2143The "rdma" controller regulates the distribution and accounting of
2144RDMA resources.
2145
2146RDMA Interface Files
2147~~~~~~~~~~~~~~~~~~~~
2148
2149  rdma.max
	A read-write nested-keyed file which exists for all the cgroups
	except root.  It describes the currently configured resource
	limits for RDMA/IB devices.
2153
2154	Lines are keyed by device name and are not ordered.
2155	Each line contains space separated resource name and its configured
2156	limit that can be distributed.
2157
2158	The following nested keys are defined.
2159
2160	  ==========	=============================
2161	  hca_handle	Maximum number of HCA Handles
2162	  hca_object 	Maximum number of HCA Objects
2163	  ==========	=============================
2164
2165	An example for mlx4 and ocrdma device follows::
2166
2167	  mlx4_0 hca_handle=2 hca_object=2000
2168	  ocrdma1 hca_handle=3 hca_object=max
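
	Limits can be configured by writing lines in the same format,
	e.g. (the device name and values are illustrative)::

	  # echo mlx4_0 hca_handle=2 hca_object=2000 > rdma.max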
2169
2170  rdma.current
2171	A read-only file that describes current resource usage.
	It exists for all the cgroups except root.
2173
2174	An example for mlx4 and ocrdma device follows::
2175
2176	  mlx4_0 hca_handle=1 hca_object=20
2177	  ocrdma1 hca_handle=1 hca_object=23
2178
2179HugeTLB
2180-------
2181
The HugeTLB controller allows limiting HugeTLB usage per control group
and enforces the limit during page fault.
2184
2185HugeTLB Interface Files
2186~~~~~~~~~~~~~~~~~~~~~~~
2187
2188  hugetlb.<hugepagesize>.current
	Show current usage for "hugepagesize" hugetlb.  It exists for
	all the cgroups except root.
2191
2192  hugetlb.<hugepagesize>.max
2193	Set/show the hard limit of "hugepagesize" hugetlb usage.
	The default value is "max".  It exists for all the cgroups except root.
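
	For example, usage of 2MB hugepages can be capped at 1GB with
	(the available page sizes depend on the platform)::

	  # echo 1G > hugetlb.2MB.max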
2195
2196  hugetlb.<hugepagesize>.events
2197	A read-only flat-keyed file which exists on non-root cgroups.
2198
2199	  max
		The number of allocation failures due to the HugeTLB limit
2201
2202  hugetlb.<hugepagesize>.events.local
2203	Similar to hugetlb.<hugepagesize>.events but the fields in the file
2204	are local to the cgroup i.e. not hierarchical. The file modified event
2205	generated on this file reflects only the local events.
2206
2207Misc
2208----
2209
2210perf_event
2211~~~~~~~~~~
2212
2213perf_event controller, if not mounted on a legacy hierarchy, is
2214automatically enabled on the v2 hierarchy so that perf events can
2215always be filtered by cgroup v2 path.  The controller can still be
2216moved to a legacy hierarchy after v2 hierarchy is populated.
2217
2218
2219Non-normative information
2220-------------------------
2221
2222This section contains information that isn't considered to be a part of
2223the stable kernel API and so is subject to change.
2224
2225
2226CPU controller root cgroup process behaviour
2227~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2228
When distributing CPU cycles in the root cgroup, each thread in this
cgroup is treated as if it was hosted in a separate child cgroup of the
root cgroup.  This child cgroup's weight is dependent on its thread's
nice level.
2233
For details of this mapping see the sched_prio_to_weight array in
kernel/sched/core.c (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of 1024).
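
As an illustrative example, the array maps nice 0 to 1024 and nice -20
to 88761; after scaling so that nice 0 becomes 100, a nice -20 thread
behaves like a child cgroup with a weight of roughly 8668.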
2237
2238
2239IO controller root cgroup process behaviour
2240~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2241
2242Root cgroup processes are hosted in an implicit leaf child node.
2243When distributing IO resources this implicit child node is taken into
2244account as if it was a normal child cgroup of the root cgroup with a
2245weight value of 200.
2246
2247
2248Namespace
2249=========
2250
2251Basics
2252------
2253
2254cgroup namespace provides a mechanism to virtualize the view of the
2255"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
2256flag can be used with clone(2) and unshare(2) to create a new cgroup
2257namespace.  The process running inside the cgroup namespace will have
2258its "/proc/$PID/cgroup" output restricted to cgroupns root.  The
2259cgroupns root is the cgroup of the process at the time of creation of
2260the cgroup namespace.
2261
2262Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup where
a set of cgroups and namespaces are intended to isolate processes, the
"/proc/$PID/cgroup" file may leak potential system level information
to the isolated processes.  For example::
2267
2268  # cat /proc/self/cgroup
2269  0::/batchjobs/container_id1
2270
The path '/batchjobs/container_id1' can be considered system data
which is undesirable to expose to the isolated processes.  A cgroup
namespace can be used to restrict visibility of this path.  For
example, before creating a cgroup namespace, one would see::
2275
2276  # ls -l /proc/self/ns/cgroup
2277  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
2278  # cat /proc/self/cgroup
2279  0::/batchjobs/container_id1
2280
2281After unsharing a new namespace, the view changes::
2282
2283  # ls -l /proc/self/ns/cgroup
2284  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
2285  # cat /proc/self/cgroup
2286  0::/
2287
2288When some thread from a multi-threaded process unshares its cgroup
2289namespace, the new cgroupns gets applied to the entire process (all
2290the threads).  This is natural for the v2 hierarchy; however, for the
2291legacy hierarchies, this may be unexpected.
2292
2293A cgroup namespace is alive as long as there are processes inside or
2294mounts pinning it.  When the last usage goes away, the cgroup
2295namespace is destroyed.  The cgroupns root and the actual cgroups
2296remain.
2297
2298
2299The Root and Views
2300------------------
2301
2302The 'cgroupns root' for a cgroup namespace is the cgroup in which the
2303process calling unshare(2) is running.  For example, if a process in
2304/batchjobs/container_id1 cgroup calls unshare, cgroup
2305/batchjobs/container_id1 becomes the cgroupns root.  For the
2306init_cgroup_ns, this is the real root ('/') cgroup.
2307
2308The cgroupns root cgroup does not change even if the namespace creator
2309process later moves to a different cgroup::
2310
2311  # ~/unshare -c # unshare cgroupns in some cgroup
2312  # cat /proc/self/cgroup
2313  0::/
2314  # mkdir sub_cgrp_1
2315  # echo 0 > sub_cgrp_1/cgroup.procs
2316  # cat /proc/self/cgroup
2317  0::/sub_cgrp_1
2318
Each process gets its namespace-specific view of "/proc/$PID/cgroup".
2320
2321Processes running inside the cgroup namespace will be able to see
2322cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
2323From within an unshared cgroupns::
2324
2325  # sleep 100000 &
2326  [1] 7353
2327  # echo 7353 > sub_cgrp_1/cgroup.procs
2328  # cat /proc/7353/cgroup
2329  0::/sub_cgrp_1
2330
2331From the initial cgroup namespace, the real cgroup path will be
2332visible::
2333
2334  $ cat /proc/7353/cgroup
2335  0::/batchjobs/container_id1/sub_cgrp_1
2336
2337From a sibling cgroup namespace (that is, a namespace rooted at a
2338different cgroup), the cgroup path relative to its own cgroup
2339namespace root will be shown.  For instance, if PID 7353's cgroup
2340namespace root is at '/batchjobs/container_id2', then it will see::
2341
2342  # cat /proc/7353/cgroup
2343  0::/../container_id2/sub_cgrp_1
2344
Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.
2347
2348
2349Migration and setns(2)
2350----------------------
2351
2352Processes inside a cgroup namespace can move into and out of the
2353namespace root if they have proper access to external cgroups.  For
2354example, from inside a namespace with cgroupns root at
2355/batchjobs/container_id1, and assuming that the global hierarchy is
2356still accessible inside cgroupns::
2357
2358  # cat /proc/7353/cgroup
2359  0::/sub_cgrp_1
2360  # echo 7353 > batchjobs/container_id2/cgroup.procs
2361  # cat /proc/7353/cgroup
2362  0::/../container_id2
2363
2364Note that this kind of setup is not encouraged.  A task inside cgroup
2365namespace should only be exposed to its own cgroupns hierarchy.
2366
2367setns(2) to another cgroup namespace is allowed when:
2368
2369(a) the process has CAP_SYS_ADMIN against its current user namespace
2370(b) the process has CAP_SYS_ADMIN against the target cgroup
2371    namespace's userns
2372
No implicit cgroup changes happen when attaching to another cgroup
namespace.  It is expected that someone moves the attaching process
under the target cgroup namespace root.
2376
2377
2378Interaction with Other Namespaces
2379---------------------------------
2380
A namespace specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::
2383
2384  # mount -t cgroup2 none $MOUNT_POINT
2385
2386This will mount the unified cgroup hierarchy with cgroupns root as the
2387filesystem root.  The process needs CAP_SYS_ADMIN against its user and
2388mount namespaces.
2389
The virtualization of the /proc/self/cgroup file combined with restricting
2391the view of cgroup hierarchy by namespace-private cgroupfs mount
2392provides a properly isolated cgroup view inside the container.
2393
2394
2395Information on Kernel Programming
2396=================================
2397
2398This section contains kernel programming information in the areas
2399where interacting with cgroup is necessary.  cgroup core and
2400controllers are not covered.
2401
2402
2403Filesystem Support for Writeback
2404--------------------------------
2405
2406A filesystem can support cgroup writeback by updating
2407address_space_operations->writepage[s]() to annotate bio's using the
2408following two functions.
2409
2410  wbc_init_bio(@wbc, @bio)
2411	Should be called for each bio carrying writeback data and
2412	associates the bio with the inode's owner cgroup and the
2413	corresponding request queue.  This must be called after
2414	a queue (device) has been associated with the bio and
2415	before submission.
2416
2417  wbc_account_cgroup_owner(@wbc, @page, @bytes)
2418	Should be called for each data segment being written out.
	While this function doesn't care exactly when it's called
	during the writeback session, it is easiest and most natural
	to call it as data segments are added to a bio.
2422
2423With writeback bio's annotated, cgroup support can be enabled per
2424super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
2425selective disabling of cgroup writeback support which is helpful when
2426certain filesystem features, e.g. journaled data mode, are
2427incompatible.
2428
2429wbc_init_bio() binds the specified bio to its cgroup.  Depending on
2430the configuration, the bio may be executed at a lower priority and if
2431the writeback session is holding shared resources, e.g. a journal
2432entry, may lead to priority inversion.  There is no one easy solution
2433for the problem.  Filesystems can try to work around specific problem
2434cases by skipping wbc_init_bio() and using bio_associate_blkg()
2435directly.
2436
2437
2438Deprecated v1 Core Features
2439===========================
2440
2441- Multiple hierarchies including named ones are not supported.
2442
- None of the v1 mount options are supported.
2444
2445- The "tasks" file is removed and "cgroup.procs" is not sorted.
2446
2447- "cgroup.clone_children" is removed.
2448
2449- /proc/cgroups is meaningless for v2.  Use "cgroup.controllers" file
2450  at the root instead.
2451
2452
2453Issues with v1 and Rationales for v2
2454====================================
2455
2456Multiple Hierarchies
2457--------------------
2458
2459cgroup v1 allowed an arbitrary number of hierarchies and each
2460hierarchy could host any number of controllers.  While this seemed to
2461provide a high level of flexibility, it wasn't useful in practice.
2462
2463For example, as there is only one instance of each controller, utility
2464type controllers such as freezer which can be useful in all
2465hierarchies could only be used in one.  The issue is exacerbated by
2466the fact that controllers couldn't be moved to another hierarchy once
2467hierarchies were populated.  Another issue was that all controllers
2468bound to a hierarchy were forced to have exactly the same view of the
2469hierarchy.  It wasn't possible to vary the granularity depending on
2470the specific controller.
2471
2472In practice, these issues heavily limited which controllers could be
2473put on the same hierarchy and most configurations resorted to putting
2474each controller on its own hierarchy.  Only closely related ones, such
2475as the cpu and cpuacct controllers, made sense to be put on the same
2476hierarchy.  This often meant that userland ended up managing multiple
2477similar hierarchies repeating the same steps on each hierarchy
2478whenever a hierarchy management operation was necessary.
2479
2480Furthermore, support for multiple hierarchies came at a steep cost.
2481It greatly complicated cgroup core implementation but more importantly
2482the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.
2484
2485There was no limit on how many hierarchies there might be, which meant
2486that a thread's cgroup membership couldn't be described in finite
2487length.  The key might contain any number of entries and was unlimited
2488in length, which made it highly awkward to manipulate and led to
2489addition of controllers which existed only to identify membership,
2490which in turn exacerbated the original problem of proliferating number
2491of hierarchies.
2492
2493Also, as a controller couldn't have any expectation regarding the
2494topologies of hierarchies other controllers might be on, each
2495controller had to assume that all other controllers were attached to
2496completely orthogonal hierarchies.  This made it impossible, or at
2497least very cumbersome, for controllers to cooperate with each other.
2498
2499In most use cases, putting controllers on hierarchies which are
2500completely orthogonal to each other isn't necessary.  What usually is
2501called for is the ability to have differing levels of granularity
2502depending on the specific controller.  In other words, hierarchy may
2503be collapsed from leaf towards root when viewed from specific
2504controllers.  For example, a given configuration might not care about
2505how memory is distributed beyond a certain level while still wanting
2506to control how CPU cycles are distributed.
2507
2508
2509Thread Granularity
2510------------------
2511
2512cgroup v1 allowed threads of a process to belong to different cgroups.
2513This didn't make sense for some controllers and those controllers
2514ended up implementing different ways to ignore such situations but
2515much more importantly it blurred the line between API exposed to
2516individual applications and system management interface.
2517
2518Generally, in-process knowledge is available only to the process
2519itself; thus, unlike service-level organization of processes,
2520categorizing threads of a process requires active participation from
2521the application which owns the target process.
2522
2523cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity.  cgroups were delegated to
individual applications so that they could create and manage their own
2526sub-hierarchies and control resource distributions along them.  This
2527effectively raised cgroup to the status of a syscall-like API exposed
2528to lay programs.
2529
2530First of all, cgroup has a fundamentally inadequate interface to be
2531exposed this way.  For a process to access its own knobs, it has to
2532extract the path on the target hierarchy from /proc/self/cgroup,
2533construct the path by appending the name of the knob to the path, open
2534and then read and/or write to it.  This is not only extremely clunky
2535and unusual but also inherently racy.  There is no conventional way to
2536define transaction across the required steps and nothing can guarantee
2537that the process would actually be operating on its own sub-hierarchy.
2538
2539cgroup controllers implemented a number of knobs which would never be
2540accepted as public APIs because they were just adding control knobs to
2541system-management pseudo filesystem.  cgroup ended up with interface
2542knobs which were not properly abstracted or refined and directly
2543revealed kernel internal details.  These knobs got exposed to
2544individual applications through the ill-defined delegation mechanism
2545effectively abusing cgroup as a shortcut to implementing public APIs
2546without going through the required scrutiny.
2547
This was painful for both userland and kernel.  Userland ended up with
misbehaving and poorly abstracted interfaces, and the kernel ended up
inadvertently exposing and getting locked into constructs.
2551
2552
2553Competition Between Inner Nodes and Threads
2554-------------------------------------------
2555
cgroup v1 allowed threads to be in any cgroup, which created an
2557interesting problem where threads belonging to a parent cgroup and its
2558children cgroups competed for resources.  This was nasty as two
2559different types of entities competed and there was no obvious way to
2560settle it.  Different controllers did different things.
2561
2562The cpu controller considered threads and cgroups as equivalents and
2563mapped nice levels to cgroup weights.  This worked for some cases but
2564fell flat when children wanted to be allocated specific ratios of CPU
2565cycles and the number of internal threads fluctuated - the ratios
2566constantly changed as the number of competing entities fluctuated.
2567There also were other issues.  The mapping from nice level to weight
2568wasn't obvious or universal, and there were various other knobs which
2569simply weren't available for threads.
2570
2571The io controller implicitly created a hidden leaf node for each
2572cgroup to host the threads.  The hidden leaf had its own copies of all
2573the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
2575always added an extra layer of nesting which wouldn't be necessary
2576otherwise, made the interface messy and significantly complicated the
2577implementation.
2578
2579The memory controller didn't have a way to control what happened
2580between internal tasks and child cgroups and the behavior was not
2581clearly defined.  There were attempts to add ad-hoc behaviors and
2582knobs to tailor the behavior to specific workloads which would have
2583led to problems extremely difficult to resolve in the long term.
2584
2585Multiple controllers struggled with internal tasks and came up with
2586different ways to deal with it; unfortunately, all the approaches were
2587severely flawed and, furthermore, the widely different behaviors
2588made cgroup as a whole highly inconsistent.
2589
2590This clearly is a problem which needs to be addressed from cgroup core
2591in a uniform way.
2592
2593
2594Other Interface Issues
2595----------------------
2596
2597cgroup v1 grew without oversight and developed a large number of
2598idiosyncrasies and inconsistencies.  One issue on the cgroup core side
2599was how an empty cgroup was notified - a userland helper binary was
2600forked and executed for each event.  The event delivery wasn't
2601recursive or delegatable.  The limitations of the mechanism also led
2602to in-kernel event delivery filtering mechanism further complicating
2603the interface.
2604
2605Controller interfaces were problematic too.  An extreme example is
2606controllers completely ignoring hierarchical organization and treating
2607all cgroups as if they were all located directly under the root
2608cgroup.  Some controllers exposed a large amount of inconsistent
2609implementation details to userland.
2610
2611There also was no consistency across controllers.  When a new cgroup
2612was created, some controllers defaulted to not imposing extra
2613restrictions while others disallowed any resource usage until
2614explicitly configured.  Configuration knobs for the same type of
2615control used widely differing naming schemes and formats.  Statistics
2616and information knobs were named arbitrarily and used different
2617formats and units even in the same controller.
2618
2619cgroup v2 establishes common conventions where appropriate and updates
2620controllers so that they expose minimal and consistent interfaces.
2621
2622
2623Controller Issues and Remedies
2624------------------------------
2625
2626Memory
2627~~~~~~
2628
2629The original lower boundary, the soft limit, is defined as a limit
2630that is per default unset.  As a result, the set of cgroups that
2631global reclaim prefers is opt-in, rather than opt-out.  The costs for
2632optimizing these mostly negative lookups are so high that the
2633implementation, despite its enormous size, does not even provide the
2634basic desirable behavior.  First off, the soft limit has no
2635hierarchical meaning.  All configured groups are organized in a global
2636rbtree and treated like equal peers, regardless where they are located
2637in the hierarchy.  This makes subtree delegation impossible.  Second,
2638the soft limit reclaim pass is so aggressive that it not just
2639introduces high allocation latencies into the system, but also impacts
2640system performance due to overreclaim, to the point where the feature
2641becomes self-defeating.
2642
2643The memory.low boundary on the other hand is a top-down allocated
2644reserve.  A cgroup enjoys reclaim protection when it's within its
2645effective low, which makes delegation of subtrees possible. It also
2646enjoys having reclaim pressure proportional to its overage when
2647above its effective low.
2648
2649The original high boundary, the hard limit, is defined as a strict
2650limit that can not budge, even if the OOM killer has to be called.
2651But this generally goes against the goal of making the most out of the
2652available memory.  The memory consumption of workloads varies during
2653runtime, and that requires users to overcommit.  But doing that with a
2654strict upper limit requires either a fairly accurate prediction of the
2655working set size or adding slack to the limit.  Since working set size
2656estimation is hard and error prone, and getting it wrong results in
2657OOM kills, most users tend to err on the side of a looser limit and
2658end up wasting precious resources.
2659
2660The memory.high boundary on the other hand can be set much more
2661conservatively.  When hit, it throttles allocations by forcing them
2662into direct reclaim to work off the excess, but it never invokes the
2663OOM killer.  As a result, a high boundary that is chosen too
2664aggressively will not terminate the processes, but instead it will
2665lead to gradual performance degradation.  The user can monitor this
2666and make corrections until the minimal memory footprint that still
2667gives acceptable performance is found.
2668
2669In extreme cases, with many concurrent allocations and a complete
2670breakdown of reclaim progress within the group, the high boundary can
2671be exceeded.  But even then it's mostly better to satisfy the
2672allocation from the slack available in other groups or the rest of the
2673system than killing the group.  Otherwise, memory.max is there to
2674limit this type of spillover and ultimately contain buggy or even
2675malicious applications.
2676
2677Setting the original memory.limit_in_bytes below the current usage was
2678subject to a race condition, where concurrent charges could cause the
2679limit setting to fail. memory.max on the other hand will first set the
2680limit to prevent new charges, and then reclaim and OOM kill until the
2681new limit is met - or the task writing to memory.max is killed.
2682
2683The combined memory+swap accounting and limiting is replaced by real
2684control over swap space.
2685
2686The main argument for a combined memory+swap facility in the original
2687cgroup design was that global or parental pressure would always be
2688able to swap all anonymous memory of a child group, regardless of the
2689child's own (possibly untrusted) configuration.  However, untrusted
2690groups can sabotage swapping by other means - such as referencing its
2691anonymous memory in a tight loop - and an admin can not assume full
2692swappability when overcommitting untrusted jobs.
2693
2694For trusted jobs, on the other hand, a combined counter is not an
2695intuitive userspace interface, and it flies in the face of the idea
2696that cgroup controllers should account and limit specific physical
2697resources.  Swap space is a resource like all others in the system,
2698and that's why unified hierarchy allows distributing it separately.
2699