==================
HugeTLB Controller
==================

HugeTLB controller can be created by first mounting the cgroup filesystem::

  # mount -t cgroup -o hugetlb none /sys/fs/cgroup

With the above step, the initial or the parent HugeTLB group becomes
visible at /sys/fs/cgroup. At bootup, this group includes all the tasks in
the system. /sys/fs/cgroup/tasks lists the tasks in this cgroup.
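
Whether the kernel was built with HugeTLB cgroup support, and which control
files are exposed, can be verified after the mount (a minimal sketch; the
exact file names depend on the hugepage sizes the machine supports)::

  # grep hugetlb /proc/cgroups
  # ls /sys/fs/cgroup/hugetlb.*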

New groups can be created under the parent group /sys/fs/cgroup::

  # cd /sys/fs/cgroup
  # mkdir g1
  # echo $$ > g1/tasks

The above steps create a new group g1 and move the current shell
process (bash) into it.

Brief summary of control files::

 hugetlb.<hugepagesize>.rsvd.limit_in_bytes            # set/show limit of "hugepagesize" hugetlb reservations
 hugetlb.<hugepagesize>.rsvd.max_usage_in_bytes        # show max "hugepagesize" hugetlb reservations and no-reserve faults
 hugetlb.<hugepagesize>.rsvd.usage_in_bytes            # show current reservations and no-reserve faults for "hugepagesize" hugetlb
 hugetlb.<hugepagesize>.rsvd.failcnt                   # show the number of allocation failures due to the HugeTLB reservation limit
 hugetlb.<hugepagesize>.limit_in_bytes                 # set/show limit of "hugepagesize" hugetlb faults
 hugetlb.<hugepagesize>.max_usage_in_bytes             # show max "hugepagesize" hugetlb usage recorded
 hugetlb.<hugepagesize>.usage_in_bytes                 # show current usage for "hugepagesize" hugetlb
 hugetlb.<hugepagesize>.failcnt                        # show the number of allocation failures due to the HugeTLB usage limit

For a system supporting three hugepage sizes (64KB, 32MB and 1GB), the control
files include::

  hugetlb.1GB.limit_in_bytes
  hugetlb.1GB.max_usage_in_bytes
  hugetlb.1GB.usage_in_bytes
  hugetlb.1GB.failcnt
  hugetlb.1GB.rsvd.limit_in_bytes
  hugetlb.1GB.rsvd.max_usage_in_bytes
  hugetlb.1GB.rsvd.usage_in_bytes
  hugetlb.1GB.rsvd.failcnt
  hugetlb.64KB.limit_in_bytes
  hugetlb.64KB.max_usage_in_bytes
  hugetlb.64KB.usage_in_bytes
  hugetlb.64KB.failcnt
  hugetlb.64KB.rsvd.limit_in_bytes
  hugetlb.64KB.rsvd.max_usage_in_bytes
  hugetlb.64KB.rsvd.usage_in_bytes
  hugetlb.64KB.rsvd.failcnt
  hugetlb.32MB.limit_in_bytes
  hugetlb.32MB.max_usage_in_bytes
  hugetlb.32MB.usage_in_bytes
  hugetlb.32MB.failcnt
  hugetlb.32MB.rsvd.limit_in_bytes
  hugetlb.32MB.rsvd.max_usage_in_bytes
  hugetlb.32MB.rsvd.usage_in_bytes
  hugetlb.32MB.rsvd.failcnt
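
As a rough sketch of how these control files are used (building on the g1
group created above, picking the 32MB hugepage size from this example, and
with arbitrary values), a page fault limit and a reservation limit can be set
and read back like this::

  # cd /sys/fs/cgroup/g1
  # echo 64M > hugetlb.32MB.limit_in_bytes        # cap faulted-in 32MB pages at 64MB
  # echo 128M > hugetlb.32MB.rsvd.limit_in_bytes  # cap reserved 32MB pages at 128MB
  # cat hugetlb.32MB.usage_in_bytes
  # cat hugetlb.32MB.failcnt

The limit files take a byte value (suffixes such as K, M and G are accepted),
and usage is accounted in whole hugepages of the given size.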


1. Page fault accounting

::

 hugetlb.<hugepagesize>.limit_in_bytes
 hugetlb.<hugepagesize>.max_usage_in_bytes
 hugetlb.<hugepagesize>.usage_in_bytes
 hugetlb.<hugepagesize>.failcnt

The HugeTLB controller allows users to limit the HugeTLB usage (page faults)
per control group and enforces the limit during page fault. Since HugeTLB
doesn't support page reclaim, enforcing the limit at page fault time means
that the application will get a SIGBUS signal if it tries to fault in HugeTLB
pages beyond its limit. Therefore the application needs to know beforehand
exactly how many HugeTLB pages it uses, and the sysadmin needs to make sure
that enough hugepages are available on the machine for all the users, to
avoid processes getting SIGBUS.
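
A minimal sketch of this from the command line is shown below. ./hugepage-app
is a hypothetical stand-in for any program that faults in the given amount of
HugeTLB memory (it is not shipped with the kernel), and the sizes assume the
32MB hugepage size from the example above::

  # cd /sys/fs/cgroup
  # echo 64M > g1/hugetlb.32MB.limit_in_bytes
  # echo $$ > g1/tasks
  # ./hugepage-app 128M         # hypothetical workload faulting in 128MB of 32MB pages
  Bus error
  # cat g1/hugetlb.32MB.usage_in_bytes
  # cat g1/hugetlb.32MB.failcnt

Once the limit is reached, further faults fail, the application receives
SIGBUS (terminating it unless the signal is handled), and failcnt records the
number of failed allocations.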


2. Reservation accounting

::

 hugetlb.<hugepagesize>.rsvd.limit_in_bytes
 hugetlb.<hugepagesize>.rsvd.max_usage_in_bytes
 hugetlb.<hugepagesize>.rsvd.usage_in_bytes
 hugetlb.<hugepagesize>.rsvd.failcnt

The HugeTLB controller allows users to limit HugeTLB reservations per control
group and enforces the limit at reservation time, as well as at fault time
for HugeTLB memory for which no reservation exists. Since reservation limits
are enforced at reservation time (on mmap or shmget), a reservation limit
never causes the application to get a SIGBUS signal if the memory was
reserved beforehand. For MAP_NORESERVE allocations, the reservation limit
behaves the same as the fault limit, enforcing memory usage at fault time and
causing the application to receive a SIGBUS if it crosses its limit.

Reservation limits are superior to the page fault limits described above,
since reservation limits are enforced at reservation time (on mmap or shmget)
and never cause the application to get a SIGBUS signal if the memory was
reserved beforehand. This allows for easier fallback to alternatives such as
non-HugeTLB memory. In the case of page fault accounting, it is very hard to
avoid processes getting SIGBUS, since the sysadmin would need to know
precisely the HugeTLB usage of all the tasks in the system and make sure
there are enough pages to satisfy all requests. Avoiding tasks getting SIGBUS
on overcommitted systems is practically impossible with page fault accounting.
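
As a sketch of the difference in behaviour (again with the hypothetical
./hugepage-app workload and the 32MB hugepage size from the example above),
with a reservation limit the failure is reported at mmap() time rather than
as a SIGBUS later::

  # cd /sys/fs/cgroup
  # echo 64M > g1/hugetlb.32MB.rsvd.limit_in_bytes
  # echo $$ > g1/tasks
  # ./hugepage-app 128M         # hypothetical workload reserving 128MB of 32MB pages
  mmap: Cannot allocate memory
  # cat g1/hugetlb.32MB.rsvd.failcnt

The application can then handle the failed mmap() gracefully, for example by
falling back to non-HugeTLB memory, instead of being killed partway through
its run.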


3. Caveats with shared memory

For shared HugeTLB memory, both HugeTLB reservations and page faults are
charged to the first task that causes the memory to be reserved or faulted,
and all subsequent uses of this reserved or faulted memory are done without
charging.

Shared HugeTLB memory is only uncharged when it is unreserved or deallocated.
This is usually when the HugeTLB file is deleted, and not when the task that
caused the reservation or fault has exited.
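
A sketch of this behaviour, again using the hypothetical ./hugepage-app to
map a shared file on a hugetlbfs mount (the paths and the 32MB hugepage size
are only illustrative)::

  # mkdir -p /mnt/huge
  # mount -t hugetlbfs -o pagesize=32M none /mnt/huge
  # mkdir /sys/fs/cgroup/g2
  # bash -c 'echo $$ > /sys/fs/cgroup/g1/tasks; exec ./hugepage-app /mnt/huge/buf'
  # bash -c 'echo $$ > /sys/fs/cgroup/g2/tasks; exec ./hugepage-app /mnt/huge/buf'
  # cat /sys/fs/cgroup/g1/hugetlb.32MB.usage_in_bytes   # charged to the first toucher, g1
  # cat /sys/fs/cgroup/g2/hugetlb.32MB.usage_in_bytes   # 0, the pages were already charged
  # rm /mnt/huge/buf                                    # deleting the file uncharges g1

Even after both invocations have exited, the usage stays charged to g1 until
the file backing the shared memory is removed.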


4. Caveats with HugeTLB cgroup offline

When a HugeTLB cgroup goes offline with some reservations or faults still
charged to it, the behavior is as follows:

- The fault charges are charged to the parent HugeTLB cgroup (reparented),
- the reservation charges remain on the offline HugeTLB cgroup.

This means that if a HugeTLB cgroup gets offlined while there are still
HugeTLB reservations charged to it, that cgroup persists as a zombie until
all HugeTLB reservations are uncharged. HugeTLB reservations behave in this
manner to match the memory controller, whose cgroups also persist as zombies
until all charged memory is uncharged. Also, the tracking of HugeTLB
reservations is a bit more complex than the tracking of HugeTLB faults, so it
is significantly harder to reparent reservations at offline time.