CPUSets are intended to divide all available CPUs (and memory nodes) of
the system between groups of tasks.

* mount -t cgroup -ocpuset cpuset /dev/cpuset
* Creating a new task group:
  - mkdir /dev/cpuset/my_task_group
    creates a new cpuset
  - rmdir /dev/cpuset/my_task_group
    deletes a previously created cpuset
  - echo <pid> > /dev/cpuset/my_task_group/tasks
    adds the task with the given pid and all its subprocesses to the
    specified cpuset
  - echo #cpunum > /dev/cpuset/my_task_group/cpus
    assigns CPUs to the cpuset
  - echo #nodenum > /dev/cpuset/my_task_group/mems
    assigns memory nodes to the cpuset
* Both 'cpus' and 'mems' must be specified before tasks are allowed to
  enter the cpuset (otherwise the write to 'tasks' fails with
  "No space left on device").
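The steps above can be sketched as one sequence (a sketch only: it needs
root and cpuset support in the kernel; the group name, CPU/node numbers,
and the pid 1234 are placeholders):

```shell
mount -t cgroup -ocpuset cpuset /dev/cpuset
mkdir /dev/cpuset/my_task_group
# Both 'cpus' and 'mems' must be populated before any task can join,
# otherwise the write to 'tasks' fails with "No space left on device".
echo 0-1  > /dev/cpuset/my_task_group/cpus   # CPUs 0 and 1
echo 0    > /dev/cpuset/my_task_group/mems   # memory node 0
echo 1234 > /dev/cpuset/my_task_group/tasks  # placeholder pid
```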
The 'cpu' controller is intended to divide CPU time between groups of
tasks.

* mount -t cgroup -ocpu none /dev/cpuctl
* echo 1024 > cpu.shares (shares of the root group, i.e. all other tasks)
* mkdir long_and_heavy_task
* echo 256 > long_and_heavy_task/cpu.shares
* echo #pid > long_and_heavy_task/tasks
* rmdir long_and_heavy_task

- The 'cpu.shares' file defines how much CPU time a group of tasks will
  get. The absolute value is meaningless; only the ratios between groups
  matter: if group1 has '1024' in cpu.shares and group2 has '2048', then
  group2 will get twice as much CPU time.
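Since only the ratios matter, the fraction of CPU a group receives is its
shares divided by the total of all runnable groups' shares. A small
arithmetic sketch, using the hypothetical share values from the commands
above (root group 1024, throttled group 256):

```shell
root_shares=1024      # root group, i.e. all other tasks
heavy_shares=256      # long_and_heavy_task group
total=$((root_shares + heavy_shares))
# When both groups are runnable, the throttled group gets
# 256 / 1280 of the CPU:
echo $((100 * heavy_shares / total))   # prints 20 (percent)
```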
- The root folder contains all tasks not belonging to any defined group,
  i.e. all tasks by default. When a new group is created and a task is
  added to its 'tasks' file, that task is automatically removed from the
  root 'tasks' file.
- All child tasks are automatically added to the group of the parent task.
- The root folder operates as a fully standard group holding all unlisted
  tasks. CPU time is shared between all groups (including the
  root/unlisted one) according to the contents of their 'cpu.shares' files.
- Subgroups are not supported (as of 2.6.25).
- To destroy a group, distribute its tasks among the other groups
  (including the root/default one) by echo'ing their pids into the
  corresponding 'tasks' files, and then rmdir the group directory.
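The tear-down just described can be sketched as follows (root required;
the /dev/cpuctl mount point and group name are the ones from the example
above):

```shell
# Move every task back to the root group; the 'tasks' file accepts
# only a single pid per write, hence the loop.
for pid in $(cat /dev/cpuctl/long_and_heavy_task/tasks); do
    echo "$pid" > /dev/cpuctl/tasks
done
# The group directory can be removed only once it holds no tasks.
rmdir /dev/cpuctl/long_and_heavy_task
```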
* mount -t cgroup -omemory none /dev/memctl
* echo #pid > tasks - adds the task and its children
* echo ###[kKmMgG] > memory.limit_in_bytes - sets the limit
  (the value may be adjusted by the kernel)
* cat memory.usage_in_bytes - memory currently allocated by the group

* Shared pages are accounted on a first-touch basis: the cgroup that
  first touches a page is charged for it.
* If a cgroup runs out of memory, it tries to swap. If that is not
  possible either, one of the cgroup's tasks is killed.
* When a task migrates from one cgroup to another, its charge is not
  carried forward. Pages allocated in the original cgroup remain charged
  to it; the charge is dropped when the page is freed or reclaimed.
- A task cannot reside in two groups simultaneously.
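Putting the steps together (a sketch; requires root, and the group name
'small_jobs', the 4M limit, and pid 1234 are made-up placeholders):

```shell
mount -t cgroup -omemory none /dev/memctl
mkdir /dev/memctl/small_jobs                     # hypothetical group
echo 4M   > /dev/memctl/small_jobs/memory.limit_in_bytes
echo 1234 > /dev/memctl/small_jobs/tasks         # placeholder pid
cat /dev/memctl/small_jobs/memory.usage_in_bytes # current charge
```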
The 'cpuacct' controller provides per-cgroup CPU usage information (the
groups are standalone and not related to group fair scheduling).

* mount -t cgroup -ocpuacct none /dev/cpuacct
* cat cpuacct.usage - reports the time, in nanoseconds, during which the
  tasks from the cgroup have owned the CPU.
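Since cpuacct.usage reports raw nanoseconds, the reading usually needs a
unit conversion; a tiny sketch with a made-up sample value:

```shell
usage_ns=2500000000            # hypothetical value read from cpuacct.usage
echo $((usage_ns / 1000000))   # CPU time in milliseconds: prints 2500
```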
There is not yet a stable solution for limiting disk accesses.

a) Andrea Righi, io-throttle [blockio]
   http://download.systemimager.org/~arighi/linux/patches/io-throttle/
b) Vasily Tarasov (OpenVZ)
   I/O bandwidth controlling subsystem for CGroups based on CFQ
c) Satoshi UCHIDA <s-uchida@ap.jp.nec.com> [cfq_cgroup]

The currently implemented solution is based on priorities. Use the
'ionice' utility from schedutils or util-linux (the CFQ I/O scheduler
must be used).

ionice -c <class> [-n <prio>] <application_to_start>
  class: 1 (real-time), 2 (best-effort, default), 3 (idle)
    'real-time' and 'idle' are only allowed for root
  priority: 0-7 (0 is the highest priority)
ionice -c <class> [-n <prio>] -p <pid>
  altering a running process
ionice -p <pid>
  querying a running process
- The class and priority are inherited by child processes.
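A sketch of the commands in use (the pid 1234 and the archiving command
are made-up placeholders; the idle class needs root):

```shell
# Start a batch job in the 'idle' class; any children the job spawns
# inherit the class and priority.
ionice -c 3 tar czf /backup/home.tgz /home
# Query the class and priority of an already-running process:
ionice -p 1234
# Demote it to the lowest best-effort priority:
ionice -c 2 -n 7 -p 1234
```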