cgroups: call find_css_set() safely in cgroup_attach_task()
In cgroup_attach_task(), tsk may exit while we are calling find_css_set(),
in which case find_css_set() ends up accessing an invalid (already freed)
css_set.

This patch takes a reference on the css_set with get_css_set(), under
task_lock(), before calling find_css_set(), and drops it with
put_css_set() afterwards.
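
Roughly, the race being closed looks like this (a simplified sketch; the
exit-side call chain is abbreviated):

	cgroup_attach_task(cgrp, tsk)      tsk exits
	-----------------------------      ---------------------------------
	cg = tsk->cgroups;
	                                   exit path drops tsk's reference
	                                   to its css_set; if it was the
	                                   last reference, the css_set is
	                                   freed
	find_css_set(cg, cgrp);
	  /* dereferences the freed css_set */

With the fix, get_css_set(cg) under task_lock() pins the css_set across
find_css_set(), and put_css_set(cg) drops the temporary reference once
the new css_set has been found.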
NOTE:
A css_set's refcount also serves as its task count.  With this patch
applied, that task count may be off by one when cgroup_lock() is not
held.  I reviewed the other code that uses the task count and it is
still correct; no regression was found by review or by simple testing.

So I do not split css_set into two counters (one for the task count and
one for the refcount, as struct mm_struct does).  If this fix causes a
regression, we will switch to two counters in css_set.
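
To make the off-by-one concrete, here is a minimal user-space sketch (not
kernel code; the structure and numbers are made up for illustration) of a
refcount that doubles as a task count.  While the attacher holds its
temporary reference, an observer that reads the refcount as a task count
sees one extra task:

	#include <stdatomic.h>
	#include <stdio.h>

	/* toy stand-in for css_set: refcount == attached tasks + temporary refs */
	struct fake_css_set { atomic_int refcount; };

	int main(void)
	{
		struct fake_css_set cg = { .refcount = 2 };	/* two tasks attached */

		printf("%d\n", atomic_load(&cg.refcount));	/* observer: 2 tasks */

		atomic_fetch_add(&cg.refcount, 1);		/* attacher: get_css_set() */
		printf("%d\n", atomic_load(&cg.refcount));	/* observer: 3, off by one */
		atomic_fetch_sub(&cg.refcount, 1);		/* attacher: put_css_set() */

		printf("%d\n", atomic_load(&cg.refcount));	/* observer: 2 again */
		return 0;
	}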
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Paul Menage <menage@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: Balbir Singh <balbir@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 00d5136..61e92c5 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -1214,7 +1214,7 @@
 	int retval = 0;
 	struct cgroup_subsys *ss;
 	struct cgroup *oldcgrp;
-	struct css_set *cg = tsk->cgroups;
+	struct css_set *cg;
 	struct css_set *newcg;
 	struct cgroupfs_root *root = cgrp->root;
 	int subsys_id;
@@ -1234,11 +1234,16 @@
 		}
 	}
+	task_lock(tsk);
+	cg = tsk->cgroups;
+	get_css_set(cg);
+	task_unlock(tsk);
 	/*
 	 * Locate or allocate a new css_set for this task,
 	 * based on its final set of cgroups
 	 */
 	newcg = find_css_set(cg, cgrp);
+	put_css_set(cg);
 	if (!newcg)
 		return -ENOMEM;