Message ID: 20210910034642.2838054-1-lijinlin3@huawei.com
State:      New, archived
Series:     [v2] block, bfq: fix UAF in bfq_io_set_weight_legacy()
Hello,

On Fri, Sep 10, 2021 at 11:46:42AM +0800, Li Jinlin wrote:
> Freeing bfqg is protected by the queue lock in blkcg_deactivate_policy(),
> but getting/using bfqg is protected by the blkcg lock in
> bfq_io_set_weight_legacy(). If bfq_io_set_weight_legacy() gets bfqg
> before bfqg is freed and uses it afterwards, a use-after-free occurs.
>
>       CPU0                               CPU1
>   blkcg_deactivate_policy
>     spin_lock_irq(&q->queue_lock)
>                                     bfq_io_set_weight_legacy
>                                       spin_lock_irq(&blkcg->lock)
>                                       blkg_to_bfqg(blkg)
>                                         pd_to_bfqg(blkg->pd[pol->plid])
>                                         ^^^^^^blkg->pd[pol->plid] != NULL
>                                       bfqg != NULL
>     pol->pd_free_fn(blkg->pd[pol->plid])
>       pd_to_bfqg(blkg->pd[pol->plid])
>       bfqg_put(bfqg)
>         kfree(bfqg)
>       blkg->pd[pol->plid] = NULL
>     spin_unlock_irq(q->queue_lock);
>                                     bfq_group_set_weight(bfqg, val, 0)
>                                       bfqg->entity.new_weight
>                                       ^^^^^^trigger uaf here
>                                     spin_unlock_irq(&blkcg->lock);
>
> To fix this use-after-free, instead of holding blkcg->lock while
> walking ->blkg_list and getting/using bfqg, walk ->blkg_list under
> RCU and hold the blkg's queue lock while getting/using bfqg.

I think this is a bug in blkcg_deactivate_policy() rather than the other
way around. blkgs are protected by both the q and blkcg locks, and
holding either should stabilize them. The blkcg lock nests inside the q
lock, so I think blkcg_deactivate_policy() just needs to grab the
matching blkcg lock before trying to destroy blkgs.

Thanks.
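In sketch form, the alternative Tejun describes would change the teardown
loop in blkcg_deactivate_policy() rather than the bfq side. This is a
paraphrase of that loop with the suggested locking added, not a committed
change; the exact mainline body may differ:

        spin_lock_irq(&q->queue_lock);

        __clear_bit(pol->plid, q->blkcg_pols);

        list_for_each_entry(blkg, &q->blkg_list, q_node) {
                struct blkcg *blkcg = blkg->blkcg;

                /*
                 * Nests inside q->queue_lock; keeps blkcg-side readers
                 * such as bfq_io_set_weight_legacy() out while the pd
                 * is offlined, freed, and cleared.
                 */
                spin_lock(&blkcg->lock);
                if (blkg->pd[pol->plid]) {
                        if (pol->pd_offline_fn)
                                pol->pd_offline_fn(blkg->pd[pol->plid]);
                        pol->pd_free_fn(blkg->pd[pol->plid]);
                        blkg->pd[pol->plid] = NULL;
                }
                spin_unlock(&blkcg->lock);
        }

        spin_unlock_irq(&q->queue_lock);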
diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index e2f14508f2d6..7209060caa90 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -1025,21 +1025,25 @@ static int bfq_io_set_weight_legacy(struct cgroup_subsys_state *css,
 	struct blkcg *blkcg = css_to_blkcg(css);
 	struct bfq_group_data *bfqgd = blkcg_to_bfqgd(blkcg);
 	struct blkcg_gq *blkg;
+	struct bfq_group *bfqg;
 	int ret = -ERANGE;
 
 	if (val < BFQ_MIN_WEIGHT || val > BFQ_MAX_WEIGHT)
 		return ret;
 
 	ret = 0;
-	spin_lock_irq(&blkcg->lock);
 	bfqgd->weight = (unsigned short)val;
-	hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) {
-		struct bfq_group *bfqg = blkg_to_bfqg(blkg);
+
+	rcu_read_lock();
+	hlist_for_each_entry_rcu(blkg, &blkcg->blkg_list, blkcg_node) {
+		spin_lock_irq(&blkg->q->queue_lock);
+		bfqg = blkg_to_bfqg(blkg);
 
 		if (bfqg)
 			bfq_group_set_weight(bfqg, val, 0);
+		spin_unlock_irq(&blkg->q->queue_lock);
 	}
-	spin_unlock_irq(&blkcg->lock);
+	rcu_read_unlock();
 
 	return ret;
 }
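For context on why holding blkg->q->queue_lock in the loop closes the
race (a simplified pairing of the two paths from the commit message, not
verbatim kernel code): the pd is freed and cleared under the queue lock,
so a reader holding the same lock sees either a live bfqg or NULL, never
a freed one, while rcu_read_lock() keeps the blkg itself alive during
the walk of ->blkg_list without blkcg->lock:

        /* writer side (blkcg_deactivate_policy), simplified */
        spin_lock_irq(&q->queue_lock);
        pol->pd_free_fn(blkg->pd[pol->plid]);   /* frees the bfqg */
        blkg->pd[pol->plid] = NULL;
        spin_unlock_irq(&q->queue_lock);

        /* reader side (the patch above) */
        spin_lock_irq(&blkg->q->queue_lock);
        bfqg = blkg_to_bfqg(blkg);              /* NULL once the pd is gone */
        if (bfqg)
                bfq_group_set_weight(bfqg, val, 0);
        spin_unlock_irq(&blkg->q->queue_lock);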