From patchwork Sun Sep 11 08:34:17 2022
X-Patchwork-Submitter: Yuanchu Xie
X-Patchwork-Id: 12972784
Date: Sun, 11 Sep 2022 01:34:17 -0700
In-Reply-To: <20220911083418.2818369-1-yuanchu@google.com>
References: <20220911083418.2818369-1-yuanchu@google.com>
Message-ID: <20220911083418.2818369-2-yuanchu@google.com>
Subject: [RFC PATCH 1/2] mm: multi-gen LRU: support page access info harvesting with eBPF
From: Yuanchu Xie
To: linux-mm@kvack.org, Yu Zhao
Cc: Michael Larabel, Jon Corbet, Andrew Morton, Yuanchu Xie,
 linux-kernel@vger.kernel.org, bpf@vger.kernel.org

Add the infrastructure to enable BPF programs to hook into MGLRU and
capture page access information as MGLRU walks page tables.

- Add empty functions as hook points for capturing pte and pmd
  access-bit harvesting during MGLRU page table walks.
- Add a kfunc to invoke MGLRU aging.
- Add a kfunc and hook point so that MGLRU aging can be filtered by
  PID; see the sketch after this list.
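As an illustration only (not part of this patch), a tracing program
along the following lines could consume these hooks. This is a minimal
sketch: it assumes a libbpf build with a bpftool-generated vmlinux.h,
and the program names, section names, and the target_pid variable are
made up for the example. It restricts aging to a single PID via the
bpf_set_skip_mm() kfunc and traces pte access-bit harvesting:

/* Sketch: restrict MGLRU aging to one PID and trace pte harvesting. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* kfunc this patch registers for BPF_PROG_TYPE_TRACING */
extern void bpf_set_skip_mm(struct bpf_mglru_should_skip_mm_control *ctl) __ksym;

const volatile pid_t target_pid;	/* set by the loader before load */

SEC("fentry/bpf_mglru_should_skip_mm")
int BPF_PROG(filter_mm, struct bpf_mglru_should_skip_mm_control *ctl)
{
	/* ask the kernel to skip every mm not owned by target_pid */
	if (ctl->pid != target_pid)
		bpf_set_skip_mm(ctl);
	return 0;
}

SEC("fentry/mglru_pte_probe")
int BPF_PROG(probe_pte, pid_t pid, unsigned int nid, unsigned long addr,
	     unsigned long len, bool anon)
{
	bpf_printk("pid=%d nid=%u pages=%lu", pid, nid, len);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

The same approach works for mglru_pmd_probe(); the __weak noinline
definitions below exist precisely so that these attach points survive
compiler optimization.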
Signed-off-by: Yuanchu Xie
---
 include/linux/mmzone.h |   1 +
 mm/vmscan.c            | 154 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 155 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 710fc1d83bd0..f652b9473c6f 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -481,6 +481,7 @@ struct lru_gen_mm_walk {
 	int mm_stats[NR_MM_STATS];
 	/* total batched items */
 	int batched;
+	pid_t pid;
 	bool can_swap;
 	bool force_scan;
 };
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 762e7cb3d2d0..28499ba15e96 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -60,6 +60,10 @@
 #include
 #include
 #include
+#include
+#include
+#include
+#include

 #include "internal.h"
 #include "swap.h"
@@ -3381,12 +3385,41 @@ static void reset_mm_stats(struct lruvec *lruvec, struct lru_gen_mm_walk *walk,
 	}
 }

+struct bpf_mglru_should_skip_mm_control {
+	pid_t pid;
+	bool should_skip;
+};
+
+void bpf_set_skip_mm(struct bpf_mglru_should_skip_mm_control *ctl)
+{
+	ctl->should_skip = true;
+}
+
+__weak noinline void
+bpf_mglru_should_skip_mm(struct bpf_mglru_should_skip_mm_control *ctl)
+{
+}
+
+static bool bpf_mglru_should_skip_mm_wrapper(pid_t pid)
+{
+	struct bpf_mglru_should_skip_mm_control ctl = {
+		.pid = pid,
+		.should_skip = false,
+	};
+
+	bpf_mglru_should_skip_mm(&ctl);
+	return ctl.should_skip;
+}
+
 static bool should_skip_mm(struct mm_struct *mm, struct lru_gen_mm_walk *walk)
 {
 	int type;
 	unsigned long size = 0;
 	struct pglist_data *pgdat = lruvec_pgdat(walk->lruvec);
 	int key = pgdat->node_id % BITS_PER_TYPE(mm->lru_gen.bitmap);
+#ifdef CONFIG_MEMCG
+	struct task_struct *task;
+#endif

 	if (!walk->force_scan && !test_bit(key, &mm->lru_gen.bitmap))
 		return true;
@@ -3402,6 +3435,16 @@ static bool should_skip_mm(struct mm_struct *mm, struct lru_gen_mm_walk *walk)
 	if (size < MIN_LRU_BATCH)
 		return true;

+#ifdef CONFIG_MEMCG
+	rcu_read_lock();
+	task = rcu_dereference(mm->owner);
+	if (task && bpf_mglru_should_skip_mm_wrapper(task->pid)) {
+		rcu_read_unlock();
+		return true;
+	}
+	rcu_read_unlock();
+#endif
+
 	return !mmget_not_zero(mm);
 }
@@ -3842,6 +3885,22 @@ static bool suitable_to_scan(int total, int young)
 	return young * n >= total;
 }

+/*
+ * __weak noinline guarantees that both the function and the callsite are
+ * preserved
+ */
+__weak noinline void mglru_pte_probe(pid_t pid, unsigned int nid, unsigned long addr,
+				     unsigned long len, bool anon)
+{
+
+}
+
+__weak noinline void mglru_pmd_probe(pid_t pid, unsigned int nid, unsigned long addr,
+				     unsigned long len, bool anon)
+{
+
+}
+
 static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 			   struct mm_walk *args)
 {
@@ -3898,6 +3957,8 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 			folio_mark_dirty(folio);

 		old_gen = folio_update_gen(folio, new_gen);
+		mglru_pte_probe(walk->pid, pgdat->node_id, addr, folio_nr_pages(folio),
+				folio_test_anon(folio));
 		if (old_gen >= 0 && old_gen != new_gen)
 			update_batch_size(walk, folio, old_gen, new_gen);
 	}
@@ -3978,6 +4039,8 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long next, struct vm_area
 			folio_mark_dirty(folio);

 		old_gen = folio_update_gen(folio, new_gen);
+		mglru_pmd_probe(walk->pid, pgdat->node_id, addr, folio_nr_pages(folio),
+				folio_test_anon(folio));
 		if (old_gen >= 0 && old_gen != new_gen)
 			update_batch_size(walk, folio, old_gen, new_gen);
 next:
@@ -4139,6 +4202,7 @@ static void walk_mm(struct lruvec *lruvec, struct mm_struct *mm, struct lru_gen_
 	int err;
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);

+	walk->pid = mm->owner->pid;
 	walk->next_addr = FIRST_USER_ADDRESS;

 	do {
@@ -5657,6 +5721,96 @@ static int run_cmd(char cmd, int memcg_id, int nid, unsigned long seq,
 	return err;
 }

+int bpf_run_aging(int memcg_id, bool can_swap,
+		  bool force_scan)
+{
+	struct scan_control sc = {
+		.may_writepage = true,
+		.may_unmap = true,
+		.may_swap = true,
+		.reclaim_idx = MAX_NR_ZONES - 1,
+		.gfp_mask = GFP_KERNEL,
+	};
+	int err = -EINVAL;
+	struct mem_cgroup *memcg = NULL;
+	struct blk_plug plug;
+	unsigned int flags;
+	unsigned int nid;
+
+	if (!mem_cgroup_disabled()) {
+		rcu_read_lock();
+		memcg = mem_cgroup_from_id(memcg_id);
+#ifdef CONFIG_MEMCG
+		if (memcg && !css_tryget(&memcg->css))
+			memcg = NULL;
+#endif
+		rcu_read_unlock();
+
+		if (!memcg)
+			return -EINVAL;
+	}
+
+	if (memcg_id != mem_cgroup_id(memcg)) {
+		mem_cgroup_put(memcg);
+		return err;
+	}
+
+	set_task_reclaim_state(current, &sc.reclaim_state);
+	flags = memalloc_noreclaim_save();
+	blk_start_plug(&plug);
+	if (!set_mm_walk(NULL)) {
+		err = -ENOMEM;
+		goto done;
+	}
+
+	for_each_online_node(nid) {
+		struct lruvec *lruvec = get_lruvec(memcg, nid);
+		DEFINE_MAX_SEQ(lruvec);
+
+		err = run_aging(lruvec, max_seq, &sc, can_swap, force_scan);
+		if (err)
+			goto done;
+	}
+done:
+	clear_mm_walk();
+	blk_finish_plug(&plug);
+	memalloc_noreclaim_restore(flags);
+	set_task_reclaim_state(current, NULL);
+	mem_cgroup_put(memcg);
+
+	return err;
+}
+
+BTF_SET8_START(bpf_lru_gen_trace_kfunc_ids)
+BTF_ID_FLAGS(func, bpf_set_skip_mm)
+BTF_SET8_END(bpf_lru_gen_trace_kfunc_ids)
+
+BTF_SET8_START(bpf_lru_gen_syscall_kfunc_ids)
+BTF_ID_FLAGS(func, bpf_run_aging)
+BTF_SET8_END(bpf_lru_gen_syscall_kfunc_ids)
+
+static const struct btf_kfunc_id_set bpf_lru_gen_trace_kfunc_set = {
+	.owner = THIS_MODULE,
+	.set = &bpf_lru_gen_trace_kfunc_ids,
+};
+
+static const struct btf_kfunc_id_set bpf_lru_gen_syscall_kfunc_set = {
+	.owner = THIS_MODULE,
+	.set = &bpf_lru_gen_syscall_kfunc_ids,
+};
+
+static int __init bpf_lru_gen_kfunc_init(void)
+{
+	int err = register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING,
+					    &bpf_lru_gen_trace_kfunc_set);
+	if (err)
+		return err;
+	return register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL,
+					 &bpf_lru_gen_syscall_kfunc_set);
+}
+late_initcall(bpf_lru_gen_kfunc_init);
+
+
 /* see Documentation/admin-guide/mm/multigen_lru.rst for details */
 static ssize_t lru_gen_seq_write(struct file *file, const char __user *src,
 				 size_t len, loff_t *pos)
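
For completeness, a sketch (again illustrative, not part of this patch)
of driving the aging kfunc: bpf_run_aging() is registered for
BPF_PROG_TYPE_SYSCALL, so a minimal syscall program executed from
userspace via bpf_prog_test_run_opts() can invoke it. The context
struct and program name below are hypothetical, chosen by the loader:

/* Sketch: trigger MGLRU aging for one memcg from a syscall program. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* kfunc this patch registers for BPF_PROG_TYPE_SYSCALL */
extern int bpf_run_aging(int memcg_id, bool can_swap, bool force_scan) __ksym;

struct aging_args {	/* hypothetical ctx_in layout chosen by the loader */
	int memcg_id;
	int ret;
};

SEC("syscall")
int run_aging_prog(struct aging_args *args)
{
	args->ret = bpf_run_aging(args->memcg_id,
				  /*can_swap=*/true, /*force_scan=*/false);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

Userspace would load this with libbpf and run it with
bpf_prog_test_run_opts(), passing a struct aging_args as ctx_in and
reading the kfunc's return value back out of ctx_out.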