From patchwork Wed May 3 01:36:06 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 13229439
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    bfoster@redhat.com, willy@infradead.org, linux-api@vger.kernel.org,
    kernel-team@meta.com
Subject: [PATCH v13 1/3] workingset: refactor LRU refault to expose refault recency check
Date: Tue, 2 May 2023 18:36:06 -0700
Message-Id: <20230503013608.2431726-2-nphamcs@gmail.com>
In-Reply-To: <20230503013608.2431726-1-nphamcs@gmail.com>
References: <20230503013608.2431726-1-nphamcs@gmail.com>

In preparation for computing recently evicted pages in cachestat,
refactor workingset_refault and lru_gen_refault to expose a helper
function that tests whether an evicted page was recently evicted.

Signed-off-by: Nhat Pham
Acked-by: Johannes Weiner
---
 include/linux/swap.h |   1 +
 mm/workingset.c      | 150 +++++++++++++++++++++++++++++--------------
 2 files changed, 103 insertions(+), 48 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 3c69cb653cb9..b2128df5edea 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -368,6 +368,7 @@ static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
 }
 
 /* linux/mm/workingset.c */
+bool workingset_test_recent(void *shadow, bool file, bool *workingset);
 void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
 void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg);
 void workingset_refault(struct folio *folio, void *shadow);
diff --git a/mm/workingset.c b/mm/workingset.c
index 817758951886..d81f9dafc9f1 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -255,6 +255,29 @@ static void *lru_gen_eviction(struct folio *folio)
 	return pack_shadow(mem_cgroup_id(memcg), pgdat, token, refs);
 }
 
+/*
+ * Tests if the shadow entry is for a folio that was recently evicted.
+ * Fills in @memcgid, @pglist_data, @token, @workingset with the values
+ * unpacked from shadow.
+ */ +static bool lru_gen_test_recent(void *shadow, bool file, int *memcgid, + struct pglist_data **pgdat, unsigned long *token, bool *workingset) +{ + struct mem_cgroup *eviction_memcg; + struct lruvec *lruvec; + struct lru_gen_folio *lrugen; + unsigned long min_seq; + + unpack_shadow(shadow, memcgid, pgdat, token, workingset); + eviction_memcg = mem_cgroup_from_id(*memcgid); + + lruvec = mem_cgroup_lruvec(eviction_memcg, *pgdat); + lrugen = &lruvec->lrugen; + + min_seq = READ_ONCE(lrugen->min_seq[file]); + return (*token >> LRU_REFS_WIDTH) == (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)); +} + static void lru_gen_refault(struct folio *folio, void *shadow) { int hist, tier, refs; @@ -269,23 +292,22 @@ static void lru_gen_refault(struct folio *folio, void *shadow) int type = folio_is_file_lru(folio); int delta = folio_nr_pages(folio); - unpack_shadow(shadow, &memcg_id, &pgdat, &token, &workingset); - - if (pgdat != folio_pgdat(folio)) - return; - rcu_read_lock(); + if (!lru_gen_test_recent(shadow, type, &memcg_id, &pgdat, &token, + &workingset)) + goto unlock; + memcg = folio_memcg_rcu(folio); if (memcg_id != mem_cgroup_id(memcg)) goto unlock; + if (pgdat != folio_pgdat(folio)) + return; + lruvec = mem_cgroup_lruvec(memcg, pgdat); lrugen = &lruvec->lrugen; - min_seq = READ_ONCE(lrugen->min_seq[type]); - if ((token >> LRU_REFS_WIDTH) != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH))) - goto unlock; hist = lru_hist_from_seq(min_seq); /* see the comment in folio_lru_refs() */ @@ -317,6 +339,12 @@ static void *lru_gen_eviction(struct folio *folio) return NULL; } +static bool lru_gen_test_recent(void *shadow, bool file, int *memcgid, + struct pglist_data **pgdat, unsigned long *token, bool *workingset) +{ + return false; +} + static void lru_gen_refault(struct folio *folio, void *shadow) { } @@ -385,42 +413,34 @@ void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg) } /** - * workingset_refault - Evaluate the refault of a previously evicted folio. - * @folio: The freshly allocated replacement folio. - * @shadow: Shadow entry of the evicted folio. - * - * Calculates and evaluates the refault distance of the previously - * evicted folio in the context of the node and the memcg whose memory - * pressure caused the eviction. + * workingset_test_recent - tests if the shadow entry is for a folio that was + * recently evicted. Also fills in @workingset with the value unpacked from + * shadow. + * @shadow: the shadow entry to be tested. + * @file: whether the corresponding folio is from the file lru. + * @workingset: where the workingset value unpacked from shadow should + * be stored. + * + * Return: true if the shadow is for a recently evicted folio; false otherwise. 
*/ -void workingset_refault(struct folio *folio, void *shadow) +bool workingset_test_recent(void *shadow, bool file, bool *workingset) { - bool file = folio_is_file_lru(folio); struct mem_cgroup *eviction_memcg; struct lruvec *eviction_lruvec; unsigned long refault_distance; unsigned long workingset_size; - struct pglist_data *pgdat; - struct mem_cgroup *memcg; - unsigned long eviction; - struct lruvec *lruvec; unsigned long refault; - bool workingset; int memcgid; - long nr; + struct pglist_data *pgdat; + unsigned long eviction; - if (lru_gen_enabled()) { - lru_gen_refault(folio, shadow); - return; - } + if (lru_gen_enabled()) + return lru_gen_test_recent(shadow, file, &memcgid, &pgdat, &eviction, + workingset); - unpack_shadow(shadow, &memcgid, &pgdat, &eviction, &workingset); + unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset); eviction <<= bucket_order; - /* Flush stats (and potentially sleep) before holding RCU read lock */ - mem_cgroup_flush_stats_ratelimited(); - - rcu_read_lock(); /* * Look up the memcg associated with the stored ID. It might * have been deleted since the folio's eviction. @@ -439,7 +459,8 @@ void workingset_refault(struct folio *folio, void *shadow) */ eviction_memcg = mem_cgroup_from_id(memcgid); if (!mem_cgroup_disabled() && !eviction_memcg) - goto out; + return false; + eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat); refault = atomic_long_read(&eviction_lruvec->nonresident_age); @@ -461,20 +482,6 @@ void workingset_refault(struct folio *folio, void *shadow) */ refault_distance = (refault - eviction) & EVICTION_MASK; - /* - * The activation decision for this folio is made at the level - * where the eviction occurred, as that is where the LRU order - * during folio reclaim is being determined. - * - * However, the cgroup that will own the folio is the one that - * is actually experiencing the refault event. - */ - nr = folio_nr_pages(folio); - memcg = folio_memcg(folio); - pgdat = folio_pgdat(folio); - lruvec = mem_cgroup_lruvec(memcg, pgdat); - - mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr); /* * Compare the distance to the existing workingset size. We * don't activate pages that couldn't stay resident even if @@ -495,7 +502,54 @@ void workingset_refault(struct folio *folio, void *shadow) NR_INACTIVE_ANON); } } - if (refault_distance > workingset_size) + + return refault_distance <= workingset_size; +} + +/** + * workingset_refault - Evaluate the refault of a previously evicted folio. + * @folio: The freshly allocated replacement folio. + * @shadow: Shadow entry of the evicted folio. + * + * Calculates and evaluates the refault distance of the previously + * evicted folio in the context of the node and the memcg whose memory + * pressure caused the eviction. + */ +void workingset_refault(struct folio *folio, void *shadow) +{ + bool file = folio_is_file_lru(folio); + struct pglist_data *pgdat; + struct mem_cgroup *memcg; + struct lruvec *lruvec; + bool workingset; + long nr; + + if (lru_gen_enabled()) { + lru_gen_refault(folio, shadow); + return; + } + + /* Flush stats (and potentially sleep) before holding RCU read lock */ + mem_cgroup_flush_stats_ratelimited(); + + rcu_read_lock(); + + /* + * The activation decision for this folio is made at the level + * where the eviction occurred, as that is where the LRU order + * during folio reclaim is being determined. + * + * However, the cgroup that will own the folio is the one that + * is actually experiencing the refault event. 
+ */
+	nr = folio_nr_pages(folio);
+	memcg = folio_memcg(folio);
+	pgdat = folio_pgdat(folio);
+	lruvec = mem_cgroup_lruvec(memcg, pgdat);
+
+	mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
+
+	if (!workingset_test_recent(shadow, file, &workingset))
 		goto out;
 
 	folio_set_active(folio);
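To show how the new helper is meant to be consumed, here is an abridged
excerpt of its caller added in the next patch of this series
(filemap_cachestat() in mm/filemap.c), with the shmem swap-cache special
case omitted. When the page cache xarray yields a shadow entry instead of
a folio, the recency test decides whether the eviction counts as recent:

		if (xa_is_value(folio)) {
			/* evicted page: the slot holds a shadow entry, not a folio */
			void *shadow = (void *)folio;
			bool workingset;	/* not used by this caller */

			cs->nr_evicted += nr_pages;
			if (workingset_test_recent(shadow, true, &workingset))
				cs->nr_recently_evicted += nr_pages;
		}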
From patchwork Wed May 3 01:36:07 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 13229440
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    bfoster@redhat.com, willy@infradead.org, linux-api@vger.kernel.org,
    kernel-team@meta.com
Subject: [PATCH v13 2/3] cachestat: implement cachestat syscall
Date: Tue, 2 May 2023 18:36:07 -0700
Message-Id: <20230503013608.2431726-3-nphamcs@gmail.com>
In-Reply-To: <20230503013608.2431726-1-nphamcs@gmail.com>
References: <20230503013608.2431726-1-nphamcs@gmail.com>

There is currently no good way to query the page cache state of large
file sets and directory trees. There is mincore(), but it scales poorly:
the kernel writes out a lot of bitmap data that userspace has to
aggregate, when the user really does not care about per-page information
in that case. The user also needs to mmap and unmap each file as it goes
along, which can be quite slow as well.

Some use cases where this information could come in handy:
  * Allowing a database to decide whether to perform an index scan or
    direct table queries based on the in-memory cache state of the
    index.
  * Visibility into the writeback algorithm, for diagnosing performance
    issues.
  * Workload-aware writeback pacing: estimating IO fulfilled by page
    cache (and IO to be done) within a range of a file, allowing for
    more frequent syncing when and where there is IO capacity, and
    batching when there is not.
  * Computing memory usage of large files/directory trees, analogous to
    the du tool for disk usage.

More information about these use cases can be found in the following
thread:

https://lore.kernel.org/lkml/20230315170934.GA97793@cmpxchg.org/

This patch implements a new syscall that queries the cache state of a
file and summarizes the number of cached pages, number of dirty pages,
number of pages marked for writeback, number of (recently) evicted
pages, etc. in a given range. Currently, the syscall is only wired in
for the x86 architecture.

NAME
    cachestat - query the page cache statistics of a file.

SYNOPSIS
    #include

    struct cachestat_range {
        __u64 off;
        __u64 len;
    };

    struct cachestat {
        __u64 nr_cache;
        __u64 nr_dirty;
        __u64 nr_writeback;
        __u64 nr_evicted;
        __u64 nr_recently_evicted;
    };

    int cachestat(unsigned int fd, struct cachestat_range *cstat_range,
                  struct cachestat *cstat, unsigned int flags);

DESCRIPTION
    cachestat() queries the number of cached pages, number of dirty
    pages, number of pages marked for writeback, number of evicted
    pages, and number of recently evicted pages in the byte range given
    by `off` and `len`.

    An evicted page is a page that was previously in the page cache but
    has been evicted since. A page is recently evicted if its last
    eviction was recent enough that its reentry to the cache would
    indicate that it is actively being used by the system, and that
    there is memory pressure on the system.

    These values are returned in a cachestat struct, whose address is
    given by the `cstat` argument.

    The `off` and `len` arguments must be non-negative integers. If
    `len` > 0, the queried range is [`off`, `off` + `len`]. If
    `len` == 0, we will query in the range from `off` to the end of
    the file.
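    For example, with a 4 KiB page size, `off` = 4096 and `len` = 8192
    query bytes 4096 through 12288 of the file, which the implementation
    below maps to page indices 1 through 2; the same `off` with
    `len` == 0 queries everything from byte 4096 to the end of the file.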
The `flags` argument is unused for now, but is included for future extensibility. User should pass 0 (i.e no flag specified). Currently, hugetlbfs is not supported. Because the status of a page can change after cachestat() checks it but before it returns to the application, the returned values may contain stale information. RETURN VALUE On success, cachestat returns 0. On error, -1 is returned, and errno is set to indicate the error. ERRORS EFAULT cstat or cstat_args points to an invalid address. EINVAL invalid flags. EBADF invalid file descriptor. EOPNOTSUPP file descriptor is of a hugetlbfs file Signed-off-by: Nhat Pham Acked-by: Johannes Weiner --- arch/x86/entry/syscalls/syscall_32.tbl | 1 + arch/x86/entry/syscalls/syscall_64.tbl | 1 + include/linux/syscalls.h | 5 + include/uapi/asm-generic/unistd.h | 5 +- include/uapi/linux/mman.h | 14 ++ init/Kconfig | 10 ++ kernel/sys_ni.c | 1 + mm/filemap.c | 172 +++++++++++++++++++++++++ 8 files changed, 208 insertions(+), 1 deletion(-) diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl index 320480a8db4f..bc0a3c941b35 100644 --- a/arch/x86/entry/syscalls/syscall_32.tbl +++ b/arch/x86/entry/syscalls/syscall_32.tbl @@ -455,3 +455,4 @@ 448 i386 process_mrelease sys_process_mrelease 449 i386 futex_waitv sys_futex_waitv 450 i386 set_mempolicy_home_node sys_set_mempolicy_home_node +451 i386 cachestat sys_cachestat diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl index c84d12608cd2..227538b0ce80 100644 --- a/arch/x86/entry/syscalls/syscall_64.tbl +++ b/arch/x86/entry/syscalls/syscall_64.tbl @@ -372,6 +372,7 @@ 448 common process_mrelease sys_process_mrelease 449 common futex_waitv sys_futex_waitv 450 common set_mempolicy_home_node sys_set_mempolicy_home_node +451 common cachestat sys_cachestat # # Due to a historical design error, certain syscalls are numbered differently diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h index 33a0ee3bcb2e..6648c07c4381 100644 --- a/include/linux/syscalls.h +++ b/include/linux/syscalls.h @@ -72,6 +72,8 @@ struct open_how; struct mount_attr; struct landlock_ruleset_attr; enum landlock_rule_type; +struct cachestat_range; +struct cachestat; #include #include @@ -1058,6 +1060,9 @@ asmlinkage long sys_memfd_secret(unsigned int flags); asmlinkage long sys_set_mempolicy_home_node(unsigned long start, unsigned long len, unsigned long home_node, unsigned long flags); +asmlinkage long sys_cachestat(unsigned int fd, + struct cachestat_range __user *cstat_range, + struct cachestat __user *cstat, unsigned int flags); /* * Architecture-specific system calls diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h index 45fa180cc56a..cd639fae9086 100644 --- a/include/uapi/asm-generic/unistd.h +++ b/include/uapi/asm-generic/unistd.h @@ -886,8 +886,11 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv) #define __NR_set_mempolicy_home_node 450 __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node) +#define __NR_cachestat 451 +__SYSCALL(__NR_cachestat, sys_cachestat) + #undef __NR_syscalls -#define __NR_syscalls 451 +#define __NR_syscalls 452 /* * 32 bit systems traditionally used different diff --git a/include/uapi/linux/mman.h b/include/uapi/linux/mman.h index f55bc680b5b0..a246e11988d5 100644 --- a/include/uapi/linux/mman.h +++ b/include/uapi/linux/mman.h @@ -4,6 +4,7 @@ #include #include +#include #define MREMAP_MAYMOVE 1 #define MREMAP_FIXED 2 @@ -41,4 +42,17 @@ #define MAP_HUGE_2GB 
HUGETLB_FLAG_ENCODE_2GB #define MAP_HUGE_16GB HUGETLB_FLAG_ENCODE_16GB +struct cachestat_range { + __u64 off; + __u64 len; +}; + +struct cachestat { + __u64 nr_cache; + __u64 nr_dirty; + __u64 nr_writeback; + __u64 nr_evicted; + __u64 nr_recently_evicted; +}; + #endif /* _UAPI_LINUX_MMAN_H */ diff --git a/init/Kconfig b/init/Kconfig index 32c24950c4ce..f7f65af4ee12 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -1771,6 +1771,16 @@ config RSEQ If unsure, say Y. +config CACHESTAT_SYSCALL + bool "Enable cachestat() system call" if EXPERT + default y + help + Enable the cachestat system call, which queries the page cache + statistics of a file (number of cached pages, dirty pages, + pages marked for writeback, (recently) evicted pages). + + If unsure say Y here. + config DEBUG_RSEQ default n bool "Enabled debugging of rseq() system call" if EXPERT diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c index 860b2dcf3ac4..04bfb1e4d377 100644 --- a/kernel/sys_ni.c +++ b/kernel/sys_ni.c @@ -299,6 +299,7 @@ COND_SYSCALL(set_mempolicy); COND_SYSCALL(migrate_pages); COND_SYSCALL(move_pages); COND_SYSCALL(set_mempolicy_home_node); +COND_SYSCALL(cachestat); COND_SYSCALL(perf_event_open); COND_SYSCALL(accept4); diff --git a/mm/filemap.c b/mm/filemap.c index a34abfe8c654..73b043240cef 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include @@ -58,6 +59,8 @@ #include +#include "swap.h" + /* * Shared mappings implemented 30.11.1994. It's not fully working yet, * though. @@ -4119,3 +4122,172 @@ bool filemap_release_folio(struct folio *folio, gfp_t gfp) return try_to_free_buffers(folio); } EXPORT_SYMBOL(filemap_release_folio); + +#ifdef CONFIG_CACHESTAT_SYSCALL +/** + * filemap_cachestat() - compute the page cache statistics of a mapping + * @mapping: The mapping to compute the statistics for. + * @first_index: The starting page cache index. + * @last_index: The final page index (inclusive). + * @cs: the cachestat struct to write the result to. + * + * This will query the page cache statistics of a mapping in the + * page range of [first_index, last_index] (inclusive). The statistics + * queried include: number of dirty pages, number of pages marked for + * writeback, and the number of (recently) evicted pages. 
+ */ +static void filemap_cachestat(struct address_space *mapping, + pgoff_t first_index, pgoff_t last_index, struct cachestat *cs) +{ + XA_STATE(xas, &mapping->i_pages, first_index); + struct folio *folio; + + rcu_read_lock(); + xas_for_each(&xas, folio, last_index) { + unsigned long nr_pages; + pgoff_t folio_first_index, folio_last_index; + + if (xas_retry(&xas, folio)) + continue; + + if (xa_is_value(folio)) { + /* page is evicted */ + void *shadow = (void *)folio; + bool workingset; /* not used */ + int order = xa_get_order(xas.xa, xas.xa_index); + + nr_pages = 1 << order; + /* rounds down to the nearest multiple of 2^order */ + folio_first_index = xas.xa_index >> order << order; + folio_last_index = folio_first_index + nr_pages - 1; + + /* Folios might straddle the range boundaries, only count covered pages */ + if (folio_first_index < first_index) + nr_pages -= first_index - folio_first_index; + + if (folio_last_index > last_index) + nr_pages -= folio_last_index - last_index; + + cs->nr_evicted += nr_pages; + +#ifdef CONFIG_SWAP /* implies CONFIG_MMU */ + if (shmem_mapping(mapping)) { + /* shmem file - in swap cache */ + swp_entry_t swp = radix_to_swp_entry(folio); + + shadow = get_shadow_from_swap_cache(swp); + } +#endif + if (workingset_test_recent(shadow, true, &workingset)) + cs->nr_recently_evicted += nr_pages; + + goto resched; + } + + nr_pages = folio_nr_pages(folio); + folio_first_index = folio_pgoff(folio); + folio_last_index = folio_first_index + nr_pages - 1; + + /* Folios might straddle the range boundaries, only count covered pages */ + if (folio_first_index < first_index) + nr_pages -= first_index - folio_first_index; + + if (folio_last_index > last_index) + nr_pages -= folio_last_index - last_index; + + /* page is in cache */ + cs->nr_cache += nr_pages; + + if (folio_test_dirty(folio)) + cs->nr_dirty += nr_pages; + + if (folio_test_writeback(folio)) + cs->nr_writeback += nr_pages; + +resched: + if (need_resched()) { + xas_pause(&xas); + cond_resched_rcu(); + } + } + rcu_read_unlock(); +} + +/* + * The cachestat(2) system call. + * + * cachestat() returns the page cache statistics of a file in the + * bytes range specified by `off` and `len`: number of cached pages, + * number of dirty pages, number of pages marked for writeback, + * number of evicted pages, and number of recently evicted pages. + * + * An evicted page is a page that is previously in the page cache + * but has been evicted since. A page is recently evicted if its last + * eviction was recent enough that its reentry to the cache would + * indicate that it is actively being used by the system, and that + * there is memory pressure on the system. + * + * `off` and `len` must be non-negative integers. If `len` > 0, + * the queried range is [`off`, `off` + `len`]. If `len` == 0, + * we will query in the range from `off` to the end of the file. + * + * The `flags` argument is unused for now, but is included for future + * extensibility. User should pass 0 (i.e no flag specified). + * + * Currently, hugetlbfs is not supported. + * + * Because the status of a page can change after cachestat() checks it + * but before it returns to the application, the returned values may + * contain stale information. 
+ *
+ * return values:
+ *  zero	- success
+ *  -EFAULT	- cstat or cstat_range points to an illegal address
+ *  -EINVAL	- invalid flags
+ *  -EBADF	- invalid file descriptor
+ *  -EOPNOTSUPP	- file descriptor is of a hugetlbfs file
+ */
+SYSCALL_DEFINE4(cachestat, unsigned int, fd,
+		struct cachestat_range __user *, cstat_range,
+		struct cachestat __user *, cstat, unsigned int, flags)
+{
+	struct fd f = fdget(fd);
+	struct address_space *mapping;
+	struct cachestat_range csr;
+	struct cachestat cs;
+	pgoff_t first_index, last_index;
+
+	if (!f.file)
+		return -EBADF;
+
+	if (copy_from_user(&csr, cstat_range,
+			sizeof(struct cachestat_range))) {
+		fdput(f);
+		return -EFAULT;
+	}
+
+	/* hugetlbfs is not supported */
+	if (is_file_hugepages(f.file)) {
+		fdput(f);
+		return -EOPNOTSUPP;
+	}
+
+	if (flags != 0) {
+		fdput(f);
+		return -EINVAL;
+	}
+
+	first_index = csr.off >> PAGE_SHIFT;
+	last_index =
+		csr.len == 0 ? ULONG_MAX : (csr.off + csr.len - 1) >> PAGE_SHIFT;
+	memset(&cs, 0, sizeof(struct cachestat));
+	mapping = f.file->f_mapping;
+	filemap_cachestat(mapping, first_index, last_index, &cs);
+	fdput(f);
+
+	if (copy_to_user(cstat, &cs, sizeof(struct cachestat)))
+		return -EFAULT;
+
+	return 0;
+}
+#endif /* CONFIG_CACHESTAT_SYSCALL */
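For reviewers who want to poke at the interface, a minimal userspace
sketch follows. It is illustrative only and not part of the series: it
assumes the patched UAPI headers are installed (so that <linux/mman.h>
provides the two structs added above) and invokes the syscall by the raw
number 451 wired up for x86 earlier in this patch.

  /* cachestat_demo.c - illustrative only, not part of this series */
  #define _GNU_SOURCE
  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <linux/mman.h>	/* struct cachestat{,_range} added by this patch */

  int main(int argc, char **argv)
  {
  	struct cachestat_range range = { 0, 0 };	/* len == 0: whole file */
  	struct cachestat cs;
  	int fd;

  	if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0)
  		return 1;

  	/* 451 is the syscall number wired up for x86 in this series */
  	if (syscall(451, fd, &range, &cs, 0)) {
  		perror("cachestat");
  		return 1;
  	}

  	printf("cached %llu dirty %llu writeback %llu evicted %llu recently evicted %llu\n",
  	       (unsigned long long)cs.nr_cache,
  	       (unsigned long long)cs.nr_dirty,
  	       (unsigned long long)cs.nr_writeback,
  	       (unsigned long long)cs.nr_evicted,
  	       (unsigned long long)cs.nr_recently_evicted);
  	close(fd);
  	return 0;
  }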
From patchwork Wed May 3 01:36:08 2023
X-Patchwork-Submitter: Nhat Pham
X-Patchwork-Id: 13229441
From: Nhat Pham
To: akpm@linux-foundation.org
Cc: hannes@cmpxchg.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    bfoster@redhat.com, willy@infradead.org, linux-api@vger.kernel.org,
    kernel-team@meta.com
Subject: [PATCH v13 3/3] selftests: Add selftests for cachestat
Date: Tue, 2 May 2023 18:36:08 -0700
Message-Id: <20230503013608.2431726-4-nphamcs@gmail.com>
In-Reply-To: <20230503013608.2431726-1-nphamcs@gmail.com>
References: <20230503013608.2431726-1-nphamcs@gmail.com>

Test cachestat on a newly created file, /dev/ files, and /proc/ files.
Also test on a shmem file (which can also be tested with huge pages
since tmpfs supports huge pages).

Signed-off-by: Nhat Pham
Acked-by: Johannes Weiner
Acked-by: Nhat Pham
---
 MAINTAINERS                                   |   7 +
 tools/testing/selftests/Makefile              |   1 +
 tools/testing/selftests/cachestat/.gitignore  |   2 +
 tools/testing/selftests/cachestat/Makefile    |   8 +
 .../selftests/cachestat/test_cachestat.c      | 258 ++++++++++++++++++
 5 files changed, 276 insertions(+)
 create mode 100644 tools/testing/selftests/cachestat/.gitignore
 create mode 100644 tools/testing/selftests/cachestat/Makefile
 create mode 100644 tools/testing/selftests/cachestat/test_cachestat.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 32772c383ab7..a02946fdd680 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4510,6 +4510,13 @@ S:	Supported
 F:	Documentation/filesystems/caching/cachefiles.rst
 F:	fs/cachefiles/
 
+CACHESTAT: PAGE CACHE STATS FOR A FILE
+M:	Nhat Pham
+M:	Johannes Weiner
+L:	linux-mm@kvack.org
+S:	Maintained
+F:	tools/testing/selftests/cachestat/test_cachestat.c
+
 CADENCE MIPI-CSI2 BRIDGES
 M:	Maxime Ripard
 L:	linux-media@vger.kernel.org
diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
index 97dcdaa656f6..5994e56e1214 100644
--- a/tools/testing/selftests/Makefile
+++ b/tools/testing/selftests/Makefile
@@ -4,6 +4,7 @@ TARGETS += amd-pstate
 TARGETS += arm64
 TARGETS += bpf
 TARGETS += breakpoints
+TARGETS += cachestat
 TARGETS += capabilities
 TARGETS += cgroup
 TARGETS += clone3
diff --git a/tools/testing/selftests/cachestat/.gitignore b/tools/testing/selftests/cachestat/.gitignore
new file mode 100644
index 000000000000..d6c30b43a4bb
--- /dev/null
+++ b/tools/testing/selftests/cachestat/.gitignore
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+test_cachestat
diff --git a/tools/testing/selftests/cachestat/Makefile b/tools/testing/selftests/cachestat/Makefile
new file mode 100644
index
000000000000..fca73aaa7d14 --- /dev/null +++ b/tools/testing/selftests/cachestat/Makefile @@ -0,0 +1,8 @@ +# SPDX-License-Identifier: GPL-2.0 +TEST_GEN_PROGS := test_cachestat + +CFLAGS += $(KHDR_INCLUDES) +CFLAGS += -Wall +CFLAGS += -lrt + +include ../lib.mk diff --git a/tools/testing/selftests/cachestat/test_cachestat.c b/tools/testing/selftests/cachestat/test_cachestat.c new file mode 100644 index 000000000000..c3823b809c25 --- /dev/null +++ b/tools/testing/selftests/cachestat/test_cachestat.c @@ -0,0 +1,258 @@ +// SPDX-License-Identifier: GPL-2.0 +#define _GNU_SOURCE + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../kselftest.h" + +static const char * const dev_files[] = { + "/dev/zero", "/dev/null", "/dev/urandom", + "/proc/version", "/proc" +}; +static const int cachestat_nr = 451; + +void print_cachestat(struct cachestat *cs) +{ + ksft_print_msg( + "Using cachestat: Cached: %lu, Dirty: %lu, Writeback: %lu, Evicted: %lu, Recently Evicted: %lu\n", + cs->nr_cache, cs->nr_dirty, cs->nr_writeback, + cs->nr_evicted, cs->nr_recently_evicted); +} + +bool write_exactly(int fd, size_t filesize) +{ + char data[filesize]; + bool ret = true; + int random_fd = open("/dev/urandom", O_RDONLY); + + if (random_fd < 0) { + ksft_print_msg("Unable to access urandom.\n"); + ret = false; + goto out; + } else { + int remained = filesize; + char *cursor = data; + + while (remained) { + ssize_t read_len = read(random_fd, cursor, remained); + + if (read_len <= 0) { + ksft_print_msg("Unable to read from urandom.\n"); + ret = false; + goto close_random_fd; + } + + remained -= read_len; + cursor += read_len; + } + + /* write random data to fd */ + remained = filesize; + cursor = data; + while (remained) { + ssize_t write_len = write(fd, cursor, remained); + + if (write_len <= 0) { + ksft_print_msg("Unable write random data to file.\n"); + ret = false; + goto close_random_fd; + } + + remained -= write_len; + cursor += write_len; + } + } + +close_random_fd: + close(random_fd); +out: + return ret; +} + +/* + * Open/create the file at filename, (optionally) write random data to it + * (exactly num_pages), then test the cachestat syscall on this file. + * + * If test_fsync == true, fsync the file, then check the number of dirty + * pages. 
+ */ +bool test_cachestat(const char *filename, bool write_random, bool create, + bool test_fsync, unsigned long num_pages, int open_flags, + mode_t open_mode) +{ + size_t PS = sysconf(_SC_PAGESIZE); + int filesize = num_pages * PS; + bool ret = true; + long syscall_ret; + struct cachestat cs; + struct cachestat_range cs_range = { 0, filesize }; + + int fd = open(filename, open_flags, open_mode); + + if (fd == -1) { + ksft_print_msg("Unable to create/open file.\n"); + ret = false; + goto out; + } else { + ksft_print_msg("Create/open %s\n", filename); + } + + if (write_random) { + if (!write_exactly(fd, filesize)) { + ksft_print_msg("Unable to access urandom.\n"); + ret = false; + goto out1; + } + } + + syscall_ret = syscall(cachestat_nr, fd, &cs_range, &cs, 0); + + ksft_print_msg("Cachestat call returned %ld\n", syscall_ret); + + if (syscall_ret) { + ksft_print_msg("Cachestat returned non-zero.\n"); + ret = false; + goto out1; + + } else { + print_cachestat(&cs); + + if (write_random) { + if (cs.nr_cache + cs.nr_evicted != num_pages) { + ksft_print_msg( + "Total number of cached and evicted pages is off.\n"); + ret = false; + } + } + } + + if (test_fsync) { + if (fsync(fd)) { + ksft_print_msg("fsync fails.\n"); + ret = false; + } else { + syscall_ret = syscall(cachestat_nr, fd, &cs_range, &cs, 0); + + ksft_print_msg("Cachestat call (after fsync) returned %ld\n", + syscall_ret); + + if (!syscall_ret) { + print_cachestat(&cs); + + if (cs.nr_dirty) { + ret = false; + ksft_print_msg( + "Number of dirty should be zero after fsync.\n"); + } + } else { + ksft_print_msg("Cachestat (after fsync) returned non-zero.\n"); + ret = false; + goto out1; + } + } + } + +out1: + close(fd); + + if (create) + remove(filename); +out: + return ret; +} + +bool test_cachestat_shmem(void) +{ + size_t PS = sysconf(_SC_PAGESIZE); + size_t filesize = PS * 512 * 2; /* 2 2MB huge pages */ + int syscall_ret; + size_t compute_len = PS * 512; + struct cachestat_range cs_range = { PS, compute_len }; + char *filename = "tmpshmcstat"; + struct cachestat cs; + bool ret = true; + unsigned long num_pages = compute_len / PS; + int fd = shm_open(filename, O_CREAT | O_RDWR, 0600); + + if (fd < 0) { + ksft_print_msg("Unable to create shmem file.\n"); + ret = false; + goto out; + } + + if (ftruncate(fd, filesize)) { + ksft_print_msg("Unable to trucate shmem file.\n"); + ret = false; + goto close_fd; + } + + if (!write_exactly(fd, filesize)) { + ksft_print_msg("Unable to write to shmem file.\n"); + ret = false; + goto close_fd; + } + + syscall_ret = syscall(cachestat_nr, fd, &cs_range, &cs, 0); + + if (syscall_ret) { + ksft_print_msg("Cachestat returned non-zero.\n"); + ret = false; + goto close_fd; + } else { + print_cachestat(&cs); + if (cs.nr_cache + cs.nr_evicted != num_pages) { + ksft_print_msg( + "Total number of cached and evicted pages is off.\n"); + ret = false; + } + } + +close_fd: + shm_unlink(filename); +out: + return ret; +} + +int main(void) +{ + int ret = 0; + + for (int i = 0; i < 5; i++) { + const char *dev_filename = dev_files[i]; + + if (test_cachestat(dev_filename, false, false, false, + 4, O_RDONLY, 0400)) + ksft_test_result_pass("cachestat works with %s\n", dev_filename); + else { + ksft_test_result_fail("cachestat fails with %s\n", dev_filename); + ret = 1; + } + } + + if (test_cachestat("tmpfilecachestat", true, true, + true, 4, O_CREAT | O_RDWR, 0400 | 0600)) + ksft_test_result_pass("cachestat works with a normal file\n"); + else { + ksft_test_result_fail("cachestat fails with normal file\n"); + ret = 1; + } + 
+ if (test_cachestat_shmem()) + ksft_test_result_pass("cachestat works with a shmem file\n"); + else { + ksft_test_result_fail("cachestat fails with a shmem file\n"); + ret = 1; + } + + return ret; +}
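For testing convenience: with the series applied and
CONFIG_CACHESTAT_SYSCALL=y (on x86, the only architecture wired up
here), the new test should run through the usual kselftest flow, e.g.
"make -C tools/testing/selftests TARGETS=cachestat run_tests"; nothing
beyond the standard kselftest invocation is needed.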