From patchwork Tue Aug 13 09:13:19 2019
X-Patchwork-Submitter: Gao Xiang
X-Patchwork-Id: 11091581
From: Gao Xiang
To: linux-fsdevel, LKML, Alexander Viro
CC: Greg Kroah-Hartman, Andrew Morton, Stephen Rothwell, Theodore Ts'o,
    Pavel Machek, David Sterba, Amir Goldstein, Christoph Hellwig,
    "Darrick J. Wong", Dave Chinner, Jaegeuk Kim, Jan Kara,
    Richard Weinberger, Linus Torvalds, Chao Yu, Miao Xie, Li Guifu,
    Fang Wei, Gao Xiang
Subject: [PATCH v7 17/24] erofs: introduce per-CPU buffers implementation
Date: Tue, 13 Aug 2019 17:13:19 +0800
Message-ID: <20190813091326.84652-18-gaoxiang25@huawei.com>
In-Reply-To: <20190813091326.84652-1-gaoxiang25@huawei.com>
References: <20190813091326.84652-1-gaoxiang25@huawei.com>

This patch introduces per-CPU buffers for the upcoming generic
decompression framework to use.

Note that I tried to use the in-kernel per-CPU buffer or per-CPU page
approaches to clean this up further, but a noticeable performance
regression (about 2% for sequential read) was observed. Let's leave
it as-is for now.

Signed-off-by: Gao Xiang
---
 fs/erofs/Kconfig    | 14 ++++++++++++++
 fs/erofs/internal.h | 21 +++++++++++++++++++++
 fs/erofs/utils.c    | 12 ++++++++++++
 3 files changed, 47 insertions(+)

diff --git a/fs/erofs/Kconfig b/fs/erofs/Kconfig
index a475fbebb831..5f8787c0cf89 100644
--- a/fs/erofs/Kconfig
+++ b/fs/erofs/Kconfig
@@ -81,3 +81,17 @@ config EROFS_FS_ZIP
 
 	  If you don't want to enable compression feature, say N.
+
+config EROFS_FS_CLUSTER_PAGE_LIMIT
+	int "EROFS Cluster Pages Hard Limit"
+	depends on EROFS_FS_ZIP
+	range 1 256
+	default "1"
+	help
+	  Indicates the maximum number of pages of a
+	  compressed physical cluster.
+
+	  For example, if files in an image were compressed
+	  into 8k units, the hard limit should not be set
+	  below 2. Otherwise, the image cannot be mounted
+	  on this kernel.
diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
index a287309dbd26..51db5113b631 100644
--- a/fs/erofs/internal.h
+++ b/fs/erofs/internal.h
@@ -222,6 +222,12 @@ static inline int erofs_wait_on_workgroup_freezed(struct erofs_workgroup *grp)
 	return v;
 }
 #endif	/* !CONFIG_SMP */
+
+/* hard limit of pages per compressed cluster */
+#define Z_EROFS_CLUSTER_MAX_PAGES	(CONFIG_EROFS_FS_CLUSTER_PAGE_LIMIT)
+#define EROFS_PCPUBUF_NR_PAGES		Z_EROFS_CLUSTER_MAX_PAGES
+#else
+#define EROFS_PCPUBUF_NR_PAGES		0
 #endif	/* !CONFIG_EROFS_FS_ZIP */
 
 /* we strictly follow PAGE_SIZE and no buffer head yet */
@@ -482,6 +488,21 @@ int erofs_namei(struct inode *dir, struct qstr *name,
 extern const struct file_operations erofs_dir_fops;
 
 /* utils.c */
+#if (EROFS_PCPUBUF_NR_PAGES > 0)
+void *erofs_get_pcpubuf(unsigned int pagenr);
+#define erofs_put_pcpubuf(buf) do { \
+	(void)&(buf);	\
+	preempt_enable();	\
+} while (0)
+#else
+static inline void *erofs_get_pcpubuf(unsigned int pagenr)
+{
+	return ERR_PTR(-ENOTSUPP);
+}
+
+#define erofs_put_pcpubuf(buf) do {} while (0)
+#endif
+
 #ifdef CONFIG_EROFS_FS_ZIP
 int erofs_workgroup_put(struct erofs_workgroup *grp);
 struct erofs_workgroup *erofs_find_workgroup(struct super_block *sb,
diff --git a/fs/erofs/utils.c b/fs/erofs/utils.c
index 628178261056..f3eed9af24d6 100644
--- a/fs/erofs/utils.c
+++ b/fs/erofs/utils.c
@@ -9,6 +9,18 @@
 #include "internal.h"
 #include
 
+#if (EROFS_PCPUBUF_NR_PAGES > 0)
+static struct {
+	u8 data[PAGE_SIZE * EROFS_PCPUBUF_NR_PAGES];
+} ____cacheline_aligned_in_smp erofs_pcpubuf[NR_CPUS];
+
+void *erofs_get_pcpubuf(unsigned int pagenr)
+{
+	preempt_disable();
+	return &erofs_pcpubuf[smp_processor_id()].data[pagenr * PAGE_SIZE];
+}
+#endif
+
 #ifdef CONFIG_EROFS_FS_ZIP
 /* global shrink count (for all mounted EROFS instances) */
 static atomic_long_t erofs_global_shrink_cnt;
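
For context, a minimal usage sketch of the new helper pair follows. It is
illustrative only and not part of the patch: the caller name, the fixed
page index 0 and the error handling are assumptions here; the real users
are introduced by the decompression patches later in this series.

/*
 * Hypothetical example (not from this patch): copy compressed data into
 * this CPU's buffer and work on it while preemption stays disabled.
 */
static int z_erofs_pcpubuf_usage_sketch(const u8 *src, unsigned int len)
{
	u8 *dst;

	/* respect the configured hard limit on the per-CPU buffer size */
	if (len > EROFS_PCPUBUF_NR_PAGES * PAGE_SIZE)
		return -ENOTSUPP;

	/* grabs this CPU's buffer; erofs_get_pcpubuf() disables preemption */
	dst = erofs_get_pcpubuf(0);
	if (IS_ERR(dst))
		return PTR_ERR(dst);

	memcpy(dst, src, len);

	/* ... decompress or otherwise consume dst here; must not sleep ... */

	/* re-enables preemption; dst must not be touched afterwards */
	erofs_put_pcpubuf(dst);
	return 0;
}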