From patchwork Thu Dec 21 08:57:05 2023
X-Patchwork-Submitter: Yu Kuai
X-Patchwork-Id: 13501360
From: Yu Kuai
To: axboe@kernel.dk, roger.pau@citrix.com, colyli@suse.de,
 kent.overstreet@gmail.com, joern@lazybastard.org, miquel.raynal@bootlin.com,
 richard@nod.at, vigneshr@ti.com, sth@linux.ibm.com, hoeppner@linux.ibm.com,
 hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com,
 jejb@linux.ibm.com, martin.petersen@oracle.com, clm@fb.com,
 josef@toxicpanda.com, dsterba@suse.com, viro@zeniv.linux.org.uk,
 brauner@kernel.org, nico@fluxnic.net, xiang@kernel.org, chao@kernel.org,
 tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.com,
 konishi.ryusuke@gmail.com, willy@infradead.org, akpm@linux-foundation.org,
 hare@suse.de, p.raghav@samsung.com
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
 linux-mtd@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-bcachefs@vger.kernel.org,
 linux-btrfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
 linux-nilfs@vger.kernel.org, yukuai3@huawei.com, yukuai1@huaweicloud.com,
 yi.zhang@huawei.com, yangerkun@huawei.com
Subject: [PATCH RFC v3 for-6.8/block 10/17] cramfs: use bdev apis in cramfs_blkdev_read()
Date: Thu, 21 Dec 2023 16:57:05 +0800
Message-Id: <20231221085712.1766333-11-yukuai1@huaweicloud.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20231221085712.1766333-1-yukuai1@huaweicloud.com>
References: <20231221085712.1766333-1-yukuai1@huaweicloud.com>

From: Yu Kuai

On the one hand, convert to folios while reading the bdev inode; on the
other hand, avoid accessing bd_inode directly. Also do some cleanup:
there is no need for two for loops, and the local 'pages' array is
removed.

Signed-off-by: Yu Kuai
---
 fs/cramfs/inode.c | 36 +++++++++++++-----------------------
 1 file changed, 13 insertions(+), 23 deletions(-)

diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
index 60dbfa0f8805..fad95d683d97 100644
--- a/fs/cramfs/inode.c
+++ b/fs/cramfs/inode.c
@@ -183,9 +183,6 @@ static int next_buffer;
 static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset,
 				unsigned int len)
 {
-	struct address_space *mapping = sb->s_bdev->bd_inode->i_mapping;
-	struct file_ra_state ra = {};
-	struct page *pages[BLKS_PER_BUF];
 	unsigned i, blocknr, buffer;
 	unsigned long devsize;
 	char *data;
@@ -214,37 +211,30 @@ static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset,
 	devsize = bdev_nr_bytes(sb->s_bdev) >> PAGE_SHIFT;
 
 	/* Ok, read in BLKS_PER_BUF pages completely first. */
-	file_ra_state_init(&ra, mapping);
-	page_cache_sync_readahead(mapping, &ra, NULL, blocknr, BLKS_PER_BUF);
-
-	for (i = 0; i < BLKS_PER_BUF; i++) {
-		struct page *page = NULL;
-
-		if (blocknr + i < devsize) {
-			page = read_mapping_page(mapping, blocknr + i, NULL);
-			/* synchronous error? */
-			if (IS_ERR(page))
-				page = NULL;
-		}
-		pages[i] = page;
-	}
+	bdev_sync_readahead(sb->s_bdev, NULL, NULL, blocknr, BLKS_PER_BUF);
 
 	buffer = next_buffer;
 	next_buffer = NEXT_BUFFER(buffer);
 	buffer_blocknr[buffer] = blocknr;
 	buffer_dev[buffer] = sb;
-
 	data = read_buffers[buffer];
+
 	for (i = 0; i < BLKS_PER_BUF; i++) {
-		struct page *page = pages[i];
+		struct folio *folio = NULL;
+
+		if (blocknr + i < devsize)
+			folio = bdev_read_folio(sb->s_bdev,
+						(blocknr + i) << PAGE_SHIFT);
 
-		if (page) {
-			memcpy_from_page(data, page, 0, PAGE_SIZE);
-			put_page(page);
-		} else
+		if (IS_ERR_OR_NULL(folio)) {
 			memset(data, 0, PAGE_SIZE);
+		} else {
+			memcpy_from_folio(data, folio, 0, PAGE_SIZE);
+			folio_put(folio);
+		}
 		data += PAGE_SIZE;
 	}
+
 	return read_buffers[buffer] + offset;
 }
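
The bdev_sync_readahead() and bdev_read_folio() helpers used above are added
by earlier patches in this series, not by this one. As a rough sketch only
(an assumption inferred from the call sites here, not the series' actual
implementation), they can be read as thin wrappers that keep the bdev
page-cache access inside the block layer:

#include <linux/blkdev.h>
#include <linux/fs.h>
#include <linux/pagemap.h>

/*
 * Illustrative sketch, not the real helpers.  Signatures are inferred from
 * the call sites above: bdev_read_folio() takes a byte offset and returns a
 * folio (or an ERR_PTR on failure); bdev_sync_readahead() takes a page index
 * and a page count.
 */
struct folio *bdev_read_folio(struct block_device *bdev, loff_t pos)
{
	/* read (or look up) one folio of the block device's page cache */
	return read_mapping_folio(bdev->bd_inode->i_mapping,
				  pos >> PAGE_SHIFT, NULL);
}

void bdev_sync_readahead(struct block_device *bdev, struct file_ra_state *ra,
			 struct file *file, pgoff_t index,
			 unsigned long req_count)
{
	struct address_space *mapping = bdev->bd_inode->i_mapping;
	struct file_ra_state on_stack = {};

	/* callers such as cramfs pass ra == NULL; use a local ra state then */
	if (!ra) {
		ra = &on_stack;
		file_ra_state_init(ra, mapping);
	}
	page_cache_sync_readahead(mapping, ra, file, index, req_count);
}

With helpers of this shape, cramfs only ever passes sb->s_bdev plus a byte
offset or page index, which is what allows the direct bd_inode dereference
to be dropped from the filesystem code above.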