From patchwork Wed Apr 19 14:09:11 2023
X-Patchwork-Submitter: Johannes Thumshirn
X-Patchwork-Id: 13216852
From: Johannes Thumshirn
To: axboe@kernel.dk
Cc: johannes.thumshirn@wdc.com, agruenba@redhat.com, cluster-devel@redhat.com,
 damien.lemoal@wdc.com, dm-devel@redhat.com, dsterba@suse.com, hare@suse.de,
 hch@lst.de, jfs-discussion@lists.sourceforge.net, kch@nvidia.com,
 linux-block@vger.kernel.org, linux-btrfs@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-raid@vger.kernel.org,
 ming.lei@redhat.com, rpeterso@redhat.com, shaggy@kernel.org,
 snitzer@kernel.org, song@kernel.org, willy@infradead.org, Damien Le Moal
Subject: [PATCH v3 01/19] swap: use __bio_add_page to add page to bio
Date: Wed, 19 Apr 2023 16:09:11 +0200
Message-Id: <20230419140929.5924-2-jth@kernel.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230419140929.5924-1-jth@kernel.org>
References: <20230419140929.5924-1-jth@kernel.org>
X-Mailing-List: linux-fsdevel@vger.kernel.org

From: Johannes Thumshirn

The swap code only adds a single page to a newly created bio.
So use __bio_add_page() to add the page, which is guaranteed to succeed in
this case. This brings us closer to marking bio_add_page() as __must_check.

Signed-off-by: Johannes Thumshirn
Reviewed-by: Damien Le Moal
---
 mm/page_io.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/page_io.c b/mm/page_io.c
index 87b682d18850..684cd3c7b59b 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -338,7 +338,7 @@ static void swap_writepage_bdev_sync(struct page *page,
 	bio_init(&bio, sis->bdev, &bv, 1,
 		 REQ_OP_WRITE | REQ_SWAP | wbc_to_write_flags(wbc));
 	bio.bi_iter.bi_sector = swap_page_sector(page);
-	bio_add_page(&bio, page, thp_size(page), 0);
+	__bio_add_page(&bio, page, thp_size(page), 0);
 
 	bio_associate_blkg_from_page(&bio, page);
 	count_swpout_vm_event(page);
@@ -360,7 +360,7 @@ static void swap_writepage_bdev_async(struct page *page,
 			GFP_NOIO);
 	bio->bi_iter.bi_sector = swap_page_sector(page);
 	bio->bi_end_io = end_swap_bio_write;
-	bio_add_page(bio, page, thp_size(page), 0);
+	__bio_add_page(bio, page, thp_size(page), 0);
 
 	bio_associate_blkg_from_page(bio, page);
 	count_swpout_vm_event(page);
@@ -468,7 +468,7 @@ static void swap_readpage_bdev_sync(struct page *page,
 
 	bio_init(&bio, sis->bdev, &bv, 1, REQ_OP_READ);
 	bio.bi_iter.bi_sector = swap_page_sector(page);
-	bio_add_page(&bio, page, thp_size(page), 0);
+	__bio_add_page(&bio, page, thp_size(page), 0);
 	/*
 	 * Keep this task valid during swap readpage because the oom killer may
 	 * attempt to access it in the page fault retry time check.
@@ -488,7 +488,7 @@ static void swap_readpage_bdev_async(struct page *page,
 	bio = bio_alloc(sis->bdev, 1, REQ_OP_READ, GFP_KERNEL);
 	bio->bi_iter.bi_sector = swap_page_sector(page);
 	bio->bi_end_io = end_swap_bio_read;
-	bio_add_page(bio, page, thp_size(page), 0);
+	__bio_add_page(bio, page, thp_size(page), 0);
 	count_vm_event(PSWPIN);
 	submit_bio(bio);
 }
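
[Not part of the patch; illustrative note.] The contract the change relies on:
bio_add_page() may fail and reports how many bytes it added, while
__bio_add_page() appends unconditionally and is only legal when the caller
knows a free bvec slot exists -- here, because each bio is created with room
for exactly one segment. Below is a minimal userspace sketch of that contract;
the struct and function bodies are simplified stand-ins, not the real
block-layer implementation.

/*
 * Sketch only: simplified stand-ins for the kernel's
 * bio_add_page()/__bio_add_page() contract, not the real mm/block code.
 */
#include <stdio.h>

struct page;                            /* opaque, as in the kernel */

struct bio_vec {
	struct page *bv_page;
	unsigned int bv_len;
	unsigned int bv_offset;
};

struct bio {
	struct bio_vec *bi_io_vec;      /* vector table */
	unsigned short bi_max_vecs;     /* capacity, fixed at init time */
	unsigned short bi_vcnt;         /* vectors used so far */
};

/* Checked variant: can fail, so callers must inspect the return value. */
static unsigned int bio_add_page(struct bio *bio, struct page *page,
				 unsigned int len, unsigned int off)
{
	if (bio->bi_vcnt >= bio->bi_max_vecs)
		return 0;               /* no room: nothing was added */
	bio->bi_io_vec[bio->bi_vcnt++] =
		(struct bio_vec){ .bv_page = page, .bv_len = len, .bv_offset = off };
	return len;
}

/* Unchecked variant: caller guarantees a free slot, e.g. a one-vec bio. */
static void __bio_add_page(struct bio *bio, struct page *page,
			   unsigned int len, unsigned int off)
{
	bio->bi_io_vec[bio->bi_vcnt++] =
		(struct bio_vec){ .bv_page = page, .bv_len = len, .bv_offset = off };
}

int main(void)
{
	struct bio_vec bv;
	struct bio bio = { .bi_io_vec = &bv, .bi_max_vecs = 1, .bi_vcnt = 0 };

	/* The swap paths build single-page bios, so the add cannot fail. */
	__bio_add_page(&bio, NULL, 4096, 0);
	printf("vcnt=%u len=%u\n", bio.bi_vcnt, bv.bv_len);
	return 0;
}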