From patchwork Thu Nov 22 21:32:23 2018
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10694871
From: Matthew Wilcox
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, "Kirill A. Shutemov", Hugh Dickins, Matthew Wilcox
Subject: [PATCH 1/2] mm: Remove redundant test from find_get_pages_contig
Date: Thu, 22 Nov 2018 13:32:23 -0800
Message-Id: <20181122213224.12793-2-willy@infradead.org>
X-Mailer: git-send-email 2.14.5
In-Reply-To: <20181122213224.12793-1-willy@infradead.org>
References: <20181122213224.12793-1-willy@infradead.org>

After we establish a reference on the page, we check that the pointer
is still in the correct position in i_pages.  There is no need to check
page->mapping or page->index afterwards: if those can change after we
have taken the reference, they can just as well change after we return
the page to the caller.
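The lockless pagecache protocol this relies on can be modelled in
userspace in a few lines (a sketch only: struct obj, slot_t,
try_get_ref() and lookup() are illustrative names, not kernel API).
The only check that matters once the reference is held is whether the
slot still contains the same pointer, which is what xas_reload()
verifies:

/*
 * Rough user-space model of the protocol; names are illustrative,
 * not kernel API.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct obj {
	atomic_int refcount;			/* 0 means the object is being freed */
};

typedef _Atomic(struct obj *) slot_t;		/* stand-in for one i_pages slot */

static bool try_get_ref(struct obj *o)
{
	int ref = atomic_load(&o->refcount);

	/* like page_cache_get_speculative(): never resurrect a zero count */
	while (ref > 0)
		if (atomic_compare_exchange_weak(&o->refcount, &ref, ref + 1))
			return true;
	return false;
}

/* Return the object currently in @slot with a reference held, or NULL. */
struct obj *lookup(slot_t *slot)
{
	struct obj *o;

repeat:
	o = atomic_load(slot);
	if (!o)
		return NULL;
	if (!try_get_ref(o))
		goto repeat;
	if (o != atomic_load(slot)) {		/* slot reused meanwhile? */
		atomic_fetch_sub(&o->refcount, 1);
		goto repeat;
	}
	return o;
}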
Signed-off-by: Matthew Wilcox
---
 mm/filemap.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 81adec8ee02cc..538531590ef2d 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1776,16 +1776,6 @@ unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t index,
 		if (unlikely(page != xas_reload(&xas)))
 			goto put_page;
 
-		/*
-		 * must check mapping and index after taking the ref.
-		 * otherwise we can get both false positives and false
-		 * negatives, which is just confusing to the caller.
-		 */
-		if (!page->mapping || page_to_pgoff(page) != xas.xa_index) {
-			put_page(page);
-			break;
-		}
-
 		pages[ret] = page;
 		if (++ret == nr_pages)
 			break;

From patchwork Thu Nov 22 21:32:24 2018
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10694873
From: Matthew Wilcox
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, "Kirill A. Shutemov", Hugh Dickins, Matthew Wilcox
Subject: [PATCH 2/2] page cache: Store only head pages in i_pages
Date: Thu, 22 Nov 2018 13:32:24 -0800
Message-Id: <20181122213224.12793-3-willy@infradead.org>
X-Mailer: git-send-email 2.14.5
In-Reply-To: <20181122213224.12793-1-willy@infradead.org>
References: <20181122213224.12793-1-willy@infradead.org>

Transparent Huge Pages are currently stored in i_pages as pointers to
consecutive subpages.  This patch changes that to storing consecutive
pointers to the head page, in preparation for storing huge pages more
efficiently in i_pages.

Large parts of this are "inspired" by Kirill's patch
https://lore.kernel.org/lkml/20170126115819.58875-2-kirill.shutemov@linux.intel.com/
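The new layout, and the pointer arithmetic the find_subpage() helper
below performs, can be modelled in userspace roughly as follows (a
sketch: NR, cache[] and model_find_subpage() are made-up names for
illustration, not kernel code).  Every slot a compound page spans now
stores the head page, and the precise subpage is recovered from the
offset alone:

#include <assert.h>
#include <stdio.h>

#define NR 4				/* subpages per compound page in this model */

struct page {
	unsigned long index;		/* file offset; meaningful on the head page */
};

/* The same arithmetic as find_subpage(): page - page->index + offset. */
static struct page *model_find_subpage(struct page *head, unsigned long offset)
{
	assert(offset >= head->index && offset < head->index + NR);
	return head + (offset - head->index);
}

int main(void)
{
	struct page pages[NR];		/* pages[0] is the head, tails follow it */
	struct page *cache[16] = { 0 };	/* model of i_pages slots */
	unsigned long i;

	pages[0].index = 8;		/* compound page cached at file offset 8 */

	/*
	 * Old scheme: cache[8..11] = { &pages[0], &pages[1], &pages[2], &pages[3] }.
	 * New scheme: every slot the compound page spans stores the head.
	 */
	for (i = 8; i < 8 + NR; i++)
		cache[i] = &pages[0];

	/* A lookup at offset 10 loads the head and derives subpage 2 from it. */
	struct page *sub = model_find_subpage(cache[10], 10);
	printf("offset 10 maps to subpage %ld\n", (long)(sub - pages));
	return 0;
}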
Shutemov" , Hugh Dickins , Matthew Wilcox Subject: [PATCH 2/2] page cache: Store only head pages in i_pages Date: Thu, 22 Nov 2018 13:32:24 -0800 Message-Id: <20181122213224.12793-3-willy@infradead.org> X-Mailer: git-send-email 2.14.5 In-Reply-To: <20181122213224.12793-1-willy@infradead.org> References: <20181122213224.12793-1-willy@infradead.org> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP Transparent Huge Pages are currently stored in i_pages as pointers to consecutive subpages. This patch changes that to storing consecutive pointers to the head page in preparation for storing huge pages more efficiently in i_pages. Large parts of this are "inspired" by Kirill's patch https://lore.kernel.org/lkml/20170126115819.58875-2-kirill.shutemov@linux.intel.com/ Signed-off-by: Matthew Wilcox --- include/linux/pagemap.h | 9 ++++ mm/filemap.c | 96 +++++++++++++---------------------------- mm/khugepaged.c | 4 +- mm/shmem.c | 2 +- mm/swap_state.c | 2 +- 5 files changed, 42 insertions(+), 71 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 226f96f0dee06..41bf976574e74 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -345,6 +345,15 @@ static inline struct page *grab_cache_page_nowait(struct address_space *mapping, mapping_gfp_mask(mapping)); } +static inline struct page *find_subpage(struct page *page, pgoff_t offset) +{ + VM_BUG_ON_PAGE(PageTail(page), page); + VM_BUG_ON_PAGE(page->index > offset, page); + VM_BUG_ON_PAGE(page->index + (1 << compound_order(page)) <= offset, + page); + return page - page->index + offset; +} + struct page *find_get_entry(struct address_space *mapping, pgoff_t offset); struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset); unsigned find_get_entries(struct address_space *mapping, pgoff_t start, diff --git a/mm/filemap.c b/mm/filemap.c index 538531590ef2d..d7274591381ac 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1407,7 +1407,7 @@ EXPORT_SYMBOL(page_cache_prev_miss); struct page *find_get_entry(struct address_space *mapping, pgoff_t offset) { XA_STATE(xas, &mapping->i_pages, offset); - struct page *head, *page; + struct page *page; rcu_read_lock(); repeat: @@ -1422,25 +1422,19 @@ struct page *find_get_entry(struct address_space *mapping, pgoff_t offset) if (!page || xa_is_value(page)) goto out; - head = compound_head(page); - if (!page_cache_get_speculative(head)) - goto repeat; - - /* The page was split under us? */ - if (compound_head(page) != head) { - put_page(head); + if (!page_cache_get_speculative(page)) goto repeat; - } /* - * Has the page moved? + * Has the page moved or been split? * This is part of the lockless pagecache protocol. See * include/linux/pagemap.h for details. */ if (unlikely(page != xas_reload(&xas))) { - put_page(head); + put_page(page); goto repeat; } + page = find_subpage(page, offset); out: rcu_read_unlock(); @@ -1611,7 +1605,6 @@ unsigned find_get_entries(struct address_space *mapping, rcu_read_lock(); xas_for_each(&xas, page, ULONG_MAX) { - struct page *head; if (xas_retry(&xas, page)) continue; /* @@ -1622,17 +1615,13 @@ unsigned find_get_entries(struct address_space *mapping, if (xa_is_value(page)) goto export; - head = compound_head(page); - if (!page_cache_get_speculative(head)) + if (!page_cache_get_speculative(page)) goto retry; - /* The page was split under us? 
-		if (compound_head(page) != head)
-			goto put_page;
-
-		/* Has the page moved? */
+		/* Has the page moved or been split? */
 		if (unlikely(page != xas_reload(&xas)))
 			goto put_page;
+		page = find_subpage(page, xas.xa_index);
 
 export:
 		indices[ret] = xas.xa_index;
@@ -1641,7 +1630,7 @@ unsigned find_get_entries(struct address_space *mapping,
 			break;
 		continue;
 put_page:
-		put_page(head);
+		put_page(page);
 retry:
 		xas_reset(&xas);
 	}
@@ -1683,33 +1672,27 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
 
 	rcu_read_lock();
 	xas_for_each(&xas, page, end) {
-		struct page *head;
 		if (xas_retry(&xas, page))
 			continue;
 		/* Skip over shadow, swap and DAX entries */
 		if (xa_is_value(page))
 			continue;
 
-		head = compound_head(page);
-		if (!page_cache_get_speculative(head))
+		if (!page_cache_get_speculative(page))
 			goto retry;
 
-		/* The page was split under us? */
-		if (compound_head(page) != head)
-			goto put_page;
-
-		/* Has the page moved? */
+		/* Has the page moved or been split? */
 		if (unlikely(page != xas_reload(&xas)))
 			goto put_page;
 
-		pages[ret] = page;
+		pages[ret] = find_subpage(page, xas.xa_index);
 		if (++ret == nr_pages) {
 			*start = page->index + 1;
 			goto out;
 		}
 		continue;
 put_page:
-		put_page(head);
+		put_page(page);
 retry:
 		xas_reset(&xas);
 	}
@@ -1754,7 +1737,6 @@ unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t index,
 
 	rcu_read_lock();
 	for (page = xas_load(&xas); page; page = xas_next(&xas)) {
-		struct page *head;
 		if (xas_retry(&xas, page))
 			continue;
 		/*
@@ -1764,24 +1746,19 @@ unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t index,
 		if (xa_is_value(page))
 			break;
 
-		head = compound_head(page);
-		if (!page_cache_get_speculative(head))
+		if (!page_cache_get_speculative(page))
 			goto retry;
 
-		/* The page was split under us? */
-		if (compound_head(page) != head)
-			goto put_page;
-
-		/* Has the page moved? */
+		/* Has the page moved or been split? */
 		if (unlikely(page != xas_reload(&xas)))
 			goto put_page;
 
-		pages[ret] = page;
+		pages[ret] = find_subpage(page, xas.xa_index);
 		if (++ret == nr_pages)
 			break;
 		continue;
 put_page:
-		put_page(head);
+		put_page(page);
 retry:
 		xas_reset(&xas);
 	}
@@ -1815,7 +1792,6 @@ unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
 
 	rcu_read_lock();
 	xas_for_each_marked(&xas, page, end, tag) {
-		struct page *head;
 		if (xas_retry(&xas, page))
 			continue;
 		/*
@@ -1826,26 +1802,21 @@ unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
 		if (xa_is_value(page))
 			continue;
 
-		head = compound_head(page);
-		if (!page_cache_get_speculative(head))
+		if (!page_cache_get_speculative(page))
 			goto retry;
 
-		/* The page was split under us? */
-		if (compound_head(page) != head)
-			goto put_page;
-
-		/* Has the page moved? */
+		/* Has the page moved or been split? */
 		if (unlikely(page != xas_reload(&xas)))
 			goto put_page;
 
-		pages[ret] = page;
+		pages[ret] = find_subpage(page, xas.xa_index);
 		if (++ret == nr_pages) {
 			*index = page->index + 1;
 			goto out;
 		}
 		continue;
 put_page:
-		put_page(head);
+		put_page(page);
 retry:
 		xas_reset(&xas);
 	}
@@ -1892,7 +1863,6 @@ unsigned find_get_entries_tag(struct address_space *mapping, pgoff_t start,
 
 	rcu_read_lock();
 	xas_for_each_marked(&xas, page, ULONG_MAX, tag) {
-		struct page *head;
 		if (xas_retry(&xas, page))
 			continue;
 		/*
@@ -1903,17 +1873,13 @@ unsigned find_get_entries_tag(struct address_space *mapping, pgoff_t start,
 		if (xa_is_value(page))
 			goto export;
 
-		head = compound_head(page);
-		if (!page_cache_get_speculative(head))
+		if (!page_cache_get_speculative(page))
 			goto retry;
 
-		/* The page was split under us? */
-		if (compound_head(page) != head)
-			goto put_page;
-
-		/* Has the page moved? */
+		/* Has the page moved or been split? */
 		if (unlikely(page != xas_reload(&xas)))
 			goto put_page;
+		page = find_subpage(page, xas.xa_index);
 
 export:
 		indices[ret] = xas.xa_index;
@@ -1922,7 +1888,7 @@ unsigned find_get_entries_tag(struct address_space *mapping, pgoff_t start,
 			break;
 		continue;
 put_page:
-		put_page(head);
+		put_page(page);
 retry:
 		xas_reset(&xas);
 	}
@@ -2533,7 +2499,7 @@ void filemap_map_pages(struct vm_fault *vmf,
 	pgoff_t last_pgoff = start_pgoff;
 	unsigned long max_idx;
 	XA_STATE(xas, &mapping->i_pages, start_pgoff);
-	struct page *head, *page;
+	struct page *page;
 
 	rcu_read_lock();
 	xas_for_each(&xas, page, end_pgoff) {
@@ -2542,17 +2508,13 @@ void filemap_map_pages(struct vm_fault *vmf,
 		if (xa_is_value(page))
 			goto next;
 
-		head = compound_head(page);
-		if (!page_cache_get_speculative(head))
+		if (!page_cache_get_speculative(page))
 			goto next;
 
-		/* The page was split under us? */
-		if (compound_head(page) != head)
-			goto skip;
-
-		/* Has the page moved? */
+		/* Has the page moved or been split? */
 		if (unlikely(page != xas_reload(&xas)))
 			goto skip;
+		page = find_subpage(page, xas.xa_index);
 
 		if (!PageUptodate(page) ||
 				PageReadahead(page) ||
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index c13625c1ad5e5..7d6a1319dd42e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1363,7 +1363,7 @@ static void collapse_shmem(struct mm_struct *mm,
 				result = SCAN_FAIL;
 				break;
 			}
-			xas_store(&xas, new_page + (index % HPAGE_PMD_NR));
+			xas_store(&xas, new_page);
 			nr_none++;
 			continue;
 		}
@@ -1431,7 +1431,7 @@ static void collapse_shmem(struct mm_struct *mm,
 		list_add_tail(&page->lru, &pagelist);
 
 		/* Finally, replace with the new page. */
-		xas_store(&xas, new_page + (index % HPAGE_PMD_NR));
+		xas_store(&xas, new_page);
 		continue;
 out_lru:
 		xas_unlock_irq(&xas);
diff --git a/mm/shmem.c b/mm/shmem.c
index ea26d7a0342d7..c2ba84cbb0c0e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -603,7 +603,7 @@ static int shmem_add_to_page_cache(struct page *page,
 		if (xas_error(&xas))
 			goto unlock;
 next:
-		xas_store(&xas, page + i);
+		xas_store(&xas, page);
 		if (++i < nr) {
 			xas_next(&xas);
 			goto next;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index fd2f21e1c60ae..dcf9e466d2945 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -132,7 +132,7 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp)
 	for (i = 0; i < nr; i++) {
 		VM_BUG_ON_PAGE(xas.xa_index != idx + i, page);
 		set_page_private(page + i, entry.val + i);
-		xas_store(&xas, page + i);
+		xas_store(&xas, page);
 		xas_next(&xas);
 	}
 	address_space->nrpages += nr;