From patchwork Wed Dec 11 02:52:57 2019
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11284159
From: John Hubbard
To: Andrew Morton
CC: Al Viro, Alex Williamson, Benjamin Herrenschmidt, Björn Töpel,
    Christoph Hellwig, Dan Williams, Daniel Vetter, Dave Chinner,
    David Airlie, David S. Miller, Ira Weiny, Jan Kara, Jason Gunthorpe,
    Jens Axboe, Jonathan Corbet, Jérôme Glisse, Magnus Karlsson,
    Mauro Carvalho Chehab, Michael Ellerman, Michal Hocko, Mike Kravetz,
    Paul Mackerras, Shuah Khan, Vlastimil Babka, LKML, John Hubbard,
    Christoph Hellwig
Subject: [PATCH v9 04/25] mm: devmap: refactor 1-based refcounting for ZONE_DEVICE pages
Date: Tue, 10 Dec 2019 18:52:57 -0800
Message-ID: <20191211025318.457113-5-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.24.0
In-Reply-To: <20191211025318.457113-1-jhubbard@nvidia.com>
References: <20191211025318.457113-1-jhubbard@nvidia.com>
X-Mailing-List: linux-kselftest@vger.kernel.org

An upcoming patch changes and complicates the refcounting, especially
the "put page" aspects of it. In order to keep everything clean,
refactor the devmap page release routines:

* Rename put_devmap_managed_page() to page_is_devmap_managed(), and
  limit the functionality to "read only": return a bool, with no side
  effects.

* Add a new routine, put_devmap_managed_page(), to handle checking
  what kind of page it is, and what kind of refcount handling it
  requires.

* Rename __put_devmap_managed_page() to free_devmap_managed_page(),
  and limit the functionality to unconditionally freeing a devmap
  page.

This is originally based on a separate patch by Ira Weiny, which
applied to an early version of the put_user_page() experiments.
Since then, Jérôme Glisse suggested the refactoring described above.
Cc: Christoph Hellwig
Suggested-by: Jérôme Glisse
Reviewed-by: Dan Williams
Reviewed-by: Jan Kara
Signed-off-by: Ira Weiny
Signed-off-by: John Hubbard
---
 include/linux/mm.h | 17 +++++++++++++----
 mm/memremap.c      | 16 ++--------------
 mm/swap.c          | 24 ++++++++++++++++++++++++
 3 files changed, 39 insertions(+), 18 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c97ea3b694e6..77a4df06c8a7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -952,9 +952,10 @@ static inline bool is_zone_device_page(const struct page *page)
 #endif
 
 #ifdef CONFIG_DEV_PAGEMAP_OPS
-void __put_devmap_managed_page(struct page *page);
+void free_devmap_managed_page(struct page *page);
 DECLARE_STATIC_KEY_FALSE(devmap_managed_key);
-static inline bool put_devmap_managed_page(struct page *page)
+
+static inline bool page_is_devmap_managed(struct page *page)
 {
         if (!static_branch_unlikely(&devmap_managed_key))
                 return false;
@@ -963,7 +964,6 @@ static inline bool put_devmap_managed_page(struct page *page)
         switch (page->pgmap->type) {
         case MEMORY_DEVICE_PRIVATE:
         case MEMORY_DEVICE_FS_DAX:
-                __put_devmap_managed_page(page);
                 return true;
         default:
                 break;
@@ -971,7 +971,14 @@ static inline bool put_devmap_managed_page(struct page *page)
         return false;
 }
 
+bool put_devmap_managed_page(struct page *page);
+
 #else /* CONFIG_DEV_PAGEMAP_OPS */
+static inline bool page_is_devmap_managed(struct page *page)
+{
+        return false;
+}
+
 static inline bool put_devmap_managed_page(struct page *page)
 {
         return false;
@@ -1028,8 +1035,10 @@ static inline void put_page(struct page *page)
          * need to inform the device driver through callback. See
          * include/linux/memremap.h and HMM for details.
          */
-        if (put_devmap_managed_page(page))
+        if (page_is_devmap_managed(page)) {
+                put_devmap_managed_page(page);
                 return;
+        }
 
         if (put_page_testzero(page))
                 __put_page(page);
diff --git a/mm/memremap.c b/mm/memremap.c
index e899fa876a62..2ba773859031 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -411,20 +411,8 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
 EXPORT_SYMBOL_GPL(get_dev_pagemap);
 
 #ifdef CONFIG_DEV_PAGEMAP_OPS
-void __put_devmap_managed_page(struct page *page)
+void free_devmap_managed_page(struct page *page)
 {
-        int count = page_ref_dec_return(page);
-
-        /* still busy */
-        if (count > 1)
-                return;
-
-        /* only triggered by the dev_pagemap shutdown path */
-        if (count == 0) {
-                __put_page(page);
-                return;
-        }
-
         /* notify page idle for dax */
         if (!is_device_private_page(page)) {
                 wake_up_var(&page->_refcount);
@@ -461,5 +449,5 @@ void __put_devmap_managed_page(struct page *page)
         page->mapping = NULL;
         page->pgmap->ops->page_free(page);
 }
-EXPORT_SYMBOL(__put_devmap_managed_page);
+EXPORT_SYMBOL(free_devmap_managed_page);
 #endif /* CONFIG_DEV_PAGEMAP_OPS */
diff --git a/mm/swap.c b/mm/swap.c
index 5341ae93861f..49f7c2eea0ba 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -1102,3 +1102,27 @@ void __init swap_setup(void)
          * _really_ don't want to cluster much more
          */
 }
+
+#ifdef CONFIG_DEV_PAGEMAP_OPS
+bool put_devmap_managed_page(struct page *page)
+{
+        bool is_devmap = page_is_devmap_managed(page);
+
+        if (is_devmap) {
+                int count = page_ref_dec_return(page);
+
+                /*
+                 * devmap page refcounts are 1-based, rather than 0-based: if
+                 * refcount is 1, then the page is free and the refcount is
+                 * stable because nobody holds a reference on the page.
+                 */
+                if (count == 1)
+                        free_devmap_managed_page(page);
+                else if (!count)
+                        __put_page(page);
+        }
+
+        return is_devmap;
+}
+EXPORT_SYMBOL(put_devmap_managed_page);
+#endif
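
For readers unfamiliar with the 1-based convention that
put_devmap_managed_page() encodes, here is a hypothetical lifetime of a
devmap page under the code above (an illustration only, not part of the
patch: the event sequence is assumed, the function names are from the
diff):

        get_page(page);  /* refcount 1 -> 2: the page gains a real user */
        put_page(page);  /* refcount 2 -> 1: page_is_devmap_managed() is
                          * true, so put_devmap_managed_page() runs; it
                          * sees count == 1 and calls
                          * free_devmap_managed_page(), which ends in
                          * page->pgmap->ops->page_free(page).
                          */
        /*
         * The final 1 -> 0 transition happens only on the dev_pagemap
         * shutdown path; put_devmap_managed_page() then falls through
         * to __put_page() to release the struct page itself.
         */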