From patchwork Wed Oct 21 19:47:33 2020
From: Ralph Campbell <rcampbell@nvidia.com>
CC: Jerome Glisse, John Hubbard, Alistair Popple, Christoph Hellwig, Jason Gunthorpe, Dan Williams, Matthew Wilcox, Andrew Morton, Ralph Campbell
Subject: [PATCH] mm: handle zone device pages in release_pages()
Date: Wed, 21 Oct 2020 12:47:33 -0700
Message-ID: <20201021194733.11530-1-rcampbell@nvidia.com>
X-Mailer: git-send-email 2.20.1
release_pages() is an optimized, inlined version of __put_page(), except
that zone device struct pages that are not page_is_devmap_managed() (i.e.,
memory_type MEMORY_DEVICE_GENERIC and MEMORY_DEVICE_PCI_P2PDMA) fall
through to the code that could return the zone device page to the page
allocator instead of adjusting the pgmap reference count. Clearly these
types of pages are not having their reference count decremented to zero
via release_pages(), or page allocation problems would be seen. Just to be
safe, handle the one-to-zero case in release_pages() the same way
__put_page() does.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Christoph Hellwig
---
I found this by code inspection while working on converting ZONE_DEVICE
struct pages to have zero-based reference counts. I don't think there is
an actual problem that this fixes; it's more to future-proof new uses of
release_pages(). This is for Andrew Morton's mm tree after the merge
window.

 mm/swap.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/swap.c b/mm/swap.c
index 0eb057141a04..106f519c45ac 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -907,6 +907,9 @@ void release_pages(struct page **pages, int nr)
 				put_devmap_managed_page(page);
 				continue;
 			}
+			if (put_page_testzero(page))
+				put_dev_pagemap(page->pgmap);
+			continue;
 		}

 		if (!put_page_testzero(page))