From patchwork Thu Sep 17 17:39:38 2020
From: Ralph Campbell <rcampbell@nvidia.com>
Cc: Yu Zhao, Dan Williams, Matthew Wilcox, Christoph Hellwig, Andrew Morton,
 Ralph Campbell
Subject: [PATCH] mm: move call to compound_head() in release_pages()
Date: Thu, 17 Sep 2020 10:39:38 -0700
Message-ID: <20200917173938.16420-1-rcampbell@nvidia.com>

The function is_huge_zero_page() does not call compound_head() to make sure
the page pointer is a head page, and the call to is_huge_zero_page() in
release_pages() is made before compound_head() is called. The test would
therefore fail if release_pages() were called with a tail page of the
huge_zero_page, and put_page_testzero() would be called, releasing the page.
This is unlikely to happen in normal use, or we would be seeing all sorts of
process data corruption when accessing a THP zero page.

Looking at other places where is_huge_zero_page() is called, all seem to
pass only a head page, so I think the right solution is to move the call to
compound_head() in release_pages() to a point before is_huge_zero_page() is
called.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
I found this by code inspection while working on my patch
("mm: remove extra ZONE_DEVICE struct page refcount"). This applies cleanly
on the latest linux-mm and is for Andrew Morton's tree.
 mm/swap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/swap.c b/mm/swap.c
index eca95afe7ad4..7e79829a2e73 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -889,6 +889,7 @@ void release_pages(struct page **pages, int nr)
 			locked_pgdat = NULL;
 		}
 
+		page = compound_head(page);
 		if (is_huge_zero_page(page))
 			continue;
 
@@ -910,7 +911,6 @@ void release_pages(struct page **pages, int nr)
 			}
 		}
 
-		page = compound_head(page);
 		if (!put_page_testzero(page))
 			continue;