From patchwork Thu May 23 19:05:07 2019
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 10958649
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ira Weiny,
 "Kirill A. Shutemov", Thomas Gleixner, Andrew Morton, Borislav Petkov,
 Dan Williams, Dave Hansen, Linus Torvalds, Peter Zijlstra,
 linux-mm@kvack.org, Ingo Molnar, Justin Forbes
Subject: [PATCH 5.0 019/139] mm/gup: Remove the write parameter from gup_fast_permitted()
Date: Thu, 23 May 2019 21:05:07 +0200
Message-Id: <20190523181723.124661314@linuxfoundation.org>
In-Reply-To: <20190523181720.120897565@linuxfoundation.org>
References: <20190523181720.120897565@linuxfoundation.org>

From: Ira Weiny

commit ad8cfb9c42ef83ecf4079bc7d77e6557648e952b upstream.

The 'write' parameter is unused in gup_fast_permitted() so remove it.

Signed-off-by: Ira Weiny
Acked-by: Kirill A. Shutemov
Reviewed-by: Thomas Gleixner
Cc: Andrew Morton
Cc: Borislav Petkov
Cc: Dan Williams
Cc: Dave Hansen
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20190210223424.13934-1-ira.weiny@intel.com
Signed-off-by: Ingo Molnar
Cc: Justin Forbes
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/include/asm/pgtable_64.h |    3 +--
 mm/gup.c                          |    6 +++---
 2 files changed, 4 insertions(+), 5 deletions(-)

--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -259,8 +259,7 @@ extern void init_extra_mapping_uc(unsign
 extern void init_extra_mapping_wb(unsigned long phys, unsigned long size);
 
 #define gup_fast_permitted gup_fast_permitted
-static inline bool gup_fast_permitted(unsigned long start, int nr_pages,
-		int write)
+static inline bool gup_fast_permitted(unsigned long start, int nr_pages)
 {
 	unsigned long len, end;
 
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1811,7 +1811,7 @@ static void gup_pgd_range(unsigned long
  * Check if it's allowed to use __get_user_pages_fast() for the range, or
  * we need to fall back to the slow version:
  */
-bool gup_fast_permitted(unsigned long start, int nr_pages, int write)
+bool gup_fast_permitted(unsigned long start, int nr_pages)
 {
 	unsigned long len, end;
 
@@ -1853,7 +1853,7 @@ int __get_user_pages_fast(unsigned long
 	 * block IPIs that come from THPs splitting.
 	 */
 
-	if (gup_fast_permitted(start, nr_pages, write)) {
+	if (gup_fast_permitted(start, nr_pages)) {
 		local_irq_save(flags);
 		gup_pgd_range(start, end, write, pages, &nr);
 		local_irq_restore(flags);
@@ -1895,7 +1895,7 @@ int get_user_pages_fast(unsigned long st
 	if (unlikely(!access_ok((void __user *)start, len)))
 		return -EFAULT;
 
-	if (gup_fast_permitted(start, nr_pages, write)) {
+	if (gup_fast_permitted(start, nr_pages)) {
 		local_irq_disable();
 		gup_pgd_range(addr, end, write, pages, &nr);
 		local_irq_enable();
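
[ Editor's note: for context, a minimal sketch of a v5.0-era caller of
  this path.  pin_user_buffer() is a hypothetical helper, not part of
  the patch; get_user_pages_fast() keeps its 'write' parameter here
  because the page-table walk in gup_pgd_range() still needs it, while
  the range pre-check gup_fast_permitted() never did. ]

	#include <linux/mm.h>

	/* Hypothetical driver helper: pin nr_pages of user memory for
	 * writable access (e.g. as a DMA target). */
	static int pin_user_buffer(unsigned long uaddr, int nr_pages,
				   struct page **pages)
	{
		int pinned;

		/* v5.0 signature: (start, nr_pages, write, pages) */
		pinned = get_user_pages_fast(uaddr, nr_pages, 1, pages);
		if (pinned < 0)
			return pinned;	/* e.g. -EFAULT on a bad range */
		if (pinned != nr_pages) {
			/* Partial pin: drop what we got and bail out. */
			while (pinned--)
				put_page(pages[pinned]);
			return -EFAULT;
		}
		return 0;
	}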
From patchwork Thu May 23 19:06:03 2019
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 10958653
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Richard Biener,
 "H.J. Lu", Dave Hansen, Yang Shi, Michael Ellerman, Andrew Morton,
 Andy Lutomirski, Anton Ivanov, Benjamin Herrenschmidt, Borislav Petkov,
 Guan Xuetao, "H. Peter Anvin", Jeff Dike, Linus Torvalds, Michal Hocko,
 Paul Mackerras, Peter Zijlstra, Richard Weinberger, Rik van Riel,
 Vlastimil Babka, linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-um@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, Ingo Molnar
Subject: [PATCH 5.0 075/139] x86/mpx, mm/core: Fix recursive munmap() corruption
Date: Thu, 23 May 2019 21:06:03 +0200
Message-Id: <20190523181730.571246299@linuxfoundation.org>
In-Reply-To: <20190523181720.120897565@linuxfoundation.org>
References: <20190523181720.120897565@linuxfoundation.org>

From: Dave Hansen

commit 5a28fc94c9143db766d1ba5480cae82d856ad080 upstream.

This is a bit of a mess, to put it mildly.  But, it's a bug that only
seems to have shown up in 4.20 but wasn't noticed until now, because
nobody uses MPX.

MPX has the arch_unmap() hook inside of munmap() because MPX uses
bounds tables that protect other areas of memory.  When memory is
unmapped, there is also a need to unmap the MPX bounds tables.  Barring
this, unused bounds tables can eat 80% of the address space.

But, the recursive do_munmap() that gets called via arch_unmap() wreaks
havoc with __do_munmap()'s state.  It can result in freeing populated
page tables, accessing bogus VMA state, double-freed VMAs and more.
See the "long story" further below for the gory details.

To fix this, call arch_unmap() before __do_munmap() has a chance to do
anything meaningful.  Also, remove the 'vma' argument and force the MPX
code to do its own, independent VMA lookup.

== UML / unicore32 impact ==

Remove unused 'vma' argument to arch_unmap().  No functional change.

I compile tested this on UML but not unicore32.
== powerpc impact ==

powerpc uses arch_unmap() to watch for munmap() on the VDSO and zeroes
out 'current->mm->context.vdso_base'.  Moving arch_unmap() makes this
happen earlier in __do_munmap().  But, 'vdso_base' seems to only be
used in perf and in the signal delivery that happens near the return
to userspace.  I can not find any likely impact to powerpc, other than
the zeroing happening a little earlier.

powerpc does not use the 'vma' argument and is unaffected by its
removal.

I compile-tested a 64-bit powerpc defconfig.

== x86 impact ==

For the common success case this is functionally identical to what was
there before.  For the munmap() failure case, it's possible that some
MPX tables will be zapped for memory that continues to be in use.  But,
this is an extraordinarily unlikely scenario and the harm would be that
MPX provides no protection since the bounds table got reset (zeroed).

I can't imagine anyone doing this:

	ptr = mmap();
	// use ptr
	ret = munmap(ptr);
	if (ret)
		// oh, there was an error, I'll
		// keep using ptr.

Because if you're doing munmap(), you are *done* with the memory.
There's probably no good data in there _anyway_.

This passes the original reproducer from Richard Biener as well as the
existing mpx selftests/.

The long story:

munmap() has a couple of pieces:

 1. Find the affected VMA(s)
 2. Split the start/end one(s) if necessary
 3. Pull the VMAs out of the rbtree
 4. Actually zap the memory via unmap_region(), including freeing page
    tables (or queueing them to be freed).
 5. Fix up some of the accounting (like fput()) and actually free the
    VMA itself.

This specific ordering was actually introduced by:

  dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap")

during the 4.20 merge window.  The previous __do_munmap() code was
actually safe because the only thing after arch_unmap() was
remove_vma_list().  arch_unmap() could not see 'vma' in the rbtree
because it was detached, so it is not even capable of doing operations
unsafe for remove_vma_list()'s use of 'vma'.

Richard Biener reported a test that shows this in dmesg:

  [1216548.787498] BUG: Bad rss-counter state mm:0000000017ce560b idx:1 val:551
  [1216548.787500] BUG: non-zero pgtables_bytes on freeing mm: 24576

What triggered this was the recursive do_munmap() called via
arch_unmap().  It was freeing page tables that had not been properly
zapped.

But, the problem was bigger than this.  For one, arch_unmap() can free
VMAs.  But, the calling __do_munmap() has variables that *point* to
VMAs and obviously can't handle them just getting freed while the
pointer is still in use.

I tried a couple of things here.  First, I tried to fix the page table
freeing problem in isolation, but I then found the VMA issue.  I also
tried having the MPX code return a flag if it modified the rbtree which
would force __do_munmap() to re-walk to restart.  That spiralled out of
control in complexity pretty fast.

Just moving arch_unmap() and accepting that the bonkers failure case
might eat some bounds tables seems like the simplest viable fix.

This was also reported in the following kernel bugzilla entry:

  https://bugzilla.kernel.org/show_bug.cgi?id=203123

There are some reports that this commit triggered this bug:

  dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap")

While that commit certainly made the issues easier to hit, I believe
the fundamental issue has been with us as long as MPX itself, thus the
Fixes: tag below is for one of the original MPX commits.

[ mingo: Minor edits to the changelog and the patch. ]
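
[ Editor's note: a compilable userspace rendering of the changelog's
  pattern above, purely illustrative; the mapping size and message are
  arbitrary.  In this form munmap() actually succeeds, so the failure
  branch only spells out the "keep using ptr" pattern whose sole cost,
  after this fix, would be losing some MPX bounds tables; making
  munmap() really fail takes more contrived conditions. ]

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t len = 4096;
		char *ptr = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (ptr == MAP_FAILED)
			return 1;
		strcpy(ptr, "hello");	/* use ptr */

		if (munmap(ptr, len) != 0) {
			/* The "bonkers" case: the unmap failed and the
			 * program keeps using the memory anyway. */
			printf("munmap failed; still using: %s\n", ptr);
		}
		return 0;
	}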
Reported-by: Richard Biener
Reported-by: H.J. Lu
Signed-off-by: Dave Hansen
Reviewed-by: Thomas Gleixner
Reviewed-by: Yang Shi
Acked-by: Michael Ellerman
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Anton Ivanov
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Guan Xuetao
Cc: H. Peter Anvin
Cc: Jeff Dike
Cc: Linus Torvalds
Cc: Michal Hocko
Cc: Paul Mackerras
Cc: Peter Zijlstra
Cc: Richard Weinberger
Cc: Rik van Riel
Cc: Vlastimil Babka
Cc: linux-arch@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-um@lists.infradead.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: stable@vger.kernel.org
Fixes: dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap")
Link: http://lkml.kernel.org/r/20190419194747.5E1AD6DC@viggo.jf.intel.com
Signed-off-by: Ingo Molnar
Signed-off-by: Greg Kroah-Hartman
---
 arch/powerpc/include/asm/mmu_context.h   |    1 -
 arch/um/include/asm/mmu_context.h        |    1 -
 arch/unicore32/include/asm/mmu_context.h |    1 -
 arch/x86/include/asm/mmu_context.h       |    6 +++---
 arch/x86/include/asm/mpx.h               |   15 ++++++++-------
 arch/x86/mm/mpx.c                        |   10 ++++++----
 include/asm-generic/mm_hooks.h           |    1 -
 mm/mmap.c                                |   15 ++++++++-------
 8 files changed, 25 insertions(+), 25 deletions(-)

--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -237,7 +237,6 @@ extern void arch_exit_mmap(struct mm_str
 #endif
 
 static inline void arch_unmap(struct mm_struct *mm,
-			      struct vm_area_struct *vma,
 			      unsigned long start, unsigned long end)
 {
 	if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
--- a/arch/um/include/asm/mmu_context.h
+++ b/arch/um/include/asm/mmu_context.h
@@ -22,7 +22,6 @@ static inline int arch_dup_mmap(struct m
 }
 extern void arch_exit_mmap(struct mm_struct *mm);
 static inline void arch_unmap(struct mm_struct *mm,
-			struct vm_area_struct *vma,
 			unsigned long start, unsigned long end)
 {
 }
--- a/arch/unicore32/include/asm/mmu_context.h
+++ b/arch/unicore32/include/asm/mmu_context.h
@@ -88,7 +88,6 @@ static inline int arch_dup_mmap(struct m
 }
 
 static inline void arch_unmap(struct mm_struct *mm,
-		struct vm_area_struct *vma,
 		unsigned long start, unsigned long end)
 {
 }
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -277,8 +277,8 @@ static inline void arch_bprm_mm_init(str
 	mpx_mm_init(mm);
 }
 
-static inline void arch_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
-			      unsigned long start, unsigned long end)
+static inline void arch_unmap(struct mm_struct *mm, unsigned long start,
+			      unsigned long end)
 {
 	/*
 	 * mpx_notify_unmap() goes and reads a rarely-hot
@@ -298,7 +298,7 @@ static inline void arch_unmap(struct mm_
 	 * consistently wrong.
	 */
 	if (unlikely(cpu_feature_enabled(X86_FEATURE_MPX)))
-		mpx_notify_unmap(mm, vma, start, end);
+		mpx_notify_unmap(mm, start, end);
 }
 
 /*
--- a/arch/x86/include/asm/mpx.h
+++ b/arch/x86/include/asm/mpx.h
@@ -64,12 +64,15 @@ struct mpx_fault_info {
 };
 
 #ifdef CONFIG_X86_INTEL_MPX
-int mpx_fault_info(struct mpx_fault_info *info, struct pt_regs *regs);
-int mpx_handle_bd_fault(void);
+
+extern int mpx_fault_info(struct mpx_fault_info *info, struct pt_regs *regs);
+extern int mpx_handle_bd_fault(void);
+
 static inline int kernel_managing_mpx_tables(struct mm_struct *mm)
 {
 	return (mm->context.bd_addr != MPX_INVALID_BOUNDS_DIR);
 }
+
 static inline void mpx_mm_init(struct mm_struct *mm)
 {
 	/*
@@ -78,11 +81,10 @@ static inline void mpx_mm_init(struct mm
 	 */
 	mm->context.bd_addr = MPX_INVALID_BOUNDS_DIR;
 }
-void mpx_notify_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
-		unsigned long start, unsigned long end);
-unsigned long mpx_unmapped_area_check(unsigned long addr, unsigned long len,
-		unsigned long flags);
+
+extern void mpx_notify_unmap(struct mm_struct *mm, unsigned long start, unsigned long end);
+extern unsigned long mpx_unmapped_area_check(unsigned long addr, unsigned long len, unsigned long flags);
+
 #else
 static inline int mpx_fault_info(struct mpx_fault_info *info, struct pt_regs *regs)
 {
@@ -100,7 +102,6 @@ static inline void mpx_mm_init(struct mm
 {
 }
 static inline void mpx_notify_unmap(struct mm_struct *mm,
-				    struct vm_area_struct *vma,
 				    unsigned long start, unsigned long end)
 {
 }
--- a/arch/x86/mm/mpx.c
+++ b/arch/x86/mm/mpx.c
@@ -881,9 +881,10 @@ static int mpx_unmap_tables(struct mm_st
  * the virtual address region start...end have already been split if
  * necessary, and the 'vma' is the first vma in this range (start -> end).
  */
-void mpx_notify_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
-		unsigned long start, unsigned long end)
+void mpx_notify_unmap(struct mm_struct *mm, unsigned long start,
+		      unsigned long end)
 {
+	struct vm_area_struct *vma;
 	int ret;
 
 	/*
@@ -902,11 +903,12 @@ void mpx_notify_unmap(struct mm_struct *
 	 * which should not occur normally. Being strict about it here
 	 * helps ensure that we do not have an exploitable stack overflow.
 	 */
-	do {
+	vma = find_vma(mm, start);
+	while (vma && vma->vm_start < end) {
 		if (vma->vm_flags & VM_MPX)
 			return;
 		vma = vma->vm_next;
-	} while (vma && vma->vm_start < end);
+	}
 
 	ret = mpx_unmap_tables(mm, start, end);
 	if (ret)
--- a/include/asm-generic/mm_hooks.h
+++ b/include/asm-generic/mm_hooks.h
@@ -18,7 +18,6 @@ static inline void arch_exit_mmap(struct
 }
 
 static inline void arch_unmap(struct mm_struct *mm,
-			      struct vm_area_struct *vma,
 			      unsigned long start, unsigned long end)
 {
 }
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2736,9 +2736,17 @@ int __do_munmap(struct mm_struct *mm, un
 		return -EINVAL;
 
 	len = PAGE_ALIGN(len);
+	end = start + len;
 	if (len == 0)
 		return -EINVAL;
 
+	/*
+	 * arch_unmap() might do unmaps itself.  It must be called
+	 * and finish any rbtree manipulation before this code
+	 * runs and also starts to manipulate the rbtree.
+	 */
+	arch_unmap(mm, start, end);
+
 	/* Find the first overlapping VMA */
 	vma = find_vma(mm, start);
 	if (!vma)
@@ -2747,7 +2755,6 @@ int __do_munmap(struct mm_struct *mm, un
 	/* we have  start < vma->vm_end  */
 
 	/* if it doesn't overlap, we have nothing.. */
-	end = start + len;
 	if (vma->vm_start >= end)
 		return 0;
 
@@ -2817,12 +2824,6 @@ int __do_munmap(struct mm_struct *mm, un
 	/* Detach vmas from rbtree */
 	detach_vmas_to_be_unmapped(mm, vma, prev, end);
 
-	/*
-	 * mpx unmap needs to be called with mmap_sem held for write.
-	 * It is safe to call it before unmap_region().
-	 */
-	arch_unmap(mm, vma, start, end);
-
 	if (downgrade)
 		downgrade_write(&mm->mmap_sem);
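
[ Editor's note: the mm/mmap.c hunks above boil down to the following
  control-flow change in __do_munmap().  This is a condensed sketch,
  not the literal upstream function; the middle of the function is
  elided to a comment. ]

	int __do_munmap(struct mm_struct *mm, unsigned long start,
			size_t len, struct list_head *uf, bool downgrade)
	{
		unsigned long end;
		struct vm_area_struct *vma;

		len = PAGE_ALIGN(len);
		end = start + len;
		if (len == 0)
			return -EINVAL;

		/*
		 * New ordering: the arch hook (and any recursive
		 * do_munmap() it performs, as MPX's does) runs to
		 * completion before this function caches any VMA
		 * pointers or touches the rbtree.
		 */
		arch_unmap(mm, start, end);

		vma = find_vma(mm, start);	/* rbtree is stable again */
		if (!vma || vma->vm_start >= end)
			return 0;

		/*
		 * ... split VMAs, detach_vmas_to_be_unmapped(),
		 * unmap_region(), remove_vma_list() ... with no
		 * arch_unmap() call left down here.
		 */
		return 0;
	}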