From patchwork Fri Jun 24 04:22:59 2016
X-Patchwork-Submitter: Andy Lutomirski
X-Patchwork-Id: 9196533
From: Andy Lutomirski
To: x86@kernel.org, linux-kernel@vger.kernel.org
Cc: linux-arch@vger.kernel.org, Borislav Petkov, Nadav Amit, Kees Cook,
 Brian Gerst, "kernel-hardening@lists.openwall.com", Linus Torvalds,
 Josh Poimboeuf, Jann Horn, Heiko Carstens, Andy Lutomirski
Date: Thu, 23 Jun 2016 21:22:59 -0700
Message-Id: <3403d1d7d13c5a28635507c5e47400800ddc3733.1466741835.git.luto@kernel.org>
X-Mailer: git-send-email 2.5.5
Subject: [kernel-hardening] [PATCH v4 04/16] x86/cpa: In populate_pgd, don't
 set the pgd entry until it's populated

This avoids pointless races in which another CPU or task might see a
partially populated global pgd entry.  These races should normally be
harmless, but, if another CPU propagates the entry via vmalloc_fault
and then populate_pgd fails (due to memory allocation failure, for
example), this prevents a use-after-free of the pgd entry.

Signed-off-by: Andy Lutomirski
---
 arch/x86/mm/pageattr.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 7a1f7bbf4105..6a8026918bf6 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1113,7 +1113,9 @@ static int populate_pgd(struct cpa_data *cpa, unsigned long addr)
 
 	ret = populate_pud(cpa, addr, pgd_entry, pgprot);
 	if (ret < 0) {
-		unmap_pgd_range(cpa->pgd, addr,
+		if (pud)
+			free_page((unsigned long)pud);
+		unmap_pud_range(pgd_entry, addr,
 				addr + (cpa->numpages << PAGE_SHIFT));
 		return ret;
 	}
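
[Editor's note: for readers who want to see the ordering idea in isolation,
here is a minimal, stand-alone user-space C sketch of "populate fully, then
publish" that the patch title describes.  It is not kernel code and not the
author's implementation; all names (mock_pud, mock_pgd_entry,
populate_mock_pud, populate_mock_pgd) are hypothetical.  The point is that
an error path can free the lower-level table safely because the upper-level
entry was never made visible to other observers.]

/*
 * Minimal sketch: build the lower-level table completely before
 * publishing it in the upper-level entry, so a failure can free the
 * allocation without anyone else holding a stale reference.
 */
#include <stdio.h>
#include <stdlib.h>

#define MOCK_ENTRIES 4

struct mock_pud {
	unsigned long entry[MOCK_ENTRIES];
};

/* The "published" top-level entry that other observers might read. */
static struct mock_pud *mock_pgd_entry;

/* Pretend to fill in the lower-level table; may fail. */
static int populate_mock_pud(struct mock_pud *pud)
{
	for (int i = 0; i < MOCK_ENTRIES; i++)
		pud->entry[i] = 0x1000UL * (i + 1);
	return 0;	/* return -1 here to exercise the error path */
}

static int populate_mock_pgd(void)
{
	struct mock_pud *pud = calloc(1, sizeof(*pud));

	if (!pud)
		return -1;

	/*
	 * Populate first.  Because mock_pgd_entry has not been set yet,
	 * no other reader can see (or cache) a half-built table, so the
	 * failure path below cannot leave a dangling reference behind.
	 */
	if (populate_mock_pud(pud) < 0) {
		free(pud);
		return -1;
	}

	/* Only now publish the fully populated table. */
	mock_pgd_entry = pud;
	return 0;
}

int main(void)
{
	if (populate_mock_pgd() < 0) {
		fprintf(stderr, "populate failed, nothing was published\n");
		return 1;
	}
	printf("published pud with first entry 0x%lx\n",
	       mock_pgd_entry->entry[0]);
	return 0;
}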