From patchwork Tue Jan 16 17:02:31 2018
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10167695
From: Christoffer Dall
To: Paolo Bonzini, Radim Krčmář
Cc: Marc Zyngier, linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, Punit Agrawal, stable@vger.kernel.org, Christoffer Dall
Subject: [PULL v2 1/3] KVM: arm/arm64: Check pagesize when allocating a hugepage at Stage 2
Date: Tue, 16 Jan 2018 18:02:31 +0100
Message-Id: <20180116170233.7085-2-christoffer.dall@linaro.org>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <20180116170233.7085-1-christoffer.dall@linaro.org>
References: <20180116170233.7085-1-christoffer.dall@linaro.org>

From: Punit Agrawal

KVM only supports PMD hugepages at stage 2, but it doesn't actually check
that the provided hugepage memory pagesize is PMD_SIZE before populating
stage 2 entries.

In cases where the backing hugepage size is smaller than PMD_SIZE (such as
when using contiguous hugepages), KVM can end up creating stage 2 mappings
that extend beyond the supplied memory.

Fix this by checking the pagesize of the userspace vma before creating a
PMD hugepage at stage 2.

Fixes: 66b3923a1a0f77a ("arm64: hugetlb: add support for PTE contiguous bit")
Signed-off-by: Punit Agrawal
Cc: Marc Zyngier
Cc: stable@vger.kernel.org # v4.5+
Reviewed-by: Christoffer Dall
Signed-off-by: Christoffer Dall
---
 virt/kvm/arm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index b4b69c2d1012..9dea96380339 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1310,7 +1310,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}
 
-	if (is_vm_hugetlb_page(vma) && !logging_active) {
+	if (vma_kernel_pagesize(vma) == PMD_SIZE && !logging_active) {
 		hugetlb = true;
 		gfn = (fault_ipa & PMD_MASK) >> PAGE_SHIFT;
 	} else {
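
For context (not part of the patch): a minimal C sketch of the distinction the
fix relies on. The struct and helper names below are illustrative stand-ins,
not the kernel's, and PMD_SIZE is assumed to be 2M as on a 4K-page arm64
configuration.

#include <stdbool.h>
#include <stddef.h>

#define PMD_SIZE (2UL * 1024 * 1024)	/* assumed: 2M PMD on a 4K-page arm64 kernel */

/* Illustrative stand-in for the vma properties consulted by the fault path. */
struct vma_info {
	bool is_hugetlb;	/* what is_vm_hugetlb_page() reports */
	size_t hugepage_size;	/* what vma_kernel_pagesize() reports */
};

/* Old check: any hugetlb vma was mapped with a stage 2 PMD block. */
static bool old_use_pmd_block(const struct vma_info *vma, bool logging_active)
{
	return vma->is_hugetlb && !logging_active;
}

/*
 * New check: only use a stage 2 PMD block when the backing hugepage size
 * actually matches PMD_SIZE.  A contiguous-PTE hugetlb vma (e.g. backed by
 * 64K hugepages) has is_hugetlb == true but hugepage_size < PMD_SIZE, so
 * mapping it with a 2M block would cover memory the vma does not supply.
 */
static bool new_use_pmd_block(const struct vma_info *vma, bool logging_active)
{
	return vma->hugepage_size == PMD_SIZE && !logging_active;
}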