From patchwork Wed Apr 30 11:15:53 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 4092801
From: Christoffer Dall
To: Paolo Bonzini, Gleb Natapov
Cc: kvm@vger.kernel.org, Marc Zyngier, stable@vger.kernel.org,
	Christoffer Dall, Mark Salter, kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH 2/5] arm: KVM: fix possible misalignment of PGDs and bounce page
Date: Wed, 30 Apr 2014 04:15:53 -0700
Message-Id: <1398856556-13199-3-git-send-email-christoffer.dall@linaro.org>
In-Reply-To: <1398856556-13199-1-git-send-email-christoffer.dall@linaro.org>
References: <1398856556-13199-1-git-send-email-christoffer.dall@linaro.org>

From: Mark Salter

The kvm/mmu code shared by arm and arm64 uses kmalloc() to allocate
a bounce page (if hypervisor init code crosses a page boundary) and
hypervisor PGDs. The problem is that kmalloc() does not guarantee
the proper alignment. In the case of the bounce page, the page-sized
buffer allocated may itself cross a page boundary, negating its
purpose and leading to a hang during KVM initialization. Likewise,
the PGDs allocated may not meet the minimum alignment requirements
of the underlying MMU.

This patch uses __get_free_page() and __get_free_pages() to guarantee
the worst-case alignment needs of the bounce page and the PGDs,
respectively, on both arm and arm64.
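To illustrate the bounce-page failure mode, here is a minimal sketch
(not part of the patch; the helper name is invented):

#include <linux/gfp.h>
#include <linux/slab.h>

/*
 * Sketch only: why kmalloc(PAGE_SIZE, ...) is not a substitute for
 * __get_free_page() here.  kmalloc() only guarantees
 * ARCH_KMALLOC_MINALIGN alignment, so the returned buffer may start
 * mid-page and spill into the next one:
 *
 *     |-------- page N --------|------- page N+1 --------|
 *                    ^buf ............ ^buf + PAGE_SIZE
 *
 * A bounce page that straddles two pages defeats its purpose.
 */
static void *alloc_one_whole_page(void)
{
	/* Always page aligned: the buddy allocator hands out whole pages. */
	return (void *)__get_free_page(GFP_KERNEL);
}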
Cc: <stable@vger.kernel.org> # 3.10+
Signed-off-by: Mark Salter
Acked-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
 arch/arm/kvm/mmu.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 80bb1e6..16f8049 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -42,6 +42,8 @@ static unsigned long hyp_idmap_start;
 static unsigned long hyp_idmap_end;
 static phys_addr_t hyp_idmap_vector;
 
+#define pgd_order get_order(PTRS_PER_PGD * sizeof(pgd_t))
+
 #define kvm_pmd_huge(_x)	(pmd_huge(_x) || pmd_trans_huge(_x))
 
 static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
@@ -293,14 +295,14 @@ void free_boot_hyp_pgd(void)
 	if (boot_hyp_pgd) {
 		unmap_range(NULL, boot_hyp_pgd, hyp_idmap_start, PAGE_SIZE);
 		unmap_range(NULL, boot_hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
-		kfree(boot_hyp_pgd);
+		free_pages((unsigned long)boot_hyp_pgd, pgd_order);
 		boot_hyp_pgd = NULL;
 	}
 
 	if (hyp_pgd)
 		unmap_range(NULL, hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
 
-	kfree(init_bounce_page);
+	free_page((unsigned long)init_bounce_page);
 	init_bounce_page = NULL;
 
 	mutex_unlock(&kvm_hyp_pgd_mutex);
@@ -330,7 +332,7 @@ void free_hyp_pgds(void)
 		for (addr = VMALLOC_START; is_vmalloc_addr((void*)addr); addr += PGDIR_SIZE)
 			unmap_range(NULL, hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
 
-		kfree(hyp_pgd);
+		free_pages((unsigned long)hyp_pgd, pgd_order);
 		hyp_pgd = NULL;
 	}
 
@@ -1024,7 +1026,7 @@ int kvm_mmu_init(void)
 	size_t len = __hyp_idmap_text_end - __hyp_idmap_text_start;
 	phys_addr_t phys_base;
 
-	init_bounce_page = kmalloc(PAGE_SIZE, GFP_KERNEL);
+	init_bounce_page = (void *)__get_free_page(GFP_KERNEL);
 	if (!init_bounce_page) {
 		kvm_err("Couldn't allocate HYP init bounce page\n");
 		err = -ENOMEM;
@@ -1050,8 +1052,9 @@ int kvm_mmu_init(void)
 			 (unsigned long)phys_base);
 	}
 
-	hyp_pgd = kzalloc(PTRS_PER_PGD * sizeof(pgd_t), GFP_KERNEL);
-	boot_hyp_pgd = kzalloc(PTRS_PER_PGD * sizeof(pgd_t), GFP_KERNEL);
+	hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, pgd_order);
+	boot_hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, pgd_order);
+
 	if (!hyp_pgd || !boot_hyp_pgd) {
 		kvm_err("Hyp mode PGD not allocated\n");
 		err = -ENOMEM;
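
A note on the pgd_order pattern used above, as a sketch under the same
assumptions as the patch (the demo_* names are invented):

#include <linux/gfp.h>
#include <linux/mm.h>	/* get_order() */

/* Same computation the patch #defines as pgd_order. */
#define demo_pgd_order	get_order(PTRS_PER_PGD * sizeof(pgd_t))

static pgd_t *demo_alloc_pgd(void)
{
	/*
	 * get_order(sz) returns the smallest n with (PAGE_SIZE << n) >= sz,
	 * e.g. get_order(PAGE_SIZE) == 0.  An order-n buddy allocation is
	 * naturally aligned to PAGE_SIZE << n, so this covers any PGD
	 * alignment requirement up to the size of the table itself -- the
	 * "worst case" the commit message refers to.  __GFP_ZERO replaces
	 * the zeroing that kzalloc() previously provided.
	 */
	return (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
					 demo_pgd_order);
}

static void demo_free_pgd(pgd_t *pgd)
{
	/* free_pages() must be passed the same order used to allocate. */
	free_pages((unsigned long)pgd, demo_pgd_order);
}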