From: Christoffer Dall
To: Marc Zyngier
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Subject: Re: [PATCH] arm64: KVM: fix 2-level page tables unmapping
Date: Tue, 6 Aug 2013 13:49:43 -0700
Message-ID: <20130806204943.GK16694@cbox>
References: <1375790748-10296-1-git-send-email-marc.zyngier@arm.com>
In-Reply-To: <1375790748-10296-1-git-send-email-marc.zyngier@arm.com>

On Tue, Aug 06, 2013 at 01:05:48PM +0100, Marc Zyngier wrote:
> When using 64kB pages, we only have two levels of page tables,
> meaning that PGD, PUD and PMD are fused. In this case, trying
> to refcount PUDs and PMDs independantly is a a complete disaster,

independently

> as they are the same.
>
> We manage to get it right for the allocation (stage2_set_pte uses
> {pmd,pud}_none), but the unmapping path clears both pud and pmd
> refcounts, which fails spectacularly with 2-level page tables.
>
> The fix is to avoid calling clear_pud_entry when both the pmd and
> pud pages are empty. For this, and instead of introducing another
> pud_empty function, consolidate both pte_empty and pmd_empty into
> page_empty (the code is actually identical) and use that to also
> test the validity of the pud.
>
> Signed-off-by: Marc Zyngier
> ---
>  arch/arm/kvm/mmu.c | 22 ++++++++--------------
>  1 file changed, 8 insertions(+), 14 deletions(-)
>
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index ca6bea4..7e1d899 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -85,6 +85,12 @@ static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
>  	return p;
>  }
>  
> +static bool page_empty(void *ptr)
> +{
> +	struct page *ptr_page = virt_to_page(ptr);
> +	return page_count(ptr_page) == 1;
> +}
> +
>  static void clear_pud_entry(struct kvm *kvm, pud_t *pud, phys_addr_t addr)
>  {
>  	pmd_t *pmd_table = pmd_offset(pud, 0);
> @@ -103,12 +109,6 @@ static void clear_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr)
>  	put_page(virt_to_page(pmd));
>  }
>  
> -static bool pmd_empty(pmd_t *pmd)
> -{
> -	struct page *pmd_page = virt_to_page(pmd);
> -	return page_count(pmd_page) == 1;
> -}
> -
>  static void clear_pte_entry(struct kvm *kvm, pte_t *pte, phys_addr_t addr)
>  {
>  	if (pte_present(*pte)) {
> @@ -118,12 +118,6 @@ static void clear_pte_entry(struct kvm *kvm, pte_t *pte, phys_addr_t addr)
>  	}
>  }
>  
> -static bool pte_empty(pte_t *pte)
> -{
> -	struct page *pte_page = virt_to_page(pte);
> -	return page_count(pte_page) == 1;
> -}
> -
>  static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
>  			unsigned long long start, u64 size)
>  {
> @@ -153,10 +147,10 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
>  		range = PAGE_SIZE;
>  
>  		/* If we emptied the pte, walk back up the ladder */
> -		if (pte_empty(pte)) {
> +		if (page_empty(pte)) {
>  			clear_pmd_entry(kvm, pmd, addr);
>  			range = PMD_SIZE;
> -			if (pmd_empty(pmd)) {
> +			if (page_empty(pmd) && !page_empty(pud)) {
>  				clear_pud_entry(kvm, pud, addr);
>  				range = PUD_SIZE;
>  			}

Looks right. An alternative would be to check in clear_pud_entry whether
the entry actually had a value, but I don't think that's really clearer.

However, this got me thinking a bit: what happens if we pass a
non-PMD-aligned address to unmap_range and the size of the range is more
than 2MB? Won't we be leaking memory by incrementing with PMD_SIZE?
(The same argument goes for PUD_SIZE.)

See the patch below:

---
Christoffer

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index ca6bea4..80a83ec 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -132,37 +132,37 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
 	pmd_t *pmd;
 	pte_t *pte;
 	unsigned long long addr = start, end = start + size;
-	u64 range;
+	u64 next;
 
 	while (addr < end) {
 		pgd = pgdp + pgd_index(addr);
 		pud = pud_offset(pgd, addr);
 		if (pud_none(*pud)) {
-			addr += PUD_SIZE;
+			addr = pud_addr_end(addr, end);
 			continue;
 		}
 
 		pmd = pmd_offset(pud, addr);
 		if (pmd_none(*pmd)) {
-			addr += PMD_SIZE;
+			addr = pmd_addr_end(addr, end);
 			continue;
 		}
 
 		pte = pte_offset_kernel(pmd, addr);
 		clear_pte_entry(kvm, pte, addr);
-		range = PAGE_SIZE;
+		next = addr + PAGE_SIZE;
 
 		/* If we emptied the pte, walk back up the ladder */
 		if (pte_empty(pte)) {
 			clear_pmd_entry(kvm, pmd, addr);
-			range = PMD_SIZE;
+			next = pmd_addr_end(addr, end);
 			if (pmd_empty(pmd)) {
 				clear_pud_entry(kvm, pud, addr);
-				next = pud_addr_end(addr, end);
+				next = pud_addr_end(addr, end);
 			}
 		}
-		addr += range;
+		addr = next;
 	}
 }
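
To make the alignment concern concrete: the kernel's pmd_addr_end()/pud_addr_end()
helpers advance to the next PMD/PUD boundary (capped at the end of the range)
rather than by a fixed stride, so an unaligned start address can no longer step
over part of the following block. Below is a minimal, stand-alone user-space
sketch of that arithmetic; the 2MB PMD_SIZE matches the 4kB-page configuration
the "2MB" in the mail refers to, the start address is an arbitrary example, and
the helper is simplified (it ignores the wrap-around handling of the real
asm-generic macro).

#include <stdio.h>

#define PMD_SHIFT	21
#define PMD_SIZE	(1UL << PMD_SHIFT)	/* 2MB, as with 4kB pages */
#define PMD_MASK	(~(PMD_SIZE - 1))

/* Simplified take on the generic kernel pmd_addr_end(): round addr up to
 * the next PMD boundary, but never step past 'end'. */
static unsigned long pmd_addr_end(unsigned long addr, unsigned long end)
{
	unsigned long boundary = (addr + PMD_SIZE) & PMD_MASK;

	return boundary < end ? boundary : end;
}

int main(void)
{
	unsigned long start = 0x100000;		/* 1MB: not PMD-aligned */
	unsigned long end = start + 4 * PMD_SIZE;

	/* A fixed stride overshoots the next PMD boundary (0x200000), so the
	 * walk never visits [0x200000, 0x300000) and whatever is mapped
	 * there is left behind. */
	printf("addr += PMD_SIZE      -> 0x%lx\n", start + PMD_SIZE);
	/* pmd_addr_end() stops exactly at the boundary instead. */
	printf("pmd_addr_end(addr, e) -> 0x%lx\n", pmd_addr_end(start, end));
	return 0;
}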
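
Stepping back to what the original patch fixes: with only two translation
levels the intermediate levels are folded, so pud_offset() and pmd_offset()
effectively hand back the entry they were given, and the "pud" and "pmd" seen
by unmap_range() are the same table page. The stand-alone sketch below uses
simplified stand-ins for the asm-generic nopud/nopmd helpers (not the real
kernel types) to show why dropping a table-page reference once per level
decrements the same page's refcount twice.

#include <stdio.h>

/* Folded page-table levels: each "lower" type is just an alias of the one
 * above it, mirroring the idea behind asm-generic/pgtable-nopud.h and
 * pgtable-nopmd.h. */
typedef unsigned long pgd_t;
typedef pgd_t pud_t;
typedef pud_t pmd_t;

static pud_t *pud_offset(pgd_t *pgd, unsigned long addr)
{
	(void)addr;
	return (pud_t *)pgd;	/* folded: the pud *is* the pgd entry */
}

static pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
{
	(void)addr;
	return (pmd_t *)pud;	/* folded: the pmd *is* the pud entry */
}

int main(void)
{
	pgd_t pgd_entry = 0;
	pud_t *pud = pud_offset(&pgd_entry, 0);
	pmd_t *pmd = pmd_offset(pud, 0);

	/* Same pointer: a put_page() per "level" in the unmap path would hit
	 * the same page twice, which is what page_empty() plus the extra
	 * !page_empty(pud) check avoids. */
	printf("pud == pmd ? %s\n", (void *)pud == (void *)pmd ? "yes" : "no");
	return 0;
}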