From patchwork Fri Aug 9 03:53:08 2013
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 2841655
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu
Cc: Christoffer Dall, linaro-kernel@lists.linaro.org,
    linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org,
    patches@linaro.org
Subject: [PATCH] ARM: KVM: Fix unaligned unmap_range leak
Date: Thu, 8 Aug 2013 20:53:08 -0700
Message-Id: <1376020388-9880-1-git-send-email-christoffer.dall@linaro.org>

The unmap_range function did not properly cover the case when the start
address was not aligned to PMD_SIZE or PUD_SIZE and an entire pte table
or pmd table was cleared: adding the full PMD_SIZE or PUD_SIZE to an
unaligned addr overshot the next table entry, skipping entries that
were never unmapped and leaking their mappings. The fix is to always
advance to the next page table entry boundary instead of adding the
full size of the VA range covered by the corresponding table-level
entry.

Acked-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
 arch/arm/kvm/mmu.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index ca6bea4..80a83ec 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -132,37 +132,37 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
 	pmd_t *pmd;
 	pte_t *pte;
 	unsigned long long addr = start, end = start + size;
-	u64 range;
+	u64 next;
 
 	while (addr < end) {
 		pgd = pgdp + pgd_index(addr);
 		pud = pud_offset(pgd, addr);
 		if (pud_none(*pud)) {
-			addr += PUD_SIZE;
+			addr = pud_addr_end(addr, end);
 			continue;
 		}
 
 		pmd = pmd_offset(pud, addr);
 		if (pmd_none(*pmd)) {
-			addr += PMD_SIZE;
+			addr = pmd_addr_end(addr, end);
 			continue;
 		}
 
 		pte = pte_offset_kernel(pmd, addr);
 		clear_pte_entry(kvm, pte, addr);
-		range = PAGE_SIZE;
+		next = addr + PAGE_SIZE;
 
 		/* If we emptied the pte, walk back up the ladder */
 		if (pte_empty(pte)) {
 			clear_pmd_entry(kvm, pmd, addr);
-			range = PMD_SIZE;
+			next = pmd_addr_end(addr, end);
 			if (pmd_empty(pmd)) {
 				clear_pud_entry(kvm, pud, addr);
-				range = PUD_SIZE;
+				next = pud_addr_end(addr, end);
 			}
 		}
 
-		addr += range;
+		addr = next;
 	}
 }
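
For reference, a minimal standalone sketch of why stepping to the entry
boundary fixes the leak. The helper below open-codes the semantics of the
generic pmd_addr_end() (round addr up to the next entry boundary, clamped
to the end of the range); the 2 MiB PMD size, the sketch_ names, and the
sample addresses are illustrative assumptions, not values from the patch:

#include <stdint.h>
#include <stdio.h>

/* Illustrative assumption: a PMD entry covers 2 MiB of VA space. */
#define SKETCH_PMD_SIZE	(1ULL << 21)
#define SKETCH_PMD_MASK	(~(SKETCH_PMD_SIZE - 1))

/*
 * Open-coded equivalent of the generic pmd_addr_end(): round addr up to
 * the next PMD entry boundary, clamped to the end of the range.
 * (Ignores address-space wraparound, which the kernel macro handles.)
 */
static uint64_t sketch_pmd_addr_end(uint64_t addr, uint64_t end)
{
	uint64_t boundary = (addr + SKETCH_PMD_SIZE) & SKETCH_PMD_MASK;

	return boundary < end ? boundary : end;
}

int main(void)
{
	uint64_t addr = 0x100000;	/* 1 MiB: halfway into PMD entry 0 */
	uint64_t end  = 0x400000;	/* 4 MiB: range covers entries 0 and 1 */

	/* Old behavior: addr += PMD_SIZE lands at 3 MiB, so the range
	 * 2 MiB..3 MiB inside PMD entry 1 is never visited and leaks. */
	printf("addr += PMD_SIZE -> 0x%llx\n",
	       (unsigned long long)(addr + SKETCH_PMD_SIZE));

	/* Fixed behavior: step to the 2 MiB entry boundary, so the walk
	 * visits every entry in the range exactly once. */
	printf("pmd_addr_end     -> 0x%llx\n",
	       (unsigned long long)sketch_pmd_addr_end(addr, end));

	return 0;
}

With a PMD-aligned start address the two expressions agree, which is why
the leak only shows up for unaligned ranges.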