From patchwork Mon Aug 12 04:12:59 2013
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 2842852
From: Christoffer Dall <christoffer.dall@linaro.org>
To: Paolo Bonzini, Gleb Natapov
Cc: linaro-kernel@lists.linaro.org, kvm@vger.kernel.org, patches@linaro.org,
 Christoffer Dall, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH 2/4] ARM: KVM: Fix unaligned unmap_range leak
Date: Sun, 11 Aug 2013 21:12:59 -0700
Message-Id: <1376280781-6539-3-git-send-email-christoffer.dall@linaro.org>
In-Reply-To: <1376280781-6539-1-git-send-email-christoffer.dall@linaro.org>
References: <1376280781-6539-1-git-send-email-christoffer.dall@linaro.org>

The unmap_range function did not properly cover the case where the start
address was not aligned to PMD_SIZE or PUD_SIZE and an entire pte table
or pmd table was cleared: adding the full PMD_SIZE or PUD_SIZE to an
unaligned address overshoots the next table boundary, so the entries in
between are never visited and their pages are leaked.

The fix is to always move on to the next page table entry boundary
instead of adding the full size of the VA range covered by the
corresponding table level entry.

Acked-by: Marc Zyngier
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 arch/arm/kvm/mmu.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index ca6bea4..80a83ec 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -132,37 +132,37 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
 	pmd_t *pmd;
 	pte_t *pte;
 	unsigned long long addr = start, end = start + size;
-	u64 range;
+	u64 next;
 
 	while (addr < end) {
 		pgd = pgdp + pgd_index(addr);
 		pud = pud_offset(pgd, addr);
 		if (pud_none(*pud)) {
-			addr += PUD_SIZE;
+			addr = pud_addr_end(addr, end);
 			continue;
 		}
 
 		pmd = pmd_offset(pud, addr);
 		if (pmd_none(*pmd)) {
-			addr += PMD_SIZE;
+			addr = pmd_addr_end(addr, end);
 			continue;
 		}
 
 		pte = pte_offset_kernel(pmd, addr);
 		clear_pte_entry(kvm, pte, addr);
-		range = PAGE_SIZE;
+		next = addr + PAGE_SIZE;
 
 		/* If we emptied the pte, walk back up the ladder */
 		if (pte_empty(pte)) {
 			clear_pmd_entry(kvm, pmd, addr);
-			range = PMD_SIZE;
+			next = pmd_addr_end(addr, end);
 			if (pmd_empty(pmd)) {
 				clear_pud_entry(kvm, pud, addr);
-				range = PUD_SIZE;
+				next = pud_addr_end(addr, end);
 			}
 		}
-		addr += range;
+		addr = next;
 	}
 }
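
[Editor's note: a minimal userspace sketch of the boundary arithmetic the
fix relies on; this is an illustration, not part of the patch. The 2 MiB
PMD_SIZE assumes the LPAE stage-2 page-table format that KVM/ARM uses,
and demo_pmd_addr_end() is a hypothetical stand-in that mirrors the
semantics of the kernel's pmd_addr_end() without its overflow guard.]

/* pmd_demo.c - why "addr += PMD_SIZE" leaks on unaligned addresses.
 * Build: cc -o pmd_demo pmd_demo.c
 */
#include <stdio.h>
#include <stdint.h>

#define PMD_SIZE (1ULL << 21)           /* 2 MiB, LPAE stage-2 section size */
#define PMD_MASK (~(PMD_SIZE - 1))

/* Same idea as the kernel's pmd_addr_end(): round addr up to the next
 * PMD boundary, clamped to end (overflow handling omitted here). */
static uint64_t demo_pmd_addr_end(uint64_t addr, uint64_t end)
{
	uint64_t boundary = (addr + PMD_SIZE) & PMD_MASK;
	return boundary < end ? boundary : end;
}

int main(void)
{
	uint64_t addr = 0x00301000ULL;  /* deliberately not 2 MiB aligned */
	uint64_t end  = 0x00800000ULL;

	/* Old walk: lands at 0x501000, past the 0x400000 boundary, so the
	 * entries covering 0x400000..0x500fff are never visited. */
	printf("addr += PMD_SIZE:       0x%llx\n",
	       (unsigned long long)(addr + PMD_SIZE));

	/* Fixed walk: continues exactly at the next boundary, 0x400000. */
	printf("pmd_addr_end(addr,end): 0x%llx\n",
	       (unsigned long long)demo_pmd_addr_end(addr, end));
	return 0;
}

With addr = 0x301000, the old code jumps to 0x501000, skipping past the
0x400000 boundary, so the ptes mapping 0x400000-0x500fff in the next table
are never cleared and their pages are leaked; the fixed code continues
exactly at 0x400000. The same reasoning applies one level up with PUD_SIZE.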