From patchwork Fri Aug  9 03:53:08 2013
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linaro-kernel@lists.linaro.org, patches@linaro.org,
	Christoffer Dall
Subject: [PATCH] ARM: KVM: Fix unaligned unmap_range leak
Date: Thu,  8 Aug 2013 20:53:08 -0700
Message-Id: <1376020388-9880-1-git-send-email-christoffer.dall@linaro.org>
X-Mailer: git-send-email 1.7.10.4

The unmap_range function did not properly cover the case where the
start address was not aligned to PMD_SIZE or PUD_SIZE and an entire
pte table or pmd table was cleared, causing us to leak memory when
incrementing the addr.

The fix is to always move on to the next page table entry boundary
instead of adding the full size of the VA range covered by the
corresponding table-level entry.
Acked-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
 arch/arm/kvm/mmu.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index ca6bea4..80a83ec 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -132,37 +132,37 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
 	pmd_t *pmd;
 	pte_t *pte;
 	unsigned long long addr = start, end = start + size;
-	u64 range;
+	u64 next;
 
 	while (addr < end) {
 		pgd = pgdp + pgd_index(addr);
 		pud = pud_offset(pgd, addr);
 		if (pud_none(*pud)) {
-			addr += PUD_SIZE;
+			addr = pud_addr_end(addr, end);
 			continue;
 		}
 
 		pmd = pmd_offset(pud, addr);
 		if (pmd_none(*pmd)) {
-			addr += PMD_SIZE;
+			addr = pmd_addr_end(addr, end);
 			continue;
 		}
 
 		pte = pte_offset_kernel(pmd, addr);
 		clear_pte_entry(kvm, pte, addr);
-		range = PAGE_SIZE;
+		next = addr + PAGE_SIZE;
 
 		/* If we emptied the pte, walk back up the ladder */
 		if (pte_empty(pte)) {
 			clear_pmd_entry(kvm, pmd, addr);
-			range = PMD_SIZE;
+			next = pmd_addr_end(addr, end);
 			if (pmd_empty(pmd)) {
 				clear_pud_entry(kvm, pud, addr);
-				range = PUD_SIZE;
+				next = pud_addr_end(addr, end);
 			}
 		}
 
-		addr += range;
+		addr = next;
 	}
 }