From patchwork Fri Nov 2 10:03:21 2012
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 1687741
From: Christoffer Dall <c.dall@virtualopensystems.com>
To: kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, Christoffer Dall <c.dall@virtualopensystems.com>
Subject: [RFC PATCH 3/4] KVM: ARM: Improve stage2_clear_pte
Date: Fri, 2 Nov 2012 11:03:21 +0100
Message-Id: <1351850602-4781-4-git-send-email-c.dall@virtualopensystems.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1351850602-4781-1-git-send-email-c.dall@virtualopensystems.com>
References: <1351850602-4781-1-git-send-email-c.dall@virtualopensystems.com>

Factor out parts of the functionality into helper functions to make the
code more readable, and rename stage2_clear_pte to unmap_stage2_range,
which now supports unmapping an arbitrary range in one go instead of a
single page at a time.
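For reviewers, a minimal sketch of how a caller might drive the new
interface, unmapping a whole memslot in a single call. This is
illustrative only and not part of the patch: unmap_memslot_sketch is a
hypothetical helper, while base_gfn, npages and mmu_lock are the usual
fields of struct kvm_memory_slot and struct kvm in this kernel
generation.

/* Hypothetical caller: drop every stage-2 mapping backing one memslot. */
static void unmap_memslot_sketch(struct kvm *kvm, struct kvm_memory_slot *slot)
{
	phys_addr_t start = (phys_addr_t)slot->base_gfn << PAGE_SHIFT;
	size_t size = slot->npages << PAGE_SHIFT;

	/* unmap_stage2_range() must be called with mmu_lock held */
	spin_lock(&kvm->mmu_lock);
	unmap_stage2_range(kvm, start, size);
	spin_unlock(&kvm->mmu_lock);
}

Because the walk skips over PUD_SIZE/PMD_SIZE holes instead of being
invoked once per page, one call can cover an arbitrarily large range.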
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
 arch/arm/kvm/mmu.c | 122 +++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 83 insertions(+), 39 deletions(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index cb03d45..96ab6a8 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -365,59 +365,103 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
 	kvm->arch.pgd = NULL;
 }
 
+static void clear_pud_entry(pud_t *pud)
+{
+	pmd_t *pmd_table = pmd_offset(pud, 0);
+	pud_clear(pud);
+	pmd_free(NULL, pmd_table);
+	put_page(virt_to_page(pud));
+}
+
+static void clear_pmd_entry(pmd_t *pmd)
+{
+	if (pmd_huge(*pmd)) {
+		pmd_clear(pmd);
+	} else {
+		pte_t *pte_table = pte_offset_kernel(pmd, 0);
+		pmd_clear(pmd);
+		pte_free_kernel(NULL, pte_table);
+	}
+	put_page(virt_to_page(pmd));
+}
+
+static bool pmd_empty(pmd_t *pmd)
+{
+	struct page *pmd_page = virt_to_page(pmd);
+	return page_count(pmd_page) == 1;
+}
+
+static void clear_pte_entry(pte_t *pte)
+{
+	set_pte_ext(pte, __pte(0), 0);
+	put_page(virt_to_page(pte));
+}
+
+static bool pte_empty(pte_t *pte)
+{
+	struct page *pte_page = virt_to_page(pte);
+	return page_count(pte_page) == 1;
+}
+
 /**
- * stage2_clear_pte -- Clear a stage-2 PTE.
- * @kvm:	The VM pointer
- * @addr:	The physical address of the PTE
+ * unmap_stage2_range -- Clear stage2 page table entries to unmap a range
+ * @kvm:	The VM pointer
+ * @start:	The intermediate physical base address of the range to unmap
+ * @size:	The size of the area to unmap
  *
- * Clear a stage-2 PTE, lowering the various ref-counts. Also takes
- * care of invalidating the TLBs. Must be called while holding
- * mmu_lock, otherwise another faulting VCPU may come in and mess
- * things behind our back.
+ * Clear a range of stage-2 mappings, lowering the various ref-counts. Also
+ * takes care of invalidating the TLBs. Must be called while holding
+ * mmu_lock, otherwise another faulting VCPU may come in and mess with things
+ * behind our backs.
  */
-static void stage2_clear_pte(struct kvm *kvm, phys_addr_t addr)
+static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, size_t size)
 {
 	pgd_t *pgd;
 	pud_t *pud;
 	pmd_t *pmd;
 	pte_t *pte;
-	struct page *page;
-
-	pgd = kvm->arch.pgd + pgd_index(addr);
-	pud = pud_offset(pgd, addr);
-	if (pud_none(*pud))
-		return;
+	phys_addr_t addr = start, end = start + size;
+	size_t range;
 
-	pmd = pmd_offset(pud, addr);
-	if (pmd_none(*pmd))
-		return;
+	while (addr < end) {
+		pgd = kvm->arch.pgd + pgd_index(addr);
+		pud = pud_offset(pgd, addr);
+		if (pud_none(*pud)) {
+			addr += PUD_SIZE;
+			continue;
+		}
 
-	pte = pte_offset_kernel(pmd, addr);
-	set_pte_ext(pte, __pte(0), 0);
+		pmd = pmd_offset(pud, addr);
+		if (pmd_none(*pmd)) {
+			addr += PMD_SIZE;
+			continue;
+		}
 
-	page = virt_to_page(pte);
-	put_page(page);
-	if (page_count(page) != 1) {
-		kvm_tlb_flush_vmid(kvm);
-		return;
-	}
+		if (pmd_huge(*pmd)) {
+			clear_pmd_entry(pmd);
+			if (pmd_empty(pmd))
+				clear_pud_entry(pud);
+			addr += PMD_SIZE;
+			continue;
+		}
 
-	/* Need to remove pte page */
-	pmd_clear(pmd);
-	pte_free_kernel(NULL, (pte_t *)((unsigned long)pte & PAGE_MASK));
+		pte = pte_offset_kernel(pmd, addr);
+		clear_pte_entry(pte);
+		range = PAGE_SIZE;
+
+		/* If we emptied the pte, walk back up the ladder */
+		if (pte_empty(pte)) {
+			clear_pmd_entry(pmd);
+			range = PMD_SIZE;
+			if (pmd_empty(pmd)) {
+				clear_pud_entry(pud);
+				range = PUD_SIZE;
+			}
+		}
 
-	page = virt_to_page(pmd);
-	put_page(page);
-	if (page_count(page) != 1) {
-		kvm_tlb_flush_vmid(kvm);
-		return;
+		addr += range;
 	}
 
-	pud_clear(pud);
-	pmd_free(NULL, (pmd_t *)((unsigned long)pmd & PAGE_MASK));
-
-	page = virt_to_page(pud);
-	put_page(page);
 	kvm_tlb_flush_vmid(kvm);
 }
 
@@ -693,7 +737,7 @@ static void handle_hva_to_gpa(struct kvm *kvm,
 
 static void kvm_unmap_hva_handler(struct kvm *kvm, gpa_t gpa, void *data)
 {
-	stage2_clear_pte(kvm, gpa);
+	unmap_stage2_range(kvm, gpa, PAGE_SIZE);
 }
 
 int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
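A note on the pte_empty()/pmd_empty() tests above: they rely on a
reference-counting convention rather than scanning the table. The
struct page behind each stage-2 table keeps one base reference from its
allocation, and the mapping side takes one extra reference per entry it
installs. A minimal sketch of that convention, with a hypothetical
helper name (the real counterpart is the stage-2 fault path elsewhere
in mmu.c):

/*
 * Sketch only, not part of the patch: install one stage-2 pte and
 * account for it on the table page, so that page_count() == 1 again
 * means "no live entries left in this table".
 */
static void stage2_install_pte_sketch(pte_t *pte, pte_t new_pte)
{
	set_pte_ext(pte, new_pte, 0);		/* same pte setter used above */
	get_page(virt_to_page(pte));		/* one reference per live entry */
}

With that invariant, the put_page() in clear_pte_entry() brings the
count back down, and page_count() == 1 in pte_empty() means the last
entry is gone, so the walk can free the pte table and recurse up to the
pud level.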