From patchwork Sat Oct 6 21:33:43 2012
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 1560111
In-Reply-To: <000401cda2a0$69670a40$3c351ec0$@samsung.com>
References: <20121001090945.49198.68950.stgit@ubuntu>
 <20121001091042.49198.93241.stgit@ubuntu>
 <000401cda2a0$69670a40$3c351ec0$@samsung.com>
Date: Sat, 6 Oct 2012 17:33:43 -0400
Subject: Re: [PATCH v2 06/14] KVM: ARM: Memory virtualization setup
From: Christoffer Dall
To: Min-gyu Kim
Cc: Marc Zyngier, 김창환, linux-arm-kernel@lists.infradead.org,
 kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu

On Thu, Oct 4, 2012 at 10:23 PM, Min-gyu Kim wrote:
>
>
>> -----Original Message-----
>> From: kvm-owner@vger.kernel.org [mailto:kvm-owner@vger.kernel.org] On
>> Behalf Of Christoffer Dall
>> Sent: Monday, October 01, 2012 6:11 PM
>> To: kvm@vger.kernel.org; linux-arm-kernel@lists.infradead.org;
>> kvmarm@lists.cs.columbia.edu
>> Cc: Marc Zyngier
>> Subject: [PATCH v2 06/14] KVM: ARM: Memory virtualization setup
>>
>> +static void stage2_set_pte(struct kvm *kvm,
>> +			   struct kvm_mmu_memory_cache *cache,
>> +			   phys_addr_t addr, const pte_t *new_pte)
>> +{
>> +	pgd_t *pgd;
>> +	pud_t *pud;
>> +	pmd_t *pmd;
>> +	pte_t *pte, old_pte;
>> +
>> +	/* Create 2nd stage page table mapping - Level 1 */
>> +	pgd = kvm->arch.pgd + pgd_index(addr);
>> +	pud = pud_offset(pgd, addr);
>> +	if (pud_none(*pud)) {
>> +		if (!cache)
>> +			return; /* ignore calls from kvm_set_spte_hva */
>> +		pmd = mmu_memory_cache_alloc(cache);
>> +		pud_populate(NULL, pud, pmd);
>> +		pmd += pmd_index(addr);
>> +		get_page(virt_to_page(pud));
>> +	} else
>> +		pmd = pmd_offset(pud, addr);
>> +
>> +	/* Create 2nd stage page table mapping - Level 2 */
>> +	if (pmd_none(*pmd)) {
>> +		if (!cache)
>> +			return; /* ignore calls from kvm_set_spte_hva */
>> +		pte = mmu_memory_cache_alloc(cache);
>> +		clean_pte_table(pte);
>> +		pmd_populate_kernel(NULL, pmd, pte);
>> +		pte += pte_index(addr);
>> +		get_page(virt_to_page(pmd));
>> +	} else
>> +		pte = pte_offset_kernel(pmd, addr);
>> +
>> +	/* Create 2nd stage page table mapping - Level 3 */
>> +	old_pte = *pte;
>> +	set_pte_ext(pte, *new_pte, 0);
>> +	if (pte_present(old_pte))
>> +		__kvm_tlb_flush_vmid(kvm);
>> +	else
>> +		get_page(virt_to_page(pte));
>> +}
>
>
> I'm not sure about the 3-level page table, but isn't it necessary to
> clean the page table for the 2nd level as well? There are two
> mmu_memory_cache_alloc calls: one is followed by clean_pte_table and
> the other isn't.

hmm, it probably is - I couldn't really find the common case where
this is done in the kernel normally (except for some custom loop in
ioremap and idmap), but I added the fix appended at the end of this
mail.

> And why do you ignore calls from kvm_set_spte_hva? It is supposed to
> happen when the host moves the page, right? Then do you ignore the
> case because it can be handled later, when the fault actually
> happens? Is there any other reason that I missed?

kvm_set_spte_hva tells us that a page at some IPA is going to be
backed by another physical page, which means we must adjust the stage
2 mapping. However, if we don't have that page mapped in the stage 2
page table, we don't need to do anything, and certainly don't want to
start allocating unnecessary level 2 and level 3 page tables.

Thanks!
-Christoffer

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 5394a52..f11ba27f 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -430,6 +430,7 @@ static void stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 		if (!cache)
 			return; /* ignore calls from kvm_set_spte_hva */
 		pmd = mmu_memory_cache_alloc(cache);
+		clean_dcache_area(pmd, PTRS_PER_PMD * sizeof(pmd_t));
 		pud_populate(NULL, pud, pmd);
 		pmd += pmd_index(addr);
 		get_page(virt_to_page(pud));
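[For readers following the thread: the lazy-allocation policy discussed
above - walk the table, allocate a missing intermediate level only when a
memory cache was supplied, and silently ignore the cache == NULL
kvm_set_spte_hva case when the address is unmapped - can be modelled
outside the kernel. The sketch below is illustrative only: the tiny
two-level table, `struct table_cache`, and `model_set_pte` are made-up
names standing in for the real stage-2 tables, kvm_mmu_memory_cache and
stage2_set_pte, not kernel APIs.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define TBL_ENTRIES 4	/* tiny tables keep the model readable */

/* A toy two-level stage-2 table: level 1 points at level-2 tables,
 * level 2 holds the leaf entries (this model's "ptes"). */
struct s2_table {
	struct s2_table *next[TBL_ENTRIES];	/* level-1 slots */
	uint64_t pte[TBL_ENTRIES];		/* level-2 (leaf) slots */
};

/* Stands in for kvm_mmu_memory_cache: a pre-filled pool, so the walker
 * itself never has to call an allocator that might sleep or fail. */
struct table_cache {
	struct s2_table *pool;
	int avail;
};

/* Model of the policy in stage2_set_pte: a missing intermediate table
 * is allocated only when a cache is supplied; with cache == NULL (the
 * kvm_set_spte_hva path) an unmapped address is a no-op.
 * Returns 1 if the entry was written, 0 if the call was ignored. */
static int model_set_pte(struct s2_table *root, struct table_cache *cache,
			 unsigned int idx1, unsigned int idx2, uint64_t val)
{
	if (!root->next[idx1]) {
		if (!cache)
			return 0;	/* unmapped and no cache: ignore */
		assert(cache->avail > 0);
		root->next[idx1] = &cache->pool[--cache->avail];
		memset(root->next[idx1], 0, sizeof(struct s2_table));
	}
	root->next[idx1]->pte[idx2] = val;
	return 1;
}
```

With a cache, the first write populates the missing level-1 slot and then
installs the leaf entry; without one, a write to an unmapped address
changes nothing, while a write to an already-mapped address still goes
through - mirroring why the kvm_set_spte_hva caller may be ignored for
unmapped IPAs but must still update existing mappings.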