From patchwork Tue Jul 16 00:53:59 2013
X-Patchwork-Submitter: Alexey Kardashevskiy
X-Patchwork-Id: 2827849
From: Alexey Kardashevskiy
To: linuxppc-dev@lists.ozlabs.org
Cc: Alexey Kardashevskiy, David Gibson, Benjamin Herrenschmidt, Paul Mackerras,
	Alexander Graf, Alex Williamson, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, kvm-ppc@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 04/10] powerpc: Prepare to support kernel handling of IOMMU map/unmap
Date: Tue, 16 Jul 2013 10:53:59 +1000
Message-Id: <1373936045-22653-5-git-send-email-aik@ozlabs.ru>
In-Reply-To: <1373936045-22653-1-git-send-email-aik@ozlabs.ru>
References: <1373936045-22653-1-git-send-email-aik@ozlabs.ru>
X-Mailer: git-send-email 1.8.3.2

The current VFIO-on-POWER implementation supports only user mode driven
mapping, i.e. QEMU sends requests to map/unmap pages. However, this
approach is slow, so we want to move the map/unmap handling into KVM.
Since H_PUT_TCE can be extremely performance sensitive (especially with
network adapters, where each packet needs to be mapped/unmapped), we chose
to implement it as a "fast" hypercall handled directly in "real mode" (the
processor is still in the guest context but the MMU is off).

To be able to do that, we need some facility to access the struct page
reference count from that real mode environment, as things like the
sparsemem vmemmap mappings are not accessible there.

This adds an API to increment/decrement the page reference count, as the
get_user_pages() API used for user mode mapping does not work in real mode.

CONFIG_SPARSEMEM_VMEMMAP and CONFIG_FLATMEM are supported.

Cc: linux-mm@kvack.org
Reviewed-by: Paul Mackerras
Signed-off-by: Paul Mackerras
Signed-off-by: Alexey Kardashevskiy
---
Changes:
2013/07/10:
* adjusted comment (removed sentence about virtual mode)
* get_page_unless_zero replaced with atomic_inc_not_zero to minimize
  effect of a possible get_page_unless_zero() rework (if it ever happens).

2013/06/27:
* realmode_get_page() fixed to use get_page_unless_zero(). If failed,
  the call will be passed from real to virtual mode and safely handled.
* added comment to PageCompound() in include/linux/page-flags.h.

2013/05/20:
* PageTail() is replaced by PageCompound() in order to have the same checks
  for whether the page is huge in realmode_get_page() and realmode_put_page()

Signed-off-by: Alexey Kardashevskiy
---
 arch/powerpc/include/asm/pgtable-ppc64.h |  4 ++
 arch/powerpc/mm/init_64.c                | 76 +++++++++++++++++++++++++++++++-
 include/linux/page-flags.h               |  4 +-
 3 files changed, 82 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
index 46db094..aa7b169 100644
--- a/arch/powerpc/include/asm/pgtable-ppc64.h
+++ b/arch/powerpc/include/asm/pgtable-ppc64.h
@@ -394,6 +394,10 @@ static inline void mark_hpte_slot_valid(unsigned char *hpte_slot_array,
 	hpte_slot_array[index] = hidx << 4 | 0x1 << 3;
 }
 
+struct page *realmode_pfn_to_page(unsigned long pfn);
+int realmode_get_page(struct page *page);
+int realmode_put_page(struct page *page);
+
 static inline char *get_hpte_slot_array(pmd_t *pmdp)
 {
 	/*
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index d0cd9e4..dcbb806 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -300,5 +300,79 @@ void vmemmap_free(unsigned long start, unsigned long end)
 {
 }
 
-#endif /* CONFIG_SPARSEMEM_VMEMMAP */
+/*
+ * We do not have access to the sparsemem vmemmap, so we fall back to
+ * walking the list of sparsemem blocks which we already maintain for
+ * the sake of crashdump. In the long run, we might want to maintain
+ * a tree if performance of that linear walk becomes a problem.
+ *
+ * Any of the realmode_XXXX functions can fail due to:
+ * 1) As real sparsemem blocks do not lie in RAM contiguously (they
+ * are in virtual address space which is not available in real mode),
+ * the requested page struct can be split between blocks so get_page/put_page
+ * may fail.
+ * 2) When huge pages are used, the get_page/put_page API will fail
+ * in real mode as the linked addresses in the page struct are virtual
+ * too.
+ */
+struct page *realmode_pfn_to_page(unsigned long pfn)
+{
+	struct vmemmap_backing *vmem_back;
+	struct page *page;
+	unsigned long page_size = 1 << mmu_psize_defs[mmu_vmemmap_psize].shift;
+	unsigned long pg_va = (unsigned long) pfn_to_page(pfn);
+	for (vmem_back = vmemmap_list; vmem_back; vmem_back = vmem_back->list) {
+		if (pg_va < vmem_back->virt_addr)
+			continue;
+
+		/* Check that page struct is not split between real pages */
+		if ((pg_va + sizeof(struct page)) >
+				(vmem_back->virt_addr + page_size))
+			return NULL;
+
+		page = (struct page *) (vmem_back->phys + pg_va -
+				vmem_back->virt_addr);
+		return page;
+	}
+
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(realmode_pfn_to_page);
+
+#elif defined(CONFIG_FLATMEM)
+
+struct page *realmode_pfn_to_page(unsigned long pfn)
+{
+	struct page *page = pfn_to_page(pfn);
+	return page;
+}
+EXPORT_SYMBOL_GPL(realmode_pfn_to_page);
+
+#endif /* CONFIG_SPARSEMEM_VMEMMAP/CONFIG_FLATMEM */
+
+#if defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_FLATMEM)
+int realmode_get_page(struct page *page)
+{
+	if (PageCompound(page))
+		return -EAGAIN;
+
+	if (!atomic_inc_not_zero(&page->_count))
+		return -EAGAIN;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(realmode_get_page);
+
+int realmode_put_page(struct page *page)
+{
+	if (PageCompound(page))
+		return -EAGAIN;
+
+	if (!atomic_add_unless(&page->_count, -1, 1))
+		return -EAGAIN;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(realmode_put_page);
+#endif
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 6d53675..98ada58 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -329,7 +329,9 @@ static inline void set_page_writeback(struct page *page)
  * System with lots of page flags available. This allows separate
  * flags for PageHead() and PageTail() checks of compound pages so that bit
  * tests can be used in performance sensitive paths. PageCompound is
- * generally not used in hot code paths.
+ * generally not used in hot code paths except arch/powerpc/mm/init_64.c
+ * and arch/powerpc/kvm/book3s_64_vio_hv.c which use it to detect huge pages
+ * and avoid handling those in real mode.
  */
 __PAGEFLAG(Head, head) CLEARPAGEFLAG(Head, head)
 __PAGEFLAG(Tail, tail)
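
For context, below is a minimal, hypothetical sketch of how a real-mode
H_PUT_TCE handler might use the three functions this patch introduces.
Only realmode_pfn_to_page(), realmode_get_page() and realmode_put_page()
come from this patch; the example function names, the tcep/tce parameters
and the use of H_TOO_HARD as a "redo in virtual mode" return value are
illustrative assumptions, not part of the change.

/*
 * Illustrative sketch only -- not part of this patch.  A real-mode
 * handler pins the target page and bails out to the virtual-mode path
 * whenever the real-mode helpers cannot cope (struct page split across
 * sparsemem blocks, compound/huge page, zero reference count).
 */
static long example_rm_map_page(unsigned long pfn, u64 *tcep, u64 tce)
{
	struct page *page;

	/* May return NULL if the struct page straddles sparsemem blocks */
	page = realmode_pfn_to_page(pfn);
	if (!page)
		return H_TOO_HARD;	/* retry the hcall in virtual mode */

	/* Fails (-EAGAIN) for compound pages or a zero reference count */
	if (realmode_get_page(page))
		return H_TOO_HARD;

	*tcep = tce;			/* publish the DMA translation */
	return H_SUCCESS;
}

static void example_rm_unmap_page(unsigned long pfn)
{
	struct page *page = realmode_pfn_to_page(pfn);

	/* Drop the reference taken at map time; failures would likewise
	 * be handed back to the virtual-mode handler. */
	if (page)
		realmode_put_page(page);
}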