From patchwork Fri Sep 18 16:37:19 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11785425
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton
Cc: Peter Zijlstra, Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
    Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Minchan Kim, Nitin Gupta,
    x86@kernel.org, xen-devel@lists.xenproject.org,
    linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Subject: [PATCH 1/6] zsmalloc: switch from alloc_vm_area to get_vm_area
Date: Fri, 18 Sep 2020 18:37:19 +0200
Message-Id: <20200918163724.2511-2-hch@lst.de>
In-Reply-To: <20200918163724.2511-1-hch@lst.de>
References: <20200918163724.2511-1-hch@lst.de>

There is no obvious reason why zsmalloc needs to pre-fault the PTEs,
given that it later uses map_kernel_range to map the pages, just like
vmap() does.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 mm/zsmalloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index c36fdff9a37131..3e4fe3259612fd 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1122,7 +1122,7 @@ static inline int __zs_cpu_up(struct mapping_area *area)
 	 */
 	if (area->vm)
 		return 0;
-	area->vm = alloc_vm_area(PAGE_SIZE * 2, NULL);
+	area->vm = get_vm_area(PAGE_SIZE * 2, 0);
 	if (!area->vm)
 		return -ENOMEM;
 	return 0;
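
For context, a minimal sketch (not part of the patch) of the pattern the
commit message refers to: the virtual range is only reserved once per
CPU, and the PTEs are populated later when an object is actually mapped,
so there is nothing to pre-fault up front. map_kernel_range() is the
vmalloc helper vmap() itself used at the time; my_map_object() and the
fixed two-page size are illustrative assumptions, not zsmalloc's exact
code.

static int my_cpu_up(struct mapping_area *area)
{
	/* Reserve two pages of kernel virtual address space, no PTEs yet. */
	area->vm = get_vm_area(PAGE_SIZE * 2, 0);
	if (!area->vm)
		return -ENOMEM;
	return 0;
}

static void *my_map_object(struct mapping_area *area, struct page *pages[2])
{
	unsigned long addr = (unsigned long)area->vm->addr;

	/*
	 * map_kernel_range() allocates any missing page tables itself,
	 * which is what makes the old alloc_vm_area() pre-fault redundant.
	 */
	if (map_kernel_range(addr, PAGE_SIZE * 2, PAGE_KERNEL, pages) < 0)
		return NULL;
	return (void *)addr;
}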

From patchwork Fri Sep 18 16:37:20 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11785437
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton
Cc: Peter Zijlstra, Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
    Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Minchan Kim, Nitin Gupta,
    x86@kernel.org, xen-devel@lists.xenproject.org,
    linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Subject: [PATCH 2/6] mm: add a vmap_pfn function
Date: Fri, 18 Sep 2020 18:37:20 +0200
Message-Id: <20200918163724.2511-3-hch@lst.de>
In-Reply-To: <20200918163724.2511-1-hch@lst.de>
References: <20200918163724.2511-1-hch@lst.de>

Add a proper helper to remap PFNs into kernel virtual space so that
drivers don't have to abuse alloc_vm_area and open-coded PTE
manipulation for it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/vmalloc.h |  1 +
 mm/Kconfig              |  3 +++
 mm/vmalloc.c            | 45 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 49 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 0221f852a7e1a3..8ecd92a947ee0c 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -121,6 +121,7 @@ extern void vfree_atomic(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
 			unsigned long flags, pgprot_t prot);
+void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot);
 extern void vunmap(const void *addr);
 
 extern int remap_vmalloc_range_partial(struct vm_area_struct *vma,
diff --git a/mm/Kconfig b/mm/Kconfig
index 6c974888f86f97..6fa7ba1199eb1e 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -815,6 +815,9 @@ config DEVICE_PRIVATE
 	  memory; i.e., memory that is only accessible from the device (or
 	  group of devices). You likely also want to select HMM_MIRROR.
 
+config VMAP_PFN
+	bool
+
 config FRAME_VECTOR
 	bool
 
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index be4724b916b3e7..59f2afcf26c312 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2407,6 +2407,51 @@ void *vmap(struct page **pages, unsigned int count,
 }
 EXPORT_SYMBOL(vmap);
 
+#ifdef CONFIG_VMAP_PFN
+struct vmap_pfn_data {
+	unsigned long	*pfns;
+	pgprot_t	prot;
+	unsigned int	idx;
+};
+
+static int vmap_pfn_apply(pte_t *pte, unsigned long addr, void *private)
+{
+	struct vmap_pfn_data *data = private;
+
+	if (WARN_ON_ONCE(pfn_valid(data->pfns[data->idx])))
+		return -EINVAL;
+	*pte = pte_mkspecial(pfn_pte(data->pfns[data->idx++], data->prot));
+	return 0;
+}
+
+/**
+ * vmap_pfn - map an array of PFNs into virtually contiguous space
+ * @pfns: array of PFNs
+ * @count: number of pages to map
+ * @prot: page protection for the mapping
+ *
+ * Maps @count PFNs from @pfns into contiguous kernel virtual space and returns
+ * the start address of the mapping.
+ */
+void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot)
+{
+	struct vmap_pfn_data data = { .pfns = pfns, .prot = pgprot_nx(prot) };
+	struct vm_struct *area;
+
+	area = get_vm_area_caller(count * PAGE_SIZE, VM_IOREMAP,
+			__builtin_return_address(0));
+	if (!area)
+		return NULL;
+	if (apply_to_page_range(&init_mm, (unsigned long)area->addr,
+			count * PAGE_SIZE, vmap_pfn_apply, &data)) {
+		free_vm_area(area);
+		return NULL;
+	}
+	return area->addr;
+}
+EXPORT_SYMBOL_GPL(vmap_pfn);
+#endif /* CONFIG_VMAP_PFN */
+
 static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 				 pgprot_t prot, int node)
 {
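
A hedged usage sketch for the new helper (not part of the patch):
mapping a stretch of device memory write-combined. bar_phys and count
are made-up inputs; the two things to note are that every PFN must lack
a struct page (vmap_pfn() warns once and fails on pfn_valid() PFNs, so
it cannot be used for ordinary RAM) and that the PFN array is only read
during the call, so it can be freed immediately afterwards. The mapping
is torn down with plain vunmap(), as with vmap().

static void *map_device_pages_wc(phys_addr_t bar_phys, unsigned int count)
{
	unsigned long *pfns;
	unsigned int i;
	void *vaddr;

	pfns = kmalloc_array(count, sizeof(*pfns), GFP_KERNEL);
	if (!pfns)
		return NULL;
	for (i = 0; i < count; i++)
		pfns[i] = (bar_phys >> PAGE_SHIFT) + i;

	/* Fails if any PFN is backed by a struct page. */
	vaddr = vmap_pfn(pfns, count, pgprot_writecombine(PAGE_KERNEL));
	kfree(pfns);	/* vmap_pfn() keeps no reference to the array */
	return vaddr;
}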

From patchwork Fri Sep 18 16:37:21 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11785447
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton
Cc: Peter Zijlstra, Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
    Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Minchan Kim, Nitin Gupta,
    x86@kernel.org, xen-devel@lists.xenproject.org,
    linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Subject: [PATCH 3/6] drm/i915: use vmap in shmem_pin_map
Date: Fri, 18 Sep 2020 18:37:21 +0200
Message-Id: <20200918163724.2511-4-hch@lst.de>
In-Reply-To: <20200918163724.2511-1-hch@lst.de>
References: <20200918163724.2511-1-hch@lst.de>

shmem_pin_map somewhat awkwardly reimplements vmap using alloc_vm_area
and manual pte setup. The only practical difference is that
alloc_vm_area prefaults the vmalloc area PTEs, which doesn't seem to be
required here (and could be added to vmap using a flag if actually
required).

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/gpu/drm/i915/gt/shmem_utils.c | 90 +++++++++++----------------
 1 file changed, 38 insertions(+), 52 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c
index 43c7acbdc79dea..77410091597f19 100644
--- a/drivers/gpu/drm/i915/gt/shmem_utils.c
+++ b/drivers/gpu/drm/i915/gt/shmem_utils.c
@@ -49,80 +49,66 @@ struct file *shmem_create_from_object(struct drm_i915_gem_object *obj)
 	return file;
 }
 
-static size_t shmem_npte(struct file *file)
+static size_t shmem_npages(struct file *file)
 {
 	return file->f_mapping->host->i_size >> PAGE_SHIFT;
 }
 
-static void __shmem_unpin_map(struct file *file, void *ptr, size_t n_pte)
-{
-	unsigned long pfn;
-
-	vunmap(ptr);
-
-	for (pfn = 0; pfn < n_pte; pfn++) {
-		struct page *page;
-
-		page = shmem_read_mapping_page_gfp(file->f_mapping, pfn,
-						   GFP_KERNEL);
-		if (!WARN_ON(IS_ERR(page))) {
-			put_page(page);
-			put_page(page);
-		}
-	}
-}
-
 void *shmem_pin_map(struct file *file)
 {
-	const size_t n_pte = shmem_npte(file);
-	pte_t *stack[32], **ptes, **mem;
-	struct vm_struct *area;
-	unsigned long pfn;
-
-	mem = stack;
-	if (n_pte > ARRAY_SIZE(stack)) {
-		mem = kvmalloc_array(n_pte, sizeof(*mem), GFP_KERNEL);
-		if (!mem)
+	const size_t n_pages = shmem_npages(file);
+	struct page **pages, *stack[32];
+	void *vaddr;
+	long i;
+
+	pages = stack;
+	if (n_pages > ARRAY_SIZE(stack)) {
+		pages = kvmalloc_array(n_pages, sizeof(*pages), GFP_KERNEL);
+		if (!pages)
 			return NULL;
 	}
 
-	area = alloc_vm_area(n_pte << PAGE_SHIFT, mem);
-	if (!area) {
-		if (mem != stack)
-			kvfree(mem);
-		return NULL;
-	}
-
-	ptes = mem;
-	for (pfn = 0; pfn < n_pte; pfn++) {
-		struct page *page;
-
-		page = shmem_read_mapping_page_gfp(file->f_mapping, pfn,
-						   GFP_KERNEL);
-		if (IS_ERR(page))
+	for (i = 0; i < n_pages; i++) {
+		pages[i] = shmem_read_mapping_page_gfp(file->f_mapping, i,
+						       GFP_KERNEL);
+		if (IS_ERR(pages[i]))
 			goto err_page;
-
-		**ptes++ = mk_pte(page, PAGE_KERNEL);
 	}
 
-	if (mem != stack)
-		kvfree(mem);
+	vaddr = vmap(pages, n_pages, 0, PAGE_KERNEL);
+	if (!vaddr)
+		goto err_page;
+	if (pages != stack)
+		kvfree(pages);
 	mapping_set_unevictable(file->f_mapping);
-	return area->addr;
+	return vaddr;
 
 err_page:
-	if (mem != stack)
-		kvfree(mem);
-
-	__shmem_unpin_map(file, area->addr, pfn);
+	while (--i >= 0)
+		put_page(pages[i]);
+	if (pages != stack)
+		kvfree(pages);
 	return NULL;
 }
 
 void shmem_unpin_map(struct file *file, void *ptr)
 {
+	long i = shmem_npages(file);
+
 	mapping_clear_unevictable(file->f_mapping);
-	__shmem_unpin_map(file, ptr, shmem_npte(file));
+	vunmap(ptr);
+
+	for (i = 0; i < shmem_npages(file); i++) {
+		struct page *page;
+
+		page = shmem_read_mapping_page_gfp(file->f_mapping, i,
+						   GFP_KERNEL);
+		if (!WARN_ON(IS_ERR(page))) {
+			put_page(page);
+			put_page(page);
+		}
+	}
 }
 
 static int __shmem_rw(struct file *file, loff_t off,
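
Reduced to its essence, the new pin path is "collect the pages with
their references held, then vmap() them"; below is a sketch of that
shape with the error unwinding trimmed (illustrative, not the i915
code). On unpin, shmem_read_mapping_page_gfp() is called again purely
to look each page up, which is why two put_page() calls follow: one
drops the lookup reference just taken, the other drops the reference
held since pinning.

static void *pin_and_map(struct address_space *mapping, pgoff_t n,
			 struct page **pages)
{
	pgoff_t i;

	for (i = 0; i < n; i++) {
		/* Takes a reference on each page; kept while pinned. */
		pages[i] = shmem_read_mapping_page_gfp(mapping, i, GFP_KERNEL);
		if (IS_ERR(pages[i])) {
			while (i--)
				put_page(pages[i]);
			return NULL;
		}
	}
	/* flags == 0: vmap() adds VM_MAP internally. */
	return vmap(pages, n, 0, PAGE_KERNEL);
}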

From patchwork Fri Sep 18 16:37:22 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11785459
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton
Cc: Peter Zijlstra, Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
    Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Minchan Kim, Nitin Gupta,
    x86@kernel.org, xen-devel@lists.xenproject.org,
    linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Subject: [PATCH 4/6] drm/i915: use vmap in i915_gem_object_map
Date: Fri, 18 Sep 2020 18:37:22 +0200
Message-Id: <20200918163724.2511-5-hch@lst.de>
In-Reply-To: <20200918163724.2511-1-hch@lst.de>
References: <20200918163724.2511-1-hch@lst.de>

i915_gem_object_map implements fairly low-level vmap functionality in
a driver. Split it into two helpers, one for remapping kernel memory
which can use vmap, and one for I/O memory that uses vmap_pfn.

The only practical difference is that alloc_vm_area prefaults the
vmalloc area PTEs, which doesn't seem to be required here for the
kernel memory case (and could be added to vmap using a flag if
actually required).

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/gpu/drm/i915/Kconfig              |   1 +
 drivers/gpu/drm/i915/gem/i915_gem_pages.c | 101 ++++++++++------------
 2 files changed, 47 insertions(+), 55 deletions(-)

diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig
index 9afa5c4a6bf006..1e1cb245fca778 100644
--- a/drivers/gpu/drm/i915/Kconfig
+++ b/drivers/gpu/drm/i915/Kconfig
@@ -25,6 +25,7 @@ config DRM_I915
 	select CRC32
 	select SND_HDA_I915 if SND_HDA_CORE
 	select CEC_CORE if CEC_NOTIFIER
+	select VMAP_PFN
 	help
 	  Choose this option if you have a system that has "Intel Graphics
 	  Media Accelerator" or "HD Graphics" integrated graphics,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index e8a083743e0927..90029ea83aede9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -234,50 +234,24 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
 	return err;
 }
 
-static inline pte_t iomap_pte(resource_size_t base,
-			      dma_addr_t offset,
-			      pgprot_t prot)
-{
-	return pte_mkspecial(pfn_pte((base + offset) >> PAGE_SHIFT, prot));
-}
-
 /* The 'mapping' part of i915_gem_object_pin_map() below */
-static void *i915_gem_object_map(struct drm_i915_gem_object *obj,
+static void *i915_gem_object_map_page(struct drm_i915_gem_object *obj,
 		enum i915_map_type type)
 {
-	unsigned long n_pte = obj->base.size >> PAGE_SHIFT;
-	struct sg_table *sgt = obj->mm.pages;
-	pte_t *stack[32], **mem;
-	struct vm_struct *area;
+	unsigned long n_pages = obj->base.size >> PAGE_SHIFT, i = 0;
+	struct page *stack[32], **pages = stack, *page;
+	struct sgt_iter iter;
 	pgprot_t pgprot;
-
-	if (!i915_gem_object_has_struct_page(obj) && type != I915_MAP_WC)
-		return NULL;
-
-	/* A single page can always be kmapped */
-	if (n_pte == 1 && type == I915_MAP_WB)
-		return kmap(sg_page(sgt->sgl));
-
-	mem = stack;
-	if (n_pte > ARRAY_SIZE(stack)) {
-		/* Too big for stack -- allocate temporary array instead */
-		mem = kvmalloc_array(n_pte, sizeof(*mem), GFP_KERNEL);
-		if (!mem)
-			return NULL;
-	}
-
-	area = alloc_vm_area(obj->base.size, mem);
-	if (!area) {
-		if (mem != stack)
-			kvfree(mem);
-		return NULL;
-	}
+	void *vaddr;
 
 	switch (type) {
 	default:
 		MISSING_CASE(type);
 		fallthrough;	/* to use PAGE_KERNEL anyway */
 	case I915_MAP_WB:
+		/* A single page can always be kmapped */
+		if (n_pages == 1)
+			return kmap(sg_page(obj->mm.pages->sgl));
 		pgprot = PAGE_KERNEL;
 		break;
 	case I915_MAP_WC:
@@ -285,30 +259,44 @@ static void *i915_gem_object_map(struct drm_i915_gem_object *obj,
 		break;
 	}
 
-	if (i915_gem_object_has_struct_page(obj)) {
-		struct sgt_iter iter;
-		struct page *page;
-		pte_t **ptes = mem;
-
-		for_each_sgt_page(page, iter, sgt)
-			**ptes++ = mk_pte(page, pgprot);
-	} else {
-		resource_size_t iomap;
-		struct sgt_iter iter;
-		pte_t **ptes = mem;
-		dma_addr_t addr;
+	if (n_pages > ARRAY_SIZE(stack)) {
+		/* Too big for stack -- allocate temporary array instead */
+		pages = kvmalloc_array(n_pages, sizeof(*pages), GFP_KERNEL);
+		if (!pages)
+			return NULL;
+	}
 
-		iomap = obj->mm.region->iomap.base;
-		iomap -= obj->mm.region->region.start;
+	for_each_sgt_page(page, iter, obj->mm.pages)
+		pages[i++] = page;
+	vaddr = vmap(pages, n_pages, 0, pgprot);
+	if (pages != stack)
+		kvfree(pages);
+	return vaddr;
+}
 
-		for_each_sgt_daddr(addr, iter, sgt)
-			**ptes++ = iomap_pte(iomap, addr, pgprot);
+static void *i915_gem_object_map_pfn(struct drm_i915_gem_object *obj)
+{
+	resource_size_t iomap = obj->mm.region->iomap.base -
+		obj->mm.region->region.start;
+	unsigned long n_pfn = obj->base.size >> PAGE_SHIFT;
+	unsigned long stack[32], *pfns = stack, i = 0;
+	struct sgt_iter iter;
+	dma_addr_t addr;
+	void *vaddr;
+
+	if (n_pfn > ARRAY_SIZE(stack)) {
+		/* Too big for stack -- allocate temporary array instead */
+		pfns = kvmalloc_array(n_pfn, sizeof(*pfns), GFP_KERNEL);
+		if (!pfns)
+			return NULL;
 	}
 
-	if (mem != stack)
-		kvfree(mem);
-
-	return area->addr;
+	for_each_sgt_daddr(addr, iter, obj->mm.pages)
+		pfns[i++] = (iomap + addr) >> PAGE_SHIFT;
+	vaddr = vmap_pfn(pfns, n_pfn, pgprot_writecombine(PAGE_KERNEL_IO));
+	if (pfns != stack)
+		kvfree(pfns);
+	return vaddr;
 }
 
 /* get, pin, and map the pages of the object into kernel space */
@@ -360,7 +348,10 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 	}
 
 	if (!ptr) {
-		ptr = i915_gem_object_map(obj, type);
+		if (i915_gem_object_has_struct_page(obj))
+			ptr = i915_gem_object_map_page(obj, type);
+		else if (type == I915_MAP_WC)
+			ptr = i915_gem_object_map_pfn(obj);
 		if (!ptr) {
 			err = -ENOMEM;
 			goto err_unpin;

From patchwork Fri Sep 18 16:37:23 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11785489
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton
Cc: Peter Zijlstra, Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
    Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Minchan Kim, Nitin Gupta,
    x86@kernel.org, xen-devel@lists.xenproject.org,
    linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Subject: [PATCH 5/6] xen/xenbus: use apply_to_page_range directly in xenbus_map_ring_pv
Date: Fri, 18 Sep 2020 18:37:23 +0200
Message-Id: <20200918163724.2511-6-hch@lst.de>
In-Reply-To: <20200918163724.2511-1-hch@lst.de>
References: <20200918163724.2511-1-hch@lst.de>

Replacing alloc_vm_area with get_vm_area + apply_to_page_range allows
filling in the phys_addr values directly instead of doing another loop
over all addresses.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/xen/xenbus/xenbus_client.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index 2690318ad50f48..fd80e318b99cc7 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -73,16 +73,13 @@ struct map_ring_valloc {
 	struct xenbus_map_node *node;
 
 	/* Why do we need two arrays? See comment of __xenbus_map_ring */
-	union {
-		unsigned long addrs[XENBUS_MAX_RING_GRANTS];
-		pte_t *ptes[XENBUS_MAX_RING_GRANTS];
-	};
+	unsigned long addrs[XENBUS_MAX_RING_GRANTS];
 	phys_addr_t phys_addrs[XENBUS_MAX_RING_GRANTS];
 
 	struct gnttab_map_grant_ref map[XENBUS_MAX_RING_GRANTS];
 	struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];
 
-	unsigned int idx;	/* HVM only. */
+	unsigned int idx;
 };
 
 static DEFINE_SPINLOCK(xenbus_valloc_lock);
@@ -686,6 +683,14 @@ int xenbus_unmap_ring_vfree(struct xenbus_device *dev, void *vaddr)
 EXPORT_SYMBOL_GPL(xenbus_unmap_ring_vfree);
 
 #ifdef CONFIG_XEN_PV
+static int map_ring_apply(pte_t *pte, unsigned long addr, void *data)
+{
+	struct map_ring_valloc *info = data;
+
+	info->phys_addrs[info->idx++] = arbitrary_virt_to_machine(pte).maddr;
+	return 0;
+}
+
 static int xenbus_map_ring_pv(struct xenbus_device *dev,
 			      struct map_ring_valloc *info,
 			      grant_ref_t *gnt_refs,
@@ -694,18 +699,15 @@ static int xenbus_map_ring_pv(struct xenbus_device *dev,
 {
 	struct xenbus_map_node *node = info->node;
 	struct vm_struct *area;
-	int err = GNTST_okay;
-	int i;
-	bool leaked;
+	bool leaked = false;
+	int err = -ENOMEM;
 
-	area = alloc_vm_area(XEN_PAGE_SIZE * nr_grefs, info->ptes);
+	area = get_vm_area(XEN_PAGE_SIZE * nr_grefs, VM_IOREMAP);
 	if (!area)
 		return -ENOMEM;
-
-	for (i = 0; i < nr_grefs; i++)
-		info->phys_addrs[i] =
-			arbitrary_virt_to_machine(info->ptes[i]).maddr;
-
+	if (apply_to_page_range(&init_mm, (unsigned long)area->addr,
+			XEN_PAGE_SIZE * nr_grefs, map_ring_apply, info))
+		goto failed;
 	err = __xenbus_map_ring(dev, gnt_refs, nr_grefs, node->handles,
 				info, GNTMAP_host_map | GNTMAP_contains_pte,
 				&leaked);
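
For readers unfamiliar with the helper this series leans on:
apply_to_page_range() allocates any missing intermediate page tables
for the range and then invokes the callback once per PTE slot, in
address order, aborting on a non-zero return. A generic sketch of that
contract with made-up names (count_ptes/probe_area are illustrative):

static int count_ptes(pte_t *pte, unsigned long addr, void *data)
{
	(*(unsigned int *)data)++;	/* just count the slots */
	return 0;			/* non-zero aborts the walk */
}

static int probe_area(struct vm_struct *area, unsigned long size)
{
	unsigned int n = 0;

	/*
	 * Constructing the page tables as a side effect is exactly what
	 * alloc_vm_area() relied on; the callers in this series use the
	 * callback to record PTE pointers or machine addresses instead.
	 */
	return apply_to_page_range(&init_mm, (unsigned long)area->addr,
				   size, count_ptes, &n);
}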

From patchwork Fri Sep 18 16:37:24 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11785503
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton
Cc: Peter Zijlstra, Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
    Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Minchan Kim, Nitin Gupta,
    x86@kernel.org, xen-devel@lists.xenproject.org,
    linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Subject: [PATCH 6/6] x86/xen: open code alloc_vm_area in arch_gnttab_valloc
Date: Fri, 18 Sep 2020 18:37:24 +0200
Message-Id: <20200918163724.2511-7-hch@lst.de>
In-Reply-To: <20200918163724.2511-1-hch@lst.de>
References: <20200918163724.2511-1-hch@lst.de>

Open code alloc_vm_area in the last remaining caller.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/x86/xen/grant-table.c | 27 +++++++++++++++------
 include/linux/vmalloc.h    |  5 +---
 mm/nommu.c                 |  7 ------
 mm/vmalloc.c               | 48 --------------------------------------
 4 files changed, 21 insertions(+), 66 deletions(-)

diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
index 4988e19598c8a5..ccb377c07c651f 100644
--- a/arch/x86/xen/grant-table.c
+++ b/arch/x86/xen/grant-table.c
@@ -90,19 +90,32 @@ void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
 	}
 }
 
+static int gnttab_apply(pte_t *pte, unsigned long addr, void *data)
+{
+	pte_t ***p = data;
+
+	**p = pte;
+	(*p)++;
+	return 0;
+}
+
 static int arch_gnttab_valloc(struct gnttab_vm_area *area, unsigned nr_frames)
 {
 	area->ptes = kmalloc_array(nr_frames, sizeof(*area->ptes), GFP_KERNEL);
 	if (area->ptes == NULL)
 		return -ENOMEM;
-
-	area->area = alloc_vm_area(PAGE_SIZE * nr_frames, area->ptes);
-	if (area->area == NULL) {
-		kfree(area->ptes);
-		return -ENOMEM;
-	}
-
+	area->area = get_vm_area(PAGE_SIZE * nr_frames, VM_IOREMAP);
+	if (!area->area)
+		goto out_free_ptes;
+	if (apply_to_page_range(&init_mm, (unsigned long)area->area->addr,
+			PAGE_SIZE * nr_frames, gnttab_apply, &area->ptes))
+		goto out_free_vm_area;
 	return 0;
+out_free_vm_area:
+	free_vm_area(area->area);
+out_free_ptes:
+	kfree(area->ptes);
+	return -ENOMEM;
 }
 
 static void arch_gnttab_vfree(struct gnttab_vm_area *area)
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 8ecd92a947ee0c..a1a4e2f8163504 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -168,6 +168,7 @@ extern struct vm_struct *__get_vm_area_caller(unsigned long size,
 					unsigned long flags,
 					unsigned long start, unsigned long end,
 					const void *caller);
+void free_vm_area(struct vm_struct *area);
 extern struct vm_struct *remove_vm_area(const void *addr);
 extern struct vm_struct *find_vm_area(const void *addr);
 
@@ -203,10 +204,6 @@ static inline void set_vm_flush_reset_perms(void *addr)
 }
 #endif
 
-/* Allocate/destroy a 'vmalloc' VM area. */
-extern struct vm_struct *alloc_vm_area(size_t size, pte_t **ptes);
-extern void free_vm_area(struct vm_struct *area);
-
 /* for /dev/kmem */
 extern long vread(char *buf, char *addr, unsigned long count);
 extern long vwrite(char *buf, char *addr, unsigned long count);
diff --git a/mm/nommu.c b/mm/nommu.c
index 75a327149af127..9272f30e4c4726 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -354,13 +354,6 @@ void vm_unmap_aliases(void)
 }
 EXPORT_SYMBOL_GPL(vm_unmap_aliases);
 
-struct vm_struct *alloc_vm_area(size_t size, pte_t **ptes)
-{
-	BUG();
-	return NULL;
-}
-EXPORT_SYMBOL_GPL(alloc_vm_area);
-
 void free_vm_area(struct vm_struct *area)
 {
 	BUG();
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 59f2afcf26c312..9f29147deca580 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3077,54 +3077,6 @@ int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
 }
 EXPORT_SYMBOL(remap_vmalloc_range);
 
-static int f(pte_t *pte, unsigned long addr, void *data)
-{
-	pte_t ***p = data;
-
-	if (p) {
-		*(*p) = pte;
-		(*p)++;
-	}
-	return 0;
-}
-
-/**
- * alloc_vm_area - allocate a range of kernel address space
- * @size: size of the area
- * @ptes: returns the PTEs for the address space
- *
- * Returns: NULL on failure, vm_struct on success
- *
- * This function reserves a range of kernel address space, and
- * allocates pagetables to map that range. No actual mappings
- * are created.
- *
- * If @ptes is non-NULL, pointers to the PTEs (in init_mm)
- * allocated for the VM area are returned.
- */
-struct vm_struct *alloc_vm_area(size_t size, pte_t **ptes)
-{
-	struct vm_struct *area;
-
-	area = get_vm_area_caller(size, VM_IOREMAP,
-				__builtin_return_address(0));
-	if (area == NULL)
-		return NULL;
-
-	/*
-	 * This ensures that page tables are constructed for this region
-	 * of kernel virtual address space and mapped into init_mm.
-	 */
-	if (apply_to_page_range(&init_mm, (unsigned long)area->addr,
-				size, f, ptes ? &ptes : NULL)) {
-		free_vm_area(area);
-		return NULL;
-	}
-
-	return area;
-}
-EXPORT_SYMBOL_GPL(alloc_vm_area);
-
 void free_vm_area(struct vm_struct *area)
 {
 	struct vm_struct *ret;