From patchwork Wed Sep 30 17:51:29 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11810011
From: Christoph Hellwig
To: Andrew Morton
Cc: Peter Zijlstra, Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
    Jani Nikula, Joonas Lahtinen, Tvrtko Ursulin, Chris Wilson,
    Matthew Auld, Rodrigo Vivi, Minchan Kim, Matthew Wilcox, Nitin Gupta,
    x86@kernel.org, xen-devel@lists.xenproject.org,
    linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, linux-mm@kvack.org, Tvrtko Ursulin
Subject: [PATCH 06/10] drm/i915: use vmap in shmem_pin_map
Date: Wed, 30 Sep 2020 19:51:29 +0200
Message-Id: <20200930175133.1252382-7-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200930175133.1252382-1-hch@lst.de>
References: <20200930175133.1252382-1-hch@lst.de>
MIME-Version: 1.0

shmem_pin_map somewhat awkwardly reimplements vmap using alloc_vm_area
and manual pte setup.  The only practical difference is that
alloc_vm_area prefaults the vmalloc area PTEs, which doesn't seem to be
required here (and could be added to vmap with a flag if it were).

Switch to vmap, and use vfree to free both the vmalloc mapping and the
page array, as well as to drop the reference on each page.

Signed-off-by: Christoph Hellwig
Reviewed-by: Tvrtko Ursulin
---
 drivers/gpu/drm/i915/gt/shmem_utils.c | 76 +++++++--------------------
 1 file changed, 18 insertions(+), 58 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c
index 43c7acbdc79dea..f011ea42487e11 100644
--- a/drivers/gpu/drm/i915/gt/shmem_utils.c
+++ b/drivers/gpu/drm/i915/gt/shmem_utils.c
@@ -49,80 +49,40 @@ struct file *shmem_create_from_object(struct drm_i915_gem_object *obj)
 	return file;
 }
 
-static size_t shmem_npte(struct file *file)
-{
-	return file->f_mapping->host->i_size >> PAGE_SHIFT;
-}
-
-static void __shmem_unpin_map(struct file *file, void *ptr, size_t n_pte)
-{
-	unsigned long pfn;
-
-	vunmap(ptr);
-
-	for (pfn = 0; pfn < n_pte; pfn++) {
-		struct page *page;
-
-		page = shmem_read_mapping_page_gfp(file->f_mapping, pfn,
-						   GFP_KERNEL);
-		if (!WARN_ON(IS_ERR(page))) {
-			put_page(page);
-			put_page(page);
-		}
-	}
-}
-
 void *shmem_pin_map(struct file *file)
 {
-	const size_t n_pte = shmem_npte(file);
-	pte_t *stack[32], **ptes, **mem;
-	struct vm_struct *area;
-	unsigned long pfn;
-
-	mem = stack;
-	if (n_pte > ARRAY_SIZE(stack)) {
-		mem = kvmalloc_array(n_pte, sizeof(*mem), GFP_KERNEL);
-		if (!mem)
-			return NULL;
-	}
+	struct page **pages;
+	size_t n_pages, i;
+	void *vaddr;
 
-	area = alloc_vm_area(n_pte << PAGE_SHIFT, mem);
-	if (!area) {
-		if (mem != stack)
-			kvfree(mem);
+	n_pages = file->f_mapping->host->i_size >> PAGE_SHIFT;
+	pages = kvmalloc_array(n_pages, sizeof(*pages), GFP_KERNEL);
+	if (!pages)
 		return NULL;
-	}
 
-	ptes = mem;
-	for (pfn = 0; pfn < n_pte; pfn++) {
-		struct page *page;
-
-		page = shmem_read_mapping_page_gfp(file->f_mapping, pfn,
-						   GFP_KERNEL);
-		if (IS_ERR(page))
+	for (i = 0; i < n_pages; i++) {
+		pages[i] = shmem_read_mapping_page_gfp(file->f_mapping, i,
+						       GFP_KERNEL);
+		if (IS_ERR(pages[i]))
 			goto err_page;
-
-		**ptes++ = mk_pte(page, PAGE_KERNEL);
 	}
 
-	if (mem != stack)
-		kvfree(mem);
-
+	vaddr = vmap(pages, n_pages, VM_MAP_PUT_PAGES, PAGE_KERNEL);
+	if (!vaddr)
+		goto err_page;
 	mapping_set_unevictable(file->f_mapping);
-	return area->addr;
-
+	return vaddr;
 err_page:
-	if (mem != stack)
-		kvfree(mem);
-
-	__shmem_unpin_map(file, area->addr, pfn);
+	while (i--)
+		put_page(pages[i]);
+	kvfree(pages);
 	return NULL;
 }
 
 void shmem_unpin_map(struct file *file, void *ptr)
 {
 	mapping_clear_unevictable(file->f_mapping);
-	__shmem_unpin_map(file, ptr, shmem_npte(file));
+	vfree(ptr);
 }
 
 static int __shmem_rw(struct file *file, loff_t off,
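
For reference, the ownership contract the new code relies on can be
exercised outside of i915.  The sketch below is not part of the patch:
map_pages_demo() is a hypothetical helper, and it assumes the
VM_MAP_PUT_PAGES flag added earlier in this series.  On success a
single vfree() on the returned address unmaps the range, drops one
reference per page, and kvfree()s the pages[] array; if vmap() fails,
cleanup remains the caller's job.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

static void *map_pages_demo(size_t n_pages)
{
	struct page **pages;
	void *vaddr;
	size_t i;

	/* The array must be kvfree()-able: vfree() will free it for us. */
	pages = kvmalloc_array(n_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	/* Take one reference per page; here via plain page allocation. */
	for (i = 0; i < n_pages; i++) {
		pages[i] = alloc_page(GFP_KERNEL);
		if (!pages[i])
			goto err_page;
	}

	/*
	 * On success vmap() takes ownership of the page references and
	 * of the pages[] array itself; vfree(vaddr) tears it all down.
	 */
	vaddr = vmap(pages, n_pages, VM_MAP_PUT_PAGES, PAGE_KERNEL);
	if (!vaddr)
		goto err_page;
	return vaddr;

err_page:
	/* vmap() did not take ownership, so undo everything by hand. */
	while (i--)
		__free_page(pages[i]);
	kvfree(pages);
	return NULL;
}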