From patchwork Tue Nov 3 09:27:44 2020
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 11876819
Message-Id: <20201103095859.836711767@linutronix.de>
Date: Tue, 03 Nov 2020 10:27:44 +0100
From: Thomas Gleixner
To: LKML
Cc: Linus Torvalds, Peter Zijlstra, Paul McKenney, Christoph Hellwig,
    Sebastian Andrzej Siewior, VMware Graphics, Roland Scheidegger,
    David Airlie, Daniel Vetter, dri-devel@lists.freedesktop.org,
    Andrew Morton, linux-mm@kvack.org, Alexander Viro, Benjamin LaHaise,
    linux-fsdevel@vger.kernel.org, linux-aio@kvack.org, Chris Mason,
    Josef Bacik, David Sterba, linux-btrfs@vger.kernel.org, x86@kernel.org,
    Vineet Gupta, linux-snps-arc@lists.infradead.org, Russell King,
    Arnd Bergmann, linux-arm-kernel@lists.infradead.org,
    linux-csky@vger.kernel.org, Michal Simek, Thomas Bogendoerfer,
    linux-mips@vger.kernel.org, Nick Hu, Greentime Hu, Vincent Chen,
    Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras,
    linuxppc-dev@lists.ozlabs.org, "David S. Miller",
Miller" , sparclinux@vger.kernel.org, Chris Zankel , Max Filippov , linux-xtensa@linux-xtensa.org, Ingo Molnar , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Daniel Bristot de Oliveira , Christian Koenig , Huang Rui , Dave Airlie , Gerd Hoffmann , virtualization@lists.linux-foundation.org, spice-devel@lists.freedesktop.org, Ben Skeggs , nouveau@lists.freedesktop.org, Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , intel-gfx@lists.freedesktop.org Subject: [patch V3 32/37] drm/vmgfx: Replace kmap_atomic() References: <20201103092712.714480842@linutronix.de> MIME-Version: 1.0 Content-transfer-encoding: 8-bit Precedence: bulk List-ID: X-Mailing-List: linux-mips@vger.kernel.org There is no reason to disable pagefaults and preemption as a side effect of kmap_atomic_prot(). Use kmap_local_page_prot() instead and document the reasoning for the mapping usage with the given pgprot. Remove the NULL pointer check for the map. These functions return a valid address for valid pages and the return was bogus anyway as it would have left preemption and pagefaults disabled. Signed-off-by: Thomas Gleixner Cc: VMware Graphics Cc: Roland Scheidegger Cc: David Airlie Cc: Daniel Vetter Cc: dri-devel@lists.freedesktop.org --- V3: New patch --- drivers/gpu/drm/vmwgfx/vmwgfx_blit.c | 30 ++++++++++++------------------ 1 file changed, 12 insertions(+), 18 deletions(-) --- a/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c @@ -375,12 +375,12 @@ static int vmw_bo_cpu_blit_line(struct v copy_size = min_t(u32, copy_size, PAGE_SIZE - src_page_offset); if (unmap_src) { - kunmap_atomic(d->src_addr); + kunmap_local(d->src_addr); d->src_addr = NULL; } if (unmap_dst) { - kunmap_atomic(d->dst_addr); + kunmap_local(d->dst_addr); d->dst_addr = NULL; } @@ -388,12 +388,8 @@ static int vmw_bo_cpu_blit_line(struct v if (WARN_ON_ONCE(dst_page >= d->dst_num_pages)) return -EINVAL; - d->dst_addr = - kmap_atomic_prot(d->dst_pages[dst_page], - d->dst_prot); - if (!d->dst_addr) - return -ENOMEM; - + d->dst_addr = kmap_local_page_prot(d->dst_pages[dst_page], + d->dst_prot); d->mapped_dst = dst_page; } @@ -401,12 +397,8 @@ static int vmw_bo_cpu_blit_line(struct v if (WARN_ON_ONCE(src_page >= d->src_num_pages)) return -EINVAL; - d->src_addr = - kmap_atomic_prot(d->src_pages[src_page], - d->src_prot); - if (!d->src_addr) - return -ENOMEM; - + d->src_addr = kmap_local_page_prot(d->src_pages[src_page], + d->src_prot); d->mapped_src = src_page; } diff->do_cpy(diff, d->dst_addr + dst_page_offset, @@ -436,8 +428,10 @@ static int vmw_bo_cpu_blit_line(struct v * * Performs a CPU blit from one buffer object to another avoiding a full * bo vmap which may exhaust- or fragment vmalloc space. - * On supported architectures (x86), we're using kmap_atomic which avoids - * cross-processor TLB- and cache flushes and may, on non-HIGHMEM systems + * + * On supported architectures (x86), we're using kmap_local_prot() which + * avoids cross-processor TLB- and cache flushes. kmap_local_prot() will + * either map a highmem page with the proper pgprot on HIGHMEM=y systems or * reference already set-up mappings. * * Neither of the buffer objects may be placed in PCI memory @@ -500,9 +494,9 @@ int vmw_bo_cpu_blit(struct ttm_buffer_ob } out: if (d.src_addr) - kunmap_atomic(d.src_addr); + kunmap_local(d.src_addr); if (d.dst_addr) - kunmap_atomic(d.dst_addr); + kunmap_local(d.dst_addr); return ret; }