From patchwork Wed Mar 29 07:32:12 2023
X-Patchwork-Submitter: Zhao Liu
X-Patchwork-Id: 13192027
From: Zhao Liu
Date: Wed, 29 Mar 2023 15:32:12 +0800
Message-Id: <20230329073220.3982460-2-zhao1.liu@linux.intel.com>
Subject: [Intel-gfx] [PATCH v2 1/9] drm/i915: Use kmap_local_page() in
 gem/i915_gem_object.c

From: Zhao Liu

The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the call from
kmap_atomic() to kmap_local_page().
The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (with kmap_atomic(),
preemption is disabled in the !PREEMPT_RT case; on PREEMPT_RT only
migration is disabled). With kmap_local_page(), we can avoid the often
unwanted side effect of unnecessarily disabling page faults and
preemption.

There are two reasons why i915_gem_object_read_from_page_kmap() doesn't
need to disable page faults and preemption around the mapping:

1. The flush operation is safe. In drm/i915/gem/i915_gem_object.c,
   i915_gem_object_read_from_page_kmap() calls drm_clflush_virt_range(),
   which uses CLFLUSHOPT or WBINVD to flush. Since CLFLUSHOPT is global
   on x86 and WBINVD is issued on each CPU in drm_clflush_virt_range(),
   the flush operation is global.

2. Any context switch caused by preemption or a page fault (a page
   fault may sleep) doesn't affect the validity of the local mapping.

Therefore, i915_gem_object_read_from_page_kmap() is a function where
the use of kmap_local_page() in place of kmap_atomic() is correctly
suited.

Convert the calls to kmap_atomic() / kunmap_atomic() into
kmap_local_page() / kunmap_local(), and remove the redundant variable
that stored the address of the mapped page, since kunmap_local() can
accept any pointer within the page.

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com

v2:
 * Dropped the hot plug related description since it has nothing to do
   with kmap_local_page().
 * Rebased on f47e630 ("drm/i915/gem: Typecheck page lookups") to keep
   the "idx" variable of type pgoff_t here.
 * Added a description of the motivation for using kmap_local_page().

Suggested-by: Dave Hansen
Suggested-by: Ira Weiny
Suggested-by: Fabio M. De Francesco
Signed-off-by: Zhao Liu
Reviewed-by: Ira Weiny
---
Suggested-by credits:
 Dave: Referred to his explanation about cache flush.
 Ira: Referred to his task document, review comments and explanation
      about cache flush.
 Fabio: Referred to his boilerplate commit message and his description
        of why kmap_local_page() should be preferred.
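For readers comparing the two APIs, a minimal sketch of the difference
described above (illustrative only; the helper functions and their
parameters are hypothetical and not part of this patch):

#include <linux/highmem.h>
#include <linux/string.h>

/* kmap_atomic(): the mapped section is atomic; page faults and (on
 * !PREEMPT_RT) preemption are disabled, so no sleeping is allowed
 * between map and unmap.
 */
static void copy_with_atomic_map(struct page *page, void *dst, size_t len)
{
        void *vaddr = kmap_atomic(page);

        memcpy(dst, vaddr, len);        /* must not sleep or fault here */
        kunmap_atomic(vaddr);
}

/* kmap_local_page(): the mapping is valid only in the owning thread,
 * but the code between map and unmap may fault, sleep or be preempted;
 * the mapping stays valid across such context switches.
 */
static void copy_with_local_map(struct page *page, void *dst, size_t len)
{
        void *vaddr = kmap_local_page(page);

        memcpy(dst, vaddr, len);        /* sleeping or faulting is fine here */
        kunmap_local(vaddr);            /* accepts any pointer within the page */
}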
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index e6d4efde4fc5..c0bfdd7784f7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -428,17 +428,15 @@ static void
 i915_gem_object_read_from_page_kmap(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size)
 {
         pgoff_t idx = offset >> PAGE_SHIFT;
-        void *src_map;
         void *src_ptr;
 
-        src_map = kmap_atomic(i915_gem_object_get_page(obj, idx));
-
-        src_ptr = src_map + offset_in_page(offset);
+        src_ptr = kmap_local_page(i915_gem_object_get_page(obj, idx)) +
+                  offset_in_page(offset);
         if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ))
                 drm_clflush_virt_range(src_ptr, size);
 
         memcpy(dst, src_ptr, size);
 
-        kunmap_atomic(src_map);
+        kunmap_local(src_ptr);
 }
 
 static void

From patchwork Wed Mar 29 07:32:13 2023
X-Patchwork-Submitter: Zhao Liu
X-Patchwork-Id: 13192028
From: Zhao Liu
Date: Wed, 29 Mar 2023 15:32:13 +0800
Message-Id: <20230329073220.3982460-3-zhao1.liu@linux.intel.com>
Subject: [Intel-gfx] [PATCH v2 2/9] drm/i915: Use memcpy_[from/to]_page() in
 gem/i915_gem_phys.c

From: Zhao Liu

The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the calls from
kmap_atomic() + memcpy() to memcpy_[from/to]_page(), which use
kmap_local_page() to build the local mapping and then do the memcpy().

The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (with kmap_atomic(),
preemption is disabled in the !PREEMPT_RT case; on PREEMPT_RT only
migration is disabled). With kmap_local_page(), we can avoid the often
unwanted side effect of unnecessarily disabling page faults and
preemption.

In drm/i915/gem/i915_gem_phys.c, the functions
i915_gem_object_get_pages_phys() and i915_gem_object_put_pages_phys()
don't need to disable page faults and preemption around the mapping,
for two reasons:

1. The flush operation is safe. Both functions call
   drm_clflush_virt_range(), which uses CLFLUSHOPT or WBINVD to flush.
   Since CLFLUSHOPT is global on x86 and WBINVD is issued on each CPU
   in drm_clflush_virt_range(), the flush operation is global.

2. Any context switch caused by preemption or a page fault (a page
   fault may sleep) doesn't affect the validity of the local mapping.

Therefore, i915_gem_object_get_pages_phys() and
i915_gem_object_put_pages_phys() are two functions where the use of
local mappings in place of atomic mappings is correctly suited.

Convert the calls to kmap_atomic() / kunmap_atomic() + memcpy() into
memcpy_from_page() and memcpy_to_page().

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com

v2:
 * Used memcpy_from_page() and memcpy_to_page() to replace
   kmap_local_page() + memcpy().
 * Dropped the hot plug related description since it has nothing to do
   with kmap_local_page().
 * Added a description of the motivation for using kmap_local_page().

Suggested-by: Dave Hansen
Suggested-by: Ira Weiny
Suggested-by: Fabio M. De Francesco
Signed-off-by: Zhao Liu
Reviewed-by: Ira Weiny
---
Suggested-by credits:
 Dave: Referred to his explanation about cache flush.
 Ira: Referred to his task document, review comments and explanation
      about cache flush.
 Fabio: Referred to his boilerplate commit message and his description
        of why kmap_local_page() should be preferred. Also based on his
        suggestion to use memcpy_[from/to]_page() directly.
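For reference, a simplified sketch of what the two helpers do. The real
definitions live in <linux/highmem.h> and also carry sanity checks and,
on the write side, a flush_dcache_page() call; the _sketch suffix marks
these as illustrative rather than the kernel's definitions:

#include <linux/highmem.h>
#include <linux/string.h>

static inline void memcpy_from_page_sketch(char *to, struct page *page,
                                           size_t offset, size_t len)
{
        char *from = kmap_local_page(page);

        memcpy(to, from + offset, len); /* may fault or be preempted */
        kunmap_local(from);
}

static inline void memcpy_to_page_sketch(struct page *page, size_t offset,
                                         const char *from, size_t len)
{
        char *to = kmap_local_page(page);

        memcpy(to + offset, from, len);
        kunmap_local(to);
}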
---
 drivers/gpu/drm/i915/gem/i915_gem_phys.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index 76efe98eaa14..4c6d3f07260a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -64,16 +64,13 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
         dst = vaddr;
         for (i = 0; i < obj->base.size / PAGE_SIZE; i++) {
                 struct page *page;
-                void *src;
 
                 page = shmem_read_mapping_page(mapping, i);
                 if (IS_ERR(page))
                         goto err_st;
 
-                src = kmap_atomic(page);
-                memcpy(dst, src, PAGE_SIZE);
+                memcpy_from_page(dst, page, 0, PAGE_SIZE);
                 drm_clflush_virt_range(dst, PAGE_SIZE);
-                kunmap_atomic(src);
 
                 put_page(page);
                 dst += PAGE_SIZE;
@@ -112,16 +109,13 @@ i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
 
         for (i = 0; i < obj->base.size / PAGE_SIZE; i++) {
                 struct page *page;
-                char *dst;
 
                 page = shmem_read_mapping_page(mapping, i);
                 if (IS_ERR(page))
                         continue;
 
-                dst = kmap_atomic(page);
                 drm_clflush_virt_range(src, PAGE_SIZE);
-                memcpy(dst, src, PAGE_SIZE);
-                kunmap_atomic(dst);
+                memcpy_to_page(page, 0, src, PAGE_SIZE);
 
                 set_page_dirty(page);
                 if (obj->mm.madv == I915_MADV_WILLNEED)

From patchwork Wed Mar 29 07:32:14 2023
X-Patchwork-Submitter: Zhao Liu
X-Patchwork-Id: 13192029
From: Zhao Liu
Date: Wed, 29 Mar 2023 15:32:14 +0800
Message-Id: <20230329073220.3982460-4-zhao1.liu@linux.intel.com>
Subject: [Intel-gfx] [PATCH v2 3/9] drm/i915: Use kmap_local_page() in
 gem/i915_gem_shmem.c

From: Zhao Liu

The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1].

The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (with kmap_atomic(),
preemption is disabled in the !PREEMPT_RT case; on PREEMPT_RT only
migration is disabled). With kmap_local_page(), we can avoid the often
unwanted side effect of unnecessarily disabling page faults and
preemption.

In drm/i915/gem/i915_gem_shmem.c, the function shmem_pwrite() needs to
disable page faults to eliminate a potential recursive fault[2]. But
__copy_from_user_inatomic() doesn't need preemption disabled, and the
local mapping stays valid across being scheduled in and out. So the
atomic mapping can be replaced by kmap_local_page() / kunmap_local()
together with pagefault_disable() / pagefault_enable().

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com
[2]: https://patchwork.freedesktop.org/patch/295840/

v2: No code change since v1; added a description of the motivation for
    using kmap_local_page().

Suggested-by: Ira Weiny
Reviewed-by: Ira Weiny
Reviewed-by: Fabio M. De Francesco
Signed-off-by: Zhao Liu
---
Suggested-by credits:
 Ira: Referred to his suggestions about keeping pagefault_disable().
 Fabio: Referred to his description of why kmap_local_page() should be
        preferred.
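In other words, only the user copy that must not recurse into the fault
handler needs explicit protection; the mapping itself no longer imposes
it. A sketch of the resulting pattern, written as a hypothetical
self-contained helper (the name and parameters are illustrative, not
taken from the driver):

#include <linux/highmem.h>
#include <linux/uaccess.h>

/* Copy user data into a mapped page without letting the copy itself
 * recurse into the page-fault handler. Returns the number of bytes
 * that could not be copied, like __copy_from_user_inatomic().
 */
static unsigned long write_user_data_to_page(struct page *page,
                                             unsigned int offset,
                                             const void __user *user_data,
                                             unsigned int len)
{
        char *vaddr = kmap_local_page(page);
        unsigned long unwritten;

        pagefault_disable();    /* only the user copy needs this */
        unwritten = __copy_from_user_inatomic(vaddr + offset, user_data, len);
        pagefault_enable();

        kunmap_local(vaddr);
        return unwritten;
}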
---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 37d1efcd3ca6..ad69a79c8b31 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -475,11 +475,13 @@ shmem_pwrite(struct drm_i915_gem_object *obj,
                 if (err < 0)
                         return err;
 
-                vaddr = kmap_atomic(page);
+                vaddr = kmap_local_page(page);
+                pagefault_disable();
                 unwritten = __copy_from_user_inatomic(vaddr + pg,
                                                       user_data,
                                                       len);
-                kunmap_atomic(vaddr);
+                pagefault_enable();
+                kunmap_local(vaddr);
 
                 err = aops->write_end(obj->base.filp, mapping, offset, len,
                                       len - unwritten, page, data);

From patchwork Wed Mar 29 07:32:15 2023
X-Patchwork-Submitter: Zhao Liu
X-Patchwork-Id: 13192030
From: Zhao Liu
Date: Wed, 29 Mar 2023 15:32:15 +0800
Message-Id: <20230329073220.3982460-5-zhao1.liu@linux.intel.com>
Subject: [Intel-gfx] [PATCH v2 4/9] drm/i915: Use kmap_local_page() in
 gem/selftests/huge_pages.c
From: Zhao Liu

The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the call from
kmap_atomic() to kmap_local_page().

The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (with kmap_atomic(),
preemption is disabled in the !PREEMPT_RT case; on PREEMPT_RT only
migration is disabled). With kmap_local_page(), we can avoid the often
unwanted side effect of unnecessarily disabling page faults and
preemption.

In drm/i915/gem/selftests/huge_pages.c, the function __cpu_check_shmem()
mainly uses the mapping to flush the cache and check the value. There
are two reasons why __cpu_check_shmem() doesn't need to disable page
faults and preemption around the mapping:

1. The flush operation is safe. __cpu_check_shmem() calls
   drm_clflush_virt_range(), which uses CLFLUSHOPT or WBINVD to flush.
   Since CLFLUSHOPT is global on x86 and WBINVD is issued on each CPU
   in drm_clflush_virt_range(), the flush operation is global.

2. Any context switch caused by preemption or a page fault (a page
   fault may sleep) doesn't affect the validity of the local mapping.

Therefore, __cpu_check_shmem() is a function where the use of
kmap_local_page() in place of kmap_atomic() is correctly suited.

Convert the calls to kmap_atomic() / kunmap_atomic() into
kmap_local_page() / kunmap_local().

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com

v2:
 * Dropped the hot plug related description since it has nothing to do
   with kmap_local_page().
 * No code change since v1; added a description of the motivation for
   using kmap_local_page().

Suggested-by: Dave Hansen
Suggested-by: Ira Weiny
Suggested-by: Fabio M. De Francesco
Signed-off-by: Zhao Liu
Reviewed-by: Ira Weiny
---
Suggested-by credits:
 Dave: Referred to his explanation about cache flush.
 Ira: Referred to his task document, review comments and explanation
      about cache flush.
 Fabio: Referred to his boilerplate commit message and his description
        of why kmap_local_page() should be preferred.
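It may also help to note that on kernels built without CONFIG_HIGHMEM
(the usual case for 64-bit systems), the local-mapping helpers are
believed to reduce to roughly the following, so the conversion carries
essentially no runtime cost there. This is a simplified, illustrative
sketch; the _sketch suffix marks these as not the kernel's actual
definitions:

#include <linux/mm.h>

/* Lowmem pages are permanently mapped, so the "mapping" is just the
 * page's existing kernel virtual address.
 */
static inline void *kmap_local_page_sketch(struct page *page)
{
        return page_address(page);
}

/* ...and the matching unmap has nothing to tear down. */
static inline void kunmap_local_sketch(const void *vaddr)
{
}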
---
 drivers/gpu/drm/i915/gem/selftests/huge_pages.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index defece0bcb81..3f9ea48a48d0 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -1026,7 +1026,7 @@ __cpu_check_shmem(struct drm_i915_gem_object *obj, u32 dword, u32 val)
                 goto err_unlock;
 
         for (n = 0; n < obj->base.size >> PAGE_SHIFT; ++n) {
-                u32 *ptr = kmap_atomic(i915_gem_object_get_page(obj, n));
+                u32 *ptr = kmap_local_page(i915_gem_object_get_page(obj, n));
 
                 if (needs_flush & CLFLUSH_BEFORE)
                         drm_clflush_virt_range(ptr, PAGE_SIZE);
@@ -1034,12 +1034,12 @@ __cpu_check_shmem(struct drm_i915_gem_object *obj, u32 dword, u32 val)
                 if (ptr[dword] != val) {
                         pr_err("n=%lu ptr[%u]=%u, val=%u\n",
                                n, dword, ptr[dword], val);
-                        kunmap_atomic(ptr);
+                        kunmap_local(ptr);
                         err = -EINVAL;
                         break;
                 }
 
-                kunmap_atomic(ptr);
+                kunmap_local(ptr);
         }
 
         i915_gem_object_finish_access(obj);

From patchwork Wed Mar 29 07:32:16 2023
X-Patchwork-Submitter: Zhao Liu
X-Patchwork-Id: 13192031
From: Zhao Liu
Date: Wed, 29 Mar 2023 15:32:16 +0800
Message-Id: <20230329073220.3982460-6-zhao1.liu@linux.intel.com>
Subject: [Intel-gfx] [PATCH v2 5/9] drm/i915: Use kmap_local_page() in
 gem/selftests/i915_gem_coherency.c

From: Zhao Liu

The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the call from
kmap_atomic() to kmap_local_page().

The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (with kmap_atomic(),
preemption is disabled in the !PREEMPT_RT case; on PREEMPT_RT only
migration is disabled). With kmap_local_page(), we can avoid the often
unwanted side effect of unnecessarily disabling page faults and
preemption.

In drm/i915/gem/selftests/i915_gem_coherency.c, the functions cpu_set()
and cpu_get() mainly use the mapping to flush the cache and assign the
value. There are two reasons why cpu_set() and cpu_get() don't need to
disable page faults and preemption around the mapping:

1. The flush operation is safe. cpu_set() and cpu_get() call
   drm_clflush_virt_range(), which uses CLFLUSHOPT or WBINVD to flush.
   Since CLFLUSHOPT is global on x86 and WBINVD is issued on each CPU
   in drm_clflush_virt_range(), the flush operation is global.

2. Any context switch caused by preemption or a page fault (a page
   fault may sleep) doesn't affect the validity of the local mapping.

Therefore, cpu_set() and cpu_get() are functions where the use of
kmap_local_page() in place of kmap_atomic() is correctly suited.

Convert the calls to kmap_atomic() / kunmap_atomic() into
kmap_local_page() / kunmap_local().

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com

v2:
 * Dropped the hot plug related description since it has nothing to do
   with kmap_local_page().
 * No code change since v1; added a description of the motivation for
   using kmap_local_page().

Suggested-by: Dave Hansen
Suggested-by: Ira Weiny
Suggested-by: Fabio M. De Francesco
Signed-off-by: Zhao Liu
Reviewed-by: Ira Weiny
---
Suggested-by credits:
 Dave: Referred to his explanation about cache flush.
 Ira: Referred to his task document, review comments and explanation
      about cache flush.
 Fabio: Referred to his boilerplate commit message and his description
        of why kmap_local_page() should be preferred.
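As the diff below shows, the intermediate "map" variable can be dropped
because kunmap_local() accepts any address within the mapped page. A
hypothetical stand-alone helper illustrating the same pattern (names are
illustrative, not from the driver):

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/types.h>

/* Write one dword at an arbitrary byte offset into a page. The pointer
 * handed back to kunmap_local() still carries the in-page offset, which
 * is fine: any address inside the mapped page is accepted.
 */
static void write_dword(struct page *page, unsigned long offset, u32 v)
{
        u32 *cpu = kmap_local_page(page) + offset_in_page(offset);

        *cpu = v;
        kunmap_local(cpu);
}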
---
 .../gpu/drm/i915/gem/selftests/i915_gem_coherency.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
index 3bef1beec7cb..beeb3e12eccc 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
@@ -24,7 +24,6 @@ static int cpu_set(struct context *ctx, unsigned long offset, u32 v)
 {
         unsigned int needs_clflush;
         struct page *page;
-        void *map;
         u32 *cpu;
         int err;
 
@@ -34,8 +33,7 @@ static int cpu_set(struct context *ctx, unsigned long offset, u32 v)
                 goto out;
 
         page = i915_gem_object_get_page(ctx->obj, offset >> PAGE_SHIFT);
-        map = kmap_atomic(page);
-        cpu = map + offset_in_page(offset);
+        cpu = kmap_local_page(page) + offset_in_page(offset);
 
         if (needs_clflush & CLFLUSH_BEFORE)
                 drm_clflush_virt_range(cpu, sizeof(*cpu));
@@ -45,7 +43,7 @@ static int cpu_set(struct context *ctx, unsigned long offset, u32 v)
         if (needs_clflush & CLFLUSH_AFTER)
                 drm_clflush_virt_range(cpu, sizeof(*cpu));
 
-        kunmap_atomic(map);
+        kunmap_local(cpu);
         i915_gem_object_finish_access(ctx->obj);
 
 out:
@@ -57,7 +55,6 @@ static int cpu_get(struct context *ctx, unsigned long offset, u32 *v)
 {
         unsigned int needs_clflush;
         struct page *page;
-        void *map;
         u32 *cpu;
         int err;
 
@@ -67,15 +64,14 @@ static int cpu_get(struct context *ctx, unsigned long offset, u32 *v)
                 goto out;
 
         page = i915_gem_object_get_page(ctx->obj, offset >> PAGE_SHIFT);
-        map = kmap_atomic(page);
-        cpu = map + offset_in_page(offset);
+        cpu = kmap_local_page(page) + offset_in_page(offset);
 
         if (needs_clflush & CLFLUSH_BEFORE)
                 drm_clflush_virt_range(cpu, sizeof(*cpu));
 
         *v = *cpu;
 
-        kunmap_atomic(map);
+        kunmap_local(cpu);
         i915_gem_object_finish_access(ctx->obj);
 
 out:

From patchwork Wed Mar 29 07:32:17 2023
X-Patchwork-Submitter: Zhao Liu
X-Patchwork-Id: 13192035
From: Zhao Liu
Date: Wed, 29 Mar 2023 15:32:17 +0800
Message-Id: <20230329073220.3982460-7-zhao1.liu@linux.intel.com>
Subject: [Intel-gfx] [PATCH v2 6/9] drm/i915: Use kmap_local_page() in
 gem/selftests/i915_gem_context.c

From: Zhao Liu

The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the call from
kmap_atomic() to kmap_local_page().

The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption. With
kmap_local_page(), we can avoid the often unwanted side effect of
unnecessarily disabling page faults and preemption.

In drm/i915/gem/selftests/i915_gem_context.c, the functions cpu_fill()
and cpu_check() mainly use the mapping to flush the cache and
check/assign the value. There are two reasons why cpu_fill() and
cpu_check() don't need to disable page faults and preemption around the
mapping:

1. The flush operation is safe. cpu_fill() and cpu_check() call
   drm_clflush_virt_range(), which uses CLFLUSHOPT or WBINVD to flush.
   Since CLFLUSHOPT is global on x86 and WBINVD is issued on each CPU
   in drm_clflush_virt_range(), the flush operation is global.

2. Any context switch caused by preemption or a page fault (a page
   fault may sleep) doesn't affect the validity of the local mapping.

Therefore, cpu_fill() and cpu_check() are functions where the use of
kmap_local_page() in place of kmap_atomic() is correctly suited.

Convert the calls to kmap_atomic() / kunmap_atomic() into
kmap_local_page() / kunmap_local().

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com

v2:
 * Dropped the hot plug related description since it has nothing to do
   with kmap_local_page().
 * No code change since v1; added a description of the motivation for
   using kmap_local_page().

Suggested-by: Dave Hansen
Suggested-by: Ira Weiny
Suggested-by: Fabio M. De Francesco
Signed-off-by: Zhao Liu
Reviewed-by: Ira Weiny
---
Suggested-by credits:
 Dave: Referred to his explanation about cache flush.
 Ira: Referred to his task document, review comments and explanation
      about cache flush.
 Fabio: Referred to his boilerplate commit message and his description
        of why kmap_local_page() should be preferred.
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
index a81fa6a20f5a..dcbc0b8e3323 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
@@ -481,12 +481,12 @@ static int cpu_fill(struct drm_i915_gem_object *obj, u32 value)
         for (n = 0; n < real_page_count(obj); n++) {
                 u32 *map;
 
-                map = kmap_atomic(i915_gem_object_get_page(obj, n));
+                map = kmap_local_page(i915_gem_object_get_page(obj, n));
                 for (m = 0; m < DW_PER_PAGE; m++)
                         map[m] = value;
                 if (!has_llc)
                         drm_clflush_virt_range(map, PAGE_SIZE);
-                kunmap_atomic(map);
+                kunmap_local(map);
         }
 
         i915_gem_object_finish_access(obj);
@@ -512,7 +512,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj,
         for (n = 0; n < real_page_count(obj); n++) {
                 u32 *map, m;
 
-                map = kmap_atomic(i915_gem_object_get_page(obj, n));
+                map = kmap_local_page(i915_gem_object_get_page(obj, n));
                 if (needs_flush & CLFLUSH_BEFORE)
                         drm_clflush_virt_range(map, PAGE_SIZE);
 
@@ -538,7 +538,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj,
                 }
 
 out_unmap:
-                kunmap_atomic(map);
+                kunmap_local(map);
                 if (err)
                         break;
         }

From patchwork Wed Mar 29 07:32:18 2023
X-Patchwork-Submitter: Zhao Liu
X-Patchwork-Id: 13192037
From: Zhao Liu
Date: Wed, 29 Mar 2023 15:32:18 +0800
Message-Id: <20230329073220.3982460-8-zhao1.liu@linux.intel.com>
Subject: [Intel-gfx] [PATCH v2 7/9] drm/i915: Use memcpy_from_page() in
 gt/uc/intel_uc_fw.c

From: Zhao Liu

The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the call from
kmap_atomic() to kmap_local_page().

The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (with kmap_atomic(),
preemption is disabled in the !PREEMPT_RT case; on PREEMPT_RT only
migration is disabled). With kmap_local_page(), we can avoid the often
unwanted side effect of unnecessarily disabling page faults and
preemption.

In drm/i915/gt/uc/intel_uc_fw.c, the function intel_uc_fw_copy_rsa()
just uses the mapping to do a memory copy, so it doesn't need to
disable page faults and preemption around the mapping. A local mapping
without the atomic context (i.e. without disabling page faults /
preemption) is enough.

Therefore, intel_uc_fw_copy_rsa() is a function where the use of
memcpy_from_page() with kmap_local_page() in place of memcpy() with
kmap_atomic() is correctly suited.

Convert the calls to memcpy() with kmap_atomic() / kunmap_atomic() into
memcpy_from_page(), which uses a local mapping to copy.

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com/T/#u

v2: No code change since v1; added a description of the motivation for
    using kmap_local_page().

Suggested-by: Ira Weiny
Suggested-by: Fabio M. De Francesco
Reviewed-by: Ira Weiny
Signed-off-by: Zhao Liu
---
Suggested-by credits:
 Ira: Referred to his task document and suggestions about using
      memcpy_from_page() directly.
 Fabio: Referred to his boilerplate commit message and his description
        of why kmap_local_page() should be preferred.
---
 drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
index 65672ff82605..5bbde4abd565 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c
@@ -1152,16 +1152,13 @@ size_t intel_uc_fw_copy_rsa(struct intel_uc_fw *uc_fw, void *dst, u32 max_len)
 
         for_each_sgt_page(page, iter, uc_fw->obj->mm.pages) {
                 u32 len = min_t(u32, size, PAGE_SIZE - offset);
-                void *vaddr;
 
                 if (idx > 0) {
                         idx--;
                         continue;
                 }
 
-                vaddr = kmap_atomic(page);
-                memcpy(dst, vaddr + offset, len);
-                kunmap_atomic(vaddr);
+                memcpy_from_page(dst, page, offset, len);
 
                 offset = 0;
                 dst += len;

From patchwork Wed Mar 29 07:32:19 2023
X-Patchwork-Submitter: Zhao Liu
X-Patchwork-Id: 13192041
From: Zhao Liu
Date: Wed, 29 Mar 2023 15:32:19 +0800
Message-Id: <20230329073220.3982460-9-zhao1.liu@linux.intel.com>
Subject: [Intel-gfx] [PATCH v2 8/9] drm/i915: Use kmap_local_page() in
 i915_cmd_parser.c
From: Zhao Liu

The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the call from
kmap_atomic() to kmap_local_page().

The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (with kmap_atomic(),
preemption is disabled in the !PREEMPT_RT case; on PREEMPT_RT only
migration is disabled). With kmap_local_page(), we can avoid the often
unwanted side effect of unnecessarily disabling page faults and
preemption.

There are two reasons why copy_batch() doesn't need to disable page
faults and preemption around the mapping:

1. The flush operation is safe. In i915_cmd_parser.c, copy_batch()
   calls drm_clflush_virt_range(), which uses CLFLUSHOPT or WBINVD to
   flush. Since CLFLUSHOPT is global on x86 and WBINVD is issued on
   each CPU in drm_clflush_virt_range(), the flush operation is global.

2. Any context switch caused by preemption or a page fault (a page
   fault may sleep) doesn't affect the validity of the local mapping.

Therefore, copy_batch() is a function where the use of
kmap_local_page() in place of kmap_atomic() is correctly suited.

Convert the calls to kmap_atomic() / kunmap_atomic() into
kmap_local_page() / kunmap_local().

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com

v2:
 * Dropped the hot plug related description since it has nothing to do
   with kmap_local_page().
 * No code change since v1; added a description of the motivation for
   using kmap_local_page().

Suggested-by: Dave Hansen
Suggested-by: Ira Weiny
Suggested-by: Fabio M. De Francesco
Signed-off-by: Zhao Liu
Reviewed-by: Ira Weiny
---
Suggested-by credits:
 Dave: Referred to his explanation about cache flush.
 Ira: Referred to his task document, review comments and explanation
      about cache flush.
 Fabio: Referred to his boilerplate commit message and his description
        of why kmap_local_page() should be preferred.
---
 drivers/gpu/drm/i915/i915_cmd_parser.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c
index ddf49c2dbb91..2905df83e180 100644
--- a/drivers/gpu/drm/i915/i915_cmd_parser.c
+++ b/drivers/gpu/drm/i915/i915_cmd_parser.c
@@ -1211,11 +1211,11 @@ static u32 *copy_batch(struct drm_i915_gem_object *dst_obj,
         for (n = offset >> PAGE_SHIFT; remain; n++) {
                 int len = min(remain, PAGE_SIZE - x);
 
-                src = kmap_atomic(i915_gem_object_get_page(src_obj, n));
+                src = kmap_local_page(i915_gem_object_get_page(src_obj, n));
                 if (src_needs_clflush)
                         drm_clflush_virt_range(src + x, len);
                 memcpy(ptr, src + x, len);
-                kunmap_atomic(src);
+                kunmap_local(src);
 
                 ptr += len;
                 remain -= len;

From patchwork Wed Mar 29 07:32:20 2023
X-Patchwork-Submitter: Zhao Liu
X-Patchwork-Id: 13192042
From: Zhao Liu
Date: Wed, 29 Mar 2023 15:32:20 +0800
Message-Id: <20230329073220.3982460-10-zhao1.liu@linux.intel.com>
Subject: [Intel-gfx] [PATCH v2 9/9] drm/i915: Use kmap_local_page() in
 gem/i915_gem_execbuffer.c
From: Zhao Liu

The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the calls from
kmap_atomic() to kmap_local_page().

The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (with kmap_atomic(),
preemption is disabled in the !PREEMPT_RT case; on PREEMPT_RT only
migration is disabled). With kmap_local_page(), we can avoid the often
unwanted side effect of unnecessarily disabling page faults and
preemption.

In i915_gem_execbuffer.c, eb->reloc_cache.vaddr is mapped by
kmap_atomic() in eb_relocate_entry() and unmapped by kunmap_atomic() in
reloc_cache_reset(). This mapping/unmapping occurs in two places: one
in eb_relocate_vma() and the other in eb_relocate_vma_slow().

Neither eb_relocate_vma() nor eb_relocate_vma_slow() needs to disable
page faults and preemption during the above mapping/unmapping. So they
can simply use kmap_local_page() / kunmap_local(), which do the
mapping/unmapping regardless of the context.

Convert the calls to kmap_atomic() / kunmap_atomic() into
kmap_local_page() / kunmap_local().

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com

v2: No code change since v1. Added a description of the motivation for
    using kmap_local_page() and the "Suggested-by" tag of Fabio.

Suggested-by: Ira Weiny
Suggested-by: Fabio M. De Francesco
Signed-off-by: Zhao Liu
---
Suggested-by credits:
 Ira: Referred to his task document and review comments.
 Fabio: Referred to his boilerplate commit message and his description
        of why kmap_local_page() should be preferred.
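Since the relocation cache keeps a local mapping live across several
helper calls, it is worth recalling the constraints on kmap_local_page()
mappings: they are only valid in the thread that created them, and when
several pages are mapped at once they must be released in reverse order.
A small, purely illustrative helper showing the nesting rule (not taken
from the driver):

#include <linux/highmem.h>
#include <linux/string.h>

/* Two pages may be mapped at the same time by one thread, as long as
 * the mappings are released in reverse (LIFO) order.
 */
static void copy_between_pages(struct page *dst_page, struct page *src_page,
                               size_t len)
{
        void *dst = kmap_local_page(dst_page);
        void *src = kmap_local_page(src_page);

        memcpy(dst, src, len);

        kunmap_local(src);      /* most recent mapping is released first */
        kunmap_local(dst);
}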
---
 drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 9dce2957b4e5..805565edd148 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1151,7 +1151,7 @@ static void reloc_cache_unmap(struct reloc_cache *cache)
 
         vaddr = unmask_page(cache->vaddr);
         if (cache->vaddr & KMAP)
-                kunmap_atomic(vaddr);
+                kunmap_local(vaddr);
         else
                 io_mapping_unmap_atomic((void __iomem *)vaddr);
 }
@@ -1167,7 +1167,7 @@ static void reloc_cache_remap(struct reloc_cache *cache,
         if (cache->vaddr & KMAP) {
                 struct page *page = i915_gem_object_get_page(obj, cache->page);
 
-                vaddr = kmap_atomic(page);
+                vaddr = kmap_local_page(page);
                 cache->vaddr = unmask_flags(cache->vaddr) |
                         (unsigned long)vaddr;
         } else {
@@ -1197,7 +1197,7 @@ static void reloc_cache_reset(struct reloc_cache *cache, struct i915_execbuffer
                 if (cache->vaddr & CLFLUSH_AFTER)
                         mb();
 
-                kunmap_atomic(vaddr);
+                kunmap_local(vaddr);
                 i915_gem_object_finish_access(obj);
         } else {
                 struct i915_ggtt *ggtt = cache_to_ggtt(cache);
@@ -1229,7 +1229,7 @@ static void *reloc_kmap(struct drm_i915_gem_object *obj,
         struct page *page;
 
         if (cache->vaddr) {
-                kunmap_atomic(unmask_page(cache->vaddr));
+                kunmap_local(unmask_page(cache->vaddr));
         } else {
                 unsigned int flushes;
                 int err;
@@ -1251,7 +1251,7 @@ static void *reloc_kmap(struct drm_i915_gem_object *obj,
                 if (!obj->mm.dirty)
                         set_page_dirty(page);
 
-                vaddr = kmap_atomic(page);
+                vaddr = kmap_local_page(page);
                 cache->vaddr = unmask_flags(cache->vaddr) |
                         (unsigned long)vaddr;
                 cache->page = pageno;