From patchwork Tue Nov 24 06:07:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 11927229 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D1B9BC8301E for ; Tue, 24 Nov 2020 06:08:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A56B1221E9 for ; Tue, 24 Nov 2020 06:08:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729354AbgKXGIH (ORCPT ); Tue, 24 Nov 2020 01:08:07 -0500 Received: from mga02.intel.com ([134.134.136.20]:57123 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729309AbgKXGIE (ORCPT ); Tue, 24 Nov 2020 01:08:04 -0500 IronPort-SDR: Qc8OPO1J9gqyRaaBn6ljuKzsdRK432dWGNpCWnEuxWefYJeQ1IuQK4rMpSXmFfptYUY1VMat9Y aEqvtEPP3X+w== X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="158937222" X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="158937222" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:03 -0800 IronPort-SDR: Zw2VaazFW03fZFRdJj38HIMXIVqbN/85Fo9/0Ue7V9ZWM/Q05lCstaS0pglbOknKZNmn/v3oLQ fQIV9CK6ORAg== X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="478391562" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:03 -0800 From: ira.weiny@intel.com To: Andrew Morton Cc: Ira Weiny , Dave Hansen , Matthew Wilcox , Christoph Hellwig , Dan Williams , Al Viro , Eric Biggers , Thomas Gleixner , Luis Chamberlain , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Howells , Chris Mason , Josef Bacik , David Sterba , Steve French , Jaegeuk Kim , Chao Yu , Nicolas Pitre , "Martin K. Petersen" , Brian King , Greg Kroah-Hartman , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Kirti Wankhede , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 01/17] mm/highmem: Lift memcpy_[to|from]_page and memset_page to core Date: Mon, 23 Nov 2020 22:07:39 -0800 Message-Id: <20201124060755.1405602-2-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com> References: <20201124060755.1405602-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Ira Weiny Working through a conversion to a call such as kmap_thread() revealed many places where the pattern kmap/memcpy/kunmap occurred. Eric Biggers, Matthew Wilcox, Christoph Hellwig, Dan Williams, and Al Viro all suggested putting this code into helper functions. 
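For illustration only (not part of this patch; 'page', 'buf', 'offset', and 'len' are hypothetical), the open-coded pattern and its equivalent using one of the helpers lifted below:

	/* open-coded pattern repeated throughout the tree */
	char *addr = kmap(page);
	memcpy(addr + offset, buf, len);
	kunmap(page);

	/* same operation with the lifted helper */
	memcpy_to_page(page, offset, buf, len);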
Al Viro further pointed out that these functions already existed in the iov_iter code.[1] Placing these functions in 'highmem.h' is suboptimal especially with the changes being proposed in the functionality of kmap. From a caller perspective including/using 'highmem.h' implies that the functions defined in that header are only required when highmem is in use which is increasingly not the case with modern processors. Some headers like mm.h or string.h seem ok but don't really portray the functionality well. 'pagemap.h', on the other hand, makes sense and is already included in many of the places we want to convert. Another alternative would be to create a new header for the promoted memcpy functions, but it masks the fact that these are designed to copy to/from pages using the kernel direct mappings and complicates matters with a new header. Lift memcpy_to_page(), memcpy_from_page(), and memzero_page() to pagemap.h. Also, add a memcpy_page(), memmove_page, and memset_page() to cover more kmap/mem*/kunmap. patterns. [1] https://lore.kernel.org/lkml/20201013200149.GI3576660@ZenIV.linux.org.uk/ https://lore.kernel.org/lkml/20201013112544.GA5249@infradead.org/ Cc: Dave Hansen Suggested-by: Matthew Wilcox Suggested-by: Christoph Hellwig Suggested-by: Dan Williams Suggested-by: Al Viro Suggested-by: Eric Biggers Signed-off-by: Ira Weiny --- include/linux/pagemap.h | 49 +++++++++++++++++++++++++++++++++++++++++ lib/iov_iter.c | 21 ------------------ 2 files changed, 49 insertions(+), 21 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index c77b7c31b2e4..82a0af6bc843 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -1028,4 +1028,53 @@ unsigned int i_blocks_per_page(struct inode *inode, struct page *page) { return thp_size(page) >> inode->i_blkbits; } + +static inline void memcpy_page(struct page *dst_page, size_t dst_off, + struct page *src_page, size_t src_off, + size_t len) +{ + char *dst = kmap_atomic(dst_page); + char *src = kmap_atomic(src_page); + memcpy(dst + dst_off, src + src_off, len); + kunmap_atomic(src); + kunmap_atomic(dst); +} + +static inline void memmove_page(struct page *dst_page, size_t dst_off, + struct page *src_page, size_t src_off, + size_t len) +{ + char *dst = kmap_atomic(dst_page); + char *src = kmap_atomic(src_page); + memmove(dst + dst_off, src + src_off, len); + kunmap_atomic(src); + kunmap_atomic(dst); +} + +static inline void memcpy_from_page(char *to, struct page *page, size_t offset, size_t len) +{ + char *from = kmap_atomic(page); + memcpy(to, from + offset, len); + kunmap_atomic(from); +} + +static inline void memcpy_to_page(struct page *page, size_t offset, const char *from, size_t len) +{ + char *to = kmap_atomic(page); + memcpy(to + offset, from, len); + kunmap_atomic(to); +} + +static inline void memset_page(struct page *page, int val, size_t offset, size_t len) +{ + char *addr = kmap_atomic(page); + memset(addr + offset, val, len); + kunmap_atomic(addr); +} + +static inline void memzero_page(struct page *page, size_t offset, size_t len) +{ + memset_page(page, 0, offset, len); +} + #endif /* _LINUX_PAGEMAP_H */ diff --git a/lib/iov_iter.c b/lib/iov_iter.c index 1635111c5bd2..2439a8b4f0d2 100644 --- a/lib/iov_iter.c +++ b/lib/iov_iter.c @@ -466,27 +466,6 @@ void iov_iter_init(struct iov_iter *i, unsigned int direction, } EXPORT_SYMBOL(iov_iter_init); -static void memcpy_from_page(char *to, struct page *page, size_t offset, size_t len) -{ - char *from = kmap_atomic(page); - memcpy(to, from + offset, len); - 
kunmap_atomic(from); -} - -static void memcpy_to_page(struct page *page, size_t offset, const char *from, size_t len) -{ - char *to = kmap_atomic(page); - memcpy(to + offset, from, len); - kunmap_atomic(to); -} - -static void memzero_page(struct page *page, size_t offset, size_t len) -{ - char *addr = kmap_atomic(page); - memset(addr + offset, 0, len); - kunmap_atomic(addr); -} - static inline bool allocated(struct pipe_buffer *buf) { return buf->ops == &default_pipe_buf_ops; From patchwork Tue Nov 24 06:07:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 11927227 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A444DC83020 for ; Tue, 24 Nov 2020 06:08:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5E4E820857 for ; Tue, 24 Nov 2020 06:08:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729335AbgKXGIF (ORCPT ); Tue, 24 Nov 2020 01:08:05 -0500 Received: from mga01.intel.com ([192.55.52.88]:19536 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729310AbgKXGIE (ORCPT ); Tue, 24 Nov 2020 01:08:04 -0500 IronPort-SDR: GnRriJyQry2B0NO9IiCpkpOsHTpAJ86AQlnGYejkyL4tR0aA51AwjM7fiO30E98Sqk8dbmGkIX 3PeEnBETmf9w== X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="190018235" X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="190018235" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:04 -0800 IronPort-SDR: dQbxu2ciemBqIAkRNgzNeyXnHV5s5Yt4XMctgTN4x2VNjHrcVOCGAP2F6qaMJNl/acBH2AeYEu Q+75it+DWKaA== X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="536356331" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:03 -0800 From: ira.weiny@intel.com To: Andrew Morton Cc: Ira Weiny , Luis Chamberlain , Thomas Gleixner , Dave Hansen , Matthew Wilcox , Christoph Hellwig , Dan Williams , Al Viro , Eric Biggers , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Howells , Chris Mason , Josef Bacik , David Sterba , Steve French , Jaegeuk Kim , Chao Yu , Nicolas Pitre , "Martin K. 
Petersen" , Brian King , Greg Kroah-Hartman , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Kirti Wankhede , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 02/17] drivers/firmware_loader: Use new memcpy_[to|from]_page() Date: Mon, 23 Nov 2020 22:07:40 -0800 Message-Id: <20201124060755.1405602-3-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com> References: <20201124060755.1405602-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Ira Weiny Too many users are using kmap_*() incorrectly and a common pattern is for them to kmap/mempcy/kunmap. Change these calls to use the newly lifted memcpy_[to|from]_page() calls. Cc: Luis Chamberlain Signed-off-by: Ira Weiny --- drivers/base/firmware_loader/fallback.c | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/drivers/base/firmware_loader/fallback.c b/drivers/base/firmware_loader/fallback.c index 4dec4b79ae06..dc93dc307d18 100644 --- a/drivers/base/firmware_loader/fallback.c +++ b/drivers/base/firmware_loader/fallback.c @@ -10,6 +10,7 @@ #include #include #include +#include #include "fallback.h" #include "firmware.h" @@ -317,19 +318,17 @@ static void firmware_rw(struct fw_priv *fw_priv, char *buffer, loff_t offset, size_t count, bool read) { while (count) { - void *page_data; int page_nr = offset >> PAGE_SHIFT; int page_ofs = offset & (PAGE_SIZE-1); int page_cnt = min_t(size_t, PAGE_SIZE - page_ofs, count); - page_data = kmap(fw_priv->pages[page_nr]); - if (read) - memcpy(buffer, page_data + page_ofs, page_cnt); + memcpy_from_page(buffer, fw_priv->pages[page_nr], + page_ofs, page_cnt); else - memcpy(page_data + page_ofs, buffer, page_cnt); + memcpy_to_page(fw_priv->pages[page_nr], page_ofs, + buffer, page_cnt); - kunmap(fw_priv->pages[page_nr]); buffer += page_cnt; offset += page_cnt; count -= page_cnt; From patchwork Tue Nov 24 06:07:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 11927203 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A2050C2D0E4 for ; Tue, 24 Nov 2020 06:08:52 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5D23D2076C for ; Tue, 24 Nov 2020 06:08:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729347AbgKXGIG (ORCPT ); Tue, 24 Nov 2020 01:08:06 -0500 Received: from mga17.intel.com ([192.55.52.151]:1868 "EHLO mga17.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728113AbgKXGIE (ORCPT ); Tue, 24 Nov 2020 01:08:04 -0500 IronPort-SDR: nyM+vhB+2nJyCk4vfoPeDA+arcWNmWVIyzKFdcyT/ZbFKAWoRRdqGK2BH0eZPtgqnZDNh7KuIn 9ZPH6oWwvy+A== X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="151736677" X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="151736677" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: 
from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:04 -0800 IronPort-SDR: f/dY95uTTv9GesN8Y02GekMhGpG18T+cyerrpUpNz8pm5s2ET/W+heceKCXRjQXjVvhP/MUpoo xCP1iCZSWDVQ== X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="361740977" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:04 -0800 From: ira.weiny@intel.com To: Andrew Morton Cc: Ira Weiny , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Thomas Gleixner , Dave Hansen , Matthew Wilcox , Christoph Hellwig , Dan Williams , Al Viro , Eric Biggers , Luis Chamberlain , David Howells , Chris Mason , Josef Bacik , David Sterba , Steve French , Jaegeuk Kim , Chao Yu , Nicolas Pitre , "Martin K. Petersen" , Brian King , Greg Kroah-Hartman , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Kirti Wankhede , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 03/17] drivers/gpu: Convert to mem*_page() Date: Mon, 23 Nov 2020 22:07:41 -0800 Message-Id: <20201124060755.1405602-4-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com> References: <20201124060755.1405602-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Ira Weiny The pattern of kmap/mem*/kunmap is repeated. Use the new mem*_page() calls instead. Cc: Patrik Jakobsson Cc: Jani Nikula Cc: Joonas Lahtinen Cc: Rodrigo Vivi Signed-off-by: Ira Weiny --- drivers/gpu/drm/gma500/gma_display.c | 7 +++---- drivers/gpu/drm/gma500/mmu.c | 4 ++-- drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 6 ++---- drivers/gpu/drm/i915/gt/intel_gtt.c | 9 ++------- drivers/gpu/drm/i915/gt/shmem_utils.c | 8 +++----- 5 files changed, 12 insertions(+), 22 deletions(-) diff --git a/drivers/gpu/drm/gma500/gma_display.c b/drivers/gpu/drm/gma500/gma_display.c index 3df6d6e850f5..f81114594211 100644 --- a/drivers/gpu/drm/gma500/gma_display.c +++ b/drivers/gpu/drm/gma500/gma_display.c @@ -9,6 +9,7 @@ #include #include +#include #include #include @@ -334,7 +335,7 @@ int gma_crtc_cursor_set(struct drm_crtc *crtc, struct gtt_range *gt; struct gtt_range *cursor_gt = gma_crtc->cursor_gt; struct drm_gem_object *obj; - void *tmp_dst, *tmp_src; + void *tmp_dst; int ret = 0, i, cursor_pages; /* If we didn't get a handle then turn the cursor off */ @@ -400,9 +401,7 @@ int gma_crtc_cursor_set(struct drm_crtc *crtc, /* Copy the cursor to cursor mem */ tmp_dst = dev_priv->vram_addr + cursor_gt->offset; for (i = 0; i < cursor_pages; i++) { - tmp_src = kmap(gt->pages[i]); - memcpy(tmp_dst, tmp_src, PAGE_SIZE); - kunmap(gt->pages[i]); + memcpy_from_page(tmp_dst, gt->pages[i], 0, PAGE_SIZE); tmp_dst += PAGE_SIZE; } diff --git a/drivers/gpu/drm/gma500/mmu.c b/drivers/gpu/drm/gma500/mmu.c index 505044c9a673..8a0856c7f439 100644 --- a/drivers/gpu/drm/gma500/mmu.c +++ b/drivers/gpu/drm/gma500/mmu.c @@ -5,6 +5,7 @@ **************************************************************************/ #include +#include #include "mmu.h" #include "psb_drv.h" @@ -204,8 +205,7 @@ struct psb_mmu_pd *psb_mmu_alloc_pd(struct psb_mmu_driver *driver, kunmap(pd->p); - clear_page(kmap(pd->dummy_page)); - kunmap(pd->dummy_page); + memzero_page(pd->dummy_page, 0, PAGE_SIZE); pd->tables = vmalloc_user(sizeof(struct psb_mmu_pt *) * 1024); if (!pd->tables) 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index 75e8b71c18b9..8a25e08edd18 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -558,7 +558,7 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *dev_priv, do { unsigned int len = min_t(typeof(size), size, PAGE_SIZE); struct page *page; - void *pgdata, *vaddr; + void *pgdata; err = pagecache_write_begin(file, file->f_mapping, offset, len, 0, @@ -566,9 +566,7 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *dev_priv, if (err < 0) goto fail; - vaddr = kmap(page); - memcpy(vaddr, data, len); - kunmap(page); + memcpy_to_page(page, 0, data, len); err = pagecache_write_end(file, file->f_mapping, offset, len, len, diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c index 3f1114b58b01..f3d7c601d362 100644 --- a/drivers/gpu/drm/i915/gt/intel_gtt.c +++ b/drivers/gpu/drm/i915/gt/intel_gtt.c @@ -153,13 +153,8 @@ static void poison_scratch_page(struct drm_i915_gem_object *scratch) if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)) val = POISON_FREE; - for_each_sgt_page(page, sgt, scratch->mm.pages) { - void *vaddr; - - vaddr = kmap(page); - memset(vaddr, val, PAGE_SIZE); - kunmap(page); - } + for_each_sgt_page(page, sgt, scratch->mm.pages) + memset_page(page, val, 0, PAGE_SIZE); } int setup_scratch_page(struct i915_address_space *vm) diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c index f011ea42487e..2d5f1f2e803d 100644 --- a/drivers/gpu/drm/i915/gt/shmem_utils.c +++ b/drivers/gpu/drm/i915/gt/shmem_utils.c @@ -95,19 +95,17 @@ static int __shmem_rw(struct file *file, loff_t off, unsigned int this = min_t(size_t, PAGE_SIZE - offset_in_page(off), len); struct page *page; - void *vaddr; page = shmem_read_mapping_page_gfp(file->f_mapping, pfn, GFP_KERNEL); if (IS_ERR(page)) return PTR_ERR(page); - vaddr = kmap(page); if (write) - memcpy(vaddr + offset_in_page(off), ptr, this); + memcpy_to_page(page, offset_in_page(off), ptr, this); else - memcpy(ptr, vaddr + offset_in_page(off), this); - kunmap(page); + memcpy_from_page(ptr, page, offset_in_page(off), this); + put_page(page); len -= this; From patchwork Tue Nov 24 06:07:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 11927231 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 617A8C83021 for ; Tue, 24 Nov 2020 06:08:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2F93F2080A for ; Tue, 24 Nov 2020 06:08:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729614AbgKXGIx (ORCPT ); Tue, 24 Nov 2020 01:08:53 -0500 Received: from mga14.intel.com ([192.55.52.115]:51611 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729344AbgKXGIF (ORCPT ); Tue, 24 Nov 2020 01:08:05 -0500 IronPort-SDR: 
6fG57QLT5M3i0ErJL+KAAIrc22J6QvDpxe+VvNWmu54XDTSnQBSpHTZwjTeh9ijPFJsbLk66xj +f0R0i0eXD6g== X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="171114718" X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="171114718" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:05 -0800 IronPort-SDR: lfFU9ijJ/yT2fs02p0urCEZFiJL9rzYwbCr8oqpexUvHQPy+rjpMQ4ao3qTWUilsOyx2pqDXMo AuGffm+kjQLw== X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="332448426" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:04 -0800 From: ira.weiny@intel.com To: Andrew Morton Cc: Ira Weiny , David Howells , Thomas Gleixner , Dave Hansen , Matthew Wilcox , Christoph Hellwig , Dan Williams , Al Viro , Eric Biggers , Luis Chamberlain , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , Chris Mason , Josef Bacik , David Sterba , Steve French , Jaegeuk Kim , Chao Yu , Nicolas Pitre , "Martin K. Petersen" , Brian King , Greg Kroah-Hartman , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Kirti Wankhede , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 04/17] fs/afs: Convert to memzero_page() Date: Mon, 23 Nov 2020 22:07:42 -0800 Message-Id: <20201124060755.1405602-5-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com> References: <20201124060755.1405602-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Ira Weiny Convert the kmap()/memset()/kunmap() pattern to memzero_page().
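For illustration only (a sketch, not part of the change itself; 'page', 'p', and 'len' mirror the variables used in afs_fill_page()), the conversion collapses the open-coded sequence into a single call:

	/* before: map the page, zero the range, unmap */
	void *data = kmap(page);
	memset(data + p, 0, len);
	kunmap(page);

	/* after */
	memzero_page(page, p, len);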
Cc: David Howells Signed-off-by: Ira Weiny Acked-by: David Howells --- fs/afs/write.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/fs/afs/write.c b/fs/afs/write.c index 50371207f327..ed7419de0178 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -30,7 +30,6 @@ static int afs_fill_page(struct afs_vnode *vnode, struct key *key, { struct afs_read *req; size_t p; - void *data; int ret; _enter(",,%llu", (unsigned long long)pos); @@ -38,9 +37,7 @@ static int afs_fill_page(struct afs_vnode *vnode, struct key *key, if (pos >= vnode->vfs_inode.i_size) { p = pos & ~PAGE_MASK; ASSERTCMP(p + len, <=, PAGE_SIZE); - data = kmap(page); - memset(data + p, 0, len); - kunmap(page); + memzero_page(page, p, len); return 0; } From patchwork Tue Nov 24 06:07:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 11927221 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9151DC8301C for ; Tue, 24 Nov 2020 06:08:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4A56020857 for ; Tue, 24 Nov 2020 06:08:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729597AbgKXGIo (ORCPT ); Tue, 24 Nov 2020 01:08:44 -0500 Received: from mga11.intel.com ([192.55.52.93]:16839 "EHLO mga11.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729350AbgKXGIH (ORCPT ); Tue, 24 Nov 2020 01:08:07 -0500 IronPort-SDR: ql3PJgIc1/hF/H/yQyQB86AgvBL50dKyqzz3AZm6KbMYi0XOzeZ0HG0Xu0FuNuigUhtLgDVrMc YTWa5uVnZSJA== X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="168386490" X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="168386490" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:06 -0800 IronPort-SDR: vMyWT9Ts1pPXRYY4Qgt5x6uvQzFmPU/ilBPC97n1U5c8nffZTKCsIj4piilnFL2QZZXR5XVNZs xaqdwBtAaXuw== X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="327458680" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:05 -0800 From: ira.weiny@intel.com To: Andrew Morton Cc: Ira Weiny , Chris Mason , Josef Bacik , David Sterba , Thomas Gleixner , Dave Hansen , Matthew Wilcox , Christoph Hellwig , Dan Williams , Al Viro , Eric Biggers , Luis Chamberlain , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Howells , Steve French , Jaegeuk Kim , Chao Yu , Nicolas Pitre , "Martin K. 
Petersen" , Brian King , Greg Kroah-Hartman , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Kirti Wankhede , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 05/17] fs/btrfs: Convert to memzero_page() Date: Mon, 23 Nov 2020 22:07:43 -0800 Message-Id: <20201124060755.1405602-6-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com> References: <20201124060755.1405602-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Ira Weiny Remove the kmap/memset()/kunmap pattern and use the new memzero_page() call where possible. Cc: Chris Mason Cc: Josef Bacik Cc: David Sterba Signed-off-by: Ira Weiny --- fs/btrfs/inode.c | 21 +++++---------------- 1 file changed, 5 insertions(+), 16 deletions(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index da58c58ef9aa..b0bcf9493236 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -590,17 +590,12 @@ static noinline int compress_file_range(struct async_chunk *async_chunk) if (!ret) { unsigned long offset = offset_in_page(total_compressed); struct page *page = pages[nr_pages - 1]; - char *kaddr; /* zero the tail end of the last page, we might be * sending it down to disk */ - if (offset) { - kaddr = kmap_atomic(page); - memset(kaddr + offset, 0, - PAGE_SIZE - offset); - kunmap_atomic(kaddr); - } + if (offset) + memzero_page(page, offset, PAGE_SIZE - offset); will_compress = 1; } } @@ -6485,11 +6480,8 @@ static noinline int uncompress_inline(struct btrfs_path *path, * cover that region here. */ - if (max_size + pg_offset < PAGE_SIZE) { - char *map = kmap(page); - memset(map + pg_offset + max_size, 0, PAGE_SIZE - max_size - pg_offset); - kunmap(page); - } + if (max_size + pg_offset < PAGE_SIZE) + memzero_page(page, pg_offset + max_size, PAGE_SIZE - max_size - pg_offset); kfree(tmp); return ret; } @@ -8245,7 +8237,6 @@ vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf) struct btrfs_ordered_extent *ordered; struct extent_state *cached_state = NULL; struct extent_changeset *data_reserved = NULL; - char *kaddr; unsigned long zero_start; loff_t size; vm_fault_t ret; @@ -8352,10 +8343,8 @@ vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf) zero_start = PAGE_SIZE; if (zero_start != PAGE_SIZE) { - kaddr = kmap(page); - memset(kaddr + zero_start, 0, PAGE_SIZE - zero_start); + memzero_page(page, zero_start, PAGE_SIZE - zero_start); flush_dcache_page(page); - kunmap(page); } ClearPageChecked(page); set_page_dirty(page); From patchwork Tue Nov 24 06:07:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 11927225 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BB648C8301A for ; Tue, 24 Nov 2020 06:08:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 775972085B for ; Tue, 24 Nov 2020 06:08:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via 
listexpand id S1729584AbgKXGIo (ORCPT ); Tue, 24 Nov 2020 01:08:44 -0500 Received: from mga12.intel.com ([192.55.52.136]:4491 "EHLO mga12.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729351AbgKXGIH (ORCPT ); Tue, 24 Nov 2020 01:08:07 -0500 IronPort-SDR: m7IXniPCvH7SWoAM2uOMWaY2cl3ahyGB3rY+pTYLi0uTe3/C1O7Uq/7H/OhlLilaVH0lIjhGTh rayQ9XZWNkow== X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="151154775" X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="151154775" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:06 -0800 IronPort-SDR: XOXk74SQJN6UfTfgRjtMy1UnoNMmGU+yZXHpoOMKvkhtW0wB0obE4fYMy99iOsVJC2pWlNBv/X KY4+D8/FHXRQ== X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="534733453" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:06 -0800 From: ira.weiny@intel.com To: Andrew Morton Cc: Ira Weiny , Thomas Gleixner , Dave Hansen , Matthew Wilcox , Christoph Hellwig , Dan Williams , Al Viro , Eric Biggers , Luis Chamberlain , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Howells , Chris Mason , Josef Bacik , David Sterba , Steve French , Jaegeuk Kim , Chao Yu , Nicolas Pitre , "Martin K. Petersen" , Brian King , Greg Kroah-Hartman , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Kirti Wankhede , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 06/17] fs/hfs: Convert to mem*_page() interface Date: Mon, 23 Nov 2020 22:07:44 -0800 Message-Id: <20201124060755.1405602-7-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com> References: <20201124060755.1405602-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Ira Weiny Where possible remove kmap/mem*/kunmap in favor of the new mem*_page() calls. 
Signed-off-by: Ira Weiny --- fs/hfs/bnode.c | 13 ++++--------- 1 file changed, 4 insertions(+), 9 deletions(-) diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c index b63a4df7327b..56037ae5ba69 100644 --- a/fs/hfs/bnode.c +++ b/fs/hfs/bnode.c @@ -23,8 +23,7 @@ void hfs_bnode_read(struct hfs_bnode *node, void *buf, off += node->page_offset; page = node->page[0]; - memcpy(buf, kmap(page) + off, len); - kunmap(page); + memcpy_from_page(buf, page, off, len); } u16 hfs_bnode_read_u16(struct hfs_bnode *node, int off) @@ -65,8 +64,7 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len) off += node->page_offset; page = node->page[0]; - memcpy(kmap(page) + off, buf, len); - kunmap(page); + memcpy_to_page(page, off, buf, len); set_page_dirty(page); } @@ -90,8 +88,7 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len) off += node->page_offset; page = node->page[0]; - memset(kmap(page) + off, 0, len); - kunmap(page); + memzero_page(page, off, len); set_page_dirty(page); } @@ -108,9 +105,7 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst, src_page = src_node->page[0]; dst_page = dst_node->page[0]; - memcpy(kmap(dst_page) + dst, kmap(src_page) + src, len); - kunmap(src_page); - kunmap(dst_page); + memcpy_page(dst_page, dst, src_page, src, len); set_page_dirty(dst_page); } From patchwork Tue Nov 24 06:07:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 11927205 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CE48BC56201 for ; Tue, 24 Nov 2020 06:08:52 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 886922080A for ; Tue, 24 Nov 2020 06:08:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729369AbgKXGIH (ORCPT ); Tue, 24 Nov 2020 01:08:07 -0500 Received: from mga05.intel.com ([192.55.52.43]:21151 "EHLO mga05.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729352AbgKXGIH (ORCPT ); Tue, 24 Nov 2020 01:08:07 -0500 IronPort-SDR: 1XP7hxXnnyO7UCUFwtMMPwR4D4a4D8wXwdbK97TWPfcOZxZgRHfmIO3zzIFXSuxcQUsXquk4HI I7AnDsF6OI/g== X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="256605346" X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="256605346" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:06 -0800 IronPort-SDR: mXooYBLIB0fnow1eFoG+61b71uCezpuGZMLYWik/u9TI+UyKJ1Kj8ogGFLjt4ha87h0bKNOxA9 5fNxBcqn6O7g== X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="370307585" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:06 -0800 From: ira.weiny@intel.com To: Andrew Morton Cc: Ira Weiny , Steve French , Thomas Gleixner , Dave Hansen , Matthew Wilcox , Christoph Hellwig , Dan Williams , Al Viro , Eric Biggers , Luis 
Chamberlain , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Howells , Chris Mason , Josef Bacik , David Sterba , Jaegeuk Kim , Chao Yu , Nicolas Pitre , "Martin K. Petersen" , Brian King , Greg Kroah-Hartman , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Kirti Wankhede , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 07/17] fs/cifs: Convert to memcpy_page() Date: Mon, 23 Nov 2020 22:07:45 -0800 Message-Id: <20201124060755.1405602-8-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com> References: <20201124060755.1405602-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Ira Weiny Use memcpy_page() instead of open coding kmap/memcpy/kunmap. Cc: Steve French Signed-off-by: Ira Weiny --- fs/cifs/smb2ops.c | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-) diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c index 504766cb6c19..d1088ee9a0e6 100644 --- a/fs/cifs/smb2ops.c +++ b/fs/cifs/smb2ops.c @@ -4223,17 +4223,13 @@ smb3_init_transform_rq(struct TCP_Server_Info *server, int num_rqst, /* copy pages form the old */ for (j = 0; j < npages; j++) { - char *dst, *src; unsigned int offset, len; rqst_page_get_length(&new_rq[i], j, &len, &offset); - dst = (char *) kmap(new_rq[i].rq_pages[j]) + offset; - src = (char *) kmap(old_rq[i - 1].rq_pages[j]) + offset; - - memcpy(dst, src, len); - kunmap(new_rq[i].rq_pages[j]); - kunmap(old_rq[i - 1].rq_pages[j]); + memcpy_page(new_rq[i].rq_pages[j], offset, + old_rq[i - 1].rq_pages[j], offset, + len); } } From patchwork Tue Nov 24 06:07:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 11927235 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 35CA3C56201 for ; Tue, 24 Nov 2020 06:08:56 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 03D572076C for ; Tue, 24 Nov 2020 06:08:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729407AbgKXGIJ (ORCPT ); Tue, 24 Nov 2020 01:08:09 -0500 Received: from mga01.intel.com ([192.55.52.88]:19536 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729352AbgKXGIJ (ORCPT ); Tue, 24 Nov 2020 01:08:09 -0500 IronPort-SDR: geVYEDrceckE63TiQ+F09/PWTBA1H0KorKtiDXB8rjthEIrk+wtqItL5xSTBZ9V/8+5cRWJk04 5NsgheQjWpRA== X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="190018255" X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="190018255" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga007.jf.intel.com ([10.7.209.58]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:08 -0800 IronPort-SDR: 6A3FRQDZPACcFAIZnE7ekmb80a3WtXBzNicKSPp2urUQ0L3+ACQ+pyFW78Rgd1bGhbuGXgX/OA xW9/+TJNKiKA== X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; 
d="scan'208";a="370270493" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:06 -0800 From: ira.weiny@intel.com To: Andrew Morton Cc: Ira Weiny , Thomas Gleixner , Dave Hansen , Matthew Wilcox , Christoph Hellwig , Dan Williams , Al Viro , Eric Biggers , Luis Chamberlain , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Howells , Chris Mason , Josef Bacik , David Sterba , Steve French , Jaegeuk Kim , Chao Yu , Nicolas Pitre , "Martin K. Petersen" , Brian King , Greg Kroah-Hartman , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Kirti Wankhede , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 08/17] fs/hfsplus: Convert to mem*_page() Date: Mon, 23 Nov 2020 22:07:46 -0800 Message-Id: <20201124060755.1405602-9-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com> References: <20201124060755.1405602-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Ira Weiny Remove the pattern of kmap/mem*/kunmap in favor of the new mem*_page() functions which handle the kmap'ing correctly for us. Signed-off-by: Ira Weiny --- fs/hfsplus/bnode.c | 53 +++++++++++++--------------------------------- 1 file changed, 15 insertions(+), 38 deletions(-) diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c index 177fae4e6581..c4347b1cb36f 100644 --- a/fs/hfsplus/bnode.c +++ b/fs/hfsplus/bnode.c @@ -29,14 +29,12 @@ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len) off &= ~PAGE_MASK; l = min_t(int, len, PAGE_SIZE - off); - memcpy(buf, kmap(*pagep) + off, l); - kunmap(*pagep); + memcpy_from_page(buf, *pagep, off, l); while ((len -= l) != 0) { buf += l; l = min_t(int, len, PAGE_SIZE); - memcpy(buf, kmap(*++pagep), l); - kunmap(*pagep); + memcpy_from_page(buf, *++pagep, 0, l); } } @@ -82,16 +80,14 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len) off &= ~PAGE_MASK; l = min_t(int, len, PAGE_SIZE - off); - memcpy(kmap(*pagep) + off, buf, l); + memcpy_to_page(*pagep, off, buf, l); set_page_dirty(*pagep); - kunmap(*pagep); while ((len -= l) != 0) { buf += l; l = min_t(int, len, PAGE_SIZE); - memcpy(kmap(*++pagep), buf, l); + memcpy_to_page(*++pagep, 0, buf, l); set_page_dirty(*pagep); - kunmap(*pagep); } } @@ -112,15 +108,13 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len) off &= ~PAGE_MASK; l = min_t(int, len, PAGE_SIZE - off); - memset(kmap(*pagep) + off, 0, l); + memzero_page(*pagep, off, l); set_page_dirty(*pagep); - kunmap(*pagep); while ((len -= l) != 0) { l = min_t(int, len, PAGE_SIZE); - memset(kmap(*++pagep), 0, l); + memzero_page(*++pagep, 0, l); set_page_dirty(*pagep); - kunmap(*pagep); } } @@ -142,17 +136,13 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst, if (src == dst) { l = min_t(int, len, PAGE_SIZE - src); - memcpy(kmap(*dst_page) + src, kmap(*src_page) + src, l); - kunmap(*src_page); + memcpy_page(*dst_page, src, *src_page, src, l); set_page_dirty(*dst_page); - kunmap(*dst_page); while ((len -= l) != 0) { l = min_t(int, len, PAGE_SIZE); - memcpy(kmap(*++dst_page), kmap(*++src_page), l); - kunmap(*src_page); + memcpy_page(*++dst_page, 0, *++src_page, 0, l); set_page_dirty(*dst_page); - kunmap(*dst_page); } } else { void *src_ptr, *dst_ptr; @@ -202,21 +192,16 @@ void hfs_bnode_move(struct hfs_bnode 
*node, int dst, int src, int len) if (src == dst) { while (src < len) { - memmove(kmap(*dst_page), kmap(*src_page), src); - kunmap(*src_page); + memmove_page(*dst_page, 0, *src_page, 0, src); set_page_dirty(*dst_page); - kunmap(*dst_page); len -= src; src = PAGE_SIZE; src_page--; dst_page--; } src -= len; - memmove(kmap(*dst_page) + src, - kmap(*src_page) + src, len); - kunmap(*src_page); + memmove_page(*dst_page, src, *src_page, src, len); set_page_dirty(*dst_page); - kunmap(*dst_page); } else { void *src_ptr, *dst_ptr; @@ -251,19 +236,13 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len) if (src == dst) { l = min_t(int, len, PAGE_SIZE - src); - memmove(kmap(*dst_page) + src, - kmap(*src_page) + src, l); - kunmap(*src_page); + memmove_page(*dst_page, src, *src_page, src, l); set_page_dirty(*dst_page); - kunmap(*dst_page); while ((len -= l) != 0) { l = min_t(int, len, PAGE_SIZE); - memmove(kmap(*++dst_page), - kmap(*++src_page), l); - kunmap(*src_page); + memmove_page(*++dst_page, 0, *++src_page, 0, l); set_page_dirty(*dst_page); - kunmap(*dst_page); } } else { void *src_ptr, *dst_ptr; @@ -593,14 +572,12 @@ struct hfs_bnode *hfs_bnode_create(struct hfs_btree *tree, u32 num) } pagep = node->page; - memset(kmap(*pagep) + node->page_offset, 0, - min_t(int, PAGE_SIZE, tree->node_size)); + memzero_page(*pagep, node->page_offset, + min_t(int, PAGE_SIZE, tree->node_size)); set_page_dirty(*pagep); - kunmap(*pagep); for (i = 1; i < tree->pages_per_bnode; i++) { - memset(kmap(*++pagep), 0, PAGE_SIZE); + memzero_page(*++pagep, 0, PAGE_SIZE); set_page_dirty(*pagep); - kunmap(*pagep); } clear_bit(HFS_BNODE_NEW, &node->flags); wake_up(&node->lock_wq); From patchwork Tue Nov 24 06:07:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 11927207 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B40B1C64E7B for ; Tue, 24 Nov 2020 06:08:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 841A22076C for ; Tue, 24 Nov 2020 06:08:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729428AbgKXGIK (ORCPT ); Tue, 24 Nov 2020 01:08:10 -0500 Received: from mga09.intel.com ([134.134.136.24]:46773 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729392AbgKXGIK (ORCPT ); Tue, 24 Nov 2020 01:08:10 -0500 IronPort-SDR: tSukXnsy5ujVf6P5I2Qes+gmtFQamP/tFM3ZqLsx8prryjz7t96AJ3lB3OlYkJFJfxeedLPFEJ oTpxHyzxpU7Q== X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="172052674" X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="172052674" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:08 -0800 IronPort-SDR: dPkuJ3cNpwAQdF4IO+jQMXzUYsWknEpJ9XFX9SluvT2WJYgM+NOtXoIN5Iiym2a6VoF/+lWsq2 rgtY2MOcyv9g== X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; 
d="scan'208";a="343047661" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:07 -0800 From: ira.weiny@intel.com To: Andrew Morton Cc: Ira Weiny , Jaegeuk Kim , Chao Yu , Thomas Gleixner , Dave Hansen , Matthew Wilcox , Christoph Hellwig , Dan Williams , Al Viro , Eric Biggers , Luis Chamberlain , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Howells , Chris Mason , Josef Bacik , David Sterba , Steve French , Nicolas Pitre , "Martin K. Petersen" , Brian King , Greg Kroah-Hartman , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Kirti Wankhede , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 09/17] fs/f2fs: Remove f2fs_copy_page() Date: Mon, 23 Nov 2020 22:07:47 -0800 Message-Id: <20201124060755.1405602-10-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com> References: <20201124060755.1405602-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Ira Weiny The new common function memcpy_page() provides this exactly functionality. Remove the local f2fs_copy_page() and call memcpy_page() instead. Cc: Jaegeuk Kim Cc: Chao Yu Signed-off-by: Ira Weiny Acked-by: Chao Yu --- fs/f2fs/f2fs.h | 10 ---------- fs/f2fs/file.c | 3 ++- 2 files changed, 2 insertions(+), 11 deletions(-) diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index cb700d797296..546dba7d7cc2 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -2428,16 +2428,6 @@ static inline struct page *f2fs_pagecache_get_page( return pagecache_get_page(mapping, index, fgp_flags, gfp_mask); } -static inline void f2fs_copy_page(struct page *src, struct page *dst) -{ - char *src_kaddr = kmap(src); - char *dst_kaddr = kmap(dst); - - memcpy(dst_kaddr, src_kaddr, PAGE_SIZE); - kunmap(dst); - kunmap(src); -} - static inline void f2fs_put_page(struct page *page, int unlock) { if (!page) diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c index ee861c6d9ff0..c38aa186a7c6 100644 --- a/fs/f2fs/file.c +++ b/fs/f2fs/file.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include #include @@ -1234,7 +1235,7 @@ static int __clone_blkaddrs(struct inode *src_inode, struct inode *dst_inode, f2fs_put_page(psrc, 1); return PTR_ERR(pdst); } - f2fs_copy_page(psrc, pdst); + memcpy_page(psrc, 0, pdst, 0, PAGE_SIZE); set_page_dirty(pdst); f2fs_put_page(pdst, 1); f2fs_put_page(psrc, 1); From patchwork Tue Nov 24 06:07:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 11927223 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1BE4DC83013 for ; Tue, 24 Nov 2020 06:08:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DC24D2076C for ; Tue, 24 Nov 2020 06:08:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org 
via listexpand id S1729540AbgKXGIc (ORCPT ); Tue, 24 Nov 2020 01:08:32 -0500 Received: from mga02.intel.com ([134.134.136.20]:57123 "EHLO mga02.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729399AbgKXGIJ (ORCPT ); Tue, 24 Nov 2020 01:08:09 -0500 IronPort-SDR: MfARRI3a/gCguuWhdR9feRVLUnT0uHlS6WWSIDwUgEkj1JKN4946AqglsXfduVZTInRLtIYszY LoWhFHABgK5A== X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="158937248" X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="158937248" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:09 -0800 IronPort-SDR: a1Zw8ImtMPR9ZcFrFVbYJZp/rYvTxWYbGw2SIqV5WRiv4JkslYeKY7oUGZWuibQKjllpUJbnil yeHttftjZVyg== X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="313139858" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:08 -0800 From: ira.weiny@intel.com To: Andrew Morton Cc: Ira Weiny , Christoph Hellwig , Thomas Gleixner , Dave Hansen , Matthew Wilcox , Dan Williams , Al Viro , Eric Biggers , Luis Chamberlain , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Howells , Chris Mason , Josef Bacik , David Sterba , Steve French , Jaegeuk Kim , Chao Yu , Nicolas Pitre , "Martin K. Petersen" , Brian King , Greg Kroah-Hartman , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Kirti Wankhede , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 10/17] fs/freevxfs: Use memcpy_to_page() Date: Mon, 23 Nov 2020 22:07:48 -0800 Message-Id: <20201124060755.1405602-11-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com> References: <20201124060755.1405602-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Ira Weiny Remove kmap/memcpy/kunmap pattern in favor of the new memcpy_to_page() Cc: Christoph Hellwig Signed-off-by: Ira Weiny --- fs/freevxfs/vxfs_immed.c | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/fs/freevxfs/vxfs_immed.c b/fs/freevxfs/vxfs_immed.c index bfc780c682fb..d185fa67b82f 100644 --- a/fs/freevxfs/vxfs_immed.c +++ b/fs/freevxfs/vxfs_immed.c @@ -67,12 +67,8 @@ vxfs_immed_readpage(struct file *fp, struct page *pp) { struct vxfs_inode_info *vip = VXFS_INO(pp->mapping->host); u_int64_t offset = (u_int64_t)pp->index << PAGE_SHIFT; - caddr_t kaddr; - kaddr = kmap(pp); - memcpy(kaddr, vip->vii_immed.vi_immed + offset, PAGE_SIZE); - kunmap(pp); - + memcpy_to_page(pp, 0, vip->vii_immed.vi_immed + offset, PAGE_SIZE); flush_dcache_page(pp); SetPageUptodate(pp); unlock_page(pp); From patchwork Tue Nov 24 06:07:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 11927233 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by 
smtp.lore.kernel.org (Postfix) with ESMTP id B9AAAC63798 for ; Tue, 24 Nov 2020 06:08:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 83D0020857 for ; Tue, 24 Nov 2020 06:08:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729574AbgKXGIh (ORCPT ); Tue, 24 Nov 2020 01:08:37 -0500 Received: from mga01.intel.com ([192.55.52.88]:19536 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729394AbgKXGIJ (ORCPT ); Tue, 24 Nov 2020 01:08:09 -0500 IronPort-SDR: 8aAdjn1A9sPIz/WfzfctdS4DrGiBWsRhfJSNpv0BRb+hbUHL4JbF+nWcSRm3Qj17VsbUQdF7OX OLAy4Avl9ZJg== X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="190018260" X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="190018260" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:09 -0800 IronPort-SDR: taQMImkhmEjtM7b3OBWdgwUOIuhOREAPpl273PPI4iZcnkxolkoTg1yHOTZj3fHHblmGNXOOgs UMsejyzPVMDQ== X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="364905156" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:09 -0800 From: ira.weiny@intel.com To: Andrew Morton Cc: Ira Weiny , Thomas Gleixner , Dave Hansen , Matthew Wilcox , Christoph Hellwig , Dan Williams , Al Viro , Eric Biggers , Luis Chamberlain , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Howells , Chris Mason , Josef Bacik , David Sterba , Steve French , Jaegeuk Kim , Chao Yu , Nicolas Pitre , "Martin K. Petersen" , Brian King , Greg Kroah-Hartman , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Kirti Wankhede , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 11/17] fs/reiserfs: Use memcpy_from_page() Date: Mon, 23 Nov 2020 22:07:49 -0800 Message-Id: <20201124060755.1405602-12-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com> References: <20201124060755.1405602-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Ira Weiny Remove the open coding of kmap/memcpy/kunmap and use the new memcpy_from_page() function. Signed-off-by: Ira Weiny --- fs/reiserfs/journal.c | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-) diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c index e98f99338f8f..e288bbbe80ff 100644 --- a/fs/reiserfs/journal.c +++ b/fs/reiserfs/journal.c @@ -4184,7 +4184,6 @@ static int do_journal_end(struct reiserfs_transaction_handle *th, int flags) /* copy all the real blocks into log area. 
dirty log blocks */ if (buffer_journaled(cn->bh)) { struct buffer_head *tmp_bh; - char *addr; struct page *page; tmp_bh = journal_getblk(sb, @@ -4194,11 +4193,9 @@ static int do_journal_end(struct reiserfs_transaction_handle *th, int flags) SB_ONDISK_JOURNAL_SIZE(sb))); set_buffer_uptodate(tmp_bh); page = cn->bh->b_page; - addr = kmap(page); - memcpy(tmp_bh->b_data, - addr + offset_in_page(cn->bh->b_data), - cn->bh->b_size); - kunmap(page); + memcpy_from_page(tmp_bh->b_data, page, + offset_in_page(cn->bh->b_data), + cn->bh->b_size); mark_buffer_dirty(tmp_bh); jindex++; set_buffer_journal_dirty(cn->bh); From patchwork Tue Nov 24 06:07:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 11927209 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5AC21C64E90 for ; Tue, 24 Nov 2020 06:08:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2B74D20857 for ; Tue, 24 Nov 2020 06:08:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729440AbgKXGIM (ORCPT ); Tue, 24 Nov 2020 01:08:12 -0500 Received: from mga06.intel.com ([134.134.136.31]:47294 "EHLO mga06.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729392AbgKXGIL (ORCPT ); Tue, 24 Nov 2020 01:08:11 -0500 IronPort-SDR: 1Z+3Tman/scoZ5qFR7ic5+jypipBycLcIB8HXHSQu49OP02nmd+t8D1tRBtT5S0tKB/ExT/RG4 PK5/lbbUvKpw== X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="233504033" X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="233504033" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga008.jf.intel.com ([10.7.209.65]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:10 -0800 IronPort-SDR: fSg3kho9RP5PF2oBZ9Oax6baDQfnypO/Zh2vyAMYdMBMqWduVgHU3/aejc8UCPzQECHJHD4ebm 6foDhsN7ZrhA== X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="358708241" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:09 -0800 From: ira.weiny@intel.com To: Andrew Morton Cc: Ira Weiny , Nicolas Pitre , Thomas Gleixner , Dave Hansen , Matthew Wilcox , Christoph Hellwig , Dan Williams , Al Viro , Eric Biggers , Luis Chamberlain , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Howells , Chris Mason , Josef Bacik , David Sterba , Steve French , Jaegeuk Kim , Chao Yu , "Martin K. 
Petersen" , Brian King , Greg Kroah-Hartman , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Kirti Wankhede , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 12/17] fs/cramfs: Use memcpy_from_page() Date: Mon, 23 Nov 2020 22:07:50 -0800 Message-Id: <20201124060755.1405602-13-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com> References: <20201124060755.1405602-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Ira Weiny Remove open coded kmap/memcpy/kunmap and use mempcy_from_page() instead. Cc: Nicolas Pitre Signed-off-by: Ira Weiny Acked-by: Nicolas Pitre --- fs/cramfs/inode.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c index 4b90cfd1ec36..996a3a32a01f 100644 --- a/fs/cramfs/inode.c +++ b/fs/cramfs/inode.c @@ -247,8 +247,7 @@ static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset, struct page *page = pages[i]; if (page) { - memcpy(data, kmap(page), PAGE_SIZE); - kunmap(page); + memcpy_from_page(data, page, 0, PAGE_SIZE); put_page(page); } else memset(data, 0, PAGE_SIZE); From patchwork Tue Nov 24 06:07:51 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 11927211 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 436E0C64E8A for ; Tue, 24 Nov 2020 06:08:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 076422080A for ; Tue, 24 Nov 2020 06:08:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729450AbgKXGIM (ORCPT ); Tue, 24 Nov 2020 01:08:12 -0500 Received: from mga05.intel.com ([192.55.52.43]:21151 "EHLO mga05.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729436AbgKXGIL (ORCPT ); Tue, 24 Nov 2020 01:08:11 -0500 IronPort-SDR: ICaA16cn1+bjeTk8rTrQvus1leZ5p+xVkHr1OPV7Ua9tr063k+pDWrIcZwUM10KnGCIJO9/V1/ RwtNHSJYGutw== X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="256605368" X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="256605368" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:11 -0800 IronPort-SDR: nxOA9JzerR2df98ZEZCOtQOv4YxKKey5Z+ptd/fz+0d5s84KFhcL2FX9m4me8mvLGLPHx9Zpgh v8YxPHFT6dWg== X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="402819429" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:10 -0800 From: ira.weiny@intel.com To: Andrew Morton Cc: Ira Weiny , "Martin K. 
Petersen" , Thomas Gleixner , Dave Hansen , Matthew Wilcox , Christoph Hellwig , Dan Williams , Al Viro , Eric Biggers , Luis Chamberlain , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Howells , Chris Mason , Josef Bacik , David Sterba , Steve French , Jaegeuk Kim , Chao Yu , Nicolas Pitre , Brian King , Greg Kroah-Hartman , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Kirti Wankhede , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 13/17] drivers/target: Convert to mem*_page() Date: Mon, 23 Nov 2020 22:07:51 -0800 Message-Id: <20201124060755.1405602-14-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com> References: <20201124060755.1405602-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Ira Weiny Remove the kmap/mem*()/kunmap patter and use the new mem*_page() functions. Cc: "Martin K. Petersen" Signed-off-by: Ira Weiny --- drivers/target/target_core_rd.c | 6 ++---- drivers/target/target_core_transport.c | 10 +++------- 2 files changed, 5 insertions(+), 11 deletions(-) diff --git a/drivers/target/target_core_rd.c b/drivers/target/target_core_rd.c index bf936bbeccfe..30bf0fcae519 100644 --- a/drivers/target/target_core_rd.c +++ b/drivers/target/target_core_rd.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include @@ -117,7 +118,6 @@ static int rd_allocate_sgl_table(struct rd_dev *rd_dev, struct rd_dev_sg_table * sizeof(struct scatterlist)); struct page *pg; struct scatterlist *sg; - unsigned char *p; while (total_sg_needed) { unsigned int chain_entry = 0; @@ -159,9 +159,7 @@ static int rd_allocate_sgl_table(struct rd_dev *rd_dev, struct rd_dev_sg_table * sg_assign_page(&sg[j], pg); sg[j].length = PAGE_SIZE; - p = kmap(pg); - memset(p, init_payload, PAGE_SIZE); - kunmap(pg); + memset_page(pg, init_payload, 0, PAGE_SIZE); } page_offset += sg_per_table; diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c index ff26ab0a5f60..4fec5c728344 100644 --- a/drivers/target/target_core_transport.c +++ b/drivers/target/target_core_transport.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include @@ -1689,15 +1690,10 @@ int target_submit_cmd_map_sgls(struct se_cmd *se_cmd, struct se_session *se_sess */ if (!(se_cmd->se_cmd_flags & SCF_SCSI_DATA_CDB) && se_cmd->data_direction == DMA_FROM_DEVICE) { - unsigned char *buf = NULL; if (sgl) - buf = kmap(sg_page(sgl)) + sgl->offset; - - if (buf) { - memset(buf, 0, sgl->length); - kunmap(sg_page(sgl)); - } + memzero_page(sg_page(sgl), sgl->offset, + sgl->length); } rc = transport_generic_map_mem_to_cmd(se_cmd, sgl, sgl_count, From patchwork Tue Nov 24 06:07:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 11927217 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CDFAEC8300E for ; Tue, 24 Nov 2020 06:08:55 +0000 
(UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A310120857 for ; Tue, 24 Nov 2020 06:08:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729519AbgKXGIZ (ORCPT ); Tue, 24 Nov 2020 01:08:25 -0500 Received: from mga01.intel.com ([192.55.52.88]:19536 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729446AbgKXGIM (ORCPT ); Tue, 24 Nov 2020 01:08:12 -0500 IronPort-SDR: t8Cd/D38h5O7O4iGwmPBo1xW6SCsRP51veIaWGbKw5b8zo251aLgYehy17WrQYba8DHIEMgb3E RVikmEi5017w== X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="190018273" X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="190018273" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:12 -0800 IronPort-SDR: qNw+swS2DnlAHVnC/90HMteyfrwtLJ4aJjU5/rJICAXxM7SUh0B0LW4TtqxJ5wrZ53ZhGJXSTj Bm8/GhwrlpRQ== X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="432504175" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:11 -0800 From: ira.weiny@intel.com To: Andrew Morton Cc: Ira Weiny , Brian King , Thomas Gleixner , Dave Hansen , Matthew Wilcox , Christoph Hellwig , Dan Williams , Al Viro , Eric Biggers , Luis Chamberlain , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Howells , Chris Mason , Josef Bacik , David Sterba , Steve French , Jaegeuk Kim , Chao Yu , Nicolas Pitre , "Martin K. Petersen" , Greg Kroah-Hartman , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Kirti Wankhede , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 14/17] drivers/scsi: Use memcpy_to_page() Date: Mon, 23 Nov 2020 22:07:52 -0800 Message-Id: <20201124060755.1405602-15-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com> References: <20201124060755.1405602-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Ira Weiny Remove kmap/mem*()/kunmap pattern and use memcpy_to_page() Cc: Brian King Signed-off-by: Ira Weiny --- drivers/scsi/ipr.c | 11 ++--------- 1 file changed, 2 insertions(+), 9 deletions(-) diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c index b0aa58d117cc..3cdd8db24270 100644 --- a/drivers/scsi/ipr.c +++ b/drivers/scsi/ipr.c @@ -3912,7 +3912,6 @@ static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist, { int bsize_elem, i, result = 0; struct scatterlist *sg; - void *kaddr; /* Determine the actual number of bytes per element */ bsize_elem = PAGE_SIZE * (1 << sglist->order); @@ -3923,10 +3922,7 @@ static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist, buffer += bsize_elem) { struct page *page = sg_page(sg); - kaddr = kmap(page); - memcpy(kaddr, buffer, bsize_elem); - kunmap(page); - + memcpy_to_page(page, 0, buffer, bsize_elem); sg->length = bsize_elem; if (result != 0) { @@ -3938,10 +3934,7 @@ static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist, if (len % bsize_elem) { struct page *page = sg_page(sg); - kaddr = kmap(page); - memcpy(kaddr, buffer, len % bsize_elem); - kunmap(page); - + memcpy_to_page(page, 0, buffer, len % bsize_elem); sg->length = len % bsize_elem; } From patchwork Tue Nov 24 
06:07:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 11927219 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 85B66C83016 for ; Tue, 24 Nov 2020 06:08:57 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3F37F2080A for ; Tue, 24 Nov 2020 06:08:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729492AbgKXGIP (ORCPT ); Tue, 24 Nov 2020 01:08:15 -0500 Received: from mga07.intel.com ([134.134.136.100]:49979 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729436AbgKXGIN (ORCPT ); Tue, 24 Nov 2020 01:08:13 -0500 IronPort-SDR: XRopqTVRsL0draeFH0naHaAZnGgc5EKZLi9o0wWX69p5E8B2oVgFgWdMgYfTcoOnrB90StRGJm KTb4vBqLK/Yg== X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="236034477" X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="236034477" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:12 -0800 IronPort-SDR: 6fq6inSQX/KlDmMegekc45BwmII8REjo4Y07pTJ+nBMrPU95SWZeBxNufVfu+r+sktbQANwqv5 Rwz7L1nF4pvg== X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="312448710" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:11 -0800 From: ira.weiny@intel.com To: Andrew Morton Cc: Ira Weiny , Greg Kroah-Hartman , Thomas Gleixner , Dave Hansen , Matthew Wilcox , Christoph Hellwig , Dan Williams , Al Viro , Eric Biggers , Luis Chamberlain , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Howells , Chris Mason , Josef Bacik , David Sterba , Steve French , Jaegeuk Kim , Chao Yu , Nicolas Pitre , "Martin K. 
Petersen" , Brian King , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Kirti Wankhede , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 15/17] drivers/staging: Use memcpy_to/from_page() Date: Mon, 23 Nov 2020 22:07:53 -0800 Message-Id: <20201124060755.1405602-16-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com> References: <20201124060755.1405602-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Ira Weiny Remove kmap/mem*()/kunmap pattern and use memcpy_to/from_page() Cc: Greg Kroah-Hartman Signed-off-by: Ira Weiny --- drivers/staging/rts5208/rtsx_transport.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/staging/rts5208/rtsx_transport.c b/drivers/staging/rts5208/rtsx_transport.c index 909a3e663ef6..e0e52bae953e 100644 --- a/drivers/staging/rts5208/rtsx_transport.c +++ b/drivers/staging/rts5208/rtsx_transport.c @@ -92,13 +92,13 @@ unsigned int rtsx_stor_access_xfer_buf(unsigned char *buffer, while (sglen > 0) { unsigned int plen = min(sglen, (unsigned int) PAGE_SIZE - poff); - unsigned char *ptr = kmap(page); if (dir == TO_XFER_BUF) - memcpy(ptr + poff, buffer + cnt, plen); + memcpy_to_page(page, poff, + buffer + cnt, plen); else - memcpy(buffer + cnt, ptr + poff, plen); - kunmap(page); + memcpy_from_page(buffer + cnt, page, + poff, plen); /* Start at the beginning of the next page */ poff = 0; From patchwork Tue Nov 24 06:07:54 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 11927213 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-13.9 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,UNWANTED_LANGUAGE_BODY, URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9B12CC71156 for ; Tue, 24 Nov 2020 06:08:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 566572076C for ; Tue, 24 Nov 2020 06:08:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729479AbgKXGIO (ORCPT ); Tue, 24 Nov 2020 01:08:14 -0500 Received: from mga04.intel.com ([192.55.52.120]:12216 "EHLO mga04.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729471AbgKXGIN (ORCPT ); Tue, 24 Nov 2020 01:08:13 -0500 IronPort-SDR: qiuAW+fuYyiLTpEe/ADkQR7jsX4DhftcycdUAqI/yKAPd+k8CyrVQ7FH6766Ssqga9MHhkZ8TW WgZqNIc44TKw== X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="169332360" X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="169332360" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:13 -0800 IronPort-SDR: ZoBwrB4JFK1e6EM9E54oD67IpZ8rD1Tv7iKjj3JBFFabN5qyxMwV+LWlqPBRk2bB7j7NfqkYON UCcWE7VVML8w== X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="546707751" Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by 
orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:12 -0800
From: ira.weiny@intel.com
To: Andrew Morton
Cc: Ira Weiny , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Thomas Gleixner , Dave Hansen , Matthew Wilcox , Christoph Hellwig , Dan Williams , Al Viro , Eric Biggers , Luis Chamberlain , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Howells , Chris Mason , Josef Bacik , David Sterba , Steve French , Jaegeuk Kim , Chao Yu , Nicolas Pitre , "Martin K. Petersen" , Brian King , Greg Kroah-Hartman , Kirti Wankhede , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 16/17] lib: Use memcpy_to/from_page()
Date: Mon, 23 Nov 2020 22:07:54 -0800
Message-Id: <20201124060755.1405602-17-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com>
References: <20201124060755.1405602-1-ira.weiny@intel.com>
MIME-Version: 1.0
Precedence: bulk
List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org

From: Ira Weiny

Remove the kmap/mem*()/kunmap pattern and use memcpy_to/from_page().

Cc: Alexei Starovoitov
Cc: Daniel Borkmann
Cc: "Jérôme Glisse"
Signed-off-by: Ira Weiny
---
 lib/test_bpf.c | 11 ++---------
 lib/test_hmm.c | 10 ++--------
 2 files changed, 4 insertions(+), 17 deletions(-)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index ca7d635bccd9..def048bc1c48 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -6499,25 +6500,17 @@ static void *generate_test_data(struct bpf_test *test, int sub)
		 * single fragment to the skb, filled with
		 * test->frag_data.
		 */
-		void *ptr;
-
		page = alloc_page(GFP_KERNEL);
		if (!page)
			goto err_kfree_skb;

-		ptr = kmap(page);
-		if (!ptr)
-			goto err_free_page;
-		memcpy(ptr, test->frag_data, MAX_DATA);
-		kunmap(page);
+		memcpy_to_page(page, 0, test->frag_data, MAX_DATA);
		skb_add_rx_frag(skb, 0, page, 0, MAX_DATA, MAX_DATA);
	}

	return skb;

-err_free_page:
-	__free_page(page);
 err_kfree_skb:
	kfree_skb(skb);
	return NULL;
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 80a78877bd93..6a5fe7c4088b 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -321,16 +321,13 @@ static int dmirror_do_read(struct dmirror *dmirror, unsigned long start,
	for (pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++) {
		void *entry;
		struct page *page;
-		void *tmp;

		entry = xa_load(&dmirror->pt, pfn);
		page = xa_untag_pointer(entry);
		if (!page)
			return -ENOENT;

-		tmp = kmap(page);
-		memcpy(ptr, tmp, PAGE_SIZE);
-		kunmap(page);
+		memcpy_from_page(ptr, page, 0, PAGE_SIZE);

		ptr += PAGE_SIZE;
		bounce->cpages++;
@@ -390,16 +387,13 @@ static int dmirror_do_write(struct dmirror *dmirror, unsigned long start,
	for (pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++) {
		void *entry;
		struct page *page;
-		void *tmp;

		entry = xa_load(&dmirror->pt, pfn);
		page = xa_untag_pointer(entry);
		if (!page || xa_pointer_tag(entry) != DPT_XA_TAG_WRITE)
			return -ENOENT;

-		tmp = kmap(page);
-		memcpy(tmp, ptr, PAGE_SIZE);
-		kunmap(page);
+		memcpy_to_page(page, 0, ptr, PAGE_SIZE);

		ptr += PAGE_SIZE;
		bounce->cpages++;
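The write direction collapses the same way. A minimal illustrative sketch
(again with hypothetical names, assuming the memcpy_to_page() and
memzero_page() helpers introduced earlier in the series) of filling a freshly
allocated page and clearing its tail:

#include <linux/gfp.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>

/* Copy len bytes of caller data into a new page and zero the remainder. */
static struct page *fill_page_sketch(const char *src, size_t len)
{
	struct page *page = alloc_page(GFP_KERNEL);

	if (!page)
		return NULL;

	memcpy_to_page(page, 0, src, len);
	if (len < PAGE_SIZE)
		memzero_page(page, len, PAGE_SIZE - len);

	return page;
}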
From patchwork Tue Nov 24 06:07:55 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 11927215
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org
X-Spam-Level:
X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0
Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CBE88C71155 for ; Tue, 24 Nov 2020 06:08:54 +0000 (UTC)
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7F98A2080A for ; Tue, 24 Nov 2020 06:08:54 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729488AbgKXGIP (ORCPT ); Tue, 24 Nov 2020 01:08:15 -0500
Received: from mga06.intel.com ([134.134.136.31]:47294 "EHLO mga06.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729472AbgKXGIN (ORCPT ); Tue, 24 Nov 2020 01:08:13 -0500
IronPort-SDR: N6CajM/9X0OWyIN8QWFV+9WRFPRMRJC3i+rOXweeiG7w/2G/dGdiqL4Y9lbDtTscdx7K8SlwSX HuTi+hyfePLg==
X-IronPort-AV: E=McAfee;i="6000,8403,9814"; a="233504045"
X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="233504045"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:13 -0800
IronPort-SDR: kXiqK5kkYEw8PWEc8T5dcga78jp44smEtUsFyYb5/zY3t6D9fGe2JMFe6RVWipJ+wdBOUf7mQ/ xM1WmOGTetfg==
X-IronPort-AV: E=Sophos;i="5.78,365,1599548400"; d="scan'208";a="478391608"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147]) by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Nov 2020 22:08:13 -0800
From: ira.weiny@intel.com
To: Andrew Morton
Cc: Ira Weiny , Kirti Wankhede , Thomas Gleixner , Dave Hansen , Matthew Wilcox , Christoph Hellwig , Dan Williams , Al Viro , Eric Biggers , Luis Chamberlain , Patrik Jakobsson , Jani Nikula , Joonas Lahtinen , Rodrigo Vivi , David Howells , Chris Mason , Josef Bacik , David Sterba , Steve French , Jaegeuk Kim , Chao Yu , Nicolas Pitre , "Martin K.
Petersen" , Brian King , Greg Kroah-Hartman , Alexei Starovoitov , Daniel Borkmann , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH 17/17] samples: Use memcpy_to/from_page() Date: Mon, 23 Nov 2020 22:07:55 -0800 Message-Id: <20201124060755.1405602-18-ira.weiny@intel.com> X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9 In-Reply-To: <20201124060755.1405602-1-ira.weiny@intel.com> References: <20201124060755.1405602-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org From: Ira Weiny Remove kmap/mem*()/kunmap pattern and use memcpy_to/from_page() Cc: Kirti Wankhede Signed-off-by: Ira Weiny --- samples/vfio-mdev/mbochs.c | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/samples/vfio-mdev/mbochs.c b/samples/vfio-mdev/mbochs.c index e03068917273..54fe04f63c66 100644 --- a/samples/vfio-mdev/mbochs.c +++ b/samples/vfio-mdev/mbochs.c @@ -30,6 +30,7 @@ #include #include #include +#include #include #include #include @@ -442,7 +443,6 @@ static ssize_t mdev_access(struct mdev_device *mdev, char *buf, size_t count, struct device *dev = mdev_dev(mdev); struct page *pg; loff_t poff; - char *map; int ret = 0; mutex_lock(&mdev_state->ops_lock); @@ -479,12 +479,10 @@ static ssize_t mdev_access(struct mdev_device *mdev, char *buf, size_t count, pos -= MBOCHS_MMIO_BAR_OFFSET; poff = pos & ~PAGE_MASK; pg = __mbochs_get_page(mdev_state, pos >> PAGE_SHIFT); - map = kmap(pg); if (is_write) - memcpy(map + poff, buf, count); + memcpy_to_page(pg, poff, buf, count); else - memcpy(buf, map + poff, count); - kunmap(pg); + memcpy_from_page(buf, pg, poff, count); put_page(pg); } else {