From patchwork Fri Jun 9 20:24:29 2017
X-Patchwork-Submitter: Dan Williams <dan.j.williams@intel.com>
X-Patchwork-Id: 9779207
X-Patchwork-Delegate: snitzer@redhat.com
From: Dan Williams <dan.j.williams@intel.com>
To: linux-nvdimm@lists.01.org
Cc: Jan Kara, Matthew Wilcox, x86@kernel.org, linux-kernel@vger.kernel.org,
 Jeff Moyer, dm-devel@redhat.com, Ingo Molnar, Oliver O'Halloran,
 viro@zeniv.linux.org.uk, "H. Peter Anvin", linux-fsdevel@vger.kernel.org,
 Thomas Gleixner, Ross Zwisler, hch@lst.de
Date: Fri, 09 Jun 2017 13:24:29 -0700
Message-ID: <149703986971.20620.10303247412197996310.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <149703982465.20620.14881139332926778446.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <149703982465.20620.14881139332926778446.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.17.1-9-g687f
Subject: [dm-devel] [PATCH v3 08/14] x86, dax, libnvdimm: move wb_cache_pmem() to libnvdimm
List-Id: device-mapper development <dm-devel@redhat.com>

With all calls to this routine redirected through the pmem driver, we
can kill the pmem API indirection. arch_wb_cache_pmem() is now
optionally supplied by the arch-specific asm/pmem.h. As before, pmem
flushing is only defined for x86_64, but it is straightforward to add
other archs in the future.
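The key pattern is the compile-time fallback added to
drivers/nvdimm/pmem.h below. As a minimal stand-alone illustration
(ordinary user-space C, not kernel code: ARCH_HAS_PMEM_API stands in
for the kernel's CONFIG_ARCH_HAS_PMEM_API and a printf stands in for
the real cache write-back), callers can invoke arch_wb_cache_pmem()
unconditionally and the no-API case compiles to a no-op:

    #include <stddef.h>
    #include <stdio.h>

    #ifdef ARCH_HAS_PMEM_API  /* stand-in for CONFIG_ARCH_HAS_PMEM_API */
    static inline void arch_wb_cache_pmem(void *addr, size_t size)
    {
            /* a real arch would write back the cache lines here */
            printf("write back %zu bytes at %p\n", size, addr);
    }
    #else
    /* no pmem API on this arch: flushing degrades to a no-op */
    static inline void arch_wb_cache_pmem(void *addr, size_t size)
    {
            (void)addr;
            (void)size;
    }
    #endif

    int main(void)
    {
            char buf[64];

            /* callers need no runtime capability check */
            arch_wb_cache_pmem(buf, sizeof(buf));
            return 0;
    }

This is why the wb_cache_pmem() wrapper and its arch_has_pmem_api()
test can be deleted: the decision moves from run time to compile time.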
Cc: Jan Kara
Cc: Jeff Moyer
Cc: Ingo Molnar
Cc: Christoph Hellwig
Cc: "H. Peter Anvin"
Cc: Thomas Gleixner
Cc: Oliver O'Halloran
Cc: Matthew Wilcox
Cc: Ross Zwisler
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Jan Kara
---
 arch/x86/include/asm/pmem.h       |   18 +-----------------
 arch/x86/include/asm/uaccess_64.h |    1 +
 arch/x86/lib/usercopy_64.c        |    3 ++-
 drivers/nvdimm/pmem.c             |    2 +-
 drivers/nvdimm/pmem.h             |    7 +++++++
 include/linux/pmem.h              |   19 -------------------
 6 files changed, 12 insertions(+), 38 deletions(-)
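For reference, the alignment arithmetic in clean_cache_range() can be
exercised in user space. This simulation is a sketch, not kernel code:
the 64-byte line size is an assumption (the kernel reads
boot_cpu_data.x86_clflush_size), and a printf stands in for clwb():

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CACHE_LINE_SIZE 64UL  /* assumed cache-line size */

    static void clean_cache_range_sim(void *addr, size_t size)
    {
            uintptr_t clflush_mask = CACHE_LINE_SIZE - 1;
            char *vend = (char *)addr + size;
            char *p;

            /* round the start down to a line boundary, walk line by line */
            for (p = (char *)((uintptr_t)addr & ~clflush_mask);
                            p < vend; p += CACHE_LINE_SIZE)
                    printf("clwb line at %p\n", (void *)p); /* clwb(p) in the kernel */
    }

    int main(void)
    {
            static char buf[256];

            /* an unaligned 100-byte range still covers whole cache lines */
            clean_cache_range_sim(buf + 10, 100);
            return 0;
    }

Flushing 100 bytes starting 10 bytes into a line rounds the start
address down and runs the loop to the end of the range, which is how
@size ends up effectively rounded up to cache-line alignment.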
diff --git a/arch/x86/include/asm/pmem.h b/arch/x86/include/asm/pmem.h
index f4c119d253f3..862be3a9275c 100644
--- a/arch/x86/include/asm/pmem.h
+++ b/arch/x86/include/asm/pmem.h
@@ -44,25 +44,9 @@ static inline void arch_memcpy_to_pmem(void *dst, const void *src, size_t n)
 	BUG();
 }
 
-/**
- * arch_wb_cache_pmem - write back a cache range with CLWB
- * @vaddr:	virtual start address
- * @size:	number of bytes to write back
- *
- * Write back a cache range using the CLWB (cache line write back)
- * instruction. Note that @size is internally rounded up to be cache
- * line size aligned.
- */
 static inline void arch_wb_cache_pmem(void *addr, size_t size)
 {
-	u16 x86_clflush_size = boot_cpu_data.x86_clflush_size;
-	unsigned long clflush_mask = x86_clflush_size - 1;
-	void *vend = addr + size;
-	void *p;
-
-	for (p = (void *)((unsigned long)addr & ~clflush_mask);
-	     p < vend; p += x86_clflush_size)
-		clwb(p);
+	clean_cache_range(addr, size);
 }
 
 static inline void arch_invalidate_pmem(void *addr, size_t size)
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index b16f6a1d8b26..bdc4a2761525 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -174,6 +174,7 @@ extern long __copy_user_nocache(void *dst, const void __user *src,
 extern long __copy_user_flushcache(void *dst, const void __user *src, unsigned size);
 extern void memcpy_page_flushcache(char *to, struct page *page, size_t offset,
 		size_t len);
+void clean_cache_range(void *addr, size_t size);
 
 static inline int
 __copy_from_user_inatomic_nocache(void *dst, const void __user *src,
diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
index f42d2fd86ca3..baa80ff29da8 100644
--- a/arch/x86/lib/usercopy_64.c
+++ b/arch/x86/lib/usercopy_64.c
@@ -85,7 +85,7 @@ copy_user_handle_tail(char *to, char *from, unsigned len)
  * instruction. Note that @size is internally rounded up to be cache
  * line size aligned.
  */
-static void clean_cache_range(void *addr, size_t size)
+void clean_cache_range(void *addr, size_t size)
 {
 	u16 x86_clflush_size = boot_cpu_data.x86_clflush_size;
 	unsigned long clflush_mask = x86_clflush_size - 1;
@@ -96,6 +96,7 @@ static void clean_cache_range(void *addr, size_t size)
 	     p < vend; p += x86_clflush_size)
 		clwb(p);
 }
+EXPORT_SYMBOL(clean_cache_range);
 
 long __copy_user_flushcache(void *dst, const void __user *src, unsigned size)
 {
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 823b07774244..3b87702d46bb 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -245,7 +245,7 @@ static size_t pmem_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
 static void pmem_dax_flush(struct dax_device *dax_dev, pgoff_t pgoff,
 		void *addr, size_t size)
 {
-	wb_cache_pmem(addr, size);
+	arch_wb_cache_pmem(addr, size);
 }
 
 static const struct dax_operations pmem_dax_ops = {
diff --git a/drivers/nvdimm/pmem.h b/drivers/nvdimm/pmem.h
index 7f4dbd72a90a..9137ec80b85f 100644
--- a/drivers/nvdimm/pmem.h
+++ b/drivers/nvdimm/pmem.h
@@ -4,6 +4,13 @@
 #include <linux/badblocks.h>
 #include <linux/types.h>
 #include <linux/pfn_t.h>
+#include <asm/pmem.h>
+
+#ifndef CONFIG_ARCH_HAS_PMEM_API
+static inline void arch_wb_cache_pmem(void *addr, size_t size)
+{
+}
+#endif
 
 /* this definition is in it's own header for tools/testing/nvdimm to consume */
 struct pmem_device {
diff --git a/include/linux/pmem.h b/include/linux/pmem.h
index 772bd02a5b52..33ae761f010a 100644
--- a/include/linux/pmem.h
+++ b/include/linux/pmem.h
@@ -31,11 +31,6 @@ static inline void arch_memcpy_to_pmem(void *dst, const void *src, size_t n)
 	BUG();
 }
 
-static inline void arch_wb_cache_pmem(void *addr, size_t size)
-{
-	BUG();
-}
-
 static inline void arch_invalidate_pmem(void *addr, size_t size)
 {
 	BUG();
@@ -80,18 +75,4 @@ static inline void invalidate_pmem(void *addr, size_t size)
 	if (arch_has_pmem_api())
 		arch_invalidate_pmem(addr, size);
 }
-
-/**
- * wb_cache_pmem - write back processor cache for PMEM memory range
- * @addr:	virtual start address
- * @size:	number of bytes to write back
- *
- * Write back the processor cache range starting at 'addr' for 'size' bytes.
- * See blkdev_issue_flush() note for memcpy_to_pmem().
- */
-static inline void wb_cache_pmem(void *addr, size_t size)
-{
-	if (arch_has_pmem_api())
-		arch_wb_cache_pmem(addr, size);
-}
 #endif /* __PMEM_H__ */
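To see the resulting call path end to end, here is a simplified
stand-alone sketch. The types and signatures are stand-ins, not the
kernel's (the real pmem_dax_flush() also takes a struct dax_device and
a pgoff_t, as in the hunk above): generic code calls through the
dax_operations table, and the driver hook now goes straight to
arch_wb_cache_pmem() with no runtime arch_has_pmem_api() check.

    #include <stddef.h>
    #include <stdio.h>

    struct dax_operations {
            void (*flush)(void *addr, size_t size);
    };

    static void arch_wb_cache_pmem(void *addr, size_t size)
    {
            printf("write back %zu bytes at %p\n", size, addr);
    }

    /* pmem driver hook: straight to the arch helper, no indirection */
    static void pmem_dax_flush(void *addr, size_t size)
    {
            arch_wb_cache_pmem(addr, size);
    }

    static const struct dax_operations pmem_dax_ops = {
            .flush = pmem_dax_flush,
    };

    int main(void)
    {
            char buf[128];

            pmem_dax_ops.flush(buf, sizeof(buf)); /* generic code's view */
            return 0;
    }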