From patchwork Fri Sep 30 20:10:02 2016
X-Patchwork-Submitter: "Verma, Vishal L"
X-Patchwork-Id: 9358835
From: Vishal Verma
To:
Cc: Dan Williams, Ross Zwisler, Linda Knippers, "Rafael J. Wysocki", Vishal Verma
Subject: [PATCH v2 2/3] pmem: reduce kmap_atomic sections to the memcpys only
Date: Fri, 30 Sep 2016 14:10:02 -0600
Message-Id: <1475266203-18559-3-git-send-email-vishal.l.verma@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1475266203-18559-1-git-send-email-vishal.l.verma@intel.com>
References: <1475266203-18559-1-git-send-email-vishal.l.verma@intel.com>
X-Mailing-List: linux-acpi@vger.kernel.org

pmem_do_bvec() used to kmap_atomic() at the beginning and only unmap at the end. Things like nvdimm_clear_poison() may want to do nvdimm subsystem bookkeeping operations that can involve taking locks or doing memory allocations, and we can't do that from atomic context. Reduce the atomic context to just what needs it - the memcpy to/from pmem.
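For illustration only (not part of the patch), here is a minimal sketch of the write-path shape this change produces. pmem_write_sketch() is a hypothetical name, error handling and flush_dcache_page() are omitted, and the sketch assumes the driver's existing context (struct pmem_device, pmem_clear_poison(), and the map_and_copy_to_pmem() helper added below). The point is that the only atomic section is inside the helper, so pmem_clear_poison() runs with no kmap_atomic() mapping held:

/*
 * Illustrative sketch only -- not part of the patch.
 */
static void pmem_write_sketch(struct pmem_device *pmem, struct page *page,
		unsigned int off, unsigned int len,
		phys_addr_t pmem_off, bool bad_pmem)
{
	void *pmem_addr = pmem->virt_addr + pmem_off;

	/* the atomic section is confined to kmap + memcpy + kunmap */
	map_and_copy_to_pmem(pmem_addr, page, off, len);

	if (unlikely(bad_pmem)) {
		/*
		 * No kmap_atomic() mapping is held here, so the nvdimm
		 * bookkeeping in pmem_clear_poison() is free to take
		 * locks or allocate memory.
		 */
		pmem_clear_poison(pmem, pmem_off, len);
		/* re-copy now that the range has been cleared */
		map_and_copy_to_pmem(pmem_addr, page, off, len);
	}
}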
Cc: Ross Zwisler
Signed-off-by: Vishal Verma
---
 drivers/nvdimm/pmem.c | 28 +++++++++++++++++++++++-----
 1 file changed, 23 insertions(+), 5 deletions(-)

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 571a6c7..ca038d8 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -66,13 +66,32 @@ static void pmem_clear_poison(struct pmem_device *pmem, phys_addr_t offset,
 	invalidate_pmem(pmem->virt_addr + offset, len);
 }
 
+static void map_and_copy_to_pmem(void *pmem_addr, struct page *page,
+		unsigned int off, unsigned int len)
+{
+	void *mem = kmap_atomic(page);
+
+	memcpy_to_pmem(pmem_addr, mem + off, len);
+	kunmap_atomic(mem);
+}
+
+static int map_and_copy_from_pmem(struct page *page, unsigned int off,
+		void *pmem_addr, unsigned int len)
+{
+	int rc;
+	void *mem = kmap_atomic(page);
+
+	rc = memcpy_from_pmem(mem + off, pmem_addr, len);
+	kunmap_atomic(mem);
+	return rc;
+}
+
 static int pmem_do_bvec(struct pmem_device *pmem, struct page *page,
 			unsigned int len, unsigned int off, bool is_write,
 			sector_t sector)
 {
 	int rc = 0;
 	bool bad_pmem = false;
-	void *mem = kmap_atomic(page);
 	phys_addr_t pmem_off = sector * 512 + pmem->data_offset;
 	void *pmem_addr = pmem->virt_addr + pmem_off;
 
@@ -83,7 +102,7 @@ static int pmem_do_bvec(struct pmem_device *pmem, struct page *page,
 		if (unlikely(bad_pmem))
 			rc = -EIO;
 		else {
-			rc = memcpy_from_pmem(mem + off, pmem_addr, len);
+			rc = map_and_copy_from_pmem(page, off, pmem_addr, len);
 			flush_dcache_page(page);
 		}
 	} else {
@@ -102,14 +121,13 @@ static int pmem_do_bvec(struct pmem_device *pmem, struct page *page,
 		 * after clear poison.
 		 */
 		flush_dcache_page(page);
-		memcpy_to_pmem(pmem_addr, mem + off, len);
+		map_and_copy_to_pmem(pmem_addr, page, off, len);
 		if (unlikely(bad_pmem)) {
 			pmem_clear_poison(pmem, pmem_off, len);
-			memcpy_to_pmem(pmem_addr, mem + off, len);
+			map_and_copy_to_pmem(pmem_addr, page, off, len);
 		}
 	}
 
-	kunmap_atomic(mem);
 	return rc;
 }