From patchwork Fri Jul 17 07:20:53 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 11669275
From: ira.weiny@intel.com
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski, Peter Zijlstra
Cc: Ira Weiny, x86@kernel.org, Dave Hansen, Dan Williams, Vishal Verma, Andrew Morton, Fenghua Yu, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org
Subject: [PATCH RFC V2 14/17] dax: Stray write protection for dax_direct_access()
Date: Fri, 17 Jul 2020 00:20:53 -0700
Message-Id: <20200717072056.73134-15-ira.weiny@intel.com>
In-Reply-To: <20200717072056.73134-1-ira.weiny@intel.com>
References: <20200717072056.73134-1-ira.weiny@intel.com>

From: Ira Weiny <ira.weiny@intel.com>

dax_direct_access() is a special case: it accesses pmem through a page
offset, without a struct page.  Because the dax driver already knows
which protections its memory is mapped with, call
dev_access_[en|dis]able() directly rather than paying the overhead of
looking up a page to kmap.

As with kmap, leverage the existing dax_read_[un]lock() functions,
since they are already required to bracket any use of the memory
returned by dax_direct_access().
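For context, every dax_direct_access() user already has to hold the dax
read lock across the access, so hooking dev_access_enable()/disable()
into dax_read_lock()/dax_read_unlock() covers the whole window.  Below
is a minimal sketch of such a caller, for illustration only:
example_read_from_pmem() is a made-up helper, and the
dax_direct_access() signature is assumed to be the
pgoff/nr_pages/kaddr/pfn form used by kernels of this vintage.

#include <linux/dax.h>
#include <linux/string.h>

/* Illustrative caller, not part of this patch: copy len bytes (len <=
 * PAGE_SIZE) out of one pmem page. */
static long example_read_from_pmem(struct dax_device *dax_dev, pgoff_t pgoff,
				   void *dst, size_t len)
{
	void *kaddr;
	pfn_t pfn;
	long nr;
	int id;

	id = dax_read_lock();		/* now also calls dev_access_enable() */
	nr = dax_direct_access(dax_dev, pgoff, 1, &kaddr, &pfn);
	if (nr < 0) {
		dax_read_unlock(id);	/* now also calls dev_access_disable() */
		return nr;
	}
	memcpy(dst, kaddr, len);	/* access happens inside the open window */
	dax_read_unlock(id);
	return 0;
}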
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/dax/super.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/dax/super.c b/drivers/dax/super.c
index 021739768093..e8d0a28e6ed2 100644
--- a/drivers/dax/super.c
+++ b/drivers/dax/super.c
@@ -30,12 +30,14 @@ static DEFINE_SPINLOCK(dax_host_lock);
 
 int dax_read_lock(void)
 {
+	dev_access_enable();
 	return srcu_read_lock(&dax_srcu);
 }
 EXPORT_SYMBOL_GPL(dax_read_lock);
 
 void dax_read_unlock(int id)
 {
+	dev_access_disable();
 	srcu_read_unlock(&dax_srcu, id);
 }
 EXPORT_SYMBOL_GPL(dax_read_unlock);
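One design point worth noting: SRCU read sections nest, so
dax_read_lock() calls can nest too, and dev_access_enable()/disable()
(introduced earlier in this series) presumably have to tolerate that.
The sketch below is purely illustrative of that constraint and is not
the implementation from this series; the dev_access_ref field and the
arch_set_dev_access() helper are hypothetical names.

/*
 * Illustrative sketch only, assuming a per-task reference count so that
 * nested dax_read_lock() sections only touch the protection-key state
 * on the 0 <-> 1 transitions.
 */
void dev_access_enable(void)
{
	if (current->dev_access_ref++ == 0)
		arch_set_dev_access(true);	/* open the stray-write window */
}

void dev_access_disable(void)
{
	if (--current->dev_access_ref == 0)
		arch_set_dev_access(false);	/* close it again */
}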