
[v2,08/17] libnvdimm: introduce nvdimm_flush() and nvdimm_has_flush()

Message ID 146812111205.32932.3785207412939078453.stgit@dwillia2-desk3.amr.corp.intel.com (mailing list archive)
State New, archived

Commit Message

Dan Williams July 10, 2016, 3:25 a.m. UTC
nvdimm_flush() is a replacement for the x86 'pcommit' instruction.  It is
an optional write flushing mechanism that an nvdimm bus can provide for
the pmem driver to consume.  In the case of the NFIT nvdimm-bus provider,
nvdimm_flush() is implemented as a series of flush-hint-address [1]
writes to each dimm in the interleave set (region) that backs the
namespace.

The nvdimm_has_flush() routine relies on platform firmware to describe
the flushing capabilities of a platform.  It uses the heuristic of
whether an nvdimm bus provider provides flush address data to return a
ternary result:

      1: flush addresses defined
      0: dimm topology described without flush addresses (assume ADR)
 -errno: no topology information, unable to determine flush mechanism

The pmem driver is expected to take the following actions on this ternary
result:

      1: nvdimm_flush() in response to REQ_FUA / REQ_FLUSH and shutdown
      0: do not set WC or FUA on the queue, take no further action
 -errno: warn and then operate as if nvdimm_has_flush() returned '0'

The caveat of this heuristic is that it cannot distinguish the "dimm
does not have a flush address" case from the "platform firmware is broken
and failed to describe a flush address" case.  Given that we are already
explicitly trusting the NFIT, there's not much more we can do beyond
blacklisting broken firmware if it is ever encountered.
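
For illustration, here is a minimal sketch (not part of this patch) of how
a pmem-style driver could act on that ternary result when configuring its
request queue; blk_queue_write_cache() and the surrounding plumbing are
assumed purely for the example:

#include <linux/blkdev.h>
#include <linux/device.h>
#include <linux/libnvdimm.h>

/* Illustrative only: translate the nvdimm_has_flush() ternary into queue flags */
static void pmem_setup_write_cache(struct request_queue *q,
		struct nd_region *nd_region, struct device *dev)
{
	int flush = nvdimm_has_flush(nd_region);

	if (flush < 0)
		/* no topology info: warn, then behave as if '0' (assume ADR) */
		dev_warn(dev, "unable to guarantee persistence of writes\n");

	if (flush > 0)
		/*
		 * flush hints present: advertise WC + FUA so that
		 * nvdimm_flush() runs from the REQ_FLUSH / REQ_FUA paths
		 */
		blk_queue_write_cache(q, true, true);
	else
		/* ADR (or unknown): no WC / FUA, take no further action */
		blk_queue_write_cache(q, false, false);
}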

Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/acpi/nfit.c          |   33 ++-----------------------
 drivers/acpi/nfit.h          |    1 -
 drivers/nvdimm/pmem.c        |   27 ++++++++++++++++-----
 drivers/nvdimm/region_devs.c |   55 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/libnvdimm.h    |    2 ++
 5 files changed, 81 insertions(+), 37 deletions(-)



Comments

kernel test robot July 10, 2016, 4:47 a.m. UTC | #1
Hi,

[auto build test ERROR on linux-nvdimm/libnvdimm-for-next]
[also build test ERROR on next-20160708]
[cannot apply to v4.7-rc6]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Dan-Williams/replace-pcommit-with-ADR-or-directed-flushing/20160710-113558
base:   https://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm.git libnvdimm-for-next
config: i386-randconfig-r0-201628 (attached as .config)
compiler: gcc-6 (Debian 6.1.1-1) 6.1.1 20160430
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All errors (new ones prefixed by >>):

   drivers/nvdimm/region_devs.c: In function 'nvdimm_flush':
>> drivers/nvdimm/region_devs.c:887:4: error: implicit declaration of function 'writeq' [-Werror=implicit-function-declaration]
       writeq(1, ndrd->flush_wpq[i][0]);
       ^~~~~~
   cc1: some warnings being treated as errors

vim +/writeq +887 drivers/nvdimm/region_devs.c

   881		 * writes to avoid the cache via arch_memcpy_to_pmem().  The
   882		 * final wmb() ensures ordering for the NVDIMM flush write.
   883		 */
   884		wmb();
   885		for (i = 0; i < nd_region->ndr_mappings; i++)
   886			if (ndrd->flush_wpq[i][0])
 > 887				writeq(1, ndrd->flush_wpq[i][0]);
   888		wmb();
   889	}
   890	EXPORT_SYMBOL_GPL(nvdimm_flush);

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
Dan Williams July 10, 2016, 5:01 a.m. UTC | #2
On Sat, Jul 9, 2016 at 9:47 PM, kbuild test robot <lkp@intel.com> wrote:
> Hi,
>
> [auto build test ERROR on linux-nvdimm/libnvdimm-for-next]
> [also build test ERROR on next-20160708]
> [cannot apply to v4.7-rc6]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
>
> url:    https://github.com/0day-ci/linux/commits/Dan-Williams/replace-pcommit-with-ADR-or-directed-flushing/20160710-113558
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm.git libnvdimm-for-next
> config: i386-randconfig-r0-201628 (attached as .config)
> compiler: gcc-6 (Debian 6.1.1-1) 6.1.1 20160430
> reproduce:
>         # save the attached .config to linux build tree
>         make ARCH=i386

Hi kbuild team,

Can we add an "i386 allmodconfig" build to the standard "BUILD
SUCCESS" notification runs?  I had two positive build results on a
private branch prior to posting this series, but the i386 runs did not
build the nvdimm sub-system.

In any event this report is valid, so thank you for that!
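
The implicit-declaration error arises because 32-bit builds do not get a
writeq() definition from <linux/io.h>.  A minimal sketch of one way to make
writeq() available on such builds, assuming the kernel's generic
io-64-nonatomic helpers (this is not necessarily the fix that was folded
into the series):

/*
 * Sketch: on 32-bit architectures writeq() is not provided by default, so
 * a driver issuing 64-bit MMIO writes can include one of the generic
 * helpers that build writeq() from two 32-bit writes.
 */
#include <linux/io.h>
#include <linux/io-64-nonatomic-hi-lo.h>	/* or <linux/io-64-nonatomic-lo-hi.h> */

/* nvdimm_flush() can then use writeq() on both 32-bit and 64-bit builds. */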


>
> All errors (new ones prefixed by >>):
>
>    drivers/nvdimm/region_devs.c: In function 'nvdimm_flush':
>>> drivers/nvdimm/region_devs.c:887:4: error: implicit declaration of function 'writeq' [-Werror=implicit-function-declaration]
>        writeq(1, ndrd->flush_wpq[i][0]);
>        ^~~~~~
>    cc1: some warnings being treated as errors
>
> vim +/writeq +887 drivers/nvdimm/region_devs.c
>
>    881           * writes to avoid the cache via arch_memcpy_to_pmem().  The
>    882           * final wmb() ensures ordering for the NVDIMM flush write.
>    883           */
>    884          wmb();
>    885          for (i = 0; i < nd_region->ndr_mappings; i++)
>    886                  if (ndrd->flush_wpq[i][0])
>  > 887                          writeq(1, ndrd->flush_wpq[i][0]);
>    888          wmb();
>    889  }
>    890  EXPORT_SYMBOL_GPL(nvdimm_flush);
>
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
Philip Li July 11, 2016, 3:48 a.m. UTC | #3
> -----Original Message-----
> From: Williams, Dan J
> Sent: Sunday, July 10, 2016 1:01 PM
> Subject: Re: [PATCH v2 08/17] libnvdimm: introduce nvdimm_flush() and
> nvdimm_has_flush()
>
> Hi kbuild team,
>
> Can we add an "i386 allmodconfig" build to the standard "BUILD
> SUCCESS" notification runs?  I had two positive build results on a
> private branch prior to posting this series, but the i386 runs did
> not build the nvdimm sub-system.

Thanks, yes, i386 allmodconfig is currently covered for all kinds of tests,
including kbuild on registered repos and LKML patches.  If the test is
running against a repo's new commits, the BUILD SUCCESS mail will list the
current coverage by the time the mail is sent out, like:

m32r                       m32104ut_defconfig
m32r                     mappi3.smp_defconfig
m32r                         opsput_defconfig
m32r                           usrv_defconfig
xtensa                       common_defconfig
xtensa                          iss_defconfig
i386                             allmodconfig
mips                                   jz4740
mips                              allnoconfig

Patch

diff --git a/drivers/acpi/nfit.c b/drivers/acpi/nfit.c
index 6796f780870a..0497175ee6cb 100644
--- a/drivers/acpi/nfit.c
+++ b/drivers/acpi/nfit.c
@@ -1393,24 +1393,6 @@  static u64 to_interleave_offset(u64 offset, struct nfit_blk_mmio *mmio)
 	return mmio->base_offset + line_offset + table_offset + sub_line_offset;
 }
 
-static void wmb_blk(struct nfit_blk *nfit_blk)
-{
-
-	if (nfit_blk->nvdimm_flush) {
-		/*
-		 * The first wmb() is needed to 'sfence' all previous writes
-		 * such that they are architecturally visible for the platform
-		 * buffer flush.  Note that we've already arranged for pmem
-		 * writes to avoid the cache via arch_memcpy_to_pmem().  The
-		 * final wmb() ensures ordering for the NVDIMM flush write.
-		 */
-		wmb();
-		writeq(1, nfit_blk->nvdimm_flush);
-		wmb();
-	} else
-		wmb_pmem();
-}
-
 static u32 read_blk_stat(struct nfit_blk *nfit_blk, unsigned int bw)
 {
 	struct nfit_blk_mmio *mmio = &nfit_blk->mmio[DCR];
@@ -1445,7 +1427,7 @@  static void write_blk_ctl(struct nfit_blk *nfit_blk, unsigned int bw,
 		offset = to_interleave_offset(offset, mmio);
 
 	writeq(cmd, mmio->addr.base + offset);
-	wmb_blk(nfit_blk);
+	nvdimm_flush(nfit_blk->nd_region);
 
 	if (nfit_blk->dimm_flags & NFIT_BLK_DCR_LATCH)
 		readq(mmio->addr.base + offset);
@@ -1496,7 +1478,7 @@  static int acpi_nfit_blk_single_io(struct nfit_blk *nfit_blk,
 	}
 
 	if (rw)
-		wmb_blk(nfit_blk);
+		nvdimm_flush(nfit_blk->nd_region);
 
 	rc = read_blk_stat(nfit_blk, lane) ? -EIO : 0;
 	return rc;
@@ -1570,7 +1552,6 @@  static int acpi_nfit_blk_region_enable(struct nvdimm_bus *nvdimm_bus,
 {
 	struct nvdimm_bus_descriptor *nd_desc = to_nd_desc(nvdimm_bus);
 	struct nd_blk_region *ndbr = to_nd_blk_region(dev);
-	struct nfit_flush *nfit_flush;
 	struct nfit_blk_mmio *mmio;
 	struct nfit_blk *nfit_blk;
 	struct nfit_mem *nfit_mem;
@@ -1645,15 +1626,7 @@  static int acpi_nfit_blk_region_enable(struct nvdimm_bus *nvdimm_bus,
 		return rc;
 	}
 
-	nfit_flush = nfit_mem->nfit_flush;
-	if (nfit_flush && nfit_flush->flush->hint_count != 0) {
-		nfit_blk->nvdimm_flush = devm_nvdimm_ioremap(dev,
-				nfit_flush->flush->hint_address[0], 8);
-		if (!nfit_blk->nvdimm_flush)
-			return -ENOMEM;
-	}
-
-	if (!arch_has_wmb_pmem() && !nfit_blk->nvdimm_flush)
+	if (nvdimm_has_flush(nfit_blk->nd_region) < 0)
 		dev_warn(dev, "unable to guarantee persistence of writes\n");
 
 	if (mmio->line_size == 0)
diff --git a/drivers/acpi/nfit.h b/drivers/acpi/nfit.h
index 9282eb324dcc..9fda77cf81da 100644
--- a/drivers/acpi/nfit.h
+++ b/drivers/acpi/nfit.h
@@ -183,7 +183,6 @@  struct nfit_blk {
 	u64 bdw_offset; /* post interleave offset */
 	u64 stat_offset;
 	u64 cmd_offset;
-	void __iomem *nvdimm_flush;
 	u32 dimm_flags;
 };
 
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index b6fcb97a601c..e303655f243e 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -33,10 +33,24 @@ 
 #include "pfn.h"
 #include "nd.h"
 
+static struct device *to_dev(struct pmem_device *pmem)
+{
+	/*
+	 * nvdimm bus services need a 'dev' parameter, and we record the device
+	 * at init in bb.dev.
+	 */
+	return pmem->bb.dev;
+}
+
+static struct nd_region *to_region(struct pmem_device *pmem)
+{
+	return to_nd_region(to_dev(pmem)->parent);
+}
+
 static void pmem_clear_poison(struct pmem_device *pmem, phys_addr_t offset,
 		unsigned int len)
 {
-	struct device *dev = pmem->bb.dev;
+	struct device *dev = to_dev(pmem);
 	sector_t sector;
 	long cleared;
 
@@ -122,7 +136,7 @@  static blk_qc_t pmem_make_request(struct request_queue *q, struct bio *bio)
 		nd_iostat_end(bio, start);
 
 	if (bio_data_dir(bio))
-		wmb_pmem();
+		nvdimm_flush(to_region(pmem));
 
 	bio_endio(bio);
 	return BLK_QC_T_NONE;
@@ -136,7 +150,7 @@  static int pmem_rw_page(struct block_device *bdev, sector_t sector,
 
 	rc = pmem_do_bvec(pmem, page, PAGE_SIZE, 0, rw, sector);
 	if (rw & WRITE)
-		wmb_pmem();
+		nvdimm_flush(to_region(pmem));
 
 	/*
 	 * The ->rw_page interface is subtle and tricky.  The core
@@ -193,6 +207,7 @@  static int pmem_attach_disk(struct device *dev,
 		struct nd_namespace_common *ndns)
 {
 	struct nd_namespace_io *nsio = to_nd_namespace_io(&ndns->dev);
+	struct nd_region *nd_region = to_nd_region(dev->parent);
 	struct vmem_altmap __altmap, *altmap = NULL;
 	struct resource *res = &nsio->res;
 	struct nd_pfn *nd_pfn = NULL;
@@ -222,7 +237,7 @@  static int pmem_attach_disk(struct device *dev,
 	dev_set_drvdata(dev, pmem);
 	pmem->phys_addr = res->start;
 	pmem->size = resource_size(res);
-	if (!arch_has_wmb_pmem())
+	if (nvdimm_has_flush(nd_region) < 0)
 		dev_warn(dev, "unable to guarantee persistence of writes\n");
 
 	if (!devm_request_mem_region(dev, res->start, resource_size(res),
@@ -284,7 +299,7 @@  static int pmem_attach_disk(struct device *dev,
 			/ 512);
 	if (devm_init_badblocks(dev, &pmem->bb))
 		return -ENOMEM;
-	nvdimm_badblocks_populate(to_nd_region(dev->parent), &pmem->bb, res);
+	nvdimm_badblocks_populate(nd_region, &pmem->bb, res);
 	disk->bb = &pmem->bb;
 	add_disk(disk);
 
@@ -331,8 +346,8 @@  static int nd_pmem_remove(struct device *dev)
 
 static void nd_pmem_notify(struct device *dev, enum nvdimm_event event)
 {
-	struct nd_region *nd_region = to_nd_region(dev->parent);
 	struct pmem_device *pmem = dev_get_drvdata(dev);
+	struct nd_region *nd_region = to_region(pmem);
 	resource_size_t offset = 0, end_trunc = 0;
 	struct nd_namespace_common *ndns;
 	struct nd_namespace_io *nsio;
diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
index 67022f74febc..46b6e2f7d5f0 100644
--- a/drivers/nvdimm/region_devs.c
+++ b/drivers/nvdimm/region_devs.c
@@ -14,6 +14,7 @@ 
 #include <linux/highmem.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
+#include <linux/pmem.h>
 #include <linux/sort.h>
 #include <linux/io.h>
 #include <linux/nd.h>
@@ -864,6 +865,60 @@  struct nd_region *nvdimm_volatile_region_create(struct nvdimm_bus *nvdimm_bus,
 }
 EXPORT_SYMBOL_GPL(nvdimm_volatile_region_create);
 
+/**
+ * nvdimm_flush - flush any posted write queues between the cpu and pmem media
+ * @nd_region: blk or interleaved pmem region
+ */
+void nvdimm_flush(struct nd_region *nd_region)
+{
+	struct nd_region_data *ndrd = dev_get_drvdata(&nd_region->dev);
+	int i;
+
+	/*
+	 * The first wmb() is needed to 'sfence' all previous writes
+	 * such that they are architecturally visible for the platform
+	 * buffer flush.  Note that we've already arranged for pmem
+	 * writes to avoid the cache via arch_memcpy_to_pmem().  The
+	 * final wmb() ensures ordering for the NVDIMM flush write.
+	 */
+	wmb();
+	for (i = 0; i < nd_region->ndr_mappings; i++)
+		if (ndrd->flush_wpq[i][0])
+			writeq(1, ndrd->flush_wpq[i][0]);
+	wmb();
+}
+EXPORT_SYMBOL_GPL(nvdimm_flush);
+
+/**
+ * nvdimm_has_flush - determine write flushing requirements
+ * @nd_region: blk or interleaved pmem region
+ *
+ * Returns 1 if writes require flushing
+ * Returns 0 if writes do not require flushing
+ * Returns -ENXIO if flushing capability can not be determined
+ */
+int nvdimm_has_flush(struct nd_region *nd_region)
+{
+	struct nd_region_data *ndrd = dev_get_drvdata(&nd_region->dev);
+	int i;
+
+	/* no nvdimm == flushing capability unknown */
+	if (nd_region->ndr_mappings == 0)
+		return -ENXIO;
+
+	for (i = 0; i < nd_region->ndr_mappings; i++)
+		/* flush hints present, flushing required */
+		if (ndrd->flush_wpq[i][0])
+			return 1;
+
+	/*
+	 * The platform defines dimm devices without hints, assume
+	 * platform persistence mechanism like ADR
+	 */
+	return 0;
+}
+EXPORT_SYMBOL_GPL(nvdimm_has_flush);
+
 void __exit nd_region_devs_exit(void)
 {
 	ida_destroy(&region_ida);
diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
index 815b9b430ead..d37fda6dd64c 100644
--- a/include/linux/libnvdimm.h
+++ b/include/linux/libnvdimm.h
@@ -166,4 +166,6 @@  struct nvdimm *nd_blk_region_to_dimm(struct nd_blk_region *ndbr);
 unsigned int nd_region_acquire_lane(struct nd_region *nd_region);
 void nd_region_release_lane(struct nd_region *nd_region, unsigned int lane);
 u64 nd_fletcher64(void *addr, size_t len, bool le);
+void nvdimm_flush(struct nd_region *nd_region);
+int nvdimm_has_flush(struct nd_region *nd_region);
 #endif /* __LIBNVDIMM_H__ */