
[v2] virtio pmem: user document

Message ID 20190821073630.2561-1-pagupta@redhat.com (mailing list archive)
State New, archived
Series: [v2] virtio pmem: user document

Commit Message

Pankaj Gupta Aug. 21, 2019, 7:36 a.m. UTC
This patch documents the steps to use virtio pmem.
It also documents other useful information about
virtio pmem, e.g. use case, comparison with the QEMU
NVDIMM backend, and current limitations.

Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
---
v1->v2
 - Fixed text formatting and the 'Guest Data persistence'
   section (Cornelia)

 docs/virtio-pmem.rst | 75 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 75 insertions(+)
 create mode 100644 docs/virtio-pmem.rst

Comments

Cornelia Huck Aug. 21, 2019, 11:24 a.m. UTC | #1
On Wed, 21 Aug 2019 13:06:30 +0530
Pankaj Gupta <pagupta@redhat.com> wrote:

> This patch documents the steps to use virtio pmem.
> It also documents other useful information about
> virtio pmem e.g use-case, comparison with Qemu NVDIMM
> backend and current limitations.
> 
> Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> ---
> v1->v2
>  - Fixes on text format and 'Guest Data persistence'
>    section - Cornelia
> 
>  docs/virtio-pmem.rst | 75 ++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 75 insertions(+)
>  create mode 100644 docs/virtio-pmem.rst
> 
> diff --git a/docs/virtio-pmem.rst b/docs/virtio-pmem.rst
> new file mode 100644
> index 0000000000..0346e61674
> --- /dev/null
> +++ b/docs/virtio-pmem.rst
> @@ -0,0 +1,75 @@
> +
> +========================
> +QEMU virtio pmem
> +========================
> +
> + This document explains the setup and usage of virtio pmem device

s/virtio pmem device/the virtio pmem device/

> + which is available since QEMU v4.1.0.
> +
> + The virtio pmem is a paravirtualized persistent memory device on

s/The virtio pmem/The virtio pmem device/

> + regular(i.e non-NVDIMM) storage.

missing blank before '('

> +
> +Usecase
> +--------
> +
> +  Allows to bypass the guest page cache and directly use host page cache.

"Virtio pmem allows to..." ?

> +  This reduces guest memory footprint as the host can make efficient
> +  memory reclaim decisions under memory pressure.
> +
> +o How does virtio-pmem compare to the nvdimm emulation supported by QEMU?
> +
> +  NVDIMM emulation on regular(i.e. non-NVDIMM) host storage does not

missing blank before '('

> +  persist the guest writes as there are no defined semantics in the device
> +  specification. The virtio pmem device provides guest write persistence
> +  on non-NVDIMM host storage.
> +
> +virtio pmem usage
> +-----------------
> +
> +  A virtio pmem device backed by a memory-backend-file can be created on
> +  the QEMU command line as in the following example:
> +
> +  -object memory-backend-file,id=mem1,share,mem-path=./virtio_pmem.img,size=4G
> +  -device virtio-pmem-pci,memdev=mem1,id=nv1
> +
> +   where:
> +   - "object memory-backend-file,id=mem1,share,mem-path=<image>, size=<image size>"
> +     creates a backend file of size on a mem-path.

"a backend file with the specified size" ?

> +
> +   - "device virtio-pmem-pci,id=nvdimm1,memdev=mem1" creates a virtio pmem
> +     pci device whose storage is provided by above memory backend device.
> +
> +  Multiple virtio pmem devices can be created if multiple pairs of "-object"
> +  and "-device" are provided.
> +
> +Hotplug
> +-------
> +
> +"Virtio pmem devices can be hotplugged via the QEMU monitor. First, the
> +memory backing has to be added via 'object_add'; afterwards, the virtio
> +pmem device can be added via 'device_add'."

Please lose the '"' (copy/paste leftover, I presume? :)

> +
> +For example, the following commands add another 4GB virtio pmem device to
> +the guest:
> +
> + (qemu) object_add memory-backend-file,id=mem2,share=on,mem-path=virtio_pmem2.img,size=4G
> + (qemu) device_add virtio-pmem-pci,id=virtio_pmem2,memdev=mem2
> +
> +Guest Data Persistence
> +----------------------
> +
> + Guest data persistence on non-NVDIMM requires guest userspace application to

s/application/applications/ ?

> + perform fsync/msync. This is different from a real nvdimm backend where no
> + additional fsync/msync is required. This is to persist guest writes in host
> + backing file which otherwise remains in host page cache and there is risk of
> + losing the data in case of power failure.
> +
> + With virtio pmem device, MAP_SYNC mmap flag is not supported. This provides
> + a hint to application to perform fsync for write persistence.
> +
> +Limitations
> +------------
> +- Real nvdimm device backend is not supported.
> +- virtio pmem hotunplug is not supported.
> +- ACPI NVDIMM features like regions/namespaces are not supported.
> +- ndctl command is not supported.

Only some nits from my side, otherwise looks good to me.
Pankaj Gupta Aug. 21, 2019, 11:47 a.m. UTC | #2
Hi Cornelia,

> > This patch documents the steps to use virtio pmem.
> > It also documents other useful information about
> > virtio pmem e.g use-case, comparison with Qemu NVDIMM
> > backend and current limitations.
> > 
> > Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> > ---
> > v1->v2
> >  - Fixes on text format and 'Guest Data persistence'
> >    section - Cornelia
> > 
> >  docs/virtio-pmem.rst | 75 ++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 75 insertions(+)
> >  create mode 100644 docs/virtio-pmem.rst
> > 
> > diff --git a/docs/virtio-pmem.rst b/docs/virtio-pmem.rst
> > new file mode 100644
> > index 0000000000..0346e61674
> > --- /dev/null
> > +++ b/docs/virtio-pmem.rst
> > @@ -0,0 +1,75 @@
> > +
> > +========================
> > +QEMU virtio pmem
> > +========================
> > +
> > + This document explains the setup and usage of virtio pmem device
> 
> s/virtio pmem device/the virtio pmem device/

done

> 
> > + which is available since QEMU v4.1.0.
> > +
> > + The virtio pmem is a paravirtualized persistent memory device on
> 
> s/The virtio pmem/The virtio pmem device/

o.k

> 
> > + regular(i.e non-NVDIMM) storage.
> 
> missing blank before '('

sure
> 
> > +
> > +Usecase
> > +--------
> > +
> > +  Allows to bypass the guest page cache and directly use host page cache.
> 
> "Virtio pmem allows to..." ?

done.

> 
> > +  This reduces guest memory footprint as the host can make efficient
> > +  memory reclaim decisions under memory pressure.
> > +
> > +o How does virtio-pmem compare to the nvdimm emulation supported by QEMU?
> > +
> > +  NVDIMM emulation on regular(i.e. non-NVDIMM) host storage does not
> 
> missing blank before '('

done.
> 
> > +  persist the guest writes as there are no defined semantics in the device
> > +  specification. The virtio pmem device provides guest write persistence
> > +  on non-NVDIMM host storage.
> > +
> > +virtio pmem usage
> > +-----------------
> > +
> > +  A virtio pmem device backed by a memory-backend-file can be created on
> > +  the QEMU command line as in the following example:
> > +
> > +  -object memory-backend-file,id=mem1,share,mem-path=./virtio_pmem.img,size=4G
> > +  -device virtio-pmem-pci,memdev=mem1,id=nv1
> > +
> > +   where:
> > +   - "object memory-backend-file,id=mem1,share,mem-path=<image>, size=<image size>"
> > +     creates a backend file of size on a mem-path.
> 
> "a backend file with the specified size" ?

done.

> 
> > +
> > +   - "device virtio-pmem-pci,id=nvdimm1,memdev=mem1" creates a virtio pmem
> > +     pci device whose storage is provided by above memory backend device.
> > +
> > +  Multiple virtio pmem devices can be created if multiple pairs of "-object"
> > +  and "-device" are provided.
> > +
> > +Hotplug
> > +-------
> > +
> > +"Virtio pmem devices can be hotplugged via the QEMU monitor. First, the
> > +memory backing has to be added via 'object_add'; afterwards, the virtio
> > +pmem device can be added via 'device_add'."
> 
> Please lose the '"' (copy/paste leftover, I presume? :)

Done :)

> 
> > +
> > +For example, the following commands add another 4GB virtio pmem device to
> > +the guest:
> > +
> > + (qemu) object_add memory-backend-file,id=mem2,share=on,mem-path=virtio_pmem2.img,size=4G
> > + (qemu) device_add virtio-pmem-pci,id=virtio_pmem2,memdev=mem2
> > +
> > +Guest Data Persistence
> > +----------------------
> > +
> > + Guest data persistence on non-NVDIMM requires guest userspace application to
> 
> s/application/applications/ ?

done.

> 
> > + perform fsync/msync. This is different from a real nvdimm backend where no
> > + additional fsync/msync is required. This is to persist guest writes in host
> > + backing file which otherwise remains in host page cache and there is risk of
> > + losing the data in case of power failure.
> > +
> > + With virtio pmem device, MAP_SYNC mmap flag is not supported. This provides
> > + a hint to application to perform fsync for write persistence.
> > +
> > +Limitations
> > +------------
> > +- Real nvdimm device backend is not supported.
> > +- virtio pmem hotunplug is not supported.
> > +- ACPI NVDIMM features like regions/namespaces are not supported.
> > +- ndctl command is not supported.
> 
> Only some nits from my side, otherwise looks good to me.

Thank you for the review. Will post a v3 with the changes.

Best regards,
Pankaj

>

Patch

diff --git a/docs/virtio-pmem.rst b/docs/virtio-pmem.rst
new file mode 100644
index 0000000000..0346e61674
--- /dev/null
+++ b/docs/virtio-pmem.rst
@@ -0,0 +1,75 @@ 
+
+========================
+QEMU virtio pmem
+========================
+
+ This document explains the setup and usage of virtio pmem device
+ which is available since QEMU v4.1.0.
+
+ The virtio pmem is a paravirtualized persistent memory device on
+ regular(i.e non-NVDIMM) storage.
+
+Usecase
+--------
+
+  Allows to bypass the guest page cache and directly use host page cache.
+  This reduces guest memory footprint as the host can make efficient
+  memory reclaim decisions under memory pressure.
+
+o How does virtio-pmem compare to the nvdimm emulation supported by QEMU?
+
+  NVDIMM emulation on regular(i.e. non-NVDIMM) host storage does not
+  persist the guest writes as there are no defined semantics in the device
+  specification. The virtio pmem device provides guest write persistence
+  on non-NVDIMM host storage.
+
+virtio pmem usage
+-----------------
+
+  A virtio pmem device backed by a memory-backend-file can be created on
+  the QEMU command line as in the following example:
+
+  -object memory-backend-file,id=mem1,share,mem-path=./virtio_pmem.img,size=4G
+  -device virtio-pmem-pci,memdev=mem1,id=nv1
+
+   where:
+   - "object memory-backend-file,id=mem1,share,mem-path=<image>, size=<image size>"
+     creates a backend file of size on a mem-path.
+
+   - "device virtio-pmem-pci,id=nvdimm1,memdev=mem1" creates a virtio pmem
+     pci device whose storage is provided by above memory backend device.
+
+  Multiple virtio pmem devices can be created if multiple pairs of "-object"
+  and "-device" are provided.
+
+Hotplug
+-------
+
+"Virtio pmem devices can be hotplugged via the QEMU monitor. First, the
+memory backing has to be added via 'object_add'; afterwards, the virtio
+pmem device can be added via 'device_add'."
+
+For example, the following commands add another 4GB virtio pmem device to
+the guest:
+
+ (qemu) object_add memory-backend-file,id=mem2,share=on,mem-path=virtio_pmem2.img,size=4G
+ (qemu) device_add virtio-pmem-pci,id=virtio_pmem2,memdev=mem2
+
+Guest Data Persistence
+----------------------
+
+ Guest data persistence on non-NVDIMM requires guest userspace application to
+ perform fsync/msync. This is different from a real nvdimm backend where no
+ additional fsync/msync is required. This is to persist guest writes in host
+ backing file which otherwise remains in host page cache and there is risk of
+ losing the data in case of power failure.
+
+ With virtio pmem device, MAP_SYNC mmap flag is not supported. This provides
+ a hint to application to perform fsync for write persistence.
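The fsync/msync requirement described above can be sketched in a few lines of
Python. This is a minimal, runnable illustration of the write-then-sync pattern,
not part of the patch: on a real virtio pmem setup the file would live on a
filesystem backed by the device, while here an ordinary temporary file is used
(the `persist_write` helper and its path are hypothetical) so the sketch runs
anywhere.

```python
import mmap
import os
import tempfile

def persist_write(path: str, data: bytes) -> None:
    """Write data via mmap and explicitly sync it, as a virtio pmem
    guest application must do (no MAP_SYNC shortcut is available)."""
    # Size the file to hold the data before mapping it.
    with open(path, "wb") as f:
        f.truncate(len(data))
    fd = os.open(path, os.O_RDWR)
    try:
        with mmap.mmap(fd, len(data)) as mm:
            mm[: len(data)] = data
            # flush() issues msync(); without it (or fsync) the writes may
            # sit in the host page cache and be lost on power failure.
            mm.flush()
        os.fsync(fd)  # also sync file metadata
    finally:
        os.close(fd)

# Usage: write and sync a small payload, then read it back.
with tempfile.NamedTemporaryFile() as tmp:
    persist_write(tmp.name, b"hello pmem")
    with open(tmp.name, "rb") as f:
        assert f.read() == b"hello pmem"
```

With a real nvdimm backend the explicit msync/fsync step would be unnecessary;
with virtio pmem it is what makes the guest write durable on the host.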
+
+Limitations
+------------
+- Real nvdimm device backend is not supported.
+- virtio pmem hotunplug is not supported.
+- ACPI NVDIMM features like regions/namespaces are not supported.
+- ndctl command is not supported.