From patchwork Thu Apr 5 10:48:32 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Pankaj Gupta <pagupta@redhat.com>
X-Patchwork-Id: 10324379
From: Pankaj Gupta <pagupta@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, qemu-devel@nongnu.org,
    linux-nvdimm@ml01.01.org
Cc: jack@suse.cz, stefanha@redhat.com, dan.j.williams@intel.com,
    riel@surriel.com, haozhong.zhang@intel.com, nilal@redhat.com,
    kwolf@redhat.com, pbonzini@redhat.com, ross.zwisler@intel.com,
    david@redhat.com, xiaoguangrong.eric@gmail.com, hch@infradead.org,
    marcel@redhat.com, mst@redhat.com, niteshnarayanlal@hotmail.com,
    imammedo@redhat.com, pagupta@redhat.com
Subject: [RFC 1/2] kvm: add virtio pmem driver
Date: Thu, 5 Apr 2018 16:18:32 +0530
Message-Id: <20180405104834.10457-2-pagupta@redhat.com>
In-Reply-To: <20180405104834.10457-1-pagupta@redhat.com>
References: <20180405104834.10457-1-pagupta@redhat.com>

This patch adds a virtio-pmem driver for KVM guests. The guest reads the
persistent memory range information from Qemu over VIRTIO and registers it
on the 'nvdimm_bus'. It also creates an 'nd_region' object with the
persistent memory range information so that the existing 'nvdimm/pmem'
driver can add this memory to the system memory map.

This way the 'virtio-pmem' driver reuses the existing functionality of the
pmem driver to register persistent memory that is compatible with
DAX-capable filesystems. It also provides a function to perform a guest
flush over VIRTIO from the 'pmem' driver when userspace performs a flush
on a DAX memory range. Work remains to be done, including the flush on the
host side and other features of the pmem driver.

Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
---
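To make the flush semantics above concrete: with virtio-pmem the guest
cannot assume that stores to a DAX mapping are durable on their own;
durability is only reached once the filesystem flush path runs in the
guest and (with patch 2/2) forwards the request to the host over the
flush virtqueue. Below is a minimal guest userspace sketch of that
sequence; the mount point, file name and sizes are assumptions made up
for the example and are not part of this patch. A sketch of the intended
driver-side call site of virtio_pmem_flush() follows the patch.

/*
 * Illustrative only: write to a file on a DAX-capable filesystem backed
 * by a virtio-pmem region and make the data durable.  The msync() is the
 * point where the guest flush path runs; with the paravirt flush
 * interface it is forwarded to the host.  Paths and sizes are assumptions
 * for this example.
 */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const size_t len = 4096;
	int fd = open("/mnt/pmem0/data", O_CREAT | O_RDWR, 0644);
	char *p;

	if (fd < 0 || ftruncate(fd, len) < 0)
		return 1;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	memcpy(p, "hello", 6);

	/* Durability point: triggers the guest flush described above. */
	if (msync(p, len, MS_SYNC) < 0)
		return 1;

	munmap(p, len);
	close(fd);
	return 0;
}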
 drivers/virtio/Kconfig           |  12 ++++
 drivers/virtio/Makefile          |   1 +
 drivers/virtio/virtio_pmem.c     | 122 +++++++++++++++++++++++++++++++++++++++
 include/linux/libnvdimm.h        |   2 +
 include/uapi/linux/virtio_ids.h  |   1 +
 include/uapi/linux/virtio_pmem.h |  61 ++++++++++++++++++++
 6 files changed, 199 insertions(+)
 create mode 100644 drivers/virtio/virtio_pmem.c
 create mode 100644 include/uapi/linux/virtio_pmem.h

diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index cff773f..437de03 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -38,6 +38,18 @@ config VIRTIO_PCI_LEGACY
 
 	  If unsure, say Y.
 
+config VIRTIO_PMEM
+	tristate "Virtio pmem driver"
+	depends on VIRTIO
+	---help---
+	 This driver adds persistent memory range to nd_region and registers
+	 with nvdimm bus. NVDIMM 'pmem' driver later allocates a persistent
+	 memory range on the memory information added by this driver. In addition
+	 to this, 'virtio-pmem' driver also provides a paravirt flushing interface
+	 from guest to host.
+
+	 If unsure, say M.
+
 config VIRTIO_BALLOON
 	tristate "Virtio balloon driver"
 	depends on VIRTIO
diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
index 3a2b5c5..cbe91c6 100644
--- a/drivers/virtio/Makefile
+++ b/drivers/virtio/Makefile
@@ -6,3 +6,4 @@ virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o
 virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o
 obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o
 obj-$(CONFIG_VIRTIO_INPUT) += virtio_input.o
+obj-$(CONFIG_VIRTIO_PMEM) += virtio_pmem.o
diff --git a/drivers/virtio/virtio_pmem.c b/drivers/virtio/virtio_pmem.c
new file mode 100644
index 0000000..4fffbb5
--- /dev/null
+++ b/drivers/virtio/virtio_pmem.c
@@ -0,0 +1,122 @@
+/*
+ * virtio-pmem driver
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+static void pmem_flush_done(struct virtqueue *vq)
+{
+	return;
+};
+
+static int init_vq(struct virtio_pmem *vpmem)
+{
+	struct virtqueue *vq;
+
+	/* single vq */
+	vpmem->req_vq = vq = virtio_find_single_vq(vpmem->vdev,
+					pmem_flush_done, "flush_queue");
+
+	if (IS_ERR(vq))
+		return PTR_ERR(vq);
+
+	return 0;
+};
+
+static int virtio_pmem_probe(struct virtio_device *vdev)
+{
+	int err = 0;
+	struct resource res;
+	struct virtio_pmem *vpmem;
+	struct nvdimm_bus *nvdimm_bus;
+	struct nd_region_desc ndr_desc;
+	int nid = dev_to_node(&vdev->dev);
+	static struct nvdimm_bus_descriptor nd_desc;
+
+	if (!vdev->config->get) {
+		dev_err(&vdev->dev, "%s failure: config disabled\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	vdev->priv = vpmem = devm_kzalloc(&vdev->dev, sizeof(*vpmem),
+			GFP_KERNEL);
+	if (!vpmem) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	vpmem->vdev = vdev;
+	err = init_vq(vpmem);
+	if (err)
+		goto out;
+
+	virtio_cread(vpmem->vdev, struct virtio_pmem_config,
+			start, &vpmem->start);
+	virtio_cread(vpmem->vdev, struct virtio_pmem_config,
+			size, &vpmem->size);
+	virtio_cread(vpmem->vdev, struct virtio_pmem_config,
+			align, &vpmem->align);
+
+	res.start = vpmem->start;
+	res.end = vpmem->start + vpmem->size - 1;
+
+	memset(&nd_desc, 0, sizeof(nd_desc));
+	nd_desc.provider_name = "virtio-pmem";
"virtio-pmem"; + nd_desc.module = THIS_MODULE; + nvdimm_bus = nvdimm_bus_register(&vdev->dev, &nd_desc); + + if (!nvdimm_bus) + goto out_nd; + dev_set_drvdata(&vdev->dev, nvdimm_bus); + + memset(&ndr_desc, 0, sizeof(ndr_desc)); + ndr_desc.res = &res; + ndr_desc.numa_node = nid; + set_bit(ND_REGION_PAGEMAP, &ndr_desc.flags); + set_bit(ND_REGION_VIRTIO, &ndr_desc.flags); + + if (!nvdimm_pmem_region_create(nvdimm_bus, &ndr_desc)) + goto out_nd; + + virtio_device_ready(vdev); + return 0; + +out_nd: + nvdimm_bus_unregister(nvdimm_bus); +out: + dev_err(&vdev->dev, "failed to register virtio pmem ranges\n"); + vdev->config->del_vqs(vdev); + return err; +} + +static void virtio_pmem_remove(struct virtio_device *vdev) +{ + struct nvdimm_bus *nvdimm_bus = dev_get_drvdata(&vdev->dev); + + nvdimm_bus_unregister(nvdimm_bus); + vdev->config->del_vqs(vdev); +} + +static struct virtio_driver virtio_pmem_driver = { + .driver.name = KBUILD_MODNAME, + .driver.owner = THIS_MODULE, + .id_table = id_table, + .probe = virtio_pmem_probe, + .remove = virtio_pmem_remove, +}; + +module_virtio_driver(virtio_pmem_driver); +MODULE_DEVICE_TABLE(virtio, id_table); +MODULE_DESCRIPTION("Virtio pmem driver"); +MODULE_LICENSE("GPL"); diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h index f8109dd..ce9bf49 100644 --- a/include/linux/libnvdimm.h +++ b/include/linux/libnvdimm.h @@ -47,6 +47,8 @@ enum { /* region flag indicating to direct-map persistent memory by default */ ND_REGION_PAGEMAP = 0, + /* region flag indicating to use VIRTIO flush interface for pmem */ + ND_REGION_VIRTIO = 1, /* mark newly adjusted resources as requiring a label update */ DPA_RESOURCE_ADJUSTED = 1 << 0, diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h index 6d5c3b2..5ebd049 100644 --- a/include/uapi/linux/virtio_ids.h +++ b/include/uapi/linux/virtio_ids.h @@ -43,5 +43,6 @@ #define VIRTIO_ID_INPUT 18 /* virtio input */ #define VIRTIO_ID_VSOCK 19 /* virtio vsock transport */ #define VIRTIO_ID_CRYPTO 20 /* virtio crypto */ +#define VIRTIO_ID_PMEM 21 /* virtio pmem */ #endif /* _LINUX_VIRTIO_IDS_H */ diff --git a/include/uapi/linux/virtio_pmem.h b/include/uapi/linux/virtio_pmem.h new file mode 100644 index 0000000..4a46092 --- /dev/null +++ b/include/uapi/linux/virtio_pmem.h @@ -0,0 +1,61 @@ +/* + * Virtio pmem Driver + * + * Discovers persitent memory information to guest + * and provides a virtio flushing interface + * + */ + +#ifndef _LINUX_VIRTIO_PMEM_H +#define _LINUX_VIRTIO_PMEM_H + +#include +#include +#include +#include +#include + + +struct virtio_pmem_config { + + uint64_t start; + uint64_t size; + uint64_t align; +}; + +struct virtio_pmem { + + struct virtio_device *vdev; + struct virtqueue *req_vq; + + uint64_t start; + uint64_t size; + uint64_t align; +} __packed; + +static struct virtio_device_id id_table[] = { + { VIRTIO_ID_PMEM, VIRTIO_DEV_ANY_ID }, + { 0 }, +}; + +void virtio_pmem_flush(struct device *dev) +{ + struct scatterlist sg; + struct virtio_device *vdev = dev_to_virtio(dev->parent->parent); + struct virtio_pmem *vpmem = vdev->priv; + char *buf = "FLUSH"; + int err; + + sg_init_one(&sg, buf, sizeof(buf)); + + err = virtqueue_add_outbuf(vpmem->req_vq, &sg, 1, buf, GFP_KERNEL); + + if (err) { + dev_err(&vdev->dev, "failed to send command to virtio pmem device\n"); + return; + } + + virtqueue_kick(vpmem->req_vq); +}; + +#endif