From patchwork Tue Jan 31 14:55:42 2023
X-Patchwork-Submitter: Babis Chalios
X-Patchwork-Id: 13123067
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Babis Chalios
To: Olivia Mackall, Herbert Xu, "Michael S. Tsirkin", Jason Wang, Babis Chalios
Subject: [PATCH v2 1/2] virtio-rng: implement entropy leak feature
Date: Tue, 31 Jan 2023 15:55:42 +0100
Message-ID: <20230131145543.86369-2-bchalios@amazon.es>
In-Reply-To: <20230131145543.86369-1-bchalios@amazon.es>
References: <20230131145543.86369-1-bchalios@amazon.es>
X-Mailing-List: linux-crypto@vger.kernel.org

Implement the virtio-rng feature that allows a guest driver to request
that the device perform certain operations in the event of an "entropy
leak", such as when taking a VM snapshot or restoring a VM from a
snapshot.
The guest can request one of two operations: (i) fill a buffer with
random bytes, or (ii) perform a memory copy between two buffers. The
feature is similar to Microsoft's Virtual Machine Generation ID and it
can be used to (1) avoid the race condition that exists in our current
VMGENID implementation, between the time the vcpus are resumed and the
time the ACPI notification is handled, and (2) provide mechanisms for
notifying user space about snapshot-related events.

This commit implements the protocol between guest and device.
Additionally, it makes sure there is always an in-flight request for
random bytes to be filled in the event of an entropy leak. Once such an
event is observed, the driver feeds these bytes as entropy to the
kernel using `add_device_randomness`.

Keep in mind that this commit does not solve the race-condition issue;
it only adds fresh entropy whenever the driver handles the used buffer
from the fill-on-leak request. In order to close the race window, we
need to expose some API so that other kernel subsystems can request
notifications directly from the device.

Signed-off-by: Babis Chalios
---
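Illustration (not part of the patch): a minimal user-space model of the
two-queue scheme described above, under the assumption that the device
completes all buffers on the active leak queue when an entropy leak
occurs. The names leak_model, queue_fill_request and leak_event are
invented for the sketch; only swap_leakqs mirrors the driver.

#include <stdbool.h>
#include <stdio.h>

struct leak_model {
        bool pending[2];        /* request pending on leakq[0] / leakq[1] */
        int active;             /* index of the active leak queue */
};

static int swap_leakqs(struct leak_model *m)
{
        m->active = 1 - m->active;
        return m->active;
}

static void queue_fill_request(struct leak_model *m, int q)
{
        m->pending[q] = true;
}

/* Device signals a leak: it consumes the request queued on the active queue. */
static void leak_event(struct leak_model *m)
{
        int old = m->active;

        if (!m->pending[old]) {
                printf("leak missed: no request was pending!\n");
                return;
        }
        m->pending[old] = false;                /* buffer now holds fresh bytes */
        queue_fill_request(m, swap_leakqs(m));  /* re-arm for the next leak */
        printf("leak handled, re-armed on leakq[%d]\n", m->active);
}

int main(void)
{
        struct leak_model m = { .active = 0 };

        queue_fill_request(&m, m.active);       /* probe does this once */
        leak_event(&m);                         /* e.g. first snapshot restore */
        leak_event(&m);                         /* e.g. second snapshot restore */
        return 0;
}

The point of the model is the invariant the driver maintains: every
completed leak request immediately re-queues a new one on the other
queue, so the next leak event always finds a request pending.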
 drivers/char/hw_random/virtio-rng.c | 200 +++++++++++++++++++++++++++-
 include/uapi/linux/virtio_rng.h     |   3 +
 2 files changed, 196 insertions(+), 7 deletions(-)

diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
index a6f3a8a2aca6..154f68a1e326 100644
--- a/drivers/char/hw_random/virtio-rng.c
+++ b/drivers/char/hw_random/virtio-rng.c
@@ -4,12 +4,12 @@
  * Copyright (C) 2007, 2008 Rusty Russell IBM Corporation
  */
 
-#include
 #include
 #include
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -18,6 +18,12 @@ static DEFINE_IDA(rng_index_ida);
 struct virtrng_info {
         struct hwrng hwrng;
         struct virtqueue *vq;
+        /* Leak queues */
+        bool has_leakqs;
+        struct virtqueue *leakq[2];
+        spinlock_t lock;
+        int active_leakq;
+
         char name[25];
         int index;
         bool hwrng_register_done;
@@ -29,27 +35,159 @@ struct virtrng_info {
         /* minimal size returned by rng_buffer_size() */
 #if SMP_CACHE_BYTES < 32
         u8 data[32];
+        u8 leak_data[32];
 #else
         u8 data[SMP_CACHE_BYTES];
+        u8 leak_data[SMP_CACHE_BYTES];
 #endif
 };
 
+/* Swaps the queues and returns the new active leak queue. */
+static struct virtqueue *swap_leakqs(struct virtrng_info *vi)
+{
+        vi->active_leakq = 1 - vi->active_leakq;
+        return vi->leakq[vi->active_leakq];
+}
+
+static struct virtqueue *get_active_leakq(struct virtrng_info *vi)
+{
+        return vi->leakq[vi->active_leakq];
+}
+
+static int add_fill_on_leak_request(struct virtrng_info *vi, struct virtqueue *vq, void *data, size_t len)
+{
+        struct scatterlist sg;
+        int ret;
+
+        sg_init_one(&sg, data, len);
+        ret = virtqueue_add_inbuf(vq, &sg, 1, data, GFP_KERNEL);
+        if (ret)
+                goto err;
+
+err:
+        return ret;
+}
+
+static int virtrng_fill_on_leak(struct virtrng_info *vi, void *data, size_t len)
+{
+        struct virtqueue *vq;
+        unsigned long flags;
+        int ret;
+
+        if (!vi->has_leakqs)
+                return -EOPNOTSUPP;
+
+        spin_lock_irqsave(&vi->lock, flags);
+
+        vq = get_active_leakq(vi);
+        ret = add_fill_on_leak_request(vi, vq, data, len);
+        if (ret)
+                virtqueue_kick(vq);
+
+        spin_unlock_irqrestore(&vi->lock, flags);
+
+        return ret;
+}
+
+static int add_copy_on_leak_request(struct virtrng_info *vi, struct virtqueue *vq,
+                void *to, void *from, size_t len)
+{
+        int ret;
+        struct scatterlist out, in, *sgs[2];
+
+        sg_init_one(&out, from, len);
+        sgs[0] = &out;
+        sg_init_one(&in, to, len);
+        sgs[1] = &in;
+
+        ret = virtqueue_add_sgs(vq, sgs, 1, 1, to, GFP_KERNEL);
+        if (ret)
+                goto err;
+
+err:
+        return ret;
+}
+
+static int virtrng_copy_on_leak(struct virtrng_info *vi, void *to, void *from, size_t len)
+{
+        struct virtqueue *vq;
+        unsigned long flags;
+        int ret;
+
+        if (!vi->has_leakqs)
+                return -EOPNOTSUPP;
+
+        spin_lock_irqsave(&vi->lock, flags);
+
+        vq = get_active_leakq(vi);
+        ret = add_copy_on_leak_request(vi, vq, to, from, len);
+        if (ret)
+                virtqueue_kick(vq);
+
+        spin_unlock_irqrestore(&vi->lock, flags);
+
+        return ret;
+}
+
+static void entropy_leak_detected(struct virtqueue *vq)
+{
+        struct virtrng_info *vi = vq->vdev->priv;
+        struct virtqueue *activeq;
+        unsigned int len;
+        unsigned long flags;
+        void *buffer;
+        bool kick_activeq = false;
+
+        spin_lock_irqsave(&vi->lock, flags);
+
+        activeq = get_active_leakq(vi);
+        /* Drain all the used buffers from the queue */
+        while ((buffer = virtqueue_get_buf(vq, &len)) != NULL) {
+                if (vq == activeq) {
+                        pr_debug("%s: entropy leak detected!", vi->name);
+                        activeq = swap_leakqs(vi);
+                }
+
+                if (buffer == vi->leak_data) {
+                        add_device_randomness(vi->leak_data, sizeof(vi->leak_data));
+
+                        /* Ensure we always have a pending request for random bytes on entropy
+                         * leak. Do it here, after we have swapped leak queues, so it gets handled
+                         * with the next entropy leak event.
+                         */
+                        add_fill_on_leak_request(vi, activeq, vi->leak_data, sizeof(vi->leak_data));
+                        kick_activeq = true;
+                }
+        }
+
+        if (kick_activeq)
+                virtqueue_kick(activeq);
+
+        spin_unlock_irqrestore(&vi->lock, flags);
+}
+
 static void random_recv_done(struct virtqueue *vq)
 {
         struct virtrng_info *vi = vq->vdev->priv;
+        unsigned long flags;
 
+        spin_lock_irqsave(&vi->lock, flags);
         /* We can get spurious callbacks, e.g. shared IRQs + virtio_pci. */
         if (!virtqueue_get_buf(vi->vq, &vi->data_avail))
-                return;
+                goto unlock;
 
         vi->data_idx = 0;
 
         complete(&vi->have_data);
+
+unlock:
+        spin_unlock_irqrestore(&vi->lock, flags);
 }
 
 static void request_entropy(struct virtrng_info *vi)
 {
         struct scatterlist sg;
+        unsigned long flags;
 
         reinit_completion(&vi->have_data);
         vi->data_avail = 0;
@@ -57,10 +195,12 @@ static void request_entropy(struct virtrng_info *vi)
 
         sg_init_one(&sg, vi->data, sizeof(vi->data));
 
+        spin_lock_irqsave(&vi->lock, flags);
         /* There should always be room for one buffer. */
         virtqueue_add_inbuf(vi->vq, &sg, 1, vi->data, GFP_KERNEL);
 
         virtqueue_kick(vi->vq);
+        spin_unlock_irqrestore(&vi->lock, flags);
 }
 
 static unsigned int copy_data(struct virtrng_info *vi, void *buf,
@@ -126,6 +266,40 @@ static void virtio_cleanup(struct hwrng *rng)
         complete(&vi->have_data);
 }
 
+static int init_virtqueues(struct virtrng_info *vi, struct virtio_device *vdev)
+{
+        int ret = -ENOMEM, total_vqs = 1;
+        struct virtqueue *vqs[3];
+        const char *names[3];
+        vq_callback_t *callbacks[3];
+
+        if (vi->has_leakqs)
+                total_vqs = 3;
+
+        callbacks[0] = random_recv_done;
+        names[0] = "input";
+        if (vi->has_leakqs) {
+                callbacks[1] = entropy_leak_detected;
+                names[1] = "leakq.1";
+                callbacks[2] = entropy_leak_detected;
+                names[2] = "leakq.2";
+        }
+
+        ret = virtio_find_vqs(vdev, total_vqs, vqs, callbacks, names, NULL);
+        if (ret)
+                goto err;
+
+        vi->vq = vqs[0];
+
+        if (vi->has_leakqs) {
+                vi->leakq[0] = vqs[1];
+                vi->leakq[1] = vqs[2];
+        }
+
+err:
+        return ret;
+}
+
 static int probe_common(struct virtio_device *vdev)
 {
         int err, index;
@@ -152,18 +326,24 @@ static int probe_common(struct virtio_device *vdev)
         };
         vdev->priv = vi;
 
-        /* We expect a single virtqueue. */
-        vi->vq = virtio_find_single_vq(vdev, random_recv_done, "input");
-        if (IS_ERR(vi->vq)) {
-                err = PTR_ERR(vi->vq);
-                goto err_find;
+        vi->has_leakqs = virtio_has_feature(vdev, VIRTIO_RNG_F_LEAK);
+        if (vi->has_leakqs) {
+                spin_lock_init(&vi->lock);
+                vi->active_leakq = 0;
         }
 
+        err = init_virtqueues(vi, vdev);
+        if (err)
+                goto err_find;
+
         virtio_device_ready(vdev);
 
         /* we always have a pending entropy request */
         request_entropy(vi);
 
+        /* we always have a fill_on_leak request pending */
+        virtrng_fill_on_leak(vi, vi->leak_data, sizeof(vi->leak_data));
+
         return 0;
 
 err_find:
@@ -246,7 +426,13 @@ static const struct virtio_device_id id_table[] = {
         { 0 },
 };
 
+static unsigned int features[] = {
+        VIRTIO_RNG_F_LEAK,
+};
+
 static struct virtio_driver virtio_rng_driver = {
+        .feature_table = features,
+        .feature_table_size = ARRAY_SIZE(features),
         .driver.name = KBUILD_MODNAME,
         .driver.owner = THIS_MODULE,
         .id_table = id_table,
diff --git a/include/uapi/linux/virtio_rng.h b/include/uapi/linux/virtio_rng.h
index c4d5de896f0c..d9774951547e 100644
--- a/include/uapi/linux/virtio_rng.h
+++ b/include/uapi/linux/virtio_rng.h
@@ -5,4 +5,7 @@
 #include
 #include
 
+/* The feature bitmap for virtio entropy device */
+#define VIRTIO_RNG_F_LEAK 0
+
 #endif /* _LINUX_VIRTIO_RNG_H */
From patchwork Tue Jan 31 14:55:43 2023
X-Patchwork-Submitter: Babis Chalios
X-Patchwork-Id: 13123068
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Babis Chalios
To: Olivia Mackall, Herbert Xu, "Michael S. Tsirkin", Jason Wang, Babis Chalios
Subject: [PATCH v2 2/2] virtio-rng: add sysfs entries for leak detection
Date: Tue, 31 Jan 2023 15:55:43 +0100
Message-ID: <20230131145543.86369-3-bchalios@amazon.es>
In-Reply-To: <20230131145543.86369-1-bchalios@amazon.es>
References: <20230131145543.86369-1-bchalios@amazon.es>
X-Mailing-List: linux-crypto@vger.kernel.org

Make use of the copy-on-leak functionality of the virtio-rng driver to
expose a mechanism to user space for detecting entropy leak events,
such as taking a VM snapshot or restoring from one.

The driver sets up a single page of memory, stores a counter in its
first word, and queues a copy-on-leak command that increases the
counter every time an entropy leak occurs. It exposes the value of the
counter through a per-device binary sysfs file. The file can be
mmap'ed and read, and every time a change in the counter is observed,
`sysfs_notify` is used to notify processes that are polling it.

The mechanism is based on the idea of a VM generation counter that had
previously been proposed as an extension to the VM Generation ID
device, where mmap and poll interfaces can be used on the file
containing the counter and changes in its value signal snapshot events.

It is worth noting that using mmap is entirely race-free, since
changes in the counter are observable by user space as soon as the
vcpus are resumed. Using poll, on the other hand, is not race-free:
there is a race window between the moment the vcpus are resumed and
the moment the used buffers are handled by the virtio-rng driver.

Signed-off-by: Babis Chalios
---
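A sketch (illustration only, not part of the patch) of how user space
could consume the counter through the mmap interface described above.
The sysfs path is an assumption; the actual directory name comes from
the per-device kobject name (vi->name).

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        /* Hypothetical path; pass the real one as argv[1]. */
        const char *path = argc > 1 ? argv[1] :
                "/sys/virtio-rng/virtio_rng.0/vm_gen_counter";
        long page = sysconf(_SC_PAGESIZE);
        int fd = open(path, O_RDONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* The attribute is a single read-only page; its first word holds the
         * generation counter that the device bumps on every leak event. */
        unsigned long *counter = mmap(NULL, page, PROT_READ, MAP_SHARED, fd, 0);
        if (counter == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        unsigned long last = *counter;
        printf("generation counter: %lu\n", last);

        /* Re-checking the mapped value is the race-free path: the device has
         * already bumped it by the time the vcpus run again. */
        for (;;) {
                unsigned long now = *counter;
                if (now != last) {
                        printf("snapshot event detected (%lu -> %lu)\n", last, now);
                        last = now;
                }
                sleep(1);
        }
}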
 drivers/char/hw_random/virtio-rng.c | 178 +++++++++++++++++++++++++++-
 1 file changed, 175 insertions(+), 3 deletions(-)

diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
index 154f68a1e326..9fe9da09f202 100644
--- a/drivers/char/hw_random/virtio-rng.c
+++ b/drivers/char/hw_random/virtio-rng.c
@@ -4,6 +4,9 @@
  * Copyright (C) 2007, 2008 Rusty Russell IBM Corporation
  */
 
+#include "linux/gfp.h"
+#include "linux/minmax.h"
+#include "linux/sysfs.h"
 #include
 #include
 #include
@@ -15,6 +18,10 @@ static DEFINE_IDA(rng_index_ida);
 
+#ifdef CONFIG_SYSFS
+static struct kobject *virtio_rng_kobj;
+#endif
+
 struct virtrng_info {
         struct hwrng hwrng;
         struct virtqueue *vq;
@@ -23,6 +30,12 @@ struct virtrng_info {
         struct virtqueue *leakq[2];
         spinlock_t lock;
         int active_leakq;
+#ifdef CONFIG_SYSFS
+        struct kobject *kobj;
+        struct bin_attribute vm_gen_counter_attr;
+        unsigned long map_buffer;
+        unsigned long next_vm_gen_counter;
+#endif
 
         char name[25];
         int index;
@@ -42,6 +55,40 @@ struct virtrng_info {
 #endif
 };
 
+#ifdef CONFIG_SYSFS
+static ssize_t virtrng_sysfs_read(struct file *filep, struct kobject *kobj,
+                struct bin_attribute *attr, char *buf, loff_t pos, size_t len)
+{
+        struct virtrng_info *vi = attr->private;
+        unsigned long gen_counter = *(unsigned long *)vi->map_buffer;
+
+        if (!len)
+                return 0;
+
+        len = min(len, sizeof(gen_counter));
+        memcpy(buf, &gen_counter, len);
+
+        return len;
+}
+
+static int virtrng_sysfs_mmap(struct file *filep, struct kobject *kobj,
+                struct bin_attribute *attr, struct vm_area_struct *vma)
+{
+        struct virtrng_info *vi = attr->private;
+
+        if (vma->vm_pgoff || vma_pages(vma) > 1)
+                return -EINVAL;
+
+        if (vma->vm_flags & VM_WRITE)
+                return -EPERM;
+
+        vma->vm_flags |= VM_DONTEXPAND;
+        vma->vm_flags &= ~VM_MAYWRITE;
+
+        return vm_insert_page(vma, vma->vm_start, virt_to_page(vi->map_buffer));
+}
+#endif
+
 /* Swaps the queues and returns the new active leak queue. */
 static struct virtqueue *swap_leakqs(struct virtrng_info *vi)
 {
@@ -81,7 +128,7 @@ static int virtrng_fill_on_leak(struct virtrng_info *vi, void *data, size_t len)
 
         vq = get_active_leakq(vi);
         ret = add_fill_on_leak_request(vi, vq, data, len);
-        if (ret)
+        if (!ret)
                 virtqueue_kick(vq);
 
         spin_unlock_irqrestore(&vi->lock, flags);
@@ -121,7 +168,7 @@ static int virtrng_copy_on_leak(struct virtrng_info *vi, void *to, void *from, s
 
         vq = get_active_leakq(vi);
         ret = add_copy_on_leak_request(vi, vq, to, from, len);
-        if (ret)
+        if (!ret)
                 virtqueue_kick(vq);
 
         spin_unlock_irqrestore(&vi->lock, flags);
@@ -137,6 +184,9 @@ static void entropy_leak_detected(struct virtqueue *vq)
         unsigned long flags;
         void *buffer;
         bool kick_activeq = false;
+#ifdef CONFIG_SYSFS
+        bool notify_sysfs = false;
+#endif
 
         spin_lock_irqsave(&vi->lock, flags);
 
@@ -158,12 +208,34 @@ static void entropy_leak_detected(struct virtqueue *vq)
                         add_fill_on_leak_request(vi, activeq, vi->leak_data, sizeof(vi->leak_data));
                         kick_activeq = true;
                 }
+
+#ifdef CONFIG_SYSFS
+                if (buffer == (void *)vi->map_buffer) {
+                        notify_sysfs = true;
+
+                        /* Add a request to bump the generation counter on the next leak event.
+                         * We have already swapped leak queues, so this will get properly handled
+                         * with the next entropy leak event.
+                         */
+                        vi->next_vm_gen_counter++;
+                        add_copy_on_leak_request(vi, activeq, (void *)vi->map_buffer,
+                                        &vi->next_vm_gen_counter, sizeof(unsigned long));
+
+                        kick_activeq = true;
+                }
+#endif
         }
 
         if (kick_activeq)
                 virtqueue_kick(activeq);
 
         spin_unlock_irqrestore(&vi->lock, flags);
+
+#ifdef CONFIG_SYSFS
+        /* Notify anyone polling on the sysfs file */
+        if (notify_sysfs)
+                sysfs_notify(vi->kobj, NULL, "vm_gen_counter");
+#endif
 }
 
 static void random_recv_done(struct virtqueue *vq)
@@ -300,6 +372,59 @@ static int init_virtqueues(struct virtrng_info *vi, struct virtio_device *vdev)
         return ret;
 }
 
+#ifdef CONFIG_SYSFS
+static int setup_sysfs(struct virtrng_info *vi)
+{
+        int err;
+
+        vi->next_vm_gen_counter = 1;
+
+        /* We have one binary file per device under /sys/virtio-rng//vm_gen_counter */
+        vi->vm_gen_counter_attr.attr.name = "vm_gen_counter";
+        vi->vm_gen_counter_attr.attr.mode = 0444;
+        vi->vm_gen_counter_attr.read = virtrng_sysfs_read;
+        vi->vm_gen_counter_attr.mmap = virtrng_sysfs_mmap;
+        vi->vm_gen_counter_attr.private = vi;
+
+        vi->map_buffer = get_zeroed_page(GFP_KERNEL);
+        if (!vi->map_buffer)
+                return -ENOMEM;
+
+        err = -ENOMEM;
+        vi->kobj = kobject_create_and_add(vi->name, virtio_rng_kobj);
+        if (!vi->kobj)
+                goto err_page;
+
+        err = sysfs_create_bin_file(vi->kobj, &vi->vm_gen_counter_attr);
+        if (err)
+                goto err_kobj;
+
+        return 0;
+
+err_kobj:
+        kobject_put(vi->kobj);
+err_page:
+        free_pages(vi->map_buffer, 0);
+        return err;
+}
+
+static void cleanup_sysfs(struct virtrng_info *vi)
+{
+        sysfs_remove_bin_file(vi->kobj, &vi->vm_gen_counter_attr);
+        kobject_put(vi->kobj);
+        free_pages(vi->map_buffer, 0);
+}
+#else
+static int setup_sysfs(struct virtrng_info *vi)
+{
+        return 0;
+}
+
+static void cleanup_sysfs(struct virtrng_info *vi)
+{
+}
+#endif
+
 static int probe_common(struct virtio_device *vdev)
 {
         int err, index;
@@ -330,11 +455,15 @@ static int probe_common(struct virtio_device *vdev)
         if (vi->has_leakqs) {
                 spin_lock_init(&vi->lock);
                 vi->active_leakq = 0;
+
+                err = setup_sysfs(vi);
+                if (err)
+                        goto err_find;
         }
 
         err = init_virtqueues(vi, vdev);
         if (err)
-                goto err_find;
+                goto err_sysfs;
 
         virtio_device_ready(vdev);
 
@@ -344,8 +473,18 @@ static int probe_common(struct virtio_device *vdev)
         /* we always have a fill_on_leak request pending */
         virtrng_fill_on_leak(vi, vi->leak_data, sizeof(vi->leak_data));
 
+#ifdef CONFIG_SYSFS
+        /* also a copy_on_leak request for the generation counter when we have sysfs
+         * support.
+         */
+        virtrng_copy_on_leak(vi, (void *)vi->map_buffer, &vi->next_vm_gen_counter,
+                        sizeof(unsigned long));
+#endif
+
         return 0;
 
+err_sysfs:
+        cleanup_sysfs(vi);
 err_find:
         ida_simple_remove(&rng_index_ida, index);
 err_ida:
@@ -363,6 +502,8 @@ static void remove_common(struct virtio_device *vdev)
         complete(&vi->have_data);
         if (vi->hwrng_register_done)
                 hwrng_unregister(&vi->hwrng);
+        if (vi->has_leakqs)
+                cleanup_sysfs(vi);
         virtio_reset_device(vdev);
         vdev->config->del_vqs(vdev);
         ida_simple_remove(&rng_index_ida, vi->index);
@@ -445,7 +586,38 @@ static struct virtio_driver virtio_rng_driver = {
 #endif
 };
 
+#ifdef CONFIG_SYSFS
+static int __init virtio_rng_init(void)
+{
+        int ret;
+
+        virtio_rng_kobj = kobject_create_and_add("virtio-rng", NULL);
+        if (!virtio_rng_kobj)
+                return -ENOMEM;
+
+        ret = register_virtio_driver(&virtio_rng_driver);
+        if (ret < 0)
+                goto err;
+
+        return 0;
+
+err:
+        kobject_put(virtio_rng_kobj);
+        return ret;
+}
+
+static void __exit virtio_rng_fini(void)
+{
+        kobject_put(virtio_rng_kobj);
+        unregister_virtio_driver(&virtio_rng_driver);
+}
+
+module_init(virtio_rng_init);
+module_exit(virtio_rng_fini);
+#else
 module_virtio_driver(virtio_rng_driver);
+#endif
+
 MODULE_DEVICE_TABLE(virtio, id_table);
 MODULE_DESCRIPTION("Virtio random number driver");
 MODULE_LICENSE("GPL");
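A second user-space sketch (illustration only, same assumption about
the sysfs path as above): waiting for leak events via poll(2), which
sysfs reports as POLLPRI | POLLERR after the driver calls
sysfs_notify(). As the commit message of patch 2/2 notes, this path is
not race-free, unlike re-reading the mmap'ed counter.

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        /* Hypothetical path; the directory name is per device instance. */
        const char *path = "/sys/virtio-rng/virtio_rng.0/vm_gen_counter";
        unsigned long counter;
        int fd = open(path, O_RDONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* Prime the attribute: poll() only reports changes after a read. */
        if (read(fd, &counter, sizeof(counter)) != sizeof(counter)) {
                perror("read");
                return 1;
        }

        for (;;) {
                struct pollfd pfd = { .fd = fd, .events = POLLPRI | POLLERR };

                if (poll(&pfd, 1, -1) < 0) {
                        perror("poll");
                        return 1;
                }

                /* Seek back and re-read the binary counter value. */
                lseek(fd, 0, SEEK_SET);
                if (read(fd, &counter, sizeof(counter)) == sizeof(counter))
                        printf("entropy leak event, counter is now %lu\n", counter);
        }
}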