From patchwork Wed Jul 29 14:29:22 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Igor Mammedov
X-Patchwork-Id: 6894771
From: Igor Mammedov <imammedo@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: mst@redhat.com, pbonzini@redhat.com, kvm@vger.kernel.org
Subject: [PATCH 1/2] vhost: add ioctl to query nregions upper limit
Date: Wed, 29 Jul 2015 16:29:22 +0200
Message-Id: <1438180163-275465-2-git-send-email-imammedo@redhat.com>
In-Reply-To: <1438180163-275465-1-git-send-email-imammedo@redhat.com>
References: <1438180163-275465-1-git-send-email-imammedo@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

From: "Michael S. Tsirkin" <mst@redhat.com>

Userspace currently simply tries to give vhost as many regions as it
happens to have, but the mem table is only available once a large part
of the VM has been initialized, so graceful failure is very hard to
support.  The result is that userspace tends to fail catastrophically.

Instead, add a new ioctl so userspace can find out, up front, how many
regions the kernel supports.  This returns a positive value that we
commit to.

Also, document our contract with legacy userspace: when running on an
old kernel, you get -1 and you can assume at least 64 slots.  Since the
value 0 is left unused, let's make it mean that the current userspace
behaviour (trial and error) is required, just in case we want it back.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 drivers/vhost/vhost.c      |  7 ++++++-
 include/uapi/linux/vhost.h | 17 ++++++++++++++++-
 2 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index eec2f11..76dc0cf 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -30,7 +30,7 @@
 
 #include "vhost.h"
 
-static ushort max_mem_regions = 64;
+static ushort max_mem_regions = VHOST_MEM_MAX_NREGIONS_DEFAULT;
 module_param(max_mem_regions, ushort, 0444);
 MODULE_PARM_DESC(max_mem_regions,
 	"Maximum number of memory regions in memory map. (default: 64)");
@@ -944,6 +944,11 @@ long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
 	long r;
 	int i, fd;
 
+	if (ioctl == VHOST_GET_MEM_MAX_NREGIONS) {
+		r = max_mem_regions;
+		goto done;
+	}
+
 	/* If you are not the owner, you can become one */
 	if (ioctl == VHOST_SET_OWNER) {
 		r = vhost_dev_set_owner(d);
diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
index ab373191..2511954 100644
--- a/include/uapi/linux/vhost.h
+++ b/include/uapi/linux/vhost.h
@@ -80,7 +80,7 @@ struct vhost_memory {
  * Allows subsequent call to VHOST_OWNER_SET to succeed. */
 #define VHOST_RESET_OWNER _IO(VHOST_VIRTIO, 0x02)
 
-/* Set up/modify memory layout */
+/* Set up/modify memory layout: see also VHOST_GET_MEM_MAX_NREGIONS below. */
 #define VHOST_SET_MEM_TABLE _IOW(VHOST_VIRTIO, 0x03, struct vhost_memory)
 
 /* Write logging setup. */
@@ -127,6 +127,21 @@ struct vhost_memory {
 /* Set eventfd to signal an error */
 #define VHOST_SET_VRING_ERR _IOW(VHOST_VIRTIO, 0x22, struct vhost_vring_file)
 
+/* Query upper limit on nregions in VHOST_SET_MEM_TABLE arguments.
+ * Returns:
+ *	0 < value <= MAX_INT - gives the upper limit, higher values will fail
+ *	0 - there's no static limit: try and see if it works
+ *	-1 - on failure
+ */
+#define VHOST_GET_MEM_MAX_NREGIONS _IO(VHOST_VIRTIO, 0x23)
+
+/* Returned by VHOST_GET_MEM_MAX_NREGIONS to mean there's no static limit:
+ * try and it'll work if you are lucky. */
+#define VHOST_MEM_MAX_NREGIONS_NONE 0
+/* We support at least as many nregions in VHOST_SET_MEM_TABLE:
+ * for use on legacy kernels without VHOST_GET_MEM_MAX_NREGIONS support. */
+#define VHOST_MEM_MAX_NREGIONS_DEFAULT 64
+
 /* VHOST_NET specific defines */
 
 /* Attach virtio net ring to a raw socket, or tap device.
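
For reference, a minimal userspace sketch (not part of the patch) of how a VMM
might consume the new ioctl under the contract documented above. It assumes a
vhost-net device node at /dev/vhost-net and provides fallback macro definitions
in case it is built against an unpatched <linux/vhost.h>; on kernels that do not
know the ioctl it applies the documented legacy assumption of at least 64 slots.

/* Hypothetical example, not from the patch: query the vhost memory-region
 * limit and interpret the result per the VHOST_GET_MEM_MAX_NREGIONS contract. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Fallback definitions when building against headers without this patch. */
#ifndef VHOST_GET_MEM_MAX_NREGIONS
#define VHOST_GET_MEM_MAX_NREGIONS _IO(VHOST_VIRTIO, 0x23)
#define VHOST_MEM_MAX_NREGIONS_NONE 0
#define VHOST_MEM_MAX_NREGIONS_DEFAULT 64
#endif

int main(void)
{
	int fd = open("/dev/vhost-net", O_RDWR);
	int limit;

	if (fd < 0) {
		perror("open /dev/vhost-net");
		return 1;
	}

	limit = ioctl(fd, VHOST_GET_MEM_MAX_NREGIONS);
	if (limit < 0) {
		/* Old kernel without the ioctl: contract says assume >= 64 slots. */
		limit = VHOST_MEM_MAX_NREGIONS_DEFAULT;
		printf("legacy kernel, assuming at least %d regions\n", limit);
	} else if (limit == VHOST_MEM_MAX_NREGIONS_NONE) {
		/* No static limit advertised: fall back to trial and error
		 * when issuing VHOST_SET_MEM_TABLE. */
		printf("no static limit, probe with VHOST_SET_MEM_TABLE\n");
	} else {
		/* Positive value the kernel commits to honouring. */
		printf("kernel accepts up to %d memory regions\n", limit);
	}

	close(fd);
	return 0;
}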