From patchwork Wed Dec 14 16:30:20 2022
X-Patchwork-Submitter: Stefano Garzarella
X-Patchwork-Id: 13073267
[87.11.6.51]) by smtp.gmail.com with ESMTPSA id e17-20020adffd11000000b002422816aa25sm3791759wrr.108.2022.12.14.08.30.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 14 Dec 2022 08:30:29 -0800 (PST) From: Stefano Garzarella To: virtualization@lists.linux-foundation.org Cc: Jason Wang , Andrey Zhadchenko , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, "Michael S. Tsirkin" , eperezma@redhat.com, stefanha@redhat.com, netdev@vger.kernel.org, Stefano Garzarella Subject: [RFC PATCH 1/6] vdpa: add bind_mm callback Date: Wed, 14 Dec 2022 17:30:20 +0100 Message-Id: <20221214163025.103075-2-sgarzare@redhat.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221214163025.103075-1-sgarzare@redhat.com> References: <20221214163025.103075-1-sgarzare@redhat.com> MIME-Version: 1.0 Content-type: text/plain Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-State: RFC This new optional callback is used to bind the device to a specific address space so the vDPA framework can use VA when this callback is implemented. Suggested-by: Jason Wang Signed-off-by: Stefano Garzarella --- include/linux/vdpa.h | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h index 6d0f5e4e82c2..34388e21ef3f 100644 --- a/include/linux/vdpa.h +++ b/include/linux/vdpa.h @@ -282,6 +282,12 @@ struct vdpa_map_file { * @iova: iova to be unmapped * @size: size of the area * Returns integer: success (0) or error (< 0) + * @bind_mm: Bind the device to a specific address space + * so the vDPA framework can use VA when this + * callback is implemented. (optional) + * @vdev: vdpa device + * @mm: address space to bind + * @owner: process that owns the address space * @free: Free resources that belongs to vDPA (optional) * @vdev: vdpa device */ @@ -341,6 +347,8 @@ struct vdpa_config_ops { u64 iova, u64 size); int (*set_group_asid)(struct vdpa_device *vdev, unsigned int group, unsigned int asid); + int (*bind_mm)(struct vdpa_device *vdev, struct mm_struct *mm, + struct task_struct *owner); /* Free device resources */ void (*free)(struct vdpa_device *vdev); From patchwork Wed Dec 14 16:30:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefano Garzarella X-Patchwork-Id: 13073266 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3A2EBC4332F for ; Wed, 14 Dec 2022 16:31:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239033AbiLNQbv (ORCPT ); Wed, 14 Dec 2022 11:31:51 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48848 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239026AbiLNQbb (ORCPT ); Wed, 14 Dec 2022 11:31:31 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 54476D2E9 for ; Wed, 14 Dec 2022 08:30:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1671035443; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; 
From: Stefano Garzarella
To: virtualization@lists.linux-foundation.org
Cc: Jason Wang, Andrey Zhadchenko, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, "Michael S. Tsirkin", eperezma@redhat.com, stefanha@redhat.com, netdev@vger.kernel.org, Stefano Garzarella
Subject: [RFC PATCH 2/6] vhost-vdpa: use bind_mm device callback
Date: Wed, 14 Dec 2022 17:30:21 +0100
Message-Id: <20221214163025.103075-3-sgarzare@redhat.com>
In-Reply-To: <20221214163025.103075-1-sgarzare@redhat.com>
References: <20221214163025.103075-1-sgarzare@redhat.com>
X-Patchwork-State: RFC

When the user calls the VHOST_SET_OWNER ioctl and the vDPA device has `use_va` set to true, let's call the bind_mm callback. In this way we can bind the device to the user address space and directly use the user VA.
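For reference, a parent driver that sets use_va could wire up the new callback roughly as in the sketch below. This is only an illustration, not code from the series: struct my_vdpa and its bound_mm field are invented, and a real implementation (see the vdpa_sim patches later in this series) also has to make its worker kthread adopt the saved mm with kthread_use_mm() before it dereferences user VA.

static int my_vdpa_bind_mm(struct vdpa_device *vdev, struct mm_struct *mm,
			   struct task_struct *owner)
{
	struct my_vdpa *my = container_of(vdev, struct my_vdpa, vdpa);

	/* Remember the address space of the process that owns the device;
	 * the datapath kthread attaches to it later, before touching the
	 * user virtual addresses found in the IOTLB.
	 */
	my->bound_mm = mm;
	return 0;
}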
Signed-off-by: Stefano Garzarella --- drivers/vhost/vdpa.c | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c index b08e07fc7d1f..a775d1a52c77 100644 --- a/drivers/vhost/vdpa.c +++ b/drivers/vhost/vdpa.c @@ -219,6 +219,17 @@ static int vhost_vdpa_reset(struct vhost_vdpa *v) return vdpa_reset(vdpa); } +static long vhost_vdpa_bind_mm(struct vhost_vdpa *v) +{ + struct vdpa_device *vdpa = v->vdpa; + const struct vdpa_config_ops *ops = vdpa->config; + + if (!vdpa->use_va || !ops->bind_mm) + return 0; + + return ops->bind_mm(vdpa, v->vdev.mm, current); +} + static long vhost_vdpa_get_device_id(struct vhost_vdpa *v, u8 __user *argp) { struct vdpa_device *vdpa = v->vdpa; @@ -276,6 +287,10 @@ static long vhost_vdpa_set_status(struct vhost_vdpa *v, u8 __user *statusp) ret = vdpa_reset(vdpa); if (ret) return ret; + + ret = vhost_vdpa_bind_mm(v); + if (ret) + return ret; } else vdpa_set_status(vdpa, status); @@ -679,6 +694,13 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep, break; default: r = vhost_dev_ioctl(&v->vdev, cmd, argp); + if (!r && cmd == VHOST_SET_OWNER) { + r = vhost_vdpa_bind_mm(v); + if (r) { + vhost_dev_reset_owner(&v->vdev, NULL); + break; + } + } if (r == -ENOIOCTLCMD) r = vhost_vdpa_vring_ioctl(v, cmd, argp); break; From patchwork Wed Dec 14 16:30:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefano Garzarella X-Patchwork-Id: 13073265 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 733ADC4332F for ; Wed, 14 Dec 2022 16:31:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239007AbiLNQb0 (ORCPT ); Wed, 14 Dec 2022 11:31:26 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48798 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238998AbiLNQbX (ORCPT ); Wed, 14 Dec 2022 11:31:23 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5278E62F8 for ; Wed, 14 Dec 2022 08:30:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1671035440; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=uDSLhjChOzyvO7sWXszZbPlZnPPU1GukY9F9aZWrZdA=; b=HGfz2uk2338ee4LozHACWF45wWfhzxftm4iKsducv4gwC06TPjLt/ErC8vyJB50946iV5W medXHEBiwIBgEkKOm7ES1qU5kAJ931G3Yo/NBhEPzA30MXhXDyXxwQ7uxzNga9jqsQx1Y0 PD/vs/hygBbMb81kVdhwBBgDxHxIKZI= Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com [209.85.128.70]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id us-mta-537-XCWfJ_jvMwO46qWyMUkQvw-1; Wed, 14 Dec 2022 11:30:37 -0500 X-MC-Unique: XCWfJ_jvMwO46qWyMUkQvw-1 Received: by mail-wm1-f70.google.com with SMTP id n8-20020a05600c294800b003d1cc68889dso4368140wmd.7 for ; Wed, 14 Dec 2022 08:30:36 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=uDSLhjChOzyvO7sWXszZbPlZnPPU1GukY9F9aZWrZdA=; b=EmpEGW3cnBq8mX5glM+ray5CrQ96unSXQ+gP2aWKwLLQT9upUo7wT7jimw6ETN+7DX VaKcFvtbbIrfubkm6wW+hf78It21QpJXjs45cZ9oAgHrqZL8nW0AOG1RCxotGZaJDrqG VQsigZegiM3GxYnpp0wTFog8JU1PfaN9JccbNTmwtefowo8u3llnXAq/0W8aWkDxJTwU oot7MmlF31mz3sGwVFbDogI2SdFOTScpY9iT5iNVz7be5VI0TxZ1lKk+PgTTmSXnltu8 tw3MtPhC7JwYQ9vA40fAOxHcTf302k69rlF45eCuTTVIs49CGz8/AW0Fbo0RqntKxr6w gsMA== X-Gm-Message-State: ANoB5pkX3wbmO8pFDSAd2xlTAkJ9I8RGsO7KkWDvSa+GO6NUEpxPENow h+32z0fz2xGwnihmimUTXU+Zmrxlyjggl6kE2VgbxahbGBtntKWleXTMM0EMMc9KEsCuYi02uqw BnwwPUZX+ZUxmadSL X-Received: by 2002:a5d:55c4:0:b0:242:19d6:da77 with SMTP id i4-20020a5d55c4000000b0024219d6da77mr15384067wrw.15.1671035434184; Wed, 14 Dec 2022 08:30:34 -0800 (PST) X-Google-Smtp-Source: AA0mqf64FO5UwTyJyWczFAW3XAofyHe4ZMJGRxWj6B/ZMPYSJWN7NbuJdbyQLq95i6Mag0G2kucxQQ== X-Received: by 2002:a5d:55c4:0:b0:242:19d6:da77 with SMTP id i4-20020a5d55c4000000b0024219d6da77mr15384042wrw.15.1671035433873; Wed, 14 Dec 2022 08:30:33 -0800 (PST) Received: from step1.redhat.com (host-87-11-6-51.retail.telecomitalia.it. [87.11.6.51]) by smtp.gmail.com with ESMTPSA id e17-20020adffd11000000b002422816aa25sm3791759wrr.108.2022.12.14.08.30.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 14 Dec 2022 08:30:33 -0800 (PST) From: Stefano Garzarella To: virtualization@lists.linux-foundation.org Cc: Jason Wang , Andrey Zhadchenko , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, "Michael S. Tsirkin" , eperezma@redhat.com, stefanha@redhat.com, netdev@vger.kernel.org, Stefano Garzarella Subject: [RFC PATCH 3/6] vringh: support VA with iotlb Date: Wed, 14 Dec 2022 17:30:22 +0100 Message-Id: <20221214163025.103075-4-sgarzare@redhat.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221214163025.103075-1-sgarzare@redhat.com> References: <20221214163025.103075-1-sgarzare@redhat.com> MIME-Version: 1.0 Content-type: text/plain Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-State: RFC vDPA supports the possibility to use user VA in the iotlb messages. So, let's add support for user VA in vringh to use it in the vDPA simulators. Signed-off-by: Stefano Garzarella --- include/linux/vringh.h | 5 +- drivers/vdpa/mlx5/core/resources.c | 3 +- drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 +- drivers/vdpa/vdpa_sim/vdpa_sim.c | 4 +- drivers/vhost/vringh.c | 250 +++++++++++++++++++++++------ 5 files changed, 207 insertions(+), 57 deletions(-) diff --git a/include/linux/vringh.h b/include/linux/vringh.h index 212892cf9822..c70962f16b1f 100644 --- a/include/linux/vringh.h +++ b/include/linux/vringh.h @@ -32,6 +32,9 @@ struct vringh { /* Can we get away with weak barriers? */ bool weak_barriers; + /* Use user's VA */ + bool use_va; + /* Last available index we saw (ie. where we're up to). 
*/ u16 last_avail_idx; @@ -279,7 +282,7 @@ void vringh_set_iotlb(struct vringh *vrh, struct vhost_iotlb *iotlb, spinlock_t *iotlb_lock); int vringh_init_iotlb(struct vringh *vrh, u64 features, - unsigned int num, bool weak_barriers, + unsigned int num, bool weak_barriers, bool use_va, struct vring_desc *desc, struct vring_avail *avail, struct vring_used *used); diff --git a/drivers/vdpa/mlx5/core/resources.c b/drivers/vdpa/mlx5/core/resources.c index 9800f9bec225..e0bab3458b40 100644 --- a/drivers/vdpa/mlx5/core/resources.c +++ b/drivers/vdpa/mlx5/core/resources.c @@ -233,7 +233,8 @@ static int init_ctrl_vq(struct mlx5_vdpa_dev *mvdev) if (!mvdev->cvq.iotlb) return -ENOMEM; - vringh_set_iotlb(&mvdev->cvq.vring, mvdev->cvq.iotlb, &mvdev->cvq.iommu_lock); + vringh_set_iotlb(&mvdev->cvq.vring, mvdev->cvq.iotlb, + &mvdev->cvq.iommu_lock); return 0; } diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c index 90913365def4..81ba0867e2c8 100644 --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c @@ -2504,7 +2504,7 @@ static int setup_cvq_vring(struct mlx5_vdpa_dev *mvdev) if (mvdev->actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)) err = vringh_init_iotlb(&cvq->vring, mvdev->actual_features, - MLX5_CVQ_MAX_ENT, false, + MLX5_CVQ_MAX_ENT, false, false, (struct vring_desc *)(uintptr_t)cvq->desc_addr, (struct vring_avail *)(uintptr_t)cvq->driver_addr, (struct vring_used *)(uintptr_t)cvq->device_addr); diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c index b20689f8fe89..2e0ee7280aa8 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c @@ -67,7 +67,7 @@ static void vdpasim_queue_ready(struct vdpasim *vdpasim, unsigned int idx) { struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx]; - vringh_init_iotlb(&vq->vring, vdpasim->features, vq->num, false, + vringh_init_iotlb(&vq->vring, vdpasim->features, vq->num, false, false, (struct vring_desc *)(uintptr_t)vq->desc_addr, (struct vring_avail *) (uintptr_t)vq->driver_addr, @@ -87,7 +87,7 @@ static void vdpasim_vq_reset(struct vdpasim *vdpasim, vq->cb = NULL; vq->private = NULL; vringh_init_iotlb(&vq->vring, vdpasim->dev_attr.supported_features, - VDPASIM_QUEUE_MAX, false, NULL, NULL, NULL); + VDPASIM_QUEUE_MAX, false, false, NULL, NULL, NULL); vq->vring.notify = NULL; } diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c index 11f59dd06a74..c1f77dc93482 100644 --- a/drivers/vhost/vringh.c +++ b/drivers/vhost/vringh.c @@ -1094,15 +1094,99 @@ EXPORT_SYMBOL(vringh_need_notify_kern); #if IS_REACHABLE(CONFIG_VHOST_IOTLB) -static int iotlb_translate(const struct vringh *vrh, - u64 addr, u64 len, u64 *translated, - struct bio_vec iov[], - int iov_size, u32 perm) +static int iotlb_translate_va(const struct vringh *vrh, + u64 addr, u64 len, u64 *translated, + struct iovec iov[], + int iov_size, u32 perm) { struct vhost_iotlb_map *map; struct vhost_iotlb *iotlb = vrh->iotlb; + u64 s = 0, last = addr + len - 1; + int ret = 0; + + spin_lock(vrh->iotlb_lock); + + while (len > s) { + u64 size; + + if (unlikely(ret >= iov_size)) { + ret = -ENOBUFS; + break; + } + + map = vhost_iotlb_itree_first(iotlb, addr, last); + if (!map || map->start > addr) { + ret = -EINVAL; + break; + } else if (!(map->perm & perm)) { + ret = -EPERM; + break; + } + + size = map->size - addr + map->start; + iov[ret].iov_len = min(len - s, size); + iov[ret].iov_base = (void __user *)(unsigned long) + (map->addr + addr - map->start); + s += size; + addr += size; + ++ret; + } + 
+ spin_unlock(vrh->iotlb_lock); + + if (translated) + *translated = min(len, s); + + return ret; +} + +static inline int copy_from_va(const struct vringh *vrh, void *dst, void *src, + u64 len, u64 *translated) +{ + struct iovec iov[16]; + struct iov_iter iter; + int ret; + + ret = iotlb_translate_va(vrh, (u64)(uintptr_t)src, len, translated, iov, + ARRAY_SIZE(iov), VHOST_MAP_RO); + if (ret == -ENOBUFS) + ret = ARRAY_SIZE(iov); + else if (ret < 0) + return ret; + + iov_iter_init(&iter, READ, iov, ret, *translated); + + return copy_from_iter(dst, *translated, &iter); +} + +static inline int copy_to_va(const struct vringh *vrh, void *dst, void *src, + u64 len, u64 *translated) +{ + struct iovec iov[16]; + struct iov_iter iter; + int ret; + + ret = iotlb_translate_va(vrh, (u64)(uintptr_t)dst, len, translated, iov, + ARRAY_SIZE(iov), VHOST_MAP_WO); + if (ret == -ENOBUFS) + ret = ARRAY_SIZE(iov); + else if (ret < 0) + return ret; + + iov_iter_init(&iter, WRITE, iov, ret, *translated); + + return copy_to_iter(src, *translated, &iter); +} + +static int iotlb_translate_pa(const struct vringh *vrh, + u64 addr, u64 len, u64 *translated, + struct bio_vec iov[], + int iov_size, u32 perm) +{ + struct vhost_iotlb_map *map; + struct vhost_iotlb *iotlb = vrh->iotlb; + u64 s = 0, last = addr + len - 1; int ret = 0; - u64 s = 0; spin_lock(vrh->iotlb_lock); @@ -1114,8 +1198,7 @@ static int iotlb_translate(const struct vringh *vrh, break; } - map = vhost_iotlb_itree_first(iotlb, addr, - addr + len - 1); + map = vhost_iotlb_itree_first(iotlb, addr, last); if (!map || map->start > addr) { ret = -EINVAL; break; @@ -1143,28 +1226,61 @@ static int iotlb_translate(const struct vringh *vrh, return ret; } +static inline int copy_from_pa(const struct vringh *vrh, void *dst, void *src, + u64 len, u64 *translated) +{ + struct bio_vec iov[16]; + struct iov_iter iter; + int ret; + + ret = iotlb_translate_pa(vrh, (u64)(uintptr_t)src, len, translated, iov, + ARRAY_SIZE(iov), VHOST_MAP_RO); + if (ret == -ENOBUFS) + ret = ARRAY_SIZE(iov); + else if (ret < 0) + return ret; + + iov_iter_bvec(&iter, READ, iov, ret, *translated); + + return copy_from_iter(dst, *translated, &iter); +} + +static inline int copy_to_pa(const struct vringh *vrh, void *dst, void *src, + u64 len, u64 *translated) +{ + struct bio_vec iov[16]; + struct iov_iter iter; + int ret; + + ret = iotlb_translate_pa(vrh, (u64)(uintptr_t)dst, len, translated, iov, + ARRAY_SIZE(iov), VHOST_MAP_WO); + if (ret == -ENOBUFS) + ret = ARRAY_SIZE(iov); + else if (ret < 0) + return ret; + + iov_iter_bvec(&iter, WRITE, iov, ret, *translated); + + return copy_to_iter(src, *translated, &iter); +} + static inline int copy_from_iotlb(const struct vringh *vrh, void *dst, void *src, size_t len) { u64 total_translated = 0; while (total_translated < len) { - struct bio_vec iov[16]; - struct iov_iter iter; u64 translated; int ret; - ret = iotlb_translate(vrh, (u64)(uintptr_t)src, - len - total_translated, &translated, - iov, ARRAY_SIZE(iov), VHOST_MAP_RO); - if (ret == -ENOBUFS) - ret = ARRAY_SIZE(iov); - else if (ret < 0) - return ret; - - iov_iter_bvec(&iter, READ, iov, ret, translated); + if (vrh->use_va) { + ret = copy_from_va(vrh, dst, src, + len - total_translated, &translated); + } else { + ret = copy_from_pa(vrh, dst, src, + len - total_translated, &translated); + } - ret = copy_from_iter(dst, translated, &iter); if (ret < 0) return ret; @@ -1182,22 +1298,17 @@ static inline int copy_to_iotlb(const struct vringh *vrh, void *dst, u64 total_translated = 0; while (total_translated 
< len) { - struct bio_vec iov[16]; - struct iov_iter iter; u64 translated; int ret; - ret = iotlb_translate(vrh, (u64)(uintptr_t)dst, - len - total_translated, &translated, - iov, ARRAY_SIZE(iov), VHOST_MAP_WO); - if (ret == -ENOBUFS) - ret = ARRAY_SIZE(iov); - else if (ret < 0) - return ret; - - iov_iter_bvec(&iter, WRITE, iov, ret, translated); + if (vrh->use_va) { + ret = copy_to_va(vrh, dst, src, + len - total_translated, &translated); + } else { + ret = copy_to_pa(vrh, dst, src, + len - total_translated, &translated); + } - ret = copy_to_iter(src, translated, &iter); if (ret < 0) return ret; @@ -1212,20 +1323,36 @@ static inline int copy_to_iotlb(const struct vringh *vrh, void *dst, static inline int getu16_iotlb(const struct vringh *vrh, u16 *val, const __virtio16 *p) { - struct bio_vec iov; - void *kaddr, *from; int ret; /* Atomic read is needed for getu16 */ - ret = iotlb_translate(vrh, (u64)(uintptr_t)p, sizeof(*p), NULL, - &iov, 1, VHOST_MAP_RO); - if (ret < 0) - return ret; + if (vrh->use_va) { + struct iovec iov; + + ret = iotlb_translate_va(vrh, (u64)(uintptr_t)p, sizeof(*p), + NULL, &iov, 1, VHOST_MAP_RO); + if (ret < 0) + return ret; - kaddr = kmap_atomic(iov.bv_page); - from = kaddr + iov.bv_offset; - *val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from)); - kunmap_atomic(kaddr); + ret = __get_user(*val, (__virtio16 *)iov.iov_base); + if (ret) + return ret; + + *val = vringh16_to_cpu(vrh, *val); + } else { + struct bio_vec iov; + void *kaddr, *from; + + ret = iotlb_translate_pa(vrh, (u64)(uintptr_t)p, sizeof(*p), + NULL, &iov, 1, VHOST_MAP_RO); + if (ret < 0) + return ret; + + kaddr = kmap_atomic(iov.bv_page); + from = kaddr + iov.bv_offset; + *val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from)); + kunmap_atomic(kaddr); + } return 0; } @@ -1233,20 +1360,36 @@ static inline int getu16_iotlb(const struct vringh *vrh, static inline int putu16_iotlb(const struct vringh *vrh, __virtio16 *p, u16 val) { - struct bio_vec iov; - void *kaddr, *to; int ret; /* Atomic write is needed for putu16 */ - ret = iotlb_translate(vrh, (u64)(uintptr_t)p, sizeof(*p), NULL, - &iov, 1, VHOST_MAP_WO); - if (ret < 0) - return ret; + if (vrh->use_va) { + struct iovec iov; - kaddr = kmap_atomic(iov.bv_page); - to = kaddr + iov.bv_offset; - WRITE_ONCE(*(__virtio16 *)to, cpu_to_vringh16(vrh, val)); - kunmap_atomic(kaddr); + ret = iotlb_translate_va(vrh, (u64)(uintptr_t)p, sizeof(*p), + NULL, &iov, 1, VHOST_MAP_RO); + if (ret < 0) + return ret; + + val = cpu_to_vringh16(vrh, val); + + ret = __put_user(val, (__virtio16 *)iov.iov_base); + if (ret) + return ret; + } else { + struct bio_vec iov; + void *kaddr, *to; + + ret = iotlb_translate_pa(vrh, (u64)(uintptr_t)p, sizeof(*p), NULL, + &iov, 1, VHOST_MAP_WO); + if (ret < 0) + return ret; + + kaddr = kmap_atomic(iov.bv_page); + to = kaddr + iov.bv_offset; + WRITE_ONCE(*(__virtio16 *)to, cpu_to_vringh16(vrh, val)); + kunmap_atomic(kaddr); + } return 0; } @@ -1308,6 +1451,7 @@ static inline int putused_iotlb(const struct vringh *vrh, * @features: the feature bits for this ring. * @num: the number of elements. * @weak_barriers: true if we only need memory barriers, not I/O. + * @use_va: true if IOTLB contains user VA * @desc: the userpace descriptor pointer. * @avail: the userpace avail pointer. * @used: the userpace used pointer. @@ -1315,11 +1459,13 @@ static inline int putused_iotlb(const struct vringh *vrh, * Returns an error if num is invalid. 
*/ int vringh_init_iotlb(struct vringh *vrh, u64 features, - unsigned int num, bool weak_barriers, + unsigned int num, bool weak_barriers, bool use_va, struct vring_desc *desc, struct vring_avail *avail, struct vring_used *used) { + vrh->use_va = use_va; + return vringh_init_kern(vrh, features, num, weak_barriers, desc, avail, used); } From patchwork Wed Dec 14 16:30:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefano Garzarella X-Patchwork-Id: 13073268 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2471BC4167B for ; Wed, 14 Dec 2022 16:32:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239040AbiLNQcI (ORCPT ); Wed, 14 Dec 2022 11:32:08 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48906 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239043AbiLNQbl (ORCPT ); Wed, 14 Dec 2022 11:31:41 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E884862F5 for ; Wed, 14 Dec 2022 08:30:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1671035459; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=M+wOP66zSvRhQYDjszQYz+C5vn4h5BGATu0cy0bUORc=; b=Q15AwZRiKDTWOvdrxLlpUxbg6A4w0hx4nX46culqdRURxGAuMPk5NJi24/Qlf0gwIgOkTG 6LaOvuHkZCsLyBO/R2i/ntvR9YUfgx6rY5sOt6g3LlIc1fHNaVgx3DnPO/LebuwvLiJxTu wBVqejwmlXwT/DpKnJYgRjjCsXxKvuo= Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com [209.85.128.70]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id us-mta-496-j_8JbJghN6OetsetQUtjfA-1; Wed, 14 Dec 2022 11:30:57 -0500 X-MC-Unique: j_8JbJghN6OetsetQUtjfA-1 Received: by mail-wm1-f70.google.com with SMTP id b47-20020a05600c4aaf00b003d031aeb1b6so7424717wmp.9 for ; Wed, 14 Dec 2022 08:30:57 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=M+wOP66zSvRhQYDjszQYz+C5vn4h5BGATu0cy0bUORc=; b=ipqQfZ67RVMA7vMz/DdhQiYB9ap5vnI3Pem5Lrc8RTUJg2uJ48UQIJIxtvWaBLq9b2 MOHXeclS5acI1i9NrvC6c8H1dziMOBeCKloigdVTSLwyghsn84ravMjHVzhG8wuMXDuW MAL8ugU3gTsLukPrk5oujXreuphCBJ2BTU2zWC+0rTJT6N/V6vn04KSQloQqiAyJQsch XTGoCHVE3r6AxAeyi6cCwHgFQhv3OItWu/0S8Mp2lsz8uiZfrvCLevgLeTyqwh97t3qJ epTw7lEz5LCQ2fo+3fgjEalqCqzyJQvGfI6T8XDeSbLFoNiYQzHNtXQcsKVDxbr58WDF uLhQ== X-Gm-Message-State: ANoB5pmVUwXC7/Il2EKL6eAjSc6iDc03QciD14vq4Rt+NfUQ5rVCyf16 ANizgbGwX9Ap0FkBTl0gQKfdC0wSJIFpwGshq0q5vRHSrQRUjbgNQdWdSSPUO+VSDwWTDu+LXAk PbazICJPjMm+Ubt4o X-Received: by 2002:a05:600c:3c95:b0:3d0:4af1:a36e with SMTP id bg21-20020a05600c3c9500b003d04af1a36emr19162740wmb.26.1671035456704; Wed, 14 Dec 2022 08:30:56 -0800 (PST) X-Google-Smtp-Source: AA0mqf60WGqYgsSKb75OsCZF9FEfxsX89v9hIf3pvnwfZ9ScHyDYE49t9+hNy2gQ3Yhy/2cfcsbUgg== X-Received: by 2002:a05:600c:3c95:b0:3d0:4af1:a36e with SMTP id 
bg21-20020a05600c3c9500b003d04af1a36emr19162723wmb.26.1671035456529; Wed, 14 Dec 2022 08:30:56 -0800 (PST) Received: from step1.redhat.com (host-87-11-6-51.retail.telecomitalia.it. [87.11.6.51]) by smtp.gmail.com with ESMTPSA id c6-20020a05600c0a4600b003d1e3b1624dsm3850323wmq.2.2022.12.14.08.30.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 14 Dec 2022 08:30:55 -0800 (PST) From: Stefano Garzarella To: virtualization@lists.linux-foundation.org Cc: Jason Wang , Andrey Zhadchenko , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, "Michael S. Tsirkin" , eperezma@redhat.com, stefanha@redhat.com, netdev@vger.kernel.org, Stefano Garzarella Subject: [RFC PATCH 4/6] vdpa_sim: make devices agnostic for work management Date: Wed, 14 Dec 2022 17:30:23 +0100 Message-Id: <20221214163025.103075-5-sgarzare@redhat.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221214163025.103075-1-sgarzare@redhat.com> References: <20221214163025.103075-1-sgarzare@redhat.com> MIME-Version: 1.0 Content-type: text/plain Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-State: RFC Let's move work management inside the vdpa_sim core. This way we can easily change how we manage the works, without having to change the devices each time. Signed-off-by: Stefano Garzarella --- drivers/vdpa/vdpa_sim/vdpa_sim.h | 3 ++- drivers/vdpa/vdpa_sim/vdpa_sim.c | 17 +++++++++++++++-- drivers/vdpa/vdpa_sim/vdpa_sim_blk.c | 6 ++---- drivers/vdpa/vdpa_sim/vdpa_sim_net.c | 6 ++---- 4 files changed, 21 insertions(+), 11 deletions(-) diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.h b/drivers/vdpa/vdpa_sim/vdpa_sim.h index 0e78737dcc16..7e6dd366856f 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.h +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.h @@ -44,7 +44,7 @@ struct vdpasim_dev_attr { u32 ngroups; u32 nas; - work_func_t work_fn; + void (*work_fn)(struct vdpasim *vdpasim); void (*get_config)(struct vdpasim *vdpasim, void *config); void (*set_config)(struct vdpasim *vdpasim, const void *config); }; @@ -73,6 +73,7 @@ struct vdpasim { struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *attr, const struct vdpa_dev_set_config *config); +void vdpasim_schedule_work(struct vdpasim *vdpasim); /* TODO: cross-endian support */ static inline bool vdpasim_is_little_endian(struct vdpasim *vdpasim) diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c index 2e0ee7280aa8..9bde33e38e27 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c @@ -245,6 +245,13 @@ static const struct dma_map_ops vdpasim_dma_ops = { static const struct vdpa_config_ops vdpasim_config_ops; static const struct vdpa_config_ops vdpasim_batch_config_ops; +static void vdpasim_work_fn(struct work_struct *work) +{ + struct vdpasim *vdpasim = container_of(work, struct vdpasim, work); + + vdpasim->dev_attr.work_fn(vdpasim); +} + struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr, const struct vdpa_dev_set_config *config) { @@ -275,7 +282,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr, } vdpasim->dev_attr = *dev_attr; - INIT_WORK(&vdpasim->work, dev_attr->work_fn); + INIT_WORK(&vdpasim->work, vdpasim_work_fn); spin_lock_init(&vdpasim->lock); spin_lock_init(&vdpasim->iommu_lock); @@ -329,6 +336,12 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr, } EXPORT_SYMBOL_GPL(vdpasim_create); +void vdpasim_schedule_work(struct vdpasim *vdpasim) +{ + schedule_work(&vdpasim->work); +} +EXPORT_SYMBOL_GPL(vdpasim_schedule_work); + static int 
vdpasim_set_vq_address(struct vdpa_device *vdpa, u16 idx, u64 desc_area, u64 driver_area, u64 device_area) @@ -357,7 +370,7 @@ static void vdpasim_kick_vq(struct vdpa_device *vdpa, u16 idx) struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx]; if (vq->ready) - schedule_work(&vdpasim->work); + vdpasim_schedule_work(vdpasim); } static void vdpasim_set_vq_cb(struct vdpa_device *vdpa, u16 idx, diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c b/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c index c6db1a1baf76..ae2309411acd 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c @@ -11,7 +11,6 @@ #include #include #include -#include #include #include #include @@ -286,9 +285,8 @@ static bool vdpasim_blk_handle_req(struct vdpasim *vdpasim, return handled; } -static void vdpasim_blk_work(struct work_struct *work) +static void vdpasim_blk_work(struct vdpasim *vdpasim) { - struct vdpasim *vdpasim = container_of(work, struct vdpasim, work); bool reschedule = false; int i; @@ -326,7 +324,7 @@ static void vdpasim_blk_work(struct work_struct *work) spin_unlock(&vdpasim->lock); if (reschedule) - schedule_work(&vdpasim->work); + vdpasim_schedule_work(vdpasim); } static void vdpasim_blk_get_config(struct vdpasim *vdpasim, void *config) diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim_net.c b/drivers/vdpa/vdpa_sim/vdpa_sim_net.c index c3cb225ea469..a209df365158 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim_net.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim_net.c @@ -11,7 +11,6 @@ #include #include #include -#include #include #include #include @@ -143,9 +142,8 @@ static void vdpasim_handle_cvq(struct vdpasim *vdpasim) } } -static void vdpasim_net_work(struct work_struct *work) +static void vdpasim_net_work(struct vdpasim *vdpasim) { - struct vdpasim *vdpasim = container_of(work, struct vdpasim, work); struct vdpasim_virtqueue *txq = &vdpasim->vqs[1]; struct vdpasim_virtqueue *rxq = &vdpasim->vqs[0]; ssize_t read, write; @@ -196,7 +194,7 @@ static void vdpasim_net_work(struct work_struct *work) vdpasim_net_complete(rxq, write); if (++pkts > 4) { - schedule_work(&vdpasim->work); + vdpasim_schedule_work(vdpasim); goto out; } } From patchwork Wed Dec 14 16:30:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefano Garzarella X-Patchwork-Id: 13073270 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7C613C4332F for ; Wed, 14 Dec 2022 16:32:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239024AbiLNQcN (ORCPT ); Wed, 14 Dec 2022 11:32:13 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48904 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239058AbiLNQbr (ORCPT ); Wed, 14 Dec 2022 11:31:47 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4951525D1 for ; Wed, 14 Dec 2022 08:31:04 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1671035463; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; 
From: Stefano Garzarella
To: virtualization@lists.linux-foundation.org
Cc: Jason Wang, Andrey Zhadchenko, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, "Michael S. Tsirkin", eperezma@redhat.com, stefanha@redhat.com, netdev@vger.kernel.org, Stefano Garzarella
Subject: [RFC PATCH 5/6] vdpa_sim: use kthread worker
Date: Wed, 14 Dec 2022 17:30:24 +0100
Message-Id: <20221214163025.103075-6-sgarzare@redhat.com>
In-Reply-To: <20221214163025.103075-1-sgarzare@redhat.com>
References: <20221214163025.103075-1-sgarzare@redhat.com>
X-Patchwork-State: RFC

Let's use our own kthread to run device jobs. This gives us more flexibility; in particular, we can attach the kthread to the user address space when vDPA uses the user's VA.
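For readers less familiar with the kthread_worker API this patch switches to, the lifecycle it relies on looks roughly like the sketch below (generic code with invented names, not vdpa_sim code):

#include <linux/err.h>
#include <linux/kthread.h>

struct my_dev {
	struct kthread_worker *worker;
	struct kthread_work work;
};

static void my_work_fn(struct kthread_work *work)
{
	/* Runs in the dedicated kthread created below. Because the device
	 * owns this thread, it can later adopt a userspace mm with
	 * kthread_use_mm(), which would not be appropriate on the shared
	 * system workqueue used before this patch.
	 */
}

static int my_dev_init(struct my_dev *dev)
{
	kthread_init_work(&dev->work, my_work_fn);
	dev->worker = kthread_create_worker(0, "my-dev-worker");
	return PTR_ERR_OR_ZERO(dev->worker);
}

static void my_dev_kick(struct my_dev *dev)
{
	kthread_queue_work(dev->worker, &dev->work);
}

static void my_dev_fini(struct my_dev *dev)
{
	kthread_cancel_work_sync(&dev->work);
	kthread_destroy_worker(dev->worker);
}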
Signed-off-by: Stefano Garzarella --- drivers/vdpa/vdpa_sim/vdpa_sim.h | 3 ++- drivers/vdpa/vdpa_sim/vdpa_sim.c | 17 ++++++++++++----- 2 files changed, 14 insertions(+), 6 deletions(-) diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.h b/drivers/vdpa/vdpa_sim/vdpa_sim.h index 7e6dd366856f..07ef53ea375e 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.h +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.h @@ -53,7 +53,8 @@ struct vdpasim_dev_attr { struct vdpasim { struct vdpa_device vdpa; struct vdpasim_virtqueue *vqs; - struct work_struct work; + struct kthread_worker *worker; + struct kthread_work work; struct vdpasim_dev_attr dev_attr; /* spinlock to synchronize virtqueue state */ spinlock_t lock; diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c index 9bde33e38e27..36a1d2e0a6ba 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c @@ -11,8 +11,8 @@ #include #include #include +#include #include -#include #include #include #include @@ -245,7 +245,7 @@ static const struct dma_map_ops vdpasim_dma_ops = { static const struct vdpa_config_ops vdpasim_config_ops; static const struct vdpa_config_ops vdpasim_batch_config_ops; -static void vdpasim_work_fn(struct work_struct *work) +static void vdpasim_work_fn(struct kthread_work *work) { struct vdpasim *vdpasim = container_of(work, struct vdpasim, work); @@ -282,7 +282,13 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr, } vdpasim->dev_attr = *dev_attr; - INIT_WORK(&vdpasim->work, vdpasim_work_fn); + + kthread_init_work(&vdpasim->work, vdpasim_work_fn); + vdpasim->worker = kthread_create_worker(0, "vDPA sim worker: %s", + dev_attr->name); + if (IS_ERR(vdpasim->worker)) + goto err_iommu; + spin_lock_init(&vdpasim->lock); spin_lock_init(&vdpasim->iommu_lock); @@ -338,7 +344,7 @@ EXPORT_SYMBOL_GPL(vdpasim_create); void vdpasim_schedule_work(struct vdpasim *vdpasim) { - schedule_work(&vdpasim->work); + kthread_queue_work(vdpasim->worker, &vdpasim->work); } EXPORT_SYMBOL_GPL(vdpasim_schedule_work); @@ -689,7 +695,8 @@ static void vdpasim_free(struct vdpa_device *vdpa) struct vdpasim *vdpasim = vdpa_to_sim(vdpa); int i; - cancel_work_sync(&vdpasim->work); + kthread_cancel_work_sync(&vdpasim->work); + kthread_destroy_worker(vdpasim->worker); for (i = 0; i < vdpasim->dev_attr.nvqs; i++) { vringh_kiov_cleanup(&vdpasim->vqs[i].out_iov); From patchwork Wed Dec 14 16:30:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefano Garzarella X-Patchwork-Id: 13073269 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 61BCBC4708D for ; Wed, 14 Dec 2022 16:32:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239052AbiLNQcL (ORCPT ); Wed, 14 Dec 2022 11:32:11 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48932 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239045AbiLNQbo (ORCPT ); Wed, 14 Dec 2022 11:31:44 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 583D8140E3 for ; Wed, 14 Dec 2022 08:31:04 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1671035463; 
From: Stefano Garzarella
To: virtualization@lists.linux-foundation.org
Cc: Jason Wang, Andrey Zhadchenko, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, "Michael S. Tsirkin", eperezma@redhat.com, stefanha@redhat.com, netdev@vger.kernel.org, Stefano Garzarella
Subject: [RFC PATCH 6/6] vdpa_sim: add support for user VA
Date: Wed, 14 Dec 2022 17:30:25 +0100
Message-Id: <20221214163025.103075-7-sgarzare@redhat.com>
In-Reply-To: <20221214163025.103075-1-sgarzare@redhat.com>
References: <20221214163025.103075-1-sgarzare@redhat.com>
X-Patchwork-State: RFC

The new "use_va" module parameter (default: false) is passed to vdpa_alloc_device() to inform the vDPA framework that the device supports VA.

vringh is initialized to use VA only when "use_va" is true and the user's mm has been bound, i.e. only when the bus supports user VA (e.g. vhost-vdpa).
vdpasim_mm_work_fn work is used to attach the kthread to the user address space when the .bind_mm callback is invoked, and to detach it when the device is reset. Signed-off-by: Stefano Garzarella --- drivers/vdpa/vdpa_sim/vdpa_sim.h | 1 + drivers/vdpa/vdpa_sim/vdpa_sim.c | 104 ++++++++++++++++++++++++++++++- 2 files changed, 103 insertions(+), 2 deletions(-) diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.h b/drivers/vdpa/vdpa_sim/vdpa_sim.h index 07ef53ea375e..1b010e5c0445 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.h +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.h @@ -55,6 +55,7 @@ struct vdpasim { struct vdpasim_virtqueue *vqs; struct kthread_worker *worker; struct kthread_work work; + struct mm_struct *mm_bound; struct vdpasim_dev_attr dev_attr; /* spinlock to synchronize virtqueue state */ spinlock_t lock; diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c index 36a1d2e0a6ba..6e07cedef30c 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c @@ -36,10 +36,90 @@ module_param(max_iotlb_entries, int, 0444); MODULE_PARM_DESC(max_iotlb_entries, "Maximum number of iotlb entries for each address space. 0 means unlimited. (default: 2048)"); +static bool use_va; +module_param(use_va, bool, 0444); +MODULE_PARM_DESC(use_va, "Enable the device's ability to use VA"); + #define VDPASIM_QUEUE_ALIGN PAGE_SIZE #define VDPASIM_QUEUE_MAX 256 #define VDPASIM_VENDOR_ID 0 +struct vdpasim_mm_work { + struct kthread_work work; + struct task_struct *owner; + struct mm_struct *mm; + bool bind; + int ret; +}; + +static void vdpasim_mm_work_fn(struct kthread_work *work) +{ + struct vdpasim_mm_work *mm_work = + container_of(work, struct vdpasim_mm_work, work); + + mm_work->ret = 0; + + if (mm_work->bind) { + kthread_use_mm(mm_work->mm); +#if 0 + if (mm_work->owner) + mm_work->ret = cgroup_attach_task_all(mm_work->owner, + current); +#endif + } else { +#if 0 + //TODO: check it + cgroup_release(current); +#endif + kthread_unuse_mm(mm_work->mm); + } +} + +static void vdpasim_worker_queue_mm(struct vdpasim *vdpasim, + struct vdpasim_mm_work *mm_work) +{ + struct kthread_work *work = &mm_work->work; + + kthread_init_work(work, vdpasim_mm_work_fn); + kthread_queue_work(vdpasim->worker, work); + + spin_unlock(&vdpasim->lock); + kthread_flush_work(work); + spin_lock(&vdpasim->lock); +} + +static int vdpasim_worker_bind_mm(struct vdpasim *vdpasim, + struct mm_struct *new_mm, + struct task_struct *owner) +{ + struct vdpasim_mm_work mm_work; + + mm_work.owner = owner; + mm_work.mm = new_mm; + mm_work.bind = true; + + vdpasim_worker_queue_mm(vdpasim, &mm_work); + + if (!mm_work.ret) + vdpasim->mm_bound = new_mm; + + return mm_work.ret; +} + +static void vdpasim_worker_unbind_mm(struct vdpasim *vdpasim) +{ + struct vdpasim_mm_work mm_work; + + if (!vdpasim->mm_bound) + return; + + mm_work.mm = vdpasim->mm_bound; + mm_work.bind = false; + + vdpasim_worker_queue_mm(vdpasim, &mm_work); + + vdpasim->mm_bound = NULL; +} static struct vdpasim *vdpa_to_sim(struct vdpa_device *vdpa) { return container_of(vdpa, struct vdpasim, vdpa); @@ -66,8 +146,10 @@ static void vdpasim_vq_notify(struct vringh *vring) static void vdpasim_queue_ready(struct vdpasim *vdpasim, unsigned int idx) { struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx]; + bool va_enabled = use_va && vdpasim->mm_bound; - vringh_init_iotlb(&vq->vring, vdpasim->features, vq->num, false, false, + vringh_init_iotlb(&vq->vring, vdpasim->features, vq->num, false, + va_enabled, (struct vring_desc *)(uintptr_t)vq->desc_addr, (struct 
vring_avail *) (uintptr_t)vq->driver_addr, @@ -96,6 +178,9 @@ static void vdpasim_do_reset(struct vdpasim *vdpasim) { int i; + //TODO: should we cancel the works? + vdpasim_worker_unbind_mm(vdpasim); + spin_lock(&vdpasim->iommu_lock); for (i = 0; i < vdpasim->dev_attr.nvqs; i++) { @@ -275,7 +360,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr, vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops, dev_attr->ngroups, dev_attr->nas, - dev_attr->name, false); + dev_attr->name, use_va); if (IS_ERR(vdpasim)) { ret = PTR_ERR(vdpasim); goto err_alloc; @@ -657,6 +742,19 @@ static int vdpasim_set_map(struct vdpa_device *vdpa, unsigned int asid, return ret; } +static int vdpasim_bind_mm(struct vdpa_device *vdpa, struct mm_struct *mm, + struct task_struct *owner) +{ + struct vdpasim *vdpasim = vdpa_to_sim(vdpa); + int ret; + + spin_lock(&vdpasim->lock); + ret = vdpasim_worker_bind_mm(vdpasim, mm, owner); + spin_unlock(&vdpasim->lock); + + return ret; +} + static int vdpasim_dma_map(struct vdpa_device *vdpa, unsigned int asid, u64 iova, u64 size, u64 pa, u32 perm, void *opaque) @@ -744,6 +842,7 @@ static const struct vdpa_config_ops vdpasim_config_ops = { .set_group_asid = vdpasim_set_group_asid, .dma_map = vdpasim_dma_map, .dma_unmap = vdpasim_dma_unmap, + .bind_mm = vdpasim_bind_mm, .free = vdpasim_free, }; @@ -776,6 +875,7 @@ static const struct vdpa_config_ops vdpasim_batch_config_ops = { .get_iova_range = vdpasim_get_iova_range, .set_group_asid = vdpasim_set_group_asid, .set_map = vdpasim_set_map, + .bind_mm = vdpasim_bind_mm, .free = vdpasim_free, };
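Taken together, the series lets a vringh-based parent device opt in to user VA: pass use_va to vdpa_alloc_device(), implement .bind_mm to record the owner's mm, attach the worker kthread to that mm, and initialize each vringh with the new use_va flag only while an mm is actually bound. A minimal sketch of that last step follows; struct my_dev, struct my_vq and their fields are invented for illustration, and vdpasim_queue_ready() above is the real reference.

#include <linux/vdpa.h>
#include <linux/vringh.h>

struct my_vq {
	struct vringh vring;
	u32 num;
	u64 desc_addr, driver_addr, device_addr;
};

struct my_dev {
	struct vdpa_device vdpa;
	struct my_vq vqs[2];
	struct mm_struct *bound_mm;	/* set by the .bind_mm callback */
	u64 features;
	bool use_va;			/* device was allocated with use_va */
};

static void my_queue_ready(struct my_dev *dev, unsigned int idx)
{
	struct my_vq *vq = &dev->vqs[idx];
	/* Use VA only if the bus actually bound an mm (e.g. vhost-vdpa);
	 * otherwise keep treating IOTLB translations as physical addresses.
	 */
	bool va_enabled = dev->use_va && dev->bound_mm;

	vringh_init_iotlb(&vq->vring, dev->features, vq->num, false,
			  va_enabled,
			  (struct vring_desc *)(uintptr_t)vq->desc_addr,
			  (struct vring_avail *)(uintptr_t)vq->driver_addr,
			  (struct vring_used *)(uintptr_t)vq->device_addr);
}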