From patchwork Wed Mar 31 08:05:10 2021
X-Patchwork-Submitter: Yongji Xie
X-Patchwork-Id: 12174615
From: Xie Yongji
To: mst@redhat.com, jasowang@redhat.com, stefanha@redhat.com,
    sgarzare@redhat.com, parav@nvidia.com, hch@infradead.org,
    christian.brauner@canonical.com, rdunlap@infradead.org,
    willy@infradead.org, viro@zeniv.linux.org.uk, axboe@kernel.dk,
    bcrl@kvack.org, corbet@lwn.net, mika.penttila@nextfour.com,
    dan.carpenter@oracle.com
Cc: virtualization@lists.linux-foundation.org, netdev@vger.kernel.org,
    kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v6 01/10] file: Export receive_fd() to modules
Date: Wed, 31 Mar 2021 16:05:10 +0800
Message-Id: <20210331080519.172-2-xieyongji@bytedance.com>

Export receive_fd() so that modules can use it to pass a file
descriptor between processes without skipping any of the security
checks.
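A minimal sketch (not part of this patch) of how an in-kernel module might
call the newly exported helper; the function name and error handling here
are illustrative only:

#include <linux/fcntl.h>
#include <linux/file.h>

/* Hypothetical module code: hand an existing struct file to the calling
 * process as a new file descriptor. receive_fd() runs the
 * security_file_receive() hook before installing the file, which is why
 * it is exported instead of open-coding fd allocation + fd_install(). */
static int example_install_fd(struct file *file)
{
	/* May fail with -EMFILE or a security denial; on success the file
	 * is installed in the current process's fd table. */
	return receive_fd(file, O_CLOEXEC);
}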
Signed-off-by: Xie Yongji
---
 fs/file.c            | 6 ++++++
 include/linux/file.h | 7 +++----
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/fs/file.c b/fs/file.c
index dab120b71e44..d7d957217576 100644
--- a/fs/file.c
+++ b/fs/file.c
@@ -1108,6 +1108,12 @@ int __receive_fd(int fd, struct file *file, int __user *ufd, unsigned int o_flag
 	return new_fd;
 }
 
+int receive_fd(struct file *file, unsigned int o_flags)
+{
+	return __receive_fd(-1, file, NULL, o_flags);
+}
+EXPORT_SYMBOL(receive_fd);
+
 static int ksys_dup3(unsigned int oldfd, unsigned int newfd, int flags)
 {
 	int err = -EBADF;
diff --git a/include/linux/file.h b/include/linux/file.h
index 225982792fa2..4667f9567d3e 100644
--- a/include/linux/file.h
+++ b/include/linux/file.h
@@ -94,6 +94,9 @@ extern void fd_install(unsigned int fd, struct file *file);
 
 extern int __receive_fd(int fd, struct file *file, int __user *ufd,
 			unsigned int o_flags);
+
+extern int receive_fd(struct file *file, unsigned int o_flags);
+
 static inline int receive_fd_user(struct file *file, int __user *ufd,
 				  unsigned int o_flags)
 {
@@ -101,10 +104,6 @@ static inline int receive_fd_user(struct file *file, int __user *ufd,
 		return -EFAULT;
 	return __receive_fd(-1, file, ufd, o_flags);
 }
-static inline int receive_fd(struct file *file, unsigned int o_flags)
-{
-	return __receive_fd(-1, file, NULL, o_flags);
-}
 static inline int receive_fd_replace(int fd, struct file *file, unsigned int o_flags)
 {
 	return __receive_fd(fd, file, NULL, o_flags);
 }

From patchwork Wed Mar 31 08:05:11 2021
X-Patchwork-Submitter: Yongji Xie
X-Patchwork-Id: 12174607
From: Xie Yongji
Subject: [PATCH v6 02/10] eventfd: Increase the recursion depth of eventfd_signal()
Date: Wed, 31 Mar 2021 16:05:11 +0800
Message-Id: <20210331080519.172-3-xieyongji@bytedance.com>

Increase the recursion depth of eventfd_signal() to 1. This is the
maximum recursion depth we have found so far, which can be triggered
with the following call chain:

    kvm_io_bus_write                [kvm]
      --> ioeventfd_write           [kvm]
        --> eventfd_signal          [eventfd]
          --> vhost_poll_wakeup     [vhost]
            --> vduse_vdpa_kick_vq  [vduse]
              --> eventfd_signal    [eventfd]
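A simplified model (illustrative only, not the kernel implementation) of
what the new EFD_WAKE_DEPTH limit allows: one nested signal from inside a
wakeup callback, as in the call chain above, while still refusing deeper
recursion.

#define EFD_WAKE_DEPTH 1

static int wake_count;	/* stands in for the per-CPU eventfd_wake_count */

static int example_signal(void (*wakeup)(void))
{
	if (wake_count > EFD_WAKE_DEPTH)	/* was: if (wake_count) */
		return 0;			/* unbounded recursion is still refused */

	wake_count++;
	if (wakeup)
		wakeup();			/* may legitimately signal one more eventfd */
	wake_count--;
	return 1;
}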
Signed-off-by: Xie Yongji
Acked-by: Jason Wang
---
 fs/eventfd.c            | 2 +-
 include/linux/eventfd.h | 5 ++++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/fs/eventfd.c b/fs/eventfd.c
index e265b6dd4f34..cc7cd1dbedd3 100644
--- a/fs/eventfd.c
+++ b/fs/eventfd.c
@@ -71,7 +71,7 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
 	 * it returns true, the eventfd_signal() call should be deferred to a
 	 * safe context.
 	 */
-	if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count)))
+	if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count) > EFD_WAKE_DEPTH))
 		return 0;
 
 	spin_lock_irqsave(&ctx->wqh.lock, flags);
diff --git a/include/linux/eventfd.h b/include/linux/eventfd.h
index fa0a524baed0..886d99cd38ef 100644
--- a/include/linux/eventfd.h
+++ b/include/linux/eventfd.h
@@ -29,6 +29,9 @@
 #define EFD_SHARED_FCNTL_FLAGS (O_CLOEXEC | O_NONBLOCK)
 #define EFD_FLAGS_SET (EFD_SHARED_FCNTL_FLAGS | EFD_SEMAPHORE)
 
+/* Maximum recursion depth */
+#define EFD_WAKE_DEPTH 1
+
 struct eventfd_ctx;
 struct file;
 
@@ -47,7 +50,7 @@ DECLARE_PER_CPU(int, eventfd_wake_count);
 
 static inline bool eventfd_signal_count(void)
 {
-	return this_cpu_read(eventfd_wake_count);
+	return this_cpu_read(eventfd_wake_count) > EFD_WAKE_DEPTH;
 }
 
 #else /* CONFIG_EVENTFD */

From patchwork Wed Mar 31 08:05:12 2021
X-Patchwork-Submitter: Yongji Xie
X-Patchwork-Id: 12174613
From: Xie Yongji
Subject: [PATCH v6 03/10] vhost-vdpa: protect concurrent access to vhost device iotlb
Date: Wed, 31 Mar 2021 16:05:12 +0800
Message-Id: <20210331080519.172-4-xieyongji@bytedance.com>

Use vhost_dev->mutex to protect the vhost device iotlb from concurrent
access.
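A generic sketch of the locking pattern the patch applies (names are
illustrative, not the actual vhost code): take the device mutex for the
whole message and turn early returns into a goto so the lock is always
released on every path.

#include <linux/errno.h>
#include <linux/mutex.h>

static int example_process_msg(struct mutex *dev_mutex, int msg_type,
			       int (*check_owner)(void))
{
	int r;

	mutex_lock(dev_mutex);

	r = check_owner();		/* permission check done under the lock */
	if (r)
		goto unlock;

	switch (msg_type) {
	/* ... handle update/invalidate/batch messages ... */
	default:
		r = -EINVAL;
		break;
	}
unlock:
	mutex_unlock(dev_mutex);
	return r;
}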
Fixes: 4c8cf318 ("vhost: introduce vDPA-based backend")
Cc: stable@vger.kernel.org
Signed-off-by: Xie Yongji
Acked-by: Jason Wang
Reviewed-by: Stefano Garzarella
---
 drivers/vhost/vdpa.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 3947fbc2d1d5..63b28d3aee7c 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -725,9 +725,11 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev,
 	const struct vdpa_config_ops *ops = vdpa->config;
 	int r = 0;
 
+	mutex_lock(&dev->mutex);
+
 	r = vhost_dev_check_owner(dev);
 	if (r)
-		return r;
+		goto unlock;
 
 	switch (msg->type) {
 	case VHOST_IOTLB_UPDATE:
@@ -748,6 +750,8 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev,
 		r = -EINVAL;
 		break;
 	}
+unlock:
+	mutex_unlock(&dev->mutex);
 
 	return r;
 }

From patchwork Wed Mar 31 08:05:13 2021
X-Patchwork-Submitter: Yongji Xie
X-Patchwork-Id: 12174603
From: Xie Yongji
Subject: [PATCH v6 04/10] vhost-iotlb: Add an opaque pointer for vhost IOTLB
Date: Wed, 31 Mar 2021 16:05:13 +0800
Message-Id: <20210331080519.172-5-xieyongji@bytedance.com>

Add an opaque pointer for vhost IOTLB, and introduce
vhost_iotlb_add_range_ctx() to accept it.
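A hypothetical usage sketch for the new interface: attach a per-mapping
cookie when adding a range and read it back from a lookup. The cookie type
and helper names are made up for illustration.

#include <linux/vhost_iotlb.h>

struct my_cookie {
	int id;
};

static int example_add(struct vhost_iotlb *iotlb, u64 iova, u64 size,
		       u64 addr, unsigned int perm, struct my_cookie *cookie)
{
	return vhost_iotlb_add_range_ctx(iotlb, iova, iova + size - 1,
					 addr, perm, cookie);
}

static struct my_cookie *example_lookup(struct vhost_iotlb *iotlb, u64 iova)
{
	struct vhost_iotlb_map *map = vhost_iotlb_itree_first(iotlb, iova, iova);

	return map ? (struct my_cookie *)map->opaque : NULL;
}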
Suggested-by: Jason Wang
Signed-off-by: Xie Yongji
Acked-by: Jason Wang
---
 drivers/vhost/iotlb.c       | 20 ++++++++++++++++----
 include/linux/vhost_iotlb.h |  3 +++
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/iotlb.c b/drivers/vhost/iotlb.c
index 0fd3f87e913c..5c99e1112cbb 100644
--- a/drivers/vhost/iotlb.c
+++ b/drivers/vhost/iotlb.c
@@ -36,19 +36,21 @@ void vhost_iotlb_map_free(struct vhost_iotlb *iotlb,
 EXPORT_SYMBOL_GPL(vhost_iotlb_map_free);
 
 /**
- * vhost_iotlb_add_range - add a new range to vhost IOTLB
+ * vhost_iotlb_add_range_ctx - add a new range to vhost IOTLB
  * @iotlb: the IOTLB
  * @start: start of the IOVA range
  * @last: last of IOVA range
  * @addr: the address that is mapped to @start
  * @perm: access permission of this range
+ * @opaque: the opaque pointer for the new mapping
  *
  * Returns an error last is smaller than start or memory allocation
  * fails
  */
-int vhost_iotlb_add_range(struct vhost_iotlb *iotlb,
-			  u64 start, u64 last,
-			  u64 addr, unsigned int perm)
+int vhost_iotlb_add_range_ctx(struct vhost_iotlb *iotlb,
+			      u64 start, u64 last,
+			      u64 addr, unsigned int perm,
+			      void *opaque)
 {
 	struct vhost_iotlb_map *map;
 
@@ -71,6 +73,7 @@ int vhost_iotlb_add_range(struct vhost_iotlb *iotlb,
 	map->last = last;
 	map->addr = addr;
 	map->perm = perm;
+	map->opaque = opaque;
 
 	iotlb->nmaps++;
 	vhost_iotlb_itree_insert(map, &iotlb->root);
@@ -80,6 +83,15 @@ int vhost_iotlb_add_range(struct vhost_iotlb *iotlb,
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(vhost_iotlb_add_range_ctx);
+
+int vhost_iotlb_add_range(struct vhost_iotlb *iotlb,
+			  u64 start, u64 last,
+			  u64 addr, unsigned int perm)
+{
+	return vhost_iotlb_add_range_ctx(iotlb, start, last,
+					 addr, perm, NULL);
+}
 EXPORT_SYMBOL_GPL(vhost_iotlb_add_range);
 
 /**
diff --git a/include/linux/vhost_iotlb.h b/include/linux/vhost_iotlb.h
index 6b09b786a762..2d0e2f52f938 100644
--- a/include/linux/vhost_iotlb.h
+++ b/include/linux/vhost_iotlb.h
@@ -17,6 +17,7 @@ struct vhost_iotlb_map {
 	u32 perm;
 	u32 flags_padding;
 	u64 __subtree_last;
+	void *opaque;
 };
 
 #define VHOST_IOTLB_FLAG_RETIRE 0x1
@@ -29,6 +30,8 @@ struct vhost_iotlb {
 	unsigned int flags;
 };
 
+int vhost_iotlb_add_range_ctx(struct vhost_iotlb *iotlb, u64 start, u64 last,
+			      u64 addr, unsigned int perm, void *opaque);
 int vhost_iotlb_add_range(struct vhost_iotlb *iotlb, u64 start, u64 last,
 			  u64 addr, unsigned int perm);
 void vhost_iotlb_del_range(struct vhost_iotlb *iotlb, u64 start, u64 last);

From patchwork Wed Mar 31 08:05:14 2021
X-Patchwork-Submitter: Yongji Xie
X-Patchwork-Id: 12174605
From: Xie Yongji
Subject: [PATCH v6 05/10] vdpa: Add an opaque pointer for vdpa_config_ops.dma_map()
Date: Wed, 31 Mar 2021 16:05:14 +0800
Message-Id: <20210331080519.172-6-xieyongji@bytedance.com>

Add an opaque pointer for DMA mapping.

Suggested-by: Jason Wang
Signed-off-by: Xie Yongji
Acked-by: Jason Wang
---
 drivers/vdpa/vdpa_sim/vdpa_sim.c | 6 +++---
 drivers/vhost/vdpa.c             | 2 +-
 include/linux/vdpa.h             | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index 5b6b2f87d40c..ff331f088baf 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -512,14 +512,14 @@ static int vdpasim_set_map(struct vdpa_device *vdpa,
 }
 
 static int vdpasim_dma_map(struct vdpa_device *vdpa, u64 iova, u64 size,
-			   u64 pa, u32 perm)
+			   u64 pa, u32 perm, void *opaque)
 {
 	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
 	int ret;
 
 	spin_lock(&vdpasim->iommu_lock);
-	ret = vhost_iotlb_add_range(vdpasim->iommu, iova, iova + size - 1, pa,
-				    perm);
+	ret = vhost_iotlb_add_range_ctx(vdpasim->iommu, iova, iova + size - 1,
+					pa, perm, opaque);
 	spin_unlock(&vdpasim->iommu_lock);
 
 	return ret;
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 63b28d3aee7c..22cab98610a1 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -550,7 +550,7 @@ static int vhost_vdpa_map(struct vhost_vdpa *v,
 		return r;
 
 	if (ops->dma_map) {
-		r = ops->dma_map(vdpa, iova, size, pa, perm);
+		r = ops->dma_map(vdpa, iova, size, pa, perm, NULL);
 	} else if (ops->set_map) {
 		if (!v->in_batch)
 			r = ops->set_map(vdpa, dev->iotlb);
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 15fa085fab05..b01f7c9096bf 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -241,7 +241,7 @@ struct vdpa_config_ops {
 	/* DMA ops */
 	int (*set_map)(struct vdpa_device *vdev, struct vhost_iotlb *iotlb);
 	int (*dma_map)(struct vdpa_device *vdev, u64 iova, u64 size,
-		       u64 pa, u32 perm);
+		       u64 pa, u32 perm, void *opaque);
 	int (*dma_unmap)(struct vdpa_device *vdev, u64 iova, u64 size);
 
 	/* Free device resources */

From patchwork Wed Mar 31 08:05:15 2021
X-Patchwork-Submitter: Yongji Xie
X-Patchwork-Id: 12174609
From: Xie Yongji
Subject: [PATCH v6 06/10] vdpa: factor out vhost_vdpa_pa_map() and vhost_vdpa_pa_unmap()
Date: Wed, 31 Mar 2021 16:05:15 +0800
Message-Id: <20210331080519.172-7-xieyongji@bytedance.com>

The upcoming patch is going to support VA mapping/unmapping. So let's
factor out the logic of PA mapping/unmapping first to make the code
more readable.
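Rough shape of the code after this refactoring (a simplified sketch, not
the exact driver source): the IOTLB message handler keeps the range and
overlap checks, and everything related to pinning pages moves into a
PA-specific helper that a later patch can bypass.

static int sketch_pa_map(struct vhost_vdpa *v, u64 iova, u64 size,
			 u64 uaddr, u32 perm)
{
	/* pin_user_pages() bookkeeping + vhost_vdpa_map() live here */
	return 0;
}

static int sketch_process_iotlb_update(struct vhost_vdpa *v,
				       struct vhost_iotlb_msg *msg)
{
	/* generic validation stays in the entry point */
	if (msg->iova < v->range.first ||
	    msg->iova + msg->size - 1 > v->range.last)
		return -EINVAL;

	return sketch_pa_map(v, msg->iova, msg->size, msg->uaddr, msg->perm);
}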
Suggested-by: Jason Wang
Signed-off-by: Xie Yongji
Acked-by: Jason Wang
---
 drivers/vhost/vdpa.c | 53 +++++++++++++++++++++++++++++++++-------------------
 1 file changed, 34 insertions(+), 19 deletions(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 22cab98610a1..f9aab9013745 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -483,7 +483,7 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
 	return r;
 }
 
-static void vhost_vdpa_iotlb_unmap(struct vhost_vdpa *v, u64 start, u64 last)
+static void vhost_vdpa_pa_unmap(struct vhost_vdpa *v, u64 start, u64 last)
 {
 	struct vhost_dev *dev = &v->vdev;
 	struct vhost_iotlb *iotlb = dev->iotlb;
@@ -505,6 +505,11 @@ static void vhost_vdpa_iotlb_unmap(struct vhost_vdpa *v, u64 start, u64 last)
 	}
 }
 
+static void vhost_vdpa_iotlb_unmap(struct vhost_vdpa *v, u64 start, u64 last)
+{
+	return vhost_vdpa_pa_unmap(v, start, last);
+}
+
 static void vhost_vdpa_iotlb_free(struct vhost_vdpa *v)
 {
 	struct vhost_dev *dev = &v->vdev;
@@ -585,37 +590,28 @@ static void vhost_vdpa_unmap(struct vhost_vdpa *v, u64 iova, u64 size)
 	}
 }
 
-static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
-					   struct vhost_iotlb_msg *msg)
+static int vhost_vdpa_pa_map(struct vhost_vdpa *v,
+			     u64 iova, u64 size, u64 uaddr, u32 perm)
 {
 	struct vhost_dev *dev = &v->vdev;
-	struct vhost_iotlb *iotlb = dev->iotlb;
 	struct page **page_list;
 	unsigned long list_size = PAGE_SIZE / sizeof(struct page *);
 	unsigned int gup_flags = FOLL_LONGTERM;
 	unsigned long npages, cur_base, map_pfn, last_pfn = 0;
 	unsigned long lock_limit, sz2pin, nchunks, i;
-	u64 iova = msg->iova;
+	u64 start = iova;
 	long pinned;
 	int ret = 0;
 
-	if (msg->iova < v->range.first ||
-	    msg->iova + msg->size - 1 > v->range.last)
-		return -EINVAL;
-
-	if (vhost_iotlb_itree_first(iotlb, msg->iova,
-				    msg->iova + msg->size - 1))
-		return -EEXIST;
-
 	/* Limit the use of memory for bookkeeping */
 	page_list = (struct page **) __get_free_page(GFP_KERNEL);
 	if (!page_list)
 		return -ENOMEM;
 
-	if (msg->perm & VHOST_ACCESS_WO)
+	if (perm & VHOST_ACCESS_WO)
 		gup_flags |= FOLL_WRITE;
 
-	npages = PAGE_ALIGN(msg->size + (iova & ~PAGE_MASK)) >> PAGE_SHIFT;
+	npages = PAGE_ALIGN(size + (iova & ~PAGE_MASK)) >> PAGE_SHIFT;
 	if (!npages) {
 		ret = -EINVAL;
 		goto free;
@@ -629,7 +625,7 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
 		goto unlock;
 	}
 
-	cur_base = msg->uaddr & PAGE_MASK;
+	cur_base = uaddr & PAGE_MASK;
 	iova &= PAGE_MASK;
 	nchunks = 0;
 
@@ -660,7 +656,7 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
 				csize = (last_pfn - map_pfn + 1) << PAGE_SHIFT;
 				ret = vhost_vdpa_map(v, iova, csize,
 						     map_pfn << PAGE_SHIFT,
-						     msg->perm);
+						     perm);
 				if (ret) {
 					/*
 					 * Unpin the pages that are left unmapped
@@ -689,7 +685,7 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
 	/* Pin the rest chunk */
 	ret = vhost_vdpa_map(v, iova, (last_pfn - map_pfn + 1) << PAGE_SHIFT,
-			     map_pfn << PAGE_SHIFT, msg->perm);
+			     map_pfn << PAGE_SHIFT, perm);
 out:
 	if (ret) {
 		if (nchunks) {
@@ -708,13 +704,32 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
 			for (pfn = map_pfn; pfn <= last_pfn; pfn++)
 				unpin_user_page(pfn_to_page(pfn));
 		}
-		vhost_vdpa_unmap(v, msg->iova, msg->size);
+		vhost_vdpa_unmap(v, start, size);
 	}
 unlock:
 	mmap_read_unlock(dev->mm);
 free:
 	free_page((unsigned long)page_list);
 	return ret;
+
+}
+
+static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
+					   struct vhost_iotlb_msg *msg)
+{
+	struct vhost_dev *dev = &v->vdev;
+	struct vhost_iotlb *iotlb = dev->iotlb;
+
+	if (msg->iova < v->range.first ||
+	    msg->iova + msg->size - 1 > v->range.last)
+		return -EINVAL;
+
+	if (vhost_iotlb_itree_first(iotlb, msg->iova,
+				    msg->iova + msg->size - 1))
+		return -EEXIST;
+
+	return vhost_vdpa_pa_map(v, msg->iova, msg->size, msg->uaddr,
+				 msg->perm);
 }
 
 static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev,

From patchwork Wed Mar 31 08:05:16 2021
X-Patchwork-Submitter: Yongji Xie
X-Patchwork-Id: 12174611
From: Xie Yongji
Subject: [PATCH v6 07/10] vdpa: Support transferring virtual addressing during DMA mapping
Date: Wed, 31 Mar 2021 16:05:16 +0800
Message-Id: <20210331080519.172-8-xieyongji@bytedance.com>

This patch introduces an attribute for a vDPA device to indicate
whether virtual addresses can be used. If the vDPA device driver sets
it, the vhost-vdpa bus driver will not pin user pages and will pass
the userspace virtual address instead of the physical address during
DMA mapping. The corresponding vma->vm_file and offset will also be
passed as an opaque pointer.
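A hypothetical parent-driver sketch of what this enables: pass use_va =
true when allocating the vDPA device, then consume the struct
vdpa_map_file (vma->vm_file plus offset) handed over as the opaque
argument of the dma_map callback. Names prefixed with my_ are illustrative
only.

#include <linux/vdpa.h>

static int my_dma_map(struct vdpa_device *vdev, u64 iova, u64 size,
		      u64 uaddr, u32 perm, void *opaque)
{
	struct vdpa_map_file *map_file = opaque;

	/* uaddr is a userspace VA here, not a PA. A real driver would
	 * program its own translation using map_file->file and
	 * map_file->offset; this sketch only validates the arguments. */
	if (!map_file || !map_file->file)
		return -EINVAL;

	return 0;
}

/* allocation side: the trailing 'true' opts the device into VA mode, e.g.
 * my_dev = vdpa_alloc_device(struct my_dev, vdpa, parent, &my_ops, NULL, true);
 */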
Suggested-by: Jason Wang
Signed-off-by: Xie Yongji
Acked-by: Jason Wang
---
 drivers/vdpa/ifcvf/ifcvf_main.c   |  2 +-
 drivers/vdpa/mlx5/net/mlx5_vnet.c |  2 +-
 drivers/vdpa/vdpa.c               |  9 +++-
 drivers/vdpa/vdpa_sim/vdpa_sim.c  |  2 +-
 drivers/vdpa/virtio_pci/vp_vdpa.c |  2 +-
 drivers/vhost/vdpa.c              | 99 ++++++++++++++++++++++++++++++++++-----
 include/linux/vdpa.h              | 19 ++++++--
 7 files changed, 116 insertions(+), 19 deletions(-)

diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
index d555a6a5d1ba..aee013f3eb5f 100644
--- a/drivers/vdpa/ifcvf/ifcvf_main.c
+++ b/drivers/vdpa/ifcvf/ifcvf_main.c
@@ -431,7 +431,7 @@ static int ifcvf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	}
 
 	adapter = vdpa_alloc_device(struct ifcvf_adapter, vdpa,
-				    dev, &ifc_vdpa_ops, NULL);
+				    dev, &ifc_vdpa_ops, NULL, false);
 	if (adapter == NULL) {
 		IFCVF_ERR(pdev, "Failed to allocate vDPA structure");
 		return -ENOMEM;
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 71397fdafa6a..fb62ebcf464a 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -1982,7 +1982,7 @@ static int mlx5v_probe(struct auxiliary_device *adev,
 	max_vqs = min_t(u32, max_vqs, MLX5_MAX_SUPPORTED_VQS);
 
 	ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, mdev->device, &mlx5_vdpa_ops,
-				 NULL);
+				 NULL, false);
 	if (IS_ERR(ndev))
 		return PTR_ERR(ndev);
 
diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
index 5cffce67cab0..97fbac276c72 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -71,6 +71,7 @@ static void vdpa_release_dev(struct device *d)
  * @config: the bus operations that is supported by this device
  * @size: size of the parent structure that contains private data
  * @name: name of the vdpa device; optional.
+ * @use_va: indicate whether virtual address must be used by this device
  *
  * Driver should use vdpa_alloc_device() wrapper macro instead of
  * using this directly.
@@ -80,7 +81,8 @@ static void vdpa_release_dev(struct device *d)
  */
 struct vdpa_device *__vdpa_alloc_device(struct device *parent,
 					const struct vdpa_config_ops *config,
-					size_t size, const char *name)
+					size_t size, const char *name,
+					bool use_va)
 {
 	struct vdpa_device *vdev;
 	int err = -EINVAL;
@@ -91,6 +93,10 @@ struct vdpa_device *__vdpa_alloc_device(struct device *parent,
 	if (!!config->dma_map != !!config->dma_unmap)
 		goto err;
 
+	/* It should only work for the device that use on-chip IOMMU */
+	if (use_va && !(config->dma_map || config->set_map))
+		goto err;
+
 	err = -ENOMEM;
 	vdev = kzalloc(size, GFP_KERNEL);
 	if (!vdev)
@@ -106,6 +112,7 @@ struct vdpa_device *__vdpa_alloc_device(struct device *parent,
 	vdev->index = err;
 	vdev->config = config;
 	vdev->features_valid = false;
+	vdev->use_va = use_va;
 
 	if (name)
 		err = dev_set_name(&vdev->dev, "%s", name);
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index ff331f088baf..d26334e9a412 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -235,7 +235,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr)
 	ops = &vdpasim_config_ops;
 
 	vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops,
-				    dev_attr->name);
+				    dev_attr->name, false);
 	if (!vdpasim)
 		goto err_alloc;
 
diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
index 1321a2fcd088..03b36aed48d6 100644
--- a/drivers/vdpa/virtio_pci/vp_vdpa.c
+++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
@@ -377,7 +377,7 @@ static int vp_vdpa_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		return ret;
 
 	vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
-				    dev, &vp_vdpa_ops, NULL);
+				    dev, &vp_vdpa_ops, NULL, false);
 	if (vp_vdpa == NULL) {
 		dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
 		return -ENOMEM;
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index f9aab9013745..613ea400e0e5 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -505,8 +505,28 @@ static void vhost_vdpa_pa_unmap(struct vhost_vdpa *v, u64 start, u64 last)
 	}
 }
 
+static void vhost_vdpa_va_unmap(struct vhost_vdpa *v, u64 start, u64 last)
+{
+	struct vhost_dev *dev = &v->vdev;
+	struct vhost_iotlb *iotlb = dev->iotlb;
+	struct vhost_iotlb_map *map;
+	struct vdpa_map_file *map_file;
+
+	while ((map = vhost_iotlb_itree_first(iotlb, start, last)) != NULL) {
+		map_file = (struct vdpa_map_file *)map->opaque;
+		fput(map_file->file);
+		kfree(map_file);
+		vhost_iotlb_map_free(iotlb, map);
+	}
+}
+
 static void vhost_vdpa_iotlb_unmap(struct vhost_vdpa *v, u64 start, u64 last)
 {
+	struct vdpa_device *vdpa = v->vdpa;
+
+	if (vdpa->use_va)
+		return vhost_vdpa_va_unmap(v, start, last);
+
 	return vhost_vdpa_pa_unmap(v, start, last);
 }
 
@@ -541,21 +561,21 @@ static int perm_to_iommu_flags(u32 perm)
 	return flags | IOMMU_CACHE;
 }
 
-static int vhost_vdpa_map(struct vhost_vdpa *v,
-			  u64 iova, u64 size, u64 pa, u32 perm)
+static int vhost_vdpa_map(struct vhost_vdpa *v, u64 iova,
+			  u64 size, u64 pa, u32 perm, void *opaque)
 {
 	struct vhost_dev *dev = &v->vdev;
 	struct vdpa_device *vdpa = v->vdpa;
 	const struct vdpa_config_ops *ops = vdpa->config;
 	int r = 0;
 
-	r = vhost_iotlb_add_range(dev->iotlb, iova, iova + size - 1,
-				  pa, perm);
+	r = vhost_iotlb_add_range_ctx(dev->iotlb, iova, iova + size - 1,
+				      pa, perm, opaque);
 	if (r)
 		return r;
 
 	if (ops->dma_map) {
-		r = ops->dma_map(vdpa, iova, size, pa, perm, NULL);
+		r = ops->dma_map(vdpa, iova, size, pa, perm, opaque);
 	} else if (ops->set_map) {
 		if (!v->in_batch)
 			r = ops->set_map(vdpa, dev->iotlb);
@@ -563,13 +583,15 @@ static int vhost_vdpa_map(struct vhost_vdpa *v,
 		r = iommu_map(v->domain, iova, pa, size,
 			      perm_to_iommu_flags(perm));
 	}
-
-	if (r)
+	if (r) {
 		vhost_iotlb_del_range(dev->iotlb, iova, iova + size - 1);
-	else
+		return r;
+	}
+
+	if (!vdpa->use_va)
 		atomic64_add(size >> PAGE_SHIFT, &dev->mm->pinned_vm);
 
-	return r;
+	return 0;
 }
 
 static void vhost_vdpa_unmap(struct vhost_vdpa *v, u64 iova, u64 size)
@@ -590,6 +612,56 @@ static void vhost_vdpa_unmap(struct vhost_vdpa *v, u64 iova, u64 size)
 	}
 }
 
+static int vhost_vdpa_va_map(struct vhost_vdpa *v,
+			     u64 iova, u64 size, u64 uaddr, u32 perm)
+{
+	struct vhost_dev *dev = &v->vdev;
+	u64 offset, map_size, map_iova = iova;
+	struct vdpa_map_file *map_file;
+	struct vm_area_struct *vma;
+	int ret;
+
+	mmap_read_lock(dev->mm);
+
+	while (size) {
+		vma = find_vma(dev->mm, uaddr);
+		if (!vma) {
+			ret = -EINVAL;
+			break;
+		}
+		map_size = min(size, vma->vm_end - uaddr);
+		if (!(vma->vm_file && (vma->vm_flags & VM_SHARED) &&
+			!(vma->vm_flags & (VM_IO | VM_PFNMAP))))
+			goto next;
+
+		map_file = kzalloc(sizeof(*map_file), GFP_KERNEL);
+		if (!map_file) {
+			ret = -ENOMEM;
+			break;
+		}
+		offset = (vma->vm_pgoff << PAGE_SHIFT) + uaddr - vma->vm_start;
+		map_file->offset = offset;
+		map_file->file = get_file(vma->vm_file);
+		ret = vhost_vdpa_map(v, map_iova, map_size, uaddr,
+				     perm, map_file);
+		if (ret) {
+			fput(map_file->file);
+			kfree(map_file);
+			break;
+		}
+next:
+		size -= map_size;
+		uaddr += map_size;
+		map_iova += map_size;
+	}
+	if (ret)
+		vhost_vdpa_unmap(v, iova, map_iova - iova);
+
+	mmap_read_unlock(dev->mm);
+
+	return ret;
+}
+
 static int vhost_vdpa_pa_map(struct vhost_vdpa *v,
 			     u64 iova, u64 size, u64 uaddr, u32 perm)
 {
@@ -656,7 +728,7 @@ static int vhost_vdpa_pa_map(struct vhost_vdpa *v,
 				csize = (last_pfn - map_pfn + 1) << PAGE_SHIFT;
 				ret = vhost_vdpa_map(v, iova, csize,
 						     map_pfn << PAGE_SHIFT,
-						     perm);
+						     perm, NULL);
 				if (ret) {
 					/*
 					 * Unpin the pages that are left unmapped
@@ -685,7 +757,7 @@ static int vhost_vdpa_pa_map(struct vhost_vdpa *v,
 	/* Pin the rest chunk */
 	ret = vhost_vdpa_map(v, iova, (last_pfn - map_pfn + 1) << PAGE_SHIFT,
-			     map_pfn << PAGE_SHIFT, perm);
+			     map_pfn << PAGE_SHIFT, perm, NULL);
 out:
 	if (ret) {
 		if (nchunks) {
@@ -718,6 +790,7 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
 					   struct vhost_iotlb_msg *msg)
 {
 	struct vhost_dev *dev = &v->vdev;
+	struct vdpa_device *vdpa = v->vdpa;
 	struct vhost_iotlb *iotlb = dev->iotlb;
 
 	if (msg->iova < v->range.first ||
@@ -728,6 +801,10 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
 				    msg->iova + msg->size - 1))
 		return -EEXIST;
 
+	if (vdpa->use_va)
+		return vhost_vdpa_va_map(v, msg->iova, msg->size,
+					 msg->uaddr, msg->perm);
+
 	return vhost_vdpa_pa_map(v, msg->iova, msg->size, msg->uaddr,
 				 msg->perm);
 }
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index b01f7c9096bf..e67404e4b23e 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -44,6 +44,7 @@ struct vdpa_mgmt_dev;
  * @config: the configuration ops for this device.
  * @index: device index
  * @features_valid: were features initialized? for legacy guests
+ * @use_va: indicate whether virtual address must be used by this device
  * @nvqs: maximum number of supported virtqueues
  * @mdev: management device pointer; caller must setup when registering device as part
  *	  of dev_add() mgmtdev ops callback before invoking _vdpa_register_device().
@@ -54,6 +55,7 @@ struct vdpa_device {
 	const struct vdpa_config_ops *config;
 	unsigned int index;
 	bool features_valid;
+	bool use_va;
 	int nvqs;
 	struct vdpa_mgmt_dev *mdev;
 };
@@ -69,6 +71,16 @@ struct vdpa_iova_range {
 };
 
 /**
+ * Corresponding file area for device memory mapping
+ * @file: vma->vm_file for the mapping
+ * @offset: mapping offset in the vm_file
+ */
+struct vdpa_map_file {
+	struct file *file;
+	u64 offset;
+};
+
+/**
  * vDPA_config_ops - operations for configuring a vDPA device.
  * Note: vDPA device drivers are required to implement all of the
  * operations unless it is mentioned to be optional in the following
@@ -250,14 +262,15 @@ struct vdpa_config_ops {
 
 struct vdpa_device *__vdpa_alloc_device(struct device *parent,
 					const struct vdpa_config_ops *config,
-					size_t size, const char *name);
+					size_t size, const char *name,
+					bool use_va);
 
-#define vdpa_alloc_device(dev_struct, member, parent, config, name)   \
+#define vdpa_alloc_device(dev_struct, member, parent, config, name, use_va)   \
 			  container_of(__vdpa_alloc_device( \
 				       parent, config, \
 				       sizeof(dev_struct) + \
 				       BUILD_BUG_ON_ZERO(offsetof( \
-				       dev_struct, member)), name), \
+				       dev_struct, member)), name, use_va), \
 				       dev_struct, member)
 
 int vdpa_register_device(struct vdpa_device *vdev, int nvqs);

From patchwork Wed Mar 31 08:05:17 2021
X-Patchwork-Submitter: Yongji Xie
X-Patchwork-Id: 12174617
From: Xie Yongji
Subject: [PATCH v6 08/10] vduse: Implement an MMU-based IOMMU driver
Date: Wed, 31 Mar 2021 16:05:17 +0800
Message-Id: <20210331080519.172-9-xieyongji@bytedance.com>

This implements an MMU-based IOMMU driver to support mapping kernel DMA
buffers into userspace. The basic idea behind it is to treat the MMU
(VA->PA) as an IOMMU (IOVA->PA). The driver sets up MMU mappings instead
of IOMMU mappings for the DMA transfer so that the userspace process is
able to use its virtual address to access the DMA buffer in the kernel.
To avoid security issues, a bounce-buffering mechanism is introduced to
prevent userspace from accessing the original buffer directly.
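A conceptual sketch of the bounce-buffering idea (simplified, not the
driver source): every IOVA page gets a bounce page that userspace is
allowed to map, and the original kernel buffer is only touched by a copy
at map/unmap time, mirroring what do_bounce() in the diff below does.

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

struct bounce_slot {
	struct page *bounce_page;	/* the page userspace will see */
	phys_addr_t orig_phys;		/* the real DMA buffer page */
};

static void bounce_sync(struct bounce_slot *slot, size_t len, bool to_device)
{
	void *orig = kmap_atomic(pfn_to_page(PFN_DOWN(slot->orig_phys)));
	void *bounce = page_address(slot->bounce_page);

	if (to_device)
		memcpy(bounce, orig, len);	/* map: expose data to the userspace "device" */
	else
		memcpy(orig, bounce, len);	/* unmap: bring device writes back */

	kunmap_atomic(orig);
}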
Signed-off-by: Xie Yongji
Acked-by: Jason Wang
---
 drivers/vdpa/vdpa_user/iova_domain.c | 521 +++++++++++++++++++++++++++++++++++
 drivers/vdpa/vdpa_user/iova_domain.h |  70 +++++
 2 files changed, 591 insertions(+)
 create mode 100644 drivers/vdpa/vdpa_user/iova_domain.c
 create mode 100644 drivers/vdpa/vdpa_user/iova_domain.h

diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
new file mode 100644
index 000000000000..ed2a944d99b4
--- /dev/null
+++ b/drivers/vdpa/vdpa_user/iova_domain.c
@@ -0,0 +1,521 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * MMU-based IOMMU implementation
+ *
+ * Copyright (C) 2020-2021 Bytedance Inc. and/or its affiliates. All rights reserved.
+ * + * Author: Xie Yongji + * + */ + +#include +#include +#include +#include +#include +#include + +#include "iova_domain.h" + +static int vduse_iotlb_add_range(struct vduse_iova_domain *domain, + u64 start, u64 last, + u64 addr, unsigned int perm, + struct file *file, u64 offset) +{ + struct vdpa_map_file *map_file; + int ret; + + map_file = kmalloc(sizeof(*map_file), GFP_ATOMIC); + if (!map_file) + return -ENOMEM; + + map_file->file = get_file(file); + map_file->offset = offset; + + ret = vhost_iotlb_add_range_ctx(domain->iotlb, start, last, + addr, perm, map_file); + if (ret) { + fput(map_file->file); + kfree(map_file); + return ret; + } + return 0; +} + +static void vduse_iotlb_del_range(struct vduse_iova_domain *domain, + u64 start, u64 last) +{ + struct vdpa_map_file *map_file; + struct vhost_iotlb_map *map; + + while ((map = vhost_iotlb_itree_first(domain->iotlb, start, last))) { + map_file = (struct vdpa_map_file *)map->opaque; + fput(map_file->file); + kfree(map_file); + vhost_iotlb_map_free(domain->iotlb, map); + } +} + +int vduse_domain_set_map(struct vduse_iova_domain *domain, + struct vhost_iotlb *iotlb) +{ + struct vdpa_map_file *map_file; + struct vhost_iotlb_map *map; + u64 start = 0ULL, last = ULLONG_MAX; + int ret; + + spin_lock(&domain->iotlb_lock); + vduse_iotlb_del_range(domain, start, last); + + for (map = vhost_iotlb_itree_first(iotlb, start, last); map; + map = vhost_iotlb_itree_next(map, start, last)) { + map_file = (struct vdpa_map_file *)map->opaque; + ret = vduse_iotlb_add_range(domain, map->start, map->last, + map->addr, map->perm, + map_file->file, + map_file->offset); + if (ret) + goto err; + } + spin_unlock(&domain->iotlb_lock); + + return 0; +err: + vduse_iotlb_del_range(domain, start, last); + spin_unlock(&domain->iotlb_lock); + return ret; +} + +static int vduse_domain_map_bounce_page(struct vduse_iova_domain *domain, + u64 iova, u64 size, u64 paddr) +{ + struct vduse_bounce_map *map; + u64 last = iova + size - 1; + + while (iova <= last) { + map = &domain->bounce_maps[iova >> PAGE_SHIFT]; + if (!map->bounce_page) { + map->bounce_page = alloc_page(GFP_ATOMIC); + if (!map->bounce_page) + return -ENOMEM; + } + map->orig_phys = paddr; + paddr += PAGE_SIZE; + iova += PAGE_SIZE; + } + return 0; +} + +static void vduse_domain_unmap_bounce_page(struct vduse_iova_domain *domain, + u64 iova, u64 size) +{ + struct vduse_bounce_map *map; + u64 last = iova + size - 1; + + while (iova <= last) { + map = &domain->bounce_maps[iova >> PAGE_SHIFT]; + map->orig_phys = INVALID_PHYS_ADDR; + iova += PAGE_SIZE; + } +} + +static void do_bounce(phys_addr_t orig, void *addr, size_t size, + enum dma_data_direction dir) +{ + unsigned long pfn = PFN_DOWN(orig); + unsigned int offset = offset_in_page(orig); + char *buffer; + unsigned int sz = 0; + + while (size) { + sz = min_t(size_t, PAGE_SIZE - offset, size); + + buffer = kmap_atomic(pfn_to_page(pfn)); + if (dir == DMA_TO_DEVICE) + memcpy(addr, buffer + offset, sz); + else + memcpy(buffer + offset, addr, sz); + kunmap_atomic(buffer); + + size -= sz; + pfn++; + addr += sz; + offset = 0; + } +} + +static void vduse_domain_bounce(struct vduse_iova_domain *domain, + dma_addr_t iova, size_t size, + enum dma_data_direction dir) +{ + struct vduse_bounce_map *map; + unsigned int offset; + void *addr; + size_t sz; + + while (size) { + map = &domain->bounce_maps[iova >> PAGE_SHIFT]; + offset = offset_in_page(iova); + sz = min_t(size_t, PAGE_SIZE - offset, size); + + if (WARN_ON(!map->bounce_page || + map->orig_phys == INVALID_PHYS_ADDR)) + 
return; + + addr = page_address(map->bounce_page) + offset; + do_bounce(map->orig_phys + offset, addr, sz, dir); + size -= sz; + iova += sz; + } +} + +static struct page * +vduse_domain_get_mapping_page(struct vduse_iova_domain *domain, u64 iova) +{ + u64 start = iova & PAGE_MASK; + u64 last = start + PAGE_SIZE - 1; + struct vhost_iotlb_map *map; + struct page *page = NULL; + + spin_lock(&domain->iotlb_lock); + map = vhost_iotlb_itree_first(domain->iotlb, start, last); + if (!map) + goto out; + + page = pfn_to_page((map->addr + iova - map->start) >> PAGE_SHIFT); + get_page(page); +out: + spin_unlock(&domain->iotlb_lock); + + return page; +} + +static struct page * +vduse_domain_get_bounce_page(struct vduse_iova_domain *domain, u64 iova) +{ + struct vduse_bounce_map *map; + struct page *page = NULL; + + spin_lock(&domain->iotlb_lock); + map = &domain->bounce_maps[iova >> PAGE_SHIFT]; + if (!map->bounce_page) + goto out; + + page = map->bounce_page; + get_page(page); +out: + spin_unlock(&domain->iotlb_lock); + + return page; +} + +static void +vduse_domain_free_bounce_pages(struct vduse_iova_domain *domain) +{ + struct vduse_bounce_map *map; + unsigned long pfn, bounce_pfns; + + spin_lock(&domain->iotlb_lock); + bounce_pfns = domain->bounce_size >> PAGE_SHIFT; + + for (pfn = 0; pfn < bounce_pfns; pfn++) { + map = &domain->bounce_maps[pfn]; + if (WARN_ON(map->orig_phys != INVALID_PHYS_ADDR)) + continue; + + if (!map->bounce_page) + continue; + + __free_page(map->bounce_page); + map->bounce_page = NULL; + } + spin_unlock(&domain->iotlb_lock); +} + +void vduse_domain_reset_bounce_map(struct vduse_iova_domain *domain) +{ + if (!domain->bounce_map) + return; + + spin_lock(&domain->iotlb_lock); + if (!domain->bounce_map) + goto unlock; + + vduse_iotlb_del_range(domain, 0, domain->bounce_size - 1); + domain->bounce_map = 0; +unlock: + spin_unlock(&domain->iotlb_lock); +} + +static int vduse_domain_init_bounce_map(struct vduse_iova_domain *domain) +{ + int ret = 0; + + if (domain->bounce_map) + return 0; + + spin_lock(&domain->iotlb_lock); + if (domain->bounce_map) + goto unlock; + + ret = vduse_iotlb_add_range(domain, 0, domain->bounce_size - 1, + 0, VHOST_MAP_RW, domain->file, 0); + if (ret) + goto unlock; + + domain->bounce_map = 1; +unlock: + spin_unlock(&domain->iotlb_lock); + return ret; +} + +static dma_addr_t +vduse_domain_alloc_iova(struct iova_domain *iovad, + unsigned long size, unsigned long limit) +{ + unsigned long shift = iova_shift(iovad); + unsigned long iova_len = iova_align(iovad, size) >> shift; + unsigned long iova_pfn; + + if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1))) + iova_len = roundup_pow_of_two(iova_len); + iova_pfn = alloc_iova_fast(iovad, iova_len, limit >> shift, true); + + return iova_pfn << shift; +} + +static void vduse_domain_free_iova(struct iova_domain *iovad, + dma_addr_t iova, size_t size) +{ + unsigned long shift = iova_shift(iovad); + unsigned long iova_len = iova_align(iovad, size) >> shift; + + free_iova_fast(iovad, iova >> shift, iova_len); +} + +dma_addr_t vduse_domain_map_page(struct vduse_iova_domain *domain, + struct page *page, unsigned long offset, + size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + struct iova_domain *iovad = &domain->stream_iovad; + unsigned long limit = domain->bounce_size - 1; + phys_addr_t pa = page_to_phys(page) + offset; + dma_addr_t iova = vduse_domain_alloc_iova(iovad, size, limit); + + if (!iova) + return DMA_MAPPING_ERROR; + + if (vduse_domain_init_bounce_map(domain)) + goto err; + + if 
(vduse_domain_map_bounce_page(domain, (u64)iova, (u64)size, pa)) + goto err; + + if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL) + vduse_domain_bounce(domain, iova, size, DMA_TO_DEVICE); + + return iova; +err: + vduse_domain_free_iova(iovad, iova, size); + return DMA_MAPPING_ERROR; +} + +void vduse_domain_unmap_page(struct vduse_iova_domain *domain, + dma_addr_t dma_addr, size_t size, + enum dma_data_direction dir, unsigned long attrs) +{ + struct iova_domain *iovad = &domain->stream_iovad; + + if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL) + vduse_domain_bounce(domain, dma_addr, size, DMA_FROM_DEVICE); + + vduse_domain_unmap_bounce_page(domain, (u64)dma_addr, (u64)size); + vduse_domain_free_iova(iovad, dma_addr, size); +} + +void *vduse_domain_alloc_coherent(struct vduse_iova_domain *domain, + size_t size, dma_addr_t *dma_addr, + gfp_t flag, unsigned long attrs) +{ + struct iova_domain *iovad = &domain->consistent_iovad; + unsigned long limit = domain->iova_limit; + dma_addr_t iova = vduse_domain_alloc_iova(iovad, size, limit); + void *orig = alloc_pages_exact(size, flag); + + if (!iova || !orig) + goto err; + + spin_lock(&domain->iotlb_lock); + if (vduse_iotlb_add_range(domain, (u64)iova, (u64)iova + size - 1, + virt_to_phys(orig), VHOST_MAP_RW, + domain->file, (u64)iova)) { + spin_unlock(&domain->iotlb_lock); + goto err; + } + spin_unlock(&domain->iotlb_lock); + + *dma_addr = iova; + + return orig; +err: + *dma_addr = DMA_MAPPING_ERROR; + if (orig) + free_pages_exact(orig, size); + if (iova) + vduse_domain_free_iova(iovad, iova, size); + + return NULL; +} + +void vduse_domain_free_coherent(struct vduse_iova_domain *domain, size_t size, + void *vaddr, dma_addr_t dma_addr, + unsigned long attrs) +{ + struct iova_domain *iovad = &domain->consistent_iovad; + struct vhost_iotlb_map *map; + struct vdpa_map_file *map_file; + phys_addr_t pa; + + spin_lock(&domain->iotlb_lock); + map = vhost_iotlb_itree_first(domain->iotlb, (u64)dma_addr, + (u64)dma_addr + size - 1); + if (WARN_ON(!map)) { + spin_unlock(&domain->iotlb_lock); + return; + } + map_file = (struct vdpa_map_file *)map->opaque; + fput(map_file->file); + kfree(map_file); + pa = map->addr; + vhost_iotlb_map_free(domain->iotlb, map); + spin_unlock(&domain->iotlb_lock); + + vduse_domain_free_iova(iovad, dma_addr, size); + free_pages_exact(phys_to_virt(pa), size); +} + +static vm_fault_t vduse_domain_mmap_fault(struct vm_fault *vmf) +{ + struct vduse_iova_domain *domain = vmf->vma->vm_private_data; + unsigned long iova = vmf->pgoff << PAGE_SHIFT; + struct page *page; + + if (!domain) + return VM_FAULT_SIGBUS; + + if (iova < domain->bounce_size) + page = vduse_domain_get_bounce_page(domain, iova); + else + page = vduse_domain_get_mapping_page(domain, iova); + + if (!page) + return VM_FAULT_SIGBUS; + + vmf->page = page; + + return 0; +} + +static const struct vm_operations_struct vduse_domain_mmap_ops = { + .fault = vduse_domain_mmap_fault, +}; + +static int vduse_domain_mmap(struct file *file, struct vm_area_struct *vma) +{ + struct vduse_iova_domain *domain = file->private_data; + + vma->vm_flags |= VM_DONTDUMP | VM_DONTEXPAND; + vma->vm_private_data = domain; + vma->vm_ops = &vduse_domain_mmap_ops; + + return 0; +} + +static int vduse_domain_release(struct inode *inode, struct file *file) +{ + struct vduse_iova_domain *domain = file->private_data; + + vduse_domain_reset_bounce_map(domain); + vduse_domain_free_bounce_pages(domain); + put_iova_domain(&domain->stream_iovad); + put_iova_domain(&domain->consistent_iovad); + 
vhost_iotlb_free(domain->iotlb); + vfree(domain->bounce_maps); + kfree(domain); + + return 0; +} + +static const struct file_operations vduse_domain_fops = { + .mmap = vduse_domain_mmap, + .release = vduse_domain_release, +}; + +void vduse_domain_destroy(struct vduse_iova_domain *domain) +{ + fput(domain->file); +} + +struct vduse_iova_domain * +vduse_domain_create(unsigned long iova_limit, size_t bounce_size) +{ + struct vduse_iova_domain *domain; + struct file *file; + struct vduse_bounce_map *map; + unsigned long pfn, bounce_pfns; + + bounce_pfns = PAGE_ALIGN(bounce_size) >> PAGE_SHIFT; + if (iova_limit <= bounce_size) + return NULL; + + domain = kzalloc(sizeof(*domain), GFP_KERNEL); + if (!domain) + return NULL; + + domain->iotlb = vhost_iotlb_alloc(0, 0); + if (!domain->iotlb) + goto err_iotlb; + + domain->iova_limit = iova_limit; + domain->bounce_size = PAGE_ALIGN(bounce_size); + domain->bounce_maps = vzalloc(bounce_pfns * + sizeof(struct vduse_bounce_map)); + if (!domain->bounce_maps) + goto err_map; + + for (pfn = 0; pfn < bounce_pfns; pfn++) { + map = &domain->bounce_maps[pfn]; + map->orig_phys = INVALID_PHYS_ADDR; + } + file = anon_inode_getfile("[vduse-domain]", &vduse_domain_fops, + domain, O_RDWR); + if (IS_ERR(file)) + goto err_file; + + domain->file = file; + spin_lock_init(&domain->iotlb_lock); + init_iova_domain(&domain->stream_iovad, + PAGE_SIZE, IOVA_START_PFN); + init_iova_domain(&domain->consistent_iovad, + PAGE_SIZE, bounce_pfns); + + return domain; +err_file: + vfree(domain->bounce_maps); +err_map: + vhost_iotlb_free(domain->iotlb); +err_iotlb: + kfree(domain); + return NULL; +} + +int vduse_domain_init(void) +{ + return iova_cache_get(); +} + +void vduse_domain_exit(void) +{ + iova_cache_put(); +} diff --git a/drivers/vdpa/vdpa_user/iova_domain.h b/drivers/vdpa/vdpa_user/iova_domain.h new file mode 100644 index 000000000000..31f822afa5f5 --- /dev/null +++ b/drivers/vdpa/vdpa_user/iova_domain.h @@ -0,0 +1,70 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * MMU-based IOMMU implementation + * + * Copyright (C) 2020-2021 Bytedance Inc. and/or its affiliates. All rights reserved. 
+ * + * Author: Xie Yongji + * + */ + +#ifndef _VDUSE_IOVA_DOMAIN_H +#define _VDUSE_IOVA_DOMAIN_H + +#include +#include +#include + +#define IOVA_START_PFN 1 + +#define INVALID_PHYS_ADDR (~(phys_addr_t)0) + +struct vduse_bounce_map { + struct page *bounce_page; + u64 orig_phys; +}; + +struct vduse_iova_domain { + struct iova_domain stream_iovad; + struct iova_domain consistent_iovad; + struct vduse_bounce_map *bounce_maps; + size_t bounce_size; + unsigned long iova_limit; + int bounce_map; + struct vhost_iotlb *iotlb; + spinlock_t iotlb_lock; + struct file *file; +}; + +int vduse_domain_set_map(struct vduse_iova_domain *domain, + struct vhost_iotlb *iotlb); + +dma_addr_t vduse_domain_map_page(struct vduse_iova_domain *domain, + struct page *page, unsigned long offset, + size_t size, enum dma_data_direction dir, + unsigned long attrs); + +void vduse_domain_unmap_page(struct vduse_iova_domain *domain, + dma_addr_t dma_addr, size_t size, + enum dma_data_direction dir, unsigned long attrs); + +void *vduse_domain_alloc_coherent(struct vduse_iova_domain *domain, + size_t size, dma_addr_t *dma_addr, + gfp_t flag, unsigned long attrs); + +void vduse_domain_free_coherent(struct vduse_iova_domain *domain, size_t size, + void *vaddr, dma_addr_t dma_addr, + unsigned long attrs); + +void vduse_domain_reset_bounce_map(struct vduse_iova_domain *domain); + +void vduse_domain_destroy(struct vduse_iova_domain *domain); + +struct vduse_iova_domain *vduse_domain_create(unsigned long iova_limit, + size_t bounce_size); + +int vduse_domain_init(void); + +void vduse_domain_exit(void); + +#endif /* _VDUSE_IOVA_DOMAIN_H */ From patchwork Wed Mar 31 08:05:18 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yongji Xie X-Patchwork-Id: 12174621 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EA494C433E1 for ; Wed, 31 Mar 2021 08:07:51 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A5068619C9 for ; Wed, 31 Mar 2021 08:07:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234400AbhCaIHX (ORCPT ); Wed, 31 Mar 2021 04:07:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47510 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234332AbhCaIGx (ORCPT ); Wed, 31 Mar 2021 04:06:53 -0400 Received: from mail-pj1-x102c.google.com (mail-pj1-x102c.google.com [IPv6:2607:f8b0:4864:20::102c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 22539C061760 for ; Wed, 31 Mar 2021 01:06:53 -0700 (PDT) Received: by mail-pj1-x102c.google.com with SMTP id kk2-20020a17090b4a02b02900c777aa746fso810036pjb.3 for ; Wed, 31 Mar 2021 01:06:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=a0kAIaEkxWqvoKfDbJp0jg+AerNl9Z5wmpNV+vgp41k=; 
b=SOhrI7hi14ui0v1BgF4m2F6uJR03xDFC1J6rvCRAtZukZyd6OPm7lSAoJfJX91CJN6 AUSkA3Xt4MlCpshslpiRZGuuUsYYFr/wovq1cJD4A1v6zwHEpvAevf60PkKMZjsajpdP 2mPkV2n6fKwOCH3YjBL52d1W3F7AcP3yvS6W4oO6F+DYvqaQH/L6dmF4CNJ8sjSSSIOo lF/mzLovyjKgZ+5zkJv24nRvhRqweeTFDPiWnfD/d0UsRYaRhIU6nzOvQcPunps5qG3M iQPie4lFwpDgzwsdxNGxJ9lhQW/XX187vpWxnEbZss0m5BILvI/xfE8FT4ZNz3bqQZ8F NdVQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=a0kAIaEkxWqvoKfDbJp0jg+AerNl9Z5wmpNV+vgp41k=; b=ekrkhwu/E3TrDW39+3s52s6B1WGwJgKefLPS7r/tea6gdbsOh/ZaYEnWFLsvm9SuGt 8ArNDz28pEOl1oZnvFHMTGnhk7J2BqT9xfMPnU3/7qpNYbt1ZIbgyhDDi4Pv9buAuCg2 GMCNPHi/3yvKe/FoBgtAo1ixSr2Iu/HjLHaL4RZRnsrxDxh0AgKDpoqi0jUwuUHoqwio kXvceSkojWG5Q49VtnBrPwrOI9YO4Dtv/DD57dq3eVySSFXe1Pe+5g3Ujcx4zTO3mGyL XUJ3QrpjdkkMjEEotMCtbgp4o9aouWSylqmFlr9ice/wDbGs82tiaeglR2OShbzZ25xm 7Jkg== X-Gm-Message-State: AOAM533s4xpBaS+JoFaaqjDBwOhZ7PJdnzOSH0Qn7mIxAxdrkYgAuzas Kedc5X3NLAfwR3XzKt9pXOto X-Google-Smtp-Source: ABdhPJzew/N5MOcjDwB58UlCEaBZdJL3Shwjh/XWHfmEna9dbKs/cyvcxVnjf+QRSuehm5lInxZznw== X-Received: by 2002:a17:90b:804:: with SMTP id bk4mr2319874pjb.25.1617178012344; Wed, 31 Mar 2021 01:06:52 -0700 (PDT) Received: from localhost ([139.177.225.243]) by smtp.gmail.com with ESMTPSA id 205sm1468942pfc.201.2021.03.31.01.06.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 31 Mar 2021 01:06:51 -0700 (PDT) From: Xie Yongji To: mst@redhat.com, jasowang@redhat.com, stefanha@redhat.com, sgarzare@redhat.com, parav@nvidia.com, hch@infradead.org, christian.brauner@canonical.com, rdunlap@infradead.org, willy@infradead.org, viro@zeniv.linux.org.uk, axboe@kernel.dk, bcrl@kvack.org, corbet@lwn.net, mika.penttila@nextfour.com, dan.carpenter@oracle.com Cc: virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH v6 09/10] vduse: Introduce VDUSE - vDPA Device in Userspace Date: Wed, 31 Mar 2021 16:05:18 +0800 Message-Id: <20210331080519.172-10-xieyongji@bytedance.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210331080519.172-1-xieyongji@bytedance.com> References: <20210331080519.172-1-xieyongji@bytedance.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org This VDUSE driver enables implementing vDPA devices in userspace. Both the control path and the data path of a vDPA device can be handled in userspace. In the control path, the VDUSE driver uses a message mechanism to forward configuration operations from the vDPA bus driver to userspace; userspace uses read()/write() to receive and reply to those control messages. In the data path, the VDUSE_IOTLB_GET_FD ioctl is used to get file descriptors referring to the vDPA device's IOVA regions, which userspace can then mmap() to access. Besides, userspace can use ioctl() to inject interrupts and use the eventfd mechanism to receive virtqueue kicks.
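For illustration, a minimal userspace sketch of both paths might look like the
following. It assumes the uapi header added by this series is installed as
<linux/vduse.h>, that dev_fd is an open fd for the per-device char node
/dev/vduse/<name>, and that the feature/status handling below is only a
placeholder rather than a complete virtio device model:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/virtio_config.h>
#include <linux/vduse.h>

static uint8_t dev_status;

/* Control path: read() one request, write() a response with the same request_id. */
static void serve_one_request(int dev_fd)
{
	struct vduse_dev_request req;
	struct vduse_dev_response resp;

	if (read(dev_fd, &req, sizeof(req)) != (ssize_t)sizeof(req))
		return;

	memset(&resp, 0, sizeof(resp));
	resp.request_id = req.request_id;
	resp.result = VDUSE_REQUEST_OK;

	switch (req.type) {
	case VDUSE_GET_FEATURES:
		/* placeholder feature set; VIRTIO_F_ACCESS_PLATFORM must be offered,
		 * since vduse_vdpa_set_features() rejects feature sets without it */
		resp.f.features = 1ULL << VIRTIO_F_ACCESS_PLATFORM;
		break;
	case VDUSE_GET_STATUS:
		resp.s.status = dev_status;
		break;
	case VDUSE_SET_STATUS:
		dev_status = req.s.status;
		break;
	default:
		/* a real device must handle every vduse_req_type */
		break;
	}

	write(dev_fd, &resp, sizeof(resp));
}

/* Data path: get an fd for the IOVA region containing 'iova' and mmap() it. */
static void *map_iova_region(int dev_fd, uint64_t iova, uint64_t *len)
{
	struct vduse_iotlb_entry entry;
	void *addr;
	int prot = 0, fd;

	memset(&entry, 0, sizeof(entry));
	entry.start = iova;
	entry.last = iova;

	/* returns an fd for the first IOVA region overlapping [start, last] */
	fd = ioctl(dev_fd, VDUSE_IOTLB_GET_FD, &entry);
	if (fd < 0)
		return NULL;

	if (entry.perm & VDUSE_ACCESS_RO)
		prot |= PROT_READ;
	if (entry.perm & VDUSE_ACCESS_WO)
		prot |= PROT_WRITE;

	*len = entry.last - entry.start + 1;
	addr = mmap(NULL, *len, prot, MAP_SHARED, fd, entry.offset);
	close(fd);

	return addr == MAP_FAILED ? NULL : addr;
}

A kick eventfd can then be associated with each virtqueue via
VDUSE_VQ_SETUP_KICKFD, and a used-ring interrupt is raised by passing the
virtqueue index as the argument of VDUSE_INJECT_VQ_IRQ.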
Signed-off-by: Xie Yongji --- Documentation/userspace-api/ioctl/ioctl-number.rst | 1 + drivers/vdpa/Kconfig | 10 + drivers/vdpa/Makefile | 1 + drivers/vdpa/vdpa_user/Makefile | 5 + drivers/vdpa/vdpa_user/vduse_dev.c | 1362 ++++++++++++++++++++ include/uapi/linux/vduse.h | 175 +++ 6 files changed, 1554 insertions(+) create mode 100644 drivers/vdpa/vdpa_user/Makefile create mode 100644 drivers/vdpa/vdpa_user/vduse_dev.c create mode 100644 include/uapi/linux/vduse.h diff --git a/Documentation/userspace-api/ioctl/ioctl-number.rst b/Documentation/userspace-api/ioctl/ioctl-number.rst index a4c75a28c839..71722e6f8f23 100644 --- a/Documentation/userspace-api/ioctl/ioctl-number.rst +++ b/Documentation/userspace-api/ioctl/ioctl-number.rst @@ -300,6 +300,7 @@ Code Seq# Include File Comments 'z' 10-4F drivers/s390/crypto/zcrypt_api.h conflict! '|' 00-7F linux/media.h 0x80 00-1F linux/fb.h +0x81 00-1F linux/vduse.h 0x89 00-06 arch/x86/include/asm/sockios.h 0x89 0B-DF linux/sockios.h 0x89 E0-EF linux/sockios.h SIOCPROTOPRIVATE range diff --git a/drivers/vdpa/Kconfig b/drivers/vdpa/Kconfig index a245809c99d0..77a1da522c21 100644 --- a/drivers/vdpa/Kconfig +++ b/drivers/vdpa/Kconfig @@ -25,6 +25,16 @@ config VDPA_SIM_NET help vDPA networking device simulator which loops TX traffic back to RX. +config VDPA_USER + tristate "VDUSE (vDPA Device in Userspace) support" + depends on EVENTFD && MMU && HAS_DMA + select DMA_OPS + select VHOST_IOTLB + select IOMMU_IOVA + help + With VDUSE it is possible to emulate a vDPA Device + in a userspace program. + config IFCVF tristate "Intel IFC VF vDPA driver" depends on PCI_MSI diff --git a/drivers/vdpa/Makefile b/drivers/vdpa/Makefile index 67fe7f3d6943..f02ebed33f19 100644 --- a/drivers/vdpa/Makefile +++ b/drivers/vdpa/Makefile @@ -1,6 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_VDPA) += vdpa.o obj-$(CONFIG_VDPA_SIM) += vdpa_sim/ +obj-$(CONFIG_VDPA_USER) += vdpa_user/ obj-$(CONFIG_IFCVF) += ifcvf/ obj-$(CONFIG_MLX5_VDPA) += mlx5/ obj-$(CONFIG_VP_VDPA) += virtio_pci/ diff --git a/drivers/vdpa/vdpa_user/Makefile b/drivers/vdpa/vdpa_user/Makefile new file mode 100644 index 000000000000..260e0b26af99 --- /dev/null +++ b/drivers/vdpa/vdpa_user/Makefile @@ -0,0 +1,5 @@ +# SPDX-License-Identifier: GPL-2.0 + +vduse-y := vduse_dev.o iova_domain.o + +obj-$(CONFIG_VDPA_USER) += vduse.o diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c new file mode 100644 index 000000000000..51ca73464d0d --- /dev/null +++ b/drivers/vdpa/vdpa_user/vduse_dev.c @@ -0,0 +1,1362 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * VDUSE: vDPA Device in Userspace + * + * Copyright (C) 2020-2021 Bytedance Inc. and/or its affiliates. All rights reserved. 
+ * + * Author: Xie Yongji + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "iova_domain.h" + +#define DRV_VERSION "1.0" +#define DRV_AUTHOR "Yongji Xie " +#define DRV_DESC "vDPA Device in Userspace" +#define DRV_LICENSE "GPL v2" + +#define VDUSE_DEV_MAX (1U << MINORBITS) + +struct vduse_virtqueue { + u16 index; + bool ready; + spinlock_t kick_lock; + spinlock_t irq_lock; + struct eventfd_ctx *kickfd; + struct vdpa_callback cb; + struct work_struct inject; +}; + +struct vduse_dev; + +struct vduse_vdpa { + struct vdpa_device vdpa; + struct vduse_dev *dev; +}; + +struct vduse_dev { + struct vduse_vdpa *vdev; + struct device dev; + struct cdev cdev; + struct vduse_virtqueue *vqs; + struct vduse_iova_domain *domain; + struct mutex lock; + spinlock_t msg_lock; + atomic64_t msg_unique; + wait_queue_head_t waitq; + struct list_head send_list; + struct list_head recv_list; + struct list_head list; + struct vdpa_callback config_cb; + spinlock_t irq_lock; + unsigned long api_version; + bool connected; + int minor; + u16 vq_size_max; + u16 vq_num; + u32 vq_align; + u32 device_id; + u32 vendor_id; +}; + +struct vduse_dev_msg { + struct vduse_dev_request req; + struct vduse_dev_response resp; + struct list_head list; + wait_queue_head_t waitq; + bool completed; +}; + +struct vduse_control { + unsigned long api_version; +}; + +static unsigned long max_bounce_size = (64 * 1024 * 1024); +module_param(max_bounce_size, ulong, 0444); +MODULE_PARM_DESC(max_bounce_size, "Maximum bounce buffer size. (default: 64M)"); + +static unsigned long max_iova_size = (128 * 1024 * 1024); +module_param(max_iova_size, ulong, 0444); +MODULE_PARM_DESC(max_iova_size, "Maximum iova space size (default: 128M)"); + +static DEFINE_MUTEX(vduse_lock); +static LIST_HEAD(vduse_devs); +static DEFINE_IDA(vduse_ida); + +static dev_t vduse_major; +static struct class *vduse_class; +static struct workqueue_struct *vduse_irq_wq; + +static inline struct vduse_dev *vdpa_to_vduse(struct vdpa_device *vdpa) +{ + struct vduse_vdpa *vdev = container_of(vdpa, struct vduse_vdpa, vdpa); + + return vdev->dev; +} + +static inline struct vduse_dev *dev_to_vduse(struct device *dev) +{ + struct vdpa_device *vdpa = dev_to_vdpa(dev); + + return vdpa_to_vduse(vdpa); +} + +static struct vduse_dev_msg *vduse_find_msg(struct list_head *head, + uint32_t request_id) +{ + struct vduse_dev_msg *tmp, *msg = NULL; + + list_for_each_entry(tmp, head, list) { + if (tmp->req.request_id == request_id) { + msg = tmp; + list_del(&tmp->list); + break; + } + } + + return msg; +} + +static struct vduse_dev_msg *vduse_dequeue_msg(struct list_head *head) +{ + struct vduse_dev_msg *msg = NULL; + + if (!list_empty(head)) { + msg = list_first_entry(head, struct vduse_dev_msg, list); + list_del(&msg->list); + } + + return msg; +} + +static void vduse_enqueue_msg(struct list_head *head, + struct vduse_dev_msg *msg) +{ + list_add_tail(&msg->list, head); +} + +static int vduse_dev_msg_sync(struct vduse_dev *dev, + struct vduse_dev_msg *msg) +{ + init_waitqueue_head(&msg->waitq); + spin_lock(&dev->msg_lock); + vduse_enqueue_msg(&dev->send_list, msg); + wake_up(&dev->waitq); + spin_unlock(&dev->msg_lock); + wait_event_interruptible(msg->waitq, msg->completed); + spin_lock(&dev->msg_lock); + if (!msg->completed) + list_del(&msg->list); + spin_unlock(&dev->msg_lock); + + return (msg->resp.result == VDUSE_REQUEST_OK) ? 
0 : -1; +} + +static u32 vduse_dev_get_request_id(struct vduse_dev *dev) +{ + return atomic64_fetch_inc(&dev->msg_unique); +} + +static u64 vduse_dev_get_features(struct vduse_dev *dev) +{ + struct vduse_dev_msg msg = { 0 }; + + msg.req.type = VDUSE_GET_FEATURES; + msg.req.request_id = vduse_dev_get_request_id(dev); + + return vduse_dev_msg_sync(dev, &msg) ? 0 : msg.resp.f.features; +} + +static int vduse_dev_set_features(struct vduse_dev *dev, u64 features) +{ + struct vduse_dev_msg msg = { 0 }; + + msg.req.type = VDUSE_SET_FEATURES; + msg.req.request_id = vduse_dev_get_request_id(dev); + msg.req.f.features = features; + + return vduse_dev_msg_sync(dev, &msg); +} + +static u8 vduse_dev_get_status(struct vduse_dev *dev) +{ + struct vduse_dev_msg msg = { 0 }; + + msg.req.type = VDUSE_GET_STATUS; + msg.req.request_id = vduse_dev_get_request_id(dev); + + return vduse_dev_msg_sync(dev, &msg) ? 0 : msg.resp.s.status; +} + +static void vduse_dev_set_status(struct vduse_dev *dev, u8 status) +{ + struct vduse_dev_msg msg = { 0 }; + + msg.req.type = VDUSE_SET_STATUS; + msg.req.request_id = vduse_dev_get_request_id(dev); + msg.req.s.status = status; + + vduse_dev_msg_sync(dev, &msg); +} + +static void vduse_dev_get_config(struct vduse_dev *dev, unsigned int offset, + void *buf, unsigned int len) +{ + struct vduse_dev_msg msg = { 0 }; + unsigned int sz; + + while (len) { + sz = min_t(unsigned int, len, sizeof(msg.req.config.data)); + msg.req.type = VDUSE_GET_CONFIG; + msg.req.request_id = vduse_dev_get_request_id(dev); + msg.req.config.offset = offset; + msg.req.config.len = sz; + vduse_dev_msg_sync(dev, &msg); + memcpy(buf, msg.resp.config.data, sz); + buf += sz; + offset += sz; + len -= sz; + } +} + +static void vduse_dev_set_config(struct vduse_dev *dev, unsigned int offset, + const void *buf, unsigned int len) +{ + struct vduse_dev_msg msg = { 0 }; + unsigned int sz; + + while (len) { + sz = min_t(unsigned int, len, sizeof(msg.req.config.data)); + msg.req.type = VDUSE_SET_CONFIG; + msg.req.request_id = vduse_dev_get_request_id(dev); + msg.req.config.offset = offset; + msg.req.config.len = sz; + memcpy(msg.req.config.data, buf, sz); + vduse_dev_msg_sync(dev, &msg); + buf += sz; + offset += sz; + len -= sz; + } +} + +static void vduse_dev_set_vq_num(struct vduse_dev *dev, + struct vduse_virtqueue *vq, u32 num) +{ + struct vduse_dev_msg msg = { 0 }; + + msg.req.type = VDUSE_SET_VQ_NUM; + msg.req.request_id = vduse_dev_get_request_id(dev); + msg.req.vq_num.index = vq->index; + msg.req.vq_num.num = num; + + vduse_dev_msg_sync(dev, &msg); +} + +static int vduse_dev_set_vq_addr(struct vduse_dev *dev, + struct vduse_virtqueue *vq, u64 desc_addr, + u64 driver_addr, u64 device_addr) +{ + struct vduse_dev_msg msg = { 0 }; + + msg.req.type = VDUSE_SET_VQ_ADDR; + msg.req.request_id = vduse_dev_get_request_id(dev); + msg.req.vq_addr.index = vq->index; + msg.req.vq_addr.desc_addr = desc_addr; + msg.req.vq_addr.driver_addr = driver_addr; + msg.req.vq_addr.device_addr = device_addr; + + return vduse_dev_msg_sync(dev, &msg); +} + +static void vduse_dev_set_vq_ready(struct vduse_dev *dev, + struct vduse_virtqueue *vq, bool ready) +{ + struct vduse_dev_msg msg = { 0 }; + + msg.req.type = VDUSE_SET_VQ_READY; + msg.req.request_id = vduse_dev_get_request_id(dev); + msg.req.vq_ready.index = vq->index; + msg.req.vq_ready.ready = ready; + + vduse_dev_msg_sync(dev, &msg); +} + +static bool vduse_dev_get_vq_ready(struct vduse_dev *dev, + struct vduse_virtqueue *vq) +{ + struct vduse_dev_msg msg = { 0 }; + + msg.req.type = 
VDUSE_GET_VQ_READY; + msg.req.request_id = vduse_dev_get_request_id(dev); + msg.req.vq_ready.index = vq->index; + + return vduse_dev_msg_sync(dev, &msg) ? false : msg.resp.vq_ready.ready; +} + +static int vduse_dev_get_vq_state(struct vduse_dev *dev, + struct vduse_virtqueue *vq, + struct vdpa_vq_state *state) +{ + struct vduse_dev_msg msg = { 0 }; + int ret; + + msg.req.type = VDUSE_GET_VQ_STATE; + msg.req.request_id = vduse_dev_get_request_id(dev); + msg.req.vq_state.index = vq->index; + + ret = vduse_dev_msg_sync(dev, &msg); + if (!ret) + state->avail_index = msg.resp.vq_state.avail_idx; + + return ret; +} + +static int vduse_dev_set_vq_state(struct vduse_dev *dev, + struct vduse_virtqueue *vq, + const struct vdpa_vq_state *state) +{ + struct vduse_dev_msg msg = { 0 }; + + msg.req.type = VDUSE_SET_VQ_STATE; + msg.req.request_id = vduse_dev_get_request_id(dev); + msg.req.vq_state.index = vq->index; + msg.req.vq_state.avail_idx = state->avail_index; + + return vduse_dev_msg_sync(dev, &msg); +} + +static int vduse_dev_update_iotlb(struct vduse_dev *dev, + u64 start, u64 last) +{ + struct vduse_dev_msg msg = { 0 }; + + if (last < start) + return -EINVAL; + + msg.req.type = VDUSE_UPDATE_IOTLB; + msg.req.request_id = vduse_dev_get_request_id(dev); + msg.req.iova.start = start; + msg.req.iova.last = last; + + return vduse_dev_msg_sync(dev, &msg); +} + +static ssize_t vduse_dev_read_iter(struct kiocb *iocb, struct iov_iter *to) +{ + struct file *file = iocb->ki_filp; + struct vduse_dev *dev = file->private_data; + struct vduse_dev_msg *msg; + int size = sizeof(struct vduse_dev_request); + ssize_t ret = 0; + + if (iov_iter_count(to) < size) + return 0; + + spin_lock(&dev->msg_lock); + while (1) { + msg = vduse_dequeue_msg(&dev->send_list); + if (msg) + break; + + ret = -EAGAIN; + if (file->f_flags & O_NONBLOCK) + goto unlock; + + spin_unlock(&dev->msg_lock); + ret = wait_event_interruptible_exclusive(dev->waitq, + !list_empty(&dev->send_list)); + if (ret) + return ret; + + spin_lock(&dev->msg_lock); + } + spin_unlock(&dev->msg_lock); + ret = copy_to_iter(&msg->req, size, to); + spin_lock(&dev->msg_lock); + if (ret != size) { + ret = -EFAULT; + vduse_enqueue_msg(&dev->send_list, msg); + goto unlock; + } + vduse_enqueue_msg(&dev->recv_list, msg); + wake_up(&dev->waitq); +unlock: + spin_unlock(&dev->msg_lock); + + return ret; +} + +static ssize_t vduse_dev_write_iter(struct kiocb *iocb, struct iov_iter *from) +{ + struct file *file = iocb->ki_filp; + struct vduse_dev *dev = file->private_data; + struct vduse_dev_response resp; + struct vduse_dev_msg *msg; + size_t ret; + + ret = copy_from_iter(&resp, sizeof(resp), from); + if (ret != sizeof(resp)) + return -EINVAL; + + spin_lock(&dev->msg_lock); + msg = vduse_find_msg(&dev->recv_list, resp.request_id); + if (!msg) { + ret = -EINVAL; + goto unlock; + } + + memcpy(&msg->resp, &resp, sizeof(resp)); + msg->completed = 1; + wake_up(&msg->waitq); +unlock: + spin_unlock(&dev->msg_lock); + + return ret; +} + +static __poll_t vduse_dev_poll(struct file *file, poll_table *wait) +{ + struct vduse_dev *dev = file->private_data; + __poll_t mask = 0; + + poll_wait(file, &dev->waitq, wait); + + if (!list_empty(&dev->send_list)) + mask |= EPOLLIN | EPOLLRDNORM; + if (!list_empty(&dev->recv_list)) + mask |= EPOLLOUT; + + return mask; +} + +static void vduse_dev_reset(struct vduse_dev *dev) +{ + int i; + + /* The coherent mappings are handled in vduse_dev_free_coherent() */ + vduse_domain_reset_bounce_map(dev->domain); + vduse_dev_update_iotlb(dev, 0ULL, 
ULLONG_MAX); + + spin_lock(&dev->irq_lock); + dev->config_cb.callback = NULL; + dev->config_cb.private = NULL; + spin_unlock(&dev->irq_lock); + + for (i = 0; i < dev->vq_num; i++) { + struct vduse_virtqueue *vq = &dev->vqs[i]; + + spin_lock(&vq->irq_lock); + vq->ready = false; + vq->cb.callback = NULL; + vq->cb.private = NULL; + spin_unlock(&vq->irq_lock); + } +} + +static int vduse_vdpa_set_vq_address(struct vdpa_device *vdpa, u16 idx, + u64 desc_area, u64 driver_area, + u64 device_area) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + struct vduse_virtqueue *vq = &dev->vqs[idx]; + + return vduse_dev_set_vq_addr(dev, vq, desc_area, + driver_area, device_area); +} + +static void vduse_vdpa_kick_vq(struct vdpa_device *vdpa, u16 idx) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + struct vduse_virtqueue *vq = &dev->vqs[idx]; + + spin_lock(&vq->kick_lock); + if (vq->ready && vq->kickfd) + eventfd_signal(vq->kickfd, 1); + spin_unlock(&vq->kick_lock); +} + +static void vduse_vdpa_set_vq_cb(struct vdpa_device *vdpa, u16 idx, + struct vdpa_callback *cb) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + struct vduse_virtqueue *vq = &dev->vqs[idx]; + + spin_lock(&vq->irq_lock); + vq->cb.callback = cb->callback; + vq->cb.private = cb->private; + spin_unlock(&vq->irq_lock); +} + +static void vduse_vdpa_set_vq_num(struct vdpa_device *vdpa, u16 idx, u32 num) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + struct vduse_virtqueue *vq = &dev->vqs[idx]; + + vduse_dev_set_vq_num(dev, vq, num); +} + +static void vduse_vdpa_set_vq_ready(struct vdpa_device *vdpa, + u16 idx, bool ready) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + struct vduse_virtqueue *vq = &dev->vqs[idx]; + + vduse_dev_set_vq_ready(dev, vq, ready); + vq->ready = ready; +} + +static bool vduse_vdpa_get_vq_ready(struct vdpa_device *vdpa, u16 idx) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + struct vduse_virtqueue *vq = &dev->vqs[idx]; + + vq->ready = vduse_dev_get_vq_ready(dev, vq); + + return vq->ready; +} + +static int vduse_vdpa_set_vq_state(struct vdpa_device *vdpa, u16 idx, + const struct vdpa_vq_state *state) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + struct vduse_virtqueue *vq = &dev->vqs[idx]; + + return vduse_dev_set_vq_state(dev, vq, state); +} + +static int vduse_vdpa_get_vq_state(struct vdpa_device *vdpa, u16 idx, + struct vdpa_vq_state *state) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + struct vduse_virtqueue *vq = &dev->vqs[idx]; + + return vduse_dev_get_vq_state(dev, vq, state); +} + +static u32 vduse_vdpa_get_vq_align(struct vdpa_device *vdpa) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + + return dev->vq_align; +} + +static u64 vduse_vdpa_get_features(struct vdpa_device *vdpa) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + + return vduse_dev_get_features(dev); +} + +static int vduse_vdpa_set_features(struct vdpa_device *vdpa, u64 features) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + + if (!(features & (1ULL << VIRTIO_F_ACCESS_PLATFORM))) + return -EINVAL; + + return vduse_dev_set_features(dev, features); +} + +static void vduse_vdpa_set_config_cb(struct vdpa_device *vdpa, + struct vdpa_callback *cb) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + + spin_lock(&dev->irq_lock); + dev->config_cb.callback = cb->callback; + dev->config_cb.private = cb->private; + spin_unlock(&dev->irq_lock); +} + +static u16 vduse_vdpa_get_vq_num_max(struct vdpa_device *vdpa) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + + return dev->vq_size_max; +} + +static u32 
vduse_vdpa_get_device_id(struct vdpa_device *vdpa) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + + return dev->device_id; +} + +static u32 vduse_vdpa_get_vendor_id(struct vdpa_device *vdpa) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + + return dev->vendor_id; +} + +static u8 vduse_vdpa_get_status(struct vdpa_device *vdpa) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + + return vduse_dev_get_status(dev); +} + +static void vduse_vdpa_set_status(struct vdpa_device *vdpa, u8 status) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + + vduse_dev_set_status(dev, status); + + if (status == 0) + vduse_dev_reset(dev); +} + +static void vduse_vdpa_get_config(struct vdpa_device *vdpa, unsigned int offset, + void *buf, unsigned int len) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + + vduse_dev_get_config(dev, offset, buf, len); +} + +static void vduse_vdpa_set_config(struct vdpa_device *vdpa, unsigned int offset, + const void *buf, unsigned int len) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + + vduse_dev_set_config(dev, offset, buf, len); +} + +static int vduse_vdpa_set_map(struct vdpa_device *vdpa, + struct vhost_iotlb *iotlb) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + int ret; + + ret = vduse_domain_set_map(dev->domain, iotlb); + vduse_dev_update_iotlb(dev, 0ULL, ULLONG_MAX); + + return ret; +} + +static void vduse_vdpa_free(struct vdpa_device *vdpa) +{ + struct vduse_dev *dev = vdpa_to_vduse(vdpa); + + WARN_ON(!list_empty(&dev->send_list)); + WARN_ON(!list_empty(&dev->recv_list)); + dev->vdev = NULL; +} + +static const struct vdpa_config_ops vduse_vdpa_config_ops = { + .set_vq_address = vduse_vdpa_set_vq_address, + .kick_vq = vduse_vdpa_kick_vq, + .set_vq_cb = vduse_vdpa_set_vq_cb, + .set_vq_num = vduse_vdpa_set_vq_num, + .set_vq_ready = vduse_vdpa_set_vq_ready, + .get_vq_ready = vduse_vdpa_get_vq_ready, + .set_vq_state = vduse_vdpa_set_vq_state, + .get_vq_state = vduse_vdpa_get_vq_state, + .get_vq_align = vduse_vdpa_get_vq_align, + .get_features = vduse_vdpa_get_features, + .set_features = vduse_vdpa_set_features, + .set_config_cb = vduse_vdpa_set_config_cb, + .get_vq_num_max = vduse_vdpa_get_vq_num_max, + .get_device_id = vduse_vdpa_get_device_id, + .get_vendor_id = vduse_vdpa_get_vendor_id, + .get_status = vduse_vdpa_get_status, + .set_status = vduse_vdpa_set_status, + .get_config = vduse_vdpa_get_config, + .set_config = vduse_vdpa_set_config, + .set_map = vduse_vdpa_set_map, + .free = vduse_vdpa_free, +}; + +static dma_addr_t vduse_dev_map_page(struct device *dev, struct page *page, + unsigned long offset, size_t size, + enum dma_data_direction dir, + unsigned long attrs) +{ + struct vduse_dev *vdev = dev_to_vduse(dev); + struct vduse_iova_domain *domain = vdev->domain; + + return vduse_domain_map_page(domain, page, offset, size, dir, attrs); +} + +static void vduse_dev_unmap_page(struct device *dev, dma_addr_t dma_addr, + size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + struct vduse_dev *vdev = dev_to_vduse(dev); + struct vduse_iova_domain *domain = vdev->domain; + + return vduse_domain_unmap_page(domain, dma_addr, size, dir, attrs); +} + +static void *vduse_dev_alloc_coherent(struct device *dev, size_t size, + dma_addr_t *dma_addr, gfp_t flag, + unsigned long attrs) +{ + struct vduse_dev *vdev = dev_to_vduse(dev); + struct vduse_iova_domain *domain = vdev->domain; + unsigned long iova; + void *addr; + + *dma_addr = DMA_MAPPING_ERROR; + addr = vduse_domain_alloc_coherent(domain, size, + (dma_addr_t *)&iova, flag, attrs); + if 
(!addr) + return NULL; + + *dma_addr = (dma_addr_t)iova; + vduse_dev_update_iotlb(vdev, iova, iova + size - 1); + + return addr; +} + +static void vduse_dev_free_coherent(struct device *dev, size_t size, + void *vaddr, dma_addr_t dma_addr, + unsigned long attrs) +{ + struct vduse_dev *vdev = dev_to_vduse(dev); + struct vduse_iova_domain *domain = vdev->domain; + unsigned long start = (unsigned long)dma_addr; + unsigned long last = start + size - 1; + + vduse_domain_free_coherent(domain, size, vaddr, dma_addr, attrs); + vduse_dev_update_iotlb(vdev, start, last); +} + +static const struct dma_map_ops vduse_dev_dma_ops = { + .map_page = vduse_dev_map_page, + .unmap_page = vduse_dev_unmap_page, + .alloc = vduse_dev_alloc_coherent, + .free = vduse_dev_free_coherent, +}; + +static unsigned int perm_to_file_flags(u8 perm) +{ + unsigned int flags = 0; + + switch (perm) { + case VDUSE_ACCESS_WO: + flags |= O_WRONLY; + break; + case VDUSE_ACCESS_RO: + flags |= O_RDONLY; + break; + case VDUSE_ACCESS_RW: + flags |= O_RDWR; + break; + default: + WARN(1, "invalidate vhost IOTLB permission\n"); + break; + } + + return flags; +} + +static int vduse_kickfd_setup(struct vduse_dev *dev, + struct vduse_vq_eventfd *eventfd) +{ + struct eventfd_ctx *ctx = NULL; + struct vduse_virtqueue *vq; + + if (eventfd->index >= dev->vq_num) + return -EINVAL; + + vq = &dev->vqs[eventfd->index]; + if (eventfd->fd > 0) { + ctx = eventfd_ctx_fdget(eventfd->fd); + if (IS_ERR(ctx)) + return PTR_ERR(ctx); + } else if (eventfd->fd != VDUSE_EVENTFD_DEASSIGN) + return 0; + + spin_lock(&vq->kick_lock); + if (vq->kickfd) + eventfd_ctx_put(vq->kickfd); + vq->kickfd = ctx; + spin_unlock(&vq->kick_lock); + + return 0; +} + +static void vduse_vq_irq_inject(struct work_struct *work) +{ + struct vduse_virtqueue *vq = container_of(work, + struct vduse_virtqueue, inject); + + spin_lock_irq(&vq->irq_lock); + if (vq->ready && vq->cb.callback) + vq->cb.callback(vq->cb.private); + spin_unlock_irq(&vq->irq_lock); +} + +static long vduse_dev_ioctl(struct file *file, unsigned int cmd, + unsigned long arg) +{ + struct vduse_dev *dev = file->private_data; + void __user *argp = (void __user *)arg; + int ret; + + switch (cmd) { + case VDUSE_IOTLB_GET_FD: { + struct vduse_iotlb_entry entry; + struct vhost_iotlb_map *map; + struct vdpa_map_file *map_file; + struct vduse_iova_domain *domain = dev->domain; + struct file *f = NULL; + + ret = -EFAULT; + if (copy_from_user(&entry, argp, sizeof(entry))) + break; + + ret = -EINVAL; + if (entry.start > entry.last) + break; + + spin_lock(&domain->iotlb_lock); + map = vhost_iotlb_itree_first(domain->iotlb, + entry.start, entry.last); + if (map) { + map_file = (struct vdpa_map_file *)map->opaque; + f = get_file(map_file->file); + entry.offset = map_file->offset; + entry.start = map->start; + entry.last = map->last; + entry.perm = map->perm; + } + spin_unlock(&domain->iotlb_lock); + ret = -EINVAL; + if (!f) + break; + + ret = -EFAULT; + if (copy_to_user(argp, &entry, sizeof(entry))) { + fput(f); + break; + } + ret = receive_fd(f, perm_to_file_flags(entry.perm)); + fput(f); + break; + } + case VDUSE_VQ_SETUP_KICKFD: { + struct vduse_vq_eventfd eventfd; + + ret = -EFAULT; + if (copy_from_user(&eventfd, argp, sizeof(eventfd))) + break; + + ret = vduse_kickfd_setup(dev, &eventfd); + break; + } + case VDUSE_INJECT_VQ_IRQ: + ret = -EINVAL; + if (arg >= dev->vq_num) + break; + + ret = 0; + queue_work(vduse_irq_wq, &dev->vqs[arg].inject); + break; + case VDUSE_INJECT_CONFIG_IRQ: + ret = -EINVAL; + 
spin_lock_irq(&dev->irq_lock); + if (dev->config_cb.callback) { + dev->config_cb.callback(dev->config_cb.private); + ret = 0; + } + spin_unlock_irq(&dev->irq_lock); + break; + default: + ret = -ENOIOCTLCMD; + break; + } + + return ret; +} + +static int vduse_dev_release(struct inode *inode, struct file *file) +{ + struct vduse_dev *dev = file->private_data; + struct vduse_dev_msg *msg; + int i; + + for (i = 0; i < dev->vq_num; i++) { + struct vduse_virtqueue *vq = &dev->vqs[i]; + + spin_lock(&vq->kick_lock); + if (vq->kickfd) + eventfd_ctx_put(vq->kickfd); + vq->kickfd = NULL; + spin_unlock(&vq->kick_lock); + } + + spin_lock(&dev->msg_lock); + /* Make sure the inflight messages can processed */ + while ((msg = vduse_dequeue_msg(&dev->recv_list))) + vduse_enqueue_msg(&dev->send_list, msg); + spin_unlock(&dev->msg_lock); + + dev->connected = false; + + return 0; +} + +static int vduse_dev_open(struct inode *inode, struct file *file) +{ + struct vduse_dev *dev = container_of(inode->i_cdev, + struct vduse_dev, cdev); + int ret = -EBUSY; + + mutex_lock(&dev->lock); + if (dev->connected) + goto unlock; + + ret = 0; + dev->connected = true; + file->private_data = dev; +unlock: + mutex_unlock(&dev->lock); + + return ret; +} + +static const struct file_operations vduse_dev_fops = { + .owner = THIS_MODULE, + .open = vduse_dev_open, + .release = vduse_dev_release, + .read_iter = vduse_dev_read_iter, + .write_iter = vduse_dev_write_iter, + .poll = vduse_dev_poll, + .unlocked_ioctl = vduse_dev_ioctl, + .compat_ioctl = compat_ptr_ioctl, + .llseek = noop_llseek, +}; + +static struct vduse_dev *vduse_dev_create(void) +{ + struct vduse_dev *dev = kzalloc(sizeof(*dev), GFP_KERNEL); + + if (!dev) + return NULL; + + mutex_init(&dev->lock); + spin_lock_init(&dev->msg_lock); + INIT_LIST_HEAD(&dev->send_list); + INIT_LIST_HEAD(&dev->recv_list); + atomic64_set(&dev->msg_unique, 0); + spin_lock_init(&dev->irq_lock); + + init_waitqueue_head(&dev->waitq); + + return dev; +} + +static void vduse_dev_destroy(struct vduse_dev *dev) +{ + kfree(dev); +} + +static struct vduse_dev *vduse_find_dev(const char *name) +{ + struct vduse_dev *tmp, *dev = NULL; + + list_for_each_entry(tmp, &vduse_devs, list) { + if (!strcmp(dev_name(&tmp->dev), name)) { + dev = tmp; + break; + } + } + return dev; +} + +static int vduse_destroy_dev(char *name) +{ + struct vduse_dev *dev = vduse_find_dev(name); + + if (!dev) + return -EINVAL; + + mutex_lock(&dev->lock); + if (dev->vdev || dev->connected) { + mutex_unlock(&dev->lock); + return -EBUSY; + } + dev->connected = true; + mutex_unlock(&dev->lock); + + list_del(&dev->list); + cdev_device_del(&dev->cdev, &dev->dev); + put_device(&dev->dev); + module_put(THIS_MODULE); + + return 0; +} + +static void vduse_release_dev(struct device *device) +{ + struct vduse_dev *dev = + container_of(device, struct vduse_dev, dev); + + ida_simple_remove(&vduse_ida, dev->minor); + kfree(dev->vqs); + vduse_domain_destroy(dev->domain); + vduse_dev_destroy(dev); +} + +static int vduse_create_dev(struct vduse_dev_config *config, + unsigned long api_version) +{ + int i, ret = -ENOMEM; + struct vduse_dev *dev; + + if (config->bounce_size > max_bounce_size) + return -EINVAL; + + if (config->bounce_size > max_iova_size) + return -EINVAL; + + if (vduse_find_dev(config->name)) + return -EEXIST; + + dev = vduse_dev_create(); + if (!dev) + return -ENOMEM; + + dev->api_version = api_version; + dev->device_id = config->device_id; + dev->vendor_id = config->vendor_id; + dev->domain = vduse_domain_create(max_iova_size - 1, + 
config->bounce_size); + if (!dev->domain) + goto err_domain; + + dev->vq_align = config->vq_align; + dev->vq_size_max = config->vq_size_max; + dev->vq_num = config->vq_num; + dev->vqs = kcalloc(dev->vq_num, sizeof(*dev->vqs), GFP_KERNEL); + if (!dev->vqs) + goto err_vqs; + + for (i = 0; i < dev->vq_num; i++) { + dev->vqs[i].index = i; + INIT_WORK(&dev->vqs[i].inject, vduse_vq_irq_inject); + spin_lock_init(&dev->vqs[i].kick_lock); + spin_lock_init(&dev->vqs[i].irq_lock); + } + + ret = ida_simple_get(&vduse_ida, 0, VDUSE_DEV_MAX, GFP_KERNEL); + if (ret < 0) + goto err_ida; + + dev->minor = ret; + device_initialize(&dev->dev); + dev->dev.release = vduse_release_dev; + dev->dev.class = vduse_class; + dev->dev.devt = MKDEV(MAJOR(vduse_major), dev->minor); + ret = dev_set_name(&dev->dev, "%s", config->name); + if (ret) { + put_device(&dev->dev); + return ret; + } + cdev_init(&dev->cdev, &vduse_dev_fops); + dev->cdev.owner = THIS_MODULE; + + ret = cdev_device_add(&dev->cdev, &dev->dev); + if (ret) { + put_device(&dev->dev); + return ret; + } + list_add(&dev->list, &vduse_devs); + __module_get(THIS_MODULE); + + return 0; +err_ida: + kfree(dev->vqs); +err_vqs: + vduse_domain_destroy(dev->domain); +err_domain: + vduse_dev_destroy(dev); + return ret; +} + +static long vduse_ioctl(struct file *file, unsigned int cmd, + unsigned long arg) +{ + int ret; + void __user *argp = (void __user *)arg; + struct vduse_control *control = file->private_data; + + mutex_lock(&vduse_lock); + switch (cmd) { + case VDUSE_GET_API_VERSION: + ret = control->api_version; + break; + case VDUSE_SET_API_VERSION: + ret = -EINVAL; + if (arg > VDUSE_API_VERSION) + break; + + ret = 0; + control->api_version = arg; + break; + case VDUSE_CREATE_DEV: { + struct vduse_dev_config config; + + ret = -EFAULT; + if (copy_from_user(&config, argp, sizeof(config))) + break; + + ret = vduse_create_dev(&config, control->api_version); + break; + } + case VDUSE_DESTROY_DEV: { + char name[VDUSE_NAME_MAX]; + + ret = -EFAULT; + if (copy_from_user(name, argp, VDUSE_NAME_MAX)) + break; + + ret = vduse_destroy_dev(name); + break; + } + default: + ret = -EINVAL; + break; + } + mutex_unlock(&vduse_lock); + + return ret; +} + +static int vduse_release(struct inode *inode, struct file *file) +{ + struct vduse_control *control = file->private_data; + + kfree(control); + return 0; +} + +static int vduse_open(struct inode *inode, struct file *file) +{ + struct vduse_control *control; + + control = kmalloc(sizeof(struct vduse_control), GFP_KERNEL); + if (!control) + return -ENOMEM; + + control->api_version = VDUSE_API_VERSION; + file->private_data = control; + + return 0; +} + +static const struct file_operations vduse_fops = { + .owner = THIS_MODULE, + .open = vduse_open, + .release = vduse_release, + .unlocked_ioctl = vduse_ioctl, + .compat_ioctl = compat_ptr_ioctl, + .llseek = noop_llseek, +}; + +static char *vduse_devnode(struct device *dev, umode_t *mode) +{ + return kasprintf(GFP_KERNEL, "vduse/%s", dev_name(dev)); +} + +static struct miscdevice vduse_misc = { + .fops = &vduse_fops, + .minor = MISC_DYNAMIC_MINOR, + .name = "vduse", + .nodename = "vduse/control", +}; + +static void vduse_mgmtdev_release(struct device *dev) +{ +} + +static struct device vduse_mgmtdev = { + .init_name = "vduse", + .release = vduse_mgmtdev_release, +}; + +static struct vdpa_mgmt_dev mgmt_dev; + +static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name) +{ + struct vduse_vdpa *vdev; + int ret; + + if (dev->vdev) + return -EEXIST; + + vdev = 
vdpa_alloc_device(struct vduse_vdpa, vdpa, &dev->dev, + &vduse_vdpa_config_ops, name, true); + if (!vdev) + return -ENOMEM; + + dev->vdev = vdev; + vdev->dev = dev; + vdev->vdpa.dev.dma_mask = &vdev->vdpa.dev.coherent_dma_mask; + ret = dma_set_mask_and_coherent(&vdev->vdpa.dev, DMA_BIT_MASK(64)); + if (ret) { + put_device(&vdev->vdpa.dev); + return ret; + } + set_dma_ops(&vdev->vdpa.dev, &vduse_dev_dma_ops); + vdev->vdpa.dma_dev = &vdev->vdpa.dev; + vdev->vdpa.mdev = &mgmt_dev; + + return 0; +} + +static int vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name) +{ + struct vduse_dev *dev; + int ret = -EINVAL; + + mutex_lock(&vduse_lock); + dev = vduse_find_dev(name); + if (!dev) { + mutex_unlock(&vduse_lock); + return -EINVAL; + } + ret = vduse_dev_init_vdpa(dev, name); + mutex_unlock(&vduse_lock); + if (ret) + return ret; + + ret = _vdpa_register_device(&dev->vdev->vdpa, dev->vq_num); + if (ret) { + put_device(&dev->vdev->vdpa.dev); + return ret; + } + + return 0; +} + +static void vdpa_dev_del(struct vdpa_mgmt_dev *mdev, struct vdpa_device *dev) +{ + _vdpa_unregister_device(dev); +} + +static const struct vdpa_mgmtdev_ops vdpa_dev_mgmtdev_ops = { + .dev_add = vdpa_dev_add, + .dev_del = vdpa_dev_del, +}; + +static struct virtio_device_id id_table[] = { + { VIRTIO_DEV_ANY_ID, VIRTIO_DEV_ANY_ID }, + { 0 }, +}; + +static struct vdpa_mgmt_dev mgmt_dev = { + .device = &vduse_mgmtdev, + .id_table = id_table, + .ops = &vdpa_dev_mgmtdev_ops, +}; + +static int vduse_mgmtdev_init(void) +{ + int ret; + + ret = device_register(&vduse_mgmtdev); + if (ret) + return ret; + + ret = vdpa_mgmtdev_register(&mgmt_dev); + if (ret) + goto err; + + return 0; +err: + device_unregister(&vduse_mgmtdev); + return ret; +} + +static void vduse_mgmtdev_exit(void) +{ + vdpa_mgmtdev_unregister(&mgmt_dev); + device_unregister(&vduse_mgmtdev); +} + +static int vduse_init(void) +{ + int ret; + + if (max_bounce_size >= max_iova_size) + return -EINVAL; + + ret = misc_register(&vduse_misc); + if (ret) + return ret; + + vduse_class = class_create(THIS_MODULE, "vduse"); + if (IS_ERR(vduse_class)) { + ret = PTR_ERR(vduse_class); + goto err_class; + } + vduse_class->devnode = vduse_devnode; + + ret = alloc_chrdev_region(&vduse_major, 0, VDUSE_DEV_MAX, "vduse"); + if (ret) + goto err_chardev; + + vduse_irq_wq = alloc_workqueue("vduse-irq", + WQ_HIGHPRI | WQ_SYSFS | WQ_UNBOUND, 0); + if (!vduse_irq_wq) + goto err_wq; + + ret = vduse_domain_init(); + if (ret) + goto err_domain; + + ret = vduse_mgmtdev_init(); + if (ret) + goto err_mgmtdev; + + return 0; +err_mgmtdev: + vduse_domain_exit(); +err_domain: + destroy_workqueue(vduse_irq_wq); +err_wq: + unregister_chrdev_region(vduse_major, VDUSE_DEV_MAX); +err_chardev: + class_destroy(vduse_class); +err_class: + misc_deregister(&vduse_misc); + return ret; +} +module_init(vduse_init); + +static void vduse_exit(void) +{ + misc_deregister(&vduse_misc); + class_destroy(vduse_class); + unregister_chrdev_region(vduse_major, VDUSE_DEV_MAX); + destroy_workqueue(vduse_irq_wq); + vduse_domain_exit(); + vduse_mgmtdev_exit(); +} +module_exit(vduse_exit); + +MODULE_VERSION(DRV_VERSION); +MODULE_LICENSE(DRV_LICENSE); +MODULE_AUTHOR(DRV_AUTHOR); +MODULE_DESCRIPTION(DRV_DESC); diff --git a/include/uapi/linux/vduse.h b/include/uapi/linux/vduse.h new file mode 100644 index 000000000000..66a6e5212226 --- /dev/null +++ b/include/uapi/linux/vduse.h @@ -0,0 +1,175 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +#ifndef _UAPI_VDUSE_H_ +#define _UAPI_VDUSE_H_ + +#include + +#define 
VDUSE_API_VERSION 0 + +#define VDUSE_CONFIG_DATA_LEN 256 +#define VDUSE_NAME_MAX 256 + +/* the control messages definition for read/write */ + +enum vduse_req_type { + /* Set the vring address of virtqueue. */ + VDUSE_SET_VQ_NUM, + /* Set the vring address of virtqueue. */ + VDUSE_SET_VQ_ADDR, + /* Set ready status of virtqueue */ + VDUSE_SET_VQ_READY, + /* Get ready status of virtqueue */ + VDUSE_GET_VQ_READY, + /* Set the state for virtqueue */ + VDUSE_SET_VQ_STATE, + /* Get the state for virtqueue */ + VDUSE_GET_VQ_STATE, + /* Set virtio features supported by the driver */ + VDUSE_SET_FEATURES, + /* Get virtio features supported by the device */ + VDUSE_GET_FEATURES, + /* Set the device status */ + VDUSE_SET_STATUS, + /* Get the device status */ + VDUSE_GET_STATUS, + /* Write to device specific configuration space */ + VDUSE_SET_CONFIG, + /* Read from device specific configuration space */ + VDUSE_GET_CONFIG, + /* Notify userspace to update the memory mapping in device IOTLB */ + VDUSE_UPDATE_IOTLB, +}; + +struct vduse_vq_num { + __u32 index; /* virtqueue index */ + __u32 num; /* the size of virtqueue */ +}; + +struct vduse_vq_addr { + __u32 index; /* virtqueue index */ + __u64 desc_addr; /* address of desc area */ + __u64 driver_addr; /* address of driver area */ + __u64 device_addr; /* address of device area */ +}; + +struct vduse_vq_ready { + __u32 index; /* virtqueue index */ + __u8 ready; /* ready status of virtqueue */ +}; + +struct vduse_vq_state { + __u32 index; /* virtqueue index */ + __u16 avail_idx; /* virtqueue state (last_avail_idx) */ +}; + +struct vduse_dev_config_data { + __u32 offset; /* offset from the beginning of config space */ + __u32 len; /* the length to read/write */ + __u8 data[VDUSE_CONFIG_DATA_LEN]; /* data buffer used to read/write */ +}; + +struct vduse_iova_range { + __u64 start; /* start of the IOVA range */ + __u64 last; /* end of the IOVA range */ +}; + +struct vduse_features { + __u64 features; /* virtio features */ +}; + +struct vduse_status { + __u8 status; /* device status */ +}; + +struct vduse_dev_request { + __u32 type; /* request type */ + __u32 request_id; /* request id */ + __u32 reserved[2]; /* for future use */ + union { + struct vduse_vq_num vq_num; /* virtqueue num */ + struct vduse_vq_addr vq_addr; /* virtqueue address */ + struct vduse_vq_ready vq_ready; /* virtqueue ready status */ + struct vduse_vq_state vq_state; /* virtqueue state */ + struct vduse_dev_config_data config; /* virtio device config space */ + struct vduse_iova_range iova; /* iova range for updating */ + struct vduse_features f; /* virtio features */ + struct vduse_status s; /* device status */ + __u32 padding[16]; /* padding */ + }; +}; + +struct vduse_dev_response { + __u32 request_id; /* corresponding request id */ +#define VDUSE_REQUEST_OK 0x00 +#define VDUSE_REQUEST_FAILED 0x01 + __u32 result; /* the result of request */ + __u32 reserved[2]; /* for future use */ + union { + struct vduse_vq_ready vq_ready; /* virtqueue ready status */ + struct vduse_vq_state vq_state; /* virtqueue state */ + struct vduse_dev_config_data config; /* virtio device config space */ + struct vduse_features f; /* virtio features */ + struct vduse_status s; /* device status */ + __u32 padding[16]; /* padding */ + }; +}; + +/* ioctls */ + +struct vduse_dev_config { + char name[VDUSE_NAME_MAX]; /* vduse device name */ + __u32 vendor_id; /* virtio vendor id */ + __u32 device_id; /* virtio device id */ + __u64 bounce_size; /* bounce buffer size for iommu */ + __u16 vq_num; /* the number of 
virtqueues */ + __u16 vq_size_max; /* the max size of virtqueue */ + __u32 vq_align; /* the allocation alignment of virtqueue's metadata */ + __u32 reserved[8]; /* for future use */ +}; + +struct vduse_iotlb_entry { + __u64 offset; /* the mmap offset on fd */ + __u64 start; /* start of the IOVA range */ + __u64 last; /* last of the IOVA range */ +#define VDUSE_ACCESS_RO 0x1 +#define VDUSE_ACCESS_WO 0x2 +#define VDUSE_ACCESS_RW 0x3 + __u8 perm; /* access permission of this range */ +}; + +struct vduse_vq_eventfd { + __u32 index; /* virtqueue index */ +#define VDUSE_EVENTFD_DEASSIGN -1 + int fd; /* eventfd, -1 means de-assigning the eventfd */ +}; + +#define VDUSE_BASE 0x81 + +/* Get the version of VDUSE API. This is used for future extension */ +#define VDUSE_GET_API_VERSION _IO(VDUSE_BASE, 0x00) + +/* Set the version of VDUSE API. */ +#define VDUSE_SET_API_VERSION _IO(VDUSE_BASE, 0x01) + +/* Create a vduse device which is represented by a char device (/dev/vduse/) */ +#define VDUSE_CREATE_DEV _IOW(VDUSE_BASE, 0x02, struct vduse_dev_config) + +/* Destroy a vduse device. Make sure there are no references to the char device */ +#define VDUSE_DESTROY_DEV _IOW(VDUSE_BASE, 0x03, char[VDUSE_NAME_MAX]) + +/* + * Get a file descriptor for the first overlapped iova region, + * -EINVAL means the iova region doesn't exist. + */ +#define VDUSE_IOTLB_GET_FD _IOWR(VDUSE_BASE, 0x04, struct vduse_iotlb_entry) + +/* Setup an eventfd to receive kick for virtqueue */ +#define VDUSE_VQ_SETUP_KICKFD _IOW(VDUSE_BASE, 0x05, struct vduse_vq_eventfd) + +/* Inject an interrupt for specific virtqueue */ +#define VDUSE_INJECT_VQ_IRQ _IO(VDUSE_BASE, 0x06) + +/* Inject a config interrupt */ +#define VDUSE_INJECT_CONFIG_IRQ _IO(VDUSE_BASE, 0x07) + +#endif /* _UAPI_VDUSE_H_ */
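The ioctls above are enough to sketch how a userspace daemon would bring a VDUSE device to life. The snippet below is only an illustration and is not part of the patch: it assumes the control node is /dev/vduse/control as described in the documentation (patch 10/10 below), and the vendor/device IDs, queue parameters and bounce size are placeholder values.

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/vduse.h>

    /* Illustrative only: create a VDUSE device named @name. */
    static int create_vduse_dev(const char *name)
    {
            struct vduse_dev_config config;
            int ctrl_fd, ret;

            ctrl_fd = open("/dev/vduse/control", O_RDWR);
            if (ctrl_fd < 0)
                    return -1;

            memset(&config, 0, sizeof(config));
            strncpy(config.name, name, VDUSE_NAME_MAX - 1);
            config.vendor_id = 0;           /* placeholder vendor id */
            config.device_id = 2;           /* example: virtio-blk */
            config.bounce_size = 64 << 20;  /* example: 64 MiB bounce buffer */
            config.vq_num = 1;
            config.vq_size_max = 256;
            config.vq_align = 4096;

            ret = ioctl(ctrl_fd, VDUSE_CREATE_DEV, &config);
            close(ctrl_fd);
            return ret; /* on success, the per-device char device appears */
    }

On success, the new per-device character device can be opened and served with the request/response protocol defined above; VDUSE_DESTROY_DEV with the same name tears the device down again.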
From patchwork Wed Mar 31 08:05:19 2021
X-Patchwork-Submitter: Yongji Xie
X-Patchwork-Id: 12174619
From: Xie Yongji
To: mst@redhat.com, jasowang@redhat.com, stefanha@redhat.com, sgarzare@redhat.com, parav@nvidia.com, hch@infradead.org, christian.brauner@canonical.com, rdunlap@infradead.org, willy@infradead.org, viro@zeniv.linux.org.uk, axboe@kernel.dk, bcrl@kvack.org, corbet@lwn.net, mika.penttila@nextfour.com, dan.carpenter@oracle.com
Cc: virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH v6 10/10] Documentation: Add documentation for VDUSE
Date: Wed, 31 Mar 2021 16:05:19 +0800
Message-Id: <20210331080519.172-11-xieyongji@bytedance.com>
In-Reply-To: <20210331080519.172-1-xieyongji@bytedance.com>
References: <20210331080519.172-1-xieyongji@bytedance.com>

VDUSE (vDPA Device in Userspace) is a framework to support implementing software-emulated vDPA devices in userspace. This document is intended to clarify the VDUSE design and usage.

Signed-off-by: Xie Yongji
Acked-by: Jason Wang
---
 Documentation/userspace-api/index.rst |   1 +
 Documentation/userspace-api/vduse.rst | 212 ++++++++++++++++++++++++++++++++++
 2 files changed, 213 insertions(+)
 create mode 100644 Documentation/userspace-api/vduse.rst

diff --git a/Documentation/userspace-api/index.rst b/Documentation/userspace-api/index.rst
index acd2cc2a538d..f63119130898 100644
--- a/Documentation/userspace-api/index.rst
+++ b/Documentation/userspace-api/index.rst
@@ -24,6 +24,7 @@ place where this information is gathered.
    ioctl/index
    iommu
    media/index
+   vduse

 .. only:: subproject and html
diff --git a/Documentation/userspace-api/vduse.rst b/Documentation/userspace-api/vduse.rst
new file mode 100644
index 000000000000..8c4e2b2df8bb
--- /dev/null
+++ b/Documentation/userspace-api/vduse.rst
@@ -0,0 +1,212 @@
+==================================
+VDUSE - "vDPA Device in Userspace"
+==================================
+
+A vDPA (virtio data path acceleration) device is a device whose datapath
+complies with the virtio specification, but whose control path is vendor
+specific. vDPA devices can be either physically located on the hardware
+or emulated by software. VDUSE is a framework that makes it possible to
+implement software-emulated vDPA devices in userspace.
+
+How VDUSE works
+---------------
+Each userspace vDPA device is created by the VDUSE_CREATE_DEV ioctl on
+the character device (/dev/vduse/control). Then a device file with the
+specified name (/dev/vduse/$NAME) will appear, which can be used to
+implement the userspace vDPA device's control path and data path.
+
+To implement the control path, a message-based communication protocol and
+several types of control messages are introduced in the VDUSE framework:
+
+- VDUSE_SET_VQ_ADDR: Set the vring address of virtqueue.
+
+- VDUSE_SET_VQ_NUM: Set the size of virtqueue
+
+- VDUSE_SET_VQ_READY: Set ready status of virtqueue
+
+- VDUSE_GET_VQ_READY: Get ready status of virtqueue
+
+- VDUSE_SET_VQ_STATE: Set the state for virtqueue
+
+- VDUSE_GET_VQ_STATE: Get the state for virtqueue
+
+- VDUSE_SET_FEATURES: Set virtio features supported by the driver
+
+- VDUSE_GET_FEATURES: Get virtio features supported by the device
+
+- VDUSE_SET_STATUS: Set the device status
+
+- VDUSE_GET_STATUS: Get the device status
+
+- VDUSE_SET_CONFIG: Write to device specific configuration space
+
+- VDUSE_GET_CONFIG: Read from device specific configuration space
+
+- VDUSE_UPDATE_IOTLB: Notify userspace to update the memory mapping in device IOTLB
+
+These control messages are mostly based on the vdpa_config_ops in
+include/linux/vdpa.h, which defines a unified interface for controlling
+different types of vDPA devices. Userspace needs to read()/write() the
+VDUSE device file to receive/reply to these control messages from/to the
+VDUSE kernel module, as follows:
+
+.. code-block:: c
+
+    static int vduse_message_handler(int dev_fd)
+    {
+        int len;
+        struct vduse_dev_request req;
+        struct vduse_dev_response resp;
+
+        len = read(dev_fd, &req, sizeof(req));
+        if (len != sizeof(req))
+            return -1;
+
+        resp.request_id = req.request_id;
+
+        switch (req.type) {
+
+        /* handle different types of message */
+
+        }
+
+        len = write(dev_fd, &resp, sizeof(resp));
+        if (len != sizeof(resp))
+            return -1;
+
+        return 0;
+    }
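+
+For illustration only, the switch above could handle one write-type and one
+read-type message as sketched below; the per-virtqueue bookkeeping kept by the
+application (the vq_size[] array) is an assumption and not part of the
+protocol:
+
+.. code-block:: c
+
+    static __u32 vq_size[16]; /* per-virtqueue state kept by the application */
+
+    static void handle_one_message(struct vduse_dev_request *req,
+                                   struct vduse_dev_response *resp)
+    {
+        resp->request_id = req->request_id;
+        resp->result = VDUSE_REQUEST_OK;
+
+        switch (req->type) {
+        case VDUSE_SET_VQ_NUM:
+            /* remember the virtqueue size chosen by the driver */
+            vq_size[req->vq_num.index] = req->vq_num.num;
+            break;
+        case VDUSE_GET_VQ_READY:
+            /* report the ready status back in the response payload */
+            resp->vq_ready.index = req->vq_ready.index;
+            resp->vq_ready.ready = 0; /* example: virtqueue not ready yet */
+            break;
+        default:
+            resp->result = VDUSE_REQUEST_FAILED;
+            break;
+        }
+    }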
+
+In the data path, the vDPA device's iova regions will be mapped into userspace
+with the help of the VDUSE_IOTLB_GET_FD ioctl on the VDUSE device file:
+
+- VDUSE_IOTLB_GET_FD: get a file descriptor for the first overlapping iova
+  region. Userspace can access this iova region by passing the fd and the
+  corresponding size, offset and perm to mmap(). For example:
+
+.. code-block:: c
+
+    static int perm_to_prot(uint8_t perm)
+    {
+        int prot = 0;
+
+        switch (perm) {
+        case VDUSE_ACCESS_WO:
+            prot |= PROT_WRITE;
+            break;
+        case VDUSE_ACCESS_RO:
+            prot |= PROT_READ;
+            break;
+        case VDUSE_ACCESS_RW:
+            prot |= PROT_READ | PROT_WRITE;
+            break;
+        }
+
+        return prot;
+    }
+
+    static void *iova_to_va(int dev_fd, uint64_t iova, uint64_t *len)
+    {
+        int fd;
+        void *addr;
+        size_t size;
+        struct vduse_iotlb_entry entry;
+
+        entry.start = iova;
+        entry.last = iova + 1;
+        fd = ioctl(dev_fd, VDUSE_IOTLB_GET_FD, &entry);
+        if (fd < 0)
+            return NULL;
+
+        size = entry.last - entry.start + 1;
+        *len = entry.last - iova + 1;
+        addr = mmap(0, size, perm_to_prot(entry.perm), MAP_SHARED,
+                    fd, entry.offset);
+        close(fd);
+        if (addr == MAP_FAILED)
+            return NULL;
+
+        /* do something to cache this iova region */
+
+        return addr + iova - entry.start;
+    }
+
+Besides these, the following ioctls on the VDUSE device file are provided to
+support interrupt injection and to set up eventfds for virtqueue kicks:
+
+- VDUSE_VQ_SETUP_KICKFD: set the kickfd for a virtqueue; this eventfd is used
+  by the VDUSE kernel module to notify userspace to consume the vring.
+
+- VDUSE_INJECT_VQ_IRQ: inject an interrupt for a specific virtqueue
+
+- VDUSE_INJECT_CONFIG_IRQ: inject a config interrupt
+
+Register VDUSE device on vDPA bus
+---------------------------------
+In order to make the VDUSE device work, the administrator needs to use the
+management API (netlink) to register it on the vDPA bus. Sample code is shown
+below:
+
+.. code-block:: c
+
+    static int netlink_add_vduse(const char *name, int device_id)
+    {
+        struct nl_sock *nlsock;
+        struct nl_msg *msg;
+        int famid;
+
+        nlsock = nl_socket_alloc();
+        if (!nlsock)
+            return -ENOMEM;
+
+        if (genl_connect(nlsock))
+            goto free_sock;
+
+        famid = genl_ctrl_resolve(nlsock, VDPA_GENL_NAME);
+        if (famid < 0)
+            goto close_sock;
+
+        msg = nlmsg_alloc();
+        if (!msg)
+            goto close_sock;
+
+        if (!genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, famid, 0, 0,
+                         VDPA_CMD_DEV_NEW, 0))
+            goto nla_put_failure;
+
+        NLA_PUT_STRING(msg, VDPA_ATTR_DEV_NAME, name);
+        NLA_PUT_STRING(msg, VDPA_ATTR_MGMTDEV_DEV_NAME, "vduse");
+        NLA_PUT_U32(msg, VDPA_ATTR_DEV_ID, device_id);
+
+        if (nl_send_sync(nlsock, msg))
+            goto close_sock;
+
+        nl_close(nlsock);
+        nl_socket_free(nlsock);
+
+        return 0;
+    nla_put_failure:
+        nlmsg_free(msg);
+    close_sock:
+        nl_close(nlsock);
+    free_sock:
+        nl_socket_free(nlsock);
+        return -1;
+    }
+
+MMU-based IOMMU Driver
+----------------------
+The VDUSE framework implements an MMU-based on-chip IOMMU driver to support
+mapping kernel DMA buffers into the userspace iova region dynamically. This
+is mainly designed for the virtio-vdpa case (kernel virtio drivers).
+
+The basic idea behind this driver is to treat the MMU (VA->PA) as an IOMMU
+(IOVA->PA). The driver sets up MMU mappings instead of IOMMU mappings for DMA
+transfers, so that the userspace process is able to use its virtual addresses
+to access the DMA buffers in the kernel.
+
+To avoid security issues, a bounce-buffering mechanism is introduced to
+prevent userspace from directly accessing the original buffer, which may
+contain other kernel data. During mapping and unmapping, the driver copies
+the data from the original buffer to the bounce buffer and back, depending on
+the direction of the transfer, and the bounce-buffer addresses are mapped
+into the user address space instead of the original ones.
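+
+To make the copy direction concrete, the sketch below illustrates the idea;
+the structure and helpers are purely illustrative and are not the actual
+driver code, which operates on the bounce pages of the VDUSE iova domain:
+
+.. code-block:: c
+
+    enum dma_dir { DIR_BIDIRECTIONAL, DIR_TO_DEVICE, DIR_FROM_DEVICE };
+
+    struct bounce_mapping {
+        void *orig;    /* original kernel buffer */
+        void *bounce;  /* bounce buffer exposed to userspace */
+        size_t size;
+    };
+
+    /* at map time: data the device will read must reach the bounce buffer */
+    static void bounce_map(struct bounce_mapping *m, enum dma_dir dir)
+    {
+        if (dir != DIR_FROM_DEVICE)
+            memcpy(m->bounce, m->orig, m->size);
+    }
+
+    /* at unmap time: data the device has written goes back to the original buffer */
+    static void bounce_unmap(struct bounce_mapping *m, enum dma_dir dir)
+    {
+        if (dir != DIR_TO_DEVICE)
+            memcpy(m->orig, m->bounce, m->size);
+    }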