From patchwork Thu Nov 12 23:19:01 2020
X-Patchwork-Submitter: Mike Christie
X-Patchwork-Id: 11902243
From: Mike Christie
To: stefanha@redhat.com, qemu-devel@nongnu.org, fam@euphon.net, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, mst@redhat.com, jasowang@redhat.com, pbonzini@redhat.com, virtualization@lists.linux-foundation.org
Subject: [PATCH 01/10] vhost: remove work arg from vhost_work_flush
Date: Thu, 12 Nov 2020 17:19:01 -0600
Message-Id: <1605223150-10888-3-git-send-email-michael.christie@oracle.com>
In-Reply-To: <1605223150-10888-1-git-send-email-michael.christie@oracle.com>
References: <1605223150-10888-1-git-send-email-michael.christie@oracle.com>
suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000 definitions=main-2011120130 X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9803 signatures=668682 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 mlxlogscore=999 mlxscore=0 malwarescore=0 suspectscore=0 lowpriorityscore=0 adultscore=0 phishscore=0 priorityscore=1501 spamscore=0 impostorscore=0 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000 definitions=main-2011120130 Precedence: bulk List-ID: X-Mailing-List: target-devel@vger.kernel.org vhost_work_flush doesn't do anything with the work arg. This patch drops it and then renames vhost_work_flush to vhost_work_dev_flush to reflect that the function flushes all the works in the dev and not just a specific queue or work item. Signed-off-by: Mike Christie Acked-by: Jason Wang Reviewed-by: Chaitanya Kulkarni --- drivers/vhost/scsi.c | 4 ++-- drivers/vhost/vhost.c | 8 ++++---- drivers/vhost/vhost.h | 2 +- drivers/vhost/vsock.c | 2 +- 4 files changed, 8 insertions(+), 8 deletions(-) diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index f22fce5..8795fd3 100644 --- a/drivers/vhost/scsi.c +++ b/drivers/vhost/scsi.c @@ -1468,8 +1468,8 @@ static void vhost_scsi_flush(struct vhost_scsi *vs) /* Flush both the vhost poll and vhost work */ for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) vhost_scsi_flush_vq(vs, i); - vhost_work_flush(&vs->dev, &vs->vs_completion_work); - vhost_work_flush(&vs->dev, &vs->vs_event_work); + vhost_work_dev_flush(&vs->dev); + vhost_work_dev_flush(&vs->dev); /* Wait for all reqs issued before the flush to be finished */ for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index a262e12..78d9535 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -231,7 +231,7 @@ void vhost_poll_stop(struct vhost_poll *poll) } EXPORT_SYMBOL_GPL(vhost_poll_stop); -void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work) +void vhost_work_dev_flush(struct vhost_dev *dev) { struct vhost_flush_struct flush; @@ -243,13 +243,13 @@ void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work) wait_for_completion(&flush.wait_event); } } -EXPORT_SYMBOL_GPL(vhost_work_flush); +EXPORT_SYMBOL_GPL(vhost_work_dev_flush); /* Flush any work that has been scheduled. When calling this, don't hold any * locks that are also used by the callback. 
*/ void vhost_poll_flush(struct vhost_poll *poll) { - vhost_work_flush(poll->dev, &poll->work); + vhost_work_dev_flush(poll->dev); } EXPORT_SYMBOL_GPL(vhost_poll_flush); @@ -538,7 +538,7 @@ static int vhost_attach_cgroups(struct vhost_dev *dev) attach.owner = current; vhost_work_init(&attach.work, vhost_attach_cgroups_work); vhost_work_queue(dev, &attach.work); - vhost_work_flush(dev, &attach.work); + vhost_work_dev_flush(dev); return attach.ret; } diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h index b063324..1ba8e81 100644 --- a/drivers/vhost/vhost.h +++ b/drivers/vhost/vhost.h @@ -46,7 +46,7 @@ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn, void vhost_poll_stop(struct vhost_poll *poll); void vhost_poll_flush(struct vhost_poll *poll); void vhost_poll_queue(struct vhost_poll *poll); -void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work); +void vhost_work_dev_flush(struct vhost_dev *dev); long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp); struct vhost_log { diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c index a483cec..f40205f 100644 --- a/drivers/vhost/vsock.c +++ b/drivers/vhost/vsock.c @@ -652,7 +652,7 @@ static void vhost_vsock_flush(struct vhost_vsock *vsock) for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) if (vsock->vqs[i].handle_kick) vhost_poll_flush(&vsock->vqs[i].poll); - vhost_work_flush(&vsock->dev, &vsock->send_pkt_work); + vhost_work_dev_flush(&vsock->dev); } static void vhost_vsock_reset_orphans(struct sock *sk) From patchwork Thu Nov 12 23:19:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Christie X-Patchwork-Id: 11902247 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A58CD1391 for ; Thu, 12 Nov 2020 23:19:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7C6C9216C4 for ; Thu, 12 Nov 2020 23:19:42 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=oracle.com header.i=@oracle.com header.b="zl4HvnZW" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726042AbgKLXTl (ORCPT ); Thu, 12 Nov 2020 18:19:41 -0500 Received: from userp2120.oracle.com ([156.151.31.85]:38690 "EHLO userp2120.oracle.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726063AbgKLXTl (ORCPT ); Thu, 12 Nov 2020 18:19:41 -0500 Received: from pps.filterd (userp2120.oracle.com [127.0.0.1]) by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0ACNAR4R109860; Thu, 12 Nov 2020 23:19:26 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : subject : date : message-id : in-reply-to : references; s=corp-2020-01-29; bh=on3yVKDApVeThHnfRXreTxe05Ey7ypnRF7cGsfswQMs=; b=zl4HvnZWzCUyiTLVNG3kMQ1ozuBvYxLmr/Aaqz94OrCBTm/W8ONvLDvpIEZnT9p/mmaq suVEDWmL5VWhbcawAPQeCC9edPGqnHWpJJYGoOwKtAKkyHKSQB2HbJ+6LAaUI34HV1nd AvUnHbHC4n+fCUn0Pzgg1nT49d3Hewy68kAKQoTvqrdZarTOQ698wQ5zcZNL+1D8iJtK KBA5bOdNaIEPJR5l48Za7G4+4f5Q4KhBlcaRZcPc8Iqx6nH7mG0GqqH/CmPxEEqWWGH4 yJ35lwB9djx5blwVf4RD0zaaafnccsoGpTTRm47Nozo1YN5ek4elINBno2FHBXFNoPQw 0g== Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80]) by userp2120.oracle.com with ESMTP id 34p72exau7-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL); Thu, 12 Nov 2020 23:19:26 +0000 Received: from 
pps.filterd (userp3030.oracle.com [127.0.0.1]) by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0ACNAH1G075878; Thu, 12 Nov 2020 23:19:26 GMT
From: Mike Christie
To: stefanha@redhat.com, qemu-devel@nongnu.org, fam@euphon.net, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, mst@redhat.com, jasowang@redhat.com, pbonzini@redhat.com, virtualization@lists.linux-foundation.org
Subject: [PATCH 02/10] vhost scsi: remove extra flushes
Date: Thu, 12 Nov 2020 17:19:02 -0600
Message-Id: <1605223150-10888-4-git-send-email-michael.christie@oracle.com>
In-Reply-To: <1605223150-10888-1-git-send-email-michael.christie@oracle.com>
References: <1605223150-10888-1-git-send-email-michael.christie@oracle.com>

The vhost work flush function flushes the entire work queue, so there is no need for the double vhost_work_dev_flush calls in vhost_scsi_flush. We also do not need to call vhost_poll_flush for each poller, because that call ends up flushing the same work queue thread that the vhost_work_dev_flush call has already flushed.
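To see why one device-wide flush is enough, here is a rough userspace sketch of the idea, not the kernel implementation: a single worker thread drains a FIFO work list, and a flush just queues a sentinel item and waits for it. The helper names (work_queue, work_dev_flush) and the pthread/semaphore plumbing are illustrative assumptions, not the vhost API.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

struct work {
	void (*fn)(struct work *);
	struct work *next;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t kick = PTHREAD_COND_INITIALIZER;
static struct work *head, *tail;
static int stop;

/* Queue a work item for the single worker thread (FIFO order). */
static void work_queue(struct work *work)
{
	pthread_mutex_lock(&lock);
	work->next = NULL;
	if (tail)
		tail->next = work;
	else
		head = work;
	tail = work;
	pthread_cond_signal(&kick);
	pthread_mutex_unlock(&lock);
}

/* The lone worker: runs queued items strictly in the order queued. */
static void *worker_fn(void *arg)
{
	(void)arg;
	for (;;) {
		struct work *work;

		pthread_mutex_lock(&lock);
		while (!head && !stop)
			pthread_cond_wait(&kick, &lock);
		work = head;
		if (work) {
			head = work->next;
			if (!head)
				tail = NULL;
		}
		pthread_mutex_unlock(&lock);

		if (!work)
			return NULL;	/* stop requested and list drained */
		work->fn(work);
	}
}

struct flush_work {
	struct work work;	/* must stay first so the cast below is valid */
	sem_t done;
};

static void flush_fn(struct work *work)
{
	struct flush_work *f = (struct flush_work *)work;

	sem_post(&f->done);
}

/*
 * Flush the whole "device": queue one sentinel and wait for it. Everything
 * queued before the sentinel has run by the time sem_wait() returns.
 */
static void work_dev_flush(void)
{
	struct flush_work f = { .work.fn = flush_fn };

	sem_init(&f.done, 0, 0);
	work_queue(&f.work);
	sem_wait(&f.done);
	sem_destroy(&f.done);
}

static void print_fn(struct work *work)
{
	(void)work;
	puts("work item ran");
}

int main(void)
{
	pthread_t t;
	struct work a = { .fn = print_fn }, b = { .fn = print_fn };

	pthread_create(&t, NULL, worker_fn, NULL);
	work_queue(&a);
	work_queue(&b);
	work_dev_flush();	/* both a and b have completed here */

	pthread_mutex_lock(&lock);
	stop = 1;
	pthread_cond_signal(&kick);
	pthread_mutex_unlock(&lock);
	pthread_join(t, NULL);
	return 0;
}

Since the sentinel cannot run before anything queued ahead of it, one flush per worker covers every earlier work item, which is why the per-work-item and per-vq flushes removed here are redundant.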
Signed-off-by: Mike Christie Reviewed-by: Stefan Hajnoczi --- drivers/vhost/scsi.c | 8 -------- 1 file changed, 8 deletions(-) diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index 8795fd3..4725a08 100644 --- a/drivers/vhost/scsi.c +++ b/drivers/vhost/scsi.c @@ -1443,11 +1443,6 @@ static void vhost_scsi_handle_kick(struct vhost_work *work) vhost_scsi_handle_vq(vs, vq); } -static void vhost_scsi_flush_vq(struct vhost_scsi *vs, int index) -{ - vhost_poll_flush(&vs->vqs[index].vq.poll); -} - /* Callers must hold dev mutex */ static void vhost_scsi_flush(struct vhost_scsi *vs) { @@ -1466,9 +1461,6 @@ static void vhost_scsi_flush(struct vhost_scsi *vs) kref_put(&old_inflight[i]->kref, vhost_scsi_done_inflight); /* Flush both the vhost poll and vhost work */ - for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) - vhost_scsi_flush_vq(vs, i); - vhost_work_dev_flush(&vs->dev); vhost_work_dev_flush(&vs->dev); /* Wait for all reqs issued before the flush to be finished */ From patchwork Thu Nov 12 23:19:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Christie X-Patchwork-Id: 11902283 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 205AC138B for ; Thu, 12 Nov 2020 23:21:37 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DF0FC20759 for ; Thu, 12 Nov 2020 23:21:36 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=oracle.com header.i=@oracle.com header.b="TChRxTh2" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726149AbgKLXVg (ORCPT ); Thu, 12 Nov 2020 18:21:36 -0500 Received: from userp2120.oracle.com ([156.151.31.85]:40112 "EHLO userp2120.oracle.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726005AbgKLXVg (ORCPT ); Thu, 12 Nov 2020 18:21:36 -0500 Received: from pps.filterd (userp2120.oracle.com [127.0.0.1]) by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0ACN9ZGj108694; Thu, 12 Nov 2020 23:21:27 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : subject : date : message-id : in-reply-to : references; s=corp-2020-01-29; bh=yaoC44iWxEWpayBZFO8DRvZUwxcfJVyZmPhJ5CMUOZI=; b=TChRxTh2SVNURuxRIoNEsIksYZ7abBWaHczlqJqbCREMSBlGOkup0x899xj7iLnxYX8w ymMvrWeJHOB/U5bs9ankj6wuIWxCmJNpOds/A4taSToidsCNjSGnJJDo8MQvS/l6PV3E rskudgH1SG5fIIDgAY+FB6cy+1N67jFPl++TFOH82nE55doOzseQ3ki941QTxqkDBAS3 mlgmXWGnPhmOk7F09MkAz/c+xb3t+J0W0RTT7KupRbU+5lsEC/GvXntL0jJKHvuuSogD t1U6qtYR6DEgrWb6MjFrcnIuRyHgVB970hLx20KeMkaV62W5Mq+LUMIONho0c6+8ZE7V mw== Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70]) by userp2120.oracle.com with ESMTP id 34p72exb0h-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL); Thu, 12 Nov 2020 23:21:27 +0000 Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1]) by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0ACNB0Ra027426; Thu, 12 Nov 2020 23:19:26 GMT Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72]) by aserp3020.oracle.com with ESMTP id 34p5g3tsbj-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 12 Nov 2020 23:19:26 +0000 Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20]) by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 0ACNJO4W011499; Thu, 12 Nov 2020 23:19:24 GMT 
From: Mike Christie
To: stefanha@redhat.com, qemu-devel@nongnu.org, fam@euphon.net, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, mst@redhat.com, jasowang@redhat.com, pbonzini@redhat.com, virtualization@lists.linux-foundation.org
Subject: [PATCH 03/10] vhost poll: fix coding style
Date: Thu, 12 Nov 2020 17:19:03 -0600
Message-Id: <1605223150-10888-5-git-send-email-michael.christie@oracle.com>
In-Reply-To: <1605223150-10888-1-git-send-email-michael.christie@oracle.com>
References: <1605223150-10888-1-git-send-email-michael.christie@oracle.com>

We use three different coding styles in this struct. Switch to just tabs.

Signed-off-by: Mike Christie
Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Stefan Hajnoczi
---
 drivers/vhost/vhost.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h index 1ba8e81..575c818 100644 --- a/drivers/vhost/vhost.h +++ b/drivers/vhost/vhost.h @@ -28,12 +28,12 @@ struct vhost_work { /* Poll a file (eventfd or socket) */ /* Note: there's nothing vhost specific about this structure.
*/ struct vhost_poll { - poll_table table; - wait_queue_head_t *wqh; - wait_queue_entry_t wait; - struct vhost_work work; - __poll_t mask; - struct vhost_dev *dev; + poll_table table; + wait_queue_head_t *wqh; + wait_queue_entry_t wait; + struct vhost_work work; + __poll_t mask; + struct vhost_dev *dev; }; void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn);
From patchwork Thu Nov 12 23:19:04 2020
X-Patchwork-Submitter: Mike Christie
X-Patchwork-Id: 11902255
From: Mike Christie
To: stefanha@redhat.com, qemu-devel@nongnu.org, fam@euphon.net, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, mst@redhat.com, jasowang@redhat.com, pbonzini@redhat.com, virtualization@lists.linux-foundation.org
Subject: [PATCH 04/10] vhost: support multiple worker threads
Date: Thu, 12 Nov 2020 17:19:04 -0600
Message-Id: <1605223150-10888-6-git-send-email-michael.christie@oracle.com>
In-Reply-To: <1605223150-10888-1-git-send-email-michael.christie@oracle.com>
References: <1605223150-10888-1-git-send-email-michael.christie@oracle.com>

This is a prep patch to support multiple vhost worker threads per vhost dev. It converts the code that had assumed a single worker thread by:

1. Moving the worker related fields to a new struct vhost_worker.
2. Converting the vhost.c code to use the new struct and assume we will have an array of workers.
3. Exporting 2 helper functions that will be used in the last patch when vhost-scsi is converted to use this new functionality.

Why do we need multiple worker threads? The admin can set num_queues > 1 and the guest OS will run in multiqueue mode where, depending on num_queues, you might get a queue per CPU. The layers below vhost-scsi are also doing multiqueue. So while vhost-scsi will create num_queue vqs, every IO on every CPU has to be submitted from and completed on this one worker thread on one CPU, and we can't fully utilize the multiple queues above and below us.

With the null_blk driver we max out at 360K IOPS when doing a random workload like:

fio --direct=1 --rw=randrw --bs=4k --ioengine=libaio \
  --iodepth=VQ_QUEUE_DEPTH --numjobs=NUM_VQS --filename /dev/sdXYZ

where NUM_VQS gets up to 8 (the number of cores per NUMA node on my system) and VQ_QUEUE_DEPTH can be anywhere from 32 to 128. With the patches in this set and the patches to remove the sess_cmd_lock and execution_lock from LIO's IO path in the SCSI tree for 5.11, we are able to get IOPS from a single LUN up to 700K.

Signed-off-by: Mike Christie
---
 drivers/vhost/vhost.c | 260 +++++++++++++++++++++++++++++++++++++++-----------
 drivers/vhost/vhost.h | 14 ++-
 2 files changed, 217 insertions(+), 57 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index 78d9535..d229515 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -231,16 +231,47 @@ void vhost_poll_stop(struct vhost_poll *poll) } EXPORT_SYMBOL_GPL(vhost_poll_stop); -void vhost_work_dev_flush(struct vhost_dev *dev) +static void vhost_work_queue_on(struct vhost_dev *dev, struct vhost_work *work, + int worker_id) +{ + if (!dev->num_workers) + return; + + if (!test_and_set_bit(VHOST_WORK_QUEUED, &work->flags)) { + /* We can only add the work to the list after we're + * sure it was not in the list. + * test_and_set_bit() implies a memory barrier.
+ */ + llist_add(&work->node, &dev->workers[worker_id]->work_list); + wake_up_process(dev->workers[worker_id]->task); + } +} + +void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work) +{ + vhost_work_queue_on(dev, work, 0); +} +EXPORT_SYMBOL_GPL(vhost_work_queue); + +static void vhost_work_flush_on(struct vhost_dev *dev, int worker_id) { struct vhost_flush_struct flush; - if (dev->worker) { - init_completion(&flush.wait_event); - vhost_work_init(&flush.work, vhost_flush_work); + init_completion(&flush.wait_event); + vhost_work_init(&flush.work, vhost_flush_work); + + vhost_work_queue_on(dev, &flush.work, worker_id); + wait_for_completion(&flush.wait_event); +} + +void vhost_work_dev_flush(struct vhost_dev *dev) +{ + int i; - vhost_work_queue(dev, &flush.work); - wait_for_completion(&flush.wait_event); + for (i = 0; i < dev->num_workers; i++) { + if (!dev->workers[i]) + continue; + vhost_work_flush_on(dev, i); } } EXPORT_SYMBOL_GPL(vhost_work_dev_flush); @@ -253,26 +284,18 @@ void vhost_poll_flush(struct vhost_poll *poll) } EXPORT_SYMBOL_GPL(vhost_poll_flush); -void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work) +/* A lockless hint for busy polling code to exit the loop */ +bool vhost_has_work(struct vhost_dev *dev) { - if (!dev->worker) - return; + int i; - if (!test_and_set_bit(VHOST_WORK_QUEUED, &work->flags)) { - /* We can only add the work to the list after we're - * sure it was not in the list. - * test_and_set_bit() implies a memory barrier. - */ - llist_add(&work->node, &dev->work_list); - wake_up_process(dev->worker); + for (i = 0; i < dev->num_workers; i++) { + if (dev->workers[i] && + !llist_empty(&dev->workers[i]->work_list)) + return true; } -} -EXPORT_SYMBOL_GPL(vhost_work_queue); -/* A lockless hint for busy polling code to exit the loop */ -bool vhost_has_work(struct vhost_dev *dev) -{ - return !llist_empty(&dev->work_list); + return false; } EXPORT_SYMBOL_GPL(vhost_has_work); @@ -343,7 +366,8 @@ static void vhost_vq_reset(struct vhost_dev *dev, static int vhost_worker(void *data) { - struct vhost_dev *dev = data; + struct vhost_worker *worker = data; + struct vhost_dev *dev = worker->dev; struct vhost_work *work, *work_next; struct llist_node *node; @@ -357,8 +381,7 @@ static int vhost_worker(void *data) __set_current_state(TASK_RUNNING); break; } - - node = llist_del_all(&dev->work_list); + node = llist_del_all(&worker->work_list); if (!node) schedule(); @@ -481,13 +504,13 @@ void vhost_dev_init(struct vhost_dev *dev, dev->umem = NULL; dev->iotlb = NULL; dev->mm = NULL; - dev->worker = NULL; + dev->workers = NULL; + dev->num_workers = 0; dev->iov_limit = iov_limit; dev->weight = weight; dev->byte_weight = byte_weight; dev->use_worker = use_worker; dev->msg_handler = msg_handler; - init_llist_head(&dev->work_list); init_waitqueue_head(&dev->wait); INIT_LIST_HEAD(&dev->read_list); INIT_LIST_HEAD(&dev->pending_list); @@ -500,6 +523,7 @@ void vhost_dev_init(struct vhost_dev *dev, vq->indirect = NULL; vq->heads = NULL; vq->dev = dev; + vq->worker_id = 0; mutex_init(&vq->mutex); vhost_vq_reset(dev, vq); if (vq->handle_kick) @@ -531,14 +555,14 @@ static void vhost_attach_cgroups_work(struct vhost_work *work) s->ret = cgroup_attach_task_all(s->owner, current); } -static int vhost_attach_cgroups(struct vhost_dev *dev) +static int vhost_attach_cgroups_on(struct vhost_dev *dev, int worker_id) { struct vhost_attach_cgroups_struct attach; attach.owner = current; vhost_work_init(&attach.work, vhost_attach_cgroups_work); - vhost_work_queue(dev, 
&attach.work); - vhost_work_dev_flush(dev); + vhost_work_queue_on(dev, &attach.work, worker_id); + vhost_work_flush_on(dev, worker_id); return attach.ret; } @@ -579,10 +603,153 @@ static void vhost_detach_mm(struct vhost_dev *dev) dev->mm = NULL; } +static void vhost_worker_free(struct vhost_dev *dev, int worker_id) +{ + struct vhost_worker *worker; + + worker = dev->workers[worker_id]; + WARN_ON(!llist_empty(&worker->work_list)); + kthread_stop(worker->task); + kfree(worker); + + dev->workers[worker_id] = NULL; +} + +void vhost_vq_worker_remove(struct vhost_dev *dev, struct vhost_virtqueue *vq) +{ + /* + * vqs may share a worker and so this might have been removed already. + */ + if (!dev->workers[vq->worker_id]) + return; + + vhost_worker_free(dev, vq->worker_id); + dev->num_workers--; + + vq->worker_id = 0; +} +EXPORT_SYMBOL_GPL(vhost_vq_worker_remove); + +static void vhost_workers_free(struct vhost_dev *dev) +{ + int i; + + if (!dev->workers) + return; + + for (i = 0; i < dev->nvqs; i++) { + if (!dev->num_workers) + break; + vhost_vq_worker_remove(dev, dev->vqs[i]); + } + + kfree(dev->workers); + dev->workers = NULL; +} + +static int vhost_worker_create(struct vhost_dev *dev, int worker_id) +{ + struct vhost_worker *worker; + struct task_struct *task; + int ret; + + worker = kzalloc(sizeof(*worker), GFP_KERNEL); + if (!worker) + return -ENOMEM; + + init_llist_head(&worker->work_list); + worker->dev = dev; + + task = kthread_create(vhost_worker, worker, "vhost-%d", current->pid); + if (IS_ERR(task)) { + ret = PTR_ERR(task); + goto free_worker; + } + + dev->workers[worker_id] = worker; + worker->task = task; + wake_up_process(task); /* avoid contributing to loadavg */ + return 0; + +free_worker: + kfree(worker); + return ret; +} + +/** + * vhost_vq_worker_add - create a new worker and add it to workers[] + * @dev: vhost device + * @vq: optional virtqueue to bind worker to. + * + * Caller must have the device mutex and have stopped operations that + * can access the workers array. + */ +int vhost_vq_worker_add(struct vhost_dev *dev, struct vhost_virtqueue *vq) +{ + struct mm_struct *mm; + bool owner_match = true; + int err, worker_id; + + if (vq && vq->worker_id) + return -EINVAL; + + if (vhost_dev_has_owner(dev)) { + mm = get_task_mm(current); + if (mm != dev->mm) + owner_match = false; + mmput(mm); + if (!owner_match) + return -EBUSY; + } + + worker_id = dev->num_workers; + err = vhost_worker_create(dev, worker_id); + if (err) + return -ENOMEM; + dev->num_workers++; + + err = vhost_attach_cgroups_on(dev, worker_id); + if (err) + goto free_worker; + + if (vq) + vq->worker_id = worker_id; + return 0; + +free_worker: + dev->num_workers--; + vhost_worker_free(dev, worker_id); + return err; +} +EXPORT_SYMBOL_GPL(vhost_vq_worker_add); + +static int vhost_workers_create(struct vhost_dev *dev) +{ + int err; + + dev->workers = kcalloc(dev->nvqs, sizeof(struct vhost_worker *), + GFP_KERNEL); + if (!dev->workers) + return -ENOMEM; + /* + * All drivers that set use_worker=true use at least one worker that + * may be bound to multiple vqs. Drivers like vhost-scsi may override + * this later. + */ + err = vhost_vq_worker_add(dev, NULL); + if (err) + goto free_workers; + return 0; + +free_workers: + kfree(dev->workers); + dev->workers = NULL; + return err; +} + /* Caller should have device mutex */ long vhost_dev_set_owner(struct vhost_dev *dev) { - struct task_struct *worker; int err; /* Is there an owner already? 
*/ @@ -595,31 +762,18 @@ long vhost_dev_set_owner(struct vhost_dev *dev) dev->kcov_handle = kcov_common_handle(); if (dev->use_worker) { - worker = kthread_create(vhost_worker, dev, - "vhost-%d", current->pid); - if (IS_ERR(worker)) { - err = PTR_ERR(worker); - goto err_worker; - } - - dev->worker = worker; - wake_up_process(worker); /* avoid contributing to loadavg */ - - err = vhost_attach_cgroups(dev); + err = vhost_workers_create(dev); if (err) - goto err_cgroup; + goto err_worker; } err = vhost_dev_alloc_iovecs(dev); if (err) - goto err_cgroup; + goto err_iovecs; return 0; -err_cgroup: - if (dev->worker) { - kthread_stop(dev->worker); - dev->worker = NULL; - } +err_iovecs: + vhost_workers_free(dev); err_worker: vhost_detach_mm(dev); dev->kcov_handle = 0; @@ -712,12 +866,8 @@ void vhost_dev_cleanup(struct vhost_dev *dev) dev->iotlb = NULL; vhost_clear_msg(dev); wake_up_interruptible_poll(&dev->wait, EPOLLIN | EPOLLRDNORM); - WARN_ON(!llist_empty(&dev->work_list)); - if (dev->worker) { - kthread_stop(dev->worker); - dev->worker = NULL; - dev->kcov_handle = 0; - } + vhost_workers_free(dev); + dev->kcov_handle = 0; vhost_detach_mm(dev); } EXPORT_SYMBOL_GPL(vhost_dev_cleanup); diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h index 575c818..f334e90 100644 --- a/drivers/vhost/vhost.h +++ b/drivers/vhost/vhost.h @@ -16,6 +16,7 @@ #include struct vhost_work; +struct vhost_virtqueue; typedef void (*vhost_work_fn_t)(struct vhost_work *work); #define VHOST_WORK_QUEUED 1 @@ -25,6 +26,12 @@ struct vhost_work { unsigned long flags; }; +struct vhost_worker { + struct task_struct *task; + struct llist_head work_list; + struct vhost_dev *dev; +}; + /* Poll a file (eventfd or socket) */ /* Note: there's nothing vhost specific about this structure. */ struct vhost_poll { @@ -39,6 +46,8 @@ struct vhost_poll { void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn); void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work); bool vhost_has_work(struct vhost_dev *dev); +int vhost_vq_worker_add(struct vhost_dev *dev, struct vhost_virtqueue *vq); +void vhost_vq_worker_remove(struct vhost_dev *dev, struct vhost_virtqueue *vq); void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn, __poll_t mask, struct vhost_dev *dev); @@ -84,6 +93,7 @@ struct vhost_virtqueue { struct vhost_poll poll; + int worker_id; /* The routine to call when the Guest pings us, or timeout. 
*/ vhost_work_fn_t handle_kick; @@ -149,8 +159,8 @@ struct vhost_dev { struct vhost_virtqueue **vqs; int nvqs; struct eventfd_ctx *log_ctx; - struct llist_head work_list; - struct task_struct *worker; + struct vhost_worker **workers; + int num_workers; struct vhost_iotlb *umem; struct vhost_iotlb *iotlb; spinlock_t iotlb_lock;
From patchwork Thu Nov 12 23:19:05 2020
X-Patchwork-Submitter: Mike Christie
X-Patchwork-Id: 11902267
From: Mike Christie
To: stefanha@redhat.com, qemu-devel@nongnu.org, fam@euphon.net, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, mst@redhat.com, jasowang@redhat.com, pbonzini@redhat.com, virtualization@lists.linux-foundation.org
Subject: [PATCH 05/10] vhost: poll support multiple workers
Date: Thu, 12 Nov 2020 17:19:05 -0600
Message-Id: <1605223150-10888-7-git-send-email-michael.christie@oracle.com>
In-Reply-To:
<1605223150-10888-1-git-send-email-michael.christie@oracle.com> References: <1605223150-10888-1-git-send-email-michael.christie@oracle.com> X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9803 signatures=668682 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 mlxscore=0 phishscore=0 suspectscore=0 bulkscore=0 malwarescore=0 mlxlogscore=999 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000 definitions=main-2011120130 X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9803 signatures=668682 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 priorityscore=1501 mlxscore=0 suspectscore=0 mlxlogscore=999 lowpriorityscore=0 spamscore=0 malwarescore=0 adultscore=0 clxscore=1015 bulkscore=0 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000 definitions=main-2011120130 Precedence: bulk List-ID: X-Mailing-List: target-devel@vger.kernel.org The final patches are going to have vhost scsi create a vhost worker per IO vq. This patch converts the poll code to poll and queue work on the worker that is tied to the vq (in this patch we maintain the old behavior where all vqs use a single worker). For drivers that do not convert over to the multiple worker support or for the case where the user just does not want to allocate the resources then we maintain support for the single worker case. Note: This adds a new function vhost_vq_work_queue. It's used by this patch and also the next one, so I exported it here. Signed-off-by: Mike Christie --- drivers/vhost/net.c | 6 ++++-- drivers/vhost/vhost.c | 14 +++++++++++--- drivers/vhost/vhost.h | 6 ++++-- 3 files changed, 19 insertions(+), 7 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 531a00d..6a27fe6 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -1330,8 +1330,10 @@ static int vhost_net_open(struct inode *inode, struct file *f) VHOST_NET_PKT_WEIGHT, VHOST_NET_WEIGHT, true, NULL); - vhost_poll_init(n->poll + VHOST_NET_VQ_TX, handle_tx_net, EPOLLOUT, dev); - vhost_poll_init(n->poll + VHOST_NET_VQ_RX, handle_rx_net, EPOLLIN, dev); + vhost_poll_init(n->poll + VHOST_NET_VQ_TX, handle_tx_net, EPOLLOUT, dev, + vqs[VHOST_NET_VQ_TX]); + vhost_poll_init(n->poll + VHOST_NET_VQ_RX, handle_rx_net, EPOLLIN, dev, + vqs[VHOST_NET_VQ_RX]); f->private_data = n; n->page_frag.page = NULL; diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index d229515..9eeb8c7 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -187,13 +187,15 @@ void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn) /* Init poll structure */ void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn, - __poll_t mask, struct vhost_dev *dev) + __poll_t mask, struct vhost_dev *dev, + struct vhost_virtqueue *vq) { init_waitqueue_func_entry(&poll->wait, vhost_poll_wakeup); init_poll_funcptr(&poll->table, vhost_poll_func); poll->mask = mask; poll->dev = dev; poll->wqh = NULL; + poll->vq = vq; vhost_work_init(&poll->work, fn); } @@ -284,6 +286,12 @@ void vhost_poll_flush(struct vhost_poll *poll) } EXPORT_SYMBOL_GPL(vhost_poll_flush); +void vhost_vq_work_queue(struct vhost_virtqueue *vq, struct vhost_work *work) +{ + vhost_work_queue_on(vq->dev, work, vq->worker_id); +} +EXPORT_SYMBOL_GPL(vhost_vq_work_queue); + /* A lockless hint for busy polling code to exit the loop */ bool vhost_has_work(struct vhost_dev *dev) { @@ -301,7 +309,7 @@ bool vhost_has_work(struct vhost_dev *dev) void 
vhost_poll_queue(struct vhost_poll *poll) { - vhost_work_queue(poll->dev, &poll->work); + vhost_vq_work_queue(poll->vq, &poll->work); } EXPORT_SYMBOL_GPL(vhost_poll_queue); @@ -528,7 +536,7 @@ void vhost_dev_init(struct vhost_dev *dev, vhost_vq_reset(dev, vq); if (vq->handle_kick) vhost_poll_init(&vq->poll, vq->handle_kick, - EPOLLIN, dev); + EPOLLIN, dev, vq); } } EXPORT_SYMBOL_GPL(vhost_dev_init); diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h index f334e90..232c5f9 100644 --- a/drivers/vhost/vhost.h +++ b/drivers/vhost/vhost.h @@ -33,7 +33,6 @@ struct vhost_worker { }; /* Poll a file (eventfd or socket) */ -/* Note: there's nothing vhost specific about this structure. */ struct vhost_poll { poll_table table; wait_queue_head_t *wqh; @@ -41,16 +40,19 @@ struct vhost_poll { struct vhost_work work; __poll_t mask; struct vhost_dev *dev; + struct vhost_virtqueue *vq; }; void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn); void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work); bool vhost_has_work(struct vhost_dev *dev); +void vhost_vq_work_queue(struct vhost_virtqueue *vq, struct vhost_work *work); int vhost_vq_worker_add(struct vhost_dev *dev, struct vhost_virtqueue *vq); void vhost_vq_worker_remove(struct vhost_dev *dev, struct vhost_virtqueue *vq); void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn, - __poll_t mask, struct vhost_dev *dev); + __poll_t mask, struct vhost_dev *dev, + struct vhost_virtqueue *vq); int vhost_poll_start(struct vhost_poll *poll, struct file *file); void vhost_poll_stop(struct vhost_poll *poll); void vhost_poll_flush(struct vhost_poll *poll); From patchwork Thu Nov 12 23:19:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Christie X-Patchwork-Id: 11902269 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4483515E6 for ; Thu, 12 Nov 2020 23:19:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1E5D2216C4 for ; Thu, 12 Nov 2020 23:19:47 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=oracle.com header.i=@oracle.com header.b="BOZ4cmOC" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726182AbgKLXTq (ORCPT ); Thu, 12 Nov 2020 18:19:46 -0500 Received: from aserp2120.oracle.com ([141.146.126.78]:33658 "EHLO aserp2120.oracle.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725929AbgKLXTn (ORCPT ); Thu, 12 Nov 2020 18:19:43 -0500 Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1]) by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0ACN9wqA118615; Thu, 12 Nov 2020 23:19:29 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : subject : date : message-id : in-reply-to : references; s=corp-2020-01-29; bh=PuG2MHrqBf/1kUfD7Ba10juzqwo0+sOBqOl8iJowU7k=; b=BOZ4cmOCM6oICTeUXY7Ug+7JPpJVJq9AnxN6rNOqQWANEKHj0QeOxPsNibq0e5qxlymS pV+aA68sCUJYVr5ouDcXRa+9inK0ETAFq7cDpTiQS3bmml1nnCzablrX5iZXSxTMqm5Z SaGSKfiuXcCqC1W5g+9M+nCUSoyskTr11HPhM7c+ipaGGqW+joN+kUtxd99tG58KEKtP m0/Aiv9uxSQ55O84soLjErMMt5FxfB8YkTh7CDz0CzRpoXbq2IaKs6b00wtPp7IWbx7Y 5NZamjDd2C5/0NifLbZHLWHB/Z1ebR431uaatRJ214qf1EmB+orFSpAJGZjFMVKc99cD Wg== Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80]) by aserp2120.oracle.com with ESMTP id 34nkhm83ve-1 
(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL); Thu, 12 Nov 2020 23:19:29 +0000 Received: from pps.filterd (userp3030.oracle.com [127.0.0.1]) by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0ACNAHDa075901; Thu, 12 Nov 2020 23:19:29 GMT Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75]) by userp3030.oracle.com with ESMTP id 34rtksk51j-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 12 Nov 2020 23:19:28 +0000 Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20]) by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0ACNJRmT026830; Thu, 12 Nov 2020 23:19:28 GMT Received: from ol2.localdomain (/73.88.28.6) by default (Oracle Beehive Gateway v4.0) with ESMTP ; Thu, 12 Nov 2020 15:19:27 -0800 From: Mike Christie To: stefanha@redhat.com, qemu-devel@nongnu.org, fam@euphon.net, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, mst@redhat.com, jasowang@redhat.com, pbonzini@redhat.com, virtualization@lists.linux-foundation.org Subject: [PATCH 06/10] vhost scsi: make SCSI cmd completion per vq Date: Thu, 12 Nov 2020 17:19:06 -0600 Message-Id: <1605223150-10888-8-git-send-email-michael.christie@oracle.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1605223150-10888-1-git-send-email-michael.christie@oracle.com> References: <1605223150-10888-1-git-send-email-michael.christie@oracle.com> X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9803 signatures=668682 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 mlxscore=0 phishscore=0 suspectscore=0 bulkscore=0 malwarescore=0 mlxlogscore=999 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000 definitions=main-2011120130 X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9803 signatures=668682 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 priorityscore=1501 mlxscore=0 suspectscore=0 mlxlogscore=999 lowpriorityscore=0 spamscore=0 malwarescore=0 adultscore=0 clxscore=1015 bulkscore=0 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000 definitions=main-2011120130 Precedence: bulk List-ID: X-Mailing-List: target-devel@vger.kernel.org In the last patches we are going to have a worker thread per IO vq. This patch separates the scsi cmd completion code paths so we can complete cmds based on their vq instead of having all cmds complete on the same worker thread. Signed-off-by: Mike Christie Reviewed-by: Stefan Hajnoczi --- drivers/vhost/scsi.c | 48 +++++++++++++++++++++++++----------------------- 1 file changed, 25 insertions(+), 23 deletions(-) diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index 4725a08..2bbe1a8 100644 --- a/drivers/vhost/scsi.c +++ b/drivers/vhost/scsi.c @@ -178,6 +178,7 @@ enum { struct vhost_scsi_virtqueue { struct vhost_virtqueue vq; + struct vhost_scsi *vs; /* * Reference counting for inflight reqs, used for flush operation. 
At * each time, one reference tracks new commands submitted, while we @@ -192,6 +193,9 @@ struct vhost_scsi_virtqueue { struct vhost_scsi_cmd *scsi_cmds; struct sbitmap scsi_tags; int max_cmds; + + struct vhost_work completion_work; + struct llist_head completion_list; }; struct vhost_scsi { @@ -202,9 +206,6 @@ struct vhost_scsi { struct vhost_dev dev; struct vhost_scsi_virtqueue vqs[VHOST_SCSI_MAX_VQ]; - struct vhost_work vs_completion_work; /* cmd completion work item */ - struct llist_head vs_completion_list; /* cmd completion queue */ - struct vhost_work vs_event_work; /* evt injection work item */ struct llist_head vs_event_list; /* evt injection queue */ @@ -380,10 +381,11 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd) } else { struct vhost_scsi_cmd *cmd = container_of(se_cmd, struct vhost_scsi_cmd, tvc_se_cmd); - struct vhost_scsi *vs = cmd->tvc_vhost; + struct vhost_scsi_virtqueue *svq = container_of(cmd->tvc_vq, + struct vhost_scsi_virtqueue, vq); - llist_add(&cmd->tvc_completion_list, &vs->vs_completion_list); - vhost_work_queue(&vs->dev, &vs->vs_completion_work); + llist_add(&cmd->tvc_completion_list, &svq->completion_list); + vhost_vq_work_queue(&svq->vq, &svq->completion_work); } } @@ -545,18 +547,17 @@ static void vhost_scsi_evt_work(struct vhost_work *work) */ static void vhost_scsi_complete_cmd_work(struct vhost_work *work) { - struct vhost_scsi *vs = container_of(work, struct vhost_scsi, - vs_completion_work); - DECLARE_BITMAP(signal, VHOST_SCSI_MAX_VQ); + struct vhost_scsi_virtqueue *svq = container_of(work, + struct vhost_scsi_virtqueue, completion_work); struct virtio_scsi_cmd_resp v_rsp; struct vhost_scsi_cmd *cmd, *t; struct llist_node *llnode; struct se_cmd *se_cmd; struct iov_iter iov_iter; - int ret, vq; + bool signal = false; + int ret; - bitmap_zero(signal, VHOST_SCSI_MAX_VQ); - llnode = llist_del_all(&vs->vs_completion_list); + llnode = llist_del_all(&svq->completion_list); llist_for_each_entry_safe(cmd, t, llnode, tvc_completion_list) { se_cmd = &cmd->tvc_se_cmd; @@ -576,21 +577,16 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work) cmd->tvc_in_iovs, sizeof(v_rsp)); ret = copy_to_iter(&v_rsp, sizeof(v_rsp), &iov_iter); if (likely(ret == sizeof(v_rsp))) { - struct vhost_scsi_virtqueue *q; + signal = true; vhost_add_used(cmd->tvc_vq, cmd->tvc_vq_desc, 0); - q = container_of(cmd->tvc_vq, struct vhost_scsi_virtqueue, vq); - vq = q - vs->vqs; - __set_bit(vq, signal); } else pr_err("Faulted on virtio_scsi_cmd_resp\n"); vhost_scsi_release_cmd_res(se_cmd); } - vq = -1; - while ((vq = find_next_bit(signal, VHOST_SCSI_MAX_VQ, vq + 1)) - < VHOST_SCSI_MAX_VQ) - vhost_signal(&vs->dev, &vs->vqs[vq].vq); + if (signal) + vhost_signal(&svq->vs->dev, &svq->vq); } static struct vhost_scsi_cmd * @@ -1799,6 +1795,7 @@ static int vhost_scsi_set_features(struct vhost_scsi *vs, u64 features) static int vhost_scsi_open(struct inode *inode, struct file *f) { + struct vhost_scsi_virtqueue *svq; struct vhost_scsi *vs; struct vhost_virtqueue **vqs; int r = -ENOMEM, i; @@ -1814,7 +1811,6 @@ static int vhost_scsi_open(struct inode *inode, struct file *f) if (!vqs) goto err_vqs; - vhost_work_init(&vs->vs_completion_work, vhost_scsi_complete_cmd_work); vhost_work_init(&vs->vs_event_work, vhost_scsi_evt_work); vs->vs_events_nr = 0; @@ -1825,8 +1821,14 @@ static int vhost_scsi_open(struct inode *inode, struct file *f) vs->vqs[VHOST_SCSI_VQ_CTL].vq.handle_kick = vhost_scsi_ctl_handle_kick; vs->vqs[VHOST_SCSI_VQ_EVT].vq.handle_kick = vhost_scsi_evt_handle_kick; for 
(i = VHOST_SCSI_VQ_IO; i < VHOST_SCSI_MAX_VQ; i++) { - vqs[i] = &vs->vqs[i].vq; - vs->vqs[i].vq.handle_kick = vhost_scsi_handle_kick; + svq = &vs->vqs[i]; + + vqs[i] = &svq->vq; + svq->vs = vs; + init_llist_head(&svq->completion_list); + vhost_work_init(&svq->completion_work, + vhost_scsi_complete_cmd_work); + svq->vq.handle_kick = vhost_scsi_handle_kick; } vhost_dev_init(&vs->dev, vqs, VHOST_SCSI_MAX_VQ, UIO_MAXIOV, VHOST_SCSI_WEIGHT, 0, true, NULL); From patchwork Thu Nov 12 23:19:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mike Christie X-Patchwork-Id: 11902251 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 301B0138B for ; Thu, 12 Nov 2020 23:19:43 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 08292216FD for ; Thu, 12 Nov 2020 23:19:42 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=oracle.com header.i=@oracle.com header.b="R5a1zoi5" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726136AbgKLXTm (ORCPT ); Thu, 12 Nov 2020 18:19:42 -0500 Received: from userp2120.oracle.com ([156.151.31.85]:38688 "EHLO userp2120.oracle.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725929AbgKLXTl (ORCPT ); Thu, 12 Nov 2020 18:19:41 -0500 Received: from pps.filterd (userp2120.oracle.com [127.0.0.1]) by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0ACN9uH8109270; Thu, 12 Nov 2020 23:19:31 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : subject : date : message-id : in-reply-to : references; s=corp-2020-01-29; bh=o28HMjxZvdjabwbgesnkp8UHr7OdXDYF6AOdSGODfhc=; b=R5a1zoi5WqCDkNUW4IrS4unkWWDdWKT5cQUHFwbNY0nwlz8g3Pd0x58d1ZQvrkh5Uzzj ScHPDBF0JYG6y9l7msnHddMyMr2h8yiVNmeD3cQ4RQPCbk/PDmWgDl28wvan3AICsJsm vMpzc4VC0F8kqX/kfIjiHcttrT2lMyWKXZu5/S/6dLsuW4oUbQmgmk+ZAaKzuBkhJENf 792Q7NO02CpKThAimJMZtW5nuWDFRNQZlRJcsbHmBH8Ra65NnW9gBu6SqEoJz4v9trF4 4qgkCV4R1zw4zgGxwuA0EBRq1GBQaj5F/JiBynCMiJpnxTPOXYjJuwmAX5FQgueXHOAQ UQ== Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70]) by userp2120.oracle.com with ESMTP id 34p72exauj-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL); Thu, 12 Nov 2020 23:19:31 +0000 Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1]) by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0ACNB0vu027435; Thu, 12 Nov 2020 23:19:30 GMT Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75]) by aserp3020.oracle.com with ESMTP id 34p5g3tsem-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 12 Nov 2020 23:19:30 +0000 Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20]) by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0ACNJS85026839; Thu, 12 Nov 2020 23:19:29 GMT Received: from ol2.localdomain (/73.88.28.6) by default (Oracle Beehive Gateway v4.0) with ESMTP ; Thu, 12 Nov 2020 15:19:28 -0800 From: Mike Christie To: stefanha@redhat.com, qemu-devel@nongnu.org, fam@euphon.net, linux-scsi@vger.kernel.org, target-devel@vger.kernel.org, mst@redhat.com, jasowang@redhat.com, pbonzini@redhat.com, virtualization@lists.linux-foundation.org Subject: [PATCH 07/10] vhost, vhost-scsi: flush IO vqs then send TMF rsp Date: Thu, 12 Nov 2020 17:19:07 -0600 Message-Id: 
<1605223150-10888-9-git-send-email-michael.christie@oracle.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1605223150-10888-1-git-send-email-michael.christie@oracle.com> References: <1605223150-10888-1-git-send-email-michael.christie@oracle.com> X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9803 signatures=668682 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 spamscore=0 malwarescore=0 adultscore=0 phishscore=0 bulkscore=0 mlxlogscore=999 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000 definitions=main-2011120130 X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9803 signatures=668682 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 mlxlogscore=999 mlxscore=0 malwarescore=0 suspectscore=0 lowpriorityscore=0 adultscore=0 phishscore=0 priorityscore=1501 spamscore=0 impostorscore=0 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000 definitions=main-2011120130 Precedence: bulk List-ID: X-Mailing-List: target-devel@vger.kernel.org With one worker we will always send the scsi cmd responses then send the TMF rsp, because LIO will always complete the scsi cmds first which calls vhost_scsi_release_cmd to add them to the work queue. When the next patch adds multiple worker support, the worker threads could still be sending their responses when the tmf's work is run. So this patch has vhost-scsi flush the IO vqs on other worker threads before we send the tmf response. Signed-off-by: Mike Christie Reviewed-by: Stefan Hajnoczi --- drivers/vhost/scsi.c | 16 ++++++++++++++-- drivers/vhost/vhost.c | 6 ++++++ drivers/vhost/vhost.h | 1 + 3 files changed, 21 insertions(+), 2 deletions(-) diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index 2bbe1a8..612359d 100644 --- a/drivers/vhost/scsi.c +++ b/drivers/vhost/scsi.c @@ -1178,11 +1178,23 @@ static void vhost_scsi_tmf_resp_work(struct vhost_work *work) struct vhost_scsi_tmf *tmf = container_of(work, struct vhost_scsi_tmf, vwork); int resp_code; + int i; + + if (tmf->se_cmd.se_tmr_req->response == TMR_FUNCTION_COMPLETE) { + /* + * When processing a TMF, lio completes the cmds then the + * TMF, so with one worker the TMF always completes after + * cmds. For multiple worker support, we must flush the + * IO vqs that do not share a worker with the ctl vq (vqs + * 3 and up) to make sure they have completed their cmds. + */ + for (i = 1; i < tmf->vhost->dev.num_workers; i++) + vhost_vq_work_flush(&tmf->vhost->vqs[i + VHOST_SCSI_VQ_IO].vq); - if (tmf->se_cmd.se_tmr_req->response == TMR_FUNCTION_COMPLETE) resp_code = VIRTIO_SCSI_S_FUNCTION_SUCCEEDED; - else + } else { resp_code = VIRTIO_SCSI_S_FUNCTION_REJECTED; + } vhost_scsi_send_tmf_resp(tmf->vhost, &tmf->svq->vq, tmf->in_iovs, tmf->vq_desc, &tmf->resp_iov, resp_code); diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index 9eeb8c7..6a6abfc 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -278,6 +278,12 @@ void vhost_work_dev_flush(struct vhost_dev *dev) } EXPORT_SYMBOL_GPL(vhost_work_dev_flush); +void vhost_vq_work_flush(struct vhost_virtqueue *vq) +{ + vhost_work_flush_on(vq->dev, vq->worker_id); +} +EXPORT_SYMBOL_GPL(vhost_vq_work_flush); + /* Flush any work that has been scheduled. When calling this, don't hold any * locks that are also used by the callback. 
From patchwork Thu Nov 12 23:19:08 2020
X-Patchwork-Submitter: Mike Christie
X-Patchwork-Id: 11902257
From: Mike Christie <michael.christie@oracle.com>
To: stefanha@redhat.com, qemu-devel@nongnu.org, fam@euphon.net,
    linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
    mst@redhat.com, jasowang@redhat.com, pbonzini@redhat.com,
    virtualization@lists.linux-foundation.org
Subject: [PATCH 08/10] vhost: move msg_handler to new ops struct
Date: Thu, 12 Nov 2020 17:19:08 -0600
Message-Id: <1605223150-10888-10-git-send-email-michael.christie@oracle.com>
In-Reply-To: <1605223150-10888-1-git-send-email-michael.christie@oracle.com>
References: <1605223150-10888-1-git-send-email-michael.christie@oracle.com>

The next patch adds a callout so drivers can perform an action when we get a
VHOST_SET_VRING_ENABLE. To keep the callouts organized, this patch moves the
existing msg_handler callout into a new vhost_dev_ops struct.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 drivers/vhost/vdpa.c  |  7 +++++--
 drivers/vhost/vhost.c | 10 ++++------
 drivers/vhost/vhost.h | 11 ++++++-----
 3 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 2754f30..f271f42 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -802,6 +802,10 @@ static void vhost_vdpa_set_iova_range(struct vhost_vdpa *v)
 	}
 }
 
+static struct vhost_dev_ops vdpa_dev_ops = {
+	.msg_handler = vhost_vdpa_process_iotlb_msg,
+};
+
 static int vhost_vdpa_open(struct inode *inode, struct file *filep)
 {
 	struct vhost_vdpa *v;
@@ -829,8 +833,7 @@ static int vhost_vdpa_open(struct inode *inode, struct file *filep)
 		vqs[i] = &v->vqs[i];
 		vqs[i]->handle_kick = handle_vq_kick;
 	}
-	vhost_dev_init(dev, vqs, nvqs, 0, 0, 0, false,
-		       vhost_vdpa_process_iotlb_msg);
+	vhost_dev_init(dev, vqs, nvqs, 0, 0, 0, false, &vdpa_dev_ops);
 
 	dev->iotlb = vhost_iotlb_alloc(0, 0);
 	if (!dev->iotlb) {
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 6a6abfc..2f98b81 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -504,9 +504,7 @@ static size_t vhost_get_desc_size(struct vhost_virtqueue *vq,
 void vhost_dev_init(struct vhost_dev *dev,
 		    struct vhost_virtqueue **vqs, int nvqs,
 		    int iov_limit, int weight, int byte_weight,
-		    bool use_worker,
-		    int (*msg_handler)(struct vhost_dev *dev,
-				       struct vhost_iotlb_msg *msg))
+		    bool use_worker, struct vhost_dev_ops *ops)
 {
 	struct vhost_virtqueue *vq;
 	int i;
@@ -524,7 +522,7 @@ void vhost_dev_init(struct vhost_dev *dev,
 	dev->weight = weight;
 	dev->byte_weight = byte_weight;
 	dev->use_worker = use_worker;
-	dev->msg_handler = msg_handler;
+	dev->ops = ops;
 	init_waitqueue_head(&dev->wait);
 	INIT_LIST_HEAD(&dev->read_list);
 	INIT_LIST_HEAD(&dev->pending_list);
@@ -1328,8 +1326,8 @@ ssize_t vhost_chr_write_iter(struct vhost_dev *dev,
 		goto done;
 	}
 
-	if (dev->msg_handler)
-		ret = dev->msg_handler(dev, &msg);
+	if (dev->ops && dev->ops->msg_handler)
+		ret = dev->ops->msg_handler(dev, &msg);
 	else
 		ret = vhost_process_iotlb_msg(dev, &msg);
 	if (ret) {
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 0837133..a293f48 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -156,6 +156,10 @@ struct vhost_msg_node {
 	struct list_head node;
 };
 
+struct vhost_dev_ops {
+	int (*msg_handler)(struct vhost_dev *dev, struct vhost_iotlb_msg *msg);
+};
+
 struct vhost_dev {
 	struct mm_struct *mm;
 	struct mutex mutex;
@@ -175,16 +179,13 @@ struct vhost_dev {
 	int byte_weight;
 	u64 kcov_handle;
 	bool use_worker;
-	int (*msg_handler)(struct vhost_dev *dev,
-			   struct vhost_iotlb_msg *msg);
+	struct vhost_dev_ops *ops;
 };
 
 bool vhost_exceeds_weight(struct vhost_virtqueue *vq, int pkts, int total_len);
 void vhost_dev_init(struct vhost_dev *, struct vhost_virtqueue **vqs,
 		    int nvqs, int iov_limit, int weight, int byte_weight,
-		    bool use_worker,
-		    int (*msg_handler)(struct vhost_dev *dev,
-				       struct vhost_iotlb_msg *msg));
+		    bool use_worker, struct vhost_dev_ops *ops);
 long vhost_dev_set_owner(struct vhost_dev *dev);
 bool vhost_dev_has_owner(struct vhost_dev *dev);
 long vhost_dev_check_owner(struct vhost_dev *);

From patchwork Thu Nov 12 23:19:09 2020
X-Patchwork-Submitter: Mike Christie
X-Patchwork-Id: 11902271
From: Mike Christie <michael.christie@oracle.com>
To: stefanha@redhat.com, qemu-devel@nongnu.org, fam@euphon.net,
    linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
    mst@redhat.com, jasowang@redhat.com, pbonzini@redhat.com,
    virtualization@lists.linux-foundation.org
Subject: [PATCH 09/10] vhost: add VHOST_SET_VRING_ENABLE support
Date: Thu, 12 Nov 2020 17:19:09 -0600
Message-Id: <1605223150-10888-11-git-send-email-michael.christie@oracle.com>
In-Reply-To: <1605223150-10888-1-git-send-email-michael.christie@oracle.com>
References: <1605223150-10888-1-git-send-email-michael.christie@oracle.com>

This adds a new ioctl, VHOST_SET_VRING_ENABLE, that vhost drivers can
implement a callout for, so they can execute an operation when a vq is
enabled or disabled.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/vhost.c      | 25 +++++++++++++++++++++++++
 drivers/vhost/vhost.h      |  1 +
 include/uapi/linux/vhost.h |  1 +
 3 files changed, 27 insertions(+)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 2f98b81..e953031 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -1736,6 +1736,28 @@ static long vhost_vring_set_num_addr(struct vhost_dev *d,
 
 	return r;
 }
+
+static long vhost_vring_set_enable(struct vhost_dev *d,
+				   struct vhost_virtqueue *vq,
+				   void __user *argp)
+{
+	struct vhost_vring_state s;
+	int ret = 0;
+
+	if (vq->private_data)
+		return -EBUSY;
+
+	if (copy_from_user(&s, argp, sizeof s))
+		return -EFAULT;
+
+	if (s.num != 1 && s.num != 0)
+		return -EINVAL;
+
+	if (d->ops && d->ops->enable_vring)
+		ret = d->ops->enable_vring(vq, s.num);
+	return ret;
+}
+
 long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
 {
 	struct file *eventfp, *filep = NULL;
@@ -1765,6 +1787,9 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
 	mutex_lock(&vq->mutex);
 
 	switch (ioctl) {
+	case VHOST_SET_VRING_ENABLE:
+		r = vhost_vring_set_enable(d, vq, argp);
+		break;
 	case VHOST_SET_VRING_BASE:
 		/* Moving base with an active backend?
 		 * You don't want to do that. */
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index a293f48..1279c09 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -158,6 +158,7 @@ struct vhost_msg_node {
 
 struct vhost_dev_ops {
 	int (*msg_handler)(struct vhost_dev *dev, struct vhost_iotlb_msg *msg);
+	int (*enable_vring)(struct vhost_virtqueue *vq, bool enable);
 };
 
 struct vhost_dev {
diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
index c998860..3ffd133 100644
--- a/include/uapi/linux/vhost.h
+++ b/include/uapi/linux/vhost.h
@@ -70,6 +70,7 @@
 #define VHOST_VRING_BIG_ENDIAN 1
 #define VHOST_SET_VRING_ENDIAN _IOW(VHOST_VIRTIO, 0x13, struct vhost_vring_state)
 #define VHOST_GET_VRING_ENDIAN _IOW(VHOST_VIRTIO, 0x14, struct vhost_vring_state)
+#define VHOST_SET_VRING_ENABLE _IOW(VHOST_VIRTIO, 0x15, struct vhost_vring_state)
 
 /* The following ioctls use eventfd file descriptors to signal and poll
  * for events. */
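For context, a hedged sketch of how userspace might exercise the new ioctl;
none of this is from the patch itself. It assumes vhost_fd is an open vhost
character device (for example /dev/vhost-scsi) that has already done
VHOST_SET_OWNER, and that the installed uapi headers carry the
VHOST_SET_VRING_ENABLE define added above. struct vhost_vring_state and its
index/num fields are the existing uapi type; the helper name is hypothetical.

#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Hypothetical helper: enable or disable one vring on an open vhost fd.
 * Returns the ioctl result (0 on success, -1 with errno set on failure). */
static int set_vring_enable(int vhost_fd, unsigned int vq_index, bool on)
{
	struct vhost_vring_state s = {
		.index = vq_index,
		.num   = on ? 1 : 0,	/* the kernel side rejects other values */
	};

	return ioctl(vhost_fd, VHOST_SET_VRING_ENABLE, &s);
}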
From patchwork Thu Nov 12 23:19:10 2020
X-Patchwork-Submitter: Mike Christie
X-Patchwork-Id: 11902261
From: Mike Christie <michael.christie@oracle.com>
To: stefanha@redhat.com, qemu-devel@nongnu.org, fam@euphon.net,
    linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
    mst@redhat.com, jasowang@redhat.com, pbonzini@redhat.com,
    virtualization@lists.linux-foundation.org
Subject: [PATCH 10/10] vhost-scsi: create a worker per IO vq
Date: Thu, 12 Nov 2020 17:19:10 -0600
Message-Id: <1605223150-10888-12-git-send-email-michael.christie@oracle.com>
In-Reply-To: <1605223150-10888-1-git-send-email-michael.christie@oracle.com>
References: <1605223150-10888-1-git-send-email-michael.christie@oracle.com>

This patch has vhost-scsi create a worker thread per IO vq. It also adds a
modparam to enable the feature, because existing setups might not be
expecting the extra thread use; the default is the old behavior where all
vqs share a single thread.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
---
 drivers/vhost/scsi.c | 38 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 37 insertions(+), 1 deletion(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 612359d..3fb147f 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -62,6 +62,12 @@
  */
 #define VHOST_SCSI_WEIGHT 256
 
+static bool vhost_scsi_worker_per_io_vq;
+module_param_named(thread_per_io_virtqueue, vhost_scsi_worker_per_io_vq, bool,
+		   0644);
+MODULE_PARM_DESC(thread_per_io_virtqueue,
+		 "Create a worker thread per IO virtqueue. Set to true to turn on. Default is false where all virtqueues share a thread.");
+
 struct vhost_scsi_inflight {
 	/* Wait for the flush operation to finish */
 	struct completion comp;
@@ -1805,6 +1811,36 @@ static int vhost_scsi_set_features(struct vhost_scsi *vs, u64 features)
 	return 0;
 }
 
+static int vhost_scsi_enable_vring(struct vhost_virtqueue *vq, bool enable)
+{
+	struct vhost_scsi *vs = container_of(vq->dev, struct vhost_scsi, dev);
+	/*
+	 * For compat, we have the evt, ctl and first IO vq share worker0 like
+	 * is setup by default. Additional vqs get their own worker.
+	 */
+	if (vq == &vs->vqs[VHOST_SCSI_VQ_CTL].vq ||
+	    vq == &vs->vqs[VHOST_SCSI_VQ_EVT].vq ||
+	    vq == &vs->vqs[VHOST_SCSI_VQ_IO].vq)
+		return 0;
+
+	if (enable) {
+		if (!vhost_scsi_worker_per_io_vq)
+			return 0;
+		if (vq->worker_id != 0)
+			return 0;
+		return vhost_vq_worker_add(vq->dev, vq);
+	} else {
+		if (vq->worker_id == 0)
+			return 0;
+		vhost_vq_worker_remove(vq->dev, vq);
+		return 0;
+	}
+}
+
+static struct vhost_dev_ops vhost_scsi_dev_ops = {
+	.enable_vring = vhost_scsi_enable_vring,
+};
+
 static int vhost_scsi_open(struct inode *inode, struct file *f)
 {
 	struct vhost_scsi_virtqueue *svq;
@@ -1843,7 +1879,7 @@ static int vhost_scsi_open(struct inode *inode, struct file *f)
 		svq->vq.handle_kick = vhost_scsi_handle_kick;
 	}
 	vhost_dev_init(&vs->dev, vqs, VHOST_SCSI_MAX_VQ, UIO_MAXIOV,
-		       VHOST_SCSI_WEIGHT, 0, true, &vhost_scsi_dev_ops);
+		       VHOST_SCSI_WEIGHT, 0, true, &vhost_scsi_dev_ops);
 
 	vhost_scsi_init_inflight(vs, NULL);
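Tying the last two patches together, a hedged userspace-side sketch, again
not part of the series: if the module is loaded with thread_per_io_virtqueue=1,
a VMM could give each additional IO vq its own worker by enabling vqs 3 and
up through the new ioctl (the ctl, evt and first IO vq keep worker 0 per the
compat comment above). set_vring_enable() is the hypothetical helper sketched
after patch 09, and the loop bounds are assumptions about the caller's vq
layout, not kernel API.

/* Hypothetical: enable IO vqs 3..nvqs-1 so each can get its own worker. */
static int enable_extra_io_vq_workers(int vhost_fd, unsigned int nvqs)
{
	unsigned int i;

	for (i = 3; i < nvqs; i++) {
		if (set_vring_enable(vhost_fd, i, true) < 0)
			return -1;	/* e.g. a kernel without this series */
	}
	return 0;
}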