From patchwork Wed Feb 3 18:24:34 2021
X-Patchwork-Submitter: Greg Kurz
X-Patchwork-Id: 12065123
From: Greg Kurz <groug@kaod.org>
To: qemu-devel@nongnu.org
Subject: [PATCH v2] virtiofsd: vu_dispatch locking should never fail
Date: Wed, 3 Feb 2021 19:24:34 +0100
Message-Id: <20210203182434.93870-1-groug@kaod.org>
Cc: virtio-fs@redhat.com, Greg Kurz, "Dr. David Alan Gilbert",
    Stefan Hajnoczi, Vivek Goyal

pthread_rwlock_rdlock() and pthread_rwlock_wrlock() can fail if a
deadlock condition is detected or the current thread already owns the
lock. They can also fail, like pthread_rwlock_unlock(), if the mutex
wasn't properly initialized. None of these are ever expected to happen
with fv_VuDev::vu_dispatch_rwlock.

Some users already check the return value and assert, some others
don't. Introduce rdlock/wrlock/unlock wrappers that just do the former
and use them everywhere for improved consistency and robustness.

This is just cleanup. It doesn't fix any actual issue.

Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Vivek Goyal
Reviewed-by: Stefan Hajnoczi
---
v2: - open-code helpers instead of defining them with a macro (Vivek,
      Stefan)
    - fixed rd/wr typo in fv_queue_thread() (Stefan)
    - make it clear in the changelog this is just cleanup (Stefan)

 tools/virtiofsd/fuse_virtio.c | 49 +++++++++++++++++++++++++----------
 1 file changed, 35 insertions(+), 14 deletions(-)

diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index ddcefee4272f..523ee64fb7ae 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -187,6 +187,31 @@ static void copy_iov(struct iovec *src_iov, int src_count,
     }
 }
 
+/*
+ * pthread_rwlock_rdlock() and pthread_rwlock_wrlock() can fail if
+ * a deadlock condition is detected or the current thread already
+ * owns the lock. They can also fail, like pthread_rwlock_unlock(),
+ * if the mutex wasn't properly initialized. None of these are ever
+ * expected to happen.
+ */
+static void vu_dispatch_rdlock(struct fv_VuDev *vud)
+{
+    int ret = pthread_rwlock_rdlock(&vud->vu_dispatch_rwlock);
+    assert(ret == 0);
+}
+
+static void vu_dispatch_wrlock(struct fv_VuDev *vud)
+{
+    int ret = pthread_rwlock_wrlock(&vud->vu_dispatch_rwlock);
+    assert(ret == 0);
+}
+
+static void vu_dispatch_unlock(struct fv_VuDev *vud)
+{
+    int ret = pthread_rwlock_unlock(&vud->vu_dispatch_rwlock);
+    assert(ret == 0);
+}
+
 /*
  * Called back by ll whenever it wants to send a reply/message back
  * The 1st element of the iov starts with the fuse_out_header
@@ -240,12 +265,12 @@ int virtio_send_msg(struct fuse_session *se, struct fuse_chan *ch,
 
     copy_iov(iov, count, in_sg, in_num, tosend_len);
 
-    pthread_rwlock_rdlock(&qi->virtio_dev->vu_dispatch_rwlock);
+    vu_dispatch_rdlock(qi->virtio_dev);
     pthread_mutex_lock(&qi->vq_lock);
     vu_queue_push(dev, q, elem, tosend_len);
     vu_queue_notify(dev, q);
     pthread_mutex_unlock(&qi->vq_lock);
-    pthread_rwlock_unlock(&qi->virtio_dev->vu_dispatch_rwlock);
+    vu_dispatch_unlock(qi->virtio_dev);
 
     req->reply_sent = true;
 
@@ -403,12 +428,12 @@ int virtio_send_data_iov(struct fuse_session *se, struct fuse_chan *ch,
 
     ret = 0;
 
-    pthread_rwlock_rdlock(&qi->virtio_dev->vu_dispatch_rwlock);
+    vu_dispatch_rdlock(qi->virtio_dev);
     pthread_mutex_lock(&qi->vq_lock);
     vu_queue_push(dev, q, elem, tosend_len);
     vu_queue_notify(dev, q);
     pthread_mutex_unlock(&qi->vq_lock);
-    pthread_rwlock_unlock(&qi->virtio_dev->vu_dispatch_rwlock);
+    vu_dispatch_unlock(qi->virtio_dev);
 
 err:
     if (ret == 0) {
@@ -558,12 +583,12 @@ out:
         fuse_log(FUSE_LOG_DEBUG, "%s: elem %d no reply sent\n", __func__,
                  elem->index);
 
-        pthread_rwlock_rdlock(&qi->virtio_dev->vu_dispatch_rwlock);
+        vu_dispatch_rdlock(qi->virtio_dev);
         pthread_mutex_lock(&qi->vq_lock);
         vu_queue_push(dev, q, elem, 0);
         vu_queue_notify(dev, q);
         pthread_mutex_unlock(&qi->vq_lock);
-        pthread_rwlock_unlock(&qi->virtio_dev->vu_dispatch_rwlock);
+        vu_dispatch_unlock(qi->virtio_dev);
     }
 
     pthread_mutex_destroy(&req->ch.lock);
@@ -596,7 +621,6 @@ static void *fv_queue_thread(void *opaque)
                  qi->qidx, qi->kick_fd);
     while (1) {
         struct pollfd pf[2];
-        int ret;
 
         pf[0].fd = qi->kick_fd;
         pf[0].events = POLLIN;
@@ -645,8 +669,7 @@ static void *fv_queue_thread(void *opaque)
             break;
         }
         /* Mutual exclusion with virtio_loop() */
-        ret = pthread_rwlock_rdlock(&qi->virtio_dev->vu_dispatch_rwlock);
-        assert(ret == 0); /* there is no possible error case */
+        vu_dispatch_rdlock(qi->virtio_dev);
         pthread_mutex_lock(&qi->vq_lock);
         /* out is from guest, in is too guest */
         unsigned int in_bytes, out_bytes;
@@ -672,7 +695,7 @@ static void *fv_queue_thread(void *opaque)
 
         }
         pthread_mutex_unlock(&qi->vq_lock);
-        pthread_rwlock_unlock(&qi->virtio_dev->vu_dispatch_rwlock);
+        vu_dispatch_unlock(qi->virtio_dev);
 
         /* Process all the requests. */
         if (!se->thread_pool_size && req_list != NULL) {
@@ -799,7 +822,6 @@ int virtio_loop(struct fuse_session *se)
     while (!fuse_session_exited(se)) {
         struct pollfd pf[1];
         bool ok;
-        int ret;
         pf[0].fd = se->vu_socketfd;
         pf[0].events = POLLIN;
         pf[0].revents = 0;
@@ -825,12 +847,11 @@ int virtio_loop(struct fuse_session *se)
 
         assert(pf[0].revents & POLLIN);
         fuse_log(FUSE_LOG_DEBUG, "%s: Got VU event\n", __func__);
         /* Mutual exclusion with fv_queue_thread() */
-        ret = pthread_rwlock_wrlock(&se->virtio_dev->vu_dispatch_rwlock);
-        assert(ret == 0); /* there is no possible error case */
+        vu_dispatch_wrlock(se->virtio_dev);
 
         ok = vu_dispatch(&se->virtio_dev->dev);
-        pthread_rwlock_unlock(&se->virtio_dev->vu_dispatch_rwlock);
+        vu_dispatch_unlock(se->virtio_dev);
 
         if (!ok) {
             fuse_log(FUSE_LOG_ERR, "%s: vu_dispatch failed\n", __func__);
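
To see the pattern in isolation, here is a minimal standalone sketch of
the same assert-on-failure wrappers. It is not part of the patch: the
demo_* names, the file-scope lock and the main() driver are invented for
illustration, and only the pthread_rwlock_*() calls and the
assert(ret == 0) convention match the code above.

#include <assert.h>
#include <pthread.h>
#include <stdio.h>

/* Illustrative stand-in for fv_VuDev::vu_dispatch_rwlock */
static pthread_rwlock_t demo_rwlock = PTHREAD_RWLOCK_INITIALIZER;

static void demo_rdlock(void)
{
    /* EDEADLK or EINVAL here would mean a program bug, so abort */
    int ret = pthread_rwlock_rdlock(&demo_rwlock);
    assert(ret == 0);
}

static void demo_wrlock(void)
{
    int ret = pthread_rwlock_wrlock(&demo_rwlock);
    assert(ret == 0);
}

static void demo_unlock(void)
{
    int ret = pthread_rwlock_unlock(&demo_rwlock);
    assert(ret == 0);
}

int main(void)
{
    demo_rdlock();              /* shared side, like fv_queue_thread() */
    printf("read-side critical section\n");
    demo_unlock();

    demo_wrlock();              /* exclusive side, like virtio_loop() */
    printf("write-side critical section\n");
    demo_unlock();
    return 0;
}

Build with "gcc -pthread demo.c -o demo". Like the patch, the sketch
relies on assert() being compiled in; with NDEBUG defined the checks
would disappear and locking errors would be silently ignored, which is
exactly what the wrappers are meant to rule out.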