From patchwork Wed Dec  4 19:08:36 2019
From: Vivek Goyal <vgoyal@redhat.com>
To: virtio-fs@redhat.com, qemu-devel@nongnu.org
Cc: mszeredi@redhat.com, dgilbert@redhat.com, stefanha@redhat.com
Subject: [PATCH v2 5/5] virtiofsd: Implement blocking posix locks
Date: Wed, 4 Dec 2019 14:08:36 -0500
Message-Id: <20191204190836.31324-6-vgoyal@redhat.com>
In-Reply-To: <20191204190836.31324-1-vgoyal@redhat.com>
References: <20191204190836.31324-1-vgoyal@redhat.com>
As of now we don't support fcntl(F_SETLKW), and if we see one we return
-EOPNOTSUPP. Change that by accepting these requests and returning a reply
immediately asking the caller to wait. Once the lock becomes available, send
a notification to the waiter indicating that the lock is available.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 contrib/virtiofsd/fuse_kernel.h    |  7 +++
 contrib/virtiofsd/fuse_lowlevel.c  | 23 ++++++-
 contrib/virtiofsd/fuse_lowlevel.h  | 25 ++++++++
 contrib/virtiofsd/fuse_virtio.c    | 97 ++++++++++++++++++++++++++++--
 contrib/virtiofsd/passthrough_ll.c | 49 ++++++++++++---
 5 files changed, 185 insertions(+), 16 deletions(-)

diff --git a/contrib/virtiofsd/fuse_kernel.h b/contrib/virtiofsd/fuse_kernel.h
index 2bdc8b1c88..432eb14d14 100644
--- a/contrib/virtiofsd/fuse_kernel.h
+++ b/contrib/virtiofsd/fuse_kernel.h
@@ -444,6 +444,7 @@ enum fuse_notify_code {
     FUSE_NOTIFY_STORE = 4,
     FUSE_NOTIFY_RETRIEVE = 5,
     FUSE_NOTIFY_DELETE = 6,
+    FUSE_NOTIFY_LOCK = 7,
     FUSE_NOTIFY_CODE_MAX,
 };

@@ -836,6 +837,12 @@ struct fuse_notify_retrieve_in {
     uint64_t dummy4;
 };

+struct fuse_notify_lock_out {
+    uint64_t unique;
+    int32_t  error;
+    int32_t  padding;
+};
+
 /* Device ioctls: */
 #define FUSE_DEV_IOC_CLONE _IOR(229, 0, uint32_t)
diff --git a/contrib/virtiofsd/fuse_lowlevel.c b/contrib/virtiofsd/fuse_lowlevel.c
index d4a42d9804..3d9c289510 100644
--- a/contrib/virtiofsd/fuse_lowlevel.c
+++ b/contrib/virtiofsd/fuse_lowlevel.c
@@ -183,7 +183,8 @@ int fuse_send_reply_iov_nofree(fuse_req_t req, int error, struct iovec *iov,
 {
     struct fuse_out_header out;

-    if (error <= -1000 || error > 0) {
+    /* error = 1 has been used to signal client to wait for notification */
+    if (error <= -1000 || error > 1) {
         fuse_log(FUSE_LOG_ERR, "fuse: bad error value: %i\n", error);
         error = -ERANGE;
     }
@@ -291,6 +292,12 @@ int fuse_reply_err(fuse_req_t req, int err)
     return send_reply(req, -err, NULL, 0);
 }

+int fuse_reply_wait(fuse_req_t req)
+{
+    /* TODO: This is a hack. Fix it */
+    return send_reply(req, 1, NULL, 0);
+}
+
 void fuse_reply_none(fuse_req_t req)
 {
     fuse_free_req(req);
@@ -2207,6 +2214,20 @@ static int send_notify_iov(struct fuse_session *se, int notify_code,
     return fuse_send_msg(se, NULL, iov, count);
 }

+int fuse_lowlevel_notify_lock(struct fuse_session *se, uint64_t unique,
+                              int32_t error)
+{
+    struct fuse_notify_lock_out outarg = {0};
+    struct iovec iov[2];
+
+    outarg.unique = unique;
+    outarg.error = -error;
+
+    iov[1].iov_base = &outarg;
+    iov[1].iov_len = sizeof(outarg);
+    return send_notify_iov(se, FUSE_NOTIFY_LOCK, iov, 2);
+}
+
 int fuse_lowlevel_notify_poll(struct fuse_pollhandle *ph)
 {
     if (ph != NULL) {
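For readers tracing the wire format: fuse_lowlevel_notify_lock() relies on
send_notify_iov() following the usual libfuse convention, i.e. iov[0] gets a
fuse_out_header whose unique field is 0 (marking a notification rather than
a reply) and whose error field carries the notify code. Below is a minimal
self-contained sketch of the resulting message layout; the struct
definitions are mirrored from fuse_kernel.h, and build_lock_notification()
is illustrative only, not part of the patch:

#include <stdint.h>
#include <string.h>
#include <sys/uio.h>

/* Local mirrors of the kernel ABI structs from fuse_kernel.h */
struct fuse_out_header {
    uint32_t len;
    int32_t  error;
    uint64_t unique;
};

struct fuse_notify_lock_out {
    uint64_t unique;
    int32_t  error;
    int32_t  padding;
};

#define FUSE_NOTIFY_LOCK 7

/*
 * Lay out the two-element message the way fuse_lowlevel_notify_lock()
 * and send_notify_iov() combine to do it: iov[0] is the header, iov[1]
 * the payload. unique == 0 in the header marks a notification (the new
 * virtio code asserts exactly this); the waiter's id travels inside the
 * payload instead.
 */
void build_lock_notification(struct fuse_out_header *hdr,
                             struct fuse_notify_lock_out *payload,
                             struct iovec iov[2],
                             uint64_t waiter_unique, int saved_errno)
{
    memset(hdr, 0, sizeof(*hdr));
    hdr->unique = 0;                 /* notification, not a reply */
    hdr->error = FUSE_NOTIFY_LOCK;   /* notify code rides in 'error' */
    hdr->len = sizeof(*hdr) + sizeof(*payload);

    payload->unique = waiter_unique; /* id of the parked setlkw request */
    payload->error = -saved_errno;   /* 0 on success, negated errno on failure */
    payload->padding = 0;

    iov[0].iov_base = hdr;
    iov[0].iov_len = sizeof(*hdr);
    iov[1].iov_base = payload;
    iov[1].iov_len = sizeof(*payload);
}

On the transport side, virtio_send_notify_msg() (added below in
fuse_virtio.c) recomputes hdr->len before pushing the buffers onto the
notification virtqueue.
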
diff --git a/contrib/virtiofsd/fuse_lowlevel.h b/contrib/virtiofsd/fuse_lowlevel.h
index e664d2d12d..4126b4f967 100644
--- a/contrib/virtiofsd/fuse_lowlevel.h
+++ b/contrib/virtiofsd/fuse_lowlevel.h
@@ -1251,6 +1251,22 @@ struct fuse_lowlevel_ops {
  */
 int fuse_reply_err(fuse_req_t req, int err);

+/**
+ * Ask the caller to wait for a lock.
+ *
+ * Possible requests:
+ *   setlkw
+ *
+ * If the caller sends a blocking lock request (setlkw), reply that it
+ * should wait for the lock to become available. Once the lock is
+ * available, the caller will receive a notification carrying the
+ * request's unique id and whether the lock was obtained successfully.
+ *
+ * @param req request handle
+ * @return zero for success, -errno for failure to send reply
+ */
+int fuse_reply_wait(fuse_req_t req);
+
 /**
  * Don't send reply
  *
@@ -1704,6 +1720,15 @@ int fuse_lowlevel_notify_delete(struct fuse_session *se,
 int fuse_lowlevel_notify_store(struct fuse_session *se, fuse_ino_t ino,
                                off_t offset, struct fuse_bufvec *bufv,
                                enum fuse_buf_copy_flags flags);
+/**
+ * Notify event related to a previous lock request
+ *
+ * @param se the session object
+ * @param unique the unique id of the request which requested setlkw
+ * @param error zero for success, -errno for failure
+ */
+int fuse_lowlevel_notify_lock(struct fuse_session *se, uint64_t unique,
+                              int32_t error);

 /* ----------------------------------------------------------- *
  * Utility functions                                           *
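Taken together, the two declarations above define the contract for blocking
locks: reply "wait" first, stop using req, and once the blocking fcntl()
completes, wake the client via the unique id saved from the original
request. A minimal sketch of a handler built on this pairing; my_setlk and
lock_fd are hypothetical stand-ins, the sketch assumes compilation inside
the virtiofsd tree (req->se and req->unique come from fuse_i.h), and the
real implementation is lo_setlk in passthrough_ll.c below:

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include "fuse_lowlevel.h"
#include "fuse_i.h"

/* Hypothetical handler showing the fuse_reply_wait() /
 * fuse_lowlevel_notify_lock() pairing. lock_fd stands in for the
 * per-inode OFD lock file descriptor that passthrough_ll.c keeps. */
static void my_setlk(fuse_req_t req, int lock_fd, struct flock *lock,
                     int sleep)
{
    struct fuse_session *se = req->se;
    uint64_t unique = req->unique;
    int err;

    if (sleep && lock->l_type != F_UNLCK) {
        /* Tell the client to wait. req must not be touched after this;
         * only the saved unique id identifies the waiter from here on. */
        fuse_reply_wait(req);
        /* Blocking variant: sleeps until the lock can be acquired */
        err = fcntl(lock_fd, F_OFD_SETLKW, lock) == -1 ? errno : 0;
        /* Wake the waiter identified by the original request's id */
        fuse_lowlevel_notify_lock(se, unique, err);
        return;
    }

    err = fcntl(lock_fd, F_OFD_SETLK, lock) == -1 ? errno : 0;
    fuse_reply_err(req, err);
}
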
diff --git a/contrib/virtiofsd/fuse_virtio.c b/contrib/virtiofsd/fuse_virtio.c
index 94cf9b3791..129dd329f6 100644
--- a/contrib/virtiofsd/fuse_virtio.c
+++ b/contrib/virtiofsd/fuse_virtio.c
@@ -208,6 +208,83 @@ static void copy_iov(struct iovec *src_iov, int src_count,
     }
 }

+static int virtio_send_notify_msg(struct fuse_session *se, struct iovec *iov,
+                                  int count)
+{
+    struct fv_QueueInfo *qi;
+    VuDev *dev = &se->virtio_dev->dev;
+    VuVirtq *q;
+    FVRequest *req;
+    VuVirtqElement *elem;
+    unsigned int in_num;
+    struct fuse_out_header *out = iov[0].iov_base;
+    size_t in_len, tosend_len = iov_size(iov, count);
+    struct iovec *in_sg;
+    int ret = 0;
+
+    /* Notifications have unique == 0 */
+    assert(!out->unique);
+
+    if (!se->notify_enabled)
+        return -EOPNOTSUPP;
+
+    /* If notifications are enabled, queue index 1 is notification queue */
+    qi = se->virtio_dev->qi[1];
+    q = vu_get_queue(dev, qi->qidx);
+
+    pthread_rwlock_rdlock(&qi->virtio_dev->vu_dispatch_rwlock);
+    pthread_mutex_lock(&qi->vq_lock);
+    /* Pop an element from queue */
+    req = vu_queue_pop(dev, q, sizeof(FVRequest), NULL, NULL);
+    if (!req) {
+        /* TODO: Implement some sort of ring buffer and queue notifications
+         * on that and send these later when notification queue has space
+         * available.
+         */
+        ret = -ENOSPC;
+    }
+    pthread_mutex_unlock(&qi->vq_lock);
+    pthread_rwlock_unlock(&qi->virtio_dev->vu_dispatch_rwlock);
+
+    if (ret)
+        return ret;
+
+    out->len = tosend_len;
+    elem = &req->elem;
+    in_num = elem->in_num;
+    in_sg = elem->in_sg;
+    in_len = iov_size(in_sg, in_num);
+    fuse_log(FUSE_LOG_DEBUG, "%s: elem %d: with %d in desc of length %zd\n",
+             __func__, elem->index, in_num, in_len);
+
+    if (in_len < sizeof(struct fuse_out_header)) {
+        fuse_log(FUSE_LOG_ERR, "%s: elem %d too short for out_header\n",
+                 __func__, elem->index);
+        ret = -E2BIG;
+        goto out;
+    }
+
+    if (in_len < tosend_len) {
+        fuse_log(FUSE_LOG_ERR, "%s: elem %d too small for data len %zd\n",
+                 __func__, elem->index, tosend_len);
+        ret = -E2BIG;
+        goto out;
+    }
+
+    /* First copy the header data from iov->in_sg */
+    copy_iov(iov, count, in_sg, in_num, tosend_len);
+
+    pthread_rwlock_rdlock(&qi->virtio_dev->vu_dispatch_rwlock);
+    pthread_mutex_lock(&qi->vq_lock);
+    vu_queue_push(dev, q, elem, tosend_len);
+    vu_queue_notify(dev, q);
+    pthread_mutex_unlock(&qi->vq_lock);
+    pthread_rwlock_unlock(&qi->virtio_dev->vu_dispatch_rwlock);
+out:
+    free(req);
+    return ret;
+}
+
 /*
  * Called back by ll whenever it wants to send a reply/message back
  * The 1st element of the iov starts with the fuse_out_header
@@ -216,11 +293,11 @@ static void copy_iov(struct iovec *src_iov, int src_count,
 int virtio_send_msg(struct fuse_session *se, struct fuse_chan *ch,
                     struct iovec *iov, int count)
 {
-    FVRequest *req = container_of(ch, FVRequest, ch);
-    struct fv_QueueInfo *qi = ch->qi;
+    FVRequest *req;
+    struct fv_QueueInfo *qi;
     VuDev *dev = &se->virtio_dev->dev;
-    VuVirtq *q = vu_get_queue(dev, qi->qidx);
-    VuVirtqElement *elem = &req->elem;
+    VuVirtq *q;
+    VuVirtqElement *elem;
     int ret = 0;

     assert(count >= 1);
@@ -231,8 +308,15 @@ int virtio_send_msg(struct fuse_session *se, struct fuse_chan *ch,

     size_t tosend_len = iov_size(iov, count);

-    /* unique == 0 is notification, which we don't support */
-    assert(out->unique);
+    /* unique == 0 is notification */
+    if (!out->unique)
+        return virtio_send_notify_msg(se, iov, count);
+
+    assert(ch);
+    req = container_of(ch, FVRequest, ch);
+    elem = &req->elem;
+    qi = ch->qi;
+    q = vu_get_queue(dev, qi->qidx);

     assert(!req->reply_sent);

     /* The 'in' part of the elem is to qemu */
@@ -885,6 +969,7 @@ static int fv_get_config(VuDev *dev, uint8_t *config, uint32_t len)
         struct fuse_notify_delete_out delete_out;
         struct fuse_notify_store_out store_out;
         struct fuse_notify_retrieve_out retrieve_out;
+        struct fuse_notify_lock_out lock_out;
     };

     notify_size = sizeof(struct fuse_out_header) +
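With the transport in place, the new path can be exercised from a guest:
have two processes take a conflicting write lock on the same file of a
virtiofs mount, and the second fcntl(F_SETLKW) should now block until the
first lock is dropped instead of failing with EOPNOTSUPP. A small
guest-side test follows; it assumes a guest kernel with the matching
virtiofs notification support and a mount at the hypothetical path
/mnt/virtiofs (run two copies concurrently):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct flock fl = {
        .l_type = F_WRLCK,   /* exclusive lock */
        .l_whence = SEEK_SET,
        .l_start = 0,
        .l_len = 0,          /* length 0 covers the whole file */
    };
    int fd = open("/mnt/virtiofs/lockfile", O_RDWR | O_CREAT, 0644);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* Blocking request: with this patch the daemon replies "wait" and
     * later sends FUSE_NOTIFY_LOCK when the lock is granted. */
    if (fcntl(fd, F_SETLKW, &fl) == -1) {
        perror("fcntl(F_SETLKW)");
        return 1;
    }
    printf("lock acquired; holding for 30 seconds\n");
    sleep(30);
    return 0;
}
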
diff --git a/contrib/virtiofsd/passthrough_ll.c b/contrib/virtiofsd/passthrough_ll.c
index 6aa56882e8..308fc76530 100644
--- a/contrib/virtiofsd/passthrough_ll.c
+++ b/contrib/virtiofsd/passthrough_ll.c
@@ -1926,7 +1926,10 @@ static void lo_setlk(fuse_req_t req, fuse_ino_t ino,
     struct lo_data *lo = lo_data(req);
     struct lo_inode *inode;
     struct lo_inode_plock *plock;
-    int ret, saverr = 0;
+    int ret, saverr = 0, ofd;
+    uint64_t unique;
+    struct fuse_session *se = req->se;
+    bool async_lock = false;

     fuse_log(FUSE_LOG_DEBUG, "lo_setlk(ino=%" PRIu64 ", flags=%d)"
              " cmd=%d pid=%d owner=0x%lx sleep=%d l_whence=%d"
              " l_start=0x%lx l_len=0x%lx\n", ino, fi->flags,
              lock->l_type, lock->l_pid, fi->lock_owner, sleep,
              lock->l_whence, lock->l_start, lock->l_len);

-    if (sleep) {
-        fuse_reply_err(req, EOPNOTSUPP);
-        return;
-    }
-
     inode = lo_inode(req, ino);
     if (!inode) {
         fuse_reply_err(req, EBADF);
         return;
     }
@@ -1951,21 +1949,54 @@ static void lo_setlk(fuse_req_t req, fuse_ino_t ino,
     if (!plock) {
         saverr = ret;
+        pthread_mutex_unlock(&inode->plock_mutex);
         goto out;
     }

+    /*
+     * plock is now released when inode is going away. We already have
+     * a reference on inode, so it is guaranteed that plock->fd is
+     * still around even after dropping inode->plock_mutex lock
+     */
+    ofd = plock->fd;
+    pthread_mutex_unlock(&inode->plock_mutex);
+
+    /*
+     * If this lock request can block, request caller to wait for
+     * notification. Do not access req after this. Once lock is
+     * available, send a notification instead.
+     */
+    if (sleep && lock->l_type != F_UNLCK) {
+        /*
+         * If notification queue is not enabled, can't support async
+         * locks.
+         */
+        if (!se->notify_enabled) {
+            saverr = EOPNOTSUPP;
+            goto out;
+        }
+        async_lock = true;
+        unique = req->unique;
+        fuse_reply_wait(req);
+    }
     /* TODO: Is it alright to modify flock? */
     lock->l_pid = 0;
-    ret = fcntl(plock->fd, F_OFD_SETLK, lock);
+    if (async_lock)
+        ret = fcntl(ofd, F_OFD_SETLKW, lock);
+    else
+        ret = fcntl(ofd, F_OFD_SETLK, lock);

     if (ret == -1) {
         saverr = errno;
     }

 out:
-    pthread_mutex_unlock(&inode->plock_mutex);
     lo_inode_put(lo, &inode);
-    fuse_reply_err(req, saverr);
+    if (!async_lock)
+        fuse_reply_err(req, saverr);
+    else {
+        fuse_lowlevel_notify_lock(se, unique, saverr);
+    }
 }

 static void lo_fsyncdir(fuse_req_t req, fuse_ino_t ino, int datasync,