From patchwork Fri Aug 23 15:14:22 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Umang Jain <umang.jain@ideasonboard.com>
X-Patchwork-Id: 13775409
From: Umang Jain <umang.jain@ideasonboard.com>
Date: Fri, 23 Aug 2024 20:44:22 +0530
Subject: [PATCH 2/7] staging: vchiq_core: Simplify vchiq_bulk_transfer()
Message-Id: <20240823-to_sent2-v1-2-8bc182a0adaf@ideasonboard.com>
References: <20240823-to_sent2-v1-0-8bc182a0adaf@ideasonboard.com>
In-Reply-To: <20240823-to_sent2-v1-0-8bc182a0adaf@ideasonboard.com>
To: Florian Fainelli, Broadcom internal kernel review list,
    Greg Kroah-Hartman
Cc: linux-rpi-kernel@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
    linux-staging@lists.linux.dev, linux-kernel@vger.kernel.org,
    Dan Carpenter, Laurent Pinchart, Kieran Bingham, Arnd Bergmann,
    Stefan Wahren, Dave Stevenson, Phil Elwell, Umang Jain
X-Mailer: b4 0.13.0

Factor the core logic for preparing a bulk data transfer (mutex locking,
waiting for space in the vchiq_bulk_queue, initialising the bulk transfer)
out of vchiq_bulk_transfer() into a new helper,
vchiq_bulk_xfer_queue_msg_interruptible().

This makes vchiq_bulk_transfer() shorter and easier to read, since all of the
core logic is now handled by the helper, and it will make it easier to
refactor vchiq_bulk_transfer() for the different vchiq bulk transfer modes.

No functional changes intended in this patch.

Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
---
 .../vc04_services/interface/vchiq_arm/vchiq_core.c | 217 ++++++++++++---------
 1 file changed, 121 insertions(+), 96 deletions(-)

diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
index 228a41ecf90c..c31f5d773fa5 100644
--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
@@ -189,6 +189,13 @@ static const char *const conn_state_names[] = {
 static void release_message_sync(struct vchiq_state *state,
                                  struct vchiq_header *header);
 
+static int
+vchiq_bulk_xfer_queue_msg_interruptible(struct vchiq_service *service,
+                                        void *offset, void __user *uoffset,
+                                        int size, void *userdata,
+                                        enum vchiq_bulk_mode mode,
+                                        enum vchiq_bulk_dir dir);
+
 static const char *msg_type_str(unsigned int msg_type)
 {
         switch (msg_type) {
@@ -2991,15 +2998,9 @@ int vchiq_bulk_transfer(struct vchiq_instance *instance, unsigned int handle,
                         enum vchiq_bulk_mode mode, enum vchiq_bulk_dir dir)
 {
         struct vchiq_service *service = find_service_by_handle(instance, handle);
-        struct vchiq_bulk_queue *queue;
-        struct vchiq_bulk *bulk;
-        struct vchiq_state *state;
         struct bulk_waiter *bulk_waiter = NULL;
-        const char dir_char = (dir == VCHIQ_BULK_TRANSMIT) ? 't' : 'r';
-        const int dir_msgtype = (dir == VCHIQ_BULK_TRANSMIT) ?
-                VCHIQ_MSG_BULK_TX : VCHIQ_MSG_BULK_RX;
+        struct vchiq_bulk *bulk;
         int status = -EINVAL;
-        int payload[2];
 
         if (!service)
                 goto error_exit;
@@ -3027,89 +3028,10 @@ int vchiq_bulk_transfer(struct vchiq_instance *instance, unsigned int handle,
                 goto error_exit;
         }
 
-        state = service->state;
-
-        queue = (dir == VCHIQ_BULK_TRANSMIT) ?
-                &service->bulk_tx : &service->bulk_rx;
-
-        if (mutex_lock_killable(&service->bulk_mutex)) {
-                status = -EAGAIN;
-                goto error_exit;
-        }
-
-        if (queue->local_insert == queue->remove + VCHIQ_NUM_SERVICE_BULKS) {
-                VCHIQ_SERVICE_STATS_INC(service, bulk_stalls);
-                do {
-                        mutex_unlock(&service->bulk_mutex);
-                        if (wait_for_completion_interruptible(&service->bulk_remove_event)) {
-                                status = -EAGAIN;
-                                goto error_exit;
-                        }
-                        if (mutex_lock_killable(&service->bulk_mutex)) {
-                                status = -EAGAIN;
-                                goto error_exit;
-                        }
-                } while (queue->local_insert == queue->remove +
-                        VCHIQ_NUM_SERVICE_BULKS);
-        }
-
-        bulk = &queue->bulks[BULK_INDEX(queue->local_insert)];
-
-        bulk->mode = mode;
-        bulk->dir = dir;
-        bulk->userdata = userdata;
-        bulk->size = size;
-        bulk->actual = VCHIQ_BULK_ACTUAL_ABORTED;
-
-        if (vchiq_prepare_bulk_data(instance, bulk, offset, uoffset, size, dir))
-                goto unlock_error_exit;
-
-        /*
-         * Ensure that the bulk data record is visible to the peer
-         * before proceeding.
-         */
-        wmb();
-
-        dev_dbg(state->dev, "core: %d: bt (%d->%d) %cx %x@%pad %pK\n",
-                state->id, service->localport, service->remoteport,
-                dir_char, size, &bulk->data, userdata);
-
-        /*
-         * The slot mutex must be held when the service is being closed, so
-         * claim it here to ensure that isn't happening
-         */
-        if (mutex_lock_killable(&state->slot_mutex)) {
-                status = -EAGAIN;
-                goto cancel_bulk_error_exit;
-        }
-
-        if (service->srvstate != VCHIQ_SRVSTATE_OPEN)
-                goto unlock_both_error_exit;
-
-        payload[0] = lower_32_bits(bulk->data);
-        payload[1] = bulk->size;
-        status = queue_message(state,
-                               NULL,
-                               VCHIQ_MAKE_MSG(dir_msgtype,
-                                              service->localport,
-                                              service->remoteport),
-                               memcpy_copy_callback,
-                               &payload,
-                               sizeof(payload),
-                               QMFLAGS_IS_BLOCKING |
-                               QMFLAGS_NO_MUTEX_LOCK |
-                               QMFLAGS_NO_MUTEX_UNLOCK);
+        status = vchiq_bulk_xfer_queue_msg_interruptible(service, offset, uoffset,
+                                                         size, userdata, mode, dir);
         if (status)
-                goto unlock_both_error_exit;
-
-        queue->local_insert++;
-
-        mutex_unlock(&state->slot_mutex);
-        mutex_unlock(&service->bulk_mutex);
-
-        dev_dbg(state->dev, "core: %d: bt:%d %cx li=%x ri=%x p=%x\n",
-                state->id, service->localport, dir_char, queue->local_insert,
-                queue->remote_insert, queue->process);
+                goto error_exit;
 
         vchiq_service_put(service);
 
@@ -3125,13 +3047,6 @@ int vchiq_bulk_transfer(struct vchiq_instance *instance, unsigned int handle,
 
         return status;
 
-unlock_both_error_exit:
-        mutex_unlock(&state->slot_mutex);
-cancel_bulk_error_exit:
-        vchiq_complete_bulk(service->instance, bulk);
-unlock_error_exit:
-        mutex_unlock(&service->bulk_mutex);
-
 error_exit:
         if (service)
                 vchiq_service_put(service);
@@ -3294,6 +3209,116 @@ vchiq_release_message(struct vchiq_instance *instance, unsigned int handle,
 }
 EXPORT_SYMBOL(vchiq_release_message);
 
+/*
+ * Prepares a bulk transfer to be queued. The function is interruptible and is
+ * intended to be called from user threads. It may return -EAGAIN to indicate
+ * that a signal has been received and the call should be retried after being
+ * returned to user context.
+ */
+static int
+vchiq_bulk_xfer_queue_msg_interruptible(struct vchiq_service *service,
+                                        void *offset, void __user *uoffset,
+                                        int size, void *userdata,
+                                        enum vchiq_bulk_mode mode,
+                                        enum vchiq_bulk_dir dir)
+{
+        struct vchiq_bulk_queue *queue;
+        struct vchiq_bulk *bulk;
+        struct vchiq_state *state = service->state;
+        const char dir_char = (dir == VCHIQ_BULK_TRANSMIT) ? 't' : 'r';
+        const int dir_msgtype = (dir == VCHIQ_BULK_TRANSMIT) ?
+                VCHIQ_MSG_BULK_TX : VCHIQ_MSG_BULK_RX;
+        int status = -EINVAL;
+        int payload[2];
+
+        queue = (dir == VCHIQ_BULK_TRANSMIT) ?
+                &service->bulk_tx : &service->bulk_rx;
+
+        if (mutex_lock_killable(&service->bulk_mutex))
+                return -EAGAIN;
+
+        if (queue->local_insert == queue->remove + VCHIQ_NUM_SERVICE_BULKS) {
+                VCHIQ_SERVICE_STATS_INC(service, bulk_stalls);
+                do {
+                        mutex_unlock(&service->bulk_mutex);
+                        if (wait_for_completion_interruptible(&service->bulk_remove_event))
+                                return -EAGAIN;
+                        if (mutex_lock_killable(&service->bulk_mutex))
+                                return -EAGAIN;
+                } while (queue->local_insert == queue->remove +
+                        VCHIQ_NUM_SERVICE_BULKS);
+        }
+
+        bulk = &queue->bulks[BULK_INDEX(queue->local_insert)];
+
+        bulk->mode = mode;
+        bulk->dir = dir;
+        bulk->userdata = userdata;
+        bulk->size = size;
+        bulk->actual = VCHIQ_BULK_ACTUAL_ABORTED;
+
+        if (vchiq_prepare_bulk_data(service->instance, bulk, offset, uoffset, size, dir))
+                goto unlock_error_exit;
+
+        /*
+         * Ensure that the bulk data record is visible to the peer
+         * before proceeding.
+         */
+        wmb();
+
+        dev_dbg(state->dev, "core: %d: bt (%d->%d) %cx %x@%pad %pK\n",
+                state->id, service->localport, service->remoteport,
+                dir_char, size, &bulk->data, userdata);
+
+        /*
+         * The slot mutex must be held when the service is being closed, so
+         * claim it here to ensure that isn't happening
+         */
+        if (mutex_lock_killable(&state->slot_mutex)) {
+                status = -EAGAIN;
+                goto cancel_bulk_error_exit;
+        }
+
+        if (service->srvstate != VCHIQ_SRVSTATE_OPEN)
+                goto unlock_both_error_exit;
+
+        payload[0] = lower_32_bits(bulk->data);
+        payload[1] = bulk->size;
+        status = queue_message(state,
+                               NULL,
+                               VCHIQ_MAKE_MSG(dir_msgtype,
+                                              service->localport,
+                                              service->remoteport),
+                               memcpy_copy_callback,
+                               &payload,
+                               sizeof(payload),
+                               QMFLAGS_IS_BLOCKING |
+                               QMFLAGS_NO_MUTEX_LOCK |
+                               QMFLAGS_NO_MUTEX_UNLOCK);
+        if (status)
+                goto unlock_both_error_exit;
+
+        queue->local_insert++;
+
+        mutex_unlock(&state->slot_mutex);
+        mutex_unlock(&service->bulk_mutex);
+
+        dev_dbg(state->dev, "core: %d: bt:%d %cx li=%x ri=%x p=%x\n",
+                state->id, service->localport, dir_char, queue->local_insert,
+                queue->remote_insert, queue->process);
+
+        return status;
+
+unlock_both_error_exit:
+        mutex_unlock(&state->slot_mutex);
+cancel_bulk_error_exit:
+        vchiq_complete_bulk(service->instance, bulk);
+unlock_error_exit:
+        mutex_unlock(&service->bulk_mutex);
+
+        return status;
+}
+
 static void
 release_message_sync(struct vchiq_state *state, struct vchiq_header *header)
 {
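
The central piece of the factored-out helper is the wait-for-space loop: when
the bulk queue is full, the code drops bulk_mutex before sleeping on
bulk_remove_event (so the remote side can drain entries), then re-takes the
mutex and re-checks the condition, bailing out with -EAGAIN whenever a signal
interrupts the wait or the lock. For readers less familiar with the kernel
primitives, the sketch below is a rough user-space analogue of that pattern.
Every name in it (bulk_ring, ring_wait_for_space, NUM_SLOTS) is hypothetical,
and a pthread condition variable stands in for the kernel completion
(pthread_cond_wait() performs the unlock/re-lock atomically, where the kernel
helper has to do it by hand because completions are not tied to a mutex).

/*
 * Rough user-space analogue of the wait-for-space loop in
 * vchiq_bulk_xfer_queue_msg_interruptible(); illustrative only.
 */
#include <errno.h>
#include <pthread.h>

#define NUM_SLOTS 4	/* plays the role of VCHIQ_NUM_SERVICE_BULKS */

struct bulk_ring {
	pthread_mutex_t lock;		/* plays the role of service->bulk_mutex */
	pthread_cond_t remove_event;	/* plays the role of service->bulk_remove_event */
	unsigned int local_insert;	/* producer index */
	unsigned int remove;		/* consumer index */
};

/* Return 0 with ring->lock held and a free slot available, or -EAGAIN. */
static int ring_wait_for_space(struct bulk_ring *ring)
{
	if (pthread_mutex_lock(&ring->lock))
		return -EAGAIN;

	/* Ring full: wait until the consumer signals a removal, then re-check. */
	while (ring->local_insert == ring->remove + NUM_SLOTS) {
		if (pthread_cond_wait(&ring->remove_event, &ring->lock)) {
			pthread_mutex_unlock(&ring->lock);
			return -EAGAIN;
		}
	}

	return 0;
}

int main(void)
{
	static struct bulk_ring ring = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.remove_event = PTHREAD_COND_INITIALIZER,
	};

	/* The ring starts out empty, so this returns at once with the lock held. */
	if (ring_wait_for_space(&ring) == 0) {
		ring.local_insert++;	/* claim the slot, as the kernel code does */
		pthread_mutex_unlock(&ring.lock);
	}

	return 0;
}

In the driver itself, -EAGAIN from this path is simply propagated by
vchiq_bulk_transfer() back to its caller, matching the contract spelled out in
the comment above the helper: the transfer is to be retried after returning to
user context.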