From patchwork Thu May 26 23:50:39 2022
X-Patchwork-Submitter: Dmitry Osipenko
X-Patchwork-Id: 12862918
From: Dmitry Osipenko
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
    Daniel Almeida, Gert Wollny, Gustavo Padovan, Daniel Stone, Tomeu Vizoso,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
    Steven Price, Alyssa Rosenzweig, Rob Clark, Emil Velikov, Robin Murphy,
    Qiang Yu, Sumit Semwal, Christian König, "Pan, Xinhui", Thierry Reding,
    Tomasz Figa, Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher,
    Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, Dmitry Osipenko,
    linux-tegra@vger.kernel.org, linux-media@vger.kernel.org,
    linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org,
    intel-gfx@lists.freedesktop.org, kernel@collabora.com
Subject: [PATCH v6 21/22] drm/virtio: Return proper error codes instead of -1
Date: Fri, 27 May 2022 02:50:39 +0300
Message-Id: <20220526235040.678984-22-dmitry.osipenko@collabora.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220526235040.678984-1-dmitry.osipenko@collabora.com>
References: <20220526235040.678984-1-dmitry.osipenko@collabora.com>
X-Mailing-List: linux-media@vger.kernel.org

Don't return -1 in error cases; return a proper error code instead. The
returned error codes propagate to error messages and to userspace, and it's
always good to have a meaningful error number for debugging purposes.
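
To illustrate the point (example only, not part of the patch): once a real
errno value makes it out of the driver, userspace sees it through the usual
ioctl return path and can print something more descriptive than a generic
failure. The following userspace sketch assumes a hypothetical ioctl request
and argument; it is not the actual virtio-gpu UAPI.

/*
 * Example only: shows how a propagated errno value becomes a readable
 * message in userspace.  "request" and "arg" are placeholders.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>

static int submit(int fd, unsigned long request, void *arg)
{
	if (ioctl(fd, request, arg) < 0) {
		/* If the driver's -ENODEV reaches the ioctl return path,
		 * this prints "submit failed: No such device" rather than
		 * an uninformative generic error. */
		fprintf(stderr, "submit failed: %s\n", strerror(errno));
		return -errno;
	}
	return 0;
}
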
Signed-off-by: Dmitry Osipenko
---
 drivers/gpu/drm/virtio/virtgpu_vq.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 2a04dad1ae89..40402367d593 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -320,7 +320,7 @@ static int virtio_gpu_queue_ctrl_sgs(struct virtio_gpu_device *vgdev,
 		if (fence && vbuf->objs)
 			virtio_gpu_array_unlock_resv(vbuf->objs);
 		free_vbuf(vgdev, vbuf);
-		return -1;
+		return -ENODEV;
 	}
 
 	if (vgdev->has_indirect)
@@ -384,7 +384,7 @@ static int virtio_gpu_queue_fenced_ctrl_buffer(struct virtio_gpu_device *vgdev,
 		if (!sgt) {
 			if (fence && vbuf->objs)
 				virtio_gpu_array_unlock_resv(vbuf->objs);
-			return -1;
+			return -ENOMEM;
 		}
 
 		elemcnt += sg_ents;
@@ -750,7 +750,7 @@ static int virtio_get_edid_block(void *data, u8 *buf,
 	size_t start = block * EDID_LENGTH;
 
 	if (start + len > le32_to_cpu(resp->size))
-		return -1;
+		return -EINVAL;
 	memcpy(buf, resp->edid + start, len);
 	return 0;
 }