From patchwork Tue Oct 24 16:07:20 2023
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 13435061
From: Tvrtko Ursulin
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: Rob Clark, Kenny.Ho@amd.com, Daniel Vetter, Johannes Weiner,
    Stéphane Marchesin, Christian König, Zefan Li, Dave Airlie, Tejun Heo,
    "T. J. Mercier", linux-kernel@vger.kernel.org, cgroups@vger.kernel.org
Date: Tue, 24 Oct 2023 17:07:20 +0100
Message-Id: <20231024160727.282960-2-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20231024160727.282960-1-tvrtko.ursulin@linux.intel.com>
Subject: [Intel-gfx] [RFC 1/8] cgroup: Add the DRM cgroup controller

From: Tvrtko Ursulin

Skeleton controller without any functionality.

Signed-off-by: Tvrtko Ursulin
---
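As a usage sketch (illustrative, assuming CONFIG_CGROUP_DRM is enabled and
cgroup2 is mounted at /sys/fs/cgroup; the controller listing shown is an
example), the new controller would be enabled on the default hierarchy like
any other:

  # cat /sys/fs/cgroup/cgroup.controllers
  cpuset cpu io memory pids drm
  # echo +drm > /sys/fs/cgroup/cgroup.subtree_control

At this point the controller does nothing; its interface files only appear
in the later patches of this series.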
 include/linux/cgroup_drm.h    |  9 ++++++
 include/linux/cgroup_subsys.h |  4 +++
 init/Kconfig                  |  7 ++++
 kernel/cgroup/Makefile        |  1 +
 kernel/cgroup/drm.c           | 60 +++++++++++++++++++++++++++++++++++
 5 files changed, 81 insertions(+)
 create mode 100644 include/linux/cgroup_drm.h
 create mode 100644 kernel/cgroup/drm.c

diff --git a/include/linux/cgroup_drm.h b/include/linux/cgroup_drm.h
new file mode 100644
index 000000000000..8ef66a47619f
--- /dev/null
+++ b/include/linux/cgroup_drm.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#ifndef _CGROUP_DRM_H
+#define _CGROUP_DRM_H
+
+#endif /* _CGROUP_DRM_H */
diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index 445235487230..49460494a010 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -65,6 +65,10 @@ SUBSYS(rdma)
 SUBSYS(misc)
 #endif
 
+#if IS_ENABLED(CONFIG_CGROUP_DRM)
+SUBSYS(drm)
+#endif
+
 /*
  * The following subsystems are not supported on the default hierarchy.
  */
diff --git a/init/Kconfig b/init/Kconfig
index 6d35728b94b2..ed8ffa444e37 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1066,6 +1066,13 @@ config CGROUP_RDMA
 	  Attaching processes with active RDMA resources to the cgroup
 	  hierarchy is allowed even if can cross the hierarchy's limit.
 
+config CGROUP_DRM
+	bool "DRM controller"
+	help
+	  Provides the DRM subsystem controller.
+
+	  ...
+
 config CGROUP_FREEZER
 	bool "Freezer controller"
 	help
diff --git a/kernel/cgroup/Makefile b/kernel/cgroup/Makefile
index 12f8457ad1f9..849bd2917477 100644
--- a/kernel/cgroup/Makefile
+++ b/kernel/cgroup/Makefile
@@ -6,4 +6,5 @@ obj-$(CONFIG_CGROUP_PIDS) += pids.o
 obj-$(CONFIG_CGROUP_RDMA) += rdma.o
 obj-$(CONFIG_CPUSETS) += cpuset.o
 obj-$(CONFIG_CGROUP_MISC) += misc.o
+obj-$(CONFIG_CGROUP_DRM) += drm.o
 obj-$(CONFIG_CGROUP_DEBUG) += debug.o
diff --git a/kernel/cgroup/drm.c b/kernel/cgroup/drm.c
new file mode 100644
index 000000000000..02c8eaa633d3
--- /dev/null
+++ b/kernel/cgroup/drm.c
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#include <linux/cgroup.h>
+#include <linux/cgroup_drm.h>
+#include <linux/slab.h>
+
+struct drm_cgroup_state {
+	struct cgroup_subsys_state css;
+};
+
+struct drm_root_cgroup_state {
+	struct drm_cgroup_state drmcs;
+};
+
+static struct drm_root_cgroup_state root_drmcs;
+
+static inline struct drm_cgroup_state *
+css_to_drmcs(struct cgroup_subsys_state *css)
+{
+	return container_of(css, struct drm_cgroup_state, css);
+}
+
+static void drmcs_free(struct cgroup_subsys_state *css)
+{
+	struct drm_cgroup_state *drmcs = css_to_drmcs(css);
+
+	if (drmcs != &root_drmcs.drmcs)
+		kfree(drmcs);
+}
+
+static struct cgroup_subsys_state *
+drmcs_alloc(struct cgroup_subsys_state *parent_css)
+{
+	struct drm_cgroup_state *drmcs;
+
+	if (!parent_css) {
+		drmcs = &root_drmcs.drmcs;
+	} else {
+		drmcs = kzalloc(sizeof(*drmcs), GFP_KERNEL);
+		if (!drmcs)
+			return ERR_PTR(-ENOMEM);
+	}
+
+	return &drmcs->css;
+}
+
+struct cftype files[] = {
+	{ } /* Zero entry terminates. */
+};
+
+struct cgroup_subsys drm_cgrp_subsys = {
+	.css_alloc = drmcs_alloc,
+	.css_free = drmcs_free,
+	.early_init = false,
+	.legacy_cftypes = files,
+	.dfl_cftypes = files,
+};
Mercier" Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: Tvrtko Ursulin To enable propagation of settings from the cgroup DRM controller to DRM and vice-versa, we need to start tracking to which cgroups DRM clients belong. Signed-off-by: Tvrtko Ursulin --- drivers/gpu/drm/drm_file.c | 6 ++++ include/drm/drm_file.h | 6 ++++ include/linux/cgroup_drm.h | 20 ++++++++++++ kernel/cgroup/drm.c | 62 +++++++++++++++++++++++++++++++++++++- 4 files changed, 93 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c index 446458aca8e9..200abf7e79ce 100644 --- a/drivers/gpu/drm/drm_file.c +++ b/drivers/gpu/drm/drm_file.c @@ -32,6 +32,7 @@ */ #include +#include #include #include #include @@ -304,6 +305,8 @@ static void drm_close_helper(struct file *filp) list_del(&file_priv->lhead); mutex_unlock(&dev->filelist_mutex); + drmcgroup_client_close(file_priv); + drm_file_free(file_priv); } @@ -367,6 +370,8 @@ int drm_open_helper(struct file *filp, struct drm_minor *minor) list_add(&priv->lhead, &dev->filelist); mutex_unlock(&dev->filelist_mutex); + drmcgroup_client_open(priv); + #ifdef CONFIG_DRM_LEGACY #ifdef __alpha__ /* @@ -533,6 +538,7 @@ void drm_file_update_pid(struct drm_file *filp) mutex_unlock(&dev->filelist_mutex); if (pid != old) { + drmcgroup_client_migrate(filp); get_pid(pid); synchronize_rcu(); put_pid(old); diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h index e1b5b4282f75..ddf6f5450e1f 100644 --- a/include/drm/drm_file.h +++ b/include/drm/drm_file.h @@ -30,6 +30,7 @@ #ifndef _DRM_FILE_H_ #define _DRM_FILE_H_ +#include #include #include #include @@ -281,6 +282,11 @@ struct drm_file { /** @minor: &struct drm_minor for this file. */ struct drm_minor *minor; +#if IS_ENABLED(CONFIG_CGROUP_DRM) + struct cgroup_subsys_state *__css; + struct list_head clink; +#endif + /** * @object_idr: * diff --git a/include/linux/cgroup_drm.h b/include/linux/cgroup_drm.h index 8ef66a47619f..176431842d8e 100644 --- a/include/linux/cgroup_drm.h +++ b/include/linux/cgroup_drm.h @@ -6,4 +6,24 @@ #ifndef _CGROUP_DRM_H #define _CGROUP_DRM_H +#include + +#if IS_ENABLED(CONFIG_CGROUP_DRM) +void drmcgroup_client_open(struct drm_file *file_priv); +void drmcgroup_client_close(struct drm_file *file_priv); +void drmcgroup_client_migrate(struct drm_file *file_priv); +#else +static inline void drmcgroup_client_open(struct drm_file *file_priv) +{ +} + +static inline void drmcgroup_client_close(struct drm_file *file_priv) +{ +} + +static void drmcgroup_client_migrate(struct drm_file *file_priv) +{ +} +#endif + #endif /* _CGROUP_DRM_H */ diff --git a/kernel/cgroup/drm.c b/kernel/cgroup/drm.c index 02c8eaa633d3..d702be1b441f 100644 --- a/kernel/cgroup/drm.c +++ b/kernel/cgroup/drm.c @@ -5,17 +5,25 @@ #include #include +#include +#include #include struct drm_cgroup_state { struct cgroup_subsys_state css; + + struct list_head clients; }; struct drm_root_cgroup_state { struct drm_cgroup_state drmcs; }; -static struct drm_root_cgroup_state root_drmcs; +static struct drm_root_cgroup_state root_drmcs = { + .drmcs.clients = LIST_HEAD_INIT(root_drmcs.drmcs.clients), +}; + +static DEFINE_MUTEX(drmcg_mutex); static inline struct drm_cgroup_state * css_to_drmcs(struct cgroup_subsys_state *css) @@ -42,11 +50,63 @@ drmcs_alloc(struct cgroup_subsys_state *parent_css) drmcs = kzalloc(sizeof(*drmcs), GFP_KERNEL); if (!drmcs) return ERR_PTR(-ENOMEM); + + INIT_LIST_HEAD(&drmcs->clients); } return &drmcs->css; } +void drmcgroup_client_open(struct drm_file *file_priv) +{ 
+	struct drm_cgroup_state *drmcs;
+
+	drmcs = css_to_drmcs(task_get_css(current, drm_cgrp_id));
+
+	mutex_lock(&drmcg_mutex);
+	file_priv->__css = &drmcs->css; /* Keeps the reference. */
+	list_add_tail(&file_priv->clink, &drmcs->clients);
+	mutex_unlock(&drmcg_mutex);
+}
+EXPORT_SYMBOL_GPL(drmcgroup_client_open);
+
+void drmcgroup_client_close(struct drm_file *file_priv)
+{
+	struct drm_cgroup_state *drmcs;
+
+	drmcs = css_to_drmcs(file_priv->__css);
+
+	mutex_lock(&drmcg_mutex);
+	list_del(&file_priv->clink);
+	file_priv->__css = NULL;
+	mutex_unlock(&drmcg_mutex);
+
+	css_put(&drmcs->css);
+}
+EXPORT_SYMBOL_GPL(drmcgroup_client_close);
+
+void drmcgroup_client_migrate(struct drm_file *file_priv)
+{
+	struct drm_cgroup_state *src, *dst;
+	struct cgroup_subsys_state *old;
+
+	mutex_lock(&drmcg_mutex);
+
+	old = file_priv->__css;
+	src = css_to_drmcs(old);
+	dst = css_to_drmcs(task_get_css(current, drm_cgrp_id));
+
+	if (src != dst) {
+		file_priv->__css = &dst->css; /* Keeps the reference. */
+		list_move_tail(&file_priv->clink, &dst->clients);
+	}
+
+	mutex_unlock(&drmcg_mutex);
+
+	css_put(old);
+}
+EXPORT_SYMBOL_GPL(drmcgroup_client_migrate);
+
 struct cftype files[] = {
 	{ } /* Zero entry terminates. */
 };
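For reference, a sketch of the call flow this patch creates (function names
are from this series, the flow is simplified):

  /* open("/dev/dri/renderD128") */
  drm_open_helper()
    drmcgroup_client_open(priv)        /* pins the opening task's drm css and
                                          adds priv to that group's clients */

  /* close(fd) */
  drm_close_helper()
    drmcgroup_client_close(file_priv)  /* unlinks and drops the css ref */

  /* fd handed over to another process, detected on the next ioctl */
  drm_file_update_pid()
    drmcgroup_client_migrate(filp)     /* moves the client to the current
                                          task's group */

So a DRM client is always accounted to exactly one cgroup, and the
cgroup_subsys_state it points at is kept alive by the reference taken in
task_get_css().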
From patchwork Tue Oct 24 16:07:22 2023
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 13435064
From: Tvrtko Ursulin
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: Rob Clark, Kenny.Ho@amd.com, Daniel Vetter, Johannes Weiner,
    Stéphane Marchesin, Christian König, Zefan Li, Dave Airlie, Tejun Heo,
    "T. J. Mercier", linux-kernel@vger.kernel.org, cgroups@vger.kernel.org
Date: Tue, 24 Oct 2023 17:07:22 +0100
Message-Id: <20231024160727.282960-4-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20231024160727.282960-1-tvrtko.ursulin@linux.intel.com>
Subject: [Intel-gfx] [RFC 3/8] drm/cgroup: Add ability to query drm cgroup
 GPU time

From: Tvrtko Ursulin

Add a driver callback and core helper which allow querying the time spent
on GPUs for processes belonging to a group.

Signed-off-by: Tvrtko Ursulin
---
 include/drm/drm_drv.h | 28 ++++++++++++++++++++++++++++
 kernel/cgroup/drm.c   | 20 ++++++++++++++++++++
 2 files changed, 48 insertions(+)

diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index e2640dc64e08..d1cee5899cde 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -157,6 +157,24 @@ enum drm_driver_feature {
 	DRIVER_HAVE_IRQ = BIT(30),
 };
 
+/**
+ * struct drm_cgroup_ops
+ *
+ * This structure contains a number of callbacks that drivers can provide if
+ * they are able to support one or more of the functionalities implemented by
+ * the DRM cgroup controller.
+ */
+struct drm_cgroup_ops {
+	/**
+	 * @active_time_us:
+	 *
+	 * Optional callback for reporting the GPU time consumed by this client.
+	 *
+	 * Used by the DRM core when queried by the DRM cgroup controller.
+	 */
+	u64 (*active_time_us) (struct drm_file *);
+};
+
 /**
  * struct drm_driver - DRM driver structure
  *
@@ -434,6 +452,16 @@ struct drm_driver {
 	 */
 	const struct file_operations *fops;
 
+#ifdef CONFIG_CGROUP_DRM
+	/**
+	 * @cg_ops:
+	 *
+	 * Optional pointer to driver callbacks facilitating integration with
+	 * the DRM cgroup controller.
+	 */
+	const struct drm_cgroup_ops *cg_ops;
+#endif
+
 #ifdef CONFIG_DRM_LEGACY
 	/* Everything below here is for legacy driver, never use! */
 	/* private: */
diff --git a/kernel/cgroup/drm.c b/kernel/cgroup/drm.c
index d702be1b441f..acdb76635b60 100644
--- a/kernel/cgroup/drm.c
+++ b/kernel/cgroup/drm.c
@@ -9,6 +9,8 @@
 #include <linux/mutex.h>
 #include <linux/slab.h>
 
+#include <drm/drm_drv.h>
+
 struct drm_cgroup_state {
 	struct cgroup_subsys_state css;
 
@@ -31,6 +33,24 @@ css_to_drmcs(struct cgroup_subsys_state *css)
 	return container_of(css, struct drm_cgroup_state, css);
 }
 
+static u64 drmcs_get_active_time_us(struct drm_cgroup_state *drmcs)
+{
+	struct drm_file *fpriv;
+	u64 total = 0;
+
+	lockdep_assert_held(&drmcg_mutex);
+
+	list_for_each_entry(fpriv, &drmcs->clients, clink) {
+		const struct drm_cgroup_ops *cg_ops =
+			fpriv->minor->dev->driver->cg_ops;
+
+		if (cg_ops && cg_ops->active_time_us)
+			total += cg_ops->active_time_us(fpriv);
+	}
+
+	return total;
+}
+
 static void drmcs_free(struct cgroup_subsys_state *css)
 {
 	struct drm_cgroup_state *drmcs = css_to_drmcs(css);
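To illustrate the driver side, a minimal sketch of an implementation (the
my_* names and the busy_ns accounting are hypothetical; only struct
drm_cgroup_ops and the cg_ops member come from this series):

  static u64 my_active_time_us(struct drm_file *file_priv)
  {
  	struct my_client *client = file_priv->driver_priv; /* hypothetical */

  	/* Cumulative GPU time this client has consumed, in micro-seconds. */
  	return client->busy_ns / NSEC_PER_USEC;
  }

  static const struct drm_cgroup_ops my_cgroup_ops = {
  	.active_time_us = my_active_time_us,
  };

  static const struct drm_driver my_driver = {
  	/* ... other driver ops ... */
  #ifdef CONFIG_CGROUP_DRM
  	.cg_ops = &my_cgroup_ops,
  #endif
  };

The controller then simply sums these per-client values over a group's
clients list in drmcs_get_active_time_us().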
From patchwork Tue Oct 24 16:07:23 2023
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 13435063
From: Tvrtko Ursulin
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: Rob Clark, Kenny.Ho@amd.com, Daniel Vetter, Johannes Weiner,
    Stéphane Marchesin, Christian König, Zefan Li, Dave Airlie, Tejun Heo,
    "T. J. Mercier", linux-kernel@vger.kernel.org, cgroups@vger.kernel.org
Date: Tue, 24 Oct 2023 17:07:23 +0100
Message-Id: <20231024160727.282960-5-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20231024160727.282960-1-tvrtko.ursulin@linux.intel.com>
Subject: [Intel-gfx] [RFC 4/8] drm/cgroup: Add over budget signalling
 callback

From: Tvrtko Ursulin

Add a new callback via which the drm cgroup controller notifies the drm
core that a certain process is above its allotted GPU time.

Signed-off-by: Tvrtko Ursulin
---
 include/drm/drm_drv.h |  8 ++++++++
 kernel/cgroup/drm.c   | 16 ++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index d1cee5899cde..c518f03b9f0f 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -173,6 +173,14 @@ struct drm_cgroup_ops {
 	 * Used by the DRM core when queried by the DRM cgroup controller.
 	 */
 	u64 (*active_time_us) (struct drm_file *);
+
+	/**
+	 * @signal_budget:
+	 *
+	 * Optional callback used by the DRM core to forward over/under GPU time
+	 * messages sent by the DRM cgroup controller.
+	 */
+	int (*signal_budget) (struct drm_file *, u64 used, u64 budget);
 };
 
 /**
diff --git a/kernel/cgroup/drm.c b/kernel/cgroup/drm.c
index acdb76635b60..68f31797c4f0 100644
--- a/kernel/cgroup/drm.c
+++ b/kernel/cgroup/drm.c
@@ -51,6 +51,22 @@ static u64 drmcs_get_active_time_us(struct drm_cgroup_state *drmcs)
 	return total;
 }
 
+static void
+drmcs_signal_budget(struct drm_cgroup_state *drmcs, u64 usage, u64 budget)
+{
+	struct drm_file *fpriv;
+
+	lockdep_assert_held(&drmcg_mutex);
+
+	list_for_each_entry(fpriv, &drmcs->clients, clink) {
+		const struct drm_cgroup_ops *cg_ops =
+			fpriv->minor->dev->driver->cg_ops;
+
+		if (cg_ops && cg_ops->signal_budget)
+			cg_ops->signal_budget(fpriv, usage, budget);
+	}
+}
+
 static void drmcs_free(struct cgroup_subsys_state *css)
 {
 	struct drm_cgroup_state *drmcs = css_to_drmcs(css);
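Continuing the driver-side sketch from earlier (again, the my_* names are
hypothetical), a trivial handler could simply record the state and leave
enforcement for later:

  static int my_signal_budget(struct drm_file *file_priv, u64 used, u64 budget)
  {
  	struct my_client *client = file_priv->driver_priv; /* hypothetical */

  	/* Remember whether we are currently over our allotted time. */
  	client->over_budget = used > budget;

  	return 0; /* Nothing throttled yet. */
  }

  static const struct drm_cgroup_ops my_cgroup_ops = {
  	.active_time_us = my_active_time_us,
  	.signal_budget = my_signal_budget,
  };

Patch 7 of this series shows i915 doing real work in this callback.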
From patchwork Tue Oct 24 16:07:24 2023
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 13435065
From: Tvrtko Ursulin
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: Rob Clark, Kenny.Ho@amd.com, Daniel Vetter, Johannes Weiner,
    Stéphane Marchesin, Christian König, Zefan Li, Dave Airlie, Tejun Heo,
    "T. J. Mercier", linux-kernel@vger.kernel.org, cgroups@vger.kernel.org
Date: Tue, 24 Oct 2023 17:07:24 +0100
Message-Id: <20231024160727.282960-6-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20231024160727.282960-1-tvrtko.ursulin@linux.intel.com>
Subject: [Intel-gfx] [RFC 5/8] drm/cgroup: Only track clients which are
 providing drm_cgroup_ops

From: Tvrtko Ursulin

To reduce the amount of tracking going on, especially with drivers which
will not support any sort of control from the drm cgroup controller side,
let us express the functionality as opt-in and use the presence of
drm_cgroup_ops as the activation criteria.
Signed-off-by: Tvrtko Ursulin
---
 kernel/cgroup/drm.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/cgroup/drm.c b/kernel/cgroup/drm.c
index 68f31797c4f0..60e1f3861576 100644
--- a/kernel/cgroup/drm.c
+++ b/kernel/cgroup/drm.c
@@ -97,6 +97,9 @@ void drmcgroup_client_open(struct drm_file *file_priv)
 {
 	struct drm_cgroup_state *drmcs;
 
+	if (!file_priv->minor->dev->driver->cg_ops)
+		return;
+
 	drmcs = css_to_drmcs(task_get_css(current, drm_cgrp_id));
 
 	mutex_lock(&drmcg_mutex);
@@ -112,6 +115,9 @@ void drmcgroup_client_close(struct drm_file *file_priv)
 
 	drmcs = css_to_drmcs(file_priv->__css);
 
+	if (!file_priv->minor->dev->driver->cg_ops)
+		return;
+
 	mutex_lock(&drmcg_mutex);
 	list_del(&file_priv->clink);
 	file_priv->__css = NULL;
@@ -126,6 +132,9 @@ void drmcgroup_client_migrate(struct drm_file *file_priv)
 	struct drm_cgroup_state *src, *dst;
 	struct cgroup_subsys_state *old;
 
+	if (!file_priv->minor->dev->driver->cg_ops)
+		return;
+
 	mutex_lock(&drmcg_mutex);
 
 	old = file_priv->__css;
From patchwork Tue Oct 24 16:07:25 2023
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 13435066
From: Tvrtko Ursulin
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: Rob Clark, Kenny.Ho@amd.com, Daniel Vetter, Johannes Weiner,
    Stéphane Marchesin, Christian König, Michal Koutný, Zefan Li,
    Dave Airlie, Tejun Heo, "T. J. Mercier",
    linux-kernel@vger.kernel.org, cgroups@vger.kernel.org
Date: Tue, 24 Oct 2023 17:07:25 +0100
Message-Id: <20231024160727.282960-7-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20231024160727.282960-1-tvrtko.ursulin@linux.intel.com>
Subject: [Intel-gfx] [RFC 6/8] cgroup/drm: Introduce weight based drm cgroup
 control

From: Tvrtko Ursulin

Similar to CPU scheduling, implement a concept of weight in the drm cgroup
controller. It uses the same range and default as the CPU controller:
CGROUP_WEIGHT_MIN, CGROUP_WEIGHT_DFL and CGROUP_WEIGHT_MAX.

Each cgroup is then assigned a time budget proportionally, based on the
relative weights of its siblings. This time budget is in turn split between
the group's children, and so on. The budget is used to implement a soft,
best-effort signal from the drm cgroup controller to the drm core,
notifying it about groups which are over their allotted budget. No
guarantees that the limit can be enforced are provided or implied.

GPU usage is checked periodically by the controller. The scanning period
can be configured via the drmcg_period_ms kernel boot parameter and
defaults to 2s.

Signed-off-by: Tvrtko Ursulin
Cc: Michal Koutný
Cc: Tejun Heo
---
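A worked example of the intended budget split (numbers illustrative): with
the default 2s scanning period and sibling groups A and B configured with
drm.weight 100 and 300, A is budgeted 2s * 100 / (100 + 300) = 0.5s of GPU
time per period and B 1.5s. If A in turn has two children of equal weight,
A's 0.5s is split 0.25s each. Budget left unused by under-budget siblings
is then redistributed to over-budget ones in proportion to their weights,
so the over budget notifications only single out groups which could not be
satisfied even after that redistribution.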
 Documentation/admin-guide/cgroup-v2.rst |  31 ++
 kernel/cgroup/drm.c                     | 422 +++++++++++++++++++++++-
 2 files changed, 450 insertions(+), 3 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index b26b5274eaaf..841533527b7b 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -2418,6 +2418,37 @@ HugeTLB Interface Files
 	hugetlb pages of in this cgroup.  Only active in use hugetlb pages are
 	included.  The per-node values are in bytes.
 
+DRM
+---
+
+The DRM controller allows configuring weight based time control.
+
+DRM weight based time control
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The controller configures the GPU time allowed per group and periodically
+scans the member tasks to detect the over budget condition, at which point
+it invokes a callback notifying the DRM core of the condition.
+
+Because of the heterogeneous hardware and driver DRM capabilities, time
+control is implemented as a loose co-operative (bi-directional) interface
+between the controller and DRM core.
+
+The DRM core provides an API to query per process GPU utilization, and a
+second API to receive notifications from the cgroup controller when a group
+enters or exits the over budget condition.
+
+Individual DRM drivers which implement the interface are expected to act on
+this in a best-effort manner.  There are no guarantees that the time budget
+will be respected.
+
+DRM weight based time control interface files
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+  drm.weight
+	Standard cgroup weight based control [1, 10000] used to configure the
+	relative distribution of GPU time between the sibling groups.
+
 Misc
 ----
diff --git a/kernel/cgroup/drm.c b/kernel/cgroup/drm.c
index 60e1f3861576..1d1570bf3e90 100644
--- a/kernel/cgroup/drm.c
+++ b/kernel/cgroup/drm.c
@@ -6,7 +6,9 @@
 #include <linux/cgroup.h>
 #include <linux/cgroup_drm.h>
 #include <linux/list.h>
+#include <linux/moduleparam.h>
 #include <linux/mutex.h>
+#include <linux/workqueue.h>
 #include <linux/slab.h>
 
 #include <drm/drm_drv.h>
@@ -15,10 +17,28 @@ struct drm_cgroup_state {
 	struct cgroup_subsys_state css;
 
 	struct list_head clients;
+
+	unsigned int weight;
+
+	unsigned int sum_children_weights;
+
+	bool over;
+	bool over_budget;
+
+	u64 per_s_budget_us;
+	u64 prev_active_us;
+	u64 active_us;
 };
 
 struct drm_root_cgroup_state {
 	struct drm_cgroup_state drmcs;
+
+	unsigned int period_us;
+
+	unsigned int last_scan_duration_us;
+	ktime_t prev_timestamp;
+
+	struct delayed_work scan_work;
 };
 
 static struct drm_root_cgroup_state root_drmcs = {
@@ -27,6 +47,9 @@ static struct drm_root_cgroup_state root_drmcs = {
 
 static DEFINE_MUTEX(drmcg_mutex);
 
+static int drmcg_period_ms = 2000;
+module_param(drmcg_period_ms, int, 0644);
+
 static inline struct drm_cgroup_state *
 css_to_drmcs(struct cgroup_subsys_state *css)
 {
@@ -67,12 +90,272 @@ drmcs_signal_budget(struct drm_cgroup_state *drmcs, u64 usage, u64 budget)
 	}
 }
 
+static u64
+drmcs_read_weight(struct cgroup_subsys_state *css, struct cftype *cft)
+{
+	struct drm_cgroup_state *drmcs = css_to_drmcs(css);
+
+	return drmcs->weight;
+}
+
+static int
+drmcs_write_weight(struct cgroup_subsys_state *css, struct cftype *cftype,
+		   u64 weight)
+{
+	struct drm_cgroup_state *drmcs = css_to_drmcs(css);
+	int ret;
+
+	if (weight < CGROUP_WEIGHT_MIN || weight > CGROUP_WEIGHT_MAX)
+		return -ERANGE;
+
+	ret = mutex_lock_interruptible(&drmcg_mutex);
+	if (ret)
+		return ret;
+	drmcs->weight = weight;
+	mutex_unlock(&drmcg_mutex);
+
+	return 0;
+}
+
+static bool __start_scanning(unsigned int period_us)
+{
+	struct drm_cgroup_state *root = &root_drmcs.drmcs;
+	struct cgroup_subsys_state *node;
+	ktime_t start, now;
+	bool ok = false;
+
+	lockdep_assert_held(&drmcg_mutex);
+
+	start = ktime_get();
+	if (period_us > root_drmcs.last_scan_duration_us)
+		period_us -= root_drmcs.last_scan_duration_us;
+
+	rcu_read_lock();
+
+	css_for_each_descendant_post(node, &root->css) {
+		struct drm_cgroup_state *drmcs = css_to_drmcs(node);
+
+		if (!css_tryget_online(node))
+			goto out;
+
+		drmcs->active_us = 0;
+		drmcs->sum_children_weights = 0;
+
+		if (period_us && node == &root->css)
+			drmcs->per_s_budget_us =
+				DIV_ROUND_UP_ULL((u64)period_us * USEC_PER_SEC,
+						 USEC_PER_SEC);
+		else
+			drmcs->per_s_budget_us = 0;
+
+		css_put(node);
+	}
+
+	css_for_each_descendant_post(node, &root->css) {
+		struct drm_cgroup_state *drmcs = css_to_drmcs(node);
+		struct drm_cgroup_state *parent;
+		u64 active;
+
+		if (!css_tryget_online(node))
+			goto out;
+		if (!node->parent) {
+			css_put(node);
+			continue;
+		}
+		if (!css_tryget_online(node->parent)) {
+			css_put(node);
+			goto out;
+		}
+		parent = css_to_drmcs(node->parent);
+
+		active = drmcs_get_active_time_us(drmcs);
+		if (period_us && active > drmcs->prev_active_us)
+			drmcs->active_us += active - drmcs->prev_active_us;
+		drmcs->prev_active_us = active;
+
+		parent->active_us += drmcs->active_us;
+		parent->sum_children_weights += drmcs->weight;
+
+		css_put(node);
+		css_put(&parent->css);
+	}
+
+	ok = true;
+	now = ktime_get();
+	root_drmcs.last_scan_duration_us = ktime_to_us(ktime_sub(now, start));
+	root_drmcs.prev_timestamp = now;
+
+out:
+	rcu_read_unlock();
+
+	return ok;
+}
+
+static void scan_worker(struct work_struct *work)
+{
+	struct drm_cgroup_state *root = &root_drmcs.drmcs;
+	struct cgroup_subsys_state *node;
+	unsigned int period_us;
+
+	mutex_lock(&drmcg_mutex);
+
+	rcu_read_lock();
+
+	if (WARN_ON_ONCE(!css_tryget_online(&root->css))) {
+		rcu_read_unlock();
+		mutex_unlock(&drmcg_mutex);
+		return;
+	}
+
+	period_us = ktime_to_us(ktime_sub(ktime_get(),
+					  root_drmcs.prev_timestamp));
+
+	/*
+	 * 1st pass - reset working values and update hierarchical weights and
+	 * GPU utilisation.
+	 *
+	 * Always come back later if the scanner races with core cgroup
+	 * management. (Repeated pattern below.)
+	 */
+	if (!__start_scanning(period_us))
+		goto out_retry;
+
+	css_for_each_descendant_pre(node, &root->css) {
+		struct drm_cgroup_state *drmcs = css_to_drmcs(node);
+		struct cgroup_subsys_state *css;
+		u64 reused_us = 0, unused_us = 0;
+		unsigned int over_weights = 0;
+
+		if (!css_tryget_online(node))
+			goto out_retry;
+
+		/*
+		 * 2nd pass - calculate initial budgets, mark over budget
+		 * siblings and add up unused budget for the group.
+		 */
+		css_for_each_child(css, &drmcs->css) {
+			struct drm_cgroup_state *sibling = css_to_drmcs(css);
+
+			if (!css_tryget_online(css)) {
+				css_put(node);
+				goto out_retry;
+			}
+
+			sibling->per_s_budget_us =
+				DIV_ROUND_UP_ULL(drmcs->per_s_budget_us *
+						 sibling->weight,
+						 drmcs->sum_children_weights);
+
+			sibling->over = sibling->active_us >
+					sibling->per_s_budget_us;
+			if (sibling->over)
+				over_weights += sibling->weight;
+			else
+				unused_us += sibling->per_s_budget_us -
+					     sibling->active_us;
+
+			css_put(css);
+		}
+
+		/*
+		 * 3rd pass - spread unused budget according to relative weights
+		 * of over budget siblings.
+		 */
+		while (over_weights && reused_us < unused_us) {
+			unsigned int under = 0;
+
+			unused_us -= reused_us;
+			reused_us = 0;
+
+			css_for_each_child(css, &drmcs->css) {
+				struct drm_cgroup_state *sibling;
+				u64 extra_us, max_us, need_us;
+
+				if (!css_tryget_online(css)) {
+					css_put(node);
+					goto out_retry;
+				}
+
+				sibling = css_to_drmcs(css);
+				if (!sibling->over) {
+					css_put(css);
+					continue;
+				}
+
+				extra_us = DIV_ROUND_UP_ULL(unused_us *
+							    sibling->weight,
+							    over_weights);
+				max_us = sibling->per_s_budget_us + extra_us;
+				if (max_us > sibling->active_us)
+					need_us = sibling->active_us -
+						  sibling->per_s_budget_us;
+				else
+					need_us = extra_us;
+				reused_us += need_us;
+				sibling->per_s_budget_us += need_us;
+				sibling->over = sibling->active_us >
+						sibling->per_s_budget_us;
+				if (!sibling->over)
+					under += sibling->weight;
+
+				css_put(css);
+			}
+
+			over_weights -= under;
+		}
+
+		css_put(node);
+	}
+
+	/*
+	 * 4th pass - send out over/under budget notifications.
+	 */
+	css_for_each_descendant_post(node, &root->css) {
+		struct drm_cgroup_state *drmcs = css_to_drmcs(node);
+
+		if (!css_tryget_online(node))
+			goto out_retry;
+
+		if (drmcs->over || drmcs->over_budget)
+			drmcs_signal_budget(drmcs,
+					    drmcs->active_us,
+					    drmcs->per_s_budget_us);
+		drmcs->over_budget = drmcs->over;
+
+		css_put(node);
+	}
+
+out_retry:
+	rcu_read_unlock();
+	mutex_unlock(&drmcg_mutex);
+
+	period_us = READ_ONCE(root_drmcs.period_us);
+	if (period_us)
+		schedule_delayed_work(&root_drmcs.scan_work,
+				      usecs_to_jiffies(period_us));
+
+	css_put(&root->css);
+}
+
 static void drmcs_free(struct cgroup_subsys_state *css)
 {
-	struct drm_cgroup_state *drmcs = css_to_drmcs(css);
+	if (css != &root_drmcs.drmcs.css)
+		kfree(css_to_drmcs(css));
+}
 
-	if (drmcs != &root_drmcs.drmcs)
-		kfree(drmcs);
+static void record_baseline_utilisation(void)
+{
+	/*
+	 * Re-capture baseline group GPU times to avoid downward jumps.
+	 *
+	 * __start_scanning can fail if hierarchy members transition their
+	 * online status while it is traversing the tree, so retry with a
+	 * little bit of back-off to be nice. Strictly the back-off is not
+	 * required, but callers are not latency sensitive and retrying is
+	 * very unlikely during stable system operation anyway.
+	 */
+	while (!__start_scanning(0))
+		synchronize_rcu();
 }
 
 static struct cgroup_subsys_state *
@@ -82,6 +365,7 @@ drmcs_alloc(struct cgroup_subsys_state *parent_css)
 
 	if (!parent_css) {
 		drmcs = &root_drmcs.drmcs;
+		INIT_DELAYED_WORK(&root_drmcs.scan_work, scan_worker);
 	} else {
 		drmcs = kzalloc(sizeof(*drmcs), GFP_KERNEL);
 		if (!drmcs)
@@ -90,9 +374,128 @@ drmcs_alloc(struct cgroup_subsys_state *parent_css)
 		INIT_LIST_HEAD(&drmcs->clients);
 	}
 
+	drmcs->weight = CGROUP_WEIGHT_DFL;
+
 	return &drmcs->css;
 }
 
+static int drmcs_online(struct cgroup_subsys_state *css)
+{
+	if (css == &root_drmcs.drmcs.css && drmcg_period_ms) {
+		const int min_period_ms = 500;
+		int period_ms;
+
+		mutex_lock(&drmcg_mutex);
+		record_baseline_utilisation();
+		if (drmcg_period_ms < min_period_ms) {
+			period_ms = min_period_ms;
+			pr_notice("Capping DRM control group scanning to %dms\n",
+				  period_ms);
+		} else {
+			period_ms = drmcg_period_ms;
+		}
+		root_drmcs.period_us = period_ms * 1000;
+		mod_delayed_work(system_wq,
+				 &root_drmcs.scan_work,
+				 usecs_to_jiffies(root_drmcs.period_us));
+		mutex_unlock(&drmcg_mutex);
+	}
+
+	return 0;
+}
+
+static void drmcs_offline(struct cgroup_subsys_state *css)
+{
+	bool flush = false;
+
+	if (css != &root_drmcs.drmcs.css)
+		return;
+
+	mutex_lock(&drmcg_mutex);
+	if (root_drmcs.period_us) {
+		root_drmcs.period_us = 0;
+		cancel_delayed_work(&root_drmcs.scan_work);
+		flush = true;
+	}
+	mutex_unlock(&drmcg_mutex);
+
+	if (flush)
+		flush_delayed_work(&root_drmcs.scan_work);
+}
+
+static struct drm_cgroup_state *old_drmcs;
+
+static int drmcs_can_attach(struct cgroup_taskset *tset)
+{
+	struct cgroup_subsys_state *css;
+	struct task_struct *task;
+
+	task = cgroup_taskset_first(tset, &css);
+	old_drmcs = css_to_drmcs(task_css(task, drm_cgrp_id));
+
+	return 0;
+}
+
+static void drmcs_attach(struct cgroup_taskset *tset)
+{
+	struct drm_cgroup_state *old = old_drmcs;
+	struct cgroup_subsys_state *css;
+	struct drm_file *fpriv, *next;
+	struct drm_cgroup_state *new;
+	struct task_struct *task;
+	bool migrated = false;
+
+	if (!old)
+		return;
+
+	task = cgroup_taskset_first(tset, &css);
+	new = css_to_drmcs(task_css(task, drm_cgrp_id));
+	if (new == old)
+		return;
+
+	mutex_lock(&drmcg_mutex);
+
+	list_for_each_entry_safe(fpriv, next, &old->clients, clink) {
+		cgroup_taskset_for_each(task, css, tset) {
+			struct cgroup_subsys_state *old_css;
+
+			if (task->flags & PF_KTHREAD)
+				continue;
+			if (!thread_group_leader(task))
+				continue;
+
+			new = css_to_drmcs(task_css(task, drm_cgrp_id));
+			if (WARN_ON_ONCE(new == old))
+				continue;
+
+			if (rcu_access_pointer(fpriv->pid) != task_tgid(task))
+				continue;
+
+			if (WARN_ON_ONCE(fpriv->__css != &old->css))
+				continue;
+
+			old_css = fpriv->__css;
+			fpriv->__css = &new->css;
+			css_get(fpriv->__css);
+			list_move_tail(&fpriv->clink, &new->clients);
+			css_put(old_css);
+			migrated = true;
+		}
+	}
+
+	if (migrated)
+		record_baseline_utilisation();
+
+	mutex_unlock(&drmcg_mutex);
+
+	old_drmcs = NULL;
+}
+
+static void drmcs_cancel_attach(struct cgroup_taskset *tset)
+{
+	old_drmcs = NULL;
+}
+
 void drmcgroup_client_open(struct drm_file *file_priv)
 {
 	struct drm_cgroup_state *drmcs;
@@ -121,6 +524,7 @@ void drmcgroup_client_close(struct drm_file *file_priv)
 	mutex_lock(&drmcg_mutex);
 	list_del(&file_priv->clink);
 	file_priv->__css = NULL;
+	record_baseline_utilisation();
 	mutex_unlock(&drmcg_mutex);
 
 	css_put(&drmcs->css);
@@ -144,6 +548,7 @@ void drmcgroup_client_migrate(struct drm_file *file_priv)
 	if (src != dst) {
 		file_priv->__css = &dst->css; /* Keeps the reference. */
 		list_move_tail(&file_priv->clink, &dst->clients);
+		record_baseline_utilisation();
 	}
 
 	mutex_unlock(&drmcg_mutex);
@@ -153,12 +558,23 @@ void drmcgroup_client_migrate(struct drm_file *file_priv)
 EXPORT_SYMBOL_GPL(drmcgroup_client_migrate);
 
 struct cftype files[] = {
+	{
+		.name = "weight",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.read_u64 = drmcs_read_weight,
+		.write_u64 = drmcs_write_weight,
+	},
 	{ } /* Zero entry terminates. */
 };
 
 struct cgroup_subsys drm_cgrp_subsys = {
 	.css_alloc = drmcs_alloc,
 	.css_free = drmcs_free,
+	.css_online = drmcs_online,
+	.css_offline = drmcs_offline,
+	.can_attach = drmcs_can_attach,
+	.attach = drmcs_attach,
+	.cancel_attach = drmcs_cancel_attach,
 	.early_init = false,
 	.legacy_cftypes = files,
 	.dfl_cftypes = files,
 };
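A hedged usage sketch (paths assume cgroup2 mounted at /sys/fs/cgroup with
the drm controller enabled for the parent group; group names illustrative):

  # mkdir /sys/fs/cgroup/gpu-high /sys/fs/cgroup/gpu-low
  # echo 200 > /sys/fs/cgroup/gpu-high/drm.weight
  # echo 100 > /sys/fs/cgroup/gpu-low/drm.weight

With both groups contending for the GPU, gpu-high is budgeted two thirds of
the GPU time over each scanning period and gpu-low one third; an idle
sibling's share is redistributed rather than wasted.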
<20231024160727.282960-1-tvrtko.ursulin@linux.intel.com> References: <20231024160727.282960-1-tvrtko.ursulin@linux.intel.com> MIME-Version: 1.0 Subject: [Intel-gfx] [RFC 7/8] drm/i915: Implement cgroup controller over budget throttling X-BeenThere: intel-gfx@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Intel graphics driver community testing & development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Rob Clark , Kenny.Ho@amd.com, Tvrtko Ursulin , Daniel Vetter , Johannes Weiner , linux-kernel@vger.kernel.org, =?utf-8?q?St=C3=A9phane_Marchesin?= , =?utf-8?q?Chris?= =?utf-8?q?tian_K=C3=B6nig?= , Zefan Li , Dave Airlie , Tejun Heo , cgroups@vger.kernel.org, "T . J . Mercier" Errors-To: intel-gfx-bounces@lists.freedesktop.org Sender: "Intel-gfx" From: Tvrtko Ursulin When notified by the drm core we are over our allotted time budget, i915 instance will check if any of the GPU engines it is reponsible for is fully saturated. If it is, and the client in question is using that engine, it will throttle it. For now throttling is done simplistically by lowering the scheduling priority while clients are throttled. Signed-off-by: Tvrtko Ursulin --- .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 38 +++- drivers/gpu/drm/i915/i915_driver.c | 11 + drivers/gpu/drm/i915/i915_drm_client.c | 203 +++++++++++++++++- drivers/gpu/drm/i915/i915_drm_client.h | 11 + 4 files changed, 253 insertions(+), 10 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 683fd8d3151c..f87935a030a1 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -3086,6 +3086,42 @@ static void retire_requests(struct intel_timeline *tl, struct i915_request *end) break; } +#ifdef CONFIG_CGROUP_DRM +static unsigned int +__get_class(struct drm_i915_file_private *fpriv, const struct i915_request *rq) +{ + unsigned int class; + + class = rq->context->engine->uabi_class; + + if (WARN_ON_ONCE(class >= ARRAY_SIZE(fpriv->client->throttle))) + class = 0; + + return class; +} + +static void copy_priority(struct i915_sched_attr *attr, + const struct i915_execbuffer *eb, + const struct i915_request *rq) +{ + struct drm_i915_file_private *file_priv = eb->file->driver_priv; + int prio; + + *attr = eb->gem_context->sched; + + prio = file_priv->client->throttle[__get_class(file_priv, rq)]; + if (prio) + attr->priority = prio; +} +#else +static void copy_priority(struct i915_sched_attr *attr, + const struct i915_execbuffer *eb, + const struct i915_request *rq) +{ + *attr = eb->gem_context->sched; +} +#endif + static int eb_request_add(struct i915_execbuffer *eb, struct i915_request *rq, int err, bool last_parallel) { @@ -3102,7 +3138,7 @@ static int eb_request_add(struct i915_execbuffer *eb, struct i915_request *rq, /* Check that the context wasn't destroyed before submission */ if (likely(!intel_context_is_closed(eb->context))) { - attr = eb->gem_context->sched; + copy_priority(&attr, eb, rq); } else { /* Serialise with context_close via the add_to_timeline */ i915_request_set_error_once(rq, -ENOENT); diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c index 8a0e2c745e1f..450bbcfc16af 100644 --- a/drivers/gpu/drm/i915/i915_driver.c +++ b/drivers/gpu/drm/i915/i915_driver.c @@ -1794,6 +1794,13 @@ static const struct drm_ioctl_desc i915_ioctls[] = { DRM_IOCTL_DEF_DRV(I915_GEM_VM_DESTROY, i915_gem_vm_destroy_ioctl, DRM_RENDER_ALLOW), }; 
 
+#ifdef CONFIG_CGROUP_DRM
+static const struct drm_cgroup_ops i915_drm_cgroup_ops = {
+	.active_time_us = i915_drm_cgroup_get_active_time_us,
+	.signal_budget = i915_drm_cgroup_signal_budget,
+};
+#endif
+
 /*
  * Interface history:
  *
@@ -1823,6 +1830,10 @@ static const struct drm_driver i915_drm_driver = {
 	.postclose = i915_driver_postclose,
 	.show_fdinfo = PTR_IF(IS_ENABLED(CONFIG_PROC_FS), i915_drm_client_fdinfo),
 
+#ifdef CONFIG_CGROUP_DRM
+	.cg_ops = &i915_drm_cgroup_ops,
+#endif
+
 	.gem_prime_import = i915_gem_prime_import,
 	.dumb_create = i915_gem_dumb_create,
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index 2a44b3876cb5..403baf8c86ad 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -4,6 +4,7 @@
  */
 
 #include <linux/kernel.h>
+#include <linux/pid.h>
 #include <linux/slab.h>
 #include <linux/types.h>
 
@@ -40,7 +41,7 @@ void __i915_drm_client_free(struct kref *kref)
 	kfree(client);
 }
 
-#ifdef CONFIG_PROC_FS
+#if defined(CONFIG_PROC_FS) || defined(CONFIG_CGROUP_DRM)
 static const char * const uabi_class_names[] = {
 	[I915_ENGINE_CLASS_RENDER] = "render",
 	[I915_ENGINE_CLASS_COPY] = "copy",
@@ -65,20 +66,204 @@ static u64 busy_add(struct i915_gem_context *ctx, unsigned int class)
 	return total;
 }
 
+static u64 get_class_active_ns(struct i915_drm_client *client,
+			       struct drm_i915_private *i915,
+			       unsigned int class,
+			       unsigned int *capacity)
+{
+	struct i915_gem_context *ctx;
+	u64 total;
+
+	*capacity = i915->engine_uabi_class_count[class];
+	if (!*capacity)
+		return 0;
+
+	total = atomic64_read(&client->past_runtime[class]);
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(ctx, &client->ctx_list, client_link)
+		total += busy_add(ctx, class);
+	rcu_read_unlock();
+
+	return total;
+}
+
+static bool supports_stats(struct drm_i915_private *i915)
+{
+	return GRAPHICS_VER(i915) >= 8;
+}
+#endif
+
+#if defined(CONFIG_CGROUP_DRM)
+u64 i915_drm_cgroup_get_active_time_us(struct drm_file *file)
+{
+	struct drm_i915_file_private *fpriv = file->driver_priv;
+	struct i915_drm_client *client = fpriv->client;
+	struct drm_i915_private *i915 = fpriv->i915;
+	unsigned int i;
+	u64 busy = 0;
+
+	if (!supports_stats(i915))
+		return 0;
+
+	for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++) {
+		unsigned int capacity;
+		u64 b;
+
+		b = get_class_active_ns(client, i915, i, &capacity);
+		if (capacity) {
+			b = DIV_ROUND_UP_ULL(b, capacity * 1000);
+			busy += b;
+		}
+	}
+
+	return busy;
+}
+
+int i915_drm_cgroup_signal_budget(struct drm_file *file, u64 usage, u64 budget)
+{
+	struct drm_i915_file_private *fpriv = file->driver_priv;
+	u64 class_usage[I915_LAST_UABI_ENGINE_CLASS + 1];
+	u64 class_last[I915_LAST_UABI_ENGINE_CLASS + 1];
+	struct i915_drm_client *client = fpriv->client;
+	struct drm_i915_private *i915 = fpriv->i915;
+	struct intel_engine_cs *engine;
+	bool over = usage > budget;
+	struct task_struct *task;
+	struct pid *pid;
+	unsigned int i;
+	ktime_t unused;
+	int ret = 0;
+	u64 t;
+
+	if (!supports_stats(i915))
+		return -EINVAL;
+
+	if (usage == 0 && budget == 0)
+		return 0;
+
+	rcu_read_lock();
+	pid = rcu_dereference(file->pid);
+	task = pid_task(pid, PIDTYPE_TGID);
+	if (over) {
+		client->over_budget++;
+		if (!client->over_budget)
+			client->over_budget = 2;
+
+		drm_dbg(&i915->drm, "%s[%u] over budget (%llu/%llu)\n",
task->comm : "", pid_vnr(pid), + usage, budget); + } else { + client->over_budget = 0; + memset(client->class_last, 0, sizeof(client->class_last)); + memset(client->throttle, 0, sizeof(client->throttle)); + + drm_dbg(&i915->drm, "%s[%u] un-throttled; under budget\n", + task ? task->comm : "", pid_vnr(pid)); + + rcu_read_unlock(); + return 0; + } + rcu_read_unlock(); + + memset(class_usage, 0, sizeof(class_usage)); + for_each_uabi_engine(engine, i915) + class_usage[engine->uabi_class] += + ktime_to_ns(intel_engine_get_busy_time(engine, &unused)); + + memcpy(class_last, client->class_last, sizeof(class_last)); + memcpy(client->class_last, class_usage, sizeof(class_last)); + + for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++) + class_usage[i] -= class_last[i]; + + t = client->last; + client->last = ktime_get_raw_ns(); + t = client->last - t; + + if (client->over_budget == 1) + return 0; + + for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++) { + u64 client_class_usage[I915_LAST_UABI_ENGINE_CLASS + 1]; + unsigned int capacity, rel_usage; + + if (!i915->engine_uabi_class_count[i]) + continue; + + t = DIV_ROUND_UP_ULL(t, 1000); + class_usage[i] = DIV_ROUND_CLOSEST_ULL(class_usage[i], 1000); + rel_usage = DIV_ROUND_CLOSEST_ULL(class_usage[i] * 100ULL, + t * + i915->engine_uabi_class_count[i]); + if (rel_usage < 95) { + /* Physical class not oversubsribed. */ + if (client->throttle[i]) { + client->throttle[i] = 0; + + rcu_read_lock(); + pid = rcu_dereference(file->pid); + task = pid_task(pid, PIDTYPE_TGID); + drm_dbg(&i915->drm, + "%s[%u] un-throttled; physical class %s utilisation %u%%\n", + task ? task->comm : "", + pid_vnr(pid), + uabi_class_names[i], + rel_usage); + rcu_read_unlock(); + } + continue; + } + + client_class_usage[i] = + get_class_active_ns(client, i915, i, &capacity); + if (client_class_usage[i]) { + int permille; + + ret |= 1; + + permille = DIV_ROUND_CLOSEST_ULL((usage - budget) * + 1000, + budget); + client->throttle[i] = + DIV_ROUND_CLOSEST(permille * + I915_CONTEXT_MIN_USER_PRIORITY, + 1000); + if (client->throttle[i] < + I915_CONTEXT_MIN_USER_PRIORITY) + client->throttle[i] = + I915_CONTEXT_MIN_USER_PRIORITY; + + rcu_read_lock(); + pid = rcu_dereference(file->pid); + task = pid_task(pid, PIDTYPE_TGID); + drm_dbg(&i915->drm, + "%s[%u] %d‰ over budget, throttled to priority %d; physical class %s utilisation %u%%\n", + task ? 
task->comm : "", + pid_vnr(pid), + permille, + client->throttle[i], + uabi_class_names[i], + rel_usage); + rcu_read_unlock(); + } + } + + return ret; +} +#endif + +#ifdef CONFIG_PROC_FS static void show_client_class(struct drm_printer *p, struct drm_i915_private *i915, struct i915_drm_client *client, unsigned int class) { - const unsigned int capacity = i915->engine_uabi_class_count[class]; - u64 total = atomic64_read(&client->past_runtime[class]); - struct i915_gem_context *ctx; + unsigned int capacity; + u64 total; - rcu_read_lock(); - list_for_each_entry_rcu(ctx, &client->ctx_list, client_link) - total += busy_add(ctx, class); - rcu_read_unlock(); + total = get_class_active_ns(client, i915, class, &capacity); if (capacity) drm_printf(p, "drm-engine-%s:\t%llu ns\n", @@ -102,7 +287,7 @@ void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file) * ****************************************************************** */ - if (GRAPHICS_VER(i915) < 8) + if (!supports_stats(i915)) return; for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++) diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h index 67816c912bca..396dbb0780cc 100644 --- a/drivers/gpu/drm/i915/i915_drm_client.h +++ b/drivers/gpu/drm/i915/i915_drm_client.h @@ -29,6 +29,13 @@ struct i915_drm_client { * @past_runtime: Accumulation of pphwsp runtimes from closed contexts. */ atomic64_t past_runtime[I915_LAST_UABI_ENGINE_CLASS + 1]; + +#ifdef CONFIG_CGROUP_DRM + int throttle[I915_LAST_UABI_ENGINE_CLASS + 1]; + unsigned int over_budget; + u64 last; + u64 class_last[I915_LAST_UABI_ENGINE_CLASS + 1]; +#endif }; static inline struct i915_drm_client * @@ -49,4 +56,8 @@ struct i915_drm_client *i915_drm_client_alloc(void); void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file); +u64 i915_drm_cgroup_get_active_time_us(struct drm_file *file); +int i915_drm_cgroup_signal_budget(struct drm_file *file, + u64 usage, u64 budget); + #endif /* !__I915_DRM_CLIENT_H__ */ From patchwork Tue Oct 24 16:07:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tvrtko Ursulin X-Patchwork-Id: 13435068 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B7C32C25B48 for ; Tue, 24 Oct 2023 16:12:21 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 2207510E419; Tue, 24 Oct 2023 16:12:10 +0000 (UTC) Received: from mgamail.intel.com (mgamail.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id 145AB10E420; Tue, 24 Oct 2023 16:12:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1698163924; x=1729699924; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=cx4m6Re1t0koStp5FqrNlsfdb8zARqyFqR2+8EFxmrM=; b=Bpn7LqE5OAMG0zJnhs/VDCRxkCuXgcC/bvCush47c4Q3l+bU+Bqb9bo7 Czeo24Mema3Z1DzZk+C3AS7bU0gMaELXP0a7EqLfwjRSCmKFXyiilqa7c knycnn0mkSMcKjIW0t5k72Q+utu5weYqIXJ2CYnogPEwAqqY7Q3GlNUbi VjfMjt7z5vRkgPuGU6I5CuTG/MUELtcueWiiyvhHRQnCRxt0Sf1S09dCO DErioRU03k2PoUZLEzZbymQ4XAwPG2xydeYHWWvOeC5uVPj58Ds6yN5o6 
From patchwork Tue Oct 24 16:07:27 2023
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 13435068
From: Tvrtko Ursulin
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Date: Tue, 24 Oct 2023 17:07:27 +0100
Message-Id: <20231024160727.282960-9-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20231024160727.282960-1-tvrtko.ursulin@linux.intel.com>
References: <20231024160727.282960-1-tvrtko.ursulin@linux.intel.com>
Subject: [Intel-gfx] [RFC 8/8] cgroup/drm: Expose GPU utilisation
Cc: Rob Clark, Kenny.Ho@amd.com, Tvrtko Ursulin, Daniel Vetter,
 Eero Tamminen, Johannes Weiner, linux-kernel@vger.kernel.org,
 Stéphane Marchesin, Christian König, Zefan Li, Dave Airlie, Tejun Heo,
 cgroups@vger.kernel.org, "T. J. Mercier"

From: Tvrtko Ursulin

To support container use cases where external orchestrators want to make
deployment and migration decisions based on GPU load and capacity, we can
expose the GPU load as seen by the controller in a new drm.stat file. Its
usage_usec field reports the monotonically increasing, cumulative time the
cgroup has spent executing GPU loads, as reported by the DRM drivers used
by group members.

Signed-off-by: Tvrtko Ursulin
Cc: Tejun Heo
Cc: Eero Tamminen
---
 Documentation/admin-guide/cgroup-v2.rst |  8 +++++++
 kernel/cgroup/drm.c                     | 29 ++++++++++++++++++++++++-
 2 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 841533527b7b..9ac8ab65161c 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -2445,6 +2445,14 @@ respected.
 DRM weight based time control interface files
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
+  drm.stat
+	A read-only flat-keyed file.
+
+	Contains these fields:
+
+	- usage_usec - GPU time used by the group, recursively including all
+	  child groups.
+
   drm.weight
 	Standard cgroup weight based control [1, 10000] used to configure the
 	relative distribution of GPU time between the sibling groups.
diff --git a/kernel/cgroup/drm.c b/kernel/cgroup/drm.c index 1d1570bf3e90..127730990301 100644 --- a/kernel/cgroup/drm.c +++ b/kernel/cgroup/drm.c @@ -25,6 +25,8 @@ struct drm_cgroup_state { bool over; bool over_budget; + u64 total_us; + u64 per_s_budget_us; u64 prev_active_us; u64 active_us; @@ -117,6 +119,24 @@ drmcs_write_weight(struct cgroup_subsys_state *css, struct cftype *cftype, return 0; } +static int drmcs_show_stat(struct seq_file *sf, void *v) +{ + struct drm_cgroup_state *drmcs = css_to_drmcs(seq_css(sf)); + u64 val; + +#ifndef CONFIG_64BIT + mutex_lock(&drmcg_mutex); +#endif + val = drmcs->total_us; +#ifndef CONFIG_64BIT + mutex_unlock(&drmcg_mutex); +#endif + + seq_printf(sf, "usage_usec %llu\n", val); + + return 0; +} + static bool __start_scanning(unsigned int period_us) { struct drm_cgroup_state *root = &root_drmcs.drmcs; @@ -169,11 +189,14 @@ static bool __start_scanning(unsigned int period_us) parent = css_to_drmcs(node->parent); active = drmcs_get_active_time_us(drmcs); - if (period_us && active > drmcs->prev_active_us) + if (period_us && active > drmcs->prev_active_us) { drmcs->active_us += active - drmcs->prev_active_us; + drmcs->total_us += drmcs->active_us; + } drmcs->prev_active_us = active; parent->active_us += drmcs->active_us; + parent->total_us += drmcs->active_us; parent->sum_children_weights += drmcs->weight; css_put(node); @@ -564,6 +587,10 @@ struct cftype files[] = { .read_u64 = drmcs_read_weight, .write_u64 = drmcs_write_weight, }, + { + .name = "stat", + .seq_show = drmcs_show_stat, + }, { } /* Zero entry terminates. */ };
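As a closing usage sketch, an orchestrator could consume the new
flat-keyed file along these lines; the cgroup2 mount point and the group
name "containers" below are illustrative assumptions, not part of the
patch:

/*
 * Hypothetical consumer of drm.stat. Assumes cgroup2 is mounted at
 * /sys/fs/cgroup and a group named "containers" exists; adjust both
 * to the actual deployment.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *path = "/sys/fs/cgroup/containers/drm.stat";
	char key[64];
	unsigned long long val;
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}

	/* Flat-keyed file: "key value" pairs, one per line. */
	while (fscanf(f, "%63s %llu", key, &val) == 2) {
		if (!strcmp(key, "usage_usec"))
			printf("cumulative GPU time: %llu usec\n", val);
	}

	fclose(f);
	return 0;
}

Sampling usage_usec twice and dividing the delta by the wall-clock
interval yields the group's average GPU utilisation over that period,
which is the signal a deployment or migration policy would act on.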