From patchwork Thu Jan 28 10:21:49 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Wang, Zhi A"
X-Patchwork-Id: 8148681
From: Zhi Wang
To: intel-gfx@lists.freedesktop.org, igvt-g@lists.01.org
Cc: daniel.vetter@ffwll.ch, david.j.cowperthwaite@intel.com
Date: Thu, 28 Jan 2016 18:21:49 +0800
Message-Id: <1453976511-27322-28-git-send-email-zhi.a.wang@intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1453976511-27322-1-git-send-email-zhi.a.wang@intel.com>
References: <1453976511-27322-1-git-send-email-zhi.a.wang@intel.com>
Subject:
[Intel-gfx] [RFC 27/29] drm/i915: gvt: vGPU schedule policy framework

This patch introduces a vGPU schedule policy framework, with a timer-based
schedule policy module for now.

Signed-off-by: Zhi Wang
---
 drivers/gpu/drm/i915/gvt/Makefile       |   3 +-
 drivers/gpu/drm/i915/gvt/gvt.h          |   2 +
 drivers/gpu/drm/i915/gvt/handlers.c     |  16 ++
 drivers/gpu/drm/i915/gvt/instance.c     |  16 ++
 drivers/gpu/drm/i915/gvt/sched_policy.c | 295 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gvt/sched_policy.h |  48 ++++++
 drivers/gpu/drm/i915/gvt/scheduler.c    |   5 +
 drivers/gpu/drm/i915/gvt/scheduler.h    |   3 +
 8 files changed, 387 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/i915/gvt/sched_policy.c
 create mode 100644 drivers/gpu/drm/i915/gvt/sched_policy.h

diff --git a/drivers/gpu/drm/i915/gvt/Makefile b/drivers/gpu/drm/i915/gvt/Makefile
index 46f71db..dcaf715 100644
--- a/drivers/gpu/drm/i915/gvt/Makefile
+++ b/drivers/gpu/drm/i915/gvt/Makefile
@@ -1,6 +1,7 @@
 GVT_SOURCE := gvt.o params.o aperture_gm.o mmio.o handlers.o instance.o \
 	trace_points.o interrupt.o gtt.o cfg_space.o opregion.o utility.o \
-	fb_decoder.o display.o edid.o control.o execlist.o scheduler.o
+	fb_decoder.o display.o edid.o control.o execlist.o scheduler.o \
+	sched_policy.o

 ccflags-y += -I$(src) -I$(src)/..
	-Wall -Werror -Wno-unused-function

 i915_gvt-y := $(GVT_SOURCE)
diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 83f1017..5788bb7 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -44,6 +44,7 @@
 #include "display.h"
 #include "execlist.h"
 #include "scheduler.h"
+#include "sched_policy.h"

 #define GVT_MAX_VGPU 8

@@ -160,6 +161,7 @@ struct vgt_device {
 	unsigned long last_reset_time;
 	atomic_t crashing;
 	bool warn_untrack;
+	void *sched_data;
 };

 struct gvt_gm_allocator {
diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index 356cfc4..a04d0cb 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -259,6 +259,22 @@ static bool dpy_reg_mmio_read_3(struct vgt_device *vgt, unsigned int offset,
 static bool ring_mode_write(struct vgt_device *vgt, unsigned int off,
 	void *p_data, unsigned int bytes)
 {
+	u32 data = *(u32 *)p_data;
+	int ring_id = gvt_render_mmio_to_ring_id(off);
+	bool enable_execlist;
+
+	if (data & _MASKED_BIT_ENABLE(GFX_RUN_LIST_ENABLE)
+			|| data & _MASKED_BIT_DISABLE(GFX_RUN_LIST_ENABLE)) {
+		enable_execlist = !!(data & GFX_RUN_LIST_ENABLE);
+
+		gvt_info("EXECLIST %s on ring %d.",
+			(enable_execlist ?
			"enabling" : "disabling"),
+			ring_id);
+
+		if (enable_execlist)
+			gvt_start_schedule(vgt);
+	}
+
 	return true;
 }

diff --git a/drivers/gpu/drm/i915/gvt/instance.c b/drivers/gpu/drm/i915/gvt/instance.c
index 959c8ee..0b7eb8f 100644
--- a/drivers/gpu/drm/i915/gvt/instance.c
+++ b/drivers/gpu/drm/i915/gvt/instance.c
@@ -193,9 +193,22 @@ void gvt_destroy_instance(struct vgt_device *vgt)
 	struct pgt_device *pdev = vgt->pdev;

 	mutex_lock(&pdev->lock);
+
+	gvt_stop_schedule(vgt);
+
+	mutex_unlock(&pdev->lock);
+
+	if (atomic_read(&vgt->running_workload_num))
+		gvt_wait_instance_idle(vgt);
+
+	mutex_lock(&pdev->lock);
+
+	gvt_clean_instance_sched_policy(vgt);
+
 	gvt_set_instance_offline(vgt);
 	if (vgt->id != -1)
 		idr_remove(&pdev->instance_idr, vgt->id);
+
 	mutex_unlock(&pdev->lock);

 	hypervisor_hvm_exit(vgt);
@@ -234,6 +247,9 @@ struct vgt_device *gvt_create_instance(struct pgt_device *pdev,
 	vgt->id = id;
 	vgt->pdev = pdev;

+	if (!gvt_init_instance_sched_policy(vgt))
+		goto err;
+
 	vgt->warn_untrack = true;

 	if (!create_virtual_device_state(vgt, info))
diff --git a/drivers/gpu/drm/i915/gvt/sched_policy.c b/drivers/gpu/drm/i915/gvt/sched_policy.c
new file mode 100644
index 0000000..14f4301
--- /dev/null
+++ b/drivers/gpu/drm/i915/gvt/sched_policy.c
@@ -0,0 +1,295 @@
+/*
+ * Copyright(c) 2011-2016 Intel Corporation. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include "gvt.h"
+
+static bool instance_has_pending_workload(struct vgt_device *vgt)
+{
+	struct gvt_virtual_execlist_info *info;
+	int i;
+
+	for (i = 0; i < I915_NUM_RINGS; i++) {
+		info = &vgt->virtual_execlist_info[i];
+		if (!list_empty(workload_q_head(vgt, i)))
+			return true;
+	}
+
+	return false;
+}
+
+static void try_to_schedule_next_instance(struct pgt_device *pdev)
+{
+	struct gvt_workload_scheduler *scheduler =
+		&pdev->workload_scheduler;
+	int i;
+
+	/* no target to schedule */
+	if (!scheduler->next_instance)
+		return;
+
+	gvt_dbg_sched("try to schedule next instance %d",
+		scheduler->next_instance->id);
+
+	/*
+	 * after the flag is set, workload dispatch thread will
+	 * stop dispatching workload for current instance
+	 */
+	scheduler->need_reschedule = true;
+
+	/* still have uncompleted workload?
	 */
+	for (i = 0; i < I915_NUM_RINGS; i++) {
+		if (scheduler->current_workload[i]) {
+			gvt_dbg_sched("still have running workload");
+			return;
+		}
+	}
+
+	gvt_dbg_sched("switch to next instance %d",
+		scheduler->next_instance->id);
+
+	/* switch current instance */
+	scheduler->current_instance = scheduler->next_instance;
+	scheduler->next_instance = NULL;
+
+	/* wake up workload dispatch thread */
+	for (i = 0; i < I915_NUM_RINGS; i++)
+		wake_up(&scheduler->waitq[i]);
+
+	scheduler->need_reschedule = false;
+}
+
+struct tbs_instance_data {
+	struct list_head list;
+	struct vgt_device *vgt;
+	/* put some per-instance sched stats here */
+};
+
+struct tbs_sched_data {
+	struct pgt_device *pdev;
+	struct delayed_work work;
+	unsigned long period;
+	atomic_t runq_instance_num;
+	struct list_head runq_head;
+};
+
+/* 16 ms time slice, in jiffies */
+#define GVT_DEFAULT_TIME_SLICE	(16 * HZ / 1000)
+
+static void tbs_sched_func(struct work_struct *work)
+{
+	struct tbs_sched_data *sched_data = container_of(work,
+		struct tbs_sched_data, work.work);
+	struct tbs_instance_data *instance_data;
+
+	struct pgt_device *pdev = sched_data->pdev;
+	struct gvt_workload_scheduler *scheduler =
+		&pdev->workload_scheduler;
+
+	struct vgt_device *vgt = NULL;
+	struct list_head *pos, *head;
+
+	mutex_lock(&pdev->lock);
+
+	/* no instance or has already had a target */
+	if (list_empty(&sched_data->runq_head) || scheduler->next_instance)
+		goto out;
+
+	if (scheduler->current_instance) {
+		instance_data = scheduler->current_instance->sched_data;
+		head = &instance_data->list;
+	} else {
+		gvt_dbg_sched("no current instance, search from queue head");
+		head = &sched_data->runq_head;
+	}
+
+	/* search an instance with pending workload */
+	list_for_each(pos, head) {
+		if (pos == &sched_data->runq_head)
+			continue;
+
+		instance_data = container_of(pos, struct tbs_instance_data, list);
+		if (!instance_has_pending_workload(instance_data->vgt))
+			continue;
+
+		vgt = instance_data->vgt;
+		break;
+	}
+
+	if (vgt) {
		scheduler->next_instance = vgt;
+		gvt_dbg_sched("pick next instance %d", vgt->id);
+	}
+out:
+	if (scheduler->next_instance) {
+		gvt_dbg_sched("try to schedule next instance %d",
+			scheduler->next_instance->id);
+		try_to_schedule_next_instance(pdev);
+	}
+
+	/*
+	 * still have instances on the runq,
+	 * or the last schedule hasn't finished due to a running workload
+	 */
+	if (atomic_read(&sched_data->runq_instance_num) || scheduler->next_instance)
+		schedule_delayed_work(&sched_data->work, sched_data->period);
+
+	mutex_unlock(&pdev->lock);
+}
+
+static bool tbs_sched_init(struct pgt_device *pdev)
+{
+	struct gvt_workload_scheduler *scheduler =
+		&pdev->workload_scheduler;
+
+	struct tbs_sched_data *data;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data) {
+		gvt_err("fail to allocate sched data");
+		return false;
+	}
+
+	INIT_LIST_HEAD(&data->runq_head);
+	INIT_DELAYED_WORK(&data->work, tbs_sched_func);
+	data->period = GVT_DEFAULT_TIME_SLICE;
+	data->pdev = pdev;
+
+	atomic_set(&data->runq_instance_num, 0);
+	scheduler->sched_data = data;
+
+	return true;
+}
+
+static void tbs_sched_clean(struct pgt_device *pdev)
+{
+	struct gvt_workload_scheduler *scheduler =
+		&pdev->workload_scheduler;
+	struct tbs_sched_data *data = scheduler->sched_data;
+
+	cancel_delayed_work(&data->work);
+	kfree(data);
+	scheduler->sched_data = NULL;
+}
+
+static bool tbs_sched_instance_init(struct vgt_device *vgt)
+{
+	struct tbs_instance_data *data;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data) {
+		gvt_err("fail to allocate memory");
+		return false;
+	}
+
+	data->vgt = vgt;
+	INIT_LIST_HEAD(&data->list);
+
+	vgt->sched_data = data;
+
+	return true;
+}
+
+static void tbs_sched_instance_clean(struct vgt_device *vgt)
+{
+	kfree(vgt->sched_data);
+	vgt->sched_data = NULL;
+}
+
+static void tbs_sched_start_schedule(struct vgt_device *vgt)
+{
+	struct tbs_sched_data *sched_data = vgt->pdev->workload_scheduler.sched_data;
+	struct tbs_instance_data *instance_data = vgt->sched_data;
+
+	if (!list_empty(&instance_data->list))
+		return;
+
+	list_add_tail(&instance_data->list, &sched_data->runq_head);
+	atomic_inc(&sched_data->runq_instance_num);
+
+	schedule_delayed_work(&sched_data->work, sched_data->period);
+}
+
+static void tbs_sched_stop_schedule(struct vgt_device *vgt)
+{
+	struct tbs_sched_data *sched_data = vgt->pdev->workload_scheduler.sched_data;
+	struct tbs_instance_data *instance_data = vgt->sched_data;
+
+	atomic_dec(&sched_data->runq_instance_num);
+	list_del_init(&instance_data->list);
+}
+
+struct gvt_schedule_policy_ops tbs_schedule_ops = {
+	.init = tbs_sched_init,
+	.clean = tbs_sched_clean,
+	.instance_init = tbs_sched_instance_init,
+	.instance_clean = tbs_sched_instance_clean,
+	.start_schedule = tbs_sched_start_schedule,
+	.stop_schedule = tbs_sched_stop_schedule,
+};
+
+bool gvt_init_sched_policy(struct pgt_device *pdev)
+{
+	pdev->workload_scheduler.sched_ops = &tbs_schedule_ops;
+
+	return pdev->workload_scheduler.sched_ops->init(pdev);
+}
+
+void gvt_clean_sched_policy(struct pgt_device *pdev)
+{
+	pdev->workload_scheduler.sched_ops->clean(pdev);
+}
+
+bool gvt_init_instance_sched_policy(struct vgt_device *vgt)
+{
+	return vgt->pdev->workload_scheduler.sched_ops->instance_init(vgt);
+}
+
+void gvt_clean_instance_sched_policy(struct vgt_device *vgt)
+{
+	vgt->pdev->workload_scheduler.sched_ops->instance_clean(vgt);
+}
+
+void gvt_start_schedule(struct vgt_device *vgt)
+{
+	gvt_info("[vgt %d] start schedule", vgt->id);
+
+	vgt->pdev->workload_scheduler.sched_ops->start_schedule(vgt);
+}
+
+void gvt_stop_schedule(struct vgt_device *vgt)
+{
+	struct gvt_workload_scheduler *scheduler =
+		&vgt->pdev->workload_scheduler;
+
+	gvt_info("[vgt %d] stop schedule", vgt->id);
+
+	scheduler->sched_ops->stop_schedule(vgt);
+
+	if (scheduler->next_instance == vgt)
+		scheduler->next_instance = NULL;
+
+	if (scheduler->current_instance == vgt) {
+		/* stop workload dispatching */
+		scheduler->need_reschedule = true;
		scheduler->current_instance = NULL;
+	}
+}
diff --git a/drivers/gpu/drm/i915/gvt/sched_policy.h b/drivers/gpu/drm/i915/gvt/sched_policy.h
new file mode 100644
index 0000000..9cc1899
--- /dev/null
+++ b/drivers/gpu/drm/i915/gvt/sched_policy.h
@@ -0,0 +1,48 @@
+/*
+ * Copyright(c) 2011-2016 Intel Corporation. All rights reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __GVT_SCHED_POLICY__
+#define __GVT_SCHED_POLICY__
+
+struct gvt_schedule_policy_ops {
+	bool (*init)(struct pgt_device *pdev);
+	void (*clean)(struct pgt_device *pdev);
+	bool (*instance_init)(struct vgt_device *vgt);
+	void (*instance_clean)(struct vgt_device *vgt);
+	void (*start_schedule)(struct vgt_device *vgt);
+	void (*stop_schedule)(struct vgt_device *vgt);
+};
+
+bool gvt_init_sched_policy(struct pgt_device *pdev);
+
+void gvt_clean_sched_policy(struct pgt_device *pdev);
+
+bool gvt_init_instance_sched_policy(struct vgt_device *vgt);
+
+void gvt_clean_instance_sched_policy(struct vgt_device *vgt);
+
+void gvt_start_schedule(struct vgt_device *vgt);
+
+void gvt_stop_schedule(struct vgt_device *vgt);
+
+#endif
diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
index cdf179f..d8d2e23 100644
--- a/drivers/gpu/drm/i915/gvt/scheduler.c
+++ b/drivers/gpu/drm/i915/gvt/scheduler.c
@@ -434,6 +434,8 @@ void gvt_clean_workload_scheduler(struct pgt_device *pdev)

 	i915_gem_context_unreference(scheduler->shadow_ctx);
 	scheduler->shadow_ctx = NULL;
+
+	gvt_clean_sched_policy(pdev);
 }

 bool gvt_init_workload_scheduler(struct pgt_device *pdev)
@@ -474,6 +476,9 @@ bool gvt_init_workload_scheduler(struct pgt_device *pdev)
 		}
 	}

+	if (!gvt_init_sched_policy(pdev))
+		goto err;
+
 	return true;
 err:
 	if (param) {
diff --git a/drivers/gpu/drm/i915/gvt/scheduler.h b/drivers/gpu/drm/i915/gvt/scheduler.h
index c4e7fa2..7a8f1eb 100644
--- a/drivers/gpu/drm/i915/gvt/scheduler.h
+++ b/drivers/gpu/drm/i915/gvt/scheduler.h
@@ -35,6 +35,9 @@ struct gvt_workload_scheduler {
 	wait_queue_head_t workload_complete_wq;
 	struct task_struct *thread[I915_NUM_RINGS];
 	wait_queue_head_t waitq[I915_NUM_RINGS];
+
+	void *sched_data;
+	struct gvt_schedule_policy_ops *sched_ops;
 };

 struct gvt_workload {