From patchwork Fri Nov 1 17:19:48 2013
X-Patchwork-Submitter: Imre Deak
X-Patchwork-Id: 3126771
From: Imre Deak <imre.deak@intel.com>
To: intel-gfx@lists.freedesktop.org
Date: Fri, 1 Nov 2013 19:19:48 +0200
Message-Id: <1383326394-3933-3-git-send-email-imre.deak@intel.com>
In-Reply-To: <1383326394-3933-1-git-send-email-imre.deak@intel.com>
References: <1383326394-3933-1-git-send-email-imre.deak@intel.com>
Subject: [Intel-gfx] [PATCH 2/8] drm/i915: support for multiple power wells

HW generations so far have had only one always-on power well and optionally
one dynamic power well. Upcoming HW generations may have multiple dynamic
power wells, so add the infrastructure to support them.

The idea is to keep the existing power domain API used by the rest of the
driver and to create a mapping between these power domains and the underlying
power wells. This mapping can differ from one HW generation to another, but
high-level driver code doesn't need to know about that: through the existing
get/put API it simply asks for a given power domain, and the power domain
framework makes sure the relevant power wells get enabled in the right order.
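For illustration only (not part of the patch), here is a minimal user-space
sketch of the domain-to-well mapping idea: each well carries a bitmask of the
domains it feeds, a get on a domain walks the well list from lower to higher
index and enables any well whose refcount goes from 0 to 1, and a put walks
the list in reverse. The well and domain names and the printf-based
"enable/disable" are made up for the example and stand in for the real driver
callbacks.

#include <stdio.h>

enum power_domain { DOMAIN_PIPE_A, DOMAIN_PIPE_B, DOMAIN_AUDIO };
#define BIT(x) (1UL << (x))

struct well_desc {
	const char *name;
	unsigned long domains;	/* bitmask of domains this well feeds */
	int count;		/* enable/disable usage count */
};

/* A hypothetical platform with two dynamic power wells. */
static struct well_desc wells[] = {
	{ .name = "well_A", .domains = BIT(DOMAIN_PIPE_A) | BIT(DOMAIN_AUDIO) },
	{ .name = "well_B", .domains = BIT(DOMAIN_PIPE_B) | BIT(DOMAIN_AUDIO) },
};
#define WELL_COUNT (int)(sizeof(wells) / sizeof(wells[0]))

/* Enable every well feeding the domain, walking from lower to higher index. */
static void domain_get(enum power_domain d)
{
	for (int i = 0; i < WELL_COUNT; i++)
		if ((wells[i].domains & BIT(d)) && !wells[i].count++)
			printf("enable %s\n", wells[i].name);
}

/* Drop the reference, disabling unused wells in the reverse order. */
static void domain_put(enum power_domain d)
{
	for (int i = WELL_COUNT - 1; i >= 0; i--)
		if ((wells[i].domains & BIT(d)) && !--wells[i].count)
			printf("disable %s\n", wells[i].name);
}

int main(void)
{
	domain_get(DOMAIN_AUDIO);	/* enables well_A, then well_B */
	domain_get(DOMAIN_PIPE_A);	/* well_A already on, only bumps its count */
	domain_put(DOMAIN_AUDIO);	/* disables well_B; well_A still held */
	domain_put(DOMAIN_PIPE_A);	/* disables well_A */
	return 0;
}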
Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.h |  12 +++--
 drivers/gpu/drm/i915/intel_pm.c | 113 ++++++++++++++++++++++++++++++++--------
 2 files changed, 99 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index e2e72e8..038da2a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -913,21 +913,27 @@ struct intel_ilk_power_mgmt {
 
 /* Power well structure for haswell */
 struct i915_power_well {
+	const char *name;
 	/* power well enable/disable usage count */
 	int count;
+	unsigned long domains;
+	void *data;
+	void (*set)(struct drm_device *dev, struct i915_power_well *power_well,
+		    bool enable);
+	bool (*is_enabled)(struct drm_device *dev,
+			   struct i915_power_well *power_well);
 };
 
-#define I915_MAX_POWER_WELLS 1
-
 struct i915_power_domains {
 	/*
 	 * Power wells needed for initialization at driver init and suspend
 	 * time are on. They are kept on until after the first modeset.
 	 */
 	bool init_power_on;
+	int power_well_count;
 	struct mutex lock;
-	struct i915_power_well power_wells[I915_MAX_POWER_WELLS];
+	struct i915_power_well *power_wells;
 };
 
 struct i915_dri1_state {
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 34e1a8b..8640b78 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -5521,27 +5521,66 @@ static bool is_always_on_power_domain(struct drm_device *dev,
 	return BIT(domain) & always_on_domains;
 }
 
+#define for_each_power_well(i, power_well, domain_mask, power_domains)	\
+	for (i = 0;							\
+	     i < (power_domains)->power_well_count &&			\
+		 ((power_well) = &(power_domains)->power_wells[i]);	\
+	     i++)							\
+		if ((power_well)->domains & (domain_mask))
+
+#define for_each_power_well_rev(i, power_well, domain_mask, power_domains) \
+	for (i = (power_domains)->power_well_count - 1;			    \
+	     i >= 0 && ((power_well) = &(power_domains)->power_wells[i]);  \
+	     i--)							    \
+		if ((power_well)->domains & (domain_mask))
+
 /**
  * We should only use the power well if we explicitly asked the hardware to
  * enable it, so check if it's enabled and also check if we've requested it to
  * be enabled.
  */
-bool intel_display_power_enabled(struct drm_device *dev,
-				 enum intel_display_power_domain domain)
+static bool hsw_power_well_enabled(struct drm_device *dev,
+				   struct i915_power_well *power_well)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
 
-	if (!HAS_POWER_WELL(dev))
-		return true;
-
-	if (is_always_on_power_domain(dev, domain))
-		return true;
-
 	return I915_READ(HSW_PWR_WELL_DRIVER) ==
 		     (HSW_PWR_WELL_ENABLE_REQUEST | HSW_PWR_WELL_STATE_ENABLED);
 }
 
-static void __intel_set_power_well(struct drm_device *dev, bool enable)
+bool intel_display_power_enabled(struct drm_device *dev,
+				 enum intel_display_power_domain domain)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	struct i915_power_domains *power_domains;
+	struct i915_power_well *power_well;
+	bool is_enabled;
+	int i;
+
+	if (!HAS_POWER_WELL(dev))
+		return true;
+
+	if (is_always_on_power_domain(dev, domain))
+		return true;
+
+	power_domains = &dev_priv->power_domains;
+
+	is_enabled = true;
+
+	mutex_lock(&power_domains->lock);
+	for_each_power_well_rev(i, power_well, BIT(domain), power_domains) {
+		if (!power_well->is_enabled(dev, power_well)) {
+			is_enabled = false;
+			break;
+		}
+	}
+	mutex_unlock(&power_domains->lock);
+
+	return is_enabled;
+}
+
+static void hsw_set_power_well(struct drm_device *dev,
+			       struct i915_power_well *power_well, bool enable)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	bool is_enabled, enable_requested;
@@ -5591,16 +5630,17 @@ static void __intel_set_power_well(struct drm_device *dev, bool enable)
 static void __intel_power_well_get(struct drm_device *dev,
 				   struct i915_power_well *power_well)
 {
-	if (!power_well->count++)
-		__intel_set_power_well(dev, true);
+	if (!power_well->count++ && power_well->set)
+		power_well->set(dev, power_well, true);
 }
 
 static void __intel_power_well_put(struct drm_device *dev,
 				   struct i915_power_well *power_well)
 {
 	WARN_ON(!power_well->count);
-	if (!--power_well->count && i915_disable_power_well)
-		__intel_set_power_well(dev, false);
+
+	if (!--power_well->count && power_well->set && i915_disable_power_well)
+		power_well->set(dev, power_well, false);
 }
 
 void intel_display_power_get(struct drm_device *dev,
@@ -5608,6 +5648,8 @@ void intel_display_power_get(struct drm_device *dev,
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct i915_power_domains *power_domains;
+	struct i915_power_well *power_well;
+	int i;
 
 	if (!HAS_POWER_WELL(dev))
 		return;
@@ -5618,7 +5660,8 @@ void intel_display_power_get(struct drm_device *dev,
 	power_domains = &dev_priv->power_domains;
 
 	mutex_lock(&power_domains->lock);
-	__intel_power_well_get(dev, &power_domains->power_wells[0]);
+	for_each_power_well(i, power_well, BIT(domain), power_domains)
+		__intel_power_well_get(dev, power_well);
 	mutex_unlock(&power_domains->lock);
 }
 
@@ -5627,6 +5670,8 @@ void intel_display_power_put(struct drm_device *dev,
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct i915_power_domains *power_domains;
+	struct i915_power_well *power_well;
+	int i;
 
 	if (!HAS_POWER_WELL(dev))
 		return;
@@ -5637,7 +5682,8 @@ void intel_display_power_put(struct drm_device *dev,
 	power_domains = &dev_priv->power_domains;
 
 	mutex_lock(&power_domains->lock);
-	__intel_power_well_put(dev, &power_domains->power_wells[0]);
+	for_each_power_well_rev(i, power_well, BIT(domain), power_domains)
+		__intel_power_well_put(dev, power_well);
 	mutex_unlock(&power_domains->lock);
 }
 
@@ -5671,17 +5717,37 @@ void i915_release_power_well(void)
 }
 EXPORT_SYMBOL_GPL(i915_release_power_well);
 
+static struct i915_power_well hsw_power_wells[] = {
+	{
+		.name = "display",
+		.domains = POWER_DOMAIN_MASK & ~HSW_ALWAYS_ON_POWER_DOMAINS,
+		.is_enabled = hsw_power_well_enabled,
+		.set = hsw_set_power_well,
+	},
+};
+
 int intel_power_domains_init(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct i915_power_domains *power_domains = &dev_priv->power_domains;
-	struct i915_power_well *power_well;
+
+	if (!HAS_POWER_WELL(dev))
+		return 0;
 
 	mutex_init(&power_domains->lock);
-	hsw_pwr = power_domains;
 
-	power_well = &power_domains->power_wells[0];
-	power_well->count = 0;
+	/*
+	 * The enabling order will be from lower to higher indexed wells,
+	 * the disabling order is reversed.
+	 */
+	if (IS_HASWELL(dev)) {
+		power_domains->power_wells = hsw_power_wells;
+		power_domains->power_well_count = ARRAY_SIZE(hsw_power_wells);
+
+		hsw_pwr = power_domains;
+	} else {
+		WARN_ON(1);
+	}
 
 	return 0;
 }
@@ -5696,15 +5762,16 @@ static void intel_power_domains_resume(struct drm_device *dev)
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct i915_power_domains *power_domains = &dev_priv->power_domains;
 	struct i915_power_well *power_well;
+	int i;
 
 	if (!HAS_POWER_WELL(dev))
 		return;
 
 	mutex_lock(&power_domains->lock);
-
-	power_well = &power_domains->power_wells[0];
-	__intel_set_power_well(dev, power_well->count > 0);
-
+	for_each_power_well(i, power_well, POWER_DOMAIN_MASK, power_domains) {
+		if (power_well->set)
+			power_well->set(dev, power_well, power_well->count > 0);
+	}
 	mutex_unlock(&power_domains->lock);
 }