Message ID | 20180726141410.2185-4-mahesh1.kumar@intel.com (mailing list archive) |
---|---|
State | New, archived |
Series | Decode memdev info and bandwidth and implement latency WA |
On Thu, Jul 26, 2018 at 7:14 AM, Mahesh Kumar <mahesh1.kumar@intel.com> wrote:
> Bspec: 4381
Do we know that these numbers are stable?
I don't know if this form is common in the kernel, but in Mesa we
specify the name of the page which should always allow readers to find
it.
Hi Matt,

On 7/27/2018 9:21 AM, Matt Turner wrote:
> On Thu, Jul 26, 2018 at 7:14 AM, Mahesh Kumar <mahesh1.kumar@intel.com> wrote:
>> Bspec: 4381
> Do we know that these numbers are stable?
yes these numbers are fixed in Bspec
> I don't know if this form is common in the kernel, but in Mesa we
> specify the name of the page which should always allow readers to find
> it.
This is common practice in kernel to give Bspec Number reference instead of
name of the page.

-Mahesh
On Fri, Jul 27, 2018 at 11:40:14AM +0530, Kumar, Mahesh wrote:
> Hi Matt,
>
> On 7/27/2018 9:21 AM, Matt Turner wrote:
> > On Thu, Jul 26, 2018 at 7:14 AM, Mahesh Kumar <mahesh1.kumar@intel.com> wrote:
> > > Bspec: 4381
> > Do we know that these numbers are stable?
> yes these numbers are fixed in Bspec
> > I don't know if this form is common in the kernel, but in Mesa we
> > specify the name of the page which should always allow readers to find
> > it.
> This is common practice in kernel to give Bspec Number reference instead of
> name of the page.

Well, I believe Matt has a good point here.
It is stable for now and for a while, but we had seen changes
on the web interface in use. Also this number doesn't help devs who
don't use the web interface.

> -Mahesh
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx
Hi,

On 7/28/2018 11:18 AM, Rodrigo Vivi wrote:
> On Fri, Jul 27, 2018 at 11:40:14AM +0530, Kumar, Mahesh wrote:
>> Hi Matt,
>>
>> On 7/27/2018 9:21 AM, Matt Turner wrote:
>>> On Thu, Jul 26, 2018 at 7:14 AM, Mahesh Kumar <mahesh1.kumar@intel.com> wrote:
>>>> Bspec: 4381
>>> Do we know that these numbers are stable?
>> yes these numbers are fixed in Bspec
>>> I don't know if this form is common in the kernel, but in Mesa we
>>> specify the name of the page which should always allow readers to find
>>> it.
>> This is common practice in kernel to give Bspec Number reference instead of
>> name of the page.
> Well, I believe Matt has a good point here.
> It is stable for now and for a while, but we had seen changes
> on the web interface in use. Also this number doesn't help devs who
> don't use the web interface.

sure, will update this in next version.
Will float updated series once receive review comments on other patches.

-Mahesh
On Thu, Jul 26, 2018 at 07:44:08PM +0530, Mahesh Kumar wrote:
> Memory with 16GB dimms require an increase of 1us in level-0 latency.
> This patch implements the same.
> Bspec: 4381
>
> changes since V1:
>  - s/memdev_info/dram_info
>  - make skl_is_16gb_dimm pure function
>
> Signed-off-by: Mahesh Kumar <mahesh1.kumar@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_drv.c | 40 ++++++++++++++++++++++++++++++++++++++--
>  drivers/gpu/drm/i915/i915_drv.h |  3 +++
>  drivers/gpu/drm/i915/intel_pm.c | 13 +++++++++++++
>  3 files changed, 54 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> index ddf6bf9b500a..86bc2e685522 100644
> --- a/drivers/gpu/drm/i915/i915_drv.c
> +++ b/drivers/gpu/drm/i915/i915_drv.c
> @@ -1070,6 +1070,25 @@ static void intel_sanitize_options(struct drm_i915_private *dev_priv)
>  	intel_gvt_sanitize_options(dev_priv);
>  }
>
> +static bool
> +skl_is_16gb_dimm(u8 rank, u8 size, u8 width)
> +{
> +	if (size == 16 && width == SKL_DRAM_WIDTH_X8 &&
> +	    rank == SKL_DRAM_RANK_SINGLE)
> +		return true;
> +	else if (size == 32 && width == SKL_DRAM_WIDTH_X8 &&
> +		 rank == SKL_DRAM_RANK_DUAL)
> +		return true;
> +	else if (size == 8 && width == SKL_DRAM_WIDTH_X16 &&
> +		 rank == SKL_DRAM_RANK_SINGLE)
> +		return true;
> +	else if (size == 16 && width == SKL_DRAM_WIDTH_X16 &&
> +		 rank == SKL_DRAM_RANK_DUAL)
> +		return true;

nit: This is right, but it would be easier if order of checks were in
same order as spec. Well.. but no real need for change since I already
checked ;)

> +
> +	return false;
> +}
> +
>  static int
>  skl_dram_get_channel_info(struct dram_channel_info *ch, u32 val)
>  {
> @@ -1077,6 +1096,7 @@ skl_dram_get_channel_info(struct dram_channel_info *ch, u32 val)
>  	u8 l_size, s_size;
>  	u8 l_width, s_width;
>  	enum dram_rank rank;
> +	bool l_16gb_dimm, s_16gb_dimm;

I don't think we need these extra locals

>
>  	if (!val)
>  		return -1;
> @@ -1103,6 +1123,13 @@ skl_dram_get_channel_info(struct dram_channel_info *ch, u32 val)
>  	else
>  		rank = I915_DRAM_RANK_SINGLE;
>
> +	l_16gb_dimm = skl_is_16gb_dimm(l_rank, l_size, l_width);
> +	s_16gb_dimm = skl_is_16gb_dimm(s_rank, s_size, s_width);
> +
> +	if (l_16gb_dimm || s_16gb_dimm)
> +		ch->is_16gb_dimm = true;
> +	else
> +		ch->is_16gb_dimm = false;

ch->is_16gb_dimm = skl_is_16gb_dimm(l_rank, l_size, l_width) ||
		   skl_is_16gb_dimm(s_rank, s_size, s_width);

>  	ch->l_info.size = l_size;
>  	ch->s_info.size = s_size;
>  	ch->l_info.width = l_width;
> @@ -1137,6 +1164,8 @@ skl_dram_get_channels_info(struct drm_i915_private *dev_priv)
>  		return -EINVAL;
>  	}
>
> +	dram_info->valid_dimm = true;
> +
>  	/*
>  	 * If any of the channel is single rank channel, worst case output
>  	 * will be same as if single rank memory, so consider single rank
> @@ -1152,6 +1181,10 @@ skl_dram_get_channels_info(struct drm_i915_private *dev_priv)
>  		DRM_INFO("couldn't get memory rank information\n");
>  		return -EINVAL;
>  	}
> +
> +	if (ch0.is_16gb_dimm || ch1.is_16gb_dimm)
> +		dram_info->is_16gb_dimm = true;
> +
>  	return 0;
>  }
>
> @@ -1263,6 +1296,7 @@ bxt_get_dram_info(struct drm_i915_private *dev_priv)
>  		return -EINVAL;
>  	}
>
> +	dram_info->valid_dimm = true;
>  	dram_info->valid = true;
>  	return 0;
>  }
> @@ -1275,6 +1309,8 @@ intel_get_dram_info(struct drm_i915_private *dev_priv)
>  	int ret;
>
>  	dram_info->valid = false;
> +	dram_info->valid_dimm = false;
> +	dram_info->is_16gb_dimm = false;
>  	dram_info->rank = I915_DRAM_RANK_INVALID;
>  	dram_info->bandwidth_kbps = 0;
>  	dram_info->num_channels = 0;
> @@ -1298,9 +1334,9 @@ intel_get_dram_info(struct drm_i915_private *dev_priv)
>  		sprintf(bandwidth_str, "unknown");
>  	DRM_DEBUG_KMS("DRAM bandwidth:%s, total-channels: %u\n",
>  		      bandwidth_str, dram_info->num_channels);
> -	DRM_DEBUG_KMS("DRAM rank: %s rank\n",
> +	DRM_DEBUG_KMS("DRAM rank: %s rank 16GB-dimm:%s\n",
>  		      (dram_info->rank == I915_DRAM_RANK_DUAL) ?
> -		      "dual" : "single");
> +		      "dual" : "single", yesno(dram_info->is_16gb_dimm));
>  }
>
>  /**
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 2d12fc152b49..854f3c828e01 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -1906,6 +1906,8 @@ struct drm_i915_private {
>
>  	struct dram_info {
>  		bool valid;
> +		bool valid_dimm;
> +		bool is_16gb_dimm;
>  		u8 num_channels;
>  		enum dram_rank {
>  			I915_DRAM_RANK_INVALID = 0,
> @@ -2133,6 +2135,7 @@ struct dram_channel_info {
>  		u8 size, width, rank;
>  	} l_info, s_info;
>  	enum dram_rank rank;
> +	bool is_16gb_dimm;
>  };
>
>  static inline struct drm_i915_private *to_i915(const struct drm_device *dev)
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index 7312ecb73415..2446f53adf21 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -2874,6 +2874,19 @@ static void intel_read_wm_latency(struct drm_i915_private *dev_priv,
>  		}
>  	}
>
> +	/*
> +	 * WA Level-0 adjustment for 16GB DIMMs: SKL+
> +	 * If we could not get dimm info enable this WA to prevent from
> +	 * any underrun. If not able to get Dimm info assume 16GB dimm
> +	 * to avoid any underrun.
> +	 */
> +	if (dev_priv->dram_info.valid_dimm) {
> +		if (dev_priv->dram_info.is_16gb_dimm)
> +			wm[0] += 1;
> +	} else {
> +		wm[0] += 1;

well... spec doesn't cover this case... I believe it is safe, but only
concern would be this creating latency for other cases... just thinking
loud, but I'm probably in favor of doing this anyway...

> +	}
> +
>  	} else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) {
>  		uint64_t sskpd = I915_READ64(MCH_SSKPD);
>
> --
> 2.16.2
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index ddf6bf9b500a..86bc2e685522 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1070,6 +1070,25 @@ static void intel_sanitize_options(struct drm_i915_private *dev_priv)
 	intel_gvt_sanitize_options(dev_priv);
 }
 
+static bool
+skl_is_16gb_dimm(u8 rank, u8 size, u8 width)
+{
+	if (size == 16 && width == SKL_DRAM_WIDTH_X8 &&
+	    rank == SKL_DRAM_RANK_SINGLE)
+		return true;
+	else if (size == 32 && width == SKL_DRAM_WIDTH_X8 &&
+		 rank == SKL_DRAM_RANK_DUAL)
+		return true;
+	else if (size == 8 && width == SKL_DRAM_WIDTH_X16 &&
+		 rank == SKL_DRAM_RANK_SINGLE)
+		return true;
+	else if (size == 16 && width == SKL_DRAM_WIDTH_X16 &&
+		 rank == SKL_DRAM_RANK_DUAL)
+		return true;
+
+	return false;
+}
+
 static int
 skl_dram_get_channel_info(struct dram_channel_info *ch, u32 val)
 {
@@ -1077,6 +1096,7 @@ skl_dram_get_channel_info(struct dram_channel_info *ch, u32 val)
 	u8 l_size, s_size;
 	u8 l_width, s_width;
 	enum dram_rank rank;
+	bool l_16gb_dimm, s_16gb_dimm;
 
 	if (!val)
 		return -1;
@@ -1103,6 +1123,13 @@ skl_dram_get_channel_info(struct dram_channel_info *ch, u32 val)
 	else
 		rank = I915_DRAM_RANK_SINGLE;
 
+	l_16gb_dimm = skl_is_16gb_dimm(l_rank, l_size, l_width);
+	s_16gb_dimm = skl_is_16gb_dimm(s_rank, s_size, s_width);
+
+	if (l_16gb_dimm || s_16gb_dimm)
+		ch->is_16gb_dimm = true;
+	else
+		ch->is_16gb_dimm = false;
 	ch->l_info.size = l_size;
 	ch->s_info.size = s_size;
 	ch->l_info.width = l_width;
@@ -1137,6 +1164,8 @@ skl_dram_get_channels_info(struct drm_i915_private *dev_priv)
 		return -EINVAL;
 	}
 
+	dram_info->valid_dimm = true;
+
 	/*
 	 * If any of the channel is single rank channel, worst case output
 	 * will be same as if single rank memory, so consider single rank
@@ -1152,6 +1181,10 @@ skl_dram_get_channels_info(struct drm_i915_private *dev_priv)
 		DRM_INFO("couldn't get memory rank information\n");
 		return -EINVAL;
 	}
+
+	if (ch0.is_16gb_dimm || ch1.is_16gb_dimm)
+		dram_info->is_16gb_dimm = true;
+
 	return 0;
 }
 
@@ -1263,6 +1296,7 @@ bxt_get_dram_info(struct drm_i915_private *dev_priv)
 		return -EINVAL;
 	}
 
+	dram_info->valid_dimm = true;
 	dram_info->valid = true;
 	return 0;
 }
@@ -1275,6 +1309,8 @@ intel_get_dram_info(struct drm_i915_private *dev_priv)
 	int ret;
 
 	dram_info->valid = false;
+	dram_info->valid_dimm = false;
+	dram_info->is_16gb_dimm = false;
 	dram_info->rank = I915_DRAM_RANK_INVALID;
 	dram_info->bandwidth_kbps = 0;
 	dram_info->num_channels = 0;
@@ -1298,9 +1334,9 @@ intel_get_dram_info(struct drm_i915_private *dev_priv)
 		sprintf(bandwidth_str, "unknown");
 	DRM_DEBUG_KMS("DRAM bandwidth:%s, total-channels: %u\n",
 		      bandwidth_str, dram_info->num_channels);
-	DRM_DEBUG_KMS("DRAM rank: %s rank\n",
+	DRM_DEBUG_KMS("DRAM rank: %s rank 16GB-dimm:%s\n",
 		      (dram_info->rank == I915_DRAM_RANK_DUAL) ?
-		      "dual" : "single");
+		      "dual" : "single", yesno(dram_info->is_16gb_dimm));
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 2d12fc152b49..854f3c828e01 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1906,6 +1906,8 @@ struct drm_i915_private {
 
 	struct dram_info {
 		bool valid;
+		bool valid_dimm;
+		bool is_16gb_dimm;
 		u8 num_channels;
 		enum dram_rank {
 			I915_DRAM_RANK_INVALID = 0,
@@ -2133,6 +2135,7 @@ struct dram_channel_info {
 		u8 size, width, rank;
 	} l_info, s_info;
 	enum dram_rank rank;
+	bool is_16gb_dimm;
 };
 
 static inline struct drm_i915_private *to_i915(const struct drm_device *dev)
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 7312ecb73415..2446f53adf21 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -2874,6 +2874,19 @@ static void intel_read_wm_latency(struct drm_i915_private *dev_priv,
 		}
 	}
 
+	/*
+	 * WA Level-0 adjustment for 16GB DIMMs: SKL+
+	 * If we could not get dimm info enable this WA to prevent from
+	 * any underrun. If not able to get Dimm info assume 16GB dimm
+	 * to avoid any underrun.
+	 */
+	if (dev_priv->dram_info.valid_dimm) {
+		if (dev_priv->dram_info.is_16gb_dimm)
+			wm[0] += 1;
+	} else {
+		wm[0] += 1;
+	}
+
 	} else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) {
 		uint64_t sskpd = I915_READ64(MCH_SSKPD);
Memory with 16GB dimms require an increase of 1us in level-0 latency.
This patch implements the same.
Bspec: 4381

changes since V1:
 - s/memdev_info/dram_info
 - make skl_is_16gb_dimm pure function

Signed-off-by: Mahesh Kumar <mahesh1.kumar@intel.com>
---
 drivers/gpu/drm/i915/i915_drv.c | 40 ++++++++++++++++++++++++++++++++++++++--
 drivers/gpu/drm/i915/i915_drv.h |  3 +++
 drivers/gpu/drm/i915/intel_pm.c | 13 +++++++++++++
 3 files changed, 54 insertions(+), 2 deletions(-)