From patchwork Mon Jul 8 01:35:17 2019
X-Patchwork-Submitter: Xiaolin Zhang
X-Patchwork-Id: 11034487
From: Xiaolin Zhang <xiaolin.zhang@intel.com>
To: intel-gfx@lists.freedesktop.org
Cc: intel-gvt-dev@lists.freedesktop.org
Date: Mon, 8 Jul 2019 09:35:17 +0800
Message-Id: <1562549722-27046-5-git-send-email-xiaolin.zhang@intel.com>
In-Reply-To: <1562549722-27046-1-git-send-email-xiaolin.zhang@intel.com>
References: <1562549722-27046-1-git-send-email-xiaolin.zhang@intel.com>
Subject: [Intel-gfx] [PATCH v7 4/9] drm/i915: vgpu ppgtt update pv optimization

This patch extends the vgpu ppgtt g2v notification to notify host GVT-g
of ppgtt updates from the guest, namely alloc_4lvl, clear_4lvl and
insert_4lvl. These updates use the shared memory page to pass struct
pv_ppgtt_update from guest to GVT, which is used for the pv optimization
implementation on the host GVT side.

This patch also adds one new pv_caps level to control the ppgtt update;
use PV_PPGTT_UPDATE to enable this level of pv optimization.

v0: RFC.
v1: rebased.
v2: added pv callbacks for vm.{allocate_va_range, insert_entries,
clear_range} within ppgtt.
v3: rebased; disabled huge page ppgtt support when using PVMMIO ppgtt
update due to complexity and performance impact.
v4: moved alloc/insert/clear_4lvl pv callbacks into i915_vgpu_pv.c and
added a single intel_vgpu_config_pv_caps() for vgpu pv callbacks setup.
v5: rebase.
v6: rebase.
v7: rebase.

Signed-off-by: Xiaolin Zhang <xiaolin.zhang@intel.com>
---
 drivers/gpu/drm/i915/i915_gem.c     |  3 +-
 drivers/gpu/drm/i915/i915_gem_gtt.c |  9 +++--
 drivers/gpu/drm/i915/i915_gem_gtt.h |  8 ++++
 drivers/gpu/drm/i915/i915_vgpu.c    | 79 +++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_vgpu.h    | 25 ++++++++++++
 5 files changed, 120 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 7ade42b..de306e3 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1421,7 +1421,8 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
 	int ret;
 
 	/* We need to fallback to 4K pages if host doesn't support huge gtt. */
-	if (intel_vgpu_active(dev_priv) && !intel_vgpu_has_huge_gtt(dev_priv))
+	if ((intel_vgpu_active(dev_priv) && !intel_vgpu_has_huge_gtt(dev_priv))
+			|| intel_vgpu_enabled_pv_caps(dev_priv, PV_PPGTT_UPDATE))
 		mkwrite_device_info(dev_priv)->page_sizes =
 			I915_GTT_PAGE_SIZE_4K;
 
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 236c964..d6224f9 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -910,7 +910,7 @@ static void gen8_ppgtt_clear_3lvl(struct i915_address_space *vm,
  * This is the top-level structure in 4-level page tables used on gen8+.
  * Empty entries are always scratch pml4e.
  */
-static void gen8_ppgtt_clear_4lvl(struct i915_address_space *vm,
+void gen8_ppgtt_clear_4lvl(struct i915_address_space *vm,
 				  u64 start, u64 length)
 {
 	struct i915_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
@@ -1150,7 +1150,7 @@ static void gen8_ppgtt_insert_huge_entries(struct i915_vma *vma,
 	} while (iter->sg);
 }
 
-static void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
+void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
 				   struct i915_vma *vma,
 				   enum i915_cache_level cache_level,
 				   u32 flags)
@@ -1466,7 +1466,7 @@ static int gen8_ppgtt_alloc_3lvl(struct i915_address_space *vm,
 				 i915_vm_to_ppgtt(vm)->pd, start, length);
 }
 
-static int gen8_ppgtt_alloc_4lvl(struct i915_address_space *vm,
+int gen8_ppgtt_alloc_4lvl(struct i915_address_space *vm,
 				 u64 start, u64 length)
 {
 	struct i915_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
@@ -1656,6 +1656,9 @@ static struct i915_ppgtt *gen8_ppgtt_create(struct drm_i915_private *i915)
 		ppgtt->vm.allocate_va_range = gen8_ppgtt_alloc_4lvl;
 		ppgtt->vm.insert_entries = gen8_ppgtt_insert_4lvl;
 		ppgtt->vm.clear_range = gen8_ppgtt_clear_4lvl;
+
+		if (intel_vgpu_active(i915))
+			intel_vgpu_config_pv_caps(i915, PV_PPGTT_UPDATE, ppgtt);
 	} else {
 		if (intel_vgpu_active(i915)) {
 			err = gen8_preallocate_top_level_pdp(ppgtt);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 57a68ef..e19e66a 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -641,6 +641,14 @@ int gen6_ppgtt_pin(struct i915_ppgtt *base);
 void gen6_ppgtt_unpin(struct i915_ppgtt *base);
 void gen6_ppgtt_unpin_all(struct i915_ppgtt *base);
 
+void gen8_ppgtt_clear_4lvl(struct i915_address_space *vm,
+			   u64 start, u64 length);
+void gen8_ppgtt_insert_4lvl(struct i915_address_space *vm,
+			    struct i915_vma *vma,
+			    enum i915_cache_level cache_level, u32 flags);
+int gen8_ppgtt_alloc_4lvl(struct i915_address_space *vm,
+			  u64 start, u64 length);
+
 void i915_gem_suspend_gtt_mappings(struct drm_i915_private *dev_priv);
 void i915_gem_restore_gtt_mappings(struct drm_i915_private *dev_priv);
 
diff --git a/drivers/gpu/drm/i915/i915_vgpu.c b/drivers/gpu/drm/i915/i915_vgpu.c
index acbe3a0..2aad0b8 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.c
+++ b/drivers/gpu/drm/i915/i915_vgpu.c
@@ -96,6 +96,9 @@ void i915_detect_vgpu(struct drm_i915_private *dev_priv)
 	dev_priv->vgpu.active = true;
 
+	/* guest driver PV capability */
+	dev_priv->vgpu.pv_caps = PV_PPGTT_UPDATE;
+
 	if (!intel_vgpu_check_pv_caps(dev_priv, shared_area)) {
 		DRM_INFO("Virtual GPU for Intel GVT-g detected.\n");
 		return;
@@ -313,6 +316,82 @@ int intel_vgt_balloon(struct i915_ggtt *ggtt)
  * i915 vgpu PV support for Linux
  */
 
+static int vgpu_ppgtt_pv_update(struct drm_i915_private *dev_priv,
+		u32 action, u64 pdp, u64 start, u64 length, u32 cache_level)
+{
+	u32 data[8];
+
+	data[0] = action;
+	data[1] = lower_32_bits(pdp);
+	data[2] = upper_32_bits(pdp);
+	data[3] = lower_32_bits(start);
+	data[4] = upper_32_bits(start);
+	data[5] = lower_32_bits(length);
+	data[6] = upper_32_bits(length);
+	data[7] = cache_level;
+
+	return intel_vgpu_pv_send(dev_priv, data, ARRAY_SIZE(data));
+}
+
+static void gen8_ppgtt_clear_4lvl_pv(struct i915_address_space *vm,
+				     u64 start, u64 length)
+{
+	struct i915_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
+	struct drm_i915_private *dev_priv = vm->i915;
+
+	gen8_ppgtt_clear_4lvl(vm, start, length);
+	vgpu_ppgtt_pv_update(dev_priv, PV_ACTION_PPGTT_L4_CLEAR,
+			     px_dma(ppgtt->pd), start, length, 0);
+}
+
+static void gen8_ppgtt_insert_4lvl_pv(struct i915_address_space *vm,
+				      struct i915_vma *vma,
+				      enum i915_cache_level cache_level, u32 flags)
+{
+	struct i915_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
+	struct drm_i915_private *dev_priv = vm->i915;
+	u64 start = vma->node.start;
+	u64 length = vma->node.size;
+
+	gen8_ppgtt_insert_4lvl(vm, vma, cache_level, flags);
+	vgpu_ppgtt_pv_update(dev_priv, PV_ACTION_PPGTT_L4_INSERT,
+			     px_dma(ppgtt->pd), start, length, cache_level);
+}
+
+static int gen8_ppgtt_alloc_4lvl_pv(struct i915_address_space *vm,
+				    u64 start, u64 length)
+{
+	struct i915_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
+	struct drm_i915_private *dev_priv = vm->i915;
+	int ret;
+
+	ret = gen8_ppgtt_alloc_4lvl(vm, start, length);
+	if (ret)
+		return ret;
+
+	return vgpu_ppgtt_pv_update(dev_priv, PV_ACTION_PPGTT_L4_ALLOC,
+				    px_dma(ppgtt->pd), start, length, 0);
+}
+
+/*
+ * config guest driver PV ops for different PV features
+ */
+void intel_vgpu_config_pv_caps(struct drm_i915_private *dev_priv,
+			       enum pv_caps cap, void *data)
+{
+	struct i915_ppgtt *ppgtt;
+
+	if (!intel_vgpu_enabled_pv_caps(dev_priv, cap))
+		return;
+
+	if (cap == PV_PPGTT_UPDATE) {
+		ppgtt = (struct i915_ppgtt *)data;
+		ppgtt->vm.allocate_va_range = gen8_ppgtt_alloc_4lvl_pv;
+		ppgtt->vm.insert_entries = gen8_ppgtt_insert_4lvl_pv;
+		ppgtt->vm.clear_range = gen8_ppgtt_clear_4lvl_pv;
+	}
+}
+
 /**
  * wait_for_desc_update - Wait for the command buffer descriptor update.
  * @desc: buffer descriptor
diff --git a/drivers/gpu/drm/i915/i915_vgpu.h b/drivers/gpu/drm/i915/i915_vgpu.h
index 24c49b7..92c84d3 100644
--- a/drivers/gpu/drm/i915/i915_vgpu.h
+++ b/drivers/gpu/drm/i915/i915_vgpu.h
@@ -33,6 +33,21 @@
 #define PV_CMD_OFF (PAGE_SIZE/2)
 
 /*
+ * define different capabilities of PV optimization
+ */
+enum pv_caps {
+	PV_PPGTT_UPDATE = 0x1,
+};
+
+/* PV actions */
+enum intel_vgpu_pv_action {
+	PV_ACTION_DEFAULT = 0x0,
+	PV_ACTION_PPGTT_L4_ALLOC,
+	PV_ACTION_PPGTT_L4_CLEAR,
+	PV_ACTION_PPGTT_L4_INSERT,
+};
+
+/*
  * A shared page(4KB) between gvt and VM, could be allocated by guest driver
  * or a fixed location in PCI bar 0 region
  */
@@ -119,6 +134,14 @@ intel_vgpu_has_pv_caps(struct drm_i915_private *dev_priv)
 	return dev_priv->vgpu.caps & VGT_CAPS_PV;
 }
 
+static inline bool
+intel_vgpu_enabled_pv_caps(struct drm_i915_private *dev_priv,
+		enum pv_caps cap)
+{
+	return (dev_priv->vgpu.active) && intel_vgpu_has_pv_caps(dev_priv)
+		&& (dev_priv->vgpu.pv_caps & cap);
+}
+
 static inline void
 intel_vgpu_pv_notify(struct drm_i915_private *dev_priv)
 {
@@ -138,4 +161,6 @@ void intel_vgt_deballoon(struct i915_ggtt *ggtt);
 /* i915 vgpu pv related functions */
 bool intel_vgpu_check_pv_caps(struct drm_i915_private *dev_priv,
 		void __iomem *shared_area);
+void intel_vgpu_config_pv_caps(struct drm_i915_private *dev_priv,
+		enum pv_caps cap, void *data);
 #endif /* _I915_VGPU_H_ */
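
For readers looking at this patch in isolation: the struct pv_ppgtt_update named
in the commit message is introduced by another patch in this series, not here. A
minimal sketch of the per-update record it carries, inferred only from the data[8]
payload assembled in vgpu_ppgtt_pv_update() above (field names illustrative, not
the authoritative definition), would be roughly:

/*
 * Illustrative sketch only -- inferred from the data[8] payload in
 * vgpu_ppgtt_pv_update(); the authoritative struct pv_ppgtt_update is
 * defined elsewhere in this series and may differ.
 */
struct pv_ppgtt_update {
	u64 pdp;		/* root page directory of the guest PPGTT */
	u64 start;		/* start of the GPU virtual address range */
	u64 length;		/* length of the updated range */
	u32 cache_level;	/* cache level, meaningful for L4_INSERT */
};

One record like this is written into the shared memory page and the host is then
notified with the corresponding PV_ACTION_PPGTT_L4_* action, so the trap-heavy
per-PTE shadowing path can be replaced by a single guest-to-GVT call per update.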