From patchwork Tue Sep 17 05:48:19 2019
From: Xiaolin Zhang
To: intel-gvt-dev@lists.freedesktop.org, intel-gfx@lists.freedesktop.org
Cc: zhenyu.z.wang@intel.com, hang.yuan@intel.com, zhiyuan.lv@intel.com
Date: Tue, 17 Sep 2019 13:48:19 +0800
Message-Id: <1568699301-2799-9-git-send-email-xiaolin.zhang@intel.com>
In-Reply-To: <1568699301-2799-1-git-send-email-xiaolin.zhang@intel.com>
References: <1568699301-2799-1-git-send-email-xiaolin.zhang@intel.com>
Subject: [Intel-gfx] [PATCH v10 8/9] drm/i915/gvt: GVTg support ppgtt pv optimization

This patch handles PPGTT updates delivered through the g2v notification.
It reads the PPGTT PTE entries out of the guest PTE table pages and
converts them to host pfns. It then creates local PPGTT tables and
inserts the converted pages into them directly, which avoids tracking
the usage of the guest page tables and removes the write-protection
cost of the original shadow page mechanism.

v0: RFC.
v1: rebase.
v2: rebase.
v3: report pv ppgtt cap to guest.
v4: renamed VGPU_PVMMIO to VGPU_PVCAP for naming consistency; no PV
    support when gfx VT-d is enabled.
v5: rebase.
v6: rebase.
v7: added command transport buffer support.
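For reference, the intended end-to-end flow looks like this (a minimal
guest-side sketch; pv_ct_write() and pvinfo_notify() are assumed
wrapper names for the command transport buffer and the g2v_notify
register, the real guest code lives in the guest half of this series):

        /* Guest: describe one PPGTT update for GVTg to replay. */
        struct pv_ppgtt_update req = {
                .pdp         = root_pdp,     /* identifies the guest mm */
                .start       = gva_start,    /* VA range being updated */
                .length      = gva_length,
                .cache_level = cache_level,
        };

        /* Queue the action, then ring the g2v doorbell. */
        pv_ct_write(PV_ACTION_PPGTT_L4_INSERT, &req, sizeof(req));
        pvinfo_notify(VGT_G2V_PV_SEND_TRIGGER);

GVTg services the action in handle_pv_actions() below.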
Signed-off-by: Xiaolin Zhang
---
 drivers/gpu/drm/i915/gvt/gtt.c      | 298 ++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/gvt/gtt.h      |  11 ++
 drivers/gpu/drm/i915/gvt/gvt.h      |   4 +
 drivers/gpu/drm/i915/gvt/handlers.c | 127 ++++++++++++++-
 drivers/gpu/drm/i915/gvt/vgpu.c     |   2 +
 5 files changed, 441 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c
index 4b04af5..c944ac2 100644
--- a/drivers/gpu/drm/i915/gvt/gtt.c
+++ b/drivers/gpu/drm/i915/gvt/gtt.c
@@ -1771,6 +1771,25 @@ static int ppgtt_handle_guest_write_page_table_bytes(
         return 0;
 }
 
+static void invalidate_mm_pv(struct intel_vgpu_mm *mm)
+{
+        struct intel_vgpu *vgpu = mm->vgpu;
+        struct intel_gvt *gvt = vgpu->gvt;
+        struct intel_gvt_gtt *gtt = &gvt->gtt;
+        struct intel_gvt_gtt_pte_ops *ops = gtt->pte_ops;
+        struct intel_gvt_gtt_entry se;
+
+        i915_vm_put(&mm->ppgtt->vm);
+
+        ppgtt_get_shadow_root_entry(mm, &se, 0);
+        if (!ops->test_present(&se))
+                return;
+        se.val64 = 0;
+        ppgtt_set_shadow_root_entry(mm, &se, 0);
+
+        mm->ppgtt_mm.shadowed = false;
+}
+
 static void invalidate_ppgtt_mm(struct intel_vgpu_mm *mm)
 {
         struct intel_vgpu *vgpu = mm->vgpu;
@@ -1783,6 +1802,11 @@ static void invalidate_ppgtt_mm(struct intel_vgpu_mm *mm)
         if (!mm->ppgtt_mm.shadowed)
                 return;
 
+        if (VGPU_PVCAP(mm->vgpu, PV_PPGTT_UPDATE)) {
+                invalidate_mm_pv(mm);
+                return;
+        }
+
         for (index = 0; index < ARRAY_SIZE(mm->ppgtt_mm.shadow_pdps); index++) {
                 ppgtt_get_shadow_root_entry(mm, &se, index);
 
@@ -1800,6 +1824,26 @@ static void invalidate_ppgtt_mm(struct intel_vgpu_mm *mm)
         mm->ppgtt_mm.shadowed = false;
 }
 
+static int shadow_mm_pv(struct intel_vgpu_mm *mm)
+{
+        struct intel_vgpu *vgpu = mm->vgpu;
+        struct intel_gvt *gvt = vgpu->gvt;
+        struct intel_gvt_gtt_entry se;
+
+        mm->ppgtt = i915_ppgtt_create(gvt->dev_priv);
+        if (IS_ERR(mm->ppgtt)) {
+                /* mm->ppgtt is an ERR_PTR here; do not dereference it */
+                gvt_vgpu_err("fail to create ppgtt\n");
+                return PTR_ERR(mm->ppgtt);
+        }
+
+        se.type = GTT_TYPE_PPGTT_ROOT_L4_ENTRY;
+        se.val64 = px_dma(mm->ppgtt->pd);
+        ppgtt_set_shadow_root_entry(mm, &se, 0);
+        mm->ppgtt_mm.shadowed = true;
+
+        return 0;
+}
+
 static int shadow_ppgtt_mm(struct intel_vgpu_mm *mm)
 {
@@ -1814,6 +1858,9 @@ static int shadow_ppgtt_mm(struct intel_vgpu_mm *mm)
         if (mm->ppgtt_mm.shadowed)
                 return 0;
 
+        if (VGPU_PVCAP(mm->vgpu, PV_PPGTT_UPDATE))
+                return shadow_mm_pv(mm);
+
         mm->ppgtt_mm.shadowed = true;
 
         for (index = 0; index < ARRAY_SIZE(mm->ppgtt_mm.guest_pdps); index++) {
@@ -2825,3 +2872,254 @@ void intel_vgpu_reset_gtt(struct intel_vgpu *vgpu)
         intel_vgpu_destroy_all_ppgtt_mm(vgpu);
         intel_vgpu_reset_ggtt(vgpu, true);
 }
+
+#define GEN8_PDE_SHIFT 21
+#define GEN8_PML4E_SHIFT 39
+#define GEN8_PDPE_SHIFT 30
+#define GEN8_PML4E_SIZE (1UL << GEN8_PML4E_SHIFT)
+#define GEN8_PML4E_SIZE_MASK (~(GEN8_PML4E_SIZE - 1))
+#define GEN8_PDPE_SIZE (1UL << GEN8_PDPE_SHIFT)
+#define GEN8_PDPE_SIZE_MASK (~(GEN8_PDPE_SIZE - 1))
+#define GEN8_PDE_SIZE (1UL << GEN8_PDE_SHIFT)
+#define GEN8_PDE_SIZE_MASK (~(GEN8_PDE_SIZE - 1))
+
+#define pml4_addr_end(addr, end) \
+({      unsigned long __boundary = \
+                ((addr) + GEN8_PML4E_SIZE) & GEN8_PML4E_SIZE_MASK; \
+        (__boundary < (end)) ? __boundary : (end); \
+})
+
+#define pdp_addr_end(addr, end) \
+({      unsigned long __boundary = \
+                ((addr) + GEN8_PDPE_SIZE) & GEN8_PDPE_SIZE_MASK; \
+        (__boundary < (end)) ? __boundary : (end); \
+})
+
+#define pd_addr_end(addr, end) \
+({      unsigned long __boundary = \
+                ((addr) + GEN8_PDE_SIZE) & GEN8_PDE_SIZE_MASK; \
+        (__boundary < (end)) ? __boundary : (end); \
+})
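These *_addr_end() macros follow the kernel's pgd_addr_end() pattern:
each walk step is clamped to the end of the current entry's coverage,
or to `end` if that comes first. A standalone self-check of the
arithmetic, with the relevant definitions copied from above (uses the
GNU statement-expression extension, like the macros themselves):

        #include <assert.h>

        #define GEN8_PDE_SHIFT 21
        #define GEN8_PDE_SIZE (1UL << GEN8_PDE_SHIFT)
        #define GEN8_PDE_SIZE_MASK (~(GEN8_PDE_SIZE - 1))
        #define pd_addr_end(addr, end) \
        ({      unsigned long __boundary = \
                        ((addr) + GEN8_PDE_SIZE) & GEN8_PDE_SIZE_MASK; \
                (__boundary < (end)) ? __boundary : (end); \
        })

        int main(void)
        {
                /* A PDE covers 2MB: the first step from 0x1ff000 is
                 * clamped to the 2MB boundary at 0x200000 ...
                 */
                assert(pd_addr_end(0x1ff000UL, 0x400000UL) == 0x200000UL);
                /* ... and a range ending before the boundary returns end. */
                assert(pd_addr_end(0x200000UL, 0x300000UL) == 0x300000UL);
                return 0;
        }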
+struct ppgtt_walk {
+        unsigned long *mfns;
+        int mfn_index;
+        unsigned long *pt;
+};
+
+static int walk_pt_range(struct intel_vgpu *vgpu, u64 pt,
+                u64 start, u64 end, struct ppgtt_walk *walk)
+{
+        const struct intel_gvt_device_info *info = &vgpu->gvt->device_info;
+        struct intel_gvt_gtt_gma_ops *gma_ops = vgpu->gvt->gtt.gma_ops;
+        unsigned long start_index, end_index;
+        int ret;
+        int i;
+        unsigned long mfn, gfn;
+
+        start_index = gma_ops->gma_to_pte_index(start);
+        end_index = ((end - start) >> PAGE_SHIFT) + start_index;
+
+        ret = intel_gvt_hypervisor_read_gpa(vgpu,
+                (pt & PAGE_MASK) + (start_index << info->gtt_entry_size_shift),
+                walk->pt + start_index,
+                (end_index - start_index) << info->gtt_entry_size_shift);
+        if (ret) {
+                gvt_vgpu_err("fail to read gpa %llx\n", pt);
+                return ret;
+        }
+
+        for (i = start_index; i < end_index; i++) {
+                gfn = walk->pt[i] >> PAGE_SHIFT;
+                mfn = intel_gvt_hypervisor_gfn_to_mfn(vgpu, gfn);
+                if (mfn == INTEL_GVT_INVALID_ADDR) {
+                        gvt_vgpu_err("fail to translate gfn: 0x%lx\n", gfn);
+                        return -ENXIO;
+                }
+                walk->mfns[walk->mfn_index++] = mfn << PAGE_SHIFT;
+        }
+
+        return 0;
+}
+
+static int walk_pd_range(struct intel_vgpu *vgpu, u64 pd,
+                u64 start, u64 end, struct ppgtt_walk *walk)
+{
+        const struct intel_gvt_device_info *info = &vgpu->gvt->device_info;
+        struct intel_gvt_gtt_gma_ops *gma_ops = vgpu->gvt->gtt.gma_ops;
+        unsigned long index;
+        u64 pt, next;
+        int ret = 0;
+
+        do {
+                index = gma_ops->gma_to_pde_index(start);
+
+                ret = intel_gvt_hypervisor_read_gpa(vgpu,
+                        (pd & PAGE_MASK) + (index <<
+                        info->gtt_entry_size_shift), &pt, 8);
+                if (ret)
+                        return ret;
+                next = pd_addr_end(start, end);
+                /* propagate the error instead of dropping it */
+                ret = walk_pt_range(vgpu, pt, start, next, walk);
+                if (ret)
+                        return ret;
+
+                start = next;
+        } while (start != end);
+
+        return ret;
+}
+
+static int walk_pdp_range(struct intel_vgpu *vgpu, u64 pdp,
+                u64 start, u64 end, struct ppgtt_walk *walk)
+{
+        const struct intel_gvt_device_info *info = &vgpu->gvt->device_info;
+        struct intel_gvt_gtt_gma_ops *gma_ops = vgpu->gvt->gtt.gma_ops;
+        unsigned long index;
+        u64 pd, next;
+        int ret = 0;
+
+        do {
+                index = gma_ops->gma_to_l4_pdp_index(start);
+
+                ret = intel_gvt_hypervisor_read_gpa(vgpu,
+                        (pdp & PAGE_MASK) + (index <<
+                        info->gtt_entry_size_shift), &pd, 8);
+                if (ret)
+                        return ret;
+                next = pdp_addr_end(start, end);
+                ret = walk_pd_range(vgpu, pd, start, next, walk);
+                if (ret)
+                        return ret;
+                start = next;
+        } while (start != end);
+
+        return ret;
+}
+
+static int walk_pml4_range(struct intel_vgpu *vgpu, u64 pml4,
+                u64 start, u64 end, struct ppgtt_walk *walk)
+{
+        const struct intel_gvt_device_info *info = &vgpu->gvt->device_info;
+        struct intel_gvt_gtt_gma_ops *gma_ops = vgpu->gvt->gtt.gma_ops;
+        unsigned long index;
+        u64 pdp, next;
+        int ret = 0;
+
+        do {
+                index = gma_ops->gma_to_pml4_index(start);
+                ret = intel_gvt_hypervisor_read_gpa(vgpu,
+                        (pml4 & PAGE_MASK) + (index <<
+                        info->gtt_entry_size_shift), &pdp, 8);
+                if (ret)
+                        return ret;
+                next = pml4_addr_end(start, end);
+                ret = walk_pdp_range(vgpu, pdp, start, next, walk);
+                if (ret)
+                        return ret;
+                start = next;
+        } while (start != end);
+
+        return ret;
+}
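Taken together, the walkers fan out exactly like a hardware 4-level
translation; for a small, aligned range each upper level touches a
single entry. An illustrative trace (not code in this patch):

        /* walk_pml4_range(vgpu, pml4, 0x0, 0x10000, &walk), 16 pages:
         *   PML4E 0 (512GB span)
         *     -> walk_pdp_range: PDPE 0 (1GB span)
         *       -> walk_pd_range: PDE 0 (2MB span)
         *         -> walk_pt_range: PTEs 0..15, each guest gfn translated
         *            via intel_gvt_hypervisor_gfn_to_mfn() and collected
         *            in walk->mfns[] for the later sg_table build.
         */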
+static int intel_vgpu_pv_ppgtt_insert_4lvl(struct intel_vgpu *vgpu,
+                struct intel_vgpu_mm *mm,
+                u64 pml4, u64 start, u64 length, u32 cache_level)
+{
+        int ret = 0;
+        struct sg_table st;
+        struct scatterlist *sg = NULL;
+        int num_pages;
+        struct i915_vma vma;
+        struct ppgtt_walk walk;
+        int i;
+
+        num_pages = length >> PAGE_SHIFT;
+
+        walk.mfn_index = 0;
+        walk.mfns = NULL;
+        walk.pt = NULL;
+
+        walk.mfns = kmalloc_array(num_pages,
+                        sizeof(unsigned long), GFP_KERNEL);
+        if (!walk.mfns) {
+                ret = -ENOMEM;
+                goto fail;
+        }
+
+        walk.pt = (unsigned long *)__get_free_pages(GFP_KERNEL, 0);
+        if (!walk.pt) {
+                ret = -ENOMEM;
+                goto fail;
+        }
+
+        if (sg_alloc_table(&st, num_pages, GFP_KERNEL)) {
+                ret = -ENOMEM;
+                goto fail;
+        }
+
+        ret = walk_pml4_range(vgpu, pml4, start, start + length, &walk);
+        if (ret)
+                goto fail_free_sg;
+
+        WARN_ON(num_pages != walk.mfn_index);
+
+        for_each_sg(st.sgl, sg, num_pages, i) {
+                sg->offset = 0;
+                sg->length = PAGE_SIZE;
+                sg_dma_address(sg) = walk.mfns[i];
+                sg_dma_len(sg) = PAGE_SIZE;
+        }
+
+        memset(&vma, 0, sizeof(vma));
+        vma.node.start = start;
+        vma.pages = &st;
+        mm->ppgtt->vm.insert_entries(&mm->ppgtt->vm, &vma, cache_level, 0);
+
+fail_free_sg:
+        sg_free_table(&st);
+fail:
+        kfree(walk.mfns);
+        free_page((unsigned long)walk.pt);
+
+        return ret;
+}
+
+int intel_vgpu_handle_pv_ppgtt_update(struct intel_vgpu *vgpu,
+                u32 action, struct pv_ppgtt_update *pv_ppgtt)
+{
+        struct intel_vgpu_mm *mm;
+        u64 pdp, start, length;
+        u32 cache_level;
+        int ret = 0;
+
+        pdp = pv_ppgtt->pdp;
+        start = pv_ppgtt->start;
+        length = pv_ppgtt->length;
+        cache_level = pv_ppgtt->cache_level;
+
+        mm = intel_vgpu_find_ppgtt_mm(vgpu, &pdp);
+        if (!mm) {
+                gvt_vgpu_err("failed to find pdp 0x%llx\n", pdp);
+                /* bail out here; the actions below would dereference mm */
+                return -EINVAL;
+        }
+
+        if (action == PV_ACTION_PPGTT_L4_ALLOC) {
+                ret = mm->ppgtt->vm.allocate_va_range(&mm->ppgtt->vm,
+                                start, length);
+                if (ret)
+                        gvt_vgpu_err("failed to alloc %llx\n", pdp);
+        }
+
+        if (action == PV_ACTION_PPGTT_L4_CLEAR) {
+                mm->ppgtt->vm.clear_range(&mm->ppgtt->vm,
+                                start, length);
+        }
+
+        if (action == PV_ACTION_PPGTT_L4_INSERT) {
+                ret = intel_vgpu_pv_ppgtt_insert_4lvl(vgpu, mm,
+                                pdp, start, length, cache_level);
+                if (ret)
+                        gvt_vgpu_err("failed to insert %llx\n", pdp);
+        }
+
+        return ret;
+}
diff --git a/drivers/gpu/drm/i915/gvt/gtt.h b/drivers/gpu/drm/i915/gvt/gtt.h
index 8878931..a969331 100644
--- a/drivers/gpu/drm/i915/gvt/gtt.h
+++ b/drivers/gpu/drm/i915/gvt/gtt.h
@@ -141,6 +141,7 @@ struct intel_gvt_partial_pte {
 
 struct intel_vgpu_mm {
         enum intel_gvt_mm_type type;
+        struct i915_ppgtt *ppgtt;
         struct intel_vgpu *vgpu;
 
         struct kref ref;
@@ -253,6 +254,14 @@ struct intel_vgpu_ppgtt_spt {
         struct list_head post_shadow_list;
 };
 
+/* ppgtt pv support data structure */
+struct pv_ppgtt_update {
+        u64 pdp;
+        u64 start;
+        u64 length;
+        u32 cache_level;
+};
+
 int intel_vgpu_sync_oos_pages(struct intel_vgpu *vgpu);
 
 int intel_vgpu_flush_post_shadow(struct intel_vgpu *vgpu);
@@ -278,4 +287,6 @@ int intel_vgpu_emulate_ggtt_mmio_read(struct intel_vgpu *vgpu,
 int intel_vgpu_emulate_ggtt_mmio_write(struct intel_vgpu *vgpu,
         unsigned int off, void *p_data, unsigned int bytes);
 
+int intel_vgpu_handle_pv_ppgtt_update(struct intel_vgpu *vgpu,
+        u32 action, struct pv_ppgtt_update *pv_ppgtt);
 #endif /* _GVT_GTT_H_ */
diff --git a/drivers/gpu/drm/i915/gvt/gvt.h b/drivers/gpu/drm/i915/gvt/gvt.h
index 71213e0..4e658a5 100644
--- a/drivers/gpu/drm/i915/gvt/gvt.h
+++ b/drivers/gpu/drm/i915/gvt/gvt.h
@@ -53,6 +53,10 @@
 
 #define GVT_MAX_VGPU 8
 
+#define VGPU_PVCAP(vgpu, cap) \
+        ((vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) & (cap)) \
+                && vgpu->shared_page_enabled)
+
 struct intel_gvt_host {
         struct device *dev;
         bool initialized;
diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
index eb09003..7176831 100644
--- a/drivers/gpu/drm/i915/gvt/handlers.c
+++ b/drivers/gpu/drm/i915/gvt/handlers.c
@@ -1209,6 +1209,127 @@ static int pvinfo_mmio_read(struct intel_vgpu *vgpu, unsigned int offset,
         return 0;
 }
 
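The two ct_header helpers that follow decode the first dword of each
command transport message. The ring slot layout, as implied by the
consumer code below (the PV_CT_MSG_* shifts and masks are defined in
the PV headers of this series):

        /* ring slots, one u32 each:
         *   [ header ][ fence ][ data[0] ... data[len-2] ]
         *
         * action = (header >> PV_CT_MSG_ACTION_SHIFT) & PV_CT_MSG_ACTION_MASK;
         * len    = (header >> PV_CT_MSG_LEN_SHIFT) & PV_CT_MSG_LEN_MASK;
         *
         * len counts the fence plus the payload dwords, which is why
         * fetch_pv_command_buffer() computes ct_header_get_len() - 1.
         */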
+static inline unsigned int ct_header_get_len(u32 header)
+{
+        return (header >> PV_CT_MSG_LEN_SHIFT) & PV_CT_MSG_LEN_MASK;
+}
+
+static inline unsigned int ct_header_get_action(u32 header)
+{
+        return (header >> PV_CT_MSG_ACTION_SHIFT) & PV_CT_MSG_ACTION_MASK;
+}
+
+static int fetch_pv_command_buffer(struct intel_vgpu *vgpu,
+                struct vgpu_pv_ct_buffer_desc *desc,
+                u32 *fence, u32 *action, u32 *data)
+{
+        u32 head, tail, len, size, off;
+        u32 cmd_head;
+        u32 avail;
+        int ret;
+
+        /* fetch command descriptor */
+        off = PV_DESC_OFF;
+        ret = intel_gvt_read_shared_page(vgpu, off, desc, sizeof(*desc));
+        if (ret)
+                return ret;
+
+        GEM_BUG_ON(desc->size % 4);
+        GEM_BUG_ON(desc->head % 4);
+        GEM_BUG_ON(desc->tail % 4);
+
+        head = desc->head / 4;
+        tail = desc->tail / 4;
+        size = desc->size / 4;
+
+        /* check bounds only after head/tail/size are read from the desc */
+        GEM_BUG_ON(tail >= size);
+        GEM_BUG_ON(head >= size);
+
+        /* tail == head condition indicates empty */
+        if (unlikely((tail - head) == 0))
+                return -ENODATA;
+
+        /* fetch command head */
+        off = desc->addr + head * 4;
+        ret = intel_gvt_read_shared_page(vgpu, off, &cmd_head, 4);
+        head = (head + 1) % size;
+        if (ret)
+                goto err;
+
+        len = ct_header_get_len(cmd_head) - 1;
+        *action = ct_header_get_action(cmd_head);
+
+        /* fetch command fence */
+        off = desc->addr + head * 4;
+        ret = intel_gvt_read_shared_page(vgpu, off, fence, 4);
+        head = (head + 1) % size;
+        if (ret)
+                goto err;
+
+        /* no command data */
+        if (len == 0)
+                goto err;
+
+        /* fetch command data */
+        avail = size - head;
+        if (len <= avail) {
+                off = desc->addr + head * 4;
+                ret = intel_gvt_read_shared_page(vgpu, off, data, len * 4);
+                head = (head + len) % size;
+                if (ret)
+                        goto err;
+        } else {
+                /* command data wraps around the end of the ring */
+                off = desc->addr + head * 4;
+                ret = intel_gvt_read_shared_page(vgpu, off, data, avail * 4);
+                head = (head + avail) % size;
+                if (ret)
+                        goto err;
+
+                off = desc->addr;
+                ret = intel_gvt_read_shared_page(vgpu, off, &data[avail],
+                                (len - avail) * 4);
+                head = (head + len - avail) % size;
+                if (ret)
+                        goto err;
+        }
+
+err:
+        desc->head = head * 4;
+        return ret;
+}
+
+static int handle_pv_actions(struct intel_vgpu *vgpu)
+{
+        struct vgpu_pv_ct_buffer_desc desc;
+        u32 fence, action;
+        u32 data[32];
+        int ret;
+        struct pv_ppgtt_update *ppgtt;
+
+        ret = fetch_pv_command_buffer(vgpu, &desc, &fence, &action, data);
+        if (ret)
+                return ret;
+
+        switch (action) {
+        case PV_ACTION_PPGTT_L4_ALLOC:
+        case PV_ACTION_PPGTT_L4_CLEAR:
+        case PV_ACTION_PPGTT_L4_INSERT:
+                ppgtt = (struct pv_ppgtt_update *)data;
+                ret = intel_vgpu_handle_pv_ppgtt_update(vgpu, action, ppgtt);
+                break;
+        default:
+                break;
+        }
+
+        /* write command descriptor back */
+        desc.fence = fence;
+        desc.status = ret;
+
+        ret = intel_gvt_write_shared_page(vgpu, PV_DESC_OFF,
+                        &desc, sizeof(desc));
+        return ret;
+}
+
 static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
 {
         enum intel_gvt_gtt_type root_entry_type = GTT_TYPE_PPGTT_ROOT_L4_ENTRY;
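fetch_pv_command_buffer() is the consumer half of a classic
single-producer ring. For symmetry, a sketch of what the guest-side
producer has to do, assuming the same descriptor layout and a mapped
ring at `cmds` (illustrative names, not part of this patch):

        static int pv_ct_send(struct vgpu_pv_ct_buffer_desc *desc, u32 *cmds,
                              const u32 *msg, u32 len) /* dwords, incl. header */
        {
                u32 head = desc->head / 4;
                u32 tail = desc->tail / 4;
                u32 size = desc->size / 4;
                u32 free = (head - tail - 1 + size) % size; /* keep 1 slot open */
                u32 i;

                if (free < len)
                        return -ENOSPC;

                for (i = 0; i < len; i++) {
                        cmds[tail] = msg[i];
                        tail = (tail + 1) % size; /* same wrap rule as consumer */
                }
                desc->tail = tail * 4;
                return 0; /* then raise VGT_G2V_PV_SEND_TRIGGER */
        }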
@@ -1217,6 +1338,7 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
         unsigned long gpa, gfn;
         u16 ver_major = PV_MAJOR;
         u16 ver_minor = PV_MINOR;
+        int ret = 0;
 
         pdps = (u64 *)&vgpu_vreg64_t(vgpu, vgtif_reg(pdp[0]));
 
@@ -1243,6 +1365,9 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
                 intel_gvt_write_shared_page(vgpu, 0, &ver_major, 2);
                 intel_gvt_write_shared_page(vgpu, 2, &ver_minor, 2);
                 break;
+        case VGT_G2V_PV_SEND_TRIGGER:
+                ret = handle_pv_actions(vgpu);
+                break;
         case VGT_G2V_EXECLIST_CONTEXT_CREATE:
         case VGT_G2V_EXECLIST_CONTEXT_DESTROY:
         case 1: /* Remove this in guest driver. */
@@ -1250,7 +1375,7 @@ static int handle_g2v_notification(struct intel_vgpu *vgpu, int notification)
         default:
                 gvt_vgpu_err("Invalid PV notification %d\n", notification);
         }
-        return 0;
+        return ret;
 }
 
 static int send_display_ready_uevent(struct intel_vgpu *vgpu, int ready)
diff --git a/drivers/gpu/drm/i915/gvt/vgpu.c b/drivers/gpu/drm/i915/gvt/vgpu.c
index 811edbb..e8a957a 100644
--- a/drivers/gpu/drm/i915/gvt/vgpu.c
+++ b/drivers/gpu/drm/i915/gvt/vgpu.c
@@ -49,6 +49,8 @@ void populate_pvinfo_page(struct intel_vgpu *vgpu)
         vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_HUGE_GTT;
         vgpu_vreg_t(vgpu, vgtif_reg(vgt_caps)) |= VGT_CAPS_PV;
 
+        if (!intel_vtd_active())
+                vgpu_vreg_t(vgpu, vgtif_reg(pv_caps)) = PV_PPGTT_UPDATE;
         vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.mappable_gmadr.base)) =
                 vgpu_aperture_gmadr_base(vgpu);
         vgpu_vreg_t(vgpu, vgtif_reg(avail_rs.mappable_gmadr.size)) =
                 vgpu_aperture_sz(vgpu);
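On the guest side, the PV PPGTT path should only be taken when both the
generic PV capability and this pv_caps bit are visible. A gating sketch
under assumed PVINFO accessor names (the actual check lives in the guest
half of this series):

        static bool can_use_pv_ppgtt(void __iomem *pvinfo)
        {
                u32 caps    = readl(pvinfo + vgtif_offset(vgt_caps));
                u32 pv_caps = readl(pvinfo + vgtif_offset(pv_caps));

                return (caps & VGT_CAPS_PV) && (pv_caps & PV_PPGTT_UPDATE);
        }

Note the host deliberately leaves pv_caps clear when VT-d is active,
presumably because the direct gfn-to-mfn translation used by the PV
path does not go through the IOMMU mappings.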