From patchwork Thu Aug 11 07:23:24 2016
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 9274551
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Thu, 11 Aug 2016 08:23:24 +0100
Message-Id: <1470900204-1725-1-git-send-email-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.8.1
Subject: [Intel-gfx] [PATCH libdrm v2] intel: Export raw GEM mmap interfaces
Export a set of interfaces to allow the caller to have precise control
over mapping the buffer - but still provide caching of the mmaps
between callers.

Signed-off-by: Chris Wilson
---
 intel/intel_bufmgr.h     |   4 ++
 intel/intel_bufmgr_gem.c | 154 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 158 insertions(+)

diff --git a/intel/intel_bufmgr.h b/intel/intel_bufmgr.h
index a1abbcd..a7285b7 100644
--- a/intel/intel_bufmgr.h
+++ b/intel/intel_bufmgr.h
@@ -184,6 +184,10 @@ int drm_intel_gem_bo_map_unsynchronized(drm_intel_bo *bo);
 int drm_intel_gem_bo_map_gtt(drm_intel_bo *bo);
 int drm_intel_gem_bo_unmap_gtt(drm_intel_bo *bo);
 
+void *drm_intel_gem_bo_map__cpu(drm_intel_bo *bo);
+void *drm_intel_gem_bo_map__gtt(drm_intel_bo *bo);
+void *drm_intel_gem_bo_map__wc(drm_intel_bo *bo);
+
 int drm_intel_gem_bo_get_reloc_count(drm_intel_bo *bo);
 void drm_intel_gem_bo_clear_relocs(drm_intel_bo *bo, int start);
 void drm_intel_gem_bo_start_gtt_access(drm_intel_bo *bo, int write_enable);
diff --git a/intel/intel_bufmgr_gem.c b/intel/intel_bufmgr_gem.c
index 0a4012b..004aeb1 100644
--- a/intel/intel_bufmgr_gem.c
+++ b/intel/intel_bufmgr_gem.c
@@ -211,6 +211,8 @@ struct _drm_intel_bo_gem {
 	void *mem_virtual;
 	/** GTT virtual address for the buffer, saved across map/unmap cycles */
 	void *gtt_virtual;
+	/** WC CPU address for the buffer, saved across map/unmap cycles */
+	void *wc_virtual;
 	/**
 	 * Virtual address of the buffer allocated by user, used for userptr
 	 * objects only.
@@ -1188,6 +1190,11 @@ drm_intel_gem_bo_free(drm_intel_bo *bo)
 		drm_munmap(bo_gem->mem_virtual, bo_gem->bo.size);
 		bufmgr_gem->vma_count--;
 	}
+	if (bo_gem->wc_virtual) {
+		VG(VALGRIND_FREELIKE_BLOCK(bo_gem->wc_virtual, 0));
+		drm_munmap(bo_gem->wc_virtual, bo_gem->bo.size);
+		bufmgr_gem->vma_count--;
+	}
 	if (bo_gem->gtt_virtual) {
 		drm_munmap(bo_gem->gtt_virtual, bo_gem->bo.size);
 		bufmgr_gem->vma_count--;
@@ -1213,6 +1220,9 @@ drm_intel_gem_bo_mark_mmaps_incoherent(drm_intel_bo *bo)
 	if (bo_gem->mem_virtual)
 		VALGRIND_MAKE_MEM_NOACCESS(bo_gem->mem_virtual, bo->size);
 
+	if (bo_gem->wc_virtual)
+		VALGRIND_MAKE_MEM_NOACCESS(bo_gem->wc_virtual, bo->size);
+
 	if (bo_gem->gtt_virtual)
 		VALGRIND_MAKE_MEM_NOACCESS(bo_gem->gtt_virtual, bo->size);
 #endif
@@ -1277,6 +1287,11 @@ static void drm_intel_gem_bo_purge_vma_cache(drm_intel_bufmgr_gem *bufmgr_gem)
 			bo_gem->mem_virtual = NULL;
 			bufmgr_gem->vma_count--;
 		}
+		if (bo_gem->wc_virtual) {
+			drm_munmap(bo_gem->wc_virtual, bo_gem->bo.size);
+			bo_gem->wc_virtual = NULL;
+			bufmgr_gem->vma_count--;
+		}
 		if (bo_gem->gtt_virtual) {
 			drm_munmap(bo_gem->gtt_virtual, bo_gem->bo.size);
 			bo_gem->gtt_virtual = NULL;
@@ -1292,6 +1307,8 @@ static void drm_intel_gem_bo_close_vma(drm_intel_bufmgr_gem *bufmgr_gem,
 	DRMLISTADDTAIL(&bo_gem->vma_list, &bufmgr_gem->vma_cache);
 	if (bo_gem->mem_virtual)
 		bufmgr_gem->vma_count++;
+	if (bo_gem->wc_virtual)
+		bufmgr_gem->vma_count++;
 	if (bo_gem->gtt_virtual)
 		bufmgr_gem->vma_count++;
 	drm_intel_gem_bo_purge_vma_cache(bufmgr_gem);
@@ -1304,6 +1321,8 @@ static void drm_intel_gem_bo_open_vma(drm_intel_bufmgr_gem *bufmgr_gem,
 	DRMLISTDEL(&bo_gem->vma_list);
 	if (bo_gem->mem_virtual)
 		bufmgr_gem->vma_count--;
+	if (bo_gem->wc_virtual)
+		bufmgr_gem->vma_count--;
 	if (bo_gem->gtt_virtual)
 		bufmgr_gem->vma_count--;
 	drm_intel_gem_bo_purge_vma_cache(bufmgr_gem);
@@ -3300,6 +3319,141 @@ drm_intel_bufmgr_gem_unref(drm_intel_bufmgr *bufmgr)
 	}
 }
 
+void *drm_intel_gem_bo_map__gtt(drm_intel_bo *bo)
+{
+	drm_intel_bufmgr_gem *bufmgr_gem = (drm_intel_bufmgr_gem *) bo->bufmgr;
+	drm_intel_bo_gem *bo_gem = (drm_intel_bo_gem *) bo;
+
+	if (bo_gem->gtt_virtual)
+		return bo_gem->gtt_virtual;
+
+	if (bo_gem->is_userptr)
+		return NULL;
+
+	pthread_mutex_lock(&bufmgr_gem->lock);
+	if (bo_gem->gtt_virtual == NULL) {
+		struct drm_i915_gem_mmap_gtt mmap_arg;
+		void *ptr;
+
+		DBG("bo_map_gtt: mmap %d (%s), map_count=%d\n",
+		    bo_gem->gem_handle, bo_gem->name, bo_gem->map_count);
+
+		if (bo_gem->map_count++ == 0)
+			drm_intel_gem_bo_open_vma(bufmgr_gem, bo_gem);
+
+		memclear(mmap_arg);
+		mmap_arg.handle = bo_gem->gem_handle;
+
+		/* Get the fake offset back... */
+		ptr = MAP_FAILED;
+		if (drmIoctl(bufmgr_gem->fd,
+			     DRM_IOCTL_I915_GEM_MMAP_GTT,
+			     &mmap_arg) == 0) {
+			/* and mmap it */
+			ptr = drm_mmap(0, bo->size, PROT_READ | PROT_WRITE,
+				       MAP_SHARED, bufmgr_gem->fd,
+				       mmap_arg.offset);
+		}
+		if (ptr == MAP_FAILED) {
+			if (--bo_gem->map_count == 0)
+				drm_intel_gem_bo_close_vma(bufmgr_gem, bo_gem);
+			ptr = NULL;
+		}
+
+		bo_gem->gtt_virtual = ptr;
+	}
+	pthread_mutex_unlock(&bufmgr_gem->lock);
+
+	return bo_gem->gtt_virtual;
+}
+
+void *drm_intel_gem_bo_map__cpu(drm_intel_bo *bo)
+{
+	drm_intel_bufmgr_gem *bufmgr_gem = (drm_intel_bufmgr_gem *) bo->bufmgr;
+	drm_intel_bo_gem *bo_gem = (drm_intel_bo_gem *) bo;
+
+	if (bo_gem->mem_virtual)
+		return bo_gem->mem_virtual;
+
+	if (bo_gem->is_userptr) {
+		/* Return the same user ptr */
+		return bo_gem->user_virtual;
+	}
+
+	pthread_mutex_lock(&bufmgr_gem->lock);
+	if (!bo_gem->mem_virtual) {
+		struct drm_i915_gem_mmap mmap_arg;
+
+		if (bo_gem->map_count++ == 0)
+			drm_intel_gem_bo_open_vma(bufmgr_gem, bo_gem);
+
+		DBG("bo_map: %d (%s), map_count=%d\n",
+		    bo_gem->gem_handle, bo_gem->name, bo_gem->map_count);
+
+		memclear(mmap_arg);
+		mmap_arg.handle = bo_gem->gem_handle;
+		mmap_arg.size = bo->size;
+		if (drmIoctl(bufmgr_gem->fd,
+			     DRM_IOCTL_I915_GEM_MMAP,
+			     &mmap_arg)) {
+			DBG("%s:%d: Error mapping buffer %d (%s): %s .\n",
+			    __FILE__, __LINE__, bo_gem->gem_handle,
+			    bo_gem->name, strerror(errno));
+			if (--bo_gem->map_count == 0)
+				drm_intel_gem_bo_close_vma(bufmgr_gem, bo_gem);
+		} else {
+			VG(VALGRIND_MALLOCLIKE_BLOCK(mmap_arg.addr_ptr, mmap_arg.size, 0, 1));
+			bo_gem->mem_virtual = (void *)(uintptr_t) mmap_arg.addr_ptr;
+		}
+	}
+	pthread_mutex_unlock(&bufmgr_gem->lock);
+
+	return bo_gem->mem_virtual;
+}
+
+void *drm_intel_gem_bo_map__wc(drm_intel_bo *bo)
+{
+	drm_intel_bufmgr_gem *bufmgr_gem = (drm_intel_bufmgr_gem *) bo->bufmgr;
+	drm_intel_bo_gem *bo_gem = (drm_intel_bo_gem *) bo;
+
+	if (bo_gem->wc_virtual)
+		return bo_gem->wc_virtual;
+
+	if (bo_gem->is_userptr)
+		return NULL;
+
+	pthread_mutex_lock(&bufmgr_gem->lock);
+	if (!bo_gem->wc_virtual) {
+		struct drm_i915_gem_mmap mmap_arg;
+
+		if (bo_gem->map_count++ == 0)
+			drm_intel_gem_bo_open_vma(bufmgr_gem, bo_gem);
+
+		DBG("bo_map: %d (%s), map_count=%d\n",
+		    bo_gem->gem_handle, bo_gem->name, bo_gem->map_count);
+
+		memclear(mmap_arg);
+		mmap_arg.handle = bo_gem->gem_handle;
+		mmap_arg.size = bo->size;
+		mmap_arg.flags = I915_MMAP_WC;
+		if (drmIoctl(bufmgr_gem->fd,
+			     DRM_IOCTL_I915_GEM_MMAP,
+			     &mmap_arg)) {
+			DBG("%s:%d: Error mapping buffer %d (%s): %s .\n",
+			    __FILE__, __LINE__, bo_gem->gem_handle,
+			    bo_gem->name, strerror(errno));
+			if (--bo_gem->map_count == 0)
+				drm_intel_gem_bo_close_vma(bufmgr_gem, bo_gem);
+		} else {
+			VG(VALGRIND_MALLOCLIKE_BLOCK(mmap_arg.addr_ptr, mmap_arg.size, 0, 1));
+			bo_gem->wc_virtual = (void *)(uintptr_t) mmap_arg.addr_ptr;
+		}
+	}
+	pthread_mutex_unlock(&bufmgr_gem->lock);
+
+	return bo_gem->wc_virtual;
+}
+
 /**
  * Initializes the GEM buffer manager, which uses the kernel to allocate, map,
  * and manage map buffer objections.
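
For context (not part of the patch): a minimal, hypothetical usage sketch of
the new entry points, assuming this patch is applied on top of libdrm. The
device node path, buffer size and fallback policy are illustrative only; the
raw map__* calls deliberately perform no set_domain, so any synchronization
with the GPU is left to the caller.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#include "intel_bufmgr.h"

int main(void)
{
	/* Hypothetical device node; a real caller would discover it properly. */
	int fd = open("/dev/dri/card0", O_RDWR);
	drm_intel_bufmgr *bufmgr;
	drm_intel_bo *bo;
	void *ptr;

	if (fd < 0)
		return 1;

	bufmgr = drm_intel_bufmgr_gem_init(fd, 4096);
	bo = drm_intel_bo_alloc(bufmgr, "scratch", 4096, 4096);

	/* Prefer a write-combining CPU mapping; fall back to a GTT mapping
	 * if the kernel lacks I915_MMAP_WC (map__wc returns NULL on failure).
	 */
	ptr = drm_intel_gem_bo_map__wc(bo);
	if (ptr == NULL)
		ptr = drm_intel_gem_bo_map__gtt(bo);

	if (ptr != NULL) {
		/* No implicit domain tracking here: the caller must ensure
		 * the GPU is idle (or otherwise synchronized) before writing.
		 */
		memset(ptr, 0, 4096);
	}

	/* The mapping is cached on the bo and reused by later callers;
	 * no explicit unmap is required before dropping the reference.
	 */
	drm_intel_bo_unreference(bo);
	drm_intel_bufmgr_destroy(bufmgr);
	close(fd);
	return 0;
}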