From patchwork Mon Mar 31 12:28:18 2014
X-Patchwork-Submitter: Lauri Kasanen
X-Patchwork-Id: 3913991
Date: Mon, 31 Mar 2014 15:28:18 +0300
From: Lauri Kasanen <cand@gmx.com>
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 2/4] drm/ttm: Add optional support for two-ended allocation
Message-Id: <20140331152818.cfcea033.cand@gmx.com>

Allocating small bos from one end and large ones from the other helps
reduce fragmentation.

This depends on "drm: Optionally create mm blocks from top-to-bottom"
by Chris Wilson.
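For illustration only (this snippet is not part of the patch, and every
name except ttm_bo_device_init() is hypothetical), a driver opts in by
passing a non-zero byte threshold at init time; passing 0 keeps the
existing bottom-up-only behaviour:

	/* Hypothetical driver init: evictable bos larger than 512 KiB
	 * are allocated top-down, smaller ones bottom-up.  The 512 KiB
	 * cutoff is an arbitrary example value; 0 disables two-ended
	 * allocation altogether.
	 */
	ret = ttm_bo_device_init(&dev_priv->bdev, glob, &my_bo_driver,
				 dev->anon_inode->i_mapping,
				 DRM_FILE_PAGE_OFFSET,
				 dev_priv->need_dma32,
				 512 * 1024);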
Signed-off-by: Lauri Kasanen <cand@gmx.com>
---
 drivers/gpu/drm/ttm/ttm_bo.c         |  4 +++-
 drivers/gpu/drm/ttm/ttm_bo_manager.c | 16 +++++++++++++---
 include/drm/ttm/ttm_bo_driver.h      |  7 ++++++-
 3 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 9df79ac..caf7cd3 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -1453,7 +1453,8 @@ int ttm_bo_device_init(struct ttm_bo_device *bdev,
 		       struct ttm_bo_driver *driver,
 		       struct address_space *mapping,
 		       uint64_t file_page_offset,
-		       bool need_dma32)
+		       bool need_dma32,
+		       uint32_t alloc_threshold)
 {
 	int ret = -EINVAL;
 
@@ -1476,6 +1477,7 @@ int ttm_bo_device_init(struct ttm_bo_device *bdev,
 	bdev->dev_mapping = mapping;
 	bdev->glob = glob;
 	bdev->need_dma32 = need_dma32;
+	bdev->alloc_threshold = alloc_threshold;
 	bdev->val_seq = 0;
 	spin_lock_init(&bdev->fence_lock);
 	mutex_lock(&glob->device_list_mutex);
diff --git a/drivers/gpu/drm/ttm/ttm_bo_manager.c b/drivers/gpu/drm/ttm/ttm_bo_manager.c
index c58eba33..db9fcb4 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_manager.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_manager.c
@@ -55,6 +55,7 @@ static int ttm_bo_man_get_node(struct ttm_mem_type_manager *man,
 	struct ttm_range_manager *rman = (struct ttm_range_manager *) man->priv;
 	struct drm_mm *mm = &rman->mm;
 	struct drm_mm_node *node = NULL;
+	enum drm_mm_allocator_flags aflags = DRM_MM_CREATE_DEFAULT;
 	unsigned long lpfn;
 	int ret;
 
@@ -65,12 +66,21 @@ static int ttm_bo_man_get_node(struct ttm_mem_type_manager *man,
 	node = kzalloc(sizeof(*node), GFP_KERNEL);
 	if (!node)
 		return -ENOMEM;
 
+	/**
+	 * If the driver requested a threshold, use two-ended allocation.
+	 * Pinned buffers require bottom-up allocation.
+	 */
+	if (man->bdev->alloc_threshold &&
+	    !(bo->mem.placement & TTM_PL_FLAG_NO_EVICT) &&
+	    man->bdev->alloc_threshold < (mem->num_pages * PAGE_SIZE))
+		aflags = DRM_MM_CREATE_TOP;
 	spin_lock(&rman->lock);
-	ret = drm_mm_insert_node_in_range(mm, node, mem->num_pages,
-					  mem->page_alignment,
+	ret = drm_mm_insert_node_in_range_generic(mm, node, mem->num_pages,
+					  mem->page_alignment, 0,
 					  placement->fpfn, lpfn,
-					  DRM_MM_SEARCH_BEST);
+					  DRM_MM_SEARCH_BEST,
+					  aflags);
 	spin_unlock(&rman->lock);
 
 	if (unlikely(ret)) {
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index 5d8aabe..f5fe6df 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -565,6 +565,7 @@ struct ttm_bo_device {
 	struct delayed_work wq;
 
 	bool need_dma32;
+	uint32_t alloc_threshold;
 };
 
 /**
@@ -751,6 +752,8 @@ extern int ttm_bo_device_release(struct ttm_bo_device *bdev);
 * @file_page_offset: Offset into the device address space that is available
 * for buffer data. This ensures compatibility with other users of the
 * address space.
+ * @alloc_threshold: If non-zero, use this as the threshold for two-ended
+ * allocation.
 *
 * Initializes a struct ttm_bo_device:
 * Returns:
@@ -760,7 +763,9 @@ extern int ttm_bo_device_init(struct ttm_bo_device *bdev,
 			      struct ttm_bo_global *glob,
 			      struct ttm_bo_driver *driver,
 			      struct address_space *mapping,
-			      uint64_t file_page_offset, bool need_dma32);
+			      uint64_t file_page_offset,
+			      bool need_dma32,
+			      uint32_t alloc_threshold);
 
 /**
  * ttm_bo_unmap_virtual
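To make the cutoff concrete (illustrative numbers, not taken from the
patch): with 4 KiB pages and alloc_threshold = 512 * 1024, an evictable
129-page bo spans 129 * 4096 = 528384 bytes, strictly above the
threshold, so it is placed top-down (DRM_MM_CREATE_TOP); a 128-page bo
is exactly 524288 bytes, which is not strictly greater, so it stays
bottom-up, as does any pinned (TTM_PL_FLAG_NO_EVICT) bo regardless of
size.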