From patchwork Thu Oct 5 13:07:52 2017
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 9987113
From: Christian König
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [PATCH 4/6] drm/ttm: move more logic into ttm_page_pool_get_pages
Date: Thu, 5 Oct 2017 15:07:52 +0200
Message-Id: <1507208874-3448-4-git-send-email-deathsimple@vodafone.de>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1507208874-3448-1-git-send-email-deathsimple@vodafone.de>
References: <1507208874-3448-1-git-send-email-deathsimple@vodafone.de>
List-Id: Direct Rendering Infrastructure - Development

From: Christian König

Make it easier to add a huge page pool.

Signed-off-by: Christian König
---
 drivers/gpu/drm/ttm/ttm_page_alloc.c | 98 +++++++++++++++++++-----------------
 1 file changed, 52 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index e8d42d4..a800387 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -627,19 +627,20 @@ static void ttm_page_pool_fill_locked(struct ttm_page_pool *pool,
 }
 
 /**
- * Cut 'count' number of pages from the pool and put them on the return list.
+ * Allocate pages from the pool and put them on the return list.
  *
- * @return count of pages still required to fulfill the request.
+ * @return zero for success or negative error code.
  */
-static unsigned ttm_page_pool_get_pages(struct ttm_page_pool *pool,
-					struct list_head *pages,
-					int ttm_flags,
-					enum ttm_caching_state cstate,
-					unsigned count)
+static int ttm_page_pool_get_pages(struct ttm_page_pool *pool,
+				   struct list_head *pages,
+				   int ttm_flags,
+				   enum ttm_caching_state cstate,
+				   unsigned count)
 {
 	unsigned long irq_flags;
 	struct list_head *p;
 	unsigned i;
+	int r = 0;
 
 	spin_lock_irqsave(&pool->lock, irq_flags);
 	ttm_page_pool_fill_locked(pool, ttm_flags, cstate, count, &irq_flags);
@@ -672,7 +673,35 @@ static unsigned ttm_page_pool_get_pages(struct ttm_page_pool *pool,
 		count = 0;
 out:
 	spin_unlock_irqrestore(&pool->lock, irq_flags);
-	return count;
+
+	/* clear the pages coming from the pool if requested */
+	if (ttm_flags & TTM_PAGE_FLAG_ZERO_ALLOC) {
+		struct page *page;
+
+		list_for_each_entry(page, pages, lru) {
+			if (PageHighMem(page))
+				clear_highpage(page);
+			else
+				clear_page(page_address(page));
+		}
+	}
+
+	/* If pool didn't have enough pages allocate new one. */
+	if (count) {
+		gfp_t gfp_flags = pool->gfp_flags;
+
+		/* set zero flag for page allocation if required */
+		if (ttm_flags & TTM_PAGE_FLAG_ZERO_ALLOC)
+			gfp_flags |= __GFP_ZERO;
+
+		/* ttm_alloc_new_pages doesn't reference pool so we can run
+		 * multiple requests in parallel.
+		 **/
+		r = ttm_alloc_new_pages(pages, gfp_flags, ttm_flags, cstate,
+					count);
+	}
+
+	return r;
 }
 
 /* Put all pages in pages list to correct pool to wait for reuse */
@@ -742,18 +771,18 @@ static int ttm_get_pages(struct page **pages, unsigned npages, int flags,
 	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
 	struct list_head plist;
 	struct page *p = NULL;
-	gfp_t gfp_flags = GFP_USER;
 	unsigned count;
 	int r;
 
-	/* set zero flag for page allocation if required */
-	if (flags & TTM_PAGE_FLAG_ZERO_ALLOC)
-		gfp_flags |= __GFP_ZERO;
-
 	/* No pool for cached pages */
 	if (pool == NULL) {
+		gfp_t gfp_flags = GFP_USER;
 		unsigned i, j;
 
+		/* set zero flag for page allocation if required */
+		if (flags & TTM_PAGE_FLAG_ZERO_ALLOC)
+			gfp_flags |= __GFP_ZERO;
+
 		if (flags & TTM_PAGE_FLAG_DMA32)
 			gfp_flags |= GFP_DMA32;
 		else
@@ -791,44 +820,21 @@ static int ttm_get_pages(struct page **pages, unsigned npages, int flags,
 		return 0;
 	}
 
-	/* combine zero flag to pool flags */
-	gfp_flags |= pool->gfp_flags;
-
 	/* First we take pages from the pool */
 	INIT_LIST_HEAD(&plist);
-	npages = ttm_page_pool_get_pages(pool, &plist, flags, cstate, npages);
+	r = ttm_page_pool_get_pages(pool, &plist, flags, cstate, npages);
+
 	count = 0;
-	list_for_each_entry(p, &plist, lru) {
+	list_for_each_entry(p, &plist, lru)
 		pages[count++] = p;
-	}
-
-	/* clear the pages coming from the pool if requested */
-	if (flags & TTM_PAGE_FLAG_ZERO_ALLOC) {
-		list_for_each_entry(p, &plist, lru) {
-			if (PageHighMem(p))
-				clear_highpage(p);
-			else
-				clear_page(page_address(p));
-		}
-	}
 
-	/* If pool didn't have enough pages allocate new one. */
-	if (npages > 0) {
-		/* ttm_alloc_new_pages doesn't reference pool so we can run
-		 * multiple requests in parallel.
-		 **/
-		INIT_LIST_HEAD(&plist);
-		r = ttm_alloc_new_pages(&plist, gfp_flags, flags, cstate, npages);
-		list_for_each_entry(p, &plist, lru) {
-			pages[count++] = p;
-		}
-		if (r) {
-			/* If there is any pages in the list put them back to
-			 * the pool.
-			 */
-			pr_err("Failed to allocate extra pages for large request\n");
-			ttm_put_pages(pages, count, flags, cstate);
-			return r;
-		}
+	if (r) {
+		/* If there is any pages in the list put them back to
+		 * the pool.
+		 */
+		pr_err("Failed to allocate extra pages for large request\n");
+		ttm_put_pages(pages, count, flags, cstate);
+		return r;
 	}
 
 	return 0;