From patchwork Fri Sep 22 12:13:23 2017
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 9965923
From: Christian König
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [PATCH 7/9] drm/ttm: move more logic into ttm_page_pool_get_pages
Date: Fri, 22 Sep 2017 14:13:23 +0200
Message-Id: <1506082405-1556-7-git-send-email-deathsimple@vodafone.de>
In-Reply-To: <1506082405-1556-1-git-send-email-deathsimple@vodafone.de>
References: <1506082405-1556-1-git-send-email-deathsimple@vodafone.de>
List-Id: Direct Rendering Infrastructure - Development

From: Christian König

Make it easier to add a huge page pool.

Signed-off-by: Christian König
---
 drivers/gpu/drm/ttm/ttm_page_alloc.c | 98 +++++++++++++++++++-----------------
 1 file changed, 52 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index 3dd2a8a..7044ffa 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -627,19 +627,20 @@ static void ttm_page_pool_fill_locked(struct ttm_page_pool *pool,
 }
 
 /**
- * Cut 'count' number of pages from the pool and put them on the return list.
+ * Allocate pages from the pool and put them on the return list.
  *
- * @return count of pages still required to fulfill the request.
+ * @return zero for success or negative error code.
  */
-static unsigned ttm_page_pool_get_pages(struct ttm_page_pool *pool,
-					struct list_head *pages,
-					int ttm_flags,
-					enum ttm_caching_state cstate,
-					unsigned count)
+static int ttm_page_pool_get_pages(struct ttm_page_pool *pool,
+				   struct list_head *pages,
+				   int ttm_flags,
+				   enum ttm_caching_state cstate,
+				   unsigned count)
 {
 	unsigned long irq_flags;
 	struct list_head *p;
 	unsigned i;
+	int r = 0;
 
 	spin_lock_irqsave(&pool->lock, irq_flags);
 	ttm_page_pool_fill_locked(pool, ttm_flags, cstate, count, &irq_flags);
@@ -672,7 +673,35 @@ static unsigned ttm_page_pool_get_pages(struct ttm_page_pool *pool,
 		count = 0;
 out:
 	spin_unlock_irqrestore(&pool->lock, irq_flags);
-	return count;
+
+	/* clear the pages coming from the pool if requested */
+	if (ttm_flags & TTM_PAGE_FLAG_ZERO_ALLOC) {
+		struct page *page;
+
+		list_for_each_entry(page, pages, lru) {
+			if (PageHighMem(page))
+				clear_highpage(page);
+			else
+				clear_page(page_address(page));
+		}
+	}
+
+	/* If pool didn't have enough pages allocate new one. */
+	if (count) {
+		gfp_t gfp_flags = pool->gfp_flags;
+
+		/* set zero flag for page allocation if required */
+		if (ttm_flags & TTM_PAGE_FLAG_ZERO_ALLOC)
+			gfp_flags |= __GFP_ZERO;
+
+		/* ttm_alloc_new_pages doesn't reference pool so we can run
+		 * multiple requests in parallel.
+		 **/
+		r = ttm_alloc_new_pages(pages, gfp_flags, ttm_flags, cstate,
+					count);
+	}
+
+	return r;
 }
 
 /* Put all pages in pages list to correct pool to wait for reuse */
@@ -742,18 +771,18 @@ static int ttm_get_pages(struct page **pages, unsigned npages, int flags,
 	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
 	struct list_head plist;
 	struct page *p = NULL;
-	gfp_t gfp_flags = GFP_USER;
 	unsigned count;
 	int r;
 
-	/* set zero flag for page allocation if required */
-	if (flags & TTM_PAGE_FLAG_ZERO_ALLOC)
-		gfp_flags |= __GFP_ZERO;
-
 	/* No pool for cached pages */
 	if (pool == NULL) {
+		gfp_t gfp_flags = GFP_USER;
 		unsigned i, j;
 
+		/* set zero flag for page allocation if required */
+		if (flags & TTM_PAGE_FLAG_ZERO_ALLOC)
+			gfp_flags |= __GFP_ZERO;
+
 		if (flags & TTM_PAGE_FLAG_DMA32)
 			gfp_flags |= GFP_DMA32;
 		else
@@ -791,44 +820,21 @@ static int ttm_get_pages(struct page **pages, unsigned npages, int flags,
 		return 0;
 	}
 
-	/* combine zero flag to pool flags */
-	gfp_flags |= pool->gfp_flags;
-
 	/* First we take pages from the pool */
 	INIT_LIST_HEAD(&plist);
-	npages = ttm_page_pool_get_pages(pool, &plist, flags, cstate, npages);
+	r = ttm_page_pool_get_pages(pool, &plist, flags, cstate, npages);
+
 	count = 0;
-	list_for_each_entry(p, &plist, lru) {
+	list_for_each_entry(p, &plist, lru)
 		pages[count++] = p;
-	}
-
-	/* clear the pages coming from the pool if requested */
-	if (flags & TTM_PAGE_FLAG_ZERO_ALLOC) {
-		list_for_each_entry(p, &plist, lru) {
-			if (PageHighMem(p))
-				clear_highpage(p);
-			else
-				clear_page(page_address(p));
-		}
-	}
 
-	/* If pool didn't have enough pages allocate new one. */
-	if (npages > 0) {
-		/* ttm_alloc_new_pages doesn't reference pool so we can run
-		 * multiple requests in parallel.
-		 **/
-		INIT_LIST_HEAD(&plist);
-		r = ttm_alloc_new_pages(&plist, gfp_flags, flags, cstate, npages);
-		list_for_each_entry(p, &plist, lru) {
-			pages[count++] = p;
-		}
-		if (r) {
-			/* If there is any pages in the list put them back to
-			 * the pool. */
-			pr_err("Failed to allocate extra pages for large request\n");
-			ttm_put_pages(pages, count, flags, cstate);
-			return r;
-		}
+	if (r) {
+		/* If there is any pages in the list put them back to
+		 * the pool.
+		 */
+		pr_err("Failed to allocate extra pages for large request\n");
+		ttm_put_pages(pages, count, flags, cstate);
+		return r;
 	}
 
 	return 0;
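
Aside, not part of the patch itself: for readers skimming the series, the sketch
below restates the new calling convention in simplified form. After this change
ttm_page_pool_get_pages() zeroes pages and falls back to ttm_alloc_new_pages()
internally and returns zero or a negative error code, so a caller only copies
the returned list into its array and checks that one code. The wrapper name
example_get_pages() is made up for illustration; this is not the exact kernel
code, and it assumes the usual TTM headers are in scope.

static int example_get_pages(struct ttm_page_pool *pool, struct page **pages,
			     unsigned npages, int flags,
			     enum ttm_caching_state cstate)
{
	struct list_head plist;
	struct page *p;
	unsigned count = 0;
	int r;

	INIT_LIST_HEAD(&plist);

	/* Pool helper now handles zeroing and the fallback allocation. */
	r = ttm_page_pool_get_pages(pool, &plist, flags, cstate, npages);

	/* Copy whatever was obtained into the caller's array. */
	list_for_each_entry(p, &plist, lru)
		pages[count++] = p;

	if (r) {
		/* On failure, hand the gathered pages back to the pool. */
		ttm_put_pages(pages, count, flags, cstate);
		return r;
	}

	return 0;
}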