From patchwork Sat Nov 21 04:49:55 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 11923171
From: John Stultz
To: lkml
Subject: [PATCH v6 5/5] dma-buf: system_heap: Allocate higher order pages if available
Date: Sat, 21 Nov 2020 04:49:55 +0000
Message-Id: <20201121044955.58215-6-john.stultz@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201121044955.58215-1-john.stultz@linaro.org>
References: <20201121044955.58215-1-john.stultz@linaro.org>
Cc: Sandeep Patil, dri-devel@lists.freedesktop.org, Ezequiel Garcia,
 Robin Murphy, James Jones, Liam Mark, Laura Abbott, Chris Goldsworthy,
 Hridya Valsaraju, Ørjan Eide, linux-media@vger.kernel.org,
 Suren Baghdasaryan, Daniel Mentz

While the system heap can return non-contiguous pages, try to allocate
larger order pages if possible. This will allow slight performance gains
and make implementing page pooling easier.

Cc: Sumit Semwal
Cc: Liam Mark
Cc: Laura Abbott
Cc: Brian Starkey
Cc: Hridya Valsaraju
Cc: Suren Baghdasaryan
Cc: Sandeep Patil
Cc: Daniel Mentz
Cc: Chris Goldsworthy
Cc: Ørjan Eide
Cc: Robin Murphy
Cc: Ezequiel Garcia
Cc: Simon Ser
Cc: James Jones
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Brian Starkey
Signed-off-by: John Stultz
---
v3:
* Use page_size() rather than open-coding it
v5:
* Add a comment explaining the rationale behind the chosen order sizes

(Two illustrative sketches, not part of the patch, follow the diff below.)
---
 drivers/dma-buf/heaps/system_heap.c | 89 +++++++++++++++++++++++------
 1 file changed, 71 insertions(+), 18 deletions(-)

diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index b1a7b355132f..de275b7ff1ed 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -40,6 +40,20 @@ struct dma_heap_attachment {
 	bool mapped;
 };
 
+#define HIGH_ORDER_GFP	(((GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN \
+				| __GFP_NORETRY) & ~__GFP_RECLAIM) \
+				| __GFP_COMP)
+#define LOW_ORDER_GFP (GFP_HIGHUSER | __GFP_ZERO | __GFP_COMP)
+static gfp_t order_flags[] = {HIGH_ORDER_GFP, LOW_ORDER_GFP, LOW_ORDER_GFP};
+/*
+ * The selection of the orders used for allocation (1MB, 64K, 4K) is designed
+ * to match with the sizes often found in IOMMUs. Using order 4 pages instead
+ * of order 0 pages can significantly improve the performance of many IOMMUs
+ * by reducing TLB pressure and time spent updating page tables.
+ */
+static const unsigned int orders[] = {8, 4, 0};
+#define NUM_ORDERS ARRAY_SIZE(orders)
+
 static struct sg_table *dup_sg_table(struct sg_table *table)
 {
 	struct sg_table *new_table;
@@ -272,8 +286,11 @@ static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
 	int i;
 
 	table = &buffer->sg_table;
-	for_each_sgtable_sg(table, sg, i)
-		__free_page(sg_page(sg));
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		struct page *page = sg_page(sg);
+
+		__free_pages(page, compound_order(page));
+	}
 	sg_free_table(table);
 	kfree(buffer);
 }
@@ -291,6 +308,26 @@ static const struct dma_buf_ops system_heap_buf_ops = {
 	.release = system_heap_dma_buf_release,
 };
 
+static struct page *alloc_largest_available(unsigned long size,
+					    unsigned int max_order)
+{
+	struct page *page;
+	int i;
+
+	for (i = 0; i < NUM_ORDERS; i++) {
+		if (size < (PAGE_SIZE << orders[i]))
+			continue;
+		if (max_order < orders[i])
+			continue;
+
+		page = alloc_pages(order_flags[i], orders[i]);
+		if (!page)
+			continue;
+		return page;
+	}
+	return NULL;
+}
+
 static int system_heap_allocate(struct dma_heap *heap,
 				unsigned long len,
 				unsigned long fd_flags,
@@ -298,11 +335,13 @@ static int system_heap_allocate(struct dma_heap *heap,
 {
 	struct system_heap_buffer *buffer;
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	unsigned long size_remaining = len;
+	unsigned int max_order = orders[0];
 	struct dma_buf *dmabuf;
 	struct sg_table *table;
 	struct scatterlist *sg;
-	pgoff_t pagecount;
-	pgoff_t pg;
+	struct list_head pages;
+	struct page *page, *tmp_page;
 	int i, ret = -ENOMEM;
 
 	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
@@ -314,25 +353,35 @@ static int system_heap_allocate(struct dma_heap *heap,
 	buffer->heap = heap;
 	buffer->len = len;
 
-	table = &buffer->sg_table;
-	pagecount = len / PAGE_SIZE;
-	if (sg_alloc_table(table, pagecount, GFP_KERNEL))
-		goto free_buffer;
-
-	sg = table->sgl;
-	for (pg = 0; pg < pagecount; pg++) {
-		struct page *page;
+	INIT_LIST_HEAD(&pages);
+	i = 0;
+	while (size_remaining > 0) {
 		/*
 		 * Avoid trying to allocate memory if the process
 		 * has been killed by SIGKILL
 		 */
 		if (fatal_signal_pending(current))
-			goto free_pages;
-		page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+			goto free_buffer;
+
+		page = alloc_largest_available(size_remaining, max_order);
 		if (!page)
-			goto free_pages;
+			goto free_buffer;
+
+		list_add_tail(&page->lru, &pages);
+		size_remaining -= page_size(page);
+		max_order = compound_order(page);
+		i++;
+	}
+
+	table = &buffer->sg_table;
+	if (sg_alloc_table(table, i, GFP_KERNEL))
+		goto free_buffer;
+
+	sg = table->sgl;
+	list_for_each_entry_safe(page, tmp_page, &pages, lru) {
 		sg_set_page(sg, page, page_size(page), 0);
 		sg = sg_next(sg);
+		list_del(&page->lru);
 	}
 
 	/* create the dmabuf */
@@ -352,14 +401,18 @@ static int system_heap_allocate(struct dma_heap *heap,
 		/* just return, as put will call release and that will free */
 		return ret;
 	}
-
 	return ret;
 
 free_pages:
-	for_each_sgtable_sg(table, sg, i)
-		__free_page(sg_page(sg));
+	for_each_sgtable_sg(table, sg, i) {
+		struct page *p = sg_page(sg);
+
+		__free_pages(p, compound_order(p));
+	}
 	sg_free_table(table);
 free_buffer:
+	list_for_each_entry_safe(page, tmp_page, &pages, lru)
+		__free_pages(page, compound_order(page));
 	kfree(buffer);
 
 	return ret;
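
Sketch 1 (editor's illustration, not part of the patch): the size and
max_order walk performed by alloc_largest_available() and the allocation
loop, modeled standalone in plain userspace C. It follows only the
success-path bookkeeping, where max_order ratchets downward with each
chunk so later iterations never retry an order larger than the last one
that worked, and it omits the GFP flags and real allocation failures. It
also assumes a page-multiple length, as the real heap effectively does.

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

static const unsigned int orders[] = {8, 4, 0};
#define NUM_ORDERS	(sizeof(orders) / sizeof(orders[0]))

int main(void)
{
	/* 1MB + 64K + 4K: one chunk of each order, if nothing fails */
	unsigned long size_remaining = (1UL << 20) + (64UL << 10) + PAGE_SIZE;
	unsigned int max_order = orders[0];

	while (size_remaining > 0) {
		unsigned int i, order = 0;

		/* pick the largest order that fits and is still allowed */
		for (i = 0; i < NUM_ORDERS; i++) {
			if (size_remaining < (PAGE_SIZE << orders[i]))
				continue;
			if (max_order < orders[i])
				continue;
			order = orders[i];
			break;
		}

		printf("allocate order-%u chunk (%lu bytes)\n",
		       order, PAGE_SIZE << order);
		size_remaining -= PAGE_SIZE << order;
		max_order = order;	/* never retry larger orders */
	}
	return 0;
}

Running it prints one order-8, one order-4, and one order-0 allocation,
which is how the heap would tile that request when memory is unfragmented.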
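
Sketch 2 (editor's illustration, not part of the patch): a minimal
userspace program that exercises this allocation path through the
dma-heap ioctl interface from <linux/dma-heap.h>. It assumes a kernel
that exposes /dev/dma_heap/system and keeps error handling to the bare
minimum.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-heap.h>

int main(void)
{
	struct dma_heap_allocation_data data;
	int heap_fd;

	heap_fd = open("/dev/dma_heap/system", O_RDWR);
	if (heap_fd < 0) {
		perror("open /dev/dma_heap/system");
		return 1;
	}

	memset(&data, 0, sizeof(data));
	data.len = 4 << 20;	/* 4MB: large enough to use order-8 chunks */
	data.fd_flags = O_RDWR | O_CLOEXEC;

	if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data) < 0) {
		perror("DMA_HEAP_IOCTL_ALLOC");
		close(heap_fd);
		return 1;
	}

	printf("got dma-buf fd %d\n", data.fd);
	close(data.fd);
	close(heap_fd);
	return 0;
}

On an unfragmented system a request like this should be satisfied mostly
by order-8 chunks, which is where the IOMMU map-time savings described in
the commit message come from.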