From patchwork Fri Oct 21 14:11:23 2016
X-Patchwork-Submitter: Tvrtko Ursulin
X-Patchwork-Id: 9389325
From: Tvrtko Ursulin
To: Intel-gfx@lists.freedesktop.org
Cc: linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
    Chris Wilson, Tvrtko Ursulin
Subject: [PATCH 5/5] drm/i915: Use __sg_alloc_table_from_pages for userptr
 allocations
Date: Fri, 21 Oct 2016 15:11:23 +0100
Message-Id: <1477059083-3500-6-git-send-email-tvrtko.ursulin@linux.intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1477059083-3500-1-git-send-email-tvrtko.ursulin@linux.intel.com>
References: <1477059083-3500-1-git-send-email-tvrtko.ursulin@linux.intel.com>
List-ID: X-Mailing-List: linux-media@vger.kernel.org

From: Tvrtko Ursulin

With the
addition of __sg_alloc_table_from_pages we can control the maximum
coalescing size and eliminate a separate path for allocating the backing
store here. This also makes the tables as compact as possible in all
cases.

Signed-off-by: Tvrtko Ursulin
---
 drivers/gpu/drm/i915/i915_drv.h         |  9 +++++++++
 drivers/gpu/drm/i915/i915_gem.c         | 11 +----------
 drivers/gpu/drm/i915/i915_gem_userptr.c | 29 +++++++----------------------
 3 files changed, 17 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 5b2b7f3c6e76..577a3a87f680 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -4001,4 +4001,13 @@ int remap_io_mapping(struct vm_area_struct *vma,
 	__T;								\
 })
 
+static inline unsigned int i915_swiotlb_max_size(void)
+{
+#if IS_ENABLED(CONFIG_SWIOTLB)
+	return swiotlb_nr_tbl() << IO_TLB_SHIFT;
+#else
+	return UINT_MAX;
+#endif
+}
+
 #endif
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 4bf675568a37..18125d7279c6 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2205,15 +2205,6 @@ i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
 	return 0;
 }
 
-static unsigned int swiotlb_max_size(void)
-{
-#if IS_ENABLED(CONFIG_SWIOTLB)
-	return swiotlb_nr_tbl() << IO_TLB_SHIFT;
-#else
-	return UINT_MAX;
-#endif
-}
-
 static int
 i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 {
@@ -2222,7 +2213,7 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 	struct address_space *mapping;
 	struct sg_table *st;
 	struct page *page, **pages;
-	unsigned int max_segment = swiotlb_max_size();
+	unsigned int max_segment = i915_swiotlb_max_size();
 	int ret;
 	gfp_t gfp;
 
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index e537930c64b5..17dca225a3e0 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -397,36 +397,21 @@ struct get_pages_work {
 	struct task_struct *task;
 };
 
-#if IS_ENABLED(CONFIG_SWIOTLB)
-#define swiotlb_active()	swiotlb_nr_tbl()
-#else
-#define swiotlb_active()	0
-#endif
-
 static int
 st_set_pages(struct sg_table **st, struct page **pvec, int num_pages)
 {
-	struct scatterlist *sg;
-	int ret, n;
+	unsigned int max_segment = i915_swiotlb_max_size();
+	int ret;
 
 	*st = kmalloc(sizeof(**st), GFP_KERNEL);
 	if (*st == NULL)
 		return -ENOMEM;
 
-	if (swiotlb_active()) {
-		ret = sg_alloc_table(*st, num_pages, GFP_KERNEL);
-		if (ret)
-			goto err;
-
-		for_each_sg((*st)->sgl, sg, num_pages, n)
-			sg_set_page(sg, pvec[n], PAGE_SIZE, 0);
-	} else {
-		ret = sg_alloc_table_from_pages(*st, pvec, num_pages,
-						0, num_pages << PAGE_SHIFT,
-						GFP_KERNEL);
-		if (ret)
-			goto err;
-	}
+	ret = __sg_alloc_table_from_pages(*st, pvec, num_pages, 0,
+					  num_pages << PAGE_SHIFT,
+					  GFP_KERNEL, max_segment);
+	if (ret)
+		goto err;
 
 	return 0;