From patchwork Tue Apr 5 12:57:36 2016
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 8751351
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org
Subject: [PATCH 5/6] drm,i915: Introduce drm_malloc_gfp()
Date: Tue, 5 Apr 2016 13:57:36 +0100
Message-Id: <1459861057-25931-6-git-send-email-chris@chris-wilson.co.uk>
In-Reply-To: <1459861057-25931-1-git-send-email-chris@chris-wilson.co.uk>
References: <1459861057-25931-1-git-send-email-chris@chris-wilson.co.uk>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
I have instances where I want to use drm_malloc_ab() but with a custom
gfp mask. And with those, where I want a temporary allocation, I want
to try a high-order kmalloc() before using a vmalloc(). So refactor my
usage into drm_malloc_gfp().

Signed-off-by: Chris Wilson
Cc: dri-devel@lists.freedesktop.org
Cc: Ville Syrjälä
Reviewed-by: Ville Syrjälä
Acked-by: Dave Airlie
---
 drivers/gpu/drm/i915/i915_gem.c            |  4 +---
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  8 +++-----
 drivers/gpu/drm/i915/i915_gem_gtt.c        |  5 +++--
 drivers/gpu/drm/i915/i915_gem_userptr.c    | 15 ++++-----------
 include/drm/drm_mem_util.h                 | 19 +++++++++++++++++++
 5 files changed, 30 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index be4cf13343d5..985f067c1f0e 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2421,9 +2421,7 @@ void *i915_gem_object_pin_vmap(struct drm_i915_gem_object *obj)
 	int n;
 
 	n = obj->base.size >> PAGE_SHIFT;
-	pages = kmalloc(n*sizeof(*pages), GFP_TEMPORARY | __GFP_NOWARN);
-	if (pages == NULL)
-		pages = drm_malloc_ab(n, sizeof(*pages));
+	pages = drm_malloc_gfp(n, sizeof(*pages), GFP_TEMPORARY);
 	if (pages != NULL) {
 		n = 0;
 		for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0)
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 0ee61fd014df..6ee4f00f620c 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1783,11 +1783,9 @@ i915_gem_execbuffer2(struct drm_device *dev, void *data,
 		return -EINVAL;
 	}
 
-	exec2_list = kmalloc(sizeof(*exec2_list)*args->buffer_count,
-			     GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY);
-	if (exec2_list == NULL)
-		exec2_list = drm_malloc_ab(sizeof(*exec2_list),
-					   args->buffer_count);
+	exec2_list = drm_malloc_gfp(args->buffer_count,
+				    sizeof(*exec2_list),
+				    GFP_TEMPORARY);
 	if (exec2_list == NULL) {
 		DRM_DEBUG("Failed to allocate exec list for %d buffers\n",
 			  args->buffer_count);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index ae9cb2735767..18f2bd7caad5 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -3413,8 +3413,9 @@ intel_rotate_fb_obj_pages(struct intel_rotation_info *rot_info,
 	int ret = -ENOMEM;
 
 	/* Allocate a temporary list of source pages for random access. */
-	page_addr_list = drm_malloc_ab(obj->base.size / PAGE_SIZE,
-				       sizeof(dma_addr_t));
+	page_addr_list = drm_malloc_gfp(obj->base.size / PAGE_SIZE,
+					sizeof(dma_addr_t),
+					GFP_TEMPORARY);
 	if (!page_addr_list)
 		return ERR_PTR(ret);
 
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 291a9393493d..67883ebf9504 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -494,10 +494,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
 	ret = -ENOMEM;
 	pinned = 0;
 
-	pvec = kmalloc(npages*sizeof(struct page *),
-		       GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY);
-	if (pvec == NULL)
-		pvec = drm_malloc_ab(npages, sizeof(struct page *));
+	pvec = drm_malloc_gfp(npages, sizeof(struct page *), GFP_TEMPORARY);
 	if (pvec != NULL) {
 		struct mm_struct *mm = obj->userptr.mm->mm;
 
@@ -634,14 +631,10 @@ i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 	pvec = NULL;
 	pinned = 0;
 	if (obj->userptr.mm->mm == current->mm) {
-		pvec = kmalloc(num_pages*sizeof(struct page *),
-			       GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY);
+		pvec = drm_malloc_gfp(num_pages, sizeof(struct page *), GFP_TEMPORARY);
 		if (pvec == NULL) {
-			pvec = drm_malloc_ab(num_pages, sizeof(struct page *));
-			if (pvec == NULL) {
-				__i915_gem_userptr_set_active(obj, false);
-				return -ENOMEM;
-			}
+			__i915_gem_userptr_set_active(obj, false);
+			return -ENOMEM;
 		}
 
 		pinned = __get_user_pages_fast(obj->userptr.ptr, num_pages,
diff --git a/include/drm/drm_mem_util.h b/include/drm/drm_mem_util.h
index e42495ad8136..741ce75a72b4 100644
--- a/include/drm/drm_mem_util.h
+++ b/include/drm/drm_mem_util.h
@@ -54,6 +54,25 @@ static __inline__ void *drm_malloc_ab(size_t nmemb, size_t size)
 			 GFP_KERNEL | __GFP_HIGHMEM, PAGE_KERNEL);
 }
 
+static __inline__ void *drm_malloc_gfp(size_t nmemb, size_t size, gfp_t gfp)
+{
+	if (size != 0 && nmemb > SIZE_MAX / size)
+		return NULL;
+
+	if (size * nmemb <= PAGE_SIZE)
+		return kmalloc(nmemb * size, gfp);
+
+	if (gfp & __GFP_RECLAIMABLE) {
+		void *ptr = kmalloc(nmemb * size,
+				    gfp | __GFP_NOWARN | __GFP_NORETRY);
+		if (ptr)
+			return ptr;
+	}
+
+	return __vmalloc(size * nmemb,
+			 gfp | __GFP_HIGHMEM, PAGE_KERNEL);
+}
+
 static __inline void drm_free_large(void *ptr)
 {
 	kvfree(ptr);
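
For reviewers: the net effect at each converted call site is to collapse
the old two-step kmalloc()-then-drm_malloc_ab() fallback into a single
call. A minimal sketch of the intended usage pattern follows; the caller
and the variable npages are hypothetical, invented for illustration, not
part of this patch:

	/* Hypothetical caller, for illustration only. With GFP_TEMPORARY
	 * (which includes __GFP_RECLAIMABLE), drm_malloc_gfp() services
	 * small requests via kmalloc(), attempts a high-order kmalloc()
	 * with __GFP_NOWARN | __GFP_NORETRY for larger ones, and only
	 * then falls back to __vmalloc().
	 */
	struct page **pvec;

	pvec = drm_malloc_gfp(npages, sizeof(*pvec), GFP_TEMPORARY);
	if (pvec == NULL)
		return -ENOMEM;

	/* ... fill and use pvec ... */

	drm_free_large(pvec); /* kvfree() handles both allocation paths */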