From patchwork Thu Jul 15 22:38:54 2021
X-Patchwork-Submitter: Jason Ekstrand
X-Patchwork-Id: 12381207
From: Jason Ekstrand
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [PATCH 1/7] drm/i915/gem: Check object_can_migrate from object_migrate
Date: Thu, 15 Jul 2021 17:38:54 -0500
Message-Id: <20210715223900.1840576-2-jason@jlekstrand.net>
Cc: Jason Ekstrand, Matthew Auld

We don't roll the two together entirely, because there are still a couple of
cases where we want a separate can_migrate check. For instance, the display
code checks that it can migrate a buffer to LMEM before it accepts it in
fb_create. The dma-buf import code also uses it to do an early check and
return a different error code if someone tries to attach a LMEM-only dma-buf
to another driver. However, no one actually wants to call object_migrate when
can_migrate has failed. The stated intention of the permissive behavior was
to support selftests, but none of them actually takes advantage of this
unsafe migration.

Signed-off-by: Jason Ekstrand
Cc: Daniel Vetter
Reviewed-by: Matthew Auld
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c        | 13 ++-----------
 .../gpu/drm/i915/gem/selftests/i915_gem_migrate.c | 15 ---------------
 2 files changed, 2 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 9da7b288b7ede..f2244ae09a613 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -584,12 +584,6 @@ bool i915_gem_object_can_migrate(struct drm_i915_gem_object *obj,
  * completed yet, and to accomplish that, i915_gem_object_wait_migration()
  * must be called.
  *
- * This function is a bit more permissive than i915_gem_object_can_migrate()
- * to allow for migrating objects where the caller knows exactly what is
- * happening. For example within selftests. More specifically this
- * function allows migrating I915_BO_ALLOC_USER objects to regions
- * that are not in the list of allowable regions.
- *
  * Note: the @ww parameter is not used yet, but included to make sure
  * callers put some effort into obtaining a valid ww ctx if one is
  * available.
@@ -616,11 +610,8 @@ int i915_gem_object_migrate(struct drm_i915_gem_object *obj,
 	if (obj->mm.region == mr)
 		return 0;
 
-	if (!i915_gem_object_evictable(obj))
-		return -EBUSY;
-
-	if (!obj->ops->migrate)
-		return -EOPNOTSUPP;
+	if (!i915_gem_object_can_migrate(obj, id))
+		return -EINVAL;
 
 	return obj->ops->migrate(obj, mr);
 }
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c
index 0b7144d2991ca..28a700f08b49a 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_migrate.c
@@ -61,11 +61,6 @@ static int igt_create_migrate(struct intel_gt *gt, enum intel_region_id src,
 		if (err)
 			continue;
 
-		if (!i915_gem_object_can_migrate(obj, dst)) {
-			err = -EINVAL;
-			continue;
-		}
-
 		err = i915_gem_object_migrate(obj, &ww, dst);
 		if (err)
 			continue;
@@ -114,11 +109,6 @@ static int lmem_pages_migrate_one(struct i915_gem_ww_ctx *ww,
 		return err;
 
 	if (i915_gem_object_is_lmem(obj)) {
-		if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM)) {
-			pr_err("object can't migrate to smem.\n");
-			return -EINVAL;
-		}
-
 		err = i915_gem_object_migrate(obj, ww, INTEL_REGION_SMEM);
 		if (err) {
 			pr_err("Object failed migration to smem\n");
@@ -137,11 +127,6 @@ static int lmem_pages_migrate_one(struct i915_gem_ww_ctx *ww,
 		}
 
 	} else {
-		if (!i915_gem_object_can_migrate(obj, INTEL_REGION_LMEM)) {
-			pr_err("object can't migrate to lmem.\n");
-			return -EINVAL;
-		}
-
 		err = i915_gem_object_migrate(obj, ww, INTEL_REGION_LMEM);
 		if (err) {
 			pr_err("Object failed migration to lmem\n");
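For callers, the net effect of this patch is that the pre-flight check is
folded into the migration call itself. A minimal sketch of the resulting
calling pattern, using only functions that appear in this patch (the wrapper
name is hypothetical, for illustration only):

/* Illustrative sketch only; the wrapper below is not part of the patch. */
static int try_migrate_to_smem(struct drm_i915_gem_object *obj,
			       struct i915_gem_ww_ctx *ww)
{
	/*
	 * i915_gem_object_migrate() now performs the
	 * i915_gem_object_can_migrate() check itself and returns -EINVAL
	 * when the target region is not in the object's placement list, so
	 * callers only need a separate can_migrate check when they want a
	 * different error code (e.g. the dma-buf import path).
	 */
	return i915_gem_object_migrate(obj, ww, INTEL_REGION_SMEM);
}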
From patchwork Thu Jul 15 22:38:55 2021
X-Patchwork-Submitter: Jason Ekstrand
X-Patchwork-Id: 12381209

From: Jason Ekstrand
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [PATCH 2/7] drm/i915/gem: Refactor placement setup for i915_gem_object_create*
Date: Thu, 15 Jul 2021 17:38:55 -0500
Message-Id: <20210715223900.1840576-3-jason@jlekstrand.net>
Cc: Jason Ekstrand

Since we don't allow changing the set of regions after creation, we can make
ext_set_placements() build up the region set directly in the create_ext
struct and assign it to the object later. This is similar to what we did for
contexts with the proto-context, only simpler because there's no funny object
shuffling. This will be used in the next patch to allow us to de-duplicate a
bunch of code.

Also, since we know the maximum number of regions up front, we can use a
fixed-size temporary array for the regions. This simplifies memory management
a bit for this new delayed approach.
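A condensed sketch of the staged-placement flow described above, with the
type and function names taken from the diff that follows (the
commit_placements() wrapper is hypothetical and simplified, not the literal
driver code):

/* Illustrative sketch of the delayed-placement approach (simplified). */
struct create_ext {
	struct drm_i915_private *i915;
	/* Fixed-size staging array: one slot per possible memory region. */
	struct intel_memory_region *placements[INTEL_REGION_UNKNOWN];
	unsigned int n_placements;
};

static int commit_placements(struct drm_i915_gem_object *obj,
			     struct create_ext *ext)
{
	/*
	 * Only after every userspace extension has been parsed are the
	 * staged regions handed to the object; object_set_placements()
	 * now copies them with kmalloc_array() and can fail with -ENOMEM.
	 */
	return object_set_placements(obj, ext->placements, ext->n_placements);
}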
Signed-off-by: Jason Ekstrand --- drivers/gpu/drm/i915/gem/i915_gem_create.c | 75 +++++++++++++--------- 1 file changed, 45 insertions(+), 30 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c index 51f92e4b1a69d..391c8c4a12172 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_create.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c @@ -27,10 +27,13 @@ static u32 object_max_page_size(struct drm_i915_gem_object *obj) return max_page_size; } -static void object_set_placements(struct drm_i915_gem_object *obj, - struct intel_memory_region **placements, - unsigned int n_placements) +static int object_set_placements(struct drm_i915_gem_object *obj, + struct intel_memory_region **placements, + unsigned int n_placements) { + struct intel_memory_region **arr; + unsigned int i; + GEM_BUG_ON(!n_placements); /* @@ -44,9 +47,20 @@ static void object_set_placements(struct drm_i915_gem_object *obj, obj->mm.placements = &i915->mm.regions[mr->id]; obj->mm.n_placements = 1; } else { - obj->mm.placements = placements; + arr = kmalloc_array(n_placements, + sizeof(struct intel_memory_region *), + GFP_KERNEL); + if (!arr) + return -ENOMEM; + + for (i = 0; i < n_placements; i++) + arr[i] = placements[i]; + + obj->mm.placements = arr; obj->mm.n_placements = n_placements; } + + return 0; } static int i915_gem_publish(struct drm_i915_gem_object *obj, @@ -148,7 +162,9 @@ i915_gem_dumb_create(struct drm_file *file, return -ENOMEM; mr = intel_memory_region_by_type(to_i915(dev), mem_type); - object_set_placements(obj, &mr, 1); + ret = object_set_placements(obj, &mr, 1); + if (ret) + goto object_free; ret = i915_gem_setup(obj, args->size); if (ret) @@ -184,7 +200,9 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data, return -ENOMEM; mr = intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM); - object_set_placements(obj, &mr, 1); + ret = object_set_placements(obj, &mr, 1); + if (ret) + goto object_free; ret = i915_gem_setup(obj, args->size); if (ret) @@ -197,9 +215,12 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data, return ret; } +#define MAX_N_PLACEMENTS = (INTEL_REGION_UNKNOWN + 1) + struct create_ext { struct drm_i915_private *i915; - struct drm_i915_gem_object *vanilla_object; + struct intel_memory_region *placements[INTEL_REGION_UNKNOWN]; + unsigned int n_placements; }; static void repr_placements(char *buf, size_t size, @@ -230,8 +251,7 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args, struct drm_i915_private *i915 = ext_data->i915; struct drm_i915_gem_memory_class_instance __user *uregions = u64_to_user_ptr(args->regions); - struct drm_i915_gem_object *obj = ext_data->vanilla_object; - struct intel_memory_region **placements; + struct intel_memory_region *placements[INTEL_REGION_UNKNOWN]; u32 mask; int i, ret = 0; @@ -245,6 +265,8 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args, ret = -EINVAL; } + BUILD_BUG_ON(ARRAY_SIZE(i915->mm.regions) != ARRAY_SIZE(placements)); + BUILD_BUG_ON(ARRAY_SIZE(ext_data->placements) != ARRAY_SIZE(placements)); if (args->num_regions > ARRAY_SIZE(i915->mm.regions)) { drm_dbg(&i915->drm, "num_regions is too large\n"); ret = -EINVAL; @@ -253,12 +275,6 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args, if (ret) return ret; - placements = kmalloc_array(args->num_regions, - sizeof(struct intel_memory_region *), - GFP_KERNEL); - if (!placements) - return -ENOMEM; - mask = 0; for (i = 0; i < args->num_regions; i++) { 
struct drm_i915_gem_memory_class_instance region; @@ -293,14 +309,13 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args, ++uregions; } - if (obj->mm.placements) { + if (ext_data->n_placements) { ret = -EINVAL; goto out_dump; } - object_set_placements(obj, placements, args->num_regions); - if (args->num_regions == 1) - kfree(placements); + for (i = 0; i < args->num_regions; i++) + ext_data->placements[i] = placements[i]; return 0; @@ -308,11 +323,11 @@ static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args, if (1) { char buf[256]; - if (obj->mm.placements) { + if (ext_data->n_placements) { repr_placements(buf, sizeof(buf), - obj->mm.placements, - obj->mm.n_placements); + ext_data->placements, + ext_data->n_placements); drm_dbg(&i915->drm, "Placements were already set in previous EXT. Existing placements: %s\n", buf); @@ -358,7 +373,6 @@ i915_gem_create_ext_ioctl(struct drm_device *dev, void *data, struct drm_i915_private *i915 = to_i915(dev); struct drm_i915_gem_create_ext *args = data; struct create_ext ext_data = { .i915 = i915 }; - struct intel_memory_region **placements_ext; struct drm_i915_gem_object *obj; int ret; @@ -371,21 +385,22 @@ i915_gem_create_ext_ioctl(struct drm_device *dev, void *data, if (!obj) return -ENOMEM; - ext_data.vanilla_object = obj; ret = i915_user_extensions(u64_to_user_ptr(args->extensions), create_extensions, ARRAY_SIZE(create_extensions), &ext_data); - placements_ext = obj->mm.placements; if (ret) goto object_free; - if (!placements_ext) { - struct intel_memory_region *mr = + if (!ext_data.n_placements) { + ext_data.placements[0] = intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM); - - object_set_placements(obj, &mr, 1); + ext_data.n_placements = 1; } + ret = object_set_placements(obj, ext_data.placements, + ext_data.n_placements); + if (ret) + goto object_free; ret = i915_gem_setup(obj, args->size); if (ret) @@ -395,7 +410,7 @@ i915_gem_create_ext_ioctl(struct drm_device *dev, void *data, object_free: if (obj->mm.n_placements > 1) - kfree(placements_ext); + kfree(obj->mm.placements); i915_gem_object_free(obj); return ret; } From patchwork Thu Jul 15 22:38:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jason Ekstrand X-Patchwork-Id: 12381211 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A0C9CC636C9 for ; Thu, 15 Jul 2021 22:39:20 +0000 (UTC) Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 7342C613D4 for ; Thu, 15 Jul 2021 22:39:20 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7342C613D4 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=jlekstrand.net Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=dri-devel-bounces@lists.freedesktop.org Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 
From: Jason Ekstrand
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [PATCH 3/7] drm/i915/gem: Unify user object creation
Date: Thu, 15 Jul 2021 17:38:56 -0500
Message-Id: <20210715223900.1840576-4-jason@jlekstrand.net>
Cc: Jason Ekstrand

Instead of hand-rolling the same three calls in each function, pull them into
an i915_gem_object_create_user() helper. Apart from re-ordering the
placements-array ENOMEM check, the only functional change here should be that
i915_gem_dumb_create() now calls i915_gem_flush_free_objects(), which it
probably should have been calling all along.
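A rough illustration of how the ioctls now funnel through the new helper,
mirroring the i915_gem_create_ioctl() hunk in the diff that follows (the
create_in_smem() wrapper is hypothetical; error handling trimmed):

/* Illustrative caller, mirroring i915_gem_create_ioctl() after this patch. */
static int create_in_smem(struct drm_i915_private *i915, struct drm_file *file,
			  struct drm_i915_gem_create *args)
{
	struct intel_memory_region *mr =
		intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM);
	struct drm_i915_gem_object *obj;

	/*
	 * The helper flushes freed objects, allocates the object, rounds
	 * the size up to the largest min_page_size of the placements and
	 * initializes the backing store; it returns an ERR_PTR() on failure.
	 */
	obj = i915_gem_object_create_user(i915, args->size, &mr, 1);
	if (IS_ERR(obj))
		return PTR_ERR(obj);

	return i915_gem_publish(obj, file, &args->size, &args->handle);
}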
Signed-off-by: Jason Ekstrand --- drivers/gpu/drm/i915/gem/i915_gem_create.c | 106 +++++++++------------ 1 file changed, 43 insertions(+), 63 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c index 391c8c4a12172..69bf9ec777642 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_create.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c @@ -11,13 +11,14 @@ #include "i915_trace.h" #include "i915_user_extensions.h" -static u32 object_max_page_size(struct drm_i915_gem_object *obj) +static u32 object_max_page_size(struct intel_memory_region **placements, + unsigned int n_placements) { u32 max_page_size = 0; int i; - for (i = 0; i < obj->mm.n_placements; i++) { - struct intel_memory_region *mr = obj->mm.placements[i]; + for (i = 0; i < n_placements; i++) { + struct intel_memory_region *mr = placements[i]; GEM_BUG_ON(!is_power_of_2(mr->min_page_size)); max_page_size = max_t(u32, max_page_size, mr->min_page_size); @@ -81,22 +82,35 @@ static int i915_gem_publish(struct drm_i915_gem_object *obj, return 0; } -static int -i915_gem_setup(struct drm_i915_gem_object *obj, u64 size) +static struct drm_i915_gem_object * +i915_gem_object_create_user(struct drm_i915_private *i915, u64 size, + struct intel_memory_region **placements, + unsigned int n_placements) { - struct intel_memory_region *mr = obj->mm.placements[0]; + struct intel_memory_region *mr = placements[0]; + struct drm_i915_gem_object *obj; unsigned int flags; int ret; - size = round_up(size, object_max_page_size(obj)); + i915_gem_flush_free_objects(i915); + + obj = i915_gem_object_alloc(); + if (!obj) + return ERR_PTR(-ENOMEM); + + size = round_up(size, object_max_page_size(placements, n_placements)); if (size == 0) - return -EINVAL; + return ERR_PTR(-EINVAL); /* For most of the ABI (e.g. 
mmap) we think in system pages */ GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE)); if (i915_gem_object_size_2big(size)) - return -E2BIG; + return ERR_PTR(-E2BIG); + + ret = object_set_placements(obj, placements, n_placements); + if (ret) + goto object_free; /* * I915_BO_ALLOC_USER will make sure the object is cleared before @@ -106,12 +120,18 @@ i915_gem_setup(struct drm_i915_gem_object *obj, u64 size) ret = mr->ops->init_object(mr, obj, size, 0, flags); if (ret) - return ret; + goto object_free; GEM_BUG_ON(size != obj->base.size); trace_i915_gem_object_create(obj); - return 0; + return obj; + +object_free: + if (obj->mm.n_placements > 1) + kfree(obj->mm.placements); + i915_gem_object_free(obj); + return ERR_PTR(ret); } int @@ -124,7 +144,6 @@ i915_gem_dumb_create(struct drm_file *file, enum intel_memory_type mem_type; int cpp = DIV_ROUND_UP(args->bpp, 8); u32 format; - int ret; switch (cpp) { case 1: @@ -157,24 +176,13 @@ i915_gem_dumb_create(struct drm_file *file, if (HAS_LMEM(to_i915(dev))) mem_type = INTEL_MEMORY_LOCAL; - obj = i915_gem_object_alloc(); - if (!obj) - return -ENOMEM; - mr = intel_memory_region_by_type(to_i915(dev), mem_type); - ret = object_set_placements(obj, &mr, 1); - if (ret) - goto object_free; - ret = i915_gem_setup(obj, args->size); - if (ret) - goto object_free; + obj = i915_gem_object_create_user(to_i915(dev), args->size, &mr, 1); + if (IS_ERR(obj)) + return PTR_ERR(obj); return i915_gem_publish(obj, file, &args->size, &args->handle); - -object_free: - i915_gem_object_free(obj); - return ret; } /** @@ -191,28 +199,14 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data, struct drm_i915_gem_create *args = data; struct drm_i915_gem_object *obj; struct intel_memory_region *mr; - int ret; - - i915_gem_flush_free_objects(i915); - - obj = i915_gem_object_alloc(); - if (!obj) - return -ENOMEM; mr = intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM); - ret = object_set_placements(obj, &mr, 1); - if (ret) - goto object_free; - ret = i915_gem_setup(obj, args->size); - if (ret) - goto object_free; + obj = i915_gem_object_create_user(i915, args->size, &mr, 1); + if (IS_ERR(obj)) + return PTR_ERR(obj); return i915_gem_publish(obj, file, &args->size, &args->handle); - -object_free: - i915_gem_object_free(obj); - return ret; } #define MAX_N_PLACEMENTS = (INTEL_REGION_UNKNOWN + 1) @@ -379,38 +373,24 @@ i915_gem_create_ext_ioctl(struct drm_device *dev, void *data, if (args->flags) return -EINVAL; - i915_gem_flush_free_objects(i915); - - obj = i915_gem_object_alloc(); - if (!obj) - return -ENOMEM; - ret = i915_user_extensions(u64_to_user_ptr(args->extensions), create_extensions, ARRAY_SIZE(create_extensions), &ext_data); if (ret) - goto object_free; + return ret; if (!ext_data.n_placements) { ext_data.placements[0] = intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM); ext_data.n_placements = 1; } - ret = object_set_placements(obj, ext_data.placements, - ext_data.n_placements); - if (ret) - goto object_free; - ret = i915_gem_setup(obj, args->size); - if (ret) - goto object_free; + obj = i915_gem_object_create_user(i915, args->size, + ext_data.placements, + ext_data.n_placements); + if (IS_ERR(obj)) + return PTR_ERR(obj); return i915_gem_publish(obj, file, &args->size, &args->handle); - -object_free: - if (obj->mm.n_placements > 1) - kfree(obj->mm.placements); - i915_gem_object_free(obj); - return ret; } From patchwork Thu Jul 15 22:38:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jason Ekstrand 
X-Patchwork-Id: 12381215

From: Jason Ekstrand
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [PATCH 4/7] drm/i915/gem/ttm: Place new BOs in the requested region
Date: Thu, 15 Jul 2021 17:38:57 -0500
Message-Id: <20210715223900.1840576-5-jason@jlekstrand.net>
Cc: Thomas Hellström, Matthew Auld, Jason Ekstrand

__i915_gem_ttm_object_init() was ignoring the placement requests coming from
the client and always placing all BOs in SMEM upon creation. Instead, compute
the requested placement set from the object and pass that into
ttm_bo_init_reserved().

Signed-off-by: Jason Ekstrand
Cc: Thomas Hellström
Cc: Matthew Auld
Cc: Maarten Lankhorst
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 6589411396d3f..d30f274c329c7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -898,6 +898,8 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 {
 	static struct lock_class_key lock_class;
 	struct drm_i915_private *i915 = mem->i915;
+	struct ttm_place requested, busy[I915_TTM_MAX_PLACEMENTS];
+	struct ttm_placement placement;
 	struct ttm_operation_ctx ctx = {
 		.interruptible = true,
 		.no_wait_gpu = false,
@@ -919,6 +921,9 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 	/* Forcing the page size is kernel internal only */
 	GEM_BUG_ON(page_size && obj->mm.n_placements);
 
+	GEM_BUG_ON(obj->mm.n_placements > I915_TTM_MAX_PLACEMENTS);
+	i915_ttm_placement_from_obj(obj, &requested, busy, &placement);
+
 	/*
 	 * If this function fails, it will call the destructor, but
 	 * our caller still owns the object. So no freeing in the
@@ -927,8 +932,7 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 	 * until successful initialization.
 	 */
 	ret = ttm_bo_init_reserved(&i915->bdev, i915_gem_to_ttm(obj), size,
-				   bo_type, &i915_sys_placement,
-				   page_size >> PAGE_SHIFT,
+				   bo_type, &placement, page_size >> PAGE_SHIFT,
 				   &ctx, NULL, NULL, i915_ttm_bo_destroy);
 	if (ret)
 		return i915_ttm_err_to_gem(ret);
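Restating the change above as a hedged sketch of the data flow (the calls are
the ones used in this hunk; the comments are a summary, not code from the
driver):

	/* Sketch: what __i915_gem_ttm_object_init() now feeds to TTM. */
	struct ttm_place requested, busy[I915_TTM_MAX_PLACEMENTS];
	struct ttm_placement placement;

	/* Derive the requested/busy placements from the object itself. */
	i915_ttm_placement_from_obj(obj, &requested, busy, &placement);

	/*
	 * TTM now tries the object's requested region first and only falls
	 * back to the busy list, instead of unconditionally starting every
	 * BO in system memory via the old fixed i915_sys_placement.
	 */
	ret = ttm_bo_init_reserved(&i915->bdev, i915_gem_to_ttm(obj), size,
				   bo_type, &placement, page_size >> PAGE_SHIFT,
				   &ctx, NULL, NULL, i915_ttm_bo_destroy);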
From patchwork Thu Jul 15 22:38:58 2021
X-Patchwork-Submitter: Jason Ekstrand
X-Patchwork-Id: 12381213

From: Jason Ekstrand
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [PATCH 5/7] drm/i915/gem/ttm: Respect the object region in placement_from_obj
Date: Thu, 15 Jul 2021 17:38:58 -0500
Message-Id: <20210715223900.1840576-6-jason@jlekstrand.net>
Cc: Thomas Hellström, Matthew Auld, Jason Ekstrand

Whenever we had a user object (n_placements > 0), we were ignoring
obj->mm.region and always putting obj->placements[0] as the requested region.
For LMEM+SMEM objects, this was causing them to get shoved into LMEM on every
i915_ttm_get_pages() even when SMEM was requested by, say,
i915_gem_object_migrate().

Signed-off-by: Jason Ekstrand
Cc: Thomas Hellström
Cc: Matthew Auld
Cc: Maarten Lankhorst
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index d30f274c329c7..5985e994d56cf 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -150,8 +150,7 @@ i915_ttm_placement_from_obj(const struct drm_i915_gem_object *obj,
 	unsigned int i;
 
 	placement->num_placement = 1;
-	i915_ttm_place_from_region(num_allowed ? obj->mm.placements[0] :
-				   obj->mm.region, requested, flags);
+	i915_ttm_place_from_region(obj->mm.region, requested, flags);
 
 	/* Cache this on object? */
 	placement->num_busy_placement = num_allowed;
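The failure mode this fixes is easiest to see from the migration path. A
hedged illustration, using functions from earlier in this series, with the
outcome summarized in comments:

	/* Object created as LMEM+SMEM, so obj->mm.placements[0] is LMEM. */
	err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM);

	/*
	 * obj->mm.region is now SMEM.  Before this patch the next
	 * i915_ttm_get_pages() would again request placements[0] (LMEM)
	 * and undo the migration; requesting obj->mm.region instead keeps
	 * the object where the migration put it.
	 */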
From patchwork Thu Jul 15 22:38:59 2021
X-Patchwork-Submitter: Jason Ekstrand
X-Patchwork-Id: 12381217

From: Jason Ekstrand
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [PATCH 6/7] drm/i915/gem: Correct the locking and pin pattern for dma-buf (v6)
Date: Thu, 15 Jul 2021 17:38:59 -0500
Message-Id: <20210715223900.1840576-7-jason@jlekstrand.net>
Cc: Thomas Hellström, Jason Ekstrand, Michael J. Ruhl

From: Thomas Hellström

If our exported dma-bufs are imported by another instance of our driver, that
instance will typically have the imported dma-bufs locked during
dma_buf_map_attachment(). But the exporter also locks the same reservation
object in the map_dma_buf() callback, which leads to recursive locking. So
taking the lock inside _pin_pages_unlocked() is incorrect.

Additionally, the current pinning code path is contrary to the defined way
that pinning should occur.

Remove the explicit pin/unpin from the map/unmap functions and move them to
attach/detach, allowing correct locking to occur and matching the static
dma-buf drm_prime pattern.

Add a live selftest to exercise both dynamic and non-dynamic exports.

v2:
- Extend the selftest with a fake dynamic importer.
- Provide real pin and unpin callbacks to not abuse the interface.
v3: (ruhl)
- Remove the dynamic export support and move the pinning into the
  attach/detach path.
v4: (ruhl)
- Put pages does not need to assert on the dma-resv.
v5: (jason)
- Lock around dma_buf_unmap_attachment() when emulating a dynamic importer
  in the subtests.
- Use pin_pages_unlocked.
v6: (jason)
- Use dma_buf_attach instead of dma_buf_attach_dynamic in the selftests.

Reported-by: Michael J. Ruhl
Signed-off-by: Thomas Hellström
Signed-off-by: Michael J.
Ruhl Signed-off-by: Jason Ekstrand Reviewed-by: Jason Ekstrand --- drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 43 ++++++-- .../drm/i915/gem/selftests/i915_gem_dmabuf.c | 103 +++++++++++++++++- 2 files changed, 132 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c index 616c3a2f1baf0..9a655f69a0671 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c @@ -12,6 +12,8 @@ #include "i915_gem_object.h" #include "i915_scatterlist.h" +I915_SELFTEST_DECLARE(static bool force_different_devices;) + static struct drm_i915_gem_object *dma_buf_to_obj(struct dma_buf *buf) { return to_intel_bo(buf->priv); @@ -25,15 +27,11 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme struct scatterlist *src, *dst; int ret, i; - ret = i915_gem_object_pin_pages_unlocked(obj); - if (ret) - goto err; - /* Copy sg so that we make an independent mapping */ st = kmalloc(sizeof(struct sg_table), GFP_KERNEL); if (st == NULL) { ret = -ENOMEM; - goto err_unpin_pages; + goto err; } ret = sg_alloc_table(st, obj->mm.pages->nents, GFP_KERNEL); @@ -58,8 +56,6 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme sg_free_table(st); err_free: kfree(st); -err_unpin_pages: - i915_gem_object_unpin_pages(obj); err: return ERR_PTR(ret); } @@ -68,13 +64,9 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment *attachment, struct sg_table *sg, enum dma_data_direction dir) { - struct drm_i915_gem_object *obj = dma_buf_to_obj(attachment->dmabuf); - dma_unmap_sgtable(attachment->dev, sg, dir, DMA_ATTR_SKIP_CPU_SYNC); sg_free_table(sg); kfree(sg); - - i915_gem_object_unpin_pages(obj); } static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map) @@ -168,7 +160,31 @@ static int i915_gem_end_cpu_access(struct dma_buf *dma_buf, enum dma_data_direct return err; } +/** + * i915_gem_dmabuf_attach - Do any extra attach work necessary + * @dmabuf: imported dma-buf + * @attach: new attach to do work on + * + */ +static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attach) +{ + struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf); + + return i915_gem_object_pin_pages_unlocked(obj); +} + +static void i915_gem_dmabuf_detach(struct dma_buf *dmabuf, + struct dma_buf_attachment *attach) +{ + struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf); + + i915_gem_object_unpin_pages(obj); +} + static const struct dma_buf_ops i915_dmabuf_ops = { + .attach = i915_gem_dmabuf_attach, + .detach = i915_gem_dmabuf_detach, .map_dma_buf = i915_gem_map_dma_buf, .unmap_dma_buf = i915_gem_unmap_dma_buf, .release = drm_gem_dmabuf_release, @@ -204,6 +220,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj) struct sg_table *pages; unsigned int sg_page_sizes; + assert_object_held(obj); + pages = dma_buf_map_attachment(obj->base.import_attach, DMA_BIDIRECTIONAL); if (IS_ERR(pages)) @@ -241,7 +259,8 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev, if (dma_buf->ops == &i915_dmabuf_ops) { obj = dma_buf_to_obj(dma_buf); /* is it from our device? */ - if (obj->base.dev == dev) { + if (obj->base.dev == dev && + !I915_SELFTEST_ONLY(force_different_devices)) { /* * Importing dmabuf exported from out own gem increases * refcount on gem itself instead of f_count of dmabuf. 
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c index dd74bc09ec88d..4451bbb4917e4 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c @@ -35,7 +35,7 @@ static int igt_dmabuf_export(void *arg) static int igt_dmabuf_import_self(void *arg) { struct drm_i915_private *i915 = arg; - struct drm_i915_gem_object *obj; + struct drm_i915_gem_object *obj, *import_obj; struct drm_gem_object *import; struct dma_buf *dmabuf; int err; @@ -65,14 +65,112 @@ static int igt_dmabuf_import_self(void *arg) err = -EINVAL; goto out_import; } + import_obj = to_intel_bo(import); + + i915_gem_object_lock(import_obj, NULL); + err = ____i915_gem_object_get_pages(import_obj); + i915_gem_object_unlock(import_obj); + if (err) { + pr_err("Same object dma-buf get_pages failed!\n"); + goto out_import; + } err = 0; out_import: - i915_gem_object_put(to_intel_bo(import)); + i915_gem_object_put(import_obj); +out_dmabuf: + dma_buf_put(dmabuf); +out: + i915_gem_object_put(obj); + return err; +} + +static int igt_dmabuf_import_same_driver(void *arg) +{ + struct drm_i915_private *i915 = arg; + struct drm_i915_gem_object *obj, *import_obj; + struct drm_gem_object *import; + struct dma_buf *dmabuf; + struct dma_buf_attachment *import_attach; + struct sg_table *st; + long timeout; + int err; + + force_different_devices = true; + obj = i915_gem_object_create_shmem(i915, PAGE_SIZE); + if (IS_ERR(obj)) + goto out_ret; + + dmabuf = i915_gem_prime_export(&obj->base, 0); + if (IS_ERR(dmabuf)) { + pr_err("i915_gem_prime_export failed with err=%d\n", + (int)PTR_ERR(dmabuf)); + err = PTR_ERR(dmabuf); + goto out; + } + + import = i915_gem_prime_import(&i915->drm, dmabuf); + if (IS_ERR(import)) { + pr_err("i915_gem_prime_import failed with err=%d\n", + (int)PTR_ERR(import)); + err = PTR_ERR(import); + goto out_dmabuf; + } + + if (import == &obj->base) { + pr_err("i915_gem_prime_import reused gem object!\n"); + err = -EINVAL; + goto out_import; + } + + import_obj = to_intel_bo(import); + + i915_gem_object_lock(import_obj, NULL); + err = ____i915_gem_object_get_pages(import_obj); + if (err) { + pr_err("Different objects dma-buf get_pages failed!\n"); + i915_gem_object_unlock(import_obj); + goto out_import; + } + + /* + * If the exported object is not in system memory, something + * weird is going on. TODO: When p2p is supported, this is no + * longer considered weird. + */ + if (obj->mm.region != i915->mm.regions[INTEL_REGION_SMEM]) { + pr_err("Exported dma-buf is not in system memory\n"); + err = -EINVAL; + } + + i915_gem_object_unlock(import_obj); + + /* Now try a fake an importer */ + import_attach = dma_buf_attach(dmabuf, obj->base.dev->dev); + if (IS_ERR(import_attach)) + goto out_import; + + st = dma_buf_map_attachment(import_attach, DMA_BIDIRECTIONAL); + if (IS_ERR(st)) + goto out_detach; + + timeout = dma_resv_wait_timeout(dmabuf->resv, false, true, 5 * HZ); + if (!timeout) { + pr_err("dmabuf wait for exclusive fence timed out.\n"); + timeout = -ETIME; + } + err = timeout > 0 ? 
		0 : timeout;
	dma_buf_unmap_attachment(import_attach, st, DMA_BIDIRECTIONAL);
out_detach:
	dma_buf_detach(dmabuf, import_attach);
out_import:
	i915_gem_object_put(import_obj);
out_dmabuf:
	dma_buf_put(dmabuf);
out:
	i915_gem_object_put(obj);
out_ret:
	force_different_devices = false;
	return err;
}
@@ -286,6 +384,7 @@ int i915_gem_dmabuf_live_selftests(struct drm_i915_private *i915)
 {
 	static const struct i915_subtest tests[] = {
 		SUBTEST(igt_dmabuf_export),
+		SUBTEST(igt_dmabuf_import_same_driver),
 	};
 
 	return i915_subtests(tests, i915);
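The selftest above doubles as a usage illustration. A trimmed-down
restatement of the fake-importer sequence it exercises (calls as used in the
selftest; declarations and error handling omitted):

	/* Pretend the importer is a different device so the dma-buf path
	 * is exercised rather than the same-GEM-object shortcut. */
	force_different_devices = true;

	dmabuf = i915_gem_prime_export(&obj->base, 0);
	import = i915_gem_prime_import(&i915->drm, dmabuf);

	/* Attaching pins the exporter's pages exactly once... */
	import_attach = dma_buf_attach(dmabuf, obj->base.dev->dev);

	/* ...so map_dma_buf() no longer has to pin (and lock) on every
	 * mapping and can be called with the reservation already held. */
	st = dma_buf_map_attachment(import_attach, DMA_BIDIRECTIONAL);

	dma_buf_unmap_attachment(import_attach, st, DMA_BIDIRECTIONAL);
	dma_buf_detach(dmabuf, import_attach);	/* unpins the pages */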
From patchwork Thu Jul 15 22:39:00 2021
X-Patchwork-Submitter: Jason Ekstrand
X-Patchwork-Id: 12381219

From: Jason Ekstrand
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [PATCH 7/7] drm/i915/gem: Migrate to system at dma-buf attach time (v6)
Date: Thu, 15 Jul 2021 17:39:00 -0500
Message-Id: <20210715223900.1840576-8-jason@jlekstrand.net>
Cc: Thomas Hellström, kernel test robot, Jason Ekstrand, Michael J. Ruhl

From: Thomas Hellström

Until we support p2p dma, or as a complement to it, migrate data to system
memory at dma-buf attach time if possible.

v2:
- Rebase on the dynamic exporter. Update the igt_dmabuf_import_same_driver
  selftest to migrate if we are LMEM-capable.
v3:
- Also migrate in the pin() callback.
v4:
- Migrate in attach.
v5: (jason)
- Lock around the migration.
v6: (jason)
- Move the can_migrate check outside the lock.
- Rework the selftests to test more migration conditions. In particular,
  SMEM, LMEM, and LMEM+SMEM are all checked.

Signed-off-by: Thomas Hellström
Signed-off-by: Michael J.
Ruhl Reported-by: kernel test robot Signed-off-by: Jason Ekstrand Reviewed-by: Jason Ekstrand --- drivers/gpu/drm/i915/gem/i915_gem_create.c | 2 +- drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 23 ++++- drivers/gpu/drm/i915/gem/i915_gem_object.h | 4 + .../drm/i915/gem/selftests/i915_gem_dmabuf.c | 89 ++++++++++++++++++- 4 files changed, 112 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c index 69bf9ec777642..ed6b0d75ff801 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_create.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c @@ -82,7 +82,7 @@ static int i915_gem_publish(struct drm_i915_gem_object *obj, return 0; } -static struct drm_i915_gem_object * +struct drm_i915_gem_object * i915_gem_object_create_user(struct drm_i915_private *i915, u64 size, struct intel_memory_region **placements, unsigned int n_placements) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c index 9a655f69a0671..5d438b95826b9 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c @@ -170,8 +170,29 @@ static int i915_gem_dmabuf_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach) { struct drm_i915_gem_object *obj = dma_buf_to_obj(dmabuf); + struct i915_gem_ww_ctx ww; + int err; + + if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM)) + return -EOPNOTSUPP; + + for_i915_gem_ww(&ww, err, true) { + err = i915_gem_object_lock(obj, &ww); + if (err) + continue; + + err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM); + if (err) + continue; - return i915_gem_object_pin_pages_unlocked(obj); + err = i915_gem_object_wait_migration(obj, 0); + if (err) + continue; + + err = i915_gem_object_pin_pages(obj); + } + + return err; } static void i915_gem_dmabuf_detach(struct dma_buf *dmabuf, diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 8be4fadeee487..fbae53bd46384 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -61,6 +61,10 @@ i915_gem_object_create_shmem(struct drm_i915_private *i915, struct drm_i915_gem_object * i915_gem_object_create_shmem_from_data(struct drm_i915_private *i915, const void *data, resource_size_t size); +struct drm_i915_gem_object * +i915_gem_object_create_user(struct drm_i915_private *i915, u64 size, + struct intel_memory_region **placements, + unsigned int n_placements); extern const struct drm_i915_gem_object_ops i915_gem_shmem_ops; diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c index 4451bbb4917e4..7b7647e7e220a 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c @@ -85,9 +85,62 @@ static int igt_dmabuf_import_self(void *arg) return err; } -static int igt_dmabuf_import_same_driver(void *arg) +static int igt_dmabuf_import_same_driver_lmem(void *arg) { struct drm_i915_private *i915 = arg; + struct intel_memory_region *lmem = i915->mm.regions[INTEL_REGION_LMEM]; + struct drm_i915_gem_object *obj; + struct drm_gem_object *import; + struct dma_buf *dmabuf; + int err; + + if (!i915->mm.regions[INTEL_REGION_LMEM]) + return 0; + + force_different_devices = true; + + obj = i915_gem_object_create_user(i915, PAGE_SIZE, &lmem, 1); + if (IS_ERR(obj)) { + pr_err("i915_gem_object_create_user failed with err=%d\n", + (int)PTR_ERR(dmabuf)); + err = PTR_ERR(obj); + 
goto out_ret; + } + + dmabuf = i915_gem_prime_export(&obj->base, 0); + if (IS_ERR(dmabuf)) { + pr_err("i915_gem_prime_export failed with err=%d\n", + (int)PTR_ERR(dmabuf)); + err = PTR_ERR(dmabuf); + goto out; + } + + /* We expect an import of an LMEM-only object to fail with + * -EOPNOTSUPP because it can't be migrated to SMEM. + */ + import = i915_gem_prime_import(&i915->drm, dmabuf); + if (!IS_ERR(import)) { + drm_gem_object_put(import); + pr_err("i915_gem_prime_import succeeded when it shouldn't have\n"); + err = -EINVAL; + } else if (PTR_ERR(import) != -EOPNOTSUPP) { + pr_err("i915_gem_prime_import failed with the wrong err=%d\n", + (int)PTR_ERR(import)); + err = PTR_ERR(import); + } + + dma_buf_put(dmabuf); +out: + i915_gem_object_put(obj); +out_ret: + force_different_devices = false; + return err; +} + +static int igt_dmabuf_import_same_driver(struct drm_i915_private *i915, + struct intel_memory_region **regions, + unsigned int num_regions) +{ struct drm_i915_gem_object *obj, *import_obj; struct drm_gem_object *import; struct dma_buf *dmabuf; @@ -97,9 +150,15 @@ static int igt_dmabuf_import_same_driver(void *arg) int err; force_different_devices = true; - obj = i915_gem_object_create_shmem(i915, PAGE_SIZE); - if (IS_ERR(obj)) + + obj = i915_gem_object_create_user(i915, PAGE_SIZE, + regions, num_regions); + if (IS_ERR(obj)) { + pr_err("i915_gem_object_create_user failed with err=%d\n", + (int)PTR_ERR(dmabuf)); + err = PTR_ERR(obj); goto out_ret; + } dmabuf = i915_gem_prime_export(&obj->base, 0); if (IS_ERR(dmabuf)) { @@ -174,6 +233,26 @@ static int igt_dmabuf_import_same_driver(void *arg) return err; } +static int igt_dmabuf_import_same_driver_smem(void *arg) +{ + struct drm_i915_private *i915 = arg; + struct intel_memory_region *smem = i915->mm.regions[INTEL_REGION_SMEM]; + return igt_dmabuf_import_same_driver(i915, &smem, 1); +} + +static int igt_dmabuf_import_same_driver_lmem_smem(void *arg) +{ + struct drm_i915_private *i915 = arg; + struct intel_memory_region *regions[2]; + + if (!i915->mm.regions[INTEL_REGION_LMEM]) + return 0; + + regions[0] = i915->mm.regions[INTEL_REGION_LMEM]; + regions[1] = i915->mm.regions[INTEL_REGION_SMEM]; + return igt_dmabuf_import_same_driver(i915, regions, 2); +} + static int igt_dmabuf_import(void *arg) { struct drm_i915_private *i915 = arg; @@ -384,7 +463,9 @@ int i915_gem_dmabuf_live_selftests(struct drm_i915_private *i915) { static const struct i915_subtest tests[] = { SUBTEST(igt_dmabuf_export), - SUBTEST(igt_dmabuf_import_same_driver), + SUBTEST(igt_dmabuf_import_same_driver_lmem), + SUBTEST(igt_dmabuf_import_same_driver_smem), + SUBTEST(igt_dmabuf_import_same_driver_lmem_smem), }; return i915_subtests(tests, i915);
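The for_i915_gem_ww() loop used in the new attach hook is the standard i915
retry pattern for work that may need to back off when a ww-mutex acquisition
returns -EDEADLK. A hedged restatement of the attach-time flow, mirroring the
i915_gem_dmabuf.c hunk above (the function name is hypothetical):

static int attach_migrate_sketch(struct drm_i915_gem_object *obj)
{
	struct i915_gem_ww_ctx ww;
	int err;

	/* LMEM-only objects cannot be shared this way (yet): no p2p. */
	if (!i915_gem_object_can_migrate(obj, INTEL_REGION_SMEM))
		return -EOPNOTSUPP;

	/* The loop re-runs its body after backing off whenever a lock
	 * attempt inside it fails with -EDEADLK. */
	for_i915_gem_ww(&ww, err, true) {
		err = i915_gem_object_lock(obj, &ww);
		if (err)
			continue;

		err = i915_gem_object_migrate(obj, &ww, INTEL_REGION_SMEM);
		if (err)
			continue;

		err = i915_gem_object_wait_migration(obj, 0);
		if (err)
			continue;

		err = i915_gem_object_pin_pages(obj);
	}

	return err;
}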