From patchwork Tue Jun 22 09:58:43 2021
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12336729
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, dri-devel@lists.freedesktop.org
Date: Tue, 22 Jun 2021 10:58:43 +0100
Message-Id: <20210622095843.132549-1-matthew.auld@intel.com>
X-Mailer: git-send-email 2.26.3
Subject: [Intel-gfx] [PATCH] drm/i915/ttm: consider all placements for the page alignment

Just checking the current region is not enough, since we might later
migrate the object somewhere else. For example, if the placements are
{SMEM, LMEM} then we might get this wrong. Another idea might be to
make the page_alignment part of the ttm_place, instead of the BO.
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index c5deb8b7227c..5d894bba6430 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -753,6 +753,25 @@ void i915_ttm_bo_destroy(struct ttm_buffer_object *bo)
 	call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
 }
 
+static u64 i915_gem_object_page_size(struct drm_i915_gem_object *obj)
+{
+	u64 page_size;
+	int i;
+
+	if (!obj->mm.n_placements)
+		return obj->mm.region->min_page_size;
+
+	page_size = 0;
+	for (i = 0; i < obj->mm.n_placements; i++) {
+		struct intel_memory_region *mr = obj->mm.placements[i];
+
+		page_size = max_t(u64, mr->min_page_size, page_size);
+	}
+
+	GEM_BUG_ON(!page_size);
+	return page_size;
+}
+
 /**
  * __i915_gem_ttm_object_init - Initialize a ttm-backed i915 gem object
  * @mem: The initial memory region for the object.
@@ -793,7 +812,7 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 	obj->base.vma_node.driver_private = i915_gem_to_ttm(obj);
 	ret = ttm_bo_init(&i915->bdev, i915_gem_to_ttm(obj), size,
 			  bo_type, &i915_sys_placement,
-			  mem->min_page_size >> PAGE_SHIFT,
+			  i915_gem_object_page_size(obj) >> PAGE_SHIFT,
 			  true, NULL, NULL, i915_ttm_bo_destroy);
 	if (!ret)
 		obj->ttm.created = true;
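
As a side note for readers outside the driver, below is a minimal
standalone sketch (plain userspace C, not kernel code) of the rule the
patch applies: take the largest min_page_size across every allowed
placement, so a later migration can never leave the object
under-aligned. The region names, the 4 KiB / 64 KiB page sizes and the
fake_region / required_page_size helpers are invented for illustration
only and are not part of the i915 code.

/*
 * Standalone illustration (not kernel code): why the page alignment
 * must be the maximum min_page_size over *all* allowed placements,
 * not just the initial region. All names and sizes are made up.
 */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct fake_region {
	const char *name;
	uint64_t min_page_size;
};

/* Same idea as the patch: take the max over every placement. */
static uint64_t required_page_size(const struct fake_region *placements,
				   size_t n_placements,
				   const struct fake_region *initial)
{
	uint64_t page_size = 0;
	size_t i;

	if (!n_placements)
		return initial->min_page_size;

	for (i = 0; i < n_placements; i++) {
		if (placements[i].min_page_size > page_size)
			page_size = placements[i].min_page_size;
	}

	return page_size;
}

int main(void)
{
	/* Hypothetical {SMEM, LMEM} placement list from the commit message. */
	struct fake_region smem = { "SMEM", 4096 };       /* 4 KiB system pages */
	struct fake_region lmem = { "LMEM", 64 * 1024 };  /* 64 KiB local-memory pages */
	struct fake_region placements[] = { smem, lmem };

	/*
	 * Consulting only the initial region (SMEM) would align the object
	 * to 4 KiB, which a later migration to LMEM (64 KiB minimum) would
	 * violate. Taking the max over all placements avoids that.
	 */
	printf("initial-region-only alignment: %llu\n",
	       (unsigned long long)smem.min_page_size);
	printf("all-placements alignment:      %llu\n",
	       (unsigned long long)required_page_size(placements, 2, &smem));
	return 0;
}

With an empty placement list the helper falls back to the initial
region, mirroring the !obj->mm.n_placements path in the patch.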