From patchwork Wed Jun 23 11:26:37 2021
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 12339647
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Cc: Thomas Hellström, dri-devel@lists.freedesktop.org
Subject: [PATCH 3/3] drm/i915/gtt: ignore min_page_size for paging structures
Date: Wed, 23 Jun 2021 12:26:37 +0100
Message-Id: <20210623112637.266855-3-matthew.auld@intel.com>
In-Reply-To: <20210623112637.266855-1-matthew.auld@intel.com>
References: <20210623112637.266855-1-matthew.auld@intel.com>

The min_page_size is only needed for pages inserted into the GTT, and for
our paging structures we only need at most 4K, so simply ignore the
min_page_size restrictions here; otherwise we might see some severe
overallocation on some devices.
Signed-off-by: Matthew Auld
Cc: Thomas Hellström
Reviewed-by: Thomas Hellström
---
 drivers/gpu/drm/i915/gt/intel_gtt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
index 084ea65d59c0..61e8a8c25374 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
@@ -16,7 +16,7 @@ struct drm_i915_gem_object *alloc_pt_lmem(struct i915_address_space *vm, int sz)
 {
 	struct drm_i915_gem_object *obj;
 
-	obj = i915_gem_object_create_lmem(vm->i915, sz, 0);
+	obj = __i915_gem_object_create_lmem_with_ps(vm->i915, sz, sz, 0);
 	/*
 	 * Ensure all paging structures for this vm share the same dma-resv
 	 * object underneath, with the idea that one object_lock() will lock
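
As a side note (not part of the patch): below is a minimal, self-contained
sketch of the over-allocation being avoided here. It assumes an LMEM region
with a 64K minimum page size purely for illustration; the real minimum page
size varies by device and the driver's allocation path is more involved, so
treat this as a back-of-the-envelope model rather than driver code. The old
i915_gem_object_create_lmem() path rounds the object size up to the region's
min_page_size, while the new __i915_gem_object_create_lmem_with_ps() call
passes sz as the page size, so a 4K paging structure stays 4K.

#include <stdio.h>

#define SZ_4K	0x1000u
#define SZ_64K	0x10000u

/* Align size up to a power-of-two boundary, as the region allocator does. */
static unsigned int round_up_pow2(unsigned int size, unsigned int align)
{
	return (size + align - 1) & ~(align - 1);
}

int main(void)
{
	unsigned int sz = SZ_4K;             /* one paging structure */
	unsigned int min_page_size = SZ_64K; /* assumed LMEM minimum on some devices */

	/* Old path: the object size is rounded up to the region's min_page_size. */
	printf("create_lmem:         %u bytes\n", round_up_pow2(sz, min_page_size));

	/* New path: the page size is sz itself, so nothing is wasted. */
	printf("create_lmem_with_ps: %u bytes\n", round_up_pow2(sz, sz));

	return 0;
}

With the assumed 64K minimum, each 4K paging structure would have consumed
64K of local memory on the old path, which is the "severe overallocation"
the commit message refers to.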