From patchwork Thu Feb 15 17:44:33 2024
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 13558923
From: Matthew Auld <matthew.auld@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org, Arunpravin Paneer Selvam,
 Christian König, stable@vger.kernel.org
Subject: [PATCH 2/6] drm/buddy: fix range bias
Date: Thu, 15 Feb 2024 17:44:33 +0000
Message-ID: <20240215174431.285069-8-matthew.auld@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240215174431.285069-7-matthew.auld@intel.com>
References: <20240215174431.285069-7-matthew.auld@intel.com>

There is a corner case here where start/end is after/before the block
range we are currently checking. If so, we need to be sure that
splitting the block will eventually give us the block size we need. To
do that we should adjust the block range to account for the start/end,
and only continue with the split if the size/alignment will fit the
requested size. Not doing so can result in leaving split blocks
unmerged when the allocation eventually fails.
Fixes: afea229fe102 ("drm: improve drm_buddy_alloc function")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Arunpravin Paneer Selvam
Cc: Christian König
Cc: <stable@vger.kernel.org> # v5.18+
Reviewed-by: Arunpravin Paneer Selvam
---
 drivers/gpu/drm/drm_buddy.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
index c1a99bf4dffd..d09540d4065b 100644
--- a/drivers/gpu/drm/drm_buddy.c
+++ b/drivers/gpu/drm/drm_buddy.c
@@ -332,6 +332,7 @@ alloc_range_bias(struct drm_buddy *mm,
 		 u64 start, u64 end,
 		 unsigned int order)
 {
+	u64 req_size = mm->chunk_size << order;
 	struct drm_buddy_block *block;
 	struct drm_buddy_block *buddy;
 	LIST_HEAD(dfs);
@@ -367,6 +368,15 @@ alloc_range_bias(struct drm_buddy *mm,
 		if (drm_buddy_block_is_allocated(block))
 			continue;
 
+		if (block_start < start || block_end > end) {
+			u64 adjusted_start = max(block_start, start);
+			u64 adjusted_end = min(block_end, end);
+
+			if (round_down(adjusted_end + 1, req_size) <=
+			    round_up(adjusted_start, req_size))
+				continue;
+		}
+
 		if (contains(start, end, block_start, block_end) &&
 		    order == drm_buddy_block_order(block)) {
 			/*
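
For readers following the logic, below is a minimal, self-contained
userspace sketch (illustration only, not part of the patch) of the fit
check introduced above. The example values are made up, and
round_up()/round_down() are simplified stand-ins for the kernel helpers
(valid for power-of-two sizes, which req_size always is in the buddy
allocator). Note that block_end/end are inclusive in drm_buddy, hence
the "+ 1" on adjusted_end.

/* fitcheck.c -- illustration only, not part of the patch. */
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel helpers; valid when y is a
 * power of two.
 */
#define round_up(x, y)   ((((x) - 1) | ((uint64_t)(y) - 1)) + 1)
#define round_down(x, y) ((x) & ~((uint64_t)(y) - 1))

int main(void)
{
	uint64_t chunk_size = 4096;              /* example value   */
	unsigned int order = 2;                  /* requesting 16K  */
	uint64_t req_size = chunk_size << order; /* 16384           */

	/* A 32K block whose tail overlaps the bias range [20K, 64K).
	 * block_end and end are inclusive, matching drm_buddy.
	 */
	uint64_t block_start = 0, block_end = 32767;
	uint64_t start = 20480, end = 65535;

	uint64_t adjusted_start = block_start > start ? block_start : start;
	uint64_t adjusted_end = block_end < end ? block_end : end;

	/* Only ~12K of the block lies inside the range, and no 16K-aligned
	 * 16K chunk fits in [20480, 32768), so splitting is pointless.
	 */
	if (round_down(adjusted_end + 1, req_size) <=
	    round_up(adjusted_start, req_size))
		printf("skip: block cannot yield a %llu-byte chunk in range\n",
		       (unsigned long long)req_size);
	else
		printf("split: block can still satisfy the request\n");

	return 0;
}

Run standalone (e.g. gcc fitcheck.c && ./a.out), this prints the "skip"
branch: round_up(20480, 16384) and round_down(32768, 16384) both land on
32768, so no 16K-aligned 16K chunk fits in the overlapping part of the
block. Without the check, the allocator would split this block anyway
and then fail, leaving the split pieces unmerged.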