From patchwork Thu Nov 7 20:58:19 2024
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 13867132
From: Ira Weiny
Date: Thu, 07 Nov 2024 14:58:19 -0600
Subject: [PATCH v7 01/27] range: Add range_overlaps()
X-Mailing-List: linux-cxl@vger.kernel.org
Message-Id: <20241107-dcd-type2-upstream-v7-1-56a84e66bc36@intel.com>
References: <20241107-dcd-type2-upstream-v7-0-56a84e66bc36@intel.com>
In-Reply-To: <20241107-dcd-type2-upstream-v7-0-56a84e66bc36@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet,
    Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma, Ira Weiny,
    linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, Chris Mason, Josef Bacik, David Sterba,
    linux-btrfs@vger.kernel.org, Johannes Thumshirn
X-Mailer: b4 0.15-dev-2a633

Code to support CXL Dynamic Capacity devices will have extent ranges
which need to be compared for intersection, not for the subset
relationship that range_contains() checks.

range_overlaps() is defined in btrfs with a different meaning from what
is required in the standard range code.  Dan Williams pointed this out
in [1].  Adjust the btrfs helper according to his suggestion there, then
add a generic range_overlaps().

Cc: Dan Williams
Cc: Chris Mason
Cc: Josef Bacik
Cc: David Sterba
Cc: linux-btrfs@vger.kernel.org
Link: https://lore.kernel.org/all/65949f79ef908_8dc68294f2@dwillia2-xfh.jf.intel.com.notmuch/ [1]
Acked-by: David Sterba
Reviewed-by: Davidlohr Bueso
Reviewed-by: Johannes Thumshirn
Reviewed-by: Fan Ni
Reviewed-by: Dave Jiang
Reviewed-by: Jonathan Cameron
Signed-off-by: Ira Weiny
---
 fs/btrfs/ordered-data.c | 10 +++++-----
 include/linux/range.h   |  8 ++++++++
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 2104d60c216166d577ef81750c63167248f33b6a..744c3375ee6a88e0fc01ef7664e923a48cbe6dca 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -111,8 +111,8 @@ static struct rb_node *__tree_search(struct rb_root *root, u64 file_offset,
 	return NULL;
 }
 
-static int range_overlaps(struct btrfs_ordered_extent *entry, u64 file_offset,
-			  u64 len)
+static int btrfs_range_overlaps(struct btrfs_ordered_extent *entry, u64 file_offset,
+				u64 len)
 {
 	if (file_offset + len <= entry->file_offset ||
 	    entry->file_offset + entry->num_bytes <= file_offset)
@@ -985,7 +985,7 @@ struct btrfs_ordered_extent *btrfs_lookup_ordered_range(
 
 	while (1) {
 		entry = rb_entry(node, struct btrfs_ordered_extent, rb_node);
-		if (range_overlaps(entry, file_offset, len))
+		if (btrfs_range_overlaps(entry, file_offset, len))
 			break;
 
 		if (entry->file_offset >= file_offset + len) {
@@ -1114,12 +1114,12 @@ struct btrfs_ordered_extent *btrfs_lookup_first_ordered_range(
 	}
 	if (prev) {
 		entry = rb_entry(prev, struct btrfs_ordered_extent, rb_node);
-		if (range_overlaps(entry, file_offset, len))
+		if (btrfs_range_overlaps(entry, file_offset, len))
 			goto out;
 	}
 	if (next) {
 		entry = rb_entry(next, struct btrfs_ordered_extent, rb_node);
-		if (range_overlaps(entry, file_offset, len))
+		if (btrfs_range_overlaps(entry, file_offset, len))
 			goto out;
 	}
 	/* No ordered extent in the range */

diff --git a/include/linux/range.h b/include/linux/range.h
index 6ad0b73cb7adc0ee53451b8fed0a70772adc98fa..876cd5355158eff267a42991ba17fa35a1d31600 100644
--- a/include/linux/range.h
+++ b/include/linux/range.h
@@ -13,11 +13,19 @@ static inline u64 range_len(const struct range *range)
 	return range->end - range->start + 1;
 }
 
+/* True if r1 completely contains r2 */
 static inline bool range_contains(struct range *r1, struct range *r2)
 {
 	return r1->start <= r2->start && r1->end >= r2->end;
 }
 
+/* True if any part of r1 overlaps r2 */
+static inline bool range_overlaps(const struct range *r1,
+				  const struct range *r2)
+{
+	return r1->start <= r2->end && r1->end >= r2->start;
+}
+
 int add_range(struct range *range, int az, int nr_range,
 	      u64 start, u64 end);
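
For illustration only, not part of the patch: a minimal standalone
userspace sketch of the two helpers, using made-up example extents, to
show that two ranges can intersect without either containing the other,
which is the case range_contains() cannot express.

/* Standalone sketch; mirrors the semantics of the helpers above. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;

struct range {
	u64 start;
	u64 end;	/* inclusive, as implied by range_len() */
};

static inline bool range_contains(struct range *r1, struct range *r2)
{
	return r1->start <= r2->start && r1->end >= r2->end;
}

static inline bool range_overlaps(const struct range *r1,
				  const struct range *r2)
{
	return r1->start <= r2->end && r1->end >= r2->start;
}

int main(void)
{
	/* Hypothetical extents; any partially intersecting pair behaves the same. */
	struct range a = { .start = 0x1000, .end = 0x2fff };
	struct range b = { .start = 0x2000, .end = 0x3fff };

	printf("contains: %d\n", range_contains(&a, &b)); /* 0: b is not a subset of a */
	printf("overlaps: %d\n", range_overlaps(&a, &b)); /* 1: [0x2000, 0x2fff] is shared */
	return 0;
}

Because struct range uses an inclusive end (range_len() returns
end - start + 1), the overlap test uses <= / >= rather than strict
inequalities.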