
[v10,5/9] fsdax: Introduce dax_load_page()

Message ID: 20220127124058.1172422-6-ruansy.fnst@fujitsu.com
State: New, archived
Series: fsdax: introduce fs query to support reflink

Commit Message

Shiyang Ruan Jan. 27, 2022, 12:40 p.m. UTC
The current dax_lock_page() locks a dax entry by obtaining the mapping
and index from the page.  To support 1-to-N RMAP on NVDIMM, we need a
new function that locks the specific dax entry corresponding to a given
(mapping, index) pair in a file, and outputs the page backing that
entry for the caller's use.

Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 fs/dax.c            | 44 ++++++++++++++++++++++++++++++++++++++++++++
 include/linux/dax.h |  8 ++++++++
 2 files changed, 52 insertions(+)
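
For illustration, here is a minimal sketch of how a caller, such as a
filesystem's memory-failure handler, might use the new helper once its
rmap walk has produced a (mapping, index) pair. The function name
resolve_dax_page() and the surrounding logic are hypothetical, not part
of this series:

	/*
	 * Hypothetical caller sketch (not part of this series): given a
	 * (mapping, index) pair produced by the filesystem's rmap walk,
	 * resolve it to the backing dax page.
	 */
	static int resolve_dax_page(struct address_space *mapping, pgoff_t index)
	{
		struct page *page = NULL;
		int rc;

		rc = dax_load_page(mapping, index, &page);
		if (rc)
			return rc;	/* -EBUSY: not a dax mapping */
		if (!page)
			return 0;	/* no entry, or a zero/empty entry */

		/* act on 'page', e.g. collect the processes mapping it */
		return 0;
	}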

Comments

Dan Williams Feb. 16, 2022, 1:34 a.m. UTC | #1
On Thu, Jan 27, 2022 at 4:41 AM Shiyang Ruan <ruansy.fnst@fujitsu.com> wrote:
>
> The current dax_lock_page() locks a dax entry by obtaining the mapping
> and index from the page.  To support 1-to-N RMAP on NVDIMM, we need a
> new function that locks the specific dax entry

I do not see a call to dax_lock_entry() in this function; what keeps
this lookup valid after xas_unlock_irq()?
Shiyang Ruan Feb. 16, 2022, 3:02 a.m. UTC | #2
On 2022/2/16 9:34, Dan Williams wrote:
> On Thu, Jan 27, 2022 at 4:41 AM Shiyang Ruan <ruansy.fnst@fujitsu.com> wrote:
>>
>> The current dax_lock_page() locks a dax entry by obtaining the mapping
>> and index from the page.  To support 1-to-N RMAP on NVDIMM, we need a
>> new function that locks the specific dax entry
> 
> I do not see a call to dax_lock_entry() in this function; what keeps
> this lookup valid after xas_unlock_irq()?

I am not sure I understood your advice correctly: you said
dax_lock_entry() was not necessary in v9[1], so I deleted it.

[1]: 
https://lore.kernel.org/linux-xfs/CAPcyv4jVDfpHb1DCW+NLXH2YBgLghCVy8o6wrc02CXx4g-Bv7Q@mail.gmail.com/


--
Thanks,
Ruan.
Dan Williams Feb. 16, 2022, 3:07 a.m. UTC | #3
On Tue, Feb 15, 2022 at 7:02 PM Shiyang Ruan <ruansy.fnst@fujitsu.com> wrote:
>
>
>
> On 2022/2/16 9:34, Dan Williams wrote:
> > On Thu, Jan 27, 2022 at 4:41 AM Shiyang Ruan <ruansy.fnst@fujitsu.com> wrote:
> >>
> >> The current dax_lock_page() locks a dax entry by obtaining the mapping
> >> and index from the page.  To support 1-to-N RMAP on NVDIMM, we need a
> >> new function that locks the specific dax entry
> >
> > I do not see a call to dax_lock_entry() in this function; what keeps
> > this lookup valid after xas_unlock_irq()?
>
> I am not sure I understood your advice correctly: you said
> dax_lock_entry() was not necessary in v9[1], so I deleted it.
>
> [1]:
> https://lore.kernel.org/linux-xfs/CAPcyv4jVDfpHb1DCW+NLXH2YBgLghCVy8o6wrc02CXx4g-Bv7Q@mail.gmail.com/

I also said "if the filesystem can make those guarantees", and it was
not clear whether this helper is being called back from an FS context
that guarantees those associations or not. As far as I can see there is
nothing that protects that association. Apologies for the confusion; I
was misunderstanding where the protection was being enforced in this
case.
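
For reference, the pattern being alluded to here: in the existing
dax_lock_page(), the entry is re-stored with its lock bit set before
the xarray lock is dropped, which is what keeps the entry-to-page
association stable for the caller. Below is a paraphrased, abbreviated
sketch, not verbatim fs/dax.c; the retry-on-locked path and mapping
validation are elided, and dax_lock_entry() is internal to fs/dax.c:

	/* Paraphrased sketch of the existing pinning pattern in fs/dax.c. */
	static dax_entry_t lock_entry_sketch(struct address_space *mapping,
					     pgoff_t index)
	{
		XA_STATE(xas, &mapping->i_pages, index);
		void *entry;

		xas_lock_irq(&xas);
		entry = xas_load(&xas);
		/* (the real code waits and retries while dax_is_locked(entry)) */
		dax_lock_entry(&xas, entry);	/* pin before dropping the lock */
		xas_unlock_irq(&xas);

		/* the entry stays pinned until dax_unlock_entry() runs */
		return (dax_entry_t)entry;
	}

dax_load_page() below skips the dax_lock_entry() step, so its result
can go stale as soon as xas_unlock_irq() runs unless the calling
filesystem holds a lock that prevents the entry from changing, which
is exactly the guarantee being asked about.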

Patch

diff --git a/fs/dax.c b/fs/dax.c
index c8d57080c1aa..964512107c23 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -455,6 +455,50 @@ void dax_unlock_page(struct page *page, dax_entry_t cookie)
 	dax_unlock_entry(&xas, (void *)cookie);
 }
 
+/**
+ * dax_load_page - load the page corresponding to a (mapping, index) pair
+ * @mapping: the file's mapping whose entry we want to load
+ * @index:   the page offset within this file
+ * @page:    output; the dax page corresponding to this dax entry
+ *
+ * Return: -EBUSY if @mapping is not a dax mapping, otherwise 0.
+ */
+int dax_load_page(struct address_space *mapping, pgoff_t index,
+		struct page **page)
+{
+	XA_STATE(xas, &mapping->i_pages, 0);
+	void *entry;
+
+	if (!dax_mapping(mapping))
+		return -EBUSY;
+
+	rcu_read_lock();
+	for (;;) {
+		entry = NULL;
+		xas_lock_irq(&xas);
+		xas_set(&xas, index);
+		entry = xas_load(&xas);
+		if (dax_is_locked(entry)) {
+			rcu_read_unlock();
+			wait_entry_unlocked(&xas, entry);
+			rcu_read_lock();
+			continue;
+		}
+		if (entry &&
+		    !dax_is_zero_entry(entry) && !dax_is_empty_entry(entry)) {
+			/*
+			 * Output the page if the dax entry exists and isn't
+			 * a zero or empty entry.
+			 */
+			*page = pfn_to_page(dax_to_pfn(entry));
+		}
+		xas_unlock_irq(&xas);
+		break;
+	}
+	rcu_read_unlock();
+	return 0;
+}
+
 /*
  * Find page cache entry at given index. If it is a DAX entry, return it
  * with the entry locked. If the page cache doesn't contain an entry at
diff --git a/include/linux/dax.h b/include/linux/dax.h
index 96cfc63b12fd..530ff9733dd9 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -155,6 +155,8 @@ struct page *dax_layout_busy_page(struct address_space *mapping);
 struct page *dax_layout_busy_page_range(struct address_space *mapping, loff_t start, loff_t end);
 dax_entry_t dax_lock_page(struct page *page);
 void dax_unlock_page(struct page *page, dax_entry_t cookie);
+int dax_load_page(struct address_space *mapping,
+		pgoff_t index, struct page **page);
 #else
 static inline struct page *dax_layout_busy_page(struct address_space *mapping)
 {
@@ -182,6 +184,12 @@ static inline dax_entry_t dax_lock_page(struct page *page)
 static inline void dax_unlock_page(struct page *page, dax_entry_t cookie)
 {
 }
+
+static inline int dax_load_page(struct address_space *mapping,
+		pgoff_t index, struct page **page)
+{
+	return 0;
+}
 #endif
 
 int dax_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,