
[v3,5/9] xen/riscv: introduce asm/pmap.h header

Message ID 11b5487659a9c76793e520c108cd92c6c84b908d.1721834549.git.oleksii.kurochko@gmail.com (mailing list archive)
State Superseded
Headers show
Series RISCV device tree mapping

Commit Message

Oleksii Kurochko July 24, 2024, 3:31 p.m. UTC
Introduces arch_pmap_{un}map() functions and selects HAS_PMAP
for CONFIG_RISCV.

Additionally it was necessary to introduce functions:
 - mfn_from_xen_entry
 - mfn_to_pte

Also flush_xen_tlb_range_va_local() and flush_xen_tlb_one_local()
are introduced and used in arch_pmap_unmap().

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V3:
 - rename argument of function mfn_to_xen_entry(..., attr -> flags ).
 - update the code of mfn_to_xen_entry() to use flags argument.
 - add blank in mfn_from_pte() in return line.
 - introduce flush_xen_tlb_range_va_local() and use it inside arch_pmap_{un}map().
 - s/__ASM_PMAP_H__/ASM_PMAP_H
 - add SPDX-License-Identifier: GPL-2.0 
---
 xen/arch/riscv/Kconfig                |  1 +
 xen/arch/riscv/include/asm/flushtlb.h | 22 ++++++++++++++++++
 xen/arch/riscv/include/asm/page.h     |  2 ++
 xen/arch/riscv/include/asm/pmap.h     | 33 +++++++++++++++++++++++++++
 xen/arch/riscv/mm.c                   | 15 ++++++++++++
 5 files changed, 73 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/pmap.h

Comments

Jan Beulich July 29, 2024, 2:29 p.m. UTC | #1
On 24.07.2024 17:31, Oleksii Kurochko wrote:
> Introduces arch_pmap_{un}map functions and select HAS_PMAP
> for CONFIG_RISCV.
> 
> Additionaly it was necessary to introduce functions:
>  - mfn_from_xen_entry
>  - mfn_to_pte
> 
> Also flush_xen_tlb_range_va_local() and flush_xen_tlb_one_local()
> are introduced and use in arch_pmap_unmap().

Just flush_xen_tlb_one_local() would suffice for the purposes here.
flush_xen_tlb_range_va_local() will need some kind of cutoff, to
avoid looping for an excessively long time.
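
To illustrate the cutoff idea, here is a minimal sketch; the threshold value
and the flush_xen_tlb_local() full-flush helper are assumptions for
illustration, not existing code:

/*
 * Sketch only: bound the per-page loop; beyond an assumed cutoff a single
 * full local flush is cheaper than flushing page by page.
 */
#define FLUSH_TLB_RANGE_CUTOFF (32 * PAGE_SIZE)   /* assumed value */

static inline void flush_xen_tlb_local(void)      /* assumed helper */
{
    asm volatile ( "sfence.vma" ::: "memory" );
}

static inline void flush_xen_tlb_range_va_local(vaddr_t va,
                                                unsigned long size)
{
    vaddr_t end = va + size;

    if ( size > FLUSH_TLB_RANGE_CUTOFF )
    {
        flush_xen_tlb_local();
        return;
    }

    while ( va < end )
    {
        flush_xen_tlb_one_local(va);
        va += PAGE_SIZE;
    }
}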

> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/pmap.h
> @@ -0,0 +1,33 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef ASM_PMAP_H
> +#define ASM_PMAP_H
> +
> +#include <xen/bug.h>
> +#include <xen/mm.h>
> +#include <xen/page-size.h>
> +
> +#include <asm/fixmap.h>
> +#include <asm/flushtlb.h>
> +#include <asm/system.h>
> +
> +static inline void arch_pmap_map(unsigned int slot, mfn_t mfn)
> +{
> +    pte_t *entry = &xen_fixmap[slot];
> +    pte_t pte;
> +
> +    ASSERT(!pte_is_valid(*entry));
> +
> +    pte = mfn_to_xen_entry(mfn, PAGE_HYPERVISOR_RW);
> +    write_pte(entry, pte);
> +}

Perhaps add a comment to clarify why it's safe to omit a TLB flush here.
Note that arch_pmap_unmap() having one is a necessary but not sufficient
condition to that. In principle hardware may also cache "negative" TLB
entries; I have no idea how RISC-V behaves in this regard, or whether
that aspect is actually left to implementations.

> +static inline void arch_pmap_unmap(unsigned int slot)
> +{
> +    pte_t pte = {};
> +
> +    write_pte(&xen_fixmap[slot], pte);
> +
> +    flush_xen_tlb_range_va_local(FIXMAP_ADDR(slot), PAGE_SIZE);
> +}

For both functions, even if mainly for documentation purposes, it may
be desirable to mark them __init. It's again a necessary (but not
sufficient) condition here, to validly use local TLB flushes only.
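
For illustration only, the __init marking could look as below; as noted, it
is mainly documentation of boot-time-only use, since for static inline
functions there is little the toolchain can enforce:

/* Sketch: same body as in the patch, only the annotation is added. */
static inline void __init arch_pmap_map(unsigned int slot, mfn_t mfn)
{
    pte_t *entry = &xen_fixmap[slot];
    pte_t pte;

    ASSERT(!pte_is_valid(*entry));

    pte = mfn_to_xen_entry(mfn, PAGE_HYPERVISOR_RW);
    write_pte(entry, pte);
}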

> --- a/xen/arch/riscv/mm.c
> +++ b/xen/arch/riscv/mm.c
> @@ -382,3 +382,18 @@ int map_pages_to_xen(unsigned long virt,
>      BUG_ON("unimplemented");
>      return -1;
>  }
> +
> +static inline pte_t mfn_from_pte(mfn_t mfn)
> +{
> +    unsigned long pte = mfn_x(mfn) << PTE_PPN_SHIFT;
> +    return (pte_t){ .pte = pte };
> +}

My earlier naming related comment may not have been here, but it
was certainly meant to apply to any similar functions: A function
of this name should imo take a pte_t and produce an mfn_t. IOW I'd
expect this function to be pte_from_mfn(). However, I question it being
a good idea to do it this way. The resulting pte_t isn't valid yet, aiui,
as it still needs at least the access controls filled in. Such a partial
pte_t would better not be floating around at any time, imo. IOW the
function likely wants to take a 2nd parameter.

Jan
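
To make the naming/signature suggestion concrete, a sketch of a
pte_from_mfn() taking the access flags as a second parameter follows; the
exact name and the fate of mfn_to_xen_entry() are illustrative assumptions,
not what was committed:

/* Sketch: build a complete PTE in one step rather than a partial one. */
static inline pte_t pte_from_mfn(mfn_t mfn, unsigned int flags)
{
    unsigned long pte = (mfn_x(mfn) << PTE_PPN_SHIFT) | flags;

    return (pte_t){ .pte = pte };
}

/* mfn_to_xen_entry() would then reduce to: return pte_from_mfn(mfn, flags); */
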
Oleksii Kurochko July 29, 2024, 4:22 p.m. UTC | #2
On Mon, 2024-07-29 at 16:29 +0200, Jan Beulich wrote:
> On 24.07.2024 17:31, Oleksii Kurochko wrote:
> > Introduces arch_pmap_{un}map functions and select HAS_PMAP
> > for CONFIG_RISCV.
> > 
> > Additionaly it was necessary to introduce functions:
> >  - mfn_from_xen_entry
> >  - mfn_to_pte
> > 
> > Also flush_xen_tlb_range_va_local() and flush_xen_tlb_one_local()
> > are introduced and use in arch_pmap_unmap().
> 
> Just flush_xen_tlb_one_local() would suffice for the purposes here.
> flush_xen_tlb_range_va_local() will need some kind of cutoff, to
> avoid looping for an excessivly long time.
> 
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/pmap.h
> > @@ -0,0 +1,33 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +#ifndef ASM_PMAP_H
> > +#define ASM_PMAP_H
> > +
> > +#include <xen/bug.h>
> > +#include <xen/mm.h>
> > +#include <xen/page-size.h>
> > +
> > +#include <asm/fixmap.h>
> > +#include <asm/flushtlb.h>
> > +#include <asm/system.h>
> > +
> > +static inline void arch_pmap_map(unsigned int slot, mfn_t mfn)
> > +{
> > +    pte_t *entry = &xen_fixmap[slot];
> > +    pte_t pte;
> > +
> > +    ASSERT(!pte_is_valid(*entry));
> > +
> > +    pte = mfn_to_xen_entry(mfn, PAGE_HYPERVISOR_RW);
> > +    write_pte(entry, pte);
> > +}
> 
> Perhaps add a comment to clarify why it's safe to omit a TLB flush
> here.
> Note that arch_pmap_unmap() having one is a necessary but not
> sufficient
> condition to that. In principle hardware may also cache "negative"
> TLB
> entries; I have no idea how RISC-V behaves in this regard, or whether
> that aspect is actually left to implementations.
what do you mean by "negative" TLB? an old TLB entry which should be
invalidated?

> 
> > +static inline void arch_pmap_unmap(unsigned int slot)
> > +{
> > +    pte_t pte = {};
> > +
> > +    write_pte(&xen_fixmap[slot], pte);
> > +
> > +    flush_xen_tlb_range_va_local(FIXMAP_ADDR(slot), PAGE_SIZE);
> > +}
> 
> For both functions, even if mainly for documentation purposes, it may
> be desirable to mark them __init. It's again a necessary (but not
> sufficient) condition here, to validly use local TLB flushes only.
Does __init in this context mean that it will be called only by the boot
cpu at the start, and that is it?

> 
> > --- a/xen/arch/riscv/mm.c
> > +++ b/xen/arch/riscv/mm.c
> > @@ -382,3 +382,18 @@ int map_pages_to_xen(unsigned long virt,
> >      BUG_ON("unimplemented");
> >      return -1;
> >  }
> > +
> > +static inline pte_t mfn_from_pte(mfn_t mfn)
> > +{
> > +    unsigned long pte = mfn_x(mfn) << PTE_PPN_SHIFT;
> > +    return (pte_t){ .pte = pte };
> > +}
> 
> My earlier naming related comment may not have been here, but it
> was certainly meant to apply to any similar functions: A function
> of this name should imo take a pte_t and produce an mfn_t. IOW I'd
> expect this function to be pte_from_mfn(). However, I question it
> being
> a good idea to do it this way. The resulting pte_t isn't valid yet,
> aiui,
> as it still needs at least the access controls filled in. Such a
> partial
> pte_t would better not be floating around at any time, imo. IOW the
> function likely wants to take a 2nd parameter.
Thanks. I'll double-check the naming and update the prototype of this
function.


~ Oleksii
Jan Beulich July 30, 2024, 7:56 a.m. UTC | #3
On 29.07.2024 18:22, oleksii.kurochko@gmail.com wrote:
> On Mon, 2024-07-29 at 16:29 +0200, Jan Beulich wrote:
>> On 24.07.2024 17:31, Oleksii Kurochko wrote:
>>> --- /dev/null
>>> +++ b/xen/arch/riscv/include/asm/pmap.h
>>> @@ -0,0 +1,33 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +#ifndef ASM_PMAP_H
>>> +#define ASM_PMAP_H
>>> +
>>> +#include <xen/bug.h>
>>> +#include <xen/mm.h>
>>> +#include <xen/page-size.h>
>>> +
>>> +#include <asm/fixmap.h>
>>> +#include <asm/flushtlb.h>
>>> +#include <asm/system.h>
>>> +
>>> +static inline void arch_pmap_map(unsigned int slot, mfn_t mfn)
>>> +{
>>> +    pte_t *entry = &xen_fixmap[slot];
>>> +    pte_t pte;
>>> +
>>> +    ASSERT(!pte_is_valid(*entry));
>>> +
>>> +    pte = mfn_to_xen_entry(mfn, PAGE_HYPERVISOR_RW);
>>> +    write_pte(entry, pte);
>>> +}
>>
>> Perhaps add a comment to clarify why it's safe to omit a TLB flush
>> here.
>> Note that arch_pmap_unmap() having one is a necessary but not
>> sufficient
>> condition to that. In principle hardware may also cache "negative"
>> TLB
>> entries; I have no idea how RISC-V behaves in this regard, or whether
>> that aspect is actually left to implementations.
> what do you mean by "negative" TLB? an old TLB entry which should be
> invalidated?

No, I mean TLB entries saying "no valid translation here". I.e. avoiding
recurring walks of something that was once found to have no translation.

>>> +static inline void arch_pmap_unmap(unsigned int slot)
>>> +{
>>> +    pte_t pte = {};
>>> +
>>> +    write_pte(&xen_fixmap[slot], pte);
>>> +
>>> +    flush_xen_tlb_range_va_local(FIXMAP_ADDR(slot), PAGE_SIZE);
>>> +}
>>
>> For both functions, even if mainly for documentation purposes, it may
>> be desirable to mark them __init. It's again a necessary (but not
>> sufficient) condition here, to validly use local TLB flushes only.
> Does __init in this context means that it will be called only by boot
> cpu at the start and that is it?

No, and I said so in my reply. __init is indicative of the function not
being suitable for runtime use, but it is not enough to indicate the
function is also unsuitable for use as soon as a 2nd CPU is being brought
up. Yet for the latter we have no proper annotation. Hence my suggestion
to use the former to have at least a limited indicator.

An alternative might be to add ASSERT(system_state < SYS_STATE_smp_boot).
That, however, exists in common/pmap.c already anyway.

Yet another alternative would be a clarifying comment.
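
The ASSERT alternative, sketched here for arch_pmap_unmap() (system_state
and SYS_STATE_smp_boot are existing common-code symbols; placing the check
here would duplicate what common/pmap.c already does, as noted above):

static inline void arch_pmap_unmap(unsigned int slot)
{
    pte_t pte = {};

    /* The local-only flush below is valid only before secondary CPUs run. */
    ASSERT(system_state < SYS_STATE_smp_boot);

    write_pte(&xen_fixmap[slot], pte);

    flush_xen_tlb_range_va_local(FIXMAP_ADDR(slot), PAGE_SIZE);
}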

Jan
Oleksii Kurochko July 30, 2024, 11:39 a.m. UTC | #4
On Tue, 2024-07-30 at 09:56 +0200, Jan Beulich wrote:
> On 29.07.2024 18:22, oleksii.kurochko@gmail.com wrote:
> > On Mon, 2024-07-29 at 16:29 +0200, Jan Beulich wrote:
> > > On 24.07.2024 17:31, Oleksii Kurochko wrote:
> > > > --- /dev/null
> > > > +++ b/xen/arch/riscv/include/asm/pmap.h
> > > > @@ -0,0 +1,33 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > +#ifndef ASM_PMAP_H
> > > > +#define ASM_PMAP_H
> > > > +
> > > > +#include <xen/bug.h>
> > > > +#include <xen/mm.h>
> > > > +#include <xen/page-size.h>
> > > > +
> > > > +#include <asm/fixmap.h>
> > > > +#include <asm/flushtlb.h>
> > > > +#include <asm/system.h>
> > > > +
> > > > +static inline void arch_pmap_map(unsigned int slot, mfn_t mfn)
> > > > +{
> > > > +    pte_t *entry = &xen_fixmap[slot];
> > > > +    pte_t pte;
> > > > +
> > > > +    ASSERT(!pte_is_valid(*entry));
> > > > +
> > > > +    pte = mfn_to_xen_entry(mfn, PAGE_HYPERVISOR_RW);
> > > > +    write_pte(entry, pte);
> > > > +}
> > > 
> > > Perhaps add a comment to clarify why it's safe to omit a TLB
> > > flush
> > > here.
> > > Note that arch_pmap_unmap() having one is a necessary but not
> > > sufficient
> > > condition to that. In principle hardware may also cache
> > > "negative"
> > > TLB
> > > entries; I have no idea how RISC-V behaves in this regard, or
> > > whether
> > > that aspect is actually left to implementations.
> > what do you mean by "negative" TLB? an old TLB entry which should
> > be
> > invalidated?
> 
> No, I mean TLB entries saying "no valid translation here". I.e.
> avoiding
> recurring walks of something that was once found to have no
> translation.
But we can't modify an existing entry because we have
"ASSERT(!pte_is_valid(*entry))" in the pmap_map() function and we do a
TLB flush during pmap_unmap(). So when we are in pmap_map() we can be
sure that there is no TLB entry for the new pte.

> 
> > > > +static inline void arch_pmap_unmap(unsigned int slot)
> > > > +{
> > > > +    pte_t pte = {};
> > > > +
> > > > +    write_pte(&xen_fixmap[slot], pte);
> > > > +
> > > > +    flush_xen_tlb_range_va_local(FIXMAP_ADDR(slot),
> > > > PAGE_SIZE);
> > > > +}
> > > 
> > > For both functions, even if mainly for documentation purposes, it
> > > may
> > > be desirable to mark them __init. It's again a necessary (but not
> > > sufficient) condition here, to validly use local TLB flushes
> > > only.
> > Does __init in this context means that it will be called only by
> > boot
> > cpu at the start and that is it?
> 
> No, and I said so in my reply. __init is indicative of the function
> not
> being suitable for runtime use, but it is not enough to indicate the
> function is also unsuitable for use as soon as a 2nd CPU is being
> brought
> up. Yet for the latter we have no proper annotation. Hence my
> suggestion
> to use the former to have at least a limited indicator.
> 
> An alternative might be to add ASSERT(system_state <
> SYS_STATE_smp_boot).
> That, however, exists in common/pmap.c already anyway.
> 
> Yet another alternative would be a clarifying comment.
Thanks for clarifying. I will stick to the first option with __init.

~ Oleksii
Jan Beulich July 30, 2024, 12:15 p.m. UTC | #5
On 30.07.2024 13:39, oleksii.kurochko@gmail.com wrote:
> On Tue, 2024-07-30 at 09:56 +0200, Jan Beulich wrote:
>> On 29.07.2024 18:22, oleksii.kurochko@gmail.com wrote:
>>> On Mon, 2024-07-29 at 16:29 +0200, Jan Beulich wrote:
>>>> On 24.07.2024 17:31, Oleksii Kurochko wrote:
>>>>> --- /dev/null
>>>>> +++ b/xen/arch/riscv/include/asm/pmap.h
>>>>> @@ -0,0 +1,33 @@
>>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>>> +#ifndef ASM_PMAP_H
>>>>> +#define ASM_PMAP_H
>>>>> +
>>>>> +#include <xen/bug.h>
>>>>> +#include <xen/mm.h>
>>>>> +#include <xen/page-size.h>
>>>>> +
>>>>> +#include <asm/fixmap.h>
>>>>> +#include <asm/flushtlb.h>
>>>>> +#include <asm/system.h>
>>>>> +
>>>>> +static inline void arch_pmap_map(unsigned int slot, mfn_t mfn)
>>>>> +{
>>>>> +    pte_t *entry = &xen_fixmap[slot];
>>>>> +    pte_t pte;
>>>>> +
>>>>> +    ASSERT(!pte_is_valid(*entry));
>>>>> +
>>>>> +    pte = mfn_to_xen_entry(mfn, PAGE_HYPERVISOR_RW);
>>>>> +    write_pte(entry, pte);
>>>>> +}
>>>>
>>>> Perhaps add a comment to clarify why it's safe to omit a TLB
>>>> flush
>>>> here.
>>>> Note that arch_pmap_unmap() having one is a necessary but not
>>>> sufficient
>>>> condition to that. In principle hardware may also cache
>>>> "negative"
>>>> TLB
>>>> entries; I have no idea how RISC-V behaves in this regard, or
>>>> whether
>>>> that aspect is actually left to implementations.
>>> what do you mean by "negative" TLB? an old TLB entry which should
>>> be
>>> invalidated?
>>
>> No, I mean TLB entries saying "no valid translation here". I.e.
>> avoiding
>> recurring walks of something that was once found to have no
>> translation.
> But we can't modify an existent entry because we have
> "ASSERT(!pte_is_valid(*entry))" in pmap_map() function and we are doing
> TLB flush during pmap_unmap().

You _always_ modify an existing entry. That may be a not-present one, but
that's still an entry. And that's part of why I think you still didn't
understand what I said. A particular implementation, if permitted by the
spec, may very well put in place a TLB entry when the result of a page
walk was a non-present entry. So ...

> So when we will be in pmap_map() we are
> sure that there is no TLB entry for the new pte.

..., can you point me at the part of the spec saying that such "negative"
TLB entries are not permitted?

Jan
Oleksii Kurochko July 31, 2024, 10 a.m. UTC | #6
On Tue, 2024-07-30 at 14:15 +0200, Jan Beulich wrote:
> On 30.07.2024 13:39, oleksii.kurochko@gmail.com wrote:
> > On Tue, 2024-07-30 at 09:56 +0200, Jan Beulich wrote:
> > > On 29.07.2024 18:22, oleksii.kurochko@gmail.com wrote:
> > > > On Mon, 2024-07-29 at 16:29 +0200, Jan Beulich wrote:
> > > > > On 24.07.2024 17:31, Oleksii Kurochko wrote:
> > > > > > --- /dev/null
> > > > > > +++ b/xen/arch/riscv/include/asm/pmap.h
> > > > > > @@ -0,0 +1,33 @@
> > > > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > > > +#ifndef ASM_PMAP_H
> > > > > > +#define ASM_PMAP_H
> > > > > > +
> > > > > > +#include <xen/bug.h>
> > > > > > +#include <xen/mm.h>
> > > > > > +#include <xen/page-size.h>
> > > > > > +
> > > > > > +#include <asm/fixmap.h>
> > > > > > +#include <asm/flushtlb.h>
> > > > > > +#include <asm/system.h>
> > > > > > +
> > > > > > +static inline void arch_pmap_map(unsigned int slot, mfn_t
> > > > > > mfn)
> > > > > > +{
> > > > > > +    pte_t *entry = &xen_fixmap[slot];
> > > > > > +    pte_t pte;
> > > > > > +
> > > > > > +    ASSERT(!pte_is_valid(*entry));
> > > > > > +
> > > > > > +    pte = mfn_to_xen_entry(mfn, PAGE_HYPERVISOR_RW);
> > > > > > +    write_pte(entry, pte);
> > > > > > +}
> > > > > 
> > > > > Perhaps add a comment to clarify why it's safe to omit a TLB
> > > > > flush
> > > > > here.
> > > > > Note that arch_pmap_unmap() having one is a necessary but not
> > > > > sufficient
> > > > > condition to that. In principle hardware may also cache
> > > > > "negative"
> > > > > TLB
> > > > > entries; I have no idea how RISC-V behaves in this regard, or
> > > > > whether
> > > > > that aspect is actually left to implementations.
> > > > what do you mean by "negative" TLB? an old TLB entry which
> > > > should
> > > > be
> > > > invalidated?
> > > 
> > > No, I mean TLB entries saying "no valid translation here". I.e.
> > > avoiding
> > > recurring walks of something that was once found to have no
> > > translation.
> > But we can't modify an existent entry because we have
> > "ASSERT(!pte_is_valid(*entry))" in pmap_map() function and we are
> > doing
> > TLB flush during pmap_unmap().
> 
> You _always_ modify an existing entry. That may be a not-present one,
> but
> that's still an entry. And that's part of why I think you still
> didn't
> understand what I said. A particular implementation, if permitted by
> the
> spec, may very well put in place a TLB entry when the result of a
> page
> walk was a non-present entry. So ...
> 
> > So when we will be in pmap_map() we are
> > sure that there is no TLB entry for the new pte.
> 
> ..., can you point me at the part of the spec saying that such
> "negative"
> TLB entries are not permitted?
I double-checked the spec and it seems to me you are right and it could
be an issue. Absence of "negative" TLB entries will be guaranteed only
in the case when the Svvptc extension is present (
https://github.com/riscv/riscv-svvptc/blob/main/svvptc.adoc?plain=1#L85
):
   Depending on the microarchitecture, some possible ways to facilitate
   implementation of Svvptc include: not having any address-translation
   caches, not storing Invalid PTEs in the address-translation caches,
   automatically evicting Invalid PTEs using a bounded timer, or making
   address-translation caches coherent with store instructions that
   modify PTEs.

If the above-mentioned extension isn't present, that guarantee doesn't
hold. There are also the following words in the spec
( https://github.com/riscv/riscv-isa-manual/blob/main/src/supervisor.adoc?plain=1#L1213
):
   For the special cases of increasing the permissions on a leaf PTE and
   changing an invalid PTE to a valid leaf, software may choose to execute
   the SFENCE.VMA lazily. After modifying the PTE but before executing
   SFENCE.VMA, either the new or old permissions will be used. In the
   latter case, a page-fault exception might occur, at which point software
   should execute SFENCE.VMA in accordance with the previous bullet point.

Probably in the future we will also need something similar to what the
Linux kernel does:
https://patchwork.kernel.org/project/linux-mips/cover/20240131155929.169961-1-alexghiti@rivosinc.com/

So I'll add flush_xen_tlb_range_va_local(FIXMAP_ADDR(slot), PAGE_SIZE)
in arch_pmap_map.
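
A sketch of what arch_pmap_map() might then look like, folding in both the
__init annotation agreed earlier in the thread and the flush; this is only
an illustration of where the next revision seems headed, not the actual v4
code:

static inline void __init arch_pmap_map(unsigned int slot, mfn_t mfn)
{
    pte_t *entry = &xen_fixmap[slot];
    pte_t pte;

    ASSERT(!pte_is_valid(*entry));

    pte = mfn_to_xen_entry(mfn, PAGE_HYPERVISOR_RW);
    write_pte(entry, pte);

    /*
     * Without Svvptc the hardware may have cached a "negative" translation
     * for this slot; flush so the new mapping is guaranteed to be observed.
     */
    flush_xen_tlb_range_va_local(FIXMAP_ADDR(slot), PAGE_SIZE);
}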

~ Oleksii

Patch

diff --git a/xen/arch/riscv/Kconfig b/xen/arch/riscv/Kconfig
index 259eea8d3b..0112aa8778 100644
--- a/xen/arch/riscv/Kconfig
+++ b/xen/arch/riscv/Kconfig
@@ -3,6 +3,7 @@  config RISCV
 	select FUNCTION_ALIGNMENT_16B
 	select GENERIC_BUG_FRAME
 	select HAS_DEVICE_TREE
+	select HAS_PMAP
 
 config RISCV_64
 	def_bool y
diff --git a/xen/arch/riscv/include/asm/flushtlb.h b/xen/arch/riscv/include/asm/flushtlb.h
index 7ce32bea0b..cf66e90773 100644
--- a/xen/arch/riscv/include/asm/flushtlb.h
+++ b/xen/arch/riscv/include/asm/flushtlb.h
@@ -5,6 +5,28 @@ 
 #include <xen/bug.h>
 #include <xen/cpumask.h>
 
+/* Flush TLB of local processor for address va. */
+static inline void flush_xen_tlb_one_local(vaddr_t va)
+{
+    asm volatile ( "sfence.vma %0" :: "r" (va) : "memory" );
+}
+
+/*
+ * Flush a range of VA's hypervisor mappings from the TLB of the local
+ * processor.
+ */
+static inline void flush_xen_tlb_range_va_local(vaddr_t va,
+                                                unsigned long size)
+{
+    vaddr_t end = va + size;
+
+    while ( va < end )
+    {
+        flush_xen_tlb_one_local(va);
+        va += PAGE_SIZE;
+    }
+}
+
 /*
  * Filter the given set of CPUs, removing those that definitely flushed their
  * TLB since @page_timestamp.
diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
index 0cc2f37cf8..2308cefb0a 100644
--- a/xen/arch/riscv/include/asm/page.h
+++ b/xen/arch/riscv/include/asm/page.h
@@ -52,6 +52,8 @@  typedef struct {
 #endif
 } pte_t;
 
+pte_t mfn_to_xen_entry(mfn_t mfn, unsigned int access_bits);
+
 static inline pte_t paddr_to_pte(paddr_t paddr,
                                  unsigned int permissions)
 {
diff --git a/xen/arch/riscv/include/asm/pmap.h b/xen/arch/riscv/include/asm/pmap.h
new file mode 100644
index 0000000000..068d0794b1
--- /dev/null
+++ b/xen/arch/riscv/include/asm/pmap.h
@@ -0,0 +1,33 @@ 
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef ASM_PMAP_H
+#define ASM_PMAP_H
+
+#include <xen/bug.h>
+#include <xen/mm.h>
+#include <xen/page-size.h>
+
+#include <asm/fixmap.h>
+#include <asm/flushtlb.h>
+#include <asm/system.h>
+
+static inline void arch_pmap_map(unsigned int slot, mfn_t mfn)
+{
+    pte_t *entry = &xen_fixmap[slot];
+    pte_t pte;
+
+    ASSERT(!pte_is_valid(*entry));
+
+    pte = mfn_to_xen_entry(mfn, PAGE_HYPERVISOR_RW);
+    write_pte(entry, pte);
+}
+
+static inline void arch_pmap_unmap(unsigned int slot)
+{
+    pte_t pte = {};
+
+    write_pte(&xen_fixmap[slot], pte);
+
+    flush_xen_tlb_range_va_local(FIXMAP_ADDR(slot), PAGE_SIZE);
+}
+
+#endif /* ASM_PMAP_H */
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index 35724505ec..959b6fc63e 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -382,3 +382,18 @@  int map_pages_to_xen(unsigned long virt,
     BUG_ON("unimplemented");
     return -1;
 }
+
+static inline pte_t mfn_from_pte(mfn_t mfn)
+{
+    unsigned long pte = mfn_x(mfn) << PTE_PPN_SHIFT;
+    return (pte_t){ .pte = pte };
+}
+
+inline pte_t mfn_to_xen_entry(mfn_t mfn, unsigned int access_bits)
+{
+    pte_t pte = mfn_from_pte(mfn);
+
+    pte.pte |= access_bits;
+
+    return pte;
+}