
[RFC,v8,12/56] x86/sev: Add RMP entry lookup helpers

Message ID 20230220183847.59159-13-michael.roth@amd.com (mailing list archive)
State New, archived
Series Add AMD Secure Nested Paging (SEV-SNP) Hypervisor Support

Commit Message

Michael Roth Feb. 20, 2023, 6:38 p.m. UTC
From: Brijesh Singh <brijesh.singh@amd.com>

The snp_lookup_rmpentry() helper can be used by the host to read the RMP
entry for a given page. The RMP entry format is documented in the AMD PPR; see
https://bugzilla.kernel.org/attachment.cgi?id=296015.

Co-developed-by: Ashish Kalra <ashish.kalra@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Michael Roth <michael.roth@amd.com>
---
 arch/x86/include/asm/sev.h |  4 +-
 arch/x86/kernel/sev.c      | 84 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 87 insertions(+), 1 deletion(-)

Comments

Vlastimil Babka March 3, 2023, 3:28 p.m. UTC | #1
On 2/20/23 19:38, Michael Roth wrote:
> From: Brijesh Singh <brijesh.singh@amd.com>
> 
> The snp_lookup_rmpentry() helper can be used by the host to read the RMP
> entry for a given page. The RMP entry format is documented in the AMD PPR; see
> https://bugzilla.kernel.org/attachment.cgi?id=296015.
> 
> Co-developed-by: Ashish Kalra <ashish.kalra@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Signed-off-by: Michael Roth <michael.roth@amd.com>
> ---

> +/*
> + * Return 1 if the RMP entry is assigned, 0 if it exists but is not assigned,
> + * and -errno if there is no corresponding RMP entry.
> + */

Hmm IMHO the kernel's idiomatic way is to return 0 on "success" and I'd
assume the more intuitive expectation of success here if the entry is
assigned? The various callers seem to differ though so I guess it depends on
context. Some however don't distinguish their "failure" from an ERR and
maybe they should, at least for the purposes of the various printks?
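
E.g. something like this on the caller side (completely untested sketch; the
caller context and messages here are made up just to illustrate the distinction):

  int ret, level;

  ret = snp_lookup_rmpentry(pfn, &level);
  if (ret < 0) {
          /* The lookup itself failed, report it as an error. */
          pr_err("failed to look up RMP entry for pfn 0x%llx (%d)\n", pfn, ret);
          return ret;
  }

  if (!ret) {
          /* Entry exists but the page is not assigned, a distinct and possibly benign case. */
          pr_debug("pfn 0x%llx not assigned in RMP\n", pfn);
  }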

> +int snp_lookup_rmpentry(u64 pfn, int *level)
> +{
> +	struct rmpentry *e;
> +
> +	e = __snp_lookup_rmpentry(pfn, level);
> +	if (IS_ERR(e))
> +		return PTR_ERR(e);
> +
> +	return !!rmpentry_assigned(e);
> +}
> +EXPORT_SYMBOL_GPL(snp_lookup_rmpentry);
Michael Roth March 29, 2023, 10:59 p.m. UTC | #2
On Fri, Mar 03, 2023 at 04:28:39PM +0100, Vlastimil Babka wrote:
> On 2/20/23 19:38, Michael Roth wrote:
> > From: Brijesh Singh <brijesh.singh@amd.com>
> > 
> > The snp_lookup_rmpentry() helper can be used by the host to read the RMP
> > entry for a given page. The RMP entry format is documented in the AMD PPR; see
> > https://bugzilla.kernel.org/attachment.cgi?id=296015.
> > 
> > Co-developed-by: Ashish Kalra <ashish.kalra@amd.com>
> > Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> > Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> > Signed-off-by: Michael Roth <michael.roth@amd.com>
> > ---
> 
> > +/*
> > + * Return 1 if the RMP entry is assigned, 0 if it exists but is not assigned,
> > + * and -errno if there is no corresponding RMP entry.
> > + */
> 
> Hmm IMHO the kernel's idiomatic way is to return 0 on "success" and I'd
> assume the more intuitive expectation of success here if the entry is
> assigned?

In general I'd agree. Here it's a little awkward, though.
snp_lookup_rmpentry() sort of wants to be a bool, where true indicates
an assigned entry was found, false indicates no assigned entry.

But it also has to deal with error values, so the most direct way to
encapsulate that is true == 1, false == 0, and < 0 for errors.

Inverting it to align more with kernel expectations of 0 == success/true
gets awkward too, because stuff like:

  if (snp_lookup_rmpentry(...))
    //error

still doesn't work the way most other functions written in this way
would since it could still be "successful" if we were expecting PFN to
be in shared state. So the return value needs special handling there
too.

Would it make sense to define it something like this?:

  /*
   * Query information about the RMP entry corresponding to the given
   * PFN.
   * 
   * Returns 0 on success, and -errno if there was a problem accessing
   * the RMP entry.
   */
  int snp_lookup_rmpentry(u64 pfn, int *level, bool *assigned)
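
The body would then be roughly the following, reusing the helpers already in
this patch (just a sketch to illustrate the idea, not the final code):

  int snp_lookup_rmpentry(u64 pfn, int *level, bool *assigned)
  {
          struct rmpentry *e;

          e = __snp_lookup_rmpentry(pfn, level);
          if (IS_ERR(e))
                  return PTR_ERR(e);

          *assigned = !!rmpentry_assigned(e);

          return 0;
  }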

> The various callers seem to differ though so I guess it depends on
> context. Some however don't distinguish their "failure" from an ERR and
> maybe they should, at least for the purposes of the various printks?

Yes, regardless of what we decide above, the call-sites should properly
distinguish between failure/assigned/not-assigned and report the
information accordingly. I'll get those fixed up where needed.

Thanks,

-Mike

> 
> > +int snp_lookup_rmpentry(u64 pfn, int *level)
> > +{
> > +	struct rmpentry *e;
> > +
> > +	e = __snp_lookup_rmpentry(pfn, level);
> > +	if (IS_ERR(e))
> > +		return PTR_ERR(e);
> > +
> > +	return !!rmpentry_assigned(e);
> > +}
> > +EXPORT_SYMBOL_GPL(snp_lookup_rmpentry);
>
Vlastimil Babka April 20, 2023, 4:31 p.m. UTC | #3
On 3/30/23 00:59, Michael Roth wrote:
> On Fri, Mar 03, 2023 at 04:28:39PM +0100, Vlastimil Babka wrote:
>> On 2/20/23 19:38, Michael Roth wrote:
>> > From: Brijesh Singh <brijesh.singh@amd.com>
>> > 
>> > The snp_lookup_rmpentry() helper can be used by the host to read the RMP
>> > entry for a given page. The RMP entry format is documented in the AMD PPR; see
>> > https://bugzilla.kernel.org/attachment.cgi?id=296015.
>> > 
>> > Co-developed-by: Ashish Kalra <ashish.kalra@amd.com>
>> > Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
>> > Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
>> > Signed-off-by: Michael Roth <michael.roth@amd.com>
>> > ---
>> 
>> > +/*
>> > + * Return 1 if the RMP entry is assigned, 0 if it exists but is not assigned,
>> > + * and -errno if there is no corresponding RMP entry.
>> > + */
>> 
>> Hmm IMHO the kernel's idiomatic way is to return 0 on "success" and I'd
>> assume the more intuitive expectation of success here if the entry is
>> assigned?
> 
> In general I'd agree. Here it's a little awkward, though.
> snp_lookup_rmpentry() sort of wants to be a bool, where true indicates
> an assigned entry was found, false indicates no assigned entry.
> 
> But it also has to deal with error values, so the most direct way to
> encapsulate that is true == 1, false == 0, and < 0 for errors.
> 
> Inverting it to align more with kernel expectations of 0 == success/true
> gets awkward too, because stuff like:
> 
>   if (snp_lookup_rmpentry(...))
>     //error
> 
> still doesn't work the way most other functions written in this way
> would since it could still be "successful" if we were expecting PFN to
> be in shared state. So the return value needs special handling there
> too.
> 
> Would it make sense to define it something like this?:
> 
>   /*
>    * Query information about the RMP entry corresponding to the given
>    * PFN.
>    * 
>    * Returns 0 on success, and -errno if there was a problem accessing
>    * the RMP entry.
>    */
>   int snp_lookup_rmpentry(u64 pfn, int *level, bool *assigned)

Yeah, that looks fine to me. Hope you find it makes things easier to work
with in the callers too.

> 
>> The various callers seem to differ though so I guess it depends on
>> context. Some however don't distinguish their "failure" from an ERR and
>> maybe they should, at least for the purposes of the various printks?
> 
> Yes, regardless of what we decide above, the call-sites should properly
> distinguish between failure/assigned/not-assigned and report the
> information accordingly. I'll get those fixed up where needed.

Great, thanks!

> Thanks,
> 
> -Mike
> 
>> 
>> > +int snp_lookup_rmpentry(u64 pfn, int *level)
>> > +{
>> > +	struct rmpentry *e;
>> > +
>> > +	e = __snp_lookup_rmpentry(pfn, level);
>> > +	if (IS_ERR(e))
>> > +		return PTR_ERR(e);
>> > +
>> > +	return !!rmpentry_assigned(e);
>> > +}
>> > +EXPORT_SYMBOL_GPL(snp_lookup_rmpentry);
>>

Patch

diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index ebc271bb6d8e..8d3ce2ad27da 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -83,7 +83,7 @@  extern bool handle_vc_boot_ghcb(struct pt_regs *regs);
 
 /* RMP page size */
 #define RMP_PG_SIZE_4K			0
-
+#define RMP_TO_X86_PG_LEVEL(level)	(((level) == RMP_PG_SIZE_4K) ? PG_LEVEL_4K : PG_LEVEL_2M)
 #define RMPADJUST_VMSA_PAGE_BIT		BIT(16)
 
 /* SNP Guest message request */
@@ -197,6 +197,7 @@  void snp_set_wakeup_secondary_cpu(void);
 bool snp_init(struct boot_params *bp);
 void __init __noreturn snp_abort(void);
 int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, unsigned long *fw_err);
+int snp_lookup_rmpentry(u64 pfn, int *level);
 #else
 static inline void sev_es_ist_enter(struct pt_regs *regs) { }
 static inline void sev_es_ist_exit(void) { }
@@ -221,6 +222,7 @@  static inline int snp_issue_guest_request(u64 exit_code, struct snp_req_data *in
 {
 	return -ENOTTY;
 }
+static inline int snp_lookup_rmpentry(u64 pfn, int *level) { return 0; }
 #endif
 
 #endif
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index e54e412c9916..a063c1b98034 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -61,11 +61,36 @@ 
 #define AP_INIT_CR0_DEFAULT		0x60000010
 #define AP_INIT_MXCSR_DEFAULT		0x1f80
 
+/*
+ * The RMP entry format is not architectural. The format is defined in PPR
+ * Family 19h Model 01h, Rev B1 processor.
+ */
+struct rmpentry {
+	union {
+		struct {
+			u64	assigned	: 1,
+				pagesize	: 1,
+				immutable	: 1,
+				rsvd1		: 9,
+				gpa		: 39,
+				asid		: 10,
+				vmsa		: 1,
+				validated	: 1,
+				rsvd2		: 1;
+		} info;
+		u64 low;
+	};
+	u64 high;
+} __packed;
+
 /*
  * The first 16KB from the RMP_BASE is used by the processor for the
  * bookkeeping, the range needs to be added during the RMP entry lookup.
  */
 #define RMPTABLE_CPU_BOOKKEEPING_SZ	0x4000
+#define RMPENTRY_SHIFT			8
+#define rmptable_page_offset(x)	(RMPTABLE_CPU_BOOKKEEPING_SZ + \
+				 (((unsigned long)x) >> RMPENTRY_SHIFT))
 
 /* For early boot hypervisor communication in SEV-ES enabled guests */
 static struct ghcb boot_ghcb_page __bss_decrypted __aligned(PAGE_SIZE);
@@ -2435,3 +2460,62 @@  static int __init snp_host_init(void)
  * the page(s) used for DMA are hypervisor owned.
  */
 fs_initcall(snp_host_init);
+
+static inline unsigned int rmpentry_assigned(struct rmpentry *e)
+{
+	return e->info.assigned;
+}
+
+static inline unsigned int rmpentry_pagesize(struct rmpentry *e)
+{
+	return e->info.pagesize;
+}
+
+static struct rmpentry *rmptable_entry(unsigned long paddr)
+{
+	unsigned long vaddr;
+
+	vaddr = rmptable_start + rmptable_page_offset(paddr);
+	if (unlikely(vaddr > rmptable_end))
+		return ERR_PTR(-EFAULT);
+
+	return (struct rmpentry *)vaddr;
+}
+
+static struct rmpentry *__snp_lookup_rmpentry(u64 pfn, int *level)
+{
+	unsigned long paddr = pfn << PAGE_SHIFT;
+	struct rmpentry *entry, *large_entry;
+
+	if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
+		return ERR_PTR(-ENXIO);
+
+	if (!pfn_valid(pfn))
+		return ERR_PTR(-EINVAL);
+
+	entry = rmptable_entry(paddr);
+	if (IS_ERR(entry))
+		return entry;
+
+	/* Read a large RMP entry to get the correct page level used in RMP entry. */
+	large_entry = rmptable_entry(paddr & PMD_MASK);
+	*level = RMP_TO_X86_PG_LEVEL(rmpentry_pagesize(large_entry));
+
+	return entry;
+}
+
+/*
+ * Return 1 if the RMP entry is assigned, 0 if it exists but is not assigned,
+ * and -errno if there is no corresponding RMP entry.
+ */
+int snp_lookup_rmpentry(u64 pfn, int *level)
+{
+	struct rmpentry *e;
+
+	e = __snp_lookup_rmpentry(pfn, level);
+	if (IS_ERR(e))
+		return PTR_ERR(e);
+
+	return !!rmpentry_assigned(e);
+}
+EXPORT_SYMBOL_GPL(snp_lookup_rmpentry);
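
For context, a minimal sketch of how a caller might consume the helper as it is
defined in this version of the patch. The caller below is hypothetical and not
part of the series, and assumes <asm/sev.h> for the prototype and PG_LEVEL_*
from <asm/pgtable_types.h>. The return value packs three cases together
(1 = assigned, 0 = not assigned, -errno = lookup failure), which is what the
review discussion above is about:

  /*
   * Hypothetical caller: check whether @pfn is assigned in the RMP and, if so,
   * whether it is covered by a 2MB RMP entry.
   */
  static int check_rmp_assignment(u64 pfn, bool *is_2m)
  {
          int level, ret;

          ret = snp_lookup_rmpentry(pfn, &level);
          if (ret < 0)
                  return ret;     /* lookup failed (no SNP support, invalid pfn, ...) */

          if (!ret)
                  return 0;       /* RMP entry exists but the page is not assigned */

          *is_2m = (level == PG_LEVEL_2M);
          return 1;               /* assigned */
  }

Note also the index arithmetic in the patch: each 4KB page of physical memory
has a 16-byte RMP entry (struct rmpentry is two u64s), so the byte offset into
the RMP table is paddr >> RMPENTRY_SHIFT (i.e. paddr / 256), added to the 16KB
CPU bookkeeping area at the start of the table.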