diff mbox series

[v3,3/3] selftests/tdx: Test GetQuote TDX attestation feature

Message ID 972e1d5c5ec53e2757fb17a586558c5385e987dd.1684048511.git.sathyanarayanan.kuppuswamy@linux.intel.com (mailing list archive)
State New
Headers show
Series TDX Guest Quote generation support | expand

Commit Message

Kuppuswamy Sathyanarayanan May 14, 2023, 7:23 a.m. UTC
In TDX guest, the second stage of the attestation process is Quote
generation. This process is required to convert the locally generated
TDREPORT into a remotely verifiable Quote. It involves sending the
TDREPORT data to a Quoting Enclave (QE) which will verify the
integrity of the TDREPORT and sign it with an attestation key.

Intel's TDX attestation driver exposes TDX_CMD_GET_QUOTE IOCTL to
allow the user agent to get the TD Quote.

Add a kernel selftest module to verify the Quote generation feature.

TD Quote generation involves following steps:

* Get the TDREPORT data using TDX_CMD_GET_REPORT IOCTL.
* Embed the TDREPORT data in quote buffer and request for quote
  generation via TDX_CMD_GET_QUOTE IOCTL request.
* Upon completion of the GetQuote request, check for non zero value
  in the status field of Quote header to make sure the generated
  quote is valid.

Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Reviewed-by: Shuah Khan <skhan@linuxfoundation.org>
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Reviewed-by: Erdem Aktas <erdemaktas@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
---

Changes since v2:
 * Adapted to struct tdx_quote_hdr -> struct tdx_quote_buf rename.

 tools/testing/selftests/tdx/tdx_guest_test.c | 65 ++++++++++++++++++--
 1 file changed, 59 insertions(+), 6 deletions(-)

Comments

Dan Williams June 12, 2023, 7:03 p.m. UTC | #1
[ add David, Brijesh, and Atish]

Kuppuswamy Sathyanarayanan wrote:
> In TDX guest, the second stage of the attestation process is Quote
> generation. This process is required to convert the locally generated
> TDREPORT into a remotely verifiable Quote. It involves sending the
> TDREPORT data to a Quoting Enclave (QE) which will verify the
> integrity of the TDREPORT and sign it with an attestation key.
> 
> Intel's TDX attestation driver exposes TDX_CMD_GET_QUOTE IOCTL to
> allow the user agent to get the TD Quote.
> 
> Add a kernel selftest module to verify the Quote generation feature.
> 
> TD Quote generation involves following steps:
> 
> * Get the TDREPORT data using TDX_CMD_GET_REPORT IOCTL.
> * Embed the TDREPORT data in quote buffer and request for quote
>   generation via TDX_CMD_GET_QUOTE IOCTL request.
> * Upon completion of the GetQuote request, check for non zero value
>   in the status field of Quote header to make sure the generated
>   quote is valid.

What this cover letter does not say is that this is adding another
instance of the similar pattern as SNP_GET_REPORT.

Linux is best served when multiple vendors trying to do similar
operations are brought together behind a common ABI. We see this in the
history of wrangling SCSI vendors behind common interfaces. Now multiple
confidential computing vendors trying to develop similar flows with
differentiated formats where that differentiation need not leak over the
ABI boundary.

My observation of SNP_GET_REPORT and TDX_CMD_GET_REPORT is that they are
both passing blobs across the user/kernel and platform/kernel boundary
for the purposes of unlocking other resources. To me that is a flow that
the Keys subsystem has infrastructure to handle. It has the concept of
upcalls and asynchronous population of blobs by handles and mechanisms
to protect and cache those communications. Linux / the Keys subsystem
could benefit from the enhancements it would need to cover these 2
cases. Specifically, the benefit that when ARM and RISC-V arrive with
similar communications with platform TSMs (Trusted Security Module) they
can build upon the same infrastructure.

David, am I reaching with that association? My strawman mapping of
TDX_CMD_GET_QUOTE to request_key() is something like:

request_key(coco_quote, "description", "<uuencoded tdreport>")

Where this is a common key_type for all vendors, but the description and
arguments have room for vendor differentiation when doing the upcall to
the platform TSM, but userspace never needs to contend with the
different vendor formats, that is all handled internally to the kernel.

At this point I am just looking for confirmation that the "every vendor
invent a new character device + ioctl" does not scale and a deeper
conversation is needed. Keys is a plausible solution to that ABI
proliferation problem.
Kuppuswamy Sathyanarayanan June 19, 2023, 5:38 a.m. UTC | #2
Hi Dan,

On 6/12/23 12:03 PM, Dan Williams wrote:
> [ add David, Brijesh, and Atish]
> 
> Kuppuswamy Sathyanarayanan wrote:
>> In TDX guest, the second stage of the attestation process is Quote
>> generation. This process is required to convert the locally generated
>> TDREPORT into a remotely verifiable Quote. It involves sending the
>> TDREPORT data to a Quoting Enclave (QE) which will verify the
>> integrity of the TDREPORT and sign it with an attestation key.
>>
>> Intel's TDX attestation driver exposes TDX_CMD_GET_QUOTE IOCTL to
>> allow the user agent to get the TD Quote.
>>
>> Add a kernel selftest module to verify the Quote generation feature.
>>
>> TD Quote generation involves following steps:
>>
>> * Get the TDREPORT data using TDX_CMD_GET_REPORT IOCTL.
>> * Embed the TDREPORT data in quote buffer and request for quote
>>   generation via TDX_CMD_GET_QUOTE IOCTL request.
>> * Upon completion of the GetQuote request, check for non zero value
>>   in the status field of Quote header to make sure the generated
>>   quote is valid.
> 
> What this cover letter does not say is that this is adding another
> instance of the similar pattern as SNP_GET_REPORT.
> 
> Linux is best served when multiple vendors trying to do similar
> operations are brought together behind a common ABI. We see this in the
> history of wrangling SCSI vendors behind common interfaces. Now multiple
> confidential computing vendors trying to develop similar flows with
> differentiated formats where that differentiation need not leak over the
> ABI boundary.
> 
> My observation of SNP_GET_REPORT and TDX_CMD_GET_REPORT is that they are
> both passing blobs across the user/kernel and platform/kernel boundary
> for the purposes of unlocking other resources. To me that is a flow that
> the Keys subsystem has infrastructure to handle. It has the concept of
> upcalls and asynchronous population of blobs by handles and mechanisms
> to protect and cache those communications. Linux / the Keys subsystem
> could benefit from the enhancements it would need to cover these 2
> cases. Specifically, the benefit that when ARM and RISC-V arrive with
> similar communications with platform TSMs (Trusted Security Module) they
> can build upon the same infrastructure.
> 
> David, am I reaching with that association? My strawman mapping of
> TDX_CMD_GET_QUOTE to request_key() is something like:
> 
> request_key(coco_quote, "description", "<uuencoded tdreport>")
> 
> Where this is a common key_type for all vendors, but the description and
> arguments have room for vendor differentiation when doing the upcall to
> the platform TSM, but userspace never needs to contend with the
> different vendor formats, that is all handled internally to the kernel.
> 
> At this point I am just looking for confirmation that the "every vendor
> invent a new character device + ioctl" does not scale and a deeper
> conversation is needed. Keys is a plausible solution to that ABI
> proliferation problem.

I agree that vendor-specific interfaces do not scale, and the ABI generalization
will benefit future vendors who require similar feature support. However, such
generalization, in my opinion, will make more sense if the requirements at the top
level are also generalized. Currently, each vendor has their own attestation flow,
and the user ABI they introduced includes a lot of vendor-specific information to
support it. IMO, it is difficult to hide these vendor-specific information from the
user without generalizing the high level attestation flow.

I have included the attestation IOCTL interfaces used by S390, AMD SEV and TDX
below for reference. As you can see, each of these ARCHs uses very different
input and output data formats. It contains a lot of vendor-specific information
(like the vmpl field in struct snp_report_req or the meas_addr, arcb_adr in struct
uvio_attest). The only thing I see in common is the use of input and output blobs.

Even if we just generalize the ABI now, I'm not sure if the unified ABI we create
now will meet the requirements of RISC-v or ARM platforms when they introduce their
own attestation flow in the future. My thinking is that without some sort of
arch-agnostic high-level attestation workflow, ABI generalization has very little
benefit.

======================================================================
// Following are S390 specific attestation struct
drivers/s390/char/uvdevice.c

struct uvio_ioctl_cb {
        __u32 flags;
        __u16 uv_rc;                    /* UV header rc value */
        __u16 uv_rrc;                   /* UV header rrc value */
        __u64 argument_addr;            /* Userspace address of uvio argument */
        __u32 argument_len;
        __u8  reserved14[0x40 - 0x14];  /* must be zero */
};

#define UVIO_ATT_USER_DATA_LEN          0x100
#define UVIO_ATT_UID_LEN                0x10


struct uvio_attest {
        __u64 arcb_addr;                                /* 0x0000 */
        __u64 meas_addr;                                /* 0x0008 */
        __u64 add_data_addr;                            /* 0x0010 */
        __u8  user_data[UVIO_ATT_USER_DATA_LEN];        /* 0x0018 */
        __u8  config_uid[UVIO_ATT_UID_LEN];             /* 0x0118 */
        __u32 arcb_len;                                 /* 0x0128 */
        __u32 meas_len;                                 /* 0x012c */
        __u32 add_data_len;                             /* 0x0130 */
        __u16 user_data_len;                            /* 0x0134 */
        __u16 reserved136;                              /* 0x0136 */
};


#define UVIO_DEVICE_NAME "uv"
#define UVIO_TYPE_UVC 'u'

#define UVIO_IOCTL_ATT _IOWR(UVIO_TYPE_UVC, 0x01, struct uvio_ioctl_cb)


// Following are TDX specific interfaces
drivers/virt/coco/tdx-guest/tdx-guest.c

/**
 * struct tdx_report_req - Request struct for TDX_CMD_GET_REPORT0 IOCTL.
 *
 * @reportdata: User buffer with REPORTDATA to be included into TDREPORT.
 *              Typically it can be some nonce provided by attestation
 *              service, so the generated TDREPORT can be uniquely verified.
 * @tdreport: User buffer to store TDREPORT output from TDCALL[TDG.MR.REPORT].
 */
struct tdx_report_req {
        __u8 reportdata[TDX_REPORTDATA_LEN];
        __u8 tdreport[TDX_REPORT_LEN];
};



/* struct tdx_quote_buf: Format of Quote request buffer.
 * @version: Quote format version, filled by TD.
 * @status: Status code of Quote request, filled by VMM.
 * @in_len: Length of TDREPORT, filled by TD.
 * @out_len: Length of Quote data, filled by VMM.
 * @data: Quote data on output or TDREPORT on input.
 *
 * More details of Quote request buffer can be found in TDX
 * Guest-Host Communication Interface (GHCI) for Intel TDX 1.0,
 * section titled "TDG.VP.VMCALL<GetQuote>"
 */
struct tdx_quote_buf {
        __u64 version;
        __u64 status;
        __u32 in_len;
        __u32 out_len;
        __u64 data[];
};

/* struct tdx_quote_req: Request struct for TDX_CMD_GET_QUOTE IOCTL.
 * @buf: Address of user buffer in the format of struct tdx_quote_buf.
 *       Upon successful completion of IOCTL, output is copied back to
 *       the same buffer (in struct tdx_quote_buf.data).
 * @len: Length of the Quote buffer.
 */
struct tdx_quote_req {
        __u64 buf;
        __u64 len;
};

/*
 * TDX_CMD_GET_REPORT0 - Get TDREPORT0 (a.k.a. TDREPORT subtype 0) using
 *                       TDCALL[TDG.MR.REPORT]
 *
 * Return 0 on success, -EIO on TDCALL execution failure, and
 * standard errno on other general error cases.
 */
#define TDX_CMD_GET_REPORT0              _IOWR('T', 1, struct tdx_report_req)

/*
 * TDX_CMD_GET_QUOTE - Get TD Guest Quote from QE/QGS using GetQuote
 *                     TDVMCALL.
 *
 * Returns 0 on success or standard errno on other failures.
 */
#define TDX_CMD_GET_QUOTE               _IOWR('T', 2, struct tdx_quote_req)


// Following are AMD SEV specific interfaces
drivers/virt/coco/sev-guest/sev-guest.c

struct snp_report_req {
        /* user data that should be included in the report */
        __u8 user_data[64];

        /* The vmpl level to be included in the report */
        __u32 vmpl;

        /* Must be zero filled */
        __u8 rsvd[28];
};

struct snp_report_resp {
        /* response data, see SEV-SNP spec for the format */
        __u8 data[4000];
};

struct snp_guest_request_ioctl {
        /* message version number (must be non-zero) */
        __u8 msg_version;

        /* Request and response structure address */
        __u64 req_data;
        __u64 resp_data;

        /* bits[63:32]: VMM error code, bits[31:0] firmware error code (see psp-sev.h) */
        union {
                __u64 exitinfo2;
                struct {
                        __u32 fw_error;
                        __u32 vmm_error;
                };
        };
};

/* Get SNP attestation report */
#define SNP_GET_REPORT _IOWR(SNP_GUEST_REQ_IOC_TYPE, 0x0, struct snp_guest_request_ioctl)
======================================================================

In addition to GetReport support, each of these guest attestation drivers includes
IOCTLs to handle vendor-specific needs (such as Derived key support in the
SEV driver or RTMR Extend support in the TDX guest driver). So we cannot
completely unify all IOCTL interfaces in these drivers.
Erdem Aktas June 22, 2023, 11:31 p.m. UTC | #3
On Mon, Jun 12, 2023 at 12:03 PM Dan Williams <dan.j.williams@intel.com> wrote:
>
> [ add David, Brijesh, and Atish]
>
> Kuppuswamy Sathyanarayanan wrote:
> > In TDX guest, the second stage of the attestation process is Quote
> > generation. This process is required to convert the locally generated
> > TDREPORT into a remotely verifiable Quote. It involves sending the
> > TDREPORT data to a Quoting Enclave (QE) which will verify the
> > integrity of the TDREPORT and sign it with an attestation key.
> >
> > Intel's TDX attestation driver exposes TDX_CMD_GET_QUOTE IOCTL to
> > allow the user agent to get the TD Quote.
> >
> > Add a kernel selftest module to verify the Quote generation feature.
> >
> > TD Quote generation involves following steps:
> >
> > * Get the TDREPORT data using TDX_CMD_GET_REPORT IOCTL.
> > * Embed the TDREPORT data in quote buffer and request for quote
> >   generation via TDX_CMD_GET_QUOTE IOCTL request.
> > * Upon completion of the GetQuote request, check for non zero value
> >   in the status field of Quote header to make sure the generated
> >   quote is valid.
>
> What this cover letter does not say is that this is adding another
> instance of the similar pattern as SNP_GET_REPORT.
>
> Linux is best served when multiple vendors trying to do similar
> operations are brought together behind a common ABI. We see this in the
> history of wrangling SCSI vendors behind common interfaces. Now multiple

Compared to the number of SCSI vendors, I think the number of CPU
vendors for confidential computing seems manageable to me. Is this
really a good comparison?

> confidential computing vendors trying to develop similar flows with
> differentiated formats where that differentiation need not leak over the
> ABI boundary.

<Just my personal opinion below>
I agree with this statement in the high level but it is also somehow
surprising for me after all the discussion happened around this topic.
Honestly, I feel like there are multiple versions of "Intel"  working
in different directions.

If we want multiple vendors trying to do the similar things behind a
common ABI, it should start with the spec. Since this comment is
coming from Intel, I wonder if there is any plan to combine the GHCB
and GHCI interfaces under common ABI in the future or why it did not
even happen in the first place.

What I see is that Intel has GETQUOTE TDVMCALL interface in its spec
and again Intel does not really want to provide support for it in
linux. It feels really frustrating.

>
> My observation of SNP_GET_REPORT and TDX_CMD_GET_REPORT is that they are
> both passing blobs across the user/kernel and platform/kernel boundary
> for the purposes of unlocking other resources. To me that is a flow that
> the Keys subsystem has infrastructure to handle. It has the concept of
> upcalls and asynchronous population of blobs by handles and mechanisms
> to protect and cache those communications. Linux / the Keys subsystem
> could benefit from the enhancements it would need to cover these 2
> cases. Specifically, the benefit that when ARM and RISC-V arrive with
> similar communications with platform TSMs (Trusted Security Module) they
> can build upon the same infrastructure.
>
> David, am I reaching with that association? My strawman mapping of
> TDX_CMD_GET_QUOTE to request_key() is something like:
>
> request_key(coco_quote, "description", "<uuencoded tdreport>")
>
> Where this is a common key_type for all vendors, but the description and
> arguments have room for vendor differentiation when doing the upcall to
> the platform TSM, but userspace never needs to contend with the
> different vendor formats, that is all handled internally to the kernel.

I think the problem definition here is not accurate. With AMD SNP,
guests need to do a hypercall to KVM and KVM needs to issue a
SNP_GUEST_REQUEST(MSG_REPORT_REQ) to the SP firmware. In TDX, guests
need to do a TDCALL to TDXMODULE to get the TDREPORT and then it needs
to get that report delivered to the host userspace to get the TDQUOTE
generated by the SGX quoting enclave. Also TDQUOTE is designed to work
async while the SNP_GUEST_REQUESTS are blocking vmcalls.

Those are completely different flows. Are you suggesting that intel
should also come down to a single call to get the TDQUOTE like AMD
SNP?

The TDCALL interface asking for the TDREPORT is already there. AMD
does not need to ask the report and the quote separately.

Here, the problem was that Intel (or "upstream community") did not
want to implement/accept hypercall for TDQUOTE which would be handled
by the user space VMM. The alternative implementation (using vsock)
does not work for many use cases including ours. I do not see how your
suggestion addresses the problem that this patch was trying to solve.

So while I like the suggested direction, I am not sure how much it is
possible to come up with a common ABI even with just only for 2
vendors (AMD and Intel) without doing spec changes which is a multi
year effort imho.

>
> At this point I am just looking for confirmation that the "every vendor
> invent a new character device + ioctl" does not scale and a deeper
> conversation is needed. Keys is a plausible solution to that ABI
> proliferation problem.
Huang, Kai June 22, 2023, 11:44 p.m. UTC | #4
On Thu, 2023-06-22 at 16:31 -0700, Erdem Aktas wrote:
> So while I like the suggested direction, I am not sure how much it is
> possible to come up with a common ABI even with just only for 2
> vendors (AMD and Intel) without doing spec changes which is a multi
> year effort imho.

I don't want to intervene the discussion around whether this direction is
correct or not, however I want to say request_key() may not be the right place
to fit Quote (or remote verifiable data blob in general for attestation).

> request_key(coco_quote, "description", "<uuencoded tdreport>")

Although both key and Quote are data blob in some way, Quote certainly is not a
key but have much more information.  The man page of request_key() seems to
suggest it's just for key:

       request_key - request a key from the kernel's key management
       facility

So IMHO using request_key() to fit Quote may cause bigger confusion.
Dan Williams June 23, 2023, 10:27 p.m. UTC | #5
Dan Williams wrote:
> [ add David, Brijesh, and Atish]
> 
> Kuppuswamy Sathyanarayanan wrote:
> > In TDX guest, the second stage of the attestation process is Quote
> > generation. This process is required to convert the locally generated
> > TDREPORT into a remotely verifiable Quote. It involves sending the
> > TDREPORT data to a Quoting Enclave (QE) which will verify the
> > integrity of the TDREPORT and sign it with an attestation key.
> > 
> > Intel's TDX attestation driver exposes TDX_CMD_GET_QUOTE IOCTL to
> > allow the user agent to get the TD Quote.
> > 
> > Add a kernel selftest module to verify the Quote generation feature.
> > 
> > TD Quote generation involves following steps:
> > 
> > * Get the TDREPORT data using TDX_CMD_GET_REPORT IOCTL.
> > * Embed the TDREPORT data in quote buffer and request for quote
> >   generation via TDX_CMD_GET_QUOTE IOCTL request.
> > * Upon completion of the GetQuote request, check for non zero value
> >   in the status field of Quote header to make sure the generated
> >   quote is valid.
> 
> What this cover letter does not say is that this is adding another
> instance of the similar pattern as SNP_GET_REPORT.
> 
> Linux is best served when multiple vendors trying to do similar
> operations are brought together behind a common ABI. We see this in the
> history of wrangling SCSI vendors behind common interfaces. Now multiple
> confidential computing vendors trying to develop similar flows with
> differentiated formats where that differentiation need not leak over the
> ABI boundary.
[..]

Below is a rough mock up of this approach to demonstrate the direction.
Again, the goal is to define an ABI that can support any vendor's
arch-specific attestation method and key provisioning flows without
leaking vendor-specific details, or confidential material over the
user/kernel ABI.

The observation is that there are a sufficient number of attestation
flows available to review where Linux can define a superset ABI to
contain them all. The other observation is that the implementations have
features that may cross-polinate over time. For example the SEV
privelege level consideration ("vmpl"), and the TDX RTMR (think TPM
PCRs) mechanisms address generic Confidential Computing use cases.

Vendor specific ioctls for all of this feels like surrender when Linux
already has the keys subsystem which has plenty of degrees of freedom
for tracking blobs with signatures and using those blobs to instantiate
other blobs. It already serves as the ABI wrapping various TPM
implementations and marshaling keys for storage encryption and other use
cases that intersect Confidential Computing.

The benefit of deprecating vendor-specific abstraction layers in
userspace is secondary. The primary benefit is collaboration. It enables
kernel developers from various architectures to collaborate on common
infrastructure. If, referring back to my previous example, SEV adopts an
RTMR-like mechanism and TDX adopts a vmpl-like mechanism it would be
unfortunate if those efforts were siloed, duplicated, and needlessly
differentiated to userspace. So while there are arguably a manageable
number of basic arch attestation methods the planned expansion of those
to build incremental functionality is where I believe we, as a
community, will be glad that we invested in a "Linux format" for all of
this.

An example, to show what the strawman patch below enables: (req_key is
the sample program from "man 2 request_key")

# ./req_key guest_attest guest_attest:0:0-$desc $(cat user_data | base64)
Key ID is 10e2f3a7
# keyctl pipe 0x10e2f3a7 | hexdump -C
00000000  54 44 58 20 47 65 6e 65  72 61 74 65 64 20 51 75  |TDX Generated Qu|
00000010  6f 74 65 00 00 00 00 00  00 00 00 00 00 00 00 00  |ote.............|
00000020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00004000

This is the kernel instantiating a TDX Quote without the TDREPORT
implementation detail ever leaving the kernel. Now, this is only the
top-half of what is needed. The missing bottom half takes that material
and uses it to instantiate derived key material like the storage
decryption key internal to the kernel. See "The Process" in
Documentation/security/keys/request-key.rst for how the Keys subsystem
handles the "keys for keys" use case.

---
diff --git a/drivers/virt/Kconfig b/drivers/virt/Kconfig
index f79ab13a5c28..0f775847028e 100644
--- a/drivers/virt/Kconfig
+++ b/drivers/virt/Kconfig
@@ -54,4 +54,8 @@ source "drivers/virt/coco/sev-guest/Kconfig"
 
 source "drivers/virt/coco/tdx-guest/Kconfig"
 
+config GUEST_ATTEST
+	tristate
+	select KEYS
+
 endif
diff --git a/drivers/virt/Makefile b/drivers/virt/Makefile
index e9aa6fc96fab..66f6b838f8f4 100644
--- a/drivers/virt/Makefile
+++ b/drivers/virt/Makefile
@@ -12,3 +12,4 @@ obj-$(CONFIG_ACRN_HSM)		+= acrn/
 obj-$(CONFIG_EFI_SECRET)	+= coco/efi_secret/
 obj-$(CONFIG_SEV_GUEST)		+= coco/sev-guest/
 obj-$(CONFIG_INTEL_TDX_GUEST)	+= coco/tdx-guest/
+obj-$(CONFIG_GUEST_ATTEST)	+= coco/guest-attest/
diff --git a/drivers/virt/coco/guest-attest/Makefile b/drivers/virt/coco/guest-attest/Makefile
new file mode 100644
index 000000000000..5581c5a27588
--- /dev/null
+++ b/drivers/virt/coco/guest-attest/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_GUEST_ATTEST) += guest_attest.o
+guest_attest-y := key.o
diff --git a/drivers/virt/coco/guest-attest/key.c b/drivers/virt/coco/guest-attest/key.c
new file mode 100644
index 000000000000..2a494b6dd7a7
--- /dev/null
+++ b/drivers/virt/coco/guest-attest/key.c
@@ -0,0 +1,159 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation. All rights reserved. */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/seq_file.h>
+#include <linux/key-type.h>
+#include <linux/module.h>
+#include <linux/base64.h>
+
+#include <keys/request_key_auth-type.h>
+#include <keys/user-type.h>
+
+#include "guest-attest.h"
+
+static LIST_HEAD(guest_attest_list);
+static DECLARE_RWSEM(guest_attest_rwsem);
+
+static struct guest_attest_ops *fetch_ops(void)
+{
+	return list_first_entry_or_null(&guest_attest_list,
+					struct guest_attest_ops, list);
+}
+
+static struct guest_attest_ops *get_ops(void)
+{
+	down_read(&guest_attest_rwsem);
+	return fetch_ops();
+}
+
+static void put_ops(void)
+{
+	up_read(&guest_attest_rwsem);
+}
+
+int register_guest_attest_ops(struct guest_attest_ops *ops)
+{
+	struct guest_attest_ops *conflict;
+	int rc;
+
+	down_write(&guest_attest_rwsem);
+	conflict = fetch_ops();
+	if (conflict) {
+		pr_err("\"%s\" ops already registered\n", conflict->name);
+		rc = -EEXIST;
+		goto out;
+	}
+	list_add(&ops->list, &guest_attest_list);
+	try_module_get(ops->module);
+	rc = 0;
+out:
+	up_write(&guest_attest_rwsem);
+	return rc;
+}
+EXPORT_SYMBOL_GPL(register_guest_attest_ops);
+
+void unregister_guest_attest_ops(struct guest_attest_ops *ops)
+{
+	down_write(&guest_attest_rwsem);
+	list_del(&ops->list);
+	up_write(&guest_attest_rwsem);
+	module_put(ops->module);
+}
+EXPORT_SYMBOL_GPL(unregister_guest_attest_ops);
+
+static int __guest_attest_request_key(struct key *key, int level,
+				      struct key *dest_keyring,
+				      const char *callout_info, int callout_len,
+				      struct key *authkey)
+{
+	struct guest_attest_ops *ops;
+	void *payload = NULL;
+	int rc, payload_len;
+
+	ops = get_ops();
+	if (!ops)
+		return -ENOKEY;
+
+	payload = kzalloc(max(GUEST_ATTEST_DATALEN, callout_len), GFP_KERNEL);
+	if (!payload) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	payload_len = base64_decode(callout_info, callout_len, payload);
+	if (payload_len < 0 || payload_len > GUEST_ATTEST_DATALEN) {
+		rc = -EINVAL;
+		goto out;
+	}
+
+	rc = ops->request_attest(key, level, dest_keyring, payload, payload_len,
+				 authkey);
+out:
+	kfree(payload);
+	put_ops();
+	return rc;
+}
+
+static int guest_attest_request_key(struct key *authkey, void *data)
+{
+	struct request_key_auth *rka = get_request_key_auth(authkey);
+	struct key *key = rka->target_key;
+	unsigned long long id;
+	int rc, level;
+
+	pr_debug("desc: %s op: %s callout: %s\n", key->description, rka->op,
+		 rka->callout_info ? (char *)rka->callout_info : "\"none\"");
+
+	if (sscanf(key->description, "guest_attest:%d:%llu", &level, &id) != 2)
+		return -EINVAL;
+
+	if (!rka->callout_info) {
+		rc = -EINVAL;
+		goto out;
+	}
+
+	rc = __guest_attest_request_key(key, level, rka->dest_keyring,
+					rka->callout_info, rka->callout_len,
+					authkey);
+out:
+	complete_request_key(authkey, rc);
+	return rc;
+}
+
+static int guest_attest_vet_description(const char *desc)
+{
+	unsigned long long id;
+	int level;
+
+	if (sscanf(desc, "guest_attest:%d:%llu", &level, &id) != 2)
+		return -EINVAL;
+	return 0;
+}
+
+static struct key_type key_type_guest_attest = {
+	.name = "guest_attest",
+	.preparse = user_preparse,
+	.free_preparse = user_free_preparse,
+	.instantiate = generic_key_instantiate,
+	.revoke = user_revoke,
+	.destroy = user_destroy,
+	.describe = user_describe,
+	.read = user_read,
+	.vet_description = guest_attest_vet_description,
+	.request_key = guest_attest_request_key,
+};
+
+static int __init guest_attest_init(void)
+{
+	return register_key_type(&key_type_guest_attest);
+}
+
+static void __exit guest_attest_exit(void)
+{
+	unregister_key_type(&key_type_guest_attest);
+}
+
+module_init(guest_attest_init);
+module_exit(guest_attest_exit);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/virt/coco/tdx-guest/Kconfig b/drivers/virt/coco/tdx-guest/Kconfig
index 14246fc2fb02..9a1ec85369fe 100644
--- a/drivers/virt/coco/tdx-guest/Kconfig
+++ b/drivers/virt/coco/tdx-guest/Kconfig
@@ -1,6 +1,7 @@
 config TDX_GUEST_DRIVER
 	tristate "TDX Guest driver"
 	depends on INTEL_TDX_GUEST
+	select GUEST_ATTEST
 	help
 	  The driver provides userspace interface to communicate with
 	  the TDX module to request the TDX guest details like attestation
diff --git a/drivers/virt/coco/tdx-guest/tdx-guest.c b/drivers/virt/coco/tdx-guest/tdx-guest.c
index 388491fa63a1..65b5aab284d9 100644
--- a/drivers/virt/coco/tdx-guest/tdx-guest.c
+++ b/drivers/virt/coco/tdx-guest/tdx-guest.c
@@ -13,11 +13,13 @@
 #include <linux/string.h>
 #include <linux/uaccess.h>
 #include <linux/set_memory.h>
+#include <linux/key-type.h>
 
 #include <uapi/linux/tdx-guest.h>
 
 #include <asm/cpu_device_id.h>
 #include <asm/tdx.h>
+#include "../guest-attest/guest-attest.h"
 
 /*
  * Intel's SGX QE implementation generally uses Quote size less
@@ -229,6 +231,62 @@ static const struct x86_cpu_id tdx_guest_ids[] = {
 };
 MODULE_DEVICE_TABLE(x86cpu, tdx_guest_ids);
 
+static int tdx_request_attest(struct key *key, int level,
+			      struct key *dest_keyring, void *payload,
+			      int payload_len, struct key *authkey)
+{
+	u8 *tdreport;
+	long ret;
+
+	tdreport = kzalloc(TDX_REPORT_LEN, GFP_KERNEL);
+	if (!tdreport)
+		return -ENOMEM;
+
+	/* Generate TDREPORT0 using "TDG.MR.REPORT" TDCALL */
+	ret = tdx_mcall_get_report0(payload, tdreport);
+	if (ret)
+		goto out;
+
+	mutex_lock(&quote_lock);
+
+	memset(qentry->buf, 0, qentry->buf_len);
+	reinit_completion(&qentry->compl);
+	qentry->valid = true;
+
+	/* Submit GetQuote Request using GetQuote hyperetall */
+	ret = tdx_hcall_get_quote(qentry->buf, qentry->buf_len);
+	if (ret) {
+		pr_err("GetQuote hyperetall failed, status:%lx\n", ret);
+		ret = -EIO;
+		goto quote_failed;
+	}
+
+	/*
+	 * Although the GHCI specification does not state explicitly that
+	 * the VMM must not wait indefinitely for the Quote request to be
+	 * completed, a sane VMM should always notify the guest after a
+	 * certain time, regardless of whether the Quote generation is
+	 * successful or not.  For now just assume the VMM will do so.
+	 */
+	wait_for_completion(&qentry->compl);
+
+	ret = key_instantiate_and_link(key, qentry->buf, qentry->buf_len,
+				       dest_keyring, authkey);
+
+quote_failed:
+	qentry->valid = false;
+	mutex_unlock(&quote_lock);
+out:
+	kfree(tdreport);
+	return ret;
+}
+
+static struct guest_attest_ops tdx_attest_ops = {
+	.name = KBUILD_MODNAME,
+	.module = THIS_MODULE,
+	.request_attest = tdx_request_attest,
+};
+
 static int __init tdx_guest_init(void)
 {
 	int ret;
@@ -251,8 +309,14 @@ static int __init tdx_guest_init(void)
 	if (ret)
 		goto free_quote;
 
+	ret = register_guest_attest_ops(&tdx_attest_ops);
+	if (ret)
+		goto free_irq;
+
 	return 0;
 
+free_irq:
+	tdx_unregister_event_irq_cb(quote_cb_handler, qentry);
 free_quote:
 	free_quote_entry(qentry);
 free_misc:
@@ -264,6 +328,7 @@ module_init(tdx_guest_init);
 
 static void __exit tdx_guest_exit(void)
 {
+	unregister_guest_attest_ops(&tdx_attest_ops);
 	tdx_unregister_event_irq_cb(quote_cb_handler, qentry);
 	free_quote_entry(qentry);
 	misc_deregister(&tdx_misc_dev);
Dan Williams June 23, 2023, 10:31 p.m. UTC | #6
Huang, Kai wrote:
> On Thu, 2023-06-22 at 16:31 -0700, Erdem Aktas wrote:
> > So while I like the suggested direction, I am not sure how much it is
> > possible to come up with a common ABI even with just only for 2
> > vendors (AMD and Intel) without doing spec changes which is a multi
> > year effort imho.
> 
> I don't want to intervene the discussion around whether this direction is
> correct or not, however I want to say request_key() may not be the right place
> to fit Quote (or remote verifiable data blob in general for attestation).
> 
> > request_key(coco_quote, "description", "<uuencoded tdreport>")
> 
> Although both key and Quote are data blob in some way, Quote certainly is not a
> key but have much more information.  The man page of request_key() seems to
> suggest it's just for key:
> 
>        request_key - request a key from the kernel's key management
>        facility
> 

Read further in that man page and see the example of generic user
defined value stored as a "key". A "key" is just a blob that has meaning
to access other resources / instantiate other keys.
Dan Williams June 23, 2023, 10:49 p.m. UTC | #7
Erdem Aktas wrote:
> On Mon, Jun 12, 2023 at 12:03 PM Dan Williams <dan.j.williams@intel.com>
> wrote:
> 
> > [ add David, Brijesh, and Atish]
> >
> > Kuppuswamy Sathyanarayanan wrote:
> > > In TDX guest, the second stage of the attestation process is Quote
> > > generation. This process is required to convert the locally generated
> > > TDREPORT into a remotely verifiable Quote. It involves sending the
> > > TDREPORT data to a Quoting Enclave (QE) which will verify the
> > > integrity of the TDREPORT and sign it with an attestation key.
> > >
> > > Intel's TDX attestation driver exposes TDX_CMD_GET_QUOTE IOCTL to
> > > allow the user agent to get the TD Quote.
> > >
> > > Add a kernel selftest module to verify the Quote generation feature.
> > >
> > > TD Quote generation involves following steps:
> > >
> > > * Get the TDREPORT data using TDX_CMD_GET_REPORT IOCTL.
> > > * Embed the TDREPORT data in quote buffer and request for quote
> > >   generation via TDX_CMD_GET_QUOTE IOCTL request.
> > > * Upon completion of the GetQuote request, check for non zero value
> > >   in the status field of Quote header to make sure the generated
> > >   quote is valid.
> >
> > What this cover letter does not say is that this is adding another
> > instance of the similar pattern as SNP_GET_REPORT.
> >
> > Linux is best served when multiple vendors trying to do similar
> > operations are brought together behind a common ABI. We see this in the
> > history of wrangling SCSI vendors behind common interfaces.
> 
> Compared to the number of SCSI vendors, I think the number of CPU vendors
> for confidential computing seems manageable to me. Is this really a good
> comparison?

Fair enough, and prompted by this I talk a bit more about the
motiviations and benefits of a Keys abstraction for attestation here:

https://lore.kernel.org/all/64961c3baf8ce_142af829436@dwillia2-xfh.jf.intel.com.notmuch/

> > Now multiple
> > confidential computing vendors trying to develop similar flows with
> > differentiated formats where that differentiation need not leak over the
> > ABI boundary.
> >
> 
> <Just my personal opinion below>
> I agree with this statement in the high level but it is also somehow
> surprising for me after all the discussion happened around this topic.
> Honestly, I feel like there are multiple versions of "Intel"  working in
> different directions.

This proposal was sent while firmly wearing my Linux community hat. I
agree, the timing here is unfortunate.

> If we want multiple vendors trying to do the similar things behind a common
> ABI, it should start with the spec. Since this comment is coming from
> Intel, I wonder if there is any plan to combine the GHCB and GHCI
> interfaces under common ABI in the future or why it did not even happen in
> the first place.

Per above comment about firmly wearing my Linux hat I am coming at this
purely from the perspective of what do we do now as a community that
continues to see these implementations proliferate and grow more
features. Common specs are great, but I agree with you, it is too late
for that, but I hope that as Linux asserts "this is what it should look
like" it starts to influence future IP innovation, and attestation
service providers, to acommodate the kernel's ABI momentum.

> What I see is that Intel has GETQUOTE TDVMCALL interface in its spec and
> again Intel does not really want to provide support for it in linux. It
> feels really frustrating.

I am aware of how frustrating late feedback can be. I am also encouraged
by some of the conversations and investigations that have already
happened around how Keys fits what these attestation solutions need.

> > My observation of SNP_GET_REPORT and TDX_CMD_GET_REPORT is that they are
> > both passing blobs across the user/kernel and platform/kernel boundary
> > for the purposes of unlocking other resources. To me that is a flow that
> > the Keys subsystem has infrastructure to handle. It has the concept of
> > upcalls and asynchronous population of blobs by handles and mechanisms
> > to protect and cache those communications. Linux / the Keys subsystem
> > could benefit from the enhancements it would need to cover these 2
> > cases. Specifically, the benefit that when ARM and RISC-V arrive with
> > similar communications with platform TSMs (Trusted Security Module) they
> > can build upon the same infrastructure.
> >
> > David, am I reaching with that association? My strawman mapping of
> > TDX_CMD_GET_QUOTE to request_key() is something like:
> >
> > request_key(coco_quote, "description", "<uuencoded tdreport>")
> >
> > Where this is a common key_type for all vendors, but the description and
> > arguments have room for vendor differentiation when doing the upcall to
> > the platform TSM, but userspace never needs to contend with the
> > different vendor formats, that is all handled internally to the kernel.
> >
> > I think the problem definition here is not accurate. With AMD SNP, guests
> need to do a hypercall to KVM and KVM needs to issue
> a  SNP_GUEST_REQUEST(MSG_REPORT_REQ) to the SP firmware. In TDX, guests
> need to do a TDCALL to TDXMODULE to get the TDREPORT and then it needs to
> get that report delivered to the host userspace to get the TDQUOTE
> generated by the SGX quoting enclave. Also TDQUOTE is designed to work
> async while the SNP_GUEST_REQUESTS are blocking vmcalls.
> 
> Those are completely different flows. Are you suggesting that intel should
> also come down to a single call to get the TDQUOTE like AMD SNP?

The Keys subsystem supports async instantiation of key material with
usermode upcalls if necessary. So I do not see a problem supporting
these flows behind a common key type.

> The TDCALL interface asking for the TDREPORT is already there. AMD does not
> need to ask the report and the quote separately.
> 
> Here, the problem was that Intel (upstream) did not want to implement
> hypercall for TDQUOTE which would be handled by the user space VMM. The
> alternative implementation (using vsock) does not work for many use cases
> including ours. I do not see how your suggestion addresses the problem that
> this patch was trying to solve.

Perhaps the strawman mockup makes it more clear:

https://lore.kernel.org/all/64961c3baf8ce_142af829436@dwillia2-xfh.jf.intel.com.notmuch/

> So while I like the suggested direction, I am not sure how much it is
> possible to come up with a common ABI even with just only for 2 vendors
> (AMD and Intel) without doing spec changes which is a multi year effort
> imho.

I agree, hardware spec changes are out of scope for this effort, but
Keys might require some additional flows to be built up in the kernel
that could be previously handled in userspace. I.e. the "bottom half"
that I reference in the mockup.

This is something we went through with using "encrypted-keys" for
nvdimm. Instead of an ioctl to inject a secret key over the user kernel
boundary a key server need to store a serialized version of the
encrypted key blob and pass that into the kernel.
Kuppuswamy Sathyanarayanan June 26, 2023, 3:05 a.m. UTC | #8
Hi Dan,

On 6/23/23 3:27 PM, Dan Williams wrote:
> Dan Williams wrote:
>> [ add David, Brijesh, and Atish]
>>
>> Kuppuswamy Sathyanarayanan wrote:
>>> In TDX guest, the second stage of the attestation process is Quote
>>> generation. This process is required to convert the locally generated
>>> TDREPORT into a remotely verifiable Quote. It involves sending the
>>> TDREPORT data to a Quoting Enclave (QE) which will verify the
>>> integrity of the TDREPORT and sign it with an attestation key.
>>>
>>> Intel's TDX attestation driver exposes TDX_CMD_GET_QUOTE IOCTL to
>>> allow the user agent to get the TD Quote.
>>>
>>> Add a kernel selftest module to verify the Quote generation feature.
>>>
>>> TD Quote generation involves following steps:
>>>
>>> * Get the TDREPORT data using TDX_CMD_GET_REPORT IOCTL.
>>> * Embed the TDREPORT data in quote buffer and request for quote
>>>   generation via TDX_CMD_GET_QUOTE IOCTL request.
>>> * Upon completion of the GetQuote request, check for non zero value
>>>   in the status field of Quote header to make sure the generated
>>>   quote is valid.
>>
>> What this cover letter does not say is that this is adding another
>> instance of the similar pattern as SNP_GET_REPORT.
>>
>> Linux is best served when multiple vendors trying to do similar
>> operations are brought together behind a common ABI. We see this in the
>> history of wrangling SCSI vendors behind common interfaces. Now multiple
>> confidential computing vendors trying to develop similar flows with
>> differentiated formats where that differentiation need not leak over the
>> ABI boundary.
> [..]
> 
> Below is a rough mock up of this approach to demonstrate the direction.
> Again, the goal is to define an ABI that can support any vendor's
> arch-specific attestation method and key provisioning flows without
> leaking vendor-specific details, or confidential material over the
> user/kernel ABI.

Thanks for working on this mock code and helping out. It gives me the
general idea about your proposal.

> 
> The observation is that there are a sufficient number of attestation
> flows available to review where Linux can define a superset ABI to
> contain them all. The other observation is that the implementations have
> features that may cross-polinate over time. For example the SEV
> privelege level consideration ("vmpl"), and the TDX RTMR (think TPM
> PCRs) mechanisms address generic Confidential Computing use cases.


I agree with your point about VMPL and RTMR feature cases. This observation
is valid for AMD SEV and TDX attestation flows. But I am not sure whether
it will hold true for other vendor implementations. Our sample set is not
good enough to make this conclusion. The reason for my concern is, if you
check the ABI interface used in the S390 arch attestation driver
(drivers/s390/char/uvdevice.c), you would notice that there is a significant
difference between the ABI used in that driver and SEV/TDX drivers. The S390
driver attestation request appears to accept two data blobs as input, as well
as a variety of vendor-specific header configurations.

Maybe the s390 attestation model is a special case, but, I think we consider
this issue. Since we don't have a common spec, there is chance that any
superset ABI we define now may not meet future vendor requirements. One way to
handle it to leave enough space in the generic ABI to handle future vendor
requirements.

I think it would be better if other vendors (like ARM or RISC) can comment and
confirm whether this proposal meets their demands.

> 
> Vendor specific ioctls for all of this feels like surrender when Linux
> already has the keys subsystem which has plenty of degrees of freedom
> for tracking blobs with signatures and using those blobs to instantiate
> other blobs. It already serves as the ABI wrapping various TPM
> implementations and marshaling keys for storage encryption and other use
> cases that intersect Confidential Computing.
> 
> The benefit of deprecating vendor-specific abstraction layers in
> userspace is secondary. The primary benefit is collaboration. It enables
> kernel developers from various architectures to collaborate on common
> infrastructure. If, referring back to my previous example, SEV adopts an
> RTMR-like mechanism and TDX adopts a vmpl-like mechanism it would be
> unfortunate if those efforts were siloed, duplicated, and needlessly
> differentiated to userspace. So while there are arguably a manageable
> number of basic arch attestation methods the planned expansion of those
> to build incremental functionality is where I believe we, as a
> community, will be glad that we invested in a "Linux format" for all of
> this.
> 
> An example, to show what the strawman patch below enables: (req_key is
> the sample program from "man 2 request_key")
> 
> # ./req_key guest_attest guest_attest:0:0-$desc $(cat user_data | base64)
> Key ID is 10e2f3a7
> # keyctl pipe 0x10e2f3a7 | hexdump -C
> 00000000  54 44 58 20 47 65 6e 65  72 61 74 65 64 20 51 75  |TDX Generated Qu|
> 00000010  6f 74 65 00 00 00 00 00  00 00 00 00 00 00 00 00  |ote.............|
> 00000020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> *
> 00004000
> 
> This is the kernel instantiating a TDX Quote without the TDREPORT
> implementation detail ever leaving the kernel. Now, this is only the

IIUC, the idea here is to cache the quote data and return it to the user whenever
possible, right? If yes, I think such optimization may not be very useful for our
case. AFAIK, the quote data will change whenever there is a change in the guest
measurement data. Since the validity of the generated quote will not be long,
and the frequency of quote generation requests is expected to be less, we may not
get much benefit from caching the quote data. I think we can keep this logic simple
by directly retrieving the quote data from the quoting enclave whenever there is a
request from the user.

> top-half of what is needed. The missing bottom half takes that material
> and uses it to instantiate derived key material like the storage
> decryption key internal to the kernel. See "The Process" in
> Documentation/security/keys/request-key.rst for how the Keys subsystem
> handles the "keys for keys" use case.

This is only useful for key-server use case, right? Attestation can also be
used for use cases like pattern matching or uploading some secure data, etc.
Since key-server is not the only use case, does it make sense to suppport
this derived key feature?

> 
> ---
> diff --git a/drivers/virt/Kconfig b/drivers/virt/Kconfig
> index f79ab13a5c28..0f775847028e 100644
> --- a/drivers/virt/Kconfig
> +++ b/drivers/virt/Kconfig
> @@ -54,4 +54,8 @@ source "drivers/virt/coco/sev-guest/Kconfig"
>  
>  source "drivers/virt/coco/tdx-guest/Kconfig"
>  
> +config GUEST_ATTEST
> +	tristate
> +	select KEYS
> +
>  endif
> diff --git a/drivers/virt/Makefile b/drivers/virt/Makefile
> index e9aa6fc96fab..66f6b838f8f4 100644
> --- a/drivers/virt/Makefile
> +++ b/drivers/virt/Makefile
> @@ -12,3 +12,4 @@ obj-$(CONFIG_ACRN_HSM)		+= acrn/
>  obj-$(CONFIG_EFI_SECRET)	+= coco/efi_secret/
>  obj-$(CONFIG_SEV_GUEST)		+= coco/sev-guest/
>  obj-$(CONFIG_INTEL_TDX_GUEST)	+= coco/tdx-guest/
> +obj-$(CONFIG_GUEST_ATTEST)	+= coco/guest-attest/
> diff --git a/drivers/virt/coco/guest-attest/Makefile b/drivers/virt/coco/guest-attest/Makefile
> new file mode 100644
> index 000000000000..5581c5a27588
> --- /dev/null
> +++ b/drivers/virt/coco/guest-attest/Makefile
> @@ -0,0 +1,2 @@
> +obj-$(CONFIG_GUEST_ATTEST) += guest_attest.o
> +guest_attest-y := key.o
> diff --git a/drivers/virt/coco/guest-attest/key.c b/drivers/virt/coco/guest-attest/key.c
> new file mode 100644
> index 000000000000..2a494b6dd7a7
> --- /dev/null
> +++ b/drivers/virt/coco/guest-attest/key.c
> @@ -0,0 +1,159 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright(c) 2023 Intel Corporation. All rights reserved. */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +#include <linux/seq_file.h>
> +#include <linux/key-type.h>
> +#include <linux/module.h>
> +#include <linux/base64.h>
> +
> +#include <keys/request_key_auth-type.h>
> +#include <keys/user-type.h>
> +
> +#include "guest-attest.h"

Can you share you guest-attest.h?

> +
> +static LIST_HEAD(guest_attest_list);
> +static DECLARE_RWSEM(guest_attest_rwsem);
> +
> +static struct guest_attest_ops *fetch_ops(void)
> +{
> +	return list_first_entry_or_null(&guest_attest_list,
> +					struct guest_attest_ops, list);
> +}
> +
> +static struct guest_attest_ops *get_ops(void)
> +{
> +	down_read(&guest_attest_rwsem);
> +	return fetch_ops();
> +}
> +
> +static void put_ops(void)
> +{
> +	up_read(&guest_attest_rwsem);
> +}
> +
> +int register_guest_attest_ops(struct guest_attest_ops *ops)
> +{
> +	struct guest_attest_ops *conflict;
> +	int rc;
> +
> +	down_write(&guest_attest_rwsem);
> +	conflict = fetch_ops();
> +	if (conflict) {
> +		pr_err("\"%s\" ops already registered\n", conflict->name);
> +		rc = -EEXIST;
> +		goto out;
> +	}
> +	list_add(&ops->list, &guest_attest_list);
> +	try_module_get(ops->module);
> +	rc = 0;
> +out:
> +	up_write(&guest_attest_rwsem);
> +	return rc;
> +}
> +EXPORT_SYMBOL_GPL(register_guest_attest_ops);
> +
> +void unregister_guest_attest_ops(struct guest_attest_ops *ops)
> +{
> +	down_write(&guest_attest_rwsem);
> +	list_del(&ops->list);
> +	up_write(&guest_attest_rwsem);
> +	module_put(ops->module);
> +}
> +EXPORT_SYMBOL_GPL(unregister_guest_attest_ops);
> +
> +static int __guest_attest_request_key(struct key *key, int level,
> +				      struct key *dest_keyring,
> +				      const char *callout_info, int callout_len,
> +				      struct key *authkey)
> +{
> +	struct guest_attest_ops *ops;
> +	void *payload = NULL;
> +	int rc, payload_len;
> +
> +	ops = get_ops();
> +	if (!ops)
> +		return -ENOKEY;
> +
> +	payload = kzalloc(max(GUEST_ATTEST_DATALEN, callout_len), GFP_KERNEL);
> +	if (!payload) {
> +		rc = -ENOMEM;
> +		goto out;
> +	}

Is the idea to get the values like vmpl part of the payload?

> +
> +	payload_len = base64_decode(callout_info, callout_len, payload);
> +	if (payload_len < 0 || payload_len > GUEST_ATTEST_DATALEN) {
> +		rc = -EINVAL;
> +		goto out;
> +	}
> +
> +	rc = ops->request_attest(key, level, dest_keyring, payload, payload_len,
> +				 authkey);
> +out:
> +	kfree(payload);
> +	put_ops();
> +	return rc;
> +}
> +
> +static int guest_attest_request_key(struct key *authkey, void *data)
> +{
> +	struct request_key_auth *rka = get_request_key_auth(authkey);
> +	struct key *key = rka->target_key;
> +	unsigned long long id;
> +	int rc, level;
> +
> +	pr_debug("desc: %s op: %s callout: %s\n", key->description, rka->op,
> +		 rka->callout_info ? (char *)rka->callout_info : "\"none\"");
> +
> +	if (sscanf(key->description, "guest_attest:%d:%llu", &level, &id) != 2)
> +		return -EINVAL;
> +

Can you explain some details about the id and level? It is not very clear why
we need it.

> +	if (!rka->callout_info) {
> +		rc = -EINVAL;
> +		goto out;
> +	}
> +
> +	rc = __guest_attest_request_key(key, level, rka->dest_keyring,
> +					rka->callout_info, rka->callout_len,
> +					authkey);
> +out:
> +	complete_request_key(authkey, rc);
> +	return rc;
> +}
> +
> +static int guest_attest_vet_description(const char *desc)
> +{
> +	unsigned long long id;
> +	int level;
> +
> +	if (sscanf(desc, "guest_attest:%d:%llu", &level, &id) != 2)
> +		return -EINVAL;
> +	return 0;
> +}
> +
> +static struct key_type key_type_guest_attest = {
> +	.name = "guest_attest",
> +	.preparse = user_preparse,
> +	.free_preparse = user_free_preparse,
> +	.instantiate = generic_key_instantiate,
> +	.revoke = user_revoke,
> +	.destroy = user_destroy,
> +	.describe = user_describe,
> +	.read = user_read,
> +	.vet_description = guest_attest_vet_description,
> +	.request_key = guest_attest_request_key,
> +};
> +
> +static int __init guest_attest_init(void)
> +{
> +	return register_key_type(&key_type_guest_attest);
> +}
> +
> +static void __exit guest_attest_exit(void)
> +{
> +	unregister_key_type(&key_type_guest_attest);
> +}
> +
> +module_init(guest_attest_init);
> +module_exit(guest_attest_exit);
> +MODULE_LICENSE("GPL v2");
> diff --git a/drivers/virt/coco/tdx-guest/Kconfig b/drivers/virt/coco/tdx-guest/Kconfig
> index 14246fc2fb02..9a1ec85369fe 100644
> --- a/drivers/virt/coco/tdx-guest/Kconfig
> +++ b/drivers/virt/coco/tdx-guest/Kconfig
> @@ -1,6 +1,7 @@
>  config TDX_GUEST_DRIVER
>  	tristate "TDX Guest driver"
>  	depends on INTEL_TDX_GUEST
> +	select GUEST_ATTEST
>  	help
>  	  The driver provides userspace interface to communicate with
>  	  the TDX module to request the TDX guest details like attestation
> diff --git a/drivers/virt/coco/tdx-guest/tdx-guest.c b/drivers/virt/coco/tdx-guest/tdx-guest.c
> index 388491fa63a1..65b5aab284d9 100644
> --- a/drivers/virt/coco/tdx-guest/tdx-guest.c
> +++ b/drivers/virt/coco/tdx-guest/tdx-guest.c
> @@ -13,11 +13,13 @@
>  #include <linux/string.h>
>  #include <linux/uaccess.h>
>  #include <linux/set_memory.h>
> +#include <linux/key-type.h>
>  
>  #include <uapi/linux/tdx-guest.h>
>  
>  #include <asm/cpu_device_id.h>
>  #include <asm/tdx.h>
> +#include "../guest-attest/guest-attest.h"
>  
>  /*
>   * Intel's SGX QE implementation generally uses Quote size less
> @@ -229,6 +231,62 @@ static const struct x86_cpu_id tdx_guest_ids[] = {
>  };
>  MODULE_DEVICE_TABLE(x86cpu, tdx_guest_ids);
>  
> +static int tdx_request_attest(struct key *key, int level,
> +			      struct key *dest_keyring, void *payload,
> +			      int payload_len, struct key *authkey)
> +{
> +	u8 *tdreport;
> +	long ret;
> +
> +	tdreport = kzalloc(TDX_REPORT_LEN, GFP_KERNEL);
> +	if (!tdreport)
> +		return -ENOMEM;
> +
> +	/* Generate TDREPORT0 using "TDG.MR.REPORT" TDCALL */
> +	ret = tdx_mcall_get_report0(payload, tdreport);
> +	if (ret)
> +		goto out;
> +
> +	mutex_lock(&quote_lock);
> +
> +	memset(qentry->buf, 0, qentry->buf_len);
> +	reinit_completion(&qentry->compl);
> +	qentry->valid = true;
> +
> +	/* Submit GetQuote Request using GetQuote hyperetall */
> +	ret = tdx_hcall_get_quote(qentry->buf, qentry->buf_len);
> +	if (ret) {
> +		pr_err("GetQuote hyperetall failed, status:%lx\n", ret);
> +		ret = -EIO;
> +		goto quote_failed;
> +	}
> +
> +	/*
> +	 * Although the GHCI specification does not state explicitly that
> +	 * the VMM must not wait indefinitely for the Quote request to be
> +	 * completed, a sane VMM should always notify the guest after a
> +	 * certain time, regardless of whether the Quote generation is
> +	 * successful or not.  For now just assume the VMM will do so.
> +	 */
> +	wait_for_completion(&qentry->compl);
> +
> +	ret = key_instantiate_and_link(key, qentry->buf, qentry->buf_len,
> +				       dest_keyring, authkey);
> +
> +quote_failed:
> +	qentry->valid = false;
> +	mutex_unlock(&quote_lock);
> +out:
> +	kfree(tdreport);
> +	return ret;
> +}
> +
> +static struct guest_attest_ops tdx_attest_ops = {
> +	.name = KBUILD_MODNAME,
> +	.module = THIS_MODULE,
> +	.request_attest = tdx_request_attest,
> +};
> +
>  static int __init tdx_guest_init(void)
>  {
>  	int ret;
> @@ -251,8 +309,14 @@ static int __init tdx_guest_init(void)
>  	if (ret)
>  		goto free_quote;
>  
> +	ret = register_guest_attest_ops(&tdx_attest_ops);
> +	if (ret)
> +		goto free_irq;
> +
>  	return 0;
>  
> +free_irq:
> +	tdx_unregister_event_irq_cb(quote_cb_handler, qentry);
>  free_quote:
>  	free_quote_entry(qentry);
>  free_misc:
> @@ -264,6 +328,7 @@ module_init(tdx_guest_init);
>  
>  static void __exit tdx_guest_exit(void)
>  {
> +	unregister_guest_attest_ops(&tdx_attest_ops);
>  	tdx_unregister_event_irq_cb(quote_cb_handler, qentry);
>  	free_quote_entry(qentry);
>  	misc_deregister(&tdx_misc_dev);
Dionna Amalie Glaze June 26, 2023, 6:57 p.m. UTC | #9
On Sun, Jun 25, 2023 at 8:06 PM Sathyanarayanan Kuppuswamy
<sathyanarayanan.kuppuswamy@linux.intel.com> wrote:
>
> Hi Dan,
>
> On 6/23/23 3:27 PM, Dan Williams wrote:
> > Dan Williams wrote:
> >> [ add David, Brijesh, and Atish]
> >>
> >> Kuppuswamy Sathyanarayanan wrote:
> >>> In TDX guest, the second stage of the attestation process is Quote
> >>> generation. This process is required to convert the locally generated
> >>> TDREPORT into a remotely verifiable Quote. It involves sending the
> >>> TDREPORT data to a Quoting Enclave (QE) which will verify the
> >>> integrity of the TDREPORT and sign it with an attestation key.
> >>>
> >>> Intel's TDX attestation driver exposes TDX_CMD_GET_QUOTE IOCTL to
> >>> allow the user agent to get the TD Quote.
> >>>
> >>> Add a kernel selftest module to verify the Quote generation feature.
> >>>
> >>> TD Quote generation involves following steps:
> >>>
> >>> * Get the TDREPORT data using TDX_CMD_GET_REPORT IOCTL.
> >>> * Embed the TDREPORT data in quote buffer and request for quote
> >>>   generation via TDX_CMD_GET_QUOTE IOCTL request.
> >>> * Upon completion of the GetQuote request, check for non zero value
> >>>   in the status field of Quote header to make sure the generated
> >>>   quote is valid.
> >>
> >> What this cover letter does not say is that this is adding another
> >> instance of the similar pattern as SNP_GET_REPORT.
> >>
> >> Linux is best served when multiple vendors trying to do similar
> >> operations are brought together behind a common ABI. We see this in the
> >> history of wrangling SCSI vendors behind common interfaces. Now multiple
> >> confidential computing vendors trying to develop similar flows with
> >> differentiated formats where that differentiation need not leak over the
> >> ABI boundary.
> > [..]
> >
> > Below is a rough mock up of this approach to demonstrate the direction.
> > Again, the goal is to define an ABI that can support any vendor's
> > arch-specific attestation method and key provisioning flows without
> > leaking vendor-specific details, or confidential material over the
> > user/kernel ABI.
>
> Thanks for working on this mock code and helping out. It gives me the
> general idea about your proposal.
>
> >
> > The observation is that there are a sufficient number of attestation
> > flows available to review where Linux can define a superset ABI to
> > contain them all. The other observation is that the implementations have
> > features that may cross-polinate over time. For example the SEV
> > privelege level consideration ("vmpl"), and the TDX RTMR (think TPM
> > PCRs) mechanisms address generic Confidential Computing use cases.
>
>
> I agree with your point about VMPL and RTMR feature cases. This observation
> is valid for AMD SEV and TDX attestation flows. But I am not sure whether
> it will hold true for other vendor implementations. Our sample set is not
> good enough to make this conclusion. The reason for my concern is, if you
> check the ABI interface used in the S390 arch attestation driver
> (drivers/s390/char/uvdevice.c), you would notice that there is a significant
> difference between the ABI used in that driver and SEV/TDX drivers. The S390
> driver attestation request appears to accept two data blobs as input, as well
> as a variety of vendor-specific header configurations.
>
> Maybe the s390 attestation model is a special case, but, I think we consider
> this issue. Since we don't have a common spec, there is chance that any
> superset ABI we define now may not meet future vendor requirements. One way to
> handle it to leave enough space in the generic ABI to handle future vendor
> requirements.
>
> I think it would be better if other vendors (like ARM or RISC) can comment and
> confirm whether this proposal meets their demands.
>

The VMPL-based separation that will house the supervisor module known
as SVSM can have protocols that implement a TPM command interface, or
an RTMR-extension interface, and will also need to have an
SVSM-specific protocol attestation report format to keep the secure
chain of custody apparent. We'd have different formats and protocols
in the kernel, at least, to speak to each technology. I'm not sure
it's worth the trouble of papering over all the... 3-4 technologies
with similar but still weirdly different formats and ways of doing
things with an abstracted attestation ABI, especially since the output
all has to be interpreted in an architecture-specific way anyway.

ARM's Confidential Computing Realm Management Extensions (RME) seems
to be going along the lines of a runtime measurement register model
with their hardware enforced security. The number of registers isn't
prescribed in the spec.

+Joey Gouly +linux-coco@lists.linux.dev as far as RME is concerned, do
you know who would be best to weigh in on this discussion of a unified
attestation model?

> >
> > Vendor specific ioctls for all of this feels like surrender when Linux
> > already has the keys subsystem which has plenty of degrees of freedom
> > for tracking blobs with signatures and using those blobs to instantiate
> > other blobs. It already serves as the ABI wrapping various TPM
> > implementations and marshaling keys for storage encryption and other use
> > cases that intersect Confidential Computing.
> >
> > The benefit of deprecating vendor-specific abstraction layers in
> > userspace is secondary. The primary benefit is collaboration. It enables
> > kernel developers from various architectures to collaborate on common
> > infrastructure. If, referring back to my previous example, SEV adopts an
> > RTMR-like mechanism and TDX adopts a vmpl-like mechanism it would be
> > unfortunate if those efforts were siloed, duplicated, and needlessly
> > differentiated to userspace. So while there are arguably a manageable
> > number of basic arch attestation methods the planned expansion of those
> > to build incremental functionality is where I believe we, as a
> > community, will be glad that we invested in a "Linux format" for all of
> > this.
> >
> > An example, to show what the strawman patch below enables: (req_key is
> > the sample program from "man 2 request_key")
> >
> > # ./req_key guest_attest guest_attest:0:0-$desc $(cat user_data | base64)
> > Key ID is 10e2f3a7
> > # keyctl pipe 0x10e2f3a7 | hexdump -C
> > 00000000  54 44 58 20 47 65 6e 65  72 61 74 65 64 20 51 75  |TDX Generated Qu|
> > 00000010  6f 74 65 00 00 00 00 00  00 00 00 00 00 00 00 00  |ote.............|
> > 00000020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> > *
> > 00004000
> >
> > This is the kernel instantiating a TDX Quote without the TDREPORT
> > implementation detail ever leaving the kernel. Now, this is only the
>
> IIUC, the idea here is to cache the quote data and return it to the user whenever
> possible, right? If yes, I think such optimization may not be very useful for our
> case. AFAIK, the quote data will change whenever there is a change in the guest
> measurement data. Since the validity of the generated quote will not be long,
> and the frequency of quote generation requests is expected to be less, we may not
> get much benefit from caching the quote data. I think we can keep this logic simple
> by directly retrieving the quote data from the quoting enclave whenever there is a
> request from the user.
>
> > top-half of what is needed. The missing bottom half takes that material
> > and uses it to instantiate derived key material like the storage
> > decryption key internal to the kernel. See "The Process" in
> > Documentation/security/keys/request-key.rst for how the Keys subsystem
> > handles the "keys for keys" use case.
>
> This is only useful for key-server use case, right? Attestation can also be
> used for use cases like pattern matching or uploading some secure data, etc.
> Since key-server is not the only use case, does it make sense to suppport
> this derived key feature?
>
> >
> > ---
> > diff --git a/drivers/virt/Kconfig b/drivers/virt/Kconfig
> > index f79ab13a5c28..0f775847028e 100644
> > --- a/drivers/virt/Kconfig
> > +++ b/drivers/virt/Kconfig
> > @@ -54,4 +54,8 @@ source "drivers/virt/coco/sev-guest/Kconfig"
> >
> >  source "drivers/virt/coco/tdx-guest/Kconfig"
> >
> > +config GUEST_ATTEST
> > +     tristate
> > +     select KEYS
> > +
> >  endif
> > diff --git a/drivers/virt/Makefile b/drivers/virt/Makefile
> > index e9aa6fc96fab..66f6b838f8f4 100644
> > --- a/drivers/virt/Makefile
> > +++ b/drivers/virt/Makefile
> > @@ -12,3 +12,4 @@ obj-$(CONFIG_ACRN_HSM)              += acrn/
> >  obj-$(CONFIG_EFI_SECRET)     += coco/efi_secret/
> >  obj-$(CONFIG_SEV_GUEST)              += coco/sev-guest/
> >  obj-$(CONFIG_INTEL_TDX_GUEST)        += coco/tdx-guest/
> > +obj-$(CONFIG_GUEST_ATTEST)   += coco/guest-attest/
> > diff --git a/drivers/virt/coco/guest-attest/Makefile b/drivers/virt/coco/guest-attest/Makefile
> > new file mode 100644
> > index 000000000000..5581c5a27588
> > --- /dev/null
> > +++ b/drivers/virt/coco/guest-attest/Makefile
> > @@ -0,0 +1,2 @@
> > +obj-$(CONFIG_GUEST_ATTEST) += guest_attest.o
> > +guest_attest-y := key.o
> > diff --git a/drivers/virt/coco/guest-attest/key.c b/drivers/virt/coco/guest-attest/key.c
> > new file mode 100644
> > index 000000000000..2a494b6dd7a7
> > --- /dev/null
> > +++ b/drivers/virt/coco/guest-attest/key.c
> > @@ -0,0 +1,159 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/* Copyright(c) 2023 Intel Corporation. All rights reserved. */
> > +
> > +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> > +#include <linux/seq_file.h>
> > +#include <linux/key-type.h>
> > +#include <linux/module.h>
> > +#include <linux/base64.h>
> > +
> > +#include <keys/request_key_auth-type.h>
> > +#include <keys/user-type.h>
> > +
> > +#include "guest-attest.h"
>
> Can you share you guest-attest.h?
>
> > +
> > +static LIST_HEAD(guest_attest_list);
> > +static DECLARE_RWSEM(guest_attest_rwsem);
> > +
> > +static struct guest_attest_ops *fetch_ops(void)
> > +{
> > +     return list_first_entry_or_null(&guest_attest_list,
> > +                                     struct guest_attest_ops, list);
> > +}
> > +
> > +static struct guest_attest_ops *get_ops(void)
> > +{
> > +     down_read(&guest_attest_rwsem);
> > +     return fetch_ops();
> > +}
> > +
> > +static void put_ops(void)
> > +{
> > +     up_read(&guest_attest_rwsem);
> > +}
> > +
> > +int register_guest_attest_ops(struct guest_attest_ops *ops)
> > +{
> > +     struct guest_attest_ops *conflict;
> > +     int rc;
> > +
> > +     down_write(&guest_attest_rwsem);
> > +     conflict = fetch_ops();
> > +     if (conflict) {
> > +             pr_err("\"%s\" ops already registered\n", conflict->name);
> > +             rc = -EEXIST;
> > +             goto out;
> > +     }
> > +     list_add(&ops->list, &guest_attest_list);
> > +     try_module_get(ops->module);
> > +     rc = 0;
> > +out:
> > +     up_write(&guest_attest_rwsem);
> > +     return rc;
> > +}
> > +EXPORT_SYMBOL_GPL(register_guest_attest_ops);
> > +
> > +void unregister_guest_attest_ops(struct guest_attest_ops *ops)
> > +{
> > +     down_write(&guest_attest_rwsem);
> > +     list_del(&ops->list);
> > +     up_write(&guest_attest_rwsem);
> > +     module_put(ops->module);
> > +}
> > +EXPORT_SYMBOL_GPL(unregister_guest_attest_ops);
> > +
> > +static int __guest_attest_request_key(struct key *key, int level,
> > +                                   struct key *dest_keyring,
> > +                                   const char *callout_info, int callout_len,
> > +                                   struct key *authkey)
> > +{
> > +     struct guest_attest_ops *ops;
> > +     void *payload = NULL;
> > +     int rc, payload_len;
> > +
> > +     ops = get_ops();
> > +     if (!ops)
> > +             return -ENOKEY;
> > +
> > +     payload = kzalloc(max(GUEST_ATTEST_DATALEN, callout_len), GFP_KERNEL);
> > +     if (!payload) {
> > +             rc = -ENOMEM;
> > +             goto out;
> > +     }
>
> Is the idea to get the values like vmpl part of the payload?
>
> > +
> > +     payload_len = base64_decode(callout_info, callout_len, payload);
> > +     if (payload_len < 0 || payload_len > GUEST_ATTEST_DATALEN) {
> > +             rc = -EINVAL;
> > +             goto out;
> > +     }
> > +
> > +     rc = ops->request_attest(key, level, dest_keyring, payload, payload_len,
> > +                              authkey);
> > +out:
> > +     kfree(payload);
> > +     put_ops();
> > +     return rc;
> > +}
> > +
> > +static int guest_attest_request_key(struct key *authkey, void *data)
> > +{
> > +     struct request_key_auth *rka = get_request_key_auth(authkey);
> > +     struct key *key = rka->target_key;
> > +     unsigned long long id;
> > +     int rc, level;
> > +
> > +     pr_debug("desc: %s op: %s callout: %s\n", key->description, rka->op,
> > +              rka->callout_info ? (char *)rka->callout_info : "\"none\"");
> > +
> > +     if (sscanf(key->description, "guest_attest:%d:%llu", &level, &id) != 2)
> > +             return -EINVAL;
> > +
>
> Can you explain some details about the id and level? It is not very clear why
> we need it.
>
> > +     if (!rka->callout_info) {
> > +             rc = -EINVAL;
> > +             goto out;
> > +     }
> > +
> > +     rc = __guest_attest_request_key(key, level, rka->dest_keyring,
> > +                                     rka->callout_info, rka->callout_len,
> > +                                     authkey);
> > +out:
> > +     complete_request_key(authkey, rc);
> > +     return rc;
> > +}
> > +
> > +static int guest_attest_vet_description(const char *desc)
> > +{
> > +     unsigned long long id;
> > +     int level;
> > +
> > +     if (sscanf(desc, "guest_attest:%d:%llu", &level, &id) != 2)
> > +             return -EINVAL;
> > +     return 0;
> > +}
> > +
> > +static struct key_type key_type_guest_attest = {
> > +     .name = "guest_attest",
> > +     .preparse = user_preparse,
> > +     .free_preparse = user_free_preparse,
> > +     .instantiate = generic_key_instantiate,
> > +     .revoke = user_revoke,
> > +     .destroy = user_destroy,
> > +     .describe = user_describe,
> > +     .read = user_read,
> > +     .vet_description = guest_attest_vet_description,
> > +     .request_key = guest_attest_request_key,
> > +};
> > +
> > +static int __init guest_attest_init(void)
> > +{
> > +     return register_key_type(&key_type_guest_attest);
> > +}
> > +
> > +static void __exit guest_attest_exit(void)
> > +{
> > +     unregister_key_type(&key_type_guest_attest);
> > +}
> > +
> > +module_init(guest_attest_init);
> > +module_exit(guest_attest_exit);
> > +MODULE_LICENSE("GPL v2");
> > diff --git a/drivers/virt/coco/tdx-guest/Kconfig b/drivers/virt/coco/tdx-guest/Kconfig
> > index 14246fc2fb02..9a1ec85369fe 100644
> > --- a/drivers/virt/coco/tdx-guest/Kconfig
> > +++ b/drivers/virt/coco/tdx-guest/Kconfig
> > @@ -1,6 +1,7 @@
> >  config TDX_GUEST_DRIVER
> >       tristate "TDX Guest driver"
> >       depends on INTEL_TDX_GUEST
> > +     select GUEST_ATTEST
> >       help
> >         The driver provides userspace interface to communicate with
> >         the TDX module to request the TDX guest details like attestation
> > diff --git a/drivers/virt/coco/tdx-guest/tdx-guest.c b/drivers/virt/coco/tdx-guest/tdx-guest.c
> > index 388491fa63a1..65b5aab284d9 100644
> > --- a/drivers/virt/coco/tdx-guest/tdx-guest.c
> > +++ b/drivers/virt/coco/tdx-guest/tdx-guest.c
> > @@ -13,11 +13,13 @@
> >  #include <linux/string.h>
> >  #include <linux/uaccess.h>
> >  #include <linux/set_memory.h>
> > +#include <linux/key-type.h>
> >
> >  #include <uapi/linux/tdx-guest.h>
> >
> >  #include <asm/cpu_device_id.h>
> >  #include <asm/tdx.h>
> > +#include "../guest-attest/guest-attest.h"
> >
> >  /*
> >   * Intel's SGX QE implementation generally uses Quote size less
> > @@ -229,6 +231,62 @@ static const struct x86_cpu_id tdx_guest_ids[] = {
> >  };
> >  MODULE_DEVICE_TABLE(x86cpu, tdx_guest_ids);
> >
> > +static int tdx_request_attest(struct key *key, int level,
> > +                           struct key *dest_keyring, void *payload,
> > +                           int payload_len, struct key *authkey)
> > +{
> > +     u8 *tdreport;
> > +     long ret;
> > +
> > +     tdreport = kzalloc(TDX_REPORT_LEN, GFP_KERNEL);
> > +     if (!tdreport)
> > +             return -ENOMEM;
> > +
> > +     /* Generate TDREPORT0 using "TDG.MR.REPORT" TDCALL */
> > +     ret = tdx_mcall_get_report0(payload, tdreport);
> > +     if (ret)
> > +             goto out;
> > +
> > +     mutex_lock(&quote_lock);
> > +
> > +     memset(qentry->buf, 0, qentry->buf_len);
> > +     reinit_completion(&qentry->compl);
> > +     qentry->valid = true;
> > +
> > +     /* Submit GetQuote Request using GetQuote hyperetall */
> > +     ret = tdx_hcall_get_quote(qentry->buf, qentry->buf_len);
> > +     if (ret) {
> > +             pr_err("GetQuote hyperetall failed, status:%lx\n", ret);
> > +             ret = -EIO;
> > +             goto quote_failed;
> > +     }
> > +
> > +     /*
> > +      * Although the GHCI specification does not state explicitly that
> > +      * the VMM must not wait indefinitely for the Quote request to be
> > +      * completed, a sane VMM should always notify the guest after a
> > +      * certain time, regardless of whether the Quote generation is
> > +      * successful or not.  For now just assume the VMM will do so.
> > +      */
> > +     wait_for_completion(&qentry->compl);
> > +
> > +     ret = key_instantiate_and_link(key, qentry->buf, qentry->buf_len,
> > +                                    dest_keyring, authkey);
> > +
> > +quote_failed:
> > +     qentry->valid = false;
> > +     mutex_unlock(&quote_lock);
> > +out:
> > +     kfree(tdreport);
> > +     return ret;
> > +}
> > +
> > +static struct guest_attest_ops tdx_attest_ops = {
> > +     .name = KBUILD_MODNAME,
> > +     .module = THIS_MODULE,
> > +     .request_attest = tdx_request_attest,
> > +};
> > +
> >  static int __init tdx_guest_init(void)
> >  {
> >       int ret;
> > @@ -251,8 +309,14 @@ static int __init tdx_guest_init(void)
> >       if (ret)
> >               goto free_quote;
> >
> > +     ret = register_guest_attest_ops(&tdx_attest_ops);
> > +     if (ret)
> > +             goto free_irq;
> > +
> >       return 0;
> >
> > +free_irq:
> > +     tdx_unregister_event_irq_cb(quote_cb_handler, qentry);
> >  free_quote:
> >       free_quote_entry(qentry);
> >  free_misc:
> > @@ -264,6 +328,7 @@ module_init(tdx_guest_init);
> >
> >  static void __exit tdx_guest_exit(void)
> >  {
> > +     unregister_guest_attest_ops(&tdx_attest_ops);
> >       tdx_unregister_event_irq_cb(quote_cb_handler, qentry);
> >       free_quote_entry(qentry);
> >       misc_deregister(&tdx_misc_dev);
>
> --
> Sathyanarayanan Kuppuswamy
> Linux Kernel Developer



--
-Dionna Glaze, PhD (she/her)
Kuppuswamy Sathyanarayanan June 27, 2023, 12:39 a.m. UTC | #10
+Atish

Atish, any comments on this topic from RISC-v?

On 6/26/23 11:57 AM, Dionna Amalie Glaze wrote:
> On Sun, Jun 25, 2023 at 8:06 PM Sathyanarayanan Kuppuswamy
> <sathyanarayanan.kuppuswamy@linux.intel.com> wrote:
>>
>> Hi Dan,
>>
>> On 6/23/23 3:27 PM, Dan Williams wrote:
>>> Dan Williams wrote:
>>>> [ add David, Brijesh, and Atish]
>>>>
>>>> Kuppuswamy Sathyanarayanan wrote:
>>>>> In TDX guest, the second stage of the attestation process is Quote
>>>>> generation. This process is required to convert the locally generated
>>>>> TDREPORT into a remotely verifiable Quote. It involves sending the
>>>>> TDREPORT data to a Quoting Enclave (QE) which will verify the
>>>>> integrity of the TDREPORT and sign it with an attestation key.
>>>>>
>>>>> Intel's TDX attestation driver exposes TDX_CMD_GET_QUOTE IOCTL to
>>>>> allow the user agent to get the TD Quote.
>>>>>
>>>>> Add a kernel selftest module to verify the Quote generation feature.
>>>>>
>>>>> TD Quote generation involves following steps:
>>>>>
>>>>> * Get the TDREPORT data using TDX_CMD_GET_REPORT IOCTL.
>>>>> * Embed the TDREPORT data in quote buffer and request for quote
>>>>>   generation via TDX_CMD_GET_QUOTE IOCTL request.
>>>>> * Upon completion of the GetQuote request, check for non zero value
>>>>>   in the status field of Quote header to make sure the generated
>>>>>   quote is valid.
>>>>
>>>> What this cover letter does not say is that this is adding another
>>>> instance of the similar pattern as SNP_GET_REPORT.
>>>>
>>>> Linux is best served when multiple vendors trying to do similar
>>>> operations are brought together behind a common ABI. We see this in the
>>>> history of wrangling SCSI vendors behind common interfaces. Now multiple
>>>> confidential computing vendors trying to develop similar flows with
>>>> differentiated formats where that differentiation need not leak over the
>>>> ABI boundary.
>>> [..]
>>>
>>> Below is a rough mock up of this approach to demonstrate the direction.
>>> Again, the goal is to define an ABI that can support any vendor's
>>> arch-specific attestation method and key provisioning flows without
>>> leaking vendor-specific details, or confidential material over the
>>> user/kernel ABI.
>>
>> Thanks for working on this mock code and helping out. It gives me the
>> general idea about your proposal.
>>
>>>
>>> The observation is that there are a sufficient number of attestation
>>> flows available to review where Linux can define a superset ABI to
>>> contain them all. The other observation is that the implementations have
>>> features that may cross-polinate over time. For example the SEV
>>> privelege level consideration ("vmpl"), and the TDX RTMR (think TPM
>>> PCRs) mechanisms address generic Confidential Computing use cases.
>>
>>
>> I agree with your point about VMPL and RTMR feature cases. This observation
>> is valid for AMD SEV and TDX attestation flows. But I am not sure whether
>> it will hold true for other vendor implementations. Our sample set is not
>> good enough to make this conclusion. The reason for my concern is, if you
>> check the ABI interface used in the S390 arch attestation driver
>> (drivers/s390/char/uvdevice.c), you would notice that there is a significant
>> difference between the ABI used in that driver and SEV/TDX drivers. The S390
>> driver attestation request appears to accept two data blobs as input, as well
>> as a variety of vendor-specific header configurations.
>>
>> Maybe the s390 attestation model is a special case, but, I think we consider
>> this issue. Since we don't have a common spec, there is chance that any
>> superset ABI we define now may not meet future vendor requirements. One way to
>> handle it to leave enough space in the generic ABI to handle future vendor
>> requirements.
>>
>> I think it would be better if other vendors (like ARM or RISC) can comment and
>> confirm whether this proposal meets their demands.
>>
> 
> The VMPL-based separation that will house the supervisor module known
> as SVSM can have protocols that implement a TPM command interface, or
> an RTMR-extension interface, and will also need to have an
> SVSM-specific protocol attestation report format to keep the secure
> chain of custody apparent. We'd have different formats and protocols
> in the kernel, at least, to speak to each technology. I'm not sure
> it's worth the trouble of papering over all the... 3-4 technologies
> with similar but still weirdly different formats and ways of doing
> things with an abstracted attestation ABI, especially since the output
> all has to be interpreted in an architecture-specific way anyway.
> 
> ARM's Confidential Computing Realm Management Extensions (RME) seems
> to be going along the lines of a runtime measurement register model
> with their hardware enforced security. The number of registers isn't
> prescribed in the spec.
> 
> +Joey Gouly +linux-coco@lists.linux.dev as far as RME is concerned, do
> you know who would be best to weigh in on this discussion of a unified
> attestation model?


> 
>>>
>>> Vendor specific ioctls for all of this feels like surrender when Linux
>>> already has the keys subsystem which has plenty of degrees of freedom
>>> for tracking blobs with signatures and using those blobs to instantiate
>>> other blobs. It already serves as the ABI wrapping various TPM
>>> implementations and marshaling keys for storage encryption and other use
>>> cases that intersect Confidential Computing.
>>>
>>> The benefit of deprecating vendor-specific abstraction layers in
>>> userspace is secondary. The primary benefit is collaboration. It enables
>>> kernel developers from various architectures to collaborate on common
>>> infrastructure. If, referring back to my previous example, SEV adopts an
>>> RTMR-like mechanism and TDX adopts a vmpl-like mechanism it would be
>>> unfortunate if those efforts were siloed, duplicated, and needlessly
>>> differentiated to userspace. So while there are arguably a manageable
>>> number of basic arch attestation methods the planned expansion of those
>>> to build incremental functionality is where I believe we, as a
>>> community, will be glad that we invested in a "Linux format" for all of
>>> this.
>>>
>>> An example, to show what the strawman patch below enables: (req_key is
>>> the sample program from "man 2 request_key")
>>>
>>> # ./req_key guest_attest guest_attest:0:0-$desc $(cat user_data | base64)
>>> Key ID is 10e2f3a7
>>> # keyctl pipe 0x10e2f3a7 | hexdump -C
>>> 00000000  54 44 58 20 47 65 6e 65  72 61 74 65 64 20 51 75  |TDX Generated Qu|
>>> 00000010  6f 74 65 00 00 00 00 00  00 00 00 00 00 00 00 00  |ote.............|
>>> 00000020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>>> *
>>> 00004000
>>>
>>> This is the kernel instantiating a TDX Quote without the TDREPORT
>>> implementation detail ever leaving the kernel. Now, this is only the
>>
>> IIUC, the idea here is to cache the quote data and return it to the user whenever
>> possible, right? If yes, I think such optimization may not be very useful for our
>> case. AFAIK, the quote data will change whenever there is a change in the guest
>> measurement data. Since the validity of the generated quote will not be long,
>> and the frequency of quote generation requests is expected to be less, we may not
>> get much benefit from caching the quote data. I think we can keep this logic simple
>> by directly retrieving the quote data from the quoting enclave whenever there is a
>> request from the user.
>>
>>> top-half of what is needed. The missing bottom half takes that material
>>> and uses it to instantiate derived key material like the storage
>>> decryption key internal to the kernel. See "The Process" in
>>> Documentation/security/keys/request-key.rst for how the Keys subsystem
>>> handles the "keys for keys" use case.
>>
>> This is only useful for key-server use case, right? Attestation can also be
>> used for use cases like pattern matching or uploading some secure data, etc.
>> Since key-server is not the only use case, does it make sense to suppport
>> this derived key feature?
>>
>>>
>>> ---
>>> diff --git a/drivers/virt/Kconfig b/drivers/virt/Kconfig
>>> index f79ab13a5c28..0f775847028e 100644
>>> --- a/drivers/virt/Kconfig
>>> +++ b/drivers/virt/Kconfig
>>> @@ -54,4 +54,8 @@ source "drivers/virt/coco/sev-guest/Kconfig"
>>>
>>>  source "drivers/virt/coco/tdx-guest/Kconfig"
>>>
>>> +config GUEST_ATTEST
>>> +     tristate
>>> +     select KEYS
>>> +
>>>  endif
>>> diff --git a/drivers/virt/Makefile b/drivers/virt/Makefile
>>> index e9aa6fc96fab..66f6b838f8f4 100644
>>> --- a/drivers/virt/Makefile
>>> +++ b/drivers/virt/Makefile
>>> @@ -12,3 +12,4 @@ obj-$(CONFIG_ACRN_HSM)              += acrn/
>>>  obj-$(CONFIG_EFI_SECRET)     += coco/efi_secret/
>>>  obj-$(CONFIG_SEV_GUEST)              += coco/sev-guest/
>>>  obj-$(CONFIG_INTEL_TDX_GUEST)        += coco/tdx-guest/
>>> +obj-$(CONFIG_GUEST_ATTEST)   += coco/guest-attest/
>>> diff --git a/drivers/virt/coco/guest-attest/Makefile b/drivers/virt/coco/guest-attest/Makefile
>>> new file mode 100644
>>> index 000000000000..5581c5a27588
>>> --- /dev/null
>>> +++ b/drivers/virt/coco/guest-attest/Makefile
>>> @@ -0,0 +1,2 @@
>>> +obj-$(CONFIG_GUEST_ATTEST) += guest_attest.o
>>> +guest_attest-y := key.o
>>> diff --git a/drivers/virt/coco/guest-attest/key.c b/drivers/virt/coco/guest-attest/key.c
>>> new file mode 100644
>>> index 000000000000..2a494b6dd7a7
>>> --- /dev/null
>>> +++ b/drivers/virt/coco/guest-attest/key.c
>>> @@ -0,0 +1,159 @@
>>> +// SPDX-License-Identifier: GPL-2.0-only
>>> +/* Copyright(c) 2023 Intel Corporation. All rights reserved. */
>>> +
>>> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>>> +#include <linux/seq_file.h>
>>> +#include <linux/key-type.h>
>>> +#include <linux/module.h>
>>> +#include <linux/base64.h>
>>> +
>>> +#include <keys/request_key_auth-type.h>
>>> +#include <keys/user-type.h>
>>> +
>>> +#include "guest-attest.h"
>>
>> Can you share you guest-attest.h?
>>
>>> +
>>> +static LIST_HEAD(guest_attest_list);
>>> +static DECLARE_RWSEM(guest_attest_rwsem);
>>> +
>>> +static struct guest_attest_ops *fetch_ops(void)
>>> +{
>>> +     return list_first_entry_or_null(&guest_attest_list,
>>> +                                     struct guest_attest_ops, list);
>>> +}
>>> +
>>> +static struct guest_attest_ops *get_ops(void)
>>> +{
>>> +     down_read(&guest_attest_rwsem);
>>> +     return fetch_ops();
>>> +}
>>> +
>>> +static void put_ops(void)
>>> +{
>>> +     up_read(&guest_attest_rwsem);
>>> +}
>>> +
>>> +int register_guest_attest_ops(struct guest_attest_ops *ops)
>>> +{
>>> +     struct guest_attest_ops *conflict;
>>> +     int rc;
>>> +
>>> +     down_write(&guest_attest_rwsem);
>>> +     conflict = fetch_ops();
>>> +     if (conflict) {
>>> +             pr_err("\"%s\" ops already registered\n", conflict->name);
>>> +             rc = -EEXIST;
>>> +             goto out;
>>> +     }
>>> +     list_add(&ops->list, &guest_attest_list);
>>> +     try_module_get(ops->module);
>>> +     rc = 0;
>>> +out:
>>> +     up_write(&guest_attest_rwsem);
>>> +     return rc;
>>> +}
>>> +EXPORT_SYMBOL_GPL(register_guest_attest_ops);
>>> +
>>> +void unregister_guest_attest_ops(struct guest_attest_ops *ops)
>>> +{
>>> +     down_write(&guest_attest_rwsem);
>>> +     list_del(&ops->list);
>>> +     up_write(&guest_attest_rwsem);
>>> +     module_put(ops->module);
>>> +}
>>> +EXPORT_SYMBOL_GPL(unregister_guest_attest_ops);
>>> +
>>> +static int __guest_attest_request_key(struct key *key, int level,
>>> +                                   struct key *dest_keyring,
>>> +                                   const char *callout_info, int callout_len,
>>> +                                   struct key *authkey)
>>> +{
>>> +     struct guest_attest_ops *ops;
>>> +     void *payload = NULL;
>>> +     int rc, payload_len;
>>> +
>>> +     ops = get_ops();
>>> +     if (!ops)
>>> +             return -ENOKEY;
>>> +
>>> +     payload = kzalloc(max(GUEST_ATTEST_DATALEN, callout_len), GFP_KERNEL);
>>> +     if (!payload) {
>>> +             rc = -ENOMEM;
>>> +             goto out;
>>> +     }
>>
>> Is the idea to get the values like vmpl part of the payload?
>>
>>> +
>>> +     payload_len = base64_decode(callout_info, callout_len, payload);
>>> +     if (payload_len < 0 || payload_len > GUEST_ATTEST_DATALEN) {
>>> +             rc = -EINVAL;
>>> +             goto out;
>>> +     }
>>> +
>>> +     rc = ops->request_attest(key, level, dest_keyring, payload, payload_len,
>>> +                              authkey);
>>> +out:
>>> +     kfree(payload);
>>> +     put_ops();
>>> +     return rc;
>>> +}
>>> +
>>> +static int guest_attest_request_key(struct key *authkey, void *data)
>>> +{
>>> +     struct request_key_auth *rka = get_request_key_auth(authkey);
>>> +     struct key *key = rka->target_key;
>>> +     unsigned long long id;
>>> +     int rc, level;
>>> +
>>> +     pr_debug("desc: %s op: %s callout: %s\n", key->description, rka->op,
>>> +              rka->callout_info ? (char *)rka->callout_info : "\"none\"");
>>> +
>>> +     if (sscanf(key->description, "guest_attest:%d:%llu", &level, &id) != 2)
>>> +             return -EINVAL;
>>> +
>>
>> Can you explain some details about the id and level? It is not very clear why
>> we need it.
>>
>>> +     if (!rka->callout_info) {
>>> +             rc = -EINVAL;
>>> +             goto out;
>>> +     }
>>> +
>>> +     rc = __guest_attest_request_key(key, level, rka->dest_keyring,
>>> +                                     rka->callout_info, rka->callout_len,
>>> +                                     authkey);
>>> +out:
>>> +     complete_request_key(authkey, rc);
>>> +     return rc;
>>> +}
>>> +
>>> +static int guest_attest_vet_description(const char *desc)
>>> +{
>>> +     unsigned long long id;
>>> +     int level;
>>> +
>>> +     if (sscanf(desc, "guest_attest:%d:%llu", &level, &id) != 2)
>>> +             return -EINVAL;
>>> +     return 0;
>>> +}
>>> +
>>> +static struct key_type key_type_guest_attest = {
>>> +     .name = "guest_attest",
>>> +     .preparse = user_preparse,
>>> +     .free_preparse = user_free_preparse,
>>> +     .instantiate = generic_key_instantiate,
>>> +     .revoke = user_revoke,
>>> +     .destroy = user_destroy,
>>> +     .describe = user_describe,
>>> +     .read = user_read,
>>> +     .vet_description = guest_attest_vet_description,
>>> +     .request_key = guest_attest_request_key,
>>> +};
>>> +
>>> +static int __init guest_attest_init(void)
>>> +{
>>> +     return register_key_type(&key_type_guest_attest);
>>> +}
>>> +
>>> +static void __exit guest_attest_exit(void)
>>> +{
>>> +     unregister_key_type(&key_type_guest_attest);
>>> +}
>>> +
>>> +module_init(guest_attest_init);
>>> +module_exit(guest_attest_exit);
>>> +MODULE_LICENSE("GPL v2");
>>> diff --git a/drivers/virt/coco/tdx-guest/Kconfig b/drivers/virt/coco/tdx-guest/Kconfig
>>> index 14246fc2fb02..9a1ec85369fe 100644
>>> --- a/drivers/virt/coco/tdx-guest/Kconfig
>>> +++ b/drivers/virt/coco/tdx-guest/Kconfig
>>> @@ -1,6 +1,7 @@
>>>  config TDX_GUEST_DRIVER
>>>       tristate "TDX Guest driver"
>>>       depends on INTEL_TDX_GUEST
>>> +     select GUEST_ATTEST
>>>       help
>>>         The driver provides userspace interface to communicate with
>>>         the TDX module to request the TDX guest details like attestation
>>> diff --git a/drivers/virt/coco/tdx-guest/tdx-guest.c b/drivers/virt/coco/tdx-guest/tdx-guest.c
>>> index 388491fa63a1..65b5aab284d9 100644
>>> --- a/drivers/virt/coco/tdx-guest/tdx-guest.c
>>> +++ b/drivers/virt/coco/tdx-guest/tdx-guest.c
>>> @@ -13,11 +13,13 @@
>>>  #include <linux/string.h>
>>>  #include <linux/uaccess.h>
>>>  #include <linux/set_memory.h>
>>> +#include <linux/key-type.h>
>>>
>>>  #include <uapi/linux/tdx-guest.h>
>>>
>>>  #include <asm/cpu_device_id.h>
>>>  #include <asm/tdx.h>
>>> +#include "../guest-attest/guest-attest.h"
>>>
>>>  /*
>>>   * Intel's SGX QE implementation generally uses Quote size less
>>> @@ -229,6 +231,62 @@ static const struct x86_cpu_id tdx_guest_ids[] = {
>>>  };
>>>  MODULE_DEVICE_TABLE(x86cpu, tdx_guest_ids);
>>>
>>> +static int tdx_request_attest(struct key *key, int level,
>>> +                           struct key *dest_keyring, void *payload,
>>> +                           int payload_len, struct key *authkey)
>>> +{
>>> +     u8 *tdreport;
>>> +     long ret;
>>> +
>>> +     tdreport = kzalloc(TDX_REPORT_LEN, GFP_KERNEL);
>>> +     if (!tdreport)
>>> +             return -ENOMEM;
>>> +
>>> +     /* Generate TDREPORT0 using "TDG.MR.REPORT" TDCALL */
>>> +     ret = tdx_mcall_get_report0(payload, tdreport);
>>> +     if (ret)
>>> +             goto out;
>>> +
>>> +     mutex_lock(&quote_lock);
>>> +
>>> +     memset(qentry->buf, 0, qentry->buf_len);
>>> +     reinit_completion(&qentry->compl);
>>> +     qentry->valid = true;
>>> +
>>> +     /* Submit GetQuote Request using GetQuote hyperetall */
>>> +     ret = tdx_hcall_get_quote(qentry->buf, qentry->buf_len);
>>> +     if (ret) {
>>> +             pr_err("GetQuote hyperetall failed, status:%lx\n", ret);
>>> +             ret = -EIO;
>>> +             goto quote_failed;
>>> +     }
>>> +
>>> +     /*
>>> +      * Although the GHCI specification does not state explicitly that
>>> +      * the VMM must not wait indefinitely for the Quote request to be
>>> +      * completed, a sane VMM should always notify the guest after a
>>> +      * certain time, regardless of whether the Quote generation is
>>> +      * successful or not.  For now just assume the VMM will do so.
>>> +      */
>>> +     wait_for_completion(&qentry->compl);
>>> +
>>> +     ret = key_instantiate_and_link(key, qentry->buf, qentry->buf_len,
>>> +                                    dest_keyring, authkey);
>>> +
>>> +quote_failed:
>>> +     qentry->valid = false;
>>> +     mutex_unlock(&quote_lock);
>>> +out:
>>> +     kfree(tdreport);
>>> +     return ret;
>>> +}
>>> +
>>> +static struct guest_attest_ops tdx_attest_ops = {
>>> +     .name = KBUILD_MODNAME,
>>> +     .module = THIS_MODULE,
>>> +     .request_attest = tdx_request_attest,
>>> +};
>>> +
>>>  static int __init tdx_guest_init(void)
>>>  {
>>>       int ret;
>>> @@ -251,8 +309,14 @@ static int __init tdx_guest_init(void)
>>>       if (ret)
>>>               goto free_quote;
>>>
>>> +     ret = register_guest_attest_ops(&tdx_attest_ops);
>>> +     if (ret)
>>> +             goto free_irq;
>>> +
>>>       return 0;
>>>
>>> +free_irq:
>>> +     tdx_unregister_event_irq_cb(quote_cb_handler, qentry);
>>>  free_quote:
>>>       free_quote_entry(qentry);
>>>  free_misc:
>>> @@ -264,6 +328,7 @@ module_init(tdx_guest_init);
>>>
>>>  static void __exit tdx_guest_exit(void)
>>>  {
>>> +     unregister_guest_attest_ops(&tdx_attest_ops);
>>>       tdx_unregister_event_irq_cb(quote_cb_handler, qentry);
>>>       free_quote_entry(qentry);
>>>       misc_deregister(&tdx_misc_dev);
>>
>> --
>> Sathyanarayanan Kuppuswamy
>> Linux Kernel Developer
> 
> 
> 
> --
> -Dionna Glaze, PhD (she/her)
Dan Williams June 27, 2023, 11:44 p.m. UTC | #11
Sathyanarayanan Kuppuswamy wrote:
> Hi Dan,
> 
> On 6/23/23 3:27 PM, Dan Williams wrote:
> > Dan Williams wrote:
> >> [ add David, Brijesh, and Atish]
> >>
> >> Kuppuswamy Sathyanarayanan wrote:
> >>> In TDX guest, the second stage of the attestation process is Quote
> >>> generation. This process is required to convert the locally generated
> >>> TDREPORT into a remotely verifiable Quote. It involves sending the
> >>> TDREPORT data to a Quoting Enclave (QE) which will verify the
> >>> integrity of the TDREPORT and sign it with an attestation key.
> >>>
> >>> Intel's TDX attestation driver exposes TDX_CMD_GET_QUOTE IOCTL to
> >>> allow the user agent to get the TD Quote.
> >>>
> >>> Add a kernel selftest module to verify the Quote generation feature.
> >>>
> >>> TD Quote generation involves following steps:
> >>>
> >>> * Get the TDREPORT data using TDX_CMD_GET_REPORT IOCTL.
> >>> * Embed the TDREPORT data in quote buffer and request for quote
> >>>   generation via TDX_CMD_GET_QUOTE IOCTL request.
> >>> * Upon completion of the GetQuote request, check for non zero value
> >>>   in the status field of Quote header to make sure the generated
> >>>   quote is valid.
> >>
> >> What this cover letter does not say is that this is adding another
> >> instance of the similar pattern as SNP_GET_REPORT.
> >>
> >> Linux is best served when multiple vendors trying to do similar
> >> operations are brought together behind a common ABI. We see this in the
> >> history of wrangling SCSI vendors behind common interfaces. Now multiple
> >> confidential computing vendors trying to develop similar flows with
> >> differentiated formats where that differentiation need not leak over the
> >> ABI boundary.
> > [..]
> > 
> > Below is a rough mock up of this approach to demonstrate the direction.
> > Again, the goal is to define an ABI that can support any vendor's
> > arch-specific attestation method and key provisioning flows without
> > leaking vendor-specific details, or confidential material over the
> > user/kernel ABI.
> 
> Thanks for working on this mock code and helping out. It gives me the
> general idea about your proposal.
> 
> > 
> > The observation is that there are a sufficient number of attestation
> > flows available to review where Linux can define a superset ABI to
> > contain them all. The other observation is that the implementations have
> > features that may cross-polinate over time. For example the SEV
> > privelege level consideration ("vmpl"), and the TDX RTMR (think TPM
> > PCRs) mechanisms address generic Confidential Computing use cases.
> 
> 
> I agree with your point about VMPL and RTMR feature cases. This observation
> is valid for AMD SEV and TDX attestation flows. But I am not sure whether
> it will hold true for other vendor implementations. Our sample set is not
> good enough to make this conclusion. The reason for my concern is, if you
> check the ABI interface used in the S390 arch attestation driver
> (drivers/s390/char/uvdevice.c), you would notice that there is a significant
> difference between the ABI used in that driver and SEV/TDX drivers. The S390
> driver attestation request appears to accept two data blobs as input, as well
> as a variety of vendor-specific header configurations.

I would need more time to investigate. It's also the case that if both
major x86 vendors plus ARM and/or RISC-V can all get behind the same
frontend then that is already success in my mind.

> Maybe the s390 attestation model is a special case, but, I think we consider
> this issue. Since we don't have a common spec, there is chance that any
> superset ABI we define now may not meet future vendor requirements. One way to
> handle it to leave enough space in the generic ABI to handle future vendor
> requirements.

Perhaps, but the goal here is to clearly indicate "this is how Linux
conveys confidential computing attestation concepts". If there is future
vendor innovation in this space it needs to consider how it meets the
established needs of Linux, not the other way round.

> I think it would be better if other vendors (like ARM or RISC) can comment and
> confirm whether this proposal meets their demands.

The more participation the better. Open source definitely involves a
component speaking up when the definition of things are still malleable,
or catching issues before they go upstream.

> > Vendor specific ioctls for all of this feels like surrender when Linux
> > already has the keys subsystem which has plenty of degrees of freedom
> > for tracking blobs with signatures and using those blobs to instantiate
> > other blobs. It already serves as the ABI wrapping various TPM
> > implementations and marshaling keys for storage encryption and other use
> > cases that intersect Confidential Computing.
> > 
> > The benefit of deprecating vendor-specific abstraction layers in
> > userspace is secondary. The primary benefit is collaboration. It enables
> > kernel developers from various architectures to collaborate on common
> > infrastructure. If, referring back to my previous example, SEV adopts an
> > RTMR-like mechanism and TDX adopts a vmpl-like mechanism it would be
> > unfortunate if those efforts were siloed, duplicated, and needlessly
> > differentiated to userspace. So while there are arguably a manageable
> > number of basic arch attestation methods the planned expansion of those
> > to build incremental functionality is where I believe we, as a
> > community, will be glad that we invested in a "Linux format" for all of
> > this.
> > 
> > An example, to show what the strawman patch below enables: (req_key is
> > the sample program from "man 2 request_key")
> > 
> > # ./req_key guest_attest guest_attest:0:0-$desc $(cat user_data | base64)
> > Key ID is 10e2f3a7
> > # keyctl pipe 0x10e2f3a7 | hexdump -C
> > 00000000  54 44 58 20 47 65 6e 65  72 61 74 65 64 20 51 75  |TDX Generated Qu|
> > 00000010  6f 74 65 00 00 00 00 00  00 00 00 00 00 00 00 00  |ote.............|
> > 00000020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> > *
> > 00004000
> > 
> > This is the kernel instantiating a TDX Quote without the TDREPORT
> > implementation detail ever leaving the kernel. Now, this is only the
> 
> IIUC, the idea here is to cache the quote data and return it to the user whenever
> possible, right? If yes, I think such optimization may not be very useful for our
> case. AFAIK, the quote data will change whenever there is a change in the guest
> measurement data. Since the validity of the generated quote will not be long,
> and the frequency of quote generation requests is expected to be less, we may not
> get much benefit from caching the quote data. I think we can keep this logic simple
> by directly retrieving the quote data from the quoting enclave whenever there is a
> request from the user.

The Keys subsystem already supports the concept of keys that expire
immediately, so no need for special consideration here that I can see.

> > top-half of what is needed. The missing bottom half takes that material
> > and uses it to instantiate derived key material like the storage
> > decryption key internal to the kernel. See "The Process" in
> > Documentation/security/keys/request-key.rst for how the Keys subsystem
> > handles the "keys for keys" use case.
> 
> This is only useful for key-server use case, right? Attestation can also be
> used for use cases like pattern matching or uploading some secure data, etc.
> Since key-server is not the only use case, does it make sense to suppport
> this derived key feature?

The Keys subsystem is just a way for both userspace and kernel space to
request the instantiation of blobs that mediate access to another
resource be it another key or something else. So key-server is only one
example client.

The other reason for defining a common frontend is so the kernel can
understand and mediate resource access as a kernel is wont to do. The
ioctl() approach blinds the kernel and requires userspace to repeatedly
solve problems in vendor specific ways. They Keys proposal also has the
property of not requiring userspace round trips for things like
unlocking storage. The driver can just do request_keys() and kick off
the process on its own.

> 
> > 
> > ---
> > diff --git a/drivers/virt/Kconfig b/drivers/virt/Kconfig
> > index f79ab13a5c28..0f775847028e 100644
> > --- a/drivers/virt/Kconfig
> > +++ b/drivers/virt/Kconfig
> > @@ -54,4 +54,8 @@ source "drivers/virt/coco/sev-guest/Kconfig"
> >  
> >  source "drivers/virt/coco/tdx-guest/Kconfig"
> >  
> > +config GUEST_ATTEST
> > +	tristate
> > +	select KEYS
> > +
> >  endif
> > diff --git a/drivers/virt/Makefile b/drivers/virt/Makefile
> > index e9aa6fc96fab..66f6b838f8f4 100644
> > --- a/drivers/virt/Makefile
> > +++ b/drivers/virt/Makefile
> > @@ -12,3 +12,4 @@ obj-$(CONFIG_ACRN_HSM)		+= acrn/
> >  obj-$(CONFIG_EFI_SECRET)	+= coco/efi_secret/
> >  obj-$(CONFIG_SEV_GUEST)		+= coco/sev-guest/
> >  obj-$(CONFIG_INTEL_TDX_GUEST)	+= coco/tdx-guest/
> > +obj-$(CONFIG_GUEST_ATTEST)	+= coco/guest-attest/
> > diff --git a/drivers/virt/coco/guest-attest/Makefile b/drivers/virt/coco/guest-attest/Makefile
> > new file mode 100644
> > index 000000000000..5581c5a27588
> > --- /dev/null
> > +++ b/drivers/virt/coco/guest-attest/Makefile
> > @@ -0,0 +1,2 @@
> > +obj-$(CONFIG_GUEST_ATTEST) += guest_attest.o
> > +guest_attest-y := key.o
> > diff --git a/drivers/virt/coco/guest-attest/key.c b/drivers/virt/coco/guest-attest/key.c
> > new file mode 100644
> > index 000000000000..2a494b6dd7a7
> > --- /dev/null
> > +++ b/drivers/virt/coco/guest-attest/key.c
> > @@ -0,0 +1,159 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/* Copyright(c) 2023 Intel Corporation. All rights reserved. */
> > +
> > +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> > +#include <linux/seq_file.h>
> > +#include <linux/key-type.h>
> > +#include <linux/module.h>
> > +#include <linux/base64.h>
> > +
> > +#include <keys/request_key_auth-type.h>
> > +#include <keys/user-type.h>
> > +
> > +#include "guest-attest.h"
> 
> Can you share you guest-attest.h?

Apologies, missed a 'git add':

/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright(c) 2023 Intel Corporation. */

#ifndef __GUEST_ATTEST_H__
#define __GUEST_ATTEST_H__
#include <linux/list.h>

/*
 * arch specific ops, only one is expected to be registered at a time
 * i.e. either SEV or TDX, but not both
 */
struct guest_attest_ops {
        const char *name;
        struct module *module;
        struct list_head list;
        int (*request_attest)(struct key *key, int level,
                              struct key *dest_keyring, void *payload,
                              int payload_len, struct key *authkey);
};

#define GUEST_ATTEST_DATALEN 64

int register_guest_attest_ops(struct guest_attest_ops *ops);
void unregister_guest_attest_ops(struct guest_attest_ops *ops);

#endif /*__GUEST_ATTEST_H__ */


> 
> > +
> > +static LIST_HEAD(guest_attest_list);
> > +static DECLARE_RWSEM(guest_attest_rwsem);
> > +
> > +static struct guest_attest_ops *fetch_ops(void)
> > +{
> > +	return list_first_entry_or_null(&guest_attest_list,
> > +					struct guest_attest_ops, list);
> > +}
> > +
> > +static struct guest_attest_ops *get_ops(void)
> > +{
> > +	down_read(&guest_attest_rwsem);
> > +	return fetch_ops();
> > +}
> > +
> > +static void put_ops(void)
> > +{
> > +	up_read(&guest_attest_rwsem);
> > +}
> > +
> > +int register_guest_attest_ops(struct guest_attest_ops *ops)
> > +{
> > +	struct guest_attest_ops *conflict;
> > +	int rc;
> > +
> > +	down_write(&guest_attest_rwsem);
> > +	conflict = fetch_ops();
> > +	if (conflict) {
> > +		pr_err("\"%s\" ops already registered\n", conflict->name);
> > +		rc = -EEXIST;
> > +		goto out;
> > +	}
> > +	list_add(&ops->list, &guest_attest_list);
> > +	try_module_get(ops->module);
> > +	rc = 0;
> > +out:
> > +	up_write(&guest_attest_rwsem);
> > +	return rc;
> > +}
> > +EXPORT_SYMBOL_GPL(register_guest_attest_ops);
> > +
> > +void unregister_guest_attest_ops(struct guest_attest_ops *ops)
> > +{
> > +	down_write(&guest_attest_rwsem);
> > +	list_del(&ops->list);
> > +	up_write(&guest_attest_rwsem);
> > +	module_put(ops->module);
> > +}
> > +EXPORT_SYMBOL_GPL(unregister_guest_attest_ops);
> > +
> > +static int __guest_attest_request_key(struct key *key, int level,
> > +				      struct key *dest_keyring,
> > +				      const char *callout_info, int callout_len,
> > +				      struct key *authkey)
> > +{
> > +	struct guest_attest_ops *ops;
> > +	void *payload = NULL;
> > +	int rc, payload_len;
> > +
> > +	ops = get_ops();
> > +	if (!ops)
> > +		return -ENOKEY;
> > +
> > +	payload = kzalloc(max(GUEST_ATTEST_DATALEN, callout_len), GFP_KERNEL);
> > +	if (!payload) {
> > +		rc = -ENOMEM;
> > +		goto out;
> > +	}
> 
> Is the idea to get the values like vmpl part of the payload?

No, to me vmpl likely needs to be conveyed in the key-description.
Payload is simply the 64-bytes that both SEV and TDX take as input for
the attestation request. The AMD specification seems to imply that
payload is itself a public key? If that is the expectation then it may
be more appropriate to make that a separate retrieval vs something
passed in directly from userspace.

> 
> > +
> > +	payload_len = base64_decode(callout_info, callout_len, payload);
> > +	if (payload_len < 0 || payload_len > GUEST_ATTEST_DATALEN) {
> > +		rc = -EINVAL;
> > +		goto out;
> > +	}
> > +
> > +	rc = ops->request_attest(key, level, dest_keyring, payload, payload_len,
> > +				 authkey);
> > +out:
> > +	kfree(payload);
> > +	put_ops();
> > +	return rc;
> > +}
> > +
> > +static int guest_attest_request_key(struct key *authkey, void *data)
> > +{
> > +	struct request_key_auth *rka = get_request_key_auth(authkey);
> > +	struct key *key = rka->target_key;
> > +	unsigned long long id;
> > +	int rc, level;
> > +
> > +	pr_debug("desc: %s op: %s callout: %s\n", key->description, rka->op,
> > +		 rka->callout_info ? (char *)rka->callout_info : "\"none\"");
> > +
> > +	if (sscanf(key->description, "guest_attest:%d:%llu", &level, &id) != 2)
> > +		return -EINVAL;
> > +
> 
> Can you explain some details about the id and level? It is not very clear why
> we need it.

@level is my mockup of the vmpl concept, and @id is something free-form
for now that the requester of the attestation knows what it means. This
id-scheme would follow a Linux-defined format. More discussion is needed
here about what attestation data is used for and opportunities to define
a common naming scheme. For example, what is the common name of the
guest_attest-key that supports retrieving the storage decryption key
from the key-server.
Dan Williams June 28, 2023, 12:11 a.m. UTC | #12
Dionna Amalie Glaze wrote:
> On Sun, Jun 25, 2023 at 8:06 PM Sathyanarayanan Kuppuswamy
> <sathyanarayanan.kuppuswamy@linux.intel.com> wrote:
[..]
> > Hi Dan,
> >
> > On 6/23/23 3:27 PM, Dan Williams wrote:
> > > Dan Williams wrote:
> > >> [ add David, Brijesh, and Atish]
> > >>
> > >> Kuppuswamy Sathyanarayanan wrote:
> > >>> In TDX guest, the second stage of the attestation process is Quote
> > >>> generation. This process is required to convert the locally generated
> > >>> TDREPORT into a remotely verifiable Quote. It involves sending the
> > >>> TDREPORT data to a Quoting Enclave (QE) which will verify the
> > >>> integrity of the TDREPORT and sign it with an attestation key.
> > >>>
> > >>> Intel's TDX attestation driver exposes TDX_CMD_GET_QUOTE IOCTL to
> > >>> allow the user agent to get the TD Quote.
> > >>>
> > >>> Add a kernel selftest module to verify the Quote generation feature.
> > >>>
> > >>> TD Quote generation involves following steps:
> > >>>
> > >>> * Get the TDREPORT data using TDX_CMD_GET_REPORT IOCTL.
> > >>> * Embed the TDREPORT data in quote buffer and request for quote
> > >>>   generation via TDX_CMD_GET_QUOTE IOCTL request.
> > >>> * Upon completion of the GetQuote request, check for non zero value
> > >>>   in the status field of Quote header to make sure the generated
> > >>>   quote is valid.
> > >>
> > >> What this cover letter does not say is that this is adding another
> > >> instance of the similar pattern as SNP_GET_REPORT.
> > >>
> > >> Linux is best served when multiple vendors trying to do similar
> > >> operations are brought together behind a common ABI. We see this in the
> > >> history of wrangling SCSI vendors behind common interfaces. Now multiple
> > >> confidential computing vendors trying to develop similar flows with
> > >> differentiated formats where that differentiation need not leak over the
> > >> ABI boundary.
> > > [..]
> > >
> > > Below is a rough mock up of this approach to demonstrate the direction.
> > > Again, the goal is to define an ABI that can support any vendor's
> > > arch-specific attestation method and key provisioning flows without
> > > leaking vendor-specific details, or confidential material over the
> > > user/kernel ABI.
> >
> > Thanks for working on this mock code and helping out. It gives me the
> > general idea about your proposal.
> >
> > >
> > > The observation is that there are a sufficient number of attestation
> > > flows available to review where Linux can define a superset ABI to
> > > contain them all. The other observation is that the implementations have
> > > features that may cross-polinate over time. For example the SEV
> > > privelege level consideration ("vmpl"), and the TDX RTMR (think TPM
> > > PCRs) mechanisms address generic Confidential Computing use cases.
> >
> >
> > I agree with your point about VMPL and RTMR feature cases. This observation
> > is valid for AMD SEV and TDX attestation flows. But I am not sure whether
> > it will hold true for other vendor implementations. Our sample set is not
> > good enough to make this conclusion. The reason for my concern is, if you
> > check the ABI interface used in the S390 arch attestation driver
> > (drivers/s390/char/uvdevice.c), you would notice that there is a significant
> > difference between the ABI used in that driver and SEV/TDX drivers. The S390
> > driver attestation request appears to accept two data blobs as input, as well
> > as a variety of vendor-specific header configurations.
> >
> > Maybe the s390 attestation model is a special case, but, I think we consider
> > this issue. Since we don't have a common spec, there is chance that any
> > superset ABI we define now may not meet future vendor requirements. One way to
> > handle it to leave enough space in the generic ABI to handle future vendor
> > requirements.
> >
> > I think it would be better if other vendors (like ARM or RISC) can comment and
> > confirm whether this proposal meets their demands.
> >
> 
> The VMPL-based separation that will house the supervisor module known
> as SVSM can have protocols that implement a TPM command interface, or
> an RTMR-extension interface, and will also need to have an
> SVSM-specific protocol attestation report format to keep the secure
> chain of custody apparent. We'd have different formats and protocols
> in the kernel, at least, to speak to each technology. 

That's where I hope the line can be drawn, i.e. that all of this vendor
differentiation really only matters inside the kernel in the end.

> I'm not sure it's worth the trouble of papering over all the... 3-4
> technologies with similar but still weirdly different formats and ways
> of doing things with an abstracted attestation ABI, especially since
> the output all has to be interpreted in an architecture-specific way
> anyway.

This is where I need help. Can you identify where the following
assertion falls over:

"The minimum viable key-server is one that can generically validate a
blob with an ECDSA signature".

I.e. the fact that SEV and TDX send different length blobs is less
important than validating that signature.

If it is always the case that specific fields in the blob need to be
decoded then yes, that weakens the assertion. However, maybe that means
that kernel code parses the blob and conveys that parsed info along with
vendor attestation payload all signed by a Linux key. I.e. still allow
for a unified output format + signed vendor blob and provide a path to
keep all the vendor specific handling internal to the kernel.

> ARM's Confidential Computing Realm Management Extensions (RME) seems
> to be going along the lines of a runtime measurement register model
> with their hardware enforced security. The number of registers isn't
> prescribed in the spec.
> 
> +Joey Gouly +linux-coco@lists.linux.dev as far as RME is concerned, do
> you know who would be best to weigh in on this discussion of a unified
> attestation model?
Dionna Amalie Glaze June 28, 2023, 1:36 a.m. UTC | #13
On Tue, Jun 27, 2023 at 5:13 PM Dan Williams <dan.j.williams@intel.com> wrote:
> [..]
> >
> > The VMPL-based separation that will house the supervisor module known
> > as SVSM can have protocols that implement a TPM command interface, or
> > an RTMR-extension interface, and will also need to have an
> > SVSM-specific protocol attestation report format to keep the secure
> > chain of custody apparent. We'd have different formats and protocols
> > in the kernel, at least, to speak to each technology.
>
> That's where I hope the line can be drawn, i.e. that all of this vendor
> differentiation really only matters inside the kernel in the end.
>
> > I'm not sure it's worth the trouble of papering over all the... 3-4
> > technologies with similar but still weirdly different formats and ways
> > of doing things with an abstracted attestation ABI, especially since
> > the output all has to be interpreted in an architecture-specific way
> > anyway.
>
> This is where I need help. Can you identify where the following
> assertion falls over:
>
> "The minimum viable key-server is one that can generically validate a
> blob with an ECDSA signature".
>
> I.e. the fact that SEV and TDX send different length blobs is less
> important than validating that signature.
>
> If it is always the case that specific fields in the blob need to be
> decoded then yes, that weakens the assertion. However, maybe that means
> that kernel code parses the blob and conveys that parsed info along with
> vendor attestation payload all signed by a Linux key. I.e. still allow
> for a unified output format + signed vendor blob and provide a path to
> keep all the vendor specific handling internal to the kernel.
>

All the specific fields of the blob have to be decoded and subjected
to an acceptance policy. That policy will most always be different
across different platforms and VM owners. I wrote all of
github.com/google/go-sev-guest, including the verification and
validation logic, and it's going to get more complicated, and the
sources of the data that provide validators with notions of what
values can be trusted will be varied. The formats are not
standardized. The Confidential Computing Consortium should be working
toward that, but it's a slow process. There's IETF RATS. There's
in-toto.io attestations. There's Azure's JWT thing. There's a signed
serialized protocol buffer that I've decided is what Google is going
to produce while we figure out all the "right" formats to use. There
will be factions and absolute gridlock for multiple years if we
require solidifying an abstraction for the kernel to manage all this
logic before passing a report on to user space.

Now, not only are the field contents important, the certificates of
the keys that signed the report are important. Each platform has its
own special x509v3 extensions and key hierarchy to express what parts
of the report should be what value if signed by this key, and in TDX's
case there are extra endpoints that you need to query to determine if
there's an active CVE on the associated TCB version. This is how they
avoid adding every cpu's key to the leaf certificate's CRL.

You really shouldn't be putting attestation validation logic in the
kernel. It belongs outside of the VM entirely with the party that will
only release access keys to the VM if it can prove it's running the
software it claims, on the platform it claims. I think Windows puts a
remote procedure call in their guest attestation driver to the Azure
attestation service, and that is an anti-pattern in my mind.

> > ARM's Confidential Computing Realm Management Extensions (RME) seems
> > to be going along the lines of a runtime measurement register model
> > with their hardware enforced security. The number of registers isn't
> > prescribed in the spec.
> >
> > +Joey Gouly +linux-coco@lists.linux.dev as far as RME is concerned, do
> > you know who would be best to weigh in on this discussion of a unified
> > attestation model?
Huang, Kai June 28, 2023, 2:16 a.m. UTC | #14
On Tue, 2023-06-27 at 18:36 -0700, Dionna Amalie Glaze wrote:
> On Tue, Jun 27, 2023 at 5:13 PM Dan Williams <dan.j.williams@intel.com> wrote:
> > [..]
> > > 
> > > The VMPL-based separation that will house the supervisor module known
> > > as SVSM can have protocols that implement a TPM command interface, or
> > > an RTMR-extension interface, and will also need to have an
> > > SVSM-specific protocol attestation report format to keep the secure
> > > chain of custody apparent. We'd have different formats and protocols
> > > in the kernel, at least, to speak to each technology.
> > 
> > That's where I hope the line can be drawn, i.e. that all of this vendor
> > differentiation really only matters inside the kernel in the end.
> > 
> > > I'm not sure it's worth the trouble of papering over all the... 3-4
> > > technologies with similar but still weirdly different formats and ways
> > > of doing things with an abstracted attestation ABI, especially since
> > > the output all has to be interpreted in an architecture-specific way
> > > anyway.
> > 
> > This is where I need help. Can you identify where the following
> > assertion falls over:
> > 
> > "The minimum viable key-server is one that can generically validate a
> > blob with an ECDSA signature".
> > 
> > I.e. the fact that SEV and TDX send different length blobs is less
> > important than validating that signature.
> > 
> > If it is always the case that specific fields in the blob need to be
> > decoded then yes, that weakens the assertion. However, maybe that means
> > that kernel code parses the blob and conveys that parsed info along with
> > vendor attestation payload all signed by a Linux key. I.e. still allow
> > for a unified output format + signed vendor blob and provide a path to
> > keep all the vendor specific handling internal to the kernel.
> > 
> 
> All the specific fields of the blob have to be decoded and subjected
> to an acceptance policy. That policy will most always be different
> across different platforms and VM owners. I wrote all of
> github.com/google/go-sev-guest, including the verification and
> validation logic, and it's going to get more complicated, and the
> sources of the data that provide validators with notions of what
> values can be trusted will be varied. The formats are not
> standardized. The Confidential Computing Consortium should be working
> toward that, but it's a slow process. There's IETF RATS. There's
> in-toto.io attestations. There's Azure's JWT thing. There's a signed
> serialized protocol buffer that I've decided is what Google is going
> to produce while we figure out all the "right" formats to use. There
> will be factions and absolute gridlock for multiple years if we
> require solidifying an abstraction for the kernel to manage all this
> logic before passing a report on to user space.
> 
> Now, not only are the field contents important, the certificates of
> the keys that signed the report are important. Each platform has its
> own special x509v3 extensions and key hierarchy to express what parts
> of the report should be what value if signed by this key, and in TDX's
> case there are extra endpoints that you need to query to determine if
> there's an active CVE on the associated TCB version. This is how they
> avoid adding every cpu's key to the leaf certificate's CRL.
> 
> You really shouldn't be putting attestation validation logic in the
> kernel.

Agreed.  The data blob for remote verification should be just some data blob to
the kernel.  I think the kernel shouldn't even try to understand the data blob
is for which architecture.  From the kernel's perspective, it should be just
some data blob that the kernel gets from hardware/firmware or whatever embedded
in the root-of-trust in the hardware after taking some input from usrspace for
the unique identity of the blob that can be used to, e.g., mitigate replay-
attack, etc.
Huang, Kai June 28, 2023, 2:47 a.m. UTC | #15
On Fri, 2023-06-23 at 15:27 -0700, Dan Williams wrote:
> An example, to show what the strawman patch below enables: (req_key is
> the sample program from "man 2 request_key")
> 
> # ./req_key guest_attest guest_attest:0:0-$desc $(cat user_data | base64)
> Key ID is 10e2f3a7
> # keyctl pipe 0x10e2f3a7 | hexdump -C
> 00000000  54 44 58 20 47 65 6e 65  72 61 74 65 64 20 51 75  |TDX Generated Qu|
> 00000010  6f 74 65 00 00 00 00 00  00 00 00 00 00 00 00 00  |ote.............|
> 00000020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> *
> 00004000
> 
> This is the kernel instantiating a TDX Quote without the TDREPORT
> implementation detail ever leaving the kernel. 

There might be one small issue here.  The generated Quote has a userspace
provided 'u64 REPORTDATA' (which originally comes from userspace when generating
the TDREPORT) which is supposed to be used by the attestation service to
uniquely identify this Quote to mitigate some sort of reply-attack.  For
instance, the REPORTDATA could be a per-TLS-session data provided by the
attestation service.

I don't know whether other archs have similar thing in their Quote-like blob,
but I believe this in general is a reasonable thing.

IIUC, one problem of using above request_key() to generate the Quote is
potentially other userspace processes are able to see this, while I believe this
REPORTDATA is only supposed to be visible by the application which is
responsible for talking to the attestation service.  

I am not sure whether this is a risk, but using IOCTL() should be able to avoid
this risk.
Dan Williams June 28, 2023, 2:52 a.m. UTC | #16
Dionna Amalie Glaze wrote:
> On Tue, Jun 27, 2023 at 5:13 PM Dan Williams <dan.j.williams@intel.com> wrote:
> > [..]
> > >
> > > The VMPL-based separation that will house the supervisor module known
> > > as SVSM can have protocols that implement a TPM command interface, or
> > > an RTMR-extension interface, and will also need to have an
> > > SVSM-specific protocol attestation report format to keep the secure
> > > chain of custody apparent. We'd have different formats and protocols
> > > in the kernel, at least, to speak to each technology.
> >
> > That's where I hope the line can be drawn, i.e. that all of this vendor
> > differentiation really only matters inside the kernel in the end.
> >
> > > I'm not sure it's worth the trouble of papering over all the... 3-4
> > > technologies with similar but still weirdly different formats and ways
> > > of doing things with an abstracted attestation ABI, especially since
> > > the output all has to be interpreted in an architecture-specific way
> > > anyway.
> >
> > This is where I need help. Can you identify where the following
> > assertion falls over:
> >
> > "The minimum viable key-server is one that can generically validate a
> > blob with an ECDSA signature".
> >
> > I.e. the fact that SEV and TDX send different length blobs is less
> > important than validating that signature.
> >
> > If it is always the case that specific fields in the blob need to be
> > decoded then yes, that weakens the assertion. However, maybe that means
> > that kernel code parses the blob and conveys that parsed info along with
> > vendor attestation payload all signed by a Linux key. I.e. still allow
> > for a unified output format + signed vendor blob and provide a path to
> > keep all the vendor specific handling internal to the kernel.
> >

First, thank you for engaging, it speeds up the iteration. This
confirmed my worry that the secondary goal of this proposal, a common
verification implementation, is indeed unachievable in the near term. A
few clarifying questions below, but I will let this go.

The primary goal, achievable on a short runway, is more for kernel
developers. It is to have a common infrastructure for marshaling vendor
payloads, provide a mechanism to facilitate kernel initiated requests to
a key-server, and to deploy a common frontend for concepts like runtime
measurement (likely as another backend to what Keys already understands
for various TPM PCR implementations).

> All the specific fields of the blob have to be decoded and subjected
> to an acceptance policy. That policy will most always be different
> across different platforms and VM owners. I wrote all of
> github.com/google/go-sev-guest, including the verification and
> validation logic, and it's going to get more complicated, and the
> sources of the data that provide validators with notions of what
> values can be trusted will be varied.

Can you provide an example? I ask only to include it in the kernel
commit log for a crisp explanation why this proposed Keys format will
continue to convey a raw vendor blob with no kernel abstraction as part
of its payload for the foreseeable future.

> The formats are not standardized. The Confidential Computing
> Consortium should be working toward that, but it's a slow process.
> There's IETF RATS. There's in-toto.io attestations. There's Azure's
> JWT thing. There's a signed serialized protocol buffer that I've
> decided is what Google is going to produce while we figure out all the
> "right" formats to use. There will be factions and absolute gridlock
> for multiple years if we require solidifying an abstraction for the
> kernel to manage all this logic before passing a report on to user
> space.

Understood. When that standardization process completes my expectation
is that it slots into the common conveyance method and no need to go
rewrite code that already knows how to interface with Keys to get
attestation evidence.

> Now, not only are the field contents important, the certificates of
> the keys that signed the report are important. Each platform has its
> own special x509v3 extensions and key hierarchy to express what parts
> of the report should be what value if signed by this key, and in TDX's
> case there are extra endpoints that you need to query to determine if
> there's an active CVE on the associated TCB version. This is how they
> avoid adding every cpu's key to the leaf certificate's CRL.
> 
> You really shouldn't be putting attestation validation logic in the
> kernel.

It was less putting validation logic in the kernel, and more hoping for
a way to abstract some common parsing in advance of a true standard
attestation format, but point taken.

> It belongs outside of the VM entirely with the party that will
> only release access keys to the VM if it can prove it's running the
> software it claims, on the platform it claims. I think Windows puts a
> remote procedure call in their guest attestation driver to the Azure
> attestation service, and that is an anti-pattern in my mind.

I can not speak to the Windows implementation, but the Linux Keys
subsystem is there to handle Key construction that may be requested by
userspace or the kernel and may be serviced by built-in keys,
device/platform instantiated keys, or keys retrieved via an upcall to
userspace.

The observation is that existing calls to request_key() in the kernel
likely have reason to be serviced by a confidential computing key server
somewhere in the chain. So, might as well enlighten the Keys subsystem
to retrieve this information and skip round trips to userspace run
vendor specific ioctls. Make the kernel as self sufficient as possible,
and make SEV, TDX, etc. developers talk more to each other about their
needs.
Greg KH June 28, 2023, 6:46 a.m. UTC | #17
On Wed, Jun 28, 2023 at 02:16:45AM +0000, Huang, Kai wrote:
> > You really shouldn't be putting attestation validation logic in the
> > kernel.
> 
> Agreed.  The data blob for remote verification should be just some data blob to
> the kernel.  I think the kernel shouldn't even try to understand the data blob
> is for which architecture.  From the kernel's perspective, it should be just
> some data blob that the kernel gets from hardware/firmware or whatever embedded
> in the root-of-trust in the hardware after taking some input from usrspace for
> the unique identity of the blob that can be used to, e.g., mitigate replay-
> attack, etc.

Great, then use the common "data blob" api that we have in the kernel
for a very long time now, the "firwmare download" api, or the sysfs
binary file api.  Both of them just use the kernel as a pass-through and
do not touch the data at all.  No need for crazy custom ioctls and all
that mess :)

thanks,

greg k-h
Huang, Kai June 28, 2023, 8:56 a.m. UTC | #18
On Wed, 2023-06-28 at 08:46 +0200, gregkh@linuxfoundation.org wrote:
> On Wed, Jun 28, 2023 at 02:16:45AM +0000, Huang, Kai wrote:
> > > You really shouldn't be putting attestation validation logic in the
> > > kernel.
> > 
> > Agreed.  The data blob for remote verification should be just some data blob to
> > the kernel.  I think the kernel shouldn't even try to understand the data blob
> > is for which architecture.  From the kernel's perspective, it should be just
> > some data blob that the kernel gets from hardware/firmware or whatever embedded
> > in the root-of-trust in the hardware after taking some input from usrspace for
> > the unique identity of the blob that can be used to, e.g., mitigate replay-
> > attack, etc.
> 
> Great, then use the common "data blob" api that we have in the kernel
> for a very long time now, the "firwmare download" api, or the sysfs
> binary file api.  Both of them just use the kernel as a pass-through and
> do not touch the data at all.  No need for crazy custom ioctls and all
> that mess :)
> 

I guess I was talking about from "kernel shouldn't try to parse attestation data
blob" perspective.  Looking at AMD's attestation flow (I have no deep
understanding of AMD's attestation flow), the assumption of "one remote
verifiable data blob" isn't even true -- AMD can return "attestation report"
(remote verifiable) and the "certificate" to verify it separately:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snp-attestation.html

On the other hand, AFAICT Intel SGX-based attestation doesn't have a mechanism
"for the kernel" to return certificate(s), but choose to embed the
certificate(s) to the Quote itself.  I believe we can add such mechanism (e.g.,
another TDVMCALL) for the kernel to get certificate(s) separately, but AFAICT it
doesn't exist yet.

Btw, getting "remote verifiable blob" is only one step of the attestation flow.
For instance, before the blob can be generated, there must be a step to
establish the attestation key between the machine and the attestation service. 
And the flow to do this could be very different between vendors too.

That being said, while I believe all those differences can be unified in some
way, I think the question is whether it is worth to put such effort to try to
unify attestation flow for all vendors.  As Erdem Aktas mentioned earlier, "the
number of CPU vendors for confidential computing seems manageable".
Greg KH June 28, 2023, 9:02 a.m. UTC | #19
On Wed, Jun 28, 2023 at 08:56:30AM +0000, Huang, Kai wrote:
> On Wed, 2023-06-28 at 08:46 +0200, gregkh@linuxfoundation.org wrote:
> > On Wed, Jun 28, 2023 at 02:16:45AM +0000, Huang, Kai wrote:
> > > > You really shouldn't be putting attestation validation logic in the
> > > > kernel.
> > > 
> > > Agreed.  The data blob for remote verification should be just some data blob to
> > > the kernel.  I think the kernel shouldn't even try to understand the data blob
> > > is for which architecture.  From the kernel's perspective, it should be just
> > > some data blob that the kernel gets from hardware/firmware or whatever embedded
> > > in the root-of-trust in the hardware after taking some input from usrspace for
> > > the unique identity of the blob that can be used to, e.g., mitigate replay-
> > > attack, etc.
> > 
> > Great, then use the common "data blob" api that we have in the kernel
> > for a very long time now, the "firwmare download" api, or the sysfs
> > binary file api.  Both of them just use the kernel as a pass-through and
> > do not touch the data at all.  No need for crazy custom ioctls and all
> > that mess :)
> > 
> 
> I guess I was talking about from "kernel shouldn't try to parse attestation data
> blob" perspective.  Looking at AMD's attestation flow (I have no deep
> understanding of AMD's attestation flow), the assumption of "one remote
> verifiable data blob" isn't even true -- AMD can return "attestation report"
> (remote verifiable) and the "certificate" to verify it separately:
> 
> https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snp-attestation.html
> 
> On the other hand, AFAICT Intel SGX-based attestation doesn't have a mechanism
> "for the kernel" to return certificate(s), but choose to embed the
> certificate(s) to the Quote itself.  I believe we can add such mechanism (e.g.,
> another TDVMCALL) for the kernel to get certificate(s) separately, but AFAICT it
> doesn't exist yet.
> 
> Btw, getting "remote verifiable blob" is only one step of the attestation flow.
> For instance, before the blob can be generated, there must be a step to
> establish the attestation key between the machine and the attestation service. 
> And the flow to do this could be very different between vendors too.
> 
> That being said, while I believe all those differences can be unified in some
> way, I think the question is whether it is worth to put such effort to try to
> unify attestation flow for all vendors.  As Erdem Aktas mentioned earlier, "the
> number of CPU vendors for confidential computing seems manageable".

So you think that there should be a custom user/kernel api for every
single different CPU vendor?  That's not how kernel development works,
sorry.  Let's try to unify them to make both the kernel, and userspace,
sane.

And Dan is right, if this is handling keys, then the key subsystem needs
to be used here instead of custom ioctls.

thanks,

greg k-h
Huang, Kai June 28, 2023, 9:45 a.m. UTC | #20
On Wed, 2023-06-28 at 11:02 +0200, gregkh@linuxfoundation.org wrote:
> On Wed, Jun 28, 2023 at 08:56:30AM +0000, Huang, Kai wrote:
> > On Wed, 2023-06-28 at 08:46 +0200, gregkh@linuxfoundation.org wrote:
> > > On Wed, Jun 28, 2023 at 02:16:45AM +0000, Huang, Kai wrote:
> > > > > You really shouldn't be putting attestation validation logic in the
> > > > > kernel.
> > > > 
> > > > Agreed.  The data blob for remote verification should be just some data blob to
> > > > the kernel.  I think the kernel shouldn't even try to understand the data blob
> > > > is for which architecture.  From the kernel's perspective, it should be just
> > > > some data blob that the kernel gets from hardware/firmware or whatever embedded
> > > > in the root-of-trust in the hardware after taking some input from usrspace for
> > > > the unique identity of the blob that can be used to, e.g., mitigate replay-
> > > > attack, etc.
> > > 
> > > Great, then use the common "data blob" api that we have in the kernel
> > > for a very long time now, the "firwmare download" api, or the sysfs
> > > binary file api.  Both of them just use the kernel as a pass-through and
> > > do not touch the data at all.  No need for crazy custom ioctls and all
> > > that mess :)
> > > 
> > 
> > I guess I was talking about from "kernel shouldn't try to parse attestation data
> > blob" perspective.  Looking at AMD's attestation flow (I have no deep
> > understanding of AMD's attestation flow), the assumption of "one remote
> > verifiable data blob" isn't even true -- AMD can return "attestation report"
> > (remote verifiable) and the "certificate" to verify it separately:
> > 
> > https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snp-attestation.html
> > 
> > On the other hand, AFAICT Intel SGX-based attestation doesn't have a mechanism
> > "for the kernel" to return certificate(s), but choose to embed the
> > certificate(s) to the Quote itself.  I believe we can add such mechanism (e.g.,
> > another TDVMCALL) for the kernel to get certificate(s) separately, but AFAICT it
> > doesn't exist yet.
> > 
> > Btw, getting "remote verifiable blob" is only one step of the attestation flow.
> > For instance, before the blob can be generated, there must be a step to
> > establish the attestation key between the machine and the attestation service. 
> > And the flow to do this could be very different between vendors too.
> > 
> > That being said, while I believe all those differences can be unified in some
> > way, I think the question is whether it is worth to put such effort to try to
> > unify attestation flow for all vendors.  As Erdem Aktas mentioned earlier, "the
> > number of CPU vendors for confidential computing seems manageable".
> 
> So you think that there should be a custom user/kernel api for every
> single different CPU vendor?  That's not how kernel development works,
> sorry.  Let's try to unify them to make both the kernel, and userspace,
> sane.
> 
> And Dan is right, if this is handling keys, then the key subsystem needs
> to be used here instead of custom ioctls.
> 

Sure.  I have no objection to this.
Samuel Ortiz June 28, 2023, 3:24 p.m. UTC | #21
On Tue, Jun 27, 2023 at 05:11:07PM -0700, Dan Williams wrote:
> Dionna Amalie Glaze wrote:
> > On Sun, Jun 25, 2023 at 8:06 PM Sathyanarayanan Kuppuswamy
> > <sathyanarayanan.kuppuswamy@linux.intel.com> wrote:
> [..]
> > > Hi Dan,
> > >
> > > On 6/23/23 3:27 PM, Dan Williams wrote:
> > > > Dan Williams wrote:
> > > >> [ add David, Brijesh, and Atish]
> > > >>
> > > >> Kuppuswamy Sathyanarayanan wrote:
> > > >>> In TDX guest, the second stage of the attestation process is Quote
> > > >>> generation. This process is required to convert the locally generated
> > > >>> TDREPORT into a remotely verifiable Quote. It involves sending the
> > > >>> TDREPORT data to a Quoting Enclave (QE) which will verify the
> > > >>> integrity of the TDREPORT and sign it with an attestation key.
> > > >>>
> > > >>> Intel's TDX attestation driver exposes TDX_CMD_GET_QUOTE IOCTL to
> > > >>> allow the user agent to get the TD Quote.
> > > >>>
> > > >>> Add a kernel selftest module to verify the Quote generation feature.
> > > >>>
> > > >>> TD Quote generation involves following steps:
> > > >>>
> > > >>> * Get the TDREPORT data using TDX_CMD_GET_REPORT IOCTL.
> > > >>> * Embed the TDREPORT data in quote buffer and request for quote
> > > >>>   generation via TDX_CMD_GET_QUOTE IOCTL request.
> > > >>> * Upon completion of the GetQuote request, check for non zero value
> > > >>>   in the status field of Quote header to make sure the generated
> > > >>>   quote is valid.
> > > >>
> > > >> What this cover letter does not say is that this is adding another
> > > >> instance of the similar pattern as SNP_GET_REPORT.
> > > >>
> > > >> Linux is best served when multiple vendors trying to do similar
> > > >> operations are brought together behind a common ABI. We see this in the
> > > >> history of wrangling SCSI vendors behind common interfaces. Now multiple
> > > >> confidential computing vendors trying to develop similar flows with
> > > >> differentiated formats where that differentiation need not leak over the
> > > >> ABI boundary.
> > > > [..]
> > > >
> > > > Below is a rough mock up of this approach to demonstrate the direction.
> > > > Again, the goal is to define an ABI that can support any vendor's
> > > > arch-specific attestation method and key provisioning flows without
> > > > leaking vendor-specific details, or confidential material over the
> > > > user/kernel ABI.
> > >
> > > Thanks for working on this mock code and helping out. It gives me the
> > > general idea about your proposal.
> > >
> > > >
> > > > The observation is that there are a sufficient number of attestation
> > > > flows available to review where Linux can define a superset ABI to
> > > > contain them all. The other observation is that the implementations have
> > > > features that may cross-polinate over time. For example the SEV
> > > > privelege level consideration ("vmpl"), and the TDX RTMR (think TPM
> > > > PCRs) mechanisms address generic Confidential Computing use cases.
> > >
> > >
> > > I agree with your point about VMPL and RTMR feature cases. This observation
> > > is valid for AMD SEV and TDX attestation flows. But I am not sure whether
> > > it will hold true for other vendor implementations. Our sample set is not
> > > good enough to make this conclusion. The reason for my concern is, if you
> > > check the ABI interface used in the S390 arch attestation driver
> > > (drivers/s390/char/uvdevice.c), you would notice that there is a significant
> > > difference between the ABI used in that driver and SEV/TDX drivers. The S390
> > > driver attestation request appears to accept two data blobs as input, as well
> > > as a variety of vendor-specific header configurations.
> > >
> > > Maybe the s390 attestation model is a special case, but, I think we consider
> > > this issue. Since we don't have a common spec, there is chance that any
> > > superset ABI we define now may not meet future vendor requirements. One way to
> > > handle it to leave enough space in the generic ABI to handle future vendor
> > > requirements.
> > >
> > > I think it would be better if other vendors (like ARM or RISC) can comment and
> > > confirm whether this proposal meets their demands.
> > >
> > 
> > The VMPL-based separation that will house the supervisor module known
> > as SVSM can have protocols that implement a TPM command interface, or
> > an RTMR-extension interface, and will also need to have an
> > SVSM-specific protocol attestation report format to keep the secure
> > chain of custody apparent. We'd have different formats and protocols
> > in the kernel, at least, to speak to each technology. 
> 
> That's where I hope the line can be drawn, i.e. that all of this vendor
> differentiation really only matters inside the kernel in the end.

Looking at your keys subsystem based PoC (thanks for putting it
together), I understand that the intention is to pass an attestation
evidence request as a payload to the kernel, in a abstract way.
i.e. the void *data in:
static int guest_attest_request_key(struct key *authkey, void *data)

And then passiing that down to vendor specific handlers
(tdx_request_attest in your PoC) for it to behave as a key server for
that attestation evidence request. The vendor magic of transforming an
attestation request into an actual attestation evidence (typically
signed with platform derived keys) is stuffed into that handler. The
format, content and protection of both the attestation evidence request
and the evidence itself is left for the guest kernel handler (e.g.
tdx_request_attest) to handle.

Is that a fair description of your proposal?

If it is, then it makes sense to me, and could serve as a generic
abstraction for confidential computing guest attestation evidence
requests. I think it could support the TDX, SEV and also the RISC-V
(aka CoVE) guest attestation request evidence flow.

> > I'm not sure it's worth the trouble of papering over all the... 3-4
> > technologies with similar but still weirdly different formats and ways
> > of doing things with an abstracted attestation ABI, especially since
> > the output all has to be interpreted in an architecture-specific way
> > anyway.
> 
> This is where I need help. Can you identify where the following
> assertion falls over:
> 
> "The minimum viable key-server is one that can generically validate a
> blob with an ECDSA signature".
> 
> I.e. the fact that SEV and TDX send different length blobs is less
> important than validating that signature.

I'm not sure which signature we're talking about here.
The final attestation evidence (The blob the guest workload will send to
a remote attestation service) is signed with a platform derived key,
typically rooted into a manufacturer's CA. It is then up to the *remote*
attestation service to authenticate and validate the evidence signature.
Then it can go through the actual attestation verification flow
(comparison against reference values, policy checks, etc). The latter
should be none of the guest kernel's business, which is what your
proposal seems to be heading to.


> If it is always the case that specific fields in the blob need to be
> decoded then yes, that weakens the assertion. 

Your vendor specific key handler may have to decode the passed void
pointer into a vendor specific structure before sending it down to the
TSM/ASP/etc, and that's fine imho.

Cheers,
Samuel.
Samuel Ortiz June 28, 2023, 3:31 p.m. UTC | #22
On Tue, Jun 27, 2023 at 06:36:07PM -0700, Dionna Amalie Glaze wrote:
> On Tue, Jun 27, 2023 at 5:13 PM Dan Williams <dan.j.williams@intel.com> wrote:
> > [..]
> > >
> > > The VMPL-based separation that will house the supervisor module known
> > > as SVSM can have protocols that implement a TPM command interface, or
> > > an RTMR-extension interface, and will also need to have an
> > > SVSM-specific protocol attestation report format to keep the secure
> > > chain of custody apparent. We'd have different formats and protocols
> > > in the kernel, at least, to speak to each technology.
> >
> > That's where I hope the line can be drawn, i.e. that all of this vendor
> > differentiation really only matters inside the kernel in the end.
> >
> > > I'm not sure it's worth the trouble of papering over all the... 3-4
> > > technologies with similar but still weirdly different formats and ways
> > > of doing things with an abstracted attestation ABI, especially since
> > > the output all has to be interpreted in an architecture-specific way
> > > anyway.
> >
> > This is where I need help. Can you identify where the following
> > assertion falls over:
> >
> > "The minimum viable key-server is one that can generically validate a
> > blob with an ECDSA signature".
> >
> > I.e. the fact that SEV and TDX send different length blobs is less
> > important than validating that signature.
> >
> > If it is always the case that specific fields in the blob need to be
> > decoded then yes, that weakens the assertion. However, maybe that means
> > that kernel code parses the blob and conveys that parsed info along with
> > vendor attestation payload all signed by a Linux key. I.e. still allow
> > for a unified output format + signed vendor blob and provide a path to
> > keep all the vendor specific handling internal to the kernel.
> >
> 
> All the specific fields of the blob have to be decoded and subjected
> to an acceptance policy. That policy will most always be different
> across different platforms and VM owners. I wrote all of
> github.com/google/go-sev-guest, including the verification and
> validation logic, and it's going to get more complicated, and the
> sources of the data that provide validators with notions of what
> values can be trusted will be varied. The formats are not
> standardized. The Confidential Computing Consortium should be working
> toward that, but it's a slow process. There's IETF RATS. There's
> in-toto.io attestations. There's Azure's JWT thing. There's a signed
> serialized protocol buffer that I've decided is what Google is going
> to produce while we figure out all the "right" formats to use. There
> will be factions and absolute gridlock for multiple years if we
> require solidifying an abstraction for the kernel to manage all this
> logic before passing a report on to user space.

I agree with most of the above, but all that nightmate^Wcomplexity is
handled on the remote attestation side. If I understand the current
discussion, it's about how to abstract a guest attestation evidence
generation request in a vendor agnostic way. And I think what's proposed
here is simply to pass a binary payload (The evidence request from the
guest userspace) to the kernel key subsystem, hook it into vendor
specific handler and get userspace an attestation evidence (a platform
key signed blob) back to the guest app. The guest app can then give that
to an attestation service, and that's when all the above described
complexity takes place. Am I missing something?

> Now, not only are the field contents important, the certificates of
> the keys that signed the report are important. Each platform has its
> own special x509v3 extensions and key hierarchy to express what parts
> of the report should be what value if signed by this key, and in TDX's
> case there are extra endpoints that you need to query to determine if
> there's an active CVE on the associated TCB version. This is how they
> avoid adding every cpu's key to the leaf certificate's CRL.
> 
> You really shouldn't be putting attestation validation logic in the
> kernel. 

AFAIU, that's not part of the proposal/PoC/mockup. It's all about
funneling an attestation evidence request down to the TSM/PSP/firmware
for it to generate an actually verifiable attestation evidence.

Cheers,
Samuel.
Samuel Ortiz June 28, 2023, 3:41 p.m. UTC | #23
On Mon, Jun 26, 2023 at 05:39:12PM -0700, Sathyanarayanan Kuppuswamy wrote:
> +Atish
> 
> Atish, any comments on this topic from RISC-v?

The CoVE (RISC-V confidential computing specification) would benefit
from the proposed abstract API. Similar to at least both TDX and SEV,
the CoVE attestation evidence generation proposed (The spec is not
ratified yet) interface [1] basically takes some binary blobs in (a TVM
public key and an attestation challenge blob) and requests the TSM to
generate an attesation evidence for the caller. The TSM will generate
such evidence from all static and runtime measurements and sign it with
its DICE derived key. Attestation lingo set aside, the pattern here is
similar to SEV and TDX. Having a common API for a generic attestation
evidence generation interface would avoid having to add yet another
ioctl based interface specific to CoVE.

Another interface we could think about commonizing is the measurement
extension one. I think both TDX and CoVE allow for a guest to
dynamically extend its measurements (to dedicated, runtime PCRs).

Cheers,
Samuel.

[1] https://github.com/riscv-non-isa/riscv-ap-tee/blob/main/specification/sbi_cove.adoc#function-cove-guest-get-evidence-fid-8

> On 6/26/23 11:57 AM, Dionna Amalie Glaze wrote:
> > On Sun, Jun 25, 2023 at 8:06 PM Sathyanarayanan Kuppuswamy
> > <sathyanarayanan.kuppuswamy@linux.intel.com> wrote:
> >>
> >> Hi Dan,
> >>
> >> On 6/23/23 3:27 PM, Dan Williams wrote:
> >>> Dan Williams wrote:
> >>>> [ add David, Brijesh, and Atish]
> >>>>
> >>>> Kuppuswamy Sathyanarayanan wrote:
> >>>>> In TDX guest, the second stage of the attestation process is Quote
> >>>>> generation. This process is required to convert the locally generated
> >>>>> TDREPORT into a remotely verifiable Quote. It involves sending the
> >>>>> TDREPORT data to a Quoting Enclave (QE) which will verify the
> >>>>> integrity of the TDREPORT and sign it with an attestation key.
> >>>>>
> >>>>> Intel's TDX attestation driver exposes TDX_CMD_GET_QUOTE IOCTL to
> >>>>> allow the user agent to get the TD Quote.
> >>>>>
> >>>>> Add a kernel selftest module to verify the Quote generation feature.
> >>>>>
> >>>>> TD Quote generation involves following steps:
> >>>>>
> >>>>> * Get the TDREPORT data using TDX_CMD_GET_REPORT IOCTL.
> >>>>> * Embed the TDREPORT data in quote buffer and request for quote
> >>>>>   generation via TDX_CMD_GET_QUOTE IOCTL request.
> >>>>> * Upon completion of the GetQuote request, check for non zero value
> >>>>>   in the status field of Quote header to make sure the generated
> >>>>>   quote is valid.
> >>>>
> >>>> What this cover letter does not say is that this is adding another
> >>>> instance of the similar pattern as SNP_GET_REPORT.
> >>>>
> >>>> Linux is best served when multiple vendors trying to do similar
> >>>> operations are brought together behind a common ABI. We see this in the
> >>>> history of wrangling SCSI vendors behind common interfaces. Now multiple
> >>>> confidential computing vendors trying to develop similar flows with
> >>>> differentiated formats where that differentiation need not leak over the
> >>>> ABI boundary.
> >>> [..]
> >>>
> >>> Below is a rough mock up of this approach to demonstrate the direction.
> >>> Again, the goal is to define an ABI that can support any vendor's
> >>> arch-specific attestation method and key provisioning flows without
> >>> leaking vendor-specific details, or confidential material over the
> >>> user/kernel ABI.
> >>
> >> Thanks for working on this mock code and helping out. It gives me the
> >> general idea about your proposal.
> >>
> >>>
> >>> The observation is that there are a sufficient number of attestation
> >>> flows available to review where Linux can define a superset ABI to
> >>> contain them all. The other observation is that the implementations have
> >>> features that may cross-polinate over time. For example the SEV
> >>> privelege level consideration ("vmpl"), and the TDX RTMR (think TPM
> >>> PCRs) mechanisms address generic Confidential Computing use cases.
> >>
> >>
> >> I agree with your point about VMPL and RTMR feature cases. This observation
> >> is valid for AMD SEV and TDX attestation flows. But I am not sure whether
> >> it will hold true for other vendor implementations. Our sample set is not
> >> good enough to make this conclusion. The reason for my concern is, if you
> >> check the ABI interface used in the S390 arch attestation driver
> >> (drivers/s390/char/uvdevice.c), you would notice that there is a significant
> >> difference between the ABI used in that driver and SEV/TDX drivers. The S390
> >> driver attestation request appears to accept two data blobs as input, as well
> >> as a variety of vendor-specific header configurations.
> >>
> >> Maybe the s390 attestation model is a special case, but, I think we consider
> >> this issue. Since we don't have a common spec, there is chance that any
> >> superset ABI we define now may not meet future vendor requirements. One way to
> >> handle it to leave enough space in the generic ABI to handle future vendor
> >> requirements.
> >>
> >> I think it would be better if other vendors (like ARM or RISC) can comment and
> >> confirm whether this proposal meets their demands.
> >>
> > 
> > The VMPL-based separation that will house the supervisor module known
> > as SVSM can have protocols that implement a TPM command interface, or
> > an RTMR-extension interface, and will also need to have an
> > SVSM-specific protocol attestation report format to keep the secure
> > chain of custody apparent. We'd have different formats and protocols
> > in the kernel, at least, to speak to each technology. I'm not sure
> > it's worth the trouble of papering over all the... 3-4 technologies
> > with similar but still weirdly different formats and ways of doing
> > things with an abstracted attestation ABI, especially since the output
> > all has to be interpreted in an architecture-specific way anyway.
> > 
> > ARM's Confidential Computing Realm Management Extensions (RME) seems
> > to be going along the lines of a runtime measurement register model
> > with their hardware enforced security. The number of registers isn't
> > prescribed in the spec.
> > 
> > +Joey Gouly +linux-coco@lists.linux.dev as far as RME is concerned, do
> > you know who would be best to weigh in on this discussion of a unified
> > attestation model?
> 
> 
> > 
> >>>
> >>> Vendor specific ioctls for all of this feels like surrender when Linux
> >>> already has the keys subsystem which has plenty of degrees of freedom
> >>> for tracking blobs with signatures and using those blobs to instantiate
> >>> other blobs. It already serves as the ABI wrapping various TPM
> >>> implementations and marshaling keys for storage encryption and other use
> >>> cases that intersect Confidential Computing.
> >>>
> >>> The benefit of deprecating vendor-specific abstraction layers in
> >>> userspace is secondary. The primary benefit is collaboration. It enables
> >>> kernel developers from various architectures to collaborate on common
> >>> infrastructure. If, referring back to my previous example, SEV adopts an
> >>> RTMR-like mechanism and TDX adopts a vmpl-like mechanism it would be
> >>> unfortunate if those efforts were siloed, duplicated, and needlessly
> >>> differentiated to userspace. So while there are arguably a manageable
> >>> number of basic arch attestation methods the planned expansion of those
> >>> to build incremental functionality is where I believe we, as a
> >>> community, will be glad that we invested in a "Linux format" for all of
> >>> this.
> >>>
> >>> An example, to show what the strawman patch below enables: (req_key is
> >>> the sample program from "man 2 request_key")
> >>>
> >>> # ./req_key guest_attest guest_attest:0:0-$desc $(cat user_data | base64)
> >>> Key ID is 10e2f3a7
> >>> # keyctl pipe 0x10e2f3a7 | hexdump -C
> >>> 00000000  54 44 58 20 47 65 6e 65  72 61 74 65 64 20 51 75  |TDX Generated Qu|
> >>> 00000010  6f 74 65 00 00 00 00 00  00 00 00 00 00 00 00 00  |ote.............|
> >>> 00000020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> >>> *
> >>> 00004000
> >>>
> >>> This is the kernel instantiating a TDX Quote without the TDREPORT
> >>> implementation detail ever leaving the kernel. Now, this is only the
> >>
> >> IIUC, the idea here is to cache the quote data and return it to the user whenever
> >> possible, right? If yes, I think such optimization may not be very useful for our
> >> case. AFAIK, the quote data will change whenever there is a change in the guest
> >> measurement data. Since the validity of the generated quote will not be long,
> >> and the frequency of quote generation requests is expected to be less, we may not
> >> get much benefit from caching the quote data. I think we can keep this logic simple
> >> by directly retrieving the quote data from the quoting enclave whenever there is a
> >> request from the user.
> >>
> >>> top-half of what is needed. The missing bottom half takes that material
> >>> and uses it to instantiate derived key material like the storage
> >>> decryption key internal to the kernel. See "The Process" in
> >>> Documentation/security/keys/request-key.rst for how the Keys subsystem
> >>> handles the "keys for keys" use case.
> >>
> >> This is only useful for key-server use case, right? Attestation can also be
> >> used for use cases like pattern matching or uploading some secure data, etc.
> >> Since key-server is not the only use case, does it make sense to suppport
> >> this derived key feature?
> >>
> >>>
> >>> ---
> >>> diff --git a/drivers/virt/Kconfig b/drivers/virt/Kconfig
> >>> index f79ab13a5c28..0f775847028e 100644
> >>> --- a/drivers/virt/Kconfig
> >>> +++ b/drivers/virt/Kconfig
> >>> @@ -54,4 +54,8 @@ source "drivers/virt/coco/sev-guest/Kconfig"
> >>>
> >>>  source "drivers/virt/coco/tdx-guest/Kconfig"
> >>>
> >>> +config GUEST_ATTEST
> >>> +     tristate
> >>> +     select KEYS
> >>> +
> >>>  endif
> >>> diff --git a/drivers/virt/Makefile b/drivers/virt/Makefile
> >>> index e9aa6fc96fab..66f6b838f8f4 100644
> >>> --- a/drivers/virt/Makefile
> >>> +++ b/drivers/virt/Makefile
> >>> @@ -12,3 +12,4 @@ obj-$(CONFIG_ACRN_HSM)              += acrn/
> >>>  obj-$(CONFIG_EFI_SECRET)     += coco/efi_secret/
> >>>  obj-$(CONFIG_SEV_GUEST)              += coco/sev-guest/
> >>>  obj-$(CONFIG_INTEL_TDX_GUEST)        += coco/tdx-guest/
> >>> +obj-$(CONFIG_GUEST_ATTEST)   += coco/guest-attest/
> >>> diff --git a/drivers/virt/coco/guest-attest/Makefile b/drivers/virt/coco/guest-attest/Makefile
> >>> new file mode 100644
> >>> index 000000000000..5581c5a27588
> >>> --- /dev/null
> >>> +++ b/drivers/virt/coco/guest-attest/Makefile
> >>> @@ -0,0 +1,2 @@
> >>> +obj-$(CONFIG_GUEST_ATTEST) += guest_attest.o
> >>> +guest_attest-y := key.o
> >>> diff --git a/drivers/virt/coco/guest-attest/key.c b/drivers/virt/coco/guest-attest/key.c
> >>> new file mode 100644
> >>> index 000000000000..2a494b6dd7a7
> >>> --- /dev/null
> >>> +++ b/drivers/virt/coco/guest-attest/key.c
> >>> @@ -0,0 +1,159 @@
> >>> +// SPDX-License-Identifier: GPL-2.0-only
> >>> +/* Copyright(c) 2023 Intel Corporation. All rights reserved. */
> >>> +
> >>> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> >>> +#include <linux/seq_file.h>
> >>> +#include <linux/key-type.h>
> >>> +#include <linux/module.h>
> >>> +#include <linux/base64.h>
> >>> +
> >>> +#include <keys/request_key_auth-type.h>
> >>> +#include <keys/user-type.h>
> >>> +
> >>> +#include "guest-attest.h"
> >>
> >> Can you share you guest-attest.h?
> >>
> >>> +
> >>> +static LIST_HEAD(guest_attest_list);
> >>> +static DECLARE_RWSEM(guest_attest_rwsem);
> >>> +
> >>> +static struct guest_attest_ops *fetch_ops(void)
> >>> +{
> >>> +     return list_first_entry_or_null(&guest_attest_list,
> >>> +                                     struct guest_attest_ops, list);
> >>> +}
> >>> +
> >>> +static struct guest_attest_ops *get_ops(void)
> >>> +{
> >>> +     down_read(&guest_attest_rwsem);
> >>> +     return fetch_ops();
> >>> +}
> >>> +
> >>> +static void put_ops(void)
> >>> +{
> >>> +     up_read(&guest_attest_rwsem);
> >>> +}
> >>> +
> >>> +int register_guest_attest_ops(struct guest_attest_ops *ops)
> >>> +{
> >>> +     struct guest_attest_ops *conflict;
> >>> +     int rc;
> >>> +
> >>> +     down_write(&guest_attest_rwsem);
> >>> +     conflict = fetch_ops();
> >>> +     if (conflict) {
> >>> +             pr_err("\"%s\" ops already registered\n", conflict->name);
> >>> +             rc = -EEXIST;
> >>> +             goto out;
> >>> +     }
> >>> +     list_add(&ops->list, &guest_attest_list);
> >>> +     try_module_get(ops->module);
> >>> +     rc = 0;
> >>> +out:
> >>> +     up_write(&guest_attest_rwsem);
> >>> +     return rc;
> >>> +}
> >>> +EXPORT_SYMBOL_GPL(register_guest_attest_ops);
> >>> +
> >>> +void unregister_guest_attest_ops(struct guest_attest_ops *ops)
> >>> +{
> >>> +     down_write(&guest_attest_rwsem);
> >>> +     list_del(&ops->list);
> >>> +     up_write(&guest_attest_rwsem);
> >>> +     module_put(ops->module);
> >>> +}
> >>> +EXPORT_SYMBOL_GPL(unregister_guest_attest_ops);
> >>> +
> >>> +static int __guest_attest_request_key(struct key *key, int level,
> >>> +                                   struct key *dest_keyring,
> >>> +                                   const char *callout_info, int callout_len,
> >>> +                                   struct key *authkey)
> >>> +{
> >>> +     struct guest_attest_ops *ops;
> >>> +     void *payload = NULL;
> >>> +     int rc, payload_len;
> >>> +
> >>> +     ops = get_ops();
> >>> +     if (!ops)
> >>> +             return -ENOKEY;
> >>> +
> >>> +     payload = kzalloc(max(GUEST_ATTEST_DATALEN, callout_len), GFP_KERNEL);
> >>> +     if (!payload) {
> >>> +             rc = -ENOMEM;
> >>> +             goto out;
> >>> +     }
> >>
> >> Is the idea to get the values like vmpl part of the payload?
> >>
> >>> +
> >>> +     payload_len = base64_decode(callout_info, callout_len, payload);
> >>> +     if (payload_len < 0 || payload_len > GUEST_ATTEST_DATALEN) {
> >>> +             rc = -EINVAL;
> >>> +             goto out;
> >>> +     }
> >>> +
> >>> +     rc = ops->request_attest(key, level, dest_keyring, payload, payload_len,
> >>> +                              authkey);
> >>> +out:
> >>> +     kfree(payload);
> >>> +     put_ops();
> >>> +     return rc;
> >>> +}
> >>> +
> >>> +static int guest_attest_request_key(struct key *authkey, void *data)
> >>> +{
> >>> +     struct request_key_auth *rka = get_request_key_auth(authkey);
> >>> +     struct key *key = rka->target_key;
> >>> +     unsigned long long id;
> >>> +     int rc, level;
> >>> +
> >>> +     pr_debug("desc: %s op: %s callout: %s\n", key->description, rka->op,
> >>> +              rka->callout_info ? (char *)rka->callout_info : "\"none\"");
> >>> +
> >>> +     if (sscanf(key->description, "guest_attest:%d:%llu", &level, &id) != 2)
> >>> +             return -EINVAL;
> >>> +
> >>
> >> Can you explain some details about the id and level? It is not very clear why
> >> we need it.
> >>
> >>> +     if (!rka->callout_info) {
> >>> +             rc = -EINVAL;
> >>> +             goto out;
> >>> +     }
> >>> +
> >>> +     rc = __guest_attest_request_key(key, level, rka->dest_keyring,
> >>> +                                     rka->callout_info, rka->callout_len,
> >>> +                                     authkey);
> >>> +out:
> >>> +     complete_request_key(authkey, rc);
> >>> +     return rc;
> >>> +}
> >>> +
> >>> +static int guest_attest_vet_description(const char *desc)
> >>> +{
> >>> +     unsigned long long id;
> >>> +     int level;
> >>> +
> >>> +     if (sscanf(desc, "guest_attest:%d:%llu", &level, &id) != 2)
> >>> +             return -EINVAL;
> >>> +     return 0;
> >>> +}
> >>> +
> >>> +static struct key_type key_type_guest_attest = {
> >>> +     .name = "guest_attest",
> >>> +     .preparse = user_preparse,
> >>> +     .free_preparse = user_free_preparse,
> >>> +     .instantiate = generic_key_instantiate,
> >>> +     .revoke = user_revoke,
> >>> +     .destroy = user_destroy,
> >>> +     .describe = user_describe,
> >>> +     .read = user_read,
> >>> +     .vet_description = guest_attest_vet_description,
> >>> +     .request_key = guest_attest_request_key,
> >>> +};
> >>> +
> >>> +static int __init guest_attest_init(void)
> >>> +{
> >>> +     return register_key_type(&key_type_guest_attest);
> >>> +}
> >>> +
> >>> +static void __exit guest_attest_exit(void)
> >>> +{
> >>> +     unregister_key_type(&key_type_guest_attest);
> >>> +}
> >>> +
> >>> +module_init(guest_attest_init);
> >>> +module_exit(guest_attest_exit);
> >>> +MODULE_LICENSE("GPL v2");
> >>> diff --git a/drivers/virt/coco/tdx-guest/Kconfig b/drivers/virt/coco/tdx-guest/Kconfig
> >>> index 14246fc2fb02..9a1ec85369fe 100644
> >>> --- a/drivers/virt/coco/tdx-guest/Kconfig
> >>> +++ b/drivers/virt/coco/tdx-guest/Kconfig
> >>> @@ -1,6 +1,7 @@
> >>>  config TDX_GUEST_DRIVER
> >>>       tristate "TDX Guest driver"
> >>>       depends on INTEL_TDX_GUEST
> >>> +     select GUEST_ATTEST
> >>>       help
> >>>         The driver provides userspace interface to communicate with
> >>>         the TDX module to request the TDX guest details like attestation
> >>> diff --git a/drivers/virt/coco/tdx-guest/tdx-guest.c b/drivers/virt/coco/tdx-guest/tdx-guest.c
> >>> index 388491fa63a1..65b5aab284d9 100644
> >>> --- a/drivers/virt/coco/tdx-guest/tdx-guest.c
> >>> +++ b/drivers/virt/coco/tdx-guest/tdx-guest.c
> >>> @@ -13,11 +13,13 @@
> >>>  #include <linux/string.h>
> >>>  #include <linux/uaccess.h>
> >>>  #include <linux/set_memory.h>
> >>> +#include <linux/key-type.h>
> >>>
> >>>  #include <uapi/linux/tdx-guest.h>
> >>>
> >>>  #include <asm/cpu_device_id.h>
> >>>  #include <asm/tdx.h>
> >>> +#include "../guest-attest/guest-attest.h"
> >>>
> >>>  /*
> >>>   * Intel's SGX QE implementation generally uses Quote size less
> >>> @@ -229,6 +231,62 @@ static const struct x86_cpu_id tdx_guest_ids[] = {
> >>>  };
> >>>  MODULE_DEVICE_TABLE(x86cpu, tdx_guest_ids);
> >>>
> >>> +static int tdx_request_attest(struct key *key, int level,
> >>> +                           struct key *dest_keyring, void *payload,
> >>> +                           int payload_len, struct key *authkey)
> >>> +{
> >>> +     u8 *tdreport;
> >>> +     long ret;
> >>> +
> >>> +     tdreport = kzalloc(TDX_REPORT_LEN, GFP_KERNEL);
> >>> +     if (!tdreport)
> >>> +             return -ENOMEM;
> >>> +
> >>> +     /* Generate TDREPORT0 using "TDG.MR.REPORT" TDCALL */
> >>> +     ret = tdx_mcall_get_report0(payload, tdreport);
> >>> +     if (ret)
> >>> +             goto out;
> >>> +
> >>> +     mutex_lock(&quote_lock);
> >>> +
> >>> +     memset(qentry->buf, 0, qentry->buf_len);
> >>> +     reinit_completion(&qentry->compl);
> >>> +     qentry->valid = true;
> >>> +
> >>> +     /* Submit GetQuote Request using GetQuote hyperetall */
> >>> +     ret = tdx_hcall_get_quote(qentry->buf, qentry->buf_len);
> >>> +     if (ret) {
> >>> +             pr_err("GetQuote hyperetall failed, status:%lx\n", ret);
> >>> +             ret = -EIO;
> >>> +             goto quote_failed;
> >>> +     }
> >>> +
> >>> +     /*
> >>> +      * Although the GHCI specification does not state explicitly that
> >>> +      * the VMM must not wait indefinitely for the Quote request to be
> >>> +      * completed, a sane VMM should always notify the guest after a
> >>> +      * certain time, regardless of whether the Quote generation is
> >>> +      * successful or not.  For now just assume the VMM will do so.
> >>> +      */
> >>> +     wait_for_completion(&qentry->compl);
> >>> +
> >>> +     ret = key_instantiate_and_link(key, qentry->buf, qentry->buf_len,
> >>> +                                    dest_keyring, authkey);
> >>> +
> >>> +quote_failed:
> >>> +     qentry->valid = false;
> >>> +     mutex_unlock(&quote_lock);
> >>> +out:
> >>> +     kfree(tdreport);
> >>> +     return ret;
> >>> +}
> >>> +
> >>> +static struct guest_attest_ops tdx_attest_ops = {
> >>> +     .name = KBUILD_MODNAME,
> >>> +     .module = THIS_MODULE,
> >>> +     .request_attest = tdx_request_attest,
> >>> +};
> >>> +
> >>>  static int __init tdx_guest_init(void)
> >>>  {
> >>>       int ret;
> >>> @@ -251,8 +309,14 @@ static int __init tdx_guest_init(void)
> >>>       if (ret)
> >>>               goto free_quote;
> >>>
> >>> +     ret = register_guest_attest_ops(&tdx_attest_ops);
> >>> +     if (ret)
> >>> +             goto free_irq;
> >>> +
> >>>       return 0;
> >>>
> >>> +free_irq:
> >>> +     tdx_unregister_event_irq_cb(quote_cb_handler, qentry);
> >>>  free_quote:
> >>>       free_quote_entry(qentry);
> >>>  free_misc:
> >>> @@ -264,6 +328,7 @@ module_init(tdx_guest_init);
> >>>
> >>>  static void __exit tdx_guest_exit(void)
> >>>  {
> >>> +     unregister_guest_attest_ops(&tdx_attest_ops);
> >>>       tdx_unregister_event_irq_cb(quote_cb_handler, qentry);
> >>>       free_quote_entry(qentry);
> >>>       misc_deregister(&tdx_misc_dev);
> >>
> >> --
> >> Sathyanarayanan Kuppuswamy
> >> Linux Kernel Developer
> > 
> > 
> > 
> > --
> > -Dionna Glaze, PhD (she/her)
> 
> -- 
> Sathyanarayanan Kuppuswamy
> Linux Kernel Developer
>
Kuppuswamy Sathyanarayanan June 28, 2023, 3:55 p.m. UTC | #24
Hi Samuel,

On 6/28/23 8:41 AM, Samuel Ortiz wrote:
> On Mon, Jun 26, 2023 at 05:39:12PM -0700, Sathyanarayanan Kuppuswamy wrote:
>> +Atish
>>
>> Atish, any comments on this topic from RISC-v?
> 
> The CoVE (RISC-V confidential computing specification) would benefit
> from the proposed abstract API. Similar to at least both TDX and SEV,
> the CoVE attestation evidence generation proposed (The spec is not
> ratified yet) interface [1] basically takes some binary blobs in (a TVM
> public key and an attestation challenge blob) and requests the TSM to
> generate an attesation evidence for the caller. The TSM will generate
> such evidence from all static and runtime measurements and sign it with
> its DICE derived key. Attestation lingo set aside, the pattern here is
> similar to SEV and TDX. Having a common API for a generic attestation
> evidence generation interface would avoid having to add yet another
> ioctl based interface specific to CoVE.

Great, this gives us more confidence about the generic ABI design.

> 
> Another interface we could think about commonizing is the measurement
> extension one. I think both TDX and CoVE allow for a guest to
> dynamically extend its measurements (to dedicated, runtime PCRs).
> 

Yes. I think most vendors will need similar support. We are planning
to add a generic ABI for this as well.


> Cheers,
> Samuel.
> 
> [1] https://github.com/riscv-non-isa/riscv-ap-tee/blob/main/specification/sbi_cove.adoc#function-cove-guest-get-evidence-fid-8
> 
>> On 6/26/23 11:57 AM, Dionna Amalie Glaze wrote:
>>> On Sun, Jun 25, 2023 at 8:06 PM Sathyanarayanan Kuppuswamy
>>> <sathyanarayanan.kuppuswamy@linux.intel.com> wrote:
>>>>
>>>> Hi Dan,
>>>>
>>>> On 6/23/23 3:27 PM, Dan Williams wrote:
>>>>> Dan Williams wrote:
>>>>>> [ add David, Brijesh, and Atish]
>>>>>>
>>>>>> Kuppuswamy Sathyanarayanan wrote:
>>>>>>> In TDX guest, the second stage of the attestation process is Quote
>>>>>>> generation. This process is required to convert the locally generated
>>>>>>> TDREPORT into a remotely verifiable Quote. It involves sending the
>>>>>>> TDREPORT data to a Quoting Enclave (QE) which will verify the
>>>>>>> integrity of the TDREPORT and sign it with an attestation key.
>>>>>>>
>>>>>>> Intel's TDX attestation driver exposes TDX_CMD_GET_QUOTE IOCTL to
>>>>>>> allow the user agent to get the TD Quote.
>>>>>>>
>>>>>>> Add a kernel selftest module to verify the Quote generation feature.
>>>>>>>
>>>>>>> TD Quote generation involves following steps:
>>>>>>>
>>>>>>> * Get the TDREPORT data using TDX_CMD_GET_REPORT IOCTL.
>>>>>>> * Embed the TDREPORT data in quote buffer and request for quote
>>>>>>>   generation via TDX_CMD_GET_QUOTE IOCTL request.
>>>>>>> * Upon completion of the GetQuote request, check for non zero value
>>>>>>>   in the status field of Quote header to make sure the generated
>>>>>>>   quote is valid.
>>>>>>
>>>>>> What this cover letter does not say is that this is adding another
>>>>>> instance of the similar pattern as SNP_GET_REPORT.
>>>>>>
>>>>>> Linux is best served when multiple vendors trying to do similar
>>>>>> operations are brought together behind a common ABI. We see this in the
>>>>>> history of wrangling SCSI vendors behind common interfaces. Now multiple
>>>>>> confidential computing vendors trying to develop similar flows with
>>>>>> differentiated formats where that differentiation need not leak over the
>>>>>> ABI boundary.
>>>>> [..]
>>>>>
>>>>> Below is a rough mock up of this approach to demonstrate the direction.
>>>>> Again, the goal is to define an ABI that can support any vendor's
>>>>> arch-specific attestation method and key provisioning flows without
>>>>> leaking vendor-specific details, or confidential material over the
>>>>> user/kernel ABI.
>>>>
>>>> Thanks for working on this mock code and helping out. It gives me the
>>>> general idea about your proposal.
>>>>
>>>>>
>>>>> The observation is that there are a sufficient number of attestation
>>>>> flows available to review where Linux can define a superset ABI to
>>>>> contain them all. The other observation is that the implementations have
>>>>> features that may cross-polinate over time. For example the SEV
>>>>> privelege level consideration ("vmpl"), and the TDX RTMR (think TPM
>>>>> PCRs) mechanisms address generic Confidential Computing use cases.
>>>>
>>>>
>>>> I agree with your point about VMPL and RTMR feature cases. This observation
>>>> is valid for AMD SEV and TDX attestation flows. But I am not sure whether
>>>> it will hold true for other vendor implementations. Our sample set is not
>>>> good enough to make this conclusion. The reason for my concern is, if you
>>>> check the ABI interface used in the S390 arch attestation driver
>>>> (drivers/s390/char/uvdevice.c), you would notice that there is a significant
>>>> difference between the ABI used in that driver and SEV/TDX drivers. The S390
>>>> driver attestation request appears to accept two data blobs as input, as well
>>>> as a variety of vendor-specific header configurations.
>>>>
>>>> Maybe the s390 attestation model is a special case, but, I think we consider
>>>> this issue. Since we don't have a common spec, there is chance that any
>>>> superset ABI we define now may not meet future vendor requirements. One way to
>>>> handle it to leave enough space in the generic ABI to handle future vendor
>>>> requirements.
>>>>
>>>> I think it would be better if other vendors (like ARM or RISC) can comment and
>>>> confirm whether this proposal meets their demands.
>>>>
>>>
>>> The VMPL-based separation that will house the supervisor module known
>>> as SVSM can have protocols that implement a TPM command interface, or
>>> an RTMR-extension interface, and will also need to have an
>>> SVSM-specific protocol attestation report format to keep the secure
>>> chain of custody apparent. We'd have different formats and protocols
>>> in the kernel, at least, to speak to each technology. I'm not sure
>>> it's worth the trouble of papering over all the... 3-4 technologies
>>> with similar but still weirdly different formats and ways of doing
>>> things with an abstracted attestation ABI, especially since the output
>>> all has to be interpreted in an architecture-specific way anyway.
>>>
>>> ARM's Confidential Computing Realm Management Extensions (RME) seems
>>> to be going along the lines of a runtime measurement register model
>>> with their hardware enforced security. The number of registers isn't
>>> prescribed in the spec.
>>>
>>> +Joey Gouly +linux-coco@lists.linux.dev as far as RME is concerned, do
>>> you know who would be best to weigh in on this discussion of a unified
>>> attestation model?
>>
>>
>>>
>>>>>
>>>>> Vendor specific ioctls for all of this feels like surrender when Linux
>>>>> already has the keys subsystem which has plenty of degrees of freedom
>>>>> for tracking blobs with signatures and using those blobs to instantiate
>>>>> other blobs. It already serves as the ABI wrapping various TPM
>>>>> implementations and marshaling keys for storage encryption and other use
>>>>> cases that intersect Confidential Computing.
>>>>>
>>>>> The benefit of deprecating vendor-specific abstraction layers in
>>>>> userspace is secondary. The primary benefit is collaboration. It enables
>>>>> kernel developers from various architectures to collaborate on common
>>>>> infrastructure. If, referring back to my previous example, SEV adopts an
>>>>> RTMR-like mechanism and TDX adopts a vmpl-like mechanism it would be
>>>>> unfortunate if those efforts were siloed, duplicated, and needlessly
>>>>> differentiated to userspace. So while there are arguably a manageable
>>>>> number of basic arch attestation methods the planned expansion of those
>>>>> to build incremental functionality is where I believe we, as a
>>>>> community, will be glad that we invested in a "Linux format" for all of
>>>>> this.
>>>>>
>>>>> An example, to show what the strawman patch below enables: (req_key is
>>>>> the sample program from "man 2 request_key")
>>>>>
>>>>> # ./req_key guest_attest guest_attest:0:0-$desc $(cat user_data | base64)
>>>>> Key ID is 10e2f3a7
>>>>> # keyctl pipe 0x10e2f3a7 | hexdump -C
>>>>> 00000000  54 44 58 20 47 65 6e 65  72 61 74 65 64 20 51 75  |TDX Generated Qu|
>>>>> 00000010  6f 74 65 00 00 00 00 00  00 00 00 00 00 00 00 00  |ote.............|
>>>>> 00000020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>>>>> *
>>>>> 00004000
>>>>>
>>>>> This is the kernel instantiating a TDX Quote without the TDREPORT
>>>>> implementation detail ever leaving the kernel. Now, this is only the
>>>>
>>>> IIUC, the idea here is to cache the quote data and return it to the user whenever
>>>> possible, right? If yes, I think such optimization may not be very useful for our
>>>> case. AFAIK, the quote data will change whenever there is a change in the guest
>>>> measurement data. Since the validity of the generated quote will not be long,
>>>> and the frequency of quote generation requests is expected to be less, we may not
>>>> get much benefit from caching the quote data. I think we can keep this logic simple
>>>> by directly retrieving the quote data from the quoting enclave whenever there is a
>>>> request from the user.
>>>>
>>>>> top-half of what is needed. The missing bottom half takes that material
>>>>> and uses it to instantiate derived key material like the storage
>>>>> decryption key internal to the kernel. See "The Process" in
>>>>> Documentation/security/keys/request-key.rst for how the Keys subsystem
>>>>> handles the "keys for keys" use case.
>>>>
>>>> This is only useful for key-server use case, right? Attestation can also be
>>>> used for use cases like pattern matching or uploading some secure data, etc.
>>>> Since key-server is not the only use case, does it make sense to suppport
>>>> this derived key feature?
>>>>
>>>>>
>>>>> ---
>>>>> diff --git a/drivers/virt/Kconfig b/drivers/virt/Kconfig
>>>>> index f79ab13a5c28..0f775847028e 100644
>>>>> --- a/drivers/virt/Kconfig
>>>>> +++ b/drivers/virt/Kconfig
>>>>> @@ -54,4 +54,8 @@ source "drivers/virt/coco/sev-guest/Kconfig"
>>>>>
>>>>>  source "drivers/virt/coco/tdx-guest/Kconfig"
>>>>>
>>>>> +config GUEST_ATTEST
>>>>> +     tristate
>>>>> +     select KEYS
>>>>> +
>>>>>  endif
>>>>> diff --git a/drivers/virt/Makefile b/drivers/virt/Makefile
>>>>> index e9aa6fc96fab..66f6b838f8f4 100644
>>>>> --- a/drivers/virt/Makefile
>>>>> +++ b/drivers/virt/Makefile
>>>>> @@ -12,3 +12,4 @@ obj-$(CONFIG_ACRN_HSM)              += acrn/
>>>>>  obj-$(CONFIG_EFI_SECRET)     += coco/efi_secret/
>>>>>  obj-$(CONFIG_SEV_GUEST)              += coco/sev-guest/
>>>>>  obj-$(CONFIG_INTEL_TDX_GUEST)        += coco/tdx-guest/
>>>>> +obj-$(CONFIG_GUEST_ATTEST)   += coco/guest-attest/
>>>>> diff --git a/drivers/virt/coco/guest-attest/Makefile b/drivers/virt/coco/guest-attest/Makefile
>>>>> new file mode 100644
>>>>> index 000000000000..5581c5a27588
>>>>> --- /dev/null
>>>>> +++ b/drivers/virt/coco/guest-attest/Makefile
>>>>> @@ -0,0 +1,2 @@
>>>>> +obj-$(CONFIG_GUEST_ATTEST) += guest_attest.o
>>>>> +guest_attest-y := key.o
>>>>> diff --git a/drivers/virt/coco/guest-attest/key.c b/drivers/virt/coco/guest-attest/key.c
>>>>> new file mode 100644
>>>>> index 000000000000..2a494b6dd7a7
>>>>> --- /dev/null
>>>>> +++ b/drivers/virt/coco/guest-attest/key.c
>>>>> @@ -0,0 +1,159 @@
>>>>> +// SPDX-License-Identifier: GPL-2.0-only
>>>>> +/* Copyright(c) 2023 Intel Corporation. All rights reserved. */
>>>>> +
>>>>> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>>>>> +#include <linux/seq_file.h>
>>>>> +#include <linux/key-type.h>
>>>>> +#include <linux/module.h>
>>>>> +#include <linux/base64.h>
>>>>> +
>>>>> +#include <keys/request_key_auth-type.h>
>>>>> +#include <keys/user-type.h>
>>>>> +
>>>>> +#include "guest-attest.h"
>>>>
>>>> Can you share you guest-attest.h?
>>>>
>>>>> +
>>>>> +static LIST_HEAD(guest_attest_list);
>>>>> +static DECLARE_RWSEM(guest_attest_rwsem);
>>>>> +
>>>>> +static struct guest_attest_ops *fetch_ops(void)
>>>>> +{
>>>>> +     return list_first_entry_or_null(&guest_attest_list,
>>>>> +                                     struct guest_attest_ops, list);
>>>>> +}
>>>>> +
>>>>> +static struct guest_attest_ops *get_ops(void)
>>>>> +{
>>>>> +     down_read(&guest_attest_rwsem);
>>>>> +     return fetch_ops();
>>>>> +}
>>>>> +
>>>>> +static void put_ops(void)
>>>>> +{
>>>>> +     up_read(&guest_attest_rwsem);
>>>>> +}
>>>>> +
>>>>> +int register_guest_attest_ops(struct guest_attest_ops *ops)
>>>>> +{
>>>>> +     struct guest_attest_ops *conflict;
>>>>> +     int rc;
>>>>> +
>>>>> +     down_write(&guest_attest_rwsem);
>>>>> +     conflict = fetch_ops();
>>>>> +     if (conflict) {
>>>>> +             pr_err("\"%s\" ops already registered\n", conflict->name);
>>>>> +             rc = -EEXIST;
>>>>> +             goto out;
>>>>> +     }
>>>>> +     list_add(&ops->list, &guest_attest_list);
>>>>> +     try_module_get(ops->module);
>>>>> +     rc = 0;
>>>>> +out:
>>>>> +     up_write(&guest_attest_rwsem);
>>>>> +     return rc;
>>>>> +}
>>>>> +EXPORT_SYMBOL_GPL(register_guest_attest_ops);
>>>>> +
>>>>> +void unregister_guest_attest_ops(struct guest_attest_ops *ops)
>>>>> +{
>>>>> +     down_write(&guest_attest_rwsem);
>>>>> +     list_del(&ops->list);
>>>>> +     up_write(&guest_attest_rwsem);
>>>>> +     module_put(ops->module);
>>>>> +}
>>>>> +EXPORT_SYMBOL_GPL(unregister_guest_attest_ops);
>>>>> +
>>>>> +static int __guest_attest_request_key(struct key *key, int level,
>>>>> +                                   struct key *dest_keyring,
>>>>> +                                   const char *callout_info, int callout_len,
>>>>> +                                   struct key *authkey)
>>>>> +{
>>>>> +     struct guest_attest_ops *ops;
>>>>> +     void *payload = NULL;
>>>>> +     int rc, payload_len;
>>>>> +
>>>>> +     ops = get_ops();
>>>>> +     if (!ops)
>>>>> +             return -ENOKEY;
>>>>> +
>>>>> +     payload = kzalloc(max(GUEST_ATTEST_DATALEN, callout_len), GFP_KERNEL);
>>>>> +     if (!payload) {
>>>>> +             rc = -ENOMEM;
>>>>> +             goto out;
>>>>> +     }
>>>>
>>>> Is the idea to get the values like vmpl part of the payload?
>>>>
>>>>> +
>>>>> +     payload_len = base64_decode(callout_info, callout_len, payload);
>>>>> +     if (payload_len < 0 || payload_len > GUEST_ATTEST_DATALEN) {
>>>>> +             rc = -EINVAL;
>>>>> +             goto out;
>>>>> +     }
>>>>> +
>>>>> +     rc = ops->request_attest(key, level, dest_keyring, payload, payload_len,
>>>>> +                              authkey);
>>>>> +out:
>>>>> +     kfree(payload);
>>>>> +     put_ops();
>>>>> +     return rc;
>>>>> +}
>>>>> +
>>>>> +static int guest_attest_request_key(struct key *authkey, void *data)
>>>>> +{
>>>>> +     struct request_key_auth *rka = get_request_key_auth(authkey);
>>>>> +     struct key *key = rka->target_key;
>>>>> +     unsigned long long id;
>>>>> +     int rc, level;
>>>>> +
>>>>> +     pr_debug("desc: %s op: %s callout: %s\n", key->description, rka->op,
>>>>> +              rka->callout_info ? (char *)rka->callout_info : "\"none\"");
>>>>> +
>>>>> +     if (sscanf(key->description, "guest_attest:%d:%llu", &level, &id) != 2)
>>>>> +             return -EINVAL;
>>>>> +
>>>>
>>>> Can you explain some details about the id and level? It is not very clear why
>>>> we need it.
>>>>
>>>>> +     if (!rka->callout_info) {
>>>>> +             rc = -EINVAL;
>>>>> +             goto out;
>>>>> +     }
>>>>> +
>>>>> +     rc = __guest_attest_request_key(key, level, rka->dest_keyring,
>>>>> +                                     rka->callout_info, rka->callout_len,
>>>>> +                                     authkey);
>>>>> +out:
>>>>> +     complete_request_key(authkey, rc);
>>>>> +     return rc;
>>>>> +}
>>>>> +
>>>>> +static int guest_attest_vet_description(const char *desc)
>>>>> +{
>>>>> +     unsigned long long id;
>>>>> +     int level;
>>>>> +
>>>>> +     if (sscanf(desc, "guest_attest:%d:%llu", &level, &id) != 2)
>>>>> +             return -EINVAL;
>>>>> +     return 0;
>>>>> +}
>>>>> +
>>>>> +static struct key_type key_type_guest_attest = {
>>>>> +     .name = "guest_attest",
>>>>> +     .preparse = user_preparse,
>>>>> +     .free_preparse = user_free_preparse,
>>>>> +     .instantiate = generic_key_instantiate,
>>>>> +     .revoke = user_revoke,
>>>>> +     .destroy = user_destroy,
>>>>> +     .describe = user_describe,
>>>>> +     .read = user_read,
>>>>> +     .vet_description = guest_attest_vet_description,
>>>>> +     .request_key = guest_attest_request_key,
>>>>> +};
>>>>> +
>>>>> +static int __init guest_attest_init(void)
>>>>> +{
>>>>> +     return register_key_type(&key_type_guest_attest);
>>>>> +}
>>>>> +
>>>>> +static void __exit guest_attest_exit(void)
>>>>> +{
>>>>> +     unregister_key_type(&key_type_guest_attest);
>>>>> +}
>>>>> +
>>>>> +module_init(guest_attest_init);
>>>>> +module_exit(guest_attest_exit);
>>>>> +MODULE_LICENSE("GPL v2");
>>>>> diff --git a/drivers/virt/coco/tdx-guest/Kconfig b/drivers/virt/coco/tdx-guest/Kconfig
>>>>> index 14246fc2fb02..9a1ec85369fe 100644
>>>>> --- a/drivers/virt/coco/tdx-guest/Kconfig
>>>>> +++ b/drivers/virt/coco/tdx-guest/Kconfig
>>>>> @@ -1,6 +1,7 @@
>>>>>  config TDX_GUEST_DRIVER
>>>>>       tristate "TDX Guest driver"
>>>>>       depends on INTEL_TDX_GUEST
>>>>> +     select GUEST_ATTEST
>>>>>       help
>>>>>         The driver provides userspace interface to communicate with
>>>>>         the TDX module to request the TDX guest details like attestation
>>>>> diff --git a/drivers/virt/coco/tdx-guest/tdx-guest.c b/drivers/virt/coco/tdx-guest/tdx-guest.c
>>>>> index 388491fa63a1..65b5aab284d9 100644
>>>>> --- a/drivers/virt/coco/tdx-guest/tdx-guest.c
>>>>> +++ b/drivers/virt/coco/tdx-guest/tdx-guest.c
>>>>> @@ -13,11 +13,13 @@
>>>>>  #include <linux/string.h>
>>>>>  #include <linux/uaccess.h>
>>>>>  #include <linux/set_memory.h>
>>>>> +#include <linux/key-type.h>
>>>>>
>>>>>  #include <uapi/linux/tdx-guest.h>
>>>>>
>>>>>  #include <asm/cpu_device_id.h>
>>>>>  #include <asm/tdx.h>
>>>>> +#include "../guest-attest/guest-attest.h"
>>>>>
>>>>>  /*
>>>>>   * Intel's SGX QE implementation generally uses Quote size less
>>>>> @@ -229,6 +231,62 @@ static const struct x86_cpu_id tdx_guest_ids[] = {
>>>>>  };
>>>>>  MODULE_DEVICE_TABLE(x86cpu, tdx_guest_ids);
>>>>>
>>>>> +static int tdx_request_attest(struct key *key, int level,
>>>>> +                           struct key *dest_keyring, void *payload,
>>>>> +                           int payload_len, struct key *authkey)
>>>>> +{
>>>>> +     u8 *tdreport;
>>>>> +     long ret;
>>>>> +
>>>>> +     tdreport = kzalloc(TDX_REPORT_LEN, GFP_KERNEL);
>>>>> +     if (!tdreport)
>>>>> +             return -ENOMEM;
>>>>> +
>>>>> +     /* Generate TDREPORT0 using "TDG.MR.REPORT" TDCALL */
>>>>> +     ret = tdx_mcall_get_report0(payload, tdreport);
>>>>> +     if (ret)
>>>>> +             goto out;
>>>>> +
>>>>> +     mutex_lock(&quote_lock);
>>>>> +
>>>>> +     memset(qentry->buf, 0, qentry->buf_len);
>>>>> +     reinit_completion(&qentry->compl);
>>>>> +     qentry->valid = true;
>>>>> +
>>>>> +     /* Submit GetQuote Request using GetQuote hyperetall */
>>>>> +     ret = tdx_hcall_get_quote(qentry->buf, qentry->buf_len);
>>>>> +     if (ret) {
>>>>> +             pr_err("GetQuote hyperetall failed, status:%lx\n", ret);
>>>>> +             ret = -EIO;
>>>>> +             goto quote_failed;
>>>>> +     }
>>>>> +
>>>>> +     /*
>>>>> +      * Although the GHCI specification does not state explicitly that
>>>>> +      * the VMM must not wait indefinitely for the Quote request to be
>>>>> +      * completed, a sane VMM should always notify the guest after a
>>>>> +      * certain time, regardless of whether the Quote generation is
>>>>> +      * successful or not.  For now just assume the VMM will do so.
>>>>> +      */
>>>>> +     wait_for_completion(&qentry->compl);
>>>>> +
>>>>> +     ret = key_instantiate_and_link(key, qentry->buf, qentry->buf_len,
>>>>> +                                    dest_keyring, authkey);
>>>>> +
>>>>> +quote_failed:
>>>>> +     qentry->valid = false;
>>>>> +     mutex_unlock(&quote_lock);
>>>>> +out:
>>>>> +     kfree(tdreport);
>>>>> +     return ret;
>>>>> +}
>>>>> +
>>>>> +static struct guest_attest_ops tdx_attest_ops = {
>>>>> +     .name = KBUILD_MODNAME,
>>>>> +     .module = THIS_MODULE,
>>>>> +     .request_attest = tdx_request_attest,
>>>>> +};
>>>>> +
>>>>>  static int __init tdx_guest_init(void)
>>>>>  {
>>>>>       int ret;
>>>>> @@ -251,8 +309,14 @@ static int __init tdx_guest_init(void)
>>>>>       if (ret)
>>>>>               goto free_quote;
>>>>>
>>>>> +     ret = register_guest_attest_ops(&tdx_attest_ops);
>>>>> +     if (ret)
>>>>> +             goto free_irq;
>>>>> +
>>>>>       return 0;
>>>>>
>>>>> +free_irq:
>>>>> +     tdx_unregister_event_irq_cb(quote_cb_handler, qentry);
>>>>>  free_quote:
>>>>>       free_quote_entry(qentry);
>>>>>  free_misc:
>>>>> @@ -264,6 +328,7 @@ module_init(tdx_guest_init);
>>>>>
>>>>>  static void __exit tdx_guest_exit(void)
>>>>>  {
>>>>> +     unregister_guest_attest_ops(&tdx_attest_ops);
>>>>>       tdx_unregister_event_irq_cb(quote_cb_handler, qentry);
>>>>>       free_quote_entry(qentry);
>>>>>       misc_deregister(&tdx_misc_dev);
>>>>
>>>> --
>>>> Sathyanarayanan Kuppuswamy
>>>> Linux Kernel Developer
>>>
>>>
>>>
>>> --
>>> -Dionna Glaze, PhD (she/her)
>>
>> -- 
>> Sathyanarayanan Kuppuswamy
>> Linux Kernel Developer
>>
Dionna Amalie Glaze June 29, 2023, 4:25 p.m. UTC | #25
>
> First, thank you for engaging, it speeds up the iteration. This
> confirmed my worry that the secondary goal of this proposal, a common
> verification implementation, is indeed unachievable in the near term. A
> few clarifying questions below, but I will let this go.
>
> The primary goal, achievable on a short runway, is more for kernel
> developers. It is to have a common infrastructure for marshaling vendor
> payloads, provide a mechanism to facilitate kernel initiated requests to
> a key-server, and to deploy a common frontend for concepts like runtime
> measurement (likely as another backend to what Keys already understands
> for various TPM PCR implementations).
>

That sounds good, though the devil is in the details. The TPM
situation will be exacerbated by a lower root of trust. The TPM itself
doesn't have a specification for its own attested firmware.

> > All the specific fields of the blob have to be decoded and subjected
> > to an acceptance policy. That policy will most always be different
> > across different platforms and VM owners. I wrote all of
> > github.com/google/go-sev-guest, including the verification and
> > validation logic, and it's going to get more complicated, and the
> > sources of the data that provide validators with notions of what
> > values can be trusted will be varied.
>
> Can you provide an example? I ask only to include it in the kernel
> commit log for a crisp explanation why this proposed Keys format will
> continue to convey a raw vendor blob with no kernel abstraction as part
> of its payload for the foreseeable future.
>

An example is that while there is a common notion that each report
will have some attestation key whose certificate needs to be verified,
there is additional collateral that must be downloaded to

* verify a TDX key certificate against updates to known weaknesses of
the key's details
* verify the measurement in the report against a vendor's signed
golden measurement
* [usually offline and signed by the analyzing principal that the
analysis was done] fully verify the measurement given a build
provenance document like SLSA. The complexity of this analysis could
even engage in static analysis of every commit since a certain date,
or from a developer of low repute... whatever the verifier wants to
do.

These are all in the realm of interpreting the blob for acceptance, so
it's best to keep uninterpreted.

> > The formats are not standardized. The Confidential Computing
> > Consortium should be working toward that, but it's a slow process.
> > There's IETF RATS. There's in-toto.io attestations. There's Azure's
> > JWT thing. There's a signed serialized protocol buffer that I've
> > decided is what Google is going to produce while we figure out all the
> > "right" formats to use. There will be factions and absolute gridlock
> > for multiple years if we require solidifying an abstraction for the
> > kernel to manage all this logic before passing a report on to user
> > space.
>
> Understood. When that standardization process completes my expectation
> is that it slots into the common conveyance method and no need to go
> rewrite code that already knows how to interface with Keys to get
> attestation evidence.
>

I can get on board with that. I don't think there will be much cause
for more than a handful of attestation requests with different report
data, so it shouldn't overwhelm the key subsystem.

> >
> > You really shouldn't be putting attestation validation logic in the
> > kernel.
>
> It was less putting validation logic in the kernel, and more hoping for
> a way to abstract some common parsing in advance of a true standard
> attestation format, but point taken.
>

I think we'll have hardware-provided blobs and host-provided cached
collateral. The caching could be achieved with a hosted proxy server,
but given SEV-SNP already has GET_EXT_GUEST_REQUEST to simplify
delivery, I think it's fair to offer other technologies the chance at
supporting a similar simple solution.

Everything else will have to come from the network or the workload itself.


> > It belongs outside of the VM entirely with the party that will
> > only release access keys to the VM if it can prove it's running the
> > software it claims, on the platform it claims. I think Windows puts a
> > remote procedure call in their guest attestation driver to the Azure
> > attestation service, and that is an anti-pattern in my mind.
>
> I can not speak to the Windows implementation, but the Linux Keys
> subsystem is there to handle Key construction that may be requested by
> userspace or the kernel and may be serviced by built-in keys,
> device/platform instantiated keys, or keys retrieved via an upcall to
> userspace.
>
> The observation is that existing calls to request_key() in the kernel
> likely have reason to be serviced by a confidential computing key server
> somewhere in the chain. So, might as well enlighten the Keys subsystem
> to retrieve this information and skip round trips to userspace run
> vendor specific ioctls. Make the kernel as self sufficient as possible,
> and make SEV, TDX, etc. developers talk more to each other about their
> needs.

That sounds reasonable. I think one wrinkle in the current design is
that SGX and SEV-SNP provide derived keys as a thing separate from
attestation but still based on firmware measurement, and TDX doesn't
yet. It may in the future come with a TDX module update that gets
derived keys through an SGX enclave–who knows. The MSG_KEY_REQ guest
request for a SEV-SNP derived key has some bits and bobs to select
different VM material to mix into the key derivation, so that would
need to be in the API as well. It makes request_key a little weird to
use for both. I don't even think there's a sufficient abstraction for
the guest-attest device to provide, since there isn't a common
REPORT_DATA + attestation level pair of inputs that drive it. If we're
fine with a technology-tagged uninterpreted input blob for key
derivation, and the device returns an error if the real hardware
doesn't match the technology tag, then that could be an okay enough
interface.

I could be convinced to leave MSG_KEY_REQ out of Linux entirely, but
only for selfish reasons. The alternative is to set up a sealing key
escrow service that releases sealing keys when a VM's attestation
matches a pre-registered policy, which is extremely heavy-handed when
you can enforce workload identity at VM launch time and have a safe
derived key with this technology. I think there's a Decentriq blog
post about setting that whole supply chain up ("swiss cheese to
cheddar"), so they'd likely have some words about that.
Thomas Gleixner Aug. 23, 2023, 8:25 a.m. UTC | #26
On Thu, Jun 22 2023 at 14:01, Erdem Aktas wrote:
> On Mon, Jun 12, 2023 at 12:03 PM Dan Williams <dan.j.williams@intel.com>
> wrote:
>> Now multiple
>> confidential computing vendors trying to develop similar flows with
>> differentiated formats where that differentiation need not leak over the
>> ABI boundary.
>>
>
> <Just my personal opinion below>
> I agree with this statement in the high level but it is also somehow
> surprising for me after all the discussion happened around this topic.
> Honestly, I feel like there are multiple versions of "Intel"  working in
> different directions.
>
> If we want multiple vendors trying to do the similar things behind a common
> ABI, it should start with the spec. Since this comment is coming from
> Intel, I wonder if there is any plan to combine the GHCB and GHCI
> interfaces under common ABI in the future or why it did not even happen in
> the first place.

You are conflating things here.

The GETQUOTE TDVMCALL interface is part of the Guest-Hypervisor
Communication Interface (GHCI), which is a firmware interface.

Firmware (likewise hardware) interfaces have the unfortunate property
that they are mostly cast in stone.

But that has absolutely nothing to do with the way how the kernel
implements support for them. If we'd follow your reasoning then we'd
have a gazillion of vendor specific SCSI stacks in the kernel.

> What I see is that Intel has GETQUOTE TDVMCALL interface in its spec and
> again Intel does not really want to provide support for it in linux. It
> feels really frustrating.

Intel definitely wants to provide support for this interface and this
very thread is about that support. But Intel is not in a position to
define what the kernel community has to accept or not, neither is
Google.

Sure, it would have been more efficient to come up with a better
interface earlier, but that's neither an Intel nor a TDX specific
problem.

It's just how kernel development works. Some ideas look good on first
sight, some stuff slips through and at some point the maintainers
realize that this is not the way to go and request a proper generalized
and maintainable implementation.

If you can provide compelling technical reasons why the IOCTL is the
better and more maintainable approach for the kernel, then we are all
ears and happy to debate that on the technical level.

Feel free to be frustrated, but I can assure you that the only way to
resolve this dilemma is to sit down and actually get work done in a way
which is acceptable by the kernel community at the technical level.

Everything else is frustrating for everyone involved, not only you.

Thanks,

        tglx
diff mbox series

Patch

diff --git a/tools/testing/selftests/tdx/tdx_guest_test.c b/tools/testing/selftests/tdx/tdx_guest_test.c
index 81d8cb88ea1a..0b4b4293b9cb 100644
--- a/tools/testing/selftests/tdx/tdx_guest_test.c
+++ b/tools/testing/selftests/tdx/tdx_guest_test.c
@@ -18,6 +18,7 @@ 
 #define TDX_GUEST_DEVNAME "/dev/tdx_guest"
 #define HEX_DUMP_SIZE 8
 #define DEBUG 0
+#define QUOTE_SIZE 8192
 
 /**
  * struct tdreport_type - Type header of TDREPORT_STRUCT.
@@ -128,21 +129,29 @@  static void print_array_hex(const char *title, const char *prefix_str,
 	printf("\n");
 }
 
+/* Helper function to get TDREPORT */
+long get_tdreport0(int devfd, struct tdx_report_req *req)
+{
+	int i;
+
+	/* Generate sample report data */
+	for (i = 0; i < TDX_REPORTDATA_LEN; i++)
+		req->reportdata[i] = i;
+
+	return ioctl(devfd, TDX_CMD_GET_REPORT0, req);
+}
+
 TEST(verify_report)
 {
 	struct tdx_report_req req;
 	struct tdreport *tdreport;
-	int devfd, i;
+	int devfd;
 
 	devfd = open(TDX_GUEST_DEVNAME, O_RDWR | O_SYNC);
 	ASSERT_LT(0, devfd);
 
-	/* Generate sample report data */
-	for (i = 0; i < TDX_REPORTDATA_LEN; i++)
-		req.reportdata[i] = i;
-
 	/* Get TDREPORT */
-	ASSERT_EQ(0, ioctl(devfd, TDX_CMD_GET_REPORT0, &req));
+	ASSERT_EQ(0, get_tdreport0(devfd, &req));
 
 	if (DEBUG) {
 		print_array_hex("\n\t\tTDX report data\n", "",
@@ -160,4 +169,48 @@  TEST(verify_report)
 	ASSERT_EQ(0, close(devfd));
 }
 
+TEST(verify_quote)
+{
+	struct tdx_quote_buf *quote_buf;
+	struct tdx_report_req rep_req;
+	struct tdx_quote_req req;
+	__u64 quote_buf_size;
+	int devfd;
+
+	/* Open attestation device */
+	devfd = open(TDX_GUEST_DEVNAME, O_RDWR | O_SYNC);
+
+	ASSERT_LT(0, devfd);
+
+	/* Add size for quote header */
+	quote_buf_size = sizeof(*quote_buf) + QUOTE_SIZE;
+
+	/* Allocate quote buffer */
+	quote_buf = (struct tdx_quote_buf *)malloc(quote_buf_size);
+	ASSERT_NE(NULL, quote_buf);
+
+	/* Initialize GetQuote header */
+	quote_buf->version = 1;
+	quote_buf->status  = GET_QUOTE_SUCCESS;
+	quote_buf->in_len  = TDX_REPORT_LEN;
+	quote_buf->out_len = 0;
+
+	/* Get TDREPORT data */
+	ASSERT_EQ(0, get_tdreport0(devfd, &rep_req));
+
+	/* Fill GetQuote request */
+	memcpy(quote_buf->data, rep_req.tdreport, TDX_REPORT_LEN);
+	req.buf	  = (__u64)quote_buf;
+	req.len	  = quote_buf_size;
+
+	ASSERT_EQ(0, ioctl(devfd, TDX_CMD_GET_QUOTE, &req));
+
+	/* Check whether GetQuote request is successful */
+	EXPECT_EQ(0, quote_buf->status);
+
+	free(quote_buf);
+
+	ASSERT_EQ(0, close(devfd));
+}
+
 TEST_HARNESS_MAIN