Message ID: 0d4ef987825080715873360c8b41ebb467b7dfdf.1713257278.git.vaishali.thakkar@vates.tech (mailing list archive)
State: New
Series: x86/hvm: Allow supplying a dynamic start ASID
On 16/04/2024 9:54 am, Vaishali Thakkar wrote:
> Currently, Xen always starts the ASID allocation at 1. But
> for SEV technologies the ASID space is divided. This is
> because it's a security issue if a guest is started as
> ES/SNP and is migrated to SEV-only. So, the types are
> tracked explicitly.
>
> Thus, in preparation for SEV support in Xen, add min_asid
> to allow supplying the dynamic start ASID during the
> allocation process.
>
> Signed-off-by: Vaishali Thakkar <vaishali.thakkar@vates.tech>

Mechanically, this is fine, but is it going to be useful in the long
term?

For SEV and SEV-ES/SNP, we must run the VM in the single fixed ASID
negotiated with the ASP. This is not optional.

For non-encrypted VMs, we are free to choose between using the remaining
ASID space as we used to (i.e. run it per-pCPU and tick it over to flush
the TLBs), or to run it in a fixed ASID per guest too.

The latter lets us use INVLPGB, and would avoid maintaining two
different TLB-tagging schemes.

I'd say that we absolutely do want INVLPGB support for managing
non-encrypted VMs, and we cannot mix both fixed and floating ASIDs at
the same time. We should seriously consider whether it's worth
maintaining two schemes, or just switching wholesale to a fixed ASID
scheme.

I don't have a good answer here... If it were anything but TLB flushing,
it would be an obvious choice, but TLB handling bugs are some of the
nastiest to show up.

~Andrew
On 4/16/24 4:12 PM, Andrew Cooper wrote:
> On 16/04/2024 9:54 am, Vaishali Thakkar wrote:
>> Currently, Xen always starts the ASID allocation at 1. But
>> for SEV technologies the ASID space is divided. This is
>> because it's a security issue if a guest is started as
>> ES/SNP and is migrated to SEV-only. So, the types are
>> tracked explicitly.
>>
>> Thus, in preparation for SEV support in Xen, add min_asid
>> to allow supplying the dynamic start ASID during the
>> allocation process.
>>
>> Signed-off-by: Vaishali Thakkar <vaishali.thakkar@vates.tech>
>
> Mechanically, this is fine, but is it going to be useful in the long term?
>
> For SEV and SEV-ES/SNP, we must run the VM in the single fixed ASID
> negotiated with the ASP. This is not optional.
>
> For non-encrypted VMs, we are free to choose between using the remaining
> ASID space as we used to (i.e. run it per-pCPU and tick it over to flush
> the TLBs), or to run it in a fixed ASID per guest too.
>
> The latter lets us use INVLPGB, and would avoid maintaining two
> different TLB-tagging schemes.
>
> I'd say that we absolutely do want INVLPGB support for managing
> non-encrypted VMs, and we cannot mix both fixed and floating ASIDs at
> the same time.

I agree that having both schemes at the same time is not practical. And
I have some RFC patches in the works to propose a fixed ASID scheme for
all domains (encrypted or not) to start a discussion regarding that.

IMO, this patch is still useful because my idea was to then use min_asid
as a holder to separate out the non-encrypted, encrypted and SEV-ES/SNP
ASID spaces. If it's easier to see the usefulness of this patch along
with the other ASID fixes, I'm fine with putting it in that RFC
patchset. But I thought that this change was independent enough to be
sent by itself.

> We should seriously consider whether it's worth maintaining two schemes,
> or just switching wholesale to a fixed ASID scheme.
>
> I don't have a good answer here... If it were anything but TLB
> flushing, it would be an obvious choice, but TLB handling bugs are some
> of the nastiest to show up.
>
> ~Andrew
On 4/16/24 4:25 PM, Vaishali Thakkar wrote:
> On 4/16/24 4:12 PM, Andrew Cooper wrote:
>> On 16/04/2024 9:54 am, Vaishali Thakkar wrote:
>>> Currently, Xen always starts the ASID allocation at 1. But
>>> for SEV technologies the ASID space is divided. This is
>>> because it's a security issue if a guest is started as
>>> ES/SNP and is migrated to SEV-only. So, the types are
>>> tracked explicitly.
>>>
>>> Thus, in preparation for SEV support in Xen, add min_asid
>>> to allow supplying the dynamic start ASID during the
>>> allocation process.
>>>
>>> Signed-off-by: Vaishali Thakkar <vaishali.thakkar@vates.tech>
>>
>> Mechanically, this is fine, but is it going to be useful in the long term?
>>
>> For SEV and SEV-ES/SNP, we must run the VM in the single fixed ASID
>> negotiated with the ASP. This is not optional.
>>
>> For non-encrypted VMs, we are free to choose between using the remaining
>> ASID space as we used to (i.e. run it per-pCPU and tick it over to flush
>> the TLBs), or to run it in a fixed ASID per guest too.
>>
>> The latter lets us use INVLPGB, and would avoid maintaining two
>> different TLB-tagging schemes.
>>
>> I'd say that we absolutely do want INVLPGB support for managing
>> non-encrypted VMs, and we cannot mix both fixed and floating ASIDs at
>> the same time.
>
> I agree that having both schemes at the same time is not practical. And
> I have some RFC patches in the works to propose a fixed ASID scheme for
> all domains (encrypted or not) to start a discussion regarding that.
>
> IMO, this patch is still useful because my idea was to then use min_asid
> as a holder to separate out the non-encrypted, encrypted and SEV-ES/SNP
> ASID spaces. If it's easier to see the usefulness of this patch along
> with the other ASID fixes, I'm fine with putting it in that RFC
> patchset. But I thought that this change was independent enough to be
> sent by itself.

s/encrypted/SEV

>> We should seriously consider whether it's worth maintaining two schemes,
>> or just switching wholesale to a fixed ASID scheme.
>>
>> I don't have a good answer here... If it were anything but TLB
>> flushing, it would be an obvious choice, but TLB handling bugs are some
>> of the nastiest to show up.
>>
>> ~Andrew
diff --git a/xen/arch/x86/hvm/asid.c b/xen/arch/x86/hvm/asid.c
index 8d27b7dba1..e14b64f2c8 100644
--- a/xen/arch/x86/hvm/asid.c
+++ b/xen/arch/x86/hvm/asid.c
@@ -41,6 +41,7 @@ boolean_param("asid", opt_asid_enabled);
 /* Per-CPU ASID management. */
 struct hvm_asid_data {
     uint64_t core_asid_generation;
+    uint32_t min_asid;
     uint32_t next_asid;
     uint32_t max_asid;
     bool disabled;
@@ -53,7 +54,8 @@ void hvm_asid_init(int nasids)
     static int8_t g_disabled = -1;
     struct hvm_asid_data *data = &this_cpu(hvm_asid_data);
 
-    data->max_asid = nasids - 1;
+    data->min_asid = 1;
+    data->max_asid = nasids - data->min_asid;
     data->disabled = !opt_asid_enabled || (nasids <= 1);
 
     if ( g_disabled != data->disabled )
@@ -66,8 +68,8 @@ void hvm_asid_init(int nasids)
     /* Zero indicates 'invalid generation', so we start the count at one. */
     data->core_asid_generation = 1;
 
-    /* Zero indicates 'ASIDs disabled', so we start the count at one. */
-    data->next_asid = 1;
+    /* Zero indicates 'ASIDs disabled', so we start the count at min_asid. */
+    data->next_asid = data->min_asid;
 }
 
 void hvm_asid_flush_vcpu_asid(struct hvm_vcpu_asid *asid)
@@ -117,7 +119,7 @@ bool hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid)
     if ( unlikely(data->next_asid > data->max_asid) )
     {
         hvm_asid_flush_core();
-        data->next_asid = 1;
+        data->next_asid = data->min_asid;
         if ( data->disabled )
             goto disabled;
     }
Currently, Xen always starts the ASID allocation at 1. But for SEV
technologies the ASID space is divided. This is because it's a security
issue if a guest is started as ES/SNP and is migrated to SEV-only. So,
the types are tracked explicitly.

Thus, in preparation for SEV support in Xen, add min_asid to allow
supplying the dynamic start ASID during the allocation process.

Signed-off-by: Vaishali Thakkar <vaishali.thakkar@vates.tech>
---
 xen/arch/x86/hvm/asid.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
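To see what the min_asid knob buys, here is a stand-alone sketch of the round-robin allocation with a reserved range below min_asid (e.g. ASIDs set aside for SEV guests). The names mirror the patch, but asid_init/asid_alloc are hypothetical helpers for illustration only, not the Xen code: max_asid is simply taken as nasids - 1, a generation bump stands in for hvm_asid_flush_core(), and the 'disabled' path is not modelled.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified model of the per-CPU state in xen/arch/x86/hvm/asid.c. */
struct hvm_asid_data {
    uint64_t core_asid_generation;
    uint32_t min_asid;   /* first allocatable ASID (the patch's addition) */
    uint32_t next_asid;
    uint32_t max_asid;
};

static void asid_init(struct hvm_asid_data *d, uint32_t nasids,
                      uint32_t min_asid)
{
    d->min_asid = min_asid;
    d->max_asid = nasids - 1;
    d->core_asid_generation = 1;  /* zero means 'invalid generation' */
    d->next_asid = d->min_asid;   /* zero means 'ASIDs disabled' */
}

/*
 * Hand out the next ASID.  Returns true when the space wrapped around,
 * i.e. when a real implementation would have to flush the TLBs before
 * reusing ASIDs from min_asid upwards.
 */
static bool asid_alloc(struct hvm_asid_data *d, uint32_t *out)
{
    bool flushed = false;

    if ( d->next_asid > d->max_asid )
    {
        d->core_asid_generation++;   /* stands in for hvm_asid_flush_core() */
        d->next_asid = d->min_asid;
        flushed = true;
    }
    *out = d->next_asid++;
    return flushed;
}
```

For example, with nasids = 4 and min_asid = 2 the allocator hands out 2 and 3, then wraps: it bumps the generation (forcing a flush) and hands out 2 again. ASIDs below min_asid are never touched, which is the property the patch wants for carving the SEV ranges out of the space.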