Message ID: 20230510060611.12950-2-binbin.wu@linux.intel.com
State: New, archived
Series: Linear Address Masking (LAM) KVM Enabling
On Wed, May 10, 2023 at 02:06:06PM +0800, Binbin Wu wrote:
>Define a 32-bit parameter and consolidate the two bools into it.
>
>__linearize() has two bool parameters write and fetch. And new flag
>will be needed to support new feature (e.g. LAM needs a flag to skip
>address untag under some conditions).
>
>No functional change intended.
>
>In the follow-up patches, the new parameter will be extended for LAM.
>
>Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>

Reviewed-by: Chao Gao <chao.gao@intel.com>

one nit below

>---
> arch/x86/kvm/emulate.c     | 19 +++++++++++++------
> arch/x86/kvm/kvm_emulate.h |  4 ++++
> 2 files changed, 17 insertions(+), 6 deletions(-)
>
>diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
>index 936a397a08cd..9508836e8a35 100644
>--- a/arch/x86/kvm/emulate.c
>+++ b/arch/x86/kvm/emulate.c
>@@ -687,8 +687,8 @@ static unsigned insn_alignment(struct x86_emulate_ctxt *ctxt, unsigned size)
> static __always_inline int __linearize(struct x86_emulate_ctxt *ctxt,
> 				       struct segmented_address addr,
> 				       unsigned *max_size, unsigned size,
>-				       bool write, bool fetch,
>-				       enum x86emul_mode mode, ulong *linear)
>+				       u32 flags, enum x86emul_mode mode,
>+				       ulong *linear)
> {
> 	struct desc_struct desc;
> 	bool usable;
>@@ -696,6 +696,8 @@ static __always_inline int __linearize(struct x86_emulate_ctxt *ctxt,
> 	u32 lim;
> 	u16 sel;
> 	u8  va_bits;
>+	bool fetch = !!(flags & X86EMUL_F_FETCH);
>+	bool write = !!(flags & X86EMUL_F_WRITE);
>
> 	la = seg_base(ctxt, addr.seg) + addr.ea;
> 	*max_size = 0;
>@@ -757,7 +759,11 @@ static int linearize(struct x86_emulate_ctxt *ctxt,
> 		     ulong *linear)
> {
> 	unsigned max_size;
>-	return __linearize(ctxt, addr, &max_size, size, write, false,
>+	u32 flags = 0;
>+
>+	if (write)
>+		flags |= X86EMUL_F_WRITE;

this can be more dense:

	u32 flags = write ? X86EMUL_F_WRITE : 0;
On Wed, 2023-05-10 at 14:06 +0800, Binbin Wu wrote:
> Define a 32-bit parameter and consolidate the two bools into it.

Technically, per C standard I don't think you can "define" a parameter of a
function, but can only "declare".

>
> __linearize() has two bool parameters write and fetch. And new flag
> will be needed to support new feature (e.g. LAM needs a flag to skip
> address untag under some conditions).

Since this is the first patch to mention LAM in this series, it would be better
to use the full name Intel Linear Address Masking (LAM).

>
> No functional change intended.
>
> In the follow-up patches, the new parameter will be extended for LAM.

A duplicated sentence to me. Perhaps you can just remove it.

Some changelog material FYI:

	Consolidate two bool parameters (write/fetch) of __linearize() into a
	'u32 flag' parameter to make the function be more concise and future
	extendable, i.e. to support Intel Linear Address Masking (LAM), which
	allows high non-address bits of linear address to be used as metadata.

	Define two flags to replace the two bools. A new flag will be added
	to support LAM to skip masking off metadata bits of linear address
	under some conditions.

	No functional change intended.

>
> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>

Anyway:

Acked-by: Kai Huang <kai.huang@intel.com>
On 5/10/2023 3:42 PM, Chao Gao wrote:
> On Wed, May 10, 2023 at 02:06:06PM +0800, Binbin Wu wrote:
>> Define a 32-bit parameter and consolidate the two bools into it.
>>
>> __linearize() has two bool parameters write and fetch. And new flag
>> will be needed to support new feature (e.g. LAM needs a flag to skip
>> address untag under some conditions).
>>
>> No functional change intended.
>>
>> In the follow-up patches, the new parameter will be extended for LAM.
>>
>> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
> Reviewed-by: Chao Gao <chao.gao@intel.com>
>
> one nit below
>
>> ---
>>  arch/x86/kvm/emulate.c     | 19 +++++++++++++------
>>  arch/x86/kvm/kvm_emulate.h |  4 ++++
>>  2 files changed, 17 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
>> index 936a397a08cd..9508836e8a35 100644
>> --- a/arch/x86/kvm/emulate.c
>> +++ b/arch/x86/kvm/emulate.c
>> @@ -687,8 +687,8 @@ static unsigned insn_alignment(struct x86_emulate_ctxt *ctxt, unsigned size)
>>  static __always_inline int __linearize(struct x86_emulate_ctxt *ctxt,
>>  				       struct segmented_address addr,
>>  				       unsigned *max_size, unsigned size,
>> -				       bool write, bool fetch,
>> -				       enum x86emul_mode mode, ulong *linear)
>> +				       u32 flags, enum x86emul_mode mode,
>> +				       ulong *linear)
>>  {
>>  	struct desc_struct desc;
>>  	bool usable;
>> @@ -696,6 +696,8 @@ static __always_inline int __linearize(struct x86_emulate_ctxt *ctxt,
>>  	u32 lim;
>>  	u16 sel;
>>  	u8  va_bits;
>> +	bool fetch = !!(flags & X86EMUL_F_FETCH);
>> +	bool write = !!(flags & X86EMUL_F_WRITE);
>>
>>  	la = seg_base(ctxt, addr.seg) + addr.ea;
>>  	*max_size = 0;
>> @@ -757,7 +759,11 @@ static int linearize(struct x86_emulate_ctxt *ctxt,
>>  		     ulong *linear)
>>  {
>>  	unsigned max_size;
>> -	return __linearize(ctxt, addr, &max_size, size, write, false,
>> +	u32 flags = 0;
>> +
>> +	if (write)
>> +		flags |= X86EMUL_F_WRITE;
> this can be more dense:
>
> 	u32 flags = write ? X86EMUL_F_WRITE : 0;

Thanks, will update it.
On 5/10/2023 8:41 PM, Huang, Kai wrote:
> On Wed, 2023-05-10 at 14:06 +0800, Binbin Wu wrote:
>> Define a 32-bit parameter and consolidate the two bools into it.
> Technically, per C standard I don't think you can "define" a parameter of a
> function, but can only "declare".
>
>> __linearize() has two bool parameters write and fetch. And new flag
>> will be needed to support new feature (e.g. LAM needs a flag to skip
>> address untag under some conditions).
> Since this is the first patch to mention LAM in this series, it would be better
> to use the full name Intel Linear Address Masking (LAM).
>
>> No functional change intended.
>>
>> In the follow-up patches, the new parameter will be extended for LAM.
> A duplicated sentence to me. Perhaps you can just remove it.
>
> Some changelog material FYI:
>
> 	Consolidate two bool parameters (write/fetch) of __linearize() into a
> 	'u32 flag' parameter to make the function be more concise and future
> 	extendable, i.e. to support Intel Linear Address Masking (LAM), which
> 	allows high non-address bits of linear address to be used as metadata.
>
> 	Define two flags to replace the two bools. A new flag will be added
> 	to support LAM to skip masking off metadata bits of linear address
> 	under some conditions.
>
> 	No functional change intended.

Thanks, will update it.

>
>> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
> Anyway:
>
> Acked-by: Kai Huang <kai.huang@intel.com>
From: Binbin Wu
> Sent: 11 May 2023 02:26
...
> >> 	unsigned max_size;
> >> -	return __linearize(ctxt, addr, &max_size, size, write, false,
> >> +	u32 flags = 0;
> >> +
> >> +	if (write)
> >> +		flags |= X86EMUL_F_WRITE;
> > this can be more dense:
> >
> > 	u32 flags = write ? X86EMUL_F_WRITE : 0;
> Thanks, will update it.

You can also dispense with the extra local variable and
put the ?: into the parameter list.

Even more so with the other call sites.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
On 5/11/2023 5:58 PM, David Laight wrote:
> From: Binbin Wu
>> Sent: 11 May 2023 02:26
> ...
>>>> 	unsigned max_size;
>>>> -	return __linearize(ctxt, addr, &max_size, size, write, false,
>>>> +	u32 flags = 0;
>>>> +
>>>> +	if (write)
>>>> +		flags |= X86EMUL_F_WRITE;
>>> this can be more dense:
>>>
>>> 	u32 flags = write ? X86EMUL_F_WRITE : 0;
>> Thanks, will update it.
> You can also dispense with the extra local variable and
> put the ?: into the parameter list.
>
> Even more so with the other call sites.

Thanks, I will check whether they are better to be put in the parameter
list directly instead of using a local variable.
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 936a397a08cd..9508836e8a35 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -687,8 +687,8 @@ static unsigned insn_alignment(struct x86_emulate_ctxt *ctxt, unsigned size)
 static __always_inline int __linearize(struct x86_emulate_ctxt *ctxt,
 				       struct segmented_address addr,
 				       unsigned *max_size, unsigned size,
-				       bool write, bool fetch,
-				       enum x86emul_mode mode, ulong *linear)
+				       u32 flags, enum x86emul_mode mode,
+				       ulong *linear)
 {
 	struct desc_struct desc;
 	bool usable;
@@ -696,6 +696,8 @@ static __always_inline int __linearize(struct x86_emulate_ctxt *ctxt,
 	u32 lim;
 	u16 sel;
 	u8  va_bits;
+	bool fetch = !!(flags & X86EMUL_F_FETCH);
+	bool write = !!(flags & X86EMUL_F_WRITE);
 
 	la = seg_base(ctxt, addr.seg) + addr.ea;
 	*max_size = 0;
@@ -757,7 +759,11 @@ static int linearize(struct x86_emulate_ctxt *ctxt,
 		     ulong *linear)
 {
 	unsigned max_size;
-	return __linearize(ctxt, addr, &max_size, size, write, false,
+	u32 flags = 0;
+
+	if (write)
+		flags |= X86EMUL_F_WRITE;
+	return __linearize(ctxt, addr, &max_size, size, flags,
 			   ctxt->mode, linear);
 }
 
@@ -768,10 +774,11 @@ static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst)
 	unsigned max_size;
 	struct segmented_address addr = { .seg = VCPU_SREG_CS,
 					   .ea = dst };
+	u32 flags = X86EMUL_F_FETCH;
 
 	if (ctxt->op_bytes != sizeof(unsigned long))
 		addr.ea = dst & ((1UL << (ctxt->op_bytes << 3)) - 1);
-	rc = __linearize(ctxt, addr, &max_size, 1, false, true, ctxt->mode, &linear);
+	rc = __linearize(ctxt, addr, &max_size, 1, flags, ctxt->mode, &linear);
 	if (rc == X86EMUL_CONTINUE)
 		ctxt->_eip = addr.ea;
 	return rc;
@@ -896,6 +903,7 @@ static int __do_insn_fetch_bytes(struct x86_emulate_ctxt *ctxt, int op_size)
 	int cur_size = ctxt->fetch.end - ctxt->fetch.data;
 	struct segmented_address addr = { .seg = VCPU_SREG_CS,
 					   .ea = ctxt->eip + cur_size };
+	u32 flags = X86EMUL_F_FETCH;
 
 	/*
 	 * We do not know exactly how many bytes will be needed, and
@@ -907,8 +915,7 @@ static int __do_insn_fetch_bytes(struct x86_emulate_ctxt *ctxt, int op_size)
 	 * boundary check itself.  Instead, we use max_size to check
 	 * against op_size.
 	 */
-	rc = __linearize(ctxt, addr, &max_size, 0, false, true, ctxt->mode,
-			 &linear);
+	rc = __linearize(ctxt, addr, &max_size, 0, flags, ctxt->mode, &linear);
 	if (unlikely(rc != X86EMUL_CONTINUE))
 		return rc;
 
diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
index ab65f3a47dfd..5b9ec610b2cb 100644
--- a/arch/x86/kvm/kvm_emulate.h
+++ b/arch/x86/kvm/kvm_emulate.h
@@ -88,6 +88,10 @@ struct x86_instruction_info {
 #define X86EMUL_IO_NEEDED       5 /* IO is needed to complete emulation */
 #define X86EMUL_INTERCEPTED     6 /* Intercepted by nested VMCB/VMCS */
 
+/* x86-specific emulation flags */
+#define X86EMUL_F_FETCH		BIT(0)
+#define X86EMUL_F_WRITE		BIT(1)
+
 struct x86_emulate_ops {
 	void (*vm_bugged)(struct x86_emulate_ctxt *ctxt);
 	/*
Define a 32-bit parameter and consolidate the two bools into it.

__linearize() has two bool parameters write and fetch. And new flag
will be needed to support new feature (e.g. LAM needs a flag to skip
address untag under some conditions).

No functional change intended.

In the follow-up patches, the new parameter will be extended for LAM.

Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
 arch/x86/kvm/emulate.c     | 19 +++++++++++++------
 arch/x86/kvm/kvm_emulate.h |  4 ++++
 2 files changed, 17 insertions(+), 6 deletions(-)