Message ID | 20240902100355.3032079-5-andrew.cooper3@citrix.com (mailing list archive)
---|---
State | New
Series | ARM: Cleanup following bitops improvements
On 02/09/2024 12:03, Andrew Cooper wrote:
> These are all loops over a scalar value, and don't need to call general bitop
> helpers behind the scenes.
>
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> CC: Bertrand Marquis <bertrand.marquis@arm.com>
> CC: Michal Orzel <michal.orzel@amd.com>
>
> Slightly RFC. It's unclear whether len is the size of the register, or the
> size of the access. For sub-GPR accesses, won't the upper bits be clear
> anyway? If so, this can be simplified further.

See dispatch_mmio_write(). "len" refers to the access size (i.e. 1/4/8 bytes).
Each register is defined with a region access, i.e. VGIC_ACCESS_32bit, which is
compared against the actual access. In the current code there is no register
with an 8-byte access. If there is a mismatch, a data abort is injected.
Also, the top 32 bits are not cleared anywhere, so I don't think we can drop
the masking. @Julien?

> [...]

~Michal
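For a concrete sense of the clamp being discussed, here is a minimal standalone
sketch (plain C with illustrative values, not Xen code) of what
val &= (1UL << (len * 8)) - 1 does for a 32-bit access whose 64-bit GPR still
carries stale upper bits:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Pretend the guest performed a 32-bit MMIO write, but the 64-bit GPR
     * still holds stale data in its upper half. */
    unsigned long val = 0xdeadbeef00000084UL;
    unsigned int len = 4;                  /* access size in bytes */

    if ( len < sizeof(val) )
        val &= (1UL << (len * 8)) - 1;     /* keep only the written bytes */

    printf("%#lx\n", val);                 /* prints 0x84 */
    return 0;
}

The len < sizeof(val) guard also matters on arm32, where unsigned long is
32 bits and a shift by 32 would be undefined behaviour.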
On 03/09/2024 11:30 am, Michal Orzel wrote:
> On 02/09/2024 12:03, Andrew Cooper wrote:
>> These are all loops over a scalar value, and don't need to call general bitop
>> helpers behind the scenes.
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Julien Grall <julien@xen.org>
>> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> CC: Bertrand Marquis <bertrand.marquis@arm.com>
>> CC: Michal Orzel <michal.orzel@amd.com>
>>
>> Slightly RFC. It's unclear whether len is the size of the register, or the
>> size of the access. For sub-GPR accesses, won't the upper bits be clear
>> anyway? If so, this can be simplified further.
> See dispatch_mmio_write(). "len" refers to the access size (i.e. 1/4/8 bytes).
> Each register is defined with a region access, i.e. VGIC_ACCESS_32bit, which is
> compared against the actual access. In the current code there is no register
> with an 8-byte access. If there is a mismatch, a data abort is injected.
> Also, the top 32 bits are not cleared anywhere, so I don't think we can drop
> the masking. @Julien?

Ok, so it is necessary right now to have the clamping logic in every
callback.

However, given that the size is validated before dispatching, clamping
once in dispatch_mmio_write() would save a lot of repeated code in the
callbacks.

i.e., I think this:

diff --git a/xen/arch/arm/vgic/vgic-mmio.c b/xen/arch/arm/vgic/vgic-mmio.c
index bd4caf7fc326..e8b9805a0b2c 100644
--- a/xen/arch/arm/vgic/vgic-mmio.c
+++ b/xen/arch/arm/vgic/vgic-mmio.c
@@ -590,6 +590,9 @@ static int dispatch_mmio_write(struct vcpu *vcpu, mmio_info_t *info,
     if ( !region )
         return 0;
 
+    if ( len < sizeof(data) )
+        data &= (1UL << (len * 8)) - 1;
+
     switch (iodev->iodev_type)
     {
     case IODEV_DIST:

would work to replace every if() introduced below.

~Andrew

> [...]
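If the clamp does move into dispatch_mmio_write() as above, each callback could
then drop its local if() and loop over val directly. A sketch of the result,
using the senable hunk from the patch as the example (context lines
approximate):

--- a/xen/arch/arm/vgic/vgic-mmio.c
+++ b/xen/arch/arm/vgic/vgic-mmio.c
@@ ... @@ void vgic_mmio_write_senable(struct vcpu *vcpu,
     uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
 
-    if ( len < sizeof(val) )
-        val &= (1UL << (len * 8)) - 1;
-
+    /* val is already clamped to the access width by the dispatcher. */
     for_each_set_bit ( i, val )
     {
         struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);

The same simplification would apply to the other five callbacks touched by the
patch.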
Hi,

On 03/09/2024 14:19, Andrew Cooper wrote:
> On 03/09/2024 11:30 am, Michal Orzel wrote:
>> On 02/09/2024 12:03, Andrew Cooper wrote:
>>> These are all loops over a scalar value, and don't need to call general bitop
>>> helpers behind the scenes.
>>>
>>> No functional change.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> ---
>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>> CC: Julien Grall <julien@xen.org>
>>> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>>> CC: Bertrand Marquis <bertrand.marquis@arm.com>
>>> CC: Michal Orzel <michal.orzel@amd.com>
>>>
>>> Slightly RFC. It's unclear whether len is the size of the register, or the
>>> size of the access. For sub-GPR accesses, won't the upper bits be clear
>>> anyway? If so, this can be simplified further.
>> See dispatch_mmio_write(). "len" refers to the access size (i.e. 1/4/8 bytes).
>> Each register is defined with a region access, i.e. VGIC_ACCESS_32bit, which is
>> compared against the actual access. In the current code there is no register
>> with an 8-byte access. If there is a mismatch, a data abort is injected.
>> Also, the top 32 bits are not cleared anywhere, so I don't think we can drop
>> the masking. @Julien?

That's correct, there is no masking in the I/O dispatch helpers.

> Ok, so it is necessary right now to have the clamping logic in every
> callback.
>
> However, given that the size is validated before dispatching, clamping
> once in dispatch_mmio_write() would save a lot of repeated code in the
> callbacks.
>
> i.e., I think this:
>
> diff --git a/xen/arch/arm/vgic/vgic-mmio.c b/xen/arch/arm/vgic/vgic-mmio.c
> index bd4caf7fc326..e8b9805a0b2c 100644
> --- a/xen/arch/arm/vgic/vgic-mmio.c
> +++ b/xen/arch/arm/vgic/vgic-mmio.c
> @@ -590,6 +590,9 @@ static int dispatch_mmio_write(struct vcpu *vcpu, mmio_info_t *info,
>      if ( !region )
>          return 0;
>
> +    if ( len < sizeof(data) )
> +        data &= (1UL << (len * 8)) - 1;
> +

I think it would make sense to move the masking one level higher, in
handle_write() (arch/arm/io.c). So this would cover all the emulation helpers.

Cheers,
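No code was posted for that variant. A rough sketch of what it could look like
in handle_write() follows; the field and helper names (hsr_dabt, get_user_reg,
dabt.size as log2 of the access size in bytes, IO_HANDLED/IO_ABORT) are assumed
from arch/arm/io.c and may not match the actual code exactly:

--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ ... @@ handle_write(...)
     const struct hsr_dabt dabt = info->dabt;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
     register_t r = get_user_reg(regs, dabt.reg);
+    unsigned int size = 1U << dabt.size;        /* access size in bytes */
+
+    /* Clamp the written value to the access width once, for every
+     * emulation handler, rather than in each vGIC callback. */
+    if ( size < sizeof(r) )
+        r &= (1UL << (size * 8)) - 1;
 
     return handler->ops->write(v, info, r, handler->priv)
            ? IO_HANDLED : IO_ABORT;

Either way, the per-callback masking introduced by the patch below would become
redundant.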
diff --git a/xen/arch/arm/vgic/vgic-mmio-v2.c b/xen/arch/arm/vgic/vgic-mmio-v2.c
index 670b335db2c3..42fac0403f07 100644
--- a/xen/arch/arm/vgic/vgic-mmio-v2.c
+++ b/xen/arch/arm/vgic/vgic-mmio-v2.c
@@ -90,7 +90,6 @@ static void vgic_mmio_write_sgir(struct vcpu *source_vcpu,
     unsigned int intid = val & GICD_SGI_INTID_MASK;
     unsigned long targets = (val & GICD_SGI_TARGET_MASK) >>
                             GICD_SGI_TARGET_SHIFT;
-    unsigned int vcpu_id;
 
     switch ( val & GICD_SGI_TARGET_LIST_MASK )
     {
@@ -108,7 +107,7 @@ static void vgic_mmio_write_sgir(struct vcpu *source_vcpu,
         return;
     }
 
-    bitmap_for_each ( vcpu_id, &targets, 8 )
+    for_each_set_bit ( vcpu_id, (uint8_t)targets )
     {
         struct vcpu *vcpu = d->vcpu[vcpu_id];
         struct vgic_irq *irq = vgic_get_irq(d, vcpu, intid);
diff --git a/xen/arch/arm/vgic/vgic-mmio.c b/xen/arch/arm/vgic/vgic-mmio.c
index bd4caf7fc326..f7c336a238ab 100644
--- a/xen/arch/arm/vgic/vgic-mmio.c
+++ b/xen/arch/arm/vgic/vgic-mmio.c
@@ -69,9 +69,11 @@ void vgic_mmio_write_senable(struct vcpu *vcpu,
                              unsigned long val)
 {
     uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
-    unsigned int i;
 
-    bitmap_for_each ( i, &val, len * 8 )
+    if ( len < sizeof(val) )
+        val &= (1UL << (len * 8)) - 1;
+
+    for_each_set_bit ( i, val )
     {
         struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
         unsigned long flags;
@@ -114,9 +116,11 @@ void vgic_mmio_write_cenable(struct vcpu *vcpu,
                              unsigned long val)
 {
     uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
-    unsigned int i;
 
-    bitmap_for_each ( i, &val, len * 8 )
+    if ( len < sizeof(val) )
+        val &= (1UL << (len * 8)) - 1;
+
+    for_each_set_bit ( i, val )
     {
         struct vgic_irq *irq;
         unsigned long flags;
@@ -182,11 +186,13 @@ void vgic_mmio_write_spending(struct vcpu *vcpu,
                               unsigned long val)
 {
     uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
-    unsigned int i;
     unsigned long flags;
     irq_desc_t *desc;
 
-    bitmap_for_each ( i, &val, len * 8 )
+    if ( len < sizeof(val) )
+        val &= (1UL << (len * 8)) - 1;
+
+    for_each_set_bit ( i, val )
     {
         struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
 
@@ -230,11 +236,13 @@ void vgic_mmio_write_cpending(struct vcpu *vcpu,
                               unsigned long val)
 {
     uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
-    unsigned int i;
     unsigned long flags;
     irq_desc_t *desc;
 
-    bitmap_for_each ( i, &val, len * 8 )
+    if ( len < sizeof(val) )
+        val &= (1UL << (len * 8)) - 1;
+
+    for_each_set_bit ( i, val )
     {
         struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
 
@@ -326,9 +334,11 @@ void vgic_mmio_write_cactive(struct vcpu *vcpu,
                              unsigned long val)
 {
     uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
-    unsigned int i;
 
-    bitmap_for_each ( i, &val, len * 8 )
+    if ( len < sizeof(val) )
+        val &= (1UL << (len * 8)) - 1;
+
+    for_each_set_bit ( i, val )
     {
         struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
 
@@ -356,9 +366,11 @@ void vgic_mmio_write_sactive(struct vcpu *vcpu,
                              unsigned long val)
 {
     uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
-    unsigned int i;
 
-    bitmap_for_each ( i, &val, len * 8 )
+    if ( len < sizeof(val) )
+        val &= (1UL << (len * 8)) - 1;
+
+    for_each_set_bit ( i, val )
     {
         struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
 
These are all loops over a scalar value, and don't need to call general bitop
helpers behind the scenes.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>
CC: Michal Orzel <michal.orzel@amd.com>

Slightly RFC. It's unclear whether len is the size of the register, or the
size of the access. For sub-GPR accesses, won't the upper bits be clear
anyway? If so, this can be simplified further.

$ ../scripts/bloat-o-meter xen-syms-arm64-{before,after}
add/remove: 0/0 grow/shrink: 2/5 up/down: 20/-140 (-120)
Function                        old     new   delta
vgic_mmio_write_spending        320     332     +12
vgic_mmio_write_cpending        368     376      +8
vgic_mmio_write_sactive         192     176     -16
vgic_mmio_write_cactive         192     176     -16
vgic_mmio_write_cenable         316     288     -28
vgic_mmio_write_senable         320     284     -36
vgic_mmio_write_sgir            344     300     -44

$ ../scripts/bloat-o-meter xen-syms-arm32-{before,after}
add/remove: 0/0 grow/shrink: 0/4 up/down: 0/-32 (-32)
Function                        old     new   delta
vgic_mmio_write_sactive         204     200      -4
vgic_mmio_write_cpending        464     460      -4
vgic_mmio_write_cactive         204     200      -4
vgic_mmio_write_sgir            336     316     -20
---
 xen/arch/arm/vgic/vgic-mmio-v2.c |  3 +--
 xen/arch/arm/vgic/vgic-mmio.c    | 36 +++++++++++++++++++++-----------
 2 files changed, 25 insertions(+), 14 deletions(-)
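Note that the patch also drops the local "unsigned int i" / "unsigned int
vcpu_id" declarations, which suggests for_each_set_bit() declares its own
iterator and takes the scalar by value rather than a pointer plus a bit count.
The underlying pattern is roughly the following standalone sketch (illustrative
only, not Xen's actual macro):

#include <stdio.h>

int main(void)
{
    unsigned long val = 0x8a;   /* bits 1, 3 and 7 set */

    /* Clear the lowest set bit each iteration; ctz gives its index. */
    for ( unsigned long t = val; t; t &= t - 1 )
        printf("bit %d is set\n", __builtin_ctzl(t));

    return 0;
}

Because the compiler sees a plain scalar instead of a pointer into a bitmap, it
can keep the value in a register and avoid the generic bitop helpers, which is
presumably where most of the bloat-o-meter deltas above come from.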