diff mbox series

[2/3] xen/arm: Defer GICv2 CPU interface mapping until the first access

Message ID 20230116015820.1269387-3-Henry.Wang@arm.com (mailing list archive)
State Superseded
Headers show
Series P2M improvements for Arm | expand

Commit Message

Henry Wang Jan. 16, 2023, 1:58 a.m. UTC
Currently, the mapping of the GICv2 CPU interface is created in
arch_domain_create(). This causes problems when populating and
freeing the domain P2M pages pool. For example, 16 P2M pages are
required by default in p2m_init() to cope with the P2M mapping of
the 8KB GICv2 CPU interface area, and these 16 P2M pages complicate
the P2M destruction in the failure path of arch_domain_create().

As per the discussion in [1], and similarly to the MMIO access
handling for ACPI, this patch defers the GICv2 CPU interface mapping
until the first MMIO access. This is achieved by moving the GICv2 CPU
interface mapping code from vgic_v2_domain_init() to the stage-2 data
abort trap handling code. The original CPU interface size and virtual
CPU interface base address are now saved in `struct vgic_dist` instead
of in local variables of vgic_v2_domain_init().

Note that the GICv2 changes introduced by this patch are not applied
to the "New vGIC" implementation, as the "New vGIC" is not used. Also,
since the hardware domain (Dom0) has an unlimited-size P2M pool,
gicv2_map_hwdom_extra_mappings() is not touched by this patch.

[1] https://lore.kernel.org/xen-devel/e6643bfc-5bdf-f685-1b68-b28d341071c1@xen.org/

Signed-off-by: Henry Wang <Henry.Wang@arm.com>
---
 xen/arch/arm/include/asm/vgic.h |  2 ++
 xen/arch/arm/traps.c            | 19 ++++++++++++++++---
 xen/arch/arm/vgic-v2.c          | 25 ++++++-------------------
 3 files changed, 24 insertions(+), 22 deletions(-)

Comments

Michal Orzel Jan. 20, 2023, 9:54 a.m. UTC | #1
Hi Henry,

On 16/01/2023 02:58, Henry Wang wrote:
> 
> 
> Currently, the mapping of the GICv2 CPU interface is created in
> arch_domain_create(). This causes some troubles in populating and
> freeing of the domain P2M pages pool. For example, a default 16
> P2M pages are required in p2m_init() to cope with the P2M mapping
> of 8KB GICv2 CPU interface area, and these 16 P2M pages would cause
> the complexity of P2M destroy in the failure path of
> arch_domain_create().
> 
> As per discussion in [1], similarly as the MMIO access for ACPI, this
> patch defers the GICv2 CPU interface mapping until the first MMIO
> access. This is achieved by moving the GICv2 CPU interface mapping
> code from vgic_v2_domain_init() to the stage-2 data abort trap handling
> code. The original CPU interface size and virtual CPU interface base
> address is now saved in `struct vgic_dist` instead of the local
> variable of vgic_v2_domain_init().
> 
> Note that GICv2 changes introduced by this patch is not applied to the
> "New vGIC" implementation, as the "New vGIC" is not used. Also since
The fact that the "New vGIC" is not used very often and its development
is still in progress does not mean that we can make changes resulting
in a build failure when CONFIG_NEW_VGIC is enabled, which is the case
with your patch. So this needs to be fixed.

> the hardware domain (Dom0) has an unlimited size P2M pool, the
> gicv2_map_hwdom_extra_mappings() is also not touched by this patch.
> 
> [1] https://lore.kernel.org/xen-devel/e6643bfc-5bdf-f685-1b68-b28d341071c1@xen.org/
> 
> Signed-off-by: Henry Wang <Henry.Wang@arm.com>
> ---
>  xen/arch/arm/include/asm/vgic.h |  2 ++
>  xen/arch/arm/traps.c            | 19 ++++++++++++++++---
>  xen/arch/arm/vgic-v2.c          | 25 ++++++-------------------
>  3 files changed, 24 insertions(+), 22 deletions(-)
> 
> diff --git a/xen/arch/arm/include/asm/vgic.h b/xen/arch/arm/include/asm/vgic.h
> index 3d44868039..1d37c291e1 100644
> --- a/xen/arch/arm/include/asm/vgic.h
> +++ b/xen/arch/arm/include/asm/vgic.h
> @@ -153,6 +153,8 @@ struct vgic_dist {
>      /* Base address for guest GIC */
>      paddr_t dbase; /* Distributor base address */
>      paddr_t cbase; /* CPU interface base address */
> +    paddr_t csize; /* CPU interface size */
> +    paddr_t vbase; /* virtual CPU interface base address */
Could you swap them so that base address variables are grouped?

>  #ifdef CONFIG_GICV3
>      /* GIC V3 addressing */
>      /* List of contiguous occupied by the redistributors */
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 061c92acbd..d98f166050 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1787,9 +1787,12 @@ static inline bool hpfar_is_valid(bool s1ptw, uint8_t fsc)
>  }
> 
>  /*
> - * When using ACPI, most of the MMIO regions will be mapped on-demand
> - * in stage-2 page tables for the hardware domain because Xen is not
> - * able to know from the EFI memory map the MMIO regions.
> + * Try to map the MMIO regions for some special cases:
> + * 1. When using ACPI, most of the MMIO regions will be mapped on-demand
> + *    in stage-2 page tables for the hardware domain because Xen is not
> + *    able to know from the EFI memory map the MMIO regions.
> + * 2. For guests using GICv2, the GICv2 CPU interface mapping is created
> + *    on the first access of the MMIO region.
>   */
>  static bool try_map_mmio(gfn_t gfn)
>  {
> @@ -1798,6 +1801,16 @@ static bool try_map_mmio(gfn_t gfn)
>      /* For the hardware domain, all MMIOs are mapped with GFN == MFN */
>      mfn_t mfn = _mfn(gfn_x(gfn));
> 
> +    /*
> +     * Map the GICv2 virtual cpu interface in the gic cpu interface
NIT: in all the other places you use CPU (capital letters)

> +     * region of the guest on the first access of the MMIO region.
> +     */
> +    if ( d->arch.vgic.version == GIC_V2 &&
> +         gfn_x(gfn) == gfn_x(gaddr_to_gfn(d->arch.vgic.cbase)) )
There is a macro gfn_eq() that you can use to avoid open-coding it.

> +        return !map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
At this point you can use just gfn instead of gaddr_to_gfn(d->arch.vgic.cbase)

> +                                 d->arch.vgic.csize / PAGE_SIZE,
> +                                 maddr_to_mfn(d->arch.vgic.vbase));
> +
>      /*
>       * Device-Tree should already have everything mapped when building
>       * the hardware domain.
> diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> index 0026cb4360..21e14a5a6f 100644
> --- a/xen/arch/arm/vgic-v2.c
> +++ b/xen/arch/arm/vgic-v2.c
> @@ -644,10 +644,6 @@ static int vgic_v2_vcpu_init(struct vcpu *v)
> 
>  static int vgic_v2_domain_init(struct domain *d)
>  {
> -    int ret;
> -    paddr_t csize;
> -    paddr_t vbase;
> -
>      /*
>       * The hardware domain and direct-mapped domain both get the hardware
>       * address.
> @@ -667,8 +663,8 @@ static int vgic_v2_domain_init(struct domain *d)
>           * aligned to PAGE_SIZE.
>           */
>          d->arch.vgic.cbase = vgic_v2_hw.cbase;
> -        csize = vgic_v2_hw.csize;
> -        vbase = vgic_v2_hw.vbase;
> +        d->arch.vgic.csize = vgic_v2_hw.csize;
> +        d->arch.vgic.vbase = vgic_v2_hw.vbase;
>      }
>      else if ( is_domain_direct_mapped(d) )
>      {
> @@ -683,8 +679,8 @@ static int vgic_v2_domain_init(struct domain *d)
>           */
>          d->arch.vgic.dbase = vgic_v2_hw.dbase;
>          d->arch.vgic.cbase = vgic_v2_hw.cbase;
> -        csize = GUEST_GICC_SIZE;
> -        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
> +        d->arch.vgic.csize = GUEST_GICC_SIZE;
> +        d->arch.vgic.vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
>      }
>      else
>      {
> @@ -697,19 +693,10 @@ static int vgic_v2_domain_init(struct domain *d)
>           */
>          BUILD_BUG_ON(GUEST_GICC_SIZE != SZ_8K);
>          d->arch.vgic.cbase = GUEST_GICC_BASE;
> -        csize = GUEST_GICC_SIZE;
> -        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
> +        d->arch.vgic.csize = GUEST_GICC_SIZE;
> +        d->arch.vgic.vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
>      }
> 
> -    /*
> -     * Map the gic virtual cpu interface in the gic cpu interface
> -     * region of the guest.
> -     */
> -    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
> -                           csize / PAGE_SIZE, maddr_to_mfn(vbase));
> -    if ( ret )
> -        return ret;
> -
Maybe adding a comment like "Mapping of the virtual CPU interface is
deferred until the first access" would be helpful.

~Michal
Henry Wang Jan. 27, 2023, 11:11 a.m. UTC | #2
Hi Michal,

> -----Original Message-----
> From: Michal Orzel <michal.orzel@amd.com>
> Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until
> the first access
> 
> Hi Henry,
> 
> On 16/01/2023 02:58, Henry Wang wrote:
> > Note that GICv2 changes introduced by this patch is not applied to the
> > "New vGIC" implementation, as the "New vGIC" is not used. Also since
> The fact that "New vGIC" is not used very often and its work is in-progress
> does not mean that we can implement changes resulting in a build failure
> when enabling CONFIG_NEW_VGIC, which is the case with your patch.
> So this needs to be fixed.

I will add the "New vGIC" changes in v2.

> 
> > @@ -153,6 +153,8 @@ struct vgic_dist {
> >      /* Base address for guest GIC */
> >      paddr_t dbase; /* Distributor base address */
> >      paddr_t cbase; /* CPU interface base address */
> > +    paddr_t csize; /* CPU interface size */
> > +    paddr_t vbase; /* virtual CPU interface base address */
> Could you swap them so that base address variables are grouped?

Sure, my original thought was to group the CPU interface related fields,
but since you prefer grouping the base addresses, I will follow your
suggestion.

> 
> >  #ifdef CONFIG_GICV3
> >      /* GIC V3 addressing */
> >      /* List of contiguous occupied by the redistributors */
> > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > index 061c92acbd..d98f166050 100644
> > --- a/xen/arch/arm/traps.c
> > +++ b/xen/arch/arm/traps.c
> > @@ -1787,9 +1787,12 @@ static inline bool hpfar_is_valid(bool s1ptw,
> uint8_t fsc)
> >  }
> >
> >  /*
> > - * When using ACPI, most of the MMIO regions will be mapped on-
> demand
> > - * in stage-2 page tables for the hardware domain because Xen is not
> > - * able to know from the EFI memory map the MMIO regions.
> > + * Try to map the MMIO regions for some special cases:
> > + * 1. When using ACPI, most of the MMIO regions will be mapped on-
> demand
> > + *    in stage-2 page tables for the hardware domain because Xen is not
> > + *    able to know from the EFI memory map the MMIO regions.
> > + * 2. For guests using GICv2, the GICv2 CPU interface mapping is created
> > + *    on the first access of the MMIO region.
> >   */
> >  static bool try_map_mmio(gfn_t gfn)
> >  {
> > @@ -1798,6 +1801,16 @@ static bool try_map_mmio(gfn_t gfn)
> >      /* For the hardware domain, all MMIOs are mapped with GFN == MFN
> */
> >      mfn_t mfn = _mfn(gfn_x(gfn));
> >
> > +    /*
> > +     * Map the GICv2 virtual cpu interface in the gic cpu interface
> NIT: in all the other places you use CPU (capital letters)

Oh, good catch, thank you. I think this part is the same as the original
in-code comment, but since I am touching this part anyway, it would be
good to correct it.

> 
> > +     * region of the guest on the first access of the MMIO region.
> > +     */
> > +    if ( d->arch.vgic.version == GIC_V2 &&
> > +         gfn_x(gfn) == gfn_x(gaddr_to_gfn(d->arch.vgic.cbase)) )
> There is a macro gfn_eq() that you can use to avoid open-coding it.

Thanks! I will fix in v2.

> 
> > +        return !map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
> At this point you can use just gfn instead of gaddr_to_gfn(d->arch.vgic.cbase)

Will fix in v2.

> 
> > +                                 d->arch.vgic.csize / PAGE_SIZE,
> > +                                 maddr_to_mfn(d->arch.vgic.vbase));
> > +
> >      /*
> >       * Device-Tree should already have everything mapped when building
> >       * the hardware domain.
> > diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> > index 0026cb4360..21e14a5a6f 100644
> > --- a/xen/arch/arm/vgic-v2.c
> > +++ b/xen/arch/arm/vgic-v2.c
> > @@ -644,10 +644,6 @@ static int vgic_v2_vcpu_init(struct vcpu *v)
> >
> >  static int vgic_v2_domain_init(struct domain *d)
> >  {
> > -    int ret;
> > -    paddr_t csize;
> > -    paddr_t vbase;
> > -
> >      /*
> >       * The hardware domain and direct-mapped domain both get the
> hardware
> >       * address.
> > @@ -667,8 +663,8 @@ static int vgic_v2_domain_init(struct domain *d)
> >           * aligned to PAGE_SIZE.
> >           */
> >          d->arch.vgic.cbase = vgic_v2_hw.cbase;
> > -        csize = vgic_v2_hw.csize;
> > -        vbase = vgic_v2_hw.vbase;
> > +        d->arch.vgic.csize = vgic_v2_hw.csize;
> > +        d->arch.vgic.vbase = vgic_v2_hw.vbase;
> >      }
> >      else if ( is_domain_direct_mapped(d) )
> >      {
> > @@ -683,8 +679,8 @@ static int vgic_v2_domain_init(struct domain *d)
> >           */
> >          d->arch.vgic.dbase = vgic_v2_hw.dbase;
> >          d->arch.vgic.cbase = vgic_v2_hw.cbase;
> > -        csize = GUEST_GICC_SIZE;
> > -        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
> > +        d->arch.vgic.csize = GUEST_GICC_SIZE;
> > +        d->arch.vgic.vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
> >      }
> >      else
> >      {
> > @@ -697,19 +693,10 @@ static int vgic_v2_domain_init(struct domain *d)
> >           */
> >          BUILD_BUG_ON(GUEST_GICC_SIZE != SZ_8K);
> >          d->arch.vgic.cbase = GUEST_GICC_BASE;
> > -        csize = GUEST_GICC_SIZE;
> > -        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
> > +        d->arch.vgic.csize = GUEST_GICC_SIZE;
> > +        d->arch.vgic.vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
> >      }
> >
> > -    /*
> > -     * Map the gic virtual cpu interface in the gic cpu interface
> > -     * region of the guest.
> > -     */
> > -    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
> > -                           csize / PAGE_SIZE, maddr_to_mfn(vbase));
> > -    if ( ret )
> > -        return ret;
> > -
> Maybe adding some comment like "Mapping of the virtual CPU interface is
> deferred until first access"
> would be helpful.

Sure, I will add the comment in v2.

Kind regards,
Henry

> 
> ~Michal
Julien Grall Jan. 27, 2023, 11:20 a.m. UTC | #3
Hi,

On 27/01/2023 11:11, Henry Wang wrote:
>> -----Original Message-----
>> From: Michal Orzel <michal.orzel@amd.com>
>> Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until
>> the first access
>>
>> Hi Henry,
>>
>> On 16/01/2023 02:58, Henry Wang wrote:
>>> Note that GICv2 changes introduced by this patch is not applied to the
>>> "New vGIC" implementation, as the "New vGIC" is not used. Also since
>> The fact that "New vGIC" is not used very often and its work is in-progress
>> does not mean that we can implement changes resulting in a build failure
>> when enabling CONFIG_NEW_VGIC, which is the case with your patch.
>> So this needs to be fixed.
> 
> I will add the "New vGIC" changes in v2.
> 
>>
>>> @@ -153,6 +153,8 @@ struct vgic_dist {
>>>       /* Base address for guest GIC */
>>>       paddr_t dbase; /* Distributor base address */
>>>       paddr_t cbase; /* CPU interface base address */
>>> +    paddr_t csize; /* CPU interface size */
>>> +    paddr_t vbase; /* virtual CPU interface base address */
>> Could you swap them so that base address variables are grouped?
> 
> Sure, my original thought was grouping the CPU interface related fields but
> since you prefer grouping the base address, I will follow your suggestion.

I would actually prefer your approach because it is easier to associate 
the size with the base.

An alternative would be to use a structure combining the base and size,
so the association is even clearer.

I don't have a strong opinion on either of the two approaches I suggested.

Cheers,
Henry Wang Jan. 27, 2023, 11:30 a.m. UTC | #4
Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until
> the first access
> 
> Hi,
> 
> >>> @@ -153,6 +153,8 @@ struct vgic_dist {
> >>>       /* Base address for guest GIC */
> >>>       paddr_t dbase; /* Distributor base address */
> >>>       paddr_t cbase; /* CPU interface base address */
> >>> +    paddr_t csize; /* CPU interface size */
> >>> +    paddr_t vbase; /* virtual CPU interface base address */
> >> Could you swap them so that base address variables are grouped?
> >
> > Sure, my original thought was grouping the CPU interface related fields but
> > since you prefer grouping the base address, I will follow your suggestion.
> 
> I would actually prefer your approach because it is easier to associate
> the size with the base.
> 
> An alternative would be to use a structure to combine the base/size. So
> it is even clearer the association.
> 
> I don't have a strong opinion on either of the two approach I suggested.

Maybe we can do something like this:
```
paddr_t dbase; /* Distributor base address */
paddr_t vbase; /* virtual CPU interface base address */
paddr_t cbase; /* CPU interface base address */
paddr_t csize; /* CPU interface size */    
```

So we can ensure both "base address variables are grouped" and
"CPU interface variables are grouped".

If you don't like this, I would prefer the way I am currently doing it,
as personally I think an extra structure would be slight overkill :)

Thanks.

Kind regards,
Henry

> 
> Cheers,
> 
> --
> Julien Grall
Julien Grall Jan. 27, 2023, 11:35 a.m. UTC | #5
On 27/01/2023 11:30, Henry Wang wrote:
> Hi Julien,
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until
>> the first access
>>
>> Hi,
>>
>>>>> @@ -153,6 +153,8 @@ struct vgic_dist {
>>>>>        /* Base address for guest GIC */
>>>>>        paddr_t dbase; /* Distributor base address */
>>>>>        paddr_t cbase; /* CPU interface base address */
>>>>> +    paddr_t csize; /* CPU interface size */
>>>>> +    paddr_t vbase; /* virtual CPU interface base address */
>>>> Could you swap them so that base address variables are grouped?
>>>
>>> Sure, my original thought was grouping the CPU interface related fields but
>>> since you prefer grouping the base address, I will follow your suggestion.
>>
>> I would actually prefer your approach because it is easier to associate
>> the size with the base.
>>
>> An alternative would be to use a structure to combine the base/size. So
>> it is even clearer the association.
>>
>> I don't have a strong opinion on either of the two approach I suggested.
> 
> Maybe we can do something like this:
> ```
> paddr_t dbase; /* Distributor base address */
> paddr_t vbase; /* virtual CPU interface base address */
> paddr_t cbase; /* CPU interface base address */
> paddr_t csize; /* CPU interface size */
> ```
> 
> So we can ensure both "base address variables are grouped" and
> "CPU interface variables are grouped".
> 
> If you don't like this, I would prefer the way I am currently doing, as
> personally I think an extra structure would slightly be an overkill :)

This is really a matter of taste. My preference is your initial
approach, because I find it strange to have the virtual CPU interface
information interleaved with the physical one.

Cheers,
Henry Wang Jan. 27, 2023, 11:39 a.m. UTC | #6
Hi Julien,

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until
> the first access
> >>>>> @@ -153,6 +153,8 @@ struct vgic_dist {
> >>>>>        /* Base address for guest GIC */
> >>>>>        paddr_t dbase; /* Distributor base address */
> >>>>>        paddr_t cbase; /* CPU interface base address */
> >>>>> +    paddr_t csize; /* CPU interface size */
> >>>>> +    paddr_t vbase; /* virtual CPU interface base address */
> >>>> Could you swap them so that base address variables are grouped?
> >>>
> >>> Sure, my original thought was grouping the CPU interface related fields
> but
> >>> since you prefer grouping the base address, I will follow your suggestion.
> >>
> >> I would actually prefer your approach because it is easier to associate
> >> the size with the base.
> >>
> >> An alternative would be to use a structure to combine the base/size. So
> >> it is even clearer the association.
> >>
> >> I don't have a strong opinion on either of the two approach I suggested.
> >
> > Maybe we can do something like this:
> > ```
> > paddr_t dbase; /* Distributor base address */
> > paddr_t vbase; /* virtual CPU interface base address */
> > paddr_t cbase; /* CPU interface base address */
> > paddr_t csize; /* CPU interface size */
> > ```
> >
> > So we can ensure both "base address variables are grouped" and
> > "CPU interface variables are grouped".
> >
> > If you don't like this, I would prefer the way I am currently doing, as
> > personally I think an extra structure would slightly be an overkill :)
> 
> This is really a matter of taste here.

Indeed,

> My preference is your initial
> approach because I find strange to have virtual CPU interface
> information the physical one.

then I will keep it as it is if there is no strong objection from Michal.

Kind regards,
Henry

> 
> Cheers,
> 
> --
> Julien Grall
Michal Orzel Jan. 27, 2023, 11:52 a.m. UTC | #7
Hi Henry,

On 27/01/2023 12:39, Henry Wang wrote:
> 
> 
> Hi Julien,
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Subject: Re: [PATCH 2/3] xen/arm: Defer GICv2 CPU interface mapping until
>> the first access
>>>>>>> @@ -153,6 +153,8 @@ struct vgic_dist {
>>>>>>>        /* Base address for guest GIC */
>>>>>>>        paddr_t dbase; /* Distributor base address */
>>>>>>>        paddr_t cbase; /* CPU interface base address */
>>>>>>> +    paddr_t csize; /* CPU interface size */
>>>>>>> +    paddr_t vbase; /* virtual CPU interface base address */
>>>>>> Could you swap them so that base address variables are grouped?
>>>>>
>>>>> Sure, my original thought was grouping the CPU interface related fields
>> but
>>>>> since you prefer grouping the base address, I will follow your suggestion.
>>>>
>>>> I would actually prefer your approach because it is easier to associate
>>>> the size with the base.
>>>>
>>>> An alternative would be to use a structure to combine the base/size. So
>>>> it is even clearer the association.
>>>>
>>>> I don't have a strong opinion on either of the two approach I suggested.
>>>
>>> Maybe we can do something like this:
>>> ```
>>> paddr_t dbase; /* Distributor base address */
>>> paddr_t vbase; /* virtual CPU interface base address */
>>> paddr_t cbase; /* CPU interface base address */
>>> paddr_t csize; /* CPU interface size */
>>> ```
>>>
>>> So we can ensure both "base address variables are grouped" and
>>> "CPU interface variables are grouped".
>>>
>>> If you don't like this, I would prefer the way I am currently doing, as
>>> personally I think an extra structure would slightly be an overkill :)
>>
>> This is really a matter of taste here.
> 
> Indeed,
> 
>> My preference is your initial
>> approach because I find strange to have virtual CPU interface
>> information the physical one.
> 
> then I will keep it as it is if there is no strong objection from Michal.
there are none. It was just a suggestion.

~Michal

Patch

diff --git a/xen/arch/arm/include/asm/vgic.h b/xen/arch/arm/include/asm/vgic.h
index 3d44868039..1d37c291e1 100644
--- a/xen/arch/arm/include/asm/vgic.h
+++ b/xen/arch/arm/include/asm/vgic.h
@@ -153,6 +153,8 @@  struct vgic_dist {
     /* Base address for guest GIC */
     paddr_t dbase; /* Distributor base address */
     paddr_t cbase; /* CPU interface base address */
+    paddr_t csize; /* CPU interface size */
+    paddr_t vbase; /* virtual CPU interface base address */
 #ifdef CONFIG_GICV3
     /* GIC V3 addressing */
     /* List of contiguous occupied by the redistributors */
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 061c92acbd..d98f166050 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1787,9 +1787,12 @@  static inline bool hpfar_is_valid(bool s1ptw, uint8_t fsc)
 }
 
 /*
- * When using ACPI, most of the MMIO regions will be mapped on-demand
- * in stage-2 page tables for the hardware domain because Xen is not
- * able to know from the EFI memory map the MMIO regions.
+ * Try to map the MMIO regions for some special cases:
+ * 1. When using ACPI, most of the MMIO regions will be mapped on-demand
+ *    in stage-2 page tables for the hardware domain because Xen is not
+ *    able to know from the EFI memory map the MMIO regions.
+ * 2. For guests using GICv2, the GICv2 CPU interface mapping is created
+ *    on the first access of the MMIO region.
  */
 static bool try_map_mmio(gfn_t gfn)
 {
@@ -1798,6 +1801,16 @@  static bool try_map_mmio(gfn_t gfn)
     /* For the hardware domain, all MMIOs are mapped with GFN == MFN */
     mfn_t mfn = _mfn(gfn_x(gfn));
 
+    /*
+     * Map the GICv2 virtual cpu interface in the gic cpu interface
+     * region of the guest on the first access of the MMIO region.
+     */
+    if ( d->arch.vgic.version == GIC_V2 &&
+         gfn_x(gfn) == gfn_x(gaddr_to_gfn(d->arch.vgic.cbase)) )
+        return !map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
+                                 d->arch.vgic.csize / PAGE_SIZE,
+                                 maddr_to_mfn(d->arch.vgic.vbase));
+
     /*
      * Device-Tree should already have everything mapped when building
      * the hardware domain.
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 0026cb4360..21e14a5a6f 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -644,10 +644,6 @@  static int vgic_v2_vcpu_init(struct vcpu *v)
 
 static int vgic_v2_domain_init(struct domain *d)
 {
-    int ret;
-    paddr_t csize;
-    paddr_t vbase;
-
     /*
      * The hardware domain and direct-mapped domain both get the hardware
      * address.
@@ -667,8 +663,8 @@  static int vgic_v2_domain_init(struct domain *d)
          * aligned to PAGE_SIZE.
          */
         d->arch.vgic.cbase = vgic_v2_hw.cbase;
-        csize = vgic_v2_hw.csize;
-        vbase = vgic_v2_hw.vbase;
+        d->arch.vgic.csize = vgic_v2_hw.csize;
+        d->arch.vgic.vbase = vgic_v2_hw.vbase;
     }
     else if ( is_domain_direct_mapped(d) )
     {
@@ -683,8 +679,8 @@  static int vgic_v2_domain_init(struct domain *d)
          */
         d->arch.vgic.dbase = vgic_v2_hw.dbase;
         d->arch.vgic.cbase = vgic_v2_hw.cbase;
-        csize = GUEST_GICC_SIZE;
-        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
+        d->arch.vgic.csize = GUEST_GICC_SIZE;
+        d->arch.vgic.vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
     }
     else
     {
@@ -697,19 +693,10 @@  static int vgic_v2_domain_init(struct domain *d)
          */
         BUILD_BUG_ON(GUEST_GICC_SIZE != SZ_8K);
         d->arch.vgic.cbase = GUEST_GICC_BASE;
-        csize = GUEST_GICC_SIZE;
-        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
+        d->arch.vgic.csize = GUEST_GICC_SIZE;
+        d->arch.vgic.vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
     }
 
-    /*
-     * Map the gic virtual cpu interface in the gic cpu interface
-     * region of the guest.
-     */
-    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
-                           csize / PAGE_SIZE, maddr_to_mfn(vbase));
-    if ( ret )
-        return ret;
-
     register_mmio_handler(d, &vgic_v2_distr_mmio_handler, d->arch.vgic.dbase,
                           PAGE_SIZE, NULL);