
[RFC,v1,3/8] iommu/arm-smmu-v3-sva: Allocate new ASID from installed_smmus

Message ID 20230818021629.RFC.v1.3.I326c62dc062aed8d901d319aa665dbe983c7904c@changeid (mailing list archive)
State New, archived
Series Install domain onto multiple smmus

Commit Message

Michael Shavit Aug. 17, 2023, 6:16 p.m. UTC
Pick an ASID that is within the supported range of all SMMUs that the
domain is installed to.

Signed-off-by: Michael Shavit <mshavit@google.com>
---

 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c   | 23 +++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

Comments

Jason Gunthorpe Aug. 17, 2023, 6:38 p.m. UTC | #1
On Fri, Aug 18, 2023 at 02:16:25AM +0800, Michael Shavit wrote:
> Pick an ASID that is within the supported range of all SMMUs that the
> domain is installed to.
> 
> Signed-off-by: Michael Shavit <mshavit@google.com>
> ---

This seems like a pretty niche scenario; maybe we should just keep a
global for the max ASID?

Otherwise we need code to change the ASID, even for non-SVA domains,
when the domain is installed on different devices and the current ASID
is over the instance max.

Ideally the domain ASID would be selected at domain allocation time.

Jason
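
As an illustration of the "global max ASID" idea floated above, here is
a minimal sketch that shrinks a driver-global bound as instances are
probed; arm_smmu_asid_xa and asid_bits exist in the driver, but the
global variable and helpers below are hypothetical, and locking is
elided.

#include <linux/minmax.h>
#include <linux/xarray.h>

/*
 * Sketch only: keep one driver-global ASID bound and shrink it as
 * instances with fewer ASID bits are probed, so every allocation is
 * valid on all SMMUs in the system.
 */
static DEFINE_XARRAY_ALLOC1(arm_smmu_asid_xa);
static u32 arm_smmu_global_asid_bits = 16;	/* architectural maximum */

/* Hypothetical helper, called from each instance's probe path. */
static void arm_smmu_limit_global_asid_bits(u32 instance_asid_bits)
{
	arm_smmu_global_asid_bits = min(arm_smmu_global_asid_bits,
					instance_asid_bits);
}

/* Hypothetical helper: allocate an ASID from the shared, safe range. */
static int arm_smmu_alloc_global_asid(void *cd, u32 *asid)
{
	return xa_alloc(&arm_smmu_asid_xa, asid, cd,
			XA_LIMIT(1, (1 << arm_smmu_global_asid_bits) - 1),
			GFP_KERNEL);
}

The trade-off is that one instance with a small ASID space constrains
every domain in the system, even domains never attached to it.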
Michael Shavit Aug. 21, 2023, 9:31 a.m. UTC | #2
On Fri, Aug 18, 2023 at 2:38 AM Jason Gunthorpe <jgg@nvidia.com> wrote:
>
> On Fri, Aug 18, 2023 at 02:16:25AM +0800, Michael Shavit wrote:
> > Pick an ASID that is within the supported range of all SMMUs that the
> > domain is installed to.
> >
> > Signed-off-by: Michael Shavit <mshavit@google.com>
> > ---
>
> This seems like a pretty niche scenario, maybe we should just keep a
> global for the max ASID?
>
> Otherwise we need a code to change the ASID, even for non-SVA domains,
> when the domain is installed in different devices if the current ASID
> is over the instance max..

This RFC took the other easy way out for this problem by rejecting
the attach of a domain to a new SMMU if its currently assigned
ASID/VMID is out of that SMMU's range. But I'm not sure which of the
two options is the right trade-off. Especially if we move VMID to a
global allocator (which I plan to add for v2), setting a global
maximum of 256 for VMIDs sounds small.
Jason Gunthorpe Aug. 21, 2023, 11:54 a.m. UTC | #3
On Mon, Aug 21, 2023 at 05:31:23PM +0800, Michael Shavit wrote:
> On Fri, Aug 18, 2023 at 2:38 AM Jason Gunthorpe <jgg@nvidia.com> wrote:
> >
> > On Fri, Aug 18, 2023 at 02:16:25AM +0800, Michael Shavit wrote:
> > > Pick an ASID that is within the supported range of all SMMUs that the
> > > domain is installed to.
> > >
> > > Signed-off-by: Michael Shavit <mshavit@google.com>
> > > ---
> >
> > This seems like a pretty niche scenario, maybe we should just keep a
> > global for the max ASID?
> >
> > Otherwise we need a code to change the ASID, even for non-SVA domains,
> > when the domain is installed in different devices if the current ASID
> > is over the instance max..
> 
> This RFC took the other easy way out for this problem by rejecting
> attaching a domain if its currently assigned ASID/VMID
> is out of range when attaching to a new SMMU. But I'm not sure
> which of the two options is the right trade-off.
> Especially if we move VMID to a global allocator (which I plan to add
> for v2), setting a global maximum for VMID of 256 sounds small.

IMHO the simplest and best thing is to make both the vmid and asid
allocators local. Then a lot of these problems disappear.

Jason
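
A minimal sketch of what such per-instance ("local") allocators could
look like; the struct and helper below are illustrative stand-ins, not
the driver's actual types.

#include <linux/xarray.h>

/* Sketch only: each SMMU instance owns its own ASID/VMID spaces. */
struct arm_smmu_instance_sketch {
	struct xarray	asid_map;	/* xa_init_flags(..., XA_FLAGS_ALLOC1) */
	struct xarray	vmid_map;
	u32		asid_bits;
	u32		vmid_bits;
};

/* Hypothetical helper: allocate a cache tag valid for this instance only. */
static int arm_smmu_alloc_local_asid(struct arm_smmu_instance_sketch *smmu,
				     void *cookie, u32 *asid)
{
	return xa_alloc(&smmu->asid_map, asid, cookie,
			XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
}

A domain attached to several instances would then carry one tag per
instance, rather than a single tag that must fit every instance's
range.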
Michael Shavit Aug. 21, 2023, 1:38 p.m. UTC | #4
On Mon, Aug 21, 2023 at 7:54 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
>
> On Mon, Aug 21, 2023 at 05:31:23PM +0800, Michael Shavit wrote:
> > On Fri, Aug 18, 2023 at 2:38 AM Jason Gunthorpe <jgg@nvidia.com> wrote:
> > >
> > > On Fri, Aug 18, 2023 at 02:16:25AM +0800, Michael Shavit wrote:
> > > > Pick an ASID that is within the supported range of all SMMUs that the
> > > > domain is installed to.
> > > >
> > > > Signed-off-by: Michael Shavit <mshavit@google.com>
> > > > ---
> > >
> > > This seems like a pretty niche scenario, maybe we should just keep a
> > > global for the max ASID?
> > >
> > > Otherwise we need a code to change the ASID, even for non-SVA domains,
> > > when the domain is installed in different devices if the current ASID
> > > is over the instance max..
> >
> > This RFC took the other easy way out for this problem by rejecting
> > attaching a domain if its currently assigned ASID/VMID
> > is out of range when attaching to a new SMMU. But I'm not sure
> > which of the two options is the right trade-off.
> > Especially if we move VMID to a global allocator (which I plan to add
> > for v2), setting a global maximum for VMID of 256 sounds small.
>
> IMHO the simplest and best thing is to make both vmid and asid as
> local allocators. Then alot of these problems disappear

Well, that does sound like the most flexible option, but IMO quite a
lot more complicated.

I'll post a v2 RFC that removes the `iommu/arm-smmu-v3: Add list of
installed_smmus` patch and uses a flat master list in smmu_domain as
suggested by Robin, for comparison with the v1. But at a glance, using
a local allocator would require:
1. Keeping that patch so we can track the asid/vmid for a domain on a
per-SMMU-instance basis
2. Keeping a map in the smmu struct so that arm_smmu_share_asid can
find any arm_smmu_installed_smmu entries that need their asid updated
(by looping over every smmu that the domain in arm_smmu_mmu_notifier_get
is attached to, which at a glance looks headache-inducing because of
sva's piggybacking on the rid domain).
Jason Gunthorpe Aug. 21, 2023, 1:50 p.m. UTC | #5
On Mon, Aug 21, 2023 at 09:38:40PM +0800, Michael Shavit wrote:
> On Mon, Aug 21, 2023 at 7:54 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
> >
> > On Mon, Aug 21, 2023 at 05:31:23PM +0800, Michael Shavit wrote:
> > > On Fri, Aug 18, 2023 at 2:38 AM Jason Gunthorpe <jgg@nvidia.com> wrote:
> > > >
> > > > On Fri, Aug 18, 2023 at 02:16:25AM +0800, Michael Shavit wrote:
> > > > > Pick an ASID that is within the supported range of all SMMUs that the
> > > > > domain is installed to.
> > > > >
> > > > > Signed-off-by: Michael Shavit <mshavit@google.com>
> > > > > ---
> > > >
> > > > This seems like a pretty niche scenario, maybe we should just keep a
> > > > global for the max ASID?
> > > >
> > > > Otherwise we need a code to change the ASID, even for non-SVA domains,
> > > > when the domain is installed in different devices if the current ASID
> > > > is over the instance max..
> > >
> > > This RFC took the other easy way out for this problem by rejecting
> > > attaching a domain if its currently assigned ASID/VMID
> > > is out of range when attaching to a new SMMU. But I'm not sure
> > > which of the two options is the right trade-off.
> > > Especially if we move VMID to a global allocator (which I plan to add
> > > for v2), setting a global maximum for VMID of 256 sounds small.
> >
> > IMHO the simplest and best thing is to make both vmid and asid as
> > local allocators. Then alot of these problems disappear
> 
> Well that does sound like the most flexible, but IMO quite a lot more
> complicated.
> 
> I'll post a v2 RFC that removes the `iommu/arm-smmu-v3: Add list of
> installed_smmus` patch and uses a flat master list in smmu_domain as
> suggested by Robin, for comparison with the v1. But at a glance using a
> local allocator would require:

> 1. Keeping that patch so we can track the asid/vmid for a domain on a
> per smmu instance

You'd have to store the cache tag in the per-master struct on that
list and take it out of the domain struct.

Ie the list of attached masters contains the per-master cache tag
instead of a global cache tag.

The only place you need the cache tag is when iterating over the list
of masters, so it is OK.

If the list of masters is sorted by smmu, then the first master of
each smmu can be used to perform the cache tag invalidation, and the
rest of the list is used for the ATC invalidation.

The looping code will be a bit ugly.
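
For a sense of the shape of that loop, here is a rough sketch over a
hypothetical per-attachment structure; all names below are invented,
and the real TLBI/ATC command builders are stubbed out.

#include <linux/list.h>
#include <linux/types.h>

/* Sketch only: one entry per attached master, kept sorted by SMMU. */
struct arm_smmu_attachment_sketch {
	struct list_head	list;		/* on the domain's list */
	void			*smmu;		/* stand-in for arm_smmu_device */
	void			*master;	/* stand-in for arm_smmu_master */
	u32			cache_tag;	/* per-instance ASID or VMID */
};

/* Stand-ins for the real TLB and ATC invalidation paths. */
static void tlb_inv_by_tag_sketch(void *smmu, u32 tag) { }
static void atc_inv_master_sketch(void *master) { }

static void arm_smmu_domain_inv_sketch(struct list_head *attachments)
{
	struct arm_smmu_attachment_sketch *a;
	void *prev_smmu = NULL;

	list_for_each_entry(a, attachments, list) {
		/* First entry seen for an SMMU: TLB invalidation by cache tag. */
		if (a->smmu != prev_smmu) {
			tlb_inv_by_tag_sketch(a->smmu, a->cache_tag);
			prev_smmu = a->smmu;
		}
		/* Every entry: ATC invalidation for that master. */
		atc_inv_master_sketch(a->master);
	}
}

Keeping the list sorted by SMMU is what limits the TLB invalidation to
one pass per instance while still visiting every master for the ATC.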

> 2. Keeping a map in the smmu struct so that arm_smmu_share_asid can
> find any arm_smmu_installed_smmu that need to have their asid
> updated

Yes, the global xarray moves into the smmu

> (on a loop over every smmu the domain in arm_smmu_mmu_notifier_get is
> attached to, which just at a glance looks headache inducing because of
> sva's piggybacking on the rid domain.)

Not every smmu, just the one you are *currently* attaching to. We
don't care if the *other* smmus have different ASIDs; maybe they are
not using BTM, or won't use SVA.

We care that *our* smmu has the right ASID when we go to attach the
domain.

So the logic is not really any different: you already know the smmu
because it is the one attaching, and you do the same locking xarray
stuff as was done globally, just against this local smmu.

Jason
Michael Shavit Aug. 21, 2023, 2:16 p.m. UTC | #6
On Mon, Aug 21, 2023 at 9:50 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
>
> On Mon, Aug 21, 2023 at 09:38:40PM +0800, Michael Shavit wrote:
> > On Mon, Aug 21, 2023 at 7:54 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
> > >
> > > On Mon, Aug 21, 2023 at 05:31:23PM +0800, Michael Shavit wrote:
> > > > On Fri, Aug 18, 2023 at 2:38 AM Jason Gunthorpe <jgg@nvidia.com> wrote:
> > > > >
> > > > > On Fri, Aug 18, 2023 at 02:16:25AM +0800, Michael Shavit wrote:
> > > > > > Pick an ASID that is within the supported range of all SMMUs that the
> > > > > > domain is installed to.
> > > > > >
> > > > > > Signed-off-by: Michael Shavit <mshavit@google.com>
> > > > > > ---
> > > > >
> > > > > This seems like a pretty niche scenario, maybe we should just keep a
> > > > > global for the max ASID?
> > > > >
> > > > > Otherwise we need a code to change the ASID, even for non-SVA domains,
> > > > > when the domain is installed in different devices if the current ASID
> > > > > is over the instance max..
> > > >
> > > > This RFC took the other easy way out for this problem by rejecting
> > > > attaching a domain if its currently assigned ASID/VMID
> > > > is out of range when attaching to a new SMMU. But I'm not sure
> > > > which of the two options is the right trade-off.
> > > > Especially if we move VMID to a global allocator (which I plan to add
> > > > for v2), setting a global maximum for VMID of 256 sounds small.
> > >
> > > IMHO the simplest and best thing is to make both vmid and asid as
> > > local allocators. Then alot of these problems disappear
> >
> > Well that does sound like the most flexible, but IMO quite a lot more
> > complicated.
> >
> > I'll post a v2 RFC that removes the `iommu/arm-smmu-v3: Add list of
> > installed_smmus` patch and uses a flat master list in smmu_domain as
> > suggested by Robin, for comparison with the v1. But at a glance using a
> > local allocator would require:
>
> > 1. Keeping that patch so we can track the asid/vmid for a domain on a
> > per smmu instance
>
> You'd have to store the cache tag in the per-master struct on that
> list and take it out of the domain struct.
>
> Ie the list of attached masters contains the per-master cache tag
> instead of a global cache tag.
>
> The only place you need the cache tag is when iterating over the list
> of masters, so it is OK.
>
> If the list of masters is sorted by smmu then the first master of each
> smmu can be used to perform the cache tag invalidation, then the rest
> of the list is the ATC invalidation.
>
> The looping code will be a bit ugly.

I suppose that could work... but I'm worried it's going to be messy,
especially if we think about how the PASID feature would interact.
With PASID, there could be multiple domains attached to a master, so
we won't be able to store a single cache tag/asid for the currently
attached domain on the arm_smmu_master. It's still doable, however:
the cache tag could move into the struct mapping a domain to each
PASID/master pair, with your loop still using the first entry in the
list (until it meets an entry belonging to a different SMMU) for the
invalidation.

> > 2. Keeping a map in the smmu struct so that arm_smmu_share_asid can
> > find any arm_smmu_installed_smmu that need to have their asid
> > updated
>
> Yes, the global xarray moves into the smmu
>
> > (on a loop over every smmu the domain in arm_smmu_mmu_notifier_get is
> > attached to, which just at a glance looks headache inducing because of
> > sva's piggybacking on the rid domain.)
>
> Not every smmu, just the one you are *currently* attaching to. We
> don't care if the *other* smmu's have different ASIDs, maybe they are
> not using BTM, or won't use SVA.

I mean that the domain in arm_smmu_mmu_notifier_get is the RID
domain (not the SVA domain; the same issue we discussed in the
previous thread), which can be attached to multiple SMMUs.
Jason Gunthorpe Aug. 21, 2023, 2:26 p.m. UTC | #7
On Mon, Aug 21, 2023 at 10:16:54PM +0800, Michael Shavit wrote:
> On Mon, Aug 21, 2023 at 9:50 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
> >
> > On Mon, Aug 21, 2023 at 09:38:40PM +0800, Michael Shavit wrote:
> > > On Mon, Aug 21, 2023 at 7:54 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
> > > >
> > > > On Mon, Aug 21, 2023 at 05:31:23PM +0800, Michael Shavit wrote:
> > > > > On Fri, Aug 18, 2023 at 2:38 AM Jason Gunthorpe <jgg@nvidia.com> wrote:
> > > > > >
> > > > > > On Fri, Aug 18, 2023 at 02:16:25AM +0800, Michael Shavit wrote:
> > > > > > > Pick an ASID that is within the supported range of all SMMUs that the
> > > > > > > domain is installed to.
> > > > > > >
> > > > > > > Signed-off-by: Michael Shavit <mshavit@google.com>
> > > > > > > ---
> > > > > >
> > > > > > This seems like a pretty niche scenario, maybe we should just keep a
> > > > > > global for the max ASID?
> > > > > >
> > > > > > Otherwise we need a code to change the ASID, even for non-SVA domains,
> > > > > > when the domain is installed in different devices if the current ASID
> > > > > > is over the instance max..
> > > > >
> > > > > This RFC took the other easy way out for this problem by rejecting
> > > > > attaching a domain if its currently assigned ASID/VMID
> > > > > is out of range when attaching to a new SMMU. But I'm not sure
> > > > > which of the two options is the right trade-off.
> > > > > Especially if we move VMID to a global allocator (which I plan to add
> > > > > for v2), setting a global maximum for VMID of 256 sounds small.
> > > >
> > > > IMHO the simplest and best thing is to make both vmid and asid as
> > > > local allocators. Then alot of these problems disappear
> > >
> > > Well that does sound like the most flexible, but IMO quite a lot more
> > > complicated.
> > >
> > > I'll post a v2 RFC that removes the `iommu/arm-smmu-v3: Add list of
> > > installed_smmus` patch and uses a flat master list in smmu_domain as
> > > suggested by Robin, for comparison with the v1. But at a glance using a
> > > local allocator would require:
> >
> > > 1. Keeping that patch so we can track the asid/vmid for a domain on a
> > > per smmu instance
> >
> > You'd have to store the cache tag in the per-master struct on that
> > list and take it out of the domain struct.
> >
> > Ie the list of attached masters contains the per-master cache tag
> > instead of a global cache tag.
> >
> > The only place you need the cache tag is when iterating over the list
> > of masters, so it is OK.
> >
> > If the list of masters is sorted by smmu then the first master of each
> > smmu can be used to perform the cache tag invalidation, then the rest
> > of the list is the ATC invalidation.
> >
> > The looping code will be a bit ugly.
> 
> I suppose that could work.... but I'm worried it's gonna be messy,
> especially if we think about how the PASID feature would interact.
> With PASID, there could be multiple domains attached to a master. So
> we won't be able to store a single cache tag/asid for the currently
> attached domain on the arm_smmu_master. 

I wasn't suggesting storing it in the arm_smmu_master; I was
suggesting storing it in the same place you store the per-master
PASID.

E.g. I expect that on attach the domain will allocate new memory to
store the pasid/cache tag/master/domain and thread that memory onto a
list of attached masters.
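
A loose sketch of that per-attach bookkeeping, with hypothetical names
(this is not code from any of the series in flight):

#include <linux/list.h>
#include <linux/slab.h>

/* Sketch only: allocated on attach, threaded onto the domain's list. */
struct arm_smmu_attach_state_sketch {
	struct list_head	domain_node;	/* domain's attached-master list */
	void			*master;	/* stand-in for arm_smmu_master */
	void			*domain;	/* stand-in for arm_smmu_domain */
	u32			pasid;
	u32			cache_tag;	/* ASID/VMID on this master's SMMU */
};

static struct arm_smmu_attach_state_sketch *
arm_smmu_attach_state_alloc_sketch(void *master, void *domain, u32 pasid,
				   u32 cache_tag,
				   struct list_head *attachments)
{
	struct arm_smmu_attach_state_sketch *state;

	state = kzalloc(sizeof(*state), GFP_KERNEL);
	if (!state)
		return NULL;
	state->master = master;
	state->domain = domain;
	state->pasid = pasid;
	state->cache_tag = cache_tag;
	/* Thread this attachment onto the domain's list. */
	list_add_tail(&state->domain_node, attachments);
	return state;
}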

> > > (on a loop over every smmu the domain in arm_smmu_mmu_notifier_get is
> > > attached to, which just at a glance looks headache inducing because of
> > > sva's piggybacking on the rid domain.)
> >
> > Not every smmu, just the one you are *currently* attaching to. We
> > don't care if the *other* smmu's have different ASIDs, maybe they are
> > not using BTM, or won't use SVA.
> 
> I mean because the domain in arm_smmu_mmu_notifier_get is the RID
> domain (not the SVA domain, same issue we discussed in previous
> thread) , which can be attached to multiple SMMUs.

Oh, that is totally nonsensical. I expect you will need to fix that
sooner rather than later. Once the CD table is moved and there is a
proper way to track the PASID, it should not be needed. It shouldn't
fall into the decision-making about where to put the ASID xarray.

Jason
Michael Shavit Aug. 21, 2023, 2:39 p.m. UTC | #8
On Mon, Aug 21, 2023 at 10:26 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
>
> On Mon, Aug 21, 2023 at 10:16:54PM +0800, Michael Shavit wrote:
> > On Mon, Aug 21, 2023 at 9:50 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
> > >
> > > On Mon, Aug 21, 2023 at 09:38:40PM +0800, Michael Shavit wrote:
> > > > On Mon, Aug 21, 2023 at 7:54 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
> > > > >
> > > > > On Mon, Aug 21, 2023 at 05:31:23PM +0800, Michael Shavit wrote:
> > > > > > On Fri, Aug 18, 2023 at 2:38 AM Jason Gunthorpe <jgg@nvidia.com> wrote:
> > > > > > >
> > > > > > > On Fri, Aug 18, 2023 at 02:16:25AM +0800, Michael Shavit wrote:
> > > > > > > > Pick an ASID that is within the supported range of all SMMUs that the
> > > > > > > > domain is installed to.
> > > > > > > >
> > > > > > > > Signed-off-by: Michael Shavit <mshavit@google.com>
> > > > > > > > ---
> > > > > > >
> > > > > > > This seems like a pretty niche scenario, maybe we should just keep a
> > > > > > > global for the max ASID?
> > > > > > >
> > > > > > > Otherwise we need a code to change the ASID, even for non-SVA domains,
> > > > > > > when the domain is installed in different devices if the current ASID
> > > > > > > is over the instance max..
> > > > > >
> > > > > > This RFC took the other easy way out for this problem by rejecting
> > > > > > attaching a domain if its currently assigned ASID/VMID
> > > > > > is out of range when attaching to a new SMMU. But I'm not sure
> > > > > > which of the two options is the right trade-off.
> > > > > > Especially if we move VMID to a global allocator (which I plan to add
> > > > > > for v2), setting a global maximum for VMID of 256 sounds small.
> > > > >
> > > > > IMHO the simplest and best thing is to make both vmid and asid as
> > > > > local allocators. Then alot of these problems disappear
> > > >
> > > > Well that does sound like the most flexible, but IMO quite a lot more
> > > > complicated.
> > > >
> > > > I'll post a v2 RFC that removes the `iommu/arm-smmu-v3: Add list of
> > > > installed_smmus` patch and uses a flat master list in smmu_domain as
> > > > suggested by Robin, for comparison with the v1. But at a glance using a
> > > > local allocator would require:
> > >
> > > > 1. Keeping that patch so we can track the asid/vmid for a domain on a
> > > > per smmu instance
> > >
> > > You'd have to store the cache tag in the per-master struct on that
> > > list and take it out of the domain struct.
> > >
> > > Ie the list of attached masters contains the per-master cache tag
> > > instead of a global cache tag.
> > >
> > > The only place you need the cache tag is when iterating over the list
> > > of masters, so it is OK.
> > >
> > > If the list of masters is sorted by smmu then the first master of each
> > > smmu can be used to perform the cache tag invalidation, then the rest
> > > of the list is the ATC invalidation.
> > >
> > > The looping code will be a bit ugly.
> >
> > I suppose that could work.... but I'm worried it's gonna be messy,
> > especially if we think about how the PASID feature would interact.
> > With PASID, there could be multiple domains attached to a master. So
> > we won't be able to store a single cache tag/asid for the currently
> > attached domain on the arm_smmu_master.
>
> I wasn't suggesting to store it in the arm_smmu_master, I was
> suggesting to store it in the same place you store the per-master
> PASID.
>
> eg I expect that on attach the domain will allocate new memory to
> store the pasid/cache tag/master/domain and thread that memory on a
> list of attached masters.

Gotcha.

> > > > (on a loop over every smmu the domain in arm_smmu_mmu_notifier_get is
> > > > attached to, which just at a glance looks headache inducing because of
> > > > sva's piggybacking on the rid domain.)
> > >
> > > Not every smmu, just the one you are *currently* attaching to. We
> > > don't care if the *other* smmu's have different ASIDs, maybe they are
> > > not using BTM, or won't use SVA.
> >
> > I mean because the domain in arm_smmu_mmu_notifier_get is the RID
> > domain (not the SVA domain, same issue we discussed in previous
> > thread) , which can be attached to multiple SMMUs.
>
> Oh that is totally nonsensical. I expect you will need to fix that
> sooner than later. Once the CD table is moved and there is a proper
> way to track the PASID it should not be needed. It shouldn't fall into
> the decision making about where to put the ASID xarray.

Right, I've got a bit of a chicken-and-egg problem with all these
series.

Can we keep the simpler solutions, where the ASID/VMID has non-optimal
constraints across SMMUs, and reconsider this after all the other
changes land (this series, the set_dev_pasid series, fixing sva)?
Jason Gunthorpe Aug. 21, 2023, 2:56 p.m. UTC | #9
On Mon, Aug 21, 2023 at 10:39:14PM +0800, Michael Shavit wrote:
> > > > > (on a loop over every smmu the domain in arm_smmu_mmu_notifier_get is
> > > > > attached to, which just at a glance looks headache inducing because of
> > > > > sva's piggybacking on the rid domain.)
> > > >
> > > > Not every smmu, just the one you are *currently* attaching to. We
> > > > don't care if the *other* smmu's have different ASIDs, maybe they are
> > > > not using BTM, or won't use SVA.
> > >
> > > I mean because the domain in arm_smmu_mmu_notifier_get is the RID
> > > domain (not the SVA domain, same issue we discussed in previous
> > > thread) , which can be attached to multiple SMMUs.
> >
> > Oh that is totally nonsensical. I expect you will need to fix that
> > sooner than later. Once the CD table is moved and there is a proper
> > way to track the PASID it should not be needed. It shouldn't fall into
> > the decision making about where to put the ASID xarray.
> 
> Right I got a bit of a chicken and egg problem with all these series.

Yes, I'm not surprised to hear this.

Still, it would be nice to move forward without going too far in a
weird direction.

Once the CD table is moved to the master, what do you think is
blocking fixing up the SVA stuff so it doesn't rely on a RID domain?

Jason
Michael Shavit Aug. 22, 2023, 8:53 a.m. UTC | #10
On Mon, Aug 21, 2023 at 10:56 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
>
> On Mon, Aug 21, 2023 at 10:39:14PM +0800, Michael Shavit wrote:
> > > > > > (on a loop over every smmu the domain in arm_smmu_mmu_notifier_get is
> > > > > > attached to, which just at a glance looks headache inducing because of
> > > > > > sva's piggybacking on the rid domain.)
> > > > >
> > > > > Not every smmu, just the one you are *currently* attaching to. We
> > > > > don't care if the *other* smmu's have different ASIDs, maybe they are
> > > > > not using BTM, or won't use SVA.
> > > >
> > > > I mean because the domain in arm_smmu_mmu_notifier_get is the RID
> > > > domain (not the SVA domain, same issue we discussed in previous
> > > > thread) , which can be attached to multiple SMMUs.
> > >
> > > Oh that is totally nonsensical. I expect you will need to fix that
> > > sooner than later. Once the CD table is moved and there is a proper
> > > way to track the PASID it should not be needed. It shouldn't fall into
> > > the decision making about where to put the ASID xarray.
> >
> > Right I got a bit of a chicken and egg problem with all these series.
>
> Yes, I'm not surprised to hear this
>
> Still, it would nice to move forward without going in a weird
> direction too much.
>
> Once the CD table is moved to the master what do you think is blocking
> fixing up the SVA stuff to not rely on a RID domain?

These aren't necessarily strict dependencies, but ideally I'd like to:
1. Natively support PASID attachments in the smmu domain (patch(es)
from the set_dev_pasid series)
2. Support attaching a domain to multiple SMMUs (this series)
3. Add SVA framework support for allocating a single SVA domain per
mm_struct (Tina's series)

SVA can then directly attach an SVA domain to a master in its
set_dev_pasid call, without having to share smmu_notifiers or CDs
across domains; the allocated SVA domain would be attached directly
to the master.
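
For a sense of where that ends up, a loose sketch of an SVA
set_dev_pasid path under those assumptions; the callback signature
follows struct iommu_domain_ops as of this thread, while the SVA-domain
struct and the two helpers are hypothetical stand-ins.

#include <linux/iommu.h>

/* Sketch only: an SVA domain that owns its CD, with no RID-domain borrowing. */
struct arm_smmu_sva_domain_sketch {
	struct iommu_domain	domain;
	/* would also hold the mm's CD and its per-SMMU cache tags */
};

/* Stand-ins for the per-instance ASID grab and the CD table write. */
static int arm_smmu_sva_get_local_asid_sketch(struct device *dev,
					      struct arm_smmu_sva_domain_sketch *d)
{
	return 0;
}

static int arm_smmu_sva_write_cd_sketch(struct device *dev, ioasid_t pasid,
					struct arm_smmu_sva_domain_sketch *d)
{
	return 0;
}

static int arm_smmu_sva_set_dev_pasid_sketch(struct iommu_domain *domain,
					     struct device *dev, ioasid_t pasid)
{
	struct arm_smmu_sva_domain_sketch *sva_domain =
		container_of(domain, struct arm_smmu_sva_domain_sketch, domain);
	int ret;

	/* Pick or allocate an ASID valid on this device's SMMU instance. */
	ret = arm_smmu_sva_get_local_asid_sketch(dev, sva_domain);
	if (ret)
		return ret;

	/* Install the SVA CD into this master's CD table at @pasid. */
	return arm_smmu_sva_write_cd_sketch(dev, pasid, sva_domain);
}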

Patch

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index 58def59c36004..ab941e394cae5 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -82,6 +82,20 @@  static int arm_smmu_write_ctx_desc_devices(struct arm_smmu_domain *smmu_domain,
 	return ret;
 }
 
+static u32 arm_smmu_domain_max_asid_bits(struct arm_smmu_domain *smmu_domain)
+{
+	struct arm_smmu_installed_smmu *installed_smmu;
+	unsigned long flags;
+	u32 asid_bits = 16;
+
+	spin_lock_irqsave(&smmu_domain->installed_smmus_lock, flags);
+	list_for_each_entry(installed_smmu, &smmu_domain->installed_smmus,
+			    list)
+		asid_bits = min(asid_bits, installed_smmu->smmu->asid_bits);
+	spin_unlock_irqrestore(&smmu_domain->installed_smmus_lock, flags);
+	return asid_bits;
+}
+
 /*
  * Check if the CPU ASID is available on the SMMU side. If a private context
  * descriptor is using it, try to replace it.
@@ -92,7 +106,6 @@  arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
 	int ret;
 	u32 new_asid;
 	struct arm_smmu_ctx_desc *cd;
-	struct arm_smmu_device *smmu;
 	struct arm_smmu_domain *smmu_domain;
 
 	cd = xa_load(&arm_smmu_asid_xa, asid);
@@ -108,10 +121,12 @@  arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
 	}
 
 	smmu_domain = container_of(cd, struct arm_smmu_domain, cd);
-	smmu = smmu_domain->smmu;
 
-	ret = xa_alloc(&arm_smmu_asid_xa, &new_asid, cd,
-		       XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
+	ret = xa_alloc(
+		&arm_smmu_asid_xa, &new_asid, cd,
+		XA_LIMIT(1,
+			 (1 << arm_smmu_domain_max_asid_bits(smmu_domain)) - 1),
+		GFP_KERNEL);
 	if (ret)
 		return ERR_PTR(-ENOSPC);
 	/*