
[v5,14/26] cxl/pci: Store the endpoint's Component Register mappings in struct cxl_dev_state

Message ID 20230607221651.2454764-15-terry.bowman@amd.com
State Superseded
Series cxl/pci: Add support for RCH RAS error handling

Commit Message

Bowman, Terry June 7, 2023, 10:16 p.m. UTC
From: Robert Richter <rrichter@amd.com>

Same as for ports and dports, also store the endpoint's Component
Register mappings; use struct cxl_dev_state for that.

Signed-off-by: Robert Richter <rrichter@amd.com>
Signed-off-by: Terry Bowman <terry.bowman@amd.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
 drivers/cxl/cxlmem.h | 3 ++-
 drivers/cxl/pci.c    | 9 +++++----
 2 files changed, 7 insertions(+), 5 deletions(-)

Comments

Shesha Bhushan Sreenivasamurthy June 7, 2023, 11:01 p.m. UTC | #1
Hi,
For DCD sideband there needs to be an LD-ID. Is the following approach acceptable?

 -device cxl-type3,bus=swport0,volatile-memdev=vmem0,dc-memdev=vmem1,id=cxl-vmem0,num-dc-regions=2,ldid=0 \
 -device cxl-type3,bus=swport0,volatile-memdev=vmem1,dc-memdev=vmem2,id=cxl-vmem1,num-dc-regions=2,ldid=1 \
 -device i2c_mctp_cxl,bus=aspeed.i2c.bus.0,address=24,target=cxl-vmem0,cxl-vmem1")

With this configuration, the same i2c device is handling both LDs, and in FMAPI commands we use the LDID specified above.

Thanks,
Shesha.
Jonathan Cameron June 8, 2023, 10:31 a.m. UTC | #2
On Wed, 7 Jun 2023 23:01:11 +0000
Shesha Bhushan Sreenivasamurthy <sheshas@marvell.com> wrote:

> Hi,
> For DCD sideband there needs to be an LD-ID. Is the following approach acceptable?

QEMU question so +CC qemu-devel

> 
>  -device cxl-type3,bus=swport0,volatile-memdev=vmem0,dc-memdev=vmem1,id=cxl-vmem0,num-dc-regions=2,ldid=0 \
>  -device cxl-type3,bus=swport0,volatile-memdev=vmem1,dc-memdev=vmem2,id=cxl-vmem1,num-dc-regions=2,ldid=1 \

Those will be PCI functions at this level so you can't do this until we have more advanced switch support
(it has to know about multiple VHs - right now we only support fixed config switches).  You could connect them
to different switch ports - effectively that will be what it looks like when we do emulate a configurable switch.

>  -device i2c_mctp_cxl,bus=aspeed.i2c.bus.0,address=24,target=cxl-vmem0,cxl-vmem1")
> 
> With this configuration, the same i2c device is handling both LDs, and in FMAPI commands we use the LDID specified above.

This effectively becomes a partial implementation of either an MLD or an MH-SLD.
To manage the actual memory access, those will almost certainly need a bunch of other shared
infrastructure.  So I'd ultimately expect the i2c_mctp_cxl device to target whatever
device represents that shared infrastructure - it might be a separate device or a 'lead' type 3 device.

So I'm not sure how this will fit together longer term.  We need the same infrastructure
to work for a mailbox CCI on an MH-SLD/MLD as well, and in that case there isn't a separate
device to which we can provide multiple targets as you've done in your proposal here.

So I think we need to work out how to handle all of the following (I've probably forgotten something).
X marks done or in progress.

X 1) i2c_mctp_cxl to an SLD (no PCI Mailbox definition for this one)
  2) i2c_mctp_cxl directly to an MLD (your case)
X 3) i2c_mctp_cxl to a fixed config switch (single fixed VH no MLD capable ports)
X 4) PCI mailbox via switch CCI device on that fixed config switch (no MLD capable ports)
	Even with this simple design there are some fun things you can do.
  5) i2c_mctp_cxl to a configurable switch (probably a separate as yet to be defined management interface - that messes with hotplug)
  6) PCI mailbox via switch CCI to configurable switch (again, a to-be-defined management interface).
  7) i2c_mctp_cxl to an MH-SLD - probably to whichever device also has support for
     tunneling to the FM owned LD via the PCI mailbox.
X 8) PCI mailbox on MH-SLD tunneling to the FM owned LD.
  9) i2c_mctp_cxl to an MH-MLD - similar to above - this one isn't that much more complex than MH-SLD case.
X 10) PCI mailbox to MH-MLD - similar to above.
  11) Tunneling via the switch CCI (then over PCI-VDM - though that detail isn't visible in QEMU) to an SLD
  12) Tunneling via the switch CCI (then PCI-VDM) to an MH-SLD and on to the FM owned LD.
  13) Tunneling via the switch CCI (then over PCI-VDM) to an MLD / MH-MLD
 
Current i2c_mctp_cxl covers 1 and 3.
I'm part way through the tunnelling support for (8 and 10) - need to revisit and wire up the switch CCI PoC
properly, which will give us 4.

2 needs MLD support in general, which we could maybe make work with a static binding in a switch, but that
  would be odd - so we probably need to emulate a configurable switch for that.
5, 6 need a configurable switch.
7 needs the same as 2, plus a tunneling part similar to 4.
9 again probably needs a configurable switch for the MLD part to make sense.
11 is fairly straightforward - but not done yet.
12 also not too bad, but needs the MH-SLD part to be fleshed out (some work ongoing).
13 needs pretty much everything defined.

Until we have PoC code for a few more cases, trying to get the command line interface and
device model right is going to yield at most a draft of what it might look like.

So in short, lots to do.  For now feel free to hack in whatever you need to be able
to test the FM-API side of things; we can move that towards a clean command line definition
once we have one figured out!

Jonathan


> 
> Thanks,
> Shesha.
Jonathan Cameron June 8, 2023, 10:36 a.m. UTC | #3
Shesha,

You've sent an email with the 'In-reply-to' set to one of Terry's patches.
Please check why that happened and make sure you don't do that in future as
it hides your unrelated thread in email clients and the archives!

See
https://lore.kernel.org/linux-cxl/20230607221651.2454764-1-terry.bowman@amd.com/T/#t 
for example

Jonathan

Shesha Bhushan Sreenivasamurthy June 8, 2023, 11:38 p.m. UTC | #4
Hi,

Thinking a bit more, LDs in CXL are PCIe endpoint functions. Therefore a 1-1 mapping of one cxl-i2c device per PCIe device is sufficient, and we can use the function number in the BDF as the LD-ID. Does it make sense?

From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
Sent: Thursday, June 8, 2023 3:36 AM
To: Shesha Bhushan Sreenivasamurthy <sheshas@marvell.com>
Cc: linux-cxl@vger.kernel.org; qemu-devel@nongnu.org
Subject: [EXT] Re: Concept of LD-ID in QEMU 
 
Shesha,

You've sent an email with the 'In-reply-to' set to one of Terry's patches.
Please check why that happened and make sure you don't do that in future as
it hides your unrelated thread in email clients and the archives!

See
https://lore.kernel.org/linux-cxl/20230607221651.2454764-1-terry.bowman@amd.com/T/#t
for example

ss - Apologies. Will be careful.

Jonathan

Jonathan Cameron June 9, 2023, 11:20 a.m. UTC | #5
On Thu, 8 Jun 2023 23:38:34 +0000
Shesha Bhushan Sreenivasamurthy <sheshas@marvell.com> wrote:

> Hi,
> 
> Thinking a bit more, LDs in CXL are PCIe endpoint functions. Therefore a 1-1 mapping of one cxl-i2c device per PCIe device is sufficient, and we can use the function number in the BDF as the LD-ID. Does it make sense?

LDs are PCIe endpoint functions (always function 0) as seen from the Virtual
Hierarchies (they end up under a particular vPPB, which looks like a
downstream port of a switch to the host), but they aren't from the point of
view of the actual fabric topology, and when we are tunneling we
address them via physical port, not virtual port, I think (I've not read that
bit of the spec for a while). See figure 7-23 in CXL 3.0.

The outer tunneling command targets a port number (unwrapped at the switch);
the inner one targets the LD - unwrapped at the MLD and sent to the appropriate LD,
including the FM owned LD (if I understand this stack correctly).
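
To make the nesting concrete, here is a rough C sketch (field names and sizes
are illustrative assumptions, not the exact CXL 3.0 payload encoding):

  #include <stdint.h>

  /*
   * Illustrative sketch of how the FM-API tunneling nests: the switch strips
   * the outer layer (addressed by physical port number) and the MLD strips
   * the inner layer (addressed by LD-ID), handing the embedded command to
   * that LD.
   */
  struct tunnel_cmd {
          uint8_t  port_or_ld_id; /* outer: physical switch port; inner: LD-ID */
          uint8_t  target_type;   /* whether the target is a port or an LD */
          uint16_t cmd_size;      /* size of the embedded message below */
          uint8_t  payload[];     /* embedded CCI message, possibly another tunnel_cmd */
  };

So an FM-API command aimed at LD 3 of an MLD behind switch port 2 ends up as
tunnel(port = 2) wrapping tunnel(LD = 3) wrapping the actual command.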

Also, there is no relationship between BDF and LD-ID, so don't do that, as the maximum
ID is only 16, which would rather limit your PCI topologies if that were the BDF as
well.

For now just do what you originally said and add an ID (starting from 0).
We can probably do that automatically once more infrastructure is in place.
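
For illustration, a minimal sketch of what such a property might look like on
the QEMU side (the 'ldid' field and property are assumptions for this sketch,
not something upstream QEMU currently defines):

  /* Hypothetical sketch against hw/mem/cxl_type3.c - existing members and
   * properties elided. */
  struct CXLType3Dev {
          /* ... existing members ... */
          uint16_t ldid;          /* LD-ID used when answering FM-API DCD commands */
  };

  static Property ct3_props[] = {
          /* ... existing properties ... */
          DEFINE_PROP_UINT16("ldid", CXLType3Dev, ldid, 0),
          DEFINE_PROP_END_OF_LIST(),
  };

Something along those lines would back the ldid=0 / ldid=1 arguments in the
command lines above until the real infrastructure is in place.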

Jonathan


Dan Williams June 10, 2023, 2:29 a.m. UTC | #6
Terry Bowman wrote:
> From: Robert Richter <rrichter@amd.com>
> 
> Same as for ports and dports, also store the endpoint's Component
> Register mappings; use struct cxl_dev_state for that.
> 
> Signed-off-by: Robert Richter <rrichter@amd.com>
> Signed-off-by: Terry Bowman <terry.bowman@amd.com>
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>

This really feels like it should fold in the removal of
cxlds->component_reg_phys in the same patch. I do not see any reason for
cxlds->component_reg_phys and cxlds->comp_map to coexist in the history.
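
For illustration, a rough sketch of what folding that removal in might look
like (hypothetical; any remaining readers of component_reg_phys elsewhere in
the driver would need the equivalent conversion to cxlds->comp_map.resource):

  /* Rough sketch only: struct cxl_dev_state with component_reg_phys dropped;
   * the register map now carries the physical address. */
  struct cxl_dev_state {
          struct device *dev;
          struct cxl_memdev *cxlmd;
          struct cxl_register_map comp_map;  /* replaces component_reg_phys */
          struct cxl_regs regs;
          int cxl_dvsec;
          /* ... */
  };

  /* any user that needs the raw physical address would then read: */
  resource_size_t component_reg_phys = cxlds->comp_map.resource;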

Patch

diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index a2845a7a69d8..2823c5aaf3db 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -263,6 +263,7 @@  struct cxl_poison_state {
  *
  * @dev: The device associated with this CXL state
  * @cxlmd: The device representing the CXL.mem capabilities of @dev
+ * @comp_map: component register capability mappings
  * @regs: Parsed register blocks
  * @cxl_dvsec: Offset to the PCIe device DVSEC
  * @rcd: operating in RCD mode (CXL 3.0 9.11.8 CXL Devices Attached to an RCH)
@@ -299,7 +300,7 @@  struct cxl_poison_state {
 struct cxl_dev_state {
 	struct device *dev;
 	struct cxl_memdev *cxlmd;
-
+	struct cxl_register_map comp_map;
 	struct cxl_regs regs;
 	int cxl_dvsec;
 
diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index 2975b232fcd1..816b23a6c4aa 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -662,15 +662,16 @@  static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	 * still be useful for management functions so don't return an error.
 	 */
 	cxlds->component_reg_phys = CXL_RESOURCE_NONE;
-	rc = cxl_pci_setup_regs(pdev, CXL_REGLOC_RBI_COMPONENT, &map);
+	rc = cxl_pci_setup_regs(pdev, CXL_REGLOC_RBI_COMPONENT,
+				&cxlds->comp_map);
 	if (rc)
 		dev_warn(&pdev->dev, "No component registers (%d)\n", rc);
-	else if (!map.component_map.ras.valid)
+	else if (!cxlds->comp_map.component_map.ras.valid)
 		dev_dbg(&pdev->dev, "RAS registers not found\n");
 
-	cxlds->component_reg_phys = map.resource;
+	cxlds->component_reg_phys = cxlds->comp_map.resource;
 
-	rc = cxl_map_component_regs(&map, &cxlds->regs.component,
+	rc = cxl_map_component_regs(&cxlds->comp_map, &cxlds->regs.component,
 				    BIT(CXL_CM_CAP_CAP_ID_RAS));
 	if (rc)
 		dev_dbg(&pdev->dev, "Failed to map RAS capability.\n");