Message ID | 20241216161042.42108-18-alejandro.lucero-palau@amd.com (mailing list archive) |
---|---|
State | Changes Requested |
Series | cxl: add type2 device basic support |
On Mon, 16 Dec 2024 16:10:32 +0000
<alejandro.lucero-palau@amd.com> wrote:

> From: Alejandro Lucero <alucerop@amd.com>
>
> Region creation involves finding available DPA (device-physical-address)
> capacity to map into HPA (host-physical-address) space. Given the HPA
> capacity constraint, define an API, cxl_request_dpa(), that has the
> flexibility to map the minimum amount of memory the driver needs to

Bonus space before map.

> operate vs the total possible that can be mapped given HPA availability.
>
> Factor out the core of cxl_dpa_alloc, that does free space scanning,
> into a cxl_dpa_freespace() helper, and use that to balance the capacity
> available to map vs the @min and @max arguments to cxl_request_dpa.
>
> Based on https://lore.kernel.org/linux-cxl/168592158743.1948938.7622563891193802610.stgit@dwillia2-xfh.jf.intel.com/
>
> Signed-off-by: Alejandro Lucero <alucerop@amd.com>
> Co-developed-by: Dan Williams <dan.j.williams@intel.com>

Comments inline.

> ---
>  drivers/cxl/core/hdm.c | 154 +++++++++++++++++++++++++++++++++++------
>  include/cxl/cxl.h      |   5 ++
>  2 files changed, 138 insertions(+), 21 deletions(-)
>
> +int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
> +{
> +	struct cxl_port *port = cxled_to_port(cxled);
> +	struct device *dev = &cxled->cxld.dev;
> +	resource_size_t start, avail, skip;
> +	int rc;
> +
> +	down_write(&cxl_dpa_rwsem);
> +	if (cxled->cxld.region) {
> +		dev_dbg(dev, "EBUSY, decoder attached to %s\n",
> +			dev_name(&cxled->cxld.region->dev));
> +		rc = -EBUSY;
> +		goto out;
> +	}
> +
> +	if (cxled->cxld.flags & CXL_DECODER_F_ENABLE) {
> +		dev_dbg(dev, "EBUSY, decoder enabled\n");
> +		rc = -EBUSY;
>  		goto out;
>  	}
>
> +	avail = cxl_dpa_freespace(cxled, &start, &skip);
> +
>  	if (size > avail) {
>  		dev_dbg(dev, "%pa exceeds available %s capacity: %pa\n", &size,
> -			cxl_decoder_mode_name(cxled->mode), &avail);
> +			cxled->mode == CXL_DECODER_RAM ? "ram" : "pmem",

This is reverting an earlier change. I guess accidental?

> +			&avail);
>  		rc = -ENOSPC;
>  		goto out;
>  	}
> @@ -538,6 +557,99 @@ int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
>  	return devm_add_action_or_reset(&port->dev, cxl_dpa_release, cxled);
>  }
>
> +/**
> + * cxl_request_dpa - search and reserve DPA given input constraints
> + * @cxlmd: memdev with an endpoint port with available decoders
> + * @is_ram: DPA operation mode (ram vs pmem)
> + * @min: the minimum amount of capacity the call needs
> + * @max: extra capacity to allocate after min is satisfied

Includes the extra capacity. Otherwise capacity allocated as documented
is min + max which seems unlikely.

> + *
> + * Given that a region needs to allocate from limited HPA capacity it
> + * may be the case that a device has more mappable DPA capacity than
> + * available HPA. So, the expectation is that @min is a driver known
> + * value for how much capacity is needed, and @max is based the limit of
> + * how much HPA space is available for a new region.
> + *
> + * Returns a pinned cxl_decoder with at least @min bytes of capacity
> + * reserved, or an error pointer. The caller is also expected to own the
> + * lifetime of the memdev registration associated with the endpoint to
> + * pin the decoder registered as well.
> + */
> +struct cxl_endpoint_decoder *cxl_request_dpa(struct cxl_memdev *cxlmd,
> +					     bool is_ram,
> +					     resource_size_t min,
> +					     resource_size_t max)
> +{
> +	struct cxl_port *endpoint = cxlmd->endpoint;
> +	struct cxl_endpoint_decoder *cxled;
> +	enum cxl_decoder_mode mode;
> +	struct device *cxled_dev;
> +	resource_size_t alloc;
> +	int rc;
> +
> +	if (!IS_ALIGNED(min | max, SZ_256M))
> +		return ERR_PTR(-EINVAL);
> +
> +	down_read(&cxl_dpa_rwsem);
> +	cxled_dev = device_find_child(&endpoint->dev, NULL, find_free_decoder);
> +	up_read(&cxl_dpa_rwsem);
> +
> +	if (!cxled_dev)
> +		cxled = ERR_PTR(-ENXIO);

	if (!cxled_dev)
		return ERR_PTR(-ENXIO);

	cxled = to...
	if (!cxled) // assuming this has any way to fail in which
		    // case I think you would need to put the device...
		put_device(cxled_dev);
		return NULL;

Though do you actually want to return an error in this case?

> +	else
> +		cxled = to_cxl_endpoint_decoder(cxled_dev);
> +
> +	if (!cxled || IS_ERR(cxled))
> +		return cxled;

Drop this with changes above.

> +
> +	if (is_ram)
> +		mode = CXL_DECODER_RAM;
> +	else
> +		mode = CXL_DECODER_PMEM;
> +
> +	rc = cxl_dpa_set_mode(cxled, mode);
> +	if (rc)
> +		goto err;
> +
> +	down_read(&cxl_dpa_rwsem);
> +	alloc = cxl_dpa_freespace(cxled, NULL, NULL);
> +	up_read(&cxl_dpa_rwsem);
> +
> +	if (max)
> +		alloc = min(max, alloc);
> +	if (alloc < min) {
> +		rc = -ENOMEM;
> +		goto err;
> +	}
> +
> +	rc = cxl_dpa_alloc(cxled, alloc);
> +	if (rc)
> +		goto err;
> +
> +	return cxled;
> +err:
> +	put_device(cxled_dev);
> +	return ERR_PTR(rc);
> +}
> +EXPORT_SYMBOL_NS_GPL(cxl_request_dpa, "CXL");
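The reference-counting shape Jonathan suggests — return early when the lookup fails, and drop the device reference if the decoder cast fails — can be modeled in plain user-space C. All names below (`fake_device`, `fake_put_device`, `request_decoder`, etc.) are hypothetical stand-ins for illustration, not the kernel API:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for a refcounted struct device. */
struct fake_device {
	int refcount;
	int is_decoder;
};

/* Models put_device(): releases one reference. */
static void fake_put_device(struct fake_device *dev)
{
	dev->refcount--;
}

/* Models to_cxl_endpoint_decoder(): NULL when dev is not a decoder. */
static struct fake_device *fake_to_decoder(struct fake_device *dev)
{
	return dev->is_decoder ? dev : NULL;
}

/*
 * Early-return shape: lookup failure returns immediately; a failed
 * cast must put the reference taken by the lookup before returning.
 */
static struct fake_device *request_decoder(struct fake_device *found)
{
	struct fake_device *cxled;

	if (!found)
		return NULL;		/* ERR_PTR(-ENXIO) in the kernel */

	cxled = fake_to_decoder(found);
	if (!cxled) {
		fake_put_device(found);	/* don't leak the reference */
		return NULL;
	}
	return cxled;
}
```

The point of the restructuring is that every exit path either hands the pinned reference to the caller or releases it; the original patch could return from the cast-failure path while still holding the reference from device_find_child().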
On 12/24/24 17:53, Jonathan Cameron wrote:
> On Mon, 16 Dec 2024 16:10:32 +0000
> <alejandro.lucero-palau@amd.com> wrote:
>
>> From: Alejandro Lucero <alucerop@amd.com>
>>
>> Region creation involves finding available DPA (device-physical-address)
>> capacity to map into HPA (host-physical-address) space. Given the HPA
>> capacity constraint, define an API, cxl_request_dpa(), that has the
>> flexibility to map the minimum amount of memory the driver needs to
> Bonus space before map.

Ok.

>> operate vs the total possible that can be mapped given HPA availability.
>>
>> Factor out the core of cxl_dpa_alloc, that does free space scanning,
>> into a cxl_dpa_freespace() helper, and use that to balance the capacity
>> available to map vs the @min and @max arguments to cxl_request_dpa.
>>
>> Based on https://lore.kernel.org/linux-cxl/168592158743.1948938.7622563891193802610.stgit@dwillia2-xfh.jf.intel.com/
>>
>> Signed-off-by: Alejandro Lucero <alucerop@amd.com>
>> Co-developed-by: Dan Williams <dan.j.williams@intel.com>
> Comments inline.
>
>> ---
>>  drivers/cxl/core/hdm.c | 154 +++++++++++++++++++++++++++++++++++------
>>  include/cxl/cxl.h      |   5 ++
>>  2 files changed, 138 insertions(+), 21 deletions(-)
>>
>> +int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
>> +{
>> +	struct cxl_port *port = cxled_to_port(cxled);
>> +	struct device *dev = &cxled->cxld.dev;
>> +	resource_size_t start, avail, skip;
>> +	int rc;
>> +
>> +	down_write(&cxl_dpa_rwsem);
>> +	if (cxled->cxld.region) {
>> +		dev_dbg(dev, "EBUSY, decoder attached to %s\n",
>> +			dev_name(&cxled->cxld.region->dev));
>> +		rc = -EBUSY;
>> +		goto out;
>> +	}
>> +
>> +	if (cxled->cxld.flags & CXL_DECODER_F_ENABLE) {
>> +		dev_dbg(dev, "EBUSY, decoder enabled\n");
>> +		rc = -EBUSY;
>>  		goto out;
>>  	}
>>
>> +	avail = cxl_dpa_freespace(cxled, &start, &skip);
>> +
>>  	if (size > avail) {
>>  		dev_dbg(dev, "%pa exceeds available %s capacity: %pa\n", &size,
>> -			cxl_decoder_mode_name(cxled->mode), &avail);
>> +			cxled->mode == CXL_DECODER_RAM ? "ram" : "pmem",
> This is reverting an earlier change. I guess accidental?

Yes, I should be using the function.

>> +			&avail);
>>  		rc = -ENOSPC;
>>  		goto out;
>>  	}
>> @@ -538,6 +557,99 @@ int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
>>  	return devm_add_action_or_reset(&port->dev, cxl_dpa_release, cxled);
>>  }
>>
>> +/**
>> + * cxl_request_dpa - search and reserve DPA given input constraints
>> + * @cxlmd: memdev with an endpoint port with available decoders
>> + * @is_ram: DPA operation mode (ram vs pmem)
>> + * @min: the minimum amount of capacity the call needs
>> + * @max: extra capacity to allocate after min is satisfied
> Includes the extra capacity. Otherwise capacity allocated as documented
> is min + max which seems unlikely.

Right. I'll fix it.

>> + *
>> + * Given that a region needs to allocate from limited HPA capacity it
>> + * may be the case that a device has more mappable DPA capacity than
>> + * available HPA. So, the expectation is that @min is a driver known
>> + * value for how much capacity is needed, and @max is based the limit of
>> + * how much HPA space is available for a new region.
>> + *
>> + * Returns a pinned cxl_decoder with at least @min bytes of capacity
>> + * reserved, or an error pointer. The caller is also expected to own the
>> + * lifetime of the memdev registration associated with the endpoint to
>> + * pin the decoder registered as well.
>> + */
>> +struct cxl_endpoint_decoder *cxl_request_dpa(struct cxl_memdev *cxlmd,
>> +					     bool is_ram,
>> +					     resource_size_t min,
>> +					     resource_size_t max)
>> +{
>> +	struct cxl_port *endpoint = cxlmd->endpoint;
>> +	struct cxl_endpoint_decoder *cxled;
>> +	enum cxl_decoder_mode mode;
>> +	struct device *cxled_dev;
>> +	resource_size_t alloc;
>> +	int rc;
>> +
>> +	if (!IS_ALIGNED(min | max, SZ_256M))
>> +		return ERR_PTR(-EINVAL);
>> +
>> +	down_read(&cxl_dpa_rwsem);
>> +	cxled_dev = device_find_child(&endpoint->dev, NULL, find_free_decoder);
>> +	up_read(&cxl_dpa_rwsem);
>> +
>> +	if (!cxled_dev)
>> +		cxled = ERR_PTR(-ENXIO);
> 	if (!cxled_dev)
> 		return ERR_PTR(-ENXIO);
>
> 	cxled = to...
> 	if (!cxled) // assuming this has any way to fail in which
> 		    // case I think you would need to put the device...
> 		put_device(cxled_dev);
> 		return NULL;
>
> Though do you actually want to return an error in this case?

This handling makes the code clearer, and yes, you are right about the
put_device. I'll fix it.

>> +	else
>> +		cxled = to_cxl_endpoint_decoder(cxled_dev);
>> +
>> +	if (!cxled || IS_ERR(cxled))
>> +		return cxled;
> Drop this with changes above.

Sure. Thanks!

>
>> +
>> +	if (is_ram)
>> +		mode = CXL_DECODER_RAM;
>> +	else
>> +		mode = CXL_DECODER_PMEM;
>> +
>> +	rc = cxl_dpa_set_mode(cxled, mode);
>> +	if (rc)
>> +		goto err;
>> +
>> +	down_read(&cxl_dpa_rwsem);
>> +	alloc = cxl_dpa_freespace(cxled, NULL, NULL);
>> +	up_read(&cxl_dpa_rwsem);
>> +
>> +	if (max)
>> +		alloc = min(max, alloc);
>> +	if (alloc < min) {
>> +		rc = -ENOMEM;
>> +		goto err;
>> +	}
>> +
>> +	rc = cxl_dpa_alloc(cxled, alloc);
>> +	if (rc)
>> +		goto err;
>> +
>> +	return cxled;
>> +err:
>> +	put_device(cxled_dev);
>> +	return ERR_PTR(rc);
>> +}
>> +EXPORT_SYMBOL_NS_GPL(cxl_request_dpa, "CXL");
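The sizing rule the thread converges on — @max is a cap that already includes @min, rather than extra capacity on top of it — reduces to a small pure function. This is a user-space sketch; `pick_alloc_size` is a made-up name and returning 0 stands in for the kernel's -ENOMEM:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the sizing logic in cxl_request_dpa(): start from the
 * free DPA capacity, clamp to @max when @max is non-zero, and fail
 * if the result cannot cover @min. A @max of 0 means "no cap".
 */
static uint64_t pick_alloc_size(uint64_t avail, uint64_t min, uint64_t max)
{
	uint64_t alloc = avail;

	if (max && max < alloc)
		alloc = max;	/* @max caps the total, including @min */
	if (alloc < min)
		return 0;	/* -ENOMEM in the kernel version */
	return alloc;
}
```

With units of, say, GiB: 16 available clamped by max=8 yields 8; 6 available with max=8 yields 6; 2 available with min=4 fails. Under the rejected "min + max" reading, the same inputs would have tried to allocate min plus max bytes, which is why Jonathan flagged the kernel-doc wording.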
diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c
index 28edd5822486..4fa248ec56c3 100644
--- a/drivers/cxl/core/hdm.c
+++ b/drivers/cxl/core/hdm.c
@@ -3,6 +3,7 @@
 #include <linux/seq_file.h>
 #include <linux/device.h>
 #include <linux/delay.h>
+#include <cxl/cxl.h>
 
 #include "cxlmem.h"
 #include "core.h"
@@ -417,6 +418,7 @@ int cxl_dpa_free(struct cxl_endpoint_decoder *cxled)
 	up_write(&cxl_dpa_rwsem);
 	return rc;
 }
+EXPORT_SYMBOL_NS_GPL(cxl_dpa_free, "CXL");
 
 int cxl_dpa_set_mode(struct cxl_endpoint_decoder *cxled,
 		     enum cxl_decoder_mode mode)
@@ -455,31 +457,17 @@ int cxl_dpa_set_mode(struct cxl_endpoint_decoder *cxled,
 	return 0;
 }
 
-int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
+static resource_size_t cxl_dpa_freespace(struct cxl_endpoint_decoder *cxled,
+					 resource_size_t *start_out,
+					 resource_size_t *skip_out)
 {
 	struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
 	resource_size_t free_ram_start, free_pmem_start;
-	struct cxl_port *port = cxled_to_port(cxled);
 	struct cxl_dev_state *cxlds = cxlmd->cxlds;
-	struct device *dev = &cxled->cxld.dev;
 	resource_size_t start, avail, skip;
 	struct resource *p, *last;
-	int rc;
-
-	down_write(&cxl_dpa_rwsem);
-	if (cxled->cxld.region) {
-		dev_dbg(dev, "decoder attached to %s\n",
-			dev_name(&cxled->cxld.region->dev));
-		rc = -EBUSY;
-		goto out;
-	}
-
-	if (cxled->cxld.flags & CXL_DECODER_F_ENABLE) {
-		dev_dbg(dev, "decoder enabled\n");
-		rc = -EBUSY;
-		goto out;
-	}
+	lockdep_assert_held(&cxl_dpa_rwsem);
 
 	for (p = cxlds->ram_res.child, last = NULL; p; p = p->sibling)
 		last = p;
 	if (last)
@@ -516,14 +504,45 @@ int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
 		skip_end = start - 1;
 		skip = skip_end - skip_start + 1;
 	} else {
-		dev_dbg(dev, "mode not set\n");
-		rc = -EINVAL;
+		avail = 0;
+	}
+
+	if (!avail)
+		return 0;
+	if (start_out)
+		*start_out = start;
+	if (skip_out)
+		*skip_out = skip;
+	return avail;
+}
+
+int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
+{
+	struct cxl_port *port = cxled_to_port(cxled);
+	struct device *dev = &cxled->cxld.dev;
+	resource_size_t start, avail, skip;
+	int rc;
+
+	down_write(&cxl_dpa_rwsem);
+	if (cxled->cxld.region) {
+		dev_dbg(dev, "EBUSY, decoder attached to %s\n",
+			dev_name(&cxled->cxld.region->dev));
+		rc = -EBUSY;
+		goto out;
+	}
+
+	if (cxled->cxld.flags & CXL_DECODER_F_ENABLE) {
+		dev_dbg(dev, "EBUSY, decoder enabled\n");
+		rc = -EBUSY;
 		goto out;
 	}
 
+	avail = cxl_dpa_freespace(cxled, &start, &skip);
+
 	if (size > avail) {
 		dev_dbg(dev, "%pa exceeds available %s capacity: %pa\n", &size,
-			cxl_decoder_mode_name(cxled->mode), &avail);
+			cxled->mode == CXL_DECODER_RAM ? "ram" : "pmem",
+			&avail);
 		rc = -ENOSPC;
 		goto out;
 	}
@@ -538,6 +557,99 @@ int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
 	return devm_add_action_or_reset(&port->dev, cxl_dpa_release, cxled);
 }
 
+static int find_free_decoder(struct device *dev, void *data)
+{
+	struct cxl_endpoint_decoder *cxled;
+	struct cxl_port *port;
+
+	if (!is_endpoint_decoder(dev))
+		return 0;
+
+	cxled = to_cxl_endpoint_decoder(dev);
+	port = cxled_to_port(cxled);
+
+	if (cxled->cxld.id != port->hdm_end + 1)
+		return 0;
+
+	return 1;
+}
+
+/**
+ * cxl_request_dpa - search and reserve DPA given input constraints
+ * @cxlmd: memdev with an endpoint port with available decoders
+ * @is_ram: DPA operation mode (ram vs pmem)
+ * @min: the minimum amount of capacity the call needs
+ * @max: extra capacity to allocate after min is satisfied
+ *
+ * Given that a region needs to allocate from limited HPA capacity it
+ * may be the case that a device has more mappable DPA capacity than
+ * available HPA. So, the expectation is that @min is a driver known
+ * value for how much capacity is needed, and @max is based the limit of
+ * how much HPA space is available for a new region.
+ *
+ * Returns a pinned cxl_decoder with at least @min bytes of capacity
+ * reserved, or an error pointer. The caller is also expected to own the
+ * lifetime of the memdev registration associated with the endpoint to
+ * pin the decoder registered as well.
+ */
+struct cxl_endpoint_decoder *cxl_request_dpa(struct cxl_memdev *cxlmd,
+					     bool is_ram,
+					     resource_size_t min,
+					     resource_size_t max)
+{
+	struct cxl_port *endpoint = cxlmd->endpoint;
+	struct cxl_endpoint_decoder *cxled;
+	enum cxl_decoder_mode mode;
+	struct device *cxled_dev;
+	resource_size_t alloc;
+	int rc;
+
+	if (!IS_ALIGNED(min | max, SZ_256M))
+		return ERR_PTR(-EINVAL);
+
+	down_read(&cxl_dpa_rwsem);
+	cxled_dev = device_find_child(&endpoint->dev, NULL, find_free_decoder);
+	up_read(&cxl_dpa_rwsem);
+
+	if (!cxled_dev)
+		cxled = ERR_PTR(-ENXIO);
+	else
+		cxled = to_cxl_endpoint_decoder(cxled_dev);
+
+	if (!cxled || IS_ERR(cxled))
+		return cxled;
+
+	if (is_ram)
+		mode = CXL_DECODER_RAM;
+	else
+		mode = CXL_DECODER_PMEM;
+
+	rc = cxl_dpa_set_mode(cxled, mode);
+	if (rc)
+		goto err;
+
+	down_read(&cxl_dpa_rwsem);
+	alloc = cxl_dpa_freespace(cxled, NULL, NULL);
+	up_read(&cxl_dpa_rwsem);
+
+	if (max)
+		alloc = min(max, alloc);
+	if (alloc < min) {
+		rc = -ENOMEM;
+		goto err;
+	}
+
+	rc = cxl_dpa_alloc(cxled, alloc);
+	if (rc)
+		goto err;
+
+	return cxled;
+err:
+	put_device(cxled_dev);
+	return ERR_PTR(rc);
+}
+EXPORT_SYMBOL_NS_GPL(cxl_request_dpa, "CXL");
+
 static void cxld_set_interleave(struct cxl_decoder *cxld, u32 *ctrl)
 {
 	u16 eig;
diff --git a/include/cxl/cxl.h b/include/cxl/cxl.h
index eacd5e5e6fe8..c450dc09a2c6 100644
--- a/include/cxl/cxl.h
+++ b/include/cxl/cxl.h
@@ -55,4 +55,9 @@ struct cxl_port;
 struct cxl_root_decoder *cxl_get_hpa_freespace(struct cxl_memdev *cxlmd,
 					       unsigned long flags,
 					       resource_size_t *max);
+struct cxl_endpoint_decoder *cxl_request_dpa(struct cxl_memdev *cxlmd,
+					     bool is_ram,
+					     resource_size_t min,
+					     resource_size_t max);
+int cxl_dpa_free(struct cxl_endpoint_decoder *cxled);
 #endif
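The patch's entry check `IS_ALIGNED(min | max, SZ_256M)` validates both sizes with a single test: OR-ing the two values preserves every low-order bit, so the result is 256 MiB aligned only if both inputs are. A user-space sketch of the same trick (helper names here are illustrative, not the kernel macros):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SZ_256M (256ULL * 1024 * 1024)

/* Mirrors the kernel's IS_ALIGNED() for power-of-two alignments. */
static bool is_aligned(uint64_t x, uint64_t a)
{
	return (x & (a - 1)) == 0;
}

/*
 * One test covers both values: any misaligned low bit in either
 * min or max survives the OR and fails the alignment check.
 */
static bool sizes_valid(uint64_t min, uint64_t max)
{
	return is_aligned(min | max, SZ_256M);
}
```

This only works because 256 MiB is a power of two, so alignment is exactly "all bits below bit 28 are zero"; for non-power-of-two alignments the OR shortcut would not be valid.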