| Message ID | 20180612103008.19436-1-mb@lightnvm.io (mailing list archive) |
| --- | --- |
| State | New, archived |
> On 12 Jun 2018, at 03.30, Matias Bjørling <mb@lightnvm.io> wrote:
>
> For devices that do not specify a limit on their transfer size, the
> get_chk_meta command may send down a single I/O retrieving the full
> chunk metadata table, resulting in large 2-4MB I/O requests. Instead,
> split the I/Os into chunks of at most 256KB and issue them separately
> to improve I/O latency.
>
> Signed-off-by: Matias Bjørling <mb@lightnvm.io>
> ---
> drivers/nvme/host/lightnvm.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
> index b9989717418d..3b644b0e9713 100644
> --- a/drivers/nvme/host/lightnvm.c
> +++ b/drivers/nvme/host/lightnvm.c
> @@ -583,7 +583,13 @@ static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev,
>  	struct ppa_addr ppa;
>  	size_t left = nchks * sizeof(struct nvme_nvm_chk_meta);
>  	size_t log_pos, offset, len;
> -	int ret, i;
> +	int ret, i, max_len;
> +
> +	/*
> +	 * limit requests to maximum 256K to avoid issuing arbitrarily large
> +	 * requests when the device does not specify a maximum transfer size.
> +	 */
> +	max_len = min_t(unsigned int, ctrl->max_hw_sectors << 9, 256 * 1024);
>
>  	/* Normalize lba address space to obtain log offset */
>  	ppa.ppa = slba;
> @@ -596,7 +602,7 @@ static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev,
>  		offset = log_pos * sizeof(struct nvme_nvm_chk_meta);
>
>  		while (left) {
> -			len = min_t(unsigned int, left, ctrl->max_hw_sectors << 9);
> +			len = min_t(unsigned int, left, max_len);
>
>  			ret = nvme_get_log_ext(ctrl, ns, NVME_NVM_LOG_REPORT_CHUNK,
>  					dev_meta, len, offset);
> --
> 2.11.0

Looks good to me.

Reviewed-by: Javier González <javier@cnexlabs.com>
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index b9989717418d..3b644b0e9713 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -583,7 +583,13 @@ static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev,
 	struct ppa_addr ppa;
 	size_t left = nchks * sizeof(struct nvme_nvm_chk_meta);
 	size_t log_pos, offset, len;
-	int ret, i;
+	int ret, i, max_len;
+
+	/*
+	 * limit requests to maximum 256K to avoid issuing arbitrarily large
+	 * requests when the device does not specify a maximum transfer size.
+	 */
+	max_len = min_t(unsigned int, ctrl->max_hw_sectors << 9, 256 * 1024);

 	/* Normalize lba address space to obtain log offset */
 	ppa.ppa = slba;
@@ -596,7 +602,7 @@ static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev,
 		offset = log_pos * sizeof(struct nvme_nvm_chk_meta);

 		while (left) {
-			len = min_t(unsigned int, left, ctrl->max_hw_sectors << 9);
+			len = min_t(unsigned int, left, max_len);

 			ret = nvme_get_log_ext(ctrl, ns, NVME_NVM_LOG_REPORT_CHUNK,
 					dev_meta, len, offset);
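For context on the clamp added by the first hunk: ctrl->max_hw_sectors is expressed in 512-byte sectors, so the `<< 9` converts it to bytes before comparing against 256KB. A small standalone illustration with hypothetical values (not taken from the patch):

```c
#include <stdio.h>

/* Illustrative values only: a controller advertising 1024-sector (512KB)
 * transfers still ends up with 256KB requests after the clamp. */
int main(void)
{
	unsigned int max_hw_sectors = 1024;		/* hypothetical controller limit */
	unsigned int hw_bytes = max_hw_sectors << 9;	/* 1024 * 512 = 524288 bytes */
	unsigned int max_len = hw_bytes < 256 * 1024 ? hw_bytes : 256 * 1024;

	printf("max_len = %u bytes\n", max_len);	/* prints 262144 (256KB) */
	return 0;
}
```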
For devices that do not specify a limit on their transfer size, the
get_chk_meta command may send down a single I/O retrieving the full
chunk metadata table, resulting in large 2-4MB I/O requests. Instead,
split the I/Os into chunks of at most 256KB and issue them separately
to improve I/O latency.

Signed-off-by: Matias Bjørling <mb@lightnvm.io>
---
 drivers/nvme/host/lightnvm.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
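To make the splitting pattern concrete, here is a minimal userspace sketch, assuming a hypothetical read_log callback in place of nvme_get_log_ext(); the names and values are illustrative, not kernel code:

```c
#include <stdio.h>
#include <stddef.h>
#include <string.h>

#define MAX_CHUNK_BYTES (256 * 1024)

/* Stand-in for nvme_get_log_ext(): read "len" bytes of a log page at
 * byte "offset" into "buf". */
typedef int (*read_log_fn)(void *dev, void *buf, size_t len, size_t offset);

static size_t min_size(size_t a, size_t b)
{
	return a < b ? a : b;
}

/* Walk "total" bytes of metadata in requests no larger than max_len. */
static int read_chunk_meta(void *dev, read_log_fn read_log, void *buf,
			   size_t total, size_t offset, size_t dev_max_bytes)
{
	/* Cap each request at the device limit, but never above 256KB. */
	size_t max_len = min_size(dev_max_bytes, MAX_CHUNK_BYTES);
	size_t left = total;
	char *dst = buf;

	while (left) {
		size_t len = min_size(left, max_len);
		int ret = read_log(dev, dst, len, offset);

		if (ret)
			return ret;

		offset += len;
		dst += len;
		left -= len;
	}
	return 0;
}

/* Stub "device" read that just zero-fills the buffer. */
static int fake_read_log(void *dev, void *buf, size_t len, size_t offset)
{
	(void)dev;
	(void)offset;
	memset(buf, 0, len);
	return 0;
}

int main(void)
{
	static char table[4 * 1024 * 1024];	/* 4MB metadata table */

	/* Device advertises 512KB transfers; requests are still split at 256KB. */
	int ret = read_chunk_meta(NULL, fake_read_log, table,
				  sizeof(table), 0, 512 * 1024);

	printf("read_chunk_meta returned %d\n", ret);
	return ret;
}
```

The design point, as the commit message states, is that capping each request at 256KB keeps individual transfers short, so retrieving a 2-4MB metadata table does not tie up the device in a single long-running I/O at the expense of latency.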