lightnvm: invalidate addresses on multipage fail

Message ID 1454332202-13368-1-git-send-email-javier@javigon.com
State New
Headers show

Commit Message

Javier González Feb. 1, 2016, 1:10 p.m. UTC
If a page mapping fails while mapping several pages for a single write bio
request, make sure that the already-mapped pages are invalidated. Since
other legitimate mappings from different bio requests might have occurred
in the meantime, rolling back the failed bio is a difficult and
unnecessary overhead; in part because by the time a mapping fails,
something bad has already happened. Still, invalidating the pages in the
failed bio will help GC.

Signed-off-by: Javier González <javier@cnexlabs.com>
---
 drivers/lightnvm/rrpc.c | 1 +
 1 file changed, 1 insertion(+)

Comments

Matias Bjørling Feb. 3, 2016, 8:10 a.m. UTC | #1
On 02/01/2016 02:10 PM, Javier González wrote:
> If a page mapping fails when mapping several pages in a single write bio
> request, make sure that already mapped pages are invalidated. Since
> other legit mappings coming from a different bio request might have
> occurred, rolling back the failed bio is a difficult, unnecessary
> overhead; in part because when a mapping fails something bad has
> happened already. Still, invalidating pages in the failed bio will help
> GC.
> 
> Signed-off-by: Javier González <javier@cnexlabs.com>
> ---
>  drivers/lightnvm/rrpc.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
> index c4d0b04..29befe9 100644
> --- a/drivers/lightnvm/rrpc.c
> +++ b/drivers/lightnvm/rrpc.c
> @@ -730,6 +730,7 @@ static int rrpc_read_ppalist_rq(struct rrpc *rrpc, struct bio *bio,
>  		} else {
>  			BUG_ON(is_gc);
>  			rrpc_unlock_laddr(rrpc, r);
> +			rrpc_invalidate_range(rrpc, laddr, i + 1);
>  			nvm_dev_dma_free(rrpc->dev, rqd->ppa_list,
>  							rqd->dma_ppa_list);
>  			return NVM_IO_DONE;
> 

I'm not sure I understand this. This is in the read path, why would it
need to invalidate pages if a page is not mapped?
Javier González Feb. 3, 2016, 8:15 a.m. UTC | #2
> 
> On 03 Feb 2016, at 09:10, Matias Bjørling <mb@lightnvm.io> wrote:
> 
> On 02/01/2016 02:10 PM, Javier González wrote:
>> [...]
> 
> I'm not sure I understand this. This is in the read path, why would it
> need to invalidate pages if a page is not mapped?

You are right. I sent the wrong patch. I’ll send the right one now.

Javier

Patch

diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
index c4d0b04..29befe9 100644
--- a/drivers/lightnvm/rrpc.c
+++ b/drivers/lightnvm/rrpc.c
@@ -730,6 +730,7 @@ static int rrpc_read_ppalist_rq(struct rrpc *rrpc, struct bio *bio,
 		} else {
 			BUG_ON(is_gc);
 			rrpc_unlock_laddr(rrpc, r);
+			rrpc_invalidate_range(rrpc, laddr, i + 1);
 			nvm_dev_dma_free(rrpc->dev, rqd->ppa_list,
 							rqd->dma_ppa_list);
 			return NVM_IO_DONE;