
[v2] libnvdimm/region: Update nvdimm_has_flush() to handle ND_REGION_ASYNC

Message ID 20210406032233.490596-1-vaibhav@linux.ibm.com (mailing list archive)
State New, archived
Series [v2] libnvdimm/region: Update nvdimm_has_flush() to handle ND_REGION_ASYNC

Commit Message

Vaibhav Jain April 6, 2021, 3:22 a.m. UTC
When a platform doesn't provide explicit flush-hints but does provide an
explicit flush callback via the ND_REGION_ASYNC region flag,
nvdimm_has_flush() still returns '0', indicating that writes do not
require flushing. This happens on PPC64 with the patch at [1] applied,
where 'deep_flush' of a region was denied even though an explicit
flush function was provided.

A similar problem is seen with virtio-pmem, where the 'deep_flush'
sysfs attribute is not visible because, in the absence of any registered
nvdimm, 'nd_region->ndr_mappings == 0'.
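
(For context: whether 'deep_flush' shows up at all is decided by
region_visible() in drivers/nvdimm/region_devs.c based on the return
value of nvdimm_has_flush(). Paraphrasing that logic roughly, rather
than quoting it exactly:

	if (a == &dev_attr_deep_flush.attr) {
		int has_flush = nvdimm_has_flush(nd_region);

		if (has_flush == 1)		/* flush hints or explicit callback */
			return a->mode;		/* read-write, userspace may trigger it */
		else if (has_flush == 0)	/* platform persistence, e.g. ADR */
			return 0444;		/* visible but read-only */
		else				/* -ENXIO, capability unknown */
			return 0;		/* attribute hidden */
	}

So a spurious -ENXIO from nvdimm_has_flush() hides the attribute
entirely, which is what happens for virtio-pmem today.)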

Fix this by adding a condition to nvdimm_has_flush() that tests for the
ND_REGION_ASYNC flag on the region and checks whether an
'nd_region->flush' callback is assigned. Also remove the explicit test
for 'nd_region->ndr_mappings', since regions can be marked
'ND_REGION_ASYNC' without any explicit mappings, as in the case of
virtio-pmem.
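
(To illustrate what "an explicit flush callback via the ND_REGION_ASYNC
region flag" looks like from the backend side, a driver such as
virtio-pmem or papr_scm sets things up roughly as below when creating
its region. This is a simplified sketch, not actual driver code; the
example_* names are made up for illustration:

#include <linux/libnvdimm.h>
#include <linux/bio.h>

/* Ask the backend (host/hypervisor) to make earlier writes durable. */
static int example_pmem_flush(struct nd_region *nd_region, struct bio *bio)
{
	/* e.g. a virtio flush request or an H_SCM_FLUSH hcall goes here */
	return 0;
}

static int example_register_region(struct nvdimm_bus *bus, struct resource *res)
{
	struct nd_region_desc ndr_desc = { };

	ndr_desc.res = res;			/* backing address range */
	ndr_desc.flush = example_pmem_flush;	/* explicit flush callback */
	set_bit(ND_REGION_ASYNC, &ndr_desc.flags);
	/* No nd_mapping entries, so the region ends up with ndr_mappings == 0. */

	if (!nvdimm_pmem_region_create(bus, &ndr_desc))
		return -ENXIO;
	return 0;
}

Such a region has no flush hints and no DIMM mappings, yet it clearly
does have a flush mechanism, which is exactly the case the current
nvdimm_has_flush() misses.)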

References:
[1] "powerpc/papr_scm: Implement support for H_SCM_FLUSH hcall"
https://lore.kernel.org/linux-nvdimm/161703936121.36.7260632399582101498.stgit@e1fbed493c87

Cc: <stable@vger.kernel.org>
Fixes: c5d4355d10d4 ("libnvdimm: nd_region flush callback support")
Reported-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
---
Changelog:

v2:
* Added the Fixes tag and addressed the patch to the stable tree [ Aneesh ]
* Updated patch description to address the virtio-pmem case.
* Removed test for 'nd_region->ndr_mappings' from beginning of
  nvdimm_has_flush() to handle the virtio-pmem case.
---
 drivers/nvdimm/region_devs.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

Comments

Aneesh Kumar K.V April 6, 2021, 5:13 a.m. UTC | #1
Vaibhav Jain <vaibhav@linux.ibm.com> writes:

> In case a platform doesn't provide explicit flush-hints but provides an
> explicit flush callback via ND_REGION_ASYNC region flag, then
> nvdimm_has_flush() still returns '0' indicating that writes do not
> require flushing. This happens on PPC64 with patch at [1] applied,
> where 'deep_flush' of a region was denied even though an explicit
> flush function was provided.
>
> Similar problem is also seen with virtio-pmem where the 'deep_flush'
> sysfs attribute is not visible as in absence of any registered nvdimm,
> 'nd_region->ndr_mappings == 0'.
>
> Fix this by updating nvdimm_has_flush() adding a condition to
> nvdimm_has_flush() testing for ND_REGION_ASYNC flag on the region
> and see if a 'region->flush' callback is assigned. Also remove
> explicit test for 'nd_region->ndr_mapping' since regions can be marked
> 'ND_REGION_SYNC' without any explicit mappings as in case of
> virtio-pmem.

Do we need to check for ND_REGION_ASYNC? What if the backend wants to
provide a synchronous dax region but with a different deep-flush
semantic than writing to the WPQ flush address?
i.e.,

>
> References:
> [1] "powerpc/papr_scm: Implement support for H_SCM_FLUSH hcall"
> https://lore.kernel.org/linux-nvdimm/161703936121.36.7260632399582101498.stgit@e1fbed493c87
>
> Cc: <stable@vger.kernel.org>
> Fixes: c5d4355d10d4 ("libnvdimm: nd_region flush callback support")
> Reported-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
> Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
> ---
> Changelog:
>
> v2:
> * Added the fixes tag and addressed the patch to stable tree [ Aneesh ]
> * Updated patch description to address the virtio-pmem case.
> * Removed test for 'nd_region->ndr_mappings' from beginning of
>   nvdimm_has_flush() to handle the virtio-pmem case.
> ---
>  drivers/nvdimm/region_devs.c | 14 +++++++++-----
>  1 file changed, 9 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
> index ef23119db574..cdf5eb6fa425 100644
> --- a/drivers/nvdimm/region_devs.c
> +++ b/drivers/nvdimm/region_devs.c
> @@ -1234,11 +1234,15 @@ int nvdimm_has_flush(struct nd_region *nd_region)
>  {
>  	int i;
>  
> -	/* no nvdimm or pmem api == flushing capability unknown */
> -	if (nd_region->ndr_mappings == 0
> -			|| !IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API))
> +	/* no pmem api == flushing capability unknown */
> +	if (!IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API))
>  		return -ENXIO;
>  
> +	/* Test if an explicit flush function is defined */
> +	if (test_bit(ND_REGION_ASYNC, &nd_region->flags) && nd_region->flush)
> +		return 1;
> +


        if (nd_region->flush)
                return 1;


> +	/* Test if any flush hints for the region are available */
>  	for (i = 0; i < nd_region->ndr_mappings; i++) {
>  		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
>  		struct nvdimm *nvdimm = nd_mapping->nvdimm;
> @@ -1249,8 +1253,8 @@ int nvdimm_has_flush(struct nd_region *nd_region)
>  	}
>  
>  	/*
> -	 * The platform defines dimm devices without hints, assume
> -	 * platform persistence mechanism like ADR
> +	 * The platform defines dimm devices without hints nor explicit flush,
> +	 * assume platform persistence mechanism like ADR
>  	 */
>  	return 0;
>  }
> -- 
> 2.30.2
Vaibhav Jain April 6, 2021, 11:37 a.m. UTC | #2
Hi Aneesh,
Thanks for looking into this patch.

"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:

> Vaibhav Jain <vaibhav@linux.ibm.com> writes:
>
>> In case a platform doesn't provide explicit flush-hints but provides an
>> explicit flush callback via ND_REGION_ASYNC region flag, then
>> nvdimm_has_flush() still returns '0' indicating that writes do not
>> require flushing. This happens on PPC64 with patch at [1] applied,
>> where 'deep_flush' of a region was denied even though an explicit
>> flush function was provided.
>>
>> Similar problem is also seen with virtio-pmem where the 'deep_flush'
>> sysfs attribute is not visible as in absence of any registered nvdimm,
>> 'nd_region->ndr_mappings == 0'.
>>
>> Fix this by updating nvdimm_has_flush() adding a condition to
>> nvdimm_has_flush() testing for ND_REGION_ASYNC flag on the region
>> and see if a 'region->flush' callback is assigned. Also remove
>> explicit test for 'nd_region->ndr_mapping' since regions can be marked
>> 'ND_REGION_SYNC' without any explicit mappings as in case of
>> virtio-pmem.
>
> Do we need to check for ND_REGION_ASYNC? What if the backend wants to
> provide a synchronous dax region but with different deep flush semantic
> than writing to wpq flush address? 
> ie,

For a synchronous dax region, writes aren't expected to require any
flushing (or deep-flush), so this function should ideally return '0' in
such a case. Hence I added the test for the ND_REGION_ASYNC region flag.

>
>>
>> References:
>> [1] "powerpc/papr_scm: Implement support for H_SCM_FLUSH hcall"
>> https://lore.kernel.org/linux-nvdimm/161703936121.36.7260632399582101498.stgit@e1fbed493c87
>>
>> Cc: <stable@vger.kernel.org>
>> Fixes: c5d4355d10d4 ("libnvdimm: nd_region flush callback support")
>> Reported-by: Shivaprasad G Bhat <sbhat@linux.ibm.com>
>> Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
>> ---
>> Changelog:
>>
>> v2:
>> * Added the fixes tag and addressed the patch to stable tree [ Aneesh ]
>> * Updated patch description to address the virtio-pmem case.
>> * Removed test for 'nd_region->ndr_mappings' from beginning of
>>   nvdimm_has_flush() to handle the virtio-pmem case.
>> ---
>>  drivers/nvdimm/region_devs.c | 14 +++++++++-----
>>  1 file changed, 9 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
>> index ef23119db574..cdf5eb6fa425 100644
>> --- a/drivers/nvdimm/region_devs.c
>> +++ b/drivers/nvdimm/region_devs.c
>> @@ -1234,11 +1234,15 @@ int nvdimm_has_flush(struct nd_region *nd_region)
>>  {
>>  	int i;
>>  
>> -	/* no nvdimm or pmem api == flushing capability unknown */
>> -	if (nd_region->ndr_mappings == 0
>> -			|| !IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API))
>> +	/* no pmem api == flushing capability unknown */
>> +	if (!IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API))
>>  		return -ENXIO;
>>  
>> +	/* Test if an explicit flush function is defined */
>> +	if (test_bit(ND_REGION_ASYNC, &nd_region->flags) && nd_region->flush)
>> +		return 1;
>> +
>
>
>         if (nd_region->flush)
>                 return 1
>
>
>> +	/* Test if any flush hints for the region are available */
>>  	for (i = 0; i < nd_region->ndr_mappings; i++) {
>>  		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
>>  		struct nvdimm *nvdimm = nd_mapping->nvdimm;
>> @@ -1249,8 +1253,8 @@ int nvdimm_has_flush(struct nd_region *nd_region)
>>  	}
>>  
>>  	/*
>> -	 * The platform defines dimm devices without hints, assume
>> -	 * platform persistence mechanism like ADR
>> +	 * The platform defines dimm devices without hints nor explicit flush,
>> +	 * assume platform persistence mechanism like ADR
>>  	 */
>>  	return 0;
>>  }
>> -- 
>> 2.30.2
Aneesh Kumar K.V April 6, 2021, 12:05 p.m. UTC | #3
On 4/6/21 5:07 PM, Vaibhav Jain wrote:
> Hi Aneesh,
> Thanks for looking into this patch.
> 
> "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:
> 
>> Vaibhav Jain <vaibhav@linux.ibm.com> writes:
>>
>>> In case a platform doesn't provide explicit flush-hints but provides an
>>> explicit flush callback via ND_REGION_ASYNC region flag, then
>>> nvdimm_has_flush() still returns '0' indicating that writes do not
>>> require flushing. This happens on PPC64 with patch at [1] applied,
>>> where 'deep_flush' of a region was denied even though an explicit
>>> flush function was provided.
>>>
>>> Similar problem is also seen with virtio-pmem where the 'deep_flush'
>>> sysfs attribute is not visible as in absence of any registered nvdimm,
>>> 'nd_region->ndr_mappings == 0'.
>>>
>>> Fix this by updating nvdimm_has_flush() adding a condition to
>>> nvdimm_has_flush() testing for ND_REGION_ASYNC flag on the region
>>> and see if a 'region->flush' callback is assigned. Also remove
>>> explicit test for 'nd_region->ndr_mapping' since regions can be marked
>>> 'ND_REGION_SYNC' without any explicit mappings as in case of
>>> virtio-pmem.
>>
>> Do we need to check for ND_REGION_ASYNC? What if the backend wants to
>> provide a synchronous dax region but with different deep flush semantic
>> than writing to wpq flush address?
>> ie,
> 
> For a synchronous dax region, writes arent expected to require any
> flushing (or deep-flush) so this function should ideally return '0' in
> such a case. Hence I had added the test for ND_REGION_ASYNC region flag.
> 

That is not correct. For example, we could move the WPQ flush into an
nd_region->flush callback for ACPI, or we could implement a deep flush
for a synchronous dax region exposed by the papr_scm driver that ensures
stores have indeed reached the media managed by the hypervisor.

-aneesh
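
(A sketch of the alternative being suggested here: drop the
ND_REGION_ASYNC gate so that any region-level flush callback, including
one provided for a synchronous dax region, is reported as a deep-flush
capability:

	/*
	 * An explicit flush callback implies a deep-flush mechanism,
	 * whether or not the region is marked ND_REGION_ASYNC.
	 */
	if (nd_region->flush)
		return 1;

This keeps 'deep_flush' available even when a backend offers a flush
primitive for an otherwise synchronous region.)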
Vaibhav Jain April 8, 2021, 10:48 a.m. UTC | #4
"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:

> On 4/6/21 5:07 PM, Vaibhav Jain wrote:
>> Hi Aneesh,
>> Thanks for looking into this patch.
>> 
>> "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:
>> 
>>> Vaibhav Jain <vaibhav@linux.ibm.com> writes:
>>>
>>>> In case a platform doesn't provide explicit flush-hints but provides an
>>>> explicit flush callback via ND_REGION_ASYNC region flag, then
>>>> nvdimm_has_flush() still returns '0' indicating that writes do not
>>>> require flushing. This happens on PPC64 with patch at [1] applied,
>>>> where 'deep_flush' of a region was denied even though an explicit
>>>> flush function was provided.
>>>>
>>>> Similar problem is also seen with virtio-pmem where the 'deep_flush'
>>>> sysfs attribute is not visible as in absence of any registered nvdimm,
>>>> 'nd_region->ndr_mappings == 0'.
>>>>
>>>> Fix this by updating nvdimm_has_flush() adding a condition to
>>>> nvdimm_has_flush() testing for ND_REGION_ASYNC flag on the region
>>>> and see if a 'region->flush' callback is assigned. Also remove
>>>> explicit test for 'nd_region->ndr_mapping' since regions can be marked
>>>> 'ND_REGION_SYNC' without any explicit mappings as in case of
>>>> virtio-pmem.
>>>
>>> Do we need to check for ND_REGION_ASYNC? What if the backend wants to
>>> provide a synchronous dax region but with different deep flush semantic
>>> than writing to wpq flush address?
>>> ie,
>> 
>> For a synchronous dax region, writes arent expected to require any
>> flushing (or deep-flush) so this function should ideally return '0' in
>> such a case. Hence I had added the test for ND_REGION_ASYNC region flag.
>> 
>
> that is not correct. For example, we could ideally move the wpq flush as 
> an nd_region->flush callback for acpi or we could implement a deep flush 
> for a synchronous dax region exposed by papr_scm driver that ensures 
> stores indeed reached the media managed by the hypervisor.
Sure, makes sense now. I have sent out a v3 of this patch with this
addressed.

>
> -aneesh

Patch

diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
index ef23119db574..cdf5eb6fa425 100644
--- a/drivers/nvdimm/region_devs.c
+++ b/drivers/nvdimm/region_devs.c
@@ -1234,11 +1234,15 @@  int nvdimm_has_flush(struct nd_region *nd_region)
 {
 	int i;
 
-	/* no nvdimm or pmem api == flushing capability unknown */
-	if (nd_region->ndr_mappings == 0
-			|| !IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API))
+	/* no pmem api == flushing capability unknown */
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API))
 		return -ENXIO;
 
+	/* Test if an explicit flush function is defined */
+	if (test_bit(ND_REGION_ASYNC, &nd_region->flags) && nd_region->flush)
+		return 1;
+
+	/* Test if any flush hints for the region are available */
 	for (i = 0; i < nd_region->ndr_mappings; i++) {
 		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
 		struct nvdimm *nvdimm = nd_mapping->nvdimm;
@@ -1249,8 +1253,8 @@  int nvdimm_has_flush(struct nd_region *nd_region)
 	}
 
 	/*
-	 * The platform defines dimm devices without hints, assume
-	 * platform persistence mechanism like ADR
+	 * The platform defines dimm devices without hints nor explicit flush,
+	 * assume platform persistence mechanism like ADR
 	 */
 	return 0;
 }