
[v0] nvmem: core: Export nvmem cell info to userspace

Message ID 1553061201-28894-1-git-send-email-gkohli@codeaurora.org (mailing list archive)
State Not Applicable, archived
Series [v0] nvmem: core: Export nvmem cell info to userspace

Commit Message

Gaurav Kohli March 20, 2019, 5:53 a.m. UTC
From: Shiraz Hashim <shashim@codeaurora.org>

The existing nvmem framework exports the full register space as the
nvmem binary, but does not export the child nodes of nvmem, i.e. the
nvmem cells. The kernel can read a specific cell using
nvmem_cell_read(), but userspace has no such provision.

Add support to export each nvmem cell as well, so userspace can use
it directly.

Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
Co-developed-by: Gaurav Kohli <gkohli@codeaurora.org>
Signed-off-by: Gaurav Kohli <gkohli@codeaurora.org>

Comments

Srinivas Kandagatla March 22, 2019, 3:23 p.m. UTC | #1
On 20/03/2019 05:53, Gaurav Kohli wrote:
> From: Shiraz Hashim <shashim@codeaurora.org>
> 
> The existing nvmem framework exports the full register space as the
> nvmem binary, but does not export the child nodes of nvmem, i.e. the
> nvmem cells. The kernel can read a specific cell using
> nvmem_cell_read(), but userspace has no such provision.
> 
> Add support to export each nvmem cell as well, so userspace can use
> it directly.
> 
> Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
> Co-developed-by: Gaurav Kohli <gkohli@codeaurora.org>
> Signed-off-by: Gaurav Kohli <gkohli@codeaurora.org>
> 
> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c

Thank you for the patch.

Why do you need such a provision when userspace can just get the cell
values using the correct offset and size?
This will also bring the overhead of managing entries dynamically,
plus a confusing userspace ABI.
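
For illustration, reading a cell that way is only a few lines of
userspace code. A minimal sketch, where the nvmem device path, offset
and size are made-up values for the example:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical device path and cell geometry */
	const char *path = "/sys/bus/nvmem/devices/example0/nvmem";
	uint8_t val[4];
	int fd = open(path, O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* pread() reads the cell at its offset without a separate seek */
	if (pread(fd, val, sizeof(val), 0x10) != sizeof(val)) {
		perror("pread");
		close(fd);
		return 1;
	}
	printf("cell = %02x %02x %02x %02x\n",
	       val[0], val[1], val[2], val[3]);
	close(fd);
	return 0;
}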

Unless you have a valid reason or use case, I don't see the need for this.

thanks,
srini
Gaurav Kohli March 24, 2019, 3:25 p.m. UTC | #2
On 3/22/2019 8:53 PM, Srinivas Kandagatla wrote:
>
>
> On 20/03/2019 05:53, Gaurav Kohli wrote:
>> From: Shiraz Hashim <shashim@codeaurora.org>
>>
>> The existing nvmem framework exports the full register space as the
>> nvmem binary, but does not export the child nodes of nvmem, i.e. the
>> nvmem cells. The kernel can read a specific cell using
>> nvmem_cell_read(), but userspace has no such provision.
>>
>> Add support to export each nvmem cell as well, so userspace can use
>> it directly.
>>
>> Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
>> Co-developed-by: Gaurav Kohli <gkohli@codeaurora.org>
>> Signed-off-by: Gaurav Kohli <gkohli@codeaurora.org>
>>
>> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
>
> Thank you for the patch.
>
> Why do you need such a provision when userspace can just get the
> cell values using the correct offset and size?
> This will also bring the overhead of managing entries dynamically,
> plus a confusing userspace ABI.
>
> Unless you have a valid reason or use case, I don't see the need for this.


Hi Srinivas,


This is mainly for userspace convenience. In the existing
implementation, userspace has to do the manipulation according to
offset and bit itself. With the present patch, it just has to cat the
cell name, which is also easy to maintain across different SoCs. With
the current scheme it is difficult to maintain the userspace code, as
we have to change it every time according to the bit layout.

This would also help to expose only certain bits, as per the bit
parameter mentioned in the DT node, which would also help to avoid
exposing the other bits to userspace.

>
> thanks,
> srini
Srinivas Kandagatla March 25, 2019, 10:58 a.m. UTC | #3
On 24/03/2019 15:25, Gaurav Kohli wrote:
> 
> On 3/22/2019 8:53 PM, Srinivas Kandagatla wrote:
>>
>>
>> On 20/03/2019 05:53, Gaurav Kohli wrote:
>>> From: Shiraz Hashim <shashim@codeaurora.org>
>>>
>>> The existing nvmem framework exports the full register space as the
>>> nvmem binary, but does not export the child nodes of nvmem, i.e. the
>>> nvmem cells. The kernel can read a specific cell using
>>> nvmem_cell_read(), but userspace has no such provision.
>>>
>>> Add support to export each nvmem cell as well, so userspace can use
>>> it directly.
>>>
>>> Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
>>> Co-developed-by: Gaurav Kohli <gkohli@codeaurora.org>
>>> Signed-off-by: Gaurav Kohli <gkohli@codeaurora.org>
>>>
>>> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
>>
>> Thank you for the patch.
>>
>> Why do you need such a provision when userspace can just get the
>> cell values using the correct offset and size?
>> This will also bring the overhead of managing entries dynamically,
>> plus a confusing userspace ABI.
>>
>> Unless you have a valid reason or use case, I don't see the need for this.
> 
> 
> Hi Srinivas,
> 
> 
> This is mainly for userspace convenience. In the existing
> implementation, userspace has to do the manipulation according to
> offset and bit itself. With the present patch, it just has to cat the
> cell name, which is also easy to maintain across different SoCs.
Yes, that is expected I guess!

> 
> With the current scheme it is difficult to maintain the userspace
> code, as we have to change it every time according to the bit layout.

Which user space code/application are you referring to here? Are these 
open source?

> 
> 
> This would also help to expose only certain bits, as per the bit
> parameter mentioned in the DT node, which would also help to avoid
> exposing
> 
NVMEM is not limited to DT users; non-DT users use this framework too.
So the problem is not as simple as it sounds.

If your issue is just about DT, you could easily parse the active
device tree via /proc/device-tree, get the cell offset, length and
names from it, and use that information to read from nvmem.
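
A rough sketch of that approach, assuming a cell node whose reg
property is a big-endian <offset length> pair; the node and device
paths below are made up for the example:

#include <arpa/inet.h>	/* ntohl(): DT properties are big-endian */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical paths; a real tool would discover these */
	const char *reg_path =
		"/proc/device-tree/soc/efuse@700000/example-cell/reg";
	const char *nvmem_path = "/sys/bus/nvmem/devices/example0/nvmem";
	uint32_t reg[2];
	int fd = open(reg_path, O_RDONLY);

	if (fd < 0 || read(fd, reg, sizeof(reg)) != sizeof(reg)) {
		perror("reg");
		return 1;
	}
	close(fd);

	uint32_t offset = ntohl(reg[0]);	/* cell offset in bytes */
	uint32_t length = ntohl(reg[1]);	/* cell length in bytes */
	uint8_t *buf = malloc(length);

	fd = open(nvmem_path, O_RDONLY);
	if (!buf || fd < 0 ||
	    pread(fd, buf, length, offset) != (ssize_t)length) {
		perror("nvmem");
		return 1;
	}
	for (uint32_t i = 0; i < length; i++)
		printf("%02x", buf[i]);
	printf("\n");
	free(buf);
	close(fd);
	return 0;
}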

There are other concerns about the userspace ABI w.r.t. udev events:
udev events would race with the creation of these cell entries,
resulting in behavior where userspace applications would not see the
entries after the udev event.

In the worst case, if we decide to go with adding cells to nvmem, then
we should do it before the device is even probed, using group
attributes. That would mean we cannot support cells that are
dynamically defined. And there might be some memory-freeing issues
with this method too!
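
For reference, the group-attribute route would look roughly like this,
assuming the cells are already known before device_add(); all names
below are placeholders, not a proposed API:

/* Rough sketch only: cell files attached through dev->groups, so they
 * exist before the uevent is emitted. This cannot cover cells that
 * are only defined after probe.
 */
static struct bin_attribute example_cell_attr = {
	.attr = { .name = "example-cell", .mode = 0400 },
	.size = 4,			/* cell->bytes for a real cell */
	.read = bin_attr_nvmem_cell_read,
};

static struct bin_attribute *nvmem_cell_bin_attrs[] = {
	&example_cell_attr,
	NULL,
};

static const struct attribute_group nvmem_cells_group = {
	.bin_attrs = nvmem_cell_bin_attrs,
};

static const struct attribute_group *nvmem_cells_groups[] = {
	&nvmem_cells_group,
	NULL,
};

/* In nvmem_register(), before device_add():
 *
 *	nvmem->dev.groups = nvmem_cells_groups;
 */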

--srini


> the other bits to userspace.
> 
>>
>> thanks,
>> srini
>
Gaurav Kohli March 26, 2019, 1:14 p.m. UTC | #4
On 3/25/2019 4:28 PM, Srinivas Kandagatla wrote:
>
>
> On 24/03/2019 15:25, Gaurav Kohli wrote:
>>
>> On 3/22/2019 8:53 PM, Srinivas Kandagatla wrote:
>>>
>>>
>>> On 20/03/2019 05:53, Gaurav Kohli wrote:
>>>> From: Shiraz Hashim <shashim@codeaurora.org>
>>>>
>>>> The existing nvmem framework exports the full register space as the
>>>> nvmem binary, but does not export the child nodes of nvmem, i.e. the
>>>> nvmem cells. The kernel can read a specific cell using
>>>> nvmem_cell_read(), but userspace has no such provision.
>>>>
>>>> Add support to export each nvmem cell as well, so userspace can use
>>>> it directly.
>>>>
>>>> Signed-off-by: Shiraz Hashim <shashim@codeaurora.org>
>>>> Co-developed-by: Gaurav Kohli <gkohli@codeaurora.org>
>>>> Signed-off-by: Gaurav Kohli <gkohli@codeaurora.org>
>>>>
>>>> diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
>>>
>>> Thank you for the patch.
>>>
>>> Why do you need such a provision when userspace can just get the
>>> cell values using the correct offset and size?
>>> This will also bring the overhead of managing entries dynamically,
>>> plus a confusing userspace ABI.
>>>
>>> Unless you have a valid reason or use case, I don't see the need
>>> for this.
>>
>>
>> Hi Srinivas,
>>
>>
>> This is mainly for userspace convenience. In the existing
>> implementation, userspace has to do the manipulation according to
>> offset and bit itself. With the present patch, it just has to cat
>> the cell name, which is also easy to maintain across different SoCs.
> Yes, that is expected I guess!
>
>>
>> With the current scheme it is difficult to maintain the userspace
>> code, as we have to change it every time according to the bit layout.
>
> Which user space code/application are you referring to here? Are these 
> open source?

Hi Srini,

This is not open source; we have a requirement to read certain bits of
nvmem.

>
>>
>>
>> This would also help to expose only certain bits, as per the bit
>> parameter mentioned in the DT node, which would also help to avoid
>> exposing
>>
> NVMEM is not limited to DT users; non-DT users use this framework too.
> So the problem is not as simple as it sounds.
>
> If your issue is just about DT, you could easily parse the active
> device tree via /proc/device-tree, get the cell offset, length and
> names from it, and use that information to read from nvmem.
>
> There are other concerns about the userspace ABI w.r.t. udev events:
> udev events would race with the creation of these cell entries,
> resulting in behavior where userspace applications would not see the
> entries after the udev event.
>
> In the worst case, if we decide to go with adding cells to nvmem, then
> we should do it before the device is even probed, using group
> attributes. That would mean we cannot support cells that are
> dynamically defined. And there might be some memory-freeing issues
> with this method too!
Yes, I agree there are dynamically defined nvmem cell entries as well.
Can you please suggest some other way?
>
> --srini
>
>
>> the other bits to userspace.
>>
>>>
>>> thanks,
>>> srini
>>

Patch

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index f24008b..e4b6160 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -47,6 +47,7 @@  struct nvmem_cell {
 	int			nbits;
 	struct device_node	*np;
 	struct nvmem_device	*nvmem;
+	struct bin_attribute	attr;
 	struct list_head	node;
 };
 
@@ -99,6 +100,33 @@  static ssize_t type_show(struct device *dev,
 	return sprintf(buf, "%s\n", nvmem_type_str[nvmem->type]);
 }
 
+static ssize_t bin_attr_nvmem_cell_read(struct file *filp, struct kobject *kobj,
+				    struct bin_attribute *attr,
+				    char *buf, loff_t pos, size_t count)
+{
+	struct nvmem_cell *cell = attr->private;
+	size_t len;
+	u8 *data;
+
+	if (!cell)
+		return -EINVAL;
+
+	/* nvmem_cell_read() returns a kmalloc'd copy or an ERR_PTR */
+	data = nvmem_cell_read(cell, &len);
+	if (IS_ERR(data))
+		return PTR_ERR(data);
+
+	if (pos >= len) {
+		kfree(data);
+		return 0;
+	}
+	len = min(len - (size_t)pos, count);
+	memcpy(buf, data + pos, len);
+	kfree(data);
+
+	return len;
+}
+
 static DEVICE_ATTR_RO(type);
 
 static struct attribute *nvmem_attrs[] = {
@@ -324,6 +346,7 @@  static void nvmem_cell_drop(struct nvmem_cell *cell)
 {
 	blocking_notifier_call_chain(&nvmem_notifier, NVMEM_CELL_REMOVE, cell);
 	mutex_lock(&nvmem_mutex);
+	device_remove_bin_file(&cell->nvmem->dev, &cell->attr);
 	list_del(&cell->node);
 	mutex_unlock(&nvmem_mutex);
 	of_node_put(cell->np);
@@ -341,8 +364,24 @@  static void nvmem_device_remove_all_cells(const struct nvmem_device *nvmem)
 
 static void nvmem_cell_add(struct nvmem_cell *cell)
 {
+	int rval;
+	struct bin_attribute *nvmem_cell_attr = &cell->attr;
+
 	mutex_lock(&nvmem_mutex);
 	list_add_tail(&cell->node, &cell->nvmem->cells);
+
+	/* add attr for this cell */
+	nvmem_cell_attr->attr.name = cell->name;
+	nvmem_cell_attr->attr.mode = 0400;
+	nvmem_cell_attr->private = cell;
+	nvmem_cell_attr->size = cell->bytes;
+	nvmem_cell_attr->read = bin_attr_nvmem_cell_read;
+	rval = device_create_bin_file(&cell->nvmem->dev, nvmem_cell_attr);
+	if (rval) {
+		dev_err(&cell->nvmem->dev,
+			"Failed to create cell binary file %d\n", rval);
+	}
+
 	mutex_unlock(&nvmem_mutex);
 	blocking_notifier_call_chain(&nvmem_notifier, NVMEM_CELL_ADD, cell);
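
With the patch applied, each cell appears as a read-only binary file
named after the cell, next to the existing raw nvmem file. A
hypothetical userspace read (device and cell names below are made up):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	unsigned char buf[64];
	int fd = open("/sys/bus/nvmem/devices/example0/example-cell",
		      O_RDONLY);
	ssize_t n, i;

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* sysfs clamps the read to the bin_attribute size (cell->bytes) */
	n = read(fd, buf, sizeof(buf));
	for (i = 0; i < n; i++)
		printf("%02x", buf[i]);
	printf("\n");
	close(fd);
	return 0;
}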
 }