From patchwork Mon Jul 20 14:43:23 2015
X-Patchwork-Submitter: Srinivas Kandagatla
X-Patchwork-Id: 6828351
From: Srinivas Kandagatla
To: linux-arm-kernel@lists.infradead.org, Greg Kroah-Hartman
Cc: stefan.wahren@i2se.com, devicetree@vger.kernel.org, arnd@arndb.de,
    linux-api@vger.kernel.org, s.hauer@pengutronix.de, sboyd@codeaurora.org,
    linux-kernel@vger.kernel.org, Rob Herring, pantelis.antoniou@konsulko.com,
    Mark Brown, Srinivas Kandagatla, linux-arm-msm@vger.kernel.org,
    mporter@konsulko.com, Maxime Ripard, wxt@rock-chips.com
Subject: [PATCH v8 2/9] nvmem: Add a simple NVMEM framework for consumers
Date: Mon, 20 Jul 2015 15:43:23 +0100
Message-Id: <1437403403-4180-1-git-send-email-srinivas.kandagatla@linaro.org>
In-Reply-To: <1437403352-4091-1-git-send-email-srinivas.kandagatla@linaro.org>
References: <1437403352-4091-1-git-send-email-srinivas.kandagatla@linaro.org>

This patch adds only the consumer part of the framework, to keep the
review easy.

Up until now, nvmem drivers were stored in drivers/misc, where they all
had to duplicate pretty much the same code to register a sysfs file,
allow in-kernel users to access the content of the devices they were
driving, etc.

This was also a problem as far as other in-kernel users were involved,
since the solutions used varied from one driver to another, resulting
in a rather big abstraction leak.

The introduction of this framework aims at solving this. It also
introduces a DT representation that lets consumer devices fetch the
data they require (MAC addresses, SoC/revision IDs, part numbers, and
so on) from the nvmems.

Having a regmap interface to this framework gives a much better
abstraction for nvmems on different buses.
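To illustrate the expected wiring (this is only a sketch; the provider
and cell names below, qfprom and tsens_calib, are made-up examples and
not part of this patch), a provider exposes cells as child nodes and a
consumer refers to them by phandle and name:

        /* provider exposing one cell: 0x10 bytes at offset 0x404 */
        qfprom: qfprom@700000 {
                ...
                tsens_calib: calib@404 {
                        reg = <0x404 0x10>;
                };
        };

        /* consumer referencing the cell by name */
        tsens@900000 {
                ...
                nvmem-cells = <&tsens_calib>;
                nvmem-cell-names = "calibration";
        };

A consumer driver would then read the cell from its probe path along
these lines (again a sketch, with a hypothetical foo_probe()):

        static int foo_probe(struct platform_device *pdev)
        {
                struct nvmem_cell *cell;
                size_t len;
                u8 *data;

                cell = nvmem_cell_get(&pdev->dev, "calibration");
                if (IS_ERR(cell))
                        return PTR_ERR(cell);

                data = nvmem_cell_read(cell, &len);
                nvmem_cell_put(cell);
                if (IS_ERR(data))
                        return PTR_ERR(data);

                /* ... use len bytes of calibration data ... */

                /* nvmem_cell_read() allocates the buffer; free it */
                kfree(data);
                return 0;
        }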
Signed-off-by: Maxime Ripard
[Maxime Ripard: initial version of the framework]
Signed-off-by: Srinivas Kandagatla
---
 drivers/nvmem/core.c           | 415 +++++++++++++++++++++++++++++++++++++++++
 include/linux/nvmem-consumer.h |  61 ++++++
 2 files changed, 476 insertions(+)

diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index bde5528..de14c36 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -365,6 +365,421 @@ int nvmem_unregister(struct nvmem_device *nvmem)
 }
 EXPORT_SYMBOL_GPL(nvmem_unregister);
 
+static struct nvmem_device *__nvmem_device_get(struct device_node *np,
+                                               struct nvmem_cell **cellp,
+                                               const char *cell_id)
+{
+        struct nvmem_device *nvmem = NULL;
+
+        mutex_lock(&nvmem_mutex);
+
+        if (np) {
+                nvmem = of_nvmem_find(np);
+                if (!nvmem) {
+                        mutex_unlock(&nvmem_mutex);
+                        return ERR_PTR(-EPROBE_DEFER);
+                }
+        } else {
+                struct nvmem_cell *cell = nvmem_find_cell(cell_id);
+
+                if (cell) {
+                        nvmem = cell->nvmem;
+                        *cellp = cell;
+                }
+
+                if (!nvmem) {
+                        mutex_unlock(&nvmem_mutex);
+                        return ERR_PTR(-ENOENT);
+                }
+        }
+
+        nvmem->users++;
+        mutex_unlock(&nvmem_mutex);
+
+        if (!try_module_get(nvmem->owner)) {
+                dev_err(&nvmem->dev,
+                        "could not increase module refcount for cell %s\n",
+                        nvmem->name);
+
+                mutex_lock(&nvmem_mutex);
+                nvmem->users--;
+                mutex_unlock(&nvmem_mutex);
+
+                return ERR_PTR(-EINVAL);
+        }
+
+        return nvmem;
+}
+
+static void __nvmem_device_put(struct nvmem_device *nvmem)
+{
+        module_put(nvmem->owner);
+        mutex_lock(&nvmem_mutex);
+        nvmem->users--;
+        mutex_unlock(&nvmem_mutex);
+}
+
+static struct nvmem_cell *nvmem_cell_get_from_list(const char *cell_id)
+{
+        struct nvmem_cell *cell = NULL;
+        struct nvmem_device *nvmem;
+
+        nvmem = __nvmem_device_get(NULL, &cell, cell_id);
+        if (IS_ERR(nvmem))
+                return ERR_CAST(nvmem);
+
+        return cell;
+}
+
+#if IS_ENABLED(CONFIG_NVMEM) && IS_ENABLED(CONFIG_OF)
+/**
+ * of_nvmem_cell_get() - Get a nvmem cell from a given device node and cell id
+ *
+ * @np: Device tree node that uses the nvmem cell
+ * @name: nvmem cell name from the nvmem-cell-names property
+ *
+ * Return: Will be an ERR_PTR() on error or a valid pointer
+ * to a struct nvmem_cell. The nvmem_cell should be released
+ * with nvmem_cell_put().
+ */
+struct nvmem_cell *of_nvmem_cell_get(struct device_node *np,
+                                     const char *name)
+{
+        struct device_node *cell_np, *nvmem_np;
+        struct nvmem_cell *cell;
+        struct nvmem_device *nvmem;
+        const __be32 *addr;
+        int rval, len, index;
+
+        index = of_property_match_string(np, "nvmem-cell-names", name);
+
+        cell_np = of_parse_phandle(np, "nvmem-cells", index);
+        if (!cell_np)
+                return ERR_PTR(-EINVAL);
+
+        nvmem_np = of_get_next_parent(cell_np);
+        if (!nvmem_np)
+                return ERR_PTR(-EINVAL);
+
+        nvmem = __nvmem_device_get(nvmem_np, NULL, NULL);
+        if (IS_ERR(nvmem))
+                return ERR_CAST(nvmem);
+
+        addr = of_get_property(cell_np, "reg", &len);
+        if (!addr || (len < 2 * sizeof(int))) {
+                dev_err(&nvmem->dev, "nvmem: invalid reg on %s\n",
+                        cell_np->full_name);
+                rval = -EINVAL;
+                goto err_mem;
+        }
+
+        cell = kzalloc(sizeof(*cell), GFP_KERNEL);
+        if (!cell) {
+                rval = -ENOMEM;
+                goto err_mem;
+        }
+
+        cell->nvmem = nvmem;
+        cell->offset = be32_to_cpup(addr++);
+        cell->bytes = be32_to_cpup(addr);
+        cell->name = cell_np->name;
+
+        /* an optional 'bits' property narrows the cell to a bit slice */
+        addr = of_get_property(cell_np, "bits", &len);
+        if (addr && len == (2 * sizeof(int))) {
+                cell->bit_offset = be32_to_cpup(addr++);
+                cell->nbits = be32_to_cpup(addr);
+        }
+
+        if (cell->nbits)
+                cell->bytes = DIV_ROUND_UP(cell->nbits + cell->bit_offset,
+                                           BITS_PER_BYTE);
+
+        if (!IS_ALIGNED(cell->offset, nvmem->stride)) {
+                dev_err(&nvmem->dev,
+                        "cell %s unaligned to nvmem stride %d\n",
+                        cell->name, nvmem->stride);
+                rval = -EINVAL;
+                goto err_sanity;
+        }
+
+        nvmem_cell_add(cell);
+
+        return cell;
+
+err_sanity:
+        kfree(cell);
+
+err_mem:
+        __nvmem_device_put(nvmem);
+
+        return ERR_PTR(rval);
+}
+EXPORT_SYMBOL_GPL(of_nvmem_cell_get);
+#endif
+
+/**
+ * nvmem_cell_get() - Get the nvmem cell of a device from a given cell name
+ *
+ * @dev: Device that requests the nvmem cell
+ * @cell_id: nvmem cell name to get
+ *
+ * Return: Will be an ERR_PTR() on error or a valid pointer
+ * to a struct nvmem_cell. The nvmem_cell should be released
+ * with nvmem_cell_put().
+ */
+struct nvmem_cell *nvmem_cell_get(struct device *dev, const char *cell_id)
+{
+        struct nvmem_cell *cell;
+
+        if (dev->of_node) { /* try dt first */
+                cell = of_nvmem_cell_get(dev->of_node, cell_id);
+                if (!IS_ERR(cell) || PTR_ERR(cell) == -EPROBE_DEFER)
+                        return cell;
+        }
+
+        return nvmem_cell_get_from_list(cell_id);
+}
+EXPORT_SYMBOL_GPL(nvmem_cell_get);
+
+static void devm_nvmem_cell_release(struct device *dev, void *res)
+{
+        nvmem_cell_put(*(struct nvmem_cell **)res);
+}
+
+/**
+ * devm_nvmem_cell_get() - Get the nvmem cell of a device from a given id
+ *
+ * @dev: Device that requests the nvmem cell
+ * @id: nvmem cell name from the nvmem-cell-names property
+ *
+ * Return: Will be an ERR_PTR() on error or a valid pointer
+ * to a struct nvmem_cell. The nvmem_cell will be released
+ * automatically once the device is freed.
+ */
+struct nvmem_cell *devm_nvmem_cell_get(struct device *dev, const char *id)
+{
+        struct nvmem_cell **ptr, *cell;
+
+        ptr = devres_alloc(devm_nvmem_cell_release, sizeof(*ptr), GFP_KERNEL);
+        if (!ptr)
+                return ERR_PTR(-ENOMEM);
+
+        cell = nvmem_cell_get(dev, id);
+        if (!IS_ERR(cell)) {
+                *ptr = cell;
+                devres_add(dev, ptr);
+        } else {
+                devres_free(ptr);
+        }
+
+        return cell;
+}
+EXPORT_SYMBOL_GPL(devm_nvmem_cell_get);
+
+static int devm_nvmem_cell_match(struct device *dev, void *res, void *data)
+{
+        struct nvmem_cell **c = res;
+
+        if (WARN_ON(!c || !*c))
+                return 0;
+
+        return *c == data;
+}
+
+/**
+ * devm_nvmem_cell_put() - Release a previously allocated nvmem cell
+ * from devm_nvmem_cell_get().
+ *
+ * @dev: Device that requested the nvmem cell
+ * @cell: Previously allocated nvmem cell by devm_nvmem_cell_get()
+ */
+void devm_nvmem_cell_put(struct device *dev, struct nvmem_cell *cell)
+{
+        int ret;
+
+        ret = devres_release(dev, devm_nvmem_cell_release,
+                             devm_nvmem_cell_match, cell);
+
+        WARN_ON(ret);
+}
+EXPORT_SYMBOL(devm_nvmem_cell_put);
+
+/**
+ * nvmem_cell_put() - Release a previously allocated nvmem cell.
+ *
+ * @cell: Previously allocated nvmem cell by nvmem_cell_get()
+ */
+void nvmem_cell_put(struct nvmem_cell *cell)
+{
+        struct nvmem_device *nvmem = cell->nvmem;
+
+        __nvmem_device_put(nvmem);
+        nvmem_cell_drop(cell);
+}
+EXPORT_SYMBOL_GPL(nvmem_cell_put);
+
+static inline void nvmem_shift_read_buffer_in_place(struct nvmem_cell *cell,
+                                                    void *buf)
+{
+        u8 *p, *b;
+        int i, bit_offset = cell->bit_offset;
+
+        p = b = buf;
+        if (bit_offset) {
+                /* First shift */
+                *b++ >>= bit_offset;
+
+                /* setup rest of the bytes if any */
+                for (i = 1; i < cell->bytes; i++) {
+                        /* Get bits from next byte and shift them towards msb */
+                        *p |= *b << (BITS_PER_BYTE - bit_offset);
+
+                        p = b;
+                        *b++ >>= bit_offset;
+                }
+
+                /* result fits in fewer bytes */
+                if (cell->bytes != DIV_ROUND_UP(cell->nbits, BITS_PER_BYTE))
+                        *p-- = 0;
+        }
+        /* clear msb bits if any leftover in the last byte */
+        *p &= GENMASK((cell->nbits % BITS_PER_BYTE) - 1, 0);
+}
+
+static int __nvmem_cell_read(struct nvmem_device *nvmem,
+                             struct nvmem_cell *cell,
+                             void *buf, size_t *len)
+{
+        int rc;
+
+        rc = regmap_raw_read(nvmem->regmap, cell->offset, buf, cell->bytes);
+
+        if (IS_ERR_VALUE(rc))
+                return rc;
+
+        /* shift bits in-place */
+        if (cell->bit_offset || cell->nbits)
+                nvmem_shift_read_buffer_in_place(cell, buf);
+
+        *len = cell->bytes;
+
+        return 0;
+}
+
+/**
+ * nvmem_cell_read() - Read a given nvmem cell
+ *
+ * @cell: nvmem cell to be read.
+ * @len: pointer to length of cell which will be populated on successful read.
+ *
+ * Return: ERR_PTR() on error or a valid pointer to the allocated buffer
+ * on success. The buffer should be freed by the consumer with kfree().
+ */
+void *nvmem_cell_read(struct nvmem_cell *cell, size_t *len)
+{
+        struct nvmem_device *nvmem = cell->nvmem;
+        u8 *buf;
+        int rc;
+
+        if (!nvmem || !nvmem->regmap)
+                return ERR_PTR(-EINVAL);
+
+        buf = kzalloc(cell->bytes, GFP_KERNEL);
+        if (!buf)
+                return ERR_PTR(-ENOMEM);
+
+        rc = __nvmem_cell_read(nvmem, cell, buf, len);
+        if (IS_ERR_VALUE(rc)) {
+                kfree(buf);
+                return ERR_PTR(rc);
+        }
+
+        return buf;
+}
+EXPORT_SYMBOL_GPL(nvmem_cell_read);
+
+static inline void *nvmem_cell_prepare_write_buffer(struct nvmem_cell *cell,
+                                                    u8 *_buf, int len)
+{
+        struct nvmem_device *nvmem = cell->nvmem;
+        int i, rc, nbits, bit_offset = cell->bit_offset;
+        u8 v, *p, *buf, *b, pbyte, pbits;
+
+        nbits = cell->nbits;
+        buf = kzalloc(cell->bytes, GFP_KERNEL);
+        if (!buf)
+                return ERR_PTR(-ENOMEM);
+
+        memcpy(buf, _buf, len);
+        p = b = buf;
+
+        if (bit_offset) {
+                pbyte = *b;
+                *b <<= bit_offset;
+
+                /* setup the first byte with lsb bits from nvmem */
+                rc = regmap_raw_read(nvmem->regmap, cell->offset, &v, 1);
+                if (IS_ERR_VALUE(rc))
+                        goto err;
+                *b++ |= GENMASK(bit_offset - 1, 0) & v;
+
+                /* setup rest of the bytes if any */
+                for (i = 1; i < cell->bytes; i++) {
+                        /* Get last byte bits and shift them towards lsb */
+                        pbits = pbyte >> (BITS_PER_BYTE - bit_offset);
+                        pbyte = *b;
+                        p = b;
+                        *b <<= bit_offset;
+                        *b++ |= pbits;
+                }
+        }
+
+        /* if it does not end on a byte boundary */
+        if ((nbits + bit_offset) % BITS_PER_BYTE) {
+                /* setup the last byte with msb bits from nvmem */
+                rc = regmap_raw_read(nvmem->regmap,
+                                     cell->offset + cell->bytes - 1, &v, 1);
+                if (IS_ERR_VALUE(rc))
+                        goto err;
+                *p |= GENMASK(7, (nbits + bit_offset) % BITS_PER_BYTE) & v;
+        }
+
+        return buf;
+
+err:
+        kfree(buf);
+        return ERR_PTR(rc);
+}
+
+/**
+ * nvmem_cell_write() - Write to a given nvmem cell
+ *
+ * @cell: nvmem cell to be written.
+ * @buf: Buffer to be written.
+ * @len: length of buffer to be written to nvmem cell.
+ *
+ * Return: number of bytes written, or negative on failure.
+ */
+int nvmem_cell_write(struct nvmem_cell *cell, void *buf, size_t len)
+{
+        struct nvmem_device *nvmem = cell->nvmem;
+        int rc;
+
+        if (!nvmem || !nvmem->regmap || nvmem->read_only ||
+            (cell->bit_offset == 0 && len != cell->bytes))
+                return -EINVAL;
+
+        if (cell->bit_offset || cell->nbits) {
+                buf = nvmem_cell_prepare_write_buffer(cell, buf, len);
+                if (IS_ERR(buf))
+                        return PTR_ERR(buf);
+        }
+
+        rc = regmap_raw_write(nvmem->regmap, cell->offset, buf, cell->bytes);
+
+        /* free the tmp buffer */
+        if (cell->bit_offset || cell->nbits)
+                kfree(buf);
+
+        if (IS_ERR_VALUE(rc))
+                return rc;
+
+        return len;
+}
+EXPORT_SYMBOL_GPL(nvmem_cell_write);
+
 static int __init nvmem_init(void)
 {
         return bus_register(&nvmem_bus_type);
diff --git a/include/linux/nvmem-consumer.h b/include/linux/nvmem-consumer.h
index 1e9e767..297cc67 100644
--- a/include/linux/nvmem-consumer.h
+++ b/include/linux/nvmem-consumer.h
@@ -12,6 +12,11 @@
 #ifndef _LINUX_NVMEM_CONSUMER_H
 #define _LINUX_NVMEM_CONSUMER_H
 
+struct device;
+struct device_node;
+/* consumer cookie */
+struct nvmem_cell;
+
 struct nvmem_cell_info {
 	const char *name;
 	unsigned int offset;
@@ -20,4 +25,60 @@ struct nvmem_cell_info {
 	unsigned int nbits;
 };
 
+#if IS_ENABLED(CONFIG_NVMEM)
+
+/* Cell based interface */
+struct nvmem_cell *nvmem_cell_get(struct device *dev, const char *name);
+struct nvmem_cell *devm_nvmem_cell_get(struct device *dev, const char *name);
+void nvmem_cell_put(struct nvmem_cell *cell);
+void devm_nvmem_cell_put(struct device *dev, struct nvmem_cell *cell);
+void *nvmem_cell_read(struct nvmem_cell *cell, size_t *len);
+int nvmem_cell_write(struct nvmem_cell *cell, void *buf, size_t len);
+
+#else
+
+static inline struct nvmem_cell *nvmem_cell_get(struct device *dev,
+						const char *name)
+{
+	return ERR_PTR(-ENOSYS);
+}
+
+static inline struct nvmem_cell *devm_nvmem_cell_get(struct device *dev,
+						     const char *name)
+{
+	return ERR_PTR(-ENOSYS);
+}
+
+static inline void devm_nvmem_cell_put(struct device *dev,
+				       struct nvmem_cell *cell)
+{
+}
+
+static inline void nvmem_cell_put(struct nvmem_cell *cell)
+{
+}
+
+static inline void *nvmem_cell_read(struct nvmem_cell *cell, size_t *len)
+{
+	return ERR_PTR(-ENOSYS);
+}
+
+static inline int nvmem_cell_write(struct nvmem_cell *cell,
+				   void *buf, size_t len)
+{
+	return -ENOSYS;
+}
+#endif /* CONFIG_NVMEM */
+
+#if IS_ENABLED(CONFIG_NVMEM) && IS_ENABLED(CONFIG_OF)
+struct nvmem_cell *of_nvmem_cell_get(struct device_node *np,
+				     const char *name);
+#else
+static inline struct nvmem_cell *of_nvmem_cell_get(struct device_node *np,
+						   const char *name)
+{
+	return ERR_PTR(-ENOSYS);
+}
+#endif /* CONFIG_NVMEM && CONFIG_OF */
+
 #endif /* ifndef _LINUX_NVMEM_CONSUMER_H */
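
A note on the bit-slice handling: for a cell described with, say,
bits = <2 10> (bit_offset 2, nbits 10), the read path shifts the raw
bytes in place so the value ends up packed at bit 0. The standalone
program below mirrors the logic of nvmem_shift_read_buffer_in_place()
in userspace; it is only a sketch to aid review, with made-up values,
not part of the patch:

        #include <stdio.h>
        #include <stdint.h>

        #define BITS_PER_BYTE 8

        /* userspace mirror of nvmem_shift_read_buffer_in_place() */
        static void shift_in_place(uint8_t *buf, int bytes, int bit_offset,
                                   int nbits)
        {
                uint8_t *p = buf, *b = buf;
                int i;

                *b++ >>= bit_offset;
                for (i = 1; i < bytes; i++) {
                        /* move low bits of the next byte into the msb side */
                        *p |= *b << (BITS_PER_BYTE - bit_offset);
                        p = b;
                        *b++ >>= bit_offset;
                }
                /* clear leftover bits above nbits in the last byte */
                if (nbits % BITS_PER_BYTE)
                        *p &= (1 << (nbits % BITS_PER_BYTE)) - 1;
        }

        int main(void)
        {
                /*
                 * 10-bit value 0x2bf at bit offset 2: the raw word is
                 * (0x2bf << 2) = 0xafc; the unused bits hold garbage,
                 * so buf[0] = 0xfc | 0x03 and buf[1] = 0x0a | 0xf0.
                 */
                uint8_t buf[2] = { 0xff, 0xfa };

                shift_in_place(buf, 2, 2, 10);
                printf("value = 0x%x\n", (buf[1] << 8) | buf[0]); /* 0x2bf */
                return 0;
        }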