From patchwork Tue Jun 9 11:40:03 2020
X-Patchwork-Submitter: Deepak Kumar Singh
X-Patchwork-Id: 11595321
From: Deepak Kumar Singh
To: bjorn.andersson@linaro.org, clew@codeaurora.org, aneela@codeaurora.org
Cc: Deepak Kumar Singh, Andy Gross,
 linux-arm-msm@vger.kernel.org (open list:ARM/QUALCOMM SUPPORT),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH V1 1/2] soc: qcom: smem: validate fields of shared structures
Date: Tue, 9 Jun 2020 17:10:03 +0530
Message-Id: <1591702804-26223-2-git-send-email-deesin@codeaurora.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1591702804-26223-1-git-send-email-deesin@codeaurora.org>
References: <1591702804-26223-1-git-send-email-deesin@codeaurora.org>
Sender:
linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Structures in shared memory that can be modified by remote processors may have untrusted values, they should be validated before use. Adding proper validation before using fields of shared structures. Signed-off-by: Deepak Kumar Singh --- drivers/soc/qcom/smem.c | 194 +++++++++++++++++++++++++++++++++--------------- 1 file changed, 133 insertions(+), 61 deletions(-) diff --git a/drivers/soc/qcom/smem.c b/drivers/soc/qcom/smem.c index 28c19bc..c1bd310 100644 --- a/drivers/soc/qcom/smem.c +++ b/drivers/soc/qcom/smem.c @@ -249,11 +249,9 @@ struct smem_region { * struct qcom_smem - device data for the smem device * @dev: device pointer * @hwlock: reference to a hwspinlock - * @global_partition: pointer to global partition when in use - * @global_cacheline: cacheline size for global partition - * @partitions: list of pointers to partitions affecting the current + * @global_partition_entry: pointer to global partition entry when in use + * @ptable_entries: list of pointers to partitions table entry of current * processor/host - * @cacheline: list of cacheline sizes for each host * @item_count: max accepted item number * @num_regions: number of @regions * @regions: list of the memory regions defining the shared memory @@ -263,10 +261,8 @@ struct qcom_smem { struct hwspinlock *hwlock; - struct smem_partition_header *global_partition; - size_t global_cacheline; - struct smem_partition_header *partitions[SMEM_HOST_COUNT]; - size_t cacheline[SMEM_HOST_COUNT]; + struct smem_ptable_entry *global_partition_entry; + struct smem_ptable_entry *ptable_entries[SMEM_HOST_COUNT]; u32 item_count; struct platform_device *socinfo; @@ -274,7 +270,19 @@ struct qcom_smem { struct smem_region regions[]; }; -static void * +/* Pointer to the one and only smem handle */ +static struct qcom_smem *__smem; + +/* Timeout (ms) for the trylock of remote spinlocks */ +#define HWSPINLOCK_TIMEOUT 1000 + +static struct smem_partition_header * +ptable_entry_to_phdr(struct smem_ptable_entry *entry) +{ + return __smem->regions[0].virt_base + le32_to_cpu(entry->offset); +} + +static struct smem_private_entry * phdr_to_last_uncached_entry(struct smem_partition_header *phdr) { void *p = phdr; @@ -339,25 +347,27 @@ static void *cached_entry_to_item(struct smem_private_entry *e) return p - le32_to_cpu(e->size); } -/* Pointer to the one and only smem handle */ -static struct qcom_smem *__smem; - -/* Timeout (ms) for the trylock of remote spinlocks */ -#define HWSPINLOCK_TIMEOUT 1000 - static int qcom_smem_alloc_private(struct qcom_smem *smem, - struct smem_partition_header *phdr, + struct smem_ptable_entry *entry, unsigned item, size_t size) { struct smem_private_entry *hdr, *end; + struct smem_partition_header *phdr; size_t alloc_size; void *cached; + void *p_end; + + phdr = ptable_entry_to_phdr(entry); + p_end = (void *)phdr + le32_to_cpu(entry->size); hdr = phdr_to_first_uncached_entry(phdr); end = phdr_to_last_uncached_entry(phdr); cached = phdr_to_last_cached_entry(phdr); + if (WARN_ON((void *)end > p_end || (void *)cached > p_end)) + return -EINVAL; + while (hdr < end) { if (hdr->canary != SMEM_PRIVATE_CANARY) goto bad_canary; @@ -366,6 +376,8 @@ static int qcom_smem_alloc_private(struct qcom_smem *smem, hdr = uncached_entry_next(hdr); } + if (WARN_ON((void *)hdr > p_end)) + return -EINVAL; /* Check that we don't grow into the cached region */ alloc_size = sizeof(*hdr) + ALIGN(size, 8); @@ -440,7 +452,7 @@ static int 
qcom_smem_alloc_global(struct qcom_smem *smem, */ int qcom_smem_alloc(unsigned host, unsigned item, size_t size) { - struct smem_partition_header *phdr; + struct smem_ptable_entry *entry; unsigned long flags; int ret; @@ -462,12 +474,12 @@ int qcom_smem_alloc(unsigned host, unsigned item, size_t size) if (ret) return ret; - if (host < SMEM_HOST_COUNT && __smem->partitions[host]) { - phdr = __smem->partitions[host]; - ret = qcom_smem_alloc_private(__smem, phdr, item, size); - } else if (__smem->global_partition) { - phdr = __smem->global_partition; - ret = qcom_smem_alloc_private(__smem, phdr, item, size); + if (host < SMEM_HOST_COUNT && __smem->ptable_entries[host]) { + entry = __smem->ptable_entries[host]; + ret = qcom_smem_alloc_private(__smem, entry, item, size); + } else if (__smem->global_partition_entry) { + entry = __smem->global_partition_entry; + ret = qcom_smem_alloc_private(__smem, entry, item, size); } else { ret = qcom_smem_alloc_global(__smem, item, size); } @@ -482,9 +494,11 @@ static void *qcom_smem_get_global(struct qcom_smem *smem, unsigned item, size_t *size) { - struct smem_header *header; - struct smem_region *region; struct smem_global_entry *entry; + struct smem_header *header; + struct smem_region *area; + u64 entry_offset; + u32 e_size; u32 aux_base; unsigned i; @@ -496,12 +510,19 @@ static void *qcom_smem_get_global(struct qcom_smem *smem, aux_base = le32_to_cpu(entry->aux_base) & AUX_BASE_MASK; for (i = 0; i < smem->num_regions; i++) { - region = &smem->regions[i]; + area = &smem->regions[i]; + + if (area->aux_base == aux_base || !aux_base) { + e_size = le32_to_cpu(entry->size); + entry_offset = le32_to_cpu(entry->offset); + + if (WARN_ON(e_size + entry_offset > area->size)) + return ERR_PTR(-EINVAL); - if (region->aux_base == aux_base || !aux_base) { if (size != NULL) - *size = le32_to_cpu(entry->size); - return region->virt_base + le32_to_cpu(entry->offset); + *size = e_size; + + return area->virt_base + entry_offset; } } @@ -509,50 +530,92 @@ static void *qcom_smem_get_global(struct qcom_smem *smem, } static void *qcom_smem_get_private(struct qcom_smem *smem, - struct smem_partition_header *phdr, - size_t cacheline, + struct smem_ptable_entry *entry, unsigned item, size_t *size) { struct smem_private_entry *e, *end; + struct smem_partition_header *phdr; + void *item_ptr, *p_end; + u32 partition_size; + size_t cacheline; + u32 padding_data; + u32 e_size; + + phdr = ptable_entry_to_phdr(entry); + partition_size = le32_to_cpu(entry->size); + p_end = (void *)phdr + partition_size; + cacheline = le32_to_cpu(entry->cacheline); e = phdr_to_first_uncached_entry(phdr); end = phdr_to_last_uncached_entry(phdr); + if (WARN_ON((void *)end > p_end)) + return ERR_PTR(-EINVAL); + while (e < end) { if (e->canary != SMEM_PRIVATE_CANARY) goto invalid_canary; if (le16_to_cpu(e->item) == item) { - if (size != NULL) - *size = le32_to_cpu(e->size) - - le16_to_cpu(e->padding_data); - - return uncached_entry_to_item(e); + if (size != NULL) { + e_size = le32_to_cpu(e->size); + padding_data = le16_to_cpu(e->padding_data); + + if (e_size < partition_size + && padding_data < e_size) + *size = e_size - padding_data; + else + return ERR_PTR(-EINVAL); + } + + item_ptr = uncached_entry_to_item(e); + if (WARN_ON(item_ptr > p_end)) + return ERR_PTR(-EINVAL); + + return item_ptr; } e = uncached_entry_next(e); } + if (WARN_ON((void *)e > p_end)) + return ERR_PTR(-EINVAL); /* Item was not found in the uncached list, search the cached list */ e = phdr_to_first_cached_entry(phdr, cacheline); end = 
phdr_to_last_cached_entry(phdr); + if (WARN_ON((void *)e < (void *)phdr || (void *)end > p_end)) + return ERR_PTR(-EINVAL); + while (e > end) { if (e->canary != SMEM_PRIVATE_CANARY) goto invalid_canary; if (le16_to_cpu(e->item) == item) { - if (size != NULL) - *size = le32_to_cpu(e->size) - - le16_to_cpu(e->padding_data); - - return cached_entry_to_item(e); + if (size != NULL) { + e_size = le32_to_cpu(e->size); + padding_data = le16_to_cpu(e->padding_data); + + if (e_size < partition_size + && padding_data < e_size) + *size = e_size - padding_data; + else + return ERR_PTR(-EINVAL); + } + + item_ptr = cached_entry_to_item(e); + if (WARN_ON(item_ptr < (void *)phdr)) + return ERR_PTR(-EINVAL); + + return item_ptr; } e = cached_entry_next(e, cacheline); } + if (WARN_ON((void *)e < (void *)phdr)) + return ERR_PTR(-EINVAL); return ERR_PTR(-ENOENT); @@ -574,9 +637,8 @@ static void *qcom_smem_get_private(struct qcom_smem *smem, */ void *qcom_smem_get(unsigned host, unsigned item, size_t *size) { - struct smem_partition_header *phdr; + struct smem_ptable_entry *entry; unsigned long flags; - size_t cacheln; int ret; void *ptr = ERR_PTR(-EPROBE_DEFER); @@ -592,14 +654,12 @@ void *qcom_smem_get(unsigned host, unsigned item, size_t *size) if (ret) return ERR_PTR(ret); - if (host < SMEM_HOST_COUNT && __smem->partitions[host]) { - phdr = __smem->partitions[host]; - cacheln = __smem->cacheline[host]; - ptr = qcom_smem_get_private(__smem, phdr, cacheln, item, size); - } else if (__smem->global_partition) { - phdr = __smem->global_partition; - cacheln = __smem->global_cacheline; - ptr = qcom_smem_get_private(__smem, phdr, cacheln, item, size); + if (host < SMEM_HOST_COUNT && __smem->ptable_entries[host]) { + entry = __smem->ptable_entries[host]; + ptr = qcom_smem_get_private(__smem, entry, item, size); + } else if (__smem->global_partition_entry) { + entry = __smem->global_partition_entry; + ptr = qcom_smem_get_private(__smem, entry, item, size); } else { ptr = qcom_smem_get_global(__smem, item, size); } @@ -621,23 +681,37 @@ EXPORT_SYMBOL(qcom_smem_get); int qcom_smem_get_free_space(unsigned host) { struct smem_partition_header *phdr; + struct smem_ptable_entry *entry; struct smem_header *header; unsigned ret; if (!__smem) return -EPROBE_DEFER; - if (host < SMEM_HOST_COUNT && __smem->partitions[host]) { - phdr = __smem->partitions[host]; + if (host < SMEM_HOST_COUNT && __smem->ptable_entries[host]) { + entry = __smem->ptable_entries[host]; + phdr = ptable_entry_to_phdr(entry); + ret = le32_to_cpu(phdr->offset_free_cached) - le32_to_cpu(phdr->offset_free_uncached); - } else if (__smem->global_partition) { - phdr = __smem->global_partition; + + if (ret > le32_to_cpu(entry->size)) + return -EINVAL; + } else if (__smem->global_partition_entry) { + entry = __smem->global_partition_entry; + phdr = ptable_entry_to_phdr(entry); + ret = le32_to_cpu(phdr->offset_free_cached) - le32_to_cpu(phdr->offset_free_uncached); + + if (ret > le32_to_cpu(entry->size)) + return -EINVAL; } else { header = __smem->regions[0].virt_base; ret = le32_to_cpu(header->available); + + if (ret > __smem->regions[0].size) + return -EINVAL; } return ret; @@ -772,7 +846,7 @@ static int qcom_smem_set_global_partition(struct qcom_smem *smem) bool found = false; int i; - if (smem->global_partition) { + if (smem->global_partition_entry) { dev_err(smem->dev, "Already found the global partition\n"); return -EINVAL; } @@ -807,8 +881,7 @@ static int qcom_smem_set_global_partition(struct qcom_smem *smem) if (!header) return -EINVAL; - 
smem->global_partition = header; - smem->global_cacheline = le32_to_cpu(entry->cacheline); + smem->global_partition_entry = entry; return 0; } @@ -848,7 +921,7 @@ qcom_smem_enumerate_partitions(struct qcom_smem *smem, u16 local_host) return -EINVAL; } - if (smem->partitions[remote_host]) { + if (smem->ptable_entries[remote_host]) { dev_err(smem->dev, "duplicate host %hu\n", remote_host); return -EINVAL; } @@ -857,8 +930,7 @@ qcom_smem_enumerate_partitions(struct qcom_smem *smem, u16 local_host) if (!header) return -EINVAL; - smem->partitions[remote_host] = header; - smem->cacheline[remote_host] = le32_to_cpu(entry->cacheline); + smem->ptable_entries[remote_host] = entry; } return 0; From patchwork Tue Jun 9 11:40:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Deepak Kumar Singh X-Patchwork-Id: 11595323 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 149A392A for ; Tue, 9 Jun 2020 11:40:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E57462064C for ; Tue, 9 Jun 2020 11:40:49 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=mg.codeaurora.org header.i=@mg.codeaurora.org header.b="SVKXB/ic" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729076AbgFILkt (ORCPT ); Tue, 9 Jun 2020 07:40:49 -0400 Received: from mail27.static.mailgun.info ([104.130.122.27]:53408 "EHLO mail27.static.mailgun.info" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728902AbgFILks (ORCPT ); Tue, 9 Jun 2020 07:40:48 -0400 DKIM-Signature: a=rsa-sha256; v=1; c=relaxed/relaxed; d=mg.codeaurora.org; q=dns/txt; s=smtp; t=1591702847; h=References: In-Reply-To: Message-Id: Date: Subject: Cc: To: From: Sender; bh=+UGRQoOwBERu2zKIIAlIPKPWCL9327VwkFms0xPlYEQ=; b=SVKXB/iclJYHGu6LMlJn+4hgyiWUdcxD8uT7zTKYk0A6yOWgximWMq5cLqY4eINUSaKd7Guz jJesVq3EuGdup70i4XmQieS4FjT6jTonKSQ/fqRbB6t6E8qNS1IGv8W/c9C9N8PnUavnyVU5 DP92YsJKH69xgR08HcKLILfasAA= X-Mailgun-Sending-Ip: 104.130.122.27 X-Mailgun-Sid: WyI1MzIzYiIsICJsaW51eC1hcm0tbXNtQHZnZXIua2VybmVsLm9yZyIsICJiZTllNGEiXQ== Received: from smtp.codeaurora.org (ec2-35-166-182-171.us-west-2.compute.amazonaws.com [35.166.182.171]) by smtp-out-n14.prod.us-west-2.postgun.com with SMTP id 5edf753f3b3439f23ae37ca7 (version=TLS1.2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256); Tue, 09 Jun 2020 11:40:47 GMT Received: by smtp.codeaurora.org (Postfix, from userid 1001) id 11C2BC433CB; Tue, 9 Jun 2020 11:40:47 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-caf-mail-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-1.0 required=2.0 tests=ALL_TRUSTED,SPF_NONE autolearn=ham autolearn_force=no version=3.4.0 Received: from deesin-linux.qualcomm.com (blr-c-bdr-fw-01_GlobalNAT_AllZones-Outside.qualcomm.com [103.229.19.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-SHA256 (128/128 bits)) (No client certificate requested) (Authenticated sender: deesin) by smtp.codeaurora.org (Postfix) with ESMTPSA id ABDB2C43391; Tue, 9 Jun 2020 11:40:43 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 smtp.codeaurora.org ABDB2C43391 Authentication-Results: aws-us-west-2-caf-mail-1.web.codeaurora.org; dmarc=none (p=none dis=none) header.from=codeaurora.org Authentication-Results: 
aws-us-west-2-caf-mail-1.web.codeaurora.org; spf=none smtp.mailfrom=deesin@codeaurora.org From: Deepak Kumar Singh To: bjorn.andersson@linaro.org, clew@codeaurora.org, aneela@codeaurora.org Cc: Deepak Kumar Singh , Andy Gross , linux-arm-msm@vger.kernel.org (open list:ARM/QUALCOMM SUPPORT), linux-kernel@vger.kernel.org (open list) Subject: [PATCH V1 2/2] soc: qcom: smem: map only partitions used by local HOST Date: Tue, 9 Jun 2020 17:10:04 +0530 Message-Id: <1591702804-26223-3-git-send-email-deesin@codeaurora.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1591702804-26223-1-git-send-email-deesin@codeaurora.org> References: <1591702804-26223-1-git-send-email-deesin@codeaurora.org> Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org SMEM driver is IO mapping complete region and CPU is doing a speculative read into a partition where local HOST does not have permission resulting in a NOC error. Map only those partitions which are accessibly to local HOST. Signed-off-by: Deepak Kumar Singh --- drivers/soc/qcom/smem.c | 226 +++++++++++++++++++++++++++++++++++------------- 1 file changed, 167 insertions(+), 59 deletions(-) diff --git a/drivers/soc/qcom/smem.c b/drivers/soc/qcom/smem.c index c1bd310..4a152d6 100644 --- a/drivers/soc/qcom/smem.c +++ b/drivers/soc/qcom/smem.c @@ -193,6 +193,19 @@ struct smem_partition_header { __le32 offset_free_cached; __le32 reserved[3]; }; +/** + * struct smem_partition_desc - descriptor for partition + * @virt_base: starting virtual address of partition + * @phys_base: starting physical address of partition + * @cacheline: alignment for "cached" entries + * @size: size of partition + */ +struct smem_partition_desc { + void __iomem *virt_base; + u32 phys_base; + u32 cacheline; + u32 size; +}; static const u8 SMEM_PART_MAGIC[] = { 0x24, 0x50, 0x52, 0x54 }; @@ -249,9 +262,9 @@ struct smem_region { * struct qcom_smem - device data for the smem device * @dev: device pointer * @hwlock: reference to a hwspinlock - * @global_partition_entry: pointer to global partition entry when in use - * @ptable_entries: list of pointers to partitions table entry of current - * processor/host + * @ptable_base: virtual base of partition table + * @global_partition_desc: descriptor for global partition when in use + * @partition_desc: list of partition descriptor of current processor/host * @item_count: max accepted item number * @num_regions: number of @regions * @regions: list of the memory regions defining the shared memory @@ -261,9 +274,11 @@ struct qcom_smem { struct hwspinlock *hwlock; - struct smem_ptable_entry *global_partition_entry; - struct smem_ptable_entry *ptable_entries[SMEM_HOST_COUNT]; u32 item_count; + struct smem_ptable *ptable_base; + struct smem_partition_desc global_partition_desc; + struct smem_partition_desc partition_desc[SMEM_HOST_COUNT]; + struct platform_device *socinfo; unsigned num_regions; @@ -276,12 +291,6 @@ static struct qcom_smem *__smem; /* Timeout (ms) for the trylock of remote spinlocks */ #define HWSPINLOCK_TIMEOUT 1000 -static struct smem_partition_header * -ptable_entry_to_phdr(struct smem_ptable_entry *entry) -{ - return __smem->regions[0].virt_base + le32_to_cpu(entry->offset); -} - static struct smem_private_entry * phdr_to_last_uncached_entry(struct smem_partition_header *phdr) { @@ -348,7 +357,7 @@ static void *cached_entry_to_item(struct smem_private_entry *e) } static int qcom_smem_alloc_private(struct qcom_smem *smem, - struct smem_ptable_entry *entry, + struct 
smem_partition_desc *p_desc, unsigned item, size_t size) { @@ -358,8 +367,8 @@ static int qcom_smem_alloc_private(struct qcom_smem *smem, void *cached; void *p_end; - phdr = ptable_entry_to_phdr(entry); - p_end = (void *)phdr + le32_to_cpu(entry->size); + phdr = p_desc->virt_base; + p_end = (void *)phdr + p_desc->size; hdr = phdr_to_first_uncached_entry(phdr); end = phdr_to_last_uncached_entry(phdr); @@ -452,7 +461,7 @@ static int qcom_smem_alloc_global(struct qcom_smem *smem, */ int qcom_smem_alloc(unsigned host, unsigned item, size_t size) { - struct smem_ptable_entry *entry; + struct smem_partition_desc *p_desc; unsigned long flags; int ret; @@ -474,12 +483,12 @@ int qcom_smem_alloc(unsigned host, unsigned item, size_t size) if (ret) return ret; - if (host < SMEM_HOST_COUNT && __smem->ptable_entries[host]) { - entry = __smem->ptable_entries[host]; - ret = qcom_smem_alloc_private(__smem, entry, item, size); - } else if (__smem->global_partition_entry) { - entry = __smem->global_partition_entry; - ret = qcom_smem_alloc_private(__smem, entry, item, size); + if (host < SMEM_HOST_COUNT && __smem->partition_desc[host].virt_base) { + p_desc = &__smem->partition_desc[host]; + ret = qcom_smem_alloc_private(__smem, p_desc, item, size); + } else if (__smem->global_partition_desc.virt_base) { + p_desc = &__smem->global_partition_desc; + ret = qcom_smem_alloc_private(__smem, p_desc, item, size); } else { ret = qcom_smem_alloc_global(__smem, item, size); } @@ -530,22 +539,20 @@ static void *qcom_smem_get_global(struct qcom_smem *smem, } static void *qcom_smem_get_private(struct qcom_smem *smem, - struct smem_ptable_entry *entry, + struct smem_partition_desc *p_desc, unsigned item, size_t *size) { struct smem_private_entry *e, *end; struct smem_partition_header *phdr; void *item_ptr, *p_end; - u32 partition_size; size_t cacheline; u32 padding_data; u32 e_size; - phdr = ptable_entry_to_phdr(entry); - partition_size = le32_to_cpu(entry->size); - p_end = (void *)phdr + partition_size; - cacheline = le32_to_cpu(entry->cacheline); + phdr = p_desc->virt_base; + p_end = (void *)phdr + p_desc->size; + cacheline = p_desc->cacheline; e = phdr_to_first_uncached_entry(phdr); end = phdr_to_last_uncached_entry(phdr); @@ -562,7 +569,7 @@ static void *qcom_smem_get_private(struct qcom_smem *smem, e_size = le32_to_cpu(e->size); padding_data = le16_to_cpu(e->padding_data); - if (e_size < partition_size + if (e_size < p_desc->size && padding_data < e_size) *size = e_size - padding_data; else @@ -598,7 +605,7 @@ static void *qcom_smem_get_private(struct qcom_smem *smem, e_size = le32_to_cpu(e->size); padding_data = le16_to_cpu(e->padding_data); - if (e_size < partition_size + if (e_size < p_desc->size && padding_data < e_size) *size = e_size - padding_data; else @@ -637,7 +644,7 @@ static void *qcom_smem_get_private(struct qcom_smem *smem, */ void *qcom_smem_get(unsigned host, unsigned item, size_t *size) { - struct smem_ptable_entry *entry; + struct smem_partition_desc *p_desc; unsigned long flags; int ret; void *ptr = ERR_PTR(-EPROBE_DEFER); @@ -654,12 +661,12 @@ void *qcom_smem_get(unsigned host, unsigned item, size_t *size) if (ret) return ERR_PTR(ret); - if (host < SMEM_HOST_COUNT && __smem->ptable_entries[host]) { - entry = __smem->ptable_entries[host]; - ptr = qcom_smem_get_private(__smem, entry, item, size); - } else if (__smem->global_partition_entry) { - entry = __smem->global_partition_entry; - ptr = qcom_smem_get_private(__smem, entry, item, size); + if (host < SMEM_HOST_COUNT && 
__smem->partition_desc[host].virt_base) { + p_desc = &__smem->partition_desc[host]; + ptr = qcom_smem_get_private(__smem, p_desc, item, size); + } else if (__smem->global_partition_desc.virt_base) { + p_desc = &__smem->global_partition_desc; + ptr = qcom_smem_get_private(__smem, p_desc, item, size); } else { ptr = qcom_smem_get_global(__smem, item, size); } @@ -681,30 +688,30 @@ EXPORT_SYMBOL(qcom_smem_get); int qcom_smem_get_free_space(unsigned host) { struct smem_partition_header *phdr; - struct smem_ptable_entry *entry; + struct smem_partition_desc *p_desc; struct smem_header *header; unsigned ret; if (!__smem) return -EPROBE_DEFER; - if (host < SMEM_HOST_COUNT && __smem->ptable_entries[host]) { - entry = __smem->ptable_entries[host]; - phdr = ptable_entry_to_phdr(entry); + if (host < SMEM_HOST_COUNT && __smem->partition_desc[host].virt_base) { + p_desc = &__smem->partition_desc[host]; + phdr = p_desc->virt_base; ret = le32_to_cpu(phdr->offset_free_cached) - le32_to_cpu(phdr->offset_free_uncached); - if (ret > le32_to_cpu(entry->size)) + if (ret > p_desc->size) return -EINVAL; - } else if (__smem->global_partition_entry) { - entry = __smem->global_partition_entry; - phdr = ptable_entry_to_phdr(entry); + } else if (__smem->global_partition_desc.virt_base) { + p_desc = &__smem->global_partition_desc; + phdr = p_desc->virt_base; ret = le32_to_cpu(phdr->offset_free_cached) - le32_to_cpu(phdr->offset_free_uncached); - if (ret > le32_to_cpu(entry->size)) + if (ret > p_desc->size) return -EINVAL; } else { header = __smem->regions[0].virt_base; @@ -718,6 +725,15 @@ int qcom_smem_get_free_space(unsigned host) } EXPORT_SYMBOL(qcom_smem_get_free_space); +static int addr_in_range(void *virt_base, unsigned int size, void *addr) +{ + if (virt_base && addr >= virt_base && + addr < virt_base + size) + return 1; + + return 0; +} + /** * qcom_smem_virt_to_phys() - return the physical address associated * with an smem item pointer (previously returned by qcom_smem_get() @@ -727,17 +743,36 @@ EXPORT_SYMBOL(qcom_smem_get_free_space); */ phys_addr_t qcom_smem_virt_to_phys(void *p) { - unsigned i; + struct smem_partition_desc *p_desc; + struct smem_region *area; + u64 offset; + u32 i; + + for (i = 0; i < SMEM_HOST_COUNT; i++) { + p_desc = &__smem->partition_desc[i]; + + if (addr_in_range(p_desc->virt_base, p_desc->size, p)) { + offset = p - p_desc->virt_base; + + return (phys_addr_t)p_desc->phys_base + offset; + } + } + + p_desc = &__smem->global_partition_desc; + + if (addr_in_range(p_desc->virt_base, p_desc->size, p)) { + offset = p - p_desc->virt_base; + + return (phys_addr_t)p_desc->phys_base + offset; + } for (i = 0; i < __smem->num_regions; i++) { - struct smem_region *region = &__smem->regions[i]; + area = &__smem->regions[i]; - if (p < region->virt_base) - continue; - if (p < region->virt_base + region->size) { - u64 offset = p - region->virt_base; + if (addr_in_range(area->virt_base, area->size, p)) { + offset = p - area->virt_base; - return (phys_addr_t)region->aux_base + offset; + return (phys_addr_t)area->aux_base + offset; } } @@ -761,7 +796,7 @@ static struct smem_ptable *qcom_smem_get_ptable(struct qcom_smem *smem) struct smem_ptable *ptable; u32 version; - ptable = smem->regions[0].virt_base + smem->regions[0].size - SZ_4K; + ptable = smem->ptable_base; if (memcmp(ptable->magic, SMEM_PTABLE_MAGIC, sizeof(ptable->magic))) return ERR_PTR(-ENOENT); @@ -800,9 +835,15 @@ qcom_smem_partition_header(struct qcom_smem *smem, struct smem_ptable_entry *entry, u16 host0, u16 host1) { struct 
smem_partition_header *header; + u32 phys_addr; u32 size; - header = smem->regions[0].virt_base + le32_to_cpu(entry->offset); + phys_addr = smem->regions[0].aux_base + le32_to_cpu(entry->offset); + header = devm_ioremap_wc(smem->dev, + phys_addr, le32_to_cpu(entry->size)); + + if (!header) + return NULL; if (memcmp(header->magic, SMEM_PART_MAGIC, sizeof(header->magic))) { dev_err(smem->dev, "bad partition magic %02x %02x %02x %02x\n", @@ -846,7 +887,7 @@ static int qcom_smem_set_global_partition(struct qcom_smem *smem) bool found = false; int i; - if (smem->global_partition_entry) { + if (smem->global_partition_desc.virt_base) { dev_err(smem->dev, "Already found the global partition\n"); return -EINVAL; } @@ -881,7 +922,11 @@ static int qcom_smem_set_global_partition(struct qcom_smem *smem) if (!header) return -EINVAL; - smem->global_partition_entry = entry; + smem->global_partition_desc.virt_base = (void __iomem *)header; + smem->global_partition_desc.phys_base = smem->regions[0].aux_base + + le32_to_cpu(entry->offset); + smem->global_partition_desc.size = le32_to_cpu(entry->size); + smem->global_partition_desc.cacheline = le32_to_cpu(entry->cacheline); return 0; } @@ -921,7 +966,7 @@ qcom_smem_enumerate_partitions(struct qcom_smem *smem, u16 local_host) return -EINVAL; } - if (smem->ptable_entries[remote_host]) { + if (smem->partition_desc[remote_host].virt_base) { dev_err(smem->dev, "duplicate host %hu\n", remote_host); return -EINVAL; } @@ -930,7 +975,14 @@ qcom_smem_enumerate_partitions(struct qcom_smem *smem, u16 local_host) if (!header) return -EINVAL; - smem->ptable_entries[remote_host] = entry; + smem->partition_desc[remote_host].virt_base = + (void __iomem *)header; + smem->partition_desc[remote_host].phys_base = + smem->regions[0].aux_base + le32_to_cpu(entry->offset); + smem->partition_desc[remote_host].size = + le32_to_cpu(entry->size); + smem->partition_desc[remote_host].cacheline = + le32_to_cpu(entry->cacheline); } return 0; @@ -965,6 +1017,61 @@ static int qcom_smem_map_memory(struct qcom_smem *smem, struct device *dev, return 0; } +static int qcom_smem_map_toc(struct qcom_smem *smem, struct device *dev, + const char *name, int i) +{ + struct device_node *np; + struct resource r; + int ret; + + np = of_parse_phandle(dev->of_node, name, 0); + if (!np) { + dev_err(dev, "No %s specified\n", name); + return -EINVAL; + } + + ret = of_address_to_resource(np, 0, &r); + of_node_put(np); + if (ret) + return ret; + + smem->regions[i].aux_base = (u32)r.start; + smem->regions[i].size = resource_size(&r); + /* map starting 4K for smem header */ + smem->regions[i].virt_base = devm_ioremap_wc(dev, r.start, SZ_4K); + /* map last 4k for toc */ + smem->ptable_base = devm_ioremap_wc(dev, + r.start + resource_size(&r) - SZ_4K, SZ_4K); + + if (!smem->regions[i].virt_base || !smem->ptable_base) + return -ENOMEM; + + return 0; +} + +static int qcom_smem_mamp_legacy(struct qcom_smem *smem) +{ + struct smem_header *header; + u32 phys_addr; + u32 p_size; + + phys_addr = smem->regions[0].aux_base; + header = smem->regions[0].virt_base; + p_size = header->available; + + /* unmap previously mapped starting 4k for smem header */ + devm_iounmap(smem->dev, smem->regions[0].virt_base); + + smem->regions[0].size = p_size; + smem->regions[0].virt_base = devm_ioremap_wc(smem->dev, + phys_addr, p_size); + + if (!smem->regions[0].virt_base) + return -ENOMEM; + + return 0; +} + static int qcom_smem_probe(struct platform_device *pdev) { struct smem_header *header; @@ -987,7 +1094,7 @@ static int 
qcom_smem_probe(struct platform_device *pdev) smem->dev = &pdev->dev; smem->num_regions = num_regions; - ret = qcom_smem_map_memory(smem, &pdev->dev, "memory-region", 0); + ret = qcom_smem_map_toc(smem, &pdev->dev, "memory-region", 0); if (ret) return ret; @@ -1011,6 +1118,7 @@ static int qcom_smem_probe(struct platform_device *pdev) smem->item_count = qcom_smem_get_item_count(smem); break; case SMEM_GLOBAL_HEAP_VERSION: + qcom_smem_mamp_legacy(smem); smem->item_count = SMEM_ITEM_COUNT; break; default:
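
For reference, the bounds-checking idiom that patch 1 applies throughout qcom_smem_get_private() and qcom_smem_get_global() (and that patch 2 carries over via its addr_in_range() helper) can be sketched in isolation. The following is a minimal userspace illustration of the same idea, not driver code: struct shm_entry, get_item() and the part[] buffer are made-up names for the example only.

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for an entry header living in shared memory;
 * a remote processor may write arbitrary values into these fields. */
struct shm_entry {
	uint32_t offset;	/* offset of the item payload within the partition */
	uint32_t size;		/* claimed size of the item payload */
};

/*
 * Return a pointer to the item payload only if the untrusted offset/size
 * pair stays inside the partition; reject it otherwise. The subtraction
 * form of the check avoids the wrap-around that a naive
 * "offset + size > part_size" test could hit in 32-bit arithmetic.
 */
static void *get_item(void *part_base, uint32_t part_size,
		      const struct shm_entry *e)
{
	if (e->offset > part_size || e->size > part_size - e->offset)
		return NULL;	/* fields point outside the partition */

	return (char *)part_base + e->offset;
}

int main(void)
{
	char part[64] = "example partition contents";
	struct shm_entry good = { .offset = 8, .size = 16 };
	struct shm_entry bad = { .offset = 60, .size = 0xfffffff0 };	/* offset + size overflows 32 bits */

	printf("good entry -> %p\n", get_item(part, sizeof(part), &good));
	printf("bad entry  -> %p\n", get_item(part, sizeof(part), &bad));
	return 0;
}

In the driver itself the equivalent guards are the WARN_ON() comparisons against p_end and the e_size/padding_data sanity tests added in qcom_smem_get_private(), which reject entries whose remote-writable size or offset fields would take a pointer outside the mapped partition.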