From patchwork Tue Nov  5 01:27:34 2019
X-Patchwork-Submitter: Elliot Berman <eberman@codeaurora.org>
X-Patchwork-Id: 11226815
From: Elliot Berman <eberman@codeaurora.org>
To: bjorn.andersson@linaro.org, saiprakash.ranjan@codeaurora.org,
	agross@kernel.org
Cc: tsoni@codeaurora.org, sidgup@codeaurora.org, psodagud@codeaurora.org,
	linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Elliot Berman <eberman@codeaurora.org>
Subject: [PATCH 15/17] firmware: qcom_scm: Merge legacy and SMCCC conventions
Date: Mon, 4 Nov 2019 17:27:34 -0800
Message-Id: <1572917256-24205-16-git-send-email-eberman@codeaurora.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1572917256-24205-1-git-send-email-eberman@codeaurora.org>
References: <1572917256-24205-1-git-send-email-eberman@codeaurora.org>

Copy/paste the legacy SCM driver into qcom_scm-64.c, with the following notes:

- Renamed qcom_scm_call in qcom_scm-32 to qcom_scm_call_legacy in the copy.
- Renamed qcom_scm_call_atomic in qcom_scm-32 to qcom_scm_call_atomic_legacy
  in the copy.
- Renamed ___qcom_scm_call_smccc to qcom_scm_call_smccc.
- Filled in the implementations of set_cold_boot_addr, set_warm_boot_addr,
  and cpu_power_down from qcom_scm-32.c.
- set_dload_mode, io_writel, and io_readl now use the atomic variants, as in
  qcom_scm-32.

Signed-off-by: Elliot Berman <eberman@codeaurora.org>
---
 drivers/firmware/qcom_scm-64.c | 322 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 314 insertions(+), 8 deletions(-)

diff --git a/drivers/firmware/qcom_scm-64.c b/drivers/firmware/qcom_scm-64.c
index d10845a..7bb68f2 100644
--- a/drivers/firmware/qcom_scm-64.c
+++ b/drivers/firmware/qcom_scm-64.c
@@ -95,7 +95,7 @@ static void __qcom_scm_call_do_quirk(const struct arm_smccc_args *smc,
         } while (res->a0 == QCOM_SCM_INTERRUPTED);
 }
 
-static int ___qcom_scm_call_smccc(struct device *dev,
+static int qcom_scm_call_smccc(struct device *dev,
                                   struct qcom_scm_desc *desc, bool atomic)
 {
         int arglen = desc->arginfo & 0xf;
@@ -185,6 +185,223 @@ static int ___qcom_scm_call_smccc(struct device *dev,
         return 0;
 }
 
+#define LEGACY_FUNCNUM(s, c)    (((s) << 10) | ((c) & 0x3ff))
+
+/**
+ * struct legacy_command - one SCM command buffer
+ * @len: total available memory for command and response
+ * @buf_offset: start of command buffer
+ * @resp_hdr_offset: start of response buffer
+ * @id: command to be executed
+ * @buf: buffer returned from legacy_get_command_buffer()
+ *
+ * An SCM command is laid out in memory as follows:
+ *
+ *      ------------------- <--- struct legacy_command
+ *      | command header  |
+ *      ------------------- <--- legacy_get_command_buffer()
+ *      | command buffer  |
+ *      ------------------- <--- struct legacy_response and
+ *      | response header |      legacy_command_to_response()
+ *      ------------------- <--- legacy_get_response_buffer()
+ *      | response buffer |
+ *      -------------------
+ *
+ * There can be arbitrary padding between the headers and buffers so
+ * you should always use the appropriate qcom_scm_get_*_buffer() routines
+ * to access the buffers in a safe manner.
+ */
+struct legacy_command {
+        __le32 len;
+        __le32 buf_offset;
+        __le32 resp_hdr_offset;
+        __le32 id;
+        __le32 buf[0];
+};
+
+/**
+ * struct legacy_response - one SCM response buffer
+ * @len: total available memory for response
+ * @buf_offset: start of response data relative to start of legacy_response
+ * @is_complete: indicates if the command has finished processing
+ */
+struct legacy_response {
+        __le32 len;
+        __le32 buf_offset;
+        __le32 is_complete;
+};
+
+/**
+ * legacy_command_to_response() - Get a pointer to a legacy_response
+ * @cmd: command
+ *
+ * Returns a pointer to a response for a command.
+ */
+static inline struct legacy_response *legacy_command_to_response(
+                const struct legacy_command *cmd)
+{
+        return (void *)cmd + le32_to_cpu(cmd->resp_hdr_offset);
+}
+
+/**
+ * legacy_get_command_buffer() - Get a pointer to a command buffer
+ * @cmd: command
+ *
+ * Returns a pointer to the command buffer of a command.
+ */
+static inline void *legacy_get_command_buffer(const struct legacy_command *cmd)
+{
+        return (void *)cmd->buf;
+}
+
+/**
+ * legacy_get_response_buffer() - Get a pointer to a response buffer
+ * @rsp: response
+ *
+ * Returns a pointer to a response buffer of a response.
+ */
+static inline void *legacy_get_response_buffer(const struct legacy_response *rsp)
+{
+        return (void *)rsp + le32_to_cpu(rsp->buf_offset);
+}
+
+static void __qcom_scm_call_do(const struct arm_smccc_args *smc,
+                               struct arm_smccc_res *res)
+{
+        do {
+                arm_smccc_smc(smc->a[0], smc->a[1], smc->a[2], smc->a[3],
+                              smc->a[4], smc->a[5], smc->a[6], smc->a[7], res);
+        } while (res->a0 == QCOM_SCM_INTERRUPTED);
+}
+
+/**
+ * qcom_scm_call_legacy() - Send an SCM command
+ * @dev: struct device
+ * @svc_id: service identifier
+ * @cmd_id: command identifier
+ * @cmd_buf: command buffer
+ * @cmd_len: length of the command buffer
+ * @resp_buf: response buffer
+ * @resp_len: length of the response buffer
+ *
+ * Sends a command to the SCM and waits for the command to finish processing.
+ *
+ * A note on cache maintenance:
+ * Note that any buffers that are expected to be accessed by the secure world
+ * must be flushed before invoking qcom_scm_call and invalidated in the cache
+ * immediately after qcom_scm_call returns. Cache maintenance on the command
+ * and response buffers is taken care of by qcom_scm_call; however, callers are
+ * responsible for any other cached buffers passed over to the secure world.
+ */
+static int qcom_scm_call_legacy(struct device *dev, struct qcom_scm_desc *desc)
+{
+        int arglen = desc->arginfo & 0xf;
+        int ret = 0, context_id;
+        size_t i;
+        struct legacy_command *cmd;
+        struct legacy_response *rsp;
+        struct arm_smccc_args smc = {0};
+        struct arm_smccc_res res;
+        const size_t cmd_len = arglen * sizeof(__le32);
+        const size_t resp_len = MAX_QCOM_SCM_RETS * sizeof(__le32);
+        size_t alloc_len = sizeof(*cmd) + cmd_len + sizeof(*rsp) + resp_len;
+        dma_addr_t cmd_phys;
+        __le32 *arg_buf;
+        __le32 *res_buf;
+
+        cmd = kzalloc(PAGE_ALIGN(alloc_len), GFP_KERNEL);
+        if (!cmd)
+                return -ENOMEM;
+
+        cmd->len = cpu_to_le32(alloc_len);
+        cmd->buf_offset = cpu_to_le32(sizeof(*cmd));
+        cmd->resp_hdr_offset = cpu_to_le32(sizeof(*cmd) + cmd_len);
+        cmd->id = cpu_to_le32(LEGACY_FUNCNUM(desc->svc, desc->cmd));
+
+        arg_buf = legacy_get_command_buffer(cmd);
+        for (i = 0; i < arglen; i++)
+                arg_buf[i] = cpu_to_le32(desc->args[i]);
+
+        rsp = legacy_command_to_response(cmd);
+
+        cmd_phys = dma_map_single(dev, cmd, alloc_len, DMA_TO_DEVICE);
+        if (dma_mapping_error(dev, cmd_phys)) {
+                kfree(cmd);
+                return -ENOMEM;
+        }
+
+        smc.a[0] = 1;
+        smc.a[1] = (unsigned long)&context_id;
+        smc.a[2] = cmd_phys;
+
+        mutex_lock(&qcom_scm_lock);
+        __qcom_scm_call_do(&smc, &res);
+        if (res.a0 < 0)
+                ret = qcom_scm_remap_error(res.a0);
+        mutex_unlock(&qcom_scm_lock);
+        if (ret)
+                goto out;
+
+        do {
+                dma_sync_single_for_cpu(dev, cmd_phys + sizeof(*cmd) + cmd_len,
+                                        sizeof(*rsp), DMA_FROM_DEVICE);
+        } while (!rsp->is_complete);
+
+        dma_sync_single_for_cpu(dev, cmd_phys + sizeof(*cmd) + cmd_len +
+                                le32_to_cpu(rsp->buf_offset),
+                                resp_len, DMA_FROM_DEVICE);
+
+        res_buf = legacy_get_response_buffer(rsp);
+        for (i = 0; i < MAX_QCOM_SCM_RETS; i++)
+                desc->res[i] = le32_to_cpu(res_buf[i]);
+out:
+        dma_unmap_single(dev, cmd_phys, alloc_len, DMA_TO_DEVICE);
+        kfree(cmd);
+        return ret;
+}
+
+#define LEGACY_ATOMIC_N_REG_ARGS        5
+#define LEGACY_ATOMIC_FIRST_REG_IDX     2
+#define LEGACY_CLASS_REGISTER           (0x2 << 8)
+#define LEGACY_MASK_IRQS                BIT(5)
+#define LEGACY_ATOMIC(svc, cmd, n)      ((LEGACY_FUNCNUM(svc, cmd) << 12) | \
+                                        LEGACY_CLASS_REGISTER | \
+                                        LEGACY_MASK_IRQS | \
+                                        (n & 0xf))
+
+/**
+ * qcom_scm_call_atomic_legacy() - Send an atomic SCM command with up to
+ * 5 arguments and 3 return values
+ *
+ * This shall only be used with commands that are guaranteed to be
+ * uninterruptable, atomic and SMP safe.
+ */
+static int qcom_scm_call_atomic_legacy(struct device *dev,
+                                       struct qcom_scm_desc *desc)
+{
+        int context_id;
+        struct arm_smccc_args smc = {0};
+        struct arm_smccc_res res;
+        size_t i, arglen = desc->arginfo & 0xf;
+
+        BUG_ON(arglen > LEGACY_ATOMIC_N_REG_ARGS);
+
+        smc.a[0] = LEGACY_ATOMIC(desc->svc, desc->cmd, arglen);
+        smc.a[1] = (unsigned long)&context_id;
+
+        for (i = 0; i < arglen; i++)
+                smc.a[i + LEGACY_ATOMIC_FIRST_REG_IDX] = desc->args[i];
+
+        arm_smccc_smc(smc.a[0], smc.a[1], smc.a[2], smc.a[3],
+                      smc.a[4], smc.a[5], smc.a[6], smc.a[7], &res);
+
+        desc->res[0] = res.a1;
+        desc->res[1] = res.a2;
+        desc->res[2] = res.a3;
+
+        return res.a0;
+}
+
 /**
  * qcom_scm_call() - Invoke a syscall in the secure world
  * @dev: device
@@ -198,7 +415,7 @@ static int ___qcom_scm_call_smccc(struct device *dev,
 static int qcom_scm_call(struct device *dev, struct qcom_scm_desc *desc)
 {
         might_sleep();
-        return ___qcom_scm_call_smccc(dev, desc, false);
+        return qcom_scm_call_smccc(dev, desc, false);
 }
 
 /**
@@ -214,9 +431,14 @@ static int qcom_scm_call(struct device *dev, struct qcom_scm_desc *desc)
  */
 static int qcom_scm_call_atomic(struct device *dev, struct qcom_scm_desc *desc)
 {
-        return ___qcom_scm_call_smccc(dev, desc, true);
+        return qcom_scm_call_smccc(dev, desc, true);
 }
 
+#define QCOM_SCM_FLAG_COLDBOOT_CPU0     0x00
+#define QCOM_SCM_FLAG_COLDBOOT_CPU1     0x01
+#define QCOM_SCM_FLAG_COLDBOOT_CPU2     0x08
+#define QCOM_SCM_FLAG_COLDBOOT_CPU3     0x20
+
 /**
  * qcom_scm_set_cold_boot_addr() - Set the cold boot address for cpus
  * @entry: Entry point function for the cpus
@@ -228,9 +450,54 @@ static int qcom_scm_call_atomic(struct device *dev, struct qcom_scm_desc *desc)
 int __qcom_scm_set_cold_boot_addr(struct device *dev, void *entry,
                                   const cpumask_t *cpus)
 {
-        return -ENOTSUPP;
+        int flags = 0;
+        int cpu;
+        int scm_cb_flags[] = {
+                QCOM_SCM_FLAG_COLDBOOT_CPU0,
+                QCOM_SCM_FLAG_COLDBOOT_CPU1,
+                QCOM_SCM_FLAG_COLDBOOT_CPU2,
+                QCOM_SCM_FLAG_COLDBOOT_CPU3,
+        };
+        struct qcom_scm_desc desc = {
+                .svc = QCOM_SCM_SVC_BOOT,
+                .cmd = QCOM_SCM_BOOT_SET_ADDR,
+                .owner = ARM_SMCCC_OWNER_SIP,
+        };
+
+        if (!cpus || (cpus && cpumask_empty(cpus)))
+                return -EINVAL;
+
+        for_each_cpu(cpu, cpus) {
+                if (cpu < ARRAY_SIZE(scm_cb_flags))
+                        flags |= scm_cb_flags[cpu];
+                else
+                        set_cpu_present(cpu, false);
+        }
+
+        desc.args[0] = flags;
+        desc.args[1] = virt_to_phys(entry);
+        desc.arginfo = QCOM_SCM_ARGS(2);
+
+        return qcom_scm_call_atomic(dev, &desc);
 }
 
+#define QCOM_SCM_FLAG_WARMBOOT_CPU0     0x04
+#define QCOM_SCM_FLAG_WARMBOOT_CPU1     0x02
+#define QCOM_SCM_FLAG_WARMBOOT_CPU2     0x10
+#define QCOM_SCM_FLAG_WARMBOOT_CPU3     0x40
+
+struct qcom_scm_entry {
+        int flag;
+        void *entry;
+};
+
+static struct qcom_scm_entry qcom_scm_wb[] = {
+        { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU0 },
+        { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU1 },
+        { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU2 },
+        { .flag = QCOM_SCM_FLAG_WARMBOOT_CPU3 },
+};
+
 /**
  * qcom_scm_set_warm_boot_addr() - Set the warm boot address for cpus
  * @dev: Device pointer
@@ -243,7 +510,37 @@ int __qcom_scm_set_cold_boot_addr(struct device *dev, void *entry,
 int __qcom_scm_set_warm_boot_addr(struct device *dev, void *entry,
                                   const cpumask_t *cpus)
 {
-        return -ENOTSUPP;
+        int ret;
+        int flags = 0;
+        int cpu;
+        struct qcom_scm_desc desc = {
+                .svc = QCOM_SCM_SVC_BOOT,
+                .cmd = QCOM_SCM_BOOT_SET_ADDR,
+        };
+
+        /*
+         * Reassign only if we are switching from hotplug entry point
+         * to cpuidle entry point or vice versa.
+         */
+        for_each_cpu(cpu, cpus) {
+                if (entry == qcom_scm_wb[cpu].entry)
+                        continue;
+                flags |= qcom_scm_wb[cpu].flag;
+        }
+
+        /* No change in entry function */
+        if (!flags)
+                return 0;
+
+        desc.args[0] = flags;
+        desc.args[1] = virt_to_phys(entry);
+        ret = qcom_scm_call(dev, &desc);
+        if (!ret) {
+                for_each_cpu(cpu, cpus)
+                        qcom_scm_wb[cpu].entry = entry;
+        }
+
+        return ret;
 }
 
 /**
@@ -256,6 +553,15 @@ int __qcom_scm_set_warm_boot_addr(struct device *dev, void *entry,
  */
 void __qcom_scm_cpu_power_down(struct device *dev, u32 flags)
 {
+        struct qcom_scm_desc desc = {
+                .svc = QCOM_SCM_SVC_BOOT,
+                .cmd = QCOM_SCM_BOOT_TERMINATE_PC,
+                .args[0] = flags & QCOM_SCM_FLUSH_FLAG_MASK,
+                .arginfo = QCOM_SCM_ARGS(1),
+                .owner = ARM_SMCCC_OWNER_SIP,
+        };
+
+        qcom_scm_call_atomic(dev, &desc);
 }
 
 int __qcom_scm_set_remote_state(struct device *dev, u32 state, u32 id)
@@ -288,7 +594,7 @@ int __qcom_scm_set_dload_mode(struct device *dev, bool enable)
         desc.args[1] = enable ? QCOM_SCM_BOOT_SET_DLOAD_MODE : 0;
         desc.arginfo = QCOM_SCM_ARGS(2);
 
-        return qcom_scm_call(dev, &desc);
+        return qcom_scm_call_atomic(dev, &desc);
 }
 
 bool __qcom_scm_pas_supported(struct device *dev, u32 peripheral)
@@ -412,7 +718,7 @@ int __qcom_scm_io_readl(struct device *dev, phys_addr_t addr,
         desc.args[0] = addr;
         desc.arginfo = QCOM_SCM_ARGS(1);
 
-        ret = qcom_scm_call(dev, &desc);
+        ret = qcom_scm_call_atomic(dev, &desc);
         if (ret >= 0)
                 *val = desc.res[0];
 
@@ -431,7 +737,7 @@ int __qcom_scm_io_writel(struct device *dev, phys_addr_t addr, unsigned int val)
         desc.args[1] = val;
         desc.arginfo = QCOM_SCM_ARGS(2);
 
-        return qcom_scm_call(dev, &desc);
+        return qcom_scm_call_atomic(dev, &desc);
 }
 
 int __qcom_scm_is_call_available(struct device *dev, u32 svc_id, u32 cmd_id)
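
For readers new to this file, a minimal sketch of how the descriptor interface above is driven by a wrapper (illustration only, not part of the patch; it reuses QCOM_SCM_SVC_BOOT, QCOM_SCM_BOOT_SET_ADDR, ARM_SMCCC_OWNER_SIP and QCOM_SCM_ARGS() from the hunks above, and the wrapper name is made up):

/*
 * Illustration only: a caller fills a struct qcom_scm_desc and lets the
 * qcom_scm_call*() helpers pick the calling convention.  In this patch
 * qcom_scm_call()/qcom_scm_call_atomic() forward to qcom_scm_call_smccc(),
 * while qcom_scm_call_legacy()/qcom_scm_call_atomic_legacy() implement the
 * same descriptor interface for the legacy convention.
 */
static int qcom_scm_example_wrapper(struct device *dev, u32 flags,
                                    phys_addr_t addr)
{
        struct qcom_scm_desc desc = {
                .svc = QCOM_SCM_SVC_BOOT,      /* secure service to target */
                .cmd = QCOM_SCM_BOOT_SET_ADDR, /* command within that service */
                .owner = ARM_SMCCC_OWNER_SIP,
        };

        desc.args[0] = flags;
        desc.args[1] = addr;
        desc.arginfo = QCOM_SCM_ARGS(2);       /* two register arguments */

        return qcom_scm_call_atomic(dev, &desc);
}

The same svc/cmd pair is what LEGACY_FUNCNUM() and LEGACY_ATOMIC() repack into the legacy command id, which is the point of merging the two conventions behind one descriptor.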