From patchwork Fri Jul 28 13:23:12 2023
X-Patchwork-Submitter: Vikash Garodia
X-Patchwork-Id: 13331918
From: Vikash Garodia
Subject: [PATCH 01/33] MAINTAINERS: Add Qualcomm Iris video accelerator driver
Date: Fri, 28 Jul 2023 18:53:12 +0530
Message-ID: <1690550624-14642-2-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
References: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org

Add an entry for the Iris video encoder/decoder accelerator driver.

Signed-off-by: Dikshita Agarwal
Signed-off-by: Vikash Garodia
---
 MAINTAINERS | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 3be1bdf..ea633b2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17671,6 +17671,16 @@ T:	git git://linuxtv.org/media_tree.git
 F:	Documentation/devicetree/bindings/media/*venus*
 F:	drivers/media/platform/qcom/venus/
 
+QUALCOMM IRIS VIDEO ACCELERATOR DRIVER
+M:	Vikash Garodia
+M:	Dikshita Agarwal
+L:	linux-media@vger.kernel.org
+L:	linux-arm-msm@vger.kernel.org
+S:	Maintained
+T:	git git://linuxtv.org/media_tree.git
+F:	Documentation/devicetree/bindings/media/qcom,*-iris.yaml
+F:	drivers/media/platform/qcom/iris/
+
 QUALCOMM WCN36XX WIRELESS DRIVER
 M:	Loic Poulain
 L:	wcn36xx@lists.infradead.org

From patchwork Fri Jul 28 13:23:13 2023
X-Patchwork-Submitter: Vikash Garodia
X-Patchwork-Id: 13331917
From: Vikash Garodia
Subject: [PATCH 02/33] iris: vidc: add core functions
Date: Fri, 28 Jul 2023 18:53:13 +0530
Message-ID: <1690550624-14642-3-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
References: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org

From: Dikshita Agarwal

This implements the platform driver methods, file operations and v4l2
registration.

Signed-off-by: Dikshita Agarwal
Signed-off-by: Vikash Garodia
---
 .../platform/qcom/iris/vidc/src/msm_vidc_probe.c | 660 +++++++++++++++++++++
 1 file changed, 660 insertions(+)
 create mode 100644 drivers/media/platform/qcom/iris/vidc/src/msm_vidc_probe.c

diff --git a/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_probe.c b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_probe.c
new file mode 100644
index 0000000..43439cb
--- /dev/null
+++ b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_probe.c
@@ -0,0 +1,660 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2020-2022, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "msm_vidc_core.h"
+#include "msm_vidc_debug.h"
+#include "msm_vidc_driver.h"
+#include "msm_vidc_internal.h"
+#include "msm_vidc_memory.h"
+#include "msm_vidc_platform.h"
+#include "msm_vidc_state.h"
+#include "venus_hfi.h"
+
+#define BASE_DEVICE_NUMBER 32
+
+struct msm_vidc_core *g_core;
+
+static inline bool is_video_device(struct device *dev)
+{
+	return !!(of_device_is_compatible(dev->of_node, "qcom,sm8550-vidc"));
+}
+
+static inline bool is_video_context_bank_device(struct device *dev)
+{
+	return !!(of_device_is_compatible(dev->of_node, "qcom,vidc,cb-ns"));
+}
+
+static int msm_vidc_init_resources(struct msm_vidc_core *core)
+{
+	struct msm_vidc_resource *res = NULL;
+	int rc = 0;
+
+	res = devm_kzalloc(&core->pdev->dev, sizeof(*res), GFP_KERNEL);
+	if (!res) {
+		d_vpr_e("%s: failed to alloc memory for resource\n", __func__);
+		return -ENOMEM;
+	}
+	core->resource = res;
+
+	rc = call_res_op(core, init, core);
+	if (rc) {
+		d_vpr_e("%s: Failed to init resources: %d\n", __func__, rc);
+		return rc;
+	}
+
+	return 0;
+}
+
+static const struct of_device_id msm_vidc_dt_match[] = {
+	{.compatible = "qcom,sm8550-vidc"},
+	{.compatible = "qcom,vidc,cb-ns"},
+	MSM_VIDC_EMPTY_BRACE
+};
+MODULE_DEVICE_TABLE(of, msm_vidc_dt_match);
+
+static void msm_vidc_release_video_device(struct video_device *vdev)
+{
+	d_vpr_e("%s: video device released\n", __func__);
+}
+
+static void msm_vidc_unregister_video_device(struct msm_vidc_core *core,
+					     enum msm_vidc_domain_type type)
+{
+	int index;
+
+	if (type == MSM_VIDC_DECODER)
+		index = 0;
+	else if (type == MSM_VIDC_ENCODER)
+		index = 1;
+	else
+		return;
+
+	v4l2_m2m_release(core->vdev[index].m2m_dev);
+
+	video_set_drvdata(&core->vdev[index].vdev, NULL);
+	video_unregister_device(&core->vdev[index].vdev);
+}
+
+static int msm_vidc_register_video_device(struct msm_vidc_core *core,
+					  enum msm_vidc_domain_type type, int nr)
+{
+	int rc = 0;
+	int index;
+
+	d_vpr_h("%s: domain %d\n", __func__, type);
+
+	if (type == MSM_VIDC_DECODER)
+		index = 0;
+	else if (type == MSM_VIDC_ENCODER)
+		index = 1;
+	else
+		return -EINVAL;
+
+	core->vdev[index].vdev.release = msm_vidc_release_video_device;
+	core->vdev[index].vdev.fops = core->v4l2_file_ops;
+	if (type == MSM_VIDC_DECODER)
+		core->vdev[index].vdev.ioctl_ops = core->v4l2_ioctl_ops_dec;
+	else
+		core->vdev[index].vdev.ioctl_ops = core->v4l2_ioctl_ops_enc;
+	core->vdev[index].vdev.vfl_dir = VFL_DIR_M2M;
+	core->vdev[index].type = type;
+	core->vdev[index].vdev.v4l2_dev = &core->v4l2_dev;
+	core->vdev[index].vdev.device_caps = core->capabilities[DEVICE_CAPS].value;
+	rc = video_register_device(&core->vdev[index].vdev,
+				   VFL_TYPE_VIDEO, nr);
+	if (rc) {
+		d_vpr_e("Failed to register the video device\n");
+		return rc;
+	}
+	video_set_drvdata(&core->vdev[index].vdev, core);
+
+	core->vdev[index].m2m_dev = v4l2_m2m_init(core->v4l2_m2m_ops);
+	if (IS_ERR(core->vdev[index].m2m_dev)) {
+		d_vpr_e("Failed to initialize V4L2 M2M device\n");
+		rc = PTR_ERR(core->vdev[index].m2m_dev);
+		goto m2m_init_failed;
+	}
+
+	return 0;
+
+m2m_init_failed:
+	video_unregister_device(&core->vdev[index].vdev);
+	return rc;
+}
+
+static int msm_vidc_deinitialize_core(struct msm_vidc_core *core)
+{
+	int rc = 0;
+
+	if (!core) {
+		d_vpr_e("%s: invalid params\n", __func__);
+		return -EINVAL;
+	}
+
+	mutex_destroy(&core->lock);
+	msm_vidc_update_core_state(core, MSM_VIDC_CORE_DEINIT, __func__);
+
+	if (core->batch_workq)
+		destroy_workqueue(core->batch_workq);
+
+	if (core->pm_workq)
+		destroy_workqueue(core->pm_workq);
+
+	core->batch_workq = NULL;
+	core->pm_workq = NULL;
+
+	return rc;
+}
+
+static int msm_vidc_initialize_core(struct msm_vidc_core *core)
+{
+	int rc = 0;
+
+	msm_vidc_update_core_state(core, MSM_VIDC_CORE_DEINIT, __func__);
+
+	core->pm_workq = create_singlethread_workqueue("pm_workq");
+	if (!core->pm_workq) {
+		d_vpr_e("%s: create pm workq failed\n", __func__);
+		rc = -EINVAL;
+		goto exit;
+	}
+
+	core->batch_workq = create_singlethread_workqueue("batch_workq");
+	if (!core->batch_workq) {
+		d_vpr_e("%s: create batch workq failed\n", __func__);
+		rc = -EINVAL;
+		goto exit;
+	}
+
+	core->packet_size = VIDC_IFACEQ_VAR_HUGE_PKT_SIZE;
+	core->packet = devm_kzalloc(&core->pdev->dev, core->packet_size, GFP_KERNEL);
+	if (!core->packet) {
+		d_vpr_e("%s: failed to alloc core packet\n", __func__);
+		rc = -ENOMEM;
+		goto exit;
+	}
+
+	core->response_packet = devm_kzalloc(&core->pdev->dev, core->packet_size, GFP_KERNEL);
+	if (!core->response_packet) {
+		d_vpr_e("%s: failed to alloc core response packet\n", __func__);
+		rc = -ENOMEM;
+		goto exit;
+	}
+
+	mutex_init(&core->lock);
+	INIT_LIST_HEAD(&core->instances);
+	INIT_LIST_HEAD(&core->dangling_instances);
+
+	INIT_DELAYED_WORK(&core->pm_work, venus_hfi_pm_work_handler);
+	INIT_DELAYED_WORK(&core->fw_unload_work, msm_vidc_fw_unload_handler);
+
+	return 0;
+exit:
+	if (core->batch_workq)
+		destroy_workqueue(core->batch_workq);
+	if (core->pm_workq)
+		destroy_workqueue(core->pm_workq);
+	core->batch_workq = NULL;
+	core->pm_workq = NULL;
+
+	return rc;
+}
+
+static void msm_vidc_devm_deinit_core(void *res)
+{
+	struct msm_vidc_core *core = res;
+
+	msm_vidc_deinitialize_core(core);
+}
+
+static int msm_vidc_devm_init_core(struct device *dev, struct msm_vidc_core *core)
+{
+	int rc = 0;
+
+	if (!dev || !core) {
+		d_vpr_e("%s: invalid params\n", __func__);
+		return -EINVAL;
+	}
+
+	rc = msm_vidc_initialize_core(core);
+	if (rc) {
+		d_vpr_e("%s: init failed with %d\n", __func__, rc);
+		return rc;
+	}
+
+	rc = devm_add_action_or_reset(dev, msm_vidc_devm_deinit_core, (void *)core);
+	if (rc)
+		return -EINVAL;
+
+	return rc;
+}
+
+static void msm_vidc_devm_debugfs_put(void *res)
+{
+	struct dentry *parent = res;
+
+	debugfs_remove_recursive(parent);
+}
+
+static struct dentry *msm_vidc_devm_debugfs_get(struct device *dev)
+{
+	struct dentry *parent = NULL;
+	int rc = 0;
+
+	if (!dev) {
+		d_vpr_e("%s: invalid params\n", __func__);
+		return NULL;
+	}
+
+	parent = msm_vidc_debugfs_init_drv();
+	if (!parent)
+		return NULL;
+
+	rc = devm_add_action_or_reset(dev, msm_vidc_devm_debugfs_put, (void *)parent);
+	if (rc)
+		return NULL;
+
+	return parent;
+}
+
+static int msm_vidc_setup_context_bank(struct msm_vidc_core *core,
+				       struct device *dev)
+{
+	struct context_bank_info *cb = NULL;
+	int rc = 0;
+
+	cb = msm_vidc_get_context_bank_for_device(core, dev);
+	if (!cb) {
+		d_vpr_e("%s: Failed to get context bank device for %s\n",
+			__func__, dev_name(dev));
+		return -EIO;
+	}
+
+	/* populate dev & domain field */
+	cb->dev = dev;
+	cb->domain = iommu_get_domain_for_dev(cb->dev);
+	if (!cb->domain) {
+		d_vpr_e("%s: Failed to get iommu domain for %s\n", __func__, dev_name(dev));
+		return -EIO;
+	}
+
+	if (cb->dma_mask) {
+		rc = dma_set_mask_and_coherent(cb->dev, cb->dma_mask);
+		if (rc) {
+			d_vpr_e("%s: dma_set_mask_and_coherent failed\n", __func__);
+			return rc;
+		}
+	}
+
+	/*
+	 * configure device segment size and segment boundary to ensure
+	 * iommu mapping returns one mapping (which is required for partial
+	 * cache operations)
+	 */
+	if (!dev->dma_parms)
+		dev->dma_parms =
+			devm_kzalloc(dev, sizeof(*dev->dma_parms), GFP_KERNEL);
+	dma_set_max_seg_size(dev, (unsigned int)DMA_BIT_MASK(32));
+	dma_set_seg_boundary(dev, (unsigned long)DMA_BIT_MASK(64));
+
+	iommu_set_fault_handler(cb->domain, msm_vidc_smmu_fault_handler, (void *)core);
+
+	d_vpr_h("%s: name %s addr start %x size %x secure %d\n",
+		__func__, cb->name, cb->addr_range.start,
+		cb->addr_range.size, cb->secure);
+	d_vpr_h("%s: dma_coherant %d region %d dev_name %s domain %pK dma_mask %llu\n",
+		__func__, cb->dma_coherant, cb->region, dev_name(cb->dev),
+		cb->domain, cb->dma_mask);
+
+	return rc;
+}
+
+static int msm_vidc_remove_video_device(struct platform_device *pdev)
+{
+	struct msm_vidc_core *core;
+
+	if (!pdev) {
+		d_vpr_e("%s: invalid input %pK", __func__, pdev);
+		return -EINVAL;
+	}
+
+	core = dev_get_drvdata(&pdev->dev);
+	if (!core) {
+		d_vpr_e("%s: invalid core\n", __func__);
+		return -EINVAL;
+	}
+
+	msm_vidc_core_deinit(core, true);
+	venus_hfi_queue_deinit(core);
+
+	msm_vidc_unregister_video_device(core, MSM_VIDC_ENCODER);
+	msm_vidc_unregister_video_device(core, MSM_VIDC_DECODER);
+
+	v4l2_device_unregister(&core->v4l2_dev);
+
+	d_vpr_h("depopulating sub devices\n");
+	/*
+	 * Trigger remove for each sub-device i.e. qcom,context-bank,xxxx
+	 * When msm_vidc_remove is called for each sub-device, destroy
+	 * context-bank mappings.
+	 */
+	of_platform_depopulate(&pdev->dev);
+
+	dev_set_drvdata(&pdev->dev, NULL);
+	g_core = NULL;
+	d_vpr_h("%s(): successful\n", __func__);
+
+	return 0;
+}
+
+static int msm_vidc_remove_context_bank(struct platform_device *pdev)
+{
+	d_vpr_h("%s(): %s\n", __func__, dev_name(&pdev->dev));
+
+	return 0;
+}
+
+static int msm_vidc_remove(struct platform_device *pdev)
+{
+	/*
+	 * Sub devices remove will be triggered by of_platform_depopulate()
+	 * after core_deinit(). It returns immediately after completing
+	 * sub-device remove.
+	 */
+	if (is_video_device(&pdev->dev))
+		return msm_vidc_remove_video_device(pdev);
+	else if (is_video_context_bank_device(&pdev->dev))
+		return msm_vidc_remove_context_bank(pdev);
+
+	/* How did we end up here? */
+	WARN_ON(1);
+	return -EINVAL;
+}
+
+static int msm_vidc_probe_video_device(struct platform_device *pdev)
+{
+	int rc = 0;
+	struct msm_vidc_core *core = NULL;
+	int nr = BASE_DEVICE_NUMBER;
+
+	d_vpr_h("%s: %s\n", __func__, dev_name(&pdev->dev));
+
+	core = devm_kzalloc(&pdev->dev, sizeof(struct msm_vidc_core), GFP_KERNEL);
+	if (!core) {
+		d_vpr_e("%s: failed to alloc memory for core\n", __func__);
+		return -ENOMEM;
+	}
+	g_core = core;
+
+	core->pdev = pdev;
+	dev_set_drvdata(&pdev->dev, core);
+
+	core->debugfs_parent = msm_vidc_devm_debugfs_get(&pdev->dev);
+	if (!core->debugfs_parent)
+		d_vpr_h("Failed to create debugfs for msm_vidc\n");
+
+	rc = msm_vidc_devm_init_core(&pdev->dev, core);
+	if (rc) {
+		d_vpr_e("%s: init core failed with %d\n", __func__, rc);
+		goto init_core_failed;
+	}
+
+	rc = msm_vidc_init_platform(core);
+	if (rc) {
+		d_vpr_e("%s: init platform failed with %d\n", __func__, rc);
+		rc = -EINVAL;
+		goto init_plat_failed;
+	}
+
+	rc = msm_vidc_init_resources(core);
+	if (rc) {
+		d_vpr_e("%s: init resource failed with %d\n", __func__, rc);
+		goto init_res_failed;
+	}
+
+	rc = msm_vidc_init_core_caps(core);
+	if (rc) {
+		d_vpr_e("%s: init core caps failed with %d\n", __func__, rc);
+		goto init_res_failed;
+	}
+
+	rc = msm_vidc_init_instance_caps(core);
+	if (rc) {
+		d_vpr_e("%s: init inst cap failed with %d\n", __func__, rc);
+		goto init_inst_caps_fail;
+	}
+
+	core->debugfs_root = msm_vidc_debugfs_init_core(core);
+	if (!core->debugfs_root)
+		d_vpr_h("Failed to init debugfs core\n");
+
+	d_vpr_h("populating sub devices\n");
+	/*
+	 * Trigger probe for each sub-device i.e. qcom,msm-vidc,context-bank.
+	 * When msm_vidc_probe is called for each sub-device, parse the
+	 * context-bank details.
+	 */
+	rc = of_platform_populate(pdev->dev.of_node, msm_vidc_dt_match, NULL,
+				  &pdev->dev);
+	if (rc) {
+		d_vpr_e("Failed to trigger probe for sub-devices\n");
+		goto sub_dev_failed;
+	}
+
+	rc = v4l2_device_register(&pdev->dev, &core->v4l2_dev);
+	if (rc) {
+		d_vpr_e("Failed to register v4l2 device\n");
+		goto v4l2_reg_failed;
+	}
+
+	/* setup the decoder device */
+	rc = msm_vidc_register_video_device(core, MSM_VIDC_DECODER, nr);
+	if (rc) {
+		d_vpr_e("Failed to register video decoder\n");
+		goto dec_reg_failed;
+	}
+
+	/* setup the encoder device */
+	rc = msm_vidc_register_video_device(core, MSM_VIDC_ENCODER, nr + 1);
+	if (rc) {
+		d_vpr_e("Failed to register video encoder\n");
+		goto enc_reg_failed;
+	}
+
+	rc = venus_hfi_queue_init(core);
+	if (rc) {
+		d_vpr_e("%s: interface queues init failed\n", __func__);
+		goto queues_init_failed;
+	}
+
+	rc = msm_vidc_core_init(core);
+	if (rc) {
+		d_vpr_e("%s: sys init failed\n", __func__);
+		goto core_init_failed;
+	}
+
+	d_vpr_h("%s(): successful\n", __func__);
+
+	return rc;
+
+core_init_failed:
+	venus_hfi_queue_deinit(core);
+queues_init_failed:
+	msm_vidc_unregister_video_device(core, MSM_VIDC_ENCODER);
+enc_reg_failed:
+	msm_vidc_unregister_video_device(core, MSM_VIDC_DECODER);
+dec_reg_failed:
+	v4l2_device_unregister(&core->v4l2_dev);
+v4l2_reg_failed:
+	of_platform_depopulate(&pdev->dev);
+sub_dev_failed:
+init_inst_caps_fail:
+init_res_failed:
+init_plat_failed:
+init_core_failed:
+	dev_set_drvdata(&pdev->dev, NULL);
+	g_core = NULL;
+
+	return rc;
+}
+
+static int msm_vidc_probe_context_bank(struct platform_device *pdev)
+{
+	struct msm_vidc_core *core = NULL;
+	int rc = 0;
+
+	if (!pdev) {
+		d_vpr_e("%s: Invalid platform device %pK", __func__, pdev);
+		return -EINVAL;
+	} else if (!pdev->dev.parent) {
+		d_vpr_e("%s: Failed to find a parent for %s\n",
+			__func__, dev_name(&pdev->dev));
+		return -ENODEV;
+	}
+
+	d_vpr_h("%s(): %s\n", __func__, dev_name(&pdev->dev));
+
+	core = dev_get_drvdata(pdev->dev.parent);
+	if (!core) {
+		d_vpr_e("%s: core not found in device %s",
+			__func__, dev_name(pdev->dev.parent));
+		return -EINVAL;
+	}
+
+	rc = msm_vidc_setup_context_bank(core, &pdev->dev);
+	if (rc) {
+		d_vpr_e("%s: Failed to probe context bank %s\n",
+			__func__, dev_name(&pdev->dev));
+		return rc;
+	}
+
+	return rc;
+}
+
+static int msm_vidc_probe(struct platform_device *pdev)
+{
+	if (!pdev) {
+		d_vpr_e("%s: invalid params\n", __func__);
+		return -EINVAL;
+	}
+
+	/*
+	 * Sub devices probe will be triggered by of_platform_populate() towards
+	 * the end of the probe function after msm-vidc device probe is
+	 * completed. Return immediately after completing sub-device probe.
+	 */
+	if (is_video_device(&pdev->dev))
+		return msm_vidc_probe_video_device(pdev);
+	else if (is_video_context_bank_device(&pdev->dev))
+		return msm_vidc_probe_context_bank(pdev);
+
+	/* How did we end up here? */
+	WARN_ON(1);
+	return -EINVAL;
+}
+
+static int msm_vidc_pm_suspend(struct device *dev)
+{
+	int rc = 0;
+	struct msm_vidc_core *core;
+	enum msm_vidc_allow allow = MSM_VIDC_DISALLOW;
+
+	/*
+	 * Bail out if
+	 * - driver possibly not probed yet
+	 * - not the main device. We don't support power management on
+	 *   subdevices (e.g. context banks)
+	 */
+	if (!dev || !dev->driver || !is_video_device(dev))
+		return 0;
+
+	core = dev_get_drvdata(dev);
+	if (!core) {
+		d_vpr_e("%s: invalid core\n", __func__);
+		return -EINVAL;
+	}
+
+	core_lock(core, __func__);
+	allow = msm_vidc_allow_pm_suspend(core);
+
+	if (allow == MSM_VIDC_IGNORE) {
+		d_vpr_h("%s: pm already suspended\n", __func__);
+		msm_vidc_change_core_sub_state(core, 0, CORE_SUBSTATE_PM_SUSPEND, __func__);
+		rc = 0;
+		goto unlock;
+	} else if (allow != MSM_VIDC_ALLOW) {
+		d_vpr_h("%s: pm suspend not allowed\n", __func__);
+		rc = 0;
+		goto unlock;
+	}
+
+	rc = msm_vidc_suspend(core);
+	if (rc == -EOPNOTSUPP)
+		rc = 0;
+	else if (rc)
+		d_vpr_e("Failed to suspend: %d\n", rc);
+	else
+		msm_vidc_change_core_sub_state(core, 0, CORE_SUBSTATE_PM_SUSPEND, __func__);
+
+unlock:
+	core_unlock(core, __func__);
+	return rc;
+}
+
+static int msm_vidc_pm_resume(struct device *dev)
+{
+	struct msm_vidc_core *core;
+
+	/*
+	 * Bail out if
+	 * - driver possibly not probed yet
+	 * - not the main device. We don't support power management on
+	 *   subdevices (e.g. context banks)
+	 */
+	if (!dev || !dev->driver || !is_video_device(dev))
+		return 0;
+
+	core = dev_get_drvdata(dev);
+	if (!core) {
+		d_vpr_e("%s: invalid core\n", __func__);
+		return -EINVAL;
+	}
+
+	/* remove PM suspend from core sub_state */
+	core_lock(core, __func__);
+	msm_vidc_change_core_sub_state(core, CORE_SUBSTATE_PM_SUSPEND, 0, __func__);
+	core_unlock(core, __func__);
+
+	return 0;
+}
+
+static const struct dev_pm_ops msm_vidc_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(msm_vidc_pm_suspend, msm_vidc_pm_resume)
+};
+
+struct platform_driver msm_vidc_driver = {
+	.probe = msm_vidc_probe,
+	.remove = msm_vidc_remove,
+	.driver = {
+		.name = "msm_vidc_v4l2",
+		.of_match_table = msm_vidc_dt_match,
+		.pm = &msm_vidc_pm_ops,
+	},
+};
+
+module_platform_driver(msm_vidc_driver);
+MODULE_LICENSE("GPL");

From patchwork Fri Jul 28 13:23:14 2023
X-Patchwork-Submitter: Vikash Garodia
X-Patchwork-Id: 13331920
From: Vikash Garodia
Subject: [PATCH 03/33] iris: vidc: add v4l2 wrapper file
Date: Fri, 28 Jul 2023 18:53:14 +0530
Message-ID: <1690550624-14642-4-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
References: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org

Implement the v4l2 wrapper functions for all v4l2 IOCTLs.

Signed-off-by: Dikshita Agarwal
Signed-off-by: Vikash Garodia
---
 .../platform/qcom/iris/vidc/inc/msm_vidc_v4l2.h |  77 ++
 .../platform/qcom/iris/vidc/src/msm_vidc_v4l2.c | 953 +++++++++++++++++++++
 2 files changed, 1030 insertions(+)
 create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_v4l2.h
 create mode 100644 drivers/media/platform/qcom/iris/vidc/src/msm_vidc_v4l2.c

diff --git a/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_v4l2.h b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_v4l2.h
new file mode 100644
index 0000000..3766c9d
--- /dev/null
+++ b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_v4l2.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef _MSM_VIDC_V4L2_H_
+#define _MSM_VIDC_V4L2_H_
+
+#include
+#include
+#include
+#include
+#include
+
+int msm_v4l2_open(struct file *filp);
+int msm_v4l2_close(struct file *filp);
+int msm_v4l2_querycap(struct file *filp, void *fh,
+		      struct v4l2_capability *cap);
+int msm_v4l2_enum_fmt(struct file *file, void *fh,
+		      struct v4l2_fmtdesc *f);
+int msm_v4l2_try_fmt(struct file *file, void *fh,
+		     struct v4l2_format *f);
+int msm_v4l2_s_fmt(struct file *file, void *fh,
+		   struct v4l2_format *f);
+int msm_v4l2_g_fmt(struct file *file, void *fh,
+		   struct v4l2_format *f);
+int msm_v4l2_s_selection(struct file *file, void *fh,
+			 struct v4l2_selection *s);
+int msm_v4l2_g_selection(struct file *file, void *fh,
+			 struct v4l2_selection *s);
+int msm_v4l2_s_parm(struct file *file, void *fh,
+		    struct v4l2_streamparm *a);
+int msm_v4l2_g_parm(struct file *file, void *fh,
+		    struct v4l2_streamparm *a);
+int msm_v4l2_reqbufs(struct file *file, void *fh,
+		     struct v4l2_requestbuffers *b);
+int msm_v4l2_querybuf(struct file *file, void *fh,
+		      struct v4l2_buffer *b);
+int msm_v4l2_create_bufs(struct file *filp, void *fh,
+			 struct v4l2_create_buffers *b);
+int msm_v4l2_prepare_buf(struct file *filp, void *fh,
+			 struct v4l2_buffer *b);
+int msm_v4l2_qbuf(struct file *file, void *fh,
+		  struct v4l2_buffer *b);
+int msm_v4l2_dqbuf(struct file *file, void *fh,
+		   struct v4l2_buffer *b);
+int msm_v4l2_streamon(struct file *file, void *fh,
+		      enum v4l2_buf_type i);
+int msm_v4l2_streamoff(struct file *file, void *fh,
+		       enum v4l2_buf_type i);
+int msm_v4l2_subscribe_event(struct v4l2_fh *fh,
+			     const struct v4l2_event_subscription *sub);
+int msm_v4l2_unsubscribe_event(struct v4l2_fh *fh,
+			       const struct v4l2_event_subscription *sub);
+int msm_v4l2_try_decoder_cmd(struct file *file, void *fh,
+			     struct v4l2_decoder_cmd *enc);
+int msm_v4l2_decoder_cmd(struct file *file, void *fh,
+			 struct v4l2_decoder_cmd *dec);
+int msm_v4l2_try_encoder_cmd(struct file *file, void *fh,
+			     struct v4l2_encoder_cmd *enc);
+int msm_v4l2_encoder_cmd(struct file *file, void *fh,
+			 struct v4l2_encoder_cmd *enc);
+int msm_v4l2_enum_framesizes(struct file *file, void *fh,
+			     struct v4l2_frmsizeenum *fsize);
+int msm_v4l2_enum_frameintervals(struct file *file, void *fh,
+				 struct v4l2_frmivalenum *fival);
+int msm_v4l2_queryctrl(struct file *file, void *fh,
+		       struct v4l2_queryctrl *ctrl);
+int msm_v4l2_querymenu(struct file *file, void *fh,
+		       struct v4l2_querymenu *qmenu);
+unsigned int msm_v4l2_poll(struct file *filp,
+			   struct poll_table_struct *pt);
+void msm_v4l2_m2m_device_run(void *priv);
+void msm_v4l2_m2m_job_abort(void *priv);
+
+#endif // _MSM_VIDC_V4L2_H_

diff --git a/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_v4l2.c b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_v4l2.c
new file mode 100644
index 0000000..6dfb18b
--- /dev/null
+++ b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_v4l2.c
@@ -0,0 +1,953 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */ + +#include "msm_vidc.h" +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_inst.h" +#include "msm_vidc_internal.h" +#include "msm_vidc_v4l2.h" + +static struct msm_vidc_inst *get_vidc_inst(struct file *filp, void *fh) +{ + if (!filp || !filp->private_data) + return NULL; + return container_of(filp->private_data, + struct msm_vidc_inst, fh); +} + +unsigned int msm_v4l2_poll(struct file *filp, struct poll_table_struct *pt) +{ + int poll = 0; + struct msm_vidc_inst *inst = get_vidc_inst(filp, NULL); + + inst = get_inst_ref(g_core, inst); + if (!inst) { + d_vpr_e("%s: invalid instance\n", __func__); + return POLLERR; + } + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + poll = POLLERR; + goto exit; + } + + poll = msm_vidc_poll((void *)inst, filp, pt); + if (poll) + goto exit; + +exit: + put_inst(inst); + return poll; +} + +int msm_v4l2_open(struct file *filp) +{ + struct video_device *vdev = video_devdata(filp); + struct msm_video_device *vid_dev = + container_of(vdev, struct msm_video_device, vdev); + struct msm_vidc_core *core = video_drvdata(filp); + struct msm_vidc_inst *inst; + + inst = msm_vidc_open(core, vid_dev->type); + if (!inst) { + d_vpr_e("Failed to create instance, type = %d\n", + vid_dev->type); + return -ENOMEM; + } + filp->private_data = &inst->fh; + return 0; +} + +int msm_v4l2_close(struct file *filp) +{ + int rc = 0; + struct msm_vidc_inst *inst; + + inst = get_vidc_inst(filp, NULL); + if (!inst) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + rc = msm_vidc_close(inst); + filp->private_data = NULL; + return rc; +} + +int msm_v4l2_querycap(struct file *filp, void *fh, + struct v4l2_capability *cap) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !cap) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, 
__func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = msm_vidc_querycap((void *)inst, cap); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_enum_fmt(struct file *filp, void *fh, + struct v4l2_fmtdesc *f) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !f) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = msm_vidc_enum_fmt((void *)inst, f); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_try_fmt(struct file *filp, void *fh, struct v4l2_format *f) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !f) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = inst->event_handle(inst, MSM_VIDC_TRY_FMT, f); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_s_fmt(struct file *filp, void *fh, + struct v4l2_format *f) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !f) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if 
(is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = inst->event_handle(inst, MSM_VIDC_S_FMT, f); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_g_fmt(struct file *filp, void *fh, + struct v4l2_format *f) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !f) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = msm_vidc_g_fmt((void *)inst, f); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_s_selection(struct file *filp, void *fh, + struct v4l2_selection *s) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !s) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = msm_vidc_s_selection((void *)inst, s); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_g_selection(struct file *filp, void *fh, + struct v4l2_selection *s) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !s) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: 
inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = msm_vidc_g_selection((void *)inst, s); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_s_parm(struct file *filp, void *fh, + struct v4l2_streamparm *a) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !a) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = msm_vidc_s_param((void *)inst, a); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_g_parm(struct file *filp, void *fh, + struct v4l2_streamparm *a) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !a) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = msm_vidc_g_param((void *)inst, a); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_reqbufs(struct file *filp, void *fh, + struct v4l2_requestbuffers *b) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !b) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + 
goto unlock; + } + rc = inst->event_handle(inst, MSM_VIDC_REQBUFS, b); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_querybuf(struct file *filp, void *fh, + struct v4l2_buffer *b) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !b) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = msm_vidc_querybuf((void *)inst, b); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_create_bufs(struct file *filp, void *fh, + struct v4l2_create_buffers *b) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !b) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = msm_vidc_create_bufs((void *)inst, b); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_prepare_buf(struct file *filp, void *fh, + struct v4l2_buffer *b) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + struct video_device *vdev = video_devdata(filp); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !b) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc 
= -EBUSY; + goto unlock; + } + rc = msm_vidc_prepare_buf((void *)inst, vdev->v4l2_dev->mdev, b); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_qbuf(struct file *filp, void *fh, + struct v4l2_buffer *b) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + struct video_device *vdev = video_devdata(filp); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !b) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EINVAL; + goto exit; + } + + rc = msm_vidc_qbuf(inst, vdev->v4l2_dev->mdev, b); + if (rc) + goto exit; + +exit: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_dqbuf(struct file *filp, void *fh, + struct v4l2_buffer *b) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !b) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + rc = msm_vidc_dqbuf(inst, b); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_streamon(struct file *filp, void *fh, + enum v4l2_buf_type i) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto exit; + } + + rc = msm_vidc_streamon((void *)inst, i); + if (rc) + goto exit; + +exit: + inst_unlock(inst, 
__func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_streamoff(struct file *filp, void *fh, + enum v4l2_buf_type i) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + rc = msm_vidc_streamoff((void *)inst, i); + if (rc) + i_vpr_e(inst, "%s: msm_vidc_streamoff failed\n", __func__); + + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_subscribe_event(struct v4l2_fh *fh, + const struct v4l2_event_subscription *sub) +{ + struct msm_vidc_inst *inst; + int rc = 0; + + inst = container_of(fh, struct msm_vidc_inst, fh); + inst = get_inst_ref(g_core, inst); + if (!inst || !sub) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = msm_vidc_subscribe_event((void *)inst, sub); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_unsubscribe_event(struct v4l2_fh *fh, + const struct v4l2_event_subscription *sub) +{ + struct msm_vidc_inst *inst; + int rc = 0; + + inst = container_of(fh, struct msm_vidc_inst, fh); + inst = get_inst_ref(g_core, inst); + if (!inst || !sub) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + rc = msm_vidc_unsubscribe_event((void *)inst, sub); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_try_decoder_cmd(struct file *filp, void *fh, + struct 
v4l2_decoder_cmd *dec) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !dec) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = msm_vidc_try_cmd(inst, (union msm_v4l2_cmd *)dec); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_decoder_cmd(struct file *filp, void *fh, + struct v4l2_decoder_cmd *dec) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + enum msm_vidc_event event; + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + if (!dec) { + i_vpr_e(inst, "%s: invalid params\n", __func__); + rc = -EINVAL; + goto unlock; + } + if (dec->cmd != V4L2_DEC_CMD_START && + dec->cmd != V4L2_DEC_CMD_STOP) { + i_vpr_e(inst, "%s: invalid cmd %#x\n", __func__, dec->cmd); + rc = -EINVAL; + goto unlock; + } + event = (dec->cmd == V4L2_DEC_CMD_START ? 
MSM_VIDC_CMD_START : MSM_VIDC_CMD_STOP); + rc = inst->event_handle(inst, event, NULL); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_try_encoder_cmd(struct file *filp, void *fh, + struct v4l2_encoder_cmd *enc) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !enc) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = msm_vidc_try_cmd(inst, (union msm_v4l2_cmd *)enc); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_encoder_cmd(struct file *filp, void *fh, + struct v4l2_encoder_cmd *enc) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + enum msm_vidc_event event; + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + if (!enc) { + i_vpr_e(inst, "%s: invalid params\n", __func__); + rc = -EINVAL; + goto unlock; + } + if (enc->cmd != V4L2_ENC_CMD_START && + enc->cmd != V4L2_ENC_CMD_STOP) { + i_vpr_e(inst, "%s: invalid cmd %#x\n", __func__, enc->cmd); + rc = -EINVAL; + goto unlock; + } + event = (enc->cmd == V4L2_ENC_CMD_START ? 
MSM_VIDC_CMD_START : MSM_VIDC_CMD_STOP); + rc = inst->event_handle(inst, event, NULL); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_enum_framesizes(struct file *filp, void *fh, + struct v4l2_frmsizeenum *fsize) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !fsize) { + d_vpr_e("%s: invalid params: %pK %pK\n", + __func__, inst, fsize); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = msm_vidc_enum_framesizes((void *)inst, fsize); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_enum_frameintervals(struct file *filp, void *fh, + struct v4l2_frmivalenum *fival) + +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !fival) { + d_vpr_e("%s: invalid params: %pK %pK\n", + __func__, inst, fival); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = msm_vidc_enum_frameintervals((void *)inst, fival); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_queryctrl(struct file *filp, void *fh, + struct v4l2_queryctrl *ctrl) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !ctrl) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if 
(is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = msm_vidc_query_ctrl((void *)inst, ctrl); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +int msm_v4l2_querymenu(struct file *filp, void *fh, + struct v4l2_querymenu *qmenu) +{ + struct msm_vidc_inst *inst = get_vidc_inst(filp, fh); + int rc = 0; + + inst = get_inst_ref(g_core, inst); + if (!inst || !qmenu) { + d_vpr_e("%s: invalid params %pK %pK\n", + __func__, inst, qmenu); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: inst in error state\n", __func__); + rc = -EBUSY; + goto unlock; + } + rc = msm_vidc_query_menu((void *)inst, qmenu); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + + return rc; +} + +void msm_v4l2_m2m_device_run(void *priv) +{ + d_vpr_l("%s(): device_run\n", __func__); +} + +void msm_v4l2_m2m_job_abort(void *priv) +{ + struct msm_vidc_inst *inst = priv; + + if (!inst) { + d_vpr_e("%s: invalid params\n", __func__); + return; + } + i_vpr_h(inst, "%s: m2m job aborted\n", __func__); + v4l2_m2m_job_finish(inst->m2m_dev, inst->m2m_ctx); +} From patchwork Fri Jul 28 13:23:15 2023 X-Patchwork-Submitter: Vikash Garodia X-Patchwork-Id: 13331919 From: Vikash Garodia To: , , , , , , , , CC: , Vikash Garodia Subject: [PATCH 04/33] iris: add vidc wrapper file Date: Fri, 28 Jul 2023 18:53:15 +0530 Message-ID: <1690550624-14642-5-git-send-email-quic_vgarodia@quicinc.com> In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com> From: Dikshita Agarwal This implements vidc wrapper functions for all v4l2 IOCTLs. 
Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../media/platform/qcom/iris/vidc/inc/msm_vidc.h | 60 ++ .../media/platform/qcom/iris/vidc/src/msm_vidc.c | 841 +++++++++++++++++++++ 2 files changed, 901 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/msm_vidc.h create mode 100644 drivers/media/platform/qcom/iris/vidc/src/msm_vidc.c diff --git a/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc.h b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc.h new file mode 100644 index 0000000..6cd5fad --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc.h @@ -0,0 +1,60 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef _MSM_VIDC_H_ +#define _MSM_VIDC_H_ + +#include +#include + +struct msm_vidc_core; +struct msm_vidc_inst; + +union msm_v4l2_cmd { + struct v4l2_decoder_cmd dec; + struct v4l2_encoder_cmd enc; +}; + +void *msm_vidc_open(struct msm_vidc_core *core, u32 session_type); +int msm_vidc_close(struct msm_vidc_inst *inst); +int msm_vidc_querycap(struct msm_vidc_inst *inst, struct v4l2_capability *cap); +int msm_vidc_enum_fmt(struct msm_vidc_inst *inst, struct v4l2_fmtdesc *f); +int msm_vidc_try_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f); +int msm_vidc_s_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f); +int msm_vidc_g_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f); +int msm_vidc_s_selection(struct msm_vidc_inst *inst, struct v4l2_selection *s); +int msm_vidc_g_selection(struct msm_vidc_inst *inst, struct v4l2_selection *s); +int msm_vidc_s_param(struct msm_vidc_inst *inst, struct v4l2_streamparm *sp); +int msm_vidc_g_param(struct msm_vidc_inst *inst, struct v4l2_streamparm *sp); +int msm_vidc_reqbufs(struct msm_vidc_inst *inst, struct v4l2_requestbuffers *b); +int msm_vidc_querybuf(struct msm_vidc_inst *inst, struct 
v4l2_buffer *b); +int msm_vidc_create_bufs(struct msm_vidc_inst *inst, struct v4l2_create_buffers *b); +int msm_vidc_prepare_buf(struct msm_vidc_inst *inst, struct media_device *mdev, + struct v4l2_buffer *b); +int msm_vidc_release_buffer(struct msm_vidc_inst *inst, int buffer_type, + unsigned int buffer_index); +int msm_vidc_qbuf(struct msm_vidc_inst *inst, struct media_device *mdev, + struct v4l2_buffer *b); +int msm_vidc_dqbuf(struct msm_vidc_inst *inst, struct v4l2_buffer *b); +int msm_vidc_streamon(struct msm_vidc_inst *inst, enum v4l2_buf_type i); +int msm_vidc_query_ctrl(struct msm_vidc_inst *inst, struct v4l2_queryctrl *ctrl); +int msm_vidc_query_menu(struct msm_vidc_inst *inst, struct v4l2_querymenu *qmenu); +int msm_vidc_streamoff(struct msm_vidc_inst *inst, enum v4l2_buf_type i); +int msm_vidc_try_cmd(struct msm_vidc_inst *inst, union msm_v4l2_cmd *cmd); +int msm_vidc_start_cmd(struct msm_vidc_inst *inst); +int msm_vidc_stop_cmd(struct msm_vidc_inst *inst); +int msm_vidc_poll(struct msm_vidc_inst *inst, struct file *filp, + struct poll_table_struct *pt); +int msm_vidc_subscribe_event(struct msm_vidc_inst *inst, + const struct v4l2_event_subscription *sub); +int msm_vidc_unsubscribe_event(struct msm_vidc_inst *inst, + const struct v4l2_event_subscription *sub); +int msm_vidc_dqevent(struct msm_vidc_inst *inst, struct v4l2_event *event); +int msm_vidc_g_crop(struct msm_vidc_inst *inst, struct v4l2_crop *a); +int msm_vidc_enum_framesizes(struct msm_vidc_inst *inst, struct v4l2_frmsizeenum *fsize); +int msm_vidc_enum_frameintervals(struct msm_vidc_inst *inst, struct v4l2_frmivalenum *fival); + +#endif diff --git a/drivers/media/platform/qcom/iris/vidc/src/msm_vidc.c b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc.c new file mode 100644 index 0000000..c9848c7 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc.c @@ -0,0 +1,841 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. 
All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include +#include + +#include "msm_vdec.h" +#include "msm_venc.h" +#include "msm_vidc.h" +#include "msm_vidc_control.h" +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_inst.h" +#include "msm_vidc_internal.h" +#include "msm_vidc_memory.h" +#include "msm_vidc_power.h" +#include "msm_vidc_v4l2.h" +#include "msm_vidc_vb2.h" +#include "venus_hfi_response.h" + +#define MSM_VIDC_DRV_NAME "msm_vidc_driver" +#define MSM_VIDC_BUS_NAME "platform:msm_vidc_bus" + +static inline bool valid_v4l2_buffer(struct v4l2_buffer *b, + struct msm_vidc_inst *inst) +{ + if (b->type == INPUT_MPLANE || b->type == OUTPUT_MPLANE) + return b->length > 0; + + return false; +} + +static int get_poll_flags(struct msm_vidc_inst *inst, u32 port) +{ + int poll = 0; + struct vb2_queue *q = NULL; + struct vb2_buffer *vb = NULL; + unsigned long flags = 0; + + if (port >= MAX_PORT) { + d_vpr_e("%s: invalid params, inst %pK, port %d\n", + __func__, inst, port); + return poll; + } + q = inst->bufq[port].vb2q; + + spin_lock_irqsave(&q->done_lock, flags); + if (!list_empty(&q->done_list)) + vb = list_first_entry(&q->done_list, struct vb2_buffer, + done_entry); + if (vb && (vb->state == VB2_BUF_STATE_DONE || + vb->state == VB2_BUF_STATE_ERROR)) { + if (port == OUTPUT_PORT) + poll |= POLLIN | POLLRDNORM; + else if (port == INPUT_PORT) + poll |= POLLOUT | POLLWRNORM; + } + spin_unlock_irqrestore(&q->done_lock, flags); + + return poll; +} + +int msm_vidc_poll(struct msm_vidc_inst *inst, struct file *filp, + struct poll_table_struct *wait) +{ + int poll = 0; + + poll_wait(filp, &inst->fh.wait, wait); + poll_wait(filp, &inst->bufq[INPUT_PORT].vb2q->done_wq, wait); + poll_wait(filp, &inst->bufq[OUTPUT_PORT].vb2q->done_wq, wait); + + if (v4l2_event_pending(&inst->fh)) + poll |= POLLPRI; + + poll |= get_poll_flags(inst, INPUT_PORT); + poll |= 
get_poll_flags(inst, OUTPUT_PORT); + + return poll; +} + +int msm_vidc_querycap(struct msm_vidc_inst *inst, struct v4l2_capability *cap) +{ + strscpy(cap->driver, MSM_VIDC_DRV_NAME, sizeof(cap->driver)); + strscpy(cap->bus_info, MSM_VIDC_BUS_NAME, sizeof(cap->bus_info)); + + memset(cap->reserved, 0, sizeof(cap->reserved)); + + if (is_decode_session(inst)) + strscpy(cap->card, "msm_vidc_decoder", sizeof(cap->card)); + else if (is_encode_session(inst)) + strscpy(cap->card, "msm_vidc_encoder", sizeof(cap->card)); + else + return -EINVAL; + + return 0; +} + +int msm_vidc_enum_fmt(struct msm_vidc_inst *inst, struct v4l2_fmtdesc *f) +{ + if (is_decode_session(inst)) + return msm_vdec_enum_fmt(inst, f); + if (is_encode_session(inst)) + return msm_venc_enum_fmt(inst, f); + + return -EINVAL; +} + +int msm_vidc_query_ctrl(struct msm_vidc_inst *inst, struct v4l2_queryctrl *q_ctrl) +{ + int rc = 0; + struct v4l2_ctrl *ctrl; + + ctrl = v4l2_ctrl_find(&inst->ctrl_handler, q_ctrl->id); + if (!ctrl) { + i_vpr_e(inst, "%s: get_ctrl failed for id %d\n", + __func__, q_ctrl->id); + return -EINVAL; + } + q_ctrl->minimum = ctrl->minimum; + q_ctrl->maximum = ctrl->maximum; + q_ctrl->default_value = ctrl->default_value; + q_ctrl->flags = 0; + q_ctrl->step = ctrl->step; + i_vpr_h(inst, + "query ctrl: %s: min %d, max %d, default %d step %d flags %#x\n", + ctrl->name, q_ctrl->minimum, q_ctrl->maximum, + q_ctrl->default_value, q_ctrl->step, q_ctrl->flags); + return rc; +} + +int msm_vidc_query_menu(struct msm_vidc_inst *inst, struct v4l2_querymenu *qmenu) +{ + int rc = 0; + struct v4l2_ctrl *ctrl; + + ctrl = v4l2_ctrl_find(&inst->ctrl_handler, qmenu->id); + if (!ctrl) { + i_vpr_e(inst, "%s: get_ctrl failed for id %d\n", + __func__, qmenu->id); + return -EINVAL; + } + if (ctrl->type != V4L2_CTRL_TYPE_MENU) { + i_vpr_e(inst, "%s: ctrl: %s: type (%d) is not MENU type\n", + __func__, ctrl->name, ctrl->type); + return -EINVAL; + } + if (qmenu->index < ctrl->minimum || qmenu->index > ctrl->maximum) 
+ return -EINVAL; + + if (ctrl->menu_skip_mask & (1 << qmenu->index)) + rc = -EINVAL; + + i_vpr_h(inst, + "%s: ctrl: %s: min %lld, max %lld, menu_skip_mask %lld, qmenu: id %u, index %d, %s\n", + __func__, ctrl->name, ctrl->minimum, ctrl->maximum, + ctrl->menu_skip_mask, qmenu->id, qmenu->index, + rc ? "not supported" : "supported"); + return rc; +} + +int msm_vidc_try_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f) +{ + int rc = 0; + + if (is_decode_session(inst)) + rc = msm_vdec_try_fmt(inst, f); + if (is_encode_session(inst)) + rc = msm_venc_try_fmt(inst, f); + + if (rc) + i_vpr_e(inst, "%s: try_fmt(%d) failed %d\n", + __func__, f->type, rc); + return rc; +} + +int msm_vidc_s_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f) +{ + int rc = 0; + + if (is_decode_session(inst)) + rc = msm_vdec_s_fmt(inst, f); + if (is_encode_session(inst)) + rc = msm_venc_s_fmt(inst, f); + + if (rc) + i_vpr_e(inst, "%s: s_fmt(%d) failed %d\n", + __func__, f->type, rc); + return rc; +} + +int msm_vidc_g_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f) +{ + int rc = 0; + + if (is_decode_session(inst)) + rc = msm_vdec_g_fmt(inst, f); + if (is_encode_session(inst)) + rc = msm_venc_g_fmt(inst, f); + if (rc) + return rc; + + i_vpr_h(inst, "%s: type %s format %s width %d height %d size %d\n", + __func__, v4l2_type_name(f->type), + v4l2_pixelfmt_name(inst, f->fmt.pix_mp.pixelformat), + f->fmt.pix_mp.width, f->fmt.pix_mp.height, + f->fmt.pix_mp.plane_fmt[0].sizeimage); + + return 0; +} + +int msm_vidc_s_selection(struct msm_vidc_inst *inst, struct v4l2_selection *s) +{ + int rc = 0; + + if (is_decode_session(inst)) + rc = msm_vdec_s_selection(inst, s); + if (is_encode_session(inst)) + rc = msm_venc_s_selection(inst, s); + + return rc; +} + +int msm_vidc_g_selection(struct msm_vidc_inst *inst, struct v4l2_selection *s) +{ + int rc = 0; + + if (is_decode_session(inst)) + rc = msm_vdec_g_selection(inst, s); + if (is_encode_session(inst)) + rc = msm_venc_g_selection(inst, s); + 
+ return rc; +} + +int msm_vidc_s_param(struct msm_vidc_inst *inst, struct v4l2_streamparm *param) +{ + int rc = 0; + + if (param->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE && + param->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) + return -EINVAL; + + if (is_encode_session(inst)) { + rc = msm_venc_s_param(inst, param); + } else { + i_vpr_e(inst, "%s: invalid domain %#x\n", + __func__, inst->domain); + return -EINVAL; + } + + return rc; +} + +int msm_vidc_g_param(struct msm_vidc_inst *inst, struct v4l2_streamparm *param) +{ + int rc = 0; + + if (param->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE && + param->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) + return -EINVAL; + + if (is_encode_session(inst)) { + rc = msm_venc_g_param(inst, param); + } else { + i_vpr_e(inst, "%s: invalid domain %#x\n", + __func__, inst->domain); + return -EINVAL; + } + + return rc; +} + +int msm_vidc_reqbufs(struct msm_vidc_inst *inst, struct v4l2_requestbuffers *b) +{ + int rc = 0; + int port; + + port = v4l2_type_to_driver_port(inst, b->type, __func__); + if (port < 0) { + rc = -EINVAL; + goto exit; + } + + rc = vb2_reqbufs(inst->bufq[port].vb2q, b); + if (rc) { + i_vpr_e(inst, "%s: vb2_reqbufs(%d) failed, %d\n", + __func__, b->type, rc); + goto exit; + } + +exit: + return rc; +} + +int msm_vidc_querybuf(struct msm_vidc_inst *inst, struct v4l2_buffer *b) +{ + int rc = 0; + int port; + + port = v4l2_type_to_driver_port(inst, b->type, __func__); + if (port < 0) { + rc = -EINVAL; + goto exit; + } + + rc = vb2_querybuf(inst->bufq[port].vb2q, b); + if (rc) { + i_vpr_e(inst, "%s: vb2_querybuf(%d) failed, %d\n", + __func__, b->type, rc); + goto exit; + } + +exit: + return rc; +} + +int msm_vidc_create_bufs(struct msm_vidc_inst *inst, struct v4l2_create_buffers *b) +{ + int rc = 0; + int port; + struct v4l2_format *f; + + f = &b->format; + port = v4l2_type_to_driver_port(inst, f->type, __func__); + if (port < 0) { + rc = -EINVAL; + goto exit; + } + + rc = vb2_create_bufs(inst->bufq[port].vb2q, b); + 
if (rc) { + i_vpr_e(inst, "%s: vb2_create_bufs(%d) failed, %d\n", + __func__, f->type, rc); + goto exit; + } + +exit: + return rc; +} + +int msm_vidc_prepare_buf(struct msm_vidc_inst *inst, struct media_device *mdev, + struct v4l2_buffer *b) +{ + int rc = 0; + struct vb2_queue *q; + + if (!valid_v4l2_buffer(b, inst)) { + d_vpr_e("%s: invalid params %pK %pK\n", __func__, inst, b); + return -EINVAL; + } + + q = msm_vidc_get_vb2q(inst, b->type, __func__); + if (!q) { + rc = -EINVAL; + goto exit; + } + + rc = vb2_prepare_buf(q, mdev, b); + if (rc) { + i_vpr_e(inst, "%s: failed with %d\n", __func__, rc); + goto exit; + } + +exit: + return rc; +} + +int msm_vidc_qbuf(struct msm_vidc_inst *inst, struct media_device *mdev, + struct v4l2_buffer *b) +{ + int rc = 0; + struct vb2_queue *q; + + if (!valid_v4l2_buffer(b, inst)) { + d_vpr_e("%s: invalid params %pK %pK\n", __func__, inst, b); + return -EINVAL; + } + + q = msm_vidc_get_vb2q(inst, b->type, __func__); + if (!q) { + rc = -EINVAL; + goto exit; + } + + rc = vb2_qbuf(q, mdev, b); + if (rc) + i_vpr_e(inst, "%s: failed with %d\n", __func__, rc); + +exit: + return rc; +} + +int msm_vidc_dqbuf(struct msm_vidc_inst *inst, struct v4l2_buffer *b) +{ + int rc = 0; + struct vb2_queue *q; + + if (!valid_v4l2_buffer(b, inst)) { + d_vpr_e("%s: invalid params %pK %pK\n", __func__, inst, b); + return -EINVAL; + } + + q = msm_vidc_get_vb2q(inst, b->type, __func__); + if (!q) { + rc = -EINVAL; + goto exit; + } + + rc = vb2_dqbuf(q, b, true); + if (rc == -EAGAIN) { + goto exit; + } else if (rc) { + i_vpr_l(inst, "%s: failed with %d\n", __func__, rc); + goto exit; + } + +exit: + return rc; +} + +int msm_vidc_streamon(struct msm_vidc_inst *inst, enum v4l2_buf_type type) +{ + int rc = 0; + int port; + + port = v4l2_type_to_driver_port(inst, type, __func__); + if (port < 0) { + rc = -EINVAL; + goto exit; + } + + rc = vb2_streamon(inst->bufq[port].vb2q, type); + if (rc) { + i_vpr_e(inst, "%s: vb2_streamon(%d) failed, %d\n", + __func__, type, 
rc); + goto exit; + } + +exit: + return rc; +} +EXPORT_SYMBOL(msm_vidc_streamon); + +int msm_vidc_streamoff(struct msm_vidc_inst *inst, enum v4l2_buf_type type) +{ + int rc = 0; + int port; + + port = v4l2_type_to_driver_port(inst, type, __func__); + if (port < 0) { + rc = -EINVAL; + goto exit; + } + + rc = vb2_streamoff(inst->bufq[port].vb2q, type); + if (rc) { + i_vpr_e(inst, "%s: vb2_streamoff(%d) failed, %d\n", + __func__, type, rc); + goto exit; + } + +exit: + return rc; +} + +int msm_vidc_try_cmd(struct msm_vidc_inst *inst, union msm_v4l2_cmd *cmd) +{ + int rc = 0; + struct v4l2_decoder_cmd *dec = NULL; + struct v4l2_encoder_cmd *enc = NULL; + + if (is_decode_session(inst)) { + dec = (struct v4l2_decoder_cmd *)cmd; + i_vpr_h(inst, "%s: cmd %d\n", __func__, dec->cmd); + if (dec->cmd != V4L2_DEC_CMD_STOP && dec->cmd != V4L2_DEC_CMD_START) + return -EINVAL; + dec->flags = 0; + if (dec->cmd == V4L2_DEC_CMD_STOP) { + dec->stop.pts = 0; + } else if (dec->cmd == V4L2_DEC_CMD_START) { + dec->start.speed = 0; + dec->start.format = V4L2_DEC_START_FMT_NONE; + } + } else if (is_encode_session(inst)) { + enc = (struct v4l2_encoder_cmd *)cmd; + i_vpr_h(inst, "%s: cmd %d\n", __func__, enc->cmd); + if (enc->cmd != V4L2_ENC_CMD_STOP && enc->cmd != V4L2_ENC_CMD_START) + return -EINVAL; + enc->flags = 0; + } + + return rc; +} + +int msm_vidc_start_cmd(struct msm_vidc_inst *inst) +{ + int rc = 0; + + if (!is_decode_session(inst) && !is_encode_session(inst)) { + i_vpr_e(inst, "%s: invalid session %d\n", __func__, inst->domain); + return -EINVAL; + } + + if (is_decode_session(inst)) { + rc = msm_vdec_start_cmd(inst); + if (rc) + return rc; + } else if (is_encode_session(inst)) { + rc = msm_venc_start_cmd(inst); + if (rc) + return rc; + } + + return rc; +} + +int msm_vidc_stop_cmd(struct msm_vidc_inst *inst) +{ + int rc = 0; + + if (!is_decode_session(inst) && !is_encode_session(inst)) { + i_vpr_e(inst, "%s: invalid session %d\n", __func__, inst->domain); + return -EINVAL; + } + + 
if (is_decode_session(inst)) { + rc = msm_vdec_stop_cmd(inst); + if (rc) + return rc; + } else if (is_encode_session(inst)) { + rc = msm_venc_stop_cmd(inst); + if (rc) + return rc; + } + + return rc; +} + +int msm_vidc_enum_framesizes(struct msm_vidc_inst *inst, struct v4l2_frmsizeenum *fsize) +{ + enum msm_vidc_colorformat_type colorfmt; + enum msm_vidc_codec_type codec; + + /* only index 0 allowed as per v4l2 spec */ + if (fsize->index) + return -EINVAL; + + /* validate pixel format */ + codec = v4l2_codec_to_driver(inst, fsize->pixel_format, __func__); + if (!codec) { + colorfmt = v4l2_colorformat_to_driver(inst, fsize->pixel_format, + __func__); + if (colorfmt == MSM_VIDC_FMT_NONE) { + i_vpr_e(inst, "%s: unsupported pix fmt %#x\n", + __func__, fsize->pixel_format); + return -EINVAL; + } + } + + fsize->type = V4L2_FRMSIZE_TYPE_STEPWISE; + fsize->stepwise.min_width = inst->capabilities[FRAME_WIDTH].min; + fsize->stepwise.max_width = inst->capabilities[FRAME_WIDTH].max; + fsize->stepwise.step_width = + inst->capabilities[FRAME_WIDTH].step_or_mask; + fsize->stepwise.min_height = inst->capabilities[FRAME_HEIGHT].min; + fsize->stepwise.max_height = inst->capabilities[FRAME_HEIGHT].max; + fsize->stepwise.step_height = + inst->capabilities[FRAME_HEIGHT].step_or_mask; + + return 0; +} + +int msm_vidc_enum_frameintervals(struct msm_vidc_inst *inst, struct v4l2_frmivalenum *fival) +{ + struct msm_vidc_core *core; + enum msm_vidc_colorformat_type colorfmt; + u32 fps, mbpf; + + if (is_decode_session(inst)) { + i_vpr_e(inst, "%s: not supported by decoder\n", __func__); + return -ENOTTY; + } + + core = inst->core; + + /* only index 0 allowed as per v4l2 spec */ + if (fival->index) + return -EINVAL; + + /* validate pixel format */ + colorfmt = v4l2_colorformat_to_driver(inst, fival->pixel_format, __func__); + if (colorfmt == MSM_VIDC_FMT_NONE) { + i_vpr_e(inst, "%s: unsupported pix fmt %#x\n", + __func__, fival->pixel_format); + return -EINVAL; + } + + /* validate resolution 
*/ + if (fival->width > inst->capabilities[FRAME_WIDTH].max || + fival->width < inst->capabilities[FRAME_WIDTH].min || + fival->height > inst->capabilities[FRAME_HEIGHT].max || + fival->height < inst->capabilities[FRAME_HEIGHT].min) { + i_vpr_e(inst, "%s: unsupported resolution %u x %u\n", __func__, + fival->width, fival->height); + return -EINVAL; + } + + /* calculate max supported fps for a given resolution */ + mbpf = NUM_MBS_PER_FRAME(fival->height, fival->width); + fps = core->capabilities[MAX_MBPS].value / mbpf; + + fival->type = V4L2_FRMIVAL_TYPE_STEPWISE; + fival->stepwise.min.numerator = 1; + fival->stepwise.min.denominator = + min_t(u32, fps, inst->capabilities[FRAME_RATE].max); + fival->stepwise.max.numerator = 1; + fival->stepwise.max.denominator = 1; + fival->stepwise.step.numerator = 1; + fival->stepwise.step.denominator = inst->capabilities[FRAME_RATE].max; + + return 0; +} + +int msm_vidc_subscribe_event(struct msm_vidc_inst *inst, + const struct v4l2_event_subscription *sub) +{ + int rc = 0; + + i_vpr_h(inst, "%s: type %d id %d\n", __func__, sub->type, sub->id); + + if (is_decode_session(inst)) + rc = msm_vdec_subscribe_event(inst, sub); + if (is_encode_session(inst)) + rc = msm_venc_subscribe_event(inst, sub); + + return rc; +} + +int msm_vidc_unsubscribe_event(struct msm_vidc_inst *inst, + const struct v4l2_event_subscription *sub) +{ + int rc = 0; + + i_vpr_h(inst, "%s: type %d id %d\n", __func__, sub->type, sub->id); + rc = v4l2_event_unsubscribe(&inst->fh, sub); + if (rc) + i_vpr_e(inst, "%s: failed, type %d id %d\n", + __func__, sub->type, sub->id); + return rc; +} + +int msm_vidc_dqevent(struct msm_vidc_inst *inst, struct v4l2_event *event) +{ + int rc = 0; + + rc = v4l2_event_dequeue(&inst->fh, event, false); + if (rc) + i_vpr_e(inst, "%s: failed\n", __func__); + return rc; +} + +void *msm_vidc_open(struct msm_vidc_core *core, u32 session_type) +{ + int rc = 0; + struct msm_vidc_inst *inst = NULL; + int i = 0; + + d_vpr_h("%s: %s\n", 
__func__, video_banner); + + if (session_type != MSM_VIDC_DECODER && + session_type != MSM_VIDC_ENCODER) { + d_vpr_e("%s: invalid session_type %d\n", + __func__, session_type); + return NULL; + } + + rc = msm_vidc_core_init(core); + if (rc) + return NULL; + + rc = msm_vidc_core_init_wait(core); + if (rc) + return NULL; + + inst = vzalloc(sizeof(*inst)); + if (!inst) { + d_vpr_e("%s: allocation failed\n", __func__); + return NULL; + } + + inst->core = core; + inst->domain = session_type; + inst->session_id = hash32_ptr(inst); + msm_vidc_update_state(inst, MSM_VIDC_OPEN, __func__); + inst->sub_state = MSM_VIDC_SUB_STATE_NONE; + strscpy(inst->sub_state_name, "SUB_STATE_NONE", sizeof(inst->sub_state_name)); + inst->active = true; + inst->ipsc_properties_set = false; + inst->opsc_properties_set = false; + inst->caps_list_prepared = false; + inst->has_bframe = false; + inst->iframe = false; + inst->initial_time_us = ktime_get_ns() / 1000; + kref_init(&inst->kref); + mutex_init(&inst->lock); + mutex_init(&inst->ctx_q_lock); + mutex_init(&inst->client_lock); + msm_vidc_update_debug_str(inst); + i_vpr_h(inst, "Opening video instance: %d\n", session_type); + + rc = msm_vidc_add_session(inst); + if (rc) { + i_vpr_e(inst, "%s: failed to add session\n", __func__); + goto fail_add_session; + } + + rc = msm_vidc_pools_init(inst); + if (rc) { + i_vpr_e(inst, "%s: failed to init pool buffers\n", __func__); + goto fail_pools_init; + } + INIT_LIST_HEAD(&inst->caps_list); + INIT_LIST_HEAD(&inst->timestamps.list); + INIT_LIST_HEAD(&inst->buffers.input.list); + INIT_LIST_HEAD(&inst->buffers.output.list); + INIT_LIST_HEAD(&inst->buffers.read_only.list); + INIT_LIST_HEAD(&inst->buffers.bin.list); + INIT_LIST_HEAD(&inst->buffers.arp.list); + INIT_LIST_HEAD(&inst->buffers.comv.list); + INIT_LIST_HEAD(&inst->buffers.non_comv.list); + INIT_LIST_HEAD(&inst->buffers.line.list); + INIT_LIST_HEAD(&inst->buffers.dpb.list); + INIT_LIST_HEAD(&inst->buffers.persist.list); + 
INIT_LIST_HEAD(&inst->buffers.vpss.list); + INIT_LIST_HEAD(&inst->mem_info.bin.list); + INIT_LIST_HEAD(&inst->mem_info.arp.list); + INIT_LIST_HEAD(&inst->mem_info.comv.list); + INIT_LIST_HEAD(&inst->mem_info.non_comv.list); + INIT_LIST_HEAD(&inst->mem_info.line.list); + INIT_LIST_HEAD(&inst->mem_info.dpb.list); + INIT_LIST_HEAD(&inst->mem_info.persist.list); + INIT_LIST_HEAD(&inst->mem_info.vpss.list); + INIT_LIST_HEAD(&inst->children_list); + INIT_LIST_HEAD(&inst->firmware_list); + INIT_LIST_HEAD(&inst->enc_input_crs); + INIT_LIST_HEAD(&inst->dmabuf_tracker); + INIT_LIST_HEAD(&inst->input_timer_list); + INIT_LIST_HEAD(&inst->buffer_stats_list); + for (i = 0; i < MAX_SIGNAL; i++) + init_completion(&inst->completions[i]); + + inst->workq = create_singlethread_workqueue("workq"); + if (!inst->workq) { + i_vpr_e(inst, "%s: create workq failed\n", __func__); + goto fail_create_workq; + } + + INIT_DELAYED_WORK(&inst->stats_work, msm_vidc_stats_handler); + + rc = msm_vidc_v4l2_fh_init(inst); + if (rc) + goto fail_eventq_init; + + rc = msm_vidc_vb2_queue_init(inst); + if (rc) + goto fail_vb2q_init; + + if (is_decode_session(inst)) + rc = msm_vdec_inst_init(inst); + else if (is_encode_session(inst)) + rc = msm_venc_inst_init(inst); + if (rc) + goto fail_inst_init; + + msm_vidc_scale_power(inst, true); + + rc = msm_vidc_session_open(inst); + if (rc) { + msm_vidc_core_deinit(core, true); + goto fail_session_open; + } + + inst->debugfs_root = + msm_vidc_debugfs_init_inst(inst, core->debugfs_root); + if (!inst->debugfs_root) + i_vpr_h(inst, "%s: debugfs not available\n", __func__); + + return inst; + +fail_session_open: + if (is_decode_session(inst)) + msm_vdec_inst_deinit(inst); + else if (is_encode_session(inst)) + msm_venc_inst_deinit(inst); +fail_inst_init: + msm_vidc_vb2_queue_deinit(inst); +fail_vb2q_init: + msm_vidc_v4l2_fh_deinit(inst); +fail_eventq_init: + destroy_workqueue(inst->workq); +fail_create_workq: + msm_vidc_pools_deinit(inst); +fail_pools_init: + 
msm_vidc_remove_session(inst); + msm_vidc_remove_dangling_session(inst); +fail_add_session: + mutex_destroy(&inst->client_lock); + mutex_destroy(&inst->ctx_q_lock); + mutex_destroy(&inst->lock); + vfree(inst); + return NULL; +} + +int msm_vidc_close(struct msm_vidc_inst *inst) +{ + struct msm_vidc_core *core; + + core = inst->core; + + client_lock(inst, __func__); + inst_lock(inst, __func__); + /* print final stats */ + msm_vidc_print_stats(inst); + /* print internal buffer memory usage stats */ + msm_vidc_print_memory_stats(inst); + msm_vidc_session_close(inst); + msm_vidc_change_state(inst, MSM_VIDC_CLOSE, __func__); + inst->sub_state = MSM_VIDC_SUB_STATE_NONE; + strscpy(inst->sub_state_name, "SUB_STATE_NONE", sizeof(inst->sub_state_name)); + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + cancel_stats_work_sync(inst); + put_inst(inst); + msm_vidc_schedule_core_deinit(core); + + return 0; +} From patchwork Fri Jul 28 13:23:16 2023 X-Patchwork-Submitter: Vikash Garodia X-Patchwork-Id: 13331921 
From: Vikash Garodia Subject: [PATCH 05/33] iris: vidc: add vb2 ops Date: Fri, 28 Jul 2023 18:53:16 +0530 Message-ID: <1690550624-14642-6-git-send-email-quic_vgarodia@quicinc.com> In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com> References: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com> 
From: Dikshita Agarwal This implements vb2 ops for streaming modes for alloc, free, map and unmap buffers. Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../platform/qcom/iris/vidc/inc/msm_vidc_vb2.h | 39 ++ .../platform/qcom/iris/vidc/src/msm_vidc_vb2.c | 605 +++++++++++++++++++++ 2 files changed, 644 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_vb2.h create mode 100644 drivers/media/platform/qcom/iris/vidc/src/msm_vidc_vb2.c diff --git a/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_vb2.h b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_vb2.h new file mode 100644 index 0000000..12378ce --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_vb2.h @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#ifndef _MSM_VIDC_VB2_H_ +#define _MSM_VIDC_VB2_H_ + +#include +#include + +#include "msm_vidc_inst.h" + +struct vb2_queue *msm_vidc_get_vb2q(struct msm_vidc_inst *inst, + u32 type, const char *func); + +/* vb2_mem_ops */ +void *msm_vb2_alloc(struct vb2_buffer *vb, struct device *dev, + unsigned long size); +void *msm_vb2_attach_dmabuf(struct vb2_buffer *vb, struct device *dev, + struct dma_buf *dbuf, unsigned long size); + +void msm_vb2_put(void *buf_priv); +int msm_vb2_mmap(void *buf_priv, struct vm_area_struct *vma); +void msm_vb2_detach_dmabuf(void *buf_priv); +int msm_vb2_map_dmabuf(void *buf_priv); +void msm_vb2_unmap_dmabuf(void *buf_priv); + +/* vb2_ops */ +int msm_vb2_queue_setup(struct vb2_queue *q, + unsigned int *num_buffers, unsigned int *num_planes, + unsigned int sizes[], struct device *alloc_devs[]); +int msm_vidc_start_streaming(struct msm_vidc_inst *inst, struct vb2_queue *q); +int msm_vidc_stop_streaming(struct msm_vidc_inst *inst, struct vb2_queue *q); +int msm_vb2_start_streaming(struct vb2_queue *q, unsigned int count); +void msm_vb2_stop_streaming(struct vb2_queue *q); +void msm_vb2_buf_queue(struct vb2_buffer *vb2); +#endif // _MSM_VIDC_VB2_H_ diff --git a/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_vb2.c b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_vb2.c new file mode 100644 index 0000000..c936d95 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_vb2.c @@ -0,0 +1,605 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include "msm_vdec.h" +#include "msm_venc.h" +#include "msm_vidc_control.h" +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_inst.h" +#include "msm_vidc_internal.h" +#include "msm_vidc_platform.h" +#include "msm_vidc_power.h" +#include "msm_vidc_vb2.h" + +struct vb2_queue *msm_vidc_get_vb2q(struct msm_vidc_inst *inst, + u32 type, const char *func) +{ + struct vb2_queue *q = NULL; + + if (type == INPUT_MPLANE) { + q = inst->bufq[INPUT_PORT].vb2q; + } else if (type == OUTPUT_MPLANE) { + q = inst->bufq[OUTPUT_PORT].vb2q; + } else { + i_vpr_e(inst, "%s: invalid buffer type %d\n", + __func__, type); + } + return q; +} + +void *msm_vb2_alloc(struct vb2_buffer *vb, struct device *dev, + unsigned long size) +{ + return (void *)0xdeadbeef; +} + +void *msm_vb2_attach_dmabuf(struct vb2_buffer *vb, struct device *dev, + struct dma_buf *dbuf, unsigned long size) +{ + struct msm_vidc_inst *inst; + struct msm_vidc_core *core; + struct msm_vidc_buffer *buf = NULL; + struct msm_vidc_buffer *ro_buf, *dummy; + + if (!vb || !dev || !dbuf || !vb->vb2_queue) { + d_vpr_e("%s: invalid params\n", __func__); + return NULL; + } + inst = vb->vb2_queue->drv_priv; + inst = get_inst_ref(g_core, inst); + if (!inst || !inst->core) { + d_vpr_e("%s: invalid params %pK\n", __func__, inst); + return NULL; + } + core = inst->core; + + buf = msm_vidc_fetch_buffer(inst, vb); + if (!buf) { + i_vpr_e(inst, "%s: failed to fetch buffer\n", __func__); + buf = NULL; + goto exit; + } + buf->inst = inst; + buf->dmabuf = dbuf; + + if (is_decode_session(inst) && is_output_buffer(buf->type)) { + list_for_each_entry_safe(ro_buf, dummy, &inst->buffers.read_only.list, list) { + if (ro_buf->dmabuf != buf->dmabuf) + continue; + print_vidc_buffer(VIDC_LOW, "low ", "attach: found ro buf", inst, ro_buf); + buf->attach = ro_buf->attach; + ro_buf->attach = NULL; + goto exit; + } + } + + buf->attach = call_mem_op(core, dma_buf_attach, core, dbuf, dev); + if 
(!buf->attach) { + buf->attach = NULL; + buf = NULL; + goto exit; + } + print_vidc_buffer(VIDC_LOW, "low ", "attach", inst, buf); + +exit: + if (!buf) + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + put_inst(inst); + return buf; +} + +void msm_vb2_put(void *buf_priv) +{ +} + +int msm_vb2_mmap(void *buf_priv, struct vm_area_struct *vma) +{ + return 0; +} + +void msm_vb2_detach_dmabuf(void *buf_priv) +{ + struct msm_vidc_buffer *vbuf = buf_priv; + struct msm_vidc_buffer *ro_buf, *dummy; + struct msm_vidc_core *core; + struct msm_vidc_inst *inst; + + if (!vbuf || !vbuf->inst) { + d_vpr_e("%s: invalid params\n", __func__); + return; + } + inst = vbuf->inst; + if (!inst || !inst->core) { + d_vpr_e("%s: invalid params %pK\n", __func__, inst); + return; + } + core = inst->core; + + if (is_decode_session(inst) && is_output_buffer(vbuf->type)) { + list_for_each_entry_safe(ro_buf, dummy, &inst->buffers.read_only.list, list) { + if (ro_buf->dmabuf != vbuf->dmabuf) + continue; + print_vidc_buffer(VIDC_LOW, "low ", "detach: found ro buf", inst, ro_buf); + ro_buf->attach = vbuf->attach; + vbuf->attach = NULL; + goto exit; + } + } + + print_vidc_buffer(VIDC_LOW, "low ", "detach", inst, vbuf); + if (vbuf->attach && vbuf->dmabuf) { + call_mem_op(core, dma_buf_detach, core, vbuf->dmabuf, vbuf->attach); + vbuf->attach = NULL; + } + +exit: + vbuf->dmabuf = NULL; + vbuf->inst = NULL; +} + +int msm_vb2_map_dmabuf(void *buf_priv) +{ + int rc = 0; + struct msm_vidc_buffer *buf = buf_priv; + struct msm_vidc_core *core; + struct msm_vidc_inst *inst; + struct msm_vidc_buffer *ro_buf, *dummy; + + if (!buf || !buf->inst) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + inst = buf->inst; + inst = get_inst_ref(g_core, inst); + if (!inst || !inst->core) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + core = inst->core; + + if (is_decode_session(inst) && is_output_buffer(buf->type)) { + list_for_each_entry_safe(ro_buf, dummy, 
&inst->buffers.read_only.list, list) { + if (ro_buf->dmabuf != buf->dmabuf) + continue; + print_vidc_buffer(VIDC_LOW, "low ", "map: found ro buf", inst, ro_buf); + buf->sg_table = ro_buf->sg_table; + buf->device_addr = ro_buf->device_addr; + ro_buf->sg_table = NULL; + goto exit; + } + } + + buf->sg_table = call_mem_op(core, dma_buf_map_attachment, core, buf->attach); + if (!buf->sg_table || !buf->sg_table->sgl) { + buf->sg_table = NULL; + rc = -ENOMEM; + goto exit; + } + buf->device_addr = sg_dma_address(buf->sg_table->sgl); + print_vidc_buffer(VIDC_HIGH, "high", "map", inst, buf); + +exit: + if (rc) + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + put_inst(inst); + return rc; +} + +void msm_vb2_unmap_dmabuf(void *buf_priv) +{ + struct msm_vidc_buffer *vbuf = buf_priv; + struct msm_vidc_buffer *ro_buf, *dummy; + struct msm_vidc_core *core; + struct msm_vidc_inst *inst; + + if (!vbuf || !vbuf->inst) { + d_vpr_e("%s: invalid params\n", __func__); + return; + } + inst = vbuf->inst; + if (!inst || !inst->core) { + d_vpr_e("%s: invalid params %pK\n", __func__, inst); + return; + } + core = inst->core; + + if (is_decode_session(inst) && is_output_buffer(vbuf->type)) { + list_for_each_entry_safe(ro_buf, dummy, &inst->buffers.read_only.list, list) { + if (ro_buf->dmabuf != vbuf->dmabuf) + continue; + print_vidc_buffer(VIDC_LOW, "low ", "unmap: found ro buf", inst, ro_buf); + ro_buf->sg_table = vbuf->sg_table; + vbuf->sg_table = NULL; + vbuf->device_addr = 0x0; + goto exit; + } + } + + print_vidc_buffer(VIDC_HIGH, "high", "unmap", inst, vbuf); + if (vbuf->attach && vbuf->sg_table) { + call_mem_op(core, dma_buf_unmap_attachment, core, vbuf->attach, vbuf->sg_table); + vbuf->sg_table = NULL; + vbuf->device_addr = 0x0; + } + +exit: + return; +} + +int msm_vb2_queue_setup(struct vb2_queue *q, + unsigned int *num_buffers, unsigned int *num_planes, + unsigned int sizes[], struct device *alloc_devs[]) +{ + int rc = 0; + struct msm_vidc_inst *inst; + struct msm_vidc_core 
*core; + int port; + struct v4l2_format *f; + enum msm_vidc_buffer_type buffer_type = 0; + enum msm_vidc_buffer_region region = MSM_VIDC_REGION_NONE; + struct context_bank_info *cb = NULL; + struct msm_vidc_buffers *buffers; + + if (!q || !num_buffers || !num_planes || + !sizes || !q->drv_priv) { + d_vpr_e("%s: invalid params, q = %pK, %pK, %pK\n", + __func__, q, num_buffers, num_planes); + return -EINVAL; + } + inst = q->drv_priv; + if (!inst || !inst->core) { + d_vpr_e("%s: invalid params %pK\n", __func__, inst); + return -EINVAL; + } + core = inst->core; + + if (is_state(inst, MSM_VIDC_STREAMING)) { + i_vpr_e(inst, "%s: invalid state %d\n", __func__, inst->state); + return -EINVAL; + } + + port = v4l2_type_to_driver_port(inst, q->type, __func__); + if (port < 0) + return -EINVAL; + + /* prepare dependency list once per session */ + if (!inst->caps_list_prepared) { + rc = msm_vidc_prepare_dependency_list(inst); + if (rc) + return rc; + inst->caps_list_prepared = true; + } + + /* adjust v4l2 properties for master port */ + if ((is_encode_session(inst) && port == OUTPUT_PORT) || + (is_decode_session(inst) && port == INPUT_PORT)) { + rc = msm_vidc_adjust_v4l2_properties(inst); + if (rc) { + i_vpr_e(inst, "%s: failed to adjust properties\n", __func__); + return rc; + } + } + + if (*num_planes && (port == INPUT_PORT || port == OUTPUT_PORT)) { + f = &inst->fmts[port]; + if (*num_planes != f->fmt.pix_mp.num_planes) { + i_vpr_e(inst, "%s: requested num_planes %d not supported %d\n", + __func__, *num_planes, f->fmt.pix_mp.num_planes); + return -EINVAL; + } + if (sizes[0] < inst->fmts[port].fmt.pix_mp.plane_fmt[0].sizeimage) { + i_vpr_e(inst, "%s: requested size %d not acceptable\n", + __func__, sizes[0]); + return -EINVAL; + } + } + + buffer_type = v4l2_type_to_driver(q->type, __func__); + if (!buffer_type) + return -EINVAL; + + rc = msm_vidc_free_buffers(inst, buffer_type); + if (rc) { + i_vpr_e(inst, "%s: failed to free buffers, type %s\n", + __func__, 
v4l2_type_name(q->type)); + return rc; + } + + buffers = msm_vidc_get_buffers(inst, buffer_type, __func__); + if (!buffers) + return -EINVAL; + + buffers->min_count = call_session_op(core, min_count, inst, buffer_type); + buffers->extra_count = call_session_op(core, extra_count, inst, buffer_type); + if (*num_buffers < buffers->min_count + buffers->extra_count) + *num_buffers = buffers->min_count + buffers->extra_count; + buffers->actual_count = *num_buffers; + *num_planes = 1; + + buffers->size = call_session_op(core, buffer_size, inst, buffer_type); + + inst->fmts[port].fmt.pix_mp.plane_fmt[0].sizeimage = buffers->size; + sizes[0] = inst->fmts[port].fmt.pix_mp.plane_fmt[0].sizeimage; + + rc = msm_vidc_allocate_buffers(inst, buffer_type, *num_buffers); + if (rc) { + i_vpr_e(inst, "%s: failed to allocate buffers, type %s\n", + __func__, v4l2_type_name(q->type)); + return rc; + } + + region = call_mem_op(core, buffer_region, inst, buffer_type); + cb = msm_vidc_get_context_bank_for_region(core, region); + if (!cb) { + d_vpr_e("%s: Failed to get context bank device\n", + __func__); + return -EIO; + } + q->dev = cb->dev; + + i_vpr_h(inst, + "queue_setup: type %s num_buffers %d sizes[0] %d cb %s\n", + v4l2_type_name(q->type), *num_buffers, sizes[0], cb->name); + return rc; +} + +int msm_vb2_start_streaming(struct vb2_queue *q, unsigned int count) +{ + int rc = 0; + struct msm_vidc_inst *inst; + + if (!q || !q->drv_priv) { + d_vpr_e("%s: invalid input, q = %pK\n", __func__, q); + return -EINVAL; + } + inst = q->drv_priv; + if (!inst || !inst->core) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + + rc = inst->event_handle(inst, MSM_VIDC_STREAMON, q); + if (rc) { + i_vpr_e(inst, "Streamon: %s failed\n", v4l2_type_name(q->type)); + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + goto exit; + } + +exit: + return rc; +} + +int msm_vidc_start_streaming(struct msm_vidc_inst *inst, struct vb2_queue *q) +{ + enum msm_vidc_buffer_type buf_type; + 
int rc = 0; + + if (q->type != INPUT_MPLANE && q->type != OUTPUT_MPLANE) { + i_vpr_e(inst, "%s: invalid type %d\n", __func__, q->type); + return -EINVAL; + } + if (!is_decode_session(inst) && !is_encode_session(inst)) { + i_vpr_e(inst, "%s: invalid session %d\n", __func__, inst->domain); + return -EINVAL; + } + i_vpr_h(inst, "Streamon: %s\n", v4l2_type_name(q->type)); + + if (!inst->once_per_session_set) { + inst->once_per_session_set = true; + rc = msm_vidc_session_set_codec(inst); + if (rc) + return rc; + + if (is_encode_session(inst)) { + rc = msm_vidc_alloc_and_queue_session_int_bufs(inst, + MSM_VIDC_BUF_ARP); + if (rc) + return rc; + } else if (is_decode_session(inst)) { + rc = msm_vidc_session_set_default_header(inst); + if (rc) + return rc; + + rc = msm_vidc_alloc_and_queue_session_int_bufs(inst, + MSM_VIDC_BUF_PERSIST); + if (rc) + return rc; + } + } + + if (is_decode_session(inst)) + inst->decode_batch.enable = msm_vidc_allow_decode_batch(inst); + + msm_vidc_allow_dcvs(inst); + msm_vidc_power_data_reset(inst); + + if (q->type == INPUT_MPLANE) { + if (is_decode_session(inst)) + rc = msm_vdec_streamon_input(inst); + else if (is_encode_session(inst)) + rc = msm_venc_streamon_input(inst); + } else if (q->type == OUTPUT_MPLANE) { + if (is_decode_session(inst)) + rc = msm_vdec_streamon_output(inst); + else if (is_encode_session(inst)) + rc = msm_venc_streamon_output(inst); + } + if (rc) + return rc; + + /* print final buffer counts & size details */ + msm_vidc_print_buffer_info(inst); + + /* print internal buffer memory usage stats */ + msm_vidc_print_memory_stats(inst); + + buf_type = v4l2_type_to_driver(q->type, __func__); + if (!buf_type) + return -EINVAL; + + /* queue pending buffers */ + rc = msm_vidc_queue_deferred_buffers(inst, buf_type); + if (rc) + return rc; + + /* initialize statistics timer(one time) */ + if (!inst->stats.time_ms) + inst->stats.time_ms = ktime_get_ns() / 1000 / 1000; + + /* schedule to print buffer statistics */ + rc = 
schedule_stats_work(inst); + if (rc) + return rc; + + if ((q->type == INPUT_MPLANE && inst->bufq[OUTPUT_PORT].vb2q->streaming) || + (q->type == OUTPUT_MPLANE && inst->bufq[INPUT_PORT].vb2q->streaming)) { + rc = msm_vidc_get_properties(inst); + if (rc) + return rc; + } + + i_vpr_h(inst, "Streamon: %s successful\n", v4l2_type_name(q->type)); + return rc; +} + +int msm_vidc_stop_streaming(struct msm_vidc_inst *inst, struct vb2_queue *q) +{ + int rc = 0; + + if (q->type != INPUT_MPLANE && q->type != OUTPUT_MPLANE) { + i_vpr_e(inst, "%s: invalid type %d\n", __func__, q->type); + return -EINVAL; + } + if (!is_decode_session(inst) && !is_encode_session(inst)) { + i_vpr_e(inst, "%s: invalid session %d\n", __func__, inst->domain); + return -EINVAL; + } + i_vpr_h(inst, "Streamoff: %s\n", v4l2_type_name(q->type)); + + if (q->type == INPUT_MPLANE) { + if (is_decode_session(inst)) + rc = msm_vdec_streamoff_input(inst); + else if (is_encode_session(inst)) + rc = msm_venc_streamoff_input(inst); + } else if (q->type == OUTPUT_MPLANE) { + if (is_decode_session(inst)) + rc = msm_vdec_streamoff_output(inst); + else if (is_encode_session(inst)) + rc = msm_venc_streamoff_output(inst); + } + if (rc) + return rc; + + /* Input port streamoff */ + if (q->type == INPUT_MPLANE) { + /* flush timestamps list */ + msm_vidc_flush_ts(inst); + } + + /* print internal buffer memory usage stats */ + msm_vidc_print_memory_stats(inst); + + i_vpr_h(inst, "Streamoff: %s successful\n", v4l2_type_name(q->type)); + return rc; +} + +void msm_vb2_stop_streaming(struct vb2_queue *q) +{ + struct msm_vidc_inst *inst; + int rc = 0; + + if (!q || !q->drv_priv) { + d_vpr_e("%s: invalid input, q = %pK\n", __func__, q); + return; + } + inst = q->drv_priv; + if (!inst) { + d_vpr_e("%s: invalid params\n", __func__); + return; + } + + rc = inst->event_handle(inst, MSM_VIDC_STREAMOFF, q); + if (rc) { + i_vpr_e(inst, "Streamoff: %s failed\n", v4l2_type_name(q->type)); + msm_vidc_change_state(inst, MSM_VIDC_ERROR, 
__func__); + } +} + +void msm_vb2_buf_queue(struct vb2_buffer *vb2) +{ + int rc = 0; + struct msm_vidc_inst *inst; + struct dma_buf *dbuf = NULL; + struct msm_vidc_core *core; + u64 ktime_ns = ktime_get_ns(); + + if (!vb2) { + d_vpr_e("%s: invalid params\n", __func__); + return; + } + + inst = vb2_get_drv_priv(vb2->vb2_queue); + if (!inst || !inst->core) { + d_vpr_e("%s: invalid params\n", __func__); + return; + } + core = inst->core; + + if (!vb2->planes[0].bytesused) { + if (vb2->type == INPUT_MPLANE) { + /* Expecting non-zero filledlen on INPUT port */ + i_vpr_e(inst, + "%s: zero bytesused input buffer not supported\n", __func__); + rc = -EINVAL; + goto exit; + } + } + + inst->last_qbuf_time_ns = ktime_ns; + + if (vb2->type == INPUT_MPLANE) { + rc = msm_vidc_update_input_rate(inst, div_u64(ktime_ns, 1000)); + if (rc) + goto exit; + } + + /* + * Userspace may close fd(from other thread), before driver attempts to call + * dma_buf_get() in qbuf(FTB) sequence(for decoder output buffer) which may + * lead to different kind of security issues. Add check to compare if dma_buf + * address is matching with driver dma_buf_get returned address for that fd. 
+ */ + + dbuf = call_mem_op(core, dma_buf_get, inst, vb2->planes[0].m.fd); + if (dbuf != vb2->planes[0].dbuf) { + i_vpr_e(inst, "%s: invalid dmabuf address 0x%p expected 0x%p\n", + __func__, dbuf, vb2->planes[0].dbuf); + rc = -EINVAL; + goto exit; + } + + if (is_decode_session(inst)) + rc = msm_vdec_qbuf(inst, vb2); + else if (is_encode_session(inst)) + rc = msm_venc_qbuf(inst, vb2); + else + rc = -EINVAL; + if (rc) { + print_vb2_buffer("failed vb2-qbuf", inst, vb2); + goto exit; + } + +exit: + if (dbuf) + call_mem_op(core, dma_buf_put, inst, dbuf); + + if (rc) { + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + vb2_buffer_done(vb2, VB2_BUF_STATE_ERROR); + } +}

From patchwork Fri Jul 28 13:23:17 2023
X-Patchwork-Submitter: Vikash Garodia
X-Patchwork-Id: 13331922
From: Vikash Garodia
Subject: [PATCH 06/33] iris: vidc: define video core and instance context
Date: Fri, 28 Jul 2023 18:53:17 +0530
Message-ID: <1690550624-14642-7-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org

This implements video core and instance context structure and associated core and session ops.

Signed-off-by: Dikshita Agarwal
Signed-off-by: Vikash Garodia
---
 .../platform/qcom/iris/vidc/inc/msm_vidc_core.h | 165 ++++++++++++++++
 .../platform/qcom/iris/vidc/inc/msm_vidc_inst.h | 207 +++++++++++++++++++++
 2 files changed, 372 insertions(+)
 create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_core.h
 create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_inst.h

diff --git a/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_core.h b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_core.h new file mode 100644 index 0000000..cd8804ff --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_core.h @@ -0,0 +1,165 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef _MSM_VIDC_CORE_H_ +#define _MSM_VIDC_CORE_H_ + +#include + +#include "msm_vidc_internal.h" +#include "msm_vidc_state.h" +#include "resources.h" +#include "venus_hfi_queue.h" + +#define MAX_EVENTS 30 + +#define call_iris_op(d, op, ...) \ + (((d) && (d)->iris_ops && (d)->iris_ops->op) ?
\ + ((d)->iris_ops->op(__VA_ARGS__)) : 0) + +struct msm_vidc_iris_ops { + int (*boot_firmware)(struct msm_vidc_core *core); + int (*raise_interrupt)(struct msm_vidc_core *core); + int (*clear_interrupt)(struct msm_vidc_core *core); + int (*prepare_pc)(struct msm_vidc_core *core); + int (*power_on)(struct msm_vidc_core *core); + int (*power_off)(struct msm_vidc_core *core); + int (*watchdog)(struct msm_vidc_core *core, u32 intr_status); +}; + +struct msm_vidc_mem_addr { + u32 align_device_addr; + u8 *align_virtual_addr; + u32 mem_size; + struct msm_vidc_mem mem; +}; + +struct msm_vidc_iface_q_info { + void *q_hdr; + struct msm_vidc_mem_addr q_array; +}; + +struct msm_video_device { + enum msm_vidc_domain_type type; + struct video_device vdev; + struct v4l2_m2m_dev *m2m_dev; +}; + +struct msm_vidc_core_power { + u64 clk_freq; + u64 bw_ddr; + u64 bw_llcc; +}; + +/** + * struct msm_vidc_core - holds core parameters valid for all instances + * + * @pdev: reference to platform device structure + * @vdev: a reference to video device structure for encoder & decoder instances + * @v4l2_dev: a holder for v4l2 device structure + * @instances: a list_head of all instances + * @dangling_instances: a list_head of all dangling instances + * @debugfs_parent: debugfs node for msm_vidc + * @debugfs_root: debugfs node for core info + * @fw_version: a holder for fw version + * @state: a structure of core states + * @state_handle: a handler for core state change + * @sub_state: enumeration of core substate + * @sub_state_name: holder for core substate name + * @lock: a lock for this structure + * @resource: a structure for core resources + * @platform: a structure for platform data + * @intr_status: interrupt status + * @spur_count: counter for spurious interrupts + * @reg_count: counter for interrupts + * @enc_codecs_count: encoder codec count + * @dec_codecs_count: decoder codec count + * @capabilities: an array for supported core capabilities + * @inst_caps: a pointer to supported instance capabilities + * @sfr: SFR register memory + * @iface_q_table: Interface queue table memory + * @iface_queues: an array of interface queue info + * @pm_work: delayed work to handle power collapse + * @pm_workq: workqueue for power collapse work + * @batch_workq: workqueue for batching + * @fw_unload_work: delayed work for fw unload + * @power: a structure for core power + * @skip_pc_count: a counter for skipped power collapse + * @last_packet_type: holder for last packet type info + * @packet: pointer to packet from driver to fw + * @packet_size: size of packet + * @response_packet: a pointer to response packet from fw to driver + * @v4l2_file_ops: a pointer to v4l2 file ops + * @v4l2_ioctl_ops_enc: a pointer to v4l2 ioctl ops for encoder + * @v4l2_ioctl_ops_dec: a pointer to v4l2 ioctl ops for decoder + * @v4l2_ctrl_ops: a pointer to v4l2 control ops + * @vb2_ops: a pointer to vb2 ops + * @vb2_mem_ops: a pointer to vb2 memory ops + * @v4l2_m2m_ops: a pointer to v4l2 m2m ops + * @iris_ops: a pointer to iris ops + * @res_ops: a pointer to resource management ops + * @session_ops: a pointer to session level ops + * @mem_ops: a pointer to memory management ops + * @header_id: id of packet header + * @packet_id: id of packet + * @sys_init_id: id of sys init packet + */ + +struct msm_vidc_core { + struct platform_device *pdev; + struct msm_video_device vdev[2]; + struct v4l2_device v4l2_dev; + struct list_head instances; + struct list_head dangling_instances; + struct dentry *debugfs_parent; + struct dentry *debugfs_root; + char fw_version[MAX_NAME_LENGTH]; + enum msm_vidc_core_state state; + int (*state_handle)(struct msm_vidc_core *core, + enum msm_vidc_core_event_type type, + struct msm_vidc_event_data *data); + enum msm_vidc_core_sub_state sub_state; + char sub_state_name[MAX_NAME_LENGTH]; + struct mutex lock; /* lock for core structure */ + struct msm_vidc_resource *resource; + struct msm_vidc_platform *platform; + u32 intr_status; + u32 spur_count; + u32
reg_count; + u32 enc_codecs_count; + u32 dec_codecs_count; + struct msm_vidc_core_capability capabilities[CORE_CAP_MAX + 1]; + struct msm_vidc_inst_capability *inst_caps; + struct msm_vidc_mem_addr sfr; + struct msm_vidc_mem_addr iface_q_table; + struct msm_vidc_iface_q_info iface_queues[VIDC_IFACEQ_NUMQ]; + struct delayed_work pm_work; + struct workqueue_struct *pm_workq; + struct workqueue_struct *batch_workq; + struct delayed_work fw_unload_work; + struct msm_vidc_core_power power; + u32 skip_pc_count; + u32 last_packet_type; + u8 *packet; + u32 packet_size; + u8 *response_packet; + struct v4l2_file_operations *v4l2_file_ops; + const struct v4l2_ioctl_ops *v4l2_ioctl_ops_enc; + const struct v4l2_ioctl_ops *v4l2_ioctl_ops_dec; + const struct v4l2_ctrl_ops *v4l2_ctrl_ops; + const struct vb2_ops *vb2_ops; + struct vb2_mem_ops *vb2_mem_ops; + struct v4l2_m2m_ops *v4l2_m2m_ops; + struct msm_vidc_iris_ops *iris_ops; + const struct msm_vidc_resources_ops *res_ops; + struct msm_vidc_session_ops *session_ops; + const struct msm_vidc_memory_ops *mem_ops; + u32 header_id; + u32 packet_id; + u32 sys_init_id; +}; + +#endif // _MSM_VIDC_CORE_H_ diff --git a/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_inst.h b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_inst.h new file mode 100644 index 0000000..96c8903 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_inst.h @@ -0,0 +1,207 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef _MSM_VIDC_INST_H_ +#define _MSM_VIDC_INST_H_ + +#include "hfi_property.h" +#include "msm_vidc_internal.h" +#include "msm_vidc_memory.h" +#include "msm_vidc_state.h" + +#define call_session_op(c, op, ...) \ + (((c) && (c)->session_ops && (c)->session_ops->op) ? 
\ + ((c)->session_ops->op(__VA_ARGS__)) : 0) + +struct msm_vidc_session_ops { + u64 (*calc_freq)(struct msm_vidc_inst *inst, u32 data_size); + int (*calc_bw)(struct msm_vidc_inst *inst, + struct vidc_bus_vote_data *vote_data); + int (*decide_work_route)(struct msm_vidc_inst *inst); + int (*decide_work_mode)(struct msm_vidc_inst *inst); + int (*decide_quality_mode)(struct msm_vidc_inst *inst); + int (*buffer_size)(struct msm_vidc_inst *inst, enum msm_vidc_buffer_type type); + int (*min_count)(struct msm_vidc_inst *inst, enum msm_vidc_buffer_type type); + int (*extra_count)(struct msm_vidc_inst *inst, enum msm_vidc_buffer_type type); + int (*ring_buf_count)(struct msm_vidc_inst *inst, u32 data_size); +}; + +struct msm_vidc_mem_list_info { + struct msm_vidc_mem_list bin; + struct msm_vidc_mem_list arp; + struct msm_vidc_mem_list comv; + struct msm_vidc_mem_list non_comv; + struct msm_vidc_mem_list line; + struct msm_vidc_mem_list dpb; + struct msm_vidc_mem_list persist; + struct msm_vidc_mem_list vpss; +}; + +struct msm_vidc_buffers_info { + struct msm_vidc_buffers input; + struct msm_vidc_buffers output; + struct msm_vidc_buffers read_only; + struct msm_vidc_buffers bin; + struct msm_vidc_buffers arp; + struct msm_vidc_buffers comv; + struct msm_vidc_buffers non_comv; + struct msm_vidc_buffers line; + struct msm_vidc_buffers dpb; + struct msm_vidc_buffers persist; + struct msm_vidc_buffers vpss; +}; + +struct buf_queue { + struct vb2_queue *vb2q; +}; + +/** + * struct msm_vidc_inst - holds per instance parameters + * + * @list: used to attach an instance to the core + * @lock: lock for this structure + * @ctx_q_lock: lock to serialize ioctl calls related to queues + * @client_lock: lock to serialize ioctls + * @state: instance state + * @event_handle: handler for different v4l2 ioctls + * @sub_state: substate of instance + * @sub_state_name: substate name + * @domain: domain type: encoder or decoder + * @codec: codec type + * @core: pointer to core structure + * @kref: instance reference + * @session_id: id of current session + * @debug_str: debug string + * @packet: HFI packet + * @packet_size: HFI packet size + * @fmts: structure of v4l2_format + * @ctrl_handler: reference of v4l2 ctrl handler + * @fh: reference of v4l2 file handler + * @m2m_dev: m2m device handle + * @m2m_ctx: m2m device context + * @num_ctrls: supported number of controls + * @hfi_rc_type: type of HFI rate control + * @hfi_layer_type: type of HFI layer encoding + * @bufq: array of vb2 queue + * @crop: structure of crop info + * @compose: structure of compose info + * @power: structure of power info + * @bus_data: structure of bus data + * @pool: array of memory pool of buffers + * @buffers: structure of buffer info + * @mem_info: structure of memory info + * @timestamps: structure of timestamp related info + * @subcr_params: array of subscription params which driver subscribes to fw + * @hfi_frame_info: structure of frame info + * @decode_batch: structure of decode batch + * @decode_vpp_delay: structure for vpp delay related info + * @session_idle: structure of idle session related info + * @stats_work: delayed work for buffer stats + * @workq: pointer to workqueue + * @enc_input_crs: list head of input compression ratios + * @dmabuf_tracker: list head of dmabuf tracker + * @input_timer_list: list head of input timer + * @caps_list: list head of capability + * @children_list: list head of children list of caps + * @firmware_list: list head of fw list of cap which will be set to cap + * @buffer_stats_list: list head of buffer stats + * @once_per_session_set: boolean to set once per session property + * @ipsc_properties_set: boolean to set ipsc properties to fw + * @opsc_properties_set: boolean to set opsc properties to fw + * @caps_list_prepared: boolean to prepare capability list + * @debugfs_root: root of debugfs + * @debug_count: count for ETBs, EBDs, FTBs and FBDs + * @stats: structure for bw stats + * @capabilities: array of supported instance capabilities + * @completions: structure of signal completions + * @active: boolean for active instances + * @last_qbuf_time_ns: time of last qbuf to driver + * @initial_time_us: start timestamp + * @max_input_data_size: max size of input data + * @dpb_list_payload: array of dpb buffers + * @input_dpb_list_enabled: boolean for input dpb buffer list + * @output_dpb_list_enabled: boolean for output dpb buffer list + * @max_rate: max input rate + * @has_bframe: boolean for B frame + * @ir_enabled: boolean for intra refresh + * @iframe: boolean for I frame + * @fw_min_count: minimum count of buffers needed by fw + */ + +struct msm_vidc_inst { + struct list_head list; + struct mutex lock; /* instance lock */ + /* lock to serialize IOCTL calls related to queues */ + struct mutex ctx_q_lock; + struct mutex client_lock; /* lock to serialize IOCTLs */ + enum msm_vidc_state state; + int (*event_handle)(struct msm_vidc_inst *inst, + enum msm_vidc_event event, void *data); + enum msm_vidc_sub_state sub_state; + char sub_state_name[MAX_NAME_LENGTH]; + enum msm_vidc_domain_type domain; + enum msm_vidc_codec_type codec; + void *core; + struct kref kref; + u32 session_id; + u8 debug_str[24]; + void *packet; + u32 packet_size; + struct v4l2_format fmts[MAX_PORT]; + struct v4l2_ctrl_handler ctrl_handler; + struct v4l2_fh fh; + struct v4l2_m2m_dev *m2m_dev; + struct v4l2_m2m_ctx *m2m_ctx; + u32 num_ctrls; + enum hfi_rate_control hfi_rc_type; + enum hfi_layer_encoding_type hfi_layer_type; + struct buf_queue bufq[MAX_PORT]; + struct msm_vidc_rectangle crop; + struct msm_vidc_rectangle compose; + struct msm_vidc_power power; + struct vidc_bus_vote_data bus_data; + struct msm_memory_pool pool[MSM_MEM_POOL_MAX]; + struct msm_vidc_buffers_info buffers; + struct msm_vidc_mem_list_info mem_info; + struct msm_vidc_timestamps timestamps; + struct msm_vidc_subscription_params subcr_params[MAX_PORT]; + struct msm_vidc_hfi_frame_info hfi_frame_info; + struct msm_vidc_decode_batch decode_batch; +
struct msm_vidc_decode_vpp_delay decode_vpp_delay; + struct msm_vidc_session_idle session_idle; + struct delayed_work stats_work; + struct workqueue_struct *workq; + struct list_head enc_input_crs; + struct list_head dmabuf_tracker; /* struct msm_memory_dmabuf */ + struct list_head input_timer_list; /* struct msm_vidc_input_timer */ + struct list_head caps_list; + struct list_head children_list; /* struct msm_vidc_inst_cap_entry */ + struct list_head firmware_list; /* struct msm_vidc_inst_cap_entry */ + struct list_head buffer_stats_list; /* struct msm_vidc_buffer_stats */ + bool once_per_session_set; + bool ipsc_properties_set; + bool opsc_properties_set; + bool caps_list_prepared; + struct dentry *debugfs_root; + struct debug_buf_count debug_count; + struct msm_vidc_statistics stats; + struct msm_vidc_inst_cap capabilities[INST_CAP_MAX + 1]; + struct completion completions[MAX_SIGNAL]; + bool active; + u64 last_qbuf_time_ns; + u64 initial_time_us; + u32 max_input_data_size; + u32 dpb_list_payload[MAX_DPB_LIST_ARRAY_SIZE]; + bool input_dpb_list_enabled; + bool output_dpb_list_enabled; + u32 max_rate; + bool has_bframe; + bool ir_enabled; + bool iframe; + u32 fw_min_count; +}; + +#endif // _MSM_VIDC_INST_H_

From patchwork Fri Jul 28 13:23:18 2023
X-Patchwork-Submitter: Vikash Garodia
X-Patchwork-Id: 13331923
From: Vikash Garodia
Subject: [PATCH 07/33] iris: iris: add video encoder files
Date: Fri, 28 Jul 2023 18:53:18 +0530
Message-ID: <1690550624-14642-8-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org

This implements video encoder functionalities of the driver.
Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../media/platform/qcom/iris/vidc/inc/msm_venc.h | 34 + .../media/platform/qcom/iris/vidc/src/msm_venc.c | 1484 ++++++++++++++++++++ 2 files changed, 1518 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/msm_venc.h create mode 100644 drivers/media/platform/qcom/iris/vidc/src/msm_venc.c diff --git a/drivers/media/platform/qcom/iris/vidc/inc/msm_venc.h b/drivers/media/platform/qcom/iris/vidc/inc/msm_venc.h new file mode 100644 index 0000000..bb0a395 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/msm_venc.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef _MSM_VENC_H_ +#define _MSM_VENC_H_ + +#include "msm_vidc_core.h" +#include "msm_vidc_inst.h" + +int msm_venc_streamoff_input(struct msm_vidc_inst *inst); +int msm_venc_streamon_input(struct msm_vidc_inst *inst); +int msm_venc_streamoff_output(struct msm_vidc_inst *inst); +int msm_venc_streamon_output(struct msm_vidc_inst *inst); +int msm_venc_qbuf(struct msm_vidc_inst *inst, struct vb2_buffer *vb2); +int msm_venc_stop_cmd(struct msm_vidc_inst *inst); +int msm_venc_start_cmd(struct msm_vidc_inst *inst); +int msm_venc_try_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f); +int msm_venc_s_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f); +int msm_venc_s_fmt_output(struct msm_vidc_inst *inst, struct v4l2_format *f); +int msm_venc_g_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f); +int msm_venc_s_selection(struct msm_vidc_inst *inst, struct v4l2_selection *s); +int msm_venc_g_selection(struct msm_vidc_inst *inst, struct v4l2_selection *s); +int msm_venc_s_param(struct msm_vidc_inst *inst, struct v4l2_streamparm *s_parm); +int msm_venc_g_param(struct msm_vidc_inst *inst, struct v4l2_streamparm *s_parm); +int 
msm_venc_subscribe_event(struct msm_vidc_inst *inst, + const struct v4l2_event_subscription *sub); +int msm_venc_enum_fmt(struct msm_vidc_inst *inst, struct v4l2_fmtdesc *f); +int msm_venc_inst_init(struct msm_vidc_inst *inst); +int msm_venc_inst_deinit(struct msm_vidc_inst *inst); + +#endif // _MSM_VENC_H_ diff --git a/drivers/media/platform/qcom/iris/vidc/src/msm_venc.c b/drivers/media/platform/qcom/iris/vidc/src/msm_venc.c new file mode 100644 index 0000000..4962716 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/msm_venc.c @@ -0,0 +1,1484 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include "hfi_packet.h" +#include "msm_media_info.h" +#include "msm_venc.h" +#include "msm_vidc_control.h" +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_inst.h" +#include "msm_vidc_internal.h" +#include "msm_vidc_platform.h" +#include "msm_vidc_power.h" +#include "venus_hfi.h" + +static const u32 msm_venc_input_set_prop[] = { + HFI_PROP_COLOR_FORMAT, + HFI_PROP_RAW_RESOLUTION, + HFI_PROP_CROP_OFFSETS, + HFI_PROP_LINEAR_STRIDE_SCANLINE, + HFI_PROP_SIGNAL_COLOR_INFO, +}; + +static const u32 msm_venc_output_set_prop[] = { + HFI_PROP_BITSTREAM_RESOLUTION, + HFI_PROP_CROP_OFFSETS, +}; + +static const u32 msm_venc_input_subscribe_for_properties[] = { + HFI_PROP_NO_OUTPUT, +}; + +static const u32 msm_venc_output_subscribe_for_properties[] = { + HFI_PROP_PICTURE_TYPE, + HFI_PROP_BUFFER_MARK, + HFI_PROP_WORST_COMPRESSION_RATIO, +}; + +static const u32 msm_venc_output_internal_buffer_type[] = { + MSM_VIDC_BUF_BIN, + MSM_VIDC_BUF_COMV, + MSM_VIDC_BUF_NON_COMV, + MSM_VIDC_BUF_LINE, + MSM_VIDC_BUF_DPB, +}; + +static const u32 msm_venc_input_internal_buffer_type[] = { + MSM_VIDC_BUF_VPSS, +}; + +struct msm_venc_prop_type_handle { + u32 type; + int (*handle)(struct 
msm_vidc_inst *inst, enum msm_vidc_port_type port); +}; + +static int msm_venc_codec_change(struct msm_vidc_inst *inst, u32 v4l2_codec) +{ + int rc = 0; + bool session_init = false; + + if (!inst->codec) + session_init = true; + + if (inst->codec && inst->fmts[OUTPUT_PORT].fmt.pix_mp.pixelformat == v4l2_codec) + return 0; + + i_vpr_h(inst, "%s: codec changed from %s to %s\n", + __func__, v4l2_pixelfmt_name(inst, inst->fmts[OUTPUT_PORT].fmt.pix_mp.pixelformat), + v4l2_pixelfmt_name(inst, v4l2_codec)); + + inst->codec = v4l2_codec_to_driver(inst, v4l2_codec, __func__); + if (!inst->codec) { + i_vpr_e(inst, "%s: invalid codec %#x\n", __func__, v4l2_codec); + rc = -EINVAL; + goto exit; + } + + inst->fmts[OUTPUT_PORT].fmt.pix_mp.pixelformat = v4l2_codec; + rc = msm_vidc_update_debug_str(inst); + if (rc) + goto exit; + + rc = msm_vidc_get_inst_capability(inst); + if (rc) + goto exit; + + rc = msm_vidc_ctrl_handler_init(inst, session_init); + if (rc) + goto exit; + + rc = msm_vidc_update_buffer_count(inst, INPUT_PORT); + if (rc) + goto exit; + + rc = msm_vidc_update_buffer_count(inst, OUTPUT_PORT); + if (rc) + goto exit; + +exit: + return rc; +} + +/* todo: add logs for each property once finalised */ +static int msm_venc_set_colorformat(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + u32 pixelformat; + enum msm_vidc_colorformat_type colorformat; + u32 hfi_colorformat; + + if (port != INPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + return -EINVAL; + } + + pixelformat = inst->fmts[INPUT_PORT].fmt.pix_mp.pixelformat; + colorformat = v4l2_colorformat_to_driver(inst, pixelformat, __func__); + if (!(colorformat & inst->capabilities[PIX_FMTS].step_or_mask)) { + i_vpr_e(inst, "%s: invalid pixelformat %s\n", + __func__, v4l2_pixelfmt_name(inst, pixelformat)); + return -EINVAL; + } + + hfi_colorformat = get_hfi_colorformat(inst, colorformat); + i_vpr_h(inst, "%s: hfi colorformat: %#x", __func__, + hfi_colorformat); + rc = 
venus_hfi_session_property(inst, + HFI_PROP_COLOR_FORMAT, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + HFI_PAYLOAD_U32_ENUM, + &hfi_colorformat, + sizeof(u32)); + if (rc) + return rc; + return 0; +} + +static int msm_venc_set_stride_scanline(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + u32 color_format, stride_y, scanline_y; + u32 stride_uv = 0, scanline_uv = 0; + u32 payload[2]; + + if (port != INPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + return -EINVAL; + } + + color_format = inst->capabilities[PIX_FMTS].value; + if (!is_linear_colorformat(color_format)) { + i_vpr_h(inst, + "%s: not a linear color fmt, property is not set\n", + __func__); + return 0; + } + + if (is_rgba_colorformat(color_format)) { + stride_y = video_rgb_stride_pix(color_format, + inst->fmts[INPUT_PORT].fmt.pix_mp.width); + scanline_y = video_rgb_scanlines(color_format, + inst->fmts[INPUT_PORT].fmt.pix_mp.height); + } else { + stride_y = video_y_stride_pix(color_format, + inst->fmts[INPUT_PORT].fmt.pix_mp.width); + scanline_y = video_y_scanlines(color_format, + inst->fmts[INPUT_PORT].fmt.pix_mp.height); + } + if (color_format == MSM_VIDC_FMT_NV12 || + color_format == MSM_VIDC_FMT_P010 || + color_format == MSM_VIDC_FMT_NV21) { + stride_uv = stride_y; + scanline_uv = scanline_y / 2; + } + + payload[0] = stride_y << 16 | scanline_y; + payload[1] = stride_uv << 16 | scanline_uv; + i_vpr_h(inst, + "%s: y: stride %d scanline %d, uv: stride %d scanline_uv %d", + __func__, + stride_y, scanline_y, stride_uv, scanline_uv); + rc = venus_hfi_session_property(inst, + HFI_PROP_LINEAR_STRIDE_SCANLINE, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + HFI_PAYLOAD_64_PACKED, + &payload, + sizeof(u64)); + if (rc) + return rc; + + return 0; +} + +static int msm_venc_set_raw_resolution(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + u32 resolution; + + if (port != INPUT_PORT) { + i_vpr_e(inst, "%s: invalid port 
%d\n", __func__, port); + return -EINVAL; + } + + resolution = (inst->fmts[port].fmt.pix_mp.width << 16) | + inst->fmts[port].fmt.pix_mp.height; + i_vpr_h(inst, "%s: width: %d height: %d\n", __func__, + inst->fmts[port].fmt.pix_mp.width, inst->fmts[port].fmt.pix_mp.height); + rc = venus_hfi_session_property(inst, + HFI_PROP_RAW_RESOLUTION, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + HFI_PAYLOAD_32_PACKED, + &resolution, + sizeof(u32)); + if (rc) + return rc; + return 0; +} + +static int msm_venc_set_bitstream_resolution(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + u32 resolution; + + if (port != OUTPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + return -EINVAL; + } + + resolution = (inst->fmts[port].fmt.pix_mp.width << 16) | + inst->fmts[port].fmt.pix_mp.height; + i_vpr_h(inst, "%s: width: %d height: %d\n", __func__, + inst->fmts[port].fmt.pix_mp.width, + inst->fmts[port].fmt.pix_mp.height); + rc = venus_hfi_session_property(inst, + HFI_PROP_BITSTREAM_RESOLUTION, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + HFI_PAYLOAD_32_PACKED, + &resolution, + sizeof(u32)); + if (rc) + return rc; + return 0; +} + +static int msm_venc_set_crop_offsets(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + u32 left_offset, top_offset, right_offset, bottom_offset; + u32 crop[2] = {0}; + u32 width, height; + + if (port != OUTPUT_PORT && port != INPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + return -EINVAL; + } + + if (port == INPUT_PORT) { + left_offset = inst->crop.left; + top_offset = inst->crop.top; + width = inst->crop.width; + height = inst->crop.height; + } else { + left_offset = inst->compose.left; + top_offset = inst->compose.top; + width = inst->compose.width; + height = inst->compose.height; + if (is_rotation_90_or_270(inst)) { + width = inst->compose.height; + height = inst->compose.width; + } + } + + right_offset = 
(inst->fmts[port].fmt.pix_mp.width - width); + bottom_offset = (inst->fmts[port].fmt.pix_mp.height - height); + + crop[0] = left_offset << 16 | top_offset; + crop[1] = right_offset << 16 | bottom_offset; + i_vpr_h(inst, + "%s: l_off: %d t_off: %d r_off: %d b_off: %d", + __func__, + left_offset, top_offset, right_offset, bottom_offset); + + rc = venus_hfi_session_property(inst, + HFI_PROP_CROP_OFFSETS, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + HFI_PAYLOAD_64_PACKED, + &crop, + sizeof(u64)); + if (rc) + return rc; + return 0; +} + +static int msm_venc_set_colorspace(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + u32 primaries = MSM_VIDC_PRIMARIES_RESERVED; + u32 matrix_coeff = MSM_VIDC_MATRIX_COEFF_RESERVED; + u32 transfer_char = MSM_VIDC_TRANSFER_RESERVED; + u32 full_range = 0; + u32 colour_description_present_flag = 0; + u32 video_signal_type_present_flag = 0, payload = 0; + /* Unspecified video format */ + u32 video_format = 5; + struct v4l2_format *input_fmt; + u32 pix_fmt; + struct v4l2_pix_format_mplane *pixmp = NULL; + + if (port != INPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + return -EINVAL; + } + + if (inst->capabilities[SIGNAL_COLOR_INFO].flags & CAP_FLAG_CLIENT_SET) { + i_vpr_h(inst, "%s: client configured colorspace via control\n", __func__); + return 0; + } + + input_fmt = &inst->fmts[INPUT_PORT]; + pixmp = &inst->fmts[port].fmt.pix_mp; + pix_fmt = v4l2_colorformat_to_driver(inst, + input_fmt->fmt.pix_mp.pixelformat, __func__); + if (pixmp->colorspace != V4L2_COLORSPACE_DEFAULT || + pixmp->ycbcr_enc != V4L2_YCBCR_ENC_DEFAULT || + pixmp->xfer_func != V4L2_XFER_FUNC_DEFAULT) { + colour_description_present_flag = 1; + video_signal_type_present_flag = 1; + primaries = v4l2_color_primaries_to_driver(inst, + pixmp->colorspace, __func__); + matrix_coeff = v4l2_matrix_coeff_to_driver(inst, + pixmp->ycbcr_enc, __func__); + transfer_char = v4l2_transfer_char_to_driver(inst, + pixmp->xfer_func, 
__func__); + } else if (is_rgba_colorformat(pix_fmt)) { + colour_description_present_flag = 1; + video_signal_type_present_flag = 1; + primaries = MSM_VIDC_PRIMARIES_BT709; + matrix_coeff = MSM_VIDC_MATRIX_COEFF_BT709; + transfer_char = MSM_VIDC_TRANSFER_BT709; + full_range = 0; + } + + if (pixmp->quantization != + V4L2_QUANTIZATION_DEFAULT) { + video_signal_type_present_flag = 1; + full_range = pixmp->quantization == + V4L2_QUANTIZATION_FULL_RANGE ? 1 : 0; + } + + payload = (matrix_coeff & 0xFF) | + ((transfer_char << 8) & 0xFF00) | + ((primaries << 16) & 0xFF0000) | + ((colour_description_present_flag << 24) & 0x1000000) | + ((full_range << 25) & 0x2000000) | + ((video_format << 26) & 0x1C000000) | + ((video_signal_type_present_flag << 29) & 0x20000000); + i_vpr_h(inst, "%s: color info: %#x\n", __func__, payload); + rc = venus_hfi_session_property(inst, + HFI_PROP_SIGNAL_COLOR_INFO, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + HFI_PAYLOAD_32_PACKED, + &payload, + sizeof(u32)); + if (rc) + return rc; + return 0; +} + +static int msm_venc_set_quality_mode(struct msm_vidc_inst *inst) +{ + int rc = 0; + struct msm_vidc_core *core = inst->core; + u32 mode; + + rc = call_session_op(core, decide_quality_mode, inst); + if (rc) { + i_vpr_e(inst, "%s: decide_quality_mode failed\n", + __func__); + return -EINVAL; + } + + mode = inst->capabilities[QUALITY_MODE].value; + i_vpr_h(inst, "%s: quality_mode: %u\n", __func__, mode); + rc = venus_hfi_session_property(inst, + HFI_PROP_QUALITY_MODE, + HFI_HOST_FLAGS_NONE, + HFI_PORT_BITSTREAM, + HFI_PAYLOAD_U32_ENUM, + &mode, + sizeof(u32)); + if (rc) + return rc; + return 0; +} + +static int msm_venc_set_ring_buffer_count(struct msm_vidc_inst *inst) +{ + int rc = 0; + struct msm_vidc_inst_cap *cap; + + cap = &inst->capabilities[ENC_RING_BUFFER_COUNT]; + + if (!cap->set) + return 0; + + rc = cap->set(inst, ENC_RING_BUFFER_COUNT); + if (rc) { + i_vpr_e(inst, "%s: set cap failed\n", __func__); + return rc; + } + + return 0; +} + 
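As a reading aid for this hunk: the HFI_PROP_SIGNAL_COLOR_INFO payload built in msm_venc_set_colorspace() is a single u32 with matrix coefficients in bits 0-7, transfer characteristics in bits 8-15, colour primaries in bits 16-23, then single-bit/3-bit flags above. The packing can be restated as a standalone sketch (pack_signal_color_info() is an illustrative helper, not a driver function; the masks and shifts are copied from the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative re-statement of the signal color info bit layout:
 *   bits  0..7   matrix coefficients
 *   bits  8..15  transfer characteristics
 *   bits 16..23  colour primaries
 *   bit  24      colour_description_present_flag
 *   bit  25      full range flag
 *   bits 26..28  video format (5 = unspecified)
 *   bit  29      video_signal_type_present_flag
 */
static uint32_t pack_signal_color_info(uint32_t matrix_coeff, uint32_t transfer_char,
				       uint32_t primaries, uint32_t desc_present,
				       uint32_t full_range, uint32_t video_format,
				       uint32_t signal_type_present)
{
	return (matrix_coeff & 0xFF) |
	       ((transfer_char << 8) & 0xFF00) |
	       ((primaries << 16) & 0xFF0000) |
	       ((desc_present << 24) & 0x1000000) |
	       ((full_range << 25) & 0x2000000) |
	       ((video_format << 26) & 0x1C000000) |
	       ((signal_type_present << 29) & 0x20000000);
}
```

With all fields zero and the default "unspecified" video format of 5, only bits 28 and 26 are set (0x14000000), which is why untagged input still carries a well-formed payload.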
+static int msm_venc_set_input_properties(struct msm_vidc_inst *inst) +{ + int i, j, rc = 0; + static const struct msm_venc_prop_type_handle prop_type_handle_arr[] = { + {HFI_PROP_COLOR_FORMAT, msm_venc_set_colorformat }, + {HFI_PROP_RAW_RESOLUTION, msm_venc_set_raw_resolution }, + {HFI_PROP_CROP_OFFSETS, msm_venc_set_crop_offsets }, + {HFI_PROP_LINEAR_STRIDE_SCANLINE, msm_venc_set_stride_scanline }, + {HFI_PROP_SIGNAL_COLOR_INFO, msm_venc_set_colorspace }, + }; + + for (i = 0; i < ARRAY_SIZE(msm_venc_input_set_prop); i++) { + /* set session input properties */ + for (j = 0; j < ARRAY_SIZE(prop_type_handle_arr); j++) { + if (prop_type_handle_arr[j].type == msm_venc_input_set_prop[i]) { + rc = prop_type_handle_arr[j].handle(inst, INPUT_PORT); + if (rc) + goto exit; + break; + } + } + + /* is property type unknown ? */ + if (j == ARRAY_SIZE(prop_type_handle_arr)) + i_vpr_e(inst, "%s: unknown property %#x\n", __func__, + msm_venc_input_set_prop[i]); + } + +exit: + return rc; +} + +static int msm_venc_set_output_properties(struct msm_vidc_inst *inst) +{ + int i, j, rc = 0; + static const struct msm_venc_prop_type_handle prop_type_handle_arr[] = { + {HFI_PROP_BITSTREAM_RESOLUTION, msm_venc_set_bitstream_resolution }, + {HFI_PROP_CROP_OFFSETS, msm_venc_set_crop_offsets }, + }; + + for (i = 0; i < ARRAY_SIZE(msm_venc_output_set_prop); i++) { + /* set session output properties */ + for (j = 0; j < ARRAY_SIZE(prop_type_handle_arr); j++) { + if (prop_type_handle_arr[j].type == msm_venc_output_set_prop[i]) { + rc = prop_type_handle_arr[j].handle(inst, OUTPUT_PORT); + if (rc) + goto exit; + break; + } + } + + /* is property type unknown ? 
*/ + if (j == ARRAY_SIZE(prop_type_handle_arr)) + i_vpr_e(inst, "%s: unknown property %#x\n", __func__, + msm_venc_output_set_prop[i]); + } + +exit: + return rc; +} + +static int msm_venc_set_internal_properties(struct msm_vidc_inst *inst) +{ + int rc = 0; + + rc = msm_venc_set_quality_mode(inst); + if (rc) + return rc; + + rc = msm_venc_set_ring_buffer_count(inst); + + return rc; +} + +static int msm_venc_get_input_internal_buffers(struct msm_vidc_inst *inst) +{ + int i, rc = 0; + + for (i = 0; i < ARRAY_SIZE(msm_venc_input_internal_buffer_type); i++) { + rc = msm_vidc_get_internal_buffers(inst, msm_venc_input_internal_buffer_type[i]); + if (rc) + return rc; + } + + return rc; +} + +static int msm_venc_destroy_internal_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + struct msm_vidc_buffers *buffers; + struct msm_vidc_buffer *buf, *dummy; + const u32 *internal_buf_type; + u32 i, len; + + if (port == INPUT_PORT) { + internal_buf_type = msm_venc_input_internal_buffer_type; + len = ARRAY_SIZE(msm_venc_input_internal_buffer_type); + } else { + internal_buf_type = msm_venc_output_internal_buffer_type; + len = ARRAY_SIZE(msm_venc_output_internal_buffer_type); + } + + for (i = 0; i < len; i++) { + buffers = msm_vidc_get_buffers(inst, internal_buf_type[i], __func__); + if (!buffers) + return -EINVAL; + + if (buffers->reuse) { + i_vpr_l(inst, "%s: reuse enabled for %s\n", __func__, + buf_name(internal_buf_type[i])); + continue; + } + + list_for_each_entry_safe(buf, dummy, &buffers->list, list) { + i_vpr_h(inst, + "%s: destroying internal buffer: type %d idx %d fd %d addr %#llx size %d\n", + __func__, buf->type, buf->index, buf->fd, + buf->device_addr, buf->buffer_size); + + rc = msm_vidc_destroy_internal_buffer(inst, buf); + if (rc) + return rc; + } + } + + return 0; +} + +static int msm_venc_create_input_internal_buffers(struct msm_vidc_inst *inst) +{ + int i, rc = 0; + + for (i = 0; i < 
ARRAY_SIZE(msm_venc_input_internal_buffer_type); i++) { + rc = msm_vidc_create_internal_buffers(inst, msm_venc_input_internal_buffer_type[i]); + if (rc) + return rc; + } + + return rc; +} + +static int msm_venc_queue_input_internal_buffers(struct msm_vidc_inst *inst) +{ + int i, rc = 0; + + for (i = 0; i < ARRAY_SIZE(msm_venc_input_internal_buffer_type); i++) { + rc = msm_vidc_queue_internal_buffers(inst, msm_venc_input_internal_buffer_type[i]); + if (rc) + return rc; + } + + return rc; +} + +static int msm_venc_get_output_internal_buffers(struct msm_vidc_inst *inst) +{ + int i, rc = 0; + + for (i = 0; i < ARRAY_SIZE(msm_venc_output_internal_buffer_type); i++) { + rc = msm_vidc_get_internal_buffers(inst, msm_venc_output_internal_buffer_type[i]); + if (rc) + return rc; + } + + return rc; +} + +static int msm_venc_create_output_internal_buffers(struct msm_vidc_inst *inst) +{ + int i, rc = 0; + + for (i = 0; i < ARRAY_SIZE(msm_venc_output_internal_buffer_type); i++) { + rc = msm_vidc_create_internal_buffers(inst, + msm_venc_output_internal_buffer_type[i]); + if (rc) + return rc; + } + + return 0; +} + +static int msm_venc_queue_output_internal_buffers(struct msm_vidc_inst *inst) +{ + int i, rc = 0; + + for (i = 0; i < ARRAY_SIZE(msm_venc_output_internal_buffer_type); i++) { + rc = msm_vidc_queue_internal_buffers(inst, msm_venc_output_internal_buffer_type[i]); + if (rc) + return rc; + } + + return 0; +} + +static int msm_venc_property_subscription(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + u32 payload[32] = {0}; + u32 i; + u32 payload_size = 0; + + payload[0] = HFI_MODE_PROPERTY; + if (port == INPUT_PORT) { + for (i = 0; i < ARRAY_SIZE(msm_venc_input_subscribe_for_properties); i++) + payload[i + 1] = msm_venc_input_subscribe_for_properties[i]; + payload_size = (ARRAY_SIZE(msm_venc_input_subscribe_for_properties) + 1) * + sizeof(u32); + } else if (port == OUTPUT_PORT) { + for (i = 0; i < ARRAY_SIZE(msm_venc_output_subscribe_for_properties); i++) + 
payload[i + 1] = msm_venc_output_subscribe_for_properties[i]; + payload_size = (ARRAY_SIZE(msm_venc_output_subscribe_for_properties) + 1) * + sizeof(u32); + } else { + i_vpr_e(inst, "%s: invalid port: %d\n", __func__, port); + return -EINVAL; + } + + return venus_hfi_session_command(inst, + HFI_CMD_SUBSCRIBE_MODE, + port, + HFI_PAYLOAD_U32_ARRAY, + &payload[0], + payload_size); +} + +int msm_venc_streamoff_input(struct msm_vidc_inst *inst) +{ + return msm_vidc_session_streamoff(inst, INPUT_PORT); +} + +int msm_venc_streamon_input(struct msm_vidc_inst *inst) +{ + int rc = 0; + + rc = msm_vidc_check_session_supported(inst); + if (rc) + goto error; + + rc = msm_vidc_check_scaling_supported(inst); + if (rc) + goto error; + + rc = msm_venc_set_input_properties(inst); + if (rc) + goto error; + + rc = msm_venc_get_input_internal_buffers(inst); + if (rc) + goto error; + + rc = msm_venc_destroy_internal_buffers(inst, INPUT_PORT); + if (rc) + goto error; + + rc = msm_venc_create_input_internal_buffers(inst); + if (rc) + goto error; + + rc = msm_venc_queue_input_internal_buffers(inst); + if (rc) + goto error; + + rc = msm_venc_property_subscription(inst, INPUT_PORT); + if (rc) + goto error; + + rc = msm_vidc_process_streamon_input(inst); + if (rc) + goto error; + + return 0; + +error: + i_vpr_e(inst, "%s: failed\n", __func__); + return rc; +} + +int msm_venc_qbuf(struct msm_vidc_inst *inst, struct vb2_buffer *vb2) +{ + return msm_vidc_queue_buffer_single(inst, vb2); +} + +int msm_venc_stop_cmd(struct msm_vidc_inst *inst) +{ + i_vpr_h(inst, "received cmd: drain\n"); + return msm_vidc_process_drain(inst); +} + +int msm_venc_start_cmd(struct msm_vidc_inst *inst) +{ + i_vpr_h(inst, "received cmd: resume\n"); + + vb2_clear_last_buffer_dequeued(inst->bufq[OUTPUT_PORT].vb2q); + + /* tune power features */ + msm_vidc_allow_dcvs(inst); + msm_vidc_power_data_reset(inst); + + /* print final buffer counts & size details */ + msm_vidc_print_buffer_info(inst); + + /* print internal buffer 
memory usage stats */ + msm_vidc_print_memory_stats(inst); + + return msm_vidc_process_resume(inst); +} + +int msm_venc_streamoff_output(struct msm_vidc_inst *inst) +{ + int rc = 0; + struct msm_vidc_core *core; + + core = inst->core; + + /* restore LAYER_COUNT max allowed value */ + inst->capabilities[ENH_LAYER_COUNT].max = + core->capabilities[MAX_ENH_LAYER_COUNT].value; + + rc = msm_vidc_session_streamoff(inst, OUTPUT_PORT); + if (rc) + return rc; + + return 0; +} + +int msm_venc_streamon_output(struct msm_vidc_inst *inst) +{ + int rc = 0; + + rc = msm_venc_set_output_properties(inst); + if (rc) + goto error; + + rc = msm_vidc_set_v4l2_properties(inst); + if (rc) + goto error; + + rc = msm_venc_set_internal_properties(inst); + if (rc) + goto error; + + rc = msm_venc_get_output_internal_buffers(inst); + if (rc) + goto error; + + rc = msm_venc_destroy_internal_buffers(inst, OUTPUT_PORT); + if (rc) + goto error; + + rc = msm_venc_create_output_internal_buffers(inst); + if (rc) + goto error; + + rc = msm_venc_queue_output_internal_buffers(inst); + if (rc) + goto error; + + rc = msm_venc_property_subscription(inst, OUTPUT_PORT); + if (rc) + goto error; + + rc = msm_vidc_process_streamon_output(inst); + if (rc) + goto error; + + return 0; + +error: + i_vpr_e(inst, "%s: failed\n", __func__); + msm_venc_streamoff_output(inst); + return rc; +} + +int msm_venc_try_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f) +{ + int rc = 0; + struct v4l2_pix_format_mplane *pixmp = &f->fmt.pix_mp; + u32 pix_fmt; + + memset(pixmp->reserved, 0, sizeof(pixmp->reserved)); + + if (f->type == INPUT_MPLANE) { + pix_fmt = v4l2_colorformat_to_driver(inst, f->fmt.pix_mp.pixelformat, __func__); + if (!pix_fmt) { + i_vpr_e(inst, "%s: unsupported format, set current params\n", __func__); + f->fmt.pix_mp.pixelformat = inst->fmts[INPUT_PORT].fmt.pix_mp.pixelformat; + f->fmt.pix_mp.width = inst->fmts[INPUT_PORT].fmt.pix_mp.width; + f->fmt.pix_mp.height = 
inst->fmts[INPUT_PORT].fmt.pix_mp.height; + pix_fmt = v4l2_colorformat_to_driver(inst, f->fmt.pix_mp.pixelformat, + __func__); + } + } else if (f->type == OUTPUT_MPLANE) { + pix_fmt = v4l2_codec_to_driver(inst, f->fmt.pix_mp.pixelformat, __func__); + if (!pix_fmt) { + i_vpr_e(inst, "%s: unsupported codec, set current params\n", __func__); + f->fmt.pix_mp.width = inst->fmts[OUTPUT_PORT].fmt.pix_mp.width; + f->fmt.pix_mp.height = inst->fmts[OUTPUT_PORT].fmt.pix_mp.height; + f->fmt.pix_mp.pixelformat = inst->fmts[OUTPUT_PORT].fmt.pix_mp.pixelformat; + } + } else { + i_vpr_e(inst, "%s: invalid type %d\n", __func__, f->type); + return -EINVAL; + } + + if (pixmp->field == V4L2_FIELD_ANY) + pixmp->field = V4L2_FIELD_NONE; + pixmp->num_planes = 1; + + return rc; +} + +int msm_venc_s_fmt_output(struct msm_vidc_inst *inst, struct v4l2_format *f) +{ + int rc = 0; + struct v4l2_format *fmt; + struct msm_vidc_core *core; + u32 codec_align; + u32 width, height; + enum msm_vidc_codec_type codec; + + core = inst->core; + msm_venc_try_fmt(inst, f); + + fmt = &inst->fmts[OUTPUT_PORT]; + if (fmt->fmt.pix_mp.pixelformat != f->fmt.pix_mp.pixelformat) { + rc = msm_venc_codec_change(inst, f->fmt.pix_mp.pixelformat); + if (rc) + return rc; + } + fmt->type = OUTPUT_MPLANE; + + codec = v4l2_codec_to_driver(inst, f->fmt.pix_mp.pixelformat, __func__); + + codec_align = (codec == MSM_VIDC_HEVC) ? 
32 : 16; + /* use rotated width height if rotation is enabled */ + width = inst->compose.width; + height = inst->compose.height; + if (is_rotation_90_or_270(inst)) { + width = inst->compose.height; + height = inst->compose.width; + } + /* width, height is readonly for client */ + fmt->fmt.pix_mp.width = ALIGN(width, codec_align); + fmt->fmt.pix_mp.height = ALIGN(height, codec_align); + fmt->fmt.pix_mp.num_planes = 1; + fmt->fmt.pix_mp.plane_fmt[0].bytesperline = 0; + fmt->fmt.pix_mp.plane_fmt[0].sizeimage = + call_session_op(core, buffer_size, inst, MSM_VIDC_BUF_OUTPUT); + /* video hw supports conversion to V4L2_COLORSPACE_REC709 only */ + if (f->fmt.pix_mp.colorspace != V4L2_COLORSPACE_DEFAULT && + f->fmt.pix_mp.colorspace != V4L2_COLORSPACE_REC709) + f->fmt.pix_mp.colorspace = V4L2_COLORSPACE_DEFAULT; + fmt->fmt.pix_mp.colorspace = f->fmt.pix_mp.colorspace; + fmt->fmt.pix_mp.xfer_func = f->fmt.pix_mp.xfer_func; + fmt->fmt.pix_mp.ycbcr_enc = f->fmt.pix_mp.ycbcr_enc; + fmt->fmt.pix_mp.quantization = f->fmt.pix_mp.quantization; + inst->buffers.output.min_count = + call_session_op(core, min_count, inst, MSM_VIDC_BUF_OUTPUT); + inst->buffers.output.extra_count = + call_session_op(core, extra_count, inst, MSM_VIDC_BUF_OUTPUT); + if (inst->buffers.output.actual_count < + inst->buffers.output.min_count + + inst->buffers.output.extra_count) { + inst->buffers.output.actual_count = + inst->buffers.output.min_count + + inst->buffers.output.extra_count; + } + inst->buffers.output.size = + fmt->fmt.pix_mp.plane_fmt[0].sizeimage; + + i_vpr_h(inst, + "%s: type: OUTPUT, codec %s width %d height %d size %u min_count %d extra_count %d\n", + __func__, v4l2_pixelfmt_name(inst, fmt->fmt.pix_mp.pixelformat), + fmt->fmt.pix_mp.width, fmt->fmt.pix_mp.height, + fmt->fmt.pix_mp.plane_fmt[0].sizeimage, + inst->buffers.output.min_count, + inst->buffers.output.extra_count); + + /* finally update client format */ + memcpy(f, fmt, sizeof(struct v4l2_format)); + return rc; +} + +static int 
msm_venc_s_fmt_input(struct msm_vidc_inst *inst, struct v4l2_format *f) +{ + int rc = 0; + struct v4l2_format *fmt, *output_fmt; + struct msm_vidc_core *core; + u32 pix_fmt, width, height, size, bytesperline; + + core = inst->core; + msm_venc_try_fmt(inst, f); + + pix_fmt = v4l2_colorformat_to_driver(inst, f->fmt.pix_mp.pixelformat, __func__); + msm_vidc_update_cap_value(inst, PIX_FMTS, pix_fmt, __func__); + + width = f->fmt.pix_mp.width; + height = f->fmt.pix_mp.height; + + if (is_rgba_colorformat(pix_fmt)) + bytesperline = video_rgb_stride_bytes(pix_fmt, f->fmt.pix_mp.width); + else + bytesperline = video_y_stride_bytes(pix_fmt, f->fmt.pix_mp.width); + + fmt = &inst->fmts[INPUT_PORT]; + fmt->type = INPUT_MPLANE; + fmt->fmt.pix_mp.width = width; + fmt->fmt.pix_mp.height = height; + fmt->fmt.pix_mp.num_planes = 1; + fmt->fmt.pix_mp.pixelformat = f->fmt.pix_mp.pixelformat; + fmt->fmt.pix_mp.plane_fmt[0].bytesperline = bytesperline; + size = call_session_op(core, buffer_size, inst, MSM_VIDC_BUF_INPUT); + fmt->fmt.pix_mp.plane_fmt[0].sizeimage = size; + /* update input port colorspace info */ + fmt->fmt.pix_mp.colorspace = f->fmt.pix_mp.colorspace; + fmt->fmt.pix_mp.xfer_func = f->fmt.pix_mp.xfer_func; + fmt->fmt.pix_mp.ycbcr_enc = f->fmt.pix_mp.ycbcr_enc; + fmt->fmt.pix_mp.quantization = f->fmt.pix_mp.quantization; + /* + * Update output port colorspace info. 
+ */ + output_fmt = &inst->fmts[OUTPUT_PORT]; + output_fmt->fmt.pix_mp.colorspace = fmt->fmt.pix_mp.colorspace; + output_fmt->fmt.pix_mp.xfer_func = fmt->fmt.pix_mp.xfer_func; + output_fmt->fmt.pix_mp.ycbcr_enc = fmt->fmt.pix_mp.ycbcr_enc; + output_fmt->fmt.pix_mp.quantization = fmt->fmt.pix_mp.quantization; + + inst->buffers.input.min_count = + call_session_op(core, min_count, inst, MSM_VIDC_BUF_INPUT); + inst->buffers.input.extra_count = + call_session_op(core, extra_count, inst, MSM_VIDC_BUF_INPUT); + if (inst->buffers.input.actual_count < + inst->buffers.input.min_count + + inst->buffers.input.extra_count) { + inst->buffers.input.actual_count = + inst->buffers.input.min_count + + inst->buffers.input.extra_count; + } + inst->buffers.input.size = size; + + if (f->fmt.pix_mp.width != inst->crop.width || + f->fmt.pix_mp.height != inst->crop.height) { + /* reset crop dimensions with updated resolution */ + inst->crop.top = 0; + inst->crop.left = 0; + inst->crop.width = f->fmt.pix_mp.width; + inst->crop.height = f->fmt.pix_mp.height; + + /* reset compose dimensions with updated resolution */ + inst->compose.top = 0; + inst->compose.left = 0; + inst->compose.width = f->fmt.pix_mp.width; + inst->compose.height = f->fmt.pix_mp.height; + + /* update output format */ + rc = msm_venc_s_fmt_output(inst, output_fmt); + if (rc) + return rc; + } + + i_vpr_h(inst, + "%s: type: INPUT, format %s width %d height %d size %u min_count %d extra_count %d\n", + __func__, v4l2_pixelfmt_name(inst, fmt->fmt.pix_mp.pixelformat), + fmt->fmt.pix_mp.width, fmt->fmt.pix_mp.height, + fmt->fmt.pix_mp.plane_fmt[0].sizeimage, + inst->buffers.input.min_count, + inst->buffers.input.extra_count); + + /* finally update client format */ + memcpy(f, fmt, sizeof(struct v4l2_format)); + + return rc; +} + +int msm_venc_s_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f) +{ + int rc = 0; + + if (f->type == INPUT_MPLANE) { + rc = msm_venc_s_fmt_input(inst, f); + if (rc) + goto exit; + } else if (f->type 
== OUTPUT_MPLANE) { + rc = msm_venc_s_fmt_output(inst, f); + if (rc) + goto exit; + } else { + i_vpr_e(inst, "%s: invalid type %d\n", __func__, f->type); + rc = -EINVAL; + goto exit; + } + +exit: + if (rc) + i_vpr_e(inst, "%s: failed\n", __func__); + + return rc; +} + +int msm_venc_g_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f) +{ + int rc = 0; + int port; + + port = v4l2_type_to_driver_port(inst, f->type, __func__); + if (port < 0) + return -EINVAL; + + memcpy(f, &inst->fmts[port], sizeof(struct v4l2_format)); + + return rc; +} + +int msm_venc_s_selection(struct msm_vidc_inst *inst, struct v4l2_selection *s) +{ + int rc = 0; + struct v4l2_format *output_fmt; + + if (s->type != INPUT_MPLANE && s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT) { + i_vpr_e(inst, "%s: invalid type %d\n", __func__, s->type); + return -EINVAL; + } + + switch (s->target) { + case V4L2_SEL_TGT_CROP: + if (s->r.left || s->r.top) { + i_vpr_h(inst, "%s: unsupported top %d or left %d\n", + __func__, s->r.left, s->r.top); + s->r.left = 0; + s->r.top = 0; + } + if (s->r.width > inst->fmts[INPUT_PORT].fmt.pix_mp.width) { + i_vpr_h(inst, "%s: unsupported width %d, fmt width %d\n", + __func__, s->r.width, + inst->fmts[INPUT_PORT].fmt.pix_mp.width); + s->r.width = inst->fmts[INPUT_PORT].fmt.pix_mp.width; + } + if (s->r.height > inst->fmts[INPUT_PORT].fmt.pix_mp.height) { + i_vpr_h(inst, "%s: unsupported height %d, fmt height %d\n", + __func__, s->r.height, + inst->fmts[INPUT_PORT].fmt.pix_mp.height); + s->r.height = inst->fmts[INPUT_PORT].fmt.pix_mp.height; + } + + inst->crop.left = s->r.left; + inst->crop.top = s->r.top; + inst->crop.width = s->r.width; + inst->crop.height = s->r.height; + /* adjust compose such that it is within crop */ + inst->compose.left = inst->crop.left; + inst->compose.top = inst->crop.top; + inst->compose.width = inst->crop.width; + inst->compose.height = inst->crop.height; + /* update output format based on new crop dimensions */ + output_fmt = &inst->fmts[OUTPUT_PORT]; + 
rc = msm_venc_s_fmt_output(inst, output_fmt); + if (rc) + return rc; + break; + case V4L2_SEL_TGT_COMPOSE: + if (s->r.left < inst->crop.left) { + i_vpr_e(inst, + "%s: compose left (%d) less than crop left (%d)\n", + __func__, s->r.left, inst->crop.left); + s->r.left = inst->crop.left; + } + if (s->r.top < inst->crop.top) { + i_vpr_e(inst, + "%s: compose top (%d) less than crop top (%d)\n", + __func__, s->r.top, inst->crop.top); + s->r.top = inst->crop.top; + } + if (s->r.width > inst->crop.width) { + i_vpr_e(inst, + "%s: compose width (%d) greater than crop width (%d)\n", + __func__, s->r.width, inst->crop.width); + s->r.width = inst->crop.width; + } + if (s->r.height > inst->crop.height) { + i_vpr_e(inst, + "%s: compose height (%d) greater than crop height (%d)\n", + __func__, s->r.height, inst->crop.height); + s->r.height = inst->crop.height; + } + inst->compose.left = s->r.left; + inst->compose.top = s->r.top; + inst->compose.width = s->r.width; + inst->compose.height = s->r.height; + + if (is_scaling_enabled(inst)) { + i_vpr_h(inst, + "%s: scaling enabled, crop: l %d t %d w %d h %d compose: l %d t %d w %d h %d\n", + __func__, inst->crop.left, inst->crop.top, + inst->crop.width, inst->crop.height, + inst->compose.left, inst->compose.top, + inst->compose.width, inst->compose.height); + } + + /* update output format based on new compose dimensions */ + output_fmt = &inst->fmts[OUTPUT_PORT]; + rc = msm_venc_s_fmt_output(inst, output_fmt); + if (rc) + return rc; + break; + default: + i_vpr_e(inst, "%s: invalid target %d\n", + __func__, s->target); + rc = -EINVAL; + break; + } + if (!rc) + i_vpr_h(inst, "%s: target %d, r [%d, %d, %d, %d]\n", + __func__, s->target, s->r.top, s->r.left, + s->r.width, s->r.height); + return rc; +} + +int msm_venc_g_selection(struct msm_vidc_inst *inst, struct v4l2_selection *s) +{ + int rc = 0; + + if (s->type != INPUT_MPLANE && s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT) { + i_vpr_e(inst, "%s: invalid type %d\n", __func__, s->type); + return 
-EINVAL; + } + + switch (s->target) { + case V4L2_SEL_TGT_CROP_BOUNDS: + case V4L2_SEL_TGT_CROP_DEFAULT: + case V4L2_SEL_TGT_CROP: + s->r.left = inst->crop.left; + s->r.top = inst->crop.top; + s->r.width = inst->crop.width; + s->r.height = inst->crop.height; + break; + case V4L2_SEL_TGT_COMPOSE_BOUNDS: + case V4L2_SEL_TGT_COMPOSE_PADDED: + case V4L2_SEL_TGT_COMPOSE_DEFAULT: + case V4L2_SEL_TGT_COMPOSE: + s->r.left = inst->compose.left; + s->r.top = inst->compose.top; + s->r.width = inst->compose.width; + s->r.height = inst->compose.height; + break; + default: + i_vpr_e(inst, "%s: invalid target %d\n", + __func__, s->target); + rc = -EINVAL; + break; + } + if (!rc) + i_vpr_h(inst, "%s: target %d, r [%d, %d, %d, %d]\n", + __func__, s->target, s->r.top, s->r.left, + s->r.width, s->r.height); + return rc; +} + +int msm_venc_s_param(struct msm_vidc_inst *inst, + struct v4l2_streamparm *s_parm) +{ + int rc = 0; + struct v4l2_fract *timeperframe = NULL; + u32 q16_rate, max_rate, default_rate; + u64 us_per_frame = 0, input_rate = 0; + bool is_frame_rate = false; + + if (s_parm->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) { + timeperframe = &s_parm->parm.output.timeperframe; + max_rate = inst->capabilities[OPERATING_RATE].max >> 16; + default_rate = inst->capabilities[OPERATING_RATE].value >> 16; + s_parm->parm.output.capability = V4L2_CAP_TIMEPERFRAME; + } else { + timeperframe = &s_parm->parm.capture.timeperframe; + is_frame_rate = true; + max_rate = inst->capabilities[FRAME_RATE].max >> 16; + default_rate = inst->capabilities[FRAME_RATE].value >> 16; + s_parm->parm.capture.capability = V4L2_CAP_TIMEPERFRAME; + } + + if (!timeperframe->denominator || !timeperframe->numerator) { + i_vpr_e(inst, "%s: type %s, invalid rate, update with default\n", + __func__, v4l2_type_name(s_parm->type)); + if (!timeperframe->numerator) + timeperframe->numerator = 1; + if (!timeperframe->denominator) + timeperframe->denominator = default_rate; + } + + us_per_frame = timeperframe->numerator * 
(u64)USEC_PER_SEC; + do_div(us_per_frame, timeperframe->denominator); + + if (!us_per_frame) { + i_vpr_e(inst, "%s: us_per_frame is zero\n", __func__); + rc = -EINVAL; + goto exit; + } + + input_rate = (u64)USEC_PER_SEC; + do_div(input_rate, us_per_frame); + + i_vpr_h(inst, "%s: type %s, %s value %llu\n", + __func__, v4l2_type_name(s_parm->type), + is_frame_rate ? "frame rate" : "operating rate", input_rate); + + q16_rate = (u32)input_rate << 16; + msm_vidc_update_cap_value(inst, is_frame_rate ? FRAME_RATE : OPERATING_RATE, + q16_rate, __func__); + if ((s_parm->type == INPUT_MPLANE && inst->bufq[INPUT_PORT].vb2q->streaming) || + (s_parm->type == OUTPUT_MPLANE && inst->bufq[OUTPUT_PORT].vb2q->streaming)) { + rc = msm_vidc_check_core_mbps(inst); + if (rc) { + i_vpr_e(inst, "%s: unsupported load\n", __func__); + goto reset_rate; + } + if (input_rate > max_rate) { + i_vpr_e(inst, "%s: unsupported rate %llu, max %u\n", __func__, + input_rate, max_rate); + rc = -EINVAL; + goto reset_rate; + } + } + + if (is_frame_rate) + inst->capabilities[FRAME_RATE].flags |= CAP_FLAG_CLIENT_SET; + else + inst->capabilities[OPERATING_RATE].flags |= CAP_FLAG_CLIENT_SET; + /* + * In the static case, the frame rate is applied via the inst database + * set function referenced by the FRAME_RATE cap id. + * In the dynamic (streaming) case, it is sent to firmware below. + */ + if (inst->bufq[OUTPUT_PORT].vb2q->streaming) { + rc = venus_hfi_session_property(inst, + HFI_PROP_FRAME_RATE, + HFI_HOST_FLAGS_NONE, + HFI_PORT_BITSTREAM, + HFI_PAYLOAD_Q16, + &q16_rate, + sizeof(u32)); + if (rc) { + i_vpr_e(inst, + "%s: failed to set frame rate to fw\n", __func__); + goto exit; + } + } + + return 0; + +reset_rate: + if (rc) { + i_vpr_e(inst, "%s: setting rate %llu failed, reset to %u\n", __func__, + input_rate, default_rate); + msm_vidc_update_cap_value(inst, is_frame_rate ? 
FRAME_RATE : OPERATING_RATE, + default_rate << 16, __func__); + } +exit: + return rc; +} + +int msm_venc_g_param(struct msm_vidc_inst *inst, + struct v4l2_streamparm *s_parm) +{ + struct v4l2_fract *timeperframe = NULL; + + if (s_parm->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) { + timeperframe = &s_parm->parm.output.timeperframe; + timeperframe->numerator = 1; + timeperframe->denominator = + inst->capabilities[OPERATING_RATE].value >> 16; + s_parm->parm.output.capability = V4L2_CAP_TIMEPERFRAME; + } else { + timeperframe = &s_parm->parm.capture.timeperframe; + timeperframe->numerator = 1; + timeperframe->denominator = + inst->capabilities[FRAME_RATE].value >> 16; + s_parm->parm.capture.capability = V4L2_CAP_TIMEPERFRAME; + } + + i_vpr_h(inst, "%s: type %s, num %u denom %u\n", + __func__, v4l2_type_name(s_parm->type), timeperframe->numerator, + timeperframe->denominator); + return 0; +} + +int msm_venc_subscribe_event(struct msm_vidc_inst *inst, + const struct v4l2_event_subscription *sub) +{ + int rc = 0; + + switch (sub->type) { + case V4L2_EVENT_EOS: + rc = v4l2_event_subscribe(&inst->fh, sub, MAX_EVENTS, NULL); + break; + case V4L2_EVENT_CTRL: + rc = v4l2_ctrl_subscribe_event(&inst->fh, sub); + break; + default: + i_vpr_e(inst, "%s: invalid type %d id %d\n", __func__, sub->type, sub->id); + return -EINVAL; + } + + if (rc) + i_vpr_e(inst, "%s: failed, type %d id %d\n", + __func__, sub->type, sub->id); + return rc; +} + +int msm_venc_enum_fmt(struct msm_vidc_inst *inst, struct v4l2_fmtdesc *f) +{ + int rc = 0; + struct msm_vidc_core *core; + u32 array[32] = {0}; + u32 i = 0; + + core = inst->core; + + if (f->type == OUTPUT_MPLANE) { + u32 codecs = core->capabilities[ENC_CODECS].value; + u32 idx = 0; + + for (i = 0; i <= 31; i++) { + if (codecs & BIT(i)) { + if (idx >= ARRAY_SIZE(array)) + break; + array[idx] = codecs & BIT(i); + idx++; + } + } + if (!array[f->index]) + return -EINVAL; + f->pixelformat = v4l2_codec_from_driver(inst, array[f->index], + __func__); + 
if (!f->pixelformat) + return -EINVAL; + f->flags = V4L2_FMT_FLAG_COMPRESSED; + strscpy(f->description, "codec", sizeof(f->description)); + } else if (f->type == INPUT_MPLANE) { + u32 formats = inst->capabilities[PIX_FMTS].step_or_mask; + u32 idx = 0; + + for (i = 0; i <= 31; i++) { + if (formats & BIT(i)) { + if (idx >= ARRAY_SIZE(array)) + break; + array[idx] = formats & BIT(i); + idx++; + } + } + if (!array[f->index]) + return -EINVAL; + f->pixelformat = v4l2_colorformat_from_driver(inst, array[f->index], + __func__); + if (!f->pixelformat) + return -EINVAL; + strscpy(f->description, "colorformat", sizeof(f->description)); + } + + memset(f->reserved, 0, sizeof(f->reserved)); + + i_vpr_h(inst, "%s: index %d, %s: %s, flags %#x\n", + __func__, f->index, f->description, + v4l2_pixelfmt_name(inst, f->pixelformat), f->flags); + return rc; +} + +int msm_venc_inst_init(struct msm_vidc_inst *inst) +{ + int rc = 0; + struct msm_vidc_core *core; + struct v4l2_format *f; + enum msm_vidc_colorformat_type colorformat; + + core = inst->core; + + if (core->capabilities[DCVS].value) + inst->power.dcvs_mode = true; + + f = &inst->fmts[OUTPUT_PORT]; + f->type = OUTPUT_MPLANE; + f->fmt.pix_mp.width = DEFAULT_WIDTH; + f->fmt.pix_mp.height = DEFAULT_HEIGHT; + f->fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264; + f->fmt.pix_mp.num_planes = 1; + f->fmt.pix_mp.plane_fmt[0].bytesperline = 0; + f->fmt.pix_mp.plane_fmt[0].sizeimage = + call_session_op(core, buffer_size, inst, MSM_VIDC_BUF_OUTPUT); + f->fmt.pix_mp.field = V4L2_FIELD_NONE; + f->fmt.pix_mp.colorspace = V4L2_COLORSPACE_DEFAULT; + f->fmt.pix_mp.xfer_func = V4L2_XFER_FUNC_DEFAULT; + f->fmt.pix_mp.ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT; + f->fmt.pix_mp.quantization = V4L2_QUANTIZATION_DEFAULT; + inst->buffers.output.min_count = + call_session_op(core, min_count, inst, MSM_VIDC_BUF_OUTPUT); + inst->buffers.output.extra_count = + call_session_op(core, extra_count, inst, MSM_VIDC_BUF_OUTPUT); + inst->buffers.output.actual_count = + 
inst->buffers.output.min_count + + inst->buffers.output.extra_count; + inst->buffers.output.size = f->fmt.pix_mp.plane_fmt[0].sizeimage; + + inst->crop.left = 0; + inst->crop.top = 0; + inst->crop.width = f->fmt.pix_mp.width; + inst->crop.height = f->fmt.pix_mp.height; + + inst->compose.left = 0; + inst->compose.top = 0; + inst->compose.width = f->fmt.pix_mp.width; + inst->compose.height = f->fmt.pix_mp.height; + + f = &inst->fmts[INPUT_PORT]; + f->type = INPUT_MPLANE; + f->fmt.pix_mp.pixelformat = + v4l2_colorformat_from_driver(inst, MSM_VIDC_FMT_NV12C, __func__); + f->fmt.pix_mp.width = DEFAULT_WIDTH; + f->fmt.pix_mp.height = DEFAULT_HEIGHT; + f->fmt.pix_mp.num_planes = 1; + colorformat = v4l2_colorformat_to_driver(inst, f->fmt.pix_mp.pixelformat, + __func__); + f->fmt.pix_mp.plane_fmt[0].bytesperline = + video_y_stride_bytes(colorformat, DEFAULT_WIDTH); + f->fmt.pix_mp.plane_fmt[0].sizeimage = + call_session_op(core, buffer_size, inst, MSM_VIDC_BUF_INPUT); + f->fmt.pix_mp.field = V4L2_FIELD_NONE; + f->fmt.pix_mp.colorspace = V4L2_COLORSPACE_DEFAULT; + f->fmt.pix_mp.xfer_func = V4L2_XFER_FUNC_DEFAULT; + f->fmt.pix_mp.ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT; + f->fmt.pix_mp.quantization = V4L2_QUANTIZATION_DEFAULT; + inst->buffers.input.min_count = + call_session_op(core, min_count, inst, MSM_VIDC_BUF_INPUT); + inst->buffers.input.extra_count = + call_session_op(core, extra_count, inst, MSM_VIDC_BUF_INPUT); + inst->buffers.input.actual_count = + inst->buffers.input.min_count + + inst->buffers.input.extra_count; + inst->buffers.input.size = f->fmt.pix_mp.plane_fmt[0].sizeimage; + + inst->hfi_rc_type = HFI_RC_VBR_CFR; + inst->hfi_layer_type = HFI_HIER_P_SLIDING_WINDOW; + + rc = msm_venc_codec_change(inst, + inst->fmts[OUTPUT_PORT].fmt.pix_mp.pixelformat); + + return rc; +} + +int msm_venc_inst_deinit(struct msm_vidc_inst *inst) +{ + return msm_vidc_ctrl_handler_deinit(inst); +} From patchwork Fri Jul 28 13:23:19 2023 Content-Type: text/plain; charset="utf-8" 
X-Patchwork-Submitter: Vikash Garodia X-Patchwork-Id: 13331925
From: Vikash Garodia
Subject: [PATCH 08/33] iris: vidc: add video decoder files
Date: Fri, 28 Jul 2023 18:53:19 +0530
Message-ID: <1690550624-14642-9-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org

This implements decoder functionalities of the driver. 
Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../media/platform/qcom/iris/vidc/inc/msm_vdec.h | 40 + .../media/platform/qcom/iris/vidc/src/msm_vdec.c | 2091 ++++++++++++++++++++ 2 files changed, 2131 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/msm_vdec.h create mode 100644 drivers/media/platform/qcom/iris/vidc/src/msm_vdec.c diff --git a/drivers/media/platform/qcom/iris/vidc/inc/msm_vdec.h b/drivers/media/platform/qcom/iris/vidc/inc/msm_vdec.h new file mode 100644 index 0000000..bece9a2 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/msm_vdec.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef _MSM_VDEC_H_ +#define _MSM_VDEC_H_ + +#include "msm_vidc_core.h" +#include "msm_vidc_inst.h" + +int msm_vdec_streamoff_input(struct msm_vidc_inst *inst); +int msm_vdec_streamon_input(struct msm_vidc_inst *inst); +int msm_vdec_streamoff_output(struct msm_vidc_inst *inst); +int msm_vdec_streamon_output(struct msm_vidc_inst *inst); +int msm_vdec_qbuf(struct msm_vidc_inst *inst, struct vb2_buffer *vb2); +int msm_vdec_try_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f); +int msm_vdec_s_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f); +int msm_vdec_g_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f); +int msm_vdec_s_selection(struct msm_vidc_inst *inst, struct v4l2_selection *s); +int msm_vdec_g_selection(struct msm_vidc_inst *inst, struct v4l2_selection *s); +int msm_vdec_subscribe_event(struct msm_vidc_inst *inst, + const struct v4l2_event_subscription *sub); +int msm_vdec_enum_fmt(struct msm_vidc_inst *inst, struct v4l2_fmtdesc *f); +int msm_vdec_inst_init(struct msm_vidc_inst *inst); +int msm_vdec_inst_deinit(struct msm_vidc_inst *inst); +int msm_vdec_init_input_subcr_params(struct msm_vidc_inst *inst); +int 
msm_vdec_input_port_settings_change(struct msm_vidc_inst *inst); +int msm_vdec_stop_cmd(struct msm_vidc_inst *inst); +int msm_vdec_start_cmd(struct msm_vidc_inst *inst); +int msm_vdec_handle_release_buffer(struct msm_vidc_inst *inst, + struct msm_vidc_buffer *buf); +int msm_vdec_set_num_comv(struct msm_vidc_inst *inst); +int msm_vdec_get_input_internal_buffers(struct msm_vidc_inst *inst); +int msm_vdec_create_input_internal_buffers(struct msm_vidc_inst *inst); +int msm_vdec_queue_input_internal_buffers(struct msm_vidc_inst *inst); +int msm_vdec_release_input_internal_buffers(struct msm_vidc_inst *inst); + +#endif // _MSM_VDEC_H_ diff --git a/drivers/media/platform/qcom/iris/vidc/src/msm_vdec.c b/drivers/media/platform/qcom/iris/vidc/src/msm_vdec.c new file mode 100644 index 0000000..6f5bc29 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/msm_vdec.c @@ -0,0 +1,2091 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include + +#include "hfi_packet.h" +#include "msm_media_info.h" +#include "msm_vdec.h" +#include "msm_vidc_control.h" +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_inst.h" +#include "msm_vidc_internal.h" +#include "msm_vidc_memory.h" +#include "msm_vidc_platform.h" +#include "msm_vidc_power.h" +#include "venus_hfi.h" + +/* TODO: update based on clips */ +#define MAX_DEC_BATCH_SIZE 6 +#define SKIP_BATCH_WINDOW 100 + +static const u32 msm_vdec_internal_buffer_type[] = { + MSM_VIDC_BUF_BIN, + MSM_VIDC_BUF_COMV, + MSM_VIDC_BUF_NON_COMV, + MSM_VIDC_BUF_LINE, +}; + +static const u32 msm_vdec_output_internal_buffer_type[] = { + MSM_VIDC_BUF_DPB, +}; + +struct msm_vdec_prop_type_handle { + u32 type; + int (*handle)(struct msm_vidc_inst *inst, enum msm_vidc_port_type port); +}; + +static int msm_vdec_codec_change(struct msm_vidc_inst *inst, u32 v4l2_codec) +{ + int rc = 0; + bool session_init = false; + + if (!inst->codec) + session_init = true; + + if (inst->codec && inst->fmts[INPUT_PORT].fmt.pix_mp.pixelformat == v4l2_codec) + return 0; + + i_vpr_h(inst, "%s: codec changed from %s to %s\n", + __func__, v4l2_pixelfmt_name(inst, inst->fmts[INPUT_PORT].fmt.pix_mp.pixelformat), + v4l2_pixelfmt_name(inst, v4l2_codec)); + + inst->codec = v4l2_codec_to_driver(inst, v4l2_codec, __func__); + if (!inst->codec) { + i_vpr_e(inst, "%s: invalid codec %#x\n", __func__, v4l2_codec); + rc = -EINVAL; + goto exit; + } + + inst->fmts[INPUT_PORT].fmt.pix_mp.pixelformat = v4l2_codec; + rc = msm_vidc_update_debug_str(inst); + if (rc) + goto exit; + + rc = msm_vidc_get_inst_capability(inst); + if (rc) + goto exit; + + rc = msm_vidc_ctrl_handler_init(inst, session_init); + if (rc) + goto exit; + + rc = msm_vidc_update_buffer_count(inst, INPUT_PORT); + if (rc) + goto exit; + + rc = msm_vidc_update_buffer_count(inst, OUTPUT_PORT); + if (rc) + goto exit; + +exit: + return rc; +} + +static int 
msm_vdec_set_bitstream_resolution(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + u32 resolution; + + resolution = inst->fmts[INPUT_PORT].fmt.pix_mp.width << 16 | + inst->fmts[INPUT_PORT].fmt.pix_mp.height; + i_vpr_h(inst, "%s: width: %d height: %d\n", __func__, + inst->fmts[INPUT_PORT].fmt.pix_mp.width, + inst->fmts[INPUT_PORT].fmt.pix_mp.height); + inst->subcr_params[port].bitstream_resolution = resolution; + rc = venus_hfi_session_property(inst, + HFI_PROP_BITSTREAM_RESOLUTION, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + HFI_PAYLOAD_U32, + &resolution, + sizeof(u32)); + if (rc) { + i_vpr_e(inst, "%s: set property failed\n", __func__); + return rc; + } + + return rc; +} + +static int msm_vdec_set_linear_stride_scanline(struct msm_vidc_inst *inst) +{ + int rc = 0; + u32 stride_y, scanline_y, stride_uv, scanline_uv; + u32 payload[2]; + enum msm_vidc_colorformat_type colorformat; + + colorformat = v4l2_colorformat_to_driver(inst, + inst->fmts[OUTPUT_PORT].fmt.pix_mp.pixelformat, + __func__); + + if (!is_linear_yuv_colorformat(colorformat)) + return 0; + + stride_y = inst->fmts[OUTPUT_PORT].fmt.pix_mp.width; + scanline_y = inst->fmts[OUTPUT_PORT].fmt.pix_mp.height; + stride_uv = stride_y; + scanline_uv = scanline_y / 2; + + payload[0] = stride_y << 16 | scanline_y; + payload[1] = stride_uv << 16 | scanline_uv; + i_vpr_h(inst, "%s: stride_y: %d scanline_y: %d stride_uv: %d, scanline_uv: %d", + __func__, stride_y, scanline_y, stride_uv, scanline_uv); + rc = venus_hfi_session_property(inst, + HFI_PROP_LINEAR_STRIDE_SCANLINE, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, OUTPUT_PORT), + HFI_PAYLOAD_U64, + &payload, + sizeof(u64)); + if (rc) { + i_vpr_e(inst, "%s: set property failed\n", __func__); + return rc; + } + + return rc; +} + +static int msm_vdec_set_ubwc_stride_scanline(struct msm_vidc_inst *inst) +{ + int rc = 0; + u32 stride_y, scanline_y, stride_uv, scanline_uv; + u32 meta_stride_y, meta_scanline_y, meta_stride_uv, 
meta_scanline_uv; + u32 payload[4]; + struct v4l2_format *f; + u32 pix_fmt, width, height, colorformat; + + f = &inst->fmts[OUTPUT_PORT]; + pix_fmt = f->fmt.pix_mp.pixelformat; + width = f->fmt.pix_mp.width; + height = f->fmt.pix_mp.height; + + colorformat = v4l2_colorformat_to_driver(inst, pix_fmt, __func__); + + if (!is_ubwc_colorformat(colorformat)) + return 0; + + stride_y = video_y_stride_bytes(colorformat, width); + scanline_y = video_y_scanlines(colorformat, height); + stride_uv = video_uv_stride_bytes(colorformat, width); + scanline_uv = video_uv_scanlines(colorformat, height); + + meta_stride_y = video_y_meta_stride(colorformat, width); + meta_scanline_y = video_y_meta_scanlines(colorformat, height); + meta_stride_uv = video_uv_meta_stride(colorformat, width); + meta_scanline_uv = video_uv_meta_scanlines(colorformat, height); + + payload[0] = stride_y << 16 | scanline_y; + payload[1] = stride_uv << 16 | scanline_uv; + payload[2] = meta_stride_y << 16 | meta_scanline_y; + payload[3] = meta_stride_uv << 16 | meta_scanline_uv; + + i_vpr_h(inst, + "%s: y: stride %d scanline %d, uv: stride %d scanline %d, y_meta: stride %d scaline %d, uv_meta: stride %d scanline %d", + __func__, + stride_y, scanline_y, stride_uv, scanline_uv, + meta_stride_y, meta_scanline_y, + meta_stride_uv, meta_scanline_uv); + rc = venus_hfi_session_property(inst, + HFI_PROP_UBWC_STRIDE_SCANLINE, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, OUTPUT_PORT), + HFI_PAYLOAD_U32_ARRAY, + &payload[0], + sizeof(u32) * 4); + if (rc) { + i_vpr_e(inst, "%s: set property failed\n", __func__); + return rc; + } + + return rc; +} + +static int msm_vdec_set_crop_offsets(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + u32 left_offset, top_offset, right_offset, bottom_offset; + u32 payload[2] = {0}; + + left_offset = inst->crop.left; + top_offset = inst->crop.top; + right_offset = (inst->fmts[INPUT_PORT].fmt.pix_mp.width - + inst->crop.width); + bottom_offset = 
(inst->fmts[INPUT_PORT].fmt.pix_mp.height - + inst->crop.height); + + payload[0] = left_offset << 16 | top_offset; + payload[1] = right_offset << 16 | bottom_offset; + i_vpr_h(inst, + "%s: l_off: %d t_off: %d r_off: %d b_offs: %d", + __func__, + left_offset, top_offset, right_offset, bottom_offset); + inst->subcr_params[port].crop_offsets[0] = payload[0]; + inst->subcr_params[port].crop_offsets[1] = payload[1]; + rc = venus_hfi_session_property(inst, + HFI_PROP_CROP_OFFSETS, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + HFI_PAYLOAD_64_PACKED, + &payload, + sizeof(u64)); + if (rc) { + i_vpr_e(inst, "%s: set property failed\n", __func__); + return rc; + } + + return rc; +} + +static int msm_vdec_set_bit_depth(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + u32 pix_fmt; + u32 bitdepth = 8 << 16 | 8; + enum msm_vidc_colorformat_type colorformat; + + if (port != INPUT_PORT && port != OUTPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + return -EINVAL; + } + + pix_fmt = inst->fmts[OUTPUT_PORT].fmt.pix_mp.pixelformat; + colorformat = v4l2_colorformat_to_driver(inst, pix_fmt, __func__); + if (is_10bit_colorformat(colorformat)) + bitdepth = 10 << 16 | 10; + + inst->subcr_params[port].bit_depth = bitdepth; + msm_vidc_update_cap_value(inst, BIT_DEPTH, bitdepth, __func__); + i_vpr_h(inst, "%s: bit depth: %#x", __func__, bitdepth); + rc = venus_hfi_session_property(inst, + HFI_PROP_LUMA_CHROMA_BIT_DEPTH, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + HFI_PAYLOAD_U32, + &bitdepth, + sizeof(u32)); + if (rc) { + i_vpr_e(inst, "%s: set property failed\n", __func__); + return rc; + } + + return rc; +} + +static int msm_vdec_set_coded_frames(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + u32 coded_frames = 0; + + if (port != INPUT_PORT && port != OUTPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + return -EINVAL; + } + + if (inst->capabilities[CODED_FRAMES].value 
== + CODED_FRAMES_PROGRESSIVE) + coded_frames = HFI_BITMASK_FRAME_MBS_ONLY_FLAG; + inst->subcr_params[port].coded_frames = coded_frames; + i_vpr_h(inst, "%s: coded frames: %d", __func__, coded_frames); + rc = venus_hfi_session_property(inst, + HFI_PROP_CODED_FRAMES, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + HFI_PAYLOAD_U32, + &coded_frames, + sizeof(u32)); + if (rc) { + i_vpr_e(inst, "%s: set property failed\n", __func__); + return rc; + } + + return rc; +} + +static int msm_vdec_set_min_output_count(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + u32 min_output; + + if (port != INPUT_PORT && port != OUTPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + return -EINVAL; + } + + min_output = inst->buffers.output.min_count; + inst->subcr_params[port].fw_min_count = min_output; + i_vpr_h(inst, "%s: firmware min output count: %d", + __func__, min_output); + rc = venus_hfi_session_property(inst, + HFI_PROP_BUFFER_FW_MIN_OUTPUT_COUNT, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + HFI_PAYLOAD_U32, + &min_output, + sizeof(u32)); + if (rc) { + i_vpr_e(inst, "%s: set property failed\n", __func__); + return rc; + } + return rc; +} + +static int msm_vdec_set_picture_order_count(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + u32 poc = 0; + + if (port != INPUT_PORT && port != OUTPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + return -EINVAL; + } + + inst->subcr_params[port].pic_order_cnt = poc; + i_vpr_h(inst, "%s: picture order count: %d", __func__, poc); + rc = venus_hfi_session_property(inst, + HFI_PROP_PIC_ORDER_CNT_TYPE, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + HFI_PAYLOAD_U32, + &poc, + sizeof(u32)); + if (rc) { + i_vpr_e(inst, "%s: set property failed\n", __func__); + return rc; + } + + return rc; +} + +static int msm_vdec_set_colorspace(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + u32 primaries = 
MSM_VIDC_PRIMARIES_RESERVED; + u32 matrix_coeff = MSM_VIDC_MATRIX_COEFF_RESERVED; + u32 transfer_char = MSM_VIDC_TRANSFER_RESERVED; + u32 full_range = V4L2_QUANTIZATION_DEFAULT; + u32 colour_description_present_flag = 0; + u32 video_signal_type_present_flag = 0, color_info = 0; + /* Unspecified video format */ + u32 video_format = 5; + struct v4l2_pix_format_mplane *pixmp = NULL; + + if (port != INPUT_PORT && port != OUTPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + return -EINVAL; + } + + if (inst->codec == MSM_VIDC_VP9) + return 0; + + pixmp = &inst->fmts[port].fmt.pix_mp; + if (pixmp->colorspace != V4L2_COLORSPACE_DEFAULT || + pixmp->ycbcr_enc != V4L2_YCBCR_ENC_DEFAULT || + pixmp->xfer_func != V4L2_XFER_FUNC_DEFAULT) { + colour_description_present_flag = 1; + video_signal_type_present_flag = 1; + primaries = v4l2_color_primaries_to_driver(inst, + pixmp->colorspace, __func__); + matrix_coeff = v4l2_matrix_coeff_to_driver(inst, + pixmp->ycbcr_enc, __func__); + transfer_char = v4l2_transfer_char_to_driver(inst, + pixmp->xfer_func, __func__); + } + + if (pixmp->quantization != V4L2_QUANTIZATION_DEFAULT) { + video_signal_type_present_flag = 1; + full_range = pixmp->quantization == + V4L2_QUANTIZATION_FULL_RANGE ? 
1 : 0; + } + + color_info = (matrix_coeff & 0xFF) | + ((transfer_char << 8) & 0xFF00) | + ((primaries << 16) & 0xFF0000) | + ((colour_description_present_flag << 24) & 0x1000000) | + ((full_range << 25) & 0x2000000) | + ((video_format << 26) & 0x1C000000) | + ((video_signal_type_present_flag << 29) & 0x20000000); + + inst->subcr_params[port].color_info = color_info; + i_vpr_h(inst, "%s: color info: %#x\n", __func__, color_info); + rc = venus_hfi_session_property(inst, + HFI_PROP_SIGNAL_COLOR_INFO, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + HFI_PAYLOAD_32_PACKED, + &color_info, + sizeof(u32)); + if (rc) { + i_vpr_e(inst, "%s: set property failed\n", __func__); + return rc; + } + + return rc; +} + +static int msm_vdec_set_profile(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + u32 profile; + + if (port != INPUT_PORT && port != OUTPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + return -EINVAL; + } + + profile = inst->capabilities[PROFILE].value; + inst->subcr_params[port].profile = profile; + i_vpr_h(inst, "%s: profile: %d", __func__, profile); + rc = venus_hfi_session_property(inst, + HFI_PROP_PROFILE, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + HFI_PAYLOAD_U32_ENUM, + &profile, + sizeof(u32)); + if (rc) { + i_vpr_e(inst, "%s: set property failed\n", __func__); + return rc; + } + + return rc; +} + +static int msm_vdec_set_level(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + u32 level; + + if (port != INPUT_PORT && port != OUTPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + return -EINVAL; + } + + level = inst->capabilities[LEVEL].value; + inst->subcr_params[port].level = level; + i_vpr_h(inst, "%s: level: %d", __func__, level); + rc = venus_hfi_session_property(inst, + HFI_PROP_LEVEL, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + HFI_PAYLOAD_U32_ENUM, + &level, + sizeof(u32)); + if (rc) { + i_vpr_e(inst, "%s: set property failed\n", 
__func__); + return rc; + } + + return rc; +} + +static int msm_vdec_set_tier(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + u32 tier; + + if (port != INPUT_PORT && port != OUTPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + return -EINVAL; + } + + tier = inst->capabilities[HEVC_TIER].value; + inst->subcr_params[port].tier = tier; + i_vpr_h(inst, "%s: tier: %d", __func__, tier); + rc = venus_hfi_session_property(inst, + HFI_PROP_TIER, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + HFI_PAYLOAD_U32_ENUM, + &tier, + sizeof(u32)); + if (rc) { + i_vpr_e(inst, "%s: set property failed\n", __func__); + return rc; + } + + return rc; +} + +static int msm_vdec_set_colorformat(struct msm_vidc_inst *inst) +{ + int rc = 0; + u32 pixelformat; + enum msm_vidc_colorformat_type colorformat; + u32 hfi_colorformat; + + pixelformat = inst->fmts[OUTPUT_PORT].fmt.pix_mp.pixelformat; + colorformat = v4l2_colorformat_to_driver(inst, pixelformat, __func__); + hfi_colorformat = get_hfi_colorformat(inst, colorformat); + i_vpr_h(inst, "%s: hfi colorformat: %d", + __func__, hfi_colorformat); + rc = venus_hfi_session_property(inst, + HFI_PROP_COLOR_FORMAT, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, OUTPUT_PORT), + HFI_PAYLOAD_U32, + &hfi_colorformat, + sizeof(u32)); + if (rc) { + i_vpr_e(inst, "%s: set property failed\n", __func__); + return rc; + } + + return rc; +} + +static int msm_vdec_set_output_properties(struct msm_vidc_inst *inst) +{ + int rc = 0; + + rc = msm_vdec_set_colorformat(inst); + if (rc) + return rc; + + rc = msm_vdec_set_linear_stride_scanline(inst); + if (rc) + return rc; + + rc = msm_vdec_set_ubwc_stride_scanline(inst); + + return rc; +} + +int msm_vdec_get_input_internal_buffers(struct msm_vidc_inst *inst) +{ + int rc = 0; + u32 i = 0; + + for (i = 0; i < ARRAY_SIZE(msm_vdec_internal_buffer_type); i++) { + rc = msm_vidc_get_internal_buffers(inst, msm_vdec_internal_buffer_type[i]); + if (rc) + return rc; + } + + 
return rc; +} + +static int msm_vdec_get_output_internal_buffers(struct msm_vidc_inst *inst) +{ + return msm_vidc_get_internal_buffers(inst, MSM_VIDC_BUF_DPB); +} + +static int msm_vdec_destroy_internal_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + struct msm_vidc_buffers *buffers; + struct msm_vidc_buffer *buf, *dummy; + const u32 *internal_buf_type; + u32 i, len; + + if (port == INPUT_PORT) { + internal_buf_type = msm_vdec_internal_buffer_type; + len = ARRAY_SIZE(msm_vdec_internal_buffer_type); + } else { + internal_buf_type = msm_vdec_output_internal_buffer_type; + len = ARRAY_SIZE(msm_vdec_output_internal_buffer_type); + } + + for (i = 0; i < len; i++) { + buffers = msm_vidc_get_buffers(inst, internal_buf_type[i], __func__); + if (!buffers) + return -EINVAL; + + if (buffers->reuse) { + i_vpr_l(inst, "%s: reuse enabled for %s\n", __func__, + buf_name(internal_buf_type[i])); + continue; + } + + list_for_each_entry_safe(buf, dummy, &buffers->list, list) { + i_vpr_h(inst, + "%s: destroying internal buffer: type %d idx %d fd %d addr %#llx size %d\n", + __func__, buf->type, buf->index, buf->fd, + buf->device_addr, buf->buffer_size); + + rc = msm_vidc_destroy_internal_buffer(inst, buf); + if (rc) + return rc; + } + } + + return 0; +} + +int msm_vdec_create_input_internal_buffers(struct msm_vidc_inst *inst) +{ + int rc = 0; + u32 i = 0; + + for (i = 0; i < ARRAY_SIZE(msm_vdec_internal_buffer_type); i++) { + rc = msm_vidc_create_internal_buffers(inst, msm_vdec_internal_buffer_type[i]); + if (rc) + return rc; + } + + return 0; +} + +static int msm_vdec_create_output_internal_buffers(struct msm_vidc_inst *inst) +{ + int rc = 0; + + rc = msm_vidc_create_internal_buffers(inst, MSM_VIDC_BUF_DPB); + if (rc) + return rc; + + return 0; +} + +int msm_vdec_queue_input_internal_buffers(struct msm_vidc_inst *inst) +{ + int rc = 0; + u32 i = 0; + + for (i = 0; i < ARRAY_SIZE(msm_vdec_internal_buffer_type); i++) { + rc = 
msm_vidc_queue_internal_buffers(inst, msm_vdec_internal_buffer_type[i]); + if (rc) + return rc; + } + + return 0; +} + +static int msm_vdec_queue_output_internal_buffers(struct msm_vidc_inst *inst) +{ + int rc = 0; + + rc = msm_vidc_queue_internal_buffers(inst, MSM_VIDC_BUF_DPB); + if (rc) + return rc; + + return 0; +} + +int msm_vdec_release_input_internal_buffers(struct msm_vidc_inst *inst) +{ + int rc = 0; + u32 i = 0; + + for (i = 0; i < ARRAY_SIZE(msm_vdec_internal_buffer_type); i++) { + rc = msm_vidc_release_internal_buffers(inst, msm_vdec_internal_buffer_type[i]); + if (rc) + return rc; + } + + return 0; +} + +static int msm_vdec_subscribe_input_port_settings_change(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + struct msm_vidc_core *core; + u32 payload[32] = {0}; + u32 i, j; + u32 subscribe_psc_size; + const u32 *psc; + static const struct msm_vdec_prop_type_handle prop_type_handle_arr[] = { + {HFI_PROP_BITSTREAM_RESOLUTION, msm_vdec_set_bitstream_resolution }, + {HFI_PROP_CROP_OFFSETS, msm_vdec_set_crop_offsets }, + {HFI_PROP_LUMA_CHROMA_BIT_DEPTH, msm_vdec_set_bit_depth }, + {HFI_PROP_CODED_FRAMES, msm_vdec_set_coded_frames }, + {HFI_PROP_BUFFER_FW_MIN_OUTPUT_COUNT, msm_vdec_set_min_output_count }, + {HFI_PROP_PIC_ORDER_CNT_TYPE, msm_vdec_set_picture_order_count }, + {HFI_PROP_SIGNAL_COLOR_INFO, msm_vdec_set_colorspace }, + {HFI_PROP_PROFILE, msm_vdec_set_profile }, + {HFI_PROP_LEVEL, msm_vdec_set_level }, + {HFI_PROP_TIER, msm_vdec_set_tier }, + }; + + core = inst->core; + + payload[0] = HFI_MODE_PORT_SETTINGS_CHANGE; + if (inst->codec == MSM_VIDC_H264) { + subscribe_psc_size = core->platform->data.psc_avc_tbl_size; + psc = core->platform->data.psc_avc_tbl; + } else if (inst->codec == MSM_VIDC_HEVC) { + subscribe_psc_size = core->platform->data.psc_hevc_tbl_size; + psc = core->platform->data.psc_hevc_tbl; + } else if (inst->codec == MSM_VIDC_VP9) { + subscribe_psc_size = core->platform->data.psc_vp9_tbl_size; + psc = 
core->platform->data.psc_vp9_tbl; + } else { + i_vpr_e(inst, "%s: unsupported codec: %d\n", __func__, inst->codec); + psc = NULL; + return -EINVAL; + } + + if (!psc || !subscribe_psc_size) { + i_vpr_e(inst, "%s: invalid params\n", __func__); + return -EINVAL; + } + + payload[0] = HFI_MODE_PORT_SETTINGS_CHANGE; + for (i = 0; i < subscribe_psc_size; i++) + payload[i + 1] = psc[i]; + rc = venus_hfi_session_command(inst, + HFI_CMD_SUBSCRIBE_MODE, + port, + HFI_PAYLOAD_U32_ARRAY, + &payload[0], + ((subscribe_psc_size + 1) * + sizeof(u32))); + + for (i = 0; i < subscribe_psc_size; i++) { + /* set session properties */ + for (j = 0; j < ARRAY_SIZE(prop_type_handle_arr); j++) { + if (prop_type_handle_arr[j].type == psc[i]) { + rc = prop_type_handle_arr[j].handle(inst, port); + if (rc) + goto exit; + break; + } + } + + /* is property type unknown ? */ + if (j == ARRAY_SIZE(prop_type_handle_arr)) + i_vpr_e(inst, "%s: unknown property %#x\n", __func__, psc[i]); + } + +exit: + return rc; +} + +static int msm_vdec_subscribe_property(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + u32 payload[32] = {0}; + u32 i, count = 0; + struct msm_vidc_core *core; + u32 subscribe_prop_size; + const u32 *subcribe_prop; + + core = inst->core; + + payload[0] = HFI_MODE_PROPERTY; + + if (port == INPUT_PORT) { + if (inst->codec == MSM_VIDC_H264) { + subscribe_prop_size = core->platform->data.dec_input_prop_size_avc; + subcribe_prop = core->platform->data.dec_input_prop_avc; + } else if (inst->codec == MSM_VIDC_HEVC) { + subscribe_prop_size = core->platform->data.dec_input_prop_size_hevc; + subcribe_prop = core->platform->data.dec_input_prop_hevc; + } else if (inst->codec == MSM_VIDC_VP9) { + subscribe_prop_size = core->platform->data.dec_input_prop_size_vp9; + subcribe_prop = core->platform->data.dec_input_prop_vp9; + } else { + i_vpr_e(inst, "%s: unsupported codec: %d\n", __func__, inst->codec); + subcribe_prop = NULL; + return -EINVAL; + } + + for (i = 0; i < 
subscribe_prop_size; i++) { + payload[count + 1] = subcribe_prop[i]; + count++; + + if (subcribe_prop[i] == HFI_PROP_DPB_LIST) { + inst->input_dpb_list_enabled = true; + i_vpr_h(inst, "%s: DPB_LIST subscribed on input port", __func__); + } + } + } else if (port == OUTPUT_PORT) { + if (inst->codec == MSM_VIDC_H264) { + subscribe_prop_size = core->platform->data.dec_output_prop_size_avc; + subcribe_prop = core->platform->data.dec_output_prop_avc; + } else if (inst->codec == MSM_VIDC_HEVC) { + subscribe_prop_size = core->platform->data.dec_output_prop_size_hevc; + subcribe_prop = core->platform->data.dec_output_prop_hevc; + } else if (inst->codec == MSM_VIDC_VP9) { + subscribe_prop_size = core->platform->data.dec_output_prop_size_vp9; + subcribe_prop = core->platform->data.dec_output_prop_vp9; + } else { + i_vpr_e(inst, "%s: unsupported codec: %d\n", __func__, inst->codec); + subcribe_prop = NULL; + return -EINVAL; + } + for (i = 0; i < subscribe_prop_size; i++) { + payload[count + 1] = subcribe_prop[i]; + count++; + + if (subcribe_prop[i] == HFI_PROP_DPB_LIST) { + inst->output_dpb_list_enabled = true; + i_vpr_h(inst, "%s: DPB_LIST subscribed on output port", __func__); + } + } + } else { + i_vpr_e(inst, "%s: invalid port: %d\n", __func__, port); + return -EINVAL; + } + + return venus_hfi_session_command(inst, + HFI_CMD_SUBSCRIBE_MODE, + port, + HFI_PAYLOAD_U32_ARRAY, + &payload[0], + (count + 1) * sizeof(u32)); +} + +int msm_vdec_init_input_subcr_params(struct msm_vidc_inst *inst) +{ + struct msm_vidc_subscription_params *subsc_params; + u32 left_offset, top_offset, right_offset, bottom_offset; + u32 primaries, matrix_coeff, transfer_char; + u32 full_range = 0, video_format = 0; + u32 colour_description_present_flag = 0; + u32 video_signal_type_present_flag = 0; + struct v4l2_pix_format_mplane *pixmp_ip, *pixmp_op; + + subsc_params = &inst->subcr_params[INPUT_PORT]; + pixmp_ip = &inst->fmts[INPUT_PORT].fmt.pix_mp; + pixmp_op = &inst->fmts[OUTPUT_PORT].fmt.pix_mp; + 
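Both subscribe helpers above build their HFI payload the same way: word 0 carries the mode (HFI_MODE_PORT_SETTINGS_CHANGE or HFI_MODE_PROPERTY), words 1..N carry the property IDs, and the size passed to venus_hfi_session_command() is (count + 1) * sizeof(u32). A minimal standalone sketch of that layout follows; the `demo_` names and constant value are illustrative and not part of the driver:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative constant; the real mode values live in the HFI headers. */
#define DEMO_HFI_MODE_PROPERTY 0x2u

/*
 * Fill a subscribe payload: word 0 is the mode, words 1..count are the
 * property IDs. Returns the payload size in bytes, or 0 on overflow.
 */
static size_t demo_fill_subscribe_payload(uint32_t *payload, size_t payload_words,
					  uint32_t mode, const uint32_t *props,
					  size_t count)
{
	size_t i;

	if (count + 1 > payload_words)
		return 0;

	payload[0] = mode;
	for (i = 0; i < count; i++)
		payload[i + 1] = props[i];

	return (count + 1) * sizeof(uint32_t);
}
```

The driver's fixed `u32 payload[32]` bounds the property count implicitly; the sketch makes that bound an explicit argument instead.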
subsc_params->bitstream_resolution = + pixmp_ip->width << 16 | + pixmp_ip->height; + + left_offset = inst->crop.left; + top_offset = inst->crop.top; + right_offset = (pixmp_ip->width - inst->crop.width); + bottom_offset = (pixmp_ip->height - inst->crop.height); + subsc_params->crop_offsets[0] = + left_offset << 16 | top_offset; + subsc_params->crop_offsets[1] = + right_offset << 16 | bottom_offset; + + subsc_params->fw_min_count = inst->buffers.output.min_count; + + primaries = v4l2_color_primaries_to_driver(inst, + pixmp_op->colorspace, __func__); + matrix_coeff = v4l2_matrix_coeff_to_driver(inst, + pixmp_op->ycbcr_enc, __func__); + transfer_char = v4l2_transfer_char_to_driver(inst, + pixmp_op->xfer_func, __func__); + full_range = pixmp_op->quantization == + V4L2_QUANTIZATION_FULL_RANGE ? 1 : 0; + subsc_params->color_info = + (matrix_coeff & 0xFF) | + ((transfer_char << 8) & 0xFF00) | + ((primaries << 16) & 0xFF0000) | + ((colour_description_present_flag << 24) & 0x1000000) | + ((full_range << 25) & 0x2000000) | + ((video_format << 26) & 0x1C000000) | + ((video_signal_type_present_flag << 29) & 0x20000000); + + subsc_params->profile = inst->capabilities[PROFILE].value; + subsc_params->level = inst->capabilities[LEVEL].value; + subsc_params->tier = inst->capabilities[HEVC_TIER].value; + subsc_params->pic_order_cnt = inst->capabilities[POC].value; + subsc_params->bit_depth = inst->capabilities[BIT_DEPTH].value; + if (inst->capabilities[CODED_FRAMES].value == + CODED_FRAMES_PROGRESSIVE) + subsc_params->coded_frames = HFI_BITMASK_FRAME_MBS_ONLY_FLAG; + else + subsc_params->coded_frames = 0; + + return 0; +} + +int msm_vdec_set_num_comv(struct msm_vidc_inst *inst) +{ + int rc = 0; + u32 num_comv = 0; + + num_comv = inst->capabilities[NUM_COMV].value; + i_vpr_h(inst, "%s: num COMV: %d", __func__, num_comv); + rc = venus_hfi_session_property(inst, + HFI_PROP_COMV_BUFFER_COUNT, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, INPUT_PORT), + HFI_PAYLOAD_U32, + &num_comv, + 
sizeof(u32)); + if (rc) { + i_vpr_e(inst, "%s: set property failed\n", __func__); + return rc; + } + + return rc; +} + +static int msm_vdec_read_input_subcr_params(struct msm_vidc_inst *inst) +{ + struct msm_vidc_subscription_params subsc_params; + struct msm_vidc_core *core; + u32 width, height; + u32 primaries, matrix_coeff, transfer_char; + u32 full_range = 0; + u32 colour_description_present_flag = 0; + u32 video_signal_type_present_flag = 0; + enum msm_vidc_colorformat_type output_fmt; + struct v4l2_pix_format_mplane *pixmp_ip, *pixmp_op; + + core = inst->core; + + subsc_params = inst->subcr_params[INPUT_PORT]; + pixmp_ip = &inst->fmts[INPUT_PORT].fmt.pix_mp; + pixmp_op = &inst->fmts[OUTPUT_PORT].fmt.pix_mp; + width = (subsc_params.bitstream_resolution & + HFI_BITMASK_BITSTREAM_WIDTH) >> 16; + height = subsc_params.bitstream_resolution & + HFI_BITMASK_BITSTREAM_HEIGHT; + + pixmp_ip->width = width; + pixmp_ip->height = height; + + output_fmt = v4l2_colorformat_to_driver(inst, + pixmp_op->pixelformat, __func__); + + pixmp_op->width = video_y_stride_pix(output_fmt, width); + pixmp_op->height = video_y_scanlines(output_fmt, height); + pixmp_op->plane_fmt[0].bytesperline = + video_y_stride_bytes(output_fmt, width); + pixmp_op->plane_fmt[0].sizeimage = + call_session_op(core, buffer_size, inst, MSM_VIDC_BUF_OUTPUT); + + matrix_coeff = subsc_params.color_info & 0xFF; + transfer_char = (subsc_params.color_info & 0xFF00) >> 8; + primaries = (subsc_params.color_info & 0xFF0000) >> 16; + colour_description_present_flag = + (subsc_params.color_info & 0x1000000) >> 24; + full_range = (subsc_params.color_info & 0x2000000) >> 25; + video_signal_type_present_flag = + (subsc_params.color_info & 0x20000000) >> 29; + + pixmp_op->colorspace = V4L2_COLORSPACE_DEFAULT; + pixmp_op->xfer_func = V4L2_XFER_FUNC_DEFAULT; + pixmp_op->ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT; + pixmp_op->quantization = V4L2_QUANTIZATION_DEFAULT; + + if (video_signal_type_present_flag) { + pixmp_op->quantization 
= + full_range ? + V4L2_QUANTIZATION_FULL_RANGE : + V4L2_QUANTIZATION_LIM_RANGE; + if (colour_description_present_flag) { + pixmp_op->colorspace = + v4l2_color_primaries_from_driver(inst, primaries, __func__); + pixmp_op->xfer_func = + v4l2_transfer_char_from_driver(inst, transfer_char, __func__); + pixmp_op->ycbcr_enc = + v4l2_matrix_coeff_from_driver(inst, matrix_coeff, __func__); + } else { + i_vpr_h(inst, + "%s: color description flag is not present\n", + __func__); + } + } else { + i_vpr_h(inst, "%s: video_signal type is not present\n", + __func__); + } + + /* align input port color info with output port */ + pixmp_ip->colorspace = pixmp_op->colorspace; + pixmp_ip->xfer_func = pixmp_op->xfer_func; + pixmp_ip->ycbcr_enc = pixmp_op->ycbcr_enc; + pixmp_ip->quantization = pixmp_op->quantization; + + inst->crop.top = subsc_params.crop_offsets[0] & 0xFFFF; + inst->crop.left = (subsc_params.crop_offsets[0] >> 16) & 0xFFFF; + inst->crop.height = pixmp_ip->height - + (subsc_params.crop_offsets[1] & 0xFFFF) - inst->crop.top; + inst->crop.width = pixmp_ip->width - + ((subsc_params.crop_offsets[1] >> 16) & 0xFFFF) - inst->crop.left; + + msm_vidc_update_cap_value(inst, PROFILE, subsc_params.profile, __func__); + msm_vidc_update_cap_value(inst, LEVEL, subsc_params.level, __func__); + msm_vidc_update_cap_value(inst, HEVC_TIER, subsc_params.tier, __func__); + msm_vidc_update_cap_value(inst, POC, subsc_params.pic_order_cnt, __func__); + if (subsc_params.bit_depth == BIT_DEPTH_8) + msm_vidc_update_cap_value(inst, BIT_DEPTH, BIT_DEPTH_8, __func__); + else + msm_vidc_update_cap_value(inst, BIT_DEPTH, BIT_DEPTH_10, __func__); + if (subsc_params.coded_frames & HFI_BITMASK_FRAME_MBS_ONLY_FLAG) + msm_vidc_update_cap_value(inst, CODED_FRAMES, CODED_FRAMES_PROGRESSIVE, __func__); + else + msm_vidc_update_cap_value(inst, CODED_FRAMES, CODED_FRAMES_INTERLACE, __func__); + + inst->fw_min_count = subsc_params.fw_min_count; + inst->buffers.output.min_count = + call_session_op(core, 
min_count, inst, MSM_VIDC_BUF_OUTPUT); + inst->buffers.output.extra_count = + call_session_op(core, extra_count, inst, MSM_VIDC_BUF_OUTPUT); + + return 0; +} + +int msm_vdec_input_port_settings_change(struct msm_vidc_inst *inst) +{ + u32 rc = 0; + struct v4l2_event event = {0}; + + if (!inst->bufq[INPUT_PORT].vb2q->streaming) { + i_vpr_e(inst, "%s: input port not streaming\n", + __func__); + return 0; + } + + rc = msm_vdec_read_input_subcr_params(inst); + if (rc) + return rc; + + event.type = V4L2_EVENT_SOURCE_CHANGE; + event.u.src_change.changes = V4L2_EVENT_SRC_CH_RESOLUTION; + v4l2_event_queue_fh(&inst->fh, &event); + + return rc; +} + +int msm_vdec_streamoff_input(struct msm_vidc_inst *inst) +{ + int rc = 0; + + rc = msm_vidc_session_streamoff(inst, INPUT_PORT); + if (rc) + return rc; + + return 0; +} + +int msm_vdec_streamon_input(struct msm_vidc_inst *inst) +{ + int rc = 0; + + rc = msm_vidc_check_session_supported(inst); + if (rc) + goto error; + + rc = msm_vidc_set_v4l2_properties(inst); + if (rc) + goto error; + + rc = msm_vdec_get_input_internal_buffers(inst); + if (rc) + goto error; + + rc = msm_vdec_destroy_internal_buffers(inst, INPUT_PORT); + if (rc) + goto error; + + rc = msm_vdec_create_input_internal_buffers(inst); + if (rc) + goto error; + + rc = msm_vdec_queue_input_internal_buffers(inst); + if (rc) + goto error; + + if (!inst->ipsc_properties_set) { + rc = msm_vdec_subscribe_input_port_settings_change(inst, INPUT_PORT); + if (rc) + goto error; + inst->ipsc_properties_set = true; + } + + rc = msm_vdec_subscribe_property(inst, INPUT_PORT); + if (rc) + goto error; + + rc = msm_vidc_process_streamon_input(inst); + if (rc) + goto error; + + rc = msm_vidc_flush_ts(inst); + if (rc) + goto error; + + return 0; + +error: + i_vpr_e(inst, "%s: failed\n", __func__); + return rc; +} + +static int schedule_batch_work(struct msm_vidc_inst *inst) +{ + struct msm_vidc_core *core; + + if (!inst || !inst->core) { + d_vpr_e("%s: invalid params\n", __func__); + 
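msm_vdec_init_input_subcr_params() above packs the crop into two 16+16 bit words (crop_offsets[0] = left_offset << 16 | top_offset, crop_offsets[1] = right_offset << 16 | bottom_offset), and msm_vdec_read_input_subcr_params() reverses that to recover the crop rectangle from the firmware values. A self-contained sketch of the decode direction, mirroring the mask-and-shift arithmetic in msm_vdec_read_input_subcr_params() (the `demo_` type and function are illustrative only):

```c
#include <assert.h>
#include <stdint.h>

struct demo_crop {
	uint32_t left, top, width, height;
};

/*
 * Decode packed crop offsets against the full picture size, as the
 * driver does: word 0 holds left<<16|top, word 1 holds the right and
 * bottom offsets from the picture edges (right_off<<16|bottom_off).
 */
static void demo_unpack_crop(uint32_t width, uint32_t height,
			     const uint32_t offsets[2], struct demo_crop *crop)
{
	crop->top = offsets[0] & 0xFFFF;
	crop->left = (offsets[0] >> 16) & 0xFFFF;
	crop->height = height - (offsets[1] & 0xFFFF) - crop->top;
	crop->width = width - ((offsets[1] >> 16) & 0xFFFF) - crop->left;
}
```

For a 1920x1080 picture with left/top of 8/4 and right/bottom offsets of 16/12, this yields a 1896x1064 crop anchored at (8, 4).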
return -EINVAL; + } + core = inst->core; + mod_delayed_work(core->batch_workq, &inst->decode_batch.work, + msecs_to_jiffies(core->capabilities[DECODE_BATCH_TIMEOUT].value)); + + return 0; +} + +static int cancel_batch_work(struct msm_vidc_inst *inst) +{ + if (!inst) { + d_vpr_e("%s: Invalid arguments\n", __func__); + return -EINVAL; + } + cancel_delayed_work(&inst->decode_batch.work); + + return 0; +} + +int msm_vdec_streamoff_output(struct msm_vidc_inst *inst) +{ + int rc = 0; + + /* cancel pending batch work */ + cancel_batch_work(inst); + rc = msm_vidc_session_streamoff(inst, OUTPUT_PORT); + if (rc) + return rc; + + return 0; +} + +static int msm_vdec_subscribe_output_port_settings_change(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + struct msm_vidc_core *core; + u32 payload[32] = {0}; + u32 prop_type, payload_size, payload_type; + u32 i; + struct msm_vidc_subscription_params subsc_params; + u32 subscribe_psc_size = 0; + const u32 *psc = NULL; + + core = inst->core; + + payload[0] = HFI_MODE_PORT_SETTINGS_CHANGE; + if (inst->codec == MSM_VIDC_H264) { + subscribe_psc_size = core->platform->data.psc_avc_tbl_size; + psc = core->platform->data.psc_avc_tbl; + } else if (inst->codec == MSM_VIDC_HEVC) { + subscribe_psc_size = core->platform->data.psc_hevc_tbl_size; + psc = core->platform->data.psc_hevc_tbl; + } else if (inst->codec == MSM_VIDC_VP9) { + subscribe_psc_size = core->platform->data.psc_vp9_tbl_size; + psc = core->platform->data.psc_vp9_tbl; + } else { + i_vpr_e(inst, "%s: unsupported codec: %d\n", __func__, inst->codec); + psc = NULL; + return -EINVAL; + } + + if (!psc || !subscribe_psc_size) { + i_vpr_e(inst, "%s: invalid params\n", __func__); + return -EINVAL; + } + + payload[0] = HFI_MODE_PORT_SETTINGS_CHANGE; + for (i = 0; i < subscribe_psc_size; i++) + payload[i + 1] = psc[i]; + + rc = venus_hfi_session_command(inst, + HFI_CMD_SUBSCRIBE_MODE, + port, + HFI_PAYLOAD_U32_ARRAY, + &payload[0], + ((subscribe_psc_size + 1) * 
+ sizeof(u32))); + + subsc_params = inst->subcr_params[port]; + for (i = 0; i < subscribe_psc_size; i++) { + payload[0] = 0; + payload[1] = 0; + payload_size = 0; + payload_type = 0; + prop_type = psc[i]; + switch (prop_type) { + case HFI_PROP_BITSTREAM_RESOLUTION: + payload[0] = subsc_params.bitstream_resolution; + payload_size = sizeof(u32); + payload_type = HFI_PAYLOAD_U32; + break; + case HFI_PROP_CROP_OFFSETS: + payload[0] = subsc_params.crop_offsets[0]; + payload[1] = subsc_params.crop_offsets[1]; + payload_size = sizeof(u64); + payload_type = HFI_PAYLOAD_64_PACKED; + break; + case HFI_PROP_LUMA_CHROMA_BIT_DEPTH: + payload[0] = subsc_params.bit_depth; + payload_size = sizeof(u32); + payload_type = HFI_PAYLOAD_U32; + break; + case HFI_PROP_CODED_FRAMES: + payload[0] = subsc_params.coded_frames; + payload_size = sizeof(u32); + payload_type = HFI_PAYLOAD_U32; + break; + case HFI_PROP_BUFFER_FW_MIN_OUTPUT_COUNT: + payload[0] = subsc_params.fw_min_count; + payload_size = sizeof(u32); + payload_type = HFI_PAYLOAD_U32; + break; + case HFI_PROP_PIC_ORDER_CNT_TYPE: + payload[0] = subsc_params.pic_order_cnt; + payload_size = sizeof(u32); + payload_type = HFI_PAYLOAD_U32; + break; + case HFI_PROP_SIGNAL_COLOR_INFO: + payload[0] = subsc_params.color_info; + payload_size = sizeof(u32); + payload_type = HFI_PAYLOAD_U32; + break; + case HFI_PROP_PROFILE: + payload[0] = subsc_params.profile; + payload_size = sizeof(u32); + payload_type = HFI_PAYLOAD_U32; + break; + case HFI_PROP_LEVEL: + payload[0] = subsc_params.level; + payload_size = sizeof(u32); + payload_type = HFI_PAYLOAD_U32; + break; + case HFI_PROP_TIER: + payload[0] = subsc_params.tier; + payload_size = sizeof(u32); + payload_type = HFI_PAYLOAD_U32; + break; + default: + i_vpr_e(inst, "%s: unknown property %#x\n", __func__, + prop_type); + prop_type = 0; + rc = -EINVAL; + break; + } + if (prop_type) { + rc = venus_hfi_session_property(inst, + prop_type, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, port), + 
payload_type, + &payload, + payload_size); + if (rc) + return rc; + } + } + + return rc; +} + +int msm_vdec_streamon_output(struct msm_vidc_inst *inst) +{ + int rc = 0; + + if (inst->capabilities[CODED_FRAMES].value == CODED_FRAMES_INTERLACE && + !is_ubwc_colorformat(inst->capabilities[PIX_FMTS].value)) { + i_vpr_e(inst, + "%s: interlace with non-ubwc color format is unsupported\n", + __func__); + return -EINVAL; + } + + rc = msm_vidc_check_session_supported(inst); + if (rc) + goto error; + + rc = msm_vdec_set_output_properties(inst); + if (rc) + goto error; + + if (!inst->opsc_properties_set) { + memcpy(&inst->subcr_params[OUTPUT_PORT], + &inst->subcr_params[INPUT_PORT], + sizeof(inst->subcr_params[INPUT_PORT])); + rc = msm_vdec_subscribe_output_port_settings_change(inst, OUTPUT_PORT); + if (rc) + goto error; + inst->opsc_properties_set = true; + } + + rc = msm_vdec_subscribe_property(inst, OUTPUT_PORT); + if (rc) + goto error; + + rc = msm_vdec_get_output_internal_buffers(inst); + if (rc) + goto error; + + rc = msm_vdec_destroy_internal_buffers(inst, OUTPUT_PORT); + if (rc) + goto error; + + rc = msm_vdec_create_output_internal_buffers(inst); + if (rc) + goto error; + + rc = msm_vidc_process_streamon_output(inst); + if (rc) + goto error; + + rc = msm_vdec_queue_output_internal_buffers(inst); + if (rc) + goto error; + + return 0; + +error: + i_vpr_e(inst, "%s: failed\n", __func__); + msm_vdec_streamoff_output(inst); + return rc; +} + +static inline +enum msm_vidc_allow msm_vdec_allow_queue_deferred_buffers(struct msm_vidc_inst *inst) +{ + int count; + + /* do not defer buffers initially to avoid latency issues */ + if (inst->power.buffer_counter <= SKIP_BATCH_WINDOW) + return MSM_VIDC_ALLOW; + + /* defer qbuf, if pending buffers count less than batch size */ + count = msm_vidc_num_buffers(inst, MSM_VIDC_BUF_OUTPUT, MSM_VIDC_ATTR_DEFERRED); + if (count < inst->decode_batch.size) + return MSM_VIDC_DEFER; + + return MSM_VIDC_ALLOW; +} + +static int 
msm_vdec_qbuf_batch(struct msm_vidc_inst *inst, + struct vb2_buffer *vb2) +{ + struct msm_vidc_buffer *buf = NULL; + enum msm_vidc_allow allow; + + if (!inst->decode_batch.size) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + + buf = msm_vidc_get_driver_buf(inst, vb2); + if (!buf) + return -EINVAL; + + if (is_state(inst, MSM_VIDC_OPEN) || + is_state(inst, MSM_VIDC_INPUT_STREAMING)) { + print_vidc_buffer(VIDC_LOW, "low ", "qbuf deferred", inst, buf); + return 0; + } + + allow = msm_vdec_allow_queue_deferred_buffers(inst); + if (allow == MSM_VIDC_DISALLOW) { + i_vpr_e(inst, "%s: queue deferred buffers not allowed\n", __func__); + return -EINVAL; + } else if (allow == MSM_VIDC_DEFER) { + print_vidc_buffer(VIDC_LOW, "low ", "batch-qbuf deferred", inst, buf); + schedule_batch_work(inst); + return 0; + } + + cancel_batch_work(inst); + return msm_vidc_queue_deferred_buffers(inst, MSM_VIDC_BUF_OUTPUT); +} + +static int msm_vdec_release_eligible_buffers(struct msm_vidc_inst *inst) +{ + int rc = 0; + struct msm_vidc_buffer *ro_buf; + + list_for_each_entry(ro_buf, &inst->buffers.read_only.list, list) { + /* release only release eligible read-only buffers */ + if (!(ro_buf->attr & MSM_VIDC_ATTR_RELEASE_ELIGIBLE)) + continue; + /* skip releasing buffers for which release cmd was already sent */ + if (ro_buf->attr & MSM_VIDC_ATTR_PENDING_RELEASE) + continue; + rc = venus_hfi_release_buffer(inst, ro_buf); + if (rc) + return rc; + ro_buf->attr |= MSM_VIDC_ATTR_PENDING_RELEASE; + ro_buf->attr &= ~MSM_VIDC_ATTR_RELEASE_ELIGIBLE; + print_vidc_buffer(VIDC_LOW, "low ", "release buf", inst, ro_buf); + } + + return rc; +} + +static int msm_vdec_release_nonref_buffers(struct msm_vidc_inst *inst) +{ + int rc = 0; + u32 fw_ro_count = 0, nonref_ro_count = 0; + struct msm_vidc_buffer *ro_buf; + int i = 0; + bool found = false; + + /* count read_only buffers which are not pending release in read_only list */ + list_for_each_entry(ro_buf, &inst->buffers.read_only.list, 
list) { + if (!(ro_buf->attr & MSM_VIDC_ATTR_READ_ONLY)) + continue; + if (ro_buf->attr & MSM_VIDC_ATTR_PENDING_RELEASE) + continue; + fw_ro_count++; + } + + if (fw_ro_count <= MAX_DPB_COUNT) + return 0; + + /* + * Mark those read only buffers present in read_only list as + * non-reference if that buffer is not part of dpb_list_payload. + * count such non-ref read only buffers as nonref_ro_count. + * dpb_list_payload details: + * payload[0-1] : 64 bits base_address of DPB-1 + * payload[2] : 32 bits addr_offset of DPB-1 + * payload[3] : 32 bits data_offset of DPB-1 + */ + list_for_each_entry(ro_buf, &inst->buffers.read_only.list, list) { + found = false; + if (!(ro_buf->attr & MSM_VIDC_ATTR_READ_ONLY)) + continue; + if (ro_buf->attr & MSM_VIDC_ATTR_PENDING_RELEASE) + continue; + for (i = 0; (i + 3) < MAX_DPB_LIST_ARRAY_SIZE; i = i + 4) { + if (ro_buf->device_addr == inst->dpb_list_payload[i] && + ro_buf->data_offset == inst->dpb_list_payload[i + 3]) { + found = true; + break; + } + } + if (!found) + nonref_ro_count++; + } + + if (nonref_ro_count <= inst->buffers.output.min_count) + return 0; + + i_vpr_l(inst, "%s: fw ro buf count %d, non-ref ro count %d\n", + __func__, fw_ro_count, nonref_ro_count); + + /* release the eligible buffers as per above condition */ + list_for_each_entry(ro_buf, &inst->buffers.read_only.list, list) { + found = false; + if (!(ro_buf->attr & MSM_VIDC_ATTR_READ_ONLY)) + continue; + if (ro_buf->attr & MSM_VIDC_ATTR_PENDING_RELEASE) + continue; + for (i = 0; (i + 3) < MAX_DPB_LIST_ARRAY_SIZE; i = i + 4) { + if (ro_buf->device_addr == inst->dpb_list_payload[i] && + ro_buf->data_offset == inst->dpb_list_payload[i + 3]) { + found = true; + break; + } + } + if (!found) { + ro_buf->attr |= MSM_VIDC_ATTR_PENDING_RELEASE; + print_vidc_buffer(VIDC_LOW, "low ", "release buf", inst, ro_buf); + rc = venus_hfi_release_buffer(inst, ro_buf); + if (rc) + return rc; + } + } + + return rc; +} + +int msm_vdec_qbuf(struct msm_vidc_inst *inst, struct vb2_buffer 
*vb2) +{ + int rc = 0; + + /* batch decoder output & meta buffer only */ + if (inst->decode_batch.enable && vb2->type == OUTPUT_MPLANE) + rc = msm_vdec_qbuf_batch(inst, vb2); + else + rc = msm_vidc_queue_buffer_single(inst, vb2); + if (rc) + return rc; + + /* + * if DPB_LIST property is subscribed on output port, then + * driver needs to hold at least MAX_DPB_COUNT of read only + * buffers. So call msm_vdec_release_nonref_buffers() to handle + * the same. + */ + if (vb2->type == OUTPUT_MPLANE) { + if (inst->input_dpb_list_enabled) + rc = msm_vdec_release_eligible_buffers(inst); + else if (inst->output_dpb_list_enabled) + rc = msm_vdec_release_nonref_buffers(inst); + if (rc) + return rc; + } + + return rc; +} + +static int msm_vdec_alloc_and_queue_additional_dpb_buffers(struct msm_vidc_inst *inst) +{ + struct msm_vidc_buffers *buffers; + struct msm_vidc_buffer *buffer = NULL; + int i, cur_min_count = 0, rc = 0; + + /* get latest min_count and size */ + rc = msm_vidc_get_internal_buffers(inst, MSM_VIDC_BUF_DPB); + if (rc) + return rc; + + buffers = msm_vidc_get_buffers(inst, MSM_VIDC_BUF_DPB, __func__); + if (!buffers) + return -EINVAL; + + /* get current min_count */ + list_for_each_entry(buffer, &buffers->list, list) + cur_min_count++; + + /* skip alloc and queue */ + if (cur_min_count >= buffers->min_count) + return 0; + + i_vpr_h(inst, "%s: dpb buffer count increased from %u -> %u\n", + __func__, cur_min_count, buffers->min_count); + + /* allocate additional DPB buffers */ + for (i = cur_min_count; i < buffers->min_count; i++) { + rc = msm_vidc_create_internal_buffer(inst, MSM_VIDC_BUF_DPB, i); + if (rc) + return rc; + } + + /* queue additional DPB buffers */ + rc = msm_vidc_queue_internal_buffers(inst, MSM_VIDC_BUF_DPB); + if (rc) + return rc; + + return 0; +} + +int msm_vdec_stop_cmd(struct msm_vidc_inst *inst) +{ + i_vpr_h(inst, "received cmd: drain\n"); + return msm_vidc_process_drain(inst); +} + +int msm_vdec_start_cmd(struct msm_vidc_inst *inst) +{ + int rc 
= 0; + + i_vpr_h(inst, "received cmd: resume\n"); + vb2_clear_last_buffer_dequeued(inst->bufq[OUTPUT_PORT].vb2q); + + if (inst->capabilities[CODED_FRAMES].value == CODED_FRAMES_INTERLACE && + !is_ubwc_colorformat(inst->capabilities[PIX_FMTS].value)) { + i_vpr_e(inst, + "%s: interlace with non-ubwc color format is unsupported\n", + __func__); + return -EINVAL; + } + + /* tune power features */ + inst->decode_batch.enable = msm_vidc_allow_decode_batch(inst); + msm_vidc_allow_dcvs(inst); + msm_vidc_power_data_reset(inst); + + /* + * client is completing partial port reconfiguration, + * hence reallocate input internal buffers before input port + * is resumed. + */ + if (is_sub_state(inst, MSM_VIDC_DRC) && + is_sub_state(inst, MSM_VIDC_DRC_LAST_BUFFER) && + is_sub_state(inst, MSM_VIDC_INPUT_PAUSE)) { + rc = msm_vidc_alloc_and_queue_input_internal_buffers(inst); + if (rc) + return rc; + + rc = msm_vidc_set_stage(inst, STAGE); + if (rc) + return rc; + + rc = msm_vidc_set_pipe(inst, PIPE); + if (rc) + return rc; + } + + /* allocate and queue extra dpb buffers */ + rc = msm_vdec_alloc_and_queue_additional_dpb_buffers(inst); + if (rc) + return rc; + + /* queue pending deferred buffers */ + rc = msm_vidc_queue_deferred_buffers(inst, MSM_VIDC_BUF_OUTPUT); + if (rc) + return rc; + + /* print final buffer counts & size details */ + msm_vidc_print_buffer_info(inst); + + /* print internal buffer memory usage stats */ + msm_vidc_print_memory_stats(inst); + + rc = msm_vidc_process_resume(inst); + + return rc; +} + +int msm_vdec_try_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f) +{ + int rc = 0; + struct v4l2_pix_format_mplane *pixmp = &f->fmt.pix_mp; + u32 pix_fmt; + + memset(pixmp->reserved, 0, sizeof(pixmp->reserved)); + if (f->type == INPUT_MPLANE) { + pix_fmt = v4l2_codec_to_driver(inst, f->fmt.pix_mp.pixelformat, __func__); + if (!pix_fmt) { + i_vpr_e(inst, "%s: unsupported codec, set current params\n", __func__); + f->fmt.pix_mp.width = 
inst->fmts[INPUT_PORT].fmt.pix_mp.width; + f->fmt.pix_mp.height = inst->fmts[INPUT_PORT].fmt.pix_mp.height; + f->fmt.pix_mp.pixelformat = inst->fmts[INPUT_PORT].fmt.pix_mp.pixelformat; + pix_fmt = v4l2_codec_to_driver(inst, f->fmt.pix_mp.pixelformat, __func__); + } + } else if (f->type == OUTPUT_MPLANE) { + pix_fmt = v4l2_colorformat_to_driver(inst, f->fmt.pix_mp.pixelformat, __func__); + if (!pix_fmt) { + i_vpr_e(inst, "%s: unsupported format, set current params\n", __func__); + f->fmt.pix_mp.pixelformat = inst->fmts[OUTPUT_PORT].fmt.pix_mp.pixelformat; + f->fmt.pix_mp.width = inst->fmts[OUTPUT_PORT].fmt.pix_mp.width; + f->fmt.pix_mp.height = inst->fmts[OUTPUT_PORT].fmt.pix_mp.height; + } + if (inst->bufq[INPUT_PORT].vb2q->streaming) { + f->fmt.pix_mp.height = inst->fmts[INPUT_PORT].fmt.pix_mp.height; + f->fmt.pix_mp.width = inst->fmts[INPUT_PORT].fmt.pix_mp.width; + } + } else { + i_vpr_e(inst, "%s: invalid type %d\n", __func__, f->type); + return -EINVAL; + } + + if (pixmp->field == V4L2_FIELD_ANY) + pixmp->field = V4L2_FIELD_NONE; + + pixmp->num_planes = 1; + return rc; +} + +static bool msm_vidc_check_max_sessions_vp9d(struct msm_vidc_core *core) +{ + u32 vp9d_instance_count = 0; + struct msm_vidc_inst *inst = NULL; + + core_lock(core, __func__); + list_for_each_entry(inst, &core->instances, list) { + if (is_decode_session(inst) && + inst->fmts[INPUT_PORT].fmt.pix_mp.pixelformat == + V4L2_PIX_FMT_VP9) + vp9d_instance_count++; + } + core_unlock(core, __func__); + + if (vp9d_instance_count > MAX_VP9D_INST_COUNT) + return true; + return false; +} + +int msm_vdec_s_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f) +{ + int rc = 0; + struct msm_vidc_core *core; + struct v4l2_format *fmt, *output_fmt; + u32 codec_align; + enum msm_vidc_colorformat_type colorformat; + + core = inst->core; + msm_vdec_try_fmt(inst, f); + + if (f->type == INPUT_MPLANE) { + if (inst->fmts[INPUT_PORT].fmt.pix_mp.pixelformat != + f->fmt.pix_mp.pixelformat) { + rc = 
msm_vdec_codec_change(inst, f->fmt.pix_mp.pixelformat); + if (rc) + goto err_invalid_fmt; + } + + if (f->fmt.pix_mp.pixelformat == V4L2_PIX_FMT_VP9) { + if (msm_vidc_check_max_sessions_vp9d(inst->core)) { + i_vpr_e(inst, + "%s: vp9d sessions exceeded max limit %d\n", + __func__, MAX_VP9D_INST_COUNT); + rc = -ENOMEM; + goto err_invalid_fmt; + } + } + + fmt = &inst->fmts[INPUT_PORT]; + fmt->type = INPUT_MPLANE; + + codec_align = inst->fmts[INPUT_PORT].fmt.pix_mp.pixelformat == + V4L2_PIX_FMT_HEVC ? 32 : 16; + fmt->fmt.pix_mp.width = ALIGN(f->fmt.pix_mp.width, codec_align); + fmt->fmt.pix_mp.height = ALIGN(f->fmt.pix_mp.height, codec_align); + fmt->fmt.pix_mp.num_planes = 1; + fmt->fmt.pix_mp.plane_fmt[0].bytesperline = 0; + fmt->fmt.pix_mp.plane_fmt[0].sizeimage = + call_session_op(core, buffer_size, inst, MSM_VIDC_BUF_INPUT); + inst->buffers.input.min_count = + call_session_op(core, min_count, inst, MSM_VIDC_BUF_INPUT); + inst->buffers.input.extra_count = + call_session_op(core, extra_count, inst, MSM_VIDC_BUF_INPUT); + if (inst->buffers.input.actual_count < + inst->buffers.input.min_count + + inst->buffers.input.extra_count) { + inst->buffers.input.actual_count = + inst->buffers.input.min_count + + inst->buffers.input.extra_count; + } + inst->buffers.input.size = + fmt->fmt.pix_mp.plane_fmt[0].sizeimage; + /* update input port color info */ + fmt->fmt.pix_mp.colorspace = f->fmt.pix_mp.colorspace; + fmt->fmt.pix_mp.xfer_func = f->fmt.pix_mp.xfer_func; + fmt->fmt.pix_mp.ycbcr_enc = f->fmt.pix_mp.ycbcr_enc; + fmt->fmt.pix_mp.quantization = f->fmt.pix_mp.quantization; + /* update output port color info */ + output_fmt = &inst->fmts[OUTPUT_PORT]; + output_fmt->fmt.pix_mp.colorspace = f->fmt.pix_mp.colorspace; + output_fmt->fmt.pix_mp.xfer_func = f->fmt.pix_mp.xfer_func; + output_fmt->fmt.pix_mp.ycbcr_enc = f->fmt.pix_mp.ycbcr_enc; + output_fmt->fmt.pix_mp.quantization = f->fmt.pix_mp.quantization; + + /* update crop dimensions */ + inst->crop.left = 0; + inst->crop.top 
= 0; + inst->crop.width = f->fmt.pix_mp.width; + inst->crop.height = f->fmt.pix_mp.height; + i_vpr_h(inst, + "%s: type: INPUT, codec %s width %d height %d size %u min_count %d extra_count %d\n", + __func__, v4l2_pixelfmt_name(inst, f->fmt.pix_mp.pixelformat), + f->fmt.pix_mp.width, f->fmt.pix_mp.height, + fmt->fmt.pix_mp.plane_fmt[0].sizeimage, + inst->buffers.input.min_count, + inst->buffers.input.extra_count); + } else if (f->type == OUTPUT_MPLANE) { + fmt = &inst->fmts[OUTPUT_PORT]; + fmt->type = OUTPUT_MPLANE; + if (inst->bufq[INPUT_PORT].vb2q->streaming) { + f->fmt.pix_mp.height = inst->fmts[INPUT_PORT].fmt.pix_mp.height; + f->fmt.pix_mp.width = inst->fmts[INPUT_PORT].fmt.pix_mp.width; + } + fmt->fmt.pix_mp.pixelformat = f->fmt.pix_mp.pixelformat; + colorformat = v4l2_colorformat_to_driver(inst, fmt->fmt.pix_mp.pixelformat, + __func__); + fmt->fmt.pix_mp.width = video_y_stride_pix(colorformat, f->fmt.pix_mp.width); + fmt->fmt.pix_mp.height = video_y_scanlines(colorformat, f->fmt.pix_mp.height); + fmt->fmt.pix_mp.num_planes = 1; + fmt->fmt.pix_mp.plane_fmt[0].bytesperline = + video_y_stride_bytes(colorformat, f->fmt.pix_mp.width); + fmt->fmt.pix_mp.plane_fmt[0].sizeimage = + call_session_op(core, buffer_size, inst, MSM_VIDC_BUF_OUTPUT); + + if (!inst->bufq[INPUT_PORT].vb2q->streaming) + inst->buffers.output.min_count = + call_session_op(core, min_count, inst, MSM_VIDC_BUF_OUTPUT); + inst->buffers.output.extra_count = + call_session_op(core, extra_count, inst, MSM_VIDC_BUF_OUTPUT); + if (inst->buffers.output.actual_count < + inst->buffers.output.min_count + + inst->buffers.output.extra_count) { + inst->buffers.output.actual_count = + inst->buffers.output.min_count + + inst->buffers.output.extra_count; + } + inst->buffers.output.size = + fmt->fmt.pix_mp.plane_fmt[0].sizeimage; + msm_vidc_update_cap_value(inst, PIX_FMTS, colorformat, __func__); + + /* update crop while input port is not streaming */ + if (!inst->bufq[INPUT_PORT].vb2q->streaming) { + inst->crop.top 
= 0; + inst->crop.left = 0; + inst->crop.width = f->fmt.pix_mp.width; + inst->crop.height = f->fmt.pix_mp.height; + } + i_vpr_h(inst, + "%s: type: OUTPUT, format %s width %d height %d size %u min_count %d extra_count %d\n", + __func__, v4l2_pixelfmt_name(inst, fmt->fmt.pix_mp.pixelformat), + fmt->fmt.pix_mp.width, fmt->fmt.pix_mp.height, + fmt->fmt.pix_mp.plane_fmt[0].sizeimage, + inst->buffers.output.min_count, + inst->buffers.output.extra_count); + } else { + i_vpr_e(inst, "%s: invalid type %d\n", __func__, f->type); + goto err_invalid_fmt; + } + memcpy(f, fmt, sizeof(struct v4l2_format)); + +err_invalid_fmt: + return rc; +} + +int msm_vdec_g_fmt(struct msm_vidc_inst *inst, struct v4l2_format *f) +{ + int rc = 0; + int port; + + port = v4l2_type_to_driver_port(inst, f->type, __func__); + if (port < 0) + return -EINVAL; + + memcpy(f, &inst->fmts[port], sizeof(struct v4l2_format)); + + return rc; +} + +int msm_vdec_s_selection(struct msm_vidc_inst *inst, struct v4l2_selection *s) +{ + i_vpr_e(inst, "%s: unsupported\n", __func__); + return -EINVAL; +} + +int msm_vdec_g_selection(struct msm_vidc_inst *inst, struct v4l2_selection *s) +{ + if (s->type != OUTPUT_MPLANE && s->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) { + i_vpr_e(inst, "%s: invalid type %d\n", __func__, s->type); + return -EINVAL; + } + + switch (s->target) { + case V4L2_SEL_TGT_CROP_BOUNDS: + case V4L2_SEL_TGT_CROP_DEFAULT: + case V4L2_SEL_TGT_CROP: + case V4L2_SEL_TGT_COMPOSE_BOUNDS: + case V4L2_SEL_TGT_COMPOSE_PADDED: + case V4L2_SEL_TGT_COMPOSE_DEFAULT: + case V4L2_SEL_TGT_COMPOSE: + s->r.left = inst->crop.left; + s->r.top = inst->crop.top; + s->r.width = inst->crop.width; + s->r.height = inst->crop.height; + break; + default: + i_vpr_e(inst, "%s: invalid target %d\n", + __func__, s->target); + return -EINVAL; + } + i_vpr_h(inst, "%s: target %d, r [%d, %d, %d, %d]\n", + __func__, s->target, s->r.top, s->r.left, + s->r.width, s->r.height); + return 0; +} + +int msm_vdec_subscribe_event(struct msm_vidc_inst 
*inst, + const struct v4l2_event_subscription *sub) +{ + int rc = 0; + + switch (sub->type) { + case V4L2_EVENT_EOS: + rc = v4l2_event_subscribe(&inst->fh, sub, MAX_EVENTS, NULL); + break; + case V4L2_EVENT_SOURCE_CHANGE: + rc = v4l2_src_change_event_subscribe(&inst->fh, sub); + break; + case V4L2_EVENT_CTRL: + rc = v4l2_ctrl_subscribe_event(&inst->fh, sub); + break; + default: + i_vpr_e(inst, "%s: invalid type %d id %d\n", __func__, sub->type, sub->id); + return -EINVAL; + } + + if (rc) + i_vpr_e(inst, "%s: failed, type %d id %d\n", + __func__, sub->type, sub->id); + return rc; +} + +static int msm_vdec_check_colorformat_supported(struct msm_vidc_inst *inst, + enum msm_vidc_colorformat_type colorformat) +{ + bool supported = true; + + /* do not reject colorformats before streamon */ + if (!inst->bufq[INPUT_PORT].vb2q->streaming) + return true; + + /* + * bit_depth 8 bit supports 8 bit colorformats only + * bit_depth 10 bit supports 10 bit colorformats only + * interlace supports ubwc colorformats only + */ + if (inst->capabilities[BIT_DEPTH].value == BIT_DEPTH_8 && + !is_8bit_colorformat(colorformat)) + supported = false; + if (inst->capabilities[BIT_DEPTH].value == BIT_DEPTH_10 && + !is_10bit_colorformat(colorformat)) + supported = false; + if (inst->capabilities[CODED_FRAMES].value == + CODED_FRAMES_INTERLACE && + !is_ubwc_colorformat(colorformat)) + supported = false; + + return supported; +} + +int msm_vdec_enum_fmt(struct msm_vidc_inst *inst, struct v4l2_fmtdesc *f) +{ + int rc = 0; + struct msm_vidc_core *core; + u32 array[32] = {0}; + u32 i = 0; + + if (f->index >= ARRAY_SIZE(array)) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + core = inst->core; + + if (f->type == INPUT_MPLANE) { + u32 codecs = core->capabilities[DEC_CODECS].value; + u32 idx = 0; + + for (i = 0; i <= 31; i++) { + if (codecs & BIT(i)) { + if (idx >= ARRAY_SIZE(array)) + break; + array[idx] = codecs & BIT(i); + idx++; + } + } + if (!array[f->index]) + return -EINVAL; 
+ f->pixelformat = v4l2_codec_from_driver(inst, array[f->index], + __func__); + if (!f->pixelformat) + return -EINVAL; + f->flags = V4L2_FMT_FLAG_COMPRESSED; + strscpy(f->description, "codec", sizeof(f->description)); + } else if (f->type == OUTPUT_MPLANE) { + u32 formats = inst->capabilities[PIX_FMTS].step_or_mask; + u32 idx = 0; + + for (i = 0; i <= 31; i++) { + if (formats & BIT(i)) { + if (idx >= ARRAY_SIZE(array)) + break; + if (msm_vdec_check_colorformat_supported(inst, formats & BIT(i))) { + array[idx] = formats & BIT(i); + idx++; + } + } + } + if (!array[f->index]) + return -EINVAL; + f->pixelformat = v4l2_colorformat_from_driver(inst, array[f->index], + __func__); + if (!f->pixelformat) + return -EINVAL; + strscpy(f->description, "colorformat", sizeof(f->description)); + } + + memset(f->reserved, 0, sizeof(f->reserved)); + + i_vpr_h(inst, "%s: index %d, %s: %s, flags %#x\n", + __func__, f->index, f->description, + v4l2_pixelfmt_name(inst, f->pixelformat), f->flags); + return rc; +} + +int msm_vdec_inst_init(struct msm_vidc_inst *inst) +{ + int rc = 0; + struct msm_vidc_core *core; + struct v4l2_format *f; + enum msm_vidc_colorformat_type colorformat; + + core = inst->core; + + INIT_DELAYED_WORK(&inst->decode_batch.work, msm_vidc_batch_handler); + if (core->capabilities[DECODE_BATCH].value) { + inst->decode_batch.enable = true; + inst->decode_batch.size = MAX_DEC_BATCH_SIZE; + } + if (core->capabilities[DCVS].value) + inst->power.dcvs_mode = true; + + f = &inst->fmts[INPUT_PORT]; + f->type = INPUT_MPLANE; + f->fmt.pix_mp.width = DEFAULT_WIDTH; + f->fmt.pix_mp.height = DEFAULT_HEIGHT; + f->fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264; + f->fmt.pix_mp.num_planes = 1; + f->fmt.pix_mp.plane_fmt[0].bytesperline = 0; + f->fmt.pix_mp.plane_fmt[0].sizeimage = + call_session_op(core, buffer_size, inst, MSM_VIDC_BUF_INPUT); + f->fmt.pix_mp.field = V4L2_FIELD_NONE; + inst->buffers.input.min_count = + call_session_op(core, min_count, inst, MSM_VIDC_BUF_INPUT); + 
inst->buffers.input.extra_count = + call_session_op(core, extra_count, inst, MSM_VIDC_BUF_INPUT); + inst->buffers.input.actual_count = + inst->buffers.input.min_count + + inst->buffers.input.extra_count; + inst->buffers.input.size = f->fmt.pix_mp.plane_fmt[0].sizeimage; + + inst->crop.left = 0; + inst->crop.top = 0; + inst->crop.width = f->fmt.pix_mp.width; + inst->crop.height = f->fmt.pix_mp.height; + + f = &inst->fmts[OUTPUT_PORT]; + f->type = OUTPUT_MPLANE; + f->fmt.pix_mp.pixelformat = + v4l2_colorformat_from_driver(inst, MSM_VIDC_FMT_NV12C, __func__); + colorformat = v4l2_colorformat_to_driver(inst, f->fmt.pix_mp.pixelformat, __func__); + f->fmt.pix_mp.width = video_y_stride_pix(colorformat, DEFAULT_WIDTH); + f->fmt.pix_mp.height = video_y_scanlines(colorformat, DEFAULT_HEIGHT); + f->fmt.pix_mp.num_planes = 1; + f->fmt.pix_mp.plane_fmt[0].bytesperline = + video_y_stride_bytes(colorformat, DEFAULT_WIDTH); + f->fmt.pix_mp.plane_fmt[0].sizeimage = + call_session_op(core, buffer_size, inst, MSM_VIDC_BUF_OUTPUT); + f->fmt.pix_mp.field = V4L2_FIELD_NONE; + f->fmt.pix_mp.colorspace = V4L2_COLORSPACE_DEFAULT; + f->fmt.pix_mp.xfer_func = V4L2_XFER_FUNC_DEFAULT; + f->fmt.pix_mp.ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT; + f->fmt.pix_mp.quantization = V4L2_QUANTIZATION_DEFAULT; + inst->buffers.output.min_count = + call_session_op(core, min_count, inst, MSM_VIDC_BUF_OUTPUT); + inst->buffers.output.extra_count = + call_session_op(core, extra_count, inst, MSM_VIDC_BUF_OUTPUT); + inst->buffers.output.actual_count = + inst->buffers.output.min_count + + inst->buffers.output.extra_count; + inst->buffers.output.size = f->fmt.pix_mp.plane_fmt[0].sizeimage; + inst->fw_min_count = 0; + + inst->input_dpb_list_enabled = false; + inst->output_dpb_list_enabled = false; + + rc = msm_vdec_codec_change(inst, inst->fmts[INPUT_PORT].fmt.pix_mp.pixelformat); + + return rc; +} + +int msm_vdec_inst_deinit(struct msm_vidc_inst *inst) +{ + int rc = 0; + + /* cancel pending batch work */ + 
cancel_batch_work(inst); + rc = msm_vidc_ctrl_handler_deinit(inst); + + return rc; +} From patchwork Fri Jul 28 13:23:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vikash Garodia X-Patchwork-Id: 13331924 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9152C04E69 for ; Fri, 28 Jul 2023 13:26:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236758AbjG1N0T (ORCPT ); Fri, 28 Jul 2023 09:26:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42614 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234277AbjG1N0G (ORCPT ); Fri, 28 Jul 2023 09:26:06 -0400 Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com [205.220.168.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9B6544483; Fri, 28 Jul 2023 06:25:51 -0700 (PDT) Received: from pps.filterd (m0279865.ppops.net [127.0.0.1]) by mx0a-0031df01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 36S9Kvwj008893; Fri, 28 Jul 2023 13:25:43 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=qcppdkim1; bh=tI8WQiJsc3snSCqoYqTYhp9DaiBIcpXw30eJ4gxkENw=; b=JF8iePRd7nkSd1oFJqhCbPRAKDOqTVh7PoZF61YjcR0BGbW7XM8qa6apY963LUQUqtgm GavePD2pZQMj97nQRTfvBRZIQGr/z5+mViQK1NVQcjZcEGf4jNCX/ebba3amayrH3ZaJ OvDfZqIasUMPQSpD5UWpgntDzX773dxCcziuhfnTaI9xmrGG2V/78mNqj//cta8Pge+k FzrxEtWXMEnvy6h3ylpvBi36UHYZ3MHCVlhMkpEOoTwz+p61g3VV1/1uQXkobjRRfube ksSB5iNl34eYe/2LXln8dFdWJnMK5W372LOMhRCyCXprbMPYC+DFZvJTg7JgMQyFw8o+ nA== Received: from nasanppmta02.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 
3s447kh7td-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Fri, 28 Jul 2023 13:25:42 +0000 Received: from nasanex01a.na.qualcomm.com (nasanex01a.na.qualcomm.com [10.52.223.231]) by NASANPPMTA02.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 36SDPgMe002901 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Fri, 28 Jul 2023 13:25:42 GMT Received: from hu-vgarodia-hyd.qualcomm.com (10.80.80.8) by nasanex01a.na.qualcomm.com (10.52.223.231) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.30; Fri, 28 Jul 2023 06:25:38 -0700 From: Vikash Garodia To: , , , , , , , , CC: , Vikash Garodia Subject: [PATCH 09/33] iris: vidc: add control files Date: Fri, 28 Jul 2023 18:53:20 +0530 Message-ID: <1690550624-14642-10-git-send-email-quic_vgarodia@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com> References: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01a.na.qualcomm.com (10.52.223.231) To nasanex01a.na.qualcomm.com (10.52.223.231) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: 3BQuUU7g_UxNy4uQga8ZNF_Hp8PqPr7X X-Proofpoint-GUID: 3BQuUU7g_UxNy4uQga8ZNF_Hp8PqPr7X X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-07-27_10,2023-07-26_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 mlxlogscore=999 clxscore=1015 malwarescore=0 lowpriorityscore=0 priorityscore=1501 impostorscore=0 bulkscore=0 phishscore=0 suspectscore=0 spamscore=0 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2306200000 definitions=main-2307280124 Precedence: bulk List-ID: X-Mailing-List: 
linux-media@vger.kernel.org This implements supported v4l2 encoder and decoder controls. Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../platform/qcom/iris/vidc/inc/msm_vidc_control.h | 26 + .../platform/qcom/iris/vidc/src/msm_vidc_control.c | 824 +++++++++++++++++++++ 2 files changed, 850 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_control.h create mode 100644 drivers/media/platform/qcom/iris/vidc/src/msm_vidc_control.c diff --git a/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_control.h b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_control.h new file mode 100644 index 0000000..08ba77d --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_control.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef _MSM_VIDC_CONTROL_H_ +#define _MSM_VIDC_CONTROL_H_ + +#include "msm_vidc_inst.h" +#include "msm_vidc_internal.h" + +int msm_vidc_ctrl_handler_init(struct msm_vidc_inst *inst, bool init); +int msm_vidc_ctrl_handler_deinit(struct msm_vidc_inst *inst); +int msm_v4l2_op_s_ctrl(struct v4l2_ctrl *ctrl); +int msm_v4l2_op_g_volatile_ctrl(struct v4l2_ctrl *ctrl); +int msm_vidc_s_ctrl(struct msm_vidc_inst *inst, struct v4l2_ctrl *ctrl); +int msm_vidc_prepare_dependency_list(struct msm_vidc_inst *inst); +int msm_vidc_adjust_v4l2_properties(struct msm_vidc_inst *inst); +int msm_vidc_set_v4l2_properties(struct msm_vidc_inst *inst); +bool is_valid_cap_id(enum msm_vidc_inst_capability_type cap_id); +bool is_valid_cap(struct msm_vidc_inst *inst, + enum msm_vidc_inst_capability_type cap_id); +enum msm_vidc_inst_capability_type msm_vidc_get_cap_id(struct msm_vidc_inst *inst, + u32 id); +#endif diff --git a/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_control.c 
b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_control.c new file mode 100644 index 0000000..73b0db6 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_control.c @@ -0,0 +1,824 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include "msm_venc.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_internal.h" +#include "msm_vidc_platform.h" + +static inline bool has_children(struct msm_vidc_inst_cap *cap) +{ + return !!cap->children[0]; +} + +static inline bool is_leaf(struct msm_vidc_inst_cap *cap) +{ + return !has_children(cap); +} + +bool is_valid_cap_id(enum msm_vidc_inst_capability_type cap_id) +{ + return cap_id > INST_CAP_NONE && cap_id < INST_CAP_MAX; +} + +bool is_valid_cap(struct msm_vidc_inst *inst, + enum msm_vidc_inst_capability_type cap_id) +{ + if (cap_id <= INST_CAP_NONE || cap_id >= INST_CAP_MAX) + return false; + + return !!inst->capabilities[cap_id].cap_id; +} + +static inline bool is_all_childrens_visited(struct msm_vidc_inst_cap *cap, + bool lookup[INST_CAP_MAX]) +{ + bool found = true; + int i; + + for (i = 0; i < MAX_CAP_CHILDREN; i++) { + if (cap->children[i] == INST_CAP_NONE) + continue; + + if (!lookup[cap->children[i]]) { + found = false; + break; + } + } + return found; +} + +static int add_node_list(struct list_head *list, enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst_cap_entry *entry = NULL; + + entry = vzalloc(sizeof(*entry)); + if (!entry) { + d_vpr_e("%s: allocation failed\n", __func__); + return -ENOMEM; + } + + INIT_LIST_HEAD(&entry->list); + entry->cap_id = cap_id; + list_add(&entry->list, list); + + return 0; +} + +static int add_node(struct list_head *list, struct msm_vidc_inst_cap *lcap, + bool lookup[INST_CAP_MAX]) +{ + int rc = 0; + + if (lookup[lcap->cap_id]) + return 0; + + rc = 
add_node_list(list, lcap->cap_id); + if (rc) + return rc; + + lookup[lcap->cap_id] = true; + return 0; +} + +static int msm_vidc_add_capid_to_fw_list(struct msm_vidc_inst *inst, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst_cap_entry *entry = NULL; + int rc = 0; + + /* skip adding if cap_id already present in firmware list */ + list_for_each_entry(entry, &inst->firmware_list, list) { + if (entry->cap_id == cap_id) { + i_vpr_l(inst, + "%s: cap[%d] %s already present in fw list\n", + __func__, cap_id, cap_name(cap_id)); + return 0; + } + } + + rc = add_node_list(&inst->firmware_list, cap_id); + if (rc) + return rc; + + return 0; +} + +static int msm_vidc_add_children(struct msm_vidc_inst *inst, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst_cap *cap; + int i, rc = 0; + + cap = &inst->capabilities[cap_id]; + + for (i = 0; i < MAX_CAP_CHILDREN; i++) { + if (!cap->children[i]) + break; + + if (!is_valid_cap_id(cap->children[i])) + continue; + + rc = add_node_list(&inst->children_list, cap->children[i]); + if (rc) + return rc; + } + + return rc; +} + +static int msm_vidc_adjust_cap(struct msm_vidc_inst *inst, + enum msm_vidc_inst_capability_type cap_id, + struct v4l2_ctrl *ctrl, const char *func) +{ + struct msm_vidc_inst_cap *cap; + int rc = 0; + + /* validate cap_id */ + if (!is_valid_cap_id(cap_id)) + return 0; + + /* validate cap */ + cap = &inst->capabilities[cap_id]; + if (!is_valid_cap(inst, cap->cap_id)) + return 0; + + /* check if adjust supported */ + if (!cap->adjust) { + if (ctrl) + msm_vidc_update_cap_value(inst, cap_id, ctrl->val, func); + return 0; + } + + /* call adjust */ + rc = cap->adjust(inst, ctrl); + if (rc) { + i_vpr_e(inst, "%s: adjust cap failed for %s\n", func, cap_name(cap_id)); + return rc; + } + + return rc; +} + +static int msm_vidc_set_cap(struct msm_vidc_inst *inst, + enum msm_vidc_inst_capability_type cap_id, + const char *func) +{ + struct msm_vidc_inst_cap *cap; + int rc = 0; + + /* 
validate cap_id */ + if (!is_valid_cap_id(cap_id)) + return 0; + + /* validate cap */ + cap = &inst->capabilities[cap_id]; + if (!is_valid_cap(inst, cap->cap_id)) + return 0; + + /* check if set supported */ + if (!cap->set) + return 0; + + /* call set */ + rc = cap->set(inst, cap_id); + if (rc) { + i_vpr_e(inst, "%s: set cap failed for %s\n", func, cap_name(cap_id)); + return rc; + } + + return rc; +} + +static int msm_vidc_adjust_dynamic_property(struct msm_vidc_inst *inst, + enum msm_vidc_inst_capability_type cap_id, + struct v4l2_ctrl *ctrl) +{ + struct msm_vidc_inst_cap_entry *entry = NULL, *temp = NULL; + struct msm_vidc_inst_cap *cap; + s32 prev_value; + int rc = 0; + + cap = &inst->capabilities[0]; + + /* sanitize cap_id */ + if (!is_valid_cap_id(cap_id)) { + i_vpr_e(inst, "%s: invalid cap_id %u\n", __func__, cap_id); + return -EINVAL; + } + + if (!(cap[cap_id].flags & CAP_FLAG_DYNAMIC_ALLOWED)) { + i_vpr_h(inst, + "%s: dynamic setting of cap[%d] %s is not allowed\n", + __func__, cap_id, cap_name(cap_id)); + return -EBUSY; + } + i_vpr_h(inst, "%s: cap[%d] %s\n", __func__, cap_id, cap_name(cap_id)); + + prev_value = cap[cap_id].value; + rc = msm_vidc_adjust_cap(inst, cap_id, ctrl, __func__); + if (rc) + return rc; + + if (cap[cap_id].value == prev_value && cap_id == GOP_SIZE) { + /* + * Ignore setting same GOP size value to firmware to avoid + * unnecessary generation of IDR frame. 
+ */ + return 0; + } + + /* add cap_id to firmware list always */ + rc = msm_vidc_add_capid_to_fw_list(inst, cap_id); + if (rc) + goto error; + + /* add children only if cap value modified */ + if (cap[cap_id].value == prev_value) + return 0; + + rc = msm_vidc_add_children(inst, cap_id); + if (rc) + goto error; + + list_for_each_entry_safe(entry, temp, &inst->children_list, list) { + if (!is_valid_cap_id(entry->cap_id)) { + rc = -EINVAL; + goto error; + } + + if (!cap[entry->cap_id].adjust) { + i_vpr_e(inst, "%s: child cap must have adjust function %s\n", + __func__, cap_name(entry->cap_id)); + rc = -EINVAL; + goto error; + } + + prev_value = cap[entry->cap_id].value; + rc = msm_vidc_adjust_cap(inst, entry->cap_id, NULL, __func__); + if (rc) + goto error; + + /* add children if cap value modified */ + if (cap[entry->cap_id].value != prev_value) { + /* add cap_id to firmware list always */ + rc = msm_vidc_add_capid_to_fw_list(inst, entry->cap_id); + if (rc) + goto error; + + rc = msm_vidc_add_children(inst, entry->cap_id); + if (rc) + goto error; + } + + list_del_init(&entry->list); + vfree(entry); + } + + /* expecting children_list to be empty */ + if (!list_empty(&inst->children_list)) { + i_vpr_e(inst, "%s: child_list is not empty\n", __func__); + rc = -EINVAL; + goto error; + } + + return 0; +error: + list_for_each_entry_safe(entry, temp, &inst->children_list, list) { + i_vpr_e(inst, "%s: child list: %s\n", __func__, cap_name(entry->cap_id)); + list_del_init(&entry->list); + vfree(entry); + } + list_for_each_entry_safe(entry, temp, &inst->firmware_list, list) { + i_vpr_e(inst, "%s: fw list: %s\n", __func__, cap_name(entry->cap_id)); + list_del_init(&entry->list); + vfree(entry); + } + + return rc; +} + +static int msm_vidc_set_dynamic_property(struct msm_vidc_inst *inst) +{ + struct msm_vidc_inst_cap_entry *entry = NULL, *temp = NULL; + int rc = 0; + + list_for_each_entry_safe(entry, temp, &inst->firmware_list, list) { + rc = msm_vidc_set_cap(inst, 
entry->cap_id, __func__); + if (rc) + goto error; + + list_del_init(&entry->list); + vfree(entry); + } + + return 0; +error: + list_for_each_entry_safe(entry, temp, &inst->firmware_list, list) { + i_vpr_e(inst, "%s: fw list: %s\n", __func__, cap_name(entry->cap_id)); + list_del_init(&entry->list); + vfree(entry); + } + + return rc; +} + +int msm_vidc_ctrl_handler_deinit(struct msm_vidc_inst *inst) +{ + i_vpr_h(inst, "%s(): num ctrls %d\n", __func__, inst->num_ctrls); + v4l2_ctrl_handler_free(&inst->ctrl_handler); + memset(&inst->ctrl_handler, 0, sizeof(struct v4l2_ctrl_handler)); + + return 0; +} + +int msm_vidc_ctrl_handler_init(struct msm_vidc_inst *inst, bool init) +{ + int rc = 0; + struct msm_vidc_inst_cap *cap; + struct msm_vidc_core *core; + int idx = 0; + struct v4l2_ctrl_config ctrl_cfg = {0}; + int num_ctrls = 0, ctrl_idx = 0; + u64 codecs_count, step_or_mask; + + core = inst->core; + cap = &inst->capabilities[0]; + + if (!core->v4l2_ctrl_ops) { + i_vpr_e(inst, "%s: no control ops\n", __func__); + return -EINVAL; + } + + for (idx = 0; idx < INST_CAP_MAX; idx++) { + if (cap[idx].v4l2_id) + num_ctrls++; + } + if (!num_ctrls) { + i_vpr_e(inst, "%s: no ctrls available in cap database\n", + __func__); + return -EINVAL; + } + + if (init) { + codecs_count = is_encode_session(inst) ? 
+ core->enc_codecs_count : + core->dec_codecs_count; + rc = v4l2_ctrl_handler_init(&inst->ctrl_handler, + INST_CAP_MAX * codecs_count); + if (rc) { + i_vpr_e(inst, "control handler init failed, %d\n", + inst->ctrl_handler.error); + goto error; + } + } + + for (idx = 0; idx < INST_CAP_MAX; idx++) { + struct v4l2_ctrl *ctrl; + + if (!cap[idx].v4l2_id) + continue; + + if (ctrl_idx >= num_ctrls) { + i_vpr_e(inst, + "%s: invalid ctrl %#x, max allowed %d\n", + __func__, cap[idx].v4l2_id, + num_ctrls); + rc = -EINVAL; + goto error; + } + i_vpr_l(inst, + "%s: cap[%d] %24s, value %d min %d max %d step_or_mask %#x flags %#x v4l2_id %#x hfi_id %#x\n", + __func__, idx, cap_name(idx), + cap[idx].value, + cap[idx].min, + cap[idx].max, + cap[idx].step_or_mask, + cap[idx].flags, + cap[idx].v4l2_id, + cap[idx].hfi_id); + + memset(&ctrl_cfg, 0, sizeof(struct v4l2_ctrl_config)); + + /* + * few controls might have been already initialized in instance initialization, + * so modify the range values for them instead of initializing them again + */ + if (!init) { + struct msm_vidc_ctrl_data ctrl_priv_data; + + ctrl = v4l2_ctrl_find(&inst->ctrl_handler, cap[idx].v4l2_id); + if (ctrl) { + step_or_mask = (cap[idx].flags & CAP_FLAG_MENU) ? 
+ ~(cap[idx].step_or_mask) : + cap[idx].step_or_mask; + memset(&ctrl_priv_data, 0, sizeof(struct msm_vidc_ctrl_data)); + ctrl_priv_data.skip_s_ctrl = true; + ctrl->priv = &ctrl_priv_data; + v4l2_ctrl_modify_range(ctrl, + cap[idx].min, + cap[idx].max, + step_or_mask, + cap[idx].value); + /* reset private data to null to ensure s_ctrl not skipped */ + ctrl->priv = NULL; + continue; + } + } + + if (cap[idx].flags & CAP_FLAG_MENU) { + ctrl = v4l2_ctrl_new_std_menu(&inst->ctrl_handler, + core->v4l2_ctrl_ops, + cap[idx].v4l2_id, + cap[idx].max, + ~(cap[idx].step_or_mask), + cap[idx].value); + } else { + ctrl = v4l2_ctrl_new_std(&inst->ctrl_handler, + core->v4l2_ctrl_ops, + cap[idx].v4l2_id, + cap[idx].min, + cap[idx].max, + cap[idx].step_or_mask, + cap[idx].value); + } + if (!ctrl) { + i_vpr_e(inst, "%s: invalid ctrl %#x cap %24s\n", __func__, + cap[idx].v4l2_id, cap_name(idx)); + rc = -EINVAL; + goto error; + } + + rc = inst->ctrl_handler.error; + if (rc) { + i_vpr_e(inst, + "error adding ctrl (%#x) to ctrl handle, %d\n", + cap[idx].v4l2_id, + inst->ctrl_handler.error); + goto error; + } + + if (cap[idx].flags & CAP_FLAG_VOLATILE) + ctrl->flags |= V4L2_CTRL_FLAG_VOLATILE; + + ctrl->flags |= V4L2_CTRL_FLAG_EXECUTE_ON_WRITE; + ctrl_idx++; + } + inst->num_ctrls = num_ctrls; + i_vpr_h(inst, "%s(): num ctrls %d\n", __func__, inst->num_ctrls); + + return 0; +error: + msm_vidc_ctrl_handler_deinit(inst); + + return rc; +} + +static int +msm_vidc_update_buffer_count_if_needed(struct msm_vidc_inst *inst, + enum msm_vidc_inst_capability_type cap_id) +{ + int rc = 0; + bool update_input_port = false, update_output_port = false; + + switch (cap_id) { + case LAYER_TYPE: + case ENH_LAYER_COUNT: + case LAYER_ENABLE: + update_input_port = true; + break; + default: + update_input_port = false; + update_output_port = false; + break; + } + + if (update_input_port) { + rc = msm_vidc_update_buffer_count(inst, INPUT_PORT); + if (rc) + return rc; + } + if (update_output_port) { + rc = 
msm_vidc_update_buffer_count(inst, OUTPUT_PORT); + if (rc) + return rc; + } + + return rc; +} + +int msm_v4l2_op_g_volatile_ctrl(struct v4l2_ctrl *ctrl) +{ + int rc = 0; + struct msm_vidc_inst *inst; + + if (!ctrl) { + d_vpr_e("%s: invalid ctrl parameter\n", __func__); + return -EINVAL; + } + + inst = container_of(ctrl->handler, + struct msm_vidc_inst, ctrl_handler); + inst = get_inst_ref(g_core, inst); + if (!inst) { + d_vpr_e("%s: could not find inst for ctrl %s id %#x\n", + __func__, ctrl->name, ctrl->id); + return -EINVAL; + } + client_lock(inst, __func__); + inst_lock(inst, __func__); + + rc = msm_vidc_get_control(inst, ctrl); + if (rc) { + i_vpr_e(inst, "%s: failed for ctrl %s id %#x\n", + __func__, ctrl->name, ctrl->id); + goto unlock; + } else { + i_vpr_h(inst, "%s: ctrl %s id %#x, value %d\n", + __func__, ctrl->name, ctrl->id, ctrl->val); + } + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + return rc; +} + +static int +msm_vidc_update_static_property(struct msm_vidc_inst *inst, + enum msm_vidc_inst_capability_type cap_id, + struct v4l2_ctrl *ctrl) +{ + int rc = 0; + + /* update value to db */ + msm_vidc_update_cap_value(inst, cap_id, ctrl->val, __func__); + + if (cap_id == ROTATION) { + struct v4l2_format *output_fmt; + + output_fmt = &inst->fmts[OUTPUT_PORT]; + rc = msm_venc_s_fmt_output(inst, output_fmt); + if (rc) + return rc; + } + + if (cap_id == BITSTREAM_SIZE_OVERWRITE) { + rc = msm_vidc_update_bitstream_buffer_size(inst); + if (rc) + return rc; + } + + if (cap_id == ENH_LAYER_COUNT && inst->codec == MSM_VIDC_HEVC) { + u32 enable; + + /* enable LAYER_ENABLE cap if HEVC_HIER enh layers > 0 */ + if (ctrl->val > 0) + enable = 1; + else + enable = 0; + + msm_vidc_update_cap_value(inst, LAYER_ENABLE, enable, __func__); + } + + rc = msm_vidc_update_buffer_count_if_needed(inst, cap_id); + + return rc; +} + +int msm_vidc_s_ctrl(struct msm_vidc_inst *inst, struct v4l2_ctrl *ctrl) +{ + enum 
msm_vidc_inst_capability_type cap_id; + struct msm_vidc_inst_cap *cap; + int rc = 0; + u32 port; + + cap = &inst->capabilities[0]; + + i_vpr_h(inst, FMT_STRING_SET_CTRL, + __func__, state_name(inst->state), ctrl->name, ctrl->id, ctrl->val); + + cap_id = msm_vidc_get_cap_id(inst, ctrl->id); + if (!is_valid_cap_id(cap_id)) { + i_vpr_e(inst, "%s: invalid cap_id for ctrl %s\n", __func__, ctrl->name); + return -EINVAL; + } + + /* mark client set flag */ + cap[cap_id].flags |= CAP_FLAG_CLIENT_SET; + + port = is_encode_session(inst) ? OUTPUT_PORT : INPUT_PORT; + if (!inst->bufq[port].vb2q->streaming) { + /* static case */ + rc = msm_vidc_update_static_property(inst, cap_id, ctrl); + if (rc) + return rc; + } else { + /* dynamic case */ + rc = msm_vidc_adjust_dynamic_property(inst, cap_id, ctrl); + if (rc) + return rc; + + rc = msm_vidc_set_dynamic_property(inst); + if (rc) + return rc; + } + + return rc; +} + +int msm_v4l2_op_s_ctrl(struct v4l2_ctrl *ctrl) +{ + struct msm_vidc_inst *inst; + struct msm_vidc_ctrl_data *priv_ctrl_data; + int rc = 0; + + if (!ctrl) { + d_vpr_e("%s: invalid ctrl parameter\n", __func__); + return -EINVAL; + } + + /* + * v4l2_ctrl_modify_range may internally call s_ctrl, + * which would again try to acquire the lock, leading to a deadlock. + * Add a check to avoid such a scenario. + */ + priv_ctrl_data = ctrl->priv ? 
ctrl->priv : NULL; + if (priv_ctrl_data && priv_ctrl_data->skip_s_ctrl) { + d_vpr_l("%s: skip s_ctrl (%s)\n", __func__, ctrl->name); + return 0; + } + + inst = container_of(ctrl->handler, struct msm_vidc_inst, ctrl_handler); + inst = get_inst_ref(g_core, inst); + if (!inst) { + d_vpr_e("%s: invalid instance\n", __func__); + return -EINVAL; + } + + client_lock(inst, __func__); + inst_lock(inst, __func__); + rc = inst->event_handle(inst, MSM_VIDC_S_CTRL, ctrl); + if (rc) + goto unlock; + +unlock: + inst_unlock(inst, __func__); + client_unlock(inst, __func__); + put_inst(inst); + return rc; +} + +int msm_vidc_prepare_dependency_list(struct msm_vidc_inst *inst) +{ + struct list_head leaf_list, opt_list; + struct msm_vidc_inst_cap *cap, *lcap, *temp_cap; + struct msm_vidc_inst_cap_entry *entry = NULL, *temp = NULL; + bool leaf_visited[INST_CAP_MAX]; + bool opt_visited[INST_CAP_MAX]; + int tmp_count_total, tmp_count, num_nodes = 0; + int i, rc = 0; + + cap = &inst->capabilities[0]; + + if (!list_empty(&inst->caps_list)) { + i_vpr_h(inst, "%s: dependency list already prepared\n", __func__); + return 0; + } + + /* init local list and lookup table entries */ + INIT_LIST_HEAD(&leaf_list); + INIT_LIST_HEAD(&opt_list); + memset(&leaf_visited, 0, sizeof(leaf_visited)); + memset(&opt_visited, 0, sizeof(opt_visited)); + + /* populate leaf nodes first */ + for (i = 1; i < INST_CAP_MAX; i++) { + lcap = &cap[i]; + if (!is_valid_cap(inst, lcap->cap_id)) + continue; + + /* sanitize cap value */ + if (i != lcap->cap_id) { + i_vpr_e(inst, "%s: cap id mismatch. 
expected %s, actual %s\n", + __func__, cap_name(i), cap_name(lcap->cap_id)); + rc = -EINVAL; + goto error; + } + + /* add all leaf nodes */ + if (is_leaf(lcap)) { + rc = add_node(&leaf_list, lcap, leaf_visited); + if (rc) + goto error; + } else { + rc = add_node(&opt_list, lcap, opt_visited); + if (rc) + goto error; + } + } + + /* find total optional list entries */ + list_for_each_entry(entry, &opt_list, list) + num_nodes++; + + /* used for loop detection */ + tmp_count_total = num_nodes; + tmp_count = num_nodes; + + /* sort final outstanding nodes */ + list_for_each_entry_safe(entry, temp, &opt_list, list) { + /* initially remove entry from opt list */ + list_del_init(&entry->list); + opt_visited[entry->cap_id] = false; + tmp_count--; + temp_cap = &cap[entry->cap_id]; + + /* + * if all children are visited, add this entry to the + * leaf list; else add it to the end of the optional list. + */ + if (is_all_childrens_visited(temp_cap, leaf_visited)) { + list_add(&entry->list, &leaf_list); + leaf_visited[entry->cap_id] = true; + tmp_count_total--; + } else { + list_add_tail(&entry->list, &opt_list); + opt_visited[entry->cap_id] = true; + } + + /* detect loop */ + if (!tmp_count) { + if (num_nodes == tmp_count_total) { + i_vpr_e(inst, "%s: loop detected in subgraph %d\n", + __func__, num_nodes); + rc = -EINVAL; + goto error; + } + num_nodes = tmp_count_total; + tmp_count = tmp_count_total; + } + } + + /* expecting opt_list to be empty */ + if (!list_empty(&opt_list)) { + i_vpr_e(inst, "%s: opt_list is not empty\n", __func__); + rc = -EINVAL; + goto error; + } + + /* move elements to &inst->caps_list from local */ + list_replace_init(&leaf_list, &inst->caps_list); + + return 0; +error: + list_for_each_entry_safe(entry, temp, &opt_list, list) { + i_vpr_e(inst, "%s: opt_list: %s\n", __func__, cap_name(entry->cap_id)); + list_del_init(&entry->list); + vfree(entry); + } + list_for_each_entry_safe(entry, temp, &leaf_list, list) { + i_vpr_e(inst, "%s: leaf_list: %s\n", __func__, 
cap_name(entry->cap_id));
+		list_del_init(&entry->list);
+		vfree(entry);
+	}
+	return rc;
+}
+
+int msm_vidc_adjust_v4l2_properties(struct msm_vidc_inst *inst)
+{
+	struct msm_vidc_inst_cap_entry *entry = NULL, *temp = NULL;
+	int rc = 0;
+
+	/* adjust all possible caps from caps_list */
+	list_for_each_entry_safe(entry, temp, &inst->caps_list, list) {
+		i_vpr_l(inst, "%s: cap: id %3u, name %s\n", __func__,
+			entry->cap_id, cap_name(entry->cap_id));
+
+		rc = msm_vidc_adjust_cap(inst, entry->cap_id, NULL, __func__);
+		if (rc)
+			return rc;
+	}
+
+	return rc;
+}
+
+int msm_vidc_set_v4l2_properties(struct msm_vidc_inst *inst)
+{
+	struct msm_vidc_inst_cap_entry *entry = NULL, *temp = NULL;
+	int rc = 0;
+
+	/* set all caps from caps_list */
+	list_for_each_entry_safe(entry, temp, &inst->caps_list, list) {
+		rc = msm_vidc_set_cap(inst, entry->cap_id, __func__);
+		if (rc)
+			return rc;
+	}
+
+	return rc;
+}

From patchwork Fri Jul 28 13:23:21 2023
X-Patchwork-Submitter: Vikash Garodia
X-Patchwork-Id: 13331927
From: Vikash Garodia
Subject: [PATCH 10/33] iris: vidc: add helper functions
Date: Fri, 28 Jul 2023 18:53:21 +0530
Message-ID: <1690550624-14642-11-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org

Implement common helper functions that convert the various enums between
their V4L2 and vidc representations, and add helpers for state checks,
buffer management, locking etc.

Signed-off-by: Dikshita Agarwal
Signed-off-by: Vikash Garodia
---
 .../platform/qcom/iris/vidc/inc/msm_vidc_driver.h  |  352 ++
 .../platform/qcom/iris/vidc/src/msm_vidc_driver.c  | 4276 ++++++++++++++++++++
 2 files changed, 4628 insertions(+)
 create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_driver.h
 create mode 100644 drivers/media/platform/qcom/iris/vidc/src/msm_vidc_driver.c

diff --git a/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_driver.h b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_driver.h
new file mode 100644
index 0000000..459e540
--- /dev/null
+++ b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_driver.h
@@ -0,0 +1,352 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */ + +#ifndef _MSM_VIDC_DRIVER_H_ +#define _MSM_VIDC_DRIVER_H_ + +#include +#include + +#include "msm_vidc_core.h" +#include "msm_vidc_inst.h" +#include "msm_vidc_internal.h" + +extern struct msm_vidc_core *g_core; + +#define MSM_VIDC_SESSION_INACTIVE_THRESHOLD_MS 1000 + +enum msm_vidc_debugfs_event; + +static inline bool is_decode_session(struct msm_vidc_inst *inst) +{ + return inst->domain == MSM_VIDC_DECODER; +} + +static inline bool is_encode_session(struct msm_vidc_inst *inst) +{ + return inst->domain == MSM_VIDC_ENCODER; +} + +static inline bool is_input_buffer(enum msm_vidc_buffer_type buffer_type) +{ + return buffer_type == MSM_VIDC_BUF_INPUT; +} + +static inline bool is_output_buffer(enum msm_vidc_buffer_type buffer_type) +{ + return buffer_type == MSM_VIDC_BUF_OUTPUT; +} + +static inline bool is_scaling_enabled(struct msm_vidc_inst *inst) +{ + return inst->crop.left != inst->compose.left || + inst->crop.top != inst->compose.top || + inst->crop.width != inst->compose.width || + inst->crop.height != inst->compose.height; +} + +static inline bool is_rotation_90_or_270(struct msm_vidc_inst *inst) +{ + return inst->capabilities[ROTATION].value == 90 || + inst->capabilities[ROTATION].value == 270; +} + +static inline bool is_internal_buffer(enum msm_vidc_buffer_type buffer_type) +{ + return buffer_type == MSM_VIDC_BUF_BIN || + buffer_type == MSM_VIDC_BUF_ARP || + buffer_type == MSM_VIDC_BUF_COMV || + buffer_type == MSM_VIDC_BUF_NON_COMV || + buffer_type == MSM_VIDC_BUF_LINE || + buffer_type == MSM_VIDC_BUF_DPB || + buffer_type == MSM_VIDC_BUF_PERSIST || + buffer_type == MSM_VIDC_BUF_VPSS; +} + +static inline bool is_linear_yuv_colorformat(enum msm_vidc_colorformat_type colorformat) +{ + return colorformat == MSM_VIDC_FMT_NV12 || + colorformat == MSM_VIDC_FMT_NV21 || + colorformat == MSM_VIDC_FMT_P010; +} + +static inline bool is_linear_rgba_colorformat(enum msm_vidc_colorformat_type colorformat) +{ + return colorformat == MSM_VIDC_FMT_RGBA8888; +} + +static 
inline bool is_linear_colorformat(enum msm_vidc_colorformat_type colorformat) +{ + return is_linear_yuv_colorformat(colorformat) || is_linear_rgba_colorformat(colorformat); +} + +static inline bool is_ubwc_colorformat(enum msm_vidc_colorformat_type colorformat) +{ + return colorformat == MSM_VIDC_FMT_NV12C || + colorformat == MSM_VIDC_FMT_TP10C || + colorformat == MSM_VIDC_FMT_RGBA8888C; +} + +static inline bool is_10bit_colorformat(enum msm_vidc_colorformat_type colorformat) +{ + return colorformat == MSM_VIDC_FMT_P010 || + colorformat == MSM_VIDC_FMT_TP10C; +} + +static inline bool is_8bit_colorformat(enum msm_vidc_colorformat_type colorformat) +{ + return colorformat == MSM_VIDC_FMT_NV12 || + colorformat == MSM_VIDC_FMT_NV12C || + colorformat == MSM_VIDC_FMT_NV21 || + colorformat == MSM_VIDC_FMT_RGBA8888 || + colorformat == MSM_VIDC_FMT_RGBA8888C; +} + +static inline bool is_rgba_colorformat(enum msm_vidc_colorformat_type colorformat) +{ + return colorformat == MSM_VIDC_FMT_RGBA8888 || + colorformat == MSM_VIDC_FMT_RGBA8888C; +} + +static inline bool is_split_mode_enabled(struct msm_vidc_inst *inst) +{ + if (!is_decode_session(inst)) + return false; + + if (is_linear_colorformat(inst->capabilities[PIX_FMTS].value)) + return true; + + return false; +} + +static inline bool is_low_power_session(struct msm_vidc_inst *inst) +{ + return (inst->capabilities[QUALITY_MODE].value == + MSM_VIDC_POWER_SAVE_MODE); +} + +static inline bool is_hierb_type_requested(struct msm_vidc_inst *inst) +{ + return (inst->codec == MSM_VIDC_H264 && + inst->capabilities[LAYER_TYPE].value == + V4L2_MPEG_VIDEO_H264_HIERARCHICAL_CODING_B) || + (inst->codec == MSM_VIDC_HEVC && + inst->capabilities[LAYER_TYPE].value == + V4L2_MPEG_VIDEO_HEVC_HIERARCHICAL_CODING_B); +} + +static inline bool is_active_session(u64 prev, u64 curr) +{ + u64 ts_delta; + + if (!prev || !curr) + return true; + + ts_delta = (prev < curr) ? 
curr - prev : prev - curr; + + return ((ts_delta / NSEC_PER_MSEC) <= + MSM_VIDC_SESSION_INACTIVE_THRESHOLD_MS); +} + +static inline bool is_session_error(struct msm_vidc_inst *inst) +{ + return inst->state == MSM_VIDC_ERROR; +} + +static inline bool is_secure_region(enum msm_vidc_buffer_region region) +{ + return !(region == MSM_VIDC_NON_SECURE || + region == MSM_VIDC_NON_SECURE_PIXEL); +} + +const char *cap_name(enum msm_vidc_inst_capability_type cap_id); +const char *v4l2_pixelfmt_name(struct msm_vidc_inst *inst, u32 pixelfmt); +const char *v4l2_type_name(u32 port); +void print_vidc_buffer(u32 tag, const char *tag_str, const char *str, struct msm_vidc_inst *inst, + struct msm_vidc_buffer *vbuf); +void print_vb2_buffer(const char *str, struct msm_vidc_inst *inst, + struct vb2_buffer *vb2); +enum msm_vidc_codec_type v4l2_codec_to_driver(struct msm_vidc_inst *inst, + u32 v4l2_codec, const char *func); +u32 v4l2_codec_from_driver(struct msm_vidc_inst *inst, enum msm_vidc_codec_type codec, + const char *func); +enum msm_vidc_colorformat_type v4l2_colorformat_to_driver(struct msm_vidc_inst *inst, + u32 colorformat, const char *func); +u32 v4l2_colorformat_from_driver(struct msm_vidc_inst *inst, + enum msm_vidc_colorformat_type colorformat, const char *func); +u32 v4l2_color_primaries_to_driver(struct msm_vidc_inst *inst, + u32 v4l2_primaries, const char *func); +u32 v4l2_color_primaries_from_driver(struct msm_vidc_inst *inst, + u32 vidc_color_primaries, const char *func); +u32 v4l2_transfer_char_to_driver(struct msm_vidc_inst *inst, + u32 v4l2_transfer_char, const char *func); +u32 v4l2_transfer_char_from_driver(struct msm_vidc_inst *inst, + u32 vidc_transfer_char, const char *func); +u32 v4l2_matrix_coeff_to_driver(struct msm_vidc_inst *inst, + u32 v4l2_matrix_coeff, const char *func); +u32 v4l2_matrix_coeff_from_driver(struct msm_vidc_inst *inst, + u32 vidc_matrix_coeff, const char *func); +int v4l2_type_to_driver_port(struct msm_vidc_inst *inst, u32 type, + const 
char *func); +const char *allow_name(enum msm_vidc_allow allow); +int msm_vidc_create_internal_buffer(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type, u32 index); +int msm_vidc_get_internal_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type); +int msm_vidc_create_internal_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type); +int msm_vidc_queue_internal_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type); +int msm_vidc_alloc_and_queue_session_int_bufs(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type); +int msm_vidc_release_internal_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type); +int msm_vidc_vb2_buffer_done(struct msm_vidc_inst *inst, + struct msm_vidc_buffer *buf); +int msm_vidc_remove_dangling_session(struct msm_vidc_inst *inst); +int msm_vidc_remove_session(struct msm_vidc_inst *inst); +int msm_vidc_add_session(struct msm_vidc_inst *inst); +int msm_vidc_session_open(struct msm_vidc_inst *inst); +int msm_vidc_session_set_codec(struct msm_vidc_inst *inst); +int msm_vidc_session_set_default_header(struct msm_vidc_inst *inst); +int msm_vidc_session_streamoff(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port); +int msm_vidc_session_close(struct msm_vidc_inst *inst); +int msm_vidc_kill_session(struct msm_vidc_inst *inst); +int msm_vidc_get_inst_capability(struct msm_vidc_inst *inst); +int msm_vidc_change_core_state(struct msm_vidc_core *core, + enum msm_vidc_core_state request_state, const char *func); +int msm_vidc_change_core_sub_state(struct msm_vidc_core *core, + enum msm_vidc_core_sub_state clear_sub_states, + enum msm_vidc_core_sub_state set_sub_states, const char *func); +int msm_vidc_core_init(struct msm_vidc_core *core); +int msm_vidc_core_init_wait(struct msm_vidc_core *core); +int msm_vidc_core_deinit(struct msm_vidc_core *core, bool force); +int msm_vidc_core_deinit_locked(struct msm_vidc_core *core, bool 
force); +int msm_vidc_inst_timeout(struct msm_vidc_inst *inst); +int msm_vidc_print_buffer_info(struct msm_vidc_inst *inst); +int msm_vidc_print_inst_info(struct msm_vidc_inst *inst); +void msm_vidc_print_core_info(struct msm_vidc_core *core); +int msm_vidc_smmu_fault_handler(struct iommu_domain *domain, + struct device *dev, unsigned long iova, int flags, void *data); +void msm_vidc_fw_unload_handler(struct work_struct *work); +int msm_vidc_suspend(struct msm_vidc_core *core); +void msm_vidc_batch_handler(struct work_struct *work); +int msm_vidc_v4l2_fh_init(struct msm_vidc_inst *inst); +int msm_vidc_v4l2_fh_deinit(struct msm_vidc_inst *inst); +int msm_vidc_vb2_queue_init(struct msm_vidc_inst *inst); +int msm_vidc_vb2_queue_deinit(struct msm_vidc_inst *inst); +int msm_vidc_get_control(struct msm_vidc_inst *inst, struct v4l2_ctrl *ctrl); +struct msm_vidc_buffers *msm_vidc_get_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type, + const char *func); +struct msm_vidc_mem_list *msm_vidc_get_mem_info(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type, + const char *func); +struct msm_vidc_buffer *msm_vidc_get_driver_buf(struct msm_vidc_inst *inst, + struct vb2_buffer *vb2); +int msm_vidc_allocate_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buf_type, u32 num_buffers); +int msm_vidc_free_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buf_type); +void msm_vidc_update_stats(struct msm_vidc_inst *inst, + struct msm_vidc_buffer *buf, + enum msm_vidc_debugfs_event etype); +void msm_vidc_stats_handler(struct work_struct *work); +int schedule_stats_work(struct msm_vidc_inst *inst); +int cancel_stats_work_sync(struct msm_vidc_inst *inst); +void msm_vidc_print_stats(struct msm_vidc_inst *inst); +void msm_vidc_print_memory_stats(struct msm_vidc_inst *inst); +enum msm_vidc_buffer_type v4l2_type_to_driver(u32 type, const char *func); +int msm_vidc_buf_queue(struct msm_vidc_inst *inst, struct 
msm_vidc_buffer *buf); +int msm_vidc_queue_buffer_single(struct msm_vidc_inst *inst, struct vb2_buffer *vb2); +int msm_vidc_queue_deferred_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buf_type); +int msm_vidc_destroy_internal_buffer(struct msm_vidc_inst *inst, + struct msm_vidc_buffer *buffer); +void msm_vidc_destroy_buffers(struct msm_vidc_inst *inst); +int msm_vidc_flush_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type type); +int msm_vidc_flush_read_only_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type type); +struct msm_vidc_inst *get_inst_ref(struct msm_vidc_core *core, + struct msm_vidc_inst *instance); +struct msm_vidc_inst *get_inst(struct msm_vidc_core *core, + u32 session_id); +void put_inst(struct msm_vidc_inst *inst); +enum msm_vidc_allow msm_vidc_allow_input_psc(struct msm_vidc_inst *inst); +bool msm_vidc_allow_drain_last_flag(struct msm_vidc_inst *inst); +bool msm_vidc_allow_psc_last_flag(struct msm_vidc_inst *inst); +enum msm_vidc_allow msm_vidc_allow_pm_suspend(struct msm_vidc_core *core); +int msm_vidc_state_change_streamon(struct msm_vidc_inst *inst, u32 type); +int msm_vidc_state_change_streamoff(struct msm_vidc_inst *inst, u32 type); +int msm_vidc_state_change_input_psc(struct msm_vidc_inst *inst); +int msm_vidc_state_change_drain_last_flag(struct msm_vidc_inst *inst); +int msm_vidc_state_change_psc_last_flag(struct msm_vidc_inst *inst); +int msm_vidc_process_drain(struct msm_vidc_inst *inst); +int msm_vidc_process_resume(struct msm_vidc_inst *inst); +int msm_vidc_process_streamon_input(struct msm_vidc_inst *inst); +int msm_vidc_process_streamon_output(struct msm_vidc_inst *inst); +int msm_vidc_process_stop_done(struct msm_vidc_inst *inst, + enum signal_session_response signal_type); +int msm_vidc_process_drain_done(struct msm_vidc_inst *inst); +int msm_vidc_process_drain_last_flag(struct msm_vidc_inst *inst); +int msm_vidc_process_psc_last_flag(struct msm_vidc_inst *inst); +int 
msm_vidc_get_mbs_per_frame(struct msm_vidc_inst *inst); +u32 msm_vidc_get_max_bitrate(struct msm_vidc_inst *inst); +int msm_vidc_get_fps(struct msm_vidc_inst *inst); +int msm_vidc_num_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type type, + enum msm_vidc_buffer_attributes attr); +void core_lock(struct msm_vidc_core *core, const char *function); +void core_unlock(struct msm_vidc_core *core, const char *function); +void inst_lock(struct msm_vidc_inst *inst, const char *function); +void inst_unlock(struct msm_vidc_inst *inst, const char *function); +void client_lock(struct msm_vidc_inst *inst, const char *function); +void client_unlock(struct msm_vidc_inst *inst, const char *function); +int msm_vidc_update_bitstream_buffer_size(struct msm_vidc_inst *inst); +int msm_vidc_update_buffer_count(struct msm_vidc_inst *inst, u32 port); +void msm_vidc_schedule_core_deinit(struct msm_vidc_core *core); +int msm_vidc_init_core_caps(struct msm_vidc_core *core); +int msm_vidc_init_instance_caps(struct msm_vidc_core *core); +int msm_vidc_update_debug_str(struct msm_vidc_inst *inst); +void msm_vidc_allow_dcvs(struct msm_vidc_inst *inst); +bool msm_vidc_allow_decode_batch(struct msm_vidc_inst *inst); +int msm_vidc_check_session_supported(struct msm_vidc_inst *inst); +bool msm_vidc_ignore_session_load(struct msm_vidc_inst *inst); +int msm_vidc_check_core_mbps(struct msm_vidc_inst *inst); +int msm_vidc_check_core_mbpf(struct msm_vidc_inst *inst); +int msm_vidc_check_scaling_supported(struct msm_vidc_inst *inst); +int msm_vidc_update_timestamp_rate(struct msm_vidc_inst *inst, u64 timestamp); +int msm_vidc_get_timestamp_rate(struct msm_vidc_inst *inst); +int msm_vidc_flush_ts(struct msm_vidc_inst *inst); +const char *buf_name(enum msm_vidc_buffer_type type); +bool res_is_greater_than(u32 width, u32 height, + u32 ref_width, u32 ref_height); +bool res_is_greater_than_or_equal_to(u32 width, u32 height, + u32 ref_width, u32 ref_height); +bool res_is_less_than(u32 width, u32 
height, + u32 ref_width, u32 ref_height); +bool res_is_less_than_or_equal_to(u32 width, u32 height, + u32 ref_width, u32 ref_height); +bool is_hevc_10bit_decode_session(struct msm_vidc_inst *inst); +int signal_session_msg_receipt(struct msm_vidc_inst *inst, + enum signal_session_response cmd); +int msm_vidc_get_properties(struct msm_vidc_inst *inst); +int msm_vidc_update_input_rate(struct msm_vidc_inst *inst, u64 time_us); +int msm_vidc_get_input_rate(struct msm_vidc_inst *inst); +int msm_vidc_get_frame_rate(struct msm_vidc_inst *inst); +int msm_vidc_get_operating_rate(struct msm_vidc_inst *inst); +int msm_vidc_alloc_and_queue_input_internal_buffers(struct msm_vidc_inst *inst); +int vb2_buffer_to_driver(struct vb2_buffer *vb2, struct msm_vidc_buffer *buf); +struct msm_vidc_buffer *msm_vidc_fetch_buffer(struct msm_vidc_inst *inst, + struct vb2_buffer *vb2); +struct context_bank_info *msm_vidc_get_context_bank_for_region(struct msm_vidc_core *core, + enum msm_vidc_buffer_region region); +struct context_bank_info *msm_vidc_get_context_bank_for_device(struct msm_vidc_core *core, + struct device *dev); + +#endif // _MSM_VIDC_DRIVER_H_ diff --git a/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_driver.c b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_driver.c new file mode 100644 index 0000000..029fc71 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_driver.c @@ -0,0 +1,4276 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2022, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include +#include + +#include "hfi_packet.h" +#include "msm_media_info.h" +#include "msm_vdec.h" +#include "msm_venc.h" +#include "msm_vidc.h" +#include "msm_vidc_control.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_internal.h" +#include "msm_vidc_memory.h" +#include "msm_vidc_platform.h" +#include "msm_vidc_power.h" +#include "msm_vidc_state.h" +#include "venus_hfi.h" +#include "venus_hfi_response.h" + +#define is_odd(val) ((val) % 2 == 1) +#define in_range(val, min, max) (((min) <= (val)) && ((val) <= (max))) +#define COUNT_BITS(a, out) { \ + while ((a) >= 1) { \ + (out) += (a) & (1); \ + (a) >>= (1); \ + } \ +} + +/* do not modify the cap names as it is used in test scripts */ +static const char * const cap_name_arr[] = { + [INST_CAP_NONE] = "INST_CAP_NONE", + [MIN_FRAME_QP] = "MIN_FRAME_QP", + [MAX_FRAME_QP] = "MAX_FRAME_QP", + [I_FRAME_QP] = "I_FRAME_QP", + [P_FRAME_QP] = "P_FRAME_QP", + [B_FRAME_QP] = "B_FRAME_QP", + [TIME_DELTA_BASED_RC] = "TIME_DELTA_BASED_RC", + [CONSTANT_QUALITY] = "CONSTANT_QUALITY", + [VBV_DELAY] = "VBV_DELAY", + [PEAK_BITRATE] = "PEAK_BITRATE", + [ENTROPY_MODE] = "ENTROPY_MODE", + [TRANSFORM_8X8] = "TRANSFORM_8X8", + [STAGE] = "STAGE", + [LTR_COUNT] = "LTR_COUNT", + [IR_PERIOD] = "IR_PERIOD", + [BITRATE_BOOST] = "BITRATE_BOOST", + [OUTPUT_ORDER] = "OUTPUT_ORDER", + [INPUT_BUF_HOST_MAX_COUNT] = "INPUT_BUF_HOST_MAX_COUNT", + [OUTPUT_BUF_HOST_MAX_COUNT] = "OUTPUT_BUF_HOST_MAX_COUNT", + [VUI_TIMING_INFO] = "VUI_TIMING_INFO", + [SLICE_DECODE] = "SLICE_DECODE", + [PROFILE] = "PROFILE", + [ENH_LAYER_COUNT] = "ENH_LAYER_COUNT", + [BIT_RATE] = "BIT_RATE", + [GOP_SIZE] = "GOP_SIZE", + [B_FRAME] = "B_FRAME", + [ALL_INTRA] = "ALL_INTRA", + [MIN_QUALITY] = "MIN_QUALITY", + [SLICE_MODE] = "SLICE_MODE", + [FRAME_WIDTH] = "FRAME_WIDTH", + [LOSSLESS_FRAME_WIDTH] = "LOSSLESS_FRAME_WIDTH", + [FRAME_HEIGHT] = "FRAME_HEIGHT", + [LOSSLESS_FRAME_HEIGHT] = "LOSSLESS_FRAME_HEIGHT", + [PIX_FMTS] = "PIX_FMTS", + 
[MIN_BUFFERS_INPUT] = "MIN_BUFFERS_INPUT", + [MIN_BUFFERS_OUTPUT] = "MIN_BUFFERS_OUTPUT", + [MBPF] = "MBPF", + [BATCH_MBPF] = "BATCH_MBPF", + [BATCH_FPS] = "BATCH_FPS", + [LOSSLESS_MBPF] = "LOSSLESS_MBPF", + [FRAME_RATE] = "FRAME_RATE", + [OPERATING_RATE] = "OPERATING_RATE", + [INPUT_RATE] = "INPUT_RATE", + [TIMESTAMP_RATE] = "TIMESTAMP_RATE", + [SCALE_FACTOR] = "SCALE_FACTOR", + [MB_CYCLES_VSP] = "MB_CYCLES_VSP", + [MB_CYCLES_VPP] = "MB_CYCLES_VPP", + [MB_CYCLES_LP] = "MB_CYCLES_LP", + [MB_CYCLES_FW] = "MB_CYCLES_FW", + [MB_CYCLES_FW_VPP] = "MB_CYCLES_FW_VPP", + [ENC_RING_BUFFER_COUNT] = "ENC_RING_BUFFER_COUNT", + [HFLIP] = "HFLIP", + [VFLIP] = "VFLIP", + [ROTATION] = "ROTATION", + [HEADER_MODE] = "HEADER_MODE", + [PREPEND_SPSPPS_TO_IDR] = "PREPEND_SPSPPS_TO_IDR", + [WITHOUT_STARTCODE] = "WITHOUT_STARTCODE", + [NAL_LENGTH_FIELD] = "NAL_LENGTH_FIELD", + [REQUEST_I_FRAME] = "REQUEST_I_FRAME", + [BITRATE_MODE] = "BITRATE_MODE", + [LOSSLESS] = "LOSSLESS", + [FRAME_SKIP_MODE] = "FRAME_SKIP_MODE", + [FRAME_RC_ENABLE] = "FRAME_RC_ENABLE", + [GOP_CLOSURE] = "GOP_CLOSURE", + [USE_LTR] = "USE_LTR", + [MARK_LTR] = "MARK_LTR", + [BASELAYER_PRIORITY] = "BASELAYER_PRIORITY", + [IR_TYPE] = "IR_TYPE", + [AU_DELIMITER] = "AU_DELIMITER", + [GRID_ENABLE] = "GRID_ENABLE", + [GRID_SIZE] = "GRID_SIZE", + [I_FRAME_MIN_QP] = "I_FRAME_MIN_QP", + [P_FRAME_MIN_QP] = "P_FRAME_MIN_QP", + [B_FRAME_MIN_QP] = "B_FRAME_MIN_QP", + [I_FRAME_MAX_QP] = "I_FRAME_MAX_QP", + [P_FRAME_MAX_QP] = "P_FRAME_MAX_QP", + [B_FRAME_MAX_QP] = "B_FRAME_MAX_QP", + [LAYER_TYPE] = "LAYER_TYPE", + [LAYER_ENABLE] = "LAYER_ENABLE", + [L0_BR] = "L0_BR", + [L1_BR] = "L1_BR", + [L2_BR] = "L2_BR", + [L3_BR] = "L3_BR", + [L4_BR] = "L4_BR", + [L5_BR] = "L5_BR", + [LEVEL] = "LEVEL", + [HEVC_TIER] = "HEVC_TIER", + [DISPLAY_DELAY_ENABLE] = "DISPLAY_DELAY_ENABLE", + [DISPLAY_DELAY] = "DISPLAY_DELAY", + [CONCEAL_COLOR_8BIT] = "CONCEAL_COLOR_8BIT", + [CONCEAL_COLOR_10BIT] = "CONCEAL_COLOR_10BIT", + [LF_MODE] = "LF_MODE", + 
[LF_ALPHA] = "LF_ALPHA", + [LF_BETA] = "LF_BETA", + [SLICE_MAX_BYTES] = "SLICE_MAX_BYTES", + [SLICE_MAX_MB] = "SLICE_MAX_MB", + [MB_RC] = "MB_RC", + [CHROMA_QP_INDEX_OFFSET] = "CHROMA_QP_INDEX_OFFSET", + [PIPE] = "PIPE", + [POC] = "POC", + [CODED_FRAMES] = "CODED_FRAMES", + [BIT_DEPTH] = "BIT_DEPTH", + [BITSTREAM_SIZE_OVERWRITE] = "BITSTREAM_SIZE_OVERWRITE", + [DEFAULT_HEADER] = "DEFAULT_HEADER", + [RAP_FRAME] = "RAP_FRAME", + [SEQ_CHANGE_AT_SYNC_FRAME] = "SEQ_CHANGE_AT_SYNC_FRAME", + [QUALITY_MODE] = "QUALITY_MODE", + [CABAC_MAX_BITRATE] = "CABAC_MAX_BITRATE", + [CAVLC_MAX_BITRATE] = "CAVLC_MAX_BITRATE", + [ALLINTRA_MAX_BITRATE] = "ALLINTRA_MAX_BITRATE", + [NUM_COMV] = "NUM_COMV", + [SIGNAL_COLOR_INFO] = "SIGNAL_COLOR_INFO", + [INST_CAP_MAX] = "INST_CAP_MAX", +}; + +const char *cap_name(enum msm_vidc_inst_capability_type cap_id) +{ + const char *name = "UNKNOWN CAP"; + + if (cap_id >= ARRAY_SIZE(cap_name_arr)) + goto exit; + + name = cap_name_arr[cap_id]; + +exit: + return name; +} + +static const char * const buf_type_name_arr[] = { + [MSM_VIDC_BUF_NONE] = "NONE", + [MSM_VIDC_BUF_INPUT] = "INPUT", + [MSM_VIDC_BUF_OUTPUT] = "OUTPUT", + [MSM_VIDC_BUF_READ_ONLY] = "READ_ONLY", + [MSM_VIDC_BUF_INTERFACE_QUEUE] = "INTERFACE_QUEUE", + [MSM_VIDC_BUF_BIN] = "BIN", + [MSM_VIDC_BUF_ARP] = "ARP", + [MSM_VIDC_BUF_COMV] = "COMV", + [MSM_VIDC_BUF_NON_COMV] = "NON_COMV", + [MSM_VIDC_BUF_LINE] = "LINE", + [MSM_VIDC_BUF_DPB] = "DPB", + [MSM_VIDC_BUF_PERSIST] = "PERSIST", + [MSM_VIDC_BUF_VPSS] = "VPSS" +}; + +const char *buf_name(enum msm_vidc_buffer_type type) +{ + const char *name = "UNKNOWN BUF"; + + if (type >= ARRAY_SIZE(buf_type_name_arr)) + goto exit; + + name = buf_type_name_arr[type]; + +exit: + return name; +} + +static const char * const inst_allow_name_arr[] = { + [MSM_VIDC_DISALLOW] = "MSM_VIDC_DISALLOW", + [MSM_VIDC_ALLOW] = "MSM_VIDC_ALLOW", + [MSM_VIDC_DEFER] = "MSM_VIDC_DEFER", + [MSM_VIDC_DISCARD] = "MSM_VIDC_DISCARD", + [MSM_VIDC_IGNORE] = "MSM_VIDC_IGNORE", +}; 
+ +const char *allow_name(enum msm_vidc_allow allow) +{ + const char *name = "UNKNOWN"; + + if (allow >= ARRAY_SIZE(inst_allow_name_arr)) + goto exit; + + name = inst_allow_name_arr[allow]; + +exit: + return name; +} + +const char *v4l2_type_name(u32 port) +{ + switch (port) { + case INPUT_MPLANE: return "INPUT"; + case OUTPUT_MPLANE: return "OUTPUT"; + } + + return "UNKNOWN"; +} + +const char *v4l2_pixelfmt_name(struct msm_vidc_inst *inst, u32 pixfmt) +{ + struct msm_vidc_core *core; + const struct codec_info *codec_info; + const struct color_format_info *color_format_info; + u32 i, size; + + core = inst->core; + codec_info = core->platform->data.format_data->codec_info; + size = core->platform->data.format_data->codec_info_size; + + for (i = 0; i < size; i++) { + if (codec_info[i].v4l2_codec == pixfmt) + return codec_info[i].pixfmt_name; + } + + color_format_info = core->platform->data.format_data->color_format_info; + size = core->platform->data.format_data->color_format_info_size; + + for (i = 0; i < size; i++) { + if (color_format_info[i].v4l2_color_format == pixfmt) + return color_format_info[i].pixfmt_name; + } + + return "UNKNOWN"; +} + +void print_vidc_buffer(u32 tag, const char *tag_str, const char *str, struct msm_vidc_inst *inst, + struct msm_vidc_buffer *vbuf) +{ + struct dma_buf *dbuf; + struct inode *f_inode; + unsigned long inode_num = 0; + long ref_count = -1; + + if (!vbuf || !tag_str || !str) + return; + + dbuf = (struct dma_buf *)vbuf->dmabuf; + if (dbuf && dbuf->file) { + f_inode = file_inode(dbuf->file); + if (f_inode) { + inode_num = f_inode->i_ino; + ref_count = file_count(dbuf->file); + } + } + + dprintk_inst(tag, tag_str, inst, + "%s: %s: idx %2d fd %3d off %d daddr %#llx inode %8lu ref %2ld size %8d filled %8d flags %#x ts %8lld attr %#x dbuf_get %d attach %d map %d counts(etb ebd ftb fbd) %4llu %4llu %4llu %4llu\n", + str, buf_name(vbuf->type), + vbuf->index, vbuf->fd, vbuf->data_offset, + vbuf->device_addr, inode_num, ref_count, 
vbuf->buffer_size, + vbuf->data_size, vbuf->flags, vbuf->timestamp, vbuf->attr, + vbuf->dbuf_get, vbuf->attach ? 1 : 0, vbuf->sg_table ? 1 : 0, + inst->debug_count.etb, inst->debug_count.ebd, + inst->debug_count.ftb, inst->debug_count.fbd); +} + +void print_vb2_buffer(const char *str, struct msm_vidc_inst *inst, + struct vb2_buffer *vb2) +{ + i_vpr_e(inst, + "%s: %s: idx %2d fd %d off %d size %d filled %d\n", + str, vb2->type == INPUT_MPLANE ? "INPUT" : "OUTPUT", + vb2->index, vb2->planes[0].m.fd, + vb2->planes[0].data_offset, vb2->planes[0].length, + vb2->planes[0].bytesused); +} + +int msm_vidc_suspend(struct msm_vidc_core *core) +{ + return venus_hfi_suspend(core); +} + +enum msm_vidc_buffer_type v4l2_type_to_driver(u32 type, const char *func) +{ + enum msm_vidc_buffer_type buffer_type = 0; + + switch (type) { + case INPUT_MPLANE: + buffer_type = MSM_VIDC_BUF_INPUT; + break; + case OUTPUT_MPLANE: + buffer_type = MSM_VIDC_BUF_OUTPUT; + break; + default: + d_vpr_e("%s: invalid v4l2 buffer type %#x\n", func, type); + break; + } + return buffer_type; +} + +u32 v4l2_type_from_driver(enum msm_vidc_buffer_type buffer_type, + const char *func) +{ + u32 type = 0; + + switch (buffer_type) { + case MSM_VIDC_BUF_INPUT: + type = INPUT_MPLANE; + break; + case MSM_VIDC_BUF_OUTPUT: + type = OUTPUT_MPLANE; + break; + default: + d_vpr_e("%s: invalid driver buffer type %d\n", + func, buffer_type); + break; + } + return type; +} + +enum msm_vidc_codec_type v4l2_codec_to_driver(struct msm_vidc_inst *inst, + u32 v4l2_codec, const char *func) +{ + struct msm_vidc_core *core; + const struct codec_info *codec_info; + u32 i, size; + enum msm_vidc_codec_type codec = 0; + + core = inst->core; + codec_info = core->platform->data.format_data->codec_info; + size = core->platform->data.format_data->codec_info_size; + + for (i = 0; i < size; i++) { + if (codec_info[i].v4l2_codec == v4l2_codec) + return codec_info[i].vidc_codec; + } + + d_vpr_h("%s: invalid v4l2 codec %#x\n", func, v4l2_codec); 
+ return codec; +} + +u32 v4l2_codec_from_driver(struct msm_vidc_inst *inst, + enum msm_vidc_codec_type codec, const char *func) +{ + struct msm_vidc_core *core; + const struct codec_info *codec_info; + u32 i, size; + u32 v4l2_codec = 0; + + core = inst->core; + codec_info = core->platform->data.format_data->codec_info; + size = core->platform->data.format_data->codec_info_size; + + for (i = 0; i < size; i++) { + if (codec_info[i].vidc_codec == codec) + return codec_info[i].v4l2_codec; + } + + d_vpr_e("%s: invalid driver codec %#x\n", func, codec); + return v4l2_codec; +} + +enum msm_vidc_colorformat_type v4l2_colorformat_to_driver(struct msm_vidc_inst *inst, + u32 v4l2_colorformat, + const char *func) +{ + struct msm_vidc_core *core; + const struct color_format_info *color_format_info; + u32 i, size; + enum msm_vidc_colorformat_type colorformat = 0; + + core = inst->core; + color_format_info = core->platform->data.format_data->color_format_info; + size = core->platform->data.format_data->color_format_info_size; + + for (i = 0; i < size; i++) { + if (color_format_info[i].v4l2_color_format == v4l2_colorformat) + return color_format_info[i].vidc_color_format; + } + + d_vpr_e("%s: invalid v4l2 color format %#x\n", func, v4l2_colorformat); + return colorformat; +} + +u32 v4l2_colorformat_from_driver(struct msm_vidc_inst *inst, + enum msm_vidc_colorformat_type colorformat, + const char *func) +{ + struct msm_vidc_core *core; + const struct color_format_info *color_format_info; + u32 i, size; + u32 v4l2_colorformat = 0; + + core = inst->core; + color_format_info = core->platform->data.format_data->color_format_info; + size = core->platform->data.format_data->color_format_info_size; + + for (i = 0; i < size; i++) { + if (color_format_info[i].vidc_color_format == colorformat) + return color_format_info[i].v4l2_color_format; + } + + d_vpr_e("%s: invalid driver color format %#x\n", func, colorformat); + return v4l2_colorformat; +} + +u32 v4l2_color_primaries_to_driver(struct 
msm_vidc_inst *inst, + u32 v4l2_primaries, const char *func) +{ + struct msm_vidc_core *core; + const struct color_primaries_info *color_prim_info; + u32 i, size; + u32 vidc_color_primaries = MSM_VIDC_PRIMARIES_RESERVED; + + core = inst->core; + color_prim_info = core->platform->data.format_data->color_prim_info; + size = core->platform->data.format_data->color_prim_info_size; + + for (i = 0; i < size; i++) { + if (color_prim_info[i].v4l2_color_primaries == v4l2_primaries) + return color_prim_info[i].vidc_color_primaries; + } + + i_vpr_e(inst, "%s: invalid v4l2 color primaries %d\n", + func, v4l2_primaries); + + return vidc_color_primaries; +} + +u32 v4l2_color_primaries_from_driver(struct msm_vidc_inst *inst, + u32 vidc_color_primaries, const char *func) +{ + struct msm_vidc_core *core; + const struct color_primaries_info *color_prim_info; + u32 i, size; + u32 v4l2_primaries = V4L2_COLORSPACE_DEFAULT; + + core = inst->core; + color_prim_info = core->platform->data.format_data->color_prim_info; + size = core->platform->data.format_data->color_prim_info_size; + + for (i = 0; i < size; i++) { + if (color_prim_info[i].vidc_color_primaries == vidc_color_primaries) + return color_prim_info[i].v4l2_color_primaries; + } + + i_vpr_e(inst, "%s: invalid hfi color primaries %d\n", + func, vidc_color_primaries); + + return v4l2_primaries; +} + +u32 v4l2_transfer_char_to_driver(struct msm_vidc_inst *inst, + u32 v4l2_transfer_char, const char *func) +{ + struct msm_vidc_core *core; + const struct transfer_char_info *transfer_char_info; + u32 i, size; + u32 vidc_transfer_char = MSM_VIDC_TRANSFER_RESERVED; + + core = inst->core; + transfer_char_info = core->platform->data.format_data->transfer_char_info; + size = core->platform->data.format_data->transfer_char_info_size; + + for (i = 0; i < size; i++) { + if (transfer_char_info[i].v4l2_transfer_char == v4l2_transfer_char) + return transfer_char_info[i].vidc_transfer_char; + } + + i_vpr_e(inst, "%s: invalid v4l2 transfer char 
%d\n", + func, v4l2_transfer_char); + + return vidc_transfer_char; +} + +u32 v4l2_transfer_char_from_driver(struct msm_vidc_inst *inst, + u32 vidc_transfer_char, const char *func) +{ + struct msm_vidc_core *core; + const struct transfer_char_info *transfer_char_info; + u32 i, size; + u32 v4l2_transfer_char = V4L2_XFER_FUNC_DEFAULT; + + core = inst->core; + transfer_char_info = core->platform->data.format_data->transfer_char_info; + size = core->platform->data.format_data->transfer_char_info_size; + + for (i = 0; i < size; i++) { + if (transfer_char_info[i].vidc_transfer_char == vidc_transfer_char) + return transfer_char_info[i].v4l2_transfer_char; + } + + i_vpr_e(inst, "%s: invalid hfi transfer char %d\n", + func, vidc_transfer_char); + + return v4l2_transfer_char; +} + +u32 v4l2_matrix_coeff_to_driver(struct msm_vidc_inst *inst, + u32 v4l2_matrix_coeff, const char *func) +{ + struct msm_vidc_core *core; + const struct matrix_coeff_info *matrix_coeff_info; + u32 i, size; + u32 vidc_matrix_coeff = MSM_VIDC_MATRIX_COEFF_RESERVED; + + core = inst->core; + matrix_coeff_info = core->platform->data.format_data->matrix_coeff_info; + size = core->platform->data.format_data->matrix_coeff_info_size; + + for (i = 0; i < size; i++) { + if (matrix_coeff_info[i].v4l2_matrix_coeff == v4l2_matrix_coeff) + return matrix_coeff_info[i].vidc_matrix_coeff; + } + + i_vpr_e(inst, "%s: invalid v4l2 matrix coeff %d\n", + func, v4l2_matrix_coeff); + + return vidc_matrix_coeff; +} + +u32 v4l2_matrix_coeff_from_driver(struct msm_vidc_inst *inst, + u32 vidc_matrix_coeff, const char *func) +{ + struct msm_vidc_core *core; + const struct matrix_coeff_info *matrix_coeff_info; + u32 i, size; + u32 v4l2_matrix_coeff = V4L2_YCBCR_ENC_DEFAULT; + + core = inst->core; + matrix_coeff_info = core->platform->data.format_data->matrix_coeff_info; + size = core->platform->data.format_data->matrix_coeff_info_size; + + for (i = 0; i < size; i++) { + if (matrix_coeff_info[i].vidc_matrix_coeff == 
vidc_matrix_coeff) + return matrix_coeff_info[i].v4l2_matrix_coeff; + } + + i_vpr_e(inst, "%s: invalid hfi matrix coeff %d\n", + func, vidc_matrix_coeff); + + return v4l2_matrix_coeff; +} + +int v4l2_type_to_driver_port(struct msm_vidc_inst *inst, u32 type, + const char *func) +{ + int port; + + if (type == INPUT_MPLANE) { + port = INPUT_PORT; + } else if (type == OUTPUT_MPLANE) { + port = OUTPUT_PORT; + } else { + i_vpr_e(inst, "%s: port not found for v4l2 type %d\n", + func, type); + port = -EINVAL; + } + + return port; +} + +struct msm_vidc_buffers *msm_vidc_get_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type, + const char *func) +{ + switch (buffer_type) { + case MSM_VIDC_BUF_INPUT: + return &inst->buffers.input; + case MSM_VIDC_BUF_OUTPUT: + return &inst->buffers.output; + case MSM_VIDC_BUF_READ_ONLY: + return &inst->buffers.read_only; + case MSM_VIDC_BUF_BIN: + return &inst->buffers.bin; + case MSM_VIDC_BUF_ARP: + return &inst->buffers.arp; + case MSM_VIDC_BUF_COMV: + return &inst->buffers.comv; + case MSM_VIDC_BUF_NON_COMV: + return &inst->buffers.non_comv; + case MSM_VIDC_BUF_LINE: + return &inst->buffers.line; + case MSM_VIDC_BUF_DPB: + return &inst->buffers.dpb; + case MSM_VIDC_BUF_PERSIST: + return &inst->buffers.persist; + case MSM_VIDC_BUF_VPSS: + return &inst->buffers.vpss; + case MSM_VIDC_BUF_INTERFACE_QUEUE: + return NULL; + default: + i_vpr_e(inst, "%s: invalid driver buffer type %d\n", + func, buffer_type); + return NULL; + } +} + +struct msm_vidc_mem_list *msm_vidc_get_mem_info(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type, + const char *func) +{ + switch (buffer_type) { + case MSM_VIDC_BUF_BIN: + return &inst->mem_info.bin; + case MSM_VIDC_BUF_ARP: + return &inst->mem_info.arp; + case MSM_VIDC_BUF_COMV: + return &inst->mem_info.comv; + case MSM_VIDC_BUF_NON_COMV: + return &inst->mem_info.non_comv; + case MSM_VIDC_BUF_LINE: + return &inst->mem_info.line; + case MSM_VIDC_BUF_DPB: + return 
&inst->mem_info.dpb; + case MSM_VIDC_BUF_PERSIST: + return &inst->mem_info.persist; + case MSM_VIDC_BUF_VPSS: + return &inst->mem_info.vpss; + default: + i_vpr_e(inst, "%s: invalid driver buffer type %d\n", + func, buffer_type); + return NULL; + } +} + +bool res_is_greater_than(u32 width, u32 height, + u32 ref_width, u32 ref_height) +{ + u32 num_mbs = NUM_MBS_PER_FRAME(height, width); + u32 max_side = max(ref_width, ref_height); + + if (num_mbs > NUM_MBS_PER_FRAME(ref_height, ref_width) || + width > max_side || + height > max_side) + return true; + else + return false; +} + +bool res_is_greater_than_or_equal_to(u32 width, u32 height, + u32 ref_width, u32 ref_height) +{ + u32 num_mbs = NUM_MBS_PER_FRAME(height, width); + u32 max_side = max(ref_width, ref_height); + + if (num_mbs >= NUM_MBS_PER_FRAME(ref_height, ref_width) || + width >= max_side || + height >= max_side) + return true; + else + return false; +} + +bool res_is_less_than(u32 width, u32 height, + u32 ref_width, u32 ref_height) +{ + u32 num_mbs = NUM_MBS_PER_FRAME(height, width); + u32 max_side = max(ref_width, ref_height); + + if (num_mbs < NUM_MBS_PER_FRAME(ref_height, ref_width) && + width < max_side && + height < max_side) + return true; + else + return false; +} + +bool res_is_less_than_or_equal_to(u32 width, u32 height, + u32 ref_width, u32 ref_height) +{ + u32 num_mbs = NUM_MBS_PER_FRAME(height, width); + u32 max_side = max(ref_width, ref_height); + + if (num_mbs <= NUM_MBS_PER_FRAME(ref_height, ref_width) && + width <= max_side && + height <= max_side) + return true; + else + return false; +} + +int signal_session_msg_receipt(struct msm_vidc_inst *inst, + enum signal_session_response cmd) +{ + if (cmd < MAX_SIGNAL) + complete(&inst->completions[cmd]); + return 0; +} + +enum msm_vidc_allow msm_vidc_allow_input_psc(struct msm_vidc_inst *inst) +{ + enum msm_vidc_allow allow = MSM_VIDC_ALLOW; + /* + * if drc sequence is not completed by client, fw is not + * expected to raise another ipsc + */ + if 
(is_sub_state(inst, MSM_VIDC_DRC)) { + i_vpr_e(inst, "%s: not allowed in sub state %s\n", + __func__, inst->sub_state_name); + return MSM_VIDC_DISALLOW; + } + + return allow; +} + +bool msm_vidc_allow_drain_last_flag(struct msm_vidc_inst *inst) +{ + /* + * drain last flag is expected only when DRAIN, INPUT_PAUSE + * is set and DRAIN_LAST_BUFFER is not set + */ + if (is_sub_state(inst, MSM_VIDC_DRAIN) && + is_sub_state(inst, MSM_VIDC_INPUT_PAUSE) && + !is_sub_state(inst, MSM_VIDC_DRAIN_LAST_BUFFER)) + return true; + + i_vpr_e(inst, "%s: not allowed in sub state %s\n", + __func__, inst->sub_state_name); + return false; +} + +bool msm_vidc_allow_psc_last_flag(struct msm_vidc_inst *inst) +{ + /* + * drc last flag is expected only when DRC, INPUT_PAUSE + * is set and DRC_LAST_BUFFER is not set + */ + if (is_sub_state(inst, MSM_VIDC_DRC) && + is_sub_state(inst, MSM_VIDC_INPUT_PAUSE) && + !is_sub_state(inst, MSM_VIDC_DRC_LAST_BUFFER)) + return true; + + i_vpr_e(inst, "%s: not allowed in sub state %s\n", + __func__, inst->sub_state_name); + + return false; +} + +enum msm_vidc_allow msm_vidc_allow_pm_suspend(struct msm_vidc_core *core) +{ + /* core must be in valid state to do pm_suspend */ + if (!core_in_valid_state(core)) { + d_vpr_e("%s: invalid core state %s\n", + __func__, core_state_name(core->state)); + return MSM_VIDC_DISALLOW; + } + + /* check if power is enabled */ + if (!is_core_sub_state(core, CORE_SUBSTATE_POWER_ENABLE)) { + d_vpr_h("%s: Power already disabled\n", __func__); + return MSM_VIDC_IGNORE; + } + + return MSM_VIDC_ALLOW; +} + +bool is_hevc_10bit_decode_session(struct msm_vidc_inst *inst) +{ + bool is10bit = false; + enum msm_vidc_colorformat_type colorformat; + + /* in case of decoder session return false */ + if (!is_decode_session(inst)) + return false; + + colorformat = + v4l2_colorformat_to_driver(inst, + inst->fmts[OUTPUT_PORT].fmt.pix_mp.pixelformat, + __func__); + + if (colorformat == MSM_VIDC_FMT_TP10C || colorformat == MSM_VIDC_FMT_P010) + 
is10bit = true; + + return is_decode_session(inst) && + inst->codec == MSM_VIDC_HEVC && + is10bit; +} + +int msm_vidc_state_change_streamon(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + enum msm_vidc_state new_state = MSM_VIDC_ERROR; + + if (port == INPUT_PORT) { + if (is_state(inst, MSM_VIDC_OPEN)) + new_state = MSM_VIDC_INPUT_STREAMING; + else if (is_state(inst, MSM_VIDC_OUTPUT_STREAMING)) + new_state = MSM_VIDC_STREAMING; + } else if (port == OUTPUT_PORT) { + if (is_state(inst, MSM_VIDC_OPEN)) + new_state = MSM_VIDC_OUTPUT_STREAMING; + else if (is_state(inst, MSM_VIDC_INPUT_STREAMING)) + new_state = MSM_VIDC_STREAMING; + } + + return msm_vidc_change_state(inst, new_state, __func__); +} + +int msm_vidc_state_change_streamoff(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + enum msm_vidc_state new_state = MSM_VIDC_ERROR; + + if (port == INPUT_PORT) { + if (is_state(inst, MSM_VIDC_INPUT_STREAMING)) + new_state = MSM_VIDC_OPEN; + else if (is_state(inst, MSM_VIDC_STREAMING)) + new_state = MSM_VIDC_OUTPUT_STREAMING; + } else if (port == OUTPUT_PORT) { + if (is_state(inst, MSM_VIDC_OUTPUT_STREAMING)) + new_state = MSM_VIDC_OPEN; + else if (is_state(inst, MSM_VIDC_STREAMING)) + new_state = MSM_VIDC_INPUT_STREAMING; + } + rc = msm_vidc_change_state(inst, new_state, __func__); + if (rc) + goto exit; + +exit: + return rc; +} + +int msm_vidc_process_drain(struct msm_vidc_inst *inst) +{ + int rc = 0; + + rc = venus_hfi_session_drain(inst, INPUT_PORT); + if (rc) + return rc; + rc = msm_vidc_change_sub_state(inst, 0, MSM_VIDC_DRAIN, __func__); + if (rc) + return rc; + + msm_vidc_scale_power(inst, true); + + return rc; +} + +int msm_vidc_process_resume(struct msm_vidc_inst *inst) +{ + int rc = 0; + enum msm_vidc_sub_state clear_sub_state = MSM_VIDC_SUB_STATE_NONE; + bool drain_pending = false; + + msm_vidc_scale_power(inst, true); + + /* first check DRC pending else check drain pending */ + if (is_sub_state(inst, MSM_VIDC_DRC) && 
+ is_sub_state(inst, MSM_VIDC_DRC_LAST_BUFFER)) { + clear_sub_state = MSM_VIDC_DRC | MSM_VIDC_DRC_LAST_BUFFER; + /* + * if drain sequence is not completed then do not resume here. + * client will eventually complete drain sequence in which ports + * will be resumed. + */ + drain_pending = is_sub_state(inst, MSM_VIDC_DRAIN) && + is_sub_state(inst, MSM_VIDC_DRAIN_LAST_BUFFER); + if (!drain_pending) { + if (is_sub_state(inst, MSM_VIDC_INPUT_PAUSE)) { + rc = venus_hfi_session_resume(inst, INPUT_PORT, + HFI_CMD_SETTINGS_CHANGE); + if (rc) + return rc; + clear_sub_state |= MSM_VIDC_INPUT_PAUSE; + } + if (is_sub_state(inst, MSM_VIDC_OUTPUT_PAUSE)) { + rc = venus_hfi_session_resume(inst, OUTPUT_PORT, + HFI_CMD_SETTINGS_CHANGE); + if (rc) + return rc; + clear_sub_state |= MSM_VIDC_OUTPUT_PAUSE; + } + } + } else if (is_sub_state(inst, MSM_VIDC_DRAIN) && + is_sub_state(inst, MSM_VIDC_DRAIN_LAST_BUFFER)) { + clear_sub_state = MSM_VIDC_DRAIN | MSM_VIDC_DRAIN_LAST_BUFFER; + if (is_sub_state(inst, MSM_VIDC_INPUT_PAUSE)) { + rc = venus_hfi_session_resume(inst, INPUT_PORT, HFI_CMD_DRAIN); + if (rc) + return rc; + clear_sub_state |= MSM_VIDC_INPUT_PAUSE; + } + if (is_sub_state(inst, MSM_VIDC_OUTPUT_PAUSE)) { + rc = venus_hfi_session_resume(inst, OUTPUT_PORT, HFI_CMD_DRAIN); + if (rc) + return rc; + clear_sub_state |= MSM_VIDC_OUTPUT_PAUSE; + } + } + + rc = msm_vidc_change_sub_state(inst, clear_sub_state, 0, __func__); + + return rc; +} + +int msm_vidc_process_streamon_input(struct msm_vidc_inst *inst) +{ + int rc = 0; + enum msm_vidc_sub_state clear_sub_state = MSM_VIDC_SUB_STATE_NONE; + enum msm_vidc_sub_state set_sub_state = MSM_VIDC_SUB_STATE_NONE; + + msm_vidc_scale_power(inst, true); + + rc = venus_hfi_start(inst, INPUT_PORT); + if (rc) + return rc; + + /* clear input pause substate immediately */ + if (is_sub_state(inst, MSM_VIDC_INPUT_PAUSE)) { + rc = msm_vidc_change_sub_state(inst, MSM_VIDC_INPUT_PAUSE, 0, __func__); + if (rc) + return rc; + } + + /* + * if DRC sequence is 
not completed by the client then PAUSE + * firmware input port to avoid firmware raising IPSC again. + * When client completes DRC or DRAIN sequences, firmware + * input port will be resumed. + */ + if (is_sub_state(inst, MSM_VIDC_DRC) || + is_sub_state(inst, MSM_VIDC_DRAIN)) { + if (!is_sub_state(inst, MSM_VIDC_INPUT_PAUSE)) { + rc = venus_hfi_session_pause(inst, INPUT_PORT); + if (rc) + return rc; + set_sub_state = MSM_VIDC_INPUT_PAUSE; + } + } + + rc = msm_vidc_state_change_streamon(inst, INPUT_PORT); + if (rc) + return rc; + + rc = msm_vidc_change_sub_state(inst, clear_sub_state, set_sub_state, __func__); + + return rc; +} + +int msm_vidc_process_streamon_output(struct msm_vidc_inst *inst) +{ + int rc = 0; + enum msm_vidc_sub_state clear_sub_state = MSM_VIDC_SUB_STATE_NONE; + enum msm_vidc_sub_state set_sub_state = MSM_VIDC_SUB_STATE_NONE; + bool drain_pending = false; + + msm_vidc_scale_power(inst, true); + + /* + * client completed drc sequence, reset DRC and + * MSM_VIDC_DRC_LAST_BUFFER substates + */ + if (is_sub_state(inst, MSM_VIDC_DRC) && + is_sub_state(inst, MSM_VIDC_DRC_LAST_BUFFER)) { + clear_sub_state = MSM_VIDC_DRC | MSM_VIDC_DRC_LAST_BUFFER; + } + /* + * Client is completing port reconfiguration, hence reallocate + * input internal buffers before input port is resumed. + * Drc sub-state cannot be checked because DRC sub-state will + * not be set during initial port reconfiguration. + */ + if (is_decode_session(inst) && + is_sub_state(inst, MSM_VIDC_INPUT_PAUSE)) { + rc = msm_vidc_alloc_and_queue_input_internal_buffers(inst); + if (rc) + return rc; + rc = msm_vidc_set_stage(inst, STAGE); + if (rc) + return rc; + rc = msm_vidc_set_pipe(inst, PIPE); + if (rc) + return rc; + } + + /* + * fw input port is paused due to ipsc. now that client + * completed drc sequence, resume fw input port provided + * drain is not pending and input port is streaming. 
+ */ + drain_pending = is_sub_state(inst, MSM_VIDC_DRAIN) && + is_sub_state(inst, MSM_VIDC_DRAIN_LAST_BUFFER); + if (!drain_pending && is_state(inst, MSM_VIDC_INPUT_STREAMING)) { + if (is_sub_state(inst, MSM_VIDC_INPUT_PAUSE)) { + rc = venus_hfi_session_resume(inst, INPUT_PORT, + HFI_CMD_SETTINGS_CHANGE); + if (rc) + return rc; + clear_sub_state |= MSM_VIDC_INPUT_PAUSE; + } + } + + rc = venus_hfi_start(inst, OUTPUT_PORT); + if (rc) + return rc; + + /* clear output pause substate immediately */ + if (is_sub_state(inst, MSM_VIDC_OUTPUT_PAUSE)) { + rc = msm_vidc_change_sub_state(inst, MSM_VIDC_OUTPUT_PAUSE, 0, __func__); + if (rc) + return rc; + } + + rc = msm_vidc_state_change_streamon(inst, OUTPUT_PORT); + if (rc) + return rc; + + rc = msm_vidc_change_sub_state(inst, clear_sub_state, set_sub_state, __func__); + + return rc; +} + +int msm_vidc_process_stop_done(struct msm_vidc_inst *inst, + enum signal_session_response signal_type) +{ + int rc = 0; + enum msm_vidc_sub_state set_sub_state = MSM_VIDC_SUB_STATE_NONE; + + if (signal_type == SIGNAL_CMD_STOP_INPUT) { + set_sub_state = MSM_VIDC_INPUT_PAUSE; + /* + * FW is expected to return DRC LAST flag before input + * stop done if DRC sequence is pending + */ + if (is_sub_state(inst, MSM_VIDC_DRC) && + !is_sub_state(inst, MSM_VIDC_DRC_LAST_BUFFER)) { + i_vpr_e(inst, "%s: drc last flag pkt not received\n", __func__); + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + } + /* + * for a decode session, FW is expected to return + * DRAIN LAST flag before input stop done if + * DRAIN sequence is pending + */ + if (is_decode_session(inst) && + is_sub_state(inst, MSM_VIDC_DRAIN) && + !is_sub_state(inst, MSM_VIDC_DRAIN_LAST_BUFFER)) { + i_vpr_e(inst, "%s: drain last flag pkt not received\n", __func__); + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + } + } else if (signal_type == SIGNAL_CMD_STOP_OUTPUT) { + set_sub_state = MSM_VIDC_OUTPUT_PAUSE; + } + + rc = msm_vidc_change_sub_state(inst, 0, set_sub_state, 
__func__); + if (rc) + return rc; + + signal_session_msg_receipt(inst, signal_type); + return rc; +} + +int msm_vidc_process_drain_done(struct msm_vidc_inst *inst) +{ + int rc = 0; + + if (is_sub_state(inst, MSM_VIDC_DRAIN)) { + rc = msm_vidc_change_sub_state(inst, 0, MSM_VIDC_INPUT_PAUSE, __func__); + if (rc) + return rc; + } else { + i_vpr_e(inst, "%s: unexpected drain done\n", __func__); + } + + return rc; +} + +int msm_vidc_process_drain_last_flag(struct msm_vidc_inst *inst) +{ + return msm_vidc_state_change_drain_last_flag(inst); +} + +int msm_vidc_process_psc_last_flag(struct msm_vidc_inst *inst) +{ + return msm_vidc_state_change_psc_last_flag(inst); +} + +int msm_vidc_state_change_input_psc(struct msm_vidc_inst *inst) +{ + enum msm_vidc_sub_state set_sub_state = MSM_VIDC_SUB_STATE_NONE; + + /* + * if output port is not streaming, then do not set DRC substate + * because DRC_LAST_FLAG is not going to be received. Update + * INPUT_PAUSE substate only + */ + if (is_state(inst, MSM_VIDC_INPUT_STREAMING) || + is_state(inst, MSM_VIDC_OPEN)) + set_sub_state = MSM_VIDC_INPUT_PAUSE; + else + set_sub_state = MSM_VIDC_DRC | MSM_VIDC_INPUT_PAUSE; + + return msm_vidc_change_sub_state(inst, 0, set_sub_state, __func__); +} + +int msm_vidc_state_change_drain_last_flag(struct msm_vidc_inst *inst) +{ + enum msm_vidc_sub_state set_sub_state = MSM_VIDC_SUB_STATE_NONE; + + set_sub_state = MSM_VIDC_DRAIN_LAST_BUFFER | MSM_VIDC_OUTPUT_PAUSE; + return msm_vidc_change_sub_state(inst, 0, set_sub_state, __func__); +} + +int msm_vidc_state_change_psc_last_flag(struct msm_vidc_inst *inst) +{ + enum msm_vidc_sub_state set_sub_state = MSM_VIDC_SUB_STATE_NONE; + + set_sub_state = MSM_VIDC_DRC_LAST_BUFFER | MSM_VIDC_OUTPUT_PAUSE; + return msm_vidc_change_sub_state(inst, 0, set_sub_state, __func__); +} + +int msm_vidc_get_control(struct msm_vidc_inst *inst, struct v4l2_ctrl *ctrl) +{ + int rc = 0; + enum msm_vidc_inst_capability_type cap_id; + + cap_id = msm_vidc_get_cap_id(inst, ctrl->id); 
+ if (!is_valid_cap_id(cap_id)) { + i_vpr_e(inst, "%s: could not find cap_id for ctrl %s\n", + __func__, ctrl->name); + return -EINVAL; + } + + switch (cap_id) { + case MIN_BUFFERS_OUTPUT: + ctrl->val = inst->buffers.output.min_count + + inst->buffers.output.extra_count; + i_vpr_h(inst, "g_min: output buffers %d\n", ctrl->val); + break; + case MIN_BUFFERS_INPUT: + ctrl->val = inst->buffers.input.min_count + + inst->buffers.input.extra_count; + i_vpr_h(inst, "g_min: input buffers %d\n", ctrl->val); + break; + default: + i_vpr_e(inst, "invalid ctrl %s id %d\n", + ctrl->name, ctrl->id); + return -EINVAL; + } + + return rc; +} + +int msm_vidc_get_mbs_per_frame(struct msm_vidc_inst *inst) +{ + int height = 0, width = 0; + struct v4l2_format *inp_f; + + if (is_decode_session(inst)) { + inp_f = &inst->fmts[INPUT_PORT]; + width = max(inp_f->fmt.pix_mp.width, inst->crop.width); + height = max(inp_f->fmt.pix_mp.height, inst->crop.height); + } else if (is_encode_session(inst)) { + width = inst->crop.width; + height = inst->crop.height; + } + + return NUM_MBS_PER_FRAME(height, width); +} + +int msm_vidc_get_fps(struct msm_vidc_inst *inst) +{ + int fps; + u32 frame_rate, operating_rate; + + frame_rate = msm_vidc_get_frame_rate(inst); + operating_rate = msm_vidc_get_operating_rate(inst); + + if (operating_rate > frame_rate) + fps = operating_rate ? 
operating_rate : 1; + else + fps = frame_rate; + + return fps; +} + +int msm_vidc_num_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type type, + enum msm_vidc_buffer_attributes attr) +{ + int count = 0; + struct msm_vidc_buffer *vbuf; + struct msm_vidc_buffers *buffers; + + if (is_output_buffer(type)) { + buffers = &inst->buffers.output; + } else if (is_input_buffer(type)) { + buffers = &inst->buffers.input; + } else { + i_vpr_e(inst, "%s: invalid buffer type %#x\n", + __func__, type); + return count; + } + + list_for_each_entry(vbuf, &buffers->list, list) { + if (vbuf->type != type) + continue; + if (!(vbuf->attr & attr)) + continue; + count++; + } + + return count; +} + +int vb2_buffer_to_driver(struct vb2_buffer *vb2, + struct msm_vidc_buffer *buf) +{ + int rc = 0; + struct vb2_v4l2_buffer *vbuf; + + if (!vb2 || !buf) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + vbuf = to_vb2_v4l2_buffer(vb2); + + buf->fd = vb2->planes[0].m.fd; + buf->data_offset = vb2->planes[0].data_offset; + buf->data_size = vb2->planes[0].bytesused - vb2->planes[0].data_offset; + buf->buffer_size = vb2->planes[0].length; + buf->timestamp = vb2->timestamp; + buf->flags = vbuf->flags; + buf->attr = 0; + + return rc; +} + +int msm_vidc_process_readonly_buffers(struct msm_vidc_inst *inst, + struct msm_vidc_buffer *buf) +{ + int rc = 0; + struct msm_vidc_buffer *ro_buf, *dummy; + struct msm_vidc_core *core; + + core = inst->core; + + if (!is_decode_session(inst) || !is_output_buffer(buf->type)) + return 0; + + /* + * check if read_only buffer is present in read_only list + * if present: add ro flag to buf provided buffer is not + * pending release + */ + list_for_each_entry_safe(ro_buf, dummy, &inst->buffers.read_only.list, list) { + if (ro_buf->device_addr != buf->device_addr) + continue; + if (ro_buf->attr & MSM_VIDC_ATTR_READ_ONLY && + !(ro_buf->attr & MSM_VIDC_ATTR_PENDING_RELEASE)) { + /* add READ_ONLY to the buffer going to the firmware */ + buf->attr 
|= MSM_VIDC_ATTR_READ_ONLY; + /* + * remove READ_ONLY on the read_only list buffer so that + * it will get removed from the read_only list below + */ + ro_buf->attr &= ~MSM_VIDC_ATTR_READ_ONLY; + break; + } + } + + /* remove ro buffers if not required anymore */ + list_for_each_entry_safe(ro_buf, dummy, &inst->buffers.read_only.list, list) { + /* if read only buffer do not remove */ + if (ro_buf->attr & MSM_VIDC_ATTR_READ_ONLY) + continue; + + print_vidc_buffer(VIDC_LOW, "low ", "ro buf removed", inst, ro_buf); + /* unmap the buffer if driver holds mapping */ + if (ro_buf->sg_table && ro_buf->attach) { + call_mem_op(core, dma_buf_unmap_attachment, core, + ro_buf->attach, ro_buf->sg_table); + call_mem_op(core, dma_buf_detach, core, + ro_buf->dmabuf, ro_buf->attach); + ro_buf->sg_table = NULL; + ro_buf->attach = NULL; + } + if (ro_buf->dbuf_get) { + call_mem_op(core, dma_buf_put, inst, ro_buf->dmabuf); + ro_buf->dmabuf = NULL; + ro_buf->dbuf_get = 0; + } + + list_del_init(&ro_buf->list); + msm_vidc_pool_free(inst, ro_buf); + } + + return rc; +} + +int msm_vidc_update_input_rate(struct msm_vidc_inst *inst, u64 time_us) +{ + struct msm_vidc_input_timer *input_timer; + struct msm_vidc_input_timer *prev_timer = NULL; + struct msm_vidc_core *core; + u64 counter = 0; + u64 input_timer_sum_us = 0; + + core = inst->core; + + input_timer = msm_vidc_pool_alloc(inst, MSM_MEM_POOL_BUF_TIMER); + if (!input_timer) + return -ENOMEM; + + input_timer->time_us = time_us; + INIT_LIST_HEAD(&input_timer->list); + list_add_tail(&input_timer->list, &inst->input_timer_list); + list_for_each_entry(input_timer, &inst->input_timer_list, list) { + if (prev_timer) { + input_timer_sum_us += input_timer->time_us - prev_timer->time_us; + counter++; + } + prev_timer = input_timer; + } + + if (input_timer_sum_us && counter >= INPUT_TIMER_LIST_SIZE) + inst->capabilities[INPUT_RATE].value = + (s32)(DIV64_U64_ROUND_CLOSEST(counter * 1000000, + input_timer_sum_us) << 16); + + /* delete the first entry 
once counter >= INPUT_TIMER_LIST_SIZE */ + if (counter >= INPUT_TIMER_LIST_SIZE) { + input_timer = list_first_entry(&inst->input_timer_list, + struct msm_vidc_input_timer, list); + list_del_init(&input_timer->list); + msm_vidc_pool_free(inst, input_timer); + } + + return 0; +} + +int msm_vidc_flush_input_timer(struct msm_vidc_inst *inst) +{ + struct msm_vidc_input_timer *input_timer, *dummy_timer; + struct msm_vidc_core *core; + + core = inst->core; + + i_vpr_l(inst, "%s: flush input_timer list\n", __func__); + list_for_each_entry_safe(input_timer, dummy_timer, &inst->input_timer_list, list) { + list_del_init(&input_timer->list); + msm_vidc_pool_free(inst, input_timer); + } + return 0; +} + +int msm_vidc_get_input_rate(struct msm_vidc_inst *inst) +{ + return inst->capabilities[INPUT_RATE].value >> 16; +} + +int msm_vidc_get_timestamp_rate(struct msm_vidc_inst *inst) +{ + return inst->capabilities[TIMESTAMP_RATE].value >> 16; +} + +int msm_vidc_get_frame_rate(struct msm_vidc_inst *inst) +{ + return inst->capabilities[FRAME_RATE].value >> 16; +} + +int msm_vidc_get_operating_rate(struct msm_vidc_inst *inst) +{ + return inst->capabilities[OPERATING_RATE].value >> 16; +} + +static int msm_vidc_insert_sort(struct list_head *head, + struct msm_vidc_sort *entry) +{ + struct msm_vidc_sort *first, *node; + struct msm_vidc_sort *prev = NULL; + bool is_inserted = false; + + if (!head || !entry) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + + if (list_empty(head)) { + list_add(&entry->list, head); + return 0; + } + + first = list_first_entry(head, struct msm_vidc_sort, list); + if (entry->val < first->val) { + list_add(&entry->list, head); + return 0; + } + + list_for_each_entry(node, head, list) { + if (prev && + entry->val >= prev->val && entry->val <= node->val) { + list_add(&entry->list, &prev->list); + is_inserted = true; + break; + } + prev = node; + } + + if (!is_inserted && prev) + list_add(&entry->list, &prev->list); + + return 0; +} + +static 
struct msm_vidc_timestamp *msm_vidc_get_least_rank_ts(struct msm_vidc_inst *inst) +{ + struct msm_vidc_timestamp *ts, *final = NULL; + u64 least_rank = INT_MAX; + + list_for_each_entry(ts, &inst->timestamps.list, sort.list) { + if (ts->rank < least_rank) { + least_rank = ts->rank; + final = ts; + } + } + + return final; +} + +int msm_vidc_flush_ts(struct msm_vidc_inst *inst) +{ + struct msm_vidc_timestamp *temp, *ts = NULL; + struct msm_vidc_core *core; + + core = inst->core; + + list_for_each_entry_safe(ts, temp, &inst->timestamps.list, sort.list) { + i_vpr_l(inst, "%s: flushing ts: val %llu, rank %llu\n", + __func__, ts->sort.val, ts->rank); + list_del(&ts->sort.list); + msm_vidc_pool_free(inst, ts); + } + inst->timestamps.count = 0; + inst->timestamps.rank = 0; + + return 0; +} + +int msm_vidc_update_timestamp_rate(struct msm_vidc_inst *inst, u64 timestamp) +{ + struct msm_vidc_timestamp *ts, *prev = NULL; + struct msm_vidc_core *core; + int rc = 0; + u32 window_size = 0; + u32 timestamp_rate = 0; + u64 ts_ms = 0; + u32 counter = 0; + + core = inst->core; + + ts = msm_vidc_pool_alloc(inst, MSM_MEM_POOL_TIMESTAMP); + if (!ts) { + i_vpr_e(inst, "%s: ts alloc failed\n", __func__); + return -ENOMEM; + } + + INIT_LIST_HEAD(&ts->sort.list); + ts->sort.val = timestamp; + ts->rank = inst->timestamps.rank++; + rc = msm_vidc_insert_sort(&inst->timestamps.list, &ts->sort); + if (rc) + return rc; + inst->timestamps.count++; + + if (is_encode_session(inst)) + window_size = ENC_FPS_WINDOW; + else + window_size = DEC_FPS_WINDOW; + + /* keep sliding window */ + if (inst->timestamps.count > window_size) { + ts = msm_vidc_get_least_rank_ts(inst); + if (!ts) { + i_vpr_e(inst, "%s: least rank ts is NULL\n", __func__); + return -EINVAL; + } + inst->timestamps.count--; + list_del(&ts->sort.list); + msm_vidc_pool_free(inst, ts); + } + + /* Calculate timestamp rate */ + list_for_each_entry(ts, &inst->timestamps.list, sort.list) { + if (prev) { + if (ts->sort.val == prev->sort.val) + 
continue; + ts_ms += div_u64(ts->sort.val - prev->sort.val, 1000000); + counter++; + } + prev = ts; + } + if (ts_ms) + timestamp_rate = (u32)div_u64((u64)counter * 1000, ts_ms); + + msm_vidc_update_cap_value(inst, TIMESTAMP_RATE, timestamp_rate << 16, __func__); + + return 0; +} + +struct msm_vidc_buffer *msm_vidc_get_driver_buf(struct msm_vidc_inst *inst, + struct vb2_buffer *vb2) +{ + int rc = 0; + struct msm_vidc_buffer *buf; + struct msm_vidc_core *core; + + core = inst->core; + + buf = msm_vidc_fetch_buffer(inst, vb2); + if (!buf) { + i_vpr_e(inst, "%s: failed to fetch buffer\n", __func__); + return NULL; + } + + rc = vb2_buffer_to_driver(vb2, buf); + if (rc) + return NULL; + + /* treat every buffer as deferred buffer initially */ + buf->attr |= MSM_VIDC_ATTR_DEFERRED; + + if (is_decode_session(inst) && is_output_buffer(buf->type)) { + /* get a reference */ + if (!buf->dbuf_get) { + buf->dmabuf = call_mem_op(core, dma_buf_get, inst, buf->fd); + if (!buf->dmabuf) + return NULL; + buf->dbuf_get = 1; + } + } + + return buf; +} + +int msm_vidc_allocate_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buf_type, + u32 num_buffers) +{ + int rc = 0; + int idx = 0; + struct msm_vidc_buffer *buf = NULL; + struct msm_vidc_buffers *buffers; + struct msm_vidc_core *core; + + core = inst->core; + + buffers = msm_vidc_get_buffers(inst, buf_type, __func__); + if (!buffers) + return -EINVAL; + + for (idx = 0; idx < num_buffers; idx++) { + buf = msm_vidc_pool_alloc(inst, MSM_MEM_POOL_BUFFER); + if (!buf) { + i_vpr_e(inst, "%s: alloc failed\n", __func__); + return -EINVAL; + } + INIT_LIST_HEAD(&buf->list); + list_add_tail(&buf->list, &buffers->list); + buf->type = buf_type; + buf->index = idx; + buf->region = call_mem_op(core, buffer_region, inst, buf_type); + } + i_vpr_h(inst, "%s: allocated %d buffers for type %s\n", + __func__, num_buffers, buf_name(buf_type)); + + return rc; +} + +int msm_vidc_free_buffers(struct msm_vidc_inst *inst, + enum 
msm_vidc_buffer_type buf_type) +{ + int rc = 0; + int buf_count = 0; + struct msm_vidc_buffer *buf, *dummy; + struct msm_vidc_buffers *buffers; + struct msm_vidc_core *core; + + core = inst->core; + + buffers = msm_vidc_get_buffers(inst, buf_type, __func__); + if (!buffers) + return -EINVAL; + + list_for_each_entry_safe(buf, dummy, &buffers->list, list) { + buf_count++; + print_vidc_buffer(VIDC_LOW, "low ", "free buffer", inst, buf); + list_del_init(&buf->list); + msm_vidc_pool_free(inst, buf); + } + i_vpr_h(inst, "%s: freed %d buffers for type %s\n", + __func__, buf_count, buf_name(buf_type)); + + return rc; +} + +struct msm_vidc_buffer *msm_vidc_fetch_buffer(struct msm_vidc_inst *inst, + struct vb2_buffer *vb2) + +{ + struct msm_vidc_buffer *buf = NULL; + struct msm_vidc_buffers *buffers; + enum msm_vidc_buffer_type buf_type; + bool found = false; + + buf_type = v4l2_type_to_driver(vb2->type, __func__); + if (!buf_type) + return NULL; + + buffers = msm_vidc_get_buffers(inst, buf_type, __func__); + if (!buffers) + return NULL; + + list_for_each_entry(buf, &buffers->list, list) { + if (buf->index == vb2->index) { + found = true; + break; + } + } + + if (!found) { + i_vpr_e(inst, "%s: buffer not found for index %d for vb2 buffer type %s\n", + __func__, vb2->index, v4l2_type_name(vb2->type)); + return NULL; + } + + return buf; +} + +static bool is_single_session(struct msm_vidc_inst *inst) +{ + struct msm_vidc_core *core; + u32 count = 0; + + core = inst->core; + + core_lock(core, __func__); + list_for_each_entry(inst, &core->instances, list) + count++; + core_unlock(core, __func__); + + return count == 1; +} + +void msm_vidc_allow_dcvs(struct msm_vidc_inst *inst) +{ + bool allow = false; + struct msm_vidc_core *core; + u32 fps; + + core = inst->core; + + allow = core->capabilities[DCVS].value; + if (!allow) { + i_vpr_h(inst, "%s: core doesn't support dcvs\n", __func__); + goto exit; + } + + allow = !inst->decode_batch.enable; + if (!allow) { + i_vpr_h(inst, "%s: 
decode_batching enabled\n", __func__); + goto exit; + } + + fps = msm_vidc_get_fps(inst); + if (is_decode_session(inst) && + fps >= inst->capabilities[FRAME_RATE].max) { + allow = false; + i_vpr_h(inst, "%s: unsupported fps %d\n", __func__, fps); + goto exit; + } + +exit: + i_vpr_hp(inst, "%s: dcvs: %s\n", __func__, allow ? "enabled" : "disabled"); + + inst->power.dcvs_flags = 0; + inst->power.dcvs_mode = allow; +} + +bool msm_vidc_allow_decode_batch(struct msm_vidc_inst *inst) +{ + struct msm_vidc_inst_cap *cap; + struct msm_vidc_core *core; + bool allow = false; + u32 value = 0; + + core = inst->core; + cap = &inst->capabilities[0]; + + allow = inst->decode_batch.enable; + if (!allow) { + i_vpr_h(inst, "%s: batching already disabled\n", __func__); + goto exit; + } + + allow = core->capabilities[DECODE_BATCH].value; + if (!allow) { + i_vpr_h(inst, "%s: core doesn't support batching\n", __func__); + goto exit; + } + + allow = is_single_session(inst); + if (!allow) { + i_vpr_h(inst, "%s: multiple sessions running\n", __func__); + goto exit; + } + + allow = is_decode_session(inst); + if (!allow) { + i_vpr_h(inst, "%s: not a decoder session\n", __func__); + goto exit; + } + + value = msm_vidc_get_fps(inst); + allow = value < cap[BATCH_FPS].value; + if (!allow) { + i_vpr_h(inst, "%s: unsupported fps %u, max %u\n", __func__, + value, cap[BATCH_FPS].value); + goto exit; + } + + value = msm_vidc_get_mbs_per_frame(inst); + allow = value < cap[BATCH_MBPF].value; + if (!allow) { + i_vpr_h(inst, "%s: unsupported mbpf %u, max %u\n", __func__, + value, cap[BATCH_MBPF].value); + goto exit; + } + +exit: + i_vpr_hp(inst, "%s: batching: %s\n", __func__, allow ? 
"enabled" : "disabled"); + + return allow; +} + +void msm_vidc_update_stats(struct msm_vidc_inst *inst, + struct msm_vidc_buffer *buf, enum msm_vidc_debugfs_event etype) +{ + if ((is_decode_session(inst) && etype == MSM_VIDC_DEBUGFS_EVENT_ETB) || + (is_encode_session(inst) && etype == MSM_VIDC_DEBUGFS_EVENT_FBD)) + inst->stats.data_size += buf->data_size; + + msm_vidc_debugfs_update(inst, etype); +} + +void msm_vidc_print_stats(struct msm_vidc_inst *inst) +{ + u32 frame_rate, operating_rate, achieved_fps, etb, ebd, ftb, fbd, dt_ms; + u64 bitrate_kbps = 0, time_ms = ktime_get_ns() / 1000 / 1000; + + etb = inst->debug_count.etb - inst->stats.count.etb; + ebd = inst->debug_count.ebd - inst->stats.count.ebd; + ftb = inst->debug_count.ftb - inst->stats.count.ftb; + fbd = inst->debug_count.fbd - inst->stats.count.fbd; + frame_rate = inst->capabilities[FRAME_RATE].value >> 16; + operating_rate = inst->capabilities[OPERATING_RATE].value >> 16; + + dt_ms = time_ms - inst->stats.time_ms; + achieved_fps = (fbd * 1000) / dt_ms; + bitrate_kbps = (inst->stats.data_size * 8 * 1000) / (dt_ms * 1024); + + i_vpr_hs(inst, + "counts (etb,ebd,ftb,fbd): %u %u %u %u (total %llu %llu %llu %llu), bps %lluKbps fps %u/s, frame rate %u, op rate %u, avg bw llcc %ukhz, avg bw ddr %ukhz, dt %ums\n", + etb, ebd, ftb, fbd, inst->debug_count.etb, inst->debug_count.ebd, + inst->debug_count.ftb, inst->debug_count.fbd, bitrate_kbps, + achieved_fps, frame_rate, operating_rate, + inst->stats.avg_bw_llcc, inst->stats.avg_bw_ddr, dt_ms); + + inst->stats.count = inst->debug_count; + inst->stats.data_size = 0; + inst->stats.avg_bw_llcc = 0; + inst->stats.avg_bw_ddr = 0; + inst->stats.time_ms = time_ms; +} + +void msm_vidc_print_memory_stats(struct msm_vidc_inst *inst) +{ + static enum msm_vidc_buffer_type buf_type_arr[8] = { + MSM_VIDC_BUF_BIN, + MSM_VIDC_BUF_ARP, + MSM_VIDC_BUF_COMV, + MSM_VIDC_BUF_NON_COMV, + MSM_VIDC_BUF_LINE, + MSM_VIDC_BUF_DPB, + MSM_VIDC_BUF_PERSIST, + MSM_VIDC_BUF_VPSS, + }; + u32 
count_arr[8]; + u32 size_arr[8]; + u32 size_kb_arr[8]; + u64 total_size = 0; + struct msm_vidc_buffers *buffers; + int cnt; + + /* reset array values */ + memset(&count_arr, 0, sizeof(count_arr)); + memset(&size_arr, 0, sizeof(size_arr)); + memset(&size_kb_arr, 0, sizeof(size_kb_arr)); + + /* populate buffer details */ + for (cnt = 0; cnt < 8; cnt++) { + buffers = msm_vidc_get_buffers(inst, buf_type_arr[cnt], __func__); + if (!buffers) + continue; + + size_arr[cnt] = buffers->size; + count_arr[cnt] = buffers->min_count; + size_kb_arr[cnt] = (size_arr[cnt] * count_arr[cnt]) / 1024; + total_size += size_arr[cnt] * count_arr[cnt]; + } + + /* print internal memory stats */ + i_vpr_hs(inst, + "%s %u kb(%ux%d) %s %u kb(%ux%d) %s %u kb(%ux%d) %s %u kb(%ux%d) %s %u kb(%ux%d) %s %u kb(%ux%d) %s %u kb(%ux%d) %s %u kb(%ux%d) total %llu kb\n", + buf_name(buf_type_arr[0]), size_kb_arr[0], size_arr[0], count_arr[0], + buf_name(buf_type_arr[1]), size_kb_arr[1], size_arr[1], count_arr[1], + buf_name(buf_type_arr[2]), size_kb_arr[2], size_arr[2], count_arr[2], + buf_name(buf_type_arr[3]), size_kb_arr[3], size_arr[3], count_arr[3], + buf_name(buf_type_arr[4]), size_kb_arr[4], size_arr[4], count_arr[4], + buf_name(buf_type_arr[5]), size_kb_arr[5], size_arr[5], count_arr[5], + buf_name(buf_type_arr[6]), size_kb_arr[6], size_arr[6], count_arr[6], + buf_name(buf_type_arr[7]), size_kb_arr[7], size_arr[7], count_arr[7], + (total_size / 1024)); +} + +int schedule_stats_work(struct msm_vidc_inst *inst) +{ + struct msm_vidc_core *core; + + if (!inst || !inst->core) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + + if (!is_stats_enabled()) { + i_vpr_h(inst, "%s: stats not enabled. Skip scheduling\n", __func__); + return 0; + } + + /** + * Hfi session is already closed and inst also going to be + * closed soon. So skip scheduling new stats_work to avoid + * use-after-free issues with close sequence. 
+ */ + if (!inst->packet) { + i_vpr_e(inst, "skip scheduling stats_work\n"); + return 0; + } + core = inst->core; + mod_delayed_work(inst->workq, &inst->stats_work, + msecs_to_jiffies(core->capabilities[STATS_TIMEOUT_MS].value)); + + return 0; +} + +int cancel_stats_work_sync(struct msm_vidc_inst *inst) +{ + cancel_delayed_work_sync(&inst->stats_work); + + return 0; +} + +void msm_vidc_stats_handler(struct work_struct *work) +{ + struct msm_vidc_inst *inst; + + inst = container_of(work, struct msm_vidc_inst, stats_work.work); + inst = get_inst_ref(g_core, inst); + if (!inst || !inst->packet) { + d_vpr_e("%s: invalid params\n", __func__); + return; + } + + inst_lock(inst, __func__); + msm_vidc_print_stats(inst); + schedule_stats_work(inst); + inst_unlock(inst, __func__); + + put_inst(inst); +} + +static int msm_vidc_queue_buffer(struct msm_vidc_inst *inst, struct msm_vidc_buffer *buf) +{ + enum msm_vidc_debugfs_event etype; + int rc = 0; + + if (is_decode_session(inst) && is_output_buffer(buf->type)) { + rc = msm_vidc_process_readonly_buffers(inst, buf); + if (rc) + return rc; + } + + print_vidc_buffer(VIDC_HIGH, "high", "qbuf", inst, buf); + + rc = venus_hfi_queue_buffer(inst, buf); + if (rc) + return rc; + + buf->attr &= ~MSM_VIDC_ATTR_DEFERRED; + buf->attr |= MSM_VIDC_ATTR_QUEUED; + + if (is_input_buffer(buf->type)) + inst->power.buffer_counter++; + + if (is_input_buffer(buf->type)) + etype = MSM_VIDC_DEBUGFS_EVENT_ETB; + else + etype = MSM_VIDC_DEBUGFS_EVENT_FTB; + + msm_vidc_update_stats(inst, buf, etype); + + return 0; +} + +int msm_vidc_alloc_and_queue_input_internal_buffers(struct msm_vidc_inst *inst) +{ + int rc = 0; + + rc = msm_vdec_get_input_internal_buffers(inst); + if (rc) + return rc; + + rc = msm_vdec_release_input_internal_buffers(inst); + if (rc) + return rc; + + rc = msm_vdec_create_input_internal_buffers(inst); + if (rc) + return rc; + + rc = msm_vdec_queue_input_internal_buffers(inst); + + return rc; +} + +int 
msm_vidc_queue_deferred_buffers(struct msm_vidc_inst *inst, enum msm_vidc_buffer_type buf_type) +{ + struct msm_vidc_buffers *buffers; + struct msm_vidc_buffer *buf; + int rc = 0; + + buffers = msm_vidc_get_buffers(inst, buf_type, __func__); + if (!buffers) + return -EINVAL; + + msm_vidc_scale_power(inst, true); + + list_for_each_entry(buf, &buffers->list, list) { + if (!(buf->attr & MSM_VIDC_ATTR_DEFERRED)) + continue; + rc = msm_vidc_queue_buffer(inst, buf); + if (rc) + return rc; + } + + return 0; +} + +int msm_vidc_buf_queue(struct msm_vidc_inst *inst, struct msm_vidc_buffer *buf) +{ + msm_vidc_scale_power(inst, is_input_buffer(buf->type)); + + return msm_vidc_queue_buffer(inst, buf); +} + +int msm_vidc_queue_buffer_single(struct msm_vidc_inst *inst, struct vb2_buffer *vb2) +{ + int rc = 0; + struct msm_vidc_buffer *buf = NULL; + + buf = msm_vidc_get_driver_buf(inst, vb2); + if (!buf) + return -EINVAL; + + rc = inst->event_handle(inst, MSM_VIDC_BUF_QUEUE, buf); + if (rc) + i_vpr_e(inst, "%s: qbuf failed\n", __func__); + return rc; +} + +int msm_vidc_destroy_internal_buffer(struct msm_vidc_inst *inst, + struct msm_vidc_buffer *buffer) +{ + struct msm_vidc_buffers *buffers; + struct msm_vidc_mem_list *mem_list; + struct msm_vidc_mem *mem, *mem_dummy; + struct msm_vidc_buffer *buf, *dummy; + struct msm_vidc_core *core; + + core = inst->core; + + if (!is_internal_buffer(buffer->type)) { + i_vpr_e(inst, "%s: type: %s is not internal\n", + __func__, buf_name(buffer->type)); + return 0; + } + + i_vpr_h(inst, "%s: destroy: type: %8s, size: %9u, device_addr %#llx\n", __func__, + buf_name(buffer->type), buffer->buffer_size, buffer->device_addr); + + buffers = msm_vidc_get_buffers(inst, buffer->type, __func__); + if (!buffers) + return -EINVAL; + mem_list = msm_vidc_get_mem_info(inst, buffer->type, __func__); + if (!mem_list) + return -EINVAL; + + list_for_each_entry_safe(mem, 
mem_dummy, &mem_list->list, list) { + if (mem->dmabuf == buffer->dmabuf) { + call_mem_op(core, memory_unmap_free, core, mem); + list_del(&mem->list); + msm_vidc_pool_free(inst, mem); + break; + } + } + + list_for_each_entry_safe(buf, dummy, &buffers->list, list) { + if (buf->dmabuf == buffer->dmabuf) { + list_del(&buf->list); + msm_vidc_pool_free(inst, buf); + break; + } + } + + return 0; +} + +int msm_vidc_get_internal_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type) +{ + u32 buf_size; + u32 buf_count; + struct msm_vidc_core *core; + struct msm_vidc_buffers *buffers; + + core = inst->core; + + buf_size = call_session_op(core, buffer_size, + inst, buffer_type); + + buf_count = call_session_op(core, min_count, + inst, buffer_type); + + buffers = msm_vidc_get_buffers(inst, buffer_type, __func__); + if (!buffers) + return -EINVAL; + + /* + * In a usecase when film grain is initially present, dpb buffers + * are allocated and in the middle of the session, if film grain + * is disabled, then dpb internal buffers should be destroyed. + * When film grain is disabled, buffer_size op call returns 0. + * To ensure buffers->reuse is set to false, add check to detect + * if buf_size has become zero. Do the same for buf_count as well. 
+ */ + if (buf_size && buf_size <= buffers->size && + buf_count && buf_count <= buffers->min_count) { + buffers->reuse = true; + } else { + buffers->reuse = false; + buffers->size = buf_size; + buffers->min_count = buf_count; + } + return 0; +} + +int msm_vidc_create_internal_buffer(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type, u32 index) +{ + int rc = 0; + struct msm_vidc_buffers *buffers; + struct msm_vidc_mem_list *mem_list; + struct msm_vidc_buffer *buffer; + struct msm_vidc_mem *mem; + struct msm_vidc_core *core; + + core = inst->core; + if (!is_internal_buffer(buffer_type)) { + i_vpr_e(inst, "%s: type %s is not internal\n", + __func__, buf_name(buffer_type)); + return 0; + } + + buffers = msm_vidc_get_buffers(inst, buffer_type, __func__); + if (!buffers) + return -EINVAL; + mem_list = msm_vidc_get_mem_info(inst, buffer_type, __func__); + if (!mem_list) + return -EINVAL; + + if (!buffers->size) + return 0; + + buffer = msm_vidc_pool_alloc(inst, MSM_MEM_POOL_BUFFER); + if (!buffer) { + i_vpr_e(inst, "%s: buf alloc failed\n", __func__); + return -ENOMEM; + } + INIT_LIST_HEAD(&buffer->list); + buffer->type = buffer_type; + buffer->index = index; + buffer->buffer_size = buffers->size; + list_add_tail(&buffer->list, &buffers->list); + + mem = msm_vidc_pool_alloc(inst, MSM_MEM_POOL_ALLOC_MAP); + if (!mem) { + i_vpr_e(inst, "%s: mem pool alloc failed\n", __func__); + return -ENOMEM; + } + INIT_LIST_HEAD(&mem->list); + mem->type = buffer_type; + mem->region = call_mem_op(core, buffer_region, inst, buffer_type); + mem->size = buffer->buffer_size; + mem->secure = is_secure_region(mem->region); + rc = call_mem_op(core, memory_alloc_map, core, mem); + if (rc) + return -ENOMEM; + list_add_tail(&mem->list, &mem_list->list); + + buffer->dmabuf = mem->dmabuf; + buffer->device_addr = mem->device_addr; + buffer->region = mem->region; + i_vpr_h(inst, "%s: create: type: %8s, size: %9u, device_addr %#llx\n", __func__, + buf_name(buffer_type), buffers->size, 
buffer->device_addr); + + return 0; +} + +int msm_vidc_create_internal_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type) +{ + int rc = 0; + struct msm_vidc_buffers *buffers; + int i; + + buffers = msm_vidc_get_buffers(inst, buffer_type, __func__); + if (!buffers) + return -EINVAL; + + if (buffers->reuse) { + i_vpr_l(inst, "%s: reuse enabled for %s\n", __func__, buf_name(buffer_type)); + return 0; + } + + for (i = 0; i < buffers->min_count; i++) { + rc = msm_vidc_create_internal_buffer(inst, buffer_type, i); + if (rc) + return rc; + } + + return rc; +} + +int msm_vidc_queue_internal_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type) +{ + int rc = 0; + struct msm_vidc_buffers *buffers; + struct msm_vidc_buffer *buffer, *dummy; + + if (!is_internal_buffer(buffer_type)) { + i_vpr_e(inst, "%s: %s is not internal\n", __func__, buf_name(buffer_type)); + return 0; + } + + /* + * Set HFI_PROP_COMV_BUFFER_COUNT to firmware even if COMV buffer + * is reused. 
+ */ + if (is_decode_session(inst) && buffer_type == MSM_VIDC_BUF_COMV) { + rc = msm_vdec_set_num_comv(inst); + if (rc) + return rc; + } + + buffers = msm_vidc_get_buffers(inst, buffer_type, __func__); + if (!buffers) + return -EINVAL; + + list_for_each_entry_safe(buffer, dummy, &buffers->list, list) { + /* do not queue pending release buffers */ + if (buffer->attr & MSM_VIDC_ATTR_PENDING_RELEASE) + continue; + /* do not queue already queued buffers */ + if (buffer->attr & MSM_VIDC_ATTR_QUEUED) + continue; + rc = venus_hfi_queue_buffer(inst, buffer); + if (rc) + return rc; + /* mark queued */ + buffer->attr |= MSM_VIDC_ATTR_QUEUED; + + i_vpr_h(inst, "%s: queue: type: %8s, size: %9u, device_addr %#llx\n", __func__, + buf_name(buffer->type), buffer->buffer_size, buffer->device_addr); + } + + return 0; +} + +int msm_vidc_alloc_and_queue_session_int_bufs(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type) +{ + int rc = 0; + + if (buffer_type != MSM_VIDC_BUF_ARP && + buffer_type != MSM_VIDC_BUF_PERSIST) { + i_vpr_e(inst, "%s: invalid buffer type: %s\n", + __func__, buf_name(buffer_type)); + rc = -EINVAL; + goto exit; + } + + rc = msm_vidc_get_internal_buffers(inst, buffer_type); + if (rc) + goto exit; + + rc = msm_vidc_create_internal_buffers(inst, buffer_type); + if (rc) + goto exit; + + rc = msm_vidc_queue_internal_buffers(inst, buffer_type); + if (rc) + goto exit; + +exit: + return rc; +} + +int msm_vidc_release_internal_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type) +{ + int rc = 0; + struct msm_vidc_buffers *buffers; + struct msm_vidc_buffer *buffer, *dummy; + + if (!is_internal_buffer(buffer_type)) { + i_vpr_e(inst, "%s: %s is not internal\n", + __func__, buf_name(buffer_type)); + return 0; + } + + buffers = msm_vidc_get_buffers(inst, buffer_type, __func__); + if (!buffers) + return -EINVAL; + + if (buffers->reuse) { + i_vpr_l(inst, "%s: reuse enabled for %s buf\n", + __func__, buf_name(buffer_type)); + return 0; + } 
+ + list_for_each_entry_safe(buffer, dummy, &buffers->list, list) { + /* do not release already pending release buffers */ + if (buffer->attr & MSM_VIDC_ATTR_PENDING_RELEASE) + continue; + /* release only queued buffers */ + if (!(buffer->attr & MSM_VIDC_ATTR_QUEUED)) + continue; + rc = venus_hfi_release_buffer(inst, buffer); + if (rc) + return rc; + /* mark pending release */ + buffer->attr |= MSM_VIDC_ATTR_PENDING_RELEASE; + + i_vpr_h(inst, "%s: release: type: %8s, size: %9u, device_addr %#llx\n", __func__, + buf_name(buffer->type), buffer->buffer_size, buffer->device_addr); + } + + return 0; +} + +int msm_vidc_vb2_buffer_done(struct msm_vidc_inst *inst, + struct msm_vidc_buffer *buf) +{ + int type, port, state; + struct vb2_queue *q; + struct vb2_buffer *vb2; + struct vb2_v4l2_buffer *vbuf; + bool found; + + type = v4l2_type_from_driver(buf->type, __func__); + if (!type) + return -EINVAL; + port = v4l2_type_to_driver_port(inst, type, __func__); + if (port < 0) + return -EINVAL; + + q = inst->bufq[port].vb2q; + if (!q->streaming) { + i_vpr_e(inst, "%s: port %d is not streaming\n", + __func__, port); + return -EINVAL; + } + + found = false; + list_for_each_entry(vb2, &q->queued_list, queued_entry) { + if (vb2->state != VB2_BUF_STATE_ACTIVE) + continue; + if (vb2->index == buf->index) { + found = true; + break; + } + } + if (!found) { + print_vidc_buffer(VIDC_ERR, "err ", "vb2 not found for", inst, buf); + return -EINVAL; + } + /** + * v4l2 clears buffer state related flags. For driver errors + * send state as error to avoid skipping V4L2_BUF_FLAG_ERROR + * flag at v4l2 side. 
+ */ + if (buf->flags & MSM_VIDC_BUF_FLAG_ERROR) + state = VB2_BUF_STATE_ERROR; + else + state = VB2_BUF_STATE_DONE; + + vbuf = to_vb2_v4l2_buffer(vb2); + vbuf->flags = buf->flags; + vb2->timestamp = buf->timestamp; + vb2->planes[0].bytesused = buf->data_size + vb2->planes[0].data_offset; + vb2_buffer_done(vb2, state); + + return 0; +} + +int msm_vidc_v4l2_fh_init(struct msm_vidc_inst *inst) +{ + int rc = 0; + int index; + struct msm_vidc_core *core; + + core = inst->core; + + /* do not init, if already inited */ + if (inst->fh.vdev) { + i_vpr_e(inst, "%s: already inited\n", __func__); + return -EINVAL; + } + + if (is_decode_session(inst)) + index = 0; + else if (is_encode_session(inst)) + index = 1; + else + return -EINVAL; + + v4l2_fh_init(&inst->fh, &core->vdev[index].vdev); + inst->fh.ctrl_handler = &inst->ctrl_handler; + v4l2_fh_add(&inst->fh); + + return rc; +} + +int msm_vidc_v4l2_fh_deinit(struct msm_vidc_inst *inst) +{ + int rc = 0; + + /* do not deinit, if not already inited */ + if (!inst->fh.vdev) { + i_vpr_h(inst, "%s: already not inited\n", __func__); + return 0; + } + + v4l2_fh_del(&inst->fh); + inst->fh.ctrl_handler = NULL; + v4l2_fh_exit(&inst->fh); + + return rc; +} + +static int vb2q_init(struct msm_vidc_inst *inst, + struct vb2_queue *q, enum v4l2_buf_type type) +{ + int rc = 0; + struct msm_vidc_core *core; + + core = inst->core; + + q->type = type; + q->io_modes = VB2_MMAP | VB2_DMABUF; + q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY; + q->ops = core->vb2_ops; + q->mem_ops = core->vb2_mem_ops; + q->drv_priv = inst; + q->copy_timestamp = 1; + rc = vb2_queue_init(q); + if (rc) + i_vpr_e(inst, "%s: vb2_queue_init failed for type %d\n", + __func__, type); + return rc; +} + +static int m2m_queue_init(void *priv, struct vb2_queue *src_vq, + struct vb2_queue *dst_vq) +{ + int rc = 0; + struct msm_vidc_inst *inst = priv; + struct msm_vidc_core *core; + + if (!inst || !inst->core || !src_vq || !dst_vq) { + d_vpr_e("%s: invalid params\n", __func__); 
+ return -EINVAL; + } + core = inst->core; + + src_vq->lock = &inst->ctx_q_lock; + src_vq->dev = &core->pdev->dev; + rc = vb2q_init(inst, src_vq, INPUT_MPLANE); + if (rc) + goto fail_input_vb2q_init; + inst->bufq[INPUT_PORT].vb2q = src_vq; + + dst_vq->lock = src_vq->lock; + dst_vq->dev = &core->pdev->dev; + rc = vb2q_init(inst, dst_vq, OUTPUT_MPLANE); + if (rc) + goto fail_out_vb2q_init; + inst->bufq[OUTPUT_PORT].vb2q = dst_vq; + return rc; + +fail_out_vb2q_init: + vb2_queue_release(inst->bufq[INPUT_PORT].vb2q); +fail_input_vb2q_init: + return rc; +} + +int msm_vidc_vb2_queue_init(struct msm_vidc_inst *inst) +{ + int rc = 0; + struct msm_vidc_core *core; + + core = inst->core; + + if (inst->m2m_dev) { + i_vpr_e(inst, "%s: vb2q already inited\n", __func__); + return -EINVAL; + } + + inst->m2m_dev = v4l2_m2m_init(core->v4l2_m2m_ops); + if (IS_ERR(inst->m2m_dev)) { + i_vpr_e(inst, "%s: failed to initialize v4l2 m2m device\n", __func__); + rc = PTR_ERR(inst->m2m_dev); + goto fail_m2m_init; + } + + /* v4l2_m2m_ctx_init will do input & output queues initialization */ + inst->m2m_ctx = v4l2_m2m_ctx_init(inst->m2m_dev, inst, m2m_queue_init); + if (!inst->m2m_ctx) { + rc = -EINVAL; + i_vpr_e(inst, "%s: v4l2_m2m_ctx_init failed\n", __func__); + goto fail_m2m_ctx_init; + } + inst->fh.m2m_ctx = inst->m2m_ctx; + + return 0; + +fail_m2m_ctx_init: + v4l2_m2m_release(inst->m2m_dev); + inst->m2m_dev = NULL; +fail_m2m_init: + return rc; +} + +int msm_vidc_vb2_queue_deinit(struct msm_vidc_inst *inst) +{ + int rc = 0; + + if (!inst->m2m_dev) { + i_vpr_h(inst, "%s: vb2q already deinited\n", __func__); + return 0; + } + + /* + * vb2_queue_release() for input and output queues + * is called from v4l2_m2m_ctx_release() + */ + v4l2_m2m_ctx_release(inst->m2m_ctx); + inst->m2m_ctx = NULL; + inst->bufq[OUTPUT_PORT].vb2q = NULL; + inst->bufq[INPUT_PORT].vb2q = NULL; + v4l2_m2m_release(inst->m2m_dev); + inst->m2m_dev = NULL; + + return rc; +} + +int msm_vidc_add_session(struct msm_vidc_inst 
*inst) +{ + int rc = 0; + struct msm_vidc_inst *i; + struct msm_vidc_core *core; + u32 count = 0; + + core = inst->core; + + core_lock(core, __func__); + if (core->state != MSM_VIDC_CORE_INIT) { + i_vpr_e(inst, "%s: invalid state %s\n", + __func__, core_state_name(core->state)); + rc = -EINVAL; + goto unlock; + } + list_for_each_entry(i, &core->instances, list) + count++; + + if (count < core->capabilities[MAX_SESSION_COUNT].value) { + list_add_tail(&inst->list, &core->instances); + } else { + i_vpr_e(inst, "%s: max limit %d already running %d sessions\n", + __func__, core->capabilities[MAX_SESSION_COUNT].value, count); + rc = -EAGAIN; + } +unlock: + core_unlock(core, __func__); + + return rc; +} + +int msm_vidc_remove_session(struct msm_vidc_inst *inst) +{ + struct msm_vidc_inst *i, *temp; + struct msm_vidc_core *core; + u32 count = 0; + + core = inst->core; + + core_lock(core, __func__); + list_for_each_entry_safe(i, temp, &core->instances, list) { + if (i->session_id == inst->session_id) { + list_move_tail(&i->list, &core->dangling_instances); + i_vpr_h(inst, "%s: removed session %#x\n", + __func__, i->session_id); + } + } + list_for_each_entry(i, &core->instances, list) + count++; + i_vpr_h(inst, "%s: remaining sessions %d\n", __func__, count); + core_unlock(core, __func__); + + return 0; +} + +int msm_vidc_remove_dangling_session(struct msm_vidc_inst *inst) +{ + struct msm_vidc_inst *i, *temp; + struct msm_vidc_core *core; + u32 count = 0, dcount = 0; + + core = inst->core; + + core_lock(core, __func__); + list_for_each_entry_safe(i, temp, &core->dangling_instances, list) { + if (i->session_id == inst->session_id) { + list_del_init(&i->list); + i_vpr_h(inst, "%s: removed dangling session %#x\n", + __func__, i->session_id); + break; + } + } + list_for_each_entry(i, &core->instances, list) + count++; + list_for_each_entry(i, &core->dangling_instances, list) + dcount++; + i_vpr_h(inst, "%s: remaining sessions. 
active %d, dangling %d\n", + __func__, count, dcount); + core_unlock(core, __func__); + + return 0; +} + +int msm_vidc_session_open(struct msm_vidc_inst *inst) +{ + int rc = 0; + + inst->packet_size = 4096; + + inst->packet = vzalloc(inst->packet_size); + if (!inst->packet) { + i_vpr_e(inst, "%s: allocation failed\n", __func__); + return -ENOMEM; + } + + rc = venus_hfi_session_open(inst); + if (rc) + goto error; + + return 0; +error: + i_vpr_e(inst, "%s(): session open failed\n", __func__); + vfree(inst->packet); + inst->packet = NULL; + return rc; +} + +int msm_vidc_session_set_codec(struct msm_vidc_inst *inst) +{ + int rc = 0; + + rc = venus_hfi_session_set_codec(inst); + if (rc) + return rc; + + return 0; +} + +int msm_vidc_session_set_default_header(struct msm_vidc_inst *inst) +{ + int rc = 0; + u32 default_header = false; + + default_header = inst->capabilities[DEFAULT_HEADER].value; + i_vpr_h(inst, "%s: default header: %d", __func__, default_header); + rc = venus_hfi_session_property(inst, + HFI_PROP_DEC_DEFAULT_HEADER, + HFI_HOST_FLAGS_NONE, + get_hfi_port(inst, INPUT_PORT), + HFI_PAYLOAD_U32, + &default_header, + sizeof(u32)); + if (rc) + i_vpr_e(inst, "%s: set property failed\n", __func__); + return rc; +} + +int msm_vidc_session_streamoff(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + int rc = 0; + int count = 0; + struct msm_vidc_core *core; + enum signal_session_response signal_type; + enum msm_vidc_buffer_type buffer_type; + u32 hw_response_timeout_val; + + if (port == INPUT_PORT) { + signal_type = SIGNAL_CMD_STOP_INPUT; + buffer_type = MSM_VIDC_BUF_INPUT; + } else if (port == OUTPUT_PORT) { + signal_type = SIGNAL_CMD_STOP_OUTPUT; + buffer_type = MSM_VIDC_BUF_OUTPUT; + } else { + i_vpr_e(inst, "%s: invalid port: %d\n", __func__, port); + return -EINVAL; + } + + rc = venus_hfi_stop(inst, port); + if (rc) + goto error; + + core = inst->core; + hw_response_timeout_val = core->capabilities[HW_RESPONSE_TIMEOUT].value; + i_vpr_h(inst, "%s: 
wait on port: %d for time: %d ms\n", + __func__, port, hw_response_timeout_val); + inst_unlock(inst, __func__); + rc = wait_for_completion_timeout(&inst->completions[signal_type], + msecs_to_jiffies(hw_response_timeout_val)); + if (!rc) { + i_vpr_e(inst, "%s: session stop timed out for port: %d\n", + __func__, port); + rc = -ETIMEDOUT; + msm_vidc_inst_timeout(inst); + } else { + rc = 0; + } + inst_lock(inst, __func__); + + if (rc) + goto error; + + if (port == INPUT_PORT) { + /* flush input timer list */ + msm_vidc_flush_input_timer(inst); + } + + /* no more queued buffers after streamoff */ + count = msm_vidc_num_buffers(inst, buffer_type, MSM_VIDC_ATTR_QUEUED); + if (!count) { + i_vpr_h(inst, "%s: stop successful on port: %d\n", + __func__, port); + } else { + i_vpr_e(inst, + "%s: %d buffers pending with firmware on port: %d\n", + __func__, count, port); + rc = -EINVAL; + goto error; + } + + rc = msm_vidc_state_change_streamoff(inst, port); + if (rc) + goto error; + + /* flush deferred buffers */ + msm_vidc_flush_buffers(inst, buffer_type); + msm_vidc_flush_read_only_buffers(inst, buffer_type); + return 0; + +error: + msm_vidc_kill_session(inst); + msm_vidc_flush_buffers(inst, buffer_type); + msm_vidc_flush_read_only_buffers(inst, buffer_type); + return rc; +} + +int msm_vidc_session_close(struct msm_vidc_inst *inst) +{ + int rc = 0; + struct msm_vidc_core *core; + bool wait_for_response; + u32 hw_response_timeout_val; + + core = inst->core; + hw_response_timeout_val = core->capabilities[HW_RESPONSE_TIMEOUT].value; + wait_for_response = true; + rc = venus_hfi_session_close(inst); + if (rc) { + i_vpr_e(inst, "%s: session close cmd failed\n", __func__); + wait_for_response = false; + } + + /* we are not supposed to send any more commands after close */ + i_vpr_h(inst, "%s: free session packet data\n", __func__); + vfree(inst->packet); + inst->packet = NULL; + + if (wait_for_response) { + i_vpr_h(inst, "%s: wait on close for time: %d ms\n", + __func__, 
hw_response_timeout_val); + inst_unlock(inst, __func__); + rc = wait_for_completion_timeout(&inst->completions[SIGNAL_CMD_CLOSE], + msecs_to_jiffies(hw_response_timeout_val)); + if (!rc) { + i_vpr_e(inst, "%s: session close timed out\n", __func__); + rc = -ETIMEDOUT; + msm_vidc_inst_timeout(inst); + } else { + rc = 0; + i_vpr_h(inst, "%s: close successful\n", __func__); + } + inst_lock(inst, __func__); + } + + return rc; +} + +int msm_vidc_kill_session(struct msm_vidc_inst *inst) +{ + if (!inst->session_id) { + i_vpr_e(inst, "%s: already killed\n", __func__); + return 0; + } + + i_vpr_e(inst, "%s: killing session\n", __func__); + msm_vidc_session_close(inst); + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + + return 0; +} + +int msm_vidc_get_inst_capability(struct msm_vidc_inst *inst) +{ + int i; + u32 codecs_count = 0; + struct msm_vidc_core *core; + + core = inst->core; + + codecs_count = core->enc_codecs_count + core->dec_codecs_count; + + for (i = 0; i < codecs_count; i++) { + if (core->inst_caps[i].domain == inst->domain && + core->inst_caps[i].codec == inst->codec) { + i_vpr_h(inst, + "%s: copied capabilities with %#x codec, %#x domain\n", + __func__, inst->codec, inst->domain); + memcpy(&inst->capabilities[0], &core->inst_caps[i].cap[0], + (INST_CAP_MAX + 1) * sizeof(struct msm_vidc_inst_cap)); + } + } + + return 0; +} + +int msm_vidc_init_core_caps(struct msm_vidc_core *core) +{ + int rc = 0; + int i, num_platform_caps; + struct msm_platform_core_capability *platform_data; + + platform_data = core->platform->data.core_data; + if (!platform_data) { + d_vpr_e("%s: platform core data is NULL\n", + __func__); + rc = -EINVAL; + goto exit; + } + + num_platform_caps = core->platform->data.core_data_size; + + /* loop over platform caps */ + for (i = 0; i < num_platform_caps && i < CORE_CAP_MAX; i++) { + core->capabilities[platform_data[i].type].type = platform_data[i].type; + core->capabilities[platform_data[i].type].value = platform_data[i].value; + } + 
+exit: + return rc; +} + +static int update_inst_capability(struct msm_platform_inst_capability *in, + struct msm_vidc_inst_capability *capability) +{ + if (!in || !capability) { + d_vpr_e("%s: invalid params %pK %pK\n", + __func__, in, capability); + return -EINVAL; + } + if (in->cap_id >= INST_CAP_MAX) { + d_vpr_e("%s: invalid cap id %d\n", __func__, in->cap_id); + return -EINVAL; + } + + capability->cap[in->cap_id].cap_id = in->cap_id; + capability->cap[in->cap_id].min = in->min; + capability->cap[in->cap_id].max = in->max; + capability->cap[in->cap_id].step_or_mask = in->step_or_mask; + capability->cap[in->cap_id].value = in->value; + capability->cap[in->cap_id].flags = in->flags; + capability->cap[in->cap_id].v4l2_id = in->v4l2_id; + capability->cap[in->cap_id].hfi_id = in->hfi_id; + + return 0; +} + +static int update_inst_cap_dependency(struct msm_platform_inst_cap_dependency *in, + struct msm_vidc_inst_capability *capability) +{ + if (!in || !capability) { + d_vpr_e("%s: invalid params %pK %pK\n", + __func__, in, capability); + return -EINVAL; + } + if (in->cap_id >= INST_CAP_MAX) { + d_vpr_e("%s: invalid cap id %d\n", __func__, in->cap_id); + return -EINVAL; + } + + if (capability->cap[in->cap_id].cap_id != in->cap_id) { + d_vpr_e("%s: invalid cap id %d\n", __func__, in->cap_id); + return -EINVAL; + } + + memcpy(capability->cap[in->cap_id].children, in->children, + sizeof(capability->cap[in->cap_id].children)); + capability->cap[in->cap_id].adjust = in->adjust; + capability->cap[in->cap_id].set = in->set; + + return 0; +} + +int msm_vidc_init_instance_caps(struct msm_vidc_core *core) +{ + int rc = 0; + u8 enc_valid_codecs, dec_valid_codecs; + u8 count_bits, codecs_count = 0; + u8 enc_codecs_count = 0, dec_codecs_count = 0; + int i, j, check_bit; + int num_platform_cap_data, num_platform_cap_dependency_data; + struct msm_platform_inst_capability *platform_cap_data = NULL; + struct msm_platform_inst_cap_dependency *platform_cap_dependency_data = NULL; + + 
platform_cap_data = core->platform->data.inst_cap_data; + if (!platform_cap_data) { + d_vpr_e("%s: platform instance cap data is NULL\n", + __func__); + rc = -EINVAL; + goto error; + } + + platform_cap_dependency_data = core->platform->data.inst_cap_dependency_data; + if (!platform_cap_dependency_data) { + d_vpr_e("%s: platform instance cap dependency data is NULL\n", + __func__); + rc = -EINVAL; + goto error; + } + + enc_valid_codecs = core->capabilities[ENC_CODECS].value; + count_bits = enc_valid_codecs; + COUNT_BITS(count_bits, enc_codecs_count); + core->enc_codecs_count = enc_codecs_count; + + dec_valid_codecs = core->capabilities[DEC_CODECS].value; + count_bits = dec_valid_codecs; + COUNT_BITS(count_bits, dec_codecs_count); + core->dec_codecs_count = dec_codecs_count; + + codecs_count = enc_codecs_count + dec_codecs_count; + core->inst_caps = devm_kzalloc(&core->pdev->dev, + codecs_count * sizeof(struct msm_vidc_inst_capability), + GFP_KERNEL); + if (!core->inst_caps) { + d_vpr_e("%s: failed to alloc memory for instance caps\n", __func__); + rc = -ENOMEM; + goto error; + } + + check_bit = 0; + /* determine codecs for enc domain */ + for (i = 0; i < enc_codecs_count; i++) { + while (check_bit < (sizeof(enc_valid_codecs) * 8)) { + if (enc_valid_codecs & BIT(check_bit)) { + core->inst_caps[i].domain = MSM_VIDC_ENCODER; + core->inst_caps[i].codec = enc_valid_codecs & + BIT(check_bit); + check_bit++; + break; + } + check_bit++; + } + } + + /* reset checkbit to check from 0th bit of decoder codecs set bits*/ + check_bit = 0; + /* determine codecs for dec domain */ + for (; i < codecs_count; i++) { + while (check_bit < (sizeof(dec_valid_codecs) * 8)) { + if (dec_valid_codecs & BIT(check_bit)) { + core->inst_caps[i].domain = MSM_VIDC_DECODER; + core->inst_caps[i].codec = dec_valid_codecs & + BIT(check_bit); + check_bit++; + break; + } + check_bit++; + } + } + + num_platform_cap_data = core->platform->data.inst_cap_data_size; + num_platform_cap_dependency_data = 
core->platform->data.inst_cap_dependency_data_size; + d_vpr_h("%s: num caps %d, dependency %d\n", __func__, + num_platform_cap_data, num_platform_cap_dependency_data); + + /* loop over each platform capability */ + for (i = 0; i < num_platform_cap_data; i++) { + /* select matching core codec and update it */ + for (j = 0; j < codecs_count; j++) { + if ((platform_cap_data[i].domain & + core->inst_caps[j].domain) && + (platform_cap_data[i].codec & + core->inst_caps[j].codec)) { + /* update core capability */ + rc = update_inst_capability(&platform_cap_data[i], + &core->inst_caps[j]); + if (rc) + return rc; + } + } + } + + /* loop over each platform dependency capability */ + for (i = 0; i < num_platform_cap_dependency_data; i++) { + /* select matching core codec and update it */ + for (j = 0; j < codecs_count; j++) { + if ((platform_cap_dependency_data[i].domain & + core->inst_caps[j].domain) && + (platform_cap_dependency_data[i].codec & + core->inst_caps[j].codec)) { + /* update core dependency capability */ + rc = update_inst_cap_dependency(&platform_cap_dependency_data[i], + &core->inst_caps[j]); + if (rc) + return rc; + } + } + } + +error: + return rc; +} + +int msm_vidc_core_deinit_locked(struct msm_vidc_core *core, bool force) +{ + int rc = 0; + struct msm_vidc_inst *inst, *dummy; + enum msm_vidc_allow allow; + + rc = __strict_check(core, __func__); + if (rc) { + d_vpr_e("%s(): core was not locked\n", __func__); + return rc; + } + + if (is_core_state(core, MSM_VIDC_CORE_DEINIT)) + return 0; + + /* print error for state change not allowed case */ + allow = msm_vidc_allow_core_state_change(core, MSM_VIDC_CORE_DEINIT); + if (allow != MSM_VIDC_ALLOW) + d_vpr_e("%s: %s core state change %s -> %s\n", __func__, + allow_name(allow), core_state_name(core->state), + core_state_name(MSM_VIDC_CORE_DEINIT)); + + if (force) { + d_vpr_e("%s(): force deinit core\n", __func__); + } else { + /* in normal case, deinit core only if no session present */ + if 
(!list_empty(&core->instances)) { + d_vpr_h("%s(): skip deinit\n", __func__); + return 0; + } + d_vpr_h("%s(): deinit core\n", __func__); + } + + venus_hfi_core_deinit(core, force); + + /* unlink all sessions from core, if any */ + list_for_each_entry_safe(inst, dummy, &core->instances, list) { + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + list_move_tail(&inst->list, &core->dangling_instances); + } + msm_vidc_change_core_state(core, MSM_VIDC_CORE_DEINIT, __func__); + + return rc; +} + +int msm_vidc_core_deinit(struct msm_vidc_core *core, bool force) +{ + int rc = 0; + + core_lock(core, __func__); + rc = msm_vidc_core_deinit_locked(core, force); + core_unlock(core, __func__); + + return rc; +} + +int msm_vidc_core_init_wait(struct msm_vidc_core *core) +{ + const int interval = 10; + int max_tries, count = 0, rc = 0; + + core_lock(core, __func__); + if (is_core_state(core, MSM_VIDC_CORE_INIT)) { + rc = 0; + goto unlock; + } else if (is_core_state(core, MSM_VIDC_CORE_DEINIT) || + is_core_state(core, MSM_VIDC_CORE_ERROR)) { + d_vpr_e("%s: invalid core state %s\n", + __func__, core_state_name(core->state)); + rc = -EINVAL; + goto unlock; + } + + d_vpr_h("%s(): waiting for state change\n", __func__); + max_tries = core->capabilities[HW_RESPONSE_TIMEOUT].value / interval; + while (count < max_tries) { + if (core->state != MSM_VIDC_CORE_INIT_WAIT) + break; + + core_unlock(core, __func__); + msleep_interruptible(interval); + core_lock(core, __func__); + count++; + } + d_vpr_h("%s: state %s, interval %u, count %u, max_tries %u\n", __func__, + core_state_name(core->state), interval, count, max_tries); + + if (is_core_state(core, MSM_VIDC_CORE_INIT)) { + d_vpr_h("%s: sys init successful\n", __func__); + rc = 0; + goto unlock; + } else if (is_core_state(core, MSM_VIDC_CORE_INIT_WAIT)) { + d_vpr_h("%s: sys init wait timed out. 
state %s\n", + __func__, core_state_name(core->state)); + msm_vidc_change_core_state(core, MSM_VIDC_CORE_ERROR, __func__); + /* mark video hw unresponsive */ + msm_vidc_change_core_sub_state(core, 0, CORE_SUBSTATE_VIDEO_UNRESPONSIVE, + __func__); + /* core deinit to handle error */ + msm_vidc_core_deinit_locked(core, true); + rc = -EINVAL; + goto unlock; + } else { + d_vpr_e("%s: invalid core state %s\n", + __func__, core_state_name(core->state)); + rc = -EINVAL; + goto unlock; + } +unlock: + core_unlock(core, __func__); + return rc; +} + +int msm_vidc_core_init(struct msm_vidc_core *core) +{ + enum msm_vidc_allow allow; + int rc = 0; + + core_lock(core, __func__); + if (core_in_valid_state(core)) { + goto unlock; + } else if (is_core_state(core, MSM_VIDC_CORE_ERROR)) { + d_vpr_e("%s: invalid core state %s\n", + __func__, core_state_name(core->state)); + rc = -EINVAL; + goto unlock; + } + + /* print error for state change not allowed case */ + allow = msm_vidc_allow_core_state_change(core, MSM_VIDC_CORE_INIT_WAIT); + if (allow != MSM_VIDC_ALLOW) + d_vpr_e("%s: %s core state change %s -> %s\n", __func__, + allow_name(allow), core_state_name(core->state), + core_state_name(MSM_VIDC_CORE_INIT_WAIT)); + + msm_vidc_change_core_state(core, MSM_VIDC_CORE_INIT_WAIT, __func__); + /* clear PM suspend from core sub_state */ + msm_vidc_change_core_sub_state(core, CORE_SUBSTATE_PM_SUSPEND, 0, __func__); + msm_vidc_change_core_sub_state(core, CORE_SUBSTATE_PAGE_FAULT, 0, __func__); + + rc = venus_hfi_core_init(core); + if (rc) { + msm_vidc_change_core_state(core, MSM_VIDC_CORE_ERROR, __func__); + d_vpr_e("%s: core init failed\n", __func__); + /* do core deinit to handle error */ + msm_vidc_core_deinit_locked(core, true); + goto unlock; + } + +unlock: + core_unlock(core, __func__); + return rc; +} + +int msm_vidc_inst_timeout(struct msm_vidc_inst *inst) +{ + int rc = 0; + struct msm_vidc_core *core; + struct msm_vidc_inst *instance; + bool found; + + core = inst->core; + + 
core_lock(core, __func__); + /* + * All sessions will be removed from core list in core deinit, + * do not deinit core from a session which is not present in + * core list. + */ + found = false; + list_for_each_entry(instance, &core->instances, list) { + if (instance == inst) { + found = true; + break; + } + } + if (!found) { + i_vpr_e(inst, + "%s: session not available in core list\n", __func__); + rc = -EINVAL; + goto unlock; + } + /* mark video hw unresponsive */ + msm_vidc_change_core_state(core, MSM_VIDC_CORE_ERROR, __func__); + msm_vidc_change_core_sub_state(core, 0, CORE_SUBSTATE_VIDEO_UNRESPONSIVE, + __func__); + + /* call core deinit for a valid instance timeout case */ + msm_vidc_core_deinit_locked(core, true); + +unlock: + core_unlock(core, __func__); + + return rc; +} + +int msm_vidc_print_buffer_info(struct msm_vidc_inst *inst) +{ + struct msm_vidc_buffers *buffers; + int i; + + /* Print buffer details */ + for (i = 1; i < ARRAY_SIZE(buf_type_name_arr); i++) { + buffers = msm_vidc_get_buffers(inst, i, __func__); + if (!buffers) + continue; + + i_vpr_h(inst, + "buf: type: %15s, min %2d, extra %2d, actual %2d, size %9u, reuse %d\n", + buf_name(i), buffers->min_count, + buffers->extra_count, buffers->actual_count, + buffers->size, buffers->reuse); + } + + return 0; +} + +int msm_vidc_print_inst_info(struct msm_vidc_inst *inst) +{ + struct msm_vidc_buffers *buffers; + struct msm_vidc_buffer *buf; + enum msm_vidc_port_type port; + bool is_decode; + u32 bit_depth, bit_rate, frame_rate, width, height; + struct dma_buf *dbuf; + struct inode *f_inode; + unsigned long inode_num = 0; + long ref_count = -1; + int i = 0; + + is_decode = is_decode_session(inst); + port = is_decode ? 
INPUT_PORT : OUTPUT_PORT; + width = inst->fmts[port].fmt.pix_mp.width; + height = inst->fmts[port].fmt.pix_mp.height; + bit_depth = inst->capabilities[BIT_DEPTH].value & 0xFFFF; + bit_rate = inst->capabilities[BIT_RATE].value; + frame_rate = inst->capabilities[FRAME_RATE].value >> 16; + + i_vpr_e(inst, "%s session, HxW: %d x %d, fps: %d, bitrate: %d, bit-depth: %d\n", + is_decode ? "Decode" : "Encode", + height, width, + frame_rate, bit_rate, bit_depth); + + /* Print buffer details */ + for (i = 1; i < ARRAY_SIZE(buf_type_name_arr); i++) { + buffers = msm_vidc_get_buffers(inst, i, __func__); + if (!buffers) + continue; + + i_vpr_e(inst, "count: type: %11s, min: %2d, extra: %2d, actual: %2d\n", + buf_name(i), buffers->min_count, + buffers->extra_count, buffers->actual_count); + + list_for_each_entry(buf, &buffers->list, list) { + if (!buf->dmabuf) + continue; + dbuf = (struct dma_buf *)buf->dmabuf; + if (dbuf && dbuf->file) { + f_inode = file_inode(dbuf->file); + if (f_inode) { + inode_num = f_inode->i_ino; + ref_count = file_count(dbuf->file); + } + } + i_vpr_e(inst, + "buf: type: %11s, index: %2d, fd: %4d, size: %9u, off: %8u, filled: %9u, daddr: %#llx, inode: %8lu, ref: %2ld, flags: %8x, ts: %16lld, attr: %8x\n", + buf_name(i), buf->index, buf->fd, buf->buffer_size, + buf->data_offset, buf->data_size, buf->device_addr, + inode_num, ref_count, buf->flags, buf->timestamp, buf->attr); + } + } + + return 0; +} + +void msm_vidc_print_core_info(struct msm_vidc_core *core) +{ + struct msm_vidc_inst *inst = NULL; + struct msm_vidc_inst *instances[MAX_SUPPORTED_INSTANCES]; + s32 num_instances = 0; + + core_lock(core, __func__); + list_for_each_entry(inst, &core->instances, list) + instances[num_instances++] = inst; + core_unlock(core, __func__); + + while (num_instances--) { + inst = instances[num_instances]; + inst = get_inst_ref(core, inst); + if (!inst) + continue; + inst_lock(inst, __func__); + msm_vidc_print_inst_info(inst); + inst_unlock(inst, __func__); + 
put_inst(inst); + } +} + +int msm_vidc_smmu_fault_handler(struct iommu_domain *domain, + struct device *dev, unsigned long iova, + int flags, void *data) +{ + struct msm_vidc_core *core = data; + + if (is_core_sub_state(core, CORE_SUBSTATE_PAGE_FAULT)) { + if (core->capabilities[NON_FATAL_FAULTS].value) { + dprintk_ratelimit(VIDC_ERR, "err ", + "%s: non-fatal pagefault address: %lx\n", + __func__, iova); + return 0; + } + } + + d_vpr_e(FMT_STRING_FAULT_HANDLER, __func__, iova); + + /* mark smmu fault as handled */ + core_lock(core, __func__); + msm_vidc_change_core_sub_state(core, 0, CORE_SUBSTATE_PAGE_FAULT, __func__); + core_unlock(core, __func__); + + msm_vidc_print_core_info(core); + /* + * Return -ENOSYS to elicit the default behaviour of smmu driver. + * If we return -ENOSYS, then smmu driver assumes page fault handler + * is not installed and prints a list of useful debug information like + * FAR, SID etc. This information is not printed if we return 0. + */ + return -ENOSYS; +} + +void msm_vidc_fw_unload_handler(struct work_struct *work) +{ + struct msm_vidc_core *core = NULL; + int rc = 0; + + core = container_of(work, struct msm_vidc_core, fw_unload_work.work); + + d_vpr_h("%s: deinitializing video core\n", __func__); + rc = msm_vidc_core_deinit(core, false); + if (rc) + d_vpr_e("%s: Failed to deinit core\n", __func__); +} + +void msm_vidc_batch_handler(struct work_struct *work) +{ + struct msm_vidc_inst *inst; + struct msm_vidc_core *core; + int rc = 0; + + inst = container_of(work, struct msm_vidc_inst, decode_batch.work.work); + inst = get_inst_ref(g_core, inst); + if (!inst || !inst->core) { + d_vpr_e("%s: invalid params\n", __func__); + return; + } + + core = inst->core; + inst_lock(inst, __func__); + if (is_session_error(inst)) { + i_vpr_e(inst, "%s: failed. 
Session error\n", __func__); + goto exit; + } + + if (is_core_sub_state(core, CORE_SUBSTATE_PM_SUSPEND)) { + i_vpr_h(inst, "%s: device in pm suspend state\n", __func__); + goto exit; + } + + if (is_state(inst, MSM_VIDC_OPEN) || + is_state(inst, MSM_VIDC_INPUT_STREAMING)) { + i_vpr_e(inst, "%s: not allowed in state: %s\n", __func__, + state_name(inst->state)); + goto exit; + } + + i_vpr_h(inst, "%s: queue pending batch buffers\n", __func__); + rc = msm_vidc_queue_deferred_buffers(inst, MSM_VIDC_BUF_OUTPUT); + if (rc) { + i_vpr_e(inst, "%s: batch qbufs failed\n", __func__); + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + } + +exit: + inst_unlock(inst, __func__); + put_inst(inst); +} + +int msm_vidc_flush_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type type) +{ + int rc = 0; + struct msm_vidc_core *core; + struct msm_vidc_buffers *buffers; + struct msm_vidc_buffer *buf, *dummy; + enum msm_vidc_buffer_type buffer_type[1]; + int i; + + core = inst->core; + + if (is_input_buffer(type)) { + buffer_type[0] = MSM_VIDC_BUF_INPUT; + } else if (is_output_buffer(type)) { + buffer_type[0] = MSM_VIDC_BUF_OUTPUT; + } else { + i_vpr_h(inst, "%s: invalid buffer type %d\n", + __func__, type); + return -EINVAL; + } + + for (i = 0; i < ARRAY_SIZE(buffer_type); i++) { + buffers = msm_vidc_get_buffers(inst, buffer_type[i], __func__); + if (!buffers) + return -EINVAL; + + list_for_each_entry_safe(buf, dummy, &buffers->list, list) { + if (buf->attr & MSM_VIDC_ATTR_QUEUED || + buf->attr & MSM_VIDC_ATTR_DEFERRED) { + print_vidc_buffer(VIDC_HIGH, "high", "flushing buffer", inst, buf); + if (!(buf->attr & MSM_VIDC_ATTR_BUFFER_DONE)) { + buf->attr |= MSM_VIDC_ATTR_BUFFER_DONE; + buf->data_size = 0; + if (buf->dbuf_get) { + call_mem_op(core, dma_buf_put, inst, buf->dmabuf); + buf->dbuf_get = 0; + } + msm_vidc_vb2_buffer_done(inst, buf); + } + } + } + } + + return rc; +} + +int msm_vidc_flush_read_only_buffers(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type 
type) +{ + int rc = 0; + struct msm_vidc_buffer *ro_buf, *dummy; + struct msm_vidc_core *core; + + core = inst->core; + + if (!is_decode_session(inst) || !is_output_buffer(type)) + return 0; + + list_for_each_entry_safe(ro_buf, dummy, &inst->buffers.read_only.list, list) { + if (ro_buf->attr & MSM_VIDC_ATTR_READ_ONLY) + continue; + print_vidc_buffer(VIDC_ERR, "high", "flush ro buf", inst, ro_buf); + if (ro_buf->attach && ro_buf->sg_table) + call_mem_op(core, dma_buf_unmap_attachment, core, + ro_buf->attach, ro_buf->sg_table); + if (ro_buf->attach && ro_buf->dmabuf) + call_mem_op(core, dma_buf_detach, core, + ro_buf->dmabuf, ro_buf->attach); + if (ro_buf->dbuf_get) + call_mem_op(core, dma_buf_put, inst, ro_buf->dmabuf); + ro_buf->attach = NULL; + ro_buf->sg_table = NULL; + ro_buf->dmabuf = NULL; + ro_buf->dbuf_get = 0; + ro_buf->device_addr = 0x0; + list_del_init(&ro_buf->list); + msm_vidc_pool_free(inst, ro_buf); + } + + return rc; +} + +void msm_vidc_destroy_buffers(struct msm_vidc_inst *inst) +{ + struct msm_vidc_buffers *buffers; + struct msm_vidc_buffer *buf, *dummy; + struct msm_vidc_timestamp *ts, *dummy_ts; + struct msm_memory_dmabuf *dbuf, *dummy_dbuf; + struct msm_vidc_input_timer *timer, *dummy_timer; + struct msm_vidc_buffer_stats *stats, *dummy_stats; + struct msm_vidc_inst_cap_entry *entry, *dummy_entry; + struct msm_vidc_input_cr_data *cr, *dummy_cr; + struct msm_vidc_core *core; + + static const enum msm_vidc_buffer_type ext_buf_types[] = { + MSM_VIDC_BUF_INPUT, + MSM_VIDC_BUF_OUTPUT, + }; + static const enum msm_vidc_buffer_type internal_buf_types[] = { + MSM_VIDC_BUF_BIN, + MSM_VIDC_BUF_ARP, + MSM_VIDC_BUF_COMV, + MSM_VIDC_BUF_NON_COMV, + MSM_VIDC_BUF_LINE, + MSM_VIDC_BUF_DPB, + MSM_VIDC_BUF_PERSIST, + MSM_VIDC_BUF_VPSS, + }; + int i; + + core = inst->core; + + for (i = 0; i < ARRAY_SIZE(internal_buf_types); i++) { + buffers = msm_vidc_get_buffers(inst, internal_buf_types[i], __func__); + if (!buffers) + continue; + list_for_each_entry_safe(buf, 
dummy, &buffers->list, list) { + i_vpr_h(inst, + "destroying internal buffer: type %d idx %d fd %d addr %#llx size %d\n", + buf->type, buf->index, buf->fd, buf->device_addr, buf->buffer_size); + msm_vidc_destroy_internal_buffer(inst, buf); + } + } + + /* + * read_only list does not take dma ref_count using dma_buf_get(). + * dma_buf ptr will be obsolete when its ref_count reaches zero. + * Hence print the dma_buf info before releasing the ref count. + */ + list_for_each_entry_safe(buf, dummy, &inst->buffers.read_only.list, list) { + print_vidc_buffer(VIDC_ERR, "err ", "destroying ro buf", inst, buf); + if (buf->attach && buf->sg_table) + call_mem_op(core, dma_buf_unmap_attachment, core, + buf->attach, buf->sg_table); + if (buf->attach && buf->dmabuf) + call_mem_op(core, dma_buf_detach, core, buf->dmabuf, buf->attach); + if (buf->dbuf_get) + call_mem_op(core, dma_buf_put, inst, buf->dmabuf); + list_del_init(&buf->list); + msm_vidc_pool_free(inst, buf); + } + + for (i = 0; i < ARRAY_SIZE(ext_buf_types); i++) { + buffers = msm_vidc_get_buffers(inst, ext_buf_types[i], __func__); + if (!buffers) + continue; + + list_for_each_entry_safe(buf, dummy, &buffers->list, list) { + if (buf->dbuf_get || buf->attach || buf->sg_table) + print_vidc_buffer(VIDC_ERR, "err ", "destroying: put dmabuf", + inst, buf); + if (buf->attach && buf->sg_table) + call_mem_op(core, dma_buf_unmap_attachment, core, + buf->attach, buf->sg_table); + if (buf->attach && buf->dmabuf) + call_mem_op(core, dma_buf_detach, core, buf->dmabuf, buf->attach); + if (buf->dbuf_get) + call_mem_op(core, dma_buf_put, inst, buf->dmabuf); + list_del_init(&buf->list); + msm_vidc_pool_free(inst, buf); + } + } + + list_for_each_entry_safe(ts, dummy_ts, &inst->timestamps.list, sort.list) { + i_vpr_e(inst, "%s: removing ts: val %lld, rank %lld\n", + __func__, ts->sort.val, ts->rank); + list_del(&ts->sort.list); + msm_vidc_pool_free(inst, ts); + } + + list_for_each_entry_safe(timer, dummy_timer, &inst->input_timer_list, list) 
{ + i_vpr_e(inst, "%s: removing input_timer %lld\n", + __func__, timer->time_us); + list_del(&timer->list); + msm_vidc_pool_free(inst, timer); + } + + list_for_each_entry_safe(stats, dummy_stats, &inst->buffer_stats_list, list) { + list_del(&stats->list); + msm_vidc_pool_free(inst, stats); + } + + list_for_each_entry_safe(dbuf, dummy_dbuf, &inst->dmabuf_tracker, list) { + struct dma_buf *dmabuf; + struct inode *f_inode; + unsigned long inode_num = 0; + + dmabuf = dbuf->dmabuf; + if (dmabuf && dmabuf->file) { + f_inode = file_inode(dmabuf->file); + if (f_inode) + inode_num = f_inode->i_ino; + } + i_vpr_e(inst, "%s: removing dma_buf %p, inode %lu, refcount %u\n", + __func__, dbuf->dmabuf, inode_num, dbuf->refcount); + call_mem_op(core, dma_buf_put_completely, inst, dbuf); + } + + list_for_each_entry_safe(entry, dummy_entry, &inst->firmware_list, list) { + i_vpr_e(inst, "%s: fw list: %s\n", __func__, cap_name(entry->cap_id)); + list_del(&entry->list); + vfree(entry); + } + + list_for_each_entry_safe(entry, dummy_entry, &inst->children_list, list) { + i_vpr_e(inst, "%s: child list: %s\n", __func__, cap_name(entry->cap_id)); + list_del(&entry->list); + vfree(entry); + } + + list_for_each_entry_safe(entry, dummy_entry, &inst->caps_list, list) { + list_del(&entry->list); + vfree(entry); + } + + list_for_each_entry_safe(cr, dummy_cr, &inst->enc_input_crs, list) { + list_del(&cr->list); + vfree(cr); + } + + /* destroy buffers from pool */ + msm_vidc_pools_deinit(inst); +} + +static void msm_vidc_close_helper(struct kref *kref) +{ + struct msm_vidc_inst *inst = container_of(kref, + struct msm_vidc_inst, kref); + struct msm_vidc_core *core; + + core = inst->core; + + msm_vidc_debugfs_deinit_inst(inst); + if (is_decode_session(inst)) + msm_vdec_inst_deinit(inst); + else if (is_encode_session(inst)) + msm_venc_inst_deinit(inst); + /* + * Lock is not necessary here, but in force close case, + * vb2q_deinit() will attempt to call stop_streaming() + * vb2 callback, i.e. it is 
expecting inst lock to be taken. + * So acquire lock before calling vb2q_deinit. + */ + inst_lock(inst, __func__); + msm_vidc_vb2_queue_deinit(inst); + msm_vidc_v4l2_fh_deinit(inst); + inst_unlock(inst, __func__); + destroy_workqueue(inst->workq); + msm_vidc_destroy_buffers(inst); + msm_vidc_remove_session(inst); + msm_vidc_remove_dangling_session(inst); + mutex_destroy(&inst->client_lock); + mutex_destroy(&inst->ctx_q_lock); + mutex_destroy(&inst->lock); + vfree(inst); +} + +struct msm_vidc_inst *get_inst_ref(struct msm_vidc_core *core, + struct msm_vidc_inst *instance) +{ + struct msm_vidc_inst *inst = NULL; + bool matches = false; + + mutex_lock(&core->lock); + list_for_each_entry(inst, &core->instances, list) { + if (inst == instance) { + matches = true; + break; + } + } + inst = (matches && kref_get_unless_zero(&inst->kref)) ? inst : NULL; + mutex_unlock(&core->lock); + return inst; +} + +struct msm_vidc_inst *get_inst(struct msm_vidc_core *core, + u32 session_id) +{ + struct msm_vidc_inst *inst = NULL; + bool matches = false; + + mutex_lock(&core->lock); + list_for_each_entry(inst, &core->instances, list) { + if (inst->session_id == session_id) { + matches = true; + break; + } + } + inst = (matches && kref_get_unless_zero(&inst->kref)) ? 
inst : NULL; + mutex_unlock(&core->lock); + return inst; +} + +void put_inst(struct msm_vidc_inst *inst) +{ + kref_put(&inst->kref, msm_vidc_close_helper); +} + +void core_lock(struct msm_vidc_core *core, const char *function) +{ + mutex_lock(&core->lock); +} + +void core_unlock(struct msm_vidc_core *core, const char *function) +{ + mutex_unlock(&core->lock); +} + +void inst_lock(struct msm_vidc_inst *inst, const char *function) +{ + mutex_lock(&inst->lock); +} + +void inst_unlock(struct msm_vidc_inst *inst, const char *function) +{ + mutex_unlock(&inst->lock); +} + +void client_lock(struct msm_vidc_inst *inst, const char *function) +{ + mutex_lock(&inst->client_lock); +} + +void client_unlock(struct msm_vidc_inst *inst, const char *function) +{ + mutex_unlock(&inst->client_lock); +} + +int msm_vidc_update_bitstream_buffer_size(struct msm_vidc_inst *inst) +{ + struct msm_vidc_core *core; + struct v4l2_format *fmt; + + core = inst->core; + + if (is_decode_session(inst)) { + fmt = &inst->fmts[INPUT_PORT]; + fmt->fmt.pix_mp.plane_fmt[0].sizeimage = call_session_op(core, buffer_size, + inst, MSM_VIDC_BUF_INPUT); + } + + return 0; +} + +int msm_vidc_update_buffer_count(struct msm_vidc_inst *inst, u32 port) +{ + struct msm_vidc_core *core; + + core = inst->core; + + switch (port) { + case INPUT_PORT: + inst->buffers.input.min_count = call_session_op(core, min_count, + inst, MSM_VIDC_BUF_INPUT); + inst->buffers.input.extra_count = call_session_op(core, extra_count, + inst, MSM_VIDC_BUF_INPUT); + if (inst->buffers.input.actual_count < + inst->buffers.input.min_count + + inst->buffers.input.extra_count) { + inst->buffers.input.actual_count = + inst->buffers.input.min_count + + inst->buffers.input.extra_count; + } + + i_vpr_h(inst, "%s: type: INPUT, count: min %u, extra %u, actual %u\n", __func__, + inst->buffers.input.min_count, + inst->buffers.input.extra_count, + inst->buffers.input.actual_count); + break; + case OUTPUT_PORT: + if (!inst->bufq[INPUT_PORT].vb2q->streaming) 
+ inst->buffers.output.min_count = call_session_op(core, min_count, + inst, MSM_VIDC_BUF_OUTPUT); + inst->buffers.output.extra_count = call_session_op(core, extra_count, + inst, MSM_VIDC_BUF_OUTPUT); + if (inst->buffers.output.actual_count < + inst->buffers.output.min_count + + inst->buffers.output.extra_count) { + inst->buffers.output.actual_count = + inst->buffers.output.min_count + + inst->buffers.output.extra_count; + } + + i_vpr_h(inst, "%s: type: OUTPUT, count: min %u, extra %u, actual %u\n", __func__, + inst->buffers.output.min_count, + inst->buffers.output.extra_count, + inst->buffers.output.actual_count); + break; + default: + d_vpr_e("%s unknown port %d\n", __func__, port); + return -EINVAL; + } + + return 0; +} + +void msm_vidc_schedule_core_deinit(struct msm_vidc_core *core) +{ + if (!core->capabilities[FW_UNLOAD].value) + return; + + cancel_delayed_work(&core->fw_unload_work); + + schedule_delayed_work(&core->fw_unload_work, + msecs_to_jiffies(core->capabilities[FW_UNLOAD_DELAY].value)); + + d_vpr_h("firmware unload delayed by %u ms\n", + core->capabilities[FW_UNLOAD_DELAY].value); +} + +static const char *get_codec_str(enum msm_vidc_codec_type type) +{ + switch (type) { + case MSM_VIDC_H264: return " avc"; + case MSM_VIDC_HEVC: return "hevc"; + case MSM_VIDC_VP9: return " vp9"; + } + + return "...."; +} + +static const char *get_domain_str(enum msm_vidc_domain_type type) +{ + switch (type) { + case MSM_VIDC_ENCODER: return "E"; + case MSM_VIDC_DECODER: return "D"; + } + + return "."; +} + +int msm_vidc_update_debug_str(struct msm_vidc_inst *inst) +{ + u32 sid; + const char *codec; + const char *domain; + + sid = inst->session_id; + codec = get_codec_str(inst->codec); + domain = get_domain_str(inst->domain); + + snprintf(inst->debug_str, sizeof(inst->debug_str), "%08x: %s%s", + sid, codec, domain); + + d_vpr_h("%s: sid: %08x, codec: %s, domain: %s, final: %s\n", + __func__, sid, codec, domain, inst->debug_str); + + return 0; +} + +static int 
msm_vidc_print_running_instances_info(struct msm_vidc_core *core) +{ + struct msm_vidc_inst *inst; + u32 height, width, fps, orate; + struct msm_vidc_inst_cap *cap; + struct v4l2_format *out_f; + struct v4l2_format *inp_f; + char prop[64]; + + d_vpr_e("Print all running instances\n"); + d_vpr_e("%6s | %6s | %5s | %5s | %5s\n", "width", "height", "fps", "orate", "prop"); + + core_lock(core, __func__); + list_for_each_entry(inst, &core->instances, list) { + out_f = &inst->fmts[OUTPUT_PORT]; + inp_f = &inst->fmts[INPUT_PORT]; + cap = &inst->capabilities[0]; + memset(&prop, 0, sizeof(prop)); + + width = max(out_f->fmt.pix_mp.width, inp_f->fmt.pix_mp.width); + height = max(out_f->fmt.pix_mp.height, inp_f->fmt.pix_mp.height); + fps = cap[FRAME_RATE].value >> 16; + orate = cap[OPERATING_RATE].value >> 16; + + strlcat(prop, "RT ", sizeof(prop)); + + i_vpr_e(inst, "%6u | %6u | %5u | %5u | %5s\n", width, height, fps, orate, prop); + } + core_unlock(core, __func__); + + return 0; +} + +int msm_vidc_get_inst_load(struct msm_vidc_inst *inst) +{ + u32 mbpf, fps; + u32 frame_rate, operating_rate, input_rate, timestamp_rate; + + mbpf = msm_vidc_get_mbs_per_frame(inst); + frame_rate = msm_vidc_get_frame_rate(inst); + operating_rate = msm_vidc_get_operating_rate(inst); + fps = max(frame_rate, operating_rate); + + if (is_decode_session(inst)) { + input_rate = msm_vidc_get_input_rate(inst); + timestamp_rate = msm_vidc_get_timestamp_rate(inst); + fps = max(fps, input_rate); + fps = max(fps, timestamp_rate); + } + + return mbpf * fps; +} + +int msm_vidc_check_core_mbps(struct msm_vidc_inst *inst) +{ + u32 mbps = 0, total_mbps = 0; + struct msm_vidc_core *core; + struct msm_vidc_inst *instance; + + core = inst->core; + + core_lock(core, __func__); + list_for_each_entry(instance, &core->instances, list) { + /* ignore invalid/error session */ + if (is_session_error(instance)) + continue; + + mbps = msm_vidc_get_inst_load(instance); + total_mbps += mbps; + } + core_unlock(core, __func__); + 
+ /* reject if cumulative mbps of all sessions is greater than MAX_MBPS */ + if (total_mbps > core->capabilities[MAX_MBPS].value) { + i_vpr_e(inst, "%s: Hardware overloaded. needed %u, max %u", __func__, + total_mbps, core->capabilities[MAX_MBPS].value); + return -ENOMEM; + } + + i_vpr_h(inst, "%s: HW load needed %u is within max %u", __func__, + total_mbps, core->capabilities[MAX_MBPS].value); + + return 0; +} + +int msm_vidc_check_core_mbpf(struct msm_vidc_inst *inst) +{ + u32 video_mbpf = 0; + struct msm_vidc_core *core; + struct msm_vidc_inst *instance; + + core = inst->core; + + core_lock(core, __func__); + list_for_each_entry(instance, &core->instances, list) { + video_mbpf += msm_vidc_get_mbs_per_frame(instance); + } + core_unlock(core, __func__); + + if (video_mbpf > core->capabilities[MAX_MBPF].value) { + i_vpr_e(inst, "%s: video overloaded. needed %u, max %u", __func__, + video_mbpf, core->capabilities[MAX_MBPF].value); + return -ENOMEM; + } + + return 0; +} + +static int msm_vidc_check_inst_mbpf(struct msm_vidc_inst *inst) +{ + u32 mbpf = 0, max_mbpf = 0; + struct msm_vidc_inst_cap *cap; + + cap = &inst->capabilities[0]; + + if (is_encode_session(inst) && cap[LOSSLESS].value) + max_mbpf = cap[LOSSLESS_MBPF].max; + else + max_mbpf = cap[MBPF].max; + + /* check current session mbpf */ + mbpf = msm_vidc_get_mbs_per_frame(inst); + if (mbpf > max_mbpf) { + i_vpr_e(inst, "%s: session overloaded. 
needed %u, max %u", __func__, + mbpf, max_mbpf); + return -ENOMEM; + } + + return 0; +} + +u32 msm_vidc_get_max_bitrate(struct msm_vidc_inst *inst) +{ + u32 max_bitrate = 0x7fffffff; + + if (inst->capabilities[ALL_INTRA].value) + max_bitrate = min(max_bitrate, + (u32)inst->capabilities[ALLINTRA_MAX_BITRATE].max); + + if (inst->codec == MSM_VIDC_HEVC) { + max_bitrate = min_t(u32, max_bitrate, + inst->capabilities[CABAC_MAX_BITRATE].max); + } else if (inst->codec == MSM_VIDC_H264) { + if (inst->capabilities[ENTROPY_MODE].value == + V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CAVLC) + max_bitrate = min(max_bitrate, + (u32)inst->capabilities[CAVLC_MAX_BITRATE].max); + else + max_bitrate = min(max_bitrate, + (u32)inst->capabilities[CABAC_MAX_BITRATE].max); + } + if (max_bitrate == 0x7fffffff || !max_bitrate) + max_bitrate = min(max_bitrate, (u32)inst->capabilities[BIT_RATE].max); + + return max_bitrate; +} + +static int msm_vidc_check_resolution_supported(struct msm_vidc_inst *inst) +{ + struct msm_vidc_inst_cap *cap; + u32 width = 0, height = 0, min_width, min_height, + max_width, max_height; + bool is_interlaced = false; + + cap = &inst->capabilities[0]; + + if (is_decode_session(inst)) { + width = inst->fmts[INPUT_PORT].fmt.pix_mp.width; + height = inst->fmts[INPUT_PORT].fmt.pix_mp.height; + } else if (is_encode_session(inst)) { + width = inst->crop.width; + height = inst->crop.height; + } + + if (is_encode_session(inst) && cap[LOSSLESS].value) { + min_width = cap[LOSSLESS_FRAME_WIDTH].min; + max_width = cap[LOSSLESS_FRAME_WIDTH].max; + min_height = cap[LOSSLESS_FRAME_HEIGHT].min; + max_height = cap[LOSSLESS_FRAME_HEIGHT].max; + } else { + min_width = cap[FRAME_WIDTH].min; + max_width = cap[FRAME_WIDTH].max; + min_height = cap[FRAME_HEIGHT].min; + max_height = cap[FRAME_HEIGHT].max; + } + + /* check if input width and height is in supported range */ + if (is_decode_session(inst) || is_encode_session(inst)) { + if (!in_range(width, min_width, max_width) || + !in_range(height, 
min_height, max_height)) { + i_vpr_e(inst, + "%s: unsupported input wxh [%u x %u], allowed range: [%u x %u] to [%u x %u]\n", + __func__, width, height, min_width, + min_height, max_width, max_height); + return -EINVAL; + } + } + + /* check interlace supported resolution */ + is_interlaced = cap[CODED_FRAMES].value == CODED_FRAMES_INTERLACE; + if (is_interlaced && (width > INTERLACE_WIDTH_MAX || height > INTERLACE_HEIGHT_MAX || + NUM_MBS_PER_FRAME(width, height) > INTERLACE_MB_PER_FRAME_MAX)) { + i_vpr_e(inst, "%s: unsupported interlace wxh [%u x %u], max [%u x %u]\n", + __func__, width, height, INTERLACE_WIDTH_MAX, INTERLACE_HEIGHT_MAX); + return -EINVAL; + } + + return 0; +} + +static int msm_vidc_check_max_sessions(struct msm_vidc_inst *inst) +{ + u32 width = 0, height = 0; + u32 num_1080p_sessions = 0, num_4k_sessions = 0, num_8k_sessions = 0; + struct msm_vidc_inst *i; + struct msm_vidc_core *core; + + core = inst->core; + + core_lock(core, __func__); + list_for_each_entry(i, &core->instances, list) { + if (is_decode_session(i)) { + width = i->fmts[INPUT_PORT].fmt.pix_mp.width; + height = i->fmts[INPUT_PORT].fmt.pix_mp.height; + } else if (is_encode_session(i)) { + width = i->crop.width; + height = i->crop.height; + } + + /* + * one 8k session equals to 64 720p sessions in reality. + * So for one 8k session the number of 720p sessions will + * exceed max supported session count(16), hence one 8k session + * will be rejected as well. + * Therefore, treat one 8k session equal to two 4k sessions and + * one 4k session equal to two 1080p sessions and + * one 1080p session equal to two 720p sessions. This equation + * will make one 8k session equal to eight 720p sessions + * which looks good. 
+ * + * Do not treat resolutions above 4k as 8k session instead + * treat (4K + half 4k) above as 8k session + */ + if (res_is_greater_than(width, height, 4096 + (4096 >> 1), 2176 + (2176 >> 1))) { + num_8k_sessions += 1; + num_4k_sessions += 2; + num_1080p_sessions += 4; + } else if (res_is_greater_than(width, height, 1920 + (1920 >> 1), + 1088 + (1088 >> 1))) { + num_4k_sessions += 1; + num_1080p_sessions += 2; + } else if (res_is_greater_than(width, height, 1280 + (1280 >> 1), + 736 + (736 >> 1))) { + num_1080p_sessions += 1; + } + } + core_unlock(core, __func__); + + if (num_8k_sessions > core->capabilities[MAX_NUM_8K_SESSIONS].value) { + i_vpr_e(inst, "%s: total 8k sessions %d, exceeded max limit %d\n", + __func__, num_8k_sessions, + core->capabilities[MAX_NUM_8K_SESSIONS].value); + return -ENOMEM; + } + + if (num_4k_sessions > core->capabilities[MAX_NUM_4K_SESSIONS].value) { + i_vpr_e(inst, "%s: total 4K sessions %d, exceeded max limit %d\n", + __func__, num_4k_sessions, + core->capabilities[MAX_NUM_4K_SESSIONS].value); + return -ENOMEM; + } + + if (num_1080p_sessions > core->capabilities[MAX_NUM_1080P_SESSIONS].value) { + i_vpr_e(inst, "%s: total 1080p sessions %d, exceeded max limit %d\n", + __func__, num_1080p_sessions, + core->capabilities[MAX_NUM_1080P_SESSIONS].value); + return -ENOMEM; + } + + return 0; +} + +int msm_vidc_check_session_supported(struct msm_vidc_inst *inst) +{ + int rc = 0; + + rc = msm_vidc_check_core_mbps(inst); + if (rc) + goto exit; + + rc = msm_vidc_check_core_mbpf(inst); + if (rc) + goto exit; + + rc = msm_vidc_check_inst_mbpf(inst); + if (rc) + goto exit; + + rc = msm_vidc_check_resolution_supported(inst); + if (rc) + goto exit; + + rc = msm_vidc_check_max_sessions(inst); + if (rc) + goto exit; + +exit: + if (rc) { + i_vpr_e(inst, "%s: current session not supported\n", __func__); + msm_vidc_print_running_instances_info(inst->core); + } + + return rc; +} + +int msm_vidc_check_scaling_supported(struct msm_vidc_inst *inst) +{ + u32 
iwidth, owidth, iheight, oheight, ds_factor; + + if (is_decode_session(inst)) { + i_vpr_h(inst, "%s: Scaling is supported for encode session only\n", __func__); + return 0; + } + + if (!is_scaling_enabled(inst)) { + i_vpr_h(inst, "%s: Scaling not enabled, skip scaling check\n", __func__); + return 0; + } + + iwidth = inst->crop.width; + iheight = inst->crop.height; + owidth = inst->compose.width; + oheight = inst->compose.height; + ds_factor = inst->capabilities[SCALE_FACTOR].value; + + /* upscaling: encoder does not support upscaling */ + if (owidth > iwidth || oheight > iheight) { + i_vpr_e(inst, "%s: upscale not supported: input [%u x %u], output [%u x %u]\n", + __func__, iwidth, iheight, owidth, oheight); + return -EINVAL; + } + + /* downscaling: only supported up to 1/8 of width & 1/8 of height */ + if (iwidth > owidth * ds_factor || iheight > oheight * ds_factor) { + i_vpr_e(inst, + "%s: unsupported ratio: input [%u x %u], output [%u x %u], ratio %u\n", + __func__, iwidth, iheight, owidth, oheight, ds_factor); + return -EINVAL; + } + + return 0; +} + +struct msm_vidc_fw_query_params { + u32 hfi_prop_name; + u32 port; +}; + +int msm_vidc_get_properties(struct msm_vidc_inst *inst) +{ + int rc = 0; + int i; + + static const struct msm_vidc_fw_query_params fw_query_params[] = { + {HFI_PROP_STAGE, HFI_PORT_NONE}, + {HFI_PROP_PIPE, HFI_PORT_NONE}, + {HFI_PROP_QUALITY_MODE, HFI_PORT_BITSTREAM} + }; + + for (i = 0; i < ARRAY_SIZE(fw_query_params); i++) { + if (is_decode_session(inst)) { + if (fw_query_params[i].hfi_prop_name == HFI_PROP_QUALITY_MODE) + continue; + } + + i_vpr_l(inst, "%s: querying fw for property %#x\n", __func__, + fw_query_params[i].hfi_prop_name); + + rc = venus_hfi_session_property(inst, + fw_query_params[i].hfi_prop_name, + (HFI_HOST_FLAGS_RESPONSE_REQUIRED | + HFI_HOST_FLAGS_INTR_REQUIRED | + HFI_HOST_FLAGS_GET_PROPERTY), + fw_query_params[i].port, + HFI_PAYLOAD_NONE, + NULL, + 0); + if (rc) + return rc; + } + + return 0; +} + +struct 
context_bank_info *msm_vidc_get_context_bank_for_region(struct msm_vidc_core *core, + enum msm_vidc_buffer_region region) +{ + struct context_bank_info *cb = NULL, *match = NULL; + + if (!region || region >= MSM_VIDC_REGION_MAX) { + d_vpr_e("Invalid region %#x\n", region); + return NULL; + } + + venus_hfi_for_each_context_bank(core, cb) { + if (cb->region == region) { + match = cb; + break; + } + } + if (!match) + d_vpr_e("cb not found for region %#x\n", region); + + return match; +} + +struct context_bank_info *msm_vidc_get_context_bank_for_device(struct msm_vidc_core *core, + struct device *dev) +{ + struct context_bank_info *cb = NULL, *match = NULL; + + venus_hfi_for_each_context_bank(core, cb) { + if (of_device_is_compatible(dev->of_node, cb->name)) { + match = cb; + break; + } + } + if (!match) + d_vpr_e("cb not found for dev %s\n", dev_name(dev)); + + return match; +} From patchwork Fri Jul 28 13:23:22 2023 X-Patchwork-Submitter: Vikash Garodia X-Patchwork-Id: 13331926
From: Vikash Garodia Subject: [PATCH 11/33] iris: vidc: add helpers for memory management Date: Fri, 28 Jul 2023 18:53:22 +0530 Message-ID: <1690550624-14642-12-git-send-email-quic_vgarodia@quicinc.com> In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com> References: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org From: Dikshita Agarwal This implements helper functions for allocating, freeing, mapping and unmapping memory. Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../platform/qcom/iris/vidc/inc/msm_vidc_memory.h | 83 ++++ .../platform/qcom/iris/vidc/src/msm_vidc_memory.c | 448 +++++++++++++++++++++ 2 files changed, 531 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_memory.h create mode 100644 drivers/media/platform/qcom/iris/vidc/src/msm_vidc_memory.c diff --git a/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_memory.h b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_memory.h new file mode 100644 index 0000000..d6d244a --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_memory.h @@ -0,0 +1,83 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */ + +#ifndef _MSM_VIDC_MEMORY_H_ +#define _MSM_VIDC_MEMORY_H_ + +#include "msm_vidc_internal.h" + +struct msm_memory_dmabuf { + struct list_head list; + struct dma_buf *dmabuf; + u32 refcount; +}; + +enum msm_memory_pool_type { + MSM_MEM_POOL_BUFFER = 0, + MSM_MEM_POOL_ALLOC_MAP, + MSM_MEM_POOL_TIMESTAMP, + MSM_MEM_POOL_DMABUF, + MSM_MEM_POOL_BUF_TIMER, + MSM_MEM_POOL_BUF_STATS, + MSM_MEM_POOL_MAX, +}; + +struct msm_memory_alloc_header { + struct list_head list; + u32 type; + bool busy; + void *buf; +}; + +struct msm_memory_pool { + u32 size; + char *name; + struct list_head free_pool; /* list of struct msm_memory_alloc_header */ + struct list_head busy_pool; /* list of struct msm_memory_alloc_header */ +}; + +void *msm_vidc_pool_alloc(struct msm_vidc_inst *inst, + enum msm_memory_pool_type type); +void msm_vidc_pool_free(struct msm_vidc_inst *inst, void *vidc_buf); +int msm_vidc_pools_init(struct msm_vidc_inst *inst); +void msm_vidc_pools_deinit(struct msm_vidc_inst *inst); + +#define call_mem_op(c, op, ...) \ + (((c) && (c)->mem_ops && (c)->mem_ops->op) ? 
\ + ((c)->mem_ops->op(__VA_ARGS__)) : 0) + +struct msm_vidc_memory_ops { + struct dma_buf *(*dma_buf_get)(struct msm_vidc_inst *inst, + int fd); + void (*dma_buf_put)(struct msm_vidc_inst *inst, + struct dma_buf *dmabuf); + void (*dma_buf_put_completely)(struct msm_vidc_inst *inst, + struct msm_memory_dmabuf *buf); + struct dma_buf_attachment *(*dma_buf_attach)(struct msm_vidc_core *core, + struct dma_buf *dbuf, + struct device *dev); + int (*dma_buf_detach)(struct msm_vidc_core *core, struct dma_buf *dbuf, + struct dma_buf_attachment *attach); + struct sg_table *(*dma_buf_map_attachment)(struct msm_vidc_core *core, + struct dma_buf_attachment *attach); + int (*dma_buf_unmap_attachment)(struct msm_vidc_core *core, + struct dma_buf_attachment *attach, + struct sg_table *table); + int (*memory_alloc_map)(struct msm_vidc_core *core, + struct msm_vidc_mem *mem); + int (*memory_unmap_free)(struct msm_vidc_core *core, + struct msm_vidc_mem *mem); + int (*mem_dma_map_page)(struct msm_vidc_core *core, + struct msm_vidc_mem *mem); + int (*mem_dma_unmap_page)(struct msm_vidc_core *core, + struct msm_vidc_mem *mem); + u32 (*buffer_region)(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type); +}; + +const struct msm_vidc_memory_ops *get_mem_ops(void); + +#endif // _MSM_VIDC_MEMORY_H_ diff --git a/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_memory.c b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_memory.c new file mode 100644 index 0000000..c97d9c7 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_memory.c @@ -0,0 +1,448 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include +#include +#include + +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_internal.h" +#include "msm_vidc_memory.h" +#include "msm_vidc_platform.h" +#include "venus_hfi.h" + +MODULE_IMPORT_NS(DMA_BUF); + +struct msm_vidc_type_size_name { + enum msm_memory_pool_type type; + u32 size; + char *name; +}; + +static const struct msm_vidc_type_size_name buftype_size_name_arr[] = { + {MSM_MEM_POOL_BUFFER, sizeof(struct msm_vidc_buffer), "MSM_MEM_POOL_BUFFER" }, + {MSM_MEM_POOL_ALLOC_MAP, sizeof(struct msm_vidc_mem), "MSM_MEM_POOL_ALLOC_MAP" }, + {MSM_MEM_POOL_TIMESTAMP, sizeof(struct msm_vidc_timestamp), "MSM_MEM_POOL_TIMESTAMP" }, + {MSM_MEM_POOL_DMABUF, sizeof(struct msm_memory_dmabuf), "MSM_MEM_POOL_DMABUF" }, + {MSM_MEM_POOL_BUF_TIMER, sizeof(struct msm_vidc_input_timer), "MSM_MEM_POOL_BUF_TIMER" }, + {MSM_MEM_POOL_BUF_STATS, sizeof(struct msm_vidc_buffer_stats), "MSM_MEM_POOL_BUF_STATS"}, +}; + +void *msm_vidc_pool_alloc(struct msm_vidc_inst *inst, enum msm_memory_pool_type type) +{ + struct msm_memory_alloc_header *hdr = NULL; + struct msm_memory_pool *pool; + + if (type < 0 || type >= MSM_MEM_POOL_MAX) { + d_vpr_e("%s: Invalid params\n", __func__); + return NULL; + } + pool = &inst->pool[type]; + + if (!list_empty(&pool->free_pool)) { + /* get 1st node from free pool */ + hdr = list_first_entry(&pool->free_pool, struct msm_memory_alloc_header, list); + + /* move node from free pool to busy pool */ + list_move_tail(&hdr->list, &pool->busy_pool); + + /* reset existing data */ + memset((char *)hdr->buf, 0, pool->size); + + /* set busy flag to true. 
This is to catch double free request */ + hdr->busy = true; + + return hdr->buf; + } + + hdr = vzalloc(pool->size + sizeof(struct msm_memory_alloc_header)); + if (!hdr) + return NULL; + + INIT_LIST_HEAD(&hdr->list); + hdr->type = type; + hdr->busy = true; + hdr->buf = (void *)(hdr + 1); + list_add_tail(&hdr->list, &pool->busy_pool); + + return hdr->buf; +} + +void msm_vidc_pool_free(struct msm_vidc_inst *inst, void *vidc_buf) +{ + struct msm_memory_alloc_header *hdr; + struct msm_memory_pool *pool; + + if (!vidc_buf) { + d_vpr_e("%s: Invalid params\n", __func__); + return; + } + hdr = (struct msm_memory_alloc_header *)vidc_buf - 1; + + /* sanitize buffer addr */ + if (hdr->buf != vidc_buf) { + i_vpr_e(inst, "%s: invalid buf addr %p\n", __func__, vidc_buf); + return; + } + + /* sanitize pool type */ + if (hdr->type < 0 || hdr->type >= MSM_MEM_POOL_MAX) { + i_vpr_e(inst, "%s: invalid pool type %#x\n", __func__, hdr->type); + return; + } + pool = &inst->pool[hdr->type]; + + /* catch double-free request */ + if (!hdr->busy) { + i_vpr_e(inst, "%s: double free request. type %s, addr %p\n", __func__, + pool->name, vidc_buf); + return; + } + hdr->busy = false; + + /* move node from busy pool to free pool */ + list_move_tail(&hdr->list, &pool->free_pool); +} + +static void msm_vidc_destroy_pool_buffers(struct msm_vidc_inst *inst, + enum msm_memory_pool_type type) +{ + struct msm_memory_alloc_header *hdr, *dummy; + struct msm_memory_pool *pool; + u32 fcount = 0, bcount = 0; + + if (type < 0 || type >= MSM_MEM_POOL_MAX) { + d_vpr_e("%s: Invalid params\n", __func__); + return; + } + pool = &inst->pool[type]; + + /* detect memleak: busy pool is expected to be empty here */ + if (!list_empty(&pool->busy_pool)) + i_vpr_e(inst, "%s: destroy request on active buffer.
type %s\n", + __func__, pool->name); + + /* destroy all free buffers */ + list_for_each_entry_safe(hdr, dummy, &pool->free_pool, list) { + list_del(&hdr->list); + vfree(hdr); + fcount++; + } + + /* destroy all busy buffers */ + list_for_each_entry_safe(hdr, dummy, &pool->busy_pool, list) { + list_del(&hdr->list); + vfree(hdr); + bcount++; + } + + i_vpr_h(inst, "%s: type: %23s, count: free %2u, busy %2u\n", + __func__, pool->name, fcount, bcount); +} + +int msm_vidc_pools_init(struct msm_vidc_inst *inst) +{ + u32 i; + + if (ARRAY_SIZE(buftype_size_name_arr) != MSM_MEM_POOL_MAX) { + i_vpr_e(inst, "%s: num elements mismatch %lu %u\n", __func__, + ARRAY_SIZE(buftype_size_name_arr), MSM_MEM_POOL_MAX); + return -EINVAL; + } + + for (i = 0; i < MSM_MEM_POOL_MAX; i++) { + if (i != buftype_size_name_arr[i].type) { + i_vpr_e(inst, "%s: type mismatch %u %u\n", __func__, + i, buftype_size_name_arr[i].type); + return -EINVAL; + } + inst->pool[i].size = buftype_size_name_arr[i].size; + inst->pool[i].name = buftype_size_name_arr[i].name; + INIT_LIST_HEAD(&inst->pool[i].free_pool); + INIT_LIST_HEAD(&inst->pool[i].busy_pool); + } + + return 0; +} + +void msm_vidc_pools_deinit(struct msm_vidc_inst *inst) +{ + u32 i = 0; + + /* destroy all buffers from all pool types */ + for (i = 0; i < MSM_MEM_POOL_MAX; i++) + msm_vidc_destroy_pool_buffers(inst, i); +} + +static struct dma_buf *msm_vidc_dma_buf_get(struct msm_vidc_inst *inst, int fd) +{ + struct msm_memory_dmabuf *buf = NULL; + struct dma_buf *dmabuf = NULL; + bool found = false; + + /* get local dmabuf ref for tracking */ + dmabuf = dma_buf_get(fd); + if (IS_ERR_OR_NULL(dmabuf)) { + d_vpr_e("Failed to get dmabuf for %d, error %d\n", + fd, PTR_ERR_OR_ZERO(dmabuf)); + return NULL; + } + + /* track dmabuf - inc refcount if already present */ + list_for_each_entry(buf, &inst->dmabuf_tracker, list) { + if (buf->dmabuf == dmabuf) { + buf->refcount++; + found = true; + break; + } + } + if (found) { + /* put local dmabuf ref */ + 
dma_buf_put(dmabuf); + return dmabuf; + } + + /* get tracker instance from pool */ + buf = msm_vidc_pool_alloc(inst, MSM_MEM_POOL_DMABUF); + if (!buf) { + i_vpr_e(inst, "%s: dmabuf alloc failed\n", __func__); + dma_buf_put(dmabuf); + return NULL; + } + /* hold dmabuf strong ref in tracker */ + buf->dmabuf = dmabuf; + buf->refcount = 1; + INIT_LIST_HEAD(&buf->list); + + /* add new dmabuf entry to tracker */ + list_add_tail(&buf->list, &inst->dmabuf_tracker); + + return dmabuf; +} + +static void msm_vidc_dma_buf_put(struct msm_vidc_inst *inst, struct dma_buf *dmabuf) +{ + struct msm_memory_dmabuf *buf = NULL; + bool found = false; + + if (!dmabuf) { + d_vpr_e("%s: invalid params\n", __func__); + return; + } + + /* track dmabuf - dec refcount if already present */ + list_for_each_entry(buf, &inst->dmabuf_tracker, list) { + if (buf->dmabuf == dmabuf) { + buf->refcount--; + found = true; + break; + } + } + if (!found) { + i_vpr_e(inst, "%s: invalid dmabuf %p\n", __func__, dmabuf); + return; + } + + /* non-zero refcount - do nothing */ + if (buf->refcount) + return; + + /* remove dmabuf entry from tracker */ + list_del(&buf->list); + + /* release dmabuf strong ref from tracker */ + dma_buf_put(buf->dmabuf); + + /* put tracker instance back to pool */ + msm_vidc_pool_free(inst, buf); +} + +static void msm_vidc_dma_buf_put_completely(struct msm_vidc_inst *inst, + struct msm_memory_dmabuf *buf) +{ + if (!buf) { + d_vpr_e("%s: invalid params\n", __func__); + return; + } + + while (buf->refcount) { + buf->refcount--; + if (!buf->refcount) { + /* remove dmabuf entry from tracker */ + list_del(&buf->list); + + /* release dmabuf strong ref from tracker */ + dma_buf_put(buf->dmabuf); + + /* put tracker instance back to pool */ + msm_vidc_pool_free(inst, buf); + break; + } + } +} + +static struct dma_buf_attachment *msm_vidc_dma_buf_attach(struct msm_vidc_core *core, + struct dma_buf *dbuf, + struct device *dev) +{ + int rc = 0; + struct dma_buf_attachment *attach = NULL; + + if 
(!dbuf || !dev) { + d_vpr_e("%s: invalid params\n", __func__); + return NULL; + } + + attach = dma_buf_attach(dbuf, dev); + if (IS_ERR_OR_NULL(attach)) { + rc = PTR_ERR_OR_ZERO(attach) ? PTR_ERR_OR_ZERO(attach) : -1; + d_vpr_e("Failed to attach dmabuf, error %d\n", rc); + return NULL; + } + + return attach; +} + +static int msm_vidc_dma_buf_detach(struct msm_vidc_core *core, struct dma_buf *dbuf, + struct dma_buf_attachment *attach) +{ + int rc = 0; + + if (!dbuf || !attach) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + + dma_buf_detach(dbuf, attach); + + return rc; +} + +static int msm_vidc_dma_buf_unmap_attachment(struct msm_vidc_core *core, + struct dma_buf_attachment *attach, + struct sg_table *table) +{ + int rc = 0; + + if (!attach || !table) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + + dma_buf_unmap_attachment(attach, table, DMA_BIDIRECTIONAL); + + return rc; +} + +static struct sg_table *msm_vidc_dma_buf_map_attachment(struct msm_vidc_core *core, + struct dma_buf_attachment *attach) +{ + int rc = 0; + struct sg_table *table = NULL; + + if (!attach) { + d_vpr_e("%s: invalid params\n", __func__); + return NULL; + } + + table = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); + if (IS_ERR_OR_NULL(table)) { + rc = PTR_ERR_OR_ZERO(table) ? 
PTR_ERR_OR_ZERO(table) : -1; + d_vpr_e("Failed to map table, error %d\n", rc); + return NULL; + } + if (!table->sgl) { + d_vpr_e("%s: sgl is NULL\n", __func__); + msm_vidc_dma_buf_unmap_attachment(core, attach, table); + return NULL; + } + + return table; +} + +static int msm_vidc_memory_alloc_map(struct msm_vidc_core *core, struct msm_vidc_mem *mem) +{ + int size = 0; + struct context_bank_info *cb = NULL; + + if (!mem) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + + size = ALIGN(mem->size, SZ_4K); + mem->attrs = DMA_ATTR_WRITE_COMBINE; + + cb = msm_vidc_get_context_bank_for_region(core, mem->region); + if (!cb) { + d_vpr_e("%s: failed to get context bank device\n", __func__); + return -EIO; + } + + mem->kvaddr = dma_alloc_attrs(cb->dev, size, &mem->device_addr, GFP_KERNEL, + mem->attrs); + if (!mem->kvaddr) { + d_vpr_e("%s: dma_alloc_attrs returned NULL\n", __func__); + return -ENOMEM; + } + + d_vpr_h("%s: dmabuf %pK, size %d, buffer_type %s, secure %d, region %d\n", + __func__, mem->kvaddr, mem->size, buf_name(mem->type), + mem->secure, mem->region); + + return 0; +} + +static int msm_vidc_memory_unmap_free(struct msm_vidc_core *core, struct msm_vidc_mem *mem) +{ + int rc = 0; + struct context_bank_info *cb = NULL; + + if (!mem || !mem->device_addr || !mem->kvaddr) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + + d_vpr_h("%s: dmabuf %pK, size %d, kvaddr %pK, buffer_type %s, secure %d, region %d\n", + __func__, (void *)mem->device_addr, mem->size, mem->kvaddr, + buf_name(mem->type), mem->secure, mem->region); + + cb = msm_vidc_get_context_bank_for_region(core, mem->region); + if (!cb) { + d_vpr_e("%s: failed to get context bank device\n", __func__); + return -EIO; + } + + dma_free_attrs(cb->dev, mem->size, mem->kvaddr, mem->device_addr, mem->attrs); + + mem->kvaddr = NULL; + mem->device_addr = 0; + + return rc; +} + +static u32 msm_vidc_buffer_region(struct msm_vidc_inst *inst, enum msm_vidc_buffer_type 
buffer_type) +{ + return MSM_VIDC_NON_SECURE; +} + +static const struct msm_vidc_memory_ops msm_mem_ops = { + .dma_buf_get = msm_vidc_dma_buf_get, + .dma_buf_put = msm_vidc_dma_buf_put, + .dma_buf_put_completely = msm_vidc_dma_buf_put_completely, + .dma_buf_attach = msm_vidc_dma_buf_attach, + .dma_buf_detach = msm_vidc_dma_buf_detach, + .dma_buf_map_attachment = msm_vidc_dma_buf_map_attachment, + .dma_buf_unmap_attachment = msm_vidc_dma_buf_unmap_attachment, + .memory_alloc_map = msm_vidc_memory_alloc_map, + .memory_unmap_free = msm_vidc_memory_unmap_free, + .buffer_region = msm_vidc_buffer_region, +}; + +const struct msm_vidc_memory_ops *get_mem_ops(void) +{ + return &msm_mem_ops; +} From patchwork Fri Jul 28 13:23:23 2023 X-Patchwork-Submitter: Vikash Garodia X-Patchwork-Id: 13331928
From: Vikash Garodia Subject: [PATCH 12/33] iris: vidc: add helper functions for resource management Date: Fri, 28 Jul 2023 18:53:23 +0530 Message-ID: <1690550624-14642-13-git-send-email-quic_vgarodia@quicinc.com> In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com> References: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org From: Dikshita Agarwal This implements ops to initialize, enable and disable external resources needed by the video driver, such as power domains, clocks etc. Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../media/platform/qcom/iris/vidc/inc/resources.h | 259 ++ .../media/platform/qcom/iris/vidc/src/resources.c | 1321 ++++++++++++++++++++ 2 files changed, 1580 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/resources.h create mode 100644 drivers/media/platform/qcom/iris/vidc/src/resources.c diff --git a/drivers/media/platform/qcom/iris/vidc/inc/resources.h b/drivers/media/platform/qcom/iris/vidc/inc/resources.h new file mode 100644 index 0000000..9b2588e --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/resources.h @@ -0,0 +1,259 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2022, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef _MSM_VIDC_RESOURCES_H_ +#define _MSM_VIDC_RESOURCES_H_ + +struct icc_path; +struct regulator; +struct clk; +struct reset_control; +struct llcc_slice_desc; +struct iommu_domain; +struct device; +struct msm_vidc_core; + +/* + * These are helper macros to iterate over various lists within + * msm_vidc_core->resource.
The intention is to cut down on a lot + * of boiler-plate code + */ + +/* Read as "for each 'thing' in a set of 'thingies'" */ +#define venus_hfi_for_each_thing(__device, __thing, __thingy) \ + venus_hfi_for_each_thing_continue(__device, __thing, __thingy, 0) + +#define venus_hfi_for_each_thing_reverse(__device, __thing, __thingy) \ + venus_hfi_for_each_thing_reverse_continue(__device, __thing, __thingy, \ + (__device)->resource->__thingy##_set.count - 1) + +/* TODO: the __from parameter technically not required since we can figure it + * out with some pointer magic (i.e. __thing - __thing##_tbl[0]). If this macro + * sees extensive use, probably worth cleaning it up but for now omitting it + * since it introduces unnecessary complexity. + */ +#define venus_hfi_for_each_thing_continue(__device, __thing, __thingy, __from) \ + for (__thing = &(__device)->resource->\ + __thingy##_set.__thingy##_tbl[__from]; \ + __thing < &(__device)->resource->__thingy##_set.__thingy##_tbl[0] + \ + ((__device)->resource->__thingy##_set.count - __from); \ + ++__thing) + +#define venus_hfi_for_each_thing_reverse_continue(__device, __thing, __thingy, \ + __from) \ + for (__thing = &(__device)->resource->\ + __thingy##_set.__thingy##_tbl[__from]; \ + __thing >= &(__device)->resource->__thingy##_set.__thingy##_tbl[0]; \ + --__thing) + +/* Bus set helpers */ +#define venus_hfi_for_each_bus(__device, __binfo) \ + venus_hfi_for_each_thing(__device, __binfo, bus) +#define venus_hfi_for_each_bus_reverse(__device, __binfo) \ + venus_hfi_for_each_thing_reverse(__device, __binfo, bus) + +/* Regular set helpers */ +#define venus_hfi_for_each_regulator(__device, __rinfo) \ + venus_hfi_for_each_thing(__device, __rinfo, regulator) +#define venus_hfi_for_each_regulator_reverse(__device, __rinfo) \ + venus_hfi_for_each_thing_reverse(__device, __rinfo, regulator) +#define venus_hfi_for_each_regulator_reverse_continue(__device, __rinfo, \ + __from) \ + venus_hfi_for_each_thing_reverse_continue(__device, 
__rinfo, \ + regulator, __from) + +/* Power domain set helpers */ +#define venus_hfi_for_each_power_domain(__device, __pdinfo) \ + venus_hfi_for_each_thing(__device, __pdinfo, power_domain) + +/* Clock set helpers */ +#define venus_hfi_for_each_clock(__device, __cinfo) \ + venus_hfi_for_each_thing(__device, __cinfo, clock) +#define venus_hfi_for_each_clock_reverse(__device, __cinfo) \ + venus_hfi_for_each_thing_reverse(__device, __cinfo, clock) + +/* Reset clock set helpers */ +#define venus_hfi_for_each_reset_clock(__device, __rcinfo) \ + venus_hfi_for_each_thing(__device, __rcinfo, reset) +#define venus_hfi_for_each_reset_clock_reverse(__device, __rcinfo) \ + venus_hfi_for_each_thing_reverse(__device, __rcinfo, reset) +#define venus_hfi_for_each_reset_clock_reverse_continue(__device, __rinfo, \ + __from) \ + venus_hfi_for_each_thing_reverse_continue(__device, __rinfo, \ + reset, __from) + +/* Subcache set helpers */ +#define venus_hfi_for_each_subcache(__device, __sinfo) \ + venus_hfi_for_each_thing(__device, __sinfo, subcache) +#define venus_hfi_for_each_subcache_reverse(__device, __sinfo) \ + venus_hfi_for_each_thing_reverse(__device, __sinfo, subcache) + +/* Contextbank set helpers */ +#define venus_hfi_for_each_context_bank(__device, __sinfo) \ + venus_hfi_for_each_thing(__device, __sinfo, context_bank) +#define venus_hfi_for_each_context_bank_reverse(__device, __sinfo) \ + venus_hfi_for_each_thing_reverse(__device, __sinfo, context_bank) + +enum msm_vidc_branch_mem_flags { + MSM_VIDC_CLKFLAG_RETAIN_PERIPH, + MSM_VIDC_CLKFLAG_NORETAIN_PERIPH, + MSM_VIDC_CLKFLAG_RETAIN_MEM, + MSM_VIDC_CLKFLAG_NORETAIN_MEM, + MSM_VIDC_CLKFLAG_PERIPH_OFF_SET, + MSM_VIDC_CLKFLAG_PERIPH_OFF_CLEAR, +}; + +struct bus_info { + struct icc_path *icc; + const char *name; + u32 min_kbps; + u32 max_kbps; +}; + +struct bus_set { + struct bus_info *bus_tbl; + u32 count; +}; + +struct regulator_info { + struct regulator *regulator; + const char *name; + bool hw_power_collapse; +}; + +struct 
regulator_set { + struct regulator_info *regulator_tbl; + u32 count; +}; + +struct power_domain_info { + struct device *genpd_dev; + const char *name; +}; + +struct power_domain_set { + struct power_domain_info *power_domain_tbl; + u32 count; +}; + +struct clock_info { + struct clk *clk; + const char *name; + u32 clk_id; + bool has_scaling; + u64 prev; +}; + +struct clock_set { + struct clock_info *clock_tbl; + u32 count; +}; + +struct reset_info { + struct reset_control *rst; + const char *name; + bool exclusive_release; +}; + +struct reset_set { + struct reset_info *reset_tbl; + u32 count; +}; + +struct subcache_info { + struct llcc_slice_desc *subcache; + const char *name; + u32 llcc_id; + bool isactive; +}; + +struct subcache_set { + struct subcache_info *subcache_tbl; + u32 count; + bool set_to_fw; +}; + +struct addr_range { + u32 start; + u32 size; +}; + +struct context_bank_info { + const char *name; + struct addr_range addr_range; + bool secure; + bool dma_coherant; + struct device *dev; + struct iommu_domain *domain; + u32 region; + u64 dma_mask; +}; + +struct context_bank_set { + struct context_bank_info *context_bank_tbl; + u32 count; +}; + +struct frequency_table { + unsigned long freq; +}; + +struct freq_set { + struct frequency_table *freq_tbl; + u32 count; +}; + +struct msm_vidc_resource { + u8 __iomem *register_base_addr; + int irq; + struct bus_set bus_set; + struct regulator_set regulator_set; + struct power_domain_set power_domain_set; + struct clock_set clock_set; + struct reset_set reset_set; + struct subcache_set subcache_set; + struct context_bank_set context_bank_set; + struct freq_set freq_set; + int fw_cookie; +}; + +#define call_res_op(c, op, ...) \ + (((c) && (c)->res_ops && (c)->res_ops->op) ? 
\ + ((c)->res_ops->op(__VA_ARGS__)) : 0) + +struct msm_vidc_resources_ops { + int (*init)(struct msm_vidc_core *core); + + int (*reset_bridge)(struct msm_vidc_core *core); + int (*reset_control_acquire)(struct msm_vidc_core *core, + const char *name); + int (*reset_control_release)(struct msm_vidc_core *core, + const char *name); + int (*reset_control_assert)(struct msm_vidc_core *core, + const char *name); + int (*reset_control_deassert)(struct msm_vidc_core *core, + const char *name); + + int (*gdsc_init)(struct msm_vidc_core *core); + int (*gdsc_on)(struct msm_vidc_core *core, const char *name); + int (*gdsc_off)(struct msm_vidc_core *core, const char *name); + int (*gdsc_hw_ctrl)(struct msm_vidc_core *core); + int (*gdsc_sw_ctrl)(struct msm_vidc_core *core); + + int (*llcc)(struct msm_vidc_core *core, bool enable); + int (*set_bw)(struct msm_vidc_core *core, unsigned long bw_ddr, + unsigned long bw_llcc); + int (*set_clks)(struct msm_vidc_core *core, u64 rate); + + int (*clk_disable)(struct msm_vidc_core *core, const char *name); + int (*clk_enable)(struct msm_vidc_core *core, const char *name); + int (*clk_set_flag)(struct msm_vidc_core *core, + const char *name, enum msm_vidc_branch_mem_flags flag); +}; + +const struct msm_vidc_resources_ops *get_resources_ops(void); + +#endif diff --git a/drivers/media/platform/qcom/iris/vidc/src/resources.c b/drivers/media/platform/qcom/iris/vidc/src/resources.c new file mode 100644 index 0000000..b0800b9 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/resources.c @@ -0,0 +1,1321 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_platform.h" +#include "msm_vidc_power.h" +#include "venus_hfi.h" + +/* Less than 50 MBps is treated as a trivial BW change */ +#define TRIVIAL_BW_THRESHOLD 50000 +#define TRIVIAL_BW_CHANGE(a, b) \ + ((a) > (b) ? (a) - (b) < TRIVIAL_BW_THRESHOLD : \ + (b) - (a) < TRIVIAL_BW_THRESHOLD) + +enum reset_state { + INIT = 1, + ASSERT, + DEASSERT, +}; + +/* Comparator to sort the frequency table in descending order */ +static inline int cmp(const void *a, const void *b) +{ + const struct frequency_table *fa = a, *fb = b; + + /* + * compare rather than subtract: the difference of two unsigned + * long rates can overflow the int return value + */ + if (fa->freq < fb->freq) + return 1; + if (fa->freq > fb->freq) + return -1; + return 0; +} + +static void __fatal_error(bool fatal) +{ + WARN_ON(fatal); +} + +static void devm_llcc_release(void *res) +{ + llcc_slice_putd((struct llcc_slice_desc *)res); +} + +static struct llcc_slice_desc *devm_llcc_get(struct device *dev, u32 id) +{ + struct llcc_slice_desc *llcc = NULL; + int rc = 0; + + llcc = llcc_slice_getd(id); + if (!llcc) + return NULL; + + /* + * Register a release callback with devm so that, when the device goes + * out of scope (during the remove sequence), devm takes care of the + * de-registration by invoking the release callback.
+ */ + rc = devm_add_action_or_reset(dev, devm_llcc_release, (void *)llcc); + if (rc) + return NULL; + + return llcc; +} + +static void devm_pd_release(void *res) +{ + struct device *pd = (struct device *)res; + + d_vpr_h("%s(): %s\n", __func__, dev_name(pd)); + dev_pm_domain_detach(pd, true); +} + +static struct device *devm_pd_get(struct device *dev, const char *name) +{ + struct device *pd = NULL; + int rc = 0; + + pd = dev_pm_domain_attach_by_name(dev, name); + if (!pd) { + d_vpr_e("%s: pm domain attach failed %s\n", __func__, name); + return NULL; + } + + rc = devm_add_action_or_reset(dev, devm_pd_release, (void *)pd); + if (rc) { + d_vpr_e("%s: add action or reset failed %s\n", __func__, name); + return NULL; + } + + return pd; +} + +static void devm_opp_dl_release(void *res) +{ + struct device_link *link = (struct device_link *)res; + + d_vpr_h("%s(): %s\n", __func__, dev_name(&link->link_dev)); + device_link_del(link); +} + +static int devm_opp_dl_get(struct device *dev, struct device *supplier) +{ + u32 flag = DL_FLAG_RPM_ACTIVE | DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS; + struct device_link *link = NULL; + int rc = 0; + + link = device_link_add(dev, supplier, flag); + if (!link) { + d_vpr_e("%s: device link add failed\n", __func__); + return -EINVAL; + } + + rc = devm_add_action_or_reset(dev, devm_opp_dl_release, (void *)link); + if (rc) { + d_vpr_e("%s: add action or reset failed\n", __func__); + return rc; + } + + return rc; +} + +static void devm_pm_runtime_put_sync(void *res) +{ + struct device *dev = (struct device *)res; + + d_vpr_h("%s(): %s\n", __func__, dev_name(dev)); + pm_runtime_put_sync(dev); +} + +static int devm_pm_runtime_get_sync(struct device *dev) +{ + int rc = 0; + + rc = pm_runtime_get_sync(dev); + if (rc < 0) { + d_vpr_e("%s: pm domain get sync failed\n", __func__); + return rc; + } + + rc = devm_add_action_or_reset(dev, devm_pm_runtime_put_sync, (void *)dev); + if (rc) { + d_vpr_e("%s: add action or reset failed\n", __func__); + 
return rc; + } + + return rc; +} + +static int __opp_set_rate(struct msm_vidc_core *core, u64 freq) +{ + unsigned long opp_freq = 0; + struct dev_pm_opp *opp; + int rc = 0; + + opp_freq = freq; + + /* find max(ceil) freq from opp table */ + opp = dev_pm_opp_find_freq_ceil(&core->pdev->dev, &opp_freq); + if (IS_ERR(opp)) { + opp = dev_pm_opp_find_freq_floor(&core->pdev->dev, &opp_freq); + if (IS_ERR(opp)) { + d_vpr_e("%s: unable to find freq %llu in opp table\n", __func__, freq); + return -EINVAL; + } + } + dev_pm_opp_put(opp); + + /* print freq value */ + d_vpr_h("%s: set rate %lu (requested %llu)\n", + __func__, opp_freq, freq); + + /* scale freq to power up mxc & mmcx */ + rc = dev_pm_opp_set_rate(&core->pdev->dev, opp_freq); + if (rc) { + d_vpr_e("%s: failed to set rate\n", __func__); + return rc; + } + + return rc; +} + +static int __init_register_base(struct msm_vidc_core *core) +{ + struct msm_vidc_resource *res; + + res = core->resource; + + res->register_base_addr = devm_platform_ioremap_resource(core->pdev, 0); + if (IS_ERR(res->register_base_addr)) { + d_vpr_e("%s: map reg addr failed %ld\n", + __func__, PTR_ERR(res->register_base_addr)); + return -EINVAL; + } + d_vpr_h("%s: reg_base %p\n", __func__, res->register_base_addr); + + return 0; +} + +static int __init_irq(struct msm_vidc_core *core) +{ + struct msm_vidc_resource *res; + int rc = 0; + + res = core->resource; + + res->irq = platform_get_irq(core->pdev, 0); + if (res->irq < 0) { + d_vpr_e("%s: get irq failed, %d\n", __func__, res->irq); + return res->irq; + } + d_vpr_h("%s: irq %d\n", __func__, res->irq); + + rc = devm_request_threaded_irq(&core->pdev->dev, res->irq, venus_hfi_isr, + venus_hfi_isr_handler, IRQF_TRIGGER_HIGH, "msm-vidc", core); + if (rc) { + d_vpr_e("%s: Failed to request venus IRQ\n", __func__); + return rc; + } + disable_irq_nosync(res->irq); + + return rc; +} + +static int __init_bus(struct msm_vidc_core *core) +{ + const struct bw_table *bus_tbl; + struct bus_set *interconnects; + struct bus_info
*binfo = NULL; + u32 bus_count = 0, cnt = 0; + int rc = 0; + + interconnects = &core->resource->bus_set; + + bus_tbl = core->platform->data.bw_tbl; + bus_count = core->platform->data.bw_tbl_size; + + if (!bus_tbl || !bus_count) { + d_vpr_e("%s: invalid bus tbl %p or count %d\n", + __func__, bus_tbl, bus_count); + return -EINVAL; + } + + /* allocate bus_set */ + interconnects->bus_tbl = devm_kzalloc(&core->pdev->dev, + sizeof(*interconnects->bus_tbl) * bus_count, + GFP_KERNEL); + if (!interconnects->bus_tbl) { + d_vpr_e("%s: failed to alloc memory for bus table\n", __func__); + return -ENOMEM; + } + interconnects->count = bus_count; + + /* populate bus field from platform data */ + for (cnt = 0; cnt < interconnects->count; cnt++) { + interconnects->bus_tbl[cnt].name = bus_tbl[cnt].name; + interconnects->bus_tbl[cnt].min_kbps = bus_tbl[cnt].min_kbps; + interconnects->bus_tbl[cnt].max_kbps = bus_tbl[cnt].max_kbps; + } + + /* print bus fields */ + venus_hfi_for_each_bus(core, binfo) { + d_vpr_h("%s: name %s min_kbps %u max_kbps %u\n", + __func__, binfo->name, binfo->min_kbps, binfo->max_kbps); + } + + /* get interconnect handle */ + venus_hfi_for_each_bus(core, binfo) { + binfo->icc = devm_of_icc_get(&core->pdev->dev, binfo->name); + if (IS_ERR_OR_NULL(binfo->icc)) { + d_vpr_e("%s: failed to get bus: %s\n", __func__, binfo->name); + rc = PTR_ERR_OR_ZERO(binfo->icc) ? 
+ PTR_ERR_OR_ZERO(binfo->icc) : -EBADHANDLE; + binfo->icc = NULL; + return rc; + } + } + + return rc; +} + +static int __init_power_domains(struct msm_vidc_core *core) +{ + struct power_domain_info *pdinfo = NULL; + const struct pd_table *pd_tbl; + struct power_domain_set *pds; + struct device **opp_vdevs = NULL; + const char * const *opp_tbl; + u32 pd_count = 0, opp_count = 0, cnt = 0; + int rc = 0; + + pds = &core->resource->power_domain_set; + + pd_tbl = core->platform->data.pd_tbl; + pd_count = core->platform->data.pd_tbl_size; + + /* skip init if power domain not supported */ + if (!pd_count) { + d_vpr_h("%s: power domain entries not available in db\n", __func__); + return 0; + } + + /* sanitize power domain table */ + if (!pd_tbl) { + d_vpr_e("%s: invalid power domain tbl\n", __func__); + return -EINVAL; + } + + /* allocate power_domain_set */ + pds->power_domain_tbl = devm_kzalloc(&core->pdev->dev, + sizeof(*pds->power_domain_tbl) * pd_count, + GFP_KERNEL); + if (!pds->power_domain_tbl) { + d_vpr_e("%s: failed to alloc memory for pd table\n", __func__); + return -ENOMEM; + } + pds->count = pd_count; + + /* populate power domain fields */ + for (cnt = 0; cnt < pds->count; cnt++) + pds->power_domain_tbl[cnt].name = pd_tbl[cnt].name; + + /* print power domain fields */ + venus_hfi_for_each_power_domain(core, pdinfo) + d_vpr_h("%s: pd name %s\n", __func__, pdinfo->name); + + /* get power domain handle */ + venus_hfi_for_each_power_domain(core, pdinfo) { + pdinfo->genpd_dev = devm_pd_get(&core->pdev->dev, pdinfo->name); + if (IS_ERR_OR_NULL(pdinfo->genpd_dev)) { + rc = PTR_ERR_OR_ZERO(pdinfo->genpd_dev) ? 
+ PTR_ERR_OR_ZERO(pdinfo->genpd_dev) : -EBADHANDLE; + d_vpr_e("%s: failed to get pd: %s\n", __func__, pdinfo->name); + pdinfo->genpd_dev = NULL; + return rc; + } + } + + opp_tbl = core->platform->data.opp_tbl; + opp_count = core->platform->data.opp_tbl_size; + + /* skip init if opp not supported */ + if (opp_count < 2) { + d_vpr_h("%s: opp entries not available\n", __func__); + return 0; + } + + /* sanitize opp table */ + if (!opp_tbl) { + d_vpr_e("%s: invalid opp table\n", __func__); + return -EINVAL; + } + + /* ignore NULL entry at the end of table */ + opp_count -= 1; + + /* print opp table entries */ + for (cnt = 0; cnt < opp_count; cnt++) + d_vpr_h("%s: opp name %s\n", __func__, opp_tbl[cnt]); + + /* populate opp power domains(for rails) */ + rc = devm_pm_opp_attach_genpd(&core->pdev->dev, opp_tbl, &opp_vdevs); + if (rc) + return rc; + + /* create device_links b/w consumer(dev) and multiple suppliers(mx, mmcx) */ + for (cnt = 0; cnt < opp_count; cnt++) { + rc = devm_opp_dl_get(&core->pdev->dev, opp_vdevs[cnt]); + if (rc) { + d_vpr_e("%s: failed to create dl: %s\n", + __func__, dev_name(opp_vdevs[cnt])); + return rc; + } + } + + /* initialize opp table from device tree */ + rc = devm_pm_opp_of_add_table(&core->pdev->dev); + if (rc) { + d_vpr_e("%s: failed to add opp table\n", __func__); + return rc; + } + + /** + * 1. power up mx & mmcx supply for RCG(mvs0_clk_src) + * 2. power up gdsc0c for mvs0c branch clk + * 3. 
power up gdsc0 for mvs0 branch clk + */ + + /** + * power up mxc, mmcx rails to enable supply for + * RCG(video_cc_mvs0_clk_src) + */ + /* enable runtime pm */ + rc = devm_pm_runtime_enable(&core->pdev->dev); + if (rc) { + d_vpr_e("%s: failed to enable runtime pm\n", __func__); + return rc; + } + /* power up rails(mxc & mmcx) */ + rc = devm_pm_runtime_get_sync(&core->pdev->dev); + if (rc) { + d_vpr_e("%s: failed to get sync runtime pm\n", __func__); + return rc; + } + + return rc; +} + +static int __init_clocks(struct msm_vidc_core *core) +{ + const struct clk_table *clk_tbl; + struct clock_set *clocks; + struct clock_info *cinfo = NULL; + u32 clk_count = 0, cnt = 0; + int rc = 0; + + clocks = &core->resource->clock_set; + + clk_tbl = core->platform->data.clk_tbl; + clk_count = core->platform->data.clk_tbl_size; + + if (!clk_tbl || !clk_count) { + d_vpr_e("%s: invalid clock tbl %p or count %d\n", + __func__, clk_tbl, clk_count); + return -EINVAL; + } + + /* allocate clock_set */ + clocks->clock_tbl = devm_kzalloc(&core->pdev->dev, + sizeof(*clocks->clock_tbl) * clk_count, + GFP_KERNEL); + if (!clocks->clock_tbl) { + d_vpr_e("%s: failed to alloc memory for clock table\n", __func__); + return -ENOMEM; + } + clocks->count = clk_count; + + /* populate clock field from platform data */ + for (cnt = 0; cnt < clocks->count; cnt++) { + clocks->clock_tbl[cnt].name = clk_tbl[cnt].name; + clocks->clock_tbl[cnt].clk_id = clk_tbl[cnt].clk_id; + clocks->clock_tbl[cnt].has_scaling = clk_tbl[cnt].scaling; + } + + /* print clock fields */ + venus_hfi_for_each_clock(core, cinfo) { + d_vpr_h("%s: clock name %s clock id %#x scaling %d\n", + __func__, cinfo->name, cinfo->clk_id, cinfo->has_scaling); + } + + /* get clock handle */ + venus_hfi_for_each_clock(core, cinfo) { + cinfo->clk = devm_clk_get(&core->pdev->dev, cinfo->name); + if (IS_ERR_OR_NULL(cinfo->clk)) { + d_vpr_e("%s: failed to get clock: %s\n", __func__, cinfo->name); + rc = PTR_ERR_OR_ZERO(cinfo->clk) ? 
+ PTR_ERR_OR_ZERO(cinfo->clk) : -EINVAL; + cinfo->clk = NULL; + return rc; + } + } + + return rc; +} + +static int __init_reset_clocks(struct msm_vidc_core *core) +{ + const struct clk_rst_table *rst_tbl; + struct reset_set *rsts; + struct reset_info *rinfo = NULL; + u32 rst_count = 0, cnt = 0; + int rc = 0; + + rsts = &core->resource->reset_set; + + rst_tbl = core->platform->data.clk_rst_tbl; + rst_count = core->platform->data.clk_rst_tbl_size; + + if (!rst_tbl || !rst_count) { + d_vpr_e("%s: invalid reset tbl %p or count %d\n", + __func__, rst_tbl, rst_count); + return -EINVAL; + } + + /* allocate reset_set */ + rsts->reset_tbl = devm_kzalloc(&core->pdev->dev, + sizeof(*rsts->reset_tbl) * rst_count, + GFP_KERNEL); + if (!rsts->reset_tbl) { + d_vpr_e("%s: failed to alloc memory for reset table\n", __func__); + return -ENOMEM; + } + rsts->count = rst_count; + + /* populate clock field from platform data */ + for (cnt = 0; cnt < rsts->count; cnt++) { + rsts->reset_tbl[cnt].name = rst_tbl[cnt].name; + rsts->reset_tbl[cnt].exclusive_release = rst_tbl[cnt].exclusive_release; + } + + /* print reset clock fields */ + venus_hfi_for_each_reset_clock(core, rinfo) { + d_vpr_h("%s: reset clk %s, exclusive %d\n", + __func__, rinfo->name, rinfo->exclusive_release); + } + + /* get reset clock handle */ + venus_hfi_for_each_reset_clock(core, rinfo) { + if (rinfo->exclusive_release) + rinfo->rst = devm_reset_control_get_exclusive_released(&core->pdev->dev, + rinfo->name); + else + rinfo->rst = devm_reset_control_get(&core->pdev->dev, rinfo->name); + if (IS_ERR_OR_NULL(rinfo->rst)) { + d_vpr_e("%s: failed to get reset clock: %s\n", __func__, rinfo->name); + rc = PTR_ERR_OR_ZERO(rinfo->rst) ? 
+ PTR_ERR_OR_ZERO(rinfo->rst) : -EINVAL; + rinfo->rst = NULL; + return rc; + } + } + + return rc; +} + +static int __init_subcaches(struct msm_vidc_core *core) +{ + const struct subcache_table *llcc_tbl; + struct subcache_set *caches; + struct subcache_info *sinfo = NULL; + u32 llcc_count = 0, cnt = 0; + int rc = 0; + + caches = &core->resource->subcache_set; + + /* skip init if subcache not available */ + if (!is_sys_cache_present(core)) + return 0; + + llcc_tbl = core->platform->data.subcache_tbl; + llcc_count = core->platform->data.subcache_tbl_size; + + if (!llcc_tbl || !llcc_count) { + d_vpr_e("%s: invalid llcc tbl %p or count %d\n", + __func__, llcc_tbl, llcc_count); + return -EINVAL; + } + + /* allocate subcache_set */ + caches->subcache_tbl = devm_kzalloc(&core->pdev->dev, + sizeof(*caches->subcache_tbl) * llcc_count, + GFP_KERNEL); + if (!caches->subcache_tbl) { + d_vpr_e("%s: failed to alloc memory for subcache table\n", __func__); + return -ENOMEM; + } + caches->count = llcc_count; + + /* populate subcache fields from platform data */ + for (cnt = 0; cnt < caches->count; cnt++) { + caches->subcache_tbl[cnt].name = llcc_tbl[cnt].name; + caches->subcache_tbl[cnt].llcc_id = llcc_tbl[cnt].llcc_id; + } + + /* print subcache fields */ + venus_hfi_for_each_subcache(core, sinfo) { + d_vpr_h("%s: name %s subcache id %d\n", + __func__, sinfo->name, sinfo->llcc_id); + } + + /* get subcache/llcc handle */ + venus_hfi_for_each_subcache(core, sinfo) { + sinfo->subcache = devm_llcc_get(&core->pdev->dev, sinfo->llcc_id); + if (IS_ERR_OR_NULL(sinfo->subcache)) { + d_vpr_e("%s: failed to get subcache: %d\n", __func__, sinfo->llcc_id); + rc = PTR_ERR_OR_ZERO(sinfo->subcache) ?
+ PTR_ERR_OR_ZERO(sinfo->subcache) : -EBADHANDLE; + sinfo->subcache = NULL; + return rc; + } + } + + return rc; +} + +static int __init_freq_table(struct msm_vidc_core *core) +{ + struct freq_table *freq_tbl; + struct freq_set *clks; + u32 freq_count = 0, cnt = 0; + int rc = 0; + + clks = &core->resource->freq_set; + + freq_tbl = core->platform->data.freq_tbl; + freq_count = core->platform->data.freq_tbl_size; + + if (!freq_tbl || !freq_count) { + d_vpr_e("%s: invalid freq tbl %p or count %d\n", + __func__, freq_tbl, freq_count); + return -EINVAL; + } + + /* allocate freq_set */ + clks->freq_tbl = devm_kzalloc(&core->pdev->dev, + sizeof(*clks->freq_tbl) * freq_count, + GFP_KERNEL); + if (!clks->freq_tbl) { + d_vpr_e("%s: failed to alloc memory for freq table\n", __func__); + return -ENOMEM; + } + clks->count = freq_count; + + /* populate freq field from platform data */ + for (cnt = 0; cnt < clks->count; cnt++) + clks->freq_tbl[cnt].freq = freq_tbl[cnt].freq; + + /* sort freq table */ + sort(clks->freq_tbl, clks->count, sizeof(*clks->freq_tbl), cmp, NULL); + + /* print freq field freq_set */ + d_vpr_h("%s: updated freq table\n", __func__); + for (cnt = 0; cnt < clks->count; cnt++) + d_vpr_h("%s:\t %lu\n", __func__, clks->freq_tbl[cnt].freq); + + return rc; +} + +static int __init_context_banks(struct msm_vidc_core *core) +{ + const struct context_bank_table *cb_tbl; + struct context_bank_set *cbs; + struct context_bank_info *cbinfo = NULL; + u32 cb_count = 0, cnt = 0; + int rc = 0; + + cbs = &core->resource->context_bank_set; + + cb_tbl = core->platform->data.context_bank_tbl; + cb_count = core->platform->data.context_bank_tbl_size; + + if (!cb_tbl || !cb_count) { + d_vpr_e("%s: invalid context bank tbl %p or count %d\n", + __func__, cb_tbl, cb_count); + return -EINVAL; + } + + /* allocate context_bank table */ + cbs->context_bank_tbl = devm_kzalloc(&core->pdev->dev, + sizeof(*cbs->context_bank_tbl) * cb_count, + GFP_KERNEL); + if (!cbs->context_bank_tbl) { + 
d_vpr_e("%s: failed to alloc memory for context_bank table\n", __func__); + return -ENOMEM; + } + cbs->count = cb_count; + + /* + * populate context bank fields from platform data, except + * dev & domain, which are assigned as part of the context bank + * probe sequence + */ + for (cnt = 0; cnt < cbs->count; cnt++) { + cbs->context_bank_tbl[cnt].name = cb_tbl[cnt].name; + cbs->context_bank_tbl[cnt].addr_range.start = cb_tbl[cnt].start; + cbs->context_bank_tbl[cnt].addr_range.size = cb_tbl[cnt].size; + cbs->context_bank_tbl[cnt].secure = cb_tbl[cnt].secure; + cbs->context_bank_tbl[cnt].dma_coherant = cb_tbl[cnt].dma_coherant; + cbs->context_bank_tbl[cnt].region = cb_tbl[cnt].region; + cbs->context_bank_tbl[cnt].dma_mask = cb_tbl[cnt].dma_mask; + } + + /* print context_bank fields */ + venus_hfi_for_each_context_bank(core, cbinfo) { + d_vpr_h("%s: name %s addr start %#x size %#x secure %d\n", + __func__, cbinfo->name, cbinfo->addr_range.start, + cbinfo->addr_range.size, cbinfo->secure); + + d_vpr_h("%s: coherent %d region %d dma_mask %llu\n", + __func__, cbinfo->dma_coherant, cbinfo->region, + cbinfo->dma_mask); + } + + return rc; +} + +static int __enable_power_domains(struct msm_vidc_core *core, const char *name) +{ + struct power_domain_info *pdinfo = NULL; + int rc = 0; + + /* power up rails (mxc & mmcx) to enable RCG(video_cc_mvs0_clk_src) */ + rc = __opp_set_rate(core, ULONG_MAX); + if (rc) { + d_vpr_e("%s: opp setrate failed\n", __func__); + return rc; + } + + /* power up (gdsc0/gdsc0c) to enable (mvs0/mvs0c) branch clock */ + venus_hfi_for_each_power_domain(core, pdinfo) { + if (strcmp(pdinfo->name, name)) + continue; + + rc = pm_runtime_get_sync(pdinfo->genpd_dev); + if (rc < 0) { + d_vpr_e("%s: failed to get sync: %s\n", __func__, pdinfo->name); + return rc; + } + d_vpr_h("%s: enabled power domain %s\n", __func__, pdinfo->name); + } + + return rc; +} + +static int __disable_power_domains(struct msm_vidc_core *core, const char *name) +{ + struct power_domain_info
*pdinfo = NULL; + int rc = 0; + + /* power down (gdsc0/gdsc0c) to disable (mvs0/mvs0c) branch clock */ + venus_hfi_for_each_power_domain(core, pdinfo) { + if (strcmp(pdinfo->name, name)) + continue; + + rc = pm_runtime_put_sync(pdinfo->genpd_dev); + if (rc) { + d_vpr_e("%s: failed to put sync: %s\n", __func__, pdinfo->name); + return rc; + } + d_vpr_h("%s: disabled power domain %s\n", __func__, pdinfo->name); + } + + /* power down rails (mxc & mmcx) to disable RCG(video_cc_mvs0_clk_src) */ + rc = __opp_set_rate(core, 0); + if (rc) { + d_vpr_e("%s: opp setrate failed\n", __func__); + return rc; + } + msm_vidc_change_core_sub_state(core, CORE_SUBSTATE_GDSC_HANDOFF, 0, __func__); + + return rc; +} + +static int __hand_off_power_domains(struct msm_vidc_core *core) +{ + msm_vidc_change_core_sub_state(core, 0, CORE_SUBSTATE_GDSC_HANDOFF, __func__); + + return 0; +} + +static int __acquire_power_domains(struct msm_vidc_core *core) +{ + msm_vidc_change_core_sub_state(core, CORE_SUBSTATE_GDSC_HANDOFF, 0, __func__); + + return 0; +} + +static int __disable_subcaches(struct msm_vidc_core *core) +{ + struct subcache_info *sinfo; + int rc = 0; + + if (!is_sys_cache_present(core)) + return 0; + + /* De-activate subcaches */ + venus_hfi_for_each_subcache_reverse(core, sinfo) { + if (!sinfo->isactive) + continue; + + d_vpr_h("%s: De-activate subcache %s\n", __func__, sinfo->name); + rc = llcc_slice_deactivate(sinfo->subcache); + if (rc) { + d_vpr_e("Failed to de-activate %s: %d\n", + sinfo->name, rc); + } + sinfo->isactive = false; + } + + return 0; +} + +static int __enable_subcaches(struct msm_vidc_core *core) +{ + int rc = 0; + u32 c = 0; + struct subcache_info *sinfo; + + if (!is_sys_cache_present(core)) + return 0; + + /* Activate subcaches */ + venus_hfi_for_each_subcache(core, sinfo) { + rc = llcc_slice_activate(sinfo->subcache); + if (rc) { + d_vpr_e("Failed to activate %s: %d\n", sinfo->name, rc); + __fatal_error(true); + goto err_activate_fail; + } + sinfo->isactive =
true; + d_vpr_h("Activated subcache %s\n", sinfo->name); + c++; + } + + d_vpr_h("Activated %d Subcaches to Venus\n", c); + + return 0; + +err_activate_fail: + __disable_subcaches(core); + return rc; +} + +static int llcc_enable(struct msm_vidc_core *core, bool enable) +{ + int ret; + + if (enable) + ret = __enable_subcaches(core); + else + ret = __disable_subcaches(core); + + return ret; +} + +static int __vote_bandwidth(struct bus_info *bus, unsigned long bw_kbps) +{ + int rc = 0; + + if (!bus->icc) { + d_vpr_e("%s: invalid bus\n", __func__); + return -EINVAL; + } + + d_vpr_p("Voting bus %s to ab %lu kBps\n", bus->name, bw_kbps); + + rc = icc_set_bw(bus->icc, bw_kbps, 0); + if (rc) + d_vpr_e("Failed voting bus %s to ab %lu, rc=%d\n", + bus->name, bw_kbps, rc); + + return rc; +} + +static int __unvote_buses(struct msm_vidc_core *core) +{ + int rc = 0; + struct bus_info *bus = NULL; + + core->power.bw_ddr = 0; + core->power.bw_llcc = 0; + + venus_hfi_for_each_bus(core, bus) { + rc = __vote_bandwidth(bus, 0); + if (rc) + goto err_unknown_device; + } + +err_unknown_device: + return rc; +} + +static int __vote_buses(struct msm_vidc_core *core, + unsigned long bw_ddr, unsigned long bw_llcc) +{ + int rc = 0; + struct bus_info *bus = NULL; + unsigned long bw_kbps = 0, bw_prev = 0; + enum vidc_bus_type type; + + venus_hfi_for_each_bus(core, bus) { + if (bus && bus->icc) { + type = get_type_frm_name(bus->name); + + if (type == DDR) { + bw_kbps = bw_ddr; + bw_prev = core->power.bw_ddr; + } else if (type == LLCC) { + bw_kbps = bw_llcc; + bw_prev = core->power.bw_llcc; + } else { + bw_kbps = bus->max_kbps; + bw_prev = core->power.bw_ddr ? 
+ bw_kbps : 0; + } + + /* ensure freq is within limits */ + bw_kbps = clamp_t(typeof(bw_kbps), bw_kbps, + bus->min_kbps, bus->max_kbps); + + if (TRIVIAL_BW_CHANGE(bw_kbps, bw_prev) && bw_prev) { + d_vpr_l("Skip voting bus %s to %lu kBps\n", + bus->name, bw_kbps); + continue; + } + + rc = __vote_bandwidth(bus, bw_kbps); + + if (type == DDR) + core->power.bw_ddr = bw_kbps; + else if (type == LLCC) + core->power.bw_llcc = bw_kbps; + } else { + d_vpr_e("No BUS to Vote\n"); + } + } + + return rc; +} + +static int set_bw(struct msm_vidc_core *core, unsigned long bw_ddr, + unsigned long bw_llcc) +{ + if (!bw_ddr && !bw_llcc) + return __unvote_buses(core); + + return __vote_buses(core, bw_ddr, bw_llcc); +} + +static int __set_clk_rate(struct msm_vidc_core *core, struct clock_info *cl, + u64 rate) +{ + int rc = 0; + + /* bail early if requested clk rate is not changed */ + if (rate == cl->prev) + return 0; + + d_vpr_p("Scaling clock %s to %llu, prev %llu\n", + cl->name, rate, cl->prev); + + rc = clk_set_rate(cl->clk, rate); + if (rc) { + d_vpr_e("%s: Failed to set clock rate %llu %s: %d\n", + __func__, rate, cl->name, rc); + return rc; + } + + cl->prev = rate; + + return rc; +} + +static int __set_clocks(struct msm_vidc_core *core, u64 freq) +{ + struct clock_info *cl; + int rc = 0; + + /* scale mxc & mmcx rails */ + rc = __opp_set_rate(core, freq); + if (rc) { + d_vpr_e("%s: opp setrate failed %lld\n", __func__, freq); + return rc; + } + + venus_hfi_for_each_clock(core, cl) { + if (cl->has_scaling) { + rc = __set_clk_rate(core, cl, freq); + if (rc) + return rc; + } + } + + return 0; +} + +static int __disable_unprepare_clock(struct msm_vidc_core *core, + const char *clk_name) +{ + int rc = 0; + struct clock_info *cl; + bool found; + + found = false; + venus_hfi_for_each_clock(core, cl) { + if (!cl->clk) { + d_vpr_e("%s: invalid clock %s\n", __func__, cl->name); + return -EINVAL; + } + if (strcmp(cl->name, clk_name)) + continue; + found = true; + 
clk_disable_unprepare(cl->clk); + if (cl->has_scaling) + __set_clk_rate(core, cl, 0); + cl->prev = 0; + d_vpr_h("%s: clock %s disable unprepared\n", __func__, cl->name); + break; + } + if (!found) { + d_vpr_e("%s: clock %s not found\n", __func__, clk_name); + return -EINVAL; + } + + return rc; +} + +static int __prepare_enable_clock(struct msm_vidc_core *core, + const char *clk_name) +{ + int rc = 0; + struct clock_info *cl; + bool found; + u64 rate = 0; + + found = false; + venus_hfi_for_each_clock(core, cl) { + if (!cl->clk) { + d_vpr_e("%s: invalid clock\n", __func__); + return -EINVAL; + } + if (strcmp(cl->name, clk_name)) + continue; + found = true; + /* + * For the clocks we control, set the rate prior to preparing + * them. Since we don't really have a load at this point, scale + * it to the lowest frequency possible + */ + if (cl->has_scaling) { + rate = clk_round_rate(cl->clk, 0); + /* + * The source clock is already multiplied by the scaling ratio, and + * __set_clk_rate attempts to multiply again. So divide by the scaling + * ratio before calling __set_clk_rate.
+ */ + rate = rate / MSM_VIDC_CLOCK_SOURCE_SCALING_RATIO; + __set_clk_rate(core, cl, rate); + } + + rc = clk_prepare_enable(cl->clk); + if (rc) { + d_vpr_e("%s: failed to enable clock %s\n", + __func__, cl->name); + return rc; + } + if (!__clk_is_enabled(cl->clk)) { + d_vpr_e("%s: clock %s not enabled\n", + __func__, cl->name); + clk_disable_unprepare(cl->clk); + if (cl->has_scaling) + __set_clk_rate(core, cl, 0); + return -EINVAL; + } + d_vpr_h("%s: clock %s prepare enabled\n", __func__, cl->name); + break; + } + if (!found) { + d_vpr_e("%s: clock %s not found\n", __func__, clk_name); + return -EINVAL; + } + + return rc; +} + +static int __init_resources(struct msm_vidc_core *core) +{ + int rc = 0; + + rc = __init_register_base(core); + if (rc) + return rc; + + rc = __init_irq(core); + if (rc) + return rc; + + rc = __init_bus(core); + if (rc) + return rc; + + rc = call_res_op(core, gdsc_init, core); + if (rc) + return rc; + + rc = __init_clocks(core); + if (rc) + return rc; + + rc = __init_reset_clocks(core); + if (rc) + return rc; + + rc = __init_subcaches(core); + if (rc) + return rc; + + rc = __init_freq_table(core); + if (rc) + return rc; + + rc = __init_context_banks(core); + + return rc; +} + +static int __reset_control_acquire_name(struct msm_vidc_core *core, + const char *name) +{ + struct reset_info *rcinfo = NULL; + int rc = 0; + bool found = false; + + venus_hfi_for_each_reset_clock(core, rcinfo) { + if (strcmp(rcinfo->name, name)) + continue; + + /* this function is valid only for exclusive_release reset clocks*/ + if (!rcinfo->exclusive_release) { + d_vpr_e("%s: unsupported reset control (%s), exclusive %d\n", + __func__, name, rcinfo->exclusive_release); + return -EINVAL; + } + + found = true; + + rc = reset_control_acquire(rcinfo->rst); + if (rc) + d_vpr_e("%s: failed to acquire reset control (%s), rc = %d\n", + __func__, rcinfo->name, rc); + else + d_vpr_h("%s: acquire reset control (%s)\n", + __func__, rcinfo->name); + break; + } + if (!found) { + 
d_vpr_e("%s: reset control (%s) not found\n", __func__, name); + rc = -EINVAL; + } + + return rc; +} + +static int __reset_control_release_name(struct msm_vidc_core *core, + const char *name) +{ + struct reset_info *rcinfo = NULL; + int rc = 0; + bool found = false; + + venus_hfi_for_each_reset_clock(core, rcinfo) { + if (strcmp(rcinfo->name, name)) + continue; + + /* this function is valid only for exclusive_release reset clocks */ + if (!rcinfo->exclusive_release) { + d_vpr_e("%s: unsupported reset control (%s), exclusive %d\n", + __func__, name, rcinfo->exclusive_release); + return -EINVAL; + } + + found = true; + + /* reset_control_release() returns void, so there is no error to check */ + reset_control_release(rcinfo->rst); + d_vpr_h("%s: release reset control (%s) done\n", + __func__, rcinfo->name); + break; + } + if (!found) { + d_vpr_e("%s: reset control (%s) not found\n", __func__, name); + rc = -EINVAL; + } + + return rc; +} + +static int __reset_control_assert_name(struct msm_vidc_core *core, + const char *name) +{ + struct reset_info *rcinfo = NULL; + int rc = 0; + bool found = false; + + venus_hfi_for_each_reset_clock(core, rcinfo) { + if (strcmp(rcinfo->name, name)) + continue; + + found = true; + rc = reset_control_assert(rcinfo->rst); + if (rc) + d_vpr_e("%s: failed to assert reset control (%s), rc = %d\n", + __func__, rcinfo->name, rc); + else + d_vpr_h("%s: assert reset control (%s)\n", + __func__, rcinfo->name); + break; + } + if (!found) { + d_vpr_e("%s: reset control (%s) not found\n", __func__, name); + rc = -EINVAL; + } + + return rc; +} + +static int __reset_control_deassert_name(struct msm_vidc_core *core, + const char *name) +{ + struct reset_info *rcinfo = NULL; + int rc = 0; + bool found = false; + + venus_hfi_for_each_reset_clock(core, rcinfo) { + if (strcmp(rcinfo->name, name)) + continue; + found = true; + rc = reset_control_deassert(rcinfo->rst); + if (rc) + d_vpr_e("%s: deassert reset control for (%s) failed, rc
%d\n", + __func__, rcinfo->name, rc); + else + d_vpr_h("%s: deassert reset control (%s)\n", + __func__, rcinfo->name); + break; + } + if (!found) { + d_vpr_e("%s: reset control (%s) not found\n", __func__, name); + rc = -EINVAL; + } + + return rc; +} + +static int __reset_control_deassert(struct msm_vidc_core *core) +{ + struct reset_info *rcinfo = NULL; + int rc = 0; + + venus_hfi_for_each_reset_clock(core, rcinfo) { + rc = reset_control_deassert(rcinfo->rst); + if (rc) { + d_vpr_e("%s: deassert reset control failed. rc = %d\n", __func__, rc); + continue; + } + d_vpr_h("%s: deassert reset control %s\n", __func__, rcinfo->name); + } + + return rc; +} + +static int __reset_control_assert(struct msm_vidc_core *core) +{ + struct reset_info *rcinfo = NULL; + int rc = 0, cnt = 0; + + venus_hfi_for_each_reset_clock(core, rcinfo) { + if (!rcinfo->rst) { + d_vpr_e("%s: invalid reset clock %s\n", + __func__, rcinfo->name); + return -EINVAL; + } + rc = reset_control_assert(rcinfo->rst); + if (rc) { + d_vpr_e("%s: failed to assert reset control %s, rc = %d\n", + __func__, rcinfo->name, rc); + goto deassert_reset_control; + } + cnt++; + d_vpr_h("%s: assert reset control %s, count %d\n", __func__, rcinfo->name, cnt); + + usleep_range(1000, 1100); + } + + return rc; +deassert_reset_control: + venus_hfi_for_each_reset_clock_reverse_continue(core, rcinfo, cnt) { + d_vpr_e("%s: deassert reset control %s\n", __func__, rcinfo->name); + reset_control_deassert(rcinfo->rst); + } + + return rc; +} + +static int __reset_ahb2axi_bridge(struct msm_vidc_core *core) +{ + int rc = 0; + + rc = __reset_control_assert(core); + if (rc) + return rc; + + rc = __reset_control_deassert(core); + + return rc; +} + +static const struct msm_vidc_resources_ops res_ops = { + .init = __init_resources, + .reset_bridge = __reset_ahb2axi_bridge, + .reset_control_acquire = __reset_control_acquire_name, + .reset_control_release = __reset_control_release_name, + .reset_control_assert = __reset_control_assert_name, 
+ .reset_control_deassert = __reset_control_deassert_name, + .gdsc_init = __init_power_domains, + .gdsc_on = __enable_power_domains, + .gdsc_off = __disable_power_domains, + .gdsc_hw_ctrl = __hand_off_power_domains, + .gdsc_sw_ctrl = __acquire_power_domains, + .llcc = llcc_enable, + .set_bw = set_bw, + .set_clks = __set_clocks, + .clk_enable = __prepare_enable_clock, + .clk_disable = __disable_unprepare_clock, +}; + +const struct msm_vidc_resources_ops *get_resources_ops(void) +{ + return &res_ops; +} From patchwork Fri Jul 28 13:23:24 2023 X-Patchwork-Submitter: Vikash Garodia X-Patchwork-Id: 13331929
b=VDUodzfXkqHniCWJiWTMazRg4ZEymFQVe79lDGdouSr04j5QQk3SKl8qoh+znpZcKT2F XmBnVPsYHszo+eT6epRjRzGkVFeyt360SgS0EaCl66TsS6u5CcyjofIt9ZSvOUZUgcmv PQrWZpLnaZfFPZ+1yNcQbWcLXxBbJhKdghkF6RS8IF1aX5NvqLXlzPrqsOFTehJB9eJR 34xDF3zZtvpxvIBbmfi0WSBcTr4LA9LqD8yjTc6AsKUB6UNL8wAscHTDicE+H0fHJqT+ HJhG20nNZMwMsyLvgnjsRf/bOVfMZjD5FrpMsbZgYvvCrBZjqnrafWA9R48cjj5su2Gx WQ== Received: from nasanppmta01.qualcomm.com (i-global254.qualcomm.com [199.106.103.254]) by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3s469hh1gy-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Fri, 28 Jul 2023 13:25:56 +0000 Received: from nasanex01a.na.qualcomm.com (nasanex01a.na.qualcomm.com [10.52.223.231]) by NASANPPMTA01.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 36SDPuN1002374 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Fri, 28 Jul 2023 13:25:56 GMT Received: from hu-vgarodia-hyd.qualcomm.com (10.80.80.8) by nasanex01a.na.qualcomm.com (10.52.223.231) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1118.30; Fri, 28 Jul 2023 06:25:52 -0700 From: Vikash Garodia To: , , , , , , , , CC: , Vikash Garodia Subject: [PATCH 13/33] iris: vidc: add helper functions for power management Date: Fri, 28 Jul 2023 18:53:24 +0530 Message-ID: <1690550624-14642-14-git-send-email-quic_vgarodia@quicinc.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com> References: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com> MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01a.na.qualcomm.com (10.52.223.231) To nasanex01a.na.qualcomm.com (10.52.223.231) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-ORIG-GUID: LAx9nLJITVufFzoxgXQqC8UGYFFAtuSD X-Proofpoint-GUID: LAx9nLJITVufFzoxgXQqC8UGYFFAtuSD X-Proofpoint-Virus-Version: vendor=baseguard 
From: Dikshita Agarwal

This implements functions for calculating the current load of the hardware. Depending on the number of instances and their resolutions, it selects the best clock rate for the video core. It also scales clocks and power, and enables/disables DCVS.

Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../platform/qcom/iris/vidc/inc/msm_vidc_power.h | 94 ++++ .../platform/qcom/iris/vidc/src/msm_vidc_power.c | 560 +++++++++++++++++++++ 2 files changed, 654 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_power.h create mode 100644 drivers/media/platform/qcom/iris/vidc/src/msm_vidc_power.c diff --git a/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_power.h b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_power.h new file mode 100644 index 0000000..cb424f5 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_power.h @@ -0,0 +1,94 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */ + +#ifndef _MSM_VIDC_POWER_H_ +#define _MSM_VIDC_POWER_H_ + +#include "msm_vidc_debug.h" +#include "msm_vidc_inst.h" +#include "msm_vidc_internal.h" + +#define COMPRESSION_RATIO_MAX 5 + +/* TODO: Move to dtsi OR use source clock instead of branch clock.*/ +#define MSM_VIDC_CLOCK_SOURCE_SCALING_RATIO 3 + +enum vidc_bus_type { + PERF, + DDR, + LLCC, +}; + +/* + * Minimum dimensions for which to calculate bandwidth. + * This means that anything bandwidth(0, 0) == + * bandwidth(BASELINE_DIMENSIONS.width, BASELINE_DIMENSIONS.height) + */ +static const struct { + int height, width; +} BASELINE_DIMENSIONS = { + .width = 1280, + .height = 720, +}; + +/* converts Mbps to bps (the "b" part can be bits or bytes based on context) */ +#define kbps(__mbps) ((__mbps) * 1000) +#define bps(__mbps) (kbps(__mbps) * 1000) + +static inline u32 get_type_frm_name(const char *name) +{ + if (!strcmp(name, "venus-llcc")) + return LLCC; + else if (!strcmp(name, "venus-ddr")) + return DDR; + else + return PERF; +} + +#define DUMP_HEADER_MAGIC 0xdeadbeef +#define DUMP_FP_FMT "%FP" /* special format for fp_t */ + +struct dump { + char *key; + char *format; + size_t val; +}; + +void __dump(struct dump dump[], int len); + +static inline bool __ubwc(enum msm_vidc_colorformat_type f) +{ + switch (f) { + case MSM_VIDC_FMT_NV12C: + case MSM_VIDC_FMT_TP10C: + return true; + default: + return false; + } +} + +static inline int __bpp(enum msm_vidc_colorformat_type f) +{ + switch (f) { + case MSM_VIDC_FMT_NV12: + case MSM_VIDC_FMT_NV21: + case MSM_VIDC_FMT_NV12C: + case MSM_VIDC_FMT_RGBA8888C: + return 8; + case MSM_VIDC_FMT_P010: + case MSM_VIDC_FMT_TP10C: + return 10; + default: + d_vpr_e("Unsupported colorformat (%x)", f); + return INT_MAX; + } +} + +u64 msm_vidc_max_freq(struct msm_vidc_inst *inst); +int msm_vidc_scale_power(struct msm_vidc_inst *inst, bool scale_buses); +void msm_vidc_power_data_reset(struct msm_vidc_inst *inst); + +#endif diff --git 
a/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_power.c b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_power.c new file mode 100644 index 0000000..cb5c7b7c --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_power.c @@ -0,0 +1,560 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include "msm_vidc_buffer.h" +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_inst.h" +#include "msm_vidc_internal.h" +#include "msm_vidc_platform.h" +#include "msm_vidc_power.h" +#include "venus_hfi.h" + +/* Q16 Format */ +#define MSM_VIDC_MIN_UBWC_COMPLEXITY_FACTOR (1 << 16) +#define MSM_VIDC_MAX_UBWC_COMPLEXITY_FACTOR (4 << 16) +#define MSM_VIDC_MIN_UBWC_COMPRESSION_RATIO (1 << 16) +#define MSM_VIDC_MAX_UBWC_COMPRESSION_RATIO (5 << 16) + +void __dump(struct dump dump[], int len) +{ + int c = 0; + + for (c = 0; c < len; ++c) { + char format_line[128] = "", formatted_line[128] = ""; + + if (dump[c].val == DUMP_HEADER_MAGIC) { + snprintf(formatted_line, sizeof(formatted_line), "%s\n", + dump[c].key); + } else { + snprintf(format_line, sizeof(format_line), + " %-35s: %s\n", dump[c].key, + dump[c].format); + snprintf(formatted_line, sizeof(formatted_line), + format_line, dump[c].val); + } + d_vpr_b("%s", formatted_line); + } +} + +u64 msm_vidc_max_freq(struct msm_vidc_inst *inst) +{ + struct msm_vidc_core *core; + struct frequency_table *freq_tbl; + u64 freq = 0; + + core = inst->core; + + if (!core->resource || !core->resource->freq_set.freq_tbl || + !core->resource->freq_set.count) { + i_vpr_e(inst, "%s: invalid frequency table\n", __func__); + return freq; + } + freq_tbl = core->resource->freq_set.freq_tbl; + freq = freq_tbl[0].freq; + + i_vpr_l(inst, "%s: rate = %llu\n", __func__, freq); + return freq; +} + +static int 
fill_dynamic_stats(struct msm_vidc_inst *inst, + struct vidc_bus_vote_data *vote_data) +{ + struct msm_vidc_input_cr_data *temp, *next; + u32 cf = MSM_VIDC_MAX_UBWC_COMPLEXITY_FACTOR; + u32 cr = MSM_VIDC_MIN_UBWC_COMPRESSION_RATIO; + u32 input_cr = MSM_VIDC_MIN_UBWC_COMPRESSION_RATIO; + u32 frame_size; + + if (inst->power.fw_cr) + cr = inst->power.fw_cr; + + if (inst->power.fw_cf) { + cf = inst->power.fw_cf; + frame_size = (msm_vidc_get_mbs_per_frame(inst) / (32 * 8) * 3) / 2; + if (frame_size) + cf = cf / frame_size; + } + + list_for_each_entry_safe(temp, next, &inst->enc_input_crs, list) + input_cr = min(input_cr, temp->input_cr); + + /* Sanitize CR/CF values from HW */ + cf = clamp_t(u32, cf, MSM_VIDC_MIN_UBWC_COMPLEXITY_FACTOR, + MSM_VIDC_MAX_UBWC_COMPLEXITY_FACTOR); + cr = clamp_t(u32, cr, MSM_VIDC_MIN_UBWC_COMPRESSION_RATIO, + MSM_VIDC_MAX_UBWC_COMPRESSION_RATIO); + input_cr = clamp_t(u32, input_cr, MSM_VIDC_MIN_UBWC_COMPRESSION_RATIO, + MSM_VIDC_MAX_UBWC_COMPRESSION_RATIO); + + vote_data->compression_ratio = cr; + vote_data->complexity_factor = cf; + vote_data->input_cr = input_cr; + + i_vpr_l(inst, + "Input CR = %d Recon CR = %d Complexity Factor = %d\n", + vote_data->input_cr, vote_data->compression_ratio, + vote_data->complexity_factor); + + return 0; +} + +static int msm_vidc_set_buses(struct msm_vidc_inst *inst) +{ + int rc = 0; + struct msm_vidc_core *core; + struct msm_vidc_inst *temp; + u64 total_bw_ddr = 0, total_bw_llcc = 0; + u64 curr_time_ns; + + core = inst->core; + + mutex_lock(&core->lock); + curr_time_ns = ktime_get_ns(); + list_for_each_entry(temp, &core->instances, list) { + /* skip for session where no input is there to process */ + if (!temp->max_input_data_size) + continue; + + /* skip inactive session bus bandwidth */ + if (!is_active_session(temp->last_qbuf_time_ns, curr_time_ns)) { + temp->active = false; + continue; + } + + if
(temp->power.power_mode == VIDC_POWER_TURBO) { + total_bw_ddr = INT_MAX; + total_bw_llcc = INT_MAX; + break; + } + + total_bw_ddr += temp->power.ddr_bw; + total_bw_llcc += temp->power.sys_cache_bw; + } + mutex_unlock(&core->lock); + + rc = venus_hfi_scale_buses(inst, total_bw_ddr, total_bw_llcc); + if (rc) + return rc; + + return 0; +} + +int msm_vidc_scale_buses(struct msm_vidc_inst *inst) +{ + int rc = 0; + struct msm_vidc_core *core; + struct vidc_bus_vote_data *vote_data; + struct v4l2_format *out_f; + struct v4l2_format *inp_f; + u32 operating_rate, frame_rate; + + core = inst->core; + if (!core->resource) { + i_vpr_e(inst, "%s: invalid resource params\n", __func__); + return -EINVAL; + } + vote_data = &inst->bus_data; + + vote_data->power_mode = VIDC_POWER_NORMAL; + if (inst->power.buffer_counter < DCVS_WINDOW) + vote_data->power_mode = VIDC_POWER_TURBO; + + if (vote_data->power_mode == VIDC_POWER_TURBO) + goto set_buses; + + out_f = &inst->fmts[OUTPUT_PORT]; + inp_f = &inst->fmts[INPUT_PORT]; + + vote_data->codec = inst->codec; + vote_data->input_width = inp_f->fmt.pix_mp.width; + vote_data->input_height = inp_f->fmt.pix_mp.height; + vote_data->output_width = out_f->fmt.pix_mp.width; + vote_data->output_height = out_f->fmt.pix_mp.height; + vote_data->lcu_size = (inst->codec == MSM_VIDC_HEVC || + inst->codec == MSM_VIDC_VP9) ? 
32 : 16; + vote_data->fps = inst->max_rate; + + if (is_encode_session(inst)) { + vote_data->domain = MSM_VIDC_ENCODER; + vote_data->bitrate = inst->capabilities[BIT_RATE].value; + vote_data->rotation = inst->capabilities[ROTATION].value; + vote_data->b_frames_enabled = + inst->capabilities[B_FRAME].value > 0; + + /* scale bitrate if operating rate is larger than frame rate */ + frame_rate = msm_vidc_get_frame_rate(inst); + operating_rate = inst->max_rate; + if (frame_rate && operating_rate && operating_rate > frame_rate) + vote_data->bitrate = (vote_data->bitrate / frame_rate) * operating_rate; + + vote_data->num_formats = 1; + vote_data->color_formats[0] = + v4l2_colorformat_to_driver(inst, + inst->fmts[INPUT_PORT].fmt.pix_mp.pixelformat, + __func__); + } else if (is_decode_session(inst)) { + u32 color_format; + + vote_data->domain = MSM_VIDC_DECODER; + vote_data->bitrate = inst->max_input_data_size * vote_data->fps * 8; + color_format = + v4l2_colorformat_to_driver(inst, + inst->fmts[OUTPUT_PORT].fmt.pix_mp.pixelformat, + __func__); + if (is_linear_colorformat(color_format)) { + vote_data->num_formats = 2; + /* + * 0 index - dpb colorformat + * 1 index - opb colorformat + */ + if (is_10bit_colorformat(color_format)) + vote_data->color_formats[0] = MSM_VIDC_FMT_TP10C; + else + vote_data->color_formats[0] = MSM_VIDC_FMT_NV12; + vote_data->color_formats[1] = color_format; + } else { + vote_data->num_formats = 1; + vote_data->color_formats[0] = color_format; + } + } + vote_data->work_mode = inst->capabilities[STAGE].value; + if (core->resource->subcache_set.set_to_fw) + vote_data->use_sys_cache = true; + vote_data->num_vpp_pipes = core->capabilities[NUM_VPP_PIPE].value; + fill_dynamic_stats(inst, vote_data); + + call_session_op(core, calc_bw, inst, vote_data); + + inst->power.ddr_bw = vote_data->calc_bw_ddr; + inst->power.sys_cache_bw = vote_data->calc_bw_llcc; + + if (!inst->stats.avg_bw_llcc) + inst->stats.avg_bw_llcc = inst->power.sys_cache_bw; + else + 
inst->stats.avg_bw_llcc = + (inst->stats.avg_bw_llcc + inst->power.sys_cache_bw) / 2; + + if (!inst->stats.avg_bw_ddr) + inst->stats.avg_bw_ddr = inst->power.ddr_bw; + else + inst->stats.avg_bw_ddr = + (inst->stats.avg_bw_ddr + inst->power.ddr_bw) / 2; + +set_buses: + inst->power.power_mode = vote_data->power_mode; + rc = msm_vidc_set_buses(inst); + if (rc) + return rc; + + return 0; +} + +int msm_vidc_set_clocks(struct msm_vidc_inst *inst) +{ + int rc = 0; + struct msm_vidc_core *core; + struct msm_vidc_inst *temp; + u64 freq; + u64 rate = 0; + bool increment, decrement; + u64 curr_time_ns; + int i = 0; + + core = inst->core; + + if (!core->resource || !core->resource->freq_set.freq_tbl || + !core->resource->freq_set.count) { + d_vpr_e("%s: invalid frequency table\n", __func__); + return -EINVAL; + } + + mutex_lock(&core->lock); + increment = false; + decrement = true; + freq = 0; + curr_time_ns = ktime_get_ns(); + list_for_each_entry(temp, &core->instances, list) { + /* skip for session where no input is there to process */ + if (!temp->max_input_data_size) + continue; + + /* skip inactive session clock rate */ + if (!is_active_session(temp->last_qbuf_time_ns, curr_time_ns)) { + temp->active = false; + continue; + } + freq += temp->power.min_freq; + + /* increment even if one session requested for it */ + if (temp->power.dcvs_flags & MSM_VIDC_DCVS_INCR) + increment = true; + /* decrement only if all sessions requested for it */ + if (!(temp->power.dcvs_flags & MSM_VIDC_DCVS_DECR)) + decrement = false; + } + + /* + * keep checking from lowest to highest rate until + * table rate >= requested rate + */ + for (i = core->resource->freq_set.count - 1; i >= 0; i--) { + rate = core->resource->freq_set.freq_tbl[i].freq; + if (rate >= freq) + break; + } + if (i < 0) + i = 0; + if (increment) { + if (i > 0) + rate = core->resource->freq_set.freq_tbl[i - 1].freq; + } else if (decrement) { + if (i < (int)(core->platform->data.freq_tbl_size - 1)) + rate = 
core->resource->freq_set.freq_tbl[i + 1].freq; + } + core->power.clk_freq = (u32)rate; + + i_vpr_p(inst, "%s: clock rate %llu requested %llu increment %d decrement %d\n", + __func__, rate, freq, increment, decrement); + mutex_unlock(&core->lock); + + rc = venus_hfi_scale_clocks(inst, rate); + if (rc) + return rc; + + return 0; +} + +static int msm_vidc_apply_dcvs(struct msm_vidc_inst *inst) +{ + int rc = 0; + int bufs_with_fw = 0; + struct msm_vidc_power *power; + + /* skip dcvs */ + if (!inst->power.dcvs_mode) + return 0; + + power = &inst->power; + + if (is_decode_session(inst)) { + bufs_with_fw = msm_vidc_num_buffers(inst, MSM_VIDC_BUF_OUTPUT, + MSM_VIDC_ATTR_QUEUED); + } else { + bufs_with_fw = msm_vidc_num_buffers(inst, MSM_VIDC_BUF_INPUT, + MSM_VIDC_ATTR_QUEUED); + } + + /* +1 as one buffer is going to be queued after the function */ + bufs_with_fw += 1; + + /* + * DCVS decides clock level based on below algorithm + * + * Limits : + * min_threshold : Buffers required for reference by FW. + * nom_threshold : Midpoint of Min and Max thresholds + * max_threshold : Min Threshold + DCVS extra buffers, allocated + * for smooth flow. + * 1) When buffers outside FW are reaching client's extra buffers, + * FW is slow and will impact pipeline, Increase clock. + * 2) When pending buffers with FW are less than FW requested, + * pipeline has cushion to absorb FW slowness, Decrease clocks. + * 3) When DCVS has engaged(Inc or Dec): + * For decode: + * - Pending buffers with FW transitions past the nom_threshold, + * switch to calculated load, this smoothens the clock transitions. + * For encode: + * - Always switch to calculated load. + * 4) Otherwise maintain previous Load config. 
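 + *
 + * Illustration with assumed numbers (not taken from this patch):
 + * with min_threshold 4 and max_threshold 10, the window is 6 and
 + * nom_threshold is 4 + 6 / 2 = 7. A decode session holding 10
 + * buffers with FW sets MSM_VIDC_DCVS_INCR; once the queue drains
 + * back to 7 or fewer buffers, the flag is cleared and the
 + * calculated load is used again.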
+ */ + if (bufs_with_fw >= power->max_threshold) { + power->dcvs_flags = MSM_VIDC_DCVS_INCR; + goto exit; + } else if (bufs_with_fw < power->min_threshold) { + power->dcvs_flags = MSM_VIDC_DCVS_DECR; + goto exit; + } + + /* encoder: dcvs window handling */ + if (is_encode_session(inst)) { + power->dcvs_flags = 0; + goto exit; + } + + /* decoder: dcvs window handling */ + if ((power->dcvs_flags & MSM_VIDC_DCVS_DECR && bufs_with_fw >= power->nom_threshold) || + (power->dcvs_flags & MSM_VIDC_DCVS_INCR && bufs_with_fw <= power->nom_threshold)) { + power->dcvs_flags = 0; + } + +exit: + i_vpr_p(inst, "dcvs: bufs_with_fw %d th[%d %d %d] flags %#x\n", + bufs_with_fw, power->min_threshold, + power->nom_threshold, power->max_threshold, + power->dcvs_flags); + + return rc; +} + +int msm_vidc_scale_clocks(struct msm_vidc_inst *inst) +{ + struct msm_vidc_core *core; + + core = inst->core; + + if (inst->power.buffer_counter < DCVS_WINDOW || + is_sub_state(inst, MSM_VIDC_DRC) || + is_sub_state(inst, MSM_VIDC_DRAIN)) { + inst->power.min_freq = msm_vidc_max_freq(inst); + inst->power.dcvs_flags = 0; + } else { + inst->power.min_freq = + call_session_op(core, calc_freq, inst, inst->max_input_data_size); + msm_vidc_apply_dcvs(inst); + } + inst->power.curr_freq = inst->power.min_freq; + msm_vidc_set_clocks(inst); + + return 0; +} + +int msm_vidc_scale_power(struct msm_vidc_inst *inst, bool scale_buses) +{ + struct msm_vidc_core *core; + struct msm_vidc_buffer *vbuf; + u32 data_size = 0; + u32 cnt = 0; + u32 fps; + u32 frame_rate, operating_rate; + u32 timestamp_rate = 0, input_rate = 0; + + core = inst->core; + + if (!inst->active) { + /* scale buses for inactive -> active session */ + scale_buses = true; + inst->active = true; + } + + /* + * consider avg. 
filled length in decode batching case + * to avoid overvoting for the entire batch due to single + * frame with huge filled length + */ + if (inst->decode_batch.enable) { + list_for_each_entry(vbuf, &inst->buffers.input.list, list) { + if (vbuf->attr & MSM_VIDC_ATTR_DEFERRED || + vbuf->attr & MSM_VIDC_ATTR_QUEUED) { + data_size += vbuf->data_size; + cnt++; + } + } + if (cnt) + data_size /= cnt; + } else { + list_for_each_entry(vbuf, &inst->buffers.input.list, list) + data_size = max(data_size, vbuf->data_size); + } + inst->max_input_data_size = data_size; + + frame_rate = msm_vidc_get_frame_rate(inst); + operating_rate = msm_vidc_get_operating_rate(inst); + fps = max(frame_rate, operating_rate); + /* + * Consider input queuing rate in power scaling + * because the client may not set the frame rate / operating rate + * and we need to rely on the input queue rate + */ + if (is_decode_session(inst)) { + /* + * When buffer detected fps is more than the client set value by 12.5%, + * utilize buffer detected fps to scale clock.
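 + *
 + * Example with assumed numbers: if the client set 30 fps, the
 + * threshold is 30 + 30 / 8 = 33 (integer math). A timestamp-derived
 + * rate of 60 exceeds it, so fps becomes 60; input_rate can raise
 + * fps further if buffers are queued even faster than that.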
+ */ + timestamp_rate = msm_vidc_get_timestamp_rate(inst); + input_rate = msm_vidc_get_input_rate(inst); + if (timestamp_rate > (fps + fps / 8)) + fps = timestamp_rate; + + if (input_rate > fps) + fps = input_rate; + } + inst->max_rate = fps; + + /* no pending inputs - skip scale power */ + if (!inst->max_input_data_size) + return 0; + + if (msm_vidc_scale_clocks(inst)) + i_vpr_e(inst, "failed to scale clock\n"); + + if (scale_buses) { + if (msm_vidc_scale_buses(inst)) + i_vpr_e(inst, "failed to scale bus\n"); + } + + i_vpr_hp(inst, + "power: inst: clk %lld ddr %d llcc %d dcvs flags %#x fps %u (%u %u %u %u) core: clk %lld ddr %lld llcc %lld\n", + inst->power.curr_freq, inst->power.ddr_bw, + inst->power.sys_cache_bw, inst->power.dcvs_flags, + inst->max_rate, frame_rate, operating_rate, timestamp_rate, + input_rate, core->power.clk_freq, core->power.bw_ddr, + core->power.bw_llcc); + + return 0; +} + +void msm_vidc_dcvs_data_reset(struct msm_vidc_inst *inst) +{ + struct msm_vidc_power *dcvs; + u32 min_count, actual_count, max_count; + + dcvs = &inst->power; + if (is_encode_session(inst)) { + min_count = inst->buffers.input.min_count; + actual_count = inst->buffers.input.actual_count; + max_count = min((min_count + DCVS_ENC_EXTRA_INPUT_BUFFERS), actual_count); + } else if (is_decode_session(inst)) { + min_count = inst->buffers.output.min_count; + actual_count = inst->buffers.output.actual_count; + max_count = min((min_count + DCVS_DEC_EXTRA_OUTPUT_BUFFERS), actual_count); + } else { + i_vpr_e(inst, "%s: invalid domain type %d\n", + __func__, inst->domain); + return; + } + + dcvs->min_threshold = min_count; + dcvs->max_threshold = max_count; + dcvs->dcvs_window = min_count < max_count ? 
max_count - min_count : 0; + dcvs->nom_threshold = dcvs->min_threshold + (dcvs->dcvs_window / 2); + dcvs->dcvs_flags = 0; + + i_vpr_p(inst, "%s: dcvs: thresholds [%d %d %d] flags %#x\n", + __func__, dcvs->min_threshold, + dcvs->nom_threshold, dcvs->max_threshold, + dcvs->dcvs_flags); +} + +void msm_vidc_power_data_reset(struct msm_vidc_inst *inst) +{ + int rc = 0; + + msm_vidc_dcvs_data_reset(inst); + + inst->power.buffer_counter = 0; + inst->power.fw_cr = 0; + inst->power.fw_cf = INT_MAX; + + rc = msm_vidc_scale_power(inst, true); + if (rc) + i_vpr_e(inst, "%s: failed to scale power\n", __func__); +}

From patchwork Fri Jul 28 13:23:25 2023
X-Patchwork-Submitter: Vikash Garodia
X-Patchwork-Id: 13331948
From: Vikash Garodia
Subject: [PATCH 14/33] iris: vidc: add helpers for state management
Date: Fri, 28 Jul 2023 18:53:25 +0530
Message-ID: <1690550624-14642-15-git-send-email-quic_vgarodia@quicinc.com>
This implements the functions to handle different core and instance state transitions.

Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../platform/qcom/iris/vidc/inc/msm_vidc_state.h | 102 ++ .../platform/qcom/iris/vidc/src/msm_vidc_state.c | 1607 ++++++++++++++++++++ 2 files changed, 1709 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_state.h create mode 100644 drivers/media/platform/qcom/iris/vidc/src/msm_vidc_state.c diff --git a/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_state.h b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_state.h new file mode 100644 index 0000000..7fe4fcc --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_state.h @@ -0,0 +1,102 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */ + +#ifndef _MSM_VIDC_STATE_H_ +#define _MSM_VIDC_STATE_H_ + +#include "msm_vidc_internal.h" + +enum msm_vidc_core_state { + MSM_VIDC_CORE_DEINIT, + MSM_VIDC_CORE_INIT_WAIT, + MSM_VIDC_CORE_INIT, + MSM_VIDC_CORE_ERROR, +}; + +enum msm_vidc_core_sub_state { + CORE_SUBSTATE_NONE = 0x0, + CORE_SUBSTATE_POWER_ENABLE = BIT(0), + CORE_SUBSTATE_GDSC_HANDOFF = BIT(1), + CORE_SUBSTATE_PM_SUSPEND = BIT(2), + CORE_SUBSTATE_FW_PWR_CTRL = BIT(3), + CORE_SUBSTATE_PAGE_FAULT = BIT(4), + CORE_SUBSTATE_CPU_WATCHDOG = BIT(5), + CORE_SUBSTATE_VIDEO_UNRESPONSIVE = BIT(6), + CORE_SUBSTATE_MAX = BIT(7), +}; + +enum msm_vidc_core_event_type { + CORE_EVENT_NONE = BIT(0), + CORE_EVENT_UPDATE_SUB_STATE = BIT(1), +}; + +enum msm_vidc_state { + MSM_VIDC_OPEN, + MSM_VIDC_INPUT_STREAMING, + MSM_VIDC_OUTPUT_STREAMING, + MSM_VIDC_STREAMING, + MSM_VIDC_CLOSE, + MSM_VIDC_ERROR, +}; + +#define MSM_VIDC_SUB_STATE_NONE 0 +#define MSM_VIDC_MAX_SUB_STATES 6 +/* + * max value of inst->sub_state if all + * the 6 valid bits are set i.e 111111==>63 + */ +#define MSM_VIDC_MAX_SUB_STATE_VALUE ((1 << MSM_VIDC_MAX_SUB_STATES) - 1) + +enum msm_vidc_sub_state { + MSM_VIDC_DRAIN = BIT(0), + MSM_VIDC_DRC = BIT(1), + MSM_VIDC_DRAIN_LAST_BUFFER = BIT(2), + MSM_VIDC_DRC_LAST_BUFFER = BIT(3), + MSM_VIDC_INPUT_PAUSE = BIT(4), + MSM_VIDC_OUTPUT_PAUSE = BIT(5), +}; + +enum msm_vidc_event { + MSM_VIDC_TRY_FMT, + MSM_VIDC_S_FMT, + MSM_VIDC_REQBUFS, + MSM_VIDC_S_CTRL, + MSM_VIDC_STREAMON, + MSM_VIDC_STREAMOFF, + MSM_VIDC_CMD_START, + MSM_VIDC_CMD_STOP, + MSM_VIDC_BUF_QUEUE, +}; + +/* core statemachine functions */ +enum msm_vidc_allow msm_vidc_allow_core_state_change(struct msm_vidc_core *core, + enum msm_vidc_core_state req_state); +int msm_vidc_update_core_state(struct msm_vidc_core *core, + enum msm_vidc_core_state request_state, const char *func); +bool core_in_valid_state(struct msm_vidc_core *core); +bool is_core_state(struct msm_vidc_core *core, enum msm_vidc_core_state state); +bool is_core_sub_state(struct 
msm_vidc_core *core, enum msm_vidc_core_sub_state sub_state); +const char *core_state_name(enum msm_vidc_core_state state); +const char *core_sub_state_name(enum msm_vidc_core_sub_state sub_state); + +/* inst statemachine functions */ +bool is_drc_pending(struct msm_vidc_inst *inst); +bool is_drain_pending(struct msm_vidc_inst *inst); +int msm_vidc_update_state(struct msm_vidc_inst *inst, + enum msm_vidc_state request_state, const char *func); +int msm_vidc_change_state(struct msm_vidc_inst *inst, + enum msm_vidc_state request_state, const char *func); +int msm_vidc_change_sub_state(struct msm_vidc_inst *inst, + enum msm_vidc_sub_state clear_sub_state, + enum msm_vidc_sub_state set_sub_state, + const char *func); +const char *state_name(enum msm_vidc_state state); +const char *sub_state_name(enum msm_vidc_sub_state sub_state); +bool is_state(struct msm_vidc_inst *inst, enum msm_vidc_state state); +bool is_sub_state(struct msm_vidc_inst *inst, + enum msm_vidc_sub_state sub_state); + +#endif // _MSM_VIDC_STATE_H_ diff --git a/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_state.c b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_state.c new file mode 100644 index 0000000..0361e69 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_state.c @@ -0,0 +1,1607 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include "msm_vidc.h" +#include "msm_vidc_control.h" +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_state.h" +#include "msm_vidc_vb2.h" +#include "venus_hfi.h" + +bool core_in_valid_state(struct msm_vidc_core *core) +{ + return (core->state == MSM_VIDC_CORE_INIT || + core->state == MSM_VIDC_CORE_INIT_WAIT); +} + +bool is_core_state(struct msm_vidc_core *core, enum msm_vidc_core_state state) +{ + return core->state == state; +} + +bool is_drc_pending(struct msm_vidc_inst *inst) +{ + return is_sub_state(inst, MSM_VIDC_DRC) && + is_sub_state(inst, MSM_VIDC_DRC_LAST_BUFFER); +} + +bool is_drain_pending(struct msm_vidc_inst *inst) +{ + return is_sub_state(inst, MSM_VIDC_DRAIN) && + is_sub_state(inst, MSM_VIDC_DRAIN_LAST_BUFFER); +} + +static const char * const core_state_name_arr[] = { + [MSM_VIDC_CORE_DEINIT] = "CORE_DEINIT", + [MSM_VIDC_CORE_INIT_WAIT] = "CORE_INIT_WAIT", + [MSM_VIDC_CORE_INIT] = "CORE_INIT", + [MSM_VIDC_CORE_ERROR] = "CORE_ERROR", +}; + +const char *core_state_name(enum msm_vidc_core_state state) +{ + const char *name = "UNKNOWN STATE"; + + if (state >= ARRAY_SIZE(core_state_name_arr)) + goto exit; + + name = core_state_name_arr[state]; + +exit: + return name; +} + +static const char * const event_name_arr[] = { + [MSM_VIDC_TRY_FMT] = "TRY_FMT", + [MSM_VIDC_S_FMT] = "S_FMT", + [MSM_VIDC_REQBUFS] = "REQBUFS", + [MSM_VIDC_S_CTRL] = "S_CTRL", + [MSM_VIDC_STREAMON] = "STREAMON", + [MSM_VIDC_STREAMOFF] = "STREAMOFF", + [MSM_VIDC_CMD_START] = "CMD_START", + [MSM_VIDC_CMD_STOP] = "CMD_STOP", + [MSM_VIDC_BUF_QUEUE] = "BUF_QUEUE", +}; + +static const char *event_name(enum msm_vidc_event event) +{ + const char *name = "UNKNOWN EVENT"; + + if (event >= ARRAY_SIZE(event_name_arr)) + goto exit; + + name = event_name_arr[event]; + +exit: + return name; +} + +static int __strict_inst_check(struct msm_vidc_inst *inst, const char *function) +{ + bool fatal = !mutex_is_locked(&inst->lock); + + 
WARN_ON(fatal); + + return fatal ? -EINVAL : 0; +} + +struct msm_vidc_core_sub_state_allow { + enum msm_vidc_core_state state; + enum msm_vidc_allow allow; + u32 sub_state_mask; +}; + +static u32 msm_vidc_core_sub_state_mask(enum msm_vidc_core_state state, + u32 allow) +{ + int cnt; + u32 sub_state_mask = 0; + static struct msm_vidc_core_sub_state_allow sub_state[] = { + /* state, allow, sub_state */ + {MSM_VIDC_CORE_DEINIT, MSM_VIDC_ALLOW, CORE_SUBSTATE_POWER_ENABLE | + CORE_SUBSTATE_GDSC_HANDOFF | + CORE_SUBSTATE_PM_SUSPEND | + CORE_SUBSTATE_FW_PWR_CTRL | + CORE_SUBSTATE_PAGE_FAULT | + CORE_SUBSTATE_CPU_WATCHDOG | + CORE_SUBSTATE_VIDEO_UNRESPONSIVE }, + {MSM_VIDC_CORE_DEINIT, MSM_VIDC_IGNORE, CORE_SUBSTATE_POWER_ENABLE | + CORE_SUBSTATE_GDSC_HANDOFF | + CORE_SUBSTATE_PM_SUSPEND | + CORE_SUBSTATE_FW_PWR_CTRL | + CORE_SUBSTATE_PAGE_FAULT | + CORE_SUBSTATE_CPU_WATCHDOG | + CORE_SUBSTATE_VIDEO_UNRESPONSIVE }, + {MSM_VIDC_CORE_INIT_WAIT, MSM_VIDC_ALLOW, CORE_SUBSTATE_POWER_ENABLE | + CORE_SUBSTATE_GDSC_HANDOFF | + CORE_SUBSTATE_PM_SUSPEND | + CORE_SUBSTATE_FW_PWR_CTRL | + CORE_SUBSTATE_PAGE_FAULT | + CORE_SUBSTATE_CPU_WATCHDOG | + CORE_SUBSTATE_VIDEO_UNRESPONSIVE }, + {MSM_VIDC_CORE_INIT_WAIT, MSM_VIDC_DISALLOW, CORE_SUBSTATE_POWER_ENABLE | + CORE_SUBSTATE_GDSC_HANDOFF | + CORE_SUBSTATE_PM_SUSPEND | + CORE_SUBSTATE_FW_PWR_CTRL | + CORE_SUBSTATE_PAGE_FAULT | + CORE_SUBSTATE_CPU_WATCHDOG | + CORE_SUBSTATE_VIDEO_UNRESPONSIVE }, + {MSM_VIDC_CORE_INIT_WAIT, MSM_VIDC_IGNORE, CORE_SUBSTATE_POWER_ENABLE | + CORE_SUBSTATE_GDSC_HANDOFF | + CORE_SUBSTATE_PM_SUSPEND | + CORE_SUBSTATE_FW_PWR_CTRL | + CORE_SUBSTATE_PAGE_FAULT | + CORE_SUBSTATE_CPU_WATCHDOG | + CORE_SUBSTATE_VIDEO_UNRESPONSIVE }, + {MSM_VIDC_CORE_INIT, MSM_VIDC_ALLOW, CORE_SUBSTATE_POWER_ENABLE | + CORE_SUBSTATE_GDSC_HANDOFF | + CORE_SUBSTATE_PM_SUSPEND | + CORE_SUBSTATE_FW_PWR_CTRL | + CORE_SUBSTATE_PAGE_FAULT | + CORE_SUBSTATE_CPU_WATCHDOG | + CORE_SUBSTATE_VIDEO_UNRESPONSIVE }, + {MSM_VIDC_CORE_ERROR, 
MSM_VIDC_ALLOW, CORE_SUBSTATE_POWER_ENABLE |
+			CORE_SUBSTATE_GDSC_HANDOFF |
+			CORE_SUBSTATE_PM_SUSPEND |
+			CORE_SUBSTATE_FW_PWR_CTRL |
+			CORE_SUBSTATE_PAGE_FAULT |
+			CORE_SUBSTATE_CPU_WATCHDOG |
+			CORE_SUBSTATE_VIDEO_UNRESPONSIVE },
+		{MSM_VIDC_CORE_ERROR, MSM_VIDC_DISALLOW, CORE_SUBSTATE_POWER_ENABLE |
+			CORE_SUBSTATE_GDSC_HANDOFF |
+			CORE_SUBSTATE_PM_SUSPEND |
+			CORE_SUBSTATE_FW_PWR_CTRL |
+			CORE_SUBSTATE_PAGE_FAULT |
+			CORE_SUBSTATE_CPU_WATCHDOG |
+			CORE_SUBSTATE_VIDEO_UNRESPONSIVE },
+	};
+
+	for (cnt = 0; cnt < ARRAY_SIZE(sub_state); cnt++) {
+		if (sub_state[cnt].state == state && sub_state[cnt].allow == allow) {
+			sub_state_mask = sub_state[cnt].sub_state_mask;
+			break;
+		}
+	}
+
+	return sub_state_mask;
+}
+
+static int msm_vidc_core_deinit_state(struct msm_vidc_core *core,
+				      enum msm_vidc_core_event_type type,
+				      struct msm_vidc_event_data *data)
+{
+	int rc = 0;
+
+	switch (type) {
+	case CORE_EVENT_UPDATE_SUB_STATE:
+	{
+		u32 req_sub_state = data->edata.uval;
+		u32 allow_mask = msm_vidc_core_sub_state_mask(core->state, MSM_VIDC_ALLOW);
+
+		/* none of the requested substates is supported */
+		if (!(req_sub_state & allow_mask)) {
+			d_vpr_e("%s: invalid substate update request %#x\n",
+				__func__, req_sub_state);
+			return -EINVAL;
+		}
+
+		/* update core substate */
+		core->sub_state |= req_sub_state & allow_mask;
+		return rc;
+	}
+	default: {
+		d_vpr_e("%s: unexpected core event type %u\n",
+			__func__, type);
+		return -EINVAL;
+	}
+	}
+
+	return rc;
+}
+
+static int msm_vidc_core_init_wait_state(struct msm_vidc_core *core,
+					 enum msm_vidc_core_event_type type,
+					 struct msm_vidc_event_data *data)
+{
+	int rc = 0;
+
+	switch (type) {
+	case CORE_EVENT_UPDATE_SUB_STATE:
+	{
+		u32 req_sub_state = data->edata.uval;
+		u32 allow_mask = msm_vidc_core_sub_state_mask(core->state, MSM_VIDC_ALLOW);
+
+		/* none of the requested substates is supported */
+		if (!(req_sub_state & allow_mask)) {
+			d_vpr_e("%s: invalid substate update request %#x\n",
+				__func__, req_sub_state);
+			return -EINVAL;
+		}
+
+		/* update core substate */
+		core->sub_state |= req_sub_state & allow_mask;
+		return rc;
+	}
+	default: {
+		d_vpr_e("%s: unexpected core event type %u\n",
+			__func__, type);
+		return -EINVAL;
+	}
+	}
+
+	return rc;
+}
+
+static int msm_vidc_core_init_state(struct msm_vidc_core *core,
+				    enum msm_vidc_core_event_type type,
+				    struct msm_vidc_event_data *data)
+{
+	int rc = 0;
+
+	switch (type) {
+	case CORE_EVENT_UPDATE_SUB_STATE:
+	{
+		u32 req_sub_state = data->edata.uval;
+		u32 allow_mask = msm_vidc_core_sub_state_mask(core->state, MSM_VIDC_ALLOW);
+
+		/* none of the requested substates is supported */
+		if (!(req_sub_state & allow_mask)) {
+			d_vpr_e("%s: invalid substate update request %#x\n",
+				__func__, req_sub_state);
+			return -EINVAL;
+		}
+
+		/* update core substate */
+		core->sub_state |= req_sub_state & allow_mask;
+		return rc;
+	}
+	default: {
+		d_vpr_e("%s: unexpected core event type %u\n",
+			__func__, type);
+		return -EINVAL;
+	}
+	}
+
+	return rc;
+}
+
+static int msm_vidc_core_error_state(struct msm_vidc_core *core,
+				     enum msm_vidc_core_event_type type,
+				     struct msm_vidc_event_data *data)
+{
+	int rc = 0;
+
+	switch (type) {
+	case CORE_EVENT_UPDATE_SUB_STATE:
+	{
+		u32 req_sub_state = data->edata.uval;
+		u32 allow_mask = msm_vidc_core_sub_state_mask(core->state, MSM_VIDC_ALLOW);
+
+		/* none of the requested substates is supported */
+		if (!(req_sub_state & allow_mask)) {
+			d_vpr_e("%s: invalid substate update request %#x\n",
+				__func__, req_sub_state);
+			return -EINVAL;
+		}
+
+		/* update core substate */
+		core->sub_state |= req_sub_state & allow_mask;
+		return rc;
+	}
+	default: {
+		d_vpr_e("%s: unexpected core event type %u\n",
+			__func__, type);
+		return -EINVAL;
+	}
+	}
+
+	return rc;
+}
+
+struct msm_vidc_core_state_handle {
+	enum msm_vidc_core_state state;
+	int (*handle)(struct
msm_vidc_core *core, + enum msm_vidc_core_event_type type, + struct msm_vidc_event_data *data); +}; + +static struct msm_vidc_core_state_handle + *msm_vidc_get_core_state_handle(enum msm_vidc_core_state req_state) +{ + int cnt; + struct msm_vidc_core_state_handle *core_state_handle = NULL; + static struct msm_vidc_core_state_handle state_handle[] = { + {MSM_VIDC_CORE_DEINIT, msm_vidc_core_deinit_state }, + {MSM_VIDC_CORE_INIT_WAIT, msm_vidc_core_init_wait_state }, + {MSM_VIDC_CORE_INIT, msm_vidc_core_init_state }, + {MSM_VIDC_CORE_ERROR, msm_vidc_core_error_state }, + }; + + for (cnt = 0; cnt < ARRAY_SIZE(state_handle); cnt++) { + if (state_handle[cnt].state == req_state) { + core_state_handle = &state_handle[cnt]; + break; + } + } + + /* if req_state does not exist in the table */ + if (cnt == ARRAY_SIZE(state_handle)) { + d_vpr_e("%s: invalid core state \"%s\" requested\n", + __func__, core_state_name(req_state)); + return core_state_handle; + } + + return core_state_handle; +} + +int msm_vidc_update_core_state(struct msm_vidc_core *core, + enum msm_vidc_core_state request_state, const char *func) +{ + struct msm_vidc_core_state_handle *state_handle = NULL; + int rc = 0; + + /* get core state handler for requested state */ + state_handle = msm_vidc_get_core_state_handle(request_state); + if (!state_handle) + return -EINVAL; + + d_vpr_h("%s: core state changed to %s from %s\n", func, + core_state_name(state_handle->state), core_state_name(core->state)); + + /* finally update core state and handler */ + core->state = state_handle->state; + core->state_handle = state_handle->handle; + + return rc; +} + +struct msm_vidc_core_state_allow { + enum msm_vidc_core_state from; + enum msm_vidc_core_state to; + enum msm_vidc_allow allow; +}; + +enum msm_vidc_allow msm_vidc_allow_core_state_change(struct msm_vidc_core *core, + enum msm_vidc_core_state req_state) +{ + int cnt; + enum msm_vidc_allow allow = MSM_VIDC_DISALLOW; + static struct msm_vidc_core_state_allow state[] = 
{ + /* from, to, allow */ + {MSM_VIDC_CORE_DEINIT, MSM_VIDC_CORE_DEINIT, MSM_VIDC_IGNORE }, + {MSM_VIDC_CORE_DEINIT, MSM_VIDC_CORE_INIT_WAIT, MSM_VIDC_ALLOW }, + {MSM_VIDC_CORE_DEINIT, MSM_VIDC_CORE_INIT, MSM_VIDC_DISALLOW }, + {MSM_VIDC_CORE_DEINIT, MSM_VIDC_CORE_ERROR, MSM_VIDC_IGNORE }, + {MSM_VIDC_CORE_INIT_WAIT, MSM_VIDC_CORE_DEINIT, MSM_VIDC_DISALLOW }, + {MSM_VIDC_CORE_INIT_WAIT, MSM_VIDC_CORE_INIT_WAIT, MSM_VIDC_IGNORE }, + {MSM_VIDC_CORE_INIT_WAIT, MSM_VIDC_CORE_INIT, MSM_VIDC_ALLOW }, + {MSM_VIDC_CORE_INIT_WAIT, MSM_VIDC_CORE_ERROR, MSM_VIDC_ALLOW }, + {MSM_VIDC_CORE_INIT, MSM_VIDC_CORE_DEINIT, MSM_VIDC_ALLOW }, + {MSM_VIDC_CORE_INIT, MSM_VIDC_CORE_INIT_WAIT, MSM_VIDC_DISALLOW }, + {MSM_VIDC_CORE_INIT, MSM_VIDC_CORE_INIT, MSM_VIDC_IGNORE }, + {MSM_VIDC_CORE_INIT, MSM_VIDC_CORE_ERROR, MSM_VIDC_ALLOW }, + {MSM_VIDC_CORE_ERROR, MSM_VIDC_CORE_DEINIT, MSM_VIDC_ALLOW }, + {MSM_VIDC_CORE_ERROR, MSM_VIDC_CORE_INIT_WAIT, MSM_VIDC_IGNORE }, + {MSM_VIDC_CORE_ERROR, MSM_VIDC_CORE_INIT, MSM_VIDC_IGNORE }, + {MSM_VIDC_CORE_ERROR, MSM_VIDC_CORE_ERROR, MSM_VIDC_IGNORE }, + }; + + for (cnt = 0; cnt < ARRAY_SIZE(state); cnt++) { + if (state[cnt].from == core->state && state[cnt].to == req_state) { + allow = state[cnt].allow; + break; + } + } + + return allow; +} + +int msm_vidc_change_core_state(struct msm_vidc_core *core, + enum msm_vidc_core_state request_state, const char *func) +{ + enum msm_vidc_allow allow; + int rc = 0; + + /* core must be locked */ + rc = __strict_check(core, func); + if (rc) { + d_vpr_e("%s(): core was not locked\n", func); + return rc; + } + + /* current and requested state is same */ + if (core->state == request_state) + return 0; + + /* check if requested state movement is allowed */ + allow = msm_vidc_allow_core_state_change(core, request_state); + if (allow == MSM_VIDC_IGNORE) { + d_vpr_h("%s: %s core state change %s -> %s\n", func, + allow_name(allow), core_state_name(core->state), + core_state_name(request_state)); + return 0; + } else if 
(allow == MSM_VIDC_DISALLOW) { + d_vpr_e("%s: %s core state change %s -> %s\n", func, + allow_name(allow), core_state_name(core->state), + core_state_name(request_state)); + return -EINVAL; + } + + /* go ahead and update core state */ + return msm_vidc_update_core_state(core, request_state, func); +} + +bool is_core_sub_state(struct msm_vidc_core *core, enum msm_vidc_core_sub_state sub_state) +{ + return !!(core->sub_state & sub_state); +} + +const char *core_sub_state_name(enum msm_vidc_core_sub_state sub_state) +{ + switch (sub_state) { + case CORE_SUBSTATE_NONE: return "NONE "; + case CORE_SUBSTATE_GDSC_HANDOFF: return "GDSC_HANDOFF "; + case CORE_SUBSTATE_PM_SUSPEND: return "PM_SUSPEND "; + case CORE_SUBSTATE_FW_PWR_CTRL: return "FW_PWR_CTRL "; + case CORE_SUBSTATE_POWER_ENABLE: return "POWER_ENABLE "; + case CORE_SUBSTATE_PAGE_FAULT: return "PAGE_FAULT "; + case CORE_SUBSTATE_CPU_WATCHDOG: return "CPU_WATCHDOG "; + case CORE_SUBSTATE_VIDEO_UNRESPONSIVE: return "VIDEO_UNRESPONSIVE "; + case CORE_SUBSTATE_MAX: return "MAX "; + } + + return "UNKNOWN "; +} + +static int prepare_core_sub_state_name(enum msm_vidc_core_sub_state sub_state, + char *buf, u32 size) +{ + int i = 0; + + if (!buf || !size) + return -EINVAL; + + strscpy(buf, "\0", size); + if (sub_state == CORE_SUBSTATE_NONE) { + strscpy(buf, "CORE_SUBSTATE_NONE", size); + return 0; + } + + for (i = 0; BIT(i) < CORE_SUBSTATE_MAX; i++) { + if (sub_state & BIT(i)) + strlcat(buf, core_sub_state_name(BIT(i)), size); + } + + return 0; +} + +static int msm_vidc_update_core_sub_state(struct msm_vidc_core *core, + enum msm_vidc_core_sub_state sub_state, + const char *func) +{ + struct msm_vidc_event_data data; + char sub_state_name[MAX_NAME_LENGTH]; + int ret = 0, rc = 0; + + /* no substate update */ + if (!sub_state) + return 0; + + /* invoke update core substate event */ + memset(&data, 0, sizeof(struct msm_vidc_event_data)); + data.edata.uval = sub_state; + rc = core->state_handle(core, 
CORE_EVENT_UPDATE_SUB_STATE, &data); + if (rc) { + ret = prepare_core_sub_state_name(sub_state, sub_state_name, + sizeof(sub_state_name) - 1); + if (!ret) + d_vpr_e("%s: state %s, requested invalid core substate %s\n", + func, core_state_name(core->state), sub_state_name); + return rc; + } + + return rc; +} + +int msm_vidc_change_core_sub_state(struct msm_vidc_core *core, + enum msm_vidc_core_sub_state clear_sub_state, + enum msm_vidc_core_sub_state set_sub_state, + const char *func) +{ + int rc = 0; + enum msm_vidc_core_sub_state prev_sub_state; + + /* core must be locked */ + rc = __strict_check(core, func); + if (rc) { + d_vpr_e("%s(): core was not locked\n", func); + return rc; + } + + /* sanitize core state handler */ + if (!core->state_handle) { + d_vpr_e("%s: invalid core state handle\n", __func__); + return -EINVAL; + } + + /* final value will not change */ + if (clear_sub_state == set_sub_state) + return 0; + + /* sanitize clear & set value */ + if (set_sub_state > CORE_SUBSTATE_MAX || + clear_sub_state > CORE_SUBSTATE_MAX) { + d_vpr_e("%s: invalid sub states. 
clear %#x or set %#x\n", + func, clear_sub_state, set_sub_state); + return -EINVAL; + } + + prev_sub_state = core->sub_state; + + /* set sub state */ + rc = msm_vidc_update_core_sub_state(core, set_sub_state, func); + if (rc) + return rc; + + /* check if all core substates updated */ + if ((core->sub_state & set_sub_state) != set_sub_state) + d_vpr_e("%s: all substates not updated %#x, expected %#x\n", + func, core->sub_state & set_sub_state, set_sub_state); + + /* clear sub state */ + core->sub_state &= ~clear_sub_state; + + /* print substates only when there is a change */ + if (core->sub_state != prev_sub_state) { + rc = prepare_core_sub_state_name(core->sub_state, core->sub_state_name, + sizeof(core->sub_state_name) - 1); + if (!rc) + d_vpr_h("%s: core sub state changed to %s\n", func, core->sub_state_name); + } + + return 0; +} + +/* do not modify the state names as it is used in test scripts */ +static const char * const state_name_arr[] = { + [MSM_VIDC_OPEN] = "OPEN", + [MSM_VIDC_INPUT_STREAMING] = "INPUT_STREAMING", + [MSM_VIDC_OUTPUT_STREAMING] = "OUTPUT_STREAMING", + [MSM_VIDC_STREAMING] = "STREAMING", + [MSM_VIDC_CLOSE] = "CLOSE", + [MSM_VIDC_ERROR] = "ERROR", +}; + +const char *state_name(enum msm_vidc_state state) +{ + const char *name = "UNKNOWN STATE"; + + if (state >= ARRAY_SIZE(state_name_arr)) + goto exit; + + name = state_name_arr[state]; + +exit: + return name; +} + +bool is_state(struct msm_vidc_inst *inst, enum msm_vidc_state state) +{ + return inst->state == state; +} + +bool is_sub_state(struct msm_vidc_inst *inst, enum msm_vidc_sub_state sub_state) +{ + return (inst->sub_state & sub_state); +} + +const char *sub_state_name(enum msm_vidc_sub_state sub_state) +{ + switch (sub_state) { + case MSM_VIDC_DRAIN: return "DRAIN "; + case MSM_VIDC_DRC: return "DRC "; + case MSM_VIDC_DRAIN_LAST_BUFFER: return "DRAIN_LAST_BUFFER "; + case MSM_VIDC_DRC_LAST_BUFFER: return "DRC_LAST_BUFFER "; + case MSM_VIDC_INPUT_PAUSE: return "INPUT_PAUSE "; + case 
MSM_VIDC_OUTPUT_PAUSE: return "OUTPUT_PAUSE "; + } + + return "SUB_STATE_NONE"; +} + +static int prepare_sub_state_name(enum msm_vidc_sub_state sub_state, + char *buf, u32 size) +{ + int i = 0; + + if (!buf || !size) + return -EINVAL; + + strscpy(buf, "\0", size); + if (sub_state == MSM_VIDC_SUB_STATE_NONE) { + strscpy(buf, "SUB_STATE_NONE", size); + return 0; + } + + for (i = 0; i < MSM_VIDC_MAX_SUB_STATES; i++) { + if (sub_state & BIT(i)) + strlcat(buf, sub_state_name(BIT(i)), size); + } + + return 0; +} + +struct msm_vidc_state_allow { + enum msm_vidc_state from; + enum msm_vidc_state to; + enum msm_vidc_allow allow; +}; + +static enum msm_vidc_allow msm_vidc_allow_state_change(struct msm_vidc_inst *inst, + enum msm_vidc_state req_state) +{ + int cnt; + enum msm_vidc_allow allow = MSM_VIDC_DISALLOW; + static struct msm_vidc_state_allow state[] = { + /* from, to, allow */ + {MSM_VIDC_OPEN, MSM_VIDC_OPEN, MSM_VIDC_IGNORE }, + {MSM_VIDC_OPEN, MSM_VIDC_INPUT_STREAMING, MSM_VIDC_ALLOW }, + {MSM_VIDC_OPEN, MSM_VIDC_OUTPUT_STREAMING, MSM_VIDC_ALLOW }, + {MSM_VIDC_OPEN, MSM_VIDC_STREAMING, MSM_VIDC_DISALLOW }, + {MSM_VIDC_OPEN, MSM_VIDC_CLOSE, MSM_VIDC_ALLOW }, + {MSM_VIDC_OPEN, MSM_VIDC_ERROR, MSM_VIDC_ALLOW }, + + {MSM_VIDC_INPUT_STREAMING, MSM_VIDC_OPEN, MSM_VIDC_ALLOW }, + {MSM_VIDC_INPUT_STREAMING, MSM_VIDC_INPUT_STREAMING, MSM_VIDC_IGNORE }, + {MSM_VIDC_INPUT_STREAMING, MSM_VIDC_OUTPUT_STREAMING, MSM_VIDC_DISALLOW }, + {MSM_VIDC_INPUT_STREAMING, MSM_VIDC_STREAMING, MSM_VIDC_ALLOW }, + {MSM_VIDC_INPUT_STREAMING, MSM_VIDC_CLOSE, MSM_VIDC_ALLOW }, + {MSM_VIDC_INPUT_STREAMING, MSM_VIDC_ERROR, MSM_VIDC_ALLOW }, + + {MSM_VIDC_OUTPUT_STREAMING, MSM_VIDC_OPEN, MSM_VIDC_ALLOW }, + {MSM_VIDC_OUTPUT_STREAMING, MSM_VIDC_INPUT_STREAMING, MSM_VIDC_DISALLOW }, + {MSM_VIDC_OUTPUT_STREAMING, MSM_VIDC_OUTPUT_STREAMING, MSM_VIDC_IGNORE }, + {MSM_VIDC_OUTPUT_STREAMING, MSM_VIDC_STREAMING, MSM_VIDC_ALLOW }, + {MSM_VIDC_OUTPUT_STREAMING, MSM_VIDC_CLOSE, MSM_VIDC_ALLOW }, + 
{MSM_VIDC_OUTPUT_STREAMING, MSM_VIDC_ERROR, MSM_VIDC_ALLOW }, + + {MSM_VIDC_STREAMING, MSM_VIDC_OPEN, MSM_VIDC_DISALLOW }, + {MSM_VIDC_STREAMING, MSM_VIDC_INPUT_STREAMING, MSM_VIDC_ALLOW }, + {MSM_VIDC_STREAMING, MSM_VIDC_OUTPUT_STREAMING, MSM_VIDC_ALLOW }, + {MSM_VIDC_STREAMING, MSM_VIDC_STREAMING, MSM_VIDC_IGNORE }, + {MSM_VIDC_STREAMING, MSM_VIDC_CLOSE, MSM_VIDC_ALLOW }, + {MSM_VIDC_STREAMING, MSM_VIDC_ERROR, MSM_VIDC_ALLOW }, + + {MSM_VIDC_CLOSE, MSM_VIDC_OPEN, MSM_VIDC_DISALLOW }, + {MSM_VIDC_CLOSE, MSM_VIDC_INPUT_STREAMING, MSM_VIDC_DISALLOW }, + {MSM_VIDC_CLOSE, MSM_VIDC_OUTPUT_STREAMING, MSM_VIDC_DISALLOW }, + {MSM_VIDC_CLOSE, MSM_VIDC_STREAMING, MSM_VIDC_DISALLOW }, + {MSM_VIDC_CLOSE, MSM_VIDC_CLOSE, MSM_VIDC_IGNORE }, + {MSM_VIDC_CLOSE, MSM_VIDC_ERROR, MSM_VIDC_IGNORE }, + + {MSM_VIDC_ERROR, MSM_VIDC_OPEN, MSM_VIDC_IGNORE }, + {MSM_VIDC_ERROR, MSM_VIDC_INPUT_STREAMING, MSM_VIDC_IGNORE }, + {MSM_VIDC_ERROR, MSM_VIDC_OUTPUT_STREAMING, MSM_VIDC_IGNORE }, + {MSM_VIDC_ERROR, MSM_VIDC_STREAMING, MSM_VIDC_IGNORE }, + {MSM_VIDC_ERROR, MSM_VIDC_CLOSE, MSM_VIDC_IGNORE }, + {MSM_VIDC_ERROR, MSM_VIDC_ERROR, MSM_VIDC_IGNORE }, + }; + + for (cnt = 0; cnt < ARRAY_SIZE(state); cnt++) { + if (state[cnt].from == inst->state && state[cnt].to == req_state) { + allow = state[cnt].allow; + break; + } + } + + return allow; +} + +static int msm_vidc_open_state(struct msm_vidc_inst *inst, + enum msm_vidc_event event, void *data) +{ + int rc = 0; + + /* inst must be locked */ + rc = __strict_inst_check(inst, __func__); + if (rc) { + i_vpr_e(inst, "%s(): inst was not locked\n", __func__); + return -EINVAL; + } + + switch (event) { + case MSM_VIDC_TRY_FMT: + { + struct v4l2_format *f = (struct v4l2_format *)data; + + /* allow try_fmt request in open state */ + rc = msm_vidc_try_fmt(inst, f); + if (rc) + return rc; + break; + } + case MSM_VIDC_S_FMT: + { + struct v4l2_format *f = (struct v4l2_format *)data; + + /* allow s_fmt request in open state */ + rc = msm_vidc_s_fmt(inst, f); + 
if (rc) + return rc; + break; + } + case MSM_VIDC_S_CTRL: + { + struct v4l2_ctrl *ctrl = (struct v4l2_ctrl *)data; + + /* allow set_control request in open state */ + rc = msm_vidc_s_ctrl(inst, ctrl); + if (rc) + return rc; + break; + } + case MSM_VIDC_REQBUFS: + { + struct v4l2_requestbuffers *b = (struct v4l2_requestbuffers *)data; + + /* allow reqbufs request in open state */ + rc = msm_vidc_reqbufs(inst, b); + if (rc) + return rc; + break; + } + case MSM_VIDC_STREAMON: + { + struct vb2_queue *q = (struct vb2_queue *)data; + + /* allow streamon request in open state */ + rc = msm_vidc_start_streaming(inst, q); + if (rc) + return rc; + break; + } + case MSM_VIDC_STREAMOFF: + { + struct vb2_queue *q = (struct vb2_queue *)data; + + /* ignore streamoff request in open state */ + i_vpr_h(inst, "%s: streamoff of (%s) ignored in state (%s)\n", + __func__, v4l2_type_name(q->type), state_name(inst->state)); + break; + } + case MSM_VIDC_CMD_START: + { + /* disallow start cmd request in open state */ + i_vpr_e(inst, "%s: (%s) not allowed, sub_state (%s)\n", + __func__, event_name(event), inst->sub_state_name); + + return -EBUSY; + } + case MSM_VIDC_CMD_STOP: + { + /* ignore stop cmd request in open state */ + i_vpr_h(inst, "%s: (%s) ignored, sub_state (%s)\n", + __func__, event_name(event), inst->sub_state_name); + break; + } + case MSM_VIDC_BUF_QUEUE: + { + struct msm_vidc_buffer *buf = (struct msm_vidc_buffer *)data; + + /* defer qbuf request in open state */ + print_vidc_buffer(VIDC_LOW, "low ", "qbuf deferred", inst, buf); + break; + } + default: + { + i_vpr_e(inst, "%s: unexpected event %s\n", __func__, event_name(event)); + return -EINVAL; + } + } + + return rc; +} + +static int msm_vidc_input_streaming_state(struct msm_vidc_inst *inst, + enum msm_vidc_event event, void *data) +{ + int rc = 0; + + /* inst must be locked */ + rc = __strict_inst_check(inst, __func__); + if (rc) { + i_vpr_e(inst, "%s(): inst was not locked\n", __func__); + return -EINVAL; + } + + switch 
(event) { + case MSM_VIDC_BUF_QUEUE: + { + struct msm_vidc_buffer *buf = (struct msm_vidc_buffer *)data; + + /* disallow */ + if (!is_input_buffer(buf->type) && !is_output_buffer(buf->type)) { + i_vpr_e(inst, "%s: invalid buf type %u\n", __func__, buf->type); + return -EINVAL; + } + + /* defer output port */ + if (is_output_buffer(buf->type)) { + print_vidc_buffer(VIDC_LOW, "low ", "qbuf deferred", inst, buf); + return 0; + } + + rc = msm_vidc_buf_queue(inst, buf); + if (rc) + return rc; + break; + } + case MSM_VIDC_TRY_FMT: + { + struct v4l2_format *f = (struct v4l2_format *)data; + + /* disallow */ + if (f->type == INPUT_MPLANE) { + i_vpr_e(inst, "%s: (%s) not allowed for (%s) port\n", + __func__, event_name(event), v4l2_type_name(f->type)); + return -EBUSY; + } + + rc = msm_vidc_try_fmt(inst, f); + if (rc) + return rc; + break; + } + case MSM_VIDC_S_FMT: + { + struct v4l2_format *f = (struct v4l2_format *)data; + + /* disallow */ + if (f->type == INPUT_MPLANE) { + i_vpr_e(inst, "%s: (%s) not allowed for (%s) port\n", + __func__, event_name(event), v4l2_type_name(f->type)); + return -EBUSY; + } + + rc = msm_vidc_s_fmt(inst, f); + if (rc) + return rc; + break; + } + case MSM_VIDC_S_CTRL: + { + struct v4l2_ctrl *ctrl = (struct v4l2_ctrl *)data; + u32 cap_id = msm_vidc_get_cap_id(inst, ctrl->id); + + if (cap_id == INST_CAP_NONE) { + i_vpr_e(inst, "%s: invalid cap_id %u\n", __func__, cap_id); + return -EINVAL; + } + + /* disallow */ + if (is_decode_session(inst)) { + /* check dynamic allowed if master port is streaming */ + if (!(inst->capabilities[cap_id].flags & CAP_FLAG_DYNAMIC_ALLOWED)) { + i_vpr_e(inst, "%s: cap_id %#x not allowed in state %s\n", + __func__, cap_id, state_name(inst->state)); + return -EINVAL; + } + } + + rc = msm_vidc_s_ctrl(inst, ctrl); + if (rc) + return rc; + break; + } + case MSM_VIDC_REQBUFS: + { + struct v4l2_requestbuffers *b = (struct v4l2_requestbuffers *)data; + + /* disallow */ + if (b->type == INPUT_MPLANE) { + i_vpr_e(inst, "%s: 
(%s) not allowed for (%s) port\n",
+			__func__, event_name(event), v4l2_type_name(b->type));
+			return -EBUSY;
+		}
+
+		rc = msm_vidc_reqbufs(inst, b);
+		if (rc)
+			return rc;
+		break;
+	}
+	case MSM_VIDC_STREAMON:
+	{
+		struct vb2_queue *q = (struct vb2_queue *)data;
+
+		/* disallow */
+		if (q->type == INPUT_MPLANE) {
+			i_vpr_e(inst, "%s: (%s) not allowed for (%s) type\n",
+				__func__, event_name(event), v4l2_type_name(q->type));
+			return -EBUSY;
+		}
+
+		rc = msm_vidc_start_streaming(inst, q);
+		if (rc)
+			return rc;
+		break;
+	}
+	case MSM_VIDC_STREAMOFF:
+	{
+		struct vb2_queue *q = (struct vb2_queue *)data;
+
+		/* ignore */
+		if (q->type == OUTPUT_MPLANE) {
+			i_vpr_h(inst, "%s: streamoff of (%s) ignored in state (%s)\n",
+				__func__, v4l2_type_name(q->type), state_name(inst->state));
+			return 0;
+		}
+
+		/* sanitize type field */
+		if (q->type != INPUT_MPLANE) {
+			i_vpr_e(inst, "%s: invalid type %d\n", __func__, q->type);
+			return -EINVAL;
+		}
+
+		rc = msm_vidc_stop_streaming(inst, q);
+		if (rc)
+			return rc;
+		break;
+	}
+	case MSM_VIDC_CMD_START:
+	{
+		/* disallow if START called for non DRC/drain cases */
+		if (!is_drc_pending(inst) && !is_drain_pending(inst)) {
+			i_vpr_e(inst, "%s: (%s) not allowed, sub_state (%s)\n",
+				__func__, event_name(event), inst->sub_state_name);
+			return -EBUSY;
+		}
+
+		/* client would call start(resume) to complete DRC/drain sequence */
+		rc = msm_vidc_start_cmd(inst);
+		if (rc)
+			return rc;
+		break;
+	}
+	case MSM_VIDC_CMD_STOP:
+	{
+		/* back-to-back drain not allowed */
+		if (is_sub_state(inst, MSM_VIDC_DRAIN)) {
+			i_vpr_e(inst, "%s: drain (%s) not allowed, sub_state (%s)\n",
+				__func__, event_name(event), inst->sub_state_name);
+			return -EBUSY;
+		}
+
+		rc = msm_vidc_stop_cmd(inst);
+		if (rc)
+			return rc;
+		break;
+	}
+	default:
+	{
+		i_vpr_e(inst, "%s: unexpected event %s\n", __func__, event_name(event));
+		return -EINVAL;
+	}
+	}
+
+	return rc;
+}
+
+static int msm_vidc_output_streaming_state(struct msm_vidc_inst *inst,
+					   enum
msm_vidc_event event, void *data) +{ + int rc = 0; + + /* inst must be locked */ + rc = __strict_inst_check(inst, __func__); + if (rc) { + i_vpr_e(inst, "%s(): inst was not locked\n", __func__); + return -EINVAL; + } + + switch (event) { + case MSM_VIDC_BUF_QUEUE: + { + struct msm_vidc_buffer *buf = (struct msm_vidc_buffer *)data; + + /* disallow */ + if (!is_input_buffer(buf->type) && !is_output_buffer(buf->type)) { + i_vpr_e(inst, "%s: invalid buf type %u\n", __func__, buf->type); + return -EINVAL; + } + + /* defer input port */ + if (is_input_buffer(buf->type)) { + print_vidc_buffer(VIDC_LOW, "low ", "qbuf deferred", inst, buf); + return 0; + } + + rc = msm_vidc_buf_queue(inst, buf); + if (rc) + return rc; + break; + } + case MSM_VIDC_TRY_FMT: + { + struct v4l2_format *f = (struct v4l2_format *)data; + + /* disallow */ + if (f->type == OUTPUT_MPLANE) { + i_vpr_e(inst, "%s: (%s) not allowed for (%s) port\n", + __func__, event_name(event), v4l2_type_name(f->type)); + return -EBUSY; + } + + rc = msm_vidc_try_fmt(inst, f); + if (rc) + return rc; + break; + } + case MSM_VIDC_S_FMT: + { + struct v4l2_format *f = (struct v4l2_format *)data; + + /* disallow */ + if (f->type == OUTPUT_MPLANE) { + i_vpr_e(inst, "%s: (%s) not allowed for (%s) port\n", + __func__, event_name(event), v4l2_type_name(f->type)); + return -EBUSY; + } + + rc = msm_vidc_s_fmt(inst, f); + if (rc) + return rc; + break; + } + case MSM_VIDC_S_CTRL: + { + struct v4l2_ctrl *ctrl = (struct v4l2_ctrl *)data; + u32 cap_id = msm_vidc_get_cap_id(inst, ctrl->id); + + if (cap_id == INST_CAP_NONE) { + i_vpr_e(inst, "%s: invalid cap_id %u\n", __func__, cap_id); + return -EINVAL; + } + + /* disallow */ + if (is_encode_session(inst)) { + /* check dynamic allowed if master port is streaming */ + if (!(inst->capabilities[cap_id].flags & CAP_FLAG_DYNAMIC_ALLOWED)) { + i_vpr_e(inst, "%s: cap_id %#x not allowed in state %s\n", + __func__, cap_id, state_name(inst->state)); + return -EINVAL; + } + } + + rc = 
msm_vidc_s_ctrl(inst, ctrl); + if (rc) + return rc; + break; + } + case MSM_VIDC_REQBUFS: + { + struct v4l2_requestbuffers *b = (struct v4l2_requestbuffers *)data; + + /* disallow */ + if (b->type == OUTPUT_MPLANE) { + i_vpr_e(inst, "%s: (%s) not allowed for (%s) port\n", + __func__, event_name(event), v4l2_type_name(b->type)); + return -EBUSY; + } + + rc = msm_vidc_reqbufs(inst, b); + if (rc) + return rc; + break; + } + case MSM_VIDC_STREAMON: + { + struct vb2_queue *q = (struct vb2_queue *)data; + + /* disallow */ + if (q->type == OUTPUT_MPLANE) { + i_vpr_e(inst, "%s: (%s) not allowed for (%s) type\n", + __func__, event_name(event), v4l2_type_name(q->type)); + return -EBUSY; + } + + rc = msm_vidc_start_streaming(inst, q); + if (rc) + return rc; + break; + } + case MSM_VIDC_STREAMOFF: + { + struct vb2_queue *q = (struct vb2_queue *)data; + + /* ignore */ + if (q->type == INPUT_MPLANE) { + i_vpr_h(inst, "%s: streamoff of (%s) ignored in state (%s)\n", + __func__, v4l2_type_name(q->type), state_name(inst->state)); + return 0; + } + + /* sanitize type field */ + if (q->type != OUTPUT_MPLANE) { + i_vpr_e(inst, "%s: invalid type %d\n", __func__, q->type); + return -EINVAL; + } + + rc = msm_vidc_stop_streaming(inst, q); + if (rc) + return rc; + break; + } + case MSM_VIDC_CMD_START: + { + /* disallow if START called for non DRC/drain cases */ + if (!is_drc_pending(inst) && !is_drain_pending(inst)) { + i_vpr_e(inst, "%s: (%s) not allowed, sub_state (%s)\n", + __func__, event_name(event), inst->sub_state_name); + return -EBUSY; + } + + /* client would call start(resume) to complete DRC/drain sequence */ + rc = msm_vidc_start_cmd(inst); + if (rc) + return rc; + break; + } + case MSM_VIDC_CMD_STOP: + { + /* drain not allowed as input is not streaming */ + i_vpr_e(inst, "%s: drain (%s) not allowed, sub state %s\n", + __func__, event_name(event), inst->sub_state_name); + return -EBUSY; + } + default: { + i_vpr_e(inst, "%s: unexpected event %s\n", __func__, event_name(event)); 
+ return -EINVAL; + } + } + + return rc; +} + +static int msm_vidc_streaming_state(struct msm_vidc_inst *inst, + enum msm_vidc_event event, void *data) +{ + int rc = 0; + + /* inst must be locked */ + rc = __strict_inst_check(inst, __func__); + if (rc) { + i_vpr_e(inst, "%s(): inst was not locked\n", __func__); + return -EINVAL; + } + + switch (event) { + case MSM_VIDC_BUF_QUEUE: + { + struct msm_vidc_buffer *buf = (struct msm_vidc_buffer *)data; + + /* disallow */ + if (!is_input_buffer(buf->type) && !is_output_buffer(buf->type)) { + i_vpr_e(inst, "%s: invalid buf type %u\n", __func__, buf->type); + return -EINVAL; + } + + rc = msm_vidc_buf_queue(inst, buf); + if (rc) + return rc; + break; + } + case MSM_VIDC_S_CTRL: + { + struct v4l2_ctrl *ctrl = (struct v4l2_ctrl *)data; + u32 cap_id = msm_vidc_get_cap_id(inst, ctrl->id); + + if (cap_id == INST_CAP_NONE) { + i_vpr_e(inst, "%s: invalid cap_id %u\n", __func__, cap_id); + return -EINVAL; + } + + /* disallow */ + if (!(inst->capabilities[cap_id].flags & CAP_FLAG_DYNAMIC_ALLOWED)) { + i_vpr_e(inst, "%s: cap_id %#x not allowed in state %s\n", + __func__, cap_id, state_name(inst->state)); + return -EINVAL; + } + + rc = msm_vidc_s_ctrl(inst, ctrl); + if (rc) + return rc; + break; + } + case MSM_VIDC_STREAMOFF: + { + struct vb2_queue *q = (struct vb2_queue *)data; + + /* sanitize type field */ + if (q->type != INPUT_MPLANE && q->type != OUTPUT_MPLANE) { + i_vpr_e(inst, "%s: invalid type %d\n", __func__, q->type); + return -EINVAL; + } + + rc = msm_vidc_stop_streaming(inst, q); + if (rc) + return rc; + break; + } + case MSM_VIDC_CMD_START: + { + /* disallow if START called for non DRC/drain cases */ + if (!is_drc_pending(inst) && !is_drain_pending(inst)) { + i_vpr_e(inst, "%s: (%s) not allowed, sub_state (%s)\n", + __func__, event_name(event), inst->sub_state_name); + return -EBUSY; + } + + /* client would call start(resume) to complete DRC/drain sequence */ + rc = msm_vidc_start_cmd(inst); + if (rc) + return rc; + break; 
+	}
+	case MSM_VIDC_CMD_STOP:
+	{
+		/* back-to-back drain not allowed */
+		if (is_sub_state(inst, MSM_VIDC_DRAIN)) {
+			i_vpr_e(inst, "%s: drain (%s) not allowed, sub_state (%s)\n",
+				__func__, event_name(event), inst->sub_state_name);
+			return -EBUSY;
+		}
+
+		rc = msm_vidc_stop_cmd(inst);
+		if (rc)
+			return rc;
+		break;
+	}
+	default: {
+		i_vpr_e(inst, "%s: unexpected event %s\n", __func__, event_name(event));
+		return -EINVAL;
+	}
+	}
+
+	return rc;
+}
+
+static int msm_vidc_close_state(struct msm_vidc_inst *inst,
+				enum msm_vidc_event event, void *data)
+{
+	int rc = 0;
+
+	/* inst must be locked */
+	rc = __strict_inst_check(inst, __func__);
+	if (rc) {
+		i_vpr_e(inst, "%s(): inst was not locked\n", __func__);
+		return -EINVAL;
+	}
+
+	switch (event) {
+	case MSM_VIDC_STREAMOFF:
+	{
+		struct vb2_queue *q = (struct vb2_queue *)data;
+
+		rc = msm_vidc_stop_streaming(inst, q);
+		if (rc)
+			return rc;
+		break;
+	}
+	default: {
+		i_vpr_e(inst, "%s: unexpected event %s\n", __func__, event_name(event));
+		return -EINVAL;
+	}
+	}
+
+	return rc;
+}
+
+static int msm_vidc_error_state(struct msm_vidc_inst *inst,
+				enum msm_vidc_event event, void *data)
+{
+	int rc = 0;
+
+	/* inst must be locked */
+	rc = __strict_inst_check(inst, __func__);
+	if (rc) {
+		i_vpr_e(inst, "%s(): inst was not locked\n", __func__);
+		return -EINVAL;
+	}
+
+	switch (event) {
+	case MSM_VIDC_STREAMOFF:
+	{
+		struct vb2_queue *q = (struct vb2_queue *)data;
+
+		rc = msm_vidc_stop_streaming(inst, q);
+		if (rc)
+			return rc;
+		break;
+	}
+	default: {
+		i_vpr_e(inst, "%s: unexpected event %s\n", __func__, event_name(event));
+		return -EINVAL;
+	}
+	}
+
+	return rc;
+}
+
+struct msm_vidc_state_handle {
+	enum msm_vidc_state state;
+	int (*handle)(struct msm_vidc_inst *inst,
+		      enum msm_vidc_event event, void *data);
+};
+
+static struct msm_vidc_state_handle *msm_vidc_get_state_handle(struct msm_vidc_inst *inst,
+							       enum msm_vidc_state req_state)
+{
+	int cnt;
+	struct msm_vidc_state_handle
*inst_state_handle = NULL; + static struct msm_vidc_state_handle state_handle[] = { + {MSM_VIDC_OPEN, msm_vidc_open_state }, + {MSM_VIDC_INPUT_STREAMING, msm_vidc_input_streaming_state }, + {MSM_VIDC_OUTPUT_STREAMING, msm_vidc_output_streaming_state }, + {MSM_VIDC_STREAMING, msm_vidc_streaming_state }, + {MSM_VIDC_CLOSE, msm_vidc_close_state }, + {MSM_VIDC_ERROR, msm_vidc_error_state }, + }; + + for (cnt = 0; cnt < ARRAY_SIZE(state_handle); cnt++) { + if (state_handle[cnt].state == req_state) { + inst_state_handle = &state_handle[cnt]; + break; + } + } + + /* check if req_state does not exist in the table */ + if (cnt == ARRAY_SIZE(state_handle)) { + i_vpr_e(inst, "%s: invalid state %s\n", __func__, state_name(req_state)); + return inst_state_handle; + } + + return inst_state_handle; +} + +int msm_vidc_update_state(struct msm_vidc_inst *inst, + enum msm_vidc_state request_state, const char *func) +{ + struct msm_vidc_state_handle *state_handle = NULL; + int rc = 0; + + /* get inst state handler for requested state */ + state_handle = msm_vidc_get_state_handle(inst, request_state); + if (!state_handle) + return -EINVAL; + + if (request_state == MSM_VIDC_ERROR) + i_vpr_e(inst, FMT_STRING_STATE_CHANGE, + func, state_name(request_state), state_name(inst->state)); + else + i_vpr_h(inst, FMT_STRING_STATE_CHANGE, + func, state_name(request_state), state_name(inst->state)); + + /* finally update inst state and handler */ + inst->state = state_handle->state; + inst->event_handle = state_handle->handle; + + return rc; +} + +int msm_vidc_change_state(struct msm_vidc_inst *inst, + enum msm_vidc_state request_state, const char *func) +{ + enum msm_vidc_allow allow; + int rc; + + if (is_session_error(inst)) { + i_vpr_h(inst, + "%s: inst is in bad state, can not change state to %s\n", + func, state_name(request_state)); + return 0; + } + + /* current and requested state is same */ + if (inst->state == request_state) + return 0; + + /* check if requested state movement is allowed 
*/ + allow = msm_vidc_allow_state_change(inst, request_state); + if (allow != MSM_VIDC_ALLOW) { + i_vpr_e(inst, "%s: %s state change %s -> %s\n", func, + allow_name(allow), state_name(inst->state), + state_name(request_state)); + return (allow == MSM_VIDC_DISALLOW ? -EINVAL : 0); + } + + /* go ahead and update inst state */ + rc = msm_vidc_update_state(inst, request_state, func); + if (rc) + return rc; + + return 0; +} + +struct msm_vidc_sub_state_allow { + enum msm_vidc_state state; + enum msm_vidc_allow allow; + u32 sub_state_mask; +}; + +static int msm_vidc_set_sub_state(struct msm_vidc_inst *inst, + enum msm_vidc_sub_state sub_state, const char *func) +{ + char sub_state_name[MAX_NAME_LENGTH]; + int cnt, rc = 0; + static struct msm_vidc_sub_state_allow sub_state_allow[] = { + /* state, allow, sub_state */ + {MSM_VIDC_OPEN, MSM_VIDC_DISALLOW, MSM_VIDC_DRC | + MSM_VIDC_DRAIN | + MSM_VIDC_DRC_LAST_BUFFER | + MSM_VIDC_DRAIN_LAST_BUFFER | + MSM_VIDC_INPUT_PAUSE | + MSM_VIDC_OUTPUT_PAUSE }, + + {MSM_VIDC_INPUT_STREAMING, MSM_VIDC_DISALLOW, MSM_VIDC_DRC_LAST_BUFFER | + MSM_VIDC_DRAIN_LAST_BUFFER | + MSM_VIDC_OUTPUT_PAUSE }, + {MSM_VIDC_INPUT_STREAMING, MSM_VIDC_ALLOW, MSM_VIDC_DRC | + MSM_VIDC_DRAIN | + MSM_VIDC_INPUT_PAUSE }, + + {MSM_VIDC_OUTPUT_STREAMING, MSM_VIDC_DISALLOW, MSM_VIDC_DRC | + MSM_VIDC_DRAIN | + MSM_VIDC_INPUT_PAUSE }, + {MSM_VIDC_OUTPUT_STREAMING, MSM_VIDC_ALLOW, MSM_VIDC_DRC_LAST_BUFFER | + MSM_VIDC_DRAIN_LAST_BUFFER | + MSM_VIDC_OUTPUT_PAUSE }, + + {MSM_VIDC_STREAMING, MSM_VIDC_ALLOW, MSM_VIDC_DRC | + MSM_VIDC_DRAIN | + MSM_VIDC_DRC_LAST_BUFFER | + MSM_VIDC_DRAIN_LAST_BUFFER | + MSM_VIDC_INPUT_PAUSE | + MSM_VIDC_OUTPUT_PAUSE }, + + {MSM_VIDC_CLOSE, MSM_VIDC_ALLOW, MSM_VIDC_DRC | + MSM_VIDC_DRAIN | + MSM_VIDC_DRC_LAST_BUFFER | + MSM_VIDC_DRAIN_LAST_BUFFER | + MSM_VIDC_INPUT_PAUSE | + MSM_VIDC_OUTPUT_PAUSE }, + + {MSM_VIDC_ERROR, MSM_VIDC_ALLOW, MSM_VIDC_DRC | + MSM_VIDC_DRAIN | + MSM_VIDC_DRC_LAST_BUFFER | + MSM_VIDC_DRAIN_LAST_BUFFER | + 
MSM_VIDC_INPUT_PAUSE | + MSM_VIDC_OUTPUT_PAUSE }, + }; + + /* no substate to update */ + if (!sub_state) + return 0; + + /* check if any substate is disallowed */ + for (cnt = 0; cnt < ARRAY_SIZE(sub_state_allow); cnt++) { + /* skip other states */ + if (sub_state_allow[cnt].state != inst->state) + continue; + + /* continue if not disallowed */ + if (sub_state_allow[cnt].allow != MSM_VIDC_DISALLOW) + continue; + + if (sub_state_allow[cnt].sub_state_mask & sub_state) { + prepare_sub_state_name(sub_state, sub_state_name, sizeof(sub_state_name)); + i_vpr_e(inst, "%s: state (%s), disallow substate (%s)\n", + func, state_name(inst->state), sub_state_name); + return -EINVAL; + } + } + + /* remove ignorable substates from a given substate */ + for (cnt = 0; cnt < ARRAY_SIZE(sub_state_allow); cnt++) { + /* skip other states */ + if (sub_state_allow[cnt].state != inst->state) + continue; + + /* continue if not ignored */ + if (sub_state_allow[cnt].allow != MSM_VIDC_IGNORE) + continue; + + if (sub_state_allow[cnt].sub_state_mask & sub_state) { + prepare_sub_state_name(sub_state, sub_state_name, sizeof(sub_state_name)); + i_vpr_h(inst, "%s: state (%s), ignore substate (%s)\n", + func, state_name(inst->state), sub_state_name); + + /* remove ignorable substate bits from actual */ + sub_state &= ~(sub_state_allow[cnt].sub_state_mask & sub_state); + break; + } + } + + /* check if all substate bits are allowed */ + for (cnt = 0; cnt < ARRAY_SIZE(sub_state_allow); cnt++) { + /* skip other states */ + if (sub_state_allow[cnt].state != inst->state) + continue; + + /* continue if not allowed */ + if (sub_state_allow[cnt].allow != MSM_VIDC_ALLOW) + continue; + + if ((sub_state_allow[cnt].sub_state_mask & sub_state) != sub_state) { + prepare_sub_state_name(sub_state, sub_state_name, sizeof(sub_state_name)); + i_vpr_e(inst, "%s: state (%s), not all substates allowed (%s)\n", + func, state_name(inst->state), sub_state_name); + return -EINVAL; + } + } + + /* update substate */ + 
inst->sub_state |= sub_state; + + return rc; +} + +int msm_vidc_change_sub_state(struct msm_vidc_inst *inst, + enum msm_vidc_sub_state clear_sub_state, + enum msm_vidc_sub_state set_sub_state, const char *func) +{ + enum msm_vidc_sub_state prev_sub_state; + int rc = 0; + + if (is_session_error(inst)) { + i_vpr_h(inst, + "%s: inst is in bad state, can not change sub state\n", func); + return 0; + } + + /* final value will not change */ + if (!clear_sub_state && !set_sub_state) + return 0; + + /* sanitize clear & set value */ + if ((clear_sub_state & set_sub_state) || + set_sub_state > MSM_VIDC_MAX_SUB_STATE_VALUE || + clear_sub_state > MSM_VIDC_MAX_SUB_STATE_VALUE) { + i_vpr_e(inst, "%s: invalid sub states to clear %#x or set %#x\n", + func, clear_sub_state, set_sub_state); + return -EINVAL; + } + + prev_sub_state = inst->sub_state; + + /* set sub state */ + rc = msm_vidc_set_sub_state(inst, set_sub_state, __func__); + if (rc) + return rc; + + /* clear sub state */ + inst->sub_state &= ~clear_sub_state; + + /* print substates only when there is a change */ + if (inst->sub_state != prev_sub_state) { + rc = prepare_sub_state_name(inst->sub_state, inst->sub_state_name, + sizeof(inst->sub_state_name)); + if (!rc) + i_vpr_h(inst, "%s: state %s and sub state changed to %s\n", + func, state_name(inst->state), inst->sub_state_name); + } + + return 0; +}

From patchwork Fri Jul 28 13:23:26 2023
From: Vikash Garodia
Subject: [PATCH 15/33] iris: add vidc buffer files
Date: Fri, 28 Jul 2023 18:53:26 +0530
Message-ID: <1690550624-14642-16-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org

This implements the size and count calculations of different types of buffers for encoder and decoder.
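The input-buffer count rule this patch applies for hierarchical-B encoding ((1 << N) + 2 input buffers for N enhancement layers, with a base minimum of 4) can be sketched as a standalone helper. Only the constant and the formula come from the patch; the function name here is illustrative:

```c
#include <assert.h>

#define MIN_ENC_INPUT_BUFFERS 4 /* from msm_vidc_buffer.h in this patch */

/*
 * Illustrative standalone version of the encoder input min-count rule:
 * with N hierarchical-B enhancement layers the driver requests
 * (1 << N) + 2 input buffers; otherwise the base minimum of 4.
 */
static unsigned int enc_input_min_count(unsigned int enh_layers)
{
	unsigned int count = MIN_ENC_INPUT_BUFFERS;

	if (enh_layers)
		count = (1u << enh_layers) + 2;

	return count;
}
```

With 3 enhancement layers this yields (1 << 3) + 2 = 10 input buffers, mirroring the hb_enh_layer handling in msm_vidc_input_min_count() below.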
Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../platform/qcom/iris/vidc/inc/msm_vidc_buffer.h | 32 +++ .../platform/qcom/iris/vidc/src/msm_vidc_buffer.c | 290 +++++++++++++++++++++ 2 files changed, 322 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_buffer.h create mode 100644 drivers/media/platform/qcom/iris/vidc/src/msm_vidc_buffer.c diff --git a/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_buffer.h b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_buffer.h new file mode 100644 index 0000000..0fffcf0 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_buffer.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef __H_MSM_VIDC_BUFFER_H__ +#define __H_MSM_VIDC_BUFFER_H__ + +#include "msm_vidc_inst.h" + +#define MIN_DEC_INPUT_BUFFERS 4 +#define MIN_DEC_OUTPUT_BUFFERS 4 + +#define MIN_ENC_INPUT_BUFFERS 4 +#define MIN_ENC_OUTPUT_BUFFERS 4 + +#define DCVS_ENC_EXTRA_INPUT_BUFFERS 4 +#define DCVS_DEC_EXTRA_OUTPUT_BUFFERS 4 + +u32 msm_vidc_input_min_count(struct msm_vidc_inst *inst); +u32 msm_vidc_output_min_count(struct msm_vidc_inst *inst); +u32 msm_vidc_input_extra_count(struct msm_vidc_inst *inst); +u32 msm_vidc_output_extra_count(struct msm_vidc_inst *inst); +u32 msm_vidc_internal_buffer_count(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type); +u32 msm_vidc_decoder_input_size(struct msm_vidc_inst *inst); +u32 msm_vidc_decoder_output_size(struct msm_vidc_inst *inst); +u32 msm_vidc_encoder_input_size(struct msm_vidc_inst *inst); +u32 msm_vidc_encoder_output_size(struct msm_vidc_inst *inst); + +#endif // __H_MSM_VIDC_BUFFER_H__ diff --git a/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_buffer.c b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_buffer.c new file mode 100644 index 
0000000..a2a0eea --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_buffer.c @@ -0,0 +1,290 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include "msm_media_info.h" +#include "msm_vidc_buffer.h" +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_inst.h" +#include "msm_vidc_internal.h" + +/* Generic function for all targets. Not being used for iris2 */ +u32 msm_vidc_input_min_count(struct msm_vidc_inst *inst) +{ + u32 input_min_count = 0; + u32 hb_enh_layer = 0; + + if (is_decode_session(inst)) { + input_min_count = MIN_DEC_INPUT_BUFFERS; + } else if (is_encode_session(inst)) { + input_min_count = MIN_ENC_INPUT_BUFFERS; + if (is_hierb_type_requested(inst)) { + hb_enh_layer = + inst->capabilities[ENH_LAYER_COUNT].value; + if (inst->codec == MSM_VIDC_H264 && + !inst->capabilities[LAYER_ENABLE].value) { + hb_enh_layer = 0; + } + if (hb_enh_layer) + input_min_count = (1 << hb_enh_layer) + 2; + } + } else { + i_vpr_e(inst, "%s: invalid domain %d\n", + __func__, inst->domain); + return 0; + } + + return input_min_count; +} + +u32 msm_vidc_output_min_count(struct msm_vidc_inst *inst) +{ + u32 output_min_count; + + if (!is_decode_session(inst) && !is_encode_session(inst)) + return 0; + + if (is_encode_session(inst)) + return MIN_ENC_OUTPUT_BUFFERS; + + /* decoder handling below */ + /* fw_min_count > 0 indicates reconfig event has already arrived */ + if (inst->fw_min_count) { + if (is_split_mode_enabled(inst) && + inst->codec == MSM_VIDC_VP9) { + /* + * return opb min buffer count as min(4, fw_min_count) + * fw min count is used for dpb min count + */ + return min_t(u32, 4, inst->fw_min_count); + } else { + return inst->fw_min_count; + } + } + + /* initial handling before reconfig event arrived */ + switch (inst->codec) { + case 
MSM_VIDC_H264: + case MSM_VIDC_HEVC: + output_min_count = 4; + break; + case MSM_VIDC_VP9: + output_min_count = 9; + break; + default: + output_min_count = 4; + break; + } + + return output_min_count; +} + +u32 msm_vidc_input_extra_count(struct msm_vidc_inst *inst) +{ + u32 count = 0; + struct msm_vidc_core *core; + + core = inst->core; + + if (is_decode_session(inst)) { + /* + * if decode batching enabled, ensure minimum batch size + * count of input buffers present on input port + */ + if (core->capabilities[DECODE_BATCH].value && + inst->decode_batch.enable) { + if (inst->buffers.input.min_count < inst->decode_batch.size) { + count = inst->decode_batch.size - + inst->buffers.input.min_count; + } + } + } else if (is_encode_session(inst)) { + /* add dcvs buffers, if platform supports dcvs */ + if (core->capabilities[DCVS].value) + count = DCVS_ENC_EXTRA_INPUT_BUFFERS; + } + + return count; +} + +u32 msm_vidc_output_extra_count(struct msm_vidc_inst *inst) +{ + u32 count = 0; + struct msm_vidc_core *core; + + core = inst->core; + + if (is_decode_session(inst)) { + /* add dcvs buffers, if platform supports dcvs */ + if (core->capabilities[DCVS].value) + count = DCVS_DEC_EXTRA_OUTPUT_BUFFERS; + /* + * if decode batching enabled, ensure minimum batch size + * count of extra output buffers added on output port + */ + if (core->capabilities[DECODE_BATCH].value && + inst->decode_batch.enable && + count < inst->decode_batch.size) + count = inst->decode_batch.size; + } + + return count; +} + +u32 msm_vidc_internal_buffer_count(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type) +{ + u32 count = 0; + + if (is_encode_session(inst)) + return 1; + + if (is_decode_session(inst)) { + if (buffer_type == MSM_VIDC_BUF_BIN || + buffer_type == MSM_VIDC_BUF_LINE || + buffer_type == MSM_VIDC_BUF_PERSIST) { + count = 1; + } else if (buffer_type == MSM_VIDC_BUF_COMV || + buffer_type == MSM_VIDC_BUF_NON_COMV) { + if (inst->codec == MSM_VIDC_H264 || + inst->codec == 
MSM_VIDC_HEVC) + count = 1; + else + count = 0; + } else { + i_vpr_e(inst, "%s: unsupported buffer type %s\n", + __func__, buf_name(buffer_type)); + count = 0; + } + } + + return count; +} + +u32 msm_vidc_decoder_input_size(struct msm_vidc_inst *inst) +{ + u32 frame_size, num_mbs; + u32 div_factor = 1; + u32 base_res_mbs = NUM_MBS_4k; + struct v4l2_format *f; + u32 bitstream_size_overwrite = 0; + enum msm_vidc_codec_type codec; + + bitstream_size_overwrite = + inst->capabilities[BITSTREAM_SIZE_OVERWRITE].value; + if (bitstream_size_overwrite) { + frame_size = bitstream_size_overwrite; + i_vpr_h(inst, "client configured bitstream buffer size %d\n", + frame_size); + return frame_size; + } + + /* + * Decoder input size calculation: + * For 8k resolution, buffer size is calculated as 8k mbs / 4 and + * for 8k cases we expect width/height to be set always. + * In all other cases, buffer size is calculated as + * 4k mbs for VP8/VP9 and 4k / 2 for remaining codecs. + */ + f = &inst->fmts[INPUT_PORT]; + codec = v4l2_codec_to_driver(inst, f->fmt.pix_mp.pixelformat, __func__); + num_mbs = msm_vidc_get_mbs_per_frame(inst); + if (num_mbs > NUM_MBS_4k) { + div_factor = 4; + base_res_mbs = inst->capabilities[MBPF].value; + } else { + base_res_mbs = NUM_MBS_4k; + if (codec == MSM_VIDC_VP9) + div_factor = 1; + else + div_factor = 2; + } + + frame_size = base_res_mbs * MB_SIZE_IN_PIXEL * 3 / 2 / div_factor; + + /* multiply by 10/8 (1.25) to get size for 10 bit case */ + if (codec == MSM_VIDC_VP9 || codec == MSM_VIDC_HEVC) + frame_size = frame_size + (frame_size >> 2); + + i_vpr_h(inst, "set input buffer size to %d\n", frame_size); + + return ALIGN(frame_size, SZ_4K); +} + +u32 msm_vidc_decoder_output_size(struct msm_vidc_inst *inst) +{ + u32 size; + struct v4l2_format *f; + enum msm_vidc_colorformat_type colorformat; + + f = &inst->fmts[OUTPUT_PORT]; + colorformat = v4l2_colorformat_to_driver(inst, f->fmt.pix_mp.pixelformat, + __func__); + size = video_buffer_size(colorformat, 
f->fmt.pix_mp.width, + f->fmt.pix_mp.height, true); + return size; +} + +u32 msm_vidc_encoder_input_size(struct msm_vidc_inst *inst) +{ + u32 size; + struct v4l2_format *f; + u32 width, height; + enum msm_vidc_colorformat_type colorformat; + + f = &inst->fmts[INPUT_PORT]; + width = f->fmt.pix_mp.width; + height = f->fmt.pix_mp.height; + colorformat = v4l2_colorformat_to_driver(inst, f->fmt.pix_mp.pixelformat, + __func__); + size = video_buffer_size(colorformat, width, height, true); + return size; +} + +u32 msm_vidc_encoder_output_size(struct msm_vidc_inst *inst) +{ + u32 frame_size; + u32 mbs_per_frame; + u32 width, height; + struct v4l2_format *f; + enum msm_vidc_codec_type codec; + + f = &inst->fmts[OUTPUT_PORT]; + codec = v4l2_codec_to_driver(inst, f->fmt.pix_mp.pixelformat, __func__); + /* + * Encoder output size calculation: 32 Align width/height + * For resolution <= 480x360p : YUVsize * 2 + * For resolution > 360p & <= 4K : YUVsize / 2 + * For resolution > 4k : YUVsize / 4 + * Initially frame_size = YUVsize * 2; + */ + + width = ALIGN(f->fmt.pix_mp.width, BUFFER_ALIGNMENT_SIZE(32)); + height = ALIGN(f->fmt.pix_mp.height, BUFFER_ALIGNMENT_SIZE(32)); + mbs_per_frame = NUM_MBS_PER_FRAME(width, height); + frame_size = (width * height * 3); + + /* Image session: 2 x yuv size */ + if (inst->capabilities[BITRATE_MODE].value == V4L2_MPEG_VIDEO_BITRATE_MODE_CQ) + goto skip_calc; + + if (mbs_per_frame <= NUM_MBS_360P) + (void)frame_size; /* Default frame_size = YUVsize * 2 */ + else if (mbs_per_frame <= NUM_MBS_4k) + frame_size = frame_size >> 2; + else + frame_size = frame_size >> 3; + +skip_calc: + /* multiply by 10/8 (1.25) to get size for 10 bit case */ + if (codec == MSM_VIDC_HEVC) + frame_size = frame_size + (frame_size >> 2); + + frame_size = ALIGN(frame_size, SZ_4K); + + return frame_size; +}

From patchwork Fri Jul 28 13:23:27 2023
From: Vikash Garodia
Subject: [PATCH 16/33] iris: add helpers for media format
Date: Fri, 28 Jul 2023 18:53:27 +0530
Message-ID: <1690550624-14642-17-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org

From: Dikshita Agarwal

Add helpers to calculate stride, scanline, buffer size etc. for different media formats.
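All of the stride/scanline helpers in this patch are built on one alignment macro that rounds a size up to a multiple of the alignment, using a mask when the alignment is a power of two and divide-and-multiply otherwise (needed for the 192-pixel TP10C tile width). A minimal sketch, with the macro body taken from the patch and example values chosen for illustration:

```c
#include <assert.h>

/*
 * Round sz up to a multiple of align. When align is a power of two the
 * mask form is used; otherwise (e.g. 192 for TP10C tiles) it falls back
 * to divide-and-multiply. Mirrors MSM_MEDIA_ALIGN from this patch.
 */
#define MEDIA_ALIGN(sz, align) (((align) & ((align) - 1)) ? \
	((((sz) + (align) - 1) / (align)) * (align)) : \
	(((sz) + (align) - 1) & (~((align) - 1))))
```

For example, a 1920-pixel width is already 128-aligned, 1921 rounds up to 2048, and 1080 rounds up to 1152 against the non-power-of-two 192-pixel tile. The TP10C Y stride below chains two such alignments: width to 192 pixels, then stride * 4 / 3 bytes to 256.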
Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../platform/qcom/iris/vidc/inc/msm_media_info.h | 599 +++++++++++++++++++++ 1 file changed, 599 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/msm_media_info.h diff --git a/drivers/media/platform/qcom/iris/vidc/inc/msm_media_info.h b/drivers/media/platform/qcom/iris/vidc/inc/msm_media_info.h new file mode 100644 index 0000000..772b2482 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/msm_media_info.h @@ -0,0 +1,599 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef __MSM_MEDIA_INFO_H__ +#define __MSM_MEDIA_INFO_H__ + +#include "msm_vidc_internal.h" + +/* Width and Height should be multiple of 16 */ +#define INTERLACE_WIDTH_MAX 1920 +#define INTERLACE_HEIGHT_MAX 1920 +#define INTERLACE_MB_PER_FRAME_MAX ((1920 * 1088) / 256) + +#ifndef MSM_MEDIA_ALIGN +#define MSM_MEDIA_ALIGN(__sz, __align) (((__align) & ((__align) - 1)) ?\ + ((((__sz) + (__align) - 1) / (__align)) * (__align)) :\ + (((__sz) + (__align) - 1) & (~((__align) - 1)))) +#endif + +#ifndef MSM_MEDIA_ROUNDUP +#define MSM_MEDIA_ROUNDUP(__sz, __r) (((__sz) + ((__r) - 1)) / (__r)) +#endif + +/* + * Function arguments: + * @v4l2_fmt + * @width + * Progressive: width + * Interlaced: width + */ +static inline unsigned int video_y_stride_bytes(unsigned int colorformat, + unsigned int width) +{ + unsigned int alignment, stride = 0; + + if (!width) + goto invalid_input; + + switch (colorformat) { + case MSM_VIDC_FMT_NV12: + case MSM_VIDC_FMT_NV21: + case MSM_VIDC_FMT_NV12C: + alignment = 128; + stride = MSM_MEDIA_ALIGN(width, alignment); + break; + case MSM_VIDC_FMT_TP10C: + alignment = 256; + stride = MSM_MEDIA_ALIGN(width, 192); + stride = MSM_MEDIA_ALIGN(stride * 4 / 3, alignment); + break; + case MSM_VIDC_FMT_P010: + alignment = 256; + stride = 
MSM_MEDIA_ALIGN(width * 2, alignment); + break; + default: + break; + } +invalid_input: + return stride; +} + +/* + * Function arguments: + * @v4l2_fmt + * @width + * Progressive: width + * Interlaced: width + */ +static inline unsigned int video_y_stride_pix(unsigned int colorformat, + unsigned int width) +{ + unsigned int alignment, stride = 0; + + if (!width) + goto invalid_input; + + switch (colorformat) { + case MSM_VIDC_FMT_NV12: + case MSM_VIDC_FMT_NV21: + case MSM_VIDC_FMT_NV12C: + case MSM_VIDC_FMT_P010: + alignment = 128; + stride = MSM_MEDIA_ALIGN(width, alignment); + break; + case MSM_VIDC_FMT_TP10C: + alignment = 192; + stride = MSM_MEDIA_ALIGN(width, alignment); + break; + default: + break; + } + +invalid_input: + return stride; +} + +/* + * Function arguments: + * @v4l2_fmt + * @width + * Progressive: width + * Interlaced: width + */ +static inline unsigned int video_uv_stride_bytes(unsigned int colorformat, + unsigned int width) +{ + unsigned int alignment, stride = 0; + + if (!width) + goto invalid_input; + + switch (colorformat) { + case MSM_VIDC_FMT_NV21: + case MSM_VIDC_FMT_NV12: + case MSM_VIDC_FMT_NV12C: + alignment = 128; + stride = MSM_MEDIA_ALIGN(width, alignment); + break; + case MSM_VIDC_FMT_TP10C: + alignment = 256; + stride = MSM_MEDIA_ALIGN(width, 192); + stride = MSM_MEDIA_ALIGN(stride * 4 / 3, alignment); + break; + case MSM_VIDC_FMT_P010: + alignment = 256; + stride = MSM_MEDIA_ALIGN(width * 2, alignment); + break; + default: + break; + } +invalid_input: + return stride; +} + +/* + * Function arguments: + * @v4l2_fmt + * @width + * Progressive: width + * Interlaced: width + */ +static inline unsigned int video_uv_stride_pix(unsigned int colorformat, + unsigned int width) +{ + unsigned int alignment, stride = 0; + + if (!width) + goto invalid_input; + + switch (colorformat) { + case MSM_VIDC_FMT_NV21: + case MSM_VIDC_FMT_NV12: + case MSM_VIDC_FMT_NV12C: + case MSM_VIDC_FMT_P010: + alignment = 128; + stride = MSM_MEDIA_ALIGN(width, 
alignment); + break; + case MSM_VIDC_FMT_TP10C: + alignment = 192; + stride = MSM_MEDIA_ALIGN(width, alignment); + break; + default: + break; + } +invalid_input: + return stride; +} + +/* + * Function arguments: + * @v4l2_fmt + * @height + * Progressive: height + * Interlaced: (height+1)>>1 + */ +static inline unsigned int video_y_scanlines(unsigned int colorformat, + unsigned int height) +{ + unsigned int alignment, sclines = 0; + + if (!height) + goto invalid_input; + + switch (colorformat) { + case MSM_VIDC_FMT_NV12: + case MSM_VIDC_FMT_NV21: + case MSM_VIDC_FMT_NV12C: + case MSM_VIDC_FMT_P010: + alignment = 32; + break; + case MSM_VIDC_FMT_TP10C: + alignment = 16; + break; + default: + return 0; + } + sclines = MSM_MEDIA_ALIGN(height, alignment); +invalid_input: + return sclines; +} + +/* + * Function arguments: + * @v4l2_fmt + * @height + * Progressive: height + * Interlaced: (height+1)>>1 + */ +static inline unsigned int video_uv_scanlines(unsigned int colorformat, + unsigned int height) +{ + unsigned int alignment, sclines = 0; + + if (!height) + goto invalid_input; + + switch (colorformat) { + case MSM_VIDC_FMT_NV21: + case MSM_VIDC_FMT_NV12: + case MSM_VIDC_FMT_TP10C: + case MSM_VIDC_FMT_P010: + alignment = 16; + break; + case MSM_VIDC_FMT_NV12C: + alignment = 32; + break; + default: + goto invalid_input; + } + + sclines = MSM_MEDIA_ALIGN((height + 1) >> 1, alignment); + +invalid_input: + return sclines; +} + +/* + * Function arguments: + * @v4l2_fmt + * @width + * Progressive: width + * Interlaced: width + */ +static inline unsigned int video_y_meta_stride(unsigned int colorformat, + unsigned int width) +{ + int y_tile_width = 0, y_meta_stride = 0; + + if (!width) + goto invalid_input; + + switch (colorformat) { + case MSM_VIDC_FMT_NV12C: + y_tile_width = 32; + break; + case MSM_VIDC_FMT_TP10C: + y_tile_width = 48; + break; + default: + goto invalid_input; + } + + y_meta_stride = MSM_MEDIA_ROUNDUP(width, y_tile_width); + y_meta_stride = 
MSM_MEDIA_ALIGN(y_meta_stride, 64); + +invalid_input: + return y_meta_stride; +} + +/* + * Function arguments: + * @v4l2_fmt + * @height + * Progressive: height + * Interlaced: (height+1)>>1 + */ +static inline unsigned int video_y_meta_scanlines(unsigned int colorformat, + unsigned int height) +{ + int y_tile_height = 0, y_meta_scanlines = 0; + + if (!height) + goto invalid_input; + + switch (colorformat) { + case MSM_VIDC_FMT_NV12C: + y_tile_height = 8; + break; + case MSM_VIDC_FMT_TP10C: + y_tile_height = 4; + break; + default: + goto invalid_input; + } + + y_meta_scanlines = MSM_MEDIA_ROUNDUP(height, y_tile_height); + y_meta_scanlines = MSM_MEDIA_ALIGN(y_meta_scanlines, 16); + +invalid_input: + return y_meta_scanlines; +} + +/* + * Function arguments: + * @v4l2_fmt + * @width + * Progressive: width + * Interlaced: width + */ +static inline unsigned int video_uv_meta_stride(unsigned int colorformat, + unsigned int width) +{ + int uv_tile_width = 0, uv_meta_stride = 0; + + if (!width) + goto invalid_input; + + switch (colorformat) { + case MSM_VIDC_FMT_NV12C: + uv_tile_width = 16; + break; + case MSM_VIDC_FMT_TP10C: + uv_tile_width = 24; + break; + default: + goto invalid_input; + } + + uv_meta_stride = MSM_MEDIA_ROUNDUP((width + 1) >> 1, uv_tile_width); + uv_meta_stride = MSM_MEDIA_ALIGN(uv_meta_stride, 64); + +invalid_input: + return uv_meta_stride; +} + +/* + * Function arguments: + * @v4l2_fmt + * @height + * Progressive: height + * Interlaced: (height+1)>>1 + */ +static inline unsigned int video_uv_meta_scanlines(unsigned int colorformat, + unsigned int height) +{ + int uv_tile_height = 0, uv_meta_scanlines = 0; + + if (!height) + goto invalid_input; + + switch (colorformat) { + case MSM_VIDC_FMT_NV12C: + uv_tile_height = 8; + break; + case MSM_VIDC_FMT_TP10C: + uv_tile_height = 4; + break; + default: + goto invalid_input; + } + + uv_meta_scanlines = MSM_MEDIA_ROUNDUP((height + 1) >> 1, uv_tile_height); + uv_meta_scanlines = 
MSM_MEDIA_ALIGN(uv_meta_scanlines, 16); + +invalid_input: + return uv_meta_scanlines; +} + +static inline unsigned int video_rgb_stride_bytes(unsigned int colorformat, + unsigned int width) +{ + unsigned int alignment = 0, stride = 0, bpp = 4; + + if (!width) + goto invalid_input; + + switch (colorformat) { + case MSM_VIDC_FMT_RGBA8888C: + case MSM_VIDC_FMT_RGBA8888: + alignment = 256; + break; + default: + goto invalid_input; + } + + stride = MSM_MEDIA_ALIGN(width * bpp, alignment); + +invalid_input: + return stride; +} + +static inline unsigned int video_rgb_stride_pix(unsigned int colorformat, + unsigned int width) +{ + unsigned int bpp = 4; + + return video_rgb_stride_bytes(colorformat, width) / bpp; +} + +static inline unsigned int video_rgb_scanlines(unsigned int colorformat, + unsigned int height) +{ + unsigned int alignment = 0, scanlines = 0; + + if (!height) + goto invalid_input; + + switch (colorformat) { + case MSM_VIDC_FMT_RGBA8888C: + alignment = 16; + break; + case MSM_VIDC_FMT_RGBA8888: + alignment = 32; + break; + default: + goto invalid_input; + } + + scanlines = MSM_MEDIA_ALIGN(height, alignment); + +invalid_input: + return scanlines; +} + +static inline unsigned int video_rgb_meta_stride(unsigned int colorformat, + unsigned int width) +{ + int rgb_tile_width = 0, rgb_meta_stride = 0; + + if (!width) + goto invalid_input; + + switch (colorformat) { + case MSM_VIDC_FMT_RGBA8888C: + case MSM_VIDC_FMT_RGBA8888: + rgb_tile_width = 16; + break; + default: + goto invalid_input; + } + + rgb_meta_stride = MSM_MEDIA_ROUNDUP(width, rgb_tile_width); + rgb_meta_stride = MSM_MEDIA_ALIGN(rgb_meta_stride, 64); + +invalid_input: + return rgb_meta_stride; +} + +static inline unsigned int video_rgb_meta_scanlines(unsigned int colorformat, + unsigned int height) +{ + int rgb_tile_height = 0, rgb_meta_scanlines = 0; + + if (!height) + goto invalid_input; + + switch (colorformat) { + case MSM_VIDC_FMT_RGBA8888C: + case MSM_VIDC_FMT_RGBA8888: + rgb_tile_height = 4; + 
break; + default: + goto invalid_input; + } + + rgb_meta_scanlines = MSM_MEDIA_ROUNDUP(height, rgb_tile_height); + rgb_meta_scanlines = MSM_MEDIA_ALIGN(rgb_meta_scanlines, 16); + +invalid_input: + return rgb_meta_scanlines; +} + +static inline unsigned int video_buffer_size(unsigned int colorformat, unsigned int pix_width, + unsigned int pix_height, unsigned int interlace) +{ + unsigned int size = 0; + unsigned int y_plane, uv_plane, y_stride, + uv_stride, y_sclines, uv_sclines; + unsigned int y_ubwc_plane = 0, uv_ubwc_plane = 0; + unsigned int y_meta_stride = 0, y_meta_scanlines = 0; + unsigned int uv_meta_stride = 0, uv_meta_scanlines = 0; + unsigned int y_meta_plane = 0, uv_meta_plane = 0; + unsigned int rgb_stride = 0, rgb_scanlines = 0; + unsigned int rgb_plane = 0, rgb_ubwc_plane = 0, rgb_meta_plane = 0; + unsigned int rgb_meta_stride = 0, rgb_meta_scanlines = 0; + + if (!pix_width || !pix_height) + goto invalid_input; + + y_stride = video_y_stride_bytes(colorformat, pix_width); + uv_stride = video_uv_stride_bytes(colorformat, pix_width); + y_sclines = video_y_scanlines(colorformat, pix_height); + uv_sclines = video_uv_scanlines(colorformat, pix_height); + rgb_stride = video_rgb_stride_bytes(colorformat, pix_width); + rgb_scanlines = video_rgb_scanlines(colorformat, pix_height); + + switch (colorformat) { + case MSM_VIDC_FMT_NV21: + case MSM_VIDC_FMT_NV12: + case MSM_VIDC_FMT_P010: + y_plane = y_stride * y_sclines; + uv_plane = uv_stride * uv_sclines; + size = y_plane + uv_plane; + break; + case MSM_VIDC_FMT_NV12C: + y_meta_stride = video_y_meta_stride(colorformat, pix_width); + uv_meta_stride = video_uv_meta_stride(colorformat, pix_width); + if (!interlace && colorformat == MSM_VIDC_FMT_NV12C) { + y_ubwc_plane = MSM_MEDIA_ALIGN(y_stride * y_sclines, 4096); + uv_ubwc_plane = MSM_MEDIA_ALIGN(uv_stride * uv_sclines, 4096); + y_meta_scanlines = + video_y_meta_scanlines(colorformat, pix_height); + y_meta_plane = MSM_MEDIA_ALIGN(y_meta_stride * y_meta_scanlines, 
4096); + uv_meta_scanlines = + video_uv_meta_scanlines(colorformat, pix_height); + uv_meta_plane = MSM_MEDIA_ALIGN(uv_meta_stride * + uv_meta_scanlines, 4096); + size = (y_ubwc_plane + uv_ubwc_plane + y_meta_plane + + uv_meta_plane); + } else { + if (pix_width <= INTERLACE_WIDTH_MAX && + pix_height <= INTERLACE_HEIGHT_MAX && + (pix_height * pix_width) / 256 <= INTERLACE_MB_PER_FRAME_MAX) { + y_sclines = + video_y_scanlines(colorformat, (pix_height + 1) >> 1); + y_ubwc_plane = + MSM_MEDIA_ALIGN(y_stride * y_sclines, 4096); + uv_sclines = + video_uv_scanlines(colorformat, (pix_height + 1) >> 1); + uv_ubwc_plane = + MSM_MEDIA_ALIGN(uv_stride * uv_sclines, 4096); + y_meta_scanlines = + video_y_meta_scanlines(colorformat, (pix_height + 1) >> 1); + y_meta_plane = MSM_MEDIA_ALIGN(y_meta_stride * y_meta_scanlines, + 4096); + uv_meta_scanlines = + video_uv_meta_scanlines(colorformat, (pix_height + 1) >> 1); + uv_meta_plane = MSM_MEDIA_ALIGN(uv_meta_stride * + uv_meta_scanlines, 4096); + size = (y_ubwc_plane + uv_ubwc_plane + y_meta_plane + + uv_meta_plane) * 2; + } else { + y_sclines = video_y_scanlines(colorformat, pix_height); + y_ubwc_plane = + MSM_MEDIA_ALIGN(y_stride * y_sclines, 4096); + uv_sclines = video_uv_scanlines(colorformat, pix_height); + uv_ubwc_plane = + MSM_MEDIA_ALIGN(uv_stride * uv_sclines, 4096); + y_meta_scanlines = + video_y_meta_scanlines(colorformat, pix_height); + y_meta_plane = MSM_MEDIA_ALIGN(y_meta_stride * y_meta_scanlines, + 4096); + uv_meta_scanlines = + video_uv_meta_scanlines(colorformat, pix_height); + uv_meta_plane = MSM_MEDIA_ALIGN(uv_meta_stride * + uv_meta_scanlines, 4096); + size = (y_ubwc_plane + uv_ubwc_plane + y_meta_plane + + uv_meta_plane); + } + } + break; + case MSM_VIDC_FMT_TP10C: + y_ubwc_plane = MSM_MEDIA_ALIGN(y_stride * y_sclines, 4096); + uv_ubwc_plane = MSM_MEDIA_ALIGN(uv_stride * uv_sclines, 4096); + y_meta_stride = video_y_meta_stride(colorformat, pix_width); + y_meta_scanlines = video_y_meta_scanlines(colorformat, 
pix_height); + y_meta_plane = MSM_MEDIA_ALIGN(y_meta_stride * y_meta_scanlines, 4096); + uv_meta_stride = video_uv_meta_stride(colorformat, pix_width); + uv_meta_scanlines = video_uv_meta_scanlines(colorformat, pix_height); + uv_meta_plane = MSM_MEDIA_ALIGN(uv_meta_stride * + uv_meta_scanlines, 4096); + + size = y_ubwc_plane + uv_ubwc_plane + y_meta_plane + + uv_meta_plane; + break; + case MSM_VIDC_FMT_RGBA8888C: + rgb_ubwc_plane = MSM_MEDIA_ALIGN(rgb_stride * rgb_scanlines, 4096); + rgb_meta_stride = video_rgb_meta_stride(colorformat, pix_width); + rgb_meta_scanlines = video_rgb_meta_scanlines(colorformat, pix_height); + rgb_meta_plane = MSM_MEDIA_ALIGN(rgb_meta_stride * rgb_meta_scanlines, 4096); + size = rgb_ubwc_plane + rgb_meta_plane; + break; + case MSM_VIDC_FMT_RGBA8888: + rgb_plane = MSM_MEDIA_ALIGN(rgb_stride * rgb_scanlines, 4096); + size = rgb_plane; + break; + default: + break; + } + +invalid_input: + size = MSM_MEDIA_ALIGN(size, 4096); + return size; +} + +#endif
From patchwork Fri Jul 28 13:23:28 2023
From: Vikash Garodia
Subject: [PATCH 17/33] iris: vidc: define various structures and enum
Date: Fri, 28 Jul 2023 18:53:28 +0530
Message-ID: <1690550624-14642-18-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
List-ID: X-Mailing-List: linux-media@vger.kernel.org
Define the various structures and enums used by the driver, such as core capability, instance capability, color space info, and buffer types. Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../qcom/iris/vidc/inc/msm_vidc_internal.h | 787 +++++++++++++++++++++ 1 file changed, 787 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_internal.h diff --git a/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_internal.h b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_internal.h new file mode 100644 index 0000000..4d54834 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_internal.h @@ -0,0 +1,787 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */ + +#ifndef _MSM_VIDC_INTERNAL_H_ +#define _MSM_VIDC_INTERNAL_H_ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +struct msm_vidc_core; +struct msm_vidc_inst; + +static const char video_banner[] = "Video-Banner: (" __stringify(VIDEO_COMPILE_BY) "@" + __stringify(VIDEO_COMPILE_HOST) ") (" __stringify(VIDEO_COMPILE_TIME) ")"; + +#define MAX_NAME_LENGTH 128 +#define VENUS_VERSION_LENGTH 128 +#define MAX_MATRIX_COEFFS 9 +#define MAX_BIAS_COEFFS 3 +#define MAX_LIMIT_COEFFS 6 +#define MAX_DEBUGFS_NAME 50 +#define DEFAULT_HEIGHT 240 +#define DEFAULT_WIDTH 320 +#define DEFAULT_FPS 30 +#define MAXIMUM_VP9_FPS 60 +#define RT_DEC_DOWN_PRORITY_OFFSET 1 +#define MAX_SUPPORTED_INSTANCES 16 +#define DEFAULT_BSE_VPP_DELAY 2 +#define MAX_CAP_PARENTS 20 +#define MAX_CAP_CHILDREN 20 +#define DEFAULT_MAX_HOST_BUF_COUNT 64 +#define DEFAULT_MAX_HOST_BURST_BUF_COUNT 256 +#define BIT_DEPTH_8 (8 << 16 | 8) +#define BIT_DEPTH_10 (10 << 16 | 10) +#define CODED_FRAMES_PROGRESSIVE 0x0 +#define CODED_FRAMES_INTERLACE 0x1 +#define MAX_VP9D_INST_COUNT 6 +/* TODO: move below macros to waipio.c */ +#define MAX_ENH_LAYER_HB 3 +#define MAX_HEVC_VBR_ENH_LAYER_SLIDING_WINDOW 5 +#define MAX_HEVC_NON_VBR_ENH_LAYER_SLIDING_WINDOW 3 +#define MAX_AVC_ENH_LAYER_SLIDING_WINDOW 3 +#define MAX_AVC_ENH_LAYER_HYBRID_HP 5 +#define INVALID_DEFAULT_MARK_OR_USE_LTR -1 +#define MAX_SLICES_PER_FRAME 10 +#define MAX_SLICES_FRAME_RATE 60 +#define MAX_MB_SLICE_WIDTH 4096 +#define MAX_MB_SLICE_HEIGHT 2160 +#define MAX_BYTES_SLICE_WIDTH 1920 +#define MAX_BYTES_SLICE_HEIGHT 1088 +#define MIN_HEVC_SLICE_WIDTH 384 +#define MIN_AVC_SLICE_WIDTH 192 +#define MIN_SLICE_HEIGHT 128 +#define MAX_BITRATE_BOOST 25 +#define MAX_SUPPORTED_MIN_QUALITY 70 +#define MIN_CHROMA_QP_OFFSET -12 +#define MAX_CHROMA_QP_OFFSET 0 +#define MIN_QP_10BIT -11 +#define MIN_QP_8BIT 1 +#define INVALID_FD -1 +#define MAX_ENCODING_REFERNCE_FRAMES 7 +#define 
MAX_LTR_FRAME_COUNT_5 5 +#define MAX_LTR_FRAME_COUNT_2 2 +#define MAX_ENC_RING_BUF_COUNT 5 /* to be tuned */ +#define MAX_TRANSCODING_STATS_FRAME_RATE 60 +#define MAX_TRANSCODING_STATS_WIDTH 4096 +#define MAX_TRANSCODING_STATS_HEIGHT 2304 + +#define DCVS_WINDOW 16 +#define ENC_FPS_WINDOW 3 +#define DEC_FPS_WINDOW 10 +#define INPUT_TIMER_LIST_SIZE 30 + +#define INPUT_MPLANE V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE +#define OUTPUT_MPLANE V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE + +#define VIDC_IFACEQ_MAX_PKT_SIZE 1024 +#define VIDC_IFACEQ_MED_PKT_SIZE 768 +#define VIDC_IFACEQ_MIN_PKT_SIZE 8 +#define VIDC_IFACEQ_VAR_SMALL_PKT_SIZE 100 +#define VIDC_IFACEQ_VAR_LARGE_PKT_SIZE 512 +#define VIDC_IFACEQ_VAR_HUGE_PKT_SIZE (1024 * 4) + +#define NUM_MBS_PER_SEC(__height, __width, __fps) \ + (NUM_MBS_PER_FRAME(__height, __width) * (__fps)) + +#define NUM_MBS_PER_FRAME(__height, __width) \ + ((ALIGN(__height, 16) / 16) * (ALIGN(__width, 16) / 16)) + +#ifdef V4L2_CTRL_CLASS_CODEC +#define IS_PRIV_CTRL(idx) ( \ + (V4L2_CTRL_ID2WHICH(idx) == V4L2_CTRL_CLASS_CODEC) && \ + V4L2_CTRL_DRIVER_PRIV(idx)) +#else +#define IS_PRIV_CTRL(idx) ( \ + (V4L2_CTRL_ID2WHICH(idx) == V4L2_CTRL_CLASS_MPEG) && \ + V4L2_CTRL_DRIVER_PRIV(idx)) +#endif + +#define BUFFER_ALIGNMENT_SIZE(x) x +#define NUM_MBS_360P (((480 + 15) >> 4) * ((360 + 15) >> 4)) +#define NUM_MBS_720P (((1280 + 15) >> 4) * ((720 + 15) >> 4)) +#define NUM_MBS_4k (((4096 + 15) >> 4) * ((2304 + 15) >> 4)) +#define MB_SIZE_IN_PIXEL (16 * 16) + +#define DB_H264_DISABLE_SLICE_BOUNDARY \ + V4L2_MPEG_VIDEO_H264_LOOP_FILTER_MODE_DISABLED_AT_SLICE_BOUNDARY + +#define DB_HEVC_DISABLE_SLICE_BOUNDARY \ + V4L2_MPEG_VIDEO_HEVC_LOOP_FILTER_MODE_DISABLED_AT_SLICE_BOUNDARY + +/* + * Convert a Q16 number into integer and fractional parts up to 2 places. + * Ex : 105752 / 65536 = 1.61; 1.61 in Q16 = 105752; + * Integer part = 105752 / 65536 = 1; + * Remainder = 105752 & 0xFFFF = 40216; last 16 bits. 
+ * Fractional part = 40216 * 100 / 65536 = 61; + * Now convert to FP(1, 61, 100). + */ +#define Q16_INT(q) ((q) >> 16) +#define Q16_FRAC(q) ((((q) & 0xFFFF) * 100) >> 16) + +/* define timeout values */ +#define HW_RESPONSE_TIMEOUT_VALUE (1000) +#define SW_PC_DELAY_VALUE (HW_RESPONSE_TIMEOUT_VALUE + 500) +#define FW_UNLOAD_DELAY_VALUE (SW_PC_DELAY_VALUE + 1500) + +#define MAX_DPB_COUNT 32 + /* + * max dpb count in firmware = 16 + * each dpb: 4 words - + * dpb list array size = 16 * 4 + * dpb payload size = 16 * 4 * 4 + */ +#define MAX_DPB_LIST_ARRAY_SIZE (16 * 4) +#define MAX_DPB_LIST_PAYLOAD_SIZE (16 * 4 * 4) + +enum msm_vidc_domain_type { + MSM_VIDC_ENCODER = BIT(0), + MSM_VIDC_DECODER = BIT(1), +}; + +enum msm_vidc_codec_type { + MSM_VIDC_H264 = BIT(0), + MSM_VIDC_HEVC = BIT(1), + MSM_VIDC_VP9 = BIT(2), +}; + +enum msm_vidc_colorformat_type { + MSM_VIDC_FMT_NONE = 0, + MSM_VIDC_FMT_NV12C = BIT(0), + MSM_VIDC_FMT_NV12 = BIT(1), + MSM_VIDC_FMT_NV21 = BIT(2), + MSM_VIDC_FMT_TP10C = BIT(3), + MSM_VIDC_FMT_P010 = BIT(4), + MSM_VIDC_FMT_RGBA8888C = BIT(5), + MSM_VIDC_FMT_RGBA8888 = BIT(6), +}; + +enum msm_vidc_buffer_type { + MSM_VIDC_BUF_NONE, + MSM_VIDC_BUF_INPUT, + MSM_VIDC_BUF_OUTPUT, + MSM_VIDC_BUF_READ_ONLY, + MSM_VIDC_BUF_INTERFACE_QUEUE, + MSM_VIDC_BUF_BIN, + MSM_VIDC_BUF_ARP, + MSM_VIDC_BUF_COMV, + MSM_VIDC_BUF_NON_COMV, + MSM_VIDC_BUF_LINE, + MSM_VIDC_BUF_DPB, + MSM_VIDC_BUF_PERSIST, + MSM_VIDC_BUF_VPSS, +}; + +/* always match with v4l2 flags V4L2_BUF_FLAG_* */ +enum msm_vidc_buffer_flags { + MSM_VIDC_BUF_FLAG_KEYFRAME = 0x00000008, + MSM_VIDC_BUF_FLAG_PFRAME = 0x00000010, + MSM_VIDC_BUF_FLAG_BFRAME = 0x00000020, + MSM_VIDC_BUF_FLAG_ERROR = 0x00000040, + MSM_VIDC_BUF_FLAG_LAST = 0x00100000, +}; + +enum msm_vidc_buffer_attributes { + MSM_VIDC_ATTR_DEFERRED = BIT(0), + MSM_VIDC_ATTR_READ_ONLY = BIT(1), + MSM_VIDC_ATTR_PENDING_RELEASE = BIT(2), + MSM_VIDC_ATTR_QUEUED = BIT(3), + MSM_VIDC_ATTR_DEQUEUED = BIT(4), + MSM_VIDC_ATTR_BUFFER_DONE = BIT(5), + 
MSM_VIDC_ATTR_RELEASE_ELIGIBLE = BIT(6), +}; + +enum msm_vidc_buffer_region { + MSM_VIDC_REGION_NONE = 0, + MSM_VIDC_NON_SECURE, + MSM_VIDC_NON_SECURE_PIXEL, + MSM_VIDC_SECURE_PIXEL, + MSM_VIDC_SECURE_NONPIXEL, + MSM_VIDC_SECURE_BITSTREAM, + MSM_VIDC_REGION_MAX, +}; + +enum msm_vidc_port_type { + INPUT_PORT = 0, + OUTPUT_PORT, + PORT_NONE, + MAX_PORT, +}; + +enum msm_vidc_stage_type { + MSM_VIDC_STAGE_NONE = 0, + MSM_VIDC_STAGE_1 = 1, + MSM_VIDC_STAGE_2 = 2, +}; + +enum msm_vidc_pipe_type { + MSM_VIDC_PIPE_NONE = 0, + MSM_VIDC_PIPE_1 = 1, + MSM_VIDC_PIPE_2 = 2, + MSM_VIDC_PIPE_4 = 4, +}; + +enum msm_vidc_quality_mode { + MSM_VIDC_MAX_QUALITY_MODE = 0x1, + MSM_VIDC_POWER_SAVE_MODE = 0x2, +}; + +enum msm_vidc_color_primaries { + MSM_VIDC_PRIMARIES_RESERVED = 0, + MSM_VIDC_PRIMARIES_BT709 = 1, + MSM_VIDC_PRIMARIES_UNSPECIFIED = 2, + MSM_VIDC_PRIMARIES_BT470_SYSTEM_M = 4, + MSM_VIDC_PRIMARIES_BT470_SYSTEM_BG = 5, + MSM_VIDC_PRIMARIES_BT601_525 = 6, + MSM_VIDC_PRIMARIES_SMPTE_ST240M = 7, + MSM_VIDC_PRIMARIES_GENERIC_FILM = 8, + MSM_VIDC_PRIMARIES_BT2020 = 9, + MSM_VIDC_PRIMARIES_SMPTE_ST428_1 = 10, + MSM_VIDC_PRIMARIES_SMPTE_RP431_2 = 11, + MSM_VIDC_PRIMARIES_SMPTE_EG431_1 = 12, + MSM_VIDC_PRIMARIES_SMPTE_EBU_TECH = 22, +}; + +enum msm_vidc_transfer_characteristics { + MSM_VIDC_TRANSFER_RESERVED = 0, + MSM_VIDC_TRANSFER_BT709 = 1, + MSM_VIDC_TRANSFER_UNSPECIFIED = 2, + MSM_VIDC_TRANSFER_BT470_SYSTEM_M = 4, + MSM_VIDC_TRANSFER_BT470_SYSTEM_BG = 5, + MSM_VIDC_TRANSFER_BT601_525_OR_625 = 6, + MSM_VIDC_TRANSFER_SMPTE_ST240M = 7, + MSM_VIDC_TRANSFER_LINEAR = 8, + MSM_VIDC_TRANSFER_LOG_100_1 = 9, + MSM_VIDC_TRANSFER_LOG_SQRT = 10, + MSM_VIDC_TRANSFER_XVYCC = 11, + MSM_VIDC_TRANSFER_BT1361_0 = 12, + MSM_VIDC_TRANSFER_SRGB_SYCC = 13, + MSM_VIDC_TRANSFER_BT2020_14 = 14, + MSM_VIDC_TRANSFER_BT2020_15 = 15, + MSM_VIDC_TRANSFER_SMPTE_ST2084_PQ = 16, + MSM_VIDC_TRANSFER_SMPTE_ST428_1 = 17, + MSM_VIDC_TRANSFER_BT2100_2_HLG = 18, +}; + +enum msm_vidc_matrix_coefficients { + 
MSM_VIDC_MATRIX_COEFF_SRGB_SMPTE_ST428_1 = 0, + MSM_VIDC_MATRIX_COEFF_BT709 = 1, + MSM_VIDC_MATRIX_COEFF_UNSPECIFIED = 2, + MSM_VIDC_MATRIX_COEFF_RESERVED = 3, + MSM_VIDC_MATRIX_COEFF_FCC_TITLE_47 = 4, + MSM_VIDC_MATRIX_COEFF_BT470_SYS_BG_OR_BT601_625 = 5, + MSM_VIDC_MATRIX_COEFF_BT601_525_BT1358_525_OR_625 = 6, + MSM_VIDC_MATRIX_COEFF_SMPTE_ST240 = 7, + MSM_VIDC_MATRIX_COEFF_YCGCO = 8, + MSM_VIDC_MATRIX_COEFF_BT2020_NON_CONSTANT = 9, + MSM_VIDC_MATRIX_COEFF_BT2020_CONSTANT = 10, + MSM_VIDC_MATRIX_COEFF_SMPTE_ST2085 = 11, + MSM_VIDC_MATRIX_COEFF_SMPTE_CHROM_DERV_NON_CONSTANT = 12, + MSM_VIDC_MATRIX_COEFF_SMPTE_CHROM_DERV_CONSTANT = 13, + MSM_VIDC_MATRIX_COEFF_BT2100 = 14, +}; + +enum msm_vidc_preprocess_type { + MSM_VIDC_PREPROCESS_NONE = BIT(0), + MSM_VIDC_PREPROCESS_TYPE0 = BIT(1), +}; + +enum msm_vidc_core_capability_type { + CORE_CAP_NONE = 0, + ENC_CODECS, + DEC_CODECS, + MAX_SESSION_COUNT, + MAX_NUM_720P_SESSIONS, + MAX_NUM_1080P_SESSIONS, + MAX_NUM_4K_SESSIONS, + MAX_NUM_8K_SESSIONS, + MAX_LOAD, + MAX_RT_MBPF, + MAX_MBPF, + MAX_MBPS, + MAX_MBPF_HQ, + MAX_MBPS_HQ, + MAX_MBPF_B_FRAME, + MAX_MBPS_B_FRAME, + MAX_MBPS_ALL_INTRA, + MAX_ENH_LAYER_COUNT, + NUM_VPP_PIPE, + SW_PC, + SW_PC_DELAY, + FW_UNLOAD, + FW_UNLOAD_DELAY, + HW_RESPONSE_TIMEOUT, + PREFIX_BUF_COUNT_PIX, + PREFIX_BUF_SIZE_PIX, + PREFIX_BUF_COUNT_NON_PIX, + PREFIX_BUF_SIZE_NON_PIX, + PAGEFAULT_NON_FATAL, + PAGETABLE_CACHING, + DCVS, + DECODE_BATCH, + DECODE_BATCH_TIMEOUT, + STATS_TIMEOUT_MS, + AV_SYNC_WINDOW_SIZE, + CLK_FREQ_THRESHOLD, + NON_FATAL_FAULTS, + DEVICE_CAPS, + CORE_CAP_MAX, +}; + +/** + * The msm_vidc_prepare_dependency_list() API prepares caps_list by looping over + * the msm_vidc_inst_capability_type enums from 0 to INST_CAP_MAX and arranging the + * nodes in such a way that parents will be at the front and dependent children + * in the back. 
+ * + * caps_list preparation may become CPU intensive task, so to save CPU cycles, + * organize enum in proper order(leaf caps at the beginning and dependent parent caps + * at back), so that during caps_list preparation num CPU cycles spent will reduce. + * + * Note: It will work, if enum kept at different places, but not efficient. + * + * - place all leaf(no child) enums before PROFILE cap. + * - place all intermittent(having both parent and child) enums before FRAME_WIDTH cap. + * - place all root(no parent) enums before INST_CAP_MAX cap. + */ + +enum msm_vidc_inst_capability_type { + INST_CAP_NONE = 0, + MIN_FRAME_QP, + MAX_FRAME_QP, + I_FRAME_QP, + P_FRAME_QP, + B_FRAME_QP, + TIME_DELTA_BASED_RC, + CONSTANT_QUALITY, + VBV_DELAY, + PEAK_BITRATE, + ENTROPY_MODE, + TRANSFORM_8X8, + STAGE, + LTR_COUNT, + IR_PERIOD, + BITRATE_BOOST, + OUTPUT_ORDER, + INPUT_BUF_HOST_MAX_COUNT, + OUTPUT_BUF_HOST_MAX_COUNT, + VUI_TIMING_INFO, + SLICE_DECODE, + PROFILE, + ENH_LAYER_COUNT, + BIT_RATE, + GOP_SIZE, + B_FRAME, + ALL_INTRA, + MIN_QUALITY, + SLICE_MODE, + FRAME_WIDTH, + LOSSLESS_FRAME_WIDTH, + FRAME_HEIGHT, + LOSSLESS_FRAME_HEIGHT, + PIX_FMTS, + MIN_BUFFERS_INPUT, + MIN_BUFFERS_OUTPUT, + MBPF, + BATCH_MBPF, + BATCH_FPS, + LOSSLESS_MBPF, + FRAME_RATE, + OPERATING_RATE, + INPUT_RATE, + TIMESTAMP_RATE, + SCALE_FACTOR, + MB_CYCLES_VSP, + MB_CYCLES_VPP, + MB_CYCLES_LP, + MB_CYCLES_FW, + MB_CYCLES_FW_VPP, + ENC_RING_BUFFER_COUNT, + HFLIP, + VFLIP, + ROTATION, + HEADER_MODE, + PREPEND_SPSPPS_TO_IDR, + WITHOUT_STARTCODE, + NAL_LENGTH_FIELD, + REQUEST_I_FRAME, + BITRATE_MODE, + LOSSLESS, + FRAME_SKIP_MODE, + FRAME_RC_ENABLE, + GOP_CLOSURE, + USE_LTR, + MARK_LTR, + BASELAYER_PRIORITY, + IR_TYPE, + AU_DELIMITER, + GRID_ENABLE, + GRID_SIZE, + I_FRAME_MIN_QP, + P_FRAME_MIN_QP, + B_FRAME_MIN_QP, + I_FRAME_MAX_QP, + P_FRAME_MAX_QP, + B_FRAME_MAX_QP, + LAYER_TYPE, + LAYER_ENABLE, + L0_BR, + L1_BR, + L2_BR, + L3_BR, + L4_BR, + L5_BR, + LEVEL, + HEVC_TIER, + DISPLAY_DELAY_ENABLE, + 
DISPLAY_DELAY, + CONCEAL_COLOR_8BIT, + CONCEAL_COLOR_10BIT, + LF_MODE, + LF_ALPHA, + LF_BETA, + SLICE_MAX_BYTES, + SLICE_MAX_MB, + MB_RC, + CHROMA_QP_INDEX_OFFSET, + PIPE, + POC, + CODED_FRAMES, + BIT_DEPTH, + BITSTREAM_SIZE_OVERWRITE, + DEFAULT_HEADER, + RAP_FRAME, + SEQ_CHANGE_AT_SYNC_FRAME, + QUALITY_MODE, + CABAC_MAX_BITRATE, + CAVLC_MAX_BITRATE, + ALLINTRA_MAX_BITRATE, + NUM_COMV, + SIGNAL_COLOR_INFO, + INST_CAP_MAX, +}; + +enum msm_vidc_inst_capability_flags { + CAP_FLAG_NONE = 0, + CAP_FLAG_DYNAMIC_ALLOWED = BIT(0), + CAP_FLAG_MENU = BIT(1), + CAP_FLAG_INPUT_PORT = BIT(2), + CAP_FLAG_OUTPUT_PORT = BIT(3), + CAP_FLAG_CLIENT_SET = BIT(4), + CAP_FLAG_BITMASK = BIT(5), + CAP_FLAG_VOLATILE = BIT(6), +}; + +struct msm_vidc_inst_cap { + enum msm_vidc_inst_capability_type cap_id; + s32 min; + s32 max; + u32 step_or_mask; + s32 value; + u32 v4l2_id; + u32 hfi_id; + enum msm_vidc_inst_capability_flags flags; + enum msm_vidc_inst_capability_type children[MAX_CAP_CHILDREN]; + int (*adjust)(void *inst, + struct v4l2_ctrl *ctrl); + int (*set)(void *inst, + enum msm_vidc_inst_capability_type cap_id); +}; + +struct msm_vidc_inst_capability { + enum msm_vidc_domain_type domain; + enum msm_vidc_codec_type codec; + struct msm_vidc_inst_cap cap[INST_CAP_MAX + 1]; +}; + +struct msm_vidc_core_capability { + enum msm_vidc_core_capability_type type; + u32 value; +}; + +struct msm_vidc_inst_cap_entry { + /* list of struct msm_vidc_inst_cap_entry */ + struct list_head list; + enum msm_vidc_inst_capability_type cap_id; +}; + +struct msm_vidc_event_data { + union { + bool bval; + u32 uval; + u64 uval64; + s32 val; + s64 val64; + void *ptr; + } edata; +}; + +struct debug_buf_count { + u64 etb; + u64 ftb; + u64 fbd; + u64 ebd; +}; + +struct msm_vidc_statistics { + struct debug_buf_count count; + u64 data_size; + u64 time_ms; + u32 avg_bw_llcc; + u32 avg_bw_ddr; +}; + +enum msm_vidc_cache_op { + MSM_VIDC_CACHE_CLEAN, + MSM_VIDC_CACHE_INVALIDATE, + MSM_VIDC_CACHE_CLEAN_INVALIDATE, +}; + 
+enum msm_vidc_dcvs_flags { + MSM_VIDC_DCVS_INCR = BIT(0), + MSM_VIDC_DCVS_DECR = BIT(1), +}; + +enum msm_vidc_clock_properties { + CLOCK_PROP_HAS_SCALING = BIT(0), + CLOCK_PROP_HAS_MEM_RETENTION = BIT(1), +}; + +enum signal_session_response { + SIGNAL_CMD_STOP_INPUT = 0, + SIGNAL_CMD_STOP_OUTPUT, + SIGNAL_CMD_CLOSE, + MAX_SIGNAL, +}; + +struct msm_vidc_input_cr_data { + struct list_head list; + u32 index; + u32 input_cr; +}; + +struct msm_vidc_session_idle { + bool idle; + u64 last_activity_time_ns; +}; + +struct msm_vidc_color_info { + u32 colorspace; + u32 ycbcr_enc; + u32 xfer_func; + u32 quantization; +}; + +struct msm_vidc_rectangle { + u32 left; + u32 top; + u32 width; + u32 height; +}; + +struct msm_vidc_subscription_params { + u32 bitstream_resolution; + u32 crop_offsets[2]; + u32 bit_depth; + u32 coded_frames; + u32 fw_min_count; + u32 pic_order_cnt; + u32 color_info; + u32 profile; + u32 level; + u32 tier; +}; + +struct msm_vidc_hfi_frame_info { + u32 picture_type; + u32 no_output; + u32 subframe_input; + u32 cr; + u32 cf; + u32 data_corrupt; + u32 overflow; +}; + +struct msm_vidc_decode_vpp_delay { + bool enable; + u32 size; +}; + +struct msm_vidc_decode_batch { + bool enable; + u32 size; + struct delayed_work work; +}; + +enum msm_vidc_power_mode { + VIDC_POWER_NORMAL = 0, + VIDC_POWER_LOW, + VIDC_POWER_TURBO, +}; + +struct vidc_bus_vote_data { + enum msm_vidc_domain_type domain; + enum msm_vidc_codec_type codec; + enum msm_vidc_power_mode power_mode; + u32 color_formats[2]; + int num_formats; /* 1 = DPB-OPB unified; 2 = split */ + int input_height, input_width, bitrate; + int output_height, output_width; + int rotation; + int compression_ratio; + int complexity_factor; + int input_cr; + u32 lcu_size; + u32 fps; + u32 work_mode; + bool use_sys_cache; + bool b_frames_enabled; + u64 calc_bw_ddr; + u64 calc_bw_llcc; + u32 num_vpp_pipes; +}; + +struct msm_vidc_power { + enum msm_vidc_power_mode power_mode; + u32 buffer_counter; + u32 min_threshold; + u32 
nom_threshold; + u32 max_threshold; + bool dcvs_mode; + u32 dcvs_window; + u64 min_freq; + u64 curr_freq; + u32 ddr_bw; + u32 sys_cache_bw; + u32 dcvs_flags; + u32 fw_cr; + u32 fw_cf; +}; + +struct msm_vidc_mem { + struct list_head list; + enum msm_vidc_buffer_type type; + enum msm_vidc_buffer_region region; + u32 size; + u8 secure:1; + u8 map_kernel:1; + struct dma_buf *dmabuf; + struct iosys_map dmabuf_map; + void *kvaddr; + dma_addr_t device_addr; + unsigned long attrs; + u32 refcount; + struct sg_table *table; + struct dma_buf_attachment *attach; + enum dma_data_direction direction; +}; + +struct msm_vidc_mem_list { + struct list_head list; // list of "struct msm_vidc_mem" +}; + +struct msm_vidc_buffer { + struct list_head list; + struct msm_vidc_inst *inst; + enum msm_vidc_buffer_type type; + enum msm_vidc_buffer_region region; + u32 index; + int fd; + u32 buffer_size; + u32 data_offset; + u32 data_size; + u64 device_addr; + u32 flags; + u64 timestamp; + enum msm_vidc_buffer_attributes attr; + void *dmabuf; + struct sg_table *sg_table; + struct dma_buf_attachment *attach; + u32 dbuf_get:1; + u32 start_time_ms; + u32 end_time_ms; +}; + +struct msm_vidc_buffers { + struct list_head list; // list of "struct msm_vidc_buffer" + u32 min_count; + u32 extra_count; + u32 actual_count; + u32 size; + bool reuse; +}; + +struct msm_vidc_buffer_stats { + struct list_head list; + u32 frame_num; + u64 timestamp; + u32 etb_time_ms; + u32 ebd_time_ms; + u32 ftb_time_ms; + u32 fbd_time_ms; + u32 data_size; + u32 flags; + u32 ts_offset; +}; + +enum msm_vidc_buffer_stats_flag { + MSM_VIDC_STATS_FLAG_CORRUPT = BIT(0), + MSM_VIDC_STATS_FLAG_OVERFLOW = BIT(1), + MSM_VIDC_STATS_FLAG_NO_OUTPUT = BIT(2), + MSM_VIDC_STATS_FLAG_SUBFRAME_INPUT = BIT(3), +}; + +struct msm_vidc_sort { + struct list_head list; + s64 val; +}; + +struct msm_vidc_timestamp { + struct msm_vidc_sort sort; + u64 rank; +}; + +struct msm_vidc_timestamps { + struct list_head list; + u32 count; + u64 rank; +}; + 
+struct msm_vidc_input_timer { + struct list_head list; + u64 time_us; +}; + +enum msm_vidc_allow { + MSM_VIDC_DISALLOW, + MSM_VIDC_ALLOW, + MSM_VIDC_DEFER, + MSM_VIDC_DISCARD, + MSM_VIDC_IGNORE, +}; + +struct msm_vidc_sfr { + u32 bufsize; + u8 rg_data[]; +}; + +struct msm_vidc_ctrl_data { + bool skip_s_ctrl; +}; + +#endif // _MSM_VIDC_INTERNAL_H_
From patchwork Fri Jul 28 13:23:29 2023
From: Vikash Garodia
Subject: [PATCH 18/33] iris: vidc: hfi: add Host Firmware Interface (HFI)
Date: Fri, 28 Jul 2023 18:53:29 +0530
Message-ID: <1690550624-14642-19-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
List-ID: X-Mailing-List: linux-media@vger.kernel.org
This implements the interface for communication between the host driver and the firmware through interface commands and messages. Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../media/platform/qcom/iris/vidc/inc/venus_hfi.h | 66 + .../media/platform/qcom/iris/vidc/src/venus_hfi.c | 1503 ++++++++++++++++++++ 2 files changed, 1569 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/venus_hfi.h create mode 100644 drivers/media/platform/qcom/iris/vidc/src/venus_hfi.c diff --git a/drivers/media/platform/qcom/iris/vidc/inc/venus_hfi.h b/drivers/media/platform/qcom/iris/vidc/inc/venus_hfi.h new file mode 100644 index 0000000..99c504d --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/venus_hfi.h @@ -0,0 +1,66 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef _VENUS_HFI_H_
+#define _VENUS_HFI_H_
+
+#include
+#include
+#include
+#include
+
+#include "msm_vidc_core.h"
+#include "msm_vidc_inst.h"
+#include "msm_vidc_internal.h"
+
+#define VIDC_MAX_PC_SKIP_COUNT 10
+
+struct vidc_buffer_addr_info {
+	enum msm_vidc_buffer_type buffer_type;
+	u32 buffer_size;
+	u32 num_buffers;
+	u32 align_device_addr;
+	u32 extradata_addr;
+	u32 extradata_size;
+	u32 response_required;
+};
+
+int __strict_check(struct msm_vidc_core *core,
+		   const char *function);
+int venus_hfi_session_property(struct msm_vidc_inst *inst,
+			       u32 pkt_type, u32 flags, u32 port,
+			       u32 payload_type, void *payload,
+			       u32 payload_size);
+int venus_hfi_session_command(struct msm_vidc_inst *inst,
+			      u32 cmd, enum msm_vidc_port_type port,
+			      u32 payload_type, void *payload, u32 payload_size);
+int venus_hfi_queue_buffer(struct msm_vidc_inst *inst,
+			   struct msm_vidc_buffer *buffer);
+int venus_hfi_release_buffer(struct msm_vidc_inst *inst,
+			     struct msm_vidc_buffer *buffer);
+int venus_hfi_start(struct msm_vidc_inst *inst, enum msm_vidc_port_type port);
+int venus_hfi_stop(struct msm_vidc_inst *inst, enum msm_vidc_port_type port);
+int venus_hfi_session_close(struct msm_vidc_inst *inst);
+int venus_hfi_session_open(struct msm_vidc_inst *inst);
+int venus_hfi_session_pause(struct msm_vidc_inst *inst, enum msm_vidc_port_type port);
+int venus_hfi_session_resume(struct msm_vidc_inst *inst,
+			     enum msm_vidc_port_type port, u32 payload);
+int venus_hfi_session_drain(struct msm_vidc_inst *inst, enum msm_vidc_port_type port);
+int venus_hfi_session_set_codec(struct msm_vidc_inst *inst);
+int venus_hfi_core_init(struct msm_vidc_core *core);
+int venus_hfi_core_deinit(struct msm_vidc_core *core, bool force);
+int venus_hfi_suspend(struct msm_vidc_core *core);
+int venus_hfi_reserve_hardware(struct msm_vidc_inst *inst, u32 duration);
+int venus_hfi_scale_clocks(struct msm_vidc_inst *inst, u64 freq);
+int venus_hfi_scale_buses(struct msm_vidc_inst *inst, u64 bw_ddr,
u64 bw_llcc); +int venus_hfi_set_ir_period(struct msm_vidc_inst *inst, u32 ir_type, + enum msm_vidc_inst_capability_type cap_id); +void venus_hfi_pm_work_handler(struct work_struct *work); +irqreturn_t venus_hfi_isr(int irq, void *data); +irqreturn_t venus_hfi_isr_handler(int irq, void *data); +int __prepare_pc(struct msm_vidc_core *core); + +#endif // _VENUS_HFI_H_ diff --git a/drivers/media/platform/qcom/iris/vidc/src/venus_hfi.c b/drivers/media/platform/qcom/iris/vidc/src/venus_hfi.c new file mode 100644 index 0000000..87cac76 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/venus_hfi.c @@ -0,0 +1,1503 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "firmware.h" +#include "hfi_packet.h" +#include "msm_vidc_control.h" +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_memory.h" +#include "msm_vidc_platform.h" +#include "msm_vidc_power.h" +#include "msm_vidc_state.h" +#include "venus_hfi.h" +#include "venus_hfi_queue.h" +#include "venus_hfi_response.h" + +#define update_offset(offset, val) ((offset) += (val)) +#define update_timestamp(ts, val) \ + do { \ + do_div((ts), NSEC_PER_USEC); \ + (ts) += (val); \ + (ts) *= NSEC_PER_USEC; \ + } while (0) + +static int __resume(struct msm_vidc_core *core); +static int __suspend(struct msm_vidc_core *core); + +static void __fatal_error(bool fatal) +{ + WARN_ON(fatal); +} + +int __strict_check(struct msm_vidc_core *core, const char *function) +{ + bool fatal = !mutex_is_locked(&core->lock); + + __fatal_error(fatal); + + if (fatal) + d_vpr_e("%s: strict check failed\n", function); + + return fatal ? 
-EINVAL : 0; +} + +static bool __valdiate_session(struct msm_vidc_core *core, + struct msm_vidc_inst *inst, const char *func) +{ + bool valid = false; + struct msm_vidc_inst *temp; + int rc = 0; + + rc = __strict_check(core, __func__); + if (rc) + return false; + + list_for_each_entry(temp, &core->instances, list) { + if (temp == inst) { + valid = true; + break; + } + } + if (!valid) + i_vpr_e(inst, "%s: invalid session\n", func); + + return valid; +} + +static void __schedule_power_collapse_work(struct msm_vidc_core *core) +{ + if (!core->capabilities[SW_PC].value) { + d_vpr_l("software power collapse not enabled\n"); + return; + } + + if (!mod_delayed_work(core->pm_workq, &core->pm_work, + msecs_to_jiffies(core->capabilities[SW_PC_DELAY].value))) { + d_vpr_h("power collapse already scheduled\n"); + } else { + d_vpr_l("power collapse scheduled for %d ms\n", + core->capabilities[SW_PC_DELAY].value); + } +} + +static void __cancel_power_collapse_work(struct msm_vidc_core *core) +{ + if (!core->capabilities[SW_PC].value) + return; + + cancel_delayed_work(&core->pm_work); +} + +static void __flush_debug_queue(struct msm_vidc_core *core, + u8 *packet, u32 packet_size) +{ + u8 *log; + struct hfi_debug_header *pkt; + bool local_packet = false; + enum vidc_msg_prio_fw log_level_fw = msm_fw_debug; + + if (!packet || !packet_size) { + packet = vzalloc(VIDC_IFACEQ_VAR_HUGE_PKT_SIZE); + if (!packet) { + d_vpr_e("%s: allocation failed\n", __func__); + return; + } + packet_size = VIDC_IFACEQ_VAR_HUGE_PKT_SIZE; + + local_packet = true; + + /* + * Local packet is used when error occurred. + * It is good to print these logs to printk as well. 
+		 */
+		log_level_fw |= FW_PRINTK;
+	}
+
+	while (!venus_hfi_queue_dbg_read(core, packet)) {
+		pkt = (struct hfi_debug_header *)packet;
+
+		if (pkt->size < sizeof(struct hfi_debug_header)) {
+			d_vpr_e("%s: invalid pkt size %d\n",
+				__func__, pkt->size);
+			continue;
+		}
+		if (pkt->size >= packet_size) {
+			d_vpr_e("%s: pkt size[%d] >= packet_size[%d]\n",
+				__func__, pkt->size, packet_size);
+			continue;
+		}
+
+		packet[pkt->size] = '\0';
+		/*
+		 * All fw messages start with a newline character. This
+		 * causes dprintk to print the message across two lines
+		 * in the kernel log. Skipping the first character of
+		 * the message keeps it on a single line.
+		 */
+		log = (u8 *)packet + sizeof(struct hfi_debug_header) + 1;
+		dprintk_firmware(log_level_fw, "%s", log);
+	}
+
+	if (local_packet)
+		vfree(packet);
+}
+
+static int __cmdq_write(struct msm_vidc_core *core, void *pkt)
+{
+	int rc;
+
+	rc = __resume(core);
+	if (rc)
+		return rc;
+
+	rc = venus_hfi_queue_cmd_write(core, pkt);
+	if (!rc)
+		__schedule_power_collapse_work(core);
+
+	return rc;
+}
+
+static int __sys_set_debug(struct msm_vidc_core *core, u32 debug)
+{
+	int rc = 0;
+
+	rc = hfi_packet_sys_debug_config(core, core->packet,
+					 core->packet_size, debug);
+	if (rc)
+		goto exit;
+
+	rc = __cmdq_write(core, core->packet);
+	if (rc)
+		goto exit;
+
+exit:
+	if (rc)
+		d_vpr_e("Debug mode setting to FW failed\n");
+
+	return rc;
+}
+
+static int __sys_set_power_control(struct msm_vidc_core *core, bool enable)
+{
+	int rc = 0;
+
+	if (!is_core_sub_state(core, CORE_SUBSTATE_GDSC_HANDOFF)) {
+		d_vpr_e("%s: skipping as power control handoff was not done\n",
+			__func__);
+		return rc;
+	}
+
+	if (!core_in_valid_state(core)) {
+		d_vpr_e("%s: invalid core state %s\n",
+			__func__, core_state_name(core->state));
+		return rc;
+	}
+
+	rc = hfi_packet_sys_intraframe_powercollapse(core, core->packet,
+						     core->packet_size, enable);
+	if (rc)
+		return rc;
+
+	rc = __cmdq_write(core, core->packet);
+	if (rc)
+		return rc;
+ + rc = msm_vidc_change_core_sub_state(core, 0, CORE_SUBSTATE_FW_PWR_CTRL, __func__); + if (rc) + return rc; + + d_vpr_h("%s: set hardware power control successful\n", __func__); + + return rc; +} + +int __prepare_pc(struct msm_vidc_core *core) +{ + int rc = 0; + + rc = hfi_packet_sys_pc_prep(core, core->packet, core->packet_size); + if (rc) { + d_vpr_e("Failed to create sys pc prep pkt\n"); + goto err_pc_prep; + } + + if (__cmdq_write(core, core->packet)) + rc = -ENOTEMPTY; + if (rc) + d_vpr_e("Failed to prepare venus for power off"); +err_pc_prep: + return rc; +} + +static int __power_collapse(struct msm_vidc_core *core, bool force) +{ + int rc = 0; + + if (!is_core_sub_state(core, CORE_SUBSTATE_POWER_ENABLE)) { + d_vpr_h("%s: Power already disabled\n", __func__); + goto exit; + } + + if (!core_in_valid_state(core)) { + d_vpr_e("%s: Core not in init state\n", __func__); + return -EINVAL; + } + + __flush_debug_queue(core, (!force ? core->packet : NULL), core->packet_size); + + rc = call_iris_op(core, prepare_pc, core); + if (rc) + goto skip_power_off; + + rc = __suspend(core); + if (rc) + d_vpr_e("Failed __suspend\n"); + +exit: + return rc; + +skip_power_off: + d_vpr_e("%s: skipped\n", __func__); + return -EAGAIN; +} + +static int __release_subcaches(struct msm_vidc_core *core) +{ + int rc = 0; + struct subcache_info *sinfo; + struct hfi_buffer buf; + + if (!is_sys_cache_present(core)) + return 0; + + if (!core->resource->subcache_set.set_to_fw) { + d_vpr_h("Subcaches not set to Venus\n"); + return 0; + } + + rc = hfi_create_header(core->packet, core->packet_size, + 0, core->header_id++); + if (rc) + return rc; + + memset(&buf, 0, sizeof(struct hfi_buffer)); + buf.type = HFI_BUFFER_SUBCACHE; + buf.flags = HFI_BUF_HOST_FLAG_RELEASE; + + venus_hfi_for_each_subcache_reverse(core, sinfo) { + if (!sinfo->isactive) + continue; + + buf.index = sinfo->subcache->slice_id; + buf.buffer_size = sinfo->subcache->slice_size; + + rc = hfi_create_packet(core->packet, + 
+				       core->packet_size,
+				       HFI_CMD_BUFFER,
+				       HFI_BUF_HOST_FLAG_NONE,
+				       HFI_PAYLOAD_STRUCTURE,
+				       HFI_PORT_NONE,
+				       core->packet_id++,
+				       &buf,
+				       sizeof(buf));
+		if (rc)
+			return rc;
+	}
+
+	/* Release resources of activated subcaches from Venus */
+	rc = __cmdq_write(core, core->packet);
+	if (rc)
+		return rc;
+
+	venus_hfi_for_each_subcache_reverse(core, sinfo) {
+		if (!sinfo->isactive)
+			continue;
+
+		d_vpr_h("%s: release Subcache id %d size %lu done\n",
+			__func__, sinfo->subcache->slice_id,
+			sinfo->subcache->slice_size);
+	}
+	core->resource->subcache_set.set_to_fw = false;
+
+	return 0;
+}
+
+static int __set_subcaches(struct msm_vidc_core *core)
+{
+	int rc = 0;
+	struct subcache_info *sinfo;
+	struct hfi_buffer buf;
+
+	if (!is_sys_cache_present(core))
+		return 0;
+
+	if (core->resource->subcache_set.set_to_fw) {
+		d_vpr_h("Subcaches already set to Venus\n");
+		return 0;
+	}
+
+	rc = hfi_create_header(core->packet, core->packet_size,
+			       0, core->header_id++);
+	if (rc)
+		goto err_fail_set_subcaches;
+
+	memset(&buf, 0, sizeof(struct hfi_buffer));
+	buf.type = HFI_BUFFER_SUBCACHE;
+	buf.flags = HFI_BUF_HOST_FLAG_NONE;
+
+	venus_hfi_for_each_subcache(core, sinfo) {
+		if (!sinfo->isactive)
+			continue;
+		buf.index = sinfo->subcache->slice_id;
+		buf.buffer_size = sinfo->subcache->slice_size;
+
+		rc = hfi_create_packet(core->packet,
+				       core->packet_size,
+				       HFI_CMD_BUFFER,
+				       HFI_BUF_HOST_FLAG_NONE,
+				       HFI_PAYLOAD_STRUCTURE,
+				       HFI_PORT_NONE,
+				       core->packet_id++,
+				       &buf,
+				       sizeof(buf));
+		if (rc)
+			goto err_fail_set_subcaches;
+	}
+
+	/* Set resource to Venus for activated subcaches */
+	rc = __cmdq_write(core, core->packet);
+	if (rc)
+		goto err_fail_set_subcaches;
+
+	venus_hfi_for_each_subcache(core, sinfo) {
+		if (!sinfo->isactive)
+			continue;
+		d_vpr_h("%s: set Subcache id %d size %lu done\n",
+			__func__, sinfo->subcache->slice_id,
+			sinfo->subcache->slice_size);
+	}
+	core->resource->subcache_set.set_to_fw = true;
+
+	return 0;
+
+err_fail_set_subcaches:
+	call_res_op(core,
llcc, core, false); + return rc; +} + +static int __venus_power_off(struct msm_vidc_core *core) +{ + int rc = 0; + + if (!is_core_sub_state(core, CORE_SUBSTATE_POWER_ENABLE)) + return 0; + + rc = call_iris_op(core, power_off, core); + if (rc) { + d_vpr_e("Failed to power off, err: %d\n", rc); + return rc; + } + msm_vidc_change_core_sub_state(core, CORE_SUBSTATE_POWER_ENABLE, 0, __func__); + + return rc; +} + +static int __venus_power_on(struct msm_vidc_core *core) +{ + int rc = 0; + + if (is_core_sub_state(core, CORE_SUBSTATE_POWER_ENABLE)) + return 0; + + rc = call_iris_op(core, power_on, core); + if (rc) { + d_vpr_e("Failed to power on, err: %d\n", rc); + return rc; + } + + rc = msm_vidc_change_core_sub_state(core, 0, CORE_SUBSTATE_POWER_ENABLE, __func__); + + return rc; +} + +static int __suspend(struct msm_vidc_core *core) +{ + int rc = 0; + + if (!is_core_sub_state(core, CORE_SUBSTATE_POWER_ENABLE)) { + d_vpr_h("Power already disabled\n"); + return 0; + } + + rc = __strict_check(core, __func__); + if (rc) + return rc; + + d_vpr_h("Entering suspend\n"); + + rc = fw_suspend(core); + if (rc) { + d_vpr_e("Failed to suspend video core %d\n", rc); + goto err_tzbsp_suspend; + } + + call_res_op(core, llcc, core, false); + + __venus_power_off(core); + d_vpr_h("Venus power off\n"); + return rc; + +err_tzbsp_suspend: + return rc; +} + +static int __resume(struct msm_vidc_core *core) +{ + int rc = 0; + + if (is_core_sub_state(core, CORE_SUBSTATE_POWER_ENABLE)) { + goto exit; + } else if (!core_in_valid_state(core)) { + d_vpr_e("%s: core not in valid state\n", __func__); + return -EINVAL; + } + + rc = __strict_check(core, __func__); + if (rc) + return rc; + + d_vpr_h("Resuming from power collapse\n"); + /* reset handoff done from core sub_state */ + rc = msm_vidc_change_core_sub_state(core, CORE_SUBSTATE_GDSC_HANDOFF, 0, __func__); + if (rc) + return rc; + /* reset hw pwr ctrl from core sub_state */ + rc = msm_vidc_change_core_sub_state(core, CORE_SUBSTATE_FW_PWR_CTRL, 0, 
__func__); + if (rc) + return rc; + + rc = __venus_power_on(core); + if (rc) { + d_vpr_e("Failed to power on venus\n"); + goto err_venus_power_on; + } + + /* Reboot the firmware */ + rc = fw_resume(core); + if (rc) { + d_vpr_e("Failed to resume video core %d\n", rc); + goto err_set_video_state; + } + + /* + * Hand off control of regulators to h/w _after_ loading fw. + * Note that the GDSC will turn off when switching from normal + * (s/w triggered) to fast (HW triggered) unless the h/w vote is + * present. + */ + call_res_op(core, gdsc_hw_ctrl, core); + + /* Wait for boot completion */ + rc = call_iris_op(core, boot_firmware, core); + if (rc) { + d_vpr_e("Failed to reset venus core\n"); + goto err_reset_core; + } + + __sys_set_debug(core, (msm_fw_debug & FW_LOGMASK) >> FW_LOGSHIFT); + + rc = call_res_op(core, llcc, core, true); + if (rc) { + d_vpr_e("Failed to activate subcache\n"); + goto err_reset_core; + } + __set_subcaches(core); + + rc = __sys_set_power_control(core, true); + if (rc) { + d_vpr_e("%s: set power control failed\n", __func__); + call_res_op(core, gdsc_sw_ctrl, core); + rc = 0; + } + + d_vpr_h("Resumed from power collapse\n"); +exit: + /* Don't reset skip_pc_count for SYS_PC_PREP cmd */ + //if (core->last_packet_type != HFI_CMD_SYS_PC_PREP) + // core->skip_pc_count = 0; + return rc; +err_reset_core: + fw_suspend(core); +err_set_video_state: + __venus_power_off(core); +err_venus_power_on: + d_vpr_e("Failed to resume from power collapse\n"); + return rc; +} + +int __load_fw(struct msm_vidc_core *core) +{ + int rc = 0; + + /* clear all substates */ + msm_vidc_change_core_sub_state(core, CORE_SUBSTATE_MAX - 1, 0, __func__); + + rc = __venus_power_on(core); + if (rc) { + d_vpr_e("%s: power on failed\n", __func__); + goto fail_power; + } + + rc = fw_load(core); + if (rc) + goto fail_load_fw; + + /* + * Hand off control of regulators to h/w _after_ loading fw. 
+ * Note that the GDSC will turn off when switching from normal + * (s/w triggered) to fast (HW triggered) unless the h/w vote is + * present. + */ + call_res_op(core, gdsc_hw_ctrl, core); + + return rc; +fail_load_fw: + __venus_power_off(core); +fail_power: + return rc; +} + +void __unload_fw(struct msm_vidc_core *core) +{ + if (!core->resource->fw_cookie) + return; + + cancel_delayed_work(&core->pm_work); + fw_unload(core); + __venus_power_off(core); + + /* clear all substates */ + msm_vidc_change_core_sub_state(core, CORE_SUBSTATE_MAX - 1, 0, __func__); +} + +static int __response_handler(struct msm_vidc_core *core) +{ + int rc = 0; + + if (call_iris_op(core, watchdog, core, core->intr_status)) { + struct hfi_packet pkt = {.type = HFI_SYS_ERROR_WD_TIMEOUT}; + + core_lock(core, __func__); + msm_vidc_change_core_state(core, MSM_VIDC_CORE_ERROR, __func__); + /* mark cpu watchdog error */ + msm_vidc_change_core_sub_state(core, 0, CORE_SUBSTATE_CPU_WATCHDOG, __func__); + d_vpr_e("%s: CPU WD error received\n", __func__); + core_unlock(core, __func__); + + return handle_system_error(core, &pkt); + } + + memset(core->response_packet, 0, core->packet_size); + while (!venus_hfi_queue_msg_read(core, core->response_packet)) { + rc = handle_response(core, core->response_packet); + if (rc) + continue; + /* check for system error */ + if (core->state != MSM_VIDC_CORE_INIT) + break; + memset(core->response_packet, 0, core->packet_size); + } + + __schedule_power_collapse_work(core); + __flush_debug_queue(core, core->response_packet, core->packet_size); + + return rc; +} + +irqreturn_t venus_hfi_isr(int irq, void *data) +{ + disable_irq_nosync(irq); + return IRQ_WAKE_THREAD; +} + +irqreturn_t venus_hfi_isr_handler(int irq, void *data) +{ + struct msm_vidc_core *core = data; + int num_responses = 0, rc = 0; + + if (!core) { + d_vpr_e("%s: invalid params\n", __func__); + return IRQ_NONE; + } + + core_lock(core, __func__); + rc = __resume(core); + if (rc) { + d_vpr_e("%s: Power on 
failed\n", __func__); + core_unlock(core, __func__); + goto exit; + } + call_iris_op(core, clear_interrupt, core); + core_unlock(core, __func__); + + num_responses = __response_handler(core); + +exit: + if (!call_iris_op(core, watchdog, core, core->intr_status)) + enable_irq(irq); + + return IRQ_HANDLED; +} + +void venus_hfi_pm_work_handler(struct work_struct *work) +{ + int rc = 0; + struct msm_vidc_core *core; + + core = container_of(work, struct msm_vidc_core, pm_work.work); + + core_lock(core, __func__); + d_vpr_h("%s: try power collapse\n", __func__); + /* + * It is ok to check this variable outside the lock since + * it is being updated in this context only + */ + if (core->skip_pc_count >= VIDC_MAX_PC_SKIP_COUNT) { + d_vpr_e("Failed to PC for %d times\n", + core->skip_pc_count); + core->skip_pc_count = 0; + msm_vidc_change_core_state(core, MSM_VIDC_CORE_ERROR, __func__); + /* mark video hw unresponsive */ + msm_vidc_change_core_sub_state(core, 0, + CORE_SUBSTATE_VIDEO_UNRESPONSIVE, __func__); + /* do core deinit to handle error */ + msm_vidc_core_deinit_locked(core, true); + goto unlock; + } + + /* core already deinited - skip power collapse */ + if (is_core_state(core, MSM_VIDC_CORE_DEINIT)) { + d_vpr_e("%s: invalid core state %s\n", + __func__, core_state_name(core->state)); + goto unlock; + } + + rc = __power_collapse(core, false); + switch (rc) { + case 0: + core->skip_pc_count = 0; + /* Cancel pending delayed works if any */ + __cancel_power_collapse_work(core); + d_vpr_h("%s: power collapse successful!\n", __func__); + break; + case -EBUSY: + core->skip_pc_count = 0; + d_vpr_h("%s: retry PC as dsp is busy\n", __func__); + __schedule_power_collapse_work(core); + break; + case -EAGAIN: + core->skip_pc_count++; + d_vpr_e("%s: retry power collapse (count %d)\n", + __func__, core->skip_pc_count); + __schedule_power_collapse_work(core); + break; + default: + d_vpr_e("%s: power collapse failed\n", __func__); + break; + } +unlock: + core_unlock(core, 
__func__); +} + +static int __sys_init(struct msm_vidc_core *core) +{ + int rc = 0; + + rc = hfi_packet_sys_init(core, core->packet, core->packet_size); + if (rc) + return rc; + + rc = __cmdq_write(core, core->packet); + if (rc) + return rc; + + return 0; +} + +static int __sys_image_version(struct msm_vidc_core *core) +{ + int rc = 0; + + rc = hfi_packet_image_version(core, core->packet, core->packet_size); + if (rc) + return rc; + + rc = __cmdq_write(core, core->packet); + if (rc) + return rc; + + return 0; +} + +int venus_hfi_core_init(struct msm_vidc_core *core) +{ + int rc = 0; + + d_vpr_h("%s(): core %pK\n", __func__, core); + + rc = __strict_check(core, __func__); + if (rc) + return rc; + + rc = venus_hfi_queue_init(core); + if (rc) + goto error; + + rc = __load_fw(core); + if (rc) + goto error; + + rc = call_iris_op(core, boot_firmware, core); + if (rc) + goto error; + + rc = call_res_op(core, llcc, core, true); + if (rc) + goto error; + + rc = __sys_init(core); + if (rc) + goto error; + + rc = __sys_image_version(core); + if (rc) + goto error; + + rc = __sys_set_debug(core, (msm_fw_debug & FW_LOGMASK) >> FW_LOGSHIFT); + if (rc) + goto error; + + rc = __set_subcaches(core); + if (rc) + goto error; + + rc = __sys_set_power_control(core, true); + if (rc) { + d_vpr_e("%s: set power control failed\n", __func__); + call_res_op(core, gdsc_sw_ctrl, core); + rc = 0; + } + + d_vpr_h("%s(): successful\n", __func__); + return 0; + +error: + d_vpr_e("%s(): failed\n", __func__); + return rc; +} + +int venus_hfi_core_deinit(struct msm_vidc_core *core, bool force) +{ + int rc = 0; + + d_vpr_h("%s(): core %pK\n", __func__, core); + rc = __strict_check(core, __func__); + if (rc) + return rc; + + if (is_core_state(core, MSM_VIDC_CORE_DEINIT)) + return 0; + __resume(core); + __flush_debug_queue(core, (!force ? 
core->packet : NULL), core->packet_size); + __release_subcaches(core); + call_res_op(core, llcc, core, false); + __unload_fw(core); + /** + * coredump need to be called after firmware unload, coredump also + * copying queues memory. So need to be called before queues deinit. + */ + if (msm_vidc_fw_dump) + fw_coredump(core); + + return 0; +} + +int venus_hfi_suspend(struct msm_vidc_core *core) +{ + int rc = 0; + + rc = __strict_check(core, __func__); + if (rc) + return rc; + + if (!core->capabilities[SW_PC].value) { + d_vpr_h("Skip suspending venus\n"); + return 0; + } + + d_vpr_h("Suspending Venus\n"); + rc = __power_collapse(core, true); + if (!rc) { + /* Cancel pending delayed works if any */ + __cancel_power_collapse_work(core); + } else { + d_vpr_e("%s: Venus is busy\n", __func__); + rc = -EBUSY; + } + + return rc; +} + +int venus_hfi_session_open(struct msm_vidc_inst *inst) +{ + int rc = 0; + struct msm_vidc_core *core; + + if (!inst->packet) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + core = inst->core; + core_lock(core, __func__); + + if (!__valdiate_session(core, inst, __func__)) { + rc = -EINVAL; + goto unlock; + } + + __sys_set_debug(core, (msm_fw_debug & FW_LOGMASK) >> FW_LOGSHIFT); + + rc = hfi_packet_session_command(inst, + HFI_CMD_OPEN, + (HFI_HOST_FLAGS_RESPONSE_REQUIRED | + HFI_HOST_FLAGS_INTR_REQUIRED), + HFI_PORT_NONE, + 0, /* session_id */ + HFI_PAYLOAD_U32, + &inst->session_id, /* payload */ + sizeof(u32)); + if (rc) + goto unlock; + + rc = __cmdq_write(inst->core, inst->packet); + if (rc) + goto unlock; + +unlock: + core_unlock(core, __func__); + return rc; +} + +int venus_hfi_session_set_codec(struct msm_vidc_inst *inst) +{ + int rc = 0; + struct msm_vidc_core *core; + u32 codec; + + if (!inst->packet) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + core = inst->core; + core_lock(core, __func__); + + if (!__valdiate_session(core, inst, __func__)) { + rc = -EINVAL; + goto unlock; + } + + rc = 
hfi_create_header(inst->packet, inst->packet_size, + inst->session_id, core->header_id++); + if (rc) + goto unlock; + + codec = get_hfi_codec(inst); + rc = hfi_create_packet(inst->packet, inst->packet_size, + HFI_PROP_CODEC, + HFI_HOST_FLAGS_NONE, + HFI_PAYLOAD_U32_ENUM, + HFI_PORT_NONE, + core->packet_id++, + &codec, + sizeof(u32)); + if (rc) + goto unlock; + + rc = __cmdq_write(inst->core, inst->packet); + if (rc) + goto unlock; + +unlock: + core_unlock(core, __func__); + return rc; +} + +int venus_hfi_session_property(struct msm_vidc_inst *inst, + u32 pkt_type, u32 flags, u32 port, + u32 payload_type, void *payload, u32 payload_size) +{ + int rc = 0; + struct msm_vidc_core *core; + + if (!inst->packet) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + core = inst->core; + core_lock(core, __func__); + + if (!__valdiate_session(core, inst, __func__)) { + rc = -EINVAL; + goto unlock; + } + + rc = hfi_create_header(inst->packet, inst->packet_size, + inst->session_id, core->header_id++); + if (rc) + goto unlock; + rc = hfi_create_packet(inst->packet, inst->packet_size, + pkt_type, + flags, + payload_type, + port, + core->packet_id++, + payload, + payload_size); + if (rc) + goto unlock; + + rc = __cmdq_write(inst->core, inst->packet); + if (rc) + goto unlock; + +unlock: + core_unlock(core, __func__); + return rc; +} + +int venus_hfi_session_close(struct msm_vidc_inst *inst) +{ + int rc = 0; + struct msm_vidc_core *core; + + if (!inst->packet) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + core = inst->core; + core_lock(core, __func__); + + if (!__valdiate_session(core, inst, __func__)) { + rc = -EINVAL; + goto unlock; + } + + rc = hfi_packet_session_command(inst, + HFI_CMD_CLOSE, + (HFI_HOST_FLAGS_RESPONSE_REQUIRED | + HFI_HOST_FLAGS_INTR_REQUIRED | + HFI_HOST_FLAGS_NON_DISCARDABLE), + HFI_PORT_NONE, + inst->session_id, + HFI_PAYLOAD_NONE, + NULL, + 0); + if (rc) + goto unlock; + + rc = __cmdq_write(inst->core, 
inst->packet); + if (rc) + goto unlock; + +unlock: + core_unlock(core, __func__); + return rc; +} + +int venus_hfi_start(struct msm_vidc_inst *inst, enum msm_vidc_port_type port) +{ + int rc = 0; + struct msm_vidc_core *core; + + if (!inst->packet) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + core = inst->core; + core_lock(core, __func__); + + if (!__valdiate_session(core, inst, __func__)) { + rc = -EINVAL; + goto unlock; + } + + if (port != INPUT_PORT && port != OUTPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + goto unlock; + } + + rc = hfi_packet_session_command(inst, + HFI_CMD_START, + (HFI_HOST_FLAGS_RESPONSE_REQUIRED | + HFI_HOST_FLAGS_INTR_REQUIRED), + get_hfi_port(inst, port), + inst->session_id, + HFI_PAYLOAD_NONE, + NULL, + 0); + if (rc) + goto unlock; + + rc = __cmdq_write(inst->core, inst->packet); + if (rc) + goto unlock; + +unlock: + core_unlock(core, __func__); + return rc; +} + +int venus_hfi_stop(struct msm_vidc_inst *inst, enum msm_vidc_port_type port) +{ + int rc = 0; + struct msm_vidc_core *core; + + if (!inst->packet) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + core = inst->core; + core_lock(core, __func__); + + if (!__valdiate_session(core, inst, __func__)) { + rc = -EINVAL; + goto unlock; + } + + if (port != INPUT_PORT && port != OUTPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + goto unlock; + } + + rc = hfi_packet_session_command(inst, + HFI_CMD_STOP, + (HFI_HOST_FLAGS_RESPONSE_REQUIRED | + HFI_HOST_FLAGS_INTR_REQUIRED | + HFI_HOST_FLAGS_NON_DISCARDABLE), + get_hfi_port(inst, port), + inst->session_id, + HFI_PAYLOAD_NONE, + NULL, + 0); + if (rc) + goto unlock; + + rc = __cmdq_write(inst->core, inst->packet); + if (rc) + goto unlock; + +unlock: + core_unlock(core, __func__); + return rc; +} + +int venus_hfi_session_pause(struct msm_vidc_inst *inst, enum msm_vidc_port_type port) +{ + int rc = 0; + struct msm_vidc_core *core; + + if 
(!inst->packet) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + core = inst->core; + core_lock(core, __func__); + + if (!__valdiate_session(core, inst, __func__)) { + rc = -EINVAL; + goto unlock; + } + + if (port != INPUT_PORT && port != OUTPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + goto unlock; + } + + rc = hfi_packet_session_command(inst, + HFI_CMD_PAUSE, + (HFI_HOST_FLAGS_RESPONSE_REQUIRED | + HFI_HOST_FLAGS_INTR_REQUIRED), + get_hfi_port(inst, port), + inst->session_id, + HFI_PAYLOAD_NONE, + NULL, + 0); + if (rc) + goto unlock; + + rc = __cmdq_write(inst->core, inst->packet); + if (rc) + goto unlock; + +unlock: + core_unlock(core, __func__); + return rc; +} + +int venus_hfi_session_resume(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port, u32 payload) +{ + int rc = 0; + struct msm_vidc_core *core; + + if (!inst->packet) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + core = inst->core; + core_lock(core, __func__); + + if (!__valdiate_session(core, inst, __func__)) { + rc = -EINVAL; + goto unlock; + } + + if (port != INPUT_PORT && port != OUTPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + goto unlock; + } + + rc = hfi_packet_session_command(inst, + HFI_CMD_RESUME, + (HFI_HOST_FLAGS_RESPONSE_REQUIRED | + HFI_HOST_FLAGS_INTR_REQUIRED), + get_hfi_port(inst, port), + inst->session_id, + HFI_PAYLOAD_U32, + &payload, + sizeof(u32)); + if (rc) + goto unlock; + + rc = __cmdq_write(inst->core, inst->packet); + if (rc) + goto unlock; + +unlock: + core_unlock(core, __func__); + return rc; +} + +int venus_hfi_session_drain(struct msm_vidc_inst *inst, enum msm_vidc_port_type port) +{ + int rc = 0; + struct msm_vidc_core *core; + + if (!inst->packet) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + core = inst->core; + core_lock(core, __func__); + + if (!__valdiate_session(core, inst, __func__)) { + rc = -EINVAL; + goto unlock; + } + + if (port != 
INPUT_PORT) { + i_vpr_e(inst, "%s: invalid port %d\n", __func__, port); + goto unlock; + } + + rc = hfi_packet_session_command(inst, + HFI_CMD_DRAIN, + (HFI_HOST_FLAGS_RESPONSE_REQUIRED | + HFI_HOST_FLAGS_INTR_REQUIRED | + HFI_HOST_FLAGS_NON_DISCARDABLE), + get_hfi_port(inst, port), + inst->session_id, + HFI_PAYLOAD_NONE, + NULL, + 0); + if (rc) + goto unlock; + + rc = __cmdq_write(inst->core, inst->packet); + if (rc) + goto unlock; + +unlock: + core_unlock(core, __func__); + return rc; +} + +int venus_hfi_session_command(struct msm_vidc_inst *inst, + u32 cmd, enum msm_vidc_port_type port, u32 payload_type, + void *payload, u32 payload_size) +{ + int rc = 0; + struct msm_vidc_core *core; + + if (!inst->packet) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + core = inst->core; + core_lock(core, __func__); + + if (!__valdiate_session(core, inst, __func__)) { + rc = -EINVAL; + goto unlock; + } + + rc = hfi_create_header(inst->packet, inst->packet_size, + inst->session_id, + core->header_id++); + if (rc) + goto unlock; + + rc = hfi_create_packet(inst->packet, inst->packet_size, + cmd, + (HFI_HOST_FLAGS_RESPONSE_REQUIRED | + HFI_HOST_FLAGS_INTR_REQUIRED), + payload_type, + get_hfi_port(inst, port), + core->packet_id++, + payload, + payload_size); + if (rc) + goto unlock; + + rc = __cmdq_write(inst->core, inst->packet); + if (rc) + goto unlock; + +unlock: + core_unlock(core, __func__); + return rc; +} + +int venus_hfi_queue_buffer(struct msm_vidc_inst *inst, + struct msm_vidc_buffer *buffer) +{ + int rc = 0; + struct msm_vidc_core *core; + struct hfi_buffer hfi_buffer; + + if (!inst->packet) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + core = inst->core; + core_lock(core, __func__); + + if (!__valdiate_session(core, inst, __func__)) { + rc = -EINVAL; + goto unlock; + } + + rc = get_hfi_buffer(inst, buffer, &hfi_buffer); + if (rc) + goto unlock; + + rc = hfi_create_header(inst->packet, inst->packet_size, + inst->session_id, 
core->header_id++); + if (rc) + goto unlock; + + rc = hfi_create_packet(inst->packet, + inst->packet_size, + HFI_CMD_BUFFER, + HFI_HOST_FLAGS_INTR_REQUIRED, + HFI_PAYLOAD_STRUCTURE, + get_hfi_port_from_buffer_type(inst, buffer->type), + core->packet_id++, + &hfi_buffer, + sizeof(hfi_buffer)); + if (rc) + goto unlock; + + rc = __cmdq_write(inst->core, inst->packet); + if (rc) + goto unlock; + +unlock: + core_unlock(core, __func__); + return rc; +} + +int venus_hfi_release_buffer(struct msm_vidc_inst *inst, + struct msm_vidc_buffer *buffer) +{ + int rc = 0; + struct msm_vidc_core *core; + struct hfi_buffer hfi_buffer; + + if (!inst->packet || !buffer) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + core = inst->core; + core_lock(core, __func__); + + if (!__valdiate_session(core, inst, __func__)) { + rc = -EINVAL; + goto unlock; + } + + rc = get_hfi_buffer(inst, buffer, &hfi_buffer); + if (rc) + goto unlock; + + /* add release flag */ + hfi_buffer.flags |= HFI_BUF_HOST_FLAG_RELEASE; + + rc = hfi_create_header(inst->packet, inst->packet_size, + inst->session_id, core->header_id++); + if (rc) + goto unlock; + + rc = hfi_create_packet(inst->packet, + inst->packet_size, + HFI_CMD_BUFFER, + (HFI_HOST_FLAGS_RESPONSE_REQUIRED | + HFI_HOST_FLAGS_INTR_REQUIRED), + HFI_PAYLOAD_STRUCTURE, + get_hfi_port_from_buffer_type(inst, buffer->type), + core->packet_id++, + &hfi_buffer, + sizeof(hfi_buffer)); + if (rc) + goto unlock; + + rc = __cmdq_write(inst->core, inst->packet); + if (rc) + goto unlock; + +unlock: + core_unlock(core, __func__); + return rc; +} + +int venus_hfi_scale_clocks(struct msm_vidc_inst *inst, u64 freq) +{ + int rc = 0; + struct msm_vidc_core *core; + + core = inst->core; + + core_lock(core, __func__); + rc = __resume(core); + if (rc) { + i_vpr_e(inst, "%s: Resume from power collapse failed\n", __func__); + goto exit; + } + rc = call_res_op(core, set_clks, core, freq); + if (rc) + goto exit; + +exit: + core_unlock(core, __func__); + + 
return rc; +} + +int venus_hfi_scale_buses(struct msm_vidc_inst *inst, u64 bw_ddr, u64 bw_llcc) +{ + int rc = 0; + struct msm_vidc_core *core; + + core = inst->core; + + core_lock(core, __func__); + rc = __resume(core); + if (rc) { + i_vpr_e(inst, "%s: Resume from power collapse failed\n", __func__); + goto exit; + } + rc = call_res_op(core, set_bw, core, bw_ddr, bw_llcc); + if (rc) + goto exit; + +exit: + core_unlock(core, __func__); + + return rc; +} + +int venus_hfi_set_ir_period(struct msm_vidc_inst *inst, u32 ir_type, + enum msm_vidc_inst_capability_type cap_id) +{ + int rc = 0; + struct msm_vidc_core *core; + u32 ir_period, sync_frame_req = 0; + + core = inst->core; + + core_lock(core, __func__); + + ir_period = inst->capabilities[cap_id].value; + + rc = hfi_create_header(inst->packet, inst->packet_size, + inst->session_id, core->header_id++); + if (rc) + goto exit; + + /* Request sync frame if ir period enabled dynamically */ + if (!inst->ir_enabled) { + inst->ir_enabled = ((ir_period > 0) ? 
true : false); + if (inst->ir_enabled && inst->bufq[OUTPUT_PORT].vb2q->streaming) { + sync_frame_req = HFI_SYNC_FRAME_REQUEST_WITH_PREFIX_SEQ_HDR; + rc = hfi_create_packet(inst->packet, inst->packet_size, + HFI_PROP_REQUEST_SYNC_FRAME, + HFI_HOST_FLAGS_NONE, + HFI_PAYLOAD_U32_ENUM, + msm_vidc_get_port_info(inst, REQUEST_I_FRAME), + core->packet_id++, + &sync_frame_req, + sizeof(u32)); + if (rc) + goto exit; + } + } + + rc = hfi_create_packet(inst->packet, inst->packet_size, + ir_type, + HFI_HOST_FLAGS_NONE, + HFI_PAYLOAD_U32, + msm_vidc_get_port_info(inst, cap_id), + core->packet_id++, + &ir_period, + sizeof(u32)); + if (rc) + goto exit; + + rc = __cmdq_write(inst->core, inst->packet); + if (rc) { + i_vpr_e(inst, "%s: failed to set inst->capabilities[%d] %s to fw\n", + __func__, cap_id, cap_name(cap_id)); + goto exit; + } + +exit: + core_unlock(core, __func__); + + return rc; +} From patchwork Fri Jul 28 13:23:30 2023 X-Patchwork-Submitter: Vikash Garodia X-Patchwork-Id: 13331952 From: Vikash Garodia Subject: [PATCH 19/33] iris: vidc: hfi: add Host Firmware Interface (HFI) response handling Date: Fri, 28 Jul 2023 18:53:30 +0530 Message-ID: <1690550624-14642-20-git-send-email-quic_vgarodia@quicinc.com> In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com> References: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com> List-ID: X-Mailing-List: linux-media@vger.kernel.org This implements the handling of responses sent from firmware to the host driver. Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../qcom/iris/vidc/inc/venus_hfi_response.h | 26 + .../qcom/iris/vidc/src/venus_hfi_response.c | 1607 ++++++++++++++++++++ 2 files changed, 1633 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/venus_hfi_response.h create mode 100644 drivers/media/platform/qcom/iris/vidc/src/venus_hfi_response.c diff --git a/drivers/media/platform/qcom/iris/vidc/inc/venus_hfi_response.h b/drivers/media/platform/qcom/iris/vidc/inc/venus_hfi_response.h new file mode 100644 index 0000000..92e6c0e --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/venus_hfi_response.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#ifndef __VENUS_HFI_RESPONSE_H__ +#define __VENUS_HFI_RESPONSE_H__ + +#include "hfi_packet.h" + +extern struct msm_vidc_core *g_core; +int handle_response(struct msm_vidc_core *core, + void *response); +int validate_packet(u8 *response_pkt, u8 *core_resp_pkt, + u32 core_resp_pkt_size, const char *func); +bool is_valid_port(struct msm_vidc_inst *inst, u32 port, + const char *func); +bool is_valid_hfi_buffer_type(struct msm_vidc_inst *inst, + u32 buffer_type, const char *func); +int handle_system_error(struct msm_vidc_core *core, + struct hfi_packet *pkt); +int handle_release_output_buffer(struct msm_vidc_inst *inst, + struct hfi_buffer *buffer); + +#endif // __VENUS_HFI_RESPONSE_H__ diff --git a/drivers/media/platform/qcom/iris/vidc/src/venus_hfi_response.c b/drivers/media/platform/qcom/iris/vidc/src/venus_hfi_response.c new file mode 100644 index 0000000..b12a564 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/venus_hfi_response.c @@ -0,0 +1,1607 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include + +#include "hfi_packet.h" +#include "msm_vdec.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_memory.h" +#include "msm_vidc_platform.h" +#include "venus_hfi.h" +#include "venus_hfi_response.h" + +#define in_range(range, val) (((range.begin) < (val)) && ((range.end) > (val))) + +struct msm_vidc_core_hfi_range { + u32 begin; + u32 end; + int (*handle)(struct msm_vidc_core *core, struct hfi_packet *pkt); +}; + +struct msm_vidc_inst_hfi_range { + u32 begin; + u32 end; + int (*handle)(struct msm_vidc_inst *inst, struct hfi_packet *pkt); +}; + +struct msm_vidc_hfi_buffer_handle { + enum hfi_buffer_type type; + int (*handle)(struct msm_vidc_inst *inst, struct hfi_buffer *buffer); +}; + +struct msm_vidc_hfi_packet_handle { + enum hfi_buffer_type type; + int (*handle)(struct msm_vidc_inst *inst, struct hfi_packet *pkt); +}; + +void print_psc_properties(const char *str, struct msm_vidc_inst *inst, + struct msm_vidc_subscription_params subsc_params) +{ + i_vpr_h(inst, + "%s: w %d, h %d, crop: offsets[0] %#x offsets[1] %#x, bit depth %#x, coded frames %d, fw min count %d, poc %d, color info %d, profile %d, level %d, tier %d\n", + str, (subsc_params.bitstream_resolution & HFI_BITMASK_BITSTREAM_WIDTH) >> 16, + (subsc_params.bitstream_resolution & HFI_BITMASK_BITSTREAM_HEIGHT), + subsc_params.crop_offsets[0], subsc_params.crop_offsets[1], + subsc_params.bit_depth, subsc_params.coded_frames, + subsc_params.fw_min_count, subsc_params.pic_order_cnt, + subsc_params.color_info, subsc_params.profile, subsc_params.level, + subsc_params.tier); +} + +static void print_sfr_message(struct msm_vidc_core *core) +{ + struct msm_vidc_sfr *vsfr = NULL; + u32 vsfr_size = 0; + void *p = NULL; + + vsfr = (struct msm_vidc_sfr *)core->sfr.align_virtual_addr; + if (vsfr) { + if (vsfr->bufsize != core->sfr.mem_size) { + d_vpr_e("Invalid SFR buf size %d actual %d\n", + vsfr->bufsize, core->sfr.mem_size); + return; + } + vsfr_size = vsfr->bufsize - 
sizeof(u32); + p = memchr(vsfr->rg_data, '\0', vsfr_size); + /* SFR isn't guaranteed to be NULL terminated */ + if (!p) + vsfr->rg_data[vsfr_size - 1] = '\0'; + + d_vpr_e(FMT_STRING_MSG_SFR, vsfr->rg_data); + } +} + +u32 vidc_port_from_hfi(struct msm_vidc_inst *inst, + enum hfi_packet_port_type hfi_port) +{ + enum msm_vidc_port_type port = MAX_PORT; + + if (is_decode_session(inst)) { + switch (hfi_port) { + case HFI_PORT_BITSTREAM: + port = INPUT_PORT; + break; + case HFI_PORT_RAW: + port = OUTPUT_PORT; + break; + case HFI_PORT_NONE: + port = PORT_NONE; + break; + default: + i_vpr_e(inst, "%s: invalid hfi port type %d\n", + __func__, hfi_port); + break; + } + } else if (is_encode_session(inst)) { + switch (hfi_port) { + case HFI_PORT_RAW: + port = INPUT_PORT; + break; + case HFI_PORT_BITSTREAM: + port = OUTPUT_PORT; + break; + case HFI_PORT_NONE: + port = PORT_NONE; + break; + default: + i_vpr_e(inst, "%s: invalid hfi port type %d\n", + __func__, hfi_port); + break; + } + } else { + i_vpr_e(inst, "%s: invalid domain %#x\n", + __func__, inst->domain); + } + + return port; +} + +bool is_valid_hfi_port(struct msm_vidc_inst *inst, u32 port, + u32 buffer_type, const char *func) +{ + if (port == HFI_PORT_NONE && + buffer_type != HFI_BUFFER_ARP && + buffer_type != HFI_BUFFER_PERSIST) + goto invalid; + + if (port != HFI_PORT_BITSTREAM && port != HFI_PORT_RAW) + goto invalid; + + return true; + +invalid: + i_vpr_e(inst, "%s: invalid port %#x buffer_type %u\n", + func, port, buffer_type); + return false; +} + +bool is_valid_hfi_buffer_type(struct msm_vidc_inst *inst, + u32 buffer_type, const char *func) +{ + if (buffer_type != HFI_BUFFER_BITSTREAM && + buffer_type != HFI_BUFFER_RAW && + buffer_type != HFI_BUFFER_BIN && + buffer_type != HFI_BUFFER_ARP && + buffer_type != HFI_BUFFER_COMV && + buffer_type != HFI_BUFFER_NON_COMV && + buffer_type != HFI_BUFFER_LINE && + buffer_type != HFI_BUFFER_DPB && + buffer_type != HFI_BUFFER_PERSIST && + buffer_type != HFI_BUFFER_VPSS) { + 
i_vpr_e(inst, "%s: invalid buffer type %#x\n", + func, buffer_type); + return false; + } + return true; +} + +int validate_packet(u8 *response_pkt, u8 *core_resp_pkt, + u32 core_resp_pkt_size, const char *func) +{ + u8 *response_limit; + u32 response_pkt_size = 0; + + if (!response_pkt || !core_resp_pkt || !core_resp_pkt_size) { + d_vpr_e("%s: invalid params\n", func); + return -EINVAL; + } + + response_limit = core_resp_pkt + core_resp_pkt_size; + + if (response_pkt < core_resp_pkt || response_pkt > response_limit) { + d_vpr_e("%s: invalid packet address\n", func); + return -EINVAL; + } + + response_pkt_size = *(u32 *)response_pkt; + if (!response_pkt_size) { + d_vpr_e("%s: response packet size cannot be zero\n", func); + return -EINVAL; + } + + if (response_pkt_size < sizeof(struct hfi_packet)) { + d_vpr_e("%s: invalid packet size %d\n", + func, response_pkt_size); + return -EINVAL; + } + + if (response_pkt + response_pkt_size > response_limit) { + d_vpr_e("%s: invalid packet size %d\n", + func, response_pkt_size); + return -EINVAL; + } + return 0; +} + +static int validate_hdr_packet(struct msm_vidc_core *core, + struct hfi_header *hdr, const char *function) +{ + struct hfi_packet *packet; + u8 *pkt; + int i, rc = 0; + + if (hdr->size < sizeof(struct hfi_header) + sizeof(struct hfi_packet)) { + d_vpr_e("%s: invalid header size %d\n", __func__, hdr->size); + return -EINVAL; + } + + pkt = (u8 *)((u8 *)hdr + sizeof(struct hfi_header)); + + /* validate all packets */ + for (i = 0; i < hdr->num_packets; i++) { + packet = (struct hfi_packet *)pkt; + rc = validate_packet(pkt, core->response_packet, core->packet_size, function); + if (rc) + return rc; + + pkt += packet->size; + } + + return 0; +} + +static bool check_for_packet_payload(struct msm_vidc_inst *inst, + struct hfi_packet *pkt, const char *func) +{ + u32 payload_size = 0; + + if (pkt->payload_info == HFI_PAYLOAD_NONE) { + i_vpr_h(inst, "%s: no payload available for packet %#x\n", + func, pkt->type); + return 
false; + } + + switch (pkt->payload_info) { + case HFI_PAYLOAD_U32: + case HFI_PAYLOAD_S32: + case HFI_PAYLOAD_Q16: + case HFI_PAYLOAD_U32_ENUM: + case HFI_PAYLOAD_32_PACKED: + payload_size = 4; + break; + case HFI_PAYLOAD_U64: + case HFI_PAYLOAD_S64: + case HFI_PAYLOAD_64_PACKED: + payload_size = 8; + break; + case HFI_PAYLOAD_STRUCTURE: + if (pkt->type == HFI_CMD_BUFFER) + payload_size = sizeof(struct hfi_buffer); + break; + default: + payload_size = 0; + break; + } + + if (pkt->size < sizeof(struct hfi_packet) + payload_size) { + i_vpr_e(inst, + "%s: invalid payload size %u payload type %#x for packet %#x\n", + func, pkt->size, pkt->payload_info, pkt->type); + return false; + } + + return true; +} + +static int handle_session_last_flag_info(struct msm_vidc_inst *inst, + struct hfi_packet *pkt) +{ + int rc = 0; + + if (pkt->type == HFI_INFO_HFI_FLAG_PSC_LAST) { + if (msm_vidc_allow_psc_last_flag(inst)) + rc = msm_vidc_process_psc_last_flag(inst); + else + rc = -EINVAL; + } else if (pkt->type == HFI_INFO_HFI_FLAG_DRAIN_LAST) { + if (msm_vidc_allow_drain_last_flag(inst)) + rc = msm_vidc_process_drain_last_flag(inst); + else + rc = -EINVAL; + } else { + i_vpr_e(inst, "%s: invalid packet type %#x\n", __func__, + pkt->type); + } + + if (rc) + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + + return rc; +} + +static int handle_session_info(struct msm_vidc_inst *inst, + struct hfi_packet *pkt) +{ + int rc = 0; + char *info; + + switch (pkt->type) { + case HFI_INFO_UNSUPPORTED: + info = "unsupported"; + break; + case HFI_INFO_DATA_CORRUPT: + info = "data corrupt"; + inst->hfi_frame_info.data_corrupt = 1; + break; + case HFI_INFO_BUFFER_OVERFLOW: + info = "buffer overflow"; + inst->hfi_frame_info.overflow = 1; + break; + case HFI_INFO_HFI_FLAG_DRAIN_LAST: + info = "drain last flag"; + rc = handle_session_last_flag_info(inst, pkt); + break; + case HFI_INFO_HFI_FLAG_PSC_LAST: + info = "drc last flag"; + rc = handle_session_last_flag_info(inst, pkt); + break; + 
default: + info = "unknown"; + break; + } + + i_vpr_h(inst, "session info (%#x): %s\n", pkt->type, info); + + return rc; +} + +static int handle_session_error(struct msm_vidc_inst *inst, + struct hfi_packet *pkt) +{ + int rc = 0; + char *error; + + switch (pkt->type) { + case HFI_ERROR_MAX_SESSIONS: + error = "exceeded max sessions"; + break; + case HFI_ERROR_UNKNOWN_SESSION: + error = "unknown session id"; + break; + case HFI_ERROR_INVALID_STATE: + error = "invalid operation for current state"; + break; + case HFI_ERROR_INSUFFICIENT_RESOURCES: + error = "insufficient resources"; + break; + case HFI_ERROR_BUFFER_NOT_SET: + error = "internal buffers not set"; + break; + case HFI_ERROR_FATAL: + error = "fatal error"; + break; + default: + error = "unknown"; + break; + } + + i_vpr_e(inst, "%s: session error received %#x: %s\n", + __func__, pkt->type, error); + + rc = msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + return rc; +} + +int handle_system_error(struct msm_vidc_core *core, + struct hfi_packet *pkt) +{ + d_vpr_e("%s: system error received\n", __func__); + print_sfr_message(core); + + msm_vidc_core_deinit(core, true); + + return 0; +} + +static int handle_system_init(struct msm_vidc_core *core, + struct hfi_packet *pkt) +{ + if (!(pkt->flags & HFI_FW_FLAGS_SUCCESS)) { + d_vpr_h("%s: unhandled. 
flags=%d\n", __func__, pkt->flags); + return 0; + } + + core_lock(core, __func__); + if (pkt->packet_id != core->sys_init_id) { + d_vpr_e("%s: invalid pkt id %u, expected %u\n", __func__, + pkt->packet_id, core->sys_init_id); + goto unlock; + } + + msm_vidc_change_core_state(core, MSM_VIDC_CORE_INIT, __func__); + d_vpr_h("%s: successful\n", __func__); + +unlock: + core_unlock(core, __func__); + return 0; +} + +static int handle_session_open(struct msm_vidc_inst *inst, + struct hfi_packet *pkt) +{ + if (pkt->flags & HFI_FW_FLAGS_SUCCESS) + i_vpr_h(inst, "%s: successful\n", __func__); + + return 0; +} + +static int handle_session_close(struct msm_vidc_inst *inst, + struct hfi_packet *pkt) +{ + if (pkt->flags & HFI_FW_FLAGS_SUCCESS) + i_vpr_h(inst, "%s: successful\n", __func__); + + signal_session_msg_receipt(inst, SIGNAL_CMD_CLOSE); + return 0; +} + +static int handle_session_start(struct msm_vidc_inst *inst, + struct hfi_packet *pkt) +{ + if (pkt->flags & HFI_FW_FLAGS_SUCCESS) + i_vpr_h(inst, "%s: successful for port %d\n", + __func__, pkt->port); + return 0; +} + +static int handle_session_stop(struct msm_vidc_inst *inst, + struct hfi_packet *pkt) +{ + int rc = 0; + enum signal_session_response signal_type = -1; + + if (pkt->flags & HFI_FW_FLAGS_SUCCESS) + i_vpr_h(inst, "%s: successful for port %d\n", + __func__, pkt->port); + + if (is_encode_session(inst)) { + if (pkt->port == HFI_PORT_RAW) { + signal_type = SIGNAL_CMD_STOP_INPUT; + } else if (pkt->port == HFI_PORT_BITSTREAM) { + signal_type = SIGNAL_CMD_STOP_OUTPUT; + } else { + i_vpr_e(inst, "%s: invalid port: %d\n", + __func__, pkt->port); + return -EINVAL; + } + } else if (is_decode_session(inst)) { + if (pkt->port == HFI_PORT_RAW) { + signal_type = SIGNAL_CMD_STOP_OUTPUT; + } else if (pkt->port == HFI_PORT_BITSTREAM) { + signal_type = SIGNAL_CMD_STOP_INPUT; + } else { + i_vpr_e(inst, "%s: invalid port: %d\n", + __func__, pkt->port); + return -EINVAL; + } + } else { + i_vpr_e(inst, "%s: invalid session\n", 
__func__); + return -EINVAL; + } + + if (signal_type != -1) { + rc = msm_vidc_process_stop_done(inst, signal_type); + if (rc) + return rc; + } + + return 0; +} + +static int handle_session_drain(struct msm_vidc_inst *inst, + struct hfi_packet *pkt) +{ + if (pkt->flags & HFI_FW_FLAGS_SUCCESS) + i_vpr_h(inst, "%s: successful\n", __func__); + + return msm_vidc_process_drain_done(inst); +} + +static int get_driver_buffer_flags(struct msm_vidc_inst *inst, u32 hfi_flags) +{ + u32 driver_flags = 0; + + if (inst->hfi_frame_info.picture_type & HFI_PICTURE_IDR) + driver_flags |= MSM_VIDC_BUF_FLAG_KEYFRAME; + else if (inst->hfi_frame_info.picture_type & HFI_PICTURE_P) + driver_flags |= MSM_VIDC_BUF_FLAG_PFRAME; + else if (inst->hfi_frame_info.picture_type & HFI_PICTURE_B) + driver_flags |= MSM_VIDC_BUF_FLAG_BFRAME; + else if (inst->hfi_frame_info.picture_type & HFI_PICTURE_I) + driver_flags |= MSM_VIDC_BUF_FLAG_KEYFRAME; + else if (inst->hfi_frame_info.picture_type & HFI_PICTURE_CRA) + driver_flags |= MSM_VIDC_BUF_FLAG_KEYFRAME; + else if (inst->hfi_frame_info.picture_type & HFI_PICTURE_BLA) + driver_flags |= MSM_VIDC_BUF_FLAG_KEYFRAME; + + if (inst->hfi_frame_info.data_corrupt) + driver_flags |= MSM_VIDC_BUF_FLAG_ERROR; + + if (inst->hfi_frame_info.overflow) + driver_flags |= MSM_VIDC_BUF_FLAG_ERROR; + + if ((is_encode_session(inst) && + (hfi_flags & HFI_BUF_FW_FLAG_LAST)) || + (is_decode_session(inst) && + ((hfi_flags & HFI_BUF_FW_FLAG_LAST) || + (hfi_flags & HFI_BUF_FW_FLAG_PSC_LAST)))) + driver_flags |= MSM_VIDC_BUF_FLAG_LAST; + + return driver_flags; +} + +static int handle_read_only_buffer(struct msm_vidc_inst *inst, + struct msm_vidc_buffer *buf) +{ + struct msm_vidc_buffer *ro_buf; + struct msm_vidc_core *core; + bool found = false; + + core = inst->core; + + if (!is_decode_session(inst) || !is_output_buffer(buf->type)) + return 0; + + if (!(buf->attr & MSM_VIDC_ATTR_READ_ONLY)) + return 0; + + list_for_each_entry(ro_buf, &inst->buffers.read_only.list, list) { + if 
(ro_buf->device_addr == buf->device_addr) { + found = true; + break; + } + } + /* + * RO flag: add to read_only list if buffer is not present + * if present, do nothing + */ + if (!found) { + ro_buf = msm_vidc_pool_alloc(inst, MSM_MEM_POOL_BUFFER); + if (!ro_buf) { + i_vpr_e(inst, "%s: buffer alloc failed\n", __func__); + return -ENOMEM; + } + ro_buf->index = -1; + ro_buf->inst = inst; + ro_buf->type = buf->type; + ro_buf->fd = buf->fd; + ro_buf->dmabuf = buf->dmabuf; + ro_buf->device_addr = buf->device_addr; + ro_buf->data_offset = buf->data_offset; + ro_buf->dbuf_get = buf->dbuf_get; + buf->dbuf_get = 0; + INIT_LIST_HEAD(&ro_buf->list); + list_add_tail(&ro_buf->list, &inst->buffers.read_only.list); + print_vidc_buffer(VIDC_LOW, "low ", "ro buf added", inst, ro_buf); + } else { + print_vidc_buffer(VIDC_LOW, "low ", "ro buf found", inst, ro_buf); + } + ro_buf->attr |= MSM_VIDC_ATTR_READ_ONLY; + + return 0; +} + +static int handle_non_read_only_buffer(struct msm_vidc_inst *inst, + struct hfi_buffer *buffer) +{ + struct msm_vidc_buffer *ro_buf; + + if (!is_decode_session(inst) || buffer->type != HFI_BUFFER_RAW) + return 0; + + if (buffer->flags & HFI_BUF_FW_FLAG_READONLY) + return 0; + + list_for_each_entry(ro_buf, &inst->buffers.read_only.list, list) { + if (ro_buf->device_addr == buffer->base_address) { + ro_buf->attr &= ~MSM_VIDC_ATTR_READ_ONLY; + break; + } + } + + return 0; +} + +static int handle_psc_last_flag_buffer(struct msm_vidc_inst *inst, + struct hfi_buffer *buffer) +{ + if (!(buffer->flags & HFI_BUF_FW_FLAG_PSC_LAST)) + return 0; + + if (!msm_vidc_allow_psc_last_flag(inst)) + return -EINVAL; + + return msm_vidc_process_psc_last_flag(inst); +} + +static int handle_drain_last_flag_buffer(struct msm_vidc_inst *inst, + struct hfi_buffer *buffer) +{ + int rc = 0; + + if (!(buffer->flags & HFI_BUF_FW_FLAG_LAST)) + return 0; + + if (!msm_vidc_allow_drain_last_flag(inst)) + return -EINVAL; + + if (is_decode_session(inst)) { + rc = 
msm_vidc_process_drain_last_flag(inst); + if (rc) + return rc; + } else if (is_encode_session(inst)) { + rc = msm_vidc_state_change_drain_last_flag(inst); + if (rc) + return rc; + } + + return rc; +} + +static int handle_input_buffer(struct msm_vidc_inst *inst, + struct hfi_buffer *buffer) +{ + int rc = 0; + struct msm_vidc_buffers *buffers; + struct msm_vidc_buffer *buf; + struct msm_vidc_core *core; + bool found; + + core = inst->core; + buffers = msm_vidc_get_buffers(inst, MSM_VIDC_BUF_INPUT, __func__); + if (!buffers) + return -EINVAL; + + found = false; + list_for_each_entry(buf, &buffers->list, list) { + if (buf->index == buffer->index) { + found = true; + break; + } + } + if (!found) { + i_vpr_e(inst, "%s: invalid buffer idx %d addr %#llx data_offset %d\n", + __func__, buffer->index, buffer->base_address, + buffer->data_offset); + return -EINVAL; + } + + if (!(buf->attr & MSM_VIDC_ATTR_QUEUED)) { + print_vidc_buffer(VIDC_ERR, "err ", "not queued", inst, buf); + return 0; + } + + buf->data_size = buffer->data_size; + buf->attr &= ~MSM_VIDC_ATTR_QUEUED; + buf->attr |= MSM_VIDC_ATTR_DEQUEUED; + + buf->flags = 0; + buf->flags = get_driver_buffer_flags(inst, buffer->flags); + + print_vidc_buffer(VIDC_HIGH, "high", "dqbuf", inst, buf); + msm_vidc_update_stats(inst, buf, MSM_VIDC_DEBUGFS_EVENT_EBD); + + return rc; +} + +static int handle_output_buffer(struct msm_vidc_inst *inst, + struct hfi_buffer *buffer) +{ + int rc = 0; + struct msm_vidc_buffers *buffers; + struct msm_vidc_buffer *buf; + struct msm_vidc_core *core; + bool found, fatal = false; + + core = inst->core; + + /* handle drain last flag buffer */ + if (buffer->flags & HFI_BUF_FW_FLAG_LAST) { + rc = handle_drain_last_flag_buffer(inst, buffer); + if (rc) + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + } + + if (is_decode_session(inst)) { + /* handle release response for decoder output buffer */ + if (buffer->flags & HFI_BUF_FW_FLAG_RELEASE_DONE) + return handle_release_output_buffer(inst, 
buffer); + /* handle psc last flag buffer */ + if (buffer->flags & HFI_BUF_FW_FLAG_PSC_LAST) { + rc = handle_psc_last_flag_buffer(inst, buffer); + if (rc) + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + } + /* handle non-read only buffer */ + if (!(buffer->flags & HFI_BUF_FW_FLAG_READONLY)) { + rc = handle_non_read_only_buffer(inst, buffer); + if (rc) + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + } + } + + buffers = msm_vidc_get_buffers(inst, MSM_VIDC_BUF_OUTPUT, __func__); + if (!buffers) + return -EINVAL; + + found = false; + list_for_each_entry(buf, &buffers->list, list) { + if (!(buf->attr & MSM_VIDC_ATTR_QUEUED)) + continue; + if (is_decode_session(inst)) + found = (buf->index == buffer->index && + buf->device_addr == buffer->base_address && + buf->data_offset == buffer->data_offset); + else + found = (buf->index == buffer->index); + + if (found) + break; + } + if (!found) { + i_vpr_l(inst, "%s: invalid idx %d daddr %#llx\n", + __func__, buffer->index, buffer->base_address); + return 0; + } + + buf->data_offset = buffer->data_offset; + buf->data_size = buffer->data_size; + buf->timestamp = buffer->timestamp; + + buf->attr &= ~MSM_VIDC_ATTR_QUEUED; + buf->attr |= MSM_VIDC_ATTR_DEQUEUED; + + if (is_encode_session(inst)) { + /* encoder output is not expected to be corrupted */ + if (inst->hfi_frame_info.data_corrupt) { + i_vpr_e(inst, "%s: encode output is corrupted\n", __func__); + fatal = true; + } + if (inst->hfi_frame_info.overflow) { + /* overflow not expected for cbr_cfr session */ + if (!buffer->data_size && inst->hfi_rc_type == HFI_RC_CBR_CFR) { + i_vpr_e(inst, "%s: overflow detected for cbr_cfr session\n", + __func__); + fatal = true; + } + } + if (fatal) + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + } + + /* + * reset data size to zero for last flag buffer. + * reset RO flag for last flag buffer. 
+ */ + if ((buffer->flags & HFI_BUF_FW_FLAG_LAST) || + (buffer->flags & HFI_BUF_FW_FLAG_PSC_LAST)) { + if (buffer->data_size) { + i_vpr_e(inst, "%s: reset data size to zero for last flag buffer\n", + __func__); + buf->data_size = 0; + } + if (buffer->flags & HFI_BUF_FW_FLAG_READONLY) { + i_vpr_e(inst, "%s: reset RO flag for last flag buffer\n", + __func__); + buffer->flags &= ~HFI_BUF_FW_FLAG_READONLY; + } + } + + if (is_decode_session(inst)) { + /* RO flag is not expected when internal dpb buffers are allocated */ + if (inst->buffers.dpb.size && buffer->flags & HFI_BUF_FW_FLAG_READONLY) { + print_vidc_buffer(VIDC_ERR, "err ", "unexpected RO flag", inst, buf); + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + } + + if (buffer->flags & HFI_BUF_FW_FLAG_READONLY) { + buf->attr |= MSM_VIDC_ATTR_READ_ONLY; + rc = handle_read_only_buffer(inst, buf); + if (rc) + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + } else { + buf->attr &= ~MSM_VIDC_ATTR_READ_ONLY; + } + } + + buf->flags = 0; + buf->flags = get_driver_buffer_flags(inst, buffer->flags); + + if (is_decode_session(inst)) { + inst->power.fw_cr = inst->hfi_frame_info.cr; + inst->power.fw_cf = inst->hfi_frame_info.cf; + } else { + inst->power.fw_cr = inst->hfi_frame_info.cr; + } + + if (is_decode_session(inst) && buf->data_size) + msm_vidc_update_timestamp_rate(inst, buf->timestamp); + + print_vidc_buffer(VIDC_HIGH, "high", "dqbuf", inst, buf); + msm_vidc_update_stats(inst, buf, MSM_VIDC_DEBUGFS_EVENT_FBD); + + return rc; +} + +static int handle_dequeue_buffers(struct msm_vidc_inst *inst) +{ + int rc = 0; + int i; + struct msm_vidc_buffers *buffers; + struct msm_vidc_buffer *buf; + struct msm_vidc_buffer *dummy; + struct msm_vidc_core *core; + static const enum msm_vidc_buffer_type buffer_type[] = { + MSM_VIDC_BUF_INPUT, + MSM_VIDC_BUF_OUTPUT, + }; + + core = inst->core; + for (i = 0; i < ARRAY_SIZE(buffer_type); i++) { + buffers = msm_vidc_get_buffers(inst, buffer_type[i], __func__); + if (!buffers) 
+ return -EINVAL; + + list_for_each_entry_safe(buf, dummy, &buffers->list, list) { + if (buf->attr & MSM_VIDC_ATTR_DEQUEUED) { + buf->attr &= ~MSM_VIDC_ATTR_DEQUEUED; + /* + * do not send vb2_buffer_done when fw returns + * same buffer again + */ + if (buf->attr & MSM_VIDC_ATTR_BUFFER_DONE) { + print_vidc_buffer(VIDC_HIGH, "high", + "vb2 done already", + inst, buf); + } else { + buf->attr |= MSM_VIDC_ATTR_BUFFER_DONE; + if (buf->dbuf_get) { + call_mem_op(core, dma_buf_put, inst, buf->dmabuf); + buf->dbuf_get = 0; + } + rc = msm_vidc_vb2_buffer_done(inst, buf); + if (rc) { + print_vidc_buffer(VIDC_HIGH, "err ", + "vb2 done failed", + inst, buf); + /* ignore the error */ + rc = 0; + } + } + } + } + } + + return rc; +} + +static int handle_release_internal_buffer(struct msm_vidc_inst *inst, + struct hfi_buffer *buffer) +{ + int rc = 0; + struct msm_vidc_buffers *buffers; + struct msm_vidc_buffer *buf; + bool found; + + buffers = msm_vidc_get_buffers(inst, hfi_buf_type_to_driver(inst->domain, buffer->type, + HFI_PORT_NONE), __func__); + if (!buffers) + return -EINVAL; + + found = false; + list_for_each_entry(buf, &buffers->list, list) { + if (buf->device_addr == buffer->base_address) { + found = true; + break; + } + } + if (!found) { + i_vpr_e(inst, "%s: invalid idx %d daddr %#llx\n", + __func__, buffer->index, buffer->base_address); + return -EINVAL; + } + + if (!is_internal_buffer(buf->type)) + return 0; + + /* remove QUEUED attribute */ + buf->attr &= ~MSM_VIDC_ATTR_QUEUED; + + /* + * firmware will return/release internal buffer in two cases + * - driver sent release cmd in which case driver should destroy the buffer + * - as part stop cmd in which case driver can reuse the buffer, so skip + * destroying the buffer + */ + if (buf->attr & MSM_VIDC_ATTR_PENDING_RELEASE) { + rc = msm_vidc_destroy_internal_buffer(inst, buf); + if (rc) + return rc; + } + return rc; +} + +int handle_release_output_buffer(struct msm_vidc_inst *inst, + struct hfi_buffer *buffer) +{ + int rc 
= 0; + struct msm_vidc_buffer *buf; + bool found = false; + + list_for_each_entry(buf, &inst->buffers.read_only.list, list) { + if (buf->device_addr == buffer->base_address && + buf->attr & MSM_VIDC_ATTR_PENDING_RELEASE) { + found = true; + break; + } + } + if (!found) { + i_vpr_e(inst, "%s: invalid idx %d daddr %#llx\n", + __func__, buffer->index, buffer->base_address); + return -EINVAL; + } + + buf->attr &= ~MSM_VIDC_ATTR_READ_ONLY; + buf->attr &= ~MSM_VIDC_ATTR_PENDING_RELEASE; + print_vidc_buffer(VIDC_LOW, "low ", "release done", inst, buf); + + return rc; +} + +static int handle_session_buffer(struct msm_vidc_inst *inst, + struct hfi_packet *pkt) +{ + int i, rc = 0; + struct hfi_buffer *buffer; + u32 hfi_handle_size = 0; + const struct msm_vidc_hfi_buffer_handle *hfi_handle_arr = NULL; + static const struct msm_vidc_hfi_buffer_handle enc_input_hfi_handle[] = { + {HFI_BUFFER_RAW, handle_input_buffer }, + {HFI_BUFFER_VPSS, handle_release_internal_buffer }, + }; + static const struct msm_vidc_hfi_buffer_handle enc_output_hfi_handle[] = { + {HFI_BUFFER_BITSTREAM, handle_output_buffer }, + {HFI_BUFFER_BIN, handle_release_internal_buffer }, + {HFI_BUFFER_COMV, handle_release_internal_buffer }, + {HFI_BUFFER_NON_COMV, handle_release_internal_buffer }, + {HFI_BUFFER_LINE, handle_release_internal_buffer }, + {HFI_BUFFER_ARP, handle_release_internal_buffer }, + {HFI_BUFFER_DPB, handle_release_internal_buffer }, + }; + static const struct msm_vidc_hfi_buffer_handle dec_input_hfi_handle[] = { + {HFI_BUFFER_BITSTREAM, handle_input_buffer }, + {HFI_BUFFER_BIN, handle_release_internal_buffer }, + {HFI_BUFFER_COMV, handle_release_internal_buffer }, + {HFI_BUFFER_NON_COMV, handle_release_internal_buffer }, + {HFI_BUFFER_LINE, handle_release_internal_buffer }, + {HFI_BUFFER_PERSIST, handle_release_internal_buffer }, + }; + static const struct msm_vidc_hfi_buffer_handle dec_output_hfi_handle[] = { + {HFI_BUFFER_RAW, handle_output_buffer }, + {HFI_BUFFER_DPB, 
handle_release_internal_buffer }, + }; + + if (pkt->payload_info == HFI_PAYLOAD_NONE) { + i_vpr_h(inst, "%s: received hfi buffer packet without payload\n", + __func__); + return 0; + } + + if (!check_for_packet_payload(inst, pkt, __func__)) { + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + return 0; + } + + buffer = (struct hfi_buffer *)((u8 *)pkt + sizeof(struct hfi_packet)); + if (!is_valid_hfi_buffer_type(inst, buffer->type, __func__)) { + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + return 0; + } + + if (!is_valid_hfi_port(inst, pkt->port, buffer->type, __func__)) { + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + return 0; + } + + if (is_encode_session(inst)) { + if (pkt->port == HFI_PORT_RAW) { + hfi_handle_size = ARRAY_SIZE(enc_input_hfi_handle); + hfi_handle_arr = enc_input_hfi_handle; + } else if (pkt->port == HFI_PORT_BITSTREAM) { + hfi_handle_size = ARRAY_SIZE(enc_output_hfi_handle); + hfi_handle_arr = enc_output_hfi_handle; + } + } else if (is_decode_session(inst)) { + if (pkt->port == HFI_PORT_BITSTREAM) { + hfi_handle_size = ARRAY_SIZE(dec_input_hfi_handle); + hfi_handle_arr = dec_input_hfi_handle; + } else if (pkt->port == HFI_PORT_RAW) { + hfi_handle_size = ARRAY_SIZE(dec_output_hfi_handle); + hfi_handle_arr = dec_output_hfi_handle; + } + } + + /* handle invalid session */ + if (!hfi_handle_arr || !hfi_handle_size) { + i_vpr_e(inst, "%s: invalid session %d\n", __func__, inst->domain); + return -EINVAL; + } + + /* handle session buffer */ + for (i = 0; i < hfi_handle_size; i++) { + if (hfi_handle_arr[i].type == buffer->type) { + rc = hfi_handle_arr[i].handle(inst, buffer); + if (rc) + return rc; + break; + } + } + + /* handle unknown buffer type */ + if (i == hfi_handle_size) { + i_vpr_e(inst, "%s: port %u, unknown buffer type %#x\n", __func__, + pkt->port, buffer->type); + return -EINVAL; + } + + return rc; +} + +static int handle_input_port_settings_change(struct msm_vidc_inst *inst) +{ + int rc = 0; + enum 
msm_vidc_allow allow = MSM_VIDC_DISALLOW; + + allow = msm_vidc_allow_input_psc(inst); + if (allow == MSM_VIDC_DISALLOW) { + return -EINVAL; + } else if (allow == MSM_VIDC_ALLOW) { + rc = msm_vidc_state_change_input_psc(inst); + if (rc) + return rc; + print_psc_properties("INPUT_PSC", inst, inst->subcr_params[INPUT_PORT]); + rc = msm_vdec_input_port_settings_change(inst); + if (rc) + return rc; + } + + return rc; +} + +static int handle_output_port_settings_change(struct msm_vidc_inst *inst) +{ + print_psc_properties("OUTPUT_PSC", inst, inst->subcr_params[OUTPUT_PORT]); + + return 0; +} + +static int handle_port_settings_change(struct msm_vidc_inst *inst, + struct hfi_packet *pkt) +{ + int rc = 0; + + i_vpr_h(inst, "%s: Received port settings change, type %d\n", + __func__, pkt->port); + + if (pkt->port == HFI_PORT_RAW) { + rc = handle_output_port_settings_change(inst); + if (rc) + goto exit; + } else if (pkt->port == HFI_PORT_BITSTREAM) { + rc = handle_input_port_settings_change(inst); + if (rc) + goto exit; + } else { + i_vpr_e(inst, "%s: invalid port type: %#x\n", + __func__, pkt->port); + rc = -EINVAL; + goto exit; + } + +exit: + if (rc) + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + return rc; +} + +static int handle_session_subscribe_mode(struct msm_vidc_inst *inst, + struct hfi_packet *pkt) +{ + if (pkt->flags & HFI_FW_FLAGS_SUCCESS) + i_vpr_h(inst, "%s: successful\n", __func__); + return 0; +} + +static int handle_session_pause(struct msm_vidc_inst *inst, + struct hfi_packet *pkt) +{ + if (pkt->flags & HFI_FW_FLAGS_SUCCESS) + i_vpr_h(inst, "%s: successful\n", __func__); + return 0; +} + +static int handle_session_resume(struct msm_vidc_inst *inst, + struct hfi_packet *pkt) +{ + if (pkt->flags & HFI_FW_FLAGS_SUCCESS) + i_vpr_h(inst, "%s: successful\n", __func__); + return 0; +} + +static int handle_session_command(struct msm_vidc_inst *inst, + struct hfi_packet *pkt) +{ + int i, rc; + static const struct msm_vidc_hfi_packet_handle hfi_pkt_handle[] 
= { + {HFI_CMD_OPEN, handle_session_open }, + {HFI_CMD_CLOSE, handle_session_close }, + {HFI_CMD_START, handle_session_start }, + {HFI_CMD_STOP, handle_session_stop }, + {HFI_CMD_DRAIN, handle_session_drain }, + {HFI_CMD_BUFFER, handle_session_buffer }, + {HFI_CMD_SETTINGS_CHANGE, handle_port_settings_change }, + {HFI_CMD_SUBSCRIBE_MODE, handle_session_subscribe_mode }, + {HFI_CMD_PAUSE, handle_session_pause }, + {HFI_CMD_RESUME, handle_session_resume }, + }; + + /* handle session pkt */ + for (i = 0; i < ARRAY_SIZE(hfi_pkt_handle); i++) { + if (hfi_pkt_handle[i].type == pkt->type) { + rc = hfi_pkt_handle[i].handle(inst, pkt); + if (rc) + return rc; + break; + } + } + + /* handle unknown packet type */ + if (i == ARRAY_SIZE(hfi_pkt_handle)) { + i_vpr_e(inst, "%s: Unsupported command type: %#x\n", __func__, pkt->type); + return -EINVAL; + } + + return 0; +} + +static int handle_dpb_list_property(struct msm_vidc_inst *inst, + struct hfi_packet *pkt) +{ + u32 payload_size, num_words_in_payload; + u8 *payload_start; + int i = 0; + struct msm_vidc_buffer *ro_buf; + bool found = false; + u64 device_addr; + + if (!is_decode_session(inst)) { + i_vpr_e(inst, + "%s: unsupported for non-decode session\n", __func__); + return -EINVAL; + } + + payload_size = pkt->size - sizeof(struct hfi_packet); + num_words_in_payload = payload_size / 4; + payload_start = (u8 *)((u8 *)pkt + sizeof(struct hfi_packet)); + memset(inst->dpb_list_payload, 0, MAX_DPB_LIST_ARRAY_SIZE); + + if (payload_size > MAX_DPB_LIST_PAYLOAD_SIZE) { + i_vpr_e(inst, + "%s: dpb list payload size %d exceeds expected max size %d\n", + __func__, payload_size, MAX_DPB_LIST_PAYLOAD_SIZE); + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + return -EINVAL; + } + memcpy(inst->dpb_list_payload, payload_start, payload_size); + + /* + * dpb_list_payload details: + * payload[0-1] : 64 bits base_address of DPB-1 + * payload[2] : 32 bits addr_offset of DPB-1 + * payload[3] : 32 bits data_offset of DPB-1 + */ + for (i =
0; (i + 3) < num_words_in_payload; i = i + 4) { + i_vpr_l(inst, + "%s: base addr %#x %#x, addr offset %#x, data offset %#x\n", + __func__, inst->dpb_list_payload[i], inst->dpb_list_payload[i + 1], + inst->dpb_list_payload[i + 2], inst->dpb_list_payload[i + 3]); + } + + list_for_each_entry(ro_buf, &inst->buffers.read_only.list, list) { + found = false; + /* do not mark RELEASE_ELIGIBLE for non-read only buffers */ + if (!(ro_buf->attr & MSM_VIDC_ATTR_READ_ONLY)) + continue; + /* no need to mark RELEASE_ELIGIBLE again */ + if (ro_buf->attr & MSM_VIDC_ATTR_RELEASE_ELIGIBLE) + continue; + /* + * do not add RELEASE_ELIGIBLE to buffers for which driver + * sent release cmd already + */ + if (ro_buf->attr & MSM_VIDC_ATTR_PENDING_RELEASE) + continue; + for (i = 0; (i + 3) < num_words_in_payload; i = i + 4) { + device_addr = *((u64 *)(&inst->dpb_list_payload[i])); + if (ro_buf->device_addr == device_addr) { + found = true; + break; + } + } + /* mark a buffer as RELEASE_ELIGIBLE if not found in dpb list */ + if (!found) + ro_buf->attr |= MSM_VIDC_ATTR_RELEASE_ELIGIBLE; + } + + return 0; +} + +static int handle_property_with_payload(struct msm_vidc_inst *inst, + struct hfi_packet *pkt, u32 port) +{ + int rc = 0; + u32 *payload_ptr = NULL; + + payload_ptr = (u32 *)((u8 *)pkt + sizeof(struct hfi_packet)); + if (!payload_ptr) { + i_vpr_e(inst, + "%s: payload_ptr cannot be null\n", __func__); + return -EINVAL; + } + + switch (pkt->type) { + case HFI_PROP_BITSTREAM_RESOLUTION: + inst->subcr_params[port].bitstream_resolution = payload_ptr[0]; + break; + case HFI_PROP_CROP_OFFSETS: + inst->subcr_params[port].crop_offsets[0] = payload_ptr[0]; + inst->subcr_params[port].crop_offsets[1] = payload_ptr[1]; + break; + case HFI_PROP_LUMA_CHROMA_BIT_DEPTH: + inst->subcr_params[port].bit_depth = payload_ptr[0]; + break; + case HFI_PROP_CODED_FRAMES: + inst->subcr_params[port].coded_frames = payload_ptr[0]; + break; + case HFI_PROP_BUFFER_FW_MIN_OUTPUT_COUNT: + 
inst->subcr_params[port].fw_min_count = payload_ptr[0]; + break; + case HFI_PROP_PIC_ORDER_CNT_TYPE: + inst->subcr_params[port].pic_order_cnt = payload_ptr[0]; + break; + case HFI_PROP_SIGNAL_COLOR_INFO: + inst->subcr_params[port].color_info = payload_ptr[0]; + break; + case HFI_PROP_PROFILE: + inst->subcr_params[port].profile = payload_ptr[0]; + break; + case HFI_PROP_LEVEL: + inst->subcr_params[port].level = payload_ptr[0]; + break; + case HFI_PROP_TIER: + inst->subcr_params[port].tier = payload_ptr[0]; + break; + case HFI_PROP_PICTURE_TYPE: + inst->hfi_frame_info.picture_type = payload_ptr[0]; + if (inst->hfi_frame_info.picture_type & HFI_PICTURE_B) + inst->has_bframe = true; + if (inst->hfi_frame_info.picture_type & HFI_PICTURE_IDR) + inst->iframe = true; + else + inst->iframe = false; + break; + case HFI_PROP_SUBFRAME_INPUT: + if (port != INPUT_PORT) { + i_vpr_e(inst, + "%s: invalid port: %d for property %#x\n", + __func__, pkt->port, pkt->type); + break; + } + inst->hfi_frame_info.subframe_input = 1; + break; + case HFI_PROP_WORST_COMPRESSION_RATIO: + inst->hfi_frame_info.cr = payload_ptr[0]; + break; + case HFI_PROP_WORST_COMPLEXITY_FACTOR: + inst->hfi_frame_info.cf = payload_ptr[0]; + break; + case HFI_PROP_CABAC_SESSION: + if (payload_ptr[0] == 1) + msm_vidc_update_cap_value(inst, ENTROPY_MODE, + V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CABAC, + __func__); + else + msm_vidc_update_cap_value(inst, ENTROPY_MODE, + V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CAVLC, + __func__); + break; + case HFI_PROP_DPB_LIST: + rc = handle_dpb_list_property(inst, pkt); + if (rc) + break; + break; + case HFI_PROP_QUALITY_MODE: + if (inst->capabilities[QUALITY_MODE].value != payload_ptr[0]) + i_vpr_e(inst, + "%s: fw quality mode(%d) not matching the capability value(%d)\n", + __func__, payload_ptr[0], + inst->capabilities[QUALITY_MODE].value); + break; + case HFI_PROP_STAGE: + if (inst->capabilities[STAGE].value != payload_ptr[0]) + i_vpr_e(inst, + "%s: fw stage mode(%d) not matching the 
capability value(%d)\n", + __func__, payload_ptr[0], inst->capabilities[STAGE].value); + break; + case HFI_PROP_PIPE: + if (inst->capabilities[PIPE].value != payload_ptr[0]) + i_vpr_e(inst, + "%s: fw pipe mode(%d) not matching the capability value(%d)\n", + __func__, payload_ptr[0], inst->capabilities[PIPE].value); + break; + default: + i_vpr_e(inst, "%s: invalid property %#x\n", + __func__, pkt->type); + break; + } + + return rc; +} + +static int handle_property_without_payload(struct msm_vidc_inst *inst, + struct hfi_packet *pkt, u32 port) +{ + int rc = 0; + + switch (pkt->type) { + case HFI_PROP_DPB_LIST: + /* + * if fw sends dpb list property without payload, + * it means there are no more reference buffers. + */ + rc = handle_dpb_list_property(inst, pkt); + if (rc) + break; + break; + case HFI_PROP_NO_OUTPUT: + if (port != INPUT_PORT) { + i_vpr_e(inst, + "%s: invalid port: %d for property %#x\n", + __func__, pkt->port, pkt->type); + break; + } + i_vpr_h(inst, "received no_output property\n"); + inst->hfi_frame_info.no_output = 1; + break; + default: + i_vpr_e(inst, "%s: invalid property %#x\n", + __func__, pkt->type); + break; + } + + return rc; +} + +static int handle_session_property(struct msm_vidc_inst *inst, + struct hfi_packet *pkt) +{ + int rc = 0; + u32 port; + + i_vpr_l(inst, "%s: property type %#x\n", __func__, pkt->type); + + port = vidc_port_from_hfi(inst, pkt->port); + if (port >= MAX_PORT) { + i_vpr_e(inst, + "%s: invalid port: %d for property %#x\n", + __func__, pkt->port, pkt->type); + return -EINVAL; + } + + if (pkt->flags & HFI_FW_FLAGS_INFORMATION) { + i_vpr_h(inst, + "%s: information flag received for property %#x packet\n", + __func__, pkt->type); + return 0; + } + + if (check_for_packet_payload(inst, pkt, __func__)) { + rc = handle_property_with_payload(inst, pkt, port); + if (rc) + return rc; + } else { + rc = handle_property_without_payload(inst, pkt, port); + if (rc) + return rc; + } + + return rc; +} + +static int 
handle_image_version_property(struct msm_vidc_core *core, + struct hfi_packet *pkt) +{ + u32 i = 0; + u8 *str_image_version; + u32 req_bytes; + + req_bytes = pkt->size - sizeof(*pkt); + if (req_bytes < VENUS_VERSION_LENGTH - 1) { + d_vpr_e("%s: bad_pkt: %d\n", __func__, req_bytes); + return -EINVAL; + } + str_image_version = (u8 *)pkt + sizeof(struct hfi_packet); + /* + * The version string returned by firmware includes null + * characters at the start and in between. Replace the null + * characters with space, to print the version info. + */ + for (i = 0; i < VENUS_VERSION_LENGTH - 1; i++) { + if (str_image_version[i] != '\0') + core->fw_version[i] = str_image_version[i]; + else + core->fw_version[i] = ' '; + } + core->fw_version[i] = '\0'; + + d_vpr_h("%s: F/W version: %s\n", __func__, core->fw_version); + return 0; +} + +static int handle_system_property(struct msm_vidc_core *core, + struct hfi_packet *pkt) +{ + int rc = 0; + + switch (pkt->type) { + case HFI_PROP_IMAGE_VERSION: + rc = handle_image_version_property(core, pkt); + break; + default: + d_vpr_h("%s: property type %#x successful\n", + __func__, pkt->type); + break; + } + return rc; +} + +static int handle_system_response(struct msm_vidc_core *core, + struct hfi_header *hdr) +{ + int rc = 0; + struct hfi_packet *packet; + u8 *pkt, *start_pkt; + int i, j; + static const struct msm_vidc_core_hfi_range be[] = { + {HFI_SYSTEM_ERROR_BEGIN, HFI_SYSTEM_ERROR_END, handle_system_error }, + {HFI_PROP_BEGIN, HFI_PROP_END, handle_system_property }, + {HFI_CMD_BEGIN, HFI_CMD_END, handle_system_init }, + }; + + start_pkt = (u8 *)((u8 *)hdr + sizeof(struct hfi_header)); + for (i = 0; i < ARRAY_SIZE(be); i++) { + pkt = start_pkt; + for (j = 0; j < hdr->num_packets; j++) { + packet = (struct hfi_packet *)pkt; + /* handle system error */ + if (packet->flags & HFI_FW_FLAGS_SYSTEM_ERROR) { + d_vpr_e("%s: received system error %#x\n", + __func__, packet->type); + rc = handle_system_error(core, packet); + if (rc) + goto 
exit; + goto exit; + } + if (in_range(be[i], packet->type)) { + rc = be[i].handle(core, packet); + if (rc) + goto exit; + + /* skip processing anymore packets after system error */ + if (!i) { + d_vpr_e("%s: skip processing anymore packets\n", __func__); + goto exit; + } + } + pkt += packet->size; + } + } + +exit: + return rc; +} + +static int __handle_session_response(struct msm_vidc_inst *inst, + struct hfi_header *hdr) +{ + int rc = 0; + struct hfi_packet *packet; + u8 *pkt, *start_pkt; + bool dequeue = false; + int i, j; + static const struct msm_vidc_inst_hfi_range be[] = { + {HFI_SESSION_ERROR_BEGIN, HFI_SESSION_ERROR_END, handle_session_error }, + {HFI_INFORMATION_BEGIN, HFI_INFORMATION_END, handle_session_info }, + {HFI_PROP_BEGIN, HFI_PROP_END, handle_session_property }, + {HFI_CMD_BEGIN, HFI_CMD_END, handle_session_command }, + }; + + memset(&inst->hfi_frame_info, 0, sizeof(struct msm_vidc_hfi_frame_info)); + start_pkt = (u8 *)((u8 *)hdr + sizeof(struct hfi_header)); + for (i = 0; i < ARRAY_SIZE(be); i++) { + pkt = start_pkt; + for (j = 0; j < hdr->num_packets; j++) { + packet = (struct hfi_packet *)pkt; + /* handle session error */ + if (packet->flags & HFI_FW_FLAGS_SESSION_ERROR) { + i_vpr_e(inst, "%s: received session error %#x\n", + __func__, packet->type); + handle_session_error(inst, packet); + } + if (in_range(be[i], packet->type)) { + dequeue |= (packet->type == HFI_CMD_BUFFER); + rc = be[i].handle(inst, packet); + if (rc) + msm_vidc_change_state(inst, MSM_VIDC_ERROR, __func__); + } + pkt += packet->size; + } + } + + if (dequeue) { + rc = handle_dequeue_buffers(inst); + if (rc) + return rc; + } + memset(&inst->hfi_frame_info, 0, sizeof(struct msm_vidc_hfi_frame_info)); + + return rc; +} + +static int handle_session_response(struct msm_vidc_core *core, + struct hfi_header *hdr) +{ + struct msm_vidc_inst *inst; + struct hfi_packet *packet; + u8 *pkt; + int i, rc = 0; + bool found_ipsc = false; + + inst = get_inst(core, hdr->session_id); + if (!inst) 
{ + d_vpr_e("%s: Invalid inst\n", __func__); + return -EINVAL; + } + + inst_lock(inst, __func__); + /* search for cmd settings change pkt */ + pkt = (u8 *)((u8 *)hdr + sizeof(struct hfi_header)); + for (i = 0; i < hdr->num_packets; i++) { + packet = (struct hfi_packet *)pkt; + if (packet->type == HFI_CMD_SETTINGS_CHANGE) { + if (packet->port == HFI_PORT_BITSTREAM) { + found_ipsc = true; + break; + } + } + pkt += packet->size; + } + + /* if ipsc packet is found, initialise subsc_params */ + if (found_ipsc) + msm_vdec_init_input_subcr_params(inst); + + rc = __handle_session_response(inst, hdr); + if (rc) + goto exit; + +exit: + inst_unlock(inst, __func__); + put_inst(inst); + return rc; +} + +int handle_response(struct msm_vidc_core *core, void *response) +{ + struct hfi_header *hdr; + int rc = 0; + + hdr = (struct hfi_header *)response; + rc = validate_hdr_packet(core, hdr, __func__); + if (rc) { + d_vpr_e("%s: hdr pkt validation failed\n", __func__); + return handle_system_error(core, NULL); + } + + if (!hdr->session_id) + return handle_system_response(core, hdr); + else + return handle_session_response(core, hdr); + + return 0; +}

From patchwork Fri Jul 28 13:23:31 2023
From: Vikash Garodia
Subject: [PATCH 20/33] iris: vidc: hfi: add helpers for handling shared queues
Date: Fri, 28 Jul 2023 18:53:31 +0530
Message-ID: <1690550624-14642-21-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
References: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
List-ID: X-Mailing-List: linux-media@vger.kernel.org

This implements functions to allocate and update the shared memory used for sending commands to firmware and receiving messages from firmware.
Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../platform/qcom/iris/vidc/inc/venus_hfi_queue.h | 89 ++++ .../platform/qcom/iris/vidc/src/venus_hfi_queue.c | 537 +++++++++++++++++++++ 2 files changed, 626 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/venus_hfi_queue.h create mode 100644 drivers/media/platform/qcom/iris/vidc/src/venus_hfi_queue.c diff --git a/drivers/media/platform/qcom/iris/vidc/inc/venus_hfi_queue.h b/drivers/media/platform/qcom/iris/vidc/inc/venus_hfi_queue.h new file mode 100644 index 0000000..f533811 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/venus_hfi_queue.h @@ -0,0 +1,89 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2022, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef _VENUS_HFI_QUEUE_H_ +#define _VENUS_HFI_QUEUE_H_ + +#include + +#include "msm_vidc_internal.h" + +#define HFI_MASK_QHDR_TX_TYPE 0xff000000 +#define HFI_MASK_QHDR_RX_TYPE 0x00ff0000 +#define HFI_MASK_QHDR_PRI_TYPE 0x0000ff00 +#define HFI_MASK_QHDR_Q_ID_TYPE 0x000000ff +#define HFI_Q_ID_HOST_TO_CTRL_CMD_Q 0 +#define HFI_Q_ID_CTRL_TO_HOST_MSG_Q 1 +#define HFI_Q_ID_CTRL_TO_HOST_DEBUG_Q 2 +#define HFI_MASK_QHDR_STATUS 0x000000ff + +#define VIDC_IFACEQ_NUMQ 3 +#define VIDC_IFACEQ_CMDQ_IDX 0 +#define VIDC_IFACEQ_MSGQ_IDX 1 +#define VIDC_IFACEQ_DBGQ_IDX 2 +#define VIDC_IFACEQ_MAX_BUF_COUNT 50 +#define VIDC_IFACE_MAX_PARALLEL_CLNTS 16 +#define VIDC_IFACEQ_DFLT_QHDR 0x01010000 + +struct hfi_queue_table_header { + u32 qtbl_version; + u32 qtbl_size; + u32 qtbl_qhdr0_offset; + u32 qtbl_qhdr_size; + u32 qtbl_num_q; + u32 qtbl_num_active_q; + void *device_addr; + char name[256]; +}; + +struct hfi_queue_header { + u32 qhdr_status; + u32 qhdr_start_addr; + u32 qhdr_type; + u32 qhdr_q_size; + u32 qhdr_pkt_size; + u32 qhdr_pkt_drop_cnt; + u32 qhdr_rx_wm; + u32 qhdr_tx_wm; + u32 qhdr_rx_req; + u32 qhdr_tx_req; + 
u32 qhdr_rx_irq_status; + u32 qhdr_tx_irq_status; + u32 qhdr_read_idx; + u32 qhdr_write_idx; +}; + +#define VIDC_IFACEQ_TABLE_SIZE (sizeof(struct hfi_queue_table_header) + \ + sizeof(struct hfi_queue_header) * VIDC_IFACEQ_NUMQ) + +#define VIDC_IFACEQ_QUEUE_SIZE (VIDC_IFACEQ_MAX_PKT_SIZE * \ + VIDC_IFACEQ_MAX_BUF_COUNT * VIDC_IFACE_MAX_PARALLEL_CLNTS) + +#define VIDC_IFACEQ_GET_QHDR_START_ADDR(ptr, i) \ + ((void *)(((ptr) + sizeof(struct hfi_queue_table_header)) + \ + ((i) * sizeof(struct hfi_queue_header)))) + +#define QDSS_SIZE 4096 +#define SFR_SIZE 4096 + +#define QUEUE_SIZE (VIDC_IFACEQ_TABLE_SIZE + \ + (VIDC_IFACEQ_QUEUE_SIZE * VIDC_IFACEQ_NUMQ)) + +#define ALIGNED_QDSS_SIZE ALIGN(QDSS_SIZE, SZ_4K) +#define ALIGNED_SFR_SIZE ALIGN(SFR_SIZE, SZ_4K) +#define ALIGNED_QUEUE_SIZE ALIGN(QUEUE_SIZE, SZ_4K) +#define SHARED_QSIZE ALIGN(ALIGNED_SFR_SIZE + ALIGNED_QUEUE_SIZE + \ + ALIGNED_QDSS_SIZE, SZ_1M) +#define TOTAL_QSIZE (SHARED_QSIZE - ALIGNED_SFR_SIZE - ALIGNED_QDSS_SIZE) + +int venus_hfi_queue_cmd_write(struct msm_vidc_core *core, void *pkt); +int venus_hfi_queue_msg_read(struct msm_vidc_core *core, void *pkt); +int venus_hfi_queue_dbg_read(struct msm_vidc_core *core, void *pkt); +void venus_hfi_queue_deinit(struct msm_vidc_core *core); +int venus_hfi_queue_init(struct msm_vidc_core *core); +int venus_hfi_reset_queue_header(struct msm_vidc_core *core); + +#endif diff --git a/drivers/media/platform/qcom/iris/vidc/src/venus_hfi_queue.c b/drivers/media/platform/qcom/iris/vidc/src/venus_hfi_queue.c new file mode 100644 index 0000000..8e038ba --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/venus_hfi_queue.c @@ -0,0 +1,537 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_memory.h" +#include "msm_vidc_platform.h" +#include "venus_hfi.h" +#include "venus_hfi_queue.h" + +static void __set_queue_hdr_defaults(struct hfi_queue_header *q_hdr) +{ + q_hdr->qhdr_status = 0x1; + q_hdr->qhdr_type = VIDC_IFACEQ_DFLT_QHDR; + q_hdr->qhdr_q_size = VIDC_IFACEQ_QUEUE_SIZE / 4; + q_hdr->qhdr_pkt_size = 0; + q_hdr->qhdr_rx_wm = 0x1; + q_hdr->qhdr_tx_wm = 0x1; + q_hdr->qhdr_rx_req = 0x1; + q_hdr->qhdr_tx_req = 0x0; + q_hdr->qhdr_rx_irq_status = 0x0; + q_hdr->qhdr_tx_irq_status = 0x0; + q_hdr->qhdr_read_idx = 0x0; + q_hdr->qhdr_write_idx = 0x0; +} + +static void __dump_packet(u8 *packet, const char *function, void *qinfo) +{ + u32 c = 0, session_id, packet_size = *(u32 *)packet; + const int row_size = 32; + /* + * row must contain enough for 0xdeadbaad * 8 to be converted into + * "de ad ba ab " * 8 + '\0' + */ + char row[3 * 32]; + + session_id = *((u32 *)packet + 1); + + d_vpr_t("%08x: %s: %pK\n", session_id, function, qinfo); + + for (c = 0; c * row_size < packet_size; ++c) { + int bytes_to_read = ((c + 1) * row_size > packet_size) ? 
+ packet_size % row_size : row_size; + hex_dump_to_buffer(packet + c * row_size, bytes_to_read, + row_size, 4, row, sizeof(row), false); + d_vpr_t("%08x: %s\n", session_id, row); + } +} + +static int __write_queue(struct msm_vidc_iface_q_info *qinfo, u8 *packet, + bool *rx_req_is_set) +{ + struct hfi_queue_header *queue; + u32 packet_size_in_words, new_write_idx; + u32 empty_space, read_idx, write_idx; + u32 *write_ptr; + + if (!qinfo || !packet) { + d_vpr_e("%s: invalid params %pK %pK\n", + __func__, qinfo, packet); + return -EINVAL; + } else if (!qinfo->q_array.align_virtual_addr) { + d_vpr_e("Queues have already been freed\n"); + return -EINVAL; + } + + queue = (struct hfi_queue_header *)qinfo->q_hdr; + if (!queue) { + d_vpr_e("queue not present\n"); + return -ENOENT; + } + + if (msm_vidc_debug & VIDC_PKT) + __dump_packet(packet, __func__, qinfo); + + packet_size_in_words = (*(u32 *)packet) >> 2; + if (!packet_size_in_words || packet_size_in_words > + qinfo->q_array.mem_size >> 2) { + d_vpr_e("Invalid packet size\n"); + return -ENODATA; + } + + read_idx = queue->qhdr_read_idx; + write_idx = queue->qhdr_write_idx; + + empty_space = (write_idx >= read_idx) ? 
+ ((qinfo->q_array.mem_size >> 2) - (write_idx - read_idx)) : + (read_idx - write_idx); + if (empty_space <= packet_size_in_words) { + queue->qhdr_tx_req = 1; + d_vpr_e("Insufficient size (%d) to write (%d)\n", + empty_space, packet_size_in_words); + return -ENOTEMPTY; + } + + queue->qhdr_tx_req = 0; + + new_write_idx = write_idx + packet_size_in_words; + write_ptr = (u32 *)((qinfo->q_array.align_virtual_addr) + + (write_idx << 2)); + if (write_ptr < (u32 *)qinfo->q_array.align_virtual_addr || + write_ptr > (u32 *)(qinfo->q_array.align_virtual_addr + + qinfo->q_array.mem_size)) { + d_vpr_e("Invalid write index\n"); + return -ENODATA; + } + + if (new_write_idx < (qinfo->q_array.mem_size >> 2)) { + memcpy(write_ptr, packet, packet_size_in_words << 2); + } else { + new_write_idx -= qinfo->q_array.mem_size >> 2; + memcpy(write_ptr, packet, (packet_size_in_words - + new_write_idx) << 2); + memcpy((void *)qinfo->q_array.align_virtual_addr, + packet + ((packet_size_in_words - new_write_idx) << 2), + new_write_idx << 2); + } + + /* + * Memory barrier to make sure packet is written before updating the + * write index + */ + mb(); + queue->qhdr_write_idx = new_write_idx; + if (rx_req_is_set) + *rx_req_is_set = true; + /* + * Memory barrier to make sure write index is updated before an + * interrupt is raised on venus. 
+ */ + mb(); + return 0; +} + +static int __read_queue(struct msm_vidc_iface_q_info *qinfo, u8 *packet, + u32 *pb_tx_req_is_set) +{ + struct hfi_queue_header *queue; + u32 packet_size_in_words, new_read_idx; + u32 *read_ptr; + u32 receive_request = 0; + u32 read_idx, write_idx; + int rc = 0; + + if (!qinfo || !packet || !pb_tx_req_is_set) { + d_vpr_e("%s: invalid params %pK %pK %pK\n", + __func__, qinfo, packet, pb_tx_req_is_set); + return -EINVAL; + } else if (!qinfo->q_array.align_virtual_addr) { + d_vpr_e("Queues have already been freed\n"); + return -EINVAL; + } + + /* + * Memory barrier to make sure data is valid before + * reading it + */ + mb(); + queue = (struct hfi_queue_header *)qinfo->q_hdr; + + if (!queue) { + d_vpr_e("Queue memory is not allocated\n"); + return -ENOMEM; + } + + /* + * Do not set the receive request for the debug queue; if set, + * Venus generates an interrupt for debug messages even + * when there is no response message available. + * In general the debug queue will not become full as it + * is being emptied out for every interrupt from Venus. + * Venus will anyway generate an interrupt if it is full. + */ + if (queue->qhdr_type & HFI_Q_ID_CTRL_TO_HOST_MSG_Q) + receive_request = 1; + + read_idx = queue->qhdr_read_idx; + write_idx = queue->qhdr_write_idx; + + if (read_idx == write_idx) { + queue->qhdr_rx_req = receive_request; + /* + * mb() to ensure qhdr is updated in main memory + * so that venus reads the updated header values + */ + mb(); + *pb_tx_req_is_set = 0; + d_vpr_l("%s queue is empty, rx_req = %u, tx_req = %u, read_idx = %u\n", + receive_request ?
"message" : "debug", + queue->qhdr_rx_req, queue->qhdr_tx_req, + queue->qhdr_read_idx); + return -ENODATA; + } + + read_ptr = (u32 *)((qinfo->q_array.align_virtual_addr) + + (read_idx << 2)); + if (read_ptr < (u32 *)qinfo->q_array.align_virtual_addr || + read_ptr > (u32 *)(qinfo->q_array.align_virtual_addr + + qinfo->q_array.mem_size - sizeof(*read_ptr))) { + d_vpr_e("Invalid read index\n"); + return -ENODATA; + } + + packet_size_in_words = (*read_ptr) >> 2; + if (!packet_size_in_words) { + d_vpr_e("Zero packet size\n"); + return -ENODATA; + } + + new_read_idx = read_idx + packet_size_in_words; + if (((packet_size_in_words << 2) <= VIDC_IFACEQ_VAR_HUGE_PKT_SIZE) && + read_idx <= (qinfo->q_array.mem_size >> 2)) { + if (new_read_idx < (qinfo->q_array.mem_size >> 2)) { + memcpy(packet, read_ptr, + packet_size_in_words << 2); + } else { + new_read_idx -= (qinfo->q_array.mem_size >> 2); + memcpy(packet, read_ptr, + (packet_size_in_words - new_read_idx) << 2); + memcpy(packet + ((packet_size_in_words - + new_read_idx) << 2), + (u8 *)qinfo->q_array.align_virtual_addr, + new_read_idx << 2); + } + } else { + d_vpr_e("BAD packet received, read_idx: %#x, pkt_size: %d\n", + read_idx, packet_size_in_words << 2); + d_vpr_e("Dropping this packet\n"); + new_read_idx = write_idx; + rc = -ENODATA; + } + + queue->qhdr_rx_req = receive_request; + + queue->qhdr_read_idx = new_read_idx; + /* + * mb() to ensure qhdr is updated in main memory + * so that venus reads the updated header values + */ + mb(); + + *pb_tx_req_is_set = (queue->qhdr_tx_req == 1) ? 
1 : 0; + + if ((msm_vidc_debug & VIDC_PKT) && + !(queue->qhdr_type & HFI_Q_ID_CTRL_TO_HOST_DEBUG_Q)) { + __dump_packet(packet, __func__, qinfo); + } + + return rc; +} + +/* Writes into cmdq without raising an interrupt */ +static int __iface_cmdq_write_relaxed(struct msm_vidc_core *core, + void *pkt, bool *requires_interrupt) +{ + struct msm_vidc_iface_q_info *q_info; + int rc = -E2BIG; + + rc = __strict_check(core, __func__); + if (rc) + return rc; + + if (!core_in_valid_state(core)) { + d_vpr_e("%s: fw not in init state\n", __func__); + rc = -EINVAL; + goto err_q_null; + } + + q_info = &core->iface_queues[VIDC_IFACEQ_CMDQ_IDX]; + if (!q_info) { + d_vpr_e("cannot write to shared Q's\n"); + goto err_q_null; + } + + if (!q_info->q_array.align_virtual_addr) { + d_vpr_e("cannot write to shared CMD Q's\n"); + rc = -ENODATA; + goto err_q_null; + } + + if (!__write_queue(q_info, (u8 *)pkt, requires_interrupt)) + rc = 0; + else + d_vpr_e("queue full\n"); + +err_q_null: + return rc; +} + +int venus_hfi_queue_cmd_write(struct msm_vidc_core *core, void *pkt) +{ + bool needs_interrupt = false; + int rc = __iface_cmdq_write_relaxed(core, pkt, &needs_interrupt); + + if (!rc && needs_interrupt) + call_iris_op(core, raise_interrupt, core); + + return rc; +} + +int venus_hfi_queue_msg_read(struct msm_vidc_core *core, void *pkt) +{ + u32 tx_req_is_set = 0; + int rc = 0; + struct msm_vidc_iface_q_info *q_info; + + if (!pkt) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + + if (!core_in_valid_state(core)) { + d_vpr_e("%s: fw not in init state\n", __func__); + rc = -EINVAL; + goto read_error_null; + } + + q_info = &core->iface_queues[VIDC_IFACEQ_MSGQ_IDX]; + if (!q_info->q_array.align_virtual_addr) { + d_vpr_e("cannot read from shared MSG Q's\n"); + rc = -ENODATA; + goto read_error_null; + } + + if (!__read_queue(q_info, (u8 *)pkt, &tx_req_is_set)) { + if (tx_req_is_set) { + //call_iris_op(core, raise_interrupt, core); + d_vpr_e("%s: queue is full\n", __func__); 
+ rc = -EINVAL; + goto read_error_null; + } + rc = 0; + } else { + rc = -ENODATA; + } + +read_error_null: + return rc; +} + +int venus_hfi_queue_dbg_read(struct msm_vidc_core *core, void *pkt) +{ + u32 tx_req_is_set = 0; + int rc = 0; + struct msm_vidc_iface_q_info *q_info; + + if (!pkt) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + + q_info = &core->iface_queues[VIDC_IFACEQ_DBGQ_IDX]; + if (!q_info->q_array.align_virtual_addr) { + d_vpr_e("cannot read from shared DBG Q's\n"); + rc = -ENODATA; + goto dbg_error_null; + } + + if (!__read_queue(q_info, (u8 *)pkt, &tx_req_is_set)) { + if (tx_req_is_set) { + d_vpr_e("%s: queue is full\n", __func__); + //call_iris_op(core, raise_interrupt, core); + rc = -EINVAL; + goto dbg_error_null; + } + rc = 0; + } else { + rc = -ENODATA; + } + +dbg_error_null: + return rc; +} + +void venus_hfi_queue_deinit(struct msm_vidc_core *core) +{ + int i; + + if (!core->iface_q_table.align_virtual_addr) { + d_vpr_h("%s: queues already deallocated\n", __func__); + return; + } + + call_mem_op(core, memory_unmap_free, core, &core->iface_q_table.mem); + call_mem_op(core, memory_unmap_free, core, &core->sfr.mem); + + for (i = 0; i < VIDC_IFACEQ_NUMQ; i++) { + core->iface_queues[i].q_hdr = NULL; + core->iface_queues[i].q_array.align_virtual_addr = NULL; + core->iface_queues[i].q_array.align_device_addr = 0; + } + + core->iface_q_table.align_virtual_addr = NULL; + core->iface_q_table.align_device_addr = 0; + + core->sfr.align_virtual_addr = NULL; + core->sfr.align_device_addr = 0; +} + +int venus_hfi_reset_queue_header(struct msm_vidc_core *core) +{ + struct msm_vidc_iface_q_info *iface_q; + struct hfi_queue_header *q_hdr; + int i; + + for (i = 0; i < VIDC_IFACEQ_NUMQ; i++) { + iface_q = &core->iface_queues[i]; + __set_queue_hdr_defaults(iface_q->q_hdr); + } + + iface_q = &core->iface_queues[VIDC_IFACEQ_CMDQ_IDX]; + q_hdr = iface_q->q_hdr; + q_hdr->qhdr_start_addr = iface_q->q_array.align_device_addr; + q_hdr->qhdr_type |= 
HFI_Q_ID_HOST_TO_CTRL_CMD_Q; + + iface_q = &core->iface_queues[VIDC_IFACEQ_MSGQ_IDX]; + q_hdr = iface_q->q_hdr; + q_hdr->qhdr_start_addr = iface_q->q_array.align_device_addr; + q_hdr->qhdr_type |= HFI_Q_ID_CTRL_TO_HOST_MSG_Q; + + iface_q = &core->iface_queues[VIDC_IFACEQ_DBGQ_IDX]; + q_hdr = iface_q->q_hdr; + q_hdr->qhdr_start_addr = iface_q->q_array.align_device_addr; + q_hdr->qhdr_type |= HFI_Q_ID_CTRL_TO_HOST_DEBUG_Q; + /* + * Set receive request to zero on debug queue as there is no + * need of interrupt from video hardware for debug messages + */ + q_hdr->qhdr_rx_req = 0; + + return 0; +} + +int venus_hfi_queue_init(struct msm_vidc_core *core) +{ + int rc = 0; + struct hfi_queue_table_header *q_tbl_hdr; + struct hfi_queue_header *q_hdr; + struct msm_vidc_iface_q_info *iface_q; + struct msm_vidc_mem mem; + int offset = 0; + u32 i; + + if (core->iface_q_table.align_virtual_addr) { + d_vpr_h("%s: queues already allocated\n", __func__); + venus_hfi_reset_queue_header(core); + return 0; + } + + memset(&mem, 0, sizeof(mem)); + mem.type = MSM_VIDC_BUF_INTERFACE_QUEUE; + mem.region = MSM_VIDC_NON_SECURE; + mem.size = TOTAL_QSIZE; + mem.secure = false; + mem.map_kernel = true; + rc = call_mem_op(core, memory_alloc_map, core, &mem); + if (rc) { + d_vpr_e("%s: alloc and map failed\n", __func__); + goto fail_alloc_queue; + } + core->iface_q_table.align_virtual_addr = mem.kvaddr; + core->iface_q_table.align_device_addr = mem.device_addr; + core->iface_q_table.mem = mem; + + core->iface_q_table.mem_size = VIDC_IFACEQ_TABLE_SIZE; + offset += core->iface_q_table.mem_size; + + for (i = 0; i < VIDC_IFACEQ_NUMQ; i++) { + iface_q = &core->iface_queues[i]; + iface_q->q_array.align_device_addr = mem.device_addr + offset; + iface_q->q_array.align_virtual_addr = (void *)((char *)mem.kvaddr + offset); + iface_q->q_array.mem_size = VIDC_IFACEQ_QUEUE_SIZE; + offset += iface_q->q_array.mem_size; + iface_q->q_hdr = + VIDC_IFACEQ_GET_QHDR_START_ADDR(core->iface_q_table.align_virtual_addr, 
i); + __set_queue_hdr_defaults(iface_q->q_hdr); + } + + q_tbl_hdr = (struct hfi_queue_table_header *) + core->iface_q_table.align_virtual_addr; + q_tbl_hdr->qtbl_version = 0; + q_tbl_hdr->device_addr = (void *)core; + strscpy(q_tbl_hdr->name, "msm_v4l2_vidc", sizeof(q_tbl_hdr->name)); + q_tbl_hdr->qtbl_size = VIDC_IFACEQ_TABLE_SIZE; + q_tbl_hdr->qtbl_qhdr0_offset = sizeof(struct hfi_queue_table_header); + q_tbl_hdr->qtbl_qhdr_size = sizeof(struct hfi_queue_header); + q_tbl_hdr->qtbl_num_q = VIDC_IFACEQ_NUMQ; + q_tbl_hdr->qtbl_num_active_q = VIDC_IFACEQ_NUMQ; + + iface_q = &core->iface_queues[VIDC_IFACEQ_CMDQ_IDX]; + q_hdr = iface_q->q_hdr; + q_hdr->qhdr_start_addr = iface_q->q_array.align_device_addr; + q_hdr->qhdr_type |= HFI_Q_ID_HOST_TO_CTRL_CMD_Q; + + iface_q = &core->iface_queues[VIDC_IFACEQ_MSGQ_IDX]; + q_hdr = iface_q->q_hdr; + q_hdr->qhdr_start_addr = iface_q->q_array.align_device_addr; + q_hdr->qhdr_type |= HFI_Q_ID_CTRL_TO_HOST_MSG_Q; + + iface_q = &core->iface_queues[VIDC_IFACEQ_DBGQ_IDX]; + q_hdr = iface_q->q_hdr; + q_hdr->qhdr_start_addr = iface_q->q_array.align_device_addr; + q_hdr->qhdr_type |= HFI_Q_ID_CTRL_TO_HOST_DEBUG_Q; + /* + * Set receive request to zero on debug queue as there is no + * need of interrupt from video hardware for debug messages + */ + q_hdr->qhdr_rx_req = 0; + + /* sfr buffer */ + memset(&mem, 0, sizeof(mem)); + mem.type = MSM_VIDC_BUF_INTERFACE_QUEUE; + mem.region = MSM_VIDC_NON_SECURE; + mem.size = ALIGNED_SFR_SIZE; + mem.secure = false; + mem.map_kernel = true; + rc = call_mem_op(core, memory_alloc_map, core, &mem); + if (rc) { + d_vpr_e("%s: sfr alloc and map failed\n", __func__); + goto fail_alloc_queue; + } + core->sfr.align_virtual_addr = mem.kvaddr; + core->sfr.align_device_addr = mem.device_addr; + core->sfr.mem = mem; + + core->sfr.mem_size = ALIGNED_SFR_SIZE; + /* write sfr buffer size in first word */ + *((u32 *)core->sfr.align_virtual_addr) = core->sfr.mem_size; + + return 0; + +fail_alloc_queue: + return -ENOMEM; 
+}
From patchwork Fri Jul 28 13:23:32 2023
X-Patchwork-Submitter: Vikash Garodia
X-Patchwork-Id: 13331954
From: Vikash Garodia
Subject: [PATCH 21/33] iris: vidc: hfi: Add packetization layer
Date: Fri, 28 Jul 2023 18:53:32 +0530
Message-ID: <1690550624-14642-22-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
List-ID: X-Mailing-List: linux-media@vger.kernel.org

This implements hfi packets used to communicate the info
to/from firmware. Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../platform/qcom/iris/vidc/inc/hfi_command.h | 190 ++++++ .../media/platform/qcom/iris/vidc/inc/hfi_packet.h | 52 ++ .../media/platform/qcom/iris/vidc/src/hfi_packet.c | 657 +++++++++++++++++++++ 3 files changed, 899 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/hfi_command.h create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/hfi_packet.h create mode 100644 drivers/media/platform/qcom/iris/vidc/src/hfi_packet.c diff --git a/drivers/media/platform/qcom/iris/vidc/inc/hfi_command.h b/drivers/media/platform/qcom/iris/vidc/inc/hfi_command.h new file mode 100644 index 0000000..5542e56 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/hfi_command.h @@ -0,0 +1,190 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#ifndef __H_HFI_COMMAND_H__ +#define __H_HFI_COMMAND_H__ + +//todo: DP: remove below headers +#include +#include + +#define HFI_VIDEO_ARCH_LX 0x1 + +struct hfi_header { + u32 size; + u32 session_id; + u32 header_id; + u32 reserved[4]; + u32 num_packets; +}; + +struct hfi_packet { + u32 size; + u32 type; + u32 flags; + u32 payload_info; + u32 port; + u32 packet_id; + u32 reserved[2]; +}; + +struct hfi_buffer { + u32 type; + u32 index; + u64 base_address; + u32 addr_offset; + u32 buffer_size; + u32 data_offset; + u32 data_size; + u64 timestamp; + u32 flags; + u32 reserved[5]; +}; + +enum hfi_packet_host_flags { + HFI_HOST_FLAGS_NONE = 0x00000000, + HFI_HOST_FLAGS_INTR_REQUIRED = 0x00000001, + HFI_HOST_FLAGS_RESPONSE_REQUIRED = 0x00000002, + HFI_HOST_FLAGS_NON_DISCARDABLE = 0x00000004, + HFI_HOST_FLAGS_GET_PROPERTY = 0x00000008, +}; + +enum hfi_packet_firmware_flags { + HFI_FW_FLAGS_NONE = 0x00000000, + HFI_FW_FLAGS_SUCCESS = 0x00000001, + HFI_FW_FLAGS_INFORMATION = 0x00000002, + HFI_FW_FLAGS_SESSION_ERROR = 0x00000004, + HFI_FW_FLAGS_SYSTEM_ERROR = 0x00000008, +}; + +enum hfi_packet_payload_info { + HFI_PAYLOAD_NONE = 0x00000000, + HFI_PAYLOAD_U32 = 0x00000001, + HFI_PAYLOAD_S32 = 0x00000002, + HFI_PAYLOAD_U64 = 0x00000003, + HFI_PAYLOAD_S64 = 0x00000004, + HFI_PAYLOAD_STRUCTURE = 0x00000005, + HFI_PAYLOAD_BLOB = 0x00000006, + HFI_PAYLOAD_STRING = 0x00000007, + HFI_PAYLOAD_Q16 = 0x00000008, + HFI_PAYLOAD_U32_ENUM = 0x00000009, + HFI_PAYLOAD_32_PACKED = 0x0000000a, + HFI_PAYLOAD_U32_ARRAY = 0x0000000b, + HFI_PAYLOAD_S32_ARRAY = 0x0000000c, + HFI_PAYLOAD_64_PACKED = 0x0000000d, +}; + +enum hfi_packet_port_type { + HFI_PORT_NONE = 0x00000000, + HFI_PORT_BITSTREAM = 0x00000001, + HFI_PORT_RAW = 0x00000002, +}; + +enum hfi_buffer_type { + HFI_BUFFER_BITSTREAM = 0x00000001, + HFI_BUFFER_RAW = 0x00000002, + HFI_BUFFER_METADATA = 0x00000003, + HFI_BUFFER_SUBCACHE = 0x00000004, + HFI_BUFFER_PARTIAL_DATA = 0x00000005, + HFI_BUFFER_DPB = 0x00000006, + HFI_BUFFER_BIN = 
0x00000007, + HFI_BUFFER_LINE = 0x00000008, + HFI_BUFFER_ARP = 0x00000009, + HFI_BUFFER_COMV = 0x0000000A, + HFI_BUFFER_NON_COMV = 0x0000000B, + HFI_BUFFER_PERSIST = 0x0000000C, + HFI_BUFFER_VPSS = 0x0000000D, +}; + +enum hfi_buffer_host_flags { + HFI_BUF_HOST_FLAG_NONE = 0x00000000, + HFI_BUF_HOST_FLAG_RELEASE = 0x00000001, + HFI_BUF_HOST_FLAG_READONLY = 0x00000010, + HFI_BUF_HOST_FLAG_CODEC_CONFIG = 0x00000100, + HFI_BUF_HOST_FLAGS_CB_NON_SECURE = 0x00000200, + HFI_BUF_HOST_FLAGS_CB_SECURE_PIXEL = 0x00000400, + HFI_BUF_HOST_FLAGS_CB_SECURE_BITSTREAM = 0x00000800, + HFI_BUF_HOST_FLAGS_CB_SECURE_NON_PIXEL = 0x00001000, + HFI_BUF_HOST_FLAGS_CB_NON_SECURE_PIXEL = 0x00002000, +}; + +enum hfi_buffer_firmware_flags { + HFI_BUF_FW_FLAG_NONE = 0x00000000, + HFI_BUF_FW_FLAG_RELEASE_DONE = 0x00000001, + HFI_BUF_FW_FLAG_READONLY = 0x00000010, + HFI_BUF_FW_FLAG_CODEC_CONFIG = 0x00000100, + HFI_BUF_FW_FLAG_LAST = 0x10000000, + HFI_BUF_FW_FLAG_PSC_LAST = 0x20000000, +}; + +enum hfi_metapayload_header_flags { + HFI_METADATA_FLAGS_NONE = 0x00000000, + HFI_METADATA_FLAGS_TOP_FIELD = 0x00000001, + HFI_METADATA_FLAGS_BOTTOM_FIELD = 0x00000002, +}; + +struct metabuf_header { + u32 count; + u32 size; + u32 version; + u32 reserved[5]; +}; + +struct metapayload_header { + u32 type; + u32 size; + u32 version; + u32 offset; + u32 flags; + u32 reserved[3]; +}; + +enum hfi_property_mode_type { + HFI_MODE_NONE = 0x00000000, + HFI_MODE_PORT_SETTINGS_CHANGE = 0x00000001, + HFI_MODE_PROPERTY = 0x00000002, + HFI_MODE_METADATA = 0x00000004, + HFI_MODE_DYNAMIC_METADATA = 0x00000005, +}; + +enum hfi_reserve_type { + HFI_RESERVE_START = 0x1, + HFI_RESERVE_STOP = 0x2, +}; + +#define HFI_CMD_BEGIN 0x01000000 +#define HFI_CMD_INIT 0x01000001 +#define HFI_CMD_POWER_COLLAPSE 0x01000002 +#define HFI_CMD_OPEN 0x01000003 +#define HFI_CMD_CLOSE 0x01000004 +#define HFI_CMD_START 0x01000005 +#define HFI_CMD_STOP 0x01000006 +#define HFI_CMD_DRAIN 0x01000007 +#define HFI_CMD_RESUME 0x01000008 +#define 
HFI_CMD_BUFFER 0x01000009 +#define HFI_CMD_DELIVERY_MODE 0x0100000A +#define HFI_CMD_SUBSCRIBE_MODE 0x0100000B +#define HFI_CMD_SETTINGS_CHANGE 0x0100000C + +#define HFI_SSR_TYPE_SW_ERR_FATAL 0x1 +#define HFI_SSR_TYPE_SW_DIV_BY_ZERO 0x2 +#define HFI_SSR_TYPE_CPU_WDOG_IRQ 0x3 +#define HFI_SSR_TYPE_NOC_ERROR 0x4 +#define HFI_BITMASK_HW_CLIENT_ID 0x000000f0 +#define HFI_BITMASK_SSR_TYPE 0x0000000f +#define HFI_CMD_SSR 0x0100000D + +#define HFI_STABILITY_TYPE_VCODEC_HUNG 0x1 +#define HFI_STABILITY_TYPE_ENC_BUFFER_FULL 0x2 +#define HFI_BITMASK_STABILITY_TYPE 0x0000000f +#define HFI_CMD_STABILITY 0x0100000E + +#define HFI_CMD_RESERVE 0x0100000F +#define HFI_CMD_FLUSH 0x01000010 +#define HFI_CMD_PAUSE 0x01000011 +#define HFI_CMD_END 0x01FFFFFF + +#endif //__H_HFI_COMMAND_H__ diff --git a/drivers/media/platform/qcom/iris/vidc/inc/hfi_packet.h b/drivers/media/platform/qcom/iris/vidc/inc/hfi_packet.h new file mode 100644 index 0000000..dc19c85 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/hfi_packet.h @@ -0,0 +1,52 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#ifndef _HFI_PACKET_H_ +#define _HFI_PACKET_H_ + +#include "hfi_command.h" +#include "hfi_property.h" +#include "msm_vidc_core.h" +#include "msm_vidc_inst.h" +#include "msm_vidc_internal.h" + +u32 get_hfi_port(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port); +u32 get_hfi_port_from_buffer_type(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type); +u32 hfi_buf_type_from_driver(enum msm_vidc_domain_type domain, + enum msm_vidc_buffer_type buffer_type); +u32 hfi_buf_type_to_driver(enum msm_vidc_domain_type domain, + enum hfi_buffer_type buffer_type, + enum hfi_packet_port_type port_type); +u32 get_hfi_codec(struct msm_vidc_inst *inst); +u32 get_hfi_colorformat(struct msm_vidc_inst *inst, + enum msm_vidc_colorformat_type colorformat); +int get_hfi_buffer(struct msm_vidc_inst *inst, + struct msm_vidc_buffer *buffer, struct hfi_buffer *buf); +int hfi_create_header(u8 *packet, u32 packet_size, + u32 session_id, u32 header_id); +int hfi_create_packet(u8 *packet, u32 packet_size, + u32 pkt_type, u32 pkt_flags, u32 payload_type, u32 port, + u32 packet_id, void *payload, u32 payload_size); +int hfi_create_buffer(u8 *packet, u32 packet_size, u32 *offset, + enum msm_vidc_domain_type domain, + struct msm_vidc_buffer *data); +int hfi_packet_sys_init(struct msm_vidc_core *core, + u8 *pkt, u32 pkt_size); +int hfi_packet_image_version(struct msm_vidc_core *core, + u8 *pkt, u32 pkt_size); +int hfi_packet_sys_pc_prep(struct msm_vidc_core *core, + u8 *pkt, u32 pkt_size); +int hfi_packet_sys_debug_config(struct msm_vidc_core *core, u8 *pkt, + u32 pkt_size, u32 debug_config); +int hfi_packet_session_command(struct msm_vidc_inst *inst, u32 pkt_type, + u32 flags, u32 port, u32 session_id, + u32 payload_type, void *payload, u32 payload_size); +int hfi_packet_sys_intraframe_powercollapse(struct msm_vidc_core *core, u8 *pkt, + u32 pkt_size, u32 enable); + +#endif // _HFI_PACKET_H_ diff --git a/drivers/media/platform/qcom/iris/vidc/src/hfi_packet.c 
b/drivers/media/platform/qcom/iris/vidc/src/hfi_packet.c new file mode 100644 index 0000000..2cf777c --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/hfi_packet.c @@ -0,0 +1,657 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include "hfi_packet.h" +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_inst.h" +#include "msm_vidc_platform.h" + +u32 get_hfi_port(struct msm_vidc_inst *inst, + enum msm_vidc_port_type port) +{ + u32 hfi_port = HFI_PORT_NONE; + + if (is_decode_session(inst)) { + switch (port) { + case INPUT_PORT: + hfi_port = HFI_PORT_BITSTREAM; + break; + case OUTPUT_PORT: + hfi_port = HFI_PORT_RAW; + break; + default: + i_vpr_e(inst, "%s: invalid port type %d\n", + __func__, port); + break; + } + } else if (is_encode_session(inst)) { + switch (port) { + case INPUT_PORT: + hfi_port = HFI_PORT_RAW; + break; + case OUTPUT_PORT: + hfi_port = HFI_PORT_BITSTREAM; + break; + default: + i_vpr_e(inst, "%s: invalid port type %d\n", + __func__, port); + break; + } + } else { + i_vpr_e(inst, "%s: invalid domain %#x\n", + __func__, inst->domain); + } + + return hfi_port; +} + +u32 get_hfi_port_from_buffer_type(struct msm_vidc_inst *inst, + enum msm_vidc_buffer_type buffer_type) +{ + u32 hfi_port = HFI_PORT_NONE; + + if (is_decode_session(inst)) { + switch (buffer_type) { + case MSM_VIDC_BUF_INPUT: + case MSM_VIDC_BUF_BIN: + case MSM_VIDC_BUF_COMV: + case MSM_VIDC_BUF_NON_COMV: + case MSM_VIDC_BUF_LINE: + hfi_port = HFI_PORT_BITSTREAM; + break; + case MSM_VIDC_BUF_OUTPUT: + case MSM_VIDC_BUF_DPB: + hfi_port = HFI_PORT_RAW; + break; + case MSM_VIDC_BUF_PERSIST: + hfi_port = HFI_PORT_NONE; + break; + default: + i_vpr_e(inst, "%s: invalid buffer type %d\n", + __func__, buffer_type); + break; + } + } else if (is_encode_session(inst)) { + switch 
(buffer_type) { + case MSM_VIDC_BUF_INPUT: + case MSM_VIDC_BUF_VPSS: + hfi_port = HFI_PORT_RAW; + break; + case MSM_VIDC_BUF_OUTPUT: + case MSM_VIDC_BUF_BIN: + case MSM_VIDC_BUF_COMV: + case MSM_VIDC_BUF_NON_COMV: + case MSM_VIDC_BUF_LINE: + case MSM_VIDC_BUF_DPB: + hfi_port = HFI_PORT_BITSTREAM; + break; + case MSM_VIDC_BUF_ARP: + hfi_port = HFI_PORT_NONE; + break; + default: + i_vpr_e(inst, "%s: invalid buffer type %d\n", + __func__, buffer_type); + break; + } + } else { + i_vpr_e(inst, "%s: invalid domain %#x\n", + __func__, inst->domain); + } + + return hfi_port; +} + +u32 hfi_buf_type_from_driver(enum msm_vidc_domain_type domain, + enum msm_vidc_buffer_type buffer_type) +{ + switch (buffer_type) { + case MSM_VIDC_BUF_INPUT: + if (domain == MSM_VIDC_DECODER) + return HFI_BUFFER_BITSTREAM; + else + return HFI_BUFFER_RAW; + case MSM_VIDC_BUF_OUTPUT: + if (domain == MSM_VIDC_DECODER) + return HFI_BUFFER_RAW; + else + return HFI_BUFFER_BITSTREAM; + case MSM_VIDC_BUF_BIN: + return HFI_BUFFER_BIN; + case MSM_VIDC_BUF_ARP: + return HFI_BUFFER_ARP; + case MSM_VIDC_BUF_COMV: + return HFI_BUFFER_COMV; + case MSM_VIDC_BUF_NON_COMV: + return HFI_BUFFER_NON_COMV; + case MSM_VIDC_BUF_LINE: + return HFI_BUFFER_LINE; + case MSM_VIDC_BUF_DPB: + return HFI_BUFFER_DPB; + case MSM_VIDC_BUF_PERSIST: + return HFI_BUFFER_PERSIST; + case MSM_VIDC_BUF_VPSS: + return HFI_BUFFER_VPSS; + default: + d_vpr_e("invalid buffer type %d\n", + buffer_type); + return 0; + } +} + +u32 hfi_buf_type_to_driver(enum msm_vidc_domain_type domain, + enum hfi_buffer_type buffer_type, + enum hfi_packet_port_type port_type) +{ + switch (buffer_type) { + case HFI_BUFFER_BITSTREAM: + if (domain == MSM_VIDC_DECODER) + return MSM_VIDC_BUF_INPUT; + else + return MSM_VIDC_BUF_OUTPUT; + case HFI_BUFFER_RAW: + if (domain == MSM_VIDC_DECODER) + return MSM_VIDC_BUF_OUTPUT; + else + return MSM_VIDC_BUF_INPUT; + case HFI_BUFFER_BIN: + return MSM_VIDC_BUF_BIN; + case HFI_BUFFER_ARP: + return MSM_VIDC_BUF_ARP; + case 
HFI_BUFFER_COMV: + return MSM_VIDC_BUF_COMV; + case HFI_BUFFER_NON_COMV: + return MSM_VIDC_BUF_NON_COMV; + case HFI_BUFFER_LINE: + return MSM_VIDC_BUF_LINE; + case HFI_BUFFER_DPB: + return MSM_VIDC_BUF_DPB; + case HFI_BUFFER_PERSIST: + return MSM_VIDC_BUF_PERSIST; + case HFI_BUFFER_VPSS: + return MSM_VIDC_BUF_VPSS; + default: + d_vpr_e("invalid buffer type %d\n", + buffer_type); + return 0; + } +} + +u32 get_hfi_codec(struct msm_vidc_inst *inst) +{ + switch (inst->codec) { + case MSM_VIDC_H264: + if (is_encode_session(inst)) + return HFI_CODEC_ENCODE_AVC; + else + return HFI_CODEC_DECODE_AVC; + case MSM_VIDC_HEVC: + if (is_encode_session(inst)) + return HFI_CODEC_ENCODE_HEVC; + else + return HFI_CODEC_DECODE_HEVC; + case MSM_VIDC_VP9: + return HFI_CODEC_DECODE_VP9; + default: + i_vpr_e(inst, "invalid codec %d, domain %d\n", + inst->codec, inst->domain); + return 0; + } +} + +u32 get_hfi_colorformat(struct msm_vidc_inst *inst, + enum msm_vidc_colorformat_type colorformat) +{ + u32 hfi_colorformat = HFI_COLOR_FMT_NV12_UBWC; + + switch (colorformat) { + case MSM_VIDC_FMT_NV12: + hfi_colorformat = HFI_COLOR_FMT_NV12; + break; + case MSM_VIDC_FMT_NV12C: + hfi_colorformat = HFI_COLOR_FMT_NV12_UBWC; + break; + case MSM_VIDC_FMT_P010: + hfi_colorformat = HFI_COLOR_FMT_P010; + break; + case MSM_VIDC_FMT_TP10C: + hfi_colorformat = HFI_COLOR_FMT_TP10_UBWC; + break; + case MSM_VIDC_FMT_RGBA8888: + hfi_colorformat = HFI_COLOR_FMT_RGBA8888; + break; + case MSM_VIDC_FMT_RGBA8888C: + hfi_colorformat = HFI_COLOR_FMT_RGBA8888_UBWC; + break; + case MSM_VIDC_FMT_NV21: + hfi_colorformat = HFI_COLOR_FMT_NV21; + break; + default: + i_vpr_e(inst, "%s: invalid colorformat %d\n", + __func__, colorformat); + break; + } + + return hfi_colorformat; +} + +static u32 get_hfi_region_flag(enum msm_vidc_buffer_region region) +{ + switch (region) { + case MSM_VIDC_NON_SECURE: + return HFI_BUF_HOST_FLAGS_CB_NON_SECURE; + case MSM_VIDC_NON_SECURE_PIXEL: + return HFI_BUF_HOST_FLAGS_CB_NON_SECURE_PIXEL; 
+ case MSM_VIDC_SECURE_PIXEL: + return HFI_BUF_HOST_FLAGS_CB_SECURE_PIXEL; + case MSM_VIDC_SECURE_NONPIXEL: + return HFI_BUF_HOST_FLAGS_CB_SECURE_NON_PIXEL; + case MSM_VIDC_SECURE_BITSTREAM: + return HFI_BUF_HOST_FLAGS_CB_SECURE_BITSTREAM; + case MSM_VIDC_REGION_MAX: + case MSM_VIDC_REGION_NONE: + default: + return HFI_BUF_HOST_FLAG_NONE; + } +} + +int get_hfi_buffer(struct msm_vidc_inst *inst, + struct msm_vidc_buffer *buffer, struct hfi_buffer *buf) +{ + memset(buf, 0, sizeof(struct hfi_buffer)); + buf->type = hfi_buf_type_from_driver(inst->domain, buffer->type); + buf->index = buffer->index; + buf->base_address = buffer->device_addr; + buf->addr_offset = 0; + buf->buffer_size = buffer->buffer_size; + /* + * for decoder input buffers, firmware (BSE HW) needs 256 aligned + * buffer size otherwise it will truncate or ignore the data after 256 + * aligned size which may lead to error concealment + */ + if (is_decode_session(inst) && is_input_buffer(buffer->type)) + buf->buffer_size = ALIGN(buffer->buffer_size, 256); + buf->data_offset = buffer->data_offset; + buf->data_size = buffer->data_size; + if (buffer->attr & MSM_VIDC_ATTR_READ_ONLY) + buf->flags |= HFI_BUF_HOST_FLAG_READONLY; + if (buffer->attr & MSM_VIDC_ATTR_PENDING_RELEASE) + buf->flags |= HFI_BUF_HOST_FLAG_RELEASE; + buf->flags |= get_hfi_region_flag(buffer->region); + buf->timestamp = buffer->timestamp; + + return 0; +} + +int hfi_create_header(u8 *packet, u32 packet_size, u32 session_id, + u32 header_id) +{ + struct hfi_header *hdr = (struct hfi_header *)packet; + + if (!packet || packet_size < sizeof(struct hfi_header)) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + + memset(hdr, 0, sizeof(struct hfi_header)); + + hdr->size = sizeof(struct hfi_header); + hdr->session_id = session_id; + hdr->header_id = header_id; + hdr->num_packets = 0; + return 0; +} + +int hfi_create_packet(u8 *packet, u32 packet_size, u32 pkt_type, + u32 pkt_flags, u32 payload_type, u32 port, + u32 packet_id, 
void *payload, u32 payload_size) +{ + struct hfi_header *hdr; + struct hfi_packet *pkt; + u32 pkt_size; + + if (!packet) { + d_vpr_e("%s: invalid params\n", __func__); + return -EINVAL; + } + hdr = (struct hfi_header *)packet; + if (hdr->size < sizeof(struct hfi_header)) { + d_vpr_e("%s: invalid hdr size %d\n", __func__, hdr->size); + return -EINVAL; + } + pkt = (struct hfi_packet *)(packet + hdr->size); + pkt_size = sizeof(struct hfi_packet) + payload_size; + if (packet_size < hdr->size + pkt_size) { + d_vpr_e("%s: invalid packet_size %d, %d %d\n", + __func__, packet_size, hdr->size, pkt_size); + return -EINVAL; + } + memset(pkt, 0, pkt_size); + pkt->size = pkt_size; + pkt->type = pkt_type; + pkt->flags = pkt_flags; + pkt->payload_info = payload_type; + pkt->port = port; + pkt->packet_id = packet_id; + if (payload_size) + memcpy((u8 *)pkt + sizeof(struct hfi_packet), + payload, payload_size); + + hdr->num_packets++; + hdr->size += pkt->size; + return 0; +} + +int hfi_packet_sys_init(struct msm_vidc_core *core, + u8 *pkt, u32 pkt_size) +{ + int rc = 0; + u32 payload = 0; + + rc = hfi_create_header(pkt, pkt_size, + 0 /*session_id*/, + core->header_id++); + if (rc) + goto err_sys_init; + + /* HFI_CMD_SYSTEM_INIT */ + payload = HFI_VIDEO_ARCH_LX; + d_vpr_h("%s: arch %d\n", __func__, payload); + core->sys_init_id = core->packet_id++; + rc = hfi_create_packet(pkt, pkt_size, + HFI_CMD_INIT, + (HFI_HOST_FLAGS_RESPONSE_REQUIRED | + HFI_HOST_FLAGS_INTR_REQUIRED | + HFI_HOST_FLAGS_NON_DISCARDABLE), + HFI_PAYLOAD_U32, + HFI_PORT_NONE, + core->sys_init_id, + &payload, + sizeof(u32)); + if (rc) + goto err_sys_init; + + /* HFI_PROP_UBWC_MAX_CHANNELS */ + payload = core->platform->data.ubwc_config->max_channels; + d_vpr_h("%s: ubwc max channels %d\n", __func__, payload); + rc = hfi_create_packet(pkt, pkt_size, + HFI_PROP_UBWC_MAX_CHANNELS, + HFI_HOST_FLAGS_NONE, + HFI_PAYLOAD_U32, + HFI_PORT_NONE, + core->packet_id++, + &payload, + sizeof(u32)); + if (rc) + goto err_sys_init; + + 
/* HFI_PROP_UBWC_MAL_LENGTH */ + payload = core->platform->data.ubwc_config->mal_length; + d_vpr_h("%s: ubwc mal length %d\n", __func__, payload); + rc = hfi_create_packet(pkt, pkt_size, + HFI_PROP_UBWC_MAL_LENGTH, + HFI_HOST_FLAGS_NONE, + HFI_PAYLOAD_U32, + HFI_PORT_NONE, + core->packet_id++, + &payload, + sizeof(u32)); + if (rc) + goto err_sys_init; + + /* HFI_PROP_UBWC_HBB */ + payload = core->platform->data.ubwc_config->highest_bank_bit; + d_vpr_h("%s: ubwc hbb %d\n", __func__, payload); + rc = hfi_create_packet(pkt, pkt_size, + HFI_PROP_UBWC_HBB, + HFI_HOST_FLAGS_NONE, + HFI_PAYLOAD_U32, + HFI_PORT_NONE, + core->packet_id++, + &payload, + sizeof(u32)); + if (rc) + goto err_sys_init; + + /* HFI_PROP_UBWC_BANK_SWZL_LEVEL1 */ + payload = core->platform->data.ubwc_config->bank_swzl_level; + d_vpr_h("%s: ubwc swzl1 %d\n", __func__, payload); + rc = hfi_create_packet(pkt, pkt_size, + HFI_PROP_UBWC_BANK_SWZL_LEVEL1, + HFI_HOST_FLAGS_NONE, + HFI_PAYLOAD_U32, + HFI_PORT_NONE, + core->packet_id++, + &payload, + sizeof(u32)); + if (rc) + goto err_sys_init; + + /* HFI_PROP_UBWC_BANK_SWZL_LEVEL2 */ + payload = core->platform->data.ubwc_config->bank_swz2_level; + d_vpr_h("%s: ubwc swzl2 %d\n", __func__, payload); + rc = hfi_create_packet(pkt, pkt_size, + HFI_PROP_UBWC_BANK_SWZL_LEVEL2, + HFI_HOST_FLAGS_NONE, + HFI_PAYLOAD_U32, + HFI_PORT_NONE, + core->packet_id++, + &payload, + sizeof(u32)); + if (rc) + goto err_sys_init; + + /* HFI_PROP_UBWC_BANK_SWZL_LEVEL3 */ + payload = core->platform->data.ubwc_config->bank_swz3_level; + d_vpr_h("%s: ubwc swzl3 %d\n", __func__, payload); + rc = hfi_create_packet(pkt, pkt_size, + HFI_PROP_UBWC_BANK_SWZL_LEVEL3, + HFI_HOST_FLAGS_NONE, + HFI_PAYLOAD_U32, + HFI_PORT_NONE, + core->packet_id++, + &payload, + sizeof(u32)); + if (rc) + goto err_sys_init; + + /* HFI_PROP_UBWC_BANK_SPREADING */ + payload = core->platform->data.ubwc_config->bank_spreading; + d_vpr_h("%s: ubwc bank spreading %d\n", __func__, payload); + rc = hfi_create_packet(pkt, 
pkt_size, + HFI_PROP_UBWC_BANK_SPREADING, + HFI_HOST_FLAGS_NONE, + HFI_PAYLOAD_U32, + HFI_PORT_NONE, + core->packet_id++, + &payload, + sizeof(u32)); + if (rc) + goto err_sys_init; + + d_vpr_h("System init packet created\n"); + return rc; + +err_sys_init: + d_vpr_e("%s: create packet failed\n", __func__); + return rc; +} + +int hfi_packet_image_version(struct msm_vidc_core *core, + u8 *pkt, u32 pkt_size) +{ + int rc = 0; + + rc = hfi_create_header(pkt, pkt_size, + 0 /*session_id*/, + core->header_id++); + if (rc) + goto err_img_version; + + /* HFI_PROP_IMAGE_VERSION */ + rc = hfi_create_packet(pkt, pkt_size, + HFI_PROP_IMAGE_VERSION, + (HFI_HOST_FLAGS_RESPONSE_REQUIRED | + HFI_HOST_FLAGS_INTR_REQUIRED | + HFI_HOST_FLAGS_GET_PROPERTY), + HFI_PAYLOAD_NONE, + HFI_PORT_NONE, + core->packet_id++, + NULL, 0); + if (rc) + goto err_img_version; + + d_vpr_h("Image version packet created\n"); + return rc; + +err_img_version: + d_vpr_e("%s: create packet failed\n", __func__); + return rc; +} + +int hfi_packet_sys_pc_prep(struct msm_vidc_core *core, + u8 *pkt, u32 pkt_size) +{ + int rc = 0; + + rc = hfi_create_header(pkt, pkt_size, + 0 /*session_id*/, + core->header_id++); + if (rc) + goto err_sys_pc; + + /* HFI_CMD_POWER_COLLAPSE */ + rc = hfi_create_packet(pkt, pkt_size, + HFI_CMD_POWER_COLLAPSE, + HFI_HOST_FLAGS_NONE, + HFI_PAYLOAD_NONE, + HFI_PORT_NONE, + core->packet_id++, + NULL, 0); + if (rc) + goto err_sys_pc; + + d_vpr_h("Power collapse packet created\n"); + return rc; + +err_sys_pc: + d_vpr_e("%s: create packet failed\n", __func__); + return rc; +} + +int hfi_packet_sys_debug_config(struct msm_vidc_core *core, + u8 *pkt, u32 pkt_size, u32 debug_config) +{ + int rc = 0; + u32 payload = 0; + + rc = hfi_create_header(pkt, pkt_size, + 0 /*session_id*/, + core->header_id++); + if (rc) + goto err_debug; + + /* HFI_PROP_DEBUG_CONFIG */ + payload = 0; /*TODO:Change later*/ + rc = hfi_create_packet(pkt, pkt_size, + HFI_PROP_DEBUG_CONFIG, + HFI_HOST_FLAGS_NONE, + 
HFI_PAYLOAD_U32_ENUM, + HFI_PORT_NONE, + core->packet_id++, + &payload, + sizeof(u32)); + if (rc) + goto err_debug; + + /* HFI_PROP_DEBUG_LOG_LEVEL */ + payload = debug_config; /*TODO:Change later*/ + rc = hfi_create_packet(pkt, pkt_size, + HFI_PROP_DEBUG_LOG_LEVEL, + HFI_HOST_FLAGS_NONE, + HFI_PAYLOAD_U32_ENUM, + HFI_PORT_NONE, + core->packet_id++, + &payload, + sizeof(u32)); + if (rc) + goto err_debug; + +err_debug: + if (rc) + d_vpr_e("%s: create packet failed\n", __func__); + + return rc; +} + +int hfi_packet_session_command(struct msm_vidc_inst *inst, u32 pkt_type, + u32 flags, u32 port, u32 session_id, + u32 payload_type, void *payload, u32 payload_size) +{ + int rc = 0; + struct msm_vidc_core *core; + + core = inst->core; + + rc = hfi_create_header(inst->packet, inst->packet_size, + session_id, core->header_id++); + if (rc) + goto err_cmd; + + rc = hfi_create_packet(inst->packet, + inst->packet_size, + pkt_type, + flags, + payload_type, + port, + core->packet_id++, + payload, + payload_size); + if (rc) + goto err_cmd; + + i_vpr_h(inst, "Command packet 0x%x created\n", pkt_type); + return rc; + +err_cmd: + i_vpr_e(inst, "%s: create packet failed\n", __func__); + return rc; +} + +int hfi_packet_sys_intraframe_powercollapse(struct msm_vidc_core *core, + u8 *pkt, u32 pkt_size, u32 enable) +{ + int rc = 0; + u32 payload = 0; + + rc = hfi_create_header(pkt, pkt_size, + 0 /*session_id*/, + core->header_id++); + if (rc) + goto err; + + /* HFI_PROP_INTRA_FRAME_POWER_COLLAPSE */ + payload = enable; + d_vpr_h("%s: intra frame power collapse %d\n", __func__, payload); + rc = hfi_create_packet(pkt, pkt_size, + HFI_PROP_INTRA_FRAME_POWER_COLLAPSE, + HFI_HOST_FLAGS_NONE, + HFI_PAYLOAD_U32, + HFI_PORT_NONE, + core->packet_id++, + &payload, + sizeof(u32)); + if (rc) + goto err; + + d_vpr_h("IFPC packet created\n"); + return rc; + +err: + d_vpr_e("%s: create packet failed\n", __func__); + return rc; +} From patchwork Fri Jul 28 13:23:33 2023 Content-Type: text/plain; 
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vikash Garodia X-Patchwork-Id: 13331958
From: Vikash Garodia
Subject: [PATCH 22/33] iris: vidc: hfi: defines HFI properties and enums
Date: Fri, 28 Jul 2023 18:53:33 +0530
Message-ID: <1690550624-14642-23-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org

Defines hfi properties supported by firmware and enums like codec, colorformat, profile, level, rate control
etc. Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../platform/qcom/iris/vidc/inc/hfi_property.h | 666 +++++++++++++++++++++ 1 file changed, 666 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/hfi_property.h diff --git a/drivers/media/platform/qcom/iris/vidc/inc/hfi_property.h b/drivers/media/platform/qcom/iris/vidc/inc/hfi_property.h new file mode 100644 index 0000000..3fb6601 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/hfi_property.h @@ -0,0 +1,666 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef __H_HFI_PROPERTY_H__ +#define __H_HFI_PROPERTY_H__ + +//todo: DP: remove below header +#include + +#define HFI_PROP_BEGIN 0x03000000 +#define HFI_PROP_IMAGE_VERSION 0x03000001 +#define HFI_PROP_INTRA_FRAME_POWER_COLLAPSE 0x03000002 +#define HFI_PROP_UBWC_MAX_CHANNELS 0x03000003 +#define HFI_PROP_UBWC_MAL_LENGTH 0x03000004 +#define HFI_PROP_UBWC_HBB 0x03000005 +#define HFI_PROP_UBWC_BANK_SWZL_LEVEL1 0x03000006 +#define HFI_PROP_UBWC_BANK_SWZL_LEVEL2 0x03000007 +#define HFI_PROP_UBWC_BANK_SWZL_LEVEL3 0x03000008 +#define HFI_PROP_UBWC_BANK_SPREADING 0x03000009 + +enum hfi_debug_config { + HFI_DEBUG_CONFIG_DEFAULT = 0x00000000, + HFI_DEBUG_CONFIG_CLRDBGQ = 0x00000001, + HFI_DEBUG_CONFIG_WFI = 0x00000002, + HFI_DEBUG_CONFIG_ARM9WD = 0x00000004, +}; + +#define HFI_PROP_DEBUG_CONFIG 0x0300000a + +enum hfi_debug_log_level { + HFI_DEBUG_LOG_NONE = 0x00000000, + HFI_DEBUG_LOG_ERROR = 0x00000001, + HFI_DEBUG_LOG_FATAL = 0x00000002, + HFI_DEBUG_LOG_PERF = 0x00000004, + HFI_DEBUG_LOG_HIGH = 0x00000008, + HFI_DEBUG_LOG_MEDIUM = 0x00000010, + HFI_DEBUG_LOG_LOW = 0x00000020, +}; + +struct hfi_debug_header { + u32 size; + u32 debug_level; + u32 reserved[2]; +}; + +#define HFI_PROP_DEBUG_LOG_LEVEL 0x0300000b + +#define HFI_PROP_FENCE_CLIENT_DATA 0x0300000d + 
+enum hfi_codec_type { + HFI_CODEC_DECODE_AVC = 1, + HFI_CODEC_ENCODE_AVC = 2, + HFI_CODEC_DECODE_HEVC = 3, + HFI_CODEC_ENCODE_HEVC = 4, + HFI_CODEC_DECODE_VP9 = 5, + HFI_CODEC_DECODE_MPEG2 = 6, + HFI_CODEC_DECODE_AV1 = 7, +}; + +#define HFI_PROP_CODEC 0x03000100 + +enum hfi_color_format { + HFI_COLOR_FMT_OPAQUE = 0, + HFI_COLOR_FMT_NV12 = 1, + HFI_COLOR_FMT_NV12_UBWC = 2, + HFI_COLOR_FMT_P010 = 3, + HFI_COLOR_FMT_TP10_UBWC = 4, + HFI_COLOR_FMT_RGBA8888 = 5, + HFI_COLOR_FMT_RGBA8888_UBWC = 6, + HFI_COLOR_FMT_NV21 = 7, +}; + +#define HFI_PROP_COLOR_FORMAT 0x03000101 + +#define HFI_PROP_SECURE 0x03000102 + +#define HFI_BITMASK_BITSTREAM_WIDTH 0xffff0000 +#define HFI_BITMASK_BITSTREAM_HEIGHT 0x0000ffff +#define HFI_PROP_BITSTREAM_RESOLUTION 0x03000103 + +#define HFI_BITMASK_LINEAR_STRIDE 0xffff0000 +#define HFI_BITMASK_LINEAR_SCANLINE 0x0000ffff +#define HFI_PROP_LINEAR_STRIDE_SCANLINE 0x03000104 + +#define HFI_BITMASK_CROP_RIGHT_OFFSET 0xffff0000 +#define HFI_BITMASK_CROP_BOTTOM_OFFSET 0x0000ffff +#define HFI_BITMASK_CROP_LEFT_OFFSET 0xffff0000 +#define HFI_BITMASK_CROP_TOP_OFFSET 0x0000ffff +#define HFI_PROP_CROP_OFFSETS 0x03000105 + +#define HFI_PROP_SESSION_PRIORITY 0x03000106 + +enum hfi_avc_profile_type { + HFI_AVC_PROFILE_BASELINE = 0, + HFI_AVC_PROFILE_CONSTRAINED_BASELINE = 1, + HFI_AVC_PROFILE_MAIN = 2, + HFI_AVC_PROFILE_HIGH = 4, + HFI_AVC_PROFILE_CONSTRAINED_HIGH = 17 +}; + +enum hfi_hevc_profile_type { + HFI_H265_PROFILE_MAIN = 0, + HFI_H265_PROFILE_MAIN_STILL_PICTURE = 1, + HFI_H265_PROFILE_MAIN_10 = 2, + HFI_H265_PROFILE_MAIN_10_STILL_PICTURE = 3, +}; + +enum hfi_vp9_profile_type { + HFI_VP9_PROFILE_0 = 0, + HFI_VP9_PROFILE_1 = 1, + HFI_VP9_PROFILE_2 = 2, + HFI_VP9_PROFILE_3 = 3, +}; + +enum hfi_mpeg2_profile_type { + HFI_MP2_PROFILE_SIMPLE = 0, + HFI_MP2_PROFILE_MAIN = 1, +}; + +enum hfi_av1_profile_type { + HFI_AV1_PROFILE_MAIN = 0, + HFI_AV1_PROFILE_HIGH = 1, + HFI_AV1_PROFILE_PROF = 2, +}; + +#define HFI_PROP_PROFILE 0x03000107 + +enum 
hfi_avc_level_type { + HFI_AVC_LEVEL_1_0 = 0, + HFI_AVC_LEVEL_1B = 1, + HFI_AVC_LEVEL_1_1 = 2, + HFI_AVC_LEVEL_1_2 = 3, + HFI_AVC_LEVEL_1_3 = 4, + HFI_AVC_LEVEL_2_0 = 5, + HFI_AVC_LEVEL_2_1 = 6, + HFI_AVC_LEVEL_2_2 = 7, + HFI_AVC_LEVEL_3_0 = 8, + HFI_AVC_LEVEL_3_1 = 9, + HFI_AVC_LEVEL_3_2 = 10, + HFI_AVC_LEVEL_4_0 = 11, + HFI_AVC_LEVEL_4_1 = 12, + HFI_AVC_LEVEL_4_2 = 13, + HFI_AVC_LEVEL_5_0 = 14, + HFI_AVC_LEVEL_5_1 = 15, + HFI_AVC_LEVEL_5_2 = 16, + HFI_AVC_LEVEL_6_0 = 17, + HFI_AVC_LEVEL_6_1 = 18, + HFI_AVC_LEVEL_6_2 = 19, +}; + +enum hfi_hevc_level_type { + HFI_H265_LEVEL_1 = 0, + HFI_H265_LEVEL_2 = 1, + HFI_H265_LEVEL_2_1 = 2, + HFI_H265_LEVEL_3 = 3, + HFI_H265_LEVEL_3_1 = 4, + HFI_H265_LEVEL_4 = 5, + HFI_H265_LEVEL_4_1 = 6, + HFI_H265_LEVEL_5 = 7, + HFI_H265_LEVEL_5_1 = 8, + HFI_H265_LEVEL_5_2 = 9, + HFI_H265_LEVEL_6 = 10, + HFI_H265_LEVEL_6_1 = 11, + HFI_H265_LEVEL_6_2 = 12, +}; + +enum hfi_vp9_level_type { + HFI_VP9_LEVEL_1_0 = 0, + HFI_VP9_LEVEL_1_1 = 1, + HFI_VP9_LEVEL_2_0 = 2, + HFI_VP9_LEVEL_2_1 = 3, + HFI_VP9_LEVEL_3_0 = 4, + HFI_VP9_LEVEL_3_1 = 5, + HFI_VP9_LEVEL_4_0 = 6, + HFI_VP9_LEVEL_4_1 = 7, + HFI_VP9_LEVEL_5_0 = 8, + HFI_VP9_LEVEL_5_1 = 9, + HFI_VP9_LEVEL_6_0 = 10, + HFI_VP9_LEVEL_6_1 = 11, +}; + +enum hfi_mpeg2_level_type { + HFI_MP2_LEVEL_LOW = 0, + HFI_MP2_LEVEL_MAIN = 1, + HFI_MP2_LEVEL_HIGH_1440 = 2, + HFI_MP2_LEVEL_HIGH = 3, +}; + +enum hfi_av1_level_type { + HFI_AV1_LEVEL_2_0 = 0, + HFI_AV1_LEVEL_2_1 = 1, + HFI_AV1_LEVEL_2_2 = 2, + HFI_AV1_LEVEL_2_3 = 3, + HFI_AV1_LEVEL_3_0 = 4, + HFI_AV1_LEVEL_3_1 = 5, + HFI_AV1_LEVEL_3_2 = 6, + HFI_AV1_LEVEL_3_3 = 7, + HFI_AV1_LEVEL_4_0 = 8, + HFI_AV1_LEVEL_4_1 = 9, + HFI_AV1_LEVEL_4_2 = 10, + HFI_AV1_LEVEL_4_3 = 11, + HFI_AV1_LEVEL_5_0 = 12, + HFI_AV1_LEVEL_5_1 = 13, + HFI_AV1_LEVEL_5_2 = 14, + HFI_AV1_LEVEL_5_3 = 15, + HFI_AV1_LEVEL_6_0 = 16, + HFI_AV1_LEVEL_6_1 = 17, + HFI_AV1_LEVEL_6_2 = 18, + HFI_AV1_LEVEL_6_3 = 19, + HFI_AV1_LEVEL_7_0 = 20, + HFI_AV1_LEVEL_7_1 = 21, + HFI_AV1_LEVEL_7_2 = 22, + 
HFI_AV1_LEVEL_7_3 = 23, + HFI_AV1_LEVEL_MAX = 31, +}; + +enum hfi_codec_level_type { + HFI_LEVEL_NONE = 0xFFFFFFFF, +}; + +#define HFI_PROP_LEVEL 0x03000108 + +enum hfi_hevc_tier_type { + HFI_H265_TIER_MAIN = 0, + HFI_H265_TIER_HIGH = 1, +}; + +enum hfi_av1_tier_type { + HFI_AV1_TIER_MAIN = 0, + HFI_AV1_TIER_HIGH = 1, +}; + +#define HFI_PROP_TIER 0x03000109 + +#define HFI_PROP_STAGE 0x0300010a + +#define HFI_PROP_PIPE 0x0300010b + +#define HFI_PROP_FRAME_RATE 0x0300010c + +#define HFI_BITMASK_CONCEAL_LUMA 0x000003ff +#define HFI_BITMASK_CONCEAL_CB 0x000ffC00 +#define HFI_BITMASK_CONCEAL_CR 0x3ff00000 +#define HFI_PROP_CONCEAL_COLOR_8BIT 0x0300010d + +#define HFI_BITMASK_CONCEAL_LUMA 0x000003ff +#define HFI_BITMASK_CONCEAL_CB 0x000ffC00 +#define HFI_BITMASK_CONCEAL_CR 0x3ff00000 +#define HFI_PROP_CONCEAL_COLOR_10BIT 0x0300010e + +#define HFI_BITMASK_LUMA_BIT_DEPTH 0xffff0000 +#define HFI_BITMASK_CHROMA_BIT_DEPTH 0x0000ffff +#define HFI_PROP_LUMA_CHROMA_BIT_DEPTH 0x0300010f + +#define HFI_BITMASK_FRAME_MBS_ONLY_FLAG 0x00000001 +#define HFI_BITMASK_MB_ADAPTIVE_FRAME_FIELD_FLAG 0x00000002 +#define HFI_PROP_CODED_FRAMES 0x03000120 + +#define HFI_PROP_CABAC_SESSION 0x03000121 + +#define HFI_PROP_8X8_TRANSFORM 0x03000122 + +#define HFI_PROP_BUFFER_HOST_MAX_COUNT 0x03000123 + +#define HFI_PROP_BUFFER_FW_MIN_OUTPUT_COUNT 0x03000124 + +#define HFI_PROP_BUFFER_MAXDPB_COUNT 0x03000125 + +#define HFI_PROP_BUFFER_MAX_NUM_REFERENCE 0x03000126 + +#define HFI_PROP_MAX_NUM_REORDER_FRAMES 0x03000127 + +#define HFI_PROP_PIC_ORDER_CNT_TYPE 0x03000128 + +enum hfi_deblock_mode { + HFI_DEBLOCK_ALL_BOUNDARY = 0x0, + HFI_DEBLOCK_DISABLE = 0x1, + HFI_DEBLOCK_DISABLE_AT_SLICE_BOUNDARY = 0x2, +}; + +#define HFI_PROP_DEBLOCKING_MODE 0x03000129 + +enum hfi_rate_control { + HFI_RC_VBR_CFR = 0x00000000, + HFI_RC_CBR_CFR = 0x00000001, + HFI_RC_CQ = 0x00000002, + HFI_RC_OFF = 0x00000003, + HFI_RC_CBR_VFR = 0x00000004, + HFI_RC_LOSSLESS = 0x00000005, +}; + +#define HFI_PROP_RATE_CONTROL 0x0300012a + 
+#define HFI_PROP_TIME_DELTA_BASED_RATE_CONTROL 0x0300012b + +#define HFI_PROP_CONTENT_ADAPTIVE_CODING 0x0300012c + +#define HFI_PROP_BITRATE_BOOST 0x0300012d + +#define HFI_BITMASK_QP_I 0x000000ff +#define HFI_BITMASK_QP_P 0x0000ff00 +#define HFI_BITMASK_QP_B 0x00ff0000 +#define HFI_BITMASK_QP_ENABLE 0x0f000000 +#define HFI_BITMASK_QP_LAYERS 0xf0000000 +#define HFI_PROP_QP_PACKED 0x0300012e + +#define HFI_PROP_MIN_QP_PACKED 0x0300012f + +#define HFI_PROP_MAX_QP_PACKED 0x03000130 + +#define HFI_PROP_IR_RANDOM_PERIOD 0x03000131 + +#define HFI_PROP_MULTI_SLICE_MB_COUNT 0x03000132 + +#define HFI_PROP_MULTI_SLICE_BYTES_COUNT 0x03000133 + +#define HFI_PROP_LTR_COUNT 0x03000134 + +#define HFI_PROP_LTR_MARK 0x03000135 + +#define HFI_PROP_LTR_USE 0x03000136 + +#define HFI_PROP_LTR_MARK_USE_DETAILS 0x03000137 + +enum hfi_layer_encoding_type { + HFI_HIER_P_SLIDING_WINDOW = 0x1, + HFI_HIER_P_HYBRID_LTR = 0x2, + HFI_HIER_B = 0x3, +}; + +#define HFI_PROP_LAYER_ENCODING_TYPE 0x03000138 + +#define HFI_PROP_LAYER_COUNT 0x03000139 + +enum hfi_chromaqp_offset_mode { + HFI_ADAPTIVE_CHROMAQP_OFFSET = 0x0, + HFI_FIXED_CHROMAQP_OFFSET = 0x1, +}; + +#define HFI_BITMASK_CHROMA_CB_OFFSET 0x0000ffff +#define HFI_BITMASK_CHROMA_CR_OFFSET 0xffff0000 +#define HFI_PROP_CHROMA_QP_OFFSET 0x0300013a + +#define HFI_PROP_TOTAL_BITRATE 0x0300013b + +#define HFI_PROP_BITRATE_LAYER1 0x0300013c + +#define HFI_PROP_BITRATE_LAYER2 0x0300013d + +#define HFI_PROP_BITRATE_LAYER3 0x0300013e + +#define HFI_PROP_BITRATE_LAYER4 0x0300013f + +#define HFI_PROP_BITRATE_LAYER5 0x03000140 + +#define HFI_PROP_BITRATE_LAYER6 0x03000141 + +#define HFI_PROP_BASELAYER_PRIORITYID 0x03000142 + +#define HFI_PROP_CONSTANT_QUALITY 0x03000143 + +#define HFI_PROP_HEIC_GRID_ENABLE 0x03000144 + +enum hfi_syncframe_request_mode { + HFI_SYNC_FRAME_REQUEST_WITHOUT_SEQ_HDR = 0x00000001, + HFI_SYNC_FRAME_REQUEST_WITH_PREFIX_SEQ_HDR = 0x00000002, +}; + +#define HFI_PROP_REQUEST_SYNC_FRAME 0x03000145 + +#define HFI_PROP_MAX_GOP_FRAMES 
0x03000146 + +#define HFI_PROP_MAX_B_FRAMES 0x03000147 + +enum hfi_quality_mode { + HFI_MODE_MAX_QUALITY = 0x1, + HFI_MODE_POWER_SAVE = 0x2, +}; + +#define HFI_PROP_QUALITY_MODE 0x03000148 + +enum hfi_seq_header_mode { + HFI_SEQ_HEADER_SEPERATE_FRAME = 0x00000001, + HFI_SEQ_HEADER_JOINED_WITH_1ST_FRAME = 0x00000002, + HFI_SEQ_HEADER_PREFIX_WITH_SYNC_FRAME = 0x00000004, + HFI_SEQ_HEADER_METADATA = 0x00000008, +}; + +#define HFI_PROP_SEQ_HEADER_MODE 0x03000149 + +#define HFI_PROP_METADATA_SEQ_HEADER_NAL 0x0300014a + +enum hfi_rotation { + HFI_ROTATION_NONE = 0x00000000, + HFI_ROTATION_90 = 0x00000001, + HFI_ROTATION_180 = 0x00000002, + HFI_ROTATION_270 = 0x00000003, +}; + +#define HFI_PROP_ROTATION 0x0300014b + +enum hfi_flip { + HFI_DISABLE_FLIP = 0x00000000, + HFI_HORIZONTAL_FLIP = 0x00000001, + HFI_VERTICAL_FLIP = 0x00000002, +}; + +#define HFI_PROP_FLIP 0x0300014c + +#define HFI_PROP_SCALAR 0x0300014d + +enum hfi_blur_types { + HFI_BLUR_NONE = 0x00000000, + HFI_BLUR_EXTERNAL = 0x00000001, + HFI_BLUR_ADAPTIVE = 0x00000002, +}; + +#define HFI_PROP_BLUR_TYPES 0x0300014e + +#define HFI_BITMASK_BLUR_WIDTH 0xffff0000 +#define HFI_BITMASK_BLUR_HEIGHT 0x0000ffff +#define HFI_PROP_BLUR_RESOLUTION 0x0300014f + +#define HFI_BITMASK_SPS_ID 0x000000ff +#define HFI_BITMASK_PPS_ID 0x0000ff00 +#define HFI_BITMASK_VPS_ID 0x00ff0000 +#define HFI_PROP_SEQUENCE_HEADER_IDS 0x03000150 + +#define HFI_PROP_AUD 0x03000151 + +#define HFI_PROP_DPB_LUMA_CHROMA_MISR 0x03000153 + +#define HFI_PROP_OPB_LUMA_CHROMA_MISR 0x03000154 + +#define HFI_BITMASK_QP_I 0x000000ff +#define HFI_BITMASK_QP_P 0x0000ff00 +#define HFI_BITMASK_QP_B 0x00ff0000 +#define HFI_BITMASK_QP_ENABLE 0x0f000000 +#define HFI_BITMASK_QP_LAYERS 0xf0000000 +#define HFI_PROP_SIGNAL_COLOR_INFO 0x03000155 + +enum hfi_interlace_info { + HFI_INTERLACE_INFO_NONE = 0x00000000, + HFI_FRAME_PROGRESSIVE = 0x00000001, + HFI_FRAME_MBAFF = 0x00000002, + HFI_FRAME_INTERLEAVE_TOPFIELD_FIRST = 0x00000004, + 
HFI_FRAME_INTERLEAVE_BOTTOMFIELD_FIRST = 0x00000008, + HFI_FRAME_INTERLACE_TOPFIELD_FIRST = 0x00000010, + HFI_FRAME_INTERLACE_BOTTOMFIELD_FIRST = 0x00000020, +}; + +#define HFI_PROP_INTERLACE_INFO 0x03000156 + +#define HFI_PROP_CSC 0x03000157 + +#define HFI_PROP_CSC_MATRIX 0x03000158 + +#define HFI_PROP_CSC_BIAS 0x03000159 + +#define HFI_PROP_CSC_LIMIT 0x0300015a + +#define HFI_PROP_DECODE_ORDER_OUTPUT 0x0300015b + +#define HFI_PROP_TIMESTAMP 0x0300015c + +#define HFI_PROP_FRAMERATE_FROM_BITSTREAM 0x0300015d + +#define HFI_PROP_SEI_RECOVERY_POINT 0x0300015e + +#define HFI_PROP_CONEALED_MB_COUNT 0x0300015f + +#define HFI_BITMASK_SAR_WIDTH 0xffff0000 +#define HFI_BITMASK_SAR_HEIGHT 0x0000ffff +#define HFI_PROP_SAR_RESOLUTION 0x03000160 + +#define HFI_PROP_HISTOGRAM_INFO 0x03000161 + +enum hfi_picture_type { + HFI_PICTURE_IDR = 0x00000001, + HFI_PICTURE_P = 0x00000002, + HFI_PICTURE_B = 0x00000004, + HFI_PICTURE_I = 0x00000008, + HFI_PICTURE_CRA = 0x00000010, + HFI_PICTURE_BLA = 0x00000020, + HFI_PICTURE_NOSHOW = 0x00000040, +}; + +#define HFI_PROP_PICTURE_TYPE 0x03000162 + +#define HFI_PROP_SEI_MASTERING_DISPLAY_COLOUR 0x03000163 + +#define HFI_PROP_SEI_CONTENT_LIGHT_LEVEL 0x03000164 + +#define HFI_PROP_SEI_HDR10PLUS_USERDATA 0x03000165 + +#define HFI_PROP_SEI_STREAM_USERDATA 0x03000166 + +#define HFI_PROP_EVA_STAT_INFO 0x03000167 + +#define HFI_PROP_DEC_DEFAULT_HEADER 0x03000168 + +#define HFI_PROP_DEC_START_FROM_RAP_FRAME 0x03000169 + +#define HFI_PROP_NO_OUTPUT 0x0300016a + +#define HFI_PROP_BUFFER_TAG 0x0300016b + +#define HFI_PROP_BUFFER_MARK 0x0300016c + +#define HFI_PROP_SUBFRAME_OUTPUT 0x0300016d + +#define HFI_PROP_ENC_QP_METADATA 0x0300016e + +#define HFI_PROP_DEC_QP_METADATA 0x0300016f + +#define HFI_PROP_SEI_FRAME_PACKING_ARRANGEMENT 0x03000170 + +#define HFI_PROP_SEI_PAN_SCAN_RECT 0x03000171 + +#define HFI_PROP_THUMBNAIL_MODE 0x03000172 + +#define HFI_PROP_ROI_INFO 0x03000173 + +#define HFI_PROP_WORST_COMPRESSION_RATIO 0x03000174 + +#define 
HFI_PROP_WORST_COMPLEXITY_FACTOR 0x03000175 + +#define HFI_PROP_VBV_DELAY 0x03000176 + +#define HFI_PROP_SEQ_CHANGE_AT_SYNC_FRAME 0x03000177 + +#define HFI_BITMASK_RAW_WIDTH 0xffff0000 +#define HFI_BITMASK_RAW_HEIGHT 0x0000ffff +#define HFI_PROP_RAW_RESOLUTION 0x03000178 + +#define HFI_PROP_DPB_TAG_LIST 0x03000179 + +#define HFI_PROP_DPB_LIST 0x0300017A + +enum hfi_nal_length_field_type { + HFI_NAL_LENGTH_STARTCODES = 0, + HFI_NAL_LENGTH_SIZE_4 = 4, +}; + +#define HFI_PROP_NAL_LENGTH_FIELD 0x0300017B + +#define HFI_PROP_TOTAL_PEAK_BITRATE 0x0300017C + +#define HFI_PROP_MAINTAIN_MIN_QUALITY 0x0300017D + +#define HFI_PROP_IR_CYCLIC_PERIOD 0x0300017E + +#define HFI_PROP_ENABLE_SLICE_DELIVERY 0x0300017F + +#define HFI_PROP_AV1_FILM_GRAIN_PRESENT 0x03000180 + +#define HFI_PROP_AV1_SUPER_BLOCK_ENABLED 0x03000181 + +#define HFI_PROP_AV1_OP_POINT 0x03000182 + +#define HFI_PROP_SUBFRAME_INPUT 0x03000183 + +#define HFI_PROP_OPB_ENABLE 0x03000184 + +#define HFI_PROP_AV1_TILE_ROWS_COLUMNS 0x03000187 + +#define HFI_PROP_AV1_DRAP_CONFIG 0x03000189 + +enum hfi_saliency_type { + HFI_SALIENCY_NONE, + HFI_SALIENCY_TYPE0, +}; + +#define HFI_PROP_ROI_AS_SALIENCY_INFO 0x0300018A + +#define HFI_PROP_FENCE 0x0300018B + +#define HFI_PROP_REQUEST_PREPROCESS 0x0300018E + +#define HFI_PROP_UBWC_STRIDE_SCANLINE 0x03000190 + +#define HFI_PROP_TRANSCODING_STAT_INFO 0x03000191 + +#define HFI_PROP_DOLBY_RPU_METADATA 0x03000192 + +#define HFI_PROP_COMV_BUFFER_COUNT 0x03000193 + +#define HFI_PROP_DISABLE_VUI_TIMING_INFO 0x03000194 + +#define HFI_PROP_SLICE_DECODE 0x03000196 + +#define HFI_PROP_AV1_UNIFORM_TILE_SPACING 0x03000197 + +#define HFI_PROP_ENC_RING_BIN_BUF 0x0300019C + +/* u32 */ +enum hfi_fence_type { + HFI_SW_FENCE = 0x00000001, + HFI_SYNX_V2_FENCE = 0x00000002, +}; + +#define HFI_PROP_FENCE_TYPE 0x0300019D + +enum hfi_fence_direction_type { + HFI_FENCE_TX_ENABLE = 0x00000001, + HFI_FENCE_RX_ENABLE = 0x00000002, +}; + +#define HFI_PROP_FENCE_DIRECTION 0x0300019E + +#define 
HFI_PROP_FENCE_ERROR_DATA_CORRUPT 0x0300019F + +#define HFI_PROP_END 0x03FFFFFF + +#define HFI_SESSION_ERROR_BEGIN 0x04000000 + +#define HFI_ERROR_UNKNOWN_SESSION 0x04000001 + +#define HFI_ERROR_MAX_SESSIONS 0x04000002 + +#define HFI_ERROR_FATAL 0x04000003 + +#define HFI_ERROR_INVALID_STATE 0x04000004 + +#define HFI_ERROR_INSUFFICIENT_RESOURCES 0x04000005 + +#define HFI_ERROR_BUFFER_NOT_SET 0x04000006 + +#define HFI_ERROR_DRAP_CONFIG_EXCEED 0x04000007 + +#define HFI_SESSION_ERROR_END 0x04FFFFFF + +#define HFI_SYSTEM_ERROR_BEGIN 0x05000000 + +#define HFI_SYS_ERROR_WD_TIMEOUT 0x05000001 + +#define HFI_SYS_ERROR_NOC 0x05000002 + +#define HFI_SYS_ERROR_FATAL 0x05000003 + +#define HFI_SYSTEM_ERROR_END 0x05FFFFFF + +#define HFI_INFORMATION_BEGIN 0x06000000 + +#define HFI_INFO_UNSUPPORTED 0x06000001 + +#define HFI_INFO_DATA_CORRUPT 0x06000002 + +#define HFI_INFO_NEGATIVE_TIMESTAMP 0x06000003 + +#define HFI_INFO_BUFFER_OVERFLOW 0x06000004 + +#define HFI_INFO_VCODEC_RESET 0x06000005 + +#define HFI_INFO_HFI_FLAG_DRAIN_LAST 0x06000006 + +#define HFI_INFO_HFI_FLAG_PSC_LAST 0x06000007 + +#define HFI_INFO_FENCE_SIGNAL_ERROR 0x06000008 + +#define HFI_INFORMATION_END 0x06FFFFFF + +#endif //__H_HFI_PROPERTY_H__ From patchwork Fri Jul 28 13:23:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vikash Garodia X-Patchwork-Id: 13331955 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 57948C0015E for ; Fri, 28 Jul 2023 13:28:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236820AbjG1N2n (ORCPT ); Fri, 28 Jul 2023 09:28:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43402 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236953AbjG1N2L 
(ORCPT ); Fri, 28 Jul 2023 09:28:11 -0400
From: Vikash Garodia
Subject: [PATCH 23/33] iris: vidc: add PIL functionality for video firmware
Date: Fri, 28 Jul 2023 18:53:34 +0530
Message-ID: <1690550624-14642-24-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org

From: Dikshita Agarwal

Here is the implementation of loading/unloading fw in memory via mdt loader. This also implements fw suspend and resume.
Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../media/platform/qcom/iris/vidc/inc/firmware.h | 18 ++ .../media/platform/qcom/iris/vidc/src/firmware.c | 294 +++++++++++++++++++++ 2 files changed, 312 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/firmware.h create mode 100644 drivers/media/platform/qcom/iris/vidc/src/firmware.c diff --git a/drivers/media/platform/qcom/iris/vidc/inc/firmware.h b/drivers/media/platform/qcom/iris/vidc/inc/firmware.h new file mode 100644 index 0000000..bd52180 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/inc/firmware.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2022, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef _MSM_VIDC_FIRMWARE_H_ +#define _MSM_VIDC_FIRMWARE_H_ + +struct msm_vidc_core; + +int fw_load(struct msm_vidc_core *core); +int fw_unload(struct msm_vidc_core *core); +int fw_suspend(struct msm_vidc_core *core); +int fw_resume(struct msm_vidc_core *core); +void fw_coredump(struct msm_vidc_core *core); + +#endif diff --git a/drivers/media/platform/qcom/iris/vidc/src/firmware.c b/drivers/media/platform/qcom/iris/vidc/src/firmware.c new file mode 100644 index 0000000..f420096 --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/firmware.c @@ -0,0 +1,294 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "firmware.h" +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_platform.h" + +#define MAX_FIRMWARE_NAME_SIZE 128 + +struct tzbsp_memprot { + u32 cp_start; + u32 cp_size; + u32 cp_nonpixel_start; + u32 cp_nonpixel_size; +}; + +enum tzbsp_video_state { + TZBSP_VIDEO_STATE_SUSPEND = 0, + TZBSP_VIDEO_STATE_RESUME = 1, + TZBSP_VIDEO_STATE_RESTORE_THRESHOLD = 2, +}; + +static int protect_cp_mem(struct msm_vidc_core *core) +{ + struct tzbsp_memprot memprot; + int rc = 0; + struct context_bank_info *cb; + + memprot.cp_start = 0x0; + memprot.cp_size = 0x0; + memprot.cp_nonpixel_start = 0x0; + memprot.cp_nonpixel_size = 0x0; + + venus_hfi_for_each_context_bank(core, cb) { + if (cb->region == MSM_VIDC_NON_SECURE) { + memprot.cp_size = cb->addr_range.start; + + d_vpr_h("%s: memprot.cp_size: %#x\n", + __func__, memprot.cp_size); + } + + if (cb->region == MSM_VIDC_SECURE_NONPIXEL) { + memprot.cp_nonpixel_start = cb->addr_range.start; + memprot.cp_nonpixel_size = cb->addr_range.size; + + d_vpr_h("%s: cp_nonpixel_start: %#x size: %#x\n", + __func__, memprot.cp_nonpixel_start, + memprot.cp_nonpixel_size); + } + } + + rc = qcom_scm_mem_protect_video_var(memprot.cp_start, + memprot.cp_size, + memprot.cp_nonpixel_start, + memprot.cp_nonpixel_size); + if (rc) + d_vpr_e("Failed to protect memory(%d)\n", rc); + + return rc; +} + +static int __load_fw_to_memory(struct platform_device *pdev, + const char *fw_name) +{ + int rc = 0; + const struct firmware *firmware = NULL; + struct msm_vidc_core *core; + char firmware_name[MAX_FIRMWARE_NAME_SIZE] = { 0 }; + struct device_node *node = NULL; + struct resource res = { 0 }; + phys_addr_t phys = 0; + size_t res_size = 0; + ssize_t fw_size = 0; + void *virt = NULL; + int pas_id = 0; + + if (!fw_name || !(*fw_name) || !pdev) { + d_vpr_e("%s: Invalid inputs\n", __func__); + return -EINVAL; + } + if (strlen(fw_name) >= 
MAX_FIRMWARE_NAME_SIZE - 4) {
+		d_vpr_e("%s: Invalid fw name\n", __func__);
+		return -EINVAL;
+	}
+
+	core = dev_get_drvdata(&pdev->dev);
+	if (!core) {
+		d_vpr_e("%s: core not found in device %s",
+			__func__, dev_name(&pdev->dev));
+		return -EINVAL;
+	}
+	scnprintf(firmware_name, ARRAY_SIZE(firmware_name), "%s.mbn", fw_name);
+
+	pas_id = core->platform->data.pas_id;
+
+	node = of_parse_phandle(pdev->dev.of_node, "memory-region", 0);
+	if (!node) {
+		d_vpr_e("%s: failed to read \"memory-region\"\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	rc = of_address_to_resource(node, 0, &res);
+	if (rc) {
+		d_vpr_e("%s: failed to read \"memory-region\", error %d\n",
+			__func__, rc);
+		goto exit;
+	}
+	phys = res.start;
+	res_size = (size_t)resource_size(&res);
+
+	rc = request_firmware(&firmware, firmware_name, &pdev->dev);
+	if (rc) {
+		d_vpr_e("%s: failed to request fw \"%s\", error %d\n",
+			__func__, firmware_name, rc);
+		goto exit;
+	}
+
+	fw_size = qcom_mdt_get_size(firmware);
+	if (fw_size < 0 || res_size < (size_t)fw_size) {
+		rc = -EINVAL;
+		d_vpr_e("%s: out of bound fw image fw size: %zd, res_size: %zu",
+			__func__, fw_size, res_size);
+		goto exit;
+	}
+
+	virt = memremap(phys, res_size, MEMREMAP_WC);
+	if (!virt) {
+		d_vpr_e("%s: failed to remap fw memory phys %pa[p]\n",
+			__func__, &phys);
+		rc = -ENOMEM;
+		goto exit;
+	}
+
+	/* prevent system suspend during fw_load */
+	pm_stay_awake(pdev->dev.parent);
+	rc = qcom_mdt_load(&pdev->dev, firmware, firmware_name,
+			   pas_id, virt, phys, res_size, NULL);
+	pm_relax(pdev->dev.parent);
+	if (rc) {
+		d_vpr_e("%s: error %d loading fw \"%s\"\n",
+			__func__, rc, firmware_name);
+		goto exit;
+	}
+	rc = qcom_scm_pas_auth_and_reset(pas_id);
+	if (rc) {
+		d_vpr_e("%s: error %d authenticating fw \"%s\"\n",
+			__func__, rc, firmware_name);
+		goto exit;
+	}
+
+	memunmap(virt);
+	release_firmware(firmware);
+	d_vpr_h("%s: firmware \"%s\" loaded successfully\n",
+		__func__, firmware_name);
+
+	return pas_id;
+
+exit:
+	if (virt)
+		memunmap(virt);
+
if (firmware) + release_firmware(firmware); + + return rc; +} + +int fw_load(struct msm_vidc_core *core) +{ + int rc; + + if (!core->resource->fw_cookie) { + core->resource->fw_cookie = __load_fw_to_memory(core->pdev, + core->platform->data.fwname); + if (core->resource->fw_cookie <= 0) { + d_vpr_e("%s: firmware download failed %d\n", + __func__, core->resource->fw_cookie); + core->resource->fw_cookie = 0; + return -ENOMEM; + } + } + + rc = protect_cp_mem(core); + if (rc) { + d_vpr_e("%s: protect memory failed\n", __func__); + goto fail_protect_mem; + } + + return rc; + +fail_protect_mem: + if (core->resource->fw_cookie) + qcom_scm_pas_shutdown(core->resource->fw_cookie); + core->resource->fw_cookie = 0; + return rc; +} + +int fw_unload(struct msm_vidc_core *core) +{ + int ret; + + if (!core->resource->fw_cookie) + return -EINVAL; + + ret = qcom_scm_pas_shutdown(core->resource->fw_cookie); + if (ret) + d_vpr_e("Firmware unload failed rc=%d\n", ret); + + core->resource->fw_cookie = 0; + + return ret; +} + +int fw_suspend(struct msm_vidc_core *core) +{ + return qcom_scm_set_remote_state(TZBSP_VIDEO_STATE_SUSPEND, 0); +} + +int fw_resume(struct msm_vidc_core *core) +{ + return qcom_scm_set_remote_state(TZBSP_VIDEO_STATE_RESUME, 0); +} + +void fw_coredump(struct msm_vidc_core *core) +{ + int rc = 0; + struct platform_device *pdev; + struct device_node *node = NULL; + struct resource res = {0}; + phys_addr_t mem_phys = 0; + size_t res_size = 0; + void *mem_va = NULL; + char *data = NULL, *dump = NULL; + u64 total_size; + + pdev = core->pdev; + + node = of_parse_phandle(pdev->dev.of_node, "memory-region", 0); + if (!node) { + d_vpr_e("%s: DT error getting \"memory-region\" property\n", + __func__); + return; + } + + rc = of_address_to_resource(node, 0, &res); + if (rc) { + d_vpr_e("%s: error %d while getting \"memory-region\" resource\n", + __func__, rc); + return; + } + + mem_phys = res.start; + res_size = (size_t)resource_size(&res); + + mem_va = memremap(mem_phys, 
res_size, MEMREMAP_WC);
+	if (!mem_va) {
+		d_vpr_e("%s: unable to remap firmware memory\n", __func__);
+		return;
+	}
+	total_size = res_size + TOTAL_QSIZE + ALIGNED_SFR_SIZE;
+
+	data = vmalloc(total_size);
+	if (!data) {
+		memunmap(mem_va);
+		return;
+	}
+	dump = data;
+
+	/* copy firmware dump */
+	memcpy(data, mem_va, res_size);
+	memunmap(mem_va);
+
+	/* copy queues(cmd, msg, dbg) dump(along with headers) */
+	data += res_size;
+	memcpy(data, (char *)core->iface_q_table.align_virtual_addr, TOTAL_QSIZE);
+
+	/* copy sfr dump */
+	data += TOTAL_QSIZE;
+	memcpy(data, (char *)core->sfr.align_virtual_addr, ALIGNED_SFR_SIZE);
+
+	dev_coredumpv(&pdev->dev, dump, total_size, GFP_KERNEL);
+}

From patchwork Fri Jul 28 13:23:35 2023
X-Patchwork-Submitter: Vikash Garodia
X-Patchwork-Id: 13331956
From: Vikash Garodia
Subject: [PATCH 24/33] iris: vidc: add debug files
Date: Fri, 28 Jul 2023 18:53:35 +0530
Message-ID: <1690550624-14642-25-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
References: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org

This implements the debugging framework.

Signed-off-by: Dikshita Agarwal
Signed-off-by: Vikash Garodia
---
 .../platform/qcom/iris/vidc/inc/msm_vidc_debug.h   | 186 +++++++
 .../platform/qcom/iris/vidc/src/msm_vidc_debug.c   | 581 +++++++++++++++++++++
 2 files changed, 767 insertions(+)
 create mode 100644 drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_debug.h
 create mode 100644 drivers/media/platform/qcom/iris/vidc/src/msm_vidc_debug.c

diff --git a/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_debug.h b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_debug.h
new file mode 100644
index 0000000..ffced01
--- /dev/null
+++ b/drivers/media/platform/qcom/iris/vidc/inc/msm_vidc_debug.h
@@ -0,0 +1,186 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */ + +#ifndef __MSM_VIDC_DEBUG__ +#define __MSM_VIDC_DEBUG__ + +#include +#include +#include +#include +#include +#include + +struct msm_vidc_core; +struct msm_vidc_inst; + +#ifndef VIDC_DBG_LABEL +#define VIDC_DBG_LABEL "msm_vidc" +#endif + +/* Allow only 6 prints/sec */ +#define VIDC_DBG_SESSION_RATELIMIT_INTERVAL (1 * HZ) +#define VIDC_DBG_SESSION_RATELIMIT_BURST 6 + +#define VIDC_DBG_TAG_INST VIDC_DBG_LABEL ": %4s: %s: " +#define VIDC_DBG_TAG_CORE VIDC_DBG_LABEL ": %4s: %08x: %s: " +#define FW_DBG_TAG VIDC_DBG_LABEL ": %6s: " +#define DEFAULT_SID ((u32)-1) + +#ifndef MSM_VIDC_EMPTY_BRACE +#define MSM_VIDC_EMPTY_BRACE {}, +#endif + +extern unsigned int msm_vidc_debug; +extern unsigned int msm_fw_debug; +extern bool msm_vidc_fw_dump; + +/* do not modify the log message as it is used in test scripts */ +#define FMT_STRING_SET_CTRL \ + "%s: state %s, name %s, id 0x%x value %d\n" +#define FMT_STRING_STATE_CHANGE \ + "%s: state changed to %s from %s\n" +#define FMT_STRING_MSG_SFR \ + "SFR Message from FW: %s\n" +#define FMT_STRING_FAULT_HANDLER \ + "%s: faulting address: %lx\n" +#define FMT_STRING_SET_CAP \ + "set cap: name: %24s, cap value: %#10x, hfi: %#10llx\n" + +/* To enable messages OR these values and + * echo the result to debugfs file. 
+ * + * To enable all messages set msm_vidc_debug = 0x101F + */ + +enum vidc_msg_prio_drv { + VIDC_ERR = 0x00000001, + VIDC_HIGH = 0x00000002, + VIDC_LOW = 0x00000004, + VIDC_PERF = 0x00000008, + VIDC_PKT = 0x00000010, + VIDC_BUS = 0x00000020, + VIDC_STAT = 0x00000040, + VIDC_ENCODER = 0x00000100, + VIDC_DECODER = 0x00000200, + VIDC_PRINTK = 0x10000000, + VIDC_FTRACE = 0x20000000, +}; + +enum vidc_msg_prio_fw { + FW_LOW = 0x00000001, + FW_MED = 0x00000002, + FW_HIGH = 0x00000004, + FW_ERROR = 0x00000008, + FW_FATAL = 0x00000010, + FW_PERF = 0x00000020, + FW_CACHE_LOW = 0x00000100, + FW_CACHE_MED = 0x00000200, + FW_CACHE_HIGH = 0x00000400, + FW_CACHE_ERROR = 0x00000800, + FW_CACHE_FATAL = 0x00001000, + FW_CACHE_PERF = 0x00002000, + FW_PRINTK = 0x10000000, + FW_FTRACE = 0x20000000, +}; + +#define DRV_LOG (VIDC_ERR | VIDC_PRINTK) +#define DRV_LOGSHIFT (0) +#define DRV_LOGMASK (0x0FFFFFFF) + +#define FW_LOG (FW_ERROR | FW_FATAL | FW_PRINTK) +#define FW_LOGSHIFT (0) +#define FW_LOGMASK (0x0FFFFFFF) + +#define dprintk_inst(__level, __level_str, inst, __fmt, ...) \ + do { \ + if (inst && (msm_vidc_debug & (__level))) { \ + pr_info(VIDC_DBG_TAG_INST __fmt, \ + __level_str, \ + inst->debug_str, \ + ##__VA_ARGS__); \ + } \ + } while (0) + +#define i_vpr_e(inst, __fmt, ...) dprintk_inst(VIDC_ERR, "err ", inst, __fmt, ##__VA_ARGS__) +#define i_vpr_i(inst, __fmt, ...) dprintk_inst(VIDC_HIGH, "high", inst, __fmt, ##__VA_ARGS__) +#define i_vpr_h(inst, __fmt, ...) dprintk_inst(VIDC_HIGH, "high", inst, __fmt, ##__VA_ARGS__) +#define i_vpr_l(inst, __fmt, ...) dprintk_inst(VIDC_LOW, "low ", inst, __fmt, ##__VA_ARGS__) +#define i_vpr_p(inst, __fmt, ...) dprintk_inst(VIDC_PERF, "perf", inst, __fmt, ##__VA_ARGS__) +#define i_vpr_t(inst, __fmt, ...) dprintk_inst(VIDC_PKT, "pkt ", inst, __fmt, ##__VA_ARGS__) +#define i_vpr_b(inst, __fmt, ...) dprintk_inst(VIDC_BUS, "bus ", inst, __fmt, ##__VA_ARGS__) +#define i_vpr_s(inst, __fmt, ...) 
dprintk_inst(VIDC_STAT, "stat", inst, __fmt, ##__VA_ARGS__) + +#define i_vpr_hp(inst, __fmt, ...) \ + dprintk_inst(VIDC_HIGH | VIDC_PERF, "high", inst, __fmt, ##__VA_ARGS__) +#define i_vpr_hs(inst, __fmt, ...) \ + dprintk_inst(VIDC_HIGH | VIDC_STAT, "stat", inst, __fmt, ##__VA_ARGS__) + +#define dprintk_core(__level, __level_str, __fmt, ...) \ + do { \ + if (msm_vidc_debug & (__level)) { \ + pr_info(VIDC_DBG_TAG_CORE __fmt, \ + __level_str, \ + DEFAULT_SID, \ + "codec", \ + ##__VA_ARGS__); \ + } \ + } while (0) + +#define d_vpr_e(__fmt, ...) dprintk_core(VIDC_ERR, "err ", __fmt, ##__VA_ARGS__) +#define d_vpr_h(__fmt, ...) dprintk_core(VIDC_HIGH, "high", __fmt, ##__VA_ARGS__) +#define d_vpr_l(__fmt, ...) dprintk_core(VIDC_LOW, "low ", __fmt, ##__VA_ARGS__) +#define d_vpr_p(__fmt, ...) dprintk_core(VIDC_PERF, "perf", __fmt, ##__VA_ARGS__) +#define d_vpr_t(__fmt, ...) dprintk_core(VIDC_PKT, "pkt ", __fmt, ##__VA_ARGS__) +#define d_vpr_b(__fmt, ...) dprintk_core(VIDC_BUS, "bus ", __fmt, ##__VA_ARGS__) +#define d_vpr_s(__fmt, ...) dprintk_core(VIDC_STAT, "stat", __fmt, ##__VA_ARGS__) +#define d_vpr_hs(__fmt, ...) \ + dprintk_core(VIDC_HIGH | VIDC_STAT, "high", __fmt, ##__VA_ARGS__) + +#define dprintk_ratelimit(__level, __level_str, __fmt, ...) \ + do { \ + if (msm_vidc_check_ratelimit()) { \ + dprintk_core(__level, __level_str, __fmt, ##__VA_ARGS__); \ + } \ + } while (0) + +#define dprintk_firmware(__level, __fmt, ...) 
\ + do { \ + if ((msm_fw_debug & (__level)) & FW_PRINTK) { \ + pr_info(FW_DBG_TAG __fmt, \ + "fw", \ + ##__VA_ARGS__); \ + } \ + } while (0) + +enum msm_vidc_debugfs_event { + MSM_VIDC_DEBUGFS_EVENT_ETB, + MSM_VIDC_DEBUGFS_EVENT_EBD, + MSM_VIDC_DEBUGFS_EVENT_FTB, + MSM_VIDC_DEBUGFS_EVENT_FBD, +}; + +enum msm_vidc_bug_on_error { + MSM_VIDC_BUG_ON_FATAL = BIT(0), + MSM_VIDC_BUG_ON_NOC = BIT(1), + MSM_VIDC_BUG_ON_WD_TIMEOUT = BIT(2), +}; + +struct dentry *msm_vidc_debugfs_init_drv(void); +struct dentry *msm_vidc_debugfs_init_core(struct msm_vidc_core *core); +struct dentry *msm_vidc_debugfs_init_inst(struct msm_vidc_inst *inst, + struct dentry *parent); +void msm_vidc_debugfs_deinit_inst(struct msm_vidc_inst *inst); +void msm_vidc_debugfs_update(struct msm_vidc_inst *inst, + enum msm_vidc_debugfs_event e); +int msm_vidc_check_ratelimit(void); + +static inline bool is_stats_enabled(void) +{ + return !!(msm_vidc_debug & VIDC_STAT); +} + +#endif diff --git a/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_debug.c b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_debug.c new file mode 100644 index 0000000..489e8dc --- /dev/null +++ b/drivers/media/platform/qcom/iris/vidc/src/msm_vidc_debug.c @@ -0,0 +1,581 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include "msm_vidc.h" +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_inst.h" +#include "msm_vidc_internal.h" + +#define MAX_DEBUG_LEVEL_STRING_LEN 15 +#define MSM_VIDC_MIN_STATS_DELAY_MS 200 +#define MSM_VIDC_MAX_STATS_DELAY_MS 10000 + +unsigned int msm_vidc_debug = DRV_LOG; +unsigned int msm_fw_debug = FW_LOG; + +static int debug_level_set_drv(const char *val, + const struct kernel_param *kp) +{ + struct msm_vidc_core *core = NULL; + unsigned int dvalue; + int ret; + + if (!kp || !kp->arg || !val) { + d_vpr_e("%s: Invalid params\n", __func__); + return -EINVAL; + } + + ret = kstrtouint(val, 0, &dvalue); + if (ret) + return ret; + + msm_vidc_debug = dvalue; + + core = *(struct msm_vidc_core **)kp->arg; + + if (!core) { + d_vpr_e("%s: Invalid core/capabilities\n", __func__); + return 0; + } + + /* check if driver is more than default level */ + if ((dvalue & DRV_LOGMASK) & ~(DRV_LOG)) { + core->capabilities[HW_RESPONSE_TIMEOUT].value = 4 * HW_RESPONSE_TIMEOUT_VALUE; + core->capabilities[SW_PC_DELAY].value = 4 * SW_PC_DELAY_VALUE; + core->capabilities[FW_UNLOAD_DELAY].value = 4 * FW_UNLOAD_DELAY_VALUE; + } else { + /* reset timeout values, if user reduces the logging */ + core->capabilities[HW_RESPONSE_TIMEOUT].value = HW_RESPONSE_TIMEOUT_VALUE; + core->capabilities[SW_PC_DELAY].value = SW_PC_DELAY_VALUE; + core->capabilities[FW_UNLOAD_DELAY].value = FW_UNLOAD_DELAY_VALUE; + } + + d_vpr_h("timeout for driver: hw_response %u, sw_pc %u, fw_unload %u, debug_level %#x\n", + core->capabilities[HW_RESPONSE_TIMEOUT].value, + core->capabilities[SW_PC_DELAY].value, + core->capabilities[FW_UNLOAD_DELAY].value, + msm_vidc_debug); + + return 0; +} + +static int debug_level_set_fw(const char *val, + const struct kernel_param *kp) +{ + struct msm_vidc_core *core = NULL; + unsigned int dvalue; + int ret; + + if (!kp || !kp->arg || !val) { + d_vpr_e("%s: Invalid params\n", __func__); + return -EINVAL; + } + + ret = 
kstrtouint(val, 0, &dvalue); + if (ret) + return ret; + + msm_fw_debug = dvalue; + + core = *(struct msm_vidc_core **)kp->arg; + + if (!core) { + d_vpr_e("%s: Invalid core/capabilities\n", __func__); + return 0; + } + + /* check if firmware is more than default level */ + if ((dvalue & FW_LOGMASK) & ~(FW_LOG)) { + core->capabilities[HW_RESPONSE_TIMEOUT].value = 4 * HW_RESPONSE_TIMEOUT_VALUE; + core->capabilities[SW_PC_DELAY].value = 4 * SW_PC_DELAY_VALUE; + core->capabilities[FW_UNLOAD_DELAY].value = 4 * FW_UNLOAD_DELAY_VALUE; + } else { + /* reset timeout values, if user reduces the logging */ + core->capabilities[HW_RESPONSE_TIMEOUT].value = HW_RESPONSE_TIMEOUT_VALUE; + core->capabilities[SW_PC_DELAY].value = SW_PC_DELAY_VALUE; + core->capabilities[FW_UNLOAD_DELAY].value = FW_UNLOAD_DELAY_VALUE; + } + + d_vpr_h("timeout for firmware: hw_response %u, sw_pc %u, fw_unload %u, debug_level %#x\n", + core->capabilities[HW_RESPONSE_TIMEOUT].value, + core->capabilities[SW_PC_DELAY].value, + core->capabilities[FW_UNLOAD_DELAY].value, + msm_fw_debug); + + return 0; +} + +static int debug_level_get_drv(char *buffer, const struct kernel_param *kp) +{ + return scnprintf(buffer, PAGE_SIZE, "%#x", msm_vidc_debug); +} + +static int debug_level_get_fw(char *buffer, const struct kernel_param *kp) +{ + return scnprintf(buffer, PAGE_SIZE, "%#x", msm_fw_debug); +} + +static const struct kernel_param_ops msm_vidc_debug_fops = { + .set = debug_level_set_drv, + .get = debug_level_get_drv, +}; + +static const struct kernel_param_ops msm_fw_debug_fops = { + .set = debug_level_set_fw, + .get = debug_level_get_fw, +}; + +static int fw_dump_set(const char *val, const struct kernel_param *kp) +{ + unsigned int dvalue; + int ret; + + if (!kp || !kp->arg || !val) { + d_vpr_e("%s: Invalid params\n", __func__); + return -EINVAL; + } + + ret = kstrtouint(val, 0, &dvalue); + if (ret) + return ret; + + msm_vidc_fw_dump = dvalue; + + d_vpr_h("fw dump %s\n", msm_vidc_fw_dump ? 
"Enabled" : "Disabled"); + + return 0; +} + +static int fw_dump_get(char *buffer, const struct kernel_param *kp) +{ + return scnprintf(buffer, PAGE_SIZE, "%#x", msm_vidc_fw_dump); +} + +static const struct kernel_param_ops msm_vidc_fw_dump_fops = { + .set = fw_dump_set, + .get = fw_dump_get, +}; + +module_param_cb(msm_vidc_debug, &msm_vidc_debug_fops, &g_core, 0644); +module_param_cb(msm_fw_debug, &msm_fw_debug_fops, &g_core, 0644); +module_param_cb(msm_vidc_fw_dump, &msm_vidc_fw_dump_fops, &g_core, 0644); + +bool msm_vidc_fw_dump = !true; +EXPORT_SYMBOL(msm_vidc_fw_dump); + +#define MAX_DBG_BUF_SIZE 4096 + +struct core_inst_pair { + struct msm_vidc_core *core; + struct msm_vidc_inst *inst; +}; + +/* debug fs support */ + +static u32 write_str(char *buffer, size_t size, const char *fmt, ...) +{ + va_list args; + u32 len; + + va_start(args, fmt); + len = vscnprintf(buffer, size, fmt, args); + va_end(args); + return len; +} + +static ssize_t core_info_read(struct file *file, char __user *buf, + size_t count, loff_t *ppos) +{ + struct msm_vidc_core *core = file->private_data; + char *cur, *end, *dbuf = NULL; + ssize_t len = 0; + + if (!core) { + d_vpr_e("%s: invalid params %pK\n", __func__, core); + return 0; + } + + dbuf = vzalloc(MAX_DBG_BUF_SIZE); + if (!dbuf) { + d_vpr_e("%s: allocation failed\n", __func__); + return -ENOMEM; + } + + cur = dbuf; + end = cur + MAX_DBG_BUF_SIZE; + + cur += write_str(cur, end - cur, "Core state: %d\n", core->state); + + cur += write_str(cur, end - cur, + "FW version : %s\n", core->fw_version); + cur += write_str(cur, end - cur, + "register_base: 0x%x\n", core->resource->register_base_addr); + cur += write_str(cur, end - cur, "irq: %u\n", core->resource->irq); + + len = simple_read_from_buffer(buf, count, ppos, dbuf, cur - dbuf); + + vfree(dbuf); + return len; +} + +static const struct file_operations core_info_fops = { + .open = simple_open, + .read = core_info_read, +}; + +static ssize_t stats_delay_write_ms(struct file *filp, const 
char __user *buf,
+				    size_t count, loff_t *ppos)
+{
+	int rc = 0;
+	struct msm_vidc_core *core = filp->private_data;
+	char kbuf[MAX_DEBUG_LEVEL_STRING_LEN] = {0};
+	u32 delay_ms = 0;
+
+	if (!core) {
+		d_vpr_e("%s: invalid params %pK\n", __func__, core);
+		return 0;
+	}
+
+	/* filter partial writes and invalid commands */
+	if (*ppos != 0 || count >= sizeof(kbuf) || count == 0) {
+		d_vpr_e("returning error - pos %lld, count %zu\n", *ppos, count);
+		rc = -EINVAL;
+		goto exit;
+	}
+
+	rc = simple_write_to_buffer(kbuf, sizeof(kbuf) - 1, ppos, buf, count);
+	if (rc < 0) {
+		d_vpr_e("%s: User memory fault\n", __func__);
+		rc = -EFAULT;
+		goto exit;
+	}
+
+	rc = kstrtouint(kbuf, 0, &delay_ms);
+	if (rc) {
+		d_vpr_e("returning error err %d\n", rc);
+		rc = -EINVAL;
+		goto exit;
+	}
+	delay_ms = clamp_t(u32, delay_ms, MSM_VIDC_MIN_STATS_DELAY_MS, MSM_VIDC_MAX_STATS_DELAY_MS);
+	core->capabilities[STATS_TIMEOUT_MS].value = delay_ms;
+	d_vpr_h("Stats delay is updated to - %u ms\n", delay_ms);
+	rc = count;
+
+exit:
+	return rc;
+}
+
+static ssize_t stats_delay_read_ms(struct file *file, char __user *buf,
+				   size_t count, loff_t *ppos)
+{
+	size_t len;
+	char kbuf[MAX_DEBUG_LEVEL_STRING_LEN];
+	struct msm_vidc_core *core = file->private_data;
+
+	if (!core) {
+		d_vpr_e("%s: invalid params %pK\n", __func__, core);
+		return 0;
+	}
+
+	len = scnprintf(kbuf, sizeof(kbuf), "%u\n", core->capabilities[STATS_TIMEOUT_MS].value);
+	return simple_read_from_buffer(buf, count, ppos, kbuf, len);
+}
+
+static const struct file_operations stats_delay_fops = {
+	.open = simple_open,
+	.write = stats_delay_write_ms,
+	.read = stats_delay_read_ms,
+};
+
+struct dentry *msm_vidc_debugfs_init_drv(void)
+{
+	struct dentry *dir = NULL;
+
+	dir = debugfs_create_dir("msm_vidc", NULL);
+	if (IS_ERR_OR_NULL(dir)) {
+		dir = NULL;
+		goto failed_create_dir;
+	}
+
+	return dir;
+
+failed_create_dir:
+	debugfs_remove_recursive(dir);
+
+	return NULL;
+}
+
+struct dentry *msm_vidc_debugfs_init_core(struct msm_vidc_core *core)
+{
+	struct
dentry *dir = NULL; + char debugfs_name[MAX_DEBUGFS_NAME]; + struct dentry *parent; + + if (!core->debugfs_parent) { + d_vpr_e("%s: invalid params\n", __func__); + goto failed_create_dir; + } + parent = core->debugfs_parent; + + snprintf(debugfs_name, MAX_DEBUGFS_NAME, "core"); + dir = debugfs_create_dir(debugfs_name, parent); + if (IS_ERR_OR_NULL(dir)) { + dir = NULL; + d_vpr_e("Failed to create debugfs for msm_vidc\n"); + goto failed_create_dir; + } + if (!debugfs_create_file("info", 0444, dir, core, &core_info_fops)) { + d_vpr_e("debugfs_create_file: fail\n"); + goto failed_create_dir; + } + + if (!debugfs_create_file("stats_delay_ms", 0644, dir, core, &stats_delay_fops)) { + d_vpr_e("debugfs_create_file: fail\n"); + goto failed_create_dir; + } +failed_create_dir: + return dir; +} + +static int inst_info_open(struct inode *inode, struct file *file) +{ + d_vpr_l("Open inode ptr: %pK\n", inode->i_private); + file->private_data = inode->i_private; + return 0; +} + +static ssize_t inst_info_read(struct file *file, char __user *buf, + size_t count, loff_t *ppos) +{ + struct core_inst_pair *idata = file->private_data; + struct msm_vidc_core *core; + struct msm_vidc_inst *inst; + char *cur, *end, *dbuf = NULL; + int i, j; + ssize_t len = 0; + struct v4l2_format *f; + + if (!idata || !idata->core || !idata->inst) { + d_vpr_e("%s: invalid params %pK\n", __func__, idata); + return 0; + } + + core = idata->core; + inst = idata->inst; + + inst = get_inst(core, inst->session_id); + if (!inst) { + d_vpr_h("%s: instance has become obsolete", __func__); + return 0; + } + + dbuf = vzalloc(MAX_DBG_BUF_SIZE); + if (!dbuf) { + d_vpr_e("%s: allocation failed\n", __func__); + len = -ENOMEM; + goto failed_alloc; + } + cur = dbuf; + end = cur + MAX_DBG_BUF_SIZE; + + f = &inst->fmts[OUTPUT_PORT]; + cur += write_str(cur, end - cur, "==============================\n"); + cur += write_str(cur, end - cur, "INSTANCE: %pK (%s)\n", inst, + inst->domain == MSM_VIDC_ENCODER ? 
"Encoder" : "Decoder"); + cur += write_str(cur, end - cur, "==============================\n"); + cur += write_str(cur, end - cur, "core: %pK\n", inst->core); + cur += write_str(cur, end - cur, "height: %d\n", f->fmt.pix_mp.height); + cur += write_str(cur, end - cur, "width: %d\n", f->fmt.pix_mp.width); + cur += write_str(cur, end - cur, "fps: %d\n", + inst->capabilities[FRAME_RATE].value >> 16); + cur += write_str(cur, end - cur, "state: %d\n", inst->state); + cur += write_str(cur, end - cur, "-----------Formats-------------\n"); + for (i = 0; i < MAX_PORT; i++) { + if (i != INPUT_PORT && i != OUTPUT_PORT) + continue; + f = &inst->fmts[i]; + cur += write_str(cur, end - cur, "capability: %s\n", + i == INPUT_PORT ? "Output" : "Capture"); + cur += write_str(cur, end - cur, "planes : %d\n", + f->fmt.pix_mp.num_planes); + cur += write_str(cur, end - cur, + "type: %s\n", i == INPUT_PORT ? + "Output" : "Capture"); + cur += write_str(cur, end - cur, "count: %u\n", + inst->bufq[i].vb2q->num_buffers); + + for (j = 0; j < f->fmt.pix_mp.num_planes; j++) + cur += write_str(cur, end - cur, + "size for plane %d: %u\n", + j, f->fmt.pix_mp.plane_fmt[j].sizeimage); + + cur += write_str(cur, end - cur, "\n"); + } + cur += write_str(cur, end - cur, "-------------------------------\n"); + cur += write_str(cur, end - cur, "ETB Count: %d\n", + inst->debug_count.etb); + cur += write_str(cur, end - cur, "EBD Count: %d\n", + inst->debug_count.ebd); + cur += write_str(cur, end - cur, "FTB Count: %d\n", + inst->debug_count.ftb); + cur += write_str(cur, end - cur, "FBD Count: %d\n", + inst->debug_count.fbd); + + len = simple_read_from_buffer(buf, count, ppos, + dbuf, cur - dbuf); + + vfree(dbuf); +failed_alloc: + put_inst(inst); + return len; +} + +static int inst_info_release(struct inode *inode, struct file *file) +{ + d_vpr_l("Release inode ptr: %pK\n", inode->i_private); + file->private_data = NULL; + return 0; +} + +static const struct file_operations inst_info_fops = { + .open = 
inst_info_open, + .read = inst_info_read, + .release = inst_info_release, +}; + +struct dentry *msm_vidc_debugfs_init_inst(struct msm_vidc_inst *inst, struct dentry *parent) +{ + struct dentry *dir = NULL, *info = NULL; + char debugfs_name[MAX_DEBUGFS_NAME]; + struct core_inst_pair *idata = NULL; + + snprintf(debugfs_name, MAX_DEBUGFS_NAME, "inst_%d", inst->session_id); + + idata = vzalloc(sizeof(*idata)); + if (!idata) { + i_vpr_e(inst, "%s: allocation failed\n", __func__); + goto exit; + } + + idata->core = inst->core; + idata->inst = inst; + + dir = debugfs_create_dir(debugfs_name, parent); + if (IS_ERR_OR_NULL(dir)) { + dir = NULL; + i_vpr_e(inst, + "%s: Failed to create debugfs for msm_vidc\n", + __func__); + goto failed_create_dir; + } + + info = debugfs_create_file("info", 0444, dir, + idata, &inst_info_fops); + if (IS_ERR_OR_NULL(info)) { + i_vpr_e(inst, "%s: debugfs_create_file: fail\n", + __func__); + goto failed_create_file; + } + + dir->d_inode->i_private = info->d_inode->i_private; + return dir; + +failed_create_file: + debugfs_remove_recursive(dir); + dir = NULL; +failed_create_dir: + vfree(idata); +exit: + return dir; +} + +void msm_vidc_debugfs_deinit_inst(struct msm_vidc_inst *inst) +{ + struct dentry *dentry = NULL; + + if (!inst->debugfs_root) + return; + + dentry = inst->debugfs_root; + if (dentry->d_inode) { + i_vpr_l(inst, "%s: Destroy %pK\n", + __func__, dentry->d_inode->i_private); + vfree(dentry->d_inode->i_private); + dentry->d_inode->i_private = NULL; + } + debugfs_remove_recursive(dentry); + inst->debugfs_root = NULL; +} + +void msm_vidc_debugfs_update(struct msm_vidc_inst *inst, + enum msm_vidc_debugfs_event e) +{ + switch (e) { + case MSM_VIDC_DEBUGFS_EVENT_ETB: + inst->debug_count.etb++; + if (inst->debug_count.ebd && + inst->debug_count.ftb > inst->debug_count.fbd) { + } + break; + case MSM_VIDC_DEBUGFS_EVENT_EBD: + inst->debug_count.ebd++; + /* + * Host needs to ensure FW at least have 2 buffers available always + * one for HW 
processing and another for FW processing in parallel
+	 * to avoid FW starving for buffers
+	 */
+	if (inst->debug_count.etb < (inst->debug_count.ebd + 2)) {
+		i_vpr_p(inst,
+			"EBD: FW needs input buffers. Processed etb %llu ebd %llu ftb %llu fbd %llu\n",
+			inst->debug_count.etb, inst->debug_count.ebd,
+			inst->debug_count.ftb, inst->debug_count.fbd);
+	}
+	if (inst->debug_count.fbd &&
+	    inst->debug_count.ftb < (inst->debug_count.fbd + 2))
+		i_vpr_p(inst,
+			"EBD: FW needs output buffers. Processed etb %llu ebd %llu ftb %llu fbd %llu\n",
+			inst->debug_count.etb, inst->debug_count.ebd,
+			inst->debug_count.ftb, inst->debug_count.fbd);
+	break;
+	case MSM_VIDC_DEBUGFS_EVENT_FTB:
+		inst->debug_count.ftb++;
+		break;
+	case MSM_VIDC_DEBUGFS_EVENT_FBD:
+		inst->debug_count.fbd++;
+		/*
+		 * Host needs to ensure FW always has at least 2 buffers available:
+		 * one for HW processing and another for FW processing in parallel,
+		 * to avoid FW starving for buffers.
+		 */
+		if (inst->debug_count.ftb < (inst->debug_count.fbd + 2)) {
+			i_vpr_p(inst,
+				"FBD: FW needs output buffers. Processed etb %llu ebd %llu ftb %llu fbd %llu\n",
+				inst->debug_count.etb, inst->debug_count.ebd,
+				inst->debug_count.ftb, inst->debug_count.fbd);
+		}
+		if (inst->debug_count.ebd &&
+		    inst->debug_count.etb < (inst->debug_count.ebd + 2))
+			i_vpr_p(inst,
+				"FBD: FW needs input buffers. 
Processed etb %llu ebd %llu ftb %llu fbd %llu\n", + inst->debug_count.etb, inst->debug_count.ebd, + inst->debug_count.ftb, inst->debug_count.fbd); + break; + default: + i_vpr_e(inst, "invalid event in debugfs: %d\n", e); + break; + } +} + +int msm_vidc_check_ratelimit(void) +{ + static DEFINE_RATELIMIT_STATE(_rs, + VIDC_DBG_SESSION_RATELIMIT_INTERVAL, + VIDC_DBG_SESSION_RATELIMIT_BURST); + return __ratelimit(&_rs); +}

From patchwork Fri Jul 28 13:23:36 2023
X-Patchwork-Submitter: Vikash Garodia
X-Patchwork-Id: 13331957
Subject: [PATCH 25/33] iris: platform: add platform files
Date: Fri, 28 Jul 2023 18:53:36 +0530
Message-ID: <1690550624-14642-26-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org

From: Dikshita Agarwal

This implements the adjust/set functions for the different capabilities supported by the driver.

Signed-off-by: Dikshita Agarwal
Signed-off-by: Vikash Garodia
---
 .../iris/platform/common/inc/msm_vidc_platform.h |  305 +++
 .../iris/platform/common/src/msm_vidc_platform.c | 2499 ++++++++++++++++++++
 2 files changed, 2804 insertions(+)
 create mode 100644 drivers/media/platform/qcom/iris/platform/common/inc/msm_vidc_platform.h
 create mode 100644 drivers/media/platform/qcom/iris/platform/common/src/msm_vidc_platform.c

diff --git a/drivers/media/platform/qcom/iris/platform/common/inc/msm_vidc_platform.h b/drivers/media/platform/qcom/iris/platform/common/inc/msm_vidc_platform.h new file mode 100644 index 0000000..87c9f2f --- /dev/null +++ b/drivers/media/platform/qcom/iris/platform/common/inc/msm_vidc_platform.h @@ -0,0 +1,305 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */ + +#ifndef _MSM_VIDC_PLATFORM_H_ +#define _MSM_VIDC_PLATFORM_H_ + +#include +#include + +#include "msm_vidc_internal.h" +#include "msm_vidc_core.h" + +#define DDR_TYPE_LPDDR4 0x6 +#define DDR_TYPE_LPDDR4X 0x7 +#define DDR_TYPE_LPDDR5 0x8 +#define DDR_TYPE_LPDDR5X 0x9 + +#define UBWC_CONFIG(mc, ml, hbb, bs1, bs2, bs3, bsp) \ +{ \ + .max_channels = mc, \ + .mal_length = ml, \ + .highest_bank_bit = hbb, \ + .bank_swzl_level = bs1, \ + .bank_swz2_level = bs2, \ + .bank_swz3_level = bs3, \ + .bank_spreading = bsp, \ +} + +struct bw_table { + const char *name; + u32 min_kbps; + u32 max_kbps; +}; + +struct pd_table { + const char *name; +}; + +struct regulator_table { + const char *name; + bool hw_trigger; +}; + +struct clk_table { + const char *name; + u32 clk_id; + bool scaling; +}; + +struct clk_rst_table { + const char *name; + bool exclusive_release; +}; + +struct subcache_table { + const char *name; + u32 llcc_id; +}; + +struct context_bank_table { + const char *name; + u32 start; + u32 size; + bool secure; + bool dma_coherant; + u32 region; + u64 dma_mask; +}; + +struct freq_table { + unsigned long freq; +}; + +struct reg_preset_table { + u32 reg; + u32 value; + u32 mask; +}; + +struct msm_vidc_ubwc_config_data { + u32 max_channels; + u32 mal_length; + u32 highest_bank_bit; + u32 bank_swzl_level; + u32 bank_swz2_level; + u32 bank_swz3_level; + u32 bank_spreading; +}; + +struct codec_info { + u32 v4l2_codec; + enum msm_vidc_codec_type vidc_codec; + const char *pixfmt_name; +}; + +struct color_format_info { + u32 v4l2_color_format; + enum msm_vidc_colorformat_type vidc_color_format; + const char *pixfmt_name; +}; + +struct color_primaries_info { + u32 v4l2_color_primaries; + enum msm_vidc_color_primaries vidc_color_primaries; +}; + +struct transfer_char_info { + u32 v4l2_transfer_char; + enum msm_vidc_transfer_characteristics vidc_transfer_char; +}; + +struct matrix_coeff_info { + u32 v4l2_matrix_coeff; + enum msm_vidc_matrix_coefficients vidc_matrix_coeff; +}; 
+ +struct msm_platform_core_capability { + enum msm_vidc_core_capability_type type; + u32 value; +}; + +struct msm_platform_inst_capability { + enum msm_vidc_inst_capability_type cap_id; + enum msm_vidc_domain_type domain; + enum msm_vidc_codec_type codec; + s32 min; + s32 max; + u32 step_or_mask; + s32 value; + u32 v4l2_id; + u32 hfi_id; + enum msm_vidc_inst_capability_flags flags; +}; + +struct msm_platform_inst_cap_dependency { + enum msm_vidc_inst_capability_type cap_id; + enum msm_vidc_domain_type domain; + enum msm_vidc_codec_type codec; + enum msm_vidc_inst_capability_type children[MAX_CAP_CHILDREN]; + int (*adjust)(void *inst, + struct v4l2_ctrl *ctrl); + int (*set)(void *inst, + enum msm_vidc_inst_capability_type cap_id); +}; + +struct msm_vidc_compat_handle { + const char *compat; + int (*init_platform)(struct msm_vidc_core *core); + int (*init_iris)(struct msm_vidc_core *core); +}; + +struct msm_vidc_format_capability { + struct codec_info *codec_info; + u32 codec_info_size; + struct color_format_info *color_format_info; + u32 color_format_info_size; + struct color_primaries_info *color_prim_info; + u32 color_prim_info_size; + struct transfer_char_info *transfer_char_info; + u32 transfer_char_info_size; + struct matrix_coeff_info *matrix_coeff_info; + u32 matrix_coeff_info_size; +}; + +struct msm_vidc_platform_data { + const struct bw_table *bw_tbl; + unsigned int bw_tbl_size; + const struct regulator_table *regulator_tbl; + unsigned int regulator_tbl_size; + const struct pd_table *pd_tbl; + unsigned int pd_tbl_size; + const char * const *opp_tbl; + unsigned int opp_tbl_size; + const struct clk_table *clk_tbl; + unsigned int clk_tbl_size; + const struct clk_rst_table *clk_rst_tbl; + unsigned int clk_rst_tbl_size; + const struct subcache_table *subcache_tbl; + unsigned int subcache_tbl_size; + const struct context_bank_table *context_bank_tbl; + unsigned int context_bank_tbl_size; + struct freq_table *freq_tbl; + unsigned int freq_tbl_size; + const struct 
reg_preset_table *reg_prst_tbl; + unsigned int reg_prst_tbl_size; + struct msm_vidc_ubwc_config_data *ubwc_config; + const char *fwname; + u32 pas_id; + struct msm_platform_core_capability *core_data; + u32 core_data_size; + struct msm_platform_inst_capability *inst_cap_data; + u32 inst_cap_data_size; + struct msm_platform_inst_cap_dependency *inst_cap_dependency_data; + u32 inst_cap_dependency_data_size; + struct msm_vidc_format_capability *format_data; + const u32 *psc_avc_tbl; + unsigned int psc_avc_tbl_size; + const u32 *psc_hevc_tbl; + unsigned int psc_hevc_tbl_size; + const u32 *psc_vp9_tbl; + unsigned int psc_vp9_tbl_size; + const u32 *dec_input_prop_avc; + unsigned int dec_input_prop_size_avc; + const u32 *dec_input_prop_hevc; + unsigned int dec_input_prop_size_hevc; + const u32 *dec_input_prop_vp9; + unsigned int dec_input_prop_size_vp9; + const u32 *dec_output_prop_avc; + unsigned int dec_output_prop_size_avc; + const u32 *dec_output_prop_hevc; + unsigned int dec_output_prop_size_hevc; + const u32 *dec_output_prop_vp9; + unsigned int dec_output_prop_size_vp9; +}; + +struct msm_vidc_platform { + struct msm_vidc_platform_data data; +}; + +static inline bool is_sys_cache_present(struct msm_vidc_core *core) +{ + return !!core->platform->data.subcache_tbl_size; +} + +int msm_vidc_init_platform(struct msm_vidc_core *core); + +/* control framework support functions */ + +enum msm_vidc_inst_capability_type msm_vidc_get_cap_id(struct msm_vidc_inst *inst, u32 id); +int msm_vidc_update_cap_value(struct msm_vidc_inst *inst, u32 cap, + s32 adjusted_val, const char *func); +bool is_parent_available(struct msm_vidc_inst *inst, u32 cap_id, + u32 check_parent, const char *func); +int msm_vidc_get_parent_value(struct msm_vidc_inst *inst, u32 cap, u32 parent, + s32 *value, const char *func); +u32 msm_vidc_get_port_info(struct msm_vidc_inst *inst, + enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_v4l2_menu_to_hfi(struct msm_vidc_inst *inst, + enum 
msm_vidc_inst_capability_type cap_id, u32 *value); +int msm_vidc_v4l2_to_hfi_enum(struct msm_vidc_inst *inst, + enum msm_vidc_inst_capability_type cap_id, u32 *value); +int msm_vidc_packetize_control(struct msm_vidc_inst *inst, + enum msm_vidc_inst_capability_type cap_id, u32 payload_type, + void *hfi_val, u32 payload_size, const char *func); +int msm_vidc_adjust_bitrate(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_layer_bitrate(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_bitrate_mode(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_entropy_mode(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_profile(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_ltr_count(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_use_ltr(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_mark_ltr(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_delta_based_rc(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_output_order(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_input_buf_host_max_count(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_output_buf_host_max_count(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_transform_8x8(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_chroma_qp_index_offset(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_slice_count(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_layer_count(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_gop_size(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_b_frame(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_peak_bitrate(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_hevc_min_qp(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_hevc_max_qp(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_hevc_i_frame_qp(void *instance, struct v4l2_ctrl 
*ctrl); +int msm_vidc_adjust_hevc_p_frame_qp(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_hevc_b_frame_qp(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_bitrate_boost(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_min_quality(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_all_intra(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_dec_slice_mode(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_adjust_ir_period(void *instance, struct v4l2_ctrl *ctrl); +int msm_vidc_set_header_mode(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_deblock_mode(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_min_qp(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_max_qp(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_frame_qp(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_req_sync_frame(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_chroma_qp_index_offset(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_slice_count(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_layer_count_and_type(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_gop_size(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_bitrate(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_layer_bitrate(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_u32(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_u32_packed(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_u32_enum(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_constant_quality(void *instance, enum msm_vidc_inst_capability_type cap_id); +int 
msm_vidc_set_cbr_related_properties(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_use_and_mark_ltr(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_nal_length(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_flip(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_rotation(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_ir_period(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_stage(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_pipe(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_level(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_q16(void *instance, enum msm_vidc_inst_capability_type cap_id); +int msm_vidc_set_vui_timing_info(void *instance, enum msm_vidc_inst_capability_type cap_id); + +#endif // _MSM_VIDC_PLATFORM_H_ diff --git a/drivers/media/platform/qcom/iris/platform/common/src/msm_vidc_platform.c b/drivers/media/platform/qcom/iris/platform/common/src/msm_vidc_platform.c new file mode 100644 index 0000000..d7441ea --- /dev/null +++ b/drivers/media/platform/qcom/iris/platform/common/src/msm_vidc_platform.c @@ -0,0 +1,2499 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include +#include + +#include "hfi_packet.h" +#include "hfi_property.h" +#include "msm_vidc_control.h" +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_internal.h" +#include "msm_vidc_iris3.h" +#include "msm_vidc_sm8550.h" +#include "msm_vidc_memory.h" +#include "msm_vidc_platform.h" +#include "msm_vidc_v4l2.h" +#include "msm_vidc_vb2.h" +#include "venus_hfi.h" + +#define CAP_TO_8BIT_QP(a) { \ + if ((a) < MIN_QP_8BIT) \ + (a) = MIN_QP_8BIT; \ +} + +/* + * Below calculation for number of reference frames + * is picked up from hfi macro HFI_IRIS3_ENC_RECON_BUF_COUNT + */ +#define SLIDING_WINDOW_REF_FRAMES(codec, total_hp_layers, ltr_count, num_ref) { \ + if (codec == MSM_VIDC_HEVC) { \ + num_ref = (total_hp_layers + 1) >> 1; \ + } else if (codec == MSM_VIDC_H264) { \ + if (total_hp_layers < 4) \ + num_ref = (total_hp_layers - 1); \ + else \ + num_ref = total_hp_layers; \ + } \ + if (ltr_count) \ + num_ref = num_ref + ltr_count; \ +} + +static struct v4l2_file_operations msm_v4l2_file_operations = { + .owner = THIS_MODULE, + .open = msm_v4l2_open, + .release = msm_v4l2_close, + .unlocked_ioctl = video_ioctl2, + .poll = msm_v4l2_poll, +}; + +static const struct v4l2_ioctl_ops msm_v4l2_ioctl_ops_enc = { + .vidioc_querycap = msm_v4l2_querycap, + .vidioc_enum_fmt_vid_cap = msm_v4l2_enum_fmt, + .vidioc_enum_fmt_vid_out = msm_v4l2_enum_fmt, + .vidioc_enum_framesizes = msm_v4l2_enum_framesizes, + .vidioc_enum_frameintervals = msm_v4l2_enum_frameintervals, + .vidioc_try_fmt_vid_cap_mplane = msm_v4l2_try_fmt, + .vidioc_try_fmt_vid_out_mplane = msm_v4l2_try_fmt, + .vidioc_s_fmt_vid_cap = msm_v4l2_s_fmt, + .vidioc_s_fmt_vid_out = msm_v4l2_s_fmt, + .vidioc_s_fmt_vid_cap_mplane = msm_v4l2_s_fmt, + .vidioc_s_fmt_vid_out_mplane = msm_v4l2_s_fmt, + .vidioc_g_fmt_vid_cap = msm_v4l2_g_fmt, + .vidioc_g_fmt_vid_out = msm_v4l2_g_fmt, + .vidioc_g_fmt_vid_cap_mplane = msm_v4l2_g_fmt, +
.vidioc_g_fmt_vid_out_mplane = msm_v4l2_g_fmt, + .vidioc_g_selection = msm_v4l2_g_selection, + .vidioc_s_selection = msm_v4l2_s_selection, + .vidioc_s_parm = msm_v4l2_s_parm, + .vidioc_g_parm = msm_v4l2_g_parm, + .vidioc_reqbufs = msm_v4l2_reqbufs, + .vidioc_querybuf = msm_v4l2_querybuf, + .vidioc_create_bufs = msm_v4l2_create_bufs, + .vidioc_prepare_buf = msm_v4l2_prepare_buf, + .vidioc_qbuf = msm_v4l2_qbuf, + .vidioc_dqbuf = msm_v4l2_dqbuf, + .vidioc_streamon = msm_v4l2_streamon, + .vidioc_streamoff = msm_v4l2_streamoff, + .vidioc_queryctrl = msm_v4l2_queryctrl, + .vidioc_querymenu = msm_v4l2_querymenu, + .vidioc_subscribe_event = msm_v4l2_subscribe_event, + .vidioc_unsubscribe_event = msm_v4l2_unsubscribe_event, + .vidioc_try_encoder_cmd = msm_v4l2_try_encoder_cmd, + .vidioc_encoder_cmd = msm_v4l2_encoder_cmd, +}; + +static const struct v4l2_ioctl_ops msm_v4l2_ioctl_ops_dec = { + .vidioc_querycap = msm_v4l2_querycap, + .vidioc_enum_fmt_vid_cap = msm_v4l2_enum_fmt, + .vidioc_enum_fmt_vid_out = msm_v4l2_enum_fmt, + .vidioc_enum_framesizes = msm_v4l2_enum_framesizes, + .vidioc_enum_frameintervals = msm_v4l2_enum_frameintervals, + .vidioc_try_fmt_vid_cap_mplane = msm_v4l2_try_fmt, + .vidioc_try_fmt_vid_out_mplane = msm_v4l2_try_fmt, + .vidioc_s_fmt_vid_cap = msm_v4l2_s_fmt, + .vidioc_s_fmt_vid_out = msm_v4l2_s_fmt, + .vidioc_s_fmt_vid_cap_mplane = msm_v4l2_s_fmt, + .vidioc_s_fmt_vid_out_mplane = msm_v4l2_s_fmt, + .vidioc_g_fmt_vid_cap = msm_v4l2_g_fmt, + .vidioc_g_fmt_vid_out = msm_v4l2_g_fmt, + .vidioc_g_fmt_vid_cap_mplane = msm_v4l2_g_fmt, + .vidioc_g_fmt_vid_out_mplane = msm_v4l2_g_fmt, + .vidioc_g_selection = msm_v4l2_g_selection, + .vidioc_s_selection = msm_v4l2_s_selection, + .vidioc_reqbufs = msm_v4l2_reqbufs, + .vidioc_querybuf = msm_v4l2_querybuf, + .vidioc_create_bufs = msm_v4l2_create_bufs, + .vidioc_prepare_buf = msm_v4l2_prepare_buf, + .vidioc_qbuf = msm_v4l2_qbuf, + .vidioc_dqbuf = msm_v4l2_dqbuf, + .vidioc_streamon = msm_v4l2_streamon, + 
.vidioc_streamoff = msm_v4l2_streamoff, + .vidioc_queryctrl = msm_v4l2_queryctrl, + .vidioc_querymenu = msm_v4l2_querymenu, + .vidioc_subscribe_event = msm_v4l2_subscribe_event, + .vidioc_unsubscribe_event = msm_v4l2_unsubscribe_event, + .vidioc_try_decoder_cmd = msm_v4l2_try_decoder_cmd, + .vidioc_decoder_cmd = msm_v4l2_decoder_cmd, +}; + +static const struct v4l2_ctrl_ops msm_v4l2_ctrl_ops = { + .s_ctrl = msm_v4l2_op_s_ctrl, + .g_volatile_ctrl = msm_v4l2_op_g_volatile_ctrl, +}; + +static const struct vb2_ops msm_vb2_ops = { + .queue_setup = msm_vb2_queue_setup, + .start_streaming = msm_vb2_start_streaming, + .buf_queue = msm_vb2_buf_queue, + .stop_streaming = msm_vb2_stop_streaming, +}; + +static struct vb2_mem_ops msm_vb2_mem_ops = { + .alloc = msm_vb2_alloc, + .put = msm_vb2_put, + .mmap = msm_vb2_mmap, + .attach_dmabuf = msm_vb2_attach_dmabuf, + .detach_dmabuf = msm_vb2_detach_dmabuf, + .map_dmabuf = msm_vb2_map_dmabuf, + .unmap_dmabuf = msm_vb2_unmap_dmabuf, +}; + +static struct v4l2_m2m_ops msm_v4l2_m2m_ops = { + .device_run = msm_v4l2_m2m_device_run, + .job_abort = msm_v4l2_m2m_job_abort, +}; + +static const struct msm_vidc_compat_handle compat_handle[] = { + { + .compat = "qcom,sm8550-vidc", + .init_platform = msm_vidc_init_platform_sm8550, + .init_iris = msm_vidc_init_iris3, + }, +}; + +static int msm_vidc_init_ops(struct msm_vidc_core *core) +{ + d_vpr_h("%s: initialize ops\n", __func__); + core->v4l2_file_ops = &msm_v4l2_file_operations; + core->v4l2_ioctl_ops_enc = &msm_v4l2_ioctl_ops_enc; + core->v4l2_ioctl_ops_dec = &msm_v4l2_ioctl_ops_dec; + core->v4l2_ctrl_ops = &msm_v4l2_ctrl_ops; + core->vb2_ops = &msm_vb2_ops; + core->vb2_mem_ops = &msm_vb2_mem_ops; + core->v4l2_m2m_ops = &msm_v4l2_m2m_ops; + core->mem_ops = get_mem_ops(); + if (!core->mem_ops) { + d_vpr_e("%s: invalid memory ops\n", __func__); + return -EINVAL; + } + core->res_ops = get_resources_ops(); + if (!core->res_ops) { + d_vpr_e("%s: invalid resource ops\n", __func__); + return -EINVAL; 
+ } + + return 0; +} + +static int msm_vidc_init_platform_variant(struct msm_vidc_core *core) +{ + struct device *dev = NULL; + int i, rc = 0; + + dev = &core->pdev->dev; + + /* select platform based on compatible match */ + for (i = 0; i < ARRAY_SIZE(compat_handle); i++) { + if (of_device_is_compatible(dev->of_node, compat_handle[i].compat)) { + rc = compat_handle[i].init_platform(core); + if (rc) { + d_vpr_e("%s: (%s) init failed with %d\n", + __func__, compat_handle[i].compat, rc); + return rc; + } + break; + } + } + + /* handle unknown compat type */ + if (i == ARRAY_SIZE(compat_handle)) { + d_vpr_e("%s: Unsupported device: (%s)\n", __func__, dev_name(dev)); + return -EINVAL; + } + + return rc; +} + +static int msm_vidc_init_vpu(struct msm_vidc_core *core) +{ + struct device *dev = NULL; + int i, rc = 0; + + dev = &core->pdev->dev; + + /* select platform based on compatible match */ + for (i = 0; i < ARRAY_SIZE(compat_handle); i++) { + if (of_device_is_compatible(dev->of_node, compat_handle[i].compat)) { + rc = compat_handle[i].init_iris(core); + if (rc) { + d_vpr_e("%s: (%s) init failed with %d\n", + __func__, compat_handle[i].compat, rc); + return rc; + } + break; + } + } + + /* handle unknown compat type */ + if (i == ARRAY_SIZE(compat_handle)) { + d_vpr_e("%s: Unsupported device: (%s)\n", __func__, dev_name(dev)); + return -EINVAL; + } + + return rc; +} + +int msm_vidc_init_platform(struct msm_vidc_core *core) +{ + int rc = 0; + struct msm_vidc_platform *platform = NULL; + + platform = devm_kzalloc(&core->pdev->dev, sizeof(struct msm_vidc_platform), + GFP_KERNEL); + if (!platform) { + d_vpr_e("%s: failed to alloc memory for platform\n", __func__); + return -ENOMEM; + } + + core->platform = platform; + + /* selected ops can be re-assigned in platform specific file */ + rc = msm_vidc_init_ops(core); + if (rc) + return rc; + + rc = msm_vidc_init_platform_variant(core); + if (rc) + return rc; + + rc = msm_vidc_init_vpu(core); + + return rc; +} + 
+/****************** control framework utility functions **********************/ + +enum msm_vidc_inst_capability_type msm_vidc_get_cap_id(struct msm_vidc_inst *inst, u32 id) +{ + enum msm_vidc_inst_capability_type i = INST_CAP_NONE + 1; + + enum msm_vidc_inst_capability_type cap_id = INST_CAP_NONE; + + do { + if (inst->capabilities[i].v4l2_id == id) { + cap_id = inst->capabilities[i].cap_id; + break; + } + i++; + } while (i < INST_CAP_MAX); + + return cap_id; +} + +int msm_vidc_update_cap_value(struct msm_vidc_inst *inst, u32 cap_id, + s32 adjusted_val, const char *func) +{ + int prev_value = 0; + + prev_value = inst->capabilities[cap_id].value; + inst->capabilities[cap_id].value = adjusted_val; + + if (prev_value != inst->capabilities[cap_id].value) { + i_vpr_h(inst, + "%s: updated database: name: %s, value: %#x -> %#x\n", + func, cap_name(cap_id), + prev_value, inst->capabilities[cap_id].value); + } + + return 0; +} + +bool is_parent_available(struct msm_vidc_inst *inst, + u32 cap_id, u32 check_parent, const char *func) +{ + int i = 0; + u32 cap_child; + + if (!is_valid_cap_id(cap_id) || !is_valid_cap_id(check_parent)) + return false; + + while (i < MAX_CAP_CHILDREN && + inst->capabilities[check_parent].children[i]) { + cap_child = inst->capabilities[check_parent].children[i]; + if (cap_child == cap_id) + return true; + i++; + } + + i_vpr_e(inst, + "%s: missing parent %s for %s\n", + func, cap_name(check_parent), cap_name(cap_id)); + return false; +} + +int msm_vidc_get_parent_value(struct msm_vidc_inst *inst, + u32 cap_id, u32 parent, s32 *value, const char *func) +{ + int rc = 0; + + if (is_parent_available(inst, cap_id, parent, func)) { + switch (parent) { + case BITRATE_MODE: + *value = inst->hfi_rc_type; + break; + case LAYER_TYPE: + *value = inst->hfi_layer_type; + break; + default: + *value = inst->capabilities[parent].value; + break; + } + } else { + rc = -EINVAL; + } + + return rc; +} + +u32 msm_vidc_get_port_info(struct msm_vidc_inst *inst, + enum 
msm_vidc_inst_capability_type cap_id) +{ + if (inst->capabilities[cap_id].flags & CAP_FLAG_INPUT_PORT && + inst->capabilities[cap_id].flags & CAP_FLAG_OUTPUT_PORT) { + if (inst->bufq[OUTPUT_PORT].vb2q->streaming) + return get_hfi_port(inst, INPUT_PORT); + else + return get_hfi_port(inst, OUTPUT_PORT); + } + + if (inst->capabilities[cap_id].flags & CAP_FLAG_INPUT_PORT) + return get_hfi_port(inst, INPUT_PORT); + else if (inst->capabilities[cap_id].flags & CAP_FLAG_OUTPUT_PORT) + return get_hfi_port(inst, OUTPUT_PORT); + else + return HFI_PORT_NONE; +} + +int msm_vidc_v4l2_menu_to_hfi(struct msm_vidc_inst *inst, + enum msm_vidc_inst_capability_type cap_id, u32 *value) +{ + switch (cap_id) { + case ENTROPY_MODE: + switch (inst->capabilities[cap_id].value) { + case V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CABAC: + *value = 1; + break; + case V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CAVLC: + *value = 0; + break; + default: + *value = 1; + goto set_default; + } + return 0; + default: + i_vpr_e(inst, + "%s: mapping not specified for ctrl_id: %#x\n", + __func__, inst->capabilities[cap_id].v4l2_id); + return -EINVAL; + } + +set_default: + i_vpr_e(inst, + "%s: invalid value %d for ctrl id: %#x. 
Set default: %u\n", + __func__, inst->capabilities[cap_id].value, + inst->capabilities[cap_id].v4l2_id, *value); + return 0; +} + +int msm_vidc_v4l2_to_hfi_enum(struct msm_vidc_inst *inst, + enum msm_vidc_inst_capability_type cap_id, u32 *value) +{ + switch (cap_id) { + case BITRATE_MODE: + *value = inst->hfi_rc_type; + return 0; + case PROFILE: + case LEVEL: + case HEVC_TIER: + case LAYER_TYPE: + if (inst->codec == MSM_VIDC_HEVC) { + switch (inst->capabilities[cap_id].value) { + case V4L2_MPEG_VIDEO_HEVC_HIERARCHICAL_CODING_B: + *value = HFI_HIER_B; + break; + case V4L2_MPEG_VIDEO_HEVC_HIERARCHICAL_CODING_P: + //TODO (AS): check if this is right mapping + *value = HFI_HIER_P_SLIDING_WINDOW; + break; + default: + *value = HFI_HIER_P_SLIDING_WINDOW; + goto set_default; + } + } + return 0; + case ROTATION: + switch (inst->capabilities[cap_id].value) { + case 0: + *value = HFI_ROTATION_NONE; + break; + case 90: + *value = HFI_ROTATION_90; + break; + case 180: + *value = HFI_ROTATION_180; + break; + case 270: + *value = HFI_ROTATION_270; + break; + default: + *value = HFI_ROTATION_NONE; + goto set_default; + } + return 0; + case LF_MODE: + if (inst->codec == MSM_VIDC_HEVC) { + switch (inst->capabilities[cap_id].value) { + case V4L2_MPEG_VIDEO_HEVC_LOOP_FILTER_MODE_ENABLED: + *value = HFI_DEBLOCK_ALL_BOUNDARY; + break; + case V4L2_MPEG_VIDEO_HEVC_LOOP_FILTER_MODE_DISABLED: + *value = HFI_DEBLOCK_DISABLE; + break; + case DB_HEVC_DISABLE_SLICE_BOUNDARY: + *value = HFI_DEBLOCK_DISABLE_AT_SLICE_BOUNDARY; + break; + default: + *value = HFI_DEBLOCK_ALL_BOUNDARY; + goto set_default; + } + } else if (inst->codec == MSM_VIDC_H264) { + switch (inst->capabilities[cap_id].value) { + case V4L2_MPEG_VIDEO_H264_LOOP_FILTER_MODE_ENABLED: + *value = HFI_DEBLOCK_ALL_BOUNDARY; + break; + case V4L2_MPEG_VIDEO_H264_LOOP_FILTER_MODE_DISABLED: + *value = HFI_DEBLOCK_DISABLE; + break; + case DB_H264_DISABLE_SLICE_BOUNDARY: + *value = HFI_DEBLOCK_DISABLE_AT_SLICE_BOUNDARY; + break; + default: + 
*value = HFI_DEBLOCK_ALL_BOUNDARY; + goto set_default; + } + } + return 0; + case NAL_LENGTH_FIELD: + switch (inst->capabilities[cap_id].value) { + case V4L2_MPEG_VIDEO_HEVC_SIZE_4: + *value = HFI_NAL_LENGTH_SIZE_4; + break; + default: + *value = HFI_NAL_LENGTH_STARTCODES; + goto set_default; + } + return 0; + default: + i_vpr_e(inst, + "%s: mapping not specified for ctrl_id: %#x\n", + __func__, inst->capabilities[cap_id].v4l2_id); + return -EINVAL; + } + +set_default: + i_vpr_e(inst, + "%s: invalid value %d for ctrl id: %#x. Set default: %u\n", + __func__, inst->capabilities[cap_id].value, + inst->capabilities[cap_id].v4l2_id, *value); + return 0; +} + +int msm_vidc_packetize_control(struct msm_vidc_inst *inst, + enum msm_vidc_inst_capability_type cap_id, u32 payload_type, + void *hfi_val, u32 payload_size, const char *func) +{ + int rc = 0; + u64 payload = 0; + + if (payload_size > sizeof(u32)) { + i_vpr_e(inst, "%s: payload size is more than u32 for cap[%d] %s\n", + func, cap_id, cap_name(cap_id)); + return -EINVAL; + } + + if (payload_size == sizeof(u32)) + payload = *(u32 *)hfi_val; + else if (payload_size == sizeof(u8)) + payload = *(u8 *)hfi_val; + else if (payload_size == sizeof(u16)) + payload = *(u16 *)hfi_val; + + i_vpr_h(inst, FMT_STRING_SET_CAP, + cap_name(cap_id), inst->capabilities[cap_id].value, payload); + + rc = venus_hfi_session_property(inst, + inst->capabilities[cap_id].hfi_id, + HFI_HOST_FLAGS_NONE, + msm_vidc_get_port_info(inst, cap_id), + payload_type, + hfi_val, + payload_size); + if (rc) { + i_vpr_e(inst, "%s: failed to set cap[%d] %s to fw\n", + func, cap_id, cap_name(cap_id)); + return rc; + } + + return 0; +} + +/*************** End of control framework utility functions ******************/ + +/*********************** Control Adjust functions ****************************/ + +int msm_vidc_adjust_entropy_mode(void *instance, struct v4l2_ctrl *ctrl) +{ + s32 adjusted_value; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + 
s32 profile = -1; + + /* ctrl is always NULL in streamon case */ + adjusted_value = ctrl ? ctrl->val : + inst->capabilities[ENTROPY_MODE].value; + + if (inst->codec != MSM_VIDC_H264) { + i_vpr_e(inst, + "%s: incorrect entry in database. fix the database\n", + __func__); + return 0; + } + + if (msm_vidc_get_parent_value(inst, ENTROPY_MODE, PROFILE, &profile, __func__)) + return -EINVAL; + + if (profile == V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE || + profile == V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE) + adjusted_value = V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CAVLC; + + msm_vidc_update_cap_value(inst, ENTROPY_MODE, adjusted_value, __func__); + + return 0; +} + +int msm_vidc_adjust_bitrate_mode(void *instance, struct v4l2_ctrl *ctrl) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + int lossless, frame_rc, bitrate_mode, frame_skip; + u32 hfi_value = 0; + + bitrate_mode = inst->capabilities[BITRATE_MODE].value; + lossless = inst->capabilities[LOSSLESS].value; + frame_rc = inst->capabilities[FRAME_RC_ENABLE].value; + frame_skip = inst->capabilities[FRAME_SKIP_MODE].value; + + if (lossless) { + hfi_value = HFI_RC_LOSSLESS; + goto update; + } + + if (!frame_rc) { + hfi_value = HFI_RC_OFF; + goto update; + } + + if (bitrate_mode == V4L2_MPEG_VIDEO_BITRATE_MODE_VBR) { + hfi_value = HFI_RC_VBR_CFR; + } else if (bitrate_mode == V4L2_MPEG_VIDEO_BITRATE_MODE_CBR) { + if (frame_skip) + hfi_value = HFI_RC_CBR_VFR; + else + hfi_value = HFI_RC_CBR_CFR; + } else if (bitrate_mode == V4L2_MPEG_VIDEO_BITRATE_MODE_CQ) { + hfi_value = HFI_RC_CQ; + } + +update: + inst->hfi_rc_type = hfi_value; + i_vpr_h(inst, "%s: hfi rc type: %#x\n", + __func__, inst->hfi_rc_type); + + return 0; +} + +int msm_vidc_adjust_profile(void *instance, struct v4l2_ctrl *ctrl) +{ + s32 adjusted_value; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + s32 pix_fmt = -1; + + adjusted_value = ctrl ? 
ctrl->val : inst->capabilities[PROFILE].value; + + /* PIX_FMTS dependency is common across all chipsets. + * Hence, PIX_FMTS must be specified as Parent for HEVC profile. + * Otherwise it would be a database error that should be fixed. + */ + if (msm_vidc_get_parent_value(inst, PROFILE, PIX_FMTS, &pix_fmt, __func__)) + return -EINVAL; + + /* 10 bit profile for 10 bit color format */ + if (pix_fmt == MSM_VIDC_FMT_TP10C || pix_fmt == MSM_VIDC_FMT_P010) + adjusted_value = V4L2_MPEG_VIDEO_HEVC_PROFILE_MAIN_10; + else + /* 8 bit profile for 8 bit color format */ + adjusted_value = V4L2_MPEG_VIDEO_HEVC_PROFILE_MAIN; + + msm_vidc_update_cap_value(inst, PROFILE, adjusted_value, __func__); + + return 0; +} + +int msm_vidc_adjust_ltr_count(void *instance, struct v4l2_ctrl *ctrl) +{ + s32 adjusted_value; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + s32 rc_type = -1, all_intra = 0, pix_fmts = MSM_VIDC_FMT_NONE; + s32 layer_type = -1, enh_layer_count = -1; + u32 num_ref_frames = 0, max_exceeding_ref_frames = 0; + + adjusted_value = ctrl ? 
ctrl->val : inst->capabilities[LTR_COUNT].value; + + if (msm_vidc_get_parent_value(inst, LTR_COUNT, BITRATE_MODE, &rc_type, __func__)) + return -EINVAL; + + if ((rc_type != HFI_RC_OFF && + rc_type != HFI_RC_CBR_CFR && + rc_type != HFI_RC_CBR_VFR)) { + adjusted_value = 0; + i_vpr_h(inst, + "%s: ltr count unsupported, rc_type: %#x\n", + __func__, rc_type); + goto exit; + } + + if (is_valid_cap(inst, ALL_INTRA)) { + if (msm_vidc_get_parent_value(inst, LTR_COUNT, ALL_INTRA, &all_intra, __func__)) + return -EINVAL; + if (all_intra) { + adjusted_value = 0; + goto exit; + } + } + + if (!msm_vidc_get_parent_value(inst, LTR_COUNT, PIX_FMTS, &pix_fmts, __func__)) { + if (is_10bit_colorformat(pix_fmts)) + adjusted_value = 0; + } + + if (!msm_vidc_get_parent_value(inst, LTR_COUNT, ENH_LAYER_COUNT, + &enh_layer_count, __func__) && + !msm_vidc_get_parent_value(inst, LTR_COUNT, LAYER_TYPE, + &layer_type, __func__)) { + if (layer_type == HFI_HIER_P_SLIDING_WINDOW) { + SLIDING_WINDOW_REF_FRAMES(inst->codec, + inst->capabilities[ENH_LAYER_COUNT].value + 1, + adjusted_value, num_ref_frames); + if (num_ref_frames > MAX_ENCODING_REFERNCE_FRAMES) { + /* + * reduce ltr count to avoid num ref + * frames going beyond limit + */ + max_exceeding_ref_frames = num_ref_frames - + MAX_ENCODING_REFERNCE_FRAMES; + if (adjusted_value >= max_exceeding_ref_frames) + adjusted_value -= max_exceeding_ref_frames; + else + adjusted_value = 0; + } + } + i_vpr_h(inst, + "%s: ltr count %d enh_layers %d layer_type %d\n", + __func__, adjusted_value, + inst->capabilities[ENH_LAYER_COUNT].value, + layer_type); + } + +exit: + msm_vidc_update_cap_value(inst, LTR_COUNT, adjusted_value, __func__); + + return 0; +} + +int msm_vidc_adjust_use_ltr(void *instance, struct v4l2_ctrl *ctrl) +{ + s32 adjusted_value, ltr_count; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + adjusted_value = ctrl ? 
ctrl->val : inst->capabilities[USE_LTR].value;
+
+ /*
+ * Since USE_LTR is set only dynamically, and LTR_COUNT is a static
+ * control, there is no need to make LTR_COUNT a parent of USE_LTR:
+ * the LTR_COUNT value is always up to date by the time USE_LTR is
+ * set dynamically.
+ */
+ ltr_count = inst->capabilities[LTR_COUNT].value;
+ if (!ltr_count)
+ return 0;
+
+ if (adjusted_value <= 0 ||
+ adjusted_value > ((1 << ltr_count) - 1)) {
+ /*
+ * USE_LTR is a bitmask value, hence it must be
+ * > 0 and <= (2 ^ LTR_COUNT) - 1
+ */
+ i_vpr_e(inst, "%s: invalid value %d\n",
+ __func__, adjusted_value);
+ return 0;
+ }
+
+ /* USE_LTR value is a bitmask value */
+ msm_vidc_update_cap_value(inst, USE_LTR, adjusted_value, __func__);
+
+ return 0;
+}
+
+int msm_vidc_adjust_mark_ltr(void *instance, struct v4l2_ctrl *ctrl)
+{
+ s32 adjusted_value, ltr_count;
+ struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance;
+
+ adjusted_value = ctrl ? ctrl->val : inst->capabilities[MARK_LTR].value;
+
+ /*
+ * Since MARK_LTR is set only dynamically, and LTR_COUNT is a static
+ * control, there is no need to make LTR_COUNT a parent of MARK_LTR:
+ * the LTR_COUNT value is always up to date by the time MARK_LTR is
+ * set dynamically.
+ */
+ ltr_count = inst->capabilities[LTR_COUNT].value;
+ if (!ltr_count)
+ return 0;
+
+ if (adjusted_value < 0 ||
+ adjusted_value > (ltr_count - 1)) {
+ /* MARK_LTR value should be >= 0 and <= (LTR_COUNT - 1) */
+ i_vpr_e(inst, "%s: invalid value %d\n",
+ __func__, adjusted_value);
+ return 0;
+ }
+
+ msm_vidc_update_cap_value(inst, MARK_LTR, adjusted_value, __func__);
+
+ return 0;
+}
+
+int msm_vidc_adjust_delta_based_rc(void *instance, struct v4l2_ctrl *ctrl)
+{
+ s32 adjusted_value;
+ struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance;
+ s32 rc_type = -1;
+
+ adjusted_value = ctrl ? 
ctrl->val : + inst->capabilities[TIME_DELTA_BASED_RC].value; + + if (msm_vidc_get_parent_value(inst, TIME_DELTA_BASED_RC, BITRATE_MODE, &rc_type, __func__)) + return -EINVAL; + + if (rc_type == HFI_RC_OFF || rc_type == HFI_RC_CQ) + adjusted_value = 0; + + msm_vidc_update_cap_value(inst, TIME_DELTA_BASED_RC, adjusted_value, __func__); + + return 0; +} + +int msm_vidc_adjust_output_order(void *instance, struct v4l2_ctrl *ctrl) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + s32 display_delay = -1, display_delay_enable = -1; + u32 adjusted_value; + + adjusted_value = ctrl ? ctrl->val : + inst->capabilities[OUTPUT_ORDER].value; + + if (msm_vidc_get_parent_value(inst, OUTPUT_ORDER, DISPLAY_DELAY, + &display_delay, __func__) || + msm_vidc_get_parent_value(inst, OUTPUT_ORDER, DISPLAY_DELAY_ENABLE, + &display_delay_enable, __func__)) + return -EINVAL; + + if (display_delay_enable && !display_delay) + adjusted_value = 1; + + msm_vidc_update_cap_value(inst, OUTPUT_ORDER, adjusted_value, __func__); + + return 0; +} + +int msm_vidc_adjust_input_buf_host_max_count(void *instance, struct v4l2_ctrl *ctrl) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + u32 adjusted_value; + + adjusted_value = ctrl ? ctrl->val : + inst->capabilities[INPUT_BUF_HOST_MAX_COUNT].value; + + msm_vidc_update_cap_value(inst, INPUT_BUF_HOST_MAX_COUNT, + adjusted_value, __func__); + + return 0; +} + +int msm_vidc_adjust_output_buf_host_max_count(void *instance, struct v4l2_ctrl *ctrl) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + u32 adjusted_value; + + adjusted_value = ctrl ? 
ctrl->val : + inst->capabilities[OUTPUT_BUF_HOST_MAX_COUNT].value; + + msm_vidc_update_cap_value(inst, OUTPUT_BUF_HOST_MAX_COUNT, + adjusted_value, __func__); + + return 0; +} + +int msm_vidc_adjust_transform_8x8(void *instance, struct v4l2_ctrl *ctrl) +{ + s32 adjusted_value; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + s32 profile = -1; + + adjusted_value = ctrl ? ctrl->val : + inst->capabilities[TRANSFORM_8X8].value; + + if (inst->codec != MSM_VIDC_H264) { + i_vpr_e(inst, + "%s: incorrect entry in database. fix the database\n", + __func__); + return 0; + } + + if (msm_vidc_get_parent_value(inst, TRANSFORM_8X8, PROFILE, &profile, __func__)) + return -EINVAL; + + if (profile != V4L2_MPEG_VIDEO_H264_PROFILE_HIGH && + profile != V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_HIGH) + adjusted_value = 0; + + msm_vidc_update_cap_value(inst, TRANSFORM_8X8, adjusted_value, __func__); + + return 0; +} + +int msm_vidc_adjust_chroma_qp_index_offset(void *instance, struct v4l2_ctrl *ctrl) +{ + s32 adjusted_value; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + adjusted_value = ctrl ? 
ctrl->val : + inst->capabilities[CHROMA_QP_INDEX_OFFSET].value; + + if (adjusted_value != MIN_CHROMA_QP_OFFSET) + adjusted_value = MAX_CHROMA_QP_OFFSET; + + msm_vidc_update_cap_value(inst, CHROMA_QP_INDEX_OFFSET, + adjusted_value, __func__); + + return 0; +} + +static bool msm_vidc_check_all_layer_bitrate_set(struct msm_vidc_inst *inst) +{ + bool layer_bitrate_set = true; + u32 cap_id = 0, i, enh_layer_count; + u32 layer_br_caps[6] = {L0_BR, L1_BR, L2_BR, L3_BR, L4_BR, L5_BR}; + + enh_layer_count = inst->capabilities[ENH_LAYER_COUNT].value; + + for (i = 0; i <= enh_layer_count; i++) { + if (i >= ARRAY_SIZE(layer_br_caps)) + break; + cap_id = layer_br_caps[i]; + if (!(inst->capabilities[cap_id].flags & CAP_FLAG_CLIENT_SET)) { + layer_bitrate_set = false; + break; + } + } + + return layer_bitrate_set; +} + +static u32 msm_vidc_get_cumulative_bitrate(struct msm_vidc_inst *inst) +{ + int i; + u32 cap_id = 0; + u32 cumulative_br = 0; + s32 enh_layer_count; + u32 layer_br_caps[6] = {L0_BR, L1_BR, L2_BR, L3_BR, L4_BR, L5_BR}; + + enh_layer_count = inst->capabilities[ENH_LAYER_COUNT].value; + + for (i = 0; i <= enh_layer_count; i++) { + if (i >= ARRAY_SIZE(layer_br_caps)) + break; + cap_id = layer_br_caps[i]; + cumulative_br += inst->capabilities[cap_id].value; + } + + return cumulative_br; +} + +int msm_vidc_adjust_slice_count(void *instance, struct v4l2_ctrl *ctrl) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + struct v4l2_format *output_fmt; + s32 adjusted_value, rc_type = -1, slice_mode, all_intra = 0, + enh_layer_count = 0; + u32 slice_val, mbpf = 0, mbps = 0, max_mbpf = 0, max_mbps = 0, bitrate = 0; + u32 update_cap, max_avg_slicesize, output_width, output_height; + u32 min_width, min_height, max_width, max_height, fps; + + slice_mode = ctrl ? 
ctrl->val : + inst->capabilities[SLICE_MODE].value; + + if (slice_mode == V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_SINGLE) + return 0; + + bitrate = inst->capabilities[BIT_RATE].value; + + if (msm_vidc_get_parent_value(inst, SLICE_MODE, + BITRATE_MODE, &rc_type, __func__) || + msm_vidc_get_parent_value(inst, SLICE_MODE, + ENH_LAYER_COUNT, &enh_layer_count, __func__)) + return -EINVAL; + + if (enh_layer_count && msm_vidc_check_all_layer_bitrate_set(inst)) + bitrate = msm_vidc_get_cumulative_bitrate(inst); + + fps = inst->capabilities[FRAME_RATE].value >> 16; + if (fps > MAX_SLICES_FRAME_RATE || + (rc_type != HFI_RC_OFF && + rc_type != HFI_RC_CBR_CFR && + rc_type != HFI_RC_CBR_VFR && + rc_type != HFI_RC_VBR_CFR)) { + adjusted_value = V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_SINGLE; + update_cap = SLICE_MODE; + i_vpr_h(inst, + "%s: slice unsupported, fps: %u, rc_type: %#x\n", + __func__, fps, rc_type); + goto exit; + } + + if (is_valid_cap(inst, ALL_INTRA)) { + if (msm_vidc_get_parent_value(inst, SLICE_MODE, + ALL_INTRA, &all_intra, __func__)) + return -EINVAL; + + if (all_intra == 1) { + adjusted_value = V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_SINGLE; + update_cap = SLICE_MODE; + i_vpr_h(inst, + "%s: slice unsupported, all_intra %d\n", __func__, all_intra); + goto exit; + } + } + + output_fmt = &inst->fmts[OUTPUT_PORT]; + output_width = output_fmt->fmt.pix_mp.width; + output_height = output_fmt->fmt.pix_mp.height; + + max_width = (slice_mode == V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_MB) ? + MAX_MB_SLICE_WIDTH : MAX_BYTES_SLICE_WIDTH; + max_height = (slice_mode == V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_MB) ? + MAX_MB_SLICE_HEIGHT : MAX_BYTES_SLICE_HEIGHT; + min_width = (inst->codec == MSM_VIDC_HEVC) ? 
+ MIN_HEVC_SLICE_WIDTH : MIN_AVC_SLICE_WIDTH;
+ min_height = MIN_SLICE_HEIGHT;
+
+ /*
+ * For V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_MB:
+ * - width >= 384 and height >= 128
+ * - width and height <= 4096
+ * For V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_BYTES:
+ * - width >= 192 and height >= 128
+ * - width and height <= 1920
+ */
+ if (output_width < min_width || output_height < min_height ||
+ output_width > max_width || output_height > max_height) {
+ adjusted_value = V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_SINGLE;
+ update_cap = SLICE_MODE;
+ i_vpr_h(inst,
+ "%s: slice unsupported, codec: %#x wxh: [%dx%d]\n",
+ __func__, inst->codec, output_width, output_height);
+ goto exit;
+ }
+
+ mbpf = NUM_MBS_PER_FRAME(output_height, output_width);
+ mbps = NUM_MBS_PER_SEC(output_height, output_width, fps);
+ max_mbpf = NUM_MBS_PER_FRAME(max_height, max_width);
+ max_mbps = NUM_MBS_PER_SEC(max_height, max_width, MAX_SLICES_FRAME_RATE);
+
+ if (mbpf > max_mbpf || mbps > max_mbps) {
+ adjusted_value = V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_SINGLE;
+ update_cap = SLICE_MODE;
+ i_vpr_h(inst,
+ "%s: Unsupported, mbpf[%u] > max[%u], mbps[%u] > max[%u]\n",
+ __func__, mbpf, max_mbpf, mbps, max_mbps);
+ goto exit;
+ }
+
+ if (slice_mode == V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_MB) {
+ update_cap = SLICE_MAX_MB;
+ slice_val = inst->capabilities[SLICE_MAX_MB].value;
+ slice_val = max(slice_val, mbpf / MAX_SLICES_PER_FRAME);
+ } else {
+ slice_val = inst->capabilities[SLICE_MAX_BYTES].value;
+ update_cap = SLICE_MAX_BYTES;
+ if (rc_type != HFI_RC_OFF) {
+ max_avg_slicesize = ((bitrate / fps) / 8) /
+ MAX_SLICES_PER_FRAME;
+ slice_val = max(slice_val, max_avg_slicesize);
+ }
+ }
+ adjusted_value = slice_val;
+
+exit:
+ msm_vidc_update_cap_value(inst, update_cap, adjusted_value, __func__);
+
+ return 0;
+}
+
+static int msm_vidc_adjust_static_layer_count_and_type(struct msm_vidc_inst *inst,
+ s32 layer_count)
+{
+ bool hb_requested = false;
+
+ if (!layer_count) {
+ i_vpr_h(inst, "client did not enable 
layer encoding\n"); + goto exit; + } + + if (inst->hfi_rc_type == HFI_RC_CQ) { + i_vpr_h(inst, "rc type is CQ, disabling layer encoding\n"); + layer_count = 0; + goto exit; + } + + if (inst->codec == MSM_VIDC_H264) { + if (!inst->capabilities[LAYER_ENABLE].value) { + layer_count = 0; + goto exit; + } + + hb_requested = (inst->capabilities[LAYER_TYPE].value == + V4L2_MPEG_VIDEO_H264_HIERARCHICAL_CODING_B) ? + true : false; + } else if (inst->codec == MSM_VIDC_HEVC) { + hb_requested = (inst->capabilities[LAYER_TYPE].value == + V4L2_MPEG_VIDEO_HEVC_HIERARCHICAL_CODING_B) ? + true : false; + } + + if (hb_requested && inst->hfi_rc_type != HFI_RC_VBR_CFR) { + i_vpr_h(inst, + "%s: HB layer encoding is supported for VBR rc only\n", + __func__); + layer_count = 0; + goto exit; + } + + /* decide hfi layer type */ + if (hb_requested) { + inst->hfi_layer_type = HFI_HIER_B; + } else { + /* HP requested */ + inst->hfi_layer_type = HFI_HIER_P_SLIDING_WINDOW; + if (inst->codec == MSM_VIDC_H264 && + inst->hfi_rc_type == HFI_RC_VBR_CFR) + inst->hfi_layer_type = HFI_HIER_P_HYBRID_LTR; + } + + /* sanitize layer count based on layer type and codec, and rc type */ + if (inst->hfi_layer_type == HFI_HIER_B) { + if (layer_count > MAX_ENH_LAYER_HB) + layer_count = MAX_ENH_LAYER_HB; + } else if (inst->hfi_layer_type == HFI_HIER_P_HYBRID_LTR) { + if (layer_count > MAX_AVC_ENH_LAYER_HYBRID_HP) + layer_count = MAX_AVC_ENH_LAYER_HYBRID_HP; + } else if (inst->hfi_layer_type == HFI_HIER_P_SLIDING_WINDOW) { + if (inst->codec == MSM_VIDC_H264) { + if (layer_count > MAX_AVC_ENH_LAYER_SLIDING_WINDOW) + layer_count = MAX_AVC_ENH_LAYER_SLIDING_WINDOW; + } else if (inst->codec == MSM_VIDC_HEVC) { + if (inst->hfi_rc_type == HFI_RC_VBR_CFR) { + if (layer_count > MAX_HEVC_VBR_ENH_LAYER_SLIDING_WINDOW) + layer_count = MAX_HEVC_VBR_ENH_LAYER_SLIDING_WINDOW; + } else { + if (layer_count > MAX_HEVC_NON_VBR_ENH_LAYER_SLIDING_WINDOW) + layer_count = MAX_HEVC_NON_VBR_ENH_LAYER_SLIDING_WINDOW; + } + } + } + +exit: 
+ msm_vidc_update_cap_value(inst, ENH_LAYER_COUNT, layer_count, __func__); + inst->capabilities[ENH_LAYER_COUNT].max = layer_count; + return 0; +} + +int msm_vidc_adjust_layer_count(void *instance, struct v4l2_ctrl *ctrl) +{ + int rc = 0; + + s32 client_layer_count; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + client_layer_count = ctrl ? ctrl->val : + inst->capabilities[ENH_LAYER_COUNT].value; + + if (!is_parent_available(inst, ENH_LAYER_COUNT, BITRATE_MODE, __func__)) + return -EINVAL; + + if (!inst->bufq[OUTPUT_PORT].vb2q->streaming) { + rc = msm_vidc_adjust_static_layer_count_and_type(inst, client_layer_count); + if (rc) + goto exit; + } else { + if (inst->hfi_rc_type == HFI_RC_CBR_CFR || + inst->hfi_rc_type == HFI_RC_CBR_VFR) { + i_vpr_h(inst, + "%s: ignoring dynamic layer count change for CBR mode\n", + __func__); + goto exit; + } + + if (inst->hfi_layer_type == HFI_HIER_P_HYBRID_LTR || + inst->hfi_layer_type == HFI_HIER_P_SLIDING_WINDOW) { + /* dynamic layer count change is only supported for HP */ + if (client_layer_count > + inst->capabilities[ENH_LAYER_COUNT].max) + client_layer_count = + inst->capabilities[ENH_LAYER_COUNT].max; + + msm_vidc_update_cap_value(inst, ENH_LAYER_COUNT, + client_layer_count, __func__); + } + } + +exit: + return rc; +} + +int msm_vidc_adjust_gop_size(void *instance, struct v4l2_ctrl *ctrl) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + s32 adjusted_value, enh_layer_count = -1; + u32 min_gop_size, num_subgops; + + adjusted_value = ctrl ? ctrl->val : inst->capabilities[GOP_SIZE].value; + + if (msm_vidc_get_parent_value(inst, GOP_SIZE, + ENH_LAYER_COUNT, &enh_layer_count, __func__)) + return -EINVAL; + + if (!enh_layer_count) + goto exit; + + /* + * Layer encoding needs GOP size to be multiple of subgop size + * And subgop size is 2 ^ number of enhancement layers. 
+ */ + + /* v4l2 layer count is the number of enhancement layers */ + min_gop_size = 1 << enh_layer_count; + num_subgops = (adjusted_value + (min_gop_size >> 1)) / + min_gop_size; + if (num_subgops) + adjusted_value = num_subgops * min_gop_size; + else + adjusted_value = min_gop_size; + +exit: + msm_vidc_update_cap_value(inst, GOP_SIZE, adjusted_value, __func__); + return 0; +} + +int msm_vidc_adjust_b_frame(void *instance, struct v4l2_ctrl *ctrl) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + s32 adjusted_value, enh_layer_count = -1; + const u32 max_bframe_size = 7; + + adjusted_value = ctrl ? ctrl->val : inst->capabilities[B_FRAME].value; + + if (msm_vidc_get_parent_value(inst, B_FRAME, + ENH_LAYER_COUNT, &enh_layer_count, __func__)) + return -EINVAL; + + if (!enh_layer_count || inst->hfi_layer_type != HFI_HIER_B) { + adjusted_value = 0; + goto exit; + } + + adjusted_value = (1 << enh_layer_count) - 1; + /* Allowed Bframe values are 0, 1, 3, 7 */ + if (adjusted_value > max_bframe_size) + adjusted_value = max_bframe_size; + +exit: + msm_vidc_update_cap_value(inst, B_FRAME, adjusted_value, __func__); + return 0; +} + +int msm_vidc_adjust_bitrate(void *instance, struct v4l2_ctrl *ctrl) +{ + int i, rc = 0; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + s32 adjusted_value, enh_layer_count; + u32 cumulative_bitrate = 0, cap_id = 0, cap_value = 0; + u32 layer_br_caps[6] = {L0_BR, L1_BR, L2_BR, L3_BR, L4_BR, L5_BR}; + u32 max_bitrate = 0; + + /* ignore layer bitrate when total bitrate is set */ + if (inst->capabilities[BIT_RATE].flags & CAP_FLAG_CLIENT_SET) { + /* + * For static case, ctrl is null. + * For dynamic case, only BIT_RATE cap uses this adjust function. + * Hence, no need to check for ctrl id to be BIT_RATE control, and not + * any of layer bitrate controls. + */ + adjusted_value = ctrl ? 
ctrl->val : inst->capabilities[BIT_RATE].value; + msm_vidc_update_cap_value(inst, BIT_RATE, adjusted_value, __func__); + + return 0; + } + + if (inst->bufq[OUTPUT_PORT].vb2q->streaming) + return 0; + + if (msm_vidc_get_parent_value(inst, BIT_RATE, + ENH_LAYER_COUNT, &enh_layer_count, __func__)) + return -EINVAL; + + /* get max bit rate for current session config*/ + max_bitrate = msm_vidc_get_max_bitrate(inst); + if (inst->capabilities[BIT_RATE].value > max_bitrate) + msm_vidc_update_cap_value(inst, BIT_RATE, max_bitrate, __func__); + + /* + * ENH_LAYER_COUNT cap max is positive only if + * layer encoding is enabled during streamon. + */ + if (inst->capabilities[ENH_LAYER_COUNT].max) { + if (!msm_vidc_check_all_layer_bitrate_set(inst)) { + i_vpr_h(inst, + "%s: client did not set all layer bitrates\n", + __func__); + return 0; + } + + cumulative_bitrate = msm_vidc_get_cumulative_bitrate(inst); + + /* cap layer bitrates to max supported bitrate */ + if (cumulative_bitrate > max_bitrate) { + u32 decrement_in_value = 0; + u32 decrement_in_percent = ((cumulative_bitrate - max_bitrate) * 100) / + max_bitrate; + + cumulative_bitrate = 0; + for (i = 0; i <= enh_layer_count; i++) { + if (i >= ARRAY_SIZE(layer_br_caps)) + break; + cap_id = layer_br_caps[i]; + cap_value = inst->capabilities[cap_id].value; + + decrement_in_value = (cap_value * + decrement_in_percent) / 100; + cumulative_bitrate += (cap_value - decrement_in_value); + + /* + * cap value for the L*_BR is changed. Hence, update cap, + * and add to FW_LIST to set new values to firmware. 
+ */ + msm_vidc_update_cap_value(inst, cap_id, + (cap_value - decrement_in_value), + __func__); + } + } + + i_vpr_h(inst, + "%s: update BIT_RATE with cumulative bitrate\n", + __func__); + msm_vidc_update_cap_value(inst, BIT_RATE, + cumulative_bitrate, __func__); + } + + return rc; +} + +int msm_vidc_adjust_layer_bitrate(void *instance, struct v4l2_ctrl *ctrl) +{ + int rc = 0; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + u32 cumulative_bitrate = 0; + u32 client_set_cap_id = INST_CAP_NONE; + u32 old_br = 0, new_br = 0, exceeded_br = 0; + s32 max_bitrate; + + if (!ctrl) + return 0; + + /* ignore layer bitrate when total bitrate is set */ + if (inst->capabilities[BIT_RATE].flags & CAP_FLAG_CLIENT_SET) + return 0; + + /* + * This is no-op function because layer bitrates were already adjusted + * in msm_vidc_adjust_bitrate function + */ + if (!inst->bufq[OUTPUT_PORT].vb2q->streaming) + return 0; + + /* + * ENH_LAYER_COUNT cap max is positive only if + * layer encoding is enabled during streamon. 
+ */
+ if (!inst->capabilities[ENH_LAYER_COUNT].max) {
+ i_vpr_e(inst, "%s: layers not enabled\n", __func__);
+ return -EINVAL;
+ }
+
+ if (!msm_vidc_check_all_layer_bitrate_set(inst)) {
+ i_vpr_h(inst,
+ "%s: client did not set all layer bitrates\n",
+ __func__);
+ return 0;
+ }
+
+ client_set_cap_id = msm_vidc_get_cap_id(inst, ctrl->id);
+ if (client_set_cap_id == INST_CAP_NONE) {
+ i_vpr_e(inst, "%s: could not find cap_id for ctrl %s\n",
+ __func__, ctrl->name);
+ return -EINVAL;
+ }
+
+ cumulative_bitrate = msm_vidc_get_cumulative_bitrate(inst);
+ max_bitrate = inst->capabilities[BIT_RATE].max;
+ old_br = inst->capabilities[client_set_cap_id].value;
+ new_br = ctrl->val;
+
+ /*
+ * new bitrate is not supposed to cause cumulative bitrate to
+ * exceed max supported bitrate
+ */
+
+ if ((cumulative_bitrate - old_br + new_br) > max_bitrate) {
+ /* adjust new bitrate */
+ exceeded_br = (cumulative_bitrate - old_br + new_br) - max_bitrate;
+ new_br = ctrl->val - exceeded_br;
+ }
+ msm_vidc_update_cap_value(inst, client_set_cap_id, new_br, __func__);
+
+ /* adjust total bitrate cap */
+ i_vpr_h(inst,
+ "%s: update BIT_RATE with cumulative bitrate\n",
+ __func__);
+ msm_vidc_update_cap_value(inst, BIT_RATE,
+ msm_vidc_get_cumulative_bitrate(inst), __func__);
+
+ return rc;
+}
+
+int msm_vidc_adjust_peak_bitrate(void *instance, struct v4l2_ctrl *ctrl)
+{
+ s32 adjusted_value;
+ struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance;
+ s32 rc_type = -1, bitrate = -1;
+
+ adjusted_value = ctrl ? 
ctrl->val : + inst->capabilities[PEAK_BITRATE].value; + + if (msm_vidc_get_parent_value(inst, PEAK_BITRATE, + BITRATE_MODE, &rc_type, __func__)) + return -EINVAL; + + if (rc_type != HFI_RC_CBR_CFR && + rc_type != HFI_RC_CBR_VFR) + return 0; + + if (msm_vidc_get_parent_value(inst, PEAK_BITRATE, + BIT_RATE, &bitrate, __func__)) + return -EINVAL; + + /* Peak Bitrate should be larger than or equal to avg bitrate */ + if (inst->capabilities[PEAK_BITRATE].flags & CAP_FLAG_CLIENT_SET) { + if (adjusted_value < bitrate) + adjusted_value = bitrate; + } else { + adjusted_value = inst->capabilities[BIT_RATE].value; + } + + msm_vidc_update_cap_value(inst, PEAK_BITRATE, adjusted_value, __func__); + + return 0; +} + +static int msm_vidc_adjust_hevc_qp(struct msm_vidc_inst *inst, + enum msm_vidc_inst_capability_type cap_id) +{ + s32 pix_fmt = -1; + + if (!(inst->codec == MSM_VIDC_HEVC)) { + i_vpr_e(inst, + "%s: incorrect cap[%d] %s entry in database, fix database\n", + __func__, cap_id, cap_name(cap_id)); + return -EINVAL; + } + + if (msm_vidc_get_parent_value(inst, cap_id, PIX_FMTS, &pix_fmt, __func__)) + return -EINVAL; + + if (pix_fmt == MSM_VIDC_FMT_P010 || pix_fmt == MSM_VIDC_FMT_TP10C) + goto exit; + + CAP_TO_8BIT_QP(inst->capabilities[cap_id].value); + if (cap_id == MIN_FRAME_QP) { + CAP_TO_8BIT_QP(inst->capabilities[I_FRAME_MIN_QP].value); + CAP_TO_8BIT_QP(inst->capabilities[P_FRAME_MIN_QP].value); + CAP_TO_8BIT_QP(inst->capabilities[B_FRAME_MIN_QP].value); + } else if (cap_id == MAX_FRAME_QP) { + CAP_TO_8BIT_QP(inst->capabilities[I_FRAME_MAX_QP].value); + CAP_TO_8BIT_QP(inst->capabilities[P_FRAME_MAX_QP].value); + CAP_TO_8BIT_QP(inst->capabilities[B_FRAME_MAX_QP].value); + } + +exit: + return 0; +} + +int msm_vidc_adjust_hevc_min_qp(void *instance, struct v4l2_ctrl *ctrl) +{ + int rc = 0; + + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + if (ctrl) + msm_vidc_update_cap_value(inst, MIN_FRAME_QP, ctrl->val, __func__); + + rc = 
msm_vidc_adjust_hevc_qp(inst, MIN_FRAME_QP); + + return rc; +} + +int msm_vidc_adjust_hevc_max_qp(void *instance, struct v4l2_ctrl *ctrl) +{ + int rc = 0; + + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + if (ctrl) + msm_vidc_update_cap_value(inst, MAX_FRAME_QP, ctrl->val, __func__); + + rc = msm_vidc_adjust_hevc_qp(inst, MAX_FRAME_QP); + + return rc; +} + +int msm_vidc_adjust_hevc_i_frame_qp(void *instance, struct v4l2_ctrl *ctrl) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + if (ctrl) + msm_vidc_update_cap_value(inst, I_FRAME_QP, ctrl->val, __func__); + + return msm_vidc_adjust_hevc_qp(inst, I_FRAME_QP); +} + +int msm_vidc_adjust_hevc_p_frame_qp(void *instance, struct v4l2_ctrl *ctrl) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + if (ctrl) + msm_vidc_update_cap_value(inst, P_FRAME_QP, ctrl->val, __func__); + + return msm_vidc_adjust_hevc_qp(inst, P_FRAME_QP); +} + +int msm_vidc_adjust_hevc_b_frame_qp(void *instance, struct v4l2_ctrl *ctrl) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + if (ctrl) + msm_vidc_update_cap_value(inst, B_FRAME_QP, ctrl->val, __func__); + + return msm_vidc_adjust_hevc_qp(inst, B_FRAME_QP); +} + +int msm_vidc_adjust_all_intra(void *instance, struct v4l2_ctrl *ctrl) +{ + s32 adjusted_value; + struct msm_vidc_core *core; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + s32 gop_size = -1, bframe = -1; + u32 width, height, fps, mbps, max_mbps; + + adjusted_value = inst->capabilities[ALL_INTRA].value; + + if (msm_vidc_get_parent_value(inst, ALL_INTRA, GOP_SIZE, &gop_size, __func__) || + msm_vidc_get_parent_value(inst, ALL_INTRA, B_FRAME, &bframe, __func__)) + return -EINVAL; + + width = inst->crop.width; + height = inst->crop.height; + fps = msm_vidc_get_fps(inst); + mbps = NUM_MBS_PER_SEC(height, width, fps); + core = inst->core; + max_mbps = core->capabilities[MAX_MBPS_ALL_INTRA].value; + + if (mbps > max_mbps) { + 
adjusted_value = 0;
+ i_vpr_h(inst, "%s: mbps %d exceeds max supported mbps %d\n",
+ __func__, mbps, max_mbps);
+ goto exit;
+ }
+
+ if (!gop_size && !bframe)
+ adjusted_value = 1;
+
+exit:
+ msm_vidc_update_cap_value(inst, ALL_INTRA, adjusted_value, __func__);
+
+ return 0;
+}
+
+int msm_vidc_adjust_bitrate_boost(void *instance, struct v4l2_ctrl *ctrl)
+{
+ s32 adjusted_value;
+ struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance;
+ s32 min_quality = -1, rc_type = -1;
+ u32 max_bitrate = 0, bitrate = 0;
+
+ adjusted_value = ctrl ? ctrl->val :
+ inst->capabilities[BITRATE_BOOST].value;
+
+ if (inst->bufq[OUTPUT_PORT].vb2q->streaming)
+ return 0;
+
+ if (msm_vidc_get_parent_value(inst, BITRATE_BOOST,
+ MIN_QUALITY, &min_quality, __func__) ||
+ msm_vidc_get_parent_value(inst, BITRATE_BOOST,
+ BITRATE_MODE, &rc_type, __func__))
+ return -EINVAL;
+
+ /*
+ * Bitrate Boost is supported only for VBR rc type.
+ * Hence, do not adjust or set to firmware for non VBR rc's
+ */
+ if (rc_type != HFI_RC_VBR_CFR) {
+ adjusted_value = 0;
+ goto adjust;
+ }
+
+ if (min_quality) {
+ adjusted_value = MAX_BITRATE_BOOST;
+ goto adjust;
+ }
+
+ max_bitrate = msm_vidc_get_max_bitrate(inst);
+ bitrate = inst->capabilities[BIT_RATE].value;
+ if (adjusted_value) {
+ if ((bitrate + bitrate / (100 / adjusted_value)) > max_bitrate) {
+ i_vpr_h(inst,
+ "%s: bitrate %d is beyond max bitrate %d, remove bitrate boost\n",
+ __func__, bitrate, max_bitrate);
+ adjusted_value = 0;
+ }
+ }
+
+adjust:
+ msm_vidc_update_cap_value(inst, BITRATE_BOOST, adjusted_value, __func__);
+
+ return 0;
+}
+
+int msm_vidc_adjust_min_quality(void *instance, struct v4l2_ctrl *ctrl)
+{
+ s32 adjusted_value;
+ struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance;
+ s32 rc_type = -1, enh_layer_count = -1, pix_fmts = -1;
+ u32 width, height, frame_rate;
+ struct v4l2_format *f;
+
+ adjusted_value = ctrl ? 
ctrl->val : inst->capabilities[MIN_QUALITY].value; + + /* + * Although MIN_QUALITY is static, one of its parents, + * ENH_LAYER_COUNT is dynamic cap. Hence, dynamic call + * may be made for MIN_QUALITY via ENH_LAYER_COUNT. + * Therefore, below streaming check is required to avoid + * runtime modification of MIN_QUALITY. + */ + if (inst->bufq[OUTPUT_PORT].vb2q->streaming) + return 0; + + if (msm_vidc_get_parent_value(inst, MIN_QUALITY, + BITRATE_MODE, &rc_type, __func__) || + msm_vidc_get_parent_value(inst, MIN_QUALITY, + ENH_LAYER_COUNT, &enh_layer_count, __func__)) + return -EINVAL; + + /* + * Min Quality is supported only for VBR rc type. + * Hence, do not adjust or set to firmware for non VBR rc's + */ + if (rc_type != HFI_RC_VBR_CFR) { + adjusted_value = 0; + goto update_and_exit; + } + + frame_rate = inst->capabilities[FRAME_RATE].value >> 16; + f = &inst->fmts[OUTPUT_PORT]; + width = f->fmt.pix_mp.width; + height = f->fmt.pix_mp.height; + + /* + * VBR Min Quality not supported for: + * - HEVC 10bit + * - ROI support + * - HP encoding + * - Resolution beyond 1080P + * (It will fall back to CQCAC 25% or 0% (CAC) or CQCAC-OFF) + */ + if (inst->codec == MSM_VIDC_HEVC) { + if (msm_vidc_get_parent_value(inst, MIN_QUALITY, + PIX_FMTS, &pix_fmts, __func__)) + return -EINVAL; + + if (is_10bit_colorformat(pix_fmts)) { + i_vpr_h(inst, + "%s: min quality is supported only for 8 bit\n", + __func__); + adjusted_value = 0; + goto update_and_exit; + } + } + + if (res_is_greater_than(width, height, 1920, 1080)) { + i_vpr_h(inst, "%s: unsupported res, wxh %ux%u\n", + __func__, width, height); + adjusted_value = 0; + goto update_and_exit; + } + + if (frame_rate > 60) { + i_vpr_h(inst, "%s: unsupported fps %u\n", + __func__, frame_rate); + adjusted_value = 0; + goto update_and_exit; + } + + if (enh_layer_count > 0 && inst->hfi_layer_type != HFI_HIER_B) { + i_vpr_h(inst, + "%s: min quality not supported for HP encoding\n", + __func__); + adjusted_value = 0; + goto 
update_and_exit;
+ }
+
+ /* Above conditions are met. Hence enable min quality */
+ adjusted_value = MAX_SUPPORTED_MIN_QUALITY;
+
+update_and_exit:
+ msm_vidc_update_cap_value(inst, MIN_QUALITY, adjusted_value, __func__);
+
+ return 0;
+}
+
+int msm_vidc_adjust_dec_slice_mode(void *instance, struct v4l2_ctrl *ctrl)
+{
+ struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance;
+ u32 adjusted_value = 0;
+ s32 picture_order = -1;
+
+ adjusted_value = ctrl ? ctrl->val : inst->capabilities[SLICE_DECODE].value;
+
+ if (msm_vidc_get_parent_value(inst, SLICE_DECODE, OUTPUT_ORDER,
+ &picture_order, __func__))
+ return -EINVAL;
+
+ if (!picture_order)
+ adjusted_value = 0;
+
+ msm_vidc_update_cap_value(inst, SLICE_DECODE,
+ adjusted_value, __func__);
+
+ return 0;
+}
+
+int msm_vidc_adjust_ir_period(void *instance, struct v4l2_ctrl *ctrl)
+{
+ s32 adjusted_value, all_intra = 0,
+ pix_fmts = MSM_VIDC_FMT_NONE;
+ struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance;
+
+ adjusted_value = ctrl ? ctrl->val : inst->capabilities[IR_PERIOD].value;
+
+ if (msm_vidc_get_parent_value(inst, IR_PERIOD, ALL_INTRA,
+ &all_intra, __func__))
+ return -EINVAL;
+
+ if (all_intra) {
+ adjusted_value = 0;
+ i_vpr_h(inst, "%s: intra refresh unsupported, all intra: %d\n",
+ __func__, all_intra);
+ goto exit;
+ }
+
+ if (inst->codec == MSM_VIDC_HEVC) {
+ if (msm_vidc_get_parent_value(inst, IR_PERIOD,
+ PIX_FMTS, &pix_fmts, __func__))
+ return -EINVAL;
+
+ if (is_10bit_colorformat(pix_fmts)) {
+ i_vpr_h(inst,
+ "%s: intra refresh is supported only for 8 bit\n",
+ __func__);
+ adjusted_value = 0;
+ goto exit;
+ }
+ }
+
+ /*
+ * BITRATE_MODE dependency is NOT common across all chipsets.
+ * Hence, do not return an error if it is not specified as one of the parents. 
+ */ + if (is_parent_available(inst, IR_PERIOD, BITRATE_MODE, __func__) && + inst->hfi_rc_type != HFI_RC_CBR_CFR && + inst->hfi_rc_type != HFI_RC_CBR_VFR) + adjusted_value = 0; + +exit: + msm_vidc_update_cap_value(inst, IR_PERIOD, adjusted_value, __func__); + + return 0; +} + +/******************* End of Control Adjust functions *************************/ + +/************************* Control Set functions *****************************/ + +int msm_vidc_set_header_mode(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + int header_mode, prepend_sps_pps; + u32 hfi_value = 0; + + header_mode = inst->capabilities[cap_id].value; + prepend_sps_pps = inst->capabilities[PREPEND_SPSPPS_TO_IDR].value; + + /* prioritize PREPEND_SPSPPS_TO_IDR mode over other header modes */ + if (prepend_sps_pps) + hfi_value = HFI_SEQ_HEADER_PREFIX_WITH_SYNC_FRAME; + else if (header_mode == V4L2_MPEG_VIDEO_HEADER_MODE_JOINED_WITH_1ST_FRAME) + hfi_value = HFI_SEQ_HEADER_JOINED_WITH_1ST_FRAME; + else + hfi_value = HFI_SEQ_HEADER_SEPERATE_FRAME; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32_ENUM, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_deblock_mode(void *instance, enum msm_vidc_inst_capability_type cap_id) +{ + int rc = 0; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + s32 alpha = 0, beta = 0; + u32 lf_mode, hfi_value = 0, lf_offset = 6; + + rc = msm_vidc_v4l2_to_hfi_enum(inst, LF_MODE, &lf_mode); + if (rc) + return -EINVAL; + + beta = inst->capabilities[LF_BETA].value + lf_offset; + alpha = inst->capabilities[LF_ALPHA].value + lf_offset; + hfi_value = (alpha << 16) | (beta << 8) | lf_mode; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_32_PACKED, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_constant_quality(void *instance, enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst 
*)instance; + u32 hfi_value = 0; + s32 rc_type = -1; + + if (msm_vidc_get_parent_value(inst, cap_id, BITRATE_MODE, &rc_type, __func__)) + return -EINVAL; + + if (rc_type != HFI_RC_CQ) + return 0; + + hfi_value = inst->capabilities[cap_id].value; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_cbr_related_properties(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + u32 hfi_value = 0; + s32 rc_type = -1; + + if (msm_vidc_get_parent_value(inst, cap_id, BITRATE_MODE, &rc_type, __func__)) + return -EINVAL; + + if (rc_type != HFI_RC_CBR_VFR && + rc_type != HFI_RC_CBR_CFR) + return 0; + + hfi_value = inst->capabilities[cap_id].value; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_use_and_mark_ltr(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + u32 hfi_value = 0; + + if (!inst->capabilities[LTR_COUNT].value || + inst->capabilities[cap_id].value == + INVALID_DEFAULT_MARK_OR_USE_LTR) { + i_vpr_h(inst, + "%s: LTR_COUNT: %d %s: %d, cap %s is not set\n", + __func__, inst->capabilities[LTR_COUNT].value, + cap_name(cap_id), + inst->capabilities[cap_id].value, + cap_name(cap_id)); + return 0; + } + + hfi_value = inst->capabilities[cap_id].value; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_min_qp(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + s32 i_frame_qp = 0, p_frame_qp = 0, b_frame_qp = 0, min_qp_enable = 0; + u32 i_qp_enable = 0, p_qp_enable = 0, b_qp_enable = 0; + u32 client_qp_enable = 0, hfi_value = 0, offset = 0; + + if (inst->capabilities[MIN_FRAME_QP].flags & CAP_FLAG_CLIENT_SET) 
+ min_qp_enable = 1; + + if (min_qp_enable || + (inst->capabilities[I_FRAME_MIN_QP].flags & CAP_FLAG_CLIENT_SET)) + i_qp_enable = 1; + if (min_qp_enable || + (inst->capabilities[P_FRAME_MIN_QP].flags & CAP_FLAG_CLIENT_SET)) + p_qp_enable = 1; + if (min_qp_enable || + (inst->capabilities[B_FRAME_MIN_QP].flags & CAP_FLAG_CLIENT_SET)) + b_qp_enable = 1; + + client_qp_enable = i_qp_enable | p_qp_enable << 1 | b_qp_enable << 2; + if (!client_qp_enable) { + i_vpr_h(inst, + "%s: client did not set min qp, cap %s is not set\n", + __func__, cap_name(cap_id)); + return 0; + } + + if (is_10bit_colorformat(inst->capabilities[PIX_FMTS].value)) + offset = 12; + + /* + * The I_FRAME_MIN_QP, P_FRAME_MIN_QP, B_FRAME_MIN_QP and + * MIN_FRAME_QP caps all default to the MIN_QP_10BIT value. + * Hence, if the client sets either MIN_FRAME_QP or any of + * (I_FRAME_MIN_QP, P_FRAME_MIN_QP, B_FRAME_MIN_QP), taking + * the max of the two caps yields the client-set value. + */ + i_frame_qp = max(inst->capabilities[I_FRAME_MIN_QP].value, + inst->capabilities[MIN_FRAME_QP].value) + offset; + p_frame_qp = max(inst->capabilities[P_FRAME_MIN_QP].value, + inst->capabilities[MIN_FRAME_QP].value) + offset; + b_frame_qp = max(inst->capabilities[B_FRAME_MIN_QP].value, + inst->capabilities[MIN_FRAME_QP].value) + offset; + + hfi_value = i_frame_qp | p_frame_qp << 8 | b_frame_qp << 16 | + client_qp_enable << 24; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_32_PACKED, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_max_qp(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + s32 i_frame_qp = 0, p_frame_qp = 0, b_frame_qp = 0, max_qp_enable = 0; + u32 i_qp_enable = 0, p_qp_enable = 0, b_qp_enable = 0; + u32 client_qp_enable = 0, hfi_value = 0, offset = 0; + + if (inst->capabilities[MAX_FRAME_QP].flags & CAP_FLAG_CLIENT_SET) + max_qp_enable = 1; + + if (max_qp_enable || + 
(inst->capabilities[I_FRAME_MAX_QP].flags & CAP_FLAG_CLIENT_SET)) + i_qp_enable = 1; + if (max_qp_enable || + (inst->capabilities[P_FRAME_MAX_QP].flags & CAP_FLAG_CLIENT_SET)) + p_qp_enable = 1; + if (max_qp_enable || + (inst->capabilities[B_FRAME_MAX_QP].flags & CAP_FLAG_CLIENT_SET)) + b_qp_enable = 1; + + client_qp_enable = i_qp_enable | p_qp_enable << 1 | b_qp_enable << 2; + if (!client_qp_enable) { + i_vpr_h(inst, + "%s: client did not set max qp, cap %s is not set\n", + __func__, cap_name(cap_id)); + return 0; + } + + if (is_10bit_colorformat(inst->capabilities[PIX_FMTS].value)) + offset = 12; + + /* + * The I_FRAME_MAX_QP, P_FRAME_MAX_QP, B_FRAME_MAX_QP and + * MAX_FRAME_QP caps all default to the MAX_QP value. + * Hence, if the client sets either MAX_FRAME_QP or any of + * (I_FRAME_MAX_QP, P_FRAME_MAX_QP, B_FRAME_MAX_QP), taking + * the min of the two caps yields the client-set value. + */ + i_frame_qp = min(inst->capabilities[I_FRAME_MAX_QP].value, + inst->capabilities[MAX_FRAME_QP].value) + offset; + p_frame_qp = min(inst->capabilities[P_FRAME_MAX_QP].value, + inst->capabilities[MAX_FRAME_QP].value) + offset; + b_frame_qp = min(inst->capabilities[B_FRAME_MAX_QP].value, + inst->capabilities[MAX_FRAME_QP].value) + offset; + + hfi_value = i_frame_qp | p_frame_qp << 8 | b_frame_qp << 16 | + client_qp_enable << 24; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_32_PACKED, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_frame_qp(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + struct msm_vidc_inst_cap *capab; + s32 i_frame_qp = 0, p_frame_qp = 0, b_frame_qp = 0; + u32 i_qp_enable = 0, p_qp_enable = 0, b_qp_enable = 0; + u32 client_qp_enable = 0, hfi_value = 0, offset = 0; + s32 rc_type = -1; + + capab = inst->capabilities; + + if (msm_vidc_get_parent_value(inst, cap_id, + BITRATE_MODE, &rc_type, __func__)) + return -EINVAL; + + if 
(inst->bufq[OUTPUT_PORT].vb2q->streaming) { + if (rc_type != HFI_RC_OFF) { + i_vpr_h(inst, + "%s: dynamic qp not allowed for rc type %d\n", + __func__, rc_type); + return 0; + } + } + + if (rc_type == HFI_RC_OFF) { + /* Mandatorily set for rc off case */ + i_qp_enable = 1; + p_qp_enable = 1; + b_qp_enable = 1; + } else { + /* Set only if client has set for NON rc off case */ + if (capab[I_FRAME_QP].flags & CAP_FLAG_CLIENT_SET) + i_qp_enable = 1; + if (capab[P_FRAME_QP].flags & CAP_FLAG_CLIENT_SET) + p_qp_enable = 1; + if (capab[B_FRAME_QP].flags & CAP_FLAG_CLIENT_SET) + b_qp_enable = 1; + } + + client_qp_enable = i_qp_enable | p_qp_enable << 1 | b_qp_enable << 2; + if (!client_qp_enable) { + i_vpr_h(inst, + "%s: client did not set frame qp, cap %s is not set\n", + __func__, cap_name(cap_id)); + return 0; + } + + if (is_10bit_colorformat(capab[PIX_FMTS].value)) + offset = 12; + + i_frame_qp = capab[I_FRAME_QP].value + offset; + p_frame_qp = capab[P_FRAME_QP].value + offset; + b_frame_qp = capab[B_FRAME_QP].value + offset; + + hfi_value = i_frame_qp | p_frame_qp << 8 | b_frame_qp << 16 | + client_qp_enable << 24; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_32_PACKED, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_req_sync_frame(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + s32 prepend_spspps; + u32 hfi_value = 0; + + prepend_spspps = inst->capabilities[PREPEND_SPSPPS_TO_IDR].value; + if (prepend_spspps) + hfi_value = HFI_SYNC_FRAME_REQUEST_WITH_PREFIX_SEQ_HDR; + else + hfi_value = HFI_SYNC_FRAME_REQUEST_WITHOUT_SEQ_HDR; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32_ENUM, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_chroma_qp_index_offset(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + u32 hfi_value = 0, chroma_qp_offset_mode = 0, 
chroma_qp = 0; + u32 offset = 12; + + if (inst->capabilities[cap_id].flags & CAP_FLAG_CLIENT_SET) + chroma_qp_offset_mode = HFI_FIXED_CHROMAQP_OFFSET; + else + chroma_qp_offset_mode = HFI_ADAPTIVE_CHROMAQP_OFFSET; + + chroma_qp = inst->capabilities[cap_id].value + offset; + hfi_value = chroma_qp_offset_mode | chroma_qp << 8 | chroma_qp << 16; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_32_PACKED, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_slice_count(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + s32 slice_mode = -1; + u32 hfi_value = 0, set_cap_id = 0; + + slice_mode = inst->capabilities[SLICE_MODE].value; + + if (slice_mode == V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_SINGLE) { + i_vpr_h(inst, "%s: slice mode is: %u, ignore setting to fw\n", + __func__, slice_mode); + return 0; + } + if (slice_mode == V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_MB) { + hfi_value = (inst->codec == MSM_VIDC_HEVC) ? 
+ ((inst->capabilities[SLICE_MAX_MB].value + 3) / 4) : + inst->capabilities[SLICE_MAX_MB].value; + set_cap_id = SLICE_MAX_MB; + } else if (slice_mode == V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_BYTES) { + hfi_value = inst->capabilities[SLICE_MAX_BYTES].value; + set_cap_id = SLICE_MAX_BYTES; + } + + return msm_vidc_packetize_control(inst, set_cap_id, HFI_PAYLOAD_U32, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_nal_length(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + int rc = 0; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + u32 hfi_value = HFI_NAL_LENGTH_STARTCODES; + + if (!inst->capabilities[WITHOUT_STARTCODE].value) { + hfi_value = HFI_NAL_LENGTH_STARTCODES; + } else { + rc = msm_vidc_v4l2_to_hfi_enum(inst, NAL_LENGTH_FIELD, &hfi_value); + if (rc) + return -EINVAL; + } + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32_ENUM, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_layer_count_and_type(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + int rc = 0; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + u32 hfi_layer_count, hfi_layer_type = 0; + + if (!inst->bufq[OUTPUT_PORT].vb2q->streaming) { + /* set layer type */ + hfi_layer_type = inst->hfi_layer_type; + cap_id = LAYER_TYPE; + + rc = msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32_ENUM, + &hfi_layer_type, sizeof(u32), __func__); + if (rc) + goto exit; + } else { + if (inst->hfi_layer_type == HFI_HIER_B) { + i_vpr_l(inst, + "%s: HB dyn layers change is not supported\n", + __func__); + return 0; + } + } + + /* set layer count */ + cap_id = ENH_LAYER_COUNT; + /* hfi baselayer starts from 1 */ + hfi_layer_count = inst->capabilities[ENH_LAYER_COUNT].value + 1; + + rc = msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32, + &hfi_layer_count, sizeof(u32), __func__); + if (rc) + goto exit; + +exit: + return rc; +} + +int msm_vidc_set_gop_size(void *instance, + enum 
msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + u32 hfi_value; + + if (inst->bufq[OUTPUT_PORT].vb2q->streaming) { + if (inst->hfi_layer_type == HFI_HIER_B) { + i_vpr_l(inst, + "%s: HB dyn GOP setting is not supported\n", + __func__); + return 0; + } + } + + hfi_value = inst->capabilities[GOP_SIZE].value; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_bitrate(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + u32 hfi_value = 0; + + /* set Total Bitrate */ + if (inst->capabilities[BIT_RATE].flags & CAP_FLAG_CLIENT_SET) + goto set_total_bitrate; + + /* + * During runtime, if BIT_RATE cap CLIENT_SET flag is not set, + * then this function will be called due to change in ENH_LAYER_COUNT. + * In this case, client did not change bitrate, hence, no need to set + * to fw. + */ + if (inst->bufq[OUTPUT_PORT].vb2q->streaming) + return 0; + +set_total_bitrate: + hfi_value = inst->capabilities[BIT_RATE].value; + return msm_vidc_packetize_control(inst, BIT_RATE, HFI_PAYLOAD_U32, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_layer_bitrate(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + u32 hfi_value = 0; + + if (!inst->bufq[OUTPUT_PORT].vb2q->streaming) + return 0; + + /* set Total Bitrate */ + if (inst->capabilities[BIT_RATE].flags & CAP_FLAG_CLIENT_SET) { + i_vpr_h(inst, + "%s: Total bitrate is set, ignore layer bitrate\n", + __func__); + return 0; + } + + /* + * ENH_LAYER_COUNT cap max is positive only if + * layer encoding is enabled during streamon. 
+ */ + if (!inst->capabilities[ENH_LAYER_COUNT].max || + !msm_vidc_check_all_layer_bitrate_set(inst)) { + i_vpr_h(inst, + "%s: invalid layer bitrate, ignore setting to fw\n", + __func__); + return 0; + } + + /* + * Accept layerwise bitrate but set total bitrate which was already + * adjusted based on layer bitrate + */ + hfi_value = inst->capabilities[BIT_RATE].value; + return msm_vidc_packetize_control(inst, BIT_RATE, HFI_PAYLOAD_U32, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_flip(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + int rc = 0; + u32 hflip, vflip, hfi_value = HFI_DISABLE_FLIP; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + hflip = inst->capabilities[HFLIP].value; + vflip = inst->capabilities[VFLIP].value; + + if (hflip) + hfi_value |= HFI_HORIZONTAL_FLIP; + + if (vflip) + hfi_value |= HFI_VERTICAL_FLIP; + + if (inst->bufq[OUTPUT_PORT].vb2q->streaming) { + if (hfi_value != HFI_DISABLE_FLIP) { + rc = msm_vidc_set_req_sync_frame(inst, REQUEST_I_FRAME); + if (rc) + return rc; + } + } + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32_ENUM, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_rotation(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + int rc = 0; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + u32 hfi_value; + + rc = msm_vidc_v4l2_to_hfi_enum(inst, cap_id, &hfi_value); + if (rc) + return -EINVAL; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_level(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + u32 hfi_value = 0; + + hfi_value = inst->capabilities[cap_id].value; + if (!(inst->capabilities[cap_id].flags & CAP_FLAG_CLIENT_SET)) + hfi_value = HFI_LEVEL_NONE; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32_ENUM, + &hfi_value, sizeof(u32), 
__func__); +} + +int msm_vidc_set_ir_period(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + int rc = 0; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + u32 ir_type = 0; + struct msm_vidc_core *core; + + core = inst->core; + + if (inst->capabilities[IR_TYPE].value == + V4L2_CID_MPEG_VIDEO_INTRA_REFRESH_PERIOD_TYPE_RANDOM) { + if (inst->bufq[OUTPUT_PORT].vb2q->streaming) { + i_vpr_h(inst, "%s: dynamic random intra refresh not allowed\n", + __func__); + return 0; + } + ir_type = HFI_PROP_IR_RANDOM_PERIOD; + } else if (inst->capabilities[IR_TYPE].value == + V4L2_CID_MPEG_VIDEO_INTRA_REFRESH_PERIOD_TYPE_CYCLIC) { + ir_type = HFI_PROP_IR_CYCLIC_PERIOD; + } else { + i_vpr_e(inst, "%s: invalid ir_type %d\n", + __func__, inst->capabilities[IR_TYPE].value); + return -EINVAL; + } + + rc = venus_hfi_set_ir_period(inst, ir_type, cap_id); + if (rc) { + i_vpr_e(inst, "%s: failed to set ir period %d\n", + __func__, inst->capabilities[IR_PERIOD].value); + return rc; + } + + return rc; +} + +int msm_vidc_set_q16(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + u32 hfi_value = 0; + + hfi_value = inst->capabilities[cap_id].value; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_Q16, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_u32(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + int rc = 0; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + u32 hfi_value; + + if (inst->capabilities[cap_id].flags & CAP_FLAG_MENU) { + rc = msm_vidc_v4l2_menu_to_hfi(inst, cap_id, &hfi_value); + if (rc) + return -EINVAL; + } else { + hfi_value = inst->capabilities[cap_id].value; + } + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_u32_packed(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + int rc = 0; + struct msm_vidc_inst *inst 
= (struct msm_vidc_inst *)instance; + u32 hfi_value; + + if (inst->capabilities[cap_id].flags & CAP_FLAG_MENU) { + rc = msm_vidc_v4l2_menu_to_hfi(inst, cap_id, &hfi_value); + if (rc) + return -EINVAL; + } else { + hfi_value = inst->capabilities[cap_id].value; + } + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_32_PACKED, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_u32_enum(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + int rc = 0; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + u32 hfi_value; + + rc = msm_vidc_v4l2_to_hfi_enum(inst, cap_id, &hfi_value); + if (rc) + return -EINVAL; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32_ENUM, + &hfi_value, sizeof(u32), __func__); +} + +int msm_vidc_set_s32(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + s32 hfi_value = 0; + + hfi_value = inst->capabilities[cap_id].value; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_S32, + &hfi_value, sizeof(s32), __func__); +} + +int msm_vidc_set_stage(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + int rc = 0; + u32 stage = 0; + struct msm_vidc_core *core; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + core = inst->core; + + rc = call_session_op(core, decide_work_mode, inst); + if (rc) { + i_vpr_e(inst, "%s: decide_work_mode failed\n", __func__); + return -EINVAL; + } + + stage = inst->capabilities[STAGE].value; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32, + &stage, sizeof(u32), __func__); +} + +int msm_vidc_set_pipe(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + int rc = 0; + u32 pipe; + struct msm_vidc_core *core; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + + core = inst->core; + + rc = call_session_op(core, decide_work_route, inst); + if (rc) { + i_vpr_e(inst, "%s: decide_work_route failed\n", 
+ __func__); + return -EINVAL; + } + + pipe = inst->capabilities[PIPE].value; + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32, + &pipe, sizeof(u32), __func__); +} + +int msm_vidc_set_vui_timing_info(void *instance, + enum msm_vidc_inst_capability_type cap_id) +{ + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + u32 hfi_value; + + /* + * hfi is HFI_PROP_DISABLE_VUI_TIMING_INFO and v4l2 cap is + * V4L2_CID_MPEG_VIDC_VUI_TIMING_INFO and hence reverse + * the hfi_value from cap_id value. + */ + if (inst->capabilities[cap_id].value == 1) + hfi_value = 0; + else + hfi_value = 1; + + return msm_vidc_packetize_control(inst, cap_id, HFI_PAYLOAD_U32, + &hfi_value, sizeof(u32), __func__); +} + +/********************* End of Control Set functions **************************/ From patchwork Fri Jul 28 13:23:37 2023
From: Vikash Garodia Subject: [PATCH 26/33] iris: platform: sm8550: add capability file for sm8550 Date: Fri, 28 Jul 2023 18:53:37 +0530 Message-ID: <1690550624-14642-27-git-send-email-quic_vgarodia@quicinc.com>
From: Dikshita Agarwal This implements all the capabilities supported by sm8550. Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../iris/platform/sm8550/inc/msm_vidc_sm8550.h | 14 + .../iris/platform/sm8550/src/msm_vidc_sm8550.c | 1727 ++++++++++++++++++++ 2 files changed, 1741 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/platform/sm8550/inc/msm_vidc_sm8550.h create mode 100644 drivers/media/platform/qcom/iris/platform/sm8550/src/msm_vidc_sm8550.c diff --git a/drivers/media/platform/qcom/iris/platform/sm8550/inc/msm_vidc_sm8550.h b/drivers/media/platform/qcom/iris/platform/sm8550/inc/msm_vidc_sm8550.h new file mode 100644 index 0000000..0a2f172 --- /dev/null +++ b/drivers/media/platform/qcom/iris/platform/sm8550/inc/msm_vidc_sm8550.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */ + +#ifndef _MSM_VIDC_SM8550_H_ +#define _MSM_VIDC_SM8550_H_ + +#include "msm_vidc_core.h" + +int msm_vidc_init_platform_sm8550(struct msm_vidc_core *core); + +#endif // _MSM_VIDC_SM8550_H_ diff --git a/drivers/media/platform/qcom/iris/platform/sm8550/src/msm_vidc_sm8550.c b/drivers/media/platform/qcom/iris/platform/sm8550/src/msm_vidc_sm8550.c new file mode 100644 index 0000000..2408556 --- /dev/null +++ b/drivers/media/platform/qcom/iris/platform/sm8550/src/msm_vidc_sm8550.c @@ -0,0 +1,1727 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include +#include + +#include "hfi_command.h" +#include "hfi_property.h" +#include "msm_vidc_control.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_iris3.h" +#include "msm_vidc_sm8550.h" +#include "msm_vidc_platform.h" + +/* version: major[24:31], minor[16:23], revision[0:15] */ +#define DRIVER_VERSION 0x04000000 +#define DEFAULT_VIDEO_CONCEAL_COLOR_BLACK 0x8020010 +#define MAX_BASE_LAYER_PRIORITY_ID 63 +#define MAX_OP_POINT 31 +#define MAX_BITRATE 245000000 +#define DEFAULT_BITRATE 20000000 +#define MINIMUM_FPS 1 +#define MAXIMUM_FPS 480 +#define MAXIMUM_DEC_FPS 960 +#define MAX_QP 51 +#define DEFAULT_QP 20 +#define MAX_CONSTANT_QUALITY 100 +#define MIN_SLICE_BYTE_SIZE 512 +#define MAX_SLICE_BYTE_SIZE \ + ((MAX_BITRATE) >> 3) +#define MAX_SLICE_MB_SIZE \ + (((4096 + 15) >> 4) * ((2304 + 15) >> 4)) + +#define ENC MSM_VIDC_ENCODER +#define DEC MSM_VIDC_DECODER +#define H264 MSM_VIDC_H264 +#define HEVC MSM_VIDC_HEVC +#define VP9 MSM_VIDC_VP9 +#define CODECS_ALL (H264 | HEVC | VP9) +#define MAXIMUM_OVERRIDE_VP9_FPS 200 + +static struct codec_info codec_data_sm8550[] = { + { + .v4l2_codec = V4L2_PIX_FMT_H264, + .vidc_codec = MSM_VIDC_H264, + .pixfmt_name = "AVC", + }, + { + .v4l2_codec = V4L2_PIX_FMT_HEVC, + .vidc_codec = MSM_VIDC_HEVC, + .pixfmt_name = "HEVC", + 
}, + { + .v4l2_codec = V4L2_PIX_FMT_VP9, + .vidc_codec = MSM_VIDC_VP9, + .pixfmt_name = "VP9", + }, +}; + +static struct color_format_info color_format_data_sm8550[] = { + { + .v4l2_color_format = V4L2_PIX_FMT_NV12, + .vidc_color_format = MSM_VIDC_FMT_NV12, + .pixfmt_name = "NV12", + }, + { + .v4l2_color_format = V4L2_PIX_FMT_NV21, + .vidc_color_format = MSM_VIDC_FMT_NV21, + .pixfmt_name = "NV21", + }, + { + .v4l2_color_format = V4L2_PIX_FMT_QC08C, + .vidc_color_format = MSM_VIDC_FMT_NV12C, + .pixfmt_name = "NV12C", + }, + { + .v4l2_color_format = V4L2_PIX_FMT_QC10C, + .vidc_color_format = MSM_VIDC_FMT_TP10C, + .pixfmt_name = "TP10C", + }, + { + .v4l2_color_format = V4L2_PIX_FMT_RGBA32, + .vidc_color_format = MSM_VIDC_FMT_RGBA8888, + .pixfmt_name = "RGBA", + }, +}; + +static struct color_primaries_info color_primaries_data_sm8550[] = { + { + .v4l2_color_primaries = V4L2_COLORSPACE_DEFAULT, + .vidc_color_primaries = MSM_VIDC_PRIMARIES_RESERVED, + }, + { + .v4l2_color_primaries = V4L2_COLORSPACE_REC709, + .vidc_color_primaries = MSM_VIDC_PRIMARIES_BT709, + }, + { + .v4l2_color_primaries = V4L2_COLORSPACE_470_SYSTEM_M, + .vidc_color_primaries = MSM_VIDC_PRIMARIES_BT470_SYSTEM_M, + }, + { + .v4l2_color_primaries = V4L2_COLORSPACE_470_SYSTEM_BG, + .vidc_color_primaries = MSM_VIDC_PRIMARIES_BT470_SYSTEM_BG, + }, + { + .v4l2_color_primaries = V4L2_COLORSPACE_SMPTE170M, + .vidc_color_primaries = MSM_VIDC_PRIMARIES_BT601_525, + }, + { + .v4l2_color_primaries = V4L2_COLORSPACE_SMPTE240M, + .vidc_color_primaries = MSM_VIDC_PRIMARIES_SMPTE_ST240M, + }, + { + .v4l2_color_primaries = V4L2_COLORSPACE_BT2020, + .vidc_color_primaries = MSM_VIDC_PRIMARIES_BT2020, + }, + { + .v4l2_color_primaries = V4L2_COLORSPACE_DCI_P3, + .vidc_color_primaries = MSM_VIDC_PRIMARIES_SMPTE_RP431_2, + }, +}; + +static struct transfer_char_info transfer_char_data_sm8550[] = { + { + .v4l2_transfer_char = V4L2_XFER_FUNC_DEFAULT, + .vidc_transfer_char = MSM_VIDC_TRANSFER_RESERVED, + }, + { + 
.v4l2_transfer_char = V4L2_XFER_FUNC_709, + .vidc_transfer_char = MSM_VIDC_TRANSFER_BT709, + }, + { + .v4l2_transfer_char = V4L2_XFER_FUNC_SMPTE240M, + .vidc_transfer_char = MSM_VIDC_TRANSFER_SMPTE_ST240M, + }, + { + .v4l2_transfer_char = V4L2_XFER_FUNC_SRGB, + .vidc_transfer_char = MSM_VIDC_TRANSFER_SRGB_SYCC, + }, + { + .v4l2_transfer_char = V4L2_XFER_FUNC_SMPTE2084, + .vidc_transfer_char = MSM_VIDC_TRANSFER_SMPTE_ST2084_PQ, + }, +}; + +static struct matrix_coeff_info matrix_coeff_data_sm8550[] = { + { + .v4l2_matrix_coeff = V4L2_YCBCR_ENC_DEFAULT, + .vidc_matrix_coeff = MSM_VIDC_MATRIX_COEFF_RESERVED, + }, + { + .v4l2_matrix_coeff = V4L2_YCBCR_ENC_709, + .vidc_matrix_coeff = MSM_VIDC_MATRIX_COEFF_BT709, + }, + { + .v4l2_matrix_coeff = V4L2_YCBCR_ENC_XV709, + .vidc_matrix_coeff = MSM_VIDC_MATRIX_COEFF_BT709, + }, + { + .v4l2_matrix_coeff = V4L2_YCBCR_ENC_XV601, + .vidc_matrix_coeff = MSM_VIDC_MATRIX_COEFF_BT470_SYS_BG_OR_BT601_625, + }, + { + .v4l2_matrix_coeff = V4L2_YCBCR_ENC_601, + .vidc_matrix_coeff = MSM_VIDC_MATRIX_COEFF_BT601_525_BT1358_525_OR_625, + }, + { + .v4l2_matrix_coeff = V4L2_YCBCR_ENC_SMPTE240M, + .vidc_matrix_coeff = MSM_VIDC_MATRIX_COEFF_SMPTE_ST240, + }, + { + .v4l2_matrix_coeff = V4L2_YCBCR_ENC_BT2020, + .vidc_matrix_coeff = MSM_VIDC_MATRIX_COEFF_BT2020_NON_CONSTANT, + }, + { + .v4l2_matrix_coeff = V4L2_YCBCR_ENC_BT2020_CONST_LUM, + .vidc_matrix_coeff = MSM_VIDC_MATRIX_COEFF_BT2020_CONSTANT, + }, +}; + +static struct msm_platform_core_capability core_data_sm8550[] = { + /* {type, value} */ + {ENC_CODECS, H264 | HEVC}, + {DEC_CODECS, H264 | HEVC | VP9}, + {MAX_SESSION_COUNT, 16}, + {MAX_NUM_720P_SESSIONS, 16}, + {MAX_NUM_1080P_SESSIONS, 16}, + {MAX_NUM_4K_SESSIONS, 8}, + {MAX_NUM_8K_SESSIONS, 2}, + {MAX_RT_MBPF, 174080}, /* (8192x4352)/256 + (4096x2176)/256*/ + {MAX_MBPF, 278528}, /* ((8192x4352)/256) * 2 */ + {MAX_MBPS, 7833600}, /* max_load + * 7680x4320@60fps or 3840x2176@240fps + * which is greater than 4096x2176@120fps, + * 
8192x4320@48fps + */ + {MAX_MBPF_HQ, 8160}, /* ((1920x1088)/256) */ + {MAX_MBPS_HQ, 489600}, /* ((1920x1088)/256)@60fps */ + {MAX_MBPF_B_FRAME, 32640}, /* 3840x2176/256 */ + {MAX_MBPS_B_FRAME, 1958400}, /* 3840x2176/256 MBs@60fps */ + {MAX_MBPS_ALL_INTRA, 1044480}, /* 4096x2176/256 MBs@30fps */ + {MAX_ENH_LAYER_COUNT, 5}, + {NUM_VPP_PIPE, 4}, + {SW_PC, 1}, + {FW_UNLOAD, 0}, + {HW_RESPONSE_TIMEOUT, HW_RESPONSE_TIMEOUT_VALUE}, /* 1000 ms */ + {SW_PC_DELAY, SW_PC_DELAY_VALUE }, /* 1500 ms (>HW_RESPONSE_TIMEOUT)*/ + {FW_UNLOAD_DELAY, FW_UNLOAD_DELAY_VALUE }, /* 3000 ms (>SW_PC_DELAY)*/ + {DCVS, 1}, + {DECODE_BATCH, 1}, + {DECODE_BATCH_TIMEOUT, 200}, + {STATS_TIMEOUT_MS, 2000}, + {NON_FATAL_FAULTS, 1}, + {DEVICE_CAPS, V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING}, +}; + +static struct msm_platform_inst_capability instance_cap_data_sm8550[] = { + /* {cap, domain, codec, + * min, max, step_or_mask, value, + * v4l2_id, + * hfi_id, + * flags} + */ + {FRAME_WIDTH, DEC, CODECS_ALL, 96, 8192, 1, 1920}, + + {FRAME_WIDTH, DEC, VP9, 96, 4096, 1, 1920}, + + {FRAME_WIDTH, ENC, CODECS_ALL, 128, 8192, 1, 1920}, + + {FRAME_WIDTH, ENC, HEVC, 96, 8192, 1, 1920}, + + {LOSSLESS_FRAME_WIDTH, ENC, CODECS_ALL, 128, 4096, 1, 1920}, + + {LOSSLESS_FRAME_WIDTH, ENC, HEVC, 96, 4096, 1, 1920}, + + {FRAME_HEIGHT, DEC, CODECS_ALL, 96, 8192, 1, 1080}, + + {FRAME_HEIGHT, DEC, VP9, 96, 4096, 1, 1080}, + + {FRAME_HEIGHT, ENC, CODECS_ALL, 128, 8192, 1, 1080}, + + {FRAME_HEIGHT, ENC, HEVC, 96, 8192, 1, 1080}, + + {LOSSLESS_FRAME_HEIGHT, ENC, CODECS_ALL, 128, 4096, 1, 1080}, + + {LOSSLESS_FRAME_HEIGHT, ENC, HEVC, 96, 4096, 1, 1080}, + + {PIX_FMTS, ENC | DEC, H264, + MSM_VIDC_FMT_NV12, + MSM_VIDC_FMT_NV12C, + MSM_VIDC_FMT_NV12 | MSM_VIDC_FMT_NV21 | MSM_VIDC_FMT_NV12C, + MSM_VIDC_FMT_NV12C}, + + {PIX_FMTS, ENC | DEC, HEVC | VP9, + MSM_VIDC_FMT_NV12, + MSM_VIDC_FMT_TP10C, + MSM_VIDC_FMT_NV12 | MSM_VIDC_FMT_NV21 | MSM_VIDC_FMT_NV12C | + MSM_VIDC_FMT_TP10C, + MSM_VIDC_FMT_NV12C}, + + {MIN_BUFFERS_INPUT, ENC | 
DEC, CODECS_ALL, 0, 64, 1, 4, + V4L2_CID_MIN_BUFFERS_FOR_OUTPUT, + 0, + CAP_FLAG_VOLATILE}, + + {MIN_BUFFERS_OUTPUT, ENC | DEC, CODECS_ALL, + 0, 64, 1, 4, + V4L2_CID_MIN_BUFFERS_FOR_CAPTURE, + HFI_PROP_BUFFER_FW_MIN_OUTPUT_COUNT, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_VOLATILE}, + + /* (8192 * 4320) / 256 */ + {MBPF, ENC, CODECS_ALL, 64, 138240, 1, 138240}, + + {MBPF, ENC, HEVC, 36, 138240, 1, 138240}, + + {MBPF, DEC, CODECS_ALL, 36, 138240, 1, 138240}, + + /* (4096 * 2304) / 256 */ + {MBPF, DEC, VP9, 36, 36864, 1, 36864}, + + /* (4096 * 2304) / 256 */ + {LOSSLESS_MBPF, ENC, H264 | HEVC, 64, 36864, 1, 36864}, + + /* Batch Mode Decode */ + /* TODO: update with new values based on updated voltage corner */ + {BATCH_MBPF, DEC, H264 | HEVC | VP9, 64, 34816, 1, 34816}, + + /* (4096 * 2304) / 256 */ + {BATCH_FPS, DEC, H264 | HEVC | VP9, 1, 120, 1, 120}, + + {FRAME_RATE, ENC | DEC, CODECS_ALL, + (MINIMUM_FPS << 16), (MAXIMUM_FPS << 16), + 1, (DEFAULT_FPS << 16), + 0, + HFI_PROP_FRAME_RATE, + CAP_FLAG_OUTPUT_PORT}, + + {OPERATING_RATE, ENC | DEC, CODECS_ALL, + (MINIMUM_FPS << 16), (MAXIMUM_FPS << 16), + 1, (DEFAULT_FPS << 16)}, + + {INPUT_RATE, ENC | DEC, CODECS_ALL, + (MINIMUM_FPS << 16), INT_MAX, + 1, (DEFAULT_FPS << 16)}, + + {TIMESTAMP_RATE, ENC | DEC, CODECS_ALL, + (MINIMUM_FPS << 16), INT_MAX, + 1, (DEFAULT_FPS << 16)}, + + {SCALE_FACTOR, ENC, H264 | HEVC, 1, 8, 1, 8}, + + {MB_CYCLES_VSP, ENC, CODECS_ALL, 25, 25, 1, 25}, + + {MB_CYCLES_VSP, DEC, CODECS_ALL, 25, 25, 1, 25}, + + {MB_CYCLES_VSP, DEC, VP9, 60, 60, 1, 60}, + + {MB_CYCLES_VPP, ENC, CODECS_ALL, 675, 675, 1, 675}, + + {MB_CYCLES_VPP, DEC, CODECS_ALL, 200, 200, 1, 200}, + + {MB_CYCLES_LP, ENC, CODECS_ALL, 320, 320, 1, 320}, + + {MB_CYCLES_LP, DEC, CODECS_ALL, 200, 200, 1, 200}, + + {MB_CYCLES_FW, ENC | DEC, CODECS_ALL, 489583, 489583, 1, 489583}, + + {MB_CYCLES_FW_VPP, ENC, CODECS_ALL, 48405, 48405, 1, 48405}, + + {MB_CYCLES_FW_VPP, DEC, CODECS_ALL, 66234, 66234, 1, 66234}, + + {HFLIP, ENC, CODECS_ALL, + 0, 1, 1, 
0, + V4L2_CID_HFLIP, + HFI_PROP_FLIP, + CAP_FLAG_OUTPUT_PORT | + CAP_FLAG_INPUT_PORT | CAP_FLAG_DYNAMIC_ALLOWED}, + + {VFLIP, ENC, CODECS_ALL, + 0, 1, 1, 0, + V4L2_CID_VFLIP, + HFI_PROP_FLIP, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {ROTATION, ENC, CODECS_ALL, + 0, 270, 90, 0, + V4L2_CID_ROTATE, + HFI_PROP_ROTATION, + CAP_FLAG_OUTPUT_PORT}, + + {HEADER_MODE, ENC, CODECS_ALL, + V4L2_MPEG_VIDEO_HEADER_MODE_SEPARATE, + V4L2_MPEG_VIDEO_HEADER_MODE_JOINED_WITH_1ST_FRAME, + BIT(V4L2_MPEG_VIDEO_HEADER_MODE_SEPARATE) | + BIT(V4L2_MPEG_VIDEO_HEADER_MODE_JOINED_WITH_1ST_FRAME), + V4L2_MPEG_VIDEO_HEADER_MODE_SEPARATE, + V4L2_CID_MPEG_VIDEO_HEADER_MODE, + HFI_PROP_SEQ_HEADER_MODE, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {PREPEND_SPSPPS_TO_IDR, ENC, CODECS_ALL, + 0, 1, 1, 0, + V4L2_CID_MPEG_VIDEO_PREPEND_SPSPPS_TO_IDR}, + + {WITHOUT_STARTCODE, ENC, CODECS_ALL, + 0, 1, 1, 0, + V4L2_CID_MPEG_VIDEO_HEVC_WITHOUT_STARTCODE, + HFI_PROP_NAL_LENGTH_FIELD, + CAP_FLAG_OUTPUT_PORT}, + + {NAL_LENGTH_FIELD, ENC, CODECS_ALL, + V4L2_MPEG_VIDEO_HEVC_SIZE_0, + V4L2_MPEG_VIDEO_HEVC_SIZE_4, + BIT(V4L2_MPEG_VIDEO_HEVC_SIZE_0) | + BIT(V4L2_MPEG_VIDEO_HEVC_SIZE_4), + V4L2_MPEG_VIDEO_HEVC_SIZE_0, + V4L2_CID_MPEG_VIDEO_HEVC_SIZE_OF_LENGTH_FIELD, + HFI_PROP_NAL_LENGTH_FIELD, + CAP_FLAG_MENU | CAP_FLAG_OUTPUT_PORT}, + + /* TODO: Firmware introduced enumeration type for this + * with and without seq header. + */ + {REQUEST_I_FRAME, ENC, H264 | HEVC, + 0, 0, 0, 0, + V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME, + HFI_PROP_REQUEST_SYNC_FRAME, + CAP_FLAG_INPUT_PORT | CAP_FLAG_DYNAMIC_ALLOWED}, + + /* Enc: Keeping CABAC and CAVLC as same bitrate. 
+ * Dec: there's no use of Bitrate cap + */ + {BIT_RATE, ENC, H264 | HEVC, + 1, MAX_BITRATE, 1, DEFAULT_BITRATE, + V4L2_CID_MPEG_VIDEO_BITRATE, + HFI_PROP_TOTAL_BITRATE, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {BITRATE_MODE, ENC, H264, + V4L2_MPEG_VIDEO_BITRATE_MODE_VBR, + V4L2_MPEG_VIDEO_BITRATE_MODE_CBR, + BIT(V4L2_MPEG_VIDEO_BITRATE_MODE_VBR) | + BIT(V4L2_MPEG_VIDEO_BITRATE_MODE_CBR), + V4L2_MPEG_VIDEO_BITRATE_MODE_VBR, + V4L2_CID_MPEG_VIDEO_BITRATE_MODE, + HFI_PROP_RATE_CONTROL, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {BITRATE_MODE, ENC, HEVC, + V4L2_MPEG_VIDEO_BITRATE_MODE_VBR, + V4L2_MPEG_VIDEO_BITRATE_MODE_CQ, + BIT(V4L2_MPEG_VIDEO_BITRATE_MODE_VBR) | + BIT(V4L2_MPEG_VIDEO_BITRATE_MODE_CBR) | + BIT(V4L2_MPEG_VIDEO_BITRATE_MODE_CQ), + V4L2_MPEG_VIDEO_BITRATE_MODE_VBR, + V4L2_CID_MPEG_VIDEO_BITRATE_MODE, + HFI_PROP_RATE_CONTROL, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {CABAC_MAX_BITRATE, ENC, H264 | HEVC, 0, + 160000000, 1, 160000000}, + + {CAVLC_MAX_BITRATE, ENC, H264, 0, + 220000000, 1, 220000000}, + + {ALLINTRA_MAX_BITRATE, ENC, H264 | HEVC, 0, + 245000000, 1, 245000000}, + + {NUM_COMV, DEC, CODECS_ALL, + 0, INT_MAX, 1, 0}, + + {LOSSLESS, ENC, HEVC, + 0, 1, 1, 0, + V4L2_CID_MPEG_VIDEO_HEVC_LOSSLESS_CU}, + + {FRAME_SKIP_MODE, ENC, H264 | HEVC, + V4L2_MPEG_VIDEO_FRAME_SKIP_MODE_DISABLED, + V4L2_MPEG_VIDEO_FRAME_SKIP_MODE_BUF_LIMIT, + BIT(V4L2_MPEG_VIDEO_FRAME_SKIP_MODE_DISABLED) | + BIT(V4L2_MPEG_VIDEO_FRAME_SKIP_MODE_LEVEL_LIMIT) | + BIT(V4L2_MPEG_VIDEO_FRAME_SKIP_MODE_BUF_LIMIT), + V4L2_MPEG_VIDEO_FRAME_SKIP_MODE_DISABLED, + V4L2_CID_MPEG_VIDEO_FRAME_SKIP_MODE, + 0, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {FRAME_RC_ENABLE, ENC, H264 | HEVC, + 0, 1, 1, 1, + V4L2_CID_MPEG_VIDEO_FRAME_RC_ENABLE}, + + {CONSTANT_QUALITY, ENC, HEVC, + 1, MAX_CONSTANT_QUALITY, 1, 90, + V4L2_CID_MPEG_VIDEO_CONSTANT_QUALITY, + HFI_PROP_CONSTANT_QUALITY, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, 
+ + {GOP_SIZE, ENC, CODECS_ALL, + 0, INT_MAX, 1, 2 * DEFAULT_FPS - 1, + V4L2_CID_MPEG_VIDEO_GOP_SIZE, + HFI_PROP_MAX_GOP_FRAMES, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {GOP_CLOSURE, ENC, H264 | HEVC, + 0, 1, 1, 1, + V4L2_CID_MPEG_VIDEO_GOP_CLOSURE, + 0}, + + {B_FRAME, ENC, H264 | HEVC, + 0, 7, 1, 0, + V4L2_CID_MPEG_VIDEO_B_FRAMES, + HFI_PROP_MAX_B_FRAMES, + CAP_FLAG_OUTPUT_PORT}, + + {LTR_COUNT, ENC, H264 | HEVC, + 0, MAX_LTR_FRAME_COUNT_2, 1, 0, + V4L2_CID_MPEG_VIDEO_LTR_COUNT, + HFI_PROP_LTR_COUNT, + CAP_FLAG_OUTPUT_PORT}, + + {USE_LTR, ENC, H264 | HEVC, + 0, + ((1 << MAX_LTR_FRAME_COUNT_2) - 1), + 0, 0, + V4L2_CID_MPEG_VIDEO_USE_LTR_FRAMES, + HFI_PROP_LTR_USE, + CAP_FLAG_INPUT_PORT | CAP_FLAG_DYNAMIC_ALLOWED}, + + {MARK_LTR, ENC, H264 | HEVC, + INVALID_DEFAULT_MARK_OR_USE_LTR, + (MAX_LTR_FRAME_COUNT_2 - 1), + 1, INVALID_DEFAULT_MARK_OR_USE_LTR, + V4L2_CID_MPEG_VIDEO_FRAME_LTR_INDEX, + HFI_PROP_LTR_MARK, + CAP_FLAG_INPUT_PORT | CAP_FLAG_DYNAMIC_ALLOWED}, + + {BASELAYER_PRIORITY, ENC, H264, + 0, MAX_BASE_LAYER_PRIORITY_ID, 1, 0, + V4L2_CID_MPEG_VIDEO_BASELAYER_PRIORITY_ID, + HFI_PROP_BASELAYER_PRIORITYID, + CAP_FLAG_OUTPUT_PORT}, + + {IR_TYPE, ENC, H264 | HEVC, + V4L2_CID_MPEG_VIDEO_INTRA_REFRESH_PERIOD_TYPE_RANDOM, + V4L2_CID_MPEG_VIDEO_INTRA_REFRESH_PERIOD_TYPE_CYCLIC, + BIT(V4L2_CID_MPEG_VIDEO_INTRA_REFRESH_PERIOD_TYPE_RANDOM) | + BIT(V4L2_CID_MPEG_VIDEO_INTRA_REFRESH_PERIOD_TYPE_CYCLIC), + V4L2_CID_MPEG_VIDEO_INTRA_REFRESH_PERIOD_TYPE_RANDOM, + V4L2_CID_MPEG_VIDEO_INTRA_REFRESH_PERIOD_TYPE, + 0, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {IR_PERIOD, ENC, H264 | HEVC, + 0, INT_MAX, 1, 0, + V4L2_CID_MPEG_VIDEO_INTRA_REFRESH_PERIOD, + 0, + CAP_FLAG_INPUT_PORT | CAP_FLAG_OUTPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {AU_DELIMITER, ENC, H264 | HEVC, + 0, 1, 1, 0, + V4L2_CID_MPEG_VIDEO_AU_DELIMITER, + HFI_PROP_AUD, + CAP_FLAG_OUTPUT_PORT}, + + {MIN_QUALITY, ENC, H264 | HEVC, + 0, MAX_SUPPORTED_MIN_QUALITY, 70, 
MAX_SUPPORTED_MIN_QUALITY, + 0, + HFI_PROP_MAINTAIN_MIN_QUALITY, + CAP_FLAG_OUTPUT_PORT}, + + {VBV_DELAY, ENC, H264 | HEVC, + 200, 300, 100, 300, + V4L2_CID_MPEG_VIDEO_VBV_DELAY, + HFI_PROP_VBV_DELAY, + CAP_FLAG_OUTPUT_PORT}, + + {PEAK_BITRATE, ENC, H264 | HEVC, + /* default peak bitrate is 10% larger than avg bitrate */ + 1, MAX_BITRATE, 1, DEFAULT_BITRATE, + V4L2_CID_MPEG_VIDEO_BITRATE_PEAK, + HFI_PROP_TOTAL_PEAK_BITRATE, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {MIN_FRAME_QP, ENC, H264, + MIN_QP_8BIT, MAX_QP, 1, MIN_QP_8BIT, + V4L2_CID_MPEG_VIDEO_H264_MIN_QP, + HFI_PROP_MIN_QP_PACKED, + CAP_FLAG_OUTPUT_PORT}, + + {MIN_FRAME_QP, ENC, HEVC, + MIN_QP_10BIT, MAX_QP, 1, MIN_QP_10BIT, + V4L2_CID_MPEG_VIDEO_HEVC_MIN_QP, + HFI_PROP_MIN_QP_PACKED, + CAP_FLAG_OUTPUT_PORT}, + + {I_FRAME_MIN_QP, ENC, H264, + MIN_QP_8BIT, MAX_QP, 1, MIN_QP_8BIT, + V4L2_CID_MPEG_VIDEO_H264_I_FRAME_MIN_QP}, + + {I_FRAME_MIN_QP, ENC, HEVC, + MIN_QP_10BIT, MAX_QP, 1, MIN_QP_10BIT, + V4L2_CID_MPEG_VIDEO_HEVC_I_FRAME_MIN_QP}, + + {P_FRAME_MIN_QP, ENC, H264, + MIN_QP_8BIT, MAX_QP, 1, MIN_QP_8BIT, + V4L2_CID_MPEG_VIDEO_H264_P_FRAME_MIN_QP}, + + {P_FRAME_MIN_QP, ENC, HEVC, + MIN_QP_10BIT, MAX_QP, 1, MIN_QP_10BIT, + V4L2_CID_MPEG_VIDEO_HEVC_P_FRAME_MIN_QP}, + + {B_FRAME_MIN_QP, ENC, H264, + MIN_QP_8BIT, MAX_QP, 1, MIN_QP_8BIT, + V4L2_CID_MPEG_VIDEO_H264_B_FRAME_MIN_QP}, + + {B_FRAME_MIN_QP, ENC, HEVC, + MIN_QP_10BIT, MAX_QP, 1, MIN_QP_10BIT, + V4L2_CID_MPEG_VIDEO_HEVC_B_FRAME_MIN_QP}, + + {MAX_FRAME_QP, ENC, H264, + MIN_QP_8BIT, MAX_QP, 1, MAX_QP, + V4L2_CID_MPEG_VIDEO_H264_MAX_QP, + HFI_PROP_MAX_QP_PACKED, + CAP_FLAG_OUTPUT_PORT}, + + {MAX_FRAME_QP, ENC, HEVC, + MIN_QP_10BIT, MAX_QP, 1, MAX_QP, + V4L2_CID_MPEG_VIDEO_HEVC_MAX_QP, + HFI_PROP_MAX_QP_PACKED, + CAP_FLAG_OUTPUT_PORT}, + + {I_FRAME_MAX_QP, ENC, H264, + MIN_QP_8BIT, MAX_QP, 1, MAX_QP, + V4L2_CID_MPEG_VIDEO_H264_I_FRAME_MAX_QP}, + + {I_FRAME_MAX_QP, ENC, HEVC, + MIN_QP_10BIT, MAX_QP, 1, MAX_QP, + 
V4L2_CID_MPEG_VIDEO_HEVC_I_FRAME_MAX_QP}, + + {P_FRAME_MAX_QP, ENC, H264, + MIN_QP_8BIT, MAX_QP, 1, MAX_QP, + V4L2_CID_MPEG_VIDEO_H264_P_FRAME_MAX_QP}, + + {P_FRAME_MAX_QP, ENC, HEVC, + MIN_QP_10BIT, MAX_QP, 1, MAX_QP, + V4L2_CID_MPEG_VIDEO_HEVC_P_FRAME_MAX_QP}, + + {B_FRAME_MAX_QP, ENC, H264, + MIN_QP_8BIT, MAX_QP, 1, MAX_QP, + V4L2_CID_MPEG_VIDEO_H264_B_FRAME_MAX_QP}, + + {B_FRAME_MAX_QP, ENC, HEVC, + MIN_QP_10BIT, MAX_QP, 1, MAX_QP, + V4L2_CID_MPEG_VIDEO_HEVC_B_FRAME_MAX_QP}, + + {I_FRAME_QP, ENC, HEVC, + MIN_QP_10BIT, MAX_QP, 1, DEFAULT_QP, + V4L2_CID_MPEG_VIDEO_HEVC_I_FRAME_QP, + HFI_PROP_QP_PACKED, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {I_FRAME_QP, ENC, H264, + MIN_QP_8BIT, MAX_QP, 1, DEFAULT_QP, + V4L2_CID_MPEG_VIDEO_H264_I_FRAME_QP, + HFI_PROP_QP_PACKED, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {P_FRAME_QP, ENC, HEVC, + MIN_QP_10BIT, MAX_QP, 1, DEFAULT_QP, + V4L2_CID_MPEG_VIDEO_HEVC_P_FRAME_QP, + HFI_PROP_QP_PACKED, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {P_FRAME_QP, ENC, H264, + MIN_QP_8BIT, MAX_QP, 1, DEFAULT_QP, + V4L2_CID_MPEG_VIDEO_H264_P_FRAME_QP, + HFI_PROP_QP_PACKED, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {B_FRAME_QP, ENC, HEVC, + MIN_QP_10BIT, MAX_QP, 1, DEFAULT_QP, + V4L2_CID_MPEG_VIDEO_HEVC_B_FRAME_QP, + HFI_PROP_QP_PACKED, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {B_FRAME_QP, ENC, H264, + MIN_QP_8BIT, MAX_QP, 1, DEFAULT_QP, + V4L2_CID_MPEG_VIDEO_H264_B_FRAME_QP, + HFI_PROP_QP_PACKED, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {LAYER_TYPE, ENC, HEVC, + V4L2_MPEG_VIDEO_HEVC_HIERARCHICAL_CODING_B, + V4L2_MPEG_VIDEO_HEVC_HIERARCHICAL_CODING_P, + BIT(V4L2_MPEG_VIDEO_HEVC_HIERARCHICAL_CODING_B) | + BIT(V4L2_MPEG_VIDEO_HEVC_HIERARCHICAL_CODING_P), + V4L2_MPEG_VIDEO_HEVC_HIERARCHICAL_CODING_P, + 
V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_TYPE, + HFI_PROP_LAYER_ENCODING_TYPE, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {LAYER_TYPE, ENC, H264, + V4L2_MPEG_VIDEO_H264_HIERARCHICAL_CODING_B, + V4L2_MPEG_VIDEO_H264_HIERARCHICAL_CODING_P, + BIT(V4L2_MPEG_VIDEO_H264_HIERARCHICAL_CODING_B) | + BIT(V4L2_MPEG_VIDEO_H264_HIERARCHICAL_CODING_P), + V4L2_MPEG_VIDEO_H264_HIERARCHICAL_CODING_P, + V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_TYPE, + HFI_PROP_LAYER_ENCODING_TYPE, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {LAYER_ENABLE, ENC, H264, + 0, 1, 1, 0, + V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING, + HFI_PROP_LAYER_ENCODING_TYPE, + CAP_FLAG_OUTPUT_PORT}, + + {LAYER_ENABLE, ENC, HEVC, + 0, 1, 1, 0, + 0, + 0, + CAP_FLAG_OUTPUT_PORT}, + + {ENH_LAYER_COUNT, ENC, HEVC, + 0, 5, 1, 0, + V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_LAYER, + HFI_PROP_LAYER_COUNT, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {ENH_LAYER_COUNT, ENC, H264, + 0, 5, 1, 0, + V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_LAYER, + HFI_PROP_LAYER_COUNT, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {L0_BR, ENC, H264, + 1, MAX_BITRATE, 1, DEFAULT_BITRATE, + V4L2_CID_MPEG_VIDEO_H264_HIER_CODING_L0_BR, + HFI_PROP_BITRATE_LAYER1, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {L0_BR, ENC, HEVC, + 1, MAX_BITRATE, 1, DEFAULT_BITRATE, + V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_L0_BR, + HFI_PROP_BITRATE_LAYER1, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {L1_BR, ENC, H264, + 1, MAX_BITRATE, 1, DEFAULT_BITRATE, + V4L2_CID_MPEG_VIDEO_H264_HIER_CODING_L1_BR, + HFI_PROP_BITRATE_LAYER2, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {L1_BR, ENC, HEVC, + 1, MAX_BITRATE, 1, DEFAULT_BITRATE, + V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_L1_BR, + HFI_PROP_BITRATE_LAYER2, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {L2_BR, ENC, H264, + 
1, MAX_BITRATE, 1, DEFAULT_BITRATE, + V4L2_CID_MPEG_VIDEO_H264_HIER_CODING_L2_BR, + HFI_PROP_BITRATE_LAYER3, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {L2_BR, ENC, HEVC, + 1, MAX_BITRATE, 1, DEFAULT_BITRATE, + V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_L2_BR, + HFI_PROP_BITRATE_LAYER3, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {L3_BR, ENC, H264, + 1, MAX_BITRATE, 1, DEFAULT_BITRATE, + V4L2_CID_MPEG_VIDEO_H264_HIER_CODING_L3_BR, + HFI_PROP_BITRATE_LAYER4, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + {L3_BR, ENC, HEVC, + 1, MAX_BITRATE, 1, DEFAULT_BITRATE, + V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_L3_BR, + HFI_PROP_BITRATE_LAYER4, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {L4_BR, ENC, H264, + 1, MAX_BITRATE, 1, DEFAULT_BITRATE, + V4L2_CID_MPEG_VIDEO_H264_HIER_CODING_L4_BR, + HFI_PROP_BITRATE_LAYER5, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {L4_BR, ENC, HEVC, + 1, MAX_BITRATE, 1, DEFAULT_BITRATE, + V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_L4_BR, + HFI_PROP_BITRATE_LAYER5, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {L5_BR, ENC, H264, + 1, MAX_BITRATE, 1, DEFAULT_BITRATE, + V4L2_CID_MPEG_VIDEO_H264_HIER_CODING_L5_BR, + HFI_PROP_BITRATE_LAYER6, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {L5_BR, ENC, HEVC, + 1, MAX_BITRATE, 1, DEFAULT_BITRATE, + V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_L5_BR, + HFI_PROP_BITRATE_LAYER6, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_INPUT_PORT | + CAP_FLAG_DYNAMIC_ALLOWED}, + + {ENTROPY_MODE, ENC, H264, + V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CAVLC, + V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CABAC, + BIT(V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CAVLC) | + BIT(V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CABAC), + V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CABAC, + V4L2_CID_MPEG_VIDEO_H264_ENTROPY_MODE, + HFI_PROP_CABAC_SESSION, + CAP_FLAG_OUTPUT_PORT 
| CAP_FLAG_MENU}, + + {ENTROPY_MODE, DEC, H264 | HEVC | VP9, + V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CAVLC, + V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CABAC, + BIT(V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CAVLC) | + BIT(V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CABAC), + V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CABAC, + 0, + HFI_PROP_CABAC_SESSION}, + + {PROFILE, ENC | DEC, H264, + V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE, + V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_HIGH, + BIT(V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE) | + BIT(V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_HIGH) | + BIT(V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE) | + BIT(V4L2_MPEG_VIDEO_H264_PROFILE_MAIN) | + BIT(V4L2_MPEG_VIDEO_H264_PROFILE_HIGH), + V4L2_MPEG_VIDEO_H264_PROFILE_HIGH, + V4L2_CID_MPEG_VIDEO_H264_PROFILE, + HFI_PROP_PROFILE, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {PROFILE, ENC | DEC, HEVC, + V4L2_MPEG_VIDEO_HEVC_PROFILE_MAIN, + V4L2_MPEG_VIDEO_HEVC_PROFILE_MAIN_10, + BIT(V4L2_MPEG_VIDEO_HEVC_PROFILE_MAIN) | + BIT(V4L2_MPEG_VIDEO_HEVC_PROFILE_MAIN_STILL_PICTURE) | + BIT(V4L2_MPEG_VIDEO_HEVC_PROFILE_MAIN_10), + V4L2_MPEG_VIDEO_HEVC_PROFILE_MAIN, + V4L2_CID_MPEG_VIDEO_HEVC_PROFILE, + HFI_PROP_PROFILE, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {PROFILE, DEC, VP9, + V4L2_MPEG_VIDEO_VP9_PROFILE_0, + V4L2_MPEG_VIDEO_VP9_PROFILE_2, + BIT(V4L2_MPEG_VIDEO_VP9_PROFILE_0) | + BIT(V4L2_MPEG_VIDEO_VP9_PROFILE_2), + V4L2_MPEG_VIDEO_VP9_PROFILE_0, + V4L2_CID_MPEG_VIDEO_VP9_PROFILE, + HFI_PROP_PROFILE, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {LEVEL, ENC, H264, + V4L2_MPEG_VIDEO_H264_LEVEL_1_0, + V4L2_MPEG_VIDEO_H264_LEVEL_6_0, + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_1_0) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_1B) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_1_1) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_1_2) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_1_3) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_2_0) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_2_1) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_2_2) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_3_0) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_3_1) | + 
BIT(V4L2_MPEG_VIDEO_H264_LEVEL_3_2) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_4_0) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_4_1) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_4_2) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_5_0) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_5_1) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_5_2) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_6_0), + V4L2_MPEG_VIDEO_H264_LEVEL_5_0, + V4L2_CID_MPEG_VIDEO_H264_LEVEL, + HFI_PROP_LEVEL, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {LEVEL, ENC, HEVC, + V4L2_MPEG_VIDEO_HEVC_LEVEL_1, + V4L2_MPEG_VIDEO_HEVC_LEVEL_6_2, + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_1) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_2) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_2_1) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_3) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_3_1) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_4) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_4_1) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_5) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_5_1) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_5_2) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_6) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_6_1) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_6_2), + V4L2_MPEG_VIDEO_HEVC_LEVEL_5, + V4L2_CID_MPEG_VIDEO_HEVC_LEVEL, + HFI_PROP_LEVEL, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {LEVEL, DEC, H264, + V4L2_MPEG_VIDEO_H264_LEVEL_1_0, + V4L2_MPEG_VIDEO_H264_LEVEL_6_2, + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_1_0) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_1B) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_1_1) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_1_2) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_1_3) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_2_0) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_2_1) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_2_2) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_3_0) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_3_1) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_3_2) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_4_0) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_4_1) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_4_2) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_5_0) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_5_1) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_5_2) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_6_0) | + 
BIT(V4L2_MPEG_VIDEO_H264_LEVEL_6_1) | + BIT(V4L2_MPEG_VIDEO_H264_LEVEL_6_2), + V4L2_MPEG_VIDEO_H264_LEVEL_6_1, + V4L2_CID_MPEG_VIDEO_H264_LEVEL, + HFI_PROP_LEVEL, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {LEVEL, DEC, HEVC, + V4L2_MPEG_VIDEO_HEVC_LEVEL_1, + V4L2_MPEG_VIDEO_HEVC_LEVEL_6_2, + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_1) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_2) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_2_1) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_3) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_3_1) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_4) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_4_1) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_5) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_5_1) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_5_2) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_6) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_6_1) | + BIT(V4L2_MPEG_VIDEO_HEVC_LEVEL_6_2), + V4L2_MPEG_VIDEO_HEVC_LEVEL_6_1, + V4L2_CID_MPEG_VIDEO_HEVC_LEVEL, + HFI_PROP_LEVEL, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {LEVEL, DEC, VP9, + V4L2_MPEG_VIDEO_VP9_LEVEL_1_0, + V4L2_MPEG_VIDEO_VP9_LEVEL_6_0, + BIT(V4L2_MPEG_VIDEO_VP9_LEVEL_1_0) | + BIT(V4L2_MPEG_VIDEO_VP9_LEVEL_1_1) | + BIT(V4L2_MPEG_VIDEO_VP9_LEVEL_2_0) | + BIT(V4L2_MPEG_VIDEO_VP9_LEVEL_2_1) | + BIT(V4L2_MPEG_VIDEO_VP9_LEVEL_3_0) | + BIT(V4L2_MPEG_VIDEO_VP9_LEVEL_3_1) | + BIT(V4L2_MPEG_VIDEO_VP9_LEVEL_4_0) | + BIT(V4L2_MPEG_VIDEO_VP9_LEVEL_4_1) | + BIT(V4L2_MPEG_VIDEO_VP9_LEVEL_5_0) | + BIT(V4L2_MPEG_VIDEO_VP9_LEVEL_5_1) | + BIT(V4L2_MPEG_VIDEO_VP9_LEVEL_5_2) | + BIT(V4L2_MPEG_VIDEO_VP9_LEVEL_6_0), + V4L2_MPEG_VIDEO_VP9_LEVEL_6_0, + V4L2_CID_MPEG_VIDEO_VP9_LEVEL, + HFI_PROP_LEVEL, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {HEVC_TIER, ENC | DEC, HEVC, + V4L2_MPEG_VIDEO_HEVC_TIER_MAIN, + V4L2_MPEG_VIDEO_HEVC_TIER_HIGH, + BIT(V4L2_MPEG_VIDEO_HEVC_TIER_MAIN) | + BIT(V4L2_MPEG_VIDEO_HEVC_TIER_HIGH), + V4L2_MPEG_VIDEO_HEVC_TIER_HIGH, + V4L2_CID_MPEG_VIDEO_HEVC_TIER, + HFI_PROP_TIER, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {LF_MODE, ENC, H264, + V4L2_MPEG_VIDEO_H264_LOOP_FILTER_MODE_ENABLED, + 
DB_H264_DISABLE_SLICE_BOUNDARY, + BIT(V4L2_MPEG_VIDEO_H264_LOOP_FILTER_MODE_ENABLED) | + BIT(V4L2_MPEG_VIDEO_H264_LOOP_FILTER_MODE_DISABLED) | + BIT(DB_H264_DISABLE_SLICE_BOUNDARY), + V4L2_MPEG_VIDEO_H264_LOOP_FILTER_MODE_ENABLED, + V4L2_CID_MPEG_VIDEO_H264_LOOP_FILTER_MODE, + HFI_PROP_DEBLOCKING_MODE, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {LF_MODE, ENC, HEVC, + V4L2_MPEG_VIDEO_HEVC_LOOP_FILTER_MODE_DISABLED, + DB_HEVC_DISABLE_SLICE_BOUNDARY, + BIT(V4L2_MPEG_VIDEO_HEVC_LOOP_FILTER_MODE_DISABLED) | + BIT(V4L2_MPEG_VIDEO_HEVC_LOOP_FILTER_MODE_ENABLED) | + BIT(DB_HEVC_DISABLE_SLICE_BOUNDARY), + V4L2_MPEG_VIDEO_HEVC_LOOP_FILTER_MODE_ENABLED, + V4L2_CID_MPEG_VIDEO_HEVC_LOOP_FILTER_MODE, + HFI_PROP_DEBLOCKING_MODE, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {LF_ALPHA, ENC, H264, + -6, 6, 1, 0, + V4L2_CID_MPEG_VIDEO_H264_LOOP_FILTER_ALPHA}, + + {LF_ALPHA, ENC, HEVC, + -6, 6, 1, 0, + V4L2_CID_MPEG_VIDEO_HEVC_LF_TC_OFFSET_DIV2}, + + {LF_BETA, ENC, H264, + -6, 6, 1, 0, + V4L2_CID_MPEG_VIDEO_H264_LOOP_FILTER_BETA}, + + {LF_BETA, ENC, HEVC, + -6, 6, 1, 0, + V4L2_CID_MPEG_VIDEO_HEVC_LF_BETA_OFFSET_DIV2}, + + {SLICE_MODE, ENC, H264 | HEVC, + V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_SINGLE, + V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_BYTES, + BIT(V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_SINGLE) | + BIT(V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_MB) | + BIT(V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_BYTES), + V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_SINGLE, + V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MODE, + 0, + CAP_FLAG_OUTPUT_PORT | CAP_FLAG_MENU}, + + {SLICE_MAX_BYTES, ENC, H264 | HEVC, + MIN_SLICE_BYTE_SIZE, MAX_SLICE_BYTE_SIZE, + 1, MIN_SLICE_BYTE_SIZE, + V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MAX_BYTES, + HFI_PROP_MULTI_SLICE_BYTES_COUNT, + CAP_FLAG_OUTPUT_PORT}, + + {SLICE_MAX_MB, ENC, H264 | HEVC, + 1, MAX_SLICE_MB_SIZE, 1, 1, + V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MAX_MB, + HFI_PROP_MULTI_SLICE_MB_COUNT, + CAP_FLAG_OUTPUT_PORT}, + + {MB_RC, ENC, H264 | HEVC, + 0, 1, 1, 1, + V4L2_CID_MPEG_VIDEO_MB_RC_ENABLE, + 0, + 
CAP_FLAG_OUTPUT_PORT}, + + {TRANSFORM_8X8, ENC, H264, + 0, 1, 1, 1, + V4L2_CID_MPEG_VIDEO_H264_8X8_TRANSFORM, + HFI_PROP_8X8_TRANSFORM, + CAP_FLAG_OUTPUT_PORT}, + + {CHROMA_QP_INDEX_OFFSET, ENC, HEVC, + MIN_CHROMA_QP_OFFSET, MAX_CHROMA_QP_OFFSET, + 1, MAX_CHROMA_QP_OFFSET, + V4L2_CID_MPEG_VIDEO_H264_CHROMA_QP_INDEX_OFFSET, + HFI_PROP_CHROMA_QP_OFFSET, + CAP_FLAG_OUTPUT_PORT}, + + {DISPLAY_DELAY_ENABLE, DEC, H264 | HEVC | VP9, + 0, 1, 1, 0, + V4L2_CID_MPEG_VIDEO_DEC_DISPLAY_DELAY_ENABLE, + HFI_PROP_DECODE_ORDER_OUTPUT, + CAP_FLAG_INPUT_PORT}, + + {DISPLAY_DELAY, DEC, H264 | HEVC | VP9, + 0, 1, 1, 0, + V4L2_CID_MPEG_VIDEO_DEC_DISPLAY_DELAY, + HFI_PROP_DECODE_ORDER_OUTPUT, + CAP_FLAG_INPUT_PORT}, + + {OUTPUT_ORDER, DEC, H264 | HEVC | VP9, + 0, 1, 1, 0, + 0, + HFI_PROP_DECODE_ORDER_OUTPUT, + CAP_FLAG_INPUT_PORT}, + + {INPUT_BUF_HOST_MAX_COUNT, ENC | DEC, CODECS_ALL, + DEFAULT_MAX_HOST_BUF_COUNT, DEFAULT_MAX_HOST_BURST_BUF_COUNT, + 1, DEFAULT_MAX_HOST_BUF_COUNT, + 0, + HFI_PROP_BUFFER_HOST_MAX_COUNT, + CAP_FLAG_INPUT_PORT}, + + {OUTPUT_BUF_HOST_MAX_COUNT, ENC | DEC, CODECS_ALL, + DEFAULT_MAX_HOST_BUF_COUNT, DEFAULT_MAX_HOST_BURST_BUF_COUNT, + 1, DEFAULT_MAX_HOST_BUF_COUNT, + 0, + HFI_PROP_BUFFER_HOST_MAX_COUNT, + CAP_FLAG_OUTPUT_PORT}, + + {CONCEAL_COLOR_8BIT, DEC, CODECS_ALL, 0x0, 0xff3fcff, 1, + DEFAULT_VIDEO_CONCEAL_COLOR_BLACK, + V4L2_CID_MPEG_VIDEO_MUTE_YUV, + HFI_PROP_CONCEAL_COLOR_8BIT, + CAP_FLAG_INPUT_PORT}, + + {CONCEAL_COLOR_10BIT, DEC, CODECS_ALL, 0x0, 0x3fffffff, 1, + DEFAULT_VIDEO_CONCEAL_COLOR_BLACK, + V4L2_CID_MPEG_VIDEO_MUTE_YUV, + HFI_PROP_CONCEAL_COLOR_10BIT, + CAP_FLAG_INPUT_PORT}, + + {STAGE, DEC | ENC, CODECS_ALL, + MSM_VIDC_STAGE_1, + MSM_VIDC_STAGE_2, 1, + MSM_VIDC_STAGE_2, + 0, + HFI_PROP_STAGE}, + + {PIPE, DEC | ENC, CODECS_ALL, + MSM_VIDC_PIPE_1, + MSM_VIDC_PIPE_4, 1, + MSM_VIDC_PIPE_4, + 0, + HFI_PROP_PIPE}, + + {POC, DEC, H264, 0, 2, 1, 1, + 0, + HFI_PROP_PIC_ORDER_CNT_TYPE}, + + {QUALITY_MODE, ENC, CODECS_ALL, + MSM_VIDC_MAX_QUALITY_MODE, + 
MSM_VIDC_POWER_SAVE_MODE, 1, + MSM_VIDC_POWER_SAVE_MODE}, + + {CODED_FRAMES, DEC, H264 | HEVC, + CODED_FRAMES_PROGRESSIVE, CODED_FRAMES_INTERLACE, + 1, CODED_FRAMES_PROGRESSIVE, + 0, + HFI_PROP_CODED_FRAMES}, + + {BIT_DEPTH, DEC, CODECS_ALL, BIT_DEPTH_8, BIT_DEPTH_10, 1, BIT_DEPTH_8, + 0, + HFI_PROP_LUMA_CHROMA_BIT_DEPTH}, + + {BITSTREAM_SIZE_OVERWRITE, DEC, CODECS_ALL, 0, INT_MAX, 1, 0, + 0}, + + {DEFAULT_HEADER, DEC, CODECS_ALL, + 0, 1, 1, 0, + 0, + HFI_PROP_DEC_DEFAULT_HEADER}, + + {RAP_FRAME, DEC, CODECS_ALL, + 0, 1, 1, 1, + 0, + HFI_PROP_DEC_START_FROM_RAP_FRAME, + CAP_FLAG_INPUT_PORT}, + + {SEQ_CHANGE_AT_SYNC_FRAME, DEC, CODECS_ALL, + 0, 1, 1, 1, + 0, + HFI_PROP_SEQ_CHANGE_AT_SYNC_FRAME, + CAP_FLAG_INPUT_PORT | CAP_FLAG_DYNAMIC_ALLOWED}, + + {ALL_INTRA, ENC, H264 | HEVC, + 0, 1, 1, 0, + 0, + 0, + CAP_FLAG_OUTPUT_PORT}, +}; + +static struct msm_platform_inst_cap_dependency instance_cap_dependency_data_sm8550[] = { + /* {cap, domain, codec, + * children, + * adjust, set} + */ + + {PIX_FMTS, ENC, H264, + {IR_PERIOD}}, + + {PIX_FMTS, ENC, HEVC, + {PROFILE, MIN_FRAME_QP, MAX_FRAME_QP, I_FRAME_QP, P_FRAME_QP, + B_FRAME_QP, MIN_QUALITY, IR_PERIOD, LTR_COUNT}}, + + {PIX_FMTS, DEC, HEVC, + {PROFILE}}, + + {FRAME_RATE, ENC, CODECS_ALL, + {0}, + NULL, + msm_vidc_set_q16}, + + {HFLIP, ENC, CODECS_ALL, + {0}, + NULL, + msm_vidc_set_flip}, + + {VFLIP, ENC, CODECS_ALL, + {0}, + NULL, + msm_vidc_set_flip}, + + {ROTATION, ENC, CODECS_ALL, + {0}, + NULL, + msm_vidc_set_rotation}, + + {HEADER_MODE, ENC, CODECS_ALL, + {0}, + NULL, + msm_vidc_set_header_mode}, + + {WITHOUT_STARTCODE, ENC, CODECS_ALL, + {0}, + NULL, + msm_vidc_set_nal_length}, + + {REQUEST_I_FRAME, ENC, H264 | HEVC, + {0}, + NULL, + msm_vidc_set_req_sync_frame}, + + {BIT_RATE, ENC, H264, + {PEAK_BITRATE, L0_BR}, + msm_vidc_adjust_bitrate, + msm_vidc_set_bitrate}, + + {BIT_RATE, ENC, HEVC, + {PEAK_BITRATE, L0_BR}, + msm_vidc_adjust_bitrate, + msm_vidc_set_bitrate}, + + {BITRATE_MODE, ENC, H264, + {LTR_COUNT, 
IR_PERIOD, I_FRAME_QP, P_FRAME_QP, + B_FRAME_QP, ENH_LAYER_COUNT, BIT_RATE, + MIN_QUALITY, VBV_DELAY, + PEAK_BITRATE, SLICE_MODE}, + msm_vidc_adjust_bitrate_mode, + msm_vidc_set_u32_enum}, + + {BITRATE_MODE, ENC, HEVC, + {LTR_COUNT, IR_PERIOD, I_FRAME_QP, P_FRAME_QP, + B_FRAME_QP, CONSTANT_QUALITY, ENH_LAYER_COUNT, + BIT_RATE, MIN_QUALITY, VBV_DELAY, + PEAK_BITRATE, SLICE_MODE}, + msm_vidc_adjust_bitrate_mode, + msm_vidc_set_u32_enum}, + + {CONSTANT_QUALITY, ENC, HEVC, + {0}, + NULL, + msm_vidc_set_constant_quality}, + + {GOP_SIZE, ENC, CODECS_ALL, + {ALL_INTRA}, + msm_vidc_adjust_gop_size, + msm_vidc_set_gop_size}, + + {B_FRAME, ENC, H264 | HEVC, + {ALL_INTRA}, + msm_vidc_adjust_b_frame, + msm_vidc_set_u32}, + + {LTR_COUNT, ENC, H264 | HEVC, + {0}, + msm_vidc_adjust_ltr_count, + msm_vidc_set_u32}, + + {USE_LTR, ENC, H264 | HEVC, + {0}, + msm_vidc_adjust_use_ltr, + msm_vidc_set_use_and_mark_ltr}, + + {MARK_LTR, ENC, H264 | HEVC, + {0}, + msm_vidc_adjust_mark_ltr, + msm_vidc_set_use_and_mark_ltr}, + + {IR_PERIOD, ENC, H264 | HEVC, + {0}, + msm_vidc_adjust_ir_period, + msm_vidc_set_ir_period}, + + {AU_DELIMITER, ENC, H264 | HEVC, + {0}, + NULL, + msm_vidc_set_u32}, + + {MIN_QUALITY, ENC, H264, + {0}, + msm_vidc_adjust_min_quality, + msm_vidc_set_u32}, + + {MIN_QUALITY, ENC, HEVC, + {0}, + msm_vidc_adjust_min_quality, + msm_vidc_set_u32}, + + {VBV_DELAY, ENC, H264 | HEVC, + {0}, + NULL, + msm_vidc_set_cbr_related_properties}, + + {PEAK_BITRATE, ENC, H264 | HEVC, + {0}, + msm_vidc_adjust_peak_bitrate, + msm_vidc_set_cbr_related_properties}, + + {MIN_FRAME_QP, ENC, H264, + {0}, + NULL, + msm_vidc_set_min_qp}, + + {MIN_FRAME_QP, ENC, HEVC, + {0}, + msm_vidc_adjust_hevc_min_qp, + msm_vidc_set_min_qp}, + + {MAX_FRAME_QP, ENC, H264, + {0}, + NULL, + msm_vidc_set_max_qp}, + + {MAX_FRAME_QP, ENC, HEVC, + {0}, + msm_vidc_adjust_hevc_max_qp, + msm_vidc_set_max_qp}, + + {I_FRAME_QP, ENC, HEVC, + {0}, + msm_vidc_adjust_hevc_i_frame_qp, + msm_vidc_set_frame_qp}, + + {I_FRAME_QP, 
ENC, H264, + {0}, + NULL, + msm_vidc_set_frame_qp}, + + {P_FRAME_QP, ENC, HEVC, + {0}, + msm_vidc_adjust_hevc_p_frame_qp, + msm_vidc_set_frame_qp}, + + {P_FRAME_QP, ENC, H264, + {0}, + NULL, + msm_vidc_set_frame_qp}, + + {B_FRAME_QP, ENC, HEVC, + {0}, + msm_vidc_adjust_hevc_b_frame_qp, + msm_vidc_set_frame_qp}, + + {B_FRAME_QP, ENC, H264, + {0}, + NULL, + msm_vidc_set_frame_qp}, + + {LAYER_TYPE, ENC, H264 | HEVC, + {LTR_COUNT}}, + + {LAYER_ENABLE, ENC, H264 | HEVC, + {0}}, + + {ENH_LAYER_COUNT, ENC, H264 | HEVC, + {GOP_SIZE, B_FRAME, BIT_RATE, MIN_QUALITY, SLICE_MODE, + LTR_COUNT}, + msm_vidc_adjust_layer_count, + msm_vidc_set_layer_count_and_type}, + + {L0_BR, ENC, H264 | HEVC, + {L1_BR}, + msm_vidc_adjust_layer_bitrate, + msm_vidc_set_layer_bitrate}, + + {L1_BR, ENC, H264 | HEVC, + {L2_BR}, + msm_vidc_adjust_layer_bitrate, + msm_vidc_set_layer_bitrate}, + + {L2_BR, ENC, H264 | HEVC, + {L3_BR}, + msm_vidc_adjust_layer_bitrate, + msm_vidc_set_layer_bitrate}, + + {L3_BR, ENC, H264 | HEVC, + {L4_BR}, + msm_vidc_adjust_layer_bitrate, + msm_vidc_set_layer_bitrate}, + + {L4_BR, ENC, H264 | HEVC, + {L5_BR}, + msm_vidc_adjust_layer_bitrate, + msm_vidc_set_layer_bitrate}, + + {L5_BR, ENC, H264 | HEVC, + {0}, + msm_vidc_adjust_layer_bitrate, + msm_vidc_set_layer_bitrate}, + + {ENTROPY_MODE, ENC, H264, + {BIT_RATE}, + msm_vidc_adjust_entropy_mode, + msm_vidc_set_u32}, + + {PROFILE, ENC, H264, + {ENTROPY_MODE, TRANSFORM_8X8}, + NULL, + msm_vidc_set_u32_enum}, + + {PROFILE, DEC, H264, + {ENTROPY_MODE}, + NULL, + msm_vidc_set_u32_enum}, + + {PROFILE, ENC | DEC, HEVC, + {0}, + msm_vidc_adjust_profile, + msm_vidc_set_u32_enum}, + + {PROFILE, DEC, VP9, + {0}, + NULL, + msm_vidc_set_u32_enum}, + + {LEVEL, DEC, CODECS_ALL, + {0}, + NULL, + msm_vidc_set_u32_enum}, + + {LEVEL, ENC, CODECS_ALL, + {0}, + NULL, + msm_vidc_set_level}, + + {HEVC_TIER, ENC | DEC, HEVC, + {0}, + NULL, + msm_vidc_set_u32_enum}, + + {LF_MODE, ENC, CODECS_ALL, + {0}, + NULL, + msm_vidc_set_deblock_mode}, + + 
{SLICE_MODE, ENC, H264 | HEVC, + {STAGE}, + msm_vidc_adjust_slice_count, + msm_vidc_set_slice_count}, + + {TRANSFORM_8X8, ENC, H264, + {0}, + msm_vidc_adjust_transform_8x8, + msm_vidc_set_u32}, + + {CHROMA_QP_INDEX_OFFSET, ENC, HEVC, + {0}, + msm_vidc_adjust_chroma_qp_index_offset, + msm_vidc_set_chroma_qp_index_offset}, + + {DISPLAY_DELAY_ENABLE, DEC, H264 | HEVC | VP9, + {OUTPUT_ORDER}, + NULL, + NULL}, + + {DISPLAY_DELAY, DEC, H264 | HEVC | VP9, + {OUTPUT_ORDER}, + NULL, + NULL}, + + {OUTPUT_ORDER, DEC, H264 | HEVC | VP9, + {0}, + msm_vidc_adjust_output_order, + msm_vidc_set_u32}, + + {INPUT_BUF_HOST_MAX_COUNT, ENC | DEC, CODECS_ALL, + {0}, + msm_vidc_adjust_input_buf_host_max_count, + msm_vidc_set_u32}, + + {INPUT_BUF_HOST_MAX_COUNT, ENC, H264 | HEVC, + {0}, + msm_vidc_adjust_input_buf_host_max_count, + msm_vidc_set_u32}, + + {OUTPUT_BUF_HOST_MAX_COUNT, ENC | DEC, CODECS_ALL, + {0}, + msm_vidc_adjust_output_buf_host_max_count, + msm_vidc_set_u32}, + + {OUTPUT_BUF_HOST_MAX_COUNT, ENC, H264 | HEVC, + {0}, + msm_vidc_adjust_output_buf_host_max_count, + msm_vidc_set_u32}, + + {CONCEAL_COLOR_8BIT, DEC, CODECS_ALL, + {0}, + NULL, + msm_vidc_set_u32_packed}, + + {CONCEAL_COLOR_10BIT, DEC, CODECS_ALL, + {0}, + NULL, + msm_vidc_set_u32_packed}, + + {STAGE, ENC | DEC, CODECS_ALL, + {0}, + NULL, + msm_vidc_set_stage}, + + {STAGE, ENC, H264 | HEVC, + {0}, + NULL, + msm_vidc_set_stage}, + + {STAGE, DEC, H264 | HEVC | VP9, + {0}, + NULL, + msm_vidc_set_stage}, + + {PIPE, DEC | ENC, CODECS_ALL, + {0}, + NULL, + msm_vidc_set_pipe}, + + {RAP_FRAME, DEC, CODECS_ALL, + {0}, + NULL, + msm_vidc_set_u32}, + + {ALL_INTRA, ENC, H264 | HEVC, + {LTR_COUNT, IR_PERIOD, SLICE_MODE, BIT_RATE}, + msm_vidc_adjust_all_intra, + NULL}, +}; + +/* Default UBWC config for LPDDR5 */ +static struct msm_vidc_ubwc_config_data ubwc_config_sm8550[] = { + UBWC_CONFIG(8, 32, 16, 0, 1, 1, 1), +}; + +static struct msm_vidc_format_capability format_data_sm8550 = { + .codec_info = codec_data_sm8550, + 
.codec_info_size = ARRAY_SIZE(codec_data_sm8550), + .color_format_info = color_format_data_sm8550, + .color_format_info_size = ARRAY_SIZE(color_format_data_sm8550), + .color_prim_info = color_primaries_data_sm8550, + .color_prim_info_size = ARRAY_SIZE(color_primaries_data_sm8550), + .transfer_char_info = transfer_char_data_sm8550, + .transfer_char_info_size = ARRAY_SIZE(transfer_char_data_sm8550), + .matrix_coeff_info = matrix_coeff_data_sm8550, + .matrix_coeff_info_size = ARRAY_SIZE(matrix_coeff_data_sm8550), +}; + +/* name, min_kbps, max_kbps */ +static const struct bw_table sm8550_bw_table[] = { + { "venus-cnoc", 1000, 1000 }, + { "venus-ddr", 1000, 15000000 }, +}; + +/* name */ +static const struct pd_table sm8550_pd_table[] = { + { "iris-ctl" }, + { "vcodec" }, +}; + +/* name */ +static const char * const sm8550_opp_table[] = { "mx", "mmcx", NULL }; + +/* name, clock id, scaling */ +static const struct clk_table sm8550_clk_table[] = { + { "gcc_video_axi0", GCC_VIDEO_AXI0_CLK, 0 }, + { "core_clk", VIDEO_CC_MVS0C_CLK, 0 }, + { "vcodec_clk", VIDEO_CC_MVS0_CLK, 1 }, +}; + +/* name, exclusive_release */ +static const struct clk_rst_table sm8550_clk_reset_table[] = { + { "video_axi_reset", 0 }, +}; + +/* name, start, size, secure, dma_coherent, region, dma_mask */ +const struct context_bank_table sm8550_context_bank_table[] = { + {"qcom,vidc,cb-ns", 0x25800000, 0xba800000, 0, 1, MSM_VIDC_NON_SECURE, 0xe0000000 - 1}, + {"qcom,vidc,cb-sec-non-pxl", 0x01000000, 0x24800000, 1, 0, MSM_VIDC_SECURE_NONPIXEL, 0 }, +}; + +/* freq */ +static struct freq_table sm8550_freq_table[] = { + {533333333}, {444000000}, {366000000}, {338000000}, {240000000} +}; + +/* register, value, mask */ +static const struct reg_preset_table sm8550_reg_preset_table[] = { + { 0xB0088, 0x0, 0x11 }, +}; + +/* decoder properties */ +static const u32 sm8550_vdec_psc_avc[] = { + HFI_PROP_BITSTREAM_RESOLUTION, + HFI_PROP_CROP_OFFSETS, + HFI_PROP_CODED_FRAMES, + HFI_PROP_BUFFER_FW_MIN_OUTPUT_COUNT, +
HFI_PROP_PIC_ORDER_CNT_TYPE, + HFI_PROP_PROFILE, + HFI_PROP_LEVEL, + HFI_PROP_SIGNAL_COLOR_INFO, +}; + +static const u32 sm8550_vdec_psc_hevc[] = { + HFI_PROP_BITSTREAM_RESOLUTION, + HFI_PROP_CROP_OFFSETS, + HFI_PROP_LUMA_CHROMA_BIT_DEPTH, + HFI_PROP_BUFFER_FW_MIN_OUTPUT_COUNT, + HFI_PROP_PROFILE, + HFI_PROP_LEVEL, + HFI_PROP_TIER, + HFI_PROP_SIGNAL_COLOR_INFO, +}; + +static const u32 sm8550_vdec_psc_vp9[] = { + HFI_PROP_BITSTREAM_RESOLUTION, + HFI_PROP_CROP_OFFSETS, + HFI_PROP_LUMA_CHROMA_BIT_DEPTH, + HFI_PROP_BUFFER_FW_MIN_OUTPUT_COUNT, + HFI_PROP_PROFILE, + HFI_PROP_LEVEL, +}; + +static const u32 sm8550_vdec_input_properties_avc[] = { + HFI_PROP_NO_OUTPUT, + HFI_PROP_SUBFRAME_INPUT, +}; + +static const u32 sm8550_vdec_input_properties_hevc[] = { + HFI_PROP_NO_OUTPUT, + HFI_PROP_SUBFRAME_INPUT, +}; + +static const u32 sm8550_vdec_input_properties_vp9[] = { + HFI_PROP_NO_OUTPUT, + HFI_PROP_SUBFRAME_INPUT, +}; + +static const u32 sm8550_vdec_output_properties_avc[] = { + HFI_PROP_WORST_COMPRESSION_RATIO, + HFI_PROP_WORST_COMPLEXITY_FACTOR, + HFI_PROP_PICTURE_TYPE, + HFI_PROP_DPB_LIST, + HFI_PROP_CABAC_SESSION, +}; + +static const u32 sm8550_vdec_output_properties_hevc[] = { + HFI_PROP_WORST_COMPRESSION_RATIO, + HFI_PROP_WORST_COMPLEXITY_FACTOR, + HFI_PROP_PICTURE_TYPE, + HFI_PROP_DPB_LIST, +}; + +static const u32 sm8550_vdec_output_properties_vp9[] = { + HFI_PROP_WORST_COMPRESSION_RATIO, + HFI_PROP_WORST_COMPLEXITY_FACTOR, + HFI_PROP_PICTURE_TYPE, + HFI_PROP_DPB_LIST, +}; + +static const struct msm_vidc_platform_data sm8550_data = { + /* resources dependent on other module */ + .bw_tbl = sm8550_bw_table, + .bw_tbl_size = ARRAY_SIZE(sm8550_bw_table), + .clk_tbl = sm8550_clk_table, + .clk_tbl_size = ARRAY_SIZE(sm8550_clk_table), + .clk_rst_tbl = sm8550_clk_reset_table, + .clk_rst_tbl_size = ARRAY_SIZE(sm8550_clk_reset_table), + .subcache_tbl = NULL, + .subcache_tbl_size = 0, + + /* populate context bank */ + .context_bank_tbl = sm8550_context_bank_table, + 
.context_bank_tbl_size = ARRAY_SIZE(sm8550_context_bank_table), + + /* populate power domain and opp table */ + .pd_tbl = sm8550_pd_table, + .pd_tbl_size = ARRAY_SIZE(sm8550_pd_table), + .opp_tbl = sm8550_opp_table, + .opp_tbl_size = ARRAY_SIZE(sm8550_opp_table), + + /* platform specific resources */ + .freq_tbl = sm8550_freq_table, + .freq_tbl_size = ARRAY_SIZE(sm8550_freq_table), + .reg_prst_tbl = sm8550_reg_preset_table, + .reg_prst_tbl_size = ARRAY_SIZE(sm8550_reg_preset_table), + .fwname = "vpu30_4v", + .pas_id = 9, + + /* caps related resources */ + .core_data = core_data_sm8550, + .core_data_size = ARRAY_SIZE(core_data_sm8550), + .inst_cap_data = instance_cap_data_sm8550, + .inst_cap_data_size = ARRAY_SIZE(instance_cap_data_sm8550), + .inst_cap_dependency_data = instance_cap_dependency_data_sm8550, + .inst_cap_dependency_data_size = ARRAY_SIZE(instance_cap_dependency_data_sm8550), + .ubwc_config = ubwc_config_sm8550, + .format_data = &format_data_sm8550, + + /* decoder properties related */ + .psc_avc_tbl = sm8550_vdec_psc_avc, + .psc_avc_tbl_size = ARRAY_SIZE(sm8550_vdec_psc_avc), + .psc_hevc_tbl = sm8550_vdec_psc_hevc, + .psc_hevc_tbl_size = ARRAY_SIZE(sm8550_vdec_psc_hevc), + .psc_vp9_tbl = sm8550_vdec_psc_vp9, + .psc_vp9_tbl_size = ARRAY_SIZE(sm8550_vdec_psc_vp9), + .dec_input_prop_avc = sm8550_vdec_input_properties_avc, + .dec_input_prop_hevc = sm8550_vdec_input_properties_hevc, + .dec_input_prop_vp9 = sm8550_vdec_input_properties_vp9, + .dec_input_prop_size_avc = ARRAY_SIZE(sm8550_vdec_input_properties_avc), + .dec_input_prop_size_hevc = ARRAY_SIZE(sm8550_vdec_input_properties_hevc), + .dec_input_prop_size_vp9 = ARRAY_SIZE(sm8550_vdec_input_properties_vp9), + .dec_output_prop_avc = sm8550_vdec_output_properties_avc, + .dec_output_prop_hevc = sm8550_vdec_output_properties_hevc, + .dec_output_prop_vp9 = sm8550_vdec_output_properties_vp9, + .dec_output_prop_size_avc = ARRAY_SIZE(sm8550_vdec_output_properties_avc), + .dec_output_prop_size_hevc =
ARRAY_SIZE(sm8550_vdec_output_properties_hevc), + .dec_output_prop_size_vp9 = ARRAY_SIZE(sm8550_vdec_output_properties_vp9), +}; + +static int msm_vidc_init_data(struct msm_vidc_core *core) +{ + d_vpr_h("%s: initialize sm8550 data\n", __func__); + + core->platform->data = sm8550_data; + + return 0; +} + +int msm_vidc_init_platform_sm8550(struct msm_vidc_core *core) +{ + return msm_vidc_init_data(core); +} From patchwork Fri Jul 28 13:23:38 2023 X-Patchwork-Submitter: Vikash Garodia X-Patchwork-Id: 13332083 From: Vikash Garodia Subject: [PATCH 27/33] iris: variant: add helper functions for register handling Date: Fri, 28 Jul 2023 18:53:38 +0530 Message-ID: <1690550624-14642-28-git-send-email-quic_vgarodia@quicinc.com> List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Dikshita Agarwal This implements the functions to read and write different registers. Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../iris/variant/common/inc/msm_vidc_variant.h | 22 +++ .../iris/variant/common/src/msm_vidc_variant.c | 163 +++++++++++++++++++++ 2 files changed, 185 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/variant/common/inc/msm_vidc_variant.h create mode 100644 drivers/media/platform/qcom/iris/variant/common/src/msm_vidc_variant.c diff --git a/drivers/media/platform/qcom/iris/variant/common/inc/msm_vidc_variant.h b/drivers/media/platform/qcom/iris/variant/common/inc/msm_vidc_variant.h new file mode 100644 index 0000000..58ba276 --- /dev/null +++ b/drivers/media/platform/qcom/iris/variant/common/inc/msm_vidc_variant.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2022, The Linux Foundation. All rights reserved. + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */ + +#ifndef _MSM_VIDC_VARIANT_H_ +#define _MSM_VIDC_VARIANT_H_ + +#include <linux/types.h> + +struct msm_vidc_core; + +int __write_register_masked(struct msm_vidc_core *core, u32 reg, u32 value, + u32 mask); +int __write_register(struct msm_vidc_core *core, u32 reg, u32 value); +int __read_register(struct msm_vidc_core *core, u32 reg, u32 *value); +int __read_register_with_poll_timeout(struct msm_vidc_core *core, u32 reg, + u32 mask, u32 exp_val, u32 sleep_us, u32 timeout_us); +int __set_registers(struct msm_vidc_core *core); + +#endif diff --git a/drivers/media/platform/qcom/iris/variant/common/src/msm_vidc_variant.c b/drivers/media/platform/qcom/iris/variant/common/src/msm_vidc_variant.c new file mode 100644 index 0000000..4901844 --- /dev/null +++ b/drivers/media/platform/qcom/iris/variant/common/src/msm_vidc_variant.c @@ -0,0 +1,163 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include <linux/io.h> +#include <linux/iopoll.h> + +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_platform.h" +#include "msm_vidc_state.h" +#include "msm_vidc_variant.h" +#include "venus_hfi.h" + +int __write_register(struct msm_vidc_core *core, u32 reg, u32 value) +{ + u32 hwiosymaddr = reg; + u8 *base_addr; + int rc = 0; + + rc = __strict_check(core, __func__); + if (rc) + return rc; + + if (!is_core_sub_state(core, CORE_SUBSTATE_POWER_ENABLE)) { + d_vpr_e("HFI Write register failed : Power is OFF\n"); + return -EINVAL; + } + + base_addr = core->resource->register_base_addr; + d_vpr_l("regwrite(%pK + %#x) = %#x\n", base_addr, hwiosymaddr, value); + base_addr += hwiosymaddr; + writel_relaxed(value, base_addr); + + /* Memory barrier to make sure value is written into the register */ + wmb(); + + return rc; +} + +/* + * Argument mask is used to specify which bits to update.
In case mask is 0x11, + * only bits 0 & 4 will be updated with corresponding bits from value. To update + * entire register with value, set mask = 0xFFFFFFFF. + */ +int __write_register_masked(struct msm_vidc_core *core, u32 reg, u32 value, + u32 mask) +{ + u32 prev_val, new_val; + u8 *base_addr; + int rc = 0; + + rc = __strict_check(core, __func__); + if (rc) + return rc; + + if (!is_core_sub_state(core, CORE_SUBSTATE_POWER_ENABLE)) { + d_vpr_e("%s: register write failed, power is off\n", + __func__); + return -EINVAL; + } + + base_addr = core->resource->register_base_addr; + base_addr += reg; + + prev_val = readl_relaxed(base_addr); + /* + * Memory barrier to ensure register read is correct + */ + rmb(); + + new_val = (prev_val & ~mask) | (value & mask); + d_vpr_l("Base addr: %pK, writing to: %#x, mask: %#x\n", + base_addr, reg, mask); + + d_vpr_l("previous-value: %#x, value: %#x, new-value: %#x...\n", + prev_val, value, new_val); + writel_relaxed(new_val, base_addr); + /* + * Memory barrier to make sure value is written into the register. + */ + wmb(); + + return rc; +} + +int __read_register(struct msm_vidc_core *core, u32 reg, u32 *value) +{ + int rc = 0; + u8 *base_addr; + + if (!is_core_sub_state(core, CORE_SUBSTATE_POWER_ENABLE)) { + d_vpr_e("HFI Read register failed : Power is OFF\n"); + return -EINVAL; + } + + base_addr = core->resource->register_base_addr; + + *value = readl_relaxed(base_addr + reg); + /* + * Memory barrier to make sure value is read correctly from the + * register. 
+ */ + rmb(); + d_vpr_l("regread(%pK + %#x) = %#x\n", base_addr, reg, *value); + + return rc; +} + +int __read_register_with_poll_timeout(struct msm_vidc_core *core, u32 reg, + u32 mask, u32 exp_val, u32 sleep_us, + u32 timeout_us) +{ + int rc = 0; + u32 val = 0; + u8 *addr; + + if (!is_core_sub_state(core, CORE_SUBSTATE_POWER_ENABLE)) { + d_vpr_e("%s failed: Power is OFF\n", __func__); + return -EINVAL; + } + + addr = (u8 *)core->resource->register_base_addr + reg; + + rc = readl_relaxed_poll_timeout(addr, val, ((val & mask) == exp_val), sleep_us, timeout_us); + /* + * Memory barrier to make sure value is read correctly from the + * register. + */ + rmb(); + d_vpr_l("regread(%pK + %#x) = %#x. rc %d, mask %#x, exp_val %#x\n", + core->resource->register_base_addr, reg, val, rc, mask, exp_val); + d_vpr_l("cond %u, sleep %u, timeout %u\n", + ((val & mask) == exp_val), sleep_us, timeout_us); + + return rc; +} + +int __set_registers(struct msm_vidc_core *core) +{ + const struct reg_preset_table *reg_prst; + unsigned int prst_count; + int cnt, rc = 0; + + reg_prst = core->platform->data.reg_prst_tbl; + prst_count = core->platform->data.reg_prst_tbl_size; + + /* skip if there is no preset reg available */ + if (!reg_prst || !prst_count) + return 0; + + for (cnt = 0; cnt < prst_count; cnt++) { + rc = __write_register_masked(core, reg_prst[cnt].reg, + reg_prst[cnt].value, reg_prst[cnt].mask); + if (rc) + return rc; + } + + return rc; +} From patchwork Fri Jul 28 13:23:39 2023 X-Patchwork-Submitter: Vikash Garodia X-Patchwork-Id: 13332047 From: Vikash Garodia Subject: [PATCH 28/33] iris: variant: iris3: add iris3 specific ops Date: Fri, 28 Jul 2023 18:53:39 +0530 Message-ID: <1690550624-14642-29-git-send-email-quic_vgarodia@quicinc.com> List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Dikshita Agarwal This implements iris3 specific ops for power on, power off, boot firmware, power collapse etc.
Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../qcom/iris/variant/iris3/inc/msm_vidc_iris3.h | 15 + .../qcom/iris/variant/iris3/src/msm_vidc_iris3.c | 954 +++++++++++++++++++++ 2 files changed, 969 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/variant/iris3/inc/msm_vidc_iris3.h create mode 100644 drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_iris3.c diff --git a/drivers/media/platform/qcom/iris/variant/iris3/inc/msm_vidc_iris3.h b/drivers/media/platform/qcom/iris/variant/iris3/inc/msm_vidc_iris3.h new file mode 100644 index 0000000..704367e --- /dev/null +++ b/drivers/media/platform/qcom/iris/variant/iris3/inc/msm_vidc_iris3.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef _MSM_VIDC_IRIS3_H_ +#define _MSM_VIDC_IRIS3_H_ + +#include "msm_vidc_core.h" + +int msm_vidc_init_iris3(struct msm_vidc_core *core); +int msm_vidc_adjust_bitrate_boost_iris3(void *instance, struct v4l2_ctrl *ctrl); + +#endif // _MSM_VIDC_IRIS3_H_ diff --git a/drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_iris3.c b/drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_iris3.c new file mode 100644 index 0000000..95dff62 --- /dev/null +++ b/drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_iris3.c @@ -0,0 +1,954 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include "msm_vidc_buffer.h" +#include "msm_vidc_buffer_iris3.h" +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_inst.h" +#include "msm_vidc_internal.h" +#include "msm_vidc_iris3.h" +#include "msm_vidc_platform.h" +#include "msm_vidc_power_iris3.h" +#include "msm_vidc_state.h" +#include "msm_vidc_variant.h" +#include "venus_hfi.h" + +#define VIDEO_ARCH_LX 1 + +#define VCODEC_BASE_OFFS_IRIS3 0x00000000 +#define AON_MVP_NOC_RESET 0x0001F000 +#define CPU_BASE_OFFS_IRIS3 0x000A0000 +#define AON_BASE_OFFS 0x000E0000 +#define CPU_CS_BASE_OFFS_IRIS3 (CPU_BASE_OFFS_IRIS3) +#define CPU_IC_BASE_OFFS_IRIS3 (CPU_BASE_OFFS_IRIS3) + +#define CPU_CS_A2HSOFTINTCLR_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x1C) +#define CPU_CS_VCICMD_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x20) +#define CPU_CS_VCICMDARG0_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x24) +#define CPU_CS_VCICMDARG1_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x28) +#define CPU_CS_VCICMDARG2_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x2C) +#define CPU_CS_VCICMDARG3_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x30) +#define CPU_CS_VMIMSG_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x34) +#define CPU_CS_VMIMSGAG0_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x38) +#define CPU_CS_VMIMSGAG1_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x3C) +#define CPU_CS_SCIACMD_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x48) +#define CPU_CS_H2XSOFTINTEN_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x148) + +/* HFI_CTRL_STATUS */ +#define CPU_CS_SCIACMDARG0_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x4C) +#define CPU_CS_SCIACMDARG0_HFI_CTRL_ERROR_STATUS_BMSK_IRIS3 0xfe +#define CPU_CS_SCIACMDARG0_HFI_CTRL_PC_READY_IRIS3 0x100 +#define CPU_CS_SCIACMDARG0_HFI_CTRL_INIT_IDLE_MSG_BMSK_IRIS3 0x40000000 + +/* HFI_QTBL_INFO */ +#define CPU_CS_SCIACMDARG1_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x50) + +/* HFI_QTBL_ADDR */ +#define CPU_CS_SCIACMDARG2_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x54) + +/* HFI_VERSION_INFO */ +#define CPU_CS_SCIACMDARG3_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x58) + +/* SFR_ADDR */ +#define 
CPU_CS_SCIBCMD_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x5C) + +/* MMAP_ADDR */ +#define CPU_CS_SCIBCMDARG0_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x60) + +/* UC_REGION_ADDR */ +#define CPU_CS_SCIBARG1_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x64) + +/* UC_REGION_ADDR */ +#define CPU_CS_SCIBARG2_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x68) + +#define CPU_CS_AHB_BRIDGE_SYNC_RESET (CPU_CS_BASE_OFFS_IRIS3 + 0x160) +#define CPU_CS_AHB_BRIDGE_SYNC_RESET_STATUS (CPU_CS_BASE_OFFS_IRIS3 + 0x164) + +/* FAL10 Feature Control */ +#define CPU_CS_X2RPMH_IRIS3 (CPU_CS_BASE_OFFS_IRIS3 + 0x168) +#define CPU_CS_X2RPMH_MASK0_BMSK_IRIS3 0x1 +#define CPU_CS_X2RPMH_MASK0_SHFT_IRIS3 0x0 +#define CPU_CS_X2RPMH_MASK1_BMSK_IRIS3 0x2 +#define CPU_CS_X2RPMH_MASK1_SHFT_IRIS3 0x1 +#define CPU_CS_X2RPMH_SWOVERRIDE_BMSK_IRIS3 0x4 +#define CPU_CS_X2RPMH_SWOVERRIDE_SHFT_IRIS3 0x3 + +#define CPU_IC_SOFTINT_IRIS3 (CPU_IC_BASE_OFFS_IRIS3 + 0x150) +#define CPU_IC_SOFTINT_H2A_SHFT_IRIS3 0x0 + +/* + * -------------------------------------------------------------------------- + * MODULE: AON_MVP_NOC_RESET_REGISTERS + * -------------------------------------------------------------------------- + */ +#define AON_WRAPPER_MVP_NOC_RESET_REQ (AON_MVP_NOC_RESET + 0x000) +#define AON_WRAPPER_MVP_NOC_RESET_ACK (AON_MVP_NOC_RESET + 0x004) + +/* + * -------------------------------------------------------------------------- + * MODULE: wrapper + * -------------------------------------------------------------------------- + */ +#define WRAPPER_BASE_OFFS_IRIS3 0x000B0000 +#define WRAPPER_INTR_STATUS_IRIS3 (WRAPPER_BASE_OFFS_IRIS3 + 0x0C) +#define WRAPPER_INTR_STATUS_A2HWD_BMSK_IRIS3 0x8 +#define WRAPPER_INTR_STATUS_A2H_BMSK_IRIS3 0x4 + +#define WRAPPER_INTR_MASK_IRIS3 (WRAPPER_BASE_OFFS_IRIS3 + 0x10) +#define WRAPPER_INTR_MASK_A2HWD_BMSK_IRIS3 0x8 +#define WRAPPER_INTR_MASK_A2HCPU_BMSK_IRIS3 0x4 + +#define WRAPPER_CPU_CLOCK_CONFIG_IRIS3 (WRAPPER_BASE_OFFS_IRIS3 + 0x2000) +#define WRAPPER_CPU_CGC_DIS_IRIS3 (WRAPPER_BASE_OFFS_IRIS3 + 0x2010) 
+#define WRAPPER_CPU_STATUS_IRIS3 (WRAPPER_BASE_OFFS_IRIS3 + 0x2014) + +#define WRAPPER_DEBUG_BRIDGE_LPI_CONTROL_IRIS3 (WRAPPER_BASE_OFFS_IRIS3 + 0x54) +#define WRAPPER_DEBUG_BRIDGE_LPI_STATUS_IRIS3 (WRAPPER_BASE_OFFS_IRIS3 + 0x58) +#define WRAPPER_IRIS_CPU_NOC_LPI_CONTROL (WRAPPER_BASE_OFFS_IRIS3 + 0x5C) +#define WRAPPER_IRIS_CPU_NOC_LPI_STATUS (WRAPPER_BASE_OFFS_IRIS3 + 0x60) +#define WRAPPER_CORE_POWER_STATUS (WRAPPER_BASE_OFFS_IRIS3 + 0x80) +#define WRAPPER_CORE_CLOCK_CONFIG_IRIS3 (WRAPPER_BASE_OFFS_IRIS3 + 0x88) + +/* + * -------------------------------------------------------------------------- + * MODULE: tz_wrapper + * -------------------------------------------------------------------------- + */ +#define WRAPPER_TZ_BASE_OFFS 0x000C0000 +#define WRAPPER_TZ_CPU_CLOCK_CONFIG (WRAPPER_TZ_BASE_OFFS) +#define WRAPPER_TZ_CPU_STATUS (WRAPPER_TZ_BASE_OFFS + 0x10) +#define WRAPPER_TZ_CTL_AXI_CLOCK_CONFIG (WRAPPER_TZ_BASE_OFFS + 0x14) +#define WRAPPER_TZ_QNS4PDXFIFO_RESET (WRAPPER_TZ_BASE_OFFS + 0x18) + +#define CTRL_INIT_IRIS3 CPU_CS_SCIACMD_IRIS3 + +#define CTRL_STATUS_IRIS3 CPU_CS_SCIACMDARG0_IRIS3 +#define CTRL_ERROR_STATUS__M_IRIS3 \ + CPU_CS_SCIACMDARG0_HFI_CTRL_ERROR_STATUS_BMSK_IRIS3 +#define CTRL_INIT_IDLE_MSG_BMSK_IRIS3 \ + CPU_CS_SCIACMDARG0_HFI_CTRL_INIT_IDLE_MSG_BMSK_IRIS3 +#define CTRL_STATUS_PC_READY_IRIS3 \ + CPU_CS_SCIACMDARG0_HFI_CTRL_PC_READY_IRIS3 + +#define QTBL_INFO_IRIS3 CPU_CS_SCIACMDARG1_IRIS3 + +#define QTBL_ADDR_IRIS3 CPU_CS_SCIACMDARG2_IRIS3 + +#define VERSION_INFO_IRIS3 CPU_CS_SCIACMDARG3_IRIS3 + +#define SFR_ADDR_IRIS3 CPU_CS_SCIBCMD_IRIS3 +#define MMAP_ADDR_IRIS3 CPU_CS_SCIBCMDARG0_IRIS3 +#define UC_REGION_ADDR_IRIS3 CPU_CS_SCIBARG1_IRIS3 +#define UC_REGION_SIZE_IRIS3 CPU_CS_SCIBARG2_IRIS3 + +#define AON_WRAPPER_MVP_NOC_LPI_CONTROL (AON_BASE_OFFS) +#define AON_WRAPPER_MVP_NOC_LPI_STATUS (AON_BASE_OFFS + 0x4) + +/* + * -------------------------------------------------------------------------- + * MODULE: VCODEC_SS registers + * 
-------------------------------------------------------------------------- + */ +#define VCODEC_SS_IDLE_STATUSN (VCODEC_BASE_OFFS_IRIS3 + 0x70) + +/* + * -------------------------------------------------------------------------- + * MODULE: vcodec noc error log registers (iris3) + * -------------------------------------------------------------------------- + */ +#define VCODEC_NOC_VIDEO_A_NOC_BASE_OFFS 0x00010000 +#define VCODEC_NOC_ERL_MAIN_SWID_LOW 0x00011200 +#define VCODEC_NOC_ERL_MAIN_SWID_HIGH 0x00011204 +#define VCODEC_NOC_ERL_MAIN_MAINCTL_LOW 0x00011208 +#define VCODEC_NOC_ERL_MAIN_ERRVLD_LOW 0x00011210 +#define VCODEC_NOC_ERL_MAIN_ERRCLR_LOW 0x00011218 +#define VCODEC_NOC_ERL_MAIN_ERRLOG0_LOW 0x00011220 +#define VCODEC_NOC_ERL_MAIN_ERRLOG0_HIGH 0x00011224 +#define VCODEC_NOC_ERL_MAIN_ERRLOG1_LOW 0x00011228 +#define VCODEC_NOC_ERL_MAIN_ERRLOG1_HIGH 0x0001122C +#define VCODEC_NOC_ERL_MAIN_ERRLOG2_LOW 0x00011230 +#define VCODEC_NOC_ERL_MAIN_ERRLOG2_HIGH 0x00011234 +#define VCODEC_NOC_ERL_MAIN_ERRLOG3_LOW 0x00011238 +#define VCODEC_NOC_ERL_MAIN_ERRLOG3_HIGH 0x0001123C + +static int __interrupt_init_iris3(struct msm_vidc_core *core) +{ + u32 mask_val = 0; + int rc = 0; + + /* All interrupts should be disabled initially 0x1F6 : Reset value */ + rc = __read_register(core, WRAPPER_INTR_MASK_IRIS3, &mask_val); + if (rc) + return rc; + + /* Write 0 to unmask CPU and WD interrupts */ + mask_val &= ~(WRAPPER_INTR_MASK_A2HWD_BMSK_IRIS3 | + WRAPPER_INTR_MASK_A2HCPU_BMSK_IRIS3); + rc = __write_register(core, WRAPPER_INTR_MASK_IRIS3, mask_val); + if (rc) + return rc; + + return 0; +} + +static int __setup_ucregion_memory_map_iris3(struct msm_vidc_core *core) +{ + u32 value; + int rc = 0; + + value = (u32)core->iface_q_table.align_device_addr; + rc = __write_register(core, UC_REGION_ADDR_IRIS3, value); + if (rc) + return rc; + + value = SHARED_QSIZE; + rc = __write_register(core, UC_REGION_SIZE_IRIS3, value); + if (rc) + return rc; + + value = 
(u32)core->iface_q_table.align_device_addr; + rc = __write_register(core, QTBL_ADDR_IRIS3, value); + if (rc) + return rc; + + rc = __write_register(core, QTBL_INFO_IRIS3, 0x01); + if (rc) + return rc; + + /* update queues vaddr for debug purpose */ + value = (u32)((u64)core->iface_q_table.align_virtual_addr); + rc = __write_register(core, CPU_CS_VCICMDARG0_IRIS3, value); + if (rc) + return rc; + + value = (u32)((u64)core->iface_q_table.align_virtual_addr >> 32); + rc = __write_register(core, CPU_CS_VCICMDARG1_IRIS3, value); + if (rc) + return rc; + + if (core->sfr.align_device_addr) { + value = (u32)core->sfr.align_device_addr + VIDEO_ARCH_LX; + rc = __write_register(core, SFR_ADDR_IRIS3, value); + if (rc) + return rc; + } + + return 0; +} + +static bool is_iris3_hw_power_collapsed(struct msm_vidc_core *core) +{ + int rc = 0; + u32 value = 0, pwr_status = 0; + + rc = __read_register(core, WRAPPER_CORE_POWER_STATUS, &value); + if (rc) + return false; + + /* if BIT(1) is 1 then video hw power is on else off */ + pwr_status = value & BIT(1); + return pwr_status ? false : true; +} + +static int __power_off_iris3_hardware(struct msm_vidc_core *core) +{ + int rc = 0, i; + u32 value = 0; + bool pwr_collapsed = false; + + /* + * In case hw power control is enabled, for both CPU WD and video + * hw unresponsive cases, check for power status to decide on + * executing NOC reset sequence before disabling power. If there + * is no CPU WD and hw power control is enabled, fw is expected + * to power collapse video hw always.
+ */ + if (is_core_sub_state(core, CORE_SUBSTATE_FW_PWR_CTRL)) { + pwr_collapsed = is_iris3_hw_power_collapsed(core); + if (is_core_sub_state(core, CORE_SUBSTATE_CPU_WATCHDOG) || + is_core_sub_state(core, CORE_SUBSTATE_VIDEO_UNRESPONSIVE)) { + if (pwr_collapsed) { + d_vpr_e("%s: video hw power collapsed %s\n", + __func__, core->sub_state_name); + goto disable_power; + } else { + d_vpr_e("%s: video hw is power ON %s\n", + __func__, core->sub_state_name); + } + } else { + if (!pwr_collapsed) + d_vpr_e("%s: video hw is not power collapsed\n", __func__); + + d_vpr_h("%s: disabling hw power\n", __func__); + goto disable_power; + } + } + + /* + * check to make sure core clock branch enabled else + * we cannot read vcodec top idle register + */ + rc = __read_register(core, WRAPPER_CORE_CLOCK_CONFIG_IRIS3, &value); + if (rc) + return rc; + + if (value) { + d_vpr_h("%s: core clock config not enabled, enabling it to read vcodec registers\n", + __func__); + rc = __write_register(core, WRAPPER_CORE_CLOCK_CONFIG_IRIS3, 0); + if (rc) + return rc; + } + + /* + * add MNoC idle check before collapsing MVS0 per HPG update + * poll for NoC DMA idle -> HPG 6.1.1 + */ + for (i = 0; i < core->capabilities[NUM_VPP_PIPE].value; i++) { + rc = __read_register_with_poll_timeout(core, VCODEC_SS_IDLE_STATUSN + 4 * i, + 0x400000, 0x400000, 2000, 20000); + if (rc) + d_vpr_h("%s: VCODEC_SS_IDLE_STATUSN (%d) is not idle (%#x)\n", + __func__, i, value); + } + + /* Apply partial reset on MSF interface and wait for ACK */ + rc = __write_register(core, AON_WRAPPER_MVP_NOC_RESET_REQ, 0x3); + if (rc) + return rc; + + rc = __read_register_with_poll_timeout(core, AON_WRAPPER_MVP_NOC_RESET_ACK, + 0x3, 0x3, 200, 2000); + if (rc) + d_vpr_h("%s: AON_WRAPPER_MVP_NOC_RESET assert failed\n", __func__); + + /* De-assert partial reset on MSF interface and wait for ACK */ + rc = __write_register(core, AON_WRAPPER_MVP_NOC_RESET_REQ, 0x0); + if (rc) + return rc; + + rc = __read_register_with_poll_timeout(core, 
AON_WRAPPER_MVP_NOC_RESET_ACK, + 0x3, 0x0, 200, 2000); + if (rc) + d_vpr_h("%s: AON_WRAPPER_MVP_NOC_RESET de-assert failed\n", __func__); + + /* + * Reset both sides of 2 ahb2ahb_bridges (TZ and non-TZ) + * do we need to check status register here? + */ + rc = __write_register(core, CPU_CS_AHB_BRIDGE_SYNC_RESET, 0x3); + if (rc) + return rc; + rc = __write_register(core, CPU_CS_AHB_BRIDGE_SYNC_RESET, 0x2); + if (rc) + return rc; + rc = __write_register(core, CPU_CS_AHB_BRIDGE_SYNC_RESET, 0x0); + if (rc) + return rc; + +disable_power: + /* power down process */ + rc = call_res_op(core, gdsc_off, core, "vcodec"); + if (rc) { + d_vpr_e("%s: disable regulator vcodec failed\n", __func__); + rc = 0; + } + + rc = call_res_op(core, clk_disable, core, "vcodec_clk"); + if (rc) { + d_vpr_e("%s: disable unprepare vcodec_clk failed\n", __func__); + rc = 0; + } + + return rc; +} + +static int __power_off_iris3_controller(struct msm_vidc_core *core) +{ + int rc = 0; + + /* + * mask fal10_veto QLPAC error since fal10_veto can go 1 + * when pwwait == 0 and clamped to 0 -> HPG 6.1.2 + */ + rc = __write_register(core, CPU_CS_X2RPMH_IRIS3, 0x3); + if (rc) + return rc; + + /* set MNoC to low power, set PD_NOC_QREQ (bit 0) */ + rc = __write_register_masked(core, AON_WRAPPER_MVP_NOC_LPI_CONTROL, + 0x1, BIT(0)); + if (rc) + return rc; + + rc = __read_register_with_poll_timeout(core, AON_WRAPPER_MVP_NOC_LPI_STATUS, + 0x1, 0x1, 200, 2000); + if (rc) + d_vpr_h("%s: AON_WRAPPER_MVP_NOC_LPI_CONTROL failed\n", __func__); + + /* Set Iris CPU NoC to Low power */ + rc = __write_register_masked(core, WRAPPER_IRIS_CPU_NOC_LPI_CONTROL, + 0x1, BIT(0)); + if (rc) + return rc; + + rc = __read_register_with_poll_timeout(core, WRAPPER_IRIS_CPU_NOC_LPI_STATUS, + 0x1, 0x1, 200, 2000); + if (rc) + d_vpr_h("%s: WRAPPER_IRIS_CPU_NOC_LPI_CONTROL failed\n", __func__); + + /* Debug bridge LPI release */ + rc = __write_register(core, WRAPPER_DEBUG_BRIDGE_LPI_CONTROL_IRIS3, 0x0); + if (rc) + return rc; + + rc = 
__read_register_with_poll_timeout(core, WRAPPER_DEBUG_BRIDGE_LPI_STATUS_IRIS3, + 0xffffffff, 0x0, 200, 2000); + if (rc) + d_vpr_h("%s: debug bridge release failed\n", __func__); + + /* Reset MVP QNS4PDXFIFO */ + rc = __write_register(core, WRAPPER_TZ_CTL_AXI_CLOCK_CONFIG, 0x3); + if (rc) + return rc; + + rc = __write_register(core, WRAPPER_TZ_QNS4PDXFIFO_RESET, 0x1); + if (rc) + return rc; + + rc = __write_register(core, WRAPPER_TZ_QNS4PDXFIFO_RESET, 0x0); + if (rc) + return rc; + + rc = __write_register(core, WRAPPER_TZ_CTL_AXI_CLOCK_CONFIG, 0x0); + if (rc) + return rc; + + /* Turn off MVP MVS0C core clock */ + rc = call_res_op(core, clk_disable, core, "core_clk"); + if (rc) { + d_vpr_e("%s: disable unprepare core_clk failed\n", __func__); + rc = 0; + } + + /* power down process */ + rc = call_res_op(core, gdsc_off, core, "iris-ctl"); + if (rc) { + d_vpr_e("%s: disable regulator iris-ctl failed\n", __func__); + rc = 0; + } + + return rc; +} + +static int __power_off_iris3(struct msm_vidc_core *core) +{ + int rc = 0; + + if (!is_core_sub_state(core, CORE_SUBSTATE_POWER_ENABLE)) + return 0; + + rc = call_res_op(core, set_clks, core, 0); + if (rc) + d_vpr_e("%s: resetting clocks failed\n", __func__); + + if (__power_off_iris3_hardware(core)) + d_vpr_e("%s: failed to power off hardware\n", __func__); + + if (__power_off_iris3_controller(core)) + d_vpr_e("%s: failed to power off controller\n", __func__); + + rc = call_res_op(core, set_bw, core, 0, 0); + if (rc) + d_vpr_e("%s: failed to unvote buses\n", __func__); + + if (!call_iris_op(core, watchdog, core, core->intr_status)) + disable_irq_nosync(core->resource->irq); + + msm_vidc_change_core_sub_state(core, CORE_SUBSTATE_POWER_ENABLE, 0, __func__); + + return rc; +} + +static int __power_on_iris3_controller(struct msm_vidc_core *core) +{ + int rc = 0; + + rc = call_res_op(core, gdsc_on, core, "iris-ctl"); + if (rc) + goto fail_regulator; + + rc = call_res_op(core, reset_bridge, core); + if (rc) + goto 
fail_reset_ahb2axi; + + rc = call_res_op(core, clk_enable, core, "gcc_video_axi0"); + if (rc) + goto fail_clk_axi; + + rc = call_res_op(core, clk_enable, core, "core_clk"); + if (rc) + goto fail_clk_controller; + + return 0; + +fail_clk_controller: + call_res_op(core, clk_disable, core, "gcc_video_axi0"); +fail_clk_axi: +fail_reset_ahb2axi: + call_res_op(core, gdsc_off, core, "iris-ctl"); +fail_regulator: + return rc; +} + +static int __power_on_iris3_hardware(struct msm_vidc_core *core) +{ + int rc = 0; + + rc = call_res_op(core, gdsc_on, core, "vcodec"); + if (rc) + goto fail_regulator; + + rc = call_res_op(core, clk_enable, core, "vcodec_clk"); + if (rc) + goto fail_clk_controller; + + return 0; + +fail_clk_controller: + call_res_op(core, gdsc_off, core, "vcodec"); +fail_regulator: + return rc; +} + +static int __power_on_iris3(struct msm_vidc_core *core) +{ + struct frequency_table *freq_tbl; + u32 freq = 0; + int rc = 0; + + if (is_core_sub_state(core, CORE_SUBSTATE_POWER_ENABLE)) + return 0; + + if (!core_in_valid_state(core)) { + d_vpr_e("%s: invalid core state %s\n", + __func__, core_state_name(core->state)); + return -EINVAL; + } + + /* Vote for all hardware resources */ + rc = call_res_op(core, set_bw, core, INT_MAX, INT_MAX); + if (rc) { + d_vpr_e("%s: failed to vote buses, rc %d\n", __func__, rc); + goto fail_vote_buses; + } + + rc = __power_on_iris3_controller(core); + if (rc) { + d_vpr_e("%s: failed to power on iris3 controller\n", __func__); + goto fail_power_on_controller; + } + + rc = __power_on_iris3_hardware(core); + if (rc) { + d_vpr_e("%s: failed to power on iris3 hardware\n", __func__); + goto fail_power_on_hardware; + } + /* video controller and hardware powered on successfully */ + rc = msm_vidc_change_core_sub_state(core, 0, CORE_SUBSTATE_POWER_ENABLE, __func__); + if (rc) + goto fail_power_on_substate; + + freq_tbl = core->resource->freq_set.freq_tbl; + freq = core->power.clk_freq ? 
core->power.clk_freq : + freq_tbl[0].freq; + + rc = call_res_op(core, set_clks, core, freq); + if (rc) { + d_vpr_e("%s: failed to scale clocks\n", __func__); + rc = 0; + } + /* + * Re-program all of the registers that get reset as a result of + * regulator_disable() and _enable() + */ + __set_registers(core); + + __interrupt_init_iris3(core); + core->intr_status = 0; + enable_irq(core->resource->irq); + + return rc; + +fail_power_on_substate: + __power_off_iris3_hardware(core); +fail_power_on_hardware: + __power_off_iris3_controller(core); +fail_power_on_controller: + call_res_op(core, set_bw, core, 0, 0); +fail_vote_buses: + msm_vidc_change_core_sub_state(core, CORE_SUBSTATE_POWER_ENABLE, 0, __func__); + return rc; +} + +static int __prepare_pc_iris3(struct msm_vidc_core *core) +{ + int rc = 0; + u32 wfi_status = 0, idle_status = 0, pc_ready = 0; + u32 ctrl_status = 0; + + rc = __read_register(core, CTRL_STATUS_IRIS3, &ctrl_status); + if (rc) + return rc; + + pc_ready = ctrl_status & CTRL_STATUS_PC_READY_IRIS3; + idle_status = ctrl_status & BIT(30); + + if (pc_ready) { + d_vpr_h("Already in pc_ready state\n"); + return 0; + } + rc = __read_register(core, WRAPPER_TZ_CPU_STATUS, &wfi_status); + if (rc) + return rc; + + wfi_status &= BIT(0); + if (!wfi_status || !idle_status) { + d_vpr_e("Skipping PC, wfi status not set\n"); + goto skip_power_off; + } + + rc = __prepare_pc(core); + if (rc) { + d_vpr_e("Failed __prepare_pc %d\n", rc); + goto skip_power_off; + } + + rc = __read_register_with_poll_timeout(core, CTRL_STATUS_IRIS3, + CTRL_STATUS_PC_READY_IRIS3, + CTRL_STATUS_PC_READY_IRIS3, 250, 2500); + if (rc) { + d_vpr_e("%s: Skip PC. Ctrl status not set\n", __func__); + goto skip_power_off; + } + + rc = __read_register_with_poll_timeout(core, WRAPPER_TZ_CPU_STATUS, + BIT(0), 0x1, 250, 2500); + if (rc) { + d_vpr_e("%s: Skip PC. 
Wfi status not set\n", __func__); + goto skip_power_off; + } + return rc; + +skip_power_off: + rc = __read_register(core, CTRL_STATUS_IRIS3, &ctrl_status); + if (rc) + return rc; + rc = __read_register(core, WRAPPER_TZ_CPU_STATUS, &wfi_status); + if (rc) + return rc; + wfi_status &= BIT(0); + d_vpr_e("Skip PC, wfi=%#x, idle=%#x, pcr=%#x, ctrl=%#x)\n", + wfi_status, idle_status, pc_ready, ctrl_status); + return -EAGAIN; +} + +static int __raise_interrupt_iris3(struct msm_vidc_core *core) +{ + int rc = 0; + + rc = __write_register(core, CPU_IC_SOFTINT_IRIS3, 1 << CPU_IC_SOFTINT_H2A_SHFT_IRIS3); + if (rc) + return rc; + + return 0; +} + +static int __watchdog_iris3(struct msm_vidc_core *core, u32 intr_status) +{ + int rc = 0; + + if (intr_status & WRAPPER_INTR_STATUS_A2HWD_BMSK_IRIS3) { + d_vpr_e("%s: received watchdog interrupt\n", __func__); + rc = 1; + } + + return rc; +} + +static int __clear_interrupt_iris3(struct msm_vidc_core *core) +{ + u32 intr_status = 0, mask = 0; + int rc = 0; + + rc = __read_register(core, WRAPPER_INTR_STATUS_IRIS3, &intr_status); + if (rc) + return rc; + + mask = (WRAPPER_INTR_STATUS_A2H_BMSK_IRIS3 | + WRAPPER_INTR_STATUS_A2HWD_BMSK_IRIS3 | + CTRL_INIT_IDLE_MSG_BMSK_IRIS3); + + if (intr_status & mask) { + core->intr_status |= intr_status; + core->reg_count++; + d_vpr_l("INTERRUPT: times: %d interrupt_status: %d\n", + core->reg_count, intr_status); + } else { + core->spur_count++; + } + + rc = __write_register(core, CPU_CS_A2HSOFTINTCLR_IRIS3, 1); + if (rc) + return rc; + + return 0; +} + +static int __boot_firmware_iris3(struct msm_vidc_core *core) +{ + int rc = 0; + u32 ctrl_init_val = 0, ctrl_status = 0, count = 0, max_tries = 1000; + + rc = __setup_ucregion_memory_map_iris3(core); + if (rc) + return rc; + + ctrl_init_val = BIT(0); + + rc = __write_register(core, CTRL_INIT_IRIS3, ctrl_init_val); + if (rc) + return rc; + + while (!ctrl_status && count < max_tries) { + rc = __read_register(core, CTRL_STATUS_IRIS3, &ctrl_status); + if 
(rc) + return rc; + + if ((ctrl_status & CTRL_ERROR_STATUS__M_IRIS3) == 0x4) { + d_vpr_e("invalid setting for UC_REGION\n"); + break; + } + + usleep_range(50, 100); + count++; + } + + if (count >= max_tries) { + d_vpr_e("Error booting up vidc firmware\n"); + return -ETIME; + } + + /* Enable interrupt before sending commands to venus */ + rc = __write_register(core, CPU_CS_H2XSOFTINTEN_IRIS3, 0x1); + if (rc) + return rc; + + rc = __write_register(core, CPU_CS_X2RPMH_IRIS3, 0x0); + + return rc; +} + +int msm_vidc_decide_work_mode_iris3(struct msm_vidc_inst *inst) +{ + u32 work_mode; + struct v4l2_format *inp_f; + u32 width, height; + bool res_ok = false; + + work_mode = MSM_VIDC_STAGE_2; + inp_f = &inst->fmts[INPUT_PORT]; + + if (is_decode_session(inst)) { + height = inp_f->fmt.pix_mp.height; + width = inp_f->fmt.pix_mp.width; + res_ok = res_is_less_than(width, height, 1280, 720); + if (inst->capabilities[CODED_FRAMES].value == + CODED_FRAMES_INTERLACE || res_ok) { + work_mode = MSM_VIDC_STAGE_1; + } + } else if (is_encode_session(inst)) { + height = inst->crop.height; + width = inst->crop.width; + res_ok = !res_is_greater_than(width, height, 4096, 2160); + if (inst->capabilities[SLICE_MODE].value == + V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_BYTES) { + work_mode = MSM_VIDC_STAGE_1; + } + if (inst->capabilities[LOSSLESS].value) + work_mode = MSM_VIDC_STAGE_2; + + if (!inst->capabilities[GOP_SIZE].value) + work_mode = MSM_VIDC_STAGE_2; + } else { + i_vpr_e(inst, "%s: invalid session type\n", __func__); + return -EINVAL; + } + + i_vpr_h(inst, "Configuring work mode = %u gop size = %u\n", + work_mode, inst->capabilities[GOP_SIZE].value); + msm_vidc_update_cap_value(inst, STAGE, work_mode, __func__); + + return 0; +} + +int msm_vidc_decide_work_route_iris3(struct msm_vidc_inst *inst) +{ + u32 work_route; + struct msm_vidc_core *core; + + core = inst->core; + work_route = core->capabilities[NUM_VPP_PIPE].value; + + if (is_decode_session(inst)) { + if 
(inst->capabilities[CODED_FRAMES].value == + CODED_FRAMES_INTERLACE) + work_route = MSM_VIDC_PIPE_1; + } else if (is_encode_session(inst)) { + u32 slice_mode; + + slice_mode = inst->capabilities[SLICE_MODE].value; + + /*TODO Pipe=1 for legacy CBR*/ + if (slice_mode == V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_BYTES) + work_route = MSM_VIDC_PIPE_1; + + } else { + i_vpr_e(inst, "%s: invalid session type\n", __func__); + return -EINVAL; + } + + i_vpr_h(inst, "Configuring work route = %u", work_route); + msm_vidc_update_cap_value(inst, PIPE, work_route, __func__); + + return 0; +} + +int msm_vidc_decide_quality_mode_iris3(struct msm_vidc_inst *inst) +{ + struct msm_vidc_core *core; + u32 mbpf, mbps, max_hq_mbpf, max_hq_mbps; + u32 mode = MSM_VIDC_POWER_SAVE_MODE; + + if (!is_encode_session(inst)) + return 0; + + /* lossless or all intra runs at quality mode */ + if (inst->capabilities[LOSSLESS].value || + inst->capabilities[ALL_INTRA].value) { + mode = MSM_VIDC_MAX_QUALITY_MODE; + goto decision_done; + } + + mbpf = msm_vidc_get_mbs_per_frame(inst); + mbps = mbpf * msm_vidc_get_fps(inst); + core = inst->core; + max_hq_mbpf = core->capabilities[MAX_MBPF_HQ].value; + max_hq_mbps = core->capabilities[MAX_MBPS_HQ].value; + + if (mbpf <= max_hq_mbpf && mbps <= max_hq_mbps) + mode = MSM_VIDC_MAX_QUALITY_MODE; + +decision_done: + msm_vidc_update_cap_value(inst, QUALITY_MODE, mode, __func__); + + return 0; +} + +int msm_vidc_adjust_bitrate_boost_iris3(void *instance, struct v4l2_ctrl *ctrl) +{ + s32 adjusted_value; + struct msm_vidc_inst *inst = (struct msm_vidc_inst *)instance; + s32 rc_type = -1; + u32 width, height, frame_rate; + struct v4l2_format *f; + u32 max_bitrate = 0, bitrate = 0; + + adjusted_value = ctrl ? 
ctrl->val : + inst->capabilities[BITRATE_BOOST].value; + + if (inst->bufq[OUTPUT_PORT].vb2q->streaming) + return 0; + + if (msm_vidc_get_parent_value(inst, BITRATE_BOOST, + BITRATE_MODE, &rc_type, __func__)) + return -EINVAL; + + /* + * Bitrate boost is supported only for the VBR rc type. + * Hence, do not adjust or set it to firmware for non-VBR rc types. + */ + if (rc_type != HFI_RC_VBR_CFR) { + adjusted_value = 0; + goto adjust; + } + + frame_rate = inst->capabilities[FRAME_RATE].value >> 16; + f = &inst->fmts[OUTPUT_PORT]; + width = f->fmt.pix_mp.width; + height = f->fmt.pix_mp.height; + + /* + * honor client set bitrate boost + * if client did not set, keep max bitrate boost up to 4k@60fps + * and remove bitrate boost after 4k@60fps + */ + if (inst->capabilities[BITRATE_BOOST].flags & CAP_FLAG_CLIENT_SET) { + /* accept client set bitrate boost value as is */ + } else { + if (res_is_less_than_or_equal_to(width, height, 4096, 2176) && + frame_rate <= 60) + adjusted_value = MAX_BITRATE_BOOST; + else + adjusted_value = 0; + } + + max_bitrate = msm_vidc_get_max_bitrate(inst); + bitrate = inst->capabilities[BIT_RATE].value; + if (adjusted_value) { + if ((bitrate + bitrate / (100 / adjusted_value)) > max_bitrate) { + i_vpr_h(inst, + "%s: bitrate %d is beyond max bitrate %d, remove bitrate boost\n", + __func__, bitrate, max_bitrate); + adjusted_value = 0; + } + } +adjust: + msm_vidc_update_cap_value(inst, BITRATE_BOOST, adjusted_value, __func__); + + return 0; +} + +static struct msm_vidc_iris_ops iris3_ops = { + .boot_firmware = __boot_firmware_iris3, + .raise_interrupt = __raise_interrupt_iris3, + .clear_interrupt = __clear_interrupt_iris3, + .power_on = __power_on_iris3, + .power_off = __power_off_iris3, + .prepare_pc = __prepare_pc_iris3, + .watchdog = __watchdog_iris3, +}; + +static struct msm_vidc_session_ops msm_session_ops = { + .buffer_size = msm_buffer_size_iris3, + .min_count = msm_buffer_min_count_iris3, + .extra_count = msm_buffer_extra_count_iris3, + .calc_freq = 
msm_vidc_calc_freq_iris3, + .calc_bw = msm_vidc_calc_bw_iris3, + .decide_work_route = msm_vidc_decide_work_route_iris3, + .decide_work_mode = msm_vidc_decide_work_mode_iris3, + .decide_quality_mode = msm_vidc_decide_quality_mode_iris3, +}; + +int msm_vidc_init_iris3(struct msm_vidc_core *core) +{ + core->iris_ops = &iris3_ops; + core->session_ops = &msm_session_ops; + + return 0; +} From patchwork Fri Jul 28 13:23:40 2023 X-Patchwork-Submitter: Vikash Garodia X-Patchwork-Id: 13331960
From: Vikash Garodia Subject: [PATCH 29/33] iris: variant: iris3: add helpers for buffer size calculations Date: Fri, 28 Jul 2023 18:53:40 +0530 Message-ID: <1690550624-14642-30-git-send-email-quic_vgarodia@quicinc.com> In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com> References: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com> MIME-Version: 1.0
From: Dikshita Agarwal This implements iris3 specific buffer size calculation for firmware internal buffers, input and output buffers for encoder and decoder. Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../qcom/iris/variant/iris3/inc/hfi_buffer_iris3.h | 1481 ++++++++++++++++++++ .../iris/variant/iris3/inc/msm_vidc_buffer_iris3.h | 19 + .../iris/variant/iris3/src/msm_vidc_buffer_iris3.c | 595 ++++++++ 3 files changed, 2095 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/variant/iris3/inc/hfi_buffer_iris3.h create mode 100644 drivers/media/platform/qcom/iris/variant/iris3/inc/msm_vidc_buffer_iris3.h create mode 100644 drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_buffer_iris3.c diff --git a/drivers/media/platform/qcom/iris/variant/iris3/inc/hfi_buffer_iris3.h b/drivers/media/platform/qcom/iris/variant/iris3/inc/hfi_buffer_iris3.h new file mode 100644 index 0000000..cb068ca --- /dev/null +++ b/drivers/media/platform/qcom/iris/variant/iris3/inc/hfi_buffer_iris3.h @@ -0,0 +1,1481 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved. + * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#ifndef __HFI_BUFFER_IRIS3__ +#define __HFI_BUFFER_IRIS3__ + +#include + +#include "hfi_property.h" + +typedef u8 HFI_U8; +typedef s8 HFI_S8; +typedef u16 HFI_U16; +typedef s16 HFI_S16; +typedef u32 HFI_U32; +typedef s32 HFI_S32; +typedef u64 HFI_U64; +typedef u32 HFI_BOOL; + +#ifndef MIN +#define MIN(x, y) (((x) < (y)) ? 
(x) : (y)) +#endif + +#ifndef MAX +#define MAX(x, y) (((x) > (y)) ? (x) : (y)) +#endif + +#define HFI_ALIGNMENT_4096 (4096) + +#define BUF_SIZE_ALIGN_16 (16) +#define BUF_SIZE_ALIGN_32 (32) +#define BUF_SIZE_ALIGN_64 (64) +#define BUF_SIZE_ALIGN_128 (128) +#define BUF_SIZE_ALIGN_256 (256) +#define BUF_SIZE_ALIGN_512 (512) +#define BUF_SIZE_ALIGN_4096 (4096) + +#define HFI_ALIGN(a, b) (((b) & ((b) - 1)) ? (((a) + (b) - 1) / \ + (b) * (b)) : (((a) + (b) - 1) & (~((b) - 1)))) + +#define HFI_WORKMODE_1 1 +#define HFI_WORKMODE_2 2 + +#define HFI_DEFAULT_METADATA_STRIDE_MULTIPLE (64) +#define HFI_DEFAULT_METADATA_BUFFERHEIGHT_MULTIPLE (16) + +#define HFI_COLOR_FORMAT_YUV420_NV12_UBWC_Y_TILE_HEIGHT (8) +#define HFI_COLOR_FORMAT_YUV420_NV12_UBWC_Y_TILE_WIDTH (32) +#define HFI_COLOR_FORMAT_YUV420_NV12_UBWC_UV_TILE_HEIGHT (8) +#define HFI_COLOR_FORMAT_YUV420_NV12_UBWC_UV_TILE_WIDTH (16) +#define HFI_COLOR_FORMAT_YUV420_TP10_UBWC_Y_TILE_HEIGHT (4) +#define HFI_COLOR_FORMAT_YUV420_TP10_UBWC_Y_TILE_WIDTH (48) +#define HFI_COLOR_FORMAT_YUV420_TP10_UBWC_UV_TILE_HEIGHT (4) +#define HFI_COLOR_FORMAT_YUV420_TP10_UBWC_UV_TILE_WIDTH (24) +#define HFI_COLOR_FORMAT_RGBA8888_UBWC_TILE_HEIGHT (4) +#define HFI_COLOR_FORMAT_RGBA8888_UBWC_TILE_WIDTH (16) + +#define HFI_NV12_IL_CALC_Y_STRIDE(stride, frame_width, stride_multiple) \ + (stride = HFI_ALIGN(frame_width, stride_multiple)) + +#define HFI_NV12_IL_CALC_Y_BUFHEIGHT(buf_height, frame_height, \ + min_buf_height_multiple) (buf_height = HFI_ALIGN(frame_height, \ + min_buf_height_multiple)) + +#define HFI_NV12_IL_CALC_UV_STRIDE(stride, frame_width, stride_multiple) \ + (stride = HFI_ALIGN(frame_width, stride_multiple)) + +#define HFI_NV12_IL_CALC_UV_BUFHEIGHT(buf_height, frame_height, \ + min_buf_height_multiple) (buf_height = HFI_ALIGN((((frame_height) + 1) \ + >> 1), min_buf_height_multiple)) + +#define HFI_NV12_IL_CALC_BUF_SIZE(buf_size, y_bufsize, y_stride, y_buf_height, \ + uv_buf_size, uv_stride, uv_buf_height) \ + do { \ + y_bufsize 
= (y_stride * y_buf_height); \ + uv_buf_size = (uv_stride * uv_buf_height); \ + buf_size = HFI_ALIGN(y_bufsize + uv_buf_size, HFI_ALIGNMENT_4096) \ + } while (0) + +#define HFI_NV12_UBWC_IL_CALC_Y_BUF_SIZE(y_bufsize, y_stride, y_buf_height) \ + (y_bufsize = HFI_ALIGN(y_stride * y_buf_height, HFI_ALIGNMENT_4096)) + +#define HFI_NV12_UBWC_IL_CALC_UV_BUF_SIZE(uv_buf_size, \ + uv_stride, uv_buf_height) \ + (uv_buf_size = HFI_ALIGN(uv_stride * uv_buf_height, HFI_ALIGNMENT_4096)) + +#define HFI_NV12_UBWC_IL_CALC_BUF_SIZE_V2(buf_size,\ + frame_width, frame_height, y_stride_multiple,\ + y_buffer_height_multiple, uv_stride_multiple, \ + uv_buffer_height_multiple, y_metadata_stride_multiple, \ + y_metadata_buffer_height_multiple, \ + uv_metadata_stride_multiple, uv_metadata_buffer_height_multiple) \ + do { \ + HFI_U32 y_buf_size, uv_buf_size, y_meta_size, uv_meta_size; \ + HFI_U32 stride, _height; \ + HFI_U32 half_height = (frame_height + 1) >> 1; \ + HFI_NV12_IL_CALC_Y_STRIDE(stride, frame_width,\ + y_stride_multiple); \ + HFI_NV12_IL_CALC_Y_BUFHEIGHT(_height, half_height,\ + y_buffer_height_multiple); \ + HFI_NV12_UBWC_IL_CALC_Y_BUF_SIZE(y_buf_size, stride, _height);\ + HFI_NV12_IL_CALC_UV_STRIDE(stride, frame_width, \ + uv_stride_multiple); \ + HFI_NV12_IL_CALC_UV_BUFHEIGHT(_height, half_height, \ + uv_buffer_height_multiple); \ + HFI_NV12_UBWC_IL_CALC_UV_BUF_SIZE(uv_buf_size, stride, _height);\ + HFI_UBWC_CALC_METADATA_PLANE_STRIDE(stride, frame_width,\ + y_metadata_stride_multiple, \ + HFI_COLOR_FORMAT_YUV420_NV12_UBWC_Y_TILE_WIDTH);\ + HFI_UBWC_METADATA_PLANE_BUFHEIGHT(_height, half_height, \ + y_metadata_buffer_height_multiple,\ + HFI_COLOR_FORMAT_YUV420_NV12_UBWC_Y_TILE_HEIGHT);\ + HFI_UBWC_METADATA_PLANE_BUFFER_SIZE(y_meta_size, stride, \ + _height); \ + HFI_UBWC_UV_METADATA_PLANE_STRIDE(stride, frame_width,\ + uv_metadata_stride_multiple, \ + HFI_COLOR_FORMAT_YUV420_NV12_UBWC_UV_TILE_WIDTH); \ + HFI_UBWC_UV_METADATA_PLANE_BUFHEIGHT(_height, half_height,\ + 
uv_metadata_buffer_height_multiple,\ + HFI_COLOR_FORMAT_YUV420_NV12_UBWC_UV_TILE_HEIGHT);\ + HFI_UBWC_METADATA_PLANE_BUFFER_SIZE(uv_meta_size, stride, \ + _height); \ + buf_size = (y_buf_size + uv_buf_size + y_meta_size + \ + uv_meta_size) << 1;\ + } while (0) + +#define HFI_YUV420_TP10_CALC_Y_STRIDE(stride, frame_width, stride_multiple) \ + do { \ + stride = HFI_ALIGN(frame_width, 192); \ + stride = HFI_ALIGN(stride * 4 / 3, stride_multiple); \ + } while (0) + +#define HFI_YUV420_TP10_CALC_Y_BUFHEIGHT(buf_height, frame_height, \ + min_buf_height_multiple) \ + (buf_height = HFI_ALIGN(frame_height, min_buf_height_multiple)) + +#define HFI_YUV420_TP10_CALC_UV_STRIDE(stride, frame_width, stride_multiple) \ + do { \ + stride = HFI_ALIGN(frame_width, 192); \ + stride = HFI_ALIGN(stride * 4 / 3, stride_multiple); \ + } while (0) + +#define HFI_YUV420_TP10_CALC_UV_BUFHEIGHT(buf_height, frame_height, \ + min_buf_height_multiple) \ + (buf_height = HFI_ALIGN(((frame_height + 1) >> 1), \ + min_buf_height_multiple)) + +#define HFI_YUV420_TP10_CALC_BUF_SIZE(buf_size, y_buf_size, y_stride,\ + y_buf_height, uv_buf_size, uv_stride, uv_buf_height) \ + do { \ + y_buf_size = (y_stride * y_buf_height); \ + uv_buf_size = (uv_stride * uv_buf_height); \ + buf_size = y_buf_size + uv_buf_size \ + } while (0) + +#define HFI_YUV420_TP10_UBWC_CALC_Y_BUF_SIZE(y_buf_size, y_stride, \ + y_buf_height) \ + (y_buf_size = HFI_ALIGN(y_stride * y_buf_height, HFI_ALIGNMENT_4096)) + +#define HFI_YUV420_TP10_UBWC_CALC_UV_BUF_SIZE(uv_buf_size, uv_stride, \ + uv_buf_height) \ + (uv_buf_size = HFI_ALIGN(uv_stride * uv_buf_height, HFI_ALIGNMENT_4096)) + +#define HFI_YUV420_TP10_UBWC_CALC_BUF_SIZE(buf_size, y_stride, y_buf_height, \ + uv_stride, uv_buf_height, y_md_stride, y_md_height, uv_md_stride, \ + uv_md_height)\ + do { \ + HFI_U32 y_data_size, uv_data_size, y_md_size, uv_md_size; \ + HFI_YUV420_TP10_UBWC_CALC_Y_BUF_SIZE(y_data_size, y_stride,\ + y_buf_height); \ + 
HFI_YUV420_TP10_UBWC_CALC_UV_BUF_SIZE(uv_data_size, uv_stride, \ + uv_buf_height); \ + HFI_UBWC_METADATA_PLANE_BUFFER_SIZE(y_md_size, y_md_stride, \ + y_md_height); \ + HFI_UBWC_METADATA_PLANE_BUFFER_SIZE(uv_md_size, uv_md_stride, \ + uv_md_height); \ + buf_size = y_data_size + uv_data_size + y_md_size + \ + uv_md_size; \ + } while (0) + +#define HFI_YUV420_P010_CALC_Y_STRIDE(stride, frame_width, stride_multiple) \ + (stride = HFI_ALIGN(frame_width * 2, stride_multiple)) + +#define HFI_YUV420_P010_CALC_Y_BUFHEIGHT(buf_height, frame_height, \ + min_buf_height_multiple) \ + (buf_height = HFI_ALIGN(frame_height, min_buf_height_multiple)) + +#define HFI_YUV420_P010_CALC_UV_STRIDE(stride, frame_width, stride_multiple) \ + (stride = HFI_ALIGN(frame_width * 2, stride_multiple)) + +#define HFI_YUV420_P010_CALC_UV_BUFHEIGHT(buf_height, frame_height, \ + min_buf_height_multiple) \ + (buf_height = HFI_ALIGN(((frame_height + 1) >> 1), \ + min_buf_height_multiple)) + +#define HFI_YUV420_P010_CALC_BUF_SIZE(buf_size, y_data_size, y_stride, \ + y_buf_height, uv_data_size, uv_stride, uv_buf_height) \ + do { \ + y_data_size = HFI_ALIGN(y_stride * y_buf_height, \ + HFI_ALIGNMENT_4096);\ + uv_data_size = HFI_ALIGN(uv_stride * uv_buf_height, \ + HFI_ALIGNMENT_4096); \ + buf_size = y_data_size + uv_data_size; \ + } while (0) + +#define HFI_RGB888_CALC_STRIDE(stride, frame_width, stride_multiple) \ + (stride = ((frame_width * 3) + stride_multiple - 1) & \ + (0xffffffff - (stride_multiple - 1))) + +#define HFI_RGB888_CALC_BUFHEIGHT(buf_height, frame_height, \ + min_buf_height_multiple) \ + (buf_height = ((frame_height + min_buf_height_multiple - 1) & \ + (0xffffffff - (min_buf_height_multiple - 1)))) + +#define HFI_RGB888_CALC_BUF_SIZE(buf_size, stride, buf_height) \ + (buf_size = ((stride) * (buf_height))) + +#define HFI_RGBA8888_CALC_STRIDE(stride, frame_width, stride_multiple) \ + (stride = HFI_ALIGN((frame_width << 2), stride_multiple)) + +#define 
HFI_RGBA8888_CALC_BUFHEIGHT(buf_height, frame_height, \ + min_buf_height_multiple) \ + (buf_height = HFI_ALIGN(frame_height, min_buf_height_multiple)) + +#define HFI_RGBA8888_CALC_BUF_SIZE(buf_size, stride, buf_height) \ + (buf_size = (stride) * (buf_height)) + +#define HFI_RGBA8888_UBWC_CALC_DATA_PLANE_BUF_SIZE(buf_size, stride, \ + buf_height) \ + (buf_size = HFI_ALIGN((stride) * (buf_height), HFI_ALIGNMENT_4096)) + +#define HFI_RGBA8888_UBWC_BUF_SIZE(buf_size, data_buf_size, \ + metadata_buffer_size, stride, buf_height, _metadata_tride, \ + _metadata_buf_height) \ + do { \ + HFI_RGBA8888_UBWC_CALC_DATA_PLANE_BUF_SIZE(data_buf_size, \ + stride, buf_height); \ + HFI_UBWC_METADATA_PLANE_BUFFER_SIZE(metadata_buffer_size, \ + _metadata_tride, _metadata_buf_height); \ + buf_size = data_buf_size + metadata_buffer_size \ + } while (0) + +#define HFI_UBWC_CALC_METADATA_PLANE_STRIDE(metadata_stride, frame_width,\ + metadata_stride_multiple, tile_width_in_pels) \ + ((metadata_stride = HFI_ALIGN(((frame_width + (tile_width_in_pels - 1)) /\ + tile_width_in_pels), metadata_stride_multiple))) + +#define HFI_UBWC_METADATA_PLANE_BUFHEIGHT(metadata_buf_height, frame_height, \ + metadata_height_multiple, tile_height_in_pels) \ + ((metadata_buf_height = HFI_ALIGN(((frame_height + \ + (tile_height_in_pels - 1)) / tile_height_in_pels), \ + metadata_height_multiple))) + +#define HFI_UBWC_UV_METADATA_PLANE_STRIDE(metadata_stride, frame_width, \ + metadata_stride_multiple, tile_width_in_pels) \ + ((metadata_stride = HFI_ALIGN(((((frame_width + 1) >> 1) +\ + (tile_width_in_pels - 1)) / tile_width_in_pels), \ + metadata_stride_multiple))) + +#define HFI_UBWC_UV_METADATA_PLANE_BUFHEIGHT(metadata_buf_height, frame_height,\ + metadata_height_multiple, tile_height_in_pels) \ + (metadata_buf_height = HFI_ALIGN(((((frame_height + 1) >> 1) + \ + (tile_height_in_pels - 1)) / tile_height_in_pels), \ + metadata_height_multiple)) + +#define HFI_UBWC_METADATA_PLANE_BUFFER_SIZE(buffer_size, 
_metadata_tride, \ + _metadata_buf_height) \ + ((buffer_size = HFI_ALIGN(_metadata_tride * _metadata_buf_height, \ + HFI_ALIGNMENT_4096))) + +#define BUFFER_ALIGNMENT_512_BYTES 512 +#define BUFFER_ALIGNMENT_256_BYTES 256 +#define BUFFER_ALIGNMENT_128_BYTES 128 +#define BUFFER_ALIGNMENT_64_BYTES 64 +#define BUFFER_ALIGNMENT_32_BYTES 32 +#define BUFFER_ALIGNMENT_16_BYTES 16 +#define BUFFER_ALIGNMENT_8_BYTES 8 +#define BUFFER_ALIGNMENT_4_BYTES 4 + +#define VENUS_DMA_ALIGNMENT BUFFER_ALIGNMENT_256_BYTES + +#define MAX_FE_NBR_CTRL_LCU64_LINE_BUFFER_SIZE 64 +#define MAX_FE_NBR_CTRL_LCU32_LINE_BUFFER_SIZE 64 +#define MAX_FE_NBR_CTRL_LCU16_LINE_BUFFER_SIZE 64 +#define MAX_FE_NBR_DATA_LUMA_LINE_BUFFER_SIZE 640 +#define MAX_FE_NBR_DATA_CB_LINE_BUFFER_SIZE 320 +#define MAX_FE_NBR_DATA_CR_LINE_BUFFER_SIZE 320 + +#define MAX_SE_NBR_CTRL_LCU64_LINE_BUFFER_SIZE (128 / 8) +#define MAX_SE_NBR_CTRL_LCU32_LINE_BUFFER_SIZE (128 / 8) +#define MAX_SE_NBR_CTRL_LCU16_LINE_BUFFER_SIZE (128 / 8) + +#define MAX_PE_NBR_DATA_LCU64_LINE_BUFFER_SIZE (64 * 2 * 3) +#define MAX_PE_NBR_DATA_LCU32_LINE_BUFFER_SIZE (32 * 2 * 3) +#define MAX_PE_NBR_DATA_LCU16_LINE_BUFFER_SIZE (16 * 2 * 3) + +#define MAX_TILE_COLUMNS 32 + +#define SIZE_VPSS_LB(size, frame_width, frame_height, num_vpp_pipes) \ + do { \ + HFI_U32 vpss_4tap_top_buffer_size, vpss_div2_top_buffer_size, \ + vpss_4tap_left_buffer_size, vpss_div2_left_buffer_size; \ + HFI_U32 opb_wr_top_line_luma_buffer_size, \ + opb_wr_top_line_chroma_buffer_size, \ + opb_lb_wr_llb_y_buffer_size,\ + opb_lb_wr_llb_uv_buffer_size; \ + HFI_U32 macrotiling_size; \ + vpss_4tap_top_buffer_size = 0; \ + vpss_div2_top_buffer_size = 0; \ + vpss_4tap_left_buffer_size = 0; \ + vpss_div2_left_buffer_size = 0; \ + macrotiling_size = 32; \ + opb_wr_top_line_luma_buffer_size = HFI_ALIGN(frame_width, \ + macrotiling_size) / macrotiling_size * 256; \ + opb_wr_top_line_luma_buffer_size = \ + HFI_ALIGN(opb_wr_top_line_luma_buffer_size, \ + VENUS_DMA_ALIGNMENT) + 
(MAX_TILE_COLUMNS - 1) * 256; \ + opb_wr_top_line_luma_buffer_size = \ + MAX(opb_wr_top_line_luma_buffer_size, (32 * \ + HFI_ALIGN(frame_height, 8))); \ + opb_wr_top_line_chroma_buffer_size = \ + opb_wr_top_line_luma_buffer_size;\ + opb_lb_wr_llb_uv_buffer_size = \ + HFI_ALIGN((HFI_ALIGN(frame_height, 8) / (4 / 2)) * 64,\ + BUFFER_ALIGNMENT_32_BYTES); \ + opb_lb_wr_llb_y_buffer_size = \ + HFI_ALIGN((HFI_ALIGN(frame_height, 8) / (4 / 2)) * 64,\ + BUFFER_ALIGNMENT_32_BYTES); \ + size = num_vpp_pipes * 2 * (vpss_4tap_top_buffer_size + \ + vpss_div2_top_buffer_size) + \ + 2 * (vpss_4tap_left_buffer_size + \ + vpss_div2_left_buffer_size) + \ + opb_wr_top_line_luma_buffer_size + \ + opb_wr_top_line_chroma_buffer_size + \ + opb_lb_wr_llb_uv_buffer_size + \ + opb_lb_wr_llb_y_buffer_size; \ + } while (0) + +#define VPP_CMD_MAX_SIZE (1 << 20) +#define NUM_HW_PIC_BUF 32 +#define BIN_BUFFER_THRESHOLD (1280 * 736) +#define H264D_MAX_SLICE 1800 +#define SIZE_H264D_BUFTAB_T (256) +#define SIZE_H264D_HW_PIC_T (1 << 11) +#define SIZE_H264D_BSE_CMD_PER_BUF (32 * 4) +#define SIZE_H264D_VPP_CMD_PER_BUF (512) + +#define SIZE_H264D_LB_FE_TOP_DATA(frame_width, frame_height) \ + (MAX_FE_NBR_DATA_LUMA_LINE_BUFFER_SIZE * HFI_ALIGN(frame_width, 16) * 3) + +#define SIZE_H264D_LB_FE_TOP_CTRL(frame_width, frame_height) \ + (MAX_FE_NBR_CTRL_LCU64_LINE_BUFFER_SIZE * ((frame_width + 15) >> 4)) + +#define SIZE_H264D_LB_FE_LEFT_CTRL(frame_width, frame_height) \ + (MAX_FE_NBR_CTRL_LCU64_LINE_BUFFER_SIZE * ((frame_height + 15) >> 4)) + +#define SIZE_H264D_LB_SE_TOP_CTRL(frame_width, frame_height) \ + (MAX_SE_NBR_CTRL_LCU64_LINE_BUFFER_SIZE * ((frame_width + 15) >> 4)) + +#define SIZE_H264D_LB_SE_LEFT_CTRL(frame_width, frame_height) \ + (MAX_SE_NBR_CTRL_LCU64_LINE_BUFFER_SIZE * ((frame_height + 15) >> 4)) + +#define SIZE_H264D_LB_PE_TOP_DATA(frame_width, frame_height) \ + (MAX_PE_NBR_DATA_LCU64_LINE_BUFFER_SIZE * ((frame_width + 15) >> 4)) + +#define SIZE_H264D_LB_VSP_TOP(frame_width, frame_height) \ + 
((((frame_width + 15) >> 4) << 7)) + +#define SIZE_H264D_LB_RECON_DMA_METADATA_WR(frame_width, frame_height) \ + (HFI_ALIGN(frame_height, 16) * 32) + +#define SIZE_H264D_QP(frame_width, frame_height) \ + (((frame_width + 63) >> 6) * ((frame_height + 63) >> 6) * 128) + +#define SIZE_HW_PIC(size_per_buf) \ + (NUM_HW_PIC_BUF * size_per_buf) + +#define SIZE_H264D_BSE_CMD_BUF(_size, frame_width, frame_height) \ + do { \ + HFI_U32 _height = HFI_ALIGN(frame_height, \ + BUFFER_ALIGNMENT_32_BYTES); \ + _size = MIN((((_height + 15) >> 4) * 48), H264D_MAX_SLICE) *\ + SIZE_H264D_BSE_CMD_PER_BUF; \ + } while (0) + +#define SIZE_H264D_VPP_CMD_BUF(_size, frame_width, frame_height) \ + do { \ + HFI_U32 _height = HFI_ALIGN(frame_height, \ + BUFFER_ALIGNMENT_32_BYTES); \ + _size = MIN((((_height + 15) >> 4) * 48), H264D_MAX_SLICE) * \ + SIZE_H264D_VPP_CMD_PER_BUF; \ + if (_size > VPP_CMD_MAX_SIZE) \ + _size = VPP_CMD_MAX_SIZE; \ + } while (0) + +#define HFI_BUFFER_COMV_H264D(comv_size, frame_width, \ + frame_height, _comv_bufcount) \ + do { \ + HFI_U32 frame_width_in_mbs = ((frame_width + 15) >> 4); \ + HFI_U32 frame_height_in_mbs = ((frame_height + 15) >> 4); \ + HFI_U32 col_mv_aligned_width = (frame_width_in_mbs << 7); \ + HFI_U32 col_zero_aligned_width = (frame_width_in_mbs << 2); \ + HFI_U32 col_zero_size = 0, size_colloc = 0; \ + col_mv_aligned_width = HFI_ALIGN(col_mv_aligned_width, \ + BUFFER_ALIGNMENT_16_BYTES); \ + col_zero_aligned_width = HFI_ALIGN(col_zero_aligned_width, \ + BUFFER_ALIGNMENT_16_BYTES); \ + col_zero_size = col_zero_aligned_width * \ + ((frame_height_in_mbs + 1) >> 1); \ + col_zero_size = HFI_ALIGN(col_zero_size, \ + BUFFER_ALIGNMENT_64_BYTES); \ + col_zero_size <<= 1; \ + col_zero_size = HFI_ALIGN(col_zero_size, \ + BUFFER_ALIGNMENT_512_BYTES); \ + size_colloc = col_mv_aligned_width * ((frame_height_in_mbs + \ + 1) >> 1); \ + size_colloc = HFI_ALIGN(size_colloc, \ + BUFFER_ALIGNMENT_64_BYTES); \ + size_colloc <<= 1; \ + size_colloc = HFI_ALIGN(size_colloc, 
\ + BUFFER_ALIGNMENT_512_BYTES); \ + size_colloc += (col_zero_size + SIZE_H264D_BUFTAB_T * 2); \ + comv_size = size_colloc * (_comv_bufcount); \ + comv_size += BUFFER_ALIGNMENT_512_BYTES; \ + } while (0) + +#define HFI_BUFFER_NON_COMV_H264D(_size, frame_width, frame_height, \ + num_vpp_pipes) \ + do { \ + HFI_U32 _size_bse, _size_vpp; \ + SIZE_H264D_BSE_CMD_BUF(_size_bse, frame_width, frame_height); \ + SIZE_H264D_VPP_CMD_BUF(_size_vpp, frame_width, frame_height); \ + _size = HFI_ALIGN(_size_bse, VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(_size_vpp, VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_HW_PIC(SIZE_H264D_HW_PIC_T), \ + VENUS_DMA_ALIGNMENT); \ + _size = HFI_ALIGN(_size, VENUS_DMA_ALIGNMENT); \ + } while (0) + +#define HFI_BUFFER_LINE_H264D(_size, frame_width, frame_height, \ + is_opb, num_vpp_pipes) \ + do { \ + HFI_U32 vpss_lb_size = 0; \ + _size = HFI_ALIGN(SIZE_H264D_LB_FE_TOP_DATA(frame_width, \ + frame_height), VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_H264D_LB_FE_TOP_CTRL(frame_width, \ + frame_height), VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_H264D_LB_FE_LEFT_CTRL(frame_width, \ + frame_height), VENUS_DMA_ALIGNMENT) * num_vpp_pipes + \ + HFI_ALIGN(SIZE_H264D_LB_SE_TOP_CTRL(frame_width, \ + frame_height), VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_H264D_LB_SE_LEFT_CTRL(frame_width, \ + frame_height), VENUS_DMA_ALIGNMENT) * \ + num_vpp_pipes + \ + HFI_ALIGN(SIZE_H264D_LB_PE_TOP_DATA(frame_width, \ + frame_height), VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_H264D_LB_VSP_TOP(frame_width, \ + frame_height), VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_H264D_LB_RECON_DMA_METADATA_WR\ + (frame_width, frame_height), \ + VENUS_DMA_ALIGNMENT) * 2 + HFI_ALIGN(SIZE_H264D_QP\ + (frame_width, frame_height), VENUS_DMA_ALIGNMENT); \ + _size = HFI_ALIGN(_size, VENUS_DMA_ALIGNMENT); \ + if (is_opb) { \ + SIZE_VPSS_LB(vpss_lb_size, frame_width, frame_height, \ + num_vpp_pipes); \ + } \ + _size = HFI_ALIGN((_size + vpss_lb_size), \ + VENUS_DMA_ALIGNMENT); \ + } while (0) + +#define 
H264_CABAC_HDR_RATIO_HD_TOT 1 +#define H264_CABAC_RES_RATIO_HD_TOT 3 + +#define SIZE_H264D_HW_BIN_BUFFER(_size, frame_width, frame_height, \ + delay, num_vpp_pipes) \ + do { \ + HFI_U32 size_yuv, size_bin_hdr, size_bin_res; \ + size_yuv = ((frame_width * frame_height) <= \ + BIN_BUFFER_THRESHOLD) ?\ + ((BIN_BUFFER_THRESHOLD * 3) >> 1) : \ + ((frame_width * frame_height * 3) >> 1); \ + size_bin_hdr = size_yuv * H264_CABAC_HDR_RATIO_HD_TOT; \ + size_bin_res = size_yuv * H264_CABAC_RES_RATIO_HD_TOT; \ + size_bin_hdr = size_bin_hdr * (((((HFI_U32)(delay)) & 31) /\ + 10) + 2) / 2; \ + size_bin_res = size_bin_res * (((((HFI_U32)(delay)) & 31) /\ + 10) + 2) / 2; \ + size_bin_hdr = HFI_ALIGN(size_bin_hdr / num_vpp_pipes,\ + VENUS_DMA_ALIGNMENT) * num_vpp_pipes; \ + size_bin_res = HFI_ALIGN(size_bin_res / num_vpp_pipes, \ + VENUS_DMA_ALIGNMENT) * num_vpp_pipes; \ + _size = size_bin_hdr + size_bin_res; \ + } while (0) + +#define HFI_BUFFER_BIN_H264D(_size, frame_width, frame_height, is_interlaced, \ + delay, num_vpp_pipes) \ + do { \ + HFI_U32 n_aligned_w = HFI_ALIGN(frame_width, \ + BUFFER_ALIGNMENT_16_BYTES);\ + HFI_U32 n_aligned_h = HFI_ALIGN(frame_height, \ + BUFFER_ALIGNMENT_16_BYTES); \ + if (!is_interlaced) { \ + SIZE_H264D_HW_BIN_BUFFER(_size, n_aligned_w, \ + n_aligned_h, delay, num_vpp_pipes); \ + } else \ + _size = 0; \ + } while (0) + +#define NUM_SLIST_BUF_H264 (256 + 32) +#define SIZE_SLIST_BUF_H264 (512) +#define SIZE_SEI_USERDATA (4096) +#define H264_NUM_FRM_INFO (66) +#define H264_DISPLAY_BUF_SIZE (3328) +#define SIZE_DOLBY_RPU_METADATA (41 * 1024) +#define HFI_BUFFER_PERSIST_H264D(_size, rpu_enabled) \ + (_size = HFI_ALIGN((SIZE_SLIST_BUF_H264 * NUM_SLIST_BUF_H264 + \ + H264_DISPLAY_BUF_SIZE * H264_NUM_FRM_INFO + \ + NUM_HW_PIC_BUF * SIZE_SEI_USERDATA + \ + (rpu_enabled) * NUM_HW_PIC_BUF * SIZE_DOLBY_RPU_METADATA), \ + VENUS_DMA_ALIGNMENT)) + +#define LCU_MAX_SIZE_PELS 64 +#define LCU_MIN_SIZE_PELS 16 + +#define H265D_MAX_SLICE 1200 +#define 
SIZE_H265D_HW_PIC_T SIZE_H264D_HW_PIC_T +#define SIZE_H265D_BSE_CMD_PER_BUF (16 * sizeof(HFI_U32)) +#define SIZE_H265D_VPP_CMD_PER_BUF (256) + +#define SIZE_H265D_LB_FE_TOP_DATA(frame_width, frame_height) \ + (MAX_FE_NBR_DATA_LUMA_LINE_BUFFER_SIZE * \ + (HFI_ALIGN(frame_width, 64) + 8) * 2) + +#define SIZE_H265D_LB_FE_TOP_CTRL(frame_width, frame_height) \ + (MAX_FE_NBR_CTRL_LCU64_LINE_BUFFER_SIZE * \ + (HFI_ALIGN(frame_width, LCU_MAX_SIZE_PELS) / LCU_MIN_SIZE_PELS)) + +#define SIZE_H265D_LB_FE_LEFT_CTRL(frame_width, frame_height) \ + (MAX_FE_NBR_CTRL_LCU64_LINE_BUFFER_SIZE * \ + (HFI_ALIGN(frame_height, LCU_MAX_SIZE_PELS) / LCU_MIN_SIZE_PELS)) + +#define SIZE_H265D_LB_SE_TOP_CTRL(frame_width, frame_height) \ + ((LCU_MAX_SIZE_PELS / 8 * (128 / 8)) * ((frame_width + 15) >> 4)) + +#define SIZE_H265D_LB_SE_LEFT_CTRL(frame_width, frame_height) \ + (MAX(((frame_height + 16 - 1) / 8) * \ + MAX_SE_NBR_CTRL_LCU16_LINE_BUFFER_SIZE, \ + MAX(((frame_height + 32 - 1) / 8) * \ + MAX_SE_NBR_CTRL_LCU32_LINE_BUFFER_SIZE, \ + ((frame_height + 64 - 1) / 8) * \ + MAX_SE_NBR_CTRL_LCU64_LINE_BUFFER_SIZE))) + +#define SIZE_H265D_LB_PE_TOP_DATA(frame_width, frame_height) \ + (MAX_PE_NBR_DATA_LCU64_LINE_BUFFER_SIZE * (HFI_ALIGN(frame_width, \ + LCU_MIN_SIZE_PELS) / LCU_MIN_SIZE_PELS)) + +#define SIZE_H265D_LB_VSP_TOP(frame_width, frame_height) \ + (((frame_width + 63) >> 6) * 128) + +#define SIZE_H265D_LB_VSP_LEFT(frame_width, frame_height) \ + (((frame_height + 63) >> 6) * 128) + +#define SIZE_H265D_LB_RECON_DMA_METADATA_WR(frame_width, frame_height) \ + SIZE_H264D_LB_RECON_DMA_METADATA_WR(frame_width, frame_height) + +#define SIZE_H265D_QP(frame_width, frame_height) \ + SIZE_H264D_QP(frame_width, frame_height) + +#define SIZE_H265D_BSE_CMD_BUF(_size, frame_width, frame_height)\ + do { \ + _size = HFI_ALIGN(((HFI_ALIGN(frame_width, \ + LCU_MAX_SIZE_PELS) / LCU_MIN_SIZE_PELS) * \ + (HFI_ALIGN(frame_height, LCU_MAX_SIZE_PELS) /\ + LCU_MIN_SIZE_PELS)) * NUM_HW_PIC_BUF, VENUS_DMA_ALIGNMENT); 
\ + _size = MIN(_size, H265D_MAX_SLICE + 1); \ + _size = 2 * _size * SIZE_H265D_BSE_CMD_PER_BUF; \ + } while (0) + +#define SIZE_H265D_VPP_CMD_BUF(_size, frame_width, frame_height) \ + do { \ + _size = HFI_ALIGN(((HFI_ALIGN(frame_width, LCU_MAX_SIZE_PELS) /\ + LCU_MIN_SIZE_PELS) * (HFI_ALIGN(frame_height, \ + LCU_MAX_SIZE_PELS) / LCU_MIN_SIZE_PELS)) * \ + NUM_HW_PIC_BUF, VENUS_DMA_ALIGNMENT); \ + _size = MIN(_size, H265D_MAX_SLICE + 1); \ + _size = HFI_ALIGN(_size, 4); \ + _size = 2 * _size * SIZE_H265D_VPP_CMD_PER_BUF; \ + if (_size > VPP_CMD_MAX_SIZE) { \ + _size = VPP_CMD_MAX_SIZE; \ + } \ + } while (0) + +#define HFI_BUFFER_COMV_H265D(_size, frame_width, frame_height, \ + _comv_bufcount) \ + do { \ + _size = HFI_ALIGN(((((frame_width + 15) >> 4) * \ + ((frame_height + 15) >> 4)) << 8), \ + BUFFER_ALIGNMENT_512_BYTES); \ + _size *= _comv_bufcount; \ + _size += BUFFER_ALIGNMENT_512_BYTES; \ + } while (0) + +#define HDR10_HIST_EXTRADATA_SIZE (4 * 1024) + +#define HFI_BUFFER_NON_COMV_H265D(_size, frame_width, frame_height, \ + num_vpp_pipes) \ + do { \ + HFI_U32 _size_bse, _size_vpp; \ + SIZE_H265D_BSE_CMD_BUF(_size_bse, frame_width, \ + frame_height); \ + SIZE_H265D_VPP_CMD_BUF(_size_vpp, frame_width, \ + frame_height); \ + _size = HFI_ALIGN(_size_bse, VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(_size_vpp, VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(NUM_HW_PIC_BUF * 20 * 22 * 4, \ + VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(2 * sizeof(HFI_U16) * \ + (HFI_ALIGN(frame_width, LCU_MAX_SIZE_PELS) / \ + LCU_MIN_SIZE_PELS) * (HFI_ALIGN(frame_height, \ + LCU_MAX_SIZE_PELS) / LCU_MIN_SIZE_PELS), \ + VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_HW_PIC(SIZE_H265D_HW_PIC_T), \ + VENUS_DMA_ALIGNMENT) + \ + HDR10_HIST_EXTRADATA_SIZE; \ + _size = HFI_ALIGN(_size, VENUS_DMA_ALIGNMENT); \ + } while (0) + +#define HFI_BUFFER_LINE_H265D(_size, frame_width, frame_height, \ + is_opb, num_vpp_pipes) \ + do { \ + HFI_U32 vpss_lb_size = 0; \ + _size = HFI_ALIGN(SIZE_H265D_LB_FE_TOP_DATA(frame_width, \ + 
frame_height), VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_H265D_LB_FE_TOP_CTRL(frame_width, \ + frame_height), VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_H265D_LB_FE_LEFT_CTRL(frame_width, \ + frame_height), VENUS_DMA_ALIGNMENT) * num_vpp_pipes + \ + HFI_ALIGN(SIZE_H265D_LB_SE_LEFT_CTRL(frame_width, \ + frame_height), VENUS_DMA_ALIGNMENT) * num_vpp_pipes + \ + HFI_ALIGN(SIZE_H265D_LB_SE_TOP_CTRL(frame_width, \ + frame_height), VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_H265D_LB_PE_TOP_DATA(frame_width, \ + frame_height), VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_H265D_LB_VSP_TOP(frame_width, \ + frame_height), VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_H265D_LB_VSP_LEFT(frame_width, \ + frame_height), VENUS_DMA_ALIGNMENT) * num_vpp_pipes + \ + HFI_ALIGN(SIZE_H265D_LB_RECON_DMA_METADATA_WR\ + (frame_width, frame_height), \ + VENUS_DMA_ALIGNMENT) * 4 + \ + HFI_ALIGN(SIZE_H265D_QP(frame_width, frame_height),\ + VENUS_DMA_ALIGNMENT); \ + if (is_opb) { \ + SIZE_VPSS_LB(vpss_lb_size, frame_width, frame_height,\ + num_vpp_pipes); \ + } \ + _size = HFI_ALIGN((_size + vpss_lb_size), \ + VENUS_DMA_ALIGNMENT); \ + } while (0) + +#define H265_CABAC_HDR_RATIO_HD_TOT 2 +#define H265_CABAC_RES_RATIO_HD_TOT 2 + +#define SIZE_H265D_HW_BIN_BUFFER(_size, frame_width, frame_height, \ + delay, num_vpp_pipes) \ + do { \ + HFI_U32 size_yuv, size_bin_hdr, size_bin_res; \ + size_yuv = ((frame_width * frame_height) <= \ + BIN_BUFFER_THRESHOLD) ? 
\ + ((BIN_BUFFER_THRESHOLD * 3) >> 1) : \ + ((frame_width * frame_height * 3) >> 1); \ + size_bin_hdr = size_yuv * H265_CABAC_HDR_RATIO_HD_TOT; \ + size_bin_res = size_yuv * H265_CABAC_RES_RATIO_HD_TOT; \ + size_bin_hdr = size_bin_hdr * \ + (((((HFI_U32)(delay)) & 31) / 10) + 2) / 2; \ + size_bin_res = size_bin_res * \ + (((((HFI_U32)(delay)) & 31) / 10) + 2) / 2; \ + size_bin_hdr = HFI_ALIGN(size_bin_hdr / \ + num_vpp_pipes, VENUS_DMA_ALIGNMENT) * \ + num_vpp_pipes; \ + size_bin_res = HFI_ALIGN(size_bin_res / num_vpp_pipes,\ + VENUS_DMA_ALIGNMENT) * num_vpp_pipes; \ + _size = size_bin_hdr + size_bin_res; \ + } while (0) + +#define HFI_BUFFER_BIN_H265D(_size, frame_width, frame_height, \ + is_interlaced, delay, num_vpp_pipes) \ + do { \ + HFI_U32 n_aligned_w = HFI_ALIGN(frame_width, \ + BUFFER_ALIGNMENT_16_BYTES); \ + HFI_U32 n_aligned_h = HFI_ALIGN(frame_height, \ + BUFFER_ALIGNMENT_16_BYTES); \ + if (!is_interlaced) { \ + SIZE_H265D_HW_BIN_BUFFER(_size, n_aligned_w, \ + n_aligned_h, delay, num_vpp_pipes); \ + } else \ + _size = 0; \ + } while (0) + +#define SIZE_SLIST_BUF_H265 (1 << 10) +#define NUM_SLIST_BUF_H265 (80 + 20) +#define H265_NUM_TILE_COL 32 +#define H265_NUM_TILE_ROW 128 +#define H265_NUM_TILE (H265_NUM_TILE_ROW * H265_NUM_TILE_COL + 1) +#define H265_NUM_FRM_INFO (48) +#define H265_DISPLAY_BUF_SIZE (3072) +#define HFI_BUFFER_PERSIST_H265D(_size, rpu_enabled) \ + (_size = HFI_ALIGN((SIZE_SLIST_BUF_H265 * NUM_SLIST_BUF_H265 + \ + H265_NUM_FRM_INFO * H265_DISPLAY_BUF_SIZE + \ + H265_NUM_TILE * sizeof(HFI_U32) + NUM_HW_PIC_BUF * SIZE_SEI_USERDATA + \ + rpu_enabled * NUM_HW_PIC_BUF * SIZE_DOLBY_RPU_METADATA),\ + VENUS_DMA_ALIGNMENT)) + +#define SIZE_VPXD_LB_FE_LEFT_CTRL(frame_width, frame_height) \ + MAX(((frame_height + 15) >> 4) * \ + MAX_FE_NBR_CTRL_LCU16_LINE_BUFFER_SIZE, \ + MAX(((frame_height + 31) >> 5) * \ + MAX_FE_NBR_CTRL_LCU32_LINE_BUFFER_SIZE, \ + ((frame_height + 63) >> 6) * MAX_FE_NBR_CTRL_LCU64_LINE_BUFFER_SIZE)) +#define 
SIZE_VPXD_LB_FE_TOP_CTRL(frame_width, frame_height) \ + (((HFI_ALIGN(frame_width, 64) + 8) * 10 * 2)) +#define SIZE_VPXD_LB_SE_TOP_CTRL(frame_width, frame_height) \ + (((frame_width + 15) >> 4) * MAX_FE_NBR_CTRL_LCU16_LINE_BUFFER_SIZE) +#define SIZE_VPXD_LB_SE_LEFT_CTRL(frame_width, frame_height) \ + MAX(((frame_height + 15) >> 4) * \ + MAX_SE_NBR_CTRL_LCU16_LINE_BUFFER_SIZE,\ + MAX(((frame_height + 31) >> 5) * \ + MAX_SE_NBR_CTRL_LCU32_LINE_BUFFER_SIZE, \ + ((frame_height + 63) >> 6) * MAX_SE_NBR_CTRL_LCU64_LINE_BUFFER_SIZE)) +#define SIZE_VPXD_LB_RECON_DMA_METADATA_WR(frame_width, frame_height) \ + HFI_ALIGN((HFI_ALIGN(frame_height, 8) / (4 / 2)) * 64,\ + BUFFER_ALIGNMENT_32_BYTES) +#define SIZE_MP2D_LB_FE_TOP_DATA(frame_width, frame_height) \ + ((HFI_ALIGN(frame_width, 16) + 8) * 10 * 2) +#define SIZE_VP9D_LB_FE_TOP_DATA(frame_width, frame_height) \ + ((HFI_ALIGN(HFI_ALIGN(frame_width, 8), 64) + 8) * 10 * 2) +#define SIZE_MP2D_LB_PE_TOP_DATA(frame_width, frame_height) \ + ((HFI_ALIGN(frame_width, 16) >> 4) * 64) +#define SIZE_VP9D_LB_PE_TOP_DATA(frame_width, frame_height) \ + ((HFI_ALIGN(HFI_ALIGN(frame_width, 8), 64) >> 6) * 176) +#define SIZE_MP2D_LB_VSP_TOP(frame_width, frame_height) \ + (((HFI_ALIGN(frame_width, 16) >> 4) * 64 / 2) + 256) +#define SIZE_VP9D_LB_VSP_TOP(frame_width, frame_height) \ + ((((HFI_ALIGN(HFI_ALIGN(frame_width, 8), 64) >> 6) * 64 * 8) + 256)) + +#define HFI_IRIS3_VP9D_COMV_SIZE \ + ((((8192 + 63) >> 6) * ((4320 + 63) >> 6) * 8 * 8 * 2 * 8)) + +#define SIZE_VP9D_QP(frame_width, frame_height) \ + SIZE_H264D_QP(frame_width, frame_height) + +#define HFI_IRIS3_VP9D_LB_SIZE(_size, frame_width, frame_height, num_vpp_pipes)\ + (_size = HFI_ALIGN(SIZE_VPXD_LB_FE_LEFT_CTRL(frame_width, \ + frame_height), VENUS_DMA_ALIGNMENT) * num_vpp_pipes + \ + HFI_ALIGN(SIZE_VPXD_LB_SE_LEFT_CTRL(frame_width, frame_height),\ + VENUS_DMA_ALIGNMENT) * num_vpp_pipes + \ + HFI_ALIGN(SIZE_VP9D_LB_VSP_TOP(frame_width, frame_height), \ + VENUS_DMA_ALIGNMENT) + \ + 
HFI_ALIGN(SIZE_VPXD_LB_FE_TOP_CTRL(frame_width, frame_height), \ + VENUS_DMA_ALIGNMENT) + 2 * \ + HFI_ALIGN(SIZE_VPXD_LB_RECON_DMA_METADATA_WR \ + (frame_width, frame_height), VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_VPXD_LB_SE_TOP_CTRL(frame_width, frame_height), \ + VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_VP9D_LB_PE_TOP_DATA(frame_width, frame_height), \ + VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_VP9D_LB_FE_TOP_DATA(frame_width, frame_height), \ + VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_VP9D_QP(frame_width, frame_height), \ + VENUS_DMA_ALIGNMENT)) + +#define HFI_BUFFER_LINE_VP9D(_size, frame_width, frame_height, \ + _yuv_bufcount_min, is_opb, num_vpp_pipes) \ + do { \ + HFI_U32 _lb_size = 0; \ + HFI_U32 vpss_lb_size = 0; \ + HFI_IRIS3_VP9D_LB_SIZE(_lb_size, frame_width, frame_height,\ + num_vpp_pipes); \ + if (is_opb) { \ + SIZE_VPSS_LB(vpss_lb_size, frame_width, frame_height, \ + num_vpp_pipes); \ + } \ + _size = _lb_size + vpss_lb_size; \ + } while (0) + +#define VPX_DECODER_FRAME_CONCURENCY_LVL (2) +#define VPX_DECODER_FRAME_BIN_HDR_BUDGET_NUM 1 +#define VPX_DECODER_FRAME_BIN_HDR_BUDGET_DEN 2 +#define VPX_DECODER_FRAME_BIN_RES_BUDGET_NUM 3 +#define VPX_DECODER_FRAME_BIN_RES_BUDGET_DEN 2 + +#define HFI_BUFFER_BIN_VP9D(_size, frame_width, frame_height, \ + is_interlaced, num_vpp_pipes) \ + do { \ + HFI_U32 _size_yuv = HFI_ALIGN(frame_width, \ + BUFFER_ALIGNMENT_16_BYTES) *\ + HFI_ALIGN(frame_height, BUFFER_ALIGNMENT_16_BYTES) * 3 / 2; \ + if (!is_interlaced) { \ + _size = HFI_ALIGN(((MAX(_size_yuv, \ + ((BIN_BUFFER_THRESHOLD * 3) >> 1)) * \ + VPX_DECODER_FRAME_BIN_HDR_BUDGET_NUM * \ + VPX_DECODER_FRAME_CONCURENCY_LVL / \ + VPX_DECODER_FRAME_BIN_HDR_BUDGET_DEN) / num_vpp_pipes), \ + VENUS_DMA_ALIGNMENT) + HFI_ALIGN(((MAX(_size_yuv, \ + ((BIN_BUFFER_THRESHOLD * 3) >> 1)) * \ + VPX_DECODER_FRAME_BIN_RES_BUDGET_NUM * \ + VPX_DECODER_FRAME_CONCURENCY_LVL / \ + VPX_DECODER_FRAME_BIN_RES_BUDGET_DEN) / num_vpp_pipes), \ + VENUS_DMA_ALIGNMENT); \ + _size = _size * num_vpp_pipes; \ + } \ + else \ + _size = 0; \ + } while (0) + +#define VP9_NUM_FRAME_INFO_BUF 32 +#define VP9_NUM_PROBABILITY_TABLE_BUF (VP9_NUM_FRAME_INFO_BUF + 
4) +#define VP9_PROB_TABLE_SIZE (3840) +#define VP9_FRAME_INFO_BUF_SIZE (6144) + +#define VP9_UDC_HEADER_BUF_SIZE (3 * 128) +#define MAX_SUPERFRAME_HEADER_LEN (34) +#define CCE_TILE_OFFSET_SIZE HFI_ALIGN(32 * 4 * 4, BUFFER_ALIGNMENT_32_BYTES) + +#define HFI_BUFFER_PERSIST_VP9D(_size) \ + (_size = HFI_ALIGN(VP9_NUM_PROBABILITY_TABLE_BUF * VP9_PROB_TABLE_SIZE, \ + VENUS_DMA_ALIGNMENT) + HFI_ALIGN(HFI_IRIS3_VP9D_COMV_SIZE, \ + VENUS_DMA_ALIGNMENT) + HFI_ALIGN(MAX_SUPERFRAME_HEADER_LEN, \ + VENUS_DMA_ALIGNMENT) + HFI_ALIGN(VP9_UDC_HEADER_BUF_SIZE, \ + VENUS_DMA_ALIGNMENT) + HFI_ALIGN(VP9_NUM_FRAME_INFO_BUF * \ + CCE_TILE_OFFSET_SIZE, VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(VP9_NUM_FRAME_INFO_BUF * VP9_FRAME_INFO_BUF_SIZE, \ + VENUS_DMA_ALIGNMENT) + HDR10_HIST_EXTRADATA_SIZE) + +#define HFI_BUFFER_LINE_MP2D(_size, frame_width, frame_height, \ +_yuv_bufcount_min, is_opb, num_vpp_pipes) \ + do { \ + HFI_U32 vpss_lb_size = 0; \ + _size = HFI_ALIGN(SIZE_VPXD_LB_FE_LEFT_CTRL(frame_width, \ + frame_height), VENUS_DMA_ALIGNMENT) * num_vpp_pipes + \ + HFI_ALIGN(SIZE_VPXD_LB_SE_LEFT_CTRL(frame_width, frame_height),\ + VENUS_DMA_ALIGNMENT) * num_vpp_pipes + \ + HFI_ALIGN(SIZE_MP2D_LB_VSP_TOP(frame_width, frame_height),\ + VENUS_DMA_ALIGNMENT) + HFI_ALIGN(SIZE_VPXD_LB_FE_TOP_CTRL\ + (frame_width, frame_height), VENUS_DMA_ALIGNMENT) + \ + 2 * HFI_ALIGN(SIZE_VPXD_LB_RECON_DMA_METADATA_WR(frame_width,\ + frame_height), VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_VPXD_LB_SE_TOP_CTRL(frame_width, frame_height),\ + VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_MP2D_LB_PE_TOP_DATA(frame_width, frame_height), \ + VENUS_DMA_ALIGNMENT) + \ + HFI_ALIGN(SIZE_MP2D_LB_FE_TOP_DATA(frame_width, frame_height), \ + VENUS_DMA_ALIGNMENT); \ + if (is_opb) { \ + SIZE_VPSS_LB(vpss_lb_size, frame_width, frame_height, \ + num_vpp_pipes); \ + } \ + _size += vpss_lb_size; \ + } while (0) + +#define HFI_BUFFER_BIN_MP2D(_size, frame_width, frame_height, is_interlaced) 0 + +#define QMATRIX_SIZE (sizeof(HFI_U32) * 128 + 
256) +#define MP2D_QPDUMP_SIZE 115200 +#define HFI_BUFFER_PERSIST_MP2D(_size) \ + do { \ + _size = QMATRIX_SIZE + MP2D_QPDUMP_SIZE; \ + } while (0) + +#define HFI_BUFFER_BITSTREAM_ENC(size, frame_width, frame_height, \ + rc_type, is_ten_bit) \ + do { \ + HFI_U32 aligned_width, aligned_height, bitstream_size, yuv_size; \ + aligned_width = HFI_ALIGN(frame_width, 32); \ + aligned_height = HFI_ALIGN(frame_height, 32); \ + bitstream_size = aligned_width * aligned_height * 3; \ + yuv_size = (aligned_width * aligned_height * 3) >> 1; \ + if (aligned_width * aligned_height > (4096 * 2176)) { \ + /* bitstream_size = 0.25 * yuv_size; */ \ + bitstream_size = (bitstream_size >> 3); \ + } \ + else if (aligned_width * aligned_height > (1280 * 720)) { \ + /* bitstream_size = 0.5 * yuv_size; */ \ + bitstream_size = (bitstream_size >> 2); \ + } else { \ + /* bitstream_size = 2 * yuv_size; */ \ + } \ + if ((rc_type == HFI_RC_CQ || rc_type == HFI_RC_OFF) && \ + bitstream_size < yuv_size) { \ + bitstream_size = (bitstream_size << 1);\ + } \ + if (is_ten_bit) { \ + bitstream_size = (bitstream_size) + \ + (bitstream_size >> 2); \ + } \ + size = HFI_ALIGN(bitstream_size, HFI_ALIGNMENT_4096); \ + } while (0) + +#define HFI_IRIS3_ENC_TILE_SIZE_INFO(tile_size, tile_count, last_tile_size, \ + frame_width_coded, codec_standard) \ + do { \ + HFI_U32 without_tile_enc_width; \ + HFI_U32 min_tile_size = 352, fixed_tile_width = 960; \ + without_tile_enc_width = min_tile_size + fixed_tile_width; \ + if (codec_standard == HFI_CODEC_ENCODE_HEVC && \ + frame_width_coded > without_tile_enc_width) { \ + tile_size = fixed_tile_width; \ + tile_count = (frame_width_coded + tile_size - 1) / tile_size; \ + last_tile_size = (frame_width_coded - (tile_size * (tile_count - 1))); \ + if (last_tile_size < min_tile_size) { \ + tile_count -= 1; \ + last_tile_size = (tile_size + min_tile_size); \ + } \ + } else { \ + tile_size = frame_width_coded; \ + tile_count = 1; \ + last_tile_size = 0; \ + } \ + } while (0) + 
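[reviewer note] The tile arithmetic in HFI_IRIS3_ENC_TILE_SIZE_INFO above is easier to sanity-check outside the macro. The following standalone Python sketch (not part of the patch; the function and constant names are mine) mirrors the macro's integer math: HEVC frames wider than min_tile_size + fixed_tile_width (352 + 960) are cut into 960-pixel tiles, and an undersized remainder tile is folded into the previous one. The merged-tile width (tile_size + min_tile_size) is reproduced verbatim from the macro.

```python
# Mirror of HFI_IRIS3_ENC_TILE_SIZE_INFO (HEVC encode tiling), arithmetic only.
# frame_width_coded is the LCU-aligned coded width in pixels.
MIN_TILE_SIZE = 352      # min_tile_size in the macro
FIXED_TILE_WIDTH = 960   # fixed_tile_width in the macro

def enc_tile_size_info(frame_width_coded, is_hevc):
    """Return (tile_size, tile_count, last_tile_size) as the macro computes them."""
    if is_hevc and frame_width_coded > MIN_TILE_SIZE + FIXED_TILE_WIDTH:
        tile_size = FIXED_TILE_WIDTH
        # Ceiling division, as in the macro's (w + tile_size - 1) / tile_size.
        tile_count = (frame_width_coded + tile_size - 1) // tile_size
        last_tile_size = frame_width_coded - tile_size * (tile_count - 1)
        if last_tile_size < MIN_TILE_SIZE:
            # Undersized remainder: merge it into the previous tile.
            tile_count -= 1
            last_tile_size = tile_size + MIN_TILE_SIZE
        return tile_size, tile_count, last_tile_size
    # Non-HEVC codecs or narrow frames: a single tile spanning the frame.
    return frame_width_coded, 1, 0
```

For example, a 3840-pixel coded width yields four 960-pixel tiles, while a 2240-pixel width leaves a 320-pixel remainder that gets merged into the previous tile.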
+#define HFI_IRIS3_ENC_MB_BASED_MULTI_SLICE_COUNT(total_slice_count, frame_width, frame_height, \ + codec_standard, multi_slice_max_mb_count) \ + do { \ + HFI_U32 tile_size, tile_count, last_tile_size, \ + slice_count_per_tile, slice_count_in_last_tile; \ + HFI_U32 mbs_in_one_tile, mbs_in_last_tile; \ + HFI_U32 frame_width_coded, frame_height_coded, lcu_size; \ + lcu_size = (codec_standard == HFI_CODEC_ENCODE_HEVC) ? 32 : 16; \ + frame_width_coded = HFI_ALIGN(frame_width, lcu_size); \ + frame_height_coded = HFI_ALIGN(frame_height, lcu_size); \ + HFI_IRIS3_ENC_TILE_SIZE_INFO(tile_size, tile_count, last_tile_size, \ + frame_width_coded, codec_standard); \ + mbs_in_one_tile = (tile_size * frame_height_coded) / (lcu_size * lcu_size); \ + slice_count_per_tile = \ + (mbs_in_one_tile + multi_slice_max_mb_count - 1) / \ + (multi_slice_max_mb_count); \ + if (last_tile_size) { \ + mbs_in_last_tile = \ + (last_tile_size * frame_height_coded) / (lcu_size * lcu_size); \ + slice_count_in_last_tile = \ + (mbs_in_last_tile + multi_slice_max_mb_count - 1) / \ + (multi_slice_max_mb_count); \ + total_slice_count = \ + (slice_count_per_tile * (tile_count - 1)) + \ + slice_count_in_last_tile; \ + } else \ + total_slice_count = (slice_count_per_tile * tile_count); \ + } while (0) + +#define SIZE_ROI_METADATA_ENC(size_roi, frame_width, frame_height, lcu_size)\ + do { \ + HFI_U32 width_in_lcus = 0, height_in_lcus = 0, n_shift = 0; \ + while (lcu_size && !(lcu_size & 0x1)) { \ + n_shift++; \ + lcu_size = lcu_size >> 1; \ + } \ + width_in_lcus = (frame_width + (lcu_size - 1)) >> n_shift; \ + height_in_lcus = (frame_height + (lcu_size - 1)) >> n_shift; \ + size_roi = (((width_in_lcus + 7) >> 3) << 3) * \ + height_in_lcus * 2 + 256; \ + } while (0) + +#define HFI_BUFFER_INPUT_METADATA_ENC(size, frame_width, frame_height, \ + is_roi_enabled, lcu_size) \ + do { \ + HFI_U32 roi_size = 0; \ + if (is_roi_enabled) { \ + SIZE_ROI_METADATA_ENC(roi_size, frame_width, \ + frame_height, lcu_size); \ + } 
\ + size = roi_size + 16384; \ + size = HFI_ALIGN(size, HFI_ALIGNMENT_4096); \ + } while (0) + +#define HFI_BUFFER_INPUT_METADATA_H264E(size_metadata, frame_width, \ + frame_height, is_roi_enabled) \ + HFI_BUFFER_INPUT_METADATA_ENC(size_metadata, frame_width, \ + frame_height, is_roi_enabled, 16) + +#define HFI_BUFFER_INPUT_METADATA_H265E(size_metadata, frame_width, \ + frame_height, is_roi_enabled) \ + HFI_BUFFER_INPUT_METADATA_ENC(size_metadata, frame_width, \ + frame_height, is_roi_enabled, 32) + +#define HFI_BUFFER_ARP_ENC(size) \ + (size = 204800) + +#define HFI_MAX_COL_FRAME 6 +#define HFI_VENUS_VENC_TRE_WB_BUFF_SIZE (65 << 4) // bytes +#define HFI_VENUS_VENC_DB_LINE_BUFF_PER_MB 512 +#define HFI_VENUS_VPPSG_MAX_REGISTERS 2048 +#define HFI_VENUS_WIDTH_ALIGNMENT 128 +#define HFI_VENUS_WIDTH_TEN_BIT_ALIGNMENT 192 +#define HFI_VENUS_HEIGHT_ALIGNMENT 32 +#define VENUS_METADATA_STRIDE_MULTIPLE 64 +#define VENUS_METADATA_HEIGHT_MULTIPLE 16 + +#ifndef SYSTEM_LAL_TILE10 +#define SYSTEM_LAL_TILE10 192 +#endif + +#define HFI_IRIS3_ENC_RECON_BUF_COUNT(num_recon, n_bframe, ltr_count, \ + _total_hp_layers, _total_hb_layers, hybrid_hp, codec_standard) \ + do { \ + HFI_U32 num_ref = 1; \ + if (n_bframe) \ + num_ref = 2; \ + if (_total_hp_layers > 1) { \ + if (hybrid_hp) \ + num_ref = (_total_hp_layers + 1) >> 1; \ + else if (codec_standard == HFI_CODEC_ENCODE_HEVC) \ + num_ref = (_total_hp_layers + 1) >> 1; \ + else if (codec_standard == HFI_CODEC_ENCODE_AVC && \ + _total_hp_layers < 4) \ + num_ref = (_total_hp_layers - 1); \ + else \ + num_ref = _total_hp_layers; \ + } \ + if (ltr_count) \ + num_ref = num_ref + ltr_count; \ + if (_total_hb_layers > 1) { \ + if (codec_standard == HFI_CODEC_ENCODE_HEVC) \ + num_ref = (_total_hb_layers); \ + else if (codec_standard == HFI_CODEC_ENCODE_AVC) \ + num_ref = (1 << (_total_hb_layers - 2)) + 1; \ + } \ + num_recon = num_ref + 1; \ + } while (0) + +#define SIZE_BIN_BITSTREAM_ENC(_size, rc_type, frame_width, frame_height, \ + 
work_mode, lcu_size, profile) \ + do { \ + HFI_U32 size_aligned_width = 0, size_aligned_height = 0; \ + HFI_U32 bitstream_size_eval = 0; \ + size_aligned_width = HFI_ALIGN((frame_width), lcu_size); \ + size_aligned_height = HFI_ALIGN((frame_height), lcu_size); \ + if (work_mode == HFI_WORKMODE_2) { \ + if (rc_type == HFI_RC_CQ || rc_type == HFI_RC_OFF) { \ + bitstream_size_eval = (((size_aligned_width) * \ + (size_aligned_height) * 3) >> 1); \ + } \ + else { \ + bitstream_size_eval = ((size_aligned_width) * \ + (size_aligned_height) * 3); \ + if (rc_type == HFI_RC_LOSSLESS) { \ + bitstream_size_eval = (bitstream_size_eval * 3 >> 2); \ + } else if ((size_aligned_width * size_aligned_height) > \ + (4096 * 2176)) { \ + bitstream_size_eval >>= 3; \ + } else if ((size_aligned_width * size_aligned_height) > \ + (480 * 320)) { \ + bitstream_size_eval >>= 2; \ + } \ + if (profile == HFI_H265_PROFILE_MAIN_10 || \ + profile == HFI_H265_PROFILE_MAIN_10_STILL_PICTURE) \ + bitstream_size_eval = (bitstream_size_eval * 5 >> 2); \ + } \ + } else { \ + bitstream_size_eval = size_aligned_width * \ + size_aligned_height * 3; \ + } \ + _size = HFI_ALIGN(bitstream_size_eval, VENUS_DMA_ALIGNMENT); \ + } while (0) + +#define SIZE_ENC_SINGLE_PIPE(size, rc_type, bitbin_size, num_vpp_pipes, \ + frame_width, frame_height, lcu_size) \ + do { \ + HFI_U32 size_single_pipe_eval = 0, sao_bin_buffer_size = 0, \ + _padded_bin_sz = 0; \ + HFI_U32 size_aligned_width = 0, size_aligned_height = 0; \ + size_aligned_width = HFI_ALIGN((frame_width), lcu_size); \ + size_aligned_height = HFI_ALIGN((frame_height), lcu_size); \ + if ((size_aligned_width * size_aligned_height) > \ + (3840 * 2160)) { \ + size_single_pipe_eval = (bitbin_size / num_vpp_pipes); \ + } else if (num_vpp_pipes > 2) { \ + size_single_pipe_eval = bitbin_size / 2; \ + } else { \ + size_single_pipe_eval = bitbin_size; \ + } \ + if (rc_type == HFI_RC_LOSSLESS) { \ + size_single_pipe_eval = (size_single_pipe_eval << 1); \ + } \ + 
sao_bin_buffer_size = (64 * ((((frame_width) + \
+ BUFFER_ALIGNMENT_32_BYTES) * ((frame_height) +\
+ BUFFER_ALIGNMENT_32_BYTES)) >> 10)) + 384; \
+ _padded_bin_sz = HFI_ALIGN(size_single_pipe_eval, \
+ VENUS_DMA_ALIGNMENT);\
+ size_single_pipe_eval = sao_bin_buffer_size + _padded_bin_sz; \
+ size_single_pipe_eval = HFI_ALIGN(size_single_pipe_eval, \
+ VENUS_DMA_ALIGNMENT); \
+ size = size_single_pipe_eval; \
+ } while (0)
+
+#define HFI_BUFFER_BIN_ENC(_size, rc_type, frame_width, frame_height, lcu_size, \
+ work_mode, num_vpp_pipes, profile) \
+ do { \
+ HFI_U32 bitstream_size = 0, total_bitbin_buffers = 0, \
+ size_single_pipe = 0, bitbin_size = 0; \
+ SIZE_BIN_BITSTREAM_ENC(bitstream_size, rc_type, frame_width, \
+ frame_height, work_mode, lcu_size, profile); \
+ if (work_mode == HFI_WORKMODE_2) { \
+ total_bitbin_buffers = 3; \
+ bitbin_size = bitstream_size * 12 / 10; \
+ bitbin_size = HFI_ALIGN(bitbin_size, \
+ VENUS_DMA_ALIGNMENT); \
+ } \
+ else if ((lcu_size == 16) || (num_vpp_pipes > 1)) { \
+ total_bitbin_buffers = 1; \
+ bitbin_size = bitstream_size; \
+ } \
+ if (total_bitbin_buffers > 0) { \
+ SIZE_ENC_SINGLE_PIPE(size_single_pipe, rc_type, bitbin_size, \
+ num_vpp_pipes, frame_width, frame_height, lcu_size); \
+ bitbin_size = size_single_pipe * num_vpp_pipes; \
+ _size = HFI_ALIGN(bitbin_size, VENUS_DMA_ALIGNMENT) * \
+ total_bitbin_buffers + 512; \
+ } else \
+ /* Avoid 512 Bytes allocation in case of 1Pipe HEVC Direct Mode */ \
+ _size = 0; \
+ } while (0)
+
+#define HFI_BUFFER_BIN_H264E(_size, rc_type, frame_width, frame_height, \
+ work_mode, num_vpp_pipes, profile) \
+ HFI_BUFFER_BIN_ENC(_size, rc_type, frame_width, frame_height, 16, \
+ work_mode, num_vpp_pipes, profile)
+
+#define HFI_BUFFER_BIN_H265E(_size, rc_type, frame_width, frame_height, \
+ work_mode, num_vpp_pipes, profile) \
+ HFI_BUFFER_BIN_ENC(_size, rc_type, frame_width, frame_height, 32,\
+ work_mode, num_vpp_pipes, profile)
+
+#define SIZE_ENC_SLICE_INFO_BUF(num_lcu_in_frame) HFI_ALIGN((256 + \
+ (num_lcu_in_frame << 4)), VENUS_DMA_ALIGNMENT)
+#define SIZE_LINE_BUF_CTRL(frame_width_coded) \
+ HFI_ALIGN(frame_width_coded, VENUS_DMA_ALIGNMENT)
+#define SIZE_LINE_BUF_CTRL_ID2(frame_width_coded) \
+ HFI_ALIGN(frame_width_coded, VENUS_DMA_ALIGNMENT)
+
+#define SIZE_LINEBUFF_DATA(_size, is_ten_bit, frame_width_coded) \
+ (_size = is_ten_bit ? (((((10 * (frame_width_coded) +\
+ 1024) + (VENUS_DMA_ALIGNMENT - 1)) & \
+ (~(VENUS_DMA_ALIGNMENT - 1))) * 1) + \
+ (((((10 * (frame_width_coded) + 1024) >> 1) + \
+ (VENUS_DMA_ALIGNMENT - 1)) & (~(VENUS_DMA_ALIGNMENT - 1))) * \
+ 2)) : (((((8 * (frame_width_coded) + 1024) + \
+ (VENUS_DMA_ALIGNMENT - 1)) \
+ & (~(VENUS_DMA_ALIGNMENT - 1))) * 1) + \
+ (((((8 * (frame_width_coded) +\
+ 1024) >> 1) + (VENUS_DMA_ALIGNMENT - 1)) & \
+ (~(VENUS_DMA_ALIGNMENT - 1))) * 2)))
+
+#define SIZE_LEFT_LINEBUFF_CTRL(_size, standard, frame_height_coded, \
+ num_vpp_pipes_enc) \
+ do { \
+ _size = (standard == HFI_CODEC_ENCODE_HEVC) ? \
+ (((frame_height_coded) + \
+ (BUF_SIZE_ALIGN_32)) / BUF_SIZE_ALIGN_32 * 4 * 16) : \
+ (((frame_height_coded) + 15) / 16 * 5 * 16); \
+ if ((num_vpp_pipes_enc) > 1) { \
+ _size += BUFFER_ALIGNMENT_512_BYTES; \
+ _size = HFI_ALIGN(_size, BUFFER_ALIGNMENT_512_BYTES) *\
+ (num_vpp_pipes_enc); \
+ } \
+ _size = HFI_ALIGN(_size, VENUS_DMA_ALIGNMENT); \
+ } while (0)
+
+#define SIZE_LEFT_LINEBUFF_RECON_PIX(_size, is_ten_bit, frame_height_coded, \
+ num_vpp_pipes_enc) \
+ (_size = (((is_ten_bit + 1) * 2 * (frame_height_coded) + \
+ VENUS_DMA_ALIGNMENT) + \
+ (VENUS_DMA_ALIGNMENT << (num_vpp_pipes_enc - 1)) - 1) & \
+ (~((VENUS_DMA_ALIGNMENT << (num_vpp_pipes_enc - 1)) - 1)) * 1)
+
+#define SIZE_TOP_LINEBUFF_CTRL_FE(_size, frame_width_coded, standard) \
+ do { \
+ _size = (standard == HFI_CODEC_ENCODE_HEVC) ? (64 * \
+ ((frame_width_coded) >> 5)) : (VENUS_DMA_ALIGNMENT + 16 * \
+ ((frame_width_coded) >> 4)); \
+ _size = HFI_ALIGN(_size, VENUS_DMA_ALIGNMENT); \
+ } while (0)
+
+#define SIZE_LEFT_LINEBUFF_CTRL_FE(frame_height_coded, num_vpp_pipes_enc) \
+ ((((VENUS_DMA_ALIGNMENT + 64 * ((frame_height_coded) >> 4)) + \
+ (VENUS_DMA_ALIGNMENT << (num_vpp_pipes_enc - 1)) - 1) & \
+ (~((VENUS_DMA_ALIGNMENT << (num_vpp_pipes_enc - 1)) - 1)) * 1) * \
+ num_vpp_pipes_enc)
+
+#define SIZE_LEFT_LINEBUFF_METADATA_RECON_Y(_size, frame_height_coded, \
+ is_ten_bit, num_vpp_pipes_enc) \
+ do { \
+ _size = ((VENUS_DMA_ALIGNMENT + 64 * ((frame_height_coded) / \
+ (8 * (is_ten_bit ? 4 : 8))))); \
+ _size = HFI_ALIGN(_size, VENUS_DMA_ALIGNMENT); \
+ _size = (_size * num_vpp_pipes_enc); \
+ } while (0)
+
+#define SIZE_LEFT_LINEBUFF_METADATA_RECON_UV(_size, frame_height_coded, \
+ is_ten_bit, num_vpp_pipes_enc) \
+ do { \
+ _size = ((VENUS_DMA_ALIGNMENT + 64 * ((frame_height_coded) / \
+ (4 * (is_ten_bit ? 4 : 8))))); \
+ _size = HFI_ALIGN(_size, VENUS_DMA_ALIGNMENT); \
+ _size = (_size * num_vpp_pipes_enc); \
+ } while (0)
+
+#define SIZE_LINEBUFF_RECON_PIX(_size, is_ten_bit, frame_width_coded) \
+ do { \
+ _size = ((is_ten_bit ? 3 : 2) * (frame_width_coded)); \
+ _size = HFI_ALIGN(_size, VENUS_DMA_ALIGNMENT); \
+ } while (0)
+
+#define SIZE_SLICE_CMD_BUFFER (HFI_ALIGN(20480, VENUS_DMA_ALIGNMENT))
+#define SIZE_SPS_PPS_SLICE_HDR (2048 + 4096)
+
+#define SIZE_FRAME_RC_BUF_SIZE(_size, standard, frame_height_coded, \
+ num_vpp_pipes_enc) \
+ do { \
+ _size = (standard == HFI_CODEC_ENCODE_HEVC) ?
(256 + 16 * \
+ (14 + ((((frame_height_coded) >> 5) + 7) >> 3))) : \
+ (256 + 16 * (14 + ((((frame_height_coded) >> 4) + 7) >> 3))); \
+ _size *= 11; \
+ if (num_vpp_pipes_enc > 1) { \
+ _size = HFI_ALIGN(_size, VENUS_DMA_ALIGNMENT) * \
+ num_vpp_pipes_enc;\
+ } \
+ _size = HFI_ALIGN(_size, BUFFER_ALIGNMENT_512_BYTES) * \
+ HFI_MAX_COL_FRAME; \
+ } while (0)
+
+#define ENC_BITCNT_BUF_SIZE(num_lcu_in_frame) HFI_ALIGN((256 + \
+ (4 * (num_lcu_in_frame))), VENUS_DMA_ALIGNMENT)
+#define ENC_BITMAP_BUF_SIZE(num_lcu_in_frame) HFI_ALIGN((256 + \
+ ((num_lcu_in_frame) >> 3)), VENUS_DMA_ALIGNMENT)
+#define SIZE_LINE_BUF_SDE(frame_width_coded) HFI_ALIGN((256 + \
+ (16 * ((frame_width_coded) >> 4))), VENUS_DMA_ALIGNMENT)
+
+#define SIZE_BSE_SLICE_CMD_BUF ((((8192 << 2) + 7) & (~7)) * 3)
+
+#define SIZE_LAMBDA_LUT (256 * 11)
+#define SIZE_OVERRIDE_BUF(num_lcumb) (HFI_ALIGN(((16 * (((num_lcumb) + 7)\
+ >> 3))), VENUS_DMA_ALIGNMENT) * 2)
+#define SIZE_IR_BUF(num_lcu_in_frame) HFI_ALIGN((((((num_lcu_in_frame) << 1) + 7) &\
+ (~7)) * 3), VENUS_DMA_ALIGNMENT)
+
+#define SIZE_VPSS_LINE_BUF(num_vpp_pipes_enc, frame_height_coded, \
+ frame_width_coded) \
+ (HFI_ALIGN(((((((8192) >> 2) << 5) * (num_vpp_pipes_enc)) + 64) + \
+ (((((MAX((frame_width_coded), (frame_height_coded)) + 3) >> 2) << 5) +\
+ 256) * 16)), VENUS_DMA_ALIGNMENT))
+
+#define SIZE_TOP_LINE_BUF_FIRST_STG_SAO(frame_width_coded) \
+ HFI_ALIGN((16 * ((frame_width_coded) >> 5)), VENUS_DMA_ALIGNMENT)
+
+#define HFI_BUFFER_LINE_ENC(_size, frame_width, frame_height, is_ten_bit, \
+ num_vpp_pipes_enc, lcu_size, standard) \
+ do { \
+ HFI_U32 width_in_lcus = 0, height_in_lcus = 0, \
+ frame_width_coded = 0, frame_height_coded = 0; \
+ HFI_U32 line_buff_data_size = 0, left_line_buff_ctrl_size = 0, \
+ left_line_buff_recon_pix_size = 0, \
+ top_line_buff_ctrl_fe_size = 0; \
+ HFI_U32 left_line_buff_metadata_recon__y__size = 0, \
+ left_line_buff_metadata_recon__uv__size = 0, \
+ line_buff_recon_pix_size = 0; \
+ width_in_lcus = ((frame_width) + (lcu_size) - 1) / (lcu_size); \
+ height_in_lcus = ((frame_height) + (lcu_size) - 1) / (lcu_size); \
+ frame_width_coded = width_in_lcus * (lcu_size); \
+ frame_height_coded = height_in_lcus * (lcu_size); \
+ SIZE_LINEBUFF_DATA(line_buff_data_size, is_ten_bit, \
+ frame_width_coded);\
+ SIZE_LEFT_LINEBUFF_CTRL(left_line_buff_ctrl_size, standard, \
+ frame_height_coded, num_vpp_pipes_enc); \
+ SIZE_LEFT_LINEBUFF_RECON_PIX(left_line_buff_recon_pix_size, \
+ is_ten_bit, frame_height_coded, num_vpp_pipes_enc); \
+ SIZE_TOP_LINEBUFF_CTRL_FE(top_line_buff_ctrl_fe_size, \
+ frame_width_coded, standard); \
+ SIZE_LEFT_LINEBUFF_METADATA_RECON_Y\
+ (left_line_buff_metadata_recon__y__size, \
+ frame_height_coded, is_ten_bit, num_vpp_pipes_enc); \
+ SIZE_LEFT_LINEBUFF_METADATA_RECON_UV\
+ (left_line_buff_metadata_recon__uv__size, \
+ frame_height_coded, is_ten_bit, num_vpp_pipes_enc); \
+ SIZE_LINEBUFF_RECON_PIX(line_buff_recon_pix_size, is_ten_bit,\
+ frame_width_coded); \
+ _size = SIZE_LINE_BUF_CTRL(frame_width_coded) + \
+ SIZE_LINE_BUF_CTRL_ID2(frame_width_coded) + \
+ line_buff_data_size + \
+ left_line_buff_ctrl_size + \
+ left_line_buff_recon_pix_size + \
+ top_line_buff_ctrl_fe_size + \
+ left_line_buff_metadata_recon__y__size + \
+ left_line_buff_metadata_recon__uv__size + \
+ line_buff_recon_pix_size + \
+ SIZE_LEFT_LINEBUFF_CTRL_FE(frame_height_coded, \
+ num_vpp_pipes_enc) + SIZE_LINE_BUF_SDE(frame_width_coded) + \
+ SIZE_VPSS_LINE_BUF(num_vpp_pipes_enc, frame_height_coded, \
+ frame_width_coded) + \
+ SIZE_TOP_LINE_BUF_FIRST_STG_SAO(frame_width_coded); \
+ } while (0)
+
+#define HFI_BUFFER_LINE_H264E(_size, frame_width, frame_height, is_ten_bit, \
+ num_vpp_pipes) \
+ HFI_BUFFER_LINE_ENC(_size, frame_width, frame_height, 0, \
+ num_vpp_pipes, 16, HFI_CODEC_ENCODE_AVC)
+
+#define HFI_BUFFER_LINE_H265E(_size, frame_width, frame_height, is_ten_bit, \
+ num_vpp_pipes) \
+ HFI_BUFFER_LINE_ENC(_size, frame_width, frame_height, \
+ is_ten_bit, num_vpp_pipes, 32, HFI_CODEC_ENCODE_HEVC)
+
+#define HFI_BUFFER_COMV_ENC(_size, frame_width, frame_height, lcu_size, \
+ num_recon, standard) \
+ do { \
+ HFI_U32 size_colloc_mv = 0, size_colloc_rc = 0; \
+ HFI_U32 mb_width = ((frame_width) + 15) >> 4; \
+ HFI_U32 mb_height = ((frame_height) + 15) >> 4; \
+ HFI_U32 width_in_lcus = ((frame_width) + (lcu_size) - 1) /\
+ (lcu_size); \
+ HFI_U32 height_in_lcus = ((frame_height) + (lcu_size) - 1) / \
+ (lcu_size); \
+ HFI_U32 num_lcu_in_frame = width_in_lcus * height_in_lcus; \
+ size_colloc_mv = (standard == HFI_CODEC_ENCODE_HEVC) ? \
+ (16 * ((num_lcu_in_frame << 2) + BUFFER_ALIGNMENT_32_BYTES)) : \
+ (3 * 16 * (width_in_lcus * height_in_lcus +\
+ BUFFER_ALIGNMENT_32_BYTES)); \
+ size_colloc_mv = HFI_ALIGN(size_colloc_mv, \
+ VENUS_DMA_ALIGNMENT) * num_recon; \
+ size_colloc_rc = (((mb_width + 7) >> 3) * 16 * 2 * mb_height); \
+ size_colloc_rc = HFI_ALIGN(size_colloc_rc, \
+ VENUS_DMA_ALIGNMENT) * HFI_MAX_COL_FRAME; \
+ _size = size_colloc_mv + size_colloc_rc; \
+ } while (0)
+
+#define HFI_BUFFER_COMV_H264E(_size, frame_width, frame_height, num_recon) \
+ HFI_BUFFER_COMV_ENC(_size, frame_width, frame_height, 16, \
+ num_recon, HFI_CODEC_ENCODE_AVC)
+
+#define HFI_BUFFER_COMV_H265E(_size, frame_width, frame_height, num_recon) \
+ HFI_BUFFER_COMV_ENC(_size, frame_width, frame_height, 32,\
+ num_recon, HFI_CODEC_ENCODE_HEVC)
+
+#define HFI_BUFFER_NON_COMV_ENC(_size, frame_width, frame_height, \
+ num_vpp_pipes_enc, lcu_size, standard) \
+ do { \
+ HFI_U32 width_in_lcus = 0, height_in_lcus = 0, \
+ frame_width_coded = 0, frame_height_coded = 0, \
+ num_lcu_in_frame = 0, num_lcumb = 0; \
+ HFI_U32 frame_rc_buf_size = 0; \
+ width_in_lcus = ((frame_width) + (lcu_size) - 1) / (lcu_size); \
+ height_in_lcus = ((frame_height) + (lcu_size) - 1) / (lcu_size); \
+ num_lcu_in_frame = width_in_lcus * height_in_lcus; \
+ frame_width_coded = width_in_lcus * (lcu_size); \
+ frame_height_coded = height_in_lcus * (lcu_size); \
+
num_lcumb = (frame_height_coded / lcu_size) * \
+ ((frame_width_coded + lcu_size * 8) / lcu_size); \
+ SIZE_FRAME_RC_BUF_SIZE(frame_rc_buf_size, standard, \
+ frame_height_coded, num_vpp_pipes_enc); \
+ _size = SIZE_ENC_SLICE_INFO_BUF(num_lcu_in_frame) + \
+ SIZE_SLICE_CMD_BUFFER + \
+ SIZE_SPS_PPS_SLICE_HDR + \
+ frame_rc_buf_size + \
+ ENC_BITCNT_BUF_SIZE(num_lcu_in_frame) + \
+ ENC_BITMAP_BUF_SIZE(num_lcu_in_frame) + \
+ SIZE_BSE_SLICE_CMD_BUF + \
+ SIZE_LAMBDA_LUT + \
+ SIZE_OVERRIDE_BUF(num_lcumb) + \
+ SIZE_IR_BUF(num_lcu_in_frame); \
+ } while (0)
+
+#define HFI_BUFFER_NON_COMV_H264E(_size, frame_width, frame_height, \
+ num_vpp_pipes_enc) \
+ HFI_BUFFER_NON_COMV_ENC(_size, frame_width, frame_height, \
+ num_vpp_pipes_enc, 16, HFI_CODEC_ENCODE_AVC)
+
+#define HFI_BUFFER_NON_COMV_H265E(_size, frame_width, frame_height, \
+ num_vpp_pipes_enc) \
+ HFI_BUFFER_NON_COMV_ENC(_size, frame_width, frame_height, \
+ num_vpp_pipes_enc, 32, HFI_CODEC_ENCODE_HEVC)
+
+#define SIZE_ENC_REF_BUFFER(size, frame_width, frame_height) \
+ do { \
+ HFI_U32 u_buffer_width = 0, u_buffer_height = 0, \
+ u_chroma_buffer_height = 0; \
+ u_buffer_height = HFI_ALIGN(frame_height, \
+ HFI_VENUS_HEIGHT_ALIGNMENT); \
+ u_chroma_buffer_height = frame_height >> 1; \
+ u_chroma_buffer_height = HFI_ALIGN(u_chroma_buffer_height, \
+ HFI_VENUS_HEIGHT_ALIGNMENT); \
+ u_buffer_width = HFI_ALIGN(frame_width, \
+ HFI_VENUS_WIDTH_ALIGNMENT); \
+ size = (u_buffer_height + u_chroma_buffer_height) * \
+ u_buffer_width; \
+ } while (0)
+
+#define SIZE_ENC_TEN_BIT_REF_BUFFER(size, frame_width, frame_height) \
+ do { \
+ HFI_U32 ref_buf_height = 0, ref_luma_stride_in_bytes = 0, \
+ u_ref_stride = 0, luma_size = 0, ref_chrm_height_in_bytes = 0, \
+ chroma_size = 0, ref_buf_size = 0; \
+ ref_buf_height = (frame_height + \
+ (HFI_VENUS_HEIGHT_ALIGNMENT - 1)) \
+ & (~(HFI_VENUS_HEIGHT_ALIGNMENT - 1)); \
+ ref_luma_stride_in_bytes = ((frame_width + \
+ SYSTEM_LAL_TILE10 - 1) / SYSTEM_LAL_TILE10) * \
+ SYSTEM_LAL_TILE10; \
+ u_ref_stride = 4 * (ref_luma_stride_in_bytes / 3); \
+ u_ref_stride = (u_ref_stride + (BUF_SIZE_ALIGN_128 - 1)) &\
+ (~(BUF_SIZE_ALIGN_128 - 1)); \
+ luma_size = ref_buf_height * u_ref_stride; \
+ ref_chrm_height_in_bytes = (((frame_height + 1) >> 1) + \
+ (BUF_SIZE_ALIGN_32 - 1)) & (~(BUF_SIZE_ALIGN_32 - 1)); \
+ chroma_size = u_ref_stride * ref_chrm_height_in_bytes; \
+ luma_size = (luma_size + (BUF_SIZE_ALIGN_4096 - 1)) & \
+ (~(BUF_SIZE_ALIGN_4096 - 1)); \
+ chroma_size = (chroma_size + (BUF_SIZE_ALIGN_4096 - 1)) & \
+ (~(BUF_SIZE_ALIGN_4096 - 1)); \
+ ref_buf_size = luma_size + chroma_size; \
+ size = ref_buf_size; \
+ } while (0)
+
+#define HFI_BUFFER_DPB_ENC(_size, frame_width, frame_height, is_ten_bit) \
+ do { \
+ HFI_U32 metadata_stride, metadata_buf_height, meta_size_y, \
+ meta_size_c; \
+ HFI_U32 ten_bit_ref_buf_size = 0, ref_buf_size = 0; \
+ if (!is_ten_bit) { \
+ SIZE_ENC_REF_BUFFER(ref_buf_size, frame_width, \
+ frame_height); \
+ HFI_UBWC_CALC_METADATA_PLANE_STRIDE(metadata_stride, \
+ (frame_width), 64, \
+ HFI_COLOR_FORMAT_YUV420_NV12_UBWC_Y_TILE_WIDTH); \
+ HFI_UBWC_METADATA_PLANE_BUFHEIGHT(metadata_buf_height, \
+ (frame_height), 16, \
+ HFI_COLOR_FORMAT_YUV420_NV12_UBWC_Y_TILE_HEIGHT); \
+ HFI_UBWC_METADATA_PLANE_BUFFER_SIZE(meta_size_y, \
+ metadata_stride, metadata_buf_height); \
+ HFI_UBWC_METADATA_PLANE_BUFFER_SIZE(meta_size_c, \
+ metadata_stride, metadata_buf_height); \
+ _size = ref_buf_size + meta_size_y + meta_size_c; \
+ } else { \
+ SIZE_ENC_TEN_BIT_REF_BUFFER(ten_bit_ref_buf_size, \
+ frame_width, frame_height); \
+ HFI_UBWC_CALC_METADATA_PLANE_STRIDE(metadata_stride, \
+ frame_width, VENUS_METADATA_STRIDE_MULTIPLE, \
+ HFI_COLOR_FORMAT_YUV420_TP10_UBWC_Y_TILE_WIDTH); \
+ HFI_UBWC_METADATA_PLANE_BUFHEIGHT(metadata_buf_height, \
+ frame_height, VENUS_METADATA_HEIGHT_MULTIPLE, \
+ HFI_COLOR_FORMAT_YUV420_TP10_UBWC_Y_TILE_HEIGHT); \
+ HFI_UBWC_METADATA_PLANE_BUFFER_SIZE(meta_size_y, \
+ metadata_stride, metadata_buf_height); \
+ HFI_UBWC_METADATA_PLANE_BUFFER_SIZE(meta_size_c, \
+ metadata_stride, metadata_buf_height); \
+ _size = ten_bit_ref_buf_size + meta_size_y + \
+ meta_size_c; \
+ } \
+ } while (0)
+
+#define HFI_BUFFER_DPB_H264E(_size, frame_width, frame_height) \
+ HFI_BUFFER_DPB_ENC(_size, frame_width, frame_height, 0)
+
+#define HFI_BUFFER_DPB_H265E(_size, frame_width, frame_height, is_ten_bit) \
+ HFI_BUFFER_DPB_ENC(_size, frame_width, frame_height, is_ten_bit)
+
+#define HFI_BUFFER_VPSS_ENC(vpss_size, dswidth, dsheight, ds_enable, blur, is_ten_bit) \
+ do { \
+ vpss_size = 0; \
+ if (ds_enable || blur) { \
+ HFI_BUFFER_DPB_ENC(vpss_size, dswidth, dsheight, is_ten_bit); \
+ } \
+ } while (0)
+
+#define HFI_IRIS3_ENC_MIN_INPUT_BUF_COUNT(numinput, totalhblayers) \
+ do { \
+ numinput = 3; \
+ if (totalhblayers >= 2) { \
+ numinput = (1 << (totalhblayers - 1)) + 2; \
+ } \
+ } while (0)
+
+#endif /* __HFI_BUFFER_IRIS3__ */
diff --git a/drivers/media/platform/qcom/iris/variant/iris3/inc/msm_vidc_buffer_iris3.h b/drivers/media/platform/qcom/iris/variant/iris3/inc/msm_vidc_buffer_iris3.h
new file mode 100644
index 0000000..1d44662
--- /dev/null
+++ b/drivers/media/platform/qcom/iris/variant/iris3/inc/msm_vidc_buffer_iris3.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef __H_MSM_VIDC_BUFFER_IRIS3_H__
+#define __H_MSM_VIDC_BUFFER_IRIS3_H__
+
+#include "msm_vidc_inst.h"
+
+int msm_buffer_size_iris3(struct msm_vidc_inst *inst,
+ enum msm_vidc_buffer_type buffer_type);
+int msm_buffer_min_count_iris3(struct msm_vidc_inst *inst,
+ enum msm_vidc_buffer_type buffer_type);
+int msm_buffer_extra_count_iris3(struct msm_vidc_inst *inst,
+ enum msm_vidc_buffer_type buffer_type);
+
+#endif // __H_MSM_VIDC_BUFFER_IRIS3_H__
diff --git a/drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_buffer_iris3.c b/drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_buffer_iris3.c
new file mode 100644
index 0000000..f9a999c
--- /dev/null
+++ b/drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_buffer_iris3.c
@@ -0,0 +1,595 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include "hfi_buffer_iris3.h"
+#include "hfi_property.h"
+#include "msm_media_info.h"
+#include "msm_vidc_buffer.h"
+#include "msm_vidc_buffer_iris3.h"
+#include "msm_vidc_core.h"
+#include "msm_vidc_debug.h"
+#include "msm_vidc_driver.h"
+#include "msm_vidc_inst.h"
+#include "msm_vidc_platform.h"
+
+static u32 msm_vidc_decoder_bin_size_iris3(struct msm_vidc_inst *inst)
+{
+ struct msm_vidc_core *core;
+ u32 size = 0;
+ u32 width, height, num_vpp_pipes;
+ struct v4l2_format *f;
+ bool is_interlaced;
+ u32 vpp_delay;
+
+ core = inst->core;
+
+ num_vpp_pipes = core->capabilities[NUM_VPP_PIPE].value;
+ if (inst->decode_vpp_delay.enable)
+ vpp_delay = inst->decode_vpp_delay.size;
+ else
+ vpp_delay = DEFAULT_BSE_VPP_DELAY;
+ if (inst->capabilities[CODED_FRAMES].value ==
+ CODED_FRAMES_PROGRESSIVE)
+ is_interlaced = false;
+ else
+ is_interlaced = true;
+ f = &inst->fmts[INPUT_PORT];
+ width = f->fmt.pix_mp.width;
+ height = f->fmt.pix_mp.height;
+
+ if (inst->codec == MSM_VIDC_H264)
+ HFI_BUFFER_BIN_H264D(size, width, height,
+ is_interlaced, vpp_delay, num_vpp_pipes);
+ else if (inst->codec == MSM_VIDC_HEVC)
+ HFI_BUFFER_BIN_H265D(size, width, height,
+ 0, vpp_delay, num_vpp_pipes);
+ else if (inst->codec == MSM_VIDC_VP9)
+ HFI_BUFFER_BIN_VP9D(size, width, height,
+ 0, num_vpp_pipes);
+ i_vpr_l(inst, "%s: size %d\n", __func__, size);
+ return size;
+}
+
+static u32 msm_vidc_decoder_comv_size_iris3(struct msm_vidc_inst *inst)
+{
+ u32 size = 0;
+ u32 width, height, num_comv, vpp_delay;
+ struct v4l2_format *f;
+
+ f = &inst->fmts[INPUT_PORT];
+ width = f->fmt.pix_mp.width;
+ height = f->fmt.pix_mp.height;
+
+ num_comv = inst->buffers.output.min_count;
+
+ msm_vidc_update_cap_value(inst, NUM_COMV, num_comv, __func__);
+
+ if (inst->decode_vpp_delay.enable)
+ vpp_delay = inst->decode_vpp_delay.size;
+ else
+ vpp_delay = DEFAULT_BSE_VPP_DELAY;
+ num_comv = max(vpp_delay + 1, num_comv);
+
+ if (inst->codec == MSM_VIDC_H264)
+ HFI_BUFFER_COMV_H264D(size, width, height, num_comv);
+ else if (inst->codec == MSM_VIDC_HEVC)
+ HFI_BUFFER_COMV_H265D(size, width, height, num_comv);
+
+ i_vpr_l(inst, "%s: size %d\n", __func__, size);
+ return size;
+}
+
+static u32 msm_vidc_decoder_non_comv_size_iris3(struct msm_vidc_inst *inst)
+{
+ u32 size = 0;
+ u32 width, height, num_vpp_pipes;
+ struct msm_vidc_core *core;
+ struct v4l2_format *f;
+
+ core = inst->core;
+
+ num_vpp_pipes = core->capabilities[NUM_VPP_PIPE].value;
+
+ f = &inst->fmts[INPUT_PORT];
+ width = f->fmt.pix_mp.width;
+ height = f->fmt.pix_mp.height;
+
+ if (inst->codec == MSM_VIDC_H264)
+ HFI_BUFFER_NON_COMV_H264D(size, width, height, num_vpp_pipes);
+ else if (inst->codec == MSM_VIDC_HEVC)
+ HFI_BUFFER_NON_COMV_H265D(size, width, height, num_vpp_pipes);
+
+ i_vpr_l(inst, "%s: size %d\n", __func__, size);
+ return size;
+}
+
+static u32 msm_vidc_decoder_line_size_iris3(struct msm_vidc_inst *inst)
+{
+ struct msm_vidc_core *core;
+ u32 size = 0;
+ u32 width, height, out_min_count, num_vpp_pipes, vpp_delay;
+ struct v4l2_format *f;
+ bool is_opb;
+ u32 color_fmt;
+
+ core = inst->core;
+ num_vpp_pipes = core->capabilities[NUM_VPP_PIPE].value;
+
+ color_fmt = v4l2_colorformat_to_driver(inst,
+ inst->fmts[OUTPUT_PORT].fmt.pix_mp.pixelformat,
+ __func__);
+ if (is_linear_colorformat(color_fmt))
+ is_opb = true;
+ else
+ is_opb = false;
+ /*
+ * assume worst case, since color format is unknown at this
+ * time.
+ */
+ is_opb = true;
+
+ if (inst->decode_vpp_delay.enable)
+ vpp_delay = inst->decode_vpp_delay.size;
+ else
+ vpp_delay = DEFAULT_BSE_VPP_DELAY;
+
+ f = &inst->fmts[INPUT_PORT];
+ width = f->fmt.pix_mp.width;
+ height = f->fmt.pix_mp.height;
+ out_min_count = inst->buffers.output.min_count;
+ out_min_count = max(vpp_delay + 1, out_min_count);
+ if (inst->codec == MSM_VIDC_H264)
+ HFI_BUFFER_LINE_H264D(size, width, height, is_opb,
+ num_vpp_pipes);
+ else if (inst->codec == MSM_VIDC_HEVC)
+ HFI_BUFFER_LINE_H265D(size, width, height, is_opb,
+ num_vpp_pipes);
+ else if (inst->codec == MSM_VIDC_VP9)
+ HFI_BUFFER_LINE_VP9D(size, width, height, out_min_count,
+ is_opb, num_vpp_pipes);
+ i_vpr_l(inst, "%s: size %d\n", __func__, size);
+ return size;
+}
+
+static u32 msm_vidc_decoder_persist_size_iris3(struct msm_vidc_inst *inst)
+{
+ u32 size = 0;
+
+ if (inst->codec == MSM_VIDC_H264)
+ HFI_BUFFER_PERSIST_H264D(size, 0);
+ else if (inst->codec == MSM_VIDC_HEVC)
+ HFI_BUFFER_PERSIST_H265D(size, 0);
+ else if (inst->codec == MSM_VIDC_VP9)
+ HFI_BUFFER_PERSIST_VP9D(size);
+
+ i_vpr_l(inst, "%s: size %d\n", __func__, size);
+ return size;
+}
+
+static u32 msm_vidc_decoder_dpb_size_iris3(struct msm_vidc_inst *inst)
+{
+ u32 size = 0;
+ u32 color_fmt;
+ u32 width, height;
+ struct v4l2_format *f;
+
+ color_fmt = inst->capabilities[PIX_FMTS].value;
+ if (!is_linear_colorformat(color_fmt))
+ return size;
+
+ f = &inst->fmts[OUTPUT_PORT];
+ width = f->fmt.pix_mp.width;
+ height = f->fmt.pix_mp.height;
+
+ if (color_fmt == MSM_VIDC_FMT_NV12 ||
+ color_fmt ==
MSM_VIDC_FMT_NV12C) {
+ color_fmt = MSM_VIDC_FMT_NV12C;
+ HFI_NV12_UBWC_IL_CALC_BUF_SIZE_V2(size, width, height,
+ video_y_stride_bytes(color_fmt, width),
+ video_y_scanlines(color_fmt, height),
+ video_uv_stride_bytes(color_fmt, width),
+ video_uv_scanlines(color_fmt, height),
+ video_y_meta_stride(color_fmt, width),
+ video_y_meta_scanlines(color_fmt, height),
+ video_uv_meta_stride(color_fmt, width),
+ video_uv_meta_scanlines(color_fmt, height));
+ } else if (color_fmt == MSM_VIDC_FMT_P010 ||
+ color_fmt == MSM_VIDC_FMT_TP10C) {
+ color_fmt = MSM_VIDC_FMT_TP10C;
+ HFI_YUV420_TP10_UBWC_CALC_BUF_SIZE(size,
+ video_y_stride_bytes(color_fmt, width),
+ video_y_scanlines(color_fmt, height),
+ video_uv_stride_bytes(color_fmt, width),
+ video_uv_scanlines(color_fmt, height),
+ video_y_meta_stride(color_fmt, width),
+ video_y_meta_scanlines(color_fmt, height),
+ video_uv_meta_stride(color_fmt, width),
+ video_uv_meta_scanlines(color_fmt, height));
+ }
+
+ i_vpr_l(inst, "%s: size %d\n", __func__, size);
+ return size;
+}
+
+/* encoder internal buffers */
+static u32 msm_vidc_encoder_bin_size_iris3(struct msm_vidc_inst *inst)
+{
+ struct msm_vidc_core *core;
+ u32 size = 0;
+ u32 width, height, num_vpp_pipes, stage, profile;
+ struct v4l2_format *f;
+
+ core = inst->core;
+
+ num_vpp_pipes = core->capabilities[NUM_VPP_PIPE].value;
+ stage = inst->capabilities[STAGE].value;
+ f = &inst->fmts[OUTPUT_PORT];
+ width = f->fmt.pix_mp.width;
+ height = f->fmt.pix_mp.height;
+ profile = inst->capabilities[PROFILE].value;
+
+ if (inst->codec == MSM_VIDC_H264)
+ HFI_BUFFER_BIN_H264E(size, inst->hfi_rc_type, width,
+ height, stage, num_vpp_pipes, profile);
+ else if (inst->codec == MSM_VIDC_HEVC)
+ HFI_BUFFER_BIN_H265E(size, inst->hfi_rc_type, width,
+ height, stage, num_vpp_pipes, profile);
+
+ i_vpr_l(inst, "%s: size %d\n", __func__, size);
+ return size;
+}
+
+static u32 msm_vidc_get_recon_buf_count(struct msm_vidc_inst *inst)
+{
+ u32 num_buf_recon = 0;
+ s32 n_bframe, ltr_count, hp_layers = 0, hb_layers = 0;
+ bool is_hybrid_hp = false;
+ u32 hfi_codec = 0;
+
+ n_bframe = inst->capabilities[B_FRAME].value;
+ ltr_count = inst->capabilities[LTR_COUNT].value;
+
+ if (inst->hfi_layer_type == HFI_HIER_B) {
+ hb_layers = inst->capabilities[ENH_LAYER_COUNT].value + 1;
+ } else {
+ hp_layers = inst->capabilities[ENH_LAYER_COUNT].value + 1;
+ if (inst->hfi_layer_type == HFI_HIER_P_HYBRID_LTR)
+ is_hybrid_hp = true;
+ }
+
+ if (inst->codec == MSM_VIDC_H264)
+ hfi_codec = HFI_CODEC_ENCODE_AVC;
+ else if (inst->codec == MSM_VIDC_HEVC)
+ hfi_codec = HFI_CODEC_ENCODE_HEVC;
+
+ HFI_IRIS3_ENC_RECON_BUF_COUNT(num_buf_recon, n_bframe, ltr_count,
+ hp_layers, hb_layers, is_hybrid_hp, hfi_codec);
+
+ return num_buf_recon;
+}
+
+static u32 msm_vidc_encoder_comv_size_iris3(struct msm_vidc_inst *inst)
+{
+ u32 size = 0;
+ u32 width, height, num_recon = 0;
+ struct v4l2_format *f;
+
+ f = &inst->fmts[OUTPUT_PORT];
+ width = f->fmt.pix_mp.width;
+ height = f->fmt.pix_mp.height;
+
+ num_recon = msm_vidc_get_recon_buf_count(inst);
+ if (inst->codec == MSM_VIDC_H264)
+ HFI_BUFFER_COMV_H264E(size, width, height, num_recon);
+ else if (inst->codec == MSM_VIDC_HEVC)
+ HFI_BUFFER_COMV_H265E(size, width, height, num_recon);
+
+ i_vpr_l(inst, "%s: size %d\n", __func__, size);
+ return size;
+}
+
+static u32 msm_vidc_encoder_non_comv_size_iris3(struct msm_vidc_inst *inst)
+{
+ struct msm_vidc_core *core;
+ u32 size = 0;
+ u32 width, height, num_vpp_pipes;
+ struct v4l2_format *f;
+
+ core = inst->core;
+
+ num_vpp_pipes = core->capabilities[NUM_VPP_PIPE].value;
+ f = &inst->fmts[OUTPUT_PORT];
+ width = f->fmt.pix_mp.width;
+ height = f->fmt.pix_mp.height;
+
+ if (inst->codec == MSM_VIDC_H264)
+ HFI_BUFFER_NON_COMV_H264E(size, width, height, num_vpp_pipes);
+ else if (inst->codec == MSM_VIDC_HEVC)
+ HFI_BUFFER_NON_COMV_H265E(size, width, height, num_vpp_pipes);
+
+ i_vpr_l(inst, "%s: size %d\n", __func__, size);
+ return size;
+}
+
+static u32 msm_vidc_encoder_line_size_iris3(struct msm_vidc_inst *inst)
+{
+ struct msm_vidc_core *core;
+ u32 size = 0;
+ u32 width, height, pixfmt, num_vpp_pipes;
+ bool is_tenbit = false;
+ struct v4l2_format *f;
+
+ core = inst->core;
+ num_vpp_pipes = core->capabilities[NUM_VPP_PIPE].value;
+ pixfmt = inst->capabilities[PIX_FMTS].value;
+
+ f = &inst->fmts[OUTPUT_PORT];
+ width = f->fmt.pix_mp.width;
+ height = f->fmt.pix_mp.height;
+ is_tenbit = (pixfmt == MSM_VIDC_FMT_P010 || pixfmt == MSM_VIDC_FMT_TP10C);
+
+ if (inst->codec == MSM_VIDC_H264)
+ HFI_BUFFER_LINE_H264E(size, width, height, is_tenbit, num_vpp_pipes);
+ else if (inst->codec == MSM_VIDC_HEVC)
+ HFI_BUFFER_LINE_H265E(size, width, height, is_tenbit, num_vpp_pipes);
+
+ i_vpr_l(inst, "%s: size %d\n", __func__, size);
+ return size;
+}
+
+static u32 msm_vidc_encoder_dpb_size_iris3(struct msm_vidc_inst *inst)
+{
+ u32 size = 0;
+ u32 width, height, pixfmt;
+ struct v4l2_format *f;
+ bool is_tenbit;
+
+ f = &inst->fmts[OUTPUT_PORT];
+ width = f->fmt.pix_mp.width;
+ height = f->fmt.pix_mp.height;
+
+ pixfmt = inst->capabilities[PIX_FMTS].value;
+ is_tenbit = (pixfmt == MSM_VIDC_FMT_P010 || pixfmt == MSM_VIDC_FMT_TP10C);
+
+ if (inst->codec == MSM_VIDC_H264)
+ HFI_BUFFER_DPB_H264E(size, width, height);
+ else if (inst->codec == MSM_VIDC_HEVC)
+ HFI_BUFFER_DPB_H265E(size, width, height, is_tenbit);
+
+ i_vpr_l(inst, "%s: size %d\n", __func__, size);
+ return size;
+}
+
+static u32 msm_vidc_encoder_arp_size_iris3(struct msm_vidc_inst *inst)
+{
+ u32 size = 0;
+
+ HFI_BUFFER_ARP_ENC(size);
+ i_vpr_l(inst, "%s: size %d\n", __func__, size);
+ return size;
+}
+
+static u32 msm_vidc_encoder_vpss_size_iris3(struct msm_vidc_inst *inst)
+{
+ u32 size = 0;
+ bool ds_enable = false, is_tenbit = false;
+ u32 rotation_val = HFI_ROTATION_NONE;
+ u32 width, height, driver_colorfmt;
+ struct v4l2_format *f;
+
+ ds_enable = is_scaling_enabled(inst);
+ msm_vidc_v4l2_to_hfi_enum(inst, ROTATION, &rotation_val);
+
+ f =
&inst->fmts[OUTPUT_PORT];
+ if (is_rotation_90_or_270(inst)) {
+ /*
+ * output width and height are rotated,
+ * so unrotate them to use as arguments to
+ * HFI_BUFFER_VPSS_ENC.
+ */
+ width = f->fmt.pix_mp.height;
+ height = f->fmt.pix_mp.width;
+ } else {
+ width = f->fmt.pix_mp.width;
+ height = f->fmt.pix_mp.height;
+ }
+
+ f = &inst->fmts[INPUT_PORT];
+ driver_colorfmt = v4l2_colorformat_to_driver(inst,
+ f->fmt.pix_mp.pixelformat, __func__);
+ is_tenbit = is_10bit_colorformat(driver_colorfmt);
+
+ HFI_BUFFER_VPSS_ENC(size, width, height, ds_enable, 0, is_tenbit);
+ i_vpr_l(inst, "%s: size %d\n", __func__, size);
+ return size;
+}
+
+static u32 msm_vidc_encoder_output_size_iris3(struct msm_vidc_inst *inst)
+{
+ u32 frame_size;
+ struct v4l2_format *f;
+ bool is_ten_bit = false;
+ int bitrate_mode, frame_rc;
+ u32 hfi_rc_type = HFI_RC_VBR_CFR;
+ enum msm_vidc_codec_type codec;
+
+ f = &inst->fmts[OUTPUT_PORT];
+ codec = v4l2_codec_to_driver(inst, f->fmt.pix_mp.pixelformat, __func__);
+ if (codec == MSM_VIDC_HEVC)
+ is_ten_bit = true;
+
+ bitrate_mode = inst->capabilities[BITRATE_MODE].value;
+ frame_rc = inst->capabilities[FRAME_RC_ENABLE].value;
+ if (!frame_rc)
+ hfi_rc_type = HFI_RC_OFF;
+ else if (bitrate_mode == V4L2_MPEG_VIDEO_BITRATE_MODE_CQ)
+ hfi_rc_type = HFI_RC_CQ;
+
+ HFI_BUFFER_BITSTREAM_ENC(frame_size, f->fmt.pix_mp.width,
+ f->fmt.pix_mp.height, hfi_rc_type, is_ten_bit);
+
+ return frame_size;
+}
+
+struct msm_vidc_buf_type_handle {
+ enum msm_vidc_buffer_type type;
+ u32 (*handle)(struct msm_vidc_inst *inst);
+};
+
+int msm_buffer_size_iris3(struct msm_vidc_inst *inst,
+ enum msm_vidc_buffer_type buffer_type)
+{
+ int i;
+ u32 size = 0, buf_type_handle_size = 0;
+ const struct msm_vidc_buf_type_handle *buf_type_handle_arr = NULL;
+ static const struct msm_vidc_buf_type_handle dec_buf_type_handle[] = {
+ {MSM_VIDC_BUF_INPUT, msm_vidc_decoder_input_size },
+ {MSM_VIDC_BUF_OUTPUT, msm_vidc_decoder_output_size },
+ {MSM_VIDC_BUF_BIN, msm_vidc_decoder_bin_size_iris3 },
+ {MSM_VIDC_BUF_COMV, msm_vidc_decoder_comv_size_iris3 },
+ {MSM_VIDC_BUF_NON_COMV, msm_vidc_decoder_non_comv_size_iris3 },
+ {MSM_VIDC_BUF_LINE, msm_vidc_decoder_line_size_iris3 },
+ {MSM_VIDC_BUF_PERSIST, msm_vidc_decoder_persist_size_iris3 },
+ {MSM_VIDC_BUF_DPB, msm_vidc_decoder_dpb_size_iris3 },
+ };
+ static const struct msm_vidc_buf_type_handle enc_buf_type_handle[] = {
+ {MSM_VIDC_BUF_INPUT, msm_vidc_encoder_input_size },
+ {MSM_VIDC_BUF_OUTPUT, msm_vidc_encoder_output_size_iris3 },
+ {MSM_VIDC_BUF_BIN, msm_vidc_encoder_bin_size_iris3 },
+ {MSM_VIDC_BUF_COMV, msm_vidc_encoder_comv_size_iris3 },
+ {MSM_VIDC_BUF_NON_COMV, msm_vidc_encoder_non_comv_size_iris3 },
+ {MSM_VIDC_BUF_LINE, msm_vidc_encoder_line_size_iris3 },
+ {MSM_VIDC_BUF_DPB, msm_vidc_encoder_dpb_size_iris3 },
+ {MSM_VIDC_BUF_ARP, msm_vidc_encoder_arp_size_iris3 },
+ {MSM_VIDC_BUF_VPSS, msm_vidc_encoder_vpss_size_iris3 },
+ };
+
+ if (is_decode_session(inst)) {
+ buf_type_handle_size = ARRAY_SIZE(dec_buf_type_handle);
+ buf_type_handle_arr = dec_buf_type_handle;
+ } else if (is_encode_session(inst)) {
+ buf_type_handle_size = ARRAY_SIZE(enc_buf_type_handle);
+ buf_type_handle_arr = enc_buf_type_handle;
+ }
+
+ /* handle invalid session */
+ if (!buf_type_handle_arr || !buf_type_handle_size) {
+ i_vpr_e(inst, "%s: invalid session %d\n", __func__, inst->domain);
+ return size;
+ }
+
+ /* fetch buffer size */
+ for (i = 0; i < buf_type_handle_size; i++) {
+ if (buf_type_handle_arr[i].type == buffer_type) {
+ size = buf_type_handle_arr[i].handle(inst);
+ break;
+ }
+ }
+
+ /* handle unknown buffer type */
+ if (i == buf_type_handle_size) {
+ i_vpr_e(inst, "%s: unknown buffer type %#x\n", __func__, buffer_type);
+ goto exit;
+ }
+
+ i_vpr_l(inst, "buffer_size: type: %11s, size: %9u\n", buf_name(buffer_type), size);
+
+exit:
+ return size;
+}
+
+static int msm_vidc_input_min_count_iris3(struct msm_vidc_inst *inst)
+{
+ u32 input_min_count = 0;
+ u32 total_hb_layer = 0;
+
+ if (is_decode_session(inst)) {
+ input_min_count = MIN_DEC_INPUT_BUFFERS;
+ } else if (is_encode_session(inst)) {
+ total_hb_layer = is_hierb_type_requested(inst) ?
+ inst->capabilities[ENH_LAYER_COUNT].value + 1 : 0;
+ if (inst->codec == MSM_VIDC_H264 &&
+ !inst->capabilities[LAYER_ENABLE].value) {
+ total_hb_layer = 0;
+ }
+ HFI_IRIS3_ENC_MIN_INPUT_BUF_COUNT(input_min_count,
+ total_hb_layer);
+ } else {
+ i_vpr_e(inst, "%s: invalid domain %d\n", __func__, inst->domain);
+ return 0;
+ }
+
+ return input_min_count;
+}
+
+static int msm_buffer_dpb_count(struct msm_vidc_inst *inst)
+{
+ int count = 0;
+
+ /* encoder dpb buffer count */
+ if (is_encode_session(inst))
+ return msm_vidc_get_recon_buf_count(inst);
+
+ /* decoder dpb buffer count */
+ if (is_split_mode_enabled(inst)) {
+ count = inst->fw_min_count ?
+ inst->fw_min_count : inst->buffers.output.min_count;
+ }
+
+ return count;
+}
+
+int msm_buffer_min_count_iris3(struct msm_vidc_inst *inst,
+ enum msm_vidc_buffer_type buffer_type)
+{
+ int count = 0;
+
+ switch (buffer_type) {
+ case MSM_VIDC_BUF_INPUT:
+ count = msm_vidc_input_min_count_iris3(inst);
+ break;
+ case MSM_VIDC_BUF_OUTPUT:
+ count = msm_vidc_output_min_count(inst);
+ break;
+ case MSM_VIDC_BUF_BIN:
+ case MSM_VIDC_BUF_COMV:
+ case MSM_VIDC_BUF_NON_COMV:
+ case MSM_VIDC_BUF_LINE:
+ case MSM_VIDC_BUF_PERSIST:
+ case MSM_VIDC_BUF_ARP:
+ case MSM_VIDC_BUF_VPSS:
+ count = msm_vidc_internal_buffer_count(inst, buffer_type);
+ break;
+ case MSM_VIDC_BUF_DPB:
+ count = msm_buffer_dpb_count(inst);
+ break;
+ default:
+ break;
+ }
+
+ i_vpr_l(inst, " min_count: type: %11s, count: %9u\n", buf_name(buffer_type), count);
+ return count;
+}
+
+int msm_buffer_extra_count_iris3(struct msm_vidc_inst *inst,
+ enum msm_vidc_buffer_type buffer_type)
+{
+ int count = 0;
+
+ switch (buffer_type) {
+ case MSM_VIDC_BUF_INPUT:
+ count = msm_vidc_input_extra_count(inst);
+ break;
+ case MSM_VIDC_BUF_OUTPUT:
+ count = msm_vidc_output_extra_count(inst);
+ break;
+
default:
+		break;
+	}
+
+	i_vpr_l(inst, "extra_count: type: %11s, count: %9u\n", buf_name(buffer_type), count);
+	return count;
+}

From patchwork Fri Jul 28 13:23:41 2023
X-Patchwork-Submitter: Vikash Garodia
X-Patchwork-Id: 13332019
From: Vikash Garodia
Subject: [PATCH 30/33] iris: variant: iris3: add helper for bus and clock calculation
Date: Fri, 28 Jul 2023 18:53:41 +0530
Message-ID: <1690550624-14642-31-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org

From: Dikshita Agarwal

This adds the helper functions to calculate the required bus bandwidth and
clock frequency for the given video usecase(s).

Signed-off-by: Dikshita Agarwal
Signed-off-by: Vikash Garodia
---
 .../iris/variant/iris3/inc/msm_vidc_power_iris3.h  |  17 +
 .../iris/variant/iris3/src/msm_vidc_power_iris3.c  | 345 +++++++++++++++++++++
 2 files changed, 362 insertions(+)
 create mode 100644 drivers/media/platform/qcom/iris/variant/iris3/inc/msm_vidc_power_iris3.h
 create mode 100644 drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_power_iris3.c

diff --git a/drivers/media/platform/qcom/iris/variant/iris3/inc/msm_vidc_power_iris3.h b/drivers/media/platform/qcom/iris/variant/iris3/inc/msm_vidc_power_iris3.h
new file mode 100644
index 0000000..a6f3e54
--- /dev/null
+++ b/drivers/media/platform/qcom/iris/variant/iris3/inc/msm_vidc_power_iris3.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef __H_MSM_VIDC_POWER_IRIS3_H__
+#define __H_MSM_VIDC_POWER_IRIS3_H__
+
+#include "msm_vidc_inst.h"
+#include "msm_vidc_power.h"
+
+u64 msm_vidc_calc_freq_iris3(struct msm_vidc_inst *inst, u32 data_size);
+int msm_vidc_calc_bw_iris3(struct msm_vidc_inst *inst,
+	struct vidc_bus_vote_data *vote_data);
+
+#endif
diff --git a/drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_power_iris3.c b/drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_power_iris3.c
new file mode 100644
index 0000000..32b549c
--- /dev/null
+++ b/drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_power_iris3.c
@@ -0,0 +1,345 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2020-2021, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved. + */ + +#include "msm_vidc_core.h" +#include "msm_vidc_debug.h" +#include "msm_vidc_driver.h" +#include "msm_vidc_inst.h" +#include "msm_vidc_power.h" +#include "msm_vidc_power_iris3.h" +#include "perf_static_model.h" + +static int msm_vidc_init_codec_input_freq(struct msm_vidc_inst *inst, u32 data_size, + struct api_calculation_input *codec_input) +{ + enum msm_vidc_port_type port; + u32 color_fmt; + + if (is_encode_session(inst)) { + codec_input->decoder_or_encoder = CODEC_ENCODER; + } else if (is_decode_session(inst)) { + codec_input->decoder_or_encoder = CODEC_DECODER; + } else { + d_vpr_e("%s: invalid domain %d\n", __func__, inst->domain); + return -EINVAL; + } + + codec_input->chipset_gen = MSM_SM8550; + + if (inst->codec == MSM_VIDC_H264) { + codec_input->codec = CODEC_H264; + codec_input->lcu_size = 16; + if (inst->capabilities[ENTROPY_MODE].value == + V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CABAC) + codec_input->entropy_coding_mode = CODEC_ENTROPY_CODING_CABAC; + else + codec_input->entropy_coding_mode = CODEC_ENTROPY_CODING_CAVLC; + } else if (inst->codec == MSM_VIDC_HEVC) { + codec_input->codec = CODEC_HEVC; + codec_input->lcu_size = 32; + } else if (inst->codec == MSM_VIDC_VP9) { + codec_input->codec = CODEC_VP9; + codec_input->lcu_size = 16; + } else { + d_vpr_e("%s: invalid codec %d\n", __func__, inst->codec); + return -EINVAL; + } + + codec_input->pipe_num = inst->capabilities[PIPE].value; + codec_input->frame_rate = inst->max_rate; + + port = is_decode_session(inst) ? 
INPUT_PORT : OUTPUT_PORT; + codec_input->frame_width = inst->fmts[port].fmt.pix_mp.width; + codec_input->frame_height = inst->fmts[port].fmt.pix_mp.height; + + if (inst->capabilities[STAGE].value == MSM_VIDC_STAGE_1) { + codec_input->vsp_vpp_mode = CODEC_VSPVPP_MODE_1S; + } else if (inst->capabilities[STAGE].value == MSM_VIDC_STAGE_2) { + codec_input->vsp_vpp_mode = CODEC_VSPVPP_MODE_2S; + } else { + d_vpr_e("%s: invalid stage %d\n", __func__, + inst->capabilities[STAGE].value); + return -EINVAL; + } + + if (inst->capabilities[BIT_DEPTH].value == BIT_DEPTH_8) + codec_input->bitdepth = CODEC_BITDEPTH_8; + else + codec_input->bitdepth = CODEC_BITDEPTH_10; + + /* + * Used for calculating Encoder GOP Complexity + * hierachical_layer= 0..7 used as Array Index + * inst->capabilities[B_FRAME].value=[ 0 1 2] + * TODO how to map? + */ + + /* set as IPP */ + codec_input->hierachical_layer = 0; + + if (is_decode_session(inst)) + color_fmt = + v4l2_colorformat_to_driver(inst, + inst->fmts[OUTPUT_PORT].fmt.pix_mp.pixelformat, + __func__); + else + color_fmt = + v4l2_colorformat_to_driver(inst, + inst->fmts[INPUT_PORT].fmt.pix_mp.pixelformat, + __func__); + + codec_input->linear_opb = is_linear_colorformat(color_fmt); + codec_input->bitrate_mbps = + (codec_input->frame_rate * data_size * 8) / 1000000; + + /* set as sanity mode */ + codec_input->regression_mode = 1; + + return 0; +} + +static int msm_vidc_init_codec_input_bus(struct msm_vidc_inst *inst, struct vidc_bus_vote_data *d, + struct api_calculation_input *codec_input) +{ + u32 complexity_factor_int = 0, complexity_factor_frac = 0; + bool opb_compression_enabled = false; + + if (!d) + return -EINVAL; + + if (d->domain == MSM_VIDC_ENCODER) { + codec_input->decoder_or_encoder = CODEC_ENCODER; + } else if (d->domain == MSM_VIDC_DECODER) { + codec_input->decoder_or_encoder = CODEC_DECODER; + } else { + d_vpr_e("%s: invalid domain %d\n", __func__, d->domain); + return -EINVAL; + } + + codec_input->chipset_gen = MSM_SM8550; + + 
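Several conversions in this function treat firmware-provided ratios as Q16 (16.16) fixed point: `Q16_INT()` extracts the integer part, and `Q16_FRAC()` the fractional part, which the code then combines into the static model's x100 representation. A minimal standalone sketch of that interpretation follows; the 16.16 layout and hundredths scaling of the fraction are assumptions here, since the real macros live in the driver's power headers:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed 16.16 fixed-point helpers mirroring how Q16_INT()/Q16_FRAC()
 * are used in this file: integer part, and fractional part expressed
 * in hundredths. */
static inline uint32_t q16_int(uint32_t q)
{
	return q >> 16;
}

static inline uint32_t q16_frac(uint32_t q)
{
	/* scale the 16-bit fraction down to 0..99 */
	return ((q & 0xffffu) * 100) >> 16;
}

/* "ANDROID CR is in Q16 format, StaticModel CR in x100 format":
 * e.g. a compression ratio of 2.5 (0x28000 in Q16) becomes 250. */
static inline uint32_t q16_to_x100(uint32_t q)
{
	return q16_int(q) * 100 + q16_frac(q);
}
```

The same split also drives the complexity-threshold comparison below: a motion-vector complexity is at or under the threshold of 2 exactly when its Q16 integer part is below 2, or equals 2 with a zero fraction.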
if (d->codec == MSM_VIDC_H264) { + codec_input->codec = CODEC_H264; + } else if (d->codec == MSM_VIDC_HEVC) { + codec_input->codec = CODEC_HEVC; + } else if (d->codec == MSM_VIDC_VP9) { + codec_input->codec = CODEC_VP9; + } else { + d_vpr_e("%s: invalid codec %d\n", __func__, d->codec); + return -EINVAL; + } + + codec_input->lcu_size = d->lcu_size; + codec_input->pipe_num = d->num_vpp_pipes; + codec_input->frame_rate = d->fps; + codec_input->frame_width = d->input_width; + codec_input->frame_height = d->input_height; + + if (d->work_mode == MSM_VIDC_STAGE_1) { + codec_input->vsp_vpp_mode = CODEC_VSPVPP_MODE_1S; + } else if (d->work_mode == MSM_VIDC_STAGE_2) { + codec_input->vsp_vpp_mode = CODEC_VSPVPP_MODE_2S; + } else { + d_vpr_e("%s: invalid stage %d\n", __func__, d->work_mode); + return -EINVAL; + } + + if (inst->capabilities[ENTROPY_MODE].value == + V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CABAC) { + codec_input->entropy_coding_mode = CODEC_ENTROPY_CODING_CABAC; + } else if (inst->capabilities[ENTROPY_MODE].value == + V4L2_MPEG_VIDEO_H264_ENTROPY_MODE_CAVLC) { + codec_input->entropy_coding_mode = CODEC_ENTROPY_CODING_CAVLC; + } else { + d_vpr_e("%s: invalid entropy %d\n", __func__, + inst->capabilities[ENTROPY_MODE].value); + return -EINVAL; + } + + /* + * Used for calculating Encoder GOP Complexity + * hierachical_layer= 0..7 used as Array Index + * TODO how to map? + */ + codec_input->hierachical_layer = 0; /* set as IPP */ + + /* + * If the calculated motion_vector_complexity is > 2 then set the + * complexity_setting and refframe_complexity to be pwc(performance worst case) + * values. If the motion_vector_complexity is < 2 then set the complexity_setting + * and refframe_complexity to be average case values. 
+ */ + + complexity_factor_int = Q16_INT(d->complexity_factor); + complexity_factor_frac = Q16_FRAC(d->complexity_factor); + + if (complexity_factor_int < COMPLEXITY_THRESHOLD || + (complexity_factor_int == COMPLEXITY_THRESHOLD && + complexity_factor_frac == 0)) { + /* set as average case values */ + codec_input->complexity_setting = COMPLEXITY_SETTING_AVG; + codec_input->refframe_complexity = REFFRAME_COMPLEXITY_AVG; + } else { + /* set as pwc */ + codec_input->complexity_setting = COMPLEXITY_SETTING_PWC; + codec_input->refframe_complexity = REFFRAME_COMPLEXITY_PWC; + } + + codec_input->status_llc_onoff = d->use_sys_cache; + + if (__bpp(d->color_formats[0]) == 8) + codec_input->bitdepth = CODEC_BITDEPTH_8; + else + codec_input->bitdepth = CODEC_BITDEPTH_10; + + if (d->num_formats == 1) { + codec_input->split_opb = 0; + codec_input->linear_opb = !__ubwc(d->color_formats[0]); + } else if (d->num_formats == 2) { + codec_input->split_opb = 1; + codec_input->linear_opb = !__ubwc(d->color_formats[1]); + } else { + d_vpr_e("%s: invalid num_formats %d\n", + __func__, d->num_formats); + return -EINVAL; + } + + codec_input->linear_ipb = 0; /* set as ubwc ipb */ + + /* TODO Confirm if we always LOSSLESS mode ie lossy_ipb = 0*/ + codec_input->lossy_ipb = 0; /* set as lossless ipb */ + + /* TODO Confirm if no multiref */ + codec_input->encoder_multiref = 0; /* set as no multiref */ + codec_input->bitrate_mbps = (d->bitrate / 1000000); /* bps 10; set as 10mbps */ + + opb_compression_enabled = d->num_formats >= 2 && __ubwc(d->color_formats[1]); + + /* ANDROID CR is in Q16 format, StaticModel CR in x100 format */ + codec_input->cr_dpb = ((Q16_INT(d->compression_ratio) * 100) + + Q16_FRAC(d->compression_ratio)); + + codec_input->cr_opb = opb_compression_enabled ? 
+ codec_input->cr_dpb : 65536; + + codec_input->cr_ipb = ((Q16_INT(d->input_cr) * 100) + Q16_FRAC(d->input_cr)); + codec_input->cr_rpb = codec_input->cr_dpb; /* cr_rpb only for encoder */ + + /* disable by default, only enable for aurora depth map session */ + codec_input->lumaonly_decode = 0; + + /* set as custom regression mode, as are using cr,cf values from FW */ + codec_input->regression_mode = REGRESSION_MODE_CUSTOM; + + /* Dump all the variables for easier debugging */ + if (msm_vidc_debug & VIDC_BUS) { + struct dump dump[] = { + {"complexity_factor_int", "%d", complexity_factor_int}, + {"complexity_factor_frac", "%d", complexity_factor_frac}, + {"refframe_complexity", "%d", codec_input->refframe_complexity}, + {"complexity_setting", "%d", codec_input->complexity_setting}, + {"cr_dpb", "%d", codec_input->cr_dpb}, + {"cr_opb", "%d", codec_input->cr_opb}, + {"cr_ipb", "%d", codec_input->cr_ipb}, + {"cr_rpb", "%d", codec_input->cr_rpb}, + {"lcu size", "%d", codec_input->lcu_size}, + {"pipe number", "%d", codec_input->pipe_num}, + {"frame_rate", "%d", codec_input->frame_rate}, + {"frame_width", "%d", codec_input->frame_width}, + {"frame_height", "%d", codec_input->frame_height}, + {"work_mode", "%d", d->work_mode}, + {"encoder_or_decode", "%d", inst->domain}, + {"chipset_gen", "%d", codec_input->chipset_gen}, + {"codec_input", "%d", codec_input->codec}, + {"entropy_coding_mode", "%d", codec_input->entropy_coding_mode}, + {"hierachical_layer", "%d", codec_input->hierachical_layer}, + {"status_llc_onoff", "%d", codec_input->status_llc_onoff}, + {"bit_depth", "%d", codec_input->bitdepth}, + {"split_opb", "%d", codec_input->split_opb}, + {"linear_opb", "%d", codec_input->linear_opb}, + {"linear_ipb", "%d", codec_input->linear_ipb}, + {"lossy_ipb", "%d", codec_input->lossy_ipb}, + {"encoder_multiref", "%d", codec_input->encoder_multiref}, + {"bitrate_mbps", "%d", codec_input->bitrate_mbps}, + {"lumaonly_decode", "%d", codec_input->lumaonly_decode}, + 
{"regression_mode", "%d", codec_input->regression_mode}, + }; + __dump(dump, ARRAY_SIZE(dump)); + } + + return 0; +} + +u64 msm_vidc_calc_freq_iris3(struct msm_vidc_inst *inst, u32 data_size) +{ + u64 freq = 0; + struct msm_vidc_core *core; + int ret = 0; + struct api_calculation_input codec_input; + struct api_calculation_freq_output codec_output; + u32 fps, mbpf; + + core = inst->core; + + mbpf = msm_vidc_get_mbs_per_frame(inst); + fps = inst->max_rate; + + memset(&codec_input, 0, sizeof(struct api_calculation_input)); + memset(&codec_output, 0, sizeof(struct api_calculation_freq_output)); + ret = msm_vidc_init_codec_input_freq(inst, data_size, &codec_input); + if (ret) + return freq; + ret = msm_vidc_calculate_frequency(codec_input, &codec_output); + if (ret) + return freq; + freq = codec_output.hw_min_freq * 1000000; /* Convert to Hz */ + + i_vpr_p(inst, "%s: filled len %d, required freq %llu, fps %u, mbpf %u\n", + __func__, data_size, freq, fps, mbpf); + + if (inst->iframe && is_hevc_10bit_decode_session(inst)) { + /* + * for HEVC 10bit and iframe case only allow TURBO and + * limit to NOM for all other cases + */ + } else { + /* limit to NOM, index 0 is TURBO, index 1 is NOM clock rate */ + if (core->resource->freq_set.count >= 2 && + freq > core->resource->freq_set.freq_tbl[1].freq) + freq = core->resource->freq_set.freq_tbl[1].freq; + } + + return freq; +} + +int msm_vidc_calc_bw_iris3(struct msm_vidc_inst *inst, + struct vidc_bus_vote_data *vidc_data) +{ + int ret = 0; + struct api_calculation_input codec_input; + struct api_calculation_bw_output codec_output; + + if (!vidc_data) + return ret; + + memset(&codec_input, 0, sizeof(struct api_calculation_input)); + memset(&codec_output, 0, sizeof(struct api_calculation_bw_output)); + + ret = msm_vidc_init_codec_input_bus(inst, vidc_data, &codec_input); + if (ret) + return ret; + ret = msm_vidc_calculate_bandwidth(codec_input, &codec_output); + if (ret) + return ret; + + vidc_data->calc_bw_ddr = 
kbps(codec_output.ddr_bw_rd + codec_output.ddr_bw_wr);
+	vidc_data->calc_bw_llcc = kbps(codec_output.noc_bw_rd + codec_output.noc_bw_wr);
+
+	i_vpr_l(inst, "%s: calc_bw_ddr %llu calc_bw_llcc %llu",
+		__func__, vidc_data->calc_bw_ddr, vidc_data->calc_bw_llcc);
+
+	return ret;
+}

From patchwork Fri Jul 28 13:23:42 2023
X-Patchwork-Submitter: Vikash Garodia
X-Patchwork-Id: 13331993
From: Vikash Garodia
Subject: [PATCH 31/33] iris: variant: iris3: implement the logic to compute bus bandwidth
Date: Fri, 28 Jul 2023 18:53:42 +0530
Message-ID: <1690550624-14642-32-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org

From: Dikshita Agarwal

This implements the logic to compute the bus bandwidth required by the
encoder or decoder for a specific usecase. It takes as input the various
video usecase parameters as configured by clients.

Signed-off-by: Dikshita Agarwal
Signed-off-by: Vikash Garodia
---
 .../iris/variant/iris3/inc/perf_static_model.h     | 229 ++++++
 .../iris/variant/iris3/src/msm_vidc_bus_iris3.c    | 884 +++++++++++++++++++++
 2 files changed, 1113 insertions(+)
 create mode 100644 drivers/media/platform/qcom/iris/variant/iris3/inc/perf_static_model.h
 create mode 100644 drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_bus_iris3.c

diff --git a/drivers/media/platform/qcom/iris/variant/iris3/inc/perf_static_model.h b/drivers/media/platform/qcom/iris/variant/iris3/inc/perf_static_model.h
new file mode 100644
index 0000000..238f1af
--- /dev/null
+++ b/drivers/media/platform/qcom/iris/variant/iris3/inc/perf_static_model.h
@@ -0,0 +1,229 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2023, Qualcomm Innovation Center, Inc. All rights reserved.
+ */ + +#ifndef _PERF_STATIC_MODEL_H_ +#define _PERF_STATIC_MODEL_H_ + +#include + +/* Reordered CODECS to match Bitrate Table rows */ +#define CODEC_H264_CAVLC 0 +#define CODEC_H264 1 +#define CODEC_HEVC 2 +#define CODEC_VP9 3 + +#define CODEC_BSE_FrameFactor 0 +#define CODEC_BSE_MBFactor 1 +#define CODEC_BSE_LUC_SIZE 2 + +#define CODEC_GOP_IPP 0 +#define CODEC_GOP_IbP 1 +#define CODEC_GOP_I1B2b1P 2 +#define CODEC_GOP_I3B4b1P 3 +#define CODEC_GOP_PONLY 4 +#define CODEC_GOP_BONLY 6 +#define CODEC_GOP_IONLY 7 + +#define CODEC_ENCODER_GOP_Bb_ENTRY 0 +#define CODEC_ENCODER_GOP_P_ENTRY 1 +#define CODEC_ENCODER_GOP_FACTORY_ENTRY 2 + +#define CODEC_ENTROPY_CODING_CAVLC 0 +#define CODEC_ENTROPY_CODING_CABAC 1 + +#define CODEC_VSPVPP_MODE_1S 1 +#define CODEC_VSPVPP_MODE_2S 2 + +#define COMP_SETTING_PWC 0 +#define COMP_SETTING_AVG 1 +#define COMP_SETTING_POWER 2 + +#define CODEC_BITDEPTH_8 8 +#define CODEC_BITDEPTH_10 10 + +#define ENCODE_YUV 0 +#define ENCODE_RGB 1 + +#define COMPLEXITY_PWC 0 +#define COMPLEXITY_AVG 1 +#define COMPLEXITY_POWER 2 + +#define MAX_LINE 2048 +#ifndef VENUS_MAX_FILENAME_LENGTH +#define VENUS_MAX_FILENAME_LENGTH 1024 +#endif + +#define CODEC_ENCODER 1 +#define CODEC_DECODER 2 + +#define COMPLEXITY_THRESHOLD 2 + +enum chipset_generation { + MSM_SM8450, + MSM_SM8550, + MSM_MAX, +}; + +enum regression_mode { + /* ignores client set cr and bitrate settings */ + REGRESSION_MODE_SANITY = 1, + /* cr and bitrate default mode */ + REGRESSION_MODE_DEFAULT, + /* custom mode where client will set cr and bitrate values */ + REGRESSION_MODE_CUSTOM, +}; + +/* + * If firmware provided motion_vector_complexity is >= 2 then set the + * complexity_setting as PWC (performance worst case) + * If the motion_vector_complexity is < 2 then set the complexity_setting + * as AVG (average case value) + */ +enum complexity_setting { + COMPLEXITY_SETTING_PWC = 0, + COMPLEXITY_SETTING_AVG = 1, + COMPLEXITY_SETTING_PWR = 2, +}; + +/* + * If firmware provided 
motion_vector_complexity is >= 2 then set the + * refframe_complexity as PWC (performance worst case) + * If the motion_vector_complexity is < 2 then set the refframe_complexity + * as AVG (average case value) + */ +enum refframe_complexity { + REFFRAME_COMPLEXITY_PWC = 4, + REFFRAME_COMPLEXITY_AVG = 2, + REFFRAME_COMPLEXITY_PWR = 1, +}; + +struct api_calculation_input { + /*2: decoder; 1: encoder */ + u32 decoder_or_encoder; + + /* enum chipset_generation */ + u32 chipset_gen; + + u32 codec; + u32 lcu_size; + u32 pipe_num; + u32 frame_rate; + u32 frame_width; + u32 frame_height; + u32 vsp_vpp_mode; + u32 entropy_coding_mode; + u32 hierachical_layer; + + /* PWC, AVG/POWER */ + u32 complexity_setting; + + u32 status_llc_onoff; + u32 bitdepth; + u32 linear_opb; + + /* AV1D FG */ + u32 split_opb; + + u32 linear_ipb; + u32 lossy_ipb; + u32 ipb_yuvrgb; + u32 encoder_multiref; + u32 bitrate_mbps; + u32 refframe_complexity; + u32 cr_ipb; + u32 cr_rpb; + u32 cr_dpb; + u32 cr_opb; + u32 regression_mode; + + /* used in aurora for depth map decode */ + u32 lumaonly_decode; +}; + +struct corner_voting { + u32 percent_lowbound; + u32 percent_highbound; +}; + +struct api_calculation_freq_output { + u32 vpp_min_freq; + u32 vsp_min_freq; + u32 tensilica_min_freq; + u32 hw_min_freq; + u32 enc_hqmode; + struct corner_voting usecase_corner; +}; + +struct api_calculation_bw_output { + u32 vsp_read_noc; + u32 vsp_write_noc; + u32 vsp_read_ddr; + u32 vsp_write_ddr; + u32 vsp_rd_wr_total_noc; + u32 vsp_rd_wr_total_ddr; + + u32 collocated_rd_noc; + u32 collocated_wr_noc; + u32 collocated_rd_ddr; + u32 collocated_wr_ddr; + u32 collocated_rd_wr_total_noc; + u32 collocated_rd_wr_total_ddr; + + u32 dpb_rd_y_noc; + u32 dpb_rd_crcb_noc; + u32 dpb_rdwr_duetooverlap_noc; + u32 dpb_wr_noc; + u32 dpb_rd_y_ddr; + u32 dpb_rd_crcb_ddr; + u32 dpb_rdwr_duetooverlap_ddr; + u32 dpb_wr_ddr; + u32 dpb_rd_wr_total_noc; + u32 dpb_rd_wr_total_ddr; + + u32 opb_write_total_noc; + u32 opb_write_total_ddr; + + u32 
ipb_rd_total_noc; + u32 ipb_rd_total_ddr; + + u32 bse_tlb_rd_noc; + u32 bse_tlb_wr_noc; + u32 bse_tlb_rd_ddr; + u32 bse_tlb_wr_ddr; + u32 bse_rd_wr_total_noc; + u32 bse_rd_wr_total_ddr; + + u32 statistics_rd_noc; + u32 statistics_wr_noc; + u32 statistics_rd_ddr; + u32 statistics_wr_ddr; + + u32 mmu_rd_noc; + u32 mmu_rd_ddr; + + u32 noc_bw_rd; + u32 noc_bw_wr; + u32 ddr_bw_rd; + u32 ddr_bw_wr; + + /* llc BW components for aurora */ + u32 dpb_rd_y_llc; + u32 dpb_rd_crcb_llc; + u32 dpb_wr_llc; + u32 bse_tlb_rd_llc; + u32 bse_tlb_wr_llc; + u32 vsp_read_llc; + u32 vsp_write_llc; + + u32 llc_bw_rd; + u32 llc_bw_wr; +}; + +int msm_vidc_calculate_frequency(struct api_calculation_input codec_input, + struct api_calculation_freq_output *codec_output); +int msm_vidc_calculate_bandwidth(struct api_calculation_input codec_input, + struct api_calculation_bw_output *codec_output); + +#endif /*_PERF_STATIC_MODEL_H_ */ diff --git a/drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_bus_iris3.c b/drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_bus_iris3.c new file mode 100644 index 0000000..92aa995 --- /dev/null +++ b/drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_bus_iris3.c @@ -0,0 +1,884 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved. 
+ */ + +#include "msm_vidc_debug.h" +#include "perf_static_model.h" + +/* 100x */ +static u32 dpbopb_ubwc30_cr_table_cratio_iris3[7][12] = { + {237, 399, 272, 137, 225, 158, 185, 259, 203, 138, 167, 152}, + {269, 404, 302, 202, 367, 238, 210, 299, 232, 134, 181, 149}, + {269, 404, 302, 202, 367, 238, 210, 299, 232, 134, 181, 149}, + {269, 404, 302, 202, 367, 238, 210, 299, 232, 134, 181, 149}, + {237, 399, 272, 137, 225, 158, 185, 259, 203, 138, 167, 152}, + {269, 404, 302, 202, 367, 238, 210, 299, 232, 134, 181, 149}, + {269, 404, 302, 202, 367, 238, 210, 299, 232, 134, 181, 149}, +}; + +/* 100x */ +static u32 rpb_ubwc30_cr_table_cratio_iris3[7][12] = { + {193, 294, 218, 135, 214, 155, 175, 241, 191, 139, 162, 149}, + {285, 406, 316, 207, 373, 243, 201, 280, 221, 139, 177, 152}, + {285, 406, 316, 207, 373, 243, 201, 280, 221, 139, 177, 152}, + {285, 406, 316, 207, 373, 243, 201, 280, 221, 139, 177, 152}, + {193, 294, 218, 135, 214, 155, 175, 241, 191, 139, 162, 149}, + {285, 406, 316, 207, 373, 243, 201, 280, 221, 139, 177, 152}, + {285, 406, 316, 207, 373, 243, 201, 280, 221, 139, 177, 152}, +}; + +/* 100x */ +static u32 ipblossy_ubwc30_cr_table_cratio_iris3[7][12] = { + {215, 215, 215, 174, 174, 174, 266, 266, 266, 231, 231, 231}, + {254, 254, 254, 219, 219, 219, 292, 292, 292, 249, 249, 249}, + {254, 254, 254, 219, 219, 219, 292, 292, 292, 249, 249, 249}, + {254, 254, 254, 219, 219, 219, 292, 292, 292, 249, 249, 249}, + {215, 215, 215, 174, 174, 174, 266, 266, 266, 231, 231, 231}, + {254, 254, 254, 219, 219, 219, 292, 292, 292, 249, 249, 249}, + {254, 254, 254, 219, 219, 219, 292, 292, 292, 249, 249, 249}, +}; + +/* 100x */ +static u32 ipblossless_ubwc30_cr_table_cratio_iris3[7][12] = { + {185, 215, 194, 147, 178, 159, 162, 181, 169, 138, 161, 146}, + {186, 217, 195, 151, 183, 161, 164, 182, 170, 140, 168, 148}, + {186, 217, 195, 151, 183, 161, 164, 182, 170, 140, 168, 148}, + {186, 217, 195, 151, 183, 161, 164, 182, 170, 140, 168, 148}, + {185, 215, 194, 147, 
178, 159, 162, 181, 169, 138, 161, 146}, + {186, 217, 195, 151, 183, 161, 164, 182, 170, 140, 168, 148}, + {186, 217, 195, 151, 183, 161, 164, 182, 170, 140, 168, 148}, +}; + +/* 100x */ +static u32 en_original_compression_factor_rgba_pwd_iris3 = 243; +/* 100x */ +static u32 en_original_compression_factor_rgba_avg_iris3 = 454; + +/* H I J K L M N O P + * TotalW Total R Frequency Write Read + * Name B b P B b P B b P + * I3B4b1P 0.5 1.875 3 4 1 1 0 1 2 2 1 + * I1B2b1P 0.5 1.75 1 2 1 1 0 1 2 2 1 + * IbP 0.5 1.5 0 1 1 1 0 1 2 2 1 + * IPP 1 1 0 0 1 1 0 1 2 2 1 + * P 1 1 0 0 1 1 0 1 2 2 1 + * smallB 0 2 0 1 0 1 0 1 2 2 1 + * bigB 1 2 1 0 0 1 0 1 2 2 1 + * + * Total W = SUMPRODUCT(H16:J16, K16 : M16) / SUM(H16:J16) + * Total R = SUMPRODUCT(H16:J16, N16 : P16) / SUM(H16:J16) + */ + +/* 1000x */ +static u32 iris3_en_readfactor[7] = {1000, 1500, 1750, 1875, 1000, 2000, 2000}; +/* 1000x */ +static u32 iris3_en_writefactor[7] = {1000, 500, 500, 500, 1000, 0, 1000}; +static u32 iris3_en_frame_num_parallel = 1; + +u32 calculate_number_lcus_iris3(u32 width, u32 height, u32 lcu_size) +{ + u32 mbs_width = (width % lcu_size) ? + (width / lcu_size + 1) : (width / lcu_size); + u32 mbs_height = (height % lcu_size) ? + (height / lcu_size + 1) : (height / lcu_size); + + return mbs_width * mbs_height; +} + +u32 calculate_number_ubwctiles_iris3(u32 width, u32 height, u32 tile_w, u32 tile_h) +{ + u32 tiles_width = (width % tile_w) ? + (width / tile_w + 1) : (width / tile_w); + u32 tiles_height = (height % tile_h) ? 
+ (height / tile_h + 1) : (height / tile_h); + + return tiles_width * tiles_height; +} + +struct compression_factors { + u32 dpb_cf_y; + u32 dpb_cf_cbcr; + u32 opb_cf_ycbcr; + u32 dpb_cr_y; + u32 ipb_cr_y; + u32 ipb_cr; +} compression_factor; + +u32 get_compression_factors(struct compression_factors *compression_factor, + struct api_calculation_input codec_input) +{ + u8 cr_index_entry, cr_index_y, cr_index_c, cr_index_uni; + u32 frame_width; + u32 frame_height; + + frame_width = codec_input.frame_width; + frame_height = codec_input.frame_height; + if (frame_width * frame_height <= 1920 * 1080) + cr_index_entry = 0; + else + cr_index_entry = 1; + + if (codec_input.bitdepth == CODEC_BITDEPTH_8) { + /* NOT PWC or average and power case */ + if (codec_input.complexity_setting != 0) { + cr_index_y = 0; + cr_index_c = 1; + cr_index_uni = 2; + } else { + cr_index_y = 3; + cr_index_c = 4; + cr_index_uni = 5; + } + } else { + /* NOT PWC or average and power case */ + if (codec_input.complexity_setting != 0) { + cr_index_y = 6; + cr_index_c = 7; + cr_index_uni = 8; + } else { + cr_index_y = 9; + cr_index_c = 10; + cr_index_uni = 11; + } + } + + if (codec_input.decoder_or_encoder == CODEC_DECODER) { + compression_factor->dpb_cf_y = + dpbopb_ubwc30_cr_table_cratio_iris3[cr_index_entry][cr_index_y]; + compression_factor->dpb_cf_cbcr = + dpbopb_ubwc30_cr_table_cratio_iris3[cr_index_entry][cr_index_c]; + compression_factor->opb_cf_ycbcr = + dpbopb_ubwc30_cr_table_cratio_iris3[cr_index_entry][cr_index_uni]; + + if (codec_input.regression_mode == 3 && + /* input cr numbers from interface */ + (codec_input.cr_dpb != 0 || codec_input.cr_opb != 0)) { + compression_factor->dpb_cf_y = (u32)(codec_input.cr_dpb * 100); + compression_factor->dpb_cf_cbcr = (u32)(codec_input.cr_dpb * 100); + compression_factor->opb_cf_ycbcr = (u32)(codec_input.cr_opb * 100); + } + } else { /* encoder */ + /* + * IPB CR Table Choice; static sheet (if framewidth<3840, use lossless table) + * (else, use lossy 
table) + * stick to this choice for SW purpose (no change for SW) + */ + if (frame_width < 3840) { + compression_factor->ipb_cr = + ipblossless_ubwc30_cr_table_cratio_iris3 + [cr_index_entry][cr_index_uni]; + compression_factor->ipb_cr_y = + ipblossless_ubwc30_cr_table_cratio_iris3 + [cr_index_entry][cr_index_y]; + } else { + compression_factor->ipb_cr = + ipblossy_ubwc30_cr_table_cratio_iris3[cr_index_entry] + [cr_index_uni]; + compression_factor->ipb_cr_y = + ipblossy_ubwc30_cr_table_cratio_iris3[cr_index_entry] + [cr_index_y]; + } + + compression_factor->dpb_cf_y = + rpb_ubwc30_cr_table_cratio_iris3[cr_index_entry][cr_index_y]; + + compression_factor->dpb_cf_cbcr = + rpb_ubwc30_cr_table_cratio_iris3[cr_index_entry][cr_index_c]; + + if (codec_input.regression_mode == 3 && + /* input cr from interface */ + (codec_input.cr_ipb != 0 || codec_input.cr_rpb != 0)) { + compression_factor->dpb_cf_y = (u32)(codec_input.cr_rpb * 100); + compression_factor->dpb_cf_cbcr = (u32)(codec_input.cr_rpb * 100); + compression_factor->ipb_cr_y = (u32)(codec_input.cr_ipb * 100); + } + } + + return 0; +} + +static int calculate_bandwidth_decoder_iris3(struct api_calculation_input codec_input, + struct api_calculation_bw_output *codec_output) +{ + /* common control parameters */ + u32 frame_width; + u32 frame_height; + u32 frame_lcu_size = 16; /* initialized to h264 */ + u32 lcu_per_frame; + u32 target_bitrate; + u32 collocated_bytes_per_lcu = 16; /* initialized to h264 */ + + u32 frame420_y_bw_linear_8bpp; + u32 frame420_y_bw_no_ubwc_tile_10bpp; + u32 frame420_y_bw_linear_10bpp; + + u16 ubwc_tile_w; + u16 ubwc_tile_h; + + u32 dpb_compression_factor_y; + u32 dpb_compression_factor_cbcr; + + u32 reconstructed_write_bw_factor_rd; + u32 reference_y_read_bw_factor; + u32 reference_cbcr_read_bw_factor; + + /* decoder control parameters */ + u32 decoder_vsp_read_factor = 6; + u32 bins_to_bits_factor = 4; + + u32 dpb_to_opb_ratios_ds = 1; + + u8 llc_enabled_ref_y_rd = 1; + u8 
llc_enable_ref_crcb_rd = 1; + u8 llc_enabled_bse_tlb = 1; + /* this is for 2pipe and 1pipe LLC */ + + u32 opb_compression_factor_ycbcr; + u32 dpb_ubwc_tile_width_pixels; + u32 dpb_ubwc_tile_height_pixels; + u32 decoder_frame_complexity_factor; + u32 llc_saving = 130; /* Initialized to H264 */ + + u32 bse_tlb_byte_per_lcu = 0; + + u32 large_bw_calculation_fp = 0; + + llc_enabled_ref_y_rd = (codec_input.status_llc_onoff) ? 1 : 0; + llc_enable_ref_crcb_rd = (codec_input.status_llc_onoff) ? 1 : 0; + /* H265D BSE TLB in LLC will be ported in Kailua */ + llc_enabled_bse_tlb = (codec_input.status_llc_onoff) ? 1 : 0; + + frame_width = codec_input.frame_width; + frame_height = codec_input.frame_height; + if (codec_input.codec == CODEC_H264 || + codec_input.codec == CODEC_H264_CAVLC) { + frame_lcu_size = 16; + collocated_bytes_per_lcu = 16; + llc_saving = 130; + } else if (codec_input.codec == CODEC_HEVC) { + if (codec_input.lcu_size == 32) { + frame_lcu_size = 32; + collocated_bytes_per_lcu = 64; + llc_saving = 114; + } else if (codec_input.lcu_size == 64) { + frame_lcu_size = 64; + collocated_bytes_per_lcu = 256; + llc_saving = 107; + } + } else if (codec_input.codec == CODEC_VP9) { + if (codec_input.lcu_size == 32) { + frame_lcu_size = 32; + collocated_bytes_per_lcu = 64; + llc_saving = 114; + } else if (codec_input.lcu_size == 64) { + frame_lcu_size = 64; + collocated_bytes_per_lcu = 256; + llc_saving = 107; + } + } + + lcu_per_frame = + calculate_number_lcus_iris3(frame_width, frame_height, frame_lcu_size); + + target_bitrate = (u32)(codec_input.bitrate_mbps); /* Mbps */ + + ubwc_tile_w = (codec_input.bitdepth == CODEC_BITDEPTH_8) ? 32 : 48; + ubwc_tile_h = (codec_input.bitdepth == CODEC_BITDEPTH_8) ?
8 : 4; + + frame420_y_bw_linear_8bpp = + ((calculate_number_ubwctiles_iris3(frame_width, frame_height, 32, 8) * + 256 * codec_input.frame_rate + 999) / 1000 + 999) / 1000; + + frame420_y_bw_no_ubwc_tile_10bpp = + ((calculate_number_ubwctiles_iris3(frame_width, frame_height, 48, 4) * + 256 * codec_input.frame_rate + 999) / 1000 + 999) / 1000; + frame420_y_bw_linear_10bpp = ((frame_width * frame_height * + codec_input.frame_rate * 2 + 999) / 1000 + 999) / 1000; + + /* TODO Integrate Compression Ratio returned by FW */ + get_compression_factors(&compression_factor, codec_input); + dpb_compression_factor_y = compression_factor.dpb_cf_y; + dpb_compression_factor_cbcr = compression_factor.dpb_cf_cbcr; + opb_compression_factor_ycbcr = compression_factor.opb_cf_ycbcr; + + dpb_ubwc_tile_width_pixels = ubwc_tile_w; + + dpb_ubwc_tile_height_pixels = ubwc_tile_h; + + decoder_frame_complexity_factor = + (codec_input.complexity_setting == 0) ? + 400 : ((codec_input.complexity_setting == 1) ? 266 : 100); + + reconstructed_write_bw_factor_rd = (codec_input.complexity_setting == 0) ? 
+ 105 : 100; + + reference_y_read_bw_factor = llc_saving; + + reference_cbcr_read_bw_factor = llc_saving; + + if (codec_input.codec == CODEC_HEVC) { + if (codec_input.lcu_size == 32) + bse_tlb_byte_per_lcu = 64; + else if (codec_input.lcu_size == 16) + bse_tlb_byte_per_lcu = 32; + else + bse_tlb_byte_per_lcu = 128; + } else if ((codec_input.codec == CODEC_H264) || + (codec_input.codec == CODEC_H264_CAVLC)) { + bse_tlb_byte_per_lcu = 64; + } else if (codec_input.codec == CODEC_VP9) { + bse_tlb_byte_per_lcu = 304; + } + + codec_output->noc_bw_rd = 0; + codec_output->noc_bw_wr = 0; + codec_output->ddr_bw_rd = 0; + codec_output->ddr_bw_wr = 0; + + large_bw_calculation_fp = 0; + large_bw_calculation_fp = ((target_bitrate * + decoder_vsp_read_factor + 7) / 8); + + codec_output->vsp_read_noc = large_bw_calculation_fp; + + codec_output->vsp_read_ddr = codec_output->vsp_read_noc; + + large_bw_calculation_fp = ((target_bitrate * + bins_to_bits_factor + 7) / 8); + + codec_output->vsp_write_noc = large_bw_calculation_fp; + codec_output->vsp_write_ddr = codec_output->vsp_write_noc; + + /* accumulation */ + codec_output->noc_bw_rd += codec_output->vsp_read_noc; + codec_output->ddr_bw_rd += codec_output->vsp_read_ddr; + codec_output->noc_bw_wr += codec_output->vsp_write_noc; + codec_output->ddr_bw_wr += codec_output->vsp_write_ddr; + + large_bw_calculation_fp = 0; + large_bw_calculation_fp = ((collocated_bytes_per_lcu * + lcu_per_frame * codec_input.frame_rate + 999) / 1000 + 999) / 1000; + codec_output->collocated_rd_noc = large_bw_calculation_fp; + codec_output->collocated_wr_noc = codec_output->collocated_rd_noc; + codec_output->collocated_rd_ddr = codec_output->collocated_rd_noc; + codec_output->collocated_wr_ddr = codec_output->collocated_wr_noc; + + codec_output->collocated_rd_wr_total_noc = + (u32)(codec_output->collocated_rd_noc + codec_output->collocated_wr_noc); + + codec_output->collocated_rd_wr_total_ddr = + codec_output->collocated_rd_wr_total_noc; + + /* 
accumulation */ + codec_output->noc_bw_rd += codec_output->collocated_rd_noc; + codec_output->noc_bw_wr += codec_output->collocated_wr_noc; + codec_output->ddr_bw_rd += codec_output->collocated_rd_ddr; + codec_output->ddr_bw_wr += codec_output->collocated_wr_ddr; + + large_bw_calculation_fp = 0; + large_bw_calculation_fp = ((codec_input.bitdepth == CODEC_BITDEPTH_8) ? + frame420_y_bw_linear_8bpp : + frame420_y_bw_no_ubwc_tile_10bpp) * decoder_frame_complexity_factor; + + large_bw_calculation_fp = + (large_bw_calculation_fp + dpb_compression_factor_y - 1) / + dpb_compression_factor_y; + + codec_output->dpb_rd_y_noc = large_bw_calculation_fp; + + large_bw_calculation_fp = ((codec_input.bitdepth == CODEC_BITDEPTH_8) ? + frame420_y_bw_linear_8bpp : frame420_y_bw_no_ubwc_tile_10bpp) * + decoder_frame_complexity_factor; + + large_bw_calculation_fp = + (large_bw_calculation_fp + dpb_compression_factor_cbcr - 1) / + dpb_compression_factor_cbcr / 2; + + codec_output->dpb_rd_crcb_noc = large_bw_calculation_fp; + codec_output->dpb_rdwr_duetooverlap_noc = 0; + + large_bw_calculation_fp = ((codec_input.bitdepth == CODEC_BITDEPTH_8) ? + frame420_y_bw_linear_8bpp : frame420_y_bw_no_ubwc_tile_10bpp) * + reconstructed_write_bw_factor_rd; + + large_bw_calculation_fp = ((codec_input.bitdepth == CODEC_BITDEPTH_8) ? + frame420_y_bw_linear_8bpp : frame420_y_bw_no_ubwc_tile_10bpp) * + reconstructed_write_bw_factor_rd; + + large_bw_calculation_fp = large_bw_calculation_fp * + (dpb_compression_factor_y / 2 + dpb_compression_factor_cbcr); + + large_bw_calculation_fp = (large_bw_calculation_fp + dpb_compression_factor_y - 1) / + dpb_compression_factor_y; + + large_bw_calculation_fp = + (large_bw_calculation_fp + dpb_compression_factor_cbcr - 1) / + dpb_compression_factor_cbcr; + + codec_output->dpb_wr_noc = large_bw_calculation_fp; + + codec_output->dpb_rd_y_ddr = (llc_enabled_ref_y_rd) ? 
+ ((codec_output->dpb_rd_y_noc * 100 + reference_y_read_bw_factor - 1) / + reference_y_read_bw_factor) : codec_output->dpb_rd_y_noc; + + codec_output->dpb_rd_crcb_ddr = (llc_enable_ref_crcb_rd) ? + ((codec_output->dpb_rd_crcb_noc * 100 + + reference_cbcr_read_bw_factor - 1) / + reference_cbcr_read_bw_factor) : codec_output->dpb_rd_crcb_noc; + + codec_output->dpb_rdwr_duetooverlap_ddr = 0; + codec_output->dpb_wr_ddr = codec_output->dpb_wr_noc; + + /* accumulation */ + codec_output->noc_bw_rd += codec_output->dpb_rd_y_noc; + codec_output->noc_bw_rd += codec_output->dpb_rd_crcb_noc; + codec_output->noc_bw_rd += codec_output->dpb_rdwr_duetooverlap_noc; + codec_output->noc_bw_wr += codec_output->dpb_wr_noc; + codec_output->ddr_bw_rd += codec_output->dpb_rd_y_ddr; + codec_output->ddr_bw_rd += codec_output->dpb_rd_crcb_ddr; + codec_output->ddr_bw_rd += codec_output->dpb_rdwr_duetooverlap_ddr; + codec_output->ddr_bw_wr += codec_output->dpb_wr_ddr; + + if (codec_input.linear_opb || codec_input.split_opb) { + if (codec_input.linear_opb) { + if (codec_input.bitdepth == CODEC_BITDEPTH_8) { + large_bw_calculation_fp = ((frame420_y_bw_linear_8bpp) * + 3 / 2 / dpb_to_opb_ratios_ds); + + codec_output->opb_write_total_noc = large_bw_calculation_fp; + } else { + large_bw_calculation_fp = ((frame420_y_bw_linear_10bpp) * + 3 / 2 / dpb_to_opb_ratios_ds); + + codec_output->opb_write_total_noc = large_bw_calculation_fp; + } + } else { /* (CODEC_INPUT.split_opb) */ + if (codec_input.bitdepth == CODEC_BITDEPTH_8) { + large_bw_calculation_fp = + (frame420_y_bw_linear_8bpp * 3 / 2 / dpb_to_opb_ratios_ds * + 100 + opb_compression_factor_ycbcr - 1) / + opb_compression_factor_ycbcr; + + codec_output->opb_write_total_noc = large_bw_calculation_fp; + } else { + large_bw_calculation_fp = + (frame420_y_bw_no_ubwc_tile_10bpp * 3 / 2 / + dpb_to_opb_ratios_ds * 100 + + opb_compression_factor_ycbcr - 1) / + opb_compression_factor_ycbcr; + + codec_output->opb_write_total_noc = large_bw_calculation_fp; + 
} + } + } else { + codec_output->opb_write_total_noc = 0; + } + + codec_output->opb_write_total_ddr = codec_output->opb_write_total_noc; + + /* accumulation */ + codec_output->noc_bw_wr += codec_output->opb_write_total_noc; + codec_output->ddr_bw_wr += codec_output->opb_write_total_ddr; + + large_bw_calculation_fp = ((bse_tlb_byte_per_lcu * lcu_per_frame * + codec_input.frame_rate + 999) / 1000 + 999) / 1000; + + codec_output->bse_tlb_rd_noc = large_bw_calculation_fp; + + if (llc_enabled_bse_tlb) + codec_output->bse_tlb_rd_ddr = 0; + else + codec_output->bse_tlb_rd_ddr = codec_output->bse_tlb_rd_noc; + + codec_output->bse_tlb_wr_noc = codec_output->bse_tlb_rd_noc; + + if (llc_enabled_bse_tlb) + codec_output->bse_tlb_wr_ddr = 0; + else + codec_output->bse_tlb_wr_ddr = codec_output->bse_tlb_wr_noc; + + /* accumulation */ + codec_output->noc_bw_rd += codec_output->bse_tlb_rd_noc; + codec_output->ddr_bw_rd += codec_output->bse_tlb_rd_ddr; + codec_output->noc_bw_wr += codec_output->bse_tlb_wr_noc; + codec_output->ddr_bw_wr += codec_output->bse_tlb_wr_ddr; + + codec_output->mmu_rd_ddr = 0; + codec_output->mmu_rd_noc = 0; + /* accumulation */ + codec_output->noc_bw_rd += codec_output->mmu_rd_noc; + codec_output->ddr_bw_rd += codec_output->mmu_rd_ddr; + + return 0; +} + +static int calculate_bandwidth_encoder_iris3(struct api_calculation_input codec_input, + struct api_calculation_bw_output *codec_output) +{ + /* common control parameters */ + u32 frame_width; + u32 frame_height; + u32 frame_lcu_size; + u32 lcu_per_frame; + u32 target_bitrate; + u32 collocated_bytes_per_lcu; + + u32 frame420_y_bw_linear_8bpp; + u32 frame420_y_bw_no_ubwc_tile_10bpp; + u32 frame420_y_bw_linear_10bpp; + + u16 ubwc_tile_w; + u16 ubwc_tile_h; + + u32 dpb_compression_factor_y; + u32 dpb_compression_factor_cbcr; + + u32 reconstructed_write_bw_factor_rd; + u32 reference_y_read_bw_factor; + u32 reference_crcb_read_bw_factor; + + /* encoder control parameters */ + u32 en_vertical_tiles_width = 960; 
+ + u8 en_rotation_90_270 = 0; + /* TODO Can we use (codec_input.status_llc_onoff) for enc_llc_*? */ + u8 en_llc_enable_ref_rd_crcb = 0; + u8 en_llc_enable_rec_wr_uncompleted = 0; + u8 en_llc_enable_ref_rd_y_overlap = 0; + + u32 en_bins_to_bits_factor = 4; + u32 en_search_windows_size_horizontal = 96; + + u32 en_tile_number; + u32 ipb_compression_factor_y; + u32 ipb_compression_factor; + + u32 large_bw_calculation_fp = 0; + + /* TODO Are these really needed in Encoder? */ + u32 bse_tlb_byte_per_lcu = 0; + u8 llc_enabled_bse_tlb = 1; + + /* H265D BSE TLB in LLC will be ported in Kailua */ + llc_enabled_bse_tlb = (codec_input.status_llc_onoff) ? 1 : 0; + + frame_width = codec_input.frame_width; + frame_height = codec_input.frame_height; + if (codec_input.codec == CODEC_H264 || + codec_input.codec == CODEC_H264_CAVLC) { + frame_lcu_size = 16; + collocated_bytes_per_lcu = 16; + } else if (codec_input.codec == CODEC_HEVC) { + frame_lcu_size = 32; + collocated_bytes_per_lcu = 64; + } else { + /* TODO What is the value for VP9? */ + frame_lcu_size = 16; + collocated_bytes_per_lcu = 16; /* TODO Fixes uninitialized compilation error. */ + } + + lcu_per_frame = + calculate_number_lcus_iris3(frame_width, frame_height, frame_lcu_size); + + bse_tlb_byte_per_lcu = 16; /* TODO Should be in common declaration */ + + target_bitrate = (u32)(codec_input.bitrate_mbps); /* Mbps */ + + ubwc_tile_w = (codec_input.bitdepth == CODEC_BITDEPTH_8) ? 32 : 48; + ubwc_tile_h = (codec_input.bitdepth == CODEC_BITDEPTH_8) ?
8 : 4; + + /* yuv */ + if (codec_input.ipb_yuvrgb == 0) { + frame420_y_bw_linear_8bpp = + ((calculate_number_ubwctiles_iris3(frame_width, frame_height, + 32, 8) * 256 * codec_input.frame_rate + 999) / 1000 + 999) / 1000; + } else { /* RGBA */ + frame420_y_bw_linear_8bpp = + ((calculate_number_ubwctiles_iris3(frame_width, frame_height, + 6, 4) * 256 * codec_input.frame_rate + 999) / 1000 + 999) / 1000; + } + + frame420_y_bw_no_ubwc_tile_10bpp = + ((calculate_number_ubwctiles_iris3(frame_width, frame_height, 48, 4) * + 256 * codec_input.frame_rate + 999) / 1000 + 999) / 1000; + + frame420_y_bw_linear_10bpp = ((frame_width * frame_height * + codec_input.frame_rate * 2 + 999) / 1000 + 999) / 1000; + + /* TODO Integrate Compression Ratio returned by FW */ + get_compression_factors(&compression_factor, codec_input); + dpb_compression_factor_y = compression_factor.dpb_cf_y; + dpb_compression_factor_cbcr = compression_factor.dpb_cf_cbcr; + ipb_compression_factor_y = compression_factor.ipb_cr_y; + ipb_compression_factor = compression_factor.ipb_cr; + + en_tile_number = (frame_width % en_vertical_tiles_width) ? 
+ ((frame_width / en_vertical_tiles_width) + 1) : + (frame_width / en_vertical_tiles_width); + + en_tile_number = en_tile_number * 100; + + /* ceil is same as excel roundup (float, 0); */ + reconstructed_write_bw_factor_rd = ((en_tile_number - 100) * 2 * + ((codec_input.lcu_size + ubwc_tile_w - 1) / ubwc_tile_w) * + ubwc_tile_w + (frame_width - 1)) / (frame_width) + 100; + + reference_y_read_bw_factor = ((en_tile_number - 100) * 2 * + ((en_search_windows_size_horizontal + ubwc_tile_w - 1) / ubwc_tile_w) * + ubwc_tile_w + (frame_width - 1)) / frame_width + 100; + + reference_crcb_read_bw_factor = 150; + + codec_output->noc_bw_rd = 0; + codec_output->noc_bw_wr = 0; + codec_output->ddr_bw_rd = 0; + codec_output->ddr_bw_wr = 0; + + large_bw_calculation_fp = (target_bitrate * en_bins_to_bits_factor + 7) / 8; + codec_output->vsp_read_noc = large_bw_calculation_fp; + codec_output->vsp_read_ddr = codec_output->vsp_read_noc; + large_bw_calculation_fp = (target_bitrate + 7) / 8; + + codec_output->vsp_write_noc = codec_output->vsp_read_noc + + large_bw_calculation_fp; + + codec_output->vsp_write_ddr = codec_output->vsp_write_noc; + + /* accumulation */ + codec_output->noc_bw_rd += codec_output->vsp_read_noc; + codec_output->ddr_bw_rd += codec_output->vsp_read_ddr; + codec_output->noc_bw_wr += codec_output->vsp_write_noc; + codec_output->ddr_bw_wr += codec_output->vsp_write_ddr; + + large_bw_calculation_fp = ((collocated_bytes_per_lcu * lcu_per_frame * + codec_input.frame_rate + 999) / 1000 + 999) / 1000; + + codec_output->collocated_rd_noc = large_bw_calculation_fp; + codec_output->collocated_wr_noc = codec_output->collocated_rd_noc; + codec_output->collocated_rd_ddr = codec_output->collocated_rd_noc; + codec_output->collocated_wr_ddr = codec_output->collocated_wr_noc; + + codec_output->collocated_rd_wr_total_noc = + (u32)(codec_output->collocated_rd_noc + codec_output->collocated_wr_noc); + codec_output->collocated_rd_wr_total_ddr = + 
codec_output->collocated_rd_wr_total_noc; + + /* accumulation */ + codec_output->noc_bw_rd += codec_output->collocated_rd_noc; + codec_output->noc_bw_wr += codec_output->collocated_wr_noc; + codec_output->ddr_bw_rd += codec_output->collocated_rd_ddr; + codec_output->ddr_bw_wr += codec_output->collocated_wr_ddr; + + large_bw_calculation_fp = 0; + + large_bw_calculation_fp = ((codec_input.bitdepth == CODEC_BITDEPTH_8) ? + frame420_y_bw_linear_8bpp : + frame420_y_bw_no_ubwc_tile_10bpp) * reference_y_read_bw_factor; + + large_bw_calculation_fp = (large_bw_calculation_fp * + iris3_en_readfactor[codec_input.hierachical_layer]); + + large_bw_calculation_fp = (large_bw_calculation_fp + + dpb_compression_factor_y - 1) / dpb_compression_factor_y; + + large_bw_calculation_fp = (large_bw_calculation_fp + 999) / 1000; + + codec_output->dpb_rd_y_noc = large_bw_calculation_fp; + + large_bw_calculation_fp = 0; + + large_bw_calculation_fp = ((codec_input.bitdepth == CODEC_BITDEPTH_8) ? + frame420_y_bw_linear_8bpp : + frame420_y_bw_no_ubwc_tile_10bpp) * reference_crcb_read_bw_factor / 2; + + large_bw_calculation_fp = large_bw_calculation_fp * + iris3_en_readfactor[codec_input.hierachical_layer]; + + large_bw_calculation_fp = (large_bw_calculation_fp + + dpb_compression_factor_cbcr - 1) / dpb_compression_factor_cbcr; + + large_bw_calculation_fp = (large_bw_calculation_fp + 999) / 1000; + codec_output->dpb_rd_crcb_noc = large_bw_calculation_fp; + + large_bw_calculation_fp = 0; + + large_bw_calculation_fp = ((codec_input.bitdepth == CODEC_BITDEPTH_8) ? 
+ frame420_y_bw_linear_8bpp : frame420_y_bw_no_ubwc_tile_10bpp) * + reconstructed_write_bw_factor_rd * + iris3_en_writefactor[codec_input.hierachical_layer] / + iris3_en_frame_num_parallel; + + large_bw_calculation_fp = (large_bw_calculation_fp + 999) / 1000; + + large_bw_calculation_fp = large_bw_calculation_fp * + (dpb_compression_factor_cbcr + dpb_compression_factor_y / 2); + + large_bw_calculation_fp = (large_bw_calculation_fp + + dpb_compression_factor_y - 1) / dpb_compression_factor_y; + + large_bw_calculation_fp = (large_bw_calculation_fp + + dpb_compression_factor_cbcr - 1) / dpb_compression_factor_cbcr; + + codec_output->dpb_wr_noc = large_bw_calculation_fp; + + /* + * Summary: + * By default (for both HFR and HSR cases): + * - For any resolution with fps >= 120, enable layering + * (120 -> 3, 240 -> 4, 480 -> 5 layers). + * - Once layering is enabled, 50 percent of frames are non-reference + * frames; recon write is disabled by Venus firmware. + * - The customer can enable/disable layering, so the recon write + * savings are lost if the customer explicitly disables layered + * encoding. + */ + + /* HFR cases use alternating rec write if not PWC */ + if (codec_input.frame_rate >= 120 && codec_input.complexity_setting != 0) + codec_output->dpb_wr_noc = codec_output->dpb_wr_noc / 2; + + /* for power cases with [B1] adaptive non-ref b frame */ + /* power cases: IbP non-reference b */ + if (codec_input.hierachical_layer >= 1 && + codec_input.hierachical_layer <= 3 && + codec_input.complexity_setting != 0) + codec_output->dpb_wr_noc = codec_output->dpb_wr_noc / 2; + + large_bw_calculation_fp = 0; + large_bw_calculation_fp = codec_output->dpb_wr_noc * + (reconstructed_write_bw_factor_rd - 100); + + large_bw_calculation_fp = (large_bw_calculation_fp + + reconstructed_write_bw_factor_rd - 1) / reconstructed_write_bw_factor_rd; + + codec_output->dpb_rdwr_duetooverlap_noc = large_bw_calculation_fp; + + codec_output->dpb_rd_y_ddr = (en_llc_enable_ref_rd_y_overlap) ?
+ (codec_output->dpb_rd_y_noc * 100 + reference_y_read_bw_factor - 1) / + reference_y_read_bw_factor : codec_output->dpb_rd_y_noc; + + codec_output->dpb_rd_crcb_ddr = (en_llc_enable_ref_rd_crcb) ? + (codec_output->dpb_rd_crcb_noc * 100 + reference_crcb_read_bw_factor - 1) / + reference_crcb_read_bw_factor : codec_output->dpb_rd_crcb_noc; + + codec_output->dpb_rdwr_duetooverlap_ddr = (en_llc_enable_rec_wr_uncompleted) ? + 0 : codec_output->dpb_rdwr_duetooverlap_noc; + + codec_output->dpb_wr_ddr = (en_llc_enable_rec_wr_uncompleted) ? + 0 : codec_output->dpb_wr_noc; + + /* accumulation */ + codec_output->noc_bw_rd += codec_output->dpb_rd_y_noc; + codec_output->noc_bw_rd += codec_output->dpb_rd_crcb_noc; + codec_output->noc_bw_rd += codec_output->dpb_rdwr_duetooverlap_noc; + codec_output->noc_bw_wr += codec_output->dpb_wr_noc; + codec_output->ddr_bw_rd += codec_output->dpb_rd_y_ddr; + codec_output->ddr_bw_rd += codec_output->dpb_rd_crcb_ddr; + codec_output->ddr_bw_rd += codec_output->dpb_rdwr_duetooverlap_ddr; + codec_output->ddr_bw_wr += codec_output->dpb_wr_ddr; + + if (codec_input.bitdepth == CODEC_BITDEPTH_8) { + if (codec_input.ipb_yuvrgb == 0) { /* yuv */ + large_bw_calculation_fp = ((frame420_y_bw_linear_8bpp) * 3 / 2); + codec_output->ipb_rd_total_noc = large_bw_calculation_fp; + if (codec_input.linear_ipb == 0) { + codec_output->ipb_rd_total_noc = + (large_bw_calculation_fp * 100 + ipb_compression_factor + - 1) / ipb_compression_factor; + } + } else { /* rgb */ + large_bw_calculation_fp = frame420_y_bw_linear_8bpp; + codec_output->ipb_rd_total_noc = large_bw_calculation_fp; + if (codec_input.linear_ipb == 0) { + if (codec_input.complexity_setting == 0) /* pwc */ + codec_output->ipb_rd_total_noc = + (large_bw_calculation_fp * 100 + + en_original_compression_factor_rgba_pwd_iris3 + - 1) / + en_original_compression_factor_rgba_pwd_iris3; + else + codec_output->ipb_rd_total_noc = + (large_bw_calculation_fp * 100 + + en_original_compression_factor_rgba_avg_iris3 - 
1) / + en_original_compression_factor_rgba_avg_iris3; + } + } + } else { + if (codec_input.linear_ipb == 1) { + large_bw_calculation_fp = (frame420_y_bw_linear_10bpp) * 3 / 2; + codec_output->ipb_rd_total_noc = large_bw_calculation_fp; + } else { + large_bw_calculation_fp = (frame420_y_bw_no_ubwc_tile_10bpp * + 300 / 2 + ipb_compression_factor - 1) / ipb_compression_factor; + codec_output->ipb_rd_total_noc = large_bw_calculation_fp; + } + } + + if (en_rotation_90_270) { + if (codec_input.codec == CODEC_HEVC) { + if (codec_input.bitdepth == CODEC_BITDEPTH_8 && + codec_input.ipb_yuvrgb == 0) + codec_output->ipb_rd_total_noc = + codec_output->ipb_rd_total_noc * 1; + else + codec_output->ipb_rd_total_noc = + codec_output->ipb_rd_total_noc * 3; + } else { + codec_output->ipb_rd_total_noc = codec_output->ipb_rd_total_noc * 2; + } + } + + codec_output->ipb_rd_total_ddr = codec_output->ipb_rd_total_noc; + + /* accumulation */ + codec_output->noc_bw_rd += codec_output->ipb_rd_total_noc; + codec_output->ddr_bw_rd += codec_output->ipb_rd_total_ddr; + + codec_output->bse_tlb_rd_noc = + ((bse_tlb_byte_per_lcu * lcu_per_frame * codec_input.frame_rate + 999) + / 1000 + 999) / 1000; + + if (llc_enabled_bse_tlb) /* TODO should be common declaration */ + codec_output->bse_tlb_rd_ddr = 0; + else + codec_output->bse_tlb_rd_ddr = codec_output->bse_tlb_rd_noc; + + codec_output->bse_tlb_wr_noc = codec_output->bse_tlb_rd_noc; + + if (llc_enabled_bse_tlb) + codec_output->bse_tlb_wr_ddr = 0; + else + codec_output->bse_tlb_wr_ddr = codec_output->bse_tlb_wr_noc; + + /* accumulation */ + codec_output->noc_bw_rd += codec_output->bse_tlb_rd_noc; + codec_output->ddr_bw_rd += codec_output->bse_tlb_rd_ddr; + codec_output->noc_bw_wr += codec_output->bse_tlb_wr_noc; + codec_output->ddr_bw_wr += codec_output->bse_tlb_wr_ddr; + + codec_output->mmu_rd_ddr = 0; + codec_output->mmu_rd_noc = 0; + /* accumulation */ + codec_output->noc_bw_rd += codec_output->mmu_rd_noc; + codec_output->ddr_bw_rd += 
codec_output->mmu_rd_ddr; + + return 0; +} + +int msm_vidc_calculate_bandwidth(struct api_calculation_input codec_input, + struct api_calculation_bw_output *codec_output) +{ + int rc = 0; + + if (codec_input.decoder_or_encoder == CODEC_DECODER) { + rc = calculate_bandwidth_decoder_iris3(codec_input, codec_output); + } else if (codec_input.decoder_or_encoder == CODEC_ENCODER) { + rc = calculate_bandwidth_encoder_iris3(codec_input, codec_output); + } else { + d_vpr_e("%s: invalid codec %u\n", __func__, codec_input.decoder_or_encoder); + return -EINVAL; + } + + return rc; +} From patchwork Fri Jul 28 13:23:43 2023 X-Patchwork-Submitter: Vikash Garodia X-Patchwork-Id: 13331996 From: Vikash Garodia Subject: [PATCH 32/33] iris: variant: iris3: implement logic to compute clock frequency Date: Fri, 28 Jul 2023 18:53:43 +0530 Message-ID: <1690550624-14642-33-git-send-email-quic_vgarodia@quicinc.com> In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com> References: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com> List-ID: X-Mailing-List: linux-media@vger.kernel.org From: Dikshita Agarwal This implements the logic to compute the clock frequency required by the encoder or decoder for a specific use case. It takes as input the various parameters configured by the client for that use case. Signed-off-by: Dikshita Agarwal Signed-off-by: Vikash Garodia --- .../iris/variant/iris3/src/msm_vidc_clock_iris3.c | 627 +++++++++++++++++++++ 1 file changed, 627 insertions(+) create mode 100644 drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_clock_iris3.c diff --git a/drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_clock_iris3.c b/drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_clock_iris3.c new file mode 100644 index 0000000..6665aef --- /dev/null +++ b/drivers/media/platform/qcom/iris/variant/iris3/src/msm_vidc_clock_iris3.c @@ -0,0 +1,627 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ */ + +#include "msm_vidc_debug.h" + +#define ENABLE_FINEBITRATE_SUBUHD60 0 +#include "perf_static_model.h" + +/* + * Chipset Generation Technology: SW/FW overhead profiling + * needs updating with new numbers + */ +static u32 frequency_table_iris3[2][6] = { + /* make lowsvs_D1 invalid */ + {533, 444, 366, 338, 240, 0}, + {800, 666, 549, 507, 360, 0}, +}; + +/* Tensilica cycles */ +#define DECODER_VPP_FW_OVERHEAD_IRIS3 66234 + +/* Tensilica cycles; this is measured in Lahaina 1stage with FW profiling */ +#define DECODER_VPPVSP1STAGE_FW_OVERHEAD_IRIS3 93000 + +#define DECODER_VSP_FW_OVERHEAD_IRIS3 \ + (DECODER_VPPVSP1STAGE_FW_OVERHEAD_IRIS3 - DECODER_VPP_FW_OVERHEAD_IRIS3) + +/* Tensilica cycles; encoder has ARP register */ +#define ENCODER_VPP_FW_OVERHEAD_IRIS3 48405 + +#define ENCODER_VPPVSP1STAGE_FW_OVERHEAD_IRIS3 \ + (ENCODER_VPP_FW_OVERHEAD_IRIS3 + DECODER_VSP_FW_OVERHEAD_IRIS3) + +#define DECODER_SW_OVERHEAD_IRIS3 489583 +#define ENCODER_SW_OVERHEAD_IRIS3 489583 + +/* Video IP Core Technology: pipe floor and pipe penalty */ +static u32 decoder_vpp_target_clk_per_mb_iris3 = 200; + +/* + * These pipe penalty numbers apply only to 4-pipe. + * For 2-pipe and 1-pipe, these numbers need recalibration. + */ +static u32 pipe_penalty_iris3[3][3] = { + /* NON AV1 */ + {1059, 1059, 1059}, + /* AV1 RECOMMENDED TILE 1080P_V2XH1, UHD_V2X2, 8KUHD_V8X2 */ + {1410, 1248, 1226}, + /* AV1 YOUTUBE/NETFLIX TILE 1080P_V4XH2_V4X1, UHD_V8X4_V8X1, 8KUHD_V8X8_V8X1 */ + {2039, 2464, 1191}, +}; + +/* + * Video IP Core Technology: bitrate constraint + * HW limit bitrate table (these values are measured end to end, + * fw/sw impacts are also considered) + * TODO Can we convert to Cycles/MB? This would remove the DIVISION.
+ */ +static u32 bitrate_table_iris3_2stage_fp[4][10] = { + /* h264 cavlc */ + {0, 220, 220, 220, 220, 220, 220, 220, 220, 220}, + /* h264 cabac */ + {0, 140, 150, 160, 175, 190, 190, 190, 190, 190}, + /* h265 */ + {90, 140, 160, 180, 190, 200, 200, 200, 200, 200}, + /* vp9 */ + {90, 90, 90, 90, 90, 90, 90, 90, 90, 90}, +}; + +/* HW limit bitrate table (these values are measured end to end; + * fw/sw impacts are also considered) + */ +static u32 bitrate_table_iris3_1stage_fp[4][10] = { /* 1-stage assume IPPP */ + /* h264 cavlc */ + {0, 220, 220, 220, 220, 220, 220, 220, 220, 220}, + /* h264 cabac */ + {0, 110, 150, 150, 150, 150, 150, 150, 150, 150}, + /* h265 */ + {0, 140, 150, 150, 150, 150, 150, 150, 150, 150}, + /* vp9 */ + {0, 70, 70, 70, 70, 70, 70, 70, 70, 70}, +}; + +static u32 input_bitrate_fp; + +/* 8KUHD60; UHD240; 1080p960 with B */ +static u32 fp_pixel_count_bar0 = 3840 * 2160 * 240; +/* 8KUHD60; UHD240; 1080p960 without B */ +static u32 fp_pixel_count_bar1 = 3840 * 2160 * 240; +/* 1080p720 */ +static u32 fp_pixel_count_bar2 = 3840 * 2160 * 180; +/* UHD120 */ +static u32 fp_pixel_count_bar3 = 3840 * 2160 * 120; +/* UHD90 */ +static u32 fp_pixel_count_bar4 = 3840 * 2160 * 90; +/* UHD60 */ +static u32 fp_pixel_count_bar5 = 3840 * 2160 * 60; +/* UHD30; FHD120; HD240 */ +static u32 fp_pixel_count_bar6 = 3840 * 2160 * 30; +/* FHD60 */ +static u32 fp_pixel_count_bar7 = 1920 * 1080 * 60; +/* FHD30 */ +static u32 fp_pixel_count_bar8 = 1920 * 1080 * 30; + +static u32 codec_encoder_gop_complexity_table_fp[8][3]; +static u32 codec_mbspersession_iris3; + +static u32 calculate_number_mbs_iris3(u32 width, u32 height, u32 lcu_size) +{ + u32 mbs_width = (width % lcu_size) ? + (width / lcu_size + 1) : (width / lcu_size); + + u32 mbs_height = (height % lcu_size) ?
+        (height / lcu_size + 1) : (height / lcu_size);
+
+    return mbs_width * mbs_height * (lcu_size / 16) * (lcu_size / 16);
+}
+
+static int initialize_encoder_complexity_table(void)
+{
+    /* Begin calculating the encoder GOP complexity table and HW floor numbers */
+    static const struct {
+        u32 gop;
+        u32 bb;
+        u32 p;
+    } gop_entries[] = {
+        {CODEC_GOP_I3B4b1P, 70000, 10000},
+        {CODEC_GOP_I1B2b1P, 30000, 10000},
+        {CODEC_GOP_IbP, 10000, 10000},
+        {CODEC_GOP_IPP, 0, 1},
+    };
+    u32 i, bb, p;
+
+    for (i = 0; i < ARRAY_SIZE(gop_entries); i++) {
+        bb = gop_entries[i].bb;
+        p = gop_entries[i].p;
+
+        codec_encoder_gop_complexity_table_fp[gop_entries[i].gop]
+            [CODEC_ENCODER_GOP_Bb_ENTRY] = bb;
+        codec_encoder_gop_complexity_table_fp[gop_entries[i].gop]
+            [CODEC_ENCODER_GOP_P_ENTRY] = p;
+        /* weighted average, rounded up: (Bb * 150 + P * 100) / (Bb + P) */
+        codec_encoder_gop_complexity_table_fp[gop_entries[i].gop]
+            [CODEC_ENCODER_GOP_FACTORY_ENTRY] =
+            DIV_ROUND_UP(bb * 150 + p * 100, bb + p);
+    }
+
+    return 0;
+}
+
+u32 get_bitrate_entry(u32 pixel_count)
+{
+    u32 bitrate_entry = 0;
+
+    if (pixel_count >= fp_pixel_count_bar1)
+        bitrate_entry = 1;
+    else if (pixel_count >= fp_pixel_count_bar2)
+        bitrate_entry = 2;
+    else if (pixel_count >= fp_pixel_count_bar3)
+        bitrate_entry = 3;
+    else if (pixel_count >= fp_pixel_count_bar4)
+        bitrate_entry = 4;
+    else if (pixel_count >= fp_pixel_count_bar5)
+        bitrate_entry = 5;
+    else if (pixel_count >= fp_pixel_count_bar6)
+        bitrate_entry = 6;
+    else if (pixel_count >= fp_pixel_count_bar7)
+        bitrate_entry = 7;
+    else if (pixel_count >= fp_pixel_count_bar8)
+        bitrate_entry = 8;
+    else
+        bitrate_entry = 9;
+
+    return bitrate_entry;
+}
+
+static int calculate_vsp_min_freq(struct api_calculation_input
codec_input,
+    struct api_calculation_freq_output *codec_output)
+{
+    /*
+     * VSP calculation
+     * different methodology from Lahaina
+     */
+    u32 vsp_hw_min_frequency = 0;
+    /* UInt32 decoder_vsp_fw_overhead = 100 + 5; // amplified by 100x */
+    u32 fw_sw_vsp_offset = 1000 + 55; /* amplified by 1000x */
+
+    /*
+     * Ignore fw_sw_vsp_offset, as this is baked into the reference
+     * bitrate tables. As a consequence, remove the x1000 multiplier
+     * as well.
+     */
+    u32 codec = codec_input.codec;
+    /* UInt32 *bitratetable; */
+    u32 pixel_count = codec_input.frame_width *
+        codec_input.frame_height * codec_input.frame_rate;
+
+    u8 bitrate_entry = get_bitrate_entry(pixel_count); /* TODO: extract */
+
+    input_bitrate_fp = ((u32)(codec_input.bitrate_mbps * 100 + 99)) / 100;
+    vsp_hw_min_frequency = frequency_table_iris3[0][1] * input_bitrate_fp * 1000;
+
+    /* 8K UHD 60 fps with B frames */
+    if (pixel_count >= fp_pixel_count_bar0 &&
+        codec_input.hierachical_layer != CODEC_GOP_IPP) {
+        /*
+         * FORMULA: VSPfreq = NOMINAL * (InputBitrate / ReferenceBitrate);
+         * ReferenceBitrate = 0 for:
+         * - 1-stage TURBO, all codecs.
+         * - 2-stage TURBO, H264 & H265.
+         *
+         * 8K UHD 60 fps with B frames:
+         * - bitrate_entry = 0
+         * - Clock = NOMINAL for H264 & 2-stage H265,
+         *   because the bitrate table entry for TURBO is 0.
+         *
+         * TODO: reduce these conditions by removing the zero entries
+         * from the bitrate table.
+         */
+        vsp_hw_min_frequency = frequency_table_iris3[0][1] *
+            input_bitrate_fp * 1000;
+
+        if (codec_input.codec == CODEC_H264 ||
+            codec_input.codec == CODEC_H264_CAVLC ||
+            (codec_input.codec == CODEC_HEVC &&
+             codec_input.vsp_vpp_mode == CODEC_VSPVPP_MODE_1S)) {
+            vsp_hw_min_frequency =
+                DIV_ROUND_UP(frequency_table_iris3[0][1], fw_sw_vsp_offset);
+        } else if ((codec_input.codec == CODEC_HEVC &&
+                codec_input.vsp_vpp_mode == CODEC_VSPVPP_MODE_2S) ||
+                codec_input.codec == CODEC_VP9) {
+            if (codec_input.vsp_vpp_mode == CODEC_VSPVPP_MODE_2S) {
+                vsp_hw_min_frequency =
+                    DIV_ROUND_UP(vsp_hw_min_frequency,
+                        (bitrate_table_iris3_2stage_fp[codec][0] *
+                        fw_sw_vsp_offset));
+            } else {
+                vsp_hw_min_frequency =
+                    DIV_ROUND_UP(vsp_hw_min_frequency,
+                        (bitrate_table_iris3_1stage_fp[codec][0] *
+                        fw_sw_vsp_offset));
+            }
+        }
+    } else {
+        vsp_hw_min_frequency = frequency_table_iris3[0][1] *
+            input_bitrate_fp * 1000;
+
+        if (codec_input.codec == CODEC_H264_CAVLC &&
+            codec_input.entropy_coding_mode == CODEC_ENTROPY_CODING_CAVLC)
+            codec = CODEC_H264_CAVLC;
+        else if (codec_input.codec == CODEC_H264 &&
+            codec_input.entropy_coding_mode == CODEC_ENTROPY_CODING_CABAC)
+            codec = CODEC_H264;
+
+        if (codec_input.vsp_vpp_mode == CODEC_VSPVPP_MODE_2S)
+            vsp_hw_min_frequency =
+                DIV_ROUND_UP(vsp_hw_min_frequency,
+                    (bitrate_table_iris3_2stage_fp[codec][bitrate_entry]) *
+                    fw_sw_vsp_offset);
+        else
+            vsp_hw_min_frequency =
+                DIV_ROUND_UP(vsp_hw_min_frequency,
+                    (bitrate_table_iris3_1stage_fp[codec][bitrate_entry]) *
+                    fw_sw_vsp_offset);
+    }
+
+    codec_output->vsp_min_freq = vsp_hw_min_frequency;
+    return 0;
+}
+
+static u32 calculate_pipe_penalty(struct api_calculation_input codec_input)
+{
+    u32 pipe_penalty_codec = 0;
+
+    /* decoder */
+    if (codec_input.decoder_or_encoder == CODEC_DECODER)
+        pipe_penalty_codec = pipe_penalty_iris3[0][0];
+    else
+        pipe_penalty_codec = 101;
+
+    return pipe_penalty_codec;
+}
+
+static int calculate_vpp_min_freq(struct api_calculation_input codec_input,
+    struct api_calculation_freq_output *codec_output)
+{
+    u32 vpp_hw_min_frequency = 0;
+    u32 fmin = 0;
+    u32 tensilica_min_frequency = 0;
+    u32 decoder_vsp_fw_overhead = 100 + 5; /* amplified by 100x */
+    /* UInt32 fw_sw_vsp_offset = 1000 + 55; amplified by 1000x */
+    /* TODO from calculate_sw_vsp_min_freq */
+    u32 vsp_hw_min_frequency = codec_output->vsp_min_freq;
+    u32 pipe_penalty_codec = 0;
+    u32 fmin_fwoverhead105 = 0;
+    u32 fmin_measured_fwoverhead = 0;
+    u32 lpmode_uhd_cycle_permb = 0;
+    u32 hqmode1080p_cycle_permb = 0;
+    u32 encoder_vpp_target_clk_per_mb = 0;
+
+    codec_mbspersession_iris3 =
+        calculate_number_mbs_iris3(codec_input.frame_width,
+            codec_input.frame_height,
+            codec_input.lcu_size) *
+        codec_input.frame_rate;
+
+    /* Section 2.0: VPP/VSP calculation */
+    if (codec_input.decoder_or_encoder == CODEC_DECODER) { /* decoder */
+        vpp_hw_min_frequency = ((decoder_vpp_target_clk_per_mb_iris3) *
+            (codec_mbspersession_iris3) + codec_input.pipe_num - 1) /
+            (codec_input.pipe_num);
+
+        vpp_hw_min_frequency = (vpp_hw_min_frequency + 99999) / 1000000;
+
+        if (codec_input.pipe_num > 1) {
+            pipe_penalty_codec = calculate_pipe_penalty(codec_input);
+            vpp_hw_min_frequency = (vpp_hw_min_frequency *
+                pipe_penalty_codec + 999) / 1000;
+        }
+
+        if (codec_input.vsp_vpp_mode == CODEC_VSPVPP_MODE_2S) {
+            /* FW overhead, convert FW cycles to impact to one pipe */
+            u64 decoder_vpp_fw_overhead = 0;
+
+            decoder_vpp_fw_overhead =
+                DIV_ROUND_UP((DECODER_VPP_FW_OVERHEAD_IRIS3 * 10 *
+                    codec_input.frame_rate), 15);
+
+            decoder_vpp_fw_overhead =
+                DIV_ROUND_UP((decoder_vpp_fw_overhead * 1000),
+                    (codec_mbspersession_iris3 *
+                    decoder_vpp_target_clk_per_mb_iris3 /
+                    codec_input.pipe_num));
+
+            decoder_vpp_fw_overhead += 1000;
+            decoder_vpp_fw_overhead = (decoder_vpp_fw_overhead < 1050) ?
+                1050 : decoder_vpp_fw_overhead;
+
+            /* VPP HW + FW */
+            if (codec_input.linear_opb == 1 &&
+                codec_input.bitdepth == CODEC_BITDEPTH_10)
+                /* multiply by 1.20 for the 10-bit case */
+                decoder_vpp_fw_overhead = 1200 + decoder_vpp_fw_overhead - 1000;
+
+            vpp_hw_min_frequency = (vpp_hw_min_frequency *
+                decoder_vpp_fw_overhead + 999) / 1000;
+
+            /* VSP HW + FW */
+            vsp_hw_min_frequency =
+                (vsp_hw_min_frequency * decoder_vsp_fw_overhead + 99) / 100;
+
+            fmin = (vpp_hw_min_frequency > vsp_hw_min_frequency) ?
+                vpp_hw_min_frequency : vsp_hw_min_frequency;
+        } else {
+            /* 1-stage needs SW cycles + FW cycles + HW time */
+            if (codec_input.linear_opb == 1 &&
+                codec_input.bitdepth == CODEC_BITDEPTH_10)
+                /* multiply by 1.20 for the 10-bit linear case */
+                vpp_hw_min_frequency =
+                    (vpp_hw_min_frequency * 1200 + 999) / 1000;
+
+            /*
+             * HW time
+             * comment: 02/23/2021 SY: the bitrate is the measured bitrate;
+             * the overlapping effect is already considered in the bitrate,
+             * so no extra needs to be added anymore
+             */
+            fmin = (vpp_hw_min_frequency > vsp_hw_min_frequency) ?
+                vpp_hw_min_frequency : vsp_hw_min_frequency;
+
+            /* FW time */
+            fmin_fwoverhead105 = (fmin * 105 + 99) / 100;
+            fmin_measured_fwoverhead = fmin +
+                (((DECODER_VPPVSP1STAGE_FW_OVERHEAD_IRIS3 *
+                codec_input.frame_rate * 10 + 14) / 15 + 999) / 1000 + 999) /
+                1000;
+
+            fmin = (fmin_fwoverhead105 > fmin_measured_fwoverhead) ?
+                fmin_fwoverhead105 : fmin_measured_fwoverhead;
+        }
+
+        tensilica_min_frequency = (DECODER_SW_OVERHEAD_IRIS3 * 10 + 14) / 15;
+        tensilica_min_frequency = (tensilica_min_frequency + 999) / 1000;
+        tensilica_min_frequency = tensilica_min_frequency * codec_input.frame_rate;
+        tensilica_min_frequency = (tensilica_min_frequency + 999) / 1000;
+        fmin = (tensilica_min_frequency > fmin) ?
tensilica_min_frequency : fmin;
+    } else { /* encoder */
+        /* Decide LP/HQ */
+        u8 hq_mode = 0;
+
+        if (codec_input.pipe_num > 1)
+            if (codec_input.frame_width * codec_input.frame_height <=
+                1920 * 1080)
+                if (codec_input.frame_width * codec_input.frame_height *
+                    codec_input.frame_rate <= 1920 * 1080 * 60)
+                    hq_mode = 1;
+
+        codec_output->enc_hqmode = hq_mode;
+
+        /* Section 1.0 */
+        /* TODO: ONETIME call, should be in another place. */
+        initialize_encoder_complexity_table();
+
+        /* End of encoder GOP complexity table calculation */
+
+        /* VPP base cycles */
+        lpmode_uhd_cycle_permb = (320 *
+            codec_encoder_gop_complexity_table_fp
+            [codec_input.hierachical_layer][CODEC_ENCODER_GOP_FACTORY_ENTRY] +
+            99) / 100;
+
+        if (codec_input.frame_width == 1920 &&
+            (codec_input.frame_height == 1080 ||
+            codec_input.frame_height == 1088) &&
+            codec_input.frame_rate >= 480)
+            lpmode_uhd_cycle_permb = (90 * 4 *
+                codec_encoder_gop_complexity_table_fp
+                [codec_input.hierachical_layer][CODEC_ENCODER_GOP_FACTORY_ENTRY] +
+                99) / 100;
+
+        if (codec_input.frame_width == 1280 &&
+            (codec_input.frame_height == 720 ||
+            codec_input.frame_height == 768) &&
+            codec_input.frame_rate >= 960)
+            lpmode_uhd_cycle_permb = (99 * 4 *
+                codec_encoder_gop_complexity_table_fp
+                [codec_input.hierachical_layer][CODEC_ENCODER_GOP_FACTORY_ENTRY] +
+                99) / 100;
+
+        hqmode1080p_cycle_permb = (675 *
+            codec_encoder_gop_complexity_table_fp
+            [codec_input.hierachical_layer][CODEC_ENCODER_GOP_FACTORY_ENTRY] +
+            99) / 100;
+
+        encoder_vpp_target_clk_per_mb = (hq_mode) ?
+            hqmode1080p_cycle_permb : lpmode_uhd_cycle_permb;
+
+        vpp_hw_min_frequency = ((encoder_vpp_target_clk_per_mb) *
+            (codec_mbspersession_iris3) + codec_input.pipe_num - 1) /
+            (codec_input.pipe_num);
+
+        vpp_hw_min_frequency = (vpp_hw_min_frequency + 99999) / 1000000;
+
+        if (codec_input.pipe_num > 1) {
+            u32 pipe_penalty_codec = 101;
+
+            vpp_hw_min_frequency = (vpp_hw_min_frequency *
+                pipe_penalty_codec + 99) / 100;
+        }
+
+        if (codec_input.vsp_vpp_mode == CODEC_VSPVPP_MODE_2S) {
+            /* FW overhead, convert FW cycles to impact to one pipe */
+            u64 encoder_vpp_fw_overhead = 0;
+
+            encoder_vpp_fw_overhead =
+                DIV_ROUND_UP((ENCODER_VPP_FW_OVERHEAD_IRIS3 * 10 *
+                    codec_input.frame_rate), 15);
+
+            encoder_vpp_fw_overhead =
+                DIV_ROUND_UP((encoder_vpp_fw_overhead * 1000),
+                    (codec_mbspersession_iris3 *
+                    encoder_vpp_target_clk_per_mb /
+                    codec_input.pipe_num));
+
+            encoder_vpp_fw_overhead += 1000;
+
+            encoder_vpp_fw_overhead = (encoder_vpp_fw_overhead < 1050) ?
+                1050 : encoder_vpp_fw_overhead;
+
+            /* VPP HW + FW */
+            vpp_hw_min_frequency = (vpp_hw_min_frequency *
+                encoder_vpp_fw_overhead + 999) / 1000;
+
+            /* TODO: decoder_vsp_fw_overhead? */
+            vsp_hw_min_frequency = (vsp_hw_min_frequency *
+                decoder_vsp_fw_overhead + 99) / 100;
+
+            fmin = (vpp_hw_min_frequency > vsp_hw_min_frequency) ?
+                vpp_hw_min_frequency : vsp_hw_min_frequency;
+        } else {
+            /* HW time */
+            fmin = (vpp_hw_min_frequency > vsp_hw_min_frequency) ?
+                vpp_hw_min_frequency : vsp_hw_min_frequency;
+
+            /* FW time */
+            fmin_fwoverhead105 = (fmin * 105 + 99) / 100;
+            fmin_measured_fwoverhead = fmin +
+                (((DECODER_VPPVSP1STAGE_FW_OVERHEAD_IRIS3 *
+                codec_input.frame_rate * 10 + 14) / 15 + 999) /
+                1000 + 999) / 1000;
+
+            fmin = (fmin_fwoverhead105 > fmin_measured_fwoverhead) ?
+                fmin_fwoverhead105 : fmin_measured_fwoverhead;
+            /* SW time */
+        }
+
+        tensilica_min_frequency = (ENCODER_SW_OVERHEAD_IRIS3 * 10 + 14) / 15;
+        tensilica_min_frequency = (tensilica_min_frequency + 999) / 1000;
+
+        tensilica_min_frequency = tensilica_min_frequency *
+            codec_input.frame_rate;
+
+        tensilica_min_frequency = (tensilica_min_frequency + 999) / 1000;
+
+        fmin = (tensilica_min_frequency > fmin) ?
+            tensilica_min_frequency : fmin;
+    }
+
+    codec_output->vpp_min_freq = vpp_hw_min_frequency;
+    codec_output->vsp_min_freq = vsp_hw_min_frequency;
+    codec_output->tensilica_min_freq = tensilica_min_frequency;
+    codec_output->hw_min_freq = fmin;
+
+    return 0;
+}
+
+int msm_vidc_calculate_frequency(struct api_calculation_input codec_input,
+    struct api_calculation_freq_output *codec_output)
+{
+    int rc = 0;
+
+    rc = calculate_vsp_min_freq(codec_input, codec_output);
+    if (rc)
+        return rc;
+
+    rc = calculate_vpp_min_freq(codec_input, codec_output);
+
+    return rc;
+}

From patchwork Fri Jul 28 13:23:44 2023
X-Patchwork-Submitter: Vikash Garodia
X-Patchwork-Id: 13332002
From: Vikash Garodia
Subject: [PATCH 33/33] iris: enable building of iris video driver
Date: Fri, 28 Jul 2023 18:53:44 +0530
Message-ID: <1690550624-14642-34-git-send-email-quic_vgarodia@quicinc.com>
In-Reply-To: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
References: <1690550624-14642-1-git-send-email-quic_vgarodia@quicinc.com>
X-Mailing-List: linux-media@vger.kernel.org

From: Dikshita Agarwal

This adds the iris driver Makefile and Kconfig, and changes the v4l2
platform/qcom Makefile/Kconfig, in order to enable compilation of the
driver.
Signed-off-by: Dikshita Agarwal
Signed-off-by: Vikash Garodia
---
 drivers/media/platform/qcom/Kconfig       |  1 +
 drivers/media/platform/qcom/Makefile      |  1 +
 drivers/media/platform/qcom/iris/Kconfig  | 15 ++++++++++
 drivers/media/platform/qcom/iris/Makefile | 46 +++++++++++++++++++++++++++++++
 4 files changed, 63 insertions(+)
 create mode 100644 drivers/media/platform/qcom/iris/Kconfig
 create mode 100644 drivers/media/platform/qcom/iris/Makefile

diff --git a/drivers/media/platform/qcom/Kconfig b/drivers/media/platform/qcom/Kconfig
index cc5799b..b86bebd 100644
--- a/drivers/media/platform/qcom/Kconfig
+++ b/drivers/media/platform/qcom/Kconfig
@@ -4,3 +4,4 @@ comment "Qualcomm media platform drivers"
 
 source "drivers/media/platform/qcom/camss/Kconfig"
 source "drivers/media/platform/qcom/venus/Kconfig"
+source "drivers/media/platform/qcom/iris/Kconfig"

diff --git a/drivers/media/platform/qcom/Makefile b/drivers/media/platform/qcom/Makefile
index 4f055c3..83eea29 100644
--- a/drivers/media/platform/qcom/Makefile
+++ b/drivers/media/platform/qcom/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-y += camss/
 obj-y += venus/
+obj-y += iris/

diff --git a/drivers/media/platform/qcom/iris/Kconfig b/drivers/media/platform/qcom/iris/Kconfig
new file mode 100644
index 0000000..d434c31
--- /dev/null
+++ b/drivers/media/platform/qcom/iris/Kconfig
@@ -0,0 +1,15 @@
+config VIDEO_QCOM_IRIS
+	tristate "Qualcomm Iris V4L2 encoder/decoder driver"
+	depends on V4L_MEM2MEM_DRIVERS
+	depends on VIDEO_DEV && QCOM_SMEM
+	depends on (ARCH_QCOM && IOMMU_DMA) || COMPILE_TEST
+	select QCOM_MDT_LOADER if ARCH_QCOM
+	select QCOM_SCM
+	select VIDEOBUF2_DMA_CONTIG
+	select V4L2_MEM2MEM_DEV
+	select DMABUF_HEAPS
+	help
+	  This is a V4L2 driver for the Qualcomm Iris video accelerator
+	  hardware. It accelerates encoding and decoding operations
+	  on various Qualcomm SoCs.
+	  To compile this driver as a module, choose M here.
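[Aside for readers following the frequency math in patch 01 of this series:
the bitrate-entry lookup there walks the fp_pixel_count_bar1..bar8 pixel-rate
thresholds in descending order. The standalone Python sketch below mirrors
that threshold logic for illustration only; it is not part of the patch, and
the BARS values are copied from the driver's fp_pixel_count_bar* globals.]

```python
# Pixel-rate thresholds, mirroring fp_pixel_count_bar1..bar8 in patch 01.
BARS = [
    3840 * 2160 * 240,  # bar1: 8KUHD60 / UHD240 / 1080p960 without B
    3840 * 2160 * 180,  # bar2: 1080p at 720 fps
    3840 * 2160 * 120,  # bar3: UHD120
    3840 * 2160 * 90,   # bar4: UHD90
    3840 * 2160 * 60,   # bar5: UHD60
    3840 * 2160 * 30,   # bar6: UHD30 / FHD120 / HD240
    1920 * 1080 * 60,   # bar7: FHD60
    1920 * 1080 * 30,   # bar8: FHD30
]

def get_bitrate_entry(pixel_count):
    # First threshold met (largest first) selects the table column;
    # anything below FHD30 falls through to entry 9.
    for entry, bar in enumerate(BARS, start=1):
        if pixel_count >= bar:
            return entry
    return 9
```

For example, a UHD60 session (3840 * 2160 * 60 pixels/s) selects entry 5,
indexing the sixth column of the bitrate_table_iris3_*stage_fp rows.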
diff --git a/drivers/media/platform/qcom/iris/Makefile b/drivers/media/platform/qcom/iris/Makefile
new file mode 100644
index 0000000..e681c4f
--- /dev/null
+++ b/drivers/media/platform/qcom/iris/Makefile
@@ -0,0 +1,46 @@
+KBUILD_OPTIONS+= VIDEO_ROOT=$(KERNEL_SRC)/$(M)
+
+VIDEO_COMPILE_TIME = $(shell date)
+VIDEO_COMPILE_BY = $(shell whoami | sed 's/\\/\\\\/')
+VIDEO_COMPILE_HOST = $(shell uname -n)
+VIDEO_GEN_PATH = $(srctree)/$(src)/vidc/inc/video_generated_h
+
+$(shell echo '#define VIDEO_COMPILE_TIME "$(VIDEO_COMPILE_TIME)"' > $(VIDEO_GEN_PATH))
+$(shell echo '#define VIDEO_COMPILE_BY "$(VIDEO_COMPILE_BY)"' >> $(VIDEO_GEN_PATH))
+$(shell echo '#define VIDEO_COMPILE_HOST "$(VIDEO_COMPILE_HOST)"' >> $(VIDEO_GEN_PATH))
+
+iris-objs += vidc/src/msm_vidc_v4l2.o \
+	vidc/src/msm_vidc_vb2.o \
+	vidc/src/msm_vidc.o \
+	vidc/src/msm_vdec.o \
+	vidc/src/msm_venc.o \
+	vidc/src/msm_vidc_driver.o \
+	vidc/src/msm_vidc_control.o \
+	vidc/src/msm_vidc_buffer.o \
+	vidc/src/msm_vidc_power.o \
+	vidc/src/msm_vidc_probe.o \
+	vidc/src/resources.o \
+	vidc/src/firmware.o \
+	vidc/src/msm_vidc_debug.o \
+	vidc/src/msm_vidc_memory.o \
+	vidc/src/venus_hfi.o \
+	vidc/src/venus_hfi_queue.o \
+	vidc/src/hfi_packet.o \
+	vidc/src/venus_hfi_response.o \
+	vidc/src/msm_vidc_state.o \
+	platform/common/src/msm_vidc_platform.o \
+	platform/sm8550/src/msm_vidc_sm8550.o \
+	variant/common/src/msm_vidc_variant.o \
+	variant/iris3/src/msm_vidc_buffer_iris3.o \
+	variant/iris3/src/msm_vidc_iris3.o \
+	variant/iris3/src/msm_vidc_power_iris3.o \
+	variant/iris3/src/msm_vidc_bus_iris3.o \
+	variant/iris3/src/msm_vidc_clock_iris3.o
+
+obj-$(CONFIG_VIDEO_QCOM_IRIS) += iris.o
+
+ccflags-y += -I$(srctree)/$(src)/vidc/inc
+ccflags-y += -I$(srctree)/$(src)/platform/common/inc
+ccflags-y += -I$(srctree)/$(src)/platform/sm8550/inc
+ccflags-y += -I$(srctree)/$(src)/variant/common/inc
+ccflags-y += -I$(srctree)/$(src)/variant/iris3/inc
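[A second aside on patch 01: the frequency code there leans on two integer
idioms throughout, the kernel's DIV_ROUND_UP ceiling division (also spelled
inline as (x * scale + 999) / 1000 and similar) and the LCU-aligned
macroblock count in calculate_number_mbs_iris3(). A minimal standalone
Python sketch of both, for illustration only, not driver code:]

```python
def div_round_up(n, d):
    # Equivalent of the kernel's DIV_ROUND_UP(): ceiling integer division.
    return (n + d - 1) // d

def number_mbs(width, height, lcu_size):
    # Mirrors calculate_number_mbs_iris3(): round the frame dimensions up
    # to whole LCUs, then scale to 16x16 macroblock units.
    mbs_width = div_round_up(width, lcu_size)
    mbs_height = div_round_up(height, lcu_size)
    return mbs_width * mbs_height * (lcu_size // 16) * (lcu_size // 16)
```

Note that a 1080p frame yields 8160 macroblocks whether counted with a
16-pixel or a 32-pixel LCU, since the LCU count is scaled back to 16x16
units; the rounding only differs when a dimension is not LCU-aligned.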