From patchwork Mon Aug 10 22:26:45 2020
X-Patchwork-Submitter: Jordan Crouse
X-Patchwork-Id: 11708275
From: Jordan Crouse <jcrouse@codeaurora.org>
To: linux-arm-msm@vger.kernel.org
Cc: Will Deacon, Robin Murphy, Bjorn Andersson, iommu@lists.linux-foundation.org, freedreno@lists.freedesktop.org, Sai Prakash Ranjan, Greg Kroah-Hartman, Joerg Roedel, Jon Hunter, Krishna Reddy, Nicolin Chen, Thierry Reding, Vivek Gautam, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v12 01/13] iommu/arm-smmu: Pass io-pgtable config to implementation specific function
Date: Mon, 10 Aug 2020 16:26:45 -0600
Message-Id: <20200810222657.1841322-2-jcrouse@codeaurora.org>
In-Reply-To:
<20200810222657.1841322-1-jcrouse@codeaurora.org> References: <20200810222657.1841322-1-jcrouse@codeaurora.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Construct the io-pgtable config before calling the implementation specific init_context function and pass it so the implementation specific function can get a chance to change it before the io-pgtable is created. Signed-off-by: Jordan Crouse --- drivers/iommu/arm/arm-smmu/arm-smmu-impl.c | 3 ++- drivers/iommu/arm/arm-smmu/arm-smmu.c | 11 ++++++----- drivers/iommu/arm/arm-smmu/arm-smmu.h | 3 ++- 3 files changed, 10 insertions(+), 7 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c b/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c index f4ff124a1967..a9861dcd0884 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c @@ -68,7 +68,8 @@ static int cavium_cfg_probe(struct arm_smmu_device *smmu) return 0; } -static int cavium_init_context(struct arm_smmu_domain *smmu_domain) +static int cavium_init_context(struct arm_smmu_domain *smmu_domain, + struct io_pgtable_cfg *pgtbl_cfg) { struct cavium_smmu *cs = container_of(smmu_domain->smmu, struct cavium_smmu, smmu); diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c index 09c42af9f31e..37d8d49299b4 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c @@ -795,11 +795,6 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain, cfg->asid = cfg->cbndx; smmu_domain->smmu = smmu; - if (smmu->impl && smmu->impl->init_context) { - ret = smmu->impl->init_context(smmu_domain); - if (ret) - goto out_unlock; - } pgtbl_cfg = (struct io_pgtable_cfg) { .pgsize_bitmap = smmu->pgsize_bitmap, @@ -810,6 +805,12 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain, .iommu_dev = smmu->dev, }; + if (smmu->impl && smmu->impl->init_context) { + ret = smmu->impl->init_context(smmu_domain, &pgtbl_cfg); + if (ret) + goto out_clear_smmu; + } + if (smmu_domain->non_strict) pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT; diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.h b/drivers/iommu/arm/arm-smmu/arm-smmu.h index d890a4a968e8..83294516ac08 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.h +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.h @@ -386,7 +386,8 @@ struct arm_smmu_impl { u64 val); int (*cfg_probe)(struct arm_smmu_device *smmu); int (*reset)(struct arm_smmu_device *smmu); - int (*init_context)(struct arm_smmu_domain *smmu_domain); + int (*init_context)(struct arm_smmu_domain *smmu_domain, + struct io_pgtable_cfg *cfg); void (*tlb_sync)(struct arm_smmu_device *smmu, int page, int sync, int status); int (*def_domain_type)(struct device *dev); From patchwork Mon Aug 10 22:26:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jordan Crouse X-Patchwork-Id: 11708311 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8AA9D174A for ; Mon, 10 Aug 2020 22:28:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 71DEA20782 for ; Mon, 10 Aug 2020 22:28:17 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=mg.codeaurora.org 
From: Jordan Crouse <jcrouse@codeaurora.org>
To: linux-arm-msm@vger.kernel.org
Cc: Will Deacon, Robin Murphy, Bjorn Andersson, iommu@lists.linux-foundation.org, freedreno@lists.freedesktop.org, Sai Prakash Ranjan, Greg Kroah-Hartman, Joerg Roedel, Jon Hunter, Krishna Reddy, Thierry Reding, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v12 02/13] iommu/arm-smmu: Add support for split pagetables
Date: Mon, 10 Aug 2020 16:26:46 -0600
Message-Id: <20200810222657.1841322-3-jcrouse@codeaurora.org>
In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org>

Enable TTBR1 for a context bank if IO_PGTABLE_QUIRK_ARM_TTBR1 is selected by the io-pgtable configuration.
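As a rough standalone illustration (not part of the patch, ordinary userspace C rather than kernel code): with an input address size of ias bits, the TTBR1 quirk splits the IOVA space into a lower region translated by TTBR0 (upper address bits clear) and an upper region translated by TTBR1 (upper address bits set), which is exactly how the domain aperture is computed in the hunk below.

/*
 * Minimal sketch of the TTBR0/TTBR1 address split; "ias" is an example
 * input address size, not a value taken from any particular SMMU.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int ias = 48;                    /* example input address size in bits */
	uint64_t ttbr0_start = 0;
	uint64_t ttbr0_end   = (1ULL << ias) - 1; /* upper bits clear -> TTBR0 */
	uint64_t ttbr1_start = ~0ULL << ias;      /* upper bits set   -> TTBR1 */
	uint64_t ttbr1_end   = ~0ULL;

	printf("TTBR0 region: [0x%016llx - 0x%016llx]\n",
	       (unsigned long long)ttbr0_start, (unsigned long long)ttbr0_end);
	printf("TTBR1 region: [0x%016llx - 0x%016llx]\n",
	       (unsigned long long)ttbr1_start, (unsigned long long)ttbr1_end);
	return 0;
}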
Signed-off-by: Jordan Crouse --- drivers/iommu/arm/arm-smmu/arm-smmu.c | 21 ++++++++++++++++----- drivers/iommu/arm/arm-smmu/arm-smmu.h | 25 +++++++++++++++++++------ 2 files changed, 35 insertions(+), 11 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c index 37d8d49299b4..976d43a7f2ff 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c @@ -552,11 +552,15 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain, cb->ttbr[0] = pgtbl_cfg->arm_v7s_cfg.ttbr; cb->ttbr[1] = 0; } else { - cb->ttbr[0] = pgtbl_cfg->arm_lpae_s1_cfg.ttbr; - cb->ttbr[0] |= FIELD_PREP(ARM_SMMU_TTBRn_ASID, - cfg->asid); + cb->ttbr[0] = FIELD_PREP(ARM_SMMU_TTBRn_ASID, + cfg->asid); cb->ttbr[1] = FIELD_PREP(ARM_SMMU_TTBRn_ASID, - cfg->asid); + cfg->asid); + + if (pgtbl_cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1) + cb->ttbr[1] |= pgtbl_cfg->arm_lpae_s1_cfg.ttbr; + else + cb->ttbr[0] |= pgtbl_cfg->arm_lpae_s1_cfg.ttbr; } } else { cb->ttbr[0] = pgtbl_cfg->arm_lpae_s2_cfg.vttbr; @@ -822,7 +826,14 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain, /* Update the domain's page sizes to reflect the page table format */ domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap; - domain->geometry.aperture_end = (1UL << ias) - 1; + + if (pgtbl_cfg.quirks & IO_PGTABLE_QUIRK_ARM_TTBR1) { + domain->geometry.aperture_start = ~0UL << ias; + domain->geometry.aperture_end = ~0UL; + } else { + domain->geometry.aperture_end = (1UL << ias) - 1; + } + domain->geometry.force_aperture = true; /* Initialise the context bank with our page table cfg */ diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.h b/drivers/iommu/arm/arm-smmu/arm-smmu.h index 83294516ac08..f3e456893f28 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.h +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.h @@ -169,10 +169,12 @@ enum arm_smmu_cbar_type { #define ARM_SMMU_CB_TCR 0x30 #define ARM_SMMU_TCR_EAE BIT(31) #define ARM_SMMU_TCR_EPD1 BIT(23) +#define ARM_SMMU_TCR_A1 BIT(22) #define ARM_SMMU_TCR_TG0 GENMASK(15, 14) #define ARM_SMMU_TCR_SH0 GENMASK(13, 12) #define ARM_SMMU_TCR_ORGN0 GENMASK(11, 10) #define ARM_SMMU_TCR_IRGN0 GENMASK(9, 8) +#define ARM_SMMU_TCR_EPD0 BIT(7) #define ARM_SMMU_TCR_T0SZ GENMASK(5, 0) #define ARM_SMMU_VTCR_RES1 BIT(31) @@ -350,12 +352,23 @@ struct arm_smmu_domain { static inline u32 arm_smmu_lpae_tcr(struct io_pgtable_cfg *cfg) { - return ARM_SMMU_TCR_EPD1 | - FIELD_PREP(ARM_SMMU_TCR_TG0, cfg->arm_lpae_s1_cfg.tcr.tg) | - FIELD_PREP(ARM_SMMU_TCR_SH0, cfg->arm_lpae_s1_cfg.tcr.sh) | - FIELD_PREP(ARM_SMMU_TCR_ORGN0, cfg->arm_lpae_s1_cfg.tcr.orgn) | - FIELD_PREP(ARM_SMMU_TCR_IRGN0, cfg->arm_lpae_s1_cfg.tcr.irgn) | - FIELD_PREP(ARM_SMMU_TCR_T0SZ, cfg->arm_lpae_s1_cfg.tcr.tsz); + u32 tcr = FIELD_PREP(ARM_SMMU_TCR_TG0, cfg->arm_lpae_s1_cfg.tcr.tg) | + FIELD_PREP(ARM_SMMU_TCR_SH0, cfg->arm_lpae_s1_cfg.tcr.sh) | + FIELD_PREP(ARM_SMMU_TCR_ORGN0, cfg->arm_lpae_s1_cfg.tcr.orgn) | + FIELD_PREP(ARM_SMMU_TCR_IRGN0, cfg->arm_lpae_s1_cfg.tcr.irgn) | + FIELD_PREP(ARM_SMMU_TCR_T0SZ, cfg->arm_lpae_s1_cfg.tcr.tsz); + + /* + * When TTBR1 is selected shift the TCR fields by 16 bits and disable + * translation in TTBR0 + */ + if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1) { + tcr = (tcr << 16) & ~ARM_SMMU_TCR_A1; + tcr |= ARM_SMMU_TCR_EPD0; + } else + tcr |= ARM_SMMU_TCR_EPD1; + + return tcr; } static inline u32 arm_smmu_lpae_tcr2(struct io_pgtable_cfg *cfg) From patchwork Mon Aug 10 22:26:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
X-Patchwork-Submitter: Jordan Crouse
X-Patchwork-Id: 11708281
From: Jordan Crouse <jcrouse@codeaurora.org>
To: linux-arm-msm@vger.kernel.org
Cc: Will Deacon, Robin Murphy, Bjorn Andersson, iommu@lists.linux-foundation.org, freedreno@lists.freedesktop.org, Sai Prakash Ranjan, Greg Kroah-Hartman, Hanna Hawa, Joerg Roedel, Jonathan Marek, Krishna Reddy, Nicolin Chen, Stephen Boyd, Thierry Reding, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v12 03/13] iommu/arm-smmu: Prepare for the adreno-smmu implementation
Date: Mon, 10 Aug 2020 16:26:47 -0600
Message-Id: <20200810222657.1841322-4-jcrouse@codeaurora.org>
In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org>
Do a bit of prep work to add the upcoming adreno-smmu implementation. Add a hook to allow the implementation to choose which context banks to allocate. Then, add domain_attr_get / domain_attr_set hooks to allow for implementation specific domain attributes. Move some of the common structs to arm-smmu.h in anticipation of them being used by the implementations, and update some of the existing hooks to pass more information that the implementation will need. These modifications will be used by the upcoming Adreno SMMU implementation to identify the GPU device and properly configure it for pagetable switching.

Signed-off-by: Jordan Crouse
---
drivers/iommu/arm/arm-smmu/arm-smmu-impl.c | 2 +- drivers/iommu/arm/arm-smmu/arm-smmu.c | 83 ++++++++-------------- drivers/iommu/arm/arm-smmu/arm-smmu.h | 56 ++++++++++++++- 3 files changed, 87 insertions(+), 54 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c b/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c index a9861dcd0884..88f17cc33023 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c @@ -69,7 +69,7 @@ static int cavium_cfg_probe(struct arm_smmu_device *smmu) } static int cavium_init_context(struct arm_smmu_domain *smmu_domain, - struct io_pgtable_cfg *pgtbl_cfg) + struct io_pgtable_cfg *pgtbl_cfg, struct device *dev) { struct cavium_smmu *cs = container_of(smmu_domain->smmu, struct cavium_smmu, smmu); diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c index 976d43a7f2ff..e0a3e0da885b 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c @@ -65,41 +65,10 @@ module_param(disable_bypass, bool, S_IRUGO); MODULE_PARM_DESC(disable_bypass, "Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU."); -struct arm_smmu_s2cr { - struct iommu_group *group; - int count; - enum arm_smmu_s2cr_type type; - enum arm_smmu_s2cr_privcfg privcfg; - u8 cbndx; -}; - #define s2cr_init_val (struct arm_smmu_s2cr){ \ .type = disable_bypass ? S2CR_TYPE_FAULT : S2CR_TYPE_BYPASS, \ } -struct arm_smmu_smr { - u16 mask; - u16 id; - bool valid; -}; - -struct arm_smmu_cb { - u64 ttbr[2]; - u32 tcr[2]; - u32 mair[2]; - struct arm_smmu_cfg *cfg; -}; - -struct arm_smmu_master_cfg { - struct arm_smmu_device *smmu; - s16 smendx[]; -}; -#define INVALID_SMENDX -1 -#define cfg_smendx(cfg, fw, i) \ - (i >= fw->num_ids ?
INVALID_SMENDX : cfg->smendx[i]) -#define for_each_cfg_sme(cfg, fw, i, idx) \ - for (i = 0; idx = cfg_smendx(cfg, fw, i), i < fw->num_ids; ++i) - static bool using_legacy_binding, using_generic_binding; static inline int arm_smmu_rpm_get(struct arm_smmu_device *smmu) @@ -234,19 +203,6 @@ static int arm_smmu_register_legacy_master(struct device *dev, } #endif /* CONFIG_ARM_SMMU_LEGACY_DT_BINDINGS */ -static int __arm_smmu_alloc_bitmap(unsigned long *map, int start, int end) -{ - int idx; - - do { - idx = find_next_zero_bit(map, end, start); - if (idx == end) - return -ENOSPC; - } while (test_and_set_bit(idx, map)); - - return idx; -} - static void __arm_smmu_free_bitmap(unsigned long *map, int idx) { clear_bit(idx, map); @@ -578,7 +534,7 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain, } } -static void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx) +void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx) { u32 reg; bool stage1; @@ -665,7 +621,8 @@ static void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx) } static int arm_smmu_init_domain_context(struct iommu_domain *domain, - struct arm_smmu_device *smmu) + struct arm_smmu_device *smmu, + struct device *dev) { int irq, start, ret = 0; unsigned long ias, oas; @@ -780,10 +737,20 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain, ret = -EINVAL; goto out_unlock; } - ret = __arm_smmu_alloc_bitmap(smmu->context_map, start, + + smmu_domain->smmu = smmu; + + if (smmu->impl && smmu->impl->alloc_context_bank) + ret = smmu->impl->alloc_context_bank(smmu_domain, dev, + start, smmu->num_context_banks); + else + ret = __arm_smmu_alloc_bitmap(smmu->context_map, start, smmu->num_context_banks); - if (ret < 0) + + if (ret < 0) { + smmu_domain->smmu = NULL; goto out_unlock; + } cfg->cbndx = ret; if (smmu->version < ARM_SMMU_V2) { @@ -798,8 +765,6 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain, else cfg->asid = cfg->cbndx; - smmu_domain->smmu = smmu; - pgtbl_cfg = (struct io_pgtable_cfg) { .pgsize_bitmap = smmu->pgsize_bitmap, .ias = ias, @@ -810,7 +775,7 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain, }; if (smmu->impl && smmu->impl->init_context) { - ret = smmu->impl->init_context(smmu_domain, &pgtbl_cfg); + ret = smmu->impl->init_context(smmu_domain, &pgtbl_cfg, dev); if (ret) goto out_clear_smmu; } @@ -1194,7 +1159,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) return ret; /* Ensure that the domain is finalised */ - ret = arm_smmu_init_domain_context(domain, smmu); + ret = arm_smmu_init_domain_context(domain, smmu, dev); if (ret < 0) goto rpm_put; @@ -1534,6 +1499,13 @@ static int arm_smmu_domain_get_attr(struct iommu_domain *domain, *(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED); return 0; default: + if (smmu_domain->smmu) { + const struct arm_smmu_impl *impl = smmu_domain->smmu->impl; + + if (impl && impl->domain_get_attr) + return impl->domain_get_attr(smmu_domain, attr, data); + } + return -ENODEV; } break; @@ -1575,6 +1547,13 @@ static int arm_smmu_domain_set_attr(struct iommu_domain *domain, break; default: ret = -ENODEV; + + if (smmu_domain->smmu) { + const struct arm_smmu_impl *impl = smmu_domain->smmu->impl; + + if (impl && impl->domain_get_attr) + ret = impl->domain_set_attr(smmu_domain, attr, data); + } } break; case IOMMU_DOMAIN_DMA: diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.h b/drivers/iommu/arm/arm-smmu/arm-smmu.h index 
f3e456893f28..870f0fd060a5 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.h +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.h @@ -256,6 +256,21 @@ enum arm_smmu_implementation { QCOM_SMMUV2, }; +struct arm_smmu_s2cr { + struct iommu_group *group; + int count; + enum arm_smmu_s2cr_type type; + enum arm_smmu_s2cr_privcfg privcfg; + u8 cbndx; +}; + +struct arm_smmu_smr { + u16 mask; + u16 id; + bool valid; + bool pinned; +}; + struct arm_smmu_device { struct device *dev; @@ -331,6 +346,13 @@ struct arm_smmu_cfg { }; #define ARM_SMMU_INVALID_IRPTNDX 0xff +struct arm_smmu_cb { + u64 ttbr[2]; + u32 tcr[2]; + u32 mair[2]; + struct arm_smmu_cfg *cfg; +}; + enum arm_smmu_domain_stage { ARM_SMMU_DOMAIN_S1 = 0, ARM_SMMU_DOMAIN_S2, @@ -350,6 +372,11 @@ struct arm_smmu_domain { struct iommu_domain domain; }; +struct arm_smmu_master_cfg { + struct arm_smmu_device *smmu; + s16 smendx[]; +}; + static inline u32 arm_smmu_lpae_tcr(struct io_pgtable_cfg *cfg) { u32 tcr = FIELD_PREP(ARM_SMMU_TCR_TG0, cfg->arm_lpae_s1_cfg.tcr.tg) | @@ -400,14 +427,39 @@ struct arm_smmu_impl { int (*cfg_probe)(struct arm_smmu_device *smmu); int (*reset)(struct arm_smmu_device *smmu); int (*init_context)(struct arm_smmu_domain *smmu_domain, - struct io_pgtable_cfg *cfg); + struct io_pgtable_cfg *cfg, struct device *dev); void (*tlb_sync)(struct arm_smmu_device *smmu, int page, int sync, int status); int (*def_domain_type)(struct device *dev); irqreturn_t (*global_fault)(int irq, void *dev); irqreturn_t (*context_fault)(int irq, void *dev); + int (*alloc_context_bank)(struct arm_smmu_domain *smmu_domain, + struct device *dev, int start, int max); + int (*domain_get_attr)(struct arm_smmu_domain *smmu_domain, + enum iommu_attr attr, void *data); + int (*domain_set_attr)(struct arm_smmu_domain *smmu_domain, + enum iommu_attr attr, void *data); }; +#define INVALID_SMENDX -1 +#define cfg_smendx(cfg, fw, i) \ + (i >= fw->num_ids ? 
INVALID_SMENDX : cfg->smendx[i]) +#define for_each_cfg_sme(cfg, fw, i, idx) \ + for (i = 0; idx = cfg_smendx(cfg, fw, i), i < fw->num_ids; ++i) + +static inline int __arm_smmu_alloc_bitmap(unsigned long *map, int start, int end) +{ + int idx; + + do { + idx = find_next_zero_bit(map, end, start); + if (idx == end) + return -ENOSPC; + } while (test_and_set_bit(idx, map)); + + return idx; +} + static inline void __iomem *arm_smmu_page(struct arm_smmu_device *smmu, int n) { return smmu->base + (n << smmu->pgshift); @@ -471,7 +523,9 @@ static inline void arm_smmu_writeq(struct arm_smmu_device *smmu, int page, struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu); struct arm_smmu_device *nvidia_smmu_impl_init(struct arm_smmu_device *smmu); struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu); +struct arm_smmu_device *qcom_adreno_smmu_impl_init(struct arm_smmu_device *smmu); +void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx); int arm_mmu500_reset(struct arm_smmu_device *smmu); #endif /* _ARM_SMMU_H */ From patchwork Mon Aug 10 22:26:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jordan Crouse X-Patchwork-Id: 11708269 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 21EE7618 for ; Mon, 10 Aug 2020 22:27:36 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 08B9B2076C for ; Mon, 10 Aug 2020 22:27:36 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=mg.codeaurora.org header.i=@mg.codeaurora.org header.b="eNUOfGpW" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727035AbgHJW10 (ORCPT ); Mon, 10 Aug 2020 18:27:26 -0400 Received: from mail29.static.mailgun.info ([104.130.122.29]:14937 "EHLO mail29.static.mailgun.info" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727019AbgHJW1Z (ORCPT ); Mon, 10 Aug 2020 18:27:25 -0400 DKIM-Signature: a=rsa-sha256; v=1; c=relaxed/relaxed; d=mg.codeaurora.org; q=dns/txt; s=smtp; t=1597098444; h=Content-Transfer-Encoding: MIME-Version: References: In-Reply-To: Message-Id: Date: Subject: Cc: To: From: Sender; bh=/fIbyeruvI6OUypkSkElgAVRI3+iioCzbrZcNV4VFzA=; b=eNUOfGpWWva42oHRB7ZSEYs2qSAlWwOJm8VtgS1CIcDiiAJie7c+IIjlDCtzZviEMcpxnnUz q1iAQnnU6uWszY7xGB+eVR6XLiWi5aaVdeFfjaHTpPtkGw1W9V4I9xijPWV/nEVpp5jzaGzF idFRbKVsFwO1MxmkT08nE8wO4pg= X-Mailgun-Sending-Ip: 104.130.122.29 X-Mailgun-Sid: WyI1MzIzYiIsICJsaW51eC1hcm0tbXNtQHZnZXIua2VybmVsLm9yZyIsICJiZTllNGEiXQ== Received: from smtp.codeaurora.org (ec2-35-166-182-171.us-west-2.compute.amazonaws.com [35.166.182.171]) by smtp-out-n02.prod.us-west-2.postgun.com with SMTP id 5f31c9c44c787f237b10076d (version=TLS1.2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256); Mon, 10 Aug 2020 22:27:16 GMT Received: by smtp.codeaurora.org (Postfix, from userid 1001) id C6F01C43391; Mon, 10 Aug 2020 22:27:16 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-caf-mail-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-1.0 required=2.0 tests=ALL_TRUSTED,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.0 Received: from jordan-laptop.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No 
From: Jordan Crouse <jcrouse@codeaurora.org>
To: linux-arm-msm@vger.kernel.org
Cc: Will Deacon, Robin Murphy, Bjorn Andersson, iommu@lists.linux-foundation.org, freedreno@lists.freedesktop.org, Sai Prakash Ranjan, Joerg Roedel, linux-kernel@vger.kernel.org
Subject: [PATCH v12 04/13] iommu: Add a domain attribute to get/set a pagetable configuration
Date: Mon, 10 Aug 2020 16:26:48 -0600
Message-Id: <20200810222657.1841322-5-jcrouse@codeaurora.org>
In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org>

Add domain attribute DOMAIN_ATTR_PGTABLE_CFG. This will be used by arm-smmu to share the current pagetable configuration with the leaf driver and to allow the leaf driver to set up a new pagetable configuration under certain circumstances.

Signed-off-by: Jordan Crouse
---
include/linux/iommu.h | 1 + 1 file changed, 1 insertion(+) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index fee209efb756..995ab8c47ef2 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -118,6 +118,7 @@ enum iommu_attr { DOMAIN_ATTR_FSL_PAMUV1, DOMAIN_ATTR_NESTING, /* two stages of translation */ DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE, + DOMAIN_ATTR_PGTABLE_CFG, DOMAIN_ATTR_MAX, };
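As a loose standalone sketch of the pattern this attribute relies on (the type and function names below are illustrative only, not the kernel API): the consumer hands in storage for a configuration structure through a void pointer, and the provider copies its current pagetable configuration out, or adopts a new one handed back the same way.

/*
 * Standalone illustration of the get/set-attribute pattern behind
 * DOMAIN_ATTR_PGTABLE_CFG; plain userspace C with made-up names.
 */
#include <stdio.h>
#include <string.h>

enum domain_attr { ATTR_PGTABLE_CFG };   /* stand-in for DOMAIN_ATTR_PGTABLE_CFG */

struct pgtable_cfg {                     /* stand-in for struct io_pgtable_cfg */
	unsigned long quirks;
	unsigned long long ttbr;
};

struct domain {
	struct pgtable_cfg cfg;
};

/* provider side: copy the current config into caller-supplied storage */
static int domain_get_attr(struct domain *d, enum domain_attr attr, void *data)
{
	if (attr != ATTR_PGTABLE_CFG)
		return -1;
	memcpy(data, &d->cfg, sizeof(d->cfg));
	return 0;
}

/* provider side: adopt a new config handed in by the leaf driver */
static int domain_set_attr(struct domain *d, enum domain_attr attr, void *data)
{
	if (attr != ATTR_PGTABLE_CFG)
		return -1;
	memcpy(&d->cfg, data, sizeof(d->cfg));
	return 0;
}

int main(void)
{
	struct domain d = { .cfg = { .quirks = 0, .ttbr = 0x1000 } };
	struct pgtable_cfg mine;

	domain_get_attr(&d, ATTR_PGTABLE_CFG, &mine);   /* read the current config */
	mine.ttbr = 0x2000;                             /* build a new one */
	domain_set_attr(&d, ATTR_PGTABLE_CFG, &mine);   /* hand it back */
	printf("ttbr is now %#llx\n", d.cfg.ttbr);
	return 0;
}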
From patchwork Mon Aug 10 22:26:49 2020
X-Patchwork-Submitter: Jordan Crouse
X-Patchwork-Id: 11708273
From: Jordan Crouse <jcrouse@codeaurora.org>
To: linux-arm-msm@vger.kernel.org
Cc: Will Deacon, Robin Murphy, Bjorn Andersson, iommu@lists.linux-foundation.org, freedreno@lists.freedesktop.org, Sai Prakash Ranjan, Hanna Hawa, Joerg Roedel, Krishna Reddy, Sibi Sankar, Stephen Boyd, Vivek Gautam, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v12 05/13] iommu/arm-smmu-qcom: Add implementation for the adreno GPU SMMU
Date: Mon, 10 Aug 2020 16:26:49 -0600
Message-Id: <20200810222657.1841322-6-jcrouse@codeaurora.org>
In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org>

Add a special implementation for the SMMU attached to most Adreno GPU targets, triggered by the qcom,adreno-smmu compatible string. The new Adreno SMMU implementation will enable split pagetables (TTBR1) for the domain attached to the GPU device (SID 0) and hard code it to context bank 0 so that the GPU hardware can implement per-instance pagetables.
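To make the SID 0 check concrete, here is a simplified standalone sketch (not the driver code in the patch below) of how a master can be recognised as the GPU: each firmware-provided stream ID carries the SMR ID in its low 16 bits, and the GPU is the master whose SMR ID is 0. The sample stream IDs in main() are made up for illustration.

/*
 * Standalone illustration of picking out the GPU by stream ID;
 * SMR_ID_MASK mirrors the low-16-bit ID field the real driver extracts.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SMR_ID_MASK	0xffffU	/* low 16 bits carry the stream ID */
#define GPU_SID		0	/* the Adreno GPU always uses SID 0 */

static bool is_gpu_device(const uint32_t *ids, unsigned int num_ids)
{
	for (unsigned int i = 0; i < num_ids; i++) {
		if ((ids[i] & SMR_ID_MASK) == GPU_SID)
			return true;
	}
	return false;
}

int main(void)
{
	uint32_t gpu_ids[]  = { 0x0000 };           /* hypothetical GPU stream ID */
	uint32_t disp_ids[] = { 0x0c01, 0x0c02 };   /* hypothetical display stream IDs */

	printf("gpu master: %d, display master: %d\n",
	       is_gpu_device(gpu_ids, 1), is_gpu_device(disp_ids, 2));
	return 0;
}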
Signed-off-by: Jordan Crouse --- drivers/iommu/arm/arm-smmu/arm-smmu-impl.c | 3 + drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c | 156 ++++++++++++++++++++- 2 files changed, 157 insertions(+), 2 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c b/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c index 88f17cc33023..d199b4bff15d 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu-impl.c @@ -223,6 +223,9 @@ struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu) of_device_is_compatible(np, "qcom,sm8250-smmu-500")) return qcom_smmu_impl_init(smmu); + if (of_device_is_compatible(smmu->dev->of_node, "qcom,adreno-smmu")) + return qcom_adreno_smmu_impl_init(smmu); + if (of_device_is_compatible(np, "marvell,ap806-smmu-500")) smmu->impl = &mrvl_mmu500_impl; diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c index be4318044f96..3be10145bf57 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c @@ -12,6 +12,138 @@ struct qcom_smmu { struct arm_smmu_device smmu; }; +#define QCOM_ADRENO_SMMU_GPU_SID 0 + +static bool qcom_adreno_smmu_is_gpu_device(struct device *dev) +{ + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + struct arm_smmu_master_cfg *cfg = dev_iommu_priv_get(dev); + int idx, i; + + /* + * The GPU will always use SID 0 so that is a handy way to uniquely + * identify it and configure it for per-instance pagetables + */ + for_each_cfg_sme(cfg, fwspec, i, idx) { + u16 sid = FIELD_GET(ARM_SMMU_SMR_ID, fwspec->ids[i]); + + if (sid == QCOM_ADRENO_SMMU_GPU_SID) + return true; + } + + return false; +} + +/* + * Local implementation to configure TTBR0 with the specified pagetable config. 
+ * The GPU driver will call this to enable TTBR0 when per-instance pagetables + * are active + */ +static int qcom_adreno_smmu_set_pgtable_cfg(struct arm_smmu_domain *smmu_domain, + struct io_pgtable_cfg *pgtbl_cfg) +{ + struct io_pgtable *pgtable = io_pgtable_ops_to_pgtable(smmu_domain->pgtbl_ops); + struct arm_smmu_cfg *cfg = &smmu_domain->cfg; + struct arm_smmu_cb *cb = &smmu_domain->smmu->cbs[cfg->cbndx]; + + /* The domain must have split pagetables already enabled */ + if (cb->tcr[0] & ARM_SMMU_TCR_EPD1) + return -EINVAL; + + /* If the pagetable config is NULL, disable TTBR0 */ + if (!pgtbl_cfg) { + /* Do nothing if it is already disabled */ + if ((cb->tcr[0] & ARM_SMMU_TCR_EPD0)) + return -EINVAL; + + /* Set TCR to the original configuration */ + cb->tcr[0] = arm_smmu_lpae_tcr(&pgtable->cfg); + cb->ttbr[0] = FIELD_PREP(ARM_SMMU_TTBRn_ASID, cb->cfg->asid); + } else { + u32 tcr = cb->tcr[0]; + + /* Don't call this again if TTBR0 is already enabled */ + if (!(cb->tcr[0] & ARM_SMMU_TCR_EPD0)) + return -EINVAL; + + tcr |= arm_smmu_lpae_tcr(pgtbl_cfg); + tcr &= ~(ARM_SMMU_TCR_EPD0 | ARM_SMMU_TCR_EPD1); + + cb->tcr[0] = tcr; + cb->ttbr[0] = pgtbl_cfg->arm_lpae_s1_cfg.ttbr; + cb->ttbr[0] |= FIELD_PREP(ARM_SMMU_TTBRn_ASID, cb->cfg->asid); + } + + arm_smmu_write_context_bank(smmu_domain->smmu, cb->cfg->cbndx); + return 0; +} + +static int qcom_adreno_smmu_domain_get_attr(struct arm_smmu_domain *smmu_domain, + enum iommu_attr attr, void *data) +{ + struct io_pgtable *pgtable; + struct io_pgtable_cfg *dest = data; + + if (attr == DOMAIN_ATTR_PGTABLE_CFG) { + + if (!smmu_domain->pgtbl_ops) + return -ENODEV; + + pgtable = io_pgtable_ops_to_pgtable(smmu_domain->pgtbl_ops); + memcpy(dest, &pgtable->cfg, sizeof(*dest)); + return 0; + } + + return -ENODEV; +} + +static int qcom_adreno_smmu_domain_set_attr(struct arm_smmu_domain *smmu_domain, + enum iommu_attr attr, void *data) +{ + if (attr == DOMAIN_ATTR_PGTABLE_CFG) + return qcom_adreno_smmu_set_pgtable_cfg(smmu_domain, data); + + return -ENODEV; +} + +static int qcom_adreno_smmu_alloc_context_bank(struct arm_smmu_domain *smmu_domain, + struct device *dev, int start, int count) +{ + struct arm_smmu_device *smmu = smmu_domain->smmu; + + /* + * Assign context bank 0 to the GPU device so the GPU hardware can + * switch pagetables + */ + if (qcom_adreno_smmu_is_gpu_device(dev)) { + start = 0; + count = 1; + } else { + start = 1; + } + + return __arm_smmu_alloc_bitmap(smmu->context_map, start, count); +} + +static int qcom_adreno_smmu_init_context(struct arm_smmu_domain *smmu_domain, + struct io_pgtable_cfg *pgtbl_cfg, struct device *dev) +{ + /* Only enable split pagetables for the GPU device (SID 0) */ + if (!qcom_adreno_smmu_is_gpu_device(dev)) + return 0; + + /* + * All targets that use the qcom,adreno-smmu compatible string *should* + * be AARCH64 stage 1 but double check because the arm-smmu code assumes + * that is the case when the TTBR1 quirk is enabled + */ + if ((smmu_domain->stage == ARM_SMMU_DOMAIN_S1) && + (smmu_domain->cfg.fmt == ARM_SMMU_CTX_FMT_AARCH64)) + pgtbl_cfg->quirks |= IO_PGTABLE_QUIRK_ARM_TTBR1; + + return 0; +} + static const struct of_device_id qcom_smmu_client_of_match[] __maybe_unused = { { .compatible = "qcom,adreno" }, { .compatible = "qcom,mdp4" }, @@ -65,7 +197,17 @@ static const struct arm_smmu_impl qcom_smmu_impl = { .reset = qcom_smmu500_reset, }; -struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu) +static const struct arm_smmu_impl qcom_adreno_smmu_impl = { + .init_context = 
qcom_adreno_smmu_init_context, + .def_domain_type = qcom_smmu_def_domain_type, + .reset = qcom_smmu500_reset, + .alloc_context_bank = qcom_adreno_smmu_alloc_context_bank, + .domain_set_attr = qcom_adreno_smmu_domain_set_attr, + .domain_get_attr = qcom_adreno_smmu_domain_get_attr, +}; + +static struct arm_smmu_device *qcom_smmu_create(struct arm_smmu_device *smmu, + const struct arm_smmu_impl *impl) { struct qcom_smmu *qsmmu; @@ -75,8 +217,18 @@ struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu) qsmmu->smmu = *smmu; - qsmmu->smmu.impl = &qcom_smmu_impl; + qsmmu->smmu.impl = impl; devm_kfree(smmu->dev, smmu); return &qsmmu->smmu; } + +struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu) +{ + return qcom_smmu_create(smmu, &qcom_smmu_impl); +} + +struct arm_smmu_device *qcom_adreno_smmu_impl_init(struct arm_smmu_device *smmu) +{ + return qcom_smmu_create(smmu, &qcom_adreno_smmu_impl); +} From patchwork Mon Aug 10 22:26:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jordan Crouse X-Patchwork-Id: 11708309 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2EE92739 for ; Mon, 10 Aug 2020 22:28:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 16A522075D for ; Mon, 10 Aug 2020 22:28:17 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=mg.codeaurora.org header.i=@mg.codeaurora.org header.b="ttcW1DBk" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727024AbgHJW1X (ORCPT ); Mon, 10 Aug 2020 18:27:23 -0400 Received: from mail29.static.mailgun.info ([104.130.122.29]:35273 "EHLO mail29.static.mailgun.info" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727021AbgHJW1X (ORCPT ); Mon, 10 Aug 2020 18:27:23 -0400 DKIM-Signature: a=rsa-sha256; v=1; c=relaxed/relaxed; d=mg.codeaurora.org; q=dns/txt; s=smtp; t=1597098442; h=Content-Transfer-Encoding: MIME-Version: References: In-Reply-To: Message-Id: Date: Subject: Cc: To: From: Sender; bh=GrEw3oSw++nMPzEl+/BHJDxleP5vanw4s1YWFdwf0mU=; b=ttcW1DBkqZFxWQhu9vWGGik7RDpPvQnR6tOAelQWeyeShOVSrXkd4Q19Eu7R1R5hYRIRBoQ8 /kIuEZlqcHC5TqkvvcEdo2DyFNS24uHtUb85FlCNShgzflSnMeVvtYjM4aePCwATs/77+UMw 8dNup8FdSpa7t4i4bzp0crSeeVQ= X-Mailgun-Sending-Ip: 104.130.122.29 X-Mailgun-Sid: WyI1MzIzYiIsICJsaW51eC1hcm0tbXNtQHZnZXIua2VybmVsLm9yZyIsICJiZTllNGEiXQ== Received: from smtp.codeaurora.org (ec2-35-166-182-171.us-west-2.compute.amazonaws.com [35.166.182.171]) by smtp-out-n01.prod.us-west-2.postgun.com with SMTP id 5f31c9cac85a1092b0faef36 (version=TLS1.2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256); Mon, 10 Aug 2020 22:27:22 GMT Received: by smtp.codeaurora.org (Postfix, from userid 1001) id 658CAC433B2; Mon, 10 Aug 2020 22:27:22 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-caf-mail-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-1.0 required=2.0 tests=ALL_TRUSTED,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.0 Received: from jordan-laptop.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) (Authenticated sender: jcrouse) by smtp.codeaurora.org (Postfix) with ESMTPSA id 7E7E2C433C9; Mon, 10 Aug 
2020 22:27:19 +0000 (UTC)
From: Jordan Crouse <jcrouse@codeaurora.org>
To: linux-arm-msm@vger.kernel.org
Cc: Will Deacon, Robin Murphy, Bjorn Andersson, iommu@lists.linux-foundation.org, freedreno@lists.freedesktop.org, Sai Prakash Ranjan, Rob Herring, Joerg Roedel, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v12 06/13] dt-bindings: arm-smmu: Add compatible string for Adreno GPU SMMU
Date: Mon, 10 Aug 2020 16:26:50 -0600
Message-Id: <20200810222657.1841322-7-jcrouse@codeaurora.org>
In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org>

Every Qcom Adreno GPU has an embedded SMMU for its own use. These devices depend on unique features such as split pagetables, different stall/halt requirements and other settings. Add a compatible string so that they can be identified in the arm-smmu implementation specific code.

Signed-off-by: Jordan Crouse
Reviewed-by: Rob Herring
---
Documentation/devicetree/bindings/iommu/arm,smmu.yaml | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/Documentation/devicetree/bindings/iommu/arm,smmu.yaml b/Documentation/devicetree/bindings/iommu/arm,smmu.yaml index 503160a7b9a0..70996348c1d8 100644 --- a/Documentation/devicetree/bindings/iommu/arm,smmu.yaml +++ b/Documentation/devicetree/bindings/iommu/arm,smmu.yaml @@ -49,6 +49,10 @@ properties: - enum: - nvidia,tegra194-smmu - const: nvidia,smmu-500 + - description: Qcom Adreno GPUs implementing "arm,smmu-v2" items: + - const: qcom,adreno-smmu + - const: qcom,smmu-v2 - items: - const: arm,mmu-500 - const: arm,smmu-v2

From patchwork Mon Aug 10 22:26:51 2020
X-Patchwork-Submitter: Jordan Crouse
X-Patchwork-Id: 11708297
From: Jordan Crouse <jcrouse@codeaurora.org>
To: linux-arm-msm@vger.kernel.org
Cc: Will Deacon, Robin Murphy, Bjorn Andersson, iommu@lists.linux-foundation.org, freedreno@lists.freedesktop.org, Sai Prakash Ranjan, AngeloGioacchino Del Regno, Ben Dooks, Brian Masney, Daniel Vetter, David Airlie, Emil Velikov, Eric Anholt, Jonathan Marek, Rob Clark, Sam Ravnborg, Sean Paul, Sharat Masetty, Shawn Guo, Wambui Karuga, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org
Subject: [PATCH v12 07/13] drm/msm: Add a context pointer to the submitqueue
Date: Mon, 10 Aug 2020 16:26:51 -0600
Message-Id: <20200810222657.1841322-8-jcrouse@codeaurora.org>
In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org>

Each submitqueue is attached to a context. Add a pointer to the context to the submitqueue at create time and refcount it so that it stays around through the life of the queue. GPU submissions can access the active context via the submitqueue instead of requiring it to be passed around from function to function.
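The lifetime rule can be pictured with a small standalone sketch of the refcounting pattern (plain C with a bare counter rather than the driver's struct kref; the names are illustrative): the open file holds one reference on its context, each submitqueue takes another at create time, and the context is only freed when the last holder drops its reference, so it survives a file close while queues still point at it.

/*
 * Standalone illustration of the context refcount introduced below.
 */
#include <stdio.h>
#include <stdlib.h>

struct ctx {
	int ref;
};

struct queue {
	struct ctx *ctx;
};

static struct ctx *ctx_get(struct ctx *c)
{
	c->ref++;
	return c;
}

static void ctx_put(struct ctx *c)
{
	if (--c->ref == 0) {
		printf("context freed\n");
		free(c);
	}
}

int main(void)
{
	struct ctx *c = malloc(sizeof(*c));
	struct queue q;

	c->ref = 1;              /* reference held by the open file */
	q.ctx = ctx_get(c);      /* queue takes its own reference at create time */

	ctx_put(c);              /* file closed: context still alive */
	ctx_put(q.ctx);          /* queue destroyed: last reference, context freed */
	return 0;
}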
Signed-off-by: Jordan Crouse --- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 12 +++++------- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 5 ++--- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 5 ++--- drivers/gpu/drm/msm/adreno/adreno_gpu.h | 3 +-- drivers/gpu/drm/msm/msm_drv.c | 3 ++- drivers/gpu/drm/msm/msm_drv.h | 8 ++++++++ drivers/gpu/drm/msm/msm_gem.h | 1 + drivers/gpu/drm/msm/msm_gem_submit.c | 8 ++++---- drivers/gpu/drm/msm/msm_gpu.c | 9 ++++----- drivers/gpu/drm/msm/msm_gpu.h | 7 +++---- drivers/gpu/drm/msm/msm_submitqueue.c | 8 +++++++- 11 files changed, 39 insertions(+), 30 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index 9e63a190642c..eff2439ea57b 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -43,8 +43,7 @@ static void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring) gpu_write(gpu, REG_A5XX_CP_RB_WPTR, wptr); } -static void a5xx_submit_in_rb(struct msm_gpu *gpu, struct msm_gem_submit *submit, - struct msm_file_private *ctx) +static void a5xx_submit_in_rb(struct msm_gpu *gpu, struct msm_gem_submit *submit) { struct msm_drm_private *priv = gpu->dev->dev_private; struct msm_ringbuffer *ring = submit->ring; @@ -57,7 +56,7 @@ static void a5xx_submit_in_rb(struct msm_gpu *gpu, struct msm_gem_submit *submit case MSM_SUBMIT_CMD_IB_TARGET_BUF: break; case MSM_SUBMIT_CMD_CTX_RESTORE_BUF: - if (priv->lastctx == ctx) + if (priv->lastctx == submit->queue->ctx) break; /* fall-thru */ case MSM_SUBMIT_CMD_BUF: @@ -103,8 +102,7 @@ static void a5xx_submit_in_rb(struct msm_gpu *gpu, struct msm_gem_submit *submit msm_gpu_retire(gpu); } -static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit, - struct msm_file_private *ctx) +static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu); @@ -114,7 +112,7 @@ static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit, if (IS_ENABLED(CONFIG_DRM_MSM_GPU_SUDO) && submit->in_rb) { priv->lastctx = NULL; - a5xx_submit_in_rb(gpu, submit, ctx); + a5xx_submit_in_rb(gpu, submit); return; } @@ -148,7 +146,7 @@ static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit, case MSM_SUBMIT_CMD_IB_TARGET_BUF: break; case MSM_SUBMIT_CMD_CTX_RESTORE_BUF: - if (priv->lastctx == ctx) + if (priv->lastctx == submit->queue->ctx) break; /* fall-thru */ case MSM_SUBMIT_CMD_BUF: diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index c5a3e4d4c007..5eabb0109577 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -81,8 +81,7 @@ static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter, OUT_RING(ring, upper_32_bits(iova)); } -static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit, - struct msm_file_private *ctx) +static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) { unsigned int index = submit->seqno % MSM_GPU_SUBMIT_STATS_COUNT; struct msm_drm_private *priv = gpu->dev->dev_private; @@ -115,7 +114,7 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit, case MSM_SUBMIT_CMD_IB_TARGET_BUF: break; case MSM_SUBMIT_CMD_CTX_RESTORE_BUF: - if (priv->lastctx == ctx) + if (priv->lastctx == submit->queue->ctx) break; /* fall-thru */ case MSM_SUBMIT_CMD_BUF: diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c 
b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index e23641a5ec84..b38a8126541a 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -457,8 +457,7 @@ void adreno_recover(struct msm_gpu *gpu) } } -void adreno_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit, - struct msm_file_private *ctx) +void adreno_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); struct msm_drm_private *priv = gpu->dev->dev_private; @@ -472,7 +471,7 @@ void adreno_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit, break; case MSM_SUBMIT_CMD_CTX_RESTORE_BUF: /* ignore if there has not been a ctx switch: */ - if (priv->lastctx == ctx) + if (priv->lastctx == submit->queue->ctx) break; /* fall-thru */ case MSM_SUBMIT_CMD_BUF: diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h index 99bb468f5f24..0ae8b373c428 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -267,8 +267,7 @@ struct drm_gem_object *adreno_fw_create_bo(struct msm_gpu *gpu, const struct firmware *fw, u64 *iova); int adreno_hw_init(struct msm_gpu *gpu); void adreno_recover(struct msm_gpu *gpu); -void adreno_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit, - struct msm_file_private *ctx); +void adreno_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit); void adreno_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring); bool adreno_idle(struct msm_gpu *gpu, struct msm_ringbuffer *ring); #if defined(CONFIG_DEBUG_FS) || defined(CONFIG_DEV_COREDUMP) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 7d641c7e3514..108b663c3ef2 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -594,6 +594,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file) if (!ctx) return -ENOMEM; + kref_init(&ctx->ref); msm_submitqueue_init(dev, ctx); ctx->aspace = priv->gpu ? priv->gpu->aspace : NULL; @@ -615,7 +616,7 @@ static int msm_open(struct drm_device *dev, struct drm_file *file) static void context_close(struct msm_file_private *ctx) { msm_submitqueue_close(ctx); - kfree(ctx); + kref_put(&ctx->ref, msm_file_private_destroy); } static void msm_postclose(struct drm_device *dev, struct drm_file *file) diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index af259b0573ea..f69c6d62584d 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -57,6 +57,7 @@ struct msm_file_private { struct list_head submitqueues; int queueid; struct msm_gem_address_space *aspace; + struct kref ref; }; enum msm_mdp_plane_property { @@ -428,6 +429,13 @@ void msm_submitqueue_close(struct msm_file_private *ctx); void msm_submitqueue_destroy(struct kref *kref); +static inline void msm_file_private_destroy(struct kref *kref) +{ + struct msm_file_private *ctx = container_of(kref, + struct msm_file_private, ref); + + kfree(ctx); +} #define DBG(fmt, ...) DRM_DEBUG_DRIVER(fmt"\n", ##__VA_ARGS__) #define VERB(fmt, ...) 
if (0) DRM_DEBUG_DRIVER(fmt"\n", ##__VA_ARGS__) diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 972490b14ba5..9c573c4269cb 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -142,6 +142,7 @@ struct msm_gem_submit { bool valid; /* true if no cmdstream patching needed */ bool in_rb; /* "sudo" mode, copy cmds into RB */ struct msm_ringbuffer *ring; + struct msm_file_private *ctx; unsigned int nr_cmds; unsigned int nr_bos; u32 ident; /* A "identifier" for the submit for logging */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 8cb9aa15ff90..aa5c60a7132d 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -27,7 +27,7 @@ #define BO_PINNED 0x2000 static struct msm_gem_submit *submit_create(struct drm_device *dev, - struct msm_gpu *gpu, struct msm_gem_address_space *aspace, + struct msm_gpu *gpu, struct msm_gpu_submitqueue *queue, uint32_t nr_bos, uint32_t nr_cmds) { @@ -43,7 +43,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev, return NULL; submit->dev = dev; - submit->aspace = aspace; + submit->aspace = queue->ctx->aspace; submit->gpu = gpu; submit->fence = NULL; submit->cmd = (void *)&submit->bos[nr_bos]; @@ -677,7 +677,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, } } - submit = submit_create(dev, gpu, ctx->aspace, queue, args->nr_bos, + submit = submit_create(dev, gpu, queue, args->nr_bos, args->nr_cmds); if (!submit) { ret = -ENOMEM; @@ -785,7 +785,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, } } - msm_gpu_submit(gpu, submit, ctx); + msm_gpu_submit(gpu, submit); args->fence = submit->fence->seqno; diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index d5645472b25d..a1f3da6550e5 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -520,7 +520,7 @@ static void recover_worker(struct work_struct *work) struct msm_ringbuffer *ring = gpu->rb[i]; list_for_each_entry(submit, &ring->submits, node) - gpu->funcs->submit(gpu, submit, NULL); + gpu->funcs->submit(gpu, submit); } } @@ -747,8 +747,7 @@ void msm_gpu_retire(struct msm_gpu *gpu) } /* add bo's to gpu's ring, and kick gpu: */ -void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit, - struct msm_file_private *ctx) +void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) { struct drm_device *dev = gpu->dev; struct msm_drm_private *priv = dev->dev_private; @@ -788,8 +787,8 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit, msm_gem_move_to_active(&msm_obj->base, gpu, false, submit->fence); } - gpu->funcs->submit(gpu, submit, ctx); - priv->lastctx = ctx; + gpu->funcs->submit(gpu, submit); + priv->lastctx = submit->queue->ctx; hangcheck_timer_reset(gpu); } diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 0db117a7339b..d496d488222c 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -44,8 +44,7 @@ struct msm_gpu_funcs { int (*hw_init)(struct msm_gpu *gpu); int (*pm_suspend)(struct msm_gpu *gpu); int (*pm_resume)(struct msm_gpu *gpu); - void (*submit)(struct msm_gpu *gpu, struct msm_gem_submit *submit, - struct msm_file_private *ctx); + void (*submit)(struct msm_gpu *gpu, struct msm_gem_submit *submit); void (*flush)(struct msm_gpu *gpu, struct msm_ringbuffer *ring); irqreturn_t (*irq)(struct msm_gpu *irq); struct msm_ringbuffer *(*active_ring)(struct msm_gpu *gpu); 
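For reference, the hunks above drop the explicit msm_file_private argument from gpu->funcs->submit() because the context is now reachable through submit->queue->ctx, and it is kept alive by reference counting: context_init() does kref_init(), msm_submitqueue_create() takes kref_get() for each queue, and context_close(), msm_submitqueue_close() and msm_submitqueue_remove() drop references with kref_put(). The minimal, self-contained sketch below shows that lifetime rule in isolation; the struct and function names are illustrative, not the driver's.

#include <linux/kref.h>
#include <linux/slab.h>

struct example_ctx {
	struct kref ref;
	/* per-file GPU state (e.g. the address space) would live here */
};

static void example_ctx_destroy(struct kref *kref)
{
	struct example_ctx *ctx = container_of(kref, struct example_ctx, ref);

	kfree(ctx);
}

/* open(): the drm_file holds the initial reference */
static struct example_ctx *example_ctx_create(void)
{
	struct example_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);

	if (ctx)
		kref_init(&ctx->ref);
	return ctx;
}

/* submitqueue create: each queue pins the context it was created from */
static void example_queue_attach(struct example_ctx *ctx)
{
	kref_get(&ctx->ref);
}

/* submitqueue destroy and file close each drop one reference */
static void example_ctx_put(struct example_ctx *ctx)
{
	kref_put(&ctx->ref, example_ctx_destroy);
}

With this rule, submit->queue->ctx stays valid for as long as the queue itself does, so the submit path no longer needs the context passed in separately.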
@@ -181,6 +180,7 @@ struct msm_gpu_submitqueue { u32 flags; u32 prio; int faults; + struct msm_file_private *ctx; struct list_head node; struct kref ref; }; @@ -280,8 +280,7 @@ int msm_gpu_perfcntr_sample(struct msm_gpu *gpu, uint32_t *activetime, uint32_t *totaltime, uint32_t ncntrs, uint32_t *cntrs); void msm_gpu_retire(struct msm_gpu *gpu); -void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit, - struct msm_file_private *ctx); +void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit); int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs, diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c index a1d94be7883a..10f557225a3e 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -49,8 +49,10 @@ void msm_submitqueue_close(struct msm_file_private *ctx) * No lock needed in close and there won't * be any more user ioctls coming our way */ - list_for_each_entry_safe(entry, tmp, &ctx->submitqueues, node) + list_for_each_entry_safe(entry, tmp, &ctx->submitqueues, node) { + kref_put(&ctx->ref, msm_file_private_destroy); msm_submitqueue_put(entry); + } } int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx, @@ -81,6 +83,9 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx, write_lock(&ctx->queuelock); + kref_get(&ctx->ref); + + queue->ctx = ctx; queue->id = ctx->queueid++; if (id) @@ -177,6 +182,7 @@ int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id) list_del(&entry->node); write_unlock(&ctx->queuelock); + kref_put(&ctx->ref, msm_file_private_destroy); msm_submitqueue_put(entry); return 0; } From patchwork Mon Aug 10 22:26:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jordan Crouse X-Patchwork-Id: 11708303 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BED99618 for ; Mon, 10 Aug 2020 22:28:08 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A6B7F20782 for ; Mon, 10 Aug 2020 22:28:08 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=mg.codeaurora.org header.i=@mg.codeaurora.org header.b="b6mUgUC1" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727842AbgHJW2A (ORCPT ); Mon, 10 Aug 2020 18:28:00 -0400 Received: from m43-7.mailgun.net ([69.72.43.7]:33930 "EHLO m43-7.mailgun.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727103AbgHJW1n (ORCPT ); Mon, 10 Aug 2020 18:27:43 -0400 DKIM-Signature: a=rsa-sha256; v=1; c=relaxed/relaxed; d=mg.codeaurora.org; q=dns/txt; s=smtp; t=1597098463; h=Content-Transfer-Encoding: MIME-Version: References: In-Reply-To: Message-Id: Date: Subject: Cc: To: From: Sender; bh=wYFdk4sZjkkv1anpMyFTRw+xKujsLrZ6WLap5ZC3G3k=; b=b6mUgUC1PtttmiMeCwrXssC4qEZGLFy2+RObTIs8GMDVB2/5WNcjqcPQWe/OaynqgKsLBKRO WyRLUwKK2tirTHcF5L530epDoqAM48HY4MwqEv8bO+4fUJrZdddzMsqNpRLBlZ18istr3s9c +8Ytdd7nzzLm1OlUCIfSY/3UH0E= X-Mailgun-Sending-Ip: 69.72.43.7 X-Mailgun-Sid: WyI1MzIzYiIsICJsaW51eC1hcm0tbXNtQHZnZXIua2VybmVsLm9yZyIsICJiZTllNGEiXQ== Received: from smtp.codeaurora.org (ec2-35-166-182-171.us-west-2.compute.amazonaws.com [35.166.182.171]) by 
smtp-out-n12.prod.us-west-2.postgun.com with SMTP id 5f31c9d4d96d28d61ee2058d (version=TLS1.2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256); Mon, 10 Aug 2020 22:27:32 GMT Received: by smtp.codeaurora.org (Postfix, from userid 1001) id 36D50C4339C; Mon, 10 Aug 2020 22:27:30 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-caf-mail-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-1.0 required=2.0 tests=ALL_TRUSTED,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.0 Received: from jordan-laptop.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) (Authenticated sender: jcrouse) by smtp.codeaurora.org (Postfix) with ESMTPSA id EEE56C4339C; Mon, 10 Aug 2020 22:27:25 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 smtp.codeaurora.org EEE56C4339C Authentication-Results: aws-us-west-2-caf-mail-1.web.codeaurora.org; dmarc=none (p=none dis=none) header.from=codeaurora.org Authentication-Results: aws-us-west-2-caf-mail-1.web.codeaurora.org; spf=none smtp.mailfrom=jcrouse@codeaurora.org From: Jordan Crouse To: linux-arm-msm@vger.kernel.org Cc: Will Deacon , Robin Murphy , Bjorn Andersson , iommu@lists.linux-foundation.org, freedreno@lists.freedesktop.org, Sai Prakash Ranjan , Brian Masney , Daniel Vetter , David Airlie , Douglas Anderson , Jonathan Marek , Rob Clark , Sean Paul , dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org Subject: [PATCH v12 08/13] drm/msm: Set the global virtual address range from the IOMMU domain Date: Mon, 10 Aug 2020 16:26:52 -0600 Message-Id: <20200810222657.1841322-9-jcrouse@codeaurora.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org> References: <20200810222657.1841322-1-jcrouse@codeaurora.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Use the aperture settings from the IOMMU domain to set up the virtual address range for the GPU. This allows us to transparently deal with IOMMU side features (like split pagetables). Signed-off-by: Jordan Crouse --- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 13 +++++++++++-- drivers/gpu/drm/msm/msm_iommu.c | 7 +++++++ 2 files changed, 18 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index b38a8126541a..f9e3badf2fca 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -192,9 +192,18 @@ adreno_iommu_create_address_space(struct msm_gpu *gpu, struct iommu_domain *iommu = iommu_domain_alloc(&platform_bus_type); struct msm_mmu *mmu = msm_iommu_new(&pdev->dev, iommu); struct msm_gem_address_space *aspace; + u64 start, size; - aspace = msm_gem_address_space_create(mmu, "gpu", SZ_16M, - 0xffffffff - SZ_16M); + /* + * Use the aperture start or SZ_16M, whichever is greater. 
This will + * ensure that we align with the allocated pagetable range while still + * allowing room in the lower 32 bits for GMEM and whatnot + */ + start = max_t(u64, SZ_16M, iommu->geometry.aperture_start); + size = iommu->geometry.aperture_end - start + 1; + + aspace = msm_gem_address_space_create(mmu, "gpu", + start & GENMASK(48, 0), size); if (IS_ERR(aspace) && !IS_ERR(mmu)) mmu->funcs->destroy(mmu); diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c index 3a381a9674c9..1b6635504069 100644 --- a/drivers/gpu/drm/msm/msm_iommu.c +++ b/drivers/gpu/drm/msm/msm_iommu.c @@ -36,6 +36,10 @@ static int msm_iommu_map(struct msm_mmu *mmu, uint64_t iova, struct msm_iommu *iommu = to_msm_iommu(mmu); size_t ret; + /* The arm-smmu driver expects the addresses to be sign extended */ + if (iova & BIT_ULL(48)) + iova |= GENMASK_ULL(63, 49); + ret = iommu_map_sg(iommu->domain, iova, sgt->sgl, sgt->nents, prot); WARN_ON(!ret); @@ -46,6 +50,9 @@ static int msm_iommu_unmap(struct msm_mmu *mmu, uint64_t iova, size_t len) { struct msm_iommu *iommu = to_msm_iommu(mmu); + if (iova & BIT_ULL(48)) + iova |= GENMASK_ULL(63, 49); + iommu_unmap(iommu->domain, iova, len); return 0; From patchwork Mon Aug 10 22:26:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jordan Crouse X-Patchwork-Id: 11708299 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D42E8739 for ; Mon, 10 Aug 2020 22:28:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id ADE9120782 for ; Mon, 10 Aug 2020 22:28:02 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=mg.codeaurora.org header.i=@mg.codeaurora.org header.b="VbX5gGaQ" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727852AbgHJW2A (ORCPT ); Mon, 10 Aug 2020 18:28:00 -0400 Received: from m43-7.mailgun.net ([69.72.43.7]:23253 "EHLO m43-7.mailgun.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727112AbgHJW1m (ORCPT ); Mon, 10 Aug 2020 18:27:42 -0400 DKIM-Signature: a=rsa-sha256; v=1; c=relaxed/relaxed; d=mg.codeaurora.org; q=dns/txt; s=smtp; t=1597098461; h=Content-Transfer-Encoding: MIME-Version: References: In-Reply-To: Message-Id: Date: Subject: Cc: To: From: Sender; bh=iq9TuxnwB5kGxb5cTMsU0LE5bIkRcWS49YoceQPqScU=; b=VbX5gGaQvMT7un0M/uXuMD1DyfCsSUy0JLZNTIdy7iJqtQgHHGRkSj5IZVreZEZjWzGNxgVc 7DD8/25H9J+v/9cIUb8RdPNHK8UeMriN5yFrWXd72YjTxgFaaFyvo12Okq+9Lsce4A05Yarx 94HByZgEpBYHM8kC1ww1SASxoXE= X-Mailgun-Sending-Ip: 69.72.43.7 X-Mailgun-Sid: WyI1MzIzYiIsICJsaW51eC1hcm0tbXNtQHZnZXIua2VybmVsLm9yZyIsICJiZTllNGEiXQ== Received: from smtp.codeaurora.org (ec2-35-166-182-171.us-west-2.compute.amazonaws.com [35.166.182.171]) by smtp-out-n01.prod.us-west-2.postgun.com with SMTP id 5f31c9ddc85a1092b0fb1672 (version=TLS1.2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256); Mon, 10 Aug 2020 22:27:41 GMT Received: by smtp.codeaurora.org (Postfix, from userid 1001) id 03826C433A1; Mon, 10 Aug 2020 22:27:34 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-caf-mail-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-1.0 required=2.0 tests=ALL_TRUSTED,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.0 Received: from jordan-laptop.qualcomm.com 
(Global_NAT1.qualcomm.com [129.46.96.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) (Authenticated sender: jcrouse) by smtp.codeaurora.org (Postfix) with ESMTPSA id 1DFF9C433AF; Mon, 10 Aug 2020 22:27:28 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 smtp.codeaurora.org 1DFF9C433AF Authentication-Results: aws-us-west-2-caf-mail-1.web.codeaurora.org; dmarc=none (p=none dis=none) header.from=codeaurora.org Authentication-Results: aws-us-west-2-caf-mail-1.web.codeaurora.org; spf=none smtp.mailfrom=jcrouse@codeaurora.org From: Jordan Crouse To: linux-arm-msm@vger.kernel.org Cc: Will Deacon , Robin Murphy , Bjorn Andersson , iommu@lists.linux-foundation.org, freedreno@lists.freedesktop.org, Sai Prakash Ranjan , Daniel Vetter , David Airlie , Rob Clark , Sean Paul , dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org Subject: [PATCH v12 09/13] drm/msm: Add support to create a local pagetable Date: Mon, 10 Aug 2020 16:26:53 -0600 Message-Id: <20200810222657.1841322-10-jcrouse@codeaurora.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org> References: <20200810222657.1841322-1-jcrouse@codeaurora.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add support to create a io-pgtable for use by targets that support per-instance pagetables. In order to support per-instance pagetables the GPU SMMU device needs to have the qcom,adreno-smmu compatible string and split pagetables enabled. Signed-off-by: Jordan Crouse --- drivers/gpu/drm/msm/msm_gpummu.c | 2 +- drivers/gpu/drm/msm/msm_iommu.c | 190 ++++++++++++++++++++++++++++++- drivers/gpu/drm/msm/msm_mmu.h | 16 ++- 3 files changed, 205 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gpummu.c b/drivers/gpu/drm/msm/msm_gpummu.c index 310a31b05faa..aab121f4beb7 100644 --- a/drivers/gpu/drm/msm/msm_gpummu.c +++ b/drivers/gpu/drm/msm/msm_gpummu.c @@ -102,7 +102,7 @@ struct msm_mmu *msm_gpummu_new(struct device *dev, struct msm_gpu *gpu) } gpummu->gpu = gpu; - msm_mmu_init(&gpummu->base, dev, &funcs); + msm_mmu_init(&gpummu->base, dev, &funcs, MSM_MMU_GPUMMU); return &gpummu->base; } diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c index 1b6635504069..bc04dda8a198 100644 --- a/drivers/gpu/drm/msm/msm_iommu.c +++ b/drivers/gpu/drm/msm/msm_iommu.c @@ -4,15 +4,201 @@ * Author: Rob Clark */ +#include #include "msm_drv.h" #include "msm_mmu.h" struct msm_iommu { struct msm_mmu base; struct iommu_domain *domain; + atomic_t pagetables; }; + #define to_msm_iommu(x) container_of(x, struct msm_iommu, base) +struct msm_iommu_pagetable { + struct msm_mmu base; + struct msm_mmu *parent; + struct io_pgtable_ops *pgtbl_ops; + phys_addr_t ttbr; + u32 asid; +}; +static struct msm_iommu_pagetable *to_pagetable(struct msm_mmu *mmu) +{ + return container_of(mmu, struct msm_iommu_pagetable, base); +} + +static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova, + size_t size) +{ + struct msm_iommu_pagetable *pagetable = to_pagetable(mmu); + struct io_pgtable_ops *ops = pagetable->pgtbl_ops; + size_t unmapped = 0; + + /* Unmap the block one page at a time */ + while (size) { + unmapped += ops->unmap(ops, iova, 4096, NULL); + iova += 4096; + size -= 4096; + } + + iommu_flush_tlb_all(to_msm_iommu(pagetable->parent)->domain); + + return (unmapped == size) ? 
0 : -EINVAL; +} + +static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, + struct sg_table *sgt, size_t len, int prot) +{ + struct msm_iommu_pagetable *pagetable = to_pagetable(mmu); + struct io_pgtable_ops *ops = pagetable->pgtbl_ops; + struct scatterlist *sg; + size_t mapped = 0; + u64 addr = iova; + unsigned int i; + + for_each_sg(sgt->sgl, sg, sgt->nents, i) { + size_t size = sg->length; + phys_addr_t phys = sg_phys(sg); + + /* Map the block one page at a time */ + while (size) { + if (ops->map(ops, addr, phys, 4096, prot, GFP_KERNEL)) { + msm_iommu_pagetable_unmap(mmu, iova, mapped); + return -EINVAL; + } + + phys += 4096; + addr += 4096; + size -= 4096; + mapped += 4096; + } + } + + return 0; +} + +static void msm_iommu_pagetable_destroy(struct msm_mmu *mmu) +{ + struct msm_iommu_pagetable *pagetable = to_pagetable(mmu); + struct msm_iommu *iommu = to_msm_iommu(pagetable->parent); + + /* + * If this is the last attached pagetable for the parent, + * disable TTBR0 in the arm-smmu driver + */ + if (atomic_dec_return(&iommu->pagetables) == 0) + iommu_domain_set_attr(iommu->domain, + DOMAIN_ATTR_PGTABLE_CFG, NULL); + + free_io_pgtable_ops(pagetable->pgtbl_ops); + kfree(pagetable); +} + +int msm_iommu_pagetable_params(struct msm_mmu *mmu, + phys_addr_t *ttbr, int *asid) +{ + struct msm_iommu_pagetable *pagetable; + + if (mmu->type != MSM_MMU_IOMMU_PAGETABLE) + return -EINVAL; + + pagetable = to_pagetable(mmu); + + if (ttbr) + *ttbr = pagetable->ttbr; + + if (asid) + *asid = pagetable->asid; + + return 0; +} + +static const struct msm_mmu_funcs pagetable_funcs = { + .map = msm_iommu_pagetable_map, + .unmap = msm_iommu_pagetable_unmap, + .destroy = msm_iommu_pagetable_destroy, +}; + +static void msm_iommu_tlb_flush_all(void *cookie) +{ +} + +static void msm_iommu_tlb_flush_walk(unsigned long iova, size_t size, + size_t granule, void *cookie) +{ +} + +static void msm_iommu_tlb_add_page(struct iommu_iotlb_gather *gather, + unsigned long iova, size_t granule, void *cookie) +{ +} + +static const struct iommu_flush_ops null_tlb_ops = { + .tlb_flush_all = msm_iommu_tlb_flush_all, + .tlb_flush_walk = msm_iommu_tlb_flush_walk, + .tlb_flush_leaf = msm_iommu_tlb_flush_walk, + .tlb_add_page = msm_iommu_tlb_add_page, +}; + +struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent) +{ + struct msm_iommu *iommu = to_msm_iommu(parent); + static int next_asid = 16; + struct msm_iommu_pagetable *pagetable; + struct io_pgtable_cfg cfg; + int ret; + + /* Get the pagetable configuration from the domain */ + ret = iommu_domain_get_attr(iommu->domain, + DOMAIN_ATTR_PGTABLE_CFG, &cfg); + if (ret) + return ERR_PTR(ret); + + pagetable = kzalloc(sizeof(*pagetable), GFP_KERNEL); + if (!pagetable) + return ERR_PTR(-ENOMEM); + + msm_mmu_init(&pagetable->base, parent->dev, &pagetable_funcs, + MSM_MMU_IOMMU_PAGETABLE); + + /* The incoming cfg will have the TTBR1 quirk enabled */ + cfg.quirks &= ~IO_PGTABLE_QUIRK_ARM_TTBR1; + cfg.tlb = &null_tlb_ops; + + pagetable->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1, + &cfg, iommu->domain); + + if (!pagetable->pgtbl_ops) { + kfree(pagetable); + return ERR_PTR(-ENOMEM); + } + + /* + * If this is the first pagetable that we've allocated, send it back to + * the arm-smmu driver as a trigger to set up TTBR0 + */ + if (atomic_inc_return(&iommu->pagetables) == 1) { + ret = iommu_domain_set_attr(iommu->domain, + DOMAIN_ATTR_PGTABLE_CFG, &cfg); + if (ret) { + free_io_pgtable_ops(pagetable->pgtbl_ops); + kfree(pagetable); + return ERR_PTR(ret); + } + } + + /* 
Needed later for TLB flush */ + pagetable->parent = parent; + pagetable->ttbr = cfg.arm_lpae_s1_cfg.ttbr; + + pagetable->asid = next_asid; + next_asid = (next_asid + 1) % 255; + next_asid = min(16, next_asid); + + return &pagetable->base; +} + static int msm_fault_handler(struct iommu_domain *domain, struct device *dev, unsigned long iova, int flags, void *arg) { @@ -85,9 +271,11 @@ struct msm_mmu *msm_iommu_new(struct device *dev, struct iommu_domain *domain) return ERR_PTR(-ENOMEM); iommu->domain = domain; - msm_mmu_init(&iommu->base, dev, &funcs); + msm_mmu_init(&iommu->base, dev, &funcs, MSM_MMU_IOMMU); iommu_set_fault_handler(domain, msm_fault_handler, iommu); + atomic_set(&iommu->pagetables, 0); + ret = iommu_attach_device(iommu->domain, dev); if (ret) { kfree(iommu); diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h index 3a534ee59bf6..61ade89d9e48 100644 --- a/drivers/gpu/drm/msm/msm_mmu.h +++ b/drivers/gpu/drm/msm/msm_mmu.h @@ -17,18 +17,26 @@ struct msm_mmu_funcs { void (*destroy)(struct msm_mmu *mmu); }; +enum msm_mmu_type { + MSM_MMU_GPUMMU, + MSM_MMU_IOMMU, + MSM_MMU_IOMMU_PAGETABLE, +}; + struct msm_mmu { const struct msm_mmu_funcs *funcs; struct device *dev; int (*handler)(void *arg, unsigned long iova, int flags); void *arg; + enum msm_mmu_type type; }; static inline void msm_mmu_init(struct msm_mmu *mmu, struct device *dev, - const struct msm_mmu_funcs *funcs) + const struct msm_mmu_funcs *funcs, enum msm_mmu_type type) { mmu->dev = dev; mmu->funcs = funcs; + mmu->type = type; } struct msm_mmu *msm_iommu_new(struct device *dev, struct iommu_domain *domain); @@ -41,7 +49,13 @@ static inline void msm_mmu_set_fault_handler(struct msm_mmu *mmu, void *arg, mmu->handler = handler; } +struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent); + void msm_gpummu_params(struct msm_mmu *mmu, dma_addr_t *pt_base, dma_addr_t *tran_error); + +int msm_iommu_pagetable_params(struct msm_mmu *mmu, phys_addr_t *ttbr, + int *asid); + #endif /* __MSM_MMU_H__ */ From patchwork Mon Aug 10 22:26:54 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jordan Crouse X-Patchwork-Id: 11708285 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DEF4C739 for ; Mon, 10 Aug 2020 22:27:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C2B8E2083B for ; Mon, 10 Aug 2020 22:27:49 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=mg.codeaurora.org header.i=@mg.codeaurora.org header.b="KRrZDom4" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727010AbgHJW1s (ORCPT ); Mon, 10 Aug 2020 18:27:48 -0400 Received: from m43-7.mailgun.net ([69.72.43.7]:31814 "EHLO m43-7.mailgun.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727114AbgHJW1s (ORCPT ); Mon, 10 Aug 2020 18:27:48 -0400 DKIM-Signature: a=rsa-sha256; v=1; c=relaxed/relaxed; d=mg.codeaurora.org; q=dns/txt; s=smtp; t=1597098467; h=Content-Transfer-Encoding: MIME-Version: References: In-Reply-To: Message-Id: Date: Subject: Cc: To: From: Sender; bh=1l56jKTnCbGbzDy9PC9bfDI6CvjcNk4zvHShg3Sn/ic=; b=KRrZDom48RVzFLB5l2W6TxFz0xUe4XVXsmjoZkfcO4C00HCXBdhWhzamOf/FREFJjz4Op4OC W/qaGlrmlpsWrauqc/0JSF8RI9qwgPggclG50Y4IN5ju0I8QQ3yLkBHgk3wrtUgBZDFZJBza 
fBaIN3O0DGhr+XvMAXv1QtPpcq4= X-Mailgun-Sending-Ip: 69.72.43.7 X-Mailgun-Sid: WyI1MzIzYiIsICJsaW51eC1hcm0tbXNtQHZnZXIua2VybmVsLm9yZyIsICJiZTllNGEiXQ== Received: from smtp.codeaurora.org (ec2-35-166-182-171.us-west-2.compute.amazonaws.com [35.166.182.171]) by smtp-out-n08.prod.us-west-2.postgun.com with SMTP id 5f31c9db2f4952907d4d7a68 (version=TLS1.2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256); Mon, 10 Aug 2020 22:27:39 GMT Received: by smtp.codeaurora.org (Postfix, from userid 1001) id E23E3C43453; Mon, 10 Aug 2020 22:27:36 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-caf-mail-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-1.0 required=2.0 tests=ALL_TRUSTED,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.0 Received: from jordan-laptop.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) (Authenticated sender: jcrouse) by smtp.codeaurora.org (Postfix) with ESMTPSA id A2429C433B2; Mon, 10 Aug 2020 22:27:31 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 smtp.codeaurora.org A2429C433B2 Authentication-Results: aws-us-west-2-caf-mail-1.web.codeaurora.org; dmarc=none (p=none dis=none) header.from=codeaurora.org Authentication-Results: aws-us-west-2-caf-mail-1.web.codeaurora.org; spf=none smtp.mailfrom=jcrouse@codeaurora.org From: Jordan Crouse To: linux-arm-msm@vger.kernel.org Cc: Will Deacon , Robin Murphy , Bjorn Andersson , iommu@lists.linux-foundation.org, freedreno@lists.freedesktop.org, Sai Prakash Ranjan , Daniel Vetter , David Airlie , Rob Clark , Sean Paul , dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org Subject: [PATCH v12 10/13] drm/msm: Add support for private address space instances Date: Mon, 10 Aug 2020 16:26:54 -0600 Message-Id: <20200810222657.1841322-11-jcrouse@codeaurora.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org> References: <20200810222657.1841322-1-jcrouse@codeaurora.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add support for allocating private address space instances. Targets that support per-context pagetables should implement their own function to allocate private address spaces. The default will return a pointer to the global address space. Signed-off-by: Jordan Crouse --- drivers/gpu/drm/msm/msm_drv.c | 13 +++++++------ drivers/gpu/drm/msm/msm_drv.h | 5 +++++ drivers/gpu/drm/msm/msm_gem_vma.c | 9 +++++++++ drivers/gpu/drm/msm/msm_gpu.c | 22 ++++++++++++++++++++++ drivers/gpu/drm/msm/msm_gpu.h | 5 +++++ 5 files changed, 48 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 108b663c3ef2..f072306f1260 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -597,7 +597,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file) kref_init(&ctx->ref); msm_submitqueue_init(dev, ctx); - ctx->aspace = priv->gpu ? 
priv->gpu->aspace : NULL; + ctx->aspace = msm_gpu_create_private_address_space(priv->gpu); file->driver_priv = ctx; return 0; @@ -780,18 +780,19 @@ static int msm_ioctl_gem_cpu_fini(struct drm_device *dev, void *data, } static int msm_ioctl_gem_info_iova(struct drm_device *dev, - struct drm_gem_object *obj, uint64_t *iova) + struct drm_file *file, struct drm_gem_object *obj, + uint64_t *iova) { - struct msm_drm_private *priv = dev->dev_private; + struct msm_file_private *ctx = file->driver_priv; - if (!priv->gpu) + if (!ctx->aspace) return -EINVAL; /* * Don't pin the memory here - just get an address so that userspace can * be productive */ - return msm_gem_get_iova(obj, priv->gpu->aspace, iova); + return msm_gem_get_iova(obj, ctx->aspace, iova); } static int msm_ioctl_gem_info(struct drm_device *dev, void *data, @@ -830,7 +831,7 @@ static int msm_ioctl_gem_info(struct drm_device *dev, void *data, args->value = msm_gem_mmap_offset(obj); break; case MSM_INFO_GET_IOVA: - ret = msm_ioctl_gem_info_iova(dev, obj, &args->value); + ret = msm_ioctl_gem_info_iova(dev, file, obj, &args->value); break; case MSM_INFO_SET_NAME: /* length check should leave room for terminating null: */ diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index f69c6d62584d..51a5c9083e13 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -249,6 +249,10 @@ int msm_gem_map_vma(struct msm_gem_address_space *aspace, void msm_gem_close_vma(struct msm_gem_address_space *aspace, struct msm_gem_vma *vma); + +struct msm_gem_address_space * +msm_gem_address_space_get(struct msm_gem_address_space *aspace); + void msm_gem_address_space_put(struct msm_gem_address_space *aspace); struct msm_gem_address_space * @@ -434,6 +438,7 @@ static inline void msm_file_private_destroy(struct kref *kref) struct msm_file_private *ctx = container_of(kref, struct msm_file_private, ref); + msm_gem_address_space_put(ctx->aspace); kfree(ctx); } diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 5f6a11211b64..29cc1305cf37 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -27,6 +27,15 @@ void msm_gem_address_space_put(struct msm_gem_address_space *aspace) kref_put(&aspace->kref, msm_gem_address_space_destroy); } +struct msm_gem_address_space * +msm_gem_address_space_get(struct msm_gem_address_space *aspace) +{ + if (!IS_ERR_OR_NULL(aspace)) + kref_get(&aspace->kref); + + return aspace; +} + /* Actually unmap memory for the vma */ void msm_gem_purge_vma(struct msm_gem_address_space *aspace, struct msm_gem_vma *vma) diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index a1f3da6550e5..b070355369d8 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -823,6 +823,28 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu) return 0; } +/* Return a new address space for a msm_drm_private instance */ +struct msm_gem_address_space * +msm_gpu_create_private_address_space(struct msm_gpu *gpu) +{ + struct msm_gem_address_space *aspace = NULL; + + if (!gpu) + return NULL; + + /* + * If the target doesn't support private address spaces then return + * the global one + */ + if (gpu->funcs->create_private_address_space) + aspace = gpu->funcs->create_private_address_space(gpu); + + if (IS_ERR_OR_NULL(aspace)) + aspace = msm_gem_address_space_get(gpu->aspace); + + return aspace; +} + int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, struct msm_gpu 
*gpu, const struct msm_gpu_funcs *funcs, const char *name, struct msm_gpu_config *config) diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index d496d488222c..d298657b4730 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -64,6 +64,8 @@ struct msm_gpu_funcs { void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp); struct msm_gem_address_space *(*create_address_space) (struct msm_gpu *gpu, struct platform_device *pdev); + struct msm_gem_address_space *(*create_private_address_space) + (struct msm_gpu *gpu); }; struct msm_gpu { @@ -286,6 +288,9 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs, const char *name, struct msm_gpu_config *config); +struct msm_gem_address_space * +msm_gpu_create_private_address_space(struct msm_gpu *gpu); + void msm_gpu_cleanup(struct msm_gpu *gpu); struct msm_gpu *adreno_load_gpu(struct drm_device *dev); From patchwork Mon Aug 10 22:26:55 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jordan Crouse X-Patchwork-Id: 11708305 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D8E83618 for ; Mon, 10 Aug 2020 22:28:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BBCA420771 for ; Mon, 10 Aug 2020 22:28:15 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=mg.codeaurora.org header.i=@mg.codeaurora.org header.b="CdjHunNJ" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727900AbgHJW2M (ORCPT ); Mon, 10 Aug 2020 18:28:12 -0400 Received: from m43-7.mailgun.net ([69.72.43.7]:23253 "EHLO m43-7.mailgun.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727863AbgHJW2G (ORCPT ); Mon, 10 Aug 2020 18:28:06 -0400 DKIM-Signature: a=rsa-sha256; v=1; c=relaxed/relaxed; d=mg.codeaurora.org; q=dns/txt; s=smtp; t=1597098486; h=Content-Transfer-Encoding: MIME-Version: References: In-Reply-To: Message-Id: Date: Subject: Cc: To: From: Sender; bh=Kg5KcKniOF4VcUQ3JzzA0zCBvzqehxly4H3C177Y3gA=; b=CdjHunNJe0ATZMNoy+RYPeWjMEhby/ZgCOf6hNrSMbzKvfWc0d1v1D4+oWDuFJ3c7yZ9JS+T 3J1clFyOwXsl9lgrr73UX0D+G7lCa0IBWJpYYIYdZRngyGr6XsTslBBTJ0y2Zm9LCP0Bc43N UhZrqWzCD2lFZlWthxabwqWu6J8= X-Mailgun-Sending-Ip: 69.72.43.7 X-Mailgun-Sid: WyI1MzIzYiIsICJsaW51eC1hcm0tbXNtQHZnZXIua2VybmVsLm9yZyIsICJiZTllNGEiXQ== Received: from smtp.codeaurora.org (ec2-35-166-182-171.us-west-2.compute.amazonaws.com [35.166.182.171]) by smtp-out-n08.prod.us-west-2.postgun.com with SMTP id 5f31c9e103528d4024a7322c (version=TLS1.2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256); Mon, 10 Aug 2020 22:27:45 GMT Received: by smtp.codeaurora.org (Postfix, from userid 1001) id CD399C43453; Mon, 10 Aug 2020 22:27:39 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-caf-mail-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-1.0 required=2.0 tests=ALL_TRUSTED,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.0 Received: from jordan-laptop.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) (Authenticated sender: jcrouse) by smtp.codeaurora.org (Postfix) with ESMTPSA id 
2CF88C433A1; Mon, 10 Aug 2020 22:27:34 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 smtp.codeaurora.org 2CF88C433A1 Authentication-Results: aws-us-west-2-caf-mail-1.web.codeaurora.org; dmarc=none (p=none dis=none) header.from=codeaurora.org Authentication-Results: aws-us-west-2-caf-mail-1.web.codeaurora.org; spf=none smtp.mailfrom=jcrouse@codeaurora.org From: Jordan Crouse To: linux-arm-msm@vger.kernel.org Cc: Will Deacon , Robin Murphy , Bjorn Andersson , iommu@lists.linux-foundation.org, freedreno@lists.freedesktop.org, Sai Prakash Ranjan , Akhil P Oommen , Daniel Vetter , David Airlie , Emil Velikov , Eric Anholt , Jonathan Marek , Rob Clark , Sean Paul , Sharat Masetty , dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org Subject: [PATCH v12 11/13] drm/msm/a6xx: Add support for per-instance pagetables Date: Mon, 10 Aug 2020 16:26:55 -0600 Message-Id: <20200810222657.1841322-12-jcrouse@codeaurora.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org> References: <20200810222657.1841322-1-jcrouse@codeaurora.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add support for using per-instance pagetables if all the dependencies are available. Signed-off-by: Jordan Crouse --- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 70 +++++++++++++++++++++++++++ drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 1 + drivers/gpu/drm/msm/msm_ringbuffer.h | 1 + 3 files changed, 72 insertions(+) diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 5eabb0109577..9653ac9b3cb8 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -81,6 +81,56 @@ static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter, OUT_RING(ring, upper_32_bits(iova)); } +static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, + struct msm_ringbuffer *ring, struct msm_file_private *ctx) +{ + phys_addr_t ttbr; + u32 asid; + u64 memptr = rbmemptr(ring, ttbr0); + + if (ctx == a6xx_gpu->cur_ctx) + return; + + if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid)) + return; + + /* Execute the table update */ + OUT_PKT7(ring, CP_SMMU_TABLE_UPDATE, 4); + OUT_RING(ring, CP_SMMU_TABLE_UPDATE_0_TTBR0_LO(lower_32_bits(ttbr))); + + /* + * For now ignore the asid since the smmu driver uses a TLBIASID to + * flush the TLB when we use iommu_flush_tlb_all() and the smmu driver + * isn't aware that the asid changed. Instead, keep the default asid + * (0, same as the context bank) to make sure the TLB is properly + * flushed. + */ + OUT_RING(ring, + CP_SMMU_TABLE_UPDATE_1_TTBR0_HI(upper_32_bits(ttbr)) | + CP_SMMU_TABLE_UPDATE_1_ASID(0)); + OUT_RING(ring, CP_SMMU_TABLE_UPDATE_2_CONTEXTIDR(0)); + OUT_RING(ring, CP_SMMU_TABLE_UPDATE_3_CONTEXTBANK(0)); + + /* + * Write the new TTBR0 to the memstore. This is good for debugging. 
+ */ + OUT_PKT7(ring, CP_MEM_WRITE, 4); + OUT_RING(ring, CP_MEM_WRITE_0_ADDR_LO(lower_32_bits(memptr))); + OUT_RING(ring, CP_MEM_WRITE_1_ADDR_HI(upper_32_bits(memptr))); + OUT_RING(ring, lower_32_bits(ttbr)); + OUT_RING(ring, (0 << 16) | upper_32_bits(ttbr)); + + /* + * And finally, trigger a uche flush to be sure there isn't anything + * lingering in that part of the GPU + */ + + OUT_PKT7(ring, CP_EVENT_WRITE, 1); + OUT_RING(ring, 0x31); + + a6xx_gpu->cur_ctx = ctx; +} + static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) { unsigned int index = submit->seqno % MSM_GPU_SUBMIT_STATS_COUNT; @@ -90,6 +140,8 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) struct msm_ringbuffer *ring = submit->ring; unsigned int i; + a6xx_set_pagetable(a6xx_gpu, ring, submit->queue->ctx); + get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP_0_LO, rbmemptr_stats(ring, index, cpcycles_start)); @@ -696,6 +748,8 @@ static int a6xx_hw_init(struct msm_gpu *gpu) /* Always come up on rb 0 */ a6xx_gpu->cur_ring = gpu->rb[0]; + a6xx_gpu->cur_ctx = NULL; + /* Enable the SQE_to start the CP engine */ gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 1); @@ -1008,6 +1062,21 @@ static unsigned long a6xx_gpu_busy(struct msm_gpu *gpu) return (unsigned long)busy_time; } +static struct msm_gem_address_space * +a6xx_create_private_address_space(struct msm_gpu *gpu) +{ + struct msm_gem_address_space *aspace = NULL; + struct msm_mmu *mmu; + + mmu = msm_iommu_pagetable_create(gpu->aspace->mmu); + + if (!IS_ERR(mmu)) + aspace = msm_gem_address_space_create(mmu, + "gpu", 0x100000000ULL, 0x1ffffffffULL); + + return aspace; +} + static const struct adreno_gpu_funcs funcs = { .base = { .get_param = adreno_get_param, @@ -1031,6 +1100,7 @@ static const struct adreno_gpu_funcs funcs = { .gpu_state_put = a6xx_gpu_state_put, #endif .create_address_space = adreno_iommu_create_address_space, + .create_private_address_space = a6xx_create_private_address_space, }, .get_timestamp = a6xx_get_timestamp, }; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h index 03ba60d5b07f..da22d7549d9b 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h @@ -19,6 +19,7 @@ struct a6xx_gpu { uint64_t sqe_iova; struct msm_ringbuffer *cur_ring; + struct msm_file_private *cur_ctx; struct a6xx_gmu gmu; }; diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h index 7764373d0ed2..0987d6bf848c 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.h +++ b/drivers/gpu/drm/msm/msm_ringbuffer.h @@ -31,6 +31,7 @@ struct msm_rbmemptrs { volatile uint32_t fence; volatile struct msm_gpu_submit_stats stats[MSM_GPU_SUBMIT_STATS_COUNT]; + volatile u64 ttbr0; }; struct msm_ringbuffer { From patchwork Mon Aug 10 22:26:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jordan Crouse X-Patchwork-Id: 11708301 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B7625618 for ; Mon, 10 Aug 2020 22:28:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 938D220782 for ; Mon, 10 Aug 2020 22:28:04 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=mg.codeaurora.org header.i=@mg.codeaurora.org header.b="FMKHSV+v" Received: 
(majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727873AbgHJW2D (ORCPT ); Mon, 10 Aug 2020 18:28:03 -0400 Received: from mail29.static.mailgun.info ([104.130.122.29]:14937 "EHLO mail29.static.mailgun.info" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727845AbgHJW2D (ORCPT ); Mon, 10 Aug 2020 18:28:03 -0400 DKIM-Signature: a=rsa-sha256; v=1; c=relaxed/relaxed; d=mg.codeaurora.org; q=dns/txt; s=smtp; t=1597098482; h=Content-Transfer-Encoding: MIME-Version: References: In-Reply-To: Message-Id: Date: Subject: Cc: To: From: Sender; bh=yOMYncf5nMXrU1PRqoN8EKXZ0DovQfRsHybdYeETpeQ=; b=FMKHSV+vzOlIOc3TpfSKv7jQzUh/290nGYXBMG/CPxQWyOlRrd8R3O+aFcd7eakYHoUsV35y C0IRYP4jcGH8nGS+kB0Q89yFED0TFoVs26lKzyF7YdH77+oSJ1mG/JrRCnJFpMUEoUirKqXo j/5P5hm4d9pP9XwH6HHf1EYUN74= X-Mailgun-Sending-Ip: 104.130.122.29 X-Mailgun-Sid: WyI1MzIzYiIsICJsaW51eC1hcm0tbXNtQHZnZXIua2VybmVsLm9yZyIsICJiZTllNGEiXQ== Received: from smtp.codeaurora.org (ec2-35-166-182-171.us-west-2.compute.amazonaws.com [35.166.182.171]) by smtp-out-n04.prod.us-west-2.postgun.com with SMTP id 5f31c9e12889723bf87907e2 (version=TLS1.2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256); Mon, 10 Aug 2020 22:27:45 GMT Received: by smtp.codeaurora.org (Postfix, from userid 1001) id 116ECC4344F; Mon, 10 Aug 2020 22:27:41 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-caf-mail-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-1.0 required=2.0 tests=ALL_TRUSTED,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.0 Received: from jordan-laptop.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) (Authenticated sender: jcrouse) by smtp.codeaurora.org (Postfix) with ESMTPSA id 917CBC4344D; Mon, 10 Aug 2020 22:27:37 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 smtp.codeaurora.org 917CBC4344D Authentication-Results: aws-us-west-2-caf-mail-1.web.codeaurora.org; dmarc=none (p=none dis=none) header.from=codeaurora.org Authentication-Results: aws-us-west-2-caf-mail-1.web.codeaurora.org; spf=none smtp.mailfrom=jcrouse@codeaurora.org From: Jordan Crouse To: linux-arm-msm@vger.kernel.org Cc: Will Deacon , Robin Murphy , Bjorn Andersson , iommu@lists.linux-foundation.org, freedreno@lists.freedesktop.org, Sai Prakash Ranjan , Andy Gross , Rob Herring , devicetree@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v12 12/13] arm: dts: qcom: sm845: Set the compatible string for the GPU SMMU Date: Mon, 10 Aug 2020 16:26:56 -0600 Message-Id: <20200810222657.1841322-13-jcrouse@codeaurora.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org> References: <20200810222657.1841322-1-jcrouse@codeaurora.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Set the qcom,adreno-smmu compatible string for the GPU SMMU to enable split pagetables and per-instance pagetables for drm/msm. 
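Earlier patches in the series require the GPU SMMU node to carry this compatible before per-instance pagetables are enabled. As a rough illustration of the device-tree side of that contract (hypothetical helper name, using the generic OF API rather than the series' actual arm-smmu match tables):

#include <linux/of.h>

/*
 * Illustration only: detecting the Adreno GPU SMMU by its compatible
 * string.  The series itself selects the implementation through the
 * arm-smmu match logic; this merely shows the binding the DT change
 * below establishes.
 */
static bool example_is_adreno_smmu(const struct device_node *np)
{
	return of_device_is_compatible(np, "qcom,adreno-smmu");
}

Note that the diff below keeps "qcom,smmu-v2" as a fallback compatible, so a kernel without the Adreno-specific implementation still binds the generic arm-smmu driver.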
Signed-off-by: Jordan Crouse --- arch/arm64/boot/dts/qcom/sdm845.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi index 2884577dcb77..6a9adaa401a9 100644 --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi @@ -4058,7 +4058,7 @@ opp-257000000 { }; adreno_smmu: iommu@5040000 { - compatible = "qcom,sdm845-smmu-v2", "qcom,smmu-v2"; + compatible = "qcom,adreno-smmu", "qcom,smmu-v2"; reg = <0 0x5040000 0 0x10000>; #iommu-cells = <1>; #global-interrupts = <2>; From patchwork Mon Aug 10 22:26:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jordan Crouse X-Patchwork-Id: 11708289 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AFB51618 for ; Mon, 10 Aug 2020 22:27:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9366A20768 for ; Mon, 10 Aug 2020 22:27:53 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=mg.codeaurora.org header.i=@mg.codeaurora.org header.b="aeIj/qT7" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727810AbgHJW1w (ORCPT ); Mon, 10 Aug 2020 18:27:52 -0400 Received: from m43-7.mailgun.net ([69.72.43.7]:43034 "EHLO m43-7.mailgun.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727794AbgHJW1v (ORCPT ); Mon, 10 Aug 2020 18:27:51 -0400 DKIM-Signature: a=rsa-sha256; v=1; c=relaxed/relaxed; d=mg.codeaurora.org; q=dns/txt; s=smtp; t=1597098470; h=Content-Transfer-Encoding: MIME-Version: References: In-Reply-To: Message-Id: Date: Subject: Cc: To: From: Sender; bh=luFjlnOjyWOq8Y2pNHotM6G2poKKOZibPxqXcVoTZhI=; b=aeIj/qT7ZwITy0qwHtnXPQOX65j+q+JQ3g/giFuGKl5KUiO/nlsC9otEsM6S3oaZWunii6H+ xe9DGDKgSPiPfkGm0+VPjPuIjH5BHSK50guYEyqnN7gX6fOb51h5Uu8RmN8rl0af1pwIZPZX JTUnYSOTzN/O/4BEO2FKSNnq140= X-Mailgun-Sending-Ip: 69.72.43.7 X-Mailgun-Sid: WyI1MzIzYiIsICJsaW51eC1hcm0tbXNtQHZnZXIua2VybmVsLm9yZyIsICJiZTllNGEiXQ== Received: from smtp.codeaurora.org (ec2-35-166-182-171.us-west-2.compute.amazonaws.com [35.166.182.171]) by smtp-out-n09.prod.us-west-2.postgun.com with SMTP id 5f31c9e63f2ce110209da998 (version=TLS1.2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256); Mon, 10 Aug 2020 22:27:50 GMT Received: by smtp.codeaurora.org (Postfix, from userid 1001) id 5B0A1C4344F; Mon, 10 Aug 2020 22:27:46 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-caf-mail-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-1.0 required=2.0 tests=ALL_TRUSTED,SPF_NONE autolearn=unavailable autolearn_force=no version=3.4.0 Received: from jordan-laptop.qualcomm.com (Global_NAT1.qualcomm.com [129.46.96.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) (Authenticated sender: jcrouse) by smtp.codeaurora.org (Postfix) with ESMTPSA id 38B0EC4345B; Mon, 10 Aug 2020 22:27:40 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 smtp.codeaurora.org 38B0EC4345B Authentication-Results: aws-us-west-2-caf-mail-1.web.codeaurora.org; dmarc=none (p=none dis=none) header.from=codeaurora.org Authentication-Results: aws-us-west-2-caf-mail-1.web.codeaurora.org; spf=none smtp.mailfrom=jcrouse@codeaurora.org From: Jordan Crouse To: 
linux-arm-msm@vger.kernel.org Cc: Will Deacon , Robin Murphy , Bjorn Andersson , iommu@lists.linux-foundation.org, freedreno@lists.freedesktop.org, Sai Prakash Ranjan , Greg Kroah-Hartman , Joerg Roedel , Krishna Reddy , Pritesh Raithatha , Sibi Sankar , Stephen Boyd , Thierry Reding , Vivek Gautam , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [RFC v12 13/13] iommu/arm-smmu: Add a init_context_bank implementation hook Date: Mon, 10 Aug 2020 16:26:57 -0600 Message-Id: <20200810222657.1841322-14-jcrouse@codeaurora.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org> References: <20200810222657.1841322-1-jcrouse@codeaurora.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Add a new implementation hook to allow the implementation specific code to tweek the context bank configuration just before it gets written. The first user will be the Adreno GPU implementation to turn on SCTLR.HUPCF to ensure that a page fault doesn't terminating pending transactions. Doing so could hang the GPU if one of the terminated transactions is a CP read. Signed-off-by: Jordan Crouse --- drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c | 16 ++++++++++++++++ drivers/iommu/arm/arm-smmu/arm-smmu.c | 21 +++++++++++++-------- drivers/iommu/arm/arm-smmu/arm-smmu.h | 5 +++++ 3 files changed, 34 insertions(+), 8 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c index 3be10145bf57..baa026ddca1c 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c @@ -34,6 +34,19 @@ static bool qcom_adreno_smmu_is_gpu_device(struct device *dev) return false; } +#define QCOM_ADRENO_SMMU_GPU 1 + +static void qcom_adreno_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain, + struct arm_smmu_cb *cb) +{ + /* + * On the GPU device we want to process subsequent transactions after a + * fault to keep the GPU from hanging + */ + if (cb->cfg->priv == QCOM_ADRENO_SMMU_GPU) + cb->sctlr |= ARM_SMMU_SCTLR_HUPCF; +} + /* * Local implementation to configure TTBR0 with the specified pagetable config. 
* The GPU driver will call this to enable TTBR0 when per-instance pagetables @@ -120,6 +133,7 @@ static int qcom_adreno_smmu_alloc_context_bank(struct arm_smmu_domain *smmu_doma count = 1; } else { start = 1; + smmu_domain->cfg.priv = QCOM_ADRENO_SMMU_GPU; } return __arm_smmu_alloc_bitmap(smmu->context_map, start, count); @@ -141,6 +155,7 @@ static int qcom_adreno_smmu_init_context(struct arm_smmu_domain *smmu_domain, (smmu_domain->cfg.fmt == ARM_SMMU_CTX_FMT_AARCH64)) pgtbl_cfg->quirks |= IO_PGTABLE_QUIRK_ARM_TTBR1; + return 0; } @@ -204,6 +219,7 @@ static const struct arm_smmu_impl qcom_adreno_smmu_impl = { .alloc_context_bank = qcom_adreno_smmu_alloc_context_bank, .domain_set_attr = qcom_adreno_smmu_domain_set_attr, .domain_get_attr = qcom_adreno_smmu_domain_get_attr, + .init_context_bank = qcom_adreno_smmu_init_context_bank, }; static struct arm_smmu_device *qcom_smmu_create(struct arm_smmu_device *smmu, diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c index e0a3e0da885b..6225649bbfef 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c @@ -532,6 +532,18 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain, cb->mair[1] = pgtbl_cfg->arm_lpae_s1_cfg.mair >> 32; } } + + cb->sctlr = ARM_SMMU_SCTLR_CFIE | ARM_SMMU_SCTLR_CFRE | ARM_SMMU_SCTLR_AFE | + ARM_SMMU_SCTLR_TRE | ARM_SMMU_SCTLR_M; + + if (stage1) + cb->sctlr |= ARM_SMMU_SCTLR_S1_ASIDPNE; + if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN)) + cb->sctlr |= ARM_SMMU_SCTLR_E; + + /* Give the implementation a chance to adjust the configuration */ + if (smmu_domain->smmu->impl && smmu_domain->smmu->impl->init_context_bank) + smmu_domain->smmu->impl->init_context_bank(smmu_domain, cb); } void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx) @@ -610,14 +622,7 @@ void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx) } /* SCTLR */ - reg = ARM_SMMU_SCTLR_CFIE | ARM_SMMU_SCTLR_CFRE | ARM_SMMU_SCTLR_AFE | - ARM_SMMU_SCTLR_TRE | ARM_SMMU_SCTLR_M; - if (stage1) - reg |= ARM_SMMU_SCTLR_S1_ASIDPNE; - if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN)) - reg |= ARM_SMMU_SCTLR_E; - - arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_SCTLR, reg); + arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_SCTLR, cb->sctlr); } static int arm_smmu_init_domain_context(struct iommu_domain *domain, diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.h b/drivers/iommu/arm/arm-smmu/arm-smmu.h index 870f0fd060a5..e84b8da8b93b 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.h +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.h @@ -143,6 +143,7 @@ enum arm_smmu_cbar_type { #define ARM_SMMU_CB_SCTLR 0x0 #define ARM_SMMU_SCTLR_S1_ASIDPNE BIT(12) +#define ARM_SMMU_SCTLR_HUPCF BIT(8) #define ARM_SMMU_SCTLR_CFCFG BIT(7) #define ARM_SMMU_SCTLR_CFIE BIT(6) #define ARM_SMMU_SCTLR_CFRE BIT(5) @@ -343,6 +344,7 @@ struct arm_smmu_cfg { }; enum arm_smmu_cbar_type cbar; enum arm_smmu_context_fmt fmt; + unsigned long priv; }; #define ARM_SMMU_INVALID_IRPTNDX 0xff @@ -350,6 +352,7 @@ struct arm_smmu_cb { u64 ttbr[2]; u32 tcr[2]; u32 mair[2]; + u32 sctlr; struct arm_smmu_cfg *cfg; }; @@ -439,6 +442,8 @@ struct arm_smmu_impl { enum iommu_attr attr, void *data); int (*domain_set_attr)(struct arm_smmu_domain *smmu_domain, enum iommu_attr attr, void *data); + void (*init_context_bank)(struct arm_smmu_domain *smmu_domain, + struct arm_smmu_cb *cb); }; #define INVALID_SMENDX -1 From patchwork Fri Aug 14 02:41:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 
7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11713429 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DE32B109B for ; Fri, 14 Aug 2020 02:42:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BC27320885 for ; Fri, 14 Aug 2020 02:42:03 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="pzZUIGcw" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726786AbgHNCmA (ORCPT ); Thu, 13 Aug 2020 22:42:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42140 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726587AbgHNCl7 (ORCPT ); Thu, 13 Aug 2020 22:41:59 -0400 Received: from mail-pl1-x641.google.com (mail-pl1-x641.google.com [IPv6:2607:f8b0:4864:20::641]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C0122C061757; Thu, 13 Aug 2020 19:41:59 -0700 (PDT) Received: by mail-pl1-x641.google.com with SMTP id t11so3527205plr.5; Thu, 13 Aug 2020 19:41:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=cNcI7HrS0AvVaVc1646ajuPmDSYJLUf29QkaS+BoOVY=; b=pzZUIGcw49UDhnzFSNiJLflzdLOYeaztPxIO8Z2rq7KbTpKDbOu6Q/NfnaJ0pBfO1Z POQ7qfbCMMNpd17V3W2gtt6CGkwN+5o3Z4KIj2ays4lmq1xVKH+zBbMnACz5FKN4BSYc 3rMhRbcgOcjsv+Nlwuype4ly1JI8DzU9U3+EpFJXS86R6pcGluuupPd8fSiz1JlLNaal Yq2hZjPQSab6JDrdnvNJxmb5Eu0oSBD2r8MmILivxuatn+mV5+EKhPfiDZ0tHAPPMec8 ms+34lpasfaU9vBIhPXcOvFz5xkW5+4mY4erNzSlet1/bRMUcEjfsP3Cn1qQ1y64auPs 28kA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=cNcI7HrS0AvVaVc1646ajuPmDSYJLUf29QkaS+BoOVY=; b=OmJGuEBHzjXrH3PBo1Wf0OTn3mW1cqlmbVNeXob2TdU7TmuikIo6CHP13ghMr99WEz BQ6FQGC183N3zG0+1Ezob1xGb8JL+y+UpMKEmBjtOXGLYcjUp2KKjaQiwZu8izQicAwd Kh/FA6m3bWVR2uctDFq5c7zZK6ObgDWuHLafrGKP1h+u3iWno9ET7BPOd+zz0tSDvhH+ WDVkxyfnz3637wTnC14ytwx6ZEgru1DlFlIIWFHhqzkm1E4Iq9dxwczYvUhg1ZCxq+Ih 0XujK9yQpJwG/jM2Rst+mtRmFtYQYC+Wj0ttPOvjd2UmqezLe6XHphwbkb327eZmhSGt K3Mw== X-Gm-Message-State: AOAM530CohXDcYnXVMVu2hVt0iX9xujqz1geUAfc0Clkv/vsq3yqTieW yUXzUViVyRQzn885BgPzCwg= X-Google-Smtp-Source: ABdhPJycLuwK0gbIMMbYhrOWKhbzBvp6l116+BY14zk2g2riKk8oOqQ2oO4VNgrA+LtOYcNRiL4Ndg== X-Received: by 2002:a17:90a:fd82:: with SMTP id cx2mr536746pjb.67.1597372919243; Thu, 13 Aug 2020 19:41:59 -0700 (PDT) Received: from localhost ([2601:1c0:5200:a6:307:a401:7b76:c6e5]) by smtp.gmail.com with ESMTPSA id g2sm6352994pju.23.2020.08.13.19.41.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 13 Aug 2020 19:41:58 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org, iommu@lists.linux-foundation.org, linux-arm-msm@vger.kernel.org Cc: Sai Prakash Ranjan , Will Deacon , freedreno@lists.freedesktop.org, Bjorn Andersson , Sibi Sankar , Vivek Gautam , Stephen Boyd , Robin Murphy , Joerg Roedel , linux-arm-kernel@lists.infradead.org, Jordan Crouse , Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH 14/19] drm/msm: Add support to create a local pagetable Date: Thu, 13 Aug 2020 19:41:09 -0700 Message-Id: 
<20200814024114.1177553-15-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org> References: <20200810222657.1841322-1-jcrouse@codeaurora.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Jordan Crouse Add support to create a io-pgtable for use by targets that support per-instance pagetables. In order to support per-instance pagetables the GPU SMMU device needs to have the qcom,adreno-smmu compatible string and split pagetables enabled. Signed-off-by: Jordan Crouse Signed-off-by: Rob Clark Reviewed-by: Bjorn Andersson --- drivers/gpu/drm/msm/msm_gpummu.c | 2 +- drivers/gpu/drm/msm/msm_iommu.c | 199 ++++++++++++++++++++++++++++++- drivers/gpu/drm/msm/msm_mmu.h | 16 ++- 3 files changed, 214 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gpummu.c b/drivers/gpu/drm/msm/msm_gpummu.c index 310a31b05faa..aab121f4beb7 100644 --- a/drivers/gpu/drm/msm/msm_gpummu.c +++ b/drivers/gpu/drm/msm/msm_gpummu.c @@ -102,7 +102,7 @@ struct msm_mmu *msm_gpummu_new(struct device *dev, struct msm_gpu *gpu) } gpummu->gpu = gpu; - msm_mmu_init(&gpummu->base, dev, &funcs); + msm_mmu_init(&gpummu->base, dev, &funcs, MSM_MMU_GPUMMU); return &gpummu->base; } diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c index 1b6635504069..697cc0a059d6 100644 --- a/drivers/gpu/drm/msm/msm_iommu.c +++ b/drivers/gpu/drm/msm/msm_iommu.c @@ -4,15 +4,210 @@ * Author: Rob Clark */ +#include +#include #include "msm_drv.h" #include "msm_mmu.h" struct msm_iommu { struct msm_mmu base; struct iommu_domain *domain; + atomic_t pagetables; }; + #define to_msm_iommu(x) container_of(x, struct msm_iommu, base) +struct msm_iommu_pagetable { + struct msm_mmu base; + struct msm_mmu *parent; + struct io_pgtable_ops *pgtbl_ops; + phys_addr_t ttbr; + u32 asid; +}; +static struct msm_iommu_pagetable *to_pagetable(struct msm_mmu *mmu) +{ + return container_of(mmu, struct msm_iommu_pagetable, base); +} + +static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova, + size_t size) +{ + struct msm_iommu_pagetable *pagetable = to_pagetable(mmu); + struct io_pgtable_ops *ops = pagetable->pgtbl_ops; + size_t unmapped = 0; + + /* Unmap the block one page at a time */ + while (size) { + unmapped += ops->unmap(ops, iova, 4096, NULL); + iova += 4096; + size -= 4096; + } + + iommu_flush_tlb_all(to_msm_iommu(pagetable->parent)->domain); + + return (unmapped == size) ? 
0 : -EINVAL; +} + +static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, + struct sg_table *sgt, size_t len, int prot) +{ + struct msm_iommu_pagetable *pagetable = to_pagetable(mmu); + struct io_pgtable_ops *ops = pagetable->pgtbl_ops; + struct scatterlist *sg; + size_t mapped = 0; + u64 addr = iova; + unsigned int i; + + for_each_sg(sgt->sgl, sg, sgt->nents, i) { + size_t size = sg->length; + phys_addr_t phys = sg_phys(sg); + + /* Map the block one page at a time */ + while (size) { + if (ops->map(ops, addr, phys, 4096, prot, GFP_KERNEL)) { + msm_iommu_pagetable_unmap(mmu, iova, mapped); + return -EINVAL; + } + + phys += 4096; + addr += 4096; + size -= 4096; + mapped += 4096; + } + } + + return 0; +} + +static void msm_iommu_pagetable_destroy(struct msm_mmu *mmu) +{ + struct msm_iommu_pagetable *pagetable = to_pagetable(mmu); + struct msm_iommu *iommu = to_msm_iommu(pagetable->parent); + struct adreno_smmu_priv *adreno_smmu = + dev_get_drvdata(pagetable->parent->dev); + + /* + * If this is the last attached pagetable for the parent, + * disable TTBR0 in the arm-smmu driver + */ + if (atomic_dec_return(&iommu->pagetables) == 0) + adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, NULL); + + free_io_pgtable_ops(pagetable->pgtbl_ops); + kfree(pagetable); +} + +int msm_iommu_pagetable_params(struct msm_mmu *mmu, + phys_addr_t *ttbr, int *asid) +{ + struct msm_iommu_pagetable *pagetable; + + if (mmu->type != MSM_MMU_IOMMU_PAGETABLE) + return -EINVAL; + + pagetable = to_pagetable(mmu); + + if (ttbr) + *ttbr = pagetable->ttbr; + + if (asid) + *asid = pagetable->asid; + + return 0; +} + +static const struct msm_mmu_funcs pagetable_funcs = { + .map = msm_iommu_pagetable_map, + .unmap = msm_iommu_pagetable_unmap, + .destroy = msm_iommu_pagetable_destroy, +}; + +static void msm_iommu_tlb_flush_all(void *cookie) +{ +} + +static void msm_iommu_tlb_flush_walk(unsigned long iova, size_t size, + size_t granule, void *cookie) +{ +} + +static void msm_iommu_tlb_add_page(struct iommu_iotlb_gather *gather, + unsigned long iova, size_t granule, void *cookie) +{ +} + +static const struct iommu_flush_ops null_tlb_ops = { + .tlb_flush_all = msm_iommu_tlb_flush_all, + .tlb_flush_walk = msm_iommu_tlb_flush_walk, + .tlb_flush_leaf = msm_iommu_tlb_flush_walk, + .tlb_add_page = msm_iommu_tlb_add_page, +}; + +struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent) +{ + struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(parent->dev); + struct msm_iommu *iommu = to_msm_iommu(parent); + struct msm_iommu_pagetable *pagetable; + const struct io_pgtable_cfg *ttbr1_cfg = NULL; + struct io_pgtable_cfg ttbr0_cfg; + int ret; + + /* Get the pagetable configuration from the domain */ + if (adreno_smmu->cookie) + ttbr1_cfg = adreno_smmu->get_ttbr1_cfg(adreno_smmu->cookie); + if (!ttbr1_cfg) + return ERR_PTR(-ENODEV); + + pagetable = kzalloc(sizeof(*pagetable), GFP_KERNEL); + if (!pagetable) + return ERR_PTR(-ENOMEM); + + msm_mmu_init(&pagetable->base, parent->dev, &pagetable_funcs, + MSM_MMU_IOMMU_PAGETABLE); + + /* Clone the TTBR1 cfg as starting point for TTBR0 cfg: */ + ttbr0_cfg = *ttbr1_cfg; + + /* The incoming cfg will have the TTBR1 quirk enabled */ + ttbr0_cfg.quirks &= ~IO_PGTABLE_QUIRK_ARM_TTBR1; + ttbr0_cfg.tlb = &null_tlb_ops; + + pagetable->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1, + &ttbr0_cfg, iommu->domain); + + if (!pagetable->pgtbl_ops) { + kfree(pagetable); + return ERR_PTR(-ENOMEM); + } + + /* + * If this is the first pagetable that we've allocated, send it back to + * the 
arm-smmu driver as a trigger to set up TTBR0 + */ + if (atomic_inc_return(&iommu->pagetables) == 1) { + ret = adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, &ttbr0_cfg); + if (ret) { + free_io_pgtable_ops(pagetable->pgtbl_ops); + kfree(pagetable); + return ERR_PTR(ret); + } + } + + /* Needed later for TLB flush */ + pagetable->parent = parent; + pagetable->ttbr = ttbr0_cfg.arm_lpae_s1_cfg.ttbr; + + /* + * TODO we would like each set of page tables to have a unique ASID + * to optimize TLB invalidation. But iommu_flush_tlb_all() will + * end up flushing the ASID used for TTBR1 pagetables, which is not + * what we want. So for now just use the same ASID as TTBR1. + */ + pagetable->asid = 0; + + return &pagetable->base; +} + static int msm_fault_handler(struct iommu_domain *domain, struct device *dev, unsigned long iova, int flags, void *arg) { @@ -85,9 +280,11 @@ struct msm_mmu *msm_iommu_new(struct device *dev, struct iommu_domain *domain) return ERR_PTR(-ENOMEM); iommu->domain = domain; - msm_mmu_init(&iommu->base, dev, &funcs); + msm_mmu_init(&iommu->base, dev, &funcs, MSM_MMU_IOMMU); iommu_set_fault_handler(domain, msm_fault_handler, iommu); + atomic_set(&iommu->pagetables, 0); + ret = iommu_attach_device(iommu->domain, dev); if (ret) { kfree(iommu); diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h index 3a534ee59bf6..61ade89d9e48 100644 --- a/drivers/gpu/drm/msm/msm_mmu.h +++ b/drivers/gpu/drm/msm/msm_mmu.h @@ -17,18 +17,26 @@ struct msm_mmu_funcs { void (*destroy)(struct msm_mmu *mmu); }; +enum msm_mmu_type { + MSM_MMU_GPUMMU, + MSM_MMU_IOMMU, + MSM_MMU_IOMMU_PAGETABLE, +}; + struct msm_mmu { const struct msm_mmu_funcs *funcs; struct device *dev; int (*handler)(void *arg, unsigned long iova, int flags); void *arg; + enum msm_mmu_type type; }; static inline void msm_mmu_init(struct msm_mmu *mmu, struct device *dev, - const struct msm_mmu_funcs *funcs) + const struct msm_mmu_funcs *funcs, enum msm_mmu_type type) { mmu->dev = dev; mmu->funcs = funcs; + mmu->type = type; } struct msm_mmu *msm_iommu_new(struct device *dev, struct iommu_domain *domain); @@ -41,7 +49,13 @@ static inline void msm_mmu_set_fault_handler(struct msm_mmu *mmu, void *arg, mmu->handler = handler; } +struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent); + void msm_gpummu_params(struct msm_mmu *mmu, dma_addr_t *pt_base, dma_addr_t *tran_error); + +int msm_iommu_pagetable_params(struct msm_mmu *mmu, phys_addr_t *ttbr, + int *asid); + #endif /* __MSM_MMU_H__ */ From patchwork Fri Aug 14 02:41:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11713437 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 62DCA109B for ; Fri, 14 Aug 2020 02:42:11 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4171222BEB for ; Fri, 14 Aug 2020 02:42:11 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="EFIXpeqe" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726837AbgHNCmH (ORCPT ); Thu, 13 Aug 2020 22:42:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42152 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726664AbgHNCmD (ORCPT ); Thu, 13 Aug 2020 22:42:03 
-0400 Received: from mail-pg1-x543.google.com (mail-pg1-x543.google.com [IPv6:2607:f8b0:4864:20::543]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E672FC061757; Thu, 13 Aug 2020 19:42:02 -0700 (PDT) Received: by mail-pg1-x543.google.com with SMTP id 189so3181959pgg.13; Thu, 13 Aug 2020 19:42:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=E59Nq2pttf4LeQXvZqVPv0sPEVlUkfb3pwyFFLxnuGA=; b=EFIXpeqexttFjeEDRuhe+DhNoMr3tB1PiDz671qzAllqT3f59uuNhJjtIkYdsRGCVb 0pI2GRkvRnTHv+kxv/mZZMZEFfz3kB8MjGbhhL/YtpA1M9rjWNfJoVRmxCm7WiWdKn9p vHtrjgta/w0HhClZywrtHfG4M9Pa/YrESyWG/BGKAJLhgc/Kw4Zo75NvA6z0MWQof/qr hY4cZiJ5DrFOX8YUwfQNh8u2X3DgDNZdZp9/BrwriOcX3kF9cb7ZOlfD8xn49wnvSsU1 IRFuYssusa5sV9+x67BmcBdNZvrXX+nBw02FbjJikXHVHGEUAgvs14RvYTQnCqtmkgpW SVbQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=E59Nq2pttf4LeQXvZqVPv0sPEVlUkfb3pwyFFLxnuGA=; b=WLSQJ0dcstOcM63hwbCXiUW9M3h0ksTF4ZLQJWRYs1Tc6H/krOh56N+bYMj998/Xi5 A2xtKu8R0EwOisrZYG4BZl2ytc9xLnOl8zwgV5oQma/Vu6fsZlXuRJDpG3PKaI8hmLYA 0fDV64tW+f1pewD2WyF9iEqOxmTGsxmPT30zP2meePgFZB18Xpcu0XH5ItExpWSmMp7R 5mjdqlAKY7t3adJd5GjZZUYqtxeIR1sv2oVHX+ljeSn5fPptosJkNV2TVgjdpno7UoFy PXl1tud15BSgyc2Ah0kpsyf+UuMAoBhWgFPbdSWV1CITIR3F0jWd2XxO+WMqGMMe8X+L GiJA== X-Gm-Message-State: AOAM533A9RqDEwtab7lIXvqY6qCMofZJazGCkIrymSflfF+U3en9eXaj 75zhGioMce6Clz40sRPhpzyExIt/83AYjA== X-Google-Smtp-Source: ABdhPJzsyABvOOf2O9DGT5EKq+q6ngPmNzGfyyoO6Rzaje181mDwB92bG4ZFa1F8atypLlY9SNR+fQ== X-Received: by 2002:a63:7a06:: with SMTP id v6mr354721pgc.441.1597372921873; Thu, 13 Aug 2020 19:42:01 -0700 (PDT) Received: from localhost ([2601:1c0:5200:a6:307:a401:7b76:c6e5]) by smtp.gmail.com with ESMTPSA id d23sm2233273pgm.11.2020.08.13.19.42.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 13 Aug 2020 19:42:00 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org, iommu@lists.linux-foundation.org, linux-arm-msm@vger.kernel.org Cc: Sai Prakash Ranjan , Will Deacon , freedreno@lists.freedesktop.org, Bjorn Andersson , Sibi Sankar , Vivek Gautam , Stephen Boyd , Robin Murphy , Joerg Roedel , linux-arm-kernel@lists.infradead.org, Jordan Crouse , Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH 15/19] drm/msm: Add support for private address space instances Date: Thu, 13 Aug 2020 19:41:10 -0700 Message-Id: <20200814024114.1177553-16-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org> References: <20200810222657.1841322-1-jcrouse@codeaurora.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Jordan Crouse Add support for allocating private address space instances. Targets that support per-context pagetables should implement their own function to allocate private address spaces. The default will return a pointer to the global address space. 
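[Editorial sketch, not part of this patch: roughly what such a target-specific hook looks like, modeled on the a6xx implementation added later in this series. The example_ name and the 4GB VA window are illustrative.]

	static struct msm_gem_address_space *
	example_create_private_address_space(struct msm_gpu *gpu)
	{
		struct msm_gem_address_space *aspace = NULL;
		struct msm_mmu *mmu;

		/* Clone a TTBR0 pagetable from the GPU's global SMMU domain */
		mmu = msm_iommu_pagetable_create(gpu->aspace->mmu);

		if (!IS_ERR(mmu))
			aspace = msm_gem_address_space_create(mmu, "gpu",
					0x100000000ULL, 0x1ffffffffULL);

		/*
		 * Returning NULL or an ERR_PTR here makes
		 * msm_gpu_create_private_address_space() fall back to a
		 * reference on the shared global address space.
		 */
		return aspace;
	}
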
Signed-off-by: Jordan Crouse Signed-off-by: Rob Clark Reviewed-by: Bjorn Andersson --- drivers/gpu/drm/msm/msm_drv.c | 13 +++++++------ drivers/gpu/drm/msm/msm_drv.h | 5 +++++ drivers/gpu/drm/msm/msm_gem_vma.c | 9 +++++++++ drivers/gpu/drm/msm/msm_gpu.c | 22 ++++++++++++++++++++++ drivers/gpu/drm/msm/msm_gpu.h | 5 +++++ 5 files changed, 48 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 01845a3b8d52..8e70d220bba8 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -597,7 +597,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file) kref_init(&ctx->ref); msm_submitqueue_init(dev, ctx); - ctx->aspace = priv->gpu ? priv->gpu->aspace : NULL; + ctx->aspace = msm_gpu_create_private_address_space(priv->gpu); file->driver_priv = ctx; return 0; @@ -780,18 +780,19 @@ static int msm_ioctl_gem_cpu_fini(struct drm_device *dev, void *data, } static int msm_ioctl_gem_info_iova(struct drm_device *dev, - struct drm_gem_object *obj, uint64_t *iova) + struct drm_file *file, struct drm_gem_object *obj, + uint64_t *iova) { - struct msm_drm_private *priv = dev->dev_private; + struct msm_file_private *ctx = file->driver_priv; - if (!priv->gpu) + if (!ctx->aspace) return -EINVAL; /* * Don't pin the memory here - just get an address so that userspace can * be productive */ - return msm_gem_get_iova(obj, priv->gpu->aspace, iova); + return msm_gem_get_iova(obj, ctx->aspace, iova); } static int msm_ioctl_gem_info(struct drm_device *dev, void *data, @@ -830,7 +831,7 @@ static int msm_ioctl_gem_info(struct drm_device *dev, void *data, args->value = msm_gem_mmap_offset(obj); break; case MSM_INFO_GET_IOVA: - ret = msm_ioctl_gem_info_iova(dev, obj, &args->value); + ret = msm_ioctl_gem_info_iova(dev, file, obj, &args->value); break; case MSM_INFO_SET_NAME: /* length check should leave room for terminating null: */ diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 4561bfb5e745..2ca9c3c03845 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -249,6 +249,10 @@ int msm_gem_map_vma(struct msm_gem_address_space *aspace, void msm_gem_close_vma(struct msm_gem_address_space *aspace, struct msm_gem_vma *vma); + +struct msm_gem_address_space * +msm_gem_address_space_get(struct msm_gem_address_space *aspace); + void msm_gem_address_space_put(struct msm_gem_address_space *aspace); struct msm_gem_address_space * @@ -434,6 +438,7 @@ static inline void __msm_file_private_destroy(struct kref *kref) struct msm_file_private *ctx = container_of(kref, struct msm_file_private, ref); + msm_gem_address_space_put(ctx->aspace); kfree(ctx); } diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 5f6a11211b64..29cc1305cf37 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -27,6 +27,15 @@ void msm_gem_address_space_put(struct msm_gem_address_space *aspace) kref_put(&aspace->kref, msm_gem_address_space_destroy); } +struct msm_gem_address_space * +msm_gem_address_space_get(struct msm_gem_address_space *aspace) +{ + if (!IS_ERR_OR_NULL(aspace)) + kref_get(&aspace->kref); + + return aspace; +} + /* Actually unmap memory for the vma */ void msm_gem_purge_vma(struct msm_gem_address_space *aspace, struct msm_gem_vma *vma) diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index e1a3cbe25a0c..951850804d77 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ 
-823,6 +823,28 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu) return 0; } +/* Return a new address space for a msm_drm_private instance */ +struct msm_gem_address_space * +msm_gpu_create_private_address_space(struct msm_gpu *gpu) +{ + struct msm_gem_address_space *aspace = NULL; + + if (!gpu) + return NULL; + + /* + * If the target doesn't support private address spaces then return + * the global one + */ + if (gpu->funcs->create_private_address_space) + aspace = gpu->funcs->create_private_address_space(gpu); + + if (IS_ERR_OR_NULL(aspace)) + aspace = msm_gem_address_space_get(gpu->aspace); + + return aspace; +} + int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs, const char *name, struct msm_gpu_config *config) diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 1f96ac0d9049..4052a18e18c2 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -65,6 +65,8 @@ struct msm_gpu_funcs { void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp); struct msm_gem_address_space *(*create_address_space) (struct msm_gpu *gpu, struct platform_device *pdev); + struct msm_gem_address_space *(*create_private_address_space) + (struct msm_gpu *gpu); }; struct msm_gpu { @@ -295,6 +297,9 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs, const char *name, struct msm_gpu_config *config); +struct msm_gem_address_space * +msm_gpu_create_private_address_space(struct msm_gpu *gpu); + void msm_gpu_cleanup(struct msm_gpu *gpu); struct msm_gpu *adreno_load_gpu(struct drm_device *dev); From patchwork Fri Aug 14 02:41:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11713443 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3C2A51744 for ; Fri, 14 Aug 2020 02:42:18 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1DBDB20838 for ; Fri, 14 Aug 2020 02:42:18 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="oDbPOGzE" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726923AbgHNCmK (ORCPT ); Thu, 13 Aug 2020 22:42:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42164 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726904AbgHNCmH (ORCPT ); Thu, 13 Aug 2020 22:42:07 -0400 Received: from mail-pf1-x442.google.com (mail-pf1-x442.google.com [IPv6:2607:f8b0:4864:20::442]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 55F20C061757; Thu, 13 Aug 2020 19:42:07 -0700 (PDT) Received: by mail-pf1-x442.google.com with SMTP id 74so3820886pfx.13; Thu, 13 Aug 2020 19:42:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Gr+5+NDBCVmeiFJo1r00faHACbyaQmX75rYoMEjRwIg=; b=oDbPOGzEexe7SJDpbQQ6F8naxiLN1EUWNd0JWjWgsxTlvgZLnRIeAzNTEqge7kL5gj 4SArYJV+gk/gNt1zY/0g8hL5gcr3NP+7YMvmkIUuHSHGIQLscc9XsFWaIsq7bq3+zH5S 9zHmg8Jxg4vsiFmNjasbBIrkRDrogZqGhoDasUV4prR6SCpxf0go9p+pkbk9meHbsr6r 
zW3Nzd6ESFIsC3wT94uDuRXoAy9USdPyr0KdaxoVJVQcgrzdg7QzJDXz36fvilWLXTtW djl4Vq9a8+D5sXUrY38mu0lDhbiTQX+UIi071e7IxR4RtTi8rLIfkLDDWwHCfCF5WXn0 wfJA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Gr+5+NDBCVmeiFJo1r00faHACbyaQmX75rYoMEjRwIg=; b=Udn5pElFFpirUB8v9jBEzz2+Tx3FLcJMCQmjAiYPomMz3hdFNtMqB3esMvNcI/VM4Z fFKg3r8APa42hnuKG/pxiS6tayJJbme0QjHsZ9YcDvqlOwIEFjHYJjWvl/uIvcD95y8c cIx0mo9GSup6VYifadiVH/E3BwW6TdTR869jwJRCZm0xbr0RRMcJpUMom1WP7Qf0zVKW BDzsF0qNArb5VhQWThuD578cr7IK1qrHsuabA2/mHNNRF+TNscVGBFzylXQZuWjl97ze X9TUq0yFtJsavTcZKSZmc6lM8fAM0LzXQNM2CBjHJCAHitHk75V3GQI+LLuacQY/VeVA m9sA== X-Gm-Message-State: AOAM532d3wUXU+OrnxatuTFiR+h9HRarEh+SDDJOkf4h/1ESocnDJff0 tXLZtiv5Z4AaOKxn3Dxl1Xg= X-Google-Smtp-Source: ABdhPJya1wzik8HNL+iqier1Suq9Z44qItIp0O0EoriWEzNVBa0AA0osvx93+cneOVRu0mkspY9abg== X-Received: by 2002:a62:6186:: with SMTP id v128mr265713pfb.211.1597372926813; Thu, 13 Aug 2020 19:42:06 -0700 (PDT) Received: from localhost ([2601:1c0:5200:a6:307:a401:7b76:c6e5]) by smtp.gmail.com with ESMTPSA id s18sm6791831pgj.3.2020.08.13.19.42.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 13 Aug 2020 19:42:05 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org, iommu@lists.linux-foundation.org, linux-arm-msm@vger.kernel.org Cc: Sai Prakash Ranjan , Will Deacon , freedreno@lists.freedesktop.org, Bjorn Andersson , Sibi Sankar , Vivek Gautam , Stephen Boyd , Robin Murphy , Joerg Roedel , linux-arm-kernel@lists.infradead.org, Jordan Crouse , Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , Jonathan Marek , Sharat Masetty , Eric Anholt , Akhil P Oommen , linux-kernel@vger.kernel.org (open list) Subject: [PATCH 16/19] drm/msm/a6xx: Add support for per-instance pagetables Date: Thu, 13 Aug 2020 19:41:11 -0700 Message-Id: <20200814024114.1177553-17-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org> References: <20200810222657.1841322-1-jcrouse@codeaurora.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Jordan Crouse Add support for using per-instance pagetables if all the dependencies are available. 
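[Editorial sketch, not additional code in this patch: to make "all the dependencies are available" concrete, this is the per-submit decision the new a6xx_set_pagetable() below makes; ctx and a6xx_gpu are as in the diff.]

	phys_addr_t ttbr;
	int asid;

	/* Skip the switch if this context's pagetable is already current */
	if (ctx == a6xx_gpu->cur_ctx)
		return;

	/*
	 * Contexts still on the global address space have no per-instance
	 * pagetable; msm_iommu_pagetable_params() returns -EINVAL for them
	 * and the submit simply runs on the default TTBR0.
	 */
	if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
		return;

	/* Otherwise emit CP_SMMU_TABLE_UPDATE to switch TTBR0 (ASID 0) */
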
Signed-off-by: Jordan Crouse Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 70 +++++++++++++++++++++++++++ drivers/gpu/drm/msm/adreno/a6xx_gpu.h | 1 + drivers/gpu/drm/msm/msm_ringbuffer.h | 1 + 3 files changed, 72 insertions(+) diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 5eabb0109577..9653ac9b3cb8 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -81,6 +81,56 @@ static void get_stats_counter(struct msm_ringbuffer *ring, u32 counter, OUT_RING(ring, upper_32_bits(iova)); } +static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, + struct msm_ringbuffer *ring, struct msm_file_private *ctx) +{ + phys_addr_t ttbr; + u32 asid; + u64 memptr = rbmemptr(ring, ttbr0); + + if (ctx == a6xx_gpu->cur_ctx) + return; + + if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid)) + return; + + /* Execute the table update */ + OUT_PKT7(ring, CP_SMMU_TABLE_UPDATE, 4); + OUT_RING(ring, CP_SMMU_TABLE_UPDATE_0_TTBR0_LO(lower_32_bits(ttbr))); + + /* + * For now ignore the asid since the smmu driver uses a TLBIASID to + * flush the TLB when we use iommu_flush_tlb_all() and the smmu driver + * isn't aware that the asid changed. Instead, keep the default asid + * (0, same as the context bank) to make sure the TLB is properly + * flushed. + */ + OUT_RING(ring, + CP_SMMU_TABLE_UPDATE_1_TTBR0_HI(upper_32_bits(ttbr)) | + CP_SMMU_TABLE_UPDATE_1_ASID(0)); + OUT_RING(ring, CP_SMMU_TABLE_UPDATE_2_CONTEXTIDR(0)); + OUT_RING(ring, CP_SMMU_TABLE_UPDATE_3_CONTEXTBANK(0)); + + /* + * Write the new TTBR0 to the memstore. This is good for debugging. + */ + OUT_PKT7(ring, CP_MEM_WRITE, 4); + OUT_RING(ring, CP_MEM_WRITE_0_ADDR_LO(lower_32_bits(memptr))); + OUT_RING(ring, CP_MEM_WRITE_1_ADDR_HI(upper_32_bits(memptr))); + OUT_RING(ring, lower_32_bits(ttbr)); + OUT_RING(ring, (0 << 16) | upper_32_bits(ttbr)); + + /* + * And finally, trigger a uche flush to be sure there isn't anything + * lingering in that part of the GPU + */ + + OUT_PKT7(ring, CP_EVENT_WRITE, 1); + OUT_RING(ring, 0x31); + + a6xx_gpu->cur_ctx = ctx; +} + static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) { unsigned int index = submit->seqno % MSM_GPU_SUBMIT_STATS_COUNT; @@ -90,6 +140,8 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) struct msm_ringbuffer *ring = submit->ring; unsigned int i; + a6xx_set_pagetable(a6xx_gpu, ring, submit->queue->ctx); + get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP_0_LO, rbmemptr_stats(ring, index, cpcycles_start)); @@ -696,6 +748,8 @@ static int a6xx_hw_init(struct msm_gpu *gpu) /* Always come up on rb 0 */ a6xx_gpu->cur_ring = gpu->rb[0]; + a6xx_gpu->cur_ctx = NULL; + /* Enable the SQE_to start the CP engine */ gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 1); @@ -1008,6 +1062,21 @@ static unsigned long a6xx_gpu_busy(struct msm_gpu *gpu) return (unsigned long)busy_time; } +static struct msm_gem_address_space * +a6xx_create_private_address_space(struct msm_gpu *gpu) +{ + struct msm_gem_address_space *aspace = NULL; + struct msm_mmu *mmu; + + mmu = msm_iommu_pagetable_create(gpu->aspace->mmu); + + if (!IS_ERR(mmu)) + aspace = msm_gem_address_space_create(mmu, + "gpu", 0x100000000ULL, 0x1ffffffffULL); + + return aspace; +} + static const struct adreno_gpu_funcs funcs = { .base = { .get_param = adreno_get_param, @@ -1031,6 +1100,7 @@ static const struct adreno_gpu_funcs funcs = { .gpu_state_put = a6xx_gpu_state_put, #endif .create_address_space = 
adreno_iommu_create_address_space, + .create_private_address_space = a6xx_create_private_address_space, }, .get_timestamp = a6xx_get_timestamp, }; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h index 03ba60d5b07f..da22d7549d9b 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.h +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.h @@ -19,6 +19,7 @@ struct a6xx_gpu { uint64_t sqe_iova; struct msm_ringbuffer *cur_ring; + struct msm_file_private *cur_ctx; struct a6xx_gmu gmu; }; diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h index 7764373d0ed2..0987d6bf848c 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.h +++ b/drivers/gpu/drm/msm/msm_ringbuffer.h @@ -31,6 +31,7 @@ struct msm_rbmemptrs { volatile uint32_t fence; volatile struct msm_gpu_submit_stats stats[MSM_GPU_SUBMIT_STATS_COUNT]; + volatile u64 ttbr0; }; struct msm_ringbuffer { From patchwork Fri Aug 14 02:41:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11713441 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E08D013B6 for ; Fri, 14 Aug 2020 02:42:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C42FC22BF5 for ; Fri, 14 Aug 2020 02:42:17 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="WSjYyuU3" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726931AbgHNCmK (ORCPT ); Thu, 13 Aug 2020 22:42:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42170 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726664AbgHNCmK (ORCPT ); Thu, 13 Aug 2020 22:42:10 -0400 Received: from mail-pj1-x1042.google.com (mail-pj1-x1042.google.com [IPv6:2607:f8b0:4864:20::1042]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0D090C061757; Thu, 13 Aug 2020 19:42:09 -0700 (PDT) Received: by mail-pj1-x1042.google.com with SMTP id d4so3694386pjx.5; Thu, 13 Aug 2020 19:42:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=/RMbEO1wS6ve28JkP/1VJiMbFcTiFM9NBrI4zgQscQM=; b=WSjYyuU3IicqTzGKZ+qyMjOajK0Y+CUj5/an3UNid9VEFQQQ/GpitZqkA4Xu71HuQ8 uyDetJmFxIA+LZ4XKMdbTacXBdQ0BS3SOFEr+zfj8Vs4H+aqeVXRsK6JQK/JbDEBN6KL KRMXnTA5bzHXH18nGWyEk4Oe0sZ/kKopVL4uPgbxwS47tJ5mL4kSYON0DOcpPgu4ncj3 JPOGnNh3/vssv6qpnjwMjU1C0PCegmTgZmDWFyDlNeNIQ2ThNiwnKSOI0nCAg2qqb6AX sLmkxHWNJBh1RrM51cityB+EYPnBq7MWoZfATphAbQy5D3+kK4PBubVAY2hgGHqvs2Xq TPhQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=/RMbEO1wS6ve28JkP/1VJiMbFcTiFM9NBrI4zgQscQM=; b=tRNafEInAB3MERGu9lVdOpYuq3usjoa99/z0Sj38c9TiKyMv8ZSlS2LbLd9jPyOwc0 2Jk+Y36p4T2uRY82k79yD8GYji0gSv/Je4fvzD3yTGlQsMAf/khwy+wPv+XkMchlgQsI 4PDSC1kxk4hELy6fXGuB16glyAYk1b1ETK5qHwtKIty4ahtAxqCa3V5dDOS6iI7Duuju HvuTNgoNlYGrAwtfeusFft9FFvdM8zDNH+YGpFq5CPQ/YCB2o9jN3753cKMDvzwo77PF 6ra3jUFYNAUmfOhNvcI7VZdx/M8PmpSrMMzkNMJq526isUqV1qh8cY/6JkDAt2U9TNuG D/kA== X-Gm-Message-State: AOAM530W9mQN3kMWHMTaq/OmNC7qj4K7rW9At2+PgVd9apmntBl1dIqG 
CaGKiY74lx+8Y8WNIXcWsCY= X-Google-Smtp-Source: ABdhPJx2Hm4qJq+ws1Wzuo6A7am1wv6hPi7gD/YHP4Z3vqtJe3/ztGrQUEPhXJiyo9idH9IbGNEomw== X-Received: by 2002:a17:902:b701:: with SMTP id d1mr562133pls.92.1597372929283; Thu, 13 Aug 2020 19:42:09 -0700 (PDT) Received: from localhost ([2601:1c0:5200:a6:307:a401:7b76:c6e5]) by smtp.gmail.com with ESMTPSA id x136sm7189060pfc.28.2020.08.13.19.42.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 13 Aug 2020 19:42:08 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org, iommu@lists.linux-foundation.org, linux-arm-msm@vger.kernel.org Cc: Sai Prakash Ranjan , Will Deacon , freedreno@lists.freedesktop.org, Bjorn Andersson , Sibi Sankar , Vivek Gautam , Stephen Boyd , Robin Murphy , Joerg Roedel , linux-arm-kernel@lists.infradead.org, Jordan Crouse , Rob Clark , Andy Gross , Rob Herring , devicetree@vger.kernel.org (open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS), linux-kernel@vger.kernel.org (open list) Subject: [PATCH 17/19] arm: dts: qcom: sm845: Set the compatible string for the GPU SMMU Date: Thu, 13 Aug 2020 19:41:12 -0700 Message-Id: <20200814024114.1177553-18-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org> References: <20200810222657.1841322-1-jcrouse@codeaurora.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Jordan Crouse Set the qcom,adreno-smmu compatible string for the GPU SMMU to enable split pagetables and per-instance pagetables for drm/msm. Signed-off-by: Jordan Crouse Signed-off-by: Rob Clark --- arch/arm64/boot/dts/qcom/sdm845.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi index 2884577dcb77..6a9adaa401a9 100644 --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi @@ -4058,7 +4058,7 @@ opp-257000000 { }; adreno_smmu: iommu@5040000 { - compatible = "qcom,sdm845-smmu-v2", "qcom,smmu-v2"; + compatible = "qcom,adreno-smmu", "qcom,smmu-v2"; reg = <0 0x5040000 0 0x10000>; #iommu-cells = <1>; #global-interrupts = <2>; From patchwork Fri Aug 14 02:41:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11713447 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 21DF5109B for ; Fri, 14 Aug 2020 02:42:20 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 09DE220838 for ; Fri, 14 Aug 2020 02:42:20 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="mGYvZ8LB" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726883AbgHNCmT (ORCPT ); Thu, 13 Aug 2020 22:42:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42198 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726664AbgHNCmT (ORCPT ); Thu, 13 Aug 2020 22:42:19 -0400 Received: from mail-pg1-x543.google.com (mail-pg1-x543.google.com [IPv6:2607:f8b0:4864:20::543]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BE0FAC061757; Thu, 13 Aug 2020 19:42:18 -0700 (PDT) Received: by mail-pg1-x543.google.com with SMTP id x6so3812100pgx.12; Thu, 13 
Aug 2020 19:42:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ORHfJz2e9TNUi11ClButtEprelkWs9RC+Iw7WuZZvQo=; b=mGYvZ8LB7tkBgR4ispTOOXDtmoDnJ0ENjIx7BCQ9VRkSut3GQfaIUYgdXpfYnGGuKg rZsPpm9wb26hYcXQcKc3aB74wVftPvCPcXGjyQqF7nWAXECSCmfltDB3BmeWGN2kkoXs osk+LRRkJPFsF9fxu+/EQaO4YaAY3IdnGgDhsDOf+f/ribXjeTndrgcxVJ+hQw2GGr4s ztbQqQ4tTlhOaV5EnPjL+S7x1AlsREgqkm4JjC+Z+NIk3/omRK3fVbdB1JyrWUgvd5gD v4n8h7JJxLJx2P8gvPi9gRw815etQEUjldg81O6PxszqQdmHHdHmD/LPuIKlK+3Y4RTT Optg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ORHfJz2e9TNUi11ClButtEprelkWs9RC+Iw7WuZZvQo=; b=jQceyZRYOnXTgR2AmAnXuRnjWKdB9qcOE2LKKJAJ3txjuVSMiwBTQ7kcPTW1YyQ96e KWElp5MQnGBhcSejFUGKLGO+4nNpyMwgu8spP/D4M78tLSvsiBr5ddUV+NVsUdDTIRaV Rh7vT0U63JjfkphUk3wHuaWaBAkgz4ILoXoh3T+JIm4Ee99z6DyQ66cvQaQr5UeX5gCR Yx19tuKOkNeQ+wFc0C9MtM08ipgC8dDSJACiqh0F5oAYc5afnQHvnPZAFcnv9O/Q2Cgl BhdqgdlRdAIr/dIuXo+t7AiGJkMV1qUaKHNWdroZli6yDy/emTfgatX5xYnHlbVUDtYu z3Ow== X-Gm-Message-State: AOAM532RMxTuD2rNgnLjgY0+ReUWnXPEtMBMDMek7pz904jMm5gr9/kF wzia0vQY6UIAo9sNmoGNAiQ= X-Google-Smtp-Source: ABdhPJyR5WpqeE+88Qzfp41p5EbTBVgk+DjiD7KpYYcxsk9uoQzqZoh5cgFIQ2d/oEWKX5VXLHan3w== X-Received: by 2002:a65:6106:: with SMTP id z6mr390099pgu.310.1597372938297; Thu, 13 Aug 2020 19:42:18 -0700 (PDT) Received: from localhost ([2601:1c0:5200:a6:307:a401:7b76:c6e5]) by smtp.gmail.com with ESMTPSA id b63sm7201394pfg.43.2020.08.13.19.42.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 13 Aug 2020 19:42:16 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org, iommu@lists.linux-foundation.org, linux-arm-msm@vger.kernel.org Cc: Sai Prakash Ranjan , Will Deacon , freedreno@lists.freedesktop.org, Bjorn Andersson , Sibi Sankar , Vivek Gautam , Stephen Boyd , Robin Murphy , Joerg Roedel , linux-arm-kernel@lists.infradead.org, Rob Clark , Jordan Crouse , Greg Kroah-Hartman , Thierry Reding , Krishna Reddy , Jon Hunter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH 18/19] iommu/arm-smmu: add a way for implementations to influence SCTLR Date: Thu, 13 Aug 2020 19:41:13 -0700 Message-Id: <20200814024114.1177553-19-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org> References: <20200810222657.1841322-1-jcrouse@codeaurora.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark For the Adreno GPU's SMMU, we want SCTLR.HUPCF set to ensure that pending translations are not terminated on iova fault. Otherwise a terminated CP read could hang the GPU by returning invalid command-stream data. 
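[Editorial sketch: the new sctlr_set/sctlr_clr masks are general, so any arm_smmu_impl can nudge SCTLR bits from its init_context hook. The example_ implementation below is hypothetical; the Adreno hunk in this patch only uses sctlr_set.]

	static int example_init_context(struct arm_smmu_domain *smmu_domain,
					struct io_pgtable_cfg *pgtbl_cfg)
	{
		/* Keep servicing transactions behind a faulting one */
		smmu_domain->cfg.sctlr_set |= ARM_SMMU_SCTLR_HUPCF;

		/* And make sure stall-on-fault stays disabled */
		smmu_domain->cfg.sctlr_clr |= ARM_SMMU_SCTLR_CFCFG;

		return 0;
	}
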
Signed-off-by: Rob Clark Reviewed-by: Bjorn Andersson --- drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c | 6 ++++++ drivers/iommu/arm/arm-smmu/arm-smmu.c | 3 +++ drivers/iommu/arm/arm-smmu/arm-smmu.h | 3 +++ 3 files changed, 12 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c index 5640d9960610..2aa6249050ff 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c @@ -127,6 +127,12 @@ static int qcom_adreno_smmu_init_context(struct arm_smmu_domain *smmu_domain, (smmu_domain->cfg.fmt == ARM_SMMU_CTX_FMT_AARCH64)) pgtbl_cfg->quirks |= IO_PGTABLE_QUIRK_ARM_TTBR1; + /* + * On the GPU device we want to process subsequent transactions after a + * fault to keep the GPU from hanging + */ + smmu_domain->cfg.sctlr_set |= ARM_SMMU_SCTLR_HUPCF; + /* * Initialize private interface with GPU: */ diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c index e63a480d7f71..bbec5793faf8 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c @@ -617,6 +617,9 @@ void arm_smmu_write_context_bank(struct arm_smmu_device *smmu, int idx) if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN)) reg |= ARM_SMMU_SCTLR_E; + reg |= cfg->sctlr_set; + reg &= ~cfg->sctlr_clr; + arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_SCTLR, reg); } diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.h b/drivers/iommu/arm/arm-smmu/arm-smmu.h index cd75a33967bb..2df3a70a8a41 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.h +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.h @@ -144,6 +144,7 @@ enum arm_smmu_cbar_type { #define ARM_SMMU_CB_SCTLR 0x0 #define ARM_SMMU_SCTLR_S1_ASIDPNE BIT(12) #define ARM_SMMU_SCTLR_CFCFG BIT(7) +#define ARM_SMMU_SCTLR_HUPCF BIT(8) #define ARM_SMMU_SCTLR_CFIE BIT(6) #define ARM_SMMU_SCTLR_CFRE BIT(5) #define ARM_SMMU_SCTLR_E BIT(4) @@ -341,6 +342,8 @@ struct arm_smmu_cfg { u16 asid; u16 vmid; }; + u32 sctlr_set; /* extra bits to set in SCTLR */ + u32 sctlr_clr; /* bits to mask in SCTLR */ enum arm_smmu_cbar_type cbar; enum arm_smmu_context_fmt fmt; }; From patchwork Fri Aug 14 02:41:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11713451 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2B07313B6 for ; Fri, 14 Aug 2020 02:42:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0EE222177B for ; Fri, 14 Aug 2020 02:42:23 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="a7oYXh+g" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726664AbgHNCmW (ORCPT ); Thu, 13 Aug 2020 22:42:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42204 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726605AbgHNCmV (ORCPT ); Thu, 13 Aug 2020 22:42:21 -0400 Received: from mail-pg1-x541.google.com (mail-pg1-x541.google.com [IPv6:2607:f8b0:4864:20::541]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 49F6DC061757; Thu, 13 Aug 2020 19:42:21 -0700 (PDT) Received: by mail-pg1-x541.google.com with SMTP id v15so3819010pgh.6; Thu, 13 Aug 2020 19:42:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; 
h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=FvoB/QZ4hH7jO2LobIZsox+NVJPMDXlJlM4677kKgLg=; b=a7oYXh+gB7wzD2sGKJdQwETvOjphf21jMJWEBa6suIVbpaEmBDQPjp8pQqE7PaKOvf yVv+eNa2TIZqokgxHGqUuFDYk2+EOjPTAEOTjoHQzpf4I+DqkhxMjyED1pT6Y3K+hPi/ U+ga1p/HKlYS3iQZdjjIzklEjl2+MUz0W1gxp6251RuWuAPC9lYvwgBRseRaQ9Y7pmi9 q2veITkGjxSkyEFgIfspGGrn8RUuuAgMLN44lBJu0bAK+5qJfrSbpNX/JUlDwY7o0osI R/uhVO8OvVHkuDNEy7cqvP1uRvPAVe7nU4jE4aNNL08sjKpUnxzIkRDUr1MuotEKS9iP Vk1w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=FvoB/QZ4hH7jO2LobIZsox+NVJPMDXlJlM4677kKgLg=; b=gJYS407mW2tXRQpSd+w0zdLAOMk/wwVaNmhpsHIltBbOQdeOiFjFRfGBezqHZARVxV 4ytLM1dWp2jpucd9yviEFhLZdj73TevzdqEV8WF9tsZIOqa0aBO26eejCHj41BxqgZbt McGBa9bcscW5H5IUTg5sriG80uGWdprp4yrtsFQUFoIWzWUV0sBMypdfdSUbgc1aXx17 zSqpY0dvSqoVdLZeTlLgPzsMT0GlGwUO+kD/yIJ604+Kfj/VzcPxsM8TsJGkpN1i4I1G oZ5MX5hVr4wp6b47wT6s1sg9EJUrS0HO4DEQDO86tHu8W4v/i+0+ZPq/EDkRaHIMK6Ni RPkw== X-Gm-Message-State: AOAM531b3sS8QVzIC+rjKZJYRZeCyOQvsRh2HmgwG4UXOFut8eMK90Kx LSK1hEdddchY+uOSN3n3yn0= X-Google-Smtp-Source: ABdhPJz7zs3EYsHz6zTyJWmSmcnJ89HgA+GRCOgZyF5+aS2jdsaHqSElpXE/ljsfNzpiCQZ7QzGXZg== X-Received: by 2002:aa7:96b0:: with SMTP id g16mr283185pfk.19.1597372940752; Thu, 13 Aug 2020 19:42:20 -0700 (PDT) Received: from localhost ([2601:1c0:5200:a6:307:a401:7b76:c6e5]) by smtp.gmail.com with ESMTPSA id 7sm7203244pff.78.2020.08.13.19.42.19 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 13 Aug 2020 19:42:19 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org, iommu@lists.linux-foundation.org, linux-arm-msm@vger.kernel.org Cc: Sai Prakash Ranjan , Will Deacon , freedreno@lists.freedesktop.org, Bjorn Andersson , Sibi Sankar , Vivek Gautam , Stephen Boyd , Robin Murphy , Joerg Roedel , linux-arm-kernel@lists.infradead.org, Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH 19/19] drm/msm: show process names in gem_describe Date: Thu, 13 Aug 2020 19:41:14 -0700 Message-Id: <20200814024114.1177553-20-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200810222657.1841322-1-jcrouse@codeaurora.org> References: <20200810222657.1841322-1-jcrouse@codeaurora.org> MIME-Version: 1.0 Sender: linux-arm-msm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark In $debugfs/gem we already show any vma(s) associated with an object. Also show process names if the vma's address space is a per-process address space. 
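[Editorial illustration: with this change, an entry in $debugfs/gem for a BO mapped into a per-process address space would look roughly like the line below. The iova, aspace pointer and process name are made up; the format comes from the msm_gem_describe() hunk in this patch.]

	vmas: [gpu:glmark2: aspace=00000000c0ffee00, 100001000,mapped,inuse=1]
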
Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse Reviewed-by: Bjorn Andersson --- drivers/gpu/drm/msm/msm_drv.c | 2 +- drivers/gpu/drm/msm/msm_gem.c | 25 +++++++++++++++++++++---- drivers/gpu/drm/msm/msm_gem.h | 5 +++++ drivers/gpu/drm/msm/msm_gem_vma.c | 1 + drivers/gpu/drm/msm/msm_gpu.c | 8 +++++--- drivers/gpu/drm/msm/msm_gpu.h | 2 +- 6 files changed, 34 insertions(+), 9 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 8e70d220bba8..8d5c4f98c332 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -597,7 +597,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file) kref_init(&ctx->ref); msm_submitqueue_init(dev, ctx); - ctx->aspace = msm_gpu_create_private_address_space(priv->gpu); + ctx->aspace = msm_gpu_create_private_address_space(priv->gpu, current); file->driver_priv = ctx; return 0; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 3cb7aeb93fd3..76a6c5271e57 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -842,11 +842,28 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m) seq_puts(m, " vmas:"); - list_for_each_entry(vma, &msm_obj->vmas, list) - seq_printf(m, " [%s: %08llx,%s,inuse=%d]", - vma->aspace != NULL ? vma->aspace->name : NULL, - vma->iova, vma->mapped ? "mapped" : "unmapped", + list_for_each_entry(vma, &msm_obj->vmas, list) { + const char *name, *comm; + if (vma->aspace) { + struct msm_gem_address_space *aspace = vma->aspace; + struct task_struct *task = + get_pid_task(aspace->pid, PIDTYPE_PID); + if (task) { + comm = kstrdup(task->comm, GFP_KERNEL); + } else { + comm = NULL; + } + name = aspace->name; + } else { + name = comm = NULL; + } + seq_printf(m, " [%s%s%s: aspace=%p, %08llx,%s,inuse=%d]", + name, comm ? ":" : "", comm ? comm : "", + vma->aspace, vma->iova, + vma->mapped ? 
"mapped" : "unmapped", vma->inuse); + kfree(comm); + } seq_puts(m, "\n"); } diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 9c573c4269cb..7b1c7a5f8eef 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -24,6 +24,11 @@ struct msm_gem_address_space { spinlock_t lock; /* Protects drm_mm node allocation/removal */ struct msm_mmu *mmu; struct kref kref; + + /* For address spaces associated with a specific process, this + * will be non-NULL: + */ + struct pid *pid; }; struct msm_gem_vma { diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 29cc1305cf37..80a8a266d68f 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -17,6 +17,7 @@ msm_gem_address_space_destroy(struct kref *kref) drm_mm_takedown(&aspace->mm); if (aspace->mmu) aspace->mmu->funcs->destroy(aspace->mmu); + put_pid(aspace->pid); kfree(aspace); } diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 951850804d77..ac8961187a73 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -825,10 +825,9 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu) /* Return a new address space for a msm_drm_private instance */ struct msm_gem_address_space * -msm_gpu_create_private_address_space(struct msm_gpu *gpu) +msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *task) { struct msm_gem_address_space *aspace = NULL; - if (!gpu) return NULL; @@ -836,8 +835,11 @@ msm_gpu_create_private_address_space(struct msm_gpu *gpu) * If the target doesn't support private address spaces then return * the global one */ - if (gpu->funcs->create_private_address_space) + if (gpu->funcs->create_private_address_space) { aspace = gpu->funcs->create_private_address_space(gpu); + if (!IS_ERR(aspace)) + aspace->pid = get_pid(task_pid(task)); + } if (IS_ERR_OR_NULL(aspace)) aspace = msm_gem_address_space_get(gpu->aspace); diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 4052a18e18c2..59f26bd0fe42 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -298,7 +298,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, const char *name, struct msm_gpu_config *config); struct msm_gem_address_space * -msm_gpu_create_private_address_space(struct msm_gpu *gpu); +msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *task); void msm_gpu_cleanup(struct msm_gpu *gpu);