From patchwork Thu Jun 24 17:17:57 2021
X-Patchwork-Submitter: Doug Anderson
X-Patchwork-Id: 12342679
From: Douglas Anderson
To: will@kernel.org, robin.murphy@arm.com, joro@8bytes.org, bjorn.andersson@linaro.org,
    ulf.hansson@linaro.org, adrian.hunter@intel.com, bhelgaas@google.com
Cc: john.garry@huawei.com, robdclark@chromium.org, quic_c_gdjako@quicinc.com,
    saravanak@google.com, rajatja@google.com,
    saiprakash.ranjan@codeaurora.org, vbadigan@codeaurora.org, linux-mmc@vger.kernel.org,
    linux-arm-msm@vger.kernel.org, linux-pci@vger.kernel.org, iommu@lists.linux-foundation.org,
    sonnyrao@chromium.org, joel@joelfernandes.org, Douglas Anderson, Andrew Morton,
    Jonathan Corbet, "Maciej W. Rozycki", "Paul E. McKenney", Peter Zijlstra, Randy Dunlap,
    Viresh Kumar, Vlastimil Babka, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/3] iommu: Add per-domain strictness and combine with the global default
Date: Thu, 24 Jun 2021 10:17:57 -0700
Message-Id: <20210624101557.v2.1.Id84a954e705fcad3fdb35beb2bc372e4bf2108c7@changeid>
In-Reply-To: <20210624171759.4125094-1-dianders@chromium.org>
References: <20210624171759.4125094-1-dianders@chromium.org>

Strictness has the semantic of being a per-domain property. This is why
iommu_get_dma_strict() takes a "struct iommu_domain" as a parameter. Let's
add that knowledge to "struct iommu_domain" so we can track whether we'd
like each domain to be strict. In this patch nothing sets the per-domain
strictness; it just paves the way for future patches to do so.

Prior to this patch we could only affect strictness at a global level.
We'll still honor the global strictness level if it has been explicitly set
and it's stricter than the one requested per-domain.

NOTE: it's even more obvious that iommu_set_dma_strict() and
iommu_get_dma_strict() are asymmetric after this change. However, they have
always been asymmetric by design [0].

The function iommu_get_dma_strict() should now make it super obvious where
strictness comes from and who overrides whom. Though the function has
changed quite a bit to make the logic clearer, the only two new rules
should be:

* Devices can force strictness for themselves, overriding the cmdline
  "iommu.strict=0" or a call to iommu_set_dma_strict(false).

* Devices can request non-strictness for themselves, assuming there was no
  cmdline "iommu.strict=1" or a call to iommu_set_dma_strict(true).

[0] https://lore.kernel.org/r/a023af85-5060-0a3c-4648-b00f8b8c0430@arm.com/

Signed-off-by: Douglas Anderson
---
This patch will clearly conflict with John Garry's patches [1] if they land
before it. It shouldn't be too hard to rebase, though. Essentially, with
John's patches it'll be impossible for what's called `cmdline_dma_strict`
in my patch to be "default". It'll probably make sense to rearrange the
logic/names a bit to make things clearer.

[1] https://lore.kernel.org/r/1624016058-189713-1-git-send-email-john.garry@huawei.com/

Changes in v2:
- No longer based on changes adding strictness to "struct device"
- Updated kernel-parameters docs.

 .../admin-guide/kernel-parameters.txt |  5 ++-
 drivers/iommu/iommu.c                 | 43 +++++++++++++++----
 include/linux/iommu.h                 |  7 +++
 3 files changed, 45 insertions(+), 10 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index cb89dbdedc46..7675fd79f9a9 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1995,9 +1995,12 @@
 			  throughput at the cost of reduced device isolation.
 			  Will fall back to strict mode if not supported by
 			  the relevant IOMMU driver.
-			1 - Strict mode (default).
+			1 - Strict mode.
 			  DMA unmap operations invalidate IOMMU hardware TLBs
 			  synchronously.
+			  NOTE: if "iommu.strict" is not specified in the command
+			  line then it's up to the system to try to determine the
+			  proper strictness.
 
 	iommu.passthrough=
 			[ARM64, X86] Configure DMA to bypass the IOMMU by
 			default.

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 808ab70d5df5..7943d2105b2f 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -29,7 +29,8 @@
 static struct kset *iommu_group_kset;
 static DEFINE_IDA(iommu_group_ida);
 static unsigned int iommu_def_domain_type __read_mostly;
-static bool iommu_dma_strict __read_mostly = true;
+static enum iommu_strictness cmdline_dma_strict __read_mostly;
+static enum iommu_strictness driver_dma_strict __read_mostly;
 static u32 iommu_cmd_line __read_mostly;
 
 struct iommu_group {
@@ -69,7 +70,6 @@ static const char * const iommu_group_resv_type_string[] = {
 };
 
 #define IOMMU_CMD_LINE_DMA_API		BIT(0)
-#define IOMMU_CMD_LINE_STRICT		BIT(1)
 
 static int iommu_alloc_default_domain(struct iommu_group *group,
 				      struct device *dev);
@@ -334,27 +334,52 @@ static int __init iommu_set_def_domain_type(char *str)
 }
 early_param("iommu.passthrough", iommu_set_def_domain_type);
 
+static inline enum iommu_strictness bool_to_strictness(bool strict)
+{
+	return strict ? IOMMU_STRICT : IOMMU_NOT_STRICT;
+}
+
 static int __init iommu_dma_setup(char *str)
 {
-	int ret = kstrtobool(str, &iommu_dma_strict);
+	bool strict;
+	int ret = kstrtobool(str, &strict);
 
 	if (!ret)
-		iommu_cmd_line |= IOMMU_CMD_LINE_STRICT;
+		cmdline_dma_strict = bool_to_strictness(strict);
 	return ret;
 }
 early_param("iommu.strict", iommu_dma_setup);
 
 void iommu_set_dma_strict(bool strict)
 {
-	if (strict || !(iommu_cmd_line & IOMMU_CMD_LINE_STRICT))
-		iommu_dma_strict = strict;
+	/*
+	 * Valid transitions:
+	 * - DEFAULT -> NON_STRICT
+	 * - DEFAULT -> STRICT
+	 * - NON_STRICT -> STRICT
+	 *
+	 * Everything else is ignored.
+	 */
+	if (driver_dma_strict != IOMMU_STRICT)
+		driver_dma_strict = bool_to_strictness(strict);
 }
 
 bool iommu_get_dma_strict(struct iommu_domain *domain)
 {
-	/* only allow lazy flushing for DMA domains */
-	if (domain->type == IOMMU_DOMAIN_DMA)
-		return iommu_dma_strict;
+	/* Non-DMA domains or anyone forcing it to strict makes it strict */
+	if (domain->type != IOMMU_DOMAIN_DMA ||
+	    cmdline_dma_strict == IOMMU_STRICT ||
+	    driver_dma_strict == IOMMU_STRICT ||
+	    domain->strictness == IOMMU_STRICT)
+		return true;
+
+	/* Anyone requesting non-strict (if no forces) makes it non-strict */
+	if (cmdline_dma_strict == IOMMU_NOT_STRICT ||
+	    driver_dma_strict == IOMMU_NOT_STRICT ||
+	    domain->strictness == IOMMU_NOT_STRICT)
+		return false;
+
+	/* Nobody said anything, so it's strict by default */
 	return true;
 }
 EXPORT_SYMBOL_GPL(iommu_get_dma_strict);

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 32d448050bf7..2e172059c931 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -79,8 +79,15 @@ struct iommu_domain_geometry {
 #define IOMMU_DOMAIN_DMA	(__IOMMU_DOMAIN_PAGING |	\
 				 __IOMMU_DOMAIN_DMA_API)
 
+enum iommu_strictness {
+	IOMMU_DEFAULT_STRICTNESS = 0,	/* zero-init ends up at default */
+	IOMMU_NOT_STRICT,
+	IOMMU_STRICT,
+};
+
 struct iommu_domain {
 	unsigned type;
+	enum iommu_strictness strictness;
 	const struct iommu_ops *ops;
 	unsigned long pgsize_bitmap;	/* Bitmap of page sizes in use */
 	iommu_fault_handler_t handler;
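To make the precedence concrete, here is a rough sketch (not part of the
patch) of how the new per-domain field and the existing helpers could
interact. The two my_*() functions are hypothetical; everything else comes
from the patch above:

#include <linux/iommu.h>

/* A hypothetical IOMMU implementation requests non-strict for one domain. */
static void my_impl_setup_domain(struct iommu_domain *domain)
{
	domain->strictness = IOMMU_NOT_STRICT;	/* a request, not a force */
}

/* A hypothetical IOMMU driver forces strict mode globally. */
static void my_driver_init(void)
{
	iommu_set_dma_strict(true);
}

/*
 * If both of the above run, iommu_get_dma_strict(domain) returns true: a
 * force (cmdline "iommu.strict=1" or iommu_set_dma_strict(true)) beats a
 * per-domain request for non-strict.  With no forces anywhere, the
 * per-domain IOMMU_NOT_STRICT request makes it return false, and with
 * nobody saying anything at all it stays strict by default.
 */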
From patchwork Thu Jun 24 17:17:58 2021
X-Patchwork-Submitter: Doug Anderson
X-Patchwork-Id: 12342681
From: Douglas Anderson
To: will@kernel.org, robin.murphy@arm.com, joro@8bytes.org, bjorn.andersson@linaro.org,
    ulf.hansson@linaro.org, adrian.hunter@intel.com, bhelgaas@google.com
Cc: john.garry@huawei.com, robdclark@chromium.org, quic_c_gdjako@quicinc.com,
    saravanak@google.com, rajatja@google.com, saiprakash.ranjan@codeaurora.org,
    vbadigan@codeaurora.org, linux-mmc@vger.kernel.org, linux-arm-msm@vger.kernel.org,
    linux-pci@vger.kernel.org, iommu@lists.linux-foundation.org, sonnyrao@chromium.org,
    joel@joelfernandes.org, Douglas Anderson, Jordan Crouse, Krishna Reddy, Nicolin Chen,
    Thierry Reding, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/3] iommu/arm-smmu: Check for strictness after calling impl->init_context()
Date: Thu, 24 Jun 2021 10:17:58 -0700
Message-Id: <20210624101557.v2.2.I0ddf490bdaa450eb50ab568f35b1cae03bf358f0@changeid>
In-Reply-To: <20210624171759.4125094-1-dianders@chromium.org>
References: <20210624171759.4125094-1-dianders@chromium.org>

Implementations should be able to affect the strictness, so reorder things
slightly so that we call impl->init_context() before we look at the
strictness.

Signed-off-by: Douglas Anderson
---

Changes in v2:
- Patch moving the strictness check in arm-smmu is new for v2.
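To see why the ordering matters, here is a minimal sketch (not part of the
patch) of an implementation init_context() that requests non-strict mode
for its domain, much like patch 3 in this series does; the function name is
hypothetical:

static int my_impl_init_context(struct arm_smmu_domain *smmu_domain,
				struct io_pgtable_cfg *pgtbl_cfg,
				struct device *dev)
{
	/* Request non-strict for this domain (see patch 1 in this series). */
	smmu_domain->domain.strictness = IOMMU_NOT_STRICT;
	return 0;
}

/*
 * If arm_smmu_init_domain_context() checked iommu_get_dma_strict() before
 * calling impl->init_context(), the request above would arrive too late and
 * IO_PGTABLE_QUIRK_NON_STRICT would never be set.  Moving the check after
 * init_context(), as this patch does, lets the request take effect.
 */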
 drivers/iommu/arm/arm-smmu/arm-smmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
index 6f72c4d208ca..659d3fddffa5 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
@@ -761,15 +761,15 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
 		.iommu_dev	= smmu->dev,
 	};
 
-	if (!iommu_get_dma_strict(domain))
-		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
-
 	if (smmu->impl && smmu->impl->init_context) {
 		ret = smmu->impl->init_context(smmu_domain, &pgtbl_cfg, dev);
 		if (ret)
 			goto out_clear_smmu;
 	}
 
+	if (!iommu_get_dma_strict(domain))
+		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
+
 	if (smmu_domain->pgtbl_quirks)
 		pgtbl_cfg.quirks |= smmu_domain->pgtbl_quirks;
From patchwork Thu Jun 24 17:17:59 2021
X-Patchwork-Submitter: Doug Anderson
X-Patchwork-Id: 12342683
From: Douglas Anderson
To: will@kernel.org, robin.murphy@arm.com, joro@8bytes.org, bjorn.andersson@linaro.org,
    ulf.hansson@linaro.org, adrian.hunter@intel.com, bhelgaas@google.com
Cc: john.garry@huawei.com, robdclark@chromium.org, quic_c_gdjako@quicinc.com,
    saravanak@google.com, rajatja@google.com, saiprakash.ranjan@codeaurora.org,
    vbadigan@codeaurora.org, linux-mmc@vger.kernel.org, linux-arm-msm@vger.kernel.org,
    linux-pci@vger.kernel.org, iommu@lists.linux-foundation.org, sonnyrao@chromium.org,
    joel@joelfernandes.org, Douglas Anderson, Jordan Crouse, Konrad Dybcio,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/3] mmc: sdhci-msm: Request non-strict IOMMU mode
Date: Thu, 24 Jun 2021 10:17:59 -0700
Message-Id: <20210624101557.v2.3.Icde6be7601a5939960caf802056c88cd5132eb4e@changeid>
In-Reply-To: <20210624171759.4125094-1-dianders@chromium.org>
References: <20210624171759.4125094-1-dianders@chromium.org>

IOMMUs being run in strict vs. non-strict mode is a pre-existing Linux
concept. I've included a rough summary here to help evaluate this patch.

IOMMUs can be run in "strict" mode or in "non-strict" mode. The
quick-summary difference between the two is that in "strict" mode we wait
until everything is flushed out when we unmap DMA memory. In "non-strict"
mode we don't. Using the IOMMU in "strict" mode is more secure/safer but
slower because we have to sit and wait for flushes while we're unmapping.

To explain a bit why "non-strict" mode is unsafe, let's walk through two
examples (a code sketch of the same map/unmap pattern appears near the end
of this message).

An example of "non-strict" being insecure when reading from a device:

a) Linux driver maps memory for DMA.
b) Linux driver starts DMA on the device.
c) Device writes to RAM, subject to bounds checking done by the IOMMU.
d) Device finishes writing to RAM and signals transfer is finished.
e) Linux driver starts unmapping DMA memory but doesn't wait for the unmap
   to finish (the definition of non-strict). At this point, though, the
   Linux APIs say that the driver owns the memory and shouldn't expect any
   more scribbling from the DMA device.
f) Linux driver validates that the data in memory looks sane and that
   accessing it won't cause the driver to, for instance, overflow a buffer.
g) Device takes advantage of knowledge of how the Linux driver works and
   sneaks in a modification to the data after the validation but before the
   IOMMU unmap flush finishes.
h) Device has now caused the Linux driver to access memory it shouldn't.

An example of "non-strict" being insecure when writing to a device:

a) Linux driver writes data intended for the device to RAM.
b) Linux driver maps memory for DMA.
c) Linux driver starts DMA on the device.
d) Device reads from RAM, subject to bounds checking done by the IOMMU.
e) Device finishes reading from RAM and signals transfer is finished.
f) Linux driver starts unmapping DMA memory but doesn't wait for the unmap
   to finish (the definition of non-strict).
g) Linux driver frees memory and returns it to the pool of memory available
   for other users to allocate.
h) Memory is allocated for another purpose since it was free memory.
i) Device takes advantage of the period of time before the IOMMU flush to
   read memory that it shouldn't have had access to. What exactly the
   memory could contain depends on the randomness of who allocated next,
   though exploits have been built on flimsier holes.

As you can see from the above examples, using the IOMMU in "non-strict"
mode might not sound _too_ scary (the window of badness is small and the
exposed memory is small) but there is certainly risk. Let's evaluate the
risk by breaking it down into the two problems that IOMMUs are supposed to
protect us against:

Case 1: IOMMUs prevent malicious code running on the peripheral (maybe a
malicious peripheral or maybe someone exploited a benign peripheral) from
turning into an exploit of the Linux kernel. This is particularly important
if the peripheral has loadable / updatable firmware or if the peripheral
has some type of general purpose processor and is processing untrusted
inputs. It's also important if the device is something that can be easily
plugged into the host and the device has direct DMA access itself, like a
PCIe device.

Case 2: IOMMUs limit the severity of a class of software bugs in the
kernel. If we misconfigure a peripheral by accident then instead of the
peripheral clobbering random memory due to a bug we might get an IOMMU
error.

Now that we understand the issue and the risks, let's evaluate whether we
really need "strict" mode for the Qualcomm SDHCI controllers. I will make
the argument that we don't _need_ strict mode for them. Why?

* The SDHCI controller on Qualcomm SoCs doesn't appear to have loadable /
  updatable firmware and, assuming it's got some firmware baked into it, I
  see no evidence that the firmware could be compromised.

* Even though, for external SD cards in particular, the controller is
  dealing with "untrusted" inputs, it's dealing with them in a very
  controlled way. It seems unlikely that a rogue SD card would be able to
  present something to the SDHCI controller that would cause it to DMA
  to/from an address other than one the kernel told it about.

* Although it would be nice to catch more software bugs, once the Linux
  driver has been debugged and stressed the value is not very high. If the
  IOMMU caught something like this the system would be in pretty bad shape
  anyway (we don't really recover from IOMMU errors) and the only benefit
  would be a better spotlight on what went wrong.

Now that we have a good understanding of the benefits of "strict" mode for
our SDHCI controllers, let's look at some performance numbers. I used "dd"
to measure read speeds from eMMC on a sc7180-trogdor-lazor board.

Basic test command (while booted from USB):
  echo 3 > /proc/sys/vm/drop_caches
  dd if=/dev/mmcblk1 of=/dev/null bs=4M count=512

I attempted to run my tests for enough iterations that results stabilized
and weren't too noisy. Tests were run with patches picked to the
chromeos-5.4 tree (sanity checked against v5.13-rc7). I also attempted to
compare to other attempts to address IOMMU problems and/or attempts to bump
the cpufreq up to solve this problem:

- eMMC datasheet spec: 300 MB/s "Typical Sequential Performance"
  NOTE: we're driving the bus at 192 MHz instead of 200 MHz so we might not
  be able to achieve the full 300 MB/s.
- Baseline: 210.9 MB/s
- Baseline + peg cpufreq to max: 284.3 MB/s
- This patch: 279.6 MB/s
- This patch + peg cpufreq to max: 288.1 MB/s
- Joel's IO Wait fix [1]: 258.4 MB/s
- Joel's IO Wait fix [1] + peg cpufreq to max: 287.8 MB/s
- TLBIVA patches [2] + [3]: 214.7 MB/s
- TLBIVA patches [2] + [3] + peg cpufreq to max: 285.7 MB/s
- This patch plus Joel's [1]: 280.2 MB/s
- This patch plus Joel's [1] + peg...: 279.0 MB/s

NOTE: I suspect something in the system was thermal throttling since
there's a heat wave right now.

I also spent a little bit of time trying to see if I could get the IOMMU
flush for MMC out of the critical path but was unable to figure out how to
do this and get good performance.

Overall I'd say that the performance results above show:
* It's really not straightforward to point at "one thing" that is making
  our eMMC performance bad.
* It's certainly possible to get pretty good eMMC performance even without
  this patch.
* This patch makes it much easier to get good eMMC performance.
* No other solution that I found resulted in quite as good eMMC performance
  as having this patch.

Given all the above (security / safety concerns are minimal and it's a nice
performance win), I'm proposing that running SDHCI on Qualcomm SoCs in
non-strict mode is the right thing to do until such point in time as
someone can come up with a better solution to get good SD/eMMC performance
without it.

Now that we've decided we want the SD/MMC controller in non-strict mode, we
need to figure out how to make it happen. We will take advantage of the
fact that on Qualcomm IOMMUs we know that SD/MMC controllers are in a
domain by themselves and hook in when initializing the domain context. In
response to a previous version of this series there had been discussion [4]
of having this driven from a device tree property; this solution doesn't
preclude that, but it is a good jumping-off point.

NOTES:
* It's likely that arguments similar to the above can be made for other
  SDHCI controllers. However, given that this is something that can have an
  impact on security, it feels like we want each SDHCI controller to opt
  in. I believe it is conceivable, for instance, that some SDHCI
  controllers might have loadable or updatable firmware.
* It's also likely other peripherals will want this to get the quick
  performance win. That also should be fine, though anyone landing a
  similar patch should be very careful that it is low risk for all users of
  a given peripheral.
* Conceivably, if even this patch is considered too "high risk", we could
  limit this to just non-removable cards (like eMMC) by checking the device
  tree (a rough sketch of this idea appears after the diff below). This is
  one nice advantage of using the pre_probe() to set this.

[1] https://lore.kernel.org/r/20210618040639.3113489-1-joel@joelfernandes.org
[2] https://lore.kernel.org/r/1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com/
[3] https://lore.kernel.org/r/cover.1623981933.git.saiprakash.ranjan@codeaurora.org/
[4] https://lore.kernel.org/r/20210621235248.2521620-1-dianders@chromium.org

Signed-off-by: Douglas Anderson
---
Changes in v2:
- Now accomplish the goal by putting rules in the IOMMU driver.
- Reworded commit message to clarify things pointed out by Greg.
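To connect the prose examples above to code, here is a minimal sketch (not
part of the patch) of the map/unmap pattern those examples describe. The
driver function and its buffer handling are made up for illustration; only
dma_map_single()/dma_unmap_single() and the strict vs. non-strict flush
behavior come from the discussion above:

#include <linux/dma-mapping.h>

/* Hypothetical driver routine receiving one buffer from a device. */
static int my_receive_one(struct device *dev, void *buf, size_t len)
{
	dma_addr_t dma;

	dma = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	/* ... point the device at "dma" and wait for it to finish ... */

	/*
	 * In strict mode this waits for the IOMMU TLB invalidation, so the
	 * device can no longer touch "buf" once we return.  In non-strict
	 * mode the invalidation is deferred, leaving the small window
	 * described above where the device can still scribble on "buf".
	 */
	dma_unmap_single(dev, dma, len, DMA_FROM_DEVICE);

	/*
	 * The driver now validates and consumes "buf".  With a non-strict
	 * IOMMU a malicious device could still modify "buf" until the
	 * deferred flush actually happens: the race in example one above.
	 */
	return 0;
}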
 drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
index 98b3a1c2a181..bd66376d21ce 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
@@ -172,6 +172,24 @@ static const struct of_device_id qcom_smmu_client_of_match[] __maybe_unused = {
 	{ }
 };
 
+static const struct of_device_id qcom_smmu_nonstrict_of_match[] __maybe_unused = {
+	{ .compatible = "qcom,sdhci-msm-v4" },
+	{ .compatible = "qcom,sdhci-msm-v5" },
+	{ }
+};
+
+static int qcom_smmu_init_context(struct arm_smmu_domain *smmu_domain,
+		struct io_pgtable_cfg *pgtbl_cfg, struct device *dev)
+{
+	const struct of_device_id *match =
+		of_match_device(qcom_smmu_nonstrict_of_match, dev);
+
+	if (match)
+		smmu_domain->domain.strictness = IOMMU_NOT_STRICT;
+
+	return 0;
+}
+
 static int qcom_smmu_cfg_probe(struct arm_smmu_device *smmu)
 {
 	unsigned int last_s2cr = ARM_SMMU_GR0_S2CR(smmu->num_mapping_groups - 1);
@@ -295,6 +313,7 @@ static int qcom_smmu500_reset(struct arm_smmu_device *smmu)
 }
 
 static const struct arm_smmu_impl qcom_smmu_impl = {
+	.init_context = qcom_smmu_init_context,
 	.cfg_probe = qcom_smmu_cfg_probe,
 	.def_domain_type = qcom_smmu_def_domain_type,
 	.reset = qcom_smmu500_reset,
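As a follow-on to the note above about limiting this to non-removable
cards: here is a minimal sketch of what that could look like if we kept the
init_context() hook from this patch. The "non-removable" property is the
standard MMC device tree binding for soldered-down devices like eMMC; the
extra check is purely hypothetical and not part of this patch:

static int qcom_smmu_init_context(struct arm_smmu_domain *smmu_domain,
		struct io_pgtable_cfg *pgtbl_cfg, struct device *dev)
{
	const struct of_device_id *match =
		of_match_device(qcom_smmu_nonstrict_of_match, dev);

	/*
	 * Hypothetical extra restriction: only request non-strict mode for
	 * controllers whose card can't be swapped at runtime (eMMC).
	 */
	if (match && of_property_read_bool(dev->of_node, "non-removable"))
		smmu_domain->domain.strictness = IOMMU_NOT_STRICT;

	return 0;
}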