From patchwork Fri May 7 10:21:59 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12244313
From: Keqian Zhu
To: Robin Murphy, Will Deacon, Joerg Roedel, Jean-Philippe Brucker, Lu Baolu, Yi Sun, Tian Kevin
CC: Alex Williamson, Kirti Wankhede, Cornelia Huck, Jonathan Cameron
Subject: [RFC PATCH v4 01/13] iommu: Introduce dirty log tracking framework
Date: Fri, 7 May 2021 18:21:59 +0800
Message-ID: <20210507102211.8836-2-zhukeqian1@huawei.com>
In-Reply-To: <20210507102211.8836-1-zhukeqian1@huawei.com>

Some types of IOMMU are capable of tracking a DMA dirty log, such as an ARM
SMMU with HTTU or an Intel IOMMU with SLADE. This introduces the dirty log
tracking framework in the IOMMU base layer. Four new essential interfaces
are added, and the status of dirty log tracking is maintained in
iommu_domain:

1. iommu_support_dirty_log: Check whether the domain supports dirty log tracking
2. iommu_switch_dirty_log: Perform actions to start/stop dirty log tracking
3. iommu_sync_dirty_log: Sync the dirty log from the IOMMU into a dirty bitmap
4. iommu_clear_dirty_log: Clear the dirty log of the IOMMU by a mask bitmap

Note: Do not call these interfaces concurrently with other ops that access
the underlying page table.
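For illustration only (this sketch is not part of the patch): a hypothetical
consumer such as a VFIO-like migration driver could drive the four interfaces
roughly as below. The function name, the dirty_bitmap allocation and the
iova/size values are made-up placeholders; only the iommu_* calls come from
this patch.

/*
 * Editor's sketch of a possible caller. Assumes a mapped IOVA range
 * [iova, iova + size) and a bitmap with one bit per 4K page of it.
 */
static int example_track_dirty(struct iommu_domain *domain,
			       unsigned long iova, size_t size,
			       unsigned long *dirty_bitmap)
{
	int ret;

	if (!iommu_support_dirty_log(domain))
		return -EOPNOTSUPP;

	/* Start dirty log tracking over the range */
	ret = iommu_switch_dirty_log(domain, true, iova, size,
				     IOMMU_READ | IOMMU_WRITE);
	if (ret)
		return ret;

	/* ... let DMA run, then collect the dirty state ... */
	ret = iommu_sync_dirty_log(domain, iova, size, dirty_bitmap,
				   iova, PAGE_SHIFT);
	if (!ret)
		/* Re-arm tracking for the pages reported as dirty */
		ret = iommu_clear_dirty_log(domain, iova, size, dirty_bitmap,
					    iova, PAGE_SHIFT);

	/* Stop tracking when migration (or similar) completes */
	iommu_switch_dirty_log(domain, false, iova, size,
			       IOMMU_READ | IOMMU_WRITE);
	return ret;
}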
Signed-off-by: Keqian Zhu Signed-off-by: Kunkun Jiang --- drivers/iommu/iommu.c | 201 +++++++++++++++++++++++++++++++++++ include/linux/iommu.h | 63 +++++++++++ include/trace/events/iommu.h | 63 +++++++++++ 3 files changed, 327 insertions(+) diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 808ab70d5df5..0d15620d1e90 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -1940,6 +1940,7 @@ static struct iommu_domain *__iommu_domain_alloc(struct bus_type *bus, domain->type = type; /* Assume all sizes by default; the driver may override this later */ domain->pgsize_bitmap = bus->iommu_ops->pgsize_bitmap; + mutex_init(&domain->switch_log_lock); return domain; } @@ -2703,6 +2704,206 @@ int iommu_set_pgtable_quirks(struct iommu_domain *domain, } EXPORT_SYMBOL_GPL(iommu_set_pgtable_quirks); +bool iommu_support_dirty_log(struct iommu_domain *domain) +{ + const struct iommu_ops *ops = domain->ops; + + return ops->support_dirty_log && ops->support_dirty_log(domain); +} +EXPORT_SYMBOL_GPL(iommu_support_dirty_log); + +int iommu_switch_dirty_log(struct iommu_domain *domain, bool enable, + unsigned long iova, size_t size, int prot) +{ + const struct iommu_ops *ops = domain->ops; + unsigned long orig_iova = iova; + unsigned int min_pagesz; + size_t orig_size = size; + bool flush = false; + int ret = 0; + + if (unlikely(!ops->switch_dirty_log)) + return -ENODEV; + + min_pagesz = 1 << __ffs(domain->pgsize_bitmap); + if (!IS_ALIGNED(iova | size, min_pagesz)) { + pr_err("unaligned: iova 0x%lx size 0x%zx min_pagesz 0x%x\n", + iova, size, min_pagesz); + return -EINVAL; + } + + mutex_lock(&domain->switch_log_lock); + if (enable && domain->dirty_log_tracking) { + ret = -EBUSY; + goto out; + } else if (!enable && !domain->dirty_log_tracking) { + ret = -EINVAL; + goto out; + } + + pr_debug("switch_dirty_log %s for: iova 0x%lx size 0x%zx\n", + enable ? 
"enable" : "disable", iova, size); + + while (size) { + size_t pgsize = iommu_pgsize(domain, iova, size); + + flush = true; + ret = ops->switch_dirty_log(domain, enable, iova, pgsize, prot); + if (ret) + break; + + pr_debug("switch_dirty_log handled: iova 0x%lx size 0x%zx\n", + iova, pgsize); + + iova += pgsize; + size -= pgsize; + } + + if (flush) + iommu_flush_iotlb_all(domain); + + if (!ret) { + domain->dirty_log_tracking = enable; + trace_switch_dirty_log(orig_iova, orig_size, enable); + } +out: + mutex_unlock(&domain->switch_log_lock); + return ret; +} +EXPORT_SYMBOL_GPL(iommu_switch_dirty_log); + +int iommu_sync_dirty_log(struct iommu_domain *domain, unsigned long iova, + size_t size, unsigned long *bitmap, + unsigned long base_iova, unsigned long bitmap_pgshift) +{ + const struct iommu_ops *ops = domain->ops; + unsigned long orig_iova = iova; + unsigned int min_pagesz; + size_t orig_size = size; + int ret = 0; + + if (unlikely(!ops->sync_dirty_log)) + return -ENODEV; + + min_pagesz = 1 << __ffs(domain->pgsize_bitmap); + if (!IS_ALIGNED(iova | size, min_pagesz)) { + pr_err("unaligned: iova 0x%lx size 0x%zx min_pagesz 0x%x\n", + iova, size, min_pagesz); + return -EINVAL; + } + + mutex_lock(&domain->switch_log_lock); + if (!domain->dirty_log_tracking) { + ret = -EINVAL; + goto out; + } + + pr_debug("sync_dirty_log for: iova 0x%lx size 0x%zx\n", iova, size); + + while (size) { + size_t pgsize = iommu_pgsize(domain, iova, size); + + ret = ops->sync_dirty_log(domain, iova, pgsize, + bitmap, base_iova, bitmap_pgshift); + if (ret) + break; + + pr_debug("sync_dirty_log handled: iova 0x%lx size 0x%zx\n", + iova, pgsize); + + iova += pgsize; + size -= pgsize; + } + + if (!ret) + trace_sync_dirty_log(orig_iova, orig_size); +out: + mutex_unlock(&domain->switch_log_lock); + return ret; +} +EXPORT_SYMBOL_GPL(iommu_sync_dirty_log); + +static int __iommu_clear_dirty_log(struct iommu_domain *domain, + unsigned long iova, size_t size, + unsigned long *bitmap, + unsigned long base_iova, + unsigned long bitmap_pgshift) +{ + const struct iommu_ops *ops = domain->ops; + unsigned long orig_iova = iova; + size_t orig_size = size; + int ret = 0; + + if (unlikely(!ops->clear_dirty_log)) + return -ENODEV; + + pr_debug("clear_dirty_log for: iova 0x%lx size 0x%zx\n", iova, size); + + while (size) { + size_t pgsize = iommu_pgsize(domain, iova, size); + + ret = ops->clear_dirty_log(domain, iova, pgsize, bitmap, + base_iova, bitmap_pgshift); + if (ret) + break; + + pr_debug("clear_dirty_log handled: iova 0x%lx size 0x%zx\n", + iova, pgsize); + + iova += pgsize; + size -= pgsize; + } + + if (!ret) + trace_clear_dirty_log(orig_iova, orig_size); + + return ret; +} + +int iommu_clear_dirty_log(struct iommu_domain *domain, + unsigned long iova, size_t size, + unsigned long *bitmap, unsigned long base_iova, + unsigned long bitmap_pgshift) +{ + unsigned long riova, rsize; + unsigned int min_pagesz; + bool flush = false; + int rs, re, start, end; + int ret = 0; + + min_pagesz = 1 << __ffs(domain->pgsize_bitmap); + if (!IS_ALIGNED(iova | size, min_pagesz)) { + pr_err("unaligned: iova 0x%lx min_pagesz 0x%x\n", + iova, min_pagesz); + return -EINVAL; + } + + mutex_lock(&domain->switch_log_lock); + if (!domain->dirty_log_tracking) { + ret = -EINVAL; + goto out; + } + + start = (iova - base_iova) >> bitmap_pgshift; + end = start + (size >> bitmap_pgshift); + bitmap_for_each_set_region(bitmap, rs, re, start, end) { + flush = true; + riova = base_iova + (rs << bitmap_pgshift); + rsize = (re - rs) << bitmap_pgshift; + ret = 
__iommu_clear_dirty_log(domain, riova, rsize, bitmap, + base_iova, bitmap_pgshift); + if (ret) + break; + } + + if (flush) + iommu_flush_iotlb_all(domain); +out: + mutex_unlock(&domain->switch_log_lock); + return ret; +} +EXPORT_SYMBOL_GPL(iommu_clear_dirty_log); + void iommu_get_resv_regions(struct device *dev, struct list_head *list) { const struct iommu_ops *ops = dev->bus->iommu_ops; diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 32d448050bf7..e0e40dda974d 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -87,6 +87,8 @@ struct iommu_domain { void *handler_token; struct iommu_domain_geometry geometry; void *iova_cookie; + bool dirty_log_tracking; + struct mutex switch_log_lock; }; enum iommu_cap { @@ -193,6 +195,10 @@ struct iommu_iotlb_gather { * @device_group: find iommu group for a particular device * @enable_nesting: Enable nesting * @set_pgtable_quirks: Set io page table quirks (IO_PGTABLE_QUIRK_*) + * @support_dirty_log: Check whether domain supports dirty log tracking + * @switch_dirty_log: Perform actions to start|stop dirty log tracking + * @sync_dirty_log: Sync dirty log from IOMMU into a dirty bitmap + * @clear_dirty_log: Clear dirty log of IOMMU by a mask bitmap * @get_resv_regions: Request list of reserved regions for a device * @put_resv_regions: Free list of reserved regions for a device * @apply_resv_region: Temporary helper call-back for iova reserved ranges @@ -245,6 +251,22 @@ struct iommu_ops { int (*set_pgtable_quirks)(struct iommu_domain *domain, unsigned long quirks); + /* + * Track dirty log. Note: Don't concurrently call these interfaces with + * other ops that access underlying page table. + */ + bool (*support_dirty_log)(struct iommu_domain *domain); + int (*switch_dirty_log)(struct iommu_domain *domain, bool enable, + unsigned long iova, size_t size, int prot); + int (*sync_dirty_log)(struct iommu_domain *domain, + unsigned long iova, size_t size, + unsigned long *bitmap, unsigned long base_iova, + unsigned long bitmap_pgshift); + int (*clear_dirty_log)(struct iommu_domain *domain, + unsigned long iova, size_t size, + unsigned long *bitmap, unsigned long base_iova, + unsigned long bitmap_pgshift); + /* Request/Free a list of reserved regions for a device */ void (*get_resv_regions)(struct device *dev, struct list_head *list); void (*put_resv_regions)(struct device *dev, struct list_head *list); @@ -475,6 +497,17 @@ extern struct iommu_domain *iommu_group_default_domain(struct iommu_group *); int iommu_enable_nesting(struct iommu_domain *domain); int iommu_set_pgtable_quirks(struct iommu_domain *domain, unsigned long quirks); +extern bool iommu_support_dirty_log(struct iommu_domain *domain); +extern int iommu_switch_dirty_log(struct iommu_domain *domain, bool enable, + unsigned long iova, size_t size, int prot); +extern int iommu_sync_dirty_log(struct iommu_domain *domain, unsigned long iova, + size_t size, unsigned long *bitmap, + unsigned long base_iova, + unsigned long bitmap_pgshift); +extern int iommu_clear_dirty_log(struct iommu_domain *domain, unsigned long iova, + size_t dma_size, unsigned long *bitmap, + unsigned long base_iova, + unsigned long bitmap_pgshift); void iommu_set_dma_strict(bool val); bool iommu_get_dma_strict(struct iommu_domain *domain); @@ -848,6 +881,36 @@ static inline int iommu_set_pgtable_quirks(struct iommu_domain *domain, return 0; } +static inline bool iommu_support_dirty_log(struct iommu_domain *domain) +{ + return false; +} + +static inline int iommu_switch_dirty_log(struct iommu_domain 
*domain,
+					  bool enable, unsigned long iova,
+					  size_t size, int prot)
+{
+	return -EINVAL;
+}
+
+static inline int iommu_sync_dirty_log(struct iommu_domain *domain,
+				       unsigned long iova, size_t size,
+				       unsigned long *bitmap,
+				       unsigned long base_iova,
+				       unsigned long pgshift)
+{
+	return -EINVAL;
+}
+
+static inline int iommu_clear_dirty_log(struct iommu_domain *domain,
+					unsigned long iova, size_t size,
+					unsigned long *bitmap,
+					unsigned long base_iova,
+					unsigned long pgshift)
+{
+	return -EINVAL;
+}
+
 static inline int iommu_device_register(struct iommu_device *iommu,
					 const struct iommu_ops *ops,
					 struct device *hwdev)
diff --git a/include/trace/events/iommu.h b/include/trace/events/iommu.h
index 72b4582322ff..6436d693d357 100644
--- a/include/trace/events/iommu.h
+++ b/include/trace/events/iommu.h
@@ -129,6 +129,69 @@ TRACE_EVENT(unmap,
 	)
 );
 
+TRACE_EVENT(switch_dirty_log,
+
+	TP_PROTO(unsigned long iova, size_t size, bool enable),
+
+	TP_ARGS(iova, size, enable),
+
+	TP_STRUCT__entry(
+		__field(u64, iova)
+		__field(size_t, size)
+		__field(bool, enable)
+	),
+
+	TP_fast_assign(
+		__entry->iova = iova;
+		__entry->size = size;
+		__entry->enable = enable;
+	),
+
+	TP_printk("IOMMU: iova=0x%016llx size=%zu enable=%u",
+		  __entry->iova, __entry->size, __entry->enable
+	)
+);
+
+TRACE_EVENT(sync_dirty_log,
+
+	TP_PROTO(unsigned long iova, size_t size),
+
+	TP_ARGS(iova, size),
+
+	TP_STRUCT__entry(
+		__field(u64, iova)
+		__field(size_t, size)
+	),
+
+	TP_fast_assign(
+		__entry->iova = iova;
+		__entry->size = size;
+	),
+
+	TP_printk("IOMMU: iova=0x%016llx size=%zu", __entry->iova,
+		  __entry->size)
+);
+
+TRACE_EVENT(clear_dirty_log,
+
+	TP_PROTO(unsigned long iova, size_t size),
+
+	TP_ARGS(iova, size),
+
+	TP_STRUCT__entry(
+		__field(u64, iova)
+		__field(size_t, size)
+	),
+
+	TP_fast_assign(
+		__entry->iova = iova;
+		__entry->size = size;
+	),
+
+	TP_printk("IOMMU: iova=0x%016llx size=%zu", __entry->iova,
+		  __entry->size)
+);
+
 DECLARE_EVENT_CLASS(iommu_error,
 
 	TP_PROTO(struct device *dev, unsigned long iova, int flags),

From patchwork Fri May 7 10:22:00 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12244311
From: Keqian Zhu
To: Robin Murphy, Will Deacon, Joerg Roedel, Jean-Philippe Brucker, Lu Baolu, Yi Sun, Tian Kevin
CC: Alex Williamson, Kirti Wankhede, Cornelia Huck, Jonathan Cameron
Subject: [RFC PATCH v4 02/13] iommu/io-pgtable-arm: Add quirk ARM_HD and ARM_BBMLx
Date: Fri, 7 May 2021 18:22:00 +0800
Message-ID: <20210507102211.8836-3-zhukeqian1@huawei.com>
In-Reply-To: <20210507102211.8836-1-zhukeqian1@huawei.com>

From: Kunkun Jiang

These features are essential to support dirty log tracking for SMMU with
io-pgtable mapping. The dirty state information is encoded using the access
permission bits AP[2] (stage 1) or S2AP[1] (stage 2) in conjunction with the
DBM (Dirty Bit Modifier) bit, where a set DBM bit means writable and a
cleared AP[2]/S2AP[1] bit means dirty. When the ARM_HD quirk is set, we set
the DBM bit for stage-1 mappings.

As SMMU nested mode is not upstreamed for now, we only aim to support dirty
log tracking for stage 1 with io-pgtable mappings (i.e. SVA is not
supported).

Co-developed-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
 drivers/iommu/io-pgtable-arm.c |  7 ++++++-
 include/linux/io-pgtable.h     | 11 +++++++++++
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 87def58e79b5..94d790b8ed27 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -72,6 +72,7 @@
 
 #define ARM_LPAE_PTE_NSTABLE		(((arm_lpae_iopte)1) << 63)
 #define ARM_LPAE_PTE_XN			(((arm_lpae_iopte)3) << 53)
+#define ARM_LPAE_PTE_DBM		(((arm_lpae_iopte)1) << 51)
 #define ARM_LPAE_PTE_AF			(((arm_lpae_iopte)1) << 10)
 #define ARM_LPAE_PTE_SH_NS		(((arm_lpae_iopte)0) << 8)
 #define ARM_LPAE_PTE_SH_OS		(((arm_lpae_iopte)2) << 8)
@@ -81,7 +82,7 @@
 
 #define ARM_LPAE_PTE_ATTR_LO_MASK	(((arm_lpae_iopte)0x3ff) << 2)
 /* Ignore the contiguous bit for block splitting */
-#define ARM_LPAE_PTE_ATTR_HI_MASK	(((arm_lpae_iopte)6) << 52)
+#define ARM_LPAE_PTE_ATTR_HI_MASK	(((arm_lpae_iopte)13) << 51)
 #define ARM_LPAE_PTE_ATTR_MASK		(ARM_LPAE_PTE_ATTR_LO_MASK |	\
 					 ARM_LPAE_PTE_ATTR_HI_MASK)
 /* Software bit for solving coherency races */
@@ -379,6 +380,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
 static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data,
 					   int prot)
 {
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
 	arm_lpae_iopte pte;
 
 	if (data->iop.fmt == ARM_64_LPAE_S1 ||
@@ -386,6 +388,9 @@ static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data,
 		pte = ARM_LPAE_PTE_nG;
 		if (!(prot & IOMMU_WRITE) && (prot & IOMMU_READ))
 			pte |= ARM_LPAE_PTE_AP_RDONLY;
+		else if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_HD)
+			pte |= ARM_LPAE_PTE_DBM;
+
 		if (!(prot & IOMMU_PRIV))
 			pte |= ARM_LPAE_PTE_AP_UNPRIV;
 	} else {
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index 4d40dfa75b55..92274705b772 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -82,6 +82,14 @@ struct io_pgtable_cfg {
 	 *
 	 * IO_PGTABLE_QUIRK_ARM_OUTER_WBWA: Override the outer-cacheability
 	 *	attributes set in the TCR for a non-coherent page-table walker.
+	 *
+	 * IO_PGTABLE_QUIRK_ARM_HD: Support hardware management of dirty status.
+	 *
+	 * IO_PGTABLE_QUIRK_ARM_BBML1: ARM SMMU supports BBM Level 1 behavior
+	 *	when changing block size.
+	 *
+	 * IO_PGTABLE_QUIRK_ARM_BBML2: ARM SMMU supports BBM Level 2 behavior
+	 *	when changing block size.
	 */
 	#define IO_PGTABLE_QUIRK_ARM_NS		BIT(0)
 	#define IO_PGTABLE_QUIRK_NO_PERMS	BIT(1)
@@ -89,6 +97,9 @@ struct io_pgtable_cfg {
 	#define IO_PGTABLE_QUIRK_NON_STRICT	BIT(4)
 	#define IO_PGTABLE_QUIRK_ARM_TTBR1	BIT(5)
 	#define IO_PGTABLE_QUIRK_ARM_OUTER_WBWA	BIT(6)
+	#define IO_PGTABLE_QUIRK_ARM_HD		BIT(7)
+	#define IO_PGTABLE_QUIRK_ARM_BBML1	BIT(8)
+	#define IO_PGTABLE_QUIRK_ARM_BBML2	BIT(9)
 	unsigned long			quirks;
 	unsigned long			pgsize_bitmap;
 	unsigned int			ias;
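For illustration only (not part of the patch): an SMMU driver that has
detected HTTU dirty-state support and BBM Level 2 behaviour could request
the new quirks when building its stage-1 io_pgtable configuration. The
function name and the have_* flags are hypothetical; only the quirk macros
come from this patch.

/* Editor's sketch: opt in to the new io-pgtable behaviours */
static void example_configure_s1_pgtable(struct io_pgtable_cfg *cfg,
					 bool have_httu_hd, bool have_bbml2)
{
	if (have_httu_hd)
		cfg->quirks |= IO_PGTABLE_QUIRK_ARM_HD;
	if (have_bbml2)
		cfg->quirks |= IO_PGTABLE_QUIRK_ARM_BBML2;
}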
From patchwork Fri May 7 10:22:01 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12244307
From: Keqian Zhu
To: Robin Murphy, Will Deacon, Joerg Roedel, Jean-Philippe Brucker, Lu Baolu, Yi Sun, Tian Kevin
CC: Alex Williamson, Kirti Wankhede, Cornelia Huck, Jonathan Cameron
Subject: [RFC PATCH v4 03/13] iommu/io-pgtable-arm: Add and realize split_block ops
Date: Fri, 7 May 2021 18:22:01 +0800
Message-ID: <20210507102211.8836-4-zhukeqian1@huawei.com>
In-Reply-To: <20210507102211.8836-1-zhukeqian1@huawei.com>

From: Kunkun Jiang

Block (largepage) mapping is not a proper granule for dirty log tracking.
To take an extreme example: if DMA writes one byte under a 1G mapping, the
dirty amount reported is 1G, but under a 4K mapping it is just 4K.

This splits a block descriptor into a span of page descriptors. The BBML1
or BBML2 feature is required.

Splitting blocks is designed to be used only by dirty log tracking, which
does not run concurrently with other pgtable ops that access the underlying
page table, so no race condition exists.
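For illustration only (not code from this series): a driver enabling dirty
log tracking could split a single 1G block mapping down to pages via the new
op roughly as below. It assumes a 1G block is actually mapped at 'iova' and
that 1G is in the configured pgsize_bitmap; the function name is made up.

/* Editor's sketch: split one 1G block before tracking starts */
static int example_split_one_block(struct io_pgtable_ops *ops,
				   unsigned long iova)
{
	/* split_block() returns the size it handled, or 0 on failure */
	return ops->split_block(ops, iova, SZ_1G) == SZ_1G ? 0 : -EFAULT;
}

After the split, a one-byte DMA write dirties a single 4K page instead of
the whole 1G block, which is what makes page-granule dirty reporting useful.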
Co-developed-by: Keqian Zhu Signed-off-by: Kunkun Jiang --- drivers/iommu/io-pgtable-arm.c | 122 +++++++++++++++++++++++++++++++++ include/linux/io-pgtable.h | 2 + 2 files changed, 124 insertions(+) diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c index 94d790b8ed27..664a9548b199 100644 --- a/drivers/iommu/io-pgtable-arm.c +++ b/drivers/iommu/io-pgtable-arm.c @@ -79,6 +79,8 @@ #define ARM_LPAE_PTE_SH_IS (((arm_lpae_iopte)3) << 8) #define ARM_LPAE_PTE_NS (((arm_lpae_iopte)1) << 5) #define ARM_LPAE_PTE_VALID (((arm_lpae_iopte)1) << 0) +/* Block descriptor bits */ +#define ARM_LPAE_PTE_NT (((arm_lpae_iopte)1) << 16) #define ARM_LPAE_PTE_ATTR_LO_MASK (((arm_lpae_iopte)0x3ff) << 2) /* Ignore the contiguous bit for block splitting */ @@ -679,6 +681,125 @@ static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops, return iopte_to_paddr(pte, data) | iova; } +static size_t __arm_lpae_split_block(struct arm_lpae_io_pgtable *data, + unsigned long iova, size_t size, int lvl, + arm_lpae_iopte *ptep); + +static size_t arm_lpae_do_split_blk(struct arm_lpae_io_pgtable *data, + unsigned long iova, size_t size, + arm_lpae_iopte blk_pte, int lvl, + arm_lpae_iopte *ptep) +{ + struct io_pgtable_cfg *cfg = &data->iop.cfg; + arm_lpae_iopte pte, *tablep; + phys_addr_t blk_paddr; + size_t tablesz = ARM_LPAE_GRANULE(data); + size_t split_sz = ARM_LPAE_BLOCK_SIZE(lvl, data); + int i; + + if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS)) + return 0; + + tablep = __arm_lpae_alloc_pages(tablesz, GFP_ATOMIC, cfg); + if (!tablep) + return 0; + + blk_paddr = iopte_to_paddr(blk_pte, data); + pte = iopte_prot(blk_pte); + for (i = 0; i < tablesz / sizeof(pte); i++, blk_paddr += split_sz) + __arm_lpae_init_pte(data, blk_paddr, pte, lvl, &tablep[i]); + + if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_BBML1) { + /* Race does not exist */ + blk_pte |= ARM_LPAE_PTE_NT; + __arm_lpae_set_pte(ptep, blk_pte, cfg); + io_pgtable_tlb_flush_walk(&data->iop, iova, size, size); + } + /* Race does not exist */ + pte = arm_lpae_install_table(tablep, ptep, blk_pte, cfg); + + /* Have splited it into page? */ + if (lvl == (ARM_LPAE_MAX_LEVELS - 1)) + return size; + + /* Go back to lvl - 1 */ + ptep -= ARM_LPAE_LVL_IDX(iova, lvl - 1, data); + return __arm_lpae_split_block(data, iova, size, lvl - 1, ptep); +} + +static size_t __arm_lpae_split_block(struct arm_lpae_io_pgtable *data, + unsigned long iova, size_t size, int lvl, + arm_lpae_iopte *ptep) +{ + arm_lpae_iopte pte; + struct io_pgtable *iop = &data->iop; + size_t base, next_size, total_size; + + if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS)) + return 0; + + ptep += ARM_LPAE_LVL_IDX(iova, lvl, data); + pte = READ_ONCE(*ptep); + if (WARN_ON(!pte)) + return 0; + + if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) { + if (iopte_leaf(pte, lvl, iop->fmt)) { + if (lvl == (ARM_LPAE_MAX_LEVELS - 1) || + (pte & ARM_LPAE_PTE_AP_RDONLY)) + return size; + + /* We find a writable block, split it. 
*/
+		return arm_lpae_do_split_blk(data, iova, size, pte,
+					     lvl + 1, ptep);
+		} else {
+			/* If it is the last table level, then nothing to do */
+			if (lvl == (ARM_LPAE_MAX_LEVELS - 2))
+				return size;
+
+			total_size = 0;
+			next_size = ARM_LPAE_BLOCK_SIZE(lvl + 1, data);
+			ptep = iopte_deref(pte, data);
+			for (base = 0; base < size; base += next_size)
+				total_size += __arm_lpae_split_block(data,
+						iova + base, next_size, lvl + 1,
+						ptep);
+			return total_size;
+		}
+	} else if (iopte_leaf(pte, lvl, iop->fmt)) {
+		WARN(1, "Can't split behind a block.\n");
+		return 0;
+	}
+
+	/* Keep on walkin */
+	ptep = iopte_deref(pte, data);
+	return __arm_lpae_split_block(data, iova, size, lvl + 1, ptep);
+}
+
+static size_t arm_lpae_split_block(struct io_pgtable_ops *ops,
+				   unsigned long iova, size_t size)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	arm_lpae_iopte *ptep = data->pgd;
+	int lvl = data->start_level;
+	long iaext = (s64)iova >> cfg->ias;
+
+	if (WARN_ON(!size || (size & cfg->pgsize_bitmap) != size))
+		return 0;
+
+	if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1)
+		iaext = ~iaext;
+	if (WARN_ON(iaext))
+		return 0;
+
+	/* If it is smallest granule, then nothing to do */
+	if (size == ARM_LPAE_BLOCK_SIZE(ARM_LPAE_MAX_LEVELS - 1, data))
+		return size;
+
+	return __arm_lpae_split_block(data, iova, size, lvl, ptep);
+}
+
 static void arm_lpae_restrict_pgsizes(struct io_pgtable_cfg *cfg)
 {
 	unsigned long granule, page_sizes;
@@ -757,6 +878,7 @@ arm_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg)
 		.map		= arm_lpae_map,
 		.unmap		= arm_lpae_unmap,
 		.iova_to_phys	= arm_lpae_iova_to_phys,
+		.split_block	= arm_lpae_split_block,
 	};
 
 	return data;
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index 92274705b772..eba6c6ccbe49 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -167,6 +167,8 @@ struct io_pgtable_ops {
 			size_t size, struct iommu_iotlb_gather *gather);
 	phys_addr_t (*iova_to_phys)(struct io_pgtable_ops *ops,
 				    unsigned long iova);
+	size_t (*split_block)(struct io_pgtable_ops *ops, unsigned long iova,
+			      size_t size);
 };
 
 /**

From patchwork Fri May 7 10:22:02 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12244301
From: Keqian Zhu
To: Robin Murphy, Will Deacon, Joerg Roedel, Jean-Philippe Brucker, Lu Baolu, Yi Sun, Tian Kevin
CC: Alex Williamson, Kirti Wankhede, Cornelia Huck, Jonathan Cameron
Subject: [RFC PATCH v4 04/13] iommu/io-pgtable-arm: Add and realize merge_page ops
Date: Fri, 7 May 2021 18:22:02 +0800
Message-ID: <20210507102211.8836-5-zhukeqian1@huawei.com>
In-Reply-To: <20210507102211.8836-1-zhukeqian1@huawei.com>
From: Kunkun Jiang

If block (largepage) mappings are split when dirty log tracking starts, then
when dirty log tracking stops we need to recover them for better DMA
performance. This recovers the block mappings and unmaps the span of page
mappings. The BBML1 or BBML2 feature is required.

Merging pages is designed to be used only by dirty log tracking, which does
not run concurrently with other pgtable ops that access the underlying page
table, so no race condition exists.

Co-developed-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
 drivers/iommu/io-pgtable-arm.c | 78 ++++++++++++++++++++++++++++++++++
 include/linux/io-pgtable.h     |  2 +
 2 files changed, 80 insertions(+)

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 664a9548b199..b9f6e3370032 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -800,6 +800,83 @@ static size_t arm_lpae_split_block(struct io_pgtable_ops *ops,
 	return __arm_lpae_split_block(data, iova, size, lvl, ptep);
 }
 
+static size_t __arm_lpae_merge_page(struct arm_lpae_io_pgtable *data,
+				    unsigned long iova, phys_addr_t paddr,
+				    size_t size, int lvl, arm_lpae_iopte *ptep,
+				    arm_lpae_iopte prot)
+{
+	arm_lpae_iopte pte, *tablep;
+	struct io_pgtable *iop = &data->iop;
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+
+	if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS))
+		return 0;
+
+	ptep += ARM_LPAE_LVL_IDX(iova, lvl, data);
+	pte = READ_ONCE(*ptep);
+	if (WARN_ON(!pte))
+		return 0;
+
+	if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) {
+		if (iopte_leaf(pte, lvl, iop->fmt))
+			return size;
+
+		/* Race does not exist */
+		if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_BBML1) {
+			prot |= ARM_LPAE_PTE_NT;
+			__arm_lpae_init_pte(data, paddr, prot, lvl, ptep);
+			io_pgtable_tlb_flush_walk(iop, iova, size,
+						  ARM_LPAE_GRANULE(data));
+
+			prot &= ~(ARM_LPAE_PTE_NT);
+			__arm_lpae_init_pte(data, paddr, prot, lvl, ptep);
+		} else {
+			__arm_lpae_init_pte(data, paddr, prot, lvl, ptep);
+		}
+
+		tablep = iopte_deref(pte, data);
+		__arm_lpae_free_pgtable(data, lvl + 1, tablep);
+		return size;
+	} else if (iopte_leaf(pte, lvl, iop->fmt)) {
+		/* The size is too small, already merged */
+		return size;
+	}
+
+	/* Keep on walkin */
+	ptep = iopte_deref(pte, data);
+	return __arm_lpae_merge_page(data, iova, paddr, size, lvl + 1, ptep, prot);
+}
+
+static size_t arm_lpae_merge_page(struct io_pgtable_ops *ops, unsigned long iova,
+				  phys_addr_t paddr, size_t size, int iommu_prot)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);
+	struct io_pgtable_cfg *cfg = &data->iop.cfg;
+	arm_lpae_iopte *ptep = data->pgd;
+	int lvl = data->start_level;
+	arm_lpae_iopte prot;
+	long iaext = (s64)iova >> cfg->ias;
+
+	if (WARN_ON(!size || (size & cfg->pgsize_bitmap) != size))
+		return 0;
+
+	if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1)
+		iaext = ~iaext;
+	if (WARN_ON(iaext || paddr >> cfg->oas))
+		return 0;
+
+	/* If no access, then nothing to do */
+	if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE)))
+		return size;
+
+	/* If it is smallest granule, then nothing to do */
+	if (size == ARM_LPAE_BLOCK_SIZE(ARM_LPAE_MAX_LEVELS - 1, data))
+		return size;
+
+	prot = arm_lpae_prot_to_pte(data, iommu_prot);
+	return __arm_lpae_merge_page(data, iova, paddr, size, lvl, ptep, prot);
+}
+
 static void arm_lpae_restrict_pgsizes(struct io_pgtable_cfg *cfg)
 {
 	unsigned long granule, page_sizes;
@@ -879,6 +956,7 @@ arm_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg)
 		.unmap		= arm_lpae_unmap,
 		.iova_to_phys	= arm_lpae_iova_to_phys,
 		.split_block	= arm_lpae_split_block,
+		.merge_page	= arm_lpae_merge_page,
	};
 
 	return data;
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index eba6c6ccbe49..e77576d946a2 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -169,6 +169,8 @@ struct io_pgtable_ops {
 			    unsigned long iova);
 	size_t (*split_block)(struct io_pgtable_ops *ops, unsigned long iova,
 			      size_t size);
+	size_t (*merge_page)(struct io_pgtable_ops *ops, unsigned long iova,
+			     phys_addr_t phys, size_t size, int prot);
 };
 
 /**
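For illustration only (not code from this series): the symmetric path when
tracking stops could merge a previously split range back into one 1G block
via the new op, assuming the range is physically contiguous at 'paddr' and
1G is in the configured pgsize_bitmap. The function name is made up.

/* Editor's sketch: restore one 1G block mapping after tracking stops */
static int example_merge_one_block(struct io_pgtable_ops *ops,
				   unsigned long iova, phys_addr_t paddr)
{
	/* merge_page() returns the size it handled, or 0 on failure */
	size_t n = ops->merge_page(ops, iova, paddr, SZ_1G,
				   IOMMU_READ | IOMMU_WRITE);

	return n == SZ_1G ? 0 : -EFAULT;
}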
From patchwork Fri May 7 10:22:03 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12244303
From: Keqian Zhu
To: Robin Murphy, Will Deacon, Joerg Roedel, Jean-Philippe Brucker, Lu Baolu, Yi Sun, Tian Kevin
CC: Alex Williamson, Kirti Wankhede, Cornelia Huck, Jonathan Cameron
Subject: [RFC PATCH v4 05/13] iommu/io-pgtable-arm: Add and realize sync_dirty_log ops
Date: Fri, 7 May 2021 18:22:03 +0800
Message-ID: <20210507102211.8836-6-zhukeqian1@huawei.com>
In-Reply-To: <20210507102211.8836-1-zhukeqian1@huawei.com>

From: Kunkun Jiang

During dirty log tracking, userspace will try to retrieve the dirty log from
the IOMMU if it supports hardware dirty log tracking. This adds a sync op
that scans leaf TTDs and treats a TTD as dirty if it is writable. As we only
set the DBM bit for stage-1 mappings, the check is whether AP[2] is not set.
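For illustration only (editor's sketch, mirroring the logic added in this
patch within the io-pgtable-arm.c internals): a writable leaf (AP[2] clear)
is reported by setting the bits that cover it in the user-provided bitmap.
The helper name is hypothetical; arm_lpae_iopte, ARM_LPAE_PTE_AP_RDONLY and
bitmap_set() are the driver/kernel facilities the patch already uses.

/* Editor's sketch: reflect one leaf TTD into the dirty bitmap */
static void example_note_leaf(arm_lpae_iopte pte, unsigned long iova,
			      size_t size, unsigned long *bitmap,
			      unsigned long base_iova,
			      unsigned long bitmap_pgshift)
{
	/* AP[2] set means the leaf is still write-protected, i.e. clean */
	if (pte & ARM_LPAE_PTE_AP_RDONLY)
		return;

	/* Writable (AP[2] clear) means DMA may have dirtied it */
	bitmap_set(bitmap, (iova - base_iova) >> bitmap_pgshift,
		   size >> bitmap_pgshift);
}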
Co-developed-by: Keqian Zhu Signed-off-by: Kunkun Jiang --- drivers/iommu/io-pgtable-arm.c | 89 ++++++++++++++++++++++++++++++++++ include/linux/io-pgtable.h | 4 ++ 2 files changed, 93 insertions(+) diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c index b9f6e3370032..155d440099ab 100644 --- a/drivers/iommu/io-pgtable-arm.c +++ b/drivers/iommu/io-pgtable-arm.c @@ -877,6 +877,94 @@ static size_t arm_lpae_merge_page(struct io_pgtable_ops *ops, unsigned long iova return __arm_lpae_merge_page(data, iova, paddr, size, lvl, ptep, prot); } +static int __arm_lpae_sync_dirty_log(struct arm_lpae_io_pgtable *data, + unsigned long iova, size_t size, + int lvl, arm_lpae_iopte *ptep, + unsigned long *bitmap, + unsigned long base_iova, + unsigned long bitmap_pgshift) +{ + arm_lpae_iopte pte; + struct io_pgtable *iop = &data->iop; + size_t base, next_size; + unsigned long offset; + int nbits, ret; + + if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS)) + return -EINVAL; + + ptep += ARM_LPAE_LVL_IDX(iova, lvl, data); + pte = READ_ONCE(*ptep); + if (WARN_ON(!pte)) + return -EINVAL; + + if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) { + if (iopte_leaf(pte, lvl, iop->fmt)) { + if (pte & ARM_LPAE_PTE_AP_RDONLY) + return 0; + + /* It is writable, set the bitmap */ + nbits = size >> bitmap_pgshift; + offset = (iova - base_iova) >> bitmap_pgshift; + bitmap_set(bitmap, offset, nbits); + return 0; + } + /* Current level is table, traverse next level */ + next_size = ARM_LPAE_BLOCK_SIZE(lvl + 1, data); + ptep = iopte_deref(pte, data); + for (base = 0; base < size; base += next_size) { + ret = __arm_lpae_sync_dirty_log(data, iova + base, + next_size, lvl + 1, ptep, bitmap, + base_iova, bitmap_pgshift); + if (ret) + return ret; + } + return 0; + } else if (iopte_leaf(pte, lvl, iop->fmt)) { + if (pte & ARM_LPAE_PTE_AP_RDONLY) + return 0; + + /* Though the size is too small, also set bitmap */ + nbits = size >> bitmap_pgshift; + offset = (iova - base_iova) >> bitmap_pgshift; + bitmap_set(bitmap, offset, nbits); + return 0; + } + + /* Keep on walkin */ + ptep = iopte_deref(pte, data); + return __arm_lpae_sync_dirty_log(data, iova, size, lvl + 1, ptep, + bitmap, base_iova, bitmap_pgshift); +} + +static int arm_lpae_sync_dirty_log(struct io_pgtable_ops *ops, + unsigned long iova, size_t size, + unsigned long *bitmap, + unsigned long base_iova, + unsigned long bitmap_pgshift) +{ + struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops); + struct io_pgtable_cfg *cfg = &data->iop.cfg; + arm_lpae_iopte *ptep = data->pgd; + int lvl = data->start_level; + long iaext = (s64)iova >> cfg->ias; + + if (WARN_ON(!size || (size & cfg->pgsize_bitmap) != size)) + return -EINVAL; + + if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1) + iaext = ~iaext; + if (WARN_ON(iaext)) + return -EINVAL; + + if (data->iop.fmt != ARM_64_LPAE_S1 && + data->iop.fmt != ARM_32_LPAE_S1) + return -EINVAL; + + return __arm_lpae_sync_dirty_log(data, iova, size, lvl, ptep, + bitmap, base_iova, bitmap_pgshift); +} + static void arm_lpae_restrict_pgsizes(struct io_pgtable_cfg *cfg) { unsigned long granule, page_sizes; @@ -957,6 +1045,7 @@ arm_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg) .iova_to_phys = arm_lpae_iova_to_phys, .split_block = arm_lpae_split_block, .merge_page = arm_lpae_merge_page, + .sync_dirty_log = arm_lpae_sync_dirty_log, }; return data; diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h index e77576d946a2..329fa99d9d96 100644 --- a/include/linux/io-pgtable.h +++ b/include/linux/io-pgtable.h @@ -171,6 +171,10 @@ 
struct io_pgtable_ops {
 			      size_t size);
 	size_t (*merge_page)(struct io_pgtable_ops *ops, unsigned long iova,
 			     phys_addr_t phys, size_t size, int prot);
+	int (*sync_dirty_log)(struct io_pgtable_ops *ops,
+			      unsigned long iova, size_t size,
+			      unsigned long *bitmap, unsigned long base_iova,
+			      unsigned long bitmap_pgshift);
 };
 
 /**

From patchwork Fri May 7 10:22:04 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12244309
From: Keqian Zhu
To: Robin Murphy, Will Deacon, Joerg Roedel, Jean-Philippe Brucker, Lu Baolu, Yi Sun, Tian Kevin
CC: Alex Williamson, Kirti Wankhede, Cornelia Huck, Jonathan Cameron
Subject: [RFC PATCH v4 06/13] iommu/io-pgtable-arm: Add and realize clear_dirty_log ops
Date: Fri, 7 May 2021 18:22:04 +0800
Message-ID: <20210507102211.8836-7-zhukeqian1@huawei.com>
In-Reply-To: <20210507102211.8836-1-zhukeqian1@huawei.com>

From: Kunkun Jiang

After the dirty log is retrieved, userspace should clear the dirty log to
re-enable dirty log tracking for the dirtied pages. This clears the dirty
state (as we only set the DBM bit for stage-1 mappings, this means setting
the AP[2] bit) of those leaf TTDs that are specified by the user-provided
bitmap.
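For illustration only (editor's sketch of the per-leaf clear step inside the
io-pgtable-arm.c internals): clearing re-arms tracking by write-protecting
the leaf again, so the next DMA write will clear AP[2] once more. The helper
name is hypothetical; __arm_lpae_set_pte() and ARM_LPAE_PTE_AP_RDONLY are
the driver internals this patch already calls.

/* Editor's sketch: write-protect one leaf TTD again */
static void example_clear_leaf(arm_lpae_iopte *ptep, arm_lpae_iopte pte,
			       struct io_pgtable_cfg *cfg)
{
	/* Already clean (write-protected), nothing to do */
	if (pte & ARM_LPAE_PTE_AP_RDONLY)
		return;

	/* Set AP[2] so hardware can mark the page dirty again on write */
	pte |= ARM_LPAE_PTE_AP_RDONLY;
	__arm_lpae_set_pte(ptep, pte, cfg);
}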
Co-developed-by: Keqian Zhu Signed-off-by: Kunkun Jiang --- drivers/iommu/io-pgtable-arm.c | 93 ++++++++++++++++++++++++++++++++++ include/linux/io-pgtable.h | 4 ++ 2 files changed, 97 insertions(+) diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c index 155d440099ab..2b41b9d0faa3 100644 --- a/drivers/iommu/io-pgtable-arm.c +++ b/drivers/iommu/io-pgtable-arm.c @@ -965,6 +965,98 @@ static int arm_lpae_sync_dirty_log(struct io_pgtable_ops *ops, bitmap, base_iova, bitmap_pgshift); } +static int __arm_lpae_clear_dirty_log(struct arm_lpae_io_pgtable *data, + unsigned long iova, size_t size, + int lvl, arm_lpae_iopte *ptep, + unsigned long *bitmap, + unsigned long base_iova, + unsigned long bitmap_pgshift) +{ + arm_lpae_iopte pte; + struct io_pgtable *iop = &data->iop; + unsigned long offset; + size_t base, next_size; + int nbits, ret, i; + + if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS)) + return -EINVAL; + + ptep += ARM_LPAE_LVL_IDX(iova, lvl, data); + pte = READ_ONCE(*ptep); + if (WARN_ON(!pte)) + return -EINVAL; + + if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) { + if (iopte_leaf(pte, lvl, iop->fmt)) { + if (pte & ARM_LPAE_PTE_AP_RDONLY) + return 0; + + /* Ensure all corresponding bits are set */ + nbits = size >> bitmap_pgshift; + offset = (iova - base_iova) >> bitmap_pgshift; + for (i = offset; i < offset + nbits; i++) { + if (!test_bit(i, bitmap)) + return 0; + } + + /* Race does not exist */ + pte |= ARM_LPAE_PTE_AP_RDONLY; + __arm_lpae_set_pte(ptep, pte, &iop->cfg); + return 0; + } + /* Current level is table, traverse next level */ + next_size = ARM_LPAE_BLOCK_SIZE(lvl + 1, data); + ptep = iopte_deref(pte, data); + for (base = 0; base < size; base += next_size) { + ret = __arm_lpae_clear_dirty_log(data, iova + base, + next_size, lvl + 1, ptep, bitmap, + base_iova, bitmap_pgshift); + if (ret) + return ret; + } + return 0; + } else if (iopte_leaf(pte, lvl, iop->fmt)) { + /* Though the size is too small, it is already clean */ + if (pte & ARM_LPAE_PTE_AP_RDONLY) + return 0; + + return -EINVAL; + } + + /* Keep on walkin */ + ptep = iopte_deref(pte, data); + return __arm_lpae_clear_dirty_log(data, iova, size, lvl + 1, ptep, + bitmap, base_iova, bitmap_pgshift); +} + +static int arm_lpae_clear_dirty_log(struct io_pgtable_ops *ops, + unsigned long iova, size_t size, + unsigned long *bitmap, + unsigned long base_iova, + unsigned long bitmap_pgshift) +{ + struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops); + struct io_pgtable_cfg *cfg = &data->iop.cfg; + arm_lpae_iopte *ptep = data->pgd; + int lvl = data->start_level; + long iaext = (s64)iova >> cfg->ias; + + if (WARN_ON(!size || (size & cfg->pgsize_bitmap) != size)) + return -EINVAL; + + if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1) + iaext = ~iaext; + if (WARN_ON(iaext)) + return -EINVAL; + + if (data->iop.fmt != ARM_64_LPAE_S1 && + data->iop.fmt != ARM_32_LPAE_S1) + return -EINVAL; + + return __arm_lpae_clear_dirty_log(data, iova, size, lvl, ptep, + bitmap, base_iova, bitmap_pgshift); +} + static void arm_lpae_restrict_pgsizes(struct io_pgtable_cfg *cfg) { unsigned long granule, page_sizes; @@ -1046,6 +1138,7 @@ arm_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg) .split_block = arm_lpae_split_block, .merge_page = arm_lpae_merge_page, .sync_dirty_log = arm_lpae_sync_dirty_log, + .clear_dirty_log = arm_lpae_clear_dirty_log, }; return data; diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h index 329fa99d9d96..4781407d5e2d 100644 --- a/include/linux/io-pgtable.h +++ b/include/linux/io-pgtable.h @@ 
-175,6 +175,10 @@ struct io_pgtable_ops { unsigned long iova, size_t size, unsigned long *bitmap, unsigned long base_iova, unsigned long bitmap_pgshift); + int (*clear_dirty_log)(struct io_pgtable_ops *ops, + unsigned long iova, size_t size, + unsigned long *bitmap, unsigned long base_iova, + unsigned long bitmap_pgshift); }; /** From patchwork Fri May 7 10:22:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhukeqian X-Patchwork-Id: 12244305 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 922B8C433ED for ; Fri, 7 May 2021 10:25:35 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 1BF306145E for ; Fri, 7 May 2021 10:25:35 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1BF306145E Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding :Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID:Date: Subject:CC:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=fG65ZicE6Qb4RPvUPaJwzANqXBoioOKnE8fj7E1c1wQ=; b=Xzome5gYJEpzy2kp63PMcHVL1 iG5HdtTpoI5GUZ+woDL43U5c7UJ9qVh+B1ZONHbY3hAfGgmaCMPvsqL617rl+7orCsY6e8j6Tqm5C fFHrCSzGlEoJdi/6ACRMNiNDKNPIPuyf2kEOUs+HpEazFUcKX0EazChfXV/RZDKNlgGyPiPS8sQML r3atlnbeT7Z3K6PBlTJFMPZzO3NU7ImCQKJoTxOKQfd1neMdXA+By/X7mlZHK+SzLTr34vJDnBbjl fdOivo5yKZZaUCw3ldqBv7beernysqk5DeQWIfk4b8zb598ckLUEbkLIOb8krx4sE8lZ7HnVjjzM3 D/Yd6fZSw==; Received: from localhost ([::1] helo=desiato.infradead.org) by desiato.infradead.org with esmtp (Exim 4.94 #2 (Red Hat Linux)) id 1lexdy-006nYi-9a; Fri, 07 May 2021 10:23:38 +0000 Received: from bombadil.infradead.org ([2607:7c80:54:e::133]) by desiato.infradead.org with esmtps (Exim 4.94 #2 (Red Hat Linux)) id 1lexdD-006nNj-HJ for linux-arm-kernel@desiato.infradead.org; Fri, 07 May 2021 10:22:51 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Type:MIME-Version:References: In-Reply-To:Message-ID:Date:Subject:CC:To:From:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=3aXY155hJgYDZaAUgctenBOrpzp+PI6Ld/uUc6RKQ1Q=; b=2cVQCFbqMYD47VygV27ZOw33SX Tdfl3reP5AWet2679eaGqPKSlnYyXYslCvdhVRHbfJ99mIpB7vt0u9gt9oqe6loSidhPSt9YDNf2Z +Cmg94E4Z+BDw1kb7o5KiT43GIU1tahBoL52l4j74acbsRBBQYo/sio6LQq5znB0q/8uidOAHIae0 Jbgxgw8GlOIWMCPwA1TH5i36Rgw+Ezb4Fm6b2ebmxMvv88gqJnZ8Ilq6Z/OGSwfBEijcK4Io2RsCD 
From: Keqian Zhu
To: Robin Murphy, Will Deacon, Joerg Roedel, Jean-Philippe Brucker, Lu Baolu, Yi Sun, Tian Kevin
CC: Alex Williamson, Kirti Wankhede, Cornelia Huck, Jonathan Cameron
Subject: [RFC PATCH v4 07/13] iommu/arm-smmu-v3: Add support for Hardware Translation Table Update
Date: Fri, 7 May 2021 18:22:05 +0800
Message-ID: <20210507102211.8836-8-zhukeqian1@huawei.com>
In-Reply-To: <20210507102211.8836-1-zhukeqian1@huawei.com>

From: Jean-Philippe Brucker

If the SMMU supports it and the kernel was built with HTTU support, enable hardware update of access and dirty flags. This is essential for shared page tables, to reduce the number of access faults on the fault queue. Normal DMA with io-pgtables doesn't currently use the access or dirty flags. We can enable HTTU even if CPUs don't support it, because the kernel always checks for HW dirty bit and updates the PTE flags atomically.
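For context, IDR0.HTTU is a two-bit field: 0 means no hardware update, 1 means access flag only, and 2 means access plus dirty state (the values the patch encodes as IDR0_HTTU_ACCESS and IDR0_HTTU_ACCESS_DIRTY). A standalone sketch of that decode, using placeholder flag names rather than the driver's:

#define EX_FEAT_HA	(1u << 0)	/* hardware access-flag update */
#define EX_FEAT_HD	(1u << 1)	/* hardware dirty-state update */

static unsigned int ex_httu_to_features(unsigned int httu)
{
	unsigned int feat = 0;

	switch (httu) {
	case 2:			/* access + dirty */
		feat |= EX_FEAT_HD;
		/* fall through */
	case 1:			/* access only */
		feat |= EX_FEAT_HA;
		break;
	default:		/* 0: no hardware TTD update */
		break;
	}

	return feat;		/* httu == 2 yields EX_FEAT_HA | EX_FEAT_HD */
}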
Signed-off-by: Jean-Philippe Brucker --- .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 2 + drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 41 ++++++++++++++++++- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 8 ++++ 3 files changed, 50 insertions(+), 1 deletion(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c index bb251cab61f3..ae075e675892 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c @@ -121,10 +121,12 @@ static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm) if (err) goto out_free_asid; + /* HA and HD will be filtered out later if not supported by the SMMU */ tcr = FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, 64ULL - vabits_actual) | FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, ARM_LPAE_TCR_RGN_WBWA) | FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, ARM_LPAE_TCR_RGN_WBWA) | FIELD_PREP(CTXDESC_CD_0_TCR_SH0, ARM_LPAE_TCR_SH_IS) | + CTXDESC_CD_0_TCR_HA | CTXDESC_CD_0_TCR_HD | CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64; switch (PAGE_SIZE) { diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 54b2f27b81d4..4ac59a89bc76 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -1010,10 +1010,17 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid, * this substream's traffic */ } else { /* (1) and (2) */ + u64 tcr = cd->tcr; + cdptr[1] = cpu_to_le64(cd->ttbr & CTXDESC_CD_1_TTB0_MASK); cdptr[2] = 0; cdptr[3] = cpu_to_le64(cd->mair); + if (!(smmu->features & ARM_SMMU_FEAT_HD)) + tcr &= ~CTXDESC_CD_0_TCR_HD; + if (!(smmu->features & ARM_SMMU_FEAT_HA)) + tcr &= ~CTXDESC_CD_0_TCR_HA; + /* * STE is live, and the SMMU might read dwords of this CD in any * order. Ensure that it observes valid values before reading @@ -1021,7 +1028,7 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid, */ arm_smmu_sync_cd(smmu_domain, ssid, true); - val = cd->tcr | + val = tcr | #ifdef __BIG_ENDIAN CTXDESC_CD_0_ENDI | #endif @@ -3242,6 +3249,28 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass) return 0; } +static void arm_smmu_get_httu(struct arm_smmu_device *smmu, u32 reg) +{ + u32 fw_features = smmu->features & (ARM_SMMU_FEAT_HA | ARM_SMMU_FEAT_HD); + u32 features = 0; + + switch (FIELD_GET(IDR0_HTTU, reg)) { + case IDR0_HTTU_ACCESS_DIRTY: + features |= ARM_SMMU_FEAT_HD; + fallthrough; + case IDR0_HTTU_ACCESS: + features |= ARM_SMMU_FEAT_HA; + } + + if (smmu->dev->of_node) + smmu->features |= features; + else if (features != fw_features) + /* ACPI IORT sets the HTTU bits */ + dev_warn(smmu->dev, + "IDR0.HTTU overridden by FW configuration (0x%x)\n", + fw_features); +} + static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu) { u32 reg; @@ -3302,6 +3331,8 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu) smmu->features |= ARM_SMMU_FEAT_E2H; } + arm_smmu_get_httu(smmu, reg); + /* * The coherency feature as set by FW is used in preference to the ID * register, but warn on mismatch. 
@@ -3487,6 +3518,14 @@ static int arm_smmu_device_acpi_probe(struct platform_device *pdev, if (iort_smmu->flags & ACPI_IORT_SMMU_V3_COHACC_OVERRIDE) smmu->features |= ARM_SMMU_FEAT_COHERENCY; + switch (FIELD_GET(ACPI_IORT_SMMU_V3_HTTU_OVERRIDE, iort_smmu->flags)) { + case IDR0_HTTU_ACCESS_DIRTY: + smmu->features |= ARM_SMMU_FEAT_HD; + fallthrough; + case IDR0_HTTU_ACCESS: + smmu->features |= ARM_SMMU_FEAT_HA; + } + return 0; } #else diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index 46e8c49214a8..3edcd31b046e 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -33,6 +33,9 @@ #define IDR0_ASID16 (1 << 12) #define IDR0_ATS (1 << 10) #define IDR0_HYP (1 << 9) +#define IDR0_HTTU GENMASK(7, 6) +#define IDR0_HTTU_ACCESS 1 +#define IDR0_HTTU_ACCESS_DIRTY 2 #define IDR0_COHACC (1 << 4) #define IDR0_TTF GENMASK(3, 2) #define IDR0_TTF_AARCH64 2 @@ -285,6 +288,9 @@ #define CTXDESC_CD_0_TCR_IPS GENMASK_ULL(34, 32) #define CTXDESC_CD_0_TCR_TBI0 (1ULL << 38) +#define CTXDESC_CD_0_TCR_HA (1UL << 43) +#define CTXDESC_CD_0_TCR_HD (1UL << 42) + #define CTXDESC_CD_0_AA64 (1UL << 41) #define CTXDESC_CD_0_S (1UL << 44) #define CTXDESC_CD_0_R (1UL << 45) @@ -605,6 +611,8 @@ struct arm_smmu_device { #define ARM_SMMU_FEAT_BTM (1 << 16) #define ARM_SMMU_FEAT_SVA (1 << 17) #define ARM_SMMU_FEAT_E2H (1 << 18) +#define ARM_SMMU_FEAT_HA (1 << 19) +#define ARM_SMMU_FEAT_HD (1 << 20) u32 features; #define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0) From patchwork Fri May 7 10:22:06 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhukeqian X-Patchwork-Id: 12244297 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E40ACC433ED for ; Fri, 7 May 2021 10:24:30 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 903B2613F0 for ; Fri, 7 May 2021 10:24:30 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 903B2613F0 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding :Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID:Date: Subject:CC:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=37MffObc4EbDNc3l2NEEsyV2dA1fyXq4wCvMFQvezOc=; b=Qoay0Voq3sbMrGtDy5cIzE7Yw /tLj/zQLuzojXSNVKB8cm+JYF00H6x9h7ZAxL2/cXFHoVuLc46j+HBrJaOgFkUJCJVjCuKqV+f21E 
From: Keqian Zhu
To: Robin Murphy, Will Deacon, Joerg Roedel, Jean-Philippe Brucker, Lu Baolu, Yi Sun, Tian Kevin
CC: Alex Williamson, Kirti Wankhede, Cornelia Huck, Jonathan Cameron
Subject: [RFC PATCH v4 08/13] iommu/arm-smmu-v3: Enable HTTU for stage1 with io-pgtable mapping
Date: Fri, 7 May 2021 18:22:06 +0800
Message-ID: <20210507102211.8836-9-zhukeqian1@huawei.com>
In-Reply-To: <20210507102211.8836-1-zhukeqian1@huawei.com>

From: Kunkun Jiang

As nested mode is not upstreamed yet, we only aim to support dirty log tracking for stage 1 with io-pgtable mapping (that is, SVA mapping is not supported). If HTTU is supported, we enable the HA/HD bits in the SMMU CD and pass the ARM_HD quirk to io-pgtable.
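The handoff pattern here is simple: the SMMU driver probes the hardware feature once, then tells the page-table code about it through a per-domain quirk. A rough standalone sketch of that shape (the struct and flag names are illustrative, not the kernel's io_pgtable_cfg):

#include <stdbool.h>

struct ex_pgtbl_cfg {
	unsigned long quirks;
};

#define EX_QUIRK_HW_DIRTY	(1UL << 0)	/* ask io-pgtable for DBM-tracked PTEs */

static void ex_domain_finalise(struct ex_pgtbl_cfg *cfg, bool smmu_has_hd)
{
	if (smmu_has_hd)
		cfg->quirks |= EX_QUIRK_HW_DIRTY;
	/* the page-table allocator would consume cfg->quirks from here on */
}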
Co-developed-by: Keqian Zhu Signed-off-by: Kunkun Jiang --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 4ac59a89bc76..c42e59655fd0 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -1942,6 +1942,7 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain, FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, tcr->orgn) | FIELD_PREP(CTXDESC_CD_0_TCR_SH0, tcr->sh) | FIELD_PREP(CTXDESC_CD_0_TCR_IPS, tcr->ips) | + CTXDESC_CD_0_TCR_HA | CTXDESC_CD_0_TCR_HD | CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64; cfg->cd.mair = pgtbl_cfg->arm_lpae_s1_cfg.mair; @@ -2047,6 +2048,8 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain, if (!iommu_get_dma_strict(domain)) pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT; + if (smmu->features & ARM_SMMU_FEAT_HD) + pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_ARM_HD; pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain); if (!pgtbl_ops) From patchwork Fri May 7 10:22:07 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhukeqian X-Patchwork-Id: 12244321 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 93206C433B4 for ; Fri, 7 May 2021 10:27:32 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 16DB66143F for ; Fri, 7 May 2021 10:27:32 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 16DB66143F Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding :Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID:Date: Subject:CC:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=zZlpyRP7q+ztV7yXduyW7ZgOsxQ89VcsQ1i4/5AKbpo=; b=NXp8ZHLTEcd4HVVf50HLeDZa5 R882gortqbNMIhVLk8L6ABSgofiTCGYtOOuL3IwmnDTw+9UmgLib+k0FG1X4NMgpgnzoYEnjVf5tI 0OT3BQF2IXf5loxcbKVw+fiqdVnhy6QfQWDt3gPPf2L1MngVyCMljoYCzhDwluhxKdeIok2u9lhur paxPNjUR+8DdOhRuLE3KLa/Rse9ZpqLJyPNEtyMBJ3imJXNSjfyJqcUAN4GJwaThligeVaJfagJ4l C0DZ4YSWmdjKl7T9nTjjQsIZ+mCoyf3BVB5ClP2/8OL5T8mlv8l0u5+IWltYQIH84ddB6qkmavChD UqYvAVD+w==; Received: from localhost ([::1] helo=desiato.infradead.org) by desiato.infradead.org with esmtp (Exim 4.94 #2 (Red Hat Linux)) id 1lexgI-006oUN-UC; Fri, 07 May 2021 10:26:03 +0000 Received: from bombadil.infradead.org ([2607:7c80:54:e::133]) by 
From: Keqian Zhu
To: Robin Murphy, Will Deacon, Joerg Roedel, Jean-Philippe Brucker, Lu Baolu, Yi Sun, Tian Kevin
CC: Alex Williamson, Kirti Wankhede, Cornelia Huck, Jonathan Cameron
Subject: [RFC PATCH v4 09/13] iommu/arm-smmu-v3: Add feature detection for BBML
Date: Fri, 7 May 2021 18:22:07 +0800
Message-ID: <20210507102211.8836-10-zhukeqian1@huawei.com>
In-Reply-To: <20210507102211.8836-1-zhukeqian1@huawei.com>

From: Kunkun Jiang

This detects the BBML feature and, if the SMMU supports it, passes the corresponding BBMLx quirk to io-pgtable.
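BBML (break-before-make level) describes how much of the usual break-before-make sequence can be skipped when the block size of a live mapping changes, which is what later allows split_block/merge_page to run while DMA continues. A standalone sketch of turning the reported level into a quirk, matching the three-level convention probed from IDR3 (names are placeholders):

enum ex_bbml_quirk {
	EX_QUIRK_BBM_NONE,	/* level 0: full break-before-make required */
	EX_QUIRK_BBML1,		/* level 1: block size change tolerated with restrictions */
	EX_QUIRK_BBML2,		/* level 2: block size change tolerated directly */
};

static int ex_bbml_to_quirk(unsigned int level, enum ex_bbml_quirk *quirk)
{
	switch (level) {
	case 0:
		*quirk = EX_QUIRK_BBM_NONE;
		return 0;
	case 1:
		*quirk = EX_QUIRK_BBML1;
		return 0;
	case 2:
		*quirk = EX_QUIRK_BBML2;
		return 0;
	default:
		return -1;	/* reserved value: treat as unsupported */
	}
}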
Co-developed-by: Keqian Zhu Signed-off-by: Kunkun Jiang --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 19 +++++++++++++++++++ drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 6 ++++++ 2 files changed, 25 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index c42e59655fd0..3a2dc3177180 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2051,6 +2051,11 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain, if (smmu->features & ARM_SMMU_FEAT_HD) pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_ARM_HD; + if (smmu->features & ARM_SMMU_FEAT_BBML1) + pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_ARM_BBML1; + else if (smmu->features & ARM_SMMU_FEAT_BBML2) + pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_ARM_BBML2; + pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain); if (!pgtbl_ops) return -ENOMEM; @@ -3419,6 +3424,20 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu) /* IDR3 */ reg = readl_relaxed(smmu->base + ARM_SMMU_IDR3); + switch (FIELD_GET(IDR3_BBML, reg)) { + case IDR3_BBML0: + break; + case IDR3_BBML1: + smmu->features |= ARM_SMMU_FEAT_BBML1; + break; + case IDR3_BBML2: + smmu->features |= ARM_SMMU_FEAT_BBML2; + break; + default: + dev_err(smmu->dev, "unknown/unsupported BBM behavior level\n"); + return -ENXIO; + } + if (FIELD_GET(IDR3_RIL, reg)) smmu->features |= ARM_SMMU_FEAT_RANGE_INV; diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index 3edcd31b046e..e3b6bdd292c9 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -54,6 +54,10 @@ #define IDR1_SIDSIZE GENMASK(5, 0) #define ARM_SMMU_IDR3 0xc +#define IDR3_BBML GENMASK(12, 11) +#define IDR3_BBML0 0 +#define IDR3_BBML1 1 +#define IDR3_BBML2 2 #define IDR3_RIL (1 << 10) #define ARM_SMMU_IDR5 0x14 @@ -613,6 +617,8 @@ struct arm_smmu_device { #define ARM_SMMU_FEAT_E2H (1 << 18) #define ARM_SMMU_FEAT_HA (1 << 19) #define ARM_SMMU_FEAT_HD (1 << 20) +#define ARM_SMMU_FEAT_BBML1 (1 << 21) +#define ARM_SMMU_FEAT_BBML2 (1 << 22) u32 features; #define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0) From patchwork Fri May 7 10:22:08 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhukeqian X-Patchwork-Id: 12244329 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BEB8EC433ED for ; Fri, 7 May 2021 10:28:02 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 3443A613F0 for ; Fri, 7 May 2021 10:28:02 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3443A613F0 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none 
From: Keqian Zhu
To: Robin Murphy, Will Deacon, Joerg Roedel, Jean-Philippe Brucker, Lu Baolu, Yi Sun, Tian Kevin
CC: Alex Williamson, Kirti Wankhede, Cornelia Huck, Jonathan Cameron
Subject: [RFC PATCH v4 10/13] iommu/arm-smmu-v3: Realize switch_dirty_log iommu ops
Date: Fri, 7 May 2021 18:22:08 +0800
Message-ID: <20210507102211.8836-11-zhukeqian1@huawei.com>
In-Reply-To: <20210507102211.8836-1-zhukeqian1@huawei.com>
Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Kunkun Jiang This realizes switch_dirty_log. In order to get finer dirty granule, it invokes arm_smmu_split_block when start dirty log, and invokes arm_smmu_merge_page() to recover block mapping when stop dirty log. Co-developed-by: Keqian Zhu Signed-off-by: Kunkun Jiang --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 142 ++++++++++++++++++++ drivers/iommu/iommu.c | 5 +- include/linux/iommu.h | 2 + 3 files changed, 147 insertions(+), 2 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 3a2dc3177180..6de81d6ab652 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2580,6 +2580,147 @@ static int arm_smmu_enable_nesting(struct iommu_domain *domain) return ret; } +static int arm_smmu_split_block(struct iommu_domain *domain, + unsigned long iova, size_t size) +{ + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + struct arm_smmu_device *smmu = smmu_domain->smmu; + struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops; + size_t handled_size; + + if (!(smmu->features & (ARM_SMMU_FEAT_BBML1 | ARM_SMMU_FEAT_BBML2))) { + dev_err(smmu->dev, "don't support BBML1/2, can't split block\n"); + return -ENODEV; + } + if (!ops || !ops->split_block) { + pr_err("io-pgtable don't realize split block\n"); + return -ENODEV; + } + + handled_size = ops->split_block(ops, iova, size); + if (handled_size != size) { + pr_err("split block failed\n"); + return -EFAULT; + } + + return 0; +} + +static int __arm_smmu_merge_page(struct iommu_domain *domain, + unsigned long iova, phys_addr_t paddr, + size_t size, int prot) +{ + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops; + size_t handled_size; + + if (!ops || !ops->merge_page) { + pr_err("io-pgtable don't realize merge page\n"); + return -ENODEV; + } + + while (size) { + size_t pgsize = iommu_pgsize(domain, iova | paddr, size); + + handled_size = ops->merge_page(ops, iova, paddr, pgsize, prot); + if (handled_size != pgsize) { + pr_err("merge page failed\n"); + return -EFAULT; + } + + pr_debug("merge handled: iova 0x%lx pa %pa size 0x%zx\n", + iova, &paddr, pgsize); + + iova += pgsize; + paddr += pgsize; + size -= pgsize; + } + + return 0; +} + +static int arm_smmu_merge_page(struct iommu_domain *domain, unsigned long iova, + size_t size, int prot) +{ + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + struct arm_smmu_device *smmu = smmu_domain->smmu; + struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops; + phys_addr_t phys; + dma_addr_t p, i; + size_t cont_size; + int ret = 0; + + if (!(smmu->features & (ARM_SMMU_FEAT_BBML1 | ARM_SMMU_FEAT_BBML2))) { + dev_err(smmu->dev, "don't support BBML1/2, can't merge page\n"); + return -ENODEV; + } + + if (!ops || !ops->iova_to_phys) + return -ENODEV; + + while (size) { + phys = ops->iova_to_phys(ops, iova); + cont_size = PAGE_SIZE; + p = phys + cont_size; + i = iova + cont_size; + + while (cont_size < size && p == ops->iova_to_phys(ops, i)) { + p += PAGE_SIZE; + i += PAGE_SIZE; + cont_size += PAGE_SIZE; + } + + if (cont_size != PAGE_SIZE) { + ret = __arm_smmu_merge_page(domain, iova, phys, + cont_size, prot); + if (ret) + break; + } + + iova += cont_size; + size -= cont_size; + } + + return ret; +} + +static int arm_smmu_switch_dirty_log(struct iommu_domain *domain, bool enable, + 
unsigned long iova, size_t size, int prot) +{ + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + struct arm_smmu_device *smmu = smmu_domain->smmu; + + if (!(smmu->features & ARM_SMMU_FEAT_HD)) + return -ENODEV; + if (smmu_domain->stage != ARM_SMMU_DOMAIN_S1) + return -EINVAL; + + if (enable) { + /* + * For SMMU, the hardware dirty management is always enabled if + * hardware supports HTTU HD. The action to start dirty log is + * spliting block mapping. + * + * We don't return error even if the split operation fail, as we + * can still track dirty at block granule, which is still a much + * better choice compared to full dirty policy. + */ + arm_smmu_split_block(domain, iova, size); + } else { + /* + * For SMMU, the hardware dirty management is always enabled if + * hardware supports HTTU HD. The action to stop dirty log is + * merging page mapping. + * + * We don't return error even if the merge operation fail, as it + * just effects performace of DMA transaction. + */ + arm_smmu_merge_page(domain, iova, size, prot); + } + + return 0; +} + static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args) { return iommu_fwspec_add_ids(dev, args->args, 1); @@ -2678,6 +2819,7 @@ static struct iommu_ops arm_smmu_ops = { .release_device = arm_smmu_release_device, .device_group = arm_smmu_device_group, .enable_nesting = arm_smmu_enable_nesting, + .switch_dirty_log = arm_smmu_switch_dirty_log, .of_xlate = arm_smmu_of_xlate, .get_resv_regions = arm_smmu_get_resv_regions, .put_resv_regions = generic_iommu_put_resv_regions, diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 0d15620d1e90..bb19df2317ed 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -2375,8 +2375,8 @@ phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova) } EXPORT_SYMBOL_GPL(iommu_iova_to_phys); -static size_t iommu_pgsize(struct iommu_domain *domain, - unsigned long addr_merge, size_t size) +size_t iommu_pgsize(struct iommu_domain *domain, + unsigned long addr_merge, size_t size) { unsigned int pgsize_idx; size_t pgsize; @@ -2406,6 +2406,7 @@ static size_t iommu_pgsize(struct iommu_domain *domain, return pgsize; } +EXPORT_SYMBOL_GPL(iommu_pgsize); static int __iommu_map(struct iommu_domain *domain, unsigned long iova, phys_addr_t paddr, size_t size, int prot, gfp_t gfp) diff --git a/include/linux/iommu.h b/include/linux/iommu.h index e0e40dda974d..0a77db4f397f 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -427,6 +427,8 @@ extern int iommu_sva_unbind_gpasid(struct iommu_domain *domain, struct device *dev, ioasid_t pasid); extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev); extern struct iommu_domain *iommu_get_dma_domain(struct device *dev); +extern size_t iommu_pgsize(struct iommu_domain *domain, + unsigned long addr_merge, size_t size); extern int iommu_map(struct iommu_domain *domain, unsigned long iova, phys_addr_t paddr, size_t size, int prot); extern int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova, From patchwork Fri May 7 10:22:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhukeqian X-Patchwork-Id: 12244319 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, 
From: Keqian Zhu
To: Robin Murphy, Will Deacon, Joerg Roedel, Jean-Philippe Brucker, Lu Baolu, Yi Sun, Tian Kevin
CC: Alex Williamson, Kirti Wankhede,
Cornelia Huck , Jonathan Cameron , , , , Subject: [RFC PATCH v4 11/13] iommu/arm-smmu-v3: Realize sync_dirty_log iommu ops Date: Fri, 7 May 2021 18:22:09 +0800 Message-ID: <20210507102211.8836-12-zhukeqian1@huawei.com> X-Mailer: git-send-email 2.8.4.windows.1 In-Reply-To: <20210507102211.8836-1-zhukeqian1@huawei.com> References: <20210507102211.8836-1-zhukeqian1@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.174.187.224] X-CFilter-Loop: Reflected X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20210507_032250_764491_ACA93904 X-CRM114-Status: GOOD ( 10.30 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Kunkun Jiang This realizes sync_dirty_log iommu ops based on sync_dirty_log io-pgtable ops. Co-developed-by: Keqian Zhu Signed-off-by: Kunkun Jiang --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 30 +++++++++++++++++++++ 1 file changed, 30 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 6de81d6ab652..3d3c0f8e2446 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2721,6 +2721,35 @@ static int arm_smmu_switch_dirty_log(struct iommu_domain *domain, bool enable, return 0; } +static int arm_smmu_sync_dirty_log(struct iommu_domain *domain, + unsigned long iova, size_t size, + unsigned long *bitmap, + unsigned long base_iova, + unsigned long bitmap_pgshift) +{ + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops; + struct arm_smmu_device *smmu = smmu_domain->smmu; + + if (!(smmu->features & ARM_SMMU_FEAT_HD)) + return -ENODEV; + if (smmu_domain->stage != ARM_SMMU_DOMAIN_S1) + return -EINVAL; + + if (!ops || !ops->sync_dirty_log) { + pr_err("io-pgtable don't realize sync dirty log\n"); + return -ENODEV; + } + + /* + * Flush iotlb to ensure all inflight transactions are completed. + * See doc IHI0070Da 3.13.4 "HTTU behavior summary". 
+ */ + arm_smmu_flush_iotlb_all(domain); + return ops->sync_dirty_log(ops, iova, size, bitmap, base_iova, + bitmap_pgshift); +} + static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args) { return iommu_fwspec_add_ids(dev, args->args, 1); @@ -2820,6 +2849,7 @@ static struct iommu_ops arm_smmu_ops = { .device_group = arm_smmu_device_group, .enable_nesting = arm_smmu_enable_nesting, .switch_dirty_log = arm_smmu_switch_dirty_log, + .sync_dirty_log = arm_smmu_sync_dirty_log, .of_xlate = arm_smmu_of_xlate, .get_resv_regions = arm_smmu_get_resv_regions, .put_resv_regions = generic_iommu_put_resv_regions, From patchwork Fri May 7 10:22:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhukeqian X-Patchwork-Id: 12244315 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6C47BC43461 for ; Fri, 7 May 2021 10:26:41 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id EB7CD6145E for ; Fri, 7 May 2021 10:26:40 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org EB7CD6145E Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding :Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID:Date: Subject:CC:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=l8mFC8o0Xq52i+hdhSJ1Dtj3e4/vp4KjIIc/D+7TGgY=; b=bDmxMMdm73UOmeC68/cHkZqRl oESIa9mRaWi/uhm1bP78Is2910JiaCIW1EcZK8QWVNms/l85vIGmVa1HgOdykb2cGLryniygVTQgR SFxkfQRXcT5GJRTKgjpHMc8VXPiCIK/Y1XiVmqC3i6TKsz5Jx195butjc3RpZjCq5x26X5CanYnBV u3KusF3j1EVwpzxIqmxvM9I/vqvuBzF+PJGJX8+Wj/L959jbW4LgfzZwYs6G8zRTMqRDahi7z9ft2 I1LvDPjq+DsHXs+t2GLf4IRkWM4S6WcEBba7FUsELInblxHBrZb8z1iKudopCunxCJY7H41NAyjzg zZ80dc9pg==; Received: from localhost ([::1] helo=desiato.infradead.org) by desiato.infradead.org with esmtp (Exim 4.94 #2 (Red Hat Linux)) id 1lexfE-006o2R-D5; Fri, 07 May 2021 10:24:57 +0000 Received: from bombadil.infradead.org ([2607:7c80:54:e::133]) by desiato.infradead.org with esmtps (Exim 4.94 #2 (Red Hat Linux)) id 1lexdF-006nOb-Rf for linux-arm-kernel@desiato.infradead.org; Fri, 07 May 2021 10:22:53 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20210309; h=Content-Type:MIME-Version:References: In-Reply-To:Message-ID:Date:Subject:CC:To:From:Sender:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description; bh=Jn+tNLmvOy1rUPlCgEHGPj9SG+GaBiTJXAa4VP5exSs=; 
From: Keqian Zhu
To: Robin Murphy, Will Deacon, Joerg Roedel, Jean-Philippe Brucker, Lu Baolu, Yi Sun, Tian Kevin
CC: Alex Williamson, Kirti Wankhede, Cornelia Huck, Jonathan Cameron
Subject: [RFC PATCH v4 12/13] iommu/arm-smmu-v3: Realize clear_dirty_log iommu ops
Date: Fri, 7 May 2021 18:22:10 +0800
Message-ID: <20210507102211.8836-13-zhukeqian1@huawei.com>
In-Reply-To: <20210507102211.8836-1-zhukeqian1@huawei.com>

From: Kunkun Jiang

This realizes the clear_dirty_log iommu op based on the clear_dirty_log io-pgtable op.
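Both the sync and clear paths address the user bitmap the same way: one bit per bitmap page, indexed by the IOVA's offset from base_iova shifted down by bitmap_pgshift, which is the same arithmetic used in the io-pgtable walkers earlier in this series. A minimal standalone illustration:

#include <stddef.h>

static size_t ex_iova_to_bit(unsigned long iova, unsigned long base_iova,
			     unsigned long bitmap_pgshift)
{
	return (iova - base_iova) >> bitmap_pgshift;
}

/*
 * Example with base_iova = 0x100000 and a 4K bitmap granule (pgshift = 12):
 *   iova 0x103000           -> bit 3
 *   a 2M block at 0x200000  -> bits 256..767 (512 bits)
 */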
Co-developed-by: Keqian Zhu Signed-off-by: Kunkun Jiang --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 25 +++++++++++++++++++++ 1 file changed, 25 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 3d3c0f8e2446..9b4739247dbb 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2750,6 +2750,30 @@ static int arm_smmu_sync_dirty_log(struct iommu_domain *domain, bitmap_pgshift); } +static int arm_smmu_clear_dirty_log(struct iommu_domain *domain, + unsigned long iova, size_t size, + unsigned long *bitmap, + unsigned long base_iova, + unsigned long bitmap_pgshift) +{ + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops; + struct arm_smmu_device *smmu = smmu_domain->smmu; + + if (!(smmu->features & ARM_SMMU_FEAT_HD)) + return -ENODEV; + if (smmu_domain->stage != ARM_SMMU_DOMAIN_S1) + return -EINVAL; + + if (!ops || !ops->clear_dirty_log) { + pr_err("io-pgtable don't realize clear dirty log\n"); + return -ENODEV; + } + + return ops->clear_dirty_log(ops, iova, size, bitmap, base_iova, + bitmap_pgshift); +} + static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args) { return iommu_fwspec_add_ids(dev, args->args, 1); @@ -2850,6 +2874,7 @@ static struct iommu_ops arm_smmu_ops = { .enable_nesting = arm_smmu_enable_nesting, .switch_dirty_log = arm_smmu_switch_dirty_log, .sync_dirty_log = arm_smmu_sync_dirty_log, + .clear_dirty_log = arm_smmu_clear_dirty_log, .of_xlate = arm_smmu_of_xlate, .get_resv_regions = arm_smmu_get_resv_regions, .put_resv_regions = generic_iommu_put_resv_regions, From patchwork Fri May 7 10:22:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: zhukeqian X-Patchwork-Id: 12244317 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-17.4 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3718EC433B4 for ; Fri, 7 May 2021 10:26:58 +0000 (UTC) Received: from desiato.infradead.org (desiato.infradead.org [90.155.92.199]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 00C54613F0 for ; Fri, 7 May 2021 10:26:57 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 00C54613F0 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=desiato.20200630; h=Sender:Content-Transfer-Encoding :Content-Type:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID:Date: Subject:CC:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=A8CqkwWxtoWVifho7AbBiW7M539p72OL1C/ZT00WqQQ=; 
From: Keqian Zhu
To: Robin Murphy, Will Deacon, Joerg Roedel, Jean-Philippe Brucker, Lu Baolu, Yi Sun, Tian Kevin
CC: Alex Williamson, Kirti Wankhede, Cornelia Huck, Jonathan Cameron
Subject: [RFC PATCH v4 13/13] iommu/arm-smmu-v3: Realize support_dirty_log iommu ops
Date: Fri, 7 May 2021 18:22:11 +0800
Message-ID: <20210507102211.8836-14-zhukeqian1@huawei.com>
In-Reply-To: <20210507102211.8836-1-zhukeqian1@huawei.com>

From: Kunkun Jiang

We have implemented the interfaces required to support IOMMU dirty log tracking. The last step is to report this capability to the upper layer, so the user can build higher-level policy on top of it. For ARM SMMUv3, support is equivalent to having ARM_SMMU_FEAT_HD.
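To show how the pieces fit together, here is a rough pseudo-flow for one dirty tracking round, written against the op signatures used in this series; it is not the actual VFIO consumer, and a real caller would go through the IOMMU core wrappers rather than the domain ops directly:

static int ex_track_one_round(struct iommu_domain *domain,
			      const struct iommu_ops *ops,
			      unsigned long iova, size_t size,
			      unsigned long *bitmap, unsigned long pgshift)
{
	int ret;

	if (!ops->support_dirty_log(domain))
		return -ENODEV;			/* no HTTU HD on this SMMU */

	ret = ops->switch_dirty_log(domain, true, iova, size,
				    IOMMU_READ | IOMMU_WRITE);
	if (ret)
		return ret;

	/* ... DMA keeps running; pages written by devices become dirty ... */

	ret = ops->sync_dirty_log(domain, iova, size, bitmap, iova, pgshift);
	if (!ret)
		ret = ops->clear_dirty_log(domain, iova, size, bitmap,
					   iova, pgshift);

	/* stop tracking once the final copy is done */
	ops->switch_dirty_log(domain, false, iova, size,
			      IOMMU_READ | IOMMU_WRITE);
	return ret;
}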
Co-developed-by: Keqian Zhu Signed-off-by: Kunkun Jiang --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 9b4739247dbb..59d11f084199 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2684,6 +2684,13 @@ static int arm_smmu_merge_page(struct iommu_domain *domain, unsigned long iova, return ret; } +static bool arm_smmu_support_dirty_log(struct iommu_domain *domain) +{ + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + + return !!(smmu_domain->smmu->features & ARM_SMMU_FEAT_HD); +} + static int arm_smmu_switch_dirty_log(struct iommu_domain *domain, bool enable, unsigned long iova, size_t size, int prot) { @@ -2872,6 +2879,7 @@ static struct iommu_ops arm_smmu_ops = { .release_device = arm_smmu_release_device, .device_group = arm_smmu_device_group, .enable_nesting = arm_smmu_enable_nesting, + .support_dirty_log = arm_smmu_support_dirty_log, .switch_dirty_log = arm_smmu_switch_dirty_log, .sync_dirty_log = arm_smmu_sync_dirty_log, .clear_dirty_log = arm_smmu_clear_dirty_log,