From patchwork Mon Feb 24 18:23:36 2020
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 11401241
From: Jean-Philippe Brucker
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
 linux-mm@kvack.org
Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com,
 catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com,
 kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com,
 jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com,
 zhangfei.gao@linaro.org, Andrew Morton, Arnd Bergmann, Dimitri Sivanich,
 Greg Kroah-Hartman, Jason Gunthorpe
Subject: [PATCH v4 01/26] mm/mmu_notifiers: pass private data down to alloc_notifier()
Date: Mon, 24 Feb 2020 19:23:36 +0100
Message-Id: <20200224182401.353359-2-jean-philippe@linaro.org>
In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org>
References: <20200224182401.353359-1-jean-philippe@linaro.org>

The new allocation scheme introduced by 2c7933f53f6b ("mm/mmu_notifiers: add
a get/put scheme for the registration") provides a convenient way for users
to attach notifier data to an mm. However, it would be even better to create
this notifier data atomically.

Since the alloc_notifier() callback only takes an mm argument at the moment,
some users have to perform the allocation in two steps. alloc_notifier()
initially creates an incomplete structure, which is then finalized using more
context once mmu_notifier_get() returns. This second step requires carrying
an initialization lock in the notifier data and playing dirty tricks to order
memory accesses against live invalidation.

To simplify MMU notifier allocation, pass an allocation context to
mmu_notifier_get().
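To make the one-step allocation concrete, here is a minimal sketch of a caller
under the new interface. The my_* names are invented for illustration and are
not part of this patch; only the alloc_notifier(mm, privdata) callback and the
extra mmu_notifier_get() argument come from the change itself:

    #include <linux/mmu_notifier.h>
    #include <linux/slab.h>

    struct my_device;   /* hypothetical driver structure */

    struct my_notifier {
            struct mmu_notifier mn;
            struct my_device *dev;  /* extra context, known when binding */
    };

    static struct mmu_notifier *my_alloc_notifier(struct mm_struct *mm,
                                                  void *privdata)
    {
            struct my_notifier *mine;

            mine = kzalloc(sizeof(*mine), GFP_KERNEL);
            if (!mine)
                    return ERR_PTR(-ENOMEM);

            /* Fully initialized here; no separate finalize step or lock. */
            mine->dev = privdata;
            return &mine->mn;
    }

    static void my_free_notifier(struct mmu_notifier *mn)
    {
            kfree(container_of(mn, struct my_notifier, mn));
    }

    static const struct mmu_notifier_ops my_mmu_notifier_ops = {
            .alloc_notifier = my_alloc_notifier,
            .free_notifier  = my_free_notifier,
    };

    static int my_bind(struct my_device *dev)
    {
            struct mmu_notifier *mn;

            mn = mmu_notifier_get(&my_mmu_notifier_ops, current->mm, dev);
            if (IS_ERR(mn))
                    return PTR_ERR(mn);
            return 0;
    }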
Cc: Andrew Morton Cc: Arnd Bergmann Cc: Dimitri Sivanich Cc: Greg Kroah-Hartman Cc: Jason Gunthorpe Signed-off-by: Jean-Philippe Brucker --- drivers/misc/sgi-gru/grutlbpurge.c | 4 ++-- include/linux/mmu_notifier.h | 10 ++++++---- mm/mmu_notifier.c | 6 ++++-- 3 files changed, 12 insertions(+), 8 deletions(-) diff --git a/drivers/misc/sgi-gru/grutlbpurge.c b/drivers/misc/sgi-gru/grutlbpurge.c index 10921cd2608d..77610e1704f6 100644 --- a/drivers/misc/sgi-gru/grutlbpurge.c +++ b/drivers/misc/sgi-gru/grutlbpurge.c @@ -235,7 +235,7 @@ static void gru_invalidate_range_end(struct mmu_notifier *mn, gms, range->start, range->end); } -static struct mmu_notifier *gru_alloc_notifier(struct mm_struct *mm) +static struct mmu_notifier *gru_alloc_notifier(struct mm_struct *mm, void *privdata) { struct gru_mm_struct *gms; @@ -266,7 +266,7 @@ struct gru_mm_struct *gru_register_mmu_notifier(void) { struct mmu_notifier *mn; - mn = mmu_notifier_get_locked(&gru_mmuops, current->mm); + mn = mmu_notifier_get_locked(&gru_mmuops, current->mm, NULL); if (IS_ERR(mn)) return ERR_CAST(mn); diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h index 736f6918335e..06e68fa2b019 100644 --- a/include/linux/mmu_notifier.h +++ b/include/linux/mmu_notifier.h @@ -207,7 +207,7 @@ struct mmu_notifier_ops { * callbacks are currently running. It is called from a SRCU callback * and cannot sleep. */ - struct mmu_notifier *(*alloc_notifier)(struct mm_struct *mm); + struct mmu_notifier *(*alloc_notifier)(struct mm_struct *mm, void *privdata); void (*free_notifier)(struct mmu_notifier *subscription); }; @@ -271,14 +271,16 @@ static inline int mm_has_notifiers(struct mm_struct *mm) } struct mmu_notifier *mmu_notifier_get_locked(const struct mmu_notifier_ops *ops, - struct mm_struct *mm); + struct mm_struct *mm, + void *privdata); static inline struct mmu_notifier * -mmu_notifier_get(const struct mmu_notifier_ops *ops, struct mm_struct *mm) +mmu_notifier_get(const struct mmu_notifier_ops *ops, struct mm_struct *mm, + void *privdata) { struct mmu_notifier *ret; down_write(&mm->mmap_sem); - ret = mmu_notifier_get_locked(ops, mm); + ret = mmu_notifier_get_locked(ops, mm, privdata); up_write(&mm->mmap_sem); return ret; } diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c index ef3973a5d34a..8beb9dcbe0fd 100644 --- a/mm/mmu_notifier.c +++ b/mm/mmu_notifier.c @@ -734,6 +734,7 @@ find_get_mmu_notifier(struct mm_struct *mm, const struct mmu_notifier_ops *ops) * the mm & ops * @ops: The operations struct being subscribe with * @mm : The mm to attach notifiers too + * @privdata: Initialization data passed down to ops->alloc_notifier() * * This function either allocates a new mmu_notifier via * ops->alloc_notifier(), or returns an already existing notifier on the @@ -747,7 +748,8 @@ find_get_mmu_notifier(struct mm_struct *mm, const struct mmu_notifier_ops *ops) * and can be converted to an active mm pointer via mmget_not_zero(). 
*/ struct mmu_notifier *mmu_notifier_get_locked(const struct mmu_notifier_ops *ops, - struct mm_struct *mm) + struct mm_struct *mm, + void *privdata) { struct mmu_notifier *subscription; int ret; @@ -760,7 +762,7 @@ struct mmu_notifier *mmu_notifier_get_locked(const struct mmu_notifier_ops *ops, return subscription; } - subscription = ops->alloc_notifier(mm); + subscription = ops->alloc_notifier(mm, privdata); if (IS_ERR(subscription)) return subscription; subscription->ops = ops;
From patchwork Mon Feb 24 18:23:37 2020
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 11401249
From: Jean-Philippe Brucker
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
 linux-mm@kvack.org
Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com,
 catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com,
 kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com,
 jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com,
 zhangfei.gao@linaro.org, Jean-Philippe Brucker
Subject: [PATCH v4 02/26] iommu/sva: Manage process address spaces
Date: Mon, 24 Feb 2020 19:23:37 +0100
Message-Id: <20200224182401.353359-3-jean-philippe@linaro.org>
In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org>
References: <20200224182401.353359-1-jean-philippe@linaro.org>

From: Jean-Philippe Brucker

Add a small library to help IOMMU drivers manage process address spaces bound
to their devices. Register an MMU notifier to track modifications to each
address space bound to one or more devices.

IOMMU drivers must implement the io_mm_ops and can then use the helpers
provided by this library to easily implement the SVA API introduced by commit
26b25a2b98e4. The io_mm_ops are:

void *alloc(struct mm_struct *)
  Allocate a PASID context private to the IOMMU driver. There is a single
  context per mm. IOMMU drivers may perform arch-specific operations in there,
  for example pinning down a CPU ASID (on Arm).
int attach(struct device *, int pasid, void *ctx, bool attach_domain) Attach a context to the device, by setting up the PASID table entry. int invalidate(struct device *, int pasid, void *ctx, unsigned long vaddr, size_t size) Invalidate TLB entries for this address range. int detach(struct device *, int pasid, void *ctx, bool detach_domain) Detach a context from the device, by clearing the PASID table entry and invalidating cached entries. void free(void *ctx) Free a context. Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/Kconfig | 7 + drivers/iommu/Makefile | 1 + drivers/iommu/iommu-sva.c | 561 ++++++++++++++++++++++++++++++++++++++ drivers/iommu/iommu-sva.h | 64 +++++ drivers/iommu/iommu.c | 1 + include/linux/iommu.h | 3 + 6 files changed, 637 insertions(+) create mode 100644 drivers/iommu/iommu-sva.c create mode 100644 drivers/iommu/iommu-sva.h diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig index d2fade984999..acca20e2da2f 100644 --- a/drivers/iommu/Kconfig +++ b/drivers/iommu/Kconfig @@ -102,6 +102,13 @@ config IOMMU_DMA select IRQ_MSI_IOMMU select NEED_SG_DMA_LENGTH +# Shared Virtual Addressing library +config IOMMU_SVA + bool + select IOASID + select IOMMU_API + select MMU_NOTIFIER + config FSL_PAMU bool "Freescale IOMMU support" depends on PCI diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile index 9f33fdb3bb05..40c800dd4e3e 100644 --- a/drivers/iommu/Makefile +++ b/drivers/iommu/Makefile @@ -37,3 +37,4 @@ obj-$(CONFIG_S390_IOMMU) += s390-iommu.o obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o obj-$(CONFIG_HYPERV_IOMMU) += hyperv-iommu.o obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o +obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c new file mode 100644 index 000000000000..64f1d1c82383 --- /dev/null +++ b/drivers/iommu/iommu-sva.c @@ -0,0 +1,561 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Manage PASIDs and bind process address spaces to devices. + * + * Copyright (C) 2018 ARM Ltd. + */ + +#include +#include +#include +#include +#include +#include + +#include "iommu-sva.h" + +/** + * DOC: io_mm model + * + * The io_mm keeps track of process address spaces shared between CPU and IOMMU. + * The following example illustrates the relation between structures + * iommu_domain, io_mm and iommu_sva. The iommu_sva struct is a bond between + * io_mm and device. A device can have multiple io_mm and an io_mm may be bound + * to multiple devices. + * ___________________________ + * | IOMMU domain A | + * | ________________ | + * | | IOMMU group | +------- io_pgtables + * | | | | + * | | dev 00:00.0 ----+------- bond 1 --- io_mm X + * | |________________| \ | + * | '----- bond 2 ---. + * |___________________________| \ + * ___________________________ \ + * | IOMMU domain B | io_mm Y + * | ________________ | / / + * | | IOMMU group | | / / + * | | | | / / + * | | dev 00:01.0 ------------ bond 3 -' / + * | | dev 00:01.1 ------------ bond 4 --' + * | |________________| | + * | +------- io_pgtables + * |___________________________| + * + * In this example, device 00:00.0 is in domain A, devices 00:01.* are in domain + * B. All devices within the same domain access the same address spaces. Device + * 00:00.0 accesses address spaces X and Y, each corresponding to an mm_struct. + * Devices 00:01.* only access address space Y. In addition each + * IOMMU_DOMAIN_DMA domain has a private address space, io_pgtable, that is + * managed with iommu_map()/iommu_unmap(), and isn't shared with the CPU MMU. 
+ * + * To obtain the above configuration, users would for instance issue the + * following calls: + * + * iommu_sva_bind_device(dev 00:00.0, mm X, ...) -> bond 1 + * iommu_sva_bind_device(dev 00:00.0, mm Y, ...) -> bond 2 + * iommu_sva_bind_device(dev 00:01.0, mm Y, ...) -> bond 3 + * iommu_sva_bind_device(dev 00:01.1, mm Y, ...) -> bond 4 + * + * A single Process Address Space ID (PASID) is allocated for each mm. In the + * example, devices use PASID 1 to read/write into address space X and PASID 2 + * to read/write into address space Y. Calling iommu_sva_get_pasid() on bond 1 + * returns 1, and calling it on bonds 2-4 returns 2. + * + * Hardware tables describing this configuration in the IOMMU would typically + * look like this: + * + * PASID tables + * of domain A + * .->+--------+ + * / 0 | |-------> io_pgtable + * / +--------+ + * Device tables / 1 | |-------> pgd X + * +--------+ / +--------+ + * 00:00.0 | A |-' 2 | |--. + * +--------+ +--------+ \ + * : : 3 | | \ + * +--------+ +--------+ --> pgd Y + * 00:01.0 | B |--. / + * +--------+ \ | + * 00:01.1 | B |----+ PASID tables | + * +--------+ \ of domain B | + * '->+--------+ | + * 0 | |-- | --> io_pgtable + * +--------+ | + * 1 | | | + * +--------+ | + * 2 | |---' + * +--------+ + * 3 | | + * +--------+ + * + * With this model, a single call binds all devices in a given domain to an + * address space. Other devices in the domain will get the same bond implicitly. + * However, users must issue one bind() for each device, because IOMMUs may + * implement SVA differently. Furthermore, mandating one bind() per device + * allows the driver to perform sanity-checks on device capabilities. + * + * In some IOMMUs, one entry of the PASID table (typically the first one) can + * hold non-PASID translations. In this case PASID 0 is reserved and the first + * entry points to the io_pgtable pointer. In other IOMMUs the io_pgtable + * pointer is held in the device table and PASID 0 is available to the + * allocator. + */ + +struct io_mm { + struct list_head devices; + struct mm_struct *mm; + struct mmu_notifier notifier; + + /* Late initialization */ + const struct io_mm_ops *ops; + void *ctx; + int pasid; +}; + +#define to_io_mm(mmu_notifier) container_of(mmu_notifier, struct io_mm, notifier) +#define to_iommu_bond(handle) container_of(handle, struct iommu_bond, sva) + +struct iommu_bond { + struct iommu_sva sva; + struct io_mm __rcu *io_mm; + + struct list_head mm_head; + void *drvdata; + struct rcu_head rcu_head; + refcount_t refs; +}; + +static DECLARE_IOASID_SET(shared_pasid); + +static struct mmu_notifier_ops iommu_mmu_notifier_ops; + +/* + * Serializes modifications of bonds. 
+ * Lock order: Device SVA mutex; global SVA mutex; IOASID lock + */ +static DEFINE_MUTEX(iommu_sva_lock); + +struct io_mm_alloc_params { + const struct io_mm_ops *ops; + int min_pasid, max_pasid; +}; + +static struct mmu_notifier *io_mm_alloc(struct mm_struct *mm, void *privdata) +{ + int ret; + struct io_mm *io_mm; + struct io_mm_alloc_params *params = privdata; + + io_mm = kzalloc(sizeof(*io_mm), GFP_KERNEL); + if (!io_mm) + return ERR_PTR(-ENOMEM); + + io_mm->mm = mm; + io_mm->ops = params->ops; + INIT_LIST_HEAD(&io_mm->devices); + + io_mm->pasid = ioasid_alloc(&shared_pasid, params->min_pasid, + params->max_pasid, io_mm->mm); + if (io_mm->pasid == INVALID_IOASID) { + ret = -ENOSPC; + goto err_free_io_mm; + } + + io_mm->ctx = params->ops->alloc(mm); + if (IS_ERR(io_mm->ctx)) { + ret = PTR_ERR(io_mm->ctx); + goto err_free_pasid; + } + return &io_mm->notifier; + +err_free_pasid: + ioasid_free(io_mm->pasid); +err_free_io_mm: + kfree(io_mm); + return ERR_PTR(ret); +} + +static void io_mm_free(struct mmu_notifier *mn) +{ + struct io_mm *io_mm = to_io_mm(mn); + + WARN_ON(!list_empty(&io_mm->devices)); + + io_mm->ops->release(io_mm->ctx); + ioasid_free(io_mm->pasid); + kfree(io_mm); +} + +/* + * io_mm_get - Allocate an io_mm or get the existing one for the given mm + * @mm: the mm + * @ops: callbacks for the IOMMU driver + * @min_pasid: minimum PASID value (inclusive) + * @max_pasid: maximum PASID value (inclusive) + * + * Returns a valid io_mm or an error pointer. + */ +static struct io_mm *io_mm_get(struct mm_struct *mm, + const struct io_mm_ops *ops, + int min_pasid, int max_pasid) +{ + struct io_mm *io_mm; + struct mmu_notifier *mn; + struct io_mm_alloc_params params = { + .ops = ops, + .min_pasid = min_pasid, + .max_pasid = max_pasid, + }; + + /* + * A single notifier can exist for this (ops, mm) pair. Allocate it if + * necessary. + */ + mn = mmu_notifier_get(&iommu_mmu_notifier_ops, mm, &params); + if (IS_ERR(mn)) + return ERR_CAST(mn); + io_mm = to_io_mm(mn); + + if (WARN_ON(io_mm->ops != ops)) { + mmu_notifier_put(mn); + return ERR_PTR(-EINVAL); + } + + return io_mm; +} + +static void io_mm_put(struct io_mm *io_mm) +{ + mmu_notifier_put(&io_mm->notifier); +} + +static struct iommu_sva * +io_mm_attach(struct device *dev, struct io_mm *io_mm, void *drvdata) +{ + int ret = 0; + bool attach_domain = true; + struct iommu_bond *bond, *tmp; + struct iommu_domain *domain, *other; + struct iommu_sva_param *param = dev->iommu_param->sva_param; + + domain = iommu_get_domain_for_dev(dev); + + bond = kzalloc(sizeof(*bond), GFP_KERNEL); + if (!bond) + return ERR_PTR(-ENOMEM); + + bond->sva.dev = dev; + bond->drvdata = drvdata; + refcount_set(&bond->refs, 1); + RCU_INIT_POINTER(bond->io_mm, io_mm); + + mutex_lock(&iommu_sva_lock); + /* Is it already bound to the device or domain? */ + list_for_each_entry(tmp, &io_mm->devices, mm_head) { + if (tmp->sva.dev != dev) { + other = iommu_get_domain_for_dev(tmp->sva.dev); + if (domain == other) + attach_domain = false; + + continue; + } + + if (WARN_ON(tmp->drvdata != drvdata)) { + ret = -EINVAL; + goto err_free; + } + + /* + * Hold a single io_mm reference per bond. Note that we can't + * return an error after this, otherwise the caller would drop + * an additional reference to the io_mm.
+ */ + refcount_inc(&tmp->refs); + io_mm_put(io_mm); + kfree(bond); + mutex_unlock(&iommu_sva_lock); + return &tmp->sva; + } + + list_add_rcu(&bond->mm_head, &io_mm->devices); + param->nr_bonds++; + mutex_unlock(&iommu_sva_lock); + + ret = io_mm->ops->attach(bond->sva.dev, io_mm->pasid, io_mm->ctx, + attach_domain); + if (ret) + goto err_remove; + + return &bond->sva; + +err_remove: + /* + * At this point concurrent threads may have started to access the + * io_mm->devices list in order to invalidate address ranges, which + * requires to free the bond via kfree_rcu() + */ + mutex_lock(&iommu_sva_lock); + param->nr_bonds--; + list_del_rcu(&bond->mm_head); + +err_free: + mutex_unlock(&iommu_sva_lock); + kfree_rcu(bond, rcu_head); + return ERR_PTR(ret); +} + +static void io_mm_detach_locked(struct iommu_bond *bond) +{ + struct io_mm *io_mm; + struct iommu_bond *tmp; + bool detach_domain = true; + struct iommu_domain *domain, *other; + + io_mm = rcu_dereference_protected(bond->io_mm, + lockdep_is_held(&iommu_sva_lock)); + if (!io_mm) + return; + + domain = iommu_get_domain_for_dev(bond->sva.dev); + + /* Are other devices in the same domain still attached to this mm? */ + list_for_each_entry(tmp, &io_mm->devices, mm_head) { + if (tmp == bond) + continue; + other = iommu_get_domain_for_dev(tmp->sva.dev); + if (domain == other) { + detach_domain = false; + break; + } + } + + io_mm->ops->detach(bond->sva.dev, io_mm->pasid, io_mm->ctx, + detach_domain); + + list_del_rcu(&bond->mm_head); + RCU_INIT_POINTER(bond->io_mm, NULL); + + /* Free after RCU grace period */ + io_mm_put(io_mm); +} + +/* + * io_mm_release - release MMU notifier + * + * Called when the mm exits. Some devices may still be bound to the io_mm. A few + * things need to be done before it is safe to release: + * + * - Tell the device driver to stop using this PASID. + * - Clear the PASID table and invalidate TLBs. + * - Drop all references to this io_mm. 
+ */ +static void io_mm_release(struct mmu_notifier *mn, struct mm_struct *mm) +{ + struct iommu_bond *bond, *next; + struct io_mm *io_mm = to_io_mm(mn); + + mutex_lock(&iommu_sva_lock); + list_for_each_entry_safe(bond, next, &io_mm->devices, mm_head) { + struct device *dev = bond->sva.dev; + struct iommu_sva *sva = &bond->sva; + + if (sva->ops && sva->ops->mm_exit && + sva->ops->mm_exit(dev, sva, bond->drvdata)) + dev_WARN(dev, "possible leak of PASID %u", + io_mm->pasid); + + /* unbind() frees the bond, we just detach it */ + io_mm_detach_locked(bond); + } + mutex_unlock(&iommu_sva_lock); +} + +static void io_mm_invalidate_range(struct mmu_notifier *mn, + struct mm_struct *mm, unsigned long start, + unsigned long end) +{ + struct iommu_bond *bond; + struct io_mm *io_mm = to_io_mm(mn); + + rcu_read_lock(); + list_for_each_entry_rcu(bond, &io_mm->devices, mm_head) + io_mm->ops->invalidate(bond->sva.dev, io_mm->pasid, io_mm->ctx, + start, end - start); + rcu_read_unlock(); +} + +static struct mmu_notifier_ops iommu_mmu_notifier_ops = { + .alloc_notifier = io_mm_alloc, + .free_notifier = io_mm_free, + .release = io_mm_release, + .invalidate_range = io_mm_invalidate_range, +}; + +struct iommu_sva * +iommu_sva_bind_generic(struct device *dev, struct mm_struct *mm, + const struct io_mm_ops *ops, void *drvdata) +{ + struct io_mm *io_mm; + struct iommu_sva *handle; + struct iommu_param *param = dev->iommu_param; + + if (!param) + return ERR_PTR(-ENODEV); + + mutex_lock(&param->sva_lock); + if (!param->sva_param) { + handle = ERR_PTR(-ENODEV); + goto out_unlock; + } + + io_mm = io_mm_get(mm, ops, param->sva_param->min_pasid, + param->sva_param->max_pasid); + if (IS_ERR(io_mm)) { + handle = ERR_CAST(io_mm); + goto out_unlock; + } + + handle = io_mm_attach(dev, io_mm, drvdata); + if (IS_ERR(handle)) + io_mm_put(io_mm); + +out_unlock: + mutex_unlock(&param->sva_lock); + return handle; +} +EXPORT_SYMBOL_GPL(iommu_sva_bind_generic); + +static void iommu_sva_unbind_locked(struct iommu_bond *bond) +{ + struct device *dev = bond->sva.dev; + struct iommu_sva_param *param = dev->iommu_param->sva_param; + + if (!refcount_dec_and_test(&bond->refs)) + return; + + io_mm_detach_locked(bond); + param->nr_bonds--; + kfree_rcu(bond, rcu_head); +} + +void iommu_sva_unbind_generic(struct iommu_sva *handle) +{ + struct iommu_param *param = handle->dev->iommu_param; + + if (WARN_ON(!param)) + return; + + mutex_lock(&param->sva_lock); + mutex_lock(&iommu_sva_lock); + iommu_sva_unbind_locked(to_iommu_bond(handle)); + mutex_unlock(&iommu_sva_lock); + mutex_unlock(&param->sva_lock); +} +EXPORT_SYMBOL_GPL(iommu_sva_unbind_generic); + +/** + * iommu_sva_enable() - Enable Shared Virtual Addressing for a device + * @dev: the device + * @sva_param: the parameters. + * + * Called by an IOMMU driver to setup the SVA parameters + * @sva_param is duplicated and can be freed when this function returns. + * + * Return 0 if initialization succeeded, or an error. 
+ */ +int iommu_sva_enable(struct device *dev, struct iommu_sva_param *sva_param) +{ + int ret; + struct iommu_sva_param *new_param; + struct iommu_param *param = dev->iommu_param; + + if (!param) + return -ENODEV; + + new_param = kmemdup(sva_param, sizeof(*new_param), GFP_KERNEL); + if (!new_param) + return -ENOMEM; + + mutex_lock(&param->sva_lock); + if (param->sva_param) { + ret = -EEXIST; + goto err_unlock; + } + + dev->iommu_param->sva_param = new_param; + mutex_unlock(&param->sva_lock); + return 0; + +err_unlock: + mutex_unlock(&param->sva_lock); + kfree(new_param); + return ret; +} +EXPORT_SYMBOL_GPL(iommu_sva_enable); + +/** + * iommu_sva_disable() - Disable Shared Virtual Addressing for a device + * @dev: the device + * + * IOMMU drivers call this to disable SVA. + */ +int iommu_sva_disable(struct device *dev) +{ + int ret = 0; + struct iommu_param *param = dev->iommu_param; + + if (!param) + return -EINVAL; + + mutex_lock(&param->sva_lock); + if (!param->sva_param) { + ret = -ENODEV; + goto out_unlock; + } + + /* Require that all contexts are unbound */ + if (param->sva_param->nr_bonds) { + ret = -EBUSY; + goto out_unlock; + } + + kfree(param->sva_param); + param->sva_param = NULL; +out_unlock: + mutex_unlock(&param->sva_lock); + + return ret; +} +EXPORT_SYMBOL_GPL(iommu_sva_disable); + +bool iommu_sva_enabled(struct device *dev) +{ + bool enabled; + struct iommu_param *param = dev->iommu_param; + + if (!param) + return false; + + mutex_lock(&param->sva_lock); + enabled = !!param->sva_param; + mutex_unlock(&param->sva_lock); + return enabled; +} +EXPORT_SYMBOL_GPL(iommu_sva_enabled); + +int iommu_sva_get_pasid_generic(struct iommu_sva *handle) +{ + struct io_mm *io_mm; + int pasid = IOMMU_PASID_INVALID; + struct iommu_bond *bond = to_iommu_bond(handle); + + rcu_read_lock(); + io_mm = rcu_dereference(bond->io_mm); + if (io_mm) + pasid = io_mm->pasid; + rcu_read_unlock(); + return pasid; +} +EXPORT_SYMBOL_GPL(iommu_sva_get_pasid_generic); diff --git a/drivers/iommu/iommu-sva.h b/drivers/iommu/iommu-sva.h new file mode 100644 index 000000000000..dd55c2db0936 --- /dev/null +++ b/drivers/iommu/iommu-sva.h @@ -0,0 +1,64 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * SVA library for IOMMU drivers + */ +#ifndef _IOMMU_SVA_H +#define _IOMMU_SVA_H + +#include +#include +#include + +struct io_mm_ops { + /* Allocate a PASID context for an mm */ + void *(*alloc)(struct mm_struct *mm); + + /* + * Attach a PASID context to a device. Write the entry into the PASID + * table. + * + * @attach_domain is true when no other device in the IOMMU domain is + * already attached to this context. IOMMU drivers that share the + * PASID tables within a domain don't need to write the PASID entry + * when @attach_domain is false. + */ + int (*attach)(struct device *dev, int pasid, void *ctx, + bool attach_domain); + + /* + * Detach a PASID context from a device. Clear the entry from the PASID + * table and invalidate if necessary. + * + * @detach_domain is true when no other device in the IOMMU domain is + * still attached to this context. IOMMU drivers that share the PASID + * table within a domain don't need to clear the PASID entry when + * @detach_domain is false, only invalidate the caches. + */ + void (*detach)(struct device *dev, int pasid, void *ctx, + bool detach_domain); + + /* Invalidate a range of addresses. Cannot sleep. */ + void (*invalidate)(struct device *dev, int pasid, void *ctx, + unsigned long vaddr, size_t size); + + /* Free a context. Cannot sleep. 
*/ + void (*release)(void *ctx); +}; + +struct iommu_sva_param { + u32 min_pasid; + u32 max_pasid; + int nr_bonds; +}; + +struct iommu_sva * +iommu_sva_bind_generic(struct device *dev, struct mm_struct *mm, + const struct io_mm_ops *ops, void *drvdata); +void iommu_sva_unbind_generic(struct iommu_sva *handle); +int iommu_sva_get_pasid_generic(struct iommu_sva *handle); + +int iommu_sva_enable(struct device *dev, struct iommu_sva_param *sva_param); +int iommu_sva_disable(struct device *dev); +bool iommu_sva_enabled(struct device *dev); + +#endif /* _IOMMU_SVA_H */ diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c index 3e3528436e0b..c8bd972c1788 100644 --- a/drivers/iommu/iommu.c +++ b/drivers/iommu/iommu.c @@ -164,6 +164,7 @@ static struct iommu_param *iommu_get_dev_param(struct device *dev) return NULL; mutex_init(&param->lock); + mutex_init(&param->sva_lock); dev->iommu_param = param; return param; } diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 1739f8a7a4b4..83397ae88d2d 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -368,6 +368,7 @@ struct iommu_fault_param { * struct iommu_param - collection of per-device IOMMU data * * @fault_param: IOMMU detected device fault reporting data + * @sva_param: IOMMU parameter for SVA * * TODO: migrate other per device data pointers under iommu_dev_data, e.g. * struct iommu_group *iommu_group; @@ -376,6 +377,8 @@ struct iommu_fault_param { struct iommu_param { struct mutex lock; struct iommu_fault_param *fault_param; + struct mutex sva_lock; + struct iommu_sva_param *sva_param; }; int iommu_device_register(struct iommu_device *iommu);
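As a rough illustration of how an IOMMU driver might plug into this library,
here is a sketch of the driver side. All my_* names are invented for this
example; only io_mm_ops, struct iommu_sva_param, iommu_sva_enable() and
iommu_sva_bind_generic() come from the patch above:

    #include <linux/slab.h>
    #include "iommu-sva.h"

    /* Hypothetical per-mm context, e.g. a pinned CPU ASID. */
    struct my_mm_ctx {
            u16 asid;
    };

    static void *my_mm_alloc(struct mm_struct *mm)
    {
            return kzalloc(sizeof(struct my_mm_ctx), GFP_KERNEL);
    }

    static int my_mm_attach(struct device *dev, int pasid, void *ctx,
                            bool attach_domain)
    {
            /* Write the PASID table entry for @pasid (driver-specific). */
            return 0;
    }

    static void my_mm_detach(struct device *dev, int pasid, void *ctx,
                             bool detach_domain)
    {
            /* Clear the PASID table entry and invalidate cached entries. */
    }

    static void my_mm_invalidate(struct device *dev, int pasid, void *ctx,
                                 unsigned long vaddr, size_t size)
    {
            /* Invalidate device TLB entries for [vaddr, vaddr + size). */
    }

    static void my_mm_release(void *ctx)
    {
            kfree(ctx);
    }

    static const struct io_mm_ops my_mm_ops = {
            .alloc          = my_mm_alloc,
            .attach         = my_mm_attach,
            .detach         = my_mm_detach,
            .invalidate     = my_mm_invalidate,
            .release        = my_mm_release,
    };

    /* At probe time, advertise the PASID range usable by the device. */
    static int my_enable_sva(struct device *dev)
    {
            struct iommu_sva_param sva_param = {
                    .min_pasid = 1,         /* PASID 0 assumed reserved */
                    .max_pasid = 0xfffff,   /* assumed 20-bit PASID space */
            };

            return iommu_sva_enable(dev, &sva_param);
    }

    /* The driver's sva_bind() operation then reduces to: */
    static struct iommu_sva *my_sva_bind(struct device *dev,
                                         struct mm_struct *mm, void *drvdata)
    {
            return iommu_sva_bind_generic(dev, mm, &my_mm_ops, drvdata);
    }

Unbinding would go through iommu_sva_unbind_generic(), and
iommu_sva_get_pasid_generic() returns the PASID backing a bond.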
From patchwork Mon Feb 24 18:23:38 2020
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 11401257
From: Jean-Philippe Brucker
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
 linux-mm@kvack.org
Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com,
 catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com,
 kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com,
 jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com,
 zhangfei.gao@linaro.org, Jean-Philippe Brucker
Subject: [PATCH v4 03/26] iommu: Add a page fault handler
Date: Mon, 24 Feb 2020 19:23:38 +0100
Message-Id: <20200224182401.353359-4-jean-philippe@linaro.org>
In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org>
References: <20200224182401.353359-1-jean-philippe@linaro.org>
From: Jean-Philippe Brucker

Some systems allow device I/O page faults to be handled by the core mm, for
example systems implementing the PCI PRI extension or the Arm SMMU stall
model. Infrastructure for reporting these recoverable page faults was
recently added to the IOMMU core. Add a page fault handler for host SVA.

IOMMU drivers can now instantiate several fault workqueues and link them to
IOPF-capable devices. Drivers can choose between a single global workqueue,
one per IOMMU device, one per low-level fault queue, one per domain, etc.

When it receives a fault event, typically in an IRQ handler, the IOMMU driver
reports the fault using iommu_report_device_fault(), which calls the
registered handler. The page fault handler then calls the mm fault handler,
and reports either success or failure with iommu_page_response(). When the
handler succeeds, the IOMMU retries the access.

The iopf_param pointer could be embedded into iommu_fault_param, but putting
iopf_param into the iommu_param structure allows us not to care about
ordering between calls to iopf_queue_add_device() and
iommu_register_device_fault_handler().

Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/Kconfig | 4 + drivers/iommu/Makefile | 1 + drivers/iommu/io-pgfault.c | 451 +++++++++++++++++++++++++++++++++++++ include/linux/iommu.h | 59 +++++ 4 files changed, 515 insertions(+) create mode 100644 drivers/iommu/io-pgfault.c diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig index acca20e2da2f..e4a42e1708b4 100644 --- a/drivers/iommu/Kconfig +++ b/drivers/iommu/Kconfig @@ -109,6 +109,10 @@ config IOMMU_SVA select IOMMU_API select MMU_NOTIFIER +config IOMMU_PAGE_FAULT + bool + select IOMMU_API + config FSL_PAMU bool "Freescale IOMMU support" depends on PCI diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile index 40c800dd4e3e..bf5cb4ee8409 100644 --- a/drivers/iommu/Makefile +++ b/drivers/iommu/Makefile @@ -4,6 +4,7 @@ obj-$(CONFIG_IOMMU_API) += iommu-traces.o obj-$(CONFIG_IOMMU_API) += iommu-sysfs.o obj-$(CONFIG_IOMMU_DEBUGFS) += iommu-debugfs.o obj-$(CONFIG_IOMMU_DMA) += dma-iommu.o +obj-$(CONFIG_IOMMU_PAGE_FAULT) += io-pgfault.o obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o obj-$(CONFIG_IOMMU_IO_PGTABLE_ARMV7S) += io-pgtable-arm-v7s.o obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c new file mode 100644 index 000000000000..76e153c59fe3 --- /dev/null +++ b/drivers/iommu/io-pgfault.c @@ -0,0 +1,451 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Handle device page faults + * + * Copyright (C) 2018 ARM Ltd. 
+ */ + +#include +#include +#include +#include + +/** + * struct iopf_queue - IO Page Fault queue + * @wq: the fault workqueue + * @flush: low-level flush callback + * @flush_arg: flush() argument + * @devices: devices attached to this queue + * @lock: protects the device list + */ +struct iopf_queue { + struct workqueue_struct *wq; + iopf_queue_flush_t flush; + void *flush_arg; + struct list_head devices; + struct mutex lock; +}; + +/** + * struct iopf_device_param - IO Page Fault data attached to a device + * @dev: the device that owns this param + * @queue: IOPF queue + * @queue_list: index into queue->devices + * @partial: faults that are part of a Page Request Group for which the last + * request hasn't been submitted yet. + * @busy: the param is being used + * @wq_head: signal a change to @busy + */ +struct iopf_device_param { + struct device *dev; + struct iopf_queue *queue; + struct list_head queue_list; + struct list_head partial; + bool busy; + wait_queue_head_t wq_head; +}; + +struct iopf_fault { + struct iommu_fault fault; + struct list_head head; +}; + +struct iopf_group { + struct iopf_fault last_fault; + struct list_head faults; + struct work_struct work; + struct device *dev; +}; + +static int iopf_complete(struct device *dev, struct iopf_fault *iopf, + enum iommu_page_response_code status) +{ + struct iommu_page_response resp = { + .version = IOMMU_PAGE_RESP_VERSION_1, + .pasid = iopf->fault.prm.pasid, + .grpid = iopf->fault.prm.grpid, + .code = status, + }; + + if (iopf->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID) + resp.flags = IOMMU_PAGE_RESP_PASID_VALID; + + return iommu_page_response(dev, &resp); +} + +static enum iommu_page_response_code +iopf_handle_single(struct iopf_fault *iopf) +{ + /* TODO */ + return -ENODEV; +} + +static void iopf_handle_group(struct work_struct *work) +{ + struct iopf_group *group; + struct iopf_fault *iopf, *next; + enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS; + + group = container_of(work, struct iopf_group, work); + + list_for_each_entry_safe(iopf, next, &group->faults, head) { + /* + * For the moment, errors are sticky: don't handle subsequent + * faults in the group if there is an error. + */ + if (status == IOMMU_PAGE_RESP_SUCCESS) + status = iopf_handle_single(iopf); + + if (!(iopf->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) + kfree(iopf); + } + + iopf_complete(group->dev, &group->last_fault, status); + kfree(group); +} + +/** + * iommu_queue_iopf - IO Page Fault handler + * @fault: fault event + * @cookie: struct device, passed to iommu_register_device_fault_handler. + * + * Add a fault to the device workqueue, to be handled by mm. + * + * Return: 0 on success and <0 on error. + */ +int iommu_queue_iopf(struct iommu_fault *fault, void *cookie) +{ + int ret; + struct iopf_group *group; + struct iopf_fault *iopf, *next; + struct iopf_device_param *iopf_param; + + struct device *dev = cookie; + struct iommu_param *param = dev->iommu_param; + + if (WARN_ON(!mutex_is_locked(&param->lock))) + return -EINVAL; + + if (fault->type != IOMMU_FAULT_PAGE_REQ) + /* Not a recoverable page fault */ + return -EOPNOTSUPP; + + /* + * As long as we're holding param->lock, the queue can't be unlinked + * from the device and therefore cannot disappear. 
+ */ + iopf_param = param->iopf_param; + if (!iopf_param) + return -ENODEV; + + if (!(fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) { + iopf = kzalloc(sizeof(*iopf), GFP_KERNEL); + if (!iopf) + return -ENOMEM; + + iopf->fault = *fault; + + /* Non-last request of a group. Postpone until the last one */ + list_add(&iopf->head, &iopf_param->partial); + + return 0; + } + + group = kzalloc(sizeof(*group), GFP_KERNEL); + if (!group) { + /* + * The caller will send a response to the hardware. But we do + * need to clean up before leaving, otherwise partial faults + * will be stuck. + */ + ret = -ENOMEM; + goto cleanup_partial; + } + + group->dev = dev; + group->last_fault.fault = *fault; + INIT_LIST_HEAD(&group->faults); + list_add(&group->last_fault.head, &group->faults); + INIT_WORK(&group->work, iopf_handle_group); + + /* See if we have partial faults for this group */ + list_for_each_entry_safe(iopf, next, &iopf_param->partial, head) { + if (iopf->fault.prm.grpid == fault->prm.grpid) + /* Insert *before* the last fault */ + list_move(&iopf->head, &group->faults); + } + + queue_work(iopf_param->queue->wq, &group->work); + return 0; + +cleanup_partial: + list_for_each_entry_safe(iopf, next, &iopf_param->partial, head) { + if (iopf->fault.prm.grpid == fault->prm.grpid) { + list_del(&iopf->head); + kfree(iopf); + } + } + return ret; +} +EXPORT_SYMBOL_GPL(iommu_queue_iopf); + +/** + * iopf_queue_flush_dev - Ensure that all queued faults have been processed + * @dev: the endpoint whose faults need to be flushed. + * @pasid: the PASID affected by this flush + * + * Users must call this function when releasing a PASID, to ensure that all + * pending faults for this PASID have been handled, and won't hit the address + * space of the next process that uses this PASID. + * + * This function can also be called before shutting down the device, in which + * case @pasid should be IOMMU_PASID_INVALID. + * + * Return: 0 on success and <0 on error. + */ +int iopf_queue_flush_dev(struct device *dev, int pasid) +{ + int ret = 0; + struct iopf_queue *queue; + struct iopf_device_param *iopf_param; + struct iommu_param *param = dev->iommu_param; + + if (!param) + return -ENODEV; + + /* + * It is incredibly easy to find ourselves in a deadlock situation if + * we're not careful, because we're taking the opposite path as + * iommu_queue_iopf: + * + * iopf_queue_flush_dev() | PRI queue handler + * lock(&param->lock) | iommu_queue_iopf() + * queue->flush() | lock(&param->lock) + * wait PRI queue empty | + * + * So we can't hold the device param lock while flushing. Take a + * reference to the device param instead, to prevent the queue from + * going away. + */ + mutex_lock(&param->lock); + iopf_param = param->iopf_param; + if (iopf_param) { + queue = param->iopf_param->queue; + iopf_param->busy = true; + } else { + ret = -ENODEV; + } + mutex_unlock(&param->lock); + if (ret) + return ret; + + /* + * When removing a PASID, the device driver tells the device to stop + * using it, and flush any pending fault to the IOMMU. In this flush + * callback, the IOMMU driver makes sure that there are no such faults + * left in the low-level queue. 
+ */ + queue->flush(queue->flush_arg, dev, pasid); + + flush_workqueue(queue->wq); + + mutex_lock(&param->lock); + iopf_param->busy = false; + wake_up(&iopf_param->wq_head); + mutex_unlock(&param->lock); + + return 0; +} +EXPORT_SYMBOL_GPL(iopf_queue_flush_dev); + +/** + * iopf_queue_discard_partial - Remove all pending partial faults + * @queue: the queue whose partial faults need to be discarded + * + * When the hardware queue overflows, last page faults in a group may have been + * lost and the IOMMU driver calls this to discard all partial faults. The + * driver shouldn't be adding new faults to this queue concurrently. + * + * Return: 0 on success and <0 on error. + */ +int iopf_queue_discard_partial(struct iopf_queue *queue) +{ + struct iopf_fault *iopf, *next; + struct iopf_device_param *iopf_param; + + if (!queue) + return -EINVAL; + + mutex_lock(&queue->lock); + list_for_each_entry(iopf_param, &queue->devices, queue_list) { + list_for_each_entry_safe(iopf, next, &iopf_param->partial, head) + kfree(iopf); + } + mutex_unlock(&queue->lock); + return 0; +} +EXPORT_SYMBOL_GPL(iopf_queue_discard_partial); + +/** + * iopf_queue_add_device - Add producer to the fault queue + * @queue: IOPF queue + * @dev: device to add + * + * Return: 0 on success and <0 on error. + */ +int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev) +{ + int ret = -EINVAL; + struct iopf_device_param *iopf_param; + struct iommu_param *param = dev->iommu_param; + + if (!param) + return -ENODEV; + + iopf_param = kzalloc(sizeof(*iopf_param), GFP_KERNEL); + if (!iopf_param) + return -ENOMEM; + + INIT_LIST_HEAD(&iopf_param->partial); + iopf_param->queue = queue; + iopf_param->dev = dev; + init_waitqueue_head(&iopf_param->wq_head); + + mutex_lock(&queue->lock); + mutex_lock(&param->lock); + if (!param->iopf_param) { + list_add(&iopf_param->queue_list, &queue->devices); + param->iopf_param = iopf_param; + ret = 0; + } + mutex_unlock(&param->lock); + mutex_unlock(&queue->lock); + + if (ret) + kfree(iopf_param); + + return ret; +} +EXPORT_SYMBOL_GPL(iopf_queue_add_device); + +/** + * iopf_queue_remove_device - Remove producer from fault queue + * @queue: IOPF queue + * @dev: device to remove + * + * Caller makes sure that no more faults are reported for this device. + * + * Return: 0 on success and <0 on error. + */ +int iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev) +{ + int ret = -EINVAL; + struct iopf_fault *iopf, *next; + struct iopf_device_param *iopf_param; + struct iommu_param *param = dev->iommu_param; + + if (!param || !queue) + return -EINVAL; + + do { + mutex_lock(&queue->lock); + mutex_lock(&param->lock); + iopf_param = param->iopf_param; + if (iopf_param && iopf_param->queue == queue) { + if (iopf_param->busy) { + ret = -EBUSY; + } else { + list_del(&iopf_param->queue_list); + param->iopf_param = NULL; + ret = 0; + } + } + mutex_unlock(&param->lock); + mutex_unlock(&queue->lock); + + /* + * If there is an ongoing flush, wait for it to complete and + * then retry. iopf_param isn't going away since we're the only + * thread that can free it. 
+ */ + if (ret == -EBUSY) + wait_event(iopf_param->wq_head, !iopf_param->busy); + else if (ret) + return ret; + } while (ret == -EBUSY); + + /* Just in case some faults are still stuck */ + list_for_each_entry_safe(iopf, next, &iopf_param->partial, head) + kfree(iopf); + + kfree(iopf_param); + + return 0; +} +EXPORT_SYMBOL_GPL(iopf_queue_remove_device); + +/** + * iopf_queue_alloc - Allocate and initialize a fault queue + * @name: a unique string identifying the queue (for workqueue) + * @flush: a callback that flushes the low-level queue + * @cookie: driver-private data passed to the flush callback + * + * The callback is called before the workqueue is flushed. The IOMMU driver must + * commit all faults that are pending in its low-level queues at the time of the + * call, into the IOPF queue (with iommu_report_device_fault). The callback + * takes a device pointer as argument, hinting what endpoint is causing the + * flush. When the device is NULL, all faults should be committed. + * + * Return: the queue on success and NULL on error. + */ +struct iopf_queue * +iopf_queue_alloc(const char *name, iopf_queue_flush_t flush, void *cookie) +{ + struct iopf_queue *queue; + + queue = kzalloc(sizeof(*queue), GFP_KERNEL); + if (!queue) + return NULL; + + /* + * The WQ is unordered because the low-level handler enqueues faults by + * group. PRI requests within a group have to be ordered, but once + * that's dealt with, the high-level function can handle groups out of + * order. + */ + queue->wq = alloc_workqueue("iopf_queue/%s", WQ_UNBOUND, 0, name); + if (!queue->wq) { + kfree(queue); + return NULL; + } + + queue->flush = flush; + queue->flush_arg = cookie; + INIT_LIST_HEAD(&queue->devices); + mutex_init(&queue->lock); + + return queue; +} +EXPORT_SYMBOL_GPL(iopf_queue_alloc); + +/** + * iopf_queue_free - Free IOPF queue + * @queue: queue to free + * + * Counterpart to iopf_queue_alloc(). The driver must not be queuing faults or + * adding/removing devices on this queue anymore. + */ +void iopf_queue_free(struct iopf_queue *queue) +{ + struct iopf_device_param *iopf_param, *next; + + if (!queue) + return; + + list_for_each_entry_safe(iopf_param, next, &queue->devices, queue_list) + iopf_queue_remove_device(queue, iopf_param->dev); + + destroy_workqueue(queue->wq); + kfree(queue); +} +EXPORT_SYMBOL_GPL(iopf_queue_free); diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 83397ae88d2d..e7bc47ba24f8 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -364,11 +364,20 @@ struct iommu_fault_param { struct mutex lock; }; +/** + * iopf_queue_flush_t - Flush low-level page fault queue + * + * Report all faults currently pending in the low-level page fault queue + */ +struct iopf_queue; +typedef int (*iopf_queue_flush_t)(void *cookie, struct device *dev, int pasid); + /** * struct iommu_param - collection of per-device IOMMU data * * @fault_param: IOMMU detected device fault reporting data * @sva_param: IOMMU parameter for SVA + * @iopf_param: I/O Page Fault queue and data * * TODO: migrate other per device data pointers under iommu_dev_data, e.g. 
* struct iommu_group *iommu_group; @@ -377,6 +386,7 @@ struct iommu_fault_param { struct iommu_param { struct mutex lock; struct iommu_fault_param *fault_param; + struct iopf_device_param *iopf_param; struct mutex sva_lock; struct iommu_sva_param *sva_param; }; @@ -1081,4 +1091,53 @@ void iommu_debugfs_setup(void); static inline void iommu_debugfs_setup(void) {} #endif +#ifdef CONFIG_IOMMU_PAGE_FAULT +extern int iommu_queue_iopf(struct iommu_fault *fault, void *cookie); + +extern int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev); +extern int iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev); +extern int iopf_queue_flush_dev(struct device *dev, int pasid); +extern struct iopf_queue * +iopf_queue_alloc(const char *name, iopf_queue_flush_t flush, void *cookie); +extern void iopf_queue_free(struct iopf_queue *queue); +extern int iopf_queue_discard_partial(struct iopf_queue *queue); +#else /* CONFIG_IOMMU_PAGE_FAULT */ +static inline int iommu_queue_iopf(struct iommu_fault *fault, void *cookie) +{ + return -ENODEV; +} + +static inline int iopf_queue_add_device(struct iopf_queue *queue, + struct device *dev) +{ + return -ENODEV; +} + +static inline int iopf_queue_remove_device(struct iopf_queue *queue, + struct device *dev) +{ + return -ENODEV; +} + +static inline int iopf_queue_flush_dev(struct device *dev, int pasid) +{ + return -ENODEV; +} + +static inline struct iopf_queue * +iopf_queue_alloc(const char *name, iopf_queue_flush_t flush, void *cookie) +{ + return NULL; +} + +static inline void iopf_queue_free(struct iopf_queue *queue) +{ +} + +static inline int iopf_queue_discard_partial(struct iopf_queue *queue) +{ + return -ENODEV; +} +#endif /* CONFIG_IOMMU_PAGE_FAULT */ + #endif /* __LINUX_IOMMU_H */ From patchwork Mon Feb 24 18:23:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401265 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 28F9F17E0 for ; Mon, 24 Feb 2020 18:24:43 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id E08F92084E for ; Mon, 24 Feb 2020 18:24:42 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="k7mwUwe4" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E08F92084E Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 46E116B0010; Mon, 24 Feb 2020 13:24:36 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 444386B0032; Mon, 24 Feb 2020 13:24:36 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 30E9B6B0036; Mon, 24 Feb 2020 13:24:36 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0203.hostedemail.com [216.40.44.203]) by kanga.kvack.org (Postfix) with ESMTP id 14D296B0010 for ; Mon, 24 Feb 2020 13:24:36 -0500 (EST) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by 
From: Jean-Philippe
Brucker Subject: [PATCH v4 04/26] iommu/sva: Search mm by PASID Date: Mon, 24 Feb 2020 19:23:39 +0100 Message-Id: <20200224182401.353359-5-jean-philippe@linaro.org> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> References: <20200224182401.353359-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Jean-Philippe Brucker The fault handler will need to find an mm given its PASID. This is the reason we have an IDR for storing address spaces, so hook it up. Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/iommu-sva.c | 19 +++++++++++++++++++ include/linux/iommu.h | 9 +++++++++ 2 files changed, 28 insertions(+) diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c index 64f1d1c82383..bfd0c477f290 100644 --- a/drivers/iommu/iommu-sva.c +++ b/drivers/iommu/iommu-sva.c @@ -559,3 +559,22 @@ int iommu_sva_get_pasid_generic(struct iommu_sva *handle) return pasid; } EXPORT_SYMBOL_GPL(iommu_sva_get_pasid_generic); + +/* ioasid wants a void * argument */ +static bool __mmget_not_zero(void *mm) +{ + return mmget_not_zero(mm); +} + +/** + * iommu_sva_find() - Find mm associated to the given PASID + * @pasid: Process Address Space ID assigned to the mm + * + * Returns the mm corresponding to this PASID, or an error if not found. A + * reference to the mm is taken, and must be released with mmput(). + */ +struct mm_struct *iommu_sva_find(int pasid) +{ + return ioasid_find(&shared_pasid, pasid, __mmget_not_zero); +} +EXPORT_SYMBOL_GPL(iommu_sva_find); diff --git a/include/linux/iommu.h b/include/linux/iommu.h index e7bc47ba24f8..e52a8731e7a9 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -1091,6 +1091,15 @@ void iommu_debugfs_setup(void); static inline void iommu_debugfs_setup(void) {} #endif +#ifdef CONFIG_IOMMU_SVA +extern struct mm_struct *iommu_sva_find(int pasid); +#else /* !CONFIG_IOMMU_SVA */ +static inline struct mm_struct *iommu_sva_find(int pasid) +{ + return NULL; +} +#endif /* !CONFIG_IOMMU_SVA */ + #ifdef CONFIG_IOMMU_PAGE_FAULT extern int iommu_queue_iopf(struct iommu_fault *fault, void *cookie); From patchwork Mon Feb 24 18:23:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401271 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C625E92A for ; Mon, 24 Feb 2020 18:24:45 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 87F9321744 for ; Mon, 24 Feb 2020 18:24:45 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="M99vGkY3" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 87F9321744 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 83F686B0032; Mon, 24 Feb 2020 13:24:37 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 7C8026B0036; Mon, 24 Feb 2020 13:24:37 -0500 (EST) X-Original-To: 
n3sm304255wmc.27.2020.02.24.10.24.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Feb 2020 10:24:34 -0800 (PST) From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org, Jean-Philippe Brucker Subject: [PATCH v4 05/26] iommu/iopf: Handle mm faults Date: Mon, 24 Feb 2020 19:23:40 +0100 Message-Id: <20200224182401.353359-6-jean-philippe@linaro.org> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> References: <20200224182401.353359-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Jean-Philippe Brucker When a recoverable page fault is handled by the fault workqueue, find the associated mm and call handle_mm_fault. Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/io-pgfault.c | 86 +++++++++++++++++++++++++++++++++++++- 1 file changed, 84 insertions(+), 2 deletions(-) diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c index 76e153c59fe3..ffa9f14e0803 100644 --- a/drivers/iommu/io-pgfault.c +++ b/drivers/iommu/io-pgfault.c @@ -7,6 +7,7 @@ #include #include +#include #include #include @@ -76,8 +77,65 @@ static int iopf_complete(struct device *dev, struct iopf_fault *iopf, static enum iommu_page_response_code iopf_handle_single(struct iopf_fault *iopf) { - /* TODO */ - return -ENODEV; + vm_fault_t ret; + struct mm_struct *mm; + struct vm_area_struct *vma; + unsigned int access_flags = 0; + unsigned int fault_flags = FAULT_FLAG_REMOTE; + struct iommu_fault_page_request *prm = &iopf->fault.prm; + enum iommu_page_response_code status = IOMMU_PAGE_RESP_INVALID; + + if (!(prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID)) + return status; + + mm = iommu_sva_find(prm->pasid); + if (IS_ERR_OR_NULL(mm)) + return status; + + down_read(&mm->mmap_sem); + + vma = find_extend_vma(mm, prm->addr); + if (!vma) + /* Unmapped area */ + goto out_put_mm; + + if (prm->perm & IOMMU_FAULT_PERM_READ) + access_flags |= VM_READ; + + if (prm->perm & IOMMU_FAULT_PERM_WRITE) { + access_flags |= VM_WRITE; + fault_flags |= FAULT_FLAG_WRITE; + } + + if (prm->perm & IOMMU_FAULT_PERM_EXEC) { + access_flags |= VM_EXEC; + fault_flags |= FAULT_FLAG_INSTRUCTION; + } + + if (!(prm->perm & IOMMU_FAULT_PERM_PRIV)) + fault_flags |= FAULT_FLAG_USER; + + if (access_flags & ~vma->vm_flags) + /* Access fault */ + goto out_put_mm; + + ret = handle_mm_fault(vma, prm->addr, fault_flags); + status = ret & VM_FAULT_ERROR ? IOMMU_PAGE_RESP_INVALID : + IOMMU_PAGE_RESP_SUCCESS; + +out_put_mm: + up_read(&mm->mmap_sem); + + /* + * If the process exits while we're handling the fault on its mm, we + * can't do mmput(). exit_mmap() would release the MMU notifier, calling + * iommu_notifier_release(), which has to flush the fault queue that + * we're executing on... So mmput_async() moves the release of the mm to + * another thread, if we're the last user. 
+ */ + mmput_async(mm); + + return status; } static void iopf_handle_group(struct work_struct *work) @@ -111,6 +169,30 @@ static void iopf_handle_group(struct work_struct *work) * * Add a fault to the device workqueue, to be handled by mm. * + * This module doesn't handle PCI PASID Stop Marker; IOMMU drivers must discard + * them before reporting faults. A PASID Stop Marker (LRW = 0b100) doesn't + * expect a response. It may be generated when disabling a PASID (issuing a + * PASID stop request) by some PCI devices. + * + * The PASID stop request is triggered by the mm_exit() callback. When the + * callback returns from the device driver, no page request is generated for + * this PASID anymore and outstanding ones have been pushed to the IOMMU (as per + * PCIe 4.0r1.0 - 6.20.1 and 10.4.1.2 - Managing PASID TLP Prefix Usage). Some + * PCI devices will wait for all outstanding page requests to come back with a + * response before completing the PASID stop request. Others do not wait for + * page responses, and instead issue this Stop Marker that tells us when the + * PASID can be reallocated. + * + * It is safe to discard the Stop Marker because it is an optimization. + * a. Page requests, which are posted requests, have been flushed to the IOMMU + * when mm_exit() returns, + * b. We flush all fault queues after mm_exit() returns and before freeing the + * PASID. + * + * So even though the Stop Marker might be issued by the device *after* the stop + * request completes, outstanding faults will have been dealt with by the time + * we free the PASID. + * * Return: 0 on success and <0 on error. */ int iommu_queue_iopf(struct iommu_fault *fault, void *cookie)
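The kernel-doc above places the burden of dropping PASID Stop Markers on the low-level IOMMU driver, before the fault is reported. Purely as an illustration of that rule, the sketch below shows a hypothetical PRI-queue handler doing the LRW = 0b100 check; struct my_pri_entry and its fields are invented for the example, while iommu_fault_event, iommu_report_device_fault() and the flags are the interfaces this series uses to report faults.

	/* Hypothetical PRI queue entry; the layout is illustrative only. */
	struct my_pri_entry {
		u32 pasid;
		u16 grpid;
		u64 addr;
		bool last, read, write;		/* the L, R and W bits of the request */
	};

	static void my_report_pri_entry(struct device *dev, struct my_pri_entry *e)
	{
		struct iommu_fault_event evt = {
			.fault.type = IOMMU_FAULT_PAGE_REQ,
			.fault.prm = {
				.flags = IOMMU_FAULT_PAGE_REQUEST_PASID_VALID,
				.pasid = e->pasid,
				.grpid = e->grpid,
				.addr = e->addr,
			},
		};

		/* LRW == 0b100 is a Stop Marker: it expects no response, drop it. */
		if (e->last && !e->read && !e->write)
			return;

		if (e->last)
			evt.fault.prm.flags |= IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE;
		if (e->read)
			evt.fault.prm.perm |= IOMMU_FAULT_PERM_READ;
		if (e->write)
			evt.fault.prm.perm |= IOMMU_FAULT_PERM_WRITE;

		/* Hands the fault to iommu_queue_iopf() through the registered handler. */
		iommu_report_device_fault(dev, &evt);
	}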
From patchwork Mon Feb 24 18:23:41 2020 From: Jean-Philippe Brucker Subject: [PATCH v4 06/26] iommu/sva: Register page fault handler Date: Mon, 24 Feb 2020 19:23:41 +0100 Message-Id: 
<20200224182401.353359-7-jean-philippe@linaro.org> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> References: <20200224182401.353359-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Jean-Philippe Brucker When enabling SVA, register the fault handler. Device driver will register an I/O page fault queue before or after calling iommu_sva_enable. The fault queue must be flushed before any io_mm is freed, to make sure that its PASID isn't used in any fault queue, and can be reallocated. Add iopf_queue_flush() calls in a few strategic locations. Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/Kconfig | 1 + drivers/iommu/iommu-sva.c | 16 ++++++++++++++++ 2 files changed, 17 insertions(+) diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig index e4a42e1708b4..211684e785ea 100644 --- a/drivers/iommu/Kconfig +++ b/drivers/iommu/Kconfig @@ -106,6 +106,7 @@ config IOMMU_DMA config IOMMU_SVA bool select IOASID + select IOMMU_PAGE_FAULT select IOMMU_API select MMU_NOTIFIER diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c index bfd0c477f290..494ca0824e4b 100644 --- a/drivers/iommu/iommu-sva.c +++ b/drivers/iommu/iommu-sva.c @@ -366,6 +366,8 @@ static void io_mm_release(struct mmu_notifier *mn, struct mm_struct *mm) dev_WARN(dev, "possible leak of PASID %u", io_mm->pasid); + iopf_queue_flush_dev(dev, io_mm->pasid); + /* unbind() frees the bond, we just detach it */ io_mm_detach_locked(bond); } @@ -442,11 +444,20 @@ static void iommu_sva_unbind_locked(struct iommu_bond *bond) void iommu_sva_unbind_generic(struct iommu_sva *handle) { + int pasid; struct iommu_param *param = handle->dev->iommu_param; if (WARN_ON(!param)) return; + /* + * Caller stopped the device from issuing PASIDs, now make sure they are + * out of the fault queue. 
+ */ + pasid = iommu_sva_get_pasid_generic(handle); + if (pasid != IOMMU_PASID_INVALID) + iopf_queue_flush_dev(handle->dev, pasid); + mutex_lock(&param->sva_lock); mutex_lock(&iommu_sva_lock); iommu_sva_unbind_locked(to_iommu_bond(handle)); @@ -484,6 +495,10 @@ int iommu_sva_enable(struct device *dev, struct iommu_sva_param *sva_param) goto err_unlock; } + ret = iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev); + if (ret) + goto err_unlock; + dev->iommu_param->sva_param = new_param; mutex_unlock(&param->sva_lock); return 0; @@ -521,6 +536,7 @@ int iommu_sva_disable(struct device *dev) goto out_unlock; } + iommu_unregister_device_fault_handler(dev); kfree(param->sva_param); param->sva_param = NULL; out_unlock:
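The commit message above notes that the IOMMU driver registers an I/O page fault queue around SVA enablement and must flush it before a PASID is freed. As an illustration of the intended calling sequence only, the following sketch shows how an IOMMU driver could wire the IOPF interfaces from this series into its own probe/enable paths; struct my_smmu, my_flush_pri() and my_drain_pri_queue() are placeholder names, not part of the patches.

	struct my_smmu {
		struct iopf_queue *iopf_queue;
		/* ... hardware state ... */
	};

	/* Hypothetical helper that drains the hardware PRI queue, reporting each
	 * entry with iommu_report_device_fault() before returning. */
	static int my_drain_pri_queue(struct my_smmu *smmu, struct device *dev, int pasid);

	/* Flush callback passed to iopf_queue_alloc(): commit any Page Requests
	 * still sitting in the hardware queue to the IOPF workqueue. */
	static int my_flush_pri(void *cookie, struct device *dev, int pasid)
	{
		struct my_smmu *smmu = cookie;

		return my_drain_pri_queue(smmu, dev, pasid);
	}

	static int my_smmu_probe_iopf(struct my_smmu *smmu)
	{
		smmu->iopf_queue = iopf_queue_alloc("my-smmu", my_flush_pri, smmu);
		return smmu->iopf_queue ? 0 : -ENOMEM;
	}

	/* Called while enabling SVA for one endpoint, before faults can arrive. */
	static int my_smmu_enable_iopf(struct my_smmu *smmu, struct device *dev)
	{
		return iopf_queue_add_device(smmu->iopf_queue, dev);
	}

	/* Counterpart on the disable path, once the device stops issuing PASIDs. */
	static void my_smmu_disable_iopf(struct my_smmu *smmu, struct device *dev)
	{
		iopf_queue_remove_device(smmu->iopf_queue, dev);
	}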
From patchwork Mon Feb 24 18:23:42 2020 From: Jean-Philippe Brucker Subject: [PATCH v4 07/26] arm64: mm: Pin down ASIDs for sharing mm with devices Date: Mon, 24 Feb 2020 19:23:42 +0100 Message-Id: <20200224182401.353359-8-jean-philippe@linaro.org> In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> To enable address space sharing with the IOMMU, introduce mm_context_get() and mm_context_put(), that pin down a context and ensure that it will keep its ASID after a rollover. Export the symbols to let the modular SMMUv3 driver use them. Pinning is necessary because a device constantly needs a valid ASID, unlike tasks that only require one when running. 
Without pinning, we would need to notify the IOMMU when we're about to use a new ASID for a task, and it would get complicated when a new task is assigned a shared ASID. Consider the following scenario with no ASID pinned: 1. Task t1 is running on CPUx with shared ASID (gen=1, asid=1) 2. Task t2 is scheduled on CPUx, gets ASID (1, 2) 3. Task tn is scheduled on CPUy, a rollover occurs, tn gets ASID (2, 1) We would now have to immediately generate a new ASID for t1, notify the IOMMU, and finally enable task tn. We are holding the lock during all that time, since we can't afford having another CPU trigger a rollover. The IOMMU issues invalidation commands that can take tens of milliseconds. It gets needlessly complicated. All we wanted to do was schedule task tn, that has no business with the IOMMU. By letting the IOMMU pin tasks when needed, we avoid stalling the slow path, and let the pinning fail when we're out of shareable ASIDs. After a rollover, the allocator expects at least one ASID to be available in addition to the reserved ones (one per CPU). So (NR_ASIDS - NR_CPUS - 1) is the maximum number of ASIDs that can be shared with the IOMMU. Signed-off-by: Jean-Philippe Brucker --- v2->v4: handle KPTI --- arch/arm64/include/asm/mmu.h | 1 + arch/arm64/include/asm/mmu_context.h | 11 ++- arch/arm64/mm/context.c | 103 +++++++++++++++++++++++++-- 3 files changed, 109 insertions(+), 6 deletions(-) diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h index e4d862420bb4..70ac3d4cbd3e 100644 --- a/arch/arm64/include/asm/mmu.h +++ b/arch/arm64/include/asm/mmu.h @@ -18,6 +18,7 @@ typedef struct { atomic64_t id; + unsigned long pinned; void *vdso; unsigned long flags; } mm_context_t; diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h index 3827ff4040a3..70715c10c02a 100644 --- a/arch/arm64/include/asm/mmu_context.h +++ b/arch/arm64/include/asm/mmu_context.h @@ -175,7 +175,13 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp) #define destroy_context(mm) do { } while(0) void check_and_switch_context(struct mm_struct *mm, unsigned int cpu); -#define init_new_context(tsk,mm) ({ atomic64_set(&(mm)->context.id, 0); 0; }) +static inline int +init_new_context(struct task_struct *tsk, struct mm_struct *mm) +{ + atomic64_set(&mm->context.id, 0); + mm->context.pinned = 0; + return 0; +} #ifdef CONFIG_ARM64_SW_TTBR0_PAN static inline void update_saved_ttbr0(struct task_struct *tsk, @@ -248,6 +254,9 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next, void verify_cpu_asid_bits(void); void post_ttbr_update_workaround(void); +unsigned long mm_context_get(struct mm_struct *mm); +void mm_context_put(struct mm_struct *mm); + #endif /* !__ASSEMBLY__ */ #endif /* !__ASM_MMU_CONTEXT_H */ diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index 121aba5b1941..5558de88b67d 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -26,6 +26,10 @@ static DEFINE_PER_CPU(atomic64_t, active_asids); static DEFINE_PER_CPU(u64, reserved_asids); static cpumask_t tlb_flush_pending; +static unsigned long max_pinned_asids; +static unsigned long nr_pinned_asids; +static unsigned long *pinned_asid_map; + #define ASID_MASK (~GENMASK(asid_bits - 1, 0)) #define ASID_FIRST_VERSION (1UL << asid_bits) @@ -73,6 +77,9 @@ void verify_cpu_asid_bits(void) static void set_kpti_asid_bits(void) { + unsigned int k; + u8 *dst = (u8 *)asid_map; + u8 *src = (u8 *)pinned_asid_map; unsigned int len = BITS_TO_LONGS(NUM_USER_ASIDS) * sizeof(unsigned long); /* * In case 
of KPTI kernel/user ASIDs are allocated in @@ -80,7 +87,8 @@ static void set_kpti_asid_bits(void) * is set, then the ASID will map only userspace. Thus * mark even as reserved for kernel. */ - memset(asid_map, 0xaa, len); + for (k = 0; k < len; k++) + dst[k] = src[k] | 0xaa; } static void set_reserved_asid_bits(void) @@ -88,9 +96,12 @@ static void set_reserved_asid_bits(void) if (arm64_kernel_unmapped_at_el0()) set_kpti_asid_bits(); else - bitmap_clear(asid_map, 0, NUM_USER_ASIDS); + bitmap_copy(asid_map, pinned_asid_map, NUM_USER_ASIDS); } +#define asid_gen_match(asid) \ + (!(((asid) ^ atomic64_read(&asid_generation)) >> asid_bits)) + static void flush_context(void) { int i; @@ -161,6 +172,14 @@ static u64 new_context(struct mm_struct *mm) if (check_update_reserved_asid(asid, newasid)) return newasid; + /* + * If it is pinned, we can keep using it. Note that reserved + * takes priority, because even if it is also pinned, we need to + * update the generation into the reserved_asids. + */ + if (mm->context.pinned) + return newasid; + /* * We had a valid ASID in a previous life, so try to re-use * it if possible. @@ -219,8 +238,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu) * because atomic RmWs are totally ordered for a given location. */ old_active_asid = atomic64_read(&per_cpu(active_asids, cpu)); - if (old_active_asid && - !((asid ^ atomic64_read(&asid_generation)) >> asid_bits) && + if (old_active_asid && asid_gen_match(asid) && atomic64_cmpxchg_relaxed(&per_cpu(active_asids, cpu), old_active_asid, asid)) goto switch_mm_fastpath; @@ -228,7 +246,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu) raw_spin_lock_irqsave(&cpu_asid_lock, flags); /* Check that our ASID belongs to the current generation. */ asid = atomic64_read(&mm->context.id); - if ((asid ^ atomic64_read(&asid_generation)) >> asid_bits) { + if (!asid_gen_match(asid)) { asid = new_context(mm); atomic64_set(&mm->context.id, asid); } @@ -251,6 +269,68 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu) cpu_switch_mm(mm->pgd, mm); } +unsigned long mm_context_get(struct mm_struct *mm) +{ + unsigned long flags; + u64 asid; + + raw_spin_lock_irqsave(&cpu_asid_lock, flags); + + asid = atomic64_read(&mm->context.id); + + if (mm->context.pinned) { + mm->context.pinned++; + asid &= ~ASID_MASK; + goto out_unlock; + } + + if (nr_pinned_asids >= max_pinned_asids) { + asid = 0; + goto out_unlock; + } + + if (!asid_gen_match(asid)) { + /* + * We went through one or more rollover since that ASID was + * used. Ensure that it is still valid, or generate a new one. + */ + asid = new_context(mm); + atomic64_set(&mm->context.id, asid); + } + + asid &= ~ASID_MASK; + + nr_pinned_asids++; + __set_bit(asid2idx(asid), pinned_asid_map); + mm->context.pinned++; + +out_unlock: + raw_spin_unlock_irqrestore(&cpu_asid_lock, flags); + + /* Set the equivalent of USER_ASID_BIT */ + if (asid && IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) + asid |= 1; + + return asid; +} +EXPORT_SYMBOL_GPL(mm_context_get); + +void mm_context_put(struct mm_struct *mm) +{ + unsigned long flags; + u64 asid = atomic64_read(&mm->context.id) & ~ASID_MASK; + + raw_spin_lock_irqsave(&cpu_asid_lock, flags); + + if (--mm->context.pinned == 0) { + __clear_bit(asid2idx(asid), pinned_asid_map); + nr_pinned_asids--; + } + + raw_spin_unlock_irqrestore(&cpu_asid_lock, flags); +} +EXPORT_SYMBOL_GPL(mm_context_put); + /* Errata workaround post TTBRx_EL1 update. 
*/ asmlinkage void post_ttbr_update_workaround(void) { @@ -279,6 +359,19 @@ static int asids_init(void) panic("Failed to allocate bitmap for %lu ASIDs\n", NUM_USER_ASIDS); + pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), + sizeof(*pinned_asid_map), GFP_KERNEL); + if (!pinned_asid_map) + panic("Failed to allocate pinned bitmap\n"); + + /* + * We assume that an ASID is always available after a rollover. This + * means that even if all CPUs have a reserved ASID, there still is at + * least one slot available in the asid map. + */ + max_pinned_asids = num_available_asids - num_possible_cpus() - 1; + nr_pinned_asids = 0; + + /* + * We cannot call set_reserved_asid_bits() here because CPU + * caps are not finalized yet, so it is safer to assume KPTI
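For context, a brief hedged sketch of how an SVA bind/unbind path might use the interface this patch adds: struct my_bond and my_write_cd() are invented for the example; the only behaviour taken from the patch is that mm_context_get() returns 0 once the shareable slots (NR_ASIDS - NR_CPUS - 1) are exhausted, and that every successful get must be paired with a mm_context_put().

	struct my_bond {
		struct mm_struct *mm;
		unsigned long asid;
	};

	/* Hypothetical helper: install a context descriptor that shares the
	 * CPU's ASID and page tables with the IOMMU. */
	static int my_write_cd(struct my_bond *bond, unsigned long asid, phys_addr_t ttbr);

	static int my_bind_mm(struct my_bond *bond, struct mm_struct *mm)
	{
		unsigned long asid;

		asid = mm_context_get(mm);
		if (!asid)
			return -ENOSPC;		/* out of shareable ASIDs */

		bond->mm = mm;
		bond->asid = asid;

		return my_write_cd(bond, asid, virt_to_phys(mm->pgd));
	}

	static void my_unbind_mm(struct my_bond *bond)
	{
		/* Unpin; the ASID may then be recycled at the next rollover. */
		mm_context_put(bond->mm);
	}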
From patchwork Mon Feb 24 18:23:43 2020 From: Jean-Philippe Brucker Subject: [PATCH v4 08/26] iommu/io-pgtable-arm: Move some definitions to a header Date: Mon, 24 Feb 2020 19:23:43 +0100 Message-Id: <20200224182401.353359-9-jean-philippe@linaro.org> In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> Extract some of the most generic TCR defines, so they can be reused by the page table sharing code. 
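The defines being moved only matter to a consumer that assembles a TCR value for its context descriptors. A purely illustrative sketch of such a consumer follows; the MY_TCR_*_MASK fields and the use of FIELD_PREP() are assumptions for the example, not taken from this patch.

	/* Illustrative: fold the shared defines into a stage-1 TCR value. */
	static u64 my_build_tcr(void)
	{
		return FIELD_PREP(MY_TCR_TG0_MASK,   ARM_LPAE_TCR_TG0_4K) |
		       FIELD_PREP(MY_TCR_IRGN0_MASK, ARM_LPAE_TCR_RGN_WBWA) |
		       FIELD_PREP(MY_TCR_ORGN0_MASK, ARM_LPAE_TCR_RGN_WBWA) |
		       FIELD_PREP(MY_TCR_SH0_MASK,   ARM_LPAE_TCR_SH_IS) |
		       FIELD_PREP(MY_TCR_IPS_MASK,   ARM_LPAE_TCR_PS_48_BIT);
	}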
Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/io-pgtable-arm.c | 27 ++------------------------- drivers/iommu/io-pgtable-arm.h | 30 ++++++++++++++++++++++++++++++ 2 files changed, 32 insertions(+), 25 deletions(-) create mode 100644 drivers/iommu/io-pgtable-arm.h diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c index 983b08477e64..75782b525c2f 100644 --- a/drivers/iommu/io-pgtable-arm.c +++ b/drivers/iommu/io-pgtable-arm.c @@ -20,6 +20,8 @@ #include +#include "io-pgtable-arm.h" + #define ARM_LPAE_MAX_ADDR_BITS 52 #define ARM_LPAE_S2_MAX_CONCAT_PAGES 16 #define ARM_LPAE_MAX_LEVELS 4 @@ -100,23 +102,6 @@ #define ARM_LPAE_PTE_MEMATTR_DEV (((arm_lpae_iopte)0x1) << 2) /* Register bits */ -#define ARM_LPAE_TCR_TG0_4K 0 -#define ARM_LPAE_TCR_TG0_64K 1 -#define ARM_LPAE_TCR_TG0_16K 2 - -#define ARM_LPAE_TCR_TG1_16K 1 -#define ARM_LPAE_TCR_TG1_4K 2 -#define ARM_LPAE_TCR_TG1_64K 3 - -#define ARM_LPAE_TCR_SH_NS 0 -#define ARM_LPAE_TCR_SH_OS 2 -#define ARM_LPAE_TCR_SH_IS 3 - -#define ARM_LPAE_TCR_RGN_NC 0 -#define ARM_LPAE_TCR_RGN_WBWA 1 -#define ARM_LPAE_TCR_RGN_WT 2 -#define ARM_LPAE_TCR_RGN_WB 3 - #define ARM_LPAE_VTCR_SL0_MASK 0x3 #define ARM_LPAE_TCR_T0SZ_SHIFT 0 @@ -124,14 +109,6 @@ #define ARM_LPAE_VTCR_PS_SHIFT 16 #define ARM_LPAE_VTCR_PS_MASK 0x7 -#define ARM_LPAE_TCR_PS_32_BIT 0x0ULL -#define ARM_LPAE_TCR_PS_36_BIT 0x1ULL -#define ARM_LPAE_TCR_PS_40_BIT 0x2ULL -#define ARM_LPAE_TCR_PS_42_BIT 0x3ULL -#define ARM_LPAE_TCR_PS_44_BIT 0x4ULL -#define ARM_LPAE_TCR_PS_48_BIT 0x5ULL -#define ARM_LPAE_TCR_PS_52_BIT 0x6ULL - #define ARM_LPAE_MAIR_ATTR_SHIFT(n) ((n) << 3) #define ARM_LPAE_MAIR_ATTR_MASK 0xff #define ARM_LPAE_MAIR_ATTR_DEVICE 0x04 diff --git a/drivers/iommu/io-pgtable-arm.h b/drivers/iommu/io-pgtable-arm.h new file mode 100644 index 000000000000..ba7cfdf7afa0 --- /dev/null +++ b/drivers/iommu/io-pgtable-arm.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef IO_PGTABLE_ARM_H_ +#define IO_PGTABLE_ARM_H_ + +#define ARM_LPAE_TCR_TG0_4K 0 +#define ARM_LPAE_TCR_TG0_64K 1 +#define ARM_LPAE_TCR_TG0_16K 2 + +#define ARM_LPAE_TCR_TG1_16K 1 +#define ARM_LPAE_TCR_TG1_4K 2 +#define ARM_LPAE_TCR_TG1_64K 3 + +#define ARM_LPAE_TCR_SH_NS 0 +#define ARM_LPAE_TCR_SH_OS 2 +#define ARM_LPAE_TCR_SH_IS 3 + +#define ARM_LPAE_TCR_RGN_NC 0 +#define ARM_LPAE_TCR_RGN_WBWA 1 +#define ARM_LPAE_TCR_RGN_WT 2 +#define ARM_LPAE_TCR_RGN_WB 3 + +#define ARM_LPAE_TCR_PS_32_BIT 0x0ULL +#define ARM_LPAE_TCR_PS_36_BIT 0x1ULL +#define ARM_LPAE_TCR_PS_40_BIT 0x2ULL +#define ARM_LPAE_TCR_PS_42_BIT 0x3ULL +#define ARM_LPAE_TCR_PS_44_BIT 0x4ULL +#define ARM_LPAE_TCR_PS_48_BIT 0x5ULL +#define ARM_LPAE_TCR_PS_52_BIT 0x6ULL + +#endif /* IO_PGTABLE_ARM_H_ */ From patchwork Mon Feb 24 18:23:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401299 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 86E671871 for ; Mon, 24 Feb 2020 18:24:56 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 478FF20CC7 for ; Mon, 24 Feb 2020 18:24:56 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="eI8yYEK7" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 478FF20CC7 
APXvYqy8MSJlJty3XKxuu4hz13wjSc+2uWe+QAIoI4+u4Wyj/t1n6Km8w38KIy+F6OJ89qWYqndYGw== X-Received: by 2002:a7b:cc97:: with SMTP id p23mr303798wma.89.1582568679409; Mon, 24 Feb 2020 10:24:39 -0800 (PST) Received: from localhost.localdomain ([2001:171b:c9a8:fbc0:116c:c27a:3e7f:5eaf]) by smtp.gmail.com with ESMTPSA id n3sm304255wmc.27.2020.02.24.10.24.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Feb 2020 10:24:39 -0800 (PST) From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org, Jean-Philippe Brucker Subject: [PATCH v4 09/26] iommu/arm-smmu-v3: Manage ASIDs with xarray Date: Mon, 24 Feb 2020 19:23:44 +0100 Message-Id: <20200224182401.353359-10-jean-philippe@linaro.org> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> References: <20200224182401.353359-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Jean-Philippe Brucker In preparation for sharing some ASIDs with the CPU, use a global xarray to store ASIDs and their context. ASID#1 is not reserved, and the ASID space is global. Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/arm-smmu-v3.c | 27 ++++++++++++++++++--------- 1 file changed, 18 insertions(+), 9 deletions(-) diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c index 87ae31ef35a1..7737b70e74cd 100644 --- a/drivers/iommu/arm-smmu-v3.c +++ b/drivers/iommu/arm-smmu-v3.c @@ -651,7 +651,6 @@ struct arm_smmu_device { #define ARM_SMMU_MAX_ASIDS (1 << 16) unsigned int asid_bits; - DECLARE_BITMAP(asid_map, ARM_SMMU_MAX_ASIDS); #define ARM_SMMU_MAX_VMIDS (1 << 16) unsigned int vmid_bits; @@ -711,6 +710,8 @@ struct arm_smmu_option_prop { const char *prop; }; +static DEFINE_XARRAY_ALLOC1(asid_xa); + static struct arm_smmu_option_prop arm_smmu_options[] = { { ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" }, { ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"}, @@ -1742,6 +1743,14 @@ static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain) cdcfg->cdtab = NULL; } +static void arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd) +{ + if (!cd->asid) + return; + + xa_erase(&asid_xa, cd->asid); +} + /* Stream table manipulation functions */ static void arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc) @@ -2388,10 +2397,9 @@ static void arm_smmu_domain_free(struct iommu_domain *domain) if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) { struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg; - if (cfg->cdcfg.cdtab) { + if (cfg->cdcfg.cdtab) arm_smmu_free_cd_tables(smmu_domain); - arm_smmu_bitmap_free(smmu->asid_map, cfg->cd.asid); - } + arm_smmu_free_asid(&cfg->cd); } else { struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg; if (cfg->vmid) @@ -2406,14 +2414,15 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain, struct io_pgtable_cfg *pgtbl_cfg) { int ret; - int asid; + u32 asid; struct arm_smmu_device *smmu = smmu_domain->smmu; struct 
arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg; typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr; - asid = arm_smmu_bitmap_alloc(smmu->asid_map, smmu->asid_bits); - if (asid < 0) - return asid; + ret = xa_alloc(&asid_xa, &asid, &cfg->cd, + XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL); + if (ret) + return ret; cfg->s1cdmax = master->ssid_bits; @@ -2446,7 +2455,7 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain, out_free_cd_tables: arm_smmu_free_cd_tables(smmu_domain); out_free_asid: - arm_smmu_bitmap_free(smmu->asid_map, asid); + arm_smmu_free_asid(&cfg->cd); return ret; } From patchwork Mon Feb 24 18:23:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401301 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 53A1D92A for ; Mon, 24 Feb 2020 18:24:59 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 182C720838 for ; Mon, 24 Feb 2020 18:24:59 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="pDRHKweD" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 182C720838 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 840CA6B0071; Mon, 24 Feb 2020 13:24:42 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 7A35D6B0072; Mon, 24 Feb 2020 13:24:42 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5F4EE6B0073; Mon, 24 Feb 2020 13:24:42 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0041.hostedemail.com [216.40.44.41]) by kanga.kvack.org (Postfix) with ESMTP id 406866B0071 for ; Mon, 24 Feb 2020 13:24:42 -0500 (EST) Received: from smtpin08.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id EA2CD283D for ; Mon, 24 Feb 2020 18:24:41 +0000 (UTC) X-FDA: 76525846362.08.rake70_632cd48540031 X-Spam-Summary: 2,0,0,c4b3c9c3bbe19290,d41d8cd98f00b204,jean-philippe@linaro.org,,RULES_HIT:41:334:355:368:369:379:541:800:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1534:1539:1568:1711:1714:1730:1747:1777:1792:2198:2199:2393:2559:2562:2731:3138:3139:3140:3141:3142:3865:3867:3870:4250:4321:5007:6261:6653:6742:7903:10004:11026:11473:11658:11914:12043:12048:12114:12297:12438:12517:12519:12555:12895:13069:13138:13161:13229:13231:13311:13357:13894:14181:14384:14394:14721:21080:21222:21444:21451:21627:21990:30054:30080:30089,0,RBL:209.85.221.67:@linaro.org:.lbl8.mailshell.net-66.100.201.201 62.2.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:1,LUA_SUMMARY:none X-HE-Tag: rake70_632cd48540031 X-Filterd-Recvd-Size: 4118 Received: from mail-wr1-f67.google.com (mail-wr1-f67.google.com [209.85.221.67]) by imf45.hostedemail.com (Postfix) with ESMTP for ; Mon, 24 Feb 2020 18:24:41 +0000 (UTC) 
From: Jean-Philippe Brucker
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org
Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org
Subject: [PATCH v4 10/26] arm64: cpufeature: Export symbol read_sanitised_ftr_reg()
Date: Mon, 24 Feb 2020 19:23:45 +0100
Message-Id: <20200224182401.353359-11-jean-philippe@linaro.org>
In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org>
References: <20200224182401.353359-1-jean-philippe@linaro.org>

The SMMUv3 driver would like to read the MMFR0 PARANGE field in order to share CPU page tables with devices. Allow the driver to be built as a module by exporting the read_sanitised_ftr_reg() cpufeature symbol.
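For illustration, a minimal sketch of how a modular driver consumes this export, mirroring the use made of it later in this series (the helper name is made up; read_sanitised_ftr_reg(), SYS_ID_AA64MMFR0_EL1 and cpuid_feature_extract_unsigned_field() are the existing cpufeature interfaces):

#include <asm/cpufeature.h>
#include <asm/sysreg.h>

/* Sketch only: read the sanitised PARANGE field from a loadable module. */
static unsigned int example_cpu_parange(void)
{
	u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);

	return cpuid_feature_extract_unsigned_field(mmfr0,
						    ID_AA64MMFR0_PARANGE_SHIFT);
}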
Signed-off-by: Jean-Philippe Brucker --- arch/arm64/kernel/cpufeature.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 0b6715625cf6..a96d2fb12e4d 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -838,6 +838,7 @@ u64 read_sanitised_ftr_reg(u32 id) BUG_ON(!regp); return regp->sys_val; } +EXPORT_SYMBOL_GPL(read_sanitised_ftr_reg); #define read_sysreg_case(r) \ case r: return read_sysreg_s(r) From patchwork Mon Feb 24 18:23:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401309 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id F135792A for ; Mon, 24 Feb 2020 18:25:01 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B11B120CC7 for ; Mon, 24 Feb 2020 18:25:01 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="SPcwqoV7" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B11B120CC7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id C5E3E6B0072; Mon, 24 Feb 2020 13:24:43 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id BC0ED6B0074; Mon, 24 Feb 2020 13:24:43 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A395D6B0075; Mon, 24 Feb 2020 13:24:43 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0210.hostedemail.com [216.40.44.210]) by kanga.kvack.org (Postfix) with ESMTP id 8D8BA6B0072 for ; Mon, 24 Feb 2020 13:24:43 -0500 (EST) Received: from smtpin30.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 368C6181AC9CC for ; Mon, 24 Feb 2020 18:24:43 +0000 (UTC) X-FDA: 76525846446.30.straw52_6358f64f25a4c X-Spam-Summary: 2,0,0,d3cf4239a8c700e5,d41d8cd98f00b204,jean-philippe@linaro.org,,RULES_HIT:2:41:355:379:541:800:960:966:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1605:1730:1747:1777:1792:2196:2199:2393:2559:2562:3138:3139:3140:3141:3142:3865:3866:3867:3868:3870:3871:3872:3874:4049:4120:4250:4321:4385:4605:5007:6117:6119:6261:6653:6742:7576:7874:7903:8784:10004:11026:11232:11473:11658:11914:12043:12048:12291:12296:12297:12438:12517:12519:12555:12679:12683:12895:12986:13894:13972:14096:14394:21080:21324:21433:21444:21451:21627:21987:21990:30054,0,RBL:209.85.128.67:@linaro.org:.lbl8.mailshell.net-62.2.0.100 66.100.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: straw52_6358f64f25a4c X-Filterd-Recvd-Size: 9424 Received: from mail-wm1-f67.google.com (mail-wm1-f67.google.com [209.85.128.67]) by imf02.hostedemail.com (Postfix) with ESMTP for ; Mon, 24 Feb 2020 18:24:42 +0000 (UTC) Received: by mail-wm1-f67.google.com with SMTP id s144so466623wme.1 for ; Mon, 24 Feb 2020 10:24:42 -0800 
(PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=HvIb5a5WLdlhrkNjCFl6MOBB1+T6U04UZ4IR7hmYWK0=; b=SPcwqoV7lGB2oL/3/rsAiA0ayQ0NoctvatW9Q7DXlc59+EEY3mjcaCGfm83ITfM1xg oxOsFOJhXr4PvY6JgAPnGjaLnN4qG/rvSM79UwAdvo4NXNPWZHmsrbWC6mXO0ONXyqIZ 9V1XemAMVGTZbBVDgy5MF+XLRcU6ZKpmny66D8SUmskWYbauZ53u9qrnqU1F8VdMyJQQ 1nMRDnlTnjSOGzjVA95xQ0KtPertnohu0p8Q8xU3RXpS2Exc2DBIOIdgzmvrN8/uyP5b /gjNnlQbrFmXL3w0f84TXmo2bcxf6O4VGf5HWxQ3BzRscLJSM+dGNiqtEigtYaMCPc60 sM7A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=HvIb5a5WLdlhrkNjCFl6MOBB1+T6U04UZ4IR7hmYWK0=; b=XuA/0sjMIZmX2+NCfd8CdAFXCuaaH0V8ibphaAkzHTF0J5H+krLrKdHE7jQtnrBNB2 XQgSX2RyUbHF7Dv35UFS7wGvbQfN4zRsLn+cDQ9JO755QLPDASXI9LrviBu4x0eJxURv QaQej1irZMcXMjBW/2d4aREOh1Ulz1H8zWn1rR3iTJ1eUe+fODdNyC7RUfncruJCE/vl XGMJpa/jxLEAyLLle0GLgTY9lzR7Xs16FD1dEhuFvvj5+/1kidTSmvSanjQ+7mKd2cS3 UunJE+2LW3006fVgY56ArAUFkc2MKJ342v6CxxPJqdzi0Oa/4PevtLX8aKDbEwjRFCAf AY1g== X-Gm-Message-State: APjAAAX7s8oK9p1JlX7MGNTHtjvnOP/fDcpzwAUVsjXVfPIHsHXypDl5 0ATD+3pf/5JV5JZxmTxkRqApzg== X-Google-Smtp-Source: APXvYqznJOIqbRFJKt6oPPVj99H38141DBd5j/HtYxvRfv5tpgtK3N+1x93avzLJhmua44A15FW55Q== X-Received: by 2002:a1c:2089:: with SMTP id g131mr291161wmg.63.1582568681562; Mon, 24 Feb 2020 10:24:41 -0800 (PST) Received: from localhost.localdomain ([2001:171b:c9a8:fbc0:116c:c27a:3e7f:5eaf]) by smtp.gmail.com with ESMTPSA id n3sm304255wmc.27.2020.02.24.10.24.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Feb 2020 10:24:41 -0800 (PST) From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org, Jean-Philippe Brucker Subject: [PATCH v4 11/26] iommu/arm-smmu-v3: Share process page tables Date: Mon, 24 Feb 2020 19:23:46 +0100 Message-Id: <20200224182401.353359-12-jean-philippe@linaro.org> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> References: <20200224182401.353359-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Jean-Philippe Brucker With Shared Virtual Addressing (SVA), we need to mirror CPU TTBR, TCR, MAIR and ASIDs in SMMU contexts. Each SMMU has a single ASID space split into two sets, shared and private. Shared ASIDs correspond to those obtained from the arch ASID allocator, and private ASIDs are used for "classic" map/unmap DMA. 
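Conceptually the split boils down to one global allocator in which shared entries are tagged by their mm; a minimal sketch of the idea, not part of the patch (asid_xa and struct arm_smmu_ctx_desc are introduced by this series, the helpers are illustrative):

/* Sketch: a single ASID space; cd->mm != NULL marks a shared (SVA) entry. */
static int example_alloc_private_asid(struct arm_smmu_ctx_desc *cd,
				      unsigned int asid_bits)
{
	u32 asid;
	int ret;

	ret = xa_alloc(&asid_xa, &asid, cd,
		       XA_LIMIT(1, (1 << asid_bits) - 1), GFP_KERNEL);
	if (!ret)
		cd->asid = asid;
	return ret;
}

static bool example_asid_is_shared(u32 asid)
{
	struct arm_smmu_ctx_desc *cd = xa_load(&asid_xa, asid);

	return cd && cd->mm;
}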
Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/arm-smmu-v3.c | 164 +++++++++++++++++++++++++++++++++++- 1 file changed, 160 insertions(+), 4 deletions(-) diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c index 7737b70e74cd..3f9adfd1b015 100644 --- a/drivers/iommu/arm-smmu-v3.c +++ b/drivers/iommu/arm-smmu-v3.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include @@ -33,6 +34,8 @@ #include +#include "io-pgtable-arm.h" + /* MMIO registers */ #define ARM_SMMU_IDR0 0x0 #define IDR0_ST_LVL GENMASK(28, 27) @@ -575,6 +578,9 @@ struct arm_smmu_ctx_desc { u64 ttbr; u64 tcr; u64 mair; + + refcount_t refs; + struct mm_struct *mm; }; struct arm_smmu_l1_ctx_desc { @@ -1639,7 +1645,8 @@ static int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, #ifdef __BIG_ENDIAN CTXDESC_CD_0_ENDI | #endif - CTXDESC_CD_0_R | CTXDESC_CD_0_A | CTXDESC_CD_0_ASET | + CTXDESC_CD_0_R | CTXDESC_CD_0_A | + (cd->mm ? 0 : CTXDESC_CD_0_ASET) | CTXDESC_CD_0_AA64 | FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid) | CTXDESC_CD_0_V; @@ -1743,12 +1750,159 @@ static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain) cdcfg->cdtab = NULL; } -static void arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd) +static void arm_smmu_init_cd(struct arm_smmu_ctx_desc *cd) { + refcount_set(&cd->refs, 1); +} + +static bool arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd) +{ + bool free; + struct arm_smmu_ctx_desc *old_cd; + if (!cd->asid) - return; + return false; + + xa_lock(&asid_xa); + free = refcount_dec_and_test(&cd->refs); + if (free) { + old_cd = __xa_erase(&asid_xa, cd->asid); + WARN_ON(old_cd != cd); + } + xa_unlock(&asid_xa); + return free; +} + +static struct arm_smmu_ctx_desc *arm_smmu_share_asid(u16 asid) +{ + struct arm_smmu_ctx_desc *cd; + + cd = xa_load(&asid_xa, asid); + if (!cd) + return NULL; - xa_erase(&asid_xa, cd->asid); + if (cd->mm) { + /* + * It's pretty common to find a stale CD when doing unbind-bind, + * given that the release happens after a RCU grace period. + * arm_smmu_free_asid() hasn't gone through yet, so reuse it. + */ + refcount_inc(&cd->refs); + return cd; + } + + /* + * Ouch, ASID is already in use for a private cd. + * TODO: seize it. + */ + return ERR_PTR(-EEXIST); +} + +__maybe_unused +static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm) +{ + u16 asid; + int ret = 0; + u64 tcr, par, reg; + struct arm_smmu_ctx_desc *cd; + struct arm_smmu_ctx_desc *old_cd = NULL; + + asid = mm_context_get(mm); + if (!asid) + return ERR_PTR(-ESRCH); + + cd = kzalloc(sizeof(*cd), GFP_KERNEL); + if (!cd) { + ret = -ENOMEM; + goto err_put_context; + } + + arm_smmu_init_cd(cd); + + xa_lock(&asid_xa); + old_cd = arm_smmu_share_asid(asid); + if (!old_cd) { + old_cd = __xa_store(&asid_xa, asid, cd, GFP_ATOMIC); + /* + * Keep error, clear valid pointers. If there was an old entry + * it has been moved already by arm_smmu_share_asid(). 
+ */ + old_cd = ERR_PTR(xa_err(old_cd)); + cd->asid = asid; + } + xa_unlock(&asid_xa); + + if (IS_ERR(old_cd)) { + ret = PTR_ERR(old_cd); + goto err_free_cd; + } else if (old_cd) { + if (WARN_ON(old_cd->mm != mm)) { + ret = -EINVAL; + goto err_free_cd; + } + kfree(cd); + mm_context_put(mm); + return old_cd; + } + + tcr = FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, 64ULL - VA_BITS) | + FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, ARM_LPAE_TCR_RGN_WBWA) | + FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, ARM_LPAE_TCR_RGN_WBWA) | + FIELD_PREP(CTXDESC_CD_0_TCR_SH0, ARM_LPAE_TCR_SH_IS) | + CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64; + + switch (PAGE_SIZE) { + case SZ_4K: + tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, + ARM_LPAE_TCR_TG0_4K); + break; + case SZ_16K: + tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, + ARM_LPAE_TCR_TG0_16K); + break; + case SZ_64K: + tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, + ARM_LPAE_TCR_TG0_64K); + break; + default: + WARN_ON(1); + ret = -EINVAL; + goto err_free_asid; + } + + reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); + par = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_PARANGE_SHIFT); + tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_IPS, par); + + cd->ttbr = virt_to_phys(mm->pgd); + cd->tcr = tcr; + /* + * MAIR value is pretty much constant and global, so we can just get it + * from the current CPU register + */ + cd->mair = read_sysreg(mair_el1); + + cd->mm = mm; + + return cd; + +err_free_asid: + arm_smmu_free_asid(cd); +err_free_cd: + kfree(cd); +err_put_context: + mm_context_put(mm); + return ERR_PTR(ret); +} + +__maybe_unused +static void arm_smmu_free_shared_cd(struct arm_smmu_ctx_desc *cd) +{ + if (arm_smmu_free_asid(cd)) { + /* Unpin ASID */ + mm_context_put(cd->mm); + kfree(cd); + } } /* Stream table manipulation functions */ @@ -2419,6 +2573,8 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain, struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg; typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr; + arm_smmu_init_cd(&cfg->cd); + ret = xa_alloc(&asid_xa, &asid, &cfg->cd, XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL); if (ret) From patchwork Mon Feb 24 18:23:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401315 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BED6892A for ; Mon, 24 Feb 2020 18:25:04 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7E7BA20838 for ; Mon, 24 Feb 2020 18:25:04 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="J9A0ZfwW" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7E7BA20838 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3FA336B0074; Mon, 24 Feb 2020 13:24:45 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 383D26B0075; Mon, 24 Feb 2020 13:24:45 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 222D66B0078; Mon, 24 Feb 2020 13:24:45 -0500 
(EST)
From: Jean-Philippe Brucker
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
linux-pci@vger.kernel.org, linux-mm@kvack.org
Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org, Jean-Philippe Brucker
Subject: [PATCH v4 12/26] iommu/arm-smmu-v3: Seize private ASID
Date: Mon, 24 Feb 2020 19:23:47 +0100
Message-Id: <20200224182401.353359-13-jean-philippe@linaro.org>
In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org>
References: <20200224182401.353359-1-jean-philippe@linaro.org>

From: Jean-Philippe Brucker

The SMMU has a single ASID space, the union of the shared and private ASID sets. This means that the SMMU driver competes with the arch ASID allocator for ASIDs. Shared ASIDs are those of Linux processes, allocated by the arch, and take part in broadcast TLB maintenance. Private ASIDs are allocated by the SMMU driver and used for "classic" map/unmap DMA; they require explicit TLB invalidations. When we pin down an mm_context and get an ASID that is already in use by the SMMU, it belongs to a private context. We used to simply abort the bind, but this is unfair to users, who would find themselves unable to bind a few seemingly random processes. Instead, try to allocate a new private ASID for the existing context and make the old ASID shared. Introduce a new lock to prevent races when rewriting context descriptors. Unfortunately it has to be a spinlock, since we take it while holding the asid lock, which is held in non-sleepable context (ASIDs are freed from an RCU callback).
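To spell out the lock nesting this implies, a rough sketch (asid_xa and contexts_lock are introduced by this series; the function exists only to show the ordering):

/*
 * Sketch: arm_smmu_share_asid() runs with the xarray lock held and may
 * rewrite a context descriptor, so the descriptor lock must be a spinlock
 * and allocations on this path must use GFP_ATOMIC.
 */
static void example_lock_nesting(void)
{
	xa_lock(&asid_xa);		/* outer: also taken when freeing ASIDs */
	spin_lock(&contexts_lock);	/* inner: protects CD table rewrites */
	/* ... __arm_smmu_write_ctx_desc() ... */
	spin_unlock(&contexts_lock);
	xa_unlock(&asid_xa);
}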
Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/arm-smmu-v3.c | 83 +++++++++++++++++++++++++++++-------- 1 file changed, 66 insertions(+), 17 deletions(-) diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c index 3f9adfd1b015..2839527ec9ee 100644 --- a/drivers/iommu/arm-smmu-v3.c +++ b/drivers/iommu/arm-smmu-v3.c @@ -717,6 +717,7 @@ struct arm_smmu_option_prop { }; static DEFINE_XARRAY_ALLOC1(asid_xa); +static DEFINE_SPINLOCK(contexts_lock); static struct arm_smmu_option_prop arm_smmu_options[] = { { ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" }, @@ -1513,6 +1514,17 @@ static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu, } /* Context descriptor manipulation functions */ +static void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid) +{ + struct arm_smmu_cmdq_ent cmd = { + .opcode = CMDQ_OP_TLBI_NH_ASID, + .tlbi.asid = asid, + }; + + arm_smmu_cmdq_issue_cmd(smmu, &cmd); + arm_smmu_cmdq_issue_sync(smmu); +} + static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain, int ssid, bool leaf) { @@ -1547,7 +1559,7 @@ static int arm_smmu_alloc_cd_leaf_table(struct arm_smmu_device *smmu, size_t size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3); l1_desc->l2ptr = dmam_alloc_coherent(smmu->dev, size, - &l1_desc->l2ptr_dma, GFP_KERNEL); + &l1_desc->l2ptr_dma, GFP_ATOMIC); if (!l1_desc->l2ptr) { dev_warn(smmu->dev, "failed to allocate context descriptor table\n"); @@ -1593,8 +1605,8 @@ static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_domain *smmu_domain, return l1_desc->l2ptr + idx * CTXDESC_CD_DWORDS; } -static int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, - int ssid, struct arm_smmu_ctx_desc *cd) +static int __arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, + int ssid, struct arm_smmu_ctx_desc *cd) { /* * This function handles the following cases: @@ -1670,6 +1682,17 @@ static int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, return 0; } +static int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, + int ssid, struct arm_smmu_ctx_desc *cd) +{ + int ret; + + spin_lock(&contexts_lock); + ret = __arm_smmu_write_ctx_desc(smmu_domain, ssid, cd); + spin_unlock(&contexts_lock); + return ret; +} + static int arm_smmu_alloc_cd_tables(struct arm_smmu_domain *smmu_domain) { int ret; @@ -1773,9 +1796,18 @@ static bool arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd) return free; } +/* + * Try to reserve this ASID in the SMMU. If it is in use, try to steal it from + * the private entry. Careful here, we may be modifying the context tables of + * another SMMU! + */ static struct arm_smmu_ctx_desc *arm_smmu_share_asid(u16 asid) { + int ret; + u32 new_asid; struct arm_smmu_ctx_desc *cd; + struct arm_smmu_device *smmu; + struct arm_smmu_domain *smmu_domain; cd = xa_load(&asid_xa, asid); if (!cd) @@ -1791,11 +1823,31 @@ static struct arm_smmu_ctx_desc *arm_smmu_share_asid(u16 asid) return cd; } + smmu_domain = container_of(cd, struct arm_smmu_domain, s1_cfg.cd); + smmu = smmu_domain->smmu; + + /* + * Race with unmap: TLB invalidations will start targeting the new ASID, + * which isn't assigned yet. We'll do an invalidate-all on the old ASID + * later, so it doesn't matter. + */ + ret = __xa_alloc(&asid_xa, &new_asid, cd, + XA_LIMIT(1, 1 << smmu->asid_bits), GFP_ATOMIC); + if (ret) + return ERR_PTR(-ENOSPC); + cd->asid = new_asid; + /* - * Ouch, ASID is already in use for a private cd. - * TODO: seize it. + * Update ASID and invalidate CD in all associated masters. 
There will + * be some overlap between use of both ASIDs, until we invalidate the + * TLB. */ - return ERR_PTR(-EEXIST); + arm_smmu_write_ctx_desc(smmu_domain, 0, cd); + + /* Invalidate TLB entries previously associated with that context */ + arm_smmu_tlb_inv_asid(smmu, asid); + + return NULL; } __maybe_unused @@ -2389,15 +2441,6 @@ static void arm_smmu_tlb_inv_context(void *cookie) struct arm_smmu_device *smmu = smmu_domain->smmu; struct arm_smmu_cmdq_ent cmd; - if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) { - cmd.opcode = CMDQ_OP_TLBI_NH_ASID; - cmd.tlbi.asid = smmu_domain->s1_cfg.cd.asid; - cmd.tlbi.vmid = 0; - } else { - cmd.opcode = CMDQ_OP_TLBI_S12_VMALL; - cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid; - } - /* * NOTE: when io-pgtable is in non-strict mode, we may get here with * PTEs previously cleared by unmaps on the current CPU not yet visible @@ -2405,8 +2448,14 @@ static void arm_smmu_tlb_inv_context(void *cookie) * insertion to guarantee those are observed before the TLBI. Do be * careful, 007. */ - arm_smmu_cmdq_issue_cmd(smmu, &cmd); - arm_smmu_cmdq_issue_sync(smmu); + if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) { + arm_smmu_tlb_inv_asid(smmu, smmu_domain->s1_cfg.cd.asid); + } else { + cmd.opcode = CMDQ_OP_TLBI_S12_VMALL; + cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid; + arm_smmu_cmdq_issue_cmd(smmu, &cmd); + arm_smmu_cmdq_issue_sync(smmu); + } arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0); } From patchwork Mon Feb 24 18:23:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401321 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 775C717E0 for ; Mon, 24 Feb 2020 18:25:07 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3802E20880 for ; Mon, 24 Feb 2020 18:25:07 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="K2euF5pV" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3802E20880 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 496DD6B0075; Mon, 24 Feb 2020 13:24:46 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 4493C6B007B; Mon, 24 Feb 2020 13:24:46 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2C2EA6B007D; Mon, 24 Feb 2020 13:24:46 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0122.hostedemail.com [216.40.44.122]) by kanga.kvack.org (Postfix) with ESMTP id 0C6D16B0075 for ; Mon, 24 Feb 2020 13:24:46 -0500 (EST) Received: from smtpin01.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 954E8181AC9CC for ; Mon, 24 Feb 2020 18:24:45 +0000 (UTC) X-FDA: 76525846530.01.ocean06_63ac176cf5f19 X-Spam-Summary: 
From: Jean-Philippe Brucker
To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org
Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org, Jean-Philippe Brucker
Subject: [PATCH v4 13/26] iommu/arm-smmu-v3: Add support for VHE
Date: Mon, 24 Feb 2020 19:23:48 +0100
Message-Id: <20200224182401.353359-14-jean-philippe@linaro.org>
X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> References: <20200224182401.353359-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Jean-Philippe Brucker ARMv8.1 extensions added Virtualization Host Extensions (VHE), which allow to run a host kernel at EL2. When using normal DMA, Device and CPU address spaces are dissociated, and do not need to implement the same capabilities, so VHE hasn't been used in the SMMU until now. With shared address spaces however, ASIDs are shared between MMU and SMMU, and broadcast TLB invalidations issued by a CPU are taken into account by the SMMU. TLB entries on both sides need to have identical exception level in order to be cleared with a single invalidation. When the CPU is using VHE, enable VHE in the SMMU for all STEs. Normal DMA mappings will need to use TLBI_EL2 commands instead of TLBI_NH, but shouldn't be otherwise affected by this change. Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/arm-smmu-v3.c | 31 ++++++++++++++++++++++++++----- 1 file changed, 26 insertions(+), 5 deletions(-) diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c index 2839527ec9ee..77554d89653b 100644 --- a/drivers/iommu/arm-smmu-v3.c +++ b/drivers/iommu/arm-smmu-v3.c @@ -13,6 +13,7 @@ #include #include #include +#include #include #include #include @@ -472,6 +473,8 @@ struct arm_smmu_cmdq_ent { #define CMDQ_OP_TLBI_NH_ASID 0x11 #define CMDQ_OP_TLBI_NH_VA 0x12 #define CMDQ_OP_TLBI_EL2_ALL 0x20 + #define CMDQ_OP_TLBI_EL2_ASID 0x21 + #define CMDQ_OP_TLBI_EL2_VA 0x22 #define CMDQ_OP_TLBI_S12_VMALL 0x28 #define CMDQ_OP_TLBI_S2_IPA 0x2a #define CMDQ_OP_TLBI_NSNH_ALL 0x30 @@ -638,6 +641,7 @@ struct arm_smmu_device { #define ARM_SMMU_FEAT_HYP (1 << 12) #define ARM_SMMU_FEAT_STALL_FORCE (1 << 13) #define ARM_SMMU_FEAT_VAX (1 << 14) +#define ARM_SMMU_FEAT_E2H (1 << 15) u32 features; #define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0) @@ -909,6 +913,8 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent) break; case CMDQ_OP_TLBI_NH_VA: cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid); + /* Fallthrough */ + case CMDQ_OP_TLBI_EL2_VA: cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid); cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf); cmd[1] |= ent->tlbi.addr & CMDQ_TLBI_1_VA_MASK; @@ -924,6 +930,9 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent) case CMDQ_OP_TLBI_S12_VMALL: cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid); break; + case CMDQ_OP_TLBI_EL2_ASID: + cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid); + break; case CMDQ_OP_ATC_INV: cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid); cmd[0] |= FIELD_PREP(CMDQ_ATC_0_GLOBAL, ent->atc.global); @@ -1517,7 +1526,8 @@ static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu, static void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid) { struct arm_smmu_cmdq_ent cmd = { - .opcode = CMDQ_OP_TLBI_NH_ASID, + .opcode = smmu->features & ARM_SMMU_FEAT_E2H ? + CMDQ_OP_TLBI_EL2_ASID : CMDQ_OP_TLBI_NH_ASID, .tlbi.asid = asid, }; @@ -2075,13 +2085,16 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid, } if (s1_cfg) { + int strw = smmu->features & ARM_SMMU_FEAT_E2H ? 
+ STRTAB_STE_1_STRW_EL2 : STRTAB_STE_1_STRW_NSEL1; + BUG_ON(ste_live); dst[1] = cpu_to_le64( FIELD_PREP(STRTAB_STE_1_S1DSS, STRTAB_STE_1_S1DSS_SSID0) | FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_WBRA) | FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_WBRA) | FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_ISH) | - FIELD_PREP(STRTAB_STE_1_STRW, STRTAB_STE_1_STRW_NSEL1)); + FIELD_PREP(STRTAB_STE_1_STRW, strw)); if (smmu->features & ARM_SMMU_FEAT_STALLS && !(smmu->features & ARM_SMMU_FEAT_STALL_FORCE)) @@ -2476,7 +2489,8 @@ static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size, return; if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) { - cmd.opcode = CMDQ_OP_TLBI_NH_VA; + cmd.opcode = smmu->features & ARM_SMMU_FEAT_E2H ? + CMDQ_OP_TLBI_EL2_VA : CMDQ_OP_TLBI_NH_VA; cmd.tlbi.asid = smmu_domain->s1_cfg.cd.asid; } else { cmd.opcode = CMDQ_OP_TLBI_S2_IPA; @@ -3743,7 +3757,11 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass) writel_relaxed(reg, smmu->base + ARM_SMMU_CR1); /* CR2 (random crap) */ - reg = CR2_PTM | CR2_RECINVSID | CR2_E2H; + reg = CR2_PTM | CR2_RECINVSID; + + if (smmu->features & ARM_SMMU_FEAT_E2H) + reg |= CR2_E2H; + writel_relaxed(reg, smmu->base + ARM_SMMU_CR2); /* Stream table */ @@ -3901,8 +3919,11 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu) if (reg & IDR0_MSI) smmu->features |= ARM_SMMU_FEAT_MSI; - if (reg & IDR0_HYP) + if (reg & IDR0_HYP) { smmu->features |= ARM_SMMU_FEAT_HYP; + if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN)) + smmu->features |= ARM_SMMU_FEAT_E2H; + } /* * The coherency feature as set by FW is used in preference to the ID From patchwork Mon Feb 24 18:23:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401323 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5904F14D5 for ; Mon, 24 Feb 2020 18:25:10 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 0F33A20838 for ; Mon, 24 Feb 2020 18:25:10 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="MFGs5MAp" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0F33A20838 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 050AE6B007B; Mon, 24 Feb 2020 13:24:47 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 003F46B007D; Mon, 24 Feb 2020 13:24:46 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D94006B007E; Mon, 24 Feb 2020 13:24:46 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0112.hostedemail.com [216.40.44.112]) by kanga.kvack.org (Postfix) with ESMTP id C154C6B007B for ; Mon, 24 Feb 2020 13:24:46 -0500 (EST) Received: from smtpin07.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 69D108245578 for ; Mon, 24 Feb 2020 18:24:46 +0000 (UTC) X-FDA: 
76525846572.07.nail85_63cff9ef0a52b X-Spam-Summary: 2,0,0,b6ef264ae0f7ab68,d41d8cd98f00b204,jean-philippe@linaro.org,,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1543:1711:1730:1747:1777:1792:2393:2559:2562:2693:3138:3139:3140:3141:3142:3354:3865:3866:3867:3868:3870:3871:3872:3874:4117:4250:5007:6119:6120:6261:6653:6742:7576:7901:7903:10004:11026:11473:11658:11914:12043:12048:12297:12438:12517:12519:12555:12679:12895:13161:13229:13894:13972:14096:14181:14394:14721:21080:21220:21433:21444:21451:21627:30003:30054:30070:30080,0,RBL:209.85.128.66:@linaro.org:.lbl8.mailshell.net-62.2.0.100 66.100.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: nail85_63cff9ef0a52b X-Filterd-Recvd-Size: 6453 Received: from mail-wm1-f66.google.com (mail-wm1-f66.google.com [209.85.128.66]) by imf23.hostedemail.com (Postfix) with ESMTP for ; Mon, 24 Feb 2020 18:24:45 +0000 (UTC) Received: by mail-wm1-f66.google.com with SMTP id s10so310711wmh.3 for ; Mon, 24 Feb 2020 10:24:45 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=CtUhXGlPsIJ68oWTTxDmR3+h0pZJEayRCmYSb70blVo=; b=MFGs5MApluTu8IWUQBUgGAvYQlWhw1Qg2i+s50QDE2pYw6MygCWeXQvU/qEk+wfERl tPpPnU4HcP6RGBxQuFWSwHBNETP/FPS0qBwIHIkmI3LJOI5RvUK2VhdbyzebuzRIDKRZ KcNhdPVk42z7exdHVwlPnaJ/qMMBtfBGKweEvWlXxWjNys3C6A0o0SdGp9EMSAAxIo3H NANpUzrLoBK5dZmkPbaQsif1wuvzXYei3gn+ou7YVkO53ACHTmwdLRkdjRAXpY2EzFb+ 1IDZsvUikMU64ttNWS2qFyACSY2bSLQKCbTwGmkKTT8xO9+ZOMvb18lvg+NrPE5G9nsr /WWA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=CtUhXGlPsIJ68oWTTxDmR3+h0pZJEayRCmYSb70blVo=; b=osUlPSIVr1LZZGHz390LIhlyTAROrNTZJqDxhFqMF4Lbuow48uNGDjSja+vbDQ0Gch tiv46GOLl6H4auV740ga02kvwy3hLud+me6ONGXnGcPnDKe59cZ29JB9zZRQYPQv72Jl x8OEv2tAZlChhGCSiL0ZDEIaItxoOsR56079Q2xyFtTwOQNtiuBYkgacH16OcxoVNtDo GzytU7gQijSyXqFhLSnufDLZojnu7aSjriGEUF5AL8nZQcwPBvPQERtCchzzPYe3trWl RTdvCrnf6PUz+vELcAyW3lyXB4zM2++I0CLJCkH8ErYIJo6BhVJBaJ4mIX2OHAOhoQ/a lMHw== X-Gm-Message-State: APjAAAVsjXxX1E4Lasfq1T2KP9jC5REnU7FNhNXUa41pDPyufuJlysyV KZ2QuPk3R7HGdwveir3fs1jvMA== X-Google-Smtp-Source: APXvYqxO1XjG5q9pMUD2nep9ZGbytcZIoxc+NEQ3zQomCXOkrENArq99rGCzJfi6OEEwS8GBwWrg4A== X-Received: by 2002:a7b:cb46:: with SMTP id v6mr318402wmj.117.1582568684877; Mon, 24 Feb 2020 10:24:44 -0800 (PST) Received: from localhost.localdomain ([2001:171b:c9a8:fbc0:116c:c27a:3e7f:5eaf]) by smtp.gmail.com with ESMTPSA id n3sm304255wmc.27.2020.02.24.10.24.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Feb 2020 10:24:44 -0800 (PST) From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org, Jean-Philippe Brucker Subject: [PATCH v4 14/26] iommu/arm-smmu-v3: Enable broadcast TLB maintenance Date: Mon, 24 Feb 2020 19:23:49 +0100 Message-Id: 
<20200224182401.353359-15-jean-philippe@linaro.org> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> References: <20200224182401.353359-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Jean-Philippe Brucker The SMMUv3 can handle invalidation targeted at TLB entries with shared ASIDs. If the implementation supports broadcast TLB maintenance, enable it and keep track of it in a feature bit. The SMMU will then be affected by inner-shareable TLB invalidations from other agents. A major side-effect of this change is that stage-2 translation contexts are now affected by all invalidations by VMID. VMIDs are all shared and the only ways to prevent over-invalidation, since the stage-2 page tables are not shared between CPU and SMMU, are to either disable BTM or allocate different VMIDs. This patch does not address the problem. Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/arm-smmu-v3.c | 19 +++++++++++++++++-- 1 file changed, 17 insertions(+), 2 deletions(-) diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c index 77554d89653b..b72b2fdcd21f 100644 --- a/drivers/iommu/arm-smmu-v3.c +++ b/drivers/iommu/arm-smmu-v3.c @@ -56,6 +56,7 @@ #define IDR0_ASID16 (1 << 12) #define IDR0_ATS (1 << 10) #define IDR0_HYP (1 << 9) +#define IDR0_BTM (1 << 5) #define IDR0_COHACC (1 << 4) #define IDR0_TTF GENMASK(3, 2) #define IDR0_TTF_AARCH64 2 @@ -642,6 +643,7 @@ struct arm_smmu_device { #define ARM_SMMU_FEAT_STALL_FORCE (1 << 13) #define ARM_SMMU_FEAT_VAX (1 << 14) #define ARM_SMMU_FEAT_E2H (1 << 15) +#define ARM_SMMU_FEAT_BTM (1 << 16) u32 features; #define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0) @@ -3757,11 +3759,14 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass) writel_relaxed(reg, smmu->base + ARM_SMMU_CR1); /* CR2 (random crap) */ - reg = CR2_PTM | CR2_RECINVSID; + reg = CR2_RECINVSID; if (smmu->features & ARM_SMMU_FEAT_E2H) reg |= CR2_E2H; + if (!(smmu->features & ARM_SMMU_FEAT_BTM)) + reg |= CR2_PTM; + writel_relaxed(reg, smmu->base + ARM_SMMU_CR2); /* Stream table */ @@ -3872,6 +3877,7 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu) { u32 reg; bool coherent = smmu->features & ARM_SMMU_FEAT_COHERENCY; + bool vhe = cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN); /* IDR0 */ reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0); @@ -3921,10 +3927,19 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu) if (reg & IDR0_HYP) { smmu->features |= ARM_SMMU_FEAT_HYP; - if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN)) + if (vhe) smmu->features |= ARM_SMMU_FEAT_E2H; } + /* + * If the CPU is using VHE, but the SMMU doesn't support it, the SMMU + * will create TLB entries for NH-EL1 world and will miss the + * broadcasted TLB invalidations that target EL2-E2H world. Don't enable + * BTM in that case. + */ + if (reg & IDR0_BTM && (!vhe || reg & IDR0_HYP)) + smmu->features |= ARM_SMMU_FEAT_BTM; + /* * The coherency feature as set by FW is used in preference to the ID * register, but warn on mismatch. 
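As an aside, the IDR0_BTM/IDR0_HYP check added in this patch reduces to the following, shown here only as a sketch (IDR0_BTM and IDR0_HYP are the ID register fields used above):

/*
 * BTM is only usable when the SMMU caches TLB entries for the same
 * translation regime as the CPU:
 *
 *   CPU VHE | SMMU HYP | enable BTM?
 *   --------+----------+---------------------------------------------
 *     no    |   any    | yes - both sides use NH-EL1 entries
 *     yes   |   yes    | yes - E2H is set, both sides use EL2-E2H
 *     yes   |   no     | no  - the SMMU would miss EL2-E2H invalidations
 */
static bool example_btm_usable(u32 idr0, bool vhe)
{
	return (idr0 & IDR0_BTM) && (!vhe || (idr0 & IDR0_HYP));
}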
From patchwork Mon Feb 24 18:23:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401325 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 065C792A for ; Mon, 24 Feb 2020 18:25:13 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B1DC32084E for ; Mon, 24 Feb 2020 18:25:12 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="fv4SMCnv" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B1DC32084E Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3101D6B007D; Mon, 24 Feb 2020 13:24:48 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 272126B0080; Mon, 24 Feb 2020 13:24:48 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 113596B0081; Mon, 24 Feb 2020 13:24:48 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0140.hostedemail.com [216.40.44.140]) by kanga.kvack.org (Postfix) with ESMTP id E77AC6B007D for ; Mon, 24 Feb 2020 13:24:47 -0500 (EST) Received: from smtpin15.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 9B963180AD806 for ; Mon, 24 Feb 2020 18:24:47 +0000 (UTC) X-FDA: 76525846614.15.hole05_63fbfed7fb762 X-Spam-Summary: 2,0,0,515ced1a0a219191,d41d8cd98f00b204,jean-philippe@linaro.org,,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1543:1711:1730:1747:1777:1792:2393:2559:2562:2693:3138:3139:3140:3141:3142:3353:3865:3866:3867:3868:3870:3871:4117:4250:4321:5007:6119:6261:6653:6742:7576:7903:8603:10004:11026:11473:11658:11914:12043:12048:12291:12296:12297:12438:12517:12519:12555:12679:12683:12895:13894:14181:14394:14721:21080:21433:21444:21451:21627:21990:30034:30054:30070,0,RBL:209.85.221.68:@linaro.org:.lbl8.mailshell.net-66.100.201.201 62.2.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:15,LUA_SUMMARY:none X-HE-Tag: hole05_63fbfed7fb762 X-Filterd-Recvd-Size: 6502 Received: from mail-wr1-f68.google.com (mail-wr1-f68.google.com [209.85.221.68]) by imf34.hostedemail.com (Postfix) with ESMTP for ; Mon, 24 Feb 2020 18:24:46 +0000 (UTC) Received: by mail-wr1-f68.google.com with SMTP id m16so11544572wrx.11 for ; Mon, 24 Feb 2020 10:24:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=NRHBKJykpG/l2CLRpIAYf7MF7s3vRHP9YUPKSLfMvLg=; b=fv4SMCnvr/ltph2l+C3yE2rK9K4wI8QLF5VnYwdocUe8//ucsgYRR0KK4iM6jBB6zB AzMs5Nr0Y9GzZhI8kPpCu6Z6IadvFKIuzmtHmzPpJ09JPCmVdjINhxdGb5rqj35RCjse xYohylfBvtq67rXze9BVQXkKxwnp56h/6D6t1S7IO5zLEiT8eS+g5M9gFNoAeE43veUr BOh2fCenr+RsobHOcYs7QpSHNIOHukYHR38/0QxKIDen0V/gPdnJz8euUqtTn/QieTo7 
QIjaRtodjoY8zQI/CSRapSnvw4JboLhdQzXsp77NGR7dnk8AbFyZ01aC5whAEWdgbeVT h+Zw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=NRHBKJykpG/l2CLRpIAYf7MF7s3vRHP9YUPKSLfMvLg=; b=Y4b57nkU6f9S+53RK25K2ZJqgTtwk8dhvazqmSmjzHa9ranNYQOvN+IUfFZt8UIh7e ChimxFP8x+9l3iDYGf8PaEZkSlxTps0DMKVSWXzngjV7D6GphzGM1vYNbkp71+3dymF0 yghrFP7a3znChbIGEYWDkpsHI4Tcmnr/o0maBArfrmfSuXv2mC1AyiRyil9l2BnYfS1a JCJ/hv9l2OD0Z14hXYt/UJqPCXTnUTAAhrtdrQggNeUyFqImqa4EkdVYeLRimlmevwwe q3U86GdgLBjiDyyPm4BRISbXPWa3McwsqbU0DLegZHtYbZFPMILtlLpfEHX56DvGOXAK RUuQ== X-Gm-Message-State: APjAAAXLlO2bCzQa6/p6gDj+Qo3C1YqSZ7uh8idxDJxmJXhKhSrWxI+P Q1f9vdvHccAm8sq/YheSJzEhLQ== X-Google-Smtp-Source: APXvYqxd3PKQLrWTH+xuDThNjeJqmjg7tyD5z1KfZrZ7d0GQka2DjspbetTkYKxFAd0IwbQ2IMfIhQ== X-Received: by 2002:a5d:4a8c:: with SMTP id o12mr65975078wrq.43.1582568685910; Mon, 24 Feb 2020 10:24:45 -0800 (PST) Received: from localhost.localdomain ([2001:171b:c9a8:fbc0:116c:c27a:3e7f:5eaf]) by smtp.gmail.com with ESMTPSA id n3sm304255wmc.27.2020.02.24.10.24.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Feb 2020 10:24:45 -0800 (PST) From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org, Jean-Philippe Brucker Subject: [PATCH v4 15/26] iommu/arm-smmu-v3: Add SVA feature checking Date: Mon, 24 Feb 2020 19:23:50 +0100 Message-Id: <20200224182401.353359-16-jean-philippe@linaro.org> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> References: <20200224182401.353359-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Jean-Philippe Brucker Aggregate all sanity-checks for sharing CPU page tables with the SMMU under a single ARM_SMMU_FEAT_SVA bit. For PCIe SVA, users also need to check FEAT_ATS and FEAT_PRI. For platform SVA, they will most likely have to check FEAT_STALLS. 
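A hedged sketch of the caller-side test this describes (ARM_SMMU_FEAT_ATS, ARM_SMMU_FEAT_PRI and ARM_SMMU_FEAT_STALLS already exist in the driver; the helper itself is made up):

/* Sketch: what a PCIe SVA user of the new feature bit would check. */
static bool example_can_do_pcie_sva(struct arm_smmu_device *smmu)
{
	u32 want = ARM_SMMU_FEAT_SVA | ARM_SMMU_FEAT_ATS | ARM_SMMU_FEAT_PRI;

	return (smmu->features & want) == want;
}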
Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/arm-smmu-v3.c | 72 +++++++++++++++++++++++++++++++++++++ 1 file changed, 72 insertions(+) diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c index b72b2fdcd21f..77a846440ba6 100644 --- a/drivers/iommu/arm-smmu-v3.c +++ b/drivers/iommu/arm-smmu-v3.c @@ -644,6 +644,7 @@ struct arm_smmu_device { #define ARM_SMMU_FEAT_VAX (1 << 14) #define ARM_SMMU_FEAT_E2H (1 << 15) #define ARM_SMMU_FEAT_BTM (1 << 16) +#define ARM_SMMU_FEAT_SVA (1 << 17) u32 features; #define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0) @@ -3873,6 +3874,74 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass) return 0; } +static bool arm_smmu_supports_sva(struct arm_smmu_device *smmu) +{ + unsigned long reg, fld; + unsigned long oas; + unsigned long asid_bits; + + u32 feat_mask = ARM_SMMU_FEAT_BTM | ARM_SMMU_FEAT_COHERENCY; + + if ((smmu->features & feat_mask) != feat_mask) + return false; + + if (!(smmu->pgsize_bitmap & PAGE_SIZE)) + return false; + + /* + * Get the smallest PA size of all CPUs (sanitized by cpufeature). We're + * not even pretending to support AArch32 here. + */ + reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); + fld = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_PARANGE_SHIFT); + switch (fld) { + case 0x0: + oas = 32; + break; + case 0x1: + oas = 36; + break; + case 0x2: + oas = 40; + break; + case 0x3: + oas = 42; + break; + case 0x4: + oas = 44; + break; + case 0x5: + oas = 48; + break; + case 0x6: + oas = 52; + break; + default: + return false; + } + + /* abort if MMU outputs addresses greater than what we support. */ + if (smmu->oas < oas) + return false; + + /* We can support bigger ASIDs than the CPU, but not smaller */ + fld = cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR0_ASID_SHIFT); + asid_bits = fld ? 16 : 8; + if (smmu->asid_bits < asid_bits) + return false; + + /* + * See max_pinned_asids in arch/arm64/mm/context.c. The following is + * generally the maximum number of bindable processes. 
+ */ + if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) + asid_bits--; + dev_dbg(smmu->dev, "%d shared contexts\n", (1 << asid_bits) - + num_possible_cpus() - 2); + + return true; +} + static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu) { u32 reg; @@ -4080,6 +4149,9 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu) smmu->ias = max(smmu->ias, smmu->oas); + if (arm_smmu_supports_sva(smmu)) + smmu->features |= ARM_SMMU_FEAT_SVA; + dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n", smmu->ias, smmu->oas, smmu->features); return 0; From patchwork Mon Feb 24 18:23:51 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401327 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AE61592A for ; Mon, 24 Feb 2020 18:25:15 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7C0F52084E for ; Mon, 24 Feb 2020 18:25:15 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="zxiRGOVQ" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7C0F52084E Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 2BFB26B0080; Mon, 24 Feb 2020 13:24:49 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 24AA46B0081; Mon, 24 Feb 2020 13:24:49 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 111236B0082; Mon, 24 Feb 2020 13:24:49 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0048.hostedemail.com [216.40.44.48]) by kanga.kvack.org (Postfix) with ESMTP id ECFEE6B0080 for ; Mon, 24 Feb 2020 13:24:48 -0500 (EST) Received: from smtpin12.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 9D87E121A for ; Mon, 24 Feb 2020 18:24:48 +0000 (UTC) X-FDA: 76525846656.12.sense62_6425258a91034 X-Spam-Summary: 2,0,0,b372fc3d06d24b48,d41d8cd98f00b204,jean-philippe@linaro.org,,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1534:1541:1711:1730:1747:1777:1792:2393:2559:2562:3138:3139:3140:3141:3142:3352:3865:3867:3872:4250:4321:5007:6261:6653:6742:10004:11026:11658:11914:12048:12296:12297:12438:12517:12519:12555:12895:12986:13069:13311:13357:13894:14096:14181:14384:14394:14721:21080:21324:21444:21627:21789:21990:30054,0,RBL:209.85.128.68:@linaro.org:.lbl8.mailshell.net-62.2.0.100 66.100.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:25,LUA_SUMMARY:none X-HE-Tag: sense62_6425258a91034 X-Filterd-Recvd-Size: 4816 Received: from mail-wm1-f68.google.com (mail-wm1-f68.google.com [209.85.128.68]) by imf49.hostedemail.com (Postfix) with ESMTP for ; Mon, 24 Feb 2020 18:24:48 +0000 (UTC) Received: by mail-wm1-f68.google.com with SMTP id m3so161382wmi.0 for ; Mon, 24 Feb 2020 10:24:48 -0800 (PST) DKIM-Signature: v=1; 
a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=1/v4/yX+Rb+z7FmgKRxusFg6LNtZiO3DMXvnBgeOuHA=; b=zxiRGOVQbUYycQsFrjTLJX2IpngMLUiFnM9zOzjXR47e2SetqBWdJnpHlovEcuPFBv FexG3YDDT1ALrD/D+FKHkP2MIXaGBqx7A0sQaHVDRDNDQVr6jS2GdDb8Dv4EB4Ft15Rc X+Oex0fYMVCjcbrMtfUHVdIYJ1+hcTCMOOiPeqQ4R0z6xH0F4B7muTW0XjymSxneLuIP UjsAEBRJN0lKsKcUb0AF5d72SsOmuIZT45CPqq1FHWour+tr4KPFS8ATERhupHhS/6vO NnLNTmMYIpRJABwOkWxXIjD1Zqj0p9uyUCjjFEDFuUat5g+KAY0cf2+EDDRSjS2adgZy cP3g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=1/v4/yX+Rb+z7FmgKRxusFg6LNtZiO3DMXvnBgeOuHA=; b=ZAmhgS8N5mCbjHcqk+BsmHPekMryI/vUxQJ6Ssco8JnK+g3mdekup7JLCqahF1skcG FudtXNaVVfphN/dMJ+HYNLEfO/f56P2POn7oOzAPEorDuU5oKtT9HsEfSuIhXdj23X+7 VyrnLNH581/2TRO+d9sjeYEhFlfkZrzu+UZOhVHm5/pdlYCExC9frHN5j92QNPHSv4w3 bll626RLAswg5tPwGh9/1V17X7fJFQMk9pUNe4BX1npqJH015mqTReh1O+nCBuBYVgiu akUpgz9wCCA/TBSH21zzJbPDaxdGdiANwSmc6swT1MTUsF8b08+Ufg1bhAMj/hbjL8/R Xa1A== X-Gm-Message-State: APjAAAW2b8p5d4wDSvj+sAVMHG6XutNiAJcwKGiKF8y19pe/6j1OSY/D 1rEaIDeCFzN/quE7FxRHGBqgqQ== X-Google-Smtp-Source: APXvYqxqoFKehhPoecAVUtglmpmoNhk6pIi6LUZIxLg0IgobfRPy8bO8YntSNF9Wq6iiPuinGYBuMA== X-Received: by 2002:a05:600c:2255:: with SMTP id a21mr309717wmm.79.1582568687070; Mon, 24 Feb 2020 10:24:47 -0800 (PST) Received: from localhost.localdomain ([2001:171b:c9a8:fbc0:116c:c27a:3e7f:5eaf]) by smtp.gmail.com with ESMTPSA id n3sm304255wmc.27.2020.02.24.10.24.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Feb 2020 10:24:46 -0800 (PST) From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org Subject: [PATCH v4 16/26] iommu/arm-smmu-v3: Add dev_to_master() helper Date: Mon, 24 Feb 2020 19:23:51 +0100 Message-Id: <20200224182401.353359-17-jean-philippe@linaro.org> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> References: <20200224182401.353359-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: We'll need to frequently find the SMMU master associated to a device when implementing SVA. Move it to a separate function. 
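As a minimal sketch of the resulting call pattern (illustration only, mirroring the attach_dev conversion in the diff below), callers collapse the fwspec lookup into a single NULL-checked call:

	struct arm_smmu_master *master = dev_to_master(dev);

	if (!master)
		return -ENOENT;	/* no fwspec: device is not behind this SMMU */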
Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/arm-smmu-v3.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c index 77a846440ba6..54bd6913d648 100644 --- a/drivers/iommu/arm-smmu-v3.c +++ b/drivers/iommu/arm-smmu-v3.c @@ -747,6 +747,15 @@ static struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom) return container_of(dom, struct arm_smmu_domain, domain); } +static struct arm_smmu_master *dev_to_master(struct device *dev) +{ + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + + if (!fwspec) + return NULL; + return fwspec->iommu_priv; +} + static void parse_driver_options(struct arm_smmu_device *smmu) { int i = 0; @@ -2940,15 +2949,13 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) { int ret = 0; unsigned long flags; - struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); struct arm_smmu_device *smmu; + struct arm_smmu_master *master = dev_to_master(dev); struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); - struct arm_smmu_master *master; - if (!fwspec) + if (!master) return -ENOENT; - master = fwspec->iommu_priv; smmu = master->smmu; arm_smmu_detach_dev(master); From patchwork Mon Feb 24 18:23:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401329 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 96B8E92A for ; Mon, 24 Feb 2020 18:25:18 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 4960720838 for ; Mon, 24 Feb 2020 18:25:18 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="DVVCVC36" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4960720838 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id BC2386B0081; Mon, 24 Feb 2020 13:24:50 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id B5A866B0083; Mon, 24 Feb 2020 13:24:50 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9A07A6B0085; Mon, 24 Feb 2020 13:24:50 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0118.hostedemail.com [216.40.44.118]) by kanga.kvack.org (Postfix) with ESMTP id 768D36B0081 for ; Mon, 24 Feb 2020 13:24:50 -0500 (EST) Received: from smtpin21.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 26E54180AD806 for ; Mon, 24 Feb 2020 18:24:50 +0000 (UTC) X-FDA: 76525846740.21.fifth66_6454674a26041 X-Spam-Summary: 
2,0,0,5b8929401326f422,d41d8cd98f00b204,jean-philippe@linaro.org,,RULES_HIT:1:2:41:355:379:541:800:960:966:968:973:982:988:989:1260:1311:1314:1345:1359:1437:1515:1605:1730:1747:1777:1792:1981:2194:2196:2199:2200:2393:2559:2562:3138:3139:3140:3141:3142:3865:3867:3868:3870:3871:3872:4050:4250:4321:4385:4605:5007:6119:6261:6653:6742:7576:7901:8603:10004:10394:11026:11473:11658:11914:12043:12048:12291:12296:12297:12438:12517:12519:12555:12679:12683:12895:12986:13894:13972:14096:14394:21080:21433:21444:21451:21627:21789:21987:21990:30003:30045:30054,0,RBL:209.85.221.67:@linaro.org:.lbl8.mailshell.net-66.100.201.201 62.2.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: fifth66_6454674a26041 X-Filterd-Recvd-Size: 11035 Received: from mail-wr1-f67.google.com (mail-wr1-f67.google.com [209.85.221.67]) by imf20.hostedemail.com (Postfix) with ESMTP for ; Mon, 24 Feb 2020 18:24:49 +0000 (UTC) Received: by mail-wr1-f67.google.com with SMTP id p18so7934117wre.9 for ; Mon, 24 Feb 2020 10:24:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=xgVOPZ49x+1WrAdlWgbH+8j706qx2NuqDvbYc/uYWqQ=; b=DVVCVC36hOalbx91/U6kWD3ccIyGZj7ZRW5vpODSj3gu99YZ5+LCzw4kQfmBti4iJW I5ScSgKsJDdcwatGi5aA2Qx5/ffbToJmmMzZ0xftW+g2u6cChxL8uMOCYwct5IVk707S 2FuW+tN52SNX4qyRaGUM2rAAJxuwMk1Fq+u9P0FhmYP9BIpo4PCfwht7+KJOtLqVYX09 zXN9rgCcbRujvfDXfatzgxkjUn76Zuiq06G0lQzeP6mQUtufL30DcY2NyWUj7SjEqnJY IYeeZ+Y5NWWRqXvsEAL/j2Ic6zO4+PGLo4Wxn3yw4PROGnWprozGZFdCjKu3vnhPwHKa sP2Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=xgVOPZ49x+1WrAdlWgbH+8j706qx2NuqDvbYc/uYWqQ=; b=Y8+j3w7rgiHXBySj372J3wuNZ4lT0cB3xa6Dtht5JkH7GRYWh2lf3dASqw02gIicRT Jpu9VyUj+woZSnq0f6/xw+WVYwwfSZ8aG41r+TsCnlO8+tHRYcPgsKuOjkqXv1Xwl9lM 96GEseMjOMpFyxSob4qEM6QSRFoLYCBi3jIS37PYsCxhe7mNA+QDBG/WdM42jAjJXU/6 ru+JXmFhilhBazYEeAMjRwWAsCOap1xj4/dRby+NkUp8jwerqmUy1UnxyEu1sXuX7fAF 9HgaiGkKpXUCKeXGbX2vbygGQYy8gYY1q9pJX5mE+qdMwfoXJK3XOWTjWGoP/w8+6rjn 7mGA== X-Gm-Message-State: APjAAAUu3l42BImwEzCRjHdiN9PUTPOEwJImxt+wxj+A99pDW9Acp7u8 g2mmkt2O/RLcsH/7p+5TgSgj+Q== X-Google-Smtp-Source: APXvYqySTJeCzOmWaA354F9WnN4bUCNey753ZHQTjm+04zgAPahujuB+OVrEUHRHVNITM4h9f5/NJw== X-Received: by 2002:a5d:4847:: with SMTP id n7mr67178382wrs.30.1582568688179; Mon, 24 Feb 2020 10:24:48 -0800 (PST) Received: from localhost.localdomain ([2001:171b:c9a8:fbc0:116c:c27a:3e7f:5eaf]) by smtp.gmail.com with ESMTPSA id n3sm304255wmc.27.2020.02.24.10.24.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Feb 2020 10:24:47 -0800 (PST) From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org, Jean-Philippe Brucker Subject: [PATCH v4 17/26] iommu/arm-smmu-v3: Implement mm operations Date: Mon, 24 Feb 2020 19:23:52 +0100 Message-Id: 
<20200224182401.353359-18-jean-philippe@linaro.org> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> References: <20200224182401.353359-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Jean-Philippe Brucker Hook SVA operations to support sharing page tables with the SMMUv3: * dev_enable/disable/has_feature for device drivers to modify the SVA state. * sva_bind/unbind and sva_get_pasid to bind device and address spaces. * The mm_attach/detach/invalidate/free callbacks from iommu-sva Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/Kconfig | 1 + drivers/iommu/arm-smmu-v3.c | 176 +++++++++++++++++++++++++++++++++++- 2 files changed, 175 insertions(+), 2 deletions(-) diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig index 211684e785ea..05341155d34b 100644 --- a/drivers/iommu/Kconfig +++ b/drivers/iommu/Kconfig @@ -434,6 +434,7 @@ config ARM_SMMU_V3 tristate "ARM Ltd. System MMU Version 3 (SMMUv3) Support" depends on ARM64 select IOMMU_API + select IOMMU_SVA select IOMMU_IO_PGTABLE_LPAE select GENERIC_MSI_IRQ_DOMAIN help diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c index 54bd6913d648..3973f7222864 100644 --- a/drivers/iommu/arm-smmu-v3.c +++ b/drivers/iommu/arm-smmu-v3.c @@ -36,6 +36,7 @@ #include #include "io-pgtable-arm.h" +#include "iommu-sva.h" /* MMIO registers */ #define ARM_SMMU_IDR0 0x0 @@ -1872,7 +1873,6 @@ static struct arm_smmu_ctx_desc *arm_smmu_share_asid(u16 asid) return NULL; } -__maybe_unused static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm) { u16 asid; @@ -1969,7 +1969,6 @@ static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm) return ERR_PTR(ret); } -__maybe_unused static void arm_smmu_free_shared_cd(struct arm_smmu_ctx_desc *cd) { if (arm_smmu_free_asid(cd)) { @@ -2958,6 +2957,12 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) smmu = master->smmu; + if (iommu_sva_enabled(dev)) { + /* Did the previous driver forget to release SVA handles? */ + dev_err(dev, "cannot attach - SVA enabled\n"); + return -EBUSY; + } + arm_smmu_detach_dev(master); mutex_lock(&smmu_domain->init_mutex); @@ -3057,6 +3062,81 @@ arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova) return ops->iova_to_phys(ops, iova); } +static void arm_smmu_mm_invalidate(struct device *dev, int pasid, void *entry, + unsigned long iova, size_t size) +{ + /* TODO: Invalidate ATC */ +} + +static int arm_smmu_mm_attach(struct device *dev, int pasid, void *entry, + bool attach_domain) +{ + struct arm_smmu_ctx_desc *cd = entry; + struct iommu_domain *domain = iommu_get_domain_for_dev(dev); + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + + /* + * If another device in the domain has already been attached, the + * context descriptor is already valid. 
+ */ + if (!attach_domain) + return 0; + + return arm_smmu_write_ctx_desc(smmu_domain, pasid, cd); +} + +static void arm_smmu_mm_detach(struct device *dev, int pasid, void *entry, + bool detach_domain) +{ + struct arm_smmu_ctx_desc *cd = entry; + struct iommu_domain *domain = iommu_get_domain_for_dev(dev); + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + + if (detach_domain) { + arm_smmu_write_ctx_desc(smmu_domain, pasid, NULL); + + /* + * The ASID allocator won't broadcast the final TLB + * invalidations for this ASID, so we need to do it manually. + * For private contexts, freeing io-pgtable ops performs the + * invalidation. + */ + arm_smmu_tlb_inv_asid(smmu_domain->smmu, cd->asid); + } + + /* TODO: invalidate ATC */ +} + +static void *arm_smmu_mm_alloc(struct mm_struct *mm) +{ + return arm_smmu_alloc_shared_cd(mm); +} + +static void arm_smmu_mm_free(void *entry) +{ + arm_smmu_free_shared_cd(entry); +} + +static struct io_mm_ops arm_smmu_mm_ops = { + .alloc = arm_smmu_mm_alloc, + .invalidate = arm_smmu_mm_invalidate, + .attach = arm_smmu_mm_attach, + .detach = arm_smmu_mm_detach, + .release = arm_smmu_mm_free, +}; + +static struct iommu_sva * +arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm, void *drvdata) +{ + struct iommu_domain *domain = iommu_get_domain_for_dev(dev); + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + + if (smmu_domain->stage != ARM_SMMU_DOMAIN_S1) + return ERR_PTR(-EINVAL); + + return iommu_sva_bind_generic(dev, mm, &arm_smmu_mm_ops, drvdata); +} + static struct platform_driver arm_smmu_driver; static @@ -3175,6 +3255,7 @@ static void arm_smmu_remove_device(struct device *dev) master = fwspec->iommu_priv; smmu = master->smmu; + iommu_sva_disable(dev); arm_smmu_detach_dev(master); iommu_group_remove_device(dev); iommu_device_unlink(&smmu->iommu, dev); @@ -3294,6 +3375,90 @@ static void arm_smmu_get_resv_regions(struct device *dev, iommu_dma_get_resv_regions(dev, head); } +static bool arm_smmu_iopf_supported(struct arm_smmu_master *master) +{ + return false; +} + +static bool arm_smmu_dev_has_feature(struct device *dev, + enum iommu_dev_features feat) +{ + struct arm_smmu_master *master = dev_to_master(dev); + + if (!master) + return false; + + switch (feat) { + case IOMMU_DEV_FEAT_SVA: + if (!(master->smmu->features & ARM_SMMU_FEAT_SVA)) + return false; + + /* SSID and IOPF support are mandatory for the moment */ + return master->ssid_bits && arm_smmu_iopf_supported(master); + default: + return false; + } +} + +static bool arm_smmu_dev_feature_enabled(struct device *dev, + enum iommu_dev_features feat) +{ + struct arm_smmu_master *master = dev_to_master(dev); + + if (!master) + return false; + + switch (feat) { + case IOMMU_DEV_FEAT_SVA: + return iommu_sva_enabled(dev); + default: + return false; + } +} + +static int arm_smmu_dev_enable_sva(struct device *dev) +{ + struct arm_smmu_master *master = dev_to_master(dev); + struct iommu_sva_param param = { + .min_pasid = 1, + .max_pasid = 0xfffffU, + }; + + param.max_pasid = min(param.max_pasid, (1U << master->ssid_bits) - 1); + return iommu_sva_enable(dev, ¶m); +} + +static int arm_smmu_dev_enable_feature(struct device *dev, + enum iommu_dev_features feat) +{ + if (!arm_smmu_dev_has_feature(dev, feat)) + return -ENODEV; + + if (arm_smmu_dev_feature_enabled(dev, feat)) + return -EBUSY; + + switch (feat) { + case IOMMU_DEV_FEAT_SVA: + return arm_smmu_dev_enable_sva(dev); + default: + return -EINVAL; + } +} + +static int arm_smmu_dev_disable_feature(struct device *dev, + enum 
iommu_dev_features feat) +{ + if (!arm_smmu_dev_feature_enabled(dev, feat)) + return -EINVAL; + + switch (feat) { + case IOMMU_DEV_FEAT_SVA: + return iommu_sva_disable(dev); + default: + return -EINVAL; + } +} + static struct iommu_ops arm_smmu_ops = { .capable = arm_smmu_capable, .domain_alloc = arm_smmu_domain_alloc, @@ -3312,6 +3477,13 @@ static struct iommu_ops arm_smmu_ops = { .of_xlate = arm_smmu_of_xlate, .get_resv_regions = arm_smmu_get_resv_regions, .put_resv_regions = generic_iommu_put_resv_regions, + .dev_has_feat = arm_smmu_dev_has_feature, + .dev_feat_enabled = arm_smmu_dev_feature_enabled, + .dev_enable_feat = arm_smmu_dev_enable_feature, + .dev_disable_feat = arm_smmu_dev_disable_feature, + .sva_bind = arm_smmu_sva_bind, + .sva_unbind = iommu_sva_unbind_generic, + .sva_get_pasid = iommu_sva_get_pasid_generic, .pgsize_bitmap = -1UL, /* Restricted during device attach */ }; From patchwork Mon Feb 24 18:23:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401331 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3F79E92A for ; Mon, 24 Feb 2020 18:25:21 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id EA2DA20838 for ; Mon, 24 Feb 2020 18:25:20 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="fxvvkF/Z" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org EA2DA20838 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 8673D6B0083; Mon, 24 Feb 2020 13:24:51 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 7F1B86B0085; Mon, 24 Feb 2020 13:24:51 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 68FB46B0087; Mon, 24 Feb 2020 13:24:51 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0234.hostedemail.com [216.40.44.234]) by kanga.kvack.org (Postfix) with ESMTP id 517966B0083 for ; Mon, 24 Feb 2020 13:24:51 -0500 (EST) Received: from smtpin06.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 039572470 for ; Mon, 24 Feb 2020 18:24:51 +0000 (UTC) X-FDA: 76525846740.06.frame48_6479be17b4b22 X-Spam-Summary: 2,0,0,cad51c69c7a7fa4e,d41d8cd98f00b204,jean-philippe@linaro.org,,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1543:1711:1730:1747:1777:1792:2393:2559:2562:3138:3139:3140:3141:3142:3355:3865:3866:3867:3868:3870:3871:3874:4118:4250:4321:4605:5007:6117:6119:6261:6653:6742:7576:7862:7874:7901:7903:8603:9040:10004:11026:11473:11658:11914:12043:12048:12291:12296:12297:12438:12517:12519:12555:12679:12683:12895:12986:13141:13161:13229:13230:13894:14096:14110:14181:14394:14721:21080:21211:21444:21451:21611:21627:21789:21990:30054,0,RBL:209.85.221.66:@linaro.org:.lbl8.mailshell.net-66.100.201.201 
62.2.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: frame48_6479be17b4b22 X-Filterd-Recvd-Size: 7182 Received: from mail-wr1-f66.google.com (mail-wr1-f66.google.com [209.85.221.66]) by imf48.hostedemail.com (Postfix) with ESMTP for ; Mon, 24 Feb 2020 18:24:50 +0000 (UTC) Received: by mail-wr1-f66.google.com with SMTP id e8so11578086wrm.5 for ; Mon, 24 Feb 2020 10:24:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ZmUwfBE2Jvvg1hKUoqTW1Tzu3Hjef3qChRCYp/V+4z4=; b=fxvvkF/Z32DjH84285DNlIPhJoi0kRLNJoxVIH7KOt2EDK83vWJJdX2p0ctjcP0+gm qisx9YvSxt+OhX+CZ8obez7cElrie6GQHcCW1oQ4UQb93k576+N2gpVIEqJVr9bohALT 6riHGDBI9q5MffRpe8+xohDweiZ/qUWIrGYHsajhyVDmsdv3l8uSOo5T9POpRTIIiapz FvwthMPXu+xQ8cHZx0EbY0+ZszexexopCdlk5+7lw+f4/+AQcZADd2kews00jYTIZqws RHpE2hpFC9DfAMDCdZxJmKNinpQ8YtMpz5/fZkrtYGwf+p++YWkknqDYg95TBwEvcHpE JJWg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ZmUwfBE2Jvvg1hKUoqTW1Tzu3Hjef3qChRCYp/V+4z4=; b=nNYCgTItZui5nGifk1wg++D7WMasEQGfg9FeAAnsFBGbmBC+xuG4bQwnwlO5A4X2gh HLH4gH8mgZV2wUIDdIRJttR58cH7TIpBkZg03rmp7di6Xs2TKz3TwBMHTolu6wYypTRG XCsA1QUcCSFBvbu+YU/fKBFitKnkuUSg5bC1nHELR+awVccYEIAeZz0lJfOTKnWKRhnF X0CGmaqmmVD26PZAvy/Eqwi8RcYrErQcbDI3lSytpW0zjPv2n4RInMW8XuMu5fIU3hKV NcQrueJfOXtMe0fdKxZNRwrrm8i1pGmmHonYQWylAK+KU5Uerezb3ZFvJdWW36oMv0tt KDtA== X-Gm-Message-State: APjAAAVLa6kPTfZNN4bljOoEwXD14fLnOfQh4DIXHW3pHk6ZqaNPFMFb j2ZX74S8TfbMv/VriFDdHLyJcw== X-Google-Smtp-Source: APXvYqybBiQ3Y5uECJkf6wnUNdgrSmLMTNXn6Wwg6bg+H52TrB4b+jy0fSf/fGsZzyfFNyePzPdLuA== X-Received: by 2002:adf:df90:: with SMTP id z16mr66588301wrl.273.1582568689235; Mon, 24 Feb 2020 10:24:49 -0800 (PST) Received: from localhost.localdomain ([2001:171b:c9a8:fbc0:116c:c27a:3e7f:5eaf]) by smtp.gmail.com with ESMTPSA id n3sm304255wmc.27.2020.02.24.10.24.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Feb 2020 10:24:48 -0800 (PST) From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org, Jean-Philippe Brucker Subject: [PATCH v4 18/26] iommu/arm-smmu-v3: Hook up ATC invalidation to mm ops Date: Mon, 24 Feb 2020 19:23:53 +0100 Message-Id: <20200224182401.353359-19-jean-philippe@linaro.org> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> References: <20200224182401.353359-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Jean-Philippe Brucker The core calls us when an mm is modified. Perform the required ATC invalidations. 
Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/arm-smmu-v3.c | 44 ++++++++++++++++++++++++++++++++----- 1 file changed, 38 insertions(+), 6 deletions(-) diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c index 3973f7222864..95b4caceae1a 100644 --- a/drivers/iommu/arm-smmu-v3.c +++ b/drivers/iommu/arm-smmu-v3.c @@ -2354,6 +2354,20 @@ arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size, size_t inval_grain_shift = 12; unsigned long page_start, page_end; + /* + * ATS and PASID: + * + * If substream_valid is clear, the PCIe TLP is sent without a PASID + * prefix. In that case all ATC entries within the address range are + * invalidated, including those that were requested with a PASID! There + * is no way to invalidate only entries without PASID. + * + * When using STRTAB_STE_1_S1DSS_SSID0 (reserving CD 0 for non-PASID + * traffic), translation requests without PASID create ATC entries + * without PASID, which must be invalidated with substream_valid clear. + * This has the unpleasant side-effect of invalidating all PASID-tagged + * ATC entries within the address range. + */ *cmd = (struct arm_smmu_cmdq_ent) { .opcode = CMDQ_OP_ATC_INV, .substream_valid = !!ssid, @@ -2397,12 +2411,12 @@ arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size, cmd->atc.size = log2_span; } -static int arm_smmu_atc_inv_master(struct arm_smmu_master *master) +static int arm_smmu_atc_inv_master(struct arm_smmu_master *master, int ssid) { int i; struct arm_smmu_cmdq_ent cmd; - arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd); + arm_smmu_atc_inv_to_cmd(ssid, 0, 0, &cmd); for (i = 0; i < master->num_sids; i++) { cmd.atc.sid = master->sids[i]; @@ -2874,7 +2888,7 @@ static void arm_smmu_disable_ats(struct arm_smmu_master *master) * ATC invalidation via the SMMU. */ wmb(); - arm_smmu_atc_inv_master(master); + arm_smmu_atc_inv_master(master, 0); atomic_dec(&smmu_domain->nr_ats_masters); } @@ -3065,7 +3079,22 @@ arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova) static void arm_smmu_mm_invalidate(struct device *dev, int pasid, void *entry, unsigned long iova, size_t size) { - /* TODO: Invalidate ATC */ + int i; + struct arm_smmu_cmdq_ent cmd; + struct arm_smmu_cmdq_batch cmds = {}; + struct arm_smmu_master *master = dev_to_master(dev); + + if (!master->ats_enabled) + return; + + arm_smmu_atc_inv_to_cmd(pasid, iova, size, &cmd); + + for (i = 0; i < master->num_sids; i++) { + cmd.atc.sid = master->sids[i]; + arm_smmu_cmdq_batch_add(master->smmu, &cmds, &cmd); + } + + arm_smmu_cmdq_batch_submit(master->smmu, &cmds); } static int arm_smmu_mm_attach(struct device *dev, int pasid, void *entry, @@ -3089,6 +3118,7 @@ static void arm_smmu_mm_detach(struct device *dev, int pasid, void *entry, bool detach_domain) { struct arm_smmu_ctx_desc *cd = entry; + struct arm_smmu_master *master = dev_to_master(dev); struct iommu_domain *domain = iommu_get_domain_for_dev(dev); struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); @@ -3102,9 +3132,11 @@ static void arm_smmu_mm_detach(struct device *dev, int pasid, void *entry, * invalidation. 
*/ arm_smmu_tlb_inv_asid(smmu_domain->smmu, cd->asid); - } + arm_smmu_atc_inv_domain(smmu_domain, pasid, 0, 0); - /* TODO: invalidate ATC */ + } else if (master->ats_enabled) { + arm_smmu_atc_inv_master(master, pasid); + } } static void *arm_smmu_mm_alloc(struct mm_struct *mm) From patchwork Mon Feb 24 18:23:54 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401333 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9B80092A for ; Mon, 24 Feb 2020 18:25:23 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 5B1AE20838 for ; Mon, 24 Feb 2020 18:25:23 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="YllwiJrZ" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5B1AE20838 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 6435F6B0085; Mon, 24 Feb 2020 13:24:52 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 5D0416B0087; Mon, 24 Feb 2020 13:24:52 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 424A76B0088; Mon, 24 Feb 2020 13:24:52 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0252.hostedemail.com [216.40.44.252]) by kanga.kvack.org (Postfix) with ESMTP id 266566B0085 for ; Mon, 24 Feb 2020 13:24:52 -0500 (EST) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id D020D181AC9CC for ; Mon, 24 Feb 2020 18:24:51 +0000 (UTC) X-FDA: 76525846782.17.ants40_649b7d8868334 X-Spam-Summary: 2,0,0,3f017c51bf35be21,d41d8cd98f00b204,jean-philippe@linaro.org,,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1543:1711:1730:1747:1777:1792:2198:2199:2393:2559:2562:2693:2731:2895:2901:2903:3138:3139:3140:3141:3142:3354:3865:3866:3867:3868:3870:3871:3872:3874:4118:4250:4605:5007:6117:6119:6261:6653:6742:7576:7874:7903:8784:10004:11026:11473:11658:11914:12043:12048:12291:12296:12297:12438:12517:12519:12555:12679:12895:13255:13894:14096:14181:14394:14721:21080:21444:21451:21627:21990:30034:30054:30070,0,RBL:209.85.221.66:@linaro.org:.lbl8.mailshell.net-62.2.0.100 66.100.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: ants40_649b7d8868334 X-Filterd-Recvd-Size: 7025 Received: from mail-wr1-f66.google.com (mail-wr1-f66.google.com [209.85.221.66]) by imf44.hostedemail.com (Postfix) with ESMTP for ; Mon, 24 Feb 2020 18:24:51 +0000 (UTC) Received: by mail-wr1-f66.google.com with SMTP id y17so2769299wrn.6 for ; Mon, 24 Feb 2020 10:24:51 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; 
bh=OqyBSLsjWoTwUeyl/94+2rCzV9+vYMPMP4GIUnRD0qA=; b=YllwiJrZdJ1NjO07lPuBqd6vBWp/2/geoV9/Zzog5cPbDrIrLFnmIwe3ONCoKCVSiG ZAYw10/1nei1dfdmPsqeVoOWU9YTNhg6z5DCfIauUNueL2OSnQ7hdxBfDVjpfBwG5pr5 WlzCkO9JAW+/l+cBIHevykIdN1AVsIJ+/ItvELyk4F9HPziiDWRCPa6TCA+HVyDENaUx YQ0aZDPJUpiBlN2ZSrd27WwxwNsN5H2lEPJj8yn1/e8hAZ12hdJY3fHnzNfUrlBxT7bQ NxIifg85M9zkTlKSWlFIWhJbk1CamOGpKA+2XMby8Q9upRPoweAV5jj9BHvMdYmg3o2Q u7pQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=OqyBSLsjWoTwUeyl/94+2rCzV9+vYMPMP4GIUnRD0qA=; b=eaY1A6wRYAjIKUtcxHR47jTEyqmV4Yj5tjLQF8H/rxUo+zUtTFkaPqYR+XCfoQsu4l NNjUkyJLCibSq63Q+H032bkQMRbZUFzMltd0BmQaTuBdVuXKLl7T5d238j62vDPBJbUk r04qKNu/Nm+UvgDTepdG4D2wGcVDJ1tRxINuxtob5IIfhXzp9xunWViTS1fQPs8Q4ce0 KuusXKvelywg5XdyPGGIO1yF6UjEdrzl+Z1kfTJ39C8YktdXBgJVh77kwHaeQ7F1G/Rt jMsivGhtX3aNd/UmelQB4umE09XUq3NYzO+qfR3G4DiheLhOruCkjrulxmVmbTrX2djh E7Pw== X-Gm-Message-State: APjAAAWqX/iTaCVTxS0C0vLIixHhAnR/GqCAm5To+YF0urfu6kusaZ5w tJjruWDo4rh9PypArpcvyv2crA== X-Google-Smtp-Source: APXvYqweLQT3Woeir8eRQoWOxeTHyd8FqFAJUftioLhxJBGnZqgw7j5oKrWyOIQLiBydPtpty1UXkw== X-Received: by 2002:a05:6000:188:: with SMTP id p8mr66983633wrx.336.1582568690310; Mon, 24 Feb 2020 10:24:50 -0800 (PST) Received: from localhost.localdomain ([2001:171b:c9a8:fbc0:116c:c27a:3e7f:5eaf]) by smtp.gmail.com with ESMTPSA id n3sm304255wmc.27.2020.02.24.10.24.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Feb 2020 10:24:49 -0800 (PST) From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org, Jean-Philippe Brucker Subject: [PATCH v4 19/26] iommu/arm-smmu-v3: Add support for Hardware Translation Table Update Date: Mon, 24 Feb 2020 19:23:54 +0100 Message-Id: <20200224182401.353359-20-jean-philippe@linaro.org> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> References: <20200224182401.353359-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Jean-Philippe Brucker If the SMMU supports it and the kernel was built with HTTU support, enable hardware update of access and dirty flags. This is essential for shared page tables, to reduce the number of access faults on the fault queue. We can enable HTTU even if CPUs don't support it, because the kernel always checks for HW dirty bit and updates the PTE flags atomically. 
Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/arm-smmu-v3.c | 24 +++++++++++++++++++++++- 1 file changed, 23 insertions(+), 1 deletion(-) diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c index 95b4caceae1a..015e8e59e0ef 100644 --- a/drivers/iommu/arm-smmu-v3.c +++ b/drivers/iommu/arm-smmu-v3.c @@ -57,6 +57,8 @@ #define IDR0_ASID16 (1 << 12) #define IDR0_ATS (1 << 10) #define IDR0_HYP (1 << 9) +#define IDR0_HD (1 << 7) +#define IDR0_HA (1 << 6) #define IDR0_BTM (1 << 5) #define IDR0_COHACC (1 << 4) #define IDR0_TTF GENMASK(3, 2) @@ -305,6 +307,9 @@ #define CTXDESC_CD_0_TCR_IPS GENMASK_ULL(34, 32) #define CTXDESC_CD_0_TCR_TBI0 (1ULL << 38) +#define CTXDESC_CD_0_TCR_HA (1UL << 43) +#define CTXDESC_CD_0_TCR_HD (1UL << 42) + #define CTXDESC_CD_0_AA64 (1UL << 41) #define CTXDESC_CD_0_S (1UL << 44) #define CTXDESC_CD_0_R (1UL << 45) @@ -646,6 +651,8 @@ struct arm_smmu_device { #define ARM_SMMU_FEAT_E2H (1 << 15) #define ARM_SMMU_FEAT_BTM (1 << 16) #define ARM_SMMU_FEAT_SVA (1 << 17) +#define ARM_SMMU_FEAT_HA (1 << 18) +#define ARM_SMMU_FEAT_HD (1 << 19) u32 features; #define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0) @@ -1665,10 +1672,17 @@ static int __arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, * this substream's traffic */ } else { /* (1) and (2) */ + u64 tcr = cd->tcr; + cdptr[1] = cpu_to_le64(cd->ttbr & CTXDESC_CD_1_TTB0_MASK); cdptr[2] = 0; cdptr[3] = cpu_to_le64(cd->mair); + if (!(smmu->features & ARM_SMMU_FEAT_HD)) + tcr &= ~CTXDESC_CD_0_TCR_HD; + if (!(smmu->features & ARM_SMMU_FEAT_HA)) + tcr &= ~CTXDESC_CD_0_TCR_HA; + /* * STE is live, and the SMMU might read dwords of this CD in any * order. Ensure that it observes valid values before reading @@ -1676,7 +1690,7 @@ static int __arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, */ arm_smmu_sync_cd(smmu_domain, ssid, true); - val = cd->tcr | + val = tcr | #ifdef __BIG_ENDIAN CTXDESC_CD_0_ENDI | #endif @@ -1919,10 +1933,12 @@ static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm) return old_cd; } + /* HA and HD will be filtered out later if not supported by the SMMU */ tcr = FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, 64ULL - VA_BITS) | FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, ARM_LPAE_TCR_RGN_WBWA) | FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, ARM_LPAE_TCR_RGN_WBWA) | FIELD_PREP(CTXDESC_CD_0_TCR_SH0, ARM_LPAE_TCR_SH_IS) | + CTXDESC_CD_0_TCR_HA | CTXDESC_CD_0_TCR_HD | CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64; switch (PAGE_SIZE) { @@ -4211,6 +4227,12 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu) smmu->features |= ARM_SMMU_FEAT_E2H; } + if (reg & (IDR0_HA | IDR0_HD)) { + smmu->features |= ARM_SMMU_FEAT_HA; + if (reg & IDR0_HD) + smmu->features |= ARM_SMMU_FEAT_HD; + } + /* * If the CPU is using VHE, but the SMMU doesn't support it, the SMMU * will create TLB entries for NH-EL1 world and will miss the From patchwork Mon Feb 24 18:23:55 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401335 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 415AC92A for ; Mon, 24 Feb 2020 18:25:26 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id E8D8920838 for ; Mon, 24 Feb 2020 18:25:25 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" 
(2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="zuB/c0OA" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E8D8920838 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id CAFB86B009B; Mon, 24 Feb 2020 13:24:53 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id C5B8E6B009C; Mon, 24 Feb 2020 13:24:53 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B24BB6B009D; Mon, 24 Feb 2020 13:24:53 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0157.hostedemail.com [216.40.44.157]) by kanga.kvack.org (Postfix) with ESMTP id 96DBA6B009B for ; Mon, 24 Feb 2020 13:24:53 -0500 (EST) Received: from smtpin27.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 3EEC18245578 for ; Mon, 24 Feb 2020 18:24:53 +0000 (UTC) X-FDA: 76525846866.27.money59_64c834f8d715f X-Spam-Summary: 2,0,0,44a18a94d3b925dc,d41d8cd98f00b204,jean-philippe@linaro.org,,RULES_HIT:1:2:41:355:379:541:800:960:966:968:973:988:989:1260:1311:1314:1345:1359:1437:1515:1605:1730:1747:1777:1792:2196:2199:2393:2553:2559:2562:2898:2903:3138:3139:3140:3141:3142:3865:3866:3867:3868:3870:3871:4051:4250:4385:4605:5007:6261:6653:6742:7576:7862:7904:8603:9592:10004:11026:11473:11658:11914:12043:12048:12291:12296:12297:12438:12517:12519:12555:12679:12683:12895:13894:14096:14110:14394:21080:21324:21444:21451:21627:21990:30012:30054:30070:30080:30090,0,RBL:209.85.128.65:@linaro.org:.lbl8.mailshell.net-62.2.0.100 66.100.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: money59_64c834f8d715f X-Filterd-Recvd-Size: 12068 Received: from mail-wm1-f65.google.com (mail-wm1-f65.google.com [209.85.128.65]) by imf16.hostedemail.com (Postfix) with ESMTP for ; Mon, 24 Feb 2020 18:24:52 +0000 (UTC) Received: by mail-wm1-f65.google.com with SMTP id a9so345615wmj.3 for ; Mon, 24 Feb 2020 10:24:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=5LozG5AByqC+Y2QsZ96qniz5z6ofm6RdMRwjrLXS+sI=; b=zuB/c0OAZTGH2gXaCP3au67WP342eVaV0fpGipW5OGGN73wuEBs/1ROZf/qKpO3VQt woFLTxZNGIJ4967+Me5izVNTthCeacqxnSzqP00CSdVZMYLxj/tNulidjUqG/0VzmRjZ QBZKhGAJqmIBtlFc+1BYL0isbmHkquFthlxS1h9jLjYAR2EePZIEbd27msJirvOsnTsx RtziIkQVAZ7U2qRYVUnPq6uOoH+JWffn9cp0foAx+2pOdmEGO8+xrnjYEsG5ZmahMt01 evtZlwEGkxH6SJ4mAKSe6obj4rsKWhmZnxT9Y8StwOwgIqQtl1cZzvX0yP51JfpyAWBn X2ZQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=5LozG5AByqC+Y2QsZ96qniz5z6ofm6RdMRwjrLXS+sI=; b=ZEacwt/dgCxibT0TR+iUttozmcpAVeUg3OybRJGrh11knsTWrOc8pbYpcAaDnSeSLL C5GZz2a3fFAZi3rz6alLj7EPEkSaFbqjcu5RMycBrFQLpDt2YNdxy3dx0lr1+lGghV6S x/M/TgwrOXgWq+GRr+cmCGPs6JQfq7zM4j0iBvuRbko9zVx2PFjwIrhsm2hHAjkV0h9y +PYU467GRD4LqIrK8nAwMPSrfNh6LfKZd3uR2SG4abMg0SVmzpFXYyE1/t2HpXTrxLrG 
W9Khmptpj9Hq73edx/HJm/0V0bxhQXlIW21+6zFUzoQ3OFkS99t9Vq0V+JOLjvQPjaZi pIiQ== X-Gm-Message-State: APjAAAUIBHWHDSNXL7UnPfcaCTPdPlxy2o4FkVlrm8fviZ4kvIbXI6sL DpF2CeVY6YKzN/gAWmRzJ5jDfbDyu1I= X-Google-Smtp-Source: APXvYqzYYT4fN7IA/sD4kQOlIG2hDqJ3nfsH3tflLkE3KADa5Uj7qCLqWe6yrMSFQh0UM4q/4UEzKA== X-Received: by 2002:a1c:f001:: with SMTP id a1mr323565wmb.76.1582568691412; Mon, 24 Feb 2020 10:24:51 -0800 (PST) Received: from localhost.localdomain ([2001:171b:c9a8:fbc0:116c:c27a:3e7f:5eaf]) by smtp.gmail.com with ESMTPSA id n3sm304255wmc.27.2020.02.24.10.24.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Feb 2020 10:24:51 -0800 (PST) From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org, Jean-Philippe Brucker Subject: [PATCH v4 20/26] iommu/arm-smmu-v3: Maintain a SID->device structure Date: Mon, 24 Feb 2020 19:23:55 +0100 Message-Id: <20200224182401.353359-21-jean-philippe@linaro.org> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> References: <20200224182401.353359-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Jean-Philippe Brucker When handling faults from the event or PRI queue, we need to find the struct device associated to a SID. Add a rb_tree to keep track of SIDs. 
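As a sketch of the intended use (later patches in the series wire this up from the fault handlers; the EVTQ_0_SID field macro is an assumption here, not something this patch defines), an event handler resolves the StreamID in the event record to a master and hence to its struct device:

	/* Sketch only: look up the faulting StreamID (EVTQ_0_SID assumed) */
	u32 sid = FIELD_GET(EVTQ_0_SID, evt[0]);
	struct arm_smmu_master *master = arm_smmu_find_master(smmu, sid);

	if (!master)
		return -EINVAL;	/* unknown SID, no registered master */

	dev_dbg(master->dev, "fault on SID %u\n", sid);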
Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/arm-smmu-v3.c | 177 +++++++++++++++++++++++++++++------- 1 file changed, 145 insertions(+), 32 deletions(-) diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c index 015e8e59e0ef..28f8583cd47b 100644 --- a/drivers/iommu/arm-smmu-v3.c +++ b/drivers/iommu/arm-smmu-v3.c @@ -684,6 +684,15 @@ struct arm_smmu_device { /* IOMMU core code handle */ struct iommu_device iommu; + + struct rb_root streams; + struct mutex streams_mutex; +}; + +struct arm_smmu_stream { + u32 id; + struct arm_smmu_master *master; + struct rb_node node; }; /* SMMU private data for each master */ @@ -692,8 +701,8 @@ struct arm_smmu_master { struct device *dev; struct arm_smmu_domain *domain; struct list_head domain_head; - u32 *sids; - unsigned int num_sids; + struct arm_smmu_stream *streams; + unsigned int num_streams; bool ats_enabled; unsigned int ssid_bits; }; @@ -1573,8 +1582,8 @@ static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain, spin_lock_irqsave(&smmu_domain->devices_lock, flags); list_for_each_entry(master, &smmu_domain->devices, domain_head) { - for (i = 0; i < master->num_sids; i++) { - cmd.cfgi.sid = master->sids[i]; + for (i = 0; i < master->num_streams; i++) { + cmd.cfgi.sid = master->streams[i].id; arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd); } } @@ -2201,6 +2210,32 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid) return 0; } +__maybe_unused +static struct arm_smmu_master * +arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid) +{ + struct rb_node *node; + struct arm_smmu_stream *stream; + struct arm_smmu_master *master = NULL; + + mutex_lock(&smmu->streams_mutex); + node = smmu->streams.rb_node; + while (node) { + stream = rb_entry(node, struct arm_smmu_stream, node); + if (stream->id < sid) { + node = node->rb_right; + } else if (stream->id > sid) { + node = node->rb_left; + } else { + master = stream->master; + break; + } + } + mutex_unlock(&smmu->streams_mutex); + + return master; +} + /* IRQ and event handlers */ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev) { @@ -2434,8 +2469,8 @@ static int arm_smmu_atc_inv_master(struct arm_smmu_master *master, int ssid) arm_smmu_atc_inv_to_cmd(ssid, 0, 0, &cmd); - for (i = 0; i < master->num_sids; i++) { - cmd.atc.sid = master->sids[i]; + for (i = 0; i < master->num_streams; i++) { + cmd.atc.sid = master->streams[i].id; arm_smmu_cmdq_issue_cmd(master->smmu, &cmd); } @@ -2478,8 +2513,8 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain, if (!master->ats_enabled) continue; - for (i = 0; i < master->num_sids; i++) { - cmd.atc.sid = master->sids[i]; + for (i = 0; i < master->num_streams; i++) { + cmd.atc.sid = master->streams[i].id; arm_smmu_cmdq_batch_add(smmu_domain->smmu, &cmds, &cmd); } } @@ -2846,13 +2881,13 @@ static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master) int i, j; struct arm_smmu_device *smmu = master->smmu; - for (i = 0; i < master->num_sids; ++i) { - u32 sid = master->sids[i]; + for (i = 0; i < master->num_streams; ++i) { + u32 sid = master->streams[i].id; __le64 *step = arm_smmu_get_step_for_sid(smmu, sid); /* Bridged PCI devices may end up with duplicated IDs */ for (j = 0; j < i; j++) - if (master->sids[j] == sid) + if (master->streams[j].id == sid) break; if (j < i) continue; @@ -3105,8 +3140,8 @@ static void arm_smmu_mm_invalidate(struct device *dev, int pasid, void *entry, arm_smmu_atc_inv_to_cmd(pasid, iova, size, &cmd); - for (i = 0; i < master->num_sids; i++) { - 
cmd.atc.sid = master->sids[i]; + for (i = 0; i < master->num_streams; i++) { + cmd.atc.sid = master->streams[i].id; arm_smmu_cmdq_batch_add(master->smmu, &cmds, &cmd); } @@ -3206,11 +3241,99 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid) return sid < limit; } +static int arm_smmu_insert_master(struct arm_smmu_device *smmu, + struct arm_smmu_master *master) +{ + int i; + int ret = 0; + struct arm_smmu_stream *new_stream, *cur_stream; + struct rb_node **new_node, *parent_node = NULL; + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(master->dev); + + master->streams = kcalloc(fwspec->num_ids, + sizeof(struct arm_smmu_stream), GFP_KERNEL); + if (!master->streams) + return -ENOMEM; + master->num_streams = fwspec->num_ids; + + mutex_lock(&smmu->streams_mutex); + for (i = 0; i < fwspec->num_ids && !ret; i++) { + u32 sid = fwspec->ids[i]; + + new_stream = &master->streams[i]; + new_stream->id = sid; + new_stream->master = master; + + /* Check the SIDs are in range of the SMMU and our stream table */ + if (!arm_smmu_sid_in_range(smmu, sid)) { + ret = -ERANGE; + break; + } + + /* Ensure l2 strtab is initialised */ + if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) { + ret = arm_smmu_init_l2_strtab(smmu, sid); + if (ret) + break; + } + + /* Insert into SID tree */ + new_node = &(smmu->streams.rb_node); + while (*new_node) { + cur_stream = rb_entry(*new_node, struct arm_smmu_stream, + node); + parent_node = *new_node; + if (cur_stream->id > new_stream->id) { + new_node = &((*new_node)->rb_left); + } else if (cur_stream->id < new_stream->id) { + new_node = &((*new_node)->rb_right); + } else { + dev_warn(master->dev, + "stream %u already in tree\n", + cur_stream->id); + ret = -EINVAL; + break; + } + } + + if (!ret) { + rb_link_node(&new_stream->node, parent_node, new_node); + rb_insert_color(&new_stream->node, &smmu->streams); + } + } + + if (ret) { + for (; i > 0; i--) + rb_erase(&master->streams[i].node, &smmu->streams); + kfree(master->streams); + } + mutex_unlock(&smmu->streams_mutex); + + return ret; +} + +static void arm_smmu_remove_master(struct arm_smmu_device *smmu, + struct arm_smmu_master *master) +{ + int i; + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(master->dev); + + if (!master->streams) + return; + + mutex_lock(&smmu->streams_mutex); + for (i = 0; i < fwspec->num_ids; i++) + rb_erase(&master->streams[i].node, &smmu->streams); + mutex_unlock(&smmu->streams_mutex); + + kfree(master->streams); +} + static struct iommu_ops arm_smmu_ops; static int arm_smmu_add_device(struct device *dev) { - int i, ret; + int ret; struct arm_smmu_device *smmu; struct arm_smmu_master *master; struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); @@ -3232,26 +3355,11 @@ static int arm_smmu_add_device(struct device *dev) master->dev = dev; master->smmu = smmu; - master->sids = fwspec->ids; - master->num_sids = fwspec->num_ids; fwspec->iommu_priv = master; - /* Check the SIDs are in range of the SMMU and our stream table */ - for (i = 0; i < master->num_sids; i++) { - u32 sid = master->sids[i]; - - if (!arm_smmu_sid_in_range(smmu, sid)) { - ret = -ERANGE; - goto err_free_master; - } - - /* Ensure l2 strtab is initialised */ - if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) { - ret = arm_smmu_init_l2_strtab(smmu, sid); - if (ret) - goto err_free_master; - } - } + ret = arm_smmu_insert_master(smmu, master); + if (ret) + goto err_free_master; master->ssid_bits = min(smmu->ssid_bits, fwspec->num_pasid_bits); @@ -3286,6 +3394,7 @@ static int arm_smmu_add_device(struct 
device *dev) iommu_device_unlink(&smmu->iommu, dev); err_disable_pasid: arm_smmu_disable_pasid(master); + arm_smmu_remove_master(smmu, master); err_free_master: kfree(master); fwspec->iommu_priv = NULL; @@ -3308,6 +3417,7 @@ static void arm_smmu_remove_device(struct device *dev) iommu_group_remove_device(dev); iommu_device_unlink(&smmu->iommu, dev); arm_smmu_disable_pasid(master); + arm_smmu_remove_master(smmu, master); kfree(master); iommu_fwspec_free(dev); } @@ -3751,6 +3861,9 @@ static int arm_smmu_init_structures(struct arm_smmu_device *smmu) { int ret; + mutex_init(&smmu->streams_mutex); + smmu->streams = RB_ROOT; + ret = arm_smmu_init_queues(smmu); if (ret) return ret; From patchwork Mon Feb 24 18:23:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401337 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CEC4392A for ; Mon, 24 Feb 2020 18:25:28 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 9C26220838 for ; Mon, 24 Feb 2020 18:25:28 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="Y+gSmo8l" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9C26220838 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 7076F6B009C; Mon, 24 Feb 2020 13:24:54 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 667EE6B009E; Mon, 24 Feb 2020 13:24:54 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 52F4C6B009F; Mon, 24 Feb 2020 13:24:54 -0500 (EST) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0124.hostedemail.com [216.40.44.124]) by kanga.kvack.org (Postfix) with ESMTP id 38A436B009C for ; Mon, 24 Feb 2020 13:24:54 -0500 (EST) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id D36ED181AC9CC for ; Mon, 24 Feb 2020 18:24:53 +0000 (UTC) X-FDA: 76525846866.02.key66_64e91cc724544 X-Spam-Summary: 2,0,0,43c904803d3f8c0f,d41d8cd98f00b204,jean-philippe@linaro.org,,RULES_HIT:41:355:379:541:800:960:968:973:988:989:1260:1311:1314:1345:1359:1437:1515:1534:1541:1711:1730:1747:1777:1792:2393:2559:2562:3138:3139:3140:3141:3142:3352:3865:3866:3867:3871:3872:4250:4605:5007:6119:6261:6653:6742:9592:10004:11026:11232:11233:11473:11658:11914:12048:12294:12297:12438:12517:12519:12555:12895:13069:13311:13357:13894:14181:14384:14394:14721:21080:21444:21627:30054:30080,0,RBL:209.85.128.65:@linaro.org:.lbl8.mailshell.net-62.2.0.100 66.100.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: key66_64e91cc724544 X-Filterd-Recvd-Size: 4914 Received: from mail-wm1-f65.google.com (mail-wm1-f65.google.com [209.85.128.65]) by imf38.hostedemail.com (Postfix) with ESMTP for ; Mon, 24 Feb 2020 18:24:53 +0000 (UTC) Received: 
by mail-wm1-f65.google.com with SMTP id t23so319262wmi.1 for ; Mon, 24 Feb 2020 10:24:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=aCHZ2/al18nS+2E0e9l6kgT+YcKjbmb+qVD5r1+/lK8=; b=Y+gSmo8lII58Es18d6cmWnjASSfFsl2x7rSop+H1cCniaeVd/mWWI5iBoB1UpuvK70 WiCmaMu86SH+ozlrxPN0f3fd2cO/JAUaxBiQjRgMuCf7Ml/BcTQQ6QqzwqYLHtbBX4fL yyzI0XOaj4GoxBnIawd6Mc5b5sYLmz1hB0hfrlqesUOIHsyuwl6fwe4xrznynuKvPrPm urpHxochnQaDTmvE0T8BMvFjyPHBUT4MCdL4NYgnT6YOTbS0T2aSE78xPtZcUQVCujJe 0MHikP4CvKbRZoTP4zT5mS23Ld+5V1Do2KyHkPn75kcCDLo4LZ5lZQdlY5r/Bb2YxQdE 7Mqg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=aCHZ2/al18nS+2E0e9l6kgT+YcKjbmb+qVD5r1+/lK8=; b=aQjRmosb/zyaDPynCTu5DHPFYd70joBLtgod1LhnfOCVaNxFH/u8vR/V855U9P/D+0 RN1eze7gSha6p2+QQGB+btP3S6tL4wqKfR+UyQrRgW5kEs4ZC/0NnWwzjB7PCoXVjbhM ivPQbkWUYw4IaPGXAG1NgVQDgUZsnq4Ld5X4gIrJ8p5I6YC/80/OselSqLakOVyuRDSu OQ5FLrHCMTnsl7yXgETT/kWWTE03HAXHp3Vpe093exQRt7CvjCOc42t6UkuSmf9WoHeD z1d5Ys6+wegMEUwORYJqrmp+3gk2ZCQSD4PAxwbwxkpKb/6fSGK/eWx0NiY2Wl9mT1F1 tNVg== X-Gm-Message-State: APjAAAX6kwiVLPiXua3YcKaQ1zDye6VUCz0zwVDR/z9g00kIEdSht4TQ sAywnN0wRPrJAlkNTYY9klpWNw== X-Google-Smtp-Source: APXvYqytXg1jldnNeE/KhXMpeqS2ECo/t3rCI0BZEyvldnBIFN5uQN85/iXzJ1Ss+SSBo62MlVDbIA== X-Received: by 2002:a05:600c:2215:: with SMTP id z21mr312611wml.55.1582568692416; Mon, 24 Feb 2020 10:24:52 -0800 (PST) Received: from localhost.localdomain ([2001:171b:c9a8:fbc0:116c:c27a:3e7f:5eaf]) by smtp.gmail.com with ESMTPSA id n3sm304255wmc.27.2020.02.24.10.24.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Feb 2020 10:24:52 -0800 (PST) From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org Subject: [PATCH v4 21/26] iommu/arm-smmu-v3: Ratelimit event dump Date: Mon, 24 Feb 2020 19:23:56 +0100 Message-Id: <20200224182401.353359-22-jean-philippe@linaro.org> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> References: <20200224182401.353359-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When a device or driver misbehaves, it is possible to receive events much faster than we can print them out. Ratelimit the printing of events. Signed-off-by: Jean-Philippe Brucker Tested-by: Aaro Koskinen --- During the SVA tests when the device driver didn't properly stop DMA before unbinding, the event queue thread would almost lock-up the server with a flood of event 0xa. This patch helped recover from the error. 
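As a usage note (an assumption about the generic ratelimit API, not something this patch changes): DEFINE_RATELIMIT_STATE() takes an interval in jiffies and a burst count, and the defaults used in the diff below allow roughly 10 messages every 5 seconds. A stricter local policy could look like:

	/* Sketch: at most one event dump per second */
	static DEFINE_RATELIMIT_STATE(evtq_rs, 1 * HZ, 1);

	if (__ratelimit(&evtq_rs))
		dev_info(smmu->dev, "event 0x%02x received:\n", id);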
--- drivers/iommu/arm-smmu-v3.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c index 28f8583cd47b..6a5987cce03f 100644 --- a/drivers/iommu/arm-smmu-v3.c +++ b/drivers/iommu/arm-smmu-v3.c @@ -2243,17 +2243,20 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev) struct arm_smmu_device *smmu = dev; struct arm_smmu_queue *q = &smmu->evtq.q; struct arm_smmu_ll_queue *llq = &q->llq; + static DEFINE_RATELIMIT_STATE(rs, DEFAULT_RATELIMIT_INTERVAL, + DEFAULT_RATELIMIT_BURST); u64 evt[EVTQ_ENT_DWORDS]; do { while (!queue_remove_raw(q, evt)) { u8 id = FIELD_GET(EVTQ_0_ID, evt[0]); - dev_info(smmu->dev, "event 0x%02x received:\n", id); - for (i = 0; i < ARRAY_SIZE(evt); ++i) - dev_info(smmu->dev, "\t0x%016llx\n", - (unsigned long long)evt[i]); - + if (__ratelimit(&rs)) { + dev_info(smmu->dev, "event 0x%02x received:\n", id); + for (i = 0; i < ARRAY_SIZE(evt); ++i) + dev_info(smmu->dev, "\t0x%016llx\n", + (unsigned long long)evt[i]); + } } /* From patchwork Mon Feb 24 18:23:57 2020 X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401339 From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org, Jean-Philippe Brucker , Rob Herring Subject: [PATCH v4 22/26] dt-bindings: document stall property for IOMMU masters Date: Mon, 24 Feb 2020 19:23:57 +0100 Message-Id:
<20200224182401.353359-23-jean-philippe@linaro.org> In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> From: Jean-Philippe Brucker On ARM systems, some platform devices behind an IOMMU may support stall, which is the ability to recover from page faults. Let the firmware tell us when a device supports stall. Reviewed-by: Rob Herring Signed-off-by: Jean-Philippe Brucker --- .../devicetree/bindings/iommu/iommu.txt | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/Documentation/devicetree/bindings/iommu/iommu.txt b/Documentation/devicetree/bindings/iommu/iommu.txt index 3c36334e4f94..26ba9e530f13 100644 --- a/Documentation/devicetree/bindings/iommu/iommu.txt +++ b/Documentation/devicetree/bindings/iommu/iommu.txt @@ -92,6 +92,24 @@ Optional properties: tagging DMA transactions with an address space identifier. By default, this is 0, which means that the device only has one address space. +- dma-can-stall: When present, the master can wait for a transaction to + complete for an indefinite amount of time. Upon a translation fault, some + IOMMUs, instead of aborting the translation immediately, may first + notify the driver and keep the transaction in flight. This allows the OS + to inspect the fault and, for example, make physical pages resident + before updating the mappings and completing the transaction. Such an IOMMU + accepts a limited number of simultaneous stalled transactions before + having to either put back-pressure on the master or abort new faulting + transactions. + + Firmware has to opt in to stalling, because most buses and masters don't + support it. In particular, it isn't compatible with PCI, where + transactions have to complete within a time limit. More generally, it + won't work in systems and masters that haven't been designed for + stalling. For example, the OS, in order to handle a stalled transaction, + may attempt to retrieve pages from secondary storage in a stalled + domain, leading to a deadlock.
+ Notes: ====== From patchwork Mon Feb 24 18:23:58 2020 X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401341 From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org, Jean-Philippe Brucker Subject: [PATCH v4 23/26] iommu/arm-smmu-v3: Add stall support for platform devices Date: Mon, 24 Feb 2020 19:23:58 +0100 Message-Id: <20200224182401.353359-24-jean-philippe@linaro.org> In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> From: Jean-Philippe Brucker The SMMU provides a Stall model for handling page faults in platform devices. It is similar to PCI PRI, but doesn't require devices to have their own translation cache. Instead, faulting transactions are parked and the OS is given a chance to fix the page tables and retry the transaction. Enable stall for devices that support it (opt-in by firmware). When an event corresponds to a translation error, call the IOMMU fault handler. If the fault is recoverable, it will call us back to terminate or continue the stall.
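To make that round trip concrete, here is a rough sketch of the consumer side, assuming the generic fault reporting interface this series builds on (iommu_register_device_fault_handler() and iommu_page_response()). In the series itself, recoverable faults are normally routed through the IOPF queue and the mm rather than a per-driver handler, and make_page_resident() below is a hypothetical stand-in for that fix-up work.

#include <linux/device.h>
#include <linux/iommu.h>

/* Placeholder for the real fix-up (e.g. faulting the page into the mm). */
static int make_page_resident(struct device *dev, u64 addr)
{
	return 0; /* pretend it always succeeds */
}

/*
 * Hypothetical handler: fix up the faulting address, then tell the IOMMU
 * to retry (resume) or terminate the stalled transaction.
 */
static int example_fault_handler(struct iommu_fault *fault, void *data)
{
	struct device *dev = data;
	struct iommu_page_response resp = {
		.pasid = fault->prm.pasid,
		.grpid = fault->prm.grpid,
		.code  = IOMMU_PAGE_RESP_SUCCESS,
	};

	if (fault->type != IOMMU_FAULT_PAGE_REQ)
		return -EOPNOTSUPP;

	if (fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID)
		resp.flags |= IOMMU_PAGE_RESP_PASID_VALID;

	if (make_page_resident(dev, fault->prm.addr))
		resp.code = IOMMU_PAGE_RESP_INVALID;

	/* Completes the stall: retry or terminate, depending on resp.code. */
	return iommu_page_response(dev, &resp);
}

/* Registered once, for example at probe time:
 * iommu_register_device_fault_handler(dev, example_fault_handler, dev);
 */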
Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/arm-smmu-v3.c | 271 ++++++++++++++++++++++++++++++++++-- drivers/iommu/of_iommu.c | 5 +- include/linux/iommu.h | 2 + 3 files changed, 269 insertions(+), 9 deletions(-) diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c index 6a5987cce03f..da5dda5ba26a 100644 --- a/drivers/iommu/arm-smmu-v3.c +++ b/drivers/iommu/arm-smmu-v3.c @@ -374,6 +374,13 @@ #define CMDQ_PRI_1_GRPID GENMASK_ULL(8, 0) #define CMDQ_PRI_1_RESP GENMASK_ULL(13, 12) +#define CMDQ_RESUME_0_SID GENMASK_ULL(63, 32) +#define CMDQ_RESUME_0_RESP_TERM 0UL +#define CMDQ_RESUME_0_RESP_RETRY 1UL +#define CMDQ_RESUME_0_RESP_ABORT 2UL +#define CMDQ_RESUME_0_RESP GENMASK_ULL(13, 12) +#define CMDQ_RESUME_1_STAG GENMASK_ULL(15, 0) + #define CMDQ_SYNC_0_CS GENMASK_ULL(13, 12) #define CMDQ_SYNC_0_CS_NONE 0 #define CMDQ_SYNC_0_CS_IRQ 1 @@ -390,6 +397,25 @@ #define EVTQ_0_ID GENMASK_ULL(7, 0) +#define EVT_ID_TRANSLATION_FAULT 0x10 +#define EVT_ID_ADDR_SIZE_FAULT 0x11 +#define EVT_ID_ACCESS_FAULT 0x12 +#define EVT_ID_PERMISSION_FAULT 0x13 + +#define EVTQ_0_SSV (1UL << 11) +#define EVTQ_0_SSID GENMASK_ULL(31, 12) +#define EVTQ_0_SID GENMASK_ULL(63, 32) +#define EVTQ_1_STAG GENMASK_ULL(15, 0) +#define EVTQ_1_STALL (1UL << 31) +#define EVTQ_1_PRIV (1UL << 33) +#define EVTQ_1_EXEC (1UL << 34) +#define EVTQ_1_READ (1UL << 35) +#define EVTQ_1_S2 (1UL << 39) +#define EVTQ_1_CLASS GENMASK_ULL(41, 40) +#define EVTQ_1_TT_READ (1UL << 44) +#define EVTQ_2_ADDR GENMASK_ULL(63, 0) +#define EVTQ_3_IPA GENMASK_ULL(51, 12) + /* PRI queue */ #define PRIQ_ENT_SZ_SHIFT 4 #define PRIQ_ENT_DWORDS ((1 << PRIQ_ENT_SZ_SHIFT) >> 3) @@ -510,6 +536,13 @@ struct arm_smmu_cmdq_ent { enum pri_resp resp; } pri; + #define CMDQ_OP_RESUME 0x44 + struct { + u32 sid; + u16 stag; + u8 resp; + } resume; + #define CMDQ_OP_CMD_SYNC 0x46 struct { u64 msiaddr; @@ -545,6 +578,10 @@ struct arm_smmu_queue { u32 __iomem *prod_reg; u32 __iomem *cons_reg; + + /* Event and PRI */ + u64 batch; + wait_queue_head_t wq; }; struct arm_smmu_queue_poll { @@ -568,6 +605,7 @@ struct arm_smmu_cmdq_batch { struct arm_smmu_evtq { struct arm_smmu_queue q; + struct iopf_queue *iopf; u32 max_stalls; }; @@ -704,6 +742,7 @@ struct arm_smmu_master { struct arm_smmu_stream *streams; unsigned int num_streams; bool ats_enabled; + bool stall_enabled; unsigned int ssid_bits; }; @@ -721,6 +760,7 @@ struct arm_smmu_domain { struct io_pgtable_ops *pgtbl_ops; bool non_strict; + bool stall_enabled; atomic_t nr_ats_masters; enum arm_smmu_domain_stage stage; @@ -985,6 +1025,11 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent) } cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp); break; + case CMDQ_OP_RESUME: + cmd[0] |= FIELD_PREP(CMDQ_RESUME_0_SID, ent->resume.sid); + cmd[0] |= FIELD_PREP(CMDQ_RESUME_0_RESP, ent->resume.resp); + cmd[1] |= FIELD_PREP(CMDQ_RESUME_1_STAG, ent->resume.stag); + break; case CMDQ_OP_CMD_SYNC: if (ent->sync.msiaddr) { cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ); @@ -1551,6 +1596,45 @@ static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu, return arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, true); } +static int arm_smmu_page_response(struct device *dev, + struct iommu_fault_event *unused, + struct iommu_page_response *resp) +{ + struct arm_smmu_cmdq_ent cmd = {0}; + struct arm_smmu_master *master = dev_iommu_fwspec_get(dev)->iommu_priv; + int sid = master->streams[0].id; + + if (master->stall_enabled) { + cmd.opcode = CMDQ_OP_RESUME; + cmd.resume.sid = 
sid; + cmd.resume.stag = resp->grpid; + switch (resp->code) { + case IOMMU_PAGE_RESP_INVALID: + case IOMMU_PAGE_RESP_FAILURE: + cmd.resume.resp = CMDQ_RESUME_0_RESP_ABORT; + break; + case IOMMU_PAGE_RESP_SUCCESS: + cmd.resume.resp = CMDQ_RESUME_0_RESP_RETRY; + break; + default: + return -EINVAL; + } + } else { + /* TODO: insert PRI response here */ + return -ENODEV; + } + + arm_smmu_cmdq_issue_cmd(master->smmu, &cmd); + /* + * Don't send a SYNC, it doesn't do anything for RESUME or PRI_RESP. + * RESUME consumption guarantees that the stalled transaction will be + * terminated... at some point in the future. PRI_RESP is fire and + * forget. + */ + + return 0; +} + /* Context descriptor manipulation functions */ static void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid) { @@ -1709,8 +1793,7 @@ static int __arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid) | CTXDESC_CD_0_V; - /* STALL_MODEL==0b10 && CD.S==0 is ILLEGAL */ - if (smmu->features & ARM_SMMU_FEAT_STALL_FORCE) + if (smmu_domain->stall_enabled) val |= CTXDESC_CD_0_S; } @@ -2133,7 +2216,7 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid, FIELD_PREP(STRTAB_STE_1_STRW, strw)); if (smmu->features & ARM_SMMU_FEAT_STALLS && - !(smmu->features & ARM_SMMU_FEAT_STALL_FORCE)) + !master->stall_enabled) dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD); val |= (s1_cfg->cdcfg.cdtab_dma & STRTAB_STE_0_S1CTXPTR_MASK) | @@ -2210,7 +2293,6 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid) return 0; } -__maybe_unused static struct arm_smmu_master * arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid) { @@ -2237,21 +2319,119 @@ arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid) } /* IRQ and event handlers */ +static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt) +{ + int ret; + u32 perm = 0; + struct arm_smmu_master *master; + bool ssid_valid = evt[0] & EVTQ_0_SSV; + u8 type = FIELD_GET(EVTQ_0_ID, evt[0]); + u32 sid = FIELD_GET(EVTQ_0_SID, evt[0]); + struct iommu_fault_event fault_evt = { }; + struct iommu_fault *flt = &fault_evt.fault; + + /* Stage-2 is always pinned at the moment */ + if (evt[1] & EVTQ_1_S2) + return -EFAULT; + + master = arm_smmu_find_master(smmu, sid); + if (!master) + return -EINVAL; + + if (evt[1] & EVTQ_1_READ) + perm |= IOMMU_FAULT_PERM_READ; + else + perm |= IOMMU_FAULT_PERM_WRITE; + + if (evt[1] & EVTQ_1_EXEC) + perm |= IOMMU_FAULT_PERM_EXEC; + + if (evt[1] & EVTQ_1_PRIV) + perm |= IOMMU_FAULT_PERM_PRIV; + + if (evt[1] & EVTQ_1_STALL) { + flt->type = IOMMU_FAULT_PAGE_REQ; + flt->prm = (struct iommu_fault_page_request) { + .flags = IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE, + .pasid = FIELD_GET(EVTQ_0_SSID, evt[0]), + .grpid = FIELD_GET(EVTQ_1_STAG, evt[1]), + .perm = perm, + .addr = FIELD_GET(EVTQ_2_ADDR, evt[2]), + }; + + if (ssid_valid) + flt->prm.flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID; + } else { + flt->type = IOMMU_FAULT_DMA_UNRECOV; + flt->event = (struct iommu_fault_unrecoverable) { + .flags = IOMMU_FAULT_UNRECOV_ADDR_VALID | + IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID, + .pasid = FIELD_GET(EVTQ_0_SSID, evt[0]), + .perm = perm, + .addr = FIELD_GET(EVTQ_2_ADDR, evt[2]), + .fetch_addr = FIELD_GET(EVTQ_3_IPA, evt[3]), + }; + + if (ssid_valid) + flt->event.flags |= IOMMU_FAULT_UNRECOV_PASID_VALID; + + switch (type) { + case EVT_ID_TRANSLATION_FAULT: + case EVT_ID_ADDR_SIZE_FAULT: + case EVT_ID_ACCESS_FAULT: + flt->event.reason = IOMMU_FAULT_REASON_PTE_FETCH; + break; + case 
EVT_ID_PERMISSION_FAULT: + flt->event.reason = IOMMU_FAULT_REASON_PERMISSION; + break; + default: + /* TODO: report other unrecoverable faults. */ + return -EFAULT; + } + } + + ret = iommu_report_device_fault(master->dev, &fault_evt); + if (ret && flt->type == IOMMU_FAULT_PAGE_REQ) { + /* Nobody cared, abort the access */ + struct iommu_page_response resp = { + .pasid = flt->prm.pasid, + .grpid = flt->prm.grpid, + .code = IOMMU_PAGE_RESP_FAILURE, + }; + arm_smmu_page_response(master->dev, NULL, &resp); + } + + return ret; +} + static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev) { - int i; + int i, ret; + int num_handled = 0; struct arm_smmu_device *smmu = dev; struct arm_smmu_queue *q = &smmu->evtq.q; struct arm_smmu_ll_queue *llq = &q->llq; + size_t queue_size = 1 << llq->max_n_shift; static DEFINE_RATELIMIT_STATE(rs, DEFAULT_RATELIMIT_INTERVAL, DEFAULT_RATELIMIT_BURST); u64 evt[EVTQ_ENT_DWORDS]; + spin_lock(&q->wq.lock); do { while (!queue_remove_raw(q, evt)) { u8 id = FIELD_GET(EVTQ_0_ID, evt[0]); - if (__ratelimit(&rs)) { + spin_unlock(&q->wq.lock); + ret = arm_smmu_handle_evt(smmu, evt); + spin_lock(&q->wq.lock); + + if (++num_handled == queue_size) { + q->batch++; + wake_up_all_locked(&q->wq); + num_handled = 0; + } + + if (ret && __ratelimit(&rs)) { dev_info(smmu->dev, "event 0x%02x received:\n", id); for (i = 0; i < ARRAY_SIZE(evt); ++i) dev_info(smmu->dev, "\t0x%016llx\n", @@ -2270,6 +2450,11 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev) /* Sync our overflow flag, as we believe we're up to speed */ llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) | Q_IDX(llq, llq->cons); + queue_sync_cons_out(q); + + wake_up_all_locked(&q->wq); + spin_unlock(&q->wq.lock); + return IRQ_HANDLED; } @@ -2333,6 +2518,33 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev) return IRQ_HANDLED; } +/* + * arm_smmu_flush_evtq - wait until all events currently in the queue have been + * consumed. + * + * Wait until the evtq thread finished a batch, or until the queue is empty. + * Note that we don't handle overflows on q->batch. If it occurs, just wait for + * the queue to be empty. 
+ */ +static int arm_smmu_flush_evtq(void *cookie, struct device *dev, int pasid) +{ + int ret; + u64 batch; + struct arm_smmu_device *smmu = cookie; + struct arm_smmu_queue *q = &smmu->evtq.q; + + spin_lock(&q->wq.lock); + if (queue_sync_prod_in(q) == -EOVERFLOW) + dev_err(smmu->dev, "evtq overflow detected -- requests lost\n"); + + batch = q->batch; + ret = wait_event_interruptible_locked(q->wq, queue_empty(&q->llq) || + q->batch >= batch + 2); + spin_unlock(&q->wq.lock); + + return ret; +} + static int arm_smmu_device_disable(struct arm_smmu_device *smmu); static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev) @@ -2724,6 +2936,9 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain, cfg->s1cdmax = master->ssid_bits; + if (master->stall_enabled) + smmu_domain->stall_enabled = true; + ret = arm_smmu_alloc_cd_tables(smmu_domain); if (ret) goto out_free_asid; @@ -3056,6 +3271,10 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) smmu_domain->s1_cfg.s1cdmax, master->ssid_bits); ret = -EINVAL; goto out_unlock; + } else if (smmu_domain->stall_enabled && !master->stall_enabled) { + dev_err(dev, "cannot attach to stall-enabled domain\n"); + ret = -EINVAL; + goto out_unlock; } master->domain = smmu_domain; @@ -3380,6 +3599,10 @@ static int arm_smmu_add_device(struct device *dev) master->ssid_bits = min_t(u8, master->ssid_bits, CTXDESC_LINEAR_CDMAX); + if ((smmu->features & ARM_SMMU_FEAT_STALLS && fwspec->can_stall) || + smmu->features & ARM_SMMU_FEAT_STALL_FORCE) + master->stall_enabled = true; + ret = iommu_device_link(&smmu->iommu, dev); if (ret) goto err_disable_pasid; @@ -3415,6 +3638,7 @@ static void arm_smmu_remove_device(struct device *dev) master = fwspec->iommu_priv; smmu = master->smmu; + iopf_queue_remove_device(smmu->evtq.iopf, dev); iommu_sva_disable(dev); arm_smmu_detach_dev(master); iommu_group_remove_device(dev); @@ -3538,7 +3762,7 @@ static void arm_smmu_get_resv_regions(struct device *dev, static bool arm_smmu_iopf_supported(struct arm_smmu_master *master) { - return false; + return master->stall_enabled; } static bool arm_smmu_dev_has_feature(struct device *dev, @@ -3579,6 +3803,7 @@ static bool arm_smmu_dev_feature_enabled(struct device *dev, static int arm_smmu_dev_enable_sva(struct device *dev) { + int ret; struct arm_smmu_master *master = dev_to_master(dev); struct iommu_sva_param param = { .min_pasid = 1, @@ -3586,7 +3811,21 @@ static int arm_smmu_dev_enable_sva(struct device *dev) }; param.max_pasid = min(param.max_pasid, (1U << master->ssid_bits) - 1); - return iommu_sva_enable(dev, ¶m); + + ret = iommu_sva_enable(dev, ¶m); + if (ret) + return ret; + + if (master->stall_enabled) { + ret = iopf_queue_add_device(master->smmu->evtq.iopf, dev); + if (ret) + goto err_disable_sva; + } + return 0; + +err_disable_sva: + iommu_sva_disable(dev); + return ret; } static int arm_smmu_dev_enable_feature(struct device *dev, @@ -3609,11 +3848,14 @@ static int arm_smmu_dev_enable_feature(struct device *dev, static int arm_smmu_dev_disable_feature(struct device *dev, enum iommu_dev_features feat) { + struct arm_smmu_master *master = dev_to_master(dev); + if (!arm_smmu_dev_feature_enabled(dev, feat)) return -EINVAL; switch (feat) { case IOMMU_DEV_FEAT_SVA: + iopf_queue_remove_device(master->smmu->evtq.iopf, dev); return iommu_sva_disable(dev); default: return -EINVAL; @@ -3645,6 +3887,7 @@ static struct iommu_ops arm_smmu_ops = { .sva_bind = arm_smmu_sva_bind, .sva_unbind = iommu_sva_unbind_generic, .sva_get_pasid = 
iommu_sva_get_pasid_generic, + .page_response = arm_smmu_page_response, .pgsize_bitmap = -1UL, /* Restricted during device attach */ }; @@ -3688,6 +3931,10 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu, q->q_base |= FIELD_PREP(Q_BASE_LOG2SIZE, q->llq.max_n_shift); q->llq.prod = q->llq.cons = 0; + + init_waitqueue_head(&q->wq); + q->batch = 0; + return 0; } @@ -3741,6 +3988,13 @@ static int arm_smmu_init_queues(struct arm_smmu_device *smmu) if (ret) return ret; + if (smmu->features & ARM_SMMU_FEAT_STALLS) { + smmu->evtq.iopf = iopf_queue_alloc(dev_name(smmu->dev), + arm_smmu_flush_evtq, smmu); + if (!smmu->evtq.iopf) + return -ENOMEM; + } + /* priq */ if (!(smmu->features & ARM_SMMU_FEAT_PRI)) return 0; @@ -4716,6 +4970,7 @@ static int arm_smmu_device_remove(struct platform_device *pdev) iommu_device_unregister(&smmu->iommu); iommu_device_sysfs_remove(&smmu->iommu); arm_smmu_device_disable(smmu); + iopf_queue_free(smmu->evtq.iopf); return 0; } diff --git a/drivers/iommu/of_iommu.c b/drivers/iommu/of_iommu.c index 20738aacac89..dd7017750954 100644 --- a/drivers/iommu/of_iommu.c +++ b/drivers/iommu/of_iommu.c @@ -205,9 +205,12 @@ const struct iommu_ops *of_iommu_configure(struct device *dev, } fwspec = dev_iommu_fwspec_get(dev); - if (!err && fwspec) + if (!err && fwspec) { of_property_read_u32(master_np, "pasid-num-bits", &fwspec->num_pasid_bits); + fwspec->can_stall = of_property_read_bool(master_np, + "dma-can-stall"); + } } /* diff --git a/include/linux/iommu.h b/include/linux/iommu.h index e52a8731e7a9..b39dae6608c5 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -595,6 +595,7 @@ struct iommu_group *fsl_mc_device_group(struct device *dev); * @iommu_fwnode: firmware handle for this device's IOMMU * @iommu_priv: IOMMU driver private data for this device * @num_pasid_bits: number of PASID bits supported by this device + * @can_stall: the device is allowed to stall * @num_ids: number of associated device IDs * @ids: IDs which this device may present to the IOMMU */ @@ -603,6 +604,7 @@ struct iommu_fwspec { struct fwnode_handle *iommu_fwnode; void *iommu_priv; u32 num_pasid_bits; + bool can_stall; unsigned int num_ids; u32 ids[1]; }; From patchwork Mon Feb 24 18:23:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401343 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9131E92A for ; Mon, 24 Feb 2020 18:25:36 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 5EC3D21744 for ; Mon, 24 Feb 2020 18:25:36 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=linaro.org header.i=@linaro.org header.b="OnFOZtrZ" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5EC3D21744 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=linaro.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id B4EA86B00A2; Mon, 24 Feb 2020 13:24:58 -0500 (EST) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id AAEBD6B00A4; Mon, 24 Feb 2020 13:24:58 -0500 (EST) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by 
kanga.kvack.org (Postfix, from userid 63042) id 954566B00A5; Mon, 24 Feb 2020 13:24:58 -0500 (EST) From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org, Bjorn Helgaas Subject: [PATCH v4 24/26] PCI/ATS: Add PRI stubs Date: Mon, 24 Feb 2020 19:23:59 +0100 Message-Id: <20200224182401.353359-25-jean-philippe@linaro.org> In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> The SMMUv3 driver, which can be built without CONFIG_PCI, will soon gain support for PRI. Partially revert commit c6e9aefbf9db ("PCI/ATS: Remove unused PRI and PASID stubs") to re-introduce the PRI stubs, and avoid adding more #ifdefs to the SMMU driver. Cc: Bjorn Helgaas Signed-off-by: Jean-Philippe Brucker Acked-by: Bjorn Helgaas --- include/linux/pci-ats.h | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/include/linux/pci-ats.h b/include/linux/pci-ats.h index f75c307f346d..e9e266df9b37 100644 --- a/include/linux/pci-ats.h +++ b/include/linux/pci-ats.h @@ -28,6 +28,14 @@ int pci_enable_pri(struct pci_dev *pdev, u32 reqs); void pci_disable_pri(struct pci_dev *pdev); int pci_reset_pri(struct pci_dev *pdev); int pci_prg_resp_pasid_required(struct pci_dev *pdev); +#else /* CONFIG_PCI_PRI */ +static inline int pci_enable_pri(struct pci_dev *pdev, u32 reqs) +{ return -ENODEV; } +static inline void pci_disable_pri(struct pci_dev *pdev) { } +static inline int pci_reset_pri(struct pci_dev *pdev) +{ return -ENODEV; } +static inline int pci_prg_resp_pasid_required(struct pci_dev *pdev) +{ return 0; } #endif /* CONFIG_PCI_PRI */ #ifdef CONFIG_PCI_PASID From patchwork Mon Feb 24 18:24:00 2020 X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401347 From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org, Bjorn Helgaas Subject: [PATCH v4 25/26] PCI/ATS: Export symbols of PRI functions Date: Mon, 24 Feb 2020 19:24:00 +0100 Message-Id: <20200224182401.353359-26-jean-philippe@linaro.org> In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> The SMMUv3 driver uses pci_{enable,disable}_pri() and related functions. Export those functions to allow the driver to be built as a module. Cc: Bjorn Helgaas Signed-off-by: Jean-Philippe Brucker Acked-by: Bjorn Helgaas --- drivers/pci/ats.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/pci/ats.c b/drivers/pci/ats.c index bbfd0d42b8b9..fc8fc6fc8bd5 100644 --- a/drivers/pci/ats.c +++ b/drivers/pci/ats.c @@ -197,6 +197,7 @@ void pci_pri_init(struct pci_dev *pdev) if (status & PCI_PRI_STATUS_PASID) pdev->pasid_required = 1; } +EXPORT_SYMBOL_GPL(pci_pri_init); /** * pci_enable_pri - Enable PRI capability @@ -243,6 +244,7 @@ int pci_enable_pri(struct pci_dev *pdev, u32 reqs) return 0; } +EXPORT_SYMBOL_GPL(pci_enable_pri); /** * pci_disable_pri - Disable PRI capability @@ -322,6 +324,7 @@ int pci_reset_pri(struct pci_dev *pdev) return 0; } +EXPORT_SYMBOL_GPL(pci_reset_pri); /** * pci_prg_resp_pasid_required - Return PRG Response PASID Required bit @@ -337,6 +340,7 @@ int pci_prg_resp_pasid_required(struct pci_dev *pdev) return pdev->pasid_required; } +EXPORT_SYMBOL_GPL(pci_prg_resp_pasid_required); #endif /* CONFIG_PCI_PRI */ #ifdef CONFIG_PCI_PASID From patchwork Mon Feb 24 18:24:01 2020 X-Patchwork-Submitter: Jean-Philippe Brucker X-Patchwork-Id: 11401349 From: Jean-Philippe Brucker To: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org, linux-mm@kvack.org Cc: joro@8bytes.org, robh+dt@kernel.org, mark.rutland@arm.com, catalin.marinas@arm.com, will@kernel.org, robin.murphy@arm.com,
kevin.tian@intel.com, baolu.lu@linux.intel.com, Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com, christian.koenig@amd.com, yi.l.liu@intel.com, zhangfei.gao@linaro.org, Jean-Philippe Brucker Subject: [PATCH v4 26/26] iommu/arm-smmu-v3: Add support for PRI Date: Mon, 24 Feb 2020 19:24:01 +0100 Message-Id: <20200224182401.353359-27-jean-philippe@linaro.org> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200224182401.353359-1-jean-philippe@linaro.org> References: <20200224182401.353359-1-jean-philippe@linaro.org> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Jean-Philippe Brucker For PCI devices that support it, enable the PRI capability and handle PRI Page Requests with the generic fault handler. It is enabled on demand by iommu_sva_device_init(). Signed-off-by: Jean-Philippe Brucker --- drivers/iommu/arm-smmu-v3.c | 278 +++++++++++++++++++++++++++++------- 1 file changed, 228 insertions(+), 50 deletions(-) diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c index da5dda5ba26a..f9732e397b2d 100644 --- a/drivers/iommu/arm-smmu-v3.c +++ b/drivers/iommu/arm-smmu-v3.c @@ -248,6 +248,7 @@ #define STRTAB_STE_1_S1COR GENMASK_ULL(5, 4) #define STRTAB_STE_1_S1CSH GENMASK_ULL(7, 6) +#define STRTAB_STE_1_PPAR (1UL << 18) #define STRTAB_STE_1_S1STALLD (1UL << 27) #define STRTAB_STE_1_EATS GENMASK_ULL(29, 28) @@ -373,6 +374,9 @@ #define CMDQ_PRI_0_SID GENMASK_ULL(63, 32) #define CMDQ_PRI_1_GRPID GENMASK_ULL(8, 0) #define CMDQ_PRI_1_RESP GENMASK_ULL(13, 12) +#define CMDQ_PRI_1_RESP_FAILURE 0UL +#define CMDQ_PRI_1_RESP_INVALID 1UL +#define CMDQ_PRI_1_RESP_SUCCESS 2UL #define CMDQ_RESUME_0_SID GENMASK_ULL(63, 32) #define CMDQ_RESUME_0_RESP_TERM 0UL @@ -445,12 +449,6 @@ module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO); MODULE_PARM_DESC(disable_bypass, "Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU."); -enum pri_resp { - PRI_RESP_DENY = 0, - PRI_RESP_FAIL = 1, - PRI_RESP_SUCC = 2, -}; - enum arm_smmu_msi_index { EVTQ_MSI_INDEX, GERROR_MSI_INDEX, @@ -533,7 +531,7 @@ struct arm_smmu_cmdq_ent { u32 sid; u32 ssid; u16 grpid; - enum pri_resp resp; + u8 resp; } pri; #define CMDQ_OP_RESUME 0x44 @@ -611,6 +609,7 @@ struct arm_smmu_evtq { struct arm_smmu_priq { struct arm_smmu_queue q; + struct iopf_queue *iopf; }; /* High-level stream table and context descriptor structures */ @@ -743,6 +742,8 @@ struct arm_smmu_master { unsigned int num_streams; bool ats_enabled; bool stall_enabled; + bool pri_supported; + bool prg_resp_needs_ssid; unsigned int ssid_bits; }; @@ -1015,14 +1016,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent) cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SSID, ent->pri.ssid); cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SID, ent->pri.sid); cmd[1] |= FIELD_PREP(CMDQ_PRI_1_GRPID, ent->pri.grpid); - switch (ent->pri.resp) { - case PRI_RESP_DENY: - case PRI_RESP_FAIL: - case PRI_RESP_SUCC: - break; - default: - return -EINVAL; - } cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp); break; case CMDQ_OP_RESUME: @@ -1602,6 +1595,7 @@ static int arm_smmu_page_response(struct device *dev, { struct arm_smmu_cmdq_ent cmd = {0}; struct arm_smmu_master *master = dev_iommu_fwspec_get(dev)->iommu_priv; + bool pasid_valid = resp->flags & IOMMU_PAGE_RESP_PASID_VALID; int sid = 
master->streams[0].id; if (master->stall_enabled) { @@ -1619,8 +1613,27 @@ static int arm_smmu_page_response(struct device *dev, default: return -EINVAL; } + } else if (master->pri_supported) { + cmd.opcode = CMDQ_OP_PRI_RESP; + cmd.substream_valid = pasid_valid && + master->prg_resp_needs_ssid; + cmd.pri.sid = sid; + cmd.pri.ssid = resp->pasid; + cmd.pri.grpid = resp->grpid; + switch (resp->code) { + case IOMMU_PAGE_RESP_FAILURE: + cmd.pri.resp = CMDQ_PRI_1_RESP_FAILURE; + break; + case IOMMU_PAGE_RESP_INVALID: + cmd.pri.resp = CMDQ_PRI_1_RESP_INVALID; + break; + case IOMMU_PAGE_RESP_SUCCESS: + cmd.pri.resp = CMDQ_PRI_1_RESP_SUCCESS; + break; + default: + return -EINVAL; + } } else { - /* TODO: insert PRI response here */ return -ENODEV; } @@ -2215,6 +2228,9 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid, FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_ISH) | FIELD_PREP(STRTAB_STE_1_STRW, strw)); + if (master->prg_resp_needs_ssid) + dst[1] |= STRTAB_STE_1_PPAR; + if (smmu->features & ARM_SMMU_FEAT_STALLS && !master->stall_enabled) dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD); @@ -2460,61 +2476,110 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev) static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt) { - u32 sid, ssid; - u16 grpid; - bool ssv, last; - - sid = FIELD_GET(PRIQ_0_SID, evt[0]); - ssv = FIELD_GET(PRIQ_0_SSID_V, evt[0]); - ssid = ssv ? FIELD_GET(PRIQ_0_SSID, evt[0]) : 0; - last = FIELD_GET(PRIQ_0_PRG_LAST, evt[0]); - grpid = FIELD_GET(PRIQ_1_PRG_IDX, evt[1]); - - dev_info(smmu->dev, "unexpected PRI request received:\n"); - dev_info(smmu->dev, - "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n", - sid, ssid, grpid, last ? "L" : "", - evt[0] & PRIQ_0_PERM_PRIV ? "" : "un", - evt[0] & PRIQ_0_PERM_READ ? "R" : "", - evt[0] & PRIQ_0_PERM_WRITE ? "W" : "", - evt[0] & PRIQ_0_PERM_EXEC ? "X" : "", - evt[1] & PRIQ_1_ADDR_MASK); - - if (last) { - struct arm_smmu_cmdq_ent cmd = { - .opcode = CMDQ_OP_PRI_RESP, - .substream_valid = ssv, - .pri = { - .sid = sid, - .ssid = ssid, - .grpid = grpid, - .resp = PRI_RESP_DENY, - }, + u32 sid = FIELD_PREP(PRIQ_0_SID, evt[0]); + + bool pasid_valid, last; + struct arm_smmu_master *master; + struct iommu_fault_event fault_evt = { + .fault.type = IOMMU_FAULT_PAGE_REQ, + .fault.prm = { + .pasid = FIELD_GET(PRIQ_0_SSID, evt[0]), + .grpid = FIELD_GET(PRIQ_1_PRG_IDX, evt[1]), + .addr = evt[1] & PRIQ_1_ADDR_MASK, + }, + }; + struct iommu_fault_page_request *pr = &fault_evt.fault.prm; + + pasid_valid = evt[0] & PRIQ_0_SSID_V; + last = evt[0] & PRIQ_0_PRG_LAST; + + /* Discard Stop PASID marker, it isn't used */ + if (!(evt[0] & (PRIQ_0_PERM_READ | PRIQ_0_PERM_WRITE)) && last) + return; + + if (last) + pr->flags |= IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE; + if (pasid_valid) + pr->flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID; + if (evt[0] & PRIQ_0_PERM_READ) + pr->perm |= IOMMU_FAULT_PERM_READ; + if (evt[0] & PRIQ_0_PERM_WRITE) + pr->perm |= IOMMU_FAULT_PERM_WRITE; + if (evt[0] & PRIQ_0_PERM_EXEC) + pr->perm |= IOMMU_FAULT_PERM_EXEC; + if (evt[0] & PRIQ_0_PERM_PRIV) + pr->perm |= IOMMU_FAULT_PERM_PRIV; + + master = arm_smmu_find_master(smmu, sid); + if (WARN_ON(!master)) + return; + + if (iommu_report_device_fault(master->dev, &fault_evt)) { + /* + * No handler registered, so subsequent faults won't produce + * better results. Try to disable PRI. + */ + struct iommu_page_response resp = { + .flags = pasid_valid ? 
+ IOMMU_PAGE_RESP_PASID_VALID : 0, + .pasid = pr->pasid, + .grpid = pr->grpid, + .code = IOMMU_PAGE_RESP_FAILURE, }; - arm_smmu_cmdq_issue_cmd(smmu, &cmd); + dev_warn(master->dev, + "PPR 0x%x:0x%llx 0x%x: nobody cared, disabling PRI\n", + pasid_valid ? pr->pasid : 0, pr->addr, pr->perm); + if (last) + arm_smmu_page_response(master->dev, NULL, &resp); } } static irqreturn_t arm_smmu_priq_thread(int irq, void *dev) { + int num_handled = 0; + bool overflow = false; struct arm_smmu_device *smmu = dev; struct arm_smmu_queue *q = &smmu->priq.q; struct arm_smmu_ll_queue *llq = &q->llq; + size_t queue_size = 1 << llq->max_n_shift; u64 evt[PRIQ_ENT_DWORDS]; + spin_lock(&q->wq.lock); do { - while (!queue_remove_raw(q, evt)) + while (!queue_remove_raw(q, evt)) { + spin_unlock(&q->wq.lock); arm_smmu_handle_ppr(smmu, evt); + spin_lock(&q->wq.lock); + if (++num_handled == queue_size) { + q->batch++; + wake_up_all_locked(&q->wq); + num_handled = 0; + } + } - if (queue_sync_prod_in(q) == -EOVERFLOW) + if (queue_sync_prod_in(q) == -EOVERFLOW) { dev_err(smmu->dev, "PRIQ overflow detected -- requests lost\n"); + overflow = true; + } } while (!queue_empty(llq)); /* Sync our overflow flag, as we believe we're up to speed */ llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) | Q_IDX(llq, llq->cons); queue_sync_cons_out(q); + + wake_up_all_locked(&q->wq); + spin_unlock(&q->wq.lock); + + /* + * On overflow, the SMMU might have discarded the last PPR in a group. + * There is no way to know more about it, so we have to discard all + * partial faults already queued. + */ + if (overflow) + iopf_queue_discard_partial(smmu->priq.iopf); + return IRQ_HANDLED; } @@ -2545,6 +2610,30 @@ static int arm_smmu_flush_evtq(void *cookie, struct device *dev, int pasid) return ret; } +static int arm_smmu_flush_priq(void *cookie, struct device *dev, int pasid) +{ + int ret; + u64 batch; + bool overflow = false; + struct arm_smmu_device *smmu = cookie; + struct arm_smmu_queue *q = &smmu->priq.q; + + spin_lock(&q->wq.lock); + if (queue_sync_prod_in(q) == -EOVERFLOW) { + dev_err(smmu->dev, "priq overflow detected -- requests lost\n"); + overflow = true; + } + + batch = q->batch; + ret = wait_event_interruptible_locked(q->wq, queue_empty(&q->llq) || + q->batch >= batch + 2); + spin_unlock(&q->wq.lock); + + if (overflow) + iopf_queue_discard_partial(smmu->priq.iopf); + return ret; +} + static int arm_smmu_device_disable(struct arm_smmu_device *smmu); static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev) @@ -3208,6 +3297,75 @@ static void arm_smmu_disable_pasid(struct arm_smmu_master *master) pci_disable_pasid(pdev); } +static int arm_smmu_init_pri(struct arm_smmu_master *master) +{ + int pos; + struct pci_dev *pdev; + + if (!dev_is_pci(master->dev)) + return -EINVAL; + + if (!(master->smmu->features & ARM_SMMU_FEAT_PRI)) + return 0; + + pdev = to_pci_dev(master->dev); + pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI); + if (!pos) + return 0; + + /* If the device supports PASID and PRI, set STE.PPAR */ + if (master->ssid_bits) + master->prg_resp_needs_ssid = pci_prg_resp_pasid_required(pdev); + + master->pri_supported = true; + return 0; +} + +static int arm_smmu_enable_pri(struct arm_smmu_master *master) +{ + int ret; + struct pci_dev *pdev; + /* + * TODO: find a good inflight PPR number. We should divide the PRI queue + * by the number of PRI-capable devices, but it's impossible to know + * about future (probed late or hotplugged) devices. So we're at risk of + * dropping PPRs (and leaking pending requests in the FQ). 
+ */ + size_t max_inflight_pprs = 16; + + if (!master->pri_supported || !master->ats_enabled) + return -ENOSYS; + + pdev = to_pci_dev(master->dev); + + ret = pci_reset_pri(pdev); + if (ret) + return ret; + + ret = pci_enable_pri(pdev, max_inflight_pprs); + if (ret) { + dev_err(master->dev, "cannot enable PRI: %d\n", ret); + return ret; + } + + return 0; +} + +static void arm_smmu_disable_pri(struct arm_smmu_master *master) +{ + struct pci_dev *pdev; + + if (!dev_is_pci(master->dev)) + return; + + pdev = to_pci_dev(master->dev); + + if (!pdev->pri_enabled) + return; + + pci_disable_pri(pdev); +} + static void arm_smmu_detach_dev(struct arm_smmu_master *master) { unsigned long flags; @@ -3603,6 +3761,8 @@ static int arm_smmu_add_device(struct device *dev) smmu->features & ARM_SMMU_FEAT_STALL_FORCE) master->stall_enabled = true; + arm_smmu_init_pri(master); + ret = iommu_device_link(&smmu->iommu, dev); if (ret) goto err_disable_pasid; @@ -3639,6 +3799,7 @@ static void arm_smmu_remove_device(struct device *dev) master = fwspec->iommu_priv; smmu = master->smmu; iopf_queue_remove_device(smmu->evtq.iopf, dev); + iopf_queue_remove_device(smmu->priq.iopf, dev); iommu_sva_disable(dev); arm_smmu_detach_dev(master); iommu_group_remove_device(dev); @@ -3762,7 +3923,7 @@ static void arm_smmu_get_resv_regions(struct device *dev, static bool arm_smmu_iopf_supported(struct arm_smmu_master *master) { - return master->stall_enabled; + return master->stall_enabled || master->pri_supported; } static bool arm_smmu_dev_has_feature(struct device *dev, @@ -3820,6 +3981,15 @@ static int arm_smmu_dev_enable_sva(struct device *dev) ret = iopf_queue_add_device(master->smmu->evtq.iopf, dev); if (ret) goto err_disable_sva; + } else if (master->pri_supported) { + ret = iopf_queue_add_device(master->smmu->priq.iopf, dev); + if (ret) + goto err_disable_sva; + + if (arm_smmu_enable_pri(master)) { + iopf_queue_remove_device(master->smmu->priq.iopf, dev); + goto err_disable_sva; + } } return 0; @@ -3855,7 +4025,9 @@ static int arm_smmu_dev_disable_feature(struct device *dev, switch (feat) { case IOMMU_DEV_FEAT_SVA: + arm_smmu_disable_pri(master); iopf_queue_remove_device(master->smmu->evtq.iopf, dev); + iopf_queue_remove_device(master->smmu->priq.iopf, dev); return iommu_sva_disable(dev); default: return -EINVAL; @@ -3999,6 +4171,11 @@ static int arm_smmu_init_queues(struct arm_smmu_device *smmu) if (!(smmu->features & ARM_SMMU_FEAT_PRI)) return 0; + smmu->priq.iopf = iopf_queue_alloc(dev_name(smmu->dev), + arm_smmu_flush_priq, smmu); + if (!smmu->priq.iopf) + return -ENOMEM; + return arm_smmu_init_one_queue(smmu, &smmu->priq.q, ARM_SMMU_PRIQ_PROD, ARM_SMMU_PRIQ_CONS, PRIQ_ENT_DWORDS, "priq"); @@ -4971,6 +5148,7 @@ static int arm_smmu_device_remove(struct platform_device *pdev) iommu_device_sysfs_remove(&smmu->iommu); arm_smmu_device_disable(smmu); iopf_queue_free(smmu->evtq.iopf); + iopf_queue_free(smmu->priq.iopf); return 0; }
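For reference, the PCI side that the last patch relies on can be summarized in a few calls; the sketch below loosely mirrors the arm_smmu_init_pri()/arm_smmu_enable_pri() flow shown above. example_enable_pri() is a hypothetical helper, and the queue depth of 16 is the same arbitrary placeholder the patch uses, not a value mandated by the PCI specification; note the driver only enables PRI once ATS is already enabled for the master.

#include <linux/pci.h>
#include <linux/pci-ats.h>

/* Hypothetical helper mirroring the PRI bring-up for one PCI function. */
static int example_enable_pri(struct pci_dev *pdev)
{
	int ret;

	/* Nothing to do if the device doesn't expose the PRI capability. */
	if (!pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI))
		return 0;

	/* Does the device need a PASID in PRG responses? (maps to STE.PPAR) */
	if (pci_prg_resp_pasid_required(pdev))
		dev_info(&pdev->dev, "PRG responses require a PASID\n");

	/* Start from a clean state, then allow up to 16 in-flight requests. */
	ret = pci_reset_pri(pdev);
	if (ret)
		return ret;

	return pci_enable_pri(pdev, 16);
}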