From patchwork Wed Jun 16 13:38:42 2021
X-Patchwork-Submitter: Georgi Djakov
X-Patchwork-Id: 12325229
From: Georgi Djakov
Subject: [PATCH v7 01/15] iommu/io-pgtable: Introduce unmap_pages() as a page table op
Date: Wed, 16 Jun 2021 06:38:42 -0700
Message-ID: <1623850736-389584-2-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
References: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>

From: "Isaac J. Manjarres"

The io-pgtable code expects to operate on a single block or granule
of memory that is supported by the IOMMU hardware when unmapping memory.

This means that when a large buffer that consists of multiple such blocks
is unmapped, the io-pgtable code will walk the page tables to the correct
level to unmap each block, even for blocks that are virtually contiguous
and at the same level, which can incur an overhead in performance.

Introduce the unmap_pages() page table op to express to the io-pgtable
code that it should unmap a number of blocks of the same size, instead of
a single block. Doing so allows multiple blocks to be unmapped in one call
to the io-pgtable code, reducing the number of page table walks, and
indirect calls.

Signed-off-by: Isaac J. Manjarres
Suggested-by: Will Deacon
Signed-off-by: Will Deacon
Signed-off-by: Georgi Djakov
---
 include/linux/io-pgtable.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index 4d40dfa75b55..9391c5fa71e6 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -144,6 +144,7 @@ struct io_pgtable_cfg {
  *
  * @map:          Map a physically contiguous memory region.
  * @unmap:        Unmap a physically contiguous memory region.
+ * @unmap_pages:  Unmap a range of virtually contiguous pages of the same size.
  * @iova_to_phys: Translate iova to physical address.
  *
  * These functions map directly onto the iommu_ops member functions with
@@ -154,6 +155,9 @@ struct io_pgtable_ops {
 		   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
 	size_t (*unmap)(struct io_pgtable_ops *ops, unsigned long iova,
 			size_t size, struct iommu_iotlb_gather *gather);
+	size_t (*unmap_pages)(struct io_pgtable_ops *ops, unsigned long iova,
+			      size_t pgsize, size_t pgcount,
+			      struct iommu_iotlb_gather *gather);
 	phys_addr_t (*iova_to_phys)(struct io_pgtable_ops *ops,
 				    unsigned long iova);
 };
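The gain described above is mostly about call overhead: with an
unmap_pages()-style op, one indirect call can retire many same-size blocks
instead of one call per block. A minimal user-space sketch of that
difference (toy names, not the kernel's io-pgtable API):

/* Stand-alone model of the unmap vs. unmap_pages() idea.
 * Hypothetical types and names; not the kernel interface.
 */
#include <stdio.h>
#include <stddef.h>

#define PGSIZE   4096UL
#define NR_PAGES 512UL          /* pretend a 2MiB buffer built from 4KiB pages */

static int pte[NR_PAGES];       /* toy "page table": 1 = mapped */
static unsigned long indirect_calls;

/* Old-style op: one call per block. */
static size_t toy_unmap(unsigned long iova, size_t size)
{
	indirect_calls++;
	pte[iova / PGSIZE] = 0;
	return size;
}

/* New-style op: one call retires pgcount same-size blocks. */
static size_t toy_unmap_pages(unsigned long iova, size_t pgsize, size_t pgcount)
{
	size_t i;

	indirect_calls++;
	for (i = 0; i < pgcount; i++)
		pte[iova / PGSIZE + i] = 0;
	return pgsize * pgcount;
}

int main(void)
{
	unsigned long iova;

	for (iova = 0; iova < NR_PAGES * PGSIZE; iova += PGSIZE)
		toy_unmap(iova, PGSIZE);
	printf("per-page unmap: %lu indirect calls\n", indirect_calls);

	indirect_calls = 0;
	toy_unmap_pages(0, PGSIZE, NR_PAGES);
	printf("unmap_pages:    %lu indirect call(s)\n", indirect_calls);
	return 0;
}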
From patchwork Wed Jun 16 13:38:43 2021
X-Patchwork-Submitter: Georgi Djakov
X-Patchwork-Id: 12325247
From: Georgi Djakov
Subject: [PATCH v7 02/15] iommu: Add an unmap_pages() op for IOMMU drivers
Date: Wed, 16 Jun 2021 06:38:43 -0700
Message-ID: <1623850736-389584-3-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
References: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>

From: "Isaac J. Manjarres"

Add a callback for IOMMU drivers to provide a path for the IOMMU
framework to call into an IOMMU driver, which can call into the
io-pgtable code, to unmap a virtually contiguous range of pages of
the same size.

For IOMMU drivers that do not specify an unmap_pages() callback, the
existing logic of unmapping memory one page block at a time will be
used.

Signed-off-by: Isaac J. Manjarres
Suggested-by: Will Deacon
Signed-off-by: Will Deacon
Acked-by: Lu Baolu
Signed-off-by: Georgi Djakov
---
 include/linux/iommu.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 32d448050bf7..25a844121be5 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -181,6 +181,7 @@ struct iommu_iotlb_gather {
  * @detach_dev: detach device from an iommu domain
  * @map: map a physically contiguous memory region to an iommu domain
  * @unmap: unmap a physically contiguous memory region from an iommu domain
+ * @unmap_pages: unmap a number of pages of the same size from an iommu domain
  * @flush_iotlb_all: Synchronously flush all hardware TLBs for this domain
  * @iotlb_sync_map: Sync mappings created recently using @map to the hardware
  * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
@@ -231,6 +232,9 @@ struct iommu_ops {
 		   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
 	size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
 		     size_t size, struct iommu_iotlb_gather *iotlb_gather);
+	size_t (*unmap_pages)(struct iommu_domain *domain, unsigned long iova,
+			      size_t pgsize, size_t pgcount,
+			      struct iommu_iotlb_gather *iotlb_gather);
 	void (*flush_iotlb_all)(struct iommu_domain *domain);
 	void (*iotlb_sync_map)(struct iommu_domain *domain, unsigned long iova,
 			       size_t size);
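The fallback rule in this patch (use unmap_pages() when the driver provides
it, otherwise keep unmapping one block per call) can be modelled with a
couple of function pointers. The sketch below uses made-up toy_* names, not
struct iommu_ops:

/* Toy dispatcher: prefer the batched hook, fall back to per-block unmap. */
#include <stdio.h>
#include <stddef.h>

struct toy_ops {
	size_t (*unmap)(unsigned long iova, size_t size);
	size_t (*unmap_pages)(unsigned long iova, size_t pgsize, size_t pgcount);
};

static size_t one_block(unsigned long iova, size_t size)
{
	printf("unmap       iova=%#lx size=%zu\n", iova, size);
	return size;
}

static size_t many_blocks(unsigned long iova, size_t pgsize, size_t pgcount)
{
	printf("unmap_pages iova=%#lx pgsize=%zu count=%zu\n", iova, pgsize, pgcount);
	return pgsize * pgcount;
}

static size_t do_unmap(const struct toy_ops *ops, unsigned long iova,
		       size_t pgsize, size_t pgcount)
{
	/* Use the batched path only if the driver registered one. */
	return ops->unmap_pages ?
	       ops->unmap_pages(iova, pgsize, pgcount) :
	       ops->unmap(iova, pgsize);
}

int main(void)
{
	struct toy_ops old = { .unmap = one_block };
	struct toy_ops new = { .unmap = one_block, .unmap_pages = many_blocks };

	do_unmap(&old, 0x1000, 4096, 16);   /* falls back: one 4KiB block */
	do_unmap(&new, 0x1000, 4096, 16);   /* batched: 16 blocks at once */
	return 0;
}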
From patchwork Wed Jun 16 13:38:44 2021
X-Patchwork-Submitter: Georgi Djakov
X-Patchwork-Id: 12325231
From: Georgi Djakov
Subject: [PATCH v7 03/15] iommu/io-pgtable: Introduce map_pages() as a page table op
Date: Wed, 16 Jun 2021 06:38:44 -0700
Message-ID: <1623850736-389584-4-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
References: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>

From: "Isaac J. Manjarres"

Mapping memory into io-pgtables follows the same semantics that unmapping
memory used to follow (i.e. a buffer will be mapped one page block per
call to the io-pgtable code). This means that it can be optimized in the
same way that unmapping memory was, so add a map_pages() callback to the
io-pgtable ops structure, so that a range of pages of the same size can
be mapped within the same call.

Signed-off-by: Isaac J. Manjarres
Suggested-by: Will Deacon
Signed-off-by: Georgi Djakov
---
 include/linux/io-pgtable.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index 9391c5fa71e6..c43f3b899d2a 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -143,6 +144,7 @@ struct io_pgtable_cfg {
  * struct io_pgtable_ops - Page table manipulation API for IOMMU drivers.
  *
  * @map:          Map a physically contiguous memory region.
+ * @map_pages:    Map a physically contiguous range of pages of the same size.
  * @unmap:        Unmap a physically contiguous memory region.
  * @unmap_pages:  Unmap a range of virtually contiguous pages of the same size.
  * @iova_to_phys: Translate iova to physical address.
@@ -153,6 +154,9 @@ struct io_pgtable_cfg {
 struct io_pgtable_ops {
 	int (*map)(struct io_pgtable_ops *ops, unsigned long iova,
 		   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
+	int (*map_pages)(struct io_pgtable_ops *ops, unsigned long iova,
+			 phys_addr_t paddr, size_t pgsize, size_t pgcount,
+			 int prot, gfp_t gfp, size_t *mapped);
 	size_t (*unmap)(struct io_pgtable_ops *ops, unsigned long iova,
 			size_t size, struct iommu_iotlb_gather *gather);
 	size_t (*unmap_pages)(struct io_pgtable_ops *ops, unsigned long iova,
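The new map_pages() op returns an error code and separately reports progress
through *mapped, so a caller can unwind a partially completed batch. A
stand-alone toy illustration of that contract (hypothetical names, not the
kernel interface):

#include <stdio.h>
#include <stddef.h>
#include <errno.h>

#define NPTES 8
static int pte[NPTES];	/* toy table: nonzero = already mapped */

static int toy_map_pages(unsigned long iova, unsigned long paddr,
			 size_t pgsize, size_t pgcount, size_t *mapped)
{
	size_t i;

	(void)paddr;	/* a real implementation would encode this in each PTE */
	*mapped = 0;
	for (i = 0; i < pgcount; i++) {
		size_t idx = iova / pgsize + i;

		if (idx >= NPTES || pte[idx])
			return -EEXIST;	/* caller unwinds using *mapped */
		pte[idx] = 1;
		*mapped += pgsize;
	}
	return 0;
}

int main(void)
{
	size_t mapped;
	int ret;

	pte[5] = 1;	/* a pre-existing mapping forces a partial failure */
	ret = toy_map_pages(0, 0x80000000UL, 4096, NPTES, &mapped);
	printf("ret=%d mapped=%zu bytes\n", ret, mapped);
	return 0;
}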
From patchwork Wed Jun 16 13:38:45 2021
X-Patchwork-Submitter: Georgi Djakov
X-Patchwork-Id: 12325225
From: Georgi Djakov
Subject: [PATCH v7 04/15] iommu: Add a map_pages() op for IOMMU drivers
Date: Wed, 16 Jun 2021 06:38:45 -0700
Message-ID: <1623850736-389584-5-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
References: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>

From: "Isaac J. Manjarres"

Add a callback for IOMMU drivers to provide a path for the IOMMU
framework to call into an IOMMU driver, which can call into the
io-pgtable code, to map a physically contiguous range of pages of the
same size.

For IOMMU drivers that do not specify a map_pages() callback, the
existing logic of mapping memory one page block at a time will be used.

Signed-off-by: Isaac J. Manjarres
Suggested-by: Will Deacon
Acked-by: Lu Baolu
Signed-off-by: Georgi Djakov
---
 include/linux/iommu.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 25a844121be5..d7989d4a7404 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -180,6 +180,8 @@ struct iommu_iotlb_gather {
  * @attach_dev: attach device to an iommu domain
  * @detach_dev: detach device from an iommu domain
  * @map: map a physically contiguous memory region to an iommu domain
+ * @map_pages: map a physically contiguous set of pages of the same size to
+ *             an iommu domain.
  * @unmap: unmap a physically contiguous memory region from an iommu domain
  * @unmap_pages: unmap a number of pages of the same size from an iommu domain
  * @flush_iotlb_all: Synchronously flush all hardware TLBs for this domain
@@ -230,6 +232,9 @@ struct iommu_ops {
 	void (*detach_dev)(struct iommu_domain *domain, struct device *dev);
 	int (*map)(struct iommu_domain *domain, unsigned long iova,
 		   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
+	int (*map_pages)(struct iommu_domain *domain, unsigned long iova,
+			 phys_addr_t paddr, size_t pgsize, size_t pgcount,
+			 int prot, gfp_t gfp, size_t *mapped);
 	size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
 		     size_t size, struct iommu_iotlb_gather *iotlb_gather);
 	size_t (*unmap_pages)(struct iommu_domain *domain, unsigned long iova,
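From a driver's point of view both new callbacks are optional: an ops table
that leaves them NULL keeps working through the single-block map()/unmap()
paths. A toy ops table (not the kernel's struct iommu_ops) to make that
concrete:

#include <stddef.h>

struct toy_iommu_ops {
	int    (*map)(unsigned long iova, unsigned long paddr, size_t size);
	int    (*map_pages)(unsigned long iova, unsigned long paddr,
			    size_t pgsize, size_t pgcount, size_t *mapped);
	size_t (*unmap)(unsigned long iova, size_t size);
	size_t (*unmap_pages)(unsigned long iova, size_t pgsize, size_t pgcount);
};

static int legacy_map(unsigned long iova, unsigned long paddr, size_t size)
{
	(void)iova; (void)paddr; (void)size;	/* would program one block */
	return 0;
}

/* An older driver fills in only the single-block hook; the NULL
 * map_pages()/unmap_pages() slots mean "fall back to the old loop".
 */
static const struct toy_iommu_ops legacy_driver = {
	.map = legacy_map,
};

int main(void)
{
	return legacy_driver.map(0x1000, 0x2000, 4096);
}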
From patchwork Wed Jun 16 13:38:46 2021
X-Patchwork-Submitter: Georgi Djakov
X-Patchwork-Id: 12325233
From: Georgi Djakov
Subject: [PATCH v7 05/15] iommu: Use bitmap to calculate page size in iommu_pgsize()
Date: Wed, 16 Jun 2021 06:38:46 -0700
Message-ID: <1623850736-389584-6-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
References: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>

From: Will Deacon

Avoid the potential for shifting values by amounts greater than the width
of their type by using a bitmap to compute page size in iommu_pgsize().

Signed-off-by: Will Deacon
Signed-off-by: Isaac J. Manjarres
Signed-off-by: Georgi Djakov
Reviewed-by: Lu Baolu
---
 drivers/iommu/iommu.c | 31 ++++++++++++-------------------
 1 file changed, 12 insertions(+), 19 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 5419c4b9f27a..80e471ada358 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -8,6 +8,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -2378,30 +2379,22 @@ static size_t iommu_pgsize(struct iommu_domain *domain,
 			   unsigned long addr_merge, size_t size)
 {
 	unsigned int pgsize_idx;
+	unsigned long pgsizes;
 	size_t pgsize;
 
-	/* Max page size that still fits into 'size' */
-	pgsize_idx = __fls(size);
+	/* Page sizes supported by the hardware and small enough for @size */
+	pgsizes = domain->pgsize_bitmap & GENMASK(__fls(size), 0);
 
-	/* need to consider alignment requirements ? */
-	if (likely(addr_merge)) {
-		/* Max page size allowed by address */
-		unsigned int align_pgsize_idx = __ffs(addr_merge);
-		pgsize_idx = min(pgsize_idx, align_pgsize_idx);
-	}
-
-	/* build a mask of acceptable page sizes */
-	pgsize = (1UL << (pgsize_idx + 1)) - 1;
-
-	/* throw away page sizes not supported by the hardware */
-	pgsize &= domain->pgsize_bitmap;
+	/* Constrain the page sizes further based on the maximum alignment */
+	if (likely(addr_merge))
+		pgsizes &= GENMASK(__ffs(addr_merge), 0);
 
-	/* make sure we're still sane */
-	BUG_ON(!pgsize);
+	/* Make sure we have at least one suitable page size */
+	BUG_ON(!pgsizes);
 
-	/* pick the biggest page */
-	pgsize_idx = __fls(pgsize);
-	pgsize = 1UL << pgsize_idx;
+	/* Pick the biggest page size remaining */
+	pgsize_idx = __fls(pgsizes);
+	pgsize = BIT(pgsize_idx);
 
 	return pgsize;
 }
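The bitmap form of the computation can be reproduced in user space: mask the
supported page sizes down to those that fit in @size and respect the address
alignment, then take the highest remaining bit. A rough stand-alone model
with local stand-ins for the kernel's __fls()/__ffs()/GENMASK():

#include <stdio.h>

static unsigned int fls_ul(unsigned long x)	/* index of highest set bit */
{
	unsigned int i = 0;

	while (x >>= 1)
		i++;
	return i;
}

static unsigned int ffs_ul(unsigned long x)	/* index of lowest set bit; x != 0 */
{
	unsigned int i = 0;

	while (!(x & 1UL)) {
		x >>= 1;
		i++;
	}
	return i;
}

#define GENMASK_UL(h, l) \
	(((~0UL) << (l)) & (~0UL >> (8 * sizeof(unsigned long) - 1 - (h))))

static unsigned long toy_pgsize(unsigned long pgsize_bitmap,
				unsigned long addr_merge, unsigned long size)
{
	/* Page sizes supported by the hardware and small enough for size */
	unsigned long pgsizes = pgsize_bitmap & GENMASK_UL(fls_ul(size), 0);

	/* Constrain further based on the alignment of the addresses */
	if (addr_merge)
		pgsizes &= GENMASK_UL(ffs_ul(addr_merge), 0);

	/* The kernel BUG()s if nothing is left; the toy just returns 1. */
	return 1UL << fls_ul(pgsizes);
}

int main(void)
{
	/* e.g. 4K | 2M | 1G support, mapping 3MiB at a 2MiB-aligned address */
	unsigned long bitmap = (1UL << 12) | (1UL << 21) | (1UL << 30);

	printf("%#lx\n", toy_pgsize(bitmap, 0x40200000UL, 0x300000UL)); /* 0x200000 */
	return 0;
}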
From patchwork Wed Jun 16 13:38:47 2021
X-Patchwork-Submitter: Georgi Djakov
X-Patchwork-Id: 12325243
From: Georgi Djakov
Subject: [PATCH v7 06/15] iommu: Split 'addr_merge' argument to iommu_pgsize() into separate parts
Date: Wed, 16 Jun 2021 06:38:47 -0700
Message-ID: <1623850736-389584-7-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
References: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>

From: Will Deacon

The 'addr_merge' parameter to iommu_pgsize() is a fabricated address
intended to describe the alignment requirements to consider when choosing
an appropriate page size. On the iommu_map() path, this address is the
logical OR of the virtual and physical addresses.

Subsequent improvements to iommu_pgsize() will need to check the alignment
of the virtual and physical components of 'addr_merge' independently, so
pass them in as separate parameters and reconstruct 'addr_merge' locally.

No functional change.

Signed-off-by: Will Deacon
Signed-off-by: Isaac J. Manjarres
Signed-off-by: Georgi Djakov
Reviewed-by: Lu Baolu
---
 drivers/iommu/iommu.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 80e471ada358..80e14c139d40 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2375,12 +2375,13 @@ phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
 }
 EXPORT_SYMBOL_GPL(iommu_iova_to_phys);
 
-static size_t iommu_pgsize(struct iommu_domain *domain,
-			   unsigned long addr_merge, size_t size)
+static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
+			   phys_addr_t paddr, size_t size)
 {
 	unsigned int pgsize_idx;
 	unsigned long pgsizes;
 	size_t pgsize;
+	unsigned long addr_merge = paddr | iova;
 
 	/* Page sizes supported by the hardware and small enough for @size */
 	pgsizes = domain->pgsize_bitmap & GENMASK(__fls(size), 0);
@@ -2433,7 +2434,7 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
 	pr_debug("map: iova 0x%lx pa %pa size 0x%zx\n", iova, &paddr, size);
 
 	while (size) {
-		size_t pgsize = iommu_pgsize(domain, iova | paddr, size);
+		size_t pgsize = iommu_pgsize(domain, iova, paddr, size);
 
 		pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx\n",
 			 iova, &paddr, pgsize);
@@ -2521,8 +2522,9 @@ static size_t __iommu_unmap(struct iommu_domain *domain,
 	 * or we hit an area that isn't mapped.
 	 */
 	while (unmapped < size) {
-		size_t pgsize = iommu_pgsize(domain, iova, size - unmapped);
+		size_t pgsize;
 
+		pgsize = iommu_pgsize(domain, iova, iova, size - unmapped);
 		unmapped_page = ops->unmap(domain, iova, pgsize, iotlb_gather);
 		if (!unmapped_page)
 			break;
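The reason the single 'addr_merge' value worked at all is that the OR of the
two addresses has the alignment of the less-aligned one, so any page size
acceptable for addr_merge is acceptable for both. A small stand-alone check
of that property (not kernel code):

#include <assert.h>
#include <stdio.h>

static unsigned int lowbit(unsigned long x)	/* lowest set bit; x != 0 */
{
	unsigned int i = 0;

	while (!(x & 1UL)) {
		x >>= 1;
		i++;
	}
	return i;
}

int main(void)
{
	unsigned long iova  = 0x40200000UL;	/* 2MiB aligned */
	unsigned long paddr = 0x80011000UL;	/* only 4KiB aligned */
	unsigned long merge = iova | paddr;

	/* Alignment of the OR equals the smaller of the two alignments. */
	assert(lowbit(merge) == (lowbit(iova) < lowbit(paddr) ?
				 lowbit(iova) : lowbit(paddr)));
	printf("alignment of the pair: %u bits (4KiB here)\n", lowbit(merge));
	return 0;
}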
From patchwork Wed Jun 16 13:38:48 2021
X-Patchwork-Submitter: Georgi Djakov
X-Patchwork-Id: 12325227
From: Georgi Djakov
Subject: [PATCH v7 07/15] iommu: Hook up '->unmap_pages' driver callback
Date: Wed, 16 Jun 2021 06:38:48 -0700
Message-ID: <1623850736-389584-8-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
References: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>

From: Will Deacon

Extend iommu_pgsize() to populate an optional 'count' parameter so that
we can direct the unmapping operation to the ->unmap_pages callback if it
has been provided by the driver.

Signed-off-by: Will Deacon
Signed-off-by: Isaac J. Manjarres
Signed-off-by: Georgi Djakov
Reviewed-by: Lu Baolu
---
 drivers/iommu/iommu.c | 59 +++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 50 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 80e14c139d40..725622c7e603 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2376,11 +2376,11 @@ phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
 EXPORT_SYMBOL_GPL(iommu_iova_to_phys);
 
 static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
-			   phys_addr_t paddr, size_t size)
+			   phys_addr_t paddr, size_t size, size_t *count)
 {
-	unsigned int pgsize_idx;
+	unsigned int pgsize_idx, pgsize_idx_next;
 	unsigned long pgsizes;
-	size_t pgsize;
+	size_t offset, pgsize, pgsize_next;
 	unsigned long addr_merge = paddr | iova;
 
 	/* Page sizes supported by the hardware and small enough for @size */
@@ -2396,7 +2396,36 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
 	/* Pick the biggest page size remaining */
 	pgsize_idx = __fls(pgsizes);
 	pgsize = BIT(pgsize_idx);
+	if (!count)
+		return pgsize;
 
+	/* Find the next biggest support page size, if it exists */
+	pgsizes = domain->pgsize_bitmap & ~GENMASK(pgsize_idx, 0);
+	if (!pgsizes)
+		goto out_set_count;
+
+	pgsize_idx_next = __ffs(pgsizes);
+	pgsize_next = BIT(pgsize_idx_next);
+
+	/*
+	 * There's no point trying a bigger page size unless the virtual
+	 * and physical addresses are similarly offset within the larger page.
+	 */
+	if ((iova ^ paddr) & (pgsize_next - 1))
+		goto out_set_count;
+
+	/* Calculate the offset to the next page size alignment boundary */
+	offset = pgsize_next - (addr_merge & (pgsize_next - 1));
+
+	/*
+	 * If size is big enough to accommodate the larger page, reduce
+	 * the number of smaller pages.
+	 */
+	if (offset + pgsize_next <= size)
+		size = offset;
+
+out_set_count:
+	*count = size >> pgsize_idx;
 	return pgsize;
 }
 
@@ -2434,7 +2463,7 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
 	pr_debug("map: iova 0x%lx pa %pa size 0x%zx\n", iova, &paddr, size);
 
 	while (size) {
-		size_t pgsize = iommu_pgsize(domain, iova, paddr, size);
+		size_t pgsize = iommu_pgsize(domain, iova, paddr, size, NULL);
 
 		pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx\n",
 			 iova, &paddr, pgsize);
@@ -2485,6 +2514,19 @@ int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
 }
 EXPORT_SYMBOL_GPL(iommu_map_atomic);
 
+static size_t __iommu_unmap_pages(struct iommu_domain *domain,
+				  unsigned long iova, size_t size,
+				  struct iommu_iotlb_gather *iotlb_gather)
+{
+	const struct iommu_ops *ops = domain->ops;
+	size_t pgsize, count;
+
+	pgsize = iommu_pgsize(domain, iova, iova, size, &count);
+	return ops->unmap_pages ?
+	       ops->unmap_pages(domain, iova, pgsize, count, iotlb_gather) :
+	       ops->unmap(domain, iova, pgsize, iotlb_gather);
+}
+
 static size_t __iommu_unmap(struct iommu_domain *domain,
 			    unsigned long iova, size_t size,
 			    struct iommu_iotlb_gather *iotlb_gather)
@@ -2494,7 +2536,7 @@ static size_t __iommu_unmap(struct iommu_domain *domain,
 	unsigned long orig_iova = iova;
 	unsigned int min_pagesz;
 
-	if (unlikely(ops->unmap == NULL ||
+	if (unlikely(!(ops->unmap || ops->unmap_pages) ||
 		     domain->pgsize_bitmap == 0UL))
 		return 0;
 
@@ -2522,10 +2564,9 @@ static size_t __iommu_unmap(struct iommu_domain *domain,
 	 * or we hit an area that isn't mapped.
 	 */
 	while (unmapped < size) {
-		size_t pgsize;
-
-		pgsize = iommu_pgsize(domain, iova, iova, size - unmapped);
-		unmapped_page = ops->unmap(domain, iova, pgsize, iotlb_gather);
+		unmapped_page = __iommu_unmap_pages(domain, iova,
+						    size - unmapped,
+						    iotlb_gather);
 		if (!unmapped_page)
 			break;
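The interesting part of this patch is the '*count' calculation: unmap as many
pages of the chosen size as fit before the next bigger supported page size
could take over. A stand-alone approximation of that logic, with hypothetical
helper names (not the kernel's iommu_pgsize()):

#include <stdio.h>

static unsigned int fls_ul(unsigned long x)	/* index of highest set bit */
{
	unsigned int i = 0;

	while (x >>= 1)
		i++;
	return i;
}

static unsigned long toy_count(unsigned long pgsize, unsigned long next_pgsize,
			       unsigned long iova, unsigned long paddr,
			       unsigned long size)
{
	unsigned long addr_merge = iova | paddr;
	unsigned long offset;

	/* A bigger page only helps if iova and paddr share its sub-offset. */
	if (!next_pgsize || ((iova ^ paddr) & (next_pgsize - 1)))
		return size >> fls_ul(pgsize);

	/* Distance to the next next_pgsize-aligned boundary. */
	offset = next_pgsize - (addr_merge & (next_pgsize - 1));
	if (offset + next_pgsize <= size)
		size = offset;		/* stop the batch at the boundary */

	return size >> fls_ul(pgsize);
}

int main(void)
{
	/* 4KiB pages, 2MiB next size; 3MiB starting 64KiB before a 2MiB
	 * boundary: batch only the 16 pages up to that boundary.
	 */
	printf("%lu\n", toy_count(0x1000, 0x200000, 0x1f0000, 0x801f0000, 0x300000));
	return 0;
}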
From patchwork Wed Jun 16 13:38:49 2021
X-Patchwork-Submitter: Georgi Djakov
X-Patchwork-Id: 12325249
From: Georgi Djakov
Subject: [PATCH v7 08/15] iommu: Add support for the map_pages() callback
Date: Wed, 16 Jun 2021 06:38:49 -0700
Message-ID: <1623850736-389584-9-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
References: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>

From: "Isaac J. Manjarres"

Since iommu_pgsize can calculate how many pages of the same size can be
mapped/unmapped before the next largest page size boundary, add support
for invoking an IOMMU driver's map_pages() callback, if it provides one.

Signed-off-by: Isaac J. Manjarres
Suggested-by: Will Deacon
Signed-off-by: Georgi Djakov
Reviewed-by: Lu Baolu
---
 drivers/iommu/iommu.c | 43 +++++++++++++++++++++++++++++++++++--------
 1 file changed, 35 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 725622c7e603..70a729ce88b1 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2429,6 +2429,30 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
 	return pgsize;
 }
 
+static int __iommu_map_pages(struct iommu_domain *domain, unsigned long iova,
+			     phys_addr_t paddr, size_t size, int prot,
+			     gfp_t gfp, size_t *mapped)
+{
+	const struct iommu_ops *ops = domain->ops;
+	size_t pgsize, count;
+	int ret;
+
+	pgsize = iommu_pgsize(domain, iova, paddr, size, &count);
+
+	pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx count %zu\n",
+		 iova, &paddr, pgsize, count);
+
+	if (ops->map_pages) {
+		ret = ops->map_pages(domain, iova, paddr, pgsize, count, prot,
+				     gfp, mapped);
+	} else {
+		ret = ops->map(domain, iova, paddr, pgsize, prot, gfp);
+		*mapped = ret ? 0 : pgsize;
+	}
+
+	return ret;
+}
+
 static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
 		       phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
@@ -2439,7 +2463,7 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
 	phys_addr_t orig_paddr = paddr;
 	int ret = 0;
 
-	if (unlikely(ops->map == NULL ||
+	if (unlikely(!(ops->map || ops->map_pages) ||
 		     domain->pgsize_bitmap == 0UL))
 		return -ENODEV;
 
@@ -2463,18 +2487,21 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
 	pr_debug("map: iova 0x%lx pa %pa size 0x%zx\n", iova, &paddr, size);
 
 	while (size) {
-		size_t pgsize = iommu_pgsize(domain, iova, paddr, size, NULL);
+		size_t mapped = 0;
 
-		pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx\n",
-			 iova, &paddr, pgsize);
-		ret = ops->map(domain, iova, paddr, pgsize, prot, gfp);
+		ret = __iommu_map_pages(domain, iova, paddr, size, prot, gfp,
+					&mapped);
+		/*
+		 * Some pages may have been mapped, even if an error occurred,
+		 * so we should account for those so they can be unmapped.
+		 */
+		size -= mapped;
 		if (ret)
 			break;
 
-		iova += pgsize;
-		paddr += pgsize;
-		size -= pgsize;
+		iova += mapped;
+		paddr += mapped;
 	}
 
 	/* unroll mapping in case something went wrong */
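The reworked __iommu_map() loop advances by whatever got reported in
'mapped', so partial progress from a failed batch is still accounted for and
can later be unwound. A toy model of that loop (made-up helpers, not
drivers/iommu/iommu.c):

#include <stdio.h>
#include <stddef.h>

/* Pretend batched mapper: maps at most 4 pages per call, fails past 1MiB. */
static int toy_map_pages(unsigned long iova, unsigned long paddr, size_t size,
			 size_t *mapped)
{
	size_t chunk = size > 4 * 4096 ? 4 * 4096 : size;

	(void)paddr;
	if (iova >= 0x100000) {
		*mapped = 0;
		return -1;
	}
	*mapped = chunk;
	return 0;
}

int main(void)
{
	unsigned long iova = 0xF8000, paddr = 0x80000000UL;
	size_t size = 0x20000, total = 0;
	int ret = 0;

	while (size) {
		size_t mapped = 0;

		ret = toy_map_pages(iova, paddr, size, &mapped);
		/* Account for partial progress even when ret != 0. */
		size -= mapped;
		total += mapped;
		if (ret)
			break;
		iova += mapped;
		paddr += mapped;
	}
	printf("ret=%d, mapped %zu bytes before stopping\n", ret, total);
	return 0;
}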
From patchwork Wed Jun 16 13:38:50 2021
X-Patchwork-Submitter: Georgi Djakov
X-Patchwork-Id: 12325237
From: Georgi Djakov
Subject: [PATCH v7 09/15] iommu/io-pgtable-arm: Prepare PTE methods for handling multiple entries
Date: Wed, 16 Jun 2021 06:38:50 -0700
Message-ID: <1623850736-389584-10-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
References: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>

From: "Isaac J. Manjarres"

The PTE methods currently operate on a single entry. In preparation for
manipulating multiple PTEs in one map or unmap call, allow them to handle
multiple PTEs.

Signed-off-by: Isaac J. Manjarres
Manjarres Suggested-by: Robin Murphy Signed-off-by: Georgi Djakov --- drivers/iommu/io-pgtable-arm.c | 78 ++++++++++++++++++++++++------------------ 1 file changed, 44 insertions(+), 34 deletions(-) diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c index 87def58e79b5..ea66b10c04c4 100644 --- a/drivers/iommu/io-pgtable-arm.c +++ b/drivers/iommu/io-pgtable-arm.c @@ -232,20 +232,23 @@ static void __arm_lpae_free_pages(void *pages, size_t size, free_pages((unsigned long)pages, get_order(size)); } -static void __arm_lpae_sync_pte(arm_lpae_iopte *ptep, +static void __arm_lpae_sync_pte(arm_lpae_iopte *ptep, int num_entries, struct io_pgtable_cfg *cfg) { dma_sync_single_for_device(cfg->iommu_dev, __arm_lpae_dma_addr(ptep), - sizeof(*ptep), DMA_TO_DEVICE); + sizeof(*ptep) * num_entries, DMA_TO_DEVICE); } static void __arm_lpae_set_pte(arm_lpae_iopte *ptep, arm_lpae_iopte pte, - struct io_pgtable_cfg *cfg) + int num_entries, struct io_pgtable_cfg *cfg) { - *ptep = pte; + int i; + + for (i = 0; i < num_entries; i++) + ptep[i] = pte; if (!cfg->coherent_walk) - __arm_lpae_sync_pte(ptep, cfg); + __arm_lpae_sync_pte(ptep, num_entries, cfg); } static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data, @@ -255,47 +258,54 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data, static void __arm_lpae_init_pte(struct arm_lpae_io_pgtable *data, phys_addr_t paddr, arm_lpae_iopte prot, - int lvl, arm_lpae_iopte *ptep) + int lvl, int num_entries, arm_lpae_iopte *ptep) { arm_lpae_iopte pte = prot; + struct io_pgtable_cfg *cfg = &data->iop.cfg; + size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data); + int i; if (data->iop.fmt != ARM_MALI_LPAE && lvl == ARM_LPAE_MAX_LEVELS - 1) pte |= ARM_LPAE_PTE_TYPE_PAGE; else pte |= ARM_LPAE_PTE_TYPE_BLOCK; - pte |= paddr_to_iopte(paddr, data); + for (i = 0; i < num_entries; i++) + ptep[i] = pte | paddr_to_iopte(paddr + i * sz, data); - __arm_lpae_set_pte(ptep, pte, &data->iop.cfg); + if (!cfg->coherent_walk) + __arm_lpae_sync_pte(ptep, num_entries, cfg); } static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data, unsigned long iova, phys_addr_t paddr, - arm_lpae_iopte prot, int lvl, + arm_lpae_iopte prot, int lvl, int num_entries, arm_lpae_iopte *ptep) { - arm_lpae_iopte pte = *ptep; - - if (iopte_leaf(pte, lvl, data->iop.fmt)) { - /* We require an unmap first */ - WARN_ON(!selftest_running); - return -EEXIST; - } else if (iopte_type(pte) == ARM_LPAE_PTE_TYPE_TABLE) { - /* - * We need to unmap and free the old table before - * overwriting it with a block entry. - */ - arm_lpae_iopte *tblp; - size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data); - - tblp = ptep - ARM_LPAE_LVL_IDX(iova, lvl, data); - if (__arm_lpae_unmap(data, NULL, iova, sz, lvl, tblp) != sz) { - WARN_ON(1); - return -EINVAL; + int i; + + for (i = 0; i < num_entries; i++) + if (iopte_leaf(ptep[i], lvl, data->iop.fmt)) { + /* We require an unmap first */ + WARN_ON(!selftest_running); + return -EEXIST; + } else if (iopte_type(ptep[i]) == ARM_LPAE_PTE_TYPE_TABLE) { + /* + * We need to unmap and free the old table before + * overwriting it with a block entry. 
+ */ + arm_lpae_iopte *tblp; + size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data); + + tblp = ptep - ARM_LPAE_LVL_IDX(iova, lvl, data); + if (__arm_lpae_unmap(data, NULL, iova + i * sz, sz, + lvl, tblp) != sz) { + WARN_ON(1); + return -EINVAL; + } } - } - __arm_lpae_init_pte(data, paddr, prot, lvl, ptep); + __arm_lpae_init_pte(data, paddr, prot, lvl, num_entries, ptep); return 0; } @@ -323,7 +333,7 @@ static arm_lpae_iopte arm_lpae_install_table(arm_lpae_iopte *table, return old; /* Even if it's not ours, there's no point waiting; just kick it */ - __arm_lpae_sync_pte(ptep, cfg); + __arm_lpae_sync_pte(ptep, 1, cfg); if (old == curr) WRITE_ONCE(*ptep, new | ARM_LPAE_PTE_SW_SYNC); @@ -344,7 +354,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova, /* If we can install a leaf entry at this level, then do so */ if (size == block_size) - return arm_lpae_init_pte(data, iova, paddr, prot, lvl, ptep); + return arm_lpae_init_pte(data, iova, paddr, prot, lvl, 1, ptep); /* We can't allocate tables at the final level */ if (WARN_ON(lvl >= ARM_LPAE_MAX_LEVELS - 1)) @@ -361,7 +371,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova, if (pte) __arm_lpae_free_pages(cptep, tblsz, cfg); } else if (!cfg->coherent_walk && !(pte & ARM_LPAE_PTE_SW_SYNC)) { - __arm_lpae_sync_pte(ptep, cfg); + __arm_lpae_sync_pte(ptep, 1, cfg); } if (pte && !iopte_leaf(pte, lvl, data->iop.fmt)) { @@ -543,7 +553,7 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data, if (i == unmap_idx) continue; - __arm_lpae_init_pte(data, blk_paddr, pte, lvl, &tablep[i]); + __arm_lpae_init_pte(data, blk_paddr, pte, lvl, 1, &tablep[i]); } pte = arm_lpae_install_table(tablep, ptep, blk_pte, cfg); @@ -585,7 +595,7 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data, /* If the size matches this level, we're in the right place */ if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) { - __arm_lpae_set_pte(ptep, 0, &iop->cfg); + __arm_lpae_set_pte(ptep, 0, 1, &iop->cfg); if (!iopte_leaf(pte, lvl, iop->fmt)) { /* Also flush any partial walks */
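The batched PTE helpers prepared above write a run of adjacent entries and then issue a single cache maintenance operation for the whole run, rather than one per entry. A small stand-alone sketch of that pattern is shown below; pte_t, sync_range() and set_ptes() are invented names, not the io-pgtable-arm interfaces.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

typedef uint64_t pte_t;

/* Stand-in for syncing a run of PTEs to the device (one call per batch). */
static void sync_range(const pte_t *ptep, int num_entries)
{
	printf("sync %d entries at %p\n", num_entries, (const void *)ptep);
}

static void set_ptes(pte_t *ptep, pte_t prot, uint64_t paddr,
		     size_t blk_size, int num_entries, int coherent_walk)
{
	for (int i = 0; i < num_entries; i++)
		ptep[i] = prot | (paddr + i * blk_size);	/* contiguous blocks */

	if (!coherent_walk)
		sync_range(ptep, num_entries);
}

int main(void)
{
	pte_t table[16] = { 0 };

	set_ptes(table, 0x3, 0x40000000u, 4096, 8, 0);
	return 0;
}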
From patchwork Wed Jun 16 13:38:51 2021
X-Patchwork-Submitter: Georgi Djakov
X-Patchwork-Id: 12325241
From: Georgi Djakov
Subject: [PATCH v7 10/15] iommu/io-pgtable-arm: Implement arm_lpae_unmap_pages()
Date: Wed, 16 Jun 2021 06:38:51 -0700
Message-ID: <1623850736-389584-11-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
From: "Isaac J. Manjarres"
Implement the unmap_pages() callback for the ARM LPAE io-pgtable format. Signed-off-by: Isaac J.
Manjarres Suggested-by: Will Deacon Signed-off-by: Georgi Djakov --- drivers/iommu/io-pgtable-arm.c | 120 +++++++++++++++++++++++++---------------- 1 file changed, 74 insertions(+), 46 deletions(-) diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c index ea66b10c04c4..fe8fa0ee9c98 100644 --- a/drivers/iommu/io-pgtable-arm.c +++ b/drivers/iommu/io-pgtable-arm.c @@ -46,6 +46,9 @@ #define ARM_LPAE_PGD_SIZE(d) \ (sizeof(arm_lpae_iopte) << (d)->pgd_bits) +#define ARM_LPAE_PTES_PER_TABLE(d) \ + (ARM_LPAE_GRANULE(d) >> ilog2(sizeof(arm_lpae_iopte))) + /* * Calculate the index at level l used to map virtual address a using the * pagetable in d. @@ -239,22 +242,19 @@ static void __arm_lpae_sync_pte(arm_lpae_iopte *ptep, int num_entries, sizeof(*ptep) * num_entries, DMA_TO_DEVICE); } -static void __arm_lpae_set_pte(arm_lpae_iopte *ptep, arm_lpae_iopte pte, - int num_entries, struct io_pgtable_cfg *cfg) +static void __arm_lpae_clear_pte(arm_lpae_iopte *ptep, struct io_pgtable_cfg *cfg) { - int i; - for (i = 0; i < num_entries; i++) - ptep[i] = pte; + *ptep = 0; if (!cfg->coherent_walk) - __arm_lpae_sync_pte(ptep, num_entries, cfg); + __arm_lpae_sync_pte(ptep, 1, cfg); } static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data, struct iommu_iotlb_gather *gather, - unsigned long iova, size_t size, int lvl, - arm_lpae_iopte *ptep); + unsigned long iova, size_t size, size_t pgcount, + int lvl, arm_lpae_iopte *ptep); static void __arm_lpae_init_pte(struct arm_lpae_io_pgtable *data, phys_addr_t paddr, arm_lpae_iopte prot, @@ -298,7 +298,7 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data, size_t sz = ARM_LPAE_BLOCK_SIZE(lvl, data); tblp = ptep - ARM_LPAE_LVL_IDX(iova, lvl, data); - if (__arm_lpae_unmap(data, NULL, iova + i * sz, sz, + if (__arm_lpae_unmap(data, NULL, iova + i * sz, sz, 1, lvl, tblp) != sz) { WARN_ON(1); return -EINVAL; @@ -526,14 +526,15 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data, struct iommu_iotlb_gather *gather, unsigned long iova, size_t size, arm_lpae_iopte blk_pte, int lvl, - arm_lpae_iopte *ptep) + arm_lpae_iopte *ptep, size_t pgcount) { struct io_pgtable_cfg *cfg = &data->iop.cfg; arm_lpae_iopte pte, *tablep; phys_addr_t blk_paddr; size_t tablesz = ARM_LPAE_GRANULE(data); size_t split_sz = ARM_LPAE_BLOCK_SIZE(lvl, data); - int i, unmap_idx = -1; + int ptes_per_table = ARM_LPAE_PTES_PER_TABLE(data); + int i, unmap_idx_start = -1, num_entries = 0, max_entries; if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS)) return 0; @@ -542,15 +543,18 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data, if (!tablep) return 0; /* Bytes unmapped */ - if (size == split_sz) - unmap_idx = ARM_LPAE_LVL_IDX(iova, lvl, data); + if (size == split_sz) { + unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data); + max_entries = ptes_per_table - unmap_idx_start; + num_entries = min_t(int, pgcount, max_entries); + } blk_paddr = iopte_to_paddr(blk_pte, data); pte = iopte_prot(blk_pte); - for (i = 0; i < tablesz / sizeof(pte); i++, blk_paddr += split_sz) { + for (i = 0; i < ptes_per_table; i++, blk_paddr += split_sz) { /* Unmap! 
*/ - if (i == unmap_idx) + if (i >= unmap_idx_start && i < (unmap_idx_start + num_entries)) continue; __arm_lpae_init_pte(data, blk_paddr, pte, lvl, 1, &tablep[i]); @@ -568,76 +572,92 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data, return 0; tablep = iopte_deref(pte, data); - } else if (unmap_idx >= 0) { - io_pgtable_tlb_add_page(&data->iop, gather, iova, size); - return size; + } else if (unmap_idx_start >= 0) { + for (i = 0; i < num_entries; i++) + io_pgtable_tlb_add_page(&data->iop, gather, iova + i * size, size); + + return num_entries * size; } - return __arm_lpae_unmap(data, gather, iova, size, lvl, tablep); + return __arm_lpae_unmap(data, gather, iova, size, pgcount, lvl, tablep); } static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data, struct iommu_iotlb_gather *gather, - unsigned long iova, size_t size, int lvl, - arm_lpae_iopte *ptep) + unsigned long iova, size_t size, size_t pgcount, + int lvl, arm_lpae_iopte *ptep) { arm_lpae_iopte pte; struct io_pgtable *iop = &data->iop; + int i = 0, num_entries, max_entries, unmap_idx_start; /* Something went horribly wrong and we ran out of page table */ if (WARN_ON(lvl == ARM_LPAE_MAX_LEVELS)) return 0; - ptep += ARM_LPAE_LVL_IDX(iova, lvl, data); + unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data); + ptep += unmap_idx_start; pte = READ_ONCE(*ptep); if (WARN_ON(!pte)) return 0; /* If the size matches this level, we're in the right place */ if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) { - __arm_lpae_set_pte(ptep, 0, 1, &iop->cfg); - - if (!iopte_leaf(pte, lvl, iop->fmt)) { - /* Also flush any partial walks */ - io_pgtable_tlb_flush_walk(iop, iova, size, - ARM_LPAE_GRANULE(data)); - ptep = iopte_deref(pte, data); - __arm_lpae_free_pgtable(data, lvl + 1, ptep); - } else if (iop->cfg.quirks & IO_PGTABLE_QUIRK_NON_STRICT) { - /* - * Order the PTE update against queueing the IOVA, to - * guarantee that a flush callback from a different CPU - * has observed it before the TLBIALL can be issued. - */ - smp_wmb(); - } else { - io_pgtable_tlb_add_page(iop, gather, iova, size); + max_entries = ARM_LPAE_PTES_PER_TABLE(data) - unmap_idx_start; + num_entries = min_t(int, pgcount, max_entries); + + while (i < num_entries) { + pte = READ_ONCE(*ptep); + if (WARN_ON(!pte)) + break; + + __arm_lpae_clear_pte(ptep, &iop->cfg); + + if (!iopte_leaf(pte, lvl, iop->fmt)) { + /* Also flush any partial walks */ + io_pgtable_tlb_flush_walk(iop, iova + i * size, size, + ARM_LPAE_GRANULE(data)); + __arm_lpae_free_pgtable(data, lvl + 1, iopte_deref(pte, data)); + } else if (iop->cfg.quirks & IO_PGTABLE_QUIRK_NON_STRICT) { + /* + * Order the PTE update against queueing the IOVA, to + * guarantee that a flush callback from a different CPU + * has observed it before the TLBIALL can be issued. 
+ */ + smp_wmb(); + } else { + io_pgtable_tlb_add_page(iop, gather, iova + i * size, size); + } + + ptep++; + i++; } - return size; + return i * size; } else if (iopte_leaf(pte, lvl, iop->fmt)) { /* * Insert a table at the next level to map the old region, * minus the part we want to unmap */ return arm_lpae_split_blk_unmap(data, gather, iova, size, pte, - lvl + 1, ptep); + lvl + 1, ptep, pgcount); } /* Keep on walkin' */ ptep = iopte_deref(pte, data); - return __arm_lpae_unmap(data, gather, iova, size, lvl + 1, ptep); + return __arm_lpae_unmap(data, gather, iova, size, pgcount, lvl + 1, ptep); } -static size_t arm_lpae_unmap(struct io_pgtable_ops *ops, unsigned long iova, - size_t size, struct iommu_iotlb_gather *gather) +static size_t arm_lpae_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova, + size_t pgsize, size_t pgcount, + struct iommu_iotlb_gather *gather) { struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops); struct io_pgtable_cfg *cfg = &data->iop.cfg; arm_lpae_iopte *ptep = data->pgd; long iaext = (s64)iova >> cfg->ias; - if (WARN_ON(!size || (size & cfg->pgsize_bitmap) != size)) + if (WARN_ON(!pgsize || (pgsize & cfg->pgsize_bitmap) != pgsize || !pgcount)) return 0; if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1) @@ -645,7 +665,14 @@ static size_t arm_lpae_unmap(struct io_pgtable_ops *ops, unsigned long iova, if (WARN_ON(iaext)) return 0; - return __arm_lpae_unmap(data, gather, iova, size, data->start_level, ptep); + return __arm_lpae_unmap(data, gather, iova, pgsize, pgcount, + data->start_level, ptep); +} + +static size_t arm_lpae_unmap(struct io_pgtable_ops *ops, unsigned long iova, + size_t size, struct iommu_iotlb_gather *gather) +{ + return arm_lpae_unmap_pages(ops, iova, size, 1, gather); } static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops, @@ -761,6 +788,7 @@ arm_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg) data->iop.ops = (struct io_pgtable_ops) { .map = arm_lpae_map, .unmap = arm_lpae_unmap, + .unmap_pages = arm_lpae_unmap_pages, .iova_to_phys = arm_lpae_iova_to_phys, };
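The key step in the unmap path above is clamping the requested page count to the entries that remain in the current table, clearing that run in one pass, and reporting how many bytes were actually unmapped. The following stand-alone sketch models only that leaf-level batching; PTES_PER_TABLE and unmap_pages_level() are illustrative names, not the LPAE code.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define PTES_PER_TABLE 512

static size_t unmap_pages_level(uint64_t *table, unsigned long iova,
				size_t pgsize, size_t pgcount)
{
	int start = (int)((iova / pgsize) % PTES_PER_TABLE);
	int max_entries = PTES_PER_TABLE - start;
	int num_entries = pgcount < (size_t)max_entries ? (int)pgcount : max_entries;
	int i;

	for (i = 0; i < num_entries; i++) {
		if (!table[start + i])		/* hole: stop and report progress */
			break;
		table[start + i] = 0;		/* clear the leaf entry */
	}

	return (size_t)i * pgsize;		/* bytes unmapped by this call */
}

int main(void)
{
	static uint64_t table[PTES_PER_TABLE];

	for (int i = 0; i < PTES_PER_TABLE; i++)
		table[i] = 1;
	printf("unmapped %zu bytes\n",
	       unmap_pages_level(table, 510UL * 4096, 4096, 8));
	return 0;
}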
From patchwork Wed Jun 16 13:38:52 2021
X-Patchwork-Submitter: Georgi Djakov
X-Patchwork-Id: 12325239
From: Georgi Djakov
Subject: [PATCH v7 11/15] iommu/io-pgtable-arm: Implement arm_lpae_map_pages()
Date: Wed, 16 Jun 2021 06:38:52 -0700
Message-ID: <1623850736-389584-12-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
From: "Isaac J. Manjarres"
Implement the map_pages() callback for the ARM LPAE io-pgtable format. Signed-off-by: Isaac J.
Manjarres Signed-off-by: Georgi Djakov --- drivers/iommu/io-pgtable-arm.c | 41 +++++++++++++++++++++++++++++++---------- 1 file changed, 31 insertions(+), 10 deletions(-) diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c index fe8fa0ee9c98..053df4048a29 100644 --- a/drivers/iommu/io-pgtable-arm.c +++ b/drivers/iommu/io-pgtable-arm.c @@ -341,20 +341,30 @@ static arm_lpae_iopte arm_lpae_install_table(arm_lpae_iopte *table, } static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova, - phys_addr_t paddr, size_t size, arm_lpae_iopte prot, - int lvl, arm_lpae_iopte *ptep, gfp_t gfp) + phys_addr_t paddr, size_t size, size_t pgcount, + arm_lpae_iopte prot, int lvl, arm_lpae_iopte *ptep, + gfp_t gfp, size_t *mapped) { arm_lpae_iopte *cptep, pte; size_t block_size = ARM_LPAE_BLOCK_SIZE(lvl, data); size_t tblsz = ARM_LPAE_GRANULE(data); struct io_pgtable_cfg *cfg = &data->iop.cfg; + int ret = 0, num_entries, max_entries, map_idx_start; /* Find our entry at the current level */ - ptep += ARM_LPAE_LVL_IDX(iova, lvl, data); + map_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data); + ptep += map_idx_start; /* If we can install a leaf entry at this level, then do so */ - if (size == block_size) - return arm_lpae_init_pte(data, iova, paddr, prot, lvl, 1, ptep); + if (size == block_size) { + max_entries = ARM_LPAE_PTES_PER_TABLE(data) - map_idx_start; + num_entries = min_t(int, pgcount, max_entries); + ret = arm_lpae_init_pte(data, iova, paddr, prot, lvl, num_entries, ptep); + if (!ret && mapped) + *mapped += num_entries * size; + + return ret; + } /* We can't allocate tables at the final level */ if (WARN_ON(lvl >= ARM_LPAE_MAX_LEVELS - 1)) @@ -383,7 +393,8 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova, } /* Rinse, repeat */ - return __arm_lpae_map(data, iova, paddr, size, prot, lvl + 1, cptep, gfp); + return __arm_lpae_map(data, iova, paddr, size, pgcount, prot, lvl + 1, + cptep, gfp, mapped); } static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data, @@ -450,8 +461,9 @@ static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data, return pte; } -static int arm_lpae_map(struct io_pgtable_ops *ops, unsigned long iova, - phys_addr_t paddr, size_t size, int iommu_prot, gfp_t gfp) +static int arm_lpae_map_pages(struct io_pgtable_ops *ops, unsigned long iova, + phys_addr_t paddr, size_t pgsize, size_t pgcount, + int iommu_prot, gfp_t gfp, size_t *mapped) { struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops); struct io_pgtable_cfg *cfg = &data->iop.cfg; @@ -460,7 +472,7 @@ static int arm_lpae_map(struct io_pgtable_ops *ops, unsigned long iova, arm_lpae_iopte prot; long iaext = (s64)iova >> cfg->ias; - if (WARN_ON(!size || (size & cfg->pgsize_bitmap) != size)) + if (WARN_ON(!pgsize || (pgsize & cfg->pgsize_bitmap) != pgsize)) return -EINVAL; if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1) @@ -473,7 +485,8 @@ static int arm_lpae_map(struct io_pgtable_ops *ops, unsigned long iova, return 0; prot = arm_lpae_prot_to_pte(data, iommu_prot); - ret = __arm_lpae_map(data, iova, paddr, size, prot, lvl, ptep, gfp); + ret = __arm_lpae_map(data, iova, paddr, pgsize, pgcount, prot, lvl, + ptep, gfp, mapped); /* * Synchronise all PTE updates for the new mapping before there's * a chance for anything to kick off a table walk for the new iova. 
@@ -483,6 +496,13 @@ static int arm_lpae_map(struct io_pgtable_ops *ops, unsigned long iova, return ret; } +static int arm_lpae_map(struct io_pgtable_ops *ops, unsigned long iova, + phys_addr_t paddr, size_t size, int iommu_prot, gfp_t gfp) +{ + return arm_lpae_map_pages(ops, iova, paddr, size, 1, iommu_prot, gfp, + NULL); +} + static void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl, arm_lpae_iopte *ptep) { @@ -787,6 +807,7 @@ arm_lpae_alloc_pgtable(struct io_pgtable_cfg *cfg) data->iop.ops = (struct io_pgtable_ops) { .map = arm_lpae_map, + .map_pages = arm_lpae_map_pages, .unmap = arm_lpae_unmap, .unmap_pages = arm_lpae_unmap_pages, .iova_to_phys = arm_lpae_iova_to_phys,
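The map side mirrors this: at the leaf level, as many same-size entries as fit in the current table are installed in one go, and the byte count is returned through an out-parameter so the caller can advance iova and paddr by exactly that amount. A simplified stand-alone model follows; map_pages_leaf() and its table layout are invented for the example.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define PTES_PER_TABLE 512

static int map_pages_leaf(uint64_t *table, unsigned long iova, uint64_t paddr,
			  size_t pgsize, size_t pgcount, uint64_t prot,
			  size_t *mapped)
{
	int start = (int)((iova / pgsize) % PTES_PER_TABLE);
	int max_entries = PTES_PER_TABLE - start;
	int num_entries = pgcount < (size_t)max_entries ? (int)pgcount : max_entries;

	for (int i = 0; i < num_entries; i++) {
		if (table[start + i])
			return -17;	/* -EEXIST: an unmap is required first */
		table[start + i] = prot | (paddr + i * pgsize);
	}

	if (mapped)
		*mapped += (size_t)num_entries * pgsize;
	return 0;
}

int main(void)
{
	static uint64_t table[PTES_PER_TABLE];
	size_t mapped = 0;

	map_pages_leaf(table, 0x200000, 0x80000000u, 4096, 16, 0x3, &mapped);
	printf("mapped %zu bytes in one call\n", mapped);
	return 0;
}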
From patchwork Wed Jun 16 13:38:53 2021
X-Patchwork-Submitter: Georgi Djakov
X-Patchwork-Id: 12325251
From: Georgi Djakov
Subject: [PATCH v7 12/15] iommu/io-pgtable-arm-v7s: Implement arm_v7s_unmap_pages()
Date: Wed, 16 Jun 2021 06:38:53 -0700
Message-ID: <1623850736-389584-13-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
From: "Isaac J. Manjarres"
Implement the unmap_pages() callback for the ARM v7s io-pgtable format. Signed-off-by: Isaac J.
Manjarres Signed-off-by: Georgi Djakov --- drivers/iommu/io-pgtable-arm-v7s.c | 24 +++++++++++++++++++++--- 1 file changed, 21 insertions(+), 3 deletions(-) diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c index d4004bcf333a..1af060686985 100644 --- a/drivers/iommu/io-pgtable-arm-v7s.c +++ b/drivers/iommu/io-pgtable-arm-v7s.c @@ -710,15 +710,32 @@ static size_t __arm_v7s_unmap(struct arm_v7s_io_pgtable *data, return __arm_v7s_unmap(data, gather, iova, size, lvl + 1, ptep); } -static size_t arm_v7s_unmap(struct io_pgtable_ops *ops, unsigned long iova, - size_t size, struct iommu_iotlb_gather *gather) +static size_t arm_v7s_unmap_pages(struct io_pgtable_ops *ops, unsigned long iova, + size_t pgsize, size_t pgcount, + struct iommu_iotlb_gather *gather) { struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops); + size_t unmapped = 0, ret; if (WARN_ON(iova >= (1ULL << data->iop.cfg.ias))) return 0; - return __arm_v7s_unmap(data, gather, iova, size, 1, data->pgd); + while (pgcount--) { + ret = __arm_v7s_unmap(data, gather, iova, pgsize, 1, data->pgd); + if (!ret) + break; + + unmapped += pgsize; + iova += pgsize; + } + + return unmapped; +} + +static size_t arm_v7s_unmap(struct io_pgtable_ops *ops, unsigned long iova, + size_t size, struct iommu_iotlb_gather *gather) +{ + return arm_v7s_unmap_pages(ops, iova, size, 1, gather); } static phys_addr_t arm_v7s_iova_to_phys(struct io_pgtable_ops *ops, @@ -781,6 +798,7 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg, data->iop.ops = (struct io_pgtable_ops) { .map = arm_v7s_map, .unmap = arm_v7s_unmap, + .unmap_pages = arm_v7s_unmap_pages, .iova_to_phys = arm_v7s_iova_to_phys, };
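Unlike the LPAE walker, the v7s version keeps the existing single-page helper and simply loops over the page count, accumulating the bytes unmapped and stopping at the first failure. A stand-alone sketch of that wrapper pattern is given below; unmap_one() is a made-up stand-in for the per-page helper, not the v7s code.

#include <stdio.h>
#include <stddef.h>

/* Stand-in for the existing single-page unmap routine. */
static size_t unmap_one(unsigned long iova, size_t pgsize)
{
	return iova < 0x10000 ? pgsize : 0;	/* pretend high IOVAs fail */
}

static size_t unmap_pages(unsigned long iova, size_t pgsize, size_t pgcount)
{
	size_t unmapped = 0;

	while (pgcount--) {
		if (!unmap_one(iova, pgsize))
			break;			/* report partial progress */
		unmapped += pgsize;
		iova += pgsize;
	}
	return unmapped;
}

int main(void)
{
	printf("unmapped %zu bytes\n", unmap_pages(0x8000, 4096, 16));
	return 0;
}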
From patchwork Wed Jun 16 13:38:54 2021
X-Patchwork-Submitter: Georgi Djakov
X-Patchwork-Id: 12325235
From: Georgi Djakov
Subject: [PATCH v7 13/15] iommu/io-pgtable-arm-v7s: Implement arm_v7s_map_pages()
Date: Wed, 16 Jun 2021 06:38:54 -0700
Message-ID: <1623850736-389584-14-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
From: "Isaac J. Manjarres"
Implement the map_pages() callback for the ARM v7s io-pgtable format. Signed-off-by: Isaac J.
Manjarres Signed-off-by: Georgi Djakov --- drivers/iommu/io-pgtable-arm-v7s.c | 26 ++++++++++++++++++++++---- 1 file changed, 22 insertions(+), 4 deletions(-) diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c index 1af060686985..5db90d7ce2ec 100644 --- a/drivers/iommu/io-pgtable-arm-v7s.c +++ b/drivers/iommu/io-pgtable-arm-v7s.c @@ -519,11 +519,12 @@ static int __arm_v7s_map(struct arm_v7s_io_pgtable *data, unsigned long iova, return __arm_v7s_map(data, iova, paddr, size, prot, lvl + 1, cptep, gfp); } -static int arm_v7s_map(struct io_pgtable_ops *ops, unsigned long iova, - phys_addr_t paddr, size_t size, int prot, gfp_t gfp) +static int arm_v7s_map_pages(struct io_pgtable_ops *ops, unsigned long iova, + phys_addr_t paddr, size_t pgsize, size_t pgcount, + int prot, gfp_t gfp, size_t *mapped) { struct arm_v7s_io_pgtable *data = io_pgtable_ops_to_data(ops); - int ret; + int ret = -EINVAL; if (WARN_ON(iova >= (1ULL << data->iop.cfg.ias) || paddr >= (1ULL << data->iop.cfg.oas))) @@ -533,7 +534,17 @@ static int arm_v7s_map(struct io_pgtable_ops *ops, unsigned long iova, if (!(prot & (IOMMU_READ | IOMMU_WRITE))) return 0; - ret = __arm_v7s_map(data, iova, paddr, size, prot, 1, data->pgd, gfp); + while (pgcount--) { + ret = __arm_v7s_map(data, iova, paddr, pgsize, prot, 1, data->pgd, + gfp); + if (ret) + break; + + iova += pgsize; + paddr += pgsize; + if (mapped) + *mapped += pgsize; + } /* * Synchronise all PTE updates for the new mapping before there's * a chance for anything to kick off a table walk for the new iova. @@ -543,6 +554,12 @@ static int arm_v7s_map(struct io_pgtable_ops *ops, unsigned long iova, return ret; } +static int arm_v7s_map(struct io_pgtable_ops *ops, unsigned long iova, + phys_addr_t paddr, size_t size, int prot, gfp_t gfp) +{ + return arm_v7s_map_pages(ops, iova, paddr, size, 1, prot, gfp, NULL); +} + static void arm_v7s_free_pgtable(struct io_pgtable *iop) { struct arm_v7s_io_pgtable *data = io_pgtable_to_data(iop); @@ -797,6 +814,7 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg, data->iop.ops = (struct io_pgtable_ops) { .map = arm_v7s_map, + .map_pages = arm_v7s_map_pages, .unmap = arm_v7s_unmap, .unmap_pages = arm_v7s_unmap_pages, .iova_to_phys = arm_v7s_iova_to_phys,
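The v7s map path takes the same loop-based shape, with one important detail: *mapped is updated as each page succeeds, before any error is returned, so the core code knows how much to unwind. A simplified stand-alone model follows; map_one() and map_pages() are invented names for this demonstration.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static int map_one(unsigned long iova, uint64_t paddr, size_t pgsize)
{
	return iova >= 0x6000 ? -12 : 0;	/* pretend mapping fails past a point (-ENOMEM) */
}

static int map_pages(unsigned long iova, uint64_t paddr, size_t pgsize,
		     size_t pgcount, size_t *mapped)
{
	int ret = 0;

	while (pgcount--) {
		ret = map_one(iova, paddr, pgsize);
		if (ret)
			break;
		iova += pgsize;
		paddr += pgsize;
		if (mapped)
			*mapped += pgsize;	/* progress stays visible even on failure */
	}
	return ret;
}

int main(void)
{
	size_t mapped = 0;
	int ret = map_pages(0x4000, 0x90000000u, 4096, 8, &mapped);

	printf("ret=%d, mapped %zu bytes before failing\n", ret, mapped);
	return 0;
}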
From patchwork Wed Jun 16 13:38:55 2021
X-Patchwork-Submitter: Georgi Djakov
X-Patchwork-Id: 12325261
From: Georgi Djakov
Subject: [PATCH v7 14/15] iommu/arm-smmu: Implement the unmap_pages() IOMMU driver callback
Date: Wed, 16 Jun 2021 06:38:55 -0700
Message-ID: <1623850736-389584-15-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
From: "Isaac J. Manjarres"
Implement the unmap_pages() callback for the ARM SMMU driver to allow calls from iommu_unmap to unmap multiple pages of the same size in one call.
Also, remove the unmap() callback for the SMMU driver, as it will no longer be used. Signed-off-by: Isaac J. Manjarres Suggested-by: Will Deacon Signed-off-by: Georgi Djakov --- drivers/iommu/arm/arm-smmu/arm-smmu.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c index 61233bcc4588..593a15cfa8d5 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c @@ -1210,8 +1210,9 @@ static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova, return ret; } -static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova, - size_t size, struct iommu_iotlb_gather *gather) +static size_t arm_smmu_unmap_pages(struct iommu_domain *domain, unsigned long iova, + size_t pgsize, size_t pgcount, + struct iommu_iotlb_gather *iotlb_gather) { struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops; struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu; @@ -1221,7 +1222,7 @@ static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova, return 0; arm_smmu_rpm_get(smmu); - ret = ops->unmap(ops, iova, size, gather); + ret = ops->unmap_pages(ops, iova, pgsize, pgcount, iotlb_gather); arm_smmu_rpm_put(smmu); return ret; @@ -1574,7 +1575,7 @@ static struct iommu_ops arm_smmu_ops = { .domain_free = arm_smmu_domain_free, .attach_dev = arm_smmu_attach_dev, .map = arm_smmu_map, - .unmap = arm_smmu_unmap, + .unmap_pages = arm_smmu_unmap_pages, .flush_iotlb_all = arm_smmu_flush_iotlb_all, .iotlb_sync = arm_smmu_iotlb_sync, .iova_to_phys = arm_smmu_iova_to_phys,
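At the driver level the change is mostly plumbing: the callback takes a page size and count and hands them straight to the io-pgtable op, bracketed by the driver's runtime PM calls. The sketch below models that forwarding with invented types (struct pgtbl_ops, rpm_get/rpm_put); it is not the arm-smmu code itself.

#include <stdio.h>
#include <stddef.h>

struct pgtbl_ops {
	size_t (*unmap_pages)(unsigned long iova, size_t pgsize, size_t pgcount);
};

static size_t tbl_unmap_pages(unsigned long iova, size_t pgsize, size_t pgcount)
{
	return pgsize * pgcount;		/* pretend everything was unmapped */
}

static void rpm_get(void) { printf("resume translation unit\n"); }
static void rpm_put(void) { printf("suspend translation unit\n"); }

static size_t driver_unmap_pages(struct pgtbl_ops *ops, unsigned long iova,
				 size_t pgsize, size_t pgcount)
{
	size_t ret;

	if (!ops)
		return 0;

	rpm_get();
	ret = ops->unmap_pages(iova, pgsize, pgcount);	/* one batched indirect call */
	rpm_put();
	return ret;
}

int main(void)
{
	struct pgtbl_ops ops = { .unmap_pages = tbl_unmap_pages };

	printf("unmapped %zu bytes\n", driver_unmap_pages(&ops, 0x1000, 4096, 32));
	return 0;
}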
From patchwork Wed Jun 16 13:38:56 2021
X-Patchwork-Submitter: Georgi Djakov
X-Patchwork-Id: 12325253
From: Georgi Djakov
Subject: [PATCH v7 15/15] iommu/arm-smmu: Implement the map_pages() IOMMU driver callback
Date: Wed, 16 Jun 2021 06:38:56 -0700
Message-ID: <1623850736-389584-16-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
From: "Isaac J. Manjarres"
Implement the map_pages() callback for the ARM SMMU driver to allow calls from iommu_map to map multiple pages of the same size in one call. Also, remove the map() callback for the ARM SMMU driver, as it will no longer be used. Signed-off-by: Isaac J.
Manjarres Suggested-by: Will Deacon Signed-off-by: Georgi Djakov --- drivers/iommu/arm/arm-smmu/arm-smmu.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c index 593a15cfa8d5..c1ca3b49a620 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c @@ -1193,8 +1193,9 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) return ret; } -static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova, - phys_addr_t paddr, size_t size, int prot, gfp_t gfp) +static int arm_smmu_map_pages(struct iommu_domain *domain, unsigned long iova, + phys_addr_t paddr, size_t pgsize, size_t pgcount, + int prot, gfp_t gfp, size_t *mapped) { struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops; struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu; @@ -1204,7 +1205,7 @@ static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova, return -ENODEV; arm_smmu_rpm_get(smmu); - ret = ops->map(ops, iova, paddr, size, prot, gfp); + ret = ops->map_pages(ops, iova, paddr, pgsize, pgcount, prot, gfp, mapped); arm_smmu_rpm_put(smmu); return ret; @@ -1574,7 +1575,7 @@ static struct iommu_ops arm_smmu_ops = { .domain_alloc = arm_smmu_domain_alloc, .domain_free = arm_smmu_domain_free, .attach_dev = arm_smmu_attach_dev, - .map = arm_smmu_map, + .map_pages = arm_smmu_map_pages, .unmap_pages = arm_smmu_unmap_pages, .flush_iotlb_all = arm_smmu_flush_iotlb_all, .iotlb_sync = arm_smmu_iotlb_sync,
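Taken together, the series means that mapping or unmapping a large, physically contiguous buffer costs one indirect call per run of same-size pages instead of one per page. The closing sketch below just counts calls to make that difference visible; the counters and helpers are invented for the demonstration and do not measure the real drivers.

#include <stdio.h>
#include <stddef.h>

static unsigned long calls;

static void map_one(size_t pgsize) { calls++; }

static void map_batched(size_t pgsize, size_t pgcount, size_t *mapped)
{
	calls++;
	*mapped = pgsize * pgcount;
}

int main(void)
{
	size_t total = 2 * 1024 * 1024, pgsize = 4096, mapped = 0;

	calls = 0;
	for (size_t off = 0; off < total; off += pgsize)
		map_one(pgsize);
	printf("per-page ops: %lu indirect calls\n", calls);

	calls = 0;
	map_batched(pgsize, total / pgsize, &mapped);
	printf("batched op: %lu indirect call for %zu bytes\n", calls, mapped);
	return 0;
}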