From patchwork Tue Mar 1 18:27:53 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8468911
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, robin.murphy@arm.com,
	alex.williamson@redhat.com, will.deacon@arm.com, joro@8bytes.org,
	tglx@linutronix.de, jason@lakedaemon.net, marc.zyngier@arm.com,
	christoffer.dall@linaro.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org
Cc: patches@linaro.org, Manish.Jaggi@caviumnetworks.com, p.fedin@samsung.com,
	linux-kernel@vger.kernel.org, Bharat.Bhushan@freescale.com,
	iommu@lists.linux-foundation.org, pranav.sawargaonkar@gmail.com,
	suravee.suthikulpanit@amd.com
Subject: [RFC v5 13/17] vfio: introduce VFIO_IOVA_RESERVED vfio_dma type
Date: Tue, 1 Mar 2016 18:27:53 +0000
Message-Id: <1456856877-4817-14-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1456856877-4817-1-git-send-email-eric.auger@linaro.org>
References: <1456856877-4817-1-git-send-email-eric.auger@linaro.org>

We introduce a vfio_dma type so we can discriminate legacy vfio_dma entries
from the new reserved ones. Since the latter are not mapped at registration
time, two code paths need to be reworked: removal and replay. For now both
paths simply skip reserved entries; subsequent patches will rework them.
Signed-off-by: Eric Auger
---
 drivers/vfio/vfio_iommu_type1.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 6f1ea3d..692e9a2 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -53,6 +53,15 @@ module_param_named(disable_hugepages,
 MODULE_PARM_DESC(disable_hugepages,
 		 "Disable VFIO IOMMU support for IOMMU hugepages.");
 
+enum vfio_iova_type {
+	VFIO_IOVA_USER = 0, /* standard IOVA used to map user vaddr */
+	/*
+	 * IOVA reserved to map special host physical addresses,
+	 * MSI frames for instance
+	 */
+	VFIO_IOVA_RESERVED,
+};
+
 struct vfio_iommu {
 	struct list_head	domain_list;
 	struct mutex		lock;
@@ -75,6 +84,7 @@ struct vfio_dma {
 	unsigned long		vaddr;		/* Process virtual addr */
 	size_t			size;		/* Map size (bytes) */
 	int			prot;		/* IOMMU_READ/WRITE */
+	enum vfio_iova_type	type;		/* type of IOVA */
 };
 
 struct vfio_group {
@@ -395,7 +405,8 @@ static void vfio_unmap_unpin(struct vfio_iommu *iommu, struct vfio_dma *dma)
 
 static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
 {
-	vfio_unmap_unpin(iommu, dma);
+	if (likely(dma->type != VFIO_IOVA_RESERVED))
+		vfio_unmap_unpin(iommu, dma);
 	vfio_unlink_dma(iommu, dma);
 	kfree(dma);
 }
@@ -671,6 +682,10 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
 		dma_addr_t iova;
 
 		dma = rb_entry(n, struct vfio_dma, node);
+
+		if (unlikely(dma->type == VFIO_IOVA_RESERVED))
+			continue;
+
 		iova = dma->iova;
 
 		while (iova < dma->iova + dma->size) {
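
Note (not part of the patch): for readers less familiar with the type1
backend, the standalone sketch below models the effect of the new type
field using simplified, made-up stand-in structures. struct fake_dma,
remove_dma() and replay() exist only for this illustration; the real code
operates on struct vfio_dma in drivers/vfio/vfio_iommu_type1.c and uses the
kernel rb-tree and IOMMU APIs. The point is only the asymmetry: user
mappings are unmapped/unpinned on removal and replayed into newly attached
domains, while reserved entries, never having been pinned or mapped at
registration, are merely unlinked and freed, and are skipped during replay.

/* Standalone userspace sketch, not kernel code: illustrative names only. */
#include <stdio.h>

enum vfio_iova_type {
	VFIO_IOVA_USER = 0,	/* standard IOVA used to map user vaddr */
	VFIO_IOVA_RESERVED,	/* IOVA reserved for special host PAs, e.g. MSI frames */
};

struct fake_dma {		/* stand-in for struct vfio_dma */
	unsigned long iova;
	unsigned long size;
	enum vfio_iova_type type;
};

/* Mirrors the check added to vfio_remove_dma(): only user mappings were
 * actually mapped/pinned, so only those are unmapped/unpinned on removal. */
static void remove_dma(const struct fake_dma *dma)
{
	if (dma->type != VFIO_IOVA_RESERVED)
		printf("unmap+unpin [0x%lx, 0x%lx)\n",
		       dma->iova, dma->iova + dma->size);
	printf("unlink+free [0x%lx, 0x%lx)\n",
	       dma->iova, dma->iova + dma->size);
}

/* Mirrors the check added to vfio_iommu_replay(): reserved entries have no
 * user backing to map into a newly attached domain, so they are skipped. */
static void replay(const struct fake_dma *dmas, int n)
{
	for (int i = 0; i < n; i++) {
		if (dmas[i].type == VFIO_IOVA_RESERVED)
			continue;
		printf("replay map [0x%lx, 0x%lx)\n",
		       dmas[i].iova, dmas[i].iova + dmas[i].size);
	}
}

int main(void)
{
	struct fake_dma dmas[] = {
		{ 0x10000, 0x1000, VFIO_IOVA_USER },
		{ 0x20000, 0x1000, VFIO_IOVA_RESERVED },
	};

	replay(dmas, 2);
	remove_dma(&dmas[0]);
	remove_dma(&dmas[1]);
	return 0;
}

The likely()/unlikely() hints in the patch reflect the expectation that
reserved entries make up a small minority of the dma_list.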