From patchwork Tue Apr 19 17:24:44 2016
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 8882251
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, robin.murphy@arm.com,
    alex.williamson@redhat.com, will.deacon@arm.com, joro@8bytes.org,
    tglx@linutronix.de, jason@lakedaemon.net, marc.zyngier@arm.com,
    christoffer.dall@linaro.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v7 4/7] vfio: allow reserved iova registration
Date: Tue, 19 Apr 2016 17:24:44 +0000
Message-Id: <1461086687-2658-5-git-send-email-eric.auger@linaro.org>
In-Reply-To: <1461086687-2658-1-git-send-email-eric.auger@linaro.org>
References: <1461086687-2658-1-git-send-email-eric.auger@linaro.org>
Cc: julien.grall@arm.com, patches@linaro.org, Jean-Philippe.Brucker@arm.com,
    p.fedin@samsung.com, linux-kernel@vger.kernel.org,
    Bharat.Bhushan@freescale.com, iommu@lists.linux-foundation.org,
    pranav.sawargaonkar@gmail.com

The user is allowed to [un]register a reserved IOVA range by using the
DMA MAP API and setting the new flag VFIO_DMA_MAP_FLAG_RESERVED_MSI_IOVA,
providing the base address and the size. This region is stored in the
vfio_dma rb tree. At that point the IOVA range is not mapped to any
target address yet. The host kernel will use those IOVAs when needed,
typically when the VFIO-PCI driver allocates its MSIs.

Signed-off-by: Eric Auger
Signed-off-by: Bharat Bhushan

---

v6 -> v7:
- use iommu_free_reserved_iova_domain
- convey prot attributes down to dma-reserved-iommu iova domain creation
- reserved bindings teardown now performed on iommu domain destruction
- rename VFIO_DMA_MAP_FLAG_MSI_RESERVED_IOVA into
  VFIO_DMA_MAP_FLAG_RESERVED_MSI_IOVA
- change title
- pass the protection attribute to the dma-reserved-iommu API

v3 -> v4:
- use iommu_alloc/free_reserved_iova_domain exported by dma-reserved-iommu
- protect the vfio_register_reserved_iova_range implementation with
  CONFIG_IOMMU_DMA_RESERVED
- handle unregistration by user-space and on vfio_iommu_type1 release

v1 -> v2:
- set the returned value according to the alloc_reserved_iova_domain result
- free the iova domains in case any error occurs

RFC v1 -> v1:
- take into account Alex's comments, based on [RFC PATCH 1/6] vfio: Add
  interface for add/del reserved iova region:
  - use the existing dma map/unmap ioctl interface with a flag to register
    a reserved IOVA range. A single reserved iova region is allowed.
---
 drivers/vfio/vfio_iommu_type1.c | 138 +++++++++++++++++++++++++++++++++++++++-
 include/uapi/linux/vfio.h       |   9 ++-
 2 files changed, 145 insertions(+), 2 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 93c17d9..fa6b8b1 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -454,6 +454,27 @@ static void vfio_destroy_reserved(struct vfio_iommu *iommu)
 		iommu_free_reserved_iova_domain(d->domain);
 }
 
+static int vfio_create_reserved(struct vfio_iommu *iommu,
+				dma_addr_t iova, size_t size,
+				int prot, unsigned long order)
+{
+	struct vfio_domain *d;
+	int ret = 0;
+
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		ret = iommu_alloc_reserved_iova_domain(d->domain, iova,
+						       size, prot, order);
+		if (ret)
+			break;
+	}
+
+	if (ret) {
+		list_for_each_entry(d, &iommu->domain_list, next)
+			iommu_free_reserved_iova_domain(d->domain);
+	}
+	return ret;
+}
+
 static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
 {
 	if (likely(dma->type == VFIO_IOVA_USER))
@@ -705,6 +726,110 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
 	return ret;
 }
 
+static int vfio_register_reserved_iova_range(struct vfio_iommu *iommu,
+				struct vfio_iommu_type1_dma_map *map)
+{
+	dma_addr_t iova = map->iova;
+	size_t size = map->size;
+	int ret = 0, prot = 0;
+	struct vfio_dma *dma;
+	unsigned long order;
+	uint64_t mask;
+
+	/* Verify that none of our __u64 fields overflow */
+	if (map->size != size || map->iova != iova)
+		return -EINVAL;
+
+	order = __ffs(vfio_pgsize_bitmap(iommu));
+	mask = ((uint64_t)1 << order) - 1;
+
+	WARN_ON(mask & PAGE_MASK);
+
+	if (!size || (size | iova) & mask)
+		return -EINVAL;
+
+	/* Don't allow IOVA address wrap */
+	if (iova + size - 1 < iova)
+		return -EINVAL;
+
+	mutex_lock(&iommu->lock);
+
+	if (vfio_find_dma(iommu, iova, size, VFIO_IOVA_ANY)) {
+		ret = -EEXIST;
+		goto unlock;
+	}
+
+	dma = kzalloc(sizeof(*dma), GFP_KERNEL);
+	if (!dma) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+
+	dma->iova = iova;
+	dma->size = size;
+	dma->type = VFIO_IOVA_RESERVED;
+
+	if (map->flags & VFIO_DMA_MAP_FLAG_READ)
+		prot = IOMMU_READ;
+	if (map->flags & VFIO_DMA_MAP_FLAG_WRITE)
+		prot |= IOMMU_WRITE;
+
+	ret = vfio_create_reserved(iommu, iova, size, prot, order);
+	if (ret)
+		goto free_unlock;
+
+	vfio_link_dma(iommu, dma);
+	goto unlock;
+
+free_unlock:
+	kfree(dma);
+unlock:
+	mutex_unlock(&iommu->lock);
+	return ret;
+}
+
+static int vfio_unregister_reserved_iova_range(struct vfio_iommu *iommu,
+				struct vfio_iommu_type1_dma_unmap *unmap)
+{
+	dma_addr_t iova = unmap->iova;
+	struct vfio_dma *dma;
+	size_t size = unmap->size;
+	uint64_t mask;
+	unsigned long order;
+	int ret = -EINVAL;
+
+	/* Verify that none of our __u64 fields overflow */
+	if (unmap->size != size || unmap->iova != iova)
+		return ret;
+
+	order = __ffs(vfio_pgsize_bitmap(iommu));
+	mask = ((uint64_t)1 << order) - 1;
+
+	WARN_ON(mask & PAGE_MASK);
+
+	if (!size || (size | iova) & mask)
+		return ret;
+
+	/* Don't allow IOVA address wrap */
+	if (iova + size - 1 < iova)
+		return ret;
+
+	mutex_lock(&iommu->lock);
+
+	dma = vfio_find_dma(iommu, iova, size, VFIO_IOVA_RESERVED);
+
+	if (dma && (dma->iova == iova) && (dma->size == size)) {
+		unmap->size = dma->size;
+		vfio_remove_dma(iommu, dma);
+		ret = 0;
+		goto unlock;
+	}
+	unmap->size = 0;
+unlock:
+	mutex_unlock(&iommu->lock);
+	return ret;
+}
+
 static int vfio_bus_type(struct device *dev, void *data)
 {
 	struct bus_type **bus = data;
@@ -1074,7 +1199,8 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 	} else if (cmd == VFIO_IOMMU_MAP_DMA) {
 		struct vfio_iommu_type1_dma_map map;
 		uint32_t mask = VFIO_DMA_MAP_FLAG_READ |
-				VFIO_DMA_MAP_FLAG_WRITE;
+				VFIO_DMA_MAP_FLAG_WRITE |
+				VFIO_DMA_MAP_FLAG_RESERVED_MSI_IOVA;
 
 		minsz = offsetofend(struct vfio_iommu_type1_dma_map, size);
 
@@ -1084,6 +1210,9 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 		if (map.argsz < minsz || map.flags & ~mask)
 			return -EINVAL;
 
+		if (map.flags & VFIO_DMA_MAP_FLAG_RESERVED_MSI_IOVA)
+			return vfio_register_reserved_iova_range(iommu, &map);
+
 		return vfio_dma_do_map(iommu, &map);
 
 	} else if (cmd == VFIO_IOMMU_UNMAP_DMA) {
@@ -1098,6 +1227,13 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 		if (unmap.argsz < minsz || unmap.flags)
 			return -EINVAL;
 
+		if (unmap.flags & VFIO_DMA_MAP_FLAG_RESERVED_MSI_IOVA) {
+			ret = vfio_unregister_reserved_iova_range(iommu,
+								  &unmap);
+			if (ret)
+				return ret;
+		}
+
 		ret = vfio_dma_do_unmap(iommu, &unmap);
 		if (ret)
 			return ret;
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 255a211..0637f35 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -498,12 +498,18 @@ struct vfio_iommu_type1_info {
  *
  * Map process virtual addresses to IO virtual addresses using the
  * provided struct vfio_dma_map. Caller sets argsz. READ &/ WRITE required.
+ *
+ * In case RESERVED_MSI_IOVA flag is set, the API only aims at registering an
+ * IOVA region that will be used on some platforms to map the host MSI frames.
+ * In that specific case, vaddr is ignored.
  */
 struct vfio_iommu_type1_dma_map {
 	__u32	argsz;
 	__u32	flags;
 #define VFIO_DMA_MAP_FLAG_READ (1 << 0)		/* readable from device */
 #define VFIO_DMA_MAP_FLAG_WRITE (1 << 1)	/* writable from device */
+/* reserved iova for MSI vectors */
+#define VFIO_DMA_MAP_FLAG_RESERVED_MSI_IOVA (1 << 2)
 	__u64	vaddr;				/* Process virtual address */
 	__u64	iova;				/* IO virtual address */
 	__u64	size;				/* Size of mapping (bytes) */
@@ -519,7 +525,8 @@ struct vfio_iommu_type1_dma_map {
  * Caller sets argsz. The actual unmapped size is returned in the size
 * field. No guarantee is made to the user that arbitrary unmaps of iova
 * or size different from those used in the original mapping call will
- * succeed.
+ * succeed. A Reserved DMA region must be unmapped with RESERVED_MSI_IOVA
+ * flag set.
 */
 struct vfio_iommu_type1_dma_unmap {
 	__u32	argsz;