From patchwork Mon Mar 2 16:58:25 2015
X-Patchwork-Submitter: Baptiste Reynal
X-Patchwork-Id: 5915841
From: Baptiste Reynal
To: iommu@lists.linux-foundation.org, kvmarm@lists.cs.columbia.edu
Cc: tech@virtualopensystems.com, Antonios Motakis, Baptiste Reynal,
 Alex Williamson, kvm@vger.kernel.org (open list:VFIO DRIVER),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v4 3/5] vfio: type1: replace domain wide protection flags
 with supported capabilities
Date: Mon, 2 Mar 2015 17:58:25 +0100
Message-Id: <1425315507-29661-4-git-send-email-b.reynal@virtualopensystems.com>
X-Mailer: git-send-email 2.3.1
In-Reply-To: <1425315507-29661-1-git-send-email-b.reynal@virtualopensystems.com>
References: <1425315507-29661-1-git-send-email-b.reynal@virtualopensystems.com>

From: Antonios Motakis

The VFIO_IOMMU_TYPE1 driver keeps track, for each domain it knows about,
of a set of protection flags that it always applies to all mappings in
that domain. This is used for domains that support
IOMMU_CAP_CACHE_COHERENCY.
Refactor this slightly: instead, keep track of whether a given domain
supports the capability, and apply the IOMMU_CACHE protection flag when
doing the actual DMA mappings. This will allow us to reuse the behavior
for IOMMU_CAP_NOEXEC, which we also want to keep track of, but without
applying it to all domains that support it unless the user explicitly
requests it.

Signed-off-by: Antonios Motakis
[Baptiste Reynal: Use bit shifting for domain->caps]
Signed-off-by: Baptiste Reynal
---
 drivers/vfio/vfio_iommu_type1.c | 31 ++++++++++++++++++++++---------
 1 file changed, 22 insertions(+), 9 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 57d8c37..998619b 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -65,7 +65,7 @@ struct vfio_domain {
 	struct iommu_domain	*domain;
 	struct list_head	next;
 	struct list_head	group_list;
-	int			prot;		/* IOMMU_CACHE */
+	int			caps;
 	bool			fgsp;	/* Fine-grained super pages */
 };
 
@@ -507,7 +507,7 @@ static int map_try_harder(struct vfio_domain *domain, dma_addr_t iova,
 	for (i = 0; i < npage; i++, pfn++, iova += PAGE_SIZE) {
 		ret = iommu_map(domain->domain, iova,
 				(phys_addr_t)pfn << PAGE_SHIFT,
-				PAGE_SIZE, prot | domain->prot);
+				PAGE_SIZE, prot);
 		if (ret)
 			break;
 	}
@@ -525,11 +525,16 @@ static int vfio_iommu_map(struct vfio_iommu *iommu, dma_addr_t iova,
 	int ret;
 
 	list_for_each_entry(d, &iommu->domain_list, next) {
+		int dprot = prot;
+
+		if (d->caps & (1 << IOMMU_CAP_CACHE_COHERENCY))
+			dprot |= IOMMU_CACHE;
+
 		ret = iommu_map(d->domain, iova, (phys_addr_t)pfn << PAGE_SHIFT,
-				npage << PAGE_SHIFT, prot | d->prot);
+				npage << PAGE_SHIFT, dprot);
 		if (ret) {
 			if (ret != -EBUSY ||
-			    map_try_harder(d, iova, pfn, npage, prot))
+			    map_try_harder(d, iova, pfn, npage, dprot))
 				goto unwind;
 		}
 
@@ -644,6 +649,10 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
 	struct vfio_domain *d;
 	struct rb_node *n;
 	int ret;
+	int dprot = 0;
+
+	if (domain->caps & (1 << IOMMU_CAP_CACHE_COHERENCY))
+		dprot |= IOMMU_CACHE;
 
 	/* Arbitrarily pick the first domain in the list for lookups */
 	d = list_first_entry(&iommu->domain_list, struct vfio_domain, next);
@@ -677,7 +686,7 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
 			size += PAGE_SIZE;
 
 		ret = iommu_map(domain->domain, iova, phys,
-				size, dma->prot | domain->prot);
+				size, dma->prot | dprot);
 		if (ret)
 			return ret;
 
@@ -702,13 +711,17 @@ static void vfio_test_domain_fgsp(struct vfio_domain *domain)
 {
 	struct page *pages;
 	int ret, order = get_order(PAGE_SIZE * 2);
+	int dprot = 0;
+
+	if (domain->caps & (1 << IOMMU_CAP_CACHE_COHERENCY))
+		dprot |= IOMMU_CACHE;
 
 	pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
 	if (!pages)
 		return;
 
 	ret = iommu_map(domain->domain, 0, page_to_phys(pages), PAGE_SIZE * 2,
-			IOMMU_READ | IOMMU_WRITE | domain->prot);
+			IOMMU_READ | IOMMU_WRITE | dprot);
 	if (!ret) {
 		size_t unmapped = iommu_unmap(domain->domain, 0, PAGE_SIZE);
 
@@ -787,7 +800,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	}
 
 	if (iommu_capable(bus, IOMMU_CAP_CACHE_COHERENCY))
-		domain->prot |= IOMMU_CACHE;
+		domain->caps |= (1 << IOMMU_CAP_CACHE_COHERENCY);
 
 	/*
 	 * Try to match an existing compatible domain.  We don't want to
@@ -798,7 +811,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 	 */
 	list_for_each_entry(d, &iommu->domain_list, next) {
 		if (d->domain->ops == domain->domain->ops &&
-		    d->prot == domain->prot) {
+		    d->caps == domain->caps) {
 			iommu_detach_group(domain->domain, iommu_group);
 			if (!iommu_attach_group(d->domain, iommu_group)) {
 				list_add(&group->next, &d->group_list);
@@ -942,7 +955,7 @@ static int vfio_domains_have_iommu_cache(struct vfio_iommu *iommu)
 
 	mutex_lock(&iommu->lock);
 	list_for_each_entry(domain, &iommu->domain_list, next) {
-		if (!(domain->prot & IOMMU_CACHE)) {
+		if (!(domain->caps & (1 << IOMMU_CAP_CACHE_COHERENCY))) {
 			ret = 0;
 			break;
 		}
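
For readers skimming the diff, the pattern the patch repeats at each
mapping site can be summed up in a short, self-contained sketch. This is
illustrative only and not part of the patch; the demo_* names are
hypothetical, while IOMMU_CACHE, enum iommu_cap and iommu_capable() are
the existing definitions from include/linux/iommu.h.

#include <linux/device.h>
#include <linux/iommu.h>

/* Illustrative stand-in for the relevant part of struct vfio_domain:
 * caps holds one bit per supported enum iommu_cap value. */
struct demo_domain {
	struct iommu_domain	*domain;
	int			caps;	/* e.g. 1 << IOMMU_CAP_CACHE_COHERENCY */
};

/* At attach time, record the capability instead of a ready-made
 * protection mask, as the attach_group hunk above now does. */
static void demo_record_caps(struct demo_domain *d, struct bus_type *bus)
{
	if (iommu_capable(bus, IOMMU_CAP_CACHE_COHERENCY))
		d->caps |= 1 << IOMMU_CAP_CACHE_COHERENCY;
}

/* At map time, translate the recorded capability back into protection
 * flags, mirroring the local dprot variables introduced in
 * vfio_iommu_map(), vfio_iommu_replay() and vfio_test_domain_fgsp(). */
static int demo_dprot(struct demo_domain *d, int prot)
{
	int dprot = prot;

	if (d->caps & (1 << IOMMU_CAP_CACHE_COHERENCY))
		dprot |= IOMMU_CACHE;

	return dprot;
}

Keeping caps as a bitmask of iommu_cap values, rather than pre-mixed
IOMMU_* protection bits, is what should allow IOMMU_CAP_NOEXEC support to
be recorded per domain without forcing IOMMU_NOEXEC onto every mapping, as
the commit message notes.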