From patchwork Fri Mar 13 08:07:15 2015
From: Alexey Kardashevskiy <aik@ozlabs.ru>
To: linuxppc-dev@lists.ozlabs.org
Cc: Alexey Kardashevskiy <aik@ozlabs.ru>, Benjamin Herrenschmidt,
	Paul Mackerras, Alex Williamson, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH kernel v6 07/29] vfio: powerpc/spapr: Moving pinning/unpinning to helpers
Date: Fri, 13 Mar 2015 19:07:15 +1100
Message-Id: <1426234057-16165-8-git-send-email-aik@ozlabs.ru>
In-Reply-To: <1426234057-16165-1-git-send-email-aik@ozlabs.ru>
References: <1426234057-16165-1-git-send-email-aik@ozlabs.ru>
This is a pretty mechanical patch to make the next patches simpler.

The new tce_iommu_unuse_page() helper does put_page() now, but it may
skip that once the memory registering patch is applied.

While we are here, this removes an unnecessary check of the value
returned by pfn_to_page() as it cannot possibly return NULL.

This moves tce_iommu_disable() later to let tce_iommu_clear() know
whether the container has been enabled: if it has not been, put_page()
must not be called on TCEs from the TCE table. This situation is not
possible yet, but it will be once the KVM acceleration patchset is
applied.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v6:
* tce_get_hva() returns hva via a pointer
---
 drivers/vfio/vfio_iommu_spapr_tce.c | 69 +++++++++++++++++++++++++++----------
 1 file changed, 51 insertions(+), 18 deletions(-)
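The contract the new helper enforces can be summarized in a minimal
standalone sketch (plain userspace C, not kernel code; the mock_* names
and the flag values are made up for illustration, they are not the
kernel's TCE_PCI_* definitions). A cleared TCE only gets a put_page()
if the container had been enabled, which is exactly why
tce_iommu_release() in the patch below now clears the table before
disabling the container:

#include <stdbool.h>
#include <stdio.h>

#define MOCK_TCE_READ	0x1UL	/* illustrative permission bits only */
#define MOCK_TCE_WRITE	0x2UL

struct mock_container {
	bool enabled;
};

/* Mirrors the tce_iommu_unuse_page() logic from the patch below */
static void mock_unuse_page(struct mock_container *c, unsigned long oldtce)
{
	/* An entry with no permission bits never mapped anything */
	if (!(oldtce & (MOCK_TCE_READ | MOCK_TCE_WRITE)))
		return;

	/*
	 * If the container was never enabled, VFIO never pinned the
	 * page (a future KVM user maps via pre-registered memory), so
	 * there is nothing to release.
	 */
	if (!c->enabled)
		return;

	printf("put_page() for tce 0x%lx\n", oldtce);
}

int main(void)
{
	struct mock_container c = { .enabled = true };

	mock_unuse_page(&c, 0x40000000UL | MOCK_TCE_WRITE);	/* released */
	c.enabled = false;				/* container disabled */
	mock_unuse_page(&c, 0x40010000UL | MOCK_TCE_WRITE);	/* skipped */

	return 0;
}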
diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
index 8a667cb..be693ca 100644
--- a/drivers/vfio/vfio_iommu_spapr_tce.c
+++ b/drivers/vfio/vfio_iommu_spapr_tce.c
@@ -198,7 +198,6 @@ static void tce_iommu_release(void *iommu_data)
 	struct iommu_table *tbl = container->tbl;
 
 	WARN_ON(tbl && !tbl->it_group);
-	tce_iommu_disable(container);
 
 	if (tbl) {
 		tce_iommu_clear(container, tbl, tbl->it_offset, tbl->it_size);
@@ -206,63 +205,97 @@ static void tce_iommu_release(void *iommu_data)
 		if (tbl->it_group)
 			tce_iommu_detach_group(iommu_data, tbl->it_group);
 	}
+
+	tce_iommu_disable(container);
+
 	mutex_destroy(&container->lock);
 
 	kfree(container);
 }
 
+static void tce_iommu_unuse_page(struct tce_container *container,
+		unsigned long oldtce)
+{
+	struct page *page;
+
+	if (!(oldtce & (TCE_PCI_READ | TCE_PCI_WRITE)))
+		return;
+
+	/*
+	 * VFIO cannot map/unmap when a container is not enabled so
+	 * we would not need this check but KVM could map/unmap and if
+	 * this happened, we must not put pages as KVM does not get them as
+	 * it expects memory pre-registration to do this part.
+	 */
+	if (!container->enabled)
+		return;
+
+	page = pfn_to_page(__pa(oldtce) >> PAGE_SHIFT);
+
+	if (oldtce & TCE_PCI_WRITE)
+		SetPageDirty(page);
+
+	put_page(page);
+}
+
 static int tce_iommu_clear(struct tce_container *container,
 		struct iommu_table *tbl,
 		unsigned long entry, unsigned long pages)
 {
 	unsigned long oldtce;
-	struct page *page;
 
 	for ( ; pages; --pages, ++entry) {
 		oldtce = iommu_clear_tce(tbl, entry);
 		if (!oldtce)
 			continue;
 
-		page = pfn_to_page(oldtce >> PAGE_SHIFT);
-		WARN_ON(!page);
-		if (page) {
-			if (oldtce & TCE_PCI_WRITE)
-				SetPageDirty(page);
-			put_page(page);
-		}
+		tce_iommu_unuse_page(container, (unsigned long) __va(oldtce));
 	}
 
 	return 0;
 }
 
+static int tce_get_hva(struct tce_container *container,
+		unsigned page_shift, unsigned long tce, unsigned long *hva)
+{
+	struct page *page = NULL;
+	enum dma_data_direction direction = iommu_tce_direction(tce);
+
+	if (get_user_pages_fast(tce & PAGE_MASK, 1,
+			direction != DMA_TO_DEVICE, &page) != 1)
+		return -EFAULT;
+
+	*hva = (unsigned long) page_address(page);
+
+	return 0;
+}
+
 static long tce_iommu_build(struct tce_container *container,
 		struct iommu_table *tbl,
 		unsigned long entry, unsigned long tce, unsigned long pages)
 {
 	long i, ret = 0;
-	struct page *page = NULL;
+	struct page *page;
 	unsigned long hva;
 	enum dma_data_direction direction = iommu_tce_direction(tce);
 
 	for (i = 0; i < pages; ++i) {
-		ret = get_user_pages_fast(tce & PAGE_MASK, 1,
-				direction != DMA_TO_DEVICE, &page);
-		if (unlikely(ret != 1)) {
-			ret = -EFAULT;
+		ret = tce_get_hva(container, tbl->it_page_shift, tce, &hva);
+		if (ret)
 			break;
-		}
 
+		page = pfn_to_page(__pa(hva) >> PAGE_SHIFT);
 		if (!tce_page_is_contained(page, tbl->it_page_shift)) {
 			ret = -EPERM;
 			break;
 		}
 
-		hva = (unsigned long) page_address(page) +
-				(tce & IOMMU_PAGE_MASK(tbl) & ~PAGE_MASK);
+		/* Preserve offset within IOMMU page */
+		hva |= tce & IOMMU_PAGE_MASK(tbl) & ~PAGE_MASK;
 
 		ret = iommu_tce_build(tbl, entry + i, hva, direction);
 		if (ret) {
-			put_page(page);
+			tce_iommu_unuse_page(container, hva);
 			pr_err("iommu_tce: %s failed ioba=%lx, tce=%lx, ret=%ld\n",
 					__func__, entry << tbl->it_page_shift,
 					tce, ret);
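The only non-obvious arithmetic above is how the hva returned by
tce_get_hva() is combined with the TCE. A minimal standalone sketch
(plain userspace C, not kernel code; SYS_PAGE_SHIFT and compose_hva()
are hypothetical names), assuming 64K system pages and 4K IOMMU pages:
tce_get_hva() returns a system-page-aligned address, so the TCE bits
between the two page sizes are OR'ed back in to pick the right 4K chunk
inside the 64K page.

#include <stdio.h>

#define SYS_PAGE_SHIFT	16UL	/* assume 64K system pages */
#define SYS_PAGE_MASK	(~((1UL << SYS_PAGE_SHIFT) - 1))

/* Mirrors: hva |= tce & IOMMU_PAGE_MASK(tbl) & ~PAGE_MASK; */
static unsigned long compose_hva(unsigned long hva, unsigned long tce,
		unsigned long it_page_shift)
{
	unsigned long iommu_page_mask = ~((1UL << it_page_shift) - 1);

	return hva | (tce & iommu_page_mask & ~SYS_PAGE_MASK);
}

int main(void)
{
	unsigned long hva = 0x7f0000000000UL;	/* 64K-aligned, from gup */
	unsigned long tce = 0x40003000UL;	/* 4th 4K chunk of its page */

	/* Prints 0x7f0000003000: the 64K hva plus the 0x3000 offset */
	printf("hva for tce 0x%lx -> 0x%lx\n", tce,
			compose_hva(hva, tce, 12));

	return 0;
}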