From patchwork Fri Oct 25 11:21:17 2013
X-Patchwork-Submitter: "Wu, Feng"
X-Patchwork-Id: 3095481
From: "Wu, Feng"
To: "kvm@vger.kernel.org"
Cc: "alex.williamson@redhat.com", "Zhang, Yang Z"
Subject: [PATCH] KVM: Return the actual unmapped size in intel_iommu_unmap()
Date: Fri, 25 Oct 2013 11:21:17 +0000

intel_iommu_unmap() should return the size that was actually unmapped,
because its caller, iommu_unmap(), relies on that value to decide how
much of the requested range still remains to be unmapped. In the
current logic, however, the return value of intel_iommu_unmap() can be
far smaller than the actual unmapped size: it returns
PAGE_SIZE << order with the order reported by dma_pte_clear_range(),
which for ordinary 4KiB mappings is just a single page. This leads to
a lot of redundant calls to intel_iommu_unmap() from iommu_unmap().
Since dma_pte_clear_range() always successfully unmaps the whole range
from "start_pfn" to "last_pfn", it is safe for intel_iommu_unmap() to
simply return "size".
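For context, here is a simplified sketch of the generic unmap loop that
consumes this return value, modeled on iommu_unmap() in
drivers/iommu/iommu.c around v3.12 (sanity checks and debug output are
trimmed, so this is an approximation rather than the verbatim kernel
source):

size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova,
		   size_t size)
{
	size_t unmapped_page, unmapped = 0;

	while (unmapped < size) {
		/* The driver reports how much it actually unmapped. */
		unmapped_page = domain->ops->unmap(domain, iova,
						   size - unmapped);
		if (!unmapped_page)
			break;

		/*
		 * The loop advances by exactly the reported amount, so
		 * a too-small return value (e.g. one page) forces one
		 * ->unmap() call per page even when the driver has in
		 * fact already cleared the whole range.
		 */
		iova += unmapped_page;
		unmapped += unmapped_page;
	}

	return unmapped;
}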
Signed-off-by: Feng Wu
---
 drivers/iommu/intel-iommu.c |    5 ++---
 1 files changed, 2 insertions(+), 3 deletions(-)

--
1.7.1

BTW: Here is the only place where the return value of
dma_pte_clear_range() is used; if we don't use it here either, maybe we
can make it a void function (a sketch of what that might look like
follows the diff).

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 15e9b57..bb795d5 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -4113,15 +4113,14 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
 			     unsigned long iova, size_t size)
 {
 	struct dmar_domain *dmar_domain = domain->priv;
-	int order;
 
-	order = dma_pte_clear_range(dmar_domain, iova >> VTD_PAGE_SHIFT,
+	dma_pte_clear_range(dmar_domain, iova >> VTD_PAGE_SHIFT,
 			    (iova + size - 1) >> VTD_PAGE_SHIFT);
 
 	if (dmar_domain->max_addr == iova + size)
 		dmar_domain->max_addr = iova;
 
-	return PAGE_SIZE << order;
+	return size;
 }
 
 static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
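As a rough illustration of that follow-up idea (a hypothetical,
untested sketch against a tree with this patch applied; the body of the
page-table walk is elided, not reproduced):

static void dma_pte_clear_range(struct dmar_domain *domain,
				unsigned long start_pfn,
				unsigned long last_pfn)
{
	/*
	 * Hypothetical follow-up, not part of this patch: walk the
	 * page tables and clear the PTEs covering start_pfn..last_pfn
	 * exactly as before, just without computing and returning the
	 * page order, since this patch removed the last caller that
	 * looked at it.
	 */
}

Since this patch removes the only user of the return value, such a
change would be mechanical: switch the return type to void and drop the
order bookkeeping inside the function.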