From patchwork Fri Dec 6 03:21:08 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jiang Liu <jiang.liu@linux.intel.com>
X-Patchwork-Id: 3292871
X-Patchwork-Delegate: bhelgaas@google.com
From: Jiang Liu <jiang.liu@linux.intel.com>
To: Joerg Roedel, David Woodhouse, Dan Williams, Vinod Koul, Ashok Raj,
	Yijing Wang, Jiri Kosina, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: Jiang Liu, Tony Luck, Yinghai Lu, linux-pci@vger.kernel.org,
	dmaengine@vger.kernel.org
Subject: [Patch Part1 V2 05/20] iommu/vt-d, trivial: refine support of 64bit guest address
Date: Fri, 6 Dec 2013 11:21:08 +0800
Message-Id: <1386300083-6882-6-git-send-email-jiang.liu@linux.intel.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1386300083-6882-1-git-send-email-jiang.liu@linux.intel.com>
References: <1386300083-6882-1-git-send-email-jiang.liu@linux.intel.com>
X-Mailing-List: linux-pci@vger.kernel.org

The Intel IOMMU driver calculates the page table level from the adjusted
guest address width (agaw) as 'level = (agaw - 30) / 9', which assumes
that (agaw - 30) is evenly divisible by 9. However, 64 is also a valid
agaw, and (64 - 30) is not divisible by 9, so it needs special handling.
This patch enhances the Intel IOMMU driver to handle a 64-bit agaw
correctly. The change is mainly for code readability, since no hardware
supports a 64-bit agaw yet.
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
---
 drivers/iommu/intel-iommu.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 2398876..70bc071 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -63,6 +63,7 @@
 #define DEFAULT_DOMAIN_ADDRESS_WIDTH 48
 
 #define MAX_AGAW_WIDTH 64
+#define MAX_AGAW_PFN_WIDTH	(MAX_AGAW_WIDTH - VTD_PAGE_SHIFT)
 
 #define __DOMAIN_MAX_PFN(gaw)  ((((uint64_t)1) << (gaw-VTD_PAGE_SHIFT)) - 1)
 #define __DOMAIN_MAX_ADDR(gaw) ((((uint64_t)1) << gaw) - 1)
@@ -106,12 +107,12 @@ static inline int agaw_to_level(int agaw)
 
 static inline int agaw_to_width(int agaw)
 {
-	return 30 + agaw * LEVEL_STRIDE;
+	return min_t(int, 30 + agaw * LEVEL_STRIDE, MAX_AGAW_WIDTH);
 }
 
 static inline int width_to_agaw(int width)
 {
-	return (width - 30) / LEVEL_STRIDE;
+	return DIV_ROUND_UP(width - 30, LEVEL_STRIDE);
 }
 
 static inline unsigned int level_to_offset_bits(int level)
@@ -141,7 +142,7 @@ static inline unsigned long align_to_level(unsigned long pfn, int level)
 
 static inline unsigned long lvl_to_nr_pages(unsigned int lvl)
 {
-	return 1 << ((lvl - 1) * LEVEL_STRIDE);
+	return 1 << min_t(int, (lvl - 1) * LEVEL_STRIDE, MAX_AGAW_PFN_WIDTH);
 }
 
 /* VT-d pages must always be _smaller_ than MM pages. Otherwise things
@@ -865,7 +866,6 @@ static int dma_pte_clear_range(struct dmar_domain *domain,
 	int addr_width = agaw_to_width(domain->agaw) - VTD_PAGE_SHIFT;
 	unsigned int large_page = 1;
 	struct dma_pte *first_pte, *pte;
-	int order;
 
 	BUG_ON(addr_width < BITS_PER_LONG && start_pfn >> addr_width);
 	BUG_ON(addr_width < BITS_PER_LONG && last_pfn >> addr_width);
@@ -890,8 +890,7 @@ static int dma_pte_clear_range(struct dmar_domain *domain,
 
 	} while (start_pfn && start_pfn <= last_pfn);
 
-	order = (large_page - 1) * 9;
-	return order;
+	return min_t(int, (large_page - 1) * 9, MAX_AGAW_PFN_WIDTH);
 }
 
 static void dma_pte_free_level(struct dmar_domain *domain, int level,
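
For reference, the agaw arithmetic can be sanity-checked in userspace with
a small standalone program. This is an illustrative sketch, not part of the
patch: the constants mirror drivers/iommu/intel-iommu.c, but the old_/new_
helper names and the test harness are made up for the demo.

/*
 * Standalone sketch comparing the old truncating division against the
 * rounded-up-and-clamped conversion introduced by this patch.
 */
#include <stdio.h>

#define LEVEL_STRIDE		9
#define MAX_AGAW_WIDTH		64

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

static int old_width_to_agaw(int width)
{
	return (width - 30) / LEVEL_STRIDE;		/* truncates: 64 -> 3 */
}

static int new_width_to_agaw(int width)
{
	return DIV_ROUND_UP(width - 30, LEVEL_STRIDE);	/* rounds up: 64 -> 4 */
}

static int new_agaw_to_width(int agaw)
{
	int width = 30 + agaw * LEVEL_STRIDE;

	/* clamp 30 + 4 * 9 = 66 back down to the architectural maximum */
	return width < MAX_AGAW_WIDTH ? width : MAX_AGAW_WIDTH;
}

int main(void)
{
	static const int widths[] = { 39, 48, 57, 64 };
	unsigned int i;

	for (i = 0; i < sizeof(widths) / sizeof(widths[0]); i++) {
		int w = widths[i];
		int agaw = new_width_to_agaw(w);

		printf("width %2d: old agaw %d, new agaw %d, round-trip width %d\n",
		       w, old_width_to_agaw(w), agaw, new_agaw_to_width(agaw));
	}
	return 0;
}

The last output line ("width 64: old agaw 3, new agaw 4, round-trip width
64") shows the problem being fixed: the truncating division maps a 64-bit
width to agaw 3, i.e. a 57-bit page table, while the new code rounds up to
agaw 4 and clamps the round-trip width back to the 64-bit limit.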