From patchwork Thu Oct 29 13:59:45 2015
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 7519341
From: Eric Auger
To: eric.auger@st.com, eric.auger@linaro.org, alex.williamson@redhat.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	kvm@vger.kernel.org, will.deacon@arm.com
Cc: suravee.suthikulpanit@amd.com, christoffer.dall@linaro.org,
	linux-kernel@vger.kernel.org, patches@linaro.org
Subject: [PATCH] vfio/type1: handle case where IOMMU does not support PAGE_SIZE size
Date: Thu, 29 Oct 2015 13:59:45 +0000
Message-Id: <1446127185-2096-1-git-send-email-eric.auger@linaro.org>

The current vfio_pgsize_bitmap code hides the supported IOMMU page
sizes smaller than PAGE_SIZE. As a result, when the IOMMU does not
support the PAGE_SIZE page size, the alignment check on map/unmap is
done against larger page sizes, if any. This can fail even though the
mapping could have been done with pages smaller than PAGE_SIZE.

This patch modifies the vfio_pgsize_bitmap implementation so that,
when the IOMMU supports page sizes smaller than PAGE_SIZE, we pretend
PAGE_SIZE is supported and hide the sub-PAGE_SIZE sizes. That way the
user is able to map/unmap buffers whose size and start address are
aligned with PAGE_SIZE. The pinning code uses that granularity, while
the IOMMU driver can use the sub-PAGE_SIZE page sizes to map the
buffer.

Signed-off-by: Eric Auger
Signed-off-by: Alex Williamson
---
This was tested on AMD Seattle with a 64kB-page host. The ARM MMU-401
currently exposes 4kB, 2MB and 1GB page support. With a 64kB-page
host, the map/unmap alignment check was done against 2MB; some
alignment checks failed, so VFIO_IOMMU_MAP_DMA failed even though the
mapping could have been done with the 4kB IOMMU page size.

RFC -> PATCH v1:
- move all modifications into vfio_pgsize_bitmap, following Alex's
  suggestion to expose a fake PAGE_SIZE support
- restore the WARN_ONs
---
 drivers/vfio/vfio_iommu_type1.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 57d8c37..cee504a 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -403,13 +403,26 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
 static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
 {
 	struct vfio_domain *domain;
-	unsigned long bitmap = PAGE_MASK;
+	unsigned long bitmap = ULONG_MAX;
 
 	mutex_lock(&iommu->lock);
 	list_for_each_entry(domain, &iommu->domain_list, next)
 		bitmap &= domain->domain->ops->pgsize_bitmap;
 	mutex_unlock(&iommu->lock);
 
+	/*
+	 * In case the IOMMU supports page sizes smaller than PAGE_SIZE
+	 * we pretend PAGE_SIZE is supported and hide sub-PAGE_SIZE sizes.
+	 * That way the user will be able to map/unmap buffers whose size/
+	 * start address is aligned with PAGE_SIZE. Pinning code uses that
+	 * granularity while iommu driver can use the sub-PAGE_SIZE size
+	 * to map the buffer.
+	 */
+	if (bitmap & ~PAGE_MASK) {
+		bitmap &= PAGE_MASK;
+		bitmap |= PAGE_SIZE;
+	}
+
 	return bitmap;
 }
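
For illustration, here is a minimal standalone sketch of the bitmap
fixup above, using the numbers from the Seattle test note (64kB host
pages, ARM MMU-401 advertising 4kB | 2MB | 1GB). PAGE_SHIFT, PAGE_SIZE
and PAGE_MASK are hardcoded here for a 64kB-page host; this is userspace
demo code, not the kernel implementation:

#include <stdio.h>

/* Illustrative values for a 64kB-page host, as in the Seattle test. */
#define PAGE_SHIFT	16
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))

int main(void)
{
	/* ARM MMU-401: 4kB, 2MB and 1GB page support. */
	unsigned long pgsizes = (1UL << 12) | (1UL << 21) | (1UL << 30);

	/* Old code: seeding with PAGE_MASK drops the 4kB bit up front. */
	unsigned long old_bm = PAGE_MASK & pgsizes;

	/* New code: seed with ULONG_MAX, then apply the fixup. */
	unsigned long new_bm = ~0UL & pgsizes;
	if (new_bm & ~PAGE_MASK) {	/* sub-PAGE_SIZE sizes present */
		new_bm &= PAGE_MASK;	/* hide them ...               */
		new_bm |= PAGE_SIZE;	/* ... and fake PAGE_SIZE      */
	}

	printf("old: 0x%lx, smallest page 0x%lx\n", old_bm, old_bm & -old_bm);
	printf("new: 0x%lx, smallest page 0x%lx\n", new_bm, new_bm & -new_bm);
	return 0;
}

With the old PAGE_MASK seed the 4kB bit is cleared before the fixup can
see it, so the smallest advertised size is 2MB (old: 0x40200000); with
the patch the smallest advertised size becomes the fake 64kB PAGE_SIZE
(new: 0x40210000).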
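
And a sketch of why this changes the map/unmap outcome: the map path
derives its alignment mask from the smallest set bit of the page-size
bitmap (the kernel computes mask = ((uint64_t)1 << __ffs(bitmap)) - 1 in
vfio_dma_do_map). map_is_aligned below is a hypothetical stand-in for
that check, not the actual kernel function:

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical stand-in for the alignment check in vfio_dma_do_map():
 * derive the mask from the smallest advertised page size and require
 * iova, vaddr and size to be aligned to it.
 */
static int map_is_aligned(unsigned long pgsize_bitmap,
			  uint64_t iova, uint64_t vaddr, uint64_t size)
{
	uint64_t mask = (pgsize_bitmap & -pgsize_bitmap) - 1;

	return size && !(size & mask) && !(iova & mask) && !(vaddr & mask);
}

int main(void)
{
	unsigned long old_bm = 0x40200000UL;	/* 2MB | 1GB, pre-patch */
	unsigned long new_bm = 0x40210000UL;	/* + fake 64kB, patched */
	uint64_t iova = 0x10000, vaddr = 0x20000, size = 0x10000;

	printf("pre-patch:  %s\n",
	       map_is_aligned(old_bm, iova, vaddr, size) ? "accepted" : "-EINVAL");
	printf("post-patch: %s\n",
	       map_is_aligned(new_bm, iova, vaddr, size) ? "accepted" : "-EINVAL");
	return 0;
}

Pre-patch, the smallest advertised size on this setup is 2MB, so the
64kB-aligned request above is rejected with -EINVAL; post-patch it is
accepted, and the IOMMU driver remains free to back it with 4kB pages.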