From patchwork Tue Mar 24 01:14:49 2020
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 11454733
From: Jason Gunthorpe
To: Jerome Glisse, Ralph Campbell, Felix.Kuehling@amd.com
Cc: Philip Yang, John Hubbard, amd-gfx@lists.freedesktop.org,
	linux-mm@kvack.org, Jason Gunthorpe, dri-devel@lists.freedesktop.org,
	Christoph Hellwig
Subject: [PATCH v2 hmm 1/9] mm/hmm: remove pgmap checking for devmap pages
Date: Mon, 23 Mar 2020 22:14:49 -0300
Message-Id: <20200324011457.2817-2-jgg@ziepe.ca>
In-Reply-To: <20200324011457.2817-1-jgg@ziepe.ca>
References: <20200324011457.2817-1-jgg@ziepe.ca>

From: Jason Gunthorpe

The checking boils down to a racy test of whether the pagemap is still
available. Instead of checking this, rely entirely on the notifiers: if a
pagemap is destroyed, then all pages that belong to it must be removed from
the tables and the notifiers triggered.

Reviewed-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
Signed-off-by: Jason Gunthorpe
---
(An illustrative sketch of the notifier-based mirroring pattern this relies
on follows the diff below; it is commentary for this posting only, not part
of the patch.)

 mm/hmm.c | 50 ++------------------------------------------------
 1 file changed, 2 insertions(+), 48 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index a491d9aaafe45d..3a2610e0713329 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -28,7 +28,6 @@
 
 struct hmm_vma_walk {
 	struct hmm_range	*range;
-	struct dev_pagemap	*pgmap;
 	unsigned long		last;
 	unsigned int		flags;
 };
@@ -196,19 +195,8 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
 		return hmm_vma_fault(addr, end, fault, write_fault, walk);
 
 	pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
-	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) {
-		if (pmd_devmap(pmd)) {
-			hmm_vma_walk->pgmap = get_dev_pagemap(pfn,
-					      hmm_vma_walk->pgmap);
-			if (unlikely(!hmm_vma_walk->pgmap))
-				return -EBUSY;
-		}
+	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++)
 		pfns[i] = hmm_device_entry_from_pfn(range, pfn) | cpu_flags;
-	}
-	if (hmm_vma_walk->pgmap) {
-		put_dev_pagemap(hmm_vma_walk->pgmap);
-		hmm_vma_walk->pgmap = NULL;
-	}
 	hmm_vma_walk->last = end;
 	return 0;
 }
@@ -300,15 +288,6 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	if (fault || write_fault)
 		goto fault;
 
-	if (pte_devmap(pte)) {
-		hmm_vma_walk->pgmap = get_dev_pagemap(pte_pfn(pte),
-					      hmm_vma_walk->pgmap);
-		if (unlikely(!hmm_vma_walk->pgmap)) {
-			pte_unmap(ptep);
-			return -EBUSY;
-		}
-	}
-
 	/*
 	 * Since each architecture defines a struct page for the zero page, just
 	 * fall through and treat it like a normal page.
@@ -328,10 +307,6 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	return 0;
 
 fault:
-	if (hmm_vma_walk->pgmap) {
-		put_dev_pagemap(hmm_vma_walk->pgmap);
-		hmm_vma_walk->pgmap = NULL;
-	}
 	pte_unmap(ptep);
 	/* Fault any virtual address we were asked to fault */
 	return hmm_vma_fault(addr, end, fault, write_fault, walk);
@@ -418,16 +393,6 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 				return r;
 		}
 	}
-	if (hmm_vma_walk->pgmap) {
-		/*
-		 * We do put_dev_pagemap() here and not in hmm_vma_handle_pte()
-		 * so that we can leverage get_dev_pagemap() optimization which
-		 * will not re-take a reference on a pgmap if we already have
-		 * one.
-		 */
-		put_dev_pagemap(hmm_vma_walk->pgmap);
-		hmm_vma_walk->pgmap = NULL;
-	}
 	pte_unmap(ptep - 1);
 
 	hmm_vma_walk->last = addr;
@@ -491,20 +456,9 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 		}
 
 		pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
-		for (i = 0; i < npages; ++i, ++pfn) {
-			hmm_vma_walk->pgmap = get_dev_pagemap(pfn,
-					      hmm_vma_walk->pgmap);
-			if (unlikely(!hmm_vma_walk->pgmap)) {
-				ret = -EBUSY;
-				goto out_unlock;
-			}
+		for (i = 0; i < npages; ++i, ++pfn)
 			pfns[i] = hmm_device_entry_from_pfn(range, pfn) |
 				  cpu_flags;
-		}
-		if (hmm_vma_walk->pgmap) {
-			put_dev_pagemap(hmm_vma_walk->pgmap);
-			hmm_vma_walk->pgmap = NULL;
-		}
 		hmm_vma_walk->last = end;
 		goto out_unlock;
 	}
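
For readers following along, the sketch promised above: the commit message's
argument is that ZONE_DEVICE page lifetime is already covered by the mmu
notifiers, so hmm_range_fault() does not need its own pgmap references. Below
is a minimal, hedged sketch of the notifier-plus-retry pattern as used by
hmm_range_fault() callers on the v5.6-era API this series targets. It is not
part of the patch; all my_* names are hypothetical driver code, and several
details (hmm_range_fault()'s flags argument, mmap_sem naming, range->pfns)
have changed in later kernels.

/*
 * Illustrative sketch only -- not part of this patch.
 * mirror->notifier is assumed to have been registered over the mirrored
 * range with mmu_interval_notifier_insert(&mirror->notifier, mm, start,
 * length, &my_notifier_ops).
 */
#include <linux/hmm.h>
#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

struct my_mirror {
	struct mmu_interval_notifier	notifier;
	struct mutex			lock;	/* protects the device page table */
};

/*
 * Runs for any CPU-side invalidation of the monitored range, including the
 * unmapping done when a ZONE_DEVICE pagemap is destroyed. This is why
 * hmm_range_fault() no longer needs to hold pgmap references: stale entries
 * are guaranteed to be shot down through this path.
 */
static bool my_invalidate(struct mmu_interval_notifier *mni,
			  const struct mmu_notifier_range *range,
			  unsigned long cur_seq)
{
	struct my_mirror *mirror = container_of(mni, struct my_mirror, notifier);

	if (mmu_notifier_range_blockable(range))
		mutex_lock(&mirror->lock);
	else if (!mutex_trylock(&mirror->lock))
		return false;

	mmu_interval_set_seq(mni, cur_seq);
	/* ... remove [range->start, range->end) from the device page table ... */
	mutex_unlock(&mirror->lock);
	return true;
}

static const struct mmu_interval_notifier_ops my_notifier_ops = {
	.invalidate = my_invalidate,
};

/* Fault/snapshot CPU PTEs for a pre-filled hmm_range and mirror them. */
static int my_fault_and_mirror(struct my_mirror *mirror, struct hmm_range *range)
{
	struct mm_struct *mm = mirror->notifier.mm;
	long ret;

	range->notifier = &mirror->notifier;
again:
	range->notifier_seq = mmu_interval_read_begin(&mirror->notifier);

	down_read(&mm->mmap_sem);	 /* mmap_read_lock(mm) on newer kernels */
	ret = hmm_range_fault(range, 0); /* newer kernels: hmm_range_fault(range) */
	up_read(&mm->mmap_sem);
	if (ret == -EBUSY)
		goto again;
	if (ret < 0)
		return ret;

	mutex_lock(&mirror->lock);
	if (mmu_interval_read_retry(&mirror->notifier, range->notifier_seq)) {
		/* An invalidation raced with the walk; the pfns are stale. */
		mutex_unlock(&mirror->lock);
		goto again;
	}
	/* ... program the device page table from range->pfns under the lock ... */
	mutex_unlock(&mirror->lock);
	return 0;
}

The notifier_seq handshake is what stands in for the dropped pgmap refcount:
destroying a pagemap unmaps its pages, which fires the invalidate callback,
bumps the sequence, and forces the driver to redo the walk before any stale
pfn can be programmed into the device.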