From patchwork Fri Oct 30 10:08:07 2020
X-Patchwork-Submitter: Daniel Vetter
X-Patchwork-Id: 11869059
From: Daniel Vetter
To: DRI Development, LKML
Cc: kvm@vger.kernel.org, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org,
 linux-media@vger.kernel.org, Daniel Vetter, Jason Gunthorpe,
 Dan Williams, Kees Cook, Benjamin Herrenschmidt, Dave Airlie,
 Andrew Morton, John Hubbard, Jérôme Glisse, Jan Kara, Chris Wilson
Subject: [PATCH v5 07/15] mm: Close race in generic_access_phys
Date: Fri, 30 Oct 2020 11:08:07 +0100
Message-Id: <20201030100815.2269-8-daniel.vetter@ffwll.ch>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201030100815.2269-1-daniel.vetter@ffwll.ch>
References: <20201030100815.2269-1-daniel.vetter@ffwll.ch>

Way back it was a reasonable assumption that iomem mappings never
change the pfn range they point at. But this has changed:

- gpu drivers dynamically manage their memory nowadays, invalidating
  ptes with unmap_mapping_range when buffers get moved (see the sketch
  below)

- contiguous dma allocations have moved from dedicated carveouts to
  cma regions. This means if we miss the unmap the pfn might contain
  pagecache or anon memory (well, anything allocated with __GFP_MOVABLE)

- even /dev/mem now invalidates mappings when the kernel requests that
  iomem region when CONFIG_IO_STRICT_DEVMEM is set, see 3234ac664a87
  ("/dev/mem: Revoke mappings when a driver claims the region")

Accessing pfns obtained from ptes without holding all the locks is
therefore no longer a good idea. Fix this.

Since ioremap might need to manipulate pagetables too we need to drop
the pt lock and have a retry loop if we raced.
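The driver-side revocation behind the first point looks roughly like
this minimal sketch (the my_drv_* names are invented for illustration;
only unmap_mapping_range is the real interface):

	#include <linux/mm.h>

	/* Illustrative only: my_drv_buffer and its fields are made up. */
	static void my_drv_move_buffer(struct my_drv_buffer *buf)
	{
		/* Zap every userspace pte pointing at the old backing
		 * storage before the buffer moves; a concurrent
		 * generic_access_phys() must notice this and retry
		 * rather than keep using the stale pfn.
		 */
		unmap_mapping_range(buf->dev_mapping, buf->mmap_offset,
				    buf->size, 1);
	}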
While at it, also add kerneldoc and improve the comment for the
vm_ops->access function. It's for accessing, not for moving the memory
from iomem to system memory, as the old comment seemed to suggest.

References: 28b2ee20c7cb ("access_process_vm device memory infrastructure")
Signed-off-by: Daniel Vetter
Cc: Jason Gunthorpe
Cc: Dan Williams
Cc: Kees Cook
Cc: Benjamin Herrenschmidt
Cc: Dave Airlie
Cc: Andrew Morton
Cc: John Hubbard
Cc: Jérôme Glisse
Cc: Jan Kara
Cc: linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-samsung-soc@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: Chris Wilson
---
v2: Fix inversion in the retry check (John).
v4: While at it, use offset_in_page (Chris Wilson)
---
 include/linux/mm.h |  3 ++-
 mm/memory.c        | 46 +++++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 45 insertions(+), 4 deletions(-)
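For context, one way to exercise this path from userspace is
PTRACE_PEEKDATA on an address inside a tracee's VM_IO/VM_PFNMAP
mapping: access_process_vm() cannot get_user_pages() such memory and
falls back to vma->vm_ops->access. A hypothetical sketch, assuming the
tracee is already attached and stopped (the error mapping is
approximate; a refused access typically surfaces as errno EIO):

	#include <errno.h>
	#include <sys/ptrace.h>
	#include <sys/types.h>

	/* Peek one word from the tracee's iomem mapping; kernel-side
	 * this lands in generic_access_phys() once a driver wires the
	 * helper into its vm_ops.
	 */
	static long peek_iomem_word(pid_t pid, void *addr)
	{
		errno = 0;
		long word = ptrace(PTRACE_PEEKDATA, pid, addr, NULL);
		if (word == -1 && errno)
			return -1;	/* inspect errno for the cause */
		return word;
	}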
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 179dbb78d08d..83d0be101a38 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -574,7 +574,8 @@ struct vm_operations_struct {
 	vm_fault_t (*pfn_mkwrite)(struct vm_fault *vmf);
 
 	/* called by access_process_vm when get_user_pages() fails, typically
-	 * for use by special VMAs that can switch between memory and hardware
+	 * for use by special VMAs. See also generic_access_phys() for a generic
+	 * implementation useful for any iomem mapping.
 	 */
 	int (*access)(struct vm_area_struct *vma, unsigned long addr,
 		      void *buf, int len, int write);
diff --git a/mm/memory.c b/mm/memory.c
index c48f8df6e502..ac32039ce941 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4847,28 +4847,68 @@ int follow_phys(struct vm_area_struct *vma,
 	return ret;
 }
 
+/**
+ * generic_access_phys - generic implementation for iomem mmap access
+ * @vma: the vma to access
+ * @addr: userspace address, not relative offset within @vma
+ * @buf: buffer to read/write
+ * @len: length of transfer
+ * @write: set to FOLL_WRITE when writing, otherwise reading
+ *
+ * This is a generic implementation for &vm_operations_struct.access for an
+ * iomem mapping. This callback is used by access_process_vm() when the @vma is
+ * not page based.
+ */
 int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
 			void *buf, int len, int write)
 {
 	resource_size_t phys_addr;
 	unsigned long prot = 0;
 	void __iomem *maddr;
-	int offset = addr & (PAGE_SIZE-1);
+	pte_t *ptep, pte;
+	spinlock_t *ptl;
+	int offset = offset_in_page(addr);
+	int ret = -EINVAL;
+
+	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
+		return -EINVAL;
+
+retry:
+	if (follow_pte(vma->vm_mm, addr, &ptep, &ptl))
+		return -EINVAL;
+	pte = *ptep;
+	pte_unmap_unlock(ptep, ptl);
 
-	if (follow_phys(vma, addr, write, &prot, &phys_addr))
+	prot = pgprot_val(pte_pgprot(pte));
+	phys_addr = (resource_size_t)pte_pfn(pte) << PAGE_SHIFT;
+
+	if ((write & FOLL_WRITE) && !pte_write(pte))
 		return -EINVAL;
 
 	maddr = ioremap_prot(phys_addr, PAGE_ALIGN(len + offset), prot);
 	if (!maddr)
 		return -ENOMEM;
 
+	if (follow_pte(vma->vm_mm, addr, &ptep, &ptl))
+		goto out_unmap;
+
+	if (!pte_same(pte, *ptep)) {
+		pte_unmap_unlock(ptep, ptl);
+		iounmap(maddr);
+
+		goto retry;
+	}
+
 	if (write)
 		memcpy_toio(maddr + offset, buf, len);
 	else
 		memcpy_fromio(buf, maddr + offset, len);
+	ret = len;
+	pte_unmap_unlock(ptep, ptl);
+out_unmap:
 	iounmap(maddr);
 
-	return len;
+	return ret;
 }
 EXPORT_SYMBOL_GPL(generic_access_phys);
 #endif
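The kerneldoc above describes the intended hookup: drivers point the
->access callback of their iomem mapping's vm_ops at this helper. A
minimal sketch (my_drv_* names are illustrative; generic_access_phys()
only exists on architectures with CONFIG_HAVE_IOREMAP_PROT):

	#include <linux/fs.h>
	#include <linux/mm.h>

	static const struct vm_operations_struct my_drv_iomem_vm_ops = {
		.access = generic_access_phys,	/* the helper above */
	};

	static int my_drv_mmap(struct file *file, struct vm_area_struct *vma)
	{
		vma->vm_ops = &my_drv_iomem_vm_ops;
		/* io_remap_pfn_range() sets VM_IO | VM_PFNMAP, which
		 * generic_access_phys() insists on. Using vm_pgoff as the
		 * pfn is illustrative; a real driver maps its own BAR.
		 */
		return io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
					  vma->vm_end - vma->vm_start,
					  vma->vm_page_prot);
	}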