From patchwork Fri Dec 14 11:10:13 2018
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 10730887
From: David Hildenbrand <david@redhat.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-m68k@lists.linux-m68k.org, linuxppc-dev@lists.ozlabs.org,
    linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
    linux-mediatek@lists.infradead.org, David Hildenbrand, Tony Luck,
    Fenghua Yu, Oleg Nesterov, Andrew Morton, David Howells,
    Mike Rapoport, Michal Hocko
Subject: [PATCH v1 8/9] ia64: perfmon: Don't mark buffer pages as PG_reserved
Date: Fri, 14 Dec 2018 12:10:13 +0100
Message-Id: <20181214111014.15672-9-david@redhat.com>
In-Reply-To: <20181214111014.15672-1-david@redhat.com>
References: <20181214111014.15672-1-david@redhat.com>

In the old days, remap_pfn_range() required pages to be marked as
PG_reserved, so that they would, e.g., never get swapped out. This was
required for special mappings. Nowadays, this is fully handled via the
VMA (VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP inside
remap_pfn_range(), to be precise). PG_reserved is no longer required
here; it is only a relic from the past, so at most architecture-specific
MM handling might still rely on it (e.g. to detect such pages as MMIO).
As there are no architecture-specific checks for PageReserved() apart
from MCA handling in ia64 code, this can go. Use simple
vzalloc()/vfree() instead.

Note that before calling vzalloc(), size has already been aligned to
PAGE_SIZE, so there is no need to align again.
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Oleg Nesterov
Cc: Andrew Morton
Cc: David Hildenbrand
Cc: David Howells
Cc: Mike Rapoport
Cc: Michal Hocko
Signed-off-by: David Hildenbrand
---
 arch/ia64/kernel/perfmon.c | 59 +++-----------------------------------
 1 file changed, 4 insertions(+), 55 deletions(-)

diff --git a/arch/ia64/kernel/perfmon.c b/arch/ia64/kernel/perfmon.c
index a9d4dc6c0427..e1b9287dc455 100644
--- a/arch/ia64/kernel/perfmon.c
+++ b/arch/ia64/kernel/perfmon.c
@@ -583,17 +583,6 @@ pfm_put_task(struct task_struct *task)
 	if (task != current) put_task_struct(task);
 }
 
-static inline void
-pfm_reserve_page(unsigned long a)
-{
-	SetPageReserved(vmalloc_to_page((void *)a));
-}
-static inline void
-pfm_unreserve_page(unsigned long a)
-{
-	ClearPageReserved(vmalloc_to_page((void*)a));
-}
-
 static inline unsigned long
 pfm_protect_ctx_ctxsw(pfm_context_t *x)
 {
@@ -817,44 +806,6 @@ pfm_reset_msgq(pfm_context_t *ctx)
 	DPRINT(("ctx=%p msgq reset\n", ctx));
 }
 
-static void *
-pfm_rvmalloc(unsigned long size)
-{
-	void *mem;
-	unsigned long addr;
-
-	size = PAGE_ALIGN(size);
-	mem = vzalloc(size);
-	if (mem) {
-		//printk("perfmon: CPU%d pfm_rvmalloc(%ld)=%p\n", smp_processor_id(), size, mem);
-		addr = (unsigned long)mem;
-		while (size > 0) {
-			pfm_reserve_page(addr);
-			addr+=PAGE_SIZE;
-			size-=PAGE_SIZE;
-		}
-	}
-	return mem;
-}
-
-static void
-pfm_rvfree(void *mem, unsigned long size)
-{
-	unsigned long addr;
-
-	if (mem) {
-		DPRINT(("freeing physical buffer @%p size=%lu\n", mem, size));
-		addr = (unsigned long) mem;
-		while ((long) size > 0) {
-			pfm_unreserve_page(addr);
-			addr+=PAGE_SIZE;
-			size-=PAGE_SIZE;
-		}
-		vfree(mem);
-	}
-	return;
-}
-
 static pfm_context_t *
 pfm_context_alloc(int ctx_flags)
 {
@@ -1499,7 +1450,7 @@ pfm_free_smpl_buffer(pfm_context_t *ctx)
 	/*
 	 * free the buffer
 	 */
-	pfm_rvfree(ctx->ctx_smpl_hdr, ctx->ctx_smpl_size);
+	vfree(ctx->ctx_smpl_hdr);
 
 	ctx->ctx_smpl_hdr = NULL;
 	ctx->ctx_smpl_size = 0UL;
@@ -2138,7 +2089,7 @@ pfm_close(struct inode *inode, struct file *filp)
 	 * All memory free operations (especially for vmalloc'ed memory)
 	 * MUST be done with interrupts ENABLED.
 	 */
-	if (smpl_buf_addr) pfm_rvfree(smpl_buf_addr, smpl_buf_size);
+	vfree(smpl_buf_addr);
 
 	/*
 	 * return the memory used by the context
@@ -2267,10 +2218,8 @@ pfm_smpl_buffer_alloc(struct task_struct *task, struct file *filp, pfm_context_t
 
 	/*
 	 * We do the easy to undo allocations first.
-	 *
-	 * pfm_rvmalloc(), clears the buffer, so there is no leak
 	 */
-	smpl_buf = pfm_rvmalloc(size);
+	smpl_buf = vzalloc(size);
 	if (smpl_buf == NULL) {
 		DPRINT(("Can't allocate sampling buffer\n"));
 		return -ENOMEM;
@@ -2347,7 +2296,7 @@ pfm_smpl_buffer_alloc(struct task_struct *task, struct file *filp, pfm_context_t
 error:
 	vm_area_free(vma);
 error_kmem:
-	pfm_rvfree(smpl_buf, size);
+	vfree(smpl_buf);
 
 	return -ENOMEM;
 }