From patchwork Tue Jul 31 10:47:50 2012
X-Patchwork-Submitter: "Nikunj A. Dadhania"
X-Patchwork-Id: 1258991
Subject: [PATCH v3 1/8] mm, x86: Add HAVE_RCU_TABLE_FREE support
To: peterz@infradead.org, mtosatti@redhat.com, avi@redhat.com
From: "Nikunj A. Dadhania"
Cc: raghukt@linux.vnet.ibm.com, alex.shi@intel.com, mingo@elte.hu,
    kvm@vger.kernel.org, hpa@zytor.com
Date: Tue, 31 Jul 2012 16:17:50 +0530
Message-ID: <20120731104728.16662.65467.stgit@abhimanyu.in.ibm.com>
In-Reply-To: <20120731104312.16662.27889.stgit@abhimanyu.in.ibm.com>
References: <20120731104312.16662.27889.stgit@abhimanyu.in.ibm.com>
User-Agent: StGit/0.16-2-g0d85
X-Mailing-List: kvm@vger.kernel.org

From: Peter Zijlstra

Implements optional HAVE_RCU_TABLE_FREE support for x86. This is useful
for things like Xen and KVM, where a paravirt TLB flush means that
software page table walkers such as GUP-fast cannot rely on disabling
IRQs the way regular x86 can.
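For reference, the sketch below is not part of this patch: it shows roughly
what the generic CONFIG_HAVE_RCU_TABLE_FREE code in mm/memory.c does with the
page-table pages that the x86 hooks below hand to tlb_remove_table(). Pages
are queued in a batch and only passed to __tlb_remove_table() after an
RCU-sched grace period, so an IRQs-off GUP-fast walk can never see a table
page reused under it. It is a simplified illustration that assumes the
struct mmu_table_batch, MAX_TABLE_BATCH and tlb->batch declarations from
asm-generic/tlb.h, and it omits the single-user fast path and need_flush
bookkeeping of the real code.

/*
 * Simplified illustration only, not part of this patch: roughly the
 * generic tlb_remove_table() path enabled by CONFIG_HAVE_RCU_TABLE_FREE.
 * Assumes struct mmu_table_batch, MAX_TABLE_BATCH and tlb->batch as
 * declared in include/asm-generic/tlb.h under that option.
 */
#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/smp.h>
#include <asm/tlb.h>

static void tlb_remove_table_smp_sync(void *arg)
{
	/*
	 * The IPI itself is the synchronization point: once every CPU has
	 * run this, no IRQs-off software walker can still be mid-walk.
	 */
}

static void tlb_remove_table_rcu(struct rcu_head *head)
{
	struct mmu_table_batch *batch;
	unsigned int i;

	batch = container_of(head, struct mmu_table_batch, rcu);

	/*
	 * A grace period has elapsed, so really free the page-table pages;
	 * on x86 __tlb_remove_table() becomes free_page_and_swap_cache().
	 */
	for (i = 0; i < batch->nr; i++)
		__tlb_remove_table(batch->tables[i]);

	free_page((unsigned long)batch);
}

void tlb_remove_table(struct mmu_gather *tlb, void *table)
{
	struct mmu_table_batch **batch = &tlb->batch;

	if (*batch == NULL) {
		*batch = (void *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
		if (*batch == NULL) {
			/*
			 * No memory for a batch: synchronize with any
			 * concurrent IRQs-off walker via an IPI broadcast
			 * and free the table immediately.
			 */
			smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
			__tlb_remove_table(table);
			return;
		}
		(*batch)->nr = 0;
	}

	(*batch)->tables[(*batch)->nr++] = table;
	if ((*batch)->nr == MAX_TABLE_BATCH) {
		/* Defer the free until after an RCU-sched grace period. */
		call_rcu_sched(&(*batch)->rcu, tlb_remove_table_rcu);
		*batch = NULL;
	}
}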
Not for inclusion - is part of PeterZ's "Unify TLB gather implementations"
http://mid.gmane.org/20120627211540.459910855@chello.nl

Cc: Nikunj A Dadhania
Cc: Jeremy Fitzhardinge
Cc: Avi Kivity
Signed-off-by: Peter Zijlstra
Link: http://lkml.kernel.org/n/tip-r106wg6t7crxxhva55jnacrj@git.kernel.org
---
 arch/x86/include/asm/tlb.h |    1 +
 arch/x86/mm/pgtable.c      |    6 +++---
 include/asm-generic/tlb.h  |    9 +++++++++
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 4fef207..f5489f0 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -1,6 +1,7 @@
 #ifndef _ASM_X86_TLB_H
 #define _ASM_X86_TLB_H
 
+#define __tlb_remove_table(table) free_page_and_swap_cache(table)
 #define tlb_start_vma(tlb, vma) do { } while (0)
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 8573b83..34fa168 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -51,21 +51,21 @@ void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
 {
 	pgtable_page_dtor(pte);
 	paravirt_release_pte(page_to_pfn(pte));
-	tlb_remove_page(tlb, pte);
+	tlb_remove_table(tlb, pte);
 }
 
 #if PAGETABLE_LEVELS > 2
 void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
 {
 	paravirt_release_pmd(__pa(pmd) >> PAGE_SHIFT);
-	tlb_remove_page(tlb, virt_to_page(pmd));
+	tlb_remove_table(tlb, virt_to_page(pmd));
 }
 
 #if PAGETABLE_LEVELS > 3
 void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud)
 {
 	paravirt_release_pud(__pa(pud) >> PAGE_SHIFT);
-	tlb_remove_page(tlb, virt_to_page(pud));
+	tlb_remove_table(tlb, virt_to_page(pud));
 }
 #endif	/* PAGETABLE_LEVELS > 3 */
 #endif	/* PAGETABLE_LEVELS > 2 */
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index ed6642a..d382b22 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -19,6 +19,8 @@
 #include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
 
+static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page);
+
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 /*
  * Semi RCU freeing of the page directories.
@@ -60,6 +62,13 @@ struct mmu_table_batch {
 extern void tlb_table_flush(struct mmu_gather *tlb);
 extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
 
+#else
+
+static inline void tlb_remove_table(struct mmu_gather *tlb, void *table)
+{
+	tlb_remove_page(tlb, table);
+}
+
 #endif
 
 /*
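For completeness, a similarly simplified sketch, again not part of this
patch, of the flush side behind the tlb_table_flush() declaration above. It
reuses the tlb_remove_table_rcu() callback from the earlier sketch and is
invoked from the generic tlb_flush_mmu() path, so a partially filled batch
also goes through a grace period before being freed.

/* Simplified illustration only, not part of this patch. */
void tlb_table_flush(struct mmu_gather *tlb)
{
	struct mmu_table_batch **batch = &tlb->batch;

	if (*batch) {
		/* Hand any partially filled batch to RCU as well. */
		call_rcu_sched(&(*batch)->rcu, tlb_remove_table_rcu);
		*batch = NULL;
	}
}

Note that on configurations without CONFIG_HAVE_RCU_TABLE_FREE the new #else
branch in asm-generic/tlb.h turns tlb_remove_table() back into
tlb_remove_page(), so the arch/x86/mm/pgtable.c conversion above does not
change behaviour until a configuration actually selects HAVE_RCU_TABLE_FREE.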