From patchwork Fri Nov 20 08:42:23 2015
X-Patchwork-Submitter: Takuya Yoshikawa
X-Patchwork-Id: 7665581
Date: Fri, 20 Nov 2015 17:42:23 +0900
From: Takuya Yoshikawa
To: pbonzini@redhat.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, mtosatti@redhat.com,
    guangrong.xiao@linux.intel.com
Subject: [PATCH 02/10] KVM: x86: MMU: Remove unused parameter of __direct_map()
Message-Id: <20151120174223.7069d1e52caa4a630985e7dd@lab.ntt.co.jp>
In-Reply-To: <20151120174005.9880b89f54eee2cec2422da5@lab.ntt.co.jp>
References: <20151120174005.9880b89f54eee2cec2422da5@lab.ntt.co.jp>
X-Mailing-List: kvm@vger.kernel.org

Signed-off-by: Takuya Yoshikawa
---
 arch/x86/kvm/mmu.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d9a6801..8a1593f 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2708,9 +2708,8 @@ static void direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *sptep)
 	__direct_pte_prefetch(vcpu, sp, sptep);
 }
 
-static int __direct_map(struct kvm_vcpu *vcpu, gpa_t v, int write,
-			int map_writable, int level, gfn_t gfn, pfn_t pfn,
-			bool prefault)
+static int __direct_map(struct kvm_vcpu *vcpu, int write, int map_writable,
+			int level, gfn_t gfn, pfn_t pfn, bool prefault)
 {
 	struct kvm_shadow_walk_iterator iterator;
 	struct kvm_mmu_page *sp;
@@ -3018,11 +3017,9 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
 	make_mmu_pages_available(vcpu);
 	if (likely(!force_pt_level))
 		transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
-	r = __direct_map(vcpu, v, write, map_writable, level, gfn, pfn,
-			 prefault);
+	r = __direct_map(vcpu, write, map_writable, level, gfn, pfn, prefault);
 	spin_unlock(&vcpu->kvm->mmu_lock);
-
 	return r;
 
 out_unlock:
@@ -3531,8 +3528,7 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 	make_mmu_pages_available(vcpu);
 	if (likely(!force_pt_level))
 		transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
-	r = __direct_map(vcpu, gpa, write, map_writable,
-			 level, gfn, pfn, prefault);
+	r = __direct_map(vcpu, write, map_writable, level, gfn, pfn, prefault);
 	spin_unlock(&vcpu->kvm->mmu_lock);
 
 	return r;
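
For context on why dropping the argument is safe: the subject line states the parameter is unused, and both callers already pass gfn, which is derived from the faulting address. The shadow-page-table walk is keyed on the page frame number, so the raw address adds no information. The following is a minimal standalone sketch, not the kernel code; the types, constants, and the shadow_walk_index() helper are simplified stand-ins chosen only to illustrate that every per-level table index can be computed from gfn alone.

/*
 * Standalone illustration (hypothetical, simplified): the per-level index
 * used when walking a 4-level page table depends only on the gfn, never on
 * the raw guest address, which is why __direct_map() can drop the gpa/v
 * parameter once gfn is passed in.
 */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t gfn_t;
typedef uint64_t gpa_t;

#define PAGE_SHIFT	12
#define PT64_LEVEL_BITS	9

/* Index into the page table at 'level' for the page frame 'gfn'. */
static unsigned int shadow_walk_index(gfn_t gfn, int level)
{
	return (gfn >> ((level - 1) * PT64_LEVEL_BITS)) &
	       ((1u << PT64_LEVEL_BITS) - 1);
}

int main(void)
{
	gpa_t gpa = 0x123456789abcULL;	/* hypothetical faulting address */
	gfn_t gfn = gpa >> PAGE_SHIFT;	/* what the callers already compute */
	int level;

	/* Each index is a function of gfn only; gpa is never needed again. */
	for (level = 4; level >= 1; level--)
		printf("level %d -> index %u\n",
		       level, shadow_walk_index(gfn, level));

	return 0;
}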