From patchwork Mon Apr  9 20:51:39 2018
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10331987
Date: Mon, 9 Apr 2018 22:51:39 +0200
From: Christoffer Dall
To: Marc Zyngier
Cc: linux-arm-kernel@lists.infradead.org, Mark Rutland,
 kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, Shannon Zhao
Subject: Re: [PATCH] KVM: arm/arm64: Close VMID generation race
Message-ID: <20180409205139.GH10904@cbox>
References: <20180409170706.23541-1-marc.zyngier@arm.com>
In-Reply-To: <20180409170706.23541-1-marc.zyngier@arm.com>
User-Agent: Mutt/1.5.24 (2015-08-30)

On Mon, Apr 09, 2018 at 06:07:06PM +0100, Marc Zyngier wrote:
> Before entering the guest, we check whether our VMID is still
> part of the current generation. In order to avoid taking a lock,
> we start with checking that the generation is still current, and
> only if not current do we take the lock, recheck, and update the
> generation and VMID.
>
> This leaves open a small race: A vcpu can bump up the global
> generation number as well as the VM's, but has not updated
> the VMID itself yet.
>
> At that point another vcpu from the same VM comes in, checks
> the generation (and finds it not needing anything), and jumps
> into the guest. At this point, we end-up with two vcpus belonging
> to the same VM running with two different VMIDs. Eventually, the
> VMID used by the second vcpu will get reassigned, and things will
> really go wrong...
>
> A simple solution would be to drop this initial check, and always take
> the lock. This is likely to cause performance issues. A middle ground
> is to convert the spinlock to a rwlock, and only take the read lock
> on the fast path. If the check fails at that point, drop it and
> acquire the write lock, rechecking the condition.
>
> This ensures that the above scenario doesn't occur.
>
> Reported-by: Mark Rutland
> Signed-off-by: Marc Zyngier
> ---
> I haven't seen any reply from Shannon, so reposting this to
> a slightly wider audience for feedback.
>
>  virt/kvm/arm/arm.c | 15 ++++++++++-----
>  1 file changed, 10 insertions(+), 5 deletions(-)
>
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index dba629c5f8ac..a4c1b76240df 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -63,7 +63,7 @@ static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_arm_running_vcpu);
>  static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
>  static u32 kvm_next_vmid;
>  static unsigned int kvm_vmid_bits __read_mostly;
> -static DEFINE_SPINLOCK(kvm_vmid_lock);
> +static DEFINE_RWLOCK(kvm_vmid_lock);
>  
>  static bool vgic_present;
>  
> @@ -473,11 +473,16 @@ static void update_vttbr(struct kvm *kvm)
>  {
>  	phys_addr_t pgd_phys;
>  	u64 vmid;
> +	bool new_gen;
>  
> -	if (!need_new_vmid_gen(kvm))
> +	read_lock(&kvm_vmid_lock);
> +	new_gen = need_new_vmid_gen(kvm);
> +	read_unlock(&kvm_vmid_lock);
> +
> +	if (!new_gen)
>  		return;
>  
> -	spin_lock(&kvm_vmid_lock);
> +	write_lock(&kvm_vmid_lock);
>  
>  	/*
>  	 * We need to re-check the vmid_gen here to ensure that if another vcpu
> @@ -485,7 +490,7 @@ static void update_vttbr(struct kvm *kvm)
>  	 * use the same vmid.
>  	 */
>  	if (!need_new_vmid_gen(kvm)) {
> -		spin_unlock(&kvm_vmid_lock);
> +		write_unlock(&kvm_vmid_lock);
>  		return;
>  	}
>  
> @@ -519,7 +524,7 @@ static void update_vttbr(struct kvm *kvm)
>  	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
>  	kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid;
>  
> -	spin_unlock(&kvm_vmid_lock);
> +	write_unlock(&kvm_vmid_lock);
>  }
>  
>  static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
> --
> 2.14.2
>

The above looks correct to me.  I am wondering if something like the
following would also work, which may be slightly more efficient,
although I doubt the difference can be measured:

Thanks,
-Christoffer

diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index dba629c5f8ac..7ac869bcad21 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -458,7 +458,9 @@ void force_vm_exit(const cpumask_t *mask)
  */
 static bool need_new_vmid_gen(struct kvm *kvm)
 {
-	return unlikely(kvm->arch.vmid_gen != atomic64_read(&kvm_vmid_gen));
+	u64 current_vmid_gen = atomic64_read(&kvm_vmid_gen);
+	smp_rmb(); /* Orders read of kvm_vmid_gen and kvm->arch.vmid */
+	return unlikely(kvm->arch.vmid_gen != current_vmid_gen);
 }
 
 /**
@@ -508,10 +510,11 @@ static void update_vttbr(struct kvm *kvm)
 		kvm_call_hyp(__kvm_flush_vm_context);
 	}
 
-	kvm->arch.vmid_gen = atomic64_read(&kvm_vmid_gen);
 	kvm->arch.vmid = kvm_next_vmid;
 	kvm_next_vmid++;
 	kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
+	smp_wmb();
+	kvm->arch.vmid_gen = atomic64_read(&kvm_vmid_gen);
 
 	/* update vttbr to be used with the new vmid */
 	pgd_phys = virt_to_phys(kvm->arch.pgd);