From patchwork Fri Nov 10 15:18:05 2017
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 10053323
From: David Hildenbrand
To: linux-s390@vger.kernel.org, kvm@vger.kernel.org
Cc: Christian Borntraeger, Cornelia Huck, Martin Schwidefsky, Heiko Carstens, Janosch Frank, David Hildenbrand
Subject: [PATCH v1] s390x/mm: cleanup gmap_pte_op_walk()
Date: Fri, 10 Nov 2017 16:18:05 +0100
Message-Id: <20171110151805.7541-1-david@redhat.com>

gmap_mprotect_notify() refuses shadow gmaps. It turns out that

a) gmap_protect_range()
b) gmap_read_table()
c) gmap_pte_op_walk()

are never called for gmap shadows, and never should be. This dates back to
the gmap shadow prototypes, where we allowed calling mprotect_notify() on
the gmap shadow (to get notified about the prefix pages getting removed).
This is avoided by always getting notified about any change on the gmap
shadow.

The only real function for walking page tables on shadow gmaps is
gmap_table_walk(). So, essentially, these functions are never called on
shadow gmaps and gmap_pte_op_walk() can be cleaned up. Add checks to the
callers of gmap_pte_op_walk().
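
As an aside (not part of the patch itself): the diff below settles on two
guard styles. Internal helpers that must never see a shadow gmap assert with
BUG_ON(), while gmap_read_table() rejects shadow gmaps with -EINVAL instead.
A minimal sketch of that convention follows; the function names are made up,
only gmap_is_shadow() is the existing helper used in the diff:

/* Illustration only, not part of the patch; helper names are hypothetical. */

static void internal_only_walk(struct gmap *gmap)
{
	/* Reaching this with a shadow gmap would be a programming error. */
	BUG_ON(gmap_is_shadow(gmap));
	/* ... walk the regular (non-shadow) gmap, cf. gmap_pte_op_walk() ... */
}

static int caller_visible_op(struct gmap *gmap)
{
	/* Reject shadow gmaps gracefully, cf. gmap_read_table(). */
	if (gmap_is_shadow(gmap))
		return -EINVAL;
	/* ... operate on the regular gmap ... */
	return 0;
}
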
Signed-off-by: David Hildenbrand
Reviewed-by: Janosch Frank (with one nit below)
---
 arch/s390/mm/gmap.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 2f66290c9b92..9630387e5f76 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -814,27 +814,17 @@ static inline unsigned long *gmap_table_walk(struct gmap *gmap,
  * @ptl: pointer to the spinlock pointer
  *
  * Returns a pointer to the locked pte for a guest address, or NULL
- *
- * Note: Can also be called for shadow gmaps.
  */
 static pte_t *gmap_pte_op_walk(struct gmap *gmap, unsigned long gaddr,
 			       spinlock_t **ptl)
 {
 	unsigned long *table;
 
-	if (gmap_is_shadow(gmap))
-		spin_lock(&gmap->guest_table_lock);
+	BUG_ON(gmap_is_shadow(gmap));
 	/* Walk the gmap page table, lock and get pte pointer */
 	table = gmap_table_walk(gmap, gaddr, 1); /* get segment pointer */
-	if (!table || *table & _SEGMENT_ENTRY_INVALID) {
-		if (gmap_is_shadow(gmap))
-			spin_unlock(&gmap->guest_table_lock);
+	if (!table || *table & _SEGMENT_ENTRY_INVALID)
 		return NULL;
-	}
-	if (gmap_is_shadow(gmap)) {
-		*ptl = &gmap->guest_table_lock;
-		return pte_offset_map((pmd_t *) table, gaddr);
-	}
 	return pte_alloc_map_lock(gmap->mm, (pmd_t *) table, gaddr, ptl);
 }
 
@@ -889,7 +879,6 @@ static void gmap_pte_op_end(spinlock_t *ptl)
  *
  * Called with sg->mm->mmap_sem in read.
  *
- * Note: Can also be called for shadow gmaps.
  */
 static int gmap_protect_range(struct gmap *gmap, unsigned long gaddr,
 			      unsigned long len, int prot, unsigned long bits)
@@ -899,6 +888,7 @@ static int gmap_protect_range(struct gmap *gmap, unsigned long gaddr,
 	pte_t *ptep;
 	int rc;
 
+	BUG_ON(gmap_is_shadow(gmap));
 	while (len) {
 		rc = -EAGAIN;
 		ptep = gmap_pte_op_walk(gmap, gaddr, &ptl);
@@ -959,7 +949,8 @@ EXPORT_SYMBOL_GPL(gmap_mprotect_notify);
  * @val: pointer to the unsigned long value to return
  *
  * Returns 0 if the value was read, -ENOMEM if out of memory and -EFAULT
- * if reading using the virtual address failed.
+ * if reading using the virtual address failed. -EINVAL if called on a gmap
+ * shadow.
  *
  * Called with gmap->mm->mmap_sem in read.
  */
@@ -970,6 +961,9 @@ int gmap_read_table(struct gmap *gmap, unsigned long gaddr, unsigned long *val)
 	pte_t *ptep, pte;
 	int rc;
 
+	if (gmap_is_shadow(gmap))
+		return -EINVAL;
+
 	while (1) {
 		rc = -EAGAIN;
 		ptep = gmap_pte_op_walk(gmap, gaddr, &ptl);