From patchwork Wed Sep 19 08:47:54 2018
X-Patchwork-Submitter: Janosch Frank
X-Patchwork-Id: 10605503
From: Janosch Frank <frankja@linux.ibm.com>
To: kvm@vger.kernel.org
Cc: linux-s390@vger.kernel.org, david@redhat.com, borntraeger@de.ibm.com,
    schwidefsky@de.ibm.com
Subject: [RFC 06/14] s390/mm: Provide vmaddr to pmd notification
Date: Wed, 19 Sep 2018 10:47:54 +0200
Message-Id: <20180919084802.183381-7-frankja@linux.ibm.com>
In-Reply-To: <20180919084802.183381-1-frankja@linux.ibm.com>
References: <20180919084802.183381-1-frankja@linux.ibm.com>

The vmaddr will be needed for shadow tables.
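Concretely, pmdp_notify_gmap() so far always received vmaddr == 0 from
gmap_pmdp_xchg(), and gmap_pmd_op_walk() performed its own
__gmap_translate() call on every walk. After this patch the callers
translate once and hand the vmaddr down the chain (walk -> split /
protect -> xchg -> notify). A rough, stand-alone userspace model of
that calling-convention change (all names, addresses and the translate
helper below are illustrative only, not the kernel's):

    #include <stdio.h>

    /* Stand-in for __gmap_translate(): guest -> host address (made up). */
    static unsigned long translate(unsigned long gaddr)
    {
            return 0xa0000000UL + gaddr;
    }

    /* Old shape: the notification path only ever saw vmaddr == 0. */
    static void notify_old(unsigned long gaddr)
    {
            printf("old: gaddr=%#lx vmaddr=0\n", gaddr);
    }

    /* New shape: the caller translates once and threads vmaddr down,
     * so the notifier has the host address available when it needs it. */
    static void notify_new(unsigned long gaddr, unsigned long vmaddr)
    {
            printf("new: gaddr=%#lx vmaddr=%#lx\n", gaddr, vmaddr);
    }

    int main(void)
    {
            unsigned long gaddr = 0x100000UL;

            notify_old(gaddr);
            notify_new(gaddr, translate(gaddr));
            return 0;
    }

Translating once per loop iteration also means the result is already
available wherever the shadow-table code will later need it.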
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/mm/gmap.c | 51 ++++++++++++++++++++++++++-------------------------
 1 file changed, 26 insertions(+), 25 deletions(-)

diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 7bc490a6fbeb..70763bcd0e0b 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -530,10 +530,10 @@ void gmap_unlink(struct mm_struct *mm, unsigned long *table,
 }
 
 static void gmap_pmdp_xchg(struct gmap *gmap, pmd_t *old, pmd_t new,
-			   unsigned long gaddr);
+			   unsigned long gaddr, unsigned long vmaddr);
 
 static void gmap_pmd_split(struct gmap *gmap, unsigned long gaddr,
-			   pmd_t *pmdp, struct page *page);
+			   unsigned long vmaddr, pmd_t *pmdp, struct page *page);
 
 /**
  * gmap_link - set up shadow page tables to connect a host to a guest address
@@ -632,7 +632,8 @@ int __gmap_link(struct gmap *gmap, unsigned long gaddr, unsigned long vmaddr)
 	} else if (*table & _SEGMENT_ENTRY_PROTECT &&
 		   !(pmd_val(*pmd) & _SEGMENT_ENTRY_PROTECT)) {
 		if (page) {
-			gmap_pmd_split(gmap, gaddr, (pmd_t *)table, page);
+			gmap_pmd_split(gmap, gaddr, vmaddr,
+				       (pmd_t *)table, page);
 			page = NULL;
 		} else {
 			spin_unlock(ptl);
@@ -948,19 +949,15 @@ static void gmap_pte_op_end(spinlock_t *ptl)
  * Returns a pointer to the pmd for a guest address, or NULL
  */
 static inline pmd_t *gmap_pmd_op_walk(struct gmap *gmap, unsigned long gaddr,
-				      spinlock_t **ptl)
+				      unsigned long vmaddr, spinlock_t **ptl)
 {
 	pmd_t *pmdp, *hpmdp;
-	unsigned long vmaddr;
 
 	BUG_ON(gmap_is_shadow(gmap));
 	*ptl = NULL;
 	if (gmap->mm->context.allow_gmap_hpage_1m) {
-		vmaddr = __gmap_translate(gmap, gaddr);
-		if (IS_ERR_VALUE(vmaddr))
-			return NULL;
 		hpmdp = pmd_alloc_map(gmap->mm, vmaddr);
 		if (!hpmdp)
 			return NULL;
@@ -1043,7 +1040,7 @@ static inline void gmap_pmd_split_free(struct gmap *gmap, pmd_t *pmdp)
  * aren't tracked anywhere else.
  */
 static void gmap_pmd_split(struct gmap *gmap, unsigned long gaddr,
-			   pmd_t *pmdp, struct page *page)
+			   unsigned long vmaddr, pmd_t *pmdp, struct page *page)
 {
 	unsigned long *ptable = (unsigned long *) page_to_phys(page);
 	pmd_t new;
@@ -1065,7 +1062,7 @@ static void gmap_pmd_split(struct gmap *gmap, unsigned long gaddr,
 	spin_lock(&gmap->split_list_lock);
 	list_add(&page->lru, &gmap->split_list);
 	spin_unlock(&gmap->split_list_lock);
-	gmap_pmdp_xchg(gmap, pmdp, new, gaddr);
+	gmap_pmdp_xchg(gmap, pmdp, new, gaddr, vmaddr);
 }
 
 /*
@@ -1083,7 +1080,8 @@ static void gmap_pmd_split(struct gmap *gmap, unsigned long gaddr,
  * guest_table_lock held.
  */
 static int gmap_protect_pmd(struct gmap *gmap, unsigned long gaddr,
-			    pmd_t *pmdp, int prot, unsigned long bits)
+			    unsigned long vmaddr, pmd_t *pmdp, int prot,
+			    unsigned long bits)
 {
 	int pmd_i = pmd_val(*pmdp) & _SEGMENT_ENTRY_INVALID;
 	int pmd_p = pmd_val(*pmdp) & _SEGMENT_ENTRY_PROTECT;
@@ -1095,13 +1093,13 @@ static int gmap_protect_pmd(struct gmap *gmap, unsigned long gaddr,
 
 	if (prot == PROT_NONE && !pmd_i) {
 		pmd_val(new) |= _SEGMENT_ENTRY_INVALID;
-		gmap_pmdp_xchg(gmap, pmdp, new, gaddr);
+		gmap_pmdp_xchg(gmap, pmdp, new, gaddr, vmaddr);
 	}
 
 	if (prot == PROT_READ && !pmd_p) {
 		pmd_val(new) &= ~_SEGMENT_ENTRY_INVALID;
 		pmd_val(new) |= _SEGMENT_ENTRY_PROTECT;
-		gmap_pmdp_xchg(gmap, pmdp, new, gaddr);
+		gmap_pmdp_xchg(gmap, pmdp, new, gaddr, vmaddr);
 	}
 
 	if (bits & GMAP_NOTIFY_MPROT)
@@ -1164,10 +1162,14 @@ static int gmap_protect_range(struct gmap *gmap, unsigned long gaddr,
 	int rc;
 
 	BUG_ON(gmap_is_shadow(gmap));
+
 	while (len) {
 		rc = -EAGAIN;
-
-		pmdp = gmap_pmd_op_walk(gmap, gaddr, &ptl_pmd);
+		vmaddr = __gmap_translate(gmap, gaddr);
+		if (IS_ERR_VALUE(vmaddr))
+			return vmaddr;
+		vmaddr |= gaddr & ~PMD_MASK;
+		pmdp = gmap_pmd_op_walk(gmap, gaddr, vmaddr, &ptl_pmd);
 		if (pmdp && !(pmd_val(*pmdp) & _SEGMENT_ENTRY_INVALID)) {
 			if (!pmd_large(*pmdp)) {
 				ptep = gmap_pte_from_pmd(gmap, pmdp, gaddr,
@@ -1192,7 +1194,7 @@ static int gmap_protect_range(struct gmap *gmap, unsigned long gaddr,
 					return -ENOMEM;
 				continue;
 			} else {
-				gmap_pmd_split(gmap, gaddr,
+				gmap_pmd_split(gmap, gaddr, vmaddr,
 					       pmdp, page);
 				page = NULL;
 			}
@@ -1206,9 +1208,6 @@ static int gmap_protect_range(struct gmap *gmap, unsigned long gaddr,
 			return rc;
 
 		/* -EAGAIN, fixup of userspace mm and gmap */
-		vmaddr = __gmap_translate(gmap, gaddr);
-		if (IS_ERR_VALUE(vmaddr))
-			return vmaddr;
 		rc = gmap_fixup(gmap, gaddr, vmaddr, prot);
 		if (rc)
 			return rc;
@@ -2432,6 +2431,7 @@ static inline void pmdp_notify_split(struct gmap *gmap, pmd_t *pmdp,
 static void pmdp_notify_gmap(struct gmap *gmap, pmd_t *pmdp,
 			     unsigned long gaddr, unsigned long vmaddr)
 {
+	BUG_ON((gaddr & ~HPAGE_MASK) || (vmaddr & ~HPAGE_MASK));
 	if (gmap_pmd_is_split(pmdp))
 		return pmdp_notify_split(gmap, pmdp, gaddr, vmaddr);
 
@@ -2452,10 +2452,11 @@ static void pmdp_notify_gmap(struct gmap *gmap, pmd_t *pmdp,
  * held.
  */
 static void gmap_pmdp_xchg(struct gmap *gmap, pmd_t *pmdp, pmd_t new,
-			   unsigned long gaddr)
+			   unsigned long gaddr, unsigned long vmaddr)
 {
 	gaddr &= HPAGE_MASK;
-	pmdp_notify_gmap(gmap, pmdp, gaddr, 0);
+	vmaddr &= HPAGE_MASK;
+	pmdp_notify_gmap(gmap, pmdp, gaddr, vmaddr);
 	if (pmd_large(new))
 		pmd_val(new) &= ~GMAP_SEGMENT_NOTIFY_BITS;
 	if (MACHINE_HAS_TLB_GUEST)
@@ -2603,7 +2604,7 @@ EXPORT_SYMBOL_GPL(gmap_pmdp_idte_global);
 * held.
 */
 bool gmap_test_and_clear_dirty_pmd(struct gmap *gmap, pmd_t *pmdp,
-				   unsigned long gaddr)
+				   unsigned long gaddr, unsigned long vmaddr)
 {
 	if (pmd_val(*pmdp) & _SEGMENT_ENTRY_INVALID)
 		return false;
@@ -2615,7 +2616,7 @@ bool gmap_test_and_clear_dirty_pmd(struct gmap *gmap, pmd_t *pmdp,
 
 	/* Clear UC indication and reset protection */
 	pmd_val(*pmdp) &= ~_SEGMENT_ENTRY_GMAP_UC;
-	gmap_protect_pmd(gmap, gaddr, pmdp, PROT_READ, 0);
+	gmap_protect_pmd(gmap, gaddr, vmaddr, pmdp, PROT_READ, 0);
 	return true;
 }
 
@@ -2638,12 +2639,12 @@ void gmap_sync_dirty_log_pmd(struct gmap *gmap, unsigned long bitmap[4],
 	spinlock_t *ptl_pmd = NULL;
 	spinlock_t *ptl_pte = NULL;
 
-	pmdp = gmap_pmd_op_walk(gmap, gaddr, &ptl_pmd);
+	pmdp = gmap_pmd_op_walk(gmap, gaddr, vmaddr, &ptl_pmd);
 	if (!pmdp)
 		return;
 
 	if (pmd_large(*pmdp)) {
-		if (gmap_test_and_clear_dirty_pmd(gmap, pmdp, gaddr))
+		if (gmap_test_and_clear_dirty_pmd(gmap, pmdp, gaddr, vmaddr))
 			bitmap_fill(bitmap, _PAGE_ENTRIES);
 	} else {
 		for (i = 0; i < _PAGE_ENTRIES; i++, vmaddr += PAGE_SIZE) {
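One line in gmap_protect_range() above that is easy to skim past is
"vmaddr |= gaddr & ~PMD_MASK;", which makes vmaddr carry the same
offset within the 1 MB segment as gaddr before both are handed to
gmap_pmd_op_walk(). A small stand-alone demonstration of that mask
arithmetic (PMD_MASK is spelled out here for s390's 1 MB segments; the
example addresses are made up):

    #include <stdio.h>

    /* s390 segment (pmd) entries cover 1 MB, so the mask clears
     * the low 20 bits. */
    #define PMD_MASK (~((1UL << 20) - 1))

    int main(void)
    {
            unsigned long gaddr  = 0x12345678UL; /* guest address */
            unsigned long vmaddr = 0xa0000000UL; /* translated host address */

            /* Carry the intra-segment offset over from gaddr. */
            vmaddr |= gaddr & ~PMD_MASK;

            printf("gaddr=%#lx vmaddr=%#lx\n", gaddr, vmaddr);
            printf("offsets match: %s\n",
                   (gaddr & ~PMD_MASK) == (vmaddr & ~PMD_MASK) ?
                   "yes" : "no");
            return 0;
    }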