From patchwork Fri Feb 7 11:39:34 2020
X-Patchwork-Submitter: Christian Borntraeger
X-Patchwork-Id: 11370279
From: Christian Borntraeger
To: Christian Borntraeger, Janosch Frank
Cc: KVM, Cornelia Huck, David Hildenbrand, Thomas Huth, Ulrich Weigand,
    Claudio Imbrenda, Andrea Arcangeli, linux-s390, Michael Mueller,
    Vasily Gorbik, linux-mm@kvack.org, Andrew Morton
Subject: [PATCH 11/35] KVM: s390/mm: Make pages accessible before destroying the guest
Date: Fri, 7 Feb 2020 06:39:34 -0500
Message-Id: <20200207113958.7320-12-borntraeger@de.ibm.com>
X-Mailer: git-send-email 2.24.0
In-Reply-To: <20200207113958.7320-1-borntraeger@de.ibm.com>
References: <20200207113958.7320-1-borntraeger@de.ibm.com>

Before we destroy the secure configuration, we should make all pages
accessible again. This also happens during reboot, where we reboot into
a non-secure guest that can then go into secure mode again. As this
"new" secure guest will have a new ID, we cannot reuse the old page
state.

Signed-off-by: Christian Borntraeger
Reviewed-by: Thomas Huth
Reviewed-by: Cornelia Huck
Reviewed-by: David Hildenbrand
---
 arch/s390/include/asm/pgtable.h |  1 +
 arch/s390/kvm/pv.c              |  2 ++
 arch/s390/mm/gmap.c             | 35 +++++++++++++++++++++++++++++++++
 3 files changed, 38 insertions(+)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index dbd1453e6924..3e2ea997c334 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1669,6 +1669,7 @@ extern int vmem_remove_mapping(unsigned long start, unsigned long size);
 extern int s390_enable_sie(void);
 extern int s390_enable_skey(void);
 extern void s390_reset_cmma(struct mm_struct *mm);
+extern void s390_reset_acc(struct mm_struct *mm);
 
 /* s390 has a private copy of get unmapped area to deal with cache synonyms */
 #define HAVE_ARCH_UNMAPPED_AREA
diff --git a/arch/s390/kvm/pv.c b/arch/s390/kvm/pv.c
index 4795e61f4e16..392795a92bd9 100644
--- a/arch/s390/kvm/pv.c
+++ b/arch/s390/kvm/pv.c
@@ -66,6 +66,8 @@ int kvm_s390_pv_destroy_vm(struct kvm *kvm)
 	int rc;
 	u32 ret;
 
+	/* make all pages accessible before destroying the guest */
+	s390_reset_acc(kvm->mm);
 	rc = uv_cmd_nodata(kvm_s390_pv_handle(kvm),
 			   UVC_CMD_DESTROY_SEC_CONF, &ret);
 	WRITE_ONCE(kvm->arch.gmap->guest_handle, 0);
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 7291452fe5f0..27926a06df32 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -2650,3 +2650,38 @@ void s390_reset_cmma(struct mm_struct *mm)
 	up_write(&mm->mmap_sem);
 }
 EXPORT_SYMBOL_GPL(s390_reset_cmma);
+
+/*
+ * make inaccessible pages accessible again
+ */
+static int __s390_reset_acc(pte_t *ptep, unsigned long addr,
+			    unsigned long next, struct mm_walk *walk)
+{
+	pte_t pte = READ_ONCE(*ptep);
+
+	if (pte_present(pte))
+		WARN_ON_ONCE(uv_convert_from_secure(pte_val(pte) & PAGE_MASK));
+	return 0;
+}
+
+static const struct mm_walk_ops reset_acc_walk_ops = {
+	.pte_entry = __s390_reset_acc,
+};
+
+#include <linux/sched/mm.h>
+void s390_reset_acc(struct mm_struct *mm)
+{
+	/*
+	 * we might be called during:
+	 * reset:                             we walk the pages and clear
+	 * close of all kvm file descriptors: we walk the pages and clear
+	 * exit of process on fd closure:     vma already gone, do nothing
+	 */
+	if (!mmget_not_zero(mm))
+		return;
+	down_read(&mm->mmap_sem);
+	walk_page_range(mm, 0, TASK_SIZE, &reset_acc_walk_ops, NULL);
+	up_read(&mm->mmap_sem);
+	mmput(mm);
+}
+EXPORT_SYMBOL_GPL(s390_reset_acc);
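
A side note on the page-walk API the gmap.c hunk relies on: walk_page_range()
invokes the mm_walk_ops callbacks for every mapping in the given range, and
the walk->private pointer hands per-walk data to the callback (the patch
passes NULL because __s390_reset_acc() needs no extra state). What follows is
a minimal, hypothetical sketch of that pattern for a kernel of this vintage
(pagewalk API from v5.4+, mmap_sem before the mmap_lock rename); the names
count_present_pte(), count_present_ops() and count_present_pages() are made
up for illustration, and the callback merely counts present PTEs instead of
converting them from secure:

#include <linux/pagewalk.h>
#include <linux/sched/mm.h>

struct present_count {
	unsigned long ptes;
};

/* hypothetical pte_entry callback: count present PTEs via walk->private */
static int count_present_pte(pte_t *ptep, unsigned long addr,
			     unsigned long next, struct mm_walk *walk)
{
	struct present_count *cnt = walk->private;

	if (pte_present(READ_ONCE(*ptep)))
		cnt->ptes++;
	return 0;
}

static const struct mm_walk_ops count_present_ops = {
	.pte_entry = count_present_pte,
};

/* walk the whole user address space, as s390_reset_acc() does above */
static unsigned long count_present_pages(struct mm_struct *mm)
{
	struct present_count cnt = { .ptes = 0 };

	if (!mmget_not_zero(mm))
		return 0;
	down_read(&mm->mmap_sem);
	walk_page_range(mm, 0, TASK_SIZE, &count_present_ops, &cnt);
	up_read(&mm->mmap_sem);
	mmput(mm);
	return cnt.ptes;
}

As in s390_reset_acc() above, holding mmap_sem for read is enough for the
sketch because the walk itself does not modify the page tables.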