From patchwork Tue Nov 29 09:35:35 2022
X-Patchwork-Submitter: "Liu, Yi L"
X-Patchwork-Id: 13058370
From: Yi Liu <yi.l.liu@intel.com>
To: jgg@nvidia.com
Cc: alex.williamson@redhat.com, kevin.tian@intel.com, kvm@vger.kernel.org,
    mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com, yi.l.liu@intel.com,
    yi.y.sun@linux.intel.com, intel-gvt-dev@lists.freedesktop.org,
    linux-s390@vger.kernel.org
Subject: [iommufd PATCH v2 2/2] vfio/ap: validate iova during dma_unmap and trigger irq disable
Date: Tue, 29 Nov 2022 01:35:35 -0800
Message-Id: <20221129093535.359357-3-yi.l.liu@intel.com>
In-Reply-To: <20221129093535.359357-1-yi.l.liu@intel.com>
References: <20221129093535.359357-1-yi.l.liu@intel.com>

From: Matthew Rosato

Currently, each mapped iova is stashed in its associated vfio_ap_queue;
when we get an unmap request, validate that it matches with one or more
of these stashed values before attempting unpins.

Each stashed iova represents an IRQ that was enabled for a queue.
Therefore, if a match is found, trigger an IRQ disable for this queue to
ensure that the underlying firmware will no longer try to use the
associated pfn after the page is unpinned. IRQ disable will also handle
the associated unpin.
Signed-off-by: Matthew Rosato
Signed-off-by: Yi Liu
---
 drivers/s390/crypto/vfio_ap_ops.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
index 0b4cc8c597ae..8bf353d46820 100644
--- a/drivers/s390/crypto/vfio_ap_ops.c
+++ b/drivers/s390/crypto/vfio_ap_ops.c
@@ -1535,13 +1535,29 @@ static int vfio_ap_mdev_set_kvm(struct ap_matrix_mdev *matrix_mdev,
 	return 0;
 }
 
+static void unmap_iova(struct ap_matrix_mdev *matrix_mdev, u64 iova, u64 length)
+{
+	struct ap_queue_table *qtable = &matrix_mdev->qtable;
+	struct vfio_ap_queue *q;
+	int loop_cursor;
+
+	hash_for_each(qtable->queues, loop_cursor, q, mdev_qnode) {
+		if (q->saved_iova >= iova && q->saved_iova < iova + length)
+			vfio_ap_irq_disable(q);
+	}
+}
+
 static void vfio_ap_mdev_dma_unmap(struct vfio_device *vdev, u64 iova,
 				   u64 length)
 {
 	struct ap_matrix_mdev *matrix_mdev =
 		container_of(vdev, struct ap_matrix_mdev, vdev);
 
-	vfio_unpin_pages(&matrix_mdev->vdev, iova, 1);
+	mutex_lock(&matrix_dev->mdevs_lock);
+
+	unmap_iova(matrix_mdev, iova, length);
+
+	mutex_unlock(&matrix_dev->mdevs_lock);
 }
 
 /**