From patchwork Mon Apr 6 21:35:04 2020
X-Patchwork-Submitter: "Michael S. Tsirkin"
X-Patchwork-Id: 11476645
Date: Mon, 6 Apr 2020 17:35:04 -0400
From: "Michael S. Tsirkin"
Tsirkin" To: linux-kernel@vger.kernel.org Cc: Jason Wang , kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org Subject: [PATCH v5 06/12] vhost: force spec specified alignment on types Message-ID: <20200406213314.248038-7-mst@redhat.com> References: <20200406213314.248038-1-mst@redhat.com> MIME-Version: 1.0 Content-Disposition: inline In-Reply-To: <20200406213314.248038-1-mst@redhat.com> X-Mailer: git-send-email 2.24.1.751.gd10ce2899c X-Mutt-Fcc: =sent Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org The ring element addresses are passed between components with different alignments assumptions. Thus, if guest/userspace selects a pointer and host then gets and dereferences it, we might need to decrease the compiler-selected alignment to prevent compiler on the host from assuming pointer is aligned. This actually triggers on ARM with -mabi=apcs-gnu - which is a deprecated configuration, but it seems safer to handle this generally. I verified that the produced binary is exactly identical on x86. Signed-off-by: Michael S. Tsirkin --- drivers/vhost/vhost.h | 6 +++--- include/linux/virtio_ring.h | 24 +++++++++++++++++++++--- 2 files changed, 24 insertions(+), 6 deletions(-) diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h index f8403bd46b85..60cab4c78229 100644 --- a/drivers/vhost/vhost.h +++ b/drivers/vhost/vhost.h @@ -67,9 +67,9 @@ struct vhost_virtqueue { /* The actual ring of buffers. */ struct mutex mutex; unsigned int num; - struct vring_desc __user *desc; - struct vring_avail __user *avail; - struct vring_used __user *used; + vring_desc_t __user *desc; + vring_avail_t __user *avail; + vring_used_t __user *used; const struct vhost_iotlb_map *meta_iotlb[VHOST_NUM_ADDRS]; struct file *kick; struct eventfd_ctx *call_ctx; diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h index 11680e74761a..c3f9ca054250 100644 --- a/include/linux/virtio_ring.h +++ b/include/linux/virtio_ring.h @@ -60,14 +60,32 @@ static inline void virtio_store_mb(bool weak_barriers, struct virtio_device; struct virtqueue; +/* + * The ring element addresses are passed between components with different + * alignments assumptions. Thus, we might need to decrease the compiler-selected + * alignment, and so must use a typedef to make sure the __aligned attribute + * actually takes hold: + * + * https://gcc.gnu.org/onlinedocs//gcc/Common-Type-Attributes.html#Common-Type-Attributes + * + * When used on a struct, or struct member, the aligned attribute can only + * increase the alignment; in order to decrease it, the packed attribute must + * be specified as well. When used as part of a typedef, the aligned attribute + * can both increase and decrease alignment, and specifying the packed + * attribute generates a warning. + */ +typedef struct vring_desc __aligned(VRING_DESC_ALIGN_SIZE) vring_desc_t; +typedef struct vring_avail __aligned(VRING_AVAIL_ALIGN_SIZE) vring_avail_t; +typedef struct vring_used __aligned(VRING_USED_ALIGN_SIZE) vring_used_t; + struct vring { unsigned int num; - struct vring_desc *desc; + vring_desc_t *desc; - struct vring_avail *avail; + vring_avail_t *avail; - struct vring_used *used; + vring_used_t *used; }; /*