From patchwork Wed Feb 28 20:14:59 2018
Subject: [PATCH 2/3] vfio/pci: Use endian neutral helpers
From: Alex Williamson
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, qemu-devel@nongnu.org,
    alex.williamson@redhat.com, peterx@redhat.com, eric.auger@redhat.com,
    aik@ozlabs.ru
Date: Wed, 28 Feb 2018 13:14:59 -0700
Message-ID: <20180228201459.25283.36162.stgit@gimli.home>
In-Reply-To: <20180228195504.25283.45666.stgit@gimli.home>
References: <20180228195504.25283.45666.stgit@gimli.home>
User-Agent: StGit/0.18-102-gdf9f

The iowriteXX/ioreadXX functions assume little-endian hardware: they
convert to little endian on a write and from little endian on a read.
We currently do our own explicit byte swapping to negate this.  Instead,
add endian-dependent defines so that no byte swaps are needed at all.
There should be no functional change, other than that big-endian systems
are no longer penalized with wasted swaps.
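To make the swap cancellation concrete, below is a minimal userspace
sketch (illustration only, not part of the patch).  The demo_* helper
names, CPU_IS_BE macro, and standalone-program framing are invented for
this sketch; they only model the semantics of the kernel's
le32_to_cpu(), iowrite32() and iowrite32be():

/* endian_demo.c - illustration only, not kernel code.
 * iowrite32() targets a little-endian register (swaps on BE CPUs);
 * iowrite32be() targets a big-endian register (swaps on LE CPUs).
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#define CPU_IS_BE 1
#else
#define CPU_IS_BE 0
#endif

static uint32_t bswap32(uint32_t x)
{
	return (x >> 24) | ((x >> 8) & 0x0000ff00) |
	       ((x << 8) & 0x00ff0000) | (x << 24);
}

/* le32_to_cpu(): swap on big-endian CPUs, no-op on little-endian. */
static uint32_t demo_le32_to_cpu(uint32_t v)
{
	return CPU_IS_BE ? bswap32(v) : v;
}

static void demo_iowrite32(uint32_t v, void *addr)
{
	uint32_t raw = CPU_IS_BE ? bswap32(v) : v;

	memcpy(addr, &raw, 4);
}

static void demo_iowrite32be(uint32_t v, void *addr)
{
	uint32_t raw = CPU_IS_BE ? v : bswap32(v);

	memcpy(addr, &raw, 4);
}

int main(void)
{
	uint8_t user[4] = { 0x12, 0x34, 0x56, 0x78 };	/* from copy_from_user() */
	uint8_t bar_old[4], bar_new[4];			/* stand-ins for the BAR */
	uint32_t val;

	memcpy(&val, user, 4);

	/* Old path: explicit swap + swapping accessor (two swaps on BE). */
	demo_iowrite32(demo_le32_to_cpu(val), bar_old);

	/* New path: the accessor matching the CPU's endianness, no swaps. */
	if (CPU_IS_BE)
		demo_iowrite32be(val, bar_new);
	else
		demo_iowrite32(val, bar_new);

	printf("same bytes hit the device: %s\n",
	       memcmp(bar_old, bar_new, 4) ? "no" : "yes");
	return 0;
}

On a little-endian host both paths are swap-free; on a big-endian host
the old path swaps twice and the new path not at all, so the user's
bytes reach the device unchanged either way.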
Signed-off-by: Alex Williamson
---
 drivers/vfio/pci/vfio_pci_rdwr.c |   34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c
index 5f2b376dcebd..925419e0f459 100644
--- a/drivers/vfio/pci/vfio_pci_rdwr.c
+++ b/drivers/vfio/pci/vfio_pci_rdwr.c
@@ -21,6 +21,24 @@
 
 #include "vfio_pci_private.h"
 
+#ifdef __LITTLE_ENDIAN
+#define vfio_ioread64	ioread64
+#define vfio_iowrite64	iowrite64
+#define vfio_ioread32	ioread32
+#define vfio_iowrite32	iowrite32
+#define vfio_ioread16	ioread16
+#define vfio_iowrite16	iowrite16
+#else
+#define vfio_ioread64	ioread64be
+#define vfio_iowrite64	iowrite64be
+#define vfio_ioread32	ioread32be
+#define vfio_iowrite32	iowrite32be
+#define vfio_ioread16	ioread16be
+#define vfio_iowrite16	iowrite16be
+#endif
+#define vfio_ioread8	ioread8
+#define vfio_iowrite8	iowrite8
+
 /*
  * Read or write from an __iomem region (MMIO or I/O port) with an excluded
  * range which is inaccessible. The excluded range drops writes and fills
@@ -44,15 +62,15 @@ static ssize_t do_io_rw(void __iomem *io, char __user *buf,
 			fillable = 0;
 
 		if (fillable >= 4 && !(off % 4)) {
-			__le32 val;
+			u32 val;
 
 			if (iswrite) {
 				if (copy_from_user(&val, buf, 4))
 					return -EFAULT;
 
-				iowrite32(le32_to_cpu(val), io + off);
+				vfio_iowrite32(val, io + off);
 			} else {
-				val = cpu_to_le32(ioread32(io + off));
+				val = vfio_ioread32(io + off);
 
 				if (copy_to_user(buf, &val, 4))
 					return -EFAULT;
@@ -60,15 +78,15 @@ static ssize_t do_io_rw(void __iomem *io, char __user *buf,
 
 			filled = 4;
 		} else if (fillable >= 2 && !(off % 2)) {
-			__le16 val;
+			u16 val;
 
 			if (iswrite) {
 				if (copy_from_user(&val, buf, 2))
 					return -EFAULT;
 
-				iowrite16(le16_to_cpu(val), io + off);
+				vfio_iowrite16(val, io + off);
 			} else {
-				val = cpu_to_le16(ioread16(io + off));
+				val = vfio_ioread16(io + off);
 
 				if (copy_to_user(buf, &val, 2))
 					return -EFAULT;
@@ -82,9 +100,9 @@ static ssize_t do_io_rw(void __iomem *io, char __user *buf,
 			if (copy_from_user(&val, buf, 1))
 				return -EFAULT;
 
-			iowrite8(val, io + off);
+			vfio_iowrite8(val, io + off);
 		} else {
-			val = ioread8(io + off);
+			val = vfio_ioread8(io + off);
 
 			if (copy_to_user(buf, &val, 1))
 				return -EFAULT;
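
The read direction works the same way in reverse.  Extending the sketch
above (same hypothetical demo_* naming, reusing CPU_IS_BE and bswap32
from it):

/* ioread32() reads a little-endian register: swap on BE CPUs.
 * ioread32be() reads a big-endian register: swap on LE CPUs.
 */
static uint32_t demo_ioread32(const void *addr)
{
	uint32_t raw;

	memcpy(&raw, addr, 4);
	return CPU_IS_BE ? bswap32(raw) : raw;
}

static uint32_t demo_ioread32be(const void *addr)
{
	uint32_t raw;

	memcpy(&raw, addr, 4);
	return CPU_IS_BE ? raw : bswap32(raw);
}

static uint32_t demo_cpu_to_le32(uint32_t v)
{
	return CPU_IS_BE ? bswap32(v) : v;
}

/* Old read path: val = cpu_to_le32(ioread32(io + off)) swaps twice on
 * BE and zero times on LE.  New read path: the accessor matching the
 * CPU (vfio_ioread32) never swaps.  Either way the device's bytes
 * reach copy_to_user() unchanged.
 */

The design point the helpers capture: vfio exposes BAR space to
userspace as a raw byte stream, so no net swap should ever happen; the
vfio_io* defines simply pick the accessor whose implicit swap is a
no-op on the host.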