From patchwork Fri Mar 24 10:13:53 2023
X-Patchwork-Submitter: Sam Li
X-Patchwork-Id: 13186929
From: Sam Li
To: qemu-devel@nongnu.org
Cc: stefanha@redhat.com, "Michael S. Tsirkin", hare@suse.de, Cornelia Huck, dmitry.fomichev@wdc.com, qemu-block@nongnu.org, Markus Armbruster, damien.lemoal@opensource.wdc.com, Raphael Norwitz, Hanna Reitz, Paolo Bonzini, Kevin Wolf, kvm@vger.kernel.org, Eric Blake, Sam Li
Subject: [PATCH v9 1/5] include: update virtio_blk headers to v6.3-rc1
Date: Fri, 24 Mar 2023 18:13:53 +0800
Message-Id: <20230324101357.2717-2-faithilikerun@gmail.com>
In-Reply-To: <20230324101357.2717-1-faithilikerun@gmail.com>
References: <20230324101357.2717-1-faithilikerun@gmail.com>

Use scripts/update-linux-headers.sh to update headers to 6.3-rc1.

Signed-off-by: Sam Li
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Dmitry Fomichev
---
include/standard-headers/drm/drm_fourcc.h | 12 +++ include/standard-headers/linux/ethtool.h | 48 ++++++++- include/standard-headers/linux/fuse.h | 45 +++++++- include/standard-headers/linux/pci_regs.h | 1 + include/standard-headers/linux/vhost_types.h | 2 + include/standard-headers/linux/virtio_blk.h | 105 +++++++++++++++++++ linux-headers/asm-arm64/kvm.h | 1 + linux-headers/asm-x86/kvm.h | 34 +++++- linux-headers/linux/kvm.h | 9 ++ linux-headers/linux/vfio.h | 15 +-- linux-headers/linux/vhost.h | 8 ++ 11 files changed, 270 insertions(+), 10 deletions(-)
diff --git a/include/standard-headers/drm/drm_fourcc.h b/include/standard-headers/drm/drm_fourcc.h index 69cab17b38..dc3e6112c1 100644 --- a/include/standard-headers/drm/drm_fourcc.h +++ b/include/standard-headers/drm/drm_fourcc.h @@ -87,6 +87,18 @@ extern "C" { * * The authoritative list of format modifier codes is found in * `include/uapi/drm/drm_fourcc.h` + * + * Open Source User Waiver + * ----------------------- + * + * Because this is the authoritative source for pixel formats and modifiers + * referenced by GL, Vulkan extensions and other standards and hence used both + * by open source and closed source driver stacks, the usual requirement for an + * upstream in-kernel or open source userspace user does not apply. + * + * To ensure, as much as feasible, compatibility across stacks and avoid + * confusion with incompatible enumerations stakeholders for all relevant driver + * stacks should approve additions.
*/ #define fourcc_code(a, b, c, d) ((uint32_t)(a) | ((uint32_t)(b) << 8) | \ diff --git a/include/standard-headers/linux/ethtool.h b/include/standard-headers/linux/ethtool.h index 87176ab075..99fcddf04f 100644 --- a/include/standard-headers/linux/ethtool.h +++ b/include/standard-headers/linux/ethtool.h @@ -711,6 +711,24 @@ enum ethtool_stringset { ETH_SS_COUNT }; +/** + * enum ethtool_mac_stats_src - source of ethtool MAC statistics + * @ETHTOOL_MAC_STATS_SRC_AGGREGATE: + * if device supports a MAC merge layer, this retrieves the aggregate + * statistics of the eMAC and pMAC. Otherwise, it retrieves just the + * statistics of the single (express) MAC. + * @ETHTOOL_MAC_STATS_SRC_EMAC: + * if device supports a MM layer, this retrieves the eMAC statistics. + * Otherwise, it retrieves the statistics of the single (express) MAC. + * @ETHTOOL_MAC_STATS_SRC_PMAC: + * if device supports a MM layer, this retrieves the pMAC statistics. + */ +enum ethtool_mac_stats_src { + ETHTOOL_MAC_STATS_SRC_AGGREGATE, + ETHTOOL_MAC_STATS_SRC_EMAC, + ETHTOOL_MAC_STATS_SRC_PMAC, +}; + /** * enum ethtool_module_power_mode_policy - plug-in module power mode policy * @ETHTOOL_MODULE_POWER_MODE_POLICY_HIGH: Module is always in high power mode. @@ -779,6 +797,31 @@ enum ethtool_podl_pse_pw_d_status { ETHTOOL_PODL_PSE_PW_D_STATUS_ERROR, }; +/** + * enum ethtool_mm_verify_status - status of MAC Merge Verify function + * @ETHTOOL_MM_VERIFY_STATUS_UNKNOWN: + * verification status is unknown + * @ETHTOOL_MM_VERIFY_STATUS_INITIAL: + * the 802.3 Verify State diagram is in the state INIT_VERIFICATION + * @ETHTOOL_MM_VERIFY_STATUS_VERIFYING: + * the Verify State diagram is in the state VERIFICATION_IDLE, + * SEND_VERIFY or WAIT_FOR_RESPONSE + * @ETHTOOL_MM_VERIFY_STATUS_SUCCEEDED: + * indicates that the Verify State diagram is in the state VERIFIED + * @ETHTOOL_MM_VERIFY_STATUS_FAILED: + * the Verify State diagram is in the state VERIFY_FAIL + * @ETHTOOL_MM_VERIFY_STATUS_DISABLED: + * verification of preemption operation is disabled + */ +enum ethtool_mm_verify_status { + ETHTOOL_MM_VERIFY_STATUS_UNKNOWN, + ETHTOOL_MM_VERIFY_STATUS_INITIAL, + ETHTOOL_MM_VERIFY_STATUS_VERIFYING, + ETHTOOL_MM_VERIFY_STATUS_SUCCEEDED, + ETHTOOL_MM_VERIFY_STATUS_FAILED, + ETHTOOL_MM_VERIFY_STATUS_DISABLED, +}; + /** * struct ethtool_gstrings - string set for data tagging * @cmd: Command number = %ETHTOOL_GSTRINGS @@ -1183,7 +1226,7 @@ struct ethtool_rxnfc { uint32_t rule_cnt; uint32_t rss_context; }; - uint32_t rule_locs[0]; + uint32_t rule_locs[]; }; @@ -1741,6 +1784,9 @@ enum ethtool_link_mode_bit_indices { ETHTOOL_LINK_MODE_800000baseDR8_2_Full_BIT = 96, ETHTOOL_LINK_MODE_800000baseSR8_Full_BIT = 97, ETHTOOL_LINK_MODE_800000baseVR8_Full_BIT = 98, + ETHTOOL_LINK_MODE_10baseT1S_Full_BIT = 99, + ETHTOOL_LINK_MODE_10baseT1S_Half_BIT = 100, + ETHTOOL_LINK_MODE_10baseT1S_P2MP_Half_BIT = 101, /* must be last entry */ __ETHTOOL_LINK_MODE_MASK_NBITS diff --git a/include/standard-headers/linux/fuse.h b/include/standard-headers/linux/fuse.h index a1af78d989..35c131a107 100644 --- a/include/standard-headers/linux/fuse.h +++ b/include/standard-headers/linux/fuse.h @@ -201,6 +201,11 @@ * 7.38 * - add FUSE_EXPIRE_ONLY flag to fuse_notify_inval_entry * - add FOPEN_PARALLEL_DIRECT_WRITES + * - add total_extlen to fuse_in_header + * - add FUSE_MAX_NR_SECCTX + * - add extension header + * - add FUSE_EXT_GROUPS + * - add FUSE_CREATE_SUPP_GROUP */ #ifndef _LINUX_FUSE_H @@ -358,6 +363,8 @@ struct fuse_file_lock { * FUSE_SECURITY_CTX: add security context to create, 
mkdir, symlink, and * mknod * FUSE_HAS_INODE_DAX: use per inode DAX + * FUSE_CREATE_SUPP_GROUP: add supplementary group info to create, mkdir, + * symlink and mknod (single group that matches parent) */ #define FUSE_ASYNC_READ (1 << 0) #define FUSE_POSIX_LOCKS (1 << 1) @@ -394,6 +401,7 @@ struct fuse_file_lock { /* bits 32..63 get shifted down 32 bits into the flags2 field */ #define FUSE_SECURITY_CTX (1ULL << 32) #define FUSE_HAS_INODE_DAX (1ULL << 33) +#define FUSE_CREATE_SUPP_GROUP (1ULL << 34) /** * CUSE INIT request/reply flags @@ -499,6 +507,17 @@ struct fuse_file_lock { */ #define FUSE_EXPIRE_ONLY (1 << 0) +/** + * extension type + * FUSE_MAX_NR_SECCTX: maximum value of &fuse_secctx_header.nr_secctx + * FUSE_EXT_GROUPS: &fuse_supp_groups extension + */ +enum fuse_ext_type { + /* Types 0..31 are reserved for fuse_secctx_header */ + FUSE_MAX_NR_SECCTX = 31, + FUSE_EXT_GROUPS = 32, +}; + enum fuse_opcode { FUSE_LOOKUP = 1, FUSE_FORGET = 2, /* no reply */ @@ -882,7 +901,8 @@ struct fuse_in_header { uint32_t uid; uint32_t gid; uint32_t pid; - uint32_t padding; + uint16_t total_extlen; /* length of extensions in 8byte units */ + uint16_t padding; }; struct fuse_out_header { @@ -1043,4 +1063,27 @@ struct fuse_secctx_header { uint32_t nr_secctx; }; +/** + * struct fuse_ext_header - extension header + * @size: total size of this extension including this header + * @type: type of extension + * + * This is made compatible with fuse_secctx_header by using type values > + * FUSE_MAX_NR_SECCTX + */ +struct fuse_ext_header { + uint32_t size; + uint32_t type; +}; + +/** + * struct fuse_supp_groups - Supplementary group extension + * @nr_groups: number of supplementary groups + * @groups: flexible array of group IDs + */ +struct fuse_supp_groups { + uint32_t nr_groups; + uint32_t groups[]; +}; + #endif /* _LINUX_FUSE_H */ diff --git a/include/standard-headers/linux/pci_regs.h b/include/standard-headers/linux/pci_regs.h index 85ab127881..dc2000e0fe 100644 --- a/include/standard-headers/linux/pci_regs.h +++ b/include/standard-headers/linux/pci_regs.h @@ -693,6 +693,7 @@ #define PCI_EXP_LNKCTL2_TX_MARGIN 0x0380 /* Transmit Margin */ #define PCI_EXP_LNKCTL2_HASD 0x0020 /* HW Autonomous Speed Disable */ #define PCI_EXP_LNKSTA2 0x32 /* Link Status 2 */ +#define PCI_EXP_LNKSTA2_FLIT 0x0400 /* Flit Mode Status */ #define PCI_CAP_EXP_ENDPOINT_SIZEOF_V2 0x32 /* end of v2 EPs w/ link */ #define PCI_EXP_SLTCAP2 0x34 /* Slot Capabilities 2 */ #define PCI_EXP_SLTCAP2_IBPD 0x00000001 /* In-band PD Disable Supported */ diff --git a/include/standard-headers/linux/vhost_types.h b/include/standard-headers/linux/vhost_types.h index c41a73fe36..88600e2d9f 100644 --- a/include/standard-headers/linux/vhost_types.h +++ b/include/standard-headers/linux/vhost_types.h @@ -163,5 +163,7 @@ struct vhost_vdpa_iova_range { #define VHOST_BACKEND_F_IOTLB_ASID 0x3 /* Device can be suspended */ #define VHOST_BACKEND_F_SUSPEND 0x4 +/* Device can be resumed */ +#define VHOST_BACKEND_F_RESUME 0x5 #endif diff --git a/include/standard-headers/linux/virtio_blk.h b/include/standard-headers/linux/virtio_blk.h index e81715cd70..7155b1a470 100644 --- a/include/standard-headers/linux/virtio_blk.h +++ b/include/standard-headers/linux/virtio_blk.h @@ -41,6 +41,7 @@ #define VIRTIO_BLK_F_DISCARD 13 /* DISCARD is supported */ #define VIRTIO_BLK_F_WRITE_ZEROES 14 /* WRITE ZEROES is supported */ #define VIRTIO_BLK_F_SECURE_ERASE 16 /* Secure Erase is supported */ +#define VIRTIO_BLK_F_ZONED 17 /* Zoned block device */ /* Legacy feature bits */ #ifndef 
VIRTIO_BLK_NO_LEGACY @@ -135,6 +136,16 @@ struct virtio_blk_config { /* Secure erase commands must be aligned to this number of sectors. */ __virtio32 secure_erase_sector_alignment; + /* Zoned block device characteristics (if VIRTIO_BLK_F_ZONED) */ + struct virtio_blk_zoned_characteristics { + uint32_t zone_sectors; + uint32_t max_open_zones; + uint32_t max_active_zones; + uint32_t max_append_sectors; + uint32_t write_granularity; + uint8_t model; + uint8_t unused2[3]; + } zoned; } QEMU_PACKED; /* @@ -172,6 +183,27 @@ struct virtio_blk_config { /* Secure erase command */ #define VIRTIO_BLK_T_SECURE_ERASE 14 +/* Zone append command */ +#define VIRTIO_BLK_T_ZONE_APPEND 15 + +/* Report zones command */ +#define VIRTIO_BLK_T_ZONE_REPORT 16 + +/* Open zone command */ +#define VIRTIO_BLK_T_ZONE_OPEN 18 + +/* Close zone command */ +#define VIRTIO_BLK_T_ZONE_CLOSE 20 + +/* Finish zone command */ +#define VIRTIO_BLK_T_ZONE_FINISH 22 + +/* Reset zone command */ +#define VIRTIO_BLK_T_ZONE_RESET 24 + +/* Reset All zones command */ +#define VIRTIO_BLK_T_ZONE_RESET_ALL 26 + #ifndef VIRTIO_BLK_NO_LEGACY /* Barrier before this op. */ #define VIRTIO_BLK_T_BARRIER 0x80000000 @@ -191,6 +223,72 @@ struct virtio_blk_outhdr { __virtio64 sector; }; +/* + * Supported zoned device models. + */ + +/* Regular block device */ +#define VIRTIO_BLK_Z_NONE 0 +/* Host-managed zoned device */ +#define VIRTIO_BLK_Z_HM 1 +/* Host-aware zoned device */ +#define VIRTIO_BLK_Z_HA 2 + +/* + * Zone descriptor. A part of VIRTIO_BLK_T_ZONE_REPORT command reply. + */ +struct virtio_blk_zone_descriptor { + /* Zone capacity */ + uint64_t z_cap; + /* The starting sector of the zone */ + uint64_t z_start; + /* Zone write pointer position in sectors */ + uint64_t z_wp; + /* Zone type */ + uint8_t z_type; + /* Zone state */ + uint8_t z_state; + uint8_t reserved[38]; +}; + +struct virtio_blk_zone_report { + uint64_t nr_zones; + uint8_t reserved[56]; + struct virtio_blk_zone_descriptor zones[]; +}; + +/* + * Supported zone types. + */ + +/* Conventional zone */ +#define VIRTIO_BLK_ZT_CONV 1 +/* Sequential Write Required zone */ +#define VIRTIO_BLK_ZT_SWR 2 +/* Sequential Write Preferred zone */ +#define VIRTIO_BLK_ZT_SWP 3 + +/* + * Zone states that are available for zones of all types. 
+ */ + +/* Not a write pointer (conventional zones only) */ +#define VIRTIO_BLK_ZS_NOT_WP 0 +/* Empty */ +#define VIRTIO_BLK_ZS_EMPTY 1 +/* Implicitly Open */ +#define VIRTIO_BLK_ZS_IOPEN 2 +/* Explicitly Open */ +#define VIRTIO_BLK_ZS_EOPEN 3 +/* Closed */ +#define VIRTIO_BLK_ZS_CLOSED 4 +/* Read-Only */ +#define VIRTIO_BLK_ZS_RDONLY 13 +/* Full */ +#define VIRTIO_BLK_ZS_FULL 14 +/* Offline */ +#define VIRTIO_BLK_ZS_OFFLINE 15 + /* Unmap this range (only valid for write zeroes command) */ #define VIRTIO_BLK_WRITE_ZEROES_FLAG_UNMAP 0x00000001 @@ -217,4 +315,11 @@ struct virtio_scsi_inhdr { #define VIRTIO_BLK_S_OK 0 #define VIRTIO_BLK_S_IOERR 1 #define VIRTIO_BLK_S_UNSUPP 2 + +/* Error codes that are specific to zoned block devices */ +#define VIRTIO_BLK_S_ZONE_INVALID_CMD 3 +#define VIRTIO_BLK_S_ZONE_UNALIGNED_WP 4 +#define VIRTIO_BLK_S_ZONE_OPEN_RESOURCE 5 +#define VIRTIO_BLK_S_ZONE_ACTIVE_RESOURCE 6 + #endif /* _LINUX_VIRTIO_BLK_H */ diff --git a/linux-headers/asm-arm64/kvm.h b/linux-headers/asm-arm64/kvm.h index a7cfefb3a8..d7e7bb885e 100644 --- a/linux-headers/asm-arm64/kvm.h +++ b/linux-headers/asm-arm64/kvm.h @@ -109,6 +109,7 @@ struct kvm_regs { #define KVM_ARM_VCPU_SVE 4 /* enable SVE for this CPU */ #define KVM_ARM_VCPU_PTRAUTH_ADDRESS 5 /* VCPU uses address authentication */ #define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication */ +#define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */ struct kvm_vcpu_init { __u32 target; diff --git a/linux-headers/asm-x86/kvm.h b/linux-headers/asm-x86/kvm.h index 2747d2ce14..2937e7bf69 100644 --- a/linux-headers/asm-x86/kvm.h +++ b/linux-headers/asm-x86/kvm.h @@ -9,6 +9,7 @@ #include #include +#include #define KVM_PIO_PAGE_OFFSET 1 #define KVM_COALESCED_MMIO_PAGE_OFFSET 2 @@ -505,8 +506,8 @@ struct kvm_nested_state { * KVM_{GET,PUT}_NESTED_STATE ioctl values. */ union { - struct kvm_vmx_nested_state_data vmx[0]; - struct kvm_svm_nested_state_data svm[0]; + __DECLARE_FLEX_ARRAY(struct kvm_vmx_nested_state_data, vmx); + __DECLARE_FLEX_ARRAY(struct kvm_svm_nested_state_data, svm); } data; }; @@ -523,6 +524,35 @@ struct kvm_pmu_event_filter { #define KVM_PMU_EVENT_ALLOW 0 #define KVM_PMU_EVENT_DENY 1 +#define KVM_PMU_EVENT_FLAG_MASKED_EVENTS BIT(0) +#define KVM_PMU_EVENT_FLAGS_VALID_MASK (KVM_PMU_EVENT_FLAG_MASKED_EVENTS) + +/* + * Masked event layout. 
+ * Bits Description + * ---- ----------- + * 7:0 event select (low bits) + * 15:8 umask match + * 31:16 unused + * 35:32 event select (high bits) + * 36:54 unused + * 55 exclude bit + * 63:56 umask mask + */ + +#define KVM_PMU_ENCODE_MASKED_ENTRY(event_select, mask, match, exclude) \ + (((event_select) & 0xFFULL) | (((event_select) & 0XF00ULL) << 24) | \ + (((mask) & 0xFFULL) << 56) | \ + (((match) & 0xFFULL) << 8) | \ + ((__u64)(!!(exclude)) << 55)) + +#define KVM_PMU_MASKED_ENTRY_EVENT_SELECT \ + (GENMASK_ULL(7, 0) | GENMASK_ULL(35, 32)) +#define KVM_PMU_MASKED_ENTRY_UMASK_MASK (GENMASK_ULL(63, 56)) +#define KVM_PMU_MASKED_ENTRY_UMASK_MATCH (GENMASK_ULL(15, 8)) +#define KVM_PMU_MASKED_ENTRY_EXCLUDE (BIT_ULL(55)) +#define KVM_PMU_MASKED_ENTRY_UMASK_MASK_SHIFT (56) + /* for KVM_{GET,SET,HAS}_DEVICE_ATTR */ #define KVM_VCPU_TSC_CTRL 0 /* control group for the timestamp counter (TSC) */ #define KVM_VCPU_TSC_OFFSET 0 /* attribute for the TSC offset */ diff --git a/linux-headers/linux/kvm.h b/linux-headers/linux/kvm.h index 1e2c16cfe3..599de3c6e3 100644 --- a/linux-headers/linux/kvm.h +++ b/linux-headers/linux/kvm.h @@ -581,6 +581,8 @@ struct kvm_s390_mem_op { struct { __u8 ar; /* the access register number */ __u8 key; /* access key, ignored if flag unset */ + __u8 pad1[6]; /* ignored */ + __u64 old_addr; /* ignored if cmpxchg flag unset */ }; __u32 sida_offset; /* offset into the sida */ __u8 reserved[32]; /* ignored */ @@ -593,11 +595,17 @@ struct kvm_s390_mem_op { #define KVM_S390_MEMOP_SIDA_WRITE 3 #define KVM_S390_MEMOP_ABSOLUTE_READ 4 #define KVM_S390_MEMOP_ABSOLUTE_WRITE 5 +#define KVM_S390_MEMOP_ABSOLUTE_CMPXCHG 6 + /* flags for kvm_s390_mem_op->flags */ #define KVM_S390_MEMOP_F_CHECK_ONLY (1ULL << 0) #define KVM_S390_MEMOP_F_INJECT_EXCEPTION (1ULL << 1) #define KVM_S390_MEMOP_F_SKEY_PROTECTION (1ULL << 2) +/* flags specifying extension support via KVM_CAP_S390_MEM_OP_EXTENSION */ +#define KVM_S390_MEMOP_EXTENSION_CAP_BASE (1 << 0) +#define KVM_S390_MEMOP_EXTENSION_CAP_CMPXCHG (1 << 1) + /* for KVM_INTERRUPT */ struct kvm_interrupt { /* in */ @@ -1173,6 +1181,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_DIRTY_LOG_RING_ACQ_REL 223 #define KVM_CAP_S390_PROTECTED_ASYNC_DISABLE 224 #define KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP 225 +#define KVM_CAP_PMU_EVENT_MASKED_EVENTS 226 #ifdef KVM_CAP_IRQ_ROUTING diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h index c59692ce0b..4a534edbdc 100644 --- a/linux-headers/linux/vfio.h +++ b/linux-headers/linux/vfio.h @@ -49,7 +49,11 @@ /* Supports VFIO_DMA_UNMAP_FLAG_ALL */ #define VFIO_UNMAP_ALL 9 -/* Supports the vaddr flag for DMA map and unmap */ +/* + * Supports the vaddr flag for DMA map and unmap. Not supported for mediated + * devices, so this capability is subject to change as groups are added or + * removed. + */ #define VFIO_UPDATE_VADDR 10 /* @@ -1343,8 +1347,7 @@ struct vfio_iommu_type1_info_dma_avail { * Map process virtual addresses to IO virtual addresses using the * provided struct vfio_dma_map. Caller sets argsz. READ &/ WRITE required. * - * If flags & VFIO_DMA_MAP_FLAG_VADDR, update the base vaddr for iova, and - * unblock translation of host virtual addresses in the iova range. The vaddr + * If flags & VFIO_DMA_MAP_FLAG_VADDR, update the base vaddr for iova. The vaddr * must have previously been invalidated with VFIO_DMA_UNMAP_FLAG_VADDR. To * maintain memory consistency within the user application, the updated vaddr * must address the same memory object as originally mapped. 
Failure to do so @@ -1395,9 +1398,9 @@ struct vfio_bitmap { * must be 0. This cannot be combined with the get-dirty-bitmap flag. * * If flags & VFIO_DMA_UNMAP_FLAG_VADDR, do not unmap, but invalidate host - * virtual addresses in the iova range. Tasks that attempt to translate an - * iova's vaddr will block. DMA to already-mapped pages continues. This - * cannot be combined with the get-dirty-bitmap flag. + * virtual addresses in the iova range. DMA to already-mapped pages continues. + * Groups may not be added to the container while any addresses are invalid. + * This cannot be combined with the get-dirty-bitmap flag. */ struct vfio_iommu_type1_dma_unmap { __u32 argsz; diff --git a/linux-headers/linux/vhost.h b/linux-headers/linux/vhost.h index f9f115a7c7..92e1b700b5 100644 --- a/linux-headers/linux/vhost.h +++ b/linux-headers/linux/vhost.h @@ -180,4 +180,12 @@ */ #define VHOST_VDPA_SUSPEND _IO(VHOST_VIRTIO, 0x7D) +/* Resume a device so it can resume processing virtqueue requests + * + * After the return of this ioctl the device will have restored all the + * necessary states and it is fully operational to continue processing the + * virtqueue descriptors. + */ +#define VHOST_VDPA_RESUME _IO(VHOST_VIRTIO, 0x7E) + #endif From patchwork Fri Mar 24 10:13:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sam Li X-Patchwork-Id: 13186945 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 27DB7C6FD20 for ; Fri, 24 Mar 2023 15:38:47 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1pfeRX-0006ZD-Tu; Fri, 24 Mar 2023 06:14:43 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1pfeRV-0006Yy-Ua; Fri, 24 Mar 2023 06:14:41 -0400 Received: from mail-pl1-x635.google.com ([2607:f8b0:4864:20::635]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1pfeRD-0006AB-Q5; Fri, 24 Mar 2023 06:14:41 -0400 Received: by mail-pl1-x635.google.com with SMTP id ix20so1395346plb.3; Fri, 24 Mar 2023 03:14:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679652856; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=NgyUnj2buc+4agBJO7cw0Q9zavqKmmHyxiyDXk1f9Ps=; b=fHksEqGzp+a5xjnQIP5glYVxmyxvcfUQVl//pZCq6WtMjKbTxO8BQvI+R5gaCtQvIE 755ldhxX4XK1JQsKzsd8qQ8r6QpINna0lfH8lUt5Nu7BD/1sskJmKE7WtI46ic7acV/s /kOLRqp0uD2F2T4VJyh8ip3grDzYbUOvhtQIIXx/nPWgi7+A2dQcVy/jSp2lpG2p1waZ +Vo8tCOR/7UbxTSPDsqJudeOwi11NpHlmpdbYxW4lT4WQwkP59fIdU0JUneMIXIzBgOT iULQfjtU8fKqvmPixRRqAu2MQwieZzYynyttYCvI5HiqEZVwn3V9nSAPmaRG6g2UApj7 y4NA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679652856; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=NgyUnj2buc+4agBJO7cw0Q9zavqKmmHyxiyDXk1f9Ps=; b=iyuFdEjjdQITLQ+/JZN2tZ5LZwvSwTgGB0IAcyTTBGF8FmnCLf1Y67Jx2mkTbnwz+F 
From: Sam Li
To: qemu-devel@nongnu.org
Cc: stefanha@redhat.com, "Michael S. Tsirkin", hare@suse.de, Cornelia Huck, dmitry.fomichev@wdc.com, qemu-block@nongnu.org, Markus Armbruster, damien.lemoal@opensource.wdc.com, Raphael Norwitz, Hanna Reitz, Paolo Bonzini, Kevin Wolf, kvm@vger.kernel.org, Eric Blake, Sam Li
Subject: [PATCH v9 2/5] virtio-blk: add zoned storage emulation for zoned devices
Date: Fri, 24 Mar 2023 18:13:54 +0800
Message-Id: <20230324101357.2717-3-faithilikerun@gmail.com>
In-Reply-To: <20230324101357.2717-1-faithilikerun@gmail.com>
References: <20230324101357.2717-1-faithilikerun@gmail.com>

This patch extends virtio-blk emulation to handle zoned device commands by calling the new block layer APIs to perform zoned device I/O on behalf of the guest. It supports Report Zone, the four zone operations (open, close, finish, reset), and Zone Append. The VIRTIO_BLK_F_ZONED feature bit is only set when the host device is a zoned block device; it is not set for regular block devices, which contain only conventional zones. The guest OS can use blktests and fio to test these commands on zoned devices, and zonefs can be used to test zone append writes as well.
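For illustration, one way to exercise this emulation without real zoned hardware is to back the virtio-blk device with an emulated zoned null_blk device on the host. The commands below are only a sketch: the null_blk, blkzone and fio options are standard upstream ones, but the exact -blockdev driver and cache settings for a zoned host device depend on the block layer zoned support this series builds on, so the QEMU invocation is illustrative rather than canonical.

  # Host: create a host-managed zoned device /dev/nullb0 (zone_size is in MiB)
  modprobe null_blk nr_devices=1 zoned=1 zone_size=64 zone_nr_conv=0
  blkzone report /dev/nullb0 | head

  # Host: attach it to the guest through virtio-blk (other machine/boot
  # options omitted; driver/cache choices here are an assumption)
  qemu-system-x86_64 ... \
      -blockdev node-name=zoned0,driver=host_device,filename=/dev/nullb0,cache.direct=on \
      -device virtio-blk-pci,drive=zoned0

  # Guest: sequential writes against the zoned virtio-blk disk with fio
  fio --name=zbd-write --filename=/dev/vda --zonemode=zbd --direct=1 \
      --rw=write --bs=64k --size=128M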
Signed-off-by: Sam Li --- hw/block/virtio-blk-common.c | 2 + hw/block/virtio-blk.c | 389 +++++++++++++++++++++++++++++++++++ hw/virtio/virtio-qmp.c | 2 + 3 files changed, 393 insertions(+) diff --git a/hw/block/virtio-blk-common.c b/hw/block/virtio-blk-common.c index ac52d7c176..e2f8e2f6da 100644 --- a/hw/block/virtio-blk-common.c +++ b/hw/block/virtio-blk-common.c @@ -29,6 +29,8 @@ static const VirtIOFeature feature_sizes[] = { .end = endof(struct virtio_blk_config, discard_sector_alignment)}, {.flags = 1ULL << VIRTIO_BLK_F_WRITE_ZEROES, .end = endof(struct virtio_blk_config, write_zeroes_may_unmap)}, + {.flags = 1ULL << VIRTIO_BLK_F_ZONED, + .end = endof(struct virtio_blk_config, zoned)}, {} }; diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index cefca93b31..66c2bc4b16 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -17,6 +17,7 @@ #include "qemu/module.h" #include "qemu/error-report.h" #include "qemu/main-loop.h" +#include "block/block_int.h" #include "trace.h" #include "hw/block/block.h" #include "hw/qdev-properties.h" @@ -601,6 +602,335 @@ err: return err_status; } +typedef struct ZoneCmdData { + VirtIOBlockReq *req; + struct iovec *in_iov; + unsigned in_num; + union { + struct { + unsigned int nr_zones; + BlockZoneDescriptor *zones; + } zone_report_data; + struct { + int64_t offset; + } zone_append_data; + }; +} ZoneCmdData; + +/* + * check zoned_request: error checking before issuing requests. If all checks + * passed, return true. + * append: true if only zone append requests issued. + */ +static bool check_zoned_request(VirtIOBlock *s, int64_t offset, int64_t len, + bool append, uint8_t *status) { + BlockDriverState *bs = blk_bs(s->blk); + int index; + + if (!virtio_has_feature(s->host_features, VIRTIO_BLK_F_ZONED)) { + *status = VIRTIO_BLK_S_UNSUPP; + return false; + } + + if (offset < 0 || len < 0 || len > (bs->total_sectors << BDRV_SECTOR_BITS) + || offset > (bs->total_sectors << BDRV_SECTOR_BITS) - len) { + *status = VIRTIO_BLK_S_ZONE_INVALID_CMD; + return false; + } + + if (append) { + if (bs->bl.write_granularity) { + if ((offset % bs->bl.write_granularity) != 0) { + *status = VIRTIO_BLK_S_ZONE_UNALIGNED_WP; + return false; + } + } + + index = offset / bs->bl.zone_size; + if (BDRV_ZT_IS_CONV(bs->bl.wps->wp[index])) { + *status = VIRTIO_BLK_S_ZONE_INVALID_CMD; + return false; + } + + if (len / 512 > bs->bl.max_append_sectors) { + if (bs->bl.max_append_sectors == 0) { + *status = VIRTIO_BLK_S_UNSUPP; + } else { + *status = VIRTIO_BLK_S_ZONE_INVALID_CMD; + } + return false; + } + } + return true; +} + +static void virtio_blk_zone_report_complete(void *opaque, int ret) +{ + ZoneCmdData *data = opaque; + VirtIOBlockReq *req = data->req; + VirtIOBlock *s = req->dev; + VirtIODevice *vdev = VIRTIO_DEVICE(req->dev); + struct iovec *in_iov = data->in_iov; + unsigned in_num = data->in_num; + int64_t zrp_size, n, j = 0; + int64_t nz = data->zone_report_data.nr_zones; + int8_t err_status = VIRTIO_BLK_S_OK; + + if (ret) { + err_status = VIRTIO_BLK_S_ZONE_INVALID_CMD; + goto out; + } + + struct virtio_blk_zone_report zrp_hdr = (struct virtio_blk_zone_report) { + .nr_zones = cpu_to_le64(nz), + }; + zrp_size = sizeof(struct virtio_blk_zone_report) + + sizeof(struct virtio_blk_zone_descriptor) * nz; + n = iov_from_buf(in_iov, in_num, 0, &zrp_hdr, sizeof(zrp_hdr)); + if (n != sizeof(zrp_hdr)) { + virtio_error(vdev, "Driver provided input buffer that is too small!"); + err_status = VIRTIO_BLK_S_ZONE_INVALID_CMD; + goto out; + } + + for (size_t i = sizeof(zrp_hdr); i 
< zrp_size; + i += sizeof(struct virtio_blk_zone_descriptor), ++j) { + struct virtio_blk_zone_descriptor desc = + (struct virtio_blk_zone_descriptor) { + .z_start = cpu_to_le64(data->zone_report_data.zones[j].start + >> BDRV_SECTOR_BITS), + .z_cap = cpu_to_le64(data->zone_report_data.zones[j].cap + >> BDRV_SECTOR_BITS), + .z_wp = cpu_to_le64(data->zone_report_data.zones[j].wp + >> BDRV_SECTOR_BITS), + }; + + switch (data->zone_report_data.zones[j].type) { + case BLK_ZT_CONV: + desc.z_type = VIRTIO_BLK_ZT_CONV; + break; + case BLK_ZT_SWR: + desc.z_type = VIRTIO_BLK_ZT_SWR; + break; + case BLK_ZT_SWP: + desc.z_type = VIRTIO_BLK_ZT_SWP; + break; + default: + g_assert_not_reached(); + } + + switch (data->zone_report_data.zones[j].state) { + case BLK_ZS_RDONLY: + desc.z_state = VIRTIO_BLK_ZS_RDONLY; + break; + case BLK_ZS_OFFLINE: + desc.z_state = VIRTIO_BLK_ZS_OFFLINE; + break; + case BLK_ZS_EMPTY: + desc.z_state = VIRTIO_BLK_ZS_EMPTY; + break; + case BLK_ZS_CLOSED: + desc.z_state = VIRTIO_BLK_ZS_CLOSED; + break; + case BLK_ZS_FULL: + desc.z_state = VIRTIO_BLK_ZS_FULL; + break; + case BLK_ZS_EOPEN: + desc.z_state = VIRTIO_BLK_ZS_EOPEN; + break; + case BLK_ZS_IOPEN: + desc.z_state = VIRTIO_BLK_ZS_IOPEN; + break; + case BLK_ZS_NOT_WP: + desc.z_state = VIRTIO_BLK_ZS_NOT_WP; + break; + default: + g_assert_not_reached(); + } + + /* TODO: it takes O(n^2) time complexity. Optimizations required. */ + n = iov_from_buf(in_iov, in_num, i, &desc, sizeof(desc)); + if (n != sizeof(desc)) { + virtio_error(vdev, "Driver provided input buffer " + "for descriptors that is too small!"); + err_status = VIRTIO_BLK_S_ZONE_INVALID_CMD; + } + } + +out: + aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); + virtio_blk_req_complete(req, err_status); + virtio_blk_free_request(req); + aio_context_release(blk_get_aio_context(s->conf.conf.blk)); + g_free(data->zone_report_data.zones); + g_free(data); +} + +static void virtio_blk_handle_zone_report(VirtIOBlockReq *req, + struct iovec *in_iov, + unsigned in_num) +{ + VirtIOBlock *s = req->dev; + VirtIODevice *vdev = VIRTIO_DEVICE(s); + unsigned int nr_zones; + ZoneCmdData *data; + int64_t zone_size, offset; + uint8_t err_status; + + if (req->in_len < sizeof(struct virtio_blk_inhdr) + + sizeof(struct virtio_blk_zone_report) + + sizeof(struct virtio_blk_zone_descriptor)) { + virtio_error(vdev, "in buffer too small for zone report"); + return; + } + + /* start byte offset of the zone report */ + offset = virtio_ldq_p(vdev, &req->out.sector) << BDRV_SECTOR_BITS; + if (!check_zoned_request(s, offset, 0, false, &err_status)) { + goto out; + } + nr_zones = (req->in_len - sizeof(struct virtio_blk_inhdr) - + sizeof(struct virtio_blk_zone_report)) / + sizeof(struct virtio_blk_zone_descriptor); + + zone_size = sizeof(BlockZoneDescriptor) * nr_zones; + data = g_malloc(sizeof(ZoneCmdData)); + data->req = req; + data->in_iov = in_iov; + data->in_num = in_num; + data->zone_report_data.nr_zones = nr_zones; + data->zone_report_data.zones = g_malloc(zone_size), + + blk_aio_zone_report(s->blk, offset, &data->zone_report_data.nr_zones, + data->zone_report_data.zones, + virtio_blk_zone_report_complete, data); + return; +out: + virtio_blk_req_complete(req, err_status); + virtio_blk_free_request(req); +} + +static void virtio_blk_zone_mgmt_complete(void *opaque, int ret) +{ + VirtIOBlockReq *req = opaque; + VirtIOBlock *s = req->dev; + int8_t err_status = VIRTIO_BLK_S_OK; + + if (ret) { + err_status = VIRTIO_BLK_S_ZONE_INVALID_CMD; + } + + 
aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); + virtio_blk_req_complete(req, err_status); + virtio_blk_free_request(req); + aio_context_release(blk_get_aio_context(s->conf.conf.blk)); +} + +static int virtio_blk_handle_zone_mgmt(VirtIOBlockReq *req, BlockZoneOp op) +{ + VirtIOBlock *s = req->dev; + VirtIODevice *vdev = VIRTIO_DEVICE(s); + BlockDriverState *bs = blk_bs(s->blk); + int64_t offset = virtio_ldq_p(vdev, &req->out.sector) << BDRV_SECTOR_BITS; + uint64_t len; + uint64_t capacity = bs->total_sectors << BDRV_SECTOR_BITS; + uint8_t err_status = VIRTIO_BLK_S_OK; + + uint32_t type = virtio_ldl_p(vdev, &req->out.type); + if (type == VIRTIO_BLK_T_ZONE_RESET_ALL) { + /* Entire drive capacity */ + offset = 0; + len = capacity; + } else { + if (bs->bl.zone_size > capacity - offset) { + /* The zoned device allows the last smaller zone. */ + len = capacity - bs->bl.zone_size * (bs->bl.nr_zones - 1); + } else { + len = bs->bl.zone_size; + } + } + + if (!check_zoned_request(s, offset, len, false, &err_status)) { + goto out; + } + + blk_aio_zone_mgmt(s->blk, op, offset, len, + virtio_blk_zone_mgmt_complete, req); + + return 0; +out: + virtio_blk_req_complete(req, err_status); + virtio_blk_free_request(req); + return err_status; +} + +static void virtio_blk_zone_append_complete(void *opaque, int ret) +{ + ZoneCmdData *data = opaque; + VirtIOBlockReq *req = data->req; + VirtIOBlock *s = req->dev; + VirtIODevice *vdev = VIRTIO_DEVICE(req->dev); + int64_t append_sector, n; + uint8_t err_status = VIRTIO_BLK_S_OK; + + if (ret) { + err_status = VIRTIO_BLK_S_ZONE_INVALID_CMD; + goto out; + } + + virtio_stq_p(vdev, &append_sector, + data->zone_append_data.offset >> BDRV_SECTOR_BITS); + n = iov_from_buf(data->in_iov, data->in_num, 0, &append_sector, + sizeof(append_sector)); + if (n != sizeof(append_sector)) { + virtio_error(vdev, "Driver provided input buffer less than size of " + "append_sector"); + err_status = VIRTIO_BLK_S_ZONE_INVALID_CMD; + goto out; + } + +out: + aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); + virtio_blk_req_complete(req, err_status); + virtio_blk_free_request(req); + aio_context_release(blk_get_aio_context(s->conf.conf.blk)); + g_free(data); +} + +static int virtio_blk_handle_zone_append(VirtIOBlockReq *req, + struct iovec *out_iov, + struct iovec *in_iov, + uint64_t out_num, + unsigned in_num) { + VirtIOBlock *s = req->dev; + VirtIODevice *vdev = VIRTIO_DEVICE(s); + uint8_t err_status = VIRTIO_BLK_S_OK; + + int64_t offset = virtio_ldq_p(vdev, &req->out.sector) << BDRV_SECTOR_BITS; + int64_t len = iov_size(out_iov, out_num); + + if (!check_zoned_request(s, offset, len, true, &err_status)) { + goto out; + } + + ZoneCmdData *data = g_malloc(sizeof(ZoneCmdData)); + data->req = req; + data->in_iov = in_iov; + data->in_num = in_num; + data->zone_append_data.offset = offset; + qemu_iovec_init_external(&req->qiov, out_iov, out_num); + blk_aio_zone_append(s->blk, &data->zone_append_data.offset, &req->qiov, 0, + virtio_blk_zone_append_complete, data); + return 0; + +out: + aio_context_acquire(blk_get_aio_context(s->conf.conf.blk)); + virtio_blk_req_complete(req, err_status); + virtio_blk_free_request(req); + aio_context_release(blk_get_aio_context(s->conf.conf.blk)); + return err_status; +} + static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb) { uint32_t type; @@ -687,6 +1017,24 @@ static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb) case VIRTIO_BLK_T_FLUSH: virtio_blk_handle_flush(req, mrb); break; + case 
VIRTIO_BLK_T_ZONE_REPORT: + virtio_blk_handle_zone_report(req, in_iov, in_num); + break; + case VIRTIO_BLK_T_ZONE_OPEN: + virtio_blk_handle_zone_mgmt(req, BLK_ZO_OPEN); + break; + case VIRTIO_BLK_T_ZONE_CLOSE: + virtio_blk_handle_zone_mgmt(req, BLK_ZO_CLOSE); + break; + case VIRTIO_BLK_T_ZONE_FINISH: + virtio_blk_handle_zone_mgmt(req, BLK_ZO_FINISH); + break; + case VIRTIO_BLK_T_ZONE_RESET: + virtio_blk_handle_zone_mgmt(req, BLK_ZO_RESET); + break; + case VIRTIO_BLK_T_ZONE_RESET_ALL: + virtio_blk_handle_zone_mgmt(req, BLK_ZO_RESET); + break; case VIRTIO_BLK_T_SCSI_CMD: virtio_blk_handle_scsi(req); break; @@ -705,6 +1053,14 @@ static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb) virtio_blk_free_request(req); break; } + case VIRTIO_BLK_T_ZONE_APPEND & ~VIRTIO_BLK_T_OUT: + /* + * Passing out_iov/out_num and in_iov/in_num is not safe + * to access req->elem.out_sg directly because it may be + * modified by virtio_blk_handle_request(). + */ + virtio_blk_handle_zone_append(req, out_iov, in_iov, out_num, in_num); + break; /* * VIRTIO_BLK_T_DISCARD and VIRTIO_BLK_T_WRITE_ZEROES are defined with * VIRTIO_BLK_T_OUT flag set. We masked this flag in the switch statement, @@ -890,6 +1246,7 @@ static void virtio_blk_update_config(VirtIODevice *vdev, uint8_t *config) { VirtIOBlock *s = VIRTIO_BLK(vdev); BlockConf *conf = &s->conf.conf; + BlockDriverState *bs = blk_bs(s->blk); struct virtio_blk_config blkcfg; uint64_t capacity; int64_t length; @@ -954,6 +1311,30 @@ static void virtio_blk_update_config(VirtIODevice *vdev, uint8_t *config) blkcfg.write_zeroes_may_unmap = 1; virtio_stl_p(vdev, &blkcfg.max_write_zeroes_seg, 1); } + if (bs->bl.zoned != BLK_Z_NONE) { + switch (bs->bl.zoned) { + case BLK_Z_HM: + blkcfg.zoned.model = VIRTIO_BLK_Z_HM; + break; + case BLK_Z_HA: + blkcfg.zoned.model = VIRTIO_BLK_Z_HA; + break; + default: + g_assert_not_reached(); + } + + virtio_stl_p(vdev, &blkcfg.zoned.zone_sectors, + bs->bl.zone_size / 512); + virtio_stl_p(vdev, &blkcfg.zoned.max_active_zones, + bs->bl.max_active_zones); + virtio_stl_p(vdev, &blkcfg.zoned.max_open_zones, + bs->bl.max_open_zones); + virtio_stl_p(vdev, &blkcfg.zoned.write_granularity, blk_size); + virtio_stl_p(vdev, &blkcfg.zoned.max_append_sectors, + bs->bl.max_append_sectors); + } else { + blkcfg.zoned.model = VIRTIO_BLK_Z_NONE; + } memcpy(config, &blkcfg, s->config_size); } @@ -1118,6 +1499,7 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp) VirtIODevice *vdev = VIRTIO_DEVICE(dev); VirtIOBlock *s = VIRTIO_BLK(dev); VirtIOBlkConf *conf = &s->conf; + BlockDriverState *bs = blk_bs(conf->conf.blk); Error *err = NULL; unsigned i; @@ -1163,6 +1545,13 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp) return; } + if (bs->bl.zoned != BLK_Z_NONE) { + virtio_add_feature(&s->host_features, VIRTIO_BLK_F_ZONED); + if (bs->bl.zoned == BLK_Z_HM) { + virtio_clear_feature(&s->host_features, VIRTIO_BLK_F_DISCARD); + } + } + if (virtio_has_feature(s->host_features, VIRTIO_BLK_F_DISCARD) && (!conf->max_discard_sectors || conf->max_discard_sectors > BDRV_REQUEST_MAX_SECTORS)) { diff --git a/hw/virtio/virtio-qmp.c b/hw/virtio/virtio-qmp.c index b70148aba9..e84316dcfd 100644 --- a/hw/virtio/virtio-qmp.c +++ b/hw/virtio/virtio-qmp.c @@ -176,6 +176,8 @@ static const qmp_virtio_feature_map_t virtio_blk_feature_map[] = { "VIRTIO_BLK_F_DISCARD: Discard command supported"), FEATURE_ENTRY(VIRTIO_BLK_F_WRITE_ZEROES, \ "VIRTIO_BLK_F_WRITE_ZEROES: Write zeroes command supported"), + 
FEATURE_ENTRY(VIRTIO_BLK_F_ZONED, \ "VIRTIO_BLK_F_ZONED: Zoned block devices"), #ifndef VIRTIO_BLK_NO_LEGACY FEATURE_ENTRY(VIRTIO_BLK_F_BARRIER, \ "VIRTIO_BLK_F_BARRIER: Request barriers supported"),

From patchwork Fri Mar 24 10:13:55 2023
X-Patchwork-Submitter: Sam Li
X-Patchwork-Id: 13186916
From: Sam Li
To: qemu-devel@nongnu.org
Tsirkin" , hare@suse.de, Cornelia Huck , dmitry.fomichev@wdc.com, qemu-block@nongnu.org, Markus Armbruster , damien.lemoal@opensource.wdc.com, Raphael Norwitz , Hanna Reitz , Paolo Bonzini , Kevin Wolf , kvm@vger.kernel.org, Eric Blake , Sam Li Subject: [PATCH v9 3/5] block: add accounting for zone append operation Date: Fri, 24 Mar 2023 18:13:55 +0800 Message-Id: <20230324101357.2717-4-faithilikerun@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230324101357.2717-1-faithilikerun@gmail.com> References: <20230324101357.2717-1-faithilikerun@gmail.com> MIME-Version: 1.0 Received-SPF: pass client-ip=2607:f8b0:4864:20::62d; envelope-from=faithilikerun@gmail.com; helo=mail-pl1-x62d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, FREEMAIL_FROM=0.001, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Taking account of the new zone append write operation for zoned devices, BLOCK_ACCT_ZONE_APPEND enum is introduced as other I/O request type (read, write, flush). Signed-off-by: Sam Li --- block/qapi-sysemu.c | 11 ++++++ block/qapi.c | 18 ++++++++++ hw/block/virtio-blk.c | 4 +++ include/block/accounting.h | 1 + qapi/block-core.json | 68 ++++++++++++++++++++++++++++++++------ qapi/block.json | 4 +++ 6 files changed, 95 insertions(+), 11 deletions(-) diff --git a/block/qapi-sysemu.c b/block/qapi-sysemu.c index 7bd7554150..cec3c1afb4 100644 --- a/block/qapi-sysemu.c +++ b/block/qapi-sysemu.c @@ -517,6 +517,7 @@ void qmp_block_latency_histogram_set( bool has_boundaries, uint64List *boundaries, bool has_boundaries_read, uint64List *boundaries_read, bool has_boundaries_write, uint64List *boundaries_write, + bool has_boundaries_append, uint64List *boundaries_append, bool has_boundaries_flush, uint64List *boundaries_flush, Error **errp) { @@ -557,6 +558,16 @@ void qmp_block_latency_histogram_set( } } + if (has_boundaries || has_boundaries_append) { + ret = block_latency_histogram_set( + stats, BLOCK_ACCT_ZONE_APPEND, + has_boundaries_append ? 
boundaries_append : boundaries); + if (ret) { + error_setg(errp, "Device '%s' set append write boundaries fail", id); + return; + } + } + if (has_boundaries || has_boundaries_flush) { ret = block_latency_histogram_set( stats, BLOCK_ACCT_FLUSH, diff --git a/block/qapi.c b/block/qapi.c index c84147849d..2684484e9d 100644 --- a/block/qapi.c +++ b/block/qapi.c @@ -533,27 +533,36 @@ static void bdrv_query_blk_stats(BlockDeviceStats *ds, BlockBackend *blk) ds->rd_bytes = stats->nr_bytes[BLOCK_ACCT_READ]; ds->wr_bytes = stats->nr_bytes[BLOCK_ACCT_WRITE]; + ds->zone_append_bytes = stats->nr_bytes[BLOCK_ACCT_ZONE_APPEND]; ds->unmap_bytes = stats->nr_bytes[BLOCK_ACCT_UNMAP]; ds->rd_operations = stats->nr_ops[BLOCK_ACCT_READ]; ds->wr_operations = stats->nr_ops[BLOCK_ACCT_WRITE]; + ds->zone_append_operations = stats->nr_ops[BLOCK_ACCT_ZONE_APPEND]; ds->unmap_operations = stats->nr_ops[BLOCK_ACCT_UNMAP]; ds->failed_rd_operations = stats->failed_ops[BLOCK_ACCT_READ]; ds->failed_wr_operations = stats->failed_ops[BLOCK_ACCT_WRITE]; + ds->failed_zone_append_operations = + stats->failed_ops[BLOCK_ACCT_ZONE_APPEND]; ds->failed_flush_operations = stats->failed_ops[BLOCK_ACCT_FLUSH]; ds->failed_unmap_operations = stats->failed_ops[BLOCK_ACCT_UNMAP]; ds->invalid_rd_operations = stats->invalid_ops[BLOCK_ACCT_READ]; ds->invalid_wr_operations = stats->invalid_ops[BLOCK_ACCT_WRITE]; + ds->invalid_zone_append_operations = + stats->invalid_ops[BLOCK_ACCT_ZONE_APPEND]; ds->invalid_flush_operations = stats->invalid_ops[BLOCK_ACCT_FLUSH]; ds->invalid_unmap_operations = stats->invalid_ops[BLOCK_ACCT_UNMAP]; ds->rd_merged = stats->merged[BLOCK_ACCT_READ]; ds->wr_merged = stats->merged[BLOCK_ACCT_WRITE]; + ds->zone_append_merged = stats->merged[BLOCK_ACCT_ZONE_APPEND]; ds->unmap_merged = stats->merged[BLOCK_ACCT_UNMAP]; ds->flush_operations = stats->nr_ops[BLOCK_ACCT_FLUSH]; ds->wr_total_time_ns = stats->total_time_ns[BLOCK_ACCT_WRITE]; + ds->zone_append_total_time_ns = + stats->total_time_ns[BLOCK_ACCT_ZONE_APPEND]; ds->rd_total_time_ns = stats->total_time_ns[BLOCK_ACCT_READ]; ds->flush_total_time_ns = stats->total_time_ns[BLOCK_ACCT_FLUSH]; ds->unmap_total_time_ns = stats->total_time_ns[BLOCK_ACCT_UNMAP]; @@ -571,6 +580,7 @@ static void bdrv_query_blk_stats(BlockDeviceStats *ds, BlockBackend *blk) TimedAverage *rd = &ts->latency[BLOCK_ACCT_READ]; TimedAverage *wr = &ts->latency[BLOCK_ACCT_WRITE]; + TimedAverage *zap = &ts->latency[BLOCK_ACCT_ZONE_APPEND]; TimedAverage *fl = &ts->latency[BLOCK_ACCT_FLUSH]; dev_stats->interval_length = ts->interval_length; @@ -583,6 +593,10 @@ static void bdrv_query_blk_stats(BlockDeviceStats *ds, BlockBackend *blk) dev_stats->max_wr_latency_ns = timed_average_max(wr); dev_stats->avg_wr_latency_ns = timed_average_avg(wr); + dev_stats->min_zone_append_latency_ns = timed_average_min(zap); + dev_stats->max_zone_append_latency_ns = timed_average_max(zap); + dev_stats->avg_zone_append_latency_ns = timed_average_avg(zap); + dev_stats->min_flush_latency_ns = timed_average_min(fl); dev_stats->max_flush_latency_ns = timed_average_max(fl); dev_stats->avg_flush_latency_ns = timed_average_avg(fl); @@ -591,6 +605,8 @@ static void bdrv_query_blk_stats(BlockDeviceStats *ds, BlockBackend *blk) block_acct_queue_depth(ts, BLOCK_ACCT_READ); dev_stats->avg_wr_queue_depth = block_acct_queue_depth(ts, BLOCK_ACCT_WRITE); + dev_stats->avg_zone_append_queue_depth = + block_acct_queue_depth(ts, BLOCK_ACCT_ZONE_APPEND); QAPI_LIST_PREPEND(ds->timed_stats, dev_stats); } @@ -600,6 +616,8 @@ static void 
bdrv_query_blk_stats(BlockDeviceStats *ds, BlockBackend *blk) = bdrv_latency_histogram_stats(&hgram[BLOCK_ACCT_READ]); ds->wr_latency_histogram = bdrv_latency_histogram_stats(&hgram[BLOCK_ACCT_WRITE]); + ds->zone_append_latency_histogram + = bdrv_latency_histogram_stats(&hgram[BLOCK_ACCT_ZONE_APPEND]); ds->flush_latency_histogram = bdrv_latency_histogram_stats(&hgram[BLOCK_ACCT_FLUSH]); } diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c index 66c2bc4b16..0d85c2c9b0 100644 --- a/hw/block/virtio-blk.c +++ b/hw/block/virtio-blk.c @@ -919,6 +919,10 @@ static int virtio_blk_handle_zone_append(VirtIOBlockReq *req, data->in_num = in_num; data->zone_append_data.offset = offset; qemu_iovec_init_external(&req->qiov, out_iov, out_num); + + block_acct_start(blk_get_stats(s->blk), &req->acct, len, + BLOCK_ACCT_ZONE_APPEND); + blk_aio_zone_append(s->blk, &data->zone_append_data.offset, &req->qiov, 0, virtio_blk_zone_append_complete, data); return 0; diff --git a/include/block/accounting.h b/include/block/accounting.h index b9caad60d5..a59e39f49d 100644 --- a/include/block/accounting.h +++ b/include/block/accounting.h @@ -37,6 +37,7 @@ enum BlockAcctType { BLOCK_ACCT_READ, BLOCK_ACCT_WRITE, BLOCK_ACCT_FLUSH, + BLOCK_ACCT_ZONE_APPEND, BLOCK_ACCT_UNMAP, BLOCK_MAX_IOTYPE, }; diff --git a/qapi/block-core.json b/qapi/block-core.json index c05ad0c07e..44a70aad21 100644 --- a/qapi/block-core.json +++ b/qapi/block-core.json @@ -849,6 +849,10 @@ # @min_wr_latency_ns: Minimum latency of write operations in the # defined interval, in nanoseconds. # +# @min_zone_append_latency_ns: Minimum latency of zone append operations +# in the defined interval, in nanoseconds +# (since 8.1) +# # @min_flush_latency_ns: Minimum latency of flush operations in the # defined interval, in nanoseconds. # @@ -858,6 +862,10 @@ # @max_wr_latency_ns: Maximum latency of write operations in the # defined interval, in nanoseconds. # +# @max_zone_append_latency_ns: Maximum latency of zone append operations +# in the defined interval, in nanoseconds +# (since 8.1) +# # @max_flush_latency_ns: Maximum latency of flush operations in the # defined interval, in nanoseconds. # @@ -867,6 +875,10 @@ # @avg_wr_latency_ns: Average latency of write operations in the # defined interval, in nanoseconds. # +# @avg_zone_append_latency_ns: Average latency of zone append operations +# in the defined interval, in nanoseconds +# (since 8.1) +# # @avg_flush_latency_ns: Average latency of flush operations in the # defined interval, in nanoseconds. # @@ -876,15 +888,23 @@ # @avg_wr_queue_depth: Average number of pending write operations # in the defined interval. # +# @avg_zone_append_queue_depth: Average number of pending zone append +# operations in the defined interval +# (since 8.1). 
+# # Since: 2.5 ## { 'struct': 'BlockDeviceTimedStats', 'data': { 'interval_length': 'int', 'min_rd_latency_ns': 'int', 'max_rd_latency_ns': 'int', 'avg_rd_latency_ns': 'int', 'min_wr_latency_ns': 'int', 'max_wr_latency_ns': 'int', - 'avg_wr_latency_ns': 'int', 'min_flush_latency_ns': 'int', - 'max_flush_latency_ns': 'int', 'avg_flush_latency_ns': 'int', - 'avg_rd_queue_depth': 'number', 'avg_wr_queue_depth': 'number' } } + 'avg_wr_latency_ns': 'int', 'min_zone_append_latency_ns': 'int', + 'max_zone_append_latency_ns': 'int', + 'avg_zone_append_latency_ns': 'int', + 'min_flush_latency_ns': 'int', 'max_flush_latency_ns': 'int', + 'avg_flush_latency_ns': 'int', 'avg_rd_queue_depth': 'number', + 'avg_wr_queue_depth': 'number', + 'avg_zone_append_queue_depth': 'number' } } ## # @BlockDeviceStats: @@ -895,12 +915,18 @@ # # @wr_bytes: The number of bytes written by the device. # +# @zone_append_bytes: The number of bytes appended by the zoned devices +# (since 8.1) +# # @unmap_bytes: The number of bytes unmapped by the device (Since 4.2) # # @rd_operations: The number of read operations performed by the device. # # @wr_operations: The number of write operations performed by the device. # +# @zone_append_operations: The number of zone append operations performed +# by the zoned devices (since 8.1) +# # @flush_operations: The number of cache flush operations performed by the # device (since 0.15) # @@ -911,6 +937,9 @@ # # @wr_total_time_ns: Total time spent on writes in nanoseconds (since 0.15). # +# @zone_append_total_time_ns: Total time spent on zone append writes +# in nanoseconds (since 8.1) +# # @flush_total_time_ns: Total time spent on cache flushes in nanoseconds # (since 0.15). # @@ -928,6 +957,9 @@ # @wr_merged: Number of write requests that have been merged into another # request (Since 2.3). # +# @zone_append_merged: Number of zone append requests that have been merged +# into another request (since 8.1) +# # @unmap_merged: Number of unmap requests that have been merged into another # request (Since 4.2) # @@ -941,6 +973,10 @@ # @failed_wr_operations: The number of failed write operations # performed by the device (Since 2.5) # +# @failed_zone_append_operations: The number of failed zone append write +# operations performed by the zoned devices +# (since 8.1) +# # @failed_flush_operations: The number of failed flush operations # performed by the device (Since 2.5) # @@ -953,6 +989,9 @@ # @invalid_wr_operations: The number of invalid write operations # performed by the device (Since 2.5) # +# @invalid_zone_append_operations: The number of invalid zone append operations +# performed by the zoned device (since 8.1) +# # @invalid_flush_operations: The number of invalid flush operations # performed by the device (Since 2.5) # @@ -972,27 +1011,34 @@ # # @wr_latency_histogram: @BlockLatencyHistogramInfo. (Since 4.0) # +# @zone_append_latency_histogram: @BlockLatencyHistogramInfo. (since 8.1) +# # @flush_latency_histogram: @BlockLatencyHistogramInfo. 
(Since 4.0) # # Since: 0.14 ## { 'struct': 'BlockDeviceStats', - 'data': {'rd_bytes': 'int', 'wr_bytes': 'int', 'unmap_bytes' : 'int', - 'rd_operations': 'int', 'wr_operations': 'int', + 'data': {'rd_bytes': 'int', 'wr_bytes': 'int', 'zone_append_bytes': 'int', + 'unmap_bytes' : 'int', 'rd_operations': 'int', + 'wr_operations': 'int', 'zone_append_operations': 'int', 'flush_operations': 'int', 'unmap_operations': 'int', 'rd_total_time_ns': 'int', 'wr_total_time_ns': 'int', - 'flush_total_time_ns': 'int', 'unmap_total_time_ns': 'int', - 'wr_highest_offset': 'int', - 'rd_merged': 'int', 'wr_merged': 'int', 'unmap_merged': 'int', - '*idle_time_ns': 'int', + 'zone_append_total_time_ns': 'int', 'flush_total_time_ns': 'int', + 'unmap_total_time_ns': 'int', 'wr_highest_offset': 'int', + 'rd_merged': 'int', 'wr_merged': 'int', 'zone_append_merged': 'int', + 'unmap_merged': 'int', '*idle_time_ns': 'int', 'failed_rd_operations': 'int', 'failed_wr_operations': 'int', - 'failed_flush_operations': 'int', 'failed_unmap_operations': 'int', - 'invalid_rd_operations': 'int', 'invalid_wr_operations': 'int', + 'failed_zone_append_operations': 'int', + 'failed_flush_operations': 'int', + 'failed_unmap_operations': 'int', 'invalid_rd_operations': 'int', + 'invalid_wr_operations': 'int', + 'invalid_zone_append_operations': 'int', 'invalid_flush_operations': 'int', 'invalid_unmap_operations': 'int', 'account_invalid': 'bool', 'account_failed': 'bool', 'timed_stats': ['BlockDeviceTimedStats'], '*rd_latency_histogram': 'BlockLatencyHistogramInfo', '*wr_latency_histogram': 'BlockLatencyHistogramInfo', + '*zone_append_latency_histogram': 'BlockLatencyHistogramInfo', '*flush_latency_histogram': 'BlockLatencyHistogramInfo' } } ## diff --git a/qapi/block.json b/qapi/block.json index 5fe068f903..5a57ef4a9f 100644 --- a/qapi/block.json +++ b/qapi/block.json @@ -525,6 +525,9 @@ # @boundaries-write: list of interval boundary values for write latency # histogram. # +# @boundaries-zap: list of interval boundary values for zone append write +# latency histogram. +# # @boundaries-flush: list of interval boundary values for flush latency # histogram. # @@ -573,5 +576,6 @@ '*boundaries': ['uint64'], '*boundaries-read': ['uint64'], '*boundaries-write': ['uint64'], + '*boundaries-zap': ['uint64'], '*boundaries-flush': ['uint64'] }, 'allow-preconfig': true }
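With the schema additions above, the zone append counters and latency histogram are exposed through the same QMP commands as the existing read/write/flush statistics. The exchange below is a sketch: it assumes QEMU exposes a QMP socket at /tmp/qmp.sock and that the virtio-blk drive was created with id "zoned0"; the command and field names follow the qapi/block.json and qapi/block-core.json hunks in this patch.

  socat - UNIX-CONNECT:/tmp/qmp.sock <<'EOF'
  { "execute": "qmp_capabilities" }
  { "execute": "block-latency-histogram-set",
    "arguments": { "id": "zoned0",
                   "boundaries-zap": [ 1000000, 10000000, 100000000 ] } }
  { "execute": "query-blockstats" }
  EOF

The query-blockstats reply should then contain zone_append_bytes, zone_append_operations, zone_append_total_time_ns and, once a histogram has been set, zone_append_latency_histogram next to the existing read/write/flush fields.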