
iomap: Address soft lockup in iomap_finish_ioend()

Message ID 20211230193522.55520-1-trondmy@kernel.org (mailing list archive)
State New
Series iomap: Address soft lockup in iomap_finish_ioend()

Commit Message

trondmy@kernel.org Dec. 30, 2021, 7:35 p.m. UTC
From: Trond Myklebust <trond.myklebust@hammerspace.com>

We're observing the following stack trace using various kernels when
running in the Azure cloud.

 watchdog: BUG: soft lockup - CPU#12 stuck for 23s! [kworker/12:1:3106]
 Modules linked in: raid0 ipt_MASQUERADE nf_conntrack_netlink xt_addrtype nft_chain_nat nf_nat br_netfilter bridge stp llc ext4 mbcache jbd2 overlay xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_counter rpcrdma rdma_ucm xt_owner ib_srpt nft_compat intel_rapl_msr ib_isert intel_rapl_common nf_tables iscsi_target_mod isst_if_mbox_msr isst_if_common nfnetlink target_core_mod nfit ib_iser libnvdimm libiscsi scsi_transport_iscsi ib_umad kvm_intel ib_ipoib rdma_cm iw_cm vfat ib_cm fat kvm irqbypass crct10dif_pclmul crc32_pclmul mlx5_ib ghash_clmulni_intel rapl ib_uverbs ib_core i2c_piix4 pcspkr hyperv_fb hv_balloon hv_utils joydev nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables xfs libcrc32c mlx5_core mlxfw tls pci_hyperv pci_hyperv_intf sd_mod t10_pi sg ata_generic hv_storvsc hv_netvsc scsi_transport_fc hyperv_keyboard hid_hyperv ata_piix libata crc32c_intel hv_vmbus serio_raw fuse
 CPU: 12 PID: 3106 Comm: kworker/12:1 Not tainted 4.18.0-305.10.2.el8_4.x86_64 #1
 Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS 090008  12/07/2018
 Workqueue: xfs-conv/md127 xfs_end_io [xfs]
 RIP: 0010:_raw_spin_unlock_irqrestore+0x11/0x20
 Code: 7c ff 48 29 e8 4c 39 e0 76 cf 80 0b 08 eb 8c 90 90 90 90 90 90 90 90 90 90 0f 1f 44 00 00 e8 e6 db 7e ff 66 90 48 89 f7 57 9d <0f> 1f 44 00 00 c3 66 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 8b 07
 RSP: 0018:ffffac51d26dfd18 EFLAGS: 00000202 ORIG_RAX: ffffffffffffff12
 RAX: 0000000000000001 RBX: ffffffff980085a0 RCX: dead000000000200
 RDX: ffffac51d3893c40 RSI: 0000000000000202 RDI: 0000000000000202
 RBP: 0000000000000202 R08: ffffac51d3893c40 R09: 0000000000000000
 R10: 00000000000000b9 R11: 00000000000004b3 R12: 0000000000000a20
 R13: ffffd228f3e5a200 R14: ffff963cf7f58d10 R15: ffffd228f3e5a200
 FS:  0000000000000000(0000) GS:ffff9625bfb00000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 00007f5035487500 CR3: 0000000432810004 CR4: 00000000003706e0
 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 Call Trace:
  wake_up_page_bit+0x8a/0x110
  iomap_finish_ioend+0xd7/0x1c0
  iomap_finish_ioends+0x7f/0xb0
  xfs_end_ioend+0x6b/0x100 [xfs]
  ? xfs_setfilesize_ioend+0x60/0x60 [xfs]
  xfs_end_io+0xb9/0xe0 [xfs]
  process_one_work+0x1a7/0x360
  worker_thread+0x1fa/0x390
  ? create_worker+0x1a0/0x1a0
  kthread+0x116/0x130
  ? kthread_flush_work_fn+0x10/0x10
  ret_from_fork+0x35/0x40

Jens suggested adding a latency-reducing cond_resched() to the loop in
iomap_finish_ioends().

Suggested-by: Jens Axboe <axboe@kernel.dk>
Fixes: 598ecfbaa742 ("iomap: lift the xfs writeback code to iomap")
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
---
 fs/iomap/buffered-io.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
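For reference, the hunk under discussion (reproduced from where Matthew Wilcox quotes it verbatim later in the thread) adds the cond_resched() to the innermost per-segment loop:

```diff
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1052,9 +1052,11 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error)
 			next = bio->bi_private;
 
 		/* walk each page on bio, ending page IO on them */
-		bio_for_each_segment_all(bv, bio, iter_all)
+		bio_for_each_segment_all(bv, bio, iter_all) {
 			iomap_finish_page_writeback(inode, bv->bv_page, error,
 					bv->bv_len);
+			cond_resched();
+		}
 		bio_put(bio);
 	}
 	/* The ioend has been freed by bio_put() */
```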

Comments

Jens Axboe Dec. 30, 2021, 9:24 p.m. UTC | #1
On 12/30/21 11:35 AM, trondmy@kernel.org wrote:
> [... patch description and stack trace snipped ...]
> 
> Jens suggested adding a latency-reducing cond_resched() to the loop in
> iomap_finish_ioends().

The patch doesn't add it there though, I was suggesting:

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 71a36ae120ee..4ad2436a936a 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1078,6 +1078,7 @@ iomap_finish_ioends(struct iomap_ioend *ioend, int error)
 		ioend = list_first_entry(&tmp, struct iomap_ioend, io_list);
 		list_del_init(&ioend->io_list);
 		iomap_finish_ioend(ioend, error);
+		cond_resched();
 	}
 }
 EXPORT_SYMBOL_GPL(iomap_finish_ioends);

as I don't think you need it once-per-vec. But not sure if you tested
that variant or not...
Trond Myklebust Dec. 30, 2021, 10:25 p.m. UTC | #2
On Thu, 2021-12-30 at 13:24 -0800, Jens Axboe wrote:
> On 12/30/21 11:35 AM, trondmy@kernel.org wrote:
> > [... patch description and stack trace snipped ...]
> > 
> > Jens suggested adding a latency-reducing cond_resched() to the loop
> > in iomap_finish_ioends().
> 
> The patch doesn't add it there though, I was suggesting:
> 
> [... diff snipped ...]
> 
> as I don't think you need it once-per-vec. But not sure if you tested
> that variant or not...
> 

Yes, we did test that variant, but were still seeing the soft lockups
on Azure, hence why I moved it into the inner loop.
Jens Axboe Dec. 30, 2021, 10:27 p.m. UTC | #3
On 12/30/21 2:25 PM, Trond Myklebust wrote:
> On Thu, 2021-12-30 at 13:24 -0800, Jens Axboe wrote:
>> [... quoted patch description and diff snipped ...]
> 
> Yes, we did test that variant, but were still seeing the soft lockups
> on Azure, hence why I moved it into the inner loop.

Gotcha - but maybe just outside the vec loop then, after the bio_put()?
Once per vec seems excessive, each vec shouldn't take long, but I guess
the ioend inlines can be long?
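A sketch of that per-bio placement (illustrative only, not a tested patch; it assumes the loop structure visible in the hunks quoted elsewhere in this thread):

```diff
 	for (bio = &ioend->io_inline_bio; bio; bio = next) {
 		...
 		/* walk each page on bio, ending page IO on them */
 		bio_for_each_segment_all(bv, bio, iter_all)
 			iomap_finish_page_writeback(inode, bv->bv_page, error,
 					bv->bv_len);
 		bio_put(bio);
+		cond_resched();	/* once per bio, not once per vec */
 	}
```

This yields one scheduling point per completed bio rather than one per page, trading a little completion latency for much less cond_resched() overhead.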
Trond Myklebust Dec. 30, 2021, 10:55 p.m. UTC | #4
On Thu, 2021-12-30 at 14:27 -0800, Jens Axboe wrote:
> > [... quoted context snipped ...]
> > 
> > Yes, we did test that variant, but were still seeing the soft lockups
> > on Azure, hence why I moved it into the inner loop.
> 
> Gotcha - but maybe just outside the vec loop then, after the bio_put()?
> Once per vec seems excessive, each vec shouldn't take long, but I guess
> the ioend inlines can be long?
> 

The stack trace is always the same, and is triggering when releasing
the spin lock in wake_up_page_bit() in that inner loop. I can try
moving the cond_resched() to that middle loop over the bios and
retesting.
Matthew Wilcox Dec. 31, 2021, 1:42 a.m. UTC | #5
On Thu, Dec 30, 2021 at 02:35:22PM -0500, trondmy@kernel.org wrote:
>  Workqueue: xfs-conv/md127 xfs_end_io [xfs]
>  RIP: 0010:_raw_spin_unlock_irqrestore+0x11/0x20
>  [... code bytes and registers snipped ...]
>  Call Trace:
>   wake_up_page_bit+0x8a/0x110
>   iomap_finish_ioend+0xd7/0x1c0
>   iomap_finish_ioends+0x7f/0xb0

> +++ b/fs/iomap/buffered-io.c
> @@ -1052,9 +1052,11 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error)
>  			next = bio->bi_private;
>  
>  		/* walk each page on bio, ending page IO on them */
> -		bio_for_each_segment_all(bv, bio, iter_all)
> +		bio_for_each_segment_all(bv, bio, iter_all) {
>  			iomap_finish_page_writeback(inode, bv->bv_page, error,
>  					bv->bv_len);
> +			cond_resched();
> +		}
>  		bio_put(bio);
>  	}
>  	/* The ioend has been freed by bio_put() */

As I recall, iomap_finish_ioend() can be called in softirq (or even
hardirq?) context currently.  I think we've seen similar things before,
and the solution suggested at the time was to aggregate fewer writeback
pages into a single bio.
Trond Myklebust Dec. 31, 2021, 6:16 a.m. UTC | #6
On Fri, 2021-12-31 at 01:42 +0000, Matthew Wilcox wrote:
> [... quoted stack trace and diff snipped ...]
> 
> As I recall, iomap_finish_ioend() can be called in softirq (or even
> hardirq?) context currently.  I think we've seen similar things before,
> and the solution suggested at the time was to aggregate fewer writeback
> pages into a single bio.

I haven't seen any evidence that iomap_finish_ioend() is being called
from anything other than a regular task context. Where can it be called
from softirq/hardirq and why is that a requirement?
Dave Chinner Jan. 1, 2022, 3:55 a.m. UTC | #7
On Fri, Dec 31, 2021 at 06:16:53AM +0000, Trond Myklebust wrote:
> On Fri, 2021-12-31 at 01:42 +0000, Matthew Wilcox wrote:
> > [... quoted stack trace and diff snipped ...]
> 
> I haven't seen any evidence that iomap_finish_ioend() is being called
> from anything other than a regular task context. Where can it be called
> from softirq/hardirq and why is that a requirement?

softirq based bio completion is possible, AFAIA. The path is
iomap_writepage_end_bio() -> iomap_finish_ioend() from the bio endio
completion callback set up by iomap_submit_bio(). This will happen
with gfs2 and zonefs, at least.

XFS, however, happens to override the generic bio endio completion
via its ->prepare_ioend so instead we go xfs_end_bio -> work queue
-> xfs_end_io -> xfs_end_ioend -> iomap_finish_ioends ->
iomap_finish_ioend.

So, yeah, if all you are looking at is XFS IO completions, you'll
only see them run from workqueue task context. Other filesystems can
run them from softirq based bio completion context.

As it is, if you are getting soft lockups in this location, that's
an indication that the ioend chain that is being built by XFS is
way, way too long. IOWs, the completion latency problem is caused by
a lack of submit side ioend chain length bounding in combination
with unbound completion side merging in xfs_end_bio - it's not a
problem with the generic iomap code....

Let's try to address this in the XFS code, rather than hack
unnecessary band-aids over the problem in the generic code...

Cheers,

Dave.
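One possible shape for the submit-side bounding Dave describes (purely illustrative; the constant name, the size cap, and the exact placement in iomap_can_add_to_ioend() are assumptions, not an actual patch):

```diff
+/*
+ * Illustrative cap: stop adding pages to an ioend once it spans this
+ * much IO, so that no single completion can run unbounded.
+ */
+#define IOMAP_MAX_IOEND_SIZE	(4096 * PAGE_SIZE)
+
 static bool
 iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
 		sector_t sector)
 {
 	...
+	if (wpc->ioend->io_size >= IOMAP_MAX_IOEND_SIZE)
+		return false;	/* force a new, shorter ioend */
 	return true;
 }
```

Completion-side ioend merging in XFS would need a matching bound so that merged chains stay short as well.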
Trond Myklebust Jan. 1, 2022, 5:39 p.m. UTC | #8
On Sat, 2022-01-01 at 14:55 +1100, Dave Chinner wrote:
> On Fri, Dec 31, 2021 at 06:16:53AM +0000, Trond Myklebust wrote:
> > On Fri, 2021-12-31 at 01:42 +0000, Matthew Wilcox wrote:
> > > On Thu, Dec 30, 2021 at 02:35:22PM -0500,
> > > trondmy@kernel.org wrote:
> > > >  Workqueue: xfs-conv/md127 xfs_end_io [xfs]
> > > >  RIP: 0010:_raw_spin_unlock_irqrestore+0x11/0x20
> > > >  Code: 7c ff 48 29 e8 4c 39 e0 76 cf 80 0b 08 eb 8c 90 90 90 90
> > > > 90
> > > > 90 90 90 90 90 0f 1f 44 00 00 e8 e6 db 7e ff 66 90 48 89 f7 57
> > > > 9d
> > > > <0f> 1f 44 00 00 c3 66 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00
> > > > 8b 07
> > > >  RSP: 0018:ffffac51d26dfd18 EFLAGS: 00000202 ORIG_RAX:
> > > > ffffffffffffff12
> > > >  RAX: 0000000000000001 RBX: ffffffff980085a0 RCX:
> > > > dead000000000200
> > > >  RDX: ffffac51d3893c40 RSI: 0000000000000202 RDI:
> > > > 0000000000000202
> > > >  RBP: 0000000000000202 R08: ffffac51d3893c40 R09:
> > > > 0000000000000000
> > > >  R10: 00000000000000b9 R11: 00000000000004b3 R12:
> > > > 0000000000000a20
> > > >  R13: ffffd228f3e5a200 R14: ffff963cf7f58d10 R15:
> > > > ffffd228f3e5a200
> > > >  FS:  0000000000000000(0000) GS:ffff9625bfb00000(0000)
> > > > knlGS:0000000000000000
> > > >  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > > >  CR2: 00007f5035487500 CR3: 0000000432810004 CR4:
> > > > 00000000003706e0
> > > >  DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> > > > 0000000000000000
> > > >  DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7:
> > > > 0000000000000400
> > > >  Call Trace:
> > > >   wake_up_page_bit+0x8a/0x110
> > > >   iomap_finish_ioend+0xd7/0x1c0
> > > >   iomap_finish_ioends+0x7f/0xb0
> > > 
> > > > +++ b/fs/iomap/buffered-io.c
> > > > @@ -1052,9 +1052,11 @@ iomap_finish_ioend(struct iomap_ioend
> > > > *ioend, int error)
> > > >                         next = bio->bi_private;
> > > >  
> > > >                 /* walk each page on bio, ending page IO on
> > > > them */
> > > > -               bio_for_each_segment_all(bv, bio, iter_all)
> > > > +               bio_for_each_segment_all(bv, bio, iter_all) {
> > > >                         iomap_finish_page_writeback(inode, bv-
> > > > > bv_page, error,
> > > >                                         bv->bv_len);
> > > > +                       cond_resched();
> > > > +               }
> > > >                 bio_put(bio);
> > > >         }
> > > >         /* The ioend has been freed by bio_put() */
> > > 
> > > As I recall, iomap_finish_ioend() can be called in softirq (or
> > > even
> > > hardirq?) context currently.  I think we've seen similar things
> > > before,
> > > and the solution suggested at the time was to aggregate fewer
> > > writeback
> > > pages into a single bio.
> > 
> > I haven't seen any evidence that iomap_finish_ioend() is being
> > called
> > from anything other than a regular task context. Where can it be
> > called
> > from softirq/hardirq and why is that a requirement?
> 
> softirq based bio completion is possible, AFAIA. The path is
> iomap_writepage_end_bio() -> iomap_finish_ioend() from the bio endio
> completion callback set up by iomap_submit_bio(). This will happen
> with gfs2 and zonefs, at least.
> 
> XFS, however, happens to override the generic bio endio completion
> via its ->prepare_ioend so instead we go xfs_end_bio -> work queue
> -> xfs_end_io -> xfs_end_ioend -> iomap_finish_ioends ->
> iomap_finish_ioend.
> 
> So, yeah, if all you are looking at is XFS IO completions, you'll
> only see them run from workqueue task context. Other filesystems can
> run them from softirq based bio completion context.
> 
> As it is, if you are getting soft lockups in this location, that's
> an indication that the ioend chain that is being built by XFS is
> way, way too long. IOWs, the completion latency problem is caused by
> a lack of submit side ioend chain length bounding in combination
> with unbound completion side merging in xfs_end_bio - it's not a
> problem with the generic iomap code....
> 
> Let's try to address this in the XFS code, rather than hack
> unnecessary band-aids over the problem in the generic code...
> 
> Cheers,
> 
> Dave.

Fair enough. As long as someone is working on a solution, then I'm
happy. Just a couple of things:

Firstly, we've verified that the cond_resched() in the bio loop does
suffice to resolve the issue with XFS, which would tend to confirm what
you're saying above about the underlying issue being the ioend chain
length.

Secondly, note that we've tested this issue with a variety of older
kernels, including 4.18.x, 5.1.x and 5.15.x, so please bear in mind
that it would be useful for any fix to be backward portable through the
stable mechanism.


Thanks, and Happy New Year!

  Trond
Dave Chinner Jan. 3, 2022, 10:03 p.m. UTC | #9
On Sat, Jan 01, 2022 at 05:39:45PM +0000, Trond Myklebust wrote:
> On Sat, 2022-01-01 at 14:55 +1100, Dave Chinner wrote:
> > As it is, if you are getting soft lockups in this location, that's
> > an indication that the ioend chain that is being built by XFS is
> > way, way too long. IOWs, the completion latency problem is caused by
> > a lack of submit side ioend chain length bounding in combination
> > with unbound completion side merging in xfs_end_bio - it's not a
> > problem with the generic iomap code....
> > 
> > Let's try to address this in the XFS code, rather than hack
> > unnecessary band-aids over the problem in the generic code...
> > 
> > Cheers,
> > 
> > Dave.
> 
> Fair enough. As long as someone is working on a solution, then I'm
> happy. Just a couple of things:
> 
> Firstly, we've verified that the cond_resched() in the bio loop does
> suffice to resolve the issue with XFS, which would tend to confirm what
> you're saying above about the underlying issue being the ioend chain
> length.
> 
> Secondly, note that we've tested this issue with a variety of older
> kernels, including 4.18.x, 5.1.x and 5.15.x, so please bear in mind
> that it would be useful for any fix to be backward portable through the
> stable mechanism.

The infrastructure hasn't changed that much, so whatever the result
is it should be backportable.

As it is, is there a specific workload that triggers this issue? Or
a specific machine config (e.g. large memory, slow storage). Are
there large fragmented files in use (e.g. randomly written VM image
files)? There are a few factors that can exacerbate the ioend chain
lengths, so it would be handy to have some idea of what is actually
triggering this behaviour...

Cheers,

Dave.
Trond Myklebust Jan. 4, 2022, 12:04 a.m. UTC | #10
On Tue, 2022-01-04 at 09:03 +1100, Dave Chinner wrote:
> On Sat, Jan 01, 2022 at 05:39:45PM +0000, Trond Myklebust wrote:
> > On Sat, 2022-01-01 at 14:55 +1100, Dave Chinner wrote:
> > > As it is, if you are getting soft lockups in this location,
> > > that's
> > > an indication that the ioend chain that is being built by XFS is
> > > way, way too long. IOWs, the completion latency problem is caused
> > > by
> > > a lack of submit side ioend chain length bounding in combination
> > > with unbound completion side merging in xfs_end_bio - it's not a
> > > problem with the generic iomap code....
> > > 
> > > Let's try to address this in the XFS code, rather than hack
> > > unnecessary band-aids over the problem in the generic code...
> > > 
> > > Cheers,
> > > 
> > > Dave.
> > 
> > Fair enough. As long as someone is working on a solution, then I'm
> > happy. Just a couple of things:
> > 
> > Firstly, we've verified that the cond_resched() in the bio loop
> > does
> > suffice to resolve the issue with XFS, which would tend to confirm
> > what
> > you're saying above about the underlying issue being the ioend
> > chain
> > length.
> > 
> > Secondly, note that we've tested this issue with a variety of older
> > kernels, including 4.18.x, 5.1.x and 5.15.x, so please bear in mind
> > that it would be useful for any fix to be backward portable through
> > the
> > stable mechanism.
> 
> The infrastructure hasn't changed that much, so whatever the result
> is it should be backportable.
> 
> As it is, is there a specific workload that triggers this issue? Or
> a specific machine config (e.g. large memory, slow storage). Are
> there large fragmented files in use (e.g. randomly written VM image
> files)? There are a few factors that can exacerbate the ioend chain
> lengths, so it would be handy to have some idea of what is actually
> triggering this behaviour...
> 
> Cheers,
> 
> Dave.

We have different reproducers. The common feature appears to be the
need for a decently fast box with fairly large memory (128GB in one
case, 400GB in the other). It has been reproduced with HDs, SSDs and
NVME systems.

On the 128GB box, we had it set up with 10+ disks in a JBOD
configuration and were running the AJA system tests.

On the 400GB box, we were just serially creating large (> 6GB) files
using fio and that was occasionally triggering the issue. However doing
an strace of that workload to disk reproduced the problem faster :-).

So really, it seems as if the problem is 'lots of data in cache' and
then flush it out.
Dave Chinner Jan. 4, 2022, 1:22 a.m. UTC | #11
On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust wrote:
> On Tue, 2022-01-04 at 09:03 +1100, Dave Chinner wrote:
> > On Sat, Jan 01, 2022 at 05:39:45PM +0000, Trond Myklebust wrote:
> > > On Sat, 2022-01-01 at 14:55 +1100, Dave Chinner wrote:
> > > > As it is, if you are getting soft lockups in this location,
> > > > that's
> > > > an indication that the ioend chain that is being built by XFS is
> > > > way, way too long. IOWs, the completion latency problem is caused
> > > > by
> > > > a lack of submit side ioend chain length bounding in combination
> > > > with unbound completion side merging in xfs_end_bio - it's not a
> > > > problem with the generic iomap code....
> > > > 
> > > > Let's try to address this in the XFS code, rather than hack
> > > > unnecessary band-aids over the problem in the generic code...
> > > > 
> > > > Cheers,
> > > > 
> > > > Dave.
> > > 
> > > Fair enough. As long as someone is working on a solution, then I'm
> > > happy. Just a couple of things:
> > > 
> > > Firstly, we've verified that the cond_resched() in the bio loop
> > > does
> > > suffice to resolve the issue with XFS, which would tend to confirm
> > > what
> > > you're saying above about the underlying issue being the ioend
> > > chain
> > > length.
> > > 
> > > Secondly, note that we've tested this issue with a variety of older
> > > kernels, including 4.18.x, 5.1.x and 5.15.x, so please bear in mind
> > > that it would be useful for any fix to be backward portable through
> > > the
> > > stable mechanism.
> > 
> > The infrastructure hasn't changed that much, so whatever the result
> > is it should be backportable.
> > 
> > As it is, is there a specific workload that triggers this issue? Or
> > a specific machine config (e.g. large memory, slow storage). Are
> > there large fragmented files in use (e.g. randomly written VM image
> > files)? There are a few factors that can exacerbate the ioend chain
> > lengths, so it would be handy to have some idea of what is actually
> > triggering this behaviour...
> > 
> > Cheers,
> > 
> > Dave.
> 
> We have different reproducers. The common feature appears to be the
> need for a decently fast box with fairly large memory (128GB in one
> case, 400GB in the other). It has been reproduced with HDs, SSDs and
> NVME systems.
> 
> On the 128GB box, we had it set up with 10+ disks in a JBOD
> configuration and were running the AJA system tests.
> 
> On the 400GB box, we were just serially creating large (> 6GB) files
> using fio and that was occasionally triggering the issue. However doing
> an strace of that workload to disk reproduced the problem faster :-).

Ok, that matches up with the "lots of logically sequential dirty
data on a single inode in cache" vector that is required to create
really long bio chains on individual ioends.

Can you try the patch below and see if addresses the issue?

Cheers,

Dave.
Trond Myklebust Jan. 4, 2022, 3:01 a.m. UTC | #12
On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust wrote:
> > On Tue, 2022-01-04 at 09:03 +1100, Dave Chinner wrote:
> > > On Sat, Jan 01, 2022 at 05:39:45PM +0000, Trond Myklebust wrote:
> > > > On Sat, 2022-01-01 at 14:55 +1100, Dave Chinner wrote:
> > > > > As it is, if you are getting soft lockups in this location,
> > > > > that's
> > > > > an indication that the ioend chain that is being built by XFS
> > > > > is
> > > > > way, way too long. IOWs, the completion latency problem is
> > > > > caused
> > > > > by
> > > > > a lack of submit side ioend chain length bounding in
> > > > > combination
> > > > > with unbound completion side merging in xfs_end_bio - it's
> > > > > not a
> > > > > problem with the generic iomap code....
> > > > > 
> > > > > Let's try to address this in the XFS code, rather than hack
> > > > > unnecessary band-aids over the problem in the generic code...
> > > > > 
> > > > > Cheers,
> > > > > 
> > > > > Dave.
> > > > 
> > > > Fair enough. As long as someone is working on a solution, then
> > > > I'm
> > > > happy. Just a couple of things:
> > > > 
> > > > Firstly, we've verified that the cond_resched() in the bio loop
> > > > does
> > > > suffice to resolve the issue with XFS, which would tend to
> > > > confirm
> > > > what
> > > > you're saying above about the underlying issue being the ioend
> > > > chain
> > > > length.
> > > > 
> > > > Secondly, note that we've tested this issue with a variety of
> > > > older
> > > > kernels, including 4.18.x, 5.1.x and 5.15.x, so please bear in
> > > > mind
> > > > that it would be useful for any fix to be backward portable
> > > > through
> > > > the
> > > > stable mechanism.
> > > 
> > > The infrastructure hasn't changed that much, so whatever the
> > > result
> > > is it should be backportable.
> > > 
> > > As it is, is there a specific workload that triggers this issue?
> > > Or
> > > a specific machine config (e.g. large memory, slow storage). Are
> > > there large fragmented files in use (e.g. randomly written VM
> > > image
> > > files)? There are a few factors that can exacerbate the ioend
> > > chain
> > > lengths, so it would be handy to have some idea of what is
> > > actually
> > > triggering this behaviour...
> > > 
> > > Cheers,
> > > 
> > > Dave.
> > 
> > We have different reproducers. The common feature appears to be the
> > need for a decently fast box with fairly large memory (128GB in one
> > case, 400GB in the other). It has been reproduced with HDs, SSDs
> > and
> > NVME systems.
> > 
> > On the 128GB box, we had it set up with 10+ disks in a JBOD
> > configuration and were running the AJA system tests.
> > 
> > On the 400GB box, we were just serially creating large (> 6GB)
> > files
> > using fio and that was occasionally triggering the issue. However
> > doing
> > an strace of that workload to disk reproduced the problem faster :-).
> 
> Ok, that matches up with the "lots of logically sequential dirty
> data on a single inode in cache" vector that is required to create
> really long bio chains on individual ioends.
> 
> Can you try the patch below and see if addresses the issue?
> 
> Cheers,
> 
> Dave.

Thanks Dave!

I'm building a new kernel for testing now and should have results ready
tomorrow at the latest.
Christoph Hellwig Jan. 4, 2022, 7:08 a.m. UTC | #13
On Tue, Jan 04, 2022 at 12:22:15PM +1100, Dave Chinner wrote:
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -1098,6 +1098,15 @@ iomap_ioend_can_merge(struct iomap_ioend *ioend, struct iomap_ioend *next)
>  		return false;
>  	if (ioend->io_offset + ioend->io_size != next->io_offset)
>  		return false;
> +	/*
> +	 * Do not merge physically discontiguous ioends. The filesystem
> +	 * completion functions will have to iterate the physical
> +	 * discontiguities even if we merge the ioends at a logical level, so
> +	 * we don't gain anything by merging physical discontiguities here.
> +	 */
> +	if (ioend->io_inline_bio.bi_iter.bi_sector + (ioend->io_size >> 9) !=

This open codes bio_end_sector()

> +	    next->io_inline_bio.bi_iter.bi_sector)

But more importantly I don't think just using the inline_bio makes sense
here as the ioend can have multiple bios.  Fortunately we should always
have the last built bio available in ->io_bio.

> +		return false;
>  	return true;
>  }
>  
> @@ -1241,6 +1250,13 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
>  		return false;
>  	if (sector != bio_end_sector(wpc->ioend->io_bio))
>  		return false;
> +	/*
> +	 * Limit ioend bio chain lengths to minimise IO completion latency. This
> +	 * also prevents long tight loops ending page writeback on all the pages
> +	 * in the ioend.
> +	 */
> +	if (wpc->ioend->io_size >= 4096 * PAGE_SIZE)
> +		return false;

And this stops making sense with the impending additions of large folio
support.  I think we need to count the pages/folios instead as the
operations are once per page/folio.
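[Editor's note: Christoph's objection can be made concrete by comparing the two cut-off heuristics side by side. This is a userspace sketch with invented names; the 4096-page figure comes from the quoted patch, and the count-based variant is only an interpretation of the suggestion above, not actual kernel code.]

```c
#define TOY_PAGE_SIZE		4096UL
/* Byte cap from the quoted patch: 4096 pages' worth of data. */
#define TOY_IOEND_MAX_BYTES	(4096UL * TOY_PAGE_SIZE)
/* Count cap along the lines Christoph suggests: completion does one
 * unit of work per page/folio, so bound the number of units directly. */
#define TOY_IOEND_MAX_UNITS	4096UL

/* Byte-based check, as in the patch's iomap_can_add_to_ioend() hunk. */
int toy_can_add_bytes(unsigned long io_size)
{
	return io_size < TOY_IOEND_MAX_BYTES;
}

/*
 * Count-based check: with large folios a single completion unit can
 * cover many pages, so a byte cap no longer tracks completion-side
 * work, while a per-unit count does regardless of folio size.
 */
int toy_can_add_unit(unsigned long io_units)
{
	return io_units < TOY_IOEND_MAX_UNITS;
}
```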
Brian Foster Jan. 4, 2022, 1:36 p.m. UTC | #14
On Sat, Jan 01, 2022 at 05:39:45PM +0000, Trond Myklebust wrote:
...
> 
> Fair enough. As long as someone is working on a solution, then I'm
> happy. Just a couple of things:
> 
> Firstly, we've verified that the cond_resched() in the bio loop does
> suffice to resolve the issue with XFS, which would tend to confirm what
> you're saying above about the underlying issue being the ioend chain
> length.
> 
> Secondly, note that we've tested this issue with a variety of older
> kernels, including 4.18.x, 5.1.x and 5.15.x, so please bear in mind
> that it would be useful for any fix to be backward portable through the
> stable mechanism.
> 

I've sent a couple or so different variants of this in the past. The
last I believe was here [1], but still never seemed to go anywhere
(despite having reviews on the first couple patches). That one was
essentially a sequence of adding a cond_resched() call in the iomap code
to address the soft lockup warning followed by capping the ioend size
for latency reasons.

Brian

[1] https://lore.kernel.org/linux-xfs/20210517171722.1266878-1-bfoster@redhat.com/

> 
> Thanks, and Happy New Year!
> 
>   Trond
> 
> -- 
> Trond Myklebust
> Linux NFS client maintainer, Hammerspace
> trond.myklebust@hammerspace.com
> 
>
Matthew Wilcox Jan. 4, 2022, 6:08 p.m. UTC | #15
On Mon, Jan 03, 2022 at 11:08:54PM -0800, hch@infradead.org wrote:
> > +	/*
> > +	 * Limit ioend bio chain lengths to minimise IO completion latency. This
> > +	 * also prevents long tight loops ending page writeback on all the pages
> > +	 * in the ioend.
> > +	 */
> > +	if (wpc->ioend->io_size >= 4096 * PAGE_SIZE)
> > +		return false;
> 
> And this stops making sense with the impending additions of large folio
> support.  I think we need to count the pages/folios instead as the
> operations are once per page/folio.

I think it's fine to put in a fix like this now that's readily
backportable.  For folios, I can't help but think we want a
restructuring to iterate per-extent first, then per-folio and finally
per-sector instead of the current model where we iterate per folio,
looking up the extent for each sector.

Particularly for the kind of case Trond is talking about here; when we
want to fsync(), as long as the entire folio is Uptodate, we want to
write the entire thing back.  Doing it in portions and merging them back
together seems like a lot of wasted effort.
Christoph Hellwig Jan. 4, 2022, 6:14 p.m. UTC | #16
On Tue, Jan 04, 2022 at 06:08:24PM +0000, Matthew Wilcox wrote:
> I think it's fine to put in a fix like this now that's readily
> backportable.  For folios, I can't help but think we want a
> restructuring to iterate per-extent first, then per-folio and finally
> per-sector instead of the current model where we iterate per folio,
> looking up the extent for each sector.

We don't look up the extent for each sector.  We look up the extent
once and then add as much of it as we can to the bio until either the
bio is full or the extent ends.  In the first case we then allocate
a new bio and add it to the ioend.

> Particularly for the kind of case Trond is talking about here; when we
> want to fsync(), as long as the entire folio is Uptodate, we want to
> write the entire thing back.  Doing it in portions and merging them back
> together seems like a lot of wasted effort.

Writing everything together should be the common case.
Darrick J. Wong Jan. 4, 2022, 7:22 p.m. UTC | #17
On Tue, Jan 04, 2022 at 10:14:27AM -0800, hch@infradead.org wrote:
> On Tue, Jan 04, 2022 at 06:08:24PM +0000, Matthew Wilcox wrote:
> > I think it's fine to put in a fix like this now that's readily
> > backportable.  For folios, I can't help but think we want a
> > restructuring to iterate per-extent first, then per-folio and finally
> > per-sector instead of the current model where we iterate per folio,
> > looking up the extent for each sector.
> 
> We don't look up the extent for each sector.  We look up the extent
> once and then add as much of it as we can to the bio until either the
> bio is full or the extent ends.  In the first case we then allocate
> a new bio and add it to the ioend.

Can we track the number of folios that have been bio_add_folio'd to the
iomap_ioend, and make iomap_can_add_to_ioend return false when the
number of folios reaches some threshold?  I think that would solve the
problem of overly large ioends while not splitting folios across ioends
unnecessarily.

As for where to put a cond_resched() call, I think we'd need to change
iomap_ioend_can_merge to avoid merging two ioends if their folio count
exceeds the same(?) threshold, and then one could put the cond_resched
after each iomap_finish_ioend call in iomap_finish_ioends, and declare
that iomap_finish_ioends cannot be called from atomic context.

I forget if anyone ever benchmarked the actual overhead of cond_resched,
but if my dim memory serves, it's not cheap but also not expensive.

Limiting each ioend to (say) 16k folios and not letting small ioends
merge into something bigger than that for the completion seems (to me
anyway) a balance between stalling out on marking pages after huge IOs
vs. losing the ability to coalesce xfs_end_ioend calls when a contiguous
range of a file has been written back but the backing isn't.

<shrug> That's just my ENOCOFFEE reaction, hopefully that wasn't total
nonsense.

--D

> > Particularly for the kind of case Trond is talking about here; when we
> > want to fsync(), as long as the entire folio is Uptodate, we want to
> > write the entire thing back.  Doing it in portions and merging them back
> > together seems like a lot of wasted effort.
> 
> Writing everything together should be the common case.
Darrick J. Wong Jan. 4, 2022, 7:23 p.m. UTC | #18
On Tue, Jan 04, 2022 at 08:36:42AM -0500, Brian Foster wrote:
> On Sat, Jan 01, 2022 at 05:39:45PM +0000, Trond Myklebust wrote:
> ...
> > 
> > Fair enough. As long as someone is working on a solution, then I'm
> > happy. Just a couple of things:
> > 
> > Firstly, we've verified that the cond_resched() in the bio loop does
> > suffice to resolve the issue with XFS, which would tend to confirm what
> > you're saying above about the underlying issue being the ioend chain
> > length.
> > 
> > Secondly, note that we've tested this issue with a variety of older
> > kernels, including 4.18.x, 5.1.x and 5.15.x, so please bear in mind
> > that it would be useful for any fix to be backward portable through the
> > stable mechanism.
> > 
> 
> I've sent a couple or so different variants of this in the past. The
> last I believe was here [1], but still never seemed to go anywhere
> (despite having reviews on the first couple patches). That one was
> essentially a sequence of adding a cond_resched() call in the iomap code
> to address the soft lockup warning followed by capping the ioend size
> for latency reasons.

Huh.  I wonder why I didn't ever merge that?  I said I was going to do
that for 5.14 and ... never did.  ISTR Matthew saying something about
wanting to key the decision off of the number of pages/folios we'd have
to touch, and then musing about adding QOS metrics, me getting fussy
about that, trying to figure out if there was a way to make
iomap_finish_page_writeback cheaper, and ...

<checks notes>

...and decided that since the folio merge was imminent (HA!) I would
merge it after all the dust settled.  Add several months of Things I
Still Cannot Talk About and now it's 2022. :(

Ah, ok, I'll go reply elsewhere in the thread since I think my thinking
on all this has evolved somewhat since then.

--D

> 
> Brian
> 
> [1] https://lore.kernel.org/linux-xfs/20210517171722.1266878-1-bfoster@redhat.com/
> 
> > 
> > Thanks, and Happy New Year!
> > 
> >   Trond
> > 
> > -- 
> > Trond Myklebust
> > Linux NFS client maintainer, Hammerspace
> > trond.myklebust@hammerspace.com
> > 
> > 
>
Dave Chinner Jan. 4, 2022, 9:16 p.m. UTC | #19
On Mon, Jan 03, 2022 at 11:08:54PM -0800, hch@infradead.org wrote:
> On Tue, Jan 04, 2022 at 12:22:15PM +1100, Dave Chinner wrote:
> > --- a/fs/iomap/buffered-io.c
> > +++ b/fs/iomap/buffered-io.c
> > @@ -1098,6 +1098,15 @@ iomap_ioend_can_merge(struct iomap_ioend *ioend, struct iomap_ioend *next)
> >  		return false;
> >  	if (ioend->io_offset + ioend->io_size != next->io_offset)
> >  		return false;
> > +	/*
> > +	 * Do not merge physically discontiguous ioends. The filesystem
> > +	 * completion functions will have to iterate the physical
> > +	 * discontiguities even if we merge the ioends at a logical level, so
> > +	 * we don't gain anything by merging physical discontiguities here.
> > +	 */
> > +	if (ioend->io_inline_bio.bi_iter.bi_sector + (ioend->io_size >> 9) !=
> 
> This open codes bio_end_sector()

No, it doesn't. The ioend can have chained bios or have others merged
and concatenated to the ioend->io_list, so ioend->io_size != length
of the first bio in the chain....

> > +	    next->io_inline_bio.bi_iter.bi_sector)
> 
> But more importantly I don't think just using the inline_bio makes sense
> here as the ioend can have multiple bios.  Fortunately we should always
> have the last built bio available in ->io_bio.

Except that merging chains ioends together and modifies the head's
io_size to account for the chained ioends we add to ioend->io_list.
Hence ioend->io_bio is not the last bio in a contiguous ioend chain.

> > +		return false;
> >  	return true;
> >  }
> >  
> > @@ -1241,6 +1250,13 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
> >  		return false;
> >  	if (sector != bio_end_sector(wpc->ioend->io_bio))
> >  		return false;
> > +	/*
> > +	 * Limit ioend bio chain lengths to minimise IO completion latency. This
> > +	 * also prevents long tight loops ending page writeback on all the pages
> > +	 * in the ioend.
> > +	 */
> > +	if (wpc->ioend->io_size >= 4096 * PAGE_SIZE)
> > +		return false;
> 
> And this stops making sense with the impending additions of large folio
> support.  I think we need to count the pages/folios instead as the
> operations are once per page/folio.

Agree, but I was looking at this initially as something easy to test
and backport.

Unfortunately, we hide the ioend switching in a function that can be
called many times per page/folio and the calling function has no
real clue when ioends get switched. Hence it's much more invasive to
correctly account for size based on variable sized folios attached
to bios in an ioend compared to hard coding a simple IO size limit.

Cheers,

Dave.
Dave Chinner Jan. 4, 2022, 9:52 p.m. UTC | #20
On Tue, Jan 04, 2022 at 11:22:27AM -0800, Darrick J. Wong wrote:
> On Tue, Jan 04, 2022 at 10:14:27AM -0800, hch@infradead.org wrote:
> > On Tue, Jan 04, 2022 at 06:08:24PM +0000, Matthew Wilcox wrote:
> > > I think it's fine to put in a fix like this now that's readily
> > > backportable.  For folios, I can't help but think we want a
> > > restructuring to iterate per-extent first, then per-folio and finally
> > > per-sector instead of the current model where we iterate per folio,
> > > looking up the extent for each sector.
> > 
> > We don't look up the extent for each sector.  We look up the extent
> > once and then add as much of it as we can to the bio until either the
> > bio is full or the extent ends.  In the first case we then allocate
> > a new bio and add it to the ioend.
> 
> Can we track the number of folios that have been bio_add_folio'd to the
> iomap_ioend, and make iomap_can_add_to_ioend return false when the
> number of folios reaches some threshold?  I think that would solve the
> problem of overly large ioends while not splitting folios across ioends
> unnecessarily.

See my reply to Christoph up thread.

The problem is multiple blocks per page/folio - bio_add_folio() will
get called for the same folio many times, and we end up not knowing
when a new page/folio is attached. Hence dynamically calculating it
as we build the bios is .... convoluted.

Alternatively, we could ask the bio how many segments it has
attached before we switch it out (or submit it) and add that to the
ioend count. That's probably the least invasive way of doing this,
as we already have wrapper functions for chaining and submitting
bios on ioends....

> As for where to put a cond_resched() call, I think we'd need to change
> iomap_ioend_can_merge to avoid merging two ioends if their folio count
> exceeds the same(?) threshold,

That I'm not so sure about. If the ioends are physically contiguous,
we do *much less* CPU work by doing a single merged extent
conversion transaction than doing one transaction per unmerged
ioend. i.e. we save a lot of completion CPU time by merging
physically contiguous ioends, but we save none by merging physically
discontiguous ioends.

Yes, I can see that we probably still want to limit the ultimate
size of the merged, physically contiguous ioend, but I don't think
it's anywhere near as small as the IO submission sized chunks need
to be.

> and then one could put the cond_resched
> after each iomap_finish_ioend call in iomap_finish_ioends, and declare
> that iomap_finish_ioends cannot be called from atomic context.

iomap does not do ioend merging by itself. The filesystem decides if
merging is to be done - only XFS calls iomap_ioend_try_merge() right
now, so it's the only filesystem that uses completion merging.

Hence generic iomap code will only end up calling
iomap_finish_ioends() with the same ioend that was submitted. i.e.
capped to 4096 pages by this patch. Therefore it does not need
cond_resched() calls - the place that needs it is where the ioends
are merged and then finished. That is, in the filesystem completion
processing that does the merging....

> I forget if anyone ever benchmarked the actual overhead of cond_resched,
> but if my dim memory serves, it's not cheap but also not expensive.

The overhead is noise if called once per ioend.

> Limiting each ioend to (say) 16k folios and not letting small ioends
> merge into something bigger than that for the completion seems (to me
> anyway) a balance between stalling out on marking pages after huge IOs
> vs. losing the ability to coalesce xfs_end_ioend calls when a contiguous
> range of file range has been written back but the backing isn't.

Remember, we only end up in xfs_end_ioend() for COW, unwritten
conversion or extending the file. For pure overwrites, we already
go through the generic iomap_writepage_end_bio() ->
iomap_finish_ioend() (potentially in softirq context) path and don't
do any completion merging at all. Hence for this path we need
submission side ioend size limiting as there's only one ioend to
process per completion...

Largely this discussion is about heuristics. The submission side
needs to have a heuristic to limit single ioend sizes because of the
above completion path, and any filesystem that does merging needs
other heuristics that match the mechanisms it uses merging to
optimise.

Hence I think that we want the absolute minimum heuristics in the
iomap code that limit the size of a single ioend completion so the
generic iomap paths do not need "cond_resched()" magic sprinkled
through them, whilst filesystems that do merging need to control and
handle merged ioends appropriately.

filesystems that do merging need to have their own heuristics to
control merging and avoid creating huge ioends. The current merging
code has one user - XFS - and so it's largely XFS specific behaviour
it encodes. It might just be simpler to move the merging heuristics
back up into the XFS layer for the moment and worry about generic
support when some other filesystem wants to use completion
merging...

Cheers,

Dave.
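[Editor's note: the "least invasive" accounting Dave describes above, asking each bio how many segments it has before switching it out and adding that to a per-ioend count, can be sketched as a toy model. All names here are hypothetical stand-ins, not the kernel bio API.]

```c
/*
 * At the points where the iomap code already wraps bio chaining and
 * submission on an ioend, fold the finished bio's segment count into
 * a per-ioend counter of projected completion-side work.
 */
struct toy_bio {
	unsigned int nr_segments;	/* stand-in for bio_segments(bio) */
};

struct toy_acct_ioend {
	unsigned long io_completions;	/* projected completion-side calls */
};

/* Called once per bio when it is chained to, or submitted for, the ioend. */
void toy_ioend_account_bio(struct toy_acct_ioend *ioend,
			   const struct toy_bio *bio)
{
	ioend->io_completions += bio->nr_segments;
}

/* Submission cut-off keyed on accumulated completion work, not bytes. */
int toy_ioend_full(const struct toy_acct_ioend *ioend, unsigned long max)
{
	return ioend->io_completions >= max;
}
```

The attraction of this placement is that the accounting only touches the existing chain/submit wrappers, rather than every bio_add_folio() call site.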
Darrick J. Wong Jan. 4, 2022, 11:12 p.m. UTC | #21
On Wed, Jan 05, 2022 at 08:52:27AM +1100, Dave Chinner wrote:
> On Tue, Jan 04, 2022 at 11:22:27AM -0800, Darrick J. Wong wrote:
> > On Tue, Jan 04, 2022 at 10:14:27AM -0800, hch@infradead.org wrote:
> > > On Tue, Jan 04, 2022 at 06:08:24PM +0000, Matthew Wilcox wrote:
> > > > I think it's fine to put in a fix like this now that's readily
> > > > backportable.  For folios, I can't help but think we want a
> > > > restructuring to iterate per-extent first, then per-folio and finally
> > > > per-sector instead of the current model where we iterate per folio,
> > > > looking up the extent for each sector.
> > > 
> > > We don't look up the extent for each sector.  We look up the extent
> > > once and then add as much of it as we can to the bio until either the
> > > bio is full or the extent ends.  In the first case we then allocate
> > > a new bio and add it to the ioend.
> > 
> > Can we track the number of folios that have been bio_add_folio'd to the
> > iomap_ioend, and make iomap_can_add_to_ioend return false when the
> > number of folios reaches some threshold?  I think that would solve the
> > problem of overly large ioends while not splitting folios across ioends
> > unnecessarily.
> 
> See my reply to Christoph up thread.
> 
> The problem is multiple blocks per page/folio - bio_add_folio() will
> get called for the same folio many times, and we end up not knowing
> when a new page/folio is attached. Hence dynamically calculating it
> as we build the bios is .... convoluted.

Hm.  Indulge me in a little more frame-shifting for a moment --

As I see it, the problem here is that we're spending too much time
calling iomap_finish_page_writeback over and over and over, right?

If we have a single page with a single mapping that fits in a single
bio, that means we call bio_add_page once, and on the other end we call
iomap_finish_page_writeback once.

If we have (say) an 8-page folio with 4 blocks per page, in the worst
case we'd create 32 different ioends, each with a single-block bio,
which means 32 calls to iomap_finish_page_writeback, right?

From what I can see, the number of bio_add_folio calls is proportional
to the amount of ioend work we do without providing any external signs
of life to the watchdog, right?

So forget the number of folios or the byte count involved.  Isn't the
number of future iomap_finish_page_writeback calls exactly the metric
that we want to decide when to cut off ioend submission?

That was what I was getting at this morning; too bad the description I
came up with made it sound like I wanted to count actual folios, not
solely the calls to bio_add_folio.

> Alternatively, we could ask the bio how many segments it has
> attached before we switch it out (or submit it) and add that to the
> ioend count. That's probably the least invasive way of doing this,
> as we already have wrapper functions for chaining and submitting
> bios on ioends....
> 
> > As for where to put a cond_resched() call, I think we'd need to change
> > iomap_ioend_can_merge to avoid merging two ioends if their folio count
> > exceeds the same(?) threshold,
> 
> That I'm not so sure about. If the ioends are physically contiguous,
> we do *much less* CPU work by doing a single merged extent
> conversion transaction than doing one transaction per unmerged
> ioend. i.e. we save a lot of completion CPU time by merging
> physically contiguous ioends, but we save none by merging physically
> discontiguous ioends.
> 
> Yes, I can see that we probably still want to limit the ultimate
> size of the merged, physically contiguous ioend, but I don't think
> it's anywhere near as small as the IO submission sized chunks need
> to be.

Good point.  Yes, higher limits for the merging makes sense.

> > and then one could put the cond_resched
> > after each iomap_finish_ioend call in iomap_finish_ioends, and declare
> > that iomap_finish_ioends cannot be called from atomic context.
> 
> iomap does not do ioend merging by itself. The filesystem decides if
> merging is to be done - only XFS calls iomap_ioend_try_merge() right
> now, so it's the only filesystem that uses completion merging.

I know, I remember that code.

> Hence generic iomap code will only end up calling
> iomap_finish_ioends() with the same ioend that was submitted. i.e.
> > capped to 4096 pages by this patch. Therefore it does not need
> cond_resched() calls - the place that needs it is where the ioends
> are merged and then finished. That is, in the filesystem completion
> processing that does the merging....

Huh?  I propose adding cond_resched to iomap_finish_ioends (plural),
which walks a list of ioends and calls iomap_finish_ioend (singular) on
each ioend.  IOWs, we'd call cond_resched in between finishing one ioend
and starting on the next one.  Isn't that where ioends are finished?

(I'm starting to wonder if we're talking past each other?)

So looking at xfs_end_io:

/* Finish all pending io completions. */
void
xfs_end_io(
	struct work_struct	*work)
{
	struct xfs_inode	*ip =
		container_of(work, struct xfs_inode, i_ioend_work);
	struct iomap_ioend	*ioend;
	struct list_head	tmp;
	unsigned long		flags;

	spin_lock_irqsave(&ip->i_ioend_lock, flags);
	list_replace_init(&ip->i_ioend_list, &tmp);
	spin_unlock_irqrestore(&ip->i_ioend_lock, flags);

	iomap_sort_ioends(&tmp);
	while ((ioend = list_first_entry_or_null(&tmp, struct iomap_ioend,
			io_list))) {
		list_del_init(&ioend->io_list);

Here we pull the first ioend off the sorted list of ioends.

		iomap_ioend_try_merge(ioend, &tmp);

Now we've merged that first ioend with as many subsequent ioends as we
could merge.  Let's say there were 200 ioends, each 100MB.  Now ioend
is a chain (of those other 199 ioends) representing 20GB of data.

		xfs_end_ioend(ioend);

At the end of this routine, we call iomap_finish_ioends on the 20GB
ioend chain.  This now has to mark 5.2 million pages...

		cond_resched();

...before we get to the cond_resched.  I'd really rather do the
cond_resched between each of those 200 ioends that (supposedly) are
small enough not to trip the hangcheck timers.

	}
}
/*
 * Mark writeback finished on a chain of ioends.  Caller must not call
 * this function from atomic/softirq context.
 */
void
iomap_finish_ioends(struct iomap_ioend *ioend, int error)
{
	struct list_head tmp;

	list_replace_init(&ioend->io_list, &tmp);
	iomap_finish_ioend(ioend, error);

	while (!list_empty(&tmp)) {
		cond_resched();

So I propose doing it ^^^ here instead.

		ioend = list_first_entry(&tmp, struct iomap_ioend, io_list);
		list_del_init(&ioend->io_list);
		iomap_finish_ioend(ioend, error);
	}
}


> > I forget if anyone ever benchmarked the actual overhead of cond_resched,
> > but if my dim memory serves, it's not cheap but also not expensive.
> 
> The overhead is noise if called once per ioend.
> 
> > Limiting each ioend to (say) 16k folios and not letting small ioends
> > merge into something bigger than that for the completion seems (to me
> > anyway) a balance between stalling out on marking pages after huge IOs
> > vs. losing the ability to coalesce xfs_end_ioend calls when a contiguous
> > file range has been written back but the backing isn't.
> 
> Remember, we only end up in xfs_end_ioend() for COW, unwritten
> conversion or extending the file. For pure overwrites, we already
> go through the generic iomap_writepage_end_bio() ->
> iomap_finish_ioend() (potentially in softirq context) path and don't
> do any completion merging at all. Hence for this path we need
> submission side ioend size limiting as there's only one ioend to
> process per completion...
> 
> Largely this discussion is about heuristics. The submission side
> needs to have a heuristic to limit single ioend sizes because of the
> above completion path, and any filesystem that does merging needs
> other heuristics that match the mechanisms it uses merging to
> optimise.

Yep, agreed.

> Hence I think that we want the absolute minimum heuristics in the
> iomap code that limit the size of a single ioend completion so the
> generic iomap paths do not need "cond_resched()" magic sprinkled
> through them, whilst filesystems that do merging need to control and
> handle merged ioends appropriately.

Agreed.  I think the only point of conflict about this part of the
solution is how we figure out when an ioend has gotten too big -- byte
counts are the {obvious,backportable} solution as you say, but I also
feel that byte counts are a (somewhat poor) proxy for the amount of work
that will have to be done.

> filesystems that do merging need to have their own heuristics to
> control merging and avoid creating huge ioends. The current merging
> code has one user - XFS - and so it's largely XFS specific behaviour
> it encodes. It might just be simpler to move the merging heuristics
> back up into the XFS layer for the moment and worry about generic
> support when some other filesystem wants to use completion
> merging...

<nod> At the moment, iomap_finish_ioends (plural) is how that one
filesystem that uses ioend merging finishes a merged ioend chain.

--D

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
Trond Myklebust Jan. 5, 2022, 2:09 a.m. UTC | #22
On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust wrote:
> > On Tue, 2022-01-04 at 09:03 +1100, Dave Chinner wrote:
> > > On Sat, Jan 01, 2022 at 05:39:45PM +0000, Trond Myklebust wrote:
> > > > On Sat, 2022-01-01 at 14:55 +1100, Dave Chinner wrote:
> > > > > As it is, if you are getting soft lockups in this location,
> > > > > that's
> > > > > an indication that the ioend chain that is being built by XFS
> > > > > is
> > > > > way, way too long. IOWs, the completion latency problem is
> > > > > caused
> > > > > by
> > > > > a lack of submit side ioend chain length bounding in
> > > > > combination
> > > > > with unbound completion side merging in xfs_end_bio - it's
> > > > > not a
> > > > > problem with the generic iomap code....
> > > > > 
> > > > > Let's try to address this in the XFS code, rather than hack
> > > > > unnecessary band-aids over the problem in the generic code...
> > > > > 
> > > > > Cheers,
> > > > > 
> > > > > Dave.
> > > > 
> > > > Fair enough. As long as someone is working on a solution, then
> > > > I'm
> > > > happy. Just a couple of things:
> > > > 
> > > > Firstly, we've verified that the cond_resched() in the bio loop
> > > > does
> > > > suffice to resolve the issue with XFS, which would tend to
> > > > confirm
> > > > what
> > > > you're saying above about the underlying issue being the ioend
> > > > chain
> > > > length.
> > > > 
> > > > Secondly, note that we've tested this issue with a variety of
> > > > older
> > > > kernels, including 4.18.x, 5.1.x and 5.15.x, so please bear in
> > > > mind
> > > > that it would be useful for any fix to be backward portable
> > > > through
> > > > the
> > > > stable mechanism.
> > > 
> > > The infrastructure hasn't changed that much, so whatever the
> > > result
> > > is it should be backportable.
> > > 
> > > As it is, is there a specific workload that triggers this issue?
> > > Or
> > > a specific machine config (e.g. large memory, slow storage). Are
> > > there large fragmented files in use (e.g. randomly written VM
> > > image
> > > files)? There are a few factors that can exacerbate the ioend
> > > chain
> > > lengths, so it would be handy to have some idea of what is
> > > actually
> > > triggering this behaviour...
> > > 
> > > Cheers,
> > > 
> > > Dave.
> > 
> > We have different reproducers. The common feature appears to be the
> > need for a decently fast box with fairly large memory (128GB in one
> > case, 400GB in the other). It has been reproduced with HDs, SSDs
> > and
> > NVME systems.
> > 
> > On the 128GB box, we had it set up with 10+ disks in a JBOD
> > configuration and were running the AJA system tests.
> > 
> > On the 400GB box, we were just serially creating large (> 6GB)
> > files
> > using fio and that was occasionally triggering the issue. However
> > doing
> > an strace of that workload to disk reproduced the problem faster :-
> > ).
> 
> Ok, that matches up with the "lots of logically sequential dirty
> data on a single inode in cache" vector that is required to create
> really long bio chains on individual ioends.
> 
> Can you try the patch below and see if addresses the issue?
> 

That patch does seem to fix the soft lockups.
Dave Chinner Jan. 5, 2022, 2:10 a.m. UTC | #23
On Tue, Jan 04, 2022 at 03:12:30PM -0800, Darrick J. Wong wrote:
> On Wed, Jan 05, 2022 at 08:52:27AM +1100, Dave Chinner wrote:
> > On Tue, Jan 04, 2022 at 11:22:27AM -0800, Darrick J. Wong wrote:
> > > On Tue, Jan 04, 2022 at 10:14:27AM -0800, hch@infradead.org wrote:
> > > > On Tue, Jan 04, 2022 at 06:08:24PM +0000, Matthew Wilcox wrote:
> > > > > I think it's fine to put in a fix like this now that's readily
> > > > > backportable.  For folios, I can't help but think we want a
> > > > > restructuring to iterate per-extent first, then per-folio and finally
> > > > > per-sector instead of the current model where we iterate per folio,
> > > > > looking up the extent for each sector.
> > > > 
> > > > We don't look up the extent for each sector.  We look up the extent
> > > > once and then add as much of it as we can to the bio until either the
> > > > bio is full or the extent ends.  In the first case we then allocate
> > > > a new bio and add it to the ioend.
> > > 
> > > Can we track the number of folios that have been bio_add_folio'd to the
> > > iomap_ioend, and make iomap_can_add_to_ioend return false when the
> > > number of folios reaches some threshold?  I think that would solve the
> > > problem of overly large ioends while not splitting folios across ioends
> > > unnecessarily.
> > 
> > See my reply to Christoph up thread.
> > 
> > The problem is multiple blocks per page/folio - bio_add_folio() will
> > get called for the same folio many times, and we end up not knowing
> > when a new page/folio is attached. Hence dynamically calculating it
> > as we build the bios is .... convoluted.
> 
> Hm.  Indulge me in a little more frame-shifting for a moment --
> 
> As I see it, the problem here is that we're spending too much time
> calling iomap_finish_page_writeback over and over and over, right?
> 
> If we have a single page with a single mapping that fits in a single
> bio, that means we call bio_add_page once, and on the other end we call
> iomap_finish_page_writeback once.
> 
> If we have (say) an 8-page folio with 4 blocks per page, in the worst
> case we'd create 32 different ioends, each with a single-block bio,
> which means 32 calls to iomap_finish_page_writeback, right?

Yes, but in this case, we've had to issue and complete 32 bios and
ioends to get one call to end_page_writeback(). That is overhead we
cannot avoid if we have worst-case physical fragmentation of the
filesystem. But, quite frankly, if that's the case we just don't
care about performance of IO completion - performance will suck
because we're doing 32 IOs instead of 1 for that data, not because
IO completion has to do more work per page/folio....

> From what I can see, the number of bio_add_folio calls is proportional
> to the amount of ioend work we do without providing any external signs
> of life to the watchdog, right?
> 
> So forget the number of folios or the byte count involved.  Isn't the
> number of future iomap_finish_page_writeback calls exactly the metric
> that we want to decide when to cut off ioend submission?

Isn't that exactly what I suggested by counting bio segments in the
ioend at bio submission time? I mean, iomap_finish_page_writeback()
iterates bio segments, not pages, folios or filesystem blocks....

> > Hence generic iomap code will only end up calling
> > iomap_finish_ioends() with the same ioend that was submitted. i.e.
> > capped to 4096 pages by this patch. Therefore it does not need
> > cond_resched() calls - the place that needs it is where the ioends
> > are merged and then finished. That is, in the filesystem completion
> > processing that does the merging....
> 
> Huh?  I propose adding cond_resched to iomap_finish_ioends (plural),

Which is only called from XFS on merged ioends after XFS has
processed the merged ioend.....

> which walks a list of ioends and calls iomap_finish_ioend (singular) on
> each ioend.  IOWs, we'd call cond_resched in between finishing one ioend
> and starting on the next one.  Isn't that where ioends are finished?
> 
> (I'm starting to wonder if we're talking past each other?)
> 
> So looking at xfs_end_io:
> 
> /* Finish all pending io completions. */
> void
> xfs_end_io(
> 	struct work_struct	*work)
> {
> 	struct xfs_inode	*ip =
> 		container_of(work, struct xfs_inode, i_ioend_work);
> 	struct iomap_ioend	*ioend;
> 	struct list_head	tmp;
> 	unsigned long		flags;
> 
> 	spin_lock_irqsave(&ip->i_ioend_lock, flags);
> 	list_replace_init(&ip->i_ioend_list, &tmp);
> 	spin_unlock_irqrestore(&ip->i_ioend_lock, flags);
> 
> 	iomap_sort_ioends(&tmp);
> 	while ((ioend = list_first_entry_or_null(&tmp, struct iomap_ioend,
> 			io_list))) {
> 		list_del_init(&ioend->io_list);
> 
> Here we pull the first ioend off the sorted list of ioends.
> 
> 		iomap_ioend_try_merge(ioend, &tmp);
> 
> Now we've merged that first ioend with as many subsequent ioends as we
> could merge.  Let's say there were 200 ioends, each 100MB.  Now ioend

Ok, so how do we get to this completion state right now?

1. an ioend is a physically contiguous extent so submission is
   broken down into an ioend per physical extent.
2. we merge logically contiguous ioends at completion.

So, if we have 200 ioends of 100MB each that are logically
contiguous we'll currently always merge them into a single 20GB
ioend that gets processed as a single entity even if submission
broke them up because they were physically discontiguous.

Now, with this patch we add:

3. Individual ioends are limited to 16MB.
4. completion can only merge physically contiguous ioends.
5. we cond_resched() between physically contiguous ioend completion.

Submission will break that logically contiguous 20GB dirty range
down into 200x6x16MB ioends.

Now completion will only merge ioends that are both physically and
logically contiguous. That results in a maximum merged ioend chain
size of 100MB at completion. They'll get merged one 100MB chunk at a
time.

> is a chain (of those other 199 ioends) representing 20GB of data.
> 
> 		xfs_end_ioend(ioend);

We now do one conversion transaction for the entire 100MB extent,
then....

> At the end of this routine, we call iomap_finish_ioends on the 20GB
> ioend chain.  This now has to mark 5.2 million pages...

run iomap_finish_ioends() on 100MB of pages, which is about 25,000
pages, not 5 million...

> 		cond_resched();
> 
> ...before we get to the cond_resched.

... and so in this scenario this patch reduces the time between
reschedule events by a factor of 200 - the number of physical
extents the ioends map....

That's kind of my point - we can't ignore why the filesystem needs
merging or how it should optimise merging for it's own purposes in
this discussion. Because logically merged ioends require the
filesystem to do internal loops over physical discontiguities,
requiring us to drive cond_resched() into both the iomap loops and
the lower layer filesystem loops.

i.e. when we have ioend merging based on logical contiguity, we need
to limit the number of the loops the filesystem does internally, not
just the loops that the ioend code is doing...

> I'd really rather do the
> cond_resched between each of those 200 ioends that (supposedly) are
> small enough not to trip the hangcheck timers.
> 
> 	}
> }
> /*
>  * Mark writeback finished on a chain of ioends.  Caller must not call
>  * this function from atomic/softirq context.
>  */
> void
> iomap_finish_ioends(struct iomap_ioend *ioend, int error)
> {
> 	struct list_head tmp;
> 
> 	list_replace_init(&ioend->io_list, &tmp);
> 	iomap_finish_ioend(ioend, error);
> 
> 	while (!list_empty(&tmp)) {
> 		cond_resched();
> 
> So I propose doing it ^^^ here instead.
> 
> 		ioend = list_first_entry(&tmp, struct iomap_ioend, io_list);
> 		list_del_init(&ioend->io_list);
> 		iomap_finish_ioend(ioend, error);
> 	}
> }

Yes, but this only addresses a single aspect of the issue when
filesystem driven merging is used. That is, we might have just had
to do a long unbroken loop in xfs_end_ioend() that might have to run
conversion of several thousand physical extents that the logically
merged ioends might have covered. Hence even with the above, we'd
still need to add cond_resched() calls to the XFS code. Hence from
an XFS IO completion point of view, we only want to merge to
physical extent boundaries and issue cond_resched() at physical
extent boundaries because that's what our filesystem completion
processing loops on, not pages/folios.

Hence my point that we cannot ignore what the filesystem is doing
with these merged ioends and only think about iomap in isolation.

Cheers,

Dave.
kernel test robot Jan. 5, 2022, 2:31 a.m. UTC | #24
Greeting,

FYI, we noticed the following commit (built with gcc-9):

commit: f5934dda5442999d71eea07d9a324b259e5a36a5 ("[PATCH] iomap: Address soft lockup in iomap_finish_ioend()")
url: https://github.com/0day-ci/linux/commits/trondmy-kernel-org/iomap-Address-soft-lockup-in-iomap_finish_ioend/20211231-034313
base: https://git.kernel.org/cgit/fs/xfs/xfs-linux.git for-next
patch link: https://lore.kernel.org/linux-xfs/20211230193522.55520-1-trondmy@kernel.org

in testcase: xfstests
version: xfstests-x86_64-972d710-1_20211231
with following parameters:

	disk: 4HDD
	fs: xfs
	test: xfs-reflink-21
	ucode: 0x28

test-description: xfstests is a regression test suite for xfs and other filesystems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git


on test machine: 8 threads 1 sockets Intel(R) Core(TM) i7-4790 v3 @ 3.60GHz with 6G memory

caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):



If you fix the issue, kindly add following tag
Reported-by: kernel test robot <oliver.sang@intel.com>


[   40.451880][    C5] BUG: sleeping function called from invalid context at fs/iomap/buffered-io.c:1058
[   40.461015][    C5] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 0, name: swapper/5
[   40.469465][    C5] preempt_count: 101, expected: 0
[   40.474309][    C5] CPU: 5 PID: 0 Comm: swapper/5 Not tainted 5.16.0-rc5-00009-gf5934dda5442 #1
[   40.482940][    C5] Hardware name: Dell Inc. OptiPlex 9020/03CPWF, BIOS A11 04/01/2015
[   40.490773][    C5] Call Trace:
[   40.493882][    C5]  <IRQ>
[ 40.496560][ C5] dump_stack_lvl (lib/dump_stack.c:107) 
[ 40.500877][ C5] __might_resched.cold (kernel/sched/core.c:9539 kernel/sched/core.c:9492) 
[ 40.505879][ C5] ? folio_end_writeback (arch/x86/include/asm/atomic.h:123 include/linux/atomic/atomic-instrumented.h:543 include/linux/page_ref.h:210 include/linux/mm.h:738 include/linux/mm.h:743 include/linux/mm.h:1236 mm/filemap.c:1611) 
[ 40.510969][ C5] iomap_finish_ioend (include/linux/sched.h:2024 fs/iomap/buffered-io.c:1058) 
[ 40.515807][ C5] blk_update_request (block/blk-mq.c:744) 
[ 40.520722][ C5] scsi_end_request (drivers/scsi/scsi_lib.c:543) 
[ 40.525293][ C5] scsi_io_completion (drivers/scsi/scsi_lib.c:939) 
[ 40.530206][ C5] ? sd_completed_bytes (drivers/scsi/sd.c:2030) sd_mod
[ 40.535989][ C5] ? scsi_unblock_requests (drivers/scsi/scsi_lib.c:910) 
[ 40.541074][ C5] ? scsi_device_unbusy (arch/x86/include/asm/bitops.h:60 include/asm-generic/bitops/instrumented-atomic.h:29 include/linux/sbitmap.h:324 include/linux/sbitmap.h:333 drivers/scsi/scsi_lib.c:303) 
[ 40.546075][ C5] blk_complete_reqs (block/blk-mq.c:891 (discriminator 3)) 
[ 40.550727][ C5] __do_softirq (arch/x86/include/asm/jump_label.h:27 include/linux/jump_label.h:212 include/trace/events/irq.h:142 kernel/softirq.c:559) 
[ 40.555040][ C5] irq_exit_rcu (kernel/softirq.c:432 kernel/softirq.c:637 kernel/softirq.c:649) 
[ 40.559353][ C5] common_interrupt (arch/x86/kernel/irq.c:240 (discriminator 14)) 
[   40.563838][    C5]  </IRQ>
[   40.566600][    C5]  <TASK>
[ 40.569363][ C5] asm_common_interrupt (arch/x86/include/asm/idtentry.h:629) 
[ 40.574189][ C5] RIP: 0010:cpuidle_enter_state (drivers/cpuidle/cpuidle.c:259) 
[ 40.579877][ C5] Code: 89 c6 0f 1f 44 00 00 31 ff e8 3d 37 61 fe 80 3c 24 00 74 12 9c 58 f6 c4 02 0f 85 27 07 00 00 31 ff e8 25 17 76 fe fb 45 85 ed <0f> 88 97 03 00 00 49 63 ed 48 83 fd 09 0f 87 cf 08 00 00 48 8d 44
All code
========
   0:	89 c6                	mov    %eax,%esi
   2:	0f 1f 44 00 00       	nopl   0x0(%rax,%rax,1)
   7:	31 ff                	xor    %edi,%edi
   9:	e8 3d 37 61 fe       	callq  0xfffffffffe61374b
   e:	80 3c 24 00          	cmpb   $0x0,(%rsp)
  12:	74 12                	je     0x26
  14:	9c                   	pushfq 
  15:	58                   	pop    %rax
  16:	f6 c4 02             	test   $0x2,%ah
  19:	0f 85 27 07 00 00    	jne    0x746
  1f:	31 ff                	xor    %edi,%edi
  21:	e8 25 17 76 fe       	callq  0xfffffffffe76174b
  26:	fb                   	sti    
  27:	45 85 ed             	test   %r13d,%r13d
  2a:*	0f 88 97 03 00 00    	js     0x3c7		<-- trapping instruction
  30:	49 63 ed             	movslq %r13d,%rbp
  33:	48 83 fd 09          	cmp    $0x9,%rbp
  37:	0f 87 cf 08 00 00    	ja     0x90c
  3d:	48                   	rex.W
  3e:	8d                   	.byte 0x8d
  3f:	44                   	rex.R

Code starting with the faulting instruction
===========================================
   0:	0f 88 97 03 00 00    	js     0x39d
   6:	49 63 ed             	movslq %r13d,%rbp
   9:	48 83 fd 09          	cmp    $0x9,%rbp
   d:	0f 87 cf 08 00 00    	ja     0x8e2
  13:	48                   	rex.W
  14:	8d                   	.byte 0x8d
  15:	44                   	rex.R
[   40.599142][    C5] RSP: 0018:ffffc90000167d80 EFLAGS: 00000202
[   40.605000][    C5] RAX: dffffc0000000000 RBX: ffffe8ffffc80000 RCX: 000000000000001f
[   40.612750][    C5] RDX: 1ffff11026f570d1 RSI: 0000000023a34dfe RDI: ffff888137ab8688
[   40.620496][    C5] RBP: 0000000000000002 R08: 000000aa3dd5a974 R09: ffffed1026f57126
[   40.628244][    C5] R10: ffff888137ab892b R11: ffffed1026f57125 R12: ffffffff84d3a3a0
[   40.635990][    C5] R13: 0000000000000002 R14: 000000096b1e9b17 R15: ffffffff84d3a488
[ 40.643738][ C5] ? menu_reflect (drivers/cpuidle/governors/menu.c:440) 
[ 40.648137][ C5] cpuidle_enter (drivers/cpuidle/cpuidle.c:353) 
[ 40.652363][ C5] do_idle (kernel/sched/idle.c:158 kernel/sched/idle.c:239 kernel/sched/idle.c:306) 
[ 40.656244][ C5] ? arch_cpu_idle_exit+0xc0/0xc0 
[ 40.661070][ C5] cpu_startup_entry (kernel/sched/idle.c:402 (discriminator 1)) 
[ 40.665638][ C5] start_secondary (arch/x86/kernel/smpboot.c:224) 
[ 40.670210][ C5] ? set_cpu_sibling_map (arch/x86/kernel/smpboot.c:224) 
[ 40.675468][ C5] secondary_startup_64_no_verify (arch/x86/kernel/head_64.S:283) 
[   40.681155][    C5]  </TASK>
[   40.866821][ T2166] 420 (2166): drop_caches: 1
[   40.878623][ T2536] XFS (sda4): Unmounting Filesystem
[   40.967510][ T2542] XFS (sda4): Mounting V5 Filesystem
[   41.010880][ T2542] XFS (sda4): Ending clean mount
[   41.017620][ T2542] xfs filesystem being mounted at /fs/scratch supports timestamps until 2038 (0x7fffffff)
[   41.097015][ T2576] XFS (sda1): Unmounting Filesystem
[   41.266165][ T2643] XFS (sda4): Unmounting Filesystem
[   41.409393][ T2820] XFS (sda4): Mounting V5 Filesystem
[   41.430863][ T2820] XFS (sda4): Ending clean mount
[   41.437619][ T2820] xfs filesystem being mounted at /fs/scratch supports timestamps until 2038 (0x7fffffff)
[   41.454851][ T2830] XFS (sda4): Unmounting Filesystem
[   41.508142][  T346] xfs/420	_check_dmesg: something found in dmesg (see /lkp/benchmarks/xfstests/results//xfs/420.dmesg)
[   41.508152][  T346]
[   41.521191][  T346]
[   41.521196][  T346]
[   41.558537][ T1597] run fstests xfs/421 at 2022-01-01 19:19:09
[   42.000464][ T3060] XFS (sda1): Mounting V5 Filesystem
[   42.071087][ T3060] XFS (sda1): Ending clean mount
[   42.078076][ T3060] xfs filesystem being mounted at /fs/sda1 supports timestamps until 2038 (0x7fffffff)
[   43.073514][ T3137] XFS (sda4): Mounting V5 Filesystem
[   43.146618][ T3137] XFS (sda4): Ending clean mount
[   43.153737][ T3137] xfs filesystem being mounted at /fs/scratch supports timestamps until 2038 (0x7fffffff)
[   43.192057][ T3153] XFS (sda4): Unmounting Filesystem
[   44.224872][ T3210] XFS (sda4): Mounting V5 Filesystem
[   44.296088][ T3210] XFS (sda4): Ending clean mount
[   44.303329][ T3210] xfs filesystem being mounted at /fs/scratch supports timestamps until 2038 (0x7fffffff)
[   44.383646][ T3231] XFS (sda4): Unmounting Filesystem
[   44.467185][ T3237] XFS (sda4): Mounting V5 Filesystem
[   44.510544][ T3237] XFS (sda4): Ending clean mount
[   44.517356][ T3237] xfs filesystem being mounted at /fs/scratch supports timestamps until 2038 (0x7fffffff)
[   44.611180][ T2930] 421 (2930): drop_caches: 1
[   44.798358][ T2930] 421 (2930): drop_caches: 1
[   44.949219][ T2930] 421 (2930): drop_caches: 1
[   44.961260][ T3312] XFS (sda4): Unmounting Filesystem
[   45.050492][ T3320] XFS (sda4): Mounting V5 Filesystem
[   45.126821][ T3320] XFS (sda4): Ending clean mount
[   45.133426][ T3320] xfs filesystem being mounted at /fs/scratch supports timestamps until 2038 (0x7fffffff)
[   45.212078][ T3355] XFS (sda1): Unmounting Filesystem
[   45.385041][ T3422] XFS (sda4): Unmounting Filesystem
[   45.491804][ T3599] XFS (sda4): Mounting V5 Filesystem
[   45.526767][ T3599] XFS (sda4): Ending clean mount
[   45.533445][ T3599] xfs filesystem being mounted at /fs/scratch supports timestamps until 2038 (0x7fffffff)
[   45.550650][ T3609] XFS (sda4): Unmounting Filesystem
[   45.621721][  T346] xfs/421	 4s
[   45.621730][  T346]
[   45.663021][ T1597] run fstests xfs/435 at 2022-01-01 19:19:13
[   46.088251][ T3841] XFS (sda1): Mounting V5 Filesystem
[   46.185273][ T3841] XFS (sda1): Ending clean mount
[   46.192234][ T3841] xfs filesystem being mounted at /fs/sda1 supports timestamps until 2038 (0x7fffffff)
[   46.290869][ T3884] XFS (sda1): Unmounting Filesystem
[   46.761754][ T3888] SGI XFS with ACLs, security attributes, realtime, scrub, repair, quota, fatal assert, debug enabled
[   46.789787][ T3894] XFS (sda1): Mounting V5 Filesystem
[   46.832054][ T3894] XFS (sda1): Ending clean mount
[   46.838578][ T3894] xfs filesystem being mounted at /fs/sda1 supports timestamps until 2038 (0x7fffffff)
[   47.773681][ T3951] XFS (sda4): Mounting V5 Filesystem
[   47.845813][ T3951] XFS (sda4): Ending clean mount
[   47.852948][ T3951] xfs filesystem being mounted at /fs/scratch supports timestamps until 2038 (0x7fffffff)
[   47.907907][ T3964] XFS (sda4): Unmounting Filesystem
[   48.856657][ T3989] XFS (sda4): Mounting V5 Filesystem
[   48.928626][ T3989] XFS (sda4): Ending clean mount
[   48.934389][ T3989] XFS (sda4): Quotacheck needed: Please wait.
[   48.958046][ T3989] XFS (sda4): Quotacheck: Done.
[   48.963681][ T3989] xfs filesystem being mounted at /fs/scratch supports timestamps until 2038 (0x7fffffff)
[   49.024435][ T4007] XFS (sda4): Unmounting Filesystem
[   49.164158][ T4020] XFS (sda4): Mounting V5 Filesystem
[   49.204473][ T4020] XFS (sda4): Ending clean mount
[   49.211130][ T4020] xfs filesystem being mounted at /fs/scratch supports timestamps until 2038 (0x7fffffff)
[   49.225240][ T4029] XFS (sda4): Unmounting Filesystem
[   49.276739][ T4039] XFS (sda1): Unmounting Filesystem
[   49.724210][ T4043] SGI XFS with ACLs, security attributes, realtime, scrub, repair, quota, fatal assert, debug enabled
[   49.752466][ T4049] XFS (sda1): Mounting V5 Filesystem
[   49.815033][ T4049] XFS (sda1): Ending clean mount
[   49.821589][ T4049] xfs filesystem being mounted at /fs/sda1 supports timestamps until 2038 (0x7fffffff)
[   49.845370][ T4061] XFS (sda1): Unmounting Filesystem
[   49.897630][  T346] xfs/435	 4s
[   49.897639][  T346]
[   49.912268][  T346] Ran: xfs/420 xfs/421 xfs/435


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        sudo bin/lkp install job.yaml           # job file is attached in this email
        bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
        sudo bin/lkp run generated-yaml-file

        # if come across any failure that blocks the test,
        # please remove ~/.lkp and /lkp dir to run from a clean state.



---
0DAY/LKP+ Test Infrastructure                   Open Source Technology Center
https://lists.01.org/hyperkitty/list/lkp@lists.01.org       Intel Corporation

Thanks,
Oliver Sang
#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 5.16.0-rc5 Kernel Configuration
#
CONFIG_CC_VERSION_TEXT="gcc-9 (Debian 9.3.0-22) 9.3.0"
CONFIG_CC_IS_GCC=y
CONFIG_GCC_VERSION=90300
CONFIG_CLANG_VERSION=0
CONFIG_AS_IS_GNU=y
CONFIG_AS_VERSION=23502
CONFIG_LD_IS_BFD=y
CONFIG_LD_VERSION=23502
CONFIG_LLD_VERSION=0
CONFIG_CC_CAN_LINK=y
CONFIG_CC_CAN_LINK_STATIC=y
CONFIG_CC_HAS_ASM_GOTO=y
CONFIG_CC_HAS_ASM_INLINE=y
CONFIG_CC_HAS_NO_PROFILE_FN_ATTR=y
CONFIG_CONSTRUCTORS=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_TABLE_SORT=y
CONFIG_THREAD_INFO_IN_TASK=y

#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
# CONFIG_COMPILE_TEST is not set
# CONFIG_WERROR is not set
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_BUILD_SALT=""
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_HAVE_KERNEL_LZ4=y
CONFIG_HAVE_KERNEL_ZSTD=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
# CONFIG_KERNEL_LZ4 is not set
# CONFIG_KERNEL_ZSTD is not set
CONFIG_DEFAULT_INIT=""
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
# CONFIG_WATCH_QUEUE is not set
CONFIG_CROSS_MEMORY_ATTACH=y
# CONFIG_USELIB is not set
CONFIG_AUDIT=y
CONFIG_HAVE_ARCH_AUDITSYSCALL=y
CONFIG_AUDITSYSCALL=y

#
# IRQ subsystem
#
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_GENERIC_IRQ_MIGRATION=y
CONFIG_GENERIC_IRQ_INJECTION=y
CONFIG_HARDIRQS_SW_RESEND=y
CONFIG_IRQ_DOMAIN=y
CONFIG_IRQ_DOMAIN_HIERARCHY=y
CONFIG_GENERIC_MSI_IRQ=y
CONFIG_GENERIC_MSI_IRQ_DOMAIN=y
CONFIG_IRQ_MSI_IOMMU=y
CONFIG_GENERIC_IRQ_MATRIX_ALLOCATOR=y
CONFIG_GENERIC_IRQ_RESERVATION_MODE=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
# CONFIG_GENERIC_IRQ_DEBUGFS is not set
# end of IRQ subsystem

CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_INIT=y
CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y
CONFIG_HAVE_POSIX_CPU_TIMERS_TASK_WORK=y
CONFIG_POSIX_CPU_TIMERS_TASK_WORK=y
CONFIG_TIME_KUNIT_TEST=m

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
# CONFIG_NO_HZ_IDLE is not set
CONFIG_NO_HZ_FULL=y
CONFIG_CONTEXT_TRACKING=y
# CONFIG_CONTEXT_TRACKING_FORCE is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
# end of Timers subsystem

CONFIG_BPF=y
CONFIG_HAVE_EBPF_JIT=y
CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y

#
# BPF subsystem
#
CONFIG_BPF_SYSCALL=y
CONFIG_BPF_JIT=y
CONFIG_BPF_JIT_ALWAYS_ON=y
CONFIG_BPF_JIT_DEFAULT_ON=y
CONFIG_BPF_UNPRIV_DEFAULT_OFF=y
# CONFIG_BPF_PRELOAD is not set
# CONFIG_BPF_LSM is not set
# end of BPF subsystem

CONFIG_PREEMPT_VOLUNTARY_BUILD=y
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
CONFIG_PREEMPT_COUNT=y
# CONFIG_PREEMPT_DYNAMIC is not set
# CONFIG_SCHED_CORE is not set

#
# CPU/Task time and stats accounting
#
CONFIG_VIRT_CPU_ACCOUNTING=y
CONFIG_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_SCHED_AVG_IRQ=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
# CONFIG_PSI is not set
# end of CPU/Task time and stats accounting

CONFIG_CPU_ISOLATION=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_RCU_EXPERT is not set
CONFIG_SRCU=y
CONFIG_TREE_SRCU=y
CONFIG_TASKS_RCU_GENERIC=y
CONFIG_TASKS_RCU=y
CONFIG_TASKS_RUDE_RCU=y
CONFIG_TASKS_TRACE_RCU=y
CONFIG_RCU_STALL_COMMON=y
CONFIG_RCU_NEED_SEGCBLIST=y
CONFIG_RCU_NOCB_CPU=y
# end of RCU Subsystem

CONFIG_BUILD_BIN2C=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_IKHEADERS is not set
CONFIG_LOG_BUF_SHIFT=20
CONFIG_LOG_CPU_MAX_BUF_SHIFT=12
CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT=13
# CONFIG_PRINTK_INDEX is not set
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y

#
# Scheduler features
#
# CONFIG_UCLAMP_TASK is not set
# end of Scheduler features

CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=y
CONFIG_CC_HAS_INT128=y
CONFIG_CC_IMPLICIT_FALLTHROUGH="-Wimplicit-fallthrough=5"
CONFIG_ARCH_SUPPORTS_INT128=y
CONFIG_NUMA_BALANCING=y
CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
CONFIG_CGROUPS=y
CONFIG_PAGE_COUNTER=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_KMEM=y
CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_WRITEBACK=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_CFS_BANDWIDTH=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_RDMA=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_BPF=y
# CONFIG_CGROUP_MISC is not set
# CONFIG_CGROUP_DEBUG is not set
CONFIG_SOCK_CGROUP_DATA=y
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_TIME_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_CHECKPOINT_RESTORE=y
CONFIG_SCHED_AUTOGROUP=y
# CONFIG_SYSFS_DEPRECATED is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
CONFIG_RD_LZ4=y
CONFIG_RD_ZSTD=y
# CONFIG_BOOT_CONFIG is not set
CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_LD_ORPHAN_WARN=y
CONFIG_SYSCTL=y
CONFIG_HAVE_UID16=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
# CONFIG_EXPERT is not set
CONFIG_UID16=y
CONFIG_MULTIUSER=y
CONFIG_SGETMASK_SYSCALL=y
CONFIG_SYSFS_SYSCALL=y
CONFIG_FHANDLE=y
CONFIG_POSIX_TIMERS=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_FUTEX_PI=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
CONFIG_IO_URING=y
CONFIG_ADVISE_SYSCALLS=y
CONFIG_HAVE_ARCH_USERFAULTFD_WP=y
CONFIG_HAVE_ARCH_USERFAULTFD_MINOR=y
CONFIG_MEMBARRIER=y
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_ABSOLUTE_PERCPU=y
CONFIG_KALLSYMS_BASE_RELATIVE=y
CONFIG_USERFAULTFD=y
CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
CONFIG_KCMP=y
CONFIG_RSEQ=y
# CONFIG_EMBEDDED is not set
CONFIG_HAVE_PERF_EVENTS=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
# end of Kernel Performance Events And Counters

CONFIG_VM_EVENT_COUNTERS=y
CONFIG_SLUB_DEBUG=y
# CONFIG_COMPAT_BRK is not set
# CONFIG_SLAB is not set
CONFIG_SLUB=y
CONFIG_SLAB_MERGE_DEFAULT=y
CONFIG_SLAB_FREELIST_RANDOM=y
# CONFIG_SLAB_FREELIST_HARDENED is not set
CONFIG_SHUFFLE_PAGE_ALLOCATOR=y
CONFIG_SLUB_CPU_PARTIAL=y
CONFIG_SYSTEM_DATA_VERIFICATION=y
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
# end of General setup

CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_MMU=y
CONFIG_ARCH_MMAP_RND_BITS_MIN=28
CONFIG_ARCH_MMAP_RND_BITS_MAX=32
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=8
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_FILTER_PGPROT=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_NR_GPIO=1024
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
CONFIG_AUDIT_ARCH=y
CONFIG_KASAN_SHADOW_OFFSET=0xdffffc0000000000
CONFIG_HAVE_INTEL_TXT=y
CONFIG_X86_64_SMP=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_PGTABLE_LEVELS=5
CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

#
# Processor type and features
#
CONFIG_SMP=y
CONFIG_X86_FEATURE_NAMES=y
CONFIG_X86_X2APIC=y
CONFIG_X86_MPPARSE=y
# CONFIG_GOLDFISH is not set
CONFIG_RETPOLINE=y
# CONFIG_X86_CPU_RESCTRL is not set
CONFIG_X86_EXTENDED_PLATFORM=y
# CONFIG_X86_NUMACHIP is not set
# CONFIG_X86_VSMP is not set
CONFIG_X86_UV=y
# CONFIG_X86_GOLDFISH is not set
# CONFIG_X86_INTEL_MID is not set
CONFIG_X86_INTEL_LPSS=y
# CONFIG_X86_AMD_PLATFORM_DEVICE is not set
CONFIG_IOSF_MBI=y
# CONFIG_IOSF_MBI_DEBUG is not set
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
# CONFIG_SCHED_OMIT_FRAME_POINTER is not set
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_X86_HV_CALLBACK_VECTOR=y
# CONFIG_XEN is not set
CONFIG_KVM_GUEST=y
CONFIG_ARCH_CPUIDLE_HALTPOLL=y
# CONFIG_PVH is not set
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_PARAVIRT_CLOCK=y
# CONFIG_JAILHOUSE_GUEST is not set
# CONFIG_ACRN_GUEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_IA32_FEAT_CTL=y
CONFIG_X86_VMX_FEATURE_NAMES=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_HYGON=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_CPU_SUP_ZHAOXIN=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
# CONFIG_GART_IOMMU is not set
CONFIG_MAXSMP=y
CONFIG_NR_CPUS_RANGE_BEGIN=8192
CONFIG_NR_CPUS_RANGE_END=8192
CONFIG_NR_CPUS_DEFAULT=8192
CONFIG_NR_CPUS=8192
CONFIG_SCHED_CLUSTER=y
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_SCHED_MC_PRIO=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
CONFIG_X86_MCELOG_LEGACY=y
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_AMD=y
CONFIG_X86_MCE_THRESHOLD=y
CONFIG_X86_MCE_INJECT=m

#
# Performance monitoring
#
CONFIG_PERF_EVENTS_INTEL_UNCORE=m
CONFIG_PERF_EVENTS_INTEL_RAPL=m
CONFIG_PERF_EVENTS_INTEL_CSTATE=m
# CONFIG_PERF_EVENTS_AMD_POWER is not set
CONFIG_PERF_EVENTS_AMD_UNCORE=y
# end of Performance monitoring

CONFIG_X86_16BIT=y
CONFIG_X86_ESPFIX64=y
CONFIG_X86_VSYSCALL_EMULATION=y
CONFIG_X86_IOPL_IOPERM=y
CONFIG_I8K=m
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
CONFIG_MICROCODE_AMD=y
CONFIG_MICROCODE_OLD_INTERFACE=y
CONFIG_X86_MSR=y
CONFIG_X86_CPUID=y
CONFIG_X86_5LEVEL=y
CONFIG_X86_DIRECT_GBPAGES=y
# CONFIG_X86_CPA_STATISTICS is not set
# CONFIG_AMD_MEM_ENCRYPT is not set
CONFIG_NUMA=y
# CONFIG_AMD_NUMA is not set
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NUMA_EMU=y
CONFIG_NODES_SHIFT=10
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
# CONFIG_ARCH_MEMORY_PROBE is not set
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_X86_PMEM_LEGACY_DEVICE=y
CONFIG_X86_PMEM_LEGACY=m
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
# CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK is not set
CONFIG_MTRR=y
CONFIG_MTRR_SANITIZER=y
CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=1
CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
CONFIG_X86_SMAP=y
CONFIG_X86_UMIP=y
CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=y
CONFIG_X86_INTEL_TSX_MODE_OFF=y
# CONFIG_X86_INTEL_TSX_MODE_ON is not set
# CONFIG_X86_INTEL_TSX_MODE_AUTO is not set
# CONFIG_X86_SGX is not set
CONFIG_EFI=y
CONFIG_EFI_STUB=y
CONFIG_EFI_MIXED=y
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
CONFIG_KEXEC_FILE=y
CONFIG_ARCH_HAS_KEXEC_PURGATORY=y
# CONFIG_KEXEC_SIG is not set
CONFIG_CRASH_DUMP=y
CONFIG_KEXEC_JUMP=y
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_RANDOMIZE_BASE=y
CONFIG_X86_NEED_RELOCS=y
CONFIG_PHYSICAL_ALIGN=0x200000
CONFIG_DYNAMIC_MEMORY_LAYOUT=y
CONFIG_RANDOMIZE_MEMORY=y
CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING=0xa
CONFIG_HOTPLUG_CPU=y
CONFIG_BOOTPARAM_HOTPLUG_CPU0=y
# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
# CONFIG_COMPAT_VDSO is not set
CONFIG_LEGACY_VSYSCALL_EMULATE=y
# CONFIG_LEGACY_VSYSCALL_XONLY is not set
# CONFIG_LEGACY_VSYSCALL_NONE is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_MODIFY_LDT_SYSCALL=y
# CONFIG_STRICT_SIGALTSTACK_SIZE is not set
CONFIG_HAVE_LIVEPATCH=y
CONFIG_LIVEPATCH=y
# end of Processor type and features

CONFIG_ARCH_HAS_ADD_PAGES=y
CONFIG_ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y

#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_HIBERNATION_SNAPSHOT_DEV=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
# CONFIG_PM_WAKELOCKS is not set
CONFIG_PM=y
CONFIG_PM_DEBUG=y
# CONFIG_PM_ADVANCED_DEBUG is not set
# CONFIG_PM_TEST_SUSPEND is not set
CONFIG_PM_SLEEP_DEBUG=y
# CONFIG_PM_TRACE_RTC is not set
CONFIG_PM_CLK=y
# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set
# CONFIG_ENERGY_MODEL is not set
CONFIG_ARCH_SUPPORTS_ACPI=y
CONFIG_ACPI=y
CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y
CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC=y
CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT=y
# CONFIG_ACPI_DEBUGGER is not set
CONFIG_ACPI_SPCR_TABLE=y
# CONFIG_ACPI_FPDT is not set
CONFIG_ACPI_LPIT=y
CONFIG_ACPI_SLEEP=y
CONFIG_ACPI_REV_OVERRIDE_POSSIBLE=y
CONFIG_ACPI_EC_DEBUGFS=m
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_VIDEO=m
CONFIG_ACPI_FAN=y
CONFIG_ACPI_TAD=m
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_CPU_FREQ_PSS=y
CONFIG_ACPI_PROCESSOR_CSTATE=y
CONFIG_ACPI_PROCESSOR_IDLE=y
CONFIG_ACPI_CPPC_LIB=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_IPMI=m
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
CONFIG_ACPI_THERMAL=y
CONFIG_ACPI_PLATFORM_PROFILE=m
CONFIG_ARCH_HAS_ACPI_TABLE_UPGRADE=y
CONFIG_ACPI_TABLE_UPGRADE=y
# CONFIG_ACPI_DEBUG is not set
CONFIG_ACPI_PCI_SLOT=y
CONFIG_ACPI_CONTAINER=y
CONFIG_ACPI_HOTPLUG_MEMORY=y
CONFIG_ACPI_HOTPLUG_IOAPIC=y
CONFIG_ACPI_SBS=m
CONFIG_ACPI_HED=y
# CONFIG_ACPI_CUSTOM_METHOD is not set
CONFIG_ACPI_BGRT=y
CONFIG_ACPI_NFIT=m
# CONFIG_NFIT_SECURITY_DEBUG is not set
CONFIG_ACPI_NUMA=y
# CONFIG_ACPI_HMAT is not set
CONFIG_HAVE_ACPI_APEI=y
CONFIG_HAVE_ACPI_APEI_NMI=y
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
CONFIG_ACPI_APEI_MEMORY_FAILURE=y
CONFIG_ACPI_APEI_EINJ=m
# CONFIG_ACPI_APEI_ERST_DEBUG is not set
# CONFIG_ACPI_DPTF is not set
CONFIG_ACPI_WATCHDOG=y
CONFIG_ACPI_EXTLOG=m
CONFIG_ACPI_ADXL=y
# CONFIG_ACPI_CONFIGFS is not set
CONFIG_PMIC_OPREGION=y
CONFIG_X86_PM_TIMER=y
CONFIG_ACPI_PRMT=y

#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_ATTR_SET=y
CONFIG_CPU_FREQ_GOV_COMMON=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y

#
# CPU frequency scaling drivers
#
CONFIG_X86_INTEL_PSTATE=y
# CONFIG_X86_PCC_CPUFREQ is not set
CONFIG_X86_ACPI_CPUFREQ=m
CONFIG_X86_ACPI_CPUFREQ_CPB=y
CONFIG_X86_POWERNOW_K8=m
# CONFIG_X86_AMD_FREQ_SENSITIVITY is not set
# CONFIG_X86_SPEEDSTEP_CENTRINO is not set
CONFIG_X86_P4_CLOCKMOD=m

#
# shared options
#
CONFIG_X86_SPEEDSTEP_LIB=m
# end of CPU Frequency scaling

#
# CPU Idle
#
CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_GOV_LADDER is not set
CONFIG_CPU_IDLE_GOV_MENU=y
# CONFIG_CPU_IDLE_GOV_TEO is not set
# CONFIG_CPU_IDLE_GOV_HALTPOLL is not set
CONFIG_HALTPOLL_CPUIDLE=y
# end of CPU Idle

CONFIG_INTEL_IDLE=y
# end of Power management and ACPI options

#
# Bus options (PCI etc.)
#
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_MMCONF_FAM10H=y
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
# end of Bus options (PCI etc.)

#
# Binary Emulations
#
CONFIG_IA32_EMULATION=y
# CONFIG_X86_X32 is not set
CONFIG_COMPAT_32=y
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
# end of Binary Emulations

CONFIG_HAVE_KVM=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQFD=y
CONFIG_HAVE_KVM_IRQ_ROUTING=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
CONFIG_KVM_VFIO=y
CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y
CONFIG_KVM_COMPAT=y
CONFIG_HAVE_KVM_IRQ_BYPASS=y
CONFIG_HAVE_KVM_NO_POLL=y
CONFIG_KVM_XFER_TO_GUEST_WORK=y
CONFIG_HAVE_KVM_PM_NOTIFIER=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=m
CONFIG_KVM_INTEL=m
# CONFIG_KVM_AMD is not set
# CONFIG_KVM_XEN is not set
CONFIG_KVM_MMU_AUDIT=y
CONFIG_AS_AVX512=y
CONFIG_AS_SHA1_NI=y
CONFIG_AS_SHA256_NI=y
CONFIG_AS_TPAUSE=y

#
# General architecture-dependent options
#
CONFIG_CRASH_CORE=y
CONFIG_KEXEC_CORE=y
CONFIG_HOTPLUG_SMT=y
CONFIG_GENERIC_ENTRY=y
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
# CONFIG_STATIC_KEYS_SELFTEST is not set
# CONFIG_STATIC_CALL_SELFTEST is not set
CONFIG_OPTPROBES=y
CONFIG_KPROBES_ON_FTRACE=y
CONFIG_UPROBES=y
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_KRETPROBES=y
CONFIG_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_ARCH_CORRECT_STACKTRACE_ON_KRETPROBE=y
CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y
CONFIG_HAVE_NMI=y
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_CONTIGUOUS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_ARCH_HAS_FORTIFY_SOURCE=y
CONFIG_ARCH_HAS_SET_MEMORY=y
CONFIG_ARCH_HAS_SET_DIRECT_MAP=y
CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y
CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT=y
CONFIG_ARCH_WANTS_NO_INSTR=y
CONFIG_HAVE_ASM_MODVERSIONS=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_RSEQ=y
CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_HARDLOCKUP_DETECTOR_PERF=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y
CONFIG_MMU_GATHER_TABLE_FREE=y
CONFIG_MMU_GATHER_RCU_TABLE_FREE=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP=y
CONFIG_SECCOMP_FILTER=y
# CONFIG_SECCOMP_CACHE_DEBUG is not set
CONFIG_HAVE_ARCH_STACKLEAK=y
CONFIG_HAVE_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR_STRONG=y
CONFIG_ARCH_SUPPORTS_LTO_CLANG=y
CONFIG_ARCH_SUPPORTS_LTO_CLANG_THIN=y
CONFIG_LTO_NONE=y
CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES=y
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_CONTEXT_TRACKING_OFFSTACK=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_MOVE_PUD=y
CONFIG_HAVE_MOVE_PMD=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD=y
CONFIG_HAVE_ARCH_HUGE_VMAP=y
CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
CONFIG_HAVE_ARCH_SOFT_DIRTY=y
CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y
CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK=y
CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
CONFIG_HAVE_ARCH_MMAP_RND_BITS=y
CONFIG_HAVE_EXIT_THREAD=y
CONFIG_ARCH_MMAP_RND_BITS=28
CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS=y
CONFIG_ARCH_MMAP_RND_COMPAT_BITS=8
CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES=y
CONFIG_PAGE_SIZE_LESS_THAN_64KB=y
CONFIG_HAVE_STACK_VALIDATION=y
CONFIG_HAVE_RELIABLE_STACKTRACE=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y
CONFIG_COMPAT_32BIT_TIME=y
CONFIG_HAVE_ARCH_VMAP_STACK=y
CONFIG_VMAP_STACK=y
CONFIG_HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET=y
# CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT is not set
CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
CONFIG_STRICT_KERNEL_RWX=y
CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
CONFIG_STRICT_MODULE_RWX=y
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y
CONFIG_ARCH_USE_MEMREMAP_PROT=y
# CONFIG_LOCK_EVENT_COUNTS is not set
CONFIG_ARCH_HAS_MEM_ENCRYPT=y
CONFIG_HAVE_STATIC_CALL=y
CONFIG_HAVE_STATIC_CALL_INLINE=y
CONFIG_HAVE_PREEMPT_DYNAMIC=y
CONFIG_ARCH_WANT_LD_ORPHAN_WARN=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_ARCH_HAS_ELFCORE_COMPAT=y
CONFIG_ARCH_HAS_PARANOID_L1D_FLUSH=y
CONFIG_DYNAMIC_SIGFRAME=y

#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
# end of GCOV-based kernel profiling

CONFIG_HAVE_GCC_PLUGINS=y
# end of General architecture-dependent options

CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULE_SIG_FORMAT=y
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_MODULE_FORCE_UNLOAD is not set
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
CONFIG_MODULE_SIG=y
# CONFIG_MODULE_SIG_FORCE is not set
CONFIG_MODULE_SIG_ALL=y
# CONFIG_MODULE_SIG_SHA1 is not set
# CONFIG_MODULE_SIG_SHA224 is not set
CONFIG_MODULE_SIG_SHA256=y
# CONFIG_MODULE_SIG_SHA384 is not set
# CONFIG_MODULE_SIG_SHA512 is not set
CONFIG_MODULE_SIG_HASH="sha256"
CONFIG_MODULE_COMPRESS_NONE=y
# CONFIG_MODULE_COMPRESS_GZIP is not set
# CONFIG_MODULE_COMPRESS_XZ is not set
# CONFIG_MODULE_COMPRESS_ZSTD is not set
# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set
CONFIG_MODPROBE_PATH="/sbin/modprobe"
CONFIG_MODULES_TREE_LOOKUP=y
CONFIG_BLOCK=y
CONFIG_BLK_CGROUP_RWSTAT=y
CONFIG_BLK_DEV_BSG_COMMON=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_BLK_DEV_INTEGRITY_T10=m
CONFIG_BLK_DEV_ZONED=y
CONFIG_BLK_DEV_THROTTLING=y
# CONFIG_BLK_DEV_THROTTLING_LOW is not set
CONFIG_BLK_WBT=y
CONFIG_BLK_WBT_MQ=y
# CONFIG_BLK_CGROUP_IOLATENCY is not set
# CONFIG_BLK_CGROUP_FC_APPID is not set
# CONFIG_BLK_CGROUP_IOCOST is not set
# CONFIG_BLK_CGROUP_IOPRIO is not set
CONFIG_BLK_DEBUG_FS=y
CONFIG_BLK_DEBUG_FS_ZONED=y
# CONFIG_BLK_SED_OPAL is not set
# CONFIG_BLK_INLINE_ENCRYPTION is not set

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_AIX_PARTITION is not set
CONFIG_OSF_PARTITION=y
CONFIG_AMIGA_PARTITION=y
# CONFIG_ATARI_PARTITION is not set
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
# CONFIG_LDM_PARTITION is not set
CONFIG_SGI_PARTITION=y
# CONFIG_ULTRIX_PARTITION is not set
CONFIG_SUN_PARTITION=y
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
# CONFIG_SYSV68_PARTITION is not set
# CONFIG_CMDLINE_PARTITION is not set
# end of Partition Types

CONFIG_BLOCK_COMPAT=y
CONFIG_BLK_MQ_PCI=y
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_BLK_MQ_RDMA=y
CONFIG_BLK_PM=y
CONFIG_BLOCK_HOLDER_DEPRECATED=y

#
# IO Schedulers
#
CONFIG_MQ_IOSCHED_DEADLINE=y
CONFIG_MQ_IOSCHED_KYBER=y
CONFIG_IOSCHED_BFQ=y
CONFIG_BFQ_GROUP_IOSCHED=y
# CONFIG_BFQ_CGROUP_DEBUG is not set
# end of IO Schedulers

CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_PADATA=y
CONFIG_ASN1=y
CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
CONFIG_INLINE_READ_UNLOCK=y
CONFIG_INLINE_READ_UNLOCK_IRQ=y
CONFIG_INLINE_WRITE_UNLOCK=y
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_RWSEM_SPIN_ON_OWNER=y
CONFIG_LOCK_SPIN_ON_OWNER=y
CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y
CONFIG_QUEUED_SPINLOCKS=y
CONFIG_ARCH_USE_QUEUED_RWLOCKS=y
CONFIG_QUEUED_RWLOCKS=y
CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE=y
CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE=y
CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y
CONFIG_FREEZER=y

#
# Executable file formats
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ELFCORE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
CONFIG_BINFMT_SCRIPT=y
CONFIG_BINFMT_MISC=m
CONFIG_COREDUMP=y
# end of Executable file formats

#
# Memory Management options
#
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_FAST_GUP=y
CONFIG_NUMA_KEEP_MEMINFO=y
CONFIG_MEMORY_ISOLATION=y
CONFIG_EXCLUSIVE_SYSTEM_RAM=y
CONFIG_HAVE_BOOTMEM_INFO_NODE=y
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG=y
# CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE is not set
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_MEMORY_HOTREMOVE=y
CONFIG_MHP_MEMMAP_ON_MEMORY=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
CONFIG_MEMORY_BALLOON=y
CONFIG_BALLOON_COMPACTION=y
CONFIG_COMPACTION=y
CONFIG_PAGE_REPORTING=y
CONFIG_MIGRATION=y
CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION=y
CONFIG_ARCH_ENABLE_THP_MIGRATION=y
CONFIG_CONTIG_ALLOC=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_VIRT_TO_BUS=y
CONFIG_MMU_NOTIFIER=y
CONFIG_KSM=y
CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
CONFIG_MEMORY_FAILURE=y
CONFIG_HWPOISON_INJECT=m
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
CONFIG_ARCH_WANTS_THP_SWAP=y
CONFIG_THP_SWAP=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_CMA=y
# CONFIG_CMA_DEBUG is not set
# CONFIG_CMA_DEBUGFS is not set
# CONFIG_CMA_SYSFS is not set
CONFIG_CMA_AREAS=19
# CONFIG_MEM_SOFT_DIRTY is not set
CONFIG_ZSWAP=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_DEFLATE is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZO=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_842 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4HC is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_ZSTD is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT="lzo"
CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y
# CONFIG_ZSWAP_ZPOOL_DEFAULT_Z3FOLD is not set
# CONFIG_ZSWAP_ZPOOL_DEFAULT_ZSMALLOC is not set
CONFIG_ZSWAP_ZPOOL_DEFAULT="zbud"
# CONFIG_ZSWAP_DEFAULT_ON is not set
CONFIG_ZPOOL=y
CONFIG_ZBUD=y
# CONFIG_Z3FOLD is not set
CONFIG_ZSMALLOC=y
CONFIG_ZSMALLOC_STAT=y
CONFIG_GENERIC_EARLY_IOREMAP=y
CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
CONFIG_PAGE_IDLE_FLAG=y
CONFIG_IDLE_PAGE_TRACKING=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_PTE_DEVMAP=y
CONFIG_ZONE_DMA=y
CONFIG_ZONE_DMA32=y
CONFIG_ZONE_DEVICE=y
CONFIG_DEV_PAGEMAP_OPS=y
CONFIG_HMM_MIRROR=y
CONFIG_DEVICE_PRIVATE=y
CONFIG_VMAP_PFN=y
CONFIG_ARCH_USES_HIGH_VMA_FLAGS=y
CONFIG_ARCH_HAS_PKEYS=y
# CONFIG_PERCPU_STATS is not set
# CONFIG_GUP_TEST is not set
# CONFIG_READ_ONLY_THP_FOR_FS is not set
CONFIG_ARCH_HAS_PTE_SPECIAL=y
CONFIG_SECRETMEM=y

#
# Data Access Monitoring
#
# CONFIG_DAMON is not set
# end of Data Access Monitoring
# end of Memory Management options

CONFIG_NET=y
CONFIG_COMPAT_NETLINK_MESSAGES=y
CONFIG_NET_INGRESS=y
CONFIG_NET_EGRESS=y
CONFIG_SKB_EXTENSIONS=y

#
# Networking options
#
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=m
CONFIG_UNIX=y
CONFIG_UNIX_SCM=y
CONFIG_AF_UNIX_OOB=y
CONFIG_UNIX_DIAG=m
CONFIG_TLS=m
CONFIG_TLS_DEVICE=y
# CONFIG_TLS_TOE is not set
CONFIG_XFRM=y
CONFIG_XFRM_OFFLOAD=y
CONFIG_XFRM_ALGO=y
CONFIG_XFRM_USER=y
# CONFIG_XFRM_USER_COMPAT is not set
# CONFIG_XFRM_INTERFACE is not set
CONFIG_XFRM_SUB_POLICY=y
CONFIG_XFRM_MIGRATE=y
CONFIG_XFRM_STATISTICS=y
CONFIG_XFRM_AH=m
CONFIG_XFRM_ESP=m
CONFIG_XFRM_IPCOMP=m
CONFIG_NET_KEY=m
CONFIG_NET_KEY_MIGRATE=y
# CONFIG_SMC is not set
CONFIG_XDP_SOCKETS=y
# CONFIG_XDP_SOCKETS_DIAG is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_FIB_TRIE_STATS=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_ROUTE_CLASSID=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
# CONFIG_IP_PNP_BOOTP is not set
# CONFIG_IP_PNP_RARP is not set
CONFIG_NET_IPIP=m
CONFIG_NET_IPGRE_DEMUX=m
CONFIG_NET_IP_TUNNEL=m
CONFIG_NET_IPGRE=m
CONFIG_NET_IPGRE_BROADCAST=y
CONFIG_IP_MROUTE_COMMON=y
CONFIG_IP_MROUTE=y
CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
CONFIG_SYN_COOKIES=y
CONFIG_NET_IPVTI=m
CONFIG_NET_UDP_TUNNEL=m
# CONFIG_NET_FOU is not set
# CONFIG_NET_FOU_IP_TUNNELS is not set
CONFIG_INET_AH=m
CONFIG_INET_ESP=m
CONFIG_INET_ESP_OFFLOAD=m
# CONFIG_INET_ESPINTCP is not set
CONFIG_INET_IPCOMP=m
CONFIG_INET_XFRM_TUNNEL=m
CONFIG_INET_TUNNEL=m
CONFIG_INET_DIAG=m
CONFIG_INET_TCP_DIAG=m
CONFIG_INET_UDP_DIAG=m
CONFIG_INET_RAW_DIAG=m
# CONFIG_INET_DIAG_DESTROY is not set
CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_BIC=m
CONFIG_TCP_CONG_CUBIC=y
CONFIG_TCP_CONG_WESTWOOD=m
CONFIG_TCP_CONG_HTCP=m
CONFIG_TCP_CONG_HSTCP=m
CONFIG_TCP_CONG_HYBLA=m
CONFIG_TCP_CONG_VEGAS=m
CONFIG_TCP_CONG_NV=m
CONFIG_TCP_CONG_SCALABLE=m
CONFIG_TCP_CONG_LP=m
CONFIG_TCP_CONG_VENO=m
CONFIG_TCP_CONG_YEAH=m
CONFIG_TCP_CONG_ILLINOIS=m
CONFIG_TCP_CONG_DCTCP=m
# CONFIG_TCP_CONG_CDG is not set
CONFIG_TCP_CONG_BBR=m
CONFIG_DEFAULT_CUBIC=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
CONFIG_INET6_AH=m
CONFIG_INET6_ESP=m
CONFIG_INET6_ESP_OFFLOAD=m
# CONFIG_INET6_ESPINTCP is not set
CONFIG_INET6_IPCOMP=m
CONFIG_IPV6_MIP6=m
# CONFIG_IPV6_ILA is not set
CONFIG_INET6_XFRM_TUNNEL=m
CONFIG_INET6_TUNNEL=m
CONFIG_IPV6_VTI=m
CONFIG_IPV6_SIT=m
CONFIG_IPV6_SIT_6RD=y
CONFIG_IPV6_NDISC_NODETYPE=y
CONFIG_IPV6_TUNNEL=m
CONFIG_IPV6_GRE=m
CONFIG_IPV6_MULTIPLE_TABLES=y
# CONFIG_IPV6_SUBTREES is not set
CONFIG_IPV6_MROUTE=y
CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
CONFIG_IPV6_PIMSM_V2=y
# CONFIG_IPV6_SEG6_LWTUNNEL is not set
# CONFIG_IPV6_SEG6_HMAC is not set
# CONFIG_IPV6_RPL_LWTUNNEL is not set
# CONFIG_IPV6_IOAM6_LWTUNNEL is not set
CONFIG_NETLABEL=y
CONFIG_MPTCP=y
CONFIG_INET_MPTCP_DIAG=m
CONFIG_MPTCP_IPV6=y
CONFIG_MPTCP_KUNIT_TEST=m
CONFIG_NETWORK_SECMARK=y
CONFIG_NET_PTP_CLASSIFY=y
CONFIG_NETWORK_PHY_TIMESTAMPING=y
CONFIG_NETFILTER=y
CONFIG_NETFILTER_ADVANCED=y
CONFIG_BRIDGE_NETFILTER=m

#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_INGRESS=y
CONFIG_NETFILTER_EGRESS=y
CONFIG_NETFILTER_SKIP_EGRESS=y
CONFIG_NETFILTER_NETLINK=m
CONFIG_NETFILTER_FAMILY_BRIDGE=y
CONFIG_NETFILTER_FAMILY_ARP=y
# CONFIG_NETFILTER_NETLINK_HOOK is not set
# CONFIG_NETFILTER_NETLINK_ACCT is not set
CONFIG_NETFILTER_NETLINK_QUEUE=m
CONFIG_NETFILTER_NETLINK_LOG=m
CONFIG_NETFILTER_NETLINK_OSF=m
CONFIG_NF_CONNTRACK=m
CONFIG_NF_LOG_SYSLOG=m
CONFIG_NETFILTER_CONNCOUNT=m
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_ZONES=y
CONFIG_NF_CONNTRACK_PROCFS=y
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CONNTRACK_TIMEOUT=y
CONFIG_NF_CONNTRACK_TIMESTAMP=y
CONFIG_NF_CONNTRACK_LABELS=y
CONFIG_NF_CT_PROTO_DCCP=y
CONFIG_NF_CT_PROTO_GRE=y
CONFIG_NF_CT_PROTO_SCTP=y
CONFIG_NF_CT_PROTO_UDPLITE=y
CONFIG_NF_CONNTRACK_AMANDA=m
CONFIG_NF_CONNTRACK_FTP=m
CONFIG_NF_CONNTRACK_H323=m
CONFIG_NF_CONNTRACK_IRC=m
CONFIG_NF_CONNTRACK_BROADCAST=m
CONFIG_NF_CONNTRACK_NETBIOS_NS=m
CONFIG_NF_CONNTRACK_SNMP=m
CONFIG_NF_CONNTRACK_PPTP=m
CONFIG_NF_CONNTRACK_SANE=m
CONFIG_NF_CONNTRACK_SIP=m
CONFIG_NF_CONNTRACK_TFTP=m
CONFIG_NF_CT_NETLINK=m
CONFIG_NF_CT_NETLINK_TIMEOUT=m
CONFIG_NF_CT_NETLINK_HELPER=m
CONFIG_NETFILTER_NETLINK_GLUE_CT=y
CONFIG_NF_NAT=m
CONFIG_NF_NAT_AMANDA=m
CONFIG_NF_NAT_FTP=m
CONFIG_NF_NAT_IRC=m
CONFIG_NF_NAT_SIP=m
CONFIG_NF_NAT_TFTP=m
CONFIG_NF_NAT_REDIRECT=y
CONFIG_NF_NAT_MASQUERADE=y
CONFIG_NETFILTER_SYNPROXY=m
CONFIG_NF_TABLES=m
CONFIG_NF_TABLES_INET=y
CONFIG_NF_TABLES_NETDEV=y
CONFIG_NFT_NUMGEN=m
CONFIG_NFT_CT=m
CONFIG_NFT_COUNTER=m
CONFIG_NFT_CONNLIMIT=m
CONFIG_NFT_LOG=m
CONFIG_NFT_LIMIT=m
CONFIG_NFT_MASQ=m
CONFIG_NFT_REDIR=m
CONFIG_NFT_NAT=m
# CONFIG_NFT_TUNNEL is not set
CONFIG_NFT_OBJREF=m
CONFIG_NFT_QUEUE=m
CONFIG_NFT_QUOTA=m
CONFIG_NFT_REJECT=m
CONFIG_NFT_REJECT_INET=m
CONFIG_NFT_COMPAT=m
CONFIG_NFT_HASH=m
CONFIG_NFT_FIB=m
CONFIG_NFT_FIB_INET=m
# CONFIG_NFT_XFRM is not set
CONFIG_NFT_SOCKET=m
# CONFIG_NFT_OSF is not set
# CONFIG_NFT_TPROXY is not set
# CONFIG_NFT_SYNPROXY is not set
CONFIG_NF_DUP_NETDEV=m
CONFIG_NFT_DUP_NETDEV=m
CONFIG_NFT_FWD_NETDEV=m
CONFIG_NFT_FIB_NETDEV=m
# CONFIG_NFT_REJECT_NETDEV is not set
# CONFIG_NF_FLOW_TABLE is not set
CONFIG_NETFILTER_XTABLES=y
CONFIG_NETFILTER_XTABLES_COMPAT=y

#
# Xtables combined modules
#
CONFIG_NETFILTER_XT_MARK=m
CONFIG_NETFILTER_XT_CONNMARK=m
CONFIG_NETFILTER_XT_SET=m

#
# Xtables targets
#
CONFIG_NETFILTER_XT_TARGET_AUDIT=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
CONFIG_NETFILTER_XT_TARGET_CT=m
CONFIG_NETFILTER_XT_TARGET_DSCP=m
CONFIG_NETFILTER_XT_TARGET_HL=m
CONFIG_NETFILTER_XT_TARGET_HMARK=m
CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
# CONFIG_NETFILTER_XT_TARGET_LED is not set
CONFIG_NETFILTER_XT_TARGET_LOG=m
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_NAT=m
CONFIG_NETFILTER_XT_TARGET_NETMAP=m
CONFIG_NETFILTER_XT_TARGET_NFLOG=m
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
CONFIG_NETFILTER_XT_TARGET_NOTRACK=m
CONFIG_NETFILTER_XT_TARGET_RATEEST=m
CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m
CONFIG_NETFILTER_XT_TARGET_TEE=m
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_TARGET_TRACE=m
CONFIG_NETFILTER_XT_TARGET_SECMARK=m
CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m

#
# Xtables matches
#
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
CONFIG_NETFILTER_XT_MATCH_BPF=m
CONFIG_NETFILTER_XT_MATCH_CGROUP=m
CONFIG_NETFILTER_XT_MATCH_CLUSTER=m
CONFIG_NETFILTER_XT_MATCH_COMMENT=m
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
CONFIG_NETFILTER_XT_MATCH_CONNLABEL=m
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
CONFIG_NETFILTER_XT_MATCH_CPU=m
CONFIG_NETFILTER_XT_MATCH_DCCP=m
CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
CONFIG_NETFILTER_XT_MATCH_DSCP=m
CONFIG_NETFILTER_XT_MATCH_ECN=m
CONFIG_NETFILTER_XT_MATCH_ESP=m
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
CONFIG_NETFILTER_XT_MATCH_HELPER=m
CONFIG_NETFILTER_XT_MATCH_HL=m
# CONFIG_NETFILTER_XT_MATCH_IPCOMP is not set
CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
CONFIG_NETFILTER_XT_MATCH_IPVS=m
# CONFIG_NETFILTER_XT_MATCH_L2TP is not set
CONFIG_NETFILTER_XT_MATCH_LENGTH=m
CONFIG_NETFILTER_XT_MATCH_LIMIT=m
CONFIG_NETFILTER_XT_MATCH_MAC=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
# CONFIG_NETFILTER_XT_MATCH_NFACCT is not set
CONFIG_NETFILTER_XT_MATCH_OSF=m
CONFIG_NETFILTER_XT_MATCH_OWNER=m
CONFIG_NETFILTER_XT_MATCH_POLICY=m
CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
CONFIG_NETFILTER_XT_MATCH_QUOTA=m
CONFIG_NETFILTER_XT_MATCH_RATEEST=m
CONFIG_NETFILTER_XT_MATCH_REALM=m
CONFIG_NETFILTER_XT_MATCH_RECENT=m
CONFIG_NETFILTER_XT_MATCH_SCTP=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m
CONFIG_NETFILTER_XT_MATCH_STATE=m
CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
CONFIG_NETFILTER_XT_MATCH_STRING=m
CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
# CONFIG_NETFILTER_XT_MATCH_TIME is not set
# CONFIG_NETFILTER_XT_MATCH_U32 is not set
# end of Core Netfilter Configuration

CONFIG_IP_SET=m
CONFIG_IP_SET_MAX=256
CONFIG_IP_SET_BITMAP_IP=m
CONFIG_IP_SET_BITMAP_IPMAC=m
CONFIG_IP_SET_BITMAP_PORT=m
CONFIG_IP_SET_HASH_IP=m
CONFIG_IP_SET_HASH_IPMARK=m
CONFIG_IP_SET_HASH_IPPORT=m
CONFIG_IP_SET_HASH_IPPORTIP=m
CONFIG_IP_SET_HASH_IPPORTNET=m
CONFIG_IP_SET_HASH_IPMAC=m
CONFIG_IP_SET_HASH_MAC=m
CONFIG_IP_SET_HASH_NETPORTNET=m
CONFIG_IP_SET_HASH_NET=m
CONFIG_IP_SET_HASH_NETNET=m
CONFIG_IP_SET_HASH_NETPORT=m
CONFIG_IP_SET_HASH_NETIFACE=m
CONFIG_IP_SET_LIST_SET=m
CONFIG_IP_VS=m
CONFIG_IP_VS_IPV6=y
# CONFIG_IP_VS_DEBUG is not set
CONFIG_IP_VS_TAB_BITS=12

#
# IPVS transport protocol load balancing support
#
CONFIG_IP_VS_PROTO_TCP=y
CONFIG_IP_VS_PROTO_UDP=y
CONFIG_IP_VS_PROTO_AH_ESP=y
CONFIG_IP_VS_PROTO_ESP=y
CONFIG_IP_VS_PROTO_AH=y
CONFIG_IP_VS_PROTO_SCTP=y

#
# IPVS scheduler
#
CONFIG_IP_VS_RR=m
CONFIG_IP_VS_WRR=m
CONFIG_IP_VS_LC=m
CONFIG_IP_VS_WLC=m
CONFIG_IP_VS_FO=m
CONFIG_IP_VS_OVF=m
CONFIG_IP_VS_LBLC=m
CONFIG_IP_VS_LBLCR=m
CONFIG_IP_VS_DH=m
CONFIG_IP_VS_SH=m
# CONFIG_IP_VS_MH is not set
CONFIG_IP_VS_SED=m
CONFIG_IP_VS_NQ=m
# CONFIG_IP_VS_TWOS is not set

#
# IPVS SH scheduler
#
CONFIG_IP_VS_SH_TAB_BITS=8

#
# IPVS MH scheduler
#
CONFIG_IP_VS_MH_TAB_INDEX=12

#
# IPVS application helper
#
CONFIG_IP_VS_FTP=m
CONFIG_IP_VS_NFCT=y
CONFIG_IP_VS_PE_SIP=m

#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=m
CONFIG_NF_SOCKET_IPV4=m
CONFIG_NF_TPROXY_IPV4=m
CONFIG_NF_TABLES_IPV4=y
CONFIG_NFT_REJECT_IPV4=m
CONFIG_NFT_DUP_IPV4=m
CONFIG_NFT_FIB_IPV4=m
CONFIG_NF_TABLES_ARP=y
CONFIG_NF_DUP_IPV4=m
CONFIG_NF_LOG_ARP=m
CONFIG_NF_LOG_IPV4=m
CONFIG_NF_REJECT_IPV4=m
CONFIG_NF_NAT_SNMP_BASIC=m
CONFIG_NF_NAT_PPTP=m
CONFIG_NF_NAT_H323=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_MATCH_AH=m
CONFIG_IP_NF_MATCH_ECN=m
CONFIG_IP_NF_MATCH_RPFILTER=m
CONFIG_IP_NF_MATCH_TTL=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_SYNPROXY=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_MANGLE=m
# CONFIG_IP_NF_TARGET_CLUSTERIP is not set
CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_RAW=m
CONFIG_IP_NF_SECURITY=m
CONFIG_IP_NF_ARPTABLES=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
# end of IP: Netfilter Configuration

#
# IPv6: Netfilter Configuration
#
CONFIG_NF_SOCKET_IPV6=m
CONFIG_NF_TPROXY_IPV6=m
CONFIG_NF_TABLES_IPV6=y
CONFIG_NFT_REJECT_IPV6=m
CONFIG_NFT_DUP_IPV6=m
CONFIG_NFT_FIB_IPV6=m
CONFIG_NF_DUP_IPV6=m
CONFIG_NF_REJECT_IPV6=m
CONFIG_NF_LOG_IPV6=m
CONFIG_IP6_NF_IPTABLES=m
CONFIG_IP6_NF_MATCH_AH=m
CONFIG_IP6_NF_MATCH_EUI64=m
CONFIG_IP6_NF_MATCH_FRAG=m
CONFIG_IP6_NF_MATCH_OPTS=m
CONFIG_IP6_NF_MATCH_HL=m
CONFIG_IP6_NF_MATCH_IPV6HEADER=m
CONFIG_IP6_NF_MATCH_MH=m
CONFIG_IP6_NF_MATCH_RPFILTER=m
CONFIG_IP6_NF_MATCH_RT=m
# CONFIG_IP6_NF_MATCH_SRH is not set
# CONFIG_IP6_NF_TARGET_HL is not set
CONFIG_IP6_NF_FILTER=m
CONFIG_IP6_NF_TARGET_REJECT=m
CONFIG_IP6_NF_TARGET_SYNPROXY=m
CONFIG_IP6_NF_MANGLE=m
CONFIG_IP6_NF_RAW=m
CONFIG_IP6_NF_SECURITY=m
CONFIG_IP6_NF_NAT=m
CONFIG_IP6_NF_TARGET_MASQUERADE=m
CONFIG_IP6_NF_TARGET_NPT=m
# end of IPv6: Netfilter Configuration

CONFIG_NF_DEFRAG_IPV6=m
CONFIG_NF_TABLES_BRIDGE=m
# CONFIG_NFT_BRIDGE_META is not set
CONFIG_NFT_BRIDGE_REJECT=m
# CONFIG_NF_CONNTRACK_BRIDGE is not set
CONFIG_BRIDGE_NF_EBTABLES=m
CONFIG_BRIDGE_EBT_BROUTE=m
CONFIG_BRIDGE_EBT_T_FILTER=m
CONFIG_BRIDGE_EBT_T_NAT=m
CONFIG_BRIDGE_EBT_802_3=m
CONFIG_BRIDGE_EBT_AMONG=m
CONFIG_BRIDGE_EBT_ARP=m
CONFIG_BRIDGE_EBT_IP=m
CONFIG_BRIDGE_EBT_IP6=m
CONFIG_BRIDGE_EBT_LIMIT=m
CONFIG_BRIDGE_EBT_MARK=m
CONFIG_BRIDGE_EBT_PKTTYPE=m
CONFIG_BRIDGE_EBT_STP=m
CONFIG_BRIDGE_EBT_VLAN=m
CONFIG_BRIDGE_EBT_ARPREPLY=m
CONFIG_BRIDGE_EBT_DNAT=m
CONFIG_BRIDGE_EBT_MARK_T=m
CONFIG_BRIDGE_EBT_REDIRECT=m
CONFIG_BRIDGE_EBT_SNAT=m
CONFIG_BRIDGE_EBT_LOG=m
CONFIG_BRIDGE_EBT_NFLOG=m
# CONFIG_BPFILTER is not set
# CONFIG_IP_DCCP is not set
CONFIG_IP_SCTP=m
# CONFIG_SCTP_DBG_OBJCNT is not set
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5 is not set
CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1=y
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
CONFIG_SCTP_COOKIE_HMAC_MD5=y
CONFIG_SCTP_COOKIE_HMAC_SHA1=y
CONFIG_INET_SCTP_DIAG=m
# CONFIG_RDS is not set
CONFIG_TIPC=m
# CONFIG_TIPC_MEDIA_IB is not set
CONFIG_TIPC_MEDIA_UDP=y
CONFIG_TIPC_CRYPTO=y
CONFIG_TIPC_DIAG=m
CONFIG_ATM=m
CONFIG_ATM_CLIP=m
# CONFIG_ATM_CLIP_NO_ICMP is not set
CONFIG_ATM_LANE=m
# CONFIG_ATM_MPOA is not set
CONFIG_ATM_BR2684=m
# CONFIG_ATM_BR2684_IPFILTER is not set
CONFIG_L2TP=m
CONFIG_L2TP_DEBUGFS=m
CONFIG_L2TP_V3=y
CONFIG_L2TP_IP=m
CONFIG_L2TP_ETH=m
CONFIG_STP=m
CONFIG_GARP=m
CONFIG_MRP=m
CONFIG_BRIDGE=m
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_BRIDGE_VLAN_FILTERING=y
# CONFIG_BRIDGE_MRP is not set
# CONFIG_BRIDGE_CFM is not set
# CONFIG_NET_DSA is not set
CONFIG_VLAN_8021Q=m
CONFIG_VLAN_8021Q_GVRP=y
CONFIG_VLAN_8021Q_MVRP=y
# CONFIG_DECNET is not set
CONFIG_LLC=m
# CONFIG_LLC2 is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_PHONET is not set
CONFIG_6LOWPAN=m
# CONFIG_6LOWPAN_DEBUGFS is not set
# CONFIG_6LOWPAN_NHC is not set
CONFIG_IEEE802154=m
# CONFIG_IEEE802154_NL802154_EXPERIMENTAL is not set
CONFIG_IEEE802154_SOCKET=m
CONFIG_IEEE802154_6LOWPAN=m
CONFIG_MAC802154=m
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
CONFIG_NET_SCH_CBQ=m
CONFIG_NET_SCH_HTB=m
CONFIG_NET_SCH_HFSC=m
CONFIG_NET_SCH_ATM=m
CONFIG_NET_SCH_PRIO=m
CONFIG_NET_SCH_MULTIQ=m
CONFIG_NET_SCH_RED=m
CONFIG_NET_SCH_SFB=m
CONFIG_NET_SCH_SFQ=m
CONFIG_NET_SCH_TEQL=m
CONFIG_NET_SCH_TBF=m
# CONFIG_NET_SCH_CBS is not set
# CONFIG_NET_SCH_ETF is not set
# CONFIG_NET_SCH_TAPRIO is not set
CONFIG_NET_SCH_GRED=m
CONFIG_NET_SCH_DSMARK=m
CONFIG_NET_SCH_NETEM=m
CONFIG_NET_SCH_DRR=m
CONFIG_NET_SCH_MQPRIO=m
# CONFIG_NET_SCH_SKBPRIO is not set
CONFIG_NET_SCH_CHOKE=m
CONFIG_NET_SCH_QFQ=m
CONFIG_NET_SCH_CODEL=m
CONFIG_NET_SCH_FQ_CODEL=y
# CONFIG_NET_SCH_CAKE is not set
CONFIG_NET_SCH_FQ=m
CONFIG_NET_SCH_HHF=m
CONFIG_NET_SCH_PIE=m
# CONFIG_NET_SCH_FQ_PIE is not set
CONFIG_NET_SCH_INGRESS=m
CONFIG_NET_SCH_PLUG=m
# CONFIG_NET_SCH_ETS is not set
CONFIG_NET_SCH_DEFAULT=y
# CONFIG_DEFAULT_FQ is not set
# CONFIG_DEFAULT_CODEL is not set
CONFIG_DEFAULT_FQ_CODEL=y
# CONFIG_DEFAULT_SFQ is not set
# CONFIG_DEFAULT_PFIFO_FAST is not set
CONFIG_DEFAULT_NET_SCH="fq_codel"

#
# Classification
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=m
CONFIG_NET_CLS_TCINDEX=m
CONFIG_NET_CLS_ROUTE4=m
CONFIG_NET_CLS_FW=m
CONFIG_NET_CLS_U32=m
CONFIG_CLS_U32_PERF=y
CONFIG_CLS_U32_MARK=y
CONFIG_NET_CLS_RSVP=m
CONFIG_NET_CLS_RSVP6=m
CONFIG_NET_CLS_FLOW=m
CONFIG_NET_CLS_CGROUP=y
CONFIG_NET_CLS_BPF=m
CONFIG_NET_CLS_FLOWER=m
CONFIG_NET_CLS_MATCHALL=m
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
CONFIG_NET_EMATCH_CMP=m
CONFIG_NET_EMATCH_NBYTE=m
CONFIG_NET_EMATCH_U32=m
CONFIG_NET_EMATCH_META=m
CONFIG_NET_EMATCH_TEXT=m
# CONFIG_NET_EMATCH_CANID is not set
CONFIG_NET_EMATCH_IPSET=m
# CONFIG_NET_EMATCH_IPT is not set
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=m
CONFIG_NET_ACT_GACT=m
CONFIG_GACT_PROB=y
CONFIG_NET_ACT_MIRRED=m
CONFIG_NET_ACT_SAMPLE=m
# CONFIG_NET_ACT_IPT is not set
CONFIG_NET_ACT_NAT=m
CONFIG_NET_ACT_PEDIT=m
CONFIG_NET_ACT_SIMP=m
CONFIG_NET_ACT_SKBEDIT=m
CONFIG_NET_ACT_CSUM=m
# CONFIG_NET_ACT_MPLS is not set
CONFIG_NET_ACT_VLAN=m
CONFIG_NET_ACT_BPF=m
# CONFIG_NET_ACT_CONNMARK is not set
# CONFIG_NET_ACT_CTINFO is not set
CONFIG_NET_ACT_SKBMOD=m
# CONFIG_NET_ACT_IFE is not set
CONFIG_NET_ACT_TUNNEL_KEY=m
# CONFIG_NET_ACT_GATE is not set
# CONFIG_NET_TC_SKB_EXT is not set
CONFIG_NET_SCH_FIFO=y
CONFIG_DCB=y
CONFIG_DNS_RESOLVER=m
# CONFIG_BATMAN_ADV is not set
CONFIG_OPENVSWITCH=m
CONFIG_OPENVSWITCH_GRE=m
CONFIG_VSOCKETS=m
CONFIG_VSOCKETS_DIAG=m
CONFIG_VSOCKETS_LOOPBACK=m
CONFIG_VMWARE_VMCI_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS_COMMON=m
CONFIG_HYPERV_VSOCKETS=m
CONFIG_NETLINK_DIAG=m
CONFIG_MPLS=y
CONFIG_NET_MPLS_GSO=y
CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=y
# CONFIG_HSR is not set
CONFIG_NET_SWITCHDEV=y
CONFIG_NET_L3_MASTER_DEV=y
# CONFIG_QRTR is not set
# CONFIG_NET_NCSI is not set
CONFIG_PCPU_DEV_REFCNT=y
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_SOCK_RX_QUEUE_MAPPING=y
CONFIG_XPS=y
CONFIG_CGROUP_NET_PRIO=y
CONFIG_CGROUP_NET_CLASSID=y
CONFIG_NET_RX_BUSY_POLL=y
CONFIG_BQL=y
CONFIG_BPF_STREAM_PARSER=y
CONFIG_NET_FLOW_LIMIT=y

#
# Network testing
#
CONFIG_NET_PKTGEN=m
CONFIG_NET_DROP_MONITOR=y
# end of Network testing
# end of Networking options

# CONFIG_HAMRADIO is not set
CONFIG_CAN=m
CONFIG_CAN_RAW=m
CONFIG_CAN_BCM=m
CONFIG_CAN_GW=m
# CONFIG_CAN_J1939 is not set
# CONFIG_CAN_ISOTP is not set

#
# CAN Device Drivers
#
CONFIG_CAN_VCAN=m
# CONFIG_CAN_VXCAN is not set
CONFIG_CAN_SLCAN=m
CONFIG_CAN_DEV=m
CONFIG_CAN_CALC_BITTIMING=y
# CONFIG_CAN_KVASER_PCIEFD is not set
CONFIG_CAN_C_CAN=m
CONFIG_CAN_C_CAN_PLATFORM=m
CONFIG_CAN_C_CAN_PCI=m
CONFIG_CAN_CC770=m
# CONFIG_CAN_CC770_ISA is not set
CONFIG_CAN_CC770_PLATFORM=m
# CONFIG_CAN_IFI_CANFD is not set
# CONFIG_CAN_M_CAN is not set
# CONFIG_CAN_PEAK_PCIEFD is not set
CONFIG_CAN_SJA1000=m
CONFIG_CAN_EMS_PCI=m
# CONFIG_CAN_F81601 is not set
CONFIG_CAN_KVASER_PCI=m
CONFIG_CAN_PEAK_PCI=m
CONFIG_CAN_PEAK_PCIEC=y
CONFIG_CAN_PLX_PCI=m
# CONFIG_CAN_SJA1000_ISA is not set
CONFIG_CAN_SJA1000_PLATFORM=m
CONFIG_CAN_SOFTING=m

#
# CAN SPI interfaces
#
# CONFIG_CAN_HI311X is not set
# CONFIG_CAN_MCP251X is not set
# CONFIG_CAN_MCP251XFD is not set
# end of CAN SPI interfaces

#
# CAN USB interfaces
#
# CONFIG_CAN_8DEV_USB is not set
# CONFIG_CAN_EMS_USB is not set
# CONFIG_CAN_ESD_USB2 is not set
# CONFIG_CAN_ETAS_ES58X is not set
# CONFIG_CAN_GS_USB is not set
# CONFIG_CAN_KVASER_USB is not set
# CONFIG_CAN_MCBA_USB is not set
# CONFIG_CAN_PEAK_USB is not set
# CONFIG_CAN_UCAN is not set
# end of CAN USB interfaces

# CONFIG_CAN_DEBUG_DEVICES is not set
# end of CAN Device Drivers

CONFIG_BT=m
CONFIG_BT_BREDR=y
CONFIG_BT_RFCOMM=m
CONFIG_BT_RFCOMM_TTY=y
CONFIG_BT_BNEP=m
CONFIG_BT_BNEP_MC_FILTER=y
CONFIG_BT_BNEP_PROTO_FILTER=y
CONFIG_BT_HIDP=m
CONFIG_BT_HS=y
CONFIG_BT_LE=y
# CONFIG_BT_6LOWPAN is not set
# CONFIG_BT_LEDS is not set
# CONFIG_BT_MSFTEXT is not set
# CONFIG_BT_AOSPEXT is not set
CONFIG_BT_DEBUGFS=y
# CONFIG_BT_SELFTEST is not set

#
# Bluetooth device drivers
#
# CONFIG_BT_HCIBTUSB is not set
# CONFIG_BT_HCIBTSDIO is not set
CONFIG_BT_HCIUART=m
CONFIG_BT_HCIUART_H4=y
CONFIG_BT_HCIUART_BCSP=y
CONFIG_BT_HCIUART_ATH3K=y
# CONFIG_BT_HCIUART_INTEL is not set
# CONFIG_BT_HCIUART_AG6XX is not set
# CONFIG_BT_HCIBCM203X is not set
# CONFIG_BT_HCIBPA10X is not set
# CONFIG_BT_HCIBFUSB is not set
CONFIG_BT_HCIVHCI=m
CONFIG_BT_MRVL=m
# CONFIG_BT_MRVL_SDIO is not set
# CONFIG_BT_MTKSDIO is not set
# CONFIG_BT_VIRTIO is not set
# end of Bluetooth device drivers

# CONFIG_AF_RXRPC is not set
# CONFIG_AF_KCM is not set
CONFIG_STREAM_PARSER=y
# CONFIG_MCTP is not set
CONFIG_FIB_RULES=y
CONFIG_WIRELESS=y
CONFIG_WEXT_CORE=y
CONFIG_WEXT_PROC=y
CONFIG_CFG80211=m
# CONFIG_NL80211_TESTMODE is not set
# CONFIG_CFG80211_DEVELOPER_WARNINGS is not set
CONFIG_CFG80211_REQUIRE_SIGNED_REGDB=y
CONFIG_CFG80211_USE_KERNEL_REGDB_KEYS=y
CONFIG_CFG80211_DEFAULT_PS=y
# CONFIG_CFG80211_DEBUGFS is not set
CONFIG_CFG80211_CRDA_SUPPORT=y
CONFIG_CFG80211_WEXT=y
CONFIG_MAC80211=m
CONFIG_MAC80211_HAS_RC=y
CONFIG_MAC80211_RC_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT="minstrel_ht"
CONFIG_MAC80211_MESH=y
CONFIG_MAC80211_LEDS=y
CONFIG_MAC80211_DEBUGFS=y
# CONFIG_MAC80211_MESSAGE_TRACING is not set
# CONFIG_MAC80211_DEBUG_MENU is not set
CONFIG_MAC80211_STA_HASH_MAX_SIZE=0
CONFIG_RFKILL=m
CONFIG_RFKILL_LEDS=y
CONFIG_RFKILL_INPUT=y
# CONFIG_RFKILL_GPIO is not set
CONFIG_NET_9P=y
CONFIG_NET_9P_VIRTIO=y
# CONFIG_NET_9P_RDMA is not set
# CONFIG_NET_9P_DEBUG is not set
# CONFIG_CAIF is not set
CONFIG_CEPH_LIB=m
# CONFIG_CEPH_LIB_PRETTYDEBUG is not set
CONFIG_CEPH_LIB_USE_DNS_RESOLVER=y
# CONFIG_NFC is not set
CONFIG_PSAMPLE=m
# CONFIG_NET_IFE is not set
CONFIG_LWTUNNEL=y
CONFIG_LWTUNNEL_BPF=y
CONFIG_DST_CACHE=y
CONFIG_GRO_CELLS=y
CONFIG_SOCK_VALIDATE_XMIT=y
CONFIG_NET_SELFTESTS=y
CONFIG_NET_SOCK_MSG=y
CONFIG_FAILOVER=m
CONFIG_ETHTOOL_NETLINK=y

#
# Device Drivers
#
CONFIG_HAVE_EISA=y
# CONFIG_EISA is not set
CONFIG_HAVE_PCI=y
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCIEPORTBUS=y
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEAER=y
CONFIG_PCIEAER_INJECT=m
CONFIG_PCIE_ECRC=y
CONFIG_PCIEASPM=y
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_PCIE_PME=y
CONFIG_PCIE_DPC=y
# CONFIG_PCIE_PTM is not set
# CONFIG_PCIE_EDR is not set
CONFIG_PCI_MSI=y
CONFIG_PCI_MSI_IRQ_DOMAIN=y
CONFIG_PCI_QUIRKS=y
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
CONFIG_PCI_STUB=y
CONFIG_PCI_PF_STUB=m
CONFIG_PCI_ATS=y
CONFIG_PCI_LOCKLESS_CONFIG=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
# CONFIG_PCI_P2PDMA is not set
CONFIG_PCI_LABEL=y
CONFIG_PCI_HYPERV=m
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
CONFIG_HOTPLUG_PCI_ACPI_IBM=m
# CONFIG_HOTPLUG_PCI_CPCI is not set
CONFIG_HOTPLUG_PCI_SHPC=y

#
# PCI controller drivers
#
CONFIG_VMD=y
CONFIG_PCI_HYPERV_INTERFACE=m

#
# DesignWare PCI Core Support
#
# CONFIG_PCIE_DW_PLAT_HOST is not set
# CONFIG_PCI_MESON is not set
# end of DesignWare PCI Core Support

#
# Mobiveil PCIe Core Support
#
# end of Mobiveil PCIe Core Support

#
# Cadence PCIe controllers support
#
# end of Cadence PCIe controllers support
# end of PCI controller drivers

#
# PCI Endpoint
#
# CONFIG_PCI_ENDPOINT is not set
# end of PCI Endpoint

#
# PCI switch controller drivers
#
# CONFIG_PCI_SW_SWITCHTEC is not set
# end of PCI switch controller drivers

# CONFIG_CXL_BUS is not set
# CONFIG_PCCARD is not set
# CONFIG_RAPIDIO is not set

#
# Generic Driver Options
#
CONFIG_AUXILIARY_BUS=y
# CONFIG_UEVENT_HELPER is not set
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y

#
# Firmware loader
#
CONFIG_FW_LOADER=y
CONFIG_FW_LOADER_PAGED_BUF=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_FW_LOADER_USER_HELPER=y
# CONFIG_FW_LOADER_USER_HELPER_FALLBACK is not set
# CONFIG_FW_LOADER_COMPRESS is not set
CONFIG_FW_CACHE=y
# end of Firmware loader

CONFIG_ALLOW_DEV_COREDUMP=y
# CONFIG_DEBUG_DRIVER is not set
# CONFIG_DEBUG_DEVRES is not set
# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
CONFIG_GENERIC_CPU_AUTOPROBE=y
CONFIG_GENERIC_CPU_VULNERABILITIES=y
CONFIG_REGMAP=y
CONFIG_REGMAP_I2C=m
CONFIG_REGMAP_SPI=m
CONFIG_DMA_SHARED_BUFFER=y
# CONFIG_DMA_FENCE_TRACE is not set
# end of Generic Driver Options

#
# Bus devices
#
# CONFIG_MHI_BUS is not set
# end of Bus devices

CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y

#
# Firmware Drivers
#

#
# ARM System Control and Management Interface Protocol
#
# end of ARM System Control and Management Interface Protocol

CONFIG_EDD=m
# CONFIG_EDD_OFF is not set
CONFIG_FIRMWARE_MEMMAP=y
CONFIG_DMIID=y
CONFIG_DMI_SYSFS=y
CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y
# CONFIG_ISCSI_IBFT is not set
CONFIG_FW_CFG_SYSFS=y
# CONFIG_FW_CFG_SYSFS_CMDLINE is not set
CONFIG_SYSFB=y
# CONFIG_SYSFB_SIMPLEFB is not set
# CONFIG_GOOGLE_FIRMWARE is not set

#
# EFI (Extensible Firmware Interface) Support
#
CONFIG_EFI_VARS=y
CONFIG_EFI_ESRT=y
CONFIG_EFI_VARS_PSTORE=y
CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE=y
CONFIG_EFI_RUNTIME_MAP=y
# CONFIG_EFI_FAKE_MEMMAP is not set
CONFIG_EFI_RUNTIME_WRAPPERS=y
CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y
# CONFIG_EFI_BOOTLOADER_CONTROL is not set
# CONFIG_EFI_CAPSULE_LOADER is not set
# CONFIG_EFI_TEST is not set
CONFIG_APPLE_PROPERTIES=y
# CONFIG_RESET_ATTACK_MITIGATION is not set
# CONFIG_EFI_RCI2_TABLE is not set
# CONFIG_EFI_DISABLE_PCI_DMA is not set
# end of EFI (Extensible Firmware Interface) Support

CONFIG_UEFI_CPER=y
CONFIG_UEFI_CPER_X86=y
CONFIG_EFI_DEV_PATH_PARSER=y
CONFIG_EFI_EARLYCON=y
CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y

#
# Tegra firmware driver
#
# end of Tegra firmware driver
# end of Firmware Drivers

# CONFIG_GNSS is not set
# CONFIG_MTD is not set
# CONFIG_OF is not set
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
CONFIG_PARPORT=m
CONFIG_PARPORT_PC=m
CONFIG_PARPORT_SERIAL=m
# CONFIG_PARPORT_PC_FIFO is not set
# CONFIG_PARPORT_PC_SUPERIO is not set
# CONFIG_PARPORT_AX88796 is not set
CONFIG_PARPORT_1284=y
CONFIG_PNP=y
# CONFIG_PNP_DEBUG_MESSAGES is not set

#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
CONFIG_BLK_DEV_NULL_BLK=m
CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION=y
# CONFIG_BLK_DEV_FD is not set
CONFIG_CDROM=m
# CONFIG_PARIDE is not set
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
CONFIG_ZRAM=m
CONFIG_ZRAM_DEF_COMP_LZORLE=y
# CONFIG_ZRAM_DEF_COMP_LZO is not set
CONFIG_ZRAM_DEF_COMP="lzo-rle"
CONFIG_ZRAM_WRITEBACK=y
# CONFIG_ZRAM_MEMORY_TRACKING is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_LOOP_MIN_COUNT=0
# CONFIG_BLK_DEV_DRBD is not set
CONFIG_BLK_DEV_NBD=m
# CONFIG_BLK_DEV_SX8 is not set
CONFIG_BLK_DEV_RAM=m
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=16384
CONFIG_CDROM_PKTCDVD=m
CONFIG_CDROM_PKTCDVD_BUFFERS=8
# CONFIG_CDROM_PKTCDVD_WCACHE is not set
# CONFIG_ATA_OVER_ETH is not set
CONFIG_VIRTIO_BLK=m
CONFIG_BLK_DEV_RBD=m
# CONFIG_BLK_DEV_RSXX is not set

#
# NVME Support
#
CONFIG_NVME_CORE=m
CONFIG_BLK_DEV_NVME=m
CONFIG_NVME_MULTIPATH=y
# CONFIG_NVME_HWMON is not set
CONFIG_NVME_FABRICS=m
# CONFIG_NVME_RDMA is not set
CONFIG_NVME_FC=m
# CONFIG_NVME_TCP is not set
CONFIG_NVME_TARGET=m
# CONFIG_NVME_TARGET_PASSTHRU is not set
CONFIG_NVME_TARGET_LOOP=m
# CONFIG_NVME_TARGET_RDMA is not set
CONFIG_NVME_TARGET_FC=m
CONFIG_NVME_TARGET_FCLOOP=m
# CONFIG_NVME_TARGET_TCP is not set
# end of NVME Support

#
# Misc devices
#
CONFIG_SENSORS_LIS3LV02D=m
# CONFIG_AD525X_DPOT is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
CONFIG_TIFM_CORE=m
CONFIG_TIFM_7XX1=m
# CONFIG_ICS932S401 is not set
CONFIG_ENCLOSURE_SERVICES=m
CONFIG_SGI_XP=m
CONFIG_HP_ILO=m
CONFIG_SGI_GRU=m
# CONFIG_SGI_GRU_DEBUG is not set
CONFIG_APDS9802ALS=m
CONFIG_ISL29003=m
CONFIG_ISL29020=m
CONFIG_SENSORS_TSL2550=m
CONFIG_SENSORS_BH1770=m
CONFIG_SENSORS_APDS990X=m
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
CONFIG_VMWARE_BALLOON=m
# CONFIG_LATTICE_ECP3_CONFIG is not set
# CONFIG_SRAM is not set
# CONFIG_DW_XDATA_PCIE is not set
# CONFIG_PCI_ENDPOINT_TEST is not set
# CONFIG_XILINX_SDFEC is not set
CONFIG_MISC_RTSX=m
# CONFIG_C2PORT is not set

#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_AT25 is not set
CONFIG_EEPROM_LEGACY=m
CONFIG_EEPROM_MAX6875=m
CONFIG_EEPROM_93CX6=m
# CONFIG_EEPROM_93XX46 is not set
# CONFIG_EEPROM_IDT_89HPESX is not set
# CONFIG_EEPROM_EE1004 is not set
# end of EEPROM support

CONFIG_CB710_CORE=m
# CONFIG_CB710_DEBUG is not set
CONFIG_CB710_DEBUG_ASSUMPTIONS=y

#
# Texas Instruments shared transport line discipline
#
# CONFIG_TI_ST is not set
# end of Texas Instruments shared transport line discipline

CONFIG_SENSORS_LIS3_I2C=m
CONFIG_ALTERA_STAPL=m
CONFIG_INTEL_MEI=m
CONFIG_INTEL_MEI_ME=m
# CONFIG_INTEL_MEI_TXE is not set
# CONFIG_INTEL_MEI_HDCP is not set
# CONFIG_INTEL_MEI_PXP is not set
CONFIG_VMWARE_VMCI=m
# CONFIG_GENWQE is not set
# CONFIG_ECHO is not set
# CONFIG_BCM_VK is not set
# CONFIG_MISC_ALCOR_PCI is not set
CONFIG_MISC_RTSX_PCI=m
# CONFIG_MISC_RTSX_USB is not set
# CONFIG_HABANA_AI is not set
# CONFIG_UACCE is not set
CONFIG_PVPANIC=y
# CONFIG_PVPANIC_MMIO is not set
# CONFIG_PVPANIC_PCI is not set
# end of Misc devices

#
# SCSI device support
#
CONFIG_SCSI_MOD=y
CONFIG_RAID_ATTRS=m
CONFIG_SCSI_COMMON=y
CONFIG_SCSI=y
CONFIG_SCSI_DMA=y
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
CONFIG_BLK_DEV_SR=m
CONFIG_CHR_DEV_SG=m
CONFIG_BLK_DEV_BSG=y
CONFIG_CHR_DEV_SCH=m
CONFIG_SCSI_ENCLOSURE=m
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y

#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=m
CONFIG_SCSI_FC_ATTRS=m
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_SCSI_SAS_ATA=y
CONFIG_SCSI_SAS_HOST_SMP=y
CONFIG_SCSI_SRP_ATTRS=m
# end of SCSI Transports

CONFIG_SCSI_LOWLEVEL=y
# CONFIG_ISCSI_TCP is not set
# CONFIG_ISCSI_BOOT_SYSFS is not set
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_BE2ISCSI is not set
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
# CONFIG_SCSI_HPSA is not set
# CONFIG_SCSI_3W_9XXX is not set
# CONFIG_SCSI_3W_SAS is not set
# CONFIG_SCSI_ACARD is not set
# CONFIG_SCSI_AACRAID is not set
# CONFIG_SCSI_AIC7XXX is not set
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_SCSI_AIC94XX is not set
# CONFIG_SCSI_MVSAS is not set
# CONFIG_SCSI_MVUMI is not set
# CONFIG_SCSI_DPT_I2O is not set
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_ARCMSR is not set
# CONFIG_SCSI_ESAS2R is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_MEGARAID_SAS is not set
CONFIG_SCSI_MPT3SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT3SAS_MAX_SGE=128
# CONFIG_SCSI_MPT2SAS is not set
# CONFIG_SCSI_MPI3MR is not set
# CONFIG_SCSI_SMARTPQI is not set
# CONFIG_SCSI_UFSHCD is not set
# CONFIG_SCSI_UFS_HWMON is not set
# CONFIG_SCSI_HPTIOP is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_MYRB is not set
# CONFIG_SCSI_MYRS is not set
# CONFIG_VMWARE_PVSCSI is not set
CONFIG_HYPERV_STORAGE=m
# CONFIG_LIBFC is not set
# CONFIG_SCSI_SNIC is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_FDOMAIN_PCI is not set
CONFIG_SCSI_ISCI=m
# CONFIG_SCSI_IPS is not set
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_PPA is not set
# CONFIG_SCSI_IMM is not set
# CONFIG_SCSI_STEX is not set
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
# CONFIG_SCSI_QLA_FC is not set
# CONFIG_SCSI_QLA_ISCSI is not set
# CONFIG_SCSI_LPFC is not set
# CONFIG_SCSI_EFCT is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_AM53C974 is not set
# CONFIG_SCSI_WD719X is not set
CONFIG_SCSI_DEBUG=m
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
# CONFIG_SCSI_BFA_FC is not set
# CONFIG_SCSI_VIRTIO is not set
# CONFIG_SCSI_CHELSIO_FCOE is not set
CONFIG_SCSI_DH=y
CONFIG_SCSI_DH_RDAC=y
CONFIG_SCSI_DH_HP_SW=y
CONFIG_SCSI_DH_EMC=y
CONFIG_SCSI_DH_ALUA=y
# end of SCSI device support

CONFIG_ATA=m
CONFIG_SATA_HOST=y
CONFIG_PATA_TIMINGS=y
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_FORCE=y
CONFIG_ATA_ACPI=y
# CONFIG_SATA_ZPODD is not set
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
CONFIG_SATA_AHCI=m
CONFIG_SATA_MOBILE_LPM_POLICY=0
CONFIG_SATA_AHCI_PLATFORM=m
# CONFIG_SATA_INIC162X is not set
# CONFIG_SATA_ACARD_AHCI is not set
# CONFIG_SATA_SIL24 is not set
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
# CONFIG_PDC_ADMA is not set
# CONFIG_SATA_QSTOR is not set
# CONFIG_SATA_SX4 is not set
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_ATA_PIIX=m
# CONFIG_SATA_DWC is not set
# CONFIG_SATA_MV is not set
# CONFIG_SATA_NV is not set
# CONFIG_SATA_PROMISE is not set
# CONFIG_SATA_SIL is not set
# CONFIG_SATA_SIS is not set
# CONFIG_SATA_SVW is not set
# CONFIG_SATA_ULI is not set
# CONFIG_SATA_VIA is not set
# CONFIG_SATA_VITESSE is not set

#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CYPRESS is not set
# CONFIG_PATA_EFAR is not set
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
# CONFIG_PATA_MARVELL is not set
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
# CONFIG_PATA_PDC_OLD is not set
# CONFIG_PATA_RADISYS is not set
# CONFIG_PATA_RDC is not set
# CONFIG_PATA_SCH is not set
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
# CONFIG_PATA_SIS is not set
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
# CONFIG_PATA_WINBOND is not set

#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
# CONFIG_PATA_RZ1000 is not set

#
# Generic fallback / legacy drivers
#
# CONFIG_PATA_ACPI is not set
CONFIG_ATA_GENERIC=m
# CONFIG_PATA_LEGACY is not set
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_AUTODETECT=y
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
CONFIG_MD_MULTIPATH=m
CONFIG_MD_FAULTY=m
CONFIG_MD_CLUSTER=m
# CONFIG_BCACHE is not set
CONFIG_BLK_DEV_DM_BUILTIN=y
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
CONFIG_DM_BUFIO=m
# CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING is not set
CONFIG_DM_BIO_PRISON=m
CONFIG_DM_PERSISTENT_DATA=m
# CONFIG_DM_UNSTRIPED is not set
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
CONFIG_DM_THIN_PROVISIONING=m
CONFIG_DM_CACHE=m
CONFIG_DM_CACHE_SMQ=m
CONFIG_DM_WRITECACHE=m
# CONFIG_DM_EBS is not set
CONFIG_DM_ERA=m
# CONFIG_DM_CLONE is not set
CONFIG_DM_MIRROR=m
CONFIG_DM_LOG_USERSPACE=m
CONFIG_DM_RAID=m
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
CONFIG_DM_MULTIPATH_QL=m
CONFIG_DM_MULTIPATH_ST=m
# CONFIG_DM_MULTIPATH_HST is not set
# CONFIG_DM_MULTIPATH_IOA is not set
CONFIG_DM_DELAY=m
# CONFIG_DM_DUST is not set
CONFIG_DM_UEVENT=y
CONFIG_DM_FLAKEY=m
CONFIG_DM_VERITY=m
# CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG is not set
# CONFIG_DM_VERITY_FEC is not set
CONFIG_DM_SWITCH=m
CONFIG_DM_LOG_WRITES=m
CONFIG_DM_INTEGRITY=m
# CONFIG_DM_ZONED is not set
CONFIG_DM_AUDIT=y
CONFIG_TARGET_CORE=m
CONFIG_TCM_IBLOCK=m
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_TCM_USER2=m
CONFIG_LOOPBACK_TARGET=m
CONFIG_ISCSI_TARGET=m
# CONFIG_SBP_TARGET is not set
# CONFIG_FUSION is not set

#
# IEEE 1394 (FireWire) support
#
CONFIG_FIREWIRE=m
CONFIG_FIREWIRE_OHCI=m
CONFIG_FIREWIRE_SBP2=m
CONFIG_FIREWIRE_NET=m
# CONFIG_FIREWIRE_NOSY is not set
# end of IEEE 1394 (FireWire) support

CONFIG_MACINTOSH_DRIVERS=y
CONFIG_MAC_EMUMOUSEBTN=y
CONFIG_NETDEVICES=y
CONFIG_MII=y
CONFIG_NET_CORE=y
# CONFIG_BONDING is not set
CONFIG_DUMMY=m
# CONFIG_WIREGUARD is not set
# CONFIG_EQUALIZER is not set
# CONFIG_NET_FC is not set
# CONFIG_IFB is not set
# CONFIG_NET_TEAM is not set
# CONFIG_MACVLAN is not set
# CONFIG_IPVLAN is not set
# CONFIG_VXLAN is not set
# CONFIG_GENEVE is not set
# CONFIG_BAREUDP is not set
# CONFIG_GTP is not set
# CONFIG_AMT is not set
# CONFIG_MACSEC is not set
CONFIG_NETCONSOLE=m
CONFIG_NETCONSOLE_DYNAMIC=y
CONFIG_NETPOLL=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_TUN=m
# CONFIG_TUN_VNET_CROSS_LE is not set
CONFIG_VETH=m
CONFIG_VIRTIO_NET=m
# CONFIG_NLMON is not set
# CONFIG_NET_VRF is not set
# CONFIG_VSOCKMON is not set
# CONFIG_ARCNET is not set
CONFIG_ATM_DRIVERS=y
# CONFIG_ATM_DUMMY is not set
# CONFIG_ATM_TCP is not set
# CONFIG_ATM_LANAI is not set
# CONFIG_ATM_ENI is not set
# CONFIG_ATM_FIRESTREAM is not set
# CONFIG_ATM_ZATM is not set
# CONFIG_ATM_NICSTAR is not set
# CONFIG_ATM_IDT77252 is not set
# CONFIG_ATM_AMBASSADOR is not set
# CONFIG_ATM_HORIZON is not set
# CONFIG_ATM_IA is not set
# CONFIG_ATM_FORE200E is not set
# CONFIG_ATM_HE is not set
# CONFIG_ATM_SOLOS is not set
CONFIG_ETHERNET=y
CONFIG_MDIO=y
# CONFIG_NET_VENDOR_3COM is not set
CONFIG_NET_VENDOR_ADAPTEC=y
# CONFIG_ADAPTEC_STARFIRE is not set
CONFIG_NET_VENDOR_AGERE=y
# CONFIG_ET131X is not set
CONFIG_NET_VENDOR_ALACRITECH=y
# CONFIG_SLICOSS is not set
CONFIG_NET_VENDOR_ALTEON=y
# CONFIG_ACENIC is not set
# CONFIG_ALTERA_TSE is not set
CONFIG_NET_VENDOR_AMAZON=y
# CONFIG_ENA_ETHERNET is not set
CONFIG_NET_VENDOR_AMD=y
# CONFIG_AMD8111_ETH is not set
# CONFIG_PCNET32 is not set
# CONFIG_AMD_XGBE is not set
CONFIG_NET_VENDOR_AQUANTIA=y
# CONFIG_AQTION is not set
CONFIG_NET_VENDOR_ARC=y
CONFIG_NET_VENDOR_ASIX=y
# CONFIG_SPI_AX88796C is not set
CONFIG_NET_VENDOR_ATHEROS=y
# CONFIG_ATL2 is not set
# CONFIG_ATL1 is not set
# CONFIG_ATL1E is not set
# CONFIG_ATL1C is not set
# CONFIG_ALX is not set
CONFIG_NET_VENDOR_BROADCOM=y
# CONFIG_B44 is not set
# CONFIG_BCMGENET is not set
# CONFIG_BNX2 is not set
# CONFIG_CNIC is not set
# CONFIG_TIGON3 is not set
# CONFIG_BNX2X is not set
# CONFIG_SYSTEMPORT is not set
# CONFIG_BNXT is not set
CONFIG_NET_VENDOR_BROCADE=y
# CONFIG_BNA is not set
CONFIG_NET_VENDOR_CADENCE=y
# CONFIG_MACB is not set
CONFIG_NET_VENDOR_CAVIUM=y
# CONFIG_THUNDER_NIC_PF is not set
# CONFIG_THUNDER_NIC_VF is not set
# CONFIG_THUNDER_NIC_BGX is not set
# CONFIG_THUNDER_NIC_RGX is not set
CONFIG_CAVIUM_PTP=y
# CONFIG_LIQUIDIO is not set
# CONFIG_LIQUIDIO_VF is not set
CONFIG_NET_VENDOR_CHELSIO=y
# CONFIG_CHELSIO_T1 is not set
# CONFIG_CHELSIO_T3 is not set
# CONFIG_CHELSIO_T4 is not set
# CONFIG_CHELSIO_T4VF is not set
CONFIG_NET_VENDOR_CISCO=y
# CONFIG_ENIC is not set
CONFIG_NET_VENDOR_CORTINA=y
# CONFIG_CX_ECAT is not set
# CONFIG_DNET is not set
CONFIG_NET_VENDOR_DEC=y
# CONFIG_NET_TULIP is not set
CONFIG_NET_VENDOR_DLINK=y
# CONFIG_DL2K is not set
# CONFIG_SUNDANCE is not set
CONFIG_NET_VENDOR_EMULEX=y
# CONFIG_BE2NET is not set
CONFIG_NET_VENDOR_EZCHIP=y
CONFIG_NET_VENDOR_GOOGLE=y
# CONFIG_GVE is not set
CONFIG_NET_VENDOR_HUAWEI=y
# CONFIG_HINIC is not set
CONFIG_NET_VENDOR_I825XX=y
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_E1000E_HWTS=y
CONFIG_IGB=y
CONFIG_IGB_HWMON=y
# CONFIG_IGBVF is not set
# CONFIG_IXGB is not set
CONFIG_IXGBE=y
CONFIG_IXGBE_HWMON=y
# CONFIG_IXGBE_DCB is not set
CONFIG_IXGBE_IPSEC=y
# CONFIG_IXGBEVF is not set
CONFIG_I40E=y
# CONFIG_I40E_DCB is not set
# CONFIG_I40EVF is not set
# CONFIG_ICE is not set
# CONFIG_FM10K is not set
CONFIG_IGC=y
CONFIG_NET_VENDOR_MICROSOFT=y
# CONFIG_MICROSOFT_MANA is not set
# CONFIG_JME is not set
CONFIG_NET_VENDOR_LITEX=y
CONFIG_NET_VENDOR_MARVELL=y
# CONFIG_MVMDIO is not set
# CONFIG_SKGE is not set
# CONFIG_SKY2 is not set
# CONFIG_PRESTERA is not set
CONFIG_NET_VENDOR_MELLANOX=y
# CONFIG_MLX4_EN is not set
# CONFIG_MLX5_CORE is not set
# CONFIG_MLXSW_CORE is not set
# CONFIG_MLXFW is not set
CONFIG_NET_VENDOR_MICREL=y
# CONFIG_KS8842 is not set
# CONFIG_KS8851 is not set
# CONFIG_KS8851_MLL is not set
# CONFIG_KSZ884X_PCI is not set
CONFIG_NET_VENDOR_MICROCHIP=y
# CONFIG_ENC28J60 is not set
# CONFIG_ENCX24J600 is not set
# CONFIG_LAN743X is not set
CONFIG_NET_VENDOR_MICROSEMI=y
CONFIG_NET_VENDOR_MYRI=y
# CONFIG_MYRI10GE is not set
# CONFIG_FEALNX is not set
CONFIG_NET_VENDOR_NATSEMI=y
# CONFIG_NATSEMI is not set
# CONFIG_NS83820 is not set
CONFIG_NET_VENDOR_NETERION=y
# CONFIG_S2IO is not set
# CONFIG_VXGE is not set
CONFIG_NET_VENDOR_NETRONOME=y
# CONFIG_NFP is not set
CONFIG_NET_VENDOR_NI=y
# CONFIG_NI_XGE_MANAGEMENT_ENET is not set
CONFIG_NET_VENDOR_8390=y
# CONFIG_NE2K_PCI is not set
CONFIG_NET_VENDOR_NVIDIA=y
# CONFIG_FORCEDETH is not set
CONFIG_NET_VENDOR_OKI=y
# CONFIG_ETHOC is not set
CONFIG_NET_VENDOR_PACKET_ENGINES=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
CONFIG_NET_VENDOR_PENSANDO=y
# CONFIG_IONIC is not set
CONFIG_NET_VENDOR_QLOGIC=y
# CONFIG_QLA3XXX is not set
# CONFIG_QLCNIC is not set
# CONFIG_NETXEN_NIC is not set
# CONFIG_QED is not set
CONFIG_NET_VENDOR_QUALCOMM=y
# CONFIG_QCOM_EMAC is not set
# CONFIG_RMNET is not set
CONFIG_NET_VENDOR_RDC=y
# CONFIG_R6040 is not set
CONFIG_NET_VENDOR_REALTEK=y
# CONFIG_ATP is not set
# CONFIG_8139CP is not set
# CONFIG_8139TOO is not set
CONFIG_R8169=y
CONFIG_NET_VENDOR_RENESAS=y
CONFIG_NET_VENDOR_ROCKER=y
# CONFIG_ROCKER is not set
CONFIG_NET_VENDOR_SAMSUNG=y
# CONFIG_SXGBE_ETH is not set
CONFIG_NET_VENDOR_SEEQ=y
CONFIG_NET_VENDOR_SOLARFLARE=y
# CONFIG_SFC is not set
# CONFIG_SFC_FALCON is not set
CONFIG_NET_VENDOR_SILAN=y
# CONFIG_SC92031 is not set
CONFIG_NET_VENDOR_SIS=y
# CONFIG_SIS900 is not set
# CONFIG_SIS190 is not set
CONFIG_NET_VENDOR_SMSC=y
# CONFIG_EPIC100 is not set
# CONFIG_SMSC911X is not set
# CONFIG_SMSC9420 is not set
CONFIG_NET_VENDOR_SOCIONEXT=y
CONFIG_NET_VENDOR_STMICRO=y
# CONFIG_STMMAC_ETH is not set
CONFIG_NET_VENDOR_SUN=y
# CONFIG_HAPPYMEAL is not set
# CONFIG_SUNGEM is not set
# CONFIG_CASSINI is not set
# CONFIG_NIU is not set
CONFIG_NET_VENDOR_SYNOPSYS=y
# CONFIG_DWC_XLGMAC is not set
CONFIG_NET_VENDOR_TEHUTI=y
# CONFIG_TEHUTI is not set
CONFIG_NET_VENDOR_TI=y
# CONFIG_TI_CPSW_PHY_SEL is not set
# CONFIG_TLAN is not set
CONFIG_NET_VENDOR_VIA=y
# CONFIG_VIA_RHINE is not set
# CONFIG_VIA_VELOCITY is not set
CONFIG_NET_VENDOR_WIZNET=y
# CONFIG_WIZNET_W5100 is not set
# CONFIG_WIZNET_W5300 is not set
CONFIG_NET_VENDOR_XILINX=y
# CONFIG_XILINX_EMACLITE is not set
# CONFIG_XILINX_AXI_EMAC is not set
# CONFIG_XILINX_LL_TEMAC is not set
# CONFIG_FDDI is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
CONFIG_PHYLIB=y
CONFIG_SWPHY=y
# CONFIG_LED_TRIGGER_PHY is not set
CONFIG_FIXED_PHY=y

#
# MII PHY device drivers
#
# CONFIG_AMD_PHY is not set
# CONFIG_ADIN_PHY is not set
# CONFIG_AQUANTIA_PHY is not set
CONFIG_AX88796B_PHY=y
# CONFIG_BROADCOM_PHY is not set
# CONFIG_BCM54140_PHY is not set
# CONFIG_BCM7XXX_PHY is not set
# CONFIG_BCM84881_PHY is not set
# CONFIG_BCM87XX_PHY is not set
# CONFIG_CICADA_PHY is not set
# CONFIG_CORTINA_PHY is not set
# CONFIG_DAVICOM_PHY is not set
# CONFIG_ICPLUS_PHY is not set
# CONFIG_LXT_PHY is not set
# CONFIG_INTEL_XWAY_PHY is not set
# CONFIG_LSI_ET1011C_PHY is not set
# CONFIG_MARVELL_PHY is not set
# CONFIG_MARVELL_10G_PHY is not set
# CONFIG_MARVELL_88X2222_PHY is not set
# CONFIG_MAXLINEAR_GPHY is not set
# CONFIG_MEDIATEK_GE_PHY is not set
# CONFIG_MICREL_PHY is not set
# CONFIG_MICROCHIP_PHY is not set
# CONFIG_MICROCHIP_T1_PHY is not set
# CONFIG_MICROSEMI_PHY is not set
# CONFIG_MOTORCOMM_PHY is not set
# CONFIG_NATIONAL_PHY is not set
# CONFIG_NXP_C45_TJA11XX_PHY is not set
# CONFIG_NXP_TJA11XX_PHY is not set
# CONFIG_QSEMI_PHY is not set
CONFIG_REALTEK_PHY=y
# CONFIG_RENESAS_PHY is not set
# CONFIG_ROCKCHIP_PHY is not set
# CONFIG_SMSC_PHY is not set
# CONFIG_STE10XP is not set
# CONFIG_TERANETICS_PHY is not set
# CONFIG_DP83822_PHY is not set
# CONFIG_DP83TC811_PHY is not set
# CONFIG_DP83848_PHY is not set
# CONFIG_DP83867_PHY is not set
# CONFIG_DP83869_PHY is not set
# CONFIG_VITESSE_PHY is not set
# CONFIG_XILINX_GMII2RGMII is not set
# CONFIG_MICREL_KS8995MA is not set
CONFIG_MDIO_DEVICE=y
CONFIG_MDIO_BUS=y
CONFIG_FWNODE_MDIO=y
CONFIG_ACPI_MDIO=y
CONFIG_MDIO_DEVRES=y
# CONFIG_MDIO_BITBANG is not set
# CONFIG_MDIO_BCM_UNIMAC is not set
# CONFIG_MDIO_MVUSB is not set
# CONFIG_MDIO_MSCC_MIIM is not set
# CONFIG_MDIO_THUNDER is not set

#
# MDIO Multiplexers
#

#
# PCS device drivers
#
# CONFIG_PCS_XPCS is not set
# end of PCS device drivers

# CONFIG_PLIP is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set
CONFIG_USB_NET_DRIVERS=y
# CONFIG_USB_CATC is not set
# CONFIG_USB_KAWETH is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RTL8150 is not set
CONFIG_USB_RTL8152=y
# CONFIG_USB_LAN78XX is not set
CONFIG_USB_USBNET=y
CONFIG_USB_NET_AX8817X=y
CONFIG_USB_NET_AX88179_178A=y
# CONFIG_USB_NET_CDCETHER is not set
# CONFIG_USB_NET_CDC_EEM is not set
# CONFIG_USB_NET_CDC_NCM is not set
# CONFIG_USB_NET_HUAWEI_CDC_NCM is not set
# CONFIG_USB_NET_CDC_MBIM is not set
# CONFIG_USB_NET_DM9601 is not set
# CONFIG_USB_NET_SR9700 is not set
# CONFIG_USB_NET_SR9800 is not set
# CONFIG_USB_NET_SMSC75XX is not set
# CONFIG_USB_NET_SMSC95XX is not set
# CONFIG_USB_NET_GL620A is not set
# CONFIG_USB_NET_NET1080 is not set
# CONFIG_USB_NET_PLUSB is not set
# CONFIG_USB_NET_MCS7830 is not set
# CONFIG_USB_NET_RNDIS_HOST is not set
# CONFIG_USB_NET_CDC_SUBSET is not set
# CONFIG_USB_NET_ZAURUS is not set
# CONFIG_USB_NET_CX82310_ETH is not set
# CONFIG_USB_NET_KALMIA is not set
# CONFIG_USB_NET_QMI_WWAN is not set
# CONFIG_USB_HSO is not set
# CONFIG_USB_NET_INT51X1 is not set
# CONFIG_USB_IPHETH is not set
# CONFIG_USB_SIERRA_NET is not set
# CONFIG_USB_NET_CH9200 is not set
# CONFIG_USB_NET_AQC111 is not set
CONFIG_WLAN=y
CONFIG_WLAN_VENDOR_ADMTEK=y
# CONFIG_ADM8211 is not set
CONFIG_WLAN_VENDOR_ATH=y
# CONFIG_ATH_DEBUG is not set
# CONFIG_ATH5K is not set
# CONFIG_ATH5K_PCI is not set
# CONFIG_ATH9K is not set
# CONFIG_ATH9K_HTC is not set
# CONFIG_CARL9170 is not set
# CONFIG_ATH6KL is not set
# CONFIG_AR5523 is not set
# CONFIG_WIL6210 is not set
# CONFIG_ATH10K is not set
# CONFIG_WCN36XX is not set
# CONFIG_ATH11K is not set
CONFIG_WLAN_VENDOR_ATMEL=y
# CONFIG_ATMEL is not set
# CONFIG_AT76C50X_USB is not set
CONFIG_WLAN_VENDOR_BROADCOM=y
# CONFIG_B43 is not set
# CONFIG_B43LEGACY is not set
# CONFIG_BRCMSMAC is not set
# CONFIG_BRCMFMAC is not set
CONFIG_WLAN_VENDOR_CISCO=y
# CONFIG_AIRO is not set
CONFIG_WLAN_VENDOR_INTEL=y
# CONFIG_IPW2100 is not set
# CONFIG_IPW2200 is not set
# CONFIG_IWL4965 is not set
# CONFIG_IWL3945 is not set
# CONFIG_IWLWIFI is not set
CONFIG_WLAN_VENDOR_INTERSIL=y
# CONFIG_HOSTAP is not set
# CONFIG_HERMES is not set
# CONFIG_P54_COMMON is not set
CONFIG_WLAN_VENDOR_MARVELL=y
# CONFIG_LIBERTAS is not set
# CONFIG_LIBERTAS_THINFIRM is not set
# CONFIG_MWIFIEX is not set
# CONFIG_MWL8K is not set
# CONFIG_WLAN_VENDOR_MEDIATEK is not set
CONFIG_WLAN_VENDOR_MICROCHIP=y
# CONFIG_WILC1000_SDIO is not set
# CONFIG_WILC1000_SPI is not set
CONFIG_WLAN_VENDOR_RALINK=y
# CONFIG_RT2X00 is not set
CONFIG_WLAN_VENDOR_REALTEK=y
# CONFIG_RTL8180 is not set
# CONFIG_RTL8187 is not set
CONFIG_RTL_CARDS=m
# CONFIG_RTL8192CE is not set
# CONFIG_RTL8192SE is not set
# CONFIG_RTL8192DE is not set
# CONFIG_RTL8723AE is not set
# CONFIG_RTL8723BE is not set
# CONFIG_RTL8188EE is not set
# CONFIG_RTL8192EE is not set
# CONFIG_RTL8821AE is not set
# CONFIG_RTL8192CU is not set
# CONFIG_RTL8XXXU is not set
# CONFIG_RTW88 is not set
# CONFIG_RTW89 is not set
CONFIG_WLAN_VENDOR_RSI=y
# CONFIG_RSI_91X is not set
CONFIG_WLAN_VENDOR_ST=y
# CONFIG_CW1200 is not set
CONFIG_WLAN_VENDOR_TI=y
# CONFIG_WL1251 is not set
# CONFIG_WL12XX is not set
# CONFIG_WL18XX is not set
# CONFIG_WLCORE is not set
CONFIG_WLAN_VENDOR_ZYDAS=y
# CONFIG_USB_ZD1201 is not set
# CONFIG_ZD1211RW is not set
CONFIG_WLAN_VENDOR_QUANTENNA=y
# CONFIG_QTNFMAC_PCIE is not set
CONFIG_MAC80211_HWSIM=m
# CONFIG_USB_NET_RNDIS_WLAN is not set
# CONFIG_VIRT_WIFI is not set
# CONFIG_WAN is not set
CONFIG_IEEE802154_DRIVERS=m
# CONFIG_IEEE802154_FAKELB is not set
# CONFIG_IEEE802154_AT86RF230 is not set
# CONFIG_IEEE802154_MRF24J40 is not set
# CONFIG_IEEE802154_CC2520 is not set
# CONFIG_IEEE802154_ATUSB is not set
# CONFIG_IEEE802154_ADF7242 is not set
# CONFIG_IEEE802154_CA8210 is not set
# CONFIG_IEEE802154_MCR20A is not set
# CONFIG_IEEE802154_HWSIM is not set

#
# Wireless WAN
#
# CONFIG_WWAN is not set
# end of Wireless WAN

# CONFIG_VMXNET3 is not set
# CONFIG_FUJITSU_ES is not set
# CONFIG_HYPERV_NET is not set
# CONFIG_NETDEVSIM is not set
CONFIG_NET_FAILOVER=m
# CONFIG_ISDN is not set

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_LEDS=y
CONFIG_INPUT_FF_MEMLESS=m
CONFIG_INPUT_SPARSEKMAP=m
# CONFIG_INPUT_MATRIXKMAP is not set

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
CONFIG_INPUT_JOYDEV=m
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ADP5588 is not set
# CONFIG_KEYBOARD_ADP5589 is not set
# CONFIG_KEYBOARD_APPLESPI is not set
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_QT1050 is not set
# CONFIG_KEYBOARD_QT1070 is not set
# CONFIG_KEYBOARD_QT2160 is not set
# CONFIG_KEYBOARD_DLINK_DIR685 is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_GPIO is not set
# CONFIG_KEYBOARD_GPIO_POLLED is not set
# CONFIG_KEYBOARD_TCA6416 is not set
# CONFIG_KEYBOARD_TCA8418 is not set
# CONFIG_KEYBOARD_MATRIX is not set
# CONFIG_KEYBOARD_LM8323 is not set
# CONFIG_KEYBOARD_LM8333 is not set
# CONFIG_KEYBOARD_MAX7359 is not set
# CONFIG_KEYBOARD_MCS is not set
# CONFIG_KEYBOARD_MPR121 is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_SAMSUNG is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_TM2_TOUCHKEY is not set
# CONFIG_KEYBOARD_XTKBD is not set
# CONFIG_KEYBOARD_CYPRESS_SF is not set
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_BYD=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_SYNAPTICS_SMBUS=y
CONFIG_MOUSE_PS2_CYPRESS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
CONFIG_MOUSE_PS2_ELANTECH=y
CONFIG_MOUSE_PS2_ELANTECH_SMBUS=y
CONFIG_MOUSE_PS2_SENTELIC=y
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
CONFIG_MOUSE_PS2_FOCALTECH=y
CONFIG_MOUSE_PS2_VMMOUSE=y
CONFIG_MOUSE_PS2_SMBUS=y
CONFIG_MOUSE_SERIAL=m
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
CONFIG_MOUSE_CYAPA=m
CONFIG_MOUSE_ELAN_I2C=m
CONFIG_MOUSE_ELAN_I2C_I2C=y
CONFIG_MOUSE_ELAN_I2C_SMBUS=y
CONFIG_MOUSE_VSXXXAA=m
# CONFIG_MOUSE_GPIO is not set
CONFIG_MOUSE_SYNAPTICS_I2C=m
# CONFIG_MOUSE_SYNAPTICS_USB is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TABLET is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_AD714X is not set
# CONFIG_INPUT_BMA150 is not set
# CONFIG_INPUT_E3X0_BUTTON is not set
# CONFIG_INPUT_PCSPKR is not set
# CONFIG_INPUT_MMA8450 is not set
# CONFIG_INPUT_APANEL is not set
# CONFIG_INPUT_GPIO_BEEPER is not set
# CONFIG_INPUT_GPIO_DECODER is not set
# CONFIG_INPUT_GPIO_VIBRA is not set
# CONFIG_INPUT_ATLAS_BTNS is not set
# CONFIG_INPUT_ATI_REMOTE2 is not set
# CONFIG_INPUT_KEYSPAN_REMOTE is not set
# CONFIG_INPUT_KXTJ9 is not set
# CONFIG_INPUT_POWERMATE is not set
# CONFIG_INPUT_YEALINK is not set
# CONFIG_INPUT_CM109 is not set
CONFIG_INPUT_UINPUT=y
# CONFIG_INPUT_PCF8574 is not set
# CONFIG_INPUT_PWM_BEEPER is not set
# CONFIG_INPUT_PWM_VIBRA is not set
# CONFIG_INPUT_GPIO_ROTARY_ENCODER is not set
# CONFIG_INPUT_DA7280_HAPTICS is not set
# CONFIG_INPUT_ADXL34X is not set
# CONFIG_INPUT_IMS_PCU is not set
# CONFIG_INPUT_IQS269A is not set
# CONFIG_INPUT_IQS626A is not set
# CONFIG_INPUT_CMA3000 is not set
# CONFIG_INPUT_IDEAPAD_SLIDEBAR is not set
# CONFIG_INPUT_DRV260X_HAPTICS is not set
# CONFIG_INPUT_DRV2665_HAPTICS is not set
# CONFIG_INPUT_DRV2667_HAPTICS is not set
CONFIG_RMI4_CORE=m
CONFIG_RMI4_I2C=m
CONFIG_RMI4_SPI=m
CONFIG_RMI4_SMB=m
CONFIG_RMI4_F03=y
CONFIG_RMI4_F03_SERIO=m
CONFIG_RMI4_2D_SENSOR=y
CONFIG_RMI4_F11=y
CONFIG_RMI4_F12=y
CONFIG_RMI4_F30=y
CONFIG_RMI4_F34=y
# CONFIG_RMI4_F3A is not set
# CONFIG_RMI4_F54 is not set
CONFIG_RMI4_F55=y

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PARKBD is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
CONFIG_SERIO_RAW=m
CONFIG_SERIO_ALTERA_PS2=m
# CONFIG_SERIO_PS2MULT is not set
CONFIG_SERIO_ARC_PS2=m
CONFIG_HYPERV_KEYBOARD=m
# CONFIG_SERIO_GPIO_PS2 is not set
# CONFIG_USERIO is not set
# CONFIG_GAMEPORT is not set
# end of Hardware I/O ports
# end of Input device support

#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
# CONFIG_LEGACY_PTYS is not set
CONFIG_LDISC_AUTOLOAD=y

#
# Serial drivers
#
CONFIG_SERIAL_EARLYCON=y
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
CONFIG_SERIAL_8250_PNP=y
# CONFIG_SERIAL_8250_16550A_VARIANTS is not set
# CONFIG_SERIAL_8250_FINTEK is not set
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_DMA=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_EXAR=y
CONFIG_SERIAL_8250_NR_UARTS=64
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
CONFIG_SERIAL_8250_RSA=y
CONFIG_SERIAL_8250_DWLIB=y
CONFIG_SERIAL_8250_DW=y
# CONFIG_SERIAL_8250_RT288X is not set
CONFIG_SERIAL_8250_LPSS=y
CONFIG_SERIAL_8250_MID=y

#
# Non-8250 serial port support
#
# CONFIG_SERIAL_MAX3100 is not set
# CONFIG_SERIAL_MAX310X is not set
# CONFIG_SERIAL_UARTLITE is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_SERIAL_JSM=m
# CONFIG_SERIAL_LANTIQ is not set
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_SC16IS7XX is not set
# CONFIG_SERIAL_BCM63XX is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
CONFIG_SERIAL_ARC=m
CONFIG_SERIAL_ARC_NR_PORTS=1
# CONFIG_SERIAL_RP2 is not set
# CONFIG_SERIAL_FSL_LPUART is not set
# CONFIG_SERIAL_FSL_LINFLEXUART is not set
# CONFIG_SERIAL_SPRD is not set
# end of Serial drivers

CONFIG_SERIAL_MCTRL_GPIO=y
CONFIG_SERIAL_NONSTANDARD=y
# CONFIG_MOXA_INTELLIO is not set
# CONFIG_MOXA_SMARTIO is not set
CONFIG_SYNCLINK_GT=m
CONFIG_N_HDLC=m
CONFIG_N_GSM=m
CONFIG_NOZOMI=m
# CONFIG_NULL_TTY is not set
CONFIG_HVC_DRIVER=y
# CONFIG_SERIAL_DEV_BUS is not set
CONFIG_PRINTER=m
# CONFIG_LP_CONSOLE is not set
CONFIG_PPDEV=m
CONFIG_VIRTIO_CONSOLE=m
CONFIG_IPMI_HANDLER=m
CONFIG_IPMI_DMI_DECODE=y
CONFIG_IPMI_PLAT_DATA=y
CONFIG_IPMI_PANIC_EVENT=y
CONFIG_IPMI_PANIC_STRING=y
CONFIG_IPMI_DEVICE_INTERFACE=m
CONFIG_IPMI_SI=m
CONFIG_IPMI_SSIF=m
CONFIG_IPMI_WATCHDOG=m
CONFIG_IPMI_POWEROFF=m
CONFIG_HW_RANDOM=y
CONFIG_HW_RANDOM_TIMERIOMEM=m
CONFIG_HW_RANDOM_INTEL=m
CONFIG_HW_RANDOM_AMD=m
# CONFIG_HW_RANDOM_BA431 is not set
CONFIG_HW_RANDOM_VIA=m
CONFIG_HW_RANDOM_VIRTIO=y
# CONFIG_HW_RANDOM_XIPHERA is not set
# CONFIG_APPLICOM is not set
# CONFIG_MWAVE is not set
CONFIG_DEVMEM=y
CONFIG_NVRAM=y
CONFIG_DEVPORT=y
CONFIG_HPET=y
CONFIG_HPET_MMAP=y
# CONFIG_HPET_MMAP_DEFAULT is not set
CONFIG_HANGCHECK_TIMER=m
CONFIG_UV_MMTIMER=m
CONFIG_TCG_TPM=y
CONFIG_HW_RANDOM_TPM=y
CONFIG_TCG_TIS_CORE=y
CONFIG_TCG_TIS=y
# CONFIG_TCG_TIS_SPI is not set
# CONFIG_TCG_TIS_I2C_CR50 is not set
CONFIG_TCG_TIS_I2C_ATMEL=m
CONFIG_TCG_TIS_I2C_INFINEON=m
CONFIG_TCG_TIS_I2C_NUVOTON=m
CONFIG_TCG_NSC=m
CONFIG_TCG_ATMEL=m
CONFIG_TCG_INFINEON=m
CONFIG_TCG_CRB=y
# CONFIG_TCG_VTPM_PROXY is not set
CONFIG_TCG_TIS_ST33ZP24=m
CONFIG_TCG_TIS_ST33ZP24_I2C=m
# CONFIG_TCG_TIS_ST33ZP24_SPI is not set
CONFIG_TELCLOCK=m
# CONFIG_XILLYBUS is not set
# CONFIG_XILLYUSB is not set
# CONFIG_RANDOM_TRUST_CPU is not set
# CONFIG_RANDOM_TRUST_BOOTLOADER is not set
# end of Character devices

#
# I2C support
#
CONFIG_I2C=y
CONFIG_ACPI_I2C_OPREGION=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
CONFIG_I2C_CHARDEV=m
CONFIG_I2C_MUX=m

#
# Multiplexer I2C Chip support
#
# CONFIG_I2C_MUX_GPIO is not set
# CONFIG_I2C_MUX_LTC4306 is not set
# CONFIG_I2C_MUX_PCA9541 is not set
# CONFIG_I2C_MUX_PCA954x is not set
# CONFIG_I2C_MUX_REG is not set
CONFIG_I2C_MUX_MLXCPLD=m
# end of Multiplexer I2C Chip support

CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_SMBUS=y
CONFIG_I2C_ALGOBIT=y
CONFIG_I2C_ALGOPCA=m

#
# I2C Hardware Bus support
#

#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
CONFIG_I2C_AMD756=m
CONFIG_I2C_AMD756_S4882=m
CONFIG_I2C_AMD8111=m
# CONFIG_I2C_AMD_MP2 is not set
CONFIG_I2C_I801=y
CONFIG_I2C_ISCH=m
CONFIG_I2C_ISMT=m
CONFIG_I2C_PIIX4=m
CONFIG_I2C_NFORCE2=m
CONFIG_I2C_NFORCE2_S4985=m
# CONFIG_I2C_NVIDIA_GPU is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
CONFIG_I2C_SIS96X=m
CONFIG_I2C_VIA=m
CONFIG_I2C_VIAPRO=m

#
# ACPI drivers
#
CONFIG_I2C_SCMI=m

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_CBUS_GPIO is not set
CONFIG_I2C_DESIGNWARE_CORE=m
# CONFIG_I2C_DESIGNWARE_SLAVE is not set
CONFIG_I2C_DESIGNWARE_PLATFORM=m
CONFIG_I2C_DESIGNWARE_BAYTRAIL=y
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_EMEV2 is not set
# CONFIG_I2C_GPIO is not set
# CONFIG_I2C_OCORES is not set
CONFIG_I2C_PCA_PLATFORM=m
CONFIG_I2C_SIMTEC=m
# CONFIG_I2C_XILINX is not set

#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_DIOLAN_U2C is not set
# CONFIG_I2C_CP2615 is not set
CONFIG_I2C_PARPORT=m
# CONFIG_I2C_ROBOTFUZZ_OSIF is not set
# CONFIG_I2C_TAOS_EVM is not set
# CONFIG_I2C_TINY_USB is not set

#
# Other I2C/SMBus bus drivers
#
CONFIG_I2C_MLXCPLD=m
# CONFIG_I2C_VIRTIO is not set
# end of I2C Hardware Bus support

CONFIG_I2C_STUB=m
# CONFIG_I2C_SLAVE is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# end of I2C support

# CONFIG_I3C is not set
CONFIG_SPI=y
# CONFIG_SPI_DEBUG is not set
CONFIG_SPI_MASTER=y
# CONFIG_SPI_MEM is not set

#
# SPI Master Controller Drivers
#
# CONFIG_SPI_ALTERA is not set
# CONFIG_SPI_AXI_SPI_ENGINE is not set
# CONFIG_SPI_BITBANG is not set
# CONFIG_SPI_BUTTERFLY is not set
# CONFIG_SPI_CADENCE is not set
# CONFIG_SPI_DESIGNWARE is not set
# CONFIG_SPI_NXP_FLEXSPI is not set
# CONFIG_SPI_GPIO is not set
# CONFIG_SPI_LM70_LLP is not set
# CONFIG_SPI_LANTIQ_SSC is not set
# CONFIG_SPI_OC_TINY is not set
# CONFIG_SPI_PXA2XX is not set
# CONFIG_SPI_ROCKCHIP is not set
# CONFIG_SPI_SC18IS602 is not set
# CONFIG_SPI_SIFIVE is not set
# CONFIG_SPI_MXIC is not set
# CONFIG_SPI_XCOMM is not set
# CONFIG_SPI_XILINX is not set
# CONFIG_SPI_ZYNQMP_GQSPI is not set
# CONFIG_SPI_AMD is not set

#
# SPI Multiplexer support
#
# CONFIG_SPI_MUX is not set

#
# SPI Protocol Masters
#
# CONFIG_SPI_SPIDEV is not set
# CONFIG_SPI_LOOPBACK_TEST is not set
# CONFIG_SPI_TLE62X0 is not set
# CONFIG_SPI_SLAVE is not set
CONFIG_SPI_DYNAMIC=y
# CONFIG_SPMI is not set
# CONFIG_HSI is not set
CONFIG_PPS=y
# CONFIG_PPS_DEBUG is not set

#
# PPS clients support
#
# CONFIG_PPS_CLIENT_KTIMER is not set
CONFIG_PPS_CLIENT_LDISC=m
CONFIG_PPS_CLIENT_PARPORT=m
CONFIG_PPS_CLIENT_GPIO=m

#
# PPS generators support
#

#
# PTP clock support
#
CONFIG_PTP_1588_CLOCK=y
CONFIG_PTP_1588_CLOCK_OPTIONAL=y
# CONFIG_DP83640_PHY is not set
# CONFIG_PTP_1588_CLOCK_INES is not set
CONFIG_PTP_1588_CLOCK_KVM=m
# CONFIG_PTP_1588_CLOCK_IDT82P33 is not set
# CONFIG_PTP_1588_CLOCK_IDTCM is not set
# CONFIG_PTP_1588_CLOCK_VMW is not set
# end of PTP clock support

CONFIG_PINCTRL=y
CONFIG_PINMUX=y
CONFIG_PINCONF=y
CONFIG_GENERIC_PINCONF=y
# CONFIG_DEBUG_PINCTRL is not set
CONFIG_PINCTRL_AMD=m
# CONFIG_PINCTRL_MCP23S08 is not set
# CONFIG_PINCTRL_SX150X is not set

#
# Intel pinctrl drivers
#
CONFIG_PINCTRL_BAYTRAIL=y
# CONFIG_PINCTRL_CHERRYVIEW is not set
# CONFIG_PINCTRL_LYNXPOINT is not set
CONFIG_PINCTRL_INTEL=y
# CONFIG_PINCTRL_ALDERLAKE is not set
CONFIG_PINCTRL_BROXTON=m
CONFIG_PINCTRL_CANNONLAKE=m
CONFIG_PINCTRL_CEDARFORK=m
CONFIG_PINCTRL_DENVERTON=m
# CONFIG_PINCTRL_ELKHARTLAKE is not set
# CONFIG_PINCTRL_EMMITSBURG is not set
CONFIG_PINCTRL_GEMINILAKE=m
# CONFIG_PINCTRL_ICELAKE is not set
# CONFIG_PINCTRL_JASPERLAKE is not set
# CONFIG_PINCTRL_LAKEFIELD is not set
CONFIG_PINCTRL_LEWISBURG=m
CONFIG_PINCTRL_SUNRISEPOINT=m
# CONFIG_PINCTRL_TIGERLAKE is not set
# end of Intel pinctrl drivers

#
# Renesas pinctrl drivers
#
# end of Renesas pinctrl drivers

CONFIG_GPIOLIB=y
CONFIG_GPIOLIB_FASTPATH_LIMIT=512
CONFIG_GPIO_ACPI=y
CONFIG_GPIOLIB_IRQCHIP=y
# CONFIG_DEBUG_GPIO is not set
CONFIG_GPIO_CDEV=y
CONFIG_GPIO_CDEV_V1=y
CONFIG_GPIO_GENERIC=m

#
# Memory mapped GPIO drivers
#
CONFIG_GPIO_AMDPT=m
# CONFIG_GPIO_DWAPB is not set
# CONFIG_GPIO_EXAR is not set
# CONFIG_GPIO_GENERIC_PLATFORM is not set
CONFIG_GPIO_ICH=m
# CONFIG_GPIO_MB86S7X is not set
# CONFIG_GPIO_VX855 is not set
# CONFIG_GPIO_AMD_FCH is not set
# end of Memory mapped GPIO drivers

#
# Port-mapped I/O GPIO drivers
#
# CONFIG_GPIO_F7188X is not set
# CONFIG_GPIO_IT87 is not set
# CONFIG_GPIO_SCH is not set
# CONFIG_GPIO_SCH311X is not set
# CONFIG_GPIO_WINBOND is not set
# CONFIG_GPIO_WS16C48 is not set
# end of Port-mapped I/O GPIO drivers

#
# I2C GPIO expanders
#
# CONFIG_GPIO_ADP5588 is not set
# CONFIG_GPIO_MAX7300 is not set
# CONFIG_GPIO_MAX732X is not set
# CONFIG_GPIO_PCA953X is not set
# CONFIG_GPIO_PCA9570 is not set
# CONFIG_GPIO_PCF857X is not set
# CONFIG_GPIO_TPIC2810 is not set
# end of I2C GPIO expanders

#
# MFD GPIO expanders
#
# end of MFD GPIO expanders

#
# PCI GPIO expanders
#
# CONFIG_GPIO_AMD8111 is not set
# CONFIG_GPIO_BT8XX is not set
# CONFIG_GPIO_ML_IOH is not set
# CONFIG_GPIO_PCI_IDIO_16 is not set
# CONFIG_GPIO_PCIE_IDIO_24 is not set
# CONFIG_GPIO_RDC321X is not set
# end of PCI GPIO expanders

#
# SPI GPIO expanders
#
# CONFIG_GPIO_MAX3191X is not set
# CONFIG_GPIO_MAX7301 is not set
# CONFIG_GPIO_MC33880 is not set
# CONFIG_GPIO_PISOSR is not set
# CONFIG_GPIO_XRA1403 is not set
# end of SPI GPIO expanders

#
# USB GPIO expanders
#
# end of USB GPIO expanders

#
# Virtual GPIO drivers
#
# CONFIG_GPIO_AGGREGATOR is not set
# CONFIG_GPIO_MOCKUP is not set
# CONFIG_GPIO_VIRTIO is not set
# end of Virtual GPIO drivers

# CONFIG_W1 is not set
CONFIG_POWER_RESET=y
# CONFIG_POWER_RESET_RESTART is not set
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
CONFIG_POWER_SUPPLY_HWMON=y
# CONFIG_PDA_POWER is not set
# CONFIG_TEST_POWER is not set
# CONFIG_CHARGER_ADP5061 is not set
# CONFIG_BATTERY_CW2015 is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_CHARGER_SBS is not set
# CONFIG_MANAGER_SBS is not set
# CONFIG_BATTERY_BQ27XXX is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_GPIO is not set
# CONFIG_CHARGER_LT3651 is not set
# CONFIG_CHARGER_LTC4162L is not set
# CONFIG_CHARGER_BQ2415X is not set
# CONFIG_CHARGER_BQ24257 is not set
# CONFIG_CHARGER_BQ24735 is not set
# CONFIG_CHARGER_BQ2515X is not set
# CONFIG_CHARGER_BQ25890 is not set
# CONFIG_CHARGER_BQ25980 is not set
# CONFIG_CHARGER_BQ256XX is not set
# CONFIG_BATTERY_GAUGE_LTC2941 is not set
# CONFIG_BATTERY_GOLDFISH is not set
# CONFIG_BATTERY_RT5033 is not set
# CONFIG_CHARGER_RT9455 is not set
# CONFIG_CHARGER_BD99954 is not set
CONFIG_HWMON=y
CONFIG_HWMON_VID=m
# CONFIG_HWMON_DEBUG_CHIP is not set

#
# Native drivers
#
CONFIG_SENSORS_ABITUGURU=m
CONFIG_SENSORS_ABITUGURU3=m
# CONFIG_SENSORS_AD7314 is not set
CONFIG_SENSORS_AD7414=m
CONFIG_SENSORS_AD7418=m
CONFIG_SENSORS_ADM1021=m
CONFIG_SENSORS_ADM1025=m
CONFIG_SENSORS_ADM1026=m
CONFIG_SENSORS_ADM1029=m
CONFIG_SENSORS_ADM1031=m
# CONFIG_SENSORS_ADM1177 is not set
CONFIG_SENSORS_ADM9240=m
CONFIG_SENSORS_ADT7X10=m
# CONFIG_SENSORS_ADT7310 is not set
CONFIG_SENSORS_ADT7410=m
CONFIG_SENSORS_ADT7411=m
CONFIG_SENSORS_ADT7462=m
CONFIG_SENSORS_ADT7470=m
CONFIG_SENSORS_ADT7475=m
# CONFIG_SENSORS_AHT10 is not set
# CONFIG_SENSORS_AQUACOMPUTER_D5NEXT is not set
# CONFIG_SENSORS_AS370 is not set
CONFIG_SENSORS_ASC7621=m
# CONFIG_SENSORS_AXI_FAN_CONTROL is not set
CONFIG_SENSORS_K8TEMP=m
CONFIG_SENSORS_K10TEMP=m
CONFIG_SENSORS_FAM15H_POWER=m
CONFIG_SENSORS_APPLESMC=m
CONFIG_SENSORS_ASB100=m
# CONFIG_SENSORS_ASPEED is not set
CONFIG_SENSORS_ATXP1=m
# CONFIG_SENSORS_CORSAIR_CPRO is not set
# CONFIG_SENSORS_CORSAIR_PSU is not set
# CONFIG_SENSORS_DRIVETEMP is not set
CONFIG_SENSORS_DS620=m
CONFIG_SENSORS_DS1621=m
CONFIG_SENSORS_DELL_SMM=m
CONFIG_SENSORS_I5K_AMB=m
CONFIG_SENSORS_F71805F=m
CONFIG_SENSORS_F71882FG=m
CONFIG_SENSORS_F75375S=m
CONFIG_SENSORS_FSCHMD=m
# CONFIG_SENSORS_FTSTEUTATES is not set
CONFIG_SENSORS_GL518SM=m
CONFIG_SENSORS_GL520SM=m
CONFIG_SENSORS_G760A=m
# CONFIG_SENSORS_G762 is not set
# CONFIG_SENSORS_HIH6130 is not set
CONFIG_SENSORS_IBMAEM=m
CONFIG_SENSORS_IBMPEX=m
CONFIG_SENSORS_I5500=m
CONFIG_SENSORS_CORETEMP=m
CONFIG_SENSORS_IT87=m
CONFIG_SENSORS_JC42=m
# CONFIG_SENSORS_POWR1220 is not set
CONFIG_SENSORS_LINEAGE=m
# CONFIG_SENSORS_LTC2945 is not set
# CONFIG_SENSORS_LTC2947_I2C is not set
# CONFIG_SENSORS_LTC2947_SPI is not set
# CONFIG_SENSORS_LTC2990 is not set
# CONFIG_SENSORS_LTC2992 is not set
CONFIG_SENSORS_LTC4151=m
CONFIG_SENSORS_LTC4215=m
# CONFIG_SENSORS_LTC4222 is not set
CONFIG_SENSORS_LTC4245=m
# CONFIG_SENSORS_LTC4260 is not set
CONFIG_SENSORS_LTC4261=m
# CONFIG_SENSORS_MAX1111 is not set
# CONFIG_SENSORS_MAX127 is not set
CONFIG_SENSORS_MAX16065=m
CONFIG_SENSORS_MAX1619=m
CONFIG_SENSORS_MAX1668=m
CONFIG_SENSORS_MAX197=m
# CONFIG_SENSORS_MAX31722 is not set
# CONFIG_SENSORS_MAX31730 is not set
# CONFIG_SENSORS_MAX6620 is not set
# CONFIG_SENSORS_MAX6621 is not set
CONFIG_SENSORS_MAX6639=m
CONFIG_SENSORS_MAX6642=m
CONFIG_SENSORS_MAX6650=m
CONFIG_SENSORS_MAX6697=m
# CONFIG_SENSORS_MAX31790 is not set
CONFIG_SENSORS_MCP3021=m
# CONFIG_SENSORS_MLXREG_FAN is not set
# CONFIG_SENSORS_TC654 is not set
# CONFIG_SENSORS_TPS23861 is not set
# CONFIG_SENSORS_MR75203 is not set
# CONFIG_SENSORS_ADCXX is not set
CONFIG_SENSORS_LM63=m
# CONFIG_SENSORS_LM70 is not set
CONFIG_SENSORS_LM73=m
CONFIG_SENSORS_LM75=m
CONFIG_SENSORS_LM77=m
CONFIG_SENSORS_LM78=m
CONFIG_SENSORS_LM80=m
CONFIG_SENSORS_LM83=m
CONFIG_SENSORS_LM85=m
CONFIG_SENSORS_LM87=m
CONFIG_SENSORS_LM90=m
CONFIG_SENSORS_LM92=m
CONFIG_SENSORS_LM93=m
CONFIG_SENSORS_LM95234=m
CONFIG_SENSORS_LM95241=m
CONFIG_SENSORS_LM95245=m
CONFIG_SENSORS_PC87360=m
CONFIG_SENSORS_PC87427=m
CONFIG_SENSORS_NTC_THERMISTOR=m
# CONFIG_SENSORS_NCT6683 is not set
CONFIG_SENSORS_NCT6775=m
# CONFIG_SENSORS_NCT7802 is not set
# CONFIG_SENSORS_NCT7904 is not set
# CONFIG_SENSORS_NPCM7XX is not set
# CONFIG_SENSORS_NZXT_KRAKEN2 is not set
CONFIG_SENSORS_PCF8591=m
CONFIG_PMBUS=m
CONFIG_SENSORS_PMBUS=m
# CONFIG_SENSORS_ADM1266 is not set
CONFIG_SENSORS_ADM1275=m
# CONFIG_SENSORS_BEL_PFE is not set
# CONFIG_SENSORS_BPA_RS600 is not set
# CONFIG_SENSORS_FSP_3Y is not set
# CONFIG_SENSORS_IBM_CFFPS is not set
# CONFIG_SENSORS_DPS920AB is not set
# CONFIG_SENSORS_INSPUR_IPSPS is not set
# CONFIG_SENSORS_IR35221 is not set
# CONFIG_SENSORS_IR36021 is not set
# CONFIG_SENSORS_IR38064 is not set
# CONFIG_SENSORS_IRPS5401 is not set
# CONFIG_SENSORS_ISL68137 is not set
CONFIG_SENSORS_LM25066=m
CONFIG_SENSORS_LTC2978=m
# CONFIG_SENSORS_LTC3815 is not set
# CONFIG_SENSORS_MAX15301 is not set
CONFIG_SENSORS_MAX16064=m
# CONFIG_SENSORS_MAX16601 is not set
# CONFIG_SENSORS_MAX20730 is not set
# CONFIG_SENSORS_MAX20751 is not set
# CONFIG_SENSORS_MAX31785 is not set
CONFIG_SENSORS_MAX34440=m
CONFIG_SENSORS_MAX8688=m
# CONFIG_SENSORS_MP2888 is not set
# CONFIG_SENSORS_MP2975 is not set
# CONFIG_SENSORS_PIM4328 is not set
# CONFIG_SENSORS_PM6764TR is not set
# CONFIG_SENSORS_PXE1610 is not set
# CONFIG_SENSORS_Q54SJ108A2 is not set
# CONFIG_SENSORS_STPDDC60 is not set
# CONFIG_SENSORS_TPS40422 is not set
# CONFIG_SENSORS_TPS53679 is not set
CONFIG_SENSORS_UCD9000=m
CONFIG_SENSORS_UCD9200=m
# CONFIG_SENSORS_XDPE122 is not set
CONFIG_SENSORS_ZL6100=m
# CONFIG_SENSORS_SBTSI is not set
# CONFIG_SENSORS_SBRMI is not set
CONFIG_SENSORS_SHT15=m
CONFIG_SENSORS_SHT21=m
# CONFIG_SENSORS_SHT3x is not set
# CONFIG_SENSORS_SHT4x is not set
# CONFIG_SENSORS_SHTC1 is not set
CONFIG_SENSORS_SIS5595=m
CONFIG_SENSORS_DME1737=m
CONFIG_SENSORS_EMC1403=m
# CONFIG_SENSORS_EMC2103 is not set
CONFIG_SENSORS_EMC6W201=m
CONFIG_SENSORS_SMSC47M1=m
CONFIG_SENSORS_SMSC47M192=m
CONFIG_SENSORS_SMSC47B397=m
CONFIG_SENSORS_SCH56XX_COMMON=m
CONFIG_SENSORS_SCH5627=m
CONFIG_SENSORS_SCH5636=m
# CONFIG_SENSORS_STTS751 is not set
# CONFIG_SENSORS_SMM665 is not set
# CONFIG_SENSORS_ADC128D818 is not set
CONFIG_SENSORS_ADS7828=m
# CONFIG_SENSORS_ADS7871 is not set
CONFIG_SENSORS_AMC6821=m
CONFIG_SENSORS_INA209=m
CONFIG_SENSORS_INA2XX=m
# CONFIG_SENSORS_INA3221 is not set
# CONFIG_SENSORS_TC74 is not set
CONFIG_SENSORS_THMC50=m
CONFIG_SENSORS_TMP102=m
# CONFIG_SENSORS_TMP103 is not set
# CONFIG_SENSORS_TMP108 is not set
CONFIG_SENSORS_TMP401=m
CONFIG_SENSORS_TMP421=m
# CONFIG_SENSORS_TMP513 is not set
CONFIG_SENSORS_VIA_CPUTEMP=m
CONFIG_SENSORS_VIA686A=m
CONFIG_SENSORS_VT1211=m
CONFIG_SENSORS_VT8231=m
# CONFIG_SENSORS_W83773G is not set
CONFIG_SENSORS_W83781D=m
CONFIG_SENSORS_W83791D=m
CONFIG_SENSORS_W83792D=m
CONFIG_SENSORS_W83793=m
CONFIG_SENSORS_W83795=m
# CONFIG_SENSORS_W83795_FANCTRL is not set
CONFIG_SENSORS_W83L785TS=m
CONFIG_SENSORS_W83L786NG=m
CONFIG_SENSORS_W83627HF=m
CONFIG_SENSORS_W83627EHF=m
# CONFIG_SENSORS_XGENE is not set

#
# ACPI drivers
#
CONFIG_SENSORS_ACPI_POWER=m
CONFIG_SENSORS_ATK0110=m
CONFIG_THERMAL=y
# CONFIG_THERMAL_NETLINK is not set
# CONFIG_THERMAL_STATISTICS is not set
CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0
CONFIG_THERMAL_HWMON=y
CONFIG_THERMAL_WRITABLE_TRIPS=y
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
CONFIG_THERMAL_GOV_FAIR_SHARE=y
CONFIG_THERMAL_GOV_STEP_WISE=y
CONFIG_THERMAL_GOV_BANG_BANG=y
CONFIG_THERMAL_GOV_USER_SPACE=y
# CONFIG_THERMAL_EMULATION is not set

#
# Intel thermal drivers
#
CONFIG_INTEL_POWERCLAMP=m
CONFIG_X86_THERMAL_VECTOR=y
CONFIG_X86_PKG_TEMP_THERMAL=m
CONFIG_INTEL_SOC_DTS_IOSF_CORE=m
# CONFIG_INTEL_SOC_DTS_THERMAL is not set

#
# ACPI INT340X thermal drivers
#
CONFIG_INT340X_THERMAL=m
CONFIG_ACPI_THERMAL_REL=m
# CONFIG_INT3406_THERMAL is not set
CONFIG_PROC_THERMAL_MMIO_RAPL=m
# end of ACPI INT340X thermal drivers

CONFIG_INTEL_PCH_THERMAL=m
# CONFIG_INTEL_TCC_COOLING is not set
# CONFIG_INTEL_MENLOW is not set
# end of Intel thermal drivers

CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
# CONFIG_WATCHDOG_NOWAYOUT is not set
CONFIG_WATCHDOG_HANDLE_BOOT_ENABLED=y
CONFIG_WATCHDOG_OPEN_TIMEOUT=0
CONFIG_WATCHDOG_SYSFS=y
# CONFIG_WATCHDOG_HRTIMER_PRETIMEOUT is not set

#
# Watchdog Pretimeout Governors
#
# CONFIG_WATCHDOG_PRETIMEOUT_GOV is not set

#
# Watchdog Device Drivers
#
CONFIG_SOFT_WATCHDOG=m
CONFIG_WDAT_WDT=m
# CONFIG_XILINX_WATCHDOG is not set
# CONFIG_ZIIRAVE_WATCHDOG is not set
# CONFIG_MLX_WDT is not set
# CONFIG_CADENCE_WATCHDOG is not set
# CONFIG_DW_WATCHDOG is not set
# CONFIG_MAX63XX_WATCHDOG is not set
# CONFIG_ACQUIRE_WDT is not set
# CONFIG_ADVANTECH_WDT is not set
CONFIG_ALIM1535_WDT=m
CONFIG_ALIM7101_WDT=m
# CONFIG_EBC_C384_WDT is not set
CONFIG_F71808E_WDT=m
CONFIG_SP5100_TCO=m
CONFIG_SBC_FITPC2_WATCHDOG=m
# CONFIG_EUROTECH_WDT is not set
CONFIG_IB700_WDT=m
CONFIG_IBMASR=m
# CONFIG_WAFER_WDT is not set
CONFIG_I6300ESB_WDT=y
CONFIG_IE6XX_WDT=m
CONFIG_ITCO_WDT=y
CONFIG_ITCO_VENDOR_SUPPORT=y
CONFIG_IT8712F_WDT=m
CONFIG_IT87_WDT=m
CONFIG_HP_WATCHDOG=m
CONFIG_HPWDT_NMI_DECODING=y
# CONFIG_SC1200_WDT is not set
# CONFIG_PC87413_WDT is not set
CONFIG_NV_TCO=m
# CONFIG_60XX_WDT is not set
# CONFIG_CPU5_WDT is not set
CONFIG_SMSC_SCH311X_WDT=m
# CONFIG_SMSC37B787_WDT is not set
# CONFIG_TQMX86_WDT is not set
CONFIG_VIA_WDT=m
CONFIG_W83627HF_WDT=m
CONFIG_W83877F_WDT=m
CONFIG_W83977F_WDT=m
CONFIG_MACHZ_WDT=m
# CONFIG_SBC_EPX_C3_WATCHDOG is not set
CONFIG_INTEL_MEI_WDT=m
# CONFIG_NI903X_WDT is not set
# CONFIG_NIC7018_WDT is not set
# CONFIG_MEN_A21_WDT is not set

#
# PCI-based Watchdog Cards
#
CONFIG_PCIPCWATCHDOG=m
CONFIG_WDTPCI=m

#
# USB-based Watchdog Cards
#
# CONFIG_USBPCWATCHDOG is not set
CONFIG_SSB_POSSIBLE=y
# CONFIG_SSB is not set
CONFIG_BCMA_POSSIBLE=y
CONFIG_BCMA=m
CONFIG_BCMA_HOST_PCI_POSSIBLE=y
CONFIG_BCMA_HOST_PCI=y
# CONFIG_BCMA_HOST_SOC is not set
CONFIG_BCMA_DRIVER_PCI=y
CONFIG_BCMA_DRIVER_GMAC_CMN=y
CONFIG_BCMA_DRIVER_GPIO=y
# CONFIG_BCMA_DEBUG is not set

#
# Multifunction device drivers
#
CONFIG_MFD_CORE=y
# CONFIG_MFD_AS3711 is not set
# CONFIG_PMIC_ADP5520 is not set
# CONFIG_MFD_AAT2870_CORE is not set
# CONFIG_MFD_BCM590XX is not set
# CONFIG_MFD_BD9571MWV is not set
# CONFIG_MFD_AXP20X_I2C is not set
# CONFIG_MFD_MADERA is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_DA9052_SPI is not set
# CONFIG_MFD_DA9052_I2C is not set
# CONFIG_MFD_DA9055 is not set
# CONFIG_MFD_DA9062 is not set
# CONFIG_MFD_DA9063 is not set
# CONFIG_MFD_DA9150 is not set
# CONFIG_MFD_DLN2 is not set
# CONFIG_MFD_MC13XXX_SPI is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_MFD_MP2629 is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_HTC_I2CPLD is not set
# CONFIG_MFD_INTEL_QUARK_I2C_GPIO is not set
CONFIG_LPC_ICH=y
CONFIG_LPC_SCH=m
# CONFIG_INTEL_SOC_PMIC_CHTDC_TI is not set
CONFIG_MFD_INTEL_LPSS=y
CONFIG_MFD_INTEL_LPSS_ACPI=y
CONFIG_MFD_INTEL_LPSS_PCI=y
# CONFIG_MFD_INTEL_PMC_BXT is not set
# CONFIG_MFD_INTEL_PMT is not set
# CONFIG_MFD_IQS62X is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_KEMPLD is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
# CONFIG_MFD_88PM860X is not set
# CONFIG_MFD_MAX14577 is not set
# CONFIG_MFD_MAX77693 is not set
# CONFIG_MFD_MAX77843 is not set
# CONFIG_MFD_MAX8907 is not set
# CONFIG_MFD_MAX8925 is not set
# CONFIG_MFD_MAX8997 is not set
# CONFIG_MFD_MAX8998 is not set
# CONFIG_MFD_MT6360 is not set
# CONFIG_MFD_MT6397 is not set
# CONFIG_MFD_MENF21BMC is not set
# CONFIG_EZX_PCAP is not set
# CONFIG_MFD_VIPERBOARD is not set
# CONFIG_MFD_RETU is not set
# CONFIG_MFD_PCF50633 is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_RT4831 is not set
# CONFIG_MFD_RT5033 is not set
# CONFIG_MFD_RC5T583 is not set
# CONFIG_MFD_SI476X_CORE is not set
CONFIG_MFD_SM501=m
CONFIG_MFD_SM501_GPIO=y
# CONFIG_MFD_SKY81452 is not set
# CONFIG_MFD_SYSCON is not set
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_MFD_LP3943 is not set
# CONFIG_MFD_LP8788 is not set
# CONFIG_MFD_TI_LMU is not set
# CONFIG_MFD_PALMAS is not set
# CONFIG_TPS6105X is not set
# CONFIG_TPS65010 is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65086 is not set
# CONFIG_MFD_TPS65090 is not set
# CONFIG_MFD_TI_LP873X is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS65910 is not set
# CONFIG_MFD_TPS65912_I2C is not set
# CONFIG_MFD_TPS65912_SPI is not set
# CONFIG_TWL4030_CORE is not set
# CONFIG_TWL6040_CORE is not set
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_MFD_TQMX86 is not set
CONFIG_MFD_VX855=m
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_ARIZONA_SPI is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM831X_I2C is not set
# CONFIG_MFD_WM831X_SPI is not set
# CONFIG_MFD_WM8350_I2C is not set
# CONFIG_MFD_WM8994 is not set
# CONFIG_MFD_ATC260X_I2C is not set
# CONFIG_MFD_INTEL_M10_BMC is not set
# end of Multifunction device drivers

# CONFIG_REGULATOR is not set
CONFIG_RC_CORE=m
CONFIG_RC_MAP=m
CONFIG_LIRC=y
CONFIG_RC_DECODERS=y
CONFIG_IR_NEC_DECODER=m
CONFIG_IR_RC5_DECODER=m
CONFIG_IR_RC6_DECODER=m
CONFIG_IR_JVC_DECODER=m
CONFIG_IR_SONY_DECODER=m
CONFIG_IR_SANYO_DECODER=m
# CONFIG_IR_SHARP_DECODER is not set
CONFIG_IR_MCE_KBD_DECODER=m
# CONFIG_IR_XMP_DECODER is not set
CONFIG_IR_IMON_DECODER=m
# CONFIG_IR_RCMM_DECODER is not set
CONFIG_RC_DEVICES=y
# CONFIG_RC_ATI_REMOTE is not set
CONFIG_IR_ENE=m
# CONFIG_IR_IMON is not set
# CONFIG_IR_IMON_RAW is not set
# CONFIG_IR_MCEUSB is not set
CONFIG_IR_ITE_CIR=m
CONFIG_IR_FINTEK=m
CONFIG_IR_NUVOTON=m
# CONFIG_IR_REDRAT3 is not set
# CONFIG_IR_STREAMZAP is not set
CONFIG_IR_WINBOND_CIR=m
# CONFIG_IR_IGORPLUGUSB is not set
# CONFIG_IR_IGUANA is not set
# CONFIG_IR_TTUSBIR is not set
# CONFIG_RC_LOOPBACK is not set
CONFIG_IR_SERIAL=m
CONFIG_IR_SERIAL_TRANSMITTER=y
# CONFIG_RC_XBOX_DVD is not set
# CONFIG_IR_TOY is not set

#
# CEC support
#
CONFIG_MEDIA_CEC_SUPPORT=y
# CONFIG_CEC_CH7322 is not set
# CONFIG_CEC_SECO is not set
# CONFIG_USB_PULSE8_CEC is not set
# CONFIG_USB_RAINSHADOW_CEC is not set
# end of CEC support

CONFIG_MEDIA_SUPPORT=m
# CONFIG_MEDIA_SUPPORT_FILTER is not set
# CONFIG_MEDIA_SUBDRV_AUTOSELECT is not set

#
# Media device types
#
CONFIG_MEDIA_CAMERA_SUPPORT=y
CONFIG_MEDIA_ANALOG_TV_SUPPORT=y
CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y
CONFIG_MEDIA_RADIO_SUPPORT=y
CONFIG_MEDIA_SDR_SUPPORT=y
CONFIG_MEDIA_PLATFORM_SUPPORT=y
CONFIG_MEDIA_TEST_SUPPORT=y
# end of Media device types

#
# Media core support
#
CONFIG_VIDEO_DEV=m
CONFIG_MEDIA_CONTROLLER=y
CONFIG_DVB_CORE=m
# end of Media core support

#
# Video4Linux options
#
CONFIG_VIDEO_V4L2=m
CONFIG_VIDEO_V4L2_I2C=y
CONFIG_VIDEO_V4L2_SUBDEV_API=y
# CONFIG_VIDEO_ADV_DEBUG is not set
# CONFIG_VIDEO_FIXED_MINOR_RANGES is not set
# end of Video4Linux options

#
# Media controller options
#
# CONFIG_MEDIA_CONTROLLER_DVB is not set
# end of Media controller options

#
# Digital TV options
#
# CONFIG_DVB_MMAP is not set
CONFIG_DVB_NET=y
CONFIG_DVB_MAX_ADAPTERS=16
CONFIG_DVB_DYNAMIC_MINORS=y
# CONFIG_DVB_DEMUX_SECTION_LOSS_LOG is not set
# CONFIG_DVB_ULE_DEBUG is not set
# end of Digital TV options

#
# Media drivers
#
# CONFIG_MEDIA_USB_SUPPORT is not set
# CONFIG_MEDIA_PCI_SUPPORT is not set
CONFIG_RADIO_ADAPTERS=y
# CONFIG_RADIO_SI470X is not set
# CONFIG_RADIO_SI4713 is not set
# CONFIG_USB_MR800 is not set
# CONFIG_USB_DSBR is not set
# CONFIG_RADIO_MAXIRADIO is not set
# CONFIG_RADIO_SHARK is not set
# CONFIG_RADIO_SHARK2 is not set
# CONFIG_USB_KEENE is not set
# CONFIG_USB_RAREMONO is not set
# CONFIG_USB_MA901 is not set
# CONFIG_RADIO_TEA5764 is not set
# CONFIG_RADIO_SAA7706H is not set
# CONFIG_RADIO_TEF6862 is not set
# CONFIG_RADIO_WL1273 is not set
CONFIG_VIDEOBUF2_CORE=m
CONFIG_VIDEOBUF2_V4L2=m
CONFIG_VIDEOBUF2_MEMOPS=m
CONFIG_VIDEOBUF2_VMALLOC=m
# CONFIG_V4L_PLATFORM_DRIVERS is not set
# CONFIG_V4L_MEM2MEM_DRIVERS is not set
# CONFIG_DVB_PLATFORM_DRIVERS is not set
# CONFIG_SDR_PLATFORM_DRIVERS is not set

#
# MMC/SDIO DVB adapters
#
# CONFIG_SMS_SDIO_DRV is not set
# CONFIG_V4L_TEST_DRIVERS is not set
# CONFIG_DVB_TEST_DRIVERS is not set

#
# FireWire (IEEE 1394) Adapters
#
# CONFIG_DVB_FIREDTV is not set
# end of Media drivers

#
# Media ancillary drivers
#
CONFIG_MEDIA_ATTACH=y
CONFIG_VIDEO_IR_I2C=m

#
# Audio decoders, processors and mixers
#
# CONFIG_VIDEO_TVAUDIO is not set
# CONFIG_VIDEO_TDA7432 is not set
# CONFIG_VIDEO_TDA9840 is not set
# CONFIG_VIDEO_TEA6415C is not set
# CONFIG_VIDEO_TEA6420 is not set
# CONFIG_VIDEO_MSP3400 is not set
# CONFIG_VIDEO_CS3308 is not set
# CONFIG_VIDEO_CS5345 is not set
# CONFIG_VIDEO_CS53L32A is not set
# CONFIG_VIDEO_TLV320AIC23B is not set
# CONFIG_VIDEO_UDA1342 is not set
# CONFIG_VIDEO_WM8775 is not set
# CONFIG_VIDEO_WM8739 is not set
# CONFIG_VIDEO_VP27SMPX is not set
# CONFIG_VIDEO_SONY_BTF_MPX is not set
# end of Audio decoders, processors and mixers

#
# RDS decoders
#
# CONFIG_VIDEO_SAA6588 is not set
# end of RDS decoders

#
# Video decoders
#
# CONFIG_VIDEO_ADV7180 is not set
# CONFIG_VIDEO_ADV7183 is not set
# CONFIG_VIDEO_ADV7604 is not set
# CONFIG_VIDEO_ADV7842 is not set
# CONFIG_VIDEO_BT819 is not set
# CONFIG_VIDEO_BT856 is not set
# CONFIG_VIDEO_BT866 is not set
# CONFIG_VIDEO_KS0127 is not set
# CONFIG_VIDEO_ML86V7667 is not set
# CONFIG_VIDEO_SAA7110 is not set
# CONFIG_VIDEO_SAA711X is not set
# CONFIG_VIDEO_TC358743 is not set
# CONFIG_VIDEO_TVP514X is not set
# CONFIG_VIDEO_TVP5150 is not set
# CONFIG_VIDEO_TVP7002 is not set
# CONFIG_VIDEO_TW2804 is not set
# CONFIG_VIDEO_TW9903 is not set
# CONFIG_VIDEO_TW9906 is not set
# CONFIG_VIDEO_TW9910 is not set
# CONFIG_VIDEO_VPX3220 is not set

#
# Video and audio decoders
#
# CONFIG_VIDEO_SAA717X is not set
# CONFIG_VIDEO_CX25840 is not set
# end of Video decoders

#
# Video encoders
#
# CONFIG_VIDEO_SAA7127 is not set
# CONFIG_VIDEO_SAA7185 is not set
# CONFIG_VIDEO_ADV7170 is not set
# CONFIG_VIDEO_ADV7175 is not set
# CONFIG_VIDEO_ADV7343 is not set
# CONFIG_VIDEO_ADV7393 is not set
# CONFIG_VIDEO_ADV7511 is not set
# CONFIG_VIDEO_AD9389B is not set
# CONFIG_VIDEO_AK881X is not set
# CONFIG_VIDEO_THS8200 is not set
# end of Video encoders

#
# Video improvement chips
#
# CONFIG_VIDEO_UPD64031A is not set
# CONFIG_VIDEO_UPD64083 is not set
# end of Video improvement chips

#
# Audio/Video compression chips
#
# CONFIG_VIDEO_SAA6752HS is not set
# end of Audio/Video compression chips

#
# SDR tuner chips
#
# CONFIG_SDR_MAX2175 is not set
# end of SDR tuner chips

#
# Miscellaneous helper chips
#
# CONFIG_VIDEO_THS7303 is not set
# CONFIG_VIDEO_M52790 is not set
# CONFIG_VIDEO_I2C is not set
# CONFIG_VIDEO_ST_MIPID02 is not set
# end of Miscellaneous helper chips

#
# Camera sensor devices
#
# CONFIG_VIDEO_HI556 is not set
# CONFIG_VIDEO_HI846 is not set
# CONFIG_VIDEO_IMX208 is not set
# CONFIG_VIDEO_IMX214 is not set
# CONFIG_VIDEO_IMX219 is not set
# CONFIG_VIDEO_IMX258 is not set
# CONFIG_VIDEO_IMX274 is not set
# CONFIG_VIDEO_IMX290 is not set
# CONFIG_VIDEO_IMX319 is not set
# CONFIG_VIDEO_IMX355 is not set
# CONFIG_VIDEO_OV02A10 is not set
# CONFIG_VIDEO_OV2640 is not set
# CONFIG_VIDEO_OV2659 is not set
# CONFIG_VIDEO_OV2680 is not set
# CONFIG_VIDEO_OV2685 is not set
# CONFIG_VIDEO_OV2740 is not set
# CONFIG_VIDEO_OV5647 is not set
# CONFIG_VIDEO_OV5648 is not set
# CONFIG_VIDEO_OV6650 is not set
# CONFIG_VIDEO_OV5670 is not set
# CONFIG_VIDEO_OV5675 is not set
# CONFIG_VIDEO_OV5695 is not set
# CONFIG_VIDEO_OV7251 is not set
# CONFIG_VIDEO_OV772X is not set
# CONFIG_VIDEO_OV7640 is not set
# CONFIG_VIDEO_OV7670 is not set
# CONFIG_VIDEO_OV7740 is not set
# CONFIG_VIDEO_OV8856 is not set
# CONFIG_VIDEO_OV8865 is not set
# CONFIG_VIDEO_OV9640 is not set
# CONFIG_VIDEO_OV9650 is not set
# CONFIG_VIDEO_OV9734 is not set
# CONFIG_VIDEO_OV13858 is not set
# CONFIG_VIDEO_OV13B10 is not set
# CONFIG_VIDEO_VS6624 is not set
# CONFIG_VIDEO_MT9M001 is not set
# CONFIG_VIDEO_MT9M032 is not set
# CONFIG_VIDEO_MT9M111 is not set
# CONFIG_VIDEO_MT9P031 is not set
# CONFIG_VIDEO_MT9T001 is not set
# CONFIG_VIDEO_MT9T112 is not set
# CONFIG_VIDEO_MT9V011 is not set
# CONFIG_VIDEO_MT9V032 is not set
# CONFIG_VIDEO_MT9V111 is not set
# CONFIG_VIDEO_SR030PC30 is not set
# CONFIG_VIDEO_NOON010PC30 is not set
# CONFIG_VIDEO_M5MOLS is not set
# CONFIG_VIDEO_RDACM20 is not set
# CONFIG_VIDEO_RDACM21 is not set
# CONFIG_VIDEO_RJ54N1 is not set
# CONFIG_VIDEO_S5K6AA is not set
# CONFIG_VIDEO_S5K6A3 is not set
# CONFIG_VIDEO_S5K4ECGX is not set
# CONFIG_VIDEO_S5K5BAF is not set
# CONFIG_VIDEO_CCS is not set
# CONFIG_VIDEO_ET8EK8 is not set
# CONFIG_VIDEO_S5C73M3 is not set
# end of Camera sensor devices

#
# Lens drivers
#
# CONFIG_VIDEO_AD5820 is not set
# CONFIG_VIDEO_AK7375 is not set
# CONFIG_VIDEO_DW9714 is not set
# CONFIG_VIDEO_DW9768 is not set
# CONFIG_VIDEO_DW9807_VCM is not set
# end of Lens drivers

#
# Flash devices
#
# CONFIG_VIDEO_ADP1653 is not set
# CONFIG_VIDEO_LM3560 is not set
# CONFIG_VIDEO_LM3646 is not set
# end of Flash devices

#
# SPI helper chips
#
# CONFIG_VIDEO_GS1662 is not set
# end of SPI helper chips

#
# Media SPI Adapters
#
CONFIG_CXD2880_SPI_DRV=m
# end of Media SPI Adapters

CONFIG_MEDIA_TUNER=m

#
# Customize TV tuners
#
CONFIG_MEDIA_TUNER_SIMPLE=m
CONFIG_MEDIA_TUNER_TDA18250=m
CONFIG_MEDIA_TUNER_TDA8290=m
CONFIG_MEDIA_TUNER_TDA827X=m
CONFIG_MEDIA_TUNER_TDA18271=m
CONFIG_MEDIA_TUNER_TDA9887=m
CONFIG_MEDIA_TUNER_TEA5761=m
CONFIG_MEDIA_TUNER_TEA5767=m
CONFIG_MEDIA_TUNER_MSI001=m
CONFIG_MEDIA_TUNER_MT20XX=m
CONFIG_MEDIA_TUNER_MT2060=m
CONFIG_MEDIA_TUNER_MT2063=m
CONFIG_MEDIA_TUNER_MT2266=m
CONFIG_MEDIA_TUNER_MT2131=m
CONFIG_MEDIA_TUNER_QT1010=m
CONFIG_MEDIA_TUNER_XC2028=m
CONFIG_MEDIA_TUNER_XC5000=m
CONFIG_MEDIA_TUNER_XC4000=m
CONFIG_MEDIA_TUNER_MXL5005S=m
CONFIG_MEDIA_TUNER_MXL5007T=m
CONFIG_MEDIA_TUNER_MC44S803=m
CONFIG_MEDIA_TUNER_MAX2165=m
CONFIG_MEDIA_TUNER_TDA18218=m
CONFIG_MEDIA_TUNER_FC0011=m
CONFIG_MEDIA_TUNER_FC0012=m
CONFIG_MEDIA_TUNER_FC0013=m
CONFIG_MEDIA_TUNER_TDA18212=m
CONFIG_MEDIA_TUNER_E4000=m
CONFIG_MEDIA_TUNER_FC2580=m
CONFIG_MEDIA_TUNER_M88RS6000T=m
CONFIG_MEDIA_TUNER_TUA9001=m
CONFIG_MEDIA_TUNER_SI2157=m
CONFIG_MEDIA_TUNER_IT913X=m
CONFIG_MEDIA_TUNER_R820T=m
CONFIG_MEDIA_TUNER_MXL301RF=m
CONFIG_MEDIA_TUNER_QM1D1C0042=m
CONFIG_MEDIA_TUNER_QM1D1B0004=m
# end of Customize TV tuners

#
# Customise DVB Frontends
#

#
# Multistandard (satellite) frontends
#
CONFIG_DVB_STB0899=m
CONFIG_DVB_STB6100=m
CONFIG_DVB_STV090x=m
CONFIG_DVB_STV0910=m
CONFIG_DVB_STV6110x=m
CONFIG_DVB_STV6111=m
CONFIG_DVB_MXL5XX=m
CONFIG_DVB_M88DS3103=m

#
# Multistandard (cable + terrestrial) frontends
#
CONFIG_DVB_DRXK=m
CONFIG_DVB_TDA18271C2DD=m
CONFIG_DVB_SI2165=m
CONFIG_DVB_MN88472=m
CONFIG_DVB_MN88473=m

#
# DVB-S (satellite) frontends
#
CONFIG_DVB_CX24110=m
CONFIG_DVB_CX24123=m
CONFIG_DVB_MT312=m
CONFIG_DVB_ZL10036=m
CONFIG_DVB_ZL10039=m
CONFIG_DVB_S5H1420=m
CONFIG_DVB_STV0288=m
CONFIG_DVB_STB6000=m
CONFIG_DVB_STV0299=m
CONFIG_DVB_STV6110=m
CONFIG_DVB_STV0900=m
CONFIG_DVB_TDA8083=m
CONFIG_DVB_TDA10086=m
CONFIG_DVB_TDA8261=m
CONFIG_DVB_VES1X93=m
CONFIG_DVB_TUNER_ITD1000=m
CONFIG_DVB_TUNER_CX24113=m
CONFIG_DVB_TDA826X=m
CONFIG_DVB_TUA6100=m
CONFIG_DVB_CX24116=m
CONFIG_DVB_CX24117=m
CONFIG_DVB_CX24120=m
CONFIG_DVB_SI21XX=m
CONFIG_DVB_TS2020=m
CONFIG_DVB_DS3000=m
CONFIG_DVB_MB86A16=m
CONFIG_DVB_TDA10071=m

#
# DVB-T (terrestrial) frontends
#
CONFIG_DVB_SP887X=m
CONFIG_DVB_CX22700=m
CONFIG_DVB_CX22702=m
CONFIG_DVB_S5H1432=m
CONFIG_DVB_DRXD=m
CONFIG_DVB_L64781=m
CONFIG_DVB_TDA1004X=m
CONFIG_DVB_NXT6000=m
CONFIG_DVB_MT352=m
CONFIG_DVB_ZL10353=m
CONFIG_DVB_DIB3000MB=m
CONFIG_DVB_DIB3000MC=m
CONFIG_DVB_DIB7000M=m
CONFIG_DVB_DIB7000P=m
CONFIG_DVB_DIB9000=m
CONFIG_DVB_TDA10048=m
CONFIG_DVB_AF9013=m
CONFIG_DVB_EC100=m
CONFIG_DVB_STV0367=m
CONFIG_DVB_CXD2820R=m
CONFIG_DVB_CXD2841ER=m
CONFIG_DVB_RTL2830=m
CONFIG_DVB_RTL2832=m
CONFIG_DVB_RTL2832_SDR=m
CONFIG_DVB_SI2168=m
CONFIG_DVB_ZD1301_DEMOD=m
CONFIG_DVB_CXD2880=m

#
# DVB-C (cable) frontends
#
CONFIG_DVB_VES1820=m
CONFIG_DVB_TDA10021=m
CONFIG_DVB_TDA10023=m
CONFIG_DVB_STV0297=m

#
# ATSC (North American/Korean Terrestrial/Cable DTV) frontends
#
CONFIG_DVB_NXT200X=m
CONFIG_DVB_OR51211=m
CONFIG_DVB_OR51132=m
CONFIG_DVB_BCM3510=m
CONFIG_DVB_LGDT330X=m
CONFIG_DVB_LGDT3305=m
CONFIG_DVB_LGDT3306A=m
CONFIG_DVB_LG2160=m
CONFIG_DVB_S5H1409=m
CONFIG_DVB_AU8522=m
CONFIG_DVB_AU8522_DTV=m
CONFIG_DVB_AU8522_V4L=m
CONFIG_DVB_S5H1411=m
CONFIG_DVB_MXL692=m

#
# ISDB-T (terrestrial) frontends
#
CONFIG_DVB_S921=m
CONFIG_DVB_DIB8000=m
CONFIG_DVB_MB86A20S=m

#
# ISDB-S (satellite) & ISDB-T (terrestrial) frontends
#
CONFIG_DVB_TC90522=m
CONFIG_DVB_MN88443X=m

#
# Digital terrestrial only tuners/PLL
#
CONFIG_DVB_PLL=m
CONFIG_DVB_TUNER_DIB0070=m
CONFIG_DVB_TUNER_DIB0090=m

#
# SEC control devices for DVB-S
#
CONFIG_DVB_DRX39XYJ=m
CONFIG_DVB_LNBH25=m
CONFIG_DVB_LNBH29=m
CONFIG_DVB_LNBP21=m
CONFIG_DVB_LNBP22=m
CONFIG_DVB_ISL6405=m
CONFIG_DVB_ISL6421=m
CONFIG_DVB_ISL6423=m
CONFIG_DVB_A8293=m
CONFIG_DVB_LGS8GL5=m
CONFIG_DVB_LGS8GXX=m
CONFIG_DVB_ATBM8830=m
CONFIG_DVB_TDA665x=m
CONFIG_DVB_IX2505V=m
CONFIG_DVB_M88RS2000=m
CONFIG_DVB_AF9033=m
CONFIG_DVB_HORUS3A=m
CONFIG_DVB_ASCOT2E=m
CONFIG_DVB_HELENE=m

#
# Common Interface (EN50221) controller drivers
#
CONFIG_DVB_CXD2099=m
CONFIG_DVB_SP2=m
# end of Customise DVB Frontends

#
# Tools to develop new frontends
#
# CONFIG_DVB_DUMMY_FE is not set
# end of Media ancillary drivers

#
# Graphics support
#
# CONFIG_AGP is not set
CONFIG_INTEL_GTT=m
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=64
CONFIG_VGA_SWITCHEROO=y
CONFIG_DRM=m
CONFIG_DRM_MIPI_DSI=y
CONFIG_DRM_DP_AUX_CHARDEV=y
# CONFIG_DRM_DEBUG_SELFTEST is not set
CONFIG_DRM_KMS_HELPER=m
CONFIG_DRM_FBDEV_EMULATION=y
CONFIG_DRM_FBDEV_OVERALLOC=100
CONFIG_DRM_LOAD_EDID_FIRMWARE=y
# CONFIG_DRM_DP_CEC is not set
CONFIG_DRM_TTM=m
CONFIG_DRM_VRAM_HELPER=m
CONFIG_DRM_TTM_HELPER=m
CONFIG_DRM_GEM_SHMEM_HELPER=y

#
# I2C encoder or helper chips
#
CONFIG_DRM_I2C_CH7006=m
CONFIG_DRM_I2C_SIL164=m
# CONFIG_DRM_I2C_NXP_TDA998X is not set
# CONFIG_DRM_I2C_NXP_TDA9950 is not set
# end of I2C encoder or helper chips

#
# ARM devices
#
# end of ARM devices

# CONFIG_DRM_RADEON is not set
# CONFIG_DRM_AMDGPU is not set
# CONFIG_DRM_NOUVEAU is not set
CONFIG_DRM_I915=m
CONFIG_DRM_I915_FORCE_PROBE=""
CONFIG_DRM_I915_CAPTURE_ERROR=y
CONFIG_DRM_I915_COMPRESS_ERROR=y
CONFIG_DRM_I915_USERPTR=y
CONFIG_DRM_I915_GVT=y
# CONFIG_DRM_I915_GVT_KVMGT is not set
CONFIG_DRM_I915_REQUEST_TIMEOUT=20000
CONFIG_DRM_I915_FENCE_TIMEOUT=10000
CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND=250
CONFIG_DRM_I915_HEARTBEAT_INTERVAL=2500
CONFIG_DRM_I915_PREEMPT_TIMEOUT=640
CONFIG_DRM_I915_MAX_REQUEST_BUSYWAIT=8000
CONFIG_DRM_I915_STOP_TIMEOUT=100
CONFIG_DRM_I915_TIMESLICE_DURATION=1
# CONFIG_DRM_VGEM is not set
# CONFIG_DRM_VKMS is not set
# CONFIG_DRM_VMWGFX is not set
CONFIG_DRM_GMA500=m
# CONFIG_DRM_UDL is not set
CONFIG_DRM_AST=m
CONFIG_DRM_MGAG200=m
CONFIG_DRM_QXL=m
CONFIG_DRM_VIRTIO_GPU=m
CONFIG_DRM_PANEL=y

#
# Display Panels
#
# CONFIG_DRM_PANEL_RASPBERRYPI_TOUCHSCREEN is not set
# CONFIG_DRM_PANEL_WIDECHIPS_WS2401 is not set
# end of Display Panels

CONFIG_DRM_BRIDGE=y
CONFIG_DRM_PANEL_BRIDGE=y

#
# Display Interface Bridges
#
# CONFIG_DRM_ANALOGIX_ANX78XX is not set
# end of Display Interface Bridges

# CONFIG_DRM_ETNAVIV is not set
CONFIG_DRM_BOCHS=m
CONFIG_DRM_CIRRUS_QEMU=m
# CONFIG_DRM_GM12U320 is not set
# CONFIG_DRM_SIMPLEDRM is not set
# CONFIG_TINYDRM_HX8357D is not set
# CONFIG_TINYDRM_ILI9225 is not set
# CONFIG_TINYDRM_ILI9341 is not set
# CONFIG_TINYDRM_ILI9486 is not set
# CONFIG_TINYDRM_MI0283QT is not set
# CONFIG_TINYDRM_REPAPER is not set
# CONFIG_TINYDRM_ST7586 is not set
# CONFIG_TINYDRM_ST7735R is not set
# CONFIG_DRM_VBOXVIDEO is not set
# CONFIG_DRM_GUD is not set
# CONFIG_DRM_HYPERV is not set
# CONFIG_DRM_LEGACY is not set
CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y

#
# Frame buffer Devices
#
CONFIG_FB_CMDLINE=y
CONFIG_FB_NOTIFY=y
CONFIG_FB=y
# CONFIG_FIRMWARE_EDID is not set
CONFIG_FB_BOOT_VESA_SUPPORT=y
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
CONFIG_FB_SYS_FILLRECT=m
CONFIG_FB_SYS_COPYAREA=m
CONFIG_FB_SYS_IMAGEBLIT=m
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYS_FOPS=m
CONFIG_FB_DEFERRED_IO=y
# CONFIG_FB_MODE_HELPERS is not set
CONFIG_FB_TILEBLITTING=y

#
# Frame buffer hardware drivers
#
# CONFIG_FB_CIRRUS is not set
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
# CONFIG_FB_ARC is not set
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
# CONFIG_FB_VGA16 is not set
# CONFIG_FB_UVESA is not set
CONFIG_FB_VESA=y
CONFIG_FB_EFI=y
# CONFIG_FB_N411 is not set
# CONFIG_FB_HGA is not set
# CONFIG_FB_OPENCORES is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_LE80578 is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_VIA is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_SM501 is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_UDL is not set
# CONFIG_FB_IBM_GXT4500 is not set
# CONFIG_FB_VIRTUAL is not set
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
CONFIG_FB_HYPERV=m
# CONFIG_FB_SIMPLE is not set
# CONFIG_FB_SSD1307 is not set
# CONFIG_FB_SM712 is not set
# end of Frame buffer Devices

#
# Backlight & LCD device support
#
CONFIG_LCD_CLASS_DEVICE=m
# CONFIG_LCD_L4F00242T03 is not set
# CONFIG_LCD_LMS283GF05 is not set
# CONFIG_LCD_LTV350QV is not set
# CONFIG_LCD_ILI922X is not set
# CONFIG_LCD_ILI9320 is not set
# CONFIG_LCD_TDO24M is not set
# CONFIG_LCD_VGG2432A4 is not set
CONFIG_LCD_PLATFORM=m
# CONFIG_LCD_AMS369FG06 is not set
# CONFIG_LCD_LMS501KF03 is not set
# CONFIG_LCD_HX8357 is not set
# CONFIG_LCD_OTM3225A is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=y
# CONFIG_BACKLIGHT_KTD253 is not set
# CONFIG_BACKLIGHT_PWM is not set
CONFIG_BACKLIGHT_APPLE=m
# CONFIG_BACKLIGHT_QCOM_WLED is not set
# CONFIG_BACKLIGHT_SAHARA is not set
# CONFIG_BACKLIGHT_ADP8860 is not set
# CONFIG_BACKLIGHT_ADP8870 is not set
# CONFIG_BACKLIGHT_LM3630A is not set
# CONFIG_BACKLIGHT_LM3639 is not set
CONFIG_BACKLIGHT_LP855X=m
# CONFIG_BACKLIGHT_GPIO is not set
# CONFIG_BACKLIGHT_LV5207LP is not set
# CONFIG_BACKLIGHT_BD6107 is not set
# CONFIG_BACKLIGHT_ARCXCNN is not set
# end of Backlight & LCD device support

CONFIG_HDMI=y

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
CONFIG_DUMMY_CONSOLE=y
CONFIG_DUMMY_CONSOLE_COLUMNS=80
CONFIG_DUMMY_CONSOLE_ROWS=25
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
# CONFIG_FRAMEBUFFER_CONSOLE_DEFERRED_TAKEOVER is not set
# end of Console display driver support

CONFIG_LOGO=y
# CONFIG_LOGO_LINUX_MONO is not set
# CONFIG_LOGO_LINUX_VGA16 is not set
CONFIG_LOGO_LINUX_CLUT224=y
# end of Graphics support

# CONFIG_SOUND is not set

#
# HID support
#
CONFIG_HID=y
CONFIG_HID_BATTERY_STRENGTH=y
CONFIG_HIDRAW=y
CONFIG_UHID=m
CONFIG_HID_GENERIC=y

#
# Special HID drivers
#
CONFIG_HID_A4TECH=m
# CONFIG_HID_ACCUTOUCH is not set
CONFIG_HID_ACRUX=m
# CONFIG_HID_ACRUX_FF is not set
CONFIG_HID_APPLE=m
# CONFIG_HID_APPLEIR is not set
CONFIG_HID_ASUS=m
CONFIG_HID_AUREAL=m
CONFIG_HID_BELKIN=m
# CONFIG_HID_BETOP_FF is not set
# CONFIG_HID_BIGBEN_FF is not set
CONFIG_HID_CHERRY=m
# CONFIG_HID_CHICONY is not set
# CONFIG_HID_CORSAIR is not set
# CONFIG_HID_COUGAR is not set
# CONFIG_HID_MACALLY is not set
CONFIG_HID_CMEDIA=m
# CONFIG_HID_CP2112 is not set
# CONFIG_HID_CREATIVE_SB0540 is not set
CONFIG_HID_CYPRESS=m
CONFIG_HID_DRAGONRISE=m
# CONFIG_DRAGONRISE_FF is not set
# CONFIG_HID_EMS_FF is not set
# CONFIG_HID_ELAN is not set
CONFIG_HID_ELECOM=m
# CONFIG_HID_ELO is not set
CONFIG_HID_EZKEY=m
# CONFIG_HID_FT260 is not set
CONFIG_HID_GEMBIRD=m
CONFIG_HID_GFRM=m
# CONFIG_HID_GLORIOUS is not set
# CONFIG_HID_HOLTEK is not set
# CONFIG_HID_VIVALDI is not set
# CONFIG_HID_GT683R is not set
CONFIG_HID_KEYTOUCH=m
CONFIG_HID_KYE=m
# CONFIG_HID_UCLOGIC is not set
CONFIG_HID_WALTOP=m
# CONFIG_HID_VIEWSONIC is not set
# CONFIG_HID_XIAOMI is not set
CONFIG_HID_GYRATION=m
CONFIG_HID_ICADE=m
CONFIG_HID_ITE=m
CONFIG_HID_JABRA=m
CONFIG_HID_TWINHAN=m
CONFIG_HID_KENSINGTON=m
CONFIG_HID_LCPOWER=m
CONFIG_HID_LED=m
CONFIG_HID_LENOVO=m
CONFIG_HID_LOGITECH=m
CONFIG_HID_LOGITECH_DJ=m
CONFIG_HID_LOGITECH_HIDPP=m
# CONFIG_LOGITECH_FF is not set
# CONFIG_LOGIRUMBLEPAD2_FF is not set
# CONFIG_LOGIG940_FF is not set
# CONFIG_LOGIWHEELS_FF is not set
CONFIG_HID_MAGICMOUSE=y
# CONFIG_HID_MALTRON is not set
# CONFIG_HID_MAYFLASH is not set
# CONFIG_HID_REDRAGON is not set
CONFIG_HID_MICROSOFT=m
CONFIG_HID_MONTEREY=m
CONFIG_HID_MULTITOUCH=m
# CONFIG_HID_NINTENDO is not set
CONFIG_HID_NTI=m
# CONFIG_HID_NTRIG is not set
CONFIG_HID_ORTEK=m
CONFIG_HID_PANTHERLORD=m
# CONFIG_PANTHERLORD_FF is not set
# CONFIG_HID_PENMOUNT is not set
CONFIG_HID_PETALYNX=m
CONFIG_HID_PICOLCD=m
CONFIG_HID_PICOLCD_FB=y
CONFIG_HID_PICOLCD_BACKLIGHT=y
CONFIG_HID_PICOLCD_LCD=y
CONFIG_HID_PICOLCD_LEDS=y
CONFIG_HID_PICOLCD_CIR=y
CONFIG_HID_PLANTRONICS=m
CONFIG_HID_PRIMAX=m
# CONFIG_HID_RETRODE is not set
# CONFIG_HID_ROCCAT is not set
CONFIG_HID_SAITEK=m
CONFIG_HID_SAMSUNG=m
# CONFIG_HID_SEMITEK is not set
# CONFIG_HID_SONY is not set
CONFIG_HID_SPEEDLINK=m
# CONFIG_HID_STEAM is not set
CONFIG_HID_STEELSERIES=m
CONFIG_HID_SUNPLUS=m
CONFIG_HID_RMI=m
CONFIG_HID_GREENASIA=m
# CONFIG_GREENASIA_FF is not set
CONFIG_HID_HYPERV_MOUSE=m
CONFIG_HID_SMARTJOYPLUS=m
# CONFIG_SMARTJOYPLUS_FF is not set
CONFIG_HID_TIVO=m
CONFIG_HID_TOPSEED=m
CONFIG_HID_THINGM=m
CONFIG_HID_THRUSTMASTER=m
# CONFIG_THRUSTMASTER_FF is not set
# CONFIG_HID_UDRAW_PS3 is not set
# CONFIG_HID_U2FZERO is not set
# CONFIG_HID_WACOM is not set
CONFIG_HID_WIIMOTE=m
CONFIG_HID_XINMO=m
CONFIG_HID_ZEROPLUS=m
# CONFIG_ZEROPLUS_FF is not set
CONFIG_HID_ZYDACRON=m
CONFIG_HID_SENSOR_HUB=y
CONFIG_HID_SENSOR_CUSTOM_SENSOR=m
CONFIG_HID_ALPS=m
# CONFIG_HID_MCP2221 is not set
# end of Special HID drivers

#
# USB HID support
#
CONFIG_USB_HID=y
# CONFIG_HID_PID is not set
# CONFIG_USB_HIDDEV is not set
# end of USB HID support

#
# I2C HID support
#
# CONFIG_I2C_HID_ACPI is not set
# end of I2C HID support

#
# Intel ISH HID support
#
CONFIG_INTEL_ISH_HID=m
# CONFIG_INTEL_ISH_FIRMWARE_DOWNLOADER is not set
# end of Intel ISH HID support

#
# AMD SFH HID Support
#
# CONFIG_AMD_SFH_HID is not set
# end of AMD SFH HID Support
# end of HID support

CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
# CONFIG_USB_LED_TRIG is not set
# CONFIG_USB_ULPI_BUS is not set
# CONFIG_USB_CONN_GPIO is not set
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_PCI=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y

#
# Miscellaneous USB options
#
CONFIG_USB_DEFAULT_PERSIST=y
# CONFIG_USB_FEW_INIT_RETRIES is not set
# CONFIG_USB_DYNAMIC_MINORS is not set
# CONFIG_USB_OTG is not set
# CONFIG_USB_OTG_PRODUCTLIST is not set
CONFIG_USB_LEDS_TRIGGER_USBPORT=y
CONFIG_USB_AUTOSUSPEND_DELAY=2
CONFIG_USB_MON=y

#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
CONFIG_USB_XHCI_HCD=y
# CONFIG_USB_XHCI_DBGCAP is not set
CONFIG_USB_XHCI_PCI=y
# CONFIG_USB_XHCI_PCI_RENESAS is not set
# CONFIG_USB_XHCI_PLATFORM is not set
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_ROOT_HUB_TT=y
CONFIG_USB_EHCI_TT_NEWSCHED=y
CONFIG_USB_EHCI_PCI=y
# CONFIG_USB_EHCI_FSL is not set
# CONFIG_USB_EHCI_HCD_PLATFORM is not set
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
# CONFIG_USB_FOTG210_HCD is not set
# CONFIG_USB_MAX3421_HCD is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PCI=y
# CONFIG_USB_OHCI_HCD_PLATFORM is not set
CONFIG_USB_UHCI_HCD=y
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_USB_HCD_BCMA is not set
# CONFIG_USB_HCD_TEST_MODE is not set

#
# USB Device Class drivers
#
# CONFIG_USB_ACM is not set
# CONFIG_USB_PRINTER is not set
# CONFIG_USB_WDM is not set
# CONFIG_USB_TMC is not set

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
#

#
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_REALTEK is not set
# CONFIG_USB_STORAGE_DATAFAB is not set
# CONFIG_USB_STORAGE_FREECOM is not set
# CONFIG_USB_STORAGE_ISD200 is not set
# CONFIG_USB_STORAGE_USBAT is not set
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_SDDR55 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_STORAGE_ALAUDA is not set
# CONFIG_USB_STORAGE_ONETOUCH is not set
# CONFIG_USB_STORAGE_KARMA is not set
# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
# CONFIG_USB_STORAGE_ENE_UB6250 is not set
# CONFIG_USB_UAS is not set

#
# USB Imaging devices
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_MICROTEK is not set
# CONFIG_USBIP_CORE is not set
# CONFIG_USB_CDNS_SUPPORT is not set
# CONFIG_USB_MUSB_HDRC is not set
# CONFIG_USB_DWC3 is not set
# CONFIG_USB_DWC2 is not set
# CONFIG_USB_CHIPIDEA is not set
# CONFIG_USB_ISP1760 is not set

#
# USB port drivers
#
# CONFIG_USB_USS720 is not set
CONFIG_USB_SERIAL=m
CONFIG_USB_SERIAL_GENERIC=y
# CONFIG_USB_SERIAL_SIMPLE is not set
# CONFIG_USB_SERIAL_AIRCABLE is not set
# CONFIG_USB_SERIAL_ARK3116 is not set
# CONFIG_USB_SERIAL_BELKIN is not set
# CONFIG_USB_SERIAL_CH341 is not set
# CONFIG_USB_SERIAL_WHITEHEAT is not set
# CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set
# CONFIG_USB_SERIAL_CP210X is not set
# CONFIG_USB_SERIAL_CYPRESS_M8 is not set
# CONFIG_USB_SERIAL_EMPEG is not set
# CONFIG_USB_SERIAL_FTDI_SIO is not set
# CONFIG_USB_SERIAL_VISOR is not set
# CONFIG_USB_SERIAL_IPAQ is not set
# CONFIG_USB_SERIAL_IR is not set
# CONFIG_USB_SERIAL_EDGEPORT is not set
# CONFIG_USB_SERIAL_EDGEPORT_TI is not set
# CONFIG_USB_SERIAL_F81232 is not set
# CONFIG_USB_SERIAL_F8153X is not set
# CONFIG_USB_SERIAL_GARMIN is not set
# CONFIG_USB_SERIAL_IPW is not set
# CONFIG_USB_SERIAL_IUU is not set
# CONFIG_USB_SERIAL_KEYSPAN_PDA is not set
# CONFIG_USB_SERIAL_KEYSPAN is not set
# CONFIG_USB_SERIAL_KLSI is not set
# CONFIG_USB_SERIAL_KOBIL_SCT is not set
# CONFIG_USB_SERIAL_MCT_U232 is not set
# CONFIG_USB_SERIAL_METRO is not set
# CONFIG_USB_SERIAL_MOS7720 is not set
# CONFIG_USB_SERIAL_MOS7840 is not set
# CONFIG_USB_SERIAL_MXUPORT is not set
# CONFIG_USB_SERIAL_NAVMAN is not set
# CONFIG_USB_SERIAL_PL2303 is not set
# CONFIG_USB_SERIAL_OTI6858 is not set
# CONFIG_USB_SERIAL_QCAUX is not set
# CONFIG_USB_SERIAL_QUALCOMM is not set
# CONFIG_USB_SERIAL_SPCP8X5 is not set
# CONFIG_USB_SERIAL_SAFE is not set
# CONFIG_USB_SERIAL_SIERRAWIRELESS is not set
# CONFIG_USB_SERIAL_SYMBOL is not set
# CONFIG_USB_SERIAL_TI is not set
# CONFIG_USB_SERIAL_CYBERJACK is not set
# CONFIG_USB_SERIAL_OPTION is not set
# CONFIG_USB_SERIAL_OMNINET is not set
# CONFIG_USB_SERIAL_OPTICON is not set
# CONFIG_USB_SERIAL_XSENS_MT is not set
# CONFIG_USB_SERIAL_WISHBONE is not set
# CONFIG_USB_SERIAL_SSU100 is not set
# CONFIG_USB_SERIAL_QT2 is not set
# CONFIG_USB_SERIAL_UPD78F0730 is not set
# CONFIG_USB_SERIAL_XR is not set
CONFIG_USB_SERIAL_DEBUG=m

#
# USB Miscellaneous drivers
#
# CONFIG_USB_EMI62 is not set
# CONFIG_USB_EMI26 is not set
# CONFIG_USB_ADUTUX is not set
# CONFIG_USB_SEVSEG is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_IDMOUSE is not set
# CONFIG_USB_FTDI_ELAN is not set
# CONFIG_USB_APPLEDISPLAY is not set
# CONFIG_APPLE_MFI_FASTCHARGE is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
# CONFIG_USB_EHSET_TEST_FIXTURE is not set
# CONFIG_USB_ISIGHTFW is not set
# CONFIG_USB_YUREX is not set
# CONFIG_USB_EZUSB_FX2 is not set
# CONFIG_USB_HUB_USB251XB is not set
# CONFIG_USB_HSIC_USB3503 is not set
# CONFIG_USB_HSIC_USB4604 is not set
# CONFIG_USB_LINK_LAYER_TEST is not set
# CONFIG_USB_CHAOSKEY is not set
# CONFIG_USB_ATM is not set

#
# USB Physical Layer drivers
#
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_USB_GPIO_VBUS is not set
# CONFIG_USB_ISP1301 is not set
# end of USB Physical Layer drivers

# CONFIG_USB_GADGET is not set
CONFIG_TYPEC=y
# CONFIG_TYPEC_TCPM is not set
CONFIG_TYPEC_UCSI=y
# CONFIG_UCSI_CCG is not set
CONFIG_UCSI_ACPI=y
# CONFIG_TYPEC_TPS6598X is not set
# CONFIG_TYPEC_STUSB160X is not set

#
# USB Type-C Multiplexer/DeMultiplexer Switch support
#
# CONFIG_TYPEC_MUX_PI3USB30532 is not set
# end of USB Type-C Multiplexer/DeMultiplexer Switch support

#
# USB Type-C Alternate Mode drivers
#
# CONFIG_TYPEC_DP_ALTMODE is not set
# end of USB Type-C Alternate Mode drivers

# CONFIG_USB_ROLE_SWITCH is not set
CONFIG_MMC=m
CONFIG_MMC_BLOCK=m
CONFIG_MMC_BLOCK_MINORS=8
CONFIG_SDIO_UART=m
# CONFIG_MMC_TEST is not set

#
# MMC/SD/SDIO Host Controller Drivers
#
# CONFIG_MMC_DEBUG is not set
CONFIG_MMC_SDHCI=m
CONFIG_MMC_SDHCI_IO_ACCESSORS=y
CONFIG_MMC_SDHCI_PCI=m
CONFIG_MMC_RICOH_MMC=y
CONFIG_MMC_SDHCI_ACPI=m
CONFIG_MMC_SDHCI_PLTFM=m
# CONFIG_MMC_SDHCI_F_SDH30 is not set
# CONFIG_MMC_WBSD is not set
# CONFIG_MMC_TIFM_SD is not set
# CONFIG_MMC_SPI is not set
# CONFIG_MMC_CB710 is not set
# CONFIG_MMC_VIA_SDMMC is not set
# CONFIG_MMC_VUB300 is not set
# CONFIG_MMC_USHC is not set
# CONFIG_MMC_USDHI6ROL0 is not set
# CONFIG_MMC_REALTEK_PCI is not set
CONFIG_MMC_CQHCI=m
# CONFIG_MMC_HSQ is not set
# CONFIG_MMC_TOSHIBA_PCI is not set
# CONFIG_MMC_MTK is not set
# CONFIG_MMC_SDHCI_XENON is not set
# CONFIG_MEMSTICK is not set
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
# CONFIG_LEDS_CLASS_FLASH is not set
# CONFIG_LEDS_CLASS_MULTICOLOR is not set
# CONFIG_LEDS_BRIGHTNESS_HW_CHANGED is not set

#
# LED drivers
#
# CONFIG_LEDS_APU is not set
CONFIG_LEDS_LM3530=m
# CONFIG_LEDS_LM3532 is not set
# CONFIG_LEDS_LM3642 is not set
# CONFIG_LEDS_PCA9532 is not set
# CONFIG_LEDS_GPIO is not set
CONFIG_LEDS_LP3944=m
# CONFIG_LEDS_LP3952 is not set
# CONFIG_LEDS_LP50XX is not set
CONFIG_LEDS_CLEVO_MAIL=m
# CONFIG_LEDS_PCA955X is not set
# CONFIG_LEDS_PCA963X is not set
# CONFIG_LEDS_DAC124S085 is not set
# CONFIG_LEDS_PWM is not set
# CONFIG_LEDS_BD2802 is not set
CONFIG_LEDS_INTEL_SS4200=m
CONFIG_LEDS_LT3593=m
# CONFIG_LEDS_TCA6507 is not set
# CONFIG_LEDS_TLC591XX is not set
# CONFIG_LEDS_LM355x is not set

#
# LED driver for blink(1) USB RGB LED is under Special HID drivers (HID_THINGM)
#
CONFIG_LEDS_BLINKM=m
CONFIG_LEDS_MLXCPLD=m
# CONFIG_LEDS_MLXREG is not set
# CONFIG_LEDS_USER is not set
# CONFIG_LEDS_NIC78BX is not set
# CONFIG_LEDS_TI_LMU_COMMON is not set

#
# Flash and Torch LED drivers
#

#
# LED Triggers
#
CONFIG_LEDS_TRIGGERS=y
CONFIG_LEDS_TRIGGER_TIMER=m
CONFIG_LEDS_TRIGGER_ONESHOT=m
# CONFIG_LEDS_TRIGGER_DISK is not set
CONFIG_LEDS_TRIGGER_HEARTBEAT=m
CONFIG_LEDS_TRIGGER_BACKLIGHT=m
# CONFIG_LEDS_TRIGGER_CPU is not set
# CONFIG_LEDS_TRIGGER_ACTIVITY is not set
CONFIG_LEDS_TRIGGER_GPIO=m
CONFIG_LEDS_TRIGGER_DEFAULT_ON=m

#
# iptables trigger is under Netfilter config (LED target)
#
CONFIG_LEDS_TRIGGER_TRANSIENT=m
CONFIG_LEDS_TRIGGER_CAMERA=m
# CONFIG_LEDS_TRIGGER_PANIC is not set
# CONFIG_LEDS_TRIGGER_NETDEV is not set
# CONFIG_LEDS_TRIGGER_PATTERN is not set
CONFIG_LEDS_TRIGGER_AUDIO=m
# CONFIG_LEDS_TRIGGER_TTY is not set
# CONFIG_ACCESSIBILITY is not set
CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_USER_MAD=m
CONFIG_INFINIBAND_USER_ACCESS=m
CONFIG_INFINIBAND_USER_MEM=y
CONFIG_INFINIBAND_ON_DEMAND_PAGING=y
CONFIG_INFINIBAND_ADDR_TRANS=y
CONFIG_INFINIBAND_ADDR_TRANS_CONFIGFS=y
CONFIG_INFINIBAND_VIRT_DMA=y
# CONFIG_INFINIBAND_MTHCA is not set
# CONFIG_INFINIBAND_EFA is not set
# CONFIG_MLX4_INFINIBAND is not set
# CONFIG_INFINIBAND_OCRDMA is not set
# CONFIG_INFINIBAND_USNIC is not set
# CONFIG_INFINIBAND_RDMAVT is not set
CONFIG_RDMA_RXE=m
CONFIG_RDMA_SIW=m
CONFIG_INFINIBAND_IPOIB=m
# CONFIG_INFINIBAND_IPOIB_CM is not set
CONFIG_INFINIBAND_IPOIB_DEBUG=y
# CONFIG_INFINIBAND_IPOIB_DEBUG_DATA is not set
CONFIG_INFINIBAND_SRP=m
CONFIG_INFINIBAND_SRPT=m
# CONFIG_INFINIBAND_ISER is not set
# CONFIG_INFINIBAND_ISERT is not set
# CONFIG_INFINIBAND_RTRS_CLIENT is not set
# CONFIG_INFINIBAND_RTRS_SERVER is not set
# CONFIG_INFINIBAND_OPA_VNIC is not set
CONFIG_EDAC_ATOMIC_SCRUB=y
CONFIG_EDAC_SUPPORT=y
CONFIG_EDAC=y
CONFIG_EDAC_LEGACY_SYSFS=y
# CONFIG_EDAC_DEBUG is not set
CONFIG_EDAC_DECODE_MCE=m
CONFIG_EDAC_GHES=y
CONFIG_EDAC_AMD64=m
CONFIG_EDAC_E752X=m
CONFIG_EDAC_I82975X=m
CONFIG_EDAC_I3000=m
CONFIG_EDAC_I3200=m
CONFIG_EDAC_IE31200=m
CONFIG_EDAC_X38=m
CONFIG_EDAC_I5400=m
CONFIG_EDAC_I7CORE=m
CONFIG_EDAC_I5000=m
CONFIG_EDAC_I5100=m
CONFIG_EDAC_I7300=m
CONFIG_EDAC_SBRIDGE=m
CONFIG_EDAC_SKX=m
# CONFIG_EDAC_I10NM is not set
CONFIG_EDAC_PND2=m
# CONFIG_EDAC_IGEN6 is not set
CONFIG_RTC_LIB=y
CONFIG_RTC_MC146818_LIB=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_HCTOSYS=y
CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
# CONFIG_RTC_SYSTOHC is not set
# CONFIG_RTC_DEBUG is not set
CONFIG_RTC_LIB_KUNIT_TEST=m
CONFIG_RTC_NVMEM=y

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# I2C RTC drivers
#
# CONFIG_RTC_DRV_ABB5ZES3 is not set
# CONFIG_RTC_DRV_ABEOZ9 is not set
# CONFIG_RTC_DRV_ABX80X is not set
CONFIG_RTC_DRV_DS1307=m
# CONFIG_RTC_DRV_DS1307_CENTURY is not set
CONFIG_RTC_DRV_DS1374=m
# CONFIG_RTC_DRV_DS1374_WDT is not set
CONFIG_RTC_DRV_DS1672=m
CONFIG_RTC_DRV_MAX6900=m
CONFIG_RTC_DRV_RS5C372=m
CONFIG_RTC_DRV_ISL1208=m
CONFIG_RTC_DRV_ISL12022=m
CONFIG_RTC_DRV_X1205=m
CONFIG_RTC_DRV_PCF8523=m
# CONFIG_RTC_DRV_PCF85063 is not set
# CONFIG_RTC_DRV_PCF85363 is not set
CONFIG_RTC_DRV_PCF8563=m
CONFIG_RTC_DRV_PCF8583=m
CONFIG_RTC_DRV_M41T80=m
CONFIG_RTC_DRV_M41T80_WDT=y
CONFIG_RTC_DRV_BQ32K=m
# CONFIG_RTC_DRV_S35390A is not set
CONFIG_RTC_DRV_FM3130=m
# CONFIG_RTC_DRV_RX8010 is not set
CONFIG_RTC_DRV_RX8581=m
CONFIG_RTC_DRV_RX8025=m
CONFIG_RTC_DRV_EM3027=m
# CONFIG_RTC_DRV_RV3028 is not set
# CONFIG_RTC_DRV_RV3032 is not set
# CONFIG_RTC_DRV_RV8803 is not set
# CONFIG_RTC_DRV_SD3078 is not set

#
# SPI RTC drivers
#
# CONFIG_RTC_DRV_M41T93 is not set
# CONFIG_RTC_DRV_M41T94 is not set
# CONFIG_RTC_DRV_DS1302 is not set
# CONFIG_RTC_DRV_DS1305 is not set
# CONFIG_RTC_DRV_DS1343 is not set
# CONFIG_RTC_DRV_DS1347 is not set
# CONFIG_RTC_DRV_DS1390 is not set
# CONFIG_RTC_DRV_MAX6916 is not set
# CONFIG_RTC_DRV_R9701 is not set
CONFIG_RTC_DRV_RX4581=m
# CONFIG_RTC_DRV_RS5C348 is not set
# CONFIG_RTC_DRV_MAX6902 is not set
# CONFIG_RTC_DRV_PCF2123 is not set
# CONFIG_RTC_DRV_MCP795 is not set
CONFIG_RTC_I2C_AND_SPI=y

#
# SPI and I2C RTC drivers
#
CONFIG_RTC_DRV_DS3232=m
CONFIG_RTC_DRV_DS3232_HWMON=y
# CONFIG_RTC_DRV_PCF2127 is not set
CONFIG_RTC_DRV_RV3029C2=m
# CONFIG_RTC_DRV_RV3029_HWMON is not set
# CONFIG_RTC_DRV_RX6110 is not set

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
CONFIG_RTC_DRV_DS1286=m
CONFIG_RTC_DRV_DS1511=m
CONFIG_RTC_DRV_DS1553=m
# CONFIG_RTC_DRV_DS1685_FAMILY is not set
CONFIG_RTC_DRV_DS1742=m
CONFIG_RTC_DRV_DS2404=m
CONFIG_RTC_DRV_STK17TA8=m
# CONFIG_RTC_DRV_M48T86 is not set
CONFIG_RTC_DRV_M48T35=m
CONFIG_RTC_DRV_M48T59=m
CONFIG_RTC_DRV_MSM6242=m
CONFIG_RTC_DRV_BQ4802=m
CONFIG_RTC_DRV_RP5C01=m
CONFIG_RTC_DRV_V3020=m

#
# on-CPU RTC drivers
#
# CONFIG_RTC_DRV_FTRTC010 is not set

#
# HID Sensor RTC drivers
#
# CONFIG_RTC_DRV_GOLDFISH is not set
CONFIG_DMADEVICES=y
# CONFIG_DMADEVICES_DEBUG is not set

#
# DMA Devices
#
CONFIG_DMA_ENGINE=y
CONFIG_DMA_VIRTUAL_CHANNELS=y
CONFIG_DMA_ACPI=y
# CONFIG_ALTERA_MSGDMA is not set
CONFIG_INTEL_IDMA64=m
# CONFIG_INTEL_IDXD is not set
# CONFIG_INTEL_IDXD_COMPAT is not set
CONFIG_INTEL_IOATDMA=m
# CONFIG_PLX_DMA is not set
# CONFIG_AMD_PTDMA is not set
# CONFIG_QCOM_HIDMA_MGMT is not set
# CONFIG_QCOM_HIDMA is not set
CONFIG_DW_DMAC_CORE=y
CONFIG_DW_DMAC=m
CONFIG_DW_DMAC_PCI=y
# CONFIG_DW_EDMA is not set
# CONFIG_DW_EDMA_PCIE is not set
CONFIG_HSU_DMA=y
# CONFIG_SF_PDMA is not set
# CONFIG_INTEL_LDMA is not set

#
# DMA Clients
#
CONFIG_ASYNC_TX_DMA=y
CONFIG_DMATEST=m
CONFIG_DMA_ENGINE_RAID=y

#
# DMABUF options
#
CONFIG_SYNC_FILE=y
# CONFIG_SW_SYNC is not set
# CONFIG_UDMABUF is not set
# CONFIG_DMABUF_MOVE_NOTIFY is not set
# CONFIG_DMABUF_DEBUG is not set
# CONFIG_DMABUF_SELFTESTS is not set
# CONFIG_DMABUF_HEAPS is not set
# CONFIG_DMABUF_SYSFS_STATS is not set
# end of DMABUF options

CONFIG_DCA=m
# CONFIG_AUXDISPLAY is not set
# CONFIG_PANEL is not set
CONFIG_UIO=m
CONFIG_UIO_CIF=m
CONFIG_UIO_PDRV_GENIRQ=m
# CONFIG_UIO_DMEM_GENIRQ is not set
CONFIG_UIO_AEC=m
CONFIG_UIO_SERCOS3=m
CONFIG_UIO_PCI_GENERIC=m
# CONFIG_UIO_NETX is not set
# CONFIG_UIO_PRUSS is not set
# CONFIG_UIO_MF624 is not set
CONFIG_UIO_HV_GENERIC=m
CONFIG_VFIO=m
CONFIG_VFIO_IOMMU_TYPE1=m
CONFIG_VFIO_VIRQFD=m
CONFIG_VFIO_NOIOMMU=y
CONFIG_VFIO_PCI_CORE=m
CONFIG_VFIO_PCI_MMAP=y
CONFIG_VFIO_PCI_INTX=y
CONFIG_VFIO_PCI=m
# CONFIG_VFIO_PCI_VGA is not set
# CONFIG_VFIO_PCI_IGD is not set
CONFIG_VFIO_MDEV=m
CONFIG_IRQ_BYPASS_MANAGER=m
# CONFIG_VIRT_DRIVERS is not set
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI_LIB=y
CONFIG_VIRTIO_PCI_LIB_LEGACY=y
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_PCI_LEGACY=y
# CONFIG_VIRTIO_PMEM is not set
CONFIG_VIRTIO_BALLOON=m
CONFIG_VIRTIO_MEM=m
CONFIG_VIRTIO_INPUT=m
# CONFIG_VIRTIO_MMIO is not set
CONFIG_VIRTIO_DMA_SHARED_BUFFER=m
# CONFIG_VDPA is not set
CONFIG_VHOST_IOTLB=m
CONFIG_VHOST=m
CONFIG_VHOST_MENU=y
CONFIG_VHOST_NET=m
# CONFIG_VHOST_SCSI is not set
CONFIG_VHOST_VSOCK=m
# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set

#
# Microsoft Hyper-V guest support
#
CONFIG_HYPERV=m
CONFIG_HYPERV_TIMER=y
CONFIG_HYPERV_UTILS=m
CONFIG_HYPERV_BALLOON=m
# end of Microsoft Hyper-V guest support

# CONFIG_GREYBUS is not set
# CONFIG_COMEDI is not set
# CONFIG_STAGING is not set
CONFIG_X86_PLATFORM_DEVICES=y
CONFIG_ACPI_WMI=m
CONFIG_WMI_BMOF=m
# CONFIG_HUAWEI_WMI is not set
# CONFIG_UV_SYSFS is not set
CONFIG_MXM_WMI=m
# CONFIG_PEAQ_WMI is not set
# CONFIG_NVIDIA_WMI_EC_BACKLIGHT is not set
# CONFIG_XIAOMI_WMI is not set
# CONFIG_GIGABYTE_WMI is not set
CONFIG_ACERHDF=m
# CONFIG_ACER_WIRELESS is not set
CONFIG_ACER_WMI=m
# CONFIG_AMD_PMC is not set
# CONFIG_ADV_SWBUTTON is not set
CONFIG_APPLE_GMUX=m
CONFIG_ASUS_LAPTOP=m
# CONFIG_ASUS_WIRELESS is not set
CONFIG_ASUS_WMI=m
CONFIG_ASUS_NB_WMI=m
# CONFIG_MERAKI_MX100 is not set
CONFIG_EEEPC_LAPTOP=m
CONFIG_EEEPC_WMI=m
# CONFIG_X86_PLATFORM_DRIVERS_DELL is not set
CONFIG_AMILO_RFKILL=m
CONFIG_FUJITSU_LAPTOP=m
CONFIG_FUJITSU_TABLET=m
# CONFIG_GPD_POCKET_FAN is not set
CONFIG_HP_ACCEL=m
# CONFIG_WIRELESS_HOTKEY is not set
CONFIG_HP_WMI=m
# CONFIG_IBM_RTL is not set
CONFIG_IDEAPAD_LAPTOP=m
CONFIG_SENSORS_HDAPS=m
CONFIG_THINKPAD_ACPI=m
# CONFIG_THINKPAD_ACPI_DEBUGFACILITIES is not set
# CONFIG_THINKPAD_ACPI_DEBUG is not set
# CONFIG_THINKPAD_ACPI_UNSAFE_LEDS is not set
CONFIG_THINKPAD_ACPI_VIDEO=y
CONFIG_THINKPAD_ACPI_HOTKEY_POLL=y
# CONFIG_THINKPAD_LMI is not set
CONFIG_X86_PLATFORM_DRIVERS_INTEL=y
# CONFIG_INTEL_ATOMISP2_PM is not set
# CONFIG_INTEL_SAR_INT1092 is not set
CONFIG_INTEL_PMC_CORE=m

#
# Intel Speed Select Technology interface support
#
# CONFIG_INTEL_SPEED_SELECT_INTERFACE is not set
# end of Intel Speed Select Technology interface support

CONFIG_INTEL_WMI=y
# CONFIG_INTEL_WMI_SBL_FW_UPDATE is not set
CONFIG_INTEL_WMI_THUNDERBOLT=m
CONFIG_INTEL_HID_EVENT=m
CONFIG_INTEL_VBTN=m
# CONFIG_INTEL_INT0002_VGPIO is not set
CONFIG_INTEL_OAKTRAIL=m
# CONFIG_INTEL_ISHTP_ECLITE is not set
# CONFIG_INTEL_PUNIT_IPC is not set
CONFIG_INTEL_RST=m
# CONFIG_INTEL_SMARTCONNECT is not set
CONFIG_INTEL_TURBO_MAX_3=y
# CONFIG_INTEL_UNCORE_FREQ_CONTROL is not set
CONFIG_MSI_LAPTOP=m
CONFIG_MSI_WMI=m
# CONFIG_PCENGINES_APU2 is not set
# CONFIG_BARCO_P50_GPIO is not set
CONFIG_SAMSUNG_LAPTOP=m
CONFIG_SAMSUNG_Q10=m
CONFIG_TOSHIBA_BT_RFKILL=m
# CONFIG_TOSHIBA_HAPS is not set
# CONFIG_TOSHIBA_WMI is not set
CONFIG_ACPI_CMPC=m
CONFIG_COMPAL_LAPTOP=m
# CONFIG_LG_LAPTOP is not set
CONFIG_PANASONIC_LAPTOP=m
CONFIG_SONY_LAPTOP=m
CONFIG_SONYPI_COMPAT=y
# CONFIG_SYSTEM76_ACPI is not set
CONFIG_TOPSTAR_LAPTOP=m
# CONFIG_I2C_MULTI_INSTANTIATE is not set
CONFIG_MLX_PLATFORM=m
CONFIG_INTEL_IPS=m
# CONFIG_INTEL_SCU_PCI is not set
# CONFIG_INTEL_SCU_PLATFORM is not set
CONFIG_PMC_ATOM=y
# CONFIG_CHROME_PLATFORMS is not set
CONFIG_MELLANOX_PLATFORM=y
CONFIG_MLXREG_HOTPLUG=m
# CONFIG_MLXREG_IO is not set
# CONFIG_MLXREG_LC is not set
CONFIG_SURFACE_PLATFORMS=y
# CONFIG_SURFACE3_WMI is not set
# CONFIG_SURFACE_3_POWER_OPREGION is not set
# CONFIG_SURFACE_GPE is not set
# CONFIG_SURFACE_HOTPLUG is not set
# CONFIG_SURFACE_PRO3_BUTTON is not set
CONFIG_HAVE_CLK=y
CONFIG_HAVE_CLK_PREPARE=y
CONFIG_COMMON_CLK=y
# CONFIG_LMK04832 is not set
# CONFIG_COMMON_CLK_MAX9485 is not set
# CONFIG_COMMON_CLK_SI5341 is not set
# CONFIG_COMMON_CLK_SI5351 is not set
# CONFIG_COMMON_CLK_SI544 is not set
# CONFIG_COMMON_CLK_CDCE706 is not set
# CONFIG_COMMON_CLK_CS2000_CP is not set
# CONFIG_COMMON_CLK_PWM is not set
# CONFIG_XILINX_VCU is not set
CONFIG_HWSPINLOCK=y

#
# Clock Source drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
# end of Clock Source drivers

CONFIG_MAILBOX=y
CONFIG_PCC=y
# CONFIG_ALTERA_MBOX is not set
CONFIG_IOMMU_IOVA=y
CONFIG_IOASID=y
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y

#
# Generic IOMMU Pagetable Support
#
# end of Generic IOMMU Pagetable Support

# CONFIG_IOMMU_DEBUGFS is not set
# CONFIG_IOMMU_DEFAULT_DMA_STRICT is not set
CONFIG_IOMMU_DEFAULT_DMA_LAZY=y
# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set
CONFIG_IOMMU_DMA=y
# CONFIG_AMD_IOMMU is not set
CONFIG_DMAR_TABLE=y
CONFIG_INTEL_IOMMU=y
# CONFIG_INTEL_IOMMU_SVM is not set
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON=y
CONFIG_IRQ_REMAP=y
CONFIG_HYPERV_IOMMU=y
# CONFIG_VIRTIO_IOMMU is not set

#
# Remoteproc drivers
#
# CONFIG_REMOTEPROC is not set
# end of Remoteproc drivers

#
# Rpmsg drivers
#
# CONFIG_RPMSG_QCOM_GLINK_RPM is not set
# CONFIG_RPMSG_VIRTIO is not set
# end of Rpmsg drivers

# CONFIG_SOUNDWIRE is not set

#
# SOC (System On Chip) specific Drivers
#

#
# Amlogic SoC drivers
#
# end of Amlogic SoC drivers

#
# Broadcom SoC drivers
#
# end of Broadcom SoC drivers

#
# NXP/Freescale QorIQ SoC drivers
#
# end of NXP/Freescale QorIQ SoC drivers

#
# i.MX SoC drivers
#
# end of i.MX SoC drivers

#
# Enable LiteX SoC Builder specific drivers
#
# end of Enable LiteX SoC Builder specific drivers

#
# Qualcomm SoC drivers
#
# end of Qualcomm SoC drivers

# CONFIG_SOC_TI is not set

#
# Xilinx SoC drivers
#
# end of Xilinx SoC drivers
# end of SOC (System On Chip) specific Drivers

# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
# CONFIG_IIO is not set
CONFIG_NTB=m
# CONFIG_NTB_MSI is not set
# CONFIG_NTB_AMD is not set
# CONFIG_NTB_IDT is not set
# CONFIG_NTB_INTEL is not set
# CONFIG_NTB_EPF is not set
# CONFIG_NTB_SWITCHTEC is not set
# CONFIG_NTB_PINGPONG is not set
# CONFIG_NTB_TOOL is not set
# CONFIG_NTB_PERF is not set
# CONFIG_NTB_TRANSPORT is not set
# CONFIG_VME_BUS is not set
CONFIG_PWM=y
CONFIG_PWM_SYSFS=y
# CONFIG_PWM_DEBUG is not set
# CONFIG_PWM_DWC is not set
CONFIG_PWM_LPSS=m
CONFIG_PWM_LPSS_PCI=m
CONFIG_PWM_LPSS_PLATFORM=m
# CONFIG_PWM_PCA9685 is not set

#
# IRQ chip support
#
# end of IRQ chip support

# CONFIG_IPACK_BUS is not set
# CONFIG_RESET_CONTROLLER is not set

#
# PHY Subsystem
#
# CONFIG_GENERIC_PHY is not set
# CONFIG_USB_LGM_PHY is not set
# CONFIG_PHY_CAN_TRANSCEIVER is not set

#
# PHY drivers for Broadcom platforms
#
# CONFIG_BCM_KONA_USB2_PHY is not set
# end of PHY drivers for Broadcom platforms

# CONFIG_PHY_PXA_28NM_HSIC is not set
# CONFIG_PHY_PXA_28NM_USB2 is not set
# CONFIG_PHY_INTEL_LGM_EMMC is not set
# end of PHY Subsystem

CONFIG_POWERCAP=y
CONFIG_INTEL_RAPL_CORE=m
CONFIG_INTEL_RAPL=m
# CONFIG_IDLE_INJECT is not set
# CONFIG_DTPM is not set
# CONFIG_MCB is not set

#
# Performance monitor support
#
# end of Performance monitor support

CONFIG_RAS=y
# CONFIG_RAS_CEC is not set
# CONFIG_USB4 is not set

#
# Android
#
# CONFIG_ANDROID is not set
# end of Android

CONFIG_LIBNVDIMM=m
CONFIG_BLK_DEV_PMEM=m
CONFIG_ND_BLK=m
CONFIG_ND_CLAIM=y
CONFIG_ND_BTT=m
CONFIG_BTT=y
CONFIG_ND_PFN=m
CONFIG_NVDIMM_PFN=y
CONFIG_NVDIMM_DAX=y
CONFIG_NVDIMM_KEYS=y
CONFIG_DAX_DRIVER=y
CONFIG_DAX=y
CONFIG_DEV_DAX=m
CONFIG_DEV_DAX_PMEM=m
CONFIG_DEV_DAX_KMEM=m
CONFIG_DEV_DAX_PMEM_COMPAT=m
CONFIG_NVMEM=y
CONFIG_NVMEM_SYSFS=y
# CONFIG_NVMEM_RMEM is not set

#
# HW tracing support
#
CONFIG_STM=m
# CONFIG_STM_PROTO_BASIC is not set
# CONFIG_STM_PROTO_SYS_T is not set
CONFIG_STM_DUMMY=m
CONFIG_STM_SOURCE_CONSOLE=m
CONFIG_STM_SOURCE_HEARTBEAT=m
CONFIG_STM_SOURCE_FTRACE=m
CONFIG_INTEL_TH=m
CONFIG_INTEL_TH_PCI=m
CONFIG_INTEL_TH_ACPI=m
CONFIG_INTEL_TH_GTH=m
CONFIG_INTEL_TH_STH=m
CONFIG_INTEL_TH_MSU=m
CONFIG_INTEL_TH_PTI=m
# CONFIG_INTEL_TH_DEBUG is not set
# end of HW tracing support

# CONFIG_FPGA is not set
# CONFIG_TEE is not set
# CONFIG_UNISYS_VISORBUS is not set
# CONFIG_SIOX is not set
# CONFIG_SLIMBUS is not set
# CONFIG_INTERCONNECT is not set
# CONFIG_COUNTER is not set
# CONFIG_MOST is not set
# end of Device Drivers

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
# CONFIG_VALIDATE_FS_PARSER is not set
CONFIG_FS_IOMAP=y
CONFIG_EXT2_FS=m
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT2_FS_SECURITY=y
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_EXT4_KUNIT_TESTS=m
CONFIG_JBD2=y
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=y
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
CONFIG_XFS_FS=m
CONFIG_XFS_SUPPORT_V4=y
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
CONFIG_XFS_RT=y
CONFIG_XFS_ONLINE_SCRUB=y
CONFIG_XFS_ONLINE_REPAIR=y
CONFIG_XFS_DEBUG=y
CONFIG_XFS_ASSERT_FATAL=y
CONFIG_GFS2_FS=m
CONFIG_GFS2_FS_LOCKING_DLM=y
CONFIG_OCFS2_FS=m
CONFIG_OCFS2_FS_O2CB=m
CONFIG_OCFS2_FS_USERSPACE_CLUSTER=m
CONFIG_OCFS2_FS_STATS=y
CONFIG_OCFS2_DEBUG_MASKLOG=y
# CONFIG_OCFS2_DEBUG_FS is not set
CONFIG_BTRFS_FS=m
CONFIG_BTRFS_FS_POSIX_ACL=y
# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set
# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
# CONFIG_BTRFS_DEBUG is not set
# CONFIG_BTRFS_ASSERT is not set
# CONFIG_BTRFS_FS_REF_VERIFY is not set
# CONFIG_NILFS2_FS is not set
CONFIG_F2FS_FS=m
CONFIG_F2FS_STAT_FS=y
CONFIG_F2FS_FS_XATTR=y
CONFIG_F2FS_FS_POSIX_ACL=y
CONFIG_F2FS_FS_SECURITY=y
# CONFIG_F2FS_CHECK_FS is not set
# CONFIG_F2FS_FAULT_INJECTION is not set
# CONFIG_F2FS_FS_COMPRESSION is not set
CONFIG_F2FS_IOSTAT=y
# CONFIG_ZONEFS_FS is not set
CONFIG_FS_DAX=y
CONFIG_FS_DAX_PMD=y
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
CONFIG_EXPORTFS_BLOCK_OPS=y
CONFIG_FILE_LOCKING=y
CONFIG_FS_ENCRYPTION=y
CONFIG_FS_ENCRYPTION_ALGS=y
# CONFIG_FS_VERITY is not set
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_FANOTIFY=y
CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_PRINT_QUOTA_WARNING=y
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
# CONFIG_QFMT_V1 is not set
CONFIG_QFMT_V2=y
CONFIG_QUOTACTL=y
CONFIG_AUTOFS4_FS=y
CONFIG_AUTOFS_FS=y
CONFIG_FUSE_FS=m
CONFIG_CUSE=m
# CONFIG_VIRTIO_FS is not set
CONFIG_OVERLAY_FS=m
# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set
# CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW is not set
# CONFIG_OVERLAY_FS_INDEX is not set
# CONFIG_OVERLAY_FS_XINO_AUTO is not set
# CONFIG_OVERLAY_FS_METACOPY is not set

#
# Caches
#
CONFIG_NETFS_SUPPORT=y
CONFIG_NETFS_STATS=y
CONFIG_FSCACHE=m
CONFIG_FSCACHE_STATS=y
# CONFIG_FSCACHE_DEBUG is not set
CONFIG_CACHEFILES=m
# CONFIG_CACHEFILES_DEBUG is not set
# end of Caches

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=m
# end of CD-ROM/DVD Filesystems

#
# DOS/FAT/EXFAT/NT Filesystems
#
CONFIG_FAT_FS=m
CONFIG_MSDOS_FS=m
CONFIG_VFAT_FS=m
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
# CONFIG_FAT_DEFAULT_UTF8 is not set
CONFIG_FAT_KUNIT_TEST=m
# CONFIG_EXFAT_FS is not set
# CONFIG_NTFS_FS is not set
# CONFIG_NTFS3_FS is not set
# end of DOS/FAT/EXFAT/NT Filesystems

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_VMCORE=y
CONFIG_PROC_VMCORE_DEVICE_DUMP=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_PROC_CHILDREN=y
CONFIG_PROC_PID_ARCH_STATUS=y
CONFIG_KERNFS=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
# CONFIG_TMPFS_INODE64 is not set
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_HUGETLB_PAGE_FREE_VMEMMAP=y
# CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON is not set
CONFIG_MEMFD_CREATE=y
CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
CONFIG_CONFIGFS_FS=y
CONFIG_EFIVAR_FS=y
# end of Pseudo filesystems

CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ORANGEFS_FS is not set
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_ECRYPT_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
CONFIG_CRAMFS=m
CONFIG_CRAMFS_BLOCKDEV=y
CONFIG_SQUASHFS=m
# CONFIG_SQUASHFS_FILE_CACHE is not set
CONFIG_SQUASHFS_FILE_DIRECT=y
# CONFIG_SQUASHFS_DECOMP_SINGLE is not set
# CONFIG_SQUASHFS_DECOMP_MULTI is not set
CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU=y
CONFIG_SQUASHFS_XATTR=y
CONFIG_SQUASHFS_ZLIB=y
# CONFIG_SQUASHFS_LZ4 is not set
CONFIG_SQUASHFS_LZO=y
CONFIG_SQUASHFS_XZ=y
# CONFIG_SQUASHFS_ZSTD is not set
# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set
# CONFIG_SQUASHFS_EMBEDDED is not set
CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3
# CONFIG_VXFS_FS is not set
CONFIG_MINIX_FS=m
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_PSTORE=y
CONFIG_PSTORE_DEFAULT_KMSG_BYTES=10240
CONFIG_PSTORE_DEFLATE_COMPRESS=y
# CONFIG_PSTORE_LZO_COMPRESS is not set
# CONFIG_PSTORE_LZ4_COMPRESS is not set
# CONFIG_PSTORE_LZ4HC_COMPRESS is not set
# CONFIG_PSTORE_842_COMPRESS is not set
# CONFIG_PSTORE_ZSTD_COMPRESS is not set
CONFIG_PSTORE_COMPRESS=y
CONFIG_PSTORE_DEFLATE_COMPRESS_DEFAULT=y
CONFIG_PSTORE_COMPRESS_DEFAULT="deflate"
# CONFIG_PSTORE_CONSOLE is not set
# CONFIG_PSTORE_PMSG is not set
# CONFIG_PSTORE_FTRACE is not set
CONFIG_PSTORE_RAM=m
# CONFIG_PSTORE_BLK is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_EROFS_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
# CONFIG_NFS_V2 is not set
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=m
# CONFIG_NFS_SWAP is not set
CONFIG_NFS_V4_1=y
CONFIG_NFS_V4_2=y
CONFIG_PNFS_FILE_LAYOUT=m
CONFIG_PNFS_BLOCK=m
CONFIG_PNFS_FLEXFILE_LAYOUT=m
CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
# CONFIG_NFS_V4_1_MIGRATION is not set
CONFIG_NFS_V4_SECURITY_LABEL=y
CONFIG_ROOT_NFS=y
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
CONFIG_NFS_DEBUG=y
CONFIG_NFS_DISABLE_UDP_SUPPORT=y
# CONFIG_NFS_V4_2_READ_PLUS is not set
CONFIG_NFSD=m
CONFIG_NFSD_V2_ACL=y
CONFIG_NFSD_V3=y
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
CONFIG_NFSD_PNFS=y
# CONFIG_NFSD_BLOCKLAYOUT is not set
CONFIG_NFSD_SCSILAYOUT=y
# CONFIG_NFSD_FLEXFILELAYOUT is not set
# CONFIG_NFSD_V4_2_INTER_SSC is not set
CONFIG_NFSD_V4_SECURITY_LABEL=y
CONFIG_GRACE_PERIOD=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
CONFIG_NFS_V4_2_SSC_HELPER=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=m
CONFIG_SUNRPC_BACKCHANNEL=y
CONFIG_RPCSEC_GSS_KRB5=m
# CONFIG_SUNRPC_DISABLE_INSECURE_ENCTYPES is not set
CONFIG_SUNRPC_DEBUG=y
CONFIG_SUNRPC_XPRT_RDMA=m
CONFIG_CEPH_FS=m
# CONFIG_CEPH_FSCACHE is not set
CONFIG_CEPH_FS_POSIX_ACL=y
# CONFIG_CEPH_FS_SECURITY_LABEL is not set
CONFIG_CIFS=m
CONFIG_CIFS_STATS2=y
CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y
CONFIG_CIFS_UPCALL=y
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
CONFIG_CIFS_DEBUG=y
# CONFIG_CIFS_DEBUG2 is not set
# CONFIG_CIFS_DEBUG_DUMP_KEYS is not set
CONFIG_CIFS_DFS_UPCALL=y
# CONFIG_CIFS_SWN_UPCALL is not set
# CONFIG_CIFS_SMB_DIRECT is not set
# CONFIG_CIFS_FSCACHE is not set
# CONFIG_SMB_SERVER is not set
CONFIG_SMBFS_COMMON=m
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
CONFIG_9P_FS=y
CONFIG_9P_FS_POSIX_ACL=y
# CONFIG_9P_FS_SECURITY is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=m
CONFIG_NLS_CODEPAGE_775=m
CONFIG_NLS_CODEPAGE_850=m
CONFIG_NLS_CODEPAGE_852=m
CONFIG_NLS_CODEPAGE_855=m
CONFIG_NLS_CODEPAGE_857=m
CONFIG_NLS_CODEPAGE_860=m
CONFIG_NLS_CODEPAGE_861=m
CONFIG_NLS_CODEPAGE_862=m
CONFIG_NLS_CODEPAGE_863=m
CONFIG_NLS_CODEPAGE_864=m
CONFIG_NLS_CODEPAGE_865=m
CONFIG_NLS_CODEPAGE_866=m
CONFIG_NLS_CODEPAGE_869=m
CONFIG_NLS_CODEPAGE_936=m
CONFIG_NLS_CODEPAGE_950=m
CONFIG_NLS_CODEPAGE_932=m
CONFIG_NLS_CODEPAGE_949=m
CONFIG_NLS_CODEPAGE_874=m
CONFIG_NLS_ISO8859_8=m
CONFIG_NLS_CODEPAGE_1250=m
CONFIG_NLS_CODEPAGE_1251=m
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=m
CONFIG_NLS_ISO8859_2=m
CONFIG_NLS_ISO8859_3=m
CONFIG_NLS_ISO8859_4=m
CONFIG_NLS_ISO8859_5=m
CONFIG_NLS_ISO8859_6=m
CONFIG_NLS_ISO8859_7=m
CONFIG_NLS_ISO8859_9=m
CONFIG_NLS_ISO8859_13=m
CONFIG_NLS_ISO8859_14=m
CONFIG_NLS_ISO8859_15=m
CONFIG_NLS_KOI8_R=m
CONFIG_NLS_KOI8_U=m
CONFIG_NLS_MAC_ROMAN=m
CONFIG_NLS_MAC_CELTIC=m
CONFIG_NLS_MAC_CENTEURO=m
CONFIG_NLS_MAC_CROATIAN=m
CONFIG_NLS_MAC_CYRILLIC=m
CONFIG_NLS_MAC_GAELIC=m
CONFIG_NLS_MAC_GREEK=m
CONFIG_NLS_MAC_ICELAND=m
CONFIG_NLS_MAC_INUIT=m
CONFIG_NLS_MAC_ROMANIAN=m
CONFIG_NLS_MAC_TURKISH=m
CONFIG_NLS_UTF8=m
CONFIG_DLM=m
CONFIG_DLM_DEBUG=y
# CONFIG_UNICODE is not set
CONFIG_IO_WQ=y
# end of File systems

#
# Security options
#
CONFIG_KEYS=y
# CONFIG_KEYS_REQUEST_CACHE is not set
CONFIG_PERSISTENT_KEYRINGS=y
CONFIG_TRUSTED_KEYS=y
CONFIG_ENCRYPTED_KEYS=y
# CONFIG_KEY_DH_OPERATIONS is not set
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITY_WRITABLE_HOOKS=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
CONFIG_PAGE_TABLE_ISOLATION=y
# CONFIG_SECURITY_INFINIBAND is not set
CONFIG_SECURITY_NETWORK_XFRM=y
# CONFIG_SECURITY_PATH is not set
CONFIG_INTEL_TXT=y
CONFIG_LSM_MMAP_MIN_ADDR=65535
CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y
CONFIG_HARDENED_USERCOPY=y
CONFIG_FORTIFY_SOURCE=y
# CONFIG_STATIC_USERMODEHELPER is not set
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
CONFIG_SECURITY_SELINUX_SIDTAB_HASH_BITS=9
CONFIG_SECURITY_SELINUX_SID2STR_CACHE_SIZE=256
# CONFIG_SECURITY_SMACK is not set
# CONFIG_SECURITY_TOMOYO is not set
# CONFIG_SECURITY_APPARMOR is not set
# CONFIG_SECURITY_LOADPIN is not set
CONFIG_SECURITY_YAMA=y
# CONFIG_SECURITY_SAFESETID is not set
# CONFIG_SECURITY_LOCKDOWN_LSM is not set
# CONFIG_SECURITY_LANDLOCK is not set
CONFIG_INTEGRITY=y
CONFIG_INTEGRITY_SIGNATURE=y
CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
CONFIG_INTEGRITY_TRUSTED_KEYRING=y
# CONFIG_INTEGRITY_PLATFORM_KEYRING is not set
CONFIG_INTEGRITY_AUDIT=y
CONFIG_IMA=y
CONFIG_IMA_MEASURE_PCR_IDX=10
CONFIG_IMA_LSM_RULES=y
# CONFIG_IMA_TEMPLATE is not set
CONFIG_IMA_NG_TEMPLATE=y
# CONFIG_IMA_SIG_TEMPLATE is not set
CONFIG_IMA_DEFAULT_TEMPLATE="ima-ng"
CONFIG_IMA_DEFAULT_HASH_SHA1=y
# CONFIG_IMA_DEFAULT_HASH_SHA256 is not set
# CONFIG_IMA_DEFAULT_HASH_SHA512 is not set
CONFIG_IMA_DEFAULT_HASH="sha1"
CONFIG_IMA_WRITE_POLICY=y
# CONFIG_IMA_READ_POLICY is not set
CONFIG_IMA_APPRAISE=y
# CONFIG_IMA_ARCH_POLICY is not set
# CONFIG_IMA_APPRAISE_BUILD_POLICY is not set
CONFIG_IMA_APPRAISE_BOOTPARAM=y
# CONFIG_IMA_APPRAISE_MODSIG is not set
CONFIG_IMA_TRUSTED_KEYRING=y
# CONFIG_IMA_BLACKLIST_KEYRING is not set
# CONFIG_IMA_LOAD_X509 is not set
CONFIG_IMA_MEASURE_ASYMMETRIC_KEYS=y
CONFIG_IMA_QUEUE_EARLY_BOOT_KEYS=y
# CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT is not set
# CONFIG_IMA_DISABLE_HTABLE is not set
CONFIG_EVM=y
CONFIG_EVM_ATTR_FSUUID=y
# CONFIG_EVM_ADD_XATTRS is not set
# CONFIG_EVM_LOAD_X509 is not set
CONFIG_DEFAULT_SECURITY_SELINUX=y
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_LSM="landlock,lockdown,yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor,bpf"

#
# Kernel hardening options
#

#
# Memory initialization
#
CONFIG_INIT_STACK_NONE=y
# CONFIG_INIT_ON_ALLOC_DEFAULT_ON is not set
# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set
# end of Memory initialization
# end of Kernel hardening options
# end of Security options

CONFIG_XOR_BLOCKS=m
CONFIG_ASYNC_CORE=m
CONFIG_ASYNC_MEMCPY=m
CONFIG_ASYNC_XOR=m
CONFIG_ASYNC_PQ=m
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_SKCIPHER=y
CONFIG_CRYPTO_SKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_RNG_DEFAULT=y
CONFIG_CRYPTO_AKCIPHER2=y
CONFIG_CRYPTO_AKCIPHER=y
CONFIG_CRYPTO_KPP2=y
CONFIG_CRYPTO_KPP=m
CONFIG_CRYPTO_ACOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
CONFIG_CRYPTO_GF128MUL=y
CONFIG_CRYPTO_NULL=y
CONFIG_CRYPTO_NULL2=y
CONFIG_CRYPTO_PCRYPT=m
CONFIG_CRYPTO_CRYPTD=y
CONFIG_CRYPTO_AUTHENC=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_SIMD=y

#
# Public-key cryptography
#
CONFIG_CRYPTO_RSA=y
CONFIG_CRYPTO_DH=m
CONFIG_CRYPTO_ECC=m
CONFIG_CRYPTO_ECDH=m
# CONFIG_CRYPTO_ECDSA is not set
# CONFIG_CRYPTO_ECRDSA is not set
# CONFIG_CRYPTO_SM2 is not set
# CONFIG_CRYPTO_CURVE25519 is not set
# CONFIG_CRYPTO_CURVE25519_X86 is not set

#
# Authenticated Encryption with Associated Data
#
CONFIG_CRYPTO_CCM=m
CONFIG_CRYPTO_GCM=y
CONFIG_CRYPTO_CHACHA20POLY1305=m
# CONFIG_CRYPTO_AEGIS128 is not set
# CONFIG_CRYPTO_AEGIS128_AESNI_SSE2 is not set
CONFIG_CRYPTO_SEQIV=y
CONFIG_CRYPTO_ECHAINIV=m

#
# Block modes
#
CONFIG_CRYPTO_CBC=y
CONFIG_CRYPTO_CFB=y
CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_CTS=m
CONFIG_CRYPTO_ECB=y
CONFIG_CRYPTO_LRW=m
# CONFIG_CRYPTO_OFB is not set
CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_XTS=m
# CONFIG_CRYPTO_KEYWRAP is not set
# CONFIG_CRYPTO_NHPOLY1305_SSE2 is not set
# CONFIG_CRYPTO_NHPOLY1305_AVX2 is not set
# CONFIG_CRYPTO_ADIANTUM is not set
CONFIG_CRYPTO_ESSIV=m

#
# Hash modes
#
CONFIG_CRYPTO_CMAC=m
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_XCBC=m
CONFIG_CRYPTO_VMAC=m

#
# Digest
#
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32C_INTEL=m
CONFIG_CRYPTO_CRC32=m
CONFIG_CRYPTO_CRC32_PCLMUL=m
CONFIG_CRYPTO_XXHASH=m
CONFIG_CRYPTO_BLAKE2B=m
# CONFIG_CRYPTO_BLAKE2S is not set
# CONFIG_CRYPTO_BLAKE2S_X86 is not set
CONFIG_CRYPTO_CRCT10DIF=y
CONFIG_CRYPTO_CRCT10DIF_PCLMUL=m
CONFIG_CRYPTO_GHASH=y
CONFIG_CRYPTO_POLY1305=m
CONFIG_CRYPTO_POLY1305_X86_64=m
CONFIG_CRYPTO_MD4=m
CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_MICHAEL_MIC=m
CONFIG_CRYPTO_RMD160=m
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_SHA1_SSSE3=y
CONFIG_CRYPTO_SHA256_SSSE3=y
CONFIG_CRYPTO_SHA512_SSSE3=m
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_SHA512=y
CONFIG_CRYPTO_SHA3=m
# CONFIG_CRYPTO_SM3 is not set
# CONFIG_CRYPTO_STREEBOG is not set
CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL=m

#
# Ciphers
#
CONFIG_CRYPTO_AES=y
# CONFIG_CRYPTO_AES_TI is not set
CONFIG_CRYPTO_AES_NI_INTEL=y
CONFIG_CRYPTO_ANUBIS=m
CONFIG_CRYPTO_ARC4=m
CONFIG_CRYPTO_BLOWFISH=m
CONFIG_CRYPTO_BLOWFISH_COMMON=m
CONFIG_CRYPTO_BLOWFISH_X86_64=m
CONFIG_CRYPTO_CAMELLIA=m
CONFIG_CRYPTO_CAMELLIA_X86_64=m
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64=m
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64=m
CONFIG_CRYPTO_CAST_COMMON=m
CONFIG_CRYPTO_CAST5=m
CONFIG_CRYPTO_CAST5_AVX_X86_64=m
CONFIG_CRYPTO_CAST6=m
CONFIG_CRYPTO_CAST6_AVX_X86_64=m
CONFIG_CRYPTO_DES=m
# CONFIG_CRYPTO_DES3_EDE_X86_64 is not set
CONFIG_CRYPTO_FCRYPT=m
CONFIG_CRYPTO_KHAZAD=m
CONFIG_CRYPTO_CHACHA20=m
CONFIG_CRYPTO_CHACHA20_X86_64=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SERPENT_SSE2_X86_64=m
CONFIG_CRYPTO_SERPENT_AVX_X86_64=m
CONFIG_CRYPTO_SERPENT_AVX2_X86_64=m
# CONFIG_CRYPTO_SM4 is not set
# CONFIG_CRYPTO_SM4_AESNI_AVX_X86_64 is not set
# CONFIG_CRYPTO_SM4_AESNI_AVX2_X86_64 is not set
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_TWOFISH_COMMON=m
CONFIG_CRYPTO_TWOFISH_X86_64=m
CONFIG_CRYPTO_TWOFISH_X86_64_3WAY=m
CONFIG_CRYPTO_TWOFISH_AVX_X86_64=m

#
# Compression
#
CONFIG_CRYPTO_DEFLATE=y
CONFIG_CRYPTO_LZO=y
# CONFIG_CRYPTO_842 is not set
# CONFIG_CRYPTO_LZ4 is not set
# CONFIG_CRYPTO_LZ4HC is not set
# CONFIG_CRYPTO_ZSTD is not set

#
# Random Number Generation
#
CONFIG_CRYPTO_ANSI_CPRNG=m
CONFIG_CRYPTO_DRBG_MENU=y
CONFIG_CRYPTO_DRBG_HMAC=y
CONFIG_CRYPTO_DRBG_HASH=y
CONFIG_CRYPTO_DRBG_CTR=y
CONFIG_CRYPTO_DRBG=y
CONFIG_CRYPTO_JITTERENTROPY=y
CONFIG_CRYPTO_USER_API=y
CONFIG_CRYPTO_USER_API_HASH=y
CONFIG_CRYPTO_USER_API_SKCIPHER=y
CONFIG_CRYPTO_USER_API_RNG=y
# CONFIG_CRYPTO_USER_API_RNG_CAVP is not set
CONFIG_CRYPTO_USER_API_AEAD=y
CONFIG_CRYPTO_USER_API_ENABLE_OBSOLETE=y
# CONFIG_CRYPTO_STATS is not set
CONFIG_CRYPTO_HASH_INFO=y

#
# Crypto library routines
#
CONFIG_CRYPTO_LIB_AES=y
CONFIG_CRYPTO_LIB_ARC4=m
# CONFIG_CRYPTO_LIB_BLAKE2S is not set
CONFIG_CRYPTO_ARCH_HAVE_LIB_CHACHA=m
CONFIG_CRYPTO_LIB_CHACHA_GENERIC=m
# CONFIG_CRYPTO_LIB_CHACHA is not set
# CONFIG_CRYPTO_LIB_CURVE25519 is not set
CONFIG_CRYPTO_LIB_DES=m
CONFIG_CRYPTO_LIB_POLY1305_RSIZE=11
CONFIG_CRYPTO_ARCH_HAVE_LIB_POLY1305=m
CONFIG_CRYPTO_LIB_POLY1305_GENERIC=m
# CONFIG_CRYPTO_LIB_POLY1305 is not set
# CONFIG_CRYPTO_LIB_CHACHA20POLY1305 is not set
CONFIG_CRYPTO_LIB_SHA256=y
CONFIG_CRYPTO_HW=y
CONFIG_CRYPTO_DEV_PADLOCK=m
CONFIG_CRYPTO_DEV_PADLOCK_AES=m
CONFIG_CRYPTO_DEV_PADLOCK_SHA=m
# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set
# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set
CONFIG_CRYPTO_DEV_CCP=y
CONFIG_CRYPTO_DEV_CCP_DD=m
CONFIG_CRYPTO_DEV_SP_CCP=y
CONFIG_CRYPTO_DEV_CCP_CRYPTO=m
CONFIG_CRYPTO_DEV_SP_PSP=y
# CONFIG_CRYPTO_DEV_CCP_DEBUGFS is not set
CONFIG_CRYPTO_DEV_QAT=m
CONFIG_CRYPTO_DEV_QAT_DH895xCC=m
CONFIG_CRYPTO_DEV_QAT_C3XXX=m
CONFIG_CRYPTO_DEV_QAT_C62X=m
# CONFIG_CRYPTO_DEV_QAT_4XXX is not set
CONFIG_CRYPTO_DEV_QAT_DH895xCCVF=m
CONFIG_CRYPTO_DEV_QAT_C3XXXVF=m
CONFIG_CRYPTO_DEV_QAT_C62XVF=m
CONFIG_CRYPTO_DEV_NITROX=m
CONFIG_CRYPTO_DEV_NITROX_CNN55XX=m
# CONFIG_CRYPTO_DEV_VIRTIO is not set
# CONFIG_CRYPTO_DEV_SAFEXCEL is not set
# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
# CONFIG_ASYMMETRIC_TPM_KEY_SUBTYPE is not set
CONFIG_X509_CERTIFICATE_PARSER=y
# CONFIG_PKCS8_PRIVATE_KEY_PARSER is not set
CONFIG_PKCS7_MESSAGE_PARSER=y
# CONFIG_PKCS7_TEST_KEY is not set
CONFIG_SIGNED_PE_FILE_VERIFICATION=y

#
# Certificates for signature checking
#
CONFIG_MODULE_SIG_KEY="certs/signing_key.pem"
CONFIG_MODULE_SIG_KEY_TYPE_RSA=y
# CONFIG_MODULE_SIG_KEY_TYPE_ECDSA is not set
CONFIG_SYSTEM_TRUSTED_KEYRING=y
CONFIG_SYSTEM_TRUSTED_KEYS=""
# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
CONFIG_SYSTEM_BLACKLIST_KEYRING=y
CONFIG_SYSTEM_BLACKLIST_HASH_LIST=""
# CONFIG_SYSTEM_REVOCATION_LIST is not set
# end of Certificates for signature checking

CONFIG_BINARY_PRINTF=y

#
# Library routines
#
CONFIG_RAID6_PQ=m
CONFIG_RAID6_PQ_BENCHMARK=y
CONFIG_LINEAR_RANGES=m
# CONFIG_PACKING is not set
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_NET_UTILS=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_CORDIC=m
# CONFIG_PRIME_NUMBERS is not set
CONFIG_RATIONAL=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
CONFIG_ARCH_HAS_FAST_MULTIPLIER=y
CONFIG_ARCH_USE_SYM_ANNOTATIONS=y
CONFIG_CRC_CCITT=y
CONFIG_CRC16=y
CONFIG_CRC_T10DIF=y
CONFIG_CRC_ITU_T=m
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC64 is not set
# CONFIG_CRC4 is not set
CONFIG_CRC7=m
CONFIG_LIBCRC32C=m
CONFIG_CRC8=m
CONFIG_XXHASH=y
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_LZ4_DECOMPRESS=y
CONFIG_ZSTD_COMPRESS=m
CONFIG_ZSTD_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
CONFIG_XZ_DEC_POWERPC=y
CONFIG_XZ_DEC_IA64=y
CONFIG_XZ_DEC_ARM=y
CONFIG_XZ_DEC_ARMTHUMB=y
CONFIG_XZ_DEC_SPARC=y
# CONFIG_XZ_DEC_MICROLZMA is not set
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_DECOMPRESS_LZ4=y
CONFIG_DECOMPRESS_ZSTD=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_REED_SOLOMON=m
CONFIG_REED_SOLOMON_ENC8=y
CONFIG_REED_SOLOMON_DEC8=y
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=m
CONFIG_TEXTSEARCH_BM=m
CONFIG_TEXTSEARCH_FSM=m
CONFIG_INTERVAL_TREE=y
CONFIG_XARRAY_MULTI=y
CONFIG_ASSOCIATIVE_ARRAY=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT_MAP=y
CONFIG_HAS_DMA=y
CONFIG_DMA_OPS=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_SWIOTLB=y
CONFIG_DMA_CMA=y
# CONFIG_DMA_PERNUMA_CMA is not set

#
# Default contiguous memory area size:
#
CONFIG_CMA_SIZE_MBYTES=0
CONFIG_CMA_SIZE_SEL_MBYTES=y
# CONFIG_CMA_SIZE_SEL_PERCENTAGE is not set
# CONFIG_CMA_SIZE_SEL_MIN is not set
# CONFIG_CMA_SIZE_SEL_MAX is not set
CONFIG_CMA_ALIGNMENT=8
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_DMA_MAP_BENCHMARK is not set
CONFIG_SGL_ALLOC=y
CONFIG_CHECK_SIGNATURE=y
CONFIG_CPUMASK_OFFSTACK=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_GLOB=y
# CONFIG_GLOB_SELFTEST is not set
CONFIG_NLATTR=y
CONFIG_CLZ_TAB=y
CONFIG_IRQ_POLL=y
CONFIG_MPILIB=y
CONFIG_SIGNATURE=y
CONFIG_DIMLIB=y
CONFIG_OID_REGISTRY=y
CONFIG_UCS2_STRING=y
CONFIG_HAVE_GENERIC_VDSO=y
CONFIG_GENERIC_GETTIMEOFDAY=y
CONFIG_GENERIC_VDSO_TIME_NS=y
CONFIG_FONT_SUPPORT=y
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
CONFIG_SG_POOL=y
CONFIG_ARCH_HAS_PMEM_API=y
CONFIG_MEMREGION=y
CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE=y
CONFIG_ARCH_HAS_COPY_MC=y
CONFIG_ARCH_STACKWALK=y
CONFIG_STACKDEPOT=y
CONFIG_STACK_HASH_ORDER=20
CONFIG_SBITMAP=y
# end of Library routines

CONFIG_ASN1_ENCODER=y

#
# Kernel hacking
#

#
# printk and dmesg options
#
CONFIG_PRINTK_TIME=y
CONFIG_PRINTK_CALLER=y
# CONFIG_STACKTRACE_BUILD_ID is not set
CONFIG_CONSOLE_LOGLEVEL_DEFAULT=7
CONFIG_CONSOLE_LOGLEVEL_QUIET=4
CONFIG_MESSAGE_LOGLEVEL_DEFAULT=4
CONFIG_BOOT_PRINTK_DELAY=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DYNAMIC_DEBUG_CORE=y
CONFIG_SYMBOLIC_ERRNAME=y
CONFIG_DEBUG_BUGVERBOSE=y
# end of printk and dmesg options

#
# Compile-time checks and compiler options
#
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_REDUCED=y
# CONFIG_DEBUG_INFO_COMPRESSED is not set
# CONFIG_DEBUG_INFO_SPLIT is not set
# CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT is not set
CONFIG_DEBUG_INFO_DWARF4=y
# CONFIG_DEBUG_INFO_DWARF5 is not set
CONFIG_PAHOLE_HAS_SPLIT_BTF=y
# CONFIG_GDB_SCRIPTS is not set
CONFIG_FRAME_WARN=8192
CONFIG_STRIP_ASM_SYMS=y
# CONFIG_READABLE_ASM is not set
# CONFIG_HEADERS_INSTALL is not set
CONFIG_DEBUG_SECTION_MISMATCH=y
CONFIG_SECTION_MISMATCH_WARN_ONLY=y
CONFIG_STACK_VALIDATION=y
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
# end of Compile-time checks and compiler options

#
# Generic Kernel Debugging Instruments
#
CONFIG_MAGIC_SYSRQ=y
CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x1
CONFIG_MAGIC_SYSRQ_SERIAL=y
CONFIG_MAGIC_SYSRQ_SERIAL_SEQUENCE=""
CONFIG_DEBUG_FS=y
CONFIG_DEBUG_FS_ALLOW_ALL=y
# CONFIG_DEBUG_FS_DISALLOW_MOUNT is not set
# CONFIG_DEBUG_FS_ALLOW_NONE is not set
CONFIG_HAVE_ARCH_KGDB=y
# CONFIG_KGDB is not set
CONFIG_ARCH_HAS_UBSAN_SANITIZE_ALL=y
CONFIG_UBSAN=y
# CONFIG_UBSAN_TRAP is not set
CONFIG_CC_HAS_UBSAN_BOUNDS=y
CONFIG_UBSAN_BOUNDS=y
CONFIG_UBSAN_ONLY_BOUNDS=y
CONFIG_UBSAN_SHIFT=y
# CONFIG_UBSAN_DIV_ZERO is not set
# CONFIG_UBSAN_BOOL is not set
# CONFIG_UBSAN_ENUM is not set
# CONFIG_UBSAN_ALIGNMENT is not set
CONFIG_UBSAN_SANITIZE_ALL=y
# CONFIG_TEST_UBSAN is not set
CONFIG_HAVE_ARCH_KCSAN=y
# end of Generic Kernel Debugging Instruments

CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MISC=y

#
# Memory Debugging
#
CONFIG_PAGE_EXTENSION=y
# CONFIG_DEBUG_PAGEALLOC is not set
CONFIG_PAGE_OWNER=y
# CONFIG_PAGE_POISONING is not set
# CONFIG_DEBUG_PAGE_REF is not set
# CONFIG_DEBUG_RODATA_TEST is not set
CONFIG_ARCH_HAS_DEBUG_WX=y
# CONFIG_DEBUG_WX is not set
CONFIG_GENERIC_PTDUMP=y
# CONFIG_PTDUMP_DEBUGFS is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_SLUB_DEBUG_ON is not set
# CONFIG_SLUB_STATS is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
# CONFIG_DEBUG_KMEMLEAK is not set
# CONFIG_DEBUG_STACK_USAGE is not set
# CONFIG_SCHED_STACK_END_CHECK is not set
CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE=y
# CONFIG_DEBUG_VM is not set
# CONFIG_DEBUG_VM_PGTABLE is not set
CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y
# CONFIG_DEBUG_VIRTUAL is not set
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_PER_CPU_MAPS is not set
CONFIG_HAVE_ARCH_KASAN=y
CONFIG_HAVE_ARCH_KASAN_VMALLOC=y
CONFIG_CC_HAS_KASAN_GENERIC=y
CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y
CONFIG_KASAN=y
CONFIG_KASAN_GENERIC=y
# CONFIG_KASAN_OUTLINE is not set
CONFIG_KASAN_INLINE=y
CONFIG_KASAN_STACK=y
CONFIG_KASAN_VMALLOC=y
# CONFIG_KASAN_KUNIT_TEST is not set
# CONFIG_KASAN_MODULE_TEST is not set
CONFIG_HAVE_ARCH_KFENCE=y
# CONFIG_KFENCE is not set
# end of Memory Debugging

CONFIG_DEBUG_SHIRQ=y

#
# Debug Oops, Lockups and Hangs
#
CONFIG_PANIC_ON_OOPS=y
CONFIG_PANIC_ON_OOPS_VALUE=1
CONFIG_PANIC_TIMEOUT=0
CONFIG_LOCKUP_DETECTOR=y
CONFIG_SOFTLOCKUP_DETECTOR=y
# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0
CONFIG_HARDLOCKUP_DETECTOR_PERF=y
CONFIG_HARDLOCKUP_CHECK_TIMESTAMP=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=1
CONFIG_DETECT_HUNG_TASK=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=480
# CONFIG_BOOTPARAM_HUNG_TASK_PANIC is not set
CONFIG_BOOTPARAM_HUNG_TASK_PANIC_VALUE=0
CONFIG_WQ_WATCHDOG=y
# CONFIG_TEST_LOCKUP is not set
# end of Debug Oops, Lockups and Hangs

#
# Scheduler Debugging
#
CONFIG_SCHED_DEBUG=y
CONFIG_SCHED_INFO=y
CONFIG_SCHEDSTATS=y
# end of Scheduler Debugging

# CONFIG_DEBUG_TIMEKEEPING is not set

#
# Lock Debugging (spinlocks, mutexes, etc...)
#
CONFIG_LOCK_DEBUGGING_SUPPORT=y
# CONFIG_PROVE_LOCKING is not set
# CONFIG_LOCK_STAT is not set
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
# CONFIG_DEBUG_RWSEMS is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
# CONFIG_LOCK_TORTURE_TEST is not set
# CONFIG_WW_MUTEX_SELFTEST is not set
# CONFIG_SCF_TORTURE_TEST is not set
# CONFIG_CSD_LOCK_WAIT_DEBUG is not set
# end of Lock Debugging (spinlocks, mutexes, etc...)

# CONFIG_DEBUG_IRQFLAGS is not set
CONFIG_STACKTRACE=y
# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
# CONFIG_DEBUG_KOBJECT is not set

#
# Debug kernel data structures
#
CONFIG_DEBUG_LIST=y
# CONFIG_DEBUG_PLIST is not set
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
CONFIG_BUG_ON_DATA_CORRUPTION=y
# end of Debug kernel data structures

# CONFIG_DEBUG_CREDENTIALS is not set

#
# RCU Debugging
#
CONFIG_TORTURE_TEST=m
# CONFIG_RCU_SCALE_TEST is not set
# CONFIG_RCU_TORTURE_TEST is not set
CONFIG_RCU_REF_SCALE_TEST=m
CONFIG_RCU_CPU_STALL_TIMEOUT=60
# CONFIG_RCU_TRACE is not set
# CONFIG_RCU_EQS_DEBUG is not set
# end of RCU Debugging

# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set
# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set
CONFIG_LATENCYTOP=y
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_OBJTOOL_MCOUNT=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACER_MAX_TRACE=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
# CONFIG_BOOTTIME_TRACING is not set
CONFIG_FUNCTION_TRACER=y
CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_DYNAMIC_FTRACE=y
CONFIG_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_DYNAMIC_FTRACE_WITH_ARGS=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_STACK_TRACER=y
# CONFIG_IRQSOFF_TRACER is not set
CONFIG_SCHED_TRACER=y
CONFIG_HWLAT_TRACER=y
# CONFIG_OSNOISE_TRACER is not set
# CONFIG_TIMERLAT_TRACER is not set
# CONFIG_MMIOTRACE is not set
CONFIG_FTRACE_SYSCALLS=y
CONFIG_TRACER_SNAPSHOT=y
# CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP is not set
CONFIG_BRANCH_PROFILE_NONE=y
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_KPROBE_EVENTS=y
# CONFIG_KPROBE_EVENTS_ON_NOTRACE is not set
CONFIG_UPROBE_EVENTS=y
CONFIG_BPF_EVENTS=y
CONFIG_DYNAMIC_EVENTS=y
CONFIG_PROBE_EVENTS=y
# CONFIG_BPF_KPROBE_OVERRIDE is not set
CONFIG_FTRACE_MCOUNT_RECORD=y
CONFIG_FTRACE_MCOUNT_USE_CC=y
CONFIG_TRACING_MAP=y
CONFIG_SYNTH_EVENTS=y
CONFIG_HIST_TRIGGERS=y
# CONFIG_TRACE_EVENT_INJECT is not set
# CONFIG_TRACEPOINT_BENCHMARK is not set
CONFIG_RING_BUFFER_BENCHMARK=m
# CONFIG_TRACE_EVAL_MAP_FILE is not set
# CONFIG_FTRACE_RECORD_RECURSION is not set
# CONFIG_FTRACE_STARTUP_TEST is not set
# CONFIG_RING_BUFFER_STARTUP_TEST is not set
# CONFIG_RING_BUFFER_VALIDATE_TIME_DELTAS is not set
# CONFIG_PREEMPTIRQ_DELAY_TEST is not set
# CONFIG_SYNTH_EVENT_GEN_TEST is not set
# CONFIG_KPROBE_EVENT_GEN_TEST is not set
# CONFIG_HIST_TRIGGERS_DEBUG is not set
CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
# CONFIG_SAMPLES is not set
CONFIG_HAVE_SAMPLE_FTRACE_DIRECT=y
CONFIG_HAVE_SAMPLE_FTRACE_DIRECT_MULTI=y
CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
CONFIG_STRICT_DEVMEM=y
# CONFIG_IO_STRICT_DEVMEM is not set

#
# x86 Debugging
#
CONFIG_TRACE_IRQFLAGS_NMI_SUPPORT=y
CONFIG_EARLY_PRINTK_USB=y
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_DBGP=y
CONFIG_EARLY_PRINTK_USB_XDBC=y
# CONFIG_EFI_PGT_DUMP is not set
# CONFIG_DEBUG_TLBFLUSH is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
CONFIG_X86_DECODER_SELFTEST=y
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEBUG_BOOT_PARAMS=y
# CONFIG_CPA_DEBUG is not set
# CONFIG_DEBUG_ENTRY is not set
# CONFIG_DEBUG_NMI_SELFTEST is not set
# CONFIG_X86_DEBUG_FPU is not set
# CONFIG_PUNIT_ATOM_DEBUG is not set
CONFIG_UNWINDER_ORC=y
# CONFIG_UNWINDER_FRAME_POINTER is not set
# end of x86 Debugging

#
# Kernel Testing and Coverage
#
CONFIG_KUNIT=m
CONFIG_KUNIT_DEBUGFS=y
# CONFIG_KUNIT_TEST is not set
# CONFIG_KUNIT_EXAMPLE_TEST is not set
CONFIG_KUNIT_ALL_TESTS=m
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
CONFIG_FUNCTION_ERROR_INJECTION=y
CONFIG_FAULT_INJECTION=y
# CONFIG_FAILSLAB is not set
# CONFIG_FAIL_PAGE_ALLOC is not set
# CONFIG_FAULT_INJECTION_USERCOPY is not set
CONFIG_FAIL_MAKE_REQUEST=y
# CONFIG_FAIL_IO_TIMEOUT is not set
# CONFIG_FAIL_FUTEX is not set
CONFIG_FAULT_INJECTION_DEBUG_FS=y
# CONFIG_FAIL_FUNCTION is not set
# CONFIG_FAIL_MMC_REQUEST is not set
# CONFIG_FAIL_SUNRPC is not set
CONFIG_ARCH_HAS_KCOV=y
CONFIG_CC_HAS_SANCOV_TRACE_PC=y
# CONFIG_KCOV is not set
CONFIG_RUNTIME_TESTING_MENU=y
# CONFIG_LKDTM is not set
# CONFIG_TEST_LIST_SORT is not set
# CONFIG_TEST_MIN_HEAP is not set
# CONFIG_TEST_SORT is not set
# CONFIG_TEST_DIV64 is not set
# CONFIG_KPROBES_SANITY_TEST is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_REED_SOLOMON_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
# CONFIG_PERCPU_TEST is not set
CONFIG_ATOMIC64_SELFTEST=y
# CONFIG_ASYNC_RAID6_TEST is not set
# CONFIG_TEST_HEXDUMP is not set
# CONFIG_STRING_SELFTEST is not set
# CONFIG_TEST_STRING_HELPERS is not set
# CONFIG_TEST_STRSCPY is not set
# CONFIG_TEST_KSTRTOX is not set
# CONFIG_TEST_PRINTF is not set
# CONFIG_TEST_SCANF is not set
# CONFIG_TEST_BITMAP is not set
# CONFIG_TEST_UUID is not set
# CONFIG_TEST_XARRAY is not set
# CONFIG_TEST_OVERFLOW is not set
# CONFIG_TEST_RHASHTABLE is not set
# CONFIG_TEST_HASH is not set
# CONFIG_TEST_IDA is not set
# CONFIG_TEST_LKM is not set
# CONFIG_TEST_BITOPS is not set
# CONFIG_TEST_VMALLOC is not set
# CONFIG_TEST_USER_COPY is not set
CONFIG_TEST_BPF=m
# CONFIG_TEST_BLACKHOLE_DEV is not set
# CONFIG_FIND_BIT_BENCHMARK is not set
# CONFIG_TEST_FIRMWARE is not set
# CONFIG_TEST_SYSCTL is not set
CONFIG_BITFIELD_KUNIT=m
CONFIG_RESOURCE_KUNIT_TEST=m
CONFIG_SYSCTL_KUNIT_TEST=m
CONFIG_LIST_KUNIT_TEST=m
CONFIG_LINEAR_RANGES_TEST=m
CONFIG_CMDLINE_KUNIT_TEST=m
CONFIG_BITS_TEST=m
CONFIG_SLUB_KUNIT_TEST=m
CONFIG_RATIONAL_KUNIT_TEST=m
CONFIG_MEMCPY_KUNIT_TEST=m
# CONFIG_TEST_UDELAY is not set
# CONFIG_TEST_STATIC_KEYS is not set
# CONFIG_TEST_KMOD is not set
# CONFIG_TEST_MEMCAT_P is not set
# CONFIG_TEST_LIVEPATCH is not set
# CONFIG_TEST_STACKINIT is not set
# CONFIG_TEST_MEMINIT is not set
# CONFIG_TEST_HMM is not set
# CONFIG_TEST_FREE_PAGES is not set
# CONFIG_TEST_FPU is not set
# CONFIG_TEST_CLOCKSOURCE_WATCHDOG is not set
CONFIG_ARCH_USE_MEMTEST=y
# CONFIG_MEMTEST is not set
# CONFIG_HYPERV_TESTING is not set
# end of Kernel Testing and Coverage
# end of Kernel hacking
#!/bin/sh

export_top_env()
{
	export suite='xfstests'
	export testcase='xfstests'
	export category='functional'
	export need_memory='3G'
	export job_origin='xfstests-xfs-part2.yaml'
	export queue_cmdline_keys='branch
commit'
	export queue='validate'
	export testbox='lkp-hsw-d02'
	export tbox_group='lkp-hsw-d02'
	export kconfig='x86_64-rhel-8.3-func'
	export submit_id='61d0a4e91c34349adf8b16e4'
	export job_file='/lkp/jobs/scheduled/lkp-hsw-d02/xfstests-4HDD-xfs-xfs-reflink-21-ucode=0x28-debian-10.4-x86_64-20200603.cgz-f5934dda5442999d71eea07d9a324b259e5a36a5-20220102-105183-lleg4s-5.yaml'
	export id='93e44a9f51f7e7f6237a37b216c6ee4c029d5fa1'
	export queuer_version='/lkp-src'
	export model='Haswell'
	export nr_node=1
	export nr_cpu=8
	export memory='6G'
	export nr_ssd_partitions=1
	export nr_hdd_partitions=6
	export hdd_partitions='/dev/disk/by-id/ata-ST4000NM0035-1V4107_ZC12NP6D-part*'
	export ssd_partitions='/dev/disk/by-id/ata-INTEL_SSDSC2BB800G4_PHWL4171000P800RGN-part3'
	export swap_partitions='/dev/disk/by-id/ata-INTEL_SSDSC2BB800G4_PHWL4171000P800RGN-part1'
	export rootfs_partition='/dev/disk/by-id/ata-INTEL_SSDSC2BB800G4_PHWL4171000P800RGN-part2'
	export brand='Intel(R) Core(TM) i7-4790 v3 @ 3.60GHz'
	export need_kconfig='BLK_DEV_SD
SCSI
{"BLOCK"=>"y"}
SATA_AHCI
SATA_AHCI_PLATFORM
ATA
{"PCI"=>"y"}
XFS_FS'
	export commit='f5934dda5442999d71eea07d9a324b259e5a36a5'
	export ucode='0x28'
	export need_kconfig_hw='{"E1000E"=>"y"}
SATA_AHCI
DRM_I915'
	export bisect_dmesg=true
	export enqueue_time='2022-01-02 03:00:58 +0800'
	export _id='61d0a4fc1c34349adf8b16e8'
	export _rt='/result/xfstests/4HDD-xfs-xfs-reflink-21-ucode=0x28/lkp-hsw-d02/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3-func/gcc-9/f5934dda5442999d71eea07d9a324b259e5a36a5'
	export user='lkp'
	export compiler='gcc-9'
	export LKP_SERVER='internal-lkp-server'
	export head_commit='7e5d225545413a1c6cfc6ec1f21c32db93ae0faa'
	export base_commit='fc74e0a40e4f9fd0468e34045b0c45bba11dcbb2'
	export branch='linux-review/trondmy-kernel-org/iomap-Address-soft-lockup-in-iomap_finish_ioend/20211231-034313'
	export rootfs='debian-10.4-x86_64-20200603.cgz'
	export result_root='/result/xfstests/4HDD-xfs-xfs-reflink-21-ucode=0x28/lkp-hsw-d02/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3-func/gcc-9/f5934dda5442999d71eea07d9a324b259e5a36a5/3'
	export scheduler_version='/lkp/lkp/.src-20211231-112748'
	export arch='x86_64'
	export max_uptime=2100
	export initrd='/osimage/debian/debian-10.4-x86_64-20200603.cgz'
	export bootloader_append='root=/dev/ram0
RESULT_ROOT=/result/xfstests/4HDD-xfs-xfs-reflink-21-ucode=0x28/lkp-hsw-d02/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3-func/gcc-9/f5934dda5442999d71eea07d9a324b259e5a36a5/3
BOOT_IMAGE=/pkg/linux/x86_64-rhel-8.3-func/gcc-9/f5934dda5442999d71eea07d9a324b259e5a36a5/vmlinuz-5.16.0-rc5-00009-gf5934dda5442
branch=linux-review/trondmy-kernel-org/iomap-Address-soft-lockup-in-iomap_finish_ioend/20211231-034313
job=/lkp/jobs/scheduled/lkp-hsw-d02/xfstests-4HDD-xfs-xfs-reflink-21-ucode=0x28-debian-10.4-x86_64-20200603.cgz-f5934dda5442999d71eea07d9a324b259e5a36a5-20220102-105183-lleg4s-5.yaml
user=lkp
ARCH=x86_64
kconfig=x86_64-rhel-8.3-func
commit=f5934dda5442999d71eea07d9a324b259e5a36a5
max_uptime=2100
LKP_SERVER=internal-lkp-server
nokaslr
selinux=0
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
net.ifnames=0
printk.devkmsg=on
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
drbd.minor_count=8
systemd.log_level=err
ignore_loglevel
console=tty0
earlyprintk=ttyS0,115200
console=ttyS0,115200
vga=normal
rw'
	export modules_initrd='/pkg/linux/x86_64-rhel-8.3-func/gcc-9/f5934dda5442999d71eea07d9a324b259e5a36a5/modules.cgz'
	export bm_initrd='/osimage/deps/debian-10.4-x86_64-20200603.cgz/run-ipconfig_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/lkp_20211221.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/rsync-rootfs_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/fs_20210917.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/xfstests_20211227.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/xfstests-x86_64-972d710-1_20211231.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/hw_20200715.cgz'
	export ucode_initrd='/osimage/ucode/intel-ucode-20210222.cgz'
	export lkp_initrd='/osimage/user/lkp/lkp-x86_64.cgz'
	export site='inn'
	export LKP_CGI_PORT=80
	export LKP_CIFS_PORT=139
	export last_kernel='4.20.0'
	export repeat_to=6
	export schedule_notify_address=
	export kernel='/pkg/linux/x86_64-rhel-8.3-func/gcc-9/f5934dda5442999d71eea07d9a324b259e5a36a5/vmlinuz-5.16.0-rc5-00009-gf5934dda5442'
	export dequeue_time='2022-01-02 03:10:51 +0800'
	export job_initrd='/lkp/jobs/scheduled/lkp-hsw-d02/xfstests-4HDD-xfs-xfs-reflink-21-ucode=0x28-debian-10.4-x86_64-20200603.cgz-f5934dda5442999d71eea07d9a324b259e5a36a5-20220102-105183-lleg4s-5.cgz'

	[ -n "$LKP_SRC" ] ||
	export LKP_SRC=/lkp/${user:-lkp}/src
}

run_job()
{
	echo $$ > $TMP/run-job.pid

	. $LKP_SRC/lib/http.sh
	. $LKP_SRC/lib/job.sh
	. $LKP_SRC/lib/env.sh

	export_top_env

	run_setup nr_hdd=4 $LKP_SRC/setup/disk

	run_setup fs='xfs' $LKP_SRC/setup/fs

	run_monitor $LKP_SRC/monitors/wrapper kmsg
	run_monitor $LKP_SRC/monitors/wrapper heartbeat
	run_monitor $LKP_SRC/monitors/wrapper meminfo
	run_monitor $LKP_SRC/monitors/wrapper oom-killer
	run_monitor $LKP_SRC/monitors/plain/watchdog

	run_test test='xfs-reflink-21' $LKP_SRC/tests/wrapper xfstests
}

extract_stats()
{
	export stats_part_begin=
	export stats_part_end=

	env test='xfs-reflink-21' $LKP_SRC/stats/wrapper xfstests
	$LKP_SRC/stats/wrapper kmsg
	$LKP_SRC/stats/wrapper meminfo

	$LKP_SRC/stats/wrapper time xfstests.time
	$LKP_SRC/stats/wrapper dmesg
	$LKP_SRC/stats/wrapper kmsg
	$LKP_SRC/stats/wrapper last_state
	$LKP_SRC/stats/wrapper stderr
	$LKP_SRC/stats/wrapper time
}

"$@"
2022-01-01 19:12:20 export TEST_DIR=/fs/sda1
2022-01-01 19:12:20 export TEST_DEV=/dev/sda1
2022-01-01 19:12:20 export FSTYP=xfs
2022-01-01 19:12:20 export SCRATCH_MNT=/fs/scratch
2022-01-01 19:12:20 mkdir /fs/scratch -p
2022-01-01 19:12:20 export SCRATCH_DEV=/dev/sda4
2022-01-01 19:12:20 export SCRATCH_LOGDEV=/dev/sda2
2022-01-01 19:12:20 export SCRATCH_XFS_LIST_METADATA_FIELDS=u3.sfdir3.hdr.parent.i4
2022-01-01 19:12:20 export SCRATCH_XFS_LIST_FUZZ_VERBS=random
2022-01-01 19:12:20 export MKFS_OPTIONS=-mreflink=1
2022-01-01 19:12:20 sed "s:^:xfs/:" //lkp/benchmarks/xfstests/tests/xfs-reflink-21
2022-01-01 19:12:20 ./check xfs/420 xfs/421 xfs/435
FSTYP         -- xfs (debug)
PLATFORM      -- Linux/x86_64 lkp-hsw-d02 5.16.0-rc5-00009-gf5934dda5442 #1 SMP Sat Jan 1 21:17:33 CST 2022
MKFS_OPTIONS  -- -f -mreflink=1 /dev/sda4
MOUNT_OPTIONS -- /dev/sda4 /fs/scratch

xfs/420	_check_dmesg: something found in dmesg (see /lkp/benchmarks/xfstests/results//xfs/420.dmesg)

xfs/421	 4s
xfs/435	 5s
Ran: xfs/420 xfs/421 xfs/435
Failures: xfs/420
Failed 1 of 3 tests
---
:#! jobs/xfstests-xfs-part2.yaml:
suite: xfstests
testcase: xfstests
category: functional
need_memory: 3G
disk: 4HDD
fs: xfs
xfstests:
  test: xfs-reflink-21
job_origin: xfstests-xfs-part2.yaml
:#! queue options:
queue_cmdline_keys:
- branch
- commit
queue: bisect
testbox: lkp-hsw-d02
tbox_group: lkp-hsw-d02
kconfig: x86_64-rhel-8.3-func
submit_id: 61d0522c1c34346a8756ad43
job_file: "/lkp/jobs/scheduled/lkp-hsw-d02/xfstests-4HDD-xfs-xfs-reflink-21-ucode=0x28-debian-10.4-x86_64-20200603.cgz-f5934dda5442999d71eea07d9a324b259e5a36a5-20220101-92807-93r5uv-0.yaml"
id: 9c7dec79615bf06e051a453cc873e84f16878c0d
queuer_version: "/lkp-src"
:#! hosts/lkp-hsw-d02:
model: Haswell
nr_node: 1
nr_cpu: 8
memory: 6G
nr_ssd_partitions: 1
nr_hdd_partitions: 6
hdd_partitions: "/dev/disk/by-id/ata-ST4000NM0035-1V4107_ZC12NP6D-part*"
ssd_partitions: "/dev/disk/by-id/ata-INTEL_SSDSC2BB800G4_PHWL4171000P800RGN-part3"
swap_partitions: "/dev/disk/by-id/ata-INTEL_SSDSC2BB800G4_PHWL4171000P800RGN-part1"
rootfs_partition: "/dev/disk/by-id/ata-INTEL_SSDSC2BB800G4_PHWL4171000P800RGN-part2"
brand: Intel(R) Core(TM) i7-4790 v3 @ 3.60GHz
:#! include/category/functional:
kmsg:
heartbeat:
meminfo:
:#! include/disk/nr_hdd:
need_kconfig:
- BLK_DEV_SD
- SCSI
- BLOCK: y
- SATA_AHCI
- SATA_AHCI_PLATFORM
- ATA
- PCI: y
- XFS_FS
:#! include/queue/cyclic:
commit: f5934dda5442999d71eea07d9a324b259e5a36a5
:#! include/testbox/lkp-hsw-d02:
ucode: '0x28'
need_kconfig_hw:
- E1000E: y
- SATA_AHCI
- DRM_I915
bisect_dmesg: true
:#! include/fs/OTHERS:
enqueue_time: 2022-01-01 21:07:56.821830731 +08:00
_id: 61d0522c1c34346a8756ad43
_rt: "/result/xfstests/4HDD-xfs-xfs-reflink-21-ucode=0x28/lkp-hsw-d02/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3-func/gcc-9/f5934dda5442999d71eea07d9a324b259e5a36a5"
:#! schedule options:
user: lkp
compiler: gcc-9
LKP_SERVER: internal-lkp-server
head_commit: 7e5d225545413a1c6cfc6ec1f21c32db93ae0faa
base_commit: fc74e0a40e4f9fd0468e34045b0c45bba11dcbb2
branch: linux-devel/devel-hourly-20211231-132259
rootfs: debian-10.4-x86_64-20200603.cgz
result_root: "/result/xfstests/4HDD-xfs-xfs-reflink-21-ucode=0x28/lkp-hsw-d02/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3-func/gcc-9/f5934dda5442999d71eea07d9a324b259e5a36a5/0"
scheduler_version: "/lkp/lkp/.src-20211231-112748"
arch: x86_64
max_uptime: 2100
initrd: "/osimage/debian/debian-10.4-x86_64-20200603.cgz"
bootloader_append:
- root=/dev/ram0
- RESULT_ROOT=/result/xfstests/4HDD-xfs-xfs-reflink-21-ucode=0x28/lkp-hsw-d02/debian-10.4-x86_64-20200603.cgz/x86_64-rhel-8.3-func/gcc-9/f5934dda5442999d71eea07d9a324b259e5a36a5/0
- BOOT_IMAGE=/pkg/linux/x86_64-rhel-8.3-func/gcc-9/f5934dda5442999d71eea07d9a324b259e5a36a5/vmlinuz-5.16.0-rc5-00009-gf5934dda5442
- branch=linux-devel/devel-hourly-20211231-132259
- job=/lkp/jobs/scheduled/lkp-hsw-d02/xfstests-4HDD-xfs-xfs-reflink-21-ucode=0x28-debian-10.4-x86_64-20200603.cgz-f5934dda5442999d71eea07d9a324b259e5a36a5-20220101-92807-93r5uv-0.yaml
- user=lkp
- ARCH=x86_64
- kconfig=x86_64-rhel-8.3-func
- commit=f5934dda5442999d71eea07d9a324b259e5a36a5
- max_uptime=2100
- LKP_SERVER=internal-lkp-server
- nokaslr
- selinux=0
- debug
- apic=debug
- sysrq_always_enabled
- rcupdate.rcu_cpu_stall_timeout=100
- net.ifnames=0
- printk.devkmsg=on
- panic=-1
- softlockup_panic=1
- nmi_watchdog=panic
- oops=panic
- load_ramdisk=2
- prompt_ramdisk=0
- drbd.minor_count=8
- systemd.log_level=err
- ignore_loglevel
- console=tty0
- earlyprintk=ttyS0,115200
- console=ttyS0,115200
- vga=normal
- rw
modules_initrd: "/pkg/linux/x86_64-rhel-8.3-func/gcc-9/f5934dda5442999d71eea07d9a324b259e5a36a5/modules.cgz"
bm_initrd: "/osimage/deps/debian-10.4-x86_64-20200603.cgz/run-ipconfig_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/lkp_20211221.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/rsync-rootfs_20200608.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/fs_20210917.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/xfstests_20211227.cgz,/osimage/pkg/debian-10.4-x86_64-20200603.cgz/xfstests-x86_64-972d710-1_20211231.cgz,/osimage/deps/debian-10.4-x86_64-20200603.cgz/hw_20200715.cgz"
ucode_initrd: "/osimage/ucode/intel-ucode-20210222.cgz"
lkp_initrd: "/osimage/user/lkp/lkp-x86_64.cgz"
site: inn
:#! /cephfs/db/releases/20211231195303/lkp-src/include/site/inn:
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
oom-killer:
watchdog:
:#! runtime status:
last_kernel: 4.20.0
schedule_notify_address:
:#! user overrides:
kernel: "/pkg/linux/x86_64-rhel-8.3-func/gcc-9/f5934dda5442999d71eea07d9a324b259e5a36a5/vmlinuz-5.16.0-rc5-00009-gf5934dda5442"
dequeue_time: 2022-01-01 22:11:02.191980659 +08:00
job_state: finished
loadavg: 0.89 0.26 0.09 1/185 4170
start_time: '1641046350'
end_time: '1641046365'
version: "/lkp/lkp/.src-20211231-112828:bdd14a96:bf02ab871"
dmsetup remove_all
wipefs -a --force /dev/sda1
wipefs -a --force /dev/sda2
wipefs -a --force /dev/sda3
wipefs -a --force /dev/sda4
mkfs -t xfs -f /dev/sda3
mkfs -t xfs -f /dev/sda1
mkfs -t xfs -f /dev/sda4
mkfs -t xfs -f /dev/sda2
mkdir -p /fs/sda1
modprobe xfs
mount -t xfs -o inode64 /dev/sda1 /fs/sda1
mkdir -p /fs/sda2
mount -t xfs -o inode64 /dev/sda2 /fs/sda2
mkdir -p /fs/sda3
mount -t xfs -o inode64 /dev/sda3 /fs/sda3
mkdir -p /fs/sda4
mount -t xfs -o inode64 /dev/sda4 /fs/sda4
export TEST_DIR=/fs/sda1
export TEST_DEV=/dev/sda1
export FSTYP=xfs
export SCRATCH_MNT=/fs/scratch
mkdir /fs/scratch -p
export SCRATCH_DEV=/dev/sda4
export SCRATCH_LOGDEV=/dev/sda2
export SCRATCH_XFS_LIST_METADATA_FIELDS=u3.sfdir3.hdr.parent.i4
export SCRATCH_XFS_LIST_FUZZ_VERBS=random
export MKFS_OPTIONS=-mreflink=1
sed "s:^:xfs/:" //lkp/benchmarks/xfstests/tests/xfs-reflink-21
./check xfs/420 xfs/421 xfs/435
Christoph Hellwig Jan. 5, 2022, 1:42 p.m. UTC | #25
On Tue, Jan 04, 2022 at 03:12:30PM -0800, Darrick J. Wong wrote:
> As I see it, the problem here is that we're spending too much time
> calling iomap_finish_page_writeback over and over and over, right?
> 
> If we have a single page with a single mapping that fits in a single
> bio, that means we call bio_add_page once, and on the other end we call
> iomap_finish_page_writeback once.

iomap_finish_page_writeback is called once per page, and the folio
equivalent will be called once per folio, yes.

But we usually call bio_add_page multiple times, due to the silly
one-block-at-a-time loop in iomap_writepage_map.  But that is something
we can easily fix.
Christoph Hellwig Jan. 5, 2022, 1:43 p.m. UTC | #26
On Wed, Jan 05, 2022 at 08:16:05AM +1100, Dave Chinner wrote:
> > > +	if (ioend->io_inline_bio.bi_iter.bi_sector + (ioend->io_size >> 9) !=
> > 
> > This open codes bio_end_sector()
> 
> No, it doesn't. The ioend can have chained bios or have others merged
> and concatenated to the ioend->io_list, so ioend->io_size != length
> of the first bio in the chain....
> 
> > > +	    next->io_inline_bio.bi_iter.bi_sector)
> > 
> > But more importantly I don't think just using the inline_bio makes sense
> > here as the ioend can have multiple bios.  Fortunately we should always
> > have the last built bio available in ->io_bio.
> 
> Except that merging chains ioends together and modifies the head
> io_size to account for the chained ioends we add to ioend->io_list.
> Hence ioend->io_bio is not the last bio in a contiguous ioend chain.

Indeed.  We could use bio_end_sector on io_bio or this.
Brian Foster Jan. 5, 2022, 1:56 p.m. UTC | #27
On Wed, Jan 05, 2022 at 01:10:22PM +1100, Dave Chinner wrote:
> On Tue, Jan 04, 2022 at 03:12:30PM -0800, Darrick J. Wong wrote:
> > On Wed, Jan 05, 2022 at 08:52:27AM +1100, Dave Chinner wrote:
> > > On Tue, Jan 04, 2022 at 11:22:27AM -0800, Darrick J. Wong wrote:
> > > > On Tue, Jan 04, 2022 at 10:14:27AM -0800, hch@infradead.org wrote:
> > > > > On Tue, Jan 04, 2022 at 06:08:24PM +0000, Matthew Wilcox wrote:
> > > > > > I think it's fine to put in a fix like this now that's readily
> > > > > > backportable.  For folios, I can't help but think we want a
> > > > > > restructuring to iterate per-extent first, then per-folio and finally
> > > > > > per-sector instead of the current model where we iterate per folio,
> > > > > > looking up the extent for each sector.
> > > > > 
> > > > > We don't look up the extent for each sector.  We look up the extent
> > > > > once and then add as much of it as we can to the bio until either the
> > > > > bio is full or the extent ends.  In the first case we then allocate
> > > > > a new bio and add it to the ioend.
> > > > 
> > > > Can we track the number of folios that have been bio_add_folio'd to the
> > > > iomap_ioend, and make iomap_can_add_to_ioend return false when the
> > > > number of folios reaches some threshold?  I think that would solve the
> > > > problem of overly large ioends while not splitting folios across ioends
> > > > unnecessarily.
> > > 
> > > See my reply to Christoph up thread.
> > > 
> > > The problem is multiple blocks per page/folio - bio_add_folio() will
> > > get called for the same folio many times, and we end up not knowing
> > > when a new page/folio is attached. Hence dynamically calculating it
> > > as we build the bios is .... convoluted.
> > 
> > Hm.  Indulge me in a little more frame-shifting for a moment --
> > 
> > As I see it, the problem here is that we're spending too much time
> > calling iomap_finish_page_writeback over and over and over, right?
> > 

I think the fundamental problem is an excessively large page list that
requires a tight enough loop in iomap_finish_ioend() with no opportunity
for scheduling. AIUI, this can occur a few different ways atm. The first
is a large bio chain associated with an ioend. Another potential vector
is a series of large bio vecs, since IIUC a vector can cover something
like 4GB worth of pages if physically contiguous. Since Trond's instance
seems to be via the completion workqueue, yet another vector is likely
via a chain of merged ioends.
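The bio vec point can be made concrete with a little arithmetic, assuming 4k pages and the 32-bit bv_len field of struct bio_vec:

```c
#include <assert.h>
#include <stdint.h>

/* A single bio_vec has a 32-bit length (bv_len), so one physically
 * contiguous vector can describe up to ~4GB. With 4k pages that is on
 * the order of a million pages behind a single vec, which is why a
 * per-bio or per-vec granularity is not fine enough to bound
 * completion-side page loops. */
static uint64_t pages_per_vec(uint64_t vec_bytes, uint64_t page_size)
{
	return vec_bytes / page_size;
}
```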

IOW, I think there is potential for such a warning in either of the two
loops in iomap_finish_ioend() or the ioend loop in iomap_finish_ioends()
depending on circumstance. Trond's earlier feedback on his initial patch
(i.e. without ioend size capping) suggests he's hitting more of the bio
chain case, since a cond_resched() in the bio iteration loop in
iomap_finish_ioend() mitigated the problem but lifting it outside into
iomap_finish_ioends() did not.

> > If we have a single page with a single mapping that fits in a single
> > bio, that means we call bio_add_page once, and on the other end we call
> > iomap_finish_page_writeback once.
> > 
> > If we have (say) an 8-page folio with 4 blocks per page, in the worst
> > case we'd create 32 different ioends, each with a single-block bio,
> > which means 32 calls to iomap_finish_page_writeback, right?
> 
> Yes, but in this case, we've had to issue and complete 32 bios and
> ioends to get one call to end_page_writeback(). That is overhead we
> cannot avoid if we have worst-case physical fragmentation of the
> filesystem. But, quite frankly, if that's the case we just don't
> care about performance of IO completion - performance will suck
> because we're doing 32 IOs instead of 1 for that data, not because
> IO completion has to do more work per page/folio....
> 
> > From what I can see, the number of bio_add_folio calls is proportional
> > to the amount of ioend work we do without providing any external signs
> > of life to the watchdog, right?
> > 
> > So forget the number of folios or the byte count involved.  Isn't the
> > number of future iomap_finish_page_writeback calls exactly the metric
> > that we want to decide when to cut off ioend submission?
> 
> Isn't that exactly what I suggested by counting bio segments in the
> ioend at bio submission time? I mean, iomap_finish_page_writeback()
> iterates bio segments, not pages, folios or filesystem blocks....
> 
> > > Hence generic iomap code will only end up calling
> > > iomap_finish_ioends() with the same ioend that was submitted. i.e.
> > > capped to 4096 pages by this patch. Therefore it does not need
> > > cond_resched() calls - the place that needs it is where the ioends
> > > are merged and then finished. That is, in the filesystem completion
> > > processing that does the merging....
> > 
> > Huh?  I propose adding cond_resched to iomap_finish_ioends (plural),
> 
> Which is only called from XFS on merged ioends after XFS has
> processed the merged ioend.....
> 
> > which walks a list of ioends and calls iomap_finish_ioend (singular) on
> > each ioend.  IOWs, we'd call cond_resched in between finishing one ioend
> > and starting on the next one.  Isn't that where ioends are finished?
> > 
> > (I'm starting to wonder if we're talking past each other?)
> > 
> > So looking at xfs_end_io:
> > 
> > /* Finish all pending io completions. */
> > void
> > xfs_end_io(
> > 	struct work_struct	*work)
> > {
> > 	struct xfs_inode	*ip =
> > 		container_of(work, struct xfs_inode, i_ioend_work);
> > 	struct iomap_ioend	*ioend;
> > 	struct list_head	tmp;
> > 	unsigned long		flags;
> > 
> > 	spin_lock_irqsave(&ip->i_ioend_lock, flags);
> > 	list_replace_init(&ip->i_ioend_list, &tmp);
> > 	spin_unlock_irqrestore(&ip->i_ioend_lock, flags);
> > 
> > 	iomap_sort_ioends(&tmp);
> > 	while ((ioend = list_first_entry_or_null(&tmp, struct iomap_ioend,
> > 			io_list))) {
> > 		list_del_init(&ioend->io_list);
> > 
> > Here we pull the first ioend off the sorted list of ioends.
> > 
> > 		iomap_ioend_try_merge(ioend, &tmp);
> > 
> > Now we've merged that first ioend with as many subsequent ioends as we
> > could merge.  Let's say there were 200 ioends, each 100MB.  Now ioend
> 
> Ok, so how do we get to this completion state right now?
> 
> 1. an ioend is a physically contiguous extent so submission is
>    broken down into an ioend per physical extent.
> 2. we merge logically contiguous ioends at completion.
> 
> So, if we have 200 ioends of 100MB each that are logically
> contiguous we'll currently always merge them into a single 20GB
> ioend that gets processed as a single entity even if submission
> broke them up because they were physically discontiguous.
> 
> Now, with this patch we add:
> 
> 3. Individual ioends are limited to 16MB.
> 4. completion can only merge physically contiguous ioends.
> 5. we cond_resched() between physically contiguous ioend completion.
> 
> Submission will break that logically contiguous 20GB dirty range
> down into 200x6x16MB ioends.
> 
> Now completion will only merge ioends that are both physically and
> logically contiguous. That results in a maximum merged ioend chain
> size of 100MB at completion. They'll get merged one 100MB chunk at a
> time.
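The effect of the proposed policy on the maximum completion unit can be sketched under the stated assumptions (16MB ioend cap at submission, completion merging only within one physical extent); the numbers here are the toy 200 x 100MB scenario from the discussion, not kernel code:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the merge policy above: submission splits each physical
 * extent into ioends of at most `cap_bytes`, and completion re-merges
 * only ioends inside the same physical extent. The largest merged
 * completion unit is therefore one extent, no matter how large the
 * logically contiguous dirty range was. */
static uint64_t max_merged_unit(uint64_t extent_bytes, uint64_t nr_extents,
				uint64_t cap_bytes)
{
	uint64_t largest = 0;

	for (uint64_t e = 0; e < nr_extents; e++) {
		uint64_t left = extent_bytes;
		uint64_t merged = 0;

		while (left) {			/* submission-side split */
			uint64_t ioend = left < cap_bytes ? left : cap_bytes;

			merged += ioend;	/* completion-side re-merge */
			left -= ioend;
		}
		if (merged > largest)
			largest = merged;
		/* physical discontiguity: the merge chain ends here */
	}
	return largest;
}
```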
> 

I'm missing something with the reasoning here.. how does a contiguity
check in the ioend merge code guarantee we don't construct an
excessively large list of pages via a chain of merged ioends? Obviously
it filters out the discontig case, but what if the extents are
physically contiguous?

> > is a chain (of those other 199 ioends) representing 20GB of data.
> > 
> > 		xfs_end_ioend(ioend);
> 
> We now do one conversion transaction for the entire 100MB extent,
> then....
> 
> > At the end of this routine, we call iomap_finish_ioends on the 20GB
> > ioend chain.  This now has to mark 5.2 million pages...
> 
> run iomap_finish_ioends() on 100MB of pages, which is about 25,000
> pages, not 5 million...
> 
> > 		cond_resched();
> > 
> > ...before we get to the cond_resched.
> 
> ... and so in this scenario this patch reduces the time between
> reschedule events by a factor of 200 - the number of physical
> extents the ioends map....
> 
> That's kind of my point - we can't ignore why the filesystem needs
> merging or how it should optimise merging for its own purposes in
> this discussion. Because logically merged ioends require the
> filesystem to do internal loops over physical discontiguities,
> requiring us to drive cond_resched() into both the iomap loops and
> the lower layer filesystem loops.
> 
> i.e. when we have ioend merging based on logical contiguity, we need
> to limit the number of the loops the filesystem does internally, not
> just the loops that the ioend code is doing...
> 
> > I'd really rather do the
> > cond_resched between each of those 200 ioends that (supposedly) are
> > small enough not to trip the hangcheck timers.
> > 
> > 	}
> > }
> > /*
> >  * Mark writeback finished on a chain of ioends.  Caller must not call
> >  * this function from atomic/softirq context.
> >  */
> > void
> > iomap_finish_ioends(struct iomap_ioend *ioend, int error)
> > {
> > 	struct list_head tmp;
> > 
> > 	list_replace_init(&ioend->io_list, &tmp);
> > 	iomap_finish_ioend(ioend, error);
> > 
> > 	while (!list_empty(&tmp)) {
> > 		cond_resched();
> > 
> > So I propose doing it ^^^ here instead.
> > 
> > 		ioend = list_first_entry(&tmp, struct iomap_ioend, io_list);
> > 		list_del_init(&ioend->io_list);
> > 		iomap_finish_ioend(ioend, error);
> > 	}
> > }

Hmm.. I'm not seeing how this is much different from Dave's patch, and
I'm not totally convinced the cond_resched() in Dave's patch is
effective without something like Darrick's earlier suggestion to limit
the $object (page/folio/whatever) count of the entire merged mapping (to
ensure that iomap_finish_ioend() is no longer a soft lockup vector by
itself).

Trond reports that the test patch mitigates his reproducer, but that
patch also includes the ioend size cap and so the test doesn't
necessarily isolate whether the cond_resched() is effective or whether
the additional submission/completion overhead is enough to avoid the
pathological conditions that enable it via the XFS merging code. I'd be
curious to have a more tangible datapoint on that. The easiest way to
test without getting into the weeds of looking at merging behavior is
probably just see whether the problem returns with the cond_resched()
removed and all of the other changes in place. Trond, is that something
you can test?

Brian

> 
> Yes, but this only addresses a single aspect of the issue when
> filesystem driven merging is used. That is, we might have just had
> to do a long unbroken loop in xfs_end_ioend() that might have to run
> conversion of several thousand physical extents that the logically
> merged ioends might have covered. Hence even with the above, we'd
> still need to add cond_resched() calls to the XFS code. Hence from
> an XFS IO completion point of view, we only want to merge to
> physical extent boundaries and issue cond_resched() at physical
> extent boundaries because that's what our filesystem completion
> processing loops on, not pages/folios.
> 
> Hence my point that we cannot ignore what the filesystem is doing
> with these merged ioends and only think about iomap in isolation.
> 
> Cheers,
> 
> Dave.
> 
> -- 
> Dave Chinner
> david@fromorbit.com
>
Trond Myklebust Jan. 5, 2022, 8:45 p.m. UTC | #28
On Tue, 2022-01-04 at 21:09 -0500, Trond Myklebust wrote:
> On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> > On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust wrote:
> > > On Tue, 2022-01-04 at 09:03 +1100, Dave Chinner wrote:
> > > > On Sat, Jan 01, 2022 at 05:39:45PM +0000, Trond Myklebust
> > > > wrote:
> > > > > On Sat, 2022-01-01 at 14:55 +1100, Dave Chinner wrote:
> > > > > > As it is, if you are getting soft lockups in this location,
> > > > > > that's
> > > > > > an indication that the ioend chain that is being built by
> > > > > > XFS
> > > > > > is
> > > > > > way, way too long. IOWs, the completion latency problem is
> > > > > > caused
> > > > > > by
> > > > > > a lack of submit side ioend chain length bounding in
> > > > > > combination
> > > > > > with unbound completion side merging in xfs_end_bio - it's
> > > > > > not a
> > > > > > problem with the generic iomap code....
> > > > > > 
> > > > > > Let's try to address this in the XFS code, rather than hack
> > > > > > unnecessary band-aids over the problem in the generic
> > > > > > code...
> > > > > > 
> > > > > > Cheers,
> > > > > > 
> > > > > > Dave.
> > > > > 
> > > > > Fair enough. As long as someone is working on a solution,
> > > > > then
> > > > > I'm
> > > > > happy. Just a couple of things:
> > > > > 
> > > > > Firstly, we've verified that the cond_resched() in the bio
> > > > > loop
> > > > > does
> > > > > suffice to resolve the issue with XFS, which would tend to
> > > > > confirm
> > > > > what
> > > > > you're saying above about the underlying issue being the
> > > > > ioend
> > > > > chain
> > > > > length.
> > > > > 
> > > > > Secondly, note that we've tested this issue with a variety of
> > > > > older
> > > > > kernels, including 4.18.x, 5.1.x and 5.15.x, so please bear
> > > > > in
> > > > > mind
> > > > > that it would be useful for any fix to be backward portable
> > > > > through
> > > > > the
> > > > > stable mechanism.
> > > > 
> > > > The infrastructure hasn't changed that much, so whatever the
> > > > result
> > > > is it should be backportable.
> > > > 
> > > > As it is, is there a specific workload that triggers this
> > > > issue?
> > > > Or
> > > > a specific machine config (e.g. large memory, slow storage).
> > > > Are
> > > > there large fragmented files in use (e.g. randomly written VM
> > > > image
> > > > files)? There are a few factors that can exacerbate the ioend
> > > > chain
> > > > lengths, so it would be handy to have some idea of what is
> > > > actually
> > > > triggering this behaviour...
> > > > 
> > > > Cheers,
> > > > 
> > > > Dave.
> > > 
> > > We have different reproducers. The common feature appears to be
> > > the
> > > need for a decently fast box with fairly large memory (128GB in
> > > one
> > > case, 400GB in the other). It has been reproduced with HDs, SSDs
> > > and
> > > NVME systems.
> > > 
> > > On the 128GB box, we had it set up with 10+ disks in a JBOD
> > > configuration and were running the AJA system tests.
> > > 
> > > On the 400GB box, we were just serially creating large (> 6GB)
> > > files
> > > using fio and that was occasionally triggering the issue. However
> > > doing
> > > an strace of that workload to disk reproduced the problem faster :-).
> > 
> > Ok, that matches up with the "lots of logically sequential dirty
> > data on a single inode in cache" vector that is required to create
> > really long bio chains on individual ioends.
> > 
> > Can you try the patch below and see if addresses the issue?
> > 
> 
> That patch does seem to fix the soft lockups.
> 

Oops... Strike that, apparently our tests just hit the following when
running on AWS with that patch.

[Wed Jan  5 20:34:46 2022] watchdog: BUG: soft lockup - CPU#4 stuck for
48s! [kworker/4:1:31315]
[Wed Jan  5 20:34:46 2022] Modules linked in: nfsv3 auth_name
bpf_preload xt_nat veth nfs_layout_flexfiles rpcsec_gss_krb5 nfsv4
dns_resolver nfsidmap nfs fscache netfs dm_multipath nfsd auth_rpcgss
nfs_acl lockd grace sunrpc xt_MASQUERADE nf_conntrack_netlink
xt_addrtype br_netfilter bridge stp llc overlay xt_sctp
nf_conntrack_netbios_ns nf_conntrack_broadcast nf_nat_ftp
nf_conntrack_ftp xt_CT ip6t_rpfilter ip6t_REJECT nf_reject_ipv6
ipt_REJECT nf_reject_ipv4 xt_conntrack ip6table_nat ip6table_mangle
ip6table_security ip6table_raw iptable_nat nf_nat iptable_mangle
iptable_security iptable_raw nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4
ip_set nfnetlink ip6table_filter ip6_tables iptable_filter bonding tls
ipmi_msghandler intel_rapl_msr intel_rapl_common isst_if_common nfit
libnvdimm crct10dif_pclmul crc32_pclmul ghash_clmulni_intel i2c_piix4
rapl ip_tables xfs nvme crc32c_intel ena nvme_core
[Wed Jan  5 20:34:46 2022] CPU: 4 PID: 31315 Comm: kworker/4:1 Kdump:
loaded Tainted: G        W    L    5.15.12-200.pd.17718.el7.x86_64 #1
[Wed Jan  5 20:34:46 2022] Hardware name: Amazon EC2 r5b.2xlarge/, BIOS
1.0 10/16/2017
[Wed Jan  5 20:34:46 2022] Workqueue: xfs-conv/nvme1n1 xfs_end_io [xfs]
[Wed Jan  5 20:34:46 2022] RIP:
0010:_raw_spin_unlock_irqrestore+0x1c/0x20
[Wed Jan  5 20:34:46 2022] Code: 92 cc cc cc cc cc cc cc cc cc cc cc cc
cc 0f 1f 44 00 00 c6 07 00 0f 1f 40 00 f7 c6 00 02 00 00 75 01 c3 fb 66
0f 1f 44 00 00 <c3> 0f 1f 00 0f 1f 44 00 00 8b 07 a9 ff 01 00 00 75 21
b8 00 02 00
[Wed Jan  5 20:34:46 2022] RSP: 0018:ffffb9738983fd10 EFLAGS: 00000206
[Wed Jan  5 20:34:46 2022] RAX: 0000000000000001 RBX: 0000000000000db0
RCX: fffffffffffff90f
[Wed Jan  5 20:34:46 2022] RDX: ffffffffa3808938 RSI: 0000000000000206
RDI: ffffffffa3808930
[Wed Jan  5 20:34:46 2022] RBP: 0000000000000206 R08: ffffb9738601fc80
R09: ffffb9738601fc80
[Wed Jan  5 20:34:46 2022] R10: 0000000000000000 R11: 0000000000000000
R12: ffffffffa3808930
[Wed Jan  5 20:34:46 2022] R13: ffffdda3db40dd40 R14: ffff930e1c62f538
R15: ffffdda3db40dd40
[Wed Jan  5 20:34:46 2022] FS:  0000000000000000(0000)
GS:ffff93164dd00000(0000) knlGS:0000000000000000
[Wed Jan  5 20:34:46 2022] CS:  0010 DS: 0000 ES: 0000 CR0:
0000000080050033
[Wed Jan  5 20:34:46 2022] CR2: 00007ffe41f5c080 CR3: 00000005a5810001
CR4: 00000000007706e0
[Wed Jan  5 20:34:46 2022] DR0: 0000000000000000 DR1: 0000000000000000
DR2: 0000000000000000
[Wed Jan  5 20:34:46 2022] DR3: 0000000000000000 DR6: 00000000fffe0ff0
DR7: 0000000000000400
[Wed Jan  5 20:34:46 2022] PKRU: 55555554
[Wed Jan  5 20:34:46 2022] Call Trace:
[Wed Jan  5 20:34:46 2022]  <TASK>
[Wed Jan  5 20:34:46 2022]  wake_up_page_bit+0x79/0xe0
[Wed Jan  5 20:34:46 2022]  end_page_writeback+0xc4/0xf0
[Wed Jan  5 20:34:46 2022]  iomap_finish_ioend+0x130/0x260
[Wed Jan  5 20:34:46 2022]  iomap_finish_ioends+0x71/0x90
[Wed Jan  5 20:34:46 2022]  xfs_end_ioend+0x5a/0x120 [xfs]
[Wed Jan  5 20:34:46 2022]  xfs_end_io+0xa1/0xc0 [xfs]
[Wed Jan  5 20:34:46 2022]  process_one_work+0x1f1/0x390
[Wed Jan  5 20:34:46 2022]  worker_thread+0x53/0x3e0
[Wed Jan  5 20:34:46 2022]  ? process_one_work+0x390/0x390
[Wed Jan  5 20:34:46 2022]  kthread+0x127/0x150
[Wed Jan  5 20:34:46 2022]  ? set_kthread_struct+0x40/0x40
[Wed Jan  5 20:34:46 2022]  ret_from_fork+0x22/0x30
[Wed Jan  5 20:34:46 2022]  </TASK>


So it was harder to hit, but we still did eventually.
Dave Chinner Jan. 5, 2022, 10:04 p.m. UTC | #29
On Wed, Jan 05, 2022 at 08:56:33AM -0500, Brian Foster wrote:
> On Wed, Jan 05, 2022 at 01:10:22PM +1100, Dave Chinner wrote:
> > On Tue, Jan 04, 2022 at 03:12:30PM -0800, Darrick J. Wong wrote:
> > > So looking at xfs_end_io:
> > > 
> > > /* Finish all pending io completions. */
> > > void
> > > xfs_end_io(
> > > 	struct work_struct	*work)
> > > {
> > > 	struct xfs_inode	*ip =
> > > 		container_of(work, struct xfs_inode, i_ioend_work);
> > > 	struct iomap_ioend	*ioend;
> > > 	struct list_head	tmp;
> > > 	unsigned long		flags;
> > > 
> > > 	spin_lock_irqsave(&ip->i_ioend_lock, flags);
> > > 	list_replace_init(&ip->i_ioend_list, &tmp);
> > > 	spin_unlock_irqrestore(&ip->i_ioend_lock, flags);
> > > 
> > > 	iomap_sort_ioends(&tmp);
> > > 	while ((ioend = list_first_entry_or_null(&tmp, struct iomap_ioend,
> > > 			io_list))) {
> > > 		list_del_init(&ioend->io_list);
> > > 
> > > Here we pull the first ioend off the sorted list of ioends.
> > > 
> > > 		iomap_ioend_try_merge(ioend, &tmp);
> > > 
> > > Now we've merged that first ioend with as many subsequent ioends as we
> > > could merge.  Let's say there were 200 ioends, each 100MB.  Now ioend
> > 
> > Ok, so how do we get to this completion state right now?
> > 
> > 1. an ioend is a physically contiguous extent so submission is
> >    broken down into an ioend per physical extent.
> > 2. we merge logically contiguous ioends at completion.
> > 
> > So, if we have 200 ioends of 100MB each that are logically
> > contiguous we'll currently always merge them into a single 20GB
> > ioend that gets processed as a single entity even if submission
> > broke them up because they were physically discontiguous.
> > 
> > Now, with this patch we add:
> > 
> > 3. Individual ioends are limited to 16MB.
> > 4. completion can only merge physically contiguous ioends.
> > 5. we cond_resched() between physically contiguous ioend completion.
> > 
> > Submission will break that logically contiguous 20GB dirty range
> > down into 200x6x16MB ioends.
> > 
> > Now completion will only merge ioends that are both physically and
> > logically contiguous. That results in a maximum merged ioend chain
> > size of 100MB at completion. They'll get merged one 100MB chunk at a
> > time.
> > 
> 
> I'm missing something with the reasoning here.. how does a contiguity
> check in the ioend merge code guarantee we don't construct an
> excessively large list of pages via a chain of merged ioends? Obviously
> it filters out the discontig case, but what if the extents are
> physically contiguous?

It doesn't. I keep saying there are two aspects of this problem -
one is the filesystem looping doing considerable work over multiple
physical extents (would be 200 extent conversions in a tight loop
via xfs_iomap_write_unwritten()) before we even call into
iomap_finish_ioends() to process the pages in the merged ioend
chain.

Darrick is trying to address with the cond_resched() calls in
iomap_finish_ioends(), but is missing the looping being done in
xfs_end_ioend() prior to calling iomap_finish_ioends().  Badly
fragmented merged ioend completion will loop for much longer in
xfs_iomap_write_unwritten() than they will in
iomap_finish_ioends()....

> > >  * Mark writeback finished on a chain of ioends.  Caller must not call
> > >  * this function from atomic/softirq context.
> > >  */
> > > void
> > > iomap_finish_ioends(struct iomap_ioend *ioend, int error)
> > > {
> > > 	struct list_head tmp;
> > > 
> > > 	list_replace_init(&ioend->io_list, &tmp);
> > > 	iomap_finish_ioend(ioend, error);
> > > 
> > > 	while (!list_empty(&tmp)) {
> > > 		cond_resched();
> > > 
> > > So I propose doing it ^^^ here instead.
> > > 
> > > 		ioend = list_first_entry(&tmp, struct iomap_ioend, io_list);
> > > 		list_del_init(&ioend->io_list);
> > > 		iomap_finish_ioend(ioend, error);
> > > 	}
> > > }
> 
> Hmm.. I'm not seeing how this is much different from Dave's patch, and
> I'm not totally convinced the cond_resched() in Dave's patch is
> effective without something like Darrick's earlier suggestion to limit
> the $object (page/folio/whatever) count of the entire merged mapping (to
> ensure that iomap_finish_ioend() is no longer a soft lockup vector by
> itself).

Yes, that's what I did immediately after posting the first patch for
Trond to test a couple of days ago.  The original patch was an
attempt to make a simple, easily backportable fix to mitigate the
issue without excessive cond_resched() overhead, not a "perfect
solution".

> Trond reports that the test patch mitigates his reproducer, but that
> patch also includes the ioend size cap and so the test doesn't
> necessarily isolate whether the cond_resched() is effective or whether
> the additional submission/completion overhead is enough to avoid the
> pathological conditions that enable it via the XFS merging code. I'd be
> curious to have a more tangible datapoint on that. The easiest way to
> test without getting into the weeds of looking at merging behavior is
> probably just see whether the problem returns with the cond_resched()
> removed and all of the other changes in place. Trond, is that something
> you can test?

Trond has already reported a new softlockup that indicates we still
need a cond_resched() in iomap_finish_ioends() even with the patch I
posted. So we've got the feedback we needed from Trond already, from
both the original patch (fine grained cond_resched()) and from the
patch I sent for him to test.

What this tells us is we actually need *3* layers of co-ordination
here:

1. bio chains per ioend need to be bound in length. Pure overwrites
go straight to iomap_finish_ioend() in softirq context with the
exact bio chain attached to the ioend by submission. Hence the only
way to prevent long holdoffs here is to bound ioend submission
sizes.

2. iomap_finish_ioends() has to handle unbound merged ioend chains
correctly. This relies on any one call to iomap_finish_ioend() being
bound in runtime so that cond_resched() can be issued regularly as
the long ioend chain is processed. i.e. this relies on mechanism #1
to limit individual ioend sizes to work correctly.

3. filesystems have to loop over the merged ioends to process
physical extent manipulations. This means they can loop internally,
and so we break merging at physical extent boundaries so the
filesystem can easily insert reschedule points between individual
extent manipulations.
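A rough sketch of how mechanisms 1 and 2 together bound the work done between reschedule points (toy numbers, not kernel code):

```c
#include <assert.h>

/* Toy model: with ioends capped at `cap` pages at submission
 * (mechanism 1) and a cond_resched() between ioends at completion
 * (mechanism 2), the longest uninterrupted page-processing run is one
 * ioend's worth, regardless of how many ioends were merged into the
 * chain being completed. */
static unsigned max_holdoff_pages(unsigned long total_pages, unsigned cap)
{
	unsigned max = 0;

	while (total_pages) {
		unsigned run = total_pages < cap ? (unsigned)total_pages : cap;

		if (run > max)
			max = run;	/* pages handled before rescheduling */
		total_pages -= run;	/* cond_resched() between ioends */
	}
	return max;
}
```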

See the patch below.

Cheers,

Dave.
Dave Chinner Jan. 5, 2022, 10:34 p.m. UTC | #30
On Wed, Jan 05, 2022 at 05:43:54AM -0800, hch@infradead.org wrote:
> On Wed, Jan 05, 2022 at 08:16:05AM +1100, Dave Chinner wrote:
> > > > +	if (ioend->io_inline_bio.bi_iter.bi_sector + (ioend->io_size >> 9) !=
> > > 
> > > This open codes bio_end_sector()
> > 
> > No, it doesn't. The ioend can have chained bios or have others merged
> > and concatenated to the ioend->io_list, so ioend->io_size != length
> > of the first bio in the chain....
> > 
> > > > +	    next->io_inline_bio.bi_iter.bi_sector)
> > > 
> > > But more importantly I don't think just using the inline_bio makes sense
> > > here as the ioend can have multiple bios.  Fortunately we should always
> > > have the last built bio available in ->io_bio.
> > 
> > Except merging chains ioends and modifies the head io_size to
> > account for the chained ioends we add to ioend->io_list. Hence
> > ioend->io_bio is not the last bio in a contiguous ioend chain.
> 
> Indeed.  We could use bio_end_sector on io_bio or this.

Not after we merge the first two contiguous ioends:

Before first merge:

ioend.io_inline_bio.bi_sector	= X
ioend.io_size			= A
bio_end_sector(ioend.io_bio)    = X + A		<<<< correct
ioend.io_list			= <empty>

After first merge:

ioend.io_inline_bio.bi_sector	= X
ioend.io_size			= A + B
bio_end_sector(ioend.io_bio)    = X + A		<<<<<<<< wrong
ioend.io_list			= <merged ioend B,
				   bi_sector	= X + A,
				   io_size	= B,
	correct >>>>>>>		   bio_end_sector() = X + A + B>


Hence if we want to use bio_end_sector(), we've got to jump through
hoops to get to the end of the ioend->io_list to get the io_bio from
that ioend. i.e:

	if (!list_empty(ioend->io_list)) {
		struct iomap_ioend *last = list_last_entry(&ioend->io_list, ...); 

		if (bio_end_sector(last->io_bio) !=
		    next->io_inline_bio.bi_iter.bi_sector)
			return false;
	}
	return true;

That's much more opaque than just using bi_sector and ioend->io_size
to directly calculate the last sector of the contiguous ioend chain.
I much prefer the simple, obvious direct ioend maths compared to
having to remember exactly how the io_list is structured every
time I need to understand what the merging constraints are....
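The accounting walked through above can be captured in a tiny model. The struct and field names here are illustrative stand-ins, not the real struct iomap_ioend:

```c
#include <assert.h>
#include <stdint.h>

/* Minimal model of the merge accounting: the chain head tracks the
 * start sector of the whole chain plus the total bytes covered (head
 * plus merged tail), so start + size always gives the end of the
 * contiguous chain even after merges, unlike the head's last bio. */
struct ioend_model {
	uint64_t start_sector;	/* bi_sector of the head's inline bio */
	uint64_t size_bytes;	/* grows as ioends are merged in */
};

static int contiguous(const struct ioend_model *head,
		      const struct ioend_model *next)
{
	/* physical contiguity: chain must end exactly where @next begins */
	return head->start_sector + (head->size_bytes >> 9) ==
	       next->start_sector;
}

static void merge(struct ioend_model *head, const struct ioend_model *next)
{
	head->size_bytes += next->size_bytes;
}
```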

Cheers,

Dave.
Dave Chinner Jan. 5, 2022, 10:48 p.m. UTC | #31
On Wed, Jan 05, 2022 at 08:45:05PM +0000, Trond Myklebust wrote:
> On Tue, 2022-01-04 at 21:09 -0500, Trond Myklebust wrote:
> > On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> > > On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust wrote:
> > > > We have different reproducers. The common feature appears to be
> > > > the
> > > > need for a decently fast box with fairly large memory (128GB in
> > > > one
> > > > case, 400GB in the other). It has been reproduced with HDs, SSDs
> > > > and
> > > > NVME systems.
> > > > 
> > > > On the 128GB box, we had it set up with 10+ disks in a JBOD
> > > > configuration and were running the AJA system tests.
> > > > 
> > > > On the 400GB box, we were just serially creating large (> 6GB)
> > > > files
> > > > using fio and that was occasionally triggering the issue. However
> > > > doing
> > > > an strace of that workload to disk reproduced the problem faster
> > > > :-
> > > > ).
> > > 
> > > Ok, that matches up with the "lots of logically sequential dirty
> > > data on a single inode in cache" vector that is required to create
> > > really long bio chains on individual ioends.
> > > 
> > > Can you try the patch below and see if addresses the issue?
> > > 
> > 
> > That patch does seem to fix the soft lockups.
> > 
> 
> Oops... Strike that, apparently our tests just hit the following when
> running on AWS with that patch.

OK, so there are also large contiguous physical extents being
allocated in some cases here.

> So it was harder to hit, but we still did eventually.

Yup, that's what I wanted to know - it indicates that both the
filesystem completion processing and the iomap page processing play
a role in the CPU usage. More complex patch for you to try below...

Cheers,

Dave.
Trond Myklebust Jan. 5, 2022, 11:29 p.m. UTC | #32
On Thu, 2022-01-06 at 09:48 +1100, Dave Chinner wrote:
> On Wed, Jan 05, 2022 at 08:45:05PM +0000, Trond Myklebust wrote:
> > On Tue, 2022-01-04 at 21:09 -0500, Trond Myklebust wrote:
> > > On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> > > > On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust
> > > > wrote:
> > > > > We have different reproducers. The common feature appears to
> > > > > be
> > > > > the
> > > > > need for a decently fast box with fairly large memory (128GB
> > > > > in
> > > > > one
> > > > > case, 400GB in the other). It has been reproduced with HDs,
> > > > > SSDs
> > > > > and
> > > > > NVME systems.
> > > > > 
> > > > > On the 128GB box, we had it set up with 10+ disks in a JBOD
> > > > > configuration and were running the AJA system tests.
> > > > > 
> > > > > On the 400GB box, we were just serially creating large (>
> > > > > 6GB)
> > > > > files
> > > > > using fio and that was occasionally triggering the issue.
> > > > > However
> > > > > doing
> > > > > an strace of that workload to disk reproduced the problem
> > > > > faster
> > > > > :-
> > > > > ).
> > > > 
> > > > Ok, that matches up with the "lots of logically sequential
> > > > dirty
> > > > data on a single inode in cache" vector that is required to
> > > > create
> > > > really long bio chains on individual ioends.
> > > > 
> > > > Can you try the patch below and see if addresses the issue?
> > > > 
> > > 
> > > That patch does seem to fix the soft lockups.
> > > 
> > 
> > Oops... Strike that, apparently our tests just hit the following
> > when
> > running on AWS with that patch.
> 
> OK, so there are also large contiguous physical extents being
> allocated in some cases here.
> 
> > So it was harder to hit, but we still did eventually.
> 
> Yup, that's what I wanted to know - it indicates that both the
> filesystem completion processing and the iomap page processing play
> a role in the CPU usage. More complex patch for you to try below...
> 
> Cheers,
> 
> Dave.

Thanks! Building...
Darrick J. Wong Jan. 6, 2022, 12:01 a.m. UTC | #33
On Thu, Jan 06, 2022 at 09:48:29AM +1100, Dave Chinner wrote:
> On Wed, Jan 05, 2022 at 08:45:05PM +0000, Trond Myklebust wrote:
> > On Tue, 2022-01-04 at 21:09 -0500, Trond Myklebust wrote:
> > > On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> > > > On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust wrote:
> > > > > We have different reproducers. The common feature appears to be
> > > > > the
> > > > > need for a decently fast box with fairly large memory (128GB in
> > > > > one
> > > > > case, 400GB in the other). It has been reproduced with HDs, SSDs
> > > > > and
> > > > > NVME systems.
> > > > > 
> > > > > On the 128GB box, we had it set up with 10+ disks in a JBOD
> > > > > configuration and were running the AJA system tests.
> > > > > 
> > > > > On the 400GB box, we were just serially creating large (> 6GB)
> > > > > files
> > > > > using fio and that was occasionally triggering the issue. However
> > > > > doing
> > > > > an strace of that workload to disk reproduced the problem faster
> > > > > :-
> > > > > ).
> > > > 
> > > > Ok, that matches up with the "lots of logically sequential dirty
> > > > data on a single inode in cache" vector that is required to create
> > > > really long bio chains on individual ioends.
> > > > 
> > > > Can you try the patch below and see if addresses the issue?
> > > > 
> > > 
> > > That patch does seem to fix the soft lockups.
> > > 
> > 
> > Oops... Strike that, apparently our tests just hit the following when
> > running on AWS with that patch.
> 
> OK, so there are also large contiguous physical extents being
> allocated in some cases here.
> 
> > So it was harder to hit, but we still did eventually.
> 
> Yup, that's what I wanted to know - it indicates that both the
> filesystem completion processing and the iomap page processing play
> a role in the CPU usage. More complex patch for you to try below...
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
> 
> xfs: limit individual ioend chain length in writeback
> 
> From: Dave Chinner <dchinner@redhat.com>
> 
> Trond Myklebust reported soft lockups in XFS IO completion such as
> this:
> 
>  watchdog: BUG: soft lockup - CPU#12 stuck for 23s! [kworker/12:1:3106]
>  CPU: 12 PID: 3106 Comm: kworker/12:1 Not tainted 4.18.0-305.10.2.el8_4.x86_64 #1
>  Workqueue: xfs-conv/md127 xfs_end_io [xfs]
>  RIP: 0010:_raw_spin_unlock_irqrestore+0x11/0x20
>  Call Trace:
>   wake_up_page_bit+0x8a/0x110
>   iomap_finish_ioend+0xd7/0x1c0
>   iomap_finish_ioends+0x7f/0xb0
>   xfs_end_ioend+0x6b/0x100 [xfs]
>   xfs_end_io+0xb9/0xe0 [xfs]
>   process_one_work+0x1a7/0x360
>   worker_thread+0x1fa/0x390
>   kthread+0x116/0x130
>   ret_from_fork+0x35/0x40
> 
> Ioends are processed as an atomic completion unit when all the
> chained bios in the ioend have completed their IO. Logically
> contiguous ioends can also be merged and completed as a single,
> larger unit.  Both of these things can be problematic because the
> bio chains per ioend and the size of the merged ioends processed as
> a single completion are unbound.
> 
> If we have a large sequential dirty region in the page cache,
> write_cache_pages() will keep feeding us sequential pages and we
> will keep mapping them into ioends and bios until we get a dirty
> page at a non-sequential file offset. These large sequential runs
> can will result in bio and ioend chaining to optimise the io

"can result"?

> patterns. The pages iunder writeback are pinned within these chains

"pages under writeback"

> until the submission chaining is broken, allowing the entire chain
> to be completed. This can result in huge chains being processed
> in IO completion context.
> 
> We get deep bio chaining if we have large contiguous physical
> extents. We will keep adding pages to the current bio until it is
> full, then we'll chain a new bio to keep adding pages for writeback.
> Hence we can build bio chains that map millions of pages and tens of
> gigabytes of RAM if the page cache contains big enough contiguous
> dirty file regions. This long bio chain pins those pages until the
> final bio in the chain completes and the ioend can iterate all the
> chained bios and complete them.
> 
> OTOH, if we have a physically fragmented file, we end up submitting
> one ioend per physical fragment that each have a small bio or bio
> chain attached to them. We do not chain these at IO submission time,
> but instead we chain them at completion time based on file
> offset via iomap_ioend_try_merge(). Hence we can end up with unbound
> ioend chains being built via completion merging.
> 
> XFS can then do COW remapping or unwritten extent conversion on that
> merged chain, which involves walking an extent fragment at a time
> and running a transaction to modify the physical extent information.
> IOWs, we merge all the discontiguous ioends together into a
> contiguous file range, only to then process them individually as
> discontiguous extents.
> 
> This extent manipulation is computationally expensive and can run in
> a tight loop, so merging logically contiguous but physically
> discontiguous ioends gains us nothing except for hiding the fact
> that we broke the ioends up into individual physical extents at
> submission and then need to loop over those individual physical
> extents at completion.

<nod>

> Hence we need to have mechanisms to limit ioend sizes and
> to break up completion processing of large merged ioend chains:
> 
> 1. bio chains per ioend need to be bound in length. Pure overwrites
> go straight to iomap_finish_ioend() in softirq context with the
> exact bio chain attached to the ioend by submission. Hence the only
> way to prevent long holdoffs here is to bound ioend submission
> sizes because we can't reschedule in softirq context.

<nod>

> 2. iomap_finish_ioends() has to handle unbound merged ioend chains
> correctly. This relies on any one call to iomap_finish_ioend() being
> bound in runtime so that cond_resched() can be issued regularly as
> the long ioend chain is processed. i.e. this relies on mechanism #1
> to limit individual ioend sizes to work correctly.

<nod>

> 3. filesystems have to loop over the merged ioends to process
> physical extent manipulations. This means they can loop internally,
> and so we break merging at physical extent boundaries so the
> filesystem can easily insert reschedule points between individual
> extent manipulations.

<nod> I think I grok this all now.  Just a couple minor questions
more...

> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/iomap/buffered-io.c | 47 +++++++++++++++++++++++++++++++++++++++++++----
>  fs/xfs/xfs_aops.c      | 16 +++++++++++++++-
>  include/linux/iomap.h  |  1 +
>  3 files changed, 59 insertions(+), 5 deletions(-)
> 
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 71a36ae120ee..39214577bc46 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -1066,17 +1066,34 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error)
>  	}
>  }
>  
> +/*
> + * Ioend completion routine for merged bios. This can only be called from task
> + * contexts as merged ioends can be of unbound length. Hence we have to break up
> + * the page writeback completion into manageable chunks to avoid long scheduler
> + * holdoffs. We aim to keep scheduler holdoffs down below 10ms so that we get
> + * good batch processing throughput without creating adverse scheduler latency
> + * conditions.
> + */
>  void
>  iomap_finish_ioends(struct iomap_ioend *ioend, int error)
>  {
>  	struct list_head tmp;
> +	int segments;

Nit: io_segments is u32, this should be unsigned int.

> +
> +	might_sleep();
>  
>  	list_replace_init(&ioend->io_list, &tmp);
> +	segments = ioend->io_segments;
>  	iomap_finish_ioend(ioend, error);
>  
>  	while (!list_empty(&tmp)) {
> +		if (segments > 32768) {
> +			cond_resched();
> +			segments = 0;
> +		}
>  		ioend = list_first_entry(&tmp, struct iomap_ioend, io_list);
>  		list_del_init(&ioend->io_list);
> +		segments += ioend->io_segments;
>  		iomap_finish_ioend(ioend, error);
>  	}

I wonder, should we take one more swing at cond_resched at the end of
the function so that we can return to the caller having given the system
at least one chance to reschedule?

(Don't really care all that strongly; aside from the nits I mentioned, I
think I'm comfy with stuffing this one in after willy's iomap
fiolio^Wfoolio^WFOLIOS conversion goes upstream next week.)

--D

>  }
> @@ -1098,6 +1115,15 @@ iomap_ioend_can_merge(struct iomap_ioend *ioend, struct iomap_ioend *next)
>  		return false;
>  	if (ioend->io_offset + ioend->io_size != next->io_offset)
>  		return false;
> +	/*
> +	 * Do not merge physically discontiguous ioends. The filesystem
> +	 * completion functions will have to iterate the physical
> +	 * discontiguities even if we merge the ioends at a logical level, so
> +	 * we don't gain anything by merging physical discontiguities here.
> +	 */
> +	if (ioend->io_inline_bio.bi_iter.bi_sector + (ioend->io_size >> 9) !=
> +	    next->io_inline_bio.bi_iter.bi_sector)
> +		return false;
>  	return true;
>  }
>  
> @@ -1175,6 +1201,7 @@ iomap_submit_ioend(struct iomap_writepage_ctx *wpc, struct iomap_ioend *ioend,
>  		return error;
>  	}
>  
> +	ioend->io_segments += bio_segments(ioend->io_bio);
>  	submit_bio(ioend->io_bio);
>  	return 0;
>  }
> @@ -1199,6 +1226,7 @@ iomap_alloc_ioend(struct inode *inode, struct iomap_writepage_ctx *wpc,
>  	ioend->io_flags = wpc->iomap.flags;
>  	ioend->io_inode = inode;
>  	ioend->io_size = 0;
> +	ioend->io_segments = 0;
>  	ioend->io_offset = offset;
>  	ioend->io_bio = bio;
>  	return ioend;
> @@ -1211,11 +1239,14 @@ iomap_alloc_ioend(struct inode *inode, struct iomap_writepage_ctx *wpc,
>   * so that the bi_private linkage is set up in the right direction for the
>   * traversal in iomap_finish_ioend().
>   */
> -static struct bio *
> -iomap_chain_bio(struct bio *prev)
> +static void
> +iomap_chain_bio(struct iomap_ioend *ioend)
>  {
> +	struct bio *prev = ioend->io_bio;
>  	struct bio *new;
>  
> +	ioend->io_segments += bio_segments(prev);
> +
>  	new = bio_alloc(GFP_NOFS, BIO_MAX_VECS);
>  	bio_copy_dev(new, prev);/* also copies over blkcg information */
>  	new->bi_iter.bi_sector = bio_end_sector(prev);
> @@ -1225,7 +1256,8 @@ iomap_chain_bio(struct bio *prev)
>  	bio_chain(prev, new);
>  	bio_get(prev);		/* for iomap_finish_ioend */
>  	submit_bio(prev);
> -	return new;
> +
> +	ioend->io_bio = new;
>  }
>  
>  static bool
> @@ -1241,6 +1273,13 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
>  		return false;
>  	if (sector != bio_end_sector(wpc->ioend->io_bio))
>  		return false;
> +	/*
> +	 * Limit ioend bio chain lengths to minimise IO completion latency. This
> +	 * also prevents long tight loops ending page writeback on all the pages
> +	 * in the ioend.
> +	 */
> +	if (wpc->ioend->io_segments >= 4096)
> +		return false;
>  	return true;
>  }
>  
> @@ -1264,7 +1303,7 @@ iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
>  	}
>  
>  	if (bio_add_page(wpc->ioend->io_bio, page, len, poff) != len) {
> -		wpc->ioend->io_bio = iomap_chain_bio(wpc->ioend->io_bio);
> +		iomap_chain_bio(wpc->ioend);
>  		__bio_add_page(wpc->ioend->io_bio, page, len, poff);
>  	}
>  
> diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> index c8c15c3c3147..148a8fce7029 100644
> --- a/fs/xfs/xfs_aops.c
> +++ b/fs/xfs/xfs_aops.c
> @@ -136,7 +136,20 @@ xfs_end_ioend(
>  	memalloc_nofs_restore(nofs_flag);
>  }
>  
> -/* Finish all pending io completions. */
> +/*
> + * Finish all pending IO completions that require transactional modifications.
> + *
> + * We try to merge physically and logically contiguous ioends before completion to
> + * minimise the number of transactions we need to perform during IO completion.
> + * Both unwritten extent conversion and COW remapping need to iterate and modify
> + * one physical extent at a time, so we gain nothing by merging physically
> + * discontiguous extents here.
> + *
> + * The ioend chain length that we can be processing here is largely unbound
> + * and we may have to perform significant amounts of work on each ioend
> + * to complete it. Hence we have to be careful about holding the CPU for too
> + * long in this loop.
> + */
>  void
>  xfs_end_io(
>  	struct work_struct	*work)
> @@ -157,6 +170,7 @@ xfs_end_io(
>  		list_del_init(&ioend->io_list);
>  		iomap_ioend_try_merge(ioend, &tmp);
>  		xfs_end_ioend(ioend);
> +		cond_resched();
>  	}
>  }
>  
> diff --git a/include/linux/iomap.h b/include/linux/iomap.h
> index 6d1b08d0ae93..bfdba72f4e30 100644
> --- a/include/linux/iomap.h
> +++ b/include/linux/iomap.h
> @@ -257,6 +257,7 @@ struct iomap_ioend {
>  	struct list_head	io_list;	/* next ioend in chain */
>  	u16			io_type;
>  	u16			io_flags;	/* IOMAP_F_* */
> +	u32			io_segments;
>  	struct inode		*io_inode;	/* file being written to */
>  	size_t			io_size;	/* size of the extent */
>  	loff_t			io_offset;	/* offset in the file */
Brian Foster Jan. 6, 2022, 4:44 p.m. UTC | #34
On Thu, Jan 06, 2022 at 09:04:21AM +1100, Dave Chinner wrote:
> On Wed, Jan 05, 2022 at 08:56:33AM -0500, Brian Foster wrote:
> > On Wed, Jan 05, 2022 at 01:10:22PM +1100, Dave Chinner wrote:
> > > On Tue, Jan 04, 2022 at 03:12:30PM -0800, Darrick J. Wong wrote:
> > > > So looking at xfs_end_io:
> > > > 
> > > > /* Finish all pending io completions. */
> > > > void
> > > > xfs_end_io(
> > > > 	struct work_struct	*work)
> > > > {
> > > > 	struct xfs_inode	*ip =
> > > > 		container_of(work, struct xfs_inode, i_ioend_work);
> > > > 	struct iomap_ioend	*ioend;
> > > > 	struct list_head	tmp;
> > > > 	unsigned long		flags;
> > > > 
> > > > 	spin_lock_irqsave(&ip->i_ioend_lock, flags);
> > > > 	list_replace_init(&ip->i_ioend_list, &tmp);
> > > > 	spin_unlock_irqrestore(&ip->i_ioend_lock, flags);
> > > > 
> > > > 	iomap_sort_ioends(&tmp);
> > > > 	while ((ioend = list_first_entry_or_null(&tmp, struct iomap_ioend,
> > > > 			io_list))) {
> > > > 		list_del_init(&ioend->io_list);
> > > > 
> > > > Here we pull the first ioend off the sorted list of ioends.
> > > > 
> > > > 		iomap_ioend_try_merge(ioend, &tmp);
> > > > 
> > > > Now we've merged that first ioend with as many subsequent ioends as we
> > > > could merge.  Let's say there were 200 ioends, each 100MB.  Now ioend
> > > 
> > > Ok, so how do we get to this completion state right now?
> > > 
> > > 1. an ioend is a physically contiguous extent so submission is
> > >    broken down into an ioend per physical extent.
> > > 2. we merge logically contiguous ioends at completion.
> > > 
> > > So, if we have 200 ioends of 100MB each that are logically
> > > contiguous we'll currently always merge them into a single 20GB
> > > ioend that gets processed as a single entity even if submission
> > > broke them up because they were physically discontiguous.
> > > 
> > > Now, with this patch we add:
> > > 
> > > 3. Individual ioends are limited to 16MB.
> > > 4. completion can only merge physically contiguous ioends.
> > > 5. we cond_resched() between physically contiguous ioend completion.
> > > 
> > > Submission will break that logically contiguous 20GB dirty range
> > > down into 200x6x16MB ioends.
> > > 
> > > Now completion will only merge ioends that are both physically and
> > > logically contiguous. That results in a maximum merged ioend chain
> > > size of 100MB at completion. They'll get merged one 100MB chunk at a
> > > time.
> > > 
> > 
> > I'm missing something with the reasoning here.. how does a contiguity
> > check in the ioend merge code guarantee we don't construct an
> > excessively large list of pages via a chain of merged ioends? Obviously
> > it filters out the discontig case, but what if the extents are
> > physically contiguous?
> 
> It doesn't. I keep saying there are two aspects of this problem -
> one is the filesystem looping doing considerable work over multiple
> physical extents (would be 200 extent conversions in a tight loop
> via xfs_iomap_write_unwritten()) before we even call into
> iomap_finish_ioends() to process the pages in the merged ioend
> chain.
> 
> Darrick is trying to address with the cond_resched() calls in
> iomap_finish_ioends(), but is missing the looping being done in
> xfs_end_ioend() prior to calling iomap_finish_ioends().  Badly
> fragmented merged ioend completion will loop for much longer in
> xfs_iomap_write_unwritten() than they will in
> iomap_finish_ioends()....
> 

I'm just pointing out that the patch didn't seem to fully address the
reported issue.

> > > >  * Mark writeback finished on a chain of ioends.  Caller must not call
> > > >  * this function from atomic/softirq context.
> > > >  */
> > > > void
> > > > iomap_finish_ioends(struct iomap_ioend *ioend, int error)
> > > > {
> > > > 	struct list_head tmp;
> > > > 
> > > > 	list_replace_init(&ioend->io_list, &tmp);
> > > > 	iomap_finish_ioend(ioend, error);
> > > > 
> > > > 	while (!list_empty(&tmp)) {
> > > > 		cond_resched();
> > > > 
> > > > So I propose doing it ^^^ here instead.
> > > > 
> > > > 		ioend = list_first_entry(&tmp, struct iomap_ioend, io_list);
> > > > 		list_del_init(&ioend->io_list);
> > > > 		iomap_finish_ioend(ioend, error);
> > > > 	}
> > > > }
> > 
> > Hmm.. I'm not seeing how this is much different from Dave's patch, and
> > I'm not totally convinced the cond_resched() in Dave's patch is
> > effective without something like Darrick's earlier suggestion to limit
> > the $object (page/folio/whatever) count of the entire merged mapping (to
> > ensure that iomap_finish_ioend() is no longer a soft lockup vector by
> > itself).
> 
> Yes, that's what I did immediately after posting the first patch for
> Trond to test a couple of days ago.  The original patch was an
> attempt to make a simple, easily backportable fix to mitigate the
> issue without excessive cond_resched() overhead, not a "perfect
> solution".
> 
> > Trond reports that the test patch mitigates his reproducer, but that
> > patch also includes the ioend size cap and so the test doesn't
> > necessarily isolate whether the cond_resched() is effective or whether
> > the additional submission/completion overhead is enough to avoid the
> > pathological conditions that enable it via the XFS merging code. I'd be
> > curious to have a more tangible datapoint on that. The easiest way to
> > test without getting into the weeds of looking at merging behavior is
> > probably just see whether the problem returns with the cond_resched()
> > removed and all of the other changes in place. Trond, is that something
> > you can test?
> 
> Trond has already reported a new softlockup that indicates we still
> need a cond_resched() in iomap_finish_ioends() even with the patch I
> posted. So we've got the feedback we needed from Trond already, from
> both the original patch (fine grained cond_resched()) and from the
> patch I sent for him to test.
> 

Ok, that's what I suspected would occur eventually.

> What this tells us is we actually need *3* layers of co-ordination
> here:
> 
> 1. bio chains per ioend need to be bound in length. Pure overwrites
> go straight to iomap_finish_ioend() in softirq context with the
> exact bio chain attached to the ioend by submission. Hence the only
> way to prevent long holdoffs here is to bound ioend submission
> sizes.
> 
> 2. iomap_finish_ioends() has to handle unbound merged ioend chains
> correctly. This relies on any one call to iomap_finish_ioend() being
> bound in runtime so that cond_resched() can be issued regularly as
> the long ioend chain is processed. i.e. this relies on mechanism #1
> to limit individual ioend sizes to work correctly.
> 
> 3. filesystems have to loop over the merged ioends to process
> physical extent manipulations. This means they can loop internally,
> and so we break merging at physical extent boundaries so the
> filesystem can easily insert reschedule points between individual
> extent manipulations.
> 
> See the patch below.
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
> 
> xfs: limit individual ioend chain length in writeback
> 
> From: Dave Chinner <dchinner@redhat.com>
> 
> Trond Myklebust reported soft lockups in XFS IO completion such as
> this:
> 
>  watchdog: BUG: soft lockup - CPU#12 stuck for 23s! [kworker/12:1:3106]
>  CPU: 12 PID: 3106 Comm: kworker/12:1 Not tainted 4.18.0-305.10.2.el8_4.x86_64 #1
>  Workqueue: xfs-conv/md127 xfs_end_io [xfs]
>  RIP: 0010:_raw_spin_unlock_irqrestore+0x11/0x20
>  Call Trace:
>   wake_up_page_bit+0x8a/0x110
>   iomap_finish_ioend+0xd7/0x1c0
>   iomap_finish_ioends+0x7f/0xb0
>   xfs_end_ioend+0x6b/0x100 [xfs]
>   xfs_end_io+0xb9/0xe0 [xfs]
>   process_one_work+0x1a7/0x360
>   worker_thread+0x1fa/0x390
>   kthread+0x116/0x130
>   ret_from_fork+0x35/0x40
> 
> Ioends are processed as an atomic completion unit when all the
> chained bios in the ioend have completed their IO. Logically
> contiguous ioends can also be merged and completed as a single,
> larger unit.  Both of these things can be problematic because the
> bio chains per ioend and the size of the merged ioends processed as
> a single completion are unbound.
> 
> If we have a large sequential dirty region in the page cache,
> write_cache_pages() will keep feeding us sequential pages and we
> will keep mapping them into ioends and bios until we get a dirty
> page at a non-sequential file offset. These large sequential runs
> can result in bio and ioend chaining to optimise the io
> patterns. The pages under writeback are pinned within these chains
> until the submission chaining is broken, allowing the entire chain
> to be completed. This can result in huge chains being processed
> in IO completion context.
> 
> We get deep bio chaining if we have large contiguous physical
> extents. We will keep adding pages to the current bio until it is
> full, then we'll chain a new bio to keep adding pages for writeback.
> Hence we can build bio chains that map millions of pages and tens of
> gigabytes of RAM if the page cache contains big enough contiguous
> dirty file regions. This long bio chain pins those pages until the
> final bio in the chain completes and the ioend can iterate all the
> chained bios and complete them.
> 
> OTOH, if we have a physically fragmented file, we end up submitting
> one ioend per physical fragment that each have a small bio or bio
> chain attached to them. We do not chain these at IO submission time,
> but instead we chain them at completion time based on file
> offset via iomap_ioend_try_merge(). Hence we can end up with unbound
> ioend chains being built via completion merging.
> 
> XFS can then do COW remapping or unwritten extent conversion on that
> merged chain, which involves walking an extent fragment at a time
> and running a transaction to modify the physical extent information.
> IOWs, we merge all the discontiguous ioends together into a
> contiguous file range, only to then process them individually as
> discontiguous extents.
> 
> This extent manipulation is computationally expensive and can run in
> a tight loop, so merging logically contiguous but physically
> discontiguous ioends gains us nothing except for hiding the fact
> that we broke the ioends up into individual physical extents at
> submission and then need to loop over those individual physical
> extents at completion.
> 
> Hence we need to have mechanisms to limit ioend sizes and
> to break up completion processing of large merged ioend chains:
> 
> 1. bio chains per ioend need to be bound in length. Pure overwrites
> go straight to iomap_finish_ioend() in softirq context with the
> exact bio chain attached to the ioend by submission. Hence the only
> way to prevent long holdoffs here is to bound ioend submission
> sizes because we can't reschedule in softirq context.
> 
> 2. iomap_finish_ioends() has to handle unbound merged ioend chains
> correctly. This relies on any one call to iomap_finish_ioend() being
> bound in runtime so that cond_resched() can be issued regularly as
> the long ioend chain is processed. i.e. this relies on mechanism #1
> to limit individual ioend sizes to work correctly.
> 
> 3. filesystems have to loop over the merged ioends to process
> physical extent manipulations. This means they can loop internally,
> and so we break merging at physical extent boundaries so the
> filesystem can easily insert reschedule points between individual
> extent manipulations.
> 

It's not clear to me if the intent is to split this up or not, but ISTM
that the capping of ioend size and ioend merging logic can stand alone
as independent changes.

> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/iomap/buffered-io.c | 47 +++++++++++++++++++++++++++++++++++++++++++----
>  fs/xfs/xfs_aops.c      | 16 +++++++++++++++-
>  include/linux/iomap.h  |  1 +
>  3 files changed, 59 insertions(+), 5 deletions(-)
> 
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 71a36ae120ee..39214577bc46 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -1066,17 +1066,34 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error)
>  	}
>  }
>  
> +/*
> + * Ioend completion routine for merged bios. This can only be called from task
> + * contexts as merged ioends can be of unbound length. Hence we have to break up
> + * the page writeback completion into manageable chunks to avoid long scheduler
> + * holdoffs. We aim to keep scheduler holdoffs down below 10ms so that we get
> + * good batch processing throughput without creating adverse scheduler latency
> + * conditions.
> + */
>  void
>  iomap_finish_ioends(struct iomap_ioend *ioend, int error)
>  {
>  	struct list_head tmp;
> +	int segments;
> +
> +	might_sleep();
>  
>  	list_replace_init(&ioend->io_list, &tmp);
> +	segments = ioend->io_segments;
>  	iomap_finish_ioend(ioend, error);
>  
>  	while (!list_empty(&tmp)) {
> +		if (segments > 32768) {
> +			cond_resched();
> +			segments = 0;
> +		}

How is this intended to address the large bi_vec scenario? AFAICT
bio_segments() doesn't account for multipage bvecs so the above logic
can allow something like 34b (?) 4k pages before a yield.

That aside, I find the approach odd in that we calculate the segment
count for each bio via additional iteration (which is how bio_segments()
works) and track the summation of the chain in the ioend only to provide
iomap_finish_ioends() with a subtly inaccurate view of how much work
iomap_finish_ioend() is doing as the loop iterates. We already have this
information in completion context and iomap_finish_ioends() is just a
small iterator function, so I don't understand why we wouldn't do
something like factor these two loops into a non-atomic context only
variant that yields based on the actual amount of page processing work
being done (i.e. including multipage bvecs). That seems more robust and
simple to me, but that's just my .02.

Brian

>  		ioend = list_first_entry(&tmp, struct iomap_ioend, io_list);
>  		list_del_init(&ioend->io_list);
> +		segments += ioend->io_segments;
>  		iomap_finish_ioend(ioend, error);
>  	}
>  }
> @@ -1098,6 +1115,15 @@ iomap_ioend_can_merge(struct iomap_ioend *ioend, struct iomap_ioend *next)
>  		return false;
>  	if (ioend->io_offset + ioend->io_size != next->io_offset)
>  		return false;
> +	/*
> +	 * Do not merge physically discontiguous ioends. The filesystem
> +	 * completion functions will have to iterate the physical
> +	 * discontiguities even if we merge the ioends at a logical level, so
> +	 * we don't gain anything by merging physical discontiguities here.
> +	 */
> +	if (ioend->io_inline_bio.bi_iter.bi_sector + (ioend->io_size >> 9) !=
> +	    next->io_inline_bio.bi_iter.bi_sector)
> +		return false;
>  	return true;
>  }
>  
> @@ -1175,6 +1201,7 @@ iomap_submit_ioend(struct iomap_writepage_ctx *wpc, struct iomap_ioend *ioend,
>  		return error;
>  	}
>  
> +	ioend->io_segments += bio_segments(ioend->io_bio);
>  	submit_bio(ioend->io_bio);
>  	return 0;
>  }
> @@ -1199,6 +1226,7 @@ iomap_alloc_ioend(struct inode *inode, struct iomap_writepage_ctx *wpc,
>  	ioend->io_flags = wpc->iomap.flags;
>  	ioend->io_inode = inode;
>  	ioend->io_size = 0;
> +	ioend->io_segments = 0;
>  	ioend->io_offset = offset;
>  	ioend->io_bio = bio;
>  	return ioend;
> @@ -1211,11 +1239,14 @@ iomap_alloc_ioend(struct inode *inode, struct iomap_writepage_ctx *wpc,
>   * so that the bi_private linkage is set up in the right direction for the
>   * traversal in iomap_finish_ioend().
>   */
> -static struct bio *
> -iomap_chain_bio(struct bio *prev)
> +static void
> +iomap_chain_bio(struct iomap_ioend *ioend)
>  {
> +	struct bio *prev = ioend->io_bio;
>  	struct bio *new;
>  
> +	ioend->io_segments += bio_segments(prev);
> +
>  	new = bio_alloc(GFP_NOFS, BIO_MAX_VECS);
>  	bio_copy_dev(new, prev);/* also copies over blkcg information */
>  	new->bi_iter.bi_sector = bio_end_sector(prev);
> @@ -1225,7 +1256,8 @@ iomap_chain_bio(struct bio *prev)
>  	bio_chain(prev, new);
>  	bio_get(prev);		/* for iomap_finish_ioend */
>  	submit_bio(prev);
> -	return new;
> +
> +	ioend->io_bio = new;
>  }
>  
>  static bool
> @@ -1241,6 +1273,13 @@ iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t offset,
>  		return false;
>  	if (sector != bio_end_sector(wpc->ioend->io_bio))
>  		return false;
> +	/*
> +	 * Limit ioend bio chain lengths to minimise IO completion latency. This
> +	 * also prevents long tight loops ending page writeback on all the pages
> +	 * in the ioend.
> +	 */
> +	if (wpc->ioend->io_segments >= 4096)
> +		return false;
>  	return true;
>  }
>  
> @@ -1264,7 +1303,7 @@ iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
>  	}
>  
>  	if (bio_add_page(wpc->ioend->io_bio, page, len, poff) != len) {
> -		wpc->ioend->io_bio = iomap_chain_bio(wpc->ioend->io_bio);
> +		iomap_chain_bio(wpc->ioend);
>  		__bio_add_page(wpc->ioend->io_bio, page, len, poff);
>  	}
>  
> diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> index c8c15c3c3147..148a8fce7029 100644
> --- a/fs/xfs/xfs_aops.c
> +++ b/fs/xfs/xfs_aops.c
> @@ -136,7 +136,20 @@ xfs_end_ioend(
>  	memalloc_nofs_restore(nofs_flag);
>  }
>  
> -/* Finish all pending io completions. */
> +/*
> + * Finish all pending IO completions that require transactional modifications.
> + *
> + * We try to merge physically and logically contiguous ioends before completion to
> + * minimise the number of transactions we need to perform during IO completion.
> + * Both unwritten extent conversion and COW remapping need to iterate and modify
> + * one physical extent at a time, so we gain nothing by merging physically
> + * discontiguous extents here.
> + *
> + * The ioend chain length that we can be processing here is largely unbound
> + * and we may have to perform significant amounts of work on each ioend
> + * to complete it. Hence we have to be careful about holding the CPU for too
> + * long in this loop.
> + */
>  void
>  xfs_end_io(
>  	struct work_struct	*work)
> @@ -157,6 +170,7 @@ xfs_end_io(
>  		list_del_init(&ioend->io_list);
>  		iomap_ioend_try_merge(ioend, &tmp);
>  		xfs_end_ioend(ioend);
> +		cond_resched();
>  	}
>  }
>  
> diff --git a/include/linux/iomap.h b/include/linux/iomap.h
> index 6d1b08d0ae93..bfdba72f4e30 100644
> --- a/include/linux/iomap.h
> +++ b/include/linux/iomap.h
> @@ -257,6 +257,7 @@ struct iomap_ioend {
>  	struct list_head	io_list;	/* next ioend in chain */
>  	u16			io_type;
>  	u16			io_flags;	/* IOMAP_F_* */
> +	u32			io_segments;
>  	struct inode		*io_inode;	/* file being written to */
>  	size_t			io_size;	/* size of the extent */
>  	loff_t			io_offset;	/* offset in the file */
>
Trond Myklebust Jan. 6, 2022, 6:36 p.m. UTC | #35
On Thu, 2022-01-06 at 09:48 +1100, Dave Chinner wrote:
> On Wed, Jan 05, 2022 at 08:45:05PM +0000, Trond Myklebust wrote:
> > On Tue, 2022-01-04 at 21:09 -0500, Trond Myklebust wrote:
> > > On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> > > > On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust
> > > > wrote:
> > > > > We have different reproducers. The common feature appears to
> > > > > be
> > > > > the
> > > > > need for a decently fast box with fairly large memory (128GB
> > > > > in
> > > > > one
> > > > > case, 400GB in the other). It has been reproduced with HDs,
> > > > > SSDs
> > > > > and
> > > > > NVME systems.
> > > > > 
> > > > > On the 128GB box, we had it set up with 10+ disks in a JBOD
> > > > > configuration and were running the AJA system tests.
> > > > > 
> > > > > On the 400GB box, we were just serially creating large (>
> > > > > 6GB)
> > > > > files
> > > > > using fio and that was occasionally triggering the issue.
> > > > > However
> > > > > doing
> > > > > an strace of that workload to disk reproduced the problem
> > > > > faster
> > > > > :-
> > > > > ).
> > > > 
> > > > Ok, that matches up with the "lots of logically sequential
> > > > dirty
> > > > data on a single inode in cache" vector that is required to
> > > > create
> > > > really long bio chains on individual ioends.
> > > > 
> > > > > Can you try the patch below and see if it addresses the issue?
> > > > 
> > > 
> > > That patch does seem to fix the soft lockups.
> > > 
> > 
> > Oops... Strike that, apparently our tests just hit the following
> > when
> > running on AWS with that patch.
> 
> OK, so there are also large contiguous physical extents being
> allocated in some cases here.
> 
> > So it was harder to hit, but we still did eventually.
> 
> Yup, that's what I wanted to know - it indicates that both the
> filesystem completion processing and the iomap page processing play
> a role in the CPU usage. More complex patch for you to try below...
> 
> Cheers,
> 
> Dave.

Hi Dave,

This patch got further than the previous one. However it too failed on
the same AWS setup after we started creating larger (in this case 52GB)
files. The previous patch failed at 15GB.

NR_06-18:00:17 pm-46088DSX1 /mnt/data-portal/data $ ls -lh
total 59G
-rw-r----- 1 root root  52G Jan  6 18:20 100g
-rw-r----- 1 root root 9.8G Jan  6 17:38 10g
-rw-r----- 1 root root   29 Jan  6 17:36 file
NR_06-18:20:10 pm-46088DSX1 /mnt/data-portal/data $
Message from syslogd@pm-46088DSX1 at Jan  6 18:22:44 ...
 kernel:[ 5548.082987] watchdog: BUG: soft lockup - CPU#10 stuck for
24s! [kworker/10:0:18995]
Message from syslogd@pm-46088DSX1 at Jan  6 18:23:44 ...
 kernel:[ 5608.082895] watchdog: BUG: soft lockup - CPU#10 stuck for
23s! [kworker/10:0:18995]
Message from syslogd@pm-46088DSX1 at Jan  6 18:27:08 ...
 kernel:[ 5812.082587] watchdog: BUG: soft lockup - CPU#10 stuck for
22s! [kworker/10:0:18995]
Message from syslogd@pm-46088DSX1 at Jan  6 18:27:36 ...
 kernel:[ 5840.082533] watchdog: BUG: soft lockup - CPU#10 stuck for
21s! [kworker/10:0:18995]
Message from syslogd@pm-46088DSX1 at Jan  6 18:28:08 ...
 kernel:[ 5872.082455] watchdog: BUG: soft lockup - CPU#10 stuck for
21s! [kworker/10:0:18995]
Message from syslogd@pm-46088DSX1 at Jan  6 18:28:40 ...
 kernel:[ 5904.082400] watchdog: BUG: soft lockup - CPU#10 stuck for
21s! [kworker/10:0:18995]
Message from syslogd@pm-46088DSX1 at Jan  6 18:29:16 ...
 kernel:[ 5940.082243] watchdog: BUG: soft lockup - CPU#10 stuck for
21s! [kworker/10:0:18995]
Message from syslogd@pm-46088DSX1 at Jan  6 18:29:44 ...
 kernel:[ 5968.082249] watchdog: BUG: soft lockup - CPU#10 stuck for
22s! [kworker/10:0:18995]
Message from syslogd@pm-46088DSX1 at Jan  6 18:30:24 ...
 kernel:[ 6008.082204] watchdog: BUG: soft lockup - CPU#10 stuck for
21s! [kworker/10:0:18995]
Message from syslogd@pm-46088DSX1 at Jan  6 18:31:08 ...
 kernel:[ 6052.082194] watchdog: BUG: soft lockup - CPU#10 stuck for
24s! [kworker/10:0:18995]
Message from syslogd@pm-46088DSX1 at Jan  6 18:31:48 ...
 kernel:[ 6092.082010] watchdog: BUG: soft lockup - CPU#10 stuck for
21s! [kworker/10:0:18995]
Trond Myklebust Jan. 6, 2022, 6:38 p.m. UTC | #36
On Thu, 2022-01-06 at 13:36 -0500, Trond Myklebust wrote:
> On Thu, 2022-01-06 at 09:48 +1100, Dave Chinner wrote:
> > On Wed, Jan 05, 2022 at 08:45:05PM +0000, Trond Myklebust wrote:
> > > On Tue, 2022-01-04 at 21:09 -0500, Trond Myklebust wrote:
> > > > On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> > > > > On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust
> > > > > wrote:
> > > > > > We have different reproducers. The common feature appears
> > > > > > to
> > > > > > be
> > > > > > the
> > > > > > need for a decently fast box with fairly large memory
> > > > > > (128GB
> > > > > > in
> > > > > > one
> > > > > > case, 400GB in the other). It has been reproduced with HDs,
> > > > > > SSDs
> > > > > > and
> > > > > > NVME systems.
> > > > > > 
> > > > > > On the 128GB box, we had it set up with 10+ disks in a JBOD
> > > > > > configuration and were running the AJA system tests.
> > > > > > 
> > > > > > On the 400GB box, we were just serially creating large (>
> > > > > > 6GB)
> > > > > > files
> > > > > > using fio and that was occasionally triggering the issue.
> > > > > > However
> > > > > > doing
> > > > > > an strace of that workload to disk reproduced the problem
> > > > > > faster
> > > > > > :-
> > > > > > ).
> > > > > 
> > > > > Ok, that matches up with the "lots of logically sequential
> > > > > dirty
> > > > > data on a single inode in cache" vector that is required to
> > > > > create
> > > > > really long bio chains on individual ioends.
> > > > > 
> > > > > Can you try the patch below and see if addresses the issue?
> > > > > 
> > > > 
> > > > That patch does seem to fix the soft lockups.
> > > > 
> > > 
> > > Oops... Strike that, apparently our tests just hit the following
> > > when
> > > running on AWS with that patch.
> > 
> > OK, so there are also large contiguous physical extents being
> > allocated in some cases here.
> > 
> > > So it was harder to hit, but we still did eventually.
> > 
> > Yup, that's what I wanted to know - it indicates that both the
> > filesystem completion processing and the iomap page processing play
> > a role in the CPU usage. More complex patch for you to try below...
> > 
> > Cheers,
> > 
> > Dave.
> 
> Hi Dave,
> 
> This patch got further than the previous one. However it too failed
> on
> the same AWS setup after we started creating larger (in this case
> 52GB)
> files. The previous patch failed at 15GB.
> 
> NR_06-18:00:17 pm-46088DSX1 /mnt/data-portal/data $ ls -lh
> total 59G
> -rw-r----- 1 root root  52G Jan  6 18:20 100g
> -rw-r----- 1 root root 9.8G Jan  6 17:38 10g
> -rw-r----- 1 root root   29 Jan  6 17:36 file
> NR_06-18:20:10 pm-46088DSX1 /mnt/data-portal/data $
> Message from syslogd@pm-46088DSX1 at Jan  6 18:22:44 ...
>  kernel:[ 5548.082987] watchdog: BUG: soft lockup - CPU#10 stuck for
> 24s! [kworker/10:0:18995]
> Message from syslogd@pm-46088DSX1 at Jan  6 18:23:44 ...
>  kernel:[ 5608.082895] watchdog: BUG: soft lockup - CPU#10 stuck for
> 23s! [kworker/10:0:18995]
> Message from syslogd@pm-46088DSX1 at Jan  6 18:27:08 ...
>  kernel:[ 5812.082587] watchdog: BUG: soft lockup - CPU#10 stuck for
> 22s! [kworker/10:0:18995]
> Message from syslogd@pm-46088DSX1 at Jan  6 18:27:36 ...
>  kernel:[ 5840.082533] watchdog: BUG: soft lockup - CPU#10 stuck for
> 21s! [kworker/10:0:18995]
> Message from syslogd@pm-46088DSX1 at Jan  6 18:28:08 ...
>  kernel:[ 5872.082455] watchdog: BUG: soft lockup - CPU#10 stuck for
> 21s! [kworker/10:0:18995]
> Message from syslogd@pm-46088DSX1 at Jan  6 18:28:40 ...
>  kernel:[ 5904.082400] watchdog: BUG: soft lockup - CPU#10 stuck for
> 21s! [kworker/10:0:18995]
> Message from syslogd@pm-46088DSX1 at Jan  6 18:29:16 ...
>  kernel:[ 5940.082243] watchdog: BUG: soft lockup - CPU#10 stuck for
> 21s! [kworker/10:0:18995]
> Message from syslogd@pm-46088DSX1 at Jan  6 18:29:44 ...
>  kernel:[ 5968.082249] watchdog: BUG: soft lockup - CPU#10 stuck for
> 22s! [kworker/10:0:18995]
> Message from syslogd@pm-46088DSX1 at Jan  6 18:30:24 ...
>  kernel:[ 6008.082204] watchdog: BUG: soft lockup - CPU#10 stuck for
> 21s! [kworker/10:0:18995]
> Message from syslogd@pm-46088DSX1 at Jan  6 18:31:08 ...
>  kernel:[ 6052.082194] watchdog: BUG: soft lockup - CPU#10 stuck for
> 24s! [kworker/10:0:18995]
> Message from syslogd@pm-46088DSX1 at Jan  6 18:31:48 ...
>  kernel:[ 6092.082010] watchdog: BUG: soft lockup - CPU#10 stuck for
> 21s! [kworker/10:0:18995]
> 

Just to confirm that these are indeed the same XFS hangs:

[Thu Jan  6 18:33:58 2022] watchdog: BUG: soft lockup - CPU#10 stuck
for 24s! [kworker/10:0:18995]
[Thu Jan  6 18:33:58 2022] Modules linked in: nfsv3 auth_name
bpf_preload xt_nat veth nfs_layout_flexfiles rpcsec_gss_krb5 nfsv4
dns_resolver nfsidmap nfs fscache netfs dm_multipath nfsd auth_rpcgss
nfs_acl lockd grace sunrpc xt_MASQUERADE nf_conntrack_netlink
xt_addrtype br_netfilter bridge stp llc overlay xt_sctp
nf_conntrack_netbios_ns nf_conntrack_broadcast nf_nat_ftp
nf_conntrack_ftp xt_CT ip6t_rpfilter ip6t_REJECT nf_reject_ipv6
ipt_REJECT nf_reject_ipv4 xt_conntrack ip6table_nat ip6table_mangle
ip6table_security ip6table_raw iptable_nat nf_nat iptable_mangle
iptable_security iptable_raw nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4
ip_set nfnetlink ip6table_filter ip6_tables iptable_filter bonding tls
ipmi_msghandler intel_rapl_msr intel_rapl_common isst_if_common nfit
libnvdimm i2c_piix4 crct10dif_pclmul crc32_pclmul ghash_clmulni_intel
rapl ip_tables xfs nvme ena nvme_core crc32c_intel
[Thu Jan  6 18:33:58 2022] CPU: 10 PID: 18995 Comm: kworker/10:0 Kdump:
loaded Tainted: G             L    5.15.12-200.pd.17721.el7.x86_64 #1
[Thu Jan  6 18:33:58 2022] Hardware name: Amazon EC2 r5b.4xlarge/, BIOS
1.0 10/16/2017
[Thu Jan  6 18:33:58 2022] Workqueue: xfs-conv/nvme1n1 xfs_end_io [xfs]
[Thu Jan  6 18:33:58 2022] RIP:
0010:_raw_spin_unlock_irqrestore+0x1c/0x20
[Thu Jan  6 18:33:58 2022] Code: 92 cc cc cc cc cc cc cc cc cc cc cc cc
cc 0f 1f 44 00 00 c6 07 00 0f 1f 40 00 f7 c6 00 02 00 00 75 01 c3 fb 66
0f 1f 44 00 00 <c3> 0f 1f 00 0f 1f 44 00 00 8b 07 a9 ff 01 00 00 75 21
b8 00 02 00
[Thu Jan  6 18:33:58 2022] RSP: 0018:ffffac380beffd08 EFLAGS: 00000206
[Thu Jan  6 18:33:58 2022] RAX: 0000000000000001 RBX: 00000000000015c0
RCX: ffffffffffffb9a2
[Thu Jan  6 18:33:58 2022] RDX: ffffffff85809148 RSI: 0000000000000206
RDI: ffffffff85809140
[Thu Jan  6 18:33:58 2022] RBP: 0000000000000206 R08: ffffac380888fc80
R09: ffffac380888fc80
[Thu Jan  6 18:33:58 2022] R10: 00000000000000a0 R11: 0000000000000000
R12: ffffffff85809140
[Thu Jan  6 18:33:58 2022] R13: ffffe94e2ef6d780 R14: ffff95fad1053438
R15: ffffe94e2ef6d780
[Thu Jan  6 18:33:58 2022] FS:  0000000000000000(0000)
GS:ffff9612a3c80000(0000) knlGS:0000000000000000
[Thu Jan  6 18:33:58 2022] CS:  0010 DS: 0000 ES: 0000 CR0:
0000000080050033
[Thu Jan  6 18:33:58 2022] CR2: 00007f9294723080 CR3: 0000001692810004
CR4: 00000000007706e0
[Thu Jan  6 18:33:58 2022] DR0: 0000000000000000 DR1: 0000000000000000
DR2: 0000000000000000
[Thu Jan  6 18:33:58 2022] DR3: 0000000000000000 DR6: 00000000fffe0ff0
DR7: 0000000000000400
[Thu Jan  6 18:33:58 2022] PKRU: 55555554
[Thu Jan  6 18:33:58 2022] Call Trace:
[Thu Jan  6 18:33:58 2022]  <TASK>
[Thu Jan  6 18:33:58 2022]  wake_up_page_bit+0x79/0xe0
[Thu Jan  6 18:33:58 2022]  end_page_writeback+0xc4/0xf0
[Thu Jan  6 18:33:58 2022]  iomap_finish_ioend+0x130/0x260
[Thu Jan  6 18:33:58 2022]  ? xfs_iunlock+0xa4/0xf0 [xfs]
[Thu Jan  6 18:33:58 2022]  iomap_finish_ioends+0x77/0xa0
[Thu Jan  6 18:33:58 2022]  xfs_end_ioend+0x5a/0x120 [xfs]
[Thu Jan  6 18:33:58 2022]  xfs_end_io+0xa1/0xc0 [xfs]
[Thu Jan  6 18:33:58 2022]  process_one_work+0x1f1/0x390
[Thu Jan  6 18:33:58 2022]  worker_thread+0x53/0x3e0
[Thu Jan  6 18:33:58 2022]  ? process_one_work+0x390/0x390
[Thu Jan  6 18:33:58 2022]  kthread+0x127/0x150
[Thu Jan  6 18:33:58 2022]  ? set_kthread_struct+0x40/0x40
[Thu Jan  6 18:33:58 2022]  ret_from_fork+0x22/0x30
[Thu Jan  6 18:33:58 2022]  </TASK>
Brian Foster Jan. 6, 2022, 8:07 p.m. UTC | #37
On Thu, Jan 06, 2022 at 06:36:52PM +0000, Trond Myklebust wrote:
> On Thu, 2022-01-06 at 09:48 +1100, Dave Chinner wrote:
> > On Wed, Jan 05, 2022 at 08:45:05PM +0000, Trond Myklebust wrote:
> > > On Tue, 2022-01-04 at 21:09 -0500, Trond Myklebust wrote:
> > > > On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> > > > > On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust
> > > > > wrote:
> > > > > > We have different reproducers. The common feature appears to
> > > > > > be
> > > > > > the
> > > > > > need for a decently fast box with fairly large memory (128GB
> > > > > > in
> > > > > > one
> > > > > > case, 400GB in the other). It has been reproduced with HDs,
> > > > > > SSDs
> > > > > > and
> > > > > > NVME systems.
> > > > > > 
> > > > > > On the 128GB box, we had it set up with 10+ disks in a JBOD
> > > > > > configuration and were running the AJA system tests.
> > > > > > 
> > > > > > On the 400GB box, we were just serially creating large (>
> > > > > > 6GB)
> > > > > > files
> > > > > > using fio and that was occasionally triggering the issue.
> > > > > > However
> > > > > > doing
> > > > > > an strace of that workload to disk reproduced the problem
> > > > > > faster
> > > > > > :-
> > > > > > ).
> > > > > 
> > > > > Ok, that matches up with the "lots of logically sequential
> > > > > dirty
> > > > > data on a single inode in cache" vector that is required to
> > > > > create
> > > > > really long bio chains on individual ioends.
> > > > > 
> > > > > Can you try the patch below and see if addresses the issue?
> > > > > 
> > > > 
> > > > That patch does seem to fix the soft lockups.
> > > > 
> > > 
> > > Oops... Strike that, apparently our tests just hit the following
> > > when
> > > running on AWS with that patch.
> > 
> > OK, so there are also large contiguous physical extents being
> > allocated in some cases here.
> > 
> > > So it was harder to hit, but we still did eventually.
> > 
> > Yup, that's what I wanted to know - it indicates that both the
> > filesystem completion processing and the iomap page processing play
> > a role in the CPU usage. More complex patch for you to try below...
> > 
> > Cheers,
> > 
> > Dave.
> 
> Hi Dave,
> 
> This patch got further than the previous one. However it too failed on
> the same AWS setup after we started creating larger (in this case 52GB)
> files. The previous patch failed at 15GB.
> 

Care to try my old series [1] that attempted to address this, assuming
it still applies to your kernel? You should only need patches 1 and 2.
You can toss in patch 3 if you'd like, but as Dave's earlier patch has
shown, this can just make it harder to reproduce.

I don't know if this will go anywhere as is, but I was never able to get
any sort of confirmation from the previous reporter to understand at
least whether it is effective. I agree with Jens' earlier concern that
the per-page yields are probably overkill, but if it were otherwise
effective it shouldn't be that hard to add filtering. Patch 3 could also
technically be used in place of patch 1 if we really wanted to go that
route, but I wouldn't take that step until there was some verification
that the yielding heuristic is effective.

Brian

[1] https://lore.kernel.org/linux-xfs/20210517171722.1266878-1-bfoster@redhat.com/

Trond Myklebust Jan. 7, 2022, 3:08 a.m. UTC | #38
On Thu, 2022-01-06 at 15:07 -0500, Brian Foster wrote:
> On Thu, Jan 06, 2022 at 06:36:52PM +0000, Trond Myklebust wrote:
> > On Thu, 2022-01-06 at 09:48 +1100, Dave Chinner wrote:
> > > On Wed, Jan 05, 2022 at 08:45:05PM +0000, Trond Myklebust wrote:
> > > > On Tue, 2022-01-04 at 21:09 -0500, Trond Myklebust wrote:
> > > > > On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> > > > > > On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust
> > > > > > wrote:
> > > > > > > We have different reproducers. The common feature appears
> > > > > > > to
> > > > > > > be
> > > > > > > the
> > > > > > > need for a decently fast box with fairly large memory
> > > > > > > (128GB
> > > > > > > in
> > > > > > > one
> > > > > > > case, 400GB in the other). It has been reproduced with
> > > > > > > HDs,
> > > > > > > SSDs
> > > > > > > and
> > > > > > > NVME systems.
> > > > > > > 
> > > > > > > On the 128GB box, we had it set up with 10+ disks in a
> > > > > > > JBOD
> > > > > > > configuration and were running the AJA system tests.
> > > > > > > 
> > > > > > > On the 400GB box, we were just serially creating large (>
> > > > > > > 6GB)
> > > > > > > files
> > > > > > > using fio and that was occasionally triggering the issue.
> > > > > > > However
> > > > > > > doing
> > > > > > > an strace of that workload to disk reproduced the problem
> > > > > > > faster
> > > > > > > :-
> > > > > > > ).
> > > > > > 
> > > > > > Ok, that matches up with the "lots of logically sequential
> > > > > > dirty
> > > > > > data on a single inode in cache" vector that is required to
> > > > > > create
> > > > > > really long bio chains on individual ioends.
> > > > > > 
> > > > > > Can you try the patch below and see if addresses the issue?
> > > > > > 
> > > > > 
> > > > > That patch does seem to fix the soft lockups.
> > > > > 
> > > > 
> > > > Oops... Strike that, apparently our tests just hit the
> > > > following
> > > > when
> > > > running on AWS with that patch.
> > > 
> > > OK, so there are also large contiguous physical extents being
> > > allocated in some cases here.
> > > 
> > > > So it was harder to hit, but we still did eventually.
> > > 
> > > Yup, that's what I wanted to know - it indicates that both the
> > > filesystem completion processing and the iomap page processing
> > > play
> > > a role in the CPU usage. More complex patch for you to try
> > > below...
> > > 
> > > Cheers,
> > > 
> > > Dave.
> > 
> > Hi Dave,
> > 
> > This patch got further than the previous one. However it too failed
> > on
> > the same AWS setup after we started creating larger (in this case
> > 52GB)
> > files. The previous patch failed at 15GB.
> > 
> 
> Care to try my old series [1] that attempted to address this,
> assuming
> it still applies to your kernel? You should only need patches 1 and
> 2.
> You can toss in patch 3 if you'd like, but as Dave's earlier patch
> has
> shown, this can just make it harder to reproduce.
> 
> I don't know if this will go anywhere as is, but I was never able to
> get
> any sort of confirmation from the previous reporter to understand at
> least whether it is effective. I agree with Jens' earlier concern
> that
> the per-page yields are probably overkill, but if it were otherwise
> effective it shouldn't be that hard to add filtering. Patch 3 could
> also
> technically be used in place of patch 1 if we really wanted to go
> that
> route, but I wouldn't take that step until there was some
> verification
> that the yielding heuristic is effective.
> 
> Brian
> 
> [1]
> https://lore.kernel.org/linux-xfs/20210517171722.1266878-1-bfoster@redhat.com/
> 
> 
> 

Hi Brian,

I would expect those to work, since the first patch is essentially
identical to the one I wrote and tested before trying Dave's first
patch version (at least for the special case of XFS). However we never
did test that patch on the AWS setup, so let me try your patches 1 & 2
and see if they get us further than 52GB.
Brian Foster Jan. 7, 2022, 3:15 p.m. UTC | #39
On Fri, Jan 07, 2022 at 03:08:48AM +0000, Trond Myklebust wrote:
> On Thu, 2022-01-06 at 15:07 -0500, Brian Foster wrote:
> > On Thu, Jan 06, 2022 at 06:36:52PM +0000, Trond Myklebust wrote:
> > > On Thu, 2022-01-06 at 09:48 +1100, Dave Chinner wrote:
> > > > On Wed, Jan 05, 2022 at 08:45:05PM +0000, Trond Myklebust wrote:
> > > > > On Tue, 2022-01-04 at 21:09 -0500, Trond Myklebust wrote:
> > > > > > On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> > > > > > > On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust
> > > > > > > wrote:
> > > > > > > > We have different reproducers. The common feature appears
> > > > > > > > to
> > > > > > > > be
> > > > > > > > the
> > > > > > > > need for a decently fast box with fairly large memory
> > > > > > > > (128GB
> > > > > > > > in
> > > > > > > > one
> > > > > > > > case, 400GB in the other). It has been reproduced with
> > > > > > > > HDs,
> > > > > > > > SSDs
> > > > > > > > and
> > > > > > > > NVME systems.
> > > > > > > > 
> > > > > > > > On the 128GB box, we had it set up with 10+ disks in a
> > > > > > > > JBOD
> > > > > > > > configuration and were running the AJA system tests.
> > > > > > > > 
> > > > > > > > On the 400GB box, we were just serially creating large (>
> > > > > > > > 6GB)
> > > > > > > > files
> > > > > > > > using fio and that was occasionally triggering the issue.
> > > > > > > > However
> > > > > > > > doing
> > > > > > > > an strace of that workload to disk reproduced the problem
> > > > > > > > faster
> > > > > > > > :-
> > > > > > > > ).
> > > > > > > 
> > > > > > > Ok, that matches up with the "lots of logically sequential
> > > > > > > dirty
> > > > > > > data on a single inode in cache" vector that is required to
> > > > > > > create
> > > > > > > really long bio chains on individual ioends.
> > > > > > > 
> > > > > > > Can you try the patch below and see if addresses the issue?
> > > > > > > 
> > > > > > 
> > > > > > That patch does seem to fix the soft lockups.
> > > > > > 
> > > > > 
> > > > > Oops... Strike that, apparently our tests just hit the
> > > > > following
> > > > > when
> > > > > running on AWS with that patch.
> > > > 
> > > > OK, so there are also large contiguous physical extents being
> > > > allocated in some cases here.
> > > > 
> > > > > So it was harder to hit, but we still did eventually.
> > > > 
> > > > Yup, that's what I wanted to know - it indicates that both the
> > > > filesystem completion processing and the iomap page processing
> > > > play
> > > > a role in the CPU usage. More complex patch for you to try
> > > > below...
> > > > 
> > > > Cheers,
> > > > 
> > > > Dave.
> > > 
> > > Hi Dave,
> > > 
> > > This patch got further than the previous one. However it too failed
> > > on
> > > the same AWS setup after we started creating larger (in this case
> > > 52GB)
> > > files. The previous patch failed at 15GB.
> > > 
> > 
> > Care to try my old series [1] that attempted to address this,
> > assuming
> > it still applies to your kernel? You should only need patches 1 and
> > 2.
> > You can toss in patch 3 if you'd like, but as Dave's earlier patch
> > has
> > shown, this can just make it harder to reproduce.
> > 
> > I don't know if this will go anywhere as is, but I was never able to
> > get
> > any sort of confirmation from the previous reporter to understand at
> > least whether it is effective. I agree with Jens' earlier concern
> > that
> > the per-page yields are probably overkill, but if it were otherwise
> > effective it shouldn't be that hard to add filtering. Patch 3 could
> > also
> > technically be used in place of patch 1 if we really wanted to go
> > that
> > route, but I wouldn't take that step until there was some
> > verification
> > that the yielding heuristic is effective.
> > 
> > Brian
> > 
> > [1]
> > https://lore.kernel.org/linux-xfs/20210517171722.1266878-1-bfoster@redhat.com/
> > 
> > 
> > 
> 
> Hi Brian,
> 
> I would expect those to work, since the first patch is essentially
> identical to the one I wrote and tested before trying Dave's first
> patch version (at least for the special case of XFS). However we never
> did test that patch on the AWS setup, so let me try your patches 1 & 2
> and see if they get us further than 52GB.
> 

Hm yeah, fair point. It's a little different in that it shuffles large
ioends to wq completion rather than splitting them up (to avoid atomic
completion context issues), but presumably that is not a factor with
your storage configuration and so otherwise the cond_resched() usage is
essentially the same as your initial patch. Regardless, it would be nice
to get some tangible results from a real reproducer, so the thorough
testing is appreciated. If you hadn't tested to this extreme yet, it
would certainly be good to know whether the problem still occurs,
because that would indicate we're still missing something fundamental.

If this does actually survive your test, what might be an interesting
next step (if you're inclined to experiment!) is to plumb in a counter
to track the actual number of pages processed, use that to filter the
cond_resched() and then perhaps experiment with how aggressive that
filter needs to be to avoid the problem in your environment (and/or
similarly dump the counter value in a tracepoint when need_resched()
returns true or some such). The simplest way to do that experiment is
probably to just pass a counter pointer from iomap_finish_ioends() to
iomap_finish_ioend() and let the latter increment/filter/reset the
counter (similar to the logic in Dave's latest patch that allows the
counter to track across a set of ioends) based on the number of pages
processed and whatever magic heuristic is specified.

Brian

Dave Chinner Jan. 9, 2022, 11:09 p.m. UTC | #40
On Wed, Jan 05, 2022 at 04:01:07PM -0800, Darrick J. Wong wrote:
> On Thu, Jan 06, 2022 at 09:48:29AM +1100, Dave Chinner wrote:
> > +
> > +	might_sleep();
> >  
> >  	list_replace_init(&ioend->io_list, &tmp);
> > +	segments = ioend->io_segments;
> >  	iomap_finish_ioend(ioend, error);
> >  
> >  	while (!list_empty(&tmp)) {
> > +		if (segments > 32768) {
> > +			cond_resched();
> > +			segments = 0;
> > +		}
> >  		ioend = list_first_entry(&tmp, struct iomap_ioend, io_list);
> >  		list_del_init(&ioend->io_list);
> > +		segments += ioend->io_segments;
> >  		iomap_finish_ioend(ioend, error);
> >  	}
> 
> I wonder, should we take one more swing at cond_resched at the end of
> the function so that we can return to the caller having given the system
> at least one chance to reschedule?

That's for the caller of xfs_finish_ioends() to deal with as it
loops over each set of merged ioends and does its own processing.
i.e. that's what the cond_resched() I added to the XFS endio
processing code here provides:

> >  void
> >  xfs_end_io(
> >  	struct work_struct	*work)
> > @@ -157,6 +170,7 @@ xfs_end_io(
> >  		list_del_init(&ioend->io_list);
> >  		iomap_ioend_try_merge(ioend, &tmp);
> >  		xfs_end_ioend(ioend);
> > +		cond_resched();
> >  	}
> >  }

Cheers,

Dave.
Dave Chinner Jan. 9, 2022, 11:34 p.m. UTC | #41
On Thu, Jan 06, 2022 at 06:36:52PM +0000, Trond Myklebust wrote:
> On Thu, 2022-01-06 at 09:48 +1100, Dave Chinner wrote:
> > On Wed, Jan 05, 2022 at 08:45:05PM +0000, Trond Myklebust wrote:
> > > On Tue, 2022-01-04 at 21:09 -0500, Trond Myklebust wrote:
> > > > On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> > > > > On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust
> > > > > wrote:
> > > > > > We have different reproducers. The common feature appears to
> > > > > > be
> > > > > > the
> > > > > > need for a decently fast box with fairly large memory (128GB
> > > > > > in
> > > > > > one
> > > > > > case, 400GB in the other). It has been reproduced with HDs,
> > > > > > SSDs
> > > > > > and
> > > > > > NVME systems.
> > > > > > 
> > > > > > On the 128GB box, we had it set up with 10+ disks in a JBOD
> > > > > > configuration and were running the AJA system tests.
> > > > > > 
> > > > > > On the 400GB box, we were just serially creating large (>
> > > > > > 6GB)
> > > > > > files
> > > > > > using fio and that was occasionally triggering the issue.
> > > > > > However
> > > > > > doing
> > > > > > an strace of that workload to disk reproduced the problem
> > > > > > faster
> > > > > > :-
> > > > > > ).
> > > > > 
> > > > > Ok, that matches up with the "lots of logically sequential
> > > > > dirty
> > > > > data on a single inode in cache" vector that is required to
> > > > > create
> > > > > really long bio chains on individual ioends.
> > > > > 
> > > > > Can you try the patch below and see if addresses the issue?
> > > > > 
> > > > 
> > > > That patch does seem to fix the soft lockups.
> > > > 
> > > 
> > > Oops... Strike that, apparently our tests just hit the following
> > > when
> > > running on AWS with that patch.
> > 
> > OK, so there are also large contiguous physical extents being
> > allocated in some cases here.
> > 
> > > So it was harder to hit, but we still did eventually.
> > 
> > Yup, that's what I wanted to know - it indicates that both the
> > filesystem completion processing and the iomap page processing play
> > a role in the CPU usage. More complex patch for you to try below...
> > 
> > Cheers,
> > 
> > Dave.
> 
> Hi Dave,
> 
> This patch got further than the previous one. However it too failed on
> the same AWS setup after we started creating larger (in this case 52GB)
> files. The previous patch failed at 15GB.

Ok, so that indicates that the page cache pages are being allocated
at write() time from physically contiguous pages so that we are
ending up with a large number of bvec merges in the bio layer
during writeback. i.e. we're building multipage bvecs in the bios
and so the bvec count per bio is low (maybe one per bio, instead
of ~256 if the pages are not physically contiguous).

I'd hoped that wasn't going to be an issue because, unless memory is
largely empty and the workload is completely single threaded, you
can't get continuous gigabyte scale runs of contiguous pages in the
page cache for sequential writes. Hence I figured the segment limits
would trigger long before we get into the "millions of pages to
complete" needed to trigger the soft lockup.

Ok, I'll ignore bio segments and the upcoming multi-page folio stuff
that will largely result in 1:1 bio segment:folio ratios and just
count pages instead...

Cheers,

Dave.
Dave Chinner Jan. 10, 2022, 8:18 a.m. UTC | #42
On Thu, Jan 06, 2022 at 11:44:23AM -0500, Brian Foster wrote:
> On Thu, Jan 06, 2022 at 09:04:21AM +1100, Dave Chinner wrote:
> > On Wed, Jan 05, 2022 at 08:56:33AM -0500, Brian Foster wrote:
> > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > index 71a36ae120ee..39214577bc46 100644
> > --- a/fs/iomap/buffered-io.c
> > +++ b/fs/iomap/buffered-io.c
> > @@ -1066,17 +1066,34 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error)
> >  	}
> >  }
> >  
> > +/*
> > + * Ioend completion routine for merged bios. This can only be called from task
> > + * contexts as merged ioends can be of unbound length. Hence we have to break up
> > + * the page writeback completion into manageable chunks to avoid long scheduler
> > + * holdoffs. We aim to keep scheduler holdoffs down below 10ms so that we get
> > + * good batch processing throughput without creating adverse scheduler latency
> > + * conditions.
> > + */
> >  void
> >  iomap_finish_ioends(struct iomap_ioend *ioend, int error)
> >  {
> >  	struct list_head tmp;
> > +	int segments;
> > +
> > +	might_sleep();
> >  
> >  	list_replace_init(&ioend->io_list, &tmp);
> > +	segments = ioend->io_segments;
> >  	iomap_finish_ioend(ioend, error);
> >  
> >  	while (!list_empty(&tmp)) {
> > +		if (segments > 32768) {
> > +			cond_resched();
> > +			segments = 0;
> > +		}
> 
> How is this intended to address the large bi_vec scenario? AFAICT
> bio_segments() doesn't account for multipage bvecs so the above logic
> can allow something like 34b (?) 4k pages before a yield.

Right now the bvec segment iteration in iomap_finish_ioend() is
completely unaware of multipage bvecs - as per above
bio_for_each_segment_all() iterates by PAGE_SIZE within a bvec,
regardless of whether they are stored in a multipage bvec or not.
Hence it always iterates the entire bio a single page at a time.

IOWs, we don't use multi-page bvecs in iomap writeback, nor is it
aware of them at all. We're adding single pages to bios via
bio_add_page() which may merge them internally into multipage bvecs.
However, all our iterators use single page interfaces, hence we
don't see the internal multi-page structure of the bio at all.
As such, bio_segments() should return the number of PAGE_SIZE pages
attached to the bio regardless of its internal structure.

That is what I see on a trace from a large single file submission,
comparing bio_segments() output from the page count on an ioend:

   kworker/u67:2-187   [017] 13530.753548: iomap_do_writepage: 2. bios 4096, pages 4096, start sector 0x370400 bi_vcnt 1, bi_size 16777216
   kworker/u67:2-187   [017] 13530.759706: iomap_do_writepage: 2. bios 4096, pages 4096, start sector 0x378400 bi_vcnt 1, bi_size 16777216
   kworker/u67:2-187   [017] 13530.766326: iomap_do_writepage: 2. bios 4096, pages 4096, start sector 0x380400 bi_vcnt 1, bi_size 16777216
   kworker/u67:2-187   [017] 13530.770689: iomap_do_writepage: 2. bios 4096, pages 4096, start sector 0x388400 bi_vcnt 1, bi_size 16777216
   kworker/u67:2-187   [017] 13530.774716: iomap_do_writepage: 2. bios 4096, pages 4096, start sector 0x390400 bi_vcnt 1, bi_size 16777216
   kworker/u67:2-187   [017] 13530.777157: iomap_writepages: 3. bios 2048, pages 2048, start sector 0x398400 bi_vcnt 1, bi_size 8388608

Which shows we are building ioends with a single bio with a single
bvec, containing 4096 pages and 4096 bio segments. So, as expected,
bio_segments() matches the page count and we submit 4096 page ioends
with a single bio attached to it.

This is clearly a case where we are getting physically contiguous
page cache page allocation during write() syscalls, and the result
is a single contiguous bvec from bio_add_page() doing physical page
merging at the bvec level. Hence we see bio->bi_vcnt = 1 and a
physically contiguous 4096 multipage bvec being dispatched. The
lower layers slice and dice these huge bios to what the hardware can
handle...

What I'm not yet reproducing is whatever vector that Trond is seeing
that is causing the multi-second hold-offs. I get page completions
processed at a rate of about a million pages per second per CPU, but
I'm bandwidth limited to about 400,000 pages per second due to
mapping->i_pages lock contention (reclaim vs page cache
instantiation vs writeback completion). I'm not seeing merged ioend
batches of larger than about 40,000 pages being processed at once.
Hence I can't yet see where the millions of pages in a single ioend
completion that would be required to hold a CPU for tens of seconds
are coming from yet...

> That aside, I find the approach odd in that we calculate the segment
> count for each bio via additional iteration (which is how bio_segments()
> works) and track the summation of the chain in the ioend only to provide
> iomap_finish_ioends() with a subtly inaccurate view of how much work
> iomap_finish_ioend() is doing as the loop iterates.

I just did that so I didn't have to count pages as the bio is built.
Easy to change - in fact I have changed it to check that
bio_segments() was returning the page count I expected it should be
returning....

I also changed the completion side to just count
end_page_writeback() calls, and I get the same number of
cond_resched() calls being made as with the bio_segments() count. So AFAICT
there's no change of behaviour or accounting between the two
methods, and I'm not sure where the latest problem Trond reported
is...

> We already have this
> information in completion context and iomap_finish_ioends() is just a
> small iterator function, so I don't understand why we wouldn't do
> something like factor these two loops into a non-atomic context only
> variant that yields based on the actual amount of page processing work
> being done (i.e. including multipage bvecs). That seems more robust and
> simple to me, but that's just my .02.

iomap_finish_ioends() is pretty much that non-atomic version of
the ioend completion code. Merged ioend chains cannot be sanely
handled in atomic context and so it has to be called from task
context. Hence the "might_sleep()" I added to ensure that we get
warnings if it is called from atomic contexts.

As for limiting atomic context completion processing, we've
historically done that by limiting the size of individual IO chains
submitted during writeback. This means that atomic completion
contexts don't need any special signalling (i.e. conditional
"in_atomic()" behaviour) because they aren't given anything to
process that would cause problems in atomic contexts...

Cheers,

Dave.
Brian Foster Jan. 10, 2022, 5:45 p.m. UTC | #43
On Mon, Jan 10, 2022 at 07:18:47PM +1100, Dave Chinner wrote:
> On Thu, Jan 06, 2022 at 11:44:23AM -0500, Brian Foster wrote:
> > On Thu, Jan 06, 2022 at 09:04:21AM +1100, Dave Chinner wrote:
> > > On Wed, Jan 05, 2022 at 08:56:33AM -0500, Brian Foster wrote:
> > > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > > index 71a36ae120ee..39214577bc46 100644
> > > --- a/fs/iomap/buffered-io.c
> > > +++ b/fs/iomap/buffered-io.c
> > > @@ -1066,17 +1066,34 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error)
> > >  	}
> > >  }
> > >  
> > > +/*
> > > + * Ioend completion routine for merged bios. This can only be called from task
> > > + * contexts as merged ioends can be of unbounded length. Hence we have to break up
> > > + * the page writeback completion into manageable chunks to avoid long scheduler
> > > + * holdoffs. We aim to keep scheduler holdoffs down below 10ms so that we get
> > > + * good batch processing throughput without creating adverse scheduler latency
> > > + * conditions.
> > > + */
> > >  void
> > >  iomap_finish_ioends(struct iomap_ioend *ioend, int error)
> > >  {
> > >  	struct list_head tmp;
> > > +	int segments;
> > > +
> > > +	might_sleep();
> > >  
> > >  	list_replace_init(&ioend->io_list, &tmp);
> > > +	segments = ioend->io_segments;
> > >  	iomap_finish_ioend(ioend, error);
> > >  
> > >  	while (!list_empty(&tmp)) {
> > > +		if (segments > 32768) {
> > > +			cond_resched();
> > > +			segments = 0;
> > > +		}
> > 
> > How is this intended to address the large bi_vec scenario? AFAICT
> > bio_segments() doesn't account for multipage bvecs so the above logic
> > can allow something like 34b (?) 4k pages before a yield.
> 
> Right now the bvec segment iteration in iomap_finish_ioend() is
> completely unaware of multipage bvecs - as per above
> bio_for_each_segment_all() iterates by PAGE_SIZE within a bvec,
> regardless of whether they are stored in a multipage bvec or not.
> Hence it always iterates the entire bio a single page at a time.
> 
> IOWs, we don't use multi-page bvecs in iomap writeback, nor is it
> aware of them at all. We're adding single pages to bios via
> bio_add_page() which may merge them internally into multipage bvecs.
> However, all our iterators use single page interfaces, hence we
> don't see the internal multi-page structure of the bio at all.
> As such, bio_segments() should return the number of PAGE_SIZE pages
> attached to the bio regardless of its internal structure.
> 

That is pretty much the point. The completion loop doesn't really care
whether the amount of page processing work is due to a large bio chain,
multipage bvec(s), merged ioends, or some odd combination thereof. As
you note, these conditions can manifest from various layers above or
below iomap. I don't think iomap really needs to know or care about any
of this. It just needs to yield when it has spent "too much" time
processing pages.

With regard to the iterators, my understanding was that
bio_for_each_segment_all() walks the multipage bvecs but
bio_for_each_segment() does not, but that could certainly be wrong as I
find the iterators a bit confusing. Either way, the most recent test
with the ioend granular filter implies that a single ioend can still
become a soft lockup vector from non-atomic context.

> That is what I see on a trace from a large single file submission,
> comparing bio_segments() output from the page count on an ioend:
> 
>    kworker/u67:2-187   [017] 13530.753548: iomap_do_writepage: 2. bios 4096, pages 4096, start sector 0x370400 bi_vcnt 1, bi_size 16777216
>    kworker/u67:2-187   [017] 13530.759706: iomap_do_writepage: 2. bios 4096, pages 4096, start sector 0x378400 bi_vcnt 1, bi_size 16777216
>    kworker/u67:2-187   [017] 13530.766326: iomap_do_writepage: 2. bios 4096, pages 4096, start sector 0x380400 bi_vcnt 1, bi_size 16777216
>    kworker/u67:2-187   [017] 13530.770689: iomap_do_writepage: 2. bios 4096, pages 4096, start sector 0x388400 bi_vcnt 1, bi_size 16777216
>    kworker/u67:2-187   [017] 13530.774716: iomap_do_writepage: 2. bios 4096, pages 4096, start sector 0x390400 bi_vcnt 1, bi_size 16777216
>    kworker/u67:2-187   [017] 13530.777157: iomap_writepages: 3. bios 2048, pages 2048, start sector 0x398400 bi_vcnt 1, bi_size 8388608
> 
> Which shows we are building ioends with a single bio with a single
> bvec, containing 4096 pages and 4096 bio segments. So, as expected,
> bio_segments() matches the page count and we submit 4096 page ioends
> with a single bio attached to it.
> 
> This is clearly a case where we are getting physically contiguous
> page cache page allocation during write() syscalls, and the result
> is a single contiguous bvec from bio_add_page() doing physical page
> merging at the bvec level. Hence we see bio->bi_vcnt = 1 and a
> physically contiguous 4096 multipage bvec being dispatched. The
> lower layers slice and dice these huge bios to what the hardware can
> handle...
> 

I think we're in violent agreement here. That is the crux of multipage
bvecs and what I've been trying to point out [1]. Ming (who I believe
implemented it) pointed this out back when the problem was first
reported. This is also why I asked Trond to test out the older patch
series, because that was intended to cover this case.

[1] https://lore.kernel.org/linux-xfs/20220104192321.GF31606@magnolia/T/#mc08ffe4b619c1b503b2c1342157bdaa9823167c1

> What I'm not yet reproducing is whatever vector that Trond is seeing
> that is causing the multi-second hold-offs. I get page completion
> processed at a rate of about a million pages per second per CPU, but
> I'm bandwidth limited to about 400,000 pages per second due to
> mapping->i_pages lock contention (reclaim vs page cache
> instantiation vs writeback completion). I'm not seeing merged ioend
> batches of larger than about 40,000 pages being processed at once.
> Hence I can't yet see where the millions of pages in a single ioend
> completion that would be required to hold a CPU for tens of seconds
> is coming from yet...
> 

I was never able to reproduce the actual warning either (only construct
the unexpectedly large page sequences through various means), so I'm
equally as curious about that aspect of the problem. My only guess at
the moment is that perhaps hardware is enough of a factor to increase
the cost (i.e. slow cpu, cacheline misses, etc.)? I dunno..

> > That aside, I find the approach odd in that we calculate the segment
> > count for each bio via additional iteration (which is how bio_segments()
> > works) and track the summation of the chain in the ioend only to provide
> > iomap_finish_ioends() with a subtly inaccurate view of how much work
> > iomap_finish_ioend() is doing as the loop iterates.
> 
> I just did that so I didn't have to count pages as the bio is built.
> Easy to change - in fact I have changed it to check that
> bio_segments() was returning the page count I expected it should be
> returning....
> 
> I also changed the completion side to just count
> end_page_writeback() calls, and I get the same number of
> cond_resched() calls being made as with the bio_segments() count. So AFAICT
> there's no change of behaviour or accounting between the two
> methods, and I'm not sure where the latest problem Trond reported
> is...
> 
> > We already have this
> > information in completion context and iomap_finish_ioends() is just a
> > small iterator function, so I don't understand why we wouldn't do
> > something like factor these two loops into a non-atomic context only
> > variant that yields based on the actual amount of page processing work
> > being done (i.e. including multipage bvecs). That seems more robust and
> > simple to me, but that's just my .02.
> 
> iomap_finish_ioends() is pretty much that non-atomic version of
> the ioend completion code. Merged ioend chains cannot be sanely
> handled in atomic context and so it has to be called from task
> context. Hence the "might_sleep()" I added to ensure that we get
> warnings if it is called from atomic contexts.
> 

We don't need to call iomap_finish_ioends() from atomic context. The
issue is the use of iomap_finish_ioend() in non-atomic context because
(if we assume atomic context usage is addressed by ioend size limits) it
can perform too much work without yielding the cpu. If we want to track
the number of pages across an arbitrary set of ioends/bios/bvecs, all we
need is something like:

iomap_finish_ioend(..., *count)
{
	for (bio = &ioend->io_inline_bio; bio; bio = next) {
		...
		bio_for_each_segment_all(bv, bio, iter_all) {
			...
			if (count && ++(*count) > MAGIC_VALUE) {
				cond_resched();
				*count = 0;
			}
		}
	}
}

iomap_finish_ioends()
{
	int count = 0;

	...

	while (...) {
		...
		iomap_finish_ioend(..., &count);
	}
}

... and you can slap a might_sleep() in either function for a sanity
check.

This doesn't require any additional counting in the submission path,
doesn't require increasing the size of the ioend, doesn't require
changes to the ioend merging code, doesn't impact non-atomic context
processing, and doesn't really impact any code outside of these couple
of functions (iomap_finish_ioend() is already static). It's also more
natural to remove if something like folios eliminates the need for it.

> As for limiting atomic context completion processing, we've
> historically done that by limiting the size of individual IO chains
> submitted during writeback. This means that atomic completion
> contexts don't need any special signalling (i.e. conditional
> "in_atomic()" behaviour) because they aren't given anything to
> process that would cause problems in atomic contexts...
> 

I've no real preference on the I/O splitting vs. queueing approach. Part
of the reason my last series implemented both is because there was
conflicting feedback. Some wanted to submit the large ioends as
constructed and complete them in wq context. Others wanted to split them
up and avoid the problem that way. Since an ioend size limit only
applied to the atomic context variant, I thought it made some sense to
break the problem down. I don't know where folks stand on these various
things atm.

Brian

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
>
Christoph Hellwig Jan. 10, 2022, 6:11 p.m. UTC | #44
On Mon, Jan 10, 2022 at 12:45:01PM -0500, Brian Foster wrote:
> With regard to the iterators, my understanding was that
> bio_for_each_segment_all() walks the multipage bvecs but
> bio_for_each_segment() does not, but that could certainly be wrong as I
> find the iterators a bit confusing. Either way, the most recent test
> with the ioend granular filter implies that a single ioend can still
> become a soft lockup vector from non-atomic context.

the segment iterators iterate over the page segments, the
bvec iterators over the multi-page bvecs.  The _all suffix means
it iterates over the whole bio independent of clones and partial
submission state and is for use in the completion handlers.  The
versions without it are for use in the block drivers.
Dave Chinner Jan. 10, 2022, 11:37 p.m. UTC | #45
On Thu, Jan 06, 2022 at 06:36:52PM +0000, Trond Myklebust wrote:
> On Thu, 2022-01-06 at 09:48 +1100, Dave Chinner wrote:
> > On Wed, Jan 05, 2022 at 08:45:05PM +0000, Trond Myklebust wrote:
> > > On Tue, 2022-01-04 at 21:09 -0500, Trond Myklebust wrote:
> > > > On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> > > > > On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust
> > > > > wrote:
> > > > > > We have different reproducers. The common feature appears to
> > > > > > be
> > > > > > the
> > > > > > need for a decently fast box with fairly large memory (128GB
> > > > > > in
> > > > > > one
> > > > > > case, 400GB in the other). It has been reproduced with HDs,
> > > > > > SSDs
> > > > > > and
> > > > > > NVME systems.
> > > > > > 
> > > > > > On the 128GB box, we had it set up with 10+ disks in a JBOD
> > > > > > configuration and were running the AJA system tests.
> > > > > > 
> > > > > > On the 400GB box, we were just serially creating large (>
> > > > > > 6GB)
> > > > > > files
> > > > > > using fio and that was occasionally triggering the issue.
> > > > > > However
> > > > > > doing
> > > > > > an strace of that workload to disk reproduced the problem
> > > > > > faster
> > > > > > :-
> > > > > > ).
> > > > > 
> > > > > Ok, that matches up with the "lots of logically sequential
> > > > > dirty
> > > > > data on a single inode in cache" vector that is required to
> > > > > create
> > > > > really long bio chains on individual ioends.
> > > > > 
> > > > > Can you try the patch below and see if addresses the issue?
> > > > > 
> > > > 
> > > > That patch does seem to fix the soft lockups.
> > > > 
> > > 
> > > Oops... Strike that, apparently our tests just hit the following
> > > when
> > > running on AWS with that patch.
> > 
> > OK, so there are also large contiguous physical extents being
> > allocated in some cases here.
> > 
> > > So it was harder to hit, but we still did eventually.
> > 
> > Yup, that's what I wanted to know - it indicates that both the
> > filesystem completion processing and the iomap page processing play
> > a role in the CPU usage. More complex patch for you to try below...
> > 
> > Cheers,
> > 
> > Dave.
> 
> Hi Dave,
> 
> This patch got further than the previous one. However it too failed on
> the same AWS setup after we started creating larger (in this case 52GB)
> files. The previous patch failed at 15GB.
> 
> NR_06-18:00:17 pm-46088DSX1 /mnt/data-portal/data $ ls -lh
> total 59G
> -rw-r----- 1 root root  52G Jan  6 18:20 100g
> -rw-r----- 1 root root 9.8G Jan  6 17:38 10g
> -rw-r----- 1 root root   29 Jan  6 17:36 file
> NR_06-18:20:10 pm-46088DSX1 /mnt/data-portal/data $
> Message from syslogd@pm-46088DSX1 at Jan  6 18:22:44 ...
>  kernel:[ 5548.082987] watchdog: BUG: soft lockup - CPU#10 stuck for
> 24s! [kworker/10:0:18995]

Ok, so coming back to this set of failures. Firstly, the patch I
sent you has a bug in it, meaning it did not merge ioends across
independent ->writepages invocations. Essentially it would merge
ioends as long as the ioends in the chain are all the same size (e.g.
4096 pages long). The moment an ioend of a different size is added
to the chain (i.e. the runt at the tail of the writepages
invocation) the merging stopped.

That means the merging was limited to the chunks that writeback
bandwidth chunking broke the per-file writeback into. I'm generally
seeing 100-200MB chunks per background writeback invocation,
and so nothing is merging beyond that size.

Once I fixed that bug (caused by bio->bi_iter.bi_sector being
modified by the block layer stack during submission when bios are
split so it doesn't point at the start sector at IO completion),
the only way I could get merging beyond submission chunking was
to induce a long scheduling delay in the XFS completion processing.
e.g. by adding msleep(1000) to xfs_end_io() before any of the
merging occurred.

In this case, I can't get any extra merging to occur - the
scheduling latency on IO completion is just so low and the
completion processing so fast that little to no merging occurs
at all with ioends split into 4096 page chunks.

So, let's introduce scheduling delays. The only one that matters
here is a delay running the XFS end IO work task - it will pull all
the pending ioends and process them as one loop. Hence the only way
we can get large merged ioends is to delay the processing before we
pull the completed ioends off the inode.

Worst case is that a scheduling delay will allow a single file to
dirty enough pages to hit the throttling limits in
balance_dirty_pages() while it waits for dirty pages to be cleaned.
I see this in completion:

kworker/9:1-769   [009]    35.267031: bprint: iomap_finish_ioends: pages 4096, start sector 0x400 size 10123546624 pcnt 4096
....
kworker/9:1-769   [009]    35.982461: bprint: iomap_finish_ioends: pages 164, start sector 0x12db368 size 671744 pcnt 31324

Yup, that merged into a 10GB ioend that is passed to
iomap_finish_ioends(), and it took just over 700ms to process the
entire 10GB ioend.

If writeback is running at the same time as completion, things are
a little slower:

kworker/31:2-793   [031]    51.147943: bprint: iomap_finish_ioends: pages 4096, start sector 0x132ac488 size 8019509248 pcnt 4096
kworker/u68:13-637 [025]    51.150218: bprint: iomap_do_writepage: 2. bios 4096, pages 4096, start sector 0x141a4488 bi_vcnt 5, bi_size 16777216
kworker/31:2-793   [031]    51.152237: bprint: iomap_finish_ioends: pages 4096, start sector 0x132b4488 size 16777216 pcnt 8192
kworker/u68:13-637 [025]    51.155773: bprint: iomap_do_writepage: 2. bios 4096, pages 4096, start sector 0x141ac488 bi_vcnt 5, bi_size 16777216
....
kworker/31:2-793   [031]    52.528398: bprint: iomap_finish_ioends: pages 4096, start sector 0x14194488 size 16777216 pcnt 21504

That's 8GB in 1.4s, but it's still processing over 5GB/s of
completions.

This is running with 64GB RAM and:

$ grep . /proc/sys/vm/dirty*
/proc/sys/vm/dirty_background_bytes:0
/proc/sys/vm/dirty_background_ratio:10
/proc/sys/vm/dirty_bytes:0
/proc/sys/vm/dirty_expire_centisecs:3000
/proc/sys/vm/dirty_ratio:20
/proc/sys/vm/dirtytime_expire_seconds:43200
/proc/sys/vm/dirty_writeback_centisecs:500
$

If I change the dirty ratios to 60/80:

$ grep . /proc/sys/vm/dirty*ratio
/proc/sys/vm/dirty_background_ratio:60
/proc/sys/vm/dirty_ratio:80
$

I can get up to 15GB merged ioends with a 5 second scheduling delay
for the xfs_end_io workqueue, but that still only takes about 2s
of CPU time to process the entire completion:

kworker/4:2-214   [004]   788.133242: bprint: xfs_end_io: off 0x6f85fe000, sect 0x14daf0c0 size 16777216/0x8000 end 0x14db70c0
<merges>
kworker/4:2-214   [004]   788.135393: bprint: iomap_finish_ioends: pages 4096, start sector 0x14daf0c0 size 15837691904 pcnt 4096
.....
kworker/4:2-214   [004]   790.083058: bprint: iomap_finish_ioends: pages 4096, start sector 0x16b270c0 size 16777216 pcnt 32768

Given that I broke physical extent merging completely in the patch
you were testing, there's no way you would even be getting GB sized
completions being run, even with large scheduling delays. There is
just no way completion is spending that amount of CPU time in a loop
processing page-based writeback completion without triggering one of
the cond_resched() calls in the patch I gave you to test, unless there
is something else happening on those systems.

So at this point I'm kinda at a loss to understand where the 20+
second CPU times for completion processing are coming from, even if
we're trying to process the entire 52GB of dirty pages in a single
completion.

Trond, what is the IO completion task actually spending its CPU
time doing on your machines? Can you trace out what the conditions
are (ioend lengths, processing time, etc) when the softlockups
occur? Are there so many pending IO completions across different
files that the completion CPU (CPU #10) is running out of worker
threads and/or the CPU bound completion worker threads are seeing
tens of seconds of scheduling delay?  Is it something completely
external like AWS preempting the vCPU that happens to be running IO
completion for 20+ seconds at a time? Or something else entirely?

I really need to know what I'm missing here, because it isn't
obvious from my local systems and it's not obvious just from
soft-lockup stack traces....

The latest patch with page based accounting and fixed ioend merging
I'm running here, including the tracepoints I've been using
('trace-cmd record -e printk' is your friend), is below.

Cheers,

Dave.
Dave Chinner Jan. 11, 2022, 12:08 a.m. UTC | #46
On Tue, Jan 11, 2022 at 10:37:46AM +1100, Dave Chinner wrote:
> diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> index c8c15c3c3147..82515d1ad4e0 100644
> --- a/fs/xfs/xfs_aops.c
> +++ b/fs/xfs/xfs_aops.c
> @@ -136,7 +136,20 @@ xfs_end_ioend(
>  	memalloc_nofs_restore(nofs_flag);
>  }
>  
> -/* Finish all pending io completions. */
> +/*
> + * Finish all pending IO completions that require transactional modifications.
> + *
> + * We try to merge physical and logically contiguous ioends before completion to
> + * minimise the number of transactions we need to perform during IO completion.
> + * Both unwritten extent conversion and COW remapping need to iterate and modify
> + * one physical extent at a time, so we gain nothing by merging physically
> + * discontiguous extents here.
> + *
> + * The ioend chain length that we can be processing here is largely unbounded in
> + * length and we may have to perform significant amounts of work on each ioend
> + * to complete it. Hence we have to be careful about holding the CPU for too
> + * long in this loop.
> + */
>  void
>  xfs_end_io(
>  	struct work_struct	*work)
> @@ -147,6 +160,7 @@ xfs_end_io(
>  	struct list_head	tmp;
>  	unsigned long		flags;
>  
> +	msleep(5000);
>  	spin_lock_irqsave(&ip->i_ioend_lock, flags);
>  	list_replace_init(&ip->i_ioend_list, &tmp);
>  	spin_unlock_irqrestore(&ip->i_ioend_lock, flags);

You might want to comment that 5s completion delay out before you
run the patch, Trond...

Cheers,

Dave.
Trond Myklebust Jan. 11, 2022, 2:33 p.m. UTC | #47
On Mon, 2022-01-10 at 12:45 -0500, Brian Foster wrote:
> On Mon, Jan 10, 2022 at 07:18:47PM +1100, Dave Chinner wrote:
> > On Thu, Jan 06, 2022 at 11:44:23AM -0500, Brian Foster wrote:
> > > On Thu, Jan 06, 2022 at 09:04:21AM +1100, Dave Chinner wrote:
> > > > On Wed, Jan 05, 2022 at 08:56:33AM -0500, Brian Foster wrote:
> > > > diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> > > > index 71a36ae120ee..39214577bc46 100644
> > > > --- a/fs/iomap/buffered-io.c
> > > > +++ b/fs/iomap/buffered-io.c
> > > > @@ -1066,17 +1066,34 @@ iomap_finish_ioend(struct iomap_ioend
> > > > *ioend, int error)
> > > >         }
> > > >  }
> > > >  
> > > > +/*
> > > > + * Ioend completion routine for merged bios. This can only be
> > > > called from task
> > > > + * contexts as merged ioends can be of unbounded length. Hence
> > > > we have to break up
> > > > + * the page writeback completion into manageable chunks to
> > > > avoid long scheduler
> > > > + * holdoffs. We aim to keep scheduler holdoffs down below 10ms
> > > > so that we get
> > > > + * good batch processing throughput without creating adverse
> > > > scheduler latency
> > > > + * conditions.
> > > > + */
> > > >  void
> > > >  iomap_finish_ioends(struct iomap_ioend *ioend, int error)
> > > >  {
> > > >         struct list_head tmp;
> > > > +       int segments;
> > > > +
> > > > +       might_sleep();
> > > >  
> > > >         list_replace_init(&ioend->io_list, &tmp);
> > > > +       segments = ioend->io_segments;
> > > >         iomap_finish_ioend(ioend, error);
> > > >  
> > > >         while (!list_empty(&tmp)) {
> > > > +               if (segments > 32768) {
> > > > +                       cond_resched();
> > > > +                       segments = 0;
> > > > +               }
> > > 
> > > How is this intended to address the large bi_vec scenario? AFAICT
> > > bio_segments() doesn't account for multipage bvecs so the above
> > > logic
> > > can allow something like 34b (?) 4k pages before a yield.
> > 
> > Right now the bvec segment iteration in iomap_finish_ioend() is
> > completely unaware of multipage bvecs - as per above
> > bio_for_each_segment_all() iterates by PAGE_SIZE within a bvec,
> > regardless of whether they are stored in a multipage bvec or not.
> > Hence it always iterates the entire bio a single page at a time.
> > 
> > IOWs, we don't use multi-page bvecs in iomap writeback, nor is it
> > aware of them at all. We're adding single pages to bios via
> > bio_add_page() which may merge them internally into multipage
> > bvecs.
> > However, all our iterators use single page interfaces, hence we
> > don't see the internal multi-page structure of the bio at all.
> > As such, bio_segments() should return the number of PAGE_SIZE pages
> > attached to the bio regardless of its internal structure.
> > 
> 
> That is pretty much the point. The completion loop doesn't really
> care
> whether the amount of page processing work is due to a large bio
> chain,
> multipage bi_bvec(s), merged ioends, or some odd combination thereof.
> As
> you note, these conditions can manifest from various layers above or
> below iomap. I don't think iomap really needs to know or care about
> any
> of this. It just needs to yield when it has spent "too much" time
> processing pages.
> 
> With regard to the iterators, my understanding was that
> bio_for_each_segment_all() walks the multipage bvecs but
> bio_for_each_segment() does not, but that could certainly be wrong as
> I
> find the iterators a bit confusing. Either way, the most recent test
> with the ioend granular filter implies that a single ioend can still
> become a soft lockup vector from non-atomic context.
> 
> > That is what I see on a trace from a large single file submission,
> > comparing bio_segments() output from the page count on an ioend:
> > 
> >    kworker/u67:2-187   [017] 13530.753548: iomap_do_writepage: 2.
> > bios 4096, pages 4096, start sector 0x370400 bi_vcnt 1, bi_size
> > 16777216
> >    kworker/u67:2-187   [017] 13530.759706: iomap_do_writepage: 2.
> > bios 4096, pages 4096, start sector 0x378400 bi_vcnt 1, bi_size
> > 16777216
> >    kworker/u67:2-187   [017] 13530.766326: iomap_do_writepage: 2.
> > bios 4096, pages 4096, start sector 0x380400 bi_vcnt 1, bi_size
> > 16777216
> >    kworker/u67:2-187   [017] 13530.770689: iomap_do_writepage: 2.
> > bios 4096, pages 4096, start sector 0x388400 bi_vcnt 1, bi_size
> > 16777216
> >    kworker/u67:2-187   [017] 13530.774716: iomap_do_writepage: 2.
> > bios 4096, pages 4096, start sector 0x390400 bi_vcnt 1, bi_size
> > 16777216
> >    kworker/u67:2-187   [017] 13530.777157: iomap_writepages: 3.
> > bios 2048, pages 2048, start sector 0x398400 bi_vcnt 1, bi_size
> > 8388608
> > 
> > Which shows we are building ioends with a single bio with a single
> > bvec, containing 4096 pages and 4096 bio segments. So, as expected,
> > bio_segments() matches the page count and we submit 4096 page
> > ioends
> > with a single bio attached to it.
> > 
> > This is clearly a case where we are getting physically contiguous
> > page cache page allocation during write() syscalls, and the result
> > is a single contiguous bvec from bio_add_page() doing physical page
> > merging at the bvec level. Hence we see bio->bi_vcnt = 1 and a
> > physically contiguous 4096 multipage bvec being dispatched. The
> > lower layers slice and dice these huge bios to what the hardware
> > can
> > handle...
> > 
> 
> I think we're in violent agreement here. That is the crux of
> multipage
> bvecs and what I've been trying to point out [1]. Ming (who I believe
> implemented it) pointed this out back when the problem was first
> reported. This is also why I asked Trond to test out the older patch
> series, because that was intended to cover this case.
> 
> [1]
> https://lore.kernel.org/linux-xfs/20220104192321.GF31606@magnolia/T/#mc08ffe4b619c1b503b2c1342157bdaa9823167c1
> 
> > What I'm not yet reproducing is whatever vector that Trond is
> > seeing
> > that is causing the multi-second hold-offs. I get page completion
> > processed at a rate of about a million pages per second per CPU,
> > but
> > I'm bandwidth limited to about 400,000 pages per second due to
> > mapping->i_pages lock contention (reclaim vs page cache
> > instantiation vs writeback completion). I'm not seeing merged ioend
> > batches of larger than about 40,000 pages being processed at once.
> > Hence I can't yet see where the millions of pages in a single ioend
> > completion that would be required to hold a CPU for tens of seconds
> > is coming from yet...
> > 
> 
> I was never able to reproduce the actual warning either (only
> construct the unexpectedly large page sequences through various
> means), so I'm equally curious about that aspect of the problem. My
> only guess at the moment is that perhaps hardware is enough of a
> factor to increase the cost (e.g. slow cpu, cacheline misses, etc.)?
> I dunno..
> 

So I did try patches 1 and 2 from this series over the weekend, but for
some reason the resulting kernel started hanging during I/O. I'm still
trying to figure out why.

I think I'll try Dave's new patch to see if that behaves similarly (in
case this is something new in the stable kernel series) and report
back.
Trond Myklebust Jan. 13, 2022, 5:01 p.m. UTC | #48
On Tue, 2022-01-11 at 10:37 +1100, Dave Chinner wrote:
> On Thu, Jan 06, 2022 at 06:36:52PM +0000, Trond Myklebust wrote:
> > On Thu, 2022-01-06 at 09:48 +1100, Dave Chinner wrote:
> > > On Wed, Jan 05, 2022 at 08:45:05PM +0000, Trond Myklebust wrote:
> > > > On Tue, 2022-01-04 at 21:09 -0500, Trond Myklebust wrote:
> > > > > On Tue, 2022-01-04 at 12:22 +1100, Dave Chinner wrote:
> > > > > > On Tue, Jan 04, 2022 at 12:04:23AM +0000, Trond Myklebust
> > > > > > wrote:
> > > > > > > We have different reproducers. The common feature appears
> > > > > > > to
> > > > > > > be
> > > > > > > the
> > > > > > > need for a decently fast box with fairly large memory
> > > > > > > (128GB
> > > > > > > in
> > > > > > > one
> > > > > > > case, 400GB in the other). It has been reproduced with
> > > > > > > HDs,
> > > > > > > SSDs
> > > > > > > and
> > > > > > > NVME systems.
> > > > > > > 
> > > > > > > On the 128GB box, we had it set up with 10+ disks in a
> > > > > > > JBOD
> > > > > > > configuration and were running the AJA system tests.
> > > > > > > 
> > > > > > > On the 400GB box, we were just serially creating large (>
> > > > > > > 6GB)
> > > > > > > files
> > > > > > > using fio and that was occasionally triggering the issue.
> > > > > > > However
> > > > > > > doing
> > > > > > > an strace of that workload to disk reproduced the problem
> > > > > > > faster
> > > > > > > :-
> > > > > > > ).
> > > > > > 
> > > > > > Ok, that matches up with the "lots of logically sequential
> > > > > > dirty
> > > > > > data on a single inode in cache" vector that is required to
> > > > > > create
> > > > > > really long bio chains on individual ioends.
> > > > > > 
> > > > > > Can you try the patch below and see if it addresses the
> > > > > > issue?
> > > > > > 
> > > > > 
> > > > > That patch does seem to fix the soft lockups.
> > > > > 
> > > > 
> > > > Oops... Strike that, apparently our tests just hit the
> > > > following
> > > > when
> > > > running on AWS with that patch.
> > > 
> > > OK, so there are also large contiguous physical extents being
> > > allocated in some cases here.
> > > 
> > > > So it was harder to hit, but we still did eventually.
> > > 
> > > Yup, that's what I wanted to know - it indicates that both the
> > > filesystem completion processing and the iomap page processing
> > > play
> > > a role in the CPU usage. More complex patch for you to try
> > > below...
> > > 
> > > Cheers,
> > > 
> > > Dave.
> > 
> > Hi Dave,
> > 
> > This patch got further than the previous one. However it too failed
> > on
> > the same AWS setup after we started creating larger (in this case
> > 52GB)
> > files. The previous patch failed at 15GB.
> > 
> > NR_06-18:00:17 pm-46088DSX1 /mnt/data-portal/data $ ls -lh
> > total 59G
> > -rw-r----- 1 root root  52G Jan  6 18:20 100g
> > -rw-r----- 1 root root 9.8G Jan  6 17:38 10g
> > -rw-r----- 1 root root   29 Jan  6 17:36 file
> > NR_06-18:20:10 pm-46088DSX1 /mnt/data-portal/data $
> > Message from syslogd@pm-46088DSX1 at Jan  6 18:22:44 ...
> >  kernel:[ 5548.082987] watchdog: BUG: soft lockup - CPU#10 stuck
> > for
> > 24s! [kworker/10:0:18995]
> 
> Ok, so coming back to this set of failures. Firstly, the patch I
> sent you has a bug in it, meaning it did not merge ioends across
> independent ->writepages invocations. Essentially it would merge
> ioends as long as the chain of ioends are all the same size (e.g.
> 4096 pages long). The moment an ioend of a different size is added
> to the chain (i.e. the runt at the tail of the writepages
> invocation) the merging stopped.
> 
> That means the merging was limited to the chunks that writeback
> bandwidth chunking broke the per-file writeback into. I'm generally
> seeing that to be 100-200MB chunks per background writeback
> invocation, and so nothing is merging beyond that size.
> 
> Once I fixed that bug (caused by bio->bi_iter.bi_sector being
> modified by the block layer stack during submission when bios are
> split so it doesn't point at the start sector at IO completion),
> the only way I could get merging beyond submission chunking was
> to induce a long scheduling delay in the XFS completion processing.
> e.g. by adding msleep(1000) to xfs_end_io() before any of the
> merging occurred.
> 
> In this case, I can't get any extra merging to occur - the
> scheduling latency on IO completion is just so low and the
> completion processing so fast that little to no merging occurs
> at all with ioends split into 4096 page chunks.
> 
> So, let's introduce scheduling delays. The only one that matters
> here is a delay running the XFS end IO work task - it will pull all
> the pending ioends and process them as one loop. Hence the only way
> we can get large merged ioends is to delay the processing before we
> pull the completed ioends off the inode.
> 
> Worst case is that a scheduling delay will allow a single file to
> dirty enough pages to hit the throttling limits in
> balance_dirty_pages() while it waits for dirty pages to be cleaned.
> I see this in completion:
> 
> kworker/9:1-769   [009]    35.267031: bprint: iomap_finish_ioends:
> pages 4096, start sector 0x400 size 10123546624 pcnt 4096
> ....
> kworker/9:1-769   [009]    35.982461: bprint: iomap_finish_ioends:
> pages 164, start sector 0x12db368 size 671744 pcnt 31324
> 
> Yup, that merged into a 10GB ioend that is passed to
> iomap_finish_ioends(), and it took just over 700ms to process the
> entire 10GB ioend.
> 
> If writeback is running at the same time as completion, things are
> a little slower:
> 
> kworker/31:2-793   [031]    51.147943: bprint: iomap_finish_ioends:
> pages 4096, start sector 0x132ac488 size 8019509248 pcnt 4096
> kworker/u68:13-637 [025]    51.150218: bprint: iomap_do_writepage: 2.
> bios 4096, pages 4096, start sector 0x141a4488 bi_vcnt 5, bi_size
> 16777216
> kworker/31:2-793   [031]    51.152237: bprint: iomap_finish_ioends:
> pages 4096, start sector 0x132b4488 size 16777216 pcnt 8192
> kworker/u68:13-637 [025]    51.155773: bprint: iomap_do_writepage: 2.
> bios 4096, pages 4096, start sector 0x141ac488 bi_vcnt 5, bi_size
> 16777216
> ....
> kworker/31:2-793   [031]    52.528398: bprint: iomap_finish_ioends:
> pages 4096, start sector 0x14194488 size 16777216 pcnt 21504
> 
> That's 8GB in 1.4s, but it's still processing over 5GB/s of
> completions.
> 
> This is running with 64GB RAM and:
> 
> $ grep . /proc/sys/vm/dirty*
> /proc/sys/vm/dirty_background_bytes:0
> /proc/sys/vm/dirty_background_ratio:10
> /proc/sys/vm/dirty_bytes:0
> /proc/sys/vm/dirty_expire_centisecs:3000
> /proc/sys/vm/dirty_ratio:20
> /proc/sys/vm/dirtytime_expire_seconds:43200
> /proc/sys/vm/dirty_writeback_centisecs:500
> $
> 
> If I change the dirty ratios to 60/80:
> 
> $ grep . /proc/sys/vm/dirty*ratio
> /proc/sys/vm/dirty_background_ratio:60
> /proc/sys/vm/dirty_ratio:80
> $
> 
> I can get up to 15GB merged ioends with a 5 second scheduling delay
> for the xfs_end_io workqueue, but that still only takes about 2s
> of CPU time to process the entire completion:
> 
> kworker/4:2-214   [004]   788.133242: bprint: xfs_end_io: off
> 0x6f85fe000, sect 0x14daf0c0 size 16777216/0x8000 end 0x14db70c0
> <merges>
> kworker/4:2-214   [004]   788.135393: bprint: iomap_finish_ioends:
> pages 4096, start sector 0x14daf0c0 size 15837691904 pcnt 4096
> .....
> kworker/4:2-214   [004]   790.083058: bprint: iomap_finish_ioends:
> pages 4096, start sector 0x16b270c0 size 16777216 pcnt 32768
> 
> Given that I broke physical extent merging completely in the patch
> you were testing, there's no way you could even be getting GB-sized
> completions run, even with large scheduling delays. There is just
> no way completion is spending that amount of CPU time in a loop
> processing page-based writeback completion without hitting one of
> the cond_resched() calls in the patch I gave you to test, unless
> there is something else happening on those systems.
> 
> So at this point I'm kinda at a loss to understand where the 20+
> second CPU times for completion processing are coming from, even if
> we're trying to process the entire 52GB of dirty pages in a single
> completion.
> 
> Trond, what is the IO completion task actually spending its CPU
> time doing on your machines? Can you trace out what the conditions
> are (ioend lengths, processing time, etc) when the softlockups
> occur? Are there so many pending IO completions across different
> files that the completion CPU (CPU #10) is running out of worker
> threads and/or the CPU-bound completion worker threads are seeing
> tens of seconds of scheduling delay? Is it something completely
> external like AWS preempting the vCPU that happens to be running IO
> completion for 20+ seconds at a time? Or something else entirely?
> 
> I really need to know what I'm missing here, because it isn't
> obvious from my local systems and it's not obvious just from
> soft-lockup stack traces....
> 
> The latest patch with page based accounting and fixed ioend merging
> I'm running here, including the tracepoints I've been using
> ('trace-cmd record -e printk' is your friend), is below.
> 
> Cheers,
> 
> Dave.

Yesterday I figured out a testing issue that was causing confusion
among the people doing the actual testing. They were seeing hangs,
which were not soft lockups, and which turned out to be artifacts of
the testing methodology.

With this patch, it appears that we are not reproducing the soft
lockups.
Trond Myklebust Jan. 17, 2022, 5:24 p.m. UTC | #49
Hi Dave & Brian,

On Thu, 2022-01-13 at 12:01 -0500, Trond Myklebust wrote:
> 
> Yesterday I figured out a testing issue that was causing confusion
> among the people doing the actual testing. They were seeing hangs,
> which were not soft lockups, and which turned out to be artifacts of
> the testing methodology.
> 
> With this patch, it appears that we are not reproducing the soft
> lockups.
> 

What are the next steps you need from me at this point?
Darrick J. Wong Jan. 17, 2022, 5:36 p.m. UTC | #50
On Mon, Jan 17, 2022 at 05:24:50PM +0000, Trond Myklebust wrote:
> Hi Dave & Brian,
> 
> On Thu, 2022-01-13 at 12:01 -0500, Trond Myklebust wrote:
> > 
> > Yesterday I figured out a testing issue that was causing confusion
> > among the people doing the actual testing. They were seeing hangs,
> > which were not soft lockups, and which turned out to be artifacts of
> > the testing methodology.
> > 
> > With this patch, it appears that we are not reproducing the soft
> > lockups.
> > 
> 
> What are the next steps you need from me at this point?

Can someone (Dave?) please re-send whatever the latest version of the
fix patch is to the list as a new thread, with Tested-by tags, etc.?
Once that's done I'll push it to for-next as a 5.17 bugfix.

(/me is on vacation today; see you all tomorrow.)

--D

> -- 
> Trond Myklebust
> Linux NFS client maintainer, Hammerspace
> trond.myklebust@hammerspace.com
> 
>
diff mbox series

Patch

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 71a36ae120ee..e39a53923f9d 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1052,9 +1052,11 @@  iomap_finish_ioend(struct iomap_ioend *ioend, int error)
 			next = bio->bi_private;
 
 		/* walk each page on bio, ending page IO on them */
-		bio_for_each_segment_all(bv, bio, iter_all)
+		bio_for_each_segment_all(bv, bio, iter_all) {
 			iomap_finish_page_writeback(inode, bv->bv_page, error,
 					bv->bv_len);
+			cond_resched();
+		}
 		bio_put(bio);
 	}
 	/* The ioend has been freed by bio_put() */