Date: Thu, 14 Nov 2019 19:31:53 +0800
From: Ming Lei
To: linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-xfs@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Jeff Moyer, Dave Chinner, Eric Sandeen, Christoph Hellwig,
    Jens Axboe, Ingo Molnar, Peter Zijlstra, Tejun Heo
Subject: single aio thread is migrated crazily by scheduler
Message-ID: <20191114113153.GB4213@ming.t460p>

Hi Guys,

It is found that a single AIO thread is migrated crazily by the scheduler,
and the migration period can be < 10ms. Here is test a):

- run single-job fio[1] for 30 seconds:

	./xfs_complete 512

- observe fio IO thread migration via bcc trace[2]; the number of
  migrations can reach 5k ~ 10k in the above test. In this test, CPU
  utilization is 30~40% on the CPU running the fio IO thread.

- after applying the debug patch[3], which queues the XFS completion work
  on another CPU (not the current CPU), the above crazy fio IO thread
  migration can no longer be observed.
And a similar result can be observed in the following test b) too:

- set sched parameters:

	sysctl kernel.sched_min_granularity_ns=10000000
	sysctl kernel.sched_wakeup_granularity_ns=15000000

  which is usually done by 'tuned-adm profile network-throughput'

- run single-job fio aio[1] for 30 seconds:

	./xfs_complete 4k

- observe fio IO thread migration[2]; similarly crazy migration can be
  observed. In this test, CPU utilization is close to 100% on the CPU
  running the fio IO thread.

- the debug patch[3] still makes a big difference in this test wrt. fio
  IO thread migration.

For test b), I thought that load balancing may be triggered when the single
fio IO thread takes ~100% of its CPU while XFS's queue_work() schedules the
WQ worker thread on that same, current CPU (see the workqueue excerpt[4]
appended below), since all other CPUs are idle. When the fio IO thread is
migrated to a new CPU, the same steps repeat again.

But for test a), I have no idea why the fio IO thread is still migrated so
frequently, since the CPU isn't saturated at all.

IMO, it is normal for users to saturate the aio thread, since doing so may
save context switches.

Guys, any idea on the crazy aio thread migration?

BTW, the tests are run on the latest linus tree (5.4-rc7) in a KVM guest,
and the fio test was created to simulate a real performance report which
has been proved to be caused by frequent aio submission thread migration.

[1] xfs_complete: one fio script for running single-job overwrite aio on XFS

#!/bin/bash

BS=$1
NJOBS=1
QD=128
DIR=/mnt/xfs
BATCH=1
VERIFY="sha3-512"

sysctl kernel.sched_wakeup_granularity_ns
sysctl kernel.sched_min_granularity_ns

rmmod scsi_debug; modprobe scsi_debug dev_size_mb=6144 ndelay=41000 dix=1 dif=2
DEV=`ls -d /sys/bus/pseudo/drivers/scsi_debug/adapter*/host*/target*/*/block/* | head -1 | xargs basename`
DEV="/dev/"$DEV

mkfs.xfs -f $DEV
[ ! -d $DIR ] && mkdir -p $DIR
mount $DEV $DIR

fio --readwrite=randwrite --filesize=5g \
	--overwrite=1 \
	--filename=$DIR/fiofile \
	--runtime=30s --time_based \
	--ioengine=libaio --direct=1 --bs=$BS --iodepth=$QD \
	--iodepth_batch_submit=$BATCH \
	--iodepth_batch_complete_min=$BATCH \
	--numjobs=$NJOBS \
	--verify=$VERIFY \
	--name=/hana/fsperf/foo

umount $DEV
rmmod scsi_debug

[2] observe fio migration via bcc trace:

/usr/share/bcc/tools/trace -C -t 't:sched:sched_migrate_task "%s/%d cpu %d->%d", args->comm, args->pid, args->orig_cpu, args->dest_cpu' | grep fio

[3] test patch for queuing the xfs completion work on another CPU

Thanks,
Ming

diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 1fc28c2da279..bdc007a57706 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -158,9 +158,14 @@ static void iomap_dio_bio_end_io(struct bio *bio)
 		blk_wake_io_task(waiter);
 	} else if (dio->flags & IOMAP_DIO_WRITE) {
 		struct inode *inode = file_inode(dio->iocb->ki_filp);
+		unsigned cpu = cpumask_next(smp_processor_id(),
+				cpu_online_mask);
+
+		if (cpu >= nr_cpu_ids)
+			cpu = 0;
 
 		INIT_WORK(&dio->aio.work, iomap_dio_complete_work);
-		queue_work(inode->i_sb->s_dio_done_wq, &dio->aio.work);
+		queue_work_on(cpu, inode->i_sb->s_dio_done_wq, &dio->aio.work);
 	} else {
 		iomap_dio_complete_work(&dio->aio.work);
 	}
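
[4] workqueue excerpt referenced above: a minimal, trimmed reading of the
v5.4 sources (quoted from my copy of include/linux/workqueue.h and
fs/direct-io.c, so please double check against your tree) showing why the
stock dio completion work is always queued on the submitting CPU:

/* include/linux/workqueue.h (v5.4, trimmed):
 * "We queue the work to the CPU on which it was submitted, but if the CPU
 *  dies it can be processed by another CPU."
 */
static inline bool queue_work(struct workqueue_struct *wq,
			      struct work_struct *work)
{
	return queue_work_on(WORK_CPU_UNBOUND, wq, work);
}

/* fs/direct-io.c (v5.4, trimmed): the dio completion workqueue used by
 * iomap is created without WQ_UNBOUND, i.e. it is a normal per-cpu
 * workqueue, so WORK_CPU_UNBOUND above resolves to the local CPU at
 * queue time.
 */
int sb_init_dio_done_wq(struct super_block *sb)
{
	struct workqueue_struct *wq = alloc_workqueue("dio/%s",
						      WQ_MEM_RECLAIM, 0,
						      sb->s_id);
	...
}

So with the stock code the completion worker always wakes up on the CPU the
fio IO thread is submitting from and competes with it there, which is what
the debug patch in [3] sidesteps by picking the next online CPU.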