Message ID | cover.1603884539.git.anand.jain@oracle.com (mailing list archive)
---|---
State | New, archived
On 29/10/20 9:08 am, Anand Jain wrote:
>
> On 28/10/20 10:32 pm, Josef Bacik wrote:
>> On 10/28/20 9:25 AM, Anand Jain wrote:
>>> Based on misc-next
>>>
>>> Depends on the following 3 patches in the mailing list.
>>>    btrfs: add btrfs_strmatch helper
>>>    btrfs: create read policy framework
>>>    btrfs: create read policy sysfs attribute, pid
>>>
>>> v1:
>>> Drop tracing patch
>>> Drop factoring inflight command
>>> Here below are the performance differences. When inflight is used,
>>> it pushed a few commands to the other device, so losing the
>>> potential merges.
>>>
>>> with inflight:
>>>    READ: bw=195MiB/s (204MB/s), 195MiB/s-195MiB/s (204MB/s-204MB/s),
>>> io=15.6GiB (16.8GB), run=82203-82203msec
>>>    sda 256054
>>>    sdc 20
>>>
>>> without inflight:
>>>    READ: bw=192MiB/s (202MB/s), 192MiB/s-192MiB/s (202MB/s-202MB/s),
>>> io=15.6GiB (16.8GB), run=83231-83231msec
>>>    sda 141006
>>>    sdc 0
>>>
>>
>> What's the baseline? I think 3MiB/s is not that big of a tradeoff for
>> complexity, but if the baseline is like 190MiB/s then maybe it's
>> worth it. If the baseline is 90MiB/s then I say it's not worth the
>> inflight. Thanks,
>
> Oh no, I have to rerun the test cases here. As far as I remember,
> without inflight was better than with inflight, because with
> inflight there were fewer merges, leading to more read IOs.
>
> Will rerun and send the data.

<raid1>

With inflight (the inflight patches were in the RFC patchset):

pid [latency] device roundrobin ( 00)
   READ: bw=40.3MiB/s (42.3MB/s), 40.3MiB/s-40.3MiB/s (42.3MB/s-42.3MB/s),
io=15.6GiB (16.8GB), run=396546-396546msec
   vdb 363575
   sda 200771

Without inflight:

pid [latency] device roundrobin ( 00)
   READ: bw=41.7MiB/s (43.8MB/s), 41.7MiB/s-41.7MiB/s (43.8MB/s-43.8MB/s),
io=15.6GiB (16.8GB), run=383274-383274msec
   vdb 256238
   sda 0

Without inflight is better, due to less IO.

Thanks, Anand

> Thanks, Anand
>
>>
>> Josef
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index d3023879bdf6..72ec633e9063 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -5665,6 +5665,12 @@ static int find_live_mirror(struct btrfs_fs_info *fs_info,
 		fs_info->fs_devices->read_policy = BTRFS_READ_POLICY_PID;
 		fallthrough;
 	case BTRFS_READ_POLICY_PID:
+		/*
+		 * Just to factor in the cost of calculating the avg wait using
+		 * iostat for testing.
+		 */
+		btrfs_find_best_stripe(fs_info, map, first, num_stripes, log,
+				       logsz);
 		preferred_mirror = first + current->pid % num_stripes;
 		scnprintf(log, logsz, "first %d num_stripe %d %s (%d) preferred %d",