[v14,0/3] scsi: ufs: Add Host Performance Booster Support

Message ID 20201216024444epcms2p5e69281911dd675306c473df3d2cef8b2@epcms2p5

Message

Daejun Park Dec. 16, 2020, 2:44 a.m. UTC
Changelog:

v13 -> v14
1. Clean up code as commented in Greg's review.
2. Add documentation for sysfs entries (from Greg's review).
3. Add experiment results of HPB performance testing (in this mail).

v12 -> v13
1. Clean up code per comments from Can Guo.
2. Add HPB related descriptor/flag/attributes in sysfs.
3. Change base commit from 5.10/scsi-queue to 5.11/scsi-queue.

v11 -> v12
1. Return an error value when HPB fails to initialize a pinned active
region.
2. Disable the HPB feature if HPB fails to allocate essential memory or
the workqueue.
3. Set the proper sub-region state when the region is already evicted.

v10 -> v11
Add a newline at the end of the last line of the Kconfig file.

v9 -> v10
1. Fixed a 64-bit division error.
2. Fixed problems commented on in Bart's review.

v8 -> v9
1. Change sysfs initialization.
2. Change descriptor reading during HPB initialization.
3. Fixed problems commented on in Bart's review.
4. Change base commit from 5.9/scsi-queue to 5.10/scsi-queue.

v7 -> v8
Remove wrongly added tags.

v6 -> v7
1. Remove UFS feature layer.
2. Clean up sparse errors.

v5 -> v6
Change base commit to b53293fa662e28ae0cdd40828dc641c09f133405

v4 -> v5
Delete an unused macro definition.

v3 -> v4
1. Cleanup.

v2 -> v3
1. Add checking of the input module parameter value.
2. Change base commit from 5.8/scsi-queue to 5.9/scsi-queue.
3. Clean up unused variables and labels.

v1 -> v2
1. Change the full boilerplate text to SPDX style.
2. Adopt dynamic allocation for sub-region data structure.
3. Cleanup.

NAND flash memory-based storage devices use Flash Translation Layer (FTL)
to translate logical addresses of I/O requests to corresponding flash
memory addresses. Mobile storage devices typically have RAM of constrained
size and thus lack the memory to keep the whole mapping table.
Therefore, mapping tables are partially retrieved from NAND flash on
demand, causing random-read performance degradation.

To improve random read performance, JESD220-3 (HPB v1.0) proposes HPB
(Host Performance Booster) which uses host system memory as a cache for the
FTL mapping table. By using HPB, FTL data can be read from host memory
faster than from NAND flash memory. 

The current version only supports DCM (device control mode).
This patch series consists of three parts to support the HPB feature:

1) HPB probe and initialization process
2) READ -> HPB READ using cached map information
3) L2P (logical to physical) map management

In the HPB probe and init process, the UFS device information is
queried. After checking the supported features, the data structures for HPB
are initialized according to the device information.
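
As a rough illustration, the way these descriptor values size the HPB data
structures can be sketched as below. This is a minimal user-space sketch, not
the driver code: it assumes the JESD220-3 convention that region and
sub-region sizes are encoded as 512B * 2^n, and the struct and function names
are hypothetical.

#include <stdint.h>
#include <stdio.h>

#define HPB_ENTRY_SIZE  8ULL     /* one cached L2P entry (PPN) is 8 bytes */
#define BLOCK_SIZE      4096ULL  /* logical block size assumed to be 4 KiB */

struct hpb_lu_geometry {                /* hypothetical container */
        uint64_t lu_num_blocks;         /* logical blocks in the LU */
        uint8_t  bHPBRegionSize;        /* region size = 512B << n (assumed) */
        uint8_t  bHPBSubRegionSize;     /* sub-region size = 512B << n (assumed) */
};

static void hpb_compute_layout(const struct hpb_lu_geometry *g)
{
        uint64_t rgn_bytes  = 512ULL << g->bHPBRegionSize;
        uint64_t srgn_bytes = 512ULL << g->bHPBSubRegionSize;
        uint64_t lu_bytes   = g->lu_num_blocks * BLOCK_SIZE;

        uint64_t rgns          = (lu_bytes + rgn_bytes - 1) / rgn_bytes;
        uint64_t srgns_per_rgn = rgn_bytes / srgn_bytes;

        /* host memory needed to cache the map of one sub-region */
        uint64_t srgn_map_bytes = srgn_bytes / BLOCK_SIZE * HPB_ENTRY_SIZE;

        printf("regions=%llu sub-regions/region=%llu map-bytes/sub-region=%llu\n",
               (unsigned long long)rgns, (unsigned long long)srgns_per_rgn,
               (unsigned long long)srgn_map_bytes);
}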

A read I/O in an active sub-region whose map is cached is changed to
an HPB READ command by the HPB.
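
To make the conversion concrete, here is a minimal user-space sketch, not the
driver code. It assumes the HPB READ CDB layout described in JESD220-3
(opcode 0xF8, LBA in bytes 2-5, the 8-byte cached entry in bytes 6-13, and
the transfer length in byte 14); ppn_lookup() is a hypothetical stand-in for
the sub-region map table lookup.

#include <stdint.h>
#include <string.h>

#define UFSHPB_READ_OPCODE 0xF8         /* HPB READ opcode, per JESD220-3 */

/* Hypothetical map lookup; a real implementation consults the cached table. */
static int ppn_lookup(uint32_t lba, uint64_t *ppn)
{
        (void)lba; (void)ppn;
        return -1;                      /* "not cached" in this stub */
}

static void put_be32(uint8_t *p, uint32_t v)
{
        p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v;
}

static void put_be64(uint8_t *p, uint64_t v)
{
        put_be32(p, v >> 32);
        put_be32(p + 4, (uint32_t)v);
}

/* Returns 0 if the CDB was rewritten to HPB READ, -1 to keep the normal READ. */
static int hpb_prep_read(uint8_t cdb[16], uint32_t lba, uint8_t nr_blocks)
{
        uint64_t ppn;

        if (ppn_lookup(lba, &ppn))      /* entry not cached (or dirty) */
                return -1;

        memset(cdb, 0, 16);
        cdb[0] = UFSHPB_READ_OPCODE;
        put_be32(&cdb[2], lba);         /* logical block address */
        put_be64(&cdb[6], ppn);         /* cached L2P entry handed to the device */
        cdb[14] = nr_blocks;            /* transfer length in logical blocks */
        return 0;
}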

The HPB manages the L2P map using information received from the
device. For an active sub-region, the HPB caches the map through a
ufshpb_map request. For an inactive region, the HPB discards the L2P map.
When a write I/O occurs in an active sub-region, the associated entry in
the dirty bitmap is marked dirty to prevent stale reads.
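
A minimal sketch of the dirty tracking, again with hypothetical names and a
made-up sub-region size, is shown below: a write marks the entries it
overlaps as dirty, and a read only uses the cached map when every entry it
covers is still clean.

#include <stdbool.h>
#include <stdint.h>

#define ENTRIES_PER_SRGN 4096UL         /* map entries per sub-region (assumed) */
#define BITS_PER_WORD    (8 * sizeof(unsigned long))

struct hpb_subregion {                  /* hypothetical per-sub-region state */
        unsigned long dirty[ENTRIES_PER_SRGN / BITS_PER_WORD];
};

static void set_dirty(struct hpb_subregion *s, unsigned long entry)
{
        s->dirty[entry / BITS_PER_WORD] |= 1UL << (entry % BITS_PER_WORD);
}

static bool is_dirty(const struct hpb_subregion *s, unsigned long entry)
{
        return s->dirty[entry / BITS_PER_WORD] & (1UL << (entry % BITS_PER_WORD));
}

/* On a write I/O: mark every map entry the write overlaps as dirty. */
static void hpb_note_write(struct hpb_subregion *s, unsigned long first_entry,
                           unsigned long nr_entries)
{
        for (unsigned long i = 0; i < nr_entries; i++)
                set_dirty(s, first_entry + i);
}

/* Before issuing HPB READ: use the cached map only if nothing is dirty. */
static bool hpb_may_use_cached_map(const struct hpb_subregion *s,
                                   unsigned long first_entry,
                                   unsigned long nr_entries)
{
        for (unsigned long i = 0; i < nr_entries; i++)
                if (is_dirty(s, first_entry + i))
                        return false;
        return true;
}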

HPB is shown to give a performance improvement of 58-67% for random read
workloads [1].

We measured the total start-up time of popular applications and observed
the difference made by enabling HPB.
The applications are 12 game apps and 24 non-game apps. Each target
application was launched in order; one cycle consists of running the 36
applications in sequence. We repeated the cycle to observe the performance
improvement from L2P mapping cache hits in HPB.

The following is the experiment environment:
 - kernel version: 4.4.0 
 - UFS 2.1 (64GB)

Result:
+-------+----------+----------+-------+
| cycle | baseline | with HPB | diff  |
+-------+----------+----------+-------+
| 1     | 272.4    | 264.9    | -7.5  |
| 2     | 250.4    | 248.2    | -2.2  |
| 3     | 226.2    | 215.6    | -10.6 |
| 4     | 230.6    | 214.8    | -15.8 |
| 5     | 232.0    | 218.1    | -13.9 |
| 6     | 231.9    | 212.6    | -19.3 |
+-------+----------+----------+-------+

This patch series is based on the 5.11/scsi-queue branch.

[1]:
https://www.usenix.org/conference/hotstorage17/program/presentation/jeong

Daejun Park (3):
  scsi: ufs: Introduce HPB feature
  scsi: ufs: L2P map management for HPB read
  scsi: ufs: Prepare HPB read for cached sub-region

 Documentation/ABI/testing/sysfs-driver-ufs |   80 +
 drivers/scsi/ufs/Kconfig                   |    9 +
 drivers/scsi/ufs/Makefile                  |    1 +
 drivers/scsi/ufs/ufs-sysfs.c               |   18 +
 drivers/scsi/ufs/ufs.h                     |   49 +
 drivers/scsi/ufs/ufshcd.c                  |   53 +
 drivers/scsi/ufs/ufshcd.h                  |   23 +-
 drivers/scsi/ufs/ufshpb.c                  | 1767 ++++++++++++++++++++
 drivers/scsi/ufs/ufshpb.h                  |  230 +++
 9 files changed, 2229 insertions(+), 1 deletion(-)
 create mode 100644 drivers/scsi/ufs/ufshpb.c
 create mode 100644 drivers/scsi/ufs/ufshpb.h

--
2.25.1

Comments

Greg KH Dec. 16, 2020, 10:07 a.m. UTC | #1
On Wed, Dec 16, 2020 at 11:44:44AM +0900, Daejun Park wrote:
> NAND flash memory-based storage devices use Flash Translation Layer (FTL)
> to translate logical addresses of I/O requests to corresponding flash
> memory addresses. Mobile storage devices typically have RAM of constrained
> size and thus lack the memory to keep the whole mapping table.
> Therefore, mapping tables are partially retrieved from NAND flash on
> demand, causing random-read performance degradation.
> 
> To improve random read performance, JESD220-3 (HPB v1.0) proposes HPB
> (Host Performance Booster) which uses host system memory as a cache for the
> FTL mapping table. By using HPB, FTL data can be read from host memory
> faster than from NAND flash memory. 
> 
> The current version only supports DCM (device control mode).
> This patch series consists of three parts to support the HPB feature:
> 
> 1) HPB probe and initialization process
> 2) READ -> HPB READ using cached map information
> 3) L2P (logical to physical) map management
> 
> In the HPB probe and init process, the UFS device information is
> queried. After checking the supported features, the data structures for HPB
> are initialized according to the device information.
> 
> A read I/O in an active sub-region whose map is cached is changed to
> an HPB READ command by the HPB.
> 
> The HPB manages the L2P map using information received from the
> device. For an active sub-region, the HPB caches the map through a
> ufshpb_map request. For an inactive region, the HPB discards the L2P map.
> When a write I/O occurs in an active sub-region, the associated entry in
> the dirty bitmap is marked dirty to prevent stale reads.
> 
> HPB is shown to give a performance improvement of 58-67% for random read
> workloads [1].
> 
> We measured the total start-up time of popular applications and observed
> the difference made by enabling HPB.
> The applications are 12 game apps and 24 non-game apps. Each target
> application was launched in order; one cycle consists of running the 36
> applications in sequence. We repeated the cycle to observe the performance
> improvement from L2P mapping cache hits in HPB.
> 
> The following is the experiment environment:
>  - kernel version: 4.4.0 
>  - UFS 2.1 (64GB)
> 
> Result:
> +-------+----------+----------+-------+
> | cycle | baseline | with HPB | diff  |
> +-------+----------+----------+-------+
> | 1     | 272.4    | 264.9    | -7.5  |
> | 2     | 250.4    | 248.2    | -2.2  |
> | 3     | 226.2    | 215.6    | -10.6 |
> | 4     | 230.6    | 214.8    | -15.8 |
> | 5     | 232.0    | 218.1    | -13.9 |
> | 6     | 231.9    | 212.6    | -19.3 |
> +-------+----------+----------+-------+

I feel this was buried in the 00 email; shouldn't it go into the 01
commit changelog so that you can see this?

But why does the "cycle" matter here?

Can you run a normal benchmark, like fio, on here so we can get some
numbers we know how to compare to other systems with, and possibly
reproduce it ourselves?  I'm sure fio will easily show random read
performance increases, right?

thanks,

greg k-h
Daejun Park Dec. 18, 2020, 1:05 a.m. UTC | #2
Hi, Greg

> > NAND flash memory-based storage devices use Flash Translation Layer (FTL)
> > to translate logical addresses of I/O requests to corresponding flash
> > memory addresses. Mobile storage devices typically have RAM of constrained
> > size and thus lack the memory to keep the whole mapping table.
> > Therefore, mapping tables are partially retrieved from NAND flash on
> > demand, causing random-read performance degradation.
> > 
> > To improve random read performance, JESD220-3 (HPB v1.0) proposes HPB
> > (Host Performance Booster) which uses host system memory as a cache for the
> > FTL mapping table. By using HPB, FTL data can be read from host memory
> > faster than from NAND flash memory. 
> > 
> > The current version only supports DCM (device control mode).
> > This patch series consists of three parts to support the HPB feature:
> > 
> > 1) HPB probe and initialization process
> > 2) READ -> HPB READ using cached map information
> > 3) L2P (logical to physical) map management
> > 
> > In the HPB probe and init process, the UFS device information is
> > queried. After checking the supported features, the data structures for HPB
> > are initialized according to the device information.
> > 
> > A read I/O in an active sub-region whose map is cached is changed to
> > an HPB READ command by the HPB.
> > 
> > The HPB manages the L2P map using information received from the
> > device. For an active sub-region, the HPB caches the map through a
> > ufshpb_map request. For an inactive region, the HPB discards the L2P map.
> > When a write I/O occurs in an active sub-region, the associated entry in
> > the dirty bitmap is marked dirty to prevent stale reads.
> > 
> > HPB is shown to give a performance improvement of 58-67% for random read
> > workloads [1].
> > 
> > We measured the total start-up time of popular applications and observed
> > the difference made by enabling HPB.
> > The applications are 12 game apps and 24 non-game apps. Each target
> > application was launched in order; one cycle consists of running the 36
> > applications in sequence. We repeated the cycle to observe the performance
> > improvement from L2P mapping cache hits in HPB.
> > 
> > The following is the experiment environment:
> >  - kernel version: 4.4.0 
> >  - UFS 2.1 (64GB)
> > 
> > Result:
> > +-------+----------+----------+-------+
> > | cycle | baseline | with HPB | diff  |
> > +-------+----------+----------+-------+
> > | 1     | 272.4    | 264.9    | -7.5  |
> > | 2     | 250.4    | 248.2    | -2.2  |
> > | 3     | 226.2    | 215.6    | -10.6 |
> > | 4     | 230.6    | 214.8    | -15.8 |
> > | 5     | 232.0    | 218.1    | -13.9 |
> > | 6     | 231.9    | 212.6    | -19.3 |
> > +-------+----------+----------+-------+
> 
> I feel this was burried in the 00 email, shouldn't it go into the 01
> commit changelog so that you can see this?

Sure, I will move this result to the 01 commit log.
 
> But why does the "cycle" matter here?

I think the iteration minimizes other factors that affect the start-up time
of the applications.

> Can you run a normal benchmark, like fio, on here so we can get some
> numbers we know how to compare to other systems with, and possible
> reproduct it ourselves?  I'm sure fio will easily show random read
> performance increases, right?

Here is my iozone script:
iozone -r 4k -+n -i2 -ecI -t 16 -l 16 -u 16 \
-s $IO_RANGE/16 -F mnt/tmp_1 mnt/tmp_2 mnt/tmp_3 mnt/tmp_4 \
mnt/tmp_5 mnt/tmp_6 mnt/tmp_7 mnt/tmp_8 mnt/tmp_9 mnt/tmp_10 mnt/tmp_11 \
mnt/tmp_12 mnt/tmp_13 mnt/tmp_14 mnt/tmp_15 mnt/tmp_16

Result:
+----------+--------+---------+
| IO range | HPB on | HPB off |
+----------+--------+---------+
|   1 GB   | 294.8  | 300.87  |
|   4 GB   | 293.51 | 179.35  |
|   8 GB   | 294.85 | 162.52  |
|  16 GB   | 293.45 | 156.26  |
|  32 GB   | 277.4  | 153.25  |
+----------+--------+---------+

Thanks,
Daejun
Bart Van Assche Dec. 18, 2020, 1:58 a.m. UTC | #3
On 12/17/20 5:05 PM, Daejun Park wrote:
> Here is my iozone script:
> iozone -r 4k -+n -i2 -ecI -t 16 -l 16 -u 16 \
> -s $IO_RANGE/16 -F mnt/tmp_1 mnt/tmp_2 mnt/tmp_3 mnt/tmp_4 \
> mnt/tmp_5 mnt/tmp_6 mnt/tmp_7 mnt/tmp_8 mnt/tmp_9 mnt/tmp_10 mnt/tmp_11 \
> mnt/tmp_12 mnt/tmp_13 mnt/tmp_14 mnt/tmp_15 mnt/tmp_16
> 
> Result:
> +----------+--------+---------+
> | IO range | HPB on | HPB off |
> +----------+--------+---------+
> |   1 GB   | 294.8  | 300.87  |
> |   4 GB   | 293.51 | 179.35  |
> |   8 GB   | 294.85 | 162.52  |
> |  16 GB   | 293.45 | 156.26  |
> |  32 GB   | 277.4  | 153.25  |
> +----------+--------+---------+

Hi Daejun,

What are the units of the numbers in columns 2 and 3?

Thanks,

Bart.
Daejun Park Dec. 18, 2020, 2:16 a.m. UTC | #4
On 12/17/20 5:05 PM, Daejun Park wrote:
> > Here is my iozone script:
> > iozone -r 4k -+n -i2 -ecI -t 16 -l 16 -u 16 \
> > -s $IO_RANGE/16 -F mnt/tmp_1 mnt/tmp_2 mnt/tmp_3 mnt/tmp_4 \
> > mnt/tmp_5 mnt/tmp_6 mnt/tmp_7 mnt/tmp_8 mnt/tmp_9 mnt/tmp_10 mnt/tmp_11 \
> > mnt/tmp_12 mnt/tmp_13 mnt/tmp_14 mnt/tmp_15 mnt/tmp_16
> > 
> > Result:
> > +----------+--------+---------+
> > | IO range | HPB on | HPB off |
> > +----------+--------+---------+
> > |   1 GB   | 294.8  | 300.87  |
> > |   4 GB   | 293.51 | 179.35  |
> > |   8 GB   | 294.85 | 162.52  |
> > |  16 GB   | 293.45 | 156.26  |
> > |  32 GB   | 277.4  | 153.25  |
> > +----------+--------+---------+
> 
> Hi Daejun,
> 
> What are the units of the numbers in columns 2 and 3?
> 
> Thanks,
> 
> Bart.
> 
I forgot to add the units; they are MB/s.

Thanks
Daejun