
[1/2] hwrng: iproc-rng200 - Set the quality value

Message ID 20200514190734.32746-2-l.stelmach@samsung.com (mailing list archive)
State Not Applicable
Series [1/2] hwrng: iproc-rng200 - Set the quality value

Commit Message

Łukasz Stelmach May 14, 2020, 7:07 p.m. UTC
The value has been estimated by obtaining 1024 chunks of data, 128 bytes
(1024 bits) each, from the generator and finding the chunk with minimal
entropy using the ent(1) tool. The value was 6.327820 bits of entropy
per 8 bits of data.
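
For reference, the measurement can be scripted roughly as below (a minimal
sketch, assuming the generator is exposed as /dev/hwrng; the chunk file
names are illustrative):

    # collect 1024 chunks of 128 bytes (1024 bits) each from the generator
    for i in $(seq -w 0 1023); do
        dd if=/dev/hwrng of=chunk-$i bs=128 count=1 2>/dev/null
    done
    # report the per-byte entropy of the worst chunk
    for f in chunk-*; do ent "$f" | awk '/^Entropy/ {print $3}'; done | sort -n | head -1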

Signed-off-by: Łukasz Stelmach <l.stelmach@samsung.com>
---
 drivers/char/hw_random/iproc-rng200.c | 1 +
 1 file changed, 1 insertion(+)

Comments

Stephan Mueller May 14, 2020, 8:20 p.m. UTC | #1
On Thursday, 14 May 2020, 21:07:33 CEST, Łukasz Stelmach wrote:

Hi Łukasz,

> The value has been estimated by obtaining 1024 chunks of data, 128 bytes
> (1024 bits) each, from the generator and finding the chunk with minimal
> entropy using the ent(1) tool. The value was 6.327820 bits of entropy
> per 8 bits of data.

I am not sure we should use the ent tool to define the entropy level. Ent 
seems to use a very coarse entropy estimation.

I would feel more comfortable using other measures like SP800-90B, which
even provides a tool for the analysis [1].

I understand that entropy estimates, well, are estimates. But the ent data is 
commonly not very conservative.

[1] https://github.com/usnistgov/SP800-90B_EntropyAssessment

Ciao
Stephan
Łukasz Stelmach May 14, 2020, 10:18 p.m. UTC | #2
It was <2020-05-14 Thu 22:20>, when Stephan Mueller wrote:
> On Thursday, 14 May 2020, 21:07:33 CEST, Łukasz Stelmach wrote:
>
> Hi Łukasz,
>
>> The value has been estimated by obtaining 1024 chunks of data, 128 bytes
>> (1024 bits) each, from the generator and finding the chunk with minimal
>> entropy using the ent(1) tool. The value was 6.327820 bits of entropy
>> per 8 bits of data.
>
> I am not sure we should use the ent tool to define the entropy
> level. Ent seems to use a very coarse entropy estimation.
>
> I would feel more comfortable using other measures like SP800-90B,
> which even provides a tool for the analysis [1].
>
> I understand that entropy estimates, well, are estimates. But the ent
> data is commonly not very conservative.
>
> [1] https://github.com/usnistgov/SP800-90B_EntropyAssessment

Thank you for pointing this out.

I am running tests using the SP800-90B tools and the first issue I can see
is the warning that samples contain less than 1e6 bytes of data. I know
little about the maths behind random number generators, but I have noticed
that the bigger the chunk of data from an RNG I feed into either ent or
ea_iid, the higher the entropy they report. That is why I divided the data
into 1024-bit chunks in the first place: to get worse (more pessimistic)
results. With ea_iid they get even worse (128 bytes of random data):


    Calculating baseline statistics...
    H_original: 4.107376
    H_bitstring: 0.795122
    min(H_original, 8 X H_bitstring): 4.107376

but I don't know how much I can trust it when I get such a warning:

    *** Warning: data contains less than 1000000 samples ***

ea_non_iid refuses to run tests with less than 4096 bytes of input.

I suspect that the lack of any warnings from ent doesn't make its
results any more reliable.

Anyway, I collected 1024 files of 1024 bits each once again and ran the
following tests:

    for f in exynos-trng/random*; do ./ea_iid "$f" | grep ^min; done |  sort | head -1
    for f in rng200/random*; do ./ea_iid "$f" | grep ^min; done | sort | head -1

For both RNGs I got the same

    min(H_original, 8 X H_bitstring): 3.393082

which, if I understand correctly, means I should set quality to no more
than 434. Do you think 400 is OK?
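
That limit seems to follow from treating quality as the number of entropy
bits per 1024 bits of data; a quick check of the arithmetic under that
reading:

    awk 'BEGIN { printf "%d\n", 1024 * 3.393082 / 8 }'    # prints 434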

Kind regards,
Stephan Mueller May 15, 2020, 8:32 a.m. UTC | #3
On Friday, 15 May 2020, 00:18:41 CEST, Lukasz Stelmach wrote:

Hi Lukasz,
> 
> I am running tests using the SP800-90B tools and the first issue I can see
> is the warning that samples contain less than 1e6 bytes of data. I know
> little about the maths behind random number generators, but I have noticed
> that the bigger the chunk of data from an RNG I feed into either ent or
> ea_iid, the higher the entropy they report. That is why I divided the data
> into 1024-bit chunks in the first place: to get worse (more pessimistic)
> results. With ea_iid they get even worse (128 bytes of random data):

I read that you seem to just take the output data from the RNG. If this is
correct, I think we can stop right here. The output of an RNG usually comes
after post-processing, commonly provided by a cryptographic function.

Thus, when processing the output of the RNG, all we measure here is the
quality of the cryptographic post-processing and not the entropy that may be
present in the data.

What we need is to access the noise source and analyze this with the given 
tool set. And yes, the analysis may require adjusting the data to a format 
that can be consumed and analyzed by the statistical tests.

Ciao
Stephan
Łukasz Stelmach May 15, 2020, 9:01 a.m. UTC | #4
It was <2020-05-15 Fri 00:18>, when Lukasz Stelmach wrote:
> It was <2020-05-14 Thu 22:20>, when Stephan Mueller wrote:
>> On Thursday, 14 May 2020, 21:07:33 CEST, Łukasz Stelmach wrote:
>>
>> Hi Łukasz,
>>
>>> The value has been estimated by obtaining 1024 chunks of data, 128 bytes
>>> (1024 bits) each, from the generator and finding the chunk with minimal
>>> entropy using the ent(1) tool. The value was 6.327820 bits of entropy
>>> per 8 bits of data.
>>
>> I am not sure we should use the ent tool to define the entropy
>> level. Ent seems to use a very coarse entropy estimation.
>>
>> I would feel more comfortable using other measures like SP800-90B,
>> which even provides a tool for the analysis [1].
>>
>> I understand that entropy estimates, well, are estimates. But the ent
>> data is commonly not very conservative.
>>
>> [1] https://github.com/usnistgov/SP800-90B_EntropyAssessment

[...]

> Anyway, I collected 1024 files of 1024 bits each once again and ran the
> following tests:
>
>     for f in exynos-trng/random*; do ./ea_iid "$f" | grep ^min; done |  sort | head -1
>     for f in rng200/random*; do ./ea_iid "$f" | grep ^min; done | sort | head -1
>
> For both RNGs I got the same
>
>     min(H_original, 8 X H_bitstring): 3.393082

Oddly enough I've got the same number for other random sources on my x86

| Source       | ea_iid -i | ea_iid -c (h') |      ent |
|--------------+-----------+----------------+----------|
| /dev/random  |  3.393082 |       0.768654 | 6.300399 |
| /dev/urandom |  3.393082 |       0.759161 | 6.348562 |
| tpm-rng      |  3.393082 |       0.735722 | 6.323990 |
| exynos-trng  |  3.393082 |       0.687825 | 6.327820 |
| rng200       |  3.393082 |       0.740376 | 6.291959 |

I suspect 1024 bits is too little for ea_iid to give a meaningful
result. BTW, the ent results also seem oddly low for the CRNG. Any
thoughts?
Łukasz Stelmach May 15, 2020, 9:06 a.m. UTC | #5
It was <2020-05-15 Fri 10:32>, when Stephan Mueller wrote:
> On Friday, 15 May 2020, 00:18:41 CEST, Lukasz Stelmach wrote:
>
>> I am running tests using the SP800-90B tools and the first issue I can see
>> is the warning that samples contain less than 1e6 bytes of data. I know
>> little about the maths behind random number generators, but I have noticed
>> that the bigger the chunk of data from an RNG I feed into either ent or
>> ea_iid, the higher the entropy they report. That is why I divided the data
>> into 1024-bit chunks in the first place: to get worse (more pessimistic)
>> results. With ea_iid they get even worse (128 bytes of random data):
>
> I read that you seem to just take the output data from the RNG. If this is
> correct, I think we can stop right here. The output of an RNG usually comes
> after post-processing, commonly provided by a cryptographic function.
>
> Thus, when processing the output of the RNG, all we measure here is the
> quality of the cryptographic post-processing and not the entropy that may be
> present in the data.
>
> What we need is to access the noise source and analyze this with the given 
> tool set. And yes, the analysis may require adjusting the data to a format 
> that can be consumed and analyzed by the statistical tests.

I took data from /dev/hwrng which is directly connected to the
hardware. See rng_dev_read() in drivers/char/hw_random/core.c.
Stephan Mueller May 15, 2020, 9:10 a.m. UTC | #6
On Friday, 15 May 2020, 11:01:48 CEST, Lukasz Stelmach wrote:

Hi Lukasz,


As I mentioned, all that is or seems to be analyzed here is the quality of the 
cryptographic post-processing. Thus none of the data can be used for getting 
an idea of the entropy content.

That said, the ent value indeed looks too low, which seems to be an issue in
the tool itself.

Note, for an entropy assessment commonly at least 1 million traces from the 
raw noise source are needed.

For examples of how such entropy assessments are conducted, see the LRNG
documentation [1] or the study of the Linux /dev/random implementation in [2].

[1] appendix C of https://www.chronox.de/lrng/doc/lrng.pdf

[2] chapter 6 of https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publications/Studies/LinuxRNG/LinuxRNG_EN.pdf

Ciao
Stephan
Łukasz Stelmach May 15, 2020, 10:59 a.m. UTC | #7
It was <2020-05-15 Fri 11:10>, when Stephan Mueller wrote:
> As I mentioned, all that is or seems to be analyzed here is the
> quality of the cryptographic post-processing. Thus none of the data
> can be used for getting an idea of the entropy content.
>
> That said, the ent value indeed looks too low, which seems to be an
> issue in the tool itself.
>
> Note, for an entropy assessment commonly at least 1 million traces
> from the raw noise source are needed.

I've got 1 MiB from each source. Of course, I used raw data from /dev/hwrng
for tpm, exynos and rng200.

| Source       | ea_iid -i | ea_iid -c (h') |      ent |
|--------------+-----------+----------------+----------|
| /dev/random  |  7.875064 |       0.998166 | 7.999801 |
| /dev/urandom |  7.879351 |       0.998373 | 7.999821 |
| tpm-rng      |  7.880012 |       0.998118 | 7.999828 |
| exynos-trng  |  7.435701 |       0.947574 | 7.991820 |
| rng200       |  7.883320 |       0.998592 | 7.999824 |
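
For context, each row above can be reproduced with commands along these
lines (a sketch; the file name is illustrative, and the ea_iid flags are
the ones named in the column headers):

    # a 1 MiB sample taken straight from the hardware RNG
    dd if=/dev/hwrng of=rng200.bin bs=1M count=1
    ./ea_iid -i rng200.bin   # initial entropy estimate
    ./ea_iid -c rng200.bin   # conditioned data (h') estimate
    ent rng200.bin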

> For examples of how such entropy assessments are conducted, see the LRNG
> documentation [1] or the study of the Linux /dev/random implementation in [2].

Thanks a lot, I am reading.

I will try to write something clever as soon as I parse and understand
these documents (and do other stuff too). Thank you very much for your help.

Kind regards,

Patch

diff --git a/drivers/char/hw_random/iproc-rng200.c b/drivers/char/hw_random/iproc-rng200.c
index 32d9fe61a225..7eb02a23f744 100644
--- a/drivers/char/hw_random/iproc-rng200.c
+++ b/drivers/char/hw_random/iproc-rng200.c
@@ -199,6 +199,7 @@  static int iproc_rng200_probe(struct platform_device *pdev)
 	priv->rng.read = iproc_rng200_read,
 	priv->rng.init = iproc_rng200_init,
 	priv->rng.cleanup = iproc_rng200_cleanup,
+	priv->rng.quality = 800,
 
 	/* Register driver */
 	ret = devm_hwrng_register(dev, &priv->rng);