diff mbox series

[3/3] i386/kvm: initialize struct at full before ioctl call

Message ID 1564502498-805893-4-git-send-email-andrey.shinkevich@virtuozzo.com (mailing list archive)
State New, archived
Headers show
Series Reduce the number of Valgrind reports in unit tests. | expand

Commit Message

Andrey Shinkevich July 30, 2019, 4:01 p.m. UTC
Not the whole structure is initialized before passing it to KVM.
Initialize it fully to reduce the number of Valgrind reports.

Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
---
 target/i386/kvm.c | 3 +++
 1 file changed, 3 insertions(+)

Comments

Philippe Mathieu-Daudé July 30, 2019, 4:44 p.m. UTC | #1
On 7/30/19 6:01 PM, Andrey Shinkevich wrote:
> Not the whole structure is initialized before passing it to the KVM.
> Reduce the number of Valgrind reports.
> 
> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
> ---
>  target/i386/kvm.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
> index dbbb137..ed57e31 100644
> --- a/target/i386/kvm.c
> +++ b/target/i386/kvm.c
> @@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
>          return 0;
>      }
>  
> +    memset(&msr_data, 0, sizeof(msr_data));

I wonder about the overhead of this one...

>      msr_data.info.nmsrs = 1;
>      msr_data.entries[0].index = MSR_IA32_TSC;
>      env->tsc_valid = !runstate_is_running();
> @@ -1706,6 +1707,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
>  
>      if (has_xsave) {
>          env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
> +        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));

OK

>      }
>  
>      max_nested_state_len = kvm_max_nested_state_length();
> @@ -3477,6 +3479,7 @@ static int kvm_put_debugregs(X86CPU *cpu)
>          return 0;
>      }
>  
> +    memset(&dbgregs, 0, sizeof(dbgregs));

OK

>      for (i = 0; i < 4; i++) {
>          dbgregs.db[i] = env->dr[i];
>      }

We could remove 'dbgregs.flags = 0;'

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Peter Maydell July 30, 2019, 4:46 p.m. UTC | #2
On Tue, 30 Jul 2019 at 17:05, Andrey Shinkevich
<andrey.shinkevich@virtuozzo.com> wrote:
>
> Not the whole structure is initialized before passing it to the KVM.
> Reduce the number of Valgrind reports.
>
> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

Does it even make sense to try to valgrind a KVM-enabled run
of QEMU? As soon as we run the guest it will make modifications
to memory which Valgrind can't track; and I don't think
Valgrind supports the KVM_RUN ioctl anyway...

thanks
-- PMM
Christian Borntraeger July 30, 2019, 5:05 p.m. UTC | #3
On 30.07.19 18:44, Philippe Mathieu-Daudé wrote:
> On 7/30/19 6:01 PM, Andrey Shinkevich wrote:
>> Not the whole structure is initialized before passing it to the KVM.
>> Reduce the number of Valgrind reports.
>>
>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>> ---
>>  target/i386/kvm.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
>> index dbbb137..ed57e31 100644
>> --- a/target/i386/kvm.c
>> +++ b/target/i386/kvm.c
>> @@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
>>          return 0;
>>      }
>>  
>> +    memset(&msr_data, 0, sizeof(msr_data));
> 
> I wonder the overhead of this one...

Can't we use designated initializers like in

commit bdfc8480c50a53d91aa9a513d23a84de0d5fbc86
Author:     Christian Borntraeger <borntraeger@de.ibm.com>
AuthorDate: Thu Oct 30 09:23:41 2014 +0100
Commit:     Paolo Bonzini <pbonzini@redhat.com>
CommitDate: Mon Dec 15 12:21:01 2014 +0100

    valgrind/i386: avoid false positives on KVM_SET_XCRS ioctl

and others?

This should minimize the impact. 
> 
>>      msr_data.info.nmsrs = 1;
>>      msr_data.entries[0].index = MSR_IA32_TSC;
>>      env->tsc_valid = !runstate_is_running();
>> @@ -1706,6 +1707,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
>>  
>>      if (has_xsave) {
>>          env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
>> +        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));
> 
> OK
> 
>>      }
>>  
>>      max_nested_state_len = kvm_max_nested_state_length();
>> @@ -3477,6 +3479,7 @@ static int kvm_put_debugregs(X86CPU *cpu)
>>          return 0;
>>      }
>>  
>> +    memset(&dbgregs, 0, sizeof(dbgregs));
> 
> OK
> 
>>      for (i = 0; i < 4; i++) {
>>          dbgregs.db[i] = env->dr[i];
>>      }
> 
> We could remove 'dbgregs.flags = 0;'
> 
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>
Christian Borntraeger July 30, 2019, 5:09 p.m. UTC | #4
On 30.07.19 18:46, Peter Maydell wrote:
> On Tue, 30 Jul 2019 at 17:05, Andrey Shinkevich
> <andrey.shinkevich@virtuozzo.com> wrote:
>>
>> Not the whole structure is initialized before passing it to the KVM.
>> Reduce the number of Valgrind reports.
>>
>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
> 
> Does it even make sense to try to valgrind a KVM-enabled run
> of QEMU? As soon as we run the guest it will make modifications
> to memory which Valgrind can't track; and I don't think
> Valgrind supports the KVM_RUN ioctl anyway...

As long as we do not care about the guest memory, it does make sense 
and it does find bugs.

See also 
https://www.linux-kvm.org/page/KVM_Forum_2014
https://www.linux-kvm.org/images/d/d2/03x07-Valgrind.pdf

Unfortunately I wasn't able to follow up on those.
Philippe Mathieu-Daudé July 30, 2019, 5:14 p.m. UTC | #5
On 7/30/19 7:05 PM, Christian Borntraeger wrote:
> On 30.07.19 18:44, Philippe Mathieu-Daudé wrote:
>> On 7/30/19 6:01 PM, Andrey Shinkevich wrote:
>>> Not the whole structure is initialized before passing it to the KVM.
>>> Reduce the number of Valgrind reports.
>>>
>>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>>> ---
>>>  target/i386/kvm.c | 3 +++
>>>  1 file changed, 3 insertions(+)
>>>
>>> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
>>> index dbbb137..ed57e31 100644
>>> --- a/target/i386/kvm.c
>>> +++ b/target/i386/kvm.c
>>> @@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
>>>          return 0;
>>>      }
>>>  
>>> +    memset(&msr_data, 0, sizeof(msr_data));
>>
>> I wonder the overhead of this one...
> 
> Cant we use designated initializers like in
> 
> commit bdfc8480c50a53d91aa9a513d23a84de0d5fbc86
> Author:     Christian Borntraeger <borntraeger@de.ibm.com>
> AuthorDate: Thu Oct 30 09:23:41 2014 +0100
> Commit:     Paolo Bonzini <pbonzini@redhat.com>
> CommitDate: Mon Dec 15 12:21:01 2014 +0100
> 
>     valgrind/i386: avoid false positives on KVM_SET_XCRS ioctl
> 
> and others?

Is the compiler smart enough to figure out that it doesn't need the zeroing
in case env->tsc_valid is true and the function returns early?

> 
> This should minimize the impact. 
>>
>>>      msr_data.info.nmsrs = 1;
>>>      msr_data.entries[0].index = MSR_IA32_TSC;
>>>      env->tsc_valid = !runstate_is_running();
>>> @@ -1706,6 +1707,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
>>>  
>>>      if (has_xsave) {
>>>          env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
>>> +        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));
>>
>> OK
>>
>>>      }
>>>  
>>>      max_nested_state_len = kvm_max_nested_state_length();
>>> @@ -3477,6 +3479,7 @@ static int kvm_put_debugregs(X86CPU *cpu)
>>>          return 0;
>>>      }
>>>  
>>> +    memset(&dbgregs, 0, sizeof(dbgregs));
>>
>> OK
>>
>>>      for (i = 0; i < 4; i++) {
>>>          dbgregs.db[i] = env->dr[i];
>>>      }
>>
>> We could remove 'dbgregs.flags = 0;'
>>
>> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>
>
Christian Borntraeger July 30, 2019, 5:47 p.m. UTC | #6
On 30.07.19 19:14, Philippe Mathieu-Daudé wrote:
> On 7/30/19 7:05 PM, Christian Borntraeger wrote:
>> On 30.07.19 18:44, Philippe Mathieu-Daudé wrote:
>>> On 7/30/19 6:01 PM, Andrey Shinkevich wrote:
>>>> Not the whole structure is initialized before passing it to the KVM.
>>>> Reduce the number of Valgrind reports.
>>>>
>>>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>>>> ---
>>>>  target/i386/kvm.c | 3 +++
>>>>  1 file changed, 3 insertions(+)
>>>>
>>>> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
>>>> index dbbb137..ed57e31 100644
>>>> --- a/target/i386/kvm.c
>>>> +++ b/target/i386/kvm.c
>>>> @@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
>>>>          return 0;
>>>>      }
>>>>  
>>>> +    memset(&msr_data, 0, sizeof(msr_data));
>>>
>>> I wonder the overhead of this one...
>>
>> Cant we use designated initializers like in
>>
>> commit bdfc8480c50a53d91aa9a513d23a84de0d5fbc86
>> Author:     Christian Borntraeger <borntraeger@de.ibm.com>
>> AuthorDate: Thu Oct 30 09:23:41 2014 +0100
>> Commit:     Paolo Bonzini <pbonzini@redhat.com>
>> CommitDate: Mon Dec 15 12:21:01 2014 +0100
>>
>>     valgrind/i386: avoid false positives on KVM_SET_XCRS ioctl
>>
>> and others?
> 
> Is the compiler smart enough to figure out it doesn't need to zeroes in
> case env->tsc_valid is true and the function returns?

Good question, we would need to double check with objdump.
Paolo Bonzini July 30, 2019, 7:20 p.m. UTC | #7
On 30/07/19 18:01, Andrey Shinkevich wrote:
> Not the whole structure is initialized before passing it to the KVM.
> Reduce the number of Valgrind reports.
> 
> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>

Christian, is this the right fix?  It's not expensive so it wouldn't be
an issue, just checking if there's any better alternative.

Paolo

> ---
>  target/i386/kvm.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
> index dbbb137..ed57e31 100644
> --- a/target/i386/kvm.c
> +++ b/target/i386/kvm.c
> @@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
>          return 0;
>      }
>  
> +    memset(&msr_data, 0, sizeof(msr_data));
>      msr_data.info.nmsrs = 1;
>      msr_data.entries[0].index = MSR_IA32_TSC;
>      env->tsc_valid = !runstate_is_running();
> @@ -1706,6 +1707,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
>  
>      if (has_xsave) {
>          env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
> +        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));
>      }
>  
>      max_nested_state_len = kvm_max_nested_state_length();
> @@ -3477,6 +3479,7 @@ static int kvm_put_debugregs(X86CPU *cpu)
>          return 0;
>      }
>  
> +    memset(&dbgregs, 0, sizeof(dbgregs));
>      for (i = 0; i < 4; i++) {
>          dbgregs.db[i] = env->dr[i];
>      }
> -- 
> 1.8.3.1
>
Paolo Bonzini July 30, 2019, 7:22 p.m. UTC | #8
On 30/07/19 18:44, Philippe Mathieu-Daudé wrote:
>> +++ b/target/i386/kvm.c
>> @@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
>>          return 0;
>>      }
>>  
>> +    memset(&msr_data, 0, sizeof(msr_data));
> I wonder the overhead of this one...
> 

There is just one MSR in the struct so it is okay.

Paolo
Christian Borntraeger July 31, 2019, 7:24 a.m. UTC | #9
On 30.07.19 21:20, Paolo Bonzini wrote:
> On 30/07/19 18:01, Andrey Shinkevich wrote:
>> Not the whole structure is initialized before passing it to the KVM.
>> Reduce the number of Valgrind reports.
>>
>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
> 
> Christian, is this the right fix?  It's not expensive so it wouldn't be
> an issue, just checking if there's any better alternative.

I think all of these variants are valid, with pros and cons:
1. teach valgrind about this:
add to coregrind/m_syswrap/syswrap-linux.c (and the relevant header files)
knowledge about which parts are actually touched.
2. use designated initializers
3. use memset
4. use a valgrind callback (VG_USERREQ__MAKE_MEM_DEFINED) to tell it that this memory is defined

> 
> Paolo
> 
>> ---
>>  target/i386/kvm.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
>> index dbbb137..ed57e31 100644
>> --- a/target/i386/kvm.c
>> +++ b/target/i386/kvm.c
>> @@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
>>          return 0;
>>      }
>>  
>> +    memset(&msr_data, 0, sizeof(msr_data));
>>      msr_data.info.nmsrs = 1;
>>      msr_data.entries[0].index = MSR_IA32_TSC;
>>      env->tsc_valid = !runstate_is_running();
>> @@ -1706,6 +1707,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
>>  
>>      if (has_xsave) {
>>          env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
>> +        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));
>>      }
>>  
>>      max_nested_state_len = kvm_max_nested_state_length();
>> @@ -3477,6 +3479,7 @@ static int kvm_put_debugregs(X86CPU *cpu)
>>          return 0;
>>      }
>>  
>> +    memset(&dbgregs, 0, sizeof(dbgregs));
>>      for (i = 0; i < 4; i++) {
>>          dbgregs.db[i] = env->dr[i];
>>      }
>> -- 
>> 1.8.3.1
>>
> 
>
Christophe de Dinechin July 31, 2019, 9:05 a.m. UTC | #10
Christian Borntraeger writes:

> On 30.07.19 18:44, Philippe Mathieu-Daudé wrote:
>> On 7/30/19 6:01 PM, Andrey Shinkevich wrote:
>>> Not the whole structure is initialized before passing it to the KVM.
>>> Reduce the number of Valgrind reports.
>>>
>>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>>> ---
>>>  target/i386/kvm.c | 3 +++
>>>  1 file changed, 3 insertions(+)
>>>
>>> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
>>> index dbbb137..ed57e31 100644
>>> --- a/target/i386/kvm.c
>>> +++ b/target/i386/kvm.c
>>> @@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
>>>          return 0;
>>>      }
>>>
>>> +    memset(&msr_data, 0, sizeof(msr_data));
>>
>> I wonder the overhead of this one...
>
> Cant we use designated initializers like in
>
> commit bdfc8480c50a53d91aa9a513d23a84de0d5fbc86
> Author:     Christian Borntraeger <borntraeger@de.ibm.com>
> AuthorDate: Thu Oct 30 09:23:41 2014 +0100
> Commit:     Paolo Bonzini <pbonzini@redhat.com>
> CommitDate: Mon Dec 15 12:21:01 2014 +0100
>
>     valgrind/i386: avoid false positives on KVM_SET_XCRS ioctl
>
> and others?
>
> This should minimize the impact.

Oh, when you talked about using designated initializers, I thought you
were talking about fully initializing the struct, like so:

diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index dbbb13772a..3533870c43 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -180,19 +180,20 @@ static int kvm_get_tsc(CPUState *cs)
 {
     X86CPU *cpu = X86_CPU(cs);
     CPUX86State *env = &cpu->env;
-    struct {
-        struct kvm_msrs info;
-        struct kvm_msr_entry entries[1];
-    } msr_data;
     int ret;

     if (env->tsc_valid) {
         return 0;
     }

-    msr_data.info.nmsrs = 1;
-    msr_data.entries[0].index = MSR_IA32_TSC;
-    env->tsc_valid = !runstate_is_running();
+    struct {
+        struct kvm_msrs info;
+        struct kvm_msr_entry entries[1];
+    } msr_data = {
+        .info = { .nmsrs = 1 },
+        .entries = { [0] = { .index = MSR_IA32_TSC } }
+    };
+    env->tsc_valid = !runstate_is_running();

     ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_MSRS, &msr_data);
     if (ret < 0) {


This gives the compiler maximum opportunities to flag mistakes like
initializing the same thing twice, and makes it easier (read: no smart
optimizations needed) to initialize in one go. Moving the declaration past
the 'if' also addresses Philippe's concern.

>>
>>>      msr_data.info.nmsrs = 1;
>>>      msr_data.entries[0].index = MSR_IA32_TSC;
>>>      env->tsc_valid = !runstate_is_running();
>>> @@ -1706,6 +1707,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
>>>
>>>      if (has_xsave) {
>>>          env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
>>> +        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));
>>
>> OK
>>
>>>      }
>>>
>>>      max_nested_state_len = kvm_max_nested_state_length();
>>> @@ -3477,6 +3479,7 @@ static int kvm_put_debugregs(X86CPU *cpu)
>>>          return 0;
>>>      }
>>>
>>> +    memset(&dbgregs, 0, sizeof(dbgregs));
>>
>> OK
>>
>>>      for (i = 0; i < 4; i++) {
>>>          dbgregs.db[i] = env->dr[i];
>>>      }
>>
>> We could remove 'dbgregs.flags = 0;'
>>
>> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>


--
Cheers,
Christophe de Dinechin (IRC c3d)
Andrey Shinkevich July 31, 2019, 12:04 p.m. UTC | #11
On 31/07/2019 10:24, Christian Borntraeger wrote:
> 
> 
> On 30.07.19 21:20, Paolo Bonzini wrote:
>> On 30/07/19 18:01, Andrey Shinkevich wrote:
>>> Not the whole structure is initialized before passing it to the KVM.
>>> Reduce the number of Valgrind reports.
>>>
>>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>>
>> Christian, is this the right fix?  It's not expensive so it wouldn't be
>> an issue, just checking if there's any better alternative.
> 
> I think all of these variants are valid with pros and cons
> 1. teach valgrind about this:
> Add to coregrind/m_syswrap/syswrap-linux.c (and the relevant header files)
> knowledge about which parts are actually touched.
> 2. use designated initializers
> 3. use memset
> 3. use a valgrind callback VG_USERREQ__MAKE_MEM_DEFINED to tell that this memory is defined
> 

Thank you all very much for taking part in the discussion.
Also, one may use Valgrind's suppression mechanism to silence the unwanted
reports by adding a Valgrind-specific suppression file, valgrind.supp, to
the QEMU project. The file contents can be extended for future needs.
All the cases we would like to suppress are listed in that file.
A case looks like a stack fragment. For instance, from the QEMU block layer:

{
    hw/block/hd-geometry.c
    Memcheck:Cond
    fun:guess_disk_lchs
    fun:hd_geometry_guess
    fun:blkconf_geometry
    ...
    fun:device_set_realized
    fun:property_set_bool
    fun:object_property_set
    fun:object_property_set_qobject
    fun:object_property_set_bool
}

The number of suppressed cases is reported by Valgrind with every
run: "ERROR SUMMARY: 5 errors from 3 contexts (suppressed: 0 from 0)"

Andrey

>>
>> Paolo
>>
>>> ---
>>>   target/i386/kvm.c | 3 +++
>>>   1 file changed, 3 insertions(+)
>>>
>>> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
>>> index dbbb137..ed57e31 100644
>>> --- a/target/i386/kvm.c
>>> +++ b/target/i386/kvm.c
>>> @@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
>>>           return 0;
>>>       }
>>>   
>>> +    memset(&msr_data, 0, sizeof(msr_data));
>>>       msr_data.info.nmsrs = 1;
>>>       msr_data.entries[0].index = MSR_IA32_TSC;
>>>       env->tsc_valid = !runstate_is_running();
>>> @@ -1706,6 +1707,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
>>>   
>>>       if (has_xsave) {
>>>           env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
>>> +        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));
>>>       }
>>>   
>>>       max_nested_state_len = kvm_max_nested_state_length();
>>> @@ -3477,6 +3479,7 @@ static int kvm_put_debugregs(X86CPU *cpu)
>>>           return 0;
>>>       }
>>>   
>>> +    memset(&dbgregs, 0, sizeof(dbgregs));
>>>       for (i = 0; i < 4; i++) {
>>>           dbgregs.db[i] = env->dr[i];
>>>       }
>>> -- 
>>> 1.8.3.1
>>>
>>
>>
>
Christian Borntraeger July 31, 2019, 12:28 p.m. UTC | #12
On 31.07.19 14:04, Andrey Shinkevich wrote:
> On 31/07/2019 10:24, Christian Borntraeger wrote:
>>
>>
>> On 30.07.19 21:20, Paolo Bonzini wrote:
>>> On 30/07/19 18:01, Andrey Shinkevich wrote:
>>>> Not the whole structure is initialized before passing it to the KVM.
>>>> Reduce the number of Valgrind reports.
>>>>
>>>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>>>
>>> Christian, is this the right fix?  It's not expensive so it wouldn't be
>>> an issue, just checking if there's any better alternative.
>>
>> I think all of these variants are valid with pros and cons
>> 1. teach valgrind about this:
>> Add to coregrind/m_syswrap/syswrap-linux.c (and the relevant header files)
>> knowledge about which parts are actually touched.
>> 2. use designated initializers
>> 3. use memset
>> 3. use a valgrind callback VG_USERREQ__MAKE_MEM_DEFINED to tell that this memory is defined
>>
> 
> Thank you all very much for taking part in the discussion.
> Also, one may use the Valgrind technology to suppress the unwanted 
> reports by adding the Valgrind specific format file valgrind.supp to the 
> QEMU project. The file content is extendable for future needs.
> All the cases we like to suppress will be recounted in that file.
> A case looks like the stack fragments. For instance, from QEMU block:
> 
> {
>     hw/block/hd-geometry.c
>     Memcheck:Cond
>     fun:guess_disk_lchs
>     fun:hd_geometry_guess
>     fun:blkconf_geometry
>     ...
>     fun:device_set_realized
>     fun:property_set_bool
>     fun:object_property_set
>     fun:object_property_set_qobject
>     fun:object_property_set_bool
> }
> 
> The number of suppressed cases are reported by the Valgrind with every 
> run: "ERROR SUMMARY: 5 errors from 3 contexts (suppressed: 0 from 0)"
> 
> Andrey

Yes, indeed that would be another variant. How performance-critical are
the fixed locations? That might have an impact on what the best solution is.
From a cleanliness standpoint, doing 1 (adding the ioctl definition to valgrind)
is certainly the most beautiful way. I did that in the past; see for example

https://sourceware.org/git/?p=valgrind.git;a=commitdiff;h=c2baee9b7bf043702c130de0771a4df439fcf403
or
https://sourceware.org/git/?p=valgrind.git;a=commitdiff;h=00a31dd3d1e7101b331c2c83fca6c666ba35d910


> 
>>>
>>> Paolo
>>>
>>>> ---
>>>>   target/i386/kvm.c | 3 +++
>>>>   1 file changed, 3 insertions(+)
>>>>
>>>> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
>>>> index dbbb137..ed57e31 100644
>>>> --- a/target/i386/kvm.c
>>>> +++ b/target/i386/kvm.c
>>>> @@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
>>>>           return 0;
>>>>       }
>>>>   
>>>> +    memset(&msr_data, 0, sizeof(msr_data));
>>>>       msr_data.info.nmsrs = 1;
>>>>       msr_data.entries[0].index = MSR_IA32_TSC;
>>>>       env->tsc_valid = !runstate_is_running();
>>>> @@ -1706,6 +1707,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
>>>>   
>>>>       if (has_xsave) {
>>>>           env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
>>>> +        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));
>>>>       }
>>>>   
>>>>       max_nested_state_len = kvm_max_nested_state_length();
>>>> @@ -3477,6 +3479,7 @@ static int kvm_put_debugregs(X86CPU *cpu)
>>>>           return 0;
>>>>       }
>>>>   
>>>> +    memset(&dbgregs, 0, sizeof(dbgregs));
>>>>       for (i = 0; i < 4; i++) {
>>>>           dbgregs.db[i] = env->dr[i];
>>>>       }
>>>> -- 
>>>> 1.8.3.1
>>>>
>>>
>>>
>>
>
Paolo Bonzini July 31, 2019, 12:32 p.m. UTC | #13
On 31/07/19 11:05, Christophe de Dinechin wrote:
> 
> Christian Borntraeger writes:
> 
>> On 30.07.19 18:44, Philippe Mathieu-Daudé wrote:
>>> On 7/30/19 6:01 PM, Andrey Shinkevich wrote:
>>>> Not the whole structure is initialized before passing it to the KVM.
>>>> Reduce the number of Valgrind reports.
>>>>
>>>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>>>> ---
>>>>  target/i386/kvm.c | 3 +++
>>>>  1 file changed, 3 insertions(+)
>>>>
>>>> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
>>>> index dbbb137..ed57e31 100644
>>>> --- a/target/i386/kvm.c
>>>> +++ b/target/i386/kvm.c
>>>> @@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
>>>>          return 0;
>>>>      }
>>>>
>>>> +    memset(&msr_data, 0, sizeof(msr_data));
>>>
>>> I wonder the overhead of this one...
>>
>> Cant we use designated initializers like in
>>
>> commit bdfc8480c50a53d91aa9a513d23a84de0d5fbc86
>> Author:     Christian Borntraeger <borntraeger@de.ibm.com>
>> AuthorDate: Thu Oct 30 09:23:41 2014 +0100
>> Commit:     Paolo Bonzini <pbonzini@redhat.com>
>> CommitDate: Mon Dec 15 12:21:01 2014 +0100
>>
>>     valgrind/i386: avoid false positives on KVM_SET_XCRS ioctl
>>
>> and others?
>>
>> This should minimize the impact.
> 
> Oh, when you talked about using designated initializers, I thought you
> were talking about fully initializing the struct, like so:

Yeah, that would be good too.  For now I'm applying Andrey's series though.

Paolo

> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
> index dbbb13772a..3533870c43 100644
> --- a/target/i386/kvm.c
> +++ b/target/i386/kvm.c
> @@ -180,19 +180,20 @@ static int kvm_get_tsc(CPUState *cs)
>  {
>      X86CPU *cpu = X86_CPU(cs);
>      CPUX86State *env = &cpu->env;
> -    struct {
> -        struct kvm_msrs info;
> -        struct kvm_msr_entry entries[1];
> -    } msr_data;
>      int ret;
> 
>      if (env->tsc_valid) {
>          return 0;
>      }
> 
> -    msr_data.info.nmsrs = 1;
> -    msr_data.entries[0].index = MSR_IA32_TSC;
> -    env->tsc_valid = !runstate_is_running();
> +    struct {
> +        struct kvm_msrs info;
> +        struct kvm_msr_entry entries[1];
> +    } msr_data = {
> +        .info = { .nmsrs =  1 },
> +        .entries = { [0] = { .index = MSR_IA32_TSC } }
> +    };
> +     env->tsc_valid = !runstate_is_running();
> 
>      ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_MSRS, &msr_data);
>      if (ret < 0) {
> 
> 
> This gives the compiler maximum opportunities to flag mistakes like
> initializing the same thing twice, and make it easier (read no smart
> optimizations) to initialize in one go. Moving the declaration past the
> 'if' also addresses Philippe's concern.
> 
>>>
>>>>      msr_data.info.nmsrs = 1;
>>>>      msr_data.entries[0].index = MSR_IA32_TSC;
>>>>      env->tsc_valid = !runstate_is_running();
>>>> @@ -1706,6 +1707,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
>>>>
>>>>      if (has_xsave) {
>>>>          env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
>>>> +        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));
>>>
>>> OK
>>>
>>>>      }
>>>>
>>>>      max_nested_state_len = kvm_max_nested_state_length();
>>>> @@ -3477,6 +3479,7 @@ static int kvm_put_debugregs(X86CPU *cpu)
>>>>          return 0;
>>>>      }
>>>>
>>>> +    memset(&dbgregs, 0, sizeof(dbgregs));
>>>
>>> OK
>>>
>>>>      for (i = 0; i < 4; i++) {
>>>>          dbgregs.db[i] = env->dr[i];
>>>>      }
>>>
>>> We could remove 'dbgregs.flags = 0;'
>>>
>>> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>>
> 
> 
> --
> Cheers,
> Christophe de Dinechin (IRC c3d)
>
Christian Borntraeger July 31, 2019, 12:43 p.m. UTC | #14
On 31.07.19 14:28, Christian Borntraeger wrote:
> 
> 
> On 31.07.19 14:04, Andrey Shinkevich wrote:
>> On 31/07/2019 10:24, Christian Borntraeger wrote:
>>>
>>>
>>> On 30.07.19 21:20, Paolo Bonzini wrote:
>>>> On 30/07/19 18:01, Andrey Shinkevich wrote:
>>>>> Not the whole structure is initialized before passing it to the KVM.
>>>>> Reduce the number of Valgrind reports.
>>>>>
>>>>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>>>>
>>>> Christian, is this the right fix?  It's not expensive so it wouldn't be
>>>> an issue, just checking if there's any better alternative.
>>>
>>> I think all of these variants are valid with pros and cons
>>> 1. teach valgrind about this:
>>> Add to coregrind/m_syswrap/syswrap-linux.c (and the relevant header files)
>>> knowledge about which parts are actually touched.
>>> 2. use designated initializers
>>> 3. use memset
>>> 3. use a valgrind callback VG_USERREQ__MAKE_MEM_DEFINED to tell that this memory is defined
>>>
>>
>> Thank you all very much for taking part in the discussion.
>> Also, one may use the Valgrind technology to suppress the unwanted 
>> reports by adding the Valgrind specific format file valgrind.supp to the 
>> QEMU project. The file content is extendable for future needs.
>> All the cases we like to suppress will be recounted in that file.
>> A case looks like the stack fragments. For instance, from QEMU block:
>>
>> {
>>     hw/block/hd-geometry.c
>>     Memcheck:Cond
>>     fun:guess_disk_lchs
>>     fun:hd_geometry_guess
>>     fun:blkconf_geometry
>>     ...
>>     fun:device_set_realized
>>     fun:property_set_bool
>>     fun:object_property_set
>>     fun:object_property_set_qobject
>>     fun:object_property_set_bool
>> }
>>
>> The number of suppressed cases are reported by the Valgrind with every 
>> run: "ERROR SUMMARY: 5 errors from 3 contexts (suppressed: 0 from 0)"
>>
>> Andrey
> 
> Yes, indeed that would be another variant. How performance critical are
> the fixed locations? That might have an impact on what is the best solution.
> From a cleanliness approach doing 1 (adding the ioctl definition to valgrind)
> is certainly the most beautiful way. I did that in the past, look for example at
> 
> https://sourceware.org/git/?p=valgrind.git;a=commitdiff;h=c2baee9b7bf043702c130de0771a4df439fcf403
> or 
> https://sourceware.org/git/?p=valgrind.git;a=commitdiff;h=00a31dd3d1e7101b331c2c83fca6c666ba35d910
> 
> for examples. 
> 
> 
>>
>>>>
>>>> Paolo
>>>>
>>>>> ---
>>>>>   target/i386/kvm.c | 3 +++
>>>>>   1 file changed, 3 insertions(+)
>>>>>
>>>>> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
>>>>> index dbbb137..ed57e31 100644
>>>>> --- a/target/i386/kvm.c
>>>>> +++ b/target/i386/kvm.c
>>>>> @@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
>>>>>           return 0;
>>>>>       }
>>>>>   
>>>>> +    memset(&msr_data, 0, sizeof(msr_data));
>>>>>       msr_data.info.nmsrs = 1;
>>>>>       msr_data.entries[0].index = MSR_IA32_TSC;
>>>>>       env->tsc_valid = !runstate_is_running();
>>>>> @@ -1706,6 +1707,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
>>>>>   
>>>>>       if (has_xsave) {
>>>>>           env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
>>>>> +        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));

This is memsetting 4k?
Yet another variant would be to use the RUNNING_ON_VALGRIND macro from
valgrind/valgrind.h to do the memset only when running under Valgrind. But
just using MAKE_MEM_DEFINED from memcheck.h is simpler.
Paolo Bonzini July 31, 2019, 1:03 p.m. UTC | #15
On 31/07/19 14:43, Christian Borntraeger wrote:
>>>>>>       if (has_xsave) {
>>>>>>           env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
>>>>>> +        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));
> This is memsetting 4k? 
> Yet another variant would be to use the RUNNING_ON_VALGRIND macro from
> valgrind/valgrind.h to only memset for valgrind. But just using MAKE_MEM_DEFINED
> from memcheck.h is simpler. 
> 

Yes, it's 4k but only at initialization time and I actually prefer not
to have potentially uninitialized host data in there.

Paolo
Andrey Shinkevich July 31, 2019, 2:10 p.m. UTC | #16
On 31/07/2019 15:32, Paolo Bonzini wrote:
> On 31/07/19 11:05, Christophe de Dinechin wrote:
>>
>> Christian Borntraeger writes:
>>
>>> On 30.07.19 18:44, Philippe Mathieu-Daudé wrote:
>>>> On 7/30/19 6:01 PM, Andrey Shinkevich wrote:
>>>>> Not the whole structure is initialized before passing it to the KVM.
>>>>> Reduce the number of Valgrind reports.
>>>>>
>>>>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>>>>> ---
>>>>>   target/i386/kvm.c | 3 +++
>>>>>   1 file changed, 3 insertions(+)
>>>>>
>>>>> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
>>>>> index dbbb137..ed57e31 100644
>>>>> --- a/target/i386/kvm.c
>>>>> +++ b/target/i386/kvm.c
>>>>> @@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
>>>>>           return 0;
>>>>>       }
>>>>>
>>>>> +    memset(&msr_data, 0, sizeof(msr_data));
>>>>
>>>> I wonder the overhead of this one...
>>>
>>> Cant we use designated initializers like in
>>>
>>> commit bdfc8480c50a53d91aa9a513d23a84de0d5fbc86
>>> Author:     Christian Borntraeger <borntraeger@de.ibm.com>
>>> AuthorDate: Thu Oct 30 09:23:41 2014 +0100
>>> Commit:     Paolo Bonzini <pbonzini@redhat.com>
>>> CommitDate: Mon Dec 15 12:21:01 2014 +0100
>>>
>>>      valgrind/i386: avoid false positives on KVM_SET_XCRS ioctl
>>>
>>> and others?
>>>
>>> This should minimize the impact.
>>
>> Oh, when you talked about using designated initializers, I thought you
>> were talking about fully initializing the struct, like so:
> 
> Yeah, that would be good too.  For now I'm applying Andrey's series though.
> 
> Paolo
> 

Thank you.
As Philippe wrote, 'dbgregs.flags = 0;' is unnecessary with 'memset(0)'.

Andrey

>> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
>> index dbbb13772a..3533870c43 100644
>> --- a/target/i386/kvm.c
>> +++ b/target/i386/kvm.c
>> @@ -180,19 +180,20 @@ static int kvm_get_tsc(CPUState *cs)
>>   {
>>       X86CPU *cpu = X86_CPU(cs);
>>       CPUX86State *env = &cpu->env;
>> -    struct {
>> -        struct kvm_msrs info;
>> -        struct kvm_msr_entry entries[1];
>> -    } msr_data;
>>       int ret;
>>
>>       if (env->tsc_valid) {
>>           return 0;
>>       }
>>
>> -    msr_data.info.nmsrs = 1;
>> -    msr_data.entries[0].index = MSR_IA32_TSC;
>> -    env->tsc_valid = !runstate_is_running();
>> +    struct {
>> +        struct kvm_msrs info;
>> +        struct kvm_msr_entry entries[1];
>> +    } msr_data = {
>> +        .info = { .nmsrs = 1 },
>> +        .entries = { [0] = { .index = MSR_IA32_TSC } }
>> +    };
>> +    env->tsc_valid = !runstate_is_running();
>>
>>       ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_MSRS, &msr_data);
>>       if (ret < 0) {
>>
>>
>> This gives the compiler maximum opportunities to flag mistakes like
>> initializing the same thing twice, and make it easier (read no smart
>> optimizations) to initialize in one go. Moving the declaration past the
>> 'if' also addresses Philippe's concern.
>>
>>>>
>>>>>       msr_data.info.nmsrs = 1;
>>>>>       msr_data.entries[0].index = MSR_IA32_TSC;
>>>>>       env->tsc_valid = !runstate_is_running();
>>>>> @@ -1706,6 +1707,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
>>>>>
>>>>>       if (has_xsave) {
>>>>>           env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
>>>>> +        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));
>>>>
>>>> OK
>>>>
>>>>>       }
>>>>>
>>>>>       max_nested_state_len = kvm_max_nested_state_length();
>>>>> @@ -3477,6 +3479,7 @@ static int kvm_put_debugregs(X86CPU *cpu)
>>>>>           return 0;
>>>>>       }
>>>>>
>>>>> +    memset(&dbgregs, 0, sizeof(dbgregs));
>>>>
>>>> OK
>>>>
>>>>>       for (i = 0; i < 4; i++) {
>>>>>           dbgregs.db[i] = env->dr[i];
>>>>>       }
>>>>
>>>> We could remove 'dbgregs.flags = 0;'
>>>>
>>>> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>>>
>>
>>
>> --
>> Cheers,
>> Christophe de Dinechin (IRC c3d)
>>
>
Andrey Shinkevich July 31, 2019, 2:11 p.m. UTC | #17
On 31/07/2019 15:43, Christian Borntraeger wrote:
> 
> 
> On 31.07.19 14:28, Christian Borntraeger wrote:
>>
>>
>> On 31.07.19 14:04, Andrey Shinkevich wrote:
>>> On 31/07/2019 10:24, Christian Borntraeger wrote:
>>>>
>>>>
>>>> On 30.07.19 21:20, Paolo Bonzini wrote:
>>>>> On 30/07/19 18:01, Andrey Shinkevich wrote:
>>>>>> Not the whole structure is initialized before passing it to the KVM.
>>>>>> Reduce the number of Valgrind reports.
>>>>>>
>>>>>> Signed-off-by: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
>>>>>
>>>>> Christian, is this the right fix?  It's not expensive so it wouldn't be
>>>>> an issue, just checking if there's any better alternative.
>>>>
>>>> I think all of these variants are valid with pros and cons
>>>> 1. teach valgrind about this:
>>>> Add to coregrind/m_syswrap/syswrap-linux.c (and the relevant header files)
>>>> knowledge about which parts are actually touched.
>>>> 2. use designated initializers
>>>> 3. use memset
>>>> 4. use a valgrind callback VG_USERREQ__MAKE_MEM_DEFINED to tell valgrind that this memory is defined
>>>>
>>>
>>> Thank you all very much for taking part in the discussion.
>>> Also, one may use Valgrind's suppression mechanism to silence the unwanted
>>> reports by adding a suppression file, valgrind.supp, in Valgrind's specific
>>> format to the QEMU project. The file can be extended for future needs;
>>> all the cases we would like to suppress are listed in it.
>>> Each case is a stack fragment. For instance, from the QEMU block layer:
>>>
>>> {
>>>      hw/block/hd-geometry.c
>>>      Memcheck:Cond
>>>      fun:guess_disk_lchs
>>>      fun:hd_geometry_guess
>>>      fun:blkconf_geometry
>>>      ...
>>>      fun:device_set_realized
>>>      fun:property_set_bool
>>>      fun:object_property_set
>>>      fun:object_property_set_qobject
>>>      fun:object_property_set_bool
>>> }
>>>
>>> The number of suppressed cases is reported by Valgrind on every
>>> run: "ERROR SUMMARY: 5 errors from 3 contexts (suppressed: 0 from 0)"
>>>
>>> Andrey
>>
>> Yes, indeed that would be another variant. How performance critical are
>> the fixed locations? That might have an impact on what is the best solution.
>> From a cleanliness standpoint, doing 1 (adding the ioctl definition to valgrind)
>> is certainly the most beautiful way. I did that in the past, look for example at
>>
>> https://sourceware.org/git/?p=valgrind.git;a=commitdiff;h=c2baee9b7bf043702c130de0771a4df439fcf403
>> or
>> https://sourceware.org/git/?p=valgrind.git;a=commitdiff;h=00a31dd3d1e7101b331c2c83fca6c666ba35d910
>>
>> for examples.
>>
>>
>>>
>>>>>
>>>>> Paolo
>>>>>
>>>>>> ---
>>>>>>    target/i386/kvm.c | 3 +++
>>>>>>    1 file changed, 3 insertions(+)
>>>>>>
>>>>>> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
>>>>>> index dbbb137..ed57e31 100644
>>>>>> --- a/target/i386/kvm.c
>>>>>> +++ b/target/i386/kvm.c
>>>>>> @@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
>>>>>>            return 0;
>>>>>>        }
>>>>>>    
>>>>>> +    memset(&msr_data, 0, sizeof(msr_data));
>>>>>>        msr_data.info.nmsrs = 1;
>>>>>>        msr_data.entries[0].index = MSR_IA32_TSC;
>>>>>>        env->tsc_valid = !runstate_is_running();
>>>>>> @@ -1706,6 +1707,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
>>>>>>    
>>>>>>        if (has_xsave) {
>>>>>>            env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
>>>>>> +        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));
> 
> This is memsetting 4k?
> Yet another variant would be to use the RUNNING_ON_VALGRIND macro from
> valgrind/valgrind.h to only memset for valgrind. But just using MAKE_MEM_DEFINED
> from memcheck.h is simpler.
> 

So, on this assumption, the code would look like

#ifdef CONFIG_VALGRIND_H
#include <valgrind/memcheck.h>
#endif

#ifdef CONFIG_VALGRIND_H
     VALGRIND_MAKE_MEM_DEFINED(&msr_data, sizeof(msr_data));
#endif

etc.

Andrey

Patch

diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index dbbb137..ed57e31 100644
--- a/target/i386/kvm.c
+++ b/target/i386/kvm.c
@@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
         return 0;
     }
 
+    memset(&msr_data, 0, sizeof(msr_data));
     msr_data.info.nmsrs = 1;
     msr_data.entries[0].index = MSR_IA32_TSC;
     env->tsc_valid = !runstate_is_running();
@@ -1706,6 +1707,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
 
     if (has_xsave) {
         env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
+        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));
     }
 
     max_nested_state_len = kvm_max_nested_state_length();
@@ -3477,6 +3479,7 @@ static int kvm_put_debugregs(X86CPU *cpu)
         return 0;
     }
 
+    memset(&dbgregs, 0, sizeof(dbgregs));
     for (i = 0; i < 4; i++) {
         dbgregs.db[i] = env->dr[i];
     }