| Message ID | 20220221162243.683208-9-pbonzini@redhat.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | KVM MMU refactoring part 2: role changes |
On Mon, Feb 21, 2022, Paolo Bonzini wrote:
> Snapshot the state of the processor registers that govern page walk into
> a new field of struct kvm_mmu.  This is a more natural representation
> than having it *mostly* in mmu_role but not exclusively; the delta
> right now is represented in other fields, such as root_level.
>
> The nested MMU now has only the CPU mode; and in fact the new function
> kvm_calc_cpu_mode is analogous to the previous kvm_calc_nested_mmu_role,
> except that it has role.base.direct equal to CR0.PG.  It is not clear
> what the code meant by "setting role.base.direct to true to detect bogus
> usage of the nested MMU".

The idea was to trigger fireworks due to an incoherent state (e.g. direct
mmu_role with non-direct hooks) if the nested_mmu was ever used as a "real"
MMU (handling faults, installing SPs/SPTEs, etc...).  For a walk-only MMU,
"direct" has no meaning and so rather than arbitrarily leave it '0', I
arbitrarily set it '1'.

Maybe this?

  The nested MMU now has only the CPU mode; and in fact the new function
  kvm_calc_cpu_mode is analogous to the previous kvm_calc_nested_mmu_role,
  except that it has role.base.direct equal to CR0.PG.  Having "direct"
  track CR0.PG has the serendipitous side effect of being an even better
  sentinel than arbitrarily setting direct to true for the nested MMU, as
  KVM will run afoul of sanity checks for both direct and indirect MMUs if
  KVM attempts to use the nested MMU as a "real" MMU, e.g. for page faults.

> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h |   1 +
>  arch/x86/kvm/mmu/mmu.c          | 107 ++++++++++++++++++++------------
>  arch/x86/kvm/mmu/paging_tmpl.h  |   2 +-
>  3 files changed, 68 insertions(+), 42 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 92855d3984a7..cc268116eb3f 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -433,6 +433,7 @@ struct kvm_mmu {
>                                   struct kvm_mmu_page *sp);
>         void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
>         struct kvm_mmu_root_info root;
> +       union kvm_mmu_role cpu_mode;
>         union kvm_mmu_role mmu_role;
>         u8 root_level;
>         u8 shadow_root_level;
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 7c835253a330..1af898f0cf87 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -221,7 +221,7 @@ BUILD_MMU_ROLE_REGS_ACCESSOR(efer, lma, EFER_LMA);
>  #define BUILD_MMU_ROLE_ACCESSOR(base_or_ext, reg, name)                \
>  static inline bool __maybe_unused is_##reg##_##name(struct kvm_mmu *mmu)      \
>  {                                                                      \
> -       return !!(mmu->mmu_role. base_or_ext . reg##_##name);           \
> +       return !!(mmu->cpu_mode. base_or_ext . reg##_##name);           \
>  }
>  BUILD_MMU_ROLE_ACCESSOR(ext,  cr0, pg);
>  BUILD_MMU_ROLE_ACCESSOR(base, cr0, wp);
> @@ -4680,6 +4680,39 @@ static void paging32_init_context(struct kvm_mmu *context)
>         context->direct_map = false;
>  }
>
> +static union kvm_mmu_role
> +kvm_calc_cpu_mode(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs)

I strongly prefer we avoid putting the return type on a different line unless
absolutely "necessary".

        static union kvm_mmu_role kvm_calc_cpu_mode(struct kvm_vcpu *vcpu,
                                                    const struct kvm_mmu_role_regs *regs)

> +{
> +       union kvm_mmu_role role = {0};
> +
> +       role.base.access = ACC_ALL;
> +       role.base.smm = is_smm(vcpu);
> +       role.base.guest_mode = is_guest_mode(vcpu);
> +       role.base.direct = !____is_cr0_pg(regs);
> +       if (!role.base.direct) {

Can we check ____is_cr0_pg() instead of "direct"?  IMO that's more intuitive
for understanding why the bits below are left zero.  I was scratching my head
trying to figure out whether or not this was safe/correct for direct MMUs...

And this indentation is quite nasty, and will only get worse.  An early return
or a goto would solve that nicely.  I think I have a slight preference for an
early return?

        role.ext.valid = 1;

        if (!____is_cr0_pg(regs))
                return role;

        role.base.efer_nx = ____is_efer_nx(regs);
        role.base.cr0_wp = ____is_cr0_wp(regs);
        role.base.smep_andnot_wp = ____is_cr4_smep(regs) && !____is_cr0_wp(regs);
        role.base.smap_andnot_wp = ____is_cr4_smap(regs) && !____is_cr0_wp(regs);
        role.base.has_4_byte_gpte = !____is_cr4_pae(regs);
        role.base.level = role_regs_to_root_level(regs);

        role.ext.cr0_pg = 1;
        role.ext.cr4_pae = ____is_cr4_pae(regs);
        role.ext.cr4_smep = ____is_cr4_smep(regs);
        role.ext.cr4_smap = ____is_cr4_smap(regs);
        role.ext.cr4_pse = ____is_cr4_pse(regs);

        /* PKEY and LA57 are active iff long mode is active. */
        role.ext.cr4_pke = ____is_efer_lma(regs) && ____is_cr4_pke(regs);
        role.ext.cr4_la57 = ____is_efer_lma(regs) && ____is_cr4_la57(regs);
        role.ext.efer_lma = ____is_efer_lma(regs);

        return role;
On 3/8/22 18:36, Sean Christopherson wrote:
> The idea was to trigger fireworks due to an incoherent state (e.g. direct
> mmu_role with non-direct hooks) if the nested_mmu was ever used as a "real"
> MMU (handling faults, installing SPs/SPTEs, etc...).  For a walk-only MMU,
> "direct" has no meaning and so rather than arbitrarily leave it '0', I
> arbitrarily set it '1'.
>
> Maybe this?
>
>   The nested MMU now has only the CPU mode; and in fact the new function
>   kvm_calc_cpu_mode is analogous to the previous kvm_calc_nested_mmu_role,
>   except that it has role.base.direct equal to CR0.PG.  Having "direct"
>   track CR0.PG has the serendipitous side effect of being an even better
>   sentinel than arbitrarily setting direct to true for the nested MMU, as
>   KVM will run afoul of sanity checks for both direct and indirect MMUs if
>   KVM attempts to use the nested MMU as a "real" MMU, e.g. for page faults.

Hmm, actually it is set to CR0.PG *negated*, so that future patches can get
rid of role.ext.cr0_pg.  But really, anybody trying to use nested_mmu for real
would in all likelihood get NULL pointer dereferences left and right.  This
will be even clearer by the end of the series, when the function pointers are
initialized at vCPU creation time.

>>
>> +       role.base.direct = !____is_cr0_pg(regs);
>> +       if (!role.base.direct) {
>
> Can we check ____is_cr0_pg() instead of "direct"?  IMO that's more intuitive
> for understanding why the bits below are left zero.  I was scratching my head
> trying to figure out whether or not this was safe/correct for direct MMUs...

Yes, that's good.

Paolo
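The redundancy Paolo points out can be illustrated with a small sketch (not
code from this series; the helper name below is made up for illustration):
once the CPU mode sets role.base.direct = !CR0.PG, a paging check can be
derived from the base role alone, which is what would let a later patch drop
role.ext.cr0_pg.

        /*
         * Illustration only, not part of the series: with
         * role.base.direct == !CR0.PG in the CPU mode, ext.cr0_pg carries
         * no extra information and can be derived from the base role.
         */
        static inline bool cpu_mode_has_paging(union kvm_mmu_role cpu_mode)
        {
                return !cpu_mode.base.direct;   /* same as cpu_mode.ext.cr0_pg */
        }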
On Mon, Feb 21, 2022, Paolo Bonzini wrote:
> @@ -4800,13 +4836,15 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu,
>  }
>
>  static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
> -                                   const struct kvm_mmu_role_regs *regs,
> -                                   union kvm_mmu_role new_role)
> +                                   union kvm_mmu_role cpu_mode,

Can you give all helpers this treatment (rename "role" => "cpu_mode")?  I got
tripped up a few times reading patches because the ones where it wasn't
necessary, i.e. where there's only a single kvm_mmu_role parameter, were left
as-is.

I think kvm_calc_shadow_npt_root_page_role() and kvm_calc_shadow_mmu_root_page_role()
are the only offenders.

> +                                   union kvm_mmu_role mmu_role)
>  {
> -       if (new_role.as_u64 == context->mmu_role.as_u64)
> +       if (cpu_mode.as_u64 == context->cpu_mode.as_u64 &&
> +           mmu_role.as_u64 == context->mmu_role.as_u64)
>                 return;
>
> -       context->mmu_role.as_u64 = new_role.as_u64;
> +       context->cpu_mode.as_u64 = cpu_mode.as_u64;
> +       context->mmu_role.as_u64 = mmu_role.as_u64;
>
>         if (!is_cr0_pg(context))
>                 nonpaging_init_context(context);
On 3/8/22 19:55, Sean Christopherson wrote:
>>  static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
>> -                                   const struct kvm_mmu_role_regs *regs,
>> -                                   union kvm_mmu_role new_role)
>> +                                   union kvm_mmu_role cpu_mode,
> Can you give all helpers this treatment (rename "role" => "cpu_mode")?  I got
> tripped up a few times reading patches because the ones where it wasn't
> necessary, i.e. where there's only a single kvm_mmu_role parameter, were left
> as-is.
>
> I think kvm_calc_shadow_npt_root_page_role() and kvm_calc_shadow_mmu_root_page_role()
> are the only offenders.

These take struct kvm_mmu_role_regs; they *return* union kvm_mmu_role, but
that is changed later in the series to the base part only.

Paolo
On Wed, Mar 09, 2022, Paolo Bonzini wrote:
> On 3/8/22 19:55, Sean Christopherson wrote:
> > > static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
> > > -                                   const struct kvm_mmu_role_regs *regs,
> > > -                                   union kvm_mmu_role new_role)
> > > +                                   union kvm_mmu_role cpu_mode,
> > Can you give all helpers this treatment (rename "role" => "cpu_mode")?  I got
> > tripped up a few times reading patches because the ones where it wasn't
> > necessary, i.e. where there's only a single kvm_mmu_role parameter, were left
> > as-is.
> >
> > I think kvm_calc_shadow_npt_root_page_role() and kvm_calc_shadow_mmu_root_page_role()
> > are the only offenders.
>
> These take struct kvm_mmu_role_regs; they *return* union kvm_mmu_role but
> that is changed later in the series to the base part only.

Doh, sorry, a later patch is what confused me.  Lost track of things when
moving around in the series.

Patch 12, "KVM: x86/mmu: cleanup computation of MMU roles for shadow paging",
is where we end up with

        static union kvm_mmu_role
        kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu,
                                           union kvm_mmu_role role)
        {
                if (!role.ext.efer_lma)
                        role.base.level = PT32E_ROOT_LEVEL;
                else if (role.ext.cr4_la57)
                        role.base.level = PT64_ROOT_5LEVEL;
                else
                        role.base.level = PT64_ROOT_4LEVEL;
                return role;
        }

Can we instead tweak that patch to make it and kvm_calc_shadow_npt_root_page_role() be

        static union kvm_mmu_role
        kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu,
                                           union kvm_mmu_role cpu_role)
        {
                union kvm_mmu_role root_role = cpu_role;

                if (!cpu_role.ext.efer_lma)
                        root_role.base.level = PT32E_ROOT_LEVEL;
                else if (cpu_role.ext.cr4_la57)
                        root_role.base.level = PT64_ROOT_5LEVEL;
                else
                        root_role.base.level = PT64_ROOT_4LEVEL;

                return root_role;
        }

This is effectively the same feedback I gave in Patch 15[*], I messed up when
trying to track back where the "union kvm_mmu_role role" param was introduced.

[*] https://lore.kernel.org/all/YieoXYBFyo9pZhhX@google.com
On 3/9/22 16:38, Sean Christopherson wrote:
> Can we instead tweak that patch to make it and kvm_calc_shadow_npt_root_page_role() be
>
>         static union kvm_mmu_role
>         kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu,
>                                            union kvm_mmu_role cpu_role)
>         {
>                 union kvm_mmu_role root_role = cpu_role;
>
>                 if (!cpu_role.ext.efer_lma)
>                         root_role.base.level = PT32E_ROOT_LEVEL;
>                 else if (cpu_role.ext.cr4_la57)
>                         root_role.base.level = PT64_ROOT_5LEVEL;
>                 else
>                         root_role.base.level = PT64_ROOT_4LEVEL;
>
>                 return root_role;
>         }

Yep, figured the same in the meanwhile.

Paolo
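Put together with the patch below, the agreed shape means a shadow MMU caller
computes the CPU role once and derives the root role from it.  A minimal
sketch of that call-site shape, using the names from the discussion (the
function below mirrors kvm_init_shadow_mmu but is not lifted verbatim from a
later patch in the series):

        /* Sketch only: compute the CPU mode once, derive the root role from it. */
        static void init_shadow_mmu_sketch(struct kvm_vcpu *vcpu,
                                           const struct kvm_mmu_role_regs *regs)
        {
                struct kvm_mmu *context = &vcpu->arch.root_mmu;
                union kvm_mmu_role cpu_role = kvm_calc_cpu_mode(vcpu, regs);
                union kvm_mmu_role root_role = kvm_calc_shadow_mmu_root_page_role(vcpu, cpu_role);

                shadow_mmu_init_context(vcpu, context, cpu_role, root_role);
        }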
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 92855d3984a7..cc268116eb3f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -433,6 +433,7 @@ struct kvm_mmu {
                                  struct kvm_mmu_page *sp);
        void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
        struct kvm_mmu_root_info root;
+       union kvm_mmu_role cpu_mode;
        union kvm_mmu_role mmu_role;
        u8 root_level;
        u8 shadow_root_level;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7c835253a330..1af898f0cf87 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -221,7 +221,7 @@ BUILD_MMU_ROLE_REGS_ACCESSOR(efer, lma, EFER_LMA);
 #define BUILD_MMU_ROLE_ACCESSOR(base_or_ext, reg, name)                \
 static inline bool __maybe_unused is_##reg##_##name(struct kvm_mmu *mmu)      \
 {                                                                      \
-       return !!(mmu->mmu_role. base_or_ext . reg##_##name);           \
+       return !!(mmu->cpu_mode. base_or_ext . reg##_##name);           \
 }
 BUILD_MMU_ROLE_ACCESSOR(ext,  cr0, pg);
 BUILD_MMU_ROLE_ACCESSOR(base, cr0, wp);
@@ -4680,6 +4680,39 @@ static void paging32_init_context(struct kvm_mmu *context)
        context->direct_map = false;
 }
 
+static union kvm_mmu_role
+kvm_calc_cpu_mode(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs)
+{
+       union kvm_mmu_role role = {0};
+
+       role.base.access = ACC_ALL;
+       role.base.smm = is_smm(vcpu);
+       role.base.guest_mode = is_guest_mode(vcpu);
+       role.base.direct = !____is_cr0_pg(regs);
+       if (!role.base.direct) {
+               role.base.efer_nx = ____is_efer_nx(regs);
+               role.base.cr0_wp = ____is_cr0_wp(regs);
+               role.base.smep_andnot_wp = ____is_cr4_smep(regs) && !____is_cr0_wp(regs);
+               role.base.smap_andnot_wp = ____is_cr4_smap(regs) && !____is_cr0_wp(regs);
+               role.base.has_4_byte_gpte = !____is_cr4_pae(regs);
+               role.base.level = role_regs_to_root_level(regs);
+
+               role.ext.cr0_pg = 1;
+               role.ext.cr4_pae = ____is_cr4_pae(regs);
+               role.ext.cr4_smep = ____is_cr4_smep(regs);
+               role.ext.cr4_smap = ____is_cr4_smap(regs);
+               role.ext.cr4_pse = ____is_cr4_pse(regs);
+
+               /* PKEY and LA57 are active iff long mode is active. */
+               role.ext.cr4_pke = ____is_efer_lma(regs) && ____is_cr4_pke(regs);
+               role.ext.cr4_la57 = ____is_efer_lma(regs) && ____is_cr4_la57(regs);
+               role.ext.efer_lma = ____is_efer_lma(regs);
+       }
+
+       role.ext.valid = 1;
+       return role;
+}
+
 static union kvm_mmu_role kvm_calc_mmu_role_common(struct kvm_vcpu *vcpu,
                                                    const struct kvm_mmu_role_regs *regs)
 {
@@ -4739,13 +4772,16 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
                             const struct kvm_mmu_role_regs *regs)
 {
        struct kvm_mmu *context = &vcpu->arch.root_mmu;
-       union kvm_mmu_role new_role =
+       union kvm_mmu_role cpu_mode = kvm_calc_cpu_mode(vcpu, regs);
+       union kvm_mmu_role mmu_role =
                kvm_calc_tdp_mmu_root_page_role(vcpu, regs);
 
-       if (new_role.as_u64 == context->mmu_role.as_u64)
+       if (cpu_mode.as_u64 == context->cpu_mode.as_u64 &&
+           mmu_role.as_u64 == context->mmu_role.as_u64)
                return;
 
-       context->mmu_role.as_u64 = new_role.as_u64;
+       context->cpu_mode.as_u64 = cpu_mode.as_u64;
+       context->mmu_role.as_u64 = mmu_role.as_u64;
        context->page_fault = kvm_tdp_page_fault;
        context->sync_page = nonpaging_sync_page;
        context->invlpg = NULL;
@@ -4800,13 +4836,15 @@ kvm_calc_shadow_mmu_root_page_role(struct kvm_vcpu *vcpu,
 }
 
 static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
-                                   const struct kvm_mmu_role_regs *regs,
-                                   union kvm_mmu_role new_role)
+                                   union kvm_mmu_role cpu_mode,
+                                   union kvm_mmu_role mmu_role)
 {
-       if (new_role.as_u64 == context->mmu_role.as_u64)
+       if (cpu_mode.as_u64 == context->cpu_mode.as_u64 &&
+           mmu_role.as_u64 == context->mmu_role.as_u64)
                return;
 
-       context->mmu_role.as_u64 = new_role.as_u64;
+       context->cpu_mode.as_u64 = cpu_mode.as_u64;
+       context->mmu_role.as_u64 = mmu_role.as_u64;
 
        if (!is_cr0_pg(context))
                nonpaging_init_context(context);
@@ -4814,10 +4852,10 @@ static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, struct kvm_mmu *conte
                paging64_init_context(context);
        else
                paging32_init_context(context);
-       context->root_level = role_regs_to_root_level(regs);
+       context->root_level = cpu_mode.base.level;
 
        reset_guest_paging_metadata(vcpu, context);
-       context->shadow_root_level = new_role.base.level;
+       context->shadow_root_level = mmu_role.base.level;
 
        reset_shadow_zero_bits_mask(vcpu, context);
 }
@@ -4826,10 +4864,11 @@ static void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu,
                                const struct kvm_mmu_role_regs *regs)
 {
        struct kvm_mmu *context = &vcpu->arch.root_mmu;
-       union kvm_mmu_role new_role =
+       union kvm_mmu_role cpu_mode = kvm_calc_cpu_mode(vcpu, regs);
+       union kvm_mmu_role mmu_role =
                kvm_calc_shadow_mmu_root_page_role(vcpu, regs);
 
-       shadow_mmu_init_context(vcpu, context, regs, new_role);
+       shadow_mmu_init_context(vcpu, context, cpu_mode, mmu_role);
 }
 
 static union kvm_mmu_role
@@ -4854,11 +4893,10 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
                .cr4 = cr4 & ~X86_CR4_PKE,
                .efer = efer,
        };
-       union kvm_mmu_role new_role;
+       union kvm_mmu_role cpu_mode = kvm_calc_cpu_mode(vcpu, &regs);
+       union kvm_mmu_role mmu_role = kvm_calc_shadow_npt_root_page_role(vcpu, &regs);;
 
-       new_role = kvm_calc_shadow_npt_root_page_role(vcpu, &regs);
-
-       shadow_mmu_init_context(vcpu, context, &regs, new_role);
+       shadow_mmu_init_context(vcpu, context, cpu_mode, mmu_role);
        kvm_mmu_new_pgd(vcpu, nested_cr3);
 }
 EXPORT_SYMBOL_GPL(kvm_init_shadow_npt_mmu);
@@ -4881,7 +4919,6 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
        role.base.guest_mode = true;
        role.base.access = ACC_ALL;
 
-       /* EPT, and thus nested EPT, does not consume CR0, CR4, nor EFER. */
        role.ext.word = 0;
        role.ext.execonly = execonly;
        role.ext.valid = 1;
@@ -4895,12 +4932,14 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 {
        struct kvm_mmu *context = &vcpu->arch.guest_mmu;
        u8 level = vmx_eptp_page_walk_level(new_eptp);
-       union kvm_mmu_role new_role =
+       union kvm_mmu_role new_mode =
                kvm_calc_shadow_ept_root_page_role(vcpu, accessed_dirty,
                                                   execonly, level);
 
-       if (new_role.as_u64 != context->mmu_role.as_u64) {
-               context->mmu_role.as_u64 = new_role.as_u64;
+       if (new_mode.as_u64 != context->cpu_mode.as_u64) {
+               /* EPT, and thus nested EPT, does not consume CR0, CR4, nor EFER. */
+               context->cpu_mode.as_u64 = new_mode.as_u64;
+               context->mmu_role.as_u64 = new_mode.as_u64;
 
                context->shadow_root_level = level;
 
@@ -4933,36 +4972,19 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu,
        context->inject_page_fault = kvm_inject_page_fault_shadow;
 }
 
-static union kvm_mmu_role
-kvm_calc_nested_mmu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs)
-{
-       union kvm_mmu_role role;
-
-       role = kvm_calc_shadow_root_page_role_common(vcpu, regs);
-
-       /*
-        * Nested MMUs are used only for walking L2's gva->gpa, they never have
-        * shadow pages of their own and so "direct" has no meaning.  Set it
-        * to "true" to try to detect bogus usage of the nested MMU.
-        */
-       role.base.direct = true;
-       role.base.level = role_regs_to_root_level(regs);
-       return role;
-}
-
 static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu,
                                const struct kvm_mmu_role_regs *regs)
 {
-       union kvm_mmu_role new_role = kvm_calc_nested_mmu_role(vcpu, regs);
+       union kvm_mmu_role new_mode = kvm_calc_cpu_mode(vcpu, regs);
        struct kvm_mmu *g_context = &vcpu->arch.nested_mmu;
 
-       if (new_role.as_u64 == g_context->mmu_role.as_u64)
+       if (new_mode.as_u64 == g_context->cpu_mode.as_u64)
                return;
 
-       g_context->mmu_role.as_u64 = new_role.as_u64;
+       g_context->cpu_mode.as_u64 = new_mode.as_u64;
        g_context->get_guest_pgd     = kvm_get_guest_cr3;
        g_context->get_pdptr         = kvm_pdptr_read;
        g_context->inject_page_fault = kvm_inject_page_fault;
-       g_context->root_level        = new_role.base.level;
+       g_context->root_level        = new_mode.base.level;
 
        /*
         * L2 page tables are never shadowed, so there is no need to sync
@@ -5020,6 +5042,9 @@ void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu)
        vcpu->arch.root_mmu.mmu_role.ext.valid = 0;
        vcpu->arch.guest_mmu.mmu_role.ext.valid = 0;
        vcpu->arch.nested_mmu.mmu_role.ext.valid = 0;
+       vcpu->arch.root_mmu.cpu_mode.ext.valid = 0;
+       vcpu->arch.guest_mmu.cpu_mode.ext.valid = 0;
+       vcpu->arch.nested_mmu.cpu_mode.ext.valid = 0;
        kvm_mmu_reset_context(vcpu);
 
        /*
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index d1d17d28e81b..e1c2ecb4ddee 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -323,7 +323,7 @@ static inline bool FNAME(is_last_gpte)(struct kvm_mmu *mmu,
         * is not reserved and does not indicate a large page at this level,
         * so clear PT_PAGE_SIZE_MASK in gpte if that is the case.
         */
-       gpte &= level - (PT32_ROOT_LEVEL + mmu->mmu_role.ext.cr4_pse);
+       gpte &= level - (PT32_ROOT_LEVEL + mmu->cpu_mode.ext.cr4_pse);
 #endif
        /*
         * PG_LEVEL_4K always terminates.  The RHS has bit 7 set
Snapshot the state of the processor registers that govern page walk into
a new field of struct kvm_mmu.  This is a more natural representation
than having it *mostly* in mmu_role but not exclusively; the delta
right now is represented in other fields, such as root_level.

The nested MMU now has only the CPU mode; and in fact the new function
kvm_calc_cpu_mode is analogous to the previous kvm_calc_nested_mmu_role,
except that it has role.base.direct equal to CR0.PG.  It is not clear
what the code meant by "setting role.base.direct to true to detect bogus
usage of the nested MMU".

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |   1 +
 arch/x86/kvm/mmu/mmu.c          | 107 ++++++++++++++++++++------------
 arch/x86/kvm/mmu/paging_tmpl.h  |   2 +-
 3 files changed, 68 insertions(+), 42 deletions(-)
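For readers tracing the accessor change at the top of the mmu.c diff: expanding
BUILD_MMU_ROLE_ACCESSOR(ext, cr0, pg) by hand (written out here purely for
illustration, it is not a separate definition in the patch) shows that the
is_*() helpers now read the guest-visible cpu_mode snapshot rather than
mmu_role.

        static inline bool __maybe_unused is_cr0_pg(struct kvm_mmu *mmu)
        {
                /* After this patch the accessor reads cpu_mode, not mmu_role. */
                return !!(mmu->cpu_mode.ext.cr0_pg);
        }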