From patchwork Tue Sep 27 17:50:00 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Konrad Rzeszutek Wilk
X-Patchwork-Id: 9352423
Date: Tue, 27 Sep 2016 13:50:00 -0400
From: Konrad Rzeszutek Wilk
To: Julien Grall
Cc: ross.lagerwall@citrix.com, sstabellini@kernel.org,
    xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v5 14/16] livepatch: Initial ARM32 support.
Message-ID: <20160927175000.GA3054@localhost.localdomain>
References: <1474479154-20991-1-git-send-email-konrad.wilk@oracle.com>
 <1474479154-20991-15-git-send-email-konrad.wilk@oracle.com>
 <829630b3-e7ca-b62d-d3a1-7742d64fcc05@arm.com>
In-Reply-To: <829630b3-e7ca-b62d-d3a1-7742d64fcc05@arm.com>
User-Agent: Mutt/1.6.1 (2016-04-27)

On Tue, Sep 27, 2016 at 09:39:06AM -0700, Julien Grall wrote:
> Hi Konrad,
>
> On 21/09/2016 10:32, Konrad Rzeszutek Wilk wrote:
> > The patch piggybacks on: livepatch: Initial ARM64 support, which
> > brings in all of the necessary livepatch infrastructure pieces.
> >
> > This patch adds three major pieces:
> >
> >  1) ELF relocations. ARM32 uses SHT_REL instead of SHT_RELA, which
> >     means the addendum has to be extracted from within the
> >     instruction. This required parsing BL/BLX, B/BL,
> >     MOVT, and MOVW instructions.
> >
> >     The code was written from scratch using the ARM ELF manual
> >     (and the ARM Architecture Reference Manual).
> >
> >  2) Inserting a trampoline. We use the B (branch to address)
> >     instruction, which uses an offset based on the PC value: PC + imm32.
> >     Because we insert the branch at the start of the old function,
> >     we have to account for the instruction already being fetched,
> >     and subtract 8 from the delta (new_addr - old_addr). See
> >     ARM DDI 0406C.c, A2.3 (pg 45) and A8.8.18 (pg 334, 335).
> >
> >  3) Allows the test-cases to be built under ARM32.
> >     The "livepatch: tests: Make them compile under ARM64" patch
> >     put the right infrastructure in place, and we piggyback on it.
> >
> > Acked-by: Julien Grall
> > Acked-by: Jan Beulich [for non-ARM parts]
> > Signed-off-by: Konrad Rzeszutek Wilk
> > ---
> > Cc: Julien Grall
> > Cc: Stefano Stabellini
> >
> > v2: First submission.
> > v3: Use LIVEPATCH_ARCH_RANGE instead of the NEGATIVE_32MB macro.
> >    -Use PATCH_INSN_SIZE instead of the value 4.
> >    -Ditch the old_ptr local variable.
> >    -Use 8 for evaluating the branch instead of 4, based on the ARM docs.
> >    -NOP-patch up to sizeof(opaque) % PATCH_INSN_SIZE (so 7 instructions).
> >    -Don't mask by 0x00FFFFF_E_ after shifting; instead mask by 0x00FFFFF_F_.
> >     The reason is that the offset is constructed by shifting the insn
> >     (except the first two bytes) left by two, which meant we would have
> >     cleared offset[2]! - and jumped to a location that was -4 bytes off.
> >    -Update commit description to have -8 instead of -4 delta and also
> >     include a reference to the spec.
> > v4: Added Jan's Ack.
> >     s/PATCH_INSN_SIZE/ARCH_PATCH_INSN_SIZE/
> >     s/arch_livepatch_insn_len/livepatch_insn_len/
> >     s/LIVEPATCH_ARCH_RANGE/ARCH_LIVEPATCH_RANGE/
> > v5: Added Julien's Ack.
>
> IMHO my ack should not have been retained given that ...

Ooops!

> > - Rebased on "livepatch: Drop _jmp from arch_livepatch_[apply,revert]_jmp"
> > - Added explanation for the usage of the data cache and why we need to
> >   sync it.
>
> ... you also replace the clean_and_invalidate to the old_ptr by
> clean_and_invalidate to the new_ptr.

Ah yes! I had it in my patch queue but neglected to email it out.
Here is what I have in the git branch:

From 8bf07ac18e2cfcf304860aa00ab157e1e7f77ed9 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk
Date: Thu, 22 Sep 2016 20:15:09 -0400
Subject: [PATCH] livepatch: Initial ARM32 support.

The patch piggybacks on: livepatch: Initial ARM64 support, which
brings in all of the necessary livepatch infrastructure pieces.

This patch adds three major pieces:

 1) ELF relocations. ARM32 uses SHT_REL instead of SHT_RELA, which
    means the addendum has to be extracted from within the
    instruction. This required parsing BL/BLX, B/BL,
    MOVT, and MOVW instructions.

    The code was written from scratch using the ARM ELF manual
    (and the ARM Architecture Reference Manual).

 2) Inserting a trampoline. We use the B (branch to address)
    instruction, which uses an offset based on the PC value: PC + imm32.
    Because we insert the branch at the start of the old function,
    we have to account for the instruction already being fetched,
    and subtract 8 from the delta (new_addr - old_addr). See
    ARM DDI 0406C.c, A2.3 (pg 45) and A8.8.18 (pg 334, 335).

 3) Allows the test-cases to be built under ARM32.
    The "livepatch: tests: Make them compile under ARM64" patch
    put the right infrastructure in place, and we piggyback on it.

Acked-by: Jan Beulich [for non-ARM parts]
Signed-off-by: Konrad Rzeszutek Wilk
Acked-by: Julien Grall
---
Cc: Julien Grall
Cc: Stefano Stabellini

v2: First submission.
v3: Use LIVEPATCH_ARCH_RANGE instead of the NEGATIVE_32MB macro.
   -Use PATCH_INSN_SIZE instead of the value 4.
   -Ditch the old_ptr local variable.
   -Use 8 for evaluating the branch instead of 4, based on the ARM docs.
   -NOP-patch up to sizeof(opaque) % PATCH_INSN_SIZE (so 7 instructions).
   -Don't mask by 0x00FFFFF_E_ after shifting; instead mask by 0x00FFFFF_F_.
    The reason is that the offset is constructed by shifting the insn
    (except the first two bytes) left by two, which meant we would have
    cleared offset[2]! - and jumped to a location that was -4 bytes off.
   -Update commit description to have -8 instead of -4 delta and also
    include a reference to the spec.
v4: Added Jan's Ack.
    s/PATCH_INSN_SIZE/ARCH_PATCH_INSN_SIZE/
    s/arch_livepatch_insn_len/livepatch_insn_len/
    s/LIVEPATCH_ARCH_RANGE/ARCH_LIVEPATCH_RANGE/
v5: Added Julien's Ack.
    - Rebased on "livepatch: Drop _jmp from arch_livepatch_[apply,revert]_jmp"
    - Added explanation for the usage of the data cache and why we need to
      sync it.
    - Put clean_and_invalidate_dcache_va_range on new_ptr back in.
    - Rebased on top of "livepatch/arm/x86: Check payload for unwelcomed
      symbols"
    - Simplified 'arch_livepatch_revert' to be a memcpy instead of a loop.
    - Removed Julien's Ack.
---
 xen/arch/arm/arm32/livepatch.c | 276 ++++++++++++++++++++++++++++++++++++++++-
 xen/arch/arm/arm64/livepatch.c |   7 ++
 xen/arch/arm/livepatch.c       |   7 --
 xen/common/Kconfig             |   2 +-
 xen/include/xen/elfstructs.h   |  24 +++-
 xen/test/Makefile              |   2 -
 xen/test/livepatch/Makefile    |   3 +
 7 files changed, 308 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/arm32/livepatch.c b/xen/arch/arm/arm32/livepatch.c
index 5fc2e63..d9a8caa 100644
--- a/xen/arch/arm/arm32/livepatch.c
+++ b/xen/arch/arm/arm32/livepatch.c
@@ -3,21 +3,111 @@
  */
 #include
+#include
 #include
 #include
 #include
+#include
+#include
+
 void arch_livepatch_apply(struct livepatch_func *func)
 {
+    uint32_t insn;
+    uint32_t *new_ptr;
+    unsigned int i, len;
+
+    BUILD_BUG_ON(ARCH_PATCH_INSN_SIZE > sizeof(func->opaque));
+    BUILD_BUG_ON(ARCH_PATCH_INSN_SIZE != sizeof(insn));
+
+    ASSERT(vmap_of_xen_text);
+
+    len = livepatch_insn_len(func);
+    if ( !len )
+        return;
+
+    /* Save old ones. */
+    memcpy(func->opaque, func->old_addr, len);
+
+    if ( func->new_addr )
+    {
+        s32 delta;
+
+        /*
+         * PC is the current address (old_addr) + 8 bytes. The semantics of
+         * an unconditional branch is to jump to PC + imm32 (offset).
+         *
+         * ARM DDI 0406C.c, see A2.3 (pg 45) and A8.8.18 (pg 334, 335).
+         */
+        delta = (s32)func->new_addr - (s32)(func->old_addr + 8);
+
+        /* The arch_livepatch_symbol_ok should have caught it. */
+        ASSERT(delta >= -(s32)ARCH_LIVEPATCH_RANGE &&
+               delta < (s32)ARCH_LIVEPATCH_RANGE);
+
+        /* CPU shifts by two (left) when decoding, so we shift right by two. */
+        delta = delta >> 2;
+        /* Let's not modify the cond. */
+        delta &= 0x00FFFFFF;
+
+        insn = 0xea000000 | delta;
+    }
+    else
+        insn = 0xe1a00000; /* mov r0, r0 */
+
+    new_ptr = func->old_addr - (void *)_start + vmap_of_xen_text;
+    len = len / sizeof(uint32_t);
+
+    /* PATCH! */
+    for ( i = 0; i < len; i++ )
+        *(new_ptr + i) = insn;
+
+    /*
+     * When we upload the payload, it will go through the data cache
+     * (the region is cacheable). Until the data cache is cleaned, the data
+     * may not reach the memory. And in the case where the data and
+     * instruction caches are separated, we may read an invalid instruction
+     * from the memory because the data cache has not yet synced with the
+     * memory. Hence sync it.
+     */
+    if ( func->new_addr )
+        clean_and_invalidate_dcache_va_range(func->new_addr, func->new_size);
+    clean_and_invalidate_dcache_va_range(new_ptr, sizeof(*new_ptr) * len);
 }

 void arch_livepatch_revert(const struct livepatch_func *func)
 {
+    uint32_t *new_ptr;
+    unsigned int len;
+
+    new_ptr = func->old_addr - (void *)_start + vmap_of_xen_text;
+
+    len = livepatch_insn_len(func);
+    memcpy(new_ptr, func->opaque, len);
+
+    clean_and_invalidate_dcache_va_range(new_ptr, len);
 }

 int arch_livepatch_verify_elf(const struct livepatch_elf *elf)
 {
-    return -EOPNOTSUPP;
+    const Elf_Ehdr *hdr = elf->hdr;
+
+    if ( hdr->e_machine != EM_ARM ||
+         hdr->e_ident[EI_CLASS] != ELFCLASS32 )
+    {
+        dprintk(XENLOG_ERR, LIVEPATCH "%s: Unsupported ELF Machine type!\n",
+                elf->name);
+        return -EOPNOTSUPP;
+    }
+
+    if ( (hdr->e_flags & EF_ARM_EABI_MASK) != EF_ARM_EABI_VER5 )
+    {
+        dprintk(XENLOG_ERR, LIVEPATCH "%s: Unsupported ELF EABI(%x)!\n",
+                elf->name, hdr->e_flags);
+        return -EOPNOTSUPP;
+    }
+
+    return 0;
 }

 bool arch_livepatch_symbol_deny(const struct livepatch_elf *elf,
@@ -33,11 +123,193 @@ bool arch_livepatch_symbol_deny(const struct livepatch_elf *elf,
     return false;
 }

+static s32 get_addend(unsigned char type, void *dest)
+{
+    s32 addend = 0;
+
+    switch ( type ) {
+    case R_ARM_NONE:
+        /* Ignore. */
+        break;
+
+    case R_ARM_ABS32:
+        addend = *(u32 *)dest;
+        break;
+
+    case R_ARM_REL32:
+        addend = *(u32 *)dest;
+        break;
+
+    case R_ARM_MOVW_ABS_NC:
+    case R_ARM_MOVT_ABS:
+        addend = (*(u32 *)dest & 0x00000FFF);
+        addend |= (*(u32 *)dest & 0x000F0000) >> 4;
+        /* Addend is sign-extended ([19:16],[11:0]).
+         */
+        addend = (s16)addend;
+        break;
+
+    case R_ARM_CALL:
+    case R_ARM_JUMP24:
+        /* Addend = sign_extend (insn[23:0]) << 2 */
+        addend = ((*(u32 *)dest & 0xFFFFFF) ^ 0x800000) - 0x800000;
+        addend = addend << 2;
+        break;
+    }
+
+    return addend;
+}
+
+static int perform_rel(unsigned char type, void *dest, uint32_t val, s32 addend)
+{
+    switch ( type ) {
+    case R_ARM_NONE:
+        /* Ignore. */
+        break;
+
+    case R_ARM_ABS32: /* (S + A) | T */
+        *(u32 *)dest = (val + addend);
+        break;
+
+    case R_ARM_REL32: /* ((S + A) | T) - P */
+        *(u32 *)dest = (val + addend) - (uint32_t)dest;
+        break;
+
+    case R_ARM_MOVW_ABS_NC: /* S + A */
+    case R_ARM_MOVT_ABS: /* S + A */
+        /* Clear the addend if needed. */
+        if ( addend )
+            *(u32 *)dest &= 0xFFF0F000;
+
+        if ( type == R_ARM_MOVT_ABS )
+        {
+            /*
+             * Almost the same as MOVW, except it uses the 16-bit
+             * high value. Putting it in the insn requires shifting
+             * right by 16 bits (as we only have 16 bits for the imm).
+             */
+            val &= 0xFFFF0000; /* ResultMask */
+            val = val >> 16;
+        }
+        else
+        {
+            /* MOVW loads 16 bits into the bottom half of a register. */
+            val &= 0xFFFF;
+        }
+        /* [11:0] = Result_Mask(X) & 0xFFF, [19:16] = Result_Mask(X) >> 12 */
+        *(u32 *)dest |= val & 0xFFF;
+        *(u32 *)dest |= (val >> 12) << 16;
+        break;
+
+    case R_ARM_CALL:
+    case R_ARM_JUMP24: /* (S + A) - P */
+        /* Clear the old addend. */
+        if ( addend )
+            *(u32 *)dest &= 0xFF000000;
+
+        val += addend - (uint32_t)dest;
+
+        /*
+         * arch_livepatch_verify_distance can't account for the addend, so we
+         * have to do the check here as well.
+         */
+        if ( (s32)val < -(s32)ARCH_LIVEPATCH_RANGE ||
+             (s32)val >= (s32)ARCH_LIVEPATCH_RANGE )
+            return -EOVERFLOW;
+
+        /* The CPU shifts the insn by two (left) when decoding, so shift right by two.
+         */
+        val = val >> 2;
+        val &= 0x00FFFFFF;
+        *(u32 *)dest |= (uint32_t)val;
+        break;
+
+    default:
+        return -EOPNOTSUPP;
+    }
+
+    return 0;
+}
+
+int arch_livepatch_perform(struct livepatch_elf *elf,
+                           const struct livepatch_elf_sec *base,
+                           const struct livepatch_elf_sec *rela,
+                           bool use_rela)
+{
+    const Elf_RelA *r_a;
+    const Elf_Rel *r;
+    unsigned int symndx, i;
+    uint32_t val;
+    void *dest;
+    int rc = 0;
+
+    for ( i = 0; i < (rela->sec->sh_size / rela->sec->sh_entsize); i++ )
+    {
+        unsigned char type;
+        s32 addend = 0;
+
+        if ( use_rela )
+        {
+            r_a = rela->data + i * rela->sec->sh_entsize;
+            symndx = ELF32_R_SYM(r_a->r_info);
+            type = ELF32_R_TYPE(r_a->r_info);
+            dest = base->load_addr + r_a->r_offset; /* P */
+            addend = r_a->r_addend;
+        }
+        else
+        {
+            r = rela->data + i * rela->sec->sh_entsize;
+            symndx = ELF32_R_SYM(r->r_info);
+            type = ELF32_R_TYPE(r->r_info);
+            dest = base->load_addr + r->r_offset; /* P */
+        }
+
+        if ( symndx >= elf->nsym )
+        {
+            dprintk(XENLOG_ERR, LIVEPATCH "%s: Relative symbol wants symbol@%u which is past end!\n",
+                    elf->name, symndx);
+            return -EINVAL;
+        }
+
+        if ( !use_rela )
+            addend = get_addend(type, dest);
+
+        val = elf->sym[symndx].sym->st_value; /* S */
+
+        rc = perform_rel(type, dest, val, addend);
+        switch ( rc ) {
+        case -EOVERFLOW:
+            dprintk(XENLOG_ERR, LIVEPATCH "%s: Overflow in relocation %u in %s for %s!\n",
+                    elf->name, i, rela->name, base->name);
+            break;
+
+        case -EOPNOTSUPP:
+            dprintk(XENLOG_ERR, LIVEPATCH "%s: Unhandled relocation #%x\n",
+                    elf->name, type);
+            break;
+
+        default:
+            break;
+        }
+
+        if ( rc )
+            break;
+    }
+
+    return rc;
+}
+
+int arch_livepatch_perform_rel(struct livepatch_elf *elf,
+                               const struct livepatch_elf_sec *base,
+                               const struct livepatch_elf_sec *rela)
+{
+    return arch_livepatch_perform(elf, base, rela, false);
+}
+
 int arch_livepatch_perform_rela(struct livepatch_elf *elf,
                                 const struct livepatch_elf_sec *base,
                                 const struct livepatch_elf_sec *rela)
 {
-    return -ENOSYS;
+    return
+        arch_livepatch_perform(elf, base, rela, true);
 }

 /*
diff --git a/xen/arch/arm/arm64/livepatch.c b/xen/arch/arm/arm64/livepatch.c
index f148927..558acb9 100644
--- a/xen/arch/arm/arm64/livepatch.c
+++ b/xen/arch/arm/arm64/livepatch.c
@@ -241,6 +241,13 @@ static int reloc_insn_imm(enum aarch64_reloc_op op, void *dest, u64 val,
     return 0;
 }

+int arch_livepatch_perform_rel(struct livepatch_elf *elf,
+                               const struct livepatch_elf_sec *base,
+                               const struct livepatch_elf_sec *rela)
+{
+    return -ENOSYS;
+}
+
 int arch_livepatch_perform_rela(struct livepatch_elf *elf,
                                const struct livepatch_elf_sec *base,
                                const struct livepatch_elf_sec *rela)
diff --git a/xen/arch/arm/livepatch.c b/xen/arch/arm/livepatch.c
index b8dbee2..dfa285c 100644
--- a/xen/arch/arm/livepatch.c
+++ b/xen/arch/arm/livepatch.c
@@ -118,13 +118,6 @@ bool arch_livepatch_symbol_ok(const struct livepatch_elf *elf,
     return true;
 }

-int arch_livepatch_perform_rel(struct livepatch_elf *elf,
-                              const struct livepatch_elf_sec *base,
-                              const struct livepatch_elf_sec *rela)
-{
-    return -ENOSYS;
-}
-
 int arch_livepatch_secure(const void *va, unsigned int pages, enum va_type type)
 {
     unsigned long start = (unsigned long)va;
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 0f26027..d4f10ca 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -217,7 +217,7 @@ config CRYPTO
 config LIVEPATCH
 	bool "Live patching support (TECH PREVIEW)"
 	default n
-	depends on !ARM_32 && HAS_BUILD_ID = "y"
+	depends on HAS_BUILD_ID = "y"
 	---help---
 	  Allows a running Xen hypervisor to be dynamically patched using
 	  binary patches without rebooting.
	  This is primarily used to binarily
diff --git a/xen/include/xen/elfstructs.h b/xen/include/xen/elfstructs.h
index 7329987..e543212 100644
--- a/xen/include/xen/elfstructs.h
+++ b/xen/include/xen/elfstructs.h
@@ -103,6 +103,15 @@ typedef uint64_t Elf64_Xword;
 	(ehdr).e_ident[EI_MAG2] == ELFMAG2 && \
 	(ehdr).e_ident[EI_MAG3] == ELFMAG3)

+/* e_flags */
+#define EF_ARM_EABI_MASK	0xff000000
+#define EF_ARM_EABI_UNKNOWN	0x00000000
+#define EF_ARM_EABI_VER1	0x01000000
+#define EF_ARM_EABI_VER2	0x02000000
+#define EF_ARM_EABI_VER3	0x03000000
+#define EF_ARM_EABI_VER4	0x04000000
+#define EF_ARM_EABI_VER5	0x05000000
+
 /* ELF Header */
 typedef struct elfhdr {
 	unsigned char	e_ident[EI_NIDENT]; /* ELF Identification */
@@ -364,9 +373,22 @@ typedef struct {
 #define R_X86_64_PLT32		4	/* 32 bit PLT address */

 /*
+ * ARM32 relocation types. See
+ * http://infocenter.arm.com/help/topic/com.arm.doc.ihi0044f/IHI0044F_aaelf.pdf
  * S - address of symbol.
- * A - addend for relocation (r_addend)
+ * A - addend for relocation (r_addend, or extracted from the insn)
  * P - address of the dest being relocated (derieved from r_offset)
+ */
+#define R_ARM_NONE		0
+#define R_ARM_ABS32		2	/* Direct 32-bit. S+A */
+#define R_ARM_REL32		3	/* PC relative. S+A */
+#define R_ARM_CALL		28	/* SignExtend([23:0]) << 2. S+A-P */
+#define R_ARM_JUMP24		29	/* Same as R_ARM_CALL */
+#define R_ARM_MOVW_ABS_NC	43	/* SignExtend([19:16],[11:0])&0xFFFF, S+A */
+#define R_ARM_MOVT_ABS	44	/* SignExtend([19:16],[11:0])&0xFFFF0000 */
+					/* >> 16, S+A. */
+
+/*
  * NC - No check for overflow.
  *
  * The defines also use _PREL for PC-relative address, and _NC is No Check.
diff --git a/xen/test/Makefile b/xen/test/Makefile
index 95c1755..d91b319 100644
--- a/xen/test/Makefile
+++ b/xen/test/Makefile
@@ -1,8 +1,6 @@
 .PHONY: tests
 tests:
-ifneq ($(XEN_TARGET_ARCH),arm32)
 	$(MAKE) -f $(BASEDIR)/Rules.mk -C livepatch livepatch
-endif

 .PHONY: clean
 clean::
diff --git a/xen/test/livepatch/Makefile b/xen/test/livepatch/Makefile
index d844ad4..9439f62 100644
--- a/xen/test/livepatch/Makefile
+++ b/xen/test/livepatch/Makefile
@@ -6,6 +6,9 @@ endif
 ifeq ($(XEN_TARGET_ARCH),arm64)
 OBJCOPY_MAGIC := -I binary -O elf64-littleaarch64 -B aarch64
 endif
+ifeq ($(XEN_TARGET_ARCH),arm32)
+OBJCOPY_MAGIC := -I binary -O elf32-littlearm -B arm
+endif

 CODE_ADDR=$(shell nm --defined $(1) | grep $(2) | awk '{print "0x"$$1}')
 CODE_SZ=$(shell nm --defined -S $(1) | grep $(2) | awk '{ print "0x"$$2}')