
[v2,13/20] livepatch: Initial ARM64 support.

Message ID 1472132255-23470-14-git-send-email-konrad.wilk@oracle.com (mailing list archive)
State New, archived

Commit Message

Konrad Rzeszutek Wilk Aug. 25, 2016, 1:37 p.m. UTC
As compared to x86 the va of the hypervisor .text
is locked down - we cannot modify the running pagetables
to have the .ro flag unset. We borrow the same idea that
alternative patching has - which is to vmap the entire
.text region and use the alternative virtual address
for patching.
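
In miniature, the approach is (a simplified sketch of the
arch_livepatch_quiesce() added by this patch; error handling elided):

    mfn_t text_mfn = _mfn(virt_to_mfn(_start));
    unsigned int text_order = get_order_from_bytes(_end - _start);

    /* Map the MFNs backing the read-only .text a second time, writable. */
    vmap_of_xen_text = __vmap(&text_mfn, 1U << text_order, 1, 1,
                              PAGE_HYPERVISOR, VMAP_DEFAULT);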

Since we are doing vmap we may fail, hence the
arch_livepatch_quiesce was changed (see "x86,arm:
Change arch_livepatch_quiesce() decleration") to return
an error value which will be bubbled in payload->rc and
provided to the user (along with messages in the ring buffer).

The livepatch virtual address space (where the new functions
are) needs to be close to the hypervisor virtual address
so that the trampoline can reach it. As such we re-use
the BOOT_RELOC_VIRT_START which is not used after bootup
(alternatively we can also use the space after the _end to
FIXMAP_ADDR(0), but that may be too small).

The ELF relocation engine at the start was coded from
the "ELF for the ARM 64-bit Architecture (AArch64)"
(http://infocenter.arm.com/help/topic/com.arm.doc.ihi0056b/IHI0056B_aaelf64.pdf)
but after a while of trying to write the correct bit shifting
and masking from scratch I ended up borrowing the 'reloc_insn_imm'
function from Linux (v4.7, arch/arm64/kernel/module.c; see commit
257cb251925f854da435cbf79b140984413871ac "arm64: Loadable modules").

And while at it - we also utilize code from Linux to construct
the right branch instruction (see "arm64/insn: introduce
aarch64_insn_gen_{nop|branch_imm}() helper functions").
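
For illustration, the core of what such a helper computes (a standalone
sketch with made-up names, not the actual aarch64_insn_gen_branch_imm()):
the AArch64 B instruction carries a signed 26-bit word offset, i.e. a
+/-128MB reach, which is exactly why the livepatch area must sit close
to the hypervisor .text.

    /* Sketch: encode an unconditional AArch64 branch (B) from pc to target. */
    static uint32_t gen_branch(uint64_t pc, uint64_t target)
    {
        int64_t offset = (int64_t)(target - pc);

        /* imm26 is a signed word offset: 4-byte aligned, within +/-128MB. */
        if ( (offset & 3) || offset < -(1LL << 27) || offset >= (1LL << 27) )
            return 0xd4200000;                /* BRK #0, as a fault marker */

        return 0x14000000 | (((uint64_t)offset >> 2) & 0x03ffffff);
    }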

In the livepatch payload loading code we tweak the #ifdef to
only exclude ARM_32. The exception tables are not present on ARM 32/64,
hence they stay behind their #ifdef.

We also expand the MAINTAINERS file to cover the arm64 and arm32
platform-specific livepatch files.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

---
Cc: Ross Lagerwall <ross.lagerwall@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

RFC: Holy cow! It works!
v1: Finished the various TODOs and reviews outlined by Julien in RFC.
v2: Call check_for_livepatch_work in leave_hypervisor_tail not in
    reset_stack_and_jump
   - Move ARM 32 components in other patch
   - Blacklist platform options in Kconfig
   - Added R_AARCH64_LDST128_ABS_LO12_NC, R_AARCH64_TSTBR14, and
     R_AARCH64_ADR_PREL_PG_HI21_NC
   - Added do_reloc and reloc_data along with overflow checks from Linux.
   - Use apply_alternatives without any #ifdef
   - Use leave_hypervisor_tail to call check_for_livepatch_work
   - Add ASSERT that insn is not AARCH64_BREAK_FAULT
   - Spun out arch_livepatch_quiesce changes into a separate patch.
   - Spun out changes to config.h (documentation ones) into a separate patch.
   - Move the livepatch.h to xen/include/asm-arm.
   - Add LIVEPATCH_VMAP_END and LIVEPATCH_VMAP_START in config.h
   - In arch_livepatch_secure use switch for va_type.
   - Drop the #ifndef CONFIG_ARM for .ex_table (see
     "arm/x86: Add HAS_ALTERNATIVE and HAS_EX_TABLE")
   - Piggyback on "x86: change modify_xen_mappings to return error"
      so that arch_livepatch_secure can return errors.
   - Moves comment about branch predictor out of this patch.
---
 MAINTAINERS                     |   1 +
 xen/arch/arm/Makefile           |  13 +-
 xen/arch/arm/arm32/Makefile     |   1 +
 xen/arch/arm/arm32/livepatch.c  |  38 +++++
 xen/arch/arm/arm64/Makefile     |   1 +
 xen/arch/arm/arm64/livepatch.c  | 323 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/domain.c           |   6 +
 xen/arch/arm/livepatch.c        | 100 ++++++++++---
 xen/arch/arm/traps.c            |   6 +
 xen/common/Kconfig              |   2 +-
 xen/include/asm-arm/config.h    |   5 +
 xen/include/asm-arm/livepatch.h |  28 ++++
 xen/include/xen/elfstructs.h    |  34 ++++-
 xen/include/xen/types.h         |   6 +
 14 files changed, 539 insertions(+), 25 deletions(-)
 create mode 100644 xen/arch/arm/arm32/livepatch.c
 create mode 100644 xen/arch/arm/arm64/livepatch.c
 create mode 100644 xen/include/asm-arm/livepatch.h

Comments

Jan Beulich Aug. 25, 2016, 3:02 p.m. UTC | #1
>>> On 25.08.16 at 15:37, <konrad.wilk@oracle.com> wrote:
> --- a/xen/include/xen/types.h
> +++ b/xen/include/xen/types.h
> @@ -14,6 +14,12 @@
>  #define NULL ((void*)0)
>  #endif
>  
> +#define U16_MAX         ((u16)~0U)
> +#define S16_MAX         ((s16)(U16_MAX>>1))
> +#define S16_MIN         ((s16)(-S16_MAX - 1))
> +#define U32_MAX         ((u32)~0U)
> +#define S32_MAX         ((s32)(U32_MAX>>1))
> +#define S32_MIN         ((s32)(-S32_MAX - 1))

These are rather strange constants: Fixed width types necessarily
always have the same boundaries of representable values. Otoh
the C standard has such constants too - maybe if we really want
them we should rather use their names?

Jan
Julien Grall Sept. 1, 2016, 2:16 p.m. UTC | #2
Hi Konrad,

On 25/08/16 14:37, Konrad Rzeszutek Wilk wrote:
> As compared to x86 the va of the hypervisor .text
> is locked down - we cannot modify the running pagetables
> to have the .ro flag unset. We borrow the same idea that
> alternative patching has - which is to vmap the entire
> .text region and use the alternative virtual address
> for patching.
>
> Since we are doing vmap we may fail, hence the
> arch_livepatch_quiesce was changed (see "x86,arm:
> Change arch_livepatch_quiesce() decleration") to return

s/decleration/declaration/

> an error value which will be bubbled in payload->rc and
> provided to the user (along with messages in the ring buffer).
>
> The livepatch virtual address space (where the new functions
> are) needs to be close to the hypervisor virtual address
> so that the trampoline can reach it. As such we re-use
> the BOOT_RELOC_VIRT_START which is not used after bootup
> (alternatively we can also use the space after the _end to
> FIXMAP_ADDR(0), but that may be too small).
>
> The ELF relocation engine at the start was coded from
> the "ELF for the ARM 64-bit Architecture (AArch64)"
> (http://infocenter.arm.com/help/topic/com.arm.doc.ihi0056b/IHI0056B_aaelf64.pdf)
> but after a while of trying to write the correct bit shifting
> and masking from scratch I ended up borrowing the 'reloc_insn_imm'
> function from Linux (v4.7, arch/arm64/kernel/module.c; see commit
> 257cb251925f854da435cbf79b140984413871ac "arm64: Loadable modules").
>
> And while at it - we also utilize code from Linux to construct
> the right branch instruction (see "arm64/insn: introduce
> aarch64_insn_gen_{nop|branch_imm}() helper functions").
>
> In the livepatch payload loading code we tweak the #ifdef to
> only exclude ARM_32. The exception tables are not present on ARM 32/64,
> hence they stay behind their #ifdef.
>
> We also expand the MAINTAINERS file to cover the arm64 and arm32
> platform-specific livepatch files.
>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

[...]

> diff --git a/xen/arch/arm/arm64/livepatch.c b/xen/arch/arm/arm64/livepatch.c
> new file mode 100644
> index 0000000..0809a42
> --- /dev/null
> +++ b/xen/arch/arm/arm64/livepatch.c
> @@ -0,0 +1,323 @@
> +/*
> + *  Copyright (c) 2016 Oracle and/or its affiliates. All rights reserved.
> + */
> +
> +#include <xen/bitops.h>
> +#include <xen/errno.h>
> +#include <xen/lib.h>
> +#include <xen/livepatch_elf.h>
> +#include <xen/livepatch.h>
> +#include <xen/mm.h>
> +#include <xen/vmap.h>
> +
> +#include <asm/bitops.h>
> +#include <asm/byteorder.h>
> +#include <asm/insn.h>
> +#include <asm/livepatch.h>
> +
> +void arch_livepatch_apply_jmp(struct livepatch_func *func)
> +{
> +    uint32_t insn;
> +    uint32_t *old_ptr;
> +    uint32_t *new_ptr;
> +
> +    BUILD_BUG_ON(PATCH_INSN_SIZE > sizeof(func->opaque));
> +    BUILD_BUG_ON(PATCH_INSN_SIZE != sizeof(insn));
> +
> +    ASSERT(vmap_of_xen_text);
> +
> +    /* Save old one. */
> +    old_ptr = func->old_addr;
> +    memcpy(func->opaque, old_ptr, PATCH_INSN_SIZE);

I don't see any value in using a temporary variable (old_ptr) to hold
func->old_addr here.
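
That is, the copy could simply be (sketch):

    memcpy(func->opaque, func->old_addr, PATCH_INSN_SIZE);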

> +
> +    if ( func->new_addr )
> +        insn = aarch64_insn_gen_branch_imm((unsigned long)func->old_addr,
> +                                           (unsigned long)func->new_addr,
> +                                           AARCH64_INSN_BRANCH_NOLINK);
> +    else
> +        insn = aarch64_insn_gen_nop();
> +
> +    ASSERT(insn != AARCH64_BREAK_FAULT);

Could you document in the code what prevents aarch64_insn_gen_branch_imm
from generating a break fault instruction?
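
A comment along these lines would answer that (a sketch, inferred from the
address-space layout described in the commit message):

    /*
     * aarch64_insn_gen_branch_imm() only returns AARCH64_BREAK_FAULT when
     * the branch offset does not fit in imm26 (+/-128MB). The livepatch
     * region reuses BOOT_RELOC_VIRT_START (8MB), only a few MB away from
     * the hypervisor .text, so generated branches always fit.
     */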

> +    new_ptr = func->old_addr - (void *)_start + vmap_of_xen_text;
> +
> +    /* PATCH! */
> +    *(new_ptr) = cpu_to_le32(insn);
> +
> +    clean_and_invalidate_dcache_va_range(new_ptr, sizeof(*new_ptr));
> +}
> +
> +void arch_livepatch_revert_jmp(const struct livepatch_func *func)
> +{
> +    uint32_t *new_ptr;
> +    uint32_t insn;
> +
> +    memcpy(&insn, func->opaque, PATCH_INSN_SIZE);
> +
> +    new_ptr = (uint32_t *)func->old_addr - (u32 *)_start + vmap_of_xen_text;

The computation here looks wrong. You are mixing (u32 *) and (void *).
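
A possible fix (sketch) is to keep the arithmetic in bytes, as
arch_livepatch_apply_jmp already does:

    new_ptr = func->old_addr - (void *)_start + vmap_of_xen_text;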

> +
> +    /* PATCH! */
> +    *(new_ptr) = cpu_to_le32(insn);
> +
> +    clean_and_invalidate_dcache_va_range(new_ptr, sizeof(*new_ptr));
> +}
> +
> +int arch_livepatch_verify_elf(const struct livepatch_elf *elf)
> +{
> +    const Elf_Ehdr *hdr = elf->hdr;
> +
> +    if ( hdr->e_machine != EM_AARCH64 ||
> +         hdr->e_ident[EI_CLASS] != ELFCLASS64 )
> +    {
> +        dprintk(XENLOG_ERR, LIVEPATCH "%s: Unsupported ELF Machine type!\n",
> +                elf->name);
> +        return -EOPNOTSUPP;
> +    }
> +
> +    return 0;
> +}
> +
> +enum aarch64_reloc_op {
> +    RELOC_OP_NONE,
> +    RELOC_OP_ABS,
> +    RELOC_OP_PREL,
> +    RELOC_OP_PAGE,
> +};
> +
> +static u64 do_reloc(enum aarch64_reloc_op reloc_op, void *place, u64 val)
> +{
> +    switch (reloc_op) {

This seems to be a left-over of the Linux coding style.
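
I.e. in Xen style (a sketch, which also folds in the blank-line nit
further down):

    static u64 do_reloc(enum aarch64_reloc_op reloc_op, void *place, u64 val)
    {
        switch ( reloc_op )
        {
        case RELOC_OP_ABS:
            return val;

        case RELOC_OP_PREL:
            return val - (u64)place;

        case RELOC_OP_PAGE:
            return (val & ~0xfff) - ((u64)place & ~0xfff);

        case RELOC_OP_NONE:
            return 0;
        }

        dprintk(XENLOG_DEBUG, LIVEPATCH "do_reloc: unknown relocation operation %d\n",
                reloc_op);

        return 0;
    }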

> +    case RELOC_OP_ABS:
> +        return val;
> +
> +    case RELOC_OP_PREL:
> +        return val - (u64)place;
> +
> +    case RELOC_OP_PAGE:
> +        return (val & ~0xfff) - ((u64)place & ~0xfff);
> +
> +    case RELOC_OP_NONE:
> +        return 0;
> +
> +    }
> +
> +    dprintk(XENLOG_DEBUG, LIVEPATCH "do_reloc: unknown relocation operation %d\n", reloc_op);

NIT: Please add a blank line here.

> +    return 0;
> +}
> +
> +static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len)
> +{
> +    s64 sval = do_reloc(op, place, val);
> +
> +    switch (len) {

Same question about coding style here.

> +    case 16:
> +        *(s16 *)place = sval;
> +        if (sval < S16_MIN || sval > U16_MAX)

Ditto

> +	        return -EOVERFLOW;
> +        break;
> +
> +    case 32:
> +        *(s32 *)place = sval;
> +        if (sval < S32_MIN || sval > U32_MAX)

Ditto

> +	        return -EOVERFLOW;
> +        break;
> +
> +    case 64:
> +        *(s64 *)place = sval;
> +        break;
> +
> +    default:
> +        dprintk(XENLOG_DEBUG, LIVEPATCH "Invalid length (%d) for data relocation\n", len);
> +        return 0;
> +    }
> +
> +    return 0;
> +}
> +
> +enum aarch64_insn_movw_imm_type {
> +    AARCH64_INSN_IMM_MOVNZ,
> +    AARCH64_INSN_IMM_MOVKZ,
> +};
> +
> +static int reloc_insn_imm(enum aarch64_reloc_op op, void *dest, u64 val,
> +                          int lsb, int len, enum aarch64_insn_imm_type imm_type)
> +{
> +    u64 imm, imm_mask;
> +    s64 sval;
> +    u32 insn = *(u32 *)dest;
> +
> +    /* Calculate the relocation value. */
> +    sval = do_reloc(op, dest, val);
> +    sval >>= lsb;
> +
> +    /* Extract the value bits and shift them to bit 0. */
> +    imm_mask = (BIT(lsb + len) - 1) >> lsb;
> +    imm = sval & imm_mask;
> +
> +    /* Update the instruction's immediate field. */
> +    insn = aarch64_insn_encode_immediate(imm_type, insn, imm);
> +    *(u32 *)dest = insn;
> +
> +    /*
> +     * Extract the upper value bits (including the sign bit) and
> +     * shift them to bit 0.
> +     */
> +    sval = (s64)(sval & ~(imm_mask >> 1)) >> (len - 1);
> +
> +    /*
> +     * Overflow has occurred if the upper bits are not all equal to
> +     * the sign bit of the value.
> +     */
> +    if ((u64)(sval + 1) >= 2)

Ditto

> +        return -EOVERFLOW;
> +    return 0;
> +}
> +
> +int arch_livepatch_perform_rela(struct livepatch_elf *elf,
> +                                const struct livepatch_elf_sec *base,
> +                                const struct livepatch_elf_sec *rela)
> +{
> +    const Elf_RelA *r;
> +    unsigned int symndx, i;
> +    uint64_t val;
> +    void *dest;
> +    bool_t overflow_check;
> +
> +    for ( i = 0; i < (rela->sec->sh_size / rela->sec->sh_entsize); i++ )
> +    {
> +        int ovf = 0;
> +
> +        r = rela->data + i * rela->sec->sh_entsize;
> +
> +        symndx = ELF64_R_SYM(r->r_info);
> +
> +        if ( symndx > elf->nsym )
> +        {
> +            dprintk(XENLOG_ERR, LIVEPATCH "%s: Relative relocation wants symbol@%u which is past end!\n",
> +                    elf->name, symndx);
> +            return -EINVAL;
> +        }
> +
> +        dest = base->load_addr + r->r_offset; /* P */
> +        val = elf->sym[symndx].sym->st_value +  r->r_addend; /* S+A */
> +
> +        overflow_check = 1;

Please use true here.

> +
> +        /* ARM64 operations at minimum are always 32-bit. */
> +        if ( r->r_offset >= base->sec->sh_size ||
> +            (r->r_offset + sizeof(uint32_t)) > base->sec->sh_size )
> +            goto bad_offset;
> +
> +        switch ( ELF64_R_TYPE(r->r_info) ) {

This seems to be a left-over of the Linux coding style.

> +        /* Data */
> +        case R_AARCH64_ABS64:
> +            if ( r->r_offset + sizeof(uint64_t) > base->sec->sh_size )
> +                goto bad_offset;
> +            overflow_check = false;
> +            ovf = reloc_data(RELOC_OP_ABS, dest, val, 64);
> +            break;
> +
> +        case R_AARCH64_ABS32:
> +            ovf = reloc_data(RELOC_OP_ABS, dest, val, 32);
> +            break;

I have noticed that not all the relocations are implemented (e.g.
R_AARCH64_ABS16, R_AARCH64_MOVW_*...). I don't think there is anything
preventing the compiler from using them. So is there any particular reason
not to include them?
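
For example, R_AARCH64_ABS16 (259 in the AAELF64 spec) would presumably
slot in next to the other data relocations (a sketch; reloc_data()
already handles len == 16):

        case R_AARCH64_ABS16:
            ovf = reloc_data(RELOC_OP_ABS, dest, val, 16);
            break;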

[...]

> diff --git a/xen/arch/arm/livepatch.c b/xen/arch/arm/livepatch.c
> index 755f596..f49e347 100644
> --- a/xen/arch/arm/livepatch.c
> +++ b/xen/arch/arm/livepatch.c
> @@ -6,44 +6,80 @@
>  #include <xen/lib.h>
>  #include <xen/livepatch_elf.h>
>  #include <xen/livepatch.h>
> +#include <xen/vmap.h>
> +
> +#include <asm/livepatch.h>
> +#include <asm/mm.h>
> +
> +void *vmap_of_xen_text;
>
>  int arch_livepatch_quiesce(void)
>  {
> -    return -ENOSYS;
> +    mfn_t text_mfn;
> +    unsigned int text_order;
> +
> +    if ( vmap_of_xen_text )
> +        return -EINVAL;
> +
> +    text_mfn = _mfn(virt_to_mfn(_start));
> +    text_order = get_order_from_bytes(_end - _start);
> +
> +    /*
> +     * The text section is read-only. So re-map Xen to be able to patch
> +     * the code.
> +     */
> +    vmap_of_xen_text = __vmap(&text_mfn, 1 << text_order, 1, 1, PAGE_HYPERVISOR,

NIT: Could you use 1U here?

> +                              VMAP_DEFAULT);
> +
> +    if ( !vmap_of_xen_text )
> +    {
> +        printk(XENLOG_ERR LIVEPATCH "Failed to setup vmap of hypervisor! (order=%u)\n",
> +               text_order);
> +        return -ENOMEM;
> +    }

NIT: Missing blank line here.

> +    return 0;
>  }

[...]

>  int arch_livepatch_secure(const void *va, unsigned int pages, enum va_type type)
>  {
> -    return -ENOSYS;
> +    unsigned long start = (unsigned long)va;
> +    unsigned int flags = 0;
> +
> +    ASSERT(va);
> +    ASSERT(pages);
> +
> +    switch ( type ) {

NIT: The brace should be on a newline.

> +    case LIVEPATCH_VA_RX:
> +        flags = PTE_RO; /* R set, NX clear */
> +        break;
> +
> +    case LIVEPATCH_VA_RW:
> +        flags = PTE_NX; /* R clear, NX set */
> +        break;
> +
> +    case LIVEPATCH_VA_RO:
> +        flags = PTE_NX | PTE_RO; /* R set, NX set */
> +        break;
> +
> +    default:
> +        return -EINVAL;
> +    }
> +
> +    return modify_xen_mappings(start, start + pages * PAGE_SIZE, flags);
>  }

Regards,
Konrad Rzeszutek Wilk Sept. 7, 2016, 2:58 a.m. UTC | #3
On Thu, Aug 25, 2016 at 09:02:48AM -0600, Jan Beulich wrote:
> >>> On 25.08.16 at 15:37, <konrad.wilk@oracle.com> wrote:
> > --- a/xen/include/xen/types.h
> > +++ b/xen/include/xen/types.h
> > @@ -14,6 +14,12 @@
> >  #define NULL ((void*)0)
> >  #endif
> >  
> > +#define U16_MAX         ((u16)~0U)
> > +#define S16_MAX         ((s16)(U16_MAX>>1))
> > +#define S16_MIN         ((s16)(-S16_MAX - 1))
> > +#define U32_MAX         ((u32)~0U)
> > +#define S32_MAX         ((s32)(U32_MAX>>1))
> > +#define S32_MIN         ((s32)(-S32_MAX - 1))
> 
> These are rather strange constants: Fixed width types necessarily
> always have the same boundaries of representable values. Otoh
> the C standard has such constants too - maybe if we really want
> them we should rather use their names?

I got these from the Linux kernel - which may have added these
before the C99 standard.

But I agree that using the C ones should make this easier, so I pulled
them in from /usr/include/stdint.h.
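
For reference, the C99 <stdint.h> spellings Jan alludes to (values as the
standard mandates; Xen would define them itself rather than include the
host header):

    #define INT16_MAX   0x7fff
    #define INT16_MIN   (-INT16_MAX - 1)
    #define UINT16_MAX  0xffffU
    #define INT32_MAX   0x7fffffff
    #define INT32_MIN   (-INT32_MAX - 1)
    #define UINT32_MAX  0xffffffffU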

Patch

diff --git a/MAINTAINERS b/MAINTAINERS
index 97720a8..ae0b6bc 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -269,6 +269,7 @@  S:  Supported
 F:  docs/misc/livepatch.markdown
 F:  tools/misc/xen-livepatch.c
 F:  xen/arch/*/livepatch*
+F:  xen/arch/*/*/livepatch*
 F:  xen/common/livepatch*
 F:  xen/include/xen/livepatch*
 
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 0a96713..9f75c5c 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -58,6 +58,15 @@  ALL_OBJS := $(TARGET_SUBARCH)/head.o $(ALL_OBJS)
 
 DEPS += $(TARGET_SUBARCH)/.head.o.d
 
+ifdef CONFIG_LIVEPATCH
+all_symbols = --all-symbols
+ifdef CONFIG_FAST_SYMBOL_LOOKUP
+all_symbols = --all-symbols --sort-by-name
+endif
+else
+all_symbols =
+endif
+
 $(TARGET): $(TARGET)-syms $(TARGET).axf
 	$(OBJCOPY) -O binary -S $< $@
 ifeq ($(CONFIG_ARM_64),y)
@@ -93,12 +102,12 @@  $(TARGET)-syms: prelink.o xen.lds $(BASEDIR)/common/symbols-dummy.o
 	$(LD) $(LDFLAGS) -T xen.lds -N prelink.o \
 	    $(BASEDIR)/common/symbols-dummy.o -o $(@D)/.$(@F).0
 	$(NM) -pa --format=sysv $(@D)/.$(@F).0 \
-		| $(BASEDIR)/tools/symbols --sysv --sort >$(@D)/.$(@F).0.S
+		| $(BASEDIR)/tools/symbols $(all_symbols) --sysv --sort >$(@D)/.$(@F).0.S
 	$(MAKE) -f $(BASEDIR)/Rules.mk $(@D)/.$(@F).0.o
 	$(LD) $(LDFLAGS) -T xen.lds -N prelink.o \
 	    $(@D)/.$(@F).0.o -o $(@D)/.$(@F).1
 	$(NM) -pa --format=sysv $(@D)/.$(@F).1 \
-		| $(BASEDIR)/tools/symbols --sysv --sort >$(@D)/.$(@F).1.S
+		| $(BASEDIR)/tools/symbols $(all_symbols) --sysv --sort >$(@D)/.$(@F).1.S
 	$(MAKE) -f $(BASEDIR)/Rules.mk $(@D)/.$(@F).1.o
 	$(LD) $(LDFLAGS) -T xen.lds -N prelink.o $(build_id_linker) \
 	    $(@D)/.$(@F).1.o -o $@
diff --git a/xen/arch/arm/arm32/Makefile b/xen/arch/arm/arm32/Makefile
index b20db64..4395693 100644
--- a/xen/arch/arm/arm32/Makefile
+++ b/xen/arch/arm/arm32/Makefile
@@ -4,6 +4,7 @@  obj-$(EARLY_PRINTK) += debug.o
 obj-y += domctl.o
 obj-y += domain.o
 obj-y += entry.o
+obj-$(CONFIG_LIVEPATCH) += livepatch.o
 obj-y += proc-v7.o proc-caxx.o
 obj-y += smpboot.o
 obj-y += traps.o
diff --git a/xen/arch/arm/arm32/livepatch.c b/xen/arch/arm/arm32/livepatch.c
new file mode 100644
index 0000000..c33b68d
--- /dev/null
+++ b/xen/arch/arm/arm32/livepatch.c
@@ -0,0 +1,38 @@ 
+/*
+ *  Copyright (c) 2016 Oracle and/or its affiliates. All rights reserved.
+ */
+
+#include <xen/errno.h>
+#include <xen/lib.h>
+#include <xen/livepatch_elf.h>
+#include <xen/livepatch.h>
+
+void arch_livepatch_apply_jmp(struct livepatch_func *func)
+{
+}
+
+void arch_livepatch_revert_jmp(const struct livepatch_func *func)
+{
+}
+
+int arch_livepatch_verify_elf(const struct livepatch_elf *elf)
+{
+    return -EOPNOTSUPP;
+}
+
+int arch_livepatch_perform_rela(struct livepatch_elf *elf,
+                                const struct livepatch_elf_sec *base,
+                                const struct livepatch_elf_sec *rela)
+{
+    return -ENOSYS;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
index c1fa43f..149b6b3 100644
--- a/xen/arch/arm/arm64/Makefile
+++ b/xen/arch/arm/arm64/Makefile
@@ -6,6 +6,7 @@  obj-y += domctl.o
 obj-y += domain.o
 obj-y += entry.o
 obj-y += insn.o
+obj-$(CONFIG_LIVEPATCH) += livepatch.o
 obj-y += smpboot.o
 obj-y += traps.o
 obj-y += vfp.o
diff --git a/xen/arch/arm/arm64/livepatch.c b/xen/arch/arm/arm64/livepatch.c
new file mode 100644
index 0000000..0809a42
--- /dev/null
+++ b/xen/arch/arm/arm64/livepatch.c
@@ -0,0 +1,323 @@ 
+/*
+ *  Copyright (c) 2016 Oracle and/or its affiliates. All rights reserved.
+ */
+
+#include <xen/bitops.h>
+#include <xen/errno.h>
+#include <xen/lib.h>
+#include <xen/livepatch_elf.h>
+#include <xen/livepatch.h>
+#include <xen/mm.h>
+#include <xen/vmap.h>
+
+#include <asm/bitops.h>
+#include <asm/byteorder.h>
+#include <asm/insn.h>
+#include <asm/livepatch.h>
+
+void arch_livepatch_apply_jmp(struct livepatch_func *func)
+{
+    uint32_t insn;
+    uint32_t *old_ptr;
+    uint32_t *new_ptr;
+
+    BUILD_BUG_ON(PATCH_INSN_SIZE > sizeof(func->opaque));
+    BUILD_BUG_ON(PATCH_INSN_SIZE != sizeof(insn));
+
+    ASSERT(vmap_of_xen_text);
+
+    /* Save old one. */
+    old_ptr = func->old_addr;
+    memcpy(func->opaque, old_ptr, PATCH_INSN_SIZE);
+
+    if ( func->new_addr )
+        insn = aarch64_insn_gen_branch_imm((unsigned long)func->old_addr,
+                                           (unsigned long)func->new_addr,
+                                           AARCH64_INSN_BRANCH_NOLINK);
+    else
+        insn = aarch64_insn_gen_nop();
+
+    ASSERT(insn != AARCH64_BREAK_FAULT);
+    new_ptr = func->old_addr - (void *)_start + vmap_of_xen_text;
+
+    /* PATCH! */
+    *(new_ptr) = cpu_to_le32(insn);
+
+    clean_and_invalidate_dcache_va_range(new_ptr, sizeof(*new_ptr));
+}
+
+void arch_livepatch_revert_jmp(const struct livepatch_func *func)
+{
+    uint32_t *new_ptr;
+    uint32_t insn;
+
+    memcpy(&insn, func->opaque, PATCH_INSN_SIZE);
+
+    new_ptr = (uint32_t *)func->old_addr - (u32 *)_start + vmap_of_xen_text;
+
+    /* PATCH! */
+    *(new_ptr) = cpu_to_le32(insn);
+
+    clean_and_invalidate_dcache_va_range(new_ptr, sizeof(*new_ptr));
+}
+
+int arch_livepatch_verify_elf(const struct livepatch_elf *elf)
+{
+    const Elf_Ehdr *hdr = elf->hdr;
+
+    if ( hdr->e_machine != EM_AARCH64 ||
+         hdr->e_ident[EI_CLASS] != ELFCLASS64 )
+    {
+        dprintk(XENLOG_ERR, LIVEPATCH "%s: Unsupported ELF Machine type!\n",
+                elf->name);
+        return -EOPNOTSUPP;
+    }
+
+    return 0;
+}
+
+enum aarch64_reloc_op {
+    RELOC_OP_NONE,
+    RELOC_OP_ABS,
+    RELOC_OP_PREL,
+    RELOC_OP_PAGE,
+};
+
+static u64 do_reloc(enum aarch64_reloc_op reloc_op, void *place, u64 val)
+{
+    switch (reloc_op) {
+    case RELOC_OP_ABS:
+        return val;
+
+    case RELOC_OP_PREL:
+        return val - (u64)place;
+
+    case RELOC_OP_PAGE:
+        return (val & ~0xfff) - ((u64)place & ~0xfff);
+
+    case RELOC_OP_NONE:
+        return 0;
+
+    }
+
+    dprintk(XENLOG_DEBUG, LIVEPATCH "do_reloc: unknown relocation operation %d\n", reloc_op);
+    return 0;
+}
+
+static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len)
+{
+    s64 sval = do_reloc(op, place, val);
+
+    switch (len) {
+    case 16:
+        *(s16 *)place = sval;
+        if (sval < S16_MIN || sval > U16_MAX)
+	        return -EOVERFLOW;
+        break;
+
+    case 32:
+        *(s32 *)place = sval;
+        if (sval < S32_MIN || sval > U32_MAX)
+	        return -EOVERFLOW;
+        break;
+
+    case 64:
+        *(s64 *)place = sval;
+        break;
+
+    default:
+        dprintk(XENLOG_DEBUG, LIVEPATCH "Invalid length (%d) for data relocation\n", len);
+        return 0;
+    }
+
+    return 0;
+}
+
+enum aarch64_insn_movw_imm_type {
+    AARCH64_INSN_IMM_MOVNZ,
+    AARCH64_INSN_IMM_MOVKZ,
+};
+
+static int reloc_insn_imm(enum aarch64_reloc_op op, void *dest, u64 val,
+                          int lsb, int len, enum aarch64_insn_imm_type imm_type)
+{
+    u64 imm, imm_mask;
+    s64 sval;
+    u32 insn = *(u32 *)dest;
+
+    /* Calculate the relocation value. */
+    sval = do_reloc(op, dest, val);
+    sval >>= lsb;
+
+    /* Extract the value bits and shift them to bit 0. */
+    imm_mask = (BIT(lsb + len) - 1) >> lsb;
+    imm = sval & imm_mask;
+
+    /* Update the instruction's immediate field. */
+    insn = aarch64_insn_encode_immediate(imm_type, insn, imm);
+    *(u32 *)dest = insn;
+
+    /*
+     * Extract the upper value bits (including the sign bit) and
+     * shift them to bit 0.
+     */
+    sval = (s64)(sval & ~(imm_mask >> 1)) >> (len - 1);
+
+    /*
+     * Overflow has occurred if the upper bits are not all equal to
+     * the sign bit of the value.
+     */
+    if ((u64)(sval + 1) >= 2)
+        return -EOVERFLOW;
+    return 0;
+}
+
+int arch_livepatch_perform_rela(struct livepatch_elf *elf,
+                                const struct livepatch_elf_sec *base,
+                                const struct livepatch_elf_sec *rela)
+{
+    const Elf_RelA *r;
+    unsigned int symndx, i;
+    uint64_t val;
+    void *dest;
+    bool_t overflow_check;
+
+    for ( i = 0; i < (rela->sec->sh_size / rela->sec->sh_entsize); i++ )
+    {
+        int ovf = 0;
+
+        r = rela->data + i * rela->sec->sh_entsize;
+
+        symndx = ELF64_R_SYM(r->r_info);
+
+        if ( symndx > elf->nsym )
+        {
+            dprintk(XENLOG_ERR, LIVEPATCH "%s: Relative relocation wants symbol@%u which is past end!\n",
+                    elf->name, symndx);
+            return -EINVAL;
+        }
+
+        dest = base->load_addr + r->r_offset; /* P */
+        val = elf->sym[symndx].sym->st_value +  r->r_addend; /* S+A */
+
+        overflow_check = 1;
+
+        /* ARM64 operations at minimum are always 32-bit. */
+        if ( r->r_offset >= base->sec->sh_size ||
+            (r->r_offset + sizeof(uint32_t)) > base->sec->sh_size )
+            goto bad_offset;
+
+        switch ( ELF64_R_TYPE(r->r_info) ) {
+        /* Data */
+        case R_AARCH64_ABS64:
+            if ( r->r_offset + sizeof(uint64_t) > base->sec->sh_size )
+                goto bad_offset;
+            overflow_check = false;
+            ovf = reloc_data(RELOC_OP_ABS, dest, val, 64);
+            break;
+
+        case R_AARCH64_ABS32:
+            ovf = reloc_data(RELOC_OP_ABS, dest, val, 32);
+            break;
+
+        case R_AARCH64_PREL64:
+            if ( r->r_offset + sizeof(uint64_t) > base->sec->sh_size )
+                goto bad_offset;
+            overflow_check = false;
+            ovf = reloc_data(RELOC_OP_PREL, dest, val, 64);
+            break;
+
+        case R_AARCH64_PREL32:
+            ovf = reloc_data(RELOC_OP_PREL, dest, val, 32);
+            break;
+
+        /* Instructions. */
+        case R_AARCH64_ADR_PREL_LO21:
+            ovf = reloc_insn_imm(RELOC_OP_PREL, dest, val, 0, 21,
+                                 AARCH64_INSN_IMM_ADR);
+            break;
+
+        case R_AARCH64_ADR_PREL_PG_HI21_NC:
+            overflow_check = false;
+        case R_AARCH64_ADR_PREL_PG_HI21:
+            ovf = reloc_insn_imm(RELOC_OP_PAGE, dest, val, 12, 21,
+                                 AARCH64_INSN_IMM_ADR);
+            break;
+
+        case R_AARCH64_LDST8_ABS_LO12_NC:
+        case R_AARCH64_ADD_ABS_LO12_NC:
+            overflow_check = false;
+            ovf = reloc_insn_imm(RELOC_OP_ABS, dest, val, 0, 12,
+                                 AARCH64_INSN_IMM_12);
+            break;
+
+        case R_AARCH64_LDST16_ABS_LO12_NC:
+            overflow_check = false;
+            ovf = reloc_insn_imm(RELOC_OP_ABS, dest, val, 1, 11,
+                                 AARCH64_INSN_IMM_12);
+            break;
+
+        case R_AARCH64_LDST32_ABS_LO12_NC:
+            overflow_check = false;
+            ovf = reloc_insn_imm(RELOC_OP_ABS, dest, val, 2, 10,
+                                 AARCH64_INSN_IMM_12);
+            break;
+
+        case R_AARCH64_LDST64_ABS_LO12_NC:
+            overflow_check = false;
+            ovf = reloc_insn_imm(RELOC_OP_ABS, dest, val, 3, 9,
+                                 AARCH64_INSN_IMM_12);
+            break;
+
+        case R_AARCH64_LDST128_ABS_LO12_NC:
+            overflow_check = false;
+            ovf = reloc_insn_imm(RELOC_OP_ABS, dest, val, 4, 8,
+                                 AARCH64_INSN_IMM_12);
+            break;
+
+        case R_AARCH64_TSTBR14:
+            ovf = reloc_insn_imm(RELOC_OP_PREL, dest, val, 2, 19,
+                                 AARCH64_INSN_IMM_14);
+            break;
+
+        case R_AARCH64_CONDBR19:
+            ovf = reloc_insn_imm(RELOC_OP_PREL, dest, val, 2, 19,
+                                 AARCH64_INSN_IMM_19);
+            break;
+
+        case R_AARCH64_JUMP26:
+        case R_AARCH64_CALL26:
+            ovf = reloc_insn_imm(RELOC_OP_PREL, dest, val, 2, 26,
+                                 AARCH64_INSN_IMM_26);
+            break;
+
+        default:
+            dprintk(XENLOG_ERR, LIVEPATCH "%s: Unhandled relocation %lu\n",
+                    elf->name, ELF64_R_TYPE(r->r_info));
+            return -EOPNOTSUPP;
+        }
+
+        if ( overflow_check && ovf == -EOVERFLOW )
+        {
+            dprintk(XENLOG_ERR, LIVEPATCH "%s: Overflow in relocation %u in %s for %s!\n",
+                    elf->name, i, rela->name, base->name);
+            return ovf;
+        }
+    }
+    return 0;
+
+ bad_offset:
+    dprintk(XENLOG_ERR, LIVEPATCH "%s: Relative relocation offset is past %s section!\n",
+            elf->name, base->name);
+    return -EINVAL;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 20bb2ba..607ee59 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -13,6 +13,7 @@ 
 #include <xen/hypercall.h>
 #include <xen/init.h>
 #include <xen/lib.h>
+#include <xen/livepatch.h>
 #include <xen/sched.h>
 #include <xen/softirq.h>
 #include <xen/wait.h>
@@ -55,6 +56,11 @@  void idle_loop(void)
 
         do_tasklet();
         do_softirq();
+        /*
+         * We MUST be last (or before dsb, wfi). Otherwise after we get the
+         * softirq we would execute dsb,wfi (and sleep) and not patch.
+         */
+        check_for_livepatch_work();
     }
 }
 
diff --git a/xen/arch/arm/livepatch.c b/xen/arch/arm/livepatch.c
index 755f596..f49e347 100644
--- a/xen/arch/arm/livepatch.c
+++ b/xen/arch/arm/livepatch.c
@@ -6,44 +6,80 @@ 
 #include <xen/lib.h>
 #include <xen/livepatch_elf.h>
 #include <xen/livepatch.h>
+#include <xen/vmap.h>
+
+#include <asm/livepatch.h>
+#include <asm/mm.h>
+
+void *vmap_of_xen_text;
 
 int arch_livepatch_quiesce(void)
 {
-    return -ENOSYS;
+    mfn_t text_mfn;
+    unsigned int text_order;
+
+    if ( vmap_of_xen_text )
+        return -EINVAL;
+
+    text_mfn = _mfn(virt_to_mfn(_start));
+    text_order = get_order_from_bytes(_end - _start);
+
+    /*
+     * The text section is read-only. So re-map Xen to be able to patch
+     * the code.
+     */
+    vmap_of_xen_text = __vmap(&text_mfn, 1 << text_order, 1, 1, PAGE_HYPERVISOR,
+                              VMAP_DEFAULT);
+
+    if ( !vmap_of_xen_text )
+    {
+        printk(XENLOG_ERR LIVEPATCH "Failed to setup vmap of hypervisor! (order=%u)\n",
+               text_order);
+        return -ENOMEM;
+    }
+    return 0;
 }
 
 void arch_livepatch_revive(void)
 {
+    /*
+     * Nuke the instruction cache. Data cache has been cleaned before in
+     * arch_livepatch_apply_jmp.
+     */
+    invalidate_icache();
+
+    if ( vmap_of_xen_text )
+        vunmap(vmap_of_xen_text);
+
+    vmap_of_xen_text = NULL;
 }
 
 int arch_livepatch_verify_func(const struct livepatch_func *func)
 {
-    return -ENOSYS;
-}
+    /* If NOPing only do the insn size. */
+    if ( !func->new_addr && func->new_size != PATCH_INSN_SIZE )
+        return -EOPNOTSUPP;
 
-void arch_livepatch_apply_jmp(struct livepatch_func *func)
-{
-}
+    if ( func->old_size < PATCH_INSN_SIZE )
+        return -EINVAL;
 
-void arch_livepatch_revert_jmp(const struct livepatch_func *func)
-{
+    return 0;
 }
 
 void arch_livepatch_post_action(void)
 {
+    /* arch_livepatch_revive has nuked the instruction cache. */
 }
 
 void arch_livepatch_mask(void)
 {
+    /* Mask System Error (SError) */
+    local_abort_disable();
 }
 
 void arch_livepatch_unmask(void)
 {
-}
-
-int arch_livepatch_verify_elf(const struct livepatch_elf *elf)
-{
-    return -ENOSYS;
+    local_abort_enable();
 }
 
 int arch_livepatch_perform_rel(struct livepatch_elf *elf,
@@ -53,20 +89,42 @@  int arch_livepatch_perform_rel(struct livepatch_elf *elf,
     return -ENOSYS;
 }
 
-int arch_livepatch_perform_rela(struct livepatch_elf *elf,
-                                const struct livepatch_elf_sec *base,
-                                const struct livepatch_elf_sec *rela)
-{
-    return -ENOSYS;
-}
-
 int arch_livepatch_secure(const void *va, unsigned int pages, enum va_type type)
 {
-    return -ENOSYS;
+    unsigned long start = (unsigned long)va;
+    unsigned int flags = 0;
+
+    ASSERT(va);
+    ASSERT(pages);
+
+    switch ( type ) {
+    case LIVEPATCH_VA_RX:
+        flags = PTE_RO; /* R set, NX clear */
+        break;
+
+    case LIVEPATCH_VA_RW:
+        flags = PTE_NX; /* R clear, NX set */
+        break;
+
+    case LIVEPATCH_VA_RO:
+        flags = PTE_NX | PTE_RO; /* R set, NX set */
+        break;
+
+    default:
+        return -EINVAL;
+    }
+
+    return modify_xen_mappings(start, start + pages * PAGE_SIZE, flags);
 }
 
 void __init arch_livepatch_init(void)
 {
+    void *start, *end;
+
+    start = (void *)LIVEPATCH_VMAP_START;
+    end = (void *)LIVEPATCH_VMAP_END;
+
+    vm_init_type(VMAP_XEN, start, end);
 }
 
 /*
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 683bcb2..010bb16 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -24,6 +24,7 @@ 
 #include <xen/symbols.h>
 #include <xen/irq.h>
 #include <xen/lib.h>
+#include <xen/livepatch.h>
 #include <xen/mm.h>
 #include <xen/errno.h>
 #include <xen/hypercall.h>
@@ -2688,6 +2689,11 @@  asmlinkage void leave_hypervisor_tail(void)
         }
         local_irq_enable();
         do_softirq();
+        /*
+         * Must be the last one - as the IPI will trigger us to come here
+         * and we want to patch the hypervisor with almost no stack.
+         */
+        check_for_livepatch_work();
     }
 }
 
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index befa30e..03e4b50 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -225,7 +225,7 @@  config CRYPTO
 config LIVEPATCH
 	bool "Live patching support (TECH PREVIEW)"
 	default n
-	depends on X86 && HAS_BUILD_ID = "y"
+	depends on !ARM_32 && HAS_BUILD_ID = "y"
 	---help---
 	  Allows a running Xen hypervisor to be dynamically patched using
 	  binary patches without rebooting. This is primarily used to binarily
diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index 6772555..ba61f65 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -80,6 +80,7 @@ 
  *   4M -   6M   Fixmap: special-purpose 4K mapping slots
  *   6M -   8M   Early boot mapping of FDT
  *   8M -  10M   Early relocation address (used when relocating Xen)
+ *               and later for livepatch vmap (if compiled in)
  *
  * ARM32 layout:
  *   0  -  10M   <COMMON>
@@ -113,6 +114,10 @@ 
 #define FIXMAP_ADDR(n)        (_AT(vaddr_t,0x00400000) + (n) * PAGE_SIZE)
 #define BOOT_FDT_VIRT_START    _AT(vaddr_t,0x00600000)
 #define BOOT_RELOC_VIRT_START  _AT(vaddr_t,0x00800000)
+#ifdef CONFIG_LIVEPATCH
+#define LIVEPATCH_VMAP_START   _AT(vaddr_t,0x00800000)
+#define LIVEPATCH_VMAP_END     (LIVEPATCH_VMAP_START + MB(2))
+#endif
 
 #define HYPERVISOR_VIRT_START  XEN_VIRT_START
 
diff --git a/xen/include/asm-arm/livepatch.h b/xen/include/asm-arm/livepatch.h
new file mode 100644
index 0000000..8c8d625
--- /dev/null
+++ b/xen/include/asm-arm/livepatch.h
@@ -0,0 +1,28 @@ 
+/*
+ * Copyright (c) 2016 Oracle and/or its affiliates. All rights reserved.
+ *
+ */
+
+#ifndef __XEN_ARM_LIVEPATCH_H__
+#define __XEN_ARM_LIVEPATCH_H__
+
+/* On ARM32,64 instructions are always 4 bytes long. */
+#define PATCH_INSN_SIZE 4
+
+/*
+ * The va of the hypervisor .text region. We need this as the
+ * normal va are write protected.
+ */
+extern void *vmap_of_xen_text;
+
+#endif /* __XEN_ARM_LIVEPATCH_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/elfstructs.h b/xen/include/xen/elfstructs.h
index 5f2082e..3746268 100644
--- a/xen/include/xen/elfstructs.h
+++ b/xen/include/xen/elfstructs.h
@@ -177,6 +177,7 @@  typedef struct {
 #define EM_IA_64	50		/* Intel Merced */
 #define EM_X86_64	62		/* AMD x86-64 architecture */
 #define EM_VAX		75		/* DEC VAX */
+#define EM_AARCH64	183		/* ARM 64-bit */
 
 /* Version */
 #define EV_NONE		0		/* Invalid */
@@ -353,12 +354,43 @@  typedef struct {
 #define	ELF64_R_TYPE(info)	((info) & 0xFFFFFFFF)
 #define ELF64_R_INFO(s,t) 	(((s) << 32) + (u_int32_t)(t))
 
-/* x86-64 relocation types. We list only the ones Live Patch implements. */
+/*
+ * Relocation types for x86_64 and ARM 64. We list only the ones Live Patch
+ * implements.
+ */
 #define R_X86_64_NONE		0	/* No reloc */
 #define R_X86_64_64	    	1	/* Direct 64 bit  */
 #define R_X86_64_PC32		2	/* PC relative 32 bit signed */
 #define R_X86_64_PLT32		4	/* 32 bit PLT address */
 
+/*
+ * S - address of symbol.
+ * A - addend for relocation (r_addend)
+ * P - address of the dest being relocated (derived from r_offset)
+ * NC -  No check for overflow.
+ *
+ * The defines also use _PREL for PC-relative address, and _NC is No Check.
+ */
+#define R_AARCH64_ABS64			257 /* Direct 64 bit. S+A, NC*/
+#define R_AARCH64_ABS32			258 /* Direct 32 bit. S+A */
+#define R_AARCH64_PREL64		260 /* S+A-P, NC */
+#define R_AARCH64_PREL32		261 /* S+A-P */
+
+#define R_AARCH64_ADR_PREL_LO21		274 /* ADR imm, [20:0]. S+A-P */
+#define R_AARCH64_ADR_PREL_PG_HI21	275 /* ADRP imm, [32:12]. Page(S+A) - Page(P).*/
+#define R_AARCH64_ADR_PREL_PG_HI21_NC	276
+#define R_AARCH64_ADD_ABS_LO12_NC	277 /* ADD imm. [11:0]. S+A, NC */
+
+#define R_AARCH64_TSTBR14		279
+#define R_AARCH64_CONDBR19		280 /* Bits 20:2, S+A-P */
+#define R_AARCH64_JUMP26		282 /* Bits 27:2, S+A-P */
+#define R_AARCH64_CALL26		283 /* Bits 27:2, S+A-P */
+#define R_AARCH64_LDST16_ABS_LO12_NC	284 /* LD/ST to bits 11:1, S+A, NC */
+#define R_AARCH64_LDST32_ABS_LO12_NC	285 /* LD/ST to bits 11:2, S+A, NC */
+#define R_AARCH64_LDST64_ABS_LO12_NC	286 /* LD/ST to bits 11:3, S+A, NC */
+#define R_AARCH64_LDST8_ABS_LO12_NC	278 /* LD/ST to bits 11:0, S+A, NC */
+#define R_AARCH64_LDST128_ABS_LO12_NC	299
+
 /* Program Header */
 typedef struct {
 	Elf32_Word	p_type;		/* segment type */
diff --git a/xen/include/xen/types.h b/xen/include/xen/types.h
index 7bdc83b..707251c 100644
--- a/xen/include/xen/types.h
+++ b/xen/include/xen/types.h
@@ -14,6 +14,12 @@ 
 #define NULL ((void*)0)
 #endif
 
+#define U16_MAX         ((u16)~0U)
+#define S16_MAX         ((s16)(U16_MAX>>1))
+#define S16_MIN         ((s16)(-S16_MAX - 1))
+#define U32_MAX         ((u32)~0U)
+#define S32_MAX         ((s32)(U32_MAX>>1))
+#define S32_MIN         ((s32)(-S32_MAX - 1))
 #define INT_MAX         ((int)(~0U>>1))
 #define INT_MIN         (-INT_MAX - 1)
 #define UINT_MAX        (~0U)