[v4,0/8] Allwinner H6 Mali GPU support

Message ID: 20190512174608.10083-1-peron.clem@gmail.com

Message

Clément Péron May 12, 2019, 5:46 p.m. UTC
From: Clément Péron <peron.clem@gmail.com>

Hi,

The Allwinner H6 has a Mali-T720 MP2. The drivers are
out-of-tree, so this series only introduces the dt-bindings.

The first patch is from Neil Armstrong and has already been
merged in linux-amlogic. It is required for this series.

The second patch is from Icenowy Zheng; I changed the
order as required by Rob Herring.
See: https://patchwork.kernel.org/patch/10699829/

Thanks,
Clément

Changes in v4:
 - Add Rob Herring's Reviewed-by tag
 - Resent with the correct maintainers

Changes in v3 (Thanks to Maxime Ripard):
 - Restore Icenowy's authorship on her patch

Changes in v2 (Thanks to Maxime Ripard):
 - Drop GPU OPP Table
 - Add clocks and clock-names to the required properties

Clément Péron (6):
  dt-bindings: gpu: mali-midgard: Add H6 mali gpu compatible
  arm64: dts: allwinner: Add ARM Mali GPU node for H6
  arm64: dts: allwinner: Add mali GPU supply for Pine H64
  arm64: dts: allwinner: Add mali GPU supply for Beelink GS1
  arm64: dts: allwinner: Add mali GPU supply for OrangePi Boards
  arm64: dts: allwinner: Add mali GPU supply for OrangePi 3

Icenowy Zheng (1):
  dt-bindings: gpu: add bus clock for Mali Midgard GPUs

Neil Armstrong (1):
  dt-bindings: gpu: mali-midgard: Add resets property

 .../bindings/gpu/arm,mali-midgard.txt         | 27 +++++++++++++++++++
 .../dts/allwinner/sun50i-h6-beelink-gs1.dts   |  5 ++++
 .../dts/allwinner/sun50i-h6-orangepi-3.dts    |  5 ++++
 .../dts/allwinner/sun50i-h6-orangepi.dtsi     |  5 ++++
 .../boot/dts/allwinner/sun50i-h6-pine-h64.dts |  5 ++++
 arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi  | 14 ++++++++++
 6 files changed, 61 insertions(+)

Comments

Daniel Vetter May 13, 2019, 3:14 p.m. UTC | #1
On Sun, May 12, 2019 at 07:46:00PM +0200, peron.clem@gmail.com wrote:
> From: Clément Péron <peron.clem@gmail.com>
> 
> Hi,
> 
> The Allwinner H6 has a Mali-T720 MP2. The drivers are
> out-of-tree so this series only introduce the dt-bindings.

We do have an in-tree midgard driver now (since 5.2). Does this stuff work
together with your dt changes here?
-Daniel

> The first patch is from Neil Amstrong and has been already
> merged in linux-amlogic. It is required for this series.
> 
> The second patch is from Icenowy Zheng where I changed the
> order has required by Rob Herring.
> See: https://patchwork.kernel.org/patch/10699829/
> 
> Thanks,
> Clément
> 
> Changes in v4:
>  - Add Rob Herring reviewed-by tag
>  - Resent with correct Maintainers
> 
> Changes in v3 (Thanks to Maxime Ripard):
>  - Reauthor Icenowy for her patch
> 
> Changes in v2 (Thanks to Maxime Ripard):
>  - Drop GPU OPP Table
>  - Add clocks and clock-names in required
> 
> Clément Péron (6):
>   dt-bindings: gpu: mali-midgard: Add H6 mali gpu compatible
>   arm64: dts: allwinner: Add ARM Mali GPU node for H6
>   arm64: dts: allwinner: Add mali GPU supply for Pine H64
>   arm64: dts: allwinner: Add mali GPU supply for Beelink GS1
>   arm64: dts: allwinner: Add mali GPU supply for OrangePi Boards
>   arm64: dts: allwinner: Add mali GPU supply for OrangePi 3
> 
> Icenowy Zheng (1):
>   dt-bindings: gpu: add bus clock for Mali Midgard GPUs
> 
> Neil Armstrong (1):
>   dt-bindings: gpu: mali-midgard: Add resets property
> 
>  .../bindings/gpu/arm,mali-midgard.txt         | 27 +++++++++++++++++++
>  .../dts/allwinner/sun50i-h6-beelink-gs1.dts   |  5 ++++
>  .../dts/allwinner/sun50i-h6-orangepi-3.dts    |  5 ++++
>  .../dts/allwinner/sun50i-h6-orangepi.dtsi     |  5 ++++
>  .../boot/dts/allwinner/sun50i-h6-pine-h64.dts |  5 ++++
>  arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi  | 14 ++++++++++
>  6 files changed, 61 insertions(+)
> 
> -- 
> 2.17.1
>
Neil Armstrong May 14, 2019, 10:29 a.m. UTC | #2
Hi,

On 13/05/2019 17:14, Daniel Vetter wrote:
> On Sun, May 12, 2019 at 07:46:00PM +0200, peron.clem@gmail.com wrote:
>> From: Clément Péron <peron.clem@gmail.com>
>>
>> Hi,
>>
>> The Allwinner H6 has a Mali-T720 MP2. The drivers are
>> out-of-tree so this series only introduce the dt-bindings.
> 
> We do have an in-tree midgard driver now (since 5.2). Does this stuff work
> together with your dt changes here?

No, but it should be easy to add.

Clément, no need to resend the first patch; it's now in
Linus' master branch.

Could you also add support for the bus clock in panfrost
in the same patchset, since it's also in master now?
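
A minimal sketch of what optional bus clock handling could look like,
using the generic clk API (the helper name and the "bus" clock id are
assumptions for illustration, not the actual panfrost code):

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

/*
 * Illustrative sketch only: acquire an optional "bus" clock described in
 * the device tree and enable it at probe time. devm_clk_get_optional()
 * returns NULL (not an error) when no such clock is specified, so SoCs
 * without a separate bus clock keep working unchanged.
 */
static int example_bus_clk_init(struct device *dev, struct clk **bus_clock)
{
	*bus_clock = devm_clk_get_optional(dev, "bus");
	if (IS_ERR(*bus_clock))
		return PTR_ERR(*bus_clock);

	return clk_prepare_enable(*bus_clock);
}

The matching clk_disable_unprepare() would then go in the remove and
error paths.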

Neil

> -Daniel
> 
>> The first patch is from Neil Amstrong and has been already
>> merged in linux-amlogic. It is required for this series.
>>
>> The second patch is from Icenowy Zheng where I changed the
>> order has required by Rob Herring.
>> See: https://patchwork.kernel.org/patch/10699829/
>>
>> Thanks,
>> Clément
>>
>> Changes in v4:
>>  - Add Rob Herring reviewed-by tag
>>  - Resent with correct Maintainers
>>
>> Changes in v3 (Thanks to Maxime Ripard):
>>  - Reauthor Icenowy for her patch
>>
>> Changes in v2 (Thanks to Maxime Ripard):
>>  - Drop GPU OPP Table
>>  - Add clocks and clock-names in required
>>
>> Clément Péron (6):
>>   dt-bindings: gpu: mali-midgard: Add H6 mali gpu compatible
>>   arm64: dts: allwinner: Add ARM Mali GPU node for H6
>>   arm64: dts: allwinner: Add mali GPU supply for Pine H64
>>   arm64: dts: allwinner: Add mali GPU supply for Beelink GS1
>>   arm64: dts: allwinner: Add mali GPU supply for OrangePi Boards
>>   arm64: dts: allwinner: Add mali GPU supply for OrangePi 3
>>
>> Icenowy Zheng (1):
>>   dt-bindings: gpu: add bus clock for Mali Midgard GPUs
>>
>> Neil Armstrong (1):
>>   dt-bindings: gpu: mali-midgard: Add resets property
>>
>>  .../bindings/gpu/arm,mali-midgard.txt         | 27 +++++++++++++++++++
>>  .../dts/allwinner/sun50i-h6-beelink-gs1.dts   |  5 ++++
>>  .../dts/allwinner/sun50i-h6-orangepi-3.dts    |  5 ++++
>>  .../dts/allwinner/sun50i-h6-orangepi.dtsi     |  5 ++++
>>  .../boot/dts/allwinner/sun50i-h6-pine-h64.dts |  5 ++++
>>  arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi  | 14 ++++++++++
>>  6 files changed, 61 insertions(+)
>>
>> -- 
>> 2.17.1
>>
>
Clément Péron May 14, 2019, 3:17 p.m. UTC | #3
Hi,

On Tue, 14 May 2019 at 12:29, Neil Armstrong <narmstrong@baylibre.com> wrote:
>
> Hi,
>
> On 13/05/2019 17:14, Daniel Vetter wrote:
> > On Sun, May 12, 2019 at 07:46:00PM +0200, peron.clem@gmail.com wrote:
> >> From: Clément Péron <peron.clem@gmail.com>
> >>
> >> Hi,
> >>
> >> The Allwinner H6 has a Mali-T720 MP2. The drivers are
> >> out-of-tree so this series only introduce the dt-bindings.
> >
> > We do have an in-tree midgard driver now (since 5.2). Does this stuff work
> > together with your dt changes here?
>
> No, but it should be easy to add.
I will give it a try and let you know.

>
> Clément, no need to resend the first patch, it's now on
> linus master.
Ok

Thanks,
Clement

>
> Could you also add support for the bus clock in panfrost
> in the same patchset since it's also on master now ?
>
> Neil
>
> > -Daniel
> >
> >> The first patch is from Neil Amstrong and has been already
> >> merged in linux-amlogic. It is required for this series.
> >>
> >> The second patch is from Icenowy Zheng where I changed the
> >> order has required by Rob Herring.
> >> See: https://patchwork.kernel.org/patch/10699829/
> >>
> >> Thanks,
> >> Clément
> >>
> >> Changes in v4:
> >>  - Add Rob Herring reviewed-by tag
> >>  - Resent with correct Maintainers
> >>
> >> Changes in v3 (Thanks to Maxime Ripard):
> >>  - Reauthor Icenowy for her patch
> >>
> >> Changes in v2 (Thanks to Maxime Ripard):
> >>  - Drop GPU OPP Table
> >>  - Add clocks and clock-names in required
> >>
> >> Clément Péron (6):
> >>   dt-bindings: gpu: mali-midgard: Add H6 mali gpu compatible
> >>   arm64: dts: allwinner: Add ARM Mali GPU node for H6
> >>   arm64: dts: allwinner: Add mali GPU supply for Pine H64
> >>   arm64: dts: allwinner: Add mali GPU supply for Beelink GS1
> >>   arm64: dts: allwinner: Add mali GPU supply for OrangePi Boards
> >>   arm64: dts: allwinner: Add mali GPU supply for OrangePi 3
> >>
> >> Icenowy Zheng (1):
> >>   dt-bindings: gpu: add bus clock for Mali Midgard GPUs
> >>
> >> Neil Armstrong (1):
> >>   dt-bindings: gpu: mali-midgard: Add resets property
> >>
> >>  .../bindings/gpu/arm,mali-midgard.txt         | 27 +++++++++++++++++++
> >>  .../dts/allwinner/sun50i-h6-beelink-gs1.dts   |  5 ++++
> >>  .../dts/allwinner/sun50i-h6-orangepi-3.dts    |  5 ++++
> >>  .../dts/allwinner/sun50i-h6-orangepi.dtsi     |  5 ++++
> >>  .../boot/dts/allwinner/sun50i-h6-pine-h64.dts |  5 ++++
> >>  arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi  | 14 ++++++++++
> >>  6 files changed, 61 insertions(+)
> >>
> >> --
> >> 2.17.1
> >>
> >
>
Clément Péron May 14, 2019, 9:22 p.m. UTC | #4
Hi,

On Tue, 14 May 2019 at 17:17, Clément Péron <peron.clem@gmail.com> wrote:
>
> Hi,
>
> On Tue, 14 May 2019 at 12:29, Neil Armstrong <narmstrong@baylibre.com> wrote:
> >
> > Hi,
> >
> > On 13/05/2019 17:14, Daniel Vetter wrote:
> > > On Sun, May 12, 2019 at 07:46:00PM +0200, peron.clem@gmail.com wrote:
> > >> From: Clément Péron <peron.clem@gmail.com>
> > >>
> > >> Hi,
> > >>
> > >> The Allwinner H6 has a Mali-T720 MP2. The drivers are
> > >> out-of-tree so this series only introduce the dt-bindings.
> > >
> > > We do have an in-tree midgard driver now (since 5.2). Does this stuff work
> > > together with your dt changes here?
> >
> > No, but it should be easy to add.
> I will give it a try and let you know.
I added the bus_clock and a ramp delay to the gpu_vdd, but the driver
fails at probe.

[    3.052919] panfrost 1800000.gpu: clock rate = 432000000
[    3.058278] panfrost 1800000.gpu: bus_clock rate = 100000000
[    3.179772] panfrost 1800000.gpu: mali-t720 id 0x720 major 0x1
minor 0x1 status 0x0
[    3.187432] panfrost 1800000.gpu: features: 00000000,10309e40,
issues: 00000000,21054400
[    3.195531] panfrost 1800000.gpu: Features: L2:0x07110206
Shader:0x00000000 Tiler:0x00000809 Mem:0x1 MMU:0x00002821 AS:0xf
JS:0x7
[    3.207178] panfrost 1800000.gpu: shader_present=0x3 l2_present=0x1
[    3.238257] panfrost 1800000.gpu: Fatal error during GPU init
[    3.244165] panfrost: probe of 1800000.gpu failed with error -12

The ENOMEM is coming from panfrost_mmu_init(), in the call to:
alloc_io_pgtable_ops(ARM_MALI_LPAE, &pfdev->mmu->pgtbl_cfg, pfdev);

which fails because of the "cfg->ias != 48" check in the page table allocator:
arm-lpae io-pgtable: arm_mali_lpae_alloc_pgtable cfg->ias 33 cfg->oas 40

The DRI stack is totally new to me; could you give me a little clue about
this issue?

Thanks,
Clément

>
> >
> > Clément, no need to resend the first patch, it's now on
> > linus master.
> Ok
>
> Thanks,
> Clement
>
> >
> > Could you also add support for the bus clock in panfrost
> > in the same patchset since it's also on master now ?
> >
> > Neil
> >
> > > -Daniel
> > >
> > >> The first patch is from Neil Amstrong and has been already
> > >> merged in linux-amlogic. It is required for this series.
> > >>
> > >> The second patch is from Icenowy Zheng where I changed the
> > >> order has required by Rob Herring.
> > >> See: https://patchwork.kernel.org/patch/10699829/
> > >>
> > >> Thanks,
> > >> Clément
> > >>
> > >> Changes in v4:
> > >>  - Add Rob Herring reviewed-by tag
> > >>  - Resent with correct Maintainers
> > >>
> > >> Changes in v3 (Thanks to Maxime Ripard):
> > >>  - Reauthor Icenowy for her patch
> > >>
> > >> Changes in v2 (Thanks to Maxime Ripard):
> > >>  - Drop GPU OPP Table
> > >>  - Add clocks and clock-names in required
> > >>
> > >> Clément Péron (6):
> > >>   dt-bindings: gpu: mali-midgard: Add H6 mali gpu compatible
> > >>   arm64: dts: allwinner: Add ARM Mali GPU node for H6
> > >>   arm64: dts: allwinner: Add mali GPU supply for Pine H64
> > >>   arm64: dts: allwinner: Add mali GPU supply for Beelink GS1
> > >>   arm64: dts: allwinner: Add mali GPU supply for OrangePi Boards
> > >>   arm64: dts: allwinner: Add mali GPU supply for OrangePi 3
> > >>
> > >> Icenowy Zheng (1):
> > >>   dt-bindings: gpu: add bus clock for Mali Midgard GPUs
> > >>
> > >> Neil Armstrong (1):
> > >>   dt-bindings: gpu: mali-midgard: Add resets property
> > >>
> > >>  .../bindings/gpu/arm,mali-midgard.txt         | 27 +++++++++++++++++++
> > >>  .../dts/allwinner/sun50i-h6-beelink-gs1.dts   |  5 ++++
> > >>  .../dts/allwinner/sun50i-h6-orangepi-3.dts    |  5 ++++
> > >>  .../dts/allwinner/sun50i-h6-orangepi.dtsi     |  5 ++++
> > >>  .../boot/dts/allwinner/sun50i-h6-pine-h64.dts |  5 ++++
> > >>  arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi  | 14 ++++++++++
> > >>  6 files changed, 61 insertions(+)
> > >>
> > >> --
> > >> 2.17.1
> > >>
> > >
> >
Robin Murphy May 14, 2019, 9:56 p.m. UTC | #5
On 2019-05-14 10:22 pm, Clément Péron wrote:
> Hi,
> 
> On Tue, 14 May 2019 at 17:17, Clément Péron <peron.clem@gmail.com> wrote:
>>
>> Hi,
>>
>> On Tue, 14 May 2019 at 12:29, Neil Armstrong <narmstrong@baylibre.com> wrote:
>>>
>>> Hi,
>>>
>>> On 13/05/2019 17:14, Daniel Vetter wrote:
>>>> On Sun, May 12, 2019 at 07:46:00PM +0200, peron.clem@gmail.com wrote:
>>>>> From: Clément Péron <peron.clem@gmail.com>
>>>>>
>>>>> Hi,
>>>>>
>>>>> The Allwinner H6 has a Mali-T720 MP2. The drivers are
>>>>> out-of-tree so this series only introduce the dt-bindings.
>>>>
>>>> We do have an in-tree midgard driver now (since 5.2). Does this stuff work
>>>> together with your dt changes here?
>>>
>>> No, but it should be easy to add.
>> I will give it a try and let you know.
> Added the bus_clock and a ramp delay to the gpu_vdd but the driver
> fail at probe.
> 
> [    3.052919] panfrost 1800000.gpu: clock rate = 432000000
> [    3.058278] panfrost 1800000.gpu: bus_clock rate = 100000000
> [    3.179772] panfrost 1800000.gpu: mali-t720 id 0x720 major 0x1
> minor 0x1 status 0x0
> [    3.187432] panfrost 1800000.gpu: features: 00000000,10309e40,
> issues: 00000000,21054400
> [    3.195531] panfrost 1800000.gpu: Features: L2:0x07110206
> Shader:0x00000000 Tiler:0x00000809 Mem:0x1 MMU:0x00002821 AS:0xf
> JS:0x7
> [    3.207178] panfrost 1800000.gpu: shader_present=0x3 l2_present=0x1
> [    3.238257] panfrost 1800000.gpu: Fatal error during GPU init
> [    3.244165] panfrost: probe of 1800000.gpu failed with error -12
> 
> The ENOMEM is coming from "panfrost_mmu_init"
> alloc_io_pgtable_ops(ARM_MALI_LPAE, &pfdev->mmu->pgtbl_cfg,
>                                           pfdev);
> 
> Which is due to a check in the pgtable alloc "cfg->ias != 48"
> arm-lpae io-pgtable: arm_mali_lpae_alloc_pgtable cfg->ias 33 cfg->oas 40
> 
> DRI stack is totally new for me, could you give me a little clue about
> this issue ?

Heh, this is probably the one bit which doesn't really count as "DRI stack".

That's merely a somewhat-conservative sanity check - I'm pretty sure it 
*should* be fine to change the test to "cfg->ias > 48" (io-pgtable 
itself ought to cope). You'll just get to be the first to actually test 
a non-48-bit configuration here :)
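
A deliberately small sketch of that relaxation (struct io_pgtable_cfg and
its ias field come from include/linux/io-pgtable.h; the helper itself is
illustrative, not the upstream code):

#include <linux/io-pgtable.h>

/*
 * Instead of insisting on an input address size of exactly 48 bits,
 * only reject configurations larger than the format can express.
 * The H6's T720 reports a 33-bit IAS, which would then be accepted.
 */
static bool mali_lpae_ias_ok(const struct io_pgtable_cfg *cfg)
{
	return cfg->ias <= 48;
}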

Robin.
Clément Péron May 15, 2019, 10:05 p.m. UTC | #6
Hi Robin,

On Tue, 14 May 2019 at 23:57, Robin Murphy <robin.murphy@arm.com> wrote:
>
> On 2019-05-14 10:22 pm, Clément Péron wrote:
> > Hi,
> >
> > On Tue, 14 May 2019 at 17:17, Clément Péron <peron.clem@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> On Tue, 14 May 2019 at 12:29, Neil Armstrong <narmstrong@baylibre.com> wrote:
> >>>
> >>> Hi,
> >>>
> >>> On 13/05/2019 17:14, Daniel Vetter wrote:
> >>>> On Sun, May 12, 2019 at 07:46:00PM +0200, peron.clem@gmail.com wrote:
> >>>>> From: Clément Péron <peron.clem@gmail.com>
> >>>>>
> >>>>> Hi,
> >>>>>
> >>>>> The Allwinner H6 has a Mali-T720 MP2. The drivers are
> >>>>> out-of-tree so this series only introduce the dt-bindings.
> >>>>
> >>>> We do have an in-tree midgard driver now (since 5.2). Does this stuff work
> >>>> together with your dt changes here?
> >>>
> >>> No, but it should be easy to add.
> >> I will give it a try and let you know.
> > Added the bus_clock and a ramp delay to the gpu_vdd but the driver
> > fail at probe.
> >
> > [    3.052919] panfrost 1800000.gpu: clock rate = 432000000
> > [    3.058278] panfrost 1800000.gpu: bus_clock rate = 100000000
> > [    3.179772] panfrost 1800000.gpu: mali-t720 id 0x720 major 0x1
> > minor 0x1 status 0x0
> > [    3.187432] panfrost 1800000.gpu: features: 00000000,10309e40,
> > issues: 00000000,21054400
> > [    3.195531] panfrost 1800000.gpu: Features: L2:0x07110206
> > Shader:0x00000000 Tiler:0x00000809 Mem:0x1 MMU:0x00002821 AS:0xf
> > JS:0x7
> > [    3.207178] panfrost 1800000.gpu: shader_present=0x3 l2_present=0x1
> > [    3.238257] panfrost 1800000.gpu: Fatal error during GPU init
> > [    3.244165] panfrost: probe of 1800000.gpu failed with error -12
> >
> > The ENOMEM is coming from "panfrost_mmu_init"
> > alloc_io_pgtable_ops(ARM_MALI_LPAE, &pfdev->mmu->pgtbl_cfg,
> >                                           pfdev);
> >
> > Which is due to a check in the pgtable alloc "cfg->ias != 48"
> > arm-lpae io-pgtable: arm_mali_lpae_alloc_pgtable cfg->ias 33 cfg->oas 40
> >
> > DRI stack is totally new for me, could you give me a little clue about
> > this issue ?
>
> Heh, this is probably the one bit which doesn't really count as "DRI stack".
>
> That's merely a somewhat-conservative sanity check - I'm pretty sure it
> *should* be fine to change the test to "cfg->ias > 48" (io-pgtable
> itself ought to cope). You'll just get to be the first to actually test
> a non-48-bit configuration here :)

Thanks a lot, the probe seems fine now :)

I tried to run glmark2:
# glmark2-es2-drm
=======================================================
    glmark2 2017.07
=======================================================
    OpenGL Information
    GL_VENDOR:     panfrost
    GL_RENDERER:   panfrost
    GL_VERSION:    OpenGL ES 2.0 Mesa 19.1.0-rc2
=======================================================
[build] use-vbo=false:

But it seems that the H6 is not so easy to add :(

[  345.204813] panfrost 1800000.gpu: mmu irq status=1
[  345.209617] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
0x0000000002400400
[  345.209617] Reason: TODO
[  345.209617] raw fault status: 0x800002C1
[  345.209617] decoded fault status: SLAVE FAULT
[  345.209617] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
[  345.209617] access type 0x2: READ
[  345.209617] source id 0x8000
[  345.729957] panfrost 1800000.gpu: gpu sched timeout, js=0,
status=0x8, head=0x2400400, tail=0x2400400, sched_job=000000009e204de9
[  346.055876] panfrost 1800000.gpu: mmu irq status=1
[  346.060680] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
0x0000000002C00A00
[  346.060680] Reason: TODO
[  346.060680] raw fault status: 0x810002C1
[  346.060680] decoded fault status: SLAVE FAULT
[  346.060680] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
[  346.060680] access type 0x2: READ
[  346.060680] source id 0x8100
[  346.561955] panfrost 1800000.gpu: gpu sched timeout, js=1,
status=0x8, head=0x2c00a00, tail=0x2c00a00, sched_job=00000000b55a9a85
[  346.573913] panfrost 1800000.gpu: mmu irq status=1
[  346.578707] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
0x0000000002C00B80
[  346.578707] Reason: TODO
[  346.578707] raw fault status: 0x800002C1
[  346.578707] decoded fault status: SLAVE FAULT
[  346.578707] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
[  346.578707] access type 0x2: READ
[  346.578707] source id 0x8000
[  347.073947] panfrost 1800000.gpu: gpu sched timeout, js=0,
status=0x8, head=0x2c00b80, tail=0x2c00b80, sched_job=00000000cf6af8e8
[  347.104125] panfrost 1800000.gpu: mmu irq status=1
[  347.108930] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
0x0000000002800900
[  347.108930] Reason: TODO
[  347.108930] raw fault status: 0x810002C1
[  347.108930] decoded faultn thi status: SLAVE FAULT
[  347.108930] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
[  347.108930] access type 0x2: READ
[  347.108930] source id 0x8100
[  347.617950] panfrost 1800000.gpu: gpu sched timeout, js=1,
status=0x8, head=0x2800900, tail=0x2800900, sched_job=000000009325fdb7
[  347.629902] panfrost 1800000.gpu: mmu irq status=1
[  347.634696] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
0x0000000002800A80

Regards,
Clement

>
> Robin.
Rob Herring May 15, 2019, 11:22 p.m. UTC | #7
On Wed, May 15, 2019 at 5:06 PM Clément Péron <peron.clem@gmail.com> wrote:
>
> Hi Robin,
>
> On Tue, 14 May 2019 at 23:57, Robin Murphy <robin.murphy@arm.com> wrote:
> >
> > On 2019-05-14 10:22 pm, Clément Péron wrote:
> > > Hi,
> > >
> > > On Tue, 14 May 2019 at 17:17, Clément Péron <peron.clem@gmail.com> wrote:
> > >>
> > >> Hi,
> > >>
> > >> On Tue, 14 May 2019 at 12:29, Neil Armstrong <narmstrong@baylibre.com> wrote:
> > >>>
> > >>> Hi,
> > >>>
> > >>> On 13/05/2019 17:14, Daniel Vetter wrote:
> > >>>> On Sun, May 12, 2019 at 07:46:00PM +0200, peron.clem@gmail.com wrote:
> > >>>>> From: Clément Péron <peron.clem@gmail.com>
> > >>>>>
> > >>>>> Hi,
> > >>>>>
> > >>>>> The Allwinner H6 has a Mali-T720 MP2. The drivers are
> > >>>>> out-of-tree so this series only introduce the dt-bindings.
> > >>>>
> > >>>> We do have an in-tree midgard driver now (since 5.2). Does this stuff work
> > >>>> together with your dt changes here?
> > >>>
> > >>> No, but it should be easy to add.
> > >> I will give it a try and let you know.
> > > Added the bus_clock and a ramp delay to the gpu_vdd but the driver
> > > fail at probe.
> > >
> > > [    3.052919] panfrost 1800000.gpu: clock rate = 432000000
> > > [    3.058278] panfrost 1800000.gpu: bus_clock rate = 100000000
> > > [    3.179772] panfrost 1800000.gpu: mali-t720 id 0x720 major 0x1
> > > minor 0x1 status 0x0
> > > [    3.187432] panfrost 1800000.gpu: features: 00000000,10309e40,
> > > issues: 00000000,21054400
> > > [    3.195531] panfrost 1800000.gpu: Features: L2:0x07110206
> > > Shader:0x00000000 Tiler:0x00000809 Mem:0x1 MMU:0x00002821 AS:0xf
> > > JS:0x7
> > > [    3.207178] panfrost 1800000.gpu: shader_present=0x3 l2_present=0x1
> > > [    3.238257] panfrost 1800000.gpu: Fatal error during GPU init
> > > [    3.244165] panfrost: probe of 1800000.gpu failed with error -12
> > >
> > > The ENOMEM is coming from "panfrost_mmu_init"
> > > alloc_io_pgtable_ops(ARM_MALI_LPAE, &pfdev->mmu->pgtbl_cfg,
> > >                                           pfdev);
> > >
> > > Which is due to a check in the pgtable alloc "cfg->ias != 48"
> > > arm-lpae io-pgtable: arm_mali_lpae_alloc_pgtable cfg->ias 33 cfg->oas 40
> > >
> > > DRI stack is totally new for me, could you give me a little clue about
> > > this issue ?
> >
> > Heh, this is probably the one bit which doesn't really count as "DRI stack".
> >
> > That's merely a somewhat-conservative sanity check - I'm pretty sure it
> > *should* be fine to change the test to "cfg->ias > 48" (io-pgtable
> > itself ought to cope). You'll just get to be the first to actually test
> > a non-48-bit configuration here :)
>
> Thanks a lot, the probe seems fine now :)
>
> I try to run glmark2 :
> # glmark2-es2-drm
> =======================================================
>     glmark2 2017.07
> =======================================================
>     OpenGL Information
>     GL_VENDOR:     panfrost
>     GL_RENDERER:   panfrost
>     GL_VERSION:    OpenGL ES 2.0 Mesa 19.1.0-rc2
> =======================================================
> [build] use-vbo=false:
>
> But it seems that H6 is not so easy to add :(.
>
> [  345.204813] panfrost 1800000.gpu: mmu irq status=1
> [  345.209617] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> 0x0000000002400400
> [  345.209617] Reason: TODO
> [  345.209617] raw fault status: 0x800002C1
> [  345.209617] decoded fault status: SLAVE FAULT
> [  345.209617] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
> [  345.209617] access type 0x2: READ
> [  345.209617] source id 0x8000
> [  345.729957] panfrost 1800000.gpu: gpu sched timeout, js=0,
> status=0x8, head=0x2400400, tail=0x2400400, sched_job=000000009e204de9
> [  346.055876] panfrost 1800000.gpu: mmu irq status=1
> [  346.060680] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> 0x0000000002C00A00
> [  346.060680] Reason: TODO
> [  346.060680] raw fault status: 0x810002C1
> [  346.060680] decoded fault status: SLAVE FAULT
> [  346.060680] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
> [  346.060680] access type 0x2: READ
> [  346.060680] source id 0x8100
> [  346.561955] panfrost 1800000.gpu: gpu sched timeout, js=1,
> status=0x8, head=0x2c00a00, tail=0x2c00a00, sched_job=00000000b55a9a85
> [  346.573913] panfrost 1800000.gpu: mmu irq status=1
> [  346.578707] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> 0x0000000002C00B80
> [  346.578707] Reason: TODO
> [  346.578707] raw fault status: 0x800002C1
> [  346.578707] decoded fault status: SLAVE FAULT
> [  346.578707] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
> [  346.578707] access type 0x2: READ
> [  346.578707] source id 0x8000
> [  347.073947] panfrost 1800000.gpu: gpu sched timeout, js=0,
> status=0x8, head=0x2c00b80, tail=0x2c00b80, sched_job=00000000cf6af8e8
> [  347.104125] panfrost 1800000.gpu: mmu irq status=1
> [  347.108930] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> 0x0000000002800900
> [  347.108930] Reason: TODO
> [  347.108930] raw fault status: 0x810002C1
> [  347.108930] decoded faultn thi status: SLAVE FAULT
> [  347.108930] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
> [  347.108930] access type 0x2: READ
> [  347.108930] source id 0x8100
> [  347.617950] panfrost 1800000.gpu: gpu sched timeout, js=1,
> status=0x8, head=0x2800900, tail=0x2800900, sched_job=000000009325fdb7
> [  347.629902] panfrost 1800000.gpu: mmu irq status=1
> [  347.634696] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> 0x0000000002800A80

Is this 32 or 64 bit userspace? I think 64-bit does not work with
T7xx. You might need this[1]. You may also be the first to try T720,
so it could be something else.

Rob

[1] https://gitlab.freedesktop.org/mesa/mesa/merge_requests/650
Robin Murphy May 16, 2019, 11:19 a.m. UTC | #8
On 16/05/2019 00:22, Rob Herring wrote:
> On Wed, May 15, 2019 at 5:06 PM Clément Péron <peron.clem@gmail.com> wrote:
>>
>> Hi Robin,
>>
>> On Tue, 14 May 2019 at 23:57, Robin Murphy <robin.murphy@arm.com> wrote:
>>>
>>> On 2019-05-14 10:22 pm, Clément Péron wrote:
>>>> Hi,
>>>>
>>>> On Tue, 14 May 2019 at 17:17, Clément Péron <peron.clem@gmail.com> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> On Tue, 14 May 2019 at 12:29, Neil Armstrong <narmstrong@baylibre.com> wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> On 13/05/2019 17:14, Daniel Vetter wrote:
>>>>>>> On Sun, May 12, 2019 at 07:46:00PM +0200, peron.clem@gmail.com wrote:
>>>>>>>> From: Clément Péron <peron.clem@gmail.com>
>>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> The Allwinner H6 has a Mali-T720 MP2. The drivers are
>>>>>>>> out-of-tree so this series only introduce the dt-bindings.
>>>>>>>
>>>>>>> We do have an in-tree midgard driver now (since 5.2). Does this stuff work
>>>>>>> together with your dt changes here?
>>>>>>
>>>>>> No, but it should be easy to add.
>>>>> I will give it a try and let you know.
>>>> Added the bus_clock and a ramp delay to the gpu_vdd but the driver
>>>> fail at probe.
>>>>
>>>> [    3.052919] panfrost 1800000.gpu: clock rate = 432000000
>>>> [    3.058278] panfrost 1800000.gpu: bus_clock rate = 100000000
>>>> [    3.179772] panfrost 1800000.gpu: mali-t720 id 0x720 major 0x1
>>>> minor 0x1 status 0x0
>>>> [    3.187432] panfrost 1800000.gpu: features: 00000000,10309e40,
>>>> issues: 00000000,21054400
>>>> [    3.195531] panfrost 1800000.gpu: Features: L2:0x07110206
>>>> Shader:0x00000000 Tiler:0x00000809 Mem:0x1 MMU:0x00002821 AS:0xf
>>>> JS:0x7
>>>> [    3.207178] panfrost 1800000.gpu: shader_present=0x3 l2_present=0x1
>>>> [    3.238257] panfrost 1800000.gpu: Fatal error during GPU init
>>>> [    3.244165] panfrost: probe of 1800000.gpu failed with error -12
>>>>
>>>> The ENOMEM is coming from "panfrost_mmu_init"
>>>> alloc_io_pgtable_ops(ARM_MALI_LPAE, &pfdev->mmu->pgtbl_cfg,
>>>>                                            pfdev);
>>>>
>>>> Which is due to a check in the pgtable alloc "cfg->ias != 48"
>>>> arm-lpae io-pgtable: arm_mali_lpae_alloc_pgtable cfg->ias 33 cfg->oas 40
>>>>
>>>> DRI stack is totally new for me, could you give me a little clue about
>>>> this issue ?
>>>
>>> Heh, this is probably the one bit which doesn't really count as "DRI stack".
>>>
>>> That's merely a somewhat-conservative sanity check - I'm pretty sure it
>>> *should* be fine to change the test to "cfg->ias > 48" (io-pgtable
>>> itself ought to cope). You'll just get to be the first to actually test
>>> a non-48-bit configuration here :)
>>
>> Thanks a lot, the probe seems fine now :)
>>
>> I try to run glmark2 :
>> # glmark2-es2-drm
>> =======================================================
>>      glmark2 2017.07
>> =======================================================
>>      OpenGL Information
>>      GL_VENDOR:     panfrost
>>      GL_RENDERER:   panfrost
>>      GL_VERSION:    OpenGL ES 2.0 Mesa 19.1.0-rc2
>> =======================================================
>> [build] use-vbo=false:
>>
>> But it seems that H6 is not so easy to add :(.
>>
>> [  345.204813] panfrost 1800000.gpu: mmu irq status=1
>> [  345.209617] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
>> 0x0000000002400400
>> [  345.209617] Reason: TODO
>> [  345.209617] raw fault status: 0x800002C1
>> [  345.209617] decoded fault status: SLAVE FAULT
>> [  345.209617] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
>> [  345.209617] access type 0x2: READ
>> [  345.209617] source id 0x8000
>> [  345.729957] panfrost 1800000.gpu: gpu sched timeout, js=0,
>> status=0x8, head=0x2400400, tail=0x2400400, sched_job=000000009e204de9
>> [  346.055876] panfrost 1800000.gpu: mmu irq status=1
>> [  346.060680] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
>> 0x0000000002C00A00
>> [  346.060680] Reason: TODO
>> [  346.060680] raw fault status: 0x810002C1
>> [  346.060680] decoded fault status: SLAVE FAULT
>> [  346.060680] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
>> [  346.060680] access type 0x2: READ
>> [  346.060680] source id 0x8100
>> [  346.561955] panfrost 1800000.gpu: gpu sched timeout, js=1,
>> status=0x8, head=0x2c00a00, tail=0x2c00a00, sched_job=00000000b55a9a85
>> [  346.573913] panfrost 1800000.gpu: mmu irq status=1
>> [  346.578707] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
>> 0x0000000002C00B80
>> [  346.578707] Reason: TODO
>> [  346.578707] raw fault status: 0x800002C1
>> [  346.578707] decoded fault status: SLAVE FAULT
>> [  346.578707] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
>> [  346.578707] access type 0x2: READ
>> [  346.578707] source id 0x8000
>> [  347.073947] panfrost 1800000.gpu: gpu sched timeout, js=0,
>> status=0x8, head=0x2c00b80, tail=0x2c00b80, sched_job=00000000cf6af8e8
>> [  347.104125] panfrost 1800000.gpu: mmu irq status=1
>> [  347.108930] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
>> 0x0000000002800900
>> [  347.108930] Reason: TODO
>> [  347.108930] raw fault status: 0x810002C1
>> [  347.108930] decoded faultn thi status: SLAVE FAULT
>> [  347.108930] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
>> [  347.108930] access type 0x2: READ
>> [  347.108930] source id 0x8100
>> [  347.617950] panfrost 1800000.gpu: gpu sched timeout, js=1,
>> status=0x8, head=0x2800900, tail=0x2800900, sched_job=000000009325fdb7
>> [  347.629902] panfrost 1800000.gpu: mmu irq status=1
>> [  347.634696] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
>> 0x0000000002800A80
> 
> Is this 32 or 64 bit userspace? I think 64-bit does not work with
> T7xx. You might need this[1].

[ Oooh... that makes T620 actually do stuff without falling over 
dereferencing VA 0 somewhere halfway through the job chain :D

I shall continue to play... ]

> You may also be the first to try T720,
> so it could be something else.

I was expecting to see a similar behaviour to my T620 (which I now 
assume was down to 64-bit job descriptors sort-of-but-not-quite working) 
but this does look a bit more fundamental - the fact that it's a level 1 
fault with VA == head == tail suggests to me that the MMU can't see the 
page tables at all to translate anything. I really hope that the H6 GPU 
integration doesn't suffer from the same DMA offset as the Allwinner 
display pipeline stuff, because that would be a real pain to support in 
io-pgtable.

Robin.
Steven Price May 16, 2019, 1:21 p.m. UTC | #9
On 16/05/2019 12:19, Robin Murphy wrote:
[...]
> I was expecting to see a similar behaviour to my T620 (which I now
> assume was down to 64-bit job descriptors sort-of-but-not-quite working)
> but this does look a bit more fundamental - the fact that it's a level 1
> fault with VA == head == tail suggests to me that the MMU can't see the
> page tables at all to translate anything. I really hope that the H6 GPU
> integration doesn't suffer from the same DMA offset as the Allwinner
> display pipeline stuff, because that would be a real pain to support in
> io-pgtable.

Assuming you mean the case where the physical address (as seen by the
CPU) is different from the dma address (as seen by the GPU), then I
highly doubt it because mali_kbase doesn't support it:

[from kbase_mem_pool_alloc_page() in mali_kbase_mem_pool.c]:

	dma_addr = dma_map_page(dev, p, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, dma_addr)) {
		__free_page(p);
		return NULL;
	}

	WARN_ON(dma_addr != page_to_phys(p));


That being said it's quite possible there could be something in the bus
which needs configuring to make this work - in which case your best bet
is to look at the vendor kernel and see if anything extra is poked when
the Mali driver is loaded.

Steve
Jernej Škrabec May 25, 2019, 7:50 p.m. UTC | #10
Hi!

On Thursday, 16 May 2019 at 13:19:06 CEST, Robin Murphy wrote:
> On 16/05/2019 00:22, Rob Herring wrote:
> > On Wed, May 15, 2019 at 5:06 PM Clément Péron <peron.clem@gmail.com> wrote:
> >> Hi Robin,
> >> 
> >> On Tue, 14 May 2019 at 23:57, Robin Murphy <robin.murphy@arm.com> wrote:
> >>> On 2019-05-14 10:22 pm, Clément Péron wrote:
> >>>> Hi,
> >>>> 
> >>>>> On Tue, 14 May 2019 at 17:17, Clément Péron <peron.clem@gmail.com> wrote:
> >>>>> Hi,
> >>>>> 
> >>>>>> On Tue, 14 May 2019 at 12:29, Neil Armstrong <narmstrong@baylibre.com> wrote:
> >>>>>> Hi,
> >>>>>> 
> >>>>>> On 13/05/2019 17:14, Daniel Vetter wrote:
> >>>>>>> On Sun, May 12, 2019 at 07:46:00PM +0200, peron.clem@gmail.com wrote:
> >>>>>>>> From: Clément Péron <peron.clem@gmail.com>
> >>>>>>>> 
> >>>>>>>> Hi,
> >>>>>>>> 
> >>>>>>>> The Allwinner H6 has a Mali-T720 MP2. The drivers are
> >>>>>>>> out-of-tree so this series only introduce the dt-bindings.
> >>>>>>> 
> >>>>>>> We do have an in-tree midgard driver now (since 5.2). Does this
> >>>>>>> stuff work
> >>>>>>> together with your dt changes here?
> >>>>>> 
> >>>>>> No, but it should be easy to add.
> >>>>> 
> >>>>> I will give it a try and let you know.
> >>>> 
> >>>> Added the bus_clock and a ramp delay to the gpu_vdd but the driver
> >>>> fail at probe.
> >>>> 
> >>>> [    3.052919] panfrost 1800000.gpu: clock rate = 432000000
> >>>> [    3.058278] panfrost 1800000.gpu: bus_clock rate = 100000000
> >>>> [    3.179772] panfrost 1800000.gpu: mali-t720 id 0x720 major 0x1
> >>>> minor 0x1 status 0x0
> >>>> [    3.187432] panfrost 1800000.gpu: features: 00000000,10309e40,
> >>>> issues: 00000000,21054400
> >>>> [    3.195531] panfrost 1800000.gpu: Features: L2:0x07110206
> >>>> Shader:0x00000000 Tiler:0x00000809 Mem:0x1 MMU:0x00002821 AS:0xf
> >>>> JS:0x7
> >>>> [    3.207178] panfrost 1800000.gpu: shader_present=0x3 l2_present=0x1
> >>>> [    3.238257] panfrost 1800000.gpu: Fatal error during GPU init
> >>>> [    3.244165] panfrost: probe of 1800000.gpu failed with error -12
> >>>> 
> >>>> The ENOMEM is coming from "panfrost_mmu_init"
> >>>> alloc_io_pgtable_ops(ARM_MALI_LPAE, &pfdev->mmu->pgtbl_cfg,
> >>>> 
> >>>>                                            pfdev);
> >>>> 
> >>>> Which is due to a check in the pgtable alloc "cfg->ias != 48"
> >>>> arm-lpae io-pgtable: arm_mali_lpae_alloc_pgtable cfg->ias 33 cfg->oas
> >>>> 40
> >>>> 
> >>>> DRI stack is totally new for me, could you give me a little clue about
> >>>> this issue ?
> >>> 
> >>> Heh, this is probably the one bit which doesn't really count as "DRI
> >>> stack".
> >>> 
> >>> That's merely a somewhat-conservative sanity check - I'm pretty sure it
> >>> *should* be fine to change the test to "cfg->ias > 48" (io-pgtable
> >>> itself ought to cope). You'll just get to be the first to actually test
> >>> a non-48-bit configuration here :)
> >> 
> >> Thanks a lot, the probe seems fine now :)
> >> 
> >> I try to run glmark2 :
> >> # glmark2-es2-drm
> >> =======================================================
> >> 
> >>      glmark2 2017.07
> >> 
> >> =======================================================
> >> 
> >>      OpenGL Information
> >>      GL_VENDOR:     panfrost
> >>      GL_RENDERER:   panfrost
> >>      GL_VERSION:    OpenGL ES 2.0 Mesa 19.1.0-rc2
> >> 
> >> =======================================================
> >> [build] use-vbo=false:
> >> 
> >> But it seems that H6 is not so easy to add :(.
> >> 
> >> [  345.204813] panfrost 1800000.gpu: mmu irq status=1
> >> [  345.209617] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> >> 0x0000000002400400
> >> [  345.209617] Reason: TODO
> >> [  345.209617] raw fault status: 0x800002C1
> >> [  345.209617] decoded fault status: SLAVE FAULT
> >> [  345.209617] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
> >> [  345.209617] access type 0x2: READ
> >> [  345.209617] source id 0x8000
> >> [  345.729957] panfrost 1800000.gpu: gpu sched timeout, js=0,
> >> status=0x8, head=0x2400400, tail=0x2400400, sched_job=000000009e204de9
> >> [  346.055876] panfrost 1800000.gpu: mmu irq status=1
> >> [  346.060680] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> >> 0x0000000002C00A00
> >> [  346.060680] Reason: TODO
> >> [  346.060680] raw fault status: 0x810002C1
> >> [  346.060680] decoded fault status: SLAVE FAULT
> >> [  346.060680] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
> >> [  346.060680] access type 0x2: READ
> >> [  346.060680] source id 0x8100
> >> [  346.561955] panfrost 1800000.gpu: gpu sched timeout, js=1,
> >> status=0x8, head=0x2c00a00, tail=0x2c00a00, sched_job=00000000b55a9a85
> >> [  346.573913] panfrost 1800000.gpu: mmu irq status=1
> >> [  346.578707] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> >> 0x0000000002C00B80
> >> [  346.578707] Reason: TODO
> >> [  346.578707] raw fault status: 0x800002C1
> >> [  346.578707] decoded fault status: SLAVE FAULT
> >> [  346.578707] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
> >> [  346.578707] access type 0x2: READ
> >> [  346.578707] source id 0x8000
> >> [  347.073947] panfrost 1800000.gpu: gpu sched timeout, js=0,
> >> status=0x8, head=0x2c00b80, tail=0x2c00b80, sched_job=00000000cf6af8e8
> >> [  347.104125] panfrost 1800000.gpu: mmu irq status=1
> >> [  347.108930] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> >> 0x0000000002800900
> >> [  347.108930] Reason: TODO
> >> [  347.108930] raw fault status: 0x810002C1
> >> [  347.108930] decoded faultn thi status: SLAVE FAULT
> >> [  347.108930] exception type 0xC1: TRANSLATION_FAULT_LEVEL1
> >> [  347.108930] access type 0x2: READ
> >> [  347.108930] source id 0x8100
> >> [  347.617950] panfrost 1800000.gpu: gpu sched timeout, js=1,
> >> status=0x8, head=0x2800900, tail=0x2800900, sched_job=000000009325fdb7
> >> [  347.629902] panfrost 1800000.gpu: mmu irq status=1
> >> [  347.634696] panfrost 1800000.gpu: Unhandled Page fault in AS0 at VA
> >> 0x0000000002800A80
> > 
> > Is this 32 or 64 bit userspace? I think 64-bit does not work with
> > T7xx. You might need this[1].
> 
> [ Oooh... that makes T620 actually do stuff without falling over
> dereferencing VA 0 somewhere halfway through the job chain :D
> 
> I shall continue to play... ]
> 
> > You may also be the first to try T720,
> > so it could be something else.
> 
> I was expecting to see a similar behaviour to my T620 (which I now
> assume was down to 64-bit job descriptors sort-of-but-not-quite working)
> but this does look a bit more fundamental - the fact that it's a level 1
> fault with VA == head == tail suggests to me that the MMU can't see the
> page tables at all to translate anything. I really hope that the H6 GPU
> integration doesn't suffer from the same DMA offset as the Allwinner
> display pipeline stuff, because that would be a real pain to support in
> io-pgtable.

DMA offset is present only on early SoCs with DE1. DE2 and DE3 (used on H6) 
have no offset.

Best regards,
Jernej