[v3,5/7] drm/sun4i: Rely on dma interconnect for our RAM offset

Message ID dfbdedf89c6a36f6fe5564e7c518664fa9e828c0.1549897336.git-series.maxime.ripard@bootlin.com
State New, archived
Series sunxi: Add DT representation for the MBUS controller

Commit Message

Maxime Ripard Feb. 11, 2019, 3:02 p.m. UTC
Now that we can express our DMA topology, rely on that property instead of
hardcoding an offset from the dma_addr_t, which wasn't really great.

We still need to add some code to deal with old DTs that lack that
property, but we move the offset to the DRM device's dma_pfn_offset to be
able to rely on just the dma_addr_t associated with the GEM object.

Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com>
---
 drivers/gpu/drm/sun4i/sun4i_backend.c | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)
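
The fallback in this patch relies on the ARM DMA API's per-device offset: with dev->dma_pfn_offset set, the DMA layer subtracts that many pages when translating a CPU physical address into a bus address. A minimal sketch of the arithmetic, assuming the dma_pfn_offset semantics of this kernel generation and the usual sunxi DRAM base of 0x40000000 (sketch_phys_to_dma is an illustrative name, not a kernel function):

#include <linux/pfn.h>
#include <linux/types.h>

/*
 * Illustrative only: with dev->dma_pfn_offset = PHYS_PFN_OFFSET, a
 * buffer at CPU physical address 0x40000000 + x is seen by the device
 * at bus address x, matching the backend's direct view of DRAM.
 */
static dma_addr_t sketch_phys_to_dma(unsigned long dma_pfn_offset,
				     phys_addr_t paddr)
{
	return (dma_addr_t)(paddr - PFN_PHYS(dma_pfn_offset));
}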

Comments

Robin Murphy Feb. 12, 2019, 6:46 p.m. UTC | #1
On 11/02/2019 15:02, Maxime Ripard wrote:
> [...]
> diff --git a/drivers/gpu/drm/sun4i/sun4i_backend.c b/drivers/gpu/drm/sun4i/sun4i_backend.c
> index 9e9255ee59cd..1846a1b30fea 100644
> --- a/drivers/gpu/drm/sun4i/sun4i_backend.c
> +++ b/drivers/gpu/drm/sun4i/sun4i_backend.c
> @@ -383,13 +383,6 @@ int sun4i_backend_update_layer_buffer(struct sun4i_backend *backend,
>   	paddr = drm_fb_cma_get_gem_addr(fb, state, 0);
>   	DRM_DEBUG_DRIVER("Setting buffer address to %pad\n", &paddr);
>   
> -	/*
> -	 * backend DMA accesses DRAM directly, bypassing the system
> -	 * bus. As such, the address range is different and the buffer
> -	 * address needs to be corrected.
> -	 */
> -	paddr -= PHYS_OFFSET;
> -
>   	if (fb->format->is_yuv)
>   		return sun4i_backend_update_yuv_buffer(backend, fb, paddr);
>   
> @@ -835,6 +828,27 @@ static int sun4i_backend_bind(struct device *dev, struct device *master,
>   	dev_set_drvdata(dev, backend);
>   	spin_lock_init(&backend->frontend_lock);
>   
> +	if (of_find_property(dev->of_node, "interconnects", NULL)) {
> +		/*
> +		 * This assumes we have the same DMA constraints for all the
> +		 * devices in our pipeline (all the backends, but also the
> +		 * frontends). This sounds bad, but it has always been the case
> +		 * for us, and DRM doesn't do per-device allocation either, so
> +		 * we would need to fix DRM first...
> +		 */
> +		ret = of_dma_configure(drm->dev, dev->of_node, true);

It would be even nicer if we could ensure that drm->dev originates from 
a DT node which has the appropriate interconnects property itself, such 
that we can assume it's already configured correctly.

Robin.

> +		if (ret)
> +			return ret;
> +	} else {
> +		/*
> +		 * If we don't have the interconnects property, most likely
> +		 * because of an old DT, we need to set the DMA offset by hand
> +		 * on our device since the RAM mapping is at 0 for the DMA bus,
> +		 * unlike the CPU.
> +		 */
> +		drm->dev->dma_pfn_offset = PHYS_PFN_OFFSET;
> +	}
> +
>   	backend->engine.node = dev->of_node;
>   	backend->engine.ops = &sun4i_backend_engine_ops;
>   	backend->engine.id = sun4i_backend_of_get_id(dev->of_node);
>
Maxime Ripard Feb. 13, 2019, 3:41 p.m. UTC | #2
Hi Robin,

Thanks for your feedback!

On Tue, Feb 12, 2019 at 06:46:40PM +0000, Robin Murphy wrote:
> On 11/02/2019 15:02, Maxime Ripard wrote:
> > [...]
> > +		ret = of_dma_configure(drm->dev, dev->of_node, true);
> 
> It would be even nicer if we could ensure that drm->dev originates from a DT
> node which has the appropriate interconnects property itself, such that we
> can assume it's already configured correctly.

The thing is drm->dev comes from a node in the DT that is a virtual
node, and therefore doesn't have any resources attached, so I'm not
sure we have any other way, unfortunately.

Maxime
Robin Murphy Feb. 13, 2019, 4:40 p.m. UTC | #3
On 13/02/2019 15:41, Maxime Ripard wrote:
> Hi Robin,
> 
> Thanks for your feedback!
> 
> On Tue, Feb 12, 2019 at 06:46:40PM +0000, Robin Murphy wrote:
>> On 11/02/2019 15:02, Maxime Ripard wrote:
>>> [...]
>>> +		ret = of_dma_configure(drm->dev, dev->of_node, true);
>>
>> It would be even nicer if we could ensure that drm->dev originates from a DT
>> node which has the appropriate interconnects property itself, such that we
>> can assume it's already configured correctly.
> 
> The thing is drm->dev comes from a node in the DT that is a virtual
> node, and therefore doesn't have any resources attached, so I'm not
> sure we have any other way, unfortunately.

Right, I appreciate that it may not be feasible to swizzle drm->dev for 
one of the 'real' component devices, but what I was also thinking was 
that since the virtual device node effectively represents the 
aggregation of the other component devices, we could just say that it 
also has to have its own link to the MBUS interconnect (with the ID of 
the pipeline entrypoint it's associated with, I guess). That ought to be 
enough to get dma_configure() to do the job, and in fairness is no 
*less* accurate a description of the hardware, even if it might look a
little funky to some.

Robin.
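
For illustration, Robin's suggestion would reduce the driver side to configuring DMA from the virtual node itself, roughly as below (a hypothetical sketch, not part of this series; it assumes the display-engine node gains its own interconnects property):

	/*
	 * Hypothetical: the virtual node carries its own "interconnects"
	 * link to the MBUS, so drm->dev can be configured from its own
	 * node; in practice the platform bus core would already have
	 * done this when the virtual device was probed.
	 */
	ret = of_dma_configure(drm->dev, drm->dev->of_node, true);
	if (ret)
		return ret;
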
Maxime Ripard Feb. 19, 2019, 10:55 a.m. UTC | #4
Hi Robin,

On Wed, Feb 13, 2019 at 04:40:11PM +0000, Robin Murphy wrote:
> On 13/02/2019 15:41, Maxime Ripard wrote:
> > Hi Robin,
> > 
> > Thanks for your feedback!
> > 
> > On Tue, Feb 12, 2019 at 06:46:40PM +0000, Robin Murphy wrote:
> > > On 11/02/2019 15:02, Maxime Ripard wrote:
> > > > [...]
> > > > +		ret = of_dma_configure(drm->dev, dev->of_node, true);
> > > 
> > > It would be even nicer if we could ensure that drm->dev originates from a DT
> > > node which has the appropriate interconnects property itself, such that we
> > > can assume it's already configured correctly.
> > 
> > The thing is drm->dev comes from a node in the DT that is a virtual
> > node, and therefore doesn't have any resources attached, so I'm not
> > sure we have any other way, unfortunately.
> 
> Right, I appreciate that it may not be feasible to swizzle drm->dev for one
> of the 'real' component devices, but what I was also thinking was that since
> the virtual device node effectively represents the aggregation of the other
> component devices, we could just say that it also has to have its own link
> to the MBUS interconnect (with the ID of the pipeline entrypoint it's
> associated with, I guess). That ought to be enough to get dma_configure() to
> do the job, and in fairness is no *less* accurate a description of the
> hardware, even if it might look a little funky to some.

That virtual device however can work with up to 4 devices that can
perform DMA operations, all of them going through a different port of
the MBUS controller that can account for DMA usage on a per-master
basis.

Eventually, we should support that feature as well, but the point is
that from a DT point of view, there's not a single point of DMA access
for that particular device but more likely 2-4 points, each with a
different argument to the phandle.

We could of course have a hack and use only one of them, but that
would be less accurate than what we currently have. Plus, this is
really due to a restriction on the DRM side that could change in the
future (even though it's unlikely). Fixing that restriction would make
that property completely useless, confusing, and wrong from a hardware
PoV.

Maxime
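
As a sketch of what per-master handling might eventually look like (hypothetical, not implemented by this series; node stands for the consumer's of_node), the generic phandle accessors can already enumerate several interconnect paths on one node. Consumer entries come in initiator/target pairs, so each master port is every second entry:

	struct of_phandle_args args;
	int i, num;

	num = of_count_phandle_with_args(node, "interconnects",
					 "#interconnect-cells");
	for (i = 0; i < num; i += 2) {
		if (of_parse_phandle_with_args(node, "interconnects",
					       "#interconnect-cells",
					       i, &args))
			break;
		/* args.args[0] would be the MBUS port ID of this master */
		of_node_put(args.np);
	}
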
Robin Murphy March 5, 2019, 4:11 p.m. UTC | #5
On 19/02/2019 10:55, Maxime Ripard wrote:
> Hi Robin,
> 
> On Wed, Feb 13, 2019 at 04:40:11PM +0000, Robin Murphy wrote:
>> On 13/02/2019 15:41, Maxime Ripard wrote:
>>> Hi Robin,
>>>
>>> Thanks for your feedback!
>>>
>>> On Tue, Feb 12, 2019 at 06:46:40PM +0000, Robin Murphy wrote:
>>>> On 11/02/2019 15:02, Maxime Ripard wrote:
>>>>> [...]
>>>>> +		ret = of_dma_configure(drm->dev, dev->of_node, true);
>>>>
>>>> It would be even nicer if we could ensure that drm->dev originates from a DT
>>>> node which has the appropriate interconnects property itself, such that we
>>>> can assume it's already configured correctly.
>>>
>>> The thing is drm->dev comes from a node in the DT that is a virtual
>>> node, and therefore doesn't have any resources attached, so I'm not
>>> sure we have any other way, unfortunately.
>>
>> Right, I appreciate that it may not be feasible to swizzle drm->dev for one
>> of the 'real' component devices, but what I was also thinking was that since
>> the virtual device node effectively represents the aggregation of the other
>> component devices, we could just say that it also has to have its own link
>> to the MBUS interconnect (with the ID of the pipeline entrypoint it's
>> associated with, I guess). That ought to be enough to get dma_configure() to
>> do the job, and in fairness is no *less* accurate a description of the
>> hardware, even if it might look a little funky to some.
> 
> That virtual device however can work with up to 4 devices that can
> perform DMA operations, all of them going through a different port of
> the MBUS controller that can account for DMA usage on a per-master
> basis.
> 
> Eventually, we should support that feature as well, but the point is
> that from a DT point of view, there's not a single point of DMA access
> for that particular device but more likely 2-4 points, each with a
> different argument to the phandle.
> 
> We could of course have a hack and use only one of them, but that
> would be less accurate than what we currently have. Plus, this is
> really due to a restriction on the DRM side that could change in the
> future (even though it's unlikely). Fixing that restriction would make
> that property completely useless, confusing, and wrong from a hardware
> PoV.

Thanks for the clarification. Much as I'd like to keep 
of_dma_configure() out of drivers as much as possible, I agree that it's 
not sufficient justification for (ab)using DT to sidestep a 
Linux-specific issue which might eventually get properly fixed anyway. 
I'm sure the last thing sunxi needs is any more awkward DT ABI concerns :)

Robin.

Patch

diff --git a/drivers/gpu/drm/sun4i/sun4i_backend.c b/drivers/gpu/drm/sun4i/sun4i_backend.c
index 9e9255ee59cd..1846a1b30fea 100644
--- a/drivers/gpu/drm/sun4i/sun4i_backend.c
+++ b/drivers/gpu/drm/sun4i/sun4i_backend.c
@@ -383,13 +383,6 @@ int sun4i_backend_update_layer_buffer(struct sun4i_backend *backend,
 	paddr = drm_fb_cma_get_gem_addr(fb, state, 0);
 	DRM_DEBUG_DRIVER("Setting buffer address to %pad\n", &paddr);
 
-	/*
-	 * backend DMA accesses DRAM directly, bypassing the system
-	 * bus. As such, the address range is different and the buffer
-	 * address needs to be corrected.
-	 */
-	paddr -= PHYS_OFFSET;
-
 	if (fb->format->is_yuv)
 		return sun4i_backend_update_yuv_buffer(backend, fb, paddr);
 
@@ -835,6 +828,27 @@ static int sun4i_backend_bind(struct device *dev, struct device *master,
 	dev_set_drvdata(dev, backend);
 	spin_lock_init(&backend->frontend_lock);
 
+	if (of_find_property(dev->of_node, "interconnects", NULL)) {
+		/*
+		 * This assumes we have the same DMA constraints for all the
+		 * devices in our pipeline (all the backends, but also the
+		 * frontends). This sounds bad, but it has always been the case
+		 * for us, and DRM doesn't do per-device allocation either, so
+		 * we would need to fix DRM first...
+		 */
+		ret = of_dma_configure(drm->dev, dev->of_node, true);
+		if (ret)
+			return ret;
+	} else {
+		/*
+		 * If we don't have the interconnects property, most likely
+		 * because of an old DT, we need to set the DMA offset by hand
+		 * on our device since the RAM mapping is at 0 for the DMA bus,
+		 * unlike the CPU.
+		 */
+		drm->dev->dma_pfn_offset = PHYS_PFN_OFFSET;
+	}
+
 	backend->engine.node = dev->of_node;
 	backend->engine.ops = &sun4i_backend_engine_ops;
 	backend->engine.id = sun4i_backend_of_get_id(dev->of_node);