
[v2,2/4] PCI: cadence: Use "dma-ranges" instead of "cdns,no-bar-match-nbits" property

Message ID 20200417114322.31111-3-kishon@ti.com
State Superseded
Delegated to: Lorenzo Pieralisi
Series PCI: cadence: Deprecate inbound/outbound specific bindings

Commit Message

Kishon Vijay Abraham I April 17, 2020, 11:43 a.m. UTC
The Cadence PCIe core driver (host mode) uses the "cdns,no-bar-match-nbits"
property to configure the number of bits passed through from the PCIe
address to the internal address in the Inbound Address Translation register.

However, the standard PCI dt-binding already defines "dma-ranges" to
describe the address ranges accessible by the PCIe controller. Parse the
"dma-ranges" property to configure the number of bits passed through
from the PCIe address to the internal address in the Inbound Address
Translation register.

Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
---
 drivers/pci/controller/cadence/pcie-cadence-host.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

Comments

Lorenzo Pieralisi May 1, 2020, 2:46 p.m. UTC | #1
[+Robin - to check on dma-ranges interpretation]

I would need RobH and Robin to review this.

Also, an ACK from Tom is required for the whole series.

On Fri, Apr 17, 2020 at 05:13:20PM +0530, Kishon Vijay Abraham I wrote:
> Cadence PCIe core driver (host mode) uses "cdns,no-bar-match-nbits"
> property to configure the number of bits passed through from PCIe
> address to internal address in Inbound Address Translation register.
> 
> However standard PCI dt-binding already defines "dma-ranges" to
> describe the address range accessible by PCIe controller. Parse
> "dma-ranges" property to configure the number of bits passed
> through from PCIe address to internal address in Inbound Address
> Translation register.
> 
> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
> ---
>  drivers/pci/controller/cadence/pcie-cadence-host.c | 13 +++++++++++--
>  1 file changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
> index 9b1c3966414b..60f912a657b9 100644
> --- a/drivers/pci/controller/cadence/pcie-cadence-host.c
> +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
> @@ -206,8 +206,10 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>  	struct device *dev = rc->pcie.dev;
>  	struct platform_device *pdev = to_platform_device(dev);
>  	struct device_node *np = dev->of_node;
> +	struct of_pci_range_parser parser;
>  	struct pci_host_bridge *bridge;
>  	struct list_head resources;
> +	struct of_pci_range range;
>  	struct cdns_pcie *pcie;
>  	struct resource *res;
>  	int ret;
> @@ -222,8 +224,15 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>  	rc->max_regions = 32;
>  	of_property_read_u32(np, "cdns,max-outbound-regions", &rc->max_regions);
>  
> -	rc->no_bar_nbits = 32;
> -	of_property_read_u32(np, "cdns,no-bar-match-nbits", &rc->no_bar_nbits);
> +	if (!of_pci_dma_range_parser_init(&parser, np))
> +		if (of_pci_range_parser_one(&parser, &range))
> +			rc->no_bar_nbits = ilog2(range.size);
> +
> +	if (!rc->no_bar_nbits) {
> +		rc->no_bar_nbits = 32;
> +		of_property_read_u32(np, "cdns,no-bar-match-nbits",
> +				     &rc->no_bar_nbits);
> +	}
>  
>  	rc->vendor_id = 0xffff;
>  	of_property_read_u16(np, "vendor-id", &rc->vendor_id);
> -- 
> 2.17.1
>
Robin Murphy May 1, 2020, 3:54 p.m. UTC | #2
On 2020-05-01 3:46 pm, Lorenzo Pieralisi wrote:
> [+Robin - to check on dma-ranges intepretation]
> 
> I would need RobH and Robin to review this.
> 
> Also, An ACK from Tom is required - for the whole series.
> 
> On Fri, Apr 17, 2020 at 05:13:20PM +0530, Kishon Vijay Abraham I wrote:
>> Cadence PCIe core driver (host mode) uses "cdns,no-bar-match-nbits"
>> property to configure the number of bits passed through from PCIe
>> address to internal address in Inbound Address Translation register.
>>
>> However standard PCI dt-binding already defines "dma-ranges" to
>> describe the address range accessible by PCIe controller. Parse
>> "dma-ranges" property to configure the number of bits passed
>> through from PCIe address to internal address in Inbound Address
>> Translation register.
>>
>> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
>> ---
>>   drivers/pci/controller/cadence/pcie-cadence-host.c | 13 +++++++++++--
>>   1 file changed, 11 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
>> index 9b1c3966414b..60f912a657b9 100644
>> --- a/drivers/pci/controller/cadence/pcie-cadence-host.c
>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
>> @@ -206,8 +206,10 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>>   	struct device *dev = rc->pcie.dev;
>>   	struct platform_device *pdev = to_platform_device(dev);
>>   	struct device_node *np = dev->of_node;
>> +	struct of_pci_range_parser parser;
>>   	struct pci_host_bridge *bridge;
>>   	struct list_head resources;
>> +	struct of_pci_range range;
>>   	struct cdns_pcie *pcie;
>>   	struct resource *res;
>>   	int ret;
>> @@ -222,8 +224,15 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>>   	rc->max_regions = 32;
>>   	of_property_read_u32(np, "cdns,max-outbound-regions", &rc->max_regions);
>>   
>> -	rc->no_bar_nbits = 32;
>> -	of_property_read_u32(np, "cdns,no-bar-match-nbits", &rc->no_bar_nbits);
>> +	if (!of_pci_dma_range_parser_init(&parser, np))
>> +		if (of_pci_range_parser_one(&parser, &range))
>> +			rc->no_bar_nbits = ilog2(range.size);

You probably want "range.pci_addr + range.size" here just in case the 
bottom of the window is ever non-zero. Is there definitely only ever a 
single inbound window to consider?

I believe that pci_parse_request_of_pci_ranges() could do the actual 
parsing for you, but I suppose plumbing that in plus processing the 
resulting dma_ranges resource probably ends up a bit messier than the 
concise open-coding here.

Robin.

>> +
>> +	if (!rc->no_bar_nbits) {
>> +		rc->no_bar_nbits = 32;
>> +		of_property_read_u32(np, "cdns,no-bar-match-nbits",
>> +				     &rc->no_bar_nbits);
>> +	}
>>   
>>   	rc->vendor_id = 0xffff;
>>   	of_property_read_u16(np, "vendor-id", &rc->vendor_id);
>> -- 
>> 2.17.1
>>
Kishon Vijay Abraham I May 4, 2020, 8:44 a.m. UTC | #3
Hi Robin,

On 5/1/2020 9:24 PM, Robin Murphy wrote:
> On 2020-05-01 3:46 pm, Lorenzo Pieralisi wrote:
>> [+Robin - to check on dma-ranges intepretation]
>>
>> I would need RobH and Robin to review this.
>>
>> Also, An ACK from Tom is required - for the whole series.
>>
>> On Fri, Apr 17, 2020 at 05:13:20PM +0530, Kishon Vijay Abraham I wrote:
>>> Cadence PCIe core driver (host mode) uses "cdns,no-bar-match-nbits"
>>> property to configure the number of bits passed through from PCIe
>>> address to internal address in Inbound Address Translation register.
>>>
>>> However standard PCI dt-binding already defines "dma-ranges" to
>>> describe the address range accessible by PCIe controller. Parse
>>> "dma-ranges" property to configure the number of bits passed
>>> through from PCIe address to internal address in Inbound Address
>>> Translation register.
>>>
>>> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
>>> ---
>>>   drivers/pci/controller/cadence/pcie-cadence-host.c | 13 +++++++++++--
>>>   1 file changed, 11 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c
>>> b/drivers/pci/controller/cadence/pcie-cadence-host.c
>>> index 9b1c3966414b..60f912a657b9 100644
>>> --- a/drivers/pci/controller/cadence/pcie-cadence-host.c
>>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
>>> @@ -206,8 +206,10 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>>>       struct device *dev = rc->pcie.dev;
>>>       struct platform_device *pdev = to_platform_device(dev);
>>>       struct device_node *np = dev->of_node;
>>> +    struct of_pci_range_parser parser;
>>>       struct pci_host_bridge *bridge;
>>>       struct list_head resources;
>>> +    struct of_pci_range range;
>>>       struct cdns_pcie *pcie;
>>>       struct resource *res;
>>>       int ret;
>>> @@ -222,8 +224,15 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>>>       rc->max_regions = 32;
>>>       of_property_read_u32(np, "cdns,max-outbound-regions", &rc->max_regions);
>>>   -    rc->no_bar_nbits = 32;
>>> -    of_property_read_u32(np, "cdns,no-bar-match-nbits", &rc->no_bar_nbits);
>>> +    if (!of_pci_dma_range_parser_init(&parser, np))
>>> +        if (of_pci_range_parser_one(&parser, &range))
>>> +            rc->no_bar_nbits = ilog2(range.size);
> 
> You probably want "range.pci_addr + range.size" here just in case the bottom of
> the window is ever non-zero. Is there definitely only ever a single inbound
> window to consider?

The Cadence IP has 3 inbound address translation registers; however, we use
only one inbound address translation register to map the entire 32-bit or
64-bit address region.
> 
> I believe that pci_parse_request_of_pci_ranges() could do the actual parsing
> for you, but I suppose plumbing that in plus processing the resulting
> dma_ranges resource probably ends up a bit messier than the concise open-coding
> here.

Right, pci_parse_request_of_pci_ranges() parses the "ranges" property, which
is used for outbound configuration, whereas here we parse the "dma-ranges"
property, which is used for inbound configuration.

Thanks
Kishon

> 
> Robin.
> 
>>> +
>>> +    if (!rc->no_bar_nbits) {
>>> +        rc->no_bar_nbits = 32;
>>> +        of_property_read_u32(np, "cdns,no-bar-match-nbits",
>>> +                     &rc->no_bar_nbits);
>>> +    }
>>>         rc->vendor_id = 0xffff;
>>>       of_property_read_u16(np, "vendor-id", &rc->vendor_id);
>>> -- 
>>> 2.17.1
>>>
Robin Murphy May 4, 2020, 10:54 a.m. UTC | #4
On 2020-05-04 9:44 am, Kishon Vijay Abraham I wrote:
> Hi Robin,
> 
> On 5/1/2020 9:24 PM, Robin Murphy wrote:
>> On 2020-05-01 3:46 pm, Lorenzo Pieralisi wrote:
>>> [+Robin - to check on dma-ranges intepretation]
>>>
>>> I would need RobH and Robin to review this.
>>>
>>> Also, An ACK from Tom is required - for the whole series.
>>>
>>> On Fri, Apr 17, 2020 at 05:13:20PM +0530, Kishon Vijay Abraham I wrote:
>>>> Cadence PCIe core driver (host mode) uses "cdns,no-bar-match-nbits"
>>>> property to configure the number of bits passed through from PCIe
>>>> address to internal address in Inbound Address Translation register.
>>>>
>>>> However standard PCI dt-binding already defines "dma-ranges" to
>>>> describe the address range accessible by PCIe controller. Parse
>>>> "dma-ranges" property to configure the number of bits passed
>>>> through from PCIe address to internal address in Inbound Address
>>>> Translation register.
>>>>
>>>> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
>>>> ---
>>>>    drivers/pci/controller/cadence/pcie-cadence-host.c | 13 +++++++++++--
>>>>    1 file changed, 11 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>> b/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>> index 9b1c3966414b..60f912a657b9 100644
>>>> --- a/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>> @@ -206,8 +206,10 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>>>>        struct device *dev = rc->pcie.dev;
>>>>        struct platform_device *pdev = to_platform_device(dev);
>>>>        struct device_node *np = dev->of_node;
>>>> +    struct of_pci_range_parser parser;
>>>>        struct pci_host_bridge *bridge;
>>>>        struct list_head resources;
>>>> +    struct of_pci_range range;
>>>>        struct cdns_pcie *pcie;
>>>>        struct resource *res;
>>>>        int ret;
>>>> @@ -222,8 +224,15 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>>>>        rc->max_regions = 32;
>>>>        of_property_read_u32(np, "cdns,max-outbound-regions", &rc->max_regions);
>>>>    -    rc->no_bar_nbits = 32;
>>>> -    of_property_read_u32(np, "cdns,no-bar-match-nbits", &rc->no_bar_nbits);
>>>> +    if (!of_pci_dma_range_parser_init(&parser, np))
>>>> +        if (of_pci_range_parser_one(&parser, &range))
>>>> +            rc->no_bar_nbits = ilog2(range.size);
>>
>> You probably want "range.pci_addr + range.size" here just in case the bottom of
>> the window is ever non-zero. Is there definitely only ever a single inbound
>> window to consider?
> 
> Cadence IP has 3 inbound address translation registers, however we use only 1
> inbound address translation register to map the entire 32 bit or 64 bit address
> region.

OK, if anything that further strengthens the argument for deprecating a 
single "number of bits" property in favour of ranges that accurately 
describe the window(s). However it also suggests that other users in 
future might have some expectation that specifying "dma-ranges" with up 
to 3 entries should work to allow a more restrictive inbound 
configuration. Thus it would be desirable to make the code a little more 
robust here - even if we don't support multiple windows straight off, it 
would still be better to implement it in a way that can be cleanly 
extended later, and at least say something if more ranges are specified 
rather than just silently ignoring them.

>> I believe that pci_parse_request_of_pci_ranges() could do the actual parsing
>> for you, but I suppose plumbing that in plus processing the resulting
>> dma_ranges resource probably ends up a bit messier than the concise open-coding
>> here.
> 
> right, pci_parse_request_of_pci_ranges() parses "ranges" property and is used
> for outbound configuration, whereas here we parse "dma-ranges" property and is
> used for inbound configuration.

If you give it a valid third argument it *also* parses "dma-ranges" into 
a list of inbound regions. This is already used by various other drivers 
for equivalent inbound window setup, which is what I was hinting at 
before, but given the extensibility argument above I'm now going to 
actively suggest following that pattern for consistency.

Robin.
Kishon Vijay Abraham I May 4, 2020, 12:53 p.m. UTC | #5
Hi Robin,

On 5/4/2020 4:24 PM, Robin Murphy wrote:
> On 2020-05-04 9:44 am, Kishon Vijay Abraham I wrote:
>> Hi Robin,
>>
>> On 5/1/2020 9:24 PM, Robin Murphy wrote:
>>> On 2020-05-01 3:46 pm, Lorenzo Pieralisi wrote:
>>>> [+Robin - to check on dma-ranges intepretation]
>>>>
>>>> I would need RobH and Robin to review this.
>>>>
>>>> Also, An ACK from Tom is required - for the whole series.
>>>>
>>>> On Fri, Apr 17, 2020 at 05:13:20PM +0530, Kishon Vijay Abraham I wrote:
>>>>> Cadence PCIe core driver (host mode) uses "cdns,no-bar-match-nbits"
>>>>> property to configure the number of bits passed through from PCIe
>>>>> address to internal address in Inbound Address Translation register.
>>>>>
>>>>> However standard PCI dt-binding already defines "dma-ranges" to
>>>>> describe the address range accessible by PCIe controller. Parse
>>>>> "dma-ranges" property to configure the number of bits passed
>>>>> through from PCIe address to internal address in Inbound Address
>>>>> Translation register.
>>>>>
>>>>> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
>>>>> ---
>>>>>    drivers/pci/controller/cadence/pcie-cadence-host.c | 13 +++++++++++--
>>>>>    1 file changed, 11 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>> b/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>> index 9b1c3966414b..60f912a657b9 100644
>>>>> --- a/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>> @@ -206,8 +206,10 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>>>>>        struct device *dev = rc->pcie.dev;
>>>>>        struct platform_device *pdev = to_platform_device(dev);
>>>>>        struct device_node *np = dev->of_node;
>>>>> +    struct of_pci_range_parser parser;
>>>>>        struct pci_host_bridge *bridge;
>>>>>        struct list_head resources;
>>>>> +    struct of_pci_range range;
>>>>>        struct cdns_pcie *pcie;
>>>>>        struct resource *res;
>>>>>        int ret;
>>>>> @@ -222,8 +224,15 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>>>>>        rc->max_regions = 32;
>>>>>        of_property_read_u32(np, "cdns,max-outbound-regions",
>>>>> &rc->max_regions);
>>>>>    -    rc->no_bar_nbits = 32;
>>>>> -    of_property_read_u32(np, "cdns,no-bar-match-nbits", &rc->no_bar_nbits);
>>>>> +    if (!of_pci_dma_range_parser_init(&parser, np))
>>>>> +        if (of_pci_range_parser_one(&parser, &range))
>>>>> +            rc->no_bar_nbits = ilog2(range.size);
>>>
>>> You probably want "range.pci_addr + range.size" here just in case the bottom of
>>> the window is ever non-zero. Is there definitely only ever a single inbound
>>> window to consider?
>>
>> Cadence IP has 3 inbound address translation registers, however we use only 1
>> inbound address translation register to map the entire 32 bit or 64 bit address
>> region.
> 
> OK, if anything that further strengthens the argument for deprecating a single
> "number of bits" property in favour of ranges that accurately describe the
> window(s). However it also suggests that other users in future might have some
> expectation that specifying "dma-ranges" with up to 3 entries should work to
> allow a more restrictive inbound configuration. Thus it would be desirable to
> make the code a little more robust here - even if we don't support multiple
> windows straight off, it would still be better to implement it in a way that
> can be cleanly extended later, and at least say something if more ranges are
> specified rather than just silently ignoring them.

I looked at this further in the Cadence user doc. The three inbound ATU entries
are for BAR0 and BAR1 in the RC configuration space, and the third one is for the
NO MATCH BAR, used when no match is found in the RC BARs. Right now we always
configure the NO MATCH BAR. Would it be possible to describe the mapping at BAR
granularity in dma-ranges?
> 
>>> I believe that pci_parse_request_of_pci_ranges() could do the actual parsing
>>> for you, but I suppose plumbing that in plus processing the resulting
>>> dma_ranges resource probably ends up a bit messier than the concise open-coding
>>> here.
>>
>> right, pci_parse_request_of_pci_ranges() parses "ranges" property and is used
>> for outbound configuration, whereas here we parse "dma-ranges" property and is
>> used for inbound configuration.
> 
> If you give it a valid third argument it *also* parses "dma-ranges" into a list
> of inbound regions. This is already used by various other drivers for
> equivalent inbound window setup, which is what I was hinting at before, but
> given the extensibility argument above I'm now going to actively suggest
> following that pattern for consistency.
Yeah, I just got to know about this.

Thanks
Kishon
Kishon Vijay Abraham I May 6, 2020, 3:22 a.m. UTC | #6
Hi Robin,

On 5/4/2020 6:23 PM, Kishon Vijay Abraham I wrote:
> Hi Robin,
> 
> On 5/4/2020 4:24 PM, Robin Murphy wrote:
>> On 2020-05-04 9:44 am, Kishon Vijay Abraham I wrote:
>>> Hi Robin,
>>>
>>> On 5/1/2020 9:24 PM, Robin Murphy wrote:
>>>> On 2020-05-01 3:46 pm, Lorenzo Pieralisi wrote:
>>>>> [+Robin - to check on dma-ranges intepretation]
>>>>>
>>>>> I would need RobH and Robin to review this.
>>>>>
>>>>> Also, An ACK from Tom is required - for the whole series.
>>>>>
>>>>> On Fri, Apr 17, 2020 at 05:13:20PM +0530, Kishon Vijay Abraham I wrote:
>>>>>> Cadence PCIe core driver (host mode) uses "cdns,no-bar-match-nbits"
>>>>>> property to configure the number of bits passed through from PCIe
>>>>>> address to internal address in Inbound Address Translation register.
>>>>>>
>>>>>> However standard PCI dt-binding already defines "dma-ranges" to
>>>>>> describe the address range accessible by PCIe controller. Parse
>>>>>> "dma-ranges" property to configure the number of bits passed
>>>>>> through from PCIe address to internal address in Inbound Address
>>>>>> Translation register.
>>>>>>
>>>>>> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
>>>>>> ---
>>>>>>    drivers/pci/controller/cadence/pcie-cadence-host.c | 13 +++++++++++--
>>>>>>    1 file changed, 11 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>>> b/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>>> index 9b1c3966414b..60f912a657b9 100644
>>>>>> --- a/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>>> @@ -206,8 +206,10 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>>>>>>        struct device *dev = rc->pcie.dev;
>>>>>>        struct platform_device *pdev = to_platform_device(dev);
>>>>>>        struct device_node *np = dev->of_node;
>>>>>> +    struct of_pci_range_parser parser;
>>>>>>        struct pci_host_bridge *bridge;
>>>>>>        struct list_head resources;
>>>>>> +    struct of_pci_range range;
>>>>>>        struct cdns_pcie *pcie;
>>>>>>        struct resource *res;
>>>>>>        int ret;
>>>>>> @@ -222,8 +224,15 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>>>>>>        rc->max_regions = 32;
>>>>>>        of_property_read_u32(np, "cdns,max-outbound-regions",
>>>>>> &rc->max_regions);
>>>>>>    -    rc->no_bar_nbits = 32;
>>>>>> -    of_property_read_u32(np, "cdns,no-bar-match-nbits", &rc->no_bar_nbits);
>>>>>> +    if (!of_pci_dma_range_parser_init(&parser, np))
>>>>>> +        if (of_pci_range_parser_one(&parser, &range))
>>>>>> +            rc->no_bar_nbits = ilog2(range.size);
>>>>
>>>> You probably want "range.pci_addr + range.size" here just in case the bottom of
>>>> the window is ever non-zero. Is there definitely only ever a single inbound
>>>> window to consider?
>>>
>>> Cadence IP has 3 inbound address translation registers, however we use only 1
>>> inbound address translation register to map the entire 32 bit or 64 bit address
>>> region.
>>
>> OK, if anything that further strengthens the argument for deprecating a single
>> "number of bits" property in favour of ranges that accurately describe the
>> window(s). However it also suggests that other users in future might have some
>> expectation that specifying "dma-ranges" with up to 3 entries should work to
>> allow a more restrictive inbound configuration. Thus it would be desirable to
>> make the code a little more robust here - even if we don't support multiple
>> windows straight off, it would still be better to implement it in a way that
>> can be cleanly extended later, and at least say something if more ranges are
>> specified rather than just silently ignoring them.
> 
> I looked at this further in the Cadence user doc. The three inbound ATU entries
> are for BAR0, BAR1 in RC configuration space and the third one is for NO MATCH
> BAR when there is no matching found in RC BARs. Right now we always configure
> the NO MATCH BAR. Would it be possible describe at BAR granularity in dma-ranges?

I was wondering if I could use something like
dma-ranges = <0x02000000 0x0 0x0 0x0 0x0 0x00000 0x0>, //For BAR0 IB mapping
	     <0x02000000 0x0 0x0 0x0 0x0 0x00000 0x0>, //For BAR1 IB mapping
	     <0x02000000 0x0 0x0 0x0 0x0 0x10000 0x0>; //NO MATCH BAR

This way the driver can tell that the 1st tuple is for BAR0, the 2nd is for
BAR1, and the last is for NO MATCH. In the above case both BAR0 and BAR1 are
just empty and don't have valid values, as we use only the NO MATCH BAR.

However, I'm not able to use for_each_of_pci_range() in the Cadence driver to
get the configuration for each BAR: the loop body is invoked only once because
of_pci_range_parser_one() merges contiguous addresses.

Do you think I should extend the flags cell to differentiate between BAR0, BAR1
and NO MATCH BAR? Can you suggest any other alternatives?

Thanks
Kishon

>>
>>>> I believe that pci_parse_request_of_pci_ranges() could do the actual parsing
>>>> for you, but I suppose plumbing that in plus processing the resulting
>>>> dma_ranges resource probably ends up a bit messier than the concise open-coding
>>>> here.
>>>
>>> right, pci_parse_request_of_pci_ranges() parses "ranges" property and is used
>>> for outbound configuration, whereas here we parse "dma-ranges" property and is
>>> used for inbound configuration.
>>
>> If you give it a valid third argument it *also* parses "dma-ranges" into a list
>> of inbound regions. This is already used by various other drivers for
>> equivalent inbound window setup, which is what I was hinting at before, but
>> given the extensibility argument above I'm now going to actively suggest
>> following that pattern for consistency.
> yeah, just got to know about this.
> 
> Thanks
> Kishon
>
Rob Herring May 7, 2020, 8:26 p.m. UTC | #7
On Wed, May 06, 2020 at 08:52:13AM +0530, Kishon Vijay Abraham I wrote:
> Hi Robin,
> 
> On 5/4/2020 6:23 PM, Kishon Vijay Abraham I wrote:
> > Hi Robin,
> > 
> > On 5/4/2020 4:24 PM, Robin Murphy wrote:
> >> On 2020-05-04 9:44 am, Kishon Vijay Abraham I wrote:
> >>> Hi Robin,
> >>>
> >>> On 5/1/2020 9:24 PM, Robin Murphy wrote:
> >>>> On 2020-05-01 3:46 pm, Lorenzo Pieralisi wrote:
> >>>>> [+Robin - to check on dma-ranges intepretation]
> >>>>>
> >>>>> I would need RobH and Robin to review this.
> >>>>>
> >>>>> Also, An ACK from Tom is required - for the whole series.
> >>>>>
> >>>>> On Fri, Apr 17, 2020 at 05:13:20PM +0530, Kishon Vijay Abraham I wrote:
> >>>>>> Cadence PCIe core driver (host mode) uses "cdns,no-bar-match-nbits"
> >>>>>> property to configure the number of bits passed through from PCIe
> >>>>>> address to internal address in Inbound Address Translation register.
> >>>>>>
> >>>>>> However standard PCI dt-binding already defines "dma-ranges" to
> >>>>>> describe the address range accessible by PCIe controller. Parse
> >>>>>> "dma-ranges" property to configure the number of bits passed
> >>>>>> through from PCIe address to internal address in Inbound Address
> >>>>>> Translation register.
> >>>>>>
> >>>>>> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
> >>>>>> ---
> >>>>>>    drivers/pci/controller/cadence/pcie-cadence-host.c | 13 +++++++++++--
> >>>>>>    1 file changed, 11 insertions(+), 2 deletions(-)
> >>>>>>
> >>>>>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c
> >>>>>> b/drivers/pci/controller/cadence/pcie-cadence-host.c
> >>>>>> index 9b1c3966414b..60f912a657b9 100644
> >>>>>> --- a/drivers/pci/controller/cadence/pcie-cadence-host.c
> >>>>>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
> >>>>>> @@ -206,8 +206,10 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
> >>>>>>        struct device *dev = rc->pcie.dev;
> >>>>>>        struct platform_device *pdev = to_platform_device(dev);
> >>>>>>        struct device_node *np = dev->of_node;
> >>>>>> +    struct of_pci_range_parser parser;
> >>>>>>        struct pci_host_bridge *bridge;
> >>>>>>        struct list_head resources;
> >>>>>> +    struct of_pci_range range;
> >>>>>>        struct cdns_pcie *pcie;
> >>>>>>        struct resource *res;
> >>>>>>        int ret;
> >>>>>> @@ -222,8 +224,15 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
> >>>>>>        rc->max_regions = 32;
> >>>>>>        of_property_read_u32(np, "cdns,max-outbound-regions",
> >>>>>> &rc->max_regions);
> >>>>>>    -    rc->no_bar_nbits = 32;
> >>>>>> -    of_property_read_u32(np, "cdns,no-bar-match-nbits", &rc->no_bar_nbits);
> >>>>>> +    if (!of_pci_dma_range_parser_init(&parser, np))
> >>>>>> +        if (of_pci_range_parser_one(&parser, &range))
> >>>>>> +            rc->no_bar_nbits = ilog2(range.size);
> >>>>
> >>>> You probably want "range.pci_addr + range.size" here just in case the bottom of
> >>>> the window is ever non-zero. Is there definitely only ever a single inbound
> >>>> window to consider?
> >>>
> >>> Cadence IP has 3 inbound address translation registers, however we use only 1
> >>> inbound address translation register to map the entire 32 bit or 64 bit address
> >>> region.
> >>
> >> OK, if anything that further strengthens the argument for deprecating a single
> >> "number of bits" property in favour of ranges that accurately describe the
> >> window(s). However it also suggests that other users in future might have some
> >> expectation that specifying "dma-ranges" with up to 3 entries should work to
> >> allow a more restrictive inbound configuration. Thus it would be desirable to
> >> make the code a little more robust here - even if we don't support multiple
> >> windows straight off, it would still be better to implement it in a way that
> >> can be cleanly extended later, and at least say something if more ranges are
> >> specified rather than just silently ignoring them.
> > 
> > I looked at this further in the Cadence user doc. The three inbound ATU entries
> > are for BAR0, BAR1 in RC configuration space and the third one is for NO MATCH
> > BAR when there is no matching found in RC BARs. Right now we always configure
> > the NO MATCH BAR. Would it be possible describe at BAR granularity in dma-ranges?
> 
> I was thinking if I could use something like
> dma-ranges = <0x02000000 0x0 0x0 0x0 0x0 0x00000 0x0>, //For BAR0 IB mapping
> 	     <0x02000000 0x0 0x0 0x0 0x0 0x00000 0x0>, //For BAR1 IB mapping
> 	     <0x02000000 0x0 0x0 0x0 0x0 0x10000 0x0>; //NO MATCH BAR
> 
> This way the driver can tell the 1st tuple is for BAR0, 2nd is for BAR1 and
> last is for NO MATCH. In the above case both BAR0 and BAR1 is just empty and
> doesn't have valid values as we use only the NO MATCH BAR.
> 
> However I'm not able to use for_each_of_pci_range() in Cadence driver to get
> the configuration for each BAR, since the for loop gets invoked only once since
> of_pci_range_parser_one() merges contiguous addresses.

NO_MATCH_BAR could just be the last entry no matter how many? Who cares 
if they get merged? Maybe each BAR has max size and dma-ranges could 
exceed that, but if so you have to handle that and split them again.

> Do you think I should extend the flags cell to differentiate between BAR0, BAR1
> and NO MATCH BAR? Can you suggest any other alternatives?

If you just have 1 region, then just 1 entry makes sense to me. Why 
can't you use BAR0 in that case?

Rob
Kishon Vijay Abraham I May 8, 2020, 8:49 a.m. UTC | #8
Hi Rob,

On 5/8/2020 1:56 AM, Rob Herring wrote:
> On Wed, May 06, 2020 at 08:52:13AM +0530, Kishon Vijay Abraham I wrote:
>> Hi Robin,
>>
>> On 5/4/2020 6:23 PM, Kishon Vijay Abraham I wrote:
>>> Hi Robin,
>>>
>>> On 5/4/2020 4:24 PM, Robin Murphy wrote:
>>>> On 2020-05-04 9:44 am, Kishon Vijay Abraham I wrote:
>>>>> Hi Robin,
>>>>>
>>>>> On 5/1/2020 9:24 PM, Robin Murphy wrote:
>>>>>> On 2020-05-01 3:46 pm, Lorenzo Pieralisi wrote:
>>>>>>> [+Robin - to check on dma-ranges intepretation]
>>>>>>>
>>>>>>> I would need RobH and Robin to review this.
>>>>>>>
>>>>>>> Also, An ACK from Tom is required - for the whole series.
>>>>>>>
>>>>>>> On Fri, Apr 17, 2020 at 05:13:20PM +0530, Kishon Vijay Abraham I wrote:
>>>>>>>> Cadence PCIe core driver (host mode) uses "cdns,no-bar-match-nbits"
>>>>>>>> property to configure the number of bits passed through from PCIe
>>>>>>>> address to internal address in Inbound Address Translation register.
>>>>>>>>
>>>>>>>> However standard PCI dt-binding already defines "dma-ranges" to
>>>>>>>> describe the address range accessible by PCIe controller. Parse
>>>>>>>> "dma-ranges" property to configure the number of bits passed
>>>>>>>> through from PCIe address to internal address in Inbound Address
>>>>>>>> Translation register.
>>>>>>>>
>>>>>>>> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
>>>>>>>> ---
>>>>>>>>    drivers/pci/controller/cadence/pcie-cadence-host.c | 13 +++++++++++--
>>>>>>>>    1 file changed, 11 insertions(+), 2 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>>>>> b/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>>>>> index 9b1c3966414b..60f912a657b9 100644
>>>>>>>> --- a/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>>>>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>>>>> @@ -206,8 +206,10 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>>>>>>>>        struct device *dev = rc->pcie.dev;
>>>>>>>>        struct platform_device *pdev = to_platform_device(dev);
>>>>>>>>        struct device_node *np = dev->of_node;
>>>>>>>> +    struct of_pci_range_parser parser;
>>>>>>>>        struct pci_host_bridge *bridge;
>>>>>>>>        struct list_head resources;
>>>>>>>> +    struct of_pci_range range;
>>>>>>>>        struct cdns_pcie *pcie;
>>>>>>>>        struct resource *res;
>>>>>>>>        int ret;
>>>>>>>> @@ -222,8 +224,15 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>>>>>>>>        rc->max_regions = 32;
>>>>>>>>        of_property_read_u32(np, "cdns,max-outbound-regions",
>>>>>>>> &rc->max_regions);
>>>>>>>>    -    rc->no_bar_nbits = 32;
>>>>>>>> -    of_property_read_u32(np, "cdns,no-bar-match-nbits", &rc->no_bar_nbits);
>>>>>>>> +    if (!of_pci_dma_range_parser_init(&parser, np))
>>>>>>>> +        if (of_pci_range_parser_one(&parser, &range))
>>>>>>>> +            rc->no_bar_nbits = ilog2(range.size);
>>>>>>
>>>>>> You probably want "range.pci_addr + range.size" here just in case the bottom of
>>>>>> the window is ever non-zero. Is there definitely only ever a single inbound
>>>>>> window to consider?
>>>>>
>>>>> Cadence IP has 3 inbound address translation registers, however we use only 1
>>>>> inbound address translation register to map the entire 32 bit or 64 bit address
>>>>> region.
>>>>
>>>> OK, if anything that further strengthens the argument for deprecating a single
>>>> "number of bits" property in favour of ranges that accurately describe the
>>>> window(s). However it also suggests that other users in future might have some
>>>> expectation that specifying "dma-ranges" with up to 3 entries should work to
>>>> allow a more restrictive inbound configuration. Thus it would be desirable to
>>>> make the code a little more robust here - even if we don't support multiple
>>>> windows straight off, it would still be better to implement it in a way that
>>>> can be cleanly extended later, and at least say something if more ranges are
>>>> specified rather than just silently ignoring them.
>>>
>>> I looked at this further in the Cadence user doc. The three inbound ATU entries
>>> are for BAR0, BAR1 in RC configuration space and the third one is for NO MATCH
>>> BAR, used when no match is found in the RC BARs. Right now we always configure
>>> the NO MATCH BAR. Would it be possible to describe at BAR granularity in dma-ranges?
>>
>> I was thinking if I could use something like
>> dma-ranges = <0x02000000 0x0 0x0 0x0 0x0 0x00000 0x0>, //For BAR0 IB mapping
>> 	     <0x02000000 0x0 0x0 0x0 0x0 0x00000 0x0>, //For BAR1 IB mapping
>> 	     <0x02000000 0x0 0x0 0x0 0x0 0x10000 0x0>; //NO MATCH BAR
>>
>> This way the driver can tell the 1st tuple is for BAR0, 2nd is for BAR1 and
>> last is for NO MATCH. In the above case both BAR0 and BAR1 are just empty and
>> don't have valid values as we use only the NO MATCH BAR.
>>
>> However I'm not able to use for_each_of_pci_range() in the Cadence driver to
>> get the configuration for each BAR: the for loop gets invoked only once
>> because of_pci_range_parser_one() merges contiguous addresses.
> 
> NO_MATCH_BAR could just be the last entry no matter how many? Who cares 
> if they get merged? Maybe each BAR has max size and dma-ranges could 
> exceed that, but if so you have to handle that and split them again.

Each of RP_BAR0, RP_BAR1 and RP_NO_BAR has a separate register to be configured.
If the ranges get merged, we'll lose the info on which of the registers should
be configured. The Cadence IP specifies the maximum size of BAR0 as 256 GB and
the maximum size of BAR1 as 2 GB. However, when I specify dma-ranges like below
and use for_each_of_pci_range(&parser, &range), the first range itself is 258 GB.

dma-ranges = <0x02000000 0x00 0x0 0x00 0x0 0x40 0x00000000>, /* BAR0 256 GB */
	     <0x02000000 0x40 0x0 0x40 0x0 0x00 0x80000000>; /* BAR1 2 GB */
> 
>> Do you think I should extend the flags cell to differentiate between BAR0, BAR1
>> and NO MATCH BAR? Can you suggest any other alternatives?
> 
> If you just have 1 region, then just 1 entry makes sense to me. Why 
> can't you use BAR0 in that case?

Well, Cadence has specified a max size for each BAR. I think we could specify a
single region (48 bits in my case) in dma-ranges and let the driver decide how
to split it among BAR0, BAR1 and NO_MATCH_BAR?

Thanks
Kishon
Kishon Vijay Abraham I May 8, 2020, 11:51 a.m. UTC | #9
Hi Rob, Robin,

On 5/8/2020 2:19 PM, Kishon Vijay Abraham I wrote:
> Hi Rob,
> 
> On 5/8/2020 1:56 AM, Rob Herring wrote:
>> On Wed, May 06, 2020 at 08:52:13AM +0530, Kishon Vijay Abraham I wrote:
>>> Hi Robin,
>>>
>>> On 5/4/2020 6:23 PM, Kishon Vijay Abraham I wrote:
>>>> Hi Robin,
>>>>
>>>> On 5/4/2020 4:24 PM, Robin Murphy wrote:
>>>>> On 2020-05-04 9:44 am, Kishon Vijay Abraham I wrote:
>>>>>> Hi Robin,
>>>>>>
>>>>>> On 5/1/2020 9:24 PM, Robin Murphy wrote:
>>>>>>> On 2020-05-01 3:46 pm, Lorenzo Pieralisi wrote:
>>>>>>>> [+Robin - to check on dma-ranges interpretation]
>>>>>>>>
>>>>>>>> I would need RobH and Robin to review this.
>>>>>>>>
>>>>>>>> Also, An ACK from Tom is required - for the whole series.
>>>>>>>>
>>>>>>>> On Fri, Apr 17, 2020 at 05:13:20PM +0530, Kishon Vijay Abraham I wrote:
>>>>>>>>> Cadence PCIe core driver (host mode) uses "cdns,no-bar-match-nbits"
>>>>>>>>> property to configure the number of bits passed through from PCIe
>>>>>>>>> address to internal address in Inbound Address Translation register.
>>>>>>>>>
>>>>>>>>> However standard PCI dt-binding already defines "dma-ranges" to
>>>>>>>>> describe the address range accessible by PCIe controller. Parse
>>>>>>>>> "dma-ranges" property to configure the number of bits passed
>>>>>>>>> through from PCIe address to internal address in Inbound Address
>>>>>>>>> Translation register.
>>>>>>>>>
>>>>>>>>> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
>>>>>>>>> ---
>>>>>>>>>    drivers/pci/controller/cadence/pcie-cadence-host.c | 13 +++++++++++--
>>>>>>>>>    1 file changed, 11 insertions(+), 2 deletions(-)
>>>>>>>>>
>>>>>>>>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>>>>>> b/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>>>>>> index 9b1c3966414b..60f912a657b9 100644
>>>>>>>>> --- a/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>>>>>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
>>>>>>>>> @@ -206,8 +206,10 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>>>>>>>>>        struct device *dev = rc->pcie.dev;
>>>>>>>>>        struct platform_device *pdev = to_platform_device(dev);
>>>>>>>>>        struct device_node *np = dev->of_node;
>>>>>>>>> +    struct of_pci_range_parser parser;
>>>>>>>>>        struct pci_host_bridge *bridge;
>>>>>>>>>        struct list_head resources;
>>>>>>>>> +    struct of_pci_range range;
>>>>>>>>>        struct cdns_pcie *pcie;
>>>>>>>>>        struct resource *res;
>>>>>>>>>        int ret;
>>>>>>>>> @@ -222,8 +224,15 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>>>>>>>>>        rc->max_regions = 32;
>>>>>>>>>        of_property_read_u32(np, "cdns,max-outbound-regions",
>>>>>>>>> &rc->max_regions);
>>>>>>>>>    -    rc->no_bar_nbits = 32;
>>>>>>>>> -    of_property_read_u32(np, "cdns,no-bar-match-nbits", &rc->no_bar_nbits);
>>>>>>>>> +    if (!of_pci_dma_range_parser_init(&parser, np))
>>>>>>>>> +        if (of_pci_range_parser_one(&parser, &range))
>>>>>>>>> +            rc->no_bar_nbits = ilog2(range.size);
>>>>>>>
>>>>>>> You probably want "range.pci_addr + range.size" here just in case the bottom of
>>>>>>> the window is ever non-zero. Is there definitely only ever a single inbound
>>>>>>> window to consider?
>>>>>>
>>>>>> Cadence IP has 3 inbound address translation registers, however we use only 1
>>>>>> inbound address translation register to map the entire 32 bit or 64 bit address
>>>>>> region.
>>>>>
>>>>> OK, if anything that further strengthens the argument for deprecating a single
>>>>> "number of bits" property in favour of ranges that accurately describe the
>>>>> window(s). However it also suggests that other users in future might have some
>>>>> expectation that specifying "dma-ranges" with up to 3 entries should work to
>>>>> allow a more restrictive inbound configuration. Thus it would be desirable to
>>>>> make the code a little more robust here - even if we don't support multiple
>>>>> windows straight off, it would still be better to implement it in a way that
>>>>> can be cleanly extended later, and at least say something if more ranges are
>>>>> specified rather than just silently ignoring them.
>>>>
>>>> I looked at this further in the Cadence user doc. The three inbound ATU entries
>>>> are for BAR0, BAR1 in RC configuration space and the third one is for NO MATCH
>>>> BAR, used when no match is found in the RC BARs. Right now we always configure
>>>> the NO MATCH BAR. Would it be possible to describe at BAR granularity in dma-ranges?
>>>
>>> I was thinking if I could use something like
>>> dma-ranges = <0x02000000 0x0 0x0 0x0 0x0 0x00000 0x0>, //For BAR0 IB mapping
>>> 	     <0x02000000 0x0 0x0 0x0 0x0 0x00000 0x0>, //For BAR1 IB mapping
>>> 	     <0x02000000 0x0 0x0 0x0 0x0 0x10000 0x0>; //NO MATCH BAR
>>>
>>> This way the driver can tell the 1st tuple is for BAR0, 2nd is for BAR1 and
>>> last is for NO MATCH. In the above case both BAR0 and BAR1 are just empty and
>>> don't have valid values as we use only the NO MATCH BAR.
>>>
>>> However I'm not able to use for_each_of_pci_range() in the Cadence driver to
>>> get the configuration for each BAR: the for loop gets invoked only once
>>> because of_pci_range_parser_one() merges contiguous addresses.
>>
>> NO_MATCH_BAR could just be the last entry no matter how many? Who cares 
>> if they get merged? Maybe each BAR has max size and dma-ranges could 
>> exceed that, but if so you have to handle that and split them again.
> 
> Each of RP_BAR0, RP_BAR1 and RP_NO_BAR has a separate register to be configured.
> If the ranges get merged, we'll lose the info on which of the registers should
> be configured. The Cadence IP specifies the maximum size of BAR0 as 256 GB and
> the maximum size of BAR1 as 2 GB. However, when I specify dma-ranges like below
> and use for_each_of_pci_range(&parser, &range), the first range itself is 258 GB.
> 
> dma-ranges = <0x02000000 0x00 0x0 0x00 0x0 0x40 0x00000000>, /* BAR0 256 GB */
> 	     <0x02000000 0x40 0x0 0x40 0x0 0x00 0x80000000>; /* BAR1 2 GB */
>>
>>> Do you think I should extend the flags cell to differentiate between BAR0, BAR1
>>> and NO MATCH BAR? Can you suggest any other alternatives?
>>
>> If you just have 1 region, then just 1 entry makes sense to me. Why 
>> can't you use BAR0 in that case?
> 
> Well, Cadence has specified a max size for each BAR. I think we could specify a
> single region (48 bits in my case) in dma-ranges and let the driver decide how
> to split it among BAR0, BAR1 and NO_MATCH_BAR?

Okay, I'll add support in the driver for parsing multiple dma-ranges
(non-consecutive regions) and have the driver split the regions based on the
maximum size supported by each BAR.

This means we will not directly use NO_MATCH_BAR, but will first fill up BAR0
and BAR1, and only then map the remaining space through NO_MATCH_BAR.
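The fill order described above (BAR0 first, then BAR1, then the NO MATCH BAR) is just arithmetic over the window sizes, and can be sketched in plain C. This is an illustrative userspace model, not the Cadence driver code: the struct, the function name, and the 256 GB / 2 GB limits (the figures quoted earlier in this thread) are assumptions for illustration only.

```c
#include <stdint.h>

/* Maximum inbound window sizes mentioned earlier in the thread (assumed). */
#define RP_BAR0_MAX (256ULL << 30) /* 256 GB */
#define RP_BAR1_MAX (2ULL << 30)   /*   2 GB */

struct ib_split {
	uint64_t bar0, bar1, no_bar; /* bytes mapped via each register */
};

/*
 * Fill RP_BAR0, then RP_BAR1, and map only the remainder through the
 * NO MATCH BAR -- the strategy described in the message above.
 */
static struct ib_split split_dma_range(uint64_t size)
{
	struct ib_split s = { 0 };

	s.bar0 = size < RP_BAR0_MAX ? size : RP_BAR0_MAX;
	size -= s.bar0;
	s.bar1 = size < RP_BAR1_MAX ? size : RP_BAR1_MAX;
	size -= s.bar1;
	s.no_bar = size; /* whatever is left over */
	return s;
}
```

For a single 48-bit region, this would put 256 GB in BAR0, 2 GB in BAR1, and the remaining 2^48 - 258 GB in the NO MATCH BAR. A real implementation would also have to respect each BAR's alignment/power-of-two constraints and program the corresponding RP_BAR0/RP_BAR1/RP_NO_BAR registers.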

Thanks
Kishon
Patch

diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
index 9b1c3966414b..60f912a657b9 100644
--- a/drivers/pci/controller/cadence/pcie-cadence-host.c
+++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
@@ -206,8 +206,10 @@  int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
 	struct device *dev = rc->pcie.dev;
 	struct platform_device *pdev = to_platform_device(dev);
 	struct device_node *np = dev->of_node;
+	struct of_pci_range_parser parser;
 	struct pci_host_bridge *bridge;
 	struct list_head resources;
+	struct of_pci_range range;
 	struct cdns_pcie *pcie;
 	struct resource *res;
 	int ret;
@@ -222,8 +224,15 @@  int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
 	rc->max_regions = 32;
 	of_property_read_u32(np, "cdns,max-outbound-regions", &rc->max_regions);
 
-	rc->no_bar_nbits = 32;
-	of_property_read_u32(np, "cdns,no-bar-match-nbits", &rc->no_bar_nbits);
+	if (!of_pci_dma_range_parser_init(&parser, np))
+		if (of_pci_range_parser_one(&parser, &range))
+			rc->no_bar_nbits = ilog2(range.size);
+
+	if (!rc->no_bar_nbits) {
+		rc->no_bar_nbits = 32;
+		of_property_read_u32(np, "cdns,no-bar-match-nbits",
+				     &rc->no_bar_nbits);
+	}
 
 	rc->vendor_id = 0xffff;
 	of_property_read_u16(np, "vendor-id", &rc->vendor_id);
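Robin's earlier review comment — use "range.pci_addr + range.size" rather than the size alone, in case the bottom of the window is non-zero — can be sketched as a small userspace model. `ilog2_u64` and `no_bar_nbits` are illustrative names, not the kernel's helpers; for the power-of-two values used here, `ilog2_u64` behaves like the kernel's `ilog2()`.

```c
#include <stdint.h>

/* floor(log2(v)) for v > 0, mirroring the kernel's ilog2() */
static unsigned int ilog2_u64(uint64_t v)
{
	unsigned int n = 0;

	while (v >>= 1)
		n++;
	return n;
}

/*
 * Size the no-BAR-match window from the *end* of the inbound range
 * (pci_addr + size), so that a window with a non-zero base address
 * is still fully covered.
 */
static unsigned int no_bar_nbits(uint64_t pci_addr, uint64_t size)
{
	return ilog2_u64(pci_addr + size);
}
```

With a zero base this matches the patch's `ilog2(range.size)`; with a 4 GB window starting at 4 GB it yields 33 bits instead of 32. For a range whose end is not a power of two, a real implementation would presumably round up (order_base_2-style) so the whole range stays covered.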