
[1/3] tests/functional/asset: Fail asset fetch when retries are exceeded

Message ID 20250312051739.938441-2-npiggin@gmail.com (mailing list archive)
State New
Series tests/functional/asset: improve partial-download handling

Commit Message

Nicholas Piggin March 12, 2025, 5:17 a.m. UTC
Currently the fetch code does not fail gracefully when the retry limit is
exceeded: it simply falls out of the loop with no downloaded file, which
then triggers unrelated errors further on.

In preparation for adding more cases where a download is retried, add an
explicit check for the retry limit being exceeded.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 tests/functional/qemu_test/asset.py | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

Comments

Thomas Huth March 12, 2025, 6:40 a.m. UTC | #1
On 12/03/2025 06.17, Nicholas Piggin wrote:
> Currently the fetch code does not fail gracefully when the retry limit is
> exceeded: it simply falls out of the loop with no downloaded file, which
> then triggers unrelated errors further on.
> 
> In preparation for adding more cases where a download is retried, add an
> explicit check for the retry limit being exceeded.
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>   tests/functional/qemu_test/asset.py | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/tests/functional/qemu_test/asset.py b/tests/functional/qemu_test/asset.py
> index f0730695f09..6a1c92ffbef 100644
> --- a/tests/functional/qemu_test/asset.py
> +++ b/tests/functional/qemu_test/asset.py
> @@ -116,7 +116,10 @@ def fetch(self):
>           self.log.info("Downloading %s to %s...", self.url, self.cache_file)
>           tmp_cache_file = self.cache_file.with_suffix(".download")
>   
> -        for retries in range(3):
> +        for retries in range(4):
> +            if retries == 3:
> +                raise Exception("Retries exceeded downloading %s", self.url)
> +
>               try:
>                   with tmp_cache_file.open("xb") as dst:
>                       with urllib.request.urlopen(self.url) as resp:

Reviewed-by: Thomas Huth <thuth@redhat.com>
Daniel P. Berrangé March 12, 2025, 8:13 a.m. UTC | #2
On Wed, Mar 12, 2025 at 03:17:36PM +1000, Nicholas Piggin wrote:
> Currently the fetch code does not fail gracefully when the retry limit is
> exceeded: it simply falls out of the loop with no downloaded file, which
> then triggers unrelated errors further on.
> 
> In preparation for adding more cases where a download is retried, add an
> explicit check for the retry limit being exceeded.
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  tests/functional/qemu_test/asset.py | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/tests/functional/qemu_test/asset.py b/tests/functional/qemu_test/asset.py
> index f0730695f09..6a1c92ffbef 100644
> --- a/tests/functional/qemu_test/asset.py
> +++ b/tests/functional/qemu_test/asset.py
> @@ -116,7 +116,10 @@ def fetch(self):
>          self.log.info("Downloading %s to %s...", self.url, self.cache_file)
>          tmp_cache_file = self.cache_file.with_suffix(".download")
>  
> -        for retries in range(3):
> +        for retries in range(4):
> +            if retries == 3:
> +                raise Exception("Retries exceeded downloading %s", self.url)

While it works, it feels a bit weird to me. Given the error retry
scenario will unlink the file, I think it would be better to do

   if not os.path.exists(tmp_cache_file):
       raise Exception(...)

immediately after the for() loop.

> +
>              try:
>                  with tmp_cache_file.open("xb") as dst:
>                      with urllib.request.urlopen(self.url) as resp:
> -- 
> 2.47.1
> 

With regards,
Daniel
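
A minimal standalone sketch of the shape Daniel is suggesting, assuming the
retry loop breaks on success and each failed attempt unlinks the partial
file; the function signature and the shutil.copyfileobj call are
illustrative stand-ins, not the actual asset.py code:

    import os
    import shutil
    import urllib.request
    from pathlib import Path

    def fetch(url: str, cache_file: Path):
        tmp_cache_file = cache_file.with_suffix(".download")

        for retries in range(3):
            try:
                with tmp_cache_file.open("xb") as dst:
                    with urllib.request.urlopen(url) as resp:
                        shutil.copyfileobj(resp, dst)
                break  # download completed, keep the file
            except OSError:
                # failed attempt: discard any partial download and retry
                tmp_cache_file.unlink(missing_ok=True)

        # Daniel's suggestion: if no attempt left a file behind, every
        # retry failed, so raise instead of falling through silently
        if not os.path.exists(tmp_cache_file):
            raise Exception("Retries exceeded downloading %s" % url)

This keeps the loop bound at the real number of attempts and turns the
exhausted-retries case into a single check after the loop.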

Patch

diff --git a/tests/functional/qemu_test/asset.py b/tests/functional/qemu_test/asset.py
index f0730695f09..6a1c92ffbef 100644
--- a/tests/functional/qemu_test/asset.py
+++ b/tests/functional/qemu_test/asset.py
@@ -116,7 +116,10 @@  def fetch(self):
         self.log.info("Downloading %s to %s...", self.url, self.cache_file)
         tmp_cache_file = self.cache_file.with_suffix(".download")
 
-        for retries in range(3):
+        for retries in range(4):
+            if retries == 3:
+                raise Exception("Retries exceeded downloading %s", self.url)
+
             try:
                 with tmp_cache_file.open("xb") as dst:
                     with urllib.request.urlopen(self.url) as resp:
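
For comparison, Python's for/else construct can express the same
"retries exhausted" condition without the sentinel fourth iteration that
range(4) plus the retries == 3 test introduces. A hedged sketch of that
alternative inside fetch(), assuming (as above) that the success path
breaks out of the loop and that the copy and error handling are
simplified placeholders:

    for retries in range(3):
        try:
            with tmp_cache_file.open("xb") as dst:
                with urllib.request.urlopen(self.url) as resp:
                    shutil.copyfileobj(resp, dst)
            break  # success: leave the loop early
        except OSError:
            # drop the partial download before the next attempt
            tmp_cache_file.unlink(missing_ok=True)
    else:
        # runs only when the loop finished without hitting break,
        # i.e. every attempt failed
        raise Exception("Retries exceeded downloading %s" % self.url)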