Description
Is your feature request related to a problem? Please describe.
When deleting many blobs with the batch API, the call sometimes raises `ServiceUnavailable: 503 BATCH contentid://None: We encountered an internal error. Please try again.`. Having this raised in the middle of a large deletion job is undesirable.
Describe the solution you'd like
I tried setting the retry parameter at the client level (`client.get_bucket(bucket_name, retry=retry, timeout=600)`) and at the blob level (`blob.delete(retry=retry, timeout=600)`), even forcing `if_generation_match=blob.generation`. No retry seems to be performed. The batch class does not appear to apply any retry here:
python-storage/google/cloud/storage/batch.py, line 309 (at commit c52e882): `response = self._client._base_connection._make_request(`
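For concreteness, here is a minimal sketch of the configuration attempts described above (the bucket name and prefix are placeholders); this is what does not trigger any retries today:

```python
from google.api_core.exceptions import ServiceUnavailable
from google.api_core.retry import Retry, if_exception_type
from google.cloud import storage

client = storage.Client()
retry = Retry(predicate=if_exception_type(ServiceUnavailable))

# Placeholder bucket/prefix; retry and timeout are accepted here.
bucket = client.get_bucket("my-bucket", retry=retry, timeout=600)
blobs = list(bucket.list_blobs(prefix="data/"))

with client.batch():
    for blob in blobs:
        # Accepted without error, but inside a batch the request goes
        # through Batch's _make_request, which ignores the retry policy.
        blob.delete(retry=retry, timeout=600)
```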
Either the client could support retries here, or, at the very least, the batch object should expose the blobs (subtasks) that could not be deleted so that we can retry them manually.
A manual retry of the full batch (a simple for loop) does not work either, since some of the blobs were already deleted on the first attempt, so the second attempt raises a 404 for them (a workaround sketch follows below).
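One workaround, given the current behaviour, is to re-check which blobs still exist before each retry, so a partially successful batch does not produce 404s on the next pass. A rough sketch (the helper name, backoff scheme, and bucket/prefix are made up, and the per-blob `exists()` calls add one extra request each):

```python
import time

from google.api_core.exceptions import NotFound, ServiceUnavailable
from google.cloud import storage


def delete_batch_with_retry(client, blobs, attempts=3, base_delay=2.0):
    """Delete blobs in a batch, retrying transient failures.

    Before each retry, re-check which blobs still exist so that blobs
    already deleted by a partially successful batch do not raise 404.
    """
    remaining = list(blobs)
    for attempt in range(attempts):
        try:
            with client.batch():
                for blob in remaining:
                    blob.delete()
            return  # the whole batch succeeded
        except (ServiceUnavailable, NotFound):
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
            # One extra GET per blob, but it drops the ones the failed
            # batch did manage to delete.
            remaining = [b for b in remaining if b.exists()]
            if not remaining:
                return


# Usage with placeholder names:
client = storage.Client()
bucket = client.bucket("my-bucket")
delete_batch_with_retry(client, list(bucket.list_blobs(prefix="data/")))
```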
Retry automatically, or give the user the ability to retry only the ones that fail.