
Retry batch delete blob on 503 #1277

@maingoh

Description


Is your feature request related to a problem? Please describe.

When deleting a lot of blobs using the batch API, it sometimes raises ServiceUnavailable: 503 BATCH contentid://None: We encountered an internal error. Please try again. This is undesirable, as it raises in the middle of a large deletion job.
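For reference, a minimal sketch of the pattern that triggers this; the bucket and prefix names are made up:

```python
# Sketch of the failing pattern; bucket/prefix names are hypothetical.
from google.cloud import storage

client = storage.Client()
blobs = list(client.list_blobs("my-bucket", prefix="tmp/"))

# Deletes are queued and sent as a single multipart request when the
# context exits; a 503 on the batch surfaces as ServiceUnavailable and
# aborts the whole deletion job.
with client.batch():
    for blob in blobs:
        blob.delete()
```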

Describe the solution you'd like

I tried setting the retry parameter at the client level (client.get_bucket(bucket_name, retry=retry, timeout=600)) and at the blob level (blob.delete(retry=retry, timeout=600)), and even forcing if_generation_match=blob.generation. No retry seems to be performed. The batch class does not appear to use any retry here:

response = self._client._base_connection._make_request(
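For completeness, this is roughly the retry policy that was passed (a sketch; bucket_name, blob_name, and the deadline value are illustrative). It is honored for plain per-blob calls, but apparently not for sub-requests sent through the batch:

```python
# Sketch of the retry policy that was tried; values are illustrative only.
from google.api_core.exceptions import ServiceUnavailable
from google.api_core.retry import Retry, if_exception_type

retry = Retry(predicate=if_exception_type(ServiceUnavailable), deadline=600.0)

bucket = client.get_bucket(bucket_name, retry=retry, timeout=600)
blob = bucket.blob(blob_name)
blob.delete(retry=retry, timeout=600)  # retried when called outside a batch
```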

Either the client could support retries for batch sub-requests, or at the very least the batch object should give access to the blobs (sub-requests) that could not be deleted so that we can retry them manually.
Retrying the full batch in a for loop does not work, because some of the blobs in the batch were already deleted by the first attempt and raise a 404 on the second attempt.
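A hedged sketch of the kind of fallback that works today, assuming the client and blobs from above: when the batch fails, retry blob by blob outside the batch (where the retry argument is honored), ignoring 404s for blobs the partially successful batch already removed.

```python
# Fallback sketch: retry blob-by-blob after a failed batch, skipping blobs
# that the first (partially successful) attempt already deleted.
from google.api_core.exceptions import NotFound, ServiceUnavailable
from google.api_core.retry import Retry, if_exception_type

retry_503 = Retry(predicate=if_exception_type(ServiceUnavailable), deadline=600.0)

try:
    with client.batch():
        for blob in blobs:
            blob.delete()
except ServiceUnavailable:
    for blob in blobs:
        try:
            # Outside a batch the retry argument is honored per request.
            blob.delete(retry=retry_503, timeout=600)
        except NotFound:
            pass  # already deleted by the first attempt
```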

A clear and concise description of what you want to happen.

Retry automatically, or give the user the ability to retry only the requests that failed.
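Purely as an illustration of the second option, a hypothetical interface (the failed_requests attribute does not exist today) could look like this:

```python
# Hypothetical interface sketch (not an existing API): the batch exposes
# which queued requests failed so only those get retried.
try:
    with client.batch() as batch:
        for blob in blobs:
            blob.delete()
except ServiceUnavailable:
    for failed_blob in batch.failed_requests:  # hypothetical attribute
        failed_blob.delete(retry=retry_503, timeout=600)
```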


Labels

api: storage (Issues related to the googleapis/python-storage API.)
priority: p3 (Desirable enhancement or fix. May not be included in next release.)
type: feature request ('Nice-to-have' improvement, new feature or different behavior or design.)
