Support batch upload and delete for auth files #2335

Merged
luispater merged 1 commit into dev from auth on Mar 25, 2026

Conversation

@hkfires (Collaborator) commented Mar 25, 2026

Implement functionality to handle batch uploads and deletions of authentication files, enhancing the API's efficiency in managing multiple files simultaneously.

@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request significantly enhances the API's efficiency for managing authentication files by introducing robust batch processing capabilities. Users can now upload and delete multiple authentication files simultaneously, improving the overall user experience and reducing the overhead of individual file operations. The changes include comprehensive error handling and detailed status reporting for batch actions.

Highlights

  • Batch Upload Functionality: Implemented the ability to upload multiple authentication files simultaneously using multipart form data, enhancing efficiency for managing numerous files.
  • Batch Delete Functionality: Added support for deleting multiple authentication files at once, accepting file names via query parameters or a JSON request body.
  • Enhanced Error Handling and Reporting: Improved error handling for batch operations, including the use of HTTP 207 Multi-Status for partial successes during uploads and deletions, providing detailed feedback on individual file outcomes.
  • Code Refactoring and Modularity: Refactored existing file handling logic into several new, dedicated helper functions to improve code organization, readability, and maintainability.
  • Comprehensive Testing: Introduced a new test file with specific unit tests to cover the new batch upload and delete functionalities, ensuring their correctness and robustness.
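
The HTTP 207 Multi-Status behavior described above can be sketched with the standard library alone. The response shape below (uploaded/failed arrays with name/error fields) is an assumption for illustration, not the PR's exact schema:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// fileResult is an assumed per-file entry; the PR's real field names may differ.
type fileResult struct {
	Name  string `json:"name"`
	Error string `json:"error,omitempty"`
}

// batchStatus picks the overall HTTP status for a batch operation:
// 200 when every file succeeded, 400 when every file failed,
// and 207 Multi-Status for a mix of successes and failures.
func batchStatus(succeeded, failed []fileResult) int {
	switch {
	case len(failed) == 0:
		return http.StatusOK
	case len(succeeded) == 0:
		return http.StatusBadRequest
	default:
		return http.StatusMultiStatus
	}
}

func main() {
	ok := []fileResult{{Name: "a.json"}}
	bad := []fileResult{{Name: "b.txt", Error: "file must be .json"}}
	body, _ := json.Marshal(map[string]any{"uploaded": ok, "failed": bad})
	fmt.Println(batchStatus(ok, bad), string(body)) // 207 plus the combined report
}
```

The status selection is the design point: callers can branch on 200/207/400 without parsing the body, but still get per-file detail when they need it.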



@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces batch upload and delete functionalities for authentication files. The UploadAuthFile handler has been refactored to support multiple file uploads via multipart form data, providing detailed status responses for individual file successes and failures. Similarly, the DeleteAuthFile handler now supports batch deletion of files specified through query parameters or the request body. New helper functions were added to modularize the file handling logic. The review comments suggest improvements such as correcting the content type check for multipart requests, standardizing error messages for JSON file validation, simplifying error handling in batch upload loops, ensuring file cleanup if auth record registration fails, and restoring the request body after reading it to prevent issues with subsequent handlers.

}

func (h *Handler) multipartAuthFileHeaders(c *gin.Context) ([]*multipart.FileHeader, error) {
	if h == nil || c == nil || c.ContentType() != "multipart/form-data" {

Severity: high

The check c.ContentType() != "multipart/form-data" is incorrect because the Content-Type header for multipart requests includes a boundary parameter (e.g., multipart/form-data; boundary=...). This will cause the function to incorrectly return early for valid multipart requests. You should check if the content type starts with multipart/form-data instead.

Suggested change
if h == nil || c == nil || c.ContentType() != "multipart/form-data" {
if h == nil || c == nil || !strings.HasPrefix(c.ContentType(), "multipart/form-data") {
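
The boundary problem is easy to verify against a raw Content-Type header with the standard library (the header value here is illustrative):

```go
package main

import (
	"fmt"
	"mime"
	"strings"
)

// isMultipartForm reports whether a raw Content-Type header denotes
// multipart/form-data, tolerating the boundary parameter that HTTP
// clients always append.
func isMultipartForm(contentType string) bool {
	mediaType, _, err := mime.ParseMediaType(contentType)
	if err != nil {
		return false
	}
	return mediaType == "multipart/form-data"
}

func main() {
	ct := "multipart/form-data; boundary=xYzZY"
	fmt.Println(ct == "multipart/form-data")                  // false: exact match rejects valid requests
	fmt.Println(strings.HasPrefix(ct, "multipart/form-data")) // true: the suggested prefix check
	fmt.Println(isMultipartForm(ct))                          // true: parsing is stricter than a prefix check
}
```

mime.ParseMediaType is slightly more robust than a prefix check because it also normalizes case and rejects malformed headers.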

callbackForwardersMu  sync.Mutex
callbackForwarders    = make(map[int]*callbackForwarder)
errAuthFileMustBeJSON = errors.New("auth file must be .json")

Severity: medium

The error message for errAuthFileMustBeJSON is "auth file must be .json", but in UploadAuthFile you are creating responses with "file must be .json". To improve consistency and simplify the error handling logic in UploadAuthFile, consider aligning the error message here.

Suggested change
errAuthFileMustBeJSON = errors.New("auth file must be .json")
errAuthFileMustBeJSON = errors.New("file must be .json")

Comment on lines +583 to 590
if _, errUpload := h.storeUploadedAuthFile(ctx, fileHeaders[0]); errUpload != nil {
	if errors.Is(errUpload, errAuthFileMustBeJSON) {
		c.JSON(http.StatusBadRequest, gin.H{"error": "file must be .json"})
		return
	}
	c.JSON(http.StatusInternalServerError, gin.H{"error": errUpload.Error()})
	return
}

Severity: medium

This error handling can be made more concise. By checking for errAuthFileMustBeJSON you can set the appropriate status code and avoid duplicating logic. Assuming errAuthFileMustBeJSON's message is updated as per another suggestion, this becomes even cleaner.

Suggested change
if _, errUpload := h.storeUploadedAuthFile(ctx, fileHeaders[0]); errUpload != nil {
	if errors.Is(errUpload, errAuthFileMustBeJSON) {
		c.JSON(http.StatusBadRequest, gin.H{"error": "file must be .json"})
		return
	}
	c.JSON(http.StatusInternalServerError, gin.H{"error": errUpload.Error()})
	return
}
if _, errUpload := h.storeUploadedAuthFile(ctx, fileHeaders[0]); errUpload != nil {
	status := http.StatusInternalServerError
	if errors.Is(errUpload, errAuthFileMustBeJSON) {
		status = http.StatusBadRequest
	}
	c.JSON(status, gin.H{"error": errUpload.Error()})
	return
}

Comment on lines +599 to +609
if errUpload != nil {
	failureName := ""
	if file != nil {
		failureName = filepath.Base(file.Filename)
	}
	msg := errUpload.Error()
	if errors.Is(errUpload, errAuthFileMustBeJSON) {
		msg = "file must be .json"
	}
	failed = append(failed, gin.H{"name": failureName, "error": msg})
	continue

Severity: medium

This error handling block can be simplified. The if file != nil check is redundant since file is a loop variable and will not be nil here. Also, if errAuthFileMustBeJSON is updated as suggested in another comment, the special handling for the error message can be removed.

if errUpload != nil {
	failed = append(failed, gin.H{
		"name":  filepath.Base(file.Filename),
		"error": errUpload.Error(),
	})
	continue
}

Comment on lines +794 to +796
if err := h.upsertAuthRecord(ctx, auth); err != nil {
	return err
}

Severity: medium

There's a potential for an inconsistent state here. If h.upsertAuthRecord(ctx, auth) fails, the auth file has already been written to disk but won't be tracked by the auth manager. This could lead to orphaned auth files. Consider removing the file if the registration fails to maintain consistency.

Suggested change
if err := h.upsertAuthRecord(ctx, auth); err != nil {
	return err
}
if err := h.upsertAuthRecord(ctx, auth); err != nil {
	// Attempt to clean up the file if registration fails.
	if errRemove := os.Remove(dst); errRemove != nil && !os.IsNotExist(errRemove) {
		log.WithError(errRemove).Warnf("failed to remove auth file after registration failure: %s", dst)
	}
	return err
}

Comment on lines +809 to +812
body, err := io.ReadAll(c.Request.Body)
if err != nil {
	return nil, fmt.Errorf("failed to read body")
}

Severity: medium

Reading the request body with io.ReadAll(c.Request.Body) consumes it, which can lead to unexpected behavior if other handlers or middlewares attempt to read the body later. It's a good practice to restore the request body after reading it so it can be read again if needed.

Suggested change
body, err := io.ReadAll(c.Request.Body)
if err != nil {
	return nil, fmt.Errorf("failed to read body")
}
body, err := io.ReadAll(c.Request.Body)
if err != nil {
	return nil, fmt.Errorf("failed to read body")
}
c.Request.Body = io.NopCloser(bytes.NewBuffer(body))


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9e5693e74f


}
defer src.Close()

data, err := io.ReadAll(src)

P2: Stream multipart uploads instead of buffering the entire file

storeUploadedAuthFile now calls io.ReadAll(src) for every multipart file before writing it, so request memory usage scales with file size; a large (or intentionally oversized) upload can spike RSS and potentially OOM the process before any disk write occurs. This is a regression from the prior streamed SaveUploadedFile path, and it is especially risky now that batch upload is supported. This path should stream to disk (or enforce strict size limits) rather than fully buffering.


@luispater luispater merged commit 76c064c into dev Mar 25, 2026
2 checks passed
@luispater luispater deleted the auth branch March 25, 2026 01:34