feat(formatter): add --format tsv output format #426
abhiram304 wants to merge 4 commits into googleworkspace:main
Conversation
Add `Tsv` as a new `OutputFormat` variant alongside the existing JSON, Table, YAML, and CSV formats. TSV (tab-separated values) is the standard format for shell pipeline tools such as `cut -f2` and `awk -F'\t'`, making it a natural companion to the existing CSV format for scripting use cases.

Behaviour mirrors `--format csv`:

- Array-of-objects: header row + data rows separated by tabs
- Array-of-arrays (e.g. Sheets values API): rows separated by tabs
- Flat scalars: one value per line
- `--page-all` pagination: header emitted only on the first page

Tab and newline characters inside field values are replaced with spaces to preserve column structure, matching the behaviour of Google Sheets TSV export and PostgreSQL `COPY`.
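To illustrate the pipeline use case, here is a minimal sketch. The CLI invocation is not shown because the tool's name and subcommands are outside this excerpt; `printf` stands in for the formatter's array-of-objects output (header row, then one row per object):

```shell
# Simulated `--format tsv` output piped through cut, the classic
# tab-separated field selector. Field 2 is the `name` column.
printf 'id\tname\n1\tAda\n2\tGrace\n' | cut -f2
# prints:
#   name
#   Ada
#   Grace
```

Because embedded tabs and newlines are replaced with spaces during formatting, `cut` and `awk -F'\t'` can rely on one record per line and one field per tab.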
🦋 Changeset detected. Latest commit: 29ccee4. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request enhances the CLI tool by adding a new Tab-Separated Values (TSV) output format. This feature provides a standard, shell-friendly data export option, complementing the existing CSV format. The implementation ensures consistent behavior with other structured output formats, particularly CSV, regarding data representation and pagination, while also handling special characters within values to preserve data integrity.
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View the failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Code Review
This pull request adds support for a TSV output format. The implementation largely mirrors the existing CSV formatting logic, and the changes are well-tested. I've identified a performance issue in the new column-header collection code, which is duplicated from the existing CSV formatting logic, and I've provided a suggestion for a more efficient implementation. Overall, this is a valuable addition.
let mut columns: Vec<String> = Vec::new();
for item in arr {
    if let Value::Object(obj) = item {
        for key in obj.keys() {
            if !columns.contains(key) {
                columns.push(key.clone());
            }
        }
    }
}
This loop for collecting columns has a performance issue. Using columns.contains(key) performs a linear scan for every key, so the overall column collection is O(N²) in the number of keys, which can be slow for large datasets.
A more efficient approach is to use a HashSet to track seen keys, which provides O(1) average lookup time.
This logic is also duplicated from format_csv_page, so a similar fix would be beneficial there. A future refactoring to merge format_csv_page and format_tsv_page could address the duplication and fix the issue in both places.
You may need to add use std::collections::HashSet; at the top of the file.
Suggested change — before:

let mut columns: Vec<String> = Vec::new();
for item in arr {
    if let Value::Object(obj) = item {
        for key in obj.keys() {
            if !columns.contains(key) {
                columns.push(key.clone());
            }
        }
    }
}

After:

let mut columns: Vec<String> = Vec::new();
let mut seen_keys = std::collections::HashSet::new();
for item in arr {
    if let Value::Object(obj) = item {
        for key in obj.keys() {
            if seen_keys.insert(key.as_str()) {
                columns.push(key.clone());
            }
        }
    }
}
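The property this suggestion relies on is that `HashSet::insert` returns `true` only the first time a value is added, so a single pass can deduplicate while preserving first-seen order. A minimal standalone sketch (the `dedup_preserving_order` helper is invented for the demo; the PR's real code operates on `serde_json` object keys):

```rust
use std::collections::HashSet;

/// Deduplicate while preserving first-seen order. `HashSet::insert`
/// returns `true` only when the value was not already present, giving
/// O(1) average membership checks instead of Vec::contains's O(N) scan.
fn dedup_preserving_order(keys: &[&str]) -> Vec<String> {
    let mut seen = HashSet::new();
    let mut out = Vec::new();
    for k in keys {
        if seen.insert(*k) {
            out.push((*k).to_string());
        }
    }
    out
}

fn main() {
    let cols = dedup_preserving_order(&["id", "name", "id", "email", "name"]);
    assert_eq!(cols, vec!["id", "name", "email"]);
}
```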
…tsv_page

Address Gemini code review feedback: the previous loop used Vec::contains for deduplication, which is O(N) per key, leading to O(N²) complexity when collecting columns from large datasets. Replace with a HashSet<&str> to track seen keys in O(1) average time while still preserving insertion order in the Vec.
/gemini review |
Code Review
This pull request introduces a new tsv output format, which is a valuable addition for command-line usage. The implementation is well-tested and mirrors the existing csv format's features. However, the new code introduces significant duplication from the CSV formatting logic. Additionally, I've identified a critical bug where column headers are not escaped, which could lead to corrupted output if a column name contains a tab character. My review includes a detailed suggestion to refactor the duplicated code into a single, generic function. This refactoring not only improves maintainability but also resolves the header-escaping bug for both TSV and CSV formats.
fn format_tsv(value: &Value) -> String {
    format_tsv_page(value, true)
}

/// Format as TSV, optionally omitting the header row.
///
/// Pass `emit_header = false` for all pages after the first when using
/// `--page-all`, so the combined output has a single header line.
fn format_tsv_page(value: &Value, emit_header: bool) -> String {
    let items = extract_items(value);

    let arr = if let Some((_key, arr)) = items {
        arr.as_slice()
    } else if let Value::Array(arr) = value {
        arr.as_slice()
    } else {
        return tsv_escape(&value_to_cell(value));
    };

    if arr.is_empty() {
        return String::new();
    }

    // Array of non-objects
    if !arr.iter().any(|v| v.is_object()) {
        let mut output = String::new();
        for item in arr {
            if let Value::Array(inner) = item {
                let cells: Vec<String> = inner
                    .iter()
                    .map(|v| tsv_escape(&value_to_cell(v)))
                    .collect();
                let _ = writeln!(output, "{}", cells.join("\t"));
            } else {
                let _ = writeln!(output, "{}", tsv_escape(&value_to_cell(item)));
            }
        }
        return output;
    }

    // Collect columns, preserving insertion order while deduplicating in O(1).
    let mut columns: Vec<String> = Vec::new();
    let mut seen_keys = std::collections::HashSet::new();
    for item in arr {
        if let Value::Object(obj) = item {
            for key in obj.keys() {
                if seen_keys.insert(key.as_str()) {
                    columns.push(key.clone());
                }
            }
        }
    }

    let mut output = String::new();

    if emit_header {
        let _ = writeln!(output, "{}", columns.join("\t"));
    }

    for item in arr {
        let cells: Vec<String> = columns
            .iter()
            .map(|col| {
                if let Value::Object(obj) = item {
                    tsv_escape(&value_to_cell(obj.get(col).unwrap_or(&Value::Null)))
                } else {
                    String::new()
                }
            })
            .collect();
        let _ = writeln!(output, "{}", cells.join("\t"));
    }

    output
}
The new format_tsv_page function is almost identical to the existing format_csv_page function. This significant code duplication makes future maintenance more difficult and error-prone. For example, the efficient column collection logic using a HashSet should be shared, and both implementations have a bug where column headers are not escaped, potentially corrupting the output.
To improve maintainability and fix this bug, I recommend refactoring the common logic into a single generic format_delimited_page function. This function would accept the delimiter and an escape function as arguments.
Here's a suggested implementation that replaces the new TSV formatting functions. You could then update format_csv_page to call format_delimited_page(value, emit_header, ",", &csv_escape, false) to complete the refactoring. The escape_single_value: false parameter is to maintain the current behavior of format_csv_page, which doesn't escape single scalar values.
fn format_delimited_page(
value: &Value,
emit_header: bool,
separator: &str,
escape_fn: &dyn Fn(&str) -> String,
escape_single_value: bool,
) -> String {
let items = extract_items(value);
let arr = if let Some((_key, arr)) = items {
arr.as_slice()
} else if let Value::Array(arr) = value {
arr.as_slice()
} else {
let cell = value_to_cell(value);
return if escape_single_value {
escape_fn(&cell)
} else {
cell
};
};
if arr.is_empty() {
return String::new();
}
// Array of non-objects
if !arr.iter().any(|v| v.is_object()) {
let mut output = String::new();
for item in arr {
if let Value::Array(inner) = item {
let cells: Vec<String> = inner
.iter()
.map(|v| escape_fn(&value_to_cell(v)))
.collect();
let _ = writeln!(output, "{}", cells.join(separator));
} else {
let _ = writeln!(output, "{}", escape_fn(&value_to_cell(item)));
}
}
return output;
}
// Collect columns, preserving insertion order while deduplicating in O(1).
let mut columns: Vec<String> = Vec::new();
let mut seen_keys = std::collections::HashSet::new();
for item in arr {
if let Value::Object(obj) = item {
for key in obj.keys() {
if seen_keys.insert(key.as_str()) {
columns.push(key.clone());
}
}
}
}
let mut output = String::new();
if emit_header {
let headers = columns
.iter()
.map(|c| escape_fn(c))
.collect::<Vec<_>>()
.join(separator);
let _ = writeln!(output, "{}", headers);
}
for item in arr {
let cells: Vec<String> = columns
.iter()
.map(|col| {
if let Value::Object(obj) = item {
escape_fn(&value_to_cell(obj.get(col).unwrap_or(&Value::Null)))
} else {
String::new()
}
})
.collect();
let _ = writeln!(output, "{}", cells.join(separator));
}
output
}
fn format_tsv(value: &Value) -> String {
format_tsv_page(value, true)
}
/// Format as TSV, optionally omitting the header row.
///
/// Pass `emit_header = false` for all pages after the first when using
/// `--page-all`, so the combined output has a single header line.
fn format_tsv_page(value: &Value, emit_header: bool) -> String {
format_delimited_page(value, emit_header, "\t", &tsv_escape, true)
}

… TSV

Replace duplicate CSV/TSV page-formatting logic with a single format_delimited_page function. Both format_csv_page and format_tsv_page now delegate to this shared implementation, which uses a HashSet for O(1) column deduplication while preserving insertion order. Addresses code-review feedback requesting elimination of duplication.
/gemini review |
Code Review
This pull request introduces a tsv output format, which is a valuable addition for integrating with shell pipeline tools. The implementation is well-executed, refactoring the existing CSV formatting logic into a more generic function for delimiter-separated values. This refactoring also commendably improves performance in column detection and fixes a latent bug where delimiters in header names were not escaped. The new functionality is supported by a comprehensive test suite. I have identified one correctness issue in the TSV escaping logic regarding inconsistent handling of newline characters, for which I've provided a fix.
fn tsv_escape(s: &str) -> String {
    s.replace(['\t', '\n'], " ").replace('\r', "")
}
The current implementation of tsv_escape handles carriage returns (\r) inconsistently compared to newlines (\n). It removes \r but replaces \n with a space. This means a value like "hello\rworld" becomes "helloworld", while "hello\nworld" becomes "hello world". To ensure consistent behavior across different line ending conventions (Unix, Windows, classic Mac), all whitespace characters that could break the TSV structure (\t, \n, \r) should be replaced with a space.
Suggested change — before:

fn tsv_escape(s: &str) -> String {
    s.replace(['\t', '\n'], " ").replace('\r', "")
}

After:

fn tsv_escape(s: &str) -> String {
    s.replace(['\t', '\n', '\r'], " ")
}
#[test]
fn test_format_tsv_escape() {
    assert_eq!(tsv_escape("simple"), "simple");
    assert_eq!(tsv_escape("has\ttab"), "has tab");
    assert_eq!(tsv_escape("has\nnewline"), "has newline");
    assert_eq!(tsv_escape("has\r\nwindows"), "has windows");
}
Following the suggested change to tsv_escape to handle \r consistently, this test should be updated. The assertion for Windows-style newlines (\r\n) will now expect two spaces, as both \r and \n are replaced by a space. Additionally, a new test case for a standalone carriage return (\r) should be added to verify the corrected behavior.
Suggested change — before:

#[test]
fn test_format_tsv_escape() {
    assert_eq!(tsv_escape("simple"), "simple");
    assert_eq!(tsv_escape("has\ttab"), "has tab");
    assert_eq!(tsv_escape("has\nnewline"), "has newline");
    assert_eq!(tsv_escape("has\r\nwindows"), "has windows");
}

After (note the two spaces in the `\r\n` case, since both characters are now replaced):

#[test]
fn test_format_tsv_escape() {
    assert_eq!(tsv_escape("simple"), "simple");
    assert_eq!(tsv_escape("has\ttab"), "has tab");
    assert_eq!(tsv_escape("has\nnewline"), "has newline");
    assert_eq!(tsv_escape("has\rreturn"), "has return");
    assert_eq!(tsv_escape("has\r\nwindows"), "has  windows");
}
…haviour

Previously \r was silently deleted while \n and \t were replaced with a space. This causes "hello\rworld" → "helloworld" but "hello\nworld" → "hello world", which is inconsistent. Now all three ASCII control characters that break TSV structure (\t, \n, \r) are uniformly replaced with a single space, matching the behaviour of PostgreSQL COPY and Google Sheets TSV export. Update the test to cover standalone \r and the \r\n (Windows) case. Addresses code-review feedback.
/gemini review |
Code Review
This pull request introduces a tsv output format, which is a great addition for shell pipeline integration. The implementation is well-structured, notably refactoring the CSV formatting logic into a more generic format_delimited_page function to support both CSV and TSV. This improves code reuse and maintainability. The new functionality is also thoroughly covered by tests. I have one suggestion to further improve the performance of the new generic function by using static dispatch, which is more idiomatic in Rust for this kind of scenario.
fn format_delimited_page(
    value: &Value,
    emit_header: bool,
    separator: &str,
    escape_fn: &dyn Fn(&str) -> String,
    escape_single_value: bool,
) -> String {
For better performance and to follow idiomatic Rust, consider using static dispatch for the escape_fn by making format_delimited_page a generic function. Dynamic dispatch (&dyn Fn) introduces a small runtime overhead for each function call, which can become significant when processing a large number of cells. Using a generic type parameter with a Fn trait bound will allow the compiler to perform monomorphization and potentially inline the escape function, eliminating the overhead.
You will need to update the call sites in format_csv_page and format_tsv_page to pass the function pointers directly (e.g., csv_escape instead of &csv_escape).
fn format_delimited_page<F>(
value: &Value,
emit_header: bool,
separator: &str,
escape_fn: F,
escape_single_value: bool,
) -> String
where
F: Fn(&str) -> String,
{
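For illustration, here is a minimal self-contained sketch of the static-dispatch pattern the review suggests. The `format_row` helper and its inputs are invented for the demo; only the generic `escape_fn: F` parameter with a `Fn` bound mirrors the suggested signature:

```rust
/// Join cells with a separator, escaping each one. Because `F` is a
/// generic parameter, each call site is monomorphized and `escape_fn`
/// can be inlined — no vtable indirection as with `&dyn Fn`.
fn format_row<F>(cells: &[&str], separator: &str, escape_fn: F) -> String
where
    F: Fn(&str) -> String,
{
    cells
        .iter()
        .copied()
        .map(|c| escape_fn(c))
        .collect::<Vec<_>>()
        .join(separator)
}

fn tsv_escape(s: &str) -> String {
    s.replace(['\t', '\n', '\r'], " ")
}

fn main() {
    // Function items coerce to the Fn bound directly: pass `tsv_escape`,
    // not `&tsv_escape`. The embedded tab in "a\tb" becomes a space.
    let row = format_row(&["a\tb", "c"], "\t", tsv_escape);
    assert_eq!(row, "a b\tc");
}
```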
Codecov Report

❌ Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##             main     #426      +/-   ##
==========================================
+ Coverage   64.40%   64.58%   +0.18%
==========================================
  Files          38       38
  Lines       15584    15672      +88
==========================================
+ Hits        10037    10122      +85
- Misses       5547     5550       +3

☔ View full report in Codecov by Sentry.
Summary
- Add `tsv` as a new `OutputFormat` variant alongside the existing `json`, `table`, `yaml`, and `csv` formats
- TSV is the standard format for shell pipeline tools (`cut -f2`, `awk -F'\t'`), filling a natural gap next to `--format csv`
- Behaviour mirrors `--format csv` exactly: array-of-objects, array-of-arrays (e.g. Sheets values API), flat scalars, and `--page-all` pagination with header suppression on continuation pages
- Tab/newline characters inside values are replaced with spaces, matching Google Sheets TSV export and PostgreSQL `COPY` conventions

Usage
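The usage example did not survive extraction, so here is a hedged sketch of the kind of pipeline this format enables. The CLI invocation itself is omitted because the tool's name is outside this excerpt; `printf` simulates the formatter's header-plus-rows TSV output:

```shell
# Sum the second column of simulated `--format tsv` output, skipping
# the header row (NR > 1). awk -F'\t' splits fields on tabs.
printf 'id\tcount\n1\t10\n2\t32\n' \
  | awk -F'\t' 'NR > 1 { s += $2 } END { print s }'
# prints: 42
```

The header-on-first-page-only behaviour under `--page-all` matters here: with a single header line, the `NR > 1` guard works across the whole combined output.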
Test plan
- `cargo test` — 558 tests pass, 0 failures
- `cargo clippy -- -D warnings` — clean

🤖 Generated with Claude Code