Merge Template #49
Conversation
just test for launching each page and ensuring there is no exception. Currently having trouble with quickstart. Also add documentation-specific tests
fix tests for view_ms_raw_data and file_upload
Add Docker Compose as Deployment Option
Development wix
Note: spectrum visualization is currently not tested because we could not find a way to select a spectrum to be displayed.
Fix bugs in the testing framework that caused tests to fail
Fix Dockerfile Simple
Bumps [captcha](https://github.com/lepture/captcha) from 0.5.0 to 0.7.1. - [Release notes](https://github.com/lepture/captcha/releases) - [Changelog](https://github.com/lepture/captcha/blob/main/docs/changelog.rst) - [Commits](lepture/captcha@v0.5.0...v0.7.1) --- updated-dependencies: - dependency-name: captcha dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [pyopenms](https://github.com/OpenMS/pyopenms-docs) from 3.2.0 to 3.3.0. - [Release notes](https://github.com/OpenMS/pyopenms-docs/releases) - [Commits](OpenMS/pyopenms-docs@release/3.2.0...release/3.3.0) --- updated-dependencies: - dependency-name: pyopenms dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* Add comprehensive tests for TOPP workflow parameter page * Add comprehensive tests for TOPP workflow parameter page * Add comprehensive tests for TOPP workflow parameter page * Remove unused import and enhance parameter tests with complex data types
* update licence * fix duplicate licence, refer to git tracking
* fix failing tests * hold back cython 3.1 as it breaks autowrap * add quotes
Until we release 3.5.0 autowrap needs to stay at the old version.
update README (several ways to run the app)
* update to OpenMS 3.4 * qt6 * update thirdparty folder structure * add libgl * pin autowrap
* digestor
* add calculator
* fix emoji and add src file
* min max size
* renamed mass calculator
* replace with spinbox
* fix minor bugs
* fix page name
* rename of isotope generator
* add formula and peptide pattern generator
* add oligos
* just RNA
* fragmentation added
* better emojis
* more doc
* docs
* proper support for encodings
* fix
* slimmer
* complete m/z calculator
* fix sequence format
* max charge
* add length column
* add start and end position columns to peptide digest output
* add coloring
* fix
* fix lint
* add first prefix ions
---------
Co-authored-by: Tom David Mueller <tom.mueller@beachouse.de>
* update streamlit
* fix md5 error
* fix sidebar updates
* remove workspace selector if offline
* update workspace via query param
* fix duplicate component bug
* fix log refresh
* stream output in workflows
* fix workflow display
Walkthrough
Removes Dependabot config; updates .gitignore, LICENSE, and docker-compose to drop the version key; adjusts requirements.txt to a pinned pip-compile output; updates the submodule pointer; minor newline in app.py; refactors Streamlit page management and workspace handling; replaces command execution with live stdout/stderr streaming; adds real-time log viewing in the UI.
Changes
Sequence Diagram(s)
sequenceDiagram
autonumber
actor User
participant UI as StreamlitUI
participant Exec as CommandExecutor
participant OS as Subprocess/OS
participant FS as Filesystem (Logs)
User->>UI: Click "Start Workflow"
UI->>Exec: run_command(cmd, log_path, pid_file)
Exec->>OS: Popen(cmd, stdout=PIPE, stderr=PIPE, text=True)
Exec->>Exec: spawn threads to read stdout/stderr
par Stream STDOUT
Exec-->>UI: log line (stdout)
Exec->>FS: append to log
and Stream STDERR
Exec-->>UI: log line (stderr prefixed)
Exec->>FS: append to log
end
OS-->>Exec: process exit code
Exec->>FS: remove pid file
Exec-->>UI: completion status
loop Every 1s (while running)
UI->>FS: read last N lines of log
UI-->>User: render tail (N lines)
end
alt On Stop
User->>UI: Click "Stop Workflow"
UI->>OS: terminate process by PID
end
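The streaming loop in the diagram above can be sketched in plain Python. This is a hedged illustration, not the repository's actual CommandExecutor: the helper name run_streamed and the log layout are invented for the example.

```python
import subprocess
import sys
import threading
from pathlib import Path
from tempfile import TemporaryDirectory

def run_streamed(command, log_path: Path) -> int:
    """Run `command`, appending stdout/stderr lines to `log_path` as they arrive."""
    proc = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
        encoding="utf-8",
        errors="replace",
    )

    def pump(stream, prefix: str = "") -> None:
        # Read until EOF; the pipe closes when the child exits.
        with open(log_path, "a", encoding="utf-8") as log:
            for line in iter(stream.readline, ""):
                log.write(prefix + line)

    threads = [
        threading.Thread(target=pump, args=(proc.stdout,)),
        threading.Thread(target=pump, args=(proc.stderr, "STDERR: ")),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return proc.wait()

with TemporaryDirectory() as tmp:
    log = Path(tmp) / "run.log"
    exit_code = run_streamed([sys.executable, "-c", "print('hello')"], log)
    log_text = log.read_text(encoding="utf-8")
print(exit_code, log_text.strip())
```

One reader thread per pipe avoids the classic deadlock where a child blocks on a full stderr pipe while the parent reads only stdout.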
sequenceDiagram
autonumber
participant UI as StreamlitUI
participant RT as Streamlit Runtime
participant FS as Filesystem
participant PM as Page Manager (captcha_.py)
UI->>PM: restore_all_pages(main_script)
PM->>FS: list content/*.py
PM->>PM: calc_md5(script paths)
PM->>RT: set_pages(pages dict)
UI-->>UI: rerun to reflect pages
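The calc_md5 step in this diagram is just a stable, non-cryptographic key per script path; a minimal sketch (the content/ paths are made up for the example):

```python
import hashlib
from pathlib import Path

def calc_md5(s: str) -> str:
    # Non-cryptographic use: a stable dict key per page script.
    return hashlib.md5(s.encode("utf-8")).hexdigest()

scripts = [Path("content/0_home.py"), Path("content/1_upload.py")]
pages = {calc_md5(str(p)): {"page_name": p.stem} for p in scripts}
print(sorted(v["page_name"] for v in pages.values()))
```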
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
requirements.txt (1)
139-141: Pin SciPy and address xlsxwriter in requirements.txt
- SciPy is imported in src/render/compression.py, src/parse/deconv.py and src/parse/tnt.py; replace the unpinned scipy>=1.15 with an exact version (e.g., scipy==1.15.3) to enforce reproducibility.
- No references to xlsxwriter were found in Python code—either remove xlsxwriter from requirements.txt if unused, or declare & pin it in pyproject.toml before re-running pip-compile.
Option A (preferred): add both to pyproject.toml and re-run
# pip-compile --output-file=requirements.txt pyproject.toml
Option B (quick fix): manual pins/removal in requirements.txt
-xlsxwriter
-scipy>=1.15
+# if needed, pin xlsxwriter; otherwise remove
+xlsxwriter==3.2.5
+scipy==1.15.3
src/common/captcha_.py (4)
109-112: Index parsing is brittle; guard non-numeric filenames.
A non-numeric prefix raises ValueError. Provide a safe fallback.
- index = int(os.path.basename(script_path.stem).split("_")[0])
+ name_head = script_path.stem.split("_", 1)[0]
+ try:
+     index = int(name_head)
+ except ValueError:
+     # put unknown-index pages at the end
+     index = 10_000_000
162-167: Robustness: handle missing/ambiguous matches when adding a page.
The list index [0] will raise if no match; substring matching can be ambiguous.
- script_path = [f for f in pages_dir.glob("*.py") if f.name.find(page_name) != -1][0]
+ matches = [f for f in pages_dir.glob("*.py") if page_name in f.stem]
+ if not matches:
+     raise FileNotFoundError(f"No page script found matching '{page_name}' in {pages_dir}")
+ # TODO: if multiple matches exist, disambiguate by exact stem or prompt caller.
+ script_path = matches[0]
209-217: Avoid KeyError/AttributeError on nested session_state access.
Use .get with defaults and attribute-free access.
- ga = st.session_state.settings['analytics']['google-analytics']['enabled']
- pp = st.session_state.settings['analytics']['piwik-pro']['enabled']
- if (ga or pp) and (st.session_state.tracking_consent is None):
+ settings = st.session_state.get("settings", {})
+ analytics = settings.get("analytics", {})
+ ga = analytics.get("google-analytics", {}).get("enabled", False)
+ pp = analytics.get("piwik-pro", {}).get("enabled", False)
+ if (ga or pp) and (st.session_state.get("tracking_consent") is None):
259-261: Fix user-facing typo.
“Captch is wrong” → “Captcha is wrong”.
- st.error("🚨 Captch is wrong")
+ st.error("🚨 Captcha is wrong")
🧹 Nitpick comments (12)
.gitignore (2)
27-30: "python*" is overly broad; tighten to avoid hiding legitimate files.
At repo root, python* could ignore unintended files/dirs (e.g., a python_utils.py or folder). Suggest scoping to runtime artifacts explicitly.
- python*
+ # Ignore bundled Python runtimes and binaries, not arbitrary files starting with "python"
+ /python*/
+ python*.exe
+ python*.dll
+ python*.pdb
30-30: Duplicate ignore pattern; keep a single entry.
gdpr_consent/node_modules/ appears at Line 8 and Line 30. One is enough.
- gdpr_consent/node_modules/
+ # (duplicate removed) gdpr_consent/node_modules/
openms-streamlit-vue-component (1)
1-1: Option: pin to a tagged release for discoverability.
If upstream has annotated tags, consider referencing a release tag (and noting it in the PR) to simplify future upgrades. Otherwise, pinning to a SHA is fine for reproducibility.
src/workflow/CommandExecutor.py (2)
109-109: Nit: remove extraneous f-string prefix.
Minor linter issue (F541).
Apply this diff:
- self.logger.log(f"Process finished:\n"+' '.join(command)+f"\nTotal time to run command: {execution_time:.2f} seconds", 1)
+ self.logger.log("Process finished:\n"+' '.join(command)+f"\nTotal time to run command: {execution_time:.2f} seconds", 1)
131-146: Narrow exceptions when reading pipes.
Catching broad Exception hides real issues and trips BLE001.
Apply this diff:
- except Exception as e:
+ except (OSError, UnicodeDecodeError) as e:
      self.logger.log(f"Error reading stdout: {e}", 2)
@@
- except Exception as e:
+ except (OSError, UnicodeDecodeError) as e:
      self.logger.log(f"Error reading stderr: {e}", 2)
src/common/common.py (1)
356-377: UX: handle empty workspace list and missing current workspace gracefully.
Avoid ValueError when options is empty or the current workspace isn’t present.
Apply this diff:
- st.selectbox(
+ idx = options.index(st.session_state.workspace.stem) if st.session_state.workspace.stem in options and options else 0
+ st.selectbox(
      "choose existing workspace",
      options,
-     index=options.index(str(st.session_state.workspace.stem)),
+     index=idx if options else 0,
      on_change=change_workspace,
      key="chosen-workspace",
  )
src/workflow/StreamlitUI.py (3)
1067-1083: Avoid rereading entire log file each tick; tail efficiently.
For large logs, readlines() on every poll is costly. Use deque to keep last N lines.
Apply this diff:
- with open(log_path, "r", encoding="utf-8") as f:
-     lines = f.readlines()
-     if log_lines_count == "all":
-         display_lines = lines
-     else:
-         display_lines = lines[-st.session_state.log_lines_count:]
-     st.code(
-         "".join(display_lines),
+ from collections import deque
+ with open(log_path, "r", encoding="utf-8") as f:
+     if log_lines_count == "all":
+         content = f.read()
+     else:
+         tail = deque(f, maxlen=st.session_state.log_lines_count)
+         content = "".join(tail)
+ st.code(
+     content,
      language="neon",
      line_numbers=False,
  )
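The deque-based tail is easy to check outside Streamlit; a self-contained sketch of the same idea:

```python
from collections import deque
from pathlib import Path
from tempfile import TemporaryDirectory

def tail(path: Path, n: int) -> str:
    # Stream the file once; deque keeps only the last n lines in memory.
    with open(path, "r", encoding="utf-8") as f:
        return "".join(deque(f, maxlen=n))

with TemporaryDirectory() as tmp:
    p = Path(tmp) / "run.log"
    p.write_text("".join(f"line {i}\n" for i in range(1000)), encoding="utf-8")
    last3 = tail(p, 3)
print(last3)
```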
1093-1094: Fix membership test style (Ruff E713).
Use “not in”.
Apply this diff:
- if not "WORKFLOW FINISHED" in content:
+ if "WORKFLOW FINISHED" not in content:
      st.error("**Errors occurred, check log file.**")
1043-1056: Avoid state/key collisions; consider unifying show_log and execution_section.
Both define/select “log_level” and handle log rendering. Consolidate to one path to simplify behavior.
I can draft a single log viewer fragment with runtime/complete modes if helpful.
src/common/captcha_.py (3)
13-15: Non-crypto MD5: annotate and silence the linter (or switch).
This hash is used as a stable page key, not for security. Keep MD5 to match Streamlit internals but document and ignore S324; or switch to BLAKE2 if compatibility is assured.
-def calc_md5(string : str):
-    return hashlib.md5(string.encode()).hexdigest()
+def calc_md5(string: str) -> str:
+    # Non-cryptographic: matches Streamlit's page key behavior. noqa: S324
+    return hashlib.md5(string.encode("utf-8")).hexdigest()
128-136: Optional: stale pages not removed.
restore_all_pages() only adds; it doesn’t remove pages no longer present under content/. Consider starting from a copy that only retains the main page, or calling delete_all_pages() first.
Would you like a patch that preserves the main page and then repopulates from content/?
22-31: Docstring/param naming mismatch.
Parameter is used as a filesystem path (Path(main_script_path_str)), but docstring says it’s the “name of the main page”. Align the docstring or rename the param for clarity.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (11)
- .github/dependabot.yml (0 hunks)
- .gitignore (1 hunks)
- LICENSE (1 hunks)
- app.py (1 hunks)
- docker-compose.yml (0 hunks)
- openms-streamlit-vue-component (1 hunks)
- requirements.txt (1 hunks)
- src/common/captcha_.py (8 hunks)
- src/common/common.py (2 hunks)
- src/workflow/CommandExecutor.py (1 hunks)
- src/workflow/StreamlitUI.py (1 hunks)
💤 Files with no reviewable changes (2)
- .github/dependabot.yml
- docker-compose.yml
🧰 Additional context used
🧬 Code graph analysis (2)
src/workflow/CommandExecutor.py (1)
src/workflow/Logger.py (1)
log(16-42)
src/workflow/StreamlitUI.py (1)
src/workflow/CommandExecutor.py (1)
stop(254-268)
🪛 OSV Scanner (2.1.0)
requirements.txt
[HIGH] 80-80: protobuf 5.29.4: protobuf-python has a potential Denial of Service issue
[HIGH] 126-126: tornado 6.4.2: Tornado vulnerable to excessive logging caused by malformed multipart form data
🪛 Ruff (0.12.2)
src/workflow/CommandExecutor.py
81-81: subprocess call: check for execution of untrusted input
(S603)
109-109: f-string without any placeholders
Remove extraneous f prefix
(F541)
131-131: Do not catch blind exception: Exception
(BLE001)
144-144: Do not catch blind exception: Exception
(BLE001)
src/workflow/StreamlitUI.py
1093-1093: Test for membership should be not in
Convert to not in
(E713)
src/common/captcha_.py
14-14: Probable use of insecure hash functions in hashlib: md5
(S324)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-openms
- GitHub Check: build-full-app
🔇 Additional comments (5)
LICENSE (1)
16-16: LGTM: author reference updated to git contributions.
No license terms changed; only attribution phrasing. Safe to merge.
app.py (1)
39-39: No changes needed: Streamlit==1.49.0 supports st.navigation and st.Page (introduced in 1.36.0).
src/common/common.py (1)
29-41: LGTM: lightweight hardware monitor fragment.Reads are efficient and UI-friendly.
requirements.txt (1)
1-6: Good move: switch to pip-compile, Python 3.12 lockfile.
Autogenerated, fully pinned graph improves reproducibility and supply-chain hygiene.src/common/captcha_.py (1)
16-21: Internal API usage: st.runtime.get_pages/set_pages may change.
These are private APIs and can break across Streamlit releases. Guard with a clear fallback or version check.
Would you like a small helper that probes availability and raises a friendly error when missing?
@@ -1 +1 @@
-Subproject commit 0a45f30e2223cf2f24392149a4e190ac88e31b27
+Subproject commit fad86f46c3aa788387ec5201fc5cceda5db30fd9
💡 Verification agent
🧩 Analysis chain
Submodule bump to fad86f46…: verify CI submodule init and commit match.
Ensure CI clones submodules and that the checked-out commit equals the expected SHA.
Run:
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Before init:"
git submodule status || true
# Initialize and fetch the submodule
git submodule update --init --recursive --depth 1
echo "After init:"
git submodule status
expected="fad86f46c3aa788387ec5201fc5cceda5db30fd9"
path="openms-streamlit-vue-component"
if [ -d "$path/.git" ]; then
pushd "$path" >/dev/null
actual="$(git rev-parse HEAD)"
echo "Expected: $expected"
echo "Actual: $actual"
test "$actual" = "$expected" || { echo "ERROR: Submodule commit mismatch."; exit 1; }
popd >/dev/null
else
echo "ERROR: Submodule directory not initialized at $path"
exit 1
fi
Length of output: 1218
Ensure CI can clone the submodule: The submodule is configured with an SSH URL (git@…), causing git submodule update --init to fail in environments without SSH support. Update the URL in .gitmodules to HTTPS or configure SSH credentials in CI so that the expected commit (fad86f46c3aa788387ec5201fc5cceda5db30fd9) can be fetched.
🤖 Prompt for AI Agents
In openms-streamlit-vue-component around lines 1 to 1, the submodule in
.gitmodules uses an SSH URL which breaks CI environments without SSH; replace
the git@... SSH URL with the corresponding HTTPS URL for that submodule
(ensuring it still references the same repository and commit
fad86f46c3aa788387ec5201fc5cceda5db30fd9), run git submodule sync && git
submodule update --init --recursive to verify the commit can be fetched, and
commit the updated .gitmodules (or alternatively configure CI to provide SSH
credentials/agent so the existing SSH URL can be accessed).
protobuf==5.29.4
    # via streamlit
Security: bump protobuf to a patched version.
protobuf==5.29.4 is vulnerable to DoS; patched versions are 4.25.8, 5.29.5, or 6.31.1. Recommend 5.29.5 to stay on the same major/minor.
Apply:
-protobuf==5.29.4
+protobuf==5.29.5
Refs: GitHub Advisory GHSA-8qvm-5x2c-j2w7 (fixed in 5.29.5). (github.com, advisories.gitlab.com)
🧰 Tools
🪛 OSV Scanner (2.1.0)
[HIGH] 80-80: protobuf 5.29.4: protobuf-python has a potential Denial of Service issue
🤖 Prompt for AI Agents
In requirements.txt around lines 80-81, protobuf==5.29.4 is vulnerable and
should be bumped to the patched release; update the pin to protobuf==5.29.5,
then regenerate any dependency lockfiles or virtualenvs (e.g., pip-compile or
poetry lock) and run the test suite / CI to ensure nothing breaks from the
patch.
tornado==6.4.2
    # via streamlit
Security: upgrade Tornado to 6.5.0+.
tornado==6.4.2 is affected by excessive-logging DoS; patched in 6.5.0.
Apply:
-tornado==6.4.2
+tornado==6.5.0
Refs: GHSA-7cx3-6m66-7c5m (patched in 6.5.0). (github.com)
🧰 Tools
🪛 OSV Scanner (2.1.0)
[HIGH] 126-126: tornado 6.4.2: Tornado vulnerable to excessive logging caused by malformed multipart form data
🤖 Prompt for AI Agents
In requirements.txt around lines 126-127, tornado is pinned to 6.4.2 which is
vulnerable; update the tornado requirement to at least 6.5.0 (e.g., change the
pin to tornado==6.5.0 or use a safe range like tornado>=6.5.0,<7.0) to include
the patch for GHSA-7cx3-6m66-7c5m, then regenerate any lock/compiled
requirements (pip-tools/poetry lock) and run the test suite and CI to ensure
Streamlit and other deps remain compatible; if any conflicts arise, adjust the
constraint to the minimal compatible upper bound and document the change in the
dependency/changelog.
for key, value in current_pages.items():
    if value["page_name"] == page_name:
        del current_pages[key]

# Refresh the pages configuration
Bug: deleting from dict while iterating raises RuntimeError.
del current_pages[key] during iteration can fail. Collect keys first, then delete.
- for key, value in current_pages.items():
- if value["page_name"] == page_name:
- del current_pages[key]
+ keys_to_delete = [k for k, v in list(current_pages.items()) if v.get("page_name") == page_name]
+ for k in keys_to_delete:
+ current_pages.pop(k, None)
🤖 Prompt for AI Agents
In src/common/captcha_.py around lines 67 to 71, the code deletes entries from
current_pages while iterating over it which can raise a RuntimeError; collect
the keys to remove first (e.g., build a list of keys where value["page_name"] ==
page_name) and then iterate over that list to delete them from current_pages, or
replace current_pages with a filtered dict comprehension that excludes matching
entries, ensuring you do not mutate the dict while iterating.
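The collect-then-delete pattern from the suggestion can be verified in isolation (the sample dict below is invented for illustration):

```python
# Hypothetical sample pages dict, shaped like the one under review.
current_pages = {
    "k1": {"page_name": "home"},
    "k2": {"page_name": "upload"},
    "k3": {"page_name": "home"},
}

# Collect matching keys first, then delete: mutating a dict while
# iterating over it raises RuntimeError in Python 3.
keys_to_delete = [k for k, v in current_pages.items() if v.get("page_name") == "home"]
for k in keys_to_delete:
    current_pages.pop(k, None)

print(current_pages)
```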
# Workspaces directory specified in the settings.json
if (
    st.session_state.settings["workspaces_dir"]
    and st.session_state.location == "local"
):
    workspaces_dir = Path(
        st.session_state.settings["workspaces_dir"],
        "workspaces-" + st.session_state.settings["repository-name"],
    )
else:
    workspaces_dir = ".."
# Online: show current workspace name in info text and option to change to other existing workspace
Bug: workspaces_dir may be a str, causing AttributeError on iterdir().
When settings["workspaces_dir"] is falsy, workspaces_dir is ".." (str). You then call workspaces_dir.iterdir(), which will crash.
Apply this diff:
- else:
- workspaces_dir = ".."
+ else:
+ workspaces_dir = Path("..")
+ # Normalize and ensure it exists (local-only UI)
+ workspaces_dir = workspaces_dir.resolve()
+ workspaces_dir.mkdir(parents=True, exist_ok=True)
Optional: add a guard before using options to handle an empty directory.
🤖 Prompt for AI Agents
In src/common/common.py around lines 344 to 355, workspaces_dir is sometimes set
to the string ".." which will raise AttributeError when you later call
workspaces_dir.iterdir(); change the fallback to wrap it in Path(...) so
workspaces_dir is always a pathlib.Path (e.g., Path("..")), and before iterating
or listing options check workspaces_dir.exists() and workspaces_dir.is_dir() and
handle the empty/non-existent directory case (skip iteration and/or provide an
empty options list) to avoid exceptions.
# Execute the command with real-time output capture
process = subprocess.Popen(
    command,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
    bufsize=1,  # Line buffered
    universal_newlines=True
)
🛠️ Refactor suggestion
Harden Popen text handling; remove redundant arg.
Specify encoding to avoid decode errors and drop duplicate universal_newlines.
Apply this diff:
- process = subprocess.Popen(
+ process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
- text=True,
- bufsize=1, # Line buffered
- universal_newlines=True
+ text=True,
+ encoding="utf-8",
+ errors="replace",
+ bufsize=1 # Line buffered
)
🧰 Tools
🪛 Ruff (0.12.2)
81-81: subprocess call: check for execution of untrusted input
(S603)
🤖 Prompt for AI Agents
In src/workflow/CommandExecutor.py around lines 80 to 88, the subprocess.Popen
call uses both text=True and universal_newlines=True and lacks an explicit
encoding; remove the redundant universal_newlines argument and add an explicit
encoding (e.g., encoding='utf-8') to Popen so stdout/stderr are decoded reliably
and avoid decode errors.
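The effect of an explicit encoding with errors="replace" shows up when a child process emits invalid UTF-8; the demo command below is contrived:

```python
import subprocess
import sys

# The child writes bytes that are not valid UTF-8; errors="replace" maps the
# bad byte to U+FFFD instead of raising UnicodeDecodeError in the reader.
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.buffer.write(b'ok \\xff\\n')"],
    stdout=subprocess.PIPE,
    text=True,
    encoding="utf-8",
    errors="replace",
)
out, _ = proc.communicate()
print(repr(out))
```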
# Record the PID to keep track of running processes associated with this workspace/workflow
# User can close the Streamlit app and return to a running workflow later
pid_file_path = self.pid_dir / str(child_pid)
pid_file_path.touch()
🛠️ Refactor suggestion
Create pid directory before touching the PID file.
touch() will fail if parent directory doesn’t exist.
Apply this diff:
pid_file_path = self.pid_dir / str(child_pid)
- pid_file_path.touch()
+ self.pid_dir.mkdir(parents=True, exist_ok=True)
+ pid_file_path.touch()
🤖 Prompt for AI Agents
In src/workflow/CommandExecutor.py around lines 91 to 95, the code touches a PID
file without ensuring its parent directory exists so touch() will raise if
pid_dir is missing; before calling pid_file_path.touch(), create the pid_dir
with pid_file_path.parent.mkdir(parents=True, exist_ok=True) (or
self.pid_dir.mkdir(parents=True, exist_ok=True)) to ensure the directory exists,
then touch the file; optionally wrap in a try/except to surface a clear error if
creation fails.
def _stream_output(self, process: subprocess.Popen) -> None:
    """
    Streams stdout and stderr from a running process in real-time to the logger.
    This method runs in the workflow process, not the GUI thread, so it's safe to block.

    Args:
        process: The subprocess.Popen object to stream from
    """
    def read_stdout():
        """Read stdout in real-time"""
        try:
            for line in iter(process.stdout.readline, ''):
                if line:
                    self.logger.log(line.rstrip(), 2)
                if process.poll() is not None:
                    break
        except Exception as e:
            self.logger.log(f"Error reading stdout: {e}", 2)
        finally:
            process.stdout.close()

    def read_stderr():
        """Read stderr in real-time"""
        try:
            for line in iter(process.stderr.readline, ''):
                if line:
                    self.logger.log(f"STDERR: {line.rstrip()}", 2)
                if process.poll() is not None:
                    break
        except Exception as e:
            self.logger.log(f"Error reading stderr: {e}", 2)
        finally:
            process.stderr.close()

    # Start threads to read stdout and stderr simultaneously
    stdout_thread = threading.Thread(target=read_stdout, daemon=True)
    stderr_thread = threading.Thread(target=read_stderr, daemon=True)

    stdout_thread.start()
    stderr_thread.start()

    # Log stderr and raise an exception if errors occurred
    if stderr or process.returncode != 0:
        error_message = stderr.decode().strip()
        self.logger.log(f"ERRORS OCCURRED:\n{error_message}", 2)
    # Wait for both threads to complete
    stdout_thread.join()
    stderr_thread.join()
Fix: stdout/stderr streaming can truncate logs (premature break).
The inner loops break on process.poll(), which can leave unread buffered data after the child exits. Read to EOF only.
Apply this diff:
 def read_stdout():
     """Read stdout in real-time"""
     try:
         for line in iter(process.stdout.readline, ''):
             if line:
                 self.logger.log(line.rstrip(), 2)
-            if process.poll() is not None:
-                break
     except Exception as e:
         self.logger.log(f"Error reading stdout: {e}", 2)
     finally:
         process.stdout.close()

 def read_stderr():
     """Read stderr in real-time"""
     try:
         for line in iter(process.stderr.readline, ''):
             if line:
                 self.logger.log(f"STDERR: {line.rstrip()}", 2)
-            if process.poll() is not None:
-                break
     except Exception as e:
         self.logger.log(f"Error reading stderr: {e}", 2)
     finally:
         process.stderr.close()

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

def _stream_output(self, process: subprocess.Popen) -> None:
    """
    Streams stdout and stderr from a running process in real-time to the logger.

    This method runs in the workflow process, not the GUI thread, so it's safe to block.

    Args:
        process: The subprocess.Popen object to stream from
    """
    def read_stdout():
        """Read stdout in real-time"""
        try:
            for line in iter(process.stdout.readline, ''):
                if line:
                    self.logger.log(line.rstrip(), 2)
                if process.poll() is not None:
                    break
        except Exception as e:
            self.logger.log(f"Error reading stdout: {e}", 2)
        finally:
            process.stdout.close()

    def read_stderr():
        """Read stderr in real-time"""
        try:
            for line in iter(process.stderr.readline, ''):
                if line:
                    self.logger.log(f"STDERR: {line.rstrip()}", 2)
                if process.poll() is not None:
                    break
        except Exception as e:
            self.logger.log(f"Error reading stderr: {e}", 2)
        finally:
            process.stderr.close()

    # Start threads to read stdout and stderr simultaneously
    stdout_thread = threading.Thread(target=read_stdout, daemon=True)
    stderr_thread = threading.Thread(target=read_stderr, daemon=True)

    stdout_thread.start()
    stderr_thread.start()

    # Log stderr and raise an exception if errors occurred
    if stderr or process.returncode != 0:
        error_message = stderr.decode().strip()
        self.logger.log(f"ERRORS OCCURRED:\n{error_message}", 2)

    # Wait for both threads to complete
    stdout_thread.join()
    stderr_thread.join()

Suggested change:

def _stream_output(self, process: subprocess.Popen) -> None:
    """
    Streams stdout and stderr from a running process in real-time to the logger.

    This method runs in the workflow process, not the GUI thread, so it's safe to block.

    Args:
        process: The subprocess.Popen object to stream from
    """
    def read_stdout():
        """Read stdout in real-time"""
        try:
            for line in iter(process.stdout.readline, ''):
                if line:
                    self.logger.log(line.rstrip(), 2)
        except Exception as e:
            self.logger.log(f"Error reading stdout: {e}", 2)
        finally:
            process.stdout.close()

    def read_stderr():
        """Read stderr in real-time"""
        try:
            for line in iter(process.stderr.readline, ''):
                if line:
                    self.logger.log(f"STDERR: {line.rstrip()}", 2)
        except Exception as e:
            self.logger.log(f"Error reading stderr: {e}", 2)
        finally:
            process.stderr.close()

    # Start threads to read stdout and stderr simultaneously
    stdout_thread = threading.Thread(target=read_stdout, daemon=True)
    stderr_thread = threading.Thread(target=read_stderr, daemon=True)

    stdout_thread.start()
    stderr_thread.start()

    # Wait for both threads to complete
    stdout_thread.join()
    stderr_thread.join()
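The EOF-driven reading pattern from the suggestion can be exercised in isolation. A minimal sketch, assuming a child process that writes two lines; the `stream_to_eof` helper is illustrative and not part of the PR:

```python
import subprocess
import sys
import threading

def stream_to_eof(cmd):
    """Collect a child's stdout by reading until EOF, as in the suggested fix."""
    lines = []
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
    )

    def read_stdout():
        # iter(readline, '') yields lines until readline returns '' at EOF,
        # so no poll()-based break is needed and no buffered output is lost.
        try:
            for line in iter(proc.stdout.readline, ''):
                if line:
                    lines.append(line.rstrip())
        finally:
            proc.stdout.close()

    t = threading.Thread(target=read_stdout, daemon=True)
    t.start()
    t.join()     # the reader exits only once the pipe reaches EOF
    proc.wait()  # then reap the child
    return lines

out = stream_to_eof([sys.executable, "-c", "print('a'); print('b')"])
```

Because the loop terminates on EOF rather than on `poll()`, every line the child wrote before exiting is captured, which is exactly the truncation scenario the review flags.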
🧰 Tools
🪛 Ruff (0.12.2)

131-131: Do not catch blind exception: Exception (BLE001)
144-144: Do not catch blind exception: Exception (BLE001)
🤖 Prompt for AI Agents
In src/workflow/CommandExecutor.py around lines 115 to 159, the stdout/stderr
reader loops break when process.poll() is set which can cause buffered output to
be left unread; remove the premature poll() checks so each reader reads until
EOF (i.e., let iter(process.stdout.readline, '') / iter(process.stderr.readline,
'') drive the loop), keep the try/except/finally and close the streams in
finally, and do not break early when the process has exited.
Updates to latest template (and thus streamlit) version