
perf: Enhance Gunicorn preload functionality for Langflow#12778

Merged
jordanrfrazier merged 27 commits into langflow-ai:release-1.10.0 from severfire:use_preload_advantages
May 5, 2026

Conversation

@severfire
Contributor

Related PRs:

cc @erichare

Summary

This PR introduces a dedicated preload module that maximizes the memory-saving benefits of Gunicorn's preload_app feature. While PRs #12364 and #12587 enabled preload and fixed fork-safety bugs, workers were still duplicating significant initialization work post-fork. This PR moves all fork-safe operations into the master process so workers inherit the result via Linux Copy-on-Write (CoW), dramatically reducing per-worker memory consumption.

What Changed

1. New preload.py Module

  • Introduced a fork-safe initialization function (preload_master()) that runs exclusively in the Gunicorn master process
  • Executes heavy one-time operations before workers are forked:
    • Database migrations and initial setup
    • Bundle loading and component imports
    • Component types cache building (lfx.interface.components.component_cache)
    • Starter projects creation
    • Agentic MCP server configuration
    • Flow directory loading
    • Profile pictures copying

2. Updated main.py Lifespan

  • Workers now detect if the master has preloaded resources via is_preloaded() check
  • When preloaded, workers skip redundant initialization and inherit shared state
  • Fork-unsafe resources (DB connection pools, Redis, telemetry threads, MCP asyncio tasks, queue service) remain per-worker and are set up post-fork as before
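A minimal sketch of the gating described above; `is_preloaded()` and the setup helpers here are stand-ins for the real lifespan code, not Langflow's API.

```python
import asyncio
from contextlib import asynccontextmanager

_MASTER_PRELOADED = True  # in reality, set by the master before forking

def is_preloaded() -> bool:
    return _MASTER_PRELOADED

initialized: list[str] = []

async def full_initialization() -> None:
    # Fallback path: bundles, component cache, starter projects, etc.
    initialized.append("full init")

async def setup_fork_unsafe_resources() -> None:
    # DB pools, Redis clients, telemetry threads: always rebuilt per worker.
    initialized.append("per-worker resources")

@asynccontextmanager
async def lifespan(app):
    if not is_preloaded():
        await full_initialization()      # no master preload happened
    await setup_fork_unsafe_resources()  # always runs post-fork
    yield

async def main() -> None:
    async with lifespan(None):
        pass

asyncio.run(main())
```

With the master preloaded, only the per-worker resources are set up; flipping `_MASTER_PRELOADED` to `False` would make the worker fall back to full initialization.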

3. Server Integration

  • Modified LangflowApplication.load() in server.py to call preload_master() when cfg.preload_app is enabled
  • Ensures the master properly disposes database connections before fork to prevent descriptor leaks
  • Added gc.freeze() call to prevent cyclic GC from unsharing CoW pages in workers
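The `load()` hook might look roughly like this, following Gunicorn's custom-application pattern. The helper functions are hypothetical stand-ins that record what happened, not the real implementations.

```python
import gc

events: list[str] = []

def run_preload_master() -> None:
    events.append("preload")      # heavy fork-safe init (stand-in)

def dispose_db_engine() -> None:
    events.append("dispose")      # no shared connection pool across fork

class LangflowApplication:
    def __init__(self, cfg, app_factory):
        self.cfg = cfg
        self.app_factory = app_factory

    def load(self):
        if getattr(self.cfg, "preload_app", False):
            run_preload_master()
            dispose_db_engine()   # must happen BEFORE workers are forked
            gc.collect()
            gc.freeze()           # protect CoW pages from the cyclic GC
        return self.app_factory()

class Cfg:
    preload_app = True

app = LangflowApplication(Cfg(), lambda: "asgi-app").load()
```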

Memory Usage Results

Tested with 30 workers on WSL:

| RAM usage after load | Option | Version | Workers | Notes |
|---|---|---|---|---|
| 20.55 GB | no preload | v1.8.3 | 30 | baseline |
| 3.38 GB | preload off | v1.9 | 30 | take 1 |
| 2.98 GB | preload off | v1.9 | 30 | take 2 |
| 3.00 GB | preload on | v1.9 | 30 | take 1 |
| 3.00 GB | preload on | v1.9 | 30 | take 2 |
| 3.88 GB | preload off | v1.10 + this PR | 30 | take 1 |
| 3.77 GB | preload off | v1.10 + this PR | 30 | take 2 |
| 4.08 GB | preload off | v1.10 + this PR (after rebase) | 30 | take 3 |
| 2.45 GB | preload on | v1.10 + this PR | 30 | take 1 |
| 2.31 GB | preload on | v1.10 + this PR | 30 | take 2 |
| 2.32 GB | preload on | v1.10 + this PR (after rebase) | 30 | take 3 |

Key Findings:

  • ~23% memory reduction with preload enabled in v1.10 compared to v1.9 with preload (3.00 GB → 2.32 GB), and ~43% compared to v1.10 with preload off (4.08 GB → 2.32 GB)
  • ~89% memory reduction compared to v1.8.3 baseline (20.55 GB → 2.32 GB)
  • The preload off numbers for v1.10 are slightly higher due to additional features, but preload on shows significant gains

Technical Details

Fork-Safety

  • The preload module carefully avoids creating any fork-unsafe resources (threads, sockets, file descriptors)
  • Database engine is explicitly disposed before fork: await get_db_service().engine.dispose()
  • Cache service teardown is attempted to close Redis connections if present
  • Workers reconstruct their own connection pools on first DB access post-fork

Copy-on-Write Optimization

  • gc.collect() + gc.freeze() moves preloaded objects into permanent generation
  • Prevents Python's cyclic GC from touching shared pages and triggering unnecessary copies
  • Bundle temp directories are owned by master; workers read via CoW but don't clean them up
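The `gc.freeze()` trick is observable directly in stock Python: after freezing, all currently tracked objects are moved into the permanent generation and the cyclic collector never touches them again (so it never rewrites their headers, which is what would copy a shared page).

```python
import gc

# State built during preload that workers would share via CoW.
preloaded_state = {"component_types": ["Agent", "Prompt", "Chain"]}

gc.collect()                    # drop garbage first so junk is not frozen
before = gc.get_freeze_count()  # typically 0 before any freeze
gc.freeze()                     # move all tracked objects into the
                                # permanent generation
after = gc.get_freeze_count()
```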

State Detection

  • is_preloaded() returns True in any process forked from a master that ran preload
  • is_master() identifies if current process is the original master (for cleanup)
  • get_preloaded_temp_dirs() returns bundle directories (master-owned)

Safety & Compatibility

  • Fully backward compatible: When LANGFLOW_GUNICORN_PRELOAD=false (default), behavior is unchanged
  • No behavior regression: If preload fails, workers fall back to full initialization
  • Per-worker independence maintained: Each worker still has its own event loop, DB pool, telemetry client, etc.
  • Extensively tested: Validated with 30-60 worker deployments on WSL

Ghost Safety Analysis

Changes appear to be ghost safe. Here's why:

✅ Fork-Safe Practices Implemented

  1. DB Connection Pool Disposal (preload.py:156-159):

    • The master explicitly disposes the DB engine before fork
    • This prevents workers from inheriting shared file descriptors/connections
    • Workers rebuild fresh connection pools on first use
  2. Cache Service Teardown (preload.py:162-175):

    • Attempts to close any cache sockets (e.g., Redis) before fork
    • Prevents shared socket file descriptors across processes
  3. Fork-Unsafe Resources Excluded:

    • Prometheus HTTP servers: Not started in preload ✓
    • Telemetry threads: Not started in preload ✓
    • MCP asyncio tasks: Not started in preload ✓
    • Queue service: Not started in preload ✓
    • All these are still initialized per-worker in the FastAPI lifespan
  4. Temp Directory Ownership (main.py:266-267):

    if running_in_master:
        temp_dirs = get_preloaded_temp_dirs()
    • Only the master process tracks temp_dirs for cleanup
    • Workers don't attempt to cleanup shared temp files, preventing race conditions
  5. COW Optimization (preload.py:201-204):

    • Uses gc.freeze() to move preloaded objects into permanent generation
    • Prevents cyclic GC from touching (and unsharing) shared memory pages
  6. Idempotent Service Initialization (main.py:200-202):

    • Workers still call initialize_services() but it's documented as idempotent
    • Factory registration and migrations are no-ops when already done

✅ Copy-On-Write (COW) Benefits

The commit correctly leverages COW for:

  • Python modules from custom component bundles
  • Component types dict (tens of MB)
  • Starter project graphs
  • Profile pictures

No Ghost State Detected

No dangling references, shared mutexes, or cross-process state that could cause:

  • Deadlocks
  • Corrupted data
  • File descriptor conflicts
  • Socket reuse issues

Usage

No configuration changes required. Simply set the existing environment variable:

export LANGFLOW_GUNICORN_PRELOAD=true
langflow run --workers 30

The enhanced preload will automatically take effect.


Note: This PR focuses on maximizing CoW memory sharing for Python modules and in-memory state. Future work could explore sharing dynamically-loaded component libraries.

@coderabbitai
Contributor

coderabbitai Bot commented Apr 20, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 3e5280d5-5bb2-41bd-8bec-70f2dcc8a7bb


@github-actions github-actions Bot added the performance Maintenance tasks and housekeeping label Apr 20, 2026
@severfire
Contributor Author

@jordanrfrazier Hey Jordan! Regarding your comment on PR #12587 about a potential blog post—I’ve just added the memory usage benchmarks to that PR.

I’ve identified a few more areas for memory optimization that I’d like to tackle first, and I think the post will be much more impactful if we bundle all of these improvements together—a comprehensive "Memory & Stability" package makes for a much stronger narrative. Specifically, I'd love your eyes on PR #12588 when you have a moment.

The Strategy

I'm thinking we align this content with the v1.10 release. We can frame it as a deep dive into Langflow’s production readiness, specifically highlighting:

  • Worker Lifecycle Rotation (feat: Enhance config loading by applying GUNICORN_CMD_ARGS before programmatic options #12313): A massive win for long-running instances, effectively neutralizing memory leaks from 3rd-party libs or custom components.
  • Memory Optimizations: Showcasing the benchmarks I just pulled and the further refinements I'm working on.
  • Multi-Agent Context: We can demonstrate how these improvements directly benefit heavy multi-agent environments where resource overhead usually compounds quickly.

What do you think about bundling this for the 1.10 launch news?

Later on I would also like to work a bit on ISO 27001. It also relates to #12615, which is, in my opinion, important :-)

Please let me know! Thanks!

@severfire
Contributor Author

@erichare I did some research regarding my memory tests and got some insights:

Memory Reduction Analysis: v1.8.3 → v1.9.0

Executive Summary

Test results show a dramatic 85% memory reduction from v1.8.3 (20.55 GB) to v1.9.0 (~3 GB) with 30 workers. Additionally, the preload on/off setting makes almost no difference in v1.9.0.

Test Results

| Memory | Preload | Version | Workers | Reduction |
|---|---|---|---|---|
| 20.55 GB | no preload | v1.8.3 | 30 | Baseline |
| 3.00 GB | preload on | v1.9.0 | 30 | -85.4% |
| 3.00 GB | preload on | v1.9.0 | 30 | -85.4% |
| 3.38 GB | preload off | v1.9.0 | 30 | -83.5% |
| 2.98 GB | preload off | v1.9.0 | 30 | -85.5% |

Key Observation: Preload on/off differs by only ~400MB (13% variation), not the multiple GB difference you'd expect from memory sharing.


Root Cause: LangChain 1.0 Upgrade

The Critical Commit

Commit: 7d4ffbcbf5 - "feat: add support for Langchain 1.0 (#11114)"
Date: March 19, 2026
Author: Gabriel Luiz Freitas Almeida

What Changed in LangChain 1.0

Dependencies before (v1.8.3):
-    "langchain~=0.3.27",
-    "langchain-community>=0.3.28,<1.0.0",
-    "langchain-core>=0.3.81,<1.0.0",

Dependencies after (v1.9.0):
+    "langchain~=1.2.0",
+    "langchain-community~=0.4.1",
+    "langchain-core>=1.2.28,<2.0.0",

Key Architectural Changes in LangChain 1.0

  1. Removed AgentExecutor and related classes → moved to langchain-classic

    • This splits legacy agent code into a separate package
    • Workers only load langchain-classic if needed
  2. Removed SQLAlchemy as transitive dependency

    • v0.3.x: Every worker loaded SQLAlchemy through langchain
    • v1.0+: SQLAlchemy only loaded when actually needed
    • Made sqlalchemy import lazy in session_scope
  3. Modular imports - Moved classes to more specific packages:

    • langchain.callbacks → langchain_core.callbacks
    • langchain.chains → langchain_classic.chains
    • langchain.memory → langchain_classic.memory
    • This enables lazy loading and smaller worker footprints
  4. Removed heavy transitive dependencies

    • OpenAI dependency conflicts resolved
    • Numpy/opencv conflicts resolved
    • Overall dependency tree pruned
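The lazy-import pattern credited above is easy to illustrate with a stdlib stand-in; here `uuid` plays the role of a heavy dependency like SQLAlchemy, and `session_scope` is named only to echo the function mentioned in the analysis, not to reproduce its actual code.

```python
import sys

def session_scope() -> str:
    # The heavy import is paid only by processes that actually open a
    # session, not at module import time.
    import uuid  # stand-in for a heavy dependency such as sqlalchemy
    return uuid.uuid4().hex

rid = session_scope()
loaded_after_call = "uuid" in sys.modules
```

A worker that never calls `session_scope()` never pays the import cost, which is exactly why pruning eager imports shrinks the per-worker footprint.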

Memory Impact Calculation

With 30 workers:

  • v1.8.3: 20.55 GB ÷ 30 = ~685 MB per worker
  • v1.9.0: 3.00 GB ÷ 30 = ~100 MB per worker
  • Savings: ~585 MB per worker (85% reduction)

This suggests v1.8.3 was loading significantly more dependencies per worker, likely:

  • Full SQLAlchemy stack (ORM, dialects, connection pooling)
  • All langchain agent classes (even if unused)
  • Heavier dependency trees from older package versions

Other Contributing Factors

1. Gunicorn Upgrade (v22 → v25)

While no specific memory-related features were documented, the 3-version jump included:

  • Better worker lifecycle management
  • Improved resource cleanup
  • Newer Python stdlib usage

2. SQLModel Upgrade (0.0.22 → 0.0.37)

  • 15 minor versions of improvements
  • Potential memory leak fixes
  • Better connection pooling defaults

3. Pydantic Upgrade (2.11.0 → 2.12.5)

  • Validation engine improvements
  • Reduced memory overhead for model instances

4. LFX Upgrade (0.3.3 → 0.4.0)

Relevant commit: cab9ba80da - "fix: Add os catch error to prevent windows failure installation on desktop on lfx lazy import"

This explicitly mentions lazy imports, suggesting LFX v0.4.0 introduced lazy loading optimizations.


Why Preload Makes Little Difference in v1.9.0

The preload_app setting was added in commit afbc6b0db3, but the actual preload implementation (preload.py) didn't exist in v1.9.0.

@ogabrielluiz can you confirm, please?

@github-actions github-actions Bot added performance Maintenance tasks and housekeeping and removed performance Maintenance tasks and housekeeping labels Apr 21, 2026
@jordanrfrazier
Collaborator

@severfire Perhaps a result of branching from release-1.9.1(?), but there are some unrelated commits in this PR. Maybe you could do a force push to get rid of them; otherwise they should be cleared out when we merge release-1.9.1 into main and then rebase onto release-1.10.0. Hopefully.

@jordanrfrazier
Collaborator

@severfire Can you take a look at what Claude found and see if you agree with these issues?

  Critical fork-safety bugs                                                                                                                                           
                                                                                                                                                                      
  1. Redis socket leak across fork — RedisCache had no teardown() override, so the preload code path that "closes the cache socket" was actually a no-op. Workers were
   forking with a shared Redis TCP socket.                                                                                                                            
  → Added RedisCache.teardown() that calls self._client.aclose().                                                                                                     
  2. DB engine dispose failure silently ignored — The comment labeled engine.dispose() as CRITICAL for fork safety, but the surrounding try/except downgraded any     
  failure to a warning and continued setting preloaded=True. Workers would then fork with a shared connection pool.                                                   
  → Removed the try/except; failure now propagates to the outer handler, resets state, and falls back to per-worker init.                                             
  3. Cache teardown broadly swallowed everything — The teardown block caught ImportError, AttributeError, and the actual close failure under a single warning("skipped").
  A renamed import path would silently disable teardown forever.
  → Moved imports out of the try; removed the blanket catch; propagates on failure.                                                                                   
                                                                                                                                                                      
  Silent data loss                                                            
                                                                                                                                                                      
  4. Per-step failures still marked preload as complete — copy_profile_pictures, create_or_update_starter_projects, load_flows_from_directory, and both agentic MCP   
  steps each caught exceptions and continued, but _STATE.preloaded=True was still set. Workers then skipped these steps in the lifespan. Result: missing avatars /
  starter projects / flows with no user-visible error.                                                                                                                
  → Added per-step completion flags. Workers now gate each skip on the corresponding flag, re-running any step that failed in the master (matching the non-preload
  loud-fail behavior).                                                                                                                                                
  
  State consistency                                                                                                                                                   
                                                                              
  5. master_pid leaked on preload failure — _STATE.master_pid was set before the try block. On failure, is_master() returned True while is_preloaded() returned False 
  — an inconsistent combination.
  → Added _PreloadState.reset() called in the outer failure handler.                                                                                                  
  6. Fragile temp_dirs handling in worker lifespan — The if running_in_master: temp_dirs = get_preloaded_temp_dirs() pattern only worked because of a default []      
  initializer earlier in the function. One refactor away from UnboundLocalError in workers.                                                                           
  → Added get_owned_temp_dirs() helper that encodes the master-only ownership rule. main.py no longer needs is_master().                                              
                                                                                                                                                                      
  Structural improvements (incidental)                                                                                                                                
                                                                                                                                                                      
  - Split the one-try-wraps-two-ops agentic MCP block into two independent tries so partial success is tracked correctly.                                             
  - Added a Failure contract section to the module docstring explicitly distinguishing fork-safety-critical steps (propagate) from best-effort steps (flag +
  continue).                                                                                                                                                          
  - Lifespan now reads as a sequence of per-step gates instead of monolithic if preloaded: else: branches.
                                                                                                                                                                      
  Still open (not fixed, noted for follow-up)                                                                                                                         
                                                                                                                                                                      
  - No unit tests for preload.py — the failure-fallback contract is the most load-bearing claim of the PR and has no regression guard.                                
  - Pre-existing double-call of initialize_auto_login_default_superuser() in the non-preloaded branch.
  - Auto-login is still skipped when is_preloaded() is True even though preload never runs it — preserved existing behavior since it was out of scope.                
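Point 1 above can be sketched as follows. The real class wraps `redis.asyncio.Redis`, whose async clients expose `aclose()`; a fake client keeps this example self-contained and runnable.

```python
import asyncio

class FakeRedisClient:
    """Stands in for redis.asyncio.Redis so the sketch is self-contained."""
    def __init__(self) -> None:
        self.closed = False

    async def aclose(self) -> None:
        self.closed = True

class RedisCache:
    def __init__(self, client) -> None:
        self._client = client

    async def teardown(self) -> None:
        # Close the TCP connection BEFORE forking so workers cannot
        # inherit (and then corrupt) a shared Redis socket.
        await self._client.aclose()

client = FakeRedisClient()
asyncio.run(RedisCache(client).teardown())
```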

@severfire
Contributor Author

@jordanrfrazier I will investigate that :-) thank you!

- Introduced a new preload module to optimize memory usage by running fork-safe initialization in the Gunicorn master process.
- Updated the lifespan management in `main.py` to check if the master has preloaded resources, allowing workers to inherit state and skip redundant setup.
- Adjusted the server loading process to accommodate the new preload logic, ensuring efficient resource management across worker processes.
@severfire severfire force-pushed the use_preload_advantages branch from 1aa3606 to 8639a23 on April 21, 2026 at 16:37
@github-actions github-actions Bot added performance Maintenance tasks and housekeeping and removed performance Maintenance tasks and housekeeping labels Apr 21, 2026
@severfire
Contributor Author

@jordanrfrazier - okay, I have tidied up my branch. Now I will investigate the things you mentioned and try to address them. Thanks!

…et leaks

- Added a `teardown` method to the `RedisCache` class to close the Redis connection, addressing potential socket leaks during process forking.
- Introduced unit tests to verify the functionality of the `teardown` method, ensuring it handles client closure and errors gracefully.
- Tests cover scenarios including normal closure, error handling during closure, and teardown with URL-based connections.
…t leaks

- Added a `teardown` method to the `RedisCache` class, ensuring proper closure of the Redis client connection before forking to avoid socket leaks.
- Created comprehensive unit tests to validate the functionality of the `teardown` method, covering various scenarios including normal operation and error handling.
- Updated existing tests to reflect the new teardown functionality and ensure RedisCache is recognized as an instance of `ExternalAsyncBaseCacheService`.
- Removed exception handling around the DB engine disposal to streamline the process, ensuring that the engine is disposed of without unnecessary error logging. This change enhances code clarity and maintains the intended functionality of resource management during the preload phase.
- Simplified the cache service socket closure process in the master preload function to prevent sharing across forks. This change enhances code clarity by removing unnecessary exception handling while maintaining the intended functionality of resource management during the preload phase.
… management

- Added completion flags in the preload state to track the status of various initialization steps, including profile picture copying, starter project creation, agentic global variable initialization, MCP server configuration, and flow loading.
- Updated the lifespan management in `main.py` to utilize these flags, allowing the system to skip redundant setup tasks if they have already been completed during the preload phase.
- This enhancement improves resource management and ensures that the application behaves correctly in a multi-worker environment.
…irs function

- Updated the lifespan management in `main.py` to utilize the new `get_owned_temp_dirs` function, which encapsulates the logic for determining temp directory ownership based on the process type (master or worker).
- Removed the `is_master` check from the lifespan function, simplifying the code and enhancing clarity regarding temp directory cleanup responsibilities.
- Added the `get_owned_temp_dirs` function in `preload.py` to centralize temp directory ownership logic, ensuring that workers do not attempt to clean up directories owned by the master process.
- Introduced conditional gates in the `get_lifespan` function to manage the initialization of profile pictures, super users, bundles, component types, and starter projects based on their completion status.
- Improved logging to provide clearer insights into which steps are being skipped or executed, enhancing the overall clarity of the initialization process.
- Updated the preload logic in `preload.py` to ensure that agentic global variables and MCP server configuration are only initialized when necessary, maintaining efficient resource management in a multi-worker environment.
…nd add preload tests

- Fix double-call issue: setup_superuser() now handles AUTO_LOGIN completely with file lock
- Add comprehensive unit tests for preload.py covering failure-fallback contract
- Simplify code by doing superuser initialization in initialize_services() (called early in both preload and worker startup)
- File lock protects multi-worker race conditions when preload is disabled
- Tests verify critical step failures propagate, best-effort steps continue on failure

Made-with: Cursor
…icts

Fixed critical bugs introduced in c0e81a5 that caused preload failures:

1. Missing import: Added DEFAULT_SUPERUSER_PASSWORD to module-level imports
   - Was only imported inside AUTO_LOGIN block but used when AUTO_LOGIN=false
   - Caused NameError that crashed preload with "session scope error"

2. Removed agentic variable initialization from setup_superuser()
   - Prevents double-initialization conflict with preload's dedicated step
   - initialize_agentic_global_variables() in preload handles all users

3. Made teardown_superuser() more robust
   - Now skips deletion instead of raising errors on FK constraints
   - Prevents startup failures when default superuser has associated flows

Resolves: "An error occurred during the session scope" preload error
Resolves: Ghost thread warnings from incomplete initialization
Made-with: Cursor
- Updated the `_PreloadState` class to include `bundles_loaded` and `types_cached` flags for better tracking of initialization steps.
- Modified the `get_lifespan` function to utilize the new state flags, improving the conditional logic for loading bundles and caching component types.
- Implemented a `reset` method in `_PreloadState` to ensure consistent state restoration after preload failures, enhancing reliability in multi-worker environments.
- Simplified the teardown process in `ExternalAsyncBaseCacheService` by making `teardown` an abstract method, allowing direct calls without fallback checks.

This refactor improves clarity and efficiency in the preload and lifespan management processes.
@github-actions github-actions Bot added performance Maintenance tasks and housekeeping and removed performance Maintenance tasks and housekeeping labels Apr 22, 2026
@severfire
Contributor Author

severfire commented Apr 22, 2026

@jordanrfrazier I hope it's okay now.

I ran memory tests, measuring just after Langflow finished loading:

no preload:
3.92GB | 30 workers
6.66GB | 60 workers

Preload:
2.47GB | 30 workers | saving 1.45GB = 36.9% vs no-preload
4.13GB | 60 workers | saving 2.53GB = 37.99% vs no-preload
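For reference, the reported savings can be re-derived from the raw numbers:

```python
def savings(no_preload_gb: float, preload_gb: float) -> tuple[float, float]:
    """Return (GB saved, percent saved) relative to the no-preload run."""
    saved = no_preload_gb - preload_gb
    pct = saved / no_preload_gb * 100
    return round(saved, 2), round(pct, 2)

w30 = savings(3.92, 2.47)   # 30 workers
w60 = savings(6.66, 4.13)   # 60 workers
```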

@jordanrfrazier
Collaborator

@severfire Great. I'll block some time to look into this today and tomorrow!

@erichare erichare force-pushed the release-1.10.0 branch 2 times, most recently from 841d2d7 to 242c6c7 on April 24, 2026 at 02:15
@github-actions github-actions Bot added performance Maintenance tasks and housekeeping and removed performance Maintenance tasks and housekeeping labels May 5, 2026
@jordanrfrazier jordanrfrazier enabled auto-merge May 5, 2026 16:28
@github-actions github-actions Bot added performance Maintenance tasks and housekeeping and removed performance Maintenance tasks and housekeeping labels May 5, 2026
@github-actions github-actions Bot added performance Maintenance tasks and housekeeping and removed performance Maintenance tasks and housekeeping labels May 5, 2026
erichare added 2 commits May 5, 2026 14:25
…o-superuser case

The `initialized_services` fixture starts with `AUTO_LOGIN=false`, which
runs `setup_superuser` through the credentials-fallback path and creates
the default superuser. The "raises_when_no_superuser" test then mocked
the lock to time out, but the existence check found that pre-created
user and returned `AUTO_LOGIN_LOCK_TIMEOUT_SUPERUSER_PRESENT` instead of
raising `RuntimeError`. Delete the default superuser before mocking the
lock so the no-superuser branch is actually exercised.
@github-actions github-actions Bot added performance Maintenance tasks and housekeeping and removed performance Maintenance tasks and housekeeping labels May 5, 2026
@jordanrfrazier jordanrfrazier added this pull request to the merge queue May 5, 2026
Merged via the queue into langflow-ai:release-1.10.0 with commit 78f82ca May 5, 2026
103 of 104 checks passed
@severfire
Contributor Author

@jordanrfrazier here is the article I have written; please suggest changes: https://docs.google.com/document/d/12vOopCRs896_bJxY2_JtH9iTC--LMhH32NWssq_D-08/edit?tab=t.0#heading=h.pngb50tfb9bo

@jordanrfrazier
Collaborator

@severfire Looks awesome, green light from me and our docs writer (@mendonk). Please feel free to post, and coordinate with Mendon on getting it published and linked onto the Langflow blog page - https://www.langflow.org/blog.

Only notes were to check some paragraph spacing (likely a result of my copy/paste issue) on the Exorcising and Looking Ahead paragraphs.

And a question from me -- thoughts on reorganizing to move the benchmark results to the top? I see it states the ultimate savings early on, which is good, but it could be worth seeing how it reads if you started by showing the results at the top and then the explanations of each section (in case we lose some readers through the technical parts). Either way, I think it reads great. Thanks for this.

@severfire
Contributor Author

@jordanrfrazier
Thank you, I made the suggested corrections. I also added a Related PRs section at the end of the document.

I think it can be published when 1.10 is released. Does that sound good? We could also add #12588 once we are done with it, as it is also related to reliability, so a small section about it could be written.

As for how to publish it, I do not have access to edit/add to the Langflow blog :-) so I guess I would need @mendonk's help.

@mendonk
Collaborator

mendonk commented May 7, 2026

@severfire Thanks for the great work. I can handle the blog publication for the 1.10.x release - I'll have a Vercel build for you pretty soon.

@mendonk
Collaborator

mendonk commented May 11, 2026

@severfire @jordanrfrazier Here's a Vercel preview build of the blog for 1.10.x. It's 1:1 with the google doc right now. Any suggestions?

@severfire
Contributor Author

I like it! Thank you, @mendonk!

@jordanrfrazier
If the team could also double-check the memory savings between the versions, I would appreciate it. I used WSL with Ubuntu.

Before running Langflow I checked the RAM in use with htop. After it loaded, I waited about 10-20s for it to stabilize and took the reading. Then I subtracted the starting RAM from that reading to get the RAM usage after start.

@mendonk
Collaborator

mendonk commented May 12, 2026

@jordanrfrazier I'll run the double-check testing between versions and will follow up here.
@severfire Would you like full authorship of the blog post? If so, I can create an author profile and links for you with any of the following, if you want to provide them:
"name": "",
"bio": "",
"avatar": "",
"location": "",
"social": {
"github": "",
"linkedin": "",
"twitter": "",
"website": ""

@severfire
Contributor Author

@mendonk thank you, not now, maybe next time :-D Maybe I will be able to write some tips on optimizations for high-load environments :-)

ogabrielluiz added a commit that referenced this pull request May 12, 2026
…igrate_orphaned_mcp_servers_config

PR #12778 (Gunicorn preload functionality) added migrate_orphaned_mcp_servers_config
to langflow/services/utils.py but the AST-parity guard in
test_services_utils_module_structure_unchanged was not updated. The test
codifies the current function layout, so adding a function legitimately
requires extending the expected list.