@InfoHunter
As a result of looking through my current SEnginx 'debug' logs and the 'proxy' upstream 'cold start' timings, I tried the following two SEnginx "core" code changes, along with some client-side HTTP benchmarks (taken before and after):
Code changes were:
- Removed 'ngx_http_neteye_security' and the associated "neteye" code embedded in the following SEnginx files: A) src/http/ngx_http.c, B) src/http/ngx_http_core_module.c, and C) src/http/ngx_http_core_module.h.
- Removed all "neteye" SEnginx modules except for the following requisites: A) ngx_http_upstream_fastest, B) ngx_http_upstream_persistence, and C) ngx_http_if_extend.
Test Results: The tested 'cold start' upstream proxy [3 streams as Unix sockets] load times went from ~3 seconds [cold] to 295ms on a cold start (with the above changes). The warm [valid] cache upstream load times did not change much [good], but the cache 'invalid' times improved [also] about as much as the cold-start times [as above].

Notes: The three proxy upstreams, as Unix sockets, pull static assets (only) for a local CDN, which are stored (optimized) using nginx's "proxy_store" (on the 1st pull).
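For context, the upstream layout described above could look roughly like the following. This is a minimal, hypothetical sketch, not the actual production config: the socket paths, server name, and cache root are placeholder assumptions.

```nginx
# Hypothetical sketch of the described setup: three upstream workers
# on Unix sockets serving static assets, with the front end caching
# each asset to disk via proxy_store on the first pull.
upstream cdn_backends {
    # SEnginx's ngx_http_upstream_fastest module (kept in the trimmed
    # build) would supply its balancing directive here; stock nginx
    # would fall back to another balancing method.
    server unix:/var/run/cdn_upstream_1.sock;
    server unix:/var/run/cdn_upstream_2.sock;
    server unix:/var/run/cdn_upstream_3.sock;
}

server {
    listen 80;
    server_name cdn.example.local;

    location /assets/ {
        root /var/cache/cdn;

        # Serve from the local store if the file exists;
        # otherwise pull it from the upstreams.
        try_files $uri @pull;
    }

    location @pull {
        proxy_pass http://cdn_backends;

        # proxy_store writes the fetched asset under the root, so
        # subsequent requests are served as plain static files.
        proxy_store /var/cache/cdn$uri;
        proxy_store_access user:rw group:r all:r;
    }
}
```

With this shape, the ~3s vs. 295ms difference above would show up only on the first ("cold") pull of an asset, which matches the report that warm-cache timings were largely unaffected.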
This report is intended as 'basic user' feedback on the possible overheads of the current "ngx_http_neteye_security" framework APIs, as placed into the SEnginx core, and of the associated 'neteye' security modules.
I accept that I may still have some configuration issues to follow up on :)
I wish I could offer you a better stream of hard data and numbers, but it is a production server that I use SEnginx on, so my change window is always brief... :)