Description
Hi there,
I'm trying to run nginx with OFP for a basic performance test:

```
wrk -c4000 -d120s -t12 --latency "http://myserver//"
```

My setup is:
- VM OS: CentOS 7.7.1908 (kernel 3.10.0-1062)
- DPDK: dpdk-18.11.5
- ODP: 1.23.3
- OFP: cloned from this repository, latest version
- nginx: 1.16.1
I found that there is a netwrap feature (#225) to do this with the original nginx, but I failed to run it:
```
./scripts/ofp_netwrap.sh ../nginx-1.16.1/objs/nginx -p ../nginx-1.16.1
....
D 3437958 3:3942635264 ofp_pkt_processing.c:174] ETH TYPE = 0800
D 3437965 2:3951027968 ofp_pkt_processing.c:174] ETH TYPE = 0800
D 3437971 3:3942635264 ofp_pkt_processing.c:174] ETH TYPE = 0069
D 3437992 2:3951027968 ofp_pkt_processing.c:174] ETH TYPE = 86dd
D 3437992 2:3951027968 ofp_pkt_processing.c:174] ETH TYPE = 86dd
I 3438017 0:4160650240 app_main.c:216] End Netwrap processing constructor()
EAL: RTE_ACL tailq is already registered
PANIC in tailqinitfn_rte_acl_tailq():
Cannot initialize tailq: RTE_ACL
5: [/lib64/ld-linux-x86-64.so.2(+0x115a) [0x7ffff7ddc15a]]
4: [/lib64/ld-linux-x86-64.so.2(+0xf973) [0x7ffff7dea973]]
3: [/home/sonicwall/odp_ofp/lib/libofp.so.0.0.0(+0xd3b9c) [0x7ffff61dab9c]]
2: [/home/sonicwall/odp_ofp/lib/libofp_netwrap_crt.so.0.0.0(__rte_panic+0xba) [0x7ffff7066758]]
1: [/home/sonicwall/odp_ofp/lib/libofp_netwrap_crt.so.0.0.0(rte_dump_stack+0x1a) [0x7ffff70dc2da]]
./scripts/ofp_netwrap.sh: line 9: 35020 Aborted    LD_LIBRARY_PATH=/path/tomy/odp_ofp/lib:$LD_LIBRARY_PATH LD_PRELOAD=libofp_netwrap_crt.so.0.0.0:libofp.so.0.0.0:libofp_netwrap_proc.so.0.0.0 $@
```
I also tried setting nginx to daemon off; master_process off;, but it failed with the same stack.
Then I moved on to another project, nginx_ofp, and again failed to run it:
```
[root@localhost nginx_dpdk]# ./start_nginx.sh 0
Starting nginx on interface 0
Found 1 worker_processes
[root@localhost nginx_dpdk]# signal is :(null)
len : 1e, file : /usr/local/nginx_dpdk/ofp.conf
ODP system info
---------------
ODP API version: 1.23.3
CPU model:
CPU freq (hz): 0
Cache line size: 0
Core count: 0
Running ODP appl: "webserver"
-----------------
IF-count: 1
Using IFs: 0
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:05.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
Pool config:
pool.pkt.max_num: 262143
Queue config:
queue_basic.max_queue_size: 8191
queue_basic.default_queue_size: 4095
Using scheduler 'basic'
Scheduler config:
sched_basic.prio_spread: 4
sched_basic.prio_spread_weight: 63
sched_basic.burst_size_default[] = 32 32 32 32 32 16 8 4
sched_basic.burst_size_max[] = 255 255 255 255 255 16 16 8
sched_basic.group_enable.all: 1
sched_basic.group_enable.worker: 1
sched_basic.group_enable.control: 1
Packet IO config:
pktio.pktin_frame_offset: 0
PKTIO: initialized null interface.
No crypto devices available
I 739911 0:4160634880 ofp_uma.c:45] Creating pool 'udp_inpcb', nitems=60000 size=1160 total=69600000
I 739914 0:4160634880 ofp_uma.c:45] Creating pool 'tcp_inpcb', nitems=60000 size=1160 total=69600000
I 739918 0:4160634880 ofp_uma.c:45] Creating pool 'tcpcb', nitems=60000 size=784 total=47040000
I 739921 0:4160634880 ofp_uma.c:45] Creating pool 'tcptw', nitems=60000 size=80 total=4800000
I 739923 0:4160634880 ofp_uma.c:45] Creating pool 'syncache', nitems=30720 size=168 total=5160960
I 739923 0:4160634880 ofp_uma.c:45] Creating pool 'tcpreass', nitems=320 size=48 total=15360
I 739923 0:4160634880 ofp_uma.c:45] Creating pool 'sackhole', nitems=65536 size=40 total=2621440
odp_crypto.c:598:odp_crypto_capability():No crypto devices available
I 739925 0:4160634880 ofp_init.c:438] Slow path threads on core 0
Num worker threads: 2
first CPU: 2
cpu mask: 0xC
D 739925 0:4160634880 ofp_ifnet.c:302] Interface '0' becomes 'fp0', port 0
DPDK interface (net_e1000_em): 0
num_rx_desc: 128
num_tx_desc: 256
rx_drop_en: 0
D 739925 0:4160634880 ofp_ifnet.c:54] Interface '0' supports IPv4 TX checksum offload
D 739925 0:4160634880 ofp_ifnet.c:76] Interface '0' supports UDP TX checksum offload
D 739925 0:4160634880 ofp_ifnet.c:98] Interface '0' supports TCP TX checksum offload
E 739925 0:4160634880 ofp_ifnet.c:163] Failed to create output queues.
E 739925 0:4160634880 ngx_ofp_module.c:224] Failed to init interface 0
```
I set nginx.conf to run on a single core only (worker_processes 1;), but the setup still comes up with CPU mask 0xC. That cannot work, since my network adapter is an E1000.
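For reference, the mask 0xC from the log selects cores 2 and 3 (bits 2 and 3 set), i.e. two workers even though worker_processes is 1. A quick sketch to decode such a mask (plain shell arithmetic, nothing OFP-specific):

```shell
# Decode a CPU affinity mask into the core numbers it selects.
# 0xC = binary 1100 -> bits 2 and 3 set -> cores 2 and 3.
mask=0xC
cores=""
bit=0
while [ "$bit" -lt 32 ]; do
  if [ $(( (mask >> bit) & 1 )) -eq 1 ]; then
    cores="$cores $bit"
  fi
  bit=$((bit + 1))
done
echo "mask $mask selects cores:$cores"   # prints: mask 0xC selects cores: 2 3
```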
Then I switched to vmxnet3, but got this:
```
D 12325 0:4160634880 ofp_ifnet.c:302] Interface '0' becomes 'fp0', port 0
odp_packet_dpdk.c:182:init_options():Invalid number of TX descriptors
odp_packet_dpdk.c:688:setup_pkt_dpdk():Initializing runtime options failed
../linux-generic/odp_packet_io.c:334:setup_pktio_entry():Unable to init any I/O type.
E 12325 0:4160634880 ofp_ifnet.c:23] odp_pktio_open failed
E 12325 0:4160634880 ngx_ofp_module.c:224] Failed to init interface 0
```
It also failed with multi-queue enabled.
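In case it is relevant: the vmxnet3 failure comes from init_options() rejecting the TX descriptor count (256 in the log above). If odp-linux's DPDK pktio in my tree reads its options from the ODP config file, as odp_packet_dpdk.c suggests, an override along these lines might be worth trying. The group and key names below are my assumption from that file and should be verified against the ODP 1.23.3 sources:

```
# Hypothetical ODP config override; point ODP_CONFIG_FILE at this file.
# Raising the descriptor counts since vmxnet3 rejects the 256 default.
pktio_dpdk: {
    num_rx_desc = 512
    num_tx_desc = 512
    rx_drop_en = 0
}
```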
Kindly advise how to run it.
P.S.
I don't actually care how nginx with OFP runs on multiple cores. What I want is to make the worker_processes 1; configuration work, since my benchmarks are all on a single core.
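For clarity, the single-core setup I'm aiming for is just the standard nginx directives (a minimal fragment, nothing OFP-specific):

```nginx
# Minimal nginx.conf fragment for a single-process, foreground run
worker_processes 1;    # one worker, single core
daemon off;            # stay in the foreground
master_process off;    # no master/worker split, easier to debug
```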
P.S.2
I also succeeded in running the webserver example with the help of #241, but it was too slow compared with the original nginx, so I still have to figure out how to run nginx itself with OFP.
webserver:
```
wrk -c4000 -d120s -t12 --latency "http://myserver:2048//"
Running 2m test @ http://myserver:2048//
  12 threads and 4000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    59.02ms   78.34ms    1.99s   94.99%
    Req/Sec    489.13    224.24     1.90k   65.84%
  Latency Distribution
     50%   42.63ms
     75%   47.08ms
     90%   54.24ms
     99%  329.61ms
  467540 requests in 2.00m, 8.47MB read
  Socket errors: connect 2987, read 467540, write 0, timeout 242
Requests/sec:   3894.37
Transfer/sec:     72.26KB
```
original nginx:
```
wrk -c4000 -d120s -t12 --latency "http://myserver//"
Running 2m test @ http://myserver//
  12 threads and 4000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    43.40ms   79.56ms    2.00s   94.97%
    Req/Sec      2.96k     1.16k    5.97k   58.85%
  Latency Distribution
     50%   27.50ms
     75%   32.65ms
     90%   40.89ms
     99%  293.86ms
  2825754 requests in 2.00m, 452.60MB read
  Socket errors: connect 2987, read 0, write 0, timeout 49
Requests/sec:  23539.40
Transfer/sec:      3.77MB
```