question about load-balancing / benchmarking

ts77

Well-Known Member
#1
Hi there,

I'm finally trying to get my PHP FastCGI load balancing to work.
It runs fine, except that I can't get more than 10.3 MB/s of throughput on my 100 Mbit link to really try it out ;).

I'm just wondering how the load balancing is done internally ... is there an algorithm you can share?
Does it pick the backend with the lowest number of connections, or how does it work?

Trying to benchmark from localhost to avoid network saturation gives me lots of messages like:
Code:
Error: socket: read error Connection reset by peer:: Connection reset by peer
Error: socket: read error Connection reset by peer:: Connection reset by peer
Error: socket: read error Connection reset by peer:: Connection reset by peer
Error: socket: read error Connection reset by peer:: Connection reset by peer
Error: socket: read error Connection reset by peer:: Connection reset by peer
Error: socket: read error Connection reset by peer:: Connection reset by peer
Error: socket: read error Connection reset by peer:: Connection reset by peer
Error: socket: read error Connection reset by peer:: Connection reset by peer
Error: socket: read error Connection reset by peer:: Connection reset by peer
Error: socket: read error Connection reset by peer:: Connection reset by peer
I'm trying to benchmark using siege with 100 concurrent users.
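For context, a siege run along these lines reproduces it (the URL here is a placeholder, not my actual test page):

```shell
# 100 concurrent users (-c) for 60 seconds (-t); URL is a placeholder
siege -c 100 -t 60S http://localhost/phpinfo.php
```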

Any idea what to make of those messages? I don't get them when benchmarking from a remote host :(.


Thanks,

Thomas
 

mistwang

LiteSpeed Staff
#2
The load balancing algorithm is based on <utilization rate> = <used connections> / <max connections> .

So, you can use <max connections> to adjust the workload for each node.
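The rule above can be sketched in a few lines. This is illustrative only, not LiteSpeed's actual implementation, and the node data is made up:

```python
# Pick the backend with the lowest utilization rate,
# where utilization = used connections / max connections.
# Raising a node's max connections lowers its utilization
# at a given load, so it receives a larger share of requests.

def pick_backend(backends):
    """backends: list of dicts with 'name', 'used', 'max' keys."""
    return min(backends, key=lambda b: b["used"] / b["max"])

nodes = [
    {"name": "php1", "used": 8,  "max": 10},  # utilization 0.8
    {"name": "php2", "used": 10, "max": 20},  # utilization 0.5
]
# php2 wins despite having more open connections,
# because its larger max gives it a lower utilization rate.
print(pick_backend(nodes)["name"])  # → php2
```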

Not sure why you only get the "reset by peer" error from localhost; please make sure "max connections per ip" is set high enough. Are you still getting the error at a lower concurrency level?
 

ts77

Well-Known Member
#3
Thanks, with <max connections> I can nicely spread the load over the PHP backends.

Yeah, with 10 concurrent connections I don't get the messages below, but already at 20 concurrent connections I get them (though not as many as with 100 concurrent users).

Code:
Error: socket: read error Connection reset by peer:: Connection reset by peer
Error: unable to shutdown the socket: Transport endpoint is not connected
Error: socket: read error Connection reset by peer:: Connection reset by peer
Error: unable to shutdown the socket: Transport endpoint is not connected
Error: socket: read error Connection reset by peer:: Connection reset by peer
Error: unable to shutdown the socket: Transport endpoint is not connected
Error: socket: read error Connection reset by peer:: Connection reset by peer
Error: unable to shutdown the socket: Transport endpoint is not connected
Error: socket: read error Connection reset by peer:: Connection reset by peer
Error: unable to shutdown the socket: Transport endpoint is not connected

btw. here are some numbers with 2 PHP backends serving a phpinfo page, with siege at 10 concurrent users run from the local host:
Code:
Transactions:                  26314 hits
Availability:                 100.00 %
Elapsed time:                  60.45 secs
Data transferred:             996.33 MB
Response time:                  0.02 secs
Transaction rate:             435.30 trans/sec
Throughput:                    16.48 MB/sec
Concurrency:                    9.96
Successful transactions:       26314
Failed transactions:               0
Longest transaction:            0.57
Shortest transaction:           0.00
Seems like it really was the network interface that previously limited throughput to ~10 MB/s :).
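The arithmetic backs that up: 100 Mbit/s is 12.5 MB/s before protocol overhead, so the ~10.3 MB/s from the first post was already most of the link. A quick check:

```python
# How close was the earlier benchmark to saturating a 100 Mbit/s link?
link_mbit = 100
raw_mb_per_s = link_mbit / 8   # 12.5 MB/s before protocol overhead
observed = 10.3                # MB/s reported in the first post

print(f"{observed / raw_mb_per_s:.0%} of raw link capacity")  # → 82%
```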
 

ts77

Well-Known Member
#4
ah, I think I found it.
I switched the event handler for LSWS to epoll and now I don't get any error messages anymore.

I've just run a test with 50 concurrent connections:
Code:
Transactions:                  28917 hits
Availability:                 100.00 %
Elapsed time:                  60.06 secs
Data transferred:            1086.47 MB
Response time:                  0.10 secs
Transaction rate:             481.47 trans/sec
Throughput:                    18.09 MB/sec
Concurrency:                   49.58
Successful transactions:       28917
Failed transactions:               0
Longest transaction:            5.10
Shortest transaction:           0.03
The numbers won't increase much more; I guess I'd need to add more PHP backends if needed, as the (single-processor) machines were already at a load of 5 with that test ;).

So, problem solved I guess, thx for your support.
 