LSWS starts too many dispatch.lsapi processes

andreas

Well-Known Member
#1
Hi,

I configured my Rails app according to the description in the Wiki. I have set both Max Connections and LSAPI_CHILDREN to 3, but when I issue multiple concurrent requests, far too many processes are started:


├─lshttpd─┬─lscgid
│         └─lshttpd─┬─admin_php
│                   └─dispatch.lsapi───11*[dispatch.lsapi]


Why?

Thanks
Andreas
 

mistwang

LiteSpeed Staff
#2
Are you using the 2-CPU Enterprise edition? Has "Instances" been set to "1"? Has "Initial Request Timeout" been set greater than the time the slowest Ruby request will take?
 

andreas

Well-Known Member
#3
mistwang said:
Are you using the 2-CPU Enterprise edition? Has "Instances" been set to "1"? Has "Initial Request Timeout" been set greater than the time the slowest Ruby request will take?
I am using the standard version on a 1 CPU machine. My settings:

Name RubyRailsLSAPI
Address uds://tmp/lshttpd/rubyrailslsapi.sock
Max Connections 3
Environment RAILS_ENV=production
LSAPI_CHILDREN=3
Initial Request Timeout (secs) 30
Retry Timeout (secs) 30
Persistent Connection Yes
Connection Keepalive Timeout N/A
Response Buffering No
Auto Start Yes
Command /xyz/public/dispatch.lsapi
Back Log 1
Instances 1
Run On Start Up Yes
Max Idle Time -1
Priority N/A
Memory Soft Limit (bytes) N/A
Memory Hard Limit (bytes) N/A
Process Soft Limit 1
Process Hard Limit 1

When I click on a link in my application a few times it results in an immediate increase in the number of processes until all the requests are handled, so I don't think the 30s timeout matters.
 

mistwang

LiteSpeed Staff
#4
You should increase the "Process Soft/Hard Limit" to a more reasonable value, like 50-100. "Back Log" should be increased to something like 10.
 

andreas

Well-Known Member
#5
mistwang said:
You should increase the "Process Soft/Hard Limit" to a more reasonable value, like 50-100. "Back Log" should be increased to something like 10.
I made the changes and restarted the server, but it's still the same problem. pstree suggests that the dispatcher forks a new process for every concurrent request and kills it immediately after it has answered its request.

A few concurrent requests active:
└─lshttpd───dispatch.lsapi───8*[dispatch.lsapi]

After the requests are handled the number of children immediately drops to 1:
└─lshttpd───dispatch.lsapi───dispatch.lsapi
 

mistwang

LiteSpeed Staff
#6
Are you testing this from a browser by clicking the "refresh" button quickly and repeatedly? Or using load-testing software like "ab"?
The former may cause what you observed: because the request was canceled midway, LSWS has to close the connection on its side before the request has finished. On the dispatch.lsapi side, the process dies after finishing the request, since the connection to the server has been lost.
The dispatch.lsapi process stays alive as long as the connection to the server is in good shape.
Lshttpd should not make more than 3 concurrent connections to dispatch.lsapi, but there may be additional processes that were still processing canceled requests.

That's my explanation, but I might be wrong. Please try your test with "ab"; there should be four processes at most. If that is not the case, please let me know.
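The lifecycle described above can be sketched as a simplified prefork model: each child serves requests over one persistent connection and exits when that connection is dropped. This is purely illustrative (the socket pairs and variable names are assumptions, not the actual ruby-lsapi internals):

```ruby
require "socket"

CHILDREN = 3  # stands in for LSAPI_CHILDREN / Max Connections

# One socket pair per child stands in for the LSWS <-> dispatch.lsapi link.
pairs = CHILDREN.times.map { UNIXSocket.pair }

pids = pairs.map do |server_side, child_side|
  pid = fork do
    server_side.close
    # Serve requests until the server closes its end; a lost connection
    # is the child's signal to finish its current request and exit.
    while (request = child_side.gets)
      child_side.puts "handled #{request.chomp}"
    end
    exit 0
  end
  child_side.close
  pid
end

# The "server" sends one request per child and reads the replies...
pairs.each_with_index { |(server_side, _), i| server_side.puts "req-#{i}" }
replies = pairs.map { |(server_side, _)| server_side.gets.chomp }

# ...then drops the connections (a canceled request), which makes
# every child exit cleanly.
pairs.each { |(server_side, _)| server_side.close }
statuses = pids.map { |pid| Process.wait2(pid).last.exitstatus }
```

In this model, rapid browser refreshes correspond to the final step: each dropped connection ends a child, and new requests force new forks, which matches the process churn seen in pstree.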
 

andreas

Well-Known Member
#7
mistwang said:
Are you testing this from a browser by clicking the "refresh" button quickly and repeatedly?
Yes.

Or using load-testing software like "ab"?
The former may cause what you observed: because the request was canceled midway, LSWS has to close the connection on its side before the request has finished. On the dispatch.lsapi side, the process dies after finishing the request, since the connection to the server has been lost.
The dispatch.lsapi process stays alive as long as the connection to the server is in good shape.
Lshttpd should not make more than 3 concurrent connections to dispatch.lsapi, but there may be additional processes that were still processing canceled requests.
Can't this be prevented? I find it worrying that someone can create insane numbers of processes with this kind of behaviour. Especially with the high RAM requirements of Rails it is very easy to DOS a small server this way.

That's my explanation, but I might be wrong. Please try your test with "ab"; there should be four processes at most. If that is not the case, please let me know.
Yes, it doesn't create more than 3 children when I use ab -c 10. But it kills all but one after the requests are finished. Is there some way of keeping the processes alive?
 

mistwang

LiteSpeed Staff
#8
Can't this be prevented? I find it worrying that someone can create insane numbers of processes with this kind of behaviour. Especially with the high RAM requirements of Rails it is very easy to DOS a small server this way.
We will improve the process manager to see if we can come up with a better solution. This kind of attack can easily be fended off with our request rate throttling feature.
Yes, it doesn't create more than 3 children when I use ab -c 10. But it kills all but one after the requests are finished. Is there some way of keeping the processes alive?
That's because "ab" cancels the extra requests it has sent at the end of the test. If you try "ab -c 1", the child process is still alive at the end.
 

mistwang

LiteSpeed Staff
#9
Can't this be prevented? I find it worrying that someone can create insane numbers of processes with this kind of behaviour. Especially with the high RAM requirements of Rails it is very easy to DOS a small server this way.
Please try the ruby-lsapi 1.4 release: the maximum number of child processes is limited to twice the configured value, and the child processes stay alive as well.
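The cap described above can be expressed as a one-line spawn check. This is a sketch of the stated behavior only; the method name and signature are illustrative, not part of the ruby-lsapi API:

```ruby
LSAPI_CHILDREN = 3  # configured value from the external app settings

# Spawn policy as described for ruby-lsapi 1.4: new children may be
# forked for incoming requests, but never beyond twice the configured
# value, leaving headroom for processes still finishing canceled requests.
def may_spawn_child?(current_children, configured = LSAPI_CHILDREN)
  current_children < 2 * configured
end
```

With LSAPI_CHILDREN=3, a sixth child is still allowed, but a seventh is not, which bounds the worst-case memory footprint of a refresh-spamming client.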
 

mistwang

LiteSpeed Staff
#11
The gem version will be updated a little later if no bugs are reported against this release. You will probably need to uninstall the gem version first when you try the new version.
 