Optimizing LiteSpeed for high traffic

Jinesh

Well-Known Member
#1
Hello,

I just installed LiteSpeed on my server but don't know exactly how to optimize it to get the most out of it. I have a single website with a high traffic volume.
Can you guys help me?

Thanks!!!
 

Germont

Well-Known Member
#2
You didn't mention your server OS, control panel, or CMS (WordPress or something else).
Such details would help us give more specific answers.
One LiteSpeed-specific step is to increase PHP suEXEC Max Conn according to the documentation.
For a high-traffic site, an option like "serve stale" cached pages can be useful.
There is a lot more to discuss about LSWS, but maybe others know more.

There are some resource-hungry PHP extensions that should be disabled on most installations, such as snmp and xdebug.
But timezonedb and the opcode cache (OPcache) should be enabled.
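For example, a minimal php.ini sketch of that advice (extension filenames and ini layout are assumptions; adjust for your lsphp build, which may load extensions from separate .ini files):

  ; disable resource-hungry extensions (comment them out or remove their ini entries)
  ;extension=snmp.so
  ;zend_extension=xdebug.so

  ; keep the opcode cache and timezonedb enabled
  zend_extension=opcache.so
  opcache.enable=1
  extension=timezonedb.so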

You can also disable symlink protection to improve performance and use the free KernelCare symlink protection patch instead.
I use Redis even though I have a low-traffic site; preload time decreased by 25%.
For high-traffic sites it helps a lot more, but expert advice may be needed to get the most out of it.
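If you try Redis, a quick sanity check that the object cache is actually being used (assuming a default local Redis install; exact hit counts depend on the CMS and plugin):

  redis-cli ping                         # should answer PONG
  redis-cli info stats | grep keyspace   # keyspace_hits should grow while you browse the site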

Is your installation compatible with PHP 8 / MySQL 8.0?
 

Crazy Serb

Well-Known Member
#4
I've followed all those tips and I still hit the limit even at 500 for Max Connections:

https://markuphero.com/share/LWMByIGqTMl4q1ZjRzWN

And here are some of my config settings:

https://prnt.sc/wOOMZKrPav3-
https://prnt.sc/Vuio1gQYO5La

on an AMD Ryzen 7500 CPU (12 cores, multi-threaded), 128GB RAM, NVMe SSDs, Apache MPM Event, PHP 8...

and I am running load tests on it from loader.io:

https://prnt.sc/FZhKYcrKBuww

getting results like these:

https://prnt.sc/GhnZGFXCBUDo
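For cross-checking the loader.io numbers from the server side, a rough local benchmark sketch with wrk (a different tool than the loader.io tests above; the URL and connection counts are placeholders):

  wrk -t12 -c2000 -d60s https://example.com/
  # -t worker threads, -c open connections held during the test, -d duration

Running it from a second machine on the same network helps rule out bandwidth and DNS as factors.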

Can I hire someone here to fine-tune this server's LiteSpeed config so it can handle 10,000+ concurrent connections without crapping out?
 

serpent_driver

Well-Known Member
#5
@Crazy Serb
You should not hold the web server or LiteSpeed solely responsible for performance. Basically, LSWS, unlike Apache, is designed to process an almost infinite number of requests, thanks to the event-driven process architecture that can also be found in nginx. If you're having issues with your server's connectivity, then you need to consider the bigger picture. A web server is only as fast as PHP and the DB server can process requests. If you are not already using it, you should use LSCache whenever possible, because it is the only way to process a large number of simultaneous requests almost without problems and without significantly increasing the load on your server. In the worst case you can also use WebADC.
 

Crazy Serb

Well-Known Member
#6
@Crazy Serb
You should not hold the web server or LiteSpeed solely responsible for performance. Basically, LSWS, unlike Apache, is designed to process an almost infinite number of requests, thanks to the event-driven process architecture that can also be found in nginx. If you're having issues with your server's connectivity, then you need to consider the bigger picture. A web server is only as fast as PHP and the DB server can process requests. If you are not already using it, you should use LSCache whenever possible, because it is the only way to process a large number of simultaneous requests almost without problems and without significantly increasing the load on your server. In the worst case you can also use WebADC.
Yeah, that's not helping at all, unfortunately.

I'm fully aware that LiteSpeed isn't the only factor in the equation, but all the other factors have already been taken care of and optimized, so saying that LiteSpeed can handle an unlimited number of requests when it craps out at 1,500-2,000 requests doesn't help me at all.

Again, I'm looking for specific suggestions on what to edit and where, which values to raise or lower, and what values to test with (there are a thousand settings in the LiteSpeed config alone) so that I can do that... or, if someone knows exactly what to edit and how to test it, I'll gladly pay them for their time (whether that covers the LiteSpeed config only or other server software as well).

Again, this is set up on a top-of-the-line dedicated server with plenty of resources, and for some reason I can't get this thing (LiteSpeed) to support more than 1,500 concurrent connections, even with LSCache turned on and most of the requests being served from the cache itself.

So forgive me if I don't believe the claim that it can handle unlimited requests right now...
 

AndreyPopov

Well-Known Member
#7
Can I hire someone here to fine-tune this server's LiteSpeed config so it can handle 10,000+ concurrent connections without crapping out?

If you want 10,000+ concurrent connections, then why did you set Max Connections to 500 everywhere?

Have you read this: https://docs.litespeedtech.com/lsws/extapp/php/configuration/control/


Set PHP suEXEC Max Conn to the maximum number of concurrent LSPHP processes you want to allow



If you are using Apache but not using PHP suEXEC, please follow the "override auto-detected PHP" guide and set the following for the External Applications:

  • Set Max Connections to the maximum number of concurrent LSPHP processes you want to allow.
  • Set PHP_LSAPI_CHILDREN inside of Environment to the maximum number of concurrent LSPHP processes you want to allow.
Max Connections and PHP_LSAPI_CHILDREN must be set to the same value.
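A minimal sketch of what those External Application settings might end up looking like (the number is a placeholder, and LSAPI_AVOID_FORK is an extra variable taken from the LSAPI docs rather than from the quote above; treat it as optional):

  Max Connections: 2000
  Environment:
    PHP_LSAPI_CHILDREN=2000
    LSAPI_AVOID_FORK=1

Whatever value you choose, Max Connections and PHP_LSAPI_CHILDREN should move together, and raising them only helps if PHP and MySQL can actually keep up with that many concurrent workers.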
 

Crazy Serb

Well-Known Member
#8
Hmm, I'll try that, but the reason I was using PHP suEXEC Max Conn = 500 is that 1) it wouldn't let me set it to anything higher than 2,000 (some hard limit?) and 2) every suggestion I got from admins here was to NOT set that value higher than 100 for some reason...
 

Crazy Serb

Well-Known Member
#9
If you want 10,000+ concurrent connections, then why did you set Max Connections to 500 everywhere?
Oh yeah, here it is:

https://markuphero.com/share/t6RXiZCHs7RkP0nzwb4y

The limit for those is 2,000; I can't go over that.

And when I do set it to 2,000 and match the PHP_LSAPI_CHILDREN value to that (2,000), server usage goes through the roof when I run a test with 2,000 clients per second: server load climbs past 200 (from a 0.1 average load).

So setting Max Connections and PHP_LSAPI_CHILDREN to high values isn't really a solution for tackling high traffic and heavy load... so what else am I missing here?
 

serpent_driver

Well-Known Member
#11
@AndreyPopov

I have to agree with you for once. I already said it, but it's an illusion to believe that the web server alone determines whether 2,000 or more simultaneous requests can be handled. That's why one has to assume that @Crazy Serb (fitting nick ;)) is only making theoretical efforts that are detached from any reality. Before the web server gives up due to too many simultaneous requests, the MySQL server will have stopped working much earlier. That's why a discussion about how to configure LSWS for high traffic without taking all the other components into account is meaningless.
 

Crazy Serb

Well-Known Member
#12
@AndreyPopov

I have to agree with you for once. I already said it, but it's an illusion to believe that the web server alone determines whether 2,000 or more simultaneous requests can be handled. That's why one has to assume that @Crazy Serb (fitting nick ;)) is only making theoretical efforts that are detached from any reality. Before the web server gives up due to too many simultaneous requests, the MySQL server will have stopped working much earlier. That's why a discussion about how to configure LSWS for high traffic without taking all the other components into account is meaningless.
I don't know why you think that's an illusion... you clearly said it's designed to process an infinite number of requests.

And I clearly told you that MySQL and all the other components of the server have been fine-tuned to perfection and can handle millions of requests per minute.

How do I know?

Because I tested everything else under heavy load as well.

And the only thing causing server load to spike into oblivion and load times to go up to 50-60 seconds is (yes, you guessed it) LiteSpeed.

So I appreciate the anecdotal advice, but none of that really helps in my situation because, as I already said, everything else on the server has already been fine-tuned. The only wild cards here are the LiteSpeed and Apache settings (which it feeds off of to a certain extent).
 

serpent_driver

Well-Known Member
#13
Please don't expect your "gimmicks" to be believed. At the risk of repeating myself: to process requests at this scale, you need a server cluster. No single server in this universe is capable of processing 2,000 or more simultaneous requests, and when I say server, I mean all the components involved in a request. The high load you mention is not caused by the web server, but by PHP and MySQL. With such a high number of simultaneous requests, it no longer matters whether the web server can process an almost infinite number of simultaneous requests if the database server either gives up much earlier or lets the system load increase exorbitantly. You can't optimize a database server so well that it doesn't significantly increase the load. If you're looking for a bottleneck, look in the lower-level components, not in the web server. The web server is only the provider, but if PHP and MySQL are overwhelmed by the high request load, then that is to the detriment of those components, not the web server.

If you want to have a serious discussion, you are cordially invited to do so. At the moment your thoughts are far from that and sound rather “crazy”.
 

Crazy Serb

Well-Known Member
#14
Please don't expect your "gimmicks" to be believed. At the risk of repeating myself: to process requests at this scale, you need a server cluster. No single server in this universe is capable of processing 2,000 or more simultaneous requests, and when I say server, I mean all the components involved in a request. The high load you mention is not caused by the web server, but by PHP and MySQL. With such a high number of simultaneous requests, it no longer matters whether the web server can process an almost infinite number of simultaneous requests if the database server either gives up much earlier or lets the system load increase exorbitantly. You can't optimize a database server so well that it doesn't significantly increase the load. If you're looking for a bottleneck, look in the lower-level components, not in the web server. The web server is only the provider, but if PHP and MySQL are overwhelmed by the high request load, then that is to the detriment of those components, not the web server.

If you want to have a serious discussion, you are cordially invited to do so. At the moment your thoughts are far from that and sound rather “crazy”.
Dude, please stop replying here as you're not helping, and are actually talking nonsense more than anything.

I've seen basic servers handle 5,000 requests per second just fine, like this one with only 1 CPU and 1GB of RAM:

https://kunaldesai.blog/litespeed/

My server has 24 CPUs + 128GB of RAM, and PHP / MySQL usage during these loader.io tests is negligible. I know because, as I said, I'm not talking out of my ass but from actual observations during these tests. MySQL usage is about 10-20% of a single CPU at the peak of these tests, and given 24 cores it barely even registers on the radar when it comes to server load.

Hell, I've even tried testing it with MySQL on a remote server as well, with the same results - the current server with LiteSpeed goes up in smoke as soon as I increase the number of clients in that test above 2,000 per second.

And here's another one for you - I tested it with some basic caching set up on Cloudflare (no image CDN, no page rules of any sort) at 10,000 clients/second and the server load stays under 1.00, meaning it doesn't even flinch.

I've seen nginx setups on basic servers with minimal hardware handle 10,000+ requests without a flinch as well (I can link to those too if you don't believe me); I just don't want to deal with nginx and prefer LiteSpeed. But... who knows, if I can figure out how to use nginx to handle that much traffic as well...

So, yeah... if you don't mind, I don't want this to turn into another back-and-forth argument with someone who refuses to accept reality, who thinks they know what they're talking about but doesn't (due to limited knowledge, experience, or beliefs), and who will most likely not change their beliefs about this topic no matter how much evidence to the contrary I present.

I'd rather hear from someone who has actually done this, and what they did to achieve it, than argue with you. Thanks. I'll ignore your follow-up comments unless they're constructive and specific to what I asked previously.
 

AndreyPopov

Well-Known Member
#16
I've seen basic servers handle 5,000 requests per second just fine, like this one with only 1 CPU and 1GB of RAM:
2,000 clients per second

- requests per second
- clients per second

are very different things!


Not every request to the web server involves PHP code execution and/or access to database data.

For example, on shared hosting I use 4 cores of an Intel Xeon E5-2667 v3 3.2GHz and 8GB RAM,
but I am limited to 200 connections to the MySQL database - not 200 per second, 200 connections in total!


I use the mysql_memcached driver for database access for web frontend data to reduce connections, but not for client-specific requests like account, cart, etc.

A server can serve 10,000 requests per second
but cannot serve 1,000 clients per second.
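To put illustrative (made-up) numbers on that difference: if each client loads one HTML page that pulls in roughly 30 static assets (CSS, JS, images, fonts), then

  1,000 clients/s x (1 page + ~30 assets) ≈ 31,000 requests/s

and only the page requests that miss the cache ever execute PHP or open a MySQL connection, which is why the raw request rate alone says little about backend load.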
 

serpent_driver

Well-Known Member
#18
A server can serve 10,000 requests per second
but cannot serve 1,000 clients per second.
This is an important distinction, but it is still beside the point. 2,000 or more simultaneous website requests generate a high load regardless of the requests for static resources. That's why it's still an illusion that you can handle such a large number of simultaneous requests without a load balancer.
 

AndreyPopov

Well-Known Member
#20
No load balancer, 1 CPU, 1GB of RAM server.

Nothing special in terms of hardware (or software) setup.
Are you sure?

No information about the storage configuration!
No information about the network configuration!
No information about the WordPress page types and sizes!

Maybe all the WordPress pages are simple pages with only "Hello World!" - no CSS, no JS, no images, no fonts (size less than 1KB).

All pages are cached by LSCache.

LSCache stores cached pages as compressed (gzip or brotli) HTML files on disk, so a page like that could be around 512 bytes.

The storage may be hardware RAID0 with an on-board cache, built from 4x NVMe PCIe 4.0 x4 disks.
The network configuration may be a 10G adapter connected to a high-performance switch.
 