PHP Hello World Benchmark 2014

LiteSpeed vs. NGINX

This benchmark compares the speed at which different web servers respond to requests for small PHP scripts using both non-keep-alive and keep-alive connections.

Summary:


With no keep-alive connections, LiteSpeed Enterprise (with either suEXEC Daemon mode or ProcessGroup) was

  • 3361% faster than Apache 2.2 using suPHP.
  • 3911% faster than Apache 2.4 using suPHP.
  • 55% faster than Apache 2.2 with mod_PHP.
  • 220% faster than Apache 2.2 with PHP-FPM.
  • 174% faster than Apache 2.4 with PHP-FPM.
  • 26% faster than nginx with PHP-FPM.
  • 23% faster than LiteSpeed Enterprise with PHP-FPM.
  • slightly faster than OpenLiteSpeed.

With keep-alive connections, LiteSpeed Enterprise (with either suEXEC Daemon mode or ProcessGroup) was

  • 3885% faster than Apache 2.2 using suPHP.
  • 4506% faster than Apache 2.4 using suPHP.
  • 50% faster than Apache 2.2 with mod_PHP.
  • 273% faster than Apache 2.2 with PHP-FPM.
  • 203% faster than Apache 2.4 with PHP-FPM.
  • 91% faster than nginx with PHP-FPM.
  • 16% faster than LiteSpeed Enterprise with PHP-FPM.
  • slightly faster than OpenLiteSpeed.

In both tests, LiteSpeed Enterprise returned essentially the same results with suEXEC daemon mode and ProcessGroup.

It should also be noted that, even when using the same PHP-FPM backend, LiteSpeed Enterprise outperformed all participants not using LSAPI. This is due to its optimized code.


Notes:
  • We used a simple PHP hello world script (13 bytes). We used such a tiny script to avoid saturating the network connection and to show the raw speed differences between the different setups. (A sketch of this kind of script follows these notes.)
  • Part of the difference in speeds is due to different server APIs. OpenLiteSpeed and LiteSpeed Enterprise with ProcessGroup and suEXEC daemon mode all used LSAPI. PHP-FPM uses FCGI. suPHP uses CGI. mod_PHP is embedded in the web server.
  • Opcode caching was not used for any of these setups. With opcode caching, differences would have been more marked. Fast setups would have shown an even larger advantage over slower setups.
  • LiteSpeed's suEXEC daemon and ProcessGroup modes returned the same results in these tests. In the real world, though, they have different advantages and uses. Each has situations where it would be preferable over the other. For more information on this, see our PHP LSAPI documentation.
  • The benchmark simulated 100 concurrent users making a total of 50,000 requests (see the example ab invocation after these notes).
  • Access logging was disabled for all web servers to minimize disk I/O.
  • The test was performed over a 10 Gbps network connection to make sure network bandwidth did not become a bottleneck.
  • As the server CPU was faster than the client machine CPU, the test client "ab" could have become a bottleneck before the server reached its peak performance. We thus created an OpenVZ container on the server and assigned it 50% of one CPU, allowing the server to reach 100% CPU utilization during all tests.
  • For the keep-alive test, all web servers were configured to allow a maximum of 100 keep-alive requests per connection (see the configuration sketch after the download link below).
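The exact 13-byte script is not reproduced in this summary; a minimal sketch of this kind of hello world script (the file name hello.php is an assumption) is simply:

    <?php echo "Hello World!";

The load itself was generated with ApacheBench ("ab", the client mentioned in the notes and used again in the cPanel tests below). Assuming the script was reachable at the root of the test container (the URL is an assumption, not taken from the raw logs), commands along these lines would reproduce the two scenarios:

    # 50,000 requests from 100 concurrent users, no keep-alive
    ab -n 50000 -c 100 http://192.168.0.22/hello.php

    # the same load with HTTP keep-alive enabled on the client side
    ab -n 50000 -c 100 -k http://192.168.0.22/hello.php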
Download the raw test results helloworld.php.log and all configurations.
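As an illustration of the keep-alive limit mentioned in the notes (this is not an excerpt from the downloadable configuration files), the equivalent directives in Apache and nginx are:

    # Apache httpd.conf
    KeepAlive On
    MaxKeepAliveRequests 100

    # nginx (http or server context)
    keepalive_requests 100;

LiteSpeed exposes a corresponding keep-alive request limit among the tuning settings in its WebAdmin console.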

Test Environment

Server hardware specs:

Supermicro X9SRH-7TF
Intel Xeon E5-1620 Quad Core @ 3.60GHz
8GB RAM
CentOS 6.4 with OpenVZ kernel 2.6.32-042stab081.8
Intel X540 10GBASE-T on board NIC
Host IP: 192.168.0.22
Hard Drive: Samsung HD103SJ 1TB 7200rpm

Client hardware specs:

Supermicro SYS-6016T-6RFT+
Dual Intel Xeon E5620 Quad Core @ 2.40GHz
32GB RAM
CentOS 6.4 with OpenVZ kernel 2.6.32-042stab081.8
On board Intel 82599EB 10 Gigabit Ethernet Controller
Host IP: 192.168.0.20
Hard Drive: On board LSI 2108 RAID controller
Samsung HD103SJ 1TB 7200rpm X4 in RAID5

Network Switch:

Netgear XS708E-100NES 8-port 10G switch

We welcome your feedback on our forum.

cPanel WordPress Benchmarks

LiteSpeed vs. Apache (multiple PHP setups)

Courtesy of Tristan Wallace and cPanel (Presented at cPanel Conference 2014)

WordPress + cPanel setups tested at different levels of concurrency — 20 and 50 concurrent connections. (The tests use different hardware. Apache could not complete the higher concurrency test on the small VPS.)

RAM Usage

Note the huge increase in RAM usage when you increase concurrency with Apache.

Server Load

We also see a very large jump in server load when concurrency is increased with Apache.

Max Response Time

In the speed tests, the importance of the more powerful hardware is evident — even with higher concurrency, the dedicated server is faster than the VPS. There is still a rather large difference between LSWS and Apache with suPHP or FCGI.

Test Environment

VPS specs (20 concurrent connections test):

One core
1GB RAM
Xen-based
CentOS 6.5, kernel 2.6.32
SSD cache in front of 10K RPM spinning disks
cPanel & WHM 11.44.1.18
PHP 5.4.32
Apache 2.4.10, MPM Prefork
LSWS 4.2.14, VPS license

Dedicated server specs (50 concurrent connections test):

Quad core
8GB RAM
CentOS 6.5, kernel 2.6.32
SSD cache in front of 10K RPM spinning disks
cPanel & WHM 11.44.1.18
PHP 5.4.32
Apache 2.4.10, MPM Prefork
LSWS 4.2.14, 2-CPU license

Summary:


The most interesting feature of these benchmarks is that they show how the two HTTP servers react to increases in concurrency:

  • For Apache, as the number of users increases, RAM usage and server load increase exponentially.
  • LiteSpeed Web Server's RAM usage and server load stay low even when concurrency is increased.

This difference demonstrates the advantage in scalability that event-driven architecture provides over Apache's process-based architecture. This is what leads to the load drop users see when switching to LSWS, and it is this difference that allows LSWS to serve much more traffic than Apache on the same hardware.

Unfortunately, having the two tests on different hardware means we can't see speed differences connected to concurrency. This would have been interesting because much of the speed gain LiteSpeed users see is due to efficient traffic handling — Apache uses too much RAM and CPU and gets overloaded, while LSWS continues to run at full speed. This was partially demonstrated when Apache couldn't handle the higher concurrency test on the VPS but LSWS could.


Notes:
  • Default configurations were used where possible.
  • As mentioned above, the different concurrency tests were conducted on different machines. A 50 concurrent connections test was originally planned for the VPS as well. LSWS was able to complete tests on the VPS at the higher concurrency, but tests were discontinued when Apache could not complete the higher concurrency test on the small VPS.
  • Tests were done using the ApacheBench commands ab -n 25000 -c 20 and ab -n 100000 -c 50.
  • Servers were allowed to rest for 30 minutes between tests to avoid contaminating results.
  • This is a selection of the results presented at cPanel Conference 2014. Some results were omitted in order to make the graphs easier to read. The full presentation can be found below. The next three bullet points address why certain results were removed.
  • In the presentation, two results for each concurrency are presented. We have only used the results for the test with the larger number of requests.
  • Results for LSWS's ProcessGroup setup were removed because the goal of ProcessGroup is to make more efficient use of opcode caching. There is no point in using ProcessGroup without opcode caching.
  • Results for Apache's CGI setup were removed because this was deemed to be the least used Apache setup of the four.
Download the full presentation cPConference14-AvL.pdf.

We welcome your feedback on our forum.

LiteSpeed Cuts RAM Usage by 95% on Shared Servers

Synergy 8 uses PHP suEXEC Daemon mode to deliver efficient opcode caching


This scalability, reliability, and performance issue was solved with LiteSpeed Web Server.

Scott M., Director

The Challenge

  • Average server has 160 separate websites, averaging 10 req/sec and peaking at 100 req/sec.
  • Traffic is PHP heavy.
  • suEXEC needed for security, but can cause very high RAM usage.
  • Exponential growth already under way.

The Solution

  • LiteSpeed Web Server 2-CPU license.
  • PHP suEXEC Daemon mode with shared opcode caching.
  • RAM usage immediately decreases from 40GB to 2GB.
  • Adding new users does not noticeably increase RAM usage.

Synergy 8 is a cloud-based platform for managing your online presence. Features include content management (CMS), e-mail marketing, customer database (CRM), and e-commerce. Our application servers serve a lot of dynamic content (PHP), moderate amounts of static content, and very little video. We believe that performance is paramount to usability, so we are prepared to invest in the best software and hardware solutions available. All of our clients share the same PHP code base on a "per branch" basis, so shared opcode caching is very beneficial for us.

Server Environment
  • Dual Intel Xeon X5690 (2x 6 cores) @ 3.46 GHz
  • 96 GB RAM

Our initial configuration was Apache+DSO+PHP. At the end of April 2013, though, we switched to Apache+FCGI+suEXEC for security. We upgraded to PHP 5.5 and moved from APC to OPcache for our opcode caching. We used generous opcode cache sizes, as we like to take advantage of our servers' high memory capacity. We were not aware, though, of the insane implementation of FCGI PHP suEXEC: there is no shared opcode cache. This caused our committed memory to spike. FCGI also spooled up processes, leading to dangerously high RAM usage (see graph at right).

Each new website added about 300MB of RAM, with almost a 1GB commit. As we grew with this unscalable approach, we experienced a couple of crashes, primarily in the form of Linux watchdog timeouts — a classic “out of memory” event. This would happen during our most critical/peak times, as Apache would spool up additional workers and PHP processes. We did not want to leave Apache, but enough was enough.

In August 2014, we switched to LiteSpeed+suEXEC Daemon mode. With suEXEC Daemon mode's shared opcode caching, we immediately reduced RAM usage from 40-50GB consumption + 100GB commit to 2-3GB consumption + commit. Our services run faster, mainly due to this freed up RAM now being utilised for disk caching.
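The exact cache sizes Synergy 8 used are not stated here. As a rough illustration only, a "generous" OPcache configuration in php.ini, which under suEXEC Daemon mode is shared by all of the daemon's forked PHP children, might look like:

    ; php.ini - OPcache sized generously for a large shared code base
    opcache.enable=1
    opcache.memory_consumption=1024      ; MB of shared memory for compiled scripts
    opcache.interned_strings_buffer=64   ; MB reserved for interned strings
    opcache.max_accelerated_files=100000 ; upper bound on the number of cached scripts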

Our bottleneck has been RAM usage for quite some time now due to suEXEC. This scalability, reliability, and performance issue was solved with LiteSpeed Web Server. The other factors (CPU, etc) are not a problem at present for us. We did not see any real difference in CPU usage between Apache and LiteSpeed, though this is probably due to our workload mainly being dynamic PHP, rather than static files. Thanks again for your product. It’s taken a great deal of pain away.


LSWS's combination of PHP suEXEC and opcode caching slashed RAM usage for Synergy 8, while keeping top-of-the-line performance. Do you have runaway RAM usage? Interested in opcode caching with suEXEC security? Read up on LiteSpeed's unique suEXEC Daemon mode.

Serve More Bandwidth With LiteSpeed

LiteSpeed Web Server is Green Olive Tree's choice for high-bandwidth web applications


The infrastructure has pushed out as much as 450 Mbit peak traffic without a hiccup.

Jon B., President

The Challenge

  • A high-traffic WordPress site and a very heavy affiliate system application.
  • Apache setup is swamped and only a portion of traffic is getting through.
  • Exponential growth on the way.

The Solution

  • LiteSpeed Web Server 2-CPU license.
  • Traffic immediately increases by 50%.
  • Now around 750,000 unique visitors daily.
  • Each server can handle 90 Mbit peak traffic without issue.

Jon Berry, the owner of Green Olive Tree, prides himself on solving complicated tech problems. For very high bandwidth applications or sites where page speed is critical, he uses LiteSpeed Web Server. He's seen how LSWS's efficient connection processing allows servers to serve more.

Server Environment
  • 5 application servers with 2xE5-2620 CPUs (12 cores) each
  • 32GB RAM each

Fans2Cash is now a popular Facebook affiliate network. But when they were just starting out, Fans2Cash's owners came to Green Olive Tree with a problem. They were running the affiliate system and their content site, mobilelikez.com, "on a single underpowered cPanel server and were only getting around 10 Mbit peak traffic per day. The site operators were convinced that a lot of their traffic just wasn’t getting through." Jon switched them to LiteSpeed Web Server, guessing their WordPress site needed the superior PHP handling LSWS offers (as opposed to something like nginx). Immediately, they saw a 50% jump in traffic.

This was just the start, though. "Their growth speed was staggering," Jon says. Now they are on five application servers all running LSWS. Each of these servers routinely powers through 30 Mbit peak traffic (for a total of 150 Mbit) without any slowdowns. (See graph at right.) These traffic levels would be unimaginable with Apache on similar hardware. Jon says he's even seen them handle 90 Mbit peak traffic each "without a hiccup".

The Fans2Cash operators are thrilled with Green Olive Tree's management. Green Olive Tree has customized many different aspects, including recently optimizing the affiliate system, but LSWS has been integral to making the system work.

LSWS makes possible web application traffic levels that Apache and nginx can't touch. Its efficient PHP handling has given Fans2Cash the speed and reliability they need to grow at breakneck speed.
