HTTP/2: The Web's New Protocol

For over 15 years, HTTP has handled communication between web browsers and websites using the client-server model. More recently, growth in traffic and in the number of connections a page requires has made loading a web page both more complex and more resource intensive. Under HTTP/1.1, every request carries significant overhead, and performance drops when many requests are made at once, which translates directly into a slower experience for the end user.

With this latest version, HTTP/2 promises to enhance the end user's experience by focusing on speed and performance. Pages will load faster, and the need for page-optimization workarounds such as resource merging and image spriting will be eliminated.

As a result, HTTP/2 will bring a number of substantial benefits such as:

  • More efficient use of network resources.
  • Reduced latency.
  • Faster page loads.
  • Longer connections.

    LiteSpeed Technologies and HTTP/2

    LiteSpeed Web Server is the first HTTP/2-ready web server for production use. We are very pleased to announce that the latest versions of LiteSpeed Web Server Enterprise and OpenLiteSpeed include support for all of HTTP/2's major features.

    LiteSpeed Technologies has always distinguished itself by improving performance and speed without sacrificing resources. By offering full HTTP/2 support, we aim to further enhance the end user's experience by helping web pages load faster and saving users time.

    LiteMage Cache

     

    The Origin Story

    Magento is slow. Very slow.

    Magento is the most popular e-commerce platform on the Internet. But its modularized architecture and flexible configuration come at a cost: more than 4 million lines of PHP code and 2 million lines of XML configuration. This overhead makes Magento resource hungry and leads to performance issues; even a medium-sized store can require a very powerful server or a cluster of servers.

    Hole Punching

    Magento is not page cache friendly.

    Page caching is the most powerful way to bypass Magento's heavy architecture and speed up slow pages. But even though 95% of the content on a Magento page may be the same for all visitors (and is thus safe to cache), items such as the shopping cart or the list of last viewed products change from visitor to visitor and cannot be cached and shown to everyone. Because of these small per-user blocks, traditional page caches cannot cache most Magento pages and cannot speed Magento up significantly.

    Hole punching to the rescue.

    LiteMage Cache uses Edge Side Includes (ESI) to punch holes in pages where information changes from visitor to visitor. The remaining content is saved to cache. When the next person visits the same page, the cached content is served quickly, with only the holes needing to be filled in with data for that visitor. LiteMage Cache also caches per-user data in private caches, so entire pages, even those with multiple holes, can be assembled completely from cache.
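
    As a rough sketch of the technique (this is not LiteMage's exact generated markup, and the block URLs shown are hypothetical), a cached page with punched holes might look like this, using standard ESI include tags where the per-visitor blocks belong:

        <!-- Public page content: cached once and shared by all visitors -->
        <div id="header">
          <!-- Hole: per-visitor greeting, filled from that visitor's private cache -->
          <esi:include src="/esi/block?name=welcome"/>
        </div>
        <div id="sidebar">
          <!-- Hole: shopping cart contents, different for every visitor -->
          <esi:include src="/esi/block?name=cart_sidebar"/>
          <!-- Hole: list of last viewed products -->
          <esi:include src="/esi/block?name=recently_viewed"/>
        </div>
        <!-- More public content: product listing, footer, etc., served from the public cache -->

    On each request the server takes the shared page from the public cache, fills every <esi:include> hole from the visitor's private cache (falling back to Magento only for blocks that are not yet cached), and returns the fully assembled page.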

     

    What Is LiteMage Cache?


    LiteMage Cache features:

    • Edge Side Includes (ESI) engine for hole punching.
    • Punched holes are configurable and mapped to blocks defined in the Magento page layout (see the layout sketch after this list).
    • The main page and public blocks are cached once and served to all users. Private blocks are cached per user and served only to that user.
    • Retrieves multiple blocks in one request, minimizing the overhead of building pages with multiple blocks.
    • Supports Last Viewed Product, Product Comparison, Product Toolbar Options, Stock Tracking, and other features requiring communication with the Magento backend. (This support can also be turned off for even faster speeds.)
    • Supports layered navigation, category filtering, view as, sort by, and show per page functionality.
    • Supports multi-store, multi-currency, and multiple user groups.
    • Built-in crawler to warm up the cache.
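
    For context, the block names that punched holes map to come from Magento's layout XML. As a rough sketch based on Magento 1's default layout (the exact file and layout handle vary by theme and version), the last viewed products sidebar block is declared roughly as follows, and it is this block name that a punched hole would reference:

        <!-- Layout XML (Magento 1 style): the named block a hole can be mapped to -->
        <reference name="right">
            <block type="reports/product_viewed"
                   name="right.reports.product.viewed"
                   template="reports/product_viewed.phtml"/>
        </reference>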

     

    LiteMage Cache Benchmarks

    (Benchmark chart)

     

    How Does LiteMage Cache Compare to Other Page Caching Solutions?

    Varnish-based solutions (Turpentine, etc.)

    • Complicated to set up.
    • Requests dynamic blocks individually, multiplying the high cost of Magento framework initialization. Can make a cached page even slower than an uncached page.
    • No SSL support. Frontend proxy server required to support HTTPS, adding even more layers.
    • Uses AJAX, which lowers overall server performance.

    PHP-based solutions (Full Page Cache)

    • Content is cached by PHP, so every page is still generated dynamically by PHP. Because PHP is heavyweight, overall scalability is limited by how well PHP scales.
    • Reliance on PHP causes generally slower page load times.
    • Limits in PHP scalability make PHP solutions vulnerable to DDoS attacks.

    LiteMage Cache

    • All content, even per-user private blocks, can be assembled from cache for the best performance.
    • Multiple blocks are fetched in a single request, lowering overhead.
    • Native SSL support.
    • Supremely scalable: handles tens of thousands of connections without missing a beat.
    • Built-in anti-DDoS features protect you from attacks.

     

    Try LiteMage Free

    LiteMage Cache Installation and Configuration Manual

     

     

    PHP Hello World Benchmark 2014

    LiteSpeed vs. NGINX

    This benchmark compares the speed at which different web servers respond to requests for small PHP scripts using both non-keep-alive and keep-alive connections.

    Summary:


    With no keep-alive connections, LiteSpeed Enterprise (with either suEXEC Daemon mode or ProcessGroup) was

    • 3361% faster than Apache 2.2 using suPHP.
    • 3911% faster than Apache 2.4 using suPHP.
    • 55% faster than Apache 2.2 with mod_PHP.
    • 220% faster than Apache 2.2 with PHP-FPM.
    • 174% faster than Apache 2.4 with PHP-FPM.
    • 26% faster than nginx with PHP-FPM.
    • 23% faster than LiteSpeed Enterprise with PHP-FPM.
    • slightly faster than OpenLiteSpeed.

    With keep-alive connections, LiteSpeed Enterprise (with either suEXEC Daemon mode or ProcessGroup) was

    • 3885% faster than Apache 2.2 using suPHP.
    • 4506% faster than Apache 2.4 using suPHP.
    • 50% faster than Apache 2.2 with mod_PHP.
    • 273% faster than Apache 2.2 with PHP-FPM.
    • 203% faster than Apache 2.4 with PHP-FPM.
    • 91% faster than nginx with PHP-FPM.
    • 16% faster than LiteSpeed Enterprise with PHP-FPM.
    • slightly faster than OpenLiteSpeed.

    In both tests, LiteSpeed Enterprise returned essentially the same results with suEXEC daemon mode and ProcessGroup.

    It should also be noted that, even when using the same PHP-FPM backend, LiteSpeed Enterprise outperformed every other server not using LSAPI. This is due to its optimized code.


    Notes:
    • We used a simple PHP hello world script (13 bytes). We used such a tiny script to avoid saturating the network connection and to show raw speed differences between the different setups.
    • Part of the difference in speeds is due to different server APIs. OpenLiteSpeed and LiteSpeed Enterprise with ProcessGroup and suEXEC daemon mode all used LSAPI. PHP-FPM uses FCGI. suPHP uses CGI. mod_PHP is embedded in the web server.
    • Opcode caching was not used for any of these setups. With opcode caching, differences would have been more marked. Fast setups would have shown an even larger advantage over slower setups.
    • LiteSpeed's suEXEC daemon and ProcessGroup modes returned the same results in these tests. In the real world, though, they have different advantages and uses. Each has situations where it would be preferable over the other. For more information on this, see our PHP LSAPI documentation.
    • The benchmark simulated serving 50,000 requests to 100 concurrent users (a representative ab command is shown after these notes).
    • Access logging was disabled for all web servers to minimize disk I/O.
    • The test was performed over a 10 Gbps network connection to make sure network bandwidth did not become a bottleneck.
    • As the server CPU was faster than the client machine CPU, the test client "ab" could have become a bottleneck before the server reached its peak performance. We thus created an OpenVZ container on the server and assigned it 50% of one CPU, allowing the server to reach 100% CPU utilization during all tests.
    • For the keep-alive test, all web servers were configured to allow a maximum of 100 requests per keep-alive connection.
    Download the raw test results helloworld.php.log and all configurations.
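
    The exact command lines were not published here, but given the parameters above a representative ApacheBench invocation for the keep-alive run would be ab -k -n 50000 -c 100 http://192.168.0.22/helloworld.php, with the -k option dropped for the non-keep-alive run. The host IP comes from the test environment below, and the script name is inferred from the helloworld.php.log file name, so treat this command as an approximation rather than the exact invocation used.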

    Test Environment

    Server hardware specs:

    Supermicro X9SRH-7TF
    Intel Xeon E5-1620 Quad Core @ 3.60GHz
    8GB RAM
    CentOS 6.4 with OpenVZ kernel 2.6.32-042stab081.8
    Intel X540 10GBASE-T on board NIC
    Host IP: 192.168.0.22
    Hard Drive: Samsung HD103SJ 1TB 7200rpm

    Client hardware specs:

    Supermicro SYS-6016T-6RFT+
    Dual Intel Xeon E5620 Quad Core @ 2.40GHz
    32GB RAM
    CentOS 6.4 with OpenVZ kernel 2.6.32-042stab081.8
    On board Intel 82599EB 10 Gigabit Ethernet Controller
    Host IP: 192.168.0.20
    Hard Drive: On board LSI 2108 RAID controller
    Samsung HD103SJ 1TB 7200rpm X4 in RAID5

    Network Switch:

    Netgear XS708E-100NES 8-ports 10G switch

    We welcome your feedback on our forum.


    cPanel WordPress Benchmarks

    LiteSpeed vs. Apache (multiple PHP setups)

    Courtesy of Tristan Wallace and cPanel (Presented at cPanel Conference 2014)

    WordPress + cPanel setups tested at different levels of concurrency — 20 and 50 concurrent connections. (The tests use different hardware. Apache could not complete the higher concurrency test on the small VPS.)

    RAM Usage

    Note the huge increase in RAM usage when you increase concurrency with Apache.

    Server Load

    We also see a very large jump in server load when concurrency is increased with Apache.

    Max Response Time

    In the speed tests, the importance of the more powerful hardware is evident — even with higher concurrency, the dedicated server is faster than the VPS. There is still a rather large difference between LSWS and Apache with suPHP or FCGI.

    Test Environment

    VPS specs (20 concurrent connections test):

    One core
    1GB RAM
    Xen-based
    CentOS 6.5, kernel 2.6.32
    SSD cache in front of spinning disks, 10k Drive speed
    cPanel & WHM 11.44.1.18
    PHP 5.4.32
    Apache 2.4.10, MPM Prefork
    LSWS 4.2.14, VPS license

    Dedicated server specs (50 concurrent connections test):

    Quad core
    8GB RAM
    CentOS 6.5, kernel 2.6.32
    SSD cache in front of spinning disks, 10k Drive speed
    cPanel & WHM 11.44.1.18
    PHP 5.4.32
    Apache 2.4.10, MPM Prefork
    LSWS 4.2.14, 2-CPU license

    Summary:


    The most interesting feature of these benchmarks is that they show how the two HTTP servers react to increases in concurrency:

    • For Apache, RAM usage and server load rise sharply as the number of concurrent users increases.
    • LiteSpeed Web Server's RAM usage and server load stay low even when concurrency is increased.

    This difference demonstrates the advantage in scalability that event-driven architecture provides over Apache's process-based architecture. This is what leads to the load drop users see when switching to LSWS, and it is this difference that allows LSWS to serve much more traffic than Apache on the same hardware.

    Unfortunately, having the two tests on different hardware means we can't see speed differences connected to concurrency. This would have been interesting because much of the speed gain LiteSpeed users see is due to efficient traffic handling — Apache uses too much RAM and CPU and gets overloaded, while LSWS continues to run at full speed. This was partially demonstrated when Apache couldn't handle the higher concurrency test on the VPS but LSWS could.


    Notes:
    • Default configurations were used where possible.
    • As mentioned above, the different concurrency tests were conducted on different machines. A 50 concurrent connections test was originally planned for the VPS as well. LSWS was able to complete tests on the VPS at the higher concurrency, but testing was discontinued when Apache could not complete the higher concurrency test on the small VPS.
    • Tests were done using the ApacheBench commands ab -n 25000 -c 20 and ab -n 100000 -c 50.
    • Servers were allowed to rest for 30 minutes between tests to avoid contaminating results.
    • This is a selection of the results presented at cPanel Conference 2014. Some results were omitted in order to make the graphs easier to read. The full presentation can be found below. The next three bullet points address why certain results were removed.
    • In the presentation, two results for each concurrency are presented. We have only used the results for the test with the larger number of requests.
    • Results for LSWS's ProcessGroup setup were removed because the goal of ProcessGroup is to make more efficient use of opcode caching. There is no point in using ProcessGroup without opcode caching.
    • Results for Apache's CGI setup were removed because this was deemed to be the least used Apache setup of the four.
    Download the full presentation cPConference14-AvL.pdf.

    We welcome your feedback on our forum.
