THE GOAL
I am in the process of creating a Docker image with a very specific configuration:
- Debian 9.3
- Latest version of LiteSpeed
- Lucee 5.2.6.060
I am seeking assistance as I work through the process, since I know I am missing some basic information.
So far, I have heavily tweaked the one-click install script for LiteSpeed so that it is nearly bare-bones, targets Debian 9 specifically, and serves only one purpose: to download and install LiteSpeed. I am using this stripped-down one-click script as part of the Dockerfile that builds the image, but for the purposes of this discussion, I feel pretty confident that we can ignore that Docker is even in the mix. For now, I am working inside a container already, trying to configure LiteSpeed and Lucee/Tomcat to play well together and to figure out how the caching would work, based on my existing experience with the WordPress plugin. If I am successful, I may even be able to develop a similar plugin for Mura.
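Just for context (and feel free to ignore this for the rest of the discussion), the Docker side boils down to roughly the sketch below. The script name and copied paths are placeholders for my own stripped-down installer and layout, not anything official:
Code:
# Rough sketch only -- lsws-install.sh is a placeholder name for my
# stripped-down one-click installer, not the official LiteSpeed script.
FROM debian:9.3

# Minimal tooling the installer needs to download LiteSpeed
RUN apt-get update && apt-get install -y wget curl && rm -rf /var/lib/apt/lists/*

# Download and install LiteSpeed (the script's only job)
COPY lsws-install.sh /tmp/lsws-install.sh
RUN bash /tmp/lsws-install.sh

# Drop in the vhost config and document root described below
COPY conf/vhconf.conf /srv/APP/httpd/conf/vhconf.conf
COPY www/ /srv/APP/httpd/www/

EXPOSE 80 443

# Start LSWS and keep the container in the foreground by tailing its error log
CMD ["/bin/bash", "-c", "/usr/local/lsws/bin/lswsctrl start && tail -f /usr/local/lsws/logs/error.log"]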
I am working off the premise that almost everything configuration-wise will be done in the *virtual host* configuration file. This is to keep things as simple as possible, since the whole point of this being a Docker image is that it provides only one service - HTML/ColdFusion page generation and serving, for one single app. If I can distribute the Docker image with the server configuration as solid as possible, making it easy for the user of the image to customize the install by editing a single vhost file and .htaccess, that seems like an ideal distribution.
CURRENT PROGRESS
So far, I have installed both LS and Lucee, and by adding a virtual host in Lucee's server.xml file (sketched further below, after the vhost config) and pairing it with the LS vhost, I am able to serve .cfm pages without issue from a virtual host entirely separate from the stock LS Example host. This seems like a pretty good milestone already, though I don't know if I've made the connection in the most efficient manner. Here's what I have so far, within the Docker container:
- Created the following directories:
- /srv/APP/httpd - this becomes the vhost root. Permissions for this directory and all subdirectories are set to nobody:nogroup, except for the conf directory mentioned below;
- /srv/APP/httpd/www - this becomes the vhost DOCUMENT root;
- /srv/APP/httpd/conf - this is where the vhost config file resides (the LS admin will NOT let one put a vhost config file outside $SERVER_ROOT, so I'm apparently breaking a rule here). I pointed the LiteSpeed httpd_config.conf file at this location and set permissions on this directory to lsadm:lsadm to get this to work;
- /srv/APP/httpd/logs - this is where the vhost LOGS will be written.
- Created listeners for my APP, both standard (:80) and SSL (:443), in the httpd_config.conf file. For now, the SSL listener uses the same LiteSpeed certificates in use by the LiteSpeed admin;
- Set up my vhost file (again, /srv/APP/httpd/conf/vhconf.conf) as follows:
Code:
docRoot                   $VH_ROOT/www

errorlog $VH_ROOT/logs/error.log {
  useServer               0
  logLevel                NOTICE
  rollingSize             64M
}

accesslog $VH_ROOT/logs/access.log {
  useServer               0
  logFormat               <<<END_logFormat
%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"
  END_logFormat
  logHeaders              7
  rollingSize             64M
  keepDays                60
  bytesLog                $VH_ROOT/logs/bytes.log
  compressArchive         1
}

index {
  useServer               0
  indexFiles              index.cfm
}

scripthandler {
  add                     servlet:Lucee cfm
}

extprocessor Lucee {
  type                    servlet
  address                 localhost:8009
  maxConns                1000
  pcKeepAliveTimeout      -1
  initTimeout             15
  retryTimeout            15
  respBuffer              0
}

context / {
  type                    null
  location                $DOC_ROOT
  allowBrowse             1
  enableExpires           1
  indexFiles              index.cfm

  rewrite {
    enable                1
    inherit               0
    rewriteFile           $DOC_ROOT/.htaccess
  }
  addDefaultCharset       off
  enableIpGeo             1
}
...and this file has enabled .cfm files to be passed from LiteSpeed to Lucee for processing - I can call https://localhost/index.cfm and get an updated date/time output with each page refresh.
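For completeness, here is roughly what the Lucee/Tomcat side of that connection looks like in server.xml - an AJP connector for the extprocessor above to talk to, plus a Host pointing at the same document root. The host name here is a placeholder, and I am not at all sure this is the cleanest way to wire it up:
Code:
<!-- AJP connector that the LiteSpeed extprocessor (localhost:8009) talks to -->
<Connector protocol="AJP/1.3" port="8009" address="127.0.0.1" />

<!-- Virtual host resolving to the same document root as the LS vhost;
     "APP.example.com" is a placeholder name -->
<Host name="APP.example.com" appBase="webapps">
  <Context path="" docBase="/srv/APP/httpd/www" />
</Host>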
- Attempted to set up the .htaccess file in the $DOC_ROOT so that caching could be enabled. Initially I had a cache module configured at the server level, but again, I'd rather keep the configuration as close to the vhost or .htaccess file as possible.
When I had the cache module configured in LSAdmin at the server level, I turned on both public and private cache (set each to 1) and EVERYTHING was cached, including my dynamic ColdFusion page... so I know the configuration between CF and LS *does* work and is capable of caching. The problem is that I cannot remove the cache settings from the server level, enable them in .htaccess instead, and have it work.
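For reference, the kind of rules I have been experimenting with in $DOC_ROOT/.htaccess look roughly like the sketch below - adapted from what I have seen the WordPress plugin write, so the directive names, TTL, and example path are my guesses and may well be exactly what the current LSWS version is rejecting:
Code:
<IfModule LiteSpeed>
  RewriteEngine On

  # Ask the cache engine to check/store responses for this vhost
  CacheLookup on

  # Starting point: cache everything publicly for 2 minutes,
  # except an obviously dynamic area ("/admin/" is just an example path)
  RewriteCond %{REQUEST_URI} !^/admin/
  RewriteRule .* - [E=Cache-Control:max-age=120]

  # Explicitly keep that dynamic area out of the cache
  RewriteRule ^admin/ - [E=Cache-Control:no-cache]
</IfModule>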
1. How do I enable caching at the vhost level for my Lucee/ColdFusion site, using .htaccess? The instructions in the Wiki all appear to be for an older version of LSWS, and I'm not sure which ones apply in this situation anyway.
2. How do I configure the cache to work similarly to the base WordPress/LSCache install, but for a custom app or a CMS like Mura? It is a given that static resources should be cached, but the dynamic pages are the fly in the ointment, of course. Once I get caching working, I obviously have to balance caching against the nature of dynamic pages - I can't have a page that should show the current date/time on each request simply kick out the date/time of the page as it exists in the cache =D ... so I need a basis for caching that not only *I* can live with, but also a "safe" configuration with some options, hopefully set via .htaccess, for anyone else wishing to jump-start their project using my Docker image of this configuration.
My (awesome) experience with LSWS/LSCache so far has really been in the context of the WordPress plugin, which works absolutely great... LS seems to know when these dynamic PHP pages have actually changed and serves them up fresh, but otherwise serves them from the cache for a set period of time, optimizing the entire WordPress install. It's been excellent for all my WordPress clients, but to be honest, I don't quite know how it does it. The WP plugin seems to modify .htaccess, but I'm not sure if that's all it's doing. I tried using the .htaccess rules from a WordPress site, but they didn't work, and I was (and *AM*) getting errors in the server-level error log about invalid .htaccess directives, even though those same directives seem to work for WordPress sites without issue.
So I'd like to have some idea of how the WordPress-based install of LSWS/LSCache "does its thing" so well, so I can attempt to do the same with a CF site, or a CF-based CMS like Mura. This is as much about the philosophical approach to caching in LSWS as it is about the more straightforward "do this in your config" suggestions I may need.
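My working guess - and I'd love confirmation or correction here - is that the plugin isn't doing everything through .htaccess alone, but is also talking to the cache engine from PHP via the X-LiteSpeed-* response headers (cache-control, tags, purge). If that's right, the Mura/CFML equivalent would presumably look something like the sketch below; the header values and tag names are illustrative guesses, not anything I have tested against Lucee yet:
Code:
<!--- Mark this response as publicly cacheable for 5 minutes and tag it,
      so it can later be purged by tag when the content changes
      (values and tag names are illustrative guesses) --->
<cfheader name="X-LiteSpeed-Cache-Control" value="public, max-age=300">
<cfheader name="X-LiteSpeed-Tag" value="page_home">

<!--- When the underlying content changes, the app would instead send a
      purge, e.g.:
      <cfheader name="X-LiteSpeed-Purge" value="tag=page_home">
--->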
CONCLUSION
I am happy to provide whatever additional info is required, of course, and to credit anyone who assists when it comes time to distribute the Docker image. My preference would be to work closely with LiteSpeed folks to deliver this very specialized Docker image first, and then generalize it to the other OSes the one-click install script supports, beyond Debian 9.3. I wanted to eliminate as many variables as possible first, but hope to expand the configuration over time.
So there it is... sorry for the length just to ask what are probably very simple questions... but if you have recommendations on how to get these two technologies working together as efficiently and powerfully as possible, I'm ready to dig in.
~ oranuf