In my previous post on storing web assets on Amazon S3, I promised to share the Ant script I developed over a week of work and testing to jumpstart your own efforts. Let’s look at what I wanted to accomplish.
The Ideal Deployment
The ideal deployment would accomplish a number of things I listed in that post, such as:
- Combine multiple Javascript or CSS files to reduce download count
- Automatically set far-future Expires headers, MIME type, encoding and so forth
- Grab remote scripts like Google Analytics or AddThis’ sharing widget and bundle them into our core code (see why and discussion)
- Strip and minify Javascript and CSS to reduce file size
- Finally, compress text files for browsers that support Gzip and upload them automatically
That’s a good wish list because it’s everything I did by hand the first time and it was a pain in the ass. With the process fully automated, one simple execution and a couple of parameters push an entire repository of web assets off to Amazon S3 and keep them up to date.
Implementation
Here’s the background on my site and why this script does what it does. It’s easy to customize, so you should be able to adapt it to your own needs with little effort, but this gives some rationale for why I’ve made certain decisions.
Be sure to check my previous post to fully understand the limitations of S3. I won’t re-address here why we use two buckets, set our own headers or compress content separately.
Javascript
On my site, we include Google Analytics on every page. We also include jQuery and Dan Switzer’s qForms on many pages. The public facing part of our site also includes the AddThis widget that lets people share our content via social media sites like Facebook and Twitter.
Depending on the page you hit, you might have to download all four of those things. Or in certain rare cases, maybe just one of them (GA). We made the decision to create a bundle of core Javascript that would include all four of those items in a single file. When stripped and minified with a far-future expiry date (which means users would only ever download it once), we decided that the file size was small enough to send to every user instead of sending it piecemeal. jQuery accounts for the biggest chunk and, since we’re moving towards more interactivity, we decided to bite the bullet and bundle the four scripts together.
Javascript is compressed by the impressive YUI-Compressor written by Julien Lecomte.
CSS
Historically, we’ve always used separate files for print and screen CSS. However, there is a technique that lets you put both sets of styles into the same stylesheet, which not only eliminates the extra file but also reduces the total amount of CSS needed. It boils down to using this structure in your single CSS file:
@media screen {
/* Rest of the screen stylesheet goes here */
}
@media print {
/* Rest of the print stylesheet goes here */
}
The first part eliminates the second file but it doesn’t reduce the overall amount of CSS. For that, I turned to this explanation of putting generic styles outside of the @media { } blocks, which gives you a set of “base” styles that are then overridden by media-specific styles (in my case, print). This is the “cascading” in Cascading Style Sheets. You have to laugh when you work with something for years and still learn something so fundamental. In the end, I have three CSS files that look like:
pap_screen.css:
/* general CSS */
forums.css:
/* general forums css */
pap_print.css:
@media print {
/* print specific overrides to the general CSS */
}
During deployment, the script concatenates the three files into a single CSS file which handles both screen and print. Then it’s minified by YUI-Compressor and gzipped for a teensy end product.
Why keep the three separate files? I find it easier to go into a smaller file to find a style than deal with one giant file all the time. If I were starting from scratch, I probably wouldn’t have gone this way but since I already had the three files in my source control, I left them as-is and let Ant put them together.
Updating files with far future Expires
Astute readers will be wondering: if you set the cache headers to expire a year from now, how do you make changes to those files? Won’t the browser use its local copy, effectively ignoring the updated file on the server?
Yes.
When I consulted at Yahoo in 2003, helping them with a major search redesign, I was exposed to their internal interface to Akamai’s content distribution network. Their answer to this problem was very simple: rename the file. If the file was foo.css, name it foo2.css and update your HTML to point at it instead. Assuming the actual HTML doesn’t also have a far-future expiry, the next request will load foo2.css instead and pick up the updated styles.
This sounds like a pain in the butt but it’s not that bad. You should already have your templates abstracted in some fashion, whether in an MVC system, a custom tag or some other templating mechanism that separates out the core aspects of your look and feel. That means when your core JS and CSS files are updated, and thus renamed, there should only be one or two places where you need to make changes.
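If you want to squeeze out even that small manual step, you could let the build update the reference for you with an Ant search-and-replace. Here’s a minimal sketch – not part of my script – using the foo.css example above; the layout.html path is a placeholder, so adjust the file and pattern for your own templates:
<!-- Hypothetical sketch: point a template at the renamed stylesheet.
The template path is a placeholder. -->
<replaceregexp file="/path/to/your/templates/layout.html"
match="foo\.css"
replace="foo2.css"
byline="true" />
In practice there should be so few of these references that changing them by hand is no big deal.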
Ask yourself: is a tiny extra bit of work on your part worth a huge speed increase for every one of your users on every request ever made to your website? There is only one correct answer to that question.
Preparing Ant
Let’s get down to business. In addition to Ant 1.7, you’ll also need to obtain the following libraries and Ant tasks:
- SVNAnt – access Subversion repositories from Ant
- YUI-Compressor – library and Ant task for compressing CSS and JS
- Ant-Contrib – some extra Ant tasks for looping
- s3cmd – a Python script for uploading/downloading files to S3 (so you’ll need Python too)
It’s beyond the scope of this post to explain how to get those working; you’ll need some Ant skills, but basically it involves downloading the JARs and putting them into your Java lib/ext or Ant lib folder. On CentOS Linux, those would be /usr/java/latest/jre/lib/ext and /usr/share/ant/lib.
Why not jets3t?
There is a pure Java interface for S3, jets3t, but it didn’t work for my purposes here. That may change over time, and a pure Java library would be preferable to an external dependency like s3cmd.
Properties Files
I use one properties file per environment that I deploy to and my Ant scripts ask me which environment I want to target when they start. For the script below, they would be named production.properties, staging.properties and development.properties:
# for production cdn
project.name=cdn
project.defaultTarget=deploy
svn.project=cdn
root.buildpath=/tmp
root.deploypath=/my/web/folder
jar.path=/usr/share/ant/lib
# aws props (we have two hosts, one is static, one is compressed, so two buckets)
aws.bucket.uncompressed=cdn-sitename
aws.bucket.compressed=cdnz-sitename
aws.accessId=yourAccessId
aws.secretKey=yourSecretKey
# system settings
exec.python=/usr/bin/python
exec.s3cmd=/usr/bin/s3cmd
# concatenated properties and subversion details
project.build.root=${root.buildpath}/${project.name}/build
project.clean.root=${root.buildpath}/${project.name}/clean
project.compile.root=${root.buildpath}/${project.name}/compile
project.compressed.root=${root.buildpath}/${project.name}/clean-compressed
project.deploy.root=${root.deploypath}/${project.name}
# construct an SVN path like http://server/root/project/ for checkout/update
svn.rooturl=http://your.subversion.com/server/root/
svn.projecturl=${svn.rooturl}${svn.project}/trunk/
For running on Windows, I change a few of the key parameters like so:
root.buildpath=c:/temp
root.deploypath=c:/Documents and Settings/brian/My Documents/web
jar.path=c:\\apache-ant-1.7.1\\lib
exec.python=c:\\progra~1\\python\\python.exe
exec.s3cmd=c:\\progra~1\\python\\Scripts\\s3cmd.py
The Script
With all of the preparations done, we can try the actual Ant script. I’m sorry the formatting of this makes you scroll horizontally; I’m going to get a new code plugin soon to eliminate this. You might download the file to follow along instead.
<?xml version="1.0"?>
<project name="myCDN" default="picktarget" basedir=".">
<property name="email.to" value="[email protected]" />
<property name="email.from" value="[email protected]" />
<tstamp>
<format property="svn.builddate" pattern="yyMMddhhmm"/>
</tstamp>
<record name="build.logfile" />
<property environment="env"/>
The first and default target is picktarget – this prompts the user for which environment to deploy to and sets some Ant properties for helper libraries and options before jumping off to the target specified in the properties file:
<target name="picktarget">
<input message="Do you want to deploy?" validargs="production,staging,development" addproperty="target" />
<echo>Deploy target: ${target}</echo>
<available file="${target}.properties" property="target.props.available" />
<fail message="The properties file, ${target}.properties, could not be found!" unless="target.props.available" />
<property file="${target}.properties" />
<taskdef name="svn" classname="org.tigris.subversion.svnant.SvnTask">
<classpath>
<fileset dir="${jar.path}">
<include name="**/svn*.jar"/>
</fileset>
</classpath>
</taskdef>
<path id="yui.classpath">
<pathelement location="${jar.path}/yuicompressor-2.4.2.jar" />
<pathelement location="${jar.path}/yui-compressor-ant-task-0.4.jar" />
</path>
<taskdef name="yui-compressor" classname="net.noha.tools.ant.yuicompressor.tasks.YuiCompressorTask">
<classpath refid="yui.classpath" />
</taskdef>
<taskdef resource="net/sf/antcontrib/antlib.xml">
<classpath>
<pathelement location="${jar.path}/ant-contrib-1.0b3.jar" />
</classpath>
</taskdef>
<antcall target="${project.defaultTarget}" />
</target>
I like to have an init target that cleans up existing directories and prepares the script to run. I used to remove the directory completely and recreate it, but that extends the time SVN exports or checkouts take over slow networks, so now I generally keep the source directory and only rebuild the work directory:
<target name="init" description="Create temp local directories for build">
<mkdir dir="${project.build.root}" />
<delete dir="${project.clean.root}" />
<delete dir="${project.compile.root}" />
<mkdir dir="${project.compile.root}" />
<echo message="Temporary build directories created successfully!" />
<available file="${project.build.root}/images" property="target.exists" />
<antcall target="-checkout" />
<antcall target="-update" />
<svn username="${svn.username}" password="${svn.password}" javahl="false">
<export srcUrl="${svn.projecturl}" destPath="${project.clean.root}" />
</svn>
</target>
The hyphen in front of the next target’s name indicates that it is private – I only ever call it from other targets, never directly. This one is pretty straightforward – it uses the SvnAnt library to check out my source code from Subversion using the properties file we specified.
I like to use the current svn revision number as my release number and I embed it in my application for reference (a sketch of how that might look follows the next two targets). I also print it to the screen while deploying.
<target name="-checkout" description="Pulls code from Subversion into the build directory" unless="target.exists">
<echo message="Checking out files from svn repository:" />
<input message="Please enter svn repo username:" addproperty="svn.username" />
<input message="Please enter svn repo password:" addproperty="svn.password" />
<input message="Enter version to deploy (default: HEAD):" addproperty="svn.revision" defaultvalue="HEAD" />
<svn username="${svn.username}" password="${svn.password}" javahl="false">
<checkout url="${svn.projecturl}" destPath="${project.build.root}" revision="${svn.revision}" recurse="true" />
<status path="${project.build.root}" revisionProperty="revision" />
</svn>
<echo>Release version is ${revision}</echo>
</target>
<target name="-update" description="svn update a working copy instead" if="target.exists">
<echo message="Updating existing working copy:" />
<input message="Enter version to deploy (default: HEAD):" addproperty="svn.revision" defaultvalue="HEAD" />
<svn javahl="false">
<update dir="${project.build.root}" revision="${svn.revision}" recurse="true" />
<status path="${project.build.root}" revisionProperty="revision" />
</svn>
<echo>Release version is ${revision}</echo>
</target>
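The embedding step isn’t shown in my script. If you wanted to write that revision number somewhere your application can read it, a couple of lines like the following at the end of both -checkout and -update (right after the status call sets ${revision}) would do it. This is only a sketch – the version.properties name and destination path are placeholders, so point them wherever your app actually reads configuration:
<!-- Hypothetical sketch: record the deployed revision where the application can read it.
The destination path is a placeholder. -->
<echo file="/path/to/your/app/version.properties">revision=${revision}
builddate=${svn.builddate}
</echo>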
Now we start to get to the good stuff. Here I’m creating the directory structure to build my static assets, with directories for concatenating and minifying Javascript and CSS files. These files wind up in a /global directory during deployment for inclusion in my HTML templates.
<target name="precompile" description="set up compilation environment">
<mkdir dir="${project.compile.root}" />
<mkdir dir="${project.compile.root}/js" />
<mkdir dir="${project.compile.root}/css" />
</target>
<target name="-precombine">
<delete>
<fileset dir="${project.compile.root}/js" includes="*.js" />
<fileset dir="${project.compile.root}/css" includes="*.css" />
</delete>
<delete dir="${project.clean.root}/global" />
</target>
One of the challenges of bundling Google Analytics and AddThis javascript code into your app is that you’re no longer getting their updates on every page request. Generally this is OK – you probably don’t need an update from them every day. But from time to time, there are new features and enhancements (especially in GA) that you’ll want to capture and update in your bundle. I’ve automated this process by fetching those files during deployment so I have the latest each time I deploy.
Note: because of the way the combined files are named, you may need to update your templates when you deploy your assets!
<target name="fetch" description="Retrieve external resources for inclusion">
<get src="http://www.google-analytics.com/ga.js" dest="${project.compile.root}/js/ga.js" verbose="true" />
<get src="https://secure.addthis.com/js/200/addthis_widget.js" dest="${project.compile.root}/js/sharethis.js" verbose="true" />
</target>
Now take all of the Javascript and CSS files and concatenate them into fewer files. Note that ORDER MATTERS! We list the files in an explicit order to satisfy any dependencies that we may have in the system. Once combined, we run them through YUI-compressor to squeeze them down and finally rename them back to their original names.
<target name="combine" description="Combine permitted files together for deployment" depends="-precombine,fetch">
<echo message="Building global javascript and style sheets..." />
<concat destfile="${project.compile.root}/js/core.js" encoding="UTF8" eol="unix" force="no">
<fileset dir="${project.clean.root}" includes="js/library/jquery/1.3.2/jquery.js" />
<fileset dir="${project.clean.root}" includes="js/qforms/qforms-combined.js" />
<fileset dir="${project.compile.root}" includes="js/ga.js" />
<fileset dir="${project.compile.root}" includes="js/sharethis.js" />
</concat>
<concat destfile="${project.compile.root}/js/forms.js" encoding="UTF8" eol="unix" force="no">
<fileset dir="${project.clean.root}" includes="js/library/jquery/1.2.3/jquery.color.js" />
<fileset dir="${project.clean.root}" includes="js/library/jquery/1.2.6/jquery.autocomplete.js" />
<fileset dir="${project.clean.root}" includes="js/library/jquery/1.3.1/jquery.checkboxes.min.js" />
<fileset dir="${project.clean.root}" includes="js/library/jquery/1.3.1/jquery.selectboxes.min.js" />
<fileset dir="${project.clean.root}" includes="js/library/jquery/1.3.1/jquery.field.min.js" />
<fileset dir="${project.clean.root}" includes="js/library/jquery/1.3.1/jquery.colorbox.min.js" />
</concat>
<concat destfile="${project.compile.root}/css/pap.css" encoding="UTF8" eol="unix" force="no">
<fileset dir="${project.clean.root}" includes="css/pap.css" />
<fileset dir="${project.clean.root}" includes="css/forums.css" />
<fileset dir="${project.clean.root}" includes="css/pap_print.css" />
</concat>
<concat destfile="${project.compile.root}/css/pmp.css" encoding="UTF8" eol="unix" force="no">
<fileset dir="${project.clean.root}" includes="css/pmp.css" />
<fileset dir="${project.clean.root}" includes="css/forums.css" />
<fileset dir="${project.clean.root}" includes="css/pmp_print.css" />
</concat>
<concat destfile="${project.compile.root}/css/regform.css" encoding="UTF8" eol="unix" force="no">
<fileset dir="${project.clean.root}" includes="css/jquery.colorbox.css" />
<fileset dir="${project.clean.root}" includes="css/regform.css" />
</concat>
<yui-compressor warn="false" munge="true" charset="UTF-8" fromdir="${project.compile.root}" todir="${project.compile.root}">
<include name="js/core.js" />
<include name="js/forms.js" />
<include name="css/pap.css" />
<include name="css/pmp.css" />
<include name="css/regform.css" />
</yui-compressor>
<mkdir dir="${project.clean.root}/global" />
<copy file="${project.compile.root}/js/core-min.js" tofile="${project.clean.root}/global/core.js" />
<copy file="${project.compile.root}/js/forms-min.js" tofile="${project.clean.root}/global/forms.js" />
<copy file="${project.compile.root}/css/pap-min.css" tofile="${project.clean.root}/global/pap.css" />
<copy file="${project.compile.root}/css/pmp-min.css" tofile="${project.clean.root}/global/pmp.css" />
<copy file="${project.compile.root}/css/regform-min.css" tofile="${project.clean.root}/global/regform.css" />
</target>
As we discussed above, when you set a far-future Expires header on a file and you need to change that file, the only realistic strategy is to rename it. This next target does that automatically by using an md5 hash (Ant’s “checksum” task) as part of the filename. Why use this instead of, say, the revision number? Because we only want to update the references to these files in our HTML templates if something actually changes. It’s quite possible to push your static assets many times and, unless you updated your own Javascript or CSS (or Google or AddThis updated theirs), not actually have any changes. Thus, save yourself the effort of updating your HTML template references by leaving the file name the same.
I will admit that I don’t like using an md5 hash because it’s hard to spot check for changes. I haven’t come up with anything better yet.
The end of this target prints the filenames to the output so you can compare them to the ones in your templates. If you were clever, you would put these filenames into a config file of some sort that your templates read so you didn’t have to actually update the templates each time (a sketch of that idea follows the target below).
<target name="versionize" description="Rename files based upon their checksums as a versioning system for the CDN">
<checksum file="${project.clean.root}/global/core.js" property="chksum.core" />
<move file="${project.clean.root}/global/core.js" tofile="${project.clean.root}/global/core_${chksum.core}.js" />
<checksum file="${project.clean.root}/global/forms.js" property="chksum.forms" />
<move file="${project.clean.root}/global/forms.js" tofile="${project.clean.root}/global/forms_${chksum.forms}.js" />
<checksum file="${project.clean.root}/global/pap.css" property="chksum.pap" />
<move file="${project.clean.root}/global/pap.css" tofile="${project.clean.root}/global/pap_${chksum.pap}.css" />
<checksum file="${project.clean.root}/global/pmp.css" property="chksum.pmp" />
<move file="${project.clean.root}/global/pmp.css" tofile="${project.clean.root}/global/pmp_${chksum.pmp}.css" />
<checksum file="${project.clean.root}/global/regform.css" property="chksum.regform" />
<move file="${project.clean.root}/global/regform.css" tofile="${project.clean.root}/global/regform_${chksum.regform}.css" />
<echo>Most recent globals, compare with previous:</echo>
<for param="file">
<path>
<fileset dir="${project.clean.root}/global/" />
</path>
<sequential>
<echo message="@{file}" />
</sequential>
</for>
</target>
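As a sketch of that config-file idea, something like this tacked onto the end of the versionize target would record the checksummed names in a properties file instead of making you eyeball the echo output. The globals.properties name, its path and the global.* keys are all made up – your templates would have to read the file however your framework allows:
<!-- Hypothetical sketch: write the versioned asset names to a config file for templates to read.
The destination path is a placeholder. -->
<propertyfile file="/path/to/your/app/globals.properties" comment="Generated by versionize - do not edit">
<entry key="global.core.js" value="core_${chksum.core}.js" />
<entry key="global.forms.js" value="forms_${chksum.forms}.js" />
<entry key="global.pap.css" value="pap_${chksum.pap}.css" />
<entry key="global.pmp.css" value="pmp_${chksum.pmp}.css" />
<entry key="global.regform.css" value="regform_${chksum.regform}.css" />
</propertyfile>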
When developing locally, for example, I may wish to push to a local directory rather than to S3. On our live servers, we keep a copy of our static assets on the boxes alongside our production code. If S3 were to have a serious outage, we could update one configuration file and immediately switch back to our local assets. Given that we don’t control S3, we feel this is a good backup strategy.
<target name="localdeploy" description="Finalize build and stop before deploying to CDN">
<echo message="Exporting file transfer to ${project.deploy.root}..." />
<chmod dir="${project.clean.root}" type="file" perm="0644" />
<chmod dir="${project.clean.root}" type="dir" perm="0755" />
<mkdir dir="${project.deploy.root}" />
<sync todir="${project.deploy.root}" includeEmptyDirs="true" overwrite="true">
<fileset dir="${project.clean.root}" />
</sync>
</target>
Now we use our Python synchronization utility and push our uncompressed (non-Gzipped) assets to S3. This goes to the first of the two buckets we defined earlier.
Technically the Expires header should not be set more than a year in advance according to the RFC, but we’re using 12/31/2010 because I’m lazy.
Notice how s3cmd is being instructed to add HTTP headers like Cache-Control and Expires, and to set the MIME type. Those are critical!
<target name="push" description="take a finished build and move it to Amazon S3">
<property name="http.expires" value="Fri, 31 Dec 2010 12:00:00 GMT" />
<exec executable="${exec.python}" failonerror="true">
<arg value="${exec.s3cmd}" />
<arg value="--guess-mime-type" />
<arg value="--add-header=Cache-Control:public, max-age=630657344" />
<arg value="--add-header=Expires:${http.expires}" />
<arg value="--encoding=UTF-8" />
<arg value="--skip-existing" />
<arg value="--recursive" />
<arg value="--acl-public" />
<arg value="sync" />
<arg value="${project.clean.root}/" />
<arg value="s3://${aws.bucket.uncompressed}/" />
</exec>
</target>
We want to push gzipped assets to S3 as well, so this private target finds all of the text assets and compresses them:
<target name="-compress" description="gzip text files for additional speed">
<delete dir="${project.compressed.root}" />
<mkdir dir="${project.compressed.root}" />
<sync todir="${project.compressed.root}" includeEmptyDirs="true" overwrite="true">
<fileset dir="${project.clean.root}" />
</sync>
<for param="file">
<path>
<fileset dir="${project.compressed.root}" includes="**/*.js" />
<fileset dir="${project.compressed.root}" includes="**/*.css" />
<fileset dir="${project.compressed.root}" includes="**/*.xml" />
<fileset dir="${project.compressed.root}" includes="**/*.html" />
</path>
<sequential>
<gzip src="@{file}" destfile="@{file}.gz" />
<move file="@{file}.gz" tofile="@{file}" overwrite="true" />
</sequential>
</for>
</target>
Finally, we repeat the push, this time to our compressed bucket. We use two uploads here – one for the uncompressed content like images, which won’t be Gzipped, and another for the compressed content, which gets a couple of additional headers (Vary and Content-Encoding) so that browsers and proxies know what to do with it:
<target name="push-compressed" description="take a finished build and move it to Amazon S3" depends="-compress">
<property name="http.expires" value="Fri, 31 Dec 2010 12:00:00 GMT" />
<exec executable="${exec.python}" failonerror="true">
<arg value="${exec.s3cmd}" />
<arg value="--skip-existing" />
<arg value="--guess-mime-type" />
<arg value="--exclude=*.js" />
<arg value="--exclude=*.css" />
<arg value="--exclude=*.xml" />
<arg value="--exclude=*.html" />
<arg value="--add-header=Cache-Control:public, max-age=630657344" />
<arg value="--add-header=Expires:${http.expires}" />
<arg value="--encoding=UTF-8" />
<arg value="--recursive" />
<arg value="--acl-public" />
<arg value="sync" />
<arg value="${project.compressed.root}/" />
<arg value="s3://${aws.bucket.compressed}/" />
</exec>
<exec executable="${exec.python}" failonerror="true">
<arg value="${exec.s3cmd}" />
<arg value="--skip-existing" />
<arg value="--guess-mime-type" />
<arg value="--exclude=*" />
<arg value="--include=*.js" />
<arg value="--include=*.css" />
<arg value="--include=*.xml" />
<arg value="--include=*.html" />
<arg value="--add-header=Cache-Control:public, max-age=630657344" />
<arg value="--add-header=Expires:${http.expires}" />
<arg value="--add-header=Vary:Accept-Encoding" />
<arg value="--add-header=Content-Encoding:gzip" />
<arg value="--encoding=UTF-8" />
<arg value="--recursive" />
<arg value="--acl-public" />
<arg value="sync" />
<arg value="${project.compressed.root}/" />
<arg value="s3://${aws.bucket.compressed}/" />
</exec>
</target>
I like to get an email when deployments are run, so we include this:
<target name="sendMail" description="Send email notification">
<fixcrlf srcdir="." includes="**/*.logfile" eol="crlf" eof="remove" />
<mail mailhost="yourmailserver.com" mailport="25"
subject="'${target}' build at revision ${revision} successful"
messagefile="build.logfile"
encoding="plain">
<from address="${email.from}"/>
<to address="${email.to}"/>
</mail>
<echo message="Mail sent!"/>
</target>
And finally, the roll up targets. None of the above targets are really designed to be called directly. Rather, the following targets use the “depends” feature of Ant to combine multiple targets into something useful like: predeploy, deploy, redeploy, localpush, and repush. Those should all be semi-guessable in terms of what they accomplish.
<target name="deploy" depends="init,precompile,combine,versionize,localdeploy,push,push-compressed,sendMail"></target>
<target name="predeploy" depends="init,precompile,combine,versionize,-compress"></target>
<target name="redeploy" depends="combine,versionize,localdeploy"></target>
<target name="localpush" depends="init,precompile,combine,versionize,localdeploy"></target>
<target name="repush" depends="push,push-compressed"></target>
</project>
You can download the full build.xml.
I hope this Ant file helps you get up and running with S3. If you have ideas or improvements, please leave them in the comments!
Jim Priest said:
on September 2, 2009 at 9:45 am
Great post! Adding to my Ant wiki (http://thecrumb.com/wiki/ant)
marc esher said:
on September 3, 2009 at 8:39 am
Very cool, Brian.
When you’re developing locally, I assume you have all these js and css separate, and in your html you’re including them individually, correct?
So how are you modifying your HTML when you deploy to production? or does your app just have a switch that detects its environment and then decides whether to include the individual versions or the concatenated versions?
brian said:
on September 3, 2009 at 9:00 am
@Jim – thanks for the link!
@Marc – I saw you presented on automation at CFUN, wish I could have seen that. I am definitely an Ant novice just getting enough done so I can move on. I’d love to know more.
Since my code plugin is a total fail in the comments, I’m going to post my response as a post…
Managing CDNs in your Application » ghidinelli.com said:
on September 3, 2009 at 9:15 am
[...] to actually generate the files, one of the targets for my Ant script is “localdeploy” and my development.properties file has all Windows paths that point to [...]
marc esher said:
on September 3, 2009 at 9:58 am
Brian,
OK, so I understand how you’re changing the path to where the files live. But what about the files themselves? For example, let’s say you and your team have a handful of javascript files that you all work on all the time. These aren’t open source projects that only change once in a while… they’re files you work on every day. So in your html, you have
[script src="/path/to/my/JSFileIChangeEveryDay_1.js"]
[script src="/path/to/my/JSFileIChangeEveryDay_2.js"]
[script src="/path/to/my/JSFileIChangeEveryDay_3.js"]
[script src="/path/to/my/JSFileIChangeEveryDay_4.js"]
[script src="/path/to/my/JSFileIChangeEveryDay_5.js"]
And on production, you want to have these files all concatenated, so you put that in your build script and it creates a new file: JSFilesIChangeEVeryDay_Combined.js or whatever
Now, your HTML needs to change on production, so that it points to your new combined file instead of each individual js file.
My question is: how are you managing the changing of your HTML during deployments? Or aren’t you?
See, I’m on the verge of instituting a very similar thing as you’ve described, and in my head, I see me simply using ANT to do a find/replace in our main layout file which looks for the big chunk of text that includes all the single JS files and replaces them with a single include for the combined JS file. But if there are other approaches to doing that, I’d love to hear them.
brian said:
on September 3, 2009 at 10:08 am
Haha… so I didn’t actually answer your question with my blabbering?
I don’t use the individual files locally on a day to day basis – I use the concatenated ones. If I change one of those underlying JS files then I recompile them. If I was doing this day in and day out like you’re suggesting, I would probably have a separate target that just took my local files and rebuilt the concatenated versions into a statically named file for during development. Then you could use your normal deployment scripts to adjust which files to use based upon the environment.
It’s important to work with the compiled scripts most of the time because there are problems that can crop up with minimizing and concatenation. You won’t see them if you’re normally working with separate files.
I don’t know of a good solution beyond what you’re proposing: a more complex bit of Ant search and replace.
Dominic said:
on September 25, 2009 at 4:11 pm
@Mark
We are doing something like this:
if(mode = production)
include production css and js (singular compressed files)
else
include dev css and js (lots of uncompressed files)
end if
Even during dev we have this ‘mode’ in production (for the reasons brian points out), but allow that mode to be overwritten with an url argument, e.g. &debug=1. Works pretty well.
marc esher said:
on September 25, 2009 at 5:51 pm
@Dominick, that’s exactly where we’re headed with it. thanks for sharing.
Dominic said:
on September 27, 2009 at 10:23 am
@brian, just a shout of appreciation for this post. Got me up and running with Ant and also a few things with s3 sync that I needed.
My script does:
- Prompt for svn revision to deploy
- Prompt for environment
- Write the build date to a settings file that the application uses
- Compress the js and css using Juice, naming the minified css and js files, production${builddate}.css/js
- Sync to s3
- Sync to my app server
In my templates, I call the production css/js like src=”#jsDomain#/production#buildDate#.js”
Its working really nicely, a few keystrokes and my app is deployed for any revision I choose. This post helped a lot. Thanks.
@Marc, sorry I misspelled your name
Brian said:
on September 27, 2009 at 12:32 pm
@Dominic – glad it helped! Why did you choose Juice over, say, YUI-compressor?
Dominic said:
on September 27, 2009 at 1:53 pm
I should say , ‘Juicer’. It uses YUI-Compressor and also adds cache-busters to image urls within stylesheets (i.e. adds ?{lastmodifieddate} ).
http://cjohansen.no/en/ruby/juicer_a_css_and_javascript_packaging_tool