Wednesday, March 28, 2007

Capistrano and Mongrel are easy to use, but deployment is still hard

So I finally got around to deploying with Capistrano. It was like learning how to walk all over again. Before, I had learned to deploy using just lighttpd behind Apache, and that was a pain every time, since there were always steps I'd forget. A good introduction to deploying with Capistrano is "It's time for a grown up server." I also checked out Mongrel at the same time. It seems pretty neat.

I'm sure that Cappy is easy to use once it's up and running, but boy, setting it up on a server from scratch took all day. This Yariv guy has a point when he says there's so much stuff to install compared to web development in Erlang, where all you need is Yaws and Mnesia. But then again, all this 'stuff' is supposed to make our lives easier.

There are plenty of tutorials out there about setting up Capistrano and Mongrel, but deviate from them just a little and you'd better be ready to use your problem-solving noggin.

SVN is just another TLA

One of the biggest problems I ran into while setting things up is that if you have a password on your svn repository, Cappy's not going to like it too much. This is an old problem, and the old solution from Jamis exists. However, I noticed that when I used his solution, the --username and --password options appeared twice. I'm guessing that you can do the following in your Capistrano deploy.rb and it would work... BUT!:
set :svn_username, ENV['USER'] || "default_username"
set :svn_password, Proc.new { Capistrano::CLI.password_prompt('SVN Password: ') }
set :repository, "http://path.to.svn/svn/#{application}/trunk"
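(A side note on that Proc: at least in the Capistrano version I'm running, a variable set to a Proc is evaluated lazily, so you only get prompted when :svn_password is actually read. Assign the prompt's result directly and you'd get asked the moment deploy.rb loads, on every cap task. Roughly:

set :svn_password, Capistrano::CLI.password_prompt('SVN Password: ')              # prompts immediately, every time
set :svn_password, Proc.new { Capistrano::CLI.password_prompt('SVN Password: ') } # prompts only when the variable is first used

That's why Jamis's solution wraps the prompt in Proc.new.)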
This only works if the machine you're developing on is the same machine you're deploying to, per the example in Agile Web Development with Rails (and the example is already outdated). When I did the cold deploy, I got this:
$ cap cold_deploy
* executing task cold_deploy
SVN Password:
...some other stuff clipped...
** [out :: remote_server] Authentication realm: xxxxxx.com
** [out :: remote_server] Password for 'someapp':
** [out :: remote_server] subversion is asking for a password
** [out :: remote_server] Authentication realm: xxxxxx.com
** [out :: remote_server] Username:
It seems that Capistrano queries svn twice: once on the local development machine to find the latest revision number, and once again on the remote server to check out that revision from the repository. Since I had no way of interacting with the script while it executed on the remote machine, svn's prompts for a username and password just got logged, and Cappy sat waiting on the local machine.

The only solution I came up with was to go back to the Old Ways: ssh into the remote server machine and do a temporary checkout by hand first, then do the same on your local development machine.
$ svn co http://path.to.svn/svn/trunk/
$ rm -rf trunk
You can then just leave the deploy.rb script as:
set :repository, "http://path.to.svn/svn/#{application}/trunk"
This way, svn caches your authentication on the remote machine (and the local one), and won't ask for it again.
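Incidentally, those cached credentials are just files in your home directory, so if you ever need svn to forget them, you can look there:

$ ls ~/.subversion/auth/svn.simple/

Delete the files in that directory and svn will prompt for the password again on the next operation.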

To get the pack of Mongrels running, you'd better get things right

Things were going hunky-dory, and I thought I was in the clear, but then came a slight detour. I ran into this problem when I tried to start Mongrel on the remote server:
$ mongrel_rails start -d
/usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/rails.rb:32: uninitialized constant Mongrel::HttpHandler (NameError)
from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:27:in `require'
from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/bin/mongrel_rails:10
from /usr/bin/mongrel_rails:18
This one took a little bit of time to figure out, but it was easier than the next two. Gem said the mongrel install was successful at the end, when in fact the native extension didn't build at all! That was because I didn't have make on my machine. To get it on Ubuntu, apt-get the build-essential package, then reinstall the mongrel and mongrel_cluster gems. The odd thing is, I needed make to build rubygems on Ubuntu in the first place; it must have gotten removed when I did apt-get autoremove.
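For the record, the fix boiled down to two commands (assuming you install gems as root):

$ sudo apt-get install build-essential
$ sudo gem install mongrel mongrel_cluster

This time, watch the gem output for the 'Building native extensions' step to make sure it actually compiles something.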

However, the real kicker stumped me for a while here:
$ cap cold_deploy
...clipped...
** transaction: commit
* executing task spinner
* executing task start_mongrel_cluster
* executing "sudo mongrel_rails cluster::start -C /var/www/apps/my_app/current/config/mongrel_cluster.yml"
servers: ["remote_server"]
[remote_server] executing command
command finished
command "sudo mongrel_rails cluster::start -C /var/www/apps/my_app/current/config/mongrel_cluster.yml" failed on remote_server
Huh? What's going on? I was stumped for most of the day. It's always the little things that get you, but you live and learn. Ubuntu, by default, doesn't give subsequently created users any sudo privileges. I had created another user on the remote server machine to do the deployment, so it had no sudo rights. The fix (note the -a, which appends the group instead of replacing the user's existing supplementary groups):
sudo usermod -a -G admin username
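You can verify the change took with groups (the list should now include admin); note that any ssh session the deploy user already has open won't pick up the new group until it logs in again:

$ groups username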
And after that, I ran into:
$ cap cold_deploy
...clipped...
servers: ["remote_server"]
[remote_server] executing command
** [out :: remote_server] Starting 2 Mongrel servers...
** [out :: remote_server] !!! Path to log file not valid: log/mongrel.log
** [out :: remote_server] mongrel::start reported an error. Use mongrel_rails mongrel::start -h to get help.
** [out :: remote_server] mongrel_rails start -d -e production -p 8000 -a 127.0.0.1 -P log/mongrel.8000.pid -c /var/www/apps/my_app
** [out :: remote_server] !!! Path to log file not valid: log/mongrel.log
** [out :: remote_server] mongrel::start reported an error. Use mongrel_rails mongrel::start -h to get help.
** [out :: remote_server] mongrel_rails start -d -e production -p 8001 -a 127.0.0.1 -P log/mongrel.8001.pid -c /var/www/apps/my_app
command finished
After reading through some Google results, I realized it was really simple: the mongrel configuration file had the wrong path in it. When you run the mongrel_cluster setup, you need to make sure the deploy path has "current" on the end, since that's the symlink Capistrano points at the latest release. Remember to add /current to wherever you're deploying your application!

mongrel_rails cluster::configure -e production -p 8000 -a 127.0.0.1 -N 2 -c /deploy/path/my_app/current
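If it worked, the generated config/mongrel_cluster.yml should come out looking roughly like this (a sketch based on my setup; your port, address, and server count will vary):

---
cwd: /deploy/path/my_app/current
log_file: log/mongrel.log
port: "8000"
environment: production
address: 127.0.0.1
pid_file: log/mongrel.pid
servers: 2

The key line is cwd: with it pointing at current, the relative log/mongrel.log path resolves inside the live release and the cluster starts cleanly.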

So hopefully, if any of you out there have run into the same problems, I've saved you a little bit of time. I can't really say this is a tip; it's more like a log of my deployment adventures. Remember, if doing it once will save you lots of time in the future, it's probably worth learning how to do it.

2 comments:

  1. simple thing like putting 'current' at the end that'll get cha stuck, and want to just screw it all... thanks for the write up.

  2. Anonymous, 6:20 PM

    Thank you ^^
