I use a mix of Crashplan and Dropbox. They both have their uses and are great. Until recently, that is. My headless Linux boxes stopped backing up to Crashplan earlier this year: a local JVM instance wouldn’t run, and the documentation for headless clients is poor if you run into any issues. I ended up installing the 12.04 LTS java-common package a few months ago, which is still v1.6.
Importantly, to do this you must also change /usr/local/crashplan/install.vars.
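As I recall, install.vars contains a JAVACOMMON variable that tells Crashplan which java binary to launch; that is the line you need to repoint. A minimal sketch, demoed on a scratch copy (on a real box the file is /usr/local/crashplan/install.vars, and /usr/bin/java is my assumed target path — back the file up first):

```shell
# Hypothetical sketch: repoint Crashplan's JAVACOMMON at a manually
# installed JRE. Demoed on a temp file so nothing real is touched.
VARS=$(mktemp)
printf 'JAVACOMMON=${TARGETDIR}/jre/bin/java\n' > "$VARS"    # a typical default line
sed -i 's|^JAVACOMMON=.*|JAVACOMMON=/usr/bin/java|' "$VARS"  # point at the new JRE
grep '^JAVACOMMON=' "$VARS"                                  # confirm the change took
rm -f "$VARS"
```

On the real file, run the `sed` line with `sudo` against /usr/local/crashplan/install.vars, then restart the Crashplan service so it picks up the new path.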
I will say that I didn’t install the Java package on two other Linux servers, and they subsequently just worked themselves out. I think Code42 (Crashplan’s creator) was inundated with failures after they released some software that essentially broke their Linux-based backbone. Their newer software versions, at least after 4.4.1, seem to upgrade the native JRE package properly and require little intervention. So, before you go down the road of manual JRE installations, look into this first.
Backups started working again after the manual JRE upgrade, but it took some hassle: a lot of searching, digging through logs, etc. It was a mess. Fast forward to today. The server went down around 1:30am. We looked and found there was zero space left on the drive. Damn. In our /usr/local/crashplan/upgrade directory there were effectively an unlimited number of timestamped update folders. Crashplan was creating a new folder every 30 minutes, finding that v1.6 of the Ubuntu java-common package was installed, and then failing out of the process. This continued long enough to consume the entire drive. Code42, you need a reasonability check here!
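If you suspect the same failure loop, a quick look at the upgrade directory tells you fast. A small sketch, assuming the default Linux install path (it is harmless to run even if the directory doesn’t exist):

```shell
# Count the timestamped retry folders and sum the space they consume.
# /usr/local/crashplan/upgrade is the default install path (assumption).
UPG=/usr/local/crashplan/upgrade
echo "retry folders: $(ls -1 "$UPG" 2>/dev/null | wc -l)"
echo "space used:    $(du -sh "$UPG" 2>/dev/null | cut -f1)"
```

A steadily climbing folder count is the tell-tale sign that upgrades are failing and retrying on a loop.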
I resolved it by uninstalling Crashplan and hunting down all of its install files and locations. Once those were removed, I installed Java using the guide here (and the useful first comment, for a default install): http://linuxg.net/how-to-install-oracle-java-jdk-678-on-ubuntu-13-04-12-10-12-04/ . Then I went through the Crashplan install process again. During installation, java-common was updated with compliant Java files, as Crashplan apparently does not work with the Oracle version.
Then we were back up again. I’ve lost a huge amount of time fixing these issues brought about by Crashplan’s upgrade process from 3.x to 4.x. Prior upgrades were seamless and required no intervention. I only noticed this problem when our backups stopped working, and then again when the failure took down a production server.
The new version also required me to open some ports in the Ubuntu firewall. ufw allow from xxx.xxx.xxx.xxx to any port 4242 is the command I used to open a port back to my office so backups can flow between servers. I keep a local copy of Crashplan data here on a file server for rapid restores if needed.
2017-01-02 Update: A local Ubuntu machine failed today. I went to look because my Crashplan email reports said it wasn’t updating even though the machine was online. When I went in with the Crashplan client, it said the machine was upgrading, but it was stuck there.
At the Ubuntu CLI, I saw that the service couldn’t upgrade because it had no space available. /dev/mapper/ was completely full, and /usr/local/crashplan/upgrade had consumed 91GB. Inside, I saw a ~45MB upgrade download arriving every ~30 minutes until the drive filled. Seriously, Code42 Software, you REALLY need a reasonability test on upgrade failure here; this is ridiculous.
I went into the upgrade folder and deleted the upgrade files, then restarted the client. The software upgraded and all went well. Too bad this required manual intervention because the upgrade script was written poorly. I hope this helps someone else in a similar scenario.
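For reference, the manual fix above amounts to something like this. A sketch only, assuming the default install path and the sysvinit service name — verify both on your own box before deleting anything:

```shell
# Assumed default path; the -d guard makes this a no-op on machines
# that don't have Crashplan installed.
UPG=/usr/local/crashplan/upgrade
if [ -d "$UPG" ]; then
    sudo service crashplan stop     # stop the engine so it quits re-downloading
    sudo rm -rf "$UPG"/*            # clear the piled-up retry folders
    sudo service crashplan start    # on a clean restart the upgrade can apply
fi
```

Stopping the service first matters: if the engine is still running, it can drop a fresh retry folder while you are mid-cleanup.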