I’ve used Ubuntu Linux for many years now and am quite happy with it. For the most part it works well, is reasonably secure being based on Debian, and it upgrades twice per year. A few years ago Canonical, the company behind Ubuntu, added an integrated 5 GB cloud drive called “Ubuntu One”, and it worked very well from any computer running Ubuntu. I found it particularly useful for storing files when travelling, or for sending stuff to my home computer from work. However, Canonical, in their infinite wisdom, has recently decided to cancel this service. Personally I think it’s an error: for all it cost them to operate, it fostered a great deal of goodwill. Anyway, that left many of us Ubuntu users scrambling to find a replacement.
There are several alternatives available, such as Dropbox or Google Drive, but none of them seem to work as seamlessly. That was my motivation to see if I could mount a remote AWS S3 bucket as a local directory and use it in the same way, like a cloud drive. I managed to cobble it together, so here is my rather inelegant solution!
I found a couple of web pages describing how other people got the same thing to work, but I had to make a few changes. If they are useful to you, here they are: one by Chris Reeves and another github repository.
Here’s what worked for me!
It uses a package called “s3fs” (the open-source s3fs-fuse project), which depends on another package called “fuse”.
First of all, Ubuntu 14.04 requires a few dependent libraries, so I installed or updated these as root (a single command that covers the lot is shown after the list):
- build-essential
- pkg-config
- libfuse-dev
- libcurl4-openssl-dev
- libxml2-dev
- mime-support
- automake
- libtool
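If you prefer to do it in one go, something like this should install everything in the list above on Ubuntu 14.04 (adjust package names if your release differs):
sudo apt-get update
sudo apt-get install build-essential pkg-config libfuse-dev libcurl4-openssl-dev libxml2-dev mime-support automake libtool
If the configure step further down complains about missing OpenSSL headers, installing libssl-dev as well should sort it out; that package isn’t in my list above, so treat it as an assumption on my part.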
Next, download the latest version of s3fs-fuse, which in my case was s3fs-fuse-1.78.tar.gz (version 1.78).
Then follow the instructions for Ubuntu from the github repository.
Untar the archive file: tar xvzf s3fs-fuse-1.78.tar.gz
Then execute these commands in a terminal window:
- cd s3fs-fuse/
- ./autogen.sh
- ./configure --prefix=/usr --with-openssl
- make
- sudo make install
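At this point it’s worth a quick sanity check that the build landed where it should (this is just my own habit, not part of the official instructions):
which s3fs
s3fs --version
The first should print /usr/bin/s3fs, given the --prefix=/usr above, and the second should report version 1.78.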
The next thing was to create a directory in my home directory where the remote AWS bucket would be mounted. For this example, let’s say I created a directory called jamawss3fs with this command
mkdir ~/jamawss3fs
The next job is to set up a password file containing the AWS keys that allow you to connect to your AWS S3 bucket. First go to your AWS account and generate a new set of security keys. If you don’t know how to do this, read the AWS documentation. Download your newly created keys and put them in a file called
~/.passwd-s3fs
in the format
ACCESS_KEY_ID:SECRET_ACCESS_KEY
Note the format: the two keys are separated by a colon (:). You also need to set the permissions on this file specifically, with sudo chmod 600 ~/.passwd-s3fs
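For illustration, creating the file from a terminal could look like this (the key names are placeholders, substitute the values from your downloaded keys):
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ~/.passwd-s3fs
sudo chmod 600 ~/.passwd-s3fs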
If you already have an AWS bucket prepared, for example one named “trialbucket”, you should be able to mount it locally by issuing a command such as
s3fs trialbucket ~/jamawss3fs
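A couple of extras I found handy. The passwd_file and use_cache options come from the s3fs documentation (your paths will differ), and fusermount -u unmounts the bucket when you’re done:
s3fs trialbucket ~/jamawss3fs -o passwd_file=${HOME}/.passwd-s3fs -o use_cache=/tmp
fusermount -u ~/jamawss3fs
If you want the bucket mounted automatically at boot, the s3fs docs suggest an /etc/fstab line along these lines (adjust the bucket name and path; I haven’t relied on this myself):
s3fs#trialbucket /home/yourusername/jamawss3fs fuse _netdev,allow_other 0 0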
Good luck, it worked for me.
J