Using OpenSSH to set up an SFTP server on Ubuntu 14.04

I'm busy migrating an existing server to the cloud and need to replicate its SFTP setup.  The existing setup uses a password to authenticate a user, who then uploads data files for a web service to consume.

YMMV - My use case is pretty specific to this legacy application so you'll need to give consideration to the directories you use.

It took a surprising amount of reading to find a consistent set of instructions so I thought I should document the setup from start to finish.

Firstly, I set up the group and user that I will be needing:

 groupadd sftponly  
 useradd -G sftponly username  
 passwd username  
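
If you want to sanity-check that step, id will confirm the supplementary group membership (the uid/gid numbers below are just illustrative):

 id username  
 uid=1001(username) gid=1001(username) groups=1001(username),1002(sftponly)  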

Then I made a backup copy of /etc/ssh/sshd_config and edited it.

Right at the end of the file add the following:

  Match group sftponly   
    ChrootDirectory /usr/share/nginx/html/website_directory/chroot   
    X11Forwarding no   
    AllowTcpForwarding no   
    ForceCommand internal-sftp -d /uploads   

If this block appears before the UsePAM setting then your sshd_config is borked and you won't be able to connect on port 22.  The reason is that a Match block swallows every directive that follows it until the next Match (or the end of the file), and UsePAM isn't one of the keywords allowed inside a Match block, so sshd refuses to load the config.
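
You can catch this (and any other typo) before locking yourself out by validating the file with sshd's test mode.  With the block in the wrong place it prints something along these lines (the line number will obviously be specific to your file):

 /usr/sbin/sshd -t  
 /etc/ssh/sshd_config line 89: Directive 'UsePAM' is not allowed within a Match block  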

The -d flag on the ForceCommand setting drops the user into the /uploads directory by default when they log in.

Now change the Subsystem setting.  I've left the original as a comment here.  The parameter "-u 0002" sets the default umask for the user, so uploaded files end up group-writeable.

 #Subsystem sftp /usr/lib/openssh/sftp-server  
 Subsystem sftp internal-sftp -u 0002  
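
To make the "-u 0002" concrete: the umask is masked out of the default creation modes (666 for files, 777 for directories), so uploaded files arrive as 664 and new directories as 775 - group-writeable in both cases.  You can reproduce the arithmetic in any shell (file names and dates are illustrative):

 umask 0002  
 touch example.txt && mkdir example_dir  
 ls -l  
 -rw-rw-r-- 1 username username    0 Jan  1 12:00 example.txt  
 drwxrwxr-x 2 username username 4096 Jan  1 12:00 example_dir  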

I elected to place the base chroot folder inside the website directory for a few reasons.  Firstly, this is the only website or service running on this VM so it doesn't need to play nicely with other use cases.  Secondly I want the next sysadmin who is trying to work out how this all works to be able to immediately spot what is happening when she looks in the directory.

Then, because my use case demanded it, I enabled password logins for the SFTP user by finding and changing this line in /etc/ssh/sshd_config:

 # Change to no to disable tunnelled clear text passwords  
 PasswordAuthentication yes  
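
Once the edits are in, re-run the sshd -t check from earlier and then restart the service so the changes take effect.  On Ubuntu 14.04 sshd is an Upstart job called ssh:

 sudo /usr/sbin/sshd -t  
 sudo service ssh restart  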

The base chroot directory must be owned by root and not be writeable by any other groups.

 cd /usr/share/nginx/html/website_directory  
 mkdir chroot  
 chown root:root chroot/  
 chmod 755 chroot/  
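
A quick ls -ld confirms sshd will be happy - you want root:root ownership with no group or other write bits (the rest of the output is illustrative):

 ls -ld chroot/  
 drwxr-xr-x 2 root root 4096 Jan  1 12:00 chroot/  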

If you skip this step then your connection will be dropped with a "broken pipe" message as soon as you connect.  Looking in your /var/log/auth.log file will reveal errors like this:

 fatal: bad ownership or modes for chroot directory  
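
If you do hit it, the easiest way to see the error is to watch the log in a second terminal while you retry the connection:

 tail -f /var/log/auth.log  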

The next step is to make a directory that the user has write privileges to.  The base chroot folder is not writeable by your sftp user, so make an uploads directory inside it and give them "writes" (ha!) to it (we're still in website_directory from the previous step):

 mkdir chroot/uploads  
 chown username:username chroot/uploads  
 chmod 755 chroot/uploads  
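
At this point a connection should drop you straight into /uploads with write access.  A quick smoke test (the server address and file name are placeholders):

 sftp username@sftp.example.com  
 sftp> pwd  
 Remote working directory: /uploads  
 sftp> put datafile.csv  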

If you skip that step then you won't have any write privileges when you connect.  This is why we had to create a chroot base directory and then hang the uploads folder off it.  I chose to stick the base in the web directory to make it easy to spot, but in more general cases you would place it somewhere more sensible.

Finally I link the uploads directory in the chroot jail to the uploads directory where the web service expects to find files.

 cd /usr/share/nginx/html/website_directory  
 ln -s chroot/uploads uploads  
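
You can confirm the link and the end-to-end flow by listing the web-facing path after a test upload - the file sent over SFTP should show up there (output is illustrative):

 ls -ld uploads  
 lrwxrwxrwx 1 root root 14 Jan  1 12:00 uploads -> chroot/uploads  
 ls -l uploads/  
 -rw-rw-r-- 1 username username 1024 Jan  1 12:00 datafile.csv  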

I feel a bit uneasy about a password login being used to write files to a directory consumed by a web service, but in my particular case the firewall whitelists our office IP address on port 22, so nobody outside the office can connect.  I'm also running fail2ban just in case somebody manages to get access to our VPN.
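
For what it's worth, here's a minimal sketch of that kind of firewall rule using ufw - assuming ufw is what you run, and with 203.0.113.10 standing in for the office address:

 # rules match in order, so add the allow before the catch-all deny  
 ufw allow from 203.0.113.10 to any port 22 proto tcp  
 ufw deny 22/tcp  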
