25 January 2016
Before I go any further I should note that I'm using Piwik as my analytics package, and it respects "Do Not Track" requests. We're not using this to track people, but we are tying it to our client's existing database of their users' interests.
I want the process of identifying the user to be as magical as possible so that my controllers can stay nice and skinny. Nobody likes a fat controller, right?
I decided to use middleware to trap all my web requests to assign a "responder" to the request. Then I'll use a view composer to make sure that all of the output views have this information readily available.
The only snag in this plan was that the Laravel documentation was a little sketchy on how to get the value of a route parameter in middleware. It turns out that the syntax I was looking for was $request->route()->parameters(), which neatly returns the route parameters in my middleware.
The result is that every web request to my application is associated with a visitor in my database and this unique id is sent magically to my frontend analytics.
So, here are enough of the working pieces to explain what my approach was:
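The original listing didn't survive here, so this is a minimal sketch of the shape of that middleware, not the original code. The class name IdentifyResponder, the visitor_hash parameter, and the Responder model are all hypothetical stand-ins; the view-composer half is omitted.

```php
<?php
// app/Http/Middleware/IdentifyResponder.php -- hypothetical sketch of the approach.
namespace App\Http\Middleware;

use Closure;

class IdentifyResponder
{
    public function handle($request, Closure $next)
    {
        // The syntax mentioned above: fetch the route parameters in middleware.
        $parameters = $request->route()->parameters();

        // 'visitor_hash' and the Responder Eloquent model are assumed names.
        $responder = \App\Responder::where('hash', $parameters['visitor_hash'] ?? null)->first();

        // Stash it on the request so controllers stay skinny.
        $request->attributes->set('responder', $responder);

        return $next($request);
    }
}
```

Register the class in the web middleware group and every web request gets a responder attached before it reaches a controller.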
19 January 2016
YMMV - My use case is pretty specific to this legacy application so you'll need to give consideration to the directories you use.
It took a surprising amount of reading to find a consistent set of instructions so I thought I should document the setup from start to finish.
Firstly, I set up the group and user that I will be needing:
groupadd sftponly
useradd -G sftponly username
passwd username
Then I made a backup copy of /etc/ssh/sshd_config and edited the original.
Right at the end of the file add the following:
Match group sftponly
    ChrootDirectory /usr/share/nginx/html/website_directory/chroot
    X11Forwarding no
    AllowTcpForwarding no
    ForceCommand internal-sftp -d /uploads
For some reason, if this block appears before the UsePAM setting then your sshd_config is borked and you won't be able to connect on port 22.
We force the user into the /uploads directory by default when they login using the ForceCommand setting.
Now change the Subsystem setting. I've left the original as a comment in here. The parameter "-u 0002" sets the default umask for the user.
#Subsystem sftp /usr/lib/openssh/sftp-server
Subsystem sftp internal-sftp -u 0002
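As a quick illustration of what that umask buys you, here's a throwaway shell session (in a temp directory, not the real chroot) showing the permissions new files and directories get under 0002:

```shell
# Reproduce the effect of umask 0002 locally; internal-sftp applies the same
# mask (via -u) to anything the sftp user creates.
(
  umask 0002
  tmp=$(mktemp -d)
  touch "$tmp/file"      # files: 666 & ~0002 = 664 (group-writeable)
  mkdir "$tmp/dir"       # dirs:  777 & ~0002 = 775
  stat -c '%a %n' "$tmp/file" "$tmp/dir"
)
```

The point is that uploads end up group-writeable, so a web service user sharing the group can manage the files.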
I elected to place the base chroot folder inside the website directory for a few reasons. Firstly, this is the only website or service running on this VM so it doesn't need to play nicely with other use cases. Secondly I want the next sysadmin who is trying to work out how this all works to be able to immediately spot what is happening when she looks in the directory.
Then because my use case demanded it I enabled password logins for the sftp user by finding and changing the line in /etc/ssh/sshd_config like this:
# Change to no to disable tunnelled clear text passwords
PasswordAuthentication yes
The base chroot directory must be owned by root and not be writeable by any other groups.
cd /usr/share/nginx/html/website_directory
mkdir chroot
chown root:root chroot/
chmod 755 chroot/
If you skip this step then your connection will be dropped with a "broken pipe" message as soon as you connect. Looking in your /var/log/auth.log file will reveal errors like this:

fatal: bad ownership or modes for chroot directory
The next step is to make a directory that the user has write privileges to. The base chroot folder is not writeable by your sftp user, so make an uploads directory and give them "writes" (ha!) to it:
mkdir uploads
chown username:username uploads
chmod 755 uploads
If you skip that step then you won't have any write privileges when you connect. This is why we had to create a chroot base directory and then hang the uploads folder off it. I chose to stick the base in the web directory to make it obvious to spot, but in more general cases you would place it somewhere more sensible.
Finally I link the uploads directory in the chroot jail to the uploads directory where the web service expects to find files.
cd /usr/share/nginx/html/website_directory
ln -s chroot/uploads uploads
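Putting the directory steps together, this sketch rebuilds the whole layout in a scratch directory (a stand-in for /usr/share/nginx/html/website_directory) so you can see how the symlink resolves. On the real box the base would be chowned root:root and uploads chowned to the sftp user:

```shell
# Scratch re-creation of the layout described above.
site=$(mktemp -d)                  # placeholder for the website directory
mkdir -p "$site/chroot/uploads"
chmod 755 "$site/chroot" "$site/chroot/uploads"
cd "$site"
ln -s chroot/uploads uploads
# The web service path and the sftp user's path are now the same directory:
readlink -f "$site/uploads"
```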
I feel a bit uneasy about a password login being used to write files to a directory used by a web service, but in my particular use case my firewall whitelists our office IP address on port 22, so nobody outside of our office can connect. I'm also using fail2ban just in case somebody manages to get access to our VPN.
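For the record, that whitelist amounts to something like the following iptables rules. This is a hedged config sketch, not my actual ruleset: 203.0.113.10 is a documentation placeholder for the office address, and your firewall tooling may differ.

```shell
# Allow SSH/SFTP only from the office address; drop everything else on 22.
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```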