08 December 2012

Role based authentication in Cake 2.x

I do not like reinventing the wheel, so I really just want to build on existing tutorials and provide some background information and experience.

Firstly, make sure you understand the difference between an ACO and an ARO. To put it in very simple terms, an ACO is something that is protected by ACL, and an ARO is something that uses ACL to access the ACO.

It might help to think of AROs as users (or groups) and ACOs as controller actions. You will be marking your User and Group models as requester objects and setting ACOs on controller actions across the board.

The Cake manual really is good at explaining the concepts of ARO, ACO, and ACL. Please make sure you read it and understand it before continuing. Unless you understand what ARO, ACO, and ACL mean at this point, the rest of this post will make no sense.

Please RTFM before continuing.

Okay, now read through the Cake page that introduces the ACL shell (here). Ignore the sections "Create and delete nodes" and "Grant and Deny Access". We will be using tools to do these tasks, since manually assigning permissions for even a medium-sized project (20 models) would be unmanageable - especially if you are using a feature-driven Agile approach.

Read and preferably try out the tutorial found on the Cake book site (here). Following this tutorial will give you a role based authentication system. Since we are predominantly interested in Role Based Authentication please make sure that you follow the instructions regarding "Group-only ACL" (here).

Still with me? Okay - I typically use a feature driven Agile approach when developing. CakePHP really lends itself to this development methodology. BUT it does mess around a bit with ACL when you need to add new models and controller methods. So what's the fix?

Well, in my opinion, the AclExtras plugin (available here) used by the tutorial I linked to earlier is an indispensable tool. By using it to sync the ACO table (read the tutorial) you can quickly create or recreate the ACO tree. Now you should understand why I told you to ignore the section "Create and delete nodes" from the shell manual.

Go back to the page about the ACL shell (here). At the bottom of the page there are instructions to output the tree. I personally like to output it into a text file and store it in my documents folder. Not only is it useful to have a list of the ACOs, but it really helps in creating the initDB method in the Users controller (see the tutorial).

Please don't make the mistake of making every model an ARO. Only the User and Group models need to be AROs. The tutorial doesn't explicitly say this, and I remember making this mistake the first time I worked with ACL. Do not copy and paste the actsAs requester declaration into each model!

Development is never static (unless you are using a Waterfall approach in a legacy environment like Cobol or Fortran). So working with ACL in CakePHP will require some tweaking as you go along.

Fixing the ARO table is relatively simple. There are a couple of ways to do it:

1) Truncate the table, add the allow method from the tutorial to the beforeFilter in your Users controller, and add new users who are linked to the appropriate groups. Remember: we are focused on role based authentication, so you should have assigned the group as the parent of each user.
2) Truncate the table and manually add the required groups (there should only be a handful).

Once you have fixed your ARO table you should rerun the initDB method in your Users controller to rebuild the join table (aros_acos). I'm not sure why this join table is named in a way that breaks Cake conventions, but truncating it and rerunning your initDB method (see the tutorial) is the way to fix permissions.

What happens if you get a node error? No problem really - just resync your ACO table with the AclExtras plugin (see the tutorial). When will this occur? When you add a new controller, or a new method to a controller. This is why I use the ACL shell (linked above) to export the ACO tree (created with the AclExtras plugin): it allows me to quickly check which nodes exist.

What do the tables do?

  1. acos => objects that can be requested by an ARO
  2. aros => requester objects that require access to protected ACOs
  3. aros_acos => the join table linking the permissions

In the tutorial linked above the initDB method sets up the aros_acos join table.

Any questions?  Please comment on the post and I will answer.

05 December 2012

Adding a cross-browser transparent background

Adding a transparent background that is cross browser compatible is relatively simple.  It does not rely on CSS3 and so this method works for the current versions of Chrome and Firefox as well as IE8 and above.

Add this to your template:
<div class="container">
    <div class="content">
        Here is the content. <br />
        Background should grow to fit.
    </div>
    <div class="background"></div>
</div>

Then add this to your CSS:

   .container {
       position: relative;
   }

   .content {
       position: relative;
       z-index: 1;
   }

   .background {
       position: absolute;
       top: 0;
       left: 0;
       width: 100%;
       height: 100%;
       background-color: #000;
       /* These three lines are for transparency in all browsers. */
       filter: alpha(opacity=50);
       -moz-opacity: 0.5;
       opacity: 0.5;
   }

This was an answer on StackOverflow.

29 November 2012

Adding a prefix to all files in a directory using DOS

A quick way to prefix all files in the directory is to run this command from your shell in the directory where your files are:
for %a in (*) do ren "%~a" "prefix_%~a"
The "prefix_" part of the command can be replaced with whatever prefix you want. (If you put the command in a batch file instead of typing it at the prompt, double the percent signs: %%a.)
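For anyone on Linux or macOS, the equivalent is a simple shell loop (a sketch - the demo directory and file names below are invented for illustration):

```shell
# Set up a throwaway directory with a couple of demo files
mkdir -p /tmp/prefix_demo && cd /tmp/prefix_demo
touch one.txt two.txt

# Rename every file in the current directory with a prefix
for f in *; do mv "$f" "prefix_$f"; done

# The directory now contains prefix_one.txt and prefix_two.txt
ls
```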

19 June 2012

Getting XAMPP to use Microsoft SQL server

Scary Microsoft employee makes your life hard

This post just builds on the post found here and gives you some shortcuts to solving the issue.

Before visiting that link, run phpinfo() to check the compiler that was used for your version of PHP.

The next thing to remember is that "nts" is short for "not thread safe" and "ts" is short for "thread safe". The 53 or 54 in the file names of the DLLs you download from Microsoft corresponds to the version of PHP you're using (5.3 or 5.4).

Finally, if you get the error "This extension requires the Microsoft SQL Server 2012 Native Client ODBC Driver to communicate with SQL Server", you can download the native client from Microsoft. There is an .msi installer for just the client further down the page if you don't want to download the whole package.

18 April 2012

Giving up Facebook

Giving up Facebook was difficult. I had to face up to the fact that I was thinking about it pretty much whenever I was taking a break.  I started to realize that Facebook took up a fair amount of headspace and time.  Since I don't smoke I don't go outside.  Left with the choice of drinking yet another cup of unhealthy coffee or finding a distraction on my PC I found Facebook curiously addictive.

What did I like about Facebook?   Well I analyzed this carefully and thought about the value proposition.   Ultimately I realized that Facebook offered two things - lots of shallow electronic interactions and meaningless flash animation games.  Since I earn enough to buy a decent PC (or console) and really hot games the games on Facebook offer little.  The only game that meant anything to me was Fairyland and that only because it promised to save the rainforest.  PC games are better without Facebook.  As for meaningless social interaction guess how many Facebook "friends" have tried to get in touch with me since I stopped using Facebook?  That's right... zero.

Facebook's product is your personal information. It's the ability of Facebook to sell information about you to advertisers. You are no longer a person, you are a product. You are Facebook's product. So in exchange for the "free" services that they offer, you willingly divulge your interests, contact details, friends, where you live, where you travel, your political and religious beliefs, and everything in between.

And even if Facebook isn't going to capitalize on your willingness to slave yourself out, they will sell your details to third parties. Did you notice the agreement between Facebook and Paypal that allows a one-click purchase system? How convenient... your money linked directly to your Facebook account. If that doesn't scare you, then you're not a hacker and have no clue what power you're handing Facebook by linking your accounts.

So my decision to disable my Facebook account had several motivations: firstly, it was interfering with my work productivity by invading my thoughts; secondly, it was removing my need for real human interaction; thirdly, it was threatening my personal privacy; and lastly, it was full of lame people talking about their cats.

Statistically, if you give a million monkeys typewriters and enough time you might expect them to randomly produce the works of Shakespeare. Facebook is the disproof of this theorem - I really found that my time spent reading Facebook nonsense detracted from the time I had available to read news websites and otherwise improve my understanding of the world. Go check out http://www.failbook.com if you think you can educate yourself on Facebook or otherwise receive valuable, informative opinion that will improve your life.

So what was it like?  Well firstly I was a little insulted that none of my Facebook "friends" noticed that I cancelled my account.  I thought about this and realized that Facebook offers a great deal of superficial social interactions.  A Facebook "friend" is meaningless and if one of them disappears there are plenty of other shallow interactions to fill the gap.   Test it for yourself... don't login to Facebook for a few days and see who tries to email or phone you.  You'll discover that Facebook "friends" are a poor substitute for real social interaction.

Then I started craving the various games I had started playing on Facebook.  I suppose it was useful that none of my "neighbours" tried to email me.  The social value was ruined for me when I acknowledged that none of these people were really my friends.  The only game I missed was "Fairyland" which promised to donate money to save the rainforests.  I rationalized that by donating to my church I was actually donating a whole lot more to the planet.... and since playing Fairyland took X hours it was cheaper to donate those dollars directly to the church.

Then I missed Facebook's photo gallery.  So I tried out Tumblr which allows me to upload photos.  So does Flickr.  Picasa doesn't because I use Linux as my operating system.  No problems... Tumblr and Flickr are both more private than Facebook or Google and neither has credit card information.  Failing online storage,  an external USB drive is an affordable backup option.  Plus mine is encrypted with the (free) Truecrypt program which is better than giving Facebook my rights to it.

Rights?  Yes - anything you publish on Facebook belongs to Facebook.  If you put a photo, witty comment or statement, social interaction, or anything on Facebook they can whore it out or use it any which way they want to.  What?  Yes it's true - the fact that you're tagged in a photograph can be used to profile you and target you.   Even if you don't agree to it, if your friends naively agree to have their privacy invaded malicious people can find your details.  Having looked at what an uncertified Facebook developer can do I must tell you that your privacy is history if you play games or use applications on Facebook.

Ultimately, although my initial decision to stop using Facebook was because it interfered with work and offered no REAL social interaction, my ongoing decision not to use it is because I have to acknowledge that I am not a product. You can't sell me. Facebook's chief product is access to the personal details of its users. It already has the credit card information of millions, the Facebook credit is touted to become its own currency, it knows where you are, what you're interested in, who your friends are, what clothes you wear, your religious beliefs, your sexual orientation, your work history, and so much more. And it's willing to sell that information to the highest bidder. You are Facebook's product. Do you want to be a product?

08 April 2012

Three steps to create a self-signed certificate in Apache for Ubuntu 11.10

It is very simple and quick to create a self-signed certificate on your development machine. Of course you would never use this on a production server because self-signed certificates are vulnerable to man in the middle attacks. 

You will need to make sure that you have the ssl-cert and libapache2-mod-gnutls packages installed.

Step One: Use the ssl-cert package to create a self-signed certificate.  This will create the certificate files in /etc/ssl which is where the Ubuntu default Apache configuration expects to find them.

make-ssl-cert generate-default-snakeoil --force-overwrite
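If the ssl-cert package isn't available, the same sort of throwaway certificate can be generated directly with openssl (a sketch - the paths and the localhost subject are placeholders, adjust to taste):

```shell
# Create a self-signed certificate and key valid for one year
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /tmp/selfsigned.key -out /tmp/selfsigned.crt \
    -subj "/CN=localhost"

# Sanity check: print the subject of the new certificate
openssl x509 -in /tmp/selfsigned.crt -noout -subject
```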

Step Two: Activate the SSL module and the default SSL site using the convenience wrappers:

a2enmod ssl
a2ensite default-ssl

Step Three: Restart Apache

service apache2 restart

20 March 2012

Installing a Unified Communications SSL certificate in Microsoft IIS 6.0

Just another working day in Redmond

Being placed in the dire situation where my project has to go live and is being served by a Windows server that has no administrator I was forced to open up my RDP client and venture back in time to the days of dinosaurs and IIS.

Unified Communications SSL certificates are pretty much the only solution I could find to allow a single installation of IIS to share a single certificate that is valid for multiple domains that don't conform to a wildcard. Whew, what a mouthful. In other words, if you have the domains http://www.ihatemicrosoft.com, http://www.apacheisfree.com, and http://www.graphicalinterfacesareforpansies.com, you can use a single SSL certificate to secure them all by setting up Subject Alternative Names.

Getting them up and running was a cinch for me made only slightly more complicated by previous failed installation issues which I had to identify and undo.

Firstly, if somebody else has tried to install the certificate and failed, it's not a bother. Just get the exact details that were used and rekey it (if the issuer allows this). GoDaddy allowed me to instantly request a new certificate, which I was quickly able to install onto the "master" domain (the one that is not a Subject Alternative Name). Thus I was working from a clean canvas, without incorrect or expired certificates lurking around.

I really don't feel like replicating the bazillions of articles written for Microsoft IIS 6.0 so I'll link to an article that is pretty useful and is on a site full of useful articles - How To Install a Certificate in IIS 6.0 .  I personally had to remove the old (expired) certificate and issue a new CRF but hopefully you won't have to go through all that.

Now that you have it installed for your master domain, the next task is to set up the SSL bindings, which is the clever bit and the whole point of using Subject Alternative Names. Using the same IP and port (443) for different sites causes a problem with other sorts of certificates, for obvious reasons. However, the Unified Communications SSL certificate is able to validate a number of domains quite happily; we just need to get IIS 6.0 to bind the SSL 443 ports correctly to the host names.

You have probably already noticed that you can't set host headers for SSL in the IIS manager. That's okay, there is a DOS tool to do this. For non-Linux people this might be very, very scary, but you just need to drop to a command prompt and do a few things. Before you do that, however, click on the root node of your domain list to view a list of domains. Make a note of the long site identifier number and the host header values that identify the site(s) you want to add as Subject Alternative Names.

Now pop to a DOS prompt and follow the advice given at Digicert which helps you to configure the IIS 6.0 SSL host headers using a VB script.  Basically the important thing is to run the following command from c:\Inetpub\AdminScripts (assuming a default IIS installation):

cscript.exe adsutil.vbs set /w3svc/site identifier/SecureBindings ":443:host header"

If you get an error when browsing that refers to an invalid host header, just check that you have correctly matched the site identifier number to the host header in the command above, and rerun it with the correct values to fix it. You may need to stop and start the site (why does IIS not have a restart option, Steve Ballmer?) to get everything happy.

14 March 2012

Preventing Directory Traversal attacks in PHP

Directory traversal attacks occur when your program reads or writes a file where the name is based on some sort of input that can be maliciously tampered with.  When used in conjunction with log poisoning this can lead to an attacker gaining remote shell access to your server.

At its most simple, it could be code that reads and outputs a file like this:

echo file_get_contents($_GET['sidebar']);

The intention is that you call your URL with a parameter indicating which sidebar content you want to load, like this: http://foo.bar/myfile.php?sidebar=adverts.html

This is really terrible practice and would not be done by any experienced developer.

Another common place where directory traversal attacks can occur is in displaying content based on a database call.

If you are reading from or writing to a file based on some input (like GET, POST, COOKIE, etc.) then make sure that you remove paths. The PHP function basename() will strip out paths and make sure that you are left with only a filename.
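The same behaviour is easy to see with the shell's basename utility, which works just like the PHP function:

```shell
# A traversal attempt is reduced to a bare filename
basename '../../../etc/passwd'   # prints: passwd

# A legitimate filename passes through untouched
basename 'adverts.html'          # prints: adverts.html
```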

This is still not foolproof, however, as an attacker would still be able to read files in the same directory.

A safer way to do it is to whitelist the files that are allowed to be included.  Whitelisting is safer than blacklisting, so instead of trying to exclude all malicious combinations we will rather allow only a set of safe options to be used.

Consider the following code as an alternative to the above:

$page = $_GET['page'];
$allowedPages = array('adverts', 'contacts', 'information');
// Only serve the page if it appears in the whitelist (strict comparison)
if (in_array($page, $allowedPages, true)) {
    echo file_get_contents(basename($page . '.html'));
}

You should consider configuring PHP to disallow opening remote URLs with the file stream wrappers by setting allow_url_fopen to Off in your php.ini file. This does mean that you can't use any function that relies on the file stream wrappers (like file_get_contents) to read a URL (you'll need to use curl instead), but it does prevent an attacker from pulling their own content into your site.
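The relevant php.ini lines look like this (allow_url_include, which governs include/require of remote URLs, is a separate directive and is worth turning off at the same time):

```ini
; stop the file stream wrappers from fetching remote URLs
allow_url_fopen = Off
; stop include/require from executing remote code
allow_url_include = Off
```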

On a system configuration scale it's ideal to have each site running in a chroot jail.  By locking down access to the user that your webserver runs under to a specific directory you can limit the impact of a traversal attack.

So in summary:

  1. Use basename() on any variable you use to include a file
  2. Set allow_url_fopen PHP setting to Off
  3. Set a whitelist of files that you allow to be included

Continuous Integration with Jenkins and Git

Jenkins is a free and open source solution for monitoring the execution of jobs, including software project builds.

By monitoring the outcome of a build you are able to provide continuous quality control throughout the development period of a project.  The aim is to reduce the effort required in quality control at the end of development by  consistently applying small amounts of effort to quality throughout the development cycle.

Under the continuous integration (CI) model, developers should consistently integrate their development efforts into the repository. There should be minimal delay between committing code changes and the new build - this allows developers to recognize and correct potential problems immediately. Of course, measures must be in place to flag errors with the build.

The advantages to developers and project managers of having a stable repository to which commits are made and tested are multiple. I don't need to replicate the Wikipedia list here, but suffice to say that I've found that although development is slowed slightly by needing to correct bugs (lol!), the overall quality of code is improved. A drawback that is mentioned on Wikipedia, and which made itself very apparent to me immediately, is the need for a good test suite. You should expect to either assign a developer to coding unit tests or to allocate time for developers to code these as part of their development cycle.

If you're running Ubuntu, installing Jenkins is very easy - a version is included in the repositories and so can be installed with apt-get. There is an excellent resource at rdegges.com that guides you through the installation of Jenkins and will help you get started. I personally found the Jenkins site itself slightly lacking in documentation aimed at first-time users, but there is a large community of users for support. There is a good tutorial for setting up PHP projects here.

Just by the way, the JAVA_HOME variable should be set to /usr/lib/jvm/default-java on Debian distros. This is a symbolic link to the currently installed JVM.

Installing PHPUnit

In case you struggle to install PHPUnit you should have a look at this bug comment on Launchpad which will help to solve the known "coverage" bug in Ubuntu installs.  The following steps are given (and work) to install phpunit on Ubuntu:

sudo apt-get remove phpunit
sudo pear channel-discover pear.phpunit.de
sudo pear channel-discover pear.symfony-project.com
sudo pear channel-discover components.ez.no
sudo pear update-channels
sudo pear upgrade-all
sudo pear install --alldeps phpunit/PHPUnit

Note that I have omitted the last step of the process given on the web, which installs phpunit again with apt-get. That step breaks the installation, because the newly installed version of PHPUnit is incompatible with the CodeCoverage filter and you will get this error: PHP Fatal error:  Call to undefined method PHP_CodeCoverage_Filter::getInstance() in /usr/bin/phpunit on line 39

If you follow the steps given above and install phpunit with pear you should be okay :-)

09 March 2012

Adding a CakePHP based virtual host in Apache 2.2

It's very simple to set up a name based virtual host in Apache 2.2 using the default Ubuntu package.

I'm assuming that you have installed Apache already and that you have edited /etc/apache2/sites-enabled/000-default to change the AllowOverride None to something like this:

<Directory /var/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
</Directory>

If you have not already used this command

sudo a2enmod rewrite

then do so in order to enable mod_rewrite.

Now edit your /etc/hosts file and add an entry that points to the server where you are setting up the virtual host.

The line should look something like this (with an example IP address):

192.168.0.10    mysite.local

Where the IP address points to the server where you are setting up the host and mysite.local is a nickname for the site. Remember to add the .local :)

Now create a file in /etc/apache2/sites-available and name it something that relates to the sitename (for future maintainability). I would suggest naming it the same as the sitename. Edit it and copy this basic skeleton structure into it:

<VirtualHost *:80>
        ServerAdmin webmaster@example.com
        ServerName  mysite.local
        ServerAlias mysite

        # Indexes + Directory Root.
        DirectoryIndex index.php
        DocumentRoot /var/www/mysite/
</VirtualHost>

It is important that the ServerName matches the entry you made in your /etc/hosts file.

Now run the command a2ensite mysite.local - a Debian convenience command that creates the symbolic link from the file you created to the /etc/apache2/sites-enabled/ directory.

You will need to restart Apache (service apache2 restart). If all is well you will be able to navigate to http://mysite.local on your local machine and view the site present on the server at /var/www/mysite

24 February 2012

Consuming Microsoft .NET SOAP server datasets in PHP

Microsoft Just Clowning Around Again
If you're impatient here is the link that this article leads to

SOAP is generally understood to be a simple method for systems to exchange data in a standard manner. This allows for remote systems to make calls on a server application. This sounds like a Good Idea.

Microsoft, however, does not appear to fully understand the concept of SOAP when it comes to providing a SOAP server based on "datasets".

Apparently the use of these datasets makes it much easier for programmers using Microsoft languages to consume web services. Unfortunately it makes things inconvenient for everybody else.

So we have a standard way of doing things, but Microsoft decides to "improve" it and thereby forces everybody else to manually parse their XML responses. What is the point of having a standard method of accessing server methods if Microsoft then makes their implementation inoperable for Java, PHP, Ruby, and Python developers?

Isn't the whole idea of SOAP to allow remote access?  So why make things difficult for everybody except the people who choose Microsoft as their vendor for Server, Desktop, Development IDE, Programming Language, Email, and Security?  What about somebody who wants to use a vendor other than Microsoft for one of these software services?  Is it technically better?  Is it better for a client to be locked into a vendor?  Or does Microsoft make more money by trying to force you to make them your vendor for all software?

Well in any case, if you are trying to consume a Microsoft dataset SOAP packet you will end up needing to write your own helper classes to decipher their code.  That's the bad news.  Trust me, I asked on Stackoverflow, Googled extensively (I didn't use Bing to search though, maybe I should have), and otherwise checked and rechecked why I was not able to handle the SOAP packets being returned by the Microsoft server.

PHP developers can use this code (http://www.bin-co.com/php/scripts/xml2array/) as a start to developing their class.  The code given there will help shortcut the process of reading the Microsoft dataset SOAP response.  If anybody has similar solutions for other languages please feel free to forward them to me.

15 February 2012

Questions for mid-level PHP developer candidates

I often get CVs from developers applying for positions. Some colleges give people a certificate without really giving the candidate any problem-solving skills or real understanding of theory. Here are some standard questions that I ask candidates to complete with pen and paper, without access to Google. They cover basic OOP theory, logic, and basic PHP syntax, and try to get some idea of the candidate's passion for learning.

On the rare occasion that a candidate actually bothers to investigate the company and finds my blog, they will naturally be expected to do well on this quiz. I guess that's bonus marks for being prepared :p

PHP quiz

1)  Explain what SQL injection is and give TWO ways to combat it
2)  If you type hint an interface name in a function argument, what sort of variables can you pass?
3)  What is an abstract class?
4)  How would you call the constructor of a parent class inside a child of that class?
5)  Given two variables $a and $b which contain integer values, swap the values of $a and $b without declaring a third variable and using only the mathematical operators +, *, -, /
6)  Define a class called House that has an owner property and a method called sell that accepts a string parameter which changes the owner
7)  Explain call by value and call by reference. Which method does PHP5 use when passing primitive variable types, and which for objects?
8)  What does AJAX stand for? Write a jQuery AJAX call to 'weather.php' which updates the contents of an element with the results from that file
9)  What is the safest PHP function to use to filter output to prevent XSS?
10) What is the difference between GET and POST?
11) What is your approach to unit and integration testing?
12) What are traits used for and how would you include one in your class?
13) What design patterns are you familiar with? What do you think about the use of the Singleton pattern in PHP?
14) Write a program to roll two six-sided dice 10,000 times. Sum the two values on each roll. At the end of the program run, output the average sum of all the rolls.
15) What is the value of $a if $a = ( '42' === 42 ) ? 'answer one' : 'answer two';

Feel free to use any or all of these questions if you like them.  They're awkwardly formatted on the blog because of the template I'm using.  I have shared a raw copy on Google Docs.

I have seen question number 5 done in a single line by the way (usually it takes three).
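For the curious, the standard three-step answer to question 5 needs only addition and subtraction (sketched here as shell arithmetic rather than PHP):

```shell
a=5; b=9

a=$((a + b))   # a is now 14 (the sum of both values)
b=$((a - b))   # b = 14 - 9 = 5 (the old a)
a=$((a - b))   # a = 14 - 5 = 9 (the old b)

echo "$a $b"   # prints: 9 5
```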

06 February 2012

Reverse Engineering an MS-SQL database without Visio

The splash screen for Squirrel SQL

I'm working on a project that draws from a Microsoft SQL database. Unfortunately there is no project documentation, which means that it takes longer to become familiar with the design. I particularly wanted an ERD of the database, but this wasn't available. So I looked for open source reverse engineering tools and found Squirrel SQL. This is a very handy tool as it supports a variety of databases and client operating systems.

Installing the Microsoft JDBC (available from the Microsoft site) was a snap:

  1. Download the archive and extract it somewhere meaningful (I put mine in a directory inside Squirrel).
  2. Edit the Microsoft SQL driver in your driver list.
  3. Add an extra class path entry and point it to the JDBC4 jar file (version 4 is required for newer versions of the JDK).
  4. The driver should load now.

Then proceed to add your connection alias as normal and you're connected to your MS-SQL database.

The plugin to reverse engineer your database is called "Graph".  Simply connect to your database and select the tables you want.  Right click them and choose "Add to graph" from the context menu.

03 February 2012

Online file resizer

Kraken is an online image compressing utility that compresses jpeg, gif, and png formats using a new algorithm. It claims that the compression of existing files can be losslessly improved.

Does it work?

I tried it on a random file on my hard drive and the algorithm reduced the size from 853 KB to 729 KB (about a 14.5% reduction).
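A quick sanity check on that percentage with awk:

```shell
# (853 - 729) / 853, as a percentage of the original size
awk 'BEGIN { printf "%.1f%%\n", (853 - 729) / 853 * 100 }'   # prints: 14.5%
```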

Here is the original file (click to view full size):

And here is the reduced file:

23 January 2012

Screen capture in Android 2.2.1 "Froyo"

My Android Desktop snapped with this method
I took a screen capture by mistake once but then struggled to repeat the behaviour.  After Googling for a solution I found some very complicated solutions.  Probably the best way to do this is to buy an application on the Market, but I don't want to spend money on a toy.

If you look in your "Settings » Applications » Running Services" menu you should see a service called "ScreenCaptureService".

This allows you to take a screenshot by pressing the "Back" and "Home" buttons simultaneously.

What works for me is to press and hold "Back" and then to press and release "Home" (while holding "Back").  This makes a snapshot noise and displays a message.  Files are saved to the ScreenCapture directory on your SD card and should appear in your gallery.

Of course this is a problem if you try to take a snapshot of a running application because pressing "back" causes it to quit.  I could not find a workaround for this and there doesn't seem to be anything much on Google.  I'm not sure if they did this deliberately or if for some reason the developers didn't consider this when implementing the system.  It seems too obvious an oversight to be anything but deliberate to me.

For more recent versions of Android you should press the Power and Volume Down buttons together for 2-4 seconds.

If your phone has a physical button for the home button then you may need to try pressing power and your (physical) home button together for 2-4 seconds.  Make sure you press them together.