
Standardizing infra - LXD is not Docker

LXD logo from Ubuntu blog
Selling a software package that is deployed to a customer's data-centre can be challenging because of the diversity you find in physical infrastructure.

It is expensive to make (and test) adjustments to the software that allow it to run in a non-standard manner.

Maintaining a swarm of snowflakes is not a scalable business practice. More deployment variations means more documentation and more pairs of hands needed to manage them. Uniformity and automation are what keep our prices competitive.

Opposing the engineering team's desire for uniformity is the practical need to fit our solution into the customer's data-centre and budget. We can't define a uniform physical infrastructure that all of our customers must adhere to. Well, I suppose we could, but then we would only have the very small number of customers who are willing and able to fit their data-centre around us.

We are therefore on the horns of a dilemma. Specifying standard physical hardware is impractical and limits the sales team. Allowing the sales team to adjust the infrastructure means that we have to tailor our platform to each deployment.

Virtualization is the rational response to this dilemma. We can have a standard virtualized infrastructure and work with the sales team and the customer to make sure that the hardware is able to support this. This way we'll know that our software is always deployed on the same platform.

LXD is a hypervisor for containers and builds on LXC, the Linux container technology that Docker was originally built on. It's described by Stéphane Graber, an Ubuntu project engineer, as a "daemon exporting an authenticated representational state transfer application programming interface (REST API) both locally over a unix socket and over the network using https. There are then two clients for this daemon, one is an OpenStack plugin, the other a standalone command line tool."

It's not a replacement for Docker, even though containers provided by the Linux kernel are at the root of both technologies. So what is LXD for? An LXD container is an alternative to a virtual machine running in a traditional hypervisor.
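To make that concrete, here's a minimal sketch of what working with an LXD container looks like. Note that LXD's command-line client is (confusingly) called `lxc`; the container name and image alias are just illustrative:

```shell
# Launch a container from the official Ubuntu image server
# ("app01" is an example name, not anything prescribed)
lxc launch ubuntu:24.04 app01

# List containers and their addresses, as you'd list VMs in a hypervisor
lxc list

# Get a shell inside the container, much as you'd SSH into a VM
lxc exec app01 -- bash
```

The workflow feels like managing lightweight virtual machines rather than the single-process, image-layering workflow Docker encourages.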

With virtualization (or LXD containerization), if your customer has limited rack-space or a limited budget you can still sell your software based on a standard platform. You can take whatever metal is available and use LXD containers to partition the resources into separate "machines".
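Partitioning the metal is a matter of setting resource limits on each container. A sketch, using LXD's `limits.*` configuration keys (container name and sizes are illustrative):

```shell
# Carve a standard "machine" out of the available hardware by
# capping what each container may consume
lxc launch ubuntu:24.04 app01
lxc config set app01 limits.cpu 2        # restrict to 2 CPU cores
lxc config set app01 limits.memory 4GB   # cap memory at 4 GB

# Verify the applied configuration
lxc config show app01
```

Whatever the underlying hardware, each container then presents the same standardized "machine" to the software running inside it.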

If you like, you can use Docker to manage the deployment of your software into these LXD containers. Docker and LXD are complementary!
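Running Docker inside an LXD container needs one extra step, because Docker has to create its own namespaces and cgroups within the container. LXD supports this through its `security.nesting` option. A sketch (container name is illustrative):

```shell
# Allow the container to create nested containers, then restart it
lxc config set app01 security.nesting true
lxc restart app01

# Install and use Docker inside the LXD container as usual
lxc exec app01 -- sh -c "apt-get update && apt-get install -y docker.io"
lxc exec app01 -- docker run hello-world
```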

In practical terms you can use tools like Ansible to automate the provisioning of your LXD containers on the customer's metal. You are able to define in code the infrastructure and platform that your software runs on. And that means your engineering team wins at automation and your sales team wins at fitting the software into the customer's data-centre.
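As a small sketch of the Ansible side: the `community.general` collection ships an `lxd` connection plugin, so Ansible can talk to containers directly through the LXD daemon rather than over SSH (the container name here is illustrative):

```shell
# Install the collection that provides the LXD connection plugin and modules
ansible-galaxy collection install community.general

# Ad-hoc ping of a running container over the lxd connection plugin,
# using an inline inventory of one host
ansible all -i 'app01,' -c community.general.lxd -m ping
```

From there, full playbooks can create the containers, set their limits, and deploy your software, giving you the whole platform as reviewable code.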
