This blog post is about my understanding of the versioning aspect of true RESTful APIs, or as I'm going to refer to them from here on, hypermedia APIs, and how link context, the 'rel' attribute in particular, lets you get away without versioning your API while keeping clients from breaking. The rest of this post assumes that you are familiar with the Richardson Maturity Model and with modern MVC frameworks like Symfony or Rails.
The routing component of such frameworks serves a double purpose:
- First and foremost, it lets framework users handle different URIs by routing them to various controller actions.
- Secondly, and what is more relevant to this blog post, routing lets framework users create route aliases and then use them to generate links in the view.
For example, if we defined a route called 'home' for the URI '/', we could then generate a link to it in the view with something like `<a href="<?php echo $router->generate('home'); ?>">Home</a>`. This proves incredibly useful when you later change the actual URIs behind the routes, since you don't have to modify the views.
From the API design point of view, when links carry a 'rel' attribute, as in `<link href="/" rel="home" />`, hypermedia clients don't need to know the actual URI of the 'home' resource, so that URI can change without the client needing modification, just like you won't break your website by changing the URIs of some controller actions when using a modern framework. And you don't version your website just because the 'home' link now points to '/index' instead of '/'.
By the way, when designing websites we already provide this kind of context to our users: we put the text 'Home' inside the 'a' tag in the site navigation, e.g. `<a href="/index">Home</a>`, to tell them what the hyperlink is for, not where it points; the purpose changes far less frequently than the URI.
Say goodbye to versioned "RESTful" APIs, and welcome discoverable hypermedia APIs!
P.S. You could even use your real route names, the ones you define in your favorite framework, as values for the 'rel' attributes of your links. Additionally, to make the 'rel' attribute even more useful, you could structure your 'rel' attributes like URIs, e.g. `<link href="/" rel="/rels/index" />`. You could then use those 'rel' URIs to provide documentation for the corresponding resource, e.g. docs for the root resource would live under '/rels/index'. Finally, '/rels' could list all available documentation. This should enable users to find documentation for your API by interacting with it.
Working at OpenSky has been a rewarding and exciting experience: I've met developers from around the world, attended conferences (even got to speak at one), helped build open source software and have seen others consider my contributions valuable. However, the next two weeks will be my last at OpenSky, after which I'm moving to San Francisco to join Twilio. This post is a recap of my days at OpenSky and the thoughts that pushed me to make this decision.
I started at OpenSky in mid-August 2009, almost two years ago. At that time the company had a total of about ten employees, with a technology department consisting of two people: an enthusiastic CTO and a bright software engineer. I was the second software engineer hired at OpenSky, which allowed me to see the company grow from its infancy and take part in most of the technical decisions made here. It was the kind of position at a young, promising startup one can only dream of.
“Follow your dreams, because life is too short”
John Caplan, CEO and co-founder, OpenSky.
During these two years OpenSky survived an office move, several system rebuilds, one major pivot and two CTOs, and is now growing more rapidly than ever before. We have about fifty in-house employees, and the technology team grew from four to over a dozen engineers and sysops workers. In addition, OpenSky has a product team of almost the same size, consisting of great product managers, a creative director and several front-end and interaction designers. Revenue and member numbers have been growing exponentially every month since the latest re-launch in April, showing the true potential of the company.
OpenSky is the most successful company and the smartest team (engineering, product and business) that I have ever been a part of.
I was always comfortable here. I've been lucky enough to spend a lot of time working with different open source ecommerce systems, studying how they solved similar problems, and I got to pick the solutions that worked best for me even prior to joining the company. In fact, almost all the websites I've worked on professionally (for money) were ecommerce related, and since I had experience building them before I started at OpenSky, most of the problems I've been solving there I had already solved or had seen solved somewhere else.
PHP has been my tool of choice, as its ability to solve a great number of web-related problems is still unmatched. Thanks to OpenSky's modern approach to software development and my obsession with programming, I've come to learn what clean code looks like, at least in PHP, practiced Test Driven Development and got involved in the open source community. We always worked with the best tools available at the time, even if their stability or completeness had yet to be proven. We thought it was better to start with something promising that we could help grow than to force ourselves into tools we had already learned were limited. That was overwhelming at times, and I appreciate the trust and support management showed us during those periods; otherwise those were very exciting times. Most of the tools we use now are either stable or close to it, and for me the sense of innovation is gone. As someone said, "if you understand what you're doing, you're not learning anything". So here I am, with more than four years of experience building small to medium ecommerce systems in PHP, building yet another ecommerce system, albeit the most successful one so far. Comfort is the word that best describes my current situation. And comfort is something I feel I'm too young to settle for. I need a challenge, and since PHP is widely used to solve a rather narrow set of problems, I realize how many of the computer science fundamentals (algorithms and data structures, memory management, processes, threads, locks, networks) I've never had to deal with.
There is a great idea expressed in Chad Fowler's Passionate Programmer: one should always try to work in a team where one is the worst member. This doesn't mean you need to be dumb or unpassionate about what you do; rather, try to work among people more talented and experienced than yourself. In other words, to become a better chess player, play against a more skilled opponent.
When it comes to challenge, Twilio is a unique company. It is the only company I know of that provides telecommunications (voice and SMS) as a service. The initial version of Twilio's product was built entirely in PHP by the company's CEO and co-founder, Jeff Lawson, and the majority of that code is still in use. As a result, it has a complex architecture, uses a variety of technologies for a large set of different and rare problems, and has a brilliant team of engineers experienced in scalability, networks, databases and API design.
We've been through a lot together, OpenSky and I, and it's sad that our affair is ending. However, my dream of becoming one of the world's most knowledgeable people in software development awaits, and I'm quite confident Twilio will bring it even closer to reality.
Until next time, Bulat
Boy, it’s been a while since my last post. I haven’t been blogging partially because I had nothing to say and partially because I had no time. This post will hopefully break the silence and at the same time be useful to my fellow PHP developers out there.
I’ve been talking about clean code and testability for quite some time now. It is simply impossible to cover all the techniques and explain them to a new audience in 40-some minutes during a meetup or a conference.
In this post I will share some of the techniques I use when designing the code-bases of open-source libraries I'm working on, and how I think the designs I chose help others keep their code clean and testable. This post was prompted as a sort of followup to discussions like the one we had on the Symfony2 dev mailing list recently. Here I want to state my opinion and provide reasoning, for what it's worth.
Start with final classes:
When coding a class, I usually use TDD, meaning I write the test for the class before the actual implementation. At that point I usually have no idea what that class is going to look like, what public API it is going to have (unless I have already partially discovered it while testing another class), which role in the class hierarchy it will take, or whether it will have one at all. So I start out by declaring the class as final and using private properties and methods, because at that point the class is final and not part of any inheritance tree.
This both keeps me from carelessly extending the class later on and forces me to think about how I want the class to be extended.
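As a sketch of that starting point (the class name and behavior here are invented for illustration), a first iteration usually looks like this:

```php
<?php

// Everything starts out locked down: the class is final and its
// internals are private, because nothing extends it yet.
final class InvoiceCalculator
{
    private $taxRate;

    public function __construct($taxRate)
    {
        $this->taxRate = $taxRate;
    }

    public function total($amount)
    {
        return $amount + $this->applyTax($amount);
    }

    private function applyTax($amount)
    {
        return $amount * $this->taxRate;
    }
}
```

If the class later needs to join a hierarchy, removing `final` and selectively relaxing `private` to `protected` becomes a deliberate act rather than an accident.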
Mock or Stub by Interface:
During the coding of my class, I start seeing some of its dependencies and what they should be doing. At this point I don’t want to code real classes of my object collaborators yet, but I need those collaborators at the same time. So I create an interface for my future collaborator and mock it in the test.
The reason I advise mocking interfaces is simple: concrete classes can be final or can have some of their methods declared final, at which point mocking is impossible. As we know, an interface in PHP (and in OOP in general) is a contract for the classes that implement it, as well as for the classes that collaborate with its implementations by type-hinting their methods. It makes sense to use such a contract wherever you want to replace an actual class instance with a test double (be it a stub or a mock), since either way the double is an alternative implementation of the real object and needs to follow the same contract. Also keep in mind that some language specifics of PHP encourage you to use interfaces.
NOTE: A mock in PHP should conform to the type-hint of the class being mocked in order to mimic that class. Internally, PHPUnit generates a new class with an obfuscated name that extends or implements the class or interface being mocked. Hence, if the class-to-be-mocked has final methods, they won't get overridden in the mock, which may lead to unexpected behavior in tests. Worse, if the concrete class later changes some of its methods to final, tests that once worked will start breaking even though no real public API change occurred.
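To illustrate (NotifierInterface and OrderProcessor are hypothetical names, and the mock API shown is PHPUnit 3.x-era), mocking by interface looks roughly like this:

```php
<?php

// The contract for a collaborator that may not even exist yet
interface NotifierInterface
{
    public function notify($message);
}

class OrderProcessorTest extends PHPUnit_Framework_TestCase
{
    public function testProcessNotifies()
    {
        // Mock the interface, not a concrete class: final methods
        // can never get in the way, because interfaces have none.
        $notifier = $this->getMock('NotifierInterface');
        $notifier->expects($this->once())
                 ->method('notify')
                 ->with('order processed');

        $processor = new OrderProcessor($notifier);
        $processor->process();
    }
}
```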
Refactor to inheritance:
Starting with final classes is important because it forces us to make an extra step on our way to inheritance, and there is a reason I want that step. Inheritance is a very useful and powerful feature of OOP (I feel like I've heard these exact words hundreds of times already) and I am not trying to de-value it. When it comes to programming, inheritance is a way to extend code by adding custom behavior to child classes without re-implementing what already works in the parent, which is great and helps code reuse a lot.
However. In languages like PHP, where we poor developers don't have means of horizontal code reuse (yet?) like mixins or multiple inheritance, extending one class also means it will not be possible to extend another. I personally feel that such a decision is very serious, and I try to defer making it until I know more about the system I'm building and the problem I'm solving. Programmers might find themselves in the middle of interesting problems if that principle is not followed.
Typically that means that when I finally do extend some class:
- I have an interface that I need to conform to
- Classes at the bottom of my hierarchy are typically final
- Classes at the top of the hierarchy are usually abstract
- Most of the class members are private
- Only methods and properties that need to be extended are protected
For every class operating on internal collaborators there has to be an interface:
The statement above might not be clear to everyone, so before justifying it, let me be clear on what I mean.
Assume you have a library that sends emails (SwiftMailer). That library has the Mailer class and Transport classes; the Mailer can be configured with a Transport of choice (think SMTP, SendMail, etc.). What I mean is that the Mailer class should have a MailerInterface that it implements, because the class relies on collaborators to work. On the other hand, classes that are only responsible for tracking their internal state, like value objects or PHP's DateTime, don't need an interface.
The rationale here is simple: whenever I need to test a class that collaborates with Mailer, I don't want to spend time on a complicated setup of the Mailer object. Instead, I want to mock it and tell the mock how it should behave in the test. The presence of the interface makes that much simpler.
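A sketch of that layout, loosely modeled on SwiftMailer (names and methods simplified for illustration):

```php
<?php

// The Mailer relies on collaborators, so it gets a contract...
interface MailerInterface
{
    public function send($message);
}

class Mailer implements MailerInterface
{
    private $transport;

    public function __construct(TransportInterface $transport)
    {
        $this->transport = $transport;
    }

    public function send($message)
    {
        return $this->transport->deliver($message);
    }
}

// ...so a class collaborating with it can type-hint the interface,
// and its tests can pass in a one-line mock instead of a fully
// configured Mailer with a real Transport behind it.
class SignupHandler
{
    private $mailer;

    public function __construct(MailerInterface $mailer)
    {
        $this->mailer = $mailer;
    }
}
```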
For every class that will be used in user-land code there must be an interface:
The rationale here is also somewhat simple: let’s be kind to other developers, and provide them a shortcut to stub or mock our library classes they will have to interact with directly from the classes they own, without having to worry about complicated setups in tests.
Every part of the library that can be extended must have an Interface:
If a user wants to provide alternative implementation of some class and the library is designed to allow that, there must be an interface that the user class can implement. In case of SwiftMailer, that means a TransportInterface to let us provide alternative email transportation means.
NOTE: Even if you designed an abstract class that needs to be extended, there should be an interface that lets users of the library write their own implementation from scratch. While an Interface is a contract, an Abstract Class is a suggestion and should not be considered a contract on its own.
Don’t force users of your library to use static methods:
I feel static methods are probably one of the biggest lies in OOP. They give you a sense of object oriented design, while they really are functions that live in global space, that cannot be encapsulated or replaced with test implementations if used inside objects and lead to all sorts of problems. There, I said it. Now let me try to explain myself.
When one calls a static method, it looks like Class::method(), which means our code is all of a sudden dependent on the class Class (I know…), which, despite all of our interfaces and best practices, binds us to a concrete implementation and, most importantly, prevents us from verifying in tests that our code actually calls this method internally (unless we modify state from inside the static method itself, which is asking for even more trouble).
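A contrast sketch (class names invented) of that testability problem:

```php
<?php

// Hidden dependency: nothing outside this class can see that it
// talks to PaymentGateway, and nothing can substitute a test double.
class StaticOrder
{
    public function pay($amount)
    {
        return PaymentGateway::charge($amount);
    }
}

// Explicit dependency: the collaborator arrives through the
// constructor and can be mocked via its interface in tests.
class Order
{
    private $gateway;

    public function __construct(PaymentGatewayInterface $gateway)
    {
        $this->gateway = $gateway;
    }

    public function pay($amount)
    {
        return $this->gateway->charge($amount);
    }
}
```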
When designing a system, especially the one to be used by others, one should concentrate on extensibility and flexibility. When I say extensibility, I don’t mean “leave all your classes open to inheritance, use protected properties everywhere, so that any class could be extended and changed to the core”. In fact, a technique like that kills flexibility on the side of library developers and maintainers by making refactoring impossible.
Extensibility means "let the system be extended to perform more than what it initially can", and there are many means of achieving it, composition and dependency injection being the most powerful ones. A well-designed system will result in a stable API that can be extended over time without worrying about backwards compatibility (BC) breaks, just as the open/closed principle suggests: make no changes to the core class and extend or decorate it to achieve more.
NOTE: A "refactoring" that breaks BC cannot be considered refactoring, as refactoring by definition is "…changing source code without modifying external behavior" in order to improve code reuse and design. Code that is used by end users can be considered public.
NOTE: Dependency Injection means the user injects the dependencies in the setup part of the application; it definitely does not mean the user can pull in dependencies via a service lookup (that pattern would probably be called Dependency Sucking). In dependency injection you only pass around what's needed, rather than shoving all objects into some kind of service locator class (think Registry) and letting other classes extract what they need. One advantage of DI is that by re-assembling the system's components we can achieve different behavior from the end system; that advantage is lost when service lookups are hard-coded in concrete classes. I feel this clarification is important, as even some of the most well-known PHPers mix up the terms sometimes, let alone everyone else.
This post is mainly a reminder to my future self in case I need to do something like this again.
Using bleeding-edge technologies on Windows has always been a painful process, mainly because not many LAMP developers use Windows (it's just not in the acronym), which leads to poor support for the OS and a lack of learning material.
After playing with NodeJS and watching Ryan's presentation, I realized all the drawbacks of Apache, my default web server for many years, and decided to give nginx a shot.
- Download and install a copy of the most recent PHP version for Windows (PHP 5.3.3). Please note that since we're not going to be using Apache, you can download the non-thread-safe version compiled with VC9.
- The second step is to get the nginx executable from the download section of the nginx website. On Windows it's as simple as unzipping the file into the c:\nginx directory.
- After that is done, we need to configure nginx to work with PHP. Open c:\nginx\conf\nginx.conf and create a server config that maps http://your_app.lcl/ to c:\www\path\to\your\website on your filesystem and tells nginx to serve all *.php files through a FastCGI server on port 9000.
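The embedded config did not survive in this copy of the post; a minimal server block along those lines (using the paths mentioned above) might look like this:

```nginx
server {
    listen       80;
    server_name  your_app.lcl;
    root         c:/www/path/to/your/website;

    location / {
        index  index.php index.html;
    }

    # Hand all *.php requests to the FastCGI daemon on port 9000
    location ~ \.php$ {
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }
}
```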
- Now that we have configured nginx to use FastCGI for *.php files, we need to start a FastCGI server daemon. From your Windows console run C:\php\php-cgi.exe -b 127.0.0.1:9000, where C:\php is the PHP installation path.
- You can access your application at http://your_app.lcl/ after starting the nginx process via c:\nginx\nginx.exe.
I personally feel that conventions should be best practices, not inevitable parts of frameworks. Conventions are good, but they can kill testability: while they save you time you would otherwise have spent on configuration, they also limit the granularity of your interfaces.
My recent example of not testable controllers and how it could have been fixed was very well received amongst fellow Symfony2 developers, so that gives me enough confidence to propose something else.
There is another major part of the framework that can hardly be tested, as it relies on Symfony's internals and cannot use the DIC for its own configuration: console commands. They are registered by a manual scan of each bundle's Console directory, and therefore cannot be configured through the DIC with all their dependencies declared explicitly; instead they just get the generic Container instance.
Or can they? The answer is: “Yes, they can”.
And it wouldn’t be a lot of work to switch that. All we need to do is register each command in DIC as a service, and use tags to specify that this service is a command:
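The original gist is no longer embedded here, but the idea can be sketched as follows (the service id and tag name are illustrative):

```xml
<service id="console.command.assets_install"
         class="Symfony\Bundle\FrameworkBundle\Command\AssetsInstallCommand">
    <!-- the tag marks this service as a console command, so the
         framework can collect all tagged services instead of
         scanning bundle Console directories -->
    <tag name="console.command" />
</service>
```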
Since I’m gonna be testing the already existing class, the test will not be as elegant as it could have been:
While writing this test, I found out the command wasn't testable because of a hard-coded mkdir function call that I couldn't mock out. To fix it, I found the existing Symfony\Bundle\FrameworkBundle\Util\Filesystem::mkdirs() method that wraps it and makes it mockable, which I then proceeded to use. The only other changes I had to introduce were to get rid of the Container dependency and add Symfony\Bundle\FrameworkBundle\Command\AssetsInstallCommand::setKernel and Symfony\Bundle\FrameworkBundle\Command\AssetsInstallCommand::setFilesystem methods for direct injection of the primary dependencies.
So here it is - the modified AssetsInstallCommand, that is fully unit-tested:
And here is the result of running it in PHPUnit:
As always, feedback is much appreciated, and I have nice DISQUS Comments for that very purpose now!
P.S. While I was posting this, and embedding my thoughts in public gists, Kris Wallsmith suggested to use the tags to specify command names as well, which is a very interesting suggestion.
P.P.S. The code I provided in this post is available in my GitHub repository, and is built on top of Henrik's efforts.
As some of you might know, the Symfony2 framework consists of two main ingredients: Components and Bundles.
The logical separation should be the following:
The Symfony Components are standalone and reusable PHP classes. With no pre-requisite, except for PHP, you can install them today, and start using them right away. (Symfony Components Web Site)
Of course, there are various vendor libraries that Symfony2 uses that are neither Components nor Bundles. It's important to remember that in order to expose their functionality in your Symfony2 application and make it accessible, you have to create a Bundle. It's a good practice and an unwritten convention.
I think that the main reason for doing so is to avoid setting up third party libraries yourself and delegate that to Symfony2’s DIC component, which was built for that very purpose. This lets other developers overload some of your configuration, class names and parameters without modifying your core classes and breaking backwards compatibility.
DIC stands for Dependency Injection Container.
The main idea behind Dependency Injection Containers is to extract all the instantiation and wiring logic from your application into a well-tested, dedicated component, avoiding the code duplication that inevitably happens if you practice Dependency Injection and testability without a DIC. By removing all of the setup code, Symfony2 removes another source of error and lets you concentrate on your domain problems instead of object instantiation.
Each object in the Symfony2 DIC is called a Service. A Service is an instance of some class, created either by direct instantiation using the 'new' construct or by some other Service's factory method, that gets certain dependencies injected into it as part of the instantiation process.
It is much easier to understand how services are configured by looking at an example configuration:
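The original example is missing from this copy of the post; below is a sketch of what such a services.xml might look like for a hypothetical PaymentBundle (the service id and parameters are invented, chosen to stay consistent with the rest of this post):

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<container xmlns="http://www.symfony-project.org/schema/dic/services"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.symfony-project.org/schema/dic/services
                        http://www.symfony-project.org/schema/dic/services/services-1.0.xsd">

    <parameters>
        <parameter key="payment_gateway.class">MyCompany\PaymentBundle\Gateway</parameter>
        <parameter key="payment_gateway.api_key">change-me</parameter>
    </parameters>

    <services>
        <service id="payment_gateway" class="%payment_gateway.class%">
            <argument>%payment_gateway.api_key%</argument>
        </service>
    </services>
</container>
```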
I personally find it very readable.
During the container instantiation, the XmlFileLoader takes the above-mentioned services.xml file and transforms it into PHP code, which looks similar to the following pseudo-code:
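The generated code is also missing here; as a rough sketch (real generated containers differ in details), the compiled service getter resembles:

```php
<?php

// Pseudo-code for the compiled 'payment_gateway' service getter
protected function getPaymentGatewayService()
{
    // Services are shared: return the existing instance if built already
    if (isset($this->services['payment_gateway'])) {
        return $this->services['payment_gateway'];
    }

    $class = $this->getParameter('payment_gateway.class');
    $instance = new $class($this->getParameter('payment_gateway.api_key'));

    return $this->services['payment_gateway'] = $instance;
}
```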
Now you have a sort of bird's-eye view of how your objects are built and interact, all in one place. No need to open some bootstrap file to see how everything gets wired together, and most importantly, no need to touch your code in order to change the wiring. Ideally, we want the application to be able to perform completely different tasks just by re-arranging some dependencies.
NOTE: All of your DI xml (or yaml or php) configurations need to live under <bundle name>/Resources/config directory of your application, in our example, I would store the configuration in MyCompany/PaymentBundle/Resources/config/services.xml.
The next step is to let your Symfony2 application know that you have this service configuration and want it to be included in the main application container. The way you do it is very conventional, although I know at least one way to make it configurable, but that’s a different topic and deserves its own blog post.
In order to include your custom configuration, you usually need to create something called a Dependency Injection Extension. A DI Extension is a class that lives under the <bundle name>/DependencyInjection directory, implements Symfony\Component\DependencyInjection\Extension\ExtensionInterface, and whose name is suffixed with 'Extension'.
Inside that class, you need to implement four methods:
- public function load($tag, array $config, ContainerBuilder $configuration);
- public function getNamespace();
- public function getXsdValidationBasePath();
- public function getAlias();
Or you could choose to extend Symfony\Component\DependencyInjection\Extension\Extension and have to worry only about the last three.
Let’s look at an example extension, that would register our services.xml configuration file with Symfony2’s DIC:
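The extension class itself is missing from this copy of the post. Below is a reconstruction consistent with the method list above; note that the loader APIs churned a lot in pre-release Symfony2, so treat the XmlFileLoader usage in particular as a sketch:

```php
<?php

namespace MyCompany\PaymentBundle\DependencyInjection;

use Symfony\Component\DependencyInjection\Extension\Extension;
use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\DependencyInjection\Loader\XmlFileLoader;

class PaymentExtension extends Extension
{
    public function load($tag, array $config, ContainerBuilder $configuration)
    {
        // Lazy-load services.xml only once, to avoid conflicts
        if (!$configuration->hasDefinition('payment_gateway')) {
            $loader = new XmlFileLoader($configuration, __DIR__.'/../Resources/config');
            $loader->load('services.xml');
        }

        // Let users override default parameters from the app config
        if (isset($config['api_key'])) {
            $configuration->setParameter('payment_gateway.api_key', $config['api_key']);
        }
    }

    public function getXsdValidationBasePath()
    {
        return __DIR__.'/../Resources/config/schema';
    }

    public function getNamespace()
    {
        return 'http://mycompany.com/schema/dic/payment';
    }

    public function getAlias()
    {
        return 'payment';
    }
}
```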
This extension does several things:
- It will include the services.xml into DIC only if ‘payment_gateway’ service is not yet defined - this is to avoid conflicts and lazy-load the configuration
- It will override some of default parameters, if you specify your own when enabling the extension
- It also provides the XSD schema location and base path for validation of XML configuration
After you have created the extension, all you need to do is add PaymentBundle to the array returned by the application's Kernel::registerBundles() method. Then, in the application configuration file, specify something like 'payment.config: ~' (assuming you're using YAML configs). That should do it: you should now be able to call $container->getService('payment_gateway') and get a fully set up instance of Gateway.
During the preparation for my recent talk, I had to refresh my knowledge of the theoretical side of Dependency Injection; I had been using it for a while but found myself not knowing some of the terminology. I found the "Dependency Injection" article on Wikipedia to be very fulfilling. One thing that specifically caught my attention was the types of Dependency Injection, of which Wikipedia defines three:
- Interface Injection (type 1)
- Setter Injection (type 2)
- Constructor Injection (type 3)
While the last two types are what I use in day-to-day coding, the first type was rather unfamiliar. So I looked it up, and Martin Fowler had a good definition.
[Interface Injection] - [is a technique] to define and use interfaces for the injection (Martin Fowler)
The examples on Martin Fowler's blog are quite complicated and would confuse PHP developers more than benefit them. But the general idea, which resonates with my point on Composition over Inheritance, is very simple: allow a certain service to be injected into all services that implement a common interface.
Example use case:
I have an interface ContainerAware, and I want all services that implement that interface to receive an instance of Container through their ->setContainer() method. This would be very useful in Symfony2 if you were defining controllers as services and needed the container. Right now there is a hard-coded 'instanceof' check deep inside Symfony2's controller instantiation process that sets the container on ContainerAware controllers. If the Symfony2 DIC component supported the first type of Dependency Injection (Interface Injection), this check would not be necessary, and many more interfaces could be defined, removing verbosity from the configuration.
Enough talking, show us some code:
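The code for this section is gone from this copy of the post; a minimal reconstruction (the controller name and template are invented) of the explicit example might be:

```php
<?php

use Symfony\Component\DependencyInjection\ContainerInterface;

// The interface whose implementors should receive the container
interface ContainerAwareInterface
{
    function setContainer(ContainerInterface $container);
}

// A controller defined as a service that needs the container
class HelloController implements ContainerAwareInterface
{
    private $container;

    public function setContainer(ContainerInterface $container)
    {
        $this->container = $container;
    }

    public function indexAction($name)
    {
        return $this->container->get('templating')
            ->render('HelloBundle:Hello:index.php', array('name' => $name));
    }
}
```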
NOTE: Actually any controller that extends Symfony\Bundle\FrameworkBundle\Controller\Controller implements ContainerAwareInterface, as it inherits it from parent, but for the purposes of this example, I chose code that shows everything explicitly.
NOTE: There is no way to define Interface Injectors in Symfony2 yet, as it doesn’t support the first type of Dependency Injection. This is my proposed way of configuring interface injectors in xml.
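A sketch of what that proposed XML could look like (hypothetical syntax, not something Symfony2 understands today):

```xml
<!-- hypothetical: the call is applied to every service
     implementing the given interface -->
<interface class="Symfony\Component\DependencyInjection\ContainerAwareInterface">
    <call method="setContainer">
        <argument type="service" id="service_container" />
    </call>
</interface>
```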
All we would need to do now is register our controller within the Symfony2’s DIC component without defining the method call explicitly, since the DIC would know to apply interface injectors to all services, requiring them:
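With interface injectors supported, registration could shrink to a bare definition (sketch, hypothetical ids):

```xml
<!-- no explicit <call method="setContainer"> needed: an interface
     injector for ContainerAwareInterface would apply it automatically -->
<service id="hello.controller"
         class="Application\HelloBundle\Controller\HelloController" />
```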
The described approach makes DIC configurations take less space to achieve the same result: if you implement a certain interface that defines rules for injection of other services, you usually need to re-use the same services across all implementations of that interface, yet currently you have to define the injections explicitly every time.
I can see some of the Symfony2’s built-in services benefiting from Interface Injection:
- ‘event_dispatcher’ and ‘debug.event_dispatcher’
- ‘templating.loader.filesystem’, ‘templating.loader.cache’ and ‘templating.loader.chain’
I definitely have a lot more classes, that can benefit from it in my problem domain.
So with all that in mind, I spent last evening putting together my own implementation of Interface Injection in Symfony2's DIC component; you can find it in my Symfony2 fork on GitHub.
Please comment with suggestions and/or feedback; I'm very interested in what you think.
Having a rich background in different MVC frameworks, one thing that was always unclear to me was how to test controllers. I think the main reason it wasn't obvious is that controllers have always been sort of a black-magic element in frameworks. There were too many conventions about where on the filesystem a controller should be located and what dependencies it should have knowledge of, and those dependencies were always wired to the controller the hard way (view layer).
An environment like that leaves no easy way to test controllers, since you can't just instantiate a controller with some of its primary dependencies to test the interaction; you have to boot the whole framework and write functional tests.
Because of how complicated that process is, people usually don't unit-test controllers; a functional test is the most you can hope for, and usually you don't get anything at all.
Symfony2 changes that completely.
Initially, the Symfony2 framework had only the conventional controller loading approach. A controller instance was still very lightweight and was not required to extend some parent class in order to work. If your controller implemented the ContainerAware interface, it would have the DIC (dependency injection container) injected via ContainerAware::setContainer(), which you could then use to access any service registered in the DIC.
The proposed method of testing controllers at the time was a black box testing approach, where you test full requests to your application and assert their output like so:
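The original snippet is missing from this copy; a reconstruction of a typical black-box test of that era (the route and markup are invented):

```php
<?php

class HelloControllerTest extends WebTestCase
{
    public function testIndex()
    {
        // Boots the whole Kernel just to exercise one action
        $client = $this->createClient();
        $crawler = $client->request('GET', '/hello/Bulat');

        // Asserting against rendered markup couples the test to
        // the current design of the page
        $this->assertEquals(1, count($crawler->filter('html:contains("Hello Bulat")')));
    }
}
```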
Although this method is easy to read and understand, there are a couple of drawbacks:
- To execute the test, we need to bootstrap the Kernel
- This test can only assert on the response body, which makes it fragile to design changes
- As a result of all of the above, it runs much slower and does much more than it needs to
In an ideal world, I would want to test the controller's interaction with other services in my application, like so:
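The snippet that followed is missing here; a reconstruction of that ideal test (the templating interface and controller names are illustrative, using the PHPUnit 3.x mock API):

```php
<?php

class IndexControllerTest extends PHPUnit_Framework_TestCase
{
    public function testIndex()
    {
        // Mock only the collaborator the action actually uses;
        // no Kernel, no HTTP layer, no rendered markup
        $templating = $this->getMock('TemplatingEngineInterface');
        $templating->expects($this->once())
            ->method('render')
            ->with('HelloBundle:Index:index.php', array('name' => 'Bulat'))
            ->will($this->returnValue('Hello Bulat'));

        $controller = new IndexController($templating);

        $this->assertEquals('Hello Bulat', $controller->indexAction('Bulat'));
    }
}
```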
NOTE: The controller is just a POPO (plain old PHP object) without any base class it needs to extend. Symfony2 doesn't need anything but the class itself for a controller to work.
Well, the good news is that Symfony2 allows that. Now all your controllers can be services. The old, conventional approach is still supported and is irreplaceable for small application controllers, with no need for unit-testing.
To make the above example controller correctly interact with Symfony2 and work as expected, we need the following.
Create controller class:
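The gist with the controller class did not survive; a plausible reconstruction (namespace and names invented to match the file structure described later in the post):

```php
<?php

namespace Application\HelloBundle\Controller\Index;

// A plain PHP object: no base class, dependencies come in
// through the constructor
class IndexController
{
    private $templating;

    public function __construct($templating)
    {
        $this->templating = $templating;
    }

    public function indexAction($name)
    {
        return $this->templating
            ->render('HelloBundle:Index:index.php', array('name' => $name));
    }
}
```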
Create DIC configuration, using the following xml:
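A sketch of that service definition (the service id is invented, kept consistent with the rest of this example):

```xml
<service id="index_controller"
         class="Application\HelloBundle\Controller\Index\IndexController">
    <!-- inject the templating service into the constructor -->
    <argument type="service" id="templating" />
</service>
```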
Create routing configuration:
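And a sketch of the routing entry, using the service_id:action notation described in the note that follows:

```yaml
# _controller points at a service id and an action method
# (without the 'Action' suffix)
index:
    pattern:   /hello/{name}
    defaults:  { _controller: index_controller:index }
```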
NOTE: in the above example, we used service_id:action instead of the regular BundleBundle:Controller:action (without the ‘Action’ suffix)
After all of the above is done, we need to inform Symfony2 of our services. To avoid creating a new Dependency Injection extension and creating configuration file entry, we can register our services directly:
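The snippet demonstrating that technique is missing from this copy; one plausible shape, registering the definition directly on the container builder (register() and addArgument() come from Symfony2's DIC of that era, but treat this as a sketch):

```php
<?php

use Symfony\Component\DependencyInjection\Reference;

// Somewhere during container configuration, register the
// controller service without a dedicated DI extension
$container
    ->register('index_controller', 'Application\HelloBundle\Controller\Index\IndexController')
    ->addArgument(new Reference('templating'));
```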
NOTE: the above technique was originally invented by Kris Wallsmith in a project we were working on at OpenSky.
Now you're ready to go. You need to include your bundle-level routing file in the application-level routing configuration and create the Index directory. The final file structure should look something like this:
After all the above steps are completed, you can try it from the browser, by going to:
Last Friday I gave a talk on unit-testing and programming best practices at OpenSky. This is the talk summary and some insight on how that turned out.
Some of you might know, and others might not, that our company, OpenSky, became the official sponsor of the Symfony NYC Meetups, which means we'll be seeing you on the last Thursday of every month, here at OpenSky HQ in NYC, for the foreseeable future…