May Tech Recap
Our last few months have seen experiments with isomorphic routing, immutable application state, microservices, and more.
Isomorphic Routing & Immutable Application State
In our isomorphic view rendering solution, our view controllers are responsible for hoisting data from the DB (or a REST API), mapping it into the appropriate view models, and exposing it as what we call the "application data scope". This data scope is then consumed by our templates and our client-side application, so having an identical data scope on both sides of the stack is crucial.
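A minimal sketch of the idea, with hypothetical names (our real view models and scope shape differ): a view controller maps raw records into view models and hangs them on a single data scope object that both the server-side templates and the client-side application consume.

```typescript
// Hypothetical sketch of an "application data scope". The types and
// field names here are illustrative, not our actual schema.

interface FilmRecord { id: number; title: string; priceCents: number; }
interface FilmViewModel { id: number; title: string; displayPrice: string; }
interface DataScope { films: FilmViewModel[]; }

// Map a raw DB/API record into the view model the templates expect.
function toViewModel(record: FilmRecord): FilmViewModel {
  return {
    id: record.id,
    title: record.title,
    displayPrice: `£${(record.priceCents / 100).toFixed(2)}`,
  };
}

// Build the single data scope consumed by both sides of the stack:
// rendered into templates on the server, hydrated by the client app.
function buildDataScope(records: FilmRecord[]): DataScope {
  return { films: records.map(toViewModel) };
}

const scope = buildDataScope([{ id: 1, title: 'Example Film', priceCents: 499 }]);
// scope.films[0].displayPrice === '£4.99'
```

Because the same mapping code produces the scope on both server and client, the two sides cannot drift apart.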
This was the last piece of our overall architecture that still had to be manually replicated by both the front-end and back-end teams as features were added, which could result in functionality that didn't quite match if we weren't careful. We were therefore very conscious of the need to improve this part of the stack.
To reduce duplication, one idea that had been suggested to us several months ago was the sharing of JSON route files between the front and back-end, similar to our existing approach to isomorphic modular templating using JSON “layout” files. While this approach made a lot of sense, it didn’t solve the issue of the duplicated business logic in our view controllers.
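The route-sharing idea can be sketched as follows, with hypothetical paths and view names (in practice the definitions would live in a JSON file loaded by both sides, as with our "layout" files): one route table, and one resolver used identically by the server and client routers.

```typescript
// Hypothetical sketch: a shared route definition consumed by both the
// server router and the client router, so neither side re-declares the
// URL-to-view mapping by hand. Paths and view names are illustrative.

interface RouteDef { path: string; view: string; }

const routes: RouteDef[] = [
  { path: '/', view: 'home' },
  { path: '/films/:id', view: 'film-detail' },
];

// Both sides resolve a URL to a view with the same logic: turn each
// ":param" segment into a wildcard and test the pathname against it.
function resolve(pathname: string): RouteDef | null {
  for (const route of routes) {
    const pattern = new RegExp('^' + route.path.replace(/:[^/]+/g, '[^/]+') + '$');
    if (pattern.test(pathname)) return route;
  }
  return null;
}

// resolve('/films/42') matches the 'film-detail' route.
```

Sharing the table removes the duplicated route declarations, though as noted it does nothing for the business logic inside the controllers themselves.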
With this idea in place, we would have the beginnings of state-driven views, and our client-side application could be restructured into an "action"-based flow where, at its most basic level, any user interaction results in a new, modified state object being pushed into the history, an event being fired, and the view being updated as a result.
We were already very close to this concept, but the lack of a strict philosophy or paradigm around how the whole system should function had led to arbitrary architectural decisions and unnecessary UI and DOM-manipulation code. With all parts of the application streamlined into a single cohesive architecture, that code could now be significantly reduced.
We pushed the first iteration of this refactor into production today, and we're very excited to start reaping the benefits of the new architecture. We'll be writing about the whole process in detail in a future blog post, so stay tuned.
User Lifecycle & Microservices
As a business, we have recently been looking at various ways to improve our user retention and engagement metrics. The first approach we decided on was to actively target our users with tailored emails throughout their life as members on our platform, and we are now rolling out a system to do just that. Using our existing database, we can segment our users based on their platform activity and send them relevant information and offers. For example, three days after a user has signed up, we send a follow-up email to reinforce the benefits of being a member on Colony; or, if a user has not bought a film after a year or more on the platform, we can target them with a specific offer to tempt them back in.

We use the Mandrill API to send out our automated emails, which gives us the ability to A/B test and to track open and click-through rates. We are also using specific analytics tags to track the success of these campaigns.
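The two segmentation rules mentioned above can be sketched as a simple pure function. The template names, thresholds, and user fields here are illustrative, and the actual send (which goes through the Mandrill API) is deliberately left out:

```typescript
// Hypothetical sketch of lifecycle-email segmentation. pickTemplate
// decides which automated email a user should receive today, if any;
// the chosen template name would then be passed to the Mandrill API.

interface User {
  email: string;
  daysSinceSignup: number;
  purchaseCount: number;
}

function pickTemplate(user: User): string | null {
  if (user.daysSinceSignup === 3) {
    return 'welcome-follow-up'; // reinforce the benefits of membership
  }
  if (user.daysSinceSignup >= 365 && user.purchaseCount === 0) {
    return 'win-back-offer';    // tempt lapsed users back in
  }
  return null;                  // no email due today
}

pickTemplate({ email: 'a@example.com', daysSinceSignup: 3, purchaseCount: 0 });
// → 'welcome-follow-up'
```

Keeping the segmentation rules in one pure function like this makes them trivial to unit test and to A/B against alternative thresholds.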
Technically speaking, we wanted to use this project to evaluate the "microservices" architectural style. Our existing server-side application is fast and robust, but it is a "monolith", and as such has started to become a little unwieldy. Any new functionality needs to "fit in" with our existing development patterns – for example, we make heavy use of Entity Framework for our data access, but for small pieces of functionality we would sometimes rather use something more lightweight, like Dapper.

We have now built a number of small microservices that are completely independent of this monolith and cooperate over APIs and via our existing message queue system built on Amazon SQS. Each service employs the patterns that make sense for the small section of the overall whole it is responsible for, and each is independently deployable and scalable. The approach is not without its drawbacks, of course – we have more EC2 instances and deployment pipelines to manage, and a proliferation of patterns may make switching costs a little higher. However, we are pleased with our initial foray into this method, and will follow up with a more in-depth blog post on the subject after another month or two in production.
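The cooperation pattern can be sketched as follows. This is a deliberately simplified, in-memory stand-in for the queue (in production the transport is Amazon SQS and the services are separate processes); the message type and payload are hypothetical:

```typescript
// Hypothetical sketch: two independent services cooperating through a
// message queue. An in-memory queue stands in for Amazon SQS so the
// pattern is visible end to end in one file.

interface QueueMessage { type: string; body: Record<string, unknown>; }

class InMemoryQueue {
  private messages: QueueMessage[] = [];
  send(msg: QueueMessage): void { this.messages.push(msg); }
  receive(): QueueMessage | undefined { return this.messages.shift(); }
}

const queue = new InMemoryQueue();

// The monolith (or any other service) publishes an event...
queue.send({ type: 'user.signed-up', body: { email: 'a@example.com' } });

// ...and a lifecycle-email microservice consumes it independently,
// free to use whatever data-access pattern suits it internally.
const handled: string[] = [];
let msg: QueueMessage | undefined;
while ((msg = queue.receive()) !== undefined) {
  if (msg.type === 'user.signed-up') handled.push(String(msg.body.email));
}
// handled now contains the signed-up user's email
```

Because the producer and consumer share only the message contract, either side can be redeployed, rescaled, or rewritten without touching the other – which is exactly the decoupling we were after.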