Moving to the cloud part 6: Enabling RDS

In this part of my series we’ll get closer to the fundamentals of our application. In the last article we outsourced our mail server to the cloud. Now we want to do the same with our database server to get rid of all the tasks that come with hosting one: setup, configuration, maintenance, backups, security, replication and updates. Amazon provides a really straightforward solution: RDS.
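For the application itself, the switch mostly boils down to pointing the database connection at the RDS endpoint. Here is a minimal sketch in plain PHP – the endpoint, database name and credentials are placeholders for illustration, not the values of our real setup:

```php
<?php
// Hypothetical RDS endpoint – after the migration only the host changes,
// the application code stays the same.
$dsn = 'mysql:host=myapp.abc123xyz.eu-west-1.rds.amazonaws.com;dbname=myapp;charset=utf8';

// Credentials as configured for the RDS instance (placeholders).
$pdo = new PDO($dsn, 'myapp_user', 'secret', array(
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
));

// Verify connectivity.
var_dump($pdo->query('SELECT 1')->fetchColumn());
```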

Continue reading “Moving to the cloud part 6: Enabling RDS”

Moving to the cloud part 5: Enabling SES

Sending emails from a Symfony2 application is not a challenging task. Just configure the Swiftmailer library with a handful of simple parameters, create a message object, trigger the sending process and you are done. Things change slightly if you are responsible for the mail server at the same time. Setting up and maintaining mail server software can become demanding, especially when there are complaints about missing emails, security holes or spam issues. Moreover, sending from a cloud server is not very reliable because of its doubtful IP reputation. Amazon’s SES service provides relief.

Amazon SES (Simple Email Service) frees us from setting up and maintaining a mail server by providing an email service with a single SMTP endpoint.
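Because SES speaks plain SMTP, Swiftmailer can talk to it directly. A minimal sketch – the region, endpoint and SMTP credentials below are assumptions for illustration; check your own SES console for the real values:

```php
<?php
require_once 'vendor/autoload.php';

// SES SMTP endpoint for an assumed region (us-east-1); STARTTLS on port 587.
$transport = Swift_SmtpTransport::newInstance('email-smtp.us-east-1.amazonaws.com', 587, 'tls')
    ->setUsername('SES_SMTP_USERNAME')   // placeholder
    ->setPassword('SES_SMTP_PASSWORD');  // placeholder

$mailer = Swift_Mailer::newInstance($transport);

// The sender address must be verified in SES beforehand.
$message = Swift_Message::newInstance('Test via SES')
    ->setFrom(array('noreply@example.com' => 'My App'))
    ->setTo(array('recipient@example.com'))
    ->setBody('Hello from Amazon SES!');

$mailer->send($message);
```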

Continue reading “Moving to the cloud part 5: Enabling SES”

Moving to the cloud part 4: Enabling S3

Moving to the cloud mostly means moving to a scalable multiserver environment with a load balancer in front. The load balancer redirects a user to an available webserver instance of the cluster. Imagine a form with an image file upload somewhere in your application allowing a user to publish an avatar on his profile page. Handling the uploaded file the old way would mean storing it on the current webserver’s file system. But how could this file be accessed by other webservers of the cluster, e.g. to display the avatar in the user’s public profile to users that have been redirected to another instance? Moreover, what happens if we want to scale down our multiserver environment – meaning that we may need to shut down a webserver that stores uploaded images? One possible solution would be to set up an additional file server for this purpose that is not part of the scaling cluster. All webservers could access uploaded files at the same central location. But there are several drawbacks with this setup: First, it means setting up and maintaining another server with a different configuration. Second, it means a single point of failure: if our single file server fails, the whole application is affected – and for the sake of simplicity mirroring the file server is not an option… S3 to the rescue!
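With S3, every webserver of the cluster writes uploads to and reads them from the same central bucket instead of its local filesystem. A minimal sketch using the AWS SDK for PHP – the bucket name, region and file path are placeholders for illustration:

```php
<?php
require_once 'vendor/autoload.php';

use Aws\S3\S3Client;

// Region and bucket are placeholders; credentials are taken from the
// environment or an IAM instance role.
$s3 = new S3Client(array(
    'version' => 'latest',
    'region'  => 'eu-west-1',
));

// Path of the freshly uploaded file on this webserver (example).
$localPath = '/tmp/avatar.png';

// Store the avatar in the central bucket instead of the local filesystem,
// so every webserver of the cluster can serve it.
$result = $s3->putObject(array(
    'Bucket'     => 'myapp-uploads',
    'Key'        => 'avatars/user-42.png',
    'SourceFile' => $localPath,
    'ACL'        => 'public-read',
));

echo $result['ObjectURL']; // public URL of the avatar
```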

Continue reading “Moving to the cloud part 4: Enabling S3”

Moving to the cloud part 3: Enabling Route 53

This is part 3 of my series of articles about our first application move to the Amazon Cloud. As we are not the owners of the application’s production domain, all communication about DNS changes has always been quite tedious, time consuming and error prone in the past – especially when your contact person lives in another time zone. In preparation for the final application move to EC2, which involves some DNS changes to switch to the Amazon load balancer, we wanted to gain some flexibility. Enter Route 53.

Route 53 allows you to manage all the DNS records of a given domain – even if you are not the owner of the domain. For instance, you can route a domain or any subdomain at any time to any server of your choice. This is quite cool because whenever we are ready with the setup of our EC2 server cluster, we ourselves will be able to flip the switch. No need to contact someone, no need to wait impatiently for the changes to happen. And if something goes wrong, we simply roll back to the previous setup.
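Flipping the switch can even be scripted. The following sketch uses the AWS SDK for PHP to repoint an A record via an UPSERT; the hosted zone ID, record name and IP address are placeholders, not our real configuration:

```php
<?php
require_once 'vendor/autoload.php';

use Aws\Route53\Route53Client;

$route53 = new Route53Client(array(
    'version' => 'latest',
    'region'  => 'us-east-1', // Route 53 is a global service
));

// UPSERT creates the record if it is missing, otherwise it updates it –
// rolling back means running the same call with the old IP.
$route53->changeResourceRecordSets(array(
    'HostedZoneId' => 'Z1EXAMPLE', // placeholder zone ID
    'ChangeBatch'  => array(
        'Changes' => array(
            array(
                'Action' => 'UPSERT',
                'ResourceRecordSet' => array(
                    'Name' => 'www.example.com.',
                    'Type' => 'A',
                    'TTL'  => 300,
                    'ResourceRecords' => array(
                        array('Value' => '203.0.113.10'), // new server IP
                    ),
                ),
            ),
        ),
    ),
));
```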

Continue reading “Moving to the cloud part 3: Enabling Route 53”

Moving to the cloud part 2: Enabling database session storage

By default, PHP persists every user session to a single file stored in the system’s default temporary directory. You can go there, open an arbitrary session file – most likely prefixed by sess_ – and you will find a serialized array that represents the contents of the global $_SESSION array available to your scripts. OK, this works great, so what’s the problem with this setup?

Actually, there is nothing wrong with using file-based session storage. But with growing demands, some downsides of this approach may attract your attention:

  • The system’s temporary directory is a shared directory: session files of different applications and temporary files of foreign processes may also use this location. In case of a security issue your user sessions may be compromised. This may be solved by configuring a unique session save path per application and putting an open_basedir restriction on top to prevent unauthorized access. This applies all the more if your application is installed on a shared server. In contrast, a database can make use of its access management; you will just need to set up an exclusive account for your session table.
  • There are no simple means to increase file access performance. In contrast, a database offers a lot of concepts to improve performance, like indexing and clustering.
  • As soon as you want to run your application on multiple servers for reliability and performance reasons you will prefer to store session data in a central location that is common to all webservers. Thus, every webserver shares the same pool of session data and it doesn’t matter which webserver of your cluster serves two subsequent requests of the same client (see the sketch after this list).
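As a teaser for the central storage idea, here is a minimal sketch in plain PHP that replaces the file-based handler with Symfony’s PdoSessionHandler; the DSN and credentials are placeholders, and the table layout follows the component’s defaults:

```php
<?php
require_once 'vendor/autoload.php';

use Symfony\Component\HttpFoundation\Session\Storage\Handler\PdoSessionHandler;

// Central database reachable by every webserver (placeholder DSN).
$pdo = new PDO('mysql:host=db.internal;dbname=myapp', 'sessions_user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Expected table (MySQL): sess_id VARCHAR(128) PRIMARY KEY,
// sess_data TEXT, sess_time INTEGER.
$handler = new PdoSessionHandler($pdo, array(
    'db_table'    => 'sessions',
    'db_id_col'   => 'sess_id',
    'db_data_col' => 'sess_data',
    'db_time_col' => 'sess_time',
));

// From here on, $_SESSION reads and writes go to the database.
session_set_save_handler($handler, true);
session_start();
```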

Implementing a different session save handler in raw PHP is quite well described in the PHP documentation, so we will focus on how to do the Symfony2 configuration for this requirement.

Continue reading “Moving to the cloud part 2: Enabling database session storage”

Moving to the cloud part 1: Intentions

Currently, we are moving our first Symfony2 application to the Amazon cloud (AWS). This series of articles describes how we modified and moved this application.

The existing application setup is a common one:

  • Single production server
  • Usual LAMP stack with Ubuntu and local MySQL database
  • File based sessions
  • User uploads stored in the local filesystem
  • Local Postfix mailserver
  • Cron jobs, e.g. for sending email reports
  • Deployment happens from a local machine using a self-written deployment script
  • A little bit of monitoring using Nagios
  • Database backups using a self-written script
  • File backups using duplicity
  • DNS management via the client’s domain registrar
  • Git version control using Bitbucket

In the past, every programmer at our office has been more or less a one-man show, being the master of all the above-mentioned processes.

Continue reading “Moving to the cloud part 1: Intentions”

Aggregate fields with Symfony and Doctrine

This post is about a topic we have actually dealt with many times before and implemented in one way or another. And yet it confronts us again and again with the question of how to do it “properly”, quite apart from the peculiarities that every specific case brings with it: aggregate fields.
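To make the term concrete: an aggregate field stores a value derived from associated entities – a sum or a count, for example – redundantly on the owning entity and keeps it in sync whenever the association changes. A minimal Doctrine sketch with invented entity names (an InvoiceItem class with a getAmount() method is assumed):

```php
<?php

use Doctrine\Common\Collections\ArrayCollection;
use Doctrine\ORM\Mapping as ORM;

/** @ORM\Entity */
class Invoice
{
    /** @ORM\Id @ORM\Column(type="integer") @ORM\GeneratedValue */
    private $id;

    /**
     * Aggregate field: the redundant sum of all item amounts.
     * @ORM\Column(type="decimal", scale=2)
     */
    private $total = 0;

    /** @ORM\OneToMany(targetEntity="InvoiceItem", mappedBy="invoice", cascade={"persist"}) */
    private $items;

    public function __construct()
    {
        $this->items = new ArrayCollection();
    }

    public function addItem(InvoiceItem $item)
    {
        $this->items->add($item);
        // Keep the aggregate in sync at the moment the association changes,
        // instead of recalculating it with a query on every read.
        $this->total += $item->getAmount();
    }

    public function getTotal()
    {
        return $this->total;
    }
}
```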

Continue reading “Aggregate fields with Symfony and Doctrine”