The journey of a Symfony API from 150ms to 20ms

I have been involved in the development of an app based on Symfony 2.8, which also used Solr, MongoDB, and Redis.

The problem at hand: response times were slow (see the table below). With some bold objectives in mind, we set off down the road.



Best practice says to enable caching, but before going there, let’s see what we can optimize, as caching may make some problems harder to discover.

At this point you should make sure that you have the proper measuring tools (APM and load testing). For this we used JMeter and the free tier of New Relic. Get some numbers before starting the process and be patient (some results will only start to show after days).

Use your DB engine properly

What I mean is: for MySQL/MongoDB, for example, put proper indexes in place and analyse the queries you run.
Keep in mind that a query that runs every few hours can lock your tables/collections for a few seconds, and this can increase your response time. In our particular case the culprit was missing indexes in MongoDB.
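As an illustration, adding an index is a one-liner in the mongo shell; the collection and field names below are hypothetical, not the ones from our app:

```javascript
// hypothetical collection and field: index whatever the slow query filters on
db.products.createIndex({ categoryId: 1 });

// check that the query now walks the index instead of scanning the collection
db.products.find({ categoryId: 42 }).explain();
```

`explain()` is the quickest way to confirm the planner actually picked up the new index.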

Another bottleneck was the Solr queries. We made some tweaks to both the schema and the configuration of Solr itself and boom… calls that used to take 2-3 seconds no longer go above 500ms (the 99th percentile at 3000ms was mostly generated by Solr calls).

Use the latest versions

Or at least somewhere close :). For example, our upgrade from PHP 5.6 to PHP 7.0 gave us a boost of about 25% (and some headaches due to the required upgrade of the MongoDB driver and Doctrine missing native support for the new driver).

Update 2016.12.1: Someone asked about the upgrade of Doctrine + MongoDB to PHP 7. As many have already observed, the old driver is deprecated and doctrine-mongodb-odm is not compatible with the new one. But someone made a cool transition package that worked great for us.
Another problem encountered during the ODM upgrade was related to the field annotations, which we had to migrate from @MongoDB\String and @MongoDB\Int to @MongoDB\Field(type="string")
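In code, the migration looked roughly like this (a sketch; the document class and fields are hypothetical):

```php
<?php
// Before the upgrade, fields used type-specific annotations:
//   /** @MongoDB\String */  private $name;
//   /** @MongoDB\Int */     private $stock;
// After, the generic Field annotation carries the type:

use Doctrine\ODM\MongoDB\Mapping\Annotations as MongoDB;

/** @MongoDB\Document */
class Product // hypothetical document class
{
    /** @MongoDB\Id */
    private $id;

    /** @MongoDB\Field(type="string") */
    private $name;

    /** @MongoDB\Field(type="int") */
    private $stock;
}
```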

Upgrade MongoDB from 2.4 to 3.2 – well… someone used a very old version of MongoDB when the cluster was built. This one was a bit tricky, as we had to take the whole app down for about an hour: we couldn’t migrate from 2.4 to 3.2 without taking a full backup and restoring it on a new cluster.


If you don’t need it, remove it! If you only need it in development, load it just in the dev/test environments.

Some examples of bundles disabled on our app were:

  • SecurityBundle – our API is read-only and there is nothing that needs to be protected (or at least you could declare a dedicated firewall for the public API sections)
  • SwiftMailerBundle
  • SensioFrameworkExtraBundle
  • TwigBundle
  • AsseticBundle


Make sure all configs are in production mode. Some examples would be:

  • do as little IO as possible (also consider logging in the process)
  • Doctrine generates a lot of cache files for metadata. Put them in APC/APCu:

        doctrine_mongodb:
            connections:
                default:
                    server: mongodb://%mongo_servers%
                    options:
                        connect: true
                        connectTimeoutMS: 300
            default_database: %database_name%
            document_managers:
                default:
                    metadata_cache_driver: apc
                    retry_connect: 1
                    retry_query: 1
                    auto_mapping: true
  • persistent connections where possible (we had some problems with Redis after doing this, because we were using two databases for different purposes and switching between them… was a nightmare)
  • Make sure you read from slaves (we were killing the MongoDB master because of this).
    • Here we needed to control when to read from a slave, so we didn’t allow it by default, but enabled it on demand:
      $container->get('doctrine_mongodb')->getManager()->getClassMetadata('<Entity Name>')->slaveOkay = true;
  • Consider what your clients’ timeouts are and use them in your app. If you know your client has already given up on your response, why bother completing the entire request? (Example: the client’s timeout is 300ms, but an internal HTTP call takes 3000ms; that keeps you busy for nothing.)
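For internal HTTP calls this can be as simple as a hard timeout on the client. A sketch assuming Guzzle (the article does not say which HTTP client was used; the URL is hypothetical):

```php
<?php
// Assumption: Guzzle as the HTTP client. If our own caller gives up after
// 300ms, there is no point letting an internal call run for 3000ms.
$client = new \GuzzleHttp\Client([
    'connect_timeout' => 0.1, // seconds allowed to establish the connection
    'timeout'         => 0.3, // hard cap for the whole request
]);

try {
    $response = $client->get('http://internal-service.local/resource');
} catch (\GuzzleHttp\Exception\TransferException $e) {
    // fail fast and serve a degraded response instead of hanging
}
```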

Queue it!

If you don’t need something right now, just put it in a queue and continue serving requests. Write operations (to DB or disk) are usually heavier than pushing a message to a queue and handling it on a separate server.

We use RabbitMQ with this bundle:
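Independent of the bundle, the producer side with plain php-amqplib looks roughly like this (a sketch; the queue name, payload, and connection details are hypothetical):

```php
<?php
// Assumption: plain php-amqplib; the bundle wraps this kind of code.
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();

// durable queue so messages survive a broker restart
$channel->queue_declare('heavy_writes', false, true, false, false);

// instead of writing to the DB inside the request, publish and move on;
// a consumer on a separate server does the heavy work
$channel->basic_publish(
    new AMQPMessage(json_encode(['event' => 'view', 'id' => 42])),
    '',
    'heavy_writes'
);

$channel->close();
$connection->close();
```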


It’s an API, so we chose to have Twig enabled only in the dev environment (so that the profiler still works). 🙂

Try to use plain PHP directly, but if you can’t live without Twig, at least install the Twig C extension (it brings some performance boost).

Container / DI

Here we also did a little comparison with other bare frameworks like Silex. We chose to stay with Symfony, as the performance penalty was minor compared with the advantages:

  • yml configs are actually compiled to PHP code at deploy time
  • yml errors are caught during the deploy phase (at cache warmup), so a faulty container cannot be released live
  • Personally, I started using compiler passes and “tags” a lot
  • We already knew how to define and use services in Symfony, and bringing in a new framework may lead to some beginner mistakes.


This was one of the final steps we added. A few tricks to consider here:

  • Make sure your connection to the cache server is persistent
  • Consider what TTL to cache with (depending on change frequency). If you have a large data set with frequent updates on various events, you can set a high TTL and make the update process also flush the cache
  • you can try multi-level caches if you need them (APCu + Memcached), but that will also generate some headaches
  • To avoid complicated logic inside your methods you could use a cool caching bundle:
  • You might see a lot of cache evictions because the cache is too large.
    • Try to reduce the size of the cached data (do you really need to cache that much, that often?)
    • Add additional capacity to the caching server.
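The “high TTL plus flush on update” trick above can be sketched like this, assuming the Memcached extension (the key names, TTL, and loader function are illustrative, not from the original app):

```php
<?php
// Assumption: the PHP Memcached extension; values are illustrative.
$cache = new \Memcached();
$cache->addServer('127.0.0.1', 11211);

// read path: cache for a day, since the data rarely changes on its own
$products = $cache->get('catalog:products');
if ($products === false) {
    $products = loadProductsFromDb(); // hypothetical loader
    $cache->set('catalog:products', $products, 86400);
}

// write path: the update process flushes the key immediately,
// so readers never wait for the TTL to expire to see fresh data
function onCatalogUpdated(\Memcached $cache)
{
    $cache->delete('catalog:products');
}
```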

Nginx / FPM config

Here we made some changes after discovering that under high load response times increased even though our servers were idle (CPU and memory). We increased the number of worker/child processes for both nginx and PHP-FPM, and this let us handle a higher load.
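The knobs in question are roughly these (the values are illustrative, not the ones we used; tune them against your own load tests and memory budget):

```
# nginx.conf – one worker per core, more connections per worker
worker_processes auto;
events {
    worker_connections 4096;
}

# php-fpm pool (www.conf) – allow more children while CPU/memory are idle
pm = dynamic
pm.max_children = 50
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20
```

Each FPM child holds a full PHP process in memory, so `pm.max_children` should be sized from available RAM divided by the per-process footprint.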

SimpleSQL (MySQL admin interface)

The title says it all. It is a database administration interface I developed in a few hours, because I needed to study a database and phpMyAdmin was too heavy and loaded too slowly for what I needed.
What is it good for? Well, all of us who have worked with phpMyAdmin know it is a very powerful tool, but we don’t always need all the features it offers. This tool provides a graphical interface for listing tables; for everything else you write plain SQL. I’m waiting for your ideas on how to improve it.
How do you install it? Copy the directory into your site; no special installation is required. You only need to edit the config.php file to tell it which database servers to use, or, if you comment out the line where the servers are defined, the script will let you connect to any database.


Last week

How are you doing? Lately I have been busy setting up a server for the game Lineage 2, and I am still working on the web interface (the public version is still a plain HTML page), but I hope that within two days at most we will release the beta version of the public site, with development continuing after that. In the middle of next month we plan to move to a dedicated server located in the US, for better access internationally.

Project name? l2Apollo

Web page?

Server address?

Who is developing it? phpAB Team

Current status? Beta (we hope that by the end of next month we will have all the items set up and GeoIP installed)

Why will we require registration on the site? We want to offer a way to reward users who bring other players to our server, and this was the only way we could build a referral mechanism.

We’ll keep you posted on what happens! 😉


Today I ran into something really cool… functions written directly in SQL. I built a few things I actually needed in order to reduce traffic.

Examples (if only so I have them noted down somewhere :p):


DELIMITER $$
DROP PROCEDURE IF EXISTS `l2jserver`.`change_passwd` $$
CREATE DEFINER=`l2jserver`@`%` PROCEDURE `change_passwd`(IN s_username TEXT, IN old_passwd TEXT, IN new_passwd TEXT)
BEGIN
  -- update the web account password only when the old password matches
  UPDATE `l2jserver`.`web_accounts`
  SET `password` = new_passwd
  WHERE `password` = old_passwd AND `username` = s_username;
  -- keep the game-server account table in sync
  UPDATE `l2jserver`.`accounts`
  SET `password` = new_passwd
  WHERE `password` = old_passwd AND `login` = s_username;
END $$
DELIMITER ;


And also:

DROP VIEW IF EXISTS `l2jserver`.`users`;
CREATE OR REPLACE ALGORITHM=UNDEFINED DEFINER=`root`@`%` SQL SECURITY DEFINER VIEW `l2jserver`.`users` AS
SELECT
    `l2jserver`.`web_accounts`.`id` AS `id`,
    `l2jserver`.`accounts`.`login` AS `Username`,
    `l2jserver`.`web_accounts`.`password` AS `Password`,
    `l2jserver`.`web_accounts`.`email` AS `email`,
    `l2jserver`.`web_accounts`.`accesslevel` AS `Level access`,
    `l2jserver`.`accounts`.`lastactive` AS `Last Active`,
    `l2jserver`.`accounts`.`lastIP` AS `From IP last time`
FROM `l2jserver`.`accounts`
JOIN `l2jserver`.`web_accounts`
    ON `l2jserver`.`web_accounts`.`username` = `l2jserver`.`accounts`.`login`;


Today I visited the blog of a colleague who discovered some holes in the script of a website…
I want to let him know that as long as he has no access to the DB, it is not a security issue. I checked a few of the links myself, but I couldn’t get access to the DB. The only thing the poor programmer forgot was a call to htmlentities();