I have been involved in the development of an app based on Symfony 2.8 which also used Solr, MongoDB and Redis.

The problem at hand: response times were slow. With some bold objectives set, we went down the optimization road.

Start

Best practice says to enable caching, but before going there let's try to see what we can optimize, as caching may make some problems harder to discover.

At this point you should make sure that you have the proper measuring tools in place (an APM and a load-testing tool). We used JMeter and the free tier of New Relic. Get some numbers before starting the process and have patience (some results will only start to show after days).

Use your DB engine properly

What I mean is: for MySQL/MongoDB, for example, put proper indexes in place and analyse the queries you run.
Keep in mind that a query you run every few hours could lock your tables/collections for a few seconds, and this could increase your response times. Our particular problem was missing indexes in MongoDB.
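
A minimal sketch of what fixing this looks like with Doctrine MongoDB ODM annotations (the document and field names here are hypothetical, not our real schema):

    <?php

    use Doctrine\ODM\MongoDB\Mapping\Annotations as MongoDB;

    /**
     * @MongoDB\Document(collection="products")
     */
    class Product
    {
        /** @MongoDB\Id */
        private $id;

        /**
         * We filter by category on almost every request,
         * so MongoDB should have an index on it.
         * @MongoDB\Field(type="string")
         * @MongoDB\Index
         */
        private $category;
    }

After adding the mapping, the index still has to be created on the server, e.g. via $dm->getSchemaManager()->ensureIndexes() or the doctrine:mongodb:schema:create console command.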

Another bottleneck was the Solr queries. We made some tweaks to both the schema and the config of Solr itself and boom... from calls that took 2-3 seconds, we no longer have any above 500ms (the 99th percentile at 3000ms was mostly generated by Solr calls).

Use the latest versions

Or at least somewhere close :). For example, our upgrade from PHP 5.6 to PHP 7.0 gave us a boost of about 25% (and some headaches due to the required upgrade of the Mongo driver and Doctrine missing native support for the new driver).

Update 2016.12.1: Someone asked about the upgrade of Doctrine + Mongo to PHP 7. As many may have observed already, the old driver is deprecated and doctrine-mongodb-odm is not compatible with the new driver, but someone made a cool transition package (https://github.com/alcaeus/mongo-php-adapter) that worked great for us.
Another problem encountered during the upgrade of the ODM was related to the field annotations, which we had to migrate from @MongoDB\String and @MongoDB\Int to @MongoDB\Field(type="string") and @MongoDB\Field(type="int").
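
For illustration, the migration looked roughly like this (property names are hypothetical); the old annotations had to go because String and Int are reserved class names in PHP 7:

    <?php

    // Before (works on PHP 5.6, breaks on PHP 7 / the new driver):
    /** @MongoDB\String */
    private $title;

    /** @MongoDB\Int */
    private $views;

    // After (the generic Field annotation with an explicit type):
    /** @MongoDB\Field(type="string") */
    private $title;

    /** @MongoDB\Field(type="int") */
    private $views;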

Upgrading Mongo from 2.4 to 3.2 - well... someone had used a very old version of Mongo when the cluster was built. This was a bit tricky, as we had to take the whole app down for about an hour: we couldn't migrate from 2.4 to 3.2 without taking a full backup and restoring it on a new cluster.

Bundles

If you don't need it, remove it! If you only need it in development, load it just in the dev/test environments (see the sketch after the list below).

Some examples of bundles we disabled in our app:

  • SecurityBundle - our API is read-only and there's nothing that needs to be protected (or at least you could declare a dedicated firewall for the public API sections)
  • SwiftMailerBundle
  • SensioFrameworkExtraBundle
  • TwigBundle
  • AsseticBundle
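
A minimal sketch of how this looks in a Symfony 2.8 AppKernel (the exact bundle list is illustrative, not our full kernel):

    <?php

    use Symfony\Component\Config\Loader\LoaderInterface;
    use Symfony\Component\HttpKernel\Kernel;

    class AppKernel extends Kernel
    {
        public function registerBundles()
        {
            // Only what production actually needs.
            $bundles = array(
                new Symfony\Bundle\FrameworkBundle\FrameworkBundle(),
                new Symfony\Bundle\MonologBundle\MonologBundle(),
                new Doctrine\Bundle\MongoDBBundle\DoctrineMongoDBBundle(),
            );

            // Dev/test-only helpers are never even instantiated in prod.
            if (in_array($this->getEnvironment(), array('dev', 'test'), true)) {
                $bundles[] = new Symfony\Bundle\TwigBundle\TwigBundle();
                $bundles[] = new Symfony\Bundle\WebProfilerBundle\WebProfilerBundle();
            }

            return $bundles;
        }

        public function registerContainerConfiguration(LoaderInterface $loader)
        {
            $loader->load($this->getRootDir().'/config/config_'.$this->getEnvironment().'.yml');
        }
    }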

Configs

Make sure all configs are in production mode. Some examples would be:

  • do as little I/O as possible (also consider what you log in the process)
  • Doctrine generates a lot of metadata cache entries. Put them in APC/APCu:
    doctrine_mongodb:
        connections:
            default:
                server: "mongodb://%mongo_servers%"
                options:
                    connect: true
                    connectTimeoutMS: 300
        default_database: "%database_name%"
        document_managers:
            default:
                metadata_cache_driver: apc
                retry_connect: 1
                retry_query: 1
                auto_mapping: true
  • persistent connections where possible (we had some problems with Redis after doing this, because we were using two databases for different purposes, and switching DBs... was a nightmare)
  • Make sure you read from slaves (we were killing the Mongo master because of this).
    • We needed to control when to read from a slave, so we didn't allow it by default but enabled it on demand:
      $container->get('doctrine_mongodb')->getManager()->getClassMetadata('<Entity Name>')->slaveOkay = true;
  • Consider what the timeouts of your clients are and use them in your app. If you know your client has already given up on your response, why bother completing the entire request? (Example: the client times out at 300ms, but an internal HTTP call takes 3000ms. That would keep you busy for nothing; see the sketch after this list.)
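
A minimal sketch of enforcing such a budget on an internal call with plain cURL (the URL and the 300ms figure are illustrative):

    <?php

    $ch = curl_init('http://internal-service.local/lookup');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_NOSIGNAL, 1);            // needed for sub-second timeouts on some systems
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT_MS, 100); // fail fast if the service is down
    curl_setopt($ch, CURLOPT_TIMEOUT_MS, 300);        // never outlive the client's patience
    $body = curl_exec($ch);
    curl_close($ch);

    if ($body === false) {
        // Timed out or failed: degrade gracefully instead of blocking the request.
        $body = null;
    }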

Queue it!

If you don't need something right now, just put it in a queue and continue serving requests. Write operations (to DB or disk) are usually heavier than pushing a message onto a queue and handling it on a separate server.

We use RabbitMQ with this bundle: https://github.com/php-amqplib/RabbitMqBundle
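
A minimal sketch with this bundle, assuming a producer named stats is configured under old_sound_rabbit_mq.producers (the producer name and payload are hypothetical):

    <?php

    // Instead of updating the stats in MongoDB inline, publish a message
    // and let a consumer on a separate server do the heavy write.
    $payload = json_encode(array('article_id' => 42, 'ts' => time()));
    $container->get('old_sound_rabbit_mq.stats_producer')->publish($payload);

The consumer side is a long-running process (the bundle's rabbitmq:consumer command) that pops messages and performs the writes at its own pace.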

Twig

It's an API, so we chose to have Twig enabled only in the dev environment (so that the profiler still works). :)

Try to use PHP directly, but if you can't live without Twig, at least install the Twig C extension for PHP (it brings some performance boost).

Container / DI

Here we also did a little comparison with leaner frameworks like Silex. We chose to stay with Symfony, as the performance penalty was minor compared with the advantages:

  • YML configs are actually compiled and cached as PHP code at deploy time
  • YML errors are caught during the deploy phase (at cache warmup), so a faulty container cannot be released live
  • personally, I started to use compiler passes and service tags a lot (see the sketch after this list)
  • we already knew how to define and use services in Symfony, and bringing in a new framework may lead to some beginner mistakes
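
A minimal compiler pass sketch: collect every service tagged app.processor (the tag, registry service and method names are hypothetical) and wire them into a registry at compile time:

    <?php

    use Symfony\Component\DependencyInjection\Compiler\CompilerPassInterface;
    use Symfony\Component\DependencyInjection\ContainerBuilder;
    use Symfony\Component\DependencyInjection\Reference;

    class ProcessorPass implements CompilerPassInterface
    {
        public function process(ContainerBuilder $container)
        {
            if (!$container->hasDefinition('app.processor_registry')) {
                return;
            }

            $registry = $container->getDefinition('app.processor_registry');

            // Every service tagged "app.processor" gets injected here when the
            // container is compiled, so there is no lookup cost at runtime.
            foreach ($container->findTaggedServiceIds('app.processor') as $id => $tags) {
                $registry->addMethodCall('addProcessor', array(new Reference($id)));
            }
        }
    }

The pass is registered in the bundle's build() method with $container->addCompilerPass(new ProcessorPass());.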

Cache

This was one of the final steps. A few tricks to consider here:

  • Make sure your connection to the cache server is persistent
  • Consider what TTL you should cache with (depending on how frequently the data changes). If you have a large data set with updates triggered by various events, you can set a high TTL and make the update process also flush the affected cache entries (see the sketch after this list)
  • you can try multi-level caches if you need them (APC + Memcached), but that will also generate some headaches
  • to avoid complicated caching logic inside your methods, you could use a cool caching bundle: https://github.com/eMAGTechLabs/cachebundle
  • you might see a lot of cache evictions if the cached data outgrows the cache:
    • Try to reduce the size of the cached data (is it really required that often?)
    • Add additional capacity to the caching server.
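
A minimal sketch of the high-TTL-plus-explicit-flush idea from the second bullet, using plain Memcached (the key scheme, loader/writer functions and TTL are hypothetical):

    <?php

    $cache = new Memcached();
    $cache->addServer('127.0.0.1', 11211);

    // Read path: generous TTL, rebuild only on a miss.
    $key = 'article.42';
    $article = $cache->get($key);
    if ($article === false && $cache->getResultCode() === Memcached::RES_NOTFOUND) {
        $article = loadArticleFromMongo(42);  // hypothetical loader
        $cache->set($key, $article, 3600);    // high TTL: the data rarely changes
    }

    // Write path: whatever updates the data also drops the stale entry,
    // so the high TTL never serves outdated content.
    function updateArticle(Memcached $cache, $id, array $data)
    {
        saveArticleToMongo($id, $data);       // hypothetical writer
        $cache->delete('article.' . $id);
    }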

Nginx / FPM config

Here we made some changes after discovering that under high load our response times increased even though our servers were idle (CPU and memory). We increased the number of worker/child processes for both nginx and PHP-FPM, which allowed us to handle a higher load; a sketch of the knobs involved is below.
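
The numbers here are illustrative and must be sized against your CPU cores, your RAM and the memory footprint of one PHP process. In nginx.conf:

    worker_processes 8;           # roughly one per CPU core
    events {
        worker_connections 4096;  # concurrent connections per worker
    }

And in the PHP-FPM pool config (e.g. www.conf):

    pm = dynamic
    pm.max_children = 64      ; hard cap: max_children * avg process size must fit in RAM
    pm.start_servers = 16
    pm.min_spare_servers = 8
    pm.max_spare_servers = 32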