Poor performance with Azure Cache


After switching a couple of database calls over to the cache, we actually saw worse performance. According to New Relic there was a huge jump in CLR time and response time. Please see the attached graph for the jump (the cache was introduced 1/5 at 0:00). The only thing that changed was the introduction of Azure App Fabric Cache. Our cache client uses a singleton pattern, so there is only one per instance of the web service: the cache factory is created once and then stored away, so we are not incurring the overhead of opening the connection each time.
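For reference, the singleton pattern described above looks roughly like this (the `CacheClient` class name and the key are placeholders, not our actual code; the `DataCacheFactory`/`DataCache` types are the standard AppFabric caching API):

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

public static class CacheClient
{
    // The factory is expensive to create (it opens the connection),
    // so build it lazily, exactly once per process.
    private static readonly Lazy<DataCacheFactory> Factory =
        new Lazy<DataCacheFactory>(() => new DataCacheFactory());

    public static DataCache Cache
    {
        get { return Factory.Value.GetDefaultCache(); }
    }
}

// Usage (key name is made up for illustration):
// CacheClient.Cache.Put("gamestate:42", someGameState);
// var state = (GameState)CacheClient.Cache.Get("gamestate:42");
```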

Furthermore, New Relic reports that the cache takes 15 ms on average. In many cases, 15 ms is slower than the database itself!

The object we are sticking into the cache consists of two byte arrays: one has a length of about 421 and the other a length of 8.

I don't really understand why we see increased response times with the introduction of the cache. Is a byte array not cache friendly?

My class looks like this (the only two properties that get populated before the object is shoved into the cache are the two byte arrays; everything else is left at default values):

public class GameState
{
    [Column(IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)]
    public int Id { get; set; }

    [Column(UpdateCheck = UpdateCheck.Never, Name = "game_id")]
    public int GameId { get; set; }

    [Column(UpdateCheck = UpdateCheck.Never, Name = "player_id")]
    public int PlayerId { get; set; }

    [Column(UpdateCheck = UpdateCheck.Never, DbType = "VarBinary(max)")] // has a length around 421
    public byte[] State { get; set; }

    [Column(UpdateCheck = UpdateCheck.Never, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)]
    public DateTime Created { get; set; }

    [Column(UpdateCheck = UpdateCheck.Never, Name = "update", IsDbGenerated = true, DbType = "timestamp")] // has a length of 8
    public byte[] TimeStamp { get; set; }
}



We talked with several Microsoft engineers and no one could give us any help as to why it was so slow. One engineer reported that the cache layer was built on top of SQL Azure which explained the high request times. Another engineer denied that claim, but wasn't exactly sure how Shared Caching was implemented.

We were never able to get Azure Cache performing well and ultimately switched from Azure to Amazon EC2. Once on EC2, on comparable hardware, our response time dropped to about 60-70 ms.

For anyone else considering this, here is what we learned in the switch.

SQL Azure is shared DB hosting. You do not get your own DB; you are on a server with a bunch of other DBs, and if you have any kind of decent traffic, you will get throttled. They kept telling us about some ticket-purchasing success story, but in that scenario they had 750 DBs to process the transactions. Sharding is no fun, and a better success story would be handling all those requests with one DB.

We use SSL, and having IIS terminate the SSL really kills your CPU. On Amazon, the ELB handles the SSL so your IIS boxes don't have to. This freed up the IIS boxes to handle requests faster.

Amazon lets you run memcache. Memcache is awesome. Having a lightning-fast cache layer (capable of growing well beyond 4 GB) took tremendous load off our DB.
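As an illustration only (we just used a standard .NET memcached client; Enyim's `MemcachedClient` shown here is one common choice, and the key name is made up):

```csharp
using Enyim.Caching;
using Enyim.Caching.Memcached;

// One shared client per process, same singleton idea as before.
var client = new MemcachedClient();

// Store a byte array under a made-up key.
byte[] state = new byte[421]; // stand-in for the real game state
client.Store(StoreMode.Set, "gamestate:42", state);

// Read it back; a null result is a miss, so fall through to the DB.
var cached = client.Get<byte[]>("gamestate:42");
```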

We made the switch back in January 2012, so it's possible Azure has gotten better in the last year; however, I have no plans to give it a second chance.

The performance of Azure Cache is not that satisfying. Basically, it's because Azure Cache does its own load balancing when communicating. But you can try enabling the local cache feature, which will improve read performance.
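For reference, local cache is turned on in the client's `dataCacheClient` configuration; a sketch, where the host name, object count, and TTL are placeholder values, not recommendations:

```xml
<dataCacheClient>
  <!-- Keep recently read objects in-process to avoid a network round trip. -->
  <localCache isEnabled="true"
              sync="TimeoutBased"
              objectCount="10000"
              ttlValue="300" /> <!-- TTL in seconds; placeholder -->
  <hosts>
    <host name="yourcache.cache.windows.net" cachePort="22233" /> <!-- placeholder endpoint -->
  </hosts>
</dataCacheClient>
```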