An old chestnut, this one. I have this argument on almost every contract, but I still don't know the definitive answer.
In a scenario where a web app references data that is fairly static in nature, I always propose using some caching strategy to avoid unnecessary round trips to the database.
This always prompts some database nut (there's always one) to challenge my plan by saying that there is no need to implement this caching because SQL Server caches everything itself and there is no unnecessary disk I/O.
In that case, why does a round trip to the DB perform so badly compared to the method that uses caching when analysed with a performance monitoring tool? Let's assume the DB is on the same box and there are no network issues.
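To make the proposed strategy concrete, here is a minimal sketch of the kind of in-process, time-limited cache I mean, written in Python for illustration; the class name, TTL value, and the `load_countries` loader are all hypothetical stand-ins, not part of any real data-access layer.

```python
import time
from typing import Any, Callable, Dict, Tuple


class TtlCache:
    """Minimal in-process cache with a time-to-live per entry."""

    def __init__(self, ttl_seconds: float = 300.0) -> None:
        self._ttl = ttl_seconds
        self._store: Dict[str, Tuple[float, Any]] = {}

    def get_or_load(self, key: str, loader: Callable[[], Any]) -> Any:
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self._ttl:
            return hit[1]          # served from memory: no DB round trip at all
        value = loader()           # only falls through to the database on a miss or expiry
        self._store[key] = (now, value)
        return value


# Hypothetical usage: load_countries() stands in for a real data-access call
# that would otherwise hit the database on every request.
def load_countries() -> list:
    # placeholder for something like: SELECT Code, Name FROM dbo.Country
    return [("GB", "United Kingdom"), ("US", "United States")]


cache = TtlCache(ttl_seconds=600)
countries = cache.get_or_load("countries", load_countries)
```

The point of the sketch is that a cache hit is just a dictionary lookup in the web app's own process, whereas even a fully buffer-cached query in SQL Server still pays for the connection, query execution, and result serialisation on every call.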