Caching Solution
Tech Talk, Hanoi, 24/08/2013
What is caching
- We can win or lose projects and customers if response time is poor. Caching is one of the most important techniques for improving performance, and it can be applied easily in most projects.
- Caching means saving frequently used resources in an area that is quick and easy to access.
- Caching can improve performance by orders of magnitude and save CPU, network bandwidth, and disk access.
- A cache is usually not permanent storage; it should provide very fast access to resources, but it has limited capacity (for example, RAM).
- A cache is volatile storage: your data can be evicted at any time if the machine runs short of memory, so always check for null when reading from the cache.
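The null-check rule above is the heart of the cache-aside pattern. A minimal Python sketch (the `SimpleCache` class and `load_from_db` callback are hypothetical stand-ins for your cache API and data layer):

```python
import time

class SimpleCache:
    """A tiny in-memory cache with absolute expiry (illustration only)."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # expired: behave like a miss
            return None
        return value

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

cache = SimpleCache()

def get_product(product_id, load_from_db):
    key = f"product:{product_id}"
    product = cache.get(key)
    if product is None:            # always handle a miss or eviction
        product = load_from_db(product_id)
        cache.set(key, product, ttl_seconds=60)
    return product
```

The caller never assumes the entry is present; a miss simply falls through to the slow path and repopulates the cache.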
Caching discussion
- Server-side caching:
  Introduction to popular data caching methods.
  A few patterns: cache cloning & dependency tree.
  Caching in ORM frameworks.
  Caching with load balancing & scalability.
  Output caching.
  Caching with high concurrency.
- Downstream caching:
  CDN caching.
  Browser and proxy caching.
Introduction to popular data caching methods
Different ways of storing data for quick and easy access: sessions, static variables, application state, Http cache, distributed cache.
- Static variables and application state live in the global context; they can create memory leaks if not managed well and can increase contention, which decreases throughput.
- Sessions (in-proc mode): can only store per-user data and have a limited lifetime (session timeout).
- Http cache: supports cache dependencies (SQL, file, and cache-key dependency), absolute expiration, sliding expiration, and cache-invalidation callbacks. It is a local cache: super fast, but difficult to grow very large. The MS Enterprise Library Caching Application Block is a local cache similar to the Http cache, but it can also be used in WinForms applications.
- Distributed cache: can be shared by multiple servers; slower than a local cache, but can be designed to grow very large (e.g. Memcached).
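The difference between absolute and sliding expiration (both mentioned above) can be sketched briefly; this `SlidingEntry` class is a hypothetical illustration, not the Http cache API:

```python
import time

class SlidingEntry:
    """Sliding expiration: each read pushes the expiry window forward.
    Absolute expiration would fix expires_at once and never renew it."""
    def __init__(self, value, window_seconds):
        self.value = value
        self.window = window_seconds
        self.expires_at = time.monotonic() + window_seconds

    def get(self):
        if time.monotonic() > self.expires_at:
            return None                                   # idle too long: expired
        self.expires_at = time.monotonic() + self.window  # renew the window
        return self.value
```

Sliding expiration keeps hot entries alive indefinitely, while absolute expiration guarantees an upper bound on staleness.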
A few patterns:
Cache cloning & dependency tree
• Cache cloning pattern:
  – We need to prevent other threads from seeing changes while we are editing an object in memory.
  – Clone the cached object to create a new object for the editing user to work on.
  – Build this cloning pattern into your business object models.
[Diagram: a writing request triggers "create writable clone", producing a writable clone from the cached object.]
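A minimal sketch of the cloning pattern, using the standard library's `copy.deepcopy` (the `Product` model and helper functions are hypothetical examples of a business object layer):

```python
import copy
import threading

_obj_cache = {}
_obj_lock = threading.Lock()

class Product:
    def __init__(self, product_id, name, price):
        self.product_id = product_id
        self.name = name
        self.price = price

def get_for_read(key):
    """Readers share the cached instance and must not mutate it."""
    with _obj_lock:
        return _obj_cache.get(key)

def get_for_edit(key):
    """Writers get a deep copy, so readers never see half-edited state."""
    with _obj_lock:
        cached = _obj_cache.get(key)
        return copy.deepcopy(cached) if cached is not None else None

def save(key, edited):
    """After the edit is committed, swap the cached instance atomically."""
    with _obj_lock:
        _obj_cache[key] = edited
```

Readers are never exposed to intermediate state: the clone is only published back to the cache in one atomic swap after the edit completes.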
A few patterns:
Cache cloning & dependency tree
• Cache dependency, and how to build a hierarchical cache key system:
  - The Http cache supports caching an object under its own key while making it depend on another cache key; when that parent key is cleared, the object is also removed from the cache.
  - Create a master key "Master" with an empty object, then create "Sub-master1" and "Sub-master2", which depend on the "Master" key, also with empty objects. Then cache some objects that depend on "Sub-master1" and other objects that depend on "Sub-master2".
[Diagram: a tree with the "Master" cache key at the root; sub-master keys 1..4 depend on it, and cached objects 1..n hang off each sub-master. Clearing any key evicts its whole subtree.]
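The Http cache implements key dependencies natively; the `DependencyCache` below is only a pure-Python illustration of the same idea, showing how clearing a key cascades down the tree:

```python
class DependencyCache:
    """Cache where clearing a key also evicts everything depending on it."""
    def __init__(self):
        self._store = {}       # key -> value
        self._children = {}    # key -> set of dependent keys

    def set(self, key, value, depends_on=None):
        self._store[key] = value
        if depends_on is not None:
            self._children.setdefault(depends_on, set()).add(key)

    def get(self, key):
        return self._store.get(key)

    def clear(self, key):
        """Remove a key and, recursively, its whole dependency subtree."""
        self._store.pop(key, None)
        for child in self._children.pop(key, set()):
            self.clear(child)

dep_cache = DependencyCache()
dep_cache.set("Master", object())
dep_cache.set("Sub-master1", object(), depends_on="Master")
dep_cache.set("order:1", {"id": 1}, depends_on="Sub-master1")
```

Clearing "Sub-master1" evicts only its subtree; clearing "Master" flushes everything, which is exactly the coarse-vs-fine invalidation control the hierarchy buys you.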
Caching with ORM frameworks
• Most ORM frameworks (Entity Framework, LINQ to SQL, NHibernate, etc.) have a first layer of caching, but it is a problem for distributed applications, where the context is often out of scope.
• Caching objects outside the ORM layer (a layer-2 cache) works in a distributed environment, but manipulating those objects is difficult because they are not associated with any context. Layer-2 cache problems in ORMs: http://queue.acm.org/detail.cfm?id=1394141
• There was an open-source effort to develop a level-2 cache layer for Entity Framework: http://code.msdn.microsoft.com/EFProviderWrappers
• That solution works quite close to the DB layer: we have to analyze SQL queries, commands, and tables to build the cache key. It doesn't cache queries with output parameters, and it offers limited cache-policy control.
Caching with ORM frameworks
A solution for ORM layer-2 caching:
- Use only a single request or a single transaction scope per ORM context.
- Disable tracking in the context for normal GET requests, which saves some performance; then put the objects into the cache.
- In POST requests that update/delete, after validating inputs, enable tracking in the context, re-get the objects from the DB, update the tracked objects with the new properties (perhaps checking a timestamp for concurrency), and save to the DB.
- In POST requests that insert/add, this "re-get" step is not needed.
- Separate the ORM's entity model from the business model. This logic is implemented in the business model and business functions, and it is independent of the ORM technology (it could work with LINQ2SQL, Entity Framework, NHibernate, or Azure Table Service).
Caching with load balancing &
scalability
In a load-balanced environment there are many servers, so where do we put the cache? If every server has its own local cache, how do we clear and update it? And what if we need to grow the cache capacity large or very large?
• Synchronized local cache:
  - Provide a cache wrapper layer over a good local cache option such as the Http cache.
  - In this wrapper, provide the infrastructure to connect all servers together, so that when one server clears a cache key, it sends this message to the others.
  - WCF (over TCP or UDP) or web services can be used as the communication infrastructure.
  - Works quite well for big configurations, but can't grow very large.
[Diagram: two web apps, each with its own local cache, linked by a communication channel that propagates invalidation messages.]
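The invalidation broadcast can be sketched in-process. The `Node` class below is hypothetical; a real deployment would put WCF, web services, or a message bus where the direct peer calls are:

```python
class Node:
    """One web server with a local cache that broadcasts invalidations."""
    def __init__(self):
        self.cache = {}
        self.peers = []               # other Node instances in the cluster

    def set(self, key, value):
        self.cache[key] = value       # local writes are not broadcast here

    def clear(self, key, propagate=True):
        self.cache.pop(key, None)
        if propagate:                 # notify peers, but do not loop forever
            for peer in self.peers:
                peer.clear(key, propagate=False)

node_a, node_b = Node(), Node()
node_a.peers.append(node_b)
node_b.peers.append(node_a)
```

Note the `propagate=False` on the fan-out call: without it, two peers would bounce the same invalidation back and forth indefinitely.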
Caching with load balancing &
scalability
• Distributed cache:
  - First variant: cache data is stored on a separate/dedicated cache server (or servers).
  - Second variant: cache data is distributed among the web servers themselves.
  - Both configurations are designed to grow very large; the second variant in particular brings more cache capacity and CPU power whenever a new server is added.
  - A very successful and common implementation of this distributed cache is Memcached: http://memcached.org/
[Diagram: each web application uses a Memcached client that talks to multiple Memcached servers.]
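How a Memcached-style client spreads keys across servers can be sketched with simple hash-based sharding. This is a toy illustration of the idea, not the Memcached client algorithm; production clients typically use consistent hashing so that adding a server moves only a fraction of the keys:

```python
import hashlib

def pick_server(key, servers):
    """Map a key deterministically to one of the cache servers."""
    digest = hashlib.md5(key.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(servers)
    return servers[index]

# Hypothetical server addresses, for illustration only.
servers = ["cache1:11211", "cache2:11211", "cache3:11211"]
```

Because every client hashes the same way, any web server can find any key without coordination, which is what lets the cache tier scale out.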
Caching with load balancing &
scalability
• Scale up: increase your server hardware to get more power (more CPU, RAM, disk space); roughly 4x the cost for only 2x the performance.
• Scale out: adding more server boxes to the cluster should increase system power roughly linearly.
• It's usually hard to scale out databases:
  - Sharding: we can split data across many database servers, but the algorithms to retrieve/save data become much more complex, and transactional integrity across shards is difficult.
  - NoSQL: not an option for OLTP, immediate consistency, and real-time analytics.
  - SQL replication: usually for backup only; it can share the read load if we can tolerate delivering slightly out-of-date data to end users.
Caching with load balancing &
scalability
• Memcached already reduces database reads a lot; to further reduce the database's load from analytical and search queries, we can introduce a Solr slave on each server box.
• We can also consider a "cache first, save later" strategy for non-critical data.
[Diagram: each web server runs Memcached and a Solr slave; a Solr master replicates the search index from the database server to the slaves.]
Output caching
• If our data doesn't change and our logic in code doesn't change, then the final output (HTML pages) won't change, so why don't we cache it?
• Output caching usually means caching the final output of a request (HTML pages, XML content, JSON content, etc.) after all server processing is done. It also serves content very early, before most server processing starts.
• Output caching kicks in at a very early stage, and also very late, in the life cycle of a request.
• To implement output caching, we use a filter stream, assigned to the HttpResponse.Filter property, to tap into all content written to the response object, and we put the content into the output cache at step 19 of the request pipeline. Clearing the output cache can be a challenge.
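The HttpResponse.Filter mechanism is ASP.NET-specific, but the early-exit/late-capture idea can be sketched as a response-wrapping decorator. This is a hedged Python sketch; the handler and cache names are hypothetical:

```python
_output_cache = {}  # url -> rendered body

def with_output_cache(handle_request):
    """Wrap a request handler: serve cached output early, capture output late."""
    def wrapper(url):
        cached = _output_cache.get(url)
        if cached is not None:
            return cached              # early exit: skip all server processing
        body = handle_request(url)     # full processing runs only on a miss
        _output_cache[url] = body      # tap the final output into the cache
        return body
    return wrapper
```

This also shows why invalidation is the hard part: nothing here knows which data the cached page was built from, so clearing the right entries when data changes needs a separate mechanism (e.g. the key-dependency tree described earlier).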
Caching with high concurrency
• Suppose that for some reason it takes a long time, say 500 ms, to fetch a resource. If your site has very high concurrency and the cache entry for that resource has expired, many threads may all start fetching the resource until the first request succeeds and adds it to the cache.
• Consider locking with timeouts to avoid indefinite locking. Use cross-thread locks, and don't hold too many threads waiting.
[Diagram: when the cache for a slow resource expires (or the resource is updated), only the first request is allowed to fetch it; other requests are locked out and either wait, or are served the extended old cache entry, until the first request adds the fresh resource to the cache and releases the lock.]
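Locking on a cache miss (often called preventing a cache stampede or "thundering herd") can be sketched as follows. A hedged Python sketch, not production code; on lock timeout it falls back to loading directly rather than waiting forever:

```python
import threading

_res_cache = {}
_key_locks = {}
_key_locks_guard = threading.Lock()

def _lock_for(key):
    with _key_locks_guard:
        return _key_locks.setdefault(key, threading.Lock())

def get_or_load(key, load, timeout=1.0):
    """Only one thread loads an expired entry; others wait, with a timeout."""
    value = _res_cache.get(key)
    if value is not None:
        return value
    lock = _lock_for(key)
    if lock.acquire(timeout=timeout):
        try:
            value = _res_cache.get(key)  # double-check: a peer may have loaded it
            if value is None:
                value = load()           # only this thread hits the slow resource
                _res_cache[key] = value
            return value
        finally:
            lock.release()
    return load()  # lock timed out: load directly instead of waiting longer
```

The double-check inside the lock is essential: the waiting threads find the fresh value there and skip the slow load entirely.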
Downstream caching
• CDN networks: what they are and the benefits we can get:
  – Content Delivery Network: a massive number of servers distributed across the globe to deliver our content more quickly to users anywhere.
  – Akamai as an example: they have >100,000 servers and provide a one-hop-away connection for 70% of internet users.
  – They can deliver both static and dynamic content, provide a fallback site, etc.
  – They are maybe the best defense against DoS attacks.
• Browser and proxy caching:
  – The browser can help us cache static files (JS, CSS, images), but also whole HTML pages and AJAX responses.
  – The browser determines caching policy from response headers: Cache-Control, ETag, Last-Modified, Expires.
  – Most proxy caches also work from response headers, like browsers do, and some of them don't cache resources whose URI contains a query string.
Conclusion
• Cache what you can, wherever and whenever you can 
• Email me for discussion: hoang.tran@niteco.se