commit bd8f23d7d66279d1659db0fa075ebca59235b2fc
Author: Erik C. Thauvin
+
+
+Reports:
+
+
+
+
+Documentation for the currently developing release can always be found on the OSCache Wiki.
+ If you have edits or corrections to make to the documentation here, you may edit them directly on the wiki as well.
+
+
+
\ No newline at end of file
diff --git a/docs/meta.xml b/docs/meta.xml
new file mode 100644
index 0000000..4e782ff
--- /dev/null
+++ b/docs/meta.xml
@@ -0,0 +1,220 @@
+About
+Overview
+Feature List
+Download
+Changelog
+Requirements
+
+
+
+ +
Besides the JSP tag library and the CacheFilter, you can use OSCache through its straightforward API. You can use the GeneralCacheAdministrator to create, flush and administer the cache. The GeneralCacheAdministrator holds a cache instance and delegates to the Cache's methods. Furthermore, the GeneralCacheAdministrator is in charge of loading cache.properties and creating a cache instance with the properties defined there. You have to store the GeneralCacheAdministrator instance in a static field or use a singleton pattern so that you always access the same GeneralCacheAdministrator (see the sketch after the examples below).

Typical use with fail over
+ String myKey = "myKey"; +String myValue; +int myRefreshPeriod = 1000; +try { + // Get from the cache + myValue = (String) admin.getFromCache(myKey, myRefreshPeriod); +} catch (NeedsRefreshException nre) { + try { + // Get the value (probably from the database) + myValue = "This is the content retrieved."; + // Store in the cache + admin.putInCache(myKey, myValue); + } catch (Exception ex) { + // We have the current content if we want fail-over. + myValue = (String) nre.getCacheContent(); + // It is essential that cancelUpdate is called if the + // cached content is not rebuilt + admin.cancelUpdate(myKey); + } +}+ Typical use without fail over+ +
String myKey = "myKey";
String myValue;
int myRefreshPeriod = 1000;
boolean updated = false;
try {
    // Get from the cache
    myValue = (String) admin.getFromCache(myKey, myRefreshPeriod);
} catch (NeedsRefreshException nre) {
    try {
        // Get the value (probably from the database)
        myValue = "This is the content retrieved.";
        // Store in the cache
        admin.putInCache(myKey, myValue);
        updated = true;
    } finally {
        if (!updated) {
            // It is essential that cancelUpdate is called if the
            // cached content could not be rebuilt
            admin.cancelUpdate(myKey);
        }
    }
}

Note
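The examples above assume that an admin instance is already available. A minimal sketch of the singleton approach mentioned above - the class name CacheAdministratorHolder is hypothetical and not part of OSCache:

import com.opensymphony.oscache.general.GeneralCacheAdministrator;

public class CacheAdministratorHolder {

    // created once; GeneralCacheAdministrator loads oscache.properties itself
    private static final GeneralCacheAdministrator ADMIN = new GeneralCacheAdministrator();

    private CacheAdministratorHolder() {
        // no instances
    }

    public static GeneralCacheAdministrator getInstance() {
        return ADMIN;
    }
}

Every part of the application then calls CacheAdministratorHolder.getInstance() and works against the same cache.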
|
+
Introduction

OSCache comes with a servlet filter that enables you to transparently cache entire pages of your website, and even binary files. Caching of binary files is extremely useful when they are generated dynamically, e.g. PDF files or images. In addition, using the last modified header significantly reduces transaction overhead and server load, which speeds up the server response time.

How to configure OSCache to cache entire servlet responses is described on the configuration page of the CacheFilter. This short tutorial demonstrates how to make your web site more responsive and save load on your server. With the CacheFilter, users get a faster-loading site and are more likely to visit it again.

Improvements

Major improvements have been made to the CacheFilter in releases 2.2 and 2.3:
Cacheable Content+ +
Configuring the filter+ +Example 1+ +To configure the filter, add something like the following to your web.xml file (obviously you will want to set the URL pattern to match only the content you want to cache; this example will cache all JSP pages for 10 minutes in session scope): + +
<filter>
    <filter-name>CacheFilter</filter-name>
    <filter-class>com.opensymphony.oscache.web.filter.CacheFilter</filter-class>
    <init-param>
        <param-name>time</param-name>
        <param-value>600</param-value>
    </init-param>
    <init-param>
        <param-name>scope</param-name>
        <param-value>session</param-value>
    </init-param>
</filter>

<filter-mapping>
    <filter-name>CacheFilter</filter-name>
    <url-pattern>*.jsp</url-pattern>
</filter-mapping>

The default duration is one hour and the default scope for the cache is application scope. You can change these settings using initialization parameters.

Example 2

Initializing the last modified header based on the current time reduces transaction overhead and server load, because the browser can ask the server whether the cached content in the browser cache has changed on the server since the last request. If the content wasn't changed, the server will respond with status 304 (Not Modified).

Furthermore, if the expires parameter is set to time, the server will send the date and time after which the content is considered stale. Common browsers will then not request the server again until the cached content is considered stale. The example will cache the content for one hour by default, and the expires date and time will be calculated based on the creation time and the time parameter (default is one hour).
<filter>
    <filter-name>CacheFilterStaticContent</filter-name>
    <filter-class>com.opensymphony.oscache.web.filter.CacheFilter</filter-class>
    <init-param>
        <param-name>expires</param-name>
        <param-value>time</param-value>
    </init-param>
</filter>

<filter-mapping>
    <filter-name>CacheFilterStaticContent</filter-name>
    <url-pattern>*.jsp</url-pattern>
</filter-mapping>

Using the filter

Example 1: ICacheKeyProvider

A simple example of how to use the ICacheKeyProvider parameter of the CacheFilter. The cache key is constructed from the HTTP request URI and two request parameters, pageid and pagination.
import javax.servlet.http.HttpServletRequest;

import com.opensymphony.oscache.base.Cache;
import com.opensymphony.oscache.web.ServletCacheAdministrator;
import com.opensymphony.oscache.web.filter.ICacheKeyProvider;

public class ExampleCacheKeyProvider implements ICacheKeyProvider {

    public String createCacheKey(HttpServletRequest httpRequest, ServletCacheAdministrator scAdmin, Cache cache) {

        // buffer for the cache key
        StringBuffer buffer = new StringBuffer(100);

        // part 1 of the key: the request uri
        buffer.append(httpRequest.getRequestURI());

        // separation
        buffer.append('_');

        // part 2 of the key: the page id
        buffer.append(httpRequest.getParameter("pageid"));

        // separation
        buffer.append('_');

        // part 3 of the key: the pagination
        buffer.append(httpRequest.getParameter("pagination"));

        return buffer.toString();
    }

}

You can also use session attribute values for the cache key, for example when request parameters aren't available or when security settings have to be added to the cache key.

Example 2: Flush

The flush example shows how to flush a CacheFilter with scope application based on group names. In this example the HTTP servlet request of the user is required to obtain the cache object.
import com.opensymphony.oscache.base.Cache;
import com.opensymphony.oscache.web.ServletCacheAdministrator;

import java.util.Collection;
import java.util.Iterator;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.jsp.PageContext;

public class OSCacheAdmin {

    /**
     * flush the CacheFilter according to dependent group
     *
     * @param request the HttpServletRequest of the user
     * @param groupNames a string collection of group names
     */
    public static void flushCacheGroup(HttpServletRequest request, Collection groupNames) {
        Cache cache = ServletCacheAdministrator.getInstance(request.getSession().getServletContext()).getCache(request, PageContext.APPLICATION_SCOPE);
        Iterator groups = groupNames.iterator();
        while (groups.hasNext()) {
            String group = (String) groups.next();
            cache.flushGroup(group);
        }
    }
}

If your CacheFilter is running with scope session, you have to get the cache as follows:
+ Cache cache = ServletCacheAdministrator.getInstance(request.getSession(true).getServletContext()).getCache(request, PageContext.SESSION_SCOPE);
+ |
+
OSCache comes with a servlet filter that enables you to transparently cache entire pages of your website, and even binary files. Caching of binary files is extremely useful when they are generated dynamically, e.g. PDF files or images.

A tutorial describes how to cache entire pages of your website and what performance improvements can be achieved with the CacheFilter.

Beginning with release 2.4 you are able to set/override the CacheFilter initialization parameters at runtime.

Cacheable Content
Configuring the filter+ +To configure the filter, use the oscache.properties to configure the core settings of OSCache and add something like the following to your web.xml file: + +
<filter>
    <filter-name>CacheFilter</filter-name>
    <filter-class>com.opensymphony.oscache.web.filter.CacheFilter</filter-class>
    <init-param>
        <param-name>time</param-name>
        <param-value>600</param-value>
    </init-param>
    <init-param>
        <param-name>scope</param-name>
        <param-value>session</param-value>
    </init-param>
</filter>

<filter-mapping>
    <filter-name>CacheFilter</filter-name>
    <url-pattern>*.jsp</url-pattern>
</filter-mapping>

Obviously you will want to set the URL pattern to match only the content you want to cache; this example will cache all JSP pages for 10 minutes in session scope. The default duration is one hour and the default scope for the cache is application scope.

If the ICacheKeyProvider parameter isn't set, the CacheFilter will use the HTTP request URI and the query string to create the cache key.

You can change the CacheFilter settings using the following initialization parameters.

Parameter: time

The time parameter sets the cache time (in seconds) for the content. The default cache time is one hour.

Specifying -1 (indefinite expiry) as the cache time will ensure the content does not become stale until it is either explicitly flushed or the expires refresh policy causes the entry to expire.

Parameter: scope

The scope parameter lets you set the scope to cache content in. Valid values for the scope are application (default) and session.

Parameter: cron (NEW! Since 2.3)

A cron expression that determines when the page content will expire. This allows content to be expired at particular dates and/or times, rather than once a cache entry reaches a certain age. See Cron Expressions to read more about this attribute. Note that the (default) time value is still evaluated, so when a cron expression is used the time value should be set to indefinite expiry (a short web.xml example appears after the expires parameter below).

Parameter: fragment (NEW! Since 2.2)

Defines whether the filter handles fragments of a page. Acceptable values are auto for auto detection, no for false and yes for true. The default value is auto detection, which checks the javax.servlet.include.request_uri request attribute. Fragments of a page shouldn't be gzipped, nor should they evaluate the last modified header.

Parameter: nocache (NEW! Since 2.2)

Defines which objects shouldn't be cached. Acceptable values are off (default) to cache all objects, and sessionIdInURL to skip caching a page when the session id is contained in the URL.

Parameter: lastModified (NEW! Since 2.2)

Defines whether the last modified header will be sent in the response. Acceptable values are off to never send the header, even if it is set in the filter chain; on to send it if it is set in the filter chain; and initial (default) to set the last modified information based on the current time.

Parameter: max-age (NEW! Since 2.3.1)

Specifies the maximum amount of time in seconds that the cached content will be considered fresh in the browser's cache. The browser will serve the content from its own cache for that amount of time without requesting the web server again. The default max-age time is 60 seconds. Combined with the last modified header this significantly reduces transaction overhead and server load, which speeds up the server response time. Other accepted values are no init, which disables initialization of the max-age cache control, and time, which sets max-age based on the time parameter and the creation time of the content (expiration timestamp minus current timestamp) on each request.

Parameter: expires (NEW! Since 2.2)

Defines whether the expires header will be sent in the response. Acceptable values are off to never send the header, even if it is set in the filter chain; on (default) to send it if it is set in the filter chain; and time, in which case the expires information will be initialized based on the time parameter and the creation time of the content.
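As a concrete illustration of the cron parameter described above, here is a hypothetical filter definition (the filter name, URL pattern and cron value are examples only, not taken from the original documentation); the time parameter is set to -1 so that expiry is driven entirely by the cron expression:

<filter>
    <filter-name>CacheFilterNightly</filter-name>
    <filter-class>com.opensymphony.oscache.web.filter.CacheFilter</filter-class>
    <init-param>
        <!-- indefinite age: expiry is controlled by the cron expression below -->
        <param-name>time</param-name>
        <param-value>-1</param-value>
    </init-param>
    <init-param>
        <!-- expire the cached pages every day at 3:00 am -->
        <param-name>cron</param-name>
        <param-value>0 3 * * *</param-value>
    </init-param>
</filter>

<filter-mapping>
    <filter-name>CacheFilterNightly</filter-name>
    <url-pattern>*.jsp</url-pattern>
</filter-mapping>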
Parameter: ICacheKeyProvider (NEW! Since 2.2)

Specify a class which implements the interface ICacheKeyProvider. A developer can implement a class which provides cache keys based on the request, the servlet cache administrator and the cache.

Parameter: ICacheGroupsProvider (NEW! Since 2.2)

Specify a class which implements the interface ICacheGroupsProvider. A developer can implement a class which provides cache groups based on the request, the servlet cache administrator and the cache.

Parameter: EntryRefreshPolicy (NEW! Since 2.3)

Specify a class which implements the interface EntryRefreshPolicy. A developer can implement a class which provides a custom cache invalidation policy for a specific cache entry. If not specified, the default policy is timed entry expiry as specified with the time parameter described above.

Parameter: disableCacheOnMethods (NEW! Since 2.4)

Specify HTTP method names in a comma-separated list for which caching should be disabled. The default value is null, which caches all requests regardless of the method name. See HttpServletRequest#getMethod, e.g.:
<init-param>
    <param-name>disableCacheOnMethods</param-name>
    <param-value>POST,PUT,DELETE</param-value>
</init-param>

Parameter: oscache-properties-file (NEW! Since 2.4)

By specifying an OSCache properties file per CacheFilter, the developer can run multiple caches, each with a different configuration tailored to the requirements of the application. In each properties file the developer has to define a unique cache.key, otherwise the default properties file is used. If the parameter is not specified, the default properties file will be used. The file has to be put on the classpath, e.g. in WEB-INF/classes.
+
+ Discussion+ +Lars wrote: DiskPersistence, SoftReferenceCache etc. would implement the Command interface of Commons Chain or a new interface of OSCache. + +http://jakarta.apache.org/commons/chain/ Andres wrote: I think disk persistence is still in, although I don't think it will be like it is now. We will be accepting Object keys, so any cache impl will need to accept them. I have been thinking about a lightweight object db that has persistence built-in but I'm not sure. + +This is definitely an interesting topic and I'd like to discuss it more. Lars wrote: Andres wrote: Lars wrote: 1.) How do you want to synchronize the access to the same cache content? In OSCache 2 this is done by the EntryUpdateState based on the key. Some other points: I added a simple class diagramm and saved the diagramm in the Fujaba format. + +Andres wrote: + +The cache chain should have no knowledge of what is in any of the cache links. However, the issue of eviction is clear. When a put() is called, the link should return an evicted entry or null if the cache is not full. The chain will then know if it needs to continue the put into the next link. To clarify the interfaces, I think a Chain interface should extend Map. The Link and EvictionAlgorithm should themselves be interfaces. Link could have implementations such as memory, disk, database. EvictionAlgorithm could have implementations such as LRU, FIFO, etc.... + +1.) In my branch, I have synchronized the entire cache on each cache access. I think this will still be fast enough and will surely be more stable. basically, get, put, and remove are sync'd. I do not think we need to achieve a highly concurrent cache in order to provide a solution that is hundreds of times faster than db or disk access. + +However, we could add functionality the improves performance but does not cause deadlocks, such as a write behind feature on puts, so that puts get queued and another thread does the work when it has time. + +2.) I don't think this would be wise. I don't think the chain should have knowledge of the keys. I think all it should have is references to the links and stateless logic. Either way each link would need to keep its own keys, therefore putting them in the chain would add another map that would have to be accessed and slow performance. + +There are 3 places I believe the keys must exist: in the store (duh), in the algorithm (or we could generalize this as any metrics collector), and in the groups map. + +Group functionality is a similar issue. I had wanted to drop this functionality but it seems the people that use cache tags (I never have yet) really depend on them. This functionality is unique to OSCache as far as I am aware. 3./4.) Yeah, that is sort of borrowed code and is not necessary. However, we need to be mindful of the access to the listener list. The easiest way is to probably make the list implementation a SynchronizedArrayList or something. + +The way I am thinking the current code in my branch could be moved over to a chain model is: +
Lars wrote: The default CacheChain should be the SimplePipeCacheChain. The SizeBasedCacheChain can be implemented as part of a 3.1 release. + +Scenarios to be checked and tested+ +Configuration with a LRU algorithm: (1) MemoryCache - Scenario A: Get for a object in SoftRefCache+ +
Scenario B: Get for a object in DiskPersistCache+ +
Scenario C: Put a new object+ +
Scenario D: Put a stale object or get a stale object+ +TODO + + |
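To make the proposal above more concrete, here is one possible reading of the Link and EvictionAlgorithm interfaces being discussed. The names and signatures below are invented for illustration only and are not OSCache source code:

// Hypothetical sketch of the interfaces discussed above (not OSCache source).
interface Link {
    /**
     * Stores an entry in this link. Returns the entry that was evicted to make
     * room, or null if the link still had capacity - the chain uses this return
     * value to decide whether the put must continue into the next link.
     */
    Object put(Object key, Object value);

    Object get(Object key);

    Object remove(Object key);
}

// Possible implementations: LRU, FIFO, ...
interface EvictionAlgorithm {
    /** Lets the algorithm update its metrics on every access. */
    void keyAccessed(Object key);

    /** Picks the key to evict when the link is full. */
    Object selectKeyToEvict();
}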
+
+ Release Notes+
See also JIRA - Change Log or read + the complete release notes at once. + + |
+
+ New in OSCache 2.0 is support for clustering of caches. OSCache currently ships with implementations that allow you to use either JavaGroups or JMS as the underlying broadcast protocol. + +Caches across a cluster only broadcast messages when flush events occur. This means that the content of the caches are built up independently on each server, but whenever content becomes stale on one server it is made stale on them all. This provides a very high performing solution since we never have to pass cached objects around the cluster. And since there is no central server that is in charge of the cluster, the clustering is very robust. + +Configuring OSCache to cluster is very simple. Follow either the JMS or the JavaGroups instructions below depending on which protocol you want to use. + +JMS Configuration+ +Configure your JMS server. OSCache requires that a JMS ConnectionFactory and a Topic are available via JNDI. See your JMS server's documentation for details. + +Add the JMS broadcasting listener to your oscache.properties file like this: + +
+ cache.event.listeners=com.opensymphony.oscache.plugins.clustersupport.JMSBroadcastingListener ++ (Note that this listener requires JMS 1.1 or higher, however legacy support for 1.0.x is also provided. If your JMS server only supports JMS 1.0.x then use JMS10BroadcastingListener instead of JMSBroadcastingListener. The rest of this documentation applies equally to both the 1.1 and 1.0 listeners.) + +The JMS listener supports the following configuration parameters: + +
If you are running OSCache from a standalone application, or are not running in an environment where new InitialContext() will find your JNDI InitialContextFactory or provider URL, you will have to specify them either in a jndi.properties file or as system properties. See the InitialContext documentation for details.

JavaGroups Configuration

Just make sure you have the jgroups-all.jar file on your classpath (for a webapp, put it in WEB-INF/lib), and add the JavaGroups broadcasting listener to your oscache.properties file like this:
cache.event.listeners=com.opensymphony.oscache.plugins.clustersupport.JavaGroupsBroadcastingListener

In most cases, that's it! OSCache will now broadcast any cache flush events across the LAN. The jgroups-all.jar library is not included with the binary distribution due to its size, however you can obtain it either by downloading the full OSCache distribution, or by visiting the JavaGroups website.

If you want to run more than one OSCache cluster on the same LAN, you will need to use different multicast IP addresses. This allows the caches to exist in separate multicast groups and therefore not interfere with each other. The IP to use can be specified in your oscache.properties file by the cache.cluster.multicast.ip property. The default value is 231.12.21.132, however you can use any class D IP address. Class D addresses fall in the range 224.0.0.0 through 239.255.255.255 (a combined example appears at the end of this page).

If you need more control over the multicast configuration (e.g. setting network timeout or time-to-live values), you can use the cache.cluster.properties configuration property. Use this instead of the cache.cluster.multicast.ip property. The default value is:
+ UDP(mcast_addr=231.12.21.132;mcast_port=45566;ip_ttl=32;\ +mcast_send_buf_size=150000;mcast_recv_buf_size=80000):\ +PING(timeout=2000;num_initial_members=3):\ +MERGE2(min_interval=5000;max_interval=10000):\ +FD_SOCK:VERIFY_SUSPECT(timeout=1500):\ +pbcast.NAKACK(gc_lag=50;retransmit_timeout=300,600,1200,2400,4800;max_xmit_size=8192):\ +UNICAST(timeout=300,600,1200,2400):\ +pbcast.STABLE(desired_avg_gossip=20000):\ +FRAG(frag_size=8096;down_thread=false;up_thread=false):\ +pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=true) ++ See the JavaGroups site for more information. In particular, look at the documentation of Channels in the User's Guide. + + |
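Putting the two clustering properties together, a minimal oscache.properties fragment for running a second, independent cluster on the same LAN might look like this (the multicast IP shown is just an example of a non-default class D address):

cache.event.listeners=com.opensymphony.oscache.plugins.clustersupport.JavaGroupsBroadcastingListener
# use a different multicast group so this cluster does not interfere with the default one
cache.cluster.multicast.ip=231.12.21.133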
+
+ OSCache 2.4.1+Release Notes+ +(7th July 2007 - by Lars Torunski) + +This + maintenance release of 2.4.1 has two bug fixes:+ +
JIRA Issue List+ +
OSCache 2.4

Release Notes

(1st May 2007 - by Lars Torunski)

New features and enhancements

Furthermore, the 2.4 release enhances the CacheFilter and allows better integration with the Spring Framework and JMX monitoring.
Upgrade Guide+ +
JIRA Issue List+ +OSCache 2.3.2+Release Notes+ +(23rd July 2006 - by Lars Torunski) + +This + maintenance release of 2.3.1 has one enhancement:+ +
Bug fixes:+ +
JIRA Issue List+ +
OSCache 2.3.1+Release Notes+ +(19th June 2006 - by Lars Torunski) + +This + maintenance release of 2.3 has one enhancement:+ +
Bug fixes:+ +
JIRA Issue List+ +
OSCache 2.3+Release Notes+ +(6th March 2006 - by Lars Torunski) + +This + release includes additional improvements to the CacheFilter:+ +
Disk + persistence:+ +
Further + changes are:+ +
JIRA Issue List+ +OSCache 2.2 + Final+Release + Notes - Final+ +(6th November 2005 - by Lars Torunski) + +Additionally + to the 2.2 RC + improvements, the final release was enhanced by:+ +
JIRA Issue List+ +OSCache 2.2 RC+Release + Notes - Release Candidate+ +(18th September 2005 - by Lars Torunski) + +Besides + bugs being fixed, major improvements have been made to the CacheFilter + in many ways:+ +
JIRA Issue List+ +OSCache 2.1.1+Release Notes+ +(1st May 2005 - by Andres March) + +Improvements:+ +
Bug Fixes:+ +
Changes + that may affect backwards compatibility:+ +
JIRA Issue List+ +OSCache 2.1+Release Notes+ +(18th January 2005 - by Andres March) +New Features:+
Improvements:+ +
Bug Fixes:+ +
OSCache 2.0.2+Release Notes+ +(22nd January 2004 - by Mathias Bogaert) + +Improvements:+ +
Bug Fixes:+ +
OSCache 2.0.1+Release Notes+ +(4th November 2003 - by Chris Miller) +Improvements:+ +
Bug Fixes:+ +
Changes + that may affect backwards compatibility:+ +
OSCache 2.0+Release Notes+ +(22nd September 2003 - by Chris Miller) +Improvements:+ +
Bug Fixes:+ +
OSCache 2.0 + beta 2+Release Notes+ +(4th August 2003 - by Chris Miller) +New Features:+ +
Bug Fixes:+ +
OSCache 2.0 + beta 1+Release Notes+ +(19th July 2003 - by Chris Miller) +New Features:+ +
Changes + that may affect backwards compatibility:+ +
Bug Fixes:+ +
Known + Problems: (these have existed for some time in the 1.x.x versions and + will be addressed in an upcoming 2.x.x release)+ +
OSCache 1.7.5+Release Notes+(5th January 2002 - by Mike Cannon-Brookes, mike@atlassian.com) + +
OSCache 1.7.4+Release Notes+ +(3rd December 2001 - by Francois Beauregard,
+ fbeauregard@pyxis-tech.com, and
OSCache 1.7.3+Release Notes+ +(11th November 2001 - by Francois Beauregard, + fbeauregard@pyxis-tech.com) + +
OSCache 1.7.2+Release Notes+ +(31st October 2001 - by Mike Cannon-Brookes, + mike@atlassian.com) + +
OSCache 1.7.1+Release Notes+ +(26th September 2001 - by Francois Beauregard,
+ fbeauregard@pyxis-tech.com, and
OSCache 1.7.0+Release Notes+ +(26th September 2001 - by Francois Beauregard,
+ fbeauregard@pyxis-tech.com, and This version include some refactoring, corrections and new
+ features.
OSCache 1.6.1+Release Notes+ +(16th September, 2001 - by Todd Gochenour, + tgochenour@peregrine.com) + +
OSCache 1.6+Release Notes+ +(5th September, 2001 - by Mike Cannon-Brookes, + mike@atlassian.com) + +
OSCache 1.5+Release Notes+ +(6th August, 2001 - by Todd Gochenour, + tgochenour@peregrine.com) + +
OSCache 1.3+Release Notes+ +(9th June, 2001 - by Mike Cannon-Brookes, mike@atlassian.com) + +
OSCache 1.2.5+Release Notes+ +(18th May, 2001 - by Mike Cannon-Brookes, mike@atlassian.com) + +
OSCache 1.2.1+Release Notes+ +(10th May, 2001 - by Mike Cannon-Brookes, mike@atlassian.com) + +
OSCache 1.2+Release Notes+ +(28th March, 2001 - by Mike Cannon-Brookes, mike@atlassian.com) + +
OSCache 1.1+Release Notes+ +(25th March, 2001 - by Mike Cannon-Brookes, mike@atlassian.com) + +
OSCache 1.0 + beta 2+Release Notes+ +(20th March, 2001 - by Mike Cannon-Brookes, mike@atlassian.com) + +
OSCache 1.0 + beta 1+Release Notes+ +(20th February, 2001 - by Mike Cannon-Brookes, + mike@atlassian.com) + +
OSCache 1.0 + beta 0+Release Notes+ +(26th November, 2000 - by Mike Cannon-Brookes, + mike@atlassian.com) + +
|
+
This guide only covers the configuration of OSCache by using the oscache.properties file. To see how to install OSCache and where to place the oscache.properties file, see the Installation Guide.

cache.memory

Valid values are true or false, with true being the default value. If you want to disable memory caching, just comment out or remove this line.

Note: disabling memory AND disk caching is possible but fairly stupid.

cache.capacity

The maximum number of items that a cache will hold. By default the capacity is unlimited - the cache will never remove any items. Negative values will also be treated as meaning unlimited capacity.

cache.algorithm

The default cache algorithm to use. Note that in order to use an algorithm the cache size must also be specified. If the cache size is not specified, the cache algorithm will be the unlimited cache regardless of the value of this property. If you specify a size but not an algorithm, the cache algorithm used will be com.opensymphony.oscache.base.algorithm.LRUCache.

OSCache currently comes with three algorithms: LRU (Least Recently Used, the com.opensymphony.oscache.base.algorithm.LRUCache class mentioned above), FIFO (First In First Out) and Unlimited.
cache.blocking+ +When a request is made for a stale cache entry, it is possible that another thread is already in the process of rebuilding that entry. This setting specifies how OSCache handles the subsequent 'non-building' threads. The default behaviour (cache.blocking=false) is to serve the old content to subsequent threads until the cache entry has been updated. This provides the best performance (at the cost of serving slightly stale data). When blocking is enabled, threads will instead block until the new cache entry is ready to be served. Once the new entry is put in the cache the blocked threads will be restarted and given the new entry. + +Note that even if blocking is disabled, when there is no stale data available to be served threads will block until the data is added to the cache by the thread that is responsible for building the data. + +cache.unlimited.disk+ +Indicates whether the disk cache should be treated as unlimited or not. The default value is false. In this case, the disk cache capacity will be equal to the memory cache capacity set by cache.capacity. + +cache.persistence.class+ +Specifies the class to use for persisting cache entries. This class must implement the PersistenceListener interface. OSCache comes with an implementation that provides filesystem based persistence. Set this property to com.opensymphony.oscache.plugins.diskpersistence.HashDiskPersistenceListener to enable this implementation. By specifying your own class here you should be able to persist cache data using say JDBC or LDAP. NOTE: This class hashes the toString() of the object being cached to produce the file name of the entry. If you prefer readable file names, the parent DiskPersistenceListener can still be used but it will have issues with illegal filesystem characters or long names. +
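Putting the core properties described on this page together, an illustrative oscache.properties fragment might look like this (the values are examples only, not recommendations):

cache.memory=true
# limit the cache to 1000 entries so the eviction algorithm actually applies
cache.capacity=1000
cache.algorithm=com.opensymphony.oscache.base.algorithm.LRUCache
# serve slightly stale content instead of blocking while an entry is rebuilt
cache.blocking=false
# optional: persist entries to disk using the hashed file name variant
cache.persistence.class=com.opensymphony.oscache.plugins.diskpersistence.HashDiskPersistenceListener
cache.path=/opt/myapp/cache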
cache.path+ +This specifies the directory on disk where the caches will be stored. The directory will be created if it doesn't already exist, but remember that OSCache must have permission to write to this location. Avoid sharing the same cache path between different caches, because OSCache has not been designed to handle this. + +
+ cache.path=c:\\myapp\\cache + or *ix: + cache.path=/opt/myapp/cache ++ cache.persistence.overflow.only (NEW! Since 2.1)+ +Indicates whether the persistence should only happen once the memory cache capacity has been reached. The default value is false for backwards compatibility but the recommended value is true when the memory cache is enabled. This property drastically changes the behavior of the cache in that the persisted cache will now be different then what is in memory. + +cache.event.listeners+ +This takes a comma-delimited list of fully-qualified class names. Each class in the list must implement one (or more) of the following interfaces: + +
No listeners are configured by default, however some ship with OSCache that you may wish to enable: + +
It is also of course quite straightforward to write your own event listener. See the JavaDoc API for further details and Statistics for an example. + + +cache.key+ +This is the key that will be used by the ServletCacheAdministrator (and hence the custom tags) to store the cache object in the application and session scope. The default value when this property is not specified is "__oscache_cache". If you want to access this default value in your code, it is available as com.opensymphony.oscache.web.ServletCacheAdministrator.DEFAULT_CACHE_KEY. + +cache.use.host.domain.in.key+ +If your server is configured with multiple hosts, you may wish to add host name information to automatically generated cache keys. If so, set this property to true. The default value is false. + + +Additional Properties+ +In additon to the above basic options, any other properties that are specified in this file will still be loaded and can be made available to your event handlers. For example, the JavaGroupsBroadcastingListener supports the following additional properties: + +cache.cluster.multicast.ip+ +The multicast IP to use for this cache cluster. Defaults to 231.12.21.132. + +cache.cluster.properties+ +Specifies additional configuration options for the clustering. The default setting is +
+ UDP(mcast_addr=231.12.21.132;mcast_port=45566;ip_ttl=32;\ +mcast_send_buf_size=150000;mcast_recv_buf_size=80000):\ +PING(timeout=2000;num_initial_members=3):\ +MERGE2(min_interval=5000;max_interval=10000):\ +FD_SOCK:VERIFY_SUSPECT(timeout=1500):\ +pbcast.NAKACK(gc_lag=50;retransmit_timeout=300,600,1200,2400,4800;max_xmit_size=8192):\ +UNICAST(timeout=300,600,1200,2400):\ +pbcast.STABLE(desired_avg_gossip=20000):\ +FRAG(frag_size=8096;down_thread=false;up_thread=false):\ +pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=true) ++ See the Clustering OSCache documentation for further details on the above two properties. + + |
+
Prior to version 2.0 of OSCache, content expiry could only be specified in terms of how long a piece of content had been in the cache, i.e. it was based on the age of the content. If you needed to expire it at a particular time of day or on a specific date, you had to write a custom RefreshPolicy class.

OSCache 2.0 now gives you the ability to expire content at specific dates and/or times based on a cron expression.

What is a Cron Expression?

Many of you are probably already familiar with the unix cron program. For those that aren't, cron is a daemon process that allows users to execute commands or scripts automatically at user-configurable dates and times. The important part as far as OSCache is concerned is the cron expression syntax that allows users to dictate when commands should be executed - you can now use the same syntax to expire content in OSCache! A cron expression is a simple text string that specifies particular dates and/or times that are matched against.

How Does OSCache Match Against an Expression?

OSCache uses cron expressions in a manner that might seem 'backwards' to what you might initially expect. When using a cron expression to test if a cache entry is stale, OSCache finds the date and time (prior to the current time) that most recently matches the supplied expression. This date/time is used as the expiry time - entries that were placed in the cache prior to this expiry time are considered stale and result in a NeedsRefreshException being thrown.

As an example, suppose you specify a cron expiry that matches every hour, on the hour ("0 * * * *"). If the current time is 10:42pm, then any content that was placed in the cache prior to 10:00pm would be considered stale.

What is the Difference Between the Refresh Period and a Cron Expression?

The difference between the refresh period and a cron expression is that the refresh period specifies the maximum allowable age of a cache entry, whilst a cron expression specifies specific expiry times, regardless of how old an entry is. For example, imagine caching an object at 10:29am. With a refresh period of 30 minutes that entry would expire at 10:59am. With a cron expression of "0,30 * * * *" that entry would expire at 10:30am.

The Cron Expression Syntax

A cron expression consists of the following 5 fields, in this order: minutes, hours, day of the month, month and day of the week.
If you don't want to specify a value for a particular field (ie you want the cron expression to match all values for that field), just use a * character for the field value. + +As an example, an expression that expired content at 11:45pm each day during April would look like this: "45 23 * April *". + +OSCache also allows you to optionally specify lists, ranges and intervals (or even a combination of all three) within each field: + +
To have a look at further examples of both valid and invalid syntax, it is suggested you take a look at the JUnit test cases in the com.opensymphony.oscache.util.TestFastCronParser class. This class is located under the src/core/test directory. For examples of how to specify cron expiry times using the taglibs, see the Tag Reference and the cronTest.jsp file in the example web application. + +Notes+ +
|
+
+ 1. Overview+
2. OSCache versions+
3. Tutorial+
4. Reference Guide+
5. Third-party integration+
6. Links+
|
+
+ Got a question you'd like to ask? Ask us and we'll add it to the FAQ. + +Questions + +
What can I use OSCache for exactly?+ +OSCache can be used on three different levels: + +
All three approaches can be mixed and matched within the same application. + +Where is the data cached?+ +Out of the box, OSCache is capable of caching data in memory (so it is very fast), and/or to disk (so your cache can be persistent across server restarts). Support is also provided for managing a cluster of caches across multiple servers. + +In addition to these capabilities, it is possible to plug in custom persistence code and custom event handlers, so you could easily extend OSCache to persist cached objects to say a database or an LDAP directory. + +Can OSCache cache Java objects rather than portions of JSP pages? I mean if I create a Product object, can I cache it and use it later so that I don't have to fetch data again?+ +Yes, however to do this you will need to write code that talks to the OSCache API directly. The taglibs are currently only designed to cache rendered JSP content. This should hopefully not be too big a limitation since any creation or manipulation of java objects should generally be performed in beans or MVC action classes rather than JSP scriptlets anyway. + +What other features does OSCache have?+ +There is a full list of features in the Feature List documentation. + +Can you give me some examples of how the OSCache tags are used?+Example 1
+ <cache:cache time="600"> + <%= myBean.getTitle() %> + </cache:cache>+ This will only access your EJB once every 10 minutes. Every other request it will just serve the cached JSP content that was produced the first time (this results in much faster page loading). + +Example 2
+ <cache:cache key="foobar" scope="session"> + <%= myBean.getTitle() %> + </cache:cache>+ This time the cache is keyed (you could have a programmatic key here too, like <%= foobarString %>) and it's scoped by session. + +This is revolutionary as far as caching goes. You can now have cached content, that's different for every user! No more full page caches with no dynamic content! + +Example 3
+ (a very powerful & useful way to use the taglibs): + + <cache:cache> + <% try { %> + <%= myBean.getTitle() %>> + <% } catch (Exception e) { %> + <% application.log("Exception occurred in myBean.getTitle(): " + e); %> + <cache:usecached /> + <% } %> + </cache:cache>+ If a RemoteException occurs trying to get the EJB title (for example the database goes down) the cached content will be served so the user will not suspect a thing. No error page as per a normal JSP application. What does this mean? It means greater error tolerance in your JSP apps! + +One example of where this is useful - when our machine restarts, our app server loads faster than the database server. No problem - because the cache is persistent, it serves cached content while the database boots, then seamlessly kicks in to the database for a cache refresh when the database is ready. + +See the Tag Reference and the example web application for further taglib examples. + +Can OSCache tags be nested?+ +You can't currently nest <cache> tags within one another - not that you'd probably want to. It is because of the cache object being placed in the page scope for use by programmers within the tag. + +We're not sure if anyone actually uses this so we might remove it to allow for tag nesting (presumably across includes or something). + +What control do you have over the cache size? I can imagine the size of the in-memory cache getting very big. Is it possible to set a max cache size and then remove the least-recently-used entries from the cache?+ +You can limit the memory cache by the number of objects that are cached. When an object is added to the cache and the limit is exceeded, another object will be removed from the cache to make room. + +Currently the disk cache can either be set to unlimited, or tied to the same size as the memory cache (ie, objects will be removed from the disk cache at the same time as they are removed from the memory cache. Depending on the useage patterns of your cache, restarting your application could mean that the disk cache might continue to grow). We understand that this is not ideal and there is room for improvement here. Stay tuned! + +How does OSCache decide which object to remove? What caching algorithm does OSCache use?+ +The caching algorithm is configurable. OSCache currently ships with 3 different algorithms - LRU (Least Recently Used), FIFO (First In First Out), and Unlimited. Should one of those not prove suitable, it is also possible to specify a custom algorithm class. + +How does OSCache's clustering work?+ +The clustering is implemented as a listener that catches 'flush' events. These events are then broadcast across the network (using either the JavaGroups library or JMS) so that other nodes in the cluster can flush the relevant object(s) from their local cache. Note that for performance reasons, when objects are added to a cache they are not broadcast to other nodes. This means that each node in the cluster maintains their own relatively indedependent cache, yet still remains fresh. + +If this mechanism does not suit your requirements, you can always code up a different solution by writing a custom event handler. + +What happens if I need to expire data in the cache?+ +Cache entries can be flushed explicitly in several ways: + +
In addition, cached data can be expired at retrieval time by specifying a maximum age for the data, or by indicating what dates and/or times the data should expire. See the time, duration and cron attributes of the <cache> tag for more information. + + +Can you tell me more about grouping cache entries? How might this be used?+ +This is a powerful feature that makes it easy to manage your cache content. Suppose you are rendering a website and the pages that you are caching depend on various factors. Perhaps they use various shared templates, some database content, and maybe some of them depend on an external data feed. By creating a cache group for each of these factors, each cached page can be placed into the group(s) that the page is dependent on. Then when say an external datafeed is updated it is trivial to flush all pages that depend on that datafeed. + + +Example 1:displayProduct.jsp
+ ... + <cache:cache key="myKey1" groups="product100,datafeed"> + <%= myProductBean.getProduct(100).getName() %> + <%= myDatafeedBean.getDataFeed().getTotal() %> + </cache:cache> + ...+ Example 2:updateDatafeed.jsp
+ ... + <%= myDatafeedBean.refreshDatafeed() %> + + <%-- Flush all cache entries that depend on the datafeed --%> + <cache:flush group="datafeed" scope="application"> + ...+ I don't want to use the taglibs, I want to access OSCache directly from within my application. Where do I start?+ +We'd suggest the best place to start would be to look at the GeneralCacheAdministrator class. It provides a simple wrapper for a single cache instance and should give you all the basic functionality you need. If you want to work with multiple caches or manipulate your cache beyond what GeneralCacheAdministrator provides, consider either writing your own administrator class using GeneralCacheAdministrator as a starting point, or just create and use the Cache class directly. See the Javadocs for more information. + +Where else can I go for help if I can't find an answer to my question here?+ +The best place to try is on the OSCache mailing list. It reaches a wide audience and is your best chance of getting a fast response. Remember to search the archives first to see if your question has already been answered. + +Got a question you'd like to ask? Ask us and we'll add it to the FAQ. + + |
+
+ OSCache Features+ +Fast in-memory caching+ +
Persistent on-disk caching+ +
Excellent Performance+ +
Clustering support+ +
Flexible Caching System+ +
Simple JSP Tag Library+ +
Caching Filter+ +
Comprehensive API+ +
Exception Handling+ +
Cache Flushing+ +
Portable caching+ +
i18n Aware+ +
Solid Reputation+ +
|
+
+ Patched version of OSCache.java originally created by Mathias Bogaert. + +OSCache.java
+ import java.util.Properties; + +import net.sf.hibernate.cache.Cache; +import net.sf.hibernate.cache.CacheException; +import net.sf.hibernate.cache.Timestamper; +import net.sf.hibernate.util.PropertiesHelper; +import net.sf.hibernate.util.StringHelper; + +import com.opensymphony.oscache.base.Config; +import com.opensymphony.oscache.base.CacheEntry; +import com.opensymphony.oscache.base.NeedsRefreshException; +import com.opensymphony.oscache.general.GeneralCacheAdministrator; + +/** + * Adapter for the OSCache implementation + */ +public class OSCache implements Cache { + + /** + * The <tt>OSCache</tt> cache capacity property suffix. + */ + public static final String OSCACHE_CAPACITY = "cache.capacity"; + + private static final Properties OSCACHE_PROPERTIES = new Config().getProperties(); + /** + * The OSCache 2.0 cache administrator. + */ + private static GeneralCacheAdministrator cache = new GeneralCacheAdministrator(); + + private static Integer capacity = PropertiesHelper.getInteger(OSCACHE_CAPACITY, OSCACHE_PROPERTIES); + + static { + if (capacity != null) cache.setCacheCapacity(capacity.intValue()); + } + + private final int refreshPeriod; + private final String cron; + private final String regionName; + private final String[] regionGroups; + + private String toString(Object key) { + return String.valueOf(key) + StringHelper.DOT + regionName; + } + + public OSCache(int refreshPeriod, String cron, String region) { + this.refreshPeriod = refreshPeriod; + this.cron = cron; + this.regionName = region; + this.regionGroups = new String[] {region}; + } + + public Object get(Object key) throws CacheException { + try { + return cache.getFromCache( toString(key), refreshPeriod, cron ); + } + catch (NeedsRefreshException e) { + cache.cancelUpdate( toString(key) ); + return null; + } + } + + public void put(Object key, Object value) throws CacheException { + cache.putInCache( toString(key), value, regionGroups ); + } + + public void remove(Object key) throws CacheException { + cache.flushEntry( toString(key) ); + } + + public void clear() throws CacheException { + cache.flushGroup(regionName); + } + + public void destroy() throws CacheException { + synchronized (cache) { + cache.destroy(); + } + } + + public void lock(Object key) throws CacheException { + // local cache, so we use synchronization + } + + public void unlock(Object key) throws CacheException { + // local cache, so we use synchronization + } + + public long nextTimestamp() { + return Timestamper.next(); + } + + public int getTimeout() { + return CacheEntry.INDEFINITE_EXPIRY; + } + +}+ |
+
+ Patched version of OSCacheProvider.java originally created by Mathias Bogaert. + +OSCacheProvider.java
+ import java.util.Properties; + +import net.sf.hibernate.cache.Cache; +import net.sf.hibernate.cache.CacheException; +import net.sf.hibernate.cache.CacheProvider; +import net.sf.hibernate.cache.Timestamper; +import net.sf.hibernate.util.PropertiesHelper; +import net.sf.hibernate.util.StringHelper; + +import com.opensymphony.oscache.base.CacheEntry; +import com.opensymphony.oscache.base.Config; + +/** + * Support for OpenSymphony OSCache. This implementation assumes + * that identifiers have well-behaved <tt>toString()</tt> methods. + */ +public class OSCacheProvider implements CacheProvider { + + /** + * The <tt>OSCache</tt> refresh period property suffix. + */ + public static final String OSCACHE_REFRESH_PERIOD = "refresh.period"; + /** + * The <tt>OSCache</tt> CRON expression property suffix. + */ + public static final String OSCACHE_CRON = "cron"; + + private static final Properties OSCACHE_PROPERTIES = new Config().getProperties(); + + /** + * Builds a new {@link Cache} instance, and gets it's properties from the OSCache {@link Config} + * which reads the properties file (<code>oscache.properties</code>) from the classpath. + * If the file cannot be found or loaded, an the defaults are used. + * + * @param region + * @param properties + * @return + * @throws CacheException + */ + public Cache buildCache(String region, Properties properties) throws CacheException { + + int refreshPeriod = PropertiesHelper.getInt( + StringHelper.qualify(region, OSCACHE_REFRESH_PERIOD), + OSCACHE_PROPERTIES, + CacheEntry.INDEFINITE_EXPIRY + ); + String cron = OSCACHE_PROPERTIES.getProperty( StringHelper.qualify(region, OSCACHE_CRON) ); + + // construct the cache + return new OSCache(refreshPeriod, cron, region); + } + + public long nextTimestamp() { + return Timestamper.next(); + } + + /** + * Callback to perform any necessary initialization of the underlying cache implementation + * during SessionFactory construction. + * + * @param properties current configuration settings. + */ + public void start(Properties properties) throws CacheException { + } + + /** + * Callback to perform any necessary cleanup of the underlying cache implementation + * during SessionFactory.close(). + */ + public void stop() { + } + +}+ |
+
+
Hibernate is a powerful, ultra-high performance object/relational persistence and query service for Java. Hibernate lets you develop persistent objects following common Java idiom - including association, inheritance, polymorphism, composition and the Java collections framework. Extremely fine-grained, richly typed object models are possible.

Hibernate 2.1 features support for pluggable cache providers and is designed to integrate with distributed caches (2.1 also implements more aggressive use of the cache). net.sf.hibernate.cache.CacheProvider is the extension point for user-defined cache integration.

Hibernate 2.1.1 or higher is required.

hibernate.cache.provider_class

OSCache and Hibernate 2.1 integrate through OSCacheProvider.
To enable OSCache in Hibernate's configuration, add the following line to hibernate.cfg.xml: + +hibernate.cfg.xml
+ <property name="hibernate.cache.provider_class">my.patched.provider.package.OSCacheProvider</property>+ The default refresh period is CacheEntry.INDEFINITE_EXPIRY. The first time a cacheable query is done, the cache has no effect on speed. On the second and successive queries, the cache will be populated and available to be hit. + +
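As a usage illustration (not part of the original page): once the provider is configured, classes and collections that are mapped as cacheable go through the OSCache-backed second-level cache automatically, and queries can opt in as sketched below. The entity name and query are hypothetical, and the query cache additionally requires hibernate.cache.use_query_cache=true in the Hibernate configuration.

// Hypothetical fragment: 'session' is an open net.sf.hibernate.Session.
Query query = session.createQuery("from Customer c where c.country = :country");
query.setString("country", "DE");
// results are stored through the second-level cache backed by OSCache
query.setCacheable(true);
List customers = query.list();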
Cache Region Configuration+ +To modify the refresh period, CRON expression, add the region configuration to your oscache.properties file, as demonstrated below: + +
+ [region].refresh.period = 4000 +[region].cron = * * 31 Feb * + +# The maximum cache capacity can only be set per region if you use the +# net.sf.hibernate.cache.OSCacheProvider distributed with Hibernate. +[region].capacity = 5000 + +# The patched version distributed with OSCache only allows a single cache.capacity setting and saves memory. ++ The com.mypackage.domain.Customer is Hibernate's internal cache region, which defaults to the classname, and which can be altered by setting Hibernate's configuration property hibernate.cache.region_prefix . + +Source Code+ + + + |
+
+ Patched version of OSCache.java for Hibernate 3 - originally created by Mathias Bogaert. + +OSCache.java
+ import java.util.Properties; +import java.util.Map; + +import org.hibernate.util.PropertiesHelper; +import org.hibernate.util.StringHelper; +import org.hibernate.cache.*; + +import com.opensymphony.oscache.base.Config; +import com.opensymphony.oscache.base.CacheEntry; +import com.opensymphony.oscache.base.NeedsRefreshException; +import com.opensymphony.oscache.general.GeneralCacheAdministrator; + +/** + * Adapter for the OSCache implementation + */ +public class OSCache implements Cache { + + /** + * The <tt>OSCache</tt> cache capacity property suffix. + */ + public static final String OSCACHE_CAPACITY = "cache.capacity"; + + private static final Properties OSCACHE_PROPERTIES = new Config().getProperties(); + /** + * The OSCache 2.0 cache administrator. + */ + private static GeneralCacheAdministrator cache = new GeneralCacheAdministrator(); + + private static Integer capacity = PropertiesHelper.getInteger(OSCACHE_CAPACITY, + OSCACHE_PROPERTIES); + + static { + if (capacity != null) cache.setCacheCapacity(capacity.intValue()); + } + + private final int refreshPeriod; + private final String cron; + private final String regionName; + private final String[] regionGroups; + + private String toString(Object key) { + return String.valueOf(key) + "." + regionName; + } + + public OSCache(int refreshPeriod, String cron, String region) { + this.refreshPeriod = refreshPeriod; + this.cron = cron; + this.regionName = region; + this.regionGroups = new String[] {region}; + } + + public Object get(Object key) throws CacheException { + try { + return cache.getFromCache( toString(key), refreshPeriod, cron ); + } + catch (NeedsRefreshException e) { + cache.cancelUpdate( toString(key) ); + return null; + } + } + + public void put(Object key, Object value) throws CacheException { + cache.putInCache( toString(key), value, regionGroups ); + } + + public void remove(Object key) throws CacheException { + cache.flushEntry( toString(key) ); + } + + public void clear() throws CacheException { + cache.flushGroup(regionName); + } + + public void destroy() throws CacheException { + synchronized (cache) { + cache.destroy(); + } + } + + public void lock(Object key) throws CacheException { + // local cache, so we use synchronization + } + + public void unlock(Object key) throws CacheException { + // local cache, so we use synchronization + } + + public long nextTimestamp() { + return Timestamper.next(); + } + + public int getTimeout() { + return Timestamper.ONE_MS * 60000; //ie. 60 seconds + } + + public Map toMap() { + throw new UnsupportedOperationException(); + } + + public long getElementCountOnDisk() { + return -1; + } + + public long getElementCountInMemory() { + return -1; + } + + public long getSizeInMemory() { + return -1; + } + + public String getRegionName() { + return regionName; + } + + public void update(Object key, Object value) throws CacheException { + put(key, value); + } + + public Object read(Object key) throws CacheException { + return get(key); + } +}+ |
+
+ Patched version of OSCacheProvider.java for Hibernate 3.0 - originally created by Mathias Bogaert. + +OSCacheProvider.java
+ import java.util.Properties; +import org.hibernate.util.PropertiesHelper; +import org.hibernate.util.StringHelper; +import org.hibernate.cache.*; +import com.opensymphony.oscache.base.CacheEntry; +import com.opensymphony.oscache.base.Config; + +/** + * Support for OpenSymphony OSCache. This implementation assumes + * that identifiers have well-behaved <tt>toString()</tt> methods. + */ +public class OSCacheProvider implements CacheProvider { + + /** + * The <tt>OSCache</tt> refresh period property suffix. + */ + public static final String OSCACHE_REFRESH_PERIOD = "refresh.period"; + /** + * The <tt>OSCache</tt> CRON expression property suffix. + */ + public static final String OSCACHE_CRON = "cron"; + + private static final Properties OSCACHE_PROPERTIES = new Config().getProperties(); + + /** + * Builds a new {@link Cache} instance, and gets it's properties from the OSCache {@link Config} + * which reads the properties file (<code>oscache.properties</code>) from the classpath. + * If the file cannot be found or loaded, an the defaults are used. + * + * @param region + * @param properties + * @return + * @throws CacheException + */ + public Cache buildCache(String region, Properties properties) throws CacheException { + + int refreshPeriod = PropertiesHelper.getInt( + StringHelper.qualify(region, OSCACHE_REFRESH_PERIOD), + OSCACHE_PROPERTIES, + CacheEntry.INDEFINITE_EXPIRY + ); + String cron = OSCACHE_PROPERTIES.getProperty( StringHelper.qualify(region, OSCACHE_CRON) ); + + // construct the cache + return new OSCache(refreshPeriod, cron, region); + } + + public long nextTimestamp() { + return Timestamper.next(); + } + + public boolean isMinimalPutsEnabledByDefault() { + return false; + } + + /** + * Callback to perform any necessary cleanup of the underlying cache implementation + * during SessionFactory.close(). + */ + public void stop() { + } + + /** + * Callback to perform any necessary initialization of the underlying cache implementation + * during SessionFactory construction. + * + * @param properties current configuration settings. + */ + public void start(Properties properties) throws CacheException { + } +}+ |
+
Hibernate is a powerful, ultra-high performance object/relational persistence and query service for Java. Hibernate lets you develop persistent objects following common Java idiom - including association, inheritance, polymorphism, composition and the Java collections framework. Extremely fine-grained, richly typed object models are possible.

Hibernate 3.2 features support for pluggable cache providers and is designed to integrate with distributed caches (3.2 also implements more aggressive use of the cache). org.hibernate.cache.CacheProvider is the extension point for user-defined cache integration.

Hibernate Core 3.2.3 GA or higher is required.
hibernate.cache.provider_class

OSCache and Hibernate 3.2 integrate through OSCacheProvider.
To enable OSCache for Hibernate's second level cache add the + following line to Hibernate's configuration e.g. hibernate.cfg.xml: + +
+
+
+ hibernate.cfg.xml
+ <property name="hibernate.cache.provider_class">com.opensymphony.oscache.hibernate.OSCacheProvider</property> The default refresh period is CacheEntry.INDEFINITE_EXPIRY. + The first time a cacheable query is done, the cache has no effect on + speed. On the second and successive queries, the cache will be + populated and available to be hit. + +
Cache Region + Configuration+ +To modify the refresh period, CRON expression, add the region + configuration to your oscache.properties file, as demonstrated + below: + +
+
+
+ [region].refresh.period = 4000
+[region].cron = * * 31 Feb *
+ Here [region] is the name of Hibernate's cache
+ region (for example com.mypackage.domain.Customer), which defaults to the class name and which can be prefixed by
+ setting Hibernate's configuration property hibernate.cache.region_prefix.
+
+Configure
+ a different configuration file for Hibernate+
+
+To configure a different configuration file, use the following
+ parameter in Hibernate's configuration:
+
+
+
+ hibernate.cfg.xml
+ <property name="com.opensymphony.oscache.configurationResourceName">path to oscache-hibernate.properties</property> |
+
+ Welcome to the OSCache wiki.+
+
+OSCache is a caching solution that includes a JSP tag library and a set of classes to perform fine-grained dynamic caching of JSP content, servlet responses or arbitrary objects. It provides both in-memory and persistent on-disk caches, and can give your site graceful error tolerance (e.g. if your database goes down, you can serve the cached content so people can still surf the site almost without noticing). Take a look at the great features of OSCache.
+
+This wiki is used for additional information as well as documentation for the latest developing version (see previous releases).
+
+
OSCache's official homepage is http://www.opensymphony.com/oscache/. There you can find the documentation of the latest production release of OSCache. + + |
+
+ This installation guide shows you how to configure OSCache 2.4 + for use inside your JSP pages. It assumes you have downloaded the latest + version, which requires at least Java 1.4 and a Servlet + 2.3 container (part of J2EE 1.3). Read the Requirements for more details. + +If you intend to use OSCache via the API rather than via the taglibs, these instructions do not apply. Just + make sure oscache.jar and commons-logging.jar are + somewhere on your application's classpath. + +Extraction + Steps+ +
Installation + Steps+ +
Further + Information+ +
|
+
+ New in OSCache 2.4 is support for JMX monitoring and administration via the Spring Framework. + +In oscache.properties, enable the statistic listener: + +
+ cache.event.listeners=com.opensymphony.oscache.extra.StatisticListenerImpl
++ Then add this to the Spring application context:
+
+
+ <!-- create mbeanserver, this doesn't need to be done if running on an Appserver with
+its own JMX server, such as Tomcat -->
+<bean id="mbeanServer" class="org.springframework.jmx.support.MBeanServerFactoryBean"/>
+
+<!-- create a connector on port 1109 -->
+<bean id="registry"
+ class="org.springframework.remoting.rmi.RmiRegistryFactoryBean">
+ <property name="port">
+ <value>1109</value>
+ </property>
+</bean>
+
+<bean id="serverConnector" depends-on="registry"
+ class="org.springframework.jmx.support.ConnectorServerFactoryBean">
+ <property name="objectName">
+ <value>connector:name=rmi</value>
+ </property>
+ <property name="serviceUrl">
+ <value>service:jmx:rmi://localhost/jndi/rmi://localhost:1109/jmxconnector</value>
+ </property>
+</bean>
+
+<!-- export the oscache stats beans -->
+<bean id="exporter"
+ class="org.springframework.jmx.export.MBeanExporter">
+ <property name="beans">
+ <map>
+ <entry key="bean:name=StatisticListenerImpl">
+ <value>StatisticListenerImpl</value>
+ </entry>
+ </map>
+ </property>
+</bean>
+
+<!-- oscache stats bean -->
+<bean id="StatisticListenerImpl" class="com.opensymphony.oscache.extra.StatisticListenerImpl"/>+
|
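Once the application is running, you should be able to attach any JMX client to the connector defined above; for example, using the JDK's jconsole with the service URL from the configuration (host and port are whatever you configured):

jconsole service:jmx:rmi://localhost/jndi/rmi://localhost:1109/jmxconnector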
+
+ OSCache comes with a JSP tag library that controls all its major functions. The tags are listed below with descriptions, attributes and examples of use. + +For instructions on installing OSCache in a web application, see the Installation Guide. You just have to add the following line declaring the OSCache custom tag library for use on the jsp page: + +
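The exact directive depends on how the tag library descriptor is deployed; one common form, assuming oscache.tld has been copied into WEB-INF/classes as described in the Installation Guide, is:

<%@ taglib uri="/WEB-INF/classes/oscache.tld" prefix="cache" %>

Alternatively you can map a shorter URI to the TLD in web.xml and reference that URI from your pages.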
Summary+ +The tags are: + +
<cache></cache>+ +Description:+ +This is the main tag of OSCache. The body of the tag will be cached according to the attributes specified. The first time a cache is used the body content is executed and cached. + +Each subsequent time the tag is run, it will check to see if the cached content is stale. Content is considered stale due to one (or more) of the following being true: + +
If the cached body content is stale, the tag will execute the body again and recache the new body content. Otherwise it will serve the cached content and the body will be skipped (resulting in a large speed increase). + +Attributes:+ +
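As an illustration of the tag in use (the key, time and scope values are arbitrary, and product.getId() is a placeholder for whatever identifies your content), the following caches its body for 30 minutes in application scope:

<cache:cache key="<%= product.getId() %>" time="1800" scope="application">
    <%-- some expensive content, e.g. rendered from a database query --%>
</cache:cache>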
<usecached />+Description:+This tag is nested within a <cache> tag and tells its parent whether or not to use the cached version. + +Attributes:+
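A typical pattern is to wrap the body in a try/catch so that the previously cached content is served when rebuilding the content fails (the key and the body content here are placeholders):

<cache:cache key="topStories">
    <% try { %>
        <%-- rebuild the content, e.g. from the database --%>
    <% } catch (Exception e) { %>
        <cache:usecached />
    <% } %>
</cache:cache>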
<flush />+ +Description:+This tag is used to flush caches at runtime. It is especially useful because it can be coded into the administration section of your site so that admins can decide when to flush the caches. + +Attributes:+ +
Example
+ This will flush the application scope. + + <cache:flush scope="application" /> + + This will flush the cache entry with key "foobar" in the session scope. + + <cache:flush scope="session" key="foobar" /> + + This will flush all cache entries in the "currencyData" group from the application scope. + + <cache:flush scope="application" group="currencyData" />+ <addgroup />+ +Description:+ +This tag must be nested inside a <cache:cache/> tag. It allows a single group name to be dynamically added to a cached block. It is useful when the group a cached block should belong to are unknown until the block is actually rendered. As each group is 'discovered', this tag can be used to add the group to the block's group list. + +Attributes:+ +
Example
+ This will add the cache block with the key 'test1' to groups 'group1' and 'group2'. + + <cache:cache key="test1"> + <cache:addgroup group="group1" /> + ... some jsp content ... + <cache:addgroup group="group2" /> + ... some more jsp content ... + </cache:cache>+ <addgroups /> (New! Since 2.3)+ +Description:+ +This tag must be nested inside a <cache:cache/> tag. It allows a comma-delimited list of groups names to be dynamically added to a cached block with a single tag statement. As a group list is 'discovered', this tag can be used to add the groups to the block's group list. + +Attributes:+ +
Example
+ This will add the cache block with the key 'test1' to groups 'group1' and 'group2'. + + <cache:cache key="test1"> + ... some jsp content ... + <cache:addgroups groups="group1,group2" /> + ... some jsp content ... + </cache:cache>+ |
+
+ All OpenSymphony projects use the OpenSymphony License, which is a modified Apache License. You can find the license at http://www.opensymphony.com/oscache/license.action + + |
+
+ Release Notes+ +(19th July 2003 - by Chris Miller) +New Features:+ +
Changes that may affect backwards compatibility:+ +
Bug Fixes:+ +
Known Problems: (these have existed for some time in the 1.x.x versions and will be addressed in an upcoming 2.x.x release)+ +
|
+
+ Release Notes+ +(4th August 2003 - by Chris Miller) +New Features:+ +
Bug Fixes:+ +
|
+
+ Release Notes+ +(4th November 2003 - by Chris Miller) +Improvements:+ +
Bug Fixes:+ +
Changes that may affect backwards compatibility:+ +
|
+
+ Release Notes+ +(22nd January 2004 - by Mathias Bogaert) + +Improvements:+ +
Bug Fixes:+ +
|
+
+ Release Notes+ +(22nd September 2003 - by Chris Miller) +Improvements:+ +
Bug Fixes:+ +
|
+
+ Release Notes+ +(1st May 2005 - by Andres March) + +Improvements:+ +
Bug Fixes:+ +
Changes that may affect backwards compatibility:+ +
JIRA Issue List+ + |
+
+ Release Notes+ +(18th January 2005 - by Andres March) +New Features:+
Improvements:+ +
Bug Fixes:+ +
|
+
+ Release Notes - Release Candidate+
+
+(18th September 2005 - by Lars Torunski)
+
+Besides bug fixes, major improvements have been made to the CacheFilter in several areas:+
+
JIRA Issue List+ + |
+
+ Release Notes - Final+
+
+(6th November 2005 - by Lars Torunski)
+
+In addition to the 2.2 RC improvements, the final release was enhanced by:+
+
JIRA Issue List+ + |
+
+ Release Notes+ +(19th June 2006 - by Lars Torunski) + +This maintenance release of 2.3 has one enhancement:+ +
Bug fixes:+ +
JIRA Issue List+ +
|
+
+ Release Notes+ +(23rd July 2006 - by Lars Torunski) + +This maintenance release of 2.3.1 has one enhancement:+ +
Bug fixes:+ +
JIRA Issue List+ +
|
+
+ Release Notes+ +(6th March 2006 - by Lars Torunski) + +This release includes additional improvements to the CacheFilter:+ +
Disk persistence:+ +
Further changes are:+ +
JIRA Issue List+ + |
+
+ Release Notes+ +(7th July 2007 - by Lars Torunski) + +This maintenance release of 2.4.1 has two bug fixes:+ +
JIRA Issue List+ +
|
+
+ Release Notes+
+
+(1st May 2007 - by Lars Torunski)
+
+New
+ features and enhancements+
+
+Furthermore, the next major release 2.4 enhances the CacheFilter
+ and allows better integration with the Spring Framework and JMX Monitoring.
+
+
Upgrade Guide+ +
JIRA Issue List+ + |
+
+ The following are some of the sites that are using OSCache in production. This is far from an exhaustive list of course! If you have or know of a site using OSCache, please let us know so we can add it to the list. While not required, any performance figures, load levels or case studies that you can include would be greatly appreciated. + +
|
+
+ OSCache can be used directly to provide caching for any Java application. Using the OSCache tag library requires Servlet 2.3 and JSP 1.2 support (included in J2EE 1.3) to run properly. There is no dependency on a servlet container if the OSCache API is used directly. + +So far OSCache has been tested in the following application servers and web containers: + +
This does not mean it will not run on other servers! It should run on any specification-compliant container. If you have run OSCache successfully in other servers, please let us know and we'll add it to this list.
+
+The Caching Filter (for caching entire pages and binary content such as GIFs and PDFs) requires Servlet 2.3 support. It is known to work on Orion, BEA WebLogic Server and Tomcat 4.0.
+
+OSCache requires at least Java 1.4.
+
+
+
+
+ Scope+
+
+This page and the mailing list are provided for discussing the OSCache roadmap as well as new features and improvements. See also the JIRA - Road Map for more details, or vote for issues in JIRA - Popular Issues.
+
+OSCache 3.0+
+
+The primary goal of this release is to make OSCache more reliable and easier to use and maintain.
+
Furthermore, an internal Chain Caching Model is under discussion.
+
+
+
+ SVN+
+
+The OSCache SVN repository is hosted at http://svn.opensymphony.com/svn/oscache. You can get the sources anonymously, e.g. by using Subclipse, a Subversion plugin for Eclipse.
+
+If you want to build OSCache from SVN, you also have to check out the OpenSymphony project.
+
+Compiling OSCache+
+
+Run build.xml with Ant 1.6.5 (or higher) under Java 1.4 or later. From the OSCache directory, type
+
+
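For example, something like the following (the target name is an assumption; check build.xml for the exact targets it defines):

ant jar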
You may need to add the Ivy jar to your $ANT_HOME/lib directory if it is not there already. + + |
+
+ Configuring a GeneralCacheAdministrator+ +A GeneralCacheAdministrator instance that picks up configuration from an oscache.properties file can be configured within Spring using the following code: +
+ <bean id="cacheAdministrator" class="com.opensymphony.oscache.general.GeneralCacheAdministrator" destroy-method="destroy"/>
+Notice that a destroy-method is configured to ensure that the GeneralCacheAdministrator is closed down gracefully.
+
+If you'd prefer to keep all your configuration inside the Spring configuration, you can omit the oscache.properties file and pass in any properties you want to the GeneralCacheAdministrator constructor like so:
+
+ <bean id="cacheAdministrator" class="com.opensymphony.oscache.general.GeneralCacheAdministrator" destroy-method="destroy"> + <constructor-arg index="0"> + <props> + <prop key="cache.memory">true</prop> + </props> + </constructor-arg> +</bean>+ Configuring a Cache+ +You can configure a Cache instance directly using the following snippet of code: +
+ <bean id="cache" class="com.opensymphony.oscache.base.Cache"> + <constructor-arg index="0"> + <value>true</value> <!-- useMemoryCaching --> + <constructor-arg> + <constructor-arg index="1"> + <value>true</value> <!-- unlimitedDiskCache --> + <constructor-arg> + <constructor-arg index="2"> + <value>true</value> <!-- overflowPersistence --> + <constructor-arg> +</bean>+ Alternatively, you can pick up the Cache from the GeneralCacheAdministrator like so: +
+ <bean id="cacheAdministrator" class="com.opensymphony.oscache.general.GeneralCacheAdministrator" destroy-method="destroy"/> + +<bean id="cache" factory-bean="cacheAdministrator" factory-method="getCache"/>+ |
+
+ Description+
+
+With the cache event handlers, a listener can be implemented that provides cache hit and miss information. You can copy and paste the following code to gather statistics for your OSCache integration; just change the logger that is used. The sample helps you improve cache key creation and decide which scope to use. The SimpleStatisticListenerImpl should be configured via the cache.event.listeners property in oscache.properties.
+
+Sample Code+
+
+SimpleStatisticListenerImpl.java
+ /* + * Copyright (c) 2002-2007 by OpenSymphony + * All rights reserved. + */ +package com.opensymphony.oscache.extra; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; + +import com.opensymphony.oscache.base.Cache; +import com.opensymphony.oscache.base.events.*; + +/** + * A simple implementation of a statistic reporter which uses the + * CacheMapAccessEventListener, CacheEntryEventListener and ScopeEventListener. + * It uses the events to count the cache hit and misses and of course the + * flushes. + * <p> + * We are not using any synchronized so that this does not become a bottleneck. + * The consequence is that on retrieving values, the operations that are + * currently being done won't be counted. + */ +public class SimpleStatisticListenerImpl implements CacheMapAccessEventListener, CacheEntryEventListener, ScopeEventListener { + + private static transient final Log log = LogFactory.getLog(SimpleStatisticListenerImpl.class); + + /** + * Hit counter + */ + private int hitCount = 0; + + /** + * Miss counter + */ + private int missCount = 0; + + /** + * Stale hit counter + */ + private int staleHitCount = 0; + + /** + * Hit counter sum + */ + private int hitCountSum = 0; + + /** + * Miss counter sum + */ + private int missCountSum = 0; + + /** + * Stale hit counter + */ + private int staleHitCountSum = 0; + + /** + * Flush hit counter + */ + private int flushCount = 0; + + /** + * Constructor, empty for us + */ + public SimpleStatisticListenerImpl() { + log.info("Creation of SimpleStatisticListenerImpl"); + } + + /** + * This method handles an event each time the cache is accessed + * + * @param event The event triggered when the cache was accessed + * @see com.opensymphony.oscache.base.events.CacheMapAccessEventListener#accessed(CacheMapAccessEvent) + */ + public void accessed(CacheMapAccessEvent event) { + String result = "N/A"; + + // Retrieve the event type and update the counters + CacheMapAccessEventType type = event.getEventType(); + + // Handles a hit event + if (type == CacheMapAccessEventType.HIT) { + hitCount++; + result = "HIT"; + } + // Handles a stale hit event + else if (type == CacheMapAccessEventType.STALE_HIT) { + staleHitCount++; + result = "STALE HIT"; + } + // Handles a miss event + else if (type == CacheMapAccessEventType.MISS) { + missCount++; + result = "MISS"; + } + + if (log.isDebugEnabled()) { + log.debug("ACCESS : " + result + ": " + event.getCacheEntryKey()); + log.debug("STATISTIC : Hit = " + hitCount + ", stale hit =" + + staleHitCount + ", miss = " + missCount); + } + } + + /** + * Logs the flush of the cache. + * + * @param info the string to be logged. + */ + private void flushed(String info) { + flushCount++; + + hitCountSum += hitCount; + staleHitCountSum += staleHitCount; + missCountSum += missCount; + + if (log.isInfoEnabled()) { + log.info("FLUSH : " + info); + log.info("STATISTIC SUM : " + "Hit = " + hitCountSum + + ", stale hit = " + staleHitCountSum + ", miss = " + + missCountSum + ", flush = " + flushCount); + } + + hitCount = 0; + staleHitCount = 0; + missCount = 0; + } + + /** + * Event fired when a specific or all scopes are flushed. + * + * @param event ScopeEvent + * @see com.opensymphony.oscache.base.events.ScopeEventListener#scopeFlushed(ScopeEvent) + */ + public void scopeFlushed(ScopeEvent event) { + flushed("scope " + ScopeEventListenerImpl.SCOPE_NAMES[event.getScope()]); + } + + /** + * Event fired when an entry is added to the cache. 
+ * + * @param event CacheEntryEvent + * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheEntryAdded(CacheEntryEvent) + */ + public void cacheEntryAdded(CacheEntryEvent event) { + // do nothing + } + + /** + * Event fired when an entry is flushed from the cache. + * + * @param event CacheEntryEvent + * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheEntryFlushed(CacheEntryEvent) + */ + public void cacheEntryFlushed(CacheEntryEvent event) { + // do nothing, because a group or other flush is coming + if (!Cache.NESTED_EVENT.equals(event.getOrigin())) { + flushed("entry " + event.getKey() + " / " + event.getOrigin()); + } + } + + /** + * Event fired when an entry is removed from the cache. + * + * @param event CacheEntryEvent + * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheEntryRemoved(CacheEntryEvent) + */ + public void cacheEntryRemoved(CacheEntryEvent event) { + // do nothing + } + + /** + * Event fired when an entry is updated in the cache. + * + * @param event CacheEntryEvent + * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheEntryUpdated(CacheEntryEvent) + */ + public void cacheEntryUpdated(CacheEntryEvent event) { + // do nothing + } + + /** + * Event fired when a group is flushed from the cache. + * + * @param event CacheGroupEvent + * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheGroupFlushed(CacheGroupEvent) + */ + public void cacheGroupFlushed(CacheGroupEvent event) { + flushed("group " + event.getGroup()); + } + + /** + * Event fired when a key pattern is flushed from the cache. + * + * @param event CachePatternEvent + * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cachePatternFlushed(CachePatternEvent) + */ + public void cachePatternFlushed(CachePatternEvent event) { + flushed("pattern " + event.getPattern()); + } + + /** + * An event that is fired when an entire cache gets flushed. + * + * @param event CachewideEvent + * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheFlushed(CachewideEvent) + */ + public void cacheFlushed(CachewideEvent event) { + flushed("wide " + event.getDate()); + } + + /** + * Return the counters in a string form + * + * @return String + */ + public String toString() { + return "SimpleStatisticListenerImpl: Hit = " + hitCount + " / " + hitCountSum + + ", stale hit = " + staleHitCount + " / " + staleHitCountSum + + ", miss = " + missCount + " / " + missCountSum + + ", flush = " + flushCount; + } +}+ |
+
+ * We are not using any synchronized so that this does not become a bottleneck. + * The consequence is that on retrieving values, the operations that are + * currently being done won't be counted. + */ +public class StatisticListenerImpl implements CacheMapAccessEventListener, CacheEntryEventListener, ScopeEventListener { + + private static transient final Log log = LogFactory.getLog(StatisticListenerImpl.class); + + /** + * Hit counter + */ + private int hitCount = 0; + + /** + * Miss counter + */ + private int missCount = 0; + + /** + * Stale hit counter + */ + private int staleHitCount = 0; + + /** + * Hit counter sum + */ + private int hitCountSum = 0; + + /** + * Miss counter sum + */ + private int missCountSum = 0; + + /** + * Stale hit counter + */ + private int staleHitCountSum = 0; + + /** + * Flush hit counter + */ + private int flushCount = 0; + + /** + * Constructor, empty for us + */ + public StatisticListenerImpl() { + log.info("Creation of StatisticListenerImpl"); + } + + /** + * This method handles an event each time the cache is accessed + * + * @param event The event triggered when the cache was accessed + * @see com.opensymphony.oscache.base.events.CacheMapAccessEventListener#accessed(CacheMapAccessEvent) + */ + public void accessed(CacheMapAccessEvent event) { + String result = "N/A"; + + // Retrieve the event type and update the counters + CacheMapAccessEventType type = event.getEventType(); + + // Handles a hit event + if (type == CacheMapAccessEventType.HIT) { + hitCount++; + result = "HIT"; + } + // Handles a stale hit event + else if (type == CacheMapAccessEventType.STALE_HIT) { + staleHitCount++; + result = "STALE HIT"; + } + // Handles a miss event + else if (type == CacheMapAccessEventType.MISS) { + missCount++; + result = "MISS"; + } + + if (log.isDebugEnabled()) { + log.debug("ACCESS : " + result + ": " + event.getCacheEntryKey()); + log.debug("STATISTIC : Hit = " + hitCount + ", stale hit =" + + staleHitCount + ", miss = " + missCount); + } + } + + /** + * Logs the flush of the cache. + * + * @param info the string to be logged. + */ + private void flushed(String info) { + flushCount++; + + hitCountSum += hitCount; + staleHitCountSum += staleHitCount; + missCountSum += missCount; + + if (log.isInfoEnabled()) { + log.info("FLUSH : " + info); + log.info("STATISTIC SUM : " + "Hit = " + hitCountSum + + ", stale hit = " + staleHitCountSum + ", miss = " + + missCountSum + ", flush = " + flushCount); + } + + hitCount = 0; + staleHitCount = 0; + missCount = 0; + } + + /** + * Event fired when a specific or all scopes are flushed. + * + * @param event ScopeEvent + * @see com.opensymphony.oscache.base.events.ScopeEventListener#scopeFlushed(ScopeEvent) + */ + public void scopeFlushed(ScopeEvent event) { + flushed("scope " + ScopeEventListenerImpl.SCOPE_NAMES[event.getScope()]); + } + + /** + * Event fired when an entry is added to the cache. + * + * @param event CacheEntryEvent + * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheEntryAdded(CacheEntryEvent) + */ + public void cacheEntryAdded(CacheEntryEvent event) { + // do nothing + } + + /** + * Event fired when an entry is flushed from the cache. 
+ * + * @param event CacheEntryEvent + * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheEntryFlushed(CacheEntryEvent) + */ + public void cacheEntryFlushed(CacheEntryEvent event) { + // do nothing, because a group or other flush is coming + if (!Cache.NESTED_EVENT.equals(event.getOrigin())) { + flushed("entry " + event.getKey() + " / " + event.getOrigin()); + } + } + + /** + * Event fired when an entry is removed from the cache. + * + * @param event CacheEntryEvent + * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheEntryRemoved(CacheEntryEvent) + */ + public void cacheEntryRemoved(CacheEntryEvent event) { + // do nothing + } + + /** + * Event fired when an entry is updated in the cache. + * + * @param event CacheEntryEvent + * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheEntryUpdated(CacheEntryEvent) + */ + public void cacheEntryUpdated(CacheEntryEvent event) { + // do nothing + } + + /** + * Event fired when a group is flushed from the cache. + * + * @param event CacheGroupEvent + * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheGroupFlushed(CacheGroupEvent) + */ + public void cacheGroupFlushed(CacheGroupEvent event) { + flushed("group " + event.getGroup()); + } + + /** + * Event fired when a key pattern is flushed from the cache. + * + * @param event CachePatternEvent + * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cachePatternFlushed(CachePatternEvent) + */ + public void cachePatternFlushed(CachePatternEvent event) { + flushed("pattern " + event.getPattern()); + } + + /** + * An event that is fired when an entire cache gets flushed. + * + * @param event CachewideEvent + * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheFlushed(CachewideEvent) + */ + public void cacheFlushed(CachewideEvent event) { + flushed("wide " + event.getDate()); + } + + /** + * Return the counters in a string form + * + * @return String + */ + public String toString() { + return "StatisticListenerImpl: Hit = " + hitCount + " / " + hitCountSum + + ", stale hit = " + staleHitCount + " / " + staleHitCountSum + + ", miss = " + missCount + " / " + missCountSum + ", flush = " + + flushCount; + } +} \ No newline at end of file diff --git a/docs/wiki/What is OSCache.html b/docs/wiki/What is OSCache.html new file mode 100644 index 0000000..2f49be1 --- /dev/null +++ b/docs/wiki/What is OSCache.html @@ -0,0 +1,55 @@ + +
+
+ OSCache is a widely used, high performance J2EE caching framework. + +The Problems Solved
+ OSCache solves fundamental problems for dynamic websites: + +
Brief Feature List
+ In addition to its servlet-specific features, OSCache can be used as a generic caching solution for any Java application. A few of its generic features include:
+
+
We encourage you to take a look at the full Feature List to see what else OSCache has to offer. + + |
+
+ Space Index+ ++
|
+
+ *
+ * Extend this class to implement a custom cache administrator.
+ *
+ * @version $Revision$
+ * @author <a href="mailto:mike@atlassian.com">Mike Cannon-Brookes</a>
+ * @author Francois Beauregard
+ * @author Alain Bergevin
+ * @author Fabian Crabus
+ * @author Chris Miller
+ */
+public abstract class AbstractCacheAdministrator implements java.io.Serializable {
+ private static transient final Log log = LogFactory.getLog(AbstractCacheAdministrator.class);
+
+ /**
+ * A boolean cache configuration property that indicates whether the cache
+ * should cache objects in memory. Set this property to false
+ * to disable in-memory caching.
+ */
+ public final static String CACHE_MEMORY_KEY = "cache.memory";
+
+ /**
+ * An integer cache configuration property that specifies the maximum number
+ * of objects to hold in the cache. Setting this to a negative value will
+ * disable the capacity functionality - there will be no limit to the number
+ * of objects that are held in cache.
+ */
+ public final static String CACHE_CAPACITY_KEY = "cache.capacity";
+
+ /**
+ * A String cache configuration property that specifies the classname of
+ * an alternate caching algorithm. This class must extend
+ * {@link com.opensymphony.oscache.base.algorithm.AbstractConcurrentReadCache}
+ * By default caches will use {@link com.opensymphony.oscache.base.algorithm.LRUCache} as
+ * the default algorithm if the cache capacity is set to a positive value, or
+ * {@link com.opensymphony.oscache.base.algorithm.UnlimitedCache} if the
+ * capacity is negative (ie, disabled).
+ */
+ public final static String CACHE_ALGORITHM_KEY = "cache.algorithm";
+
+ /**
+ * A boolean cache configuration property that indicates whether the persistent
+ * cache should be unlimited in size, or should be restricted to the same size
+ * as the in-memory cache. Set this property to true to allow the
+ * persistent cache to grow without bound.
+ */
+ public final static String CACHE_DISK_UNLIMITED_KEY = "cache.unlimited.disk";
+
+ /**
+ * The configuration key that specifies whether we should block waiting for new
+ * content to be generated, or just serve the old content instead. The default
+ * behaviour is to serve the old content since that provides the best performance
+ * (at the cost of serving slightly stale data).
+ */
+ public final static String CACHE_BLOCKING_KEY = "cache.blocking";
+
+ /**
+ * A String cache configuration property that specifies the classname that will
+ * be used to provide cache persistence. This class must extend {@link PersistenceListener}.
+ */
+ public static final String PERSISTENCE_CLASS_KEY = "cache.persistence.class";
+
+ /**
+ * A String cache configuration property that specifies if the cache persistence
+ * will only be used in overflow mode, that is, when the memory cache capacity has been reached.
+ */
+ public static final String CACHE_PERSISTENCE_OVERFLOW_KEY = "cache.persistence.overflow.only";
+
+ /**
+ * A String cache configuration property that holds a comma-delimited list of
+ * classnames. These classes specify the event handlers that are to be applied
+ * to the cache.
+ */
+ public static final String CACHE_ENTRY_EVENT_LISTENERS_KEY = "cache.event.listeners";
+ protected Config config = null;
+
+ /**
+ * Holds a list of all the registered event listeners. Event listeners are specified
+ * using the {@link #CACHE_ENTRY_EVENT_LISTENERS_KEY} configuration key.
+ */
+ protected EventListenerList listenerList = new EventListenerList();
+
+ /**
+ * The algorithm class being used, as specified by the {@link #CACHE_ALGORITHM_KEY}
+ * configuration property.
+ */
+ protected String algorithmClass = null;
+
+ /**
+ * The cache capacity (number of entries), as specified by the {@link #CACHE_CAPACITY_KEY}
+ * configuration property.
+ */
+ protected int cacheCapacity = -1;
+
+ /**
+ * Whether the cache blocks waiting for content to be build, or serves stale
+ * content instead. This value can be specified using the {@link #CACHE_BLOCKING_KEY}
+ * configuration property.
+ */
+ private boolean blocking = false;
+
+ /**
+ * Whether or not to store the cache entries in memory. This is configurable using the
+ * {@link com.opensymphony.oscache.base.AbstractCacheAdministrator#CACHE_MEMORY_KEY} property.
+ */
+ private boolean memoryCaching = true;
+
+ /**
+ * Whether the persistent cache should be used immediately or only when the memory capacity
+ * has been reached, ie. overflow only.
+ * This can be set via the {@link #CACHE_PERSISTENCE_OVERFLOW_KEY} configuration property.
+ */
+ private boolean overflowPersistence;
+
+ /**
+ * Whether the disk cache should be unlimited in size, or matched 1-1 to the memory cache.
+ * This can be set via the {@link #CACHE_DISK_UNLIMITED_KEY} configuration property.
+ */
+ private boolean unlimitedDiskCache;
+
+ /**
+ * Create the AbstractCacheAdministrator.
+ * This will initialize all values and load the properties from oscache.properties.
+ */
+ protected AbstractCacheAdministrator() {
+ this(null);
+ }
+
+ /**
+ * Create the AbstractCacheAdministrator.
+ *
+ * @param p the configuration properties for this cache.
+ */
+ protected AbstractCacheAdministrator(Properties p) {
+ loadProps(p);
+ initCacheParameters();
+
+ if (log.isDebugEnabled()) {
+ log.debug("Constructed AbstractCacheAdministrator()");
+ }
+ }
+
+ /**
+ * Sets the algorithm to use for the cache.
+ *
+ * @see com.opensymphony.oscache.base.algorithm.LRUCache
+ * @see com.opensymphony.oscache.base.algorithm.FIFOCache
+ * @see com.opensymphony.oscache.base.algorithm.UnlimitedCache
+ * @param newAlgorithmClass The class to use (eg.
+ * "com.opensymphony.oscache.base.algorithm.LRUCache"
)
+ */
+ public void setAlgorithmClass(String newAlgorithmClass) {
+ algorithmClass = newAlgorithmClass;
+ }
+
+ /**
+ * Indicates whether the cache will block waiting for new content to
+ * be built, or serve stale content instead of waiting. Regardless of this
+ * setting, the cache will always block if new content is being
+ * created, ie, there's no stale content in the cache that can be served.
+ */
+ public boolean isBlocking() {
+ return blocking;
+ }
+
+ /**
+ * Sets the cache capacity (number of items). Administrator implementations
+ * should override this method to ensure that their {@link Cache} objects
+ * are updated correctly (by calling {@link AbstractConcurrentReadCache#setMaxEntries(int)}).
+ *
+ * @param newCacheCapacity The new capacity
+ */
+ protected void setCacheCapacity(int newCacheCapacity) {
+ cacheCapacity = newCacheCapacity;
+ }
+
+ /**
+ * Whether entries are cached in memory or not.
+ * Default is true.
+ * Set by the cache.memory property.
+ *
+ * @return Status whether or not memory caching is used.
+ */
+ public boolean isMemoryCaching() {
+ return memoryCaching;
+ }
+
+ /**
+ * Retrieves the value of one of the configuration properties.
+ *
+ * @param key The key assigned to the property
+ * @return Property value, or null if the property could not be found.
+ */
+ public String getProperty(String key) {
+ return config.getProperty(key);
+ }
+
+ /**
+ * Indicates whether the unlimited disk cache is enabled or not.
+ */
+ public boolean isUnlimitedDiskCache() {
+ return unlimitedDiskCache;
+ }
+
+ /**
+ * Check if we use overflowPersistence
+ *
+ * @return Returns the overflowPersistence.
+ */
+ public boolean isOverflowPersistence() {
+ return this.overflowPersistence;
+ }
+
+ /**
+ * Sets the overflowPersistence flag
+ *
+ * @param overflowPersistence The overflowPersistence to set.
+ */
+ public void setOverflowPersistence(boolean overflowPersistence) {
+ this.overflowPersistence = overflowPersistence;
+ }
+
+ /**
+ * Retrieves an array containing instances all of the {@link CacheEventListener}
+ * classes that are specified in the OSCache configuration file.
+ */
+ protected CacheEventListener[] getCacheEventListeners() {
+ List classes = StringUtil.split(config.getProperty(CACHE_ENTRY_EVENT_LISTENERS_KEY), ',');
+ CacheEventListener[] listeners = new CacheEventListener[classes.size()];
+
+ for (int i = 0; i < classes.size(); i++) {
+ String className = (String) classes.get(i);
+
+ try {
+ Class clazz = Class.forName(className);
+
+ if (!CacheEventListener.class.isAssignableFrom(clazz)) {
+ log.error("Specified listener class '" + className + "' does not implement CacheEventListener. Ignoring this listener.");
+ } else {
+ listeners[i] = (CacheEventListener) clazz.newInstance();
+ }
+ } catch (ClassNotFoundException e) {
+ log.error("CacheEventListener class '" + className + "' not found. Ignoring this listener.", e);
+ } catch (InstantiationException e) {
+ log.error("CacheEventListener class '" + className + "' could not be instantiated because it is not a concrete class. Ignoring this listener.", e);
+ } catch (IllegalAccessException e) {
+ log.error("CacheEventListener class '" + className + "' could not be instantiated because it is not public. Ignoring this listener.", e);
+ }
+ }
+
+ return listeners;
+ }
+
+ /**
+ * If there is a PersistenceListener in the configuration
+ * it will be instantiated and applied to the given cache object. If the
+ * PersistenceListener cannot be found or instantiated, an
+ * error will be logged but the cache will not have a persistence listener
+ * applied to it and no exception will be thrown.
+ *
+ * A cache can only have one PersistenceListener.
+ *
+ * @param cache the cache to apply the PersistenceListener to.
+ *
+ * @return the same cache object that was passed in.
+ */
+ protected Cache setPersistenceListener(Cache cache) {
+ String persistenceClassname = config.getProperty(PERSISTENCE_CLASS_KEY);
+
+ try {
+ Class clazz = Class.forName(persistenceClassname);
+ PersistenceListener persistenceListener = (PersistenceListener) clazz.newInstance();
+
+ cache.setPersistenceListener(persistenceListener.configure(config));
+ } catch (ClassNotFoundException e) {
+ log.error("PersistenceListener class '" + persistenceClassname + "' not found. Check your configuration.", e);
+ } catch (Exception e) {
+ log.error("Error instantiating class '" + persistenceClassname + "'", e);
+ }
+
+ return cache;
+ }
+
+ /**
+ * Applies all of the recognised listener classes to the supplied
+ * cache object. Recognised classes are {@link CacheEntryEventListener}
+ * and {@link CacheMapAccessEventListener}.
+ *
+ * @param cache The cache to apply the configuration to.
+ * @return cache The configured cache object.
+ */
+ protected Cache configureStandardListeners(Cache cache) {
+ if (config.getProperty(PERSISTENCE_CLASS_KEY) != null) {
+ cache = setPersistenceListener(cache);
+ }
+
+ if (config.getProperty(CACHE_ENTRY_EVENT_LISTENERS_KEY) != null) {
+ // Grab all the specified listeners and add them to the cache's
+ // listener list. Note that listeners that implement more than
+ // one of the event interfaces will be added multiple times.
+ CacheEventListener[] listeners = getCacheEventListeners();
+
+ for (int i = 0; i < listeners.length; i++) {
+ // Pass through the configuration to those listeners that require it
+ if (listeners[i] instanceof LifecycleAware) {
+ try {
+ ((LifecycleAware) listeners[i]).initialize(cache, config);
+ } catch (InitializationException e) {
+ log.error("Could not initialize listener '" + listeners[i].getClass().getName() + "'. Listener ignored.", e);
+
+ continue;
+ }
+ }
+
+ if (listeners[i] instanceof CacheEventListener) {
+ cache.addCacheEventListener(listeners[i]);
+ }
+ }
+ }
+
+ return cache;
+ }
+
+ /**
+ * Finalizes all the listeners that are associated with the given cache object.
+ * Any FinalizationExceptions that are thrown by the listeners will
+ * be caught and logged.
+ */
+ protected void finalizeListeners(Cache cache) {
+ // It's possible for cache to be null if getCache() was never called (CACHE-63)
+ if (cache == null) {
+ return;
+ }
+
+ Object[] listeners = cache.listenerList.getListenerList();
+
+ for (int i = listeners.length - 2; i >= 0; i -= 2) {
+ if (listeners[i + 1] instanceof LifecycleAware) {
+ try {
+ ((LifecycleAware) listeners[i + 1]).finialize();
+ } catch (FinalizationException e) {
+ log.error("Listener could not be finalized", e);
+ }
+ }
+ }
+ }
+
+ /**
+ * Initialize the core cache parameters from the configuration properties.
+ * The parameters that are initialized are:
+ *
+ * To avoid data races, values in this map should remain present during the whole time distinct threads deal with the
+ * same key. We implement this using explicit reference counting in the EntryUpdateState instance, to be able to clean up
+ * the map once all threads have declared they are done accessing/updating a given key.
+ *
+ * It is not possible to locate this into the CacheEntry because this would require to have a CacheEntry instance for all cache misses, and
+ * may therefore generate a memory leak. More over, the CacheEntry instance may not be hold in memory in the case no
+ * memory cache is configured.
+ */
+ private Map updateStates = new HashMap();
+
+ /**
+ * Indicates whether the cache blocks requests until new content has
+ * been generated or just serves stale content instead.
+ */
+ private boolean blocking = false;
+
+ /**
+ * Create a new Cache
+ *
+ * @param useMemoryCaching Specify if the memory caching is going to be used
+ * @param unlimitedDiskCache Specify if the disk caching is unlimited
+ * @param overflowPersistence Specify if the persistent cache is used in overflow only mode
+ */
+ public Cache(boolean useMemoryCaching, boolean unlimitedDiskCache, boolean overflowPersistence) {
+ this(useMemoryCaching, unlimitedDiskCache, overflowPersistence, false, null, 0);
+ }
+
+ /**
+ * Create a new Cache.
+ *
+ * If a valid algorithm class is specified, it will be used for this cache.
+ * Otherwise if a capacity is specified, it will use LRUCache.
+ * If no algorithm or capacity is specified UnlimitedCache is used.
+ *
+ * @see com.opensymphony.oscache.base.algorithm.LRUCache
+ * @see com.opensymphony.oscache.base.algorithm.UnlimitedCache
+ * @param useMemoryCaching Specify if the memory caching is going to be used
+ * @param unlimitedDiskCache Specify if the disk caching is unlimited
+ * @param overflowPersistence Specify if the persistent cache is used in overflow only mode
+ * @param blocking This parameter takes effect when a cache entry has
+ * just expired and several simultaneous requests try to retrieve it. While
+ * one request is rebuilding the content, the other requests will either
+ * block and wait for the new content (blocking == true) or
+ * instead receive a copy of the stale content so they don't have to wait
+ * (blocking == false). The default is false,
+ * which provides better performance but at the expense of slightly stale
+ * data being served.
+ * @param algorithmClass The class implementing the desired algorithm
+ * @param capacity The capacity
+ */
+ public Cache(boolean useMemoryCaching, boolean unlimitedDiskCache, boolean overflowPersistence, boolean blocking, String algorithmClass, int capacity) {
+ // Instantiate the algo class if valid
+ if (((algorithmClass != null) && (algorithmClass.length() > 0)) && (capacity > 0)) {
+ try {
+ cacheMap = (AbstractConcurrentReadCache) Class.forName(algorithmClass).newInstance();
+ cacheMap.setMaxEntries(capacity);
+ } catch (Exception e) {
+ log.error("Invalid class name for cache algorithm class. " + e.toString());
+ }
+ }
+
+ if (cacheMap == null) {
+ // If we have a capacity, use LRU cache otherwise use unlimited Cache
+ if (capacity > 0) {
+ cacheMap = new LRUCache(capacity);
+ } else {
+ cacheMap = new UnlimitedCache();
+ }
+ }
+
+ cacheMap.setUnlimitedDiskCache(unlimitedDiskCache);
+ cacheMap.setOverflowPersistence(overflowPersistence);
+ cacheMap.setMemoryCaching(useMemoryCaching);
+
+ this.blocking = blocking;
+ }
+
+ /**
+ * @return the maximum number of items to cache can hold.
+ */
+ public int getCapacity() {
+ return cacheMap.getMaxEntries();
+ }
+
+ /**
+ * Allows the capacity of the cache to be altered dynamically. Note that
+ * some cache implementations may choose to ignore this setting (eg the
+ * {@link UnlimitedCache} ignores this call).
+ *
+ * @param capacity the maximum number of items to hold in the cache.
+ */
+ public void setCapacity(int capacity) {
+ cacheMap.setMaxEntries(capacity);
+ }
+
+ /**
+ * Checks if the cache was flushed more recently than the CacheEntry provided.
+ * Used to determine whether to refresh the particular CacheEntry.
+ *
+ * @param cacheEntry The cache entry which we're seeing whether to refresh
+ * @return Whether or not the cache has been flushed more recently than this cache entry was updated.
+ */
+ public boolean isFlushed(CacheEntry cacheEntry) {
+ if (flushDateTime != null) {
+ final long lastUpdate = cacheEntry.getLastUpdate();
+ final long flushTime = flushDateTime.getTime();
+
+ // CACHE-241: check flushDateTime with current time also
+ return (flushTime <= System.currentTimeMillis()) && (flushTime >= lastUpdate);
+ } else {
+ return false;
+ }
+ }
+
+ /**
+ * Retrieve an object from the cache specifying its key.
+ *
+ * @param key Key of the object in the cache.
+ *
+ * @return The object from cache
+ *
+ * @throws NeedsRefreshException Thrown when the object either
+ * doesn't exist, or exists but is stale. When this exception occurs,
+ * the CacheEntry corresponding to the supplied key will be locked
+ * and other threads requesting this entry will potentially be blocked
+ * until the caller repopulates the cache. If the caller choses not
+ * to repopulate the cache, they must instead call
+ * {@link #cancelUpdate(String)}.
+ */
+ public Object getFromCache(String key) throws NeedsRefreshException {
+ return getFromCache(key, CacheEntry.INDEFINITE_EXPIRY, null);
+ }
+
+ /**
+ * Retrieve an object from the cache specifying its key.
+ *
+ * @param key Key of the object in the cache.
+ * @param refreshPeriod How long before the object needs refresh. To
+ * allow the object to stay in the cache indefinitely, supply a value
+ * of {@link CacheEntry#INDEFINITE_EXPIRY}.
+ *
+ * @return The object from cache
+ *
+ * @throws NeedsRefreshException Thrown when the object either
+ * doesn't exist, or exists but is stale. When this exception occurs,
+ * the CacheEntry corresponding to the supplied key will be locked
+ * and other threads requesting this entry will potentially be blocked
+ * until the caller repopulates the cache. If the caller choses not
+ * to repopulate the cache, they must instead call
+ * {@link #cancelUpdate(String)}.
+ */
+ public Object getFromCache(String key, int refreshPeriod) throws NeedsRefreshException {
+ return getFromCache(key, refreshPeriod, null);
+ }
+
+ /**
+ * Retrieve an object from the cache specifying its key.
+ *
+ * @param key Key of the object in the cache.
+ * @param refreshPeriod How long before the object needs refresh. To
+ * allow the object to stay in the cache indefinitely, supply a value
+ * of {@link CacheEntry#INDEFINITE_EXPIRY}.
+ * @param cronExpiry A cron expression that specifies fixed date(s)
+ * and/or time(s) that this cache entry should
+ * expire on.
+ *
+ * @return The object from cache
+ *
+ * @throws NeedsRefreshException Thrown when the object either
+ * doesn't exist, or exists but is stale. When this exception occurs,
+ * the CacheEntry corresponding to the supplied key will be locked
+ * and other threads requesting this entry will potentially be blocked
+ * until the caller repopulates the cache. If the caller choses not
+ * to repopulate the cache, they must instead call
+ * {@link #cancelUpdate(String)}.
+ */
+ public Object getFromCache(String key, int refreshPeriod, String cronExpiry) throws NeedsRefreshException {
+ CacheEntry cacheEntry = this.getCacheEntry(key, null, null);
+
+ Object content = cacheEntry.getContent();
+ CacheMapAccessEventType accessEventType = CacheMapAccessEventType.HIT;
+
+ boolean reload = false;
+
+ // Check if this entry has expired or has not yet been added to the cache. If
+ // so, we need to decide whether to block, serve stale content or throw a
+ // NeedsRefreshException
+ if (this.isStale(cacheEntry, refreshPeriod, cronExpiry)) {
+
+ //Get access to the EntryUpdateState instance and increment the usage count during the potential sleep
+ EntryUpdateState updateState = getUpdateState(key);
+ try {
+ synchronized (updateState) {
+ if (updateState.isAwaitingUpdate() || updateState.isCancelled()) {
+ // No one else is currently updating this entry - grab ownership
+ updateState.startUpdate();
+
+ if (cacheEntry.isNew()) {
+ accessEventType = CacheMapAccessEventType.MISS;
+ } else {
+ accessEventType = CacheMapAccessEventType.STALE_HIT;
+ }
+ } else if (updateState.isUpdating()) {
+ // Another thread is already updating the cache. We block if this
+ // is a new entry, or blocking mode is enabled. Either putInCache()
+ // or cancelUpdate() can cause this thread to resume.
+ if (cacheEntry.isNew() || blocking) {
+ do {
+ try {
+ updateState.wait();
+ } catch (InterruptedException e) {
+ }
+ } while (updateState.isUpdating());
+
+ if (updateState.isCancelled()) {
+ // The updating thread canceled the update, let this one have a go.
+ // This increments the usage count for this EntryUpdateState instance
+ updateState.startUpdate();
+
+ if (cacheEntry.isNew()) {
+ accessEventType = CacheMapAccessEventType.MISS;
+ } else {
+ accessEventType = CacheMapAccessEventType.STALE_HIT;
+ }
+ } else if (updateState.isComplete()) {
+ reload = true;
+ } else {
+ log.error("Invalid update state for cache entry " + key);
+ }
+ }
+ } else {
+ reload = true;
+ }
+ }
+ } finally {
+ //Make sure we release the usage count for this EntryUpdateState since we don't use it anymore. If the current thread started the update, then the counter was
+ //increased by one in startUpdate()
+ releaseUpdateState(updateState, key);
+ }
+ }
+
+ // If reload is true then another thread must have successfully rebuilt the cache entry
+ if (reload) {
+ cacheEntry = (CacheEntry) cacheMap.get(key);
+
+ if (cacheEntry != null) {
+ content = cacheEntry.getContent();
+ } else {
+ log.error("Could not reload cache entry after waiting for it to be rebuilt");
+ }
+ }
+
+ dispatchCacheMapAccessEvent(accessEventType, cacheEntry, null);
+
+ // If we didn't end up getting a hit then we need to throw a NRE
+ if (accessEventType != CacheMapAccessEventType.HIT) {
+ throw new NeedsRefreshException(content);
+ }
+
+ return content;
+ }
+
+ /**
+ * Set the listener to use for data persistence. Only one
+ * PersistenceListener can be configured per cache.
+ *
+ * @param listener The implementation of a persistance listener
+ */
+ public void setPersistenceListener(PersistenceListener listener) {
+ cacheMap.setPersistenceListener(listener);
+ }
+
+ /**
+ * Retrieves the currently configured PersistenceListener.
+ *
+ * @return the cache's PersistenceListener, or null
+ * if no listener is configured.
+ */
+ public PersistenceListener getPersistenceListener() {
+ return cacheMap.getPersistenceListener();
+ }
+
+ /**
+ * Register a listener for Cache events. The listener must implement
+ * one of the child interfaces of the {@link CacheEventListener} interface.
+ *
+ * @param listener The object that listens to events.
+ * @since 2.4
+ */
+ public void addCacheEventListener(CacheEventListener listener) {
+ // listenerList.add(CacheEventListener.class, listener);
+ listenerList.add(listener.getClass(), listener);
+ }
+
+ /**
+ * Register a listener for Cache events. The listener must implement
+ * one of the child interfaces of the {@link CacheEventListener} interface.
+ *
+ * @param listener The object that listens to events.
+ * @param clazz the type of the listener to be added
+ * @deprecated use {@link #addCacheEventListener(CacheEventListener)}
+ */
+ public void addCacheEventListener(CacheEventListener listener, Class clazz) {
+ if (CacheEventListener.class.isAssignableFrom(clazz)) {
+ listenerList.add(clazz, listener);
+ } else {
+ log.error("The class '" + clazz.getName() + "' is not a CacheEventListener. Ignoring this listener.");
+ }
+ }
+
+ /**
+ * Returns the list of all CacheEventListeners.
+ * @return the CacheEventListener's list of the Cache
+ */
+ public EventListenerList getCacheEventListenerList() {
+ return listenerList;
+ }
+
+ /**
+ * Cancels any pending update for this cache entry. This should only
+ * be called by the thread that is responsible for performing the update ie
+ * the thread that received the original {@link NeedsRefreshException}.
+ * @return true if the entry is stale, false otherwise.
+ */
+ protected boolean isStale(CacheEntry cacheEntry, int refreshPeriod, String cronExpiry) {
+ boolean result = cacheEntry.needsRefresh(refreshPeriod) || isFlushed(cacheEntry);
+
+ if ((!result) && (cronExpiry != null) && (cronExpiry.length() > 0)) {
+ try {
+ FastCronParser parser = new FastCronParser(cronExpiry);
+ result = result || parser.hasMoreRecentMatch(cacheEntry.getLastUpdate());
+ } catch (ParseException e) {
+ log.warn(e);
+ }
+ }
+
+ return result;
+ }
+
+ /**
+ * Get the updating cache entry from the update map. If one is not found,
+ * create a new one (with state {@link EntryUpdateState#NOT_YET_UPDATING})
+ * and add it to the map.
+ *
+ * @param key The cache key for this entry
+ *
+ * @return the CacheEntry that was found (or added to) the updatingEntries
+ * map.
+ */
+ protected EntryUpdateState getUpdateState(String key) {
+ EntryUpdateState updateState;
+
+ synchronized (updateStates) {
+ // Try to find the matching state object in the updating entry map.
+ updateState = (EntryUpdateState) updateStates.get(key);
+
+ if (updateState == null) {
+ // It's not there so add it.
+ updateState = new EntryUpdateState();
+ updateStates.put(key, updateState);
+ } else {
+ //Otherwise indicate that we start using it to prevent its removal until all threads are done with it.
+ updateState.incrementUsageCounter();
+ }
+ }
+
+ return updateState;
+ }
+
+ /**
+ * releases the usage that was made of the specified EntryUpdateState. When this reaches zero, the entry is removed from the map.
+ * @param state the state to release the usage of
+ * @param key the associated key.
+ */
+ protected void releaseUpdateState(EntryUpdateState state, String key) {
+ synchronized (updateStates) {
+ int usageCounter = state.decrementUsageCounter();
+ checkEntryStateUpdateUsage(key, state, usageCounter);
+ }
+ }
+
+ /**
+ * Completely clears the cache.
+ */
+ protected void clear() {
+ cacheMap.clear();
+ }
+
+ /**
+ * Removes the update state for the specified key and notifies any other
+ * threads that are waiting on this object. This is called automatically
+ * by the {@link #putInCache} method, so it is possible that no EntryUpdateState was held
+ * when this method is called.
+ *
+ * @param key The cache key that is no longer being updated.
+ */
+ protected void completeUpdate(String key) {
+ EntryUpdateState state;
+
+ synchronized (updateStates) {
+ state = (EntryUpdateState) updateStates.get(key);
+
+ if (state != null) {
+ synchronized (state) {
+ int usageCounter = state.completeUpdate();
+ state.notifyAll();
+
+ checkEntryStateUpdateUsage(key, state, usageCounter);
+
+ }
+ } else {
+ //If putInCache() was called directly (i.e. not as a result of a NeedsRefreshException) then no EntryUpdateState would be found.
+ }
+ }
+ }
+
+ /**
+ * Completely removes a cache entry from the cache and its associated cache
+ * groups.
+ *
+ * @param key The key of the entry to remove.
+ */
+ public void removeEntry(String key) {
+ removeEntry(key, null);
+ }
+
+ /**
+ * Completely removes a cache entry from the cache and its associated cache
+ * groups.
+ *
+ * @param key The key of the entry to remove.
+ * @param origin The origin of this remove request.
+ */
+ protected void removeEntry(String key, String origin) {
+ CacheEntry cacheEntry = (CacheEntry) cacheMap.get(key);
+ cacheMap.remove(key);
+
+ if (listenerList.getListenerCount() > 0) {
+ CacheEntryEvent event = new CacheEntryEvent(this, cacheEntry, origin);
+ dispatchCacheEntryEvent(CacheEntryEventType.ENTRY_REMOVED, event);
+ }
+ }
+
+ /**
+ * Dispatch a cache entry event to all registered listeners.
+ *
+ * @param eventType The type of event (used to branch on the proper method)
+ * @param event The event that was fired
+ */
+ private void dispatchCacheEntryEvent(CacheEntryEventType eventType, CacheEntryEvent event) {
+ // Guaranteed to return a non-null array
+ Object[] listeners = listenerList.getListenerList();
+
+ // Process the listeners last to first, notifying
+ // those that are interested in this event
+ for (int i = listeners.length - 2; i >= 0; i -= 2) {
+ if (listeners[i+1] instanceof CacheEntryEventListener) {
+ CacheEntryEventListener listener = (CacheEntryEventListener) listeners[i+1];
+ if (eventType.equals(CacheEntryEventType.ENTRY_ADDED)) {
+ listener.cacheEntryAdded(event);
+ } else if (eventType.equals(CacheEntryEventType.ENTRY_UPDATED)) {
+ listener.cacheEntryUpdated(event);
+ } else if (eventType.equals(CacheEntryEventType.ENTRY_FLUSHED)) {
+ listener.cacheEntryFlushed(event);
+ } else if (eventType.equals(CacheEntryEventType.ENTRY_REMOVED)) {
+ listener.cacheEntryRemoved(event);
+ }
+ }
+ }
+ }
+
+ /**
+ * Dispatch a cache group event to all registered listeners.
+ *
+ * @param eventType The type of event (this is used to branch to the correct method handler)
+ * @param group The cache group that the event applies to
+ * @param origin The origin of this event (optional)
+ */
+ private void dispatchCacheGroupEvent(CacheEntryEventType eventType, String group, String origin) {
+ CacheGroupEvent event = new CacheGroupEvent(this, group, origin);
+
+ // Guaranteed to return a non-null array
+ Object[] listeners = listenerList.getListenerList();
+
+ // Process the listeners last to first, notifying
+ // those that are interested in this event
+ for (int i = listeners.length - 2; i >= 0; i -= 2) {
+ if (listeners[i+1] instanceof CacheEntryEventListener) {
+ CacheEntryEventListener listener = (CacheEntryEventListener) listeners[i + 1];
+ if (eventType.equals(CacheEntryEventType.GROUP_FLUSHED)) {
+ listener.cacheGroupFlushed(event);
+ }
+ }
+ }
+ }
+
+ /**
+ * Dispatch a cache map access event to all registered listeners.
+ *
+ * @param eventType The type of event
+ * @param entry The entry that was affected.
+ * @param origin The origin of this event (optional)
+ */
+ private void dispatchCacheMapAccessEvent(CacheMapAccessEventType eventType, CacheEntry entry, String origin) {
+ CacheMapAccessEvent event = new CacheMapAccessEvent(eventType, entry, origin);
+
+ // Guaranteed to return a non-null array
+ Object[] listeners = listenerList.getListenerList();
+
+ // Process the listeners last to first, notifying
+ // those that are interested in this event
+ for (int i = listeners.length - 2; i >= 0; i -= 2) {
+ if (listeners[i+1] instanceof CacheMapAccessEventListener) {
+ CacheMapAccessEventListener listener = (CacheMapAccessEventListener) listeners[i + 1];
+ listener.accessed(event);
+ }
+ }
+ }
+
+ /**
+ * Dispatch a cache pattern event to all registered listeners.
+ *
+ * @param eventType The type of event (this is used to branch to the correct method handler)
+ * @param pattern The cache pattern that the event applies to
+ * @param origin The origin of this event (optional)
+ */
+ private void dispatchCachePatternEvent(CacheEntryEventType eventType, String pattern, String origin) {
+ CachePatternEvent event = new CachePatternEvent(this, pattern, origin);
+
+ // Guaranteed to return a non-null array
+ Object[] listeners = listenerList.getListenerList();
+
+ // Process the listeners last to first, notifying
+ // those that are interested in this event
+ for (int i = listeners.length - 2; i >= 0; i -= 2) {
+ if (listeners[i+1] instanceof CacheEntryEventListener) {
+ if (eventType.equals(CacheEntryEventType.PATTERN_FLUSHED)) {
+ CacheEntryEventListener listener = (CacheEntryEventListener) listeners[i+1];
+ listener.cachePatternFlushed(event);
+ }
+ }
+ }
+ }
+
+ /**
+ * Dispatches a cache-wide event to all registered listeners.
+ *
+ * @param eventType The type of event (this is used to branch to the correct method handler)
+ * @param origin The origin of this event (optional)
+ */
+ private void dispatchCachewideEvent(CachewideEventType eventType, Date date, String origin) {
+ CachewideEvent event = new CachewideEvent(this, date, origin);
+
+ // Guaranteed to return a non-null array
+ Object[] listeners = listenerList.getListenerList();
+
+ // Process the listeners last to first, notifying
+ // those that are interested in this event
+ for (int i = listeners.length - 2; i >= 0; i -= 2) {
+ if (listeners[i+1] instanceof CacheEntryEventListener) {
+ if (eventType.equals(CachewideEventType.CACHE_FLUSHED)) {
+ CacheEntryEventListener listener = (CacheEntryEventListener) listeners[i+1];
+ listener.cacheFlushed(event);
+ }
+ }
+ }
+ }
+
+ /**
+ * Flush a cache entry. On completion of the flush, a
+ * CacheEntryEventType.ENTRY_FLUSHED event is fired.
+ *
+ * @param entry The entry to flush
+ * @param origin The origin of this flush event (optional)
+ */
+ private void flushEntry(CacheEntry entry, String origin) {
+ String key = entry.getKey();
+
+ // Flush the object itself
+ entry.flush();
+
+ if (!entry.isNew()) {
+ // Update the entry's state in the map
+ cacheMap.put(key, entry);
+ }
+
+ // Trigger an ENTRY_FLUSHED event. [CACHE-107] Do this for all flushes.
+ if (listenerList.getListenerCount() > 0) {
+ CacheEntryEvent event = new CacheEntryEvent(this, entry, origin);
+ dispatchCacheEntryEvent(CacheEntryEventType.ENTRY_FLUSHED, event);
+ }
+ }
+
+ /**
+ * @return the total number of cache entries held in this cache.
+ */
+ public int getSize() {
+ synchronized(cacheMap) {
+ return cacheMap.size();
+ }
+ }
+
+ /**
+ * Test support only: return the number of EntryUpdateState instances within the updateStates map.
+ */
+ protected int getNbUpdateState() {
+ synchronized(updateStates) {
+ return updateStates.size();
+ }
+ }
+
+
+ /**
+ * Test support only: return the number of entries currently in the cache map
+ * @deprecated use getSize()
+ */
+ public int getNbEntries() {
+ synchronized(cacheMap) {
+ return cacheMap.size();
+ }
+ }
+}
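
As a usage sketch of the event dispatching above: listeners receive these callbacks through the CacheEntryEventListener interface. The class below is illustrative only; it extends the CacheEntryEventListenerImpl counter implementation added later in this change and logs every flushed key. How it gets registered (programmatically on the Cache, or through the cache.event.listeners configuration property) depends on the OSCache version, so registration is not shown.

import com.opensymphony.oscache.base.events.CacheEntryEvent;
import com.opensymphony.oscache.extra.CacheEntryEventListenerImpl;

// Hypothetical listener: keeps the parent's counters and logs every flushed key.
public class LoggingEntryListener extends CacheEntryEventListenerImpl {
    public void cacheEntryFlushed(CacheEntryEvent event) {
        super.cacheEntryFlushed(event);                      // keep counting flushes
        System.out.println("flushed entry: " + event.getKey()); // illustrative logging only
    }
}
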
diff --git a/src/java/com/opensymphony/oscache/base/CacheEntry.java b/src/java/com/opensymphony/oscache/base/CacheEntry.java
new file mode 100644
index 0000000..ee42292
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/CacheEntry.java
@@ -0,0 +1,311 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base;
+
+import com.opensymphony.oscache.web.filter.ResponseContent;
+
+import java.io.Serializable;
+
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.Set;
+
+/**
+ * A CacheEntry instance represents one entry in the cache. It holds the object that
+ * is being cached, along with a host of information about that entry such as the
+ * cache key, the time it was cached, whether the entry has been flushed or not and
+ * the groups it belongs to.
+ *
+ * @version $Revision$
+ * @author Mike Cannon-Brookes
+ * @author Todd Gochenour
+ * @author Francois Beauregard
+ */
+public class CacheEntry implements Serializable {
+ /**
+ * Default initialization value for the creation time and the last
+ * update time. This is a placeholder that indicates the value has
+ * not been set yet.
+ */
+ private static final byte NOT_YET = -1;
+
+ /**
+ * Specifying this as the refresh period for the
+ * {@link #needsRefresh(int)} method will ensure
+ * an entry does not become stale until it is
+ * either explicitly flushed or a custom refresh
+ * policy causes the entry to expire.
+ */
+ public static final int INDEFINITE_EXPIRY = -1;
+
+ /**
+ * The entry refresh policy object to use for this cache entry. This is optional.
+ */
+ private EntryRefreshPolicy policy = null;
+
+ /**
+ * The actual content that is being cached. Wherever possible this object
+     * should be serializable. This allows PersistenceListeners
+     * to serialize the cache entries to disk or database.
+ */
+ private Object content = null;
+
+ /**
+ * The set of cache groups that this cache entry belongs to, if any.
+ */
+ private Set groups = null;
+
+ /**
+ * The unique cache key for this entry
+ */
+ private String key;
+
+ /**
+     * true if this entry was flushed
+ */
+ private boolean wasFlushed = false;
+
+ /**
+ * The time this entry was created.
+ */
+ private long created = NOT_YET;
+
+ /**
+     * The time this entry was last updated.
+ */
+ private long lastUpdate = NOT_YET;
+
+ /**
+ * Construct a new CacheEntry using the key provided.
+ *
+ * @param key The key of this CacheEntry
+ */
+ public CacheEntry(String key) {
+ this(key, null);
+ }
+
+ /**
+ * Construct a CacheEntry.
+ *
+     * @param key The unique key for this CacheEntry.
+ * @param policy Object that implements refresh policy logic. This parameter
+ * is optional.
+ */
+ public CacheEntry(String key, EntryRefreshPolicy policy) {
+ this(key, policy, null);
+ }
+
+ /**
+ * Construct a CacheEntry.
+ *
+     * @param key The unique key for this CacheEntry.
+ * @param policy The object that implements the refresh policy logic. This
+ * parameter is optional.
+     * @param groups The groups that this CacheEntry belongs to. This
+ * parameter is optional.
+ */
+ public CacheEntry(String key, EntryRefreshPolicy policy, String[] groups) {
+ this.key = key;
+
+ if (groups != null) {
+ this.groups = new HashSet(groups.length);
+
+ for (int i = 0; i < groups.length; i++) {
+ this.groups.add(groups[i]);
+ }
+ }
+
+ this.policy = policy;
+ this.created = System.currentTimeMillis();
+ }
+
+ /**
+ * Sets the actual content that is being cached. Wherever possible this
+     * object should be Serializable, however it is not an
+     * absolute requirement when using a memory-only cache. Being Serializable
+     * allows PersistenceListeners to serialize the cache entries to disk
+     * or database.
+ *
+ * @param value The content to store in this CacheEntry.
+ */
+ public synchronized void setContent(Object value) {
+ content = value;
+ lastUpdate = System.currentTimeMillis();
+ wasFlushed = false;
+ }
+
+ /**
+ * Get the cached content from this CacheEntry.
+ *
+ * @return The content of this CacheEntry.
+ */
+ public Object getContent() {
+ return content;
+ }
+
+ /**
+ * Get the date this CacheEntry was created.
+ *
+ * @return The date this CacheEntry was created.
+ */
+ public long getCreated() {
+ return created;
+ }
+
+ /**
+ * Sets the cache groups for this entry.
+ *
+ * @param groups A string array containing all the group names
+ */
+ public synchronized void setGroups(String[] groups) {
+ if (groups != null) {
+ this.groups = new HashSet(groups.length);
+
+ for (int i = 0; i < groups.length; i++) {
+ this.groups.add(groups[i]);
+ }
+ } else {
+ this.groups = null;
+ }
+
+ lastUpdate = System.currentTimeMillis();
+ }
+
+ /**
+ * Sets the cache groups for this entry
+ *
+ * @param groups A collection containing all the group names
+ */
+ public synchronized void setGroups(Collection groups) {
+ if (groups != null) {
+ this.groups = new HashSet(groups);
+ } else {
+ this.groups = null;
+ }
+
+ lastUpdate = System.currentTimeMillis();
+ }
+
+ /**
+ * Gets the cache groups that this cache entry belongs to.
+     * These returned groups should be treated as immutable.
+ *
+ * @return A set containing the names of all the groups that
+ * this cache entry belongs to.
+ */
+ public Set getGroups() {
+ return groups;
+ }
+
+ /**
+ * Get the key of this CacheEntry
+ *
+ * @return The key of this CacheEntry
+ */
+ public String getKey() {
+ return key;
+ }
+
+ /**
+ * Set the date this CacheEntry was last updated.
+ *
+ * @param update The time (in milliseconds) this CacheEntry was last updated.
+ */
+ public void setLastUpdate(long update) {
+ lastUpdate = update;
+ }
+
+ /**
+ * Get the date this CacheEntry was last updated.
+ *
+ * @return The date this CacheEntry was last updated.
+ */
+ public long getLastUpdate() {
+ return lastUpdate;
+ }
+
+ /**
+ * Indicates whether this CacheEntry is a freshly created one and
+ * has not yet been assigned content or placed in a cache.
+ *
+     * @return true if this entry is newly created
+ */
+ public boolean isNew() {
+ return lastUpdate == NOT_YET;
+ }
+
+ /**
+ * Get the size of the cache entry in bytes (roughly).
+ *
+ *
+ * Successful retrievals using get(key) and containsKey(key) usually
+ * run without locking. Unsuccessful ones (i.e., when the key is not
+ * present) do involve brief synchronization (locking). Also, the
+ * size and isEmpty methods are always synchronized.
+ *
+ * Because retrieval operations can ordinarily overlap with
+ * writing operations (i.e., put, remove, and their derivatives),
+ * retrievals can only be guaranteed to return the results of the most
+ * recently completed operations holding upon their
+ * onset. Retrieval operations may or may not return results
+ * reflecting in-progress writing operations. However, the retrieval
+ * operations do always return consistent results -- either those
+ * holding before any single modification or after it, but never a
+ * nonsense result. For aggregate operations such as putAll and
+ * clear, concurrent reads may reflect insertion or removal of only
+ * some entries. In those rare contexts in which you use a hash table
+ * to synchronize operations across threads (for example, to prevent
+ * reads until after clears), you should either encase operations
+ * in synchronized blocks, or instead use java.util.Hashtable.
+ *
+ *
+ *
+ * This class also supports optional guaranteed
+ * exclusive reads, simply by surrounding a call within a synchronized
+ * block, as in
+ *
+ * Iterators and Enumerations (i.e., those returned by
+ * keySet().iterator(), entrySet().iterator(), values().iterator(),
+ * keys(), and elements()) return elements reflecting the state of the
+ * hash table at some point at or since the creation of the
+ * iterator/enumeration. They will return at most one instance of
+ * each element (via next()/nextElement()), but might or might not
+ * reflect puts and removes that have been processed since they were
+ * created. They do not throw ConcurrentModificationException.
+ * However, these iterators are designed to be used by only one
+ * thread at a time. Sharing an iterator across multiple threads may
+ * lead to unpredictable results if the table is being concurrently
+ * modified. Again, you can ensure interference-free iteration by
+ * enclosing the iteration in a synchronized block.
+ *
+ * This class may be used as a direct replacement for any use of
+ * java.util.Hashtable that does not depend on readers being blocked
+ * during updates. Like Hashtable but unlike java.util.HashMap,
+ * this class does NOT allow null to be used as a key or
+ * value. This class is also typically faster than ConcurrentHashMap
+ * when there is usually only one thread updating the table, but
+ * possibly many retrieving values from it.
+ *
+ *
+ * Implementation note: A slightly faster implementation of
+ * this class will be possible once planned Java Memory Model
+ * revisions are in place.
+ *
+ * [ Introduction to this package. ]
+ **/
+public abstract class AbstractConcurrentReadCache extends AbstractMap implements Map, Cloneable, Serializable {
+ /**
+ * The default initial number of table slots for this table (32).
+ * Used when not otherwise specified in constructor.
+ **/
+ public static final int DEFAULT_INITIAL_CAPACITY = 32;
+
+ /**
+ * The minimum capacity.
+ * Used if a lower value is implicitly specified
+ * by either of the constructors with arguments.
+ * MUST be a power of two.
+ */
+ private static final int MINIMUM_CAPACITY = 4;
+
+ /**
+ * The maximum capacity.
+ * Used if a higher value is implicitly specified
+ * by either of the constructors with arguments.
+ * MUST be a power of two <= 1<<30.
+ */
+ private static final int MAXIMUM_CAPACITY = 1 << 30;
+
+ /**
+ * The default load factor for this table.
+ * Used when not otherwise specified in constructor, the default is 0.75f.
+ **/
+ public static final float DEFAULT_LOAD_FACTOR = 0.75f;
+
+ //OpenSymphony BEGIN (pretty long!)
+ protected static final String NULL = "_nul!~";
+
+ private static final Log log = LogFactory.getLog(AbstractConcurrentReadCache.class);
+
+ /*
+ The basic strategy is an optimistic-style scheme based on
+ the guarantee that the hash table and its lists are always
+ kept in a consistent enough state to be read without locking:
+
+ * Read operations first proceed without locking, by traversing the
+ apparently correct list of the apparently correct bin. If an
+ entry is found, but not invalidated (value field null), it is
+ returned. If not found, operations must recheck (after a memory
+ barrier) to make sure they are using both the right list and
+ the right table (which can change under resizes). If
+ invalidated, reads must acquire main update lock to wait out
+ the update, and then re-traverse.
+
+ * All list additions are at the front of each bin, making it easy
+ to check changes, and also fast to traverse. Entry next
+ pointers are never assigned. Remove() builds new nodes when
+ necessary to preserve this.
+
+ * Remove() (also clear()) invalidates removed nodes to alert read
+ operations that they must wait out the full modifications.
+
+ */
+
+ /**
+ * Lock used only for its memory effects. We use a Boolean
+ * because it is serializable, and we create a new one because
+ * we need a unique object for each cache instance.
+ **/
+ protected final Boolean barrierLock = new Boolean(true);
+
+ /**
+ * field written to only to guarantee lock ordering.
+ **/
+ protected transient Object lastWrite;
+
+ /**
+ * The hash table data.
+ */
+ protected transient Entry[] table;
+
+ /**
+ * The total number of mappings in the hash table.
+ */
+ protected transient int count;
+
+ /**
+ * Persistence listener.
+ */
+ protected transient PersistenceListener persistenceListener = null;
+
+ /**
+ * Use memory cache or not.
+ */
+ protected boolean memoryCaching = true;
+
+ /**
+ * Use unlimited disk caching.
+ */
+ protected boolean unlimitedDiskCache = false;
+
+ /**
+ * The load factor for the hash table.
+ *
+ * @serial
+ */
+ protected float loadFactor;
+
+ /**
+ * Default cache capacity (number of entries).
+ */
+ protected final int DEFAULT_MAX_ENTRIES = 100;
+
+ /**
+     * Max number of elements in the cache when it is considered unlimited.
+ */
+ protected final int UNLIMITED = 2147483646;
+ protected transient Collection values = null;
+
+ /**
+     * This is an enumeration of the event types that can be fired when cache
+     * entries, groups and key patterns are added, updated, flushed or removed.
+     *
+ * There is a corresponding interface {@link CacheEntryEventListener} for
+ * handling these events.
+ *
+ * @version $Revision$
+ * @author Francois Beauregard
+ */
+public final class CacheEntryEventType {
+ /**
+ * Get an event type for an entry added.
+ */
+ public static final CacheEntryEventType ENTRY_ADDED = new CacheEntryEventType();
+
+ /**
+ * Get an event type for an entry updated.
+ */
+ public static final CacheEntryEventType ENTRY_UPDATED = new CacheEntryEventType();
+
+ /**
+ * Get an event type for an entry flushed.
+ */
+ public static final CacheEntryEventType ENTRY_FLUSHED = new CacheEntryEventType();
+
+ /**
+ * Get an event type for an entry removed.
+ */
+ public static final CacheEntryEventType ENTRY_REMOVED = new CacheEntryEventType();
+
+ /**
+ * Get an event type for a group flush event.
+ */
+ public static final CacheEntryEventType GROUP_FLUSHED = new CacheEntryEventType();
+
+ /**
+ * Get an event type for a pattern flush event.
+ */
+ public static final CacheEntryEventType PATTERN_FLUSHED = new CacheEntryEventType();
+
+ /**
+     * Private constructor to ensure that no objects of this type are
+     * created externally.
+ */
+ private CacheEntryEventType() {
+ }
+}
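
CacheEntryEventType follows the pre-Java 5 type-safe enum idiom: the constructor is private, so each constant is a singleton and identity comparison is as safe as equals(). A small hypothetical helper illustrating a consumer-side branch:

import com.opensymphony.oscache.base.events.CacheEntryEventType;

public final class EventTypeNames {
    // Maps an event type singleton to a readable label; identity comparison is
    // safe because only one instance of each type can ever exist.
    public static String nameOf(CacheEntryEventType type) {
        if (type == CacheEntryEventType.ENTRY_ADDED)     return "entry added";
        if (type == CacheEntryEventType.ENTRY_UPDATED)   return "entry updated";
        if (type == CacheEntryEventType.ENTRY_FLUSHED)   return "entry flushed";
        if (type == CacheEntryEventType.ENTRY_REMOVED)   return "entry removed";
        if (type == CacheEntryEventType.GROUP_FLUSHED)   return "group flushed";
        if (type == CacheEntryEventType.PATTERN_FLUSHED) return "pattern flushed";
        return "unknown";
    }
}
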
diff --git a/src/java/com/opensymphony/oscache/base/events/CacheEvent.java b/src/java/com/opensymphony/oscache/base/events/CacheEvent.java
new file mode 100644
index 0000000..796327a
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/events/CacheEvent.java
@@ -0,0 +1,48 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+
+/**
+ * The root event class for all cache events. Each subclass of this class
+ * classifies a particular type of cache event.
+ *
+ * @author Chris Miller
+ * Date: 20-May-2003
+ * Time: 15:25:02
+ */
+public abstract class CacheEvent {
+ /**
+ * An optional tag that can be attached to the event to specify the event's origin.
+ */
+ protected String origin = null;
+
+ /**
+     * Construct a cache map access event with no specified origin.
+     *
+ * @param eventType Type of the event.
+ * @param entry The cache entry that the event applies to.
+ */
+ public CacheMapAccessEvent(CacheMapAccessEventType eventType, CacheEntry entry) {
+ this(eventType, entry, null);
+ }
+
+ /**
+ * Constructor.
+ *
+ * @param eventType Type of the event.
+ * @param entry The cache entry that the event applies to.
+ * @param origin The origin of the event
+ */
+ public CacheMapAccessEvent(CacheMapAccessEventType eventType, CacheEntry entry, String origin) {
+ super(origin);
+ this.eventType = eventType;
+ this.entry = entry;
+ }
+
+ /**
+ * Retrieve the cache entry that the event applies to.
+ */
+ public CacheEntry getCacheEntry() {
+ return entry;
+ }
+
+ /**
+ * Retrieve the cache entry key that the event applies to.
+ */
+ public String getCacheEntryKey() {
+ return entry.getKey();
+ }
+
+ /**
+ * Retrieve the type of the event.
+ */
+ public CacheMapAccessEventType getEventType() {
+ return eventType;
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/base/events/CacheMapAccessEventListener.java b/src/java/com/opensymphony/oscache/base/events/CacheMapAccessEventListener.java
new file mode 100644
index 0000000..48833eb
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/events/CacheMapAccessEventListener.java
@@ -0,0 +1,21 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+
+/**
+ * This is the interface to listen to cache map access events. The events are
+ * cache hits and misses, and are dispatched through this interface
+ *
+ * @version $Revision$
+ * @author Francois Beauregard
+ */
+public interface CacheMapAccessEventListener extends CacheEventListener {
+ /**
+ * Event fired when an entry is accessed.
+ * Use getEventType to differentiate between access events.
+ */
+ public void accessed(CacheMapAccessEvent event);
+}
diff --git a/src/java/com/opensymphony/oscache/base/events/CacheMapAccessEventType.java b/src/java/com/opensymphony/oscache/base/events/CacheMapAccessEventType.java
new file mode 100644
index 0000000..8f83729
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/events/CacheMapAccessEventType.java
@@ -0,0 +1,37 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+
+/**
+ * This is an enumeration of the cache events that represent the
+ * various outcomes of cache accesses.
+ *
+ * @version $Revision$
+ * @author Francois Beauregard
+ */
+public final class CacheMapAccessEventType {
+ /**
+ * Get an event type for a cache hit.
+ */
+ public static final CacheMapAccessEventType HIT = new CacheMapAccessEventType();
+
+ /**
+ * Get an event type for a cache miss.
+ */
+ public static final CacheMapAccessEventType MISS = new CacheMapAccessEventType();
+
+ /**
+ * Get an event type for when the data was found in the cache but was stale.
+ */
+ public static final CacheMapAccessEventType STALE_HIT = new CacheMapAccessEventType();
+
+ /**
+     * Private constructor to ensure that no objects of this type are
+ * created externally.
+ */
+ private CacheMapAccessEventType() {
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/base/events/CachePatternEvent.java b/src/java/com/opensymphony/oscache/base/events/CachePatternEvent.java
new file mode 100644
index 0000000..b597bb5
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/events/CachePatternEvent.java
@@ -0,0 +1,69 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+import com.opensymphony.oscache.base.Cache;
+
+/**
+ * A CachePatternEvent is fired when a pattern has been applied to a cache.
+ *
+ * @version $Revision$
+ * @author Chris Miller
+ */
+public final class CachePatternEvent extends CacheEvent {
+ /**
+ * The cache the pattern is being applied to.
+ */
+ private Cache map = null;
+
+ /**
+ * The pattern that is being applied.
+ */
+ private String pattern = null;
+
+ /**
+ * Constructs a cache pattern event with no origin
+ *
+ * @param map The cache map that the pattern was applied to
+ * @param pattern The pattern that was applied
+ */
+ public CachePatternEvent(Cache map, String pattern) {
+ this(map, pattern, null);
+ }
+
+ /**
+ * Constructs a cache pattern event
+ *
+ * @param map The cache map that the pattern was applied to
+ * @param pattern The cache pattern that the event applies to.
+ * @param origin An optional tag that can be attached to the event to
+ * specify the event's origin. This is useful to prevent events from being
+ * fired recursively in some situations, such as when an event handler
+ * causes another event to be fired, or for logging purposes.
+ */
+ public CachePatternEvent(Cache map, String pattern, String origin) {
+ super(origin);
+ this.map = map;
+ this.pattern = pattern;
+ }
+
+ /**
+ * Retrieve the cache map that had the pattern applied.
+ */
+ public Cache getMap() {
+ return map;
+ }
+
+ /**
+ * Retrieve the pattern that was applied to the cache.
+ */
+ public String getPattern() {
+ return pattern;
+ }
+
+ public String toString() {
+ return "pattern=" + pattern;
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/base/events/CachewideEvent.java b/src/java/com/opensymphony/oscache/base/events/CachewideEvent.java
new file mode 100644
index 0000000..d286e34
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/events/CachewideEvent.java
@@ -0,0 +1,59 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+import com.opensymphony.oscache.base.Cache;
+
+import java.util.Date;
+
+/**
+ * Implementation of a CacheEntryEventListener that simply counts the
+ * entry, group, pattern and cache-wide events it receives.
+ *
+ * No synchronization is used, so that this listener does not become a bottleneck.
+ * The consequence is that counters read while operations are still in progress
+ * may not yet include those operations.
+ *
+ * @version $Revision$
+ * @author Alain Bergevin
+ * @author Chris Miller
+ */
+public class CacheEntryEventListenerImpl implements CacheEntryEventListener {
+ /**
+ * Counter for the cache flushes
+ */
+ private int cacheFlushedCount = 0;
+
+ /**
+ * Counter for the added entries
+ */
+ private int entryAddedCount = 0;
+
+ /**
+ * Counter for the flushed entries
+ */
+ private int entryFlushedCount = 0;
+
+ /**
+ * Counter for the removed entries
+ */
+ private int entryRemovedCount = 0;
+
+ /**
+ * Counter for the updated entries
+ */
+ private int entryUpdatedCount = 0;
+
+ /**
+ * Counter for the flushed groups
+ */
+ private int groupFlushedCount = 0;
+
+ /**
+ * Counter for the pattern flushes
+ */
+ private int patternFlushedCount = 0;
+
+ /**
+ * Constructor, empty for us
+ */
+ public CacheEntryEventListenerImpl() {
+ }
+
+ /**
+ * Gets the add counter
+ *
+ * @return The added counter
+ */
+ public int getEntryAddedCount() {
+ return entryAddedCount;
+ }
+
+ /**
+ * Gets the flushed counter
+ *
+ * @return The flushed counter
+ */
+ public int getEntryFlushedCount() {
+ return entryFlushedCount;
+ }
+
+ /**
+ * Gets the removed counter
+ *
+ * @return The removed counter
+ */
+ public int getEntryRemovedCount() {
+ return entryRemovedCount;
+ }
+
+ /**
+ * Gets the updated counter
+ *
+ * @return The updated counter
+ */
+ public int getEntryUpdatedCount() {
+ return entryUpdatedCount;
+ }
+
+ /**
+ * Gets the group flush counter
+ *
+ * @return The number of group flush calls that have occurred
+ */
+ public int getGroupFlushedCount() {
+ return groupFlushedCount;
+ }
+
+ /**
+ * Gets the pattern flush counter
+ *
+ * @return The number of pattern flush calls that have occurred
+ */
+ public int getPatternFlushedCount() {
+ return patternFlushedCount;
+ }
+
+ /**
+ * Gets the cache flush counter
+ *
+ * @return The number of times the entire cache has been flushed
+ */
+ public int getCacheFlushedCount() {
+ return cacheFlushedCount;
+ }
+
+ /**
+ * Handles the event fired when an entry is added in the cache.
+ *
+ * @param event The event triggered when a cache entry has been added
+ */
+ public void cacheEntryAdded(CacheEntryEvent event) {
+ entryAddedCount++;
+ }
+
+ /**
+ * Handles the event fired when an entry is flushed from the cache.
+ *
+ * @param event The event triggered when a cache entry has been flushed
+ */
+ public void cacheEntryFlushed(CacheEntryEvent event) {
+ entryFlushedCount++;
+ }
+
+ /**
+ * Handles the event fired when an entry is removed from the cache.
+ *
+ * @param event The event triggered when a cache entry has been removed
+ */
+ public void cacheEntryRemoved(CacheEntryEvent event) {
+ entryRemovedCount++;
+ }
+
+ /**
+ * Handles the event fired when an entry is updated in the cache.
+ *
+ * @param event The event triggered when a cache entry has been updated
+ */
+ public void cacheEntryUpdated(CacheEntryEvent event) {
+ entryUpdatedCount++;
+ }
+
+ /**
+ * Handles the event fired when a group is flushed from the cache.
+ *
+ * @param event The event triggered when a cache group has been flushed
+ */
+ public void cacheGroupFlushed(CacheGroupEvent event) {
+ groupFlushedCount++;
+ }
+
+ /**
+ * Handles the event fired when a pattern is flushed from the cache.
+ *
+ * @param event The event triggered when a cache pattern has been flushed
+ */
+ public void cachePatternFlushed(CachePatternEvent event) {
+ patternFlushedCount++;
+ }
+
+ /**
+ * Handles the event fired when a cache flush occurs.
+ *
+ * @param event The event triggered when an entire cache is flushed
+ */
+ public void cacheFlushed(CachewideEvent event) {
+ cacheFlushedCount++;
+ }
+
+ /**
+ * Returns the internal values in a string form
+ */
+ public String toString() {
+ return ("Added " + entryAddedCount + ", Updated " + entryUpdatedCount + ", Flushed " + entryFlushedCount + ", Removed " + entryRemovedCount + ", Groups Flushed " + groupFlushedCount + ", Patterns Flushed " + patternFlushedCount + ", Cache Flushed " + cacheFlushedCount);
+ }
+}
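
Because the listener above only increments plain int fields, its counters can be read cheaply at any time, for example from a monitoring or status page. A hypothetical snapshot helper; the listener argument is assumed to be the instance that was registered with the cache:

import com.opensymphony.oscache.extra.CacheEntryEventListenerImpl;

public class CacheEventReport {
    // Produces a one-line summary from the listener's counters.
    public static String summarize(CacheEntryEventListenerImpl listener) {
        return "entries added=" + listener.getEntryAddedCount()
                + " updated=" + listener.getEntryUpdatedCount()
                + " flushed=" + listener.getEntryFlushedCount()
                + " removed=" + listener.getEntryRemovedCount()
                + " cacheFlushes=" + listener.getCacheFlushedCount();
    }
}
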
diff --git a/src/java/com/opensymphony/oscache/extra/CacheMapAccessEventListenerImpl.java b/src/java/com/opensymphony/oscache/extra/CacheMapAccessEventListenerImpl.java
new file mode 100644
index 0000000..f00a44d
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/extra/CacheMapAccessEventListenerImpl.java
@@ -0,0 +1,111 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.extra;
+
+import com.opensymphony.oscache.base.events.CacheMapAccessEvent;
+import com.opensymphony.oscache.base.events.CacheMapAccessEventListener;
+import com.opensymphony.oscache.base.events.CacheMapAccessEventType;
+
+/**
+ * Implementation of a CacheMapAccessEventListener. It uses the events to count
+ * the cache hits and misses.
+ *
+ * No synchronization is used, so that this listener does not become a bottleneck.
+ * The consequence is that counters read while operations are still in progress
+ * may not yet include those operations.
+ *
+ * @version $Revision$
+ * @author Alain Bergevin
+ * @author Chris Miller
+ */
+public class CacheMapAccessEventListenerImpl implements CacheMapAccessEventListener {
+ /**
+ * Hit counter
+ */
+ private int hitCount = 0;
+
+ /**
+ * Miss counter
+ */
+ private int missCount = 0;
+
+ /**
+ * Stale hit counter
+ */
+ private int staleHitCount = 0;
+
+ /**
+ * Constructor, empty for us
+ */
+ public CacheMapAccessEventListenerImpl() {
+ }
+
+ /**
+ * Returns the cache's current hit count
+ *
+ * @return The hit count
+ */
+ public int getHitCount() {
+ return hitCount;
+ }
+
+ /**
+ * Returns the cache's current miss count
+ *
+ * @return The miss count
+ */
+ public int getMissCount() {
+ return missCount;
+ }
+
+ /**
+ * Returns the cache's current stale hit count
+ */
+ public int getStaleHitCount() {
+ return staleHitCount;
+ }
+
+ /**
+ * This method handles an event each time the cache is accessed
+ *
+ * @param event The event triggered when the cache was accessed
+ */
+ public void accessed(CacheMapAccessEvent event) {
+ // Retrieve the event type and update the counters
+ CacheMapAccessEventType type = event.getEventType();
+
+ // Handles a hit event
+ if (type == CacheMapAccessEventType.HIT) {
+ hitCount++;
+ }
+ // Handles a stale hit event
+ else if (type == CacheMapAccessEventType.STALE_HIT) {
+ staleHitCount++;
+ }
+ // Handles a miss event
+ else if (type == CacheMapAccessEventType.MISS) {
+ missCount++;
+ } else {
+ // Unknown event!
+ throw new IllegalArgumentException("Unknown Cache Map Access event received");
+ }
+ }
+
+ /**
+ * Resets all of the totals to zero
+ */
+ public void reset() {
+ hitCount = 0;
+ staleHitCount = 0;
+ missCount = 0;
+ }
+
+ /**
+ * Return the counters in a string form
+ */
+ public String toString() {
+ return ("Hit count = " + hitCount + ", stale hit count = " + staleHitCount + " and miss count = " + missCount);
+ }
+}
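
The hit, miss and stale-hit counters above are enough to derive a simple effectiveness figure. A sketch of a hit-ratio calculation; treating stale hits as misses is an interpretation choice made here, not something the listener mandates:

import com.opensymphony.oscache.extra.CacheMapAccessEventListenerImpl;

public class HitRatio {
    // Fraction of lookups answered with fresh cached content.
    public static double of(CacheMapAccessEventListenerImpl listener) {
        int hits = listener.getHitCount();
        int total = hits + listener.getStaleHitCount() + listener.getMissCount();
        return total == 0 ? 0.0 : (double) hits / total;
    }
}
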
diff --git a/src/java/com/opensymphony/oscache/extra/ScopeEventListenerImpl.java b/src/java/com/opensymphony/oscache/extra/ScopeEventListenerImpl.java
new file mode 100644
index 0000000..235b7d5
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/extra/ScopeEventListenerImpl.java
@@ -0,0 +1,147 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.extra;
+
+import com.opensymphony.oscache.base.events.ScopeEvent;
+import com.opensymphony.oscache.base.events.ScopeEventListener;
+import com.opensymphony.oscache.base.events.ScopeEventType;
+
+/**
+ * Implementation of a ScopeEventListener that keeps track of the scope flush events.
+ * No synchronization is used, so that this listener does not become a bottleneck.
+ * The consequence is that counters read while operations are still in progress
+ * may not yet include those operations.
+ *
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public class ScopeEventListenerImpl implements ScopeEventListener {
+ /**
+ * Scope names
+ */
+ public static final String[] SCOPE_NAMES = {
+ null, "page", "request", "session", "application"
+ };
+
+ /**
+ * Number of known scopes
+ */
+ public static final int NB_SCOPES = SCOPE_NAMES.length - 1;
+
+ /**
+ * Page scope number
+ */
+ public static final int PAGE_SCOPE = 1;
+
+ /**
+ * Request scope number
+ */
+ public static final int REQUEST_SCOPE = 2;
+
+ /**
+ * Session scope number
+ */
+ public static final int SESSION_SCOPE = 3;
+
+ /**
+ * Application scope number
+ */
+ public static final int APPLICATION_SCOPE = 4;
+
+ /**
+ * Flush counter for all scopes.
+ * Add one to the number of scope because the array is being used
+ * from position 1 instead of 0 for convenience
+ */
+ private int[] scopeFlushCount = new int[NB_SCOPES + 1];
+
+ public ScopeEventListenerImpl() {
+ }
+
+ /**
+ * Gets the flush count for scope {@link ScopeEventListenerImpl#APPLICATION_SCOPE}.
+ *
+     * @return The total number of application scope flushes
+ */
+ public int getApplicationScopeFlushCount() {
+ return scopeFlushCount[APPLICATION_SCOPE];
+ }
+
+ /**
+ * Gets the flush count for scope {@link ScopeEventListenerImpl#PAGE_SCOPE}.
+     * @return The total number of page scope flushes
+ */
+ public int getPageScopeFlushCount() {
+ return scopeFlushCount[PAGE_SCOPE];
+ }
+
+ /**
+ * Gets the flush count for scope {@link ScopeEventListenerImpl#REQUEST_SCOPE}.
+     * @return The total number of request scope flushes
+ */
+ public int getRequestScopeFlushCount() {
+ return scopeFlushCount[REQUEST_SCOPE];
+ }
+
+ /**
+ * Gets the flush count for scope {@link ScopeEventListenerImpl#SESSION_SCOPE}.
+     * @return The total number of session scope flushes
+ */
+ public int getSessionScopeFlushCount() {
+ return scopeFlushCount[SESSION_SCOPE];
+ }
+
+ /**
+ * Returns the total flush count.
+     * @return The total number of scope flushes
+ */
+ public int getTotalScopeFlushCount() {
+ int total = 0;
+
+ for (int count = 1; count <= NB_SCOPES; count++) {
+ total += scopeFlushCount[count];
+ }
+
+ return total;
+ }
+
+ /**
+ * Handles all the scope flush events.
+ * @param event The scope event
+ */
+ public void scopeFlushed(ScopeEvent event) {
+ // Get the event type and process it
+ ScopeEventType eventType = event.getEventType();
+
+ if (eventType == ScopeEventType.ALL_SCOPES_FLUSHED) {
+ // All 4 scopes were flushed, increment the counters
+ for (int count = 1; count <= NB_SCOPES; count++) {
+ scopeFlushCount[count]++;
+ }
+ } else if (eventType == ScopeEventType.SCOPE_FLUSHED) {
+ // Get back the scope from the event and increment the flush count
+ scopeFlushCount[event.getScope()]++;
+ } else {
+ // Unknown event!
+ throw new IllegalArgumentException("Unknown Scope Event type received");
+ }
+ }
+
+ /**
+     * Returns all the flush counters in string form.
+ */
+ public String toString() {
+ StringBuffer returnString = new StringBuffer("Flush count for ");
+
+ for (int count = 1; count <= NB_SCOPES; count++) {
+ returnString.append("scope " + SCOPE_NAMES[count] + " = " + scopeFlushCount[count] + ", ");
+ }
+
+ // Remove the last 2 chars, which are ", "
+ returnString.setLength(returnString.length() - 2);
+
+ return returnString.toString();
+ }
+}
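
A brief sketch of reading the per-scope counters back; where the listener reference comes from depends on how the application registered it:

import com.opensymphony.oscache.extra.ScopeEventListenerImpl;

public class ScopeFlushReport {
    // Per-scope flush counts plus the aggregate, using only the getters defined above.
    public static String report(ScopeEventListenerImpl listener) {
        return "application=" + listener.getApplicationScopeFlushCount()
                + " session=" + listener.getSessionScopeFlushCount()
                + " request=" + listener.getRequestScopeFlushCount()
                + " page=" + listener.getPageScopeFlushCount()
                + " total=" + listener.getTotalScopeFlushCount();
    }
}
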
diff --git a/src/java/com/opensymphony/oscache/extra/StatisticListenerImpl.java b/src/java/com/opensymphony/oscache/extra/StatisticListenerImpl.java
new file mode 100644
index 0000000..9965ff2
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/extra/StatisticListenerImpl.java
@@ -0,0 +1,295 @@
+/*
+ * Copyright (c) 2002-2007 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.extra;
+
+import com.opensymphony.oscache.base.Cache;
+import com.opensymphony.oscache.base.events.CacheEntryEvent;
+import com.opensymphony.oscache.base.events.CacheEntryEventListener;
+import com.opensymphony.oscache.base.events.CacheGroupEvent;
+import com.opensymphony.oscache.base.events.CacheMapAccessEvent;
+import com.opensymphony.oscache.base.events.CacheMapAccessEventListener;
+import com.opensymphony.oscache.base.events.CacheMapAccessEventType;
+import com.opensymphony.oscache.base.events.CachePatternEvent;
+import com.opensymphony.oscache.base.events.CachewideEvent;
+import com.opensymphony.oscache.base.events.ScopeEvent;
+import com.opensymphony.oscache.base.events.ScopeEventListener;
+import com.opensymphony.oscache.extra.ScopeEventListenerImpl;
+
+/**
+ * A simple implementation of a statistic reporter which uses the
+ * event listeners. It uses the events to count the cache hits and
+ * misses, and of course the flushes.
+ *
+ * No synchronization is used, so that this listener does not become a bottleneck.
+ * The consequence is that counters read while operations are still in progress
+ * may not yet include those operations.
+ */
+public class StatisticListenerImpl implements CacheMapAccessEventListener,
+ CacheEntryEventListener, ScopeEventListener {
+
+ /**
+ * Hit counter.
+ */
+ private static int hitCount = 0;
+
+ /**
+ * Miss counter.
+ */
+ private static int missCount = 0;
+
+ /**
+ * Stale hit counter.
+ */
+ private static int staleHitCount = 0;
+
+ /**
+ * Hit counter sum.
+ */
+ private static int hitCountSum = 0;
+
+ /**
+ * Miss counter sum.
+ */
+ private static int missCountSum = 0;
+
+ /**
+     * Stale hit counter sum.
+ */
+ private static int staleHitCountSum = 0;
+
+ /**
+     * Flush counter.
+ */
+ private static int flushCount = 0;
+
+ /**
+     * Counter of entries added to the cache.
+ */
+ private static int entriesAdded = 0;
+
+ /**
+     * Counter of entries removed from the cache.
+ */
+ private static int entriesRemoved = 0;
+
+ /**
+     * Counter of entries updated in the cache.
+ */
+ private static int entriesUpdated = 0;
+
+ /**
+ * Constructor, empty for us.
+ */
+ public StatisticListenerImpl() {
+
+ }
+
+ /**
+ * This method handles an event each time the cache is accessed.
+ *
+ * @param event
+ * The event triggered when the cache was accessed
+ * @see com.opensymphony.oscache.base.events.CacheMapAccessEventListener#accessed(CacheMapAccessEvent)
+ */
+ public void accessed(CacheMapAccessEvent event) {
+ // Retrieve the event type and update the counters
+ CacheMapAccessEventType type = event.getEventType();
+
+ // Handles a hit event
+ if (type == CacheMapAccessEventType.HIT) {
+ hitCount++;
+        } else if (type == CacheMapAccessEventType.STALE_HIT) {
+            // Handles a stale hit event
+            staleHitCount++;
+        } else if (type == CacheMapAccessEventType.MISS) {
+            // Handles a miss event
+            missCount++;
+        }
+ }
+
+ /**
+     * Records a flush of the cache: rolls the current hit/miss counters into
+     * the running sums and resets them.
+     *
+     * @param info a short description of what triggered the flush.
+ */
+ private void flushed(String info) {
+ flushCount++;
+
+ hitCountSum += hitCount;
+ staleHitCountSum += staleHitCount;
+ missCountSum += missCount;
+
+ hitCount = 0;
+ staleHitCount = 0;
+ missCount = 0;
+ }
+
+ /**
+ * Event fired when a specific or all scopes are flushed.
+ *
+ * @param event ScopeEvent
+ * @see com.opensymphony.oscache.base.events.ScopeEventListener#scopeFlushed(ScopeEvent)
+ */
+ public void scopeFlushed(ScopeEvent event) {
+ flushed("scope " + ScopeEventListenerImpl.SCOPE_NAMES[event.getScope()]);
+ }
+
+ /**
+ * Event fired when an entry is added to the cache.
+ *
+ * @param event CacheEntryEvent
+ * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheEntryAdded(CacheEntryEvent)
+ */
+ public void cacheEntryAdded(CacheEntryEvent event) {
+ entriesAdded++;
+ }
+
+ /**
+ * Event fired when an entry is flushed from the cache.
+ *
+ * @param event CacheEntryEvent
+ * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheEntryFlushed(CacheEntryEvent)
+ */
+ public void cacheEntryFlushed(CacheEntryEvent event) {
+        // Skip nested events; the enclosing group, pattern or cache-wide flush is counted instead
+ if (!Cache.NESTED_EVENT.equals(event.getOrigin())) {
+ flushed("entry " + event.getKey() + " / " + event.getOrigin());
+ }
+ }
+
+ /**
+ * Event fired when an entry is removed from the cache.
+ *
+ * @param event CacheEntryEvent
+ * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheEntryRemoved(CacheEntryEvent)
+ */
+ public void cacheEntryRemoved(CacheEntryEvent event) {
+ entriesRemoved++;
+ }
+
+ /**
+ * Event fired when an entry is updated in the cache.
+ *
+ * @param event CacheEntryEvent
+ * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheEntryUpdated(CacheEntryEvent)
+ */
+ public void cacheEntryUpdated(CacheEntryEvent event) {
+ entriesUpdated++;
+ }
+
+ /**
+ * Event fired when a group is flushed from the cache.
+ *
+ * @param event CacheGroupEvent
+ * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheGroupFlushed(CacheGroupEvent)
+ */
+ public void cacheGroupFlushed(CacheGroupEvent event) {
+ flushed("group " + event.getGroup());
+ }
+
+ /**
+ * Event fired when a key pattern is flushed from the cache.
+ *
+ * @param event CachePatternEvent
+ * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cachePatternFlushed(CachePatternEvent)
+ */
+ public void cachePatternFlushed(CachePatternEvent event) {
+ flushed("pattern " + event.getPattern());
+ }
+
+ /**
+ * An event that is fired when an entire cache gets flushed.
+ *
+ * @param event CachewideEvent
+ * @see com.opensymphony.oscache.base.events.CacheEntryEventListener#cacheFlushed(CachewideEvent)
+ */
+ public void cacheFlushed(CachewideEvent event) {
+ flushed("wide " + event.getDate());
+ }
+
+ /**
+ * Return the counters in a string form.
+ *
+ * @return String
+ */
+ public String toString() {
+ return "StatisticListenerImpl: Hit = " + hitCount + " / " + hitCountSum
+ + ", stale hit = " + staleHitCount + " / " + staleHitCountSum
+ + ", miss = " + missCount + " / " + missCountSum + ", flush = "
+ + flushCount + ", entries (added, removed, updates) = "
+ + entriesAdded + ", " + entriesRemoved + ", " + entriesUpdated;
+ }
+
+ /**
+ * @return Returns the entriesAdded.
+ */
+ public int getEntriesAdded() {
+ return entriesAdded;
+ }
+
+ /**
+ * @return Returns the entriesRemoved.
+ */
+ public int getEntriesRemoved() {
+ return entriesRemoved;
+ }
+
+ /**
+ * @return Returns the entriesUpdated.
+ */
+ public int getEntriesUpdated() {
+ return entriesUpdated;
+ }
+
+ /**
+ * @return Returns the flushCount.
+ */
+ public int getFlushCount() {
+ return flushCount;
+ }
+
+ /**
+ * @return Returns the hitCount.
+ */
+ public int getHitCount() {
+ return hitCount;
+ }
+
+ /**
+ * @return Returns the hitCountSum.
+ */
+ public int getHitCountSum() {
+ return hitCountSum;
+ }
+
+ /**
+ * @return Returns the missCount.
+ */
+ public int getMissCount() {
+ return missCount;
+ }
+
+ /**
+ * @return Returns the missCountSum.
+ */
+ public int getMissCountSum() {
+ return missCountSum;
+ }
+
+ /**
+ * @return Returns the staleHitCount.
+ */
+ public int getStaleHitCount() {
+ return staleHitCount;
+ }
+
+ /**
+ * @return Returns the staleHitCountSum.
+ */
+ public int getStaleHitCountSum() {
+ return staleHitCountSum;
+ }
+}
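
All counters in this listener are static, so a single set of statistics is shared by every cache that reports to it; it is typically attached through the cache.event.listeners entry in oscache.properties, although the exact wiring depends on the deployment. A hypothetical read-out of the aggregated figures:

import com.opensymphony.oscache.extra.StatisticListenerImpl;

public class StatsDump {
    public static void main(String[] args) {
        StatisticListenerImpl stats = new StatisticListenerImpl();
        // Counters are static, so any instance sees the totals accumulated so far.
        System.out.println(stats);                        // uses the toString() above
        System.out.println("hits since last flush: " + stats.getHitCount());
        System.out.println("hits in earlier intervals: " + stats.getHitCountSum());
    }
}
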
diff --git a/src/java/com/opensymphony/oscache/extra/package.html b/src/java/com/opensymphony/oscache/extra/package.html
new file mode 100644
index 0000000..81ee366
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/extra/package.html
@@ -0,0 +1,32 @@
+
+
+    /**
+     * Get the size of the cache entry in bytes (roughly). Currently this
+     * method only handles String and {@link ResponseContent} content objects.
+ *
+ * @return The approximate size of the entry in bytes, or -1 if the
+ * size could not be estimated.
+ */
+ public int getSize() {
+ // a char is two bytes
+ int size = (key.length() * 2) + 4;
+
+ if (content.getClass() == String.class) {
+ size += ((content.toString().length() * 2) + 4);
+ } else if (content instanceof ResponseContent) {
+ size += ((ResponseContent) content).getSize();
+ } else {
+ return -1;
+ }
+
+ //add created, lastUpdate, and wasFlushed field sizes (1, 8, and 8)
+ return size + 17;
+ }
+
+ /**
+ * Flush the entry from cache.
+     * Note that flushing the cache doesn't actually remove the cache contents,
+     * it just tells the CacheEntry that it needs a refresh next time it is asked;
+     * this is so that the stale content is still available for fail over.
+ *
+ * @author Fabian Crabus
+ * @version $Revision$
+ */
+public class Config implements java.io.Serializable {
+
+ private static final transient Log log = LogFactory.getLog(Config.class);
+
+ /**
+ * Name of the properties file.
+ */
+ private final static String PROPERTIES_FILENAME = "/oscache.properties";
+
+ /**
+ * Properties map to hold the cache configuration.
+ */
+ private Properties properties = null;
+
+ /**
+ * Create an OSCache Config that loads properties from oscache.properties.
+ * The file must be present in the root of OSCache's classpath. If the file
+ * cannot be loaded, an error will be logged and the configuration will
+ * remain empty.
+ */
+ public Config() {
+ this(null);
+ }
+
+ /**
+ * Create an OSCache configuration with the specified properties.
+ * Note that it is the responsibility of the caller to provide valid
+ * properties as no error checking is done to ensure that required
+ * keys are present. If you're unsure of what keys should be present,
+ * have a look at a sample oscache.properties file.
+ *
+ * @param p The properties to use for this configuration. If null,
+ * then the default properties are loaded from the oscache.properties
+ * file.
+ */
+ public Config(Properties p) {
+ if (log.isDebugEnabled()) {
+ log.debug("OSCache: Config called");
+ }
+
+ if (p == null) {
+ this.properties = loadProperties(PROPERTIES_FILENAME, "the default configuration");
+ } else {
+ this.properties = p;
+ }
+ }
+
+ /**
+ * Retrieve the value of the named configuration property. If the property
+     * cannot be found this method will return null.
+ *
+ * @param key The name of the property.
+     * @return The property value, or null if the value could
+ * not be found.
+ *
+ * @throws IllegalArgumentException if the supplied key is null.
+ */
+ public String getProperty(String key) {
+ if (key == null) {
+ throw new IllegalArgumentException("key is null");
+ }
+
+ if (properties == null) {
+ return null;
+ }
+
+ return properties.getProperty(key);
+ }
+
+ /**
+ * Retrieves all of the configuration properties. This property set
+ * should be treated as immutable.
+ *
+ * @return The configuration properties.
+ */
+ public Properties getProperties() {
+ return properties;
+ }
+
+ public Object get(Object key) {
+ return properties.get(key);
+ }
+
+ /**
+ * Sets a configuration property.
+ *
+ * @param key The unique name for this property.
+ * @param value The value assigned to this property.
+ *
+ * @throws IllegalArgumentException if the supplied key is null.
+ */
+ public void set(Object key, Object value) {
+ if (key == null) {
+ throw new IllegalArgumentException("key is null");
+ }
+
+ if (value == null) {
+ return;
+ }
+
+ if (properties == null) {
+ properties = new Properties();
+ }
+
+ properties.put(key, value);
+ }
+
+ /**
+ * Load the properties from the specified URL.
+ * @param url a non null value of the URL to the properties
+ * @param info additional logger information if the properties can't be read
+ * @return the loaded properties specified by the URL
+ * @since 2.4
+ */
+ public static Properties loadProperties(URL url, String info) {
+ log.info("OSCache: Getting properties from URL " + url + " for " + info);
+
+ Properties properties = new Properties();
+ InputStream in = null;
+
+ try {
+ in = url.openStream();
+ properties.load(in);
+ log.info("OSCache: Properties read " + properties);
+ } catch (Exception e) {
+ log.error("OSCache: Error reading from " + url, e);
+ log.error("OSCache: Ensure the properties information in " + url+ " is readable and in your classpath.");
+ } finally {
+ try {
+                if (in != null) {
+                    in.close();
+                }
+ } catch (IOException e) {
+ log.warn("OSCache: IOException while closing InputStream: " + e.getMessage());
+ }
+ }
+
+ return properties;
+ }
+
+ /**
+ * Load the specified properties file from the classpath. If the file
+ * cannot be found or loaded, an error will be logged and no
+ * properties will be set.
+ * @param filename the properties file with path
+ * @param info additional logger information if file can't be read
+ * @return the loaded properties specified by the filename
+ * @since 2.4
+ */
+ public static Properties loadProperties(String filename, String info) {
+ URL url = null;
+
+ ClassLoader threadContextClassLoader = Thread.currentThread().getContextClassLoader();
+ if (threadContextClassLoader != null) {
+ url = threadContextClassLoader.getResource(filename);
+ }
+ if (url == null) {
+ url = Config.class.getResource(filename);
+ if (url == null) {
+ log.warn("OSCache: No properties file found in the classpath by filename " + filename);
+ return new Properties();
+ }
+ }
+
+ return loadProperties(url, info);
+ }
+
+}
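
Config can be driven either by the oscache.properties file found on the classpath or by a Properties object supplied in code, which is convenient in tests. A minimal sketch of the programmatic route; the property keys and values are illustrative:

import java.util.Properties;
import com.opensymphony.oscache.base.Config;

public class ConfigExample {
    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("cache.memory", "true");        // illustrative key/value
        p.setProperty("cache.capacity", "1000");      // illustrative key/value

        Config config = new Config(p);                // skips loading oscache.properties
        System.out.println(config.getProperty("cache.capacity"));

        config.set("cache.capacity", "2000");         // override a single key later
        System.out.println(config.getProperty("cache.capacity"));
    }
}
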
diff --git a/src/java/com/opensymphony/oscache/base/EntryRefreshPolicy.java b/src/java/com/opensymphony/oscache/base/EntryRefreshPolicy.java
new file mode 100644
index 0000000..e04e8be
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/EntryRefreshPolicy.java
@@ -0,0 +1,31 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base;
+
+import java.io.Serializable;
+
+/**
+ * Interface that allows custom code to be called when checking to see if a cache entry
+ * has expired. This is useful when the rules that determine when content needs refreshing
+ * are beyond the base functionality offered by OSCache.
+ *
+ * @version $Revision$
+ * @author Francois Beauregard
+ */
+public interface EntryRefreshPolicy extends Serializable {
+ /**
+     * Indicates whether the supplied CacheEntry needs to be refreshed.
+ * This will be called when retrieving an entry from the cache - if this method
+     * returns true then a NeedsRefreshException will be
+     * thrown.
+ *
+ * @param entry The cache entry that is being tested.
+     * @return true if the content needs refreshing, false otherwise.
+ *
+ * @see NeedsRefreshException
+ * @see CacheEntry
+ */
+ public boolean needsRefresh(CacheEntry entry);
+}
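
A custom policy only has to answer needsRefresh(entry) for each lookup. Below is a minimal sketch, under the assumption that entries should expire a fixed number of milliseconds after their last update; it uses only CacheEntry methods defined earlier in this change:

import com.opensymphony.oscache.base.CacheEntry;
import com.opensymphony.oscache.base.EntryRefreshPolicy;

// Expires an entry once it is older than maxAgeMillis.
public class MaxAgeRefreshPolicy implements EntryRefreshPolicy {
    private final long maxAgeMillis;

    public MaxAgeRefreshPolicy(long maxAgeMillis) {
        this.maxAgeMillis = maxAgeMillis;
    }

    public boolean needsRefresh(CacheEntry entry) {
        // Entries that have never been populated always need refreshing.
        if (entry.isNew()) {
            return true;
        }
        return (System.currentTimeMillis() - entry.getLastUpdate()) > maxAgeMillis;
    }
}

An instance of such a policy can then be passed to the CacheEntry constructor shown earlier, or supplied wherever the API accepts an EntryRefreshPolicy.
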
diff --git a/src/java/com/opensymphony/oscache/base/EntryUpdateState.java b/src/java/com/opensymphony/oscache/base/EntryUpdateState.java
new file mode 100644
index 0000000..01e1c80
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/EntryUpdateState.java
@@ -0,0 +1,157 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base;
+
+
+/**
+ * Holds the state of a Cache Entry that is in the process of being (re)generated.
+ * This is not synchronized; the synchronization must be handled by the calling
+ * classes.
+ *
+ * @author Chris Miller
+ * @author Author: $
+ * @version Revision: $
+ */
+public class EntryUpdateState {
+ /**
+ * The initial state when this object is first created
+ */
+ public static final int NOT_YET_UPDATING = -1;
+
+ /**
+ * Update in progress state
+ */
+ public static final int UPDATE_IN_PROGRESS = 0;
+
+ /**
+ * Update complete state
+ */
+ public static final int UPDATE_COMPLETE = 1;
+
+ /**
+ * Update cancelled state
+ */
+ public static final int UPDATE_CANCELLED = 2;
+
+ /**
+ * Current update state
+ */
+ int state = NOT_YET_UPDATING;
+
+ /**
+ * A counter of the number of threads that are coordinated through this instance. When this counter gets to zero, then the reference to this
+ * instance may be released from the Cache instance.
+     * This counter is protected by the EntryUpdateState instance monitor.
+ */
+ private int nbConcurrentUses = 1;
+
+ /**
+     * This is the initial state when an instance of this object is first created.
+ * It indicates that a cache entry needs updating, but no thread has claimed
+ * responsibility for updating it yet.
+ */
+ public boolean isAwaitingUpdate() {
+ return state == NOT_YET_UPDATING;
+ }
+
+ /**
+ * The thread that was responsible for updating the cache entry (ie, the thread
+ * that managed to grab the update lock) has decided to give up responsibility
+ * for performing the update. OSCache will notify any other threads that are
+ * waiting on the update so one of them can take over the responsibility.
+ */
+ public boolean isCancelled() {
+ return state == UPDATE_CANCELLED;
+ }
+
+ /**
+ * The update of the cache entry has been completed.
+ */
+ public boolean isComplete() {
+ return state == UPDATE_COMPLETE;
+ }
+
+ /**
+ * The cache entry is currently being generated by the thread that got hold of
+ * the update lock.
+ */
+ public boolean isUpdating() {
+ return state == UPDATE_IN_PROGRESS;
+ }
+
+ /**
+     * Updates the state to UPDATE_CANCELLED. This should only
+ * be called by the thread that managed to get the update lock.
+ * @return the counter value after the operation completed
+ */
+ public int cancelUpdate() {
+ if (state != UPDATE_IN_PROGRESS) {
+ throw new IllegalStateException("Cannot cancel cache update - current state (" + state + ") is not UPDATE_IN_PROGRESS");
+ }
+
+ state = UPDATE_CANCELLED;
+ return decrementUsageCounter();
+ }
+
+ /**
+     * Updates the state to UPDATE_COMPLETE. This should only
+ * be called by the thread that managed to get the update lock.
+ * @return the counter value after the operation completed
+ */
+ public int completeUpdate() {
+ if (state != UPDATE_IN_PROGRESS) {
+ throw new IllegalStateException("Cannot complete cache update - current state (" + state + ") is not UPDATE_IN_PROGRESS");
+ }
+
+ state = UPDATE_COMPLETE;
+ return decrementUsageCounter();
+ }
+
+ /**
+     * Attempt to change the state to UPDATE_IN_PROGRESS. Calls
+ * to this method must be synchronized on the EntryUpdateState instance.
+ * @return the counter value after the operation completed
+ */
+ public int startUpdate() {
+ if ((state != NOT_YET_UPDATING) && (state != UPDATE_CANCELLED)) {
+ throw new IllegalStateException("Cannot begin cache update - current state (" + state + ") is not NOT_YET_UPDATING or UPDATE_CANCELLED");
+ }
+
+ state = UPDATE_IN_PROGRESS;
+ return incrementUsageCounter();
+ }
+
+ /**
+ * Increments the usage counter by one
+ * @return the counter value after the increment
+ */
+ public synchronized int incrementUsageCounter() {
+ nbConcurrentUses++;
+ return nbConcurrentUses;
+ }
+
+ /**
+ * Gets the current usage counter value
+ * @return a positive number.
+ */
+ public synchronized int getUsageCounter() {
+ return nbConcurrentUses;
+ }
+
+
+ /**
+ * Decrements the usage counter by one. This method may only be called when the usage number is greater than zero
+ * @return the counter value after the decrement
+ */
+ public synchronized int decrementUsageCounter() {
+        if (nbConcurrentUses <= 0) {
+            throw new IllegalStateException("Cannot decrement usage counter, it is already equal to [" + nbConcurrentUses + "]");
+ }
+ nbConcurrentUses--;
+ return nbConcurrentUses;
+ }
+
+
+}
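
A condensed sketch of the intended call sequence for this state machine. In the real cache the locking and bookkeeping are handled by the Cache class itself; the synchronized blocks here only illustrate the contract stated in the javadoc above:

import com.opensymphony.oscache.base.EntryUpdateState;

public class UpdateStateWalkthrough {
    public static void main(String[] args) {
        EntryUpdateState state = new EntryUpdateState(); // starts in NOT_YET_UPDATING

        synchronized (state) {
            if (state.isAwaitingUpdate()) {
                state.startUpdate();                     // this thread claims the update
            }
        }

        boolean updated = false;
        try {
            // ... rebuild the cached content here ...
            updated = true;
        } finally {
            synchronized (state) {
                int remaining = updated ? state.completeUpdate() : state.cancelUpdate();
                System.out.println("threads still using this state: " + remaining);
            }
        }
    }
}
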
diff --git a/src/java/com/opensymphony/oscache/base/FinalizationException.java b/src/java/com/opensymphony/oscache/base/FinalizationException.java
new file mode 100644
index 0000000..7a390a3
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/FinalizationException.java
@@ -0,0 +1,23 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base;
+
+
+/**
+ * Thrown by {@link LifecycleAware} listeners that are not able to finalize
+ * themselves.
+ *
+ * @version $Revision$
+ * @author Chris Miller
+ */
+public class FinalizationException extends Exception {
+ public FinalizationException() {
+ super();
+ }
+
+ public FinalizationException(String message) {
+ super(message);
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/base/InitializationException.java b/src/java/com/opensymphony/oscache/base/InitializationException.java
new file mode 100644
index 0000000..dd9bcf0
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/InitializationException.java
@@ -0,0 +1,23 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base;
+
+
+/**
+ * Thrown by {@link LifecycleAware} listeners that are not able to initialize
+ * themselves.
+ *
+ * @version $Revision$
+ * @author Chris Miller
+ */
+public class InitializationException extends Exception {
+ public InitializationException() {
+ super();
+ }
+
+ public InitializationException(String message) {
+ super(message);
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/base/LifecycleAware.java b/src/java/com/opensymphony/oscache/base/LifecycleAware.java
new file mode 100644
index 0000000..71650d7
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/LifecycleAware.java
@@ -0,0 +1,42 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base;
+
+
+/**
+ * Event handlers implement this so they can be notified when a cache
+ * is created and also when it is destroyed. This allows event handlers
+ * to load any configuration and/or resources they need on startup and
+ * then release them again when the cache is shut down.
+ *
+ * @author Chris Miller
+ *
+ * @see com.opensymphony.oscache.base.events.CacheEventListener
+ */
+public interface LifecycleAware {
+ /**
+ * Called by the cache administrator class when a cache is instantiated.
+ *
+ * @param cache the cache instance that this listener is attached to.
+ * @param config The cache's configuration details. This allows the event handler
+ * to initialize itself based on the cache settings, and also to receive additional
+ * settings that were part of the cache configuration but that the cache
+ * itself does not care about. If you are using cache.properties
+ * for your configuration, simply add any additional properties that your event
+ * handler requires and they will be passed through in this parameter.
+ *
+ * @throws InitializationException thrown when there was a problem initializing the
+ * listener. The cache administrator will log this error and disable the listener.
+ */
+ public void initialize(Cache cache, Config config) throws InitializationException;
+
+ /**
+ * Called by the cache administrator class when a cache is destroyed.
+ *
+ * @throws FinalizationException thrown when there was a problem finalizing the
+ * listener. The cache administrator will catch and log this error.
+ */
+ public void finialize() throws FinalizationException;
+}
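For illustration, a hypothetical listener using these hooks might look like the sketch below. The class name and the cache.properties key are made up, and it assumes Config exposes a getProperty(String) accessor; a real handler would normally also implement one of the CacheEventListener sub-interfaces.

    package com.opensymphony.oscache.example; // hypothetical package

    import com.opensymphony.oscache.base.Cache;
    import com.opensymphony.oscache.base.Config;
    import com.opensymphony.oscache.base.FinalizationException;
    import com.opensymphony.oscache.base.InitializationException;
    import com.opensymphony.oscache.base.LifecycleAware;

    public class AuditTrailListener implements LifecycleAware {
        private String auditFile;

        public void initialize(Cache cache, Config config) throws InitializationException {
            // Extra cache.properties entries are passed straight through in the Config object.
            auditFile = config.getProperty("cache.audit.file"); // hypothetical property name
            if (auditFile == null) {
                throw new InitializationException("cache.audit.file is not configured");
            }
        }

        public void finialize() throws FinalizationException {
            // Release whatever was acquired in initialize(). Note the interface
            // spells this method 'finialize'.
            auditFile = null;
        }
    }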
diff --git a/src/java/com/opensymphony/oscache/base/NeedsRefreshException.java b/src/java/com/opensymphony/oscache/base/NeedsRefreshException.java
new file mode 100644
index 0000000..e8c9869
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/NeedsRefreshException.java
@@ -0,0 +1,51 @@
+/*
+ * Copyright (c) 2002-2007 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base;
+
+/**
+ * This exception is thrown when retrieving an item from cache and it is
+ * expired.
+ * Note that for fault tolerance purposes, it is possible to retrieve the
+ * current cached object from the exception.
+ *
+ *
+ * AbstractConcurrentReadCache t; ... Object v;
+ * synchronized(t) { v = t.get(k); }
+ *
+ * But this is not usually necessary in practice. For
+ * example, it is generally inefficient to write:
+ *
+ *
+ * AbstractConcurrentReadCache t; ... // Inefficient version
+ * Object key; ...
+ * Object value; ...
+ * synchronized(t) {
+ *   if (!t.containsKey(key)) {
+ *     t.put(key, value);
+ *     // other code if not previously present
+ *   } else {
+ *     // other code if it was previously present
+ *   }
+ * }
+ *
+ * Instead, just take advantage of the fact that put returns
+ * null if the key was not previously present:
+ *
+ * AbstractConcurrentReadCache t; ... // Use this instead
+ * Object key; ...
+ * Object value; ...
+ * Object oldValue = t.put(key, value);
+ * if (oldValue == null) {
+ * // other code if not previously present
+ * }
+ * else {
+ * // other code if it was previously present
+ * }
+ *
+ * Each entry uses the group name as the key, and holds a Set of the keys of all
+ * the cache entries that belong to that particular group.
+ */
+ protected HashMap groups = new HashMap();
+ protected transient Set entrySet = null;
+
+ // Views
+ protected transient Set keySet = null;
+
+ /**
+ * Cache capacity (number of entries).
+ */
+ protected int maxEntries = DEFAULT_MAX_ENTRIES;
+
+ /**
+ * The table is rehashed when its size exceeds this threshold.
+ * (The value of this field is always (int)(capacity * loadFactor).)
+ *
+ * @serial
+ */
+ protected int threshold;
+
+ /**
+ * Use overflow persistence caching.
+ */
+ private boolean overflowPersistence = false;
+
+ /**
+ * Constructs a new, empty map with the specified initial capacity and the specified load factor.
+ *
+ * @param initialCapacity the initial capacity
+ * The actual initial capacity is rounded to the nearest power of two.
+ * @param loadFactor the load factor of the AbstractConcurrentReadCache
+ * @throws IllegalArgumentException if the initial maximum number
+ * of elements is less
+ * than zero, or if the load factor is nonpositive.
+ */
+ public AbstractConcurrentReadCache(int initialCapacity, float loadFactor) {
+ if (loadFactor <= 0) {
+ throw new IllegalArgumentException("Illegal Load factor: " + loadFactor);
+ }
+
+ this.loadFactor = loadFactor;
+
+ int cap = p2capacity(initialCapacity);
+ table = new Entry[cap];
+ threshold = (int) (cap * loadFactor);
+ }
+
+ /**
+ * Constructs a new, empty map with the specified initial capacity and default load factor.
+ *
+ * @param initialCapacity the initial capacity of the
+ * AbstractConcurrentReadCache.
+ * @throws IllegalArgumentException if the initial maximum number
+ * of elements is less
+ * than zero.
+ */
+ public AbstractConcurrentReadCache(int initialCapacity) {
+ this(initialCapacity, DEFAULT_LOAD_FACTOR);
+ }
+
+ /**
+ * Constructs a new, empty map with a default initial capacity and load factor.
+ */
+ public AbstractConcurrentReadCache() {
+ this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR);
+ }
+
+ /**
+ * Constructs a new map with the same mappings as the given map.
+ * The map is created with a capacity of twice the number of mappings in
+ * the given map or 11 (whichever is greater), and a default load factor.
+ */
+ public AbstractConcurrentReadCache(Map t) {
+ this(Math.max(2 * t.size(), 11), DEFAULT_LOAD_FACTOR);
+ putAll(t);
+ }
+
+ /**
+ * Returns true if this map contains no key-value mappings.
+ *
+ * @return true if this map contains no key-value mappings.
+ */
+ public synchronized boolean isEmpty() {
+ return count == 0;
+ }
+
+ /**
+ * Returns a set of the cache keys that reside in a particular group.
+ *
+ * @param groupName The name of the group to retrieve.
+ * @return a set containing all of the keys of cache entries that belong
+ * to this group, or null if the group was not found.
+ * @exception NullPointerException if the groupName is null.
+ */
+ public Set getGroup(String groupName) {
+ if (log.isDebugEnabled()) {
+ log.debug("getGroup called (group=" + groupName + ")");
+ }
+
+ Set memoryGroupEntries = null;
+ if (memoryCaching && (groups != null)) {
+ memoryGroupEntries = (Set) getGroupForReading(groupName);
+ }
+
+ // CACHE-309
+ Set persistGroupEntries = persistRetrieveGroup(groupName);
+
+ if (memoryGroupEntries != null) {
+ if (persistGroupEntries != null) {
+ memoryGroupEntries.addAll(persistGroupEntries);
+ }
+ return memoryGroupEntries;
+ }
+
+ return persistGroupEntries;
+ }
+
+ /**
+ * Set the cache capacity
+ */
+ public void setMaxEntries(int newLimit) {
+ if (newLimit > 0) {
+ maxEntries = newLimit;
+
+ synchronized (this) { // because remove() isn't synchronized
+
+ while (size() > maxEntries) {
+ remove(removeItem(), false, false);
+ }
+ }
+ } else {
+ // Capacity must be at least 1
+ throw new IllegalArgumentException("Cache maximum number of entries must be at least 1");
+ }
+ }
+
+ /**
+ * Retrieve the cache capacity (number of entries).
+ */
+ public int getMaxEntries() {
+ return maxEntries;
+ }
+
+ /**
+ * Sets the memory caching flag.
+ */
+ public void setMemoryCaching(boolean memoryCaching) {
+ this.memoryCaching = memoryCaching;
+ }
+
+ /**
+ * Check if memory caching is used.
+ */
+ public boolean isMemoryCaching() {
+ return memoryCaching;
+ }
+
+ /**
+ * Set the persistence listener to use.
+ */
+ public void setPersistenceListener(PersistenceListener listener) {
+ this.persistenceListener = listener;
+ }
+
+ /**
+ * Get the persistence listener.
+ */
+ public PersistenceListener getPersistenceListener() {
+ return persistenceListener;
+ }
+
+ /**
+ * Sets the unlimited disk caching flag.
+ */
+ public void setUnlimitedDiskCache(boolean unlimitedDiskCache) {
+ this.unlimitedDiskCache = unlimitedDiskCache;
+ }
+
+ /**
+ * Check if we use unlimited disk cache.
+ */
+ public boolean isUnlimitedDiskCache() {
+ return unlimitedDiskCache;
+ }
+
+ /**
+ * Check if we use overflowPersistence
+ *
+ * @return Returns the overflowPersistence.
+ */
+ public boolean isOverflowPersistence() {
+ return this.overflowPersistence;
+ }
+
+ /**
+ * Sets the overflowPersistence flag
+ *
+ * @param overflowPersistence The overflowPersistence to set.
+ */
+ public void setOverflowPersistence(boolean overflowPersistence) {
+ this.overflowPersistence = overflowPersistence;
+ }
+
+ /**
+ * Return the number of slots in this table.
+ **/
+ public synchronized int capacity() {
+ return table.length;
+ }
+
+ /**
+ * Removes all mappings from this map.
+ */
+ public synchronized void clear() {
+ Entry[] tab = table;
+
+ for (int i = 0; i < tab.length; ++i) {
+ // must invalidate all to force concurrent get's to wait and then retry
+ for (Entry e = tab[i]; e != null; e = e.next) {
+ e.value = null;
+
+ /** OpenSymphony BEGIN */
+ itemRemoved(e.key);
+
+ /** OpenSymphony END */
+ }
+
+ tab[i] = null;
+ }
+
+ // Clean out the entire disk cache
+ persistClear();
+
+ count = 0;
+ recordModification(tab);
+ }
+
+ /**
+ * Returns a shallow copy of this
+ * AbstractConcurrentReadCache instance: the keys and
+ * values themselves are not cloned.
+ *
+ * @return a shallow copy of this map.
+ */
+ public synchronized Object clone() {
+ try {
+ AbstractConcurrentReadCache t = (AbstractConcurrentReadCache) super.clone();
+ t.keySet = null;
+ t.entrySet = null;
+ t.values = null;
+
+ Entry[] tab = table;
+ t.table = new Entry[tab.length];
+
+ Entry[] ttab = t.table;
+
+ for (int i = 0; i < tab.length; ++i) {
+ Entry first = tab[i];
+
+ if (first != null) {
+ ttab[i] = (Entry) (first.clone());
+ }
+ }
+
+ return t;
+ } catch (CloneNotSupportedException e) {
+ // this shouldn't happen, since we are Cloneable
+ throw new InternalError();
+ }
+ }
+
+ /**
+ * Tests if some key maps into the specified value in this table.
+ * This operation is more expensive than the containsKey
+ * method.
+ *
+ * @return true if and only if some key maps to the
+ * value argument in this table as
+ * determined by the equals method;
+ * false otherwise.
+ * @exception NullPointerException if the value is null.
+ * @see #containsKey(Object)
+ * @see #containsValue(Object)
+ * @see Map
+ */
+ public boolean contains(Object value) {
+ return containsValue(value);
+ }
+
+ /**
+ * Tests if the specified object is a key in this table.
+ *
+ * @param key possible key.
+ * @return true if and only if the specified object
+ * is a key in this table, as determined by the
+ * equals method; false otherwise.
+ * @exception NullPointerException if the key is null.
+ * @see #contains(Object)
+ */
+ public boolean containsKey(Object key) {
+ return get(key) != null;
+
+ /** OpenSymphony BEGIN */
+
+ // TODO: Also check the persistence?
+
+ /** OpenSymphony END */
+ }
+
+ /**
+ * Returns true if this map maps one or more keys to the
+ * specified value. Note: This method requires a full internal
+ * traversal of the hash table, and so is much slower than
+ * method containsKey.
+ *
+ * @param value value whose presence in this map is to be tested.
+ * @return true if this map maps one or more keys to the
+ * specified value.
+ * @exception NullPointerException if the value is null.
+ */
+ public boolean containsValue(Object value) {
+ if (value == null) {
+ throw new NullPointerException();
+ }
+
+ Entry[] tab = getTableForReading();
+
+ for (int i = 0; i < tab.length; ++i) {
+ for (Entry e = tab[i]; e != null; e = e.next) {
+ Object v = e.value;
+
+ if ((v != null) && value.equals(v)) {
+ return true;
+ }
+ }
+ }
+
+ return false;
+ }
+
+ /**
+ * Returns an enumeration of the values in this table.
+ * Use the Enumeration methods on the returned object to fetch the elements
+ * sequentially.
+ *
+ * @return an enumeration of the values in this table.
+ * @see java.util.Enumeration
+ * @see #keys()
+ * @see #values()
+ * @see Map
+ */
+ public Enumeration elements() {
+ return new ValueIterator();
+ }
+
+ /**
+ * Returns a collection view of the mappings contained in this map.
+ * Each element in the returned collection is a Map.Entry. The
+ * collection is backed by the map, so changes to the map are reflected in
+ * the collection, and vice-versa. The collection supports element
+ * removal, which removes the corresponding mapping from the map, via the
+ * Iterator.remove, Collection.remove,
+ * removeAll, retainAll, and clear operations.
+ * It does not support the add or addAll operations.
+ *
+ * @return a collection view of the mappings contained in this map.
+ */
+ public Set entrySet() {
+ Set es = entrySet;
+
+ if (es != null) {
+ return es;
+ } else {
+ return entrySet = new AbstractSet() {
+ public Iterator iterator() {
+ return new HashIterator();
+ }
+
+ public boolean contains(Object o) {
+ if (!(o instanceof Map.Entry)) {
+ return false;
+ }
+
+ Map.Entry entry = (Map.Entry) o;
+ Object key = entry.getKey();
+ Object v = AbstractConcurrentReadCache.this.get(key);
+
+ return (v != null) && v.equals(entry.getValue());
+ }
+
+ public boolean remove(Object o) {
+ if (!(o instanceof Map.Entry)) {
+ return false;
+ }
+
+ return AbstractConcurrentReadCache.this.findAndRemoveEntry((Map.Entry) o);
+ }
+
+ public int size() {
+ return AbstractConcurrentReadCache.this.size();
+ }
+
+ public void clear() {
+ AbstractConcurrentReadCache.this.clear();
+ }
+ };
+ }
+ }
+
+ /**
+ * Returns the value to which the specified key is mapped in this table.
+ *
+ * @param key a key in the table.
+ * @return the value to which the key is mapped in this table;
+ * null if the key is not mapped to any value in
+ * this table.
+ * @exception NullPointerException if the key is null.
+ * @see #put(Object, Object)
+ */
+ public Object get(Object key) {
+ if (log.isDebugEnabled()) {
+ log.debug("get called (key=" + key + ")");
+ }
+
+ // throw null pointer exception if key null
+ int hash = hash(key);
+
+ /*
+ Start off at the apparently correct bin. If entry is found, we
+ need to check after a barrier anyway. If not found, we need a
+ barrier to check if we are actually in right bin. So either
+ way, we encounter only one barrier unless we need to retry.
+ And we only need to fully synchronize if there have been
+ concurrent modifications.
+ */
+ Entry[] tab = table;
+ int index = hash & (tab.length - 1);
+ Entry first = tab[index];
+ Entry e = first;
+
+ for (;;) {
+ if (e == null) {
+ // If key apparently not there, check to
+ // make sure this was a valid read
+ tab = getTableForReading();
+
+ if (first == tab[index]) {
+ /** OpenSymphony BEGIN */
+
+ /* Previous code
+ return null;*/
+
+ // Not in the table, try persistence
+ Object value = persistRetrieve(key);
+
+ if (value != null) {
+ // Update the map, but don't persist the data
+ put(key, value, false);
+ }
+
+ return value;
+
+ /** OpenSymphony END */
+ } else {
+ // Wrong list -- must restart traversal at new first
+ e = first = tab[index = hash & (tab.length - 1)];
+ }
+ }
+ // checking for pointer equality first wins in most applications
+ else if ((key == e.key) || ((e.hash == hash) && key.equals(e.key))) {
+ Object value = e.value;
+
+ if (value != null) {
+ /** OpenSymphony BEGIN */
+
+ /* Previous code
+ return value;*/
+ if (NULL.equals(value)) {
+ // Memory cache disable, use disk
+ value = persistRetrieve(e.key);
+
+ if (value != null) {
+ itemRetrieved(key);
+ }
+
+ return value; // fix [CACHE-13]
+ } else {
+ itemRetrieved(key);
+
+ return value;
+ }
+
+ /** OpenSymphony END */
+ }
+
+ // Entry was invalidated during deletion. But it could
+ // have been re-inserted, so we must retraverse.
+ // To avoid useless contention, get lock to wait out modifications
+ // before retraversing.
+ synchronized (this) {
+ tab = table;
+ }
+
+ e = first = tab[index = hash & (tab.length - 1)];
+ } else {
+ e = e.next;
+ }
+ }
+ }
+
+ /**
+ * Returns a set view of the keys contained in this map.
+ * The set is backed by the map, so changes to the map are reflected in the set, and
+ * vice-versa. The set supports element removal, which removes the
+ * corresponding mapping from this map, via the Iterator.remove,
+ * Set.remove, removeAll, retainAll, and
+ * clear operations. It does not support the add or
+ * addAll operations.
+ *
+ * @return a set view of the keys contained in this map.
+ */
+ public Set keySet() {
+ Set ks = keySet;
+
+ if (ks != null) {
+ return ks;
+ } else {
+ return keySet = new AbstractSet() {
+ public Iterator iterator() {
+ return new KeyIterator();
+ }
+
+ public int size() {
+ return AbstractConcurrentReadCache.this.size();
+ }
+
+ public boolean contains(Object o) {
+ return AbstractConcurrentReadCache.this.containsKey(o);
+ }
+
+ public boolean remove(Object o) {
+ return AbstractConcurrentReadCache.this.remove(o) != null;
+ }
+
+ public void clear() {
+ AbstractConcurrentReadCache.this.clear();
+ }
+ };
+ }
+ }
+
+ /**
+ * Returns an enumeration of the keys in this table.
+ *
+ * @return an enumeration of the keys in this table.
+ * @see Enumeration
+ * @see #elements()
+ * @see #keySet()
+ * @see Map
+ */
+ public Enumeration keys() {
+ return new KeyIterator();
+ }
+
+ /**
+ * Return the load factor
+ **/
+ public float loadFactor() {
+ return loadFactor;
+ }
+
+ /**
+ * Maps the specified key to the specified value in this table.
+ * Neither the key nor the value can be null.
+ *
+ * The value can be retrieved by calling the get method
+ * with a key that is equal to the original key.
+ *
+ * @param key the table key.
+ * @param value the value.
+ * @return the previous value of the specified key in this table,
+ * or null if it did not have one.
+ * @exception NullPointerException if the key or value is null.
+ * @see Object#equals(Object)
+ * @see #get(Object)
+ */
+ /** OpenSymphony BEGIN */
+ public Object put(Object key, Object value) {
+ // Call the internal put using persistence
+ return put(key, value, true);
+ }
+
+ /**
+ * Copies all of the mappings from the specified map to this one.
+ *
+ * These mappings replace any mappings that this map had for any of the
+ * keys currently in the specified Map.
+ *
+ * @param t Mappings to be stored in this map.
+ */
+ public synchronized void putAll(Map t) {
+ for (Iterator it = t.entrySet().iterator(); it.hasNext();) {
+ Map.Entry entry = (Map.Entry) it.next();
+ Object key = entry.getKey();
+ Object value = entry.getValue();
+ put(key, value);
+ }
+ }
+
+ /**
+ * Removes the key (and its corresponding value) from this table.
+ * This method does nothing if the key is not in the table.
+ *
+ * @param key the key that needs to be removed.
+ * @return the value to which the key had been mapped in this table,
+ * or null if the key did not have a mapping.
+ */
+ /** OpenSymphony BEGIN */
+ public Object remove(Object key) {
+ return remove(key, true, false);
+ }
+
+ /**
+ * Like remove(Object), but ensures that the entry will be removed from the persistent store, too,
+ * even if overflowPersistence or unlimitedDiskCache are true.
+ *
+ * @param key the key that needs to be removed.
+ * @return the value to which the key had been mapped in this table,
+ * or null if the key did not have a mapping.
+ */
+ public Object removeForce(Object key) {
+ return remove(key, true, true);
+ }
+
+ /**
+ * Returns the total number of cache entries held in this map.
+ *
+ * @return the number of key-value mappings in this map.
+ */
+ public synchronized int size() {
+ return count;
+ }
+
+ /**
+ * Returns a collection view of the values contained in this map.
+ * The collection is backed by the map, so changes to the map are reflected in
+ * the collection, and vice-versa. The collection supports element
+ * removal, which removes the corresponding mapping from this map, via the
+ * Iterator.remove, Collection.remove,
+ * removeAll, retainAll, and clear operations.
+ * It does not support the add or addAll operations.
+ *
+ * @return a collection view of the values contained in this map.
+ */
+ public Collection values() {
+ Collection vs = values;
+
+ if (vs != null) {
+ return vs;
+ } else {
+ return values = new AbstractCollection() {
+ public Iterator iterator() {
+ return new ValueIterator();
+ }
+
+ public int size() {
+ return AbstractConcurrentReadCache.this.size();
+ }
+
+ public boolean contains(Object o) {
+ return AbstractConcurrentReadCache.this.containsValue(o);
+ }
+
+ public void clear() {
+ AbstractConcurrentReadCache.this.clear();
+ }
+ };
+ }
+ }
+
+ /**
+ * Get ref to group.
+ * CACHE-127 Synchronized copying of the group entry set since
+ * the new HashSet(Collection c) constructor uses the iterator.
+ * This may slow things down but it is better than a
+ * ConcurrentModificationException. We might have to revisit the
+ * code if performance is too adversely impacted.
+ **/
+ protected synchronized final Set getGroupForReading(String groupName) {
+ Set group = (Set) getGroupsForReading().get(groupName);
+ if (group == null) return null;
+ return new HashSet(group);
+ }
+
+ /**
+ * Get ref to groups.
+ * The reference and the cells it
+ * accesses will be at least as fresh as from last
+ * use of barrierLock
+ **/
+ protected final Map getGroupsForReading() {
+ synchronized (barrierLock) {
+ return groups;
+ }
+ }
+
+ /**
+ * Get ref to table; the reference and the cells it
+ * accesses will be at least as fresh as from last
+ * use of barrierLock
+ **/
+ protected final Entry[] getTableForReading() {
+ synchronized (barrierLock) {
+ return table;
+ }
+ }
+
+ /**
+ * Force a memory synchronization that will cause
+ * all readers to see table. Call only when already
+ * holding main synch lock.
+ **/
+ protected final void recordModification(Object x) {
+ synchronized (barrierLock) {
+ lastWrite = x;
+ }
+ }
+
+ /**
+ * Helper method for entrySet remove.
+ **/
+ protected synchronized boolean findAndRemoveEntry(Map.Entry entry) {
+ Object key = entry.getKey();
+ Object v = get(key);
+
+ if ((v != null) && v.equals(entry.getValue())) {
+ remove(key);
+
+ return true;
+ } else {
+ return false;
+ }
+ }
+
+ /**
+ * Remove an object from the persistence.
+ * @param key The key of the object to remove
+ */
+ protected void persistRemove(Object key) {
+ if (log.isDebugEnabled()) {
+ log.debug("PersistRemove called (key=" + key + ")");
+ }
+
+ if (persistenceListener != null) {
+ try {
+ persistenceListener.remove((String) key);
+ } catch (CachePersistenceException e) {
+ log.error("[oscache] Exception removing cache entry with key '" + key + "' from persistence", e);
+ }
+ }
+ }
+
+ /**
+ * Removes a cache group using the persistence listener.
+ * @param groupName The name of the group to remove
+ */
+ protected void persistRemoveGroup(String groupName) {
+ if (log.isDebugEnabled()) {
+ log.debug("persistRemoveGroup called (groupName=" + groupName + ")");
+ }
+
+ if (persistenceListener != null) {
+ try {
+ persistenceListener.removeGroup(groupName);
+ } catch (CachePersistenceException e) {
+ log.error("[oscache] Exception removing group " + groupName, e);
+ }
+ }
+ }
+
+ /**
+ * Retrieve an object from the persistence listener.
+ * @param key The key of the object to retrieve
+ */
+ protected Object persistRetrieve(Object key) {
+ if (log.isDebugEnabled()) {
+ log.debug("persistRetrieve called (key=" + key + ")");
+ }
+
+ Object entry = null;
+
+ if (persistenceListener != null) {
+ try {
+ entry = persistenceListener.retrieve((String) key);
+ } catch (CachePersistenceException e) {
+ /**
+ * It is normal that we get an exception occasionally.
+ * It happens when the item is invalidated (written or removed)
+ * during read. The logic is constructed so that read is retried.
+ */
+ }
+ }
+
+ return entry;
+ }
+
+ /**
+ * Retrieves a cache group using the persistence listener.
+ * @param groupName The name of the group to retrieve
+ */
+ protected Set persistRetrieveGroup(String groupName) {
+ if (log.isDebugEnabled()) {
+ log.debug("persistRetrieveGroup called (groupName=" + groupName + ")");
+ }
+
+ if (persistenceListener != null) {
+ try {
+ return persistenceListener.retrieveGroup(groupName);
+ } catch (CachePersistenceException e) {
+ log.error("[oscache] Exception retrieving group " + groupName, e);
+ }
+ }
+
+ return null;
+ }
+
+ /**
+ * Store an object in the cache using the persistence listener.
+ * @param key The object key
+ * @param obj The object to store
+ */
+ protected void persistStore(Object key, Object obj) {
+ if (log.isDebugEnabled()) {
+ log.debug("persistStore called (key=" + key + ")");
+ }
+
+ if (persistenceListener != null) {
+ try {
+ persistenceListener.store((String) key, obj);
+ } catch (CachePersistenceException e) {
+ log.error("[oscache] Exception persisting " + key, e);
+ }
+ }
+ }
+
+ /**
+ * Creates or Updates a cache group using the persistence listener.
+ * @param groupName The name of the group to update
+ * @param group The entries for the group
+ */
+ protected void persistStoreGroup(String groupName, Set group) {
+ if (log.isDebugEnabled()) {
+ log.debug("persistStoreGroup called (groupName=" + groupName + ")");
+ }
+
+ if (persistenceListener != null) {
+ try {
+ if ((group == null) || group.isEmpty()) {
+ persistenceListener.removeGroup(groupName);
+ } else {
+ persistenceListener.storeGroup(groupName, group);
+ }
+ } catch (CachePersistenceException e) {
+ log.error("[oscache] Exception persisting group " + groupName, e);
+ }
+ }
+ }
+
+ /**
+ * Removes the entire cache from persistent storage.
+ */
+ protected void persistClear() {
+ if (log.isDebugEnabled()) {
+ log.debug("persistClear called");
+ }
+
+ if (persistenceListener != null) {
+ try {
+ persistenceListener.clear();
+ } catch (CachePersistenceException e) {
+ log.error("[oscache] Exception clearing persistent cache", e);
+ }
+ }
+ }
+
+ /**
+ * Notify the underlying implementation that an item was put in the cache.
+ *
+ * @param key The cache key of the item that was put.
+ */
+ protected abstract void itemPut(Object key);
+
+ /**
+ * Notify any underlying algorithm that an item has been retrieved from the cache.
+ *
+ * @param key The cache key of the item that was retrieved.
+ */
+ protected abstract void itemRetrieved(Object key);
+
+ /**
+ * Notify the underlying implementation that an item was removed from the cache.
+ *
+ * @param key The cache key of the item that was removed.
+ */
+ protected abstract void itemRemoved(Object key);
+
+ /**
+ * The cache has reached its capacity and an item needs to be removed
+ * (typically according to an algorithm such as LRU or FIFO).
+ *
+ * @return The key of whichever item was removed.
+ */
+ protected abstract Object removeItem();
+
+ /**
+ * Reconstitute the AbstractConcurrentReadCache
+ * instance from a stream (i.e.,
+ * deserialize it).
+ */
+ private synchronized void readObject(java.io.ObjectInputStream s) throws IOException, ClassNotFoundException {
+ // Read in the threshold, loadfactor, and any hidden stuff
+ s.defaultReadObject();
+
+ // Read in number of buckets and allocate the bucket array;
+ int numBuckets = s.readInt();
+ table = new Entry[numBuckets];
+
+ // Read in size (number of Mappings)
+ int size = s.readInt();
+
+ // Read the keys and values, and put the mappings in the table
+ for (int i = 0; i < size; i++) {
+ Object key = s.readObject();
+ Object value = s.readObject();
+ put(key, value);
+ }
+ }
+
+ /**
+ * Rehashes the contents of this map into a new table with a larger capacity.
+ * This method is called automatically when the
+ * number of keys in this map exceeds its capacity and load factor.
+ */
+ protected void rehash() {
+ Entry[] oldMap = table;
+ int oldCapacity = oldMap.length;
+
+ if (oldCapacity >= MAXIMUM_CAPACITY) {
+ return;
+ }
+
+ int newCapacity = oldCapacity << 1;
+ Entry[] newMap = new Entry[newCapacity];
+ threshold = (int) (newCapacity * loadFactor);
+
+ /*
+ We need to guarantee that any existing reads of oldMap can
+ proceed. So we cannot yet null out each oldMap bin.
+
+ Because we are using power-of-two expansion, the elements
+ from each bin must either stay at same index, or move
+ to oldCapacity+index. We also minimize new node creation by
+ catching cases where old nodes can be reused because their
+ .next fields won't change. (This is checked only for sequences
+ of one and two. It is not worth checking longer ones.)
+ */
+ for (int i = 0; i < oldCapacity; ++i) {
+ Entry l = null;
+ Entry h = null;
+ Entry e = oldMap[i];
+
+ while (e != null) {
+ int hash = e.hash;
+ Entry next = e.next;
+
+ if ((hash & oldCapacity) == 0) {
+ // stays at newMap[i]
+ if (l == null) {
+ // try to reuse node
+ if ((next == null) || ((next.next == null) && ((next.hash & oldCapacity) == 0))) {
+ l = e;
+
+ break;
+ }
+ }
+
+ l = new Entry(hash, e.key, e.value, l);
+ } else {
+ // moves to newMap[oldCapacity+i]
+ if (h == null) {
+ if ((next == null) || ((next.next == null) && ((next.hash & oldCapacity) != 0))) {
+ h = e;
+
+ break;
+ }
+ }
+
+ h = new Entry(hash, e.key, e.value, h);
+ }
+
+ e = next;
+ }
+
+ newMap[i] = l;
+ newMap[oldCapacity + i] = h;
+ }
+
+ table = newMap;
+ recordModification(newMap);
+ }
+
+ /**
+ * Continuation of put(), called only when synch lock is
+ * held and interference has been detected.
+ **/
+ /** OpenSymphony BEGIN */
+
+ /* Previous code
+ protected Object sput(Object key, Object value, int hash) {*/
+ protected Object sput(Object key, Object value, int hash, boolean persist) {
+ /** OpenSymphony END */
+ Entry[] tab = table;
+ int index = hash & (tab.length - 1);
+ Entry first = tab[index];
+ Entry e = first;
+
+ for (;;) {
+ if (e == null) {
+ /** OpenSymphony BEGIN */
+
+ // Previous code
+ // Entry newEntry = new Entry(hash, key, value, first);
+ Entry newEntry;
+
+ if (memoryCaching) {
+ newEntry = new Entry(hash, key, value, first);
+ } else {
+ newEntry = new Entry(hash, key, NULL, first);
+ }
+
+ itemPut(key);
+
+ // Persist if required
+ if (persist && !overflowPersistence) {
+ persistStore(key, value);
+ }
+
+ // If we have a CacheEntry, update the group lookups
+ if (value instanceof CacheEntry) {
+ updateGroups(null, (CacheEntry) value, persist);
+ }
+
+ /** OpenSymphony END */
+ tab[index] = newEntry;
+
+ if (++count >= threshold) {
+ rehash();
+ } else {
+ recordModification(newEntry);
+ }
+
+ return null;
+ } else if ((key == e.key) || ((e.hash == hash) && key.equals(e.key))) {
+ Object oldValue = e.value;
+
+ /** OpenSymphony BEGIN */
+
+ /* Previous code
+ e.value = value; */
+ if (memoryCaching) {
+ e.value = value;
+ }
+
+ // Persist if required
+ if (persist && overflowPersistence) {
+ persistRemove(key);
+ } else if (persist) {
+ persistStore(key, value);
+ }
+
+ updateGroups(oldValue, value, persist);
+
+ itemPut(key);
+
+ /** OpenSymphony END */
+ return oldValue;
+ } else {
+ e = e.next;
+ }
+ }
+ }
+
+ /**
+ * Continuation of remove(), called only when synch lock is
+ * held and interference has been detected.
+ **/
+ /** OpenSymphony BEGIN */
+
+ /* Previous code
+ protected Object sremove(Object key, int hash) { */
+ protected Object sremove(Object key, int hash, boolean invokeAlgorithm) {
+ /** OpenSymphony END */
+ Entry[] tab = table;
+ int index = hash & (tab.length - 1);
+ Entry first = tab[index];
+ Entry e = first;
+
+ for (;;) {
+ if (e == null) {
+ return null;
+ } else if ((key == e.key) || ((e.hash == hash) && key.equals(e.key))) {
+ Object oldValue = e.value;
+ if (persistenceListener != null && (oldValue == NULL)) {
+ oldValue = persistRetrieve(key);
+ }
+
+ e.value = null;
+ count--;
+
+ /** OpenSymphony BEGIN */
+ if (!unlimitedDiskCache && !overflowPersistence) {
+ persistRemove(e.key);
+ // If we have a CacheEntry, update the groups
+ if (oldValue instanceof CacheEntry) {
+ CacheEntry oldEntry = (CacheEntry)oldValue;
+ removeGroupMappings(oldEntry.getKey(),
+ oldEntry.getGroups(), true);
+ }
+ } else {
+ // only remove from memory groups
+ if (oldValue instanceof CacheEntry) {
+ CacheEntry oldEntry = (CacheEntry)oldValue;
+ removeGroupMappings(oldEntry.getKey(),
+ oldEntry.getGroups(), false);
+ }
+ }
+
+ if (overflowPersistence && ((size() + 1) >= maxEntries)) {
+ persistStore(key, oldValue);
+ // add key to persistent groups but NOT to the memory groups
+ if (oldValue instanceof CacheEntry) {
+ CacheEntry oldEntry = (CacheEntry)oldValue;
+ addGroupMappings(oldEntry.getKey(), oldEntry.getGroups(), true, false);
+ }
+ }
+
+ if (invokeAlgorithm) {
+ itemRemoved(key);
+ }
+
+ /** OpenSymphony END */
+ Entry head = e.next;
+
+ for (Entry p = first; p != e; p = p.next) {
+ head = new Entry(p.hash, p.key, p.value, head);
+ }
+
+ tab[index] = head;
+ recordModification(head);
+
+ return oldValue;
+ } else {
+ e = e.next;
+ }
+ }
+ }
+
+ /**
+ * Save the state of the AbstractConcurrentReadCache instance to a stream
+ * (i.e., serialize it).
+ *
+ * @serialData The capacity of the
+ * AbstractConcurrentReadCache (the length of the
+ * bucket array) is emitted (int), followed by the
+ * size of the AbstractConcurrentReadCache (the number of key-value
+ * mappings), followed by the key (Object) and value (Object)
+ * for each key-value mapping represented by the AbstractConcurrentReadCache.
+ * The key-value mappings are emitted in no particular order.
+ */
+ private synchronized void writeObject(java.io.ObjectOutputStream s) throws IOException {
+ // Write out the threshold, loadfactor, and any hidden stuff
+ s.defaultWriteObject();
+
+ // Write out number of buckets
+ s.writeInt(table.length);
+
+ // Write out size (number of Mappings)
+ s.writeInt(count);
+
+ // Write out keys and values (alternating)
+ for (int index = table.length - 1; index >= 0; index--) {
+ Entry entry = table[index];
+
+ while (entry != null) {
+ s.writeObject(entry.key);
+ s.writeObject(entry.value);
+ entry = entry.next;
+ }
+ }
+ }
+
+ /**
+ * Return hash code for Object x.
+ * Since we are using power-of-two
+ * tables, it is worth the effort to improve hashcode via
+ * the same multiplicative scheme as used in IdentityHashMap.
+ */
+ private static int hash(Object x) {
+ int h = x.hashCode();
+
+ // Multiply by 127 (quickly, via shifts), and mix in some high
+ // bits to help guard against bunching of codes that are
+ // consecutive or equally spaced.
+ return ((h << 7) - h + (h >>> 9) + (h >>> 17));
+ }
+
+ /**
+ * Add this cache key to the specified groups.
+ * We have to treat the
+ * memory and disk group mappings separately so they remain valid for their
+ * corresponding memory/disk caches. (eg if mem is limited to 100 entries
+ * and disk is unlimited, the group mappings will be different).
+ *
+ * @param key The cache key that we are adding to the groups.
+ * @param newGroups the set of groups we want to add this cache entry to.
+ * @param persist A flag to indicate whether the keys should be added to
+ * the persistent cache layer.
+ * @param memory A flag to indicate whether the key should be added to
+ * the memory groups (important for overflow-to-disk)
+ */
+ private void addGroupMappings(String key, Set newGroups, boolean persist, boolean memory) {
+ if (newGroups == null) {
+ return;
+ }
+
+ // Add this CacheEntry to the groups that it is now a member of
+ for (Iterator it = newGroups.iterator(); it.hasNext();) {
+ String groupName = (String) it.next();
+
+ // Update the in-memory groups
+ if (memoryCaching && memory) {
+ if (groups == null) {
+ groups = new HashMap();
+ }
+
+ Set memoryGroup = (Set) groups.get(groupName);
+
+ if (memoryGroup == null) {
+ memoryGroup = new HashSet();
+ groups.put(groupName, memoryGroup);
+ }
+
+ memoryGroup.add(key);
+ }
+
+ // Update the persistent group maps
+ if (persist) {
+ Set persistentGroup = persistRetrieveGroup(groupName);
+
+ if (persistentGroup == null) {
+ persistentGroup = new HashSet();
+ }
+
+ persistentGroup.add(key);
+ persistStoreGroup(groupName, persistentGroup);
+ }
+ }
+ }
+
+ /** OpenSymphony END (pretty long!) */
+ /**
+ * Returns the appropriate capacity (power of two) for the specified
+ * initial capacity argument.
+ */
+ private int p2capacity(int initialCapacity) {
+ int cap = initialCapacity;
+
+ // Compute the appropriate capacity
+ int result;
+
+ if ((cap > MAXIMUM_CAPACITY) || (cap < 0)) {
+ result = MAXIMUM_CAPACITY;
+ } else {
+ result = MINIMUM_CAPACITY;
+
+ while (result < cap) {
+ result <<= 1;
+ }
+ }
+
+ return result;
+ }
+
+ /* Previous code
+ public Object put(Object key, Object value)*/
+ private Object put(Object key, Object value, boolean persist) {
+ /** OpenSymphony END */
+ if (value == null) {
+ throw new NullPointerException();
+ }
+
+ int hash = hash(key);
+ Entry[] tab = table;
+ int index = hash & (tab.length - 1);
+ Entry first = tab[index];
+ Entry e = first;
+
+ for (;;) {
+ if (e == null) {
+ synchronized (this) {
+ tab = table;
+
+ /** OpenSymphony BEGIN */
+
+ // Previous code
+
+ /* if (first == tab[index]) {
+ // Add to front of list
+ Entry newEntry = new Entry(hash, key, value, first);
+ tab[index] = newEntry;
+ if (++count >= threshold) rehash();
+ else recordModification(newEntry);
+ return null; */
+
+ Object oldValue = null;
+
+ // Remove an item if the cache is full
+ if (size() >= maxEntries) {
+ // part of fix CACHE-255: method should return old value
+ oldValue = remove(removeItem(), false, false);
+ }
+
+ if (first == tab[index]) {
+ // Add to front of list
+ Entry newEntry = null;
+
+ if (memoryCaching) {
+ newEntry = new Entry(hash, key, value, first);
+ } else {
+ newEntry = new Entry(hash, key, NULL, first);
+ }
+
+ tab[index] = newEntry;
+ itemPut(key);
+
+ // Persist if required
+ if (persist && !overflowPersistence) {
+ persistStore(key, value);
+ }
+
+ // If we have a CacheEntry, update the group lookups
+ if (value instanceof CacheEntry) {
+ updateGroups(null, (CacheEntry) value, persist);
+ }
+
+ if (++count >= threshold) {
+ rehash();
+ } else {
+ recordModification(newEntry);
+ }
+
+ return oldValue;
+
+ /** OpenSymphony END */
+ } else {
+ // wrong list -- retry
+
+ /** OpenSymphony BEGIN */
+
+ /* Previous code
+ return sput(key, value, hash);*/
+ return sput(key, value, hash, persist);
+
+ /** OpenSymphony END */
+ }
+ }
+ } else if ((key == e.key) || ((e.hash == hash) && key.equals(e.key))) {
+ // synch to avoid race with remove and to
+ // ensure proper serialization of multiple replaces
+ synchronized (this) {
+ tab = table;
+
+ Object oldValue = e.value;
+
+ // [CACHE-118] - get the old cache entry even if there's no memory cache
+ if (persist && (oldValue == NULL)) {
+ oldValue = persistRetrieve(key);
+ }
+
+ if ((first == tab[index]) && (oldValue != null)) {
+ /** OpenSymphony BEGIN */
+
+ /* Previous code
+ e.value = value;
+ return oldValue; */
+ if (memoryCaching) {
+ e.value = value;
+ }
+
+ // Persist if required
+ if (persist && overflowPersistence) {
+ persistRemove(key);
+ } else if (persist) {
+ persistStore(key, value);
+ }
+
+ updateGroups(oldValue, value, persist);
+ itemPut(key);
+
+ return oldValue;
+
+ /** OpenSymphony END */
+ } else {
+ // retry if wrong list or lost race against concurrent remove
+
+ /** OpenSymphony BEGIN */
+
+ /* Previous code
+ return sput(key, value, hash);*/
+ return sput(key, value, hash, persist);
+
+ /** OpenSymphony END */
+ }
+ }
+ } else {
+ e = e.next;
+ }
+ }
+ }
+
+ private synchronized Object remove(Object key, boolean invokeAlgorithm, boolean forcePersist)
+ /* Previous code
+ public Object remove(Object key) */
+
+ /** OpenSymphony END */ {
+ /*
+ Strategy:
+
+ Find the entry, then
+ 1. Set value field to null, to force get() to retry
+ 2. Rebuild the list without this entry.
+ All entries following removed node can stay in list, but
+ all preceding ones need to be cloned. Traversals rely
+ on this strategy to ensure that elements will not be
+ repeated during iteration.
+ */
+
+ /** OpenSymphony BEGIN */
+ if (key == null) {
+ return null;
+ }
+
+ /** OpenSymphony END */
+ int hash = hash(key);
+ Entry[] tab = table;
+ int index = hash & (tab.length - 1);
+ Entry first = tab[index];
+ Entry e = first;
+
+ for (;;) {
+ if (e == null) {
+ tab = getTableForReading();
+
+ if (first == tab[index]) {
+ return null;
+ } else {
+ // Wrong list -- must restart traversal at new first
+
+ /** OpenSymphony BEGIN */
+
+ /* Previous Code
+ return sremove(key, hash); */
+ return sremove(key, hash, invokeAlgorithm);
+
+ /** OpenSymphony END */
+ }
+ } else if ((key == e.key) || ((e.hash == hash) && key.equals(e.key))) {
+ synchronized (this) {
+ tab = table;
+
+ Object oldValue = e.value;
+ if (persistenceListener != null && (oldValue == NULL)) {
+ oldValue = persistRetrieve(key);
+ }
+
+ // re-find under synch if wrong list
+ if ((first != tab[index]) || (oldValue == null)) {
+ /** OpenSymphony BEGIN */
+
+ /* Previous Code
+ return sremove(key, hash); */
+ return sremove(key, hash, invokeAlgorithm);
+ }
+
+ /** OpenSymphony END */
+ e.value = null;
+ count--;
+
+ /** OpenSymphony BEGIN */
+ if (forcePersist || (!unlimitedDiskCache && !overflowPersistence)) {
+ persistRemove(e.key);
+ // If we have a CacheEntry, update the group lookups
+ if (oldValue instanceof CacheEntry) {
+ CacheEntry oldEntry = (CacheEntry) oldValue;
+ removeGroupMappings(oldEntry.getKey(),
+ oldEntry.getGroups(), true);
+ }
+ } else {
+ // only remove from memory groups
+ if (oldValue instanceof CacheEntry) {
+ CacheEntry oldEntry = (CacheEntry) oldValue;
+ removeGroupMappings(oldEntry.getKey(), oldEntry
+ .getGroups(), false);
+ }
+ }
+
+ if (!forcePersist && overflowPersistence && ((size() + 1) >= maxEntries)) {
+ persistStore(key, oldValue);
+ // add key to persistent groups but NOT to the memory groups
+ if (oldValue instanceof CacheEntry) {
+ CacheEntry oldEntry = (CacheEntry) oldValue;
+ addGroupMappings(oldEntry.getKey(), oldEntry.getGroups(), true, false);
+ }
+ }
+
+ if (invokeAlgorithm) {
+ itemRemoved(key);
+ }
+
+ // introduced to fix bug CACHE-255
+ if (oldValue instanceof CacheEntry) {
+ CacheEntry oldEntry = (CacheEntry) oldValue;
+ oldValue = oldEntry.getContent();
+ }
+
+ /** OpenSymphony END */
+ Entry head = e.next;
+
+ for (Entry p = first; p != e; p = p.next) {
+ head = new Entry(p.hash, p.key, p.value, head);
+ }
+
+ tab[index] = head;
+ recordModification(head);
+
+ return oldValue;
+ }
+ } else {
+ e = e.next;
+ }
+ }
+ }
+
+ /**
+ * Remove this CacheEntry from the groups it no longer belongs to.
+ * We have to treat the memory and disk group mappings separately so they remain
+ * valid for their corresponding memory/disk caches. (eg if mem is limited
+ * to 100 entries and disk is unlimited, the group mappings will be
+ * different).
+ *
+ * @param key The cache key that we are removing from the groups.
+ * @param oldGroups the set of groups we want to remove the cache entry
+ * from.
+ * @param persist A flag to indicate whether the keys should be removed
+ * from the persistent cache layer.
+ */
+ private void removeGroupMappings(String key, Set oldGroups, boolean persist) {
+ if (oldGroups == null) {
+ return;
+ }
+
+ for (Iterator it = oldGroups.iterator(); it.hasNext();) {
+ String groupName = (String) it.next();
+
+ // Update the in-memory groups
+ if (memoryCaching && (this.groups != null)) {
+ Set memoryGroup = (Set) groups.get(groupName);
+
+ if (memoryGroup != null) {
+ memoryGroup.remove(key);
+
+ if (memoryGroup.isEmpty()) {
+ groups.remove(groupName);
+ }
+ }
+ }
+
+ // Update the persistent group maps
+ if (persist) {
+ Set persistentGroup = persistRetrieveGroup(groupName);
+
+ if (persistentGroup != null) {
+ persistentGroup.remove(key);
+
+ if (persistentGroup.isEmpty()) {
+ persistRemoveGroup(groupName);
+ } else {
+ persistStoreGroup(groupName, persistentGroup);
+ }
+ }
+ }
+ }
+ }
+
+ /**
+ * Updates the groups to reflect the differences between the old and new
+ * cache entries. Either of the old or new values can be null
+ * or contain a null group list, in which case the entry's
+ * groups will all be added or removed respectively.
+ *
+ * @param oldValue The old CacheEntry that is being replaced.
+ * @param newValue The new CacheEntry that is being inserted.
+ */
+ private void updateGroups(Object oldValue, Object newValue, boolean persist) {
+ // If we have/had a CacheEntry, update the group lookups
+ boolean oldIsCE = oldValue instanceof CacheEntry;
+ boolean newIsCE = newValue instanceof CacheEntry;
+
+ if (newIsCE && oldIsCE) {
+ updateGroups((CacheEntry) oldValue, (CacheEntry) newValue, persist);
+ } else if (newIsCE) {
+ updateGroups(null, (CacheEntry) newValue, persist);
+ } else if (oldIsCE) {
+ updateGroups((CacheEntry) oldValue, null, persist);
+ }
+ }
+
+ /**
+ * Updates the groups to reflect the differences between the old and new cache entries.
+ * Either of the old or new values can be null
+ * or contain a null group list, in which case the entry's
+ * groups will all be added or removed respectively.
+ *
+ * @param oldValue The old CacheEntry that is being replaced.
+ * @param newValue The new CacheEntry that is being inserted.
+ */
+ private void updateGroups(CacheEntry oldValue, CacheEntry newValue, boolean persist) {
+ Set oldGroups = null;
+ Set newGroups = null;
+
+ if (oldValue != null) {
+ oldGroups = oldValue.getGroups();
+ }
+
+ if (newValue != null) {
+ newGroups = newValue.getGroups();
+ }
+
+ // Get the names of the groups to remove
+ if (oldGroups != null) {
+ Set removeFromGroups = new HashSet();
+
+ for (Iterator it = oldGroups.iterator(); it.hasNext();) {
+ String groupName = (String) it.next();
+
+ if ((newGroups == null) || !newGroups.contains(groupName)) {
+ // We need to remove this group
+ removeFromGroups.add(groupName);
+ }
+ }
+
+ removeGroupMappings(oldValue.getKey(), removeFromGroups, persist);
+ }
+
+ // Get the names of the groups to add
+ if (newGroups != null) {
+ Set addToGroups = new HashSet();
+
+ for (Iterator it = newGroups.iterator(); it.hasNext();) {
+ String groupName = (String) it.next();
+
+ if ((oldGroups == null) || !oldGroups.contains(groupName)) {
+ // We need to add this group
+ addToGroups.add(groupName);
+ }
+ }
+
+ addGroupMappings(newValue.getKey(), addToGroups, persist, true);
+ }
+ }
+
+ /**
+ * AbstractConcurrentReadCache collision list entry.
+ */
+ protected static class Entry implements Map.Entry {
+ protected final Entry next;
+ protected final Object key;
+
+ /*
+ The use of volatile for value field ensures that
+ we can detect status changes without synchronization.
+ The other fields are never changed, and are
+ marked as final.
+ */
+ protected final int hash;
+ protected volatile Object value;
+
+ Entry(int hash, Object key, Object value, Entry next) {
+ this.hash = hash;
+ this.key = key;
+ this.next = next;
+ this.value = value;
+ }
+
+ // Map.Entry Ops
+ public Object getKey() {
+ return key;
+ }
+
+ /**
+ * Set the value of this entry.
+ * Note: In an entrySet or
+ * entrySet.iterator(), unless the set or iterator is used under
+ * synchronization of the table as a whole (or you can otherwise
+ * guarantee lack of concurrent modification), setValue
+ * is not strictly guaranteed to actually replace the value field
+ * obtained via the get operation of the underlying hash
+ * table in multithreaded applications. If iterator-wide
+ * synchronization is not used, and any other concurrent
+ * put or remove operations occur, sometimes
+ * even to other entries, then this change is not
+ * guaranteed to be reflected in the hash table. (It might, or it
+ * might not. There are no assurances either way.)
+ *
+ * @param value the new value.
+ * @return the previous value, or null if entry has been detectably
+ * removed.
+ * @exception NullPointerException if the value is null.
+ *
+ **/
+ public Object setValue(Object value) {
+ if (value == null) {
+ throw new NullPointerException();
+ }
+
+ Object oldValue = this.value;
+ this.value = value;
+
+ return oldValue;
+ }
+
+ /**
+ * Get the value.
+ * Note: In an entrySet or entrySet.iterator,
+ * unless the set or iterator is used under synchronization of the
+ * table as a whole (or you can otherwise guarantee lack of
+ * concurrent modification), getValue might
+ * return null, reflecting the fact that the entry has been
+ * concurrently removed. However, there are no assurances that
+ * concurrent removals will be reflected using this method.
+ *
+ * @return the current value, or null if the entry has been
+ * detectably removed.
+ **/
+ public Object getValue() {
+ return value;
+ }
+
+ public boolean equals(Object o) {
+ if (!(o instanceof Map.Entry)) {
+ return false;
+ }
+
+ Map.Entry e = (Map.Entry) o;
+
+ if (!key.equals(e.getKey())) {
+ return false;
+ }
+
+ Object v = value;
+
+ return (v == null) ? (e.getValue() == null) : v.equals(e.getValue());
+ }
+
+ public int hashCode() {
+ Object v = value;
+
+ return hash ^ ((v == null) ? 0 : v.hashCode());
+ }
+
+ public String toString() {
+ return key + "=" + value;
+ }
+
+ protected Object clone() {
+ return new Entry(hash, key, value, ((next == null) ? null : (Entry) next.clone()));
+ }
+ }
+
+ protected class HashIterator implements Iterator, Enumeration {
+ protected final Entry[] tab; // snapshot of table
+ protected Entry entry = null; // current node of slot
+ protected Entry lastReturned = null; // last node returned by next
+ protected Object currentKey; // key for current node
+ protected Object currentValue; // value for current node
+ protected int index; // current slot
+
+ protected HashIterator() {
+ tab = AbstractConcurrentReadCache.this.getTableForReading();
+ index = tab.length - 1;
+ }
+
+ public boolean hasMoreElements() {
+ return hasNext();
+ }
+
+ public boolean hasNext() {
+ /*
+ currentkey and currentValue are set here to ensure that next()
+ returns normally if hasNext() returns true. This avoids
+ surprises especially when final element is removed during
+ traversal -- instead, we just ignore the removal during
+ current traversal.
+ */
+ for (;;) {
+ if (entry != null) {
+ Object v = entry.value;
+
+ if (v != null) {
+ currentKey = entry.key;
+ currentValue = v;
+
+ return true;
+ } else {
+ entry = entry.next;
+ }
+ }
+
+ while ((entry == null) && (index >= 0)) {
+ entry = tab[index--];
+ }
+
+ if (entry == null) {
+ currentKey = currentValue = null;
+
+ return false;
+ }
+ }
+ }
+
+ public Object next() {
+ if ((currentKey == null) && !hasNext()) {
+ throw new NoSuchElementException();
+ }
+
+ Object result = returnValueOfNext();
+ lastReturned = entry;
+ currentKey = currentValue = null;
+ entry = entry.next;
+
+ return result;
+ }
+
+ public Object nextElement() {
+ return next();
+ }
+
+ public void remove() {
+ if (lastReturned == null) {
+ throw new IllegalStateException();
+ }
+
+ AbstractConcurrentReadCache.this.remove(lastReturned.key);
+ }
+
+ protected Object returnValueOfNext() {
+ return entry;
+ }
+ }
+
+ protected class KeyIterator extends HashIterator {
+ protected Object returnValueOfNext() {
+ return currentKey;
+ }
+ }
+
+ protected class ValueIterator extends HashIterator {
+ protected Object returnValueOfNext() {
+ return currentValue;
+ }
+ }
+}
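As a usage sketch of the group lookups above: getGroup merges the in-memory group set with whatever the persistence listener holds, so callers see one combined view. The subclass, capacity and group name below are arbitrary, and entries would normally have been added elsewhere as CacheEntry objects tagged with groups.

    import java.util.Iterator;
    import java.util.Set;

    import com.opensymphony.oscache.base.algorithm.FIFOCache;

    public class GroupLookupSketch {
        public static void main(String[] args) {
            // Any concrete AbstractConcurrentReadCache subclass will do here.
            FIFOCache cache = new FIFOCache(100);

            Set productKeys = cache.getGroup("products");   // may be null if the group is unknown
            if (productKeys != null) {
                for (Iterator it = productKeys.iterator(); it.hasNext();) {
                    System.out.println("group member: " + it.next());
                }
            }
        }
    }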
diff --git a/src/java/com/opensymphony/oscache/base/algorithm/FIFOCache.java b/src/java/com/opensymphony/oscache/base/algorithm/FIFOCache.java
new file mode 100644
index 0000000..494ee03
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/algorithm/FIFOCache.java
@@ -0,0 +1,92 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.algorithm;
+
+import java.util.*;
+
+/**
+ * FIFO (First In First Out) based queue algorithm for the cache.
+ *
+ * No synchronization is required in this class since the
+ * AbstractConcurrentReadCache already takes care of any
+ * synchronization requirements.
+ *
+ * @version $Revision$
+ * @author Mike Cannon-Brookes
+ * @author Alain Bergevin
+ * @author Chris Miller
+ */
+public class FIFOCache extends AbstractConcurrentReadCache {
+
+ private static final long serialVersionUID = -10333778645392679L;
+
+ /**
+ * A queue containing all cache keys
+ */
+ private Collection list = new LinkedHashSet();
+
+ /**
+ * Constructs a FIFO Cache.
+ */
+ public FIFOCache() {
+ super();
+ }
+
+ /**
+ * Constructs a FIFO Cache of the specified capacity.
+ *
+ * @param capacity The maximum cache capacity.
+ */
+ public FIFOCache(int capacity) {
+ this();
+ maxEntries = capacity;
+ }
+
+ /**
+ * An object was retrieved from the cache. This implementation
+ * does nothing since this event has no impact on the FIFO algorithm.
+ *
+ * @param key The cache key of the item that was retrieved.
+ */
+ protected void itemRetrieved(Object key) {
+ }
+
+ /**
+ * An object was put in the cache. This implementation just adds
+ * the key to the end of the list if it doesn't exist in the list
+ * already.
+ *
+ * @param key The cache key of the item that was put.
+ */
+ protected void itemPut(Object key) {
+ if (!list.contains(key)) {
+ list.add(key);
+ }
+ }
+
+ /**
+ * An item needs to be removed from the cache. The FIFO implementation
+ * removes the first element in the list (ie, the item that has been in
+ * the cache for the longest time).
+ *
+ * @return The key of whichever item was removed.
+ */
+ protected Object removeItem() {
+ Iterator it = list.iterator();
+ Object toRemove = it.next();
+ it.remove();
+
+ return toRemove;
+ }
+
+ /**
+ * Remove specified key since that object has been removed from the cache.
+ *
+ * @param key The cache key of the item that was removed.
+ */
+ protected void itemRemoved(Object key) {
+ list.remove(key);
+ }
+}
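A quick illustration of the eviction order this gives (the capacity and keys below are arbitrary): once the cache is full, the next put evicts whichever key was inserted first.

    import com.opensymphony.oscache.base.algorithm.FIFOCache;

    public class FifoEvictionSketch {
        public static void main(String[] args) {
            FIFOCache cache = new FIFOCache(2);

            cache.put("first", "A");
            cache.put("second", "B");
            cache.put("third", "C");   // cache was full, so "first" (the oldest insertion) is evicted

            System.out.println(cache.containsKey("first"));   // expected: false
            System.out.println(cache.containsKey("third"));   // expected: true
        }
    }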
diff --git a/src/java/com/opensymphony/oscache/base/algorithm/LRUCache.java b/src/java/com/opensymphony/oscache/base/algorithm/LRUCache.java
new file mode 100644
index 0000000..630033f
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/algorithm/LRUCache.java
@@ -0,0 +1,156 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.algorithm;
+
+import java.util.*;
+
+/**
+ * LRU (Least Recently Used) algorithm for the cache, based on a
+ * LinkedHashSet. Use a prior OSCache release, which requires either the
+ * Jakarta commons-collections SequencedHashMap class or the
+ * LinkedList class, if the LinkedHashSet class is not available.
+ *
+ * No synchronization is required in this class since the
+ * AbstractConcurrentReadCache already takes care of any
+ * synchronization requirements.
+ */
+
+ /**
+ * Creates an unlimited cache by setting the maximum number of entries to UNLIMITED.
+ */
+ public UnlimitedCache() {
+ super();
+ maxEntries = UNLIMITED;
+ }
+
+ /**
+ * Overrides the setMaxEntries with an empty implementation.
+ * This property cannot be modified and is ignored for an
+ * UnlimitedCache.
+ */
+ public void setMaxEntries(int maxEntries) {
+ log.warn("Cache max entries can't be set in " + this.getClass().getName() + ", ignoring value " + maxEntries + ".");
+ }
+
+ /**
+ * Implements itemRetrieved with an empty implementation.
+ * The unlimited cache doesn't care that an item was retrieved.
+ */
+ protected void itemRetrieved(Object key) {
+ }
+
+ /**
+ * Implements itemPut with an empty implementation.
+ * The unlimited cache doesn't care that an item was put in the cache.
+ */
+ protected void itemPut(Object key) {
+ }
+
+ /**
+ * This method just returns null since items should
+ * never end up being removed from an unlimited cache!
+ */
+ protected Object removeItem() {
+ return null;
+ }
+
+ /**
+ * An empty implementation. The unlimited cache doesn't care that an
+ * item was removed.
+ */
+ protected void itemRemoved(Object key) {
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/base/algorithm/package.html b/src/java/com/opensymphony/oscache/base/algorithm/package.html
new file mode 100644
index 0000000..abf5529
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/algorithm/package.html
@@ -0,0 +1,37 @@
+
+
+
+
+
+
+
+Provides the classes that implement the caching algorithms used by OSCache, all of
+which are based on a derivative of Doug Lea's ConcurrentReaderHashMap.
+
+
+Package Specification
+
+Related Documentation
+
+
+For overviews, tutorials, examples, guides, and tool documentation, please see:
+
+
+For further information on Doug Lea's concurrency package, please see:
+
+
+
+
+
+
\ No newline at end of file
diff --git a/src/java/com/opensymphony/oscache/base/events/CacheEntryEvent.java b/src/java/com/opensymphony/oscache/base/events/CacheEntryEvent.java
new file mode 100644
index 0000000..32d5f46
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/events/CacheEntryEvent.java
@@ -0,0 +1,76 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+import com.opensymphony.oscache.base.Cache;
+import com.opensymphony.oscache.base.CacheEntry;
+
+/**
+ * CacheEntryEvent is the object created when an event occurs on a
+ * cache entry (Add, update, remove, flush). It contains the entry itself and
+ * its map.
+ *
+ * @version $Revision$
+ * @author Francois Beauregard
+ */
+public final class CacheEntryEvent extends CacheEvent {
+ /**
+ * The cache where the entry resides.
+ */
+ private Cache map = null;
+
+ /**
+ * The entry that the event applies to.
+ */
+ private CacheEntry entry = null;
+
+ /**
+ * Constructs a cache entry event object with no specified origin
+ *
+ * @param map The cache map of the cache entry
+ * @param entry The cache entry that the event applies to
+ */
+ public CacheEntryEvent(Cache map, CacheEntry entry) {
+ this(map, entry, null);
+ }
+
+ /**
+ * Constructs a cache entry event object
+ *
+ * @param map The cache map of the cache entry
+ * @param entry The cache entry that the event applies to
+ * @param origin The origin of this event
+ */
+ public CacheEntryEvent(Cache map, CacheEntry entry, String origin) {
+ super(origin);
+ this.map = map;
+ this.entry = entry;
+ }
+
+ /**
+ * Retrieve the cache entry that the event applies to.
+ */
+ public CacheEntry getEntry() {
+ return entry;
+ }
+
+ /**
+ * Retrieve the cache entry key
+ */
+ public String getKey() {
+ return entry.getKey();
+ }
+
+ /**
+ * Retrieve the cache map where the entry resides.
+ */
+ public Cache getMap() {
+ return map;
+ }
+
+ public String toString() {
+ return "key=" + entry.getKey();
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/base/events/CacheEntryEventListener.java b/src/java/com/opensymphony/oscache/base/events/CacheEntryEventListener.java
new file mode 100644
index 0000000..d67a108
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/events/CacheEntryEventListener.java
@@ -0,0 +1,55 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+
+/**
+ * This is the interface to listen to cache entry events. There is a method
+ * for each event type. These methods are called via a dispatcher. If you
+ * want to be notified when an event occurs on an entry, group or across a
+ * pattern, register a listener and implement this interface.
+ *
+ * @version $Revision$
+ * @author Francois Beauregard
+ */
+public interface CacheEntryEventListener extends CacheEventListener {
+ /**
+ * Event fired when an entry is added to the cache.
+ */
+ void cacheEntryAdded(CacheEntryEvent event);
+
+ /**
+ * Event fired when an entry is flushed from the cache.
+ */
+ void cacheEntryFlushed(CacheEntryEvent event);
+
+ /**
+ * Event fired when an entry is removed from the cache.
+ */
+ void cacheEntryRemoved(CacheEntryEvent event);
+
+ /**
+ * Event fired when an entry is updated in the cache.
+ */
+ void cacheEntryUpdated(CacheEntryEvent event);
+
+ /**
+ * Event fired when a group is flushed from the cache.
+ */
+ void cacheGroupFlushed(CacheGroupEvent event);
+
+ /**
+ * Event fired when a key pattern is flushed from the cache.
+ * Note that this event will not be fired if the pattern
+ * is null or an empty string; instead the flush
+ * request will silently be ignored.
+ */
+ void cachePatternFlushed(CachePatternEvent event);
+
+ /**
+ * An event that is fired when an entire cache gets flushed.
+ */
+ void cacheFlushed(CachewideEvent event);
+}
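A minimal sketch of an implementation of this interface, here simply counting additions and flushes; the class name, package and counters are illustrative and not part of OSCache:

    package com.opensymphony.oscache.example;            // hypothetical package

    import com.opensymphony.oscache.base.events.*;

    /** Illustrative listener that counts entry additions and flushes. */
    public class CountingEntryEventListener implements CacheEntryEventListener {
        private int added;
        private int flushed;

        public void cacheEntryAdded(CacheEntryEvent event)       { added++; }
        public void cacheEntryFlushed(CacheEntryEvent event)     { flushed++; }
        public void cacheEntryRemoved(CacheEntryEvent event)     { }
        public void cacheEntryUpdated(CacheEntryEvent event)     { }
        public void cacheGroupFlushed(CacheGroupEvent event)     { }
        public void cachePatternFlushed(CachePatternEvent event) { }
        public void cacheFlushed(CachewideEvent event)           { }

        public int getAddedCount()   { return added; }
        public int getFlushedCount() { return flushed; }
    }

Such a listener would typically be registered through the cache configuration (the cache.event.listeners property) rather than attached programmatically.
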
diff --git a/src/java/com/opensymphony/oscache/base/events/CacheEntryEventType.java b/src/java/com/opensymphony/oscache/base/events/CacheEntryEventType.java
new file mode 100644
index 0000000..16af701
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/events/CacheEntryEventType.java
@@ -0,0 +1,54 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+
+/**
+ * This is an enumeration of all the possible events that may occur on a cache
+ * entry or collection of cache entries.
+ */
+
+ /**
+ * Creates a cache event with no specified origin.
+ */
+ public CacheEvent() {
+ }
+
+ /**
+ * Creates a cache event object that came from the specified origin.
+ *
+ * @param origin A string that indicates where this event was fired from.
+ * This value is optional; null can be passed in if an
+ * origin is not required.
+ */
+ public CacheEvent(String origin) {
+ this.origin = origin;
+ }
+
+ /**
+ * Retrieves the origin of this event, if one was specified. This is most
+ * useful when an event handler causes another event to fire - by checking
+ * the origin the handler is able to prevent recursive events being
+ * fired.
+ */
+ public String getOrigin() {
+ return origin;
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/base/events/CacheEventListener.java b/src/java/com/opensymphony/oscache/base/events/CacheEventListener.java
new file mode 100644
index 0000000..3805691
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/events/CacheEventListener.java
@@ -0,0 +1,16 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+import java.util.EventListener;
+
+/**
+ * This is the base interface for cache events.
+ *
+ * @version $Revision$
+ * @author Chris Miller
+ */
+public interface CacheEventListener extends EventListener {
+}
diff --git a/src/java/com/opensymphony/oscache/base/events/CacheGroupEvent.java b/src/java/com/opensymphony/oscache/base/events/CacheGroupEvent.java
new file mode 100644
index 0000000..332740c
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/events/CacheGroupEvent.java
@@ -0,0 +1,71 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+import com.opensymphony.oscache.base.Cache;
+
+/**
+ * CacheGroupEvent is an event that occurs at the cache group level
+ * (Add, update, remove, flush). It contains the group name and the
+ * originating cache object.
+ *
+ * @version $Revision$
+ * @author Chris Miller
+ */
+public final class CacheGroupEvent extends CacheEvent {
+ /**
+ * The cache where the entry resides.
+ */
+ private Cache map = null;
+
+ /**
+ * The group that the event applies to.
+ */
+ private String group = null;
+
+ /**
+ * Constructs a cache group event with no origin
+ *
+ * @param map The cache map of the cache entry
+ * @param group The cache group that the event applies to.
+ */
+ public CacheGroupEvent(Cache map, String group) {
+ this(map, group, null);
+ }
+
+ /**
+ * Constructs a cache group event
+ *
+ * @param map The cache map of the cache entry
+ * @param group The cache group that the event applies to.
+ * @param origin An optional tag that can be attached to the event to
+ * specify the event's origin. This is useful to prevent events from being
+ * fired recursively in some situations, such as when an event handler
+ * causes another event to be fired.
+ */
+ public CacheGroupEvent(Cache map, String group, String origin) {
+ super(origin);
+ this.map = map;
+ this.group = group;
+ }
+
+ /**
+ * Retrieve the cache group that the event applies to.
+ */
+ public String getGroup() {
+ return group;
+ }
+
+ /**
+ * Retrieve the cache map where the group resides.
+ */
+ public Cache getMap() {
+ return map;
+ }
+
+ public String toString() {
+ return "groupName=" + group;
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/base/events/CacheMapAccessEvent.java b/src/java/com/opensymphony/oscache/base/events/CacheMapAccessEvent.java
new file mode 100644
index 0000000..ab5417b
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/events/CacheMapAccessEvent.java
@@ -0,0 +1,71 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+import com.opensymphony.oscache.base.CacheEntry;
+
+/**
+ * Cache map access event. This is the object created when an event occurs on a
+ * cache map (cache Hit, cache miss). It contains the entry that was referenced
+ * by the event and the event type.
+ *
+ * @version $Revision$
+ * @author Francois Beauregard
+ */
+public final class CacheMapAccessEvent extends CacheEvent {
+ /**
+ * The cache entry that the event applies to.
+ */
+ private CacheEntry entry = null;
+
+ /**
+ * Type of the event.
+ */
+ private CacheMapAccessEventType eventType = null;
+
+ /**
+ * CachewideEvent represents an event that occurs on the entire cache,
+ * e.g. a cache flush or clear.
+ *
+ * @version $Revision$
+ * @author Chris Miller
+ */
+public final class CachewideEvent extends CacheEvent {
+ /**
+ * The cache where the event occurred.
+ */
+ private Cache cache = null;
+
+ /**
+ * The date/time for when the flush is scheduled
+ */
+ private Date date = null;
+
+ /**
+ * Constructs a cachewide event with the specified origin.
+ *
+ * @param cache The cache map that the event occurred on.
+ * @param date The date/time that this cachewide event is scheduled for
+ * (eg, the date that the cache is to be flushed).
+ * @param origin An optional tag that can be attached to the event to
+ * specify the event's origin. This is useful to prevent events from being
+ * fired recursively in some situations, such as when an event handler
+ * causes another event to be fired.
+ */
+ public CachewideEvent(Cache cache, Date date, String origin) {
+ super(origin);
+ this.date = date;
+ this.cache = cache;
+ }
+
+ /**
+ * Retrieve the cache map that the event occurred on.
+ */
+ public Cache getCache() {
+ return cache;
+ }
+
+ /**
+ * Retrieve the date/time that the cache flush is scheduled for.
+ */
+ public Date getDate() {
+ return date;
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/base/events/CachewideEventType.java b/src/java/com/opensymphony/oscache/base/events/CachewideEventType.java
new file mode 100644
index 0000000..50a58a3
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/events/CachewideEventType.java
@@ -0,0 +1,26 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+
+/**
+ * This is an enumeration holding all the events that can
+ * occur at the cache-wide level.
+ *
+ * @author Chris Miller
+ */
+public class CachewideEventType {
+ /**
+ * Get an event type for a cache flush event.
+ */
+ public static final CachewideEventType CACHE_FLUSHED = new CachewideEventType();
+
+ /**
+ * Private constructor to ensure that no objects of this type are
+ * created externally.
+ */
+ private CachewideEventType() {
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/base/events/ScopeEvent.java b/src/java/com/opensymphony/oscache/base/events/ScopeEvent.java
new file mode 100644
index 0000000..a819e31
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/events/ScopeEvent.java
@@ -0,0 +1,78 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+import java.util.Date;
+
+/**
+ * A ScopeEvent is created when an event occurs across one or all scopes.
+ * This type of event is only applicable to the ServletCacheAdministrator.
+ *
+ * @version $Revision$
+ * @author Francois Beauregard
+ */
+public final class ScopeEvent extends CacheEvent {
+ /**
+ * Date that the event applies to.
+ */
+ private Date date = null;
+
+ /**
+ * Type of the event.
+ */
+ private ScopeEventType eventType = null;
+
+ /**
+ * Scope that applies to this event.
+ */
+ private int scope = 0;
+
+ /**
+ * Constructs a scope event object with no specified origin.
+ *
+ * @param eventType Type of the event.
+ * @param scope Scope that applies to the event.
+ * @param date Date that the event applies to.
+ */
+ public ScopeEvent(ScopeEventType eventType, int scope, Date date) {
+ this(eventType, scope, date, null);
+ }
+
+ /**
+ * Constructs a scope event object.
+ *
+ * @param eventType Type of the event.
+ * @param scope Scope that applies to the event.
+ * @param date Date that the event applies to.
+ * @param origin The origin of this event.
+ */
+ public ScopeEvent(ScopeEventType eventType, int scope, Date date, String origin) {
+ super(origin);
+ this.eventType = eventType;
+ this.scope = scope;
+ this.date = date;
+ }
+
+ /**
+ * Retrieve the event date
+ */
+ public Date getDate() {
+ return date;
+ }
+
+ /**
+ * Retrieve the type of the event.
+ */
+ public ScopeEventType getEventType() {
+ return eventType;
+ }
+
+ /**
+ * Retrieve the scope that applies to the event.
+ */
+ public int getScope() {
+ return scope;
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/base/events/ScopeEventListener.java b/src/java/com/opensymphony/oscache/base/events/ScopeEventListener.java
new file mode 100644
index 0000000..9354cd8
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/events/ScopeEventListener.java
@@ -0,0 +1,21 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+
+/**
+ * This is the interface to listen to scope events. The events are
+ * scope flushed and all scopes flushed, and are dispatched through this interface.
+ *
+ * @version $Revision$
+ * @author Francois Beauregard
+ */
+public interface ScopeEventListener extends CacheEventListener {
+ /**
+ * Event fired when a specific or all scopes are flushed.
+ * Use getEventType to differentiate between the two.
+ */
+ public void scopeFlushed(ScopeEvent event);
+}
diff --git a/src/java/com/opensymphony/oscache/base/events/ScopeEventType.java b/src/java/com/opensymphony/oscache/base/events/ScopeEventType.java
new file mode 100644
index 0000000..3a0ecea
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/events/ScopeEventType.java
@@ -0,0 +1,33 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+
+/**
+ * This is an enumeration of all the possible events that may occur
+ * at the scope level. Scope-level events are only relevant to the
+ * ServletCacheAdministrator.
+ *
+ * @version $Revision$
+ * @author Francois Beauregard
+ */
+public final class ScopeEventType {
+ /**
+ * Specifies an event type for the all scope flushed event.
+ */
+ public static final ScopeEventType ALL_SCOPES_FLUSHED = new ScopeEventType();
+
+ /**
+ * Specifies an event type for the flushing of a specific scope.
+ */
+ public static final ScopeEventType SCOPE_FLUSHED = new ScopeEventType();
+
+ /**
+ * Private constructor to ensure that no objects of this type are
+ * created externally.
+ */
+ private ScopeEventType() {
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/base/events/package.html b/src/java/com/opensymphony/oscache/base/events/package.html
new file mode 100644
index 0000000..07038f6
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/events/package.html
@@ -0,0 +1,32 @@
+
+
+
+
+
+
+
+Provides the base classes and interfaces that allow pluggable event handlers to be
+incorporated into OSCache.
+
+
+Package Specification
+
+Related Documentation
+
+
+For overviews, tutorials, examples, guides, and tool documentation, please see:
+
+
+
+
+
+
\ No newline at end of file
diff --git a/src/java/com/opensymphony/oscache/base/package.html b/src/java/com/opensymphony/oscache/base/package.html
new file mode 100644
index 0000000..6198c60
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/package.html
@@ -0,0 +1,31 @@
+
+
+
+
+
+
+
+Provides the base classes and interfaces that make up the core of OSCache.
+
+
+Package Specification
+
+Related Documentation
+
+
+For overviews, tutorials, examples, guides, and tool documentation, please see:
+
+
+
+
+
+
\ No newline at end of file
diff --git a/src/java/com/opensymphony/oscache/base/persistence/CachePersistenceException.java b/src/java/com/opensymphony/oscache/base/persistence/CachePersistenceException.java
new file mode 100644
index 0000000..391337c
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/persistence/CachePersistenceException.java
@@ -0,0 +1,33 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.persistence;
+
+
+/**
+ * Exception thrown when an error occurs in a PersistenceListener implementation.
+ *
+ * @version $Revision$
+ * @author Francois Beauregard
+ */
+public final class CachePersistenceException extends Exception {
+ /**
+ * Creates a new CachePersistenceException without a detail message.
+ */
+ public CachePersistenceException() {
+ }
+
+ /**
+ * Constructs a CachePersistenceException with the specified detail message.
+ *
+ * @param msg the detail message.
+ */
+ public CachePersistenceException(String msg) {
+ super(msg);
+ }
+
+ public CachePersistenceException(String message, Throwable cause) {
+ super(message, cause);
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/base/persistence/PersistenceListener.java b/src/java/com/opensymphony/oscache/base/persistence/PersistenceListener.java
new file mode 100644
index 0000000..932ea39
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/persistence/PersistenceListener.java
@@ -0,0 +1,96 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.persistence;
+
+import com.opensymphony.oscache.base.Config;
+
+import java.util.Set;
+
+/**
+ * Defines the methods that are required to persist cache data.
+ * To provide a custom persistence mechanism you should implement this
+ * interface and supply the fully-qualified classname to the cache via
+ * the cache.persistence.class configuration property.
+ *
+ * @version $Revision$
+ * @author Francois Beauregard
+ */
+public interface PersistenceListener {
+ /**
+ * Verify if an object is currently stored in the persistent cache.
+ *
+ * @param key The cache key of the object to check.
+ */
+ public boolean isStored(String key) throws CachePersistenceException;
+
+ /**
+ * Verify if a group is currently stored in the persistent cache.
+ *
+ * @param groupName The name of the group to check.
+ */
+ public boolean isGroupStored(String groupName) throws CachePersistenceException;
+
+ /**
+ * Clear the entire persistent cache (including the root)
+ */
+ public void clear() throws CachePersistenceException;
+
+ /**
+ * Allow the persistence code to initialize itself based on the supplied
+ * cache configuration.
+ */
+ public PersistenceListener configure(Config config);
+
+ /**
+ * Removes an object from the persistent cache
+ */
+ public void remove(String key) throws CachePersistenceException;
+
+ /**
+ * Removes a group from the persistent cache.
+ *
+ * @param groupName The name of the cache group to remove.
+ */
+ public void removeGroup(String groupName) throws CachePersistenceException;
+
+ /**
+ * Retrieves an object from the persistent cache.
+ *
+ * @param key The unique cache key that maps to the object
+ * being retrieved.
+ * @return The object, or null if no object was found
+ * matching the supplied key.
+ */
+ public Object retrieve(String key) throws CachePersistenceException;
+
+ /**
+ * Stores an object in the persistent cache.
+ *
+ * @param key The key to uniquely identify this object.
+ * @param obj The object to persist. Most implementations
+ * of this interface will require that this object implements
+ * Serializable.
+ */
+ public void store(String key, Object obj) throws CachePersistenceException;
+
+ /**
+ * Stores a group in the persistent cache.
+ *
+ * @param groupName The name of the group to persist.
+ * @param group A set containing the keys of all the CacheEntry
+ * objects that belong to this group.
+ */
+ public void storeGroup(String groupName, Set group) throws CachePersistenceException;
+
+ /**
+ * Retrieves a group from the persistent cache.
+ *
+ * @param groupName The name of the group to retrieve.
+ * @return The returned set should contain the keys
+ * of all the CacheEntry objects that belong
+ * to this group.
+ */
+ Set retrieveGroup(String groupName) throws CachePersistenceException;
+}
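A minimal sketch of an implementation, using an in-memory map instead of real persistent storage; everything below (package, class name, fields) is illustrative rather than part of OSCache:

    package com.opensymphony.oscache.example;   // hypothetical package

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    import com.opensymphony.oscache.base.Config;
    import com.opensymphony.oscache.base.persistence.PersistenceListener;

    /** Illustrative in-memory back end; a real listener would write to disk or a database. */
    public class MapPersistenceListener implements PersistenceListener {
        private final Map entries = new HashMap();
        private final Map groups = new HashMap();

        public PersistenceListener configure(Config config) {
            // A real implementation would read its settings (e.g. a storage path) from config here.
            return this;
        }

        public boolean isStored(String key)            { return entries.containsKey(key); }
        public boolean isGroupStored(String groupName) { return groups.containsKey(groupName); }

        public void clear()                       { entries.clear(); groups.clear(); }
        public void remove(String key)            { entries.remove(key); }
        public void removeGroup(String groupName) { groups.remove(groupName); }

        public Object retrieve(String key)        { return entries.get(key); }
        public void store(String key, Object obj) { entries.put(key, obj); }

        public Set retrieveGroup(String groupName) {
            Set group = (Set) groups.get(groupName);
            return (group == null) ? new HashSet() : group;
        }

        public void storeGroup(String groupName, Set group) { groups.put(groupName, group); }
    }

The listener would then be selected with the cache.persistence.class configuration property mentioned in the interface Javadoc.
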
diff --git a/src/java/com/opensymphony/oscache/base/persistence/package.html b/src/java/com/opensymphony/oscache/base/persistence/package.html
new file mode 100644
index 0000000..a75d087
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/base/persistence/package.html
@@ -0,0 +1,31 @@
+
+
+
+
+
+
+
+Provides the interfaces that provide persistence storage of cached objects.
+
+
+Package Specification
+
+Related Documentation
+
+
+For overviews, tutorials, examples, guides, and tool documentation, please see:
+
+
+
+
+
+
\ No newline at end of file
diff --git a/src/java/com/opensymphony/oscache/extra/CacheEntryEventListenerImpl.java b/src/java/com/opensymphony/oscache/extra/CacheEntryEventListenerImpl.java
new file mode 100644
index 0000000..c9a4fa6
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/extra/CacheEntryEventListenerImpl.java
@@ -0,0 +1,195 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.extra;
+
+import com.opensymphony.oscache.base.events.*;
+
+/**
+ * Implementation of a CacheEntryEventListener. It uses the events to count
+ * the operations performed on the cache.
+ * Package Specification
+
+Related Documentation
+
+
+For overviews, tutorials, examples, guides, and tool documentation, please see:
+
+
+
+
+
+
\ No newline at end of file
diff --git a/src/java/com/opensymphony/oscache/general/GeneralCacheAdministrator.java b/src/java/com/opensymphony/oscache/general/GeneralCacheAdministrator.java
new file mode 100644
index 0000000..8196b1f
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/general/GeneralCacheAdministrator.java
@@ -0,0 +1,307 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.general;
+
+import com.opensymphony.oscache.base.*;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import java.util.Date;
+import java.util.Properties;
+
+/**
+ * A GeneralCacheAdministrator creates, flushes and administers the cache.
+ *
+ * EXAMPLES :
+ *
+ *
+ * @version $Revision$
+ * @author Francois Beauregard
+ * @author Alain Bergevin
+ */
+public class GeneralCacheAdministrator extends AbstractCacheAdministrator {
+ private static transient final Log log = LogFactory.getLog(GeneralCacheAdministrator.class);
+
+ /**
+ * Application cache
+ */
+ private Cache applicationCache = null;
+
+ /**
+ * Create the cache administrator.
+ */
+ public GeneralCacheAdministrator() {
+ this(null);
+ }
+
+ /**
+ * Create the cache administrator with the specified properties
+ */
+ public GeneralCacheAdministrator(Properties p) {
+ super(p);
+ log.info("Constructed GeneralCacheAdministrator()");
+ createCache();
+ }
+
+ /**
+ * Grabs a cache
+ *
+ * @return The cache
+ */
+ public Cache getCache() {
+ return applicationCache;
+ }
+
+ /**
+ * Remove an object from the cache
+ *
+ * @param key The key entered by the user.
+ */
+ public void removeEntry(String key) {
+ getCache().removeEntry(key);
+ }
+ /**
+ * Get an object from the cache
+ *
+ * @param key The key entered by the user.
+ * @return The object from cache
+ * @throws NeedsRefreshException when no cache entry could be found with the
+ * supplied key, or when an entry was found but is considered out of date. If
+ * the cache entry is a new entry that is currently being constructed this method
+ * will block until the new entry becomes available. Similarly, it will block if
+ * a stale entry is currently being rebuilt by another thread and cache blocking is
+ * enabled (
+ * // ---------------------------------------------------------------
+ * // Typical use with fail over
+ * // ---------------------------------------------------------------
+ * String myKey = "myKey";
+ * String myValue;
+ * int myRefreshPeriod = 1000;
+ * try {
+ * // Get from the cache
+ * myValue = (String) admin.getFromCache(myKey, myRefreshPeriod);
+ * } catch (NeedsRefreshException nre) {
+ * try {
+ * // Get the value (probably by calling an EJB)
+ * myValue = "This is the content retrieved.";
+ * // Store in the cache
+ * admin.putInCache(myKey, myValue);
+ * } catch (Exception ex) {
+ * // We have the current content if we want fail-over.
+ * myValue = (String) nre.getCacheContent();
+ * // It is essential that cancelUpdate is called if the
+ * // cached content is not rebuilt
+ * admin.cancelUpdate(myKey);
+ * }
+ * }
+ *
+ *
+ *
+ * // ---------------------------------------------------------------
+ * // Typical use without fail over
+ * // ---------------------------------------------------------------
+ * String myKey = "myKey";
+ * String myValue;
+ * int myRefreshPeriod = 1000;
+ * try {
+ * // Get from the cache
+ * myValue = (String) admin.getFromCache(myKey, myRefreshPeriod);
+ * } catch (NeedsRefreshException nre) {
+ * try {
+ * // Get the value (probably by calling an EJB)
+ * myValue = "This is the content retrieved.";
+ * // Store in the cache
+ * admin.putInCache(myKey, myValue);
+ * updated = true;
+ * } finally {
+ * if (!updated) {
+ * // It is essential that cancelUpdate is called if the
+ * // cached content could not be rebuilt
+ * admin.cancelUpdate(myKey);
+ * }
+ * }
+ * }
+ * // ---------------------------------------------------------------
+ * // ---------------------------------------------------------------
+ *
+ */
+ public Object getFromCache(String key) throws NeedsRefreshException {
+ return getCache().getFromCache(key);
+ }
+
+ /**
+ * Get an object from the cache
+ *
+ * @param key The key entered by the user.
+ * @param refreshPeriod How long the object can stay in cache in seconds. To
+ * allow the entry to stay in the cache indefinitely, supply a value of
+ * {@link CacheEntry#INDEFINITE_EXPIRY}
+ * @return The object from cache
+ * @throws NeedsRefreshException when no cache entry could be found with the
+ * supplied key, or when an entry was found but is considered out of date. If
+ * the cache entry is a new entry that is currently being constructed this method
+ * will block until the new entry becomes available. Similarly, it will block if
+ * a stale entry is currently being rebuilt by another thread and cache blocking is
+ * enabled (cache.blocking=true).
+ */
+ public Object getFromCache(String key, int refreshPeriod) throws NeedsRefreshException {
+ return getCache().getFromCache(key, refreshPeriod);
+ }
+
+ /**
+ * Get an object from the cache
+ *
+ * @param key The key entered by the user.
+ * @param refreshPeriod How long the object can stay in cache in seconds. To
+ * allow the entry to stay in the cache indefinitely, supply a value of
+ * {@link CacheEntry#INDEFINITE_EXPIRY}
+ * @param cronExpression A cron expression that the age of the cache entry
+ * will be compared to. If the entry is older than the most recent match for the
+ * cron expression, the entry will be considered stale.
+ * @return The object from cache
+ * @throws NeedsRefreshException when no cache entry could be found with the
+ * supplied key, or when an entry was found but is considered out of date. If
+ * the cache entry is a new entry that is currently being constructed this method
+ * will block until the new entry becomes available. Similarly, it will block if
+ * a stale entry is currently being rebuilt by another thread and cache blocking is
+ * enabled (cache.blocking=true).
+ */
+ public Object getFromCache(String key, int refreshPeriod, String cronExpression) throws NeedsRefreshException {
+ return getCache().getFromCache(key, refreshPeriod, cronExpression);
+ }
+
+ /**
+ * Cancels a pending cache update. This should only be called by a thread
+ * that received a {@link NeedsRefreshException} and was unable to generate
+ * some new cache content.
+ *
+ * @param key The cache entry key to cancel the update of.
+ */
+ public void cancelUpdate(String key) {
+ getCache().cancelUpdate(key);
+ }
+
+ /**
+ * Shuts down the cache administrator.
+ */
+ public void destroy() {
+ finalizeListeners(applicationCache);
+ }
+
+ // METHODS THAT DELEGATES TO THE CACHE ---------------------
+
+ /**
+ * Flush the entire cache immediately.
+ */
+ public void flushAll() {
+ getCache().flushAll(new Date());
+ }
+
+ /**
+ * Flush the entire cache at the given date.
+ *
+ * @param date The time to flush
+ */
+ public void flushAll(Date date) {
+ getCache().flushAll(date);
+ }
+
+ /**
+ * Flushes a single cache entry.
+ */
+ public void flushEntry(String key) {
+ getCache().flushEntry(key);
+ }
+
+ /**
+ * Flushes all items that belong to the specified group.
+ *
+ * @param group The name of the group to flush
+ */
+ public void flushGroup(String group) {
+ getCache().flushGroup(group);
+ }
+
+ /**
+ * Flushes all items that have the specified pattern in their key.
+ *
+ * @param pattern Pattern.
+ * @deprecated For performance and flexibility reasons it is preferable to
+ * store cache entries in groups and use the {@link #flushGroup(String)} method
+ * instead of relying on pattern flushing.
+ */
+ public void flushPattern(String pattern) {
+ getCache().flushPattern(pattern);
+ }
+
+ /**
+ * Put an object in a cache
+ *
+ * @param key The key entered by the user
+ * @param content The object to store
+ * @param policy Object that implements refresh policy logic
+ */
+ public void putInCache(String key, Object content, EntryRefreshPolicy policy) {
+ Cache cache = getCache();
+ cache.putInCache(key, content, policy);
+ }
+
+ /**
+ * Put an object in a cache
+ *
+ * @param key The key entered by the user
+ * @param content The object to store
+ */
+ public void putInCache(String key, Object content) {
+ putInCache(key, content, (EntryRefreshPolicy) null);
+ }
+
+ /**
+ * Puts an object in a cache
+ *
+ * @param key The unique key for this cached object
+ * @param content The object to store
+ * @param groups The groups that this object belongs to
+ */
+ public void putInCache(String key, Object content, String[] groups) {
+ getCache().putInCache(key, content, groups);
+ }
+
+ /**
+ * Puts an object in a cache
+ *
+ * @param key The unique key for this cached object
+ * @param content The object to store
+ * @param groups The groups that this object belongs to
+ * @param policy The refresh policy to use
+ */
+ public void putInCache(String key, Object content, String[] groups, EntryRefreshPolicy policy) {
+ getCache().putInCache(key, content, groups, policy, null);
+ }
+
+ /**
+ * Sets the cache capacity (number of items). If the cache contains
+ * more than capacity items, then items will be removed
+ * to bring the cache back down to the new size.
+ *
+ * @param capacity The new capacity of the cache
+ */
+ public void setCacheCapacity(int capacity) {
+ super.setCacheCapacity(capacity);
+ getCache().setCapacity(capacity);
+ }
+
+ /**
+ * Creates a cache in this admin
+ */
+ private void createCache() {
+ log.info("Creating new cache");
+
+ applicationCache = new Cache(isMemoryCaching(), isUnlimitedDiskCache(), isOverflowPersistence(), isBlocking(), algorithmClass, cacheCapacity);
+
+ configureStandardListeners(applicationCache);
+ }
+}
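Besides the fail-over pattern shown in the Javadoc above, the group-related delegates are worth a short illustration. A minimal sketch, with made-up key and group names:

    import com.opensymphony.oscache.general.GeneralCacheAdministrator;

    public class GroupFlushExample {                       // hypothetical example class
        public static void main(String[] args) {
            GeneralCacheAdministrator admin = new GeneralCacheAdministrator();

            // Store an entry and tag it with one or more groups.
            admin.putInCache("product:42", "Product 42 details",
                             new String[] {"products", "catalogue"});

            // Invalidate every entry that belongs to the "products" group.
            admin.flushGroup("products");

            // Release the cache's listeners when the application shuts down.
            admin.destroy();
        }
    }
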
diff --git a/src/java/com/opensymphony/oscache/general/package.html b/src/java/com/opensymphony/oscache/general/package.html
new file mode 100644
index 0000000..fd122ac
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/general/package.html
@@ -0,0 +1,31 @@
+
+
+
+
+
+
+
+Provides a generic administrator class for the cache.
+
+
+Package Specification
+
+Related Documentation
+
+
+For overviews, tutorials, examples, guides, and tool documentation, please see:
+
+
+
+
+
+
\ No newline at end of file
diff --git a/src/java/com/opensymphony/oscache/hibernate/OSCache.java b/src/java/com/opensymphony/oscache/hibernate/OSCache.java
new file mode 100644
index 0000000..2b0e9be
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/hibernate/OSCache.java
@@ -0,0 +1,161 @@
+package com.opensymphony.oscache.hibernate;
+
+import java.util.Map;
+
+import org.hibernate.cache.Cache;
+import org.hibernate.cache.CacheException;
+import org.hibernate.cache.Timestamper;
+
+import com.opensymphony.oscache.base.NeedsRefreshException;
+import com.opensymphony.oscache.general.GeneralCacheAdministrator;
+
+/**
+ * Cache plugin for Hibernate 3.2 and OpenSymphony OSCache 2.4.
+ *
+ * The OSCache implementation assumes that identifiers have well-behaved toString() methods.
+ * This implementation must be threadsafe.
+ *
+ * @version $Revision:$
+ */
+public class OSCache implements Cache {
+
+ /** The OSCache 2.4 cache administrator. */
+ private GeneralCacheAdministrator cache;
+ private final int refreshPeriod;
+ private final String cron;
+ private final String regionName;
+ private final String[] regionGroups;
+
+ public OSCache(GeneralCacheAdministrator cache, int refreshPeriod, String cron, String region) {
+ this.cache = cache;
+ this.refreshPeriod = refreshPeriod;
+ this.cron = cron;
+ this.regionName = region;
+ this.regionGroups = new String[] {region};
+ }
+
+ /**
+ * @see org.hibernate.cache.Cache#get(java.lang.Object)
+ */
+ public Object get(Object key) throws CacheException {
+ try {
+ return cache.getFromCache( toString(key), refreshPeriod, cron );
+ }
+ catch (NeedsRefreshException e) {
+ cache.cancelUpdate( toString(key) );
+ return null;
+ }
+ }
+
+ /**
+ * @see org.hibernate.cache.Cache#put(java.lang.Object, java.lang.Object)
+ */
+ public void put(Object key, Object value) throws CacheException {
+ cache.putInCache( toString(key), value, regionGroups );
+ }
+
+ /**
+ * @see org.hibernate.cache.Cache#remove(java.lang.Object)
+ */
+ public void remove(Object key) throws CacheException {
+ cache.flushEntry( toString(key) );
+ }
+
+ /**
+ * @see org.hibernate.cache.Cache#clear()
+ */
+ public void clear() throws CacheException {
+ cache.flushGroup(regionName);
+ }
+
+ /**
+ * @see org.hibernate.cache.Cache#destroy()
+ */
+ public void destroy() throws CacheException {
+ synchronized (cache) {
+ cache.destroy();
+ }
+ }
+
+ /**
+ * @see org.hibernate.cache.Cache#lock(java.lang.Object)
+ */
+ public void lock(Object key) throws CacheException {
+ // local cache, so we use synchronization
+ }
+
+ /**
+ * @see org.hibernate.cache.Cache#unlock(java.lang.Object)
+ */
+ public void unlock(Object key) throws CacheException {
+ // local cache, so we use synchronization
+ }
+
+ /**
+ * @see org.hibernate.cache.Cache#nextTimestamp()
+ */
+ public long nextTimestamp() {
+ return Timestamper.next();
+ }
+
+ /**
+ * @see org.hibernate.cache.Cache#getTimeout()
+ */
+ public int getTimeout() {
+ return Timestamper.ONE_MS * 60000; //ie. 60 seconds
+ }
+
+ /**
+ * @see org.hibernate.cache.Cache#toMap()
+ */
+ public Map toMap() {
+ throw new UnsupportedOperationException();
+ }
+
+ /**
+ * @see org.hibernate.cache.Cache#getElementCountOnDisk()
+ */
+ public long getElementCountOnDisk() {
+ return -1;
+ }
+
+ /**
+ * @see org.hibernate.cache.Cache#getElementCountInMemory()
+ */
+ public long getElementCountInMemory() {
+ return -1;
+ }
+
+ /**
+ * @see org.hibernate.cache.Cache#getSizeInMemory()
+ */
+ public long getSizeInMemory() {
+ return -1;
+ }
+
+ /**
+ * @see org.hibernate.cache.Cache#getRegionName()
+ */
+ public String getRegionName() {
+ return regionName;
+ }
+
+ /**
+ * @see org.hibernate.cache.Cache#update(java.lang.Object, java.lang.Object)
+ */
+ public void update(Object key, Object value) throws CacheException {
+ put(key, value);
+ }
+
+ /**
+ * @see org.hibernate.cache.Cache#read(java.lang.Object)
+ */
+ public Object read(Object key) throws CacheException {
+ return get(key);
+ }
+
+ private String toString(Object key) {
+ return String.valueOf(key) + "." + regionName;
+ }
+
+}
diff --git a/src/java/com/opensymphony/oscache/hibernate/OSCacheProvider.java b/src/java/com/opensymphony/oscache/hibernate/OSCacheProvider.java
new file mode 100644
index 0000000..3f4ae85
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/hibernate/OSCacheProvider.java
@@ -0,0 +1,123 @@
+package com.opensymphony.oscache.hibernate;
+
+import java.util.Properties;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.hibernate.cache.Cache;
+import org.hibernate.cache.CacheException;
+import org.hibernate.cache.CacheProvider;
+import org.hibernate.cache.Timestamper;
+import org.hibernate.util.StringHelper;
+
+import com.opensymphony.oscache.base.CacheEntry;
+import com.opensymphony.oscache.base.Config;
+import com.opensymphony.oscache.general.GeneralCacheAdministrator;
+import com.opensymphony.oscache.util.StringUtil;
+
+/**
+ * Cache provider plugin for Hibernate 3.2 and OpenSymphony OSCache 2.4.
+ *
+ * This implementation assumes that identifiers have well-behaved toString() methods.
+ *
+ * To enable OSCache as Hibernate's second-level cache, add the following line to Hibernate's configuration (e.g. hibernate.cfg.xml):
+ * hibernate.cache.provider_class=com.opensymphony.oscache.hibernate.OSCacheProvider
+ * To use a different OSCache configuration file, set the following parameter in Hibernate's configuration:
+ * com.opensymphony.oscache.configurationResourceName=[path to oscache-hibernate.properties]
+ *
+ * @version $Revision:$
+ */
+public class OSCacheProvider implements CacheProvider {
+
+ private static final Log LOG = LogFactory.getLog(OSCacheProvider.class);
+
+ /** The Hibernate configuration property that specifies the location of the OSCache configuration file. */
+ public static final String OSCACHE_CONFIGURATION_RESOURCE_NAME = "com.opensymphony.oscache.configurationResourceName";
+
+ /** The OSCache refresh period property suffix. */
+ public static final String OSCACHE_REFRESH_PERIOD = "refresh.period";
+
+ /** The OSCache CRON expression property suffix. */
+ public static final String OSCACHE_CRON = "cron";
+
+ private static GeneralCacheAdministrator cache;
+
+ /**
+ * Builds a new {@link Cache} instance, and gets its properties from the
+ * {@link GeneralCacheAdministrator}, which reads the properties file
+ * (oscache.properties) in the start method:
+ * @see com.opensymphony.oscache.hibernate.OSCacheProvider#start(java.util.Properties)
+ *
+ * @param region the region of the cache
+ * @param properties not used
+ * @return the hibernate 2nd level cache
+ * @throws CacheException
+ *
+ * @see org.hibernate.cache.CacheProvider#buildCache(java.lang.String, java.util.Properties)
+ */
+ public Cache buildCache(String region, Properties properties) throws CacheException {
+ if (cache != null) {
+ LOG.debug("building cache in OSCacheProvider...");
+
+ String refreshPeriodString = cache.getProperty( StringHelper.qualify(region, OSCACHE_REFRESH_PERIOD) );
+ int refreshPeriod = refreshPeriodString==null ? CacheEntry.INDEFINITE_EXPIRY : Integer.parseInt( refreshPeriodString.trim() );
+
+ String cron = cache.getProperty( StringHelper.qualify(region, OSCACHE_CRON) );
+
+ return new OSCache(cache, refreshPeriod, cron, region);
+ }
+ throw new CacheException("OSCache was stopped or wasn't configured via method start.");
+ }
+
+ /**
+ * @see org.hibernate.cache.CacheProvider#nextTimestamp()
+ */
+ public long nextTimestamp() {
+ return Timestamper.next();
+ }
+
+ /**
+ * This method isn't documented in Hibernate:
+ * @see org.hibernate.cache.CacheProvider#isMinimalPutsEnabledByDefault()
+ */
+ public boolean isMinimalPutsEnabledByDefault() {
+ return false;
+ }
+
+ /**
+ * @see org.hibernate.cache.CacheProvider#stop()
+ */
+ public void stop() {
+ if (cache != null) {
+ LOG.debug("Stopping OSCacheProvider...");
+ cache.destroy();
+ cache = null;
+ LOG.debug("OSCacheProvider stopped.");
+ }
+ }
+
+ /**
+ * @see org.hibernate.cache.CacheProvider#start(java.util.Properties)
+ */
+ public void start(Properties hibernateSystemProperties) throws CacheException {
+ if (cache == null) {
+ // construct the cache
+ LOG.debug("Starting OSCacheProvider...");
+ String configResourceName = null;
+ if (hibernateSystemProperties != null) {
+ configResourceName = (String) hibernateSystemProperties.get(OSCACHE_CONFIGURATION_RESOURCE_NAME);
+ }
+ if (StringUtil.isEmpty(configResourceName)) {
+ cache = new GeneralCacheAdministrator();
+ } else {
+ Properties propertiesOSCache = Config.loadProperties(configResourceName, this.getClass().getName());
+ cache = new GeneralCacheAdministrator(propertiesOSCache);
+ }
+ LOG.debug("OSCacheProvider started.");
+ } else {
+ LOG.warn("Tried to restart OSCacheProvider, which is already running.");
+ }
+ }
+
+}
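Putting the provider and the cache adapter together, enabling OSCache as Hibernate's second-level cache is purely a configuration exercise. A minimal hibernate.properties sketch, using the property names quoted in the Javadoc above (the oscache-hibernate.properties path is illustrative):

    hibernate.cache.provider_class=com.opensymphony.oscache.hibernate.OSCacheProvider
    hibernate.cache.use_second_level_cache=true
    # Optional: point OSCache at a dedicated configuration file
    com.opensymphony.oscache.configurationResourceName=oscache-hibernate.properties
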
diff --git a/src/java/com/opensymphony/oscache/hibernate/package.html b/src/java/com/opensymphony/oscache/hibernate/package.html
new file mode 100644
index 0000000..de133f6
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/hibernate/package.html
@@ -0,0 +1,31 @@
+
+
+
+
+
+
+
+Provides Hibernate 3.2 classes for OSCache.
+
+
+Package Specification
+
+Related Documentation
+
+
+For overviews, tutorials, examples, guides, and tool documentation, please see:
+
+
+
+
+
+
\ No newline at end of file
diff --git a/src/java/com/opensymphony/oscache/plugins/clustersupport/AbstractBroadcastingListener.java b/src/java/com/opensymphony/oscache/plugins/clustersupport/AbstractBroadcastingListener.java
new file mode 100644
index 0000000..9f648c8
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/plugins/clustersupport/AbstractBroadcastingListener.java
@@ -0,0 +1,177 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.plugins.clustersupport;
+
+import com.opensymphony.oscache.base.*;
+import com.opensymphony.oscache.base.events.*;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import java.util.Date;
+
+/**
+ * Implementation of a CacheEntryEventListener. It broadcasts the flush events
+ * across a cluster to other listening caches. Note that this listener cannot
+ * be used in conjection with session caches.
+ *
+ * @version $Revision$
+ * @author Chris Miller
+ */
+public abstract class AbstractBroadcastingListener implements CacheEntryEventListener, LifecycleAware {
+ private final static Log log = LogFactory.getLog(AbstractBroadcastingListener.class);
+
+ /**
+ * The name to use for the origin of cluster events. Using this ensures
+ * events are not fired recursively back over the cluster.
+ */
+ protected static final String CLUSTER_ORIGIN = "CLUSTER";
+ protected Cache cache = null;
+
+ public AbstractBroadcastingListener() {
+ if (log.isInfoEnabled()) {
+ log.info("AbstractBroadcastingListener registered");
+ }
+ }
+
+ /**
+ * Event fired when an entry is flushed from the cache. This broadcasts
+ * the flush message to any listening nodes on the network.
+ */
+ public void cacheEntryFlushed(CacheEntryEvent event) {
+ if (!Cache.NESTED_EVENT.equals(event.getOrigin()) && !CLUSTER_ORIGIN.equals(event.getOrigin())) {
+ if (log.isDebugEnabled()) {
+ log.debug("cacheEntryFlushed called (" + event + ")");
+ }
+
+ sendNotification(new ClusterNotification(ClusterNotification.FLUSH_KEY, event.getKey()));
+ }
+ }
+
+ /**
+ * Event fired when a group is flushed from the cache. This broadcasts
+ * the flush to any listening nodes on the network, as long as
+ * this event wasn't from a broadcast in the first place.
+ */
+ public void cacheGroupFlushed(CacheGroupEvent event) {
+ if (!Cache.NESTED_EVENT.equals(event.getOrigin()) && !CLUSTER_ORIGIN.equals(event.getOrigin())) {
+ if (log.isDebugEnabled()) {
+ log.debug("cacheGroupFushed called (" + event + ")");
+ }
+
+ sendNotification(new ClusterNotification(ClusterNotification.FLUSH_GROUP, event.getGroup()));
+ }
+ }
+
+ public void cachePatternFlushed(CachePatternEvent event) {
+ if (!Cache.NESTED_EVENT.equals(event.getOrigin()) && !CLUSTER_ORIGIN.equals(event.getOrigin())) {
+ if (log.isDebugEnabled()) {
+ log.debug("cachePatternFushed called (" + event + ")");
+ }
+
+ sendNotification(new ClusterNotification(ClusterNotification.FLUSH_PATTERN, event.getPattern()));
+ }
+ }
+
+ public void cacheFlushed(CachewideEvent event) {
+ if (!Cache.NESTED_EVENT.equals(event.getOrigin()) && !CLUSTER_ORIGIN.equals(event.getOrigin())) {
+ if (log.isDebugEnabled()) {
+ log.debug("cacheFushed called (" + event + ")");
+ }
+
+ sendNotification(new ClusterNotification(ClusterNotification.FLUSH_CACHE, event.getDate()));
+ }
+ }
+
+ // --------------------------------------------------------
+ // The remaining events are of no interest to this listener
+ // --------------------------------------------------------
+ public void cacheEntryAdded(CacheEntryEvent event) {
+ }
+
+ public void cacheEntryRemoved(CacheEntryEvent event) {
+ }
+
+ public void cacheEntryUpdated(CacheEntryEvent event) {
+ }
+
+ public void cacheGroupAdded(CacheGroupEvent event) {
+ }
+
+ public void cacheGroupEntryAdded(CacheGroupEvent event) {
+ }
+
+ public void cacheGroupEntryRemoved(CacheGroupEvent event) {
+ }
+
+ public void cacheGroupRemoved(CacheGroupEvent event) {
+ }
+
+ public void cacheGroupUpdated(CacheGroupEvent event) {
+ }
+
+ /**
+ * Called by the cache administrator class when a cache is instantiated.
+ *
+ * @param cache the cache instance that this listener is attached to.
+ * @param config The cache's configuration details. This allows the event handler
+ * to initialize itself based on the cache settings, and also to receive additional
+ * settings that were part of the cache configuration but that the cache
+ * itself does not care about. If you are using cache.properties
+ * for your configuration, simply add any additional properties that your event
+ * handler requires and they will be passed through in this parameter.
+ *
+ * @throws InitializationException thrown when there was a problem initializing the
+ * listener. The cache administrator will log this error and disable the listener.
+ */
+ public void initialize(Cache cache, Config config) throws InitializationException {
+ this.cache = cache;
+ }
+
+ /**
+ * Handles incoming notification messages. This method should be called by the
+ * underlying broadcasting implementation when a message is received from another
+ * node in the cluster.
+ *
+ * @param message The incoming cluster notification message object.
+ */
+ public void handleClusterNotification(ClusterNotification message) {
+ if (cache == null) {
+ log.warn("A cluster notification (" + message + ") was received, but no cache is registered on this machine. Notification ignored.");
+
+ return;
+ }
+
+ if (log.isInfoEnabled()) {
+ log.info("Cluster notification (" + message + ") was received.");
+ }
+
+ switch (message.getType()) {
+ case ClusterNotification.FLUSH_KEY:
+ cache.flushEntry((String) message.getData(), CLUSTER_ORIGIN);
+ break;
+ case ClusterNotification.FLUSH_GROUP:
+ cache.flushGroup((String) message.getData(), CLUSTER_ORIGIN);
+ break;
+ case ClusterNotification.FLUSH_PATTERN:
+ cache.flushPattern((String) message.getData(), CLUSTER_ORIGIN);
+ break;
+ case ClusterNotification.FLUSH_CACHE:
+ cache.flushAll((Date) message.getData(), CLUSTER_ORIGIN);
+ break;
+ default:
+ log.error("The cluster notification (" + message + ") is of an unknown type. Notification ignored.");
+ }
+ }
+
+ /**
+ * Called when a cluster notification message is to be broadcast. Implementing
+ * classes should use their underlying transport to broadcast the message across
+ * the cluster.
+ *
+ * @param message The notification message to broadcast.
+ */
+ abstract protected void sendNotification(ClusterNotification message);
+}
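A minimal sketch of a concrete subclass, with the actual transport stubbed out; the class below is illustrative and not part of OSCache:

    package com.opensymphony.oscache.example;   // hypothetical package

    import com.opensymphony.oscache.base.FinalizationException;
    import com.opensymphony.oscache.plugins.clustersupport.AbstractBroadcastingListener;
    import com.opensymphony.oscache.plugins.clustersupport.ClusterNotification;

    /** Illustrative broadcaster that only logs notifications instead of sending them. */
    public class LoggingBroadcastingListener extends AbstractBroadcastingListener {

        /** A real implementation would hand the message to JMS, JavaGroups, etc. */
        protected void sendNotification(ClusterNotification message) {
            System.out.println("Would broadcast: " + message);
        }

        /** Nothing to shut down for this stubbed transport (the spelling matches OSCache's LifecycleAware interface). */
        public void finialize() throws FinalizationException {
        }
    }

When the underlying transport receives a message from another node, the subclass is expected to pass it to handleClusterNotification(), which applies the corresponding flush to the local cache.
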
diff --git a/src/java/com/opensymphony/oscache/plugins/clustersupport/ClusterNotification.java b/src/java/com/opensymphony/oscache/plugins/clustersupport/ClusterNotification.java
new file mode 100644
index 0000000..edfc737
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/plugins/clustersupport/ClusterNotification.java
@@ -0,0 +1,86 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.plugins.clustersupport;
+
+import java.io.Serializable;
+
+/**
+ * A notification message that holds information about a cache event. This
+ * class is Serializable
to allow it to be sent across the
+ * network to other machines running in a cluster.
+ *
+ * @author Chris Miller
+ * @author $Author$
+ * @version $Revision$
+ */
+public class ClusterNotification implements Serializable {
+ /**
+ * Specifies a notification message that indicates a particular cache key
+ * should be flushed.
+ */
+ public static final int FLUSH_KEY = 1;
+
+ /**
+ * Specifies a notification message that indicates an entire cache group
+ * should be flushed.
+ */
+ public static final int FLUSH_GROUP = 2;
+
+ /**
+ * Specifies a notification message that indicates all entries in the cache
+ * that match the specified pattern should be flushed.
+ */
+ public static final int FLUSH_PATTERN = 3;
+
+ /**
+ * Specifies a notification message indicating that an entire cache should
+ * be flushed.
+ */
+ public static final int FLUSH_CACHE = 4;
+
+ /**
+ * Any additional data that may be required
+ */
+ protected Serializable data;
+
+ /**
+ * The type of notification message.
+ */
+ protected int type;
+
+ /**
+ * Creates a new notification message object to broadcast to other
+ * listening nodes in the cluster.
+ *
+ * @param type The type of notification message. Valid types are
+ * {@link #FLUSH_KEY} and {@link #FLUSH_GROUP}.
+ * @param data Specifies the object key or group name to flush.
+ */
+ public ClusterNotification(int type, Serializable data) {
+ this.type = type;
+ this.data = data;
+ }
+
+ /**
+ * Holds any additional data that was required
+ */
+ public Serializable getData() {
+ return data;
+ }
+
+ /**
+ * The type of notification message.
+ */
+ public int getType() {
+ return type;
+ }
+
+ public String toString() {
+ StringBuffer buf = new StringBuffer();
+ buf.append("type=").append(type).append(", data=").append(data);
+
+ return buf.toString();
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/plugins/clustersupport/JMS10BroadcastingListener.java b/src/java/com/opensymphony/oscache/plugins/clustersupport/JMS10BroadcastingListener.java
new file mode 100644
index 0000000..1b34819
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/plugins/clustersupport/JMS10BroadcastingListener.java
@@ -0,0 +1,180 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.plugins.clustersupport;
+
+import com.opensymphony.oscache.base.Cache;
+import com.opensymphony.oscache.base.Config;
+import com.opensymphony.oscache.base.FinalizationException;
+import com.opensymphony.oscache.base.InitializationException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import javax.jms.*;
+
+import javax.naming.InitialContext;
+
+/**
+ * A JMS 1.0.x based clustering implementation. This implementation is independent of
+ * the JMS provider and uses non-persistent messages on a publish subscribe protocol.
+ *
+ * @author Chris Miller
+ */
+public class JMS10BroadcastingListener extends AbstractBroadcastingListener {
+ private final static Log log = LogFactory.getLog(JMS10BroadcastingListener.class);
+
+ /**
+ * The name of this cluster. Used to identify the sender of a message.
+ */
+ private String clusterNode;
+
+ /**
+ * The JMS connection used
+ */
+ private TopicConnection connection;
+
+ /**
+ * The object used to publish new messages
+ */
+ private TopicPublisher publisher;
+
+ /**
+ * The current JMS session
+ */
+ private TopicSession publisherSession;
+
+ /**
+ * oscache.properties
:
+ *
+ *
oscache.properties
:
+ *
+ *
A concrete implementation of the {@link AbstractBroadcastingListener} based on + * the JavaGroups library. This Class uses JavaGroups to broadcast cache flush + * messages across a cluster.
+ * + *One of the following properties should be configured in oscache.properties
for
+ * this listener:
+ *
+ * UDP(mcast_addr=*.*.*.*;mcast_port=45566;ip_ttl=32;\ + * mcast_send_buf_size=150000;mcast_recv_buf_size=80000):\ + * PING(timeout=2000;num_initial_members=3):\ + * MERGE2(min_interval=5000;max_interval=10000):\ + * FD_SOCK:VERIFY_SUSPECT(timeout=1500):\ + * pbcast.NAKACK(gc_lag=50;retransmit_timeout=300,600,1200,2400,4800;max_xmit_size=8192):\ + * UNICAST(timeout=300,600,1200,2400):\ + * pbcast.STABLE(desired_avg_gossip=20000):\ + * FRAG(frag_size=8096;down_thread=false;up_thread=false):\ + * pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=true) + *+ * + * Where
*.*.*.*
is the specified multicast IP, which defaults to 231.12.21.132
.
+ */
+ private static final String DEFAULT_CHANNEL_PROPERTIES_PRE = "UDP(mcast_addr=";
+
+ /**
+ * The second half of the default channel properties. The default channel properties are:
+ * + * UDP(mcast_addr=*.*.*.*;mcast_port=45566;ip_ttl=32;\ + * mcast_send_buf_size=150000;mcast_recv_buf_size=80000):\ + * PING(timeout=2000;num_initial_members=3):\ + * MERGE2(min_interval=5000;max_interval=10000):\ + * FD_SOCK:VERIFY_SUSPECT(timeout=1500):\ + * pbcast.NAKACK(gc_lag=50;retransmit_timeout=300,600,1200,2400,4800;max_xmit_size=8192):\ + * UNICAST(timeout=300,600,1200,2400):\ + * pbcast.STABLE(desired_avg_gossip=20000):\ + * FRAG(frag_size=8096;down_thread=false;up_thread=false):\ + * pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=true) + *+ * + * Where
*.*.*.*
is the specified multicast IP, which defaults to 231.12.21.132
.
+ */
+ private static final String DEFAULT_CHANNEL_PROPERTIES_POST = ";mcast_port=45566;ip_ttl=32;mcast_send_buf_size=150000;mcast_recv_buf_size=80000):" + "PING(timeout=2000;num_initial_members=3):MERGE2(min_interval=5000;max_interval=10000):FD_SOCK:VERIFY_SUSPECT(timeout=1500):" + "pbcast.NAKACK(gc_lag=50;retransmit_timeout=300,600,1200,2400,4800;max_xmit_size=8192):UNICAST(timeout=300,600,1200,2400):pbcast.STABLE(desired_avg_gossip=20000):" + "FRAG(frag_size=8096;down_thread=false;up_thread=false):pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=true)";
+ private static final String DEFAULT_MULTICAST_IP = "231.12.21.132";
+ private NotificationBus bus;
+
+ /**
+ * Initializes the broadcasting listener by starting up a JavaGroups notification
+ * bus instance to handle incoming and outgoing messages.
+ *
+ * @param config An OSCache configuration object.
+ * @throws com.opensymphony.oscache.base.InitializationException If this listener has
+ * already been initialized.
+ */
+ public synchronized void initialize(Cache cache, Config config) throws InitializationException {
+ super.initialize(cache, config);
+
+ String properties = config.getProperty(CHANNEL_PROPERTIES);
+ String multicastIP = config.getProperty(MULTICAST_IP_PROPERTY);
+
+ if ((properties == null) && (multicastIP == null)) {
+ multicastIP = DEFAULT_MULTICAST_IP;
+ }
+
+ if (properties == null) {
+ properties = DEFAULT_CHANNEL_PROPERTIES_PRE + multicastIP.trim() + DEFAULT_CHANNEL_PROPERTIES_POST;
+ } else {
+ properties = properties.trim();
+ }
+
+ if (log.isInfoEnabled()) {
+ log.info("Starting a new JavaGroups broadcasting listener with properties=" + properties);
+ }
+
+ try {
+ bus = new NotificationBus(BUS_NAME, properties);
+ bus.start();
+ bus.getChannel().setOpt(Channel.LOCAL, new Boolean(false));
+ bus.setConsumer(this);
+ log.info("JavaGroups clustering support started successfully");
+ } catch (Exception e) {
+ throw new InitializationException("Initialization failed: " + e);
+ }
+ }
+
+ /**
+ * Shuts down the JavaGroups being managed by this listener. This
+ * occurs once the cache is shut down and this listener is no longer
+ * in use.
+ *
+ * @throws com.opensymphony.oscache.base.FinalizationException
+ */
+ public synchronized void finialize() throws FinalizationException {
+ if (log.isInfoEnabled()) {
+ log.info("JavaGroups shutting down...");
+ }
+
+ // It's possible that the notification bus is null (CACHE-154)
+ if (bus != null) {
+ bus.stop();
+ bus = null;
+ } else {
+ log.warn("Notification bus wasn't initialized or finialize was invoked before!");
+ }
+
+ if (log.isInfoEnabled()) {
+ log.info("JavaGroups shutdown complete.");
+ }
+ }
+
+ /**
+ * Uses JavaGroups to broadcast the supplied notification message across the cluster.
+ *
+ * @param message The cluster nofication message to broadcast.
+ */
+ protected void sendNotification(ClusterNotification message) {
+ bus.sendNotification(message);
+ }
+
+ /**
+ * Handles incoming notification messages from JavaGroups. This method should
+ * never be called directly.
+ *
+ * @param serializable The incoming message object. This must be a {@link ClusterNotification}.
+ */
+ public void handleNotification(Serializable serializable) {
+ if (!(serializable instanceof ClusterNotification)) {
+ log.error("An unknown cluster notification message received (class=" + serializable.getClass().getName() + "). Notification ignored.");
+
+ return;
+ }
+
+ handleClusterNotification((ClusterNotification) serializable);
+ }
+
+ /**
+ * We are not using the caching, so we just return something that identifies
+ * us. This method should never be called directly.
+ */
+ public Serializable getCache() {
+ return "JavaGroupsBroadcastingListener: " + bus.getLocalAddress();
+ }
+
+ /**
+ * A callback that is fired when a new member joins the cluster. This
+ * method should never be called directly.
+ *
+ * @param address The address of the member who just joined.
+ */
+ public void memberJoined(Address address) {
+ if (log.isInfoEnabled()) {
+ log.info("A new member at address '" + address + "' has joined the cluster");
+ }
+ }
+
+ /**
+ * A callback that is fired when an existing member leaves the cluster.
+ * This method should never be called directly.
+ *
+ * @param address The address of the member who left.
+ */
+ public void memberLeft(Address address) {
+ if (log.isInfoEnabled()) {
+ log.info("Member at address '" + address + "' left the cluster");
+ }
+ }
+}
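To activate this listener, the clustering plugin is normally wired in through oscache.properties. A minimal sketch, assuming the standard cache.event.listeners and cache.cluster.multicast.ip property names (the address shown is simply the documented default):

    cache.event.listeners=com.opensymphony.oscache.plugins.clustersupport.JavaGroupsBroadcastingListener
    cache.cluster.multicast.ip=231.12.21.132
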
diff --git a/src/java/com/opensymphony/oscache/plugins/clustersupport/package.html b/src/java/com/opensymphony/oscache/plugins/clustersupport/package.html
new file mode 100644
index 0000000..75fc50c
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/plugins/clustersupport/package.html
@@ -0,0 +1,34 @@
+
+
+
+
+
+
+
+Provides support for broadcasting flush events so that OSCache can function across a
+cluster.
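+
+As a minimal sketch, clustering is typically switched on from oscache.properties by
+registering the broadcasting listener class (the cache.event.listeners property name is
+assumed here from the standard OSCache configuration):
+
+<pre>
+cache.event.listeners=com.opensymphony.oscache.plugins.clustersupport.JavaGroupsBroadcastingListener
+</pre>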
+
+
+ * <code>null</code> if the group file could not be found.
+ *
+ * @param groupName The name of the group to retrieve.
+ * @return A <code>Set</code> containing keys of all of the cache
+ * entries that belong to this group.
+ * @throws CachePersistenceException
+ */
+ public Set retrieveGroup(String groupName) throws CachePersistenceException {
+ File groupFile = getCacheGroupFile(groupName);
+
+ try {
+ return (Set) retrieve(groupFile);
+ } catch (ClassCastException e) {
+ throw new CachePersistenceException("Group file " + groupFile + " was not persisted as a Set: " + e);
+ }
+ }
+
+ /**
+ * Stores an object in cache
+ *
+ * @param key The object's key
+ * @param obj The object to store
+ * @throws CachePersistenceException
+ */
+ public void store(String key, Object obj) throws CachePersistenceException {
+ File file = getCacheFile(key);
+ store(file, obj);
+ }
+
+ /**
+ * Stores a group in the persistent cache. This will overwrite any existing
+ * group with the same name
+ */
+ public void storeGroup(String groupName, Set group) throws CachePersistenceException {
+ File groupFile = getCacheGroupFile(groupName);
+ store(groupFile, group);
+ }
+
+ /**
+ * Translates the cache path to the temp dir of the servlet container if cachePathStr
+ * is javax.servlet.context.tempdir.
+ *
+ * @param cachePathStr Cache path read from the properties file.
+ * @return Adjusted cache path
+ */
+ protected String adjustFileCachePath(String cachePathStr) {
+ if (cachePathStr.compareToIgnoreCase(CONTEXT_TMPDIR) == 0) {
+ cachePathStr = contextTmpDir.getAbsolutePath();
+ }
+
+ return cachePathStr;
+ }
+
+ /**
+ * Set caching to file on or off.
+ * If the <code>cache.path</code> property exists, we assume file caching is turned on.
+ * By the same token, to turn off file caching just remove this property.
+ */
+ protected void initFileCaching(String cachePathStr) {
+ if (cachePathStr != null) {
+ cachePath = new File(cachePathStr);
+
+ try {
+ if (!cachePath.exists()) {
+ if (log.isInfoEnabled()) {
+ log.info("cache.path '" + cachePathStr + "' does not exist, creating");
+ }
+
+ // http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4742723
+ synchronized (MKDIRS_LOCK) {
+ cachePath.mkdirs();
+ }
+ }
+
+ if (!cachePath.isDirectory()) {
+ log.error("cache.path '" + cachePathStr + "' is not a directory");
+ cachePath = null;
+ } else if (!cachePath.canWrite()) {
+ log.error("cache.path '" + cachePathStr + "' is not a writable location");
+ cachePath = null;
+ }
+ } catch (Exception e) {
+ log.error("cache.path '" + cachePathStr + "' could not be used", e);
+ cachePath = null;
+ }
+ } else {
+ // Use default value
+ }
+ }
+
+ // try 30s to delete the file
+ private static final long DELETE_THREAD_SLEEP = 500;
+ private static final int DELETE_COUNT = 60;
+
+ protected void remove(File file) throws CachePersistenceException {
+ int count = DELETE_COUNT;
+ try {
+ // Loop until we are able to delete (No current read).
+ // The cache must ensure that there are never two concurrent threads
+ // doing write (store and delete) operations on the same item.
+ // Delete only should be enough but file.exists prevents infinite loop
+ while (file.exists() && !file.delete() && count != 0) {
+ count--;
+ try {
+ Thread.sleep(DELETE_THREAD_SLEEP);
+ } catch (InterruptedException ignore) {
+ }
+ }
+ } catch (Exception e) {
+ throw new CachePersistenceException("Unable to remove file '" + file + "' from the disk cache.", e);
+ }
+ if (file.exists() && count == 0) {
+ throw new CachePersistenceException("Unable to delete '" + file + "' from the disk cache. "+DELETE_COUNT+" attempts at "+DELETE_THREAD_SLEEP+" milliseconds intervals.");
+ }
+ }
+
+ /**
+ * Stores an object using the supplied file object
+ *
+ * @param file The file to use for storing the object
+ * @param obj the object to store
+ * @throws CachePersistenceException
+ */
+ protected void store(File file, Object obj) throws CachePersistenceException {
+ // check if file exists before testing if parent exists
+ if (!file.exists()) {
+ // check if the directory structure required exists and create it if it doesn't
+ File filepath = new File(file.getParent());
+
+ try {
+ if (!filepath.exists()) {
+ // http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4742723
+ synchronized (MKDIRS_LOCK) {
+ filepath.mkdirs();
+ }
+ }
+ } catch (Exception e) {
+ throw new CachePersistenceException("Unable to create the directory " + filepath, e);
+ }
+ }
+
+ // Write the object to disk
+ try {
+ FileOutputStream fout = new FileOutputStream(file);
+ try {
+ ObjectOutputStream oout = new ObjectOutputStream(new BufferedOutputStream(fout));
+ try {
+ oout.writeObject(obj);
+ oout.flush();
+ } finally {
+ try {
+ oout.close();
+ } catch (Exception e) {
+ log.warn("Problem closing file of disk cache.", e);
+ }
+ }
+ } finally {
+ try {
+ fout.close();
+ } catch (Exception e) {
+ log.warn("Problem closing file of disk cache.", e);
+ }
+ }
+ } catch (Exception e) {
+ int count = DELETE_COUNT;
+ while (file.exists() && !file.delete() && count != 0) {
+ count--;
+ try {
+ Thread.sleep(DELETE_THREAD_SLEEP);
+ } catch (InterruptedException ignore) {
+ }
+ }
+ throw new CachePersistenceException("Unable to write file '" + file + "' in the disk cache.", e);
+ }
+ }
+
+ /**
+ * Build fully qualified cache file for the specified cache entry key.
+ *
+ * @param key Cache Entry Key.
+ * @return File reference.
+ */
+ protected File getCacheFile(String key) {
+ char[] fileChars = getCacheFileName(key);
+
+ File file = new File(root, new String(fileChars) + "." + CACHE_EXTENSION);
+
+ return file;
+ }
+
+ /**
+ * Build cache file name for the specified cache entry key.
+ *
+ * @param key Cache Entry Key.
+ * @return char[] file name.
+ */
+ protected abstract char[] getCacheFileName(String key);
+
+ /**
+ * Builds a fully qualified file name that specifies a cache group entry.
+ *
+ * @param group The name of the group
+ * @return A File reference
+ */
+ private File getCacheGroupFile(String group) {
+ int AVERAGE_PATH_LENGTH = 30;
+
+ if ((group == null) || (group.length() == 0)) {
+ throw new IllegalArgumentException("Invalid group '" + group + "' specified to getCacheGroupFile.");
+ }
+
+ StringBuffer path = new StringBuffer(AVERAGE_PATH_LENGTH);
+
+ // Build a fully qualified file name for this group
+ path.append(GROUP_DIRECTORY).append('/');
+ path.append(getCacheFileName(group)).append('.').append(CACHE_EXTENSION);
+
+ return new File(root, path.toString());
+ }
+
+ /**
+ * This allows different scopes to be persisted to different paths in the case of
+ * file caching.
+ *
+ * @param scope Cache scope.
+ * @return The scope subpath
+ */
+ private String getPathPart(int scope) {
+ if (scope == PageContext.SESSION_SCOPE) {
+ return SESSION_CACHE_SUBPATH;
+ } else {
+ return APPLICATION_CACHE_SUBPATH;
+ }
+ }
+
+ /**
+ * Clears a whole directory, starting from the specified
+ * directory
+ *
+ * @param baseDirName The root directory to delete
+ * @throws CachePersistenceException
+ */
+ private void clear(String baseDirName) throws CachePersistenceException {
+ File baseDir = new File(baseDirName);
+ File[] fileList = baseDir.listFiles();
+
+ try {
+ if (fileList != null) {
+ // Loop through all the files and directory to delete them
+ for (int count = 0; count < fileList.length; count++) {
+ if (fileList[count].isFile()) {
+ fileList[count].delete();
+ } else {
+ // Make a recursive call to delete the directory
+ clear(fileList[count].toString());
+ fileList[count].delete();
+ }
+ }
+ }
+
+ // Delete the root directory
+ baseDir.delete();
+ } catch (Exception e) {
+ throw new CachePersistenceException("Unable to clear the cache directory");
+ }
+ }
+
+ /**
+ * Retrieves a serialized object from the supplied file, or returns
+ * <code>null</code> if the file does not exist.
+ *
+ * @param file The file to deserialize
+ * @return The deserialized object
+ * @throws CachePersistenceException
+ */
+ private Object retrieve(File file) throws CachePersistenceException {
+ Object readContent = null;
+ boolean fileExist;
+
+ try {
+ fileExist = file.exists();
+ } catch (Exception e) {
+ throw new CachePersistenceException("Unable to verify if file '" + file + "' exists.", e);
+ }
+
+ // Read the file if it exists
+ if (fileExist) {
+ ObjectInputStream oin = null;
+
+ try {
+ BufferedInputStream in = new BufferedInputStream(new FileInputStream(file));
+ oin = new ObjectInputStream(in);
+ readContent = oin.readObject();
+ } catch (Exception e) {
+ // We expect this exception to occur.
+ // This is when the item will be invalidated (written or deleted)
+ // during read.
+ // The cache has the logic to retry reading.
+ throw new CachePersistenceException("Unable to read file '" + file.getAbsolutePath() + "' from the disk cache.", e);
+ } finally {
+ // HHDE: no need to close in. Will be closed by oin
+ try {
+ oin.close();
+ } catch (Exception ex) {
+ }
+ }
+ }
+
+ return readContent;
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/plugins/diskpersistence/DiskPersistenceListener.java b/src/java/com/opensymphony/oscache/plugins/diskpersistence/DiskPersistenceListener.java
new file mode 100644
index 0000000..78fa823
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/plugins/diskpersistence/DiskPersistenceListener.java
@@ -0,0 +1,53 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.plugins.diskpersistence;
+
+
+/**
+ * Persist the cache data to disk.
+ *
+ * The code in this class is not thread safe; it is the responsibility
+ * of the cache using this persistence listener to handle the concurrency.
+ *
+ * @version $Revision$
+ * @author Francois Beauregard
+ * @author Alain Bergevin
+ * @author Chris Miller
+ */
+public class DiskPersistenceListener extends AbstractDiskPersistenceListener {
+ private static final String CHARS_TO_CONVERT = "./\\ :;\"\'_?";
+
+ /**
+ * Build cache file name for the specified cache entry key.
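+ *
+ * For illustration: every character found in <code>CHARS_TO_CONVERT</code> is replaced by an
+ * underscore followed by its position in the key, so a key such as <code>"a.b/c"</code>
+ * yields the file name <code>a_1b_3c</code> (the cache file extension is appended elsewhere).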
+ *
+ * @param key Cache Entry Key.
+ * @return char[] file name.
+ */
+ protected char[] getCacheFileName(String key) {
+ if ((key == null) || (key.length() == 0)) {
+ throw new IllegalArgumentException("Invalid key '" + key + "' specified to getCacheFile.");
+ }
+
+ char[] chars = key.toCharArray();
+
+ StringBuffer sb = new StringBuffer(chars.length + 8);
+
+ for (int i = 0; i < chars.length; i++) {
+ char c = chars[i];
+ int pos = CHARS_TO_CONVERT.indexOf(c);
+
+ if (pos >= 0) {
+ sb.append('_');
+ sb.append(i);
+ } else {
+ sb.append(c);
+ }
+ }
+
+ char[] fileChars = new char[sb.length()];
+ sb.getChars(0, fileChars.length, fileChars, 0);
+ return fileChars;
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/plugins/diskpersistence/HashDiskPersistenceListener.java b/src/java/com/opensymphony/oscache/plugins/diskpersistence/HashDiskPersistenceListener.java
new file mode 100644
index 0000000..fdd12a4
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/plugins/diskpersistence/HashDiskPersistenceListener.java
@@ -0,0 +1,118 @@
+/*
+ * Copyright (c) 2002-2007 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.plugins.diskpersistence;
+
+import com.opensymphony.oscache.base.Config;
+import com.opensymphony.oscache.base.persistence.PersistenceListener;
+
+import java.io.File;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+/**
+ * Persists cache data to disk. Provides a hash of the standard key name as the file name.
+ *
+ * A configurable hash algorithm is used to create a digest of the cache key for the
+ * disk filename. This is to allow for more sane filenames for objects which don't generate
+ * friendly cache keys.
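+ *
+ * A rough configuration sketch (the <code>cache.persistence.class</code> property name is
+ * assumed from the standard OSCache configuration; the hash algorithm property is the one
+ * defined by this class):
+ * <pre>
+ * cache.persistence.class=com.opensymphony.oscache.plugins.diskpersistence.HashDiskPersistenceListener
+ * cache.path=/var/cache/oscache
+ * cache.persistence.disk.hash.algorithm=SHA-1
+ * </pre>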
+ *
+ * @author Jason Parrott
+ */
+public class HashDiskPersistenceListener extends AbstractDiskPersistenceListener {
+
+ private static final Log LOG = LogFactory.getLog(HashDiskPersistenceListener.class);
+
+ private static final int DIR_LEVELS = 3;
+
+ public final static String HASH_ALGORITHM_KEY = "cache.persistence.disk.hash.algorithm";
+ public final static String DEFAULT_HASH_ALGORITHM = "MD5";
+ protected MessageDigest md = null;
+
+ /**
+ * Initializes the HashDiskPersistenceListener. Namely this involves only setting up the
+ * message digester to hash the key values.
+ * @see com.opensymphony.oscache.base.persistence.PersistenceListener#configure(com.opensymphony.oscache.base.Config)
+ */
+ public PersistenceListener configure(Config config) {
+ try {
+ if (config.getProperty(HashDiskPersistenceListener.HASH_ALGORITHM_KEY) != null) {
+ try {
+ md = MessageDigest.getInstance(config.getProperty(HashDiskPersistenceListener.HASH_ALGORITHM_KEY));
+ } catch (NoSuchAlgorithmException e) {
+ md = MessageDigest.getInstance(HashDiskPersistenceListener.DEFAULT_HASH_ALGORITHM);
+ }
+ } else {
+ md = MessageDigest.getInstance(HashDiskPersistenceListener.DEFAULT_HASH_ALGORITHM);
+ }
+ } catch (NoSuchAlgorithmException e) {
+ LOG.warn("No hash algorithm available for disk persistence", e);
+ throw new RuntimeException("No hash algorithm available for disk persistence", e);
+ }
+
+ return super.configure(config);
+ }
+
+ /**
+ * Generates a file name for the given cache key. In this case the file name is
+ * generated from the hash of the standard key name. The hash algorithm is configured via the
+ * cache.persistence.disk.hash.algorithm configuration property.
+ * @param key cache entry key
+ * @return char[] file name
+ */
+ protected synchronized char[] getCacheFileName(String key) {
+ if ((key == null) || (key.length() == 0)) {
+ throw new IllegalArgumentException("Invalid key '" + key + "' specified to getCacheFile.");
+ }
+
+ String hexDigest = byteArrayToHexString(md.digest(key.getBytes()));
+
+ // CACHE-249: Performance improvement for large disk persistence usage
+ StringBuffer filename = new StringBuffer(hexDigest.length() + 2 * DIR_LEVELS);
+ for (int i=0; i < DIR_LEVELS; i++) {
+ filename.append(hexDigest.charAt(i)).append(File.separator);
+ }
+ filename.append(hexDigest);
+
+ return filename.toString().toCharArray();
+ }
+
+ /**
+ * Nibble conversion. Thanks to our friends at:
+ * http://www.devx.com/tips/Tip/13540
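+ * For example, an input of <code>{(byte) 0xAB, (byte) 0x01}</code> yields the string <code>"AB01"</code>.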
+ * @param in the byte array to convert
+ * @return a java.lang.String based version of the byte array
+ */
+ static String byteArrayToHexString(byte[] in) {
+ if ((in == null) || (in.length <= 0)) {
+ return null;
+ }
+
+ StringBuffer out = new StringBuffer(in.length * 2);
+
+ for (int i = 0; i < in.length; i++) {
+ byte ch = (byte) (in[i] & 0xF0); // Strip off high nibble
+ ch = (byte) (ch >>> 4);
+
+ // shift the bits down
+ ch = (byte) (ch & 0x0F);
+
+ // must do this if the high order bit is on!
+ out.append(PSEUDO[(int) ch]); // convert the nibble to a String Character
+ ch = (byte) (in[i] & 0x0F); // Strip off low nibble
+ out.append(PSEUDO[(int) ch]); // convert the nibble to a String Character
+ }
+
+ return out.toString();
+ }
+
+ static final String[] PSEUDO = {
+ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "A", "B", "C", "D",
+ "E", "F"
+ };
+
+}
diff --git a/src/java/com/opensymphony/oscache/plugins/diskpersistence/package.html b/src/java/com/opensymphony/oscache/plugins/diskpersistence/package.html
new file mode 100644
index 0000000..b7081d1
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/plugins/diskpersistence/package.html
@@ -0,0 +1,31 @@
+
+
+
+
+
+
+
+Provides support for persisting cached objects to disk.
+
+
+ * This code is borrowed directly from OSCore, but is duplicated
+ * here to avoid having to add a dependency on the entire OSCore jar.
+ *
+ * If much more code from OSCore is needed then it might be wiser to
+ * bite the bullet and add a dependency.
+ */
+public class ClassLoaderUtil {
+
+ private ClassLoaderUtil() {
+ }
+
+ /**
+ * Load a class with a given name.
+ *
+ * It will try to load the class in the following order:
+ * <code>lookup[MINUTE]</code> map to minutes 0 -> 59 respectively. Bits are set if
+ * the corresponding value is enabled. So if the minute field in the cron expression
+ * was <code>"0,2-8,50"</code>, bits 0, 2, 3, 4, 5, 6, 7, 8 and 50 will be set.
+ * If the cron expression is <code>"*"</code>, the long value is set to
+ * <code>Long.MAX_VALUE</code>.
+ */
+ private long[] lookup = {
+ Long.MAX_VALUE, Long.MAX_VALUE, Long.MAX_VALUE, Long.MAX_VALUE,
+ Long.MAX_VALUE
+ };
+
+ /**
+ * This is based on the contents of the <code>lookup</code> table. It holds the
+ * highest valid field value for each field type.
+ */
+ private int[] lookupMax = {-1, -1, -1, -1, -1};
+
+ /**
+ * This is based on the contents of the <code>lookup</code> table. It holds the
+ * lowest valid field value for each field type.
+ */
+ private int[] lookupMin = {
+ Integer.MAX_VALUE, Integer.MAX_VALUE, Integer.MAX_VALUE,
+ Integer.MAX_VALUE, Integer.MAX_VALUE
+ };
+
+ /**
+ * Creates a FastCronParser that uses a default cron expression of "* * * * *".
+ * This will match any time that is supplied.
+ */
+ public FastCronParser() {
+ }
+
+ /**
+ * Constructs a new FastCronParser based on the supplied expression.
+ *
+ * @throws ParseException if the supplied expression is not a valid cron expression.
+ */
+ public FastCronParser(String cronExpression) throws ParseException {
+ setCronExpression(cronExpression);
+ }
+
+ /**
+ * Resets the cron expression to the value supplied.
+ *
+ * @param cronExpression the new cron expression.
+ *
+ * @throws ParseException if the supplied expression is not a valid cron expression.
+ */
+ public void setCronExpression(String cronExpression) throws ParseException {
+ if (cronExpression == null) {
+ throw new IllegalArgumentException("Cron time expression cannot be null");
+ }
+
+ this.cronExpression = cronExpression;
+ parseExpression(cronExpression);
+ }
+
+ /**
+ * Retrieves the current cron expression.
+ *
+ * @return the current cron expression.
+ */
+ public String getCronExpression() {
+ return this.cronExpression;
+ }
+
+ /**
+ * Determines whether this cron expression matches a date/time that is more recent
+ * than the one supplied.
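+ *
+ * A brief usage sketch (<code>entryLastModifiedMillis</code> is a placeholder for a cache
+ * entry's last-update time in milliseconds; the constructor throws <code>ParseException</code>
+ * for malformed expressions):
+ * <pre>
+ * FastCronParser parser = new FastCronParser("0 2 * * *"); // 2am every day
+ * boolean stale = parser.hasMoreRecentMatch(entryLastModifiedMillis);
+ * </pre>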
+ *
+ * @param time The time to compare the cron expression against.
+ *
+ * @return <code>true</code> if the cron expression matches a time that is closer
+ * to the current time than the supplied time is, <code>false</code> otherwise.
+ */
+ public boolean hasMoreRecentMatch(long time) {
+ return time < getTimeBefore(System.currentTimeMillis());
+ }
+
+ /**
+ * Find the most recent time that matches this cron expression. This time will
+ * always be in the past, i.e. a lower value than the supplied time.
+ *
+ * @param time The time (in milliseconds) that we're using as our upper bound.
+ *
+ * @return The time (in milliseconds) when this cron event last occurred.
+ */
+ public long getTimeBefore(long time) {
+ // It would be nice to get rid of the Calendar class for speed, but it's a lot of work...
+ // We create this Calendar once and adjust its fields at the end, rather than creating a new one.
+ Calendar cal = new GregorianCalendar();
+ cal.setTimeInMillis(time);
+
+ int minute = cal.get(Calendar.MINUTE);
+ int hour = cal.get(Calendar.HOUR_OF_DAY);
+ int dayOfMonth = cal.get(Calendar.DAY_OF_MONTH);
+ int month = cal.get(Calendar.MONTH) + 1; // Calendar is 0-based for this field, and we are 1-based
+ int year = cal.get(Calendar.YEAR);
+
+ long validMinutes = lookup[MINUTE];
+ long validHours = lookup[HOUR];
+ long validDaysOfMonth = lookup[DAY_OF_MONTH];
+ long validMonths = lookup[MONTH];
+ long validDaysOfWeek = lookup[DAY_OF_WEEK];
+
+ // Find out if we have a Day of Week or Day of Month field
+ boolean haveDOM = validDaysOfMonth != Long.MAX_VALUE;
+ boolean haveDOW = validDaysOfWeek != Long.MAX_VALUE;
+
+ boolean skippedNonLeapYear = false;
+
+ while (true) {
+ boolean retry = false;
+
+ // Clean up the month if it was wrapped in a previous iteration
+ if (month < 1) {
+ month += 12;
+ year--;
+ }
+
+ // get month...................................................
+ boolean found = false;
+
+ if (validMonths != Long.MAX_VALUE) {
+ for (int i = month + 11; i > (month - 1); i--) {
+ int testMonth = (i % 12) + 1;
+
+ // Check if the month is valid
+ if (((1L << (testMonth - 1)) & validMonths) != 0) {
+ if ((testMonth > month) || skippedNonLeapYear) {
+ year--;
+ }
+
+ // Check there are enough days in this month (catches non leap-years trying to match the 29th Feb)
+ int numDays = numberOfDaysInMonth(testMonth, year);
+
+ if (!haveDOM || (numDays >= lookupMin[DAY_OF_MONTH])) {
+ if ((month != testMonth) || skippedNonLeapYear) {
+ // New DOM = min(maxDOM, prevDays); ie, the highest valid value
+ dayOfMonth = (numDays <= lookupMax[DAY_OF_MONTH]) ? numDays : lookupMax[DAY_OF_MONTH];
+ hour = lookupMax[HOUR];
+ minute = lookupMax[MINUTE];
+ month = testMonth;
+ }
+
+ found = true;
+ break;
+ }
+ }
+ }
+
+ skippedNonLeapYear = false;
+
+ if (!found) {
+ // The only time we drop out here is when we're searching for the 29th of February and no other date!
+ skippedNonLeapYear = true;
+ continue;
+ }
+ }
+
+ // Clean up if the dayOfMonth was wrapped. This takes leap years into account.
+ if (dayOfMonth < 1) {
+ month--;
+ dayOfMonth += numberOfDaysInMonth(month, year);
+ hour = lookupMax[HOUR];
+ continue;
+ }
+
+ // get day...................................................
+ if (haveDOM && !haveDOW) { // get day using just the DAY_OF_MONTH token
+
+ int daysInThisMonth = numberOfDaysInMonth(month, year);
+ int daysInPreviousMonth = numberOfDaysInMonth(month - 1, year);
+
+ // Find the highest valid day that is below the current day
+ for (int i = dayOfMonth + 30; i > (dayOfMonth - 1); i--) {
+ int testDayOfMonth = (i % 31) + 1;
+
+ // Skip over any days that don't actually exist (eg 31st April)
+ if ((testDayOfMonth <= dayOfMonth) && (testDayOfMonth > daysInThisMonth)) {
+ continue;
+ }
+
+ if ((testDayOfMonth > dayOfMonth) && (testDayOfMonth > daysInPreviousMonth)) {
+ continue;
+ }
+
+ if (((1L << (testDayOfMonth - 1)) & validDaysOfMonth) != 0) {
+ if (testDayOfMonth > dayOfMonth) {
+ // We've found a valid day, but we had to move back a month
+ month--;
+ retry = true;
+ }
+
+ if (dayOfMonth != testDayOfMonth) {
+ hour = lookupMax[HOUR];
+ minute = lookupMax[MINUTE];
+ }
+
+ dayOfMonth = testDayOfMonth;
+ break;
+ }
+ }
+
+ if (retry) {
+ continue;
+ }
+ } else if (haveDOW && !haveDOM) { // get day using just the DAY_OF_WEEK token
+
+ int daysLost = 0;
+ int currentDOW = dayOfWeek(dayOfMonth, month, year);
+
+ for (int i = currentDOW + 7; i > currentDOW; i--) {
+ int testDOW = i % 7;
+
+ if (((1L << testDOW) & validDaysOfWeek) != 0) {
+ dayOfMonth -= daysLost;
+
+ if (dayOfMonth < 1) {
+ // We've wrapped back a month
+ month--;
+ dayOfMonth += numberOfDaysInMonth(month, year);
+ retry = true;
+ }
+
+ if (currentDOW != testDOW) {
+ hour = lookupMax[HOUR];
+ minute = lookupMax[MINUTE];
+ }
+
+ break;
+ }
+
+ daysLost++;
+ }
+
+ if (retry) {
+ continue;
+ }
+ }
+
+ // Clean up if the hour has been wrapped
+ if (hour < 0) {
+ hour += 24;
+ dayOfMonth--;
+ continue;
+ }
+
+ // get hour...................................................
+ if (validHours != Long.MAX_VALUE) {
+ // Find the highest valid hour that is below the current hour
+ for (int i = hour + 24; i > hour; i--) {
+ int testHour = i % 24;
+
+ if (((1L << testHour) & validHours) != 0) {
+ if (testHour > hour) {
+ // We've found an hour, but we had to move back a day
+ dayOfMonth--;
+ retry = true;
+ }
+
+ if (hour != testHour) {
+ minute = lookupMax[MINUTE];
+ }
+
+ hour = testHour;
+ break;
+ }
+ }
+
+ if (retry) {
+ continue;
+ }
+ }
+
+ // get minute.................................................
+ if (validMinutes != Long.MAX_VALUE) {
+ // Find the highest valid minute that is below the current minute
+ for (int i = minute + 60; i > minute; i--) {
+ int testMinute = i % 60;
+
+ if (((1L << testMinute) & validMinutes) != 0) {
+ if (testMinute > minute) {
+ // We've found a minute, but we had to move back an hour
+ hour--;
+ retry = true;
+ }
+
+ minute = testMinute;
+ break;
+ }
+ }
+
+ if (retry) {
+ continue;
+ }
+ }
+
+ break;
+ }
+
+ // OK, all done. Return the adjusted time value (adjusting this is faster than creating a new Calendar object)
+ cal.set(Calendar.YEAR, year);
+ cal.set(Calendar.MONTH, month - 1); // Calendar is 0-based for this field, and we are 1-based
+ cal.set(Calendar.DAY_OF_MONTH, dayOfMonth);
+ cal.set(Calendar.HOUR_OF_DAY, hour);
+ cal.set(Calendar.MINUTE, minute);
+ cal.set(Calendar.SECOND, 0);
+ cal.set(Calendar.MILLISECOND, 0);
+
+ return cal.getTime().getTime();
+ }
+
+ /**
+ * Takes a cron expression as an input parameter, and extracts from it the
+ * relevant minutes/hours/days/months that the expression matches.
+ *
+ * @param expression A valid cron expression.
+ * @throws ParseException If the supplied expression could not be parsed.
+ */
+ private void parseExpression(String expression) throws ParseException {
+ try {
+ // Reset all the lookup data
+ for (int i = 0; i < lookup.length; lookup[i++] = 0) {
+ lookupMin[i] = Integer.MAX_VALUE;
+ lookupMax[i] = -1;
+ }
+
+ // Create some character arrays to hold the extracted field values
+ char[][] token = new char[NUMBER_OF_CRON_FIELDS][];
+
+ // Extract the supplied expression into another character array
+ // for speed
+ int length = expression.length();
+ char[] expr = new char[length];
+ expression.getChars(0, length, expr, 0);
+
+ int field = 0;
+ int startIndex = 0;
+ boolean inWhitespace = true;
+
+ // Extract the various cron fields from the expression
+ for (int i = 0; (i < length) && (field < NUMBER_OF_CRON_FIELDS);
+ i++) {
+ boolean haveChar = (expr[i] != ' ') && (expr[i] != '\t');
+
+ if (haveChar) {
+ // We have a text character of some sort
+ if (inWhitespace) {
+ startIndex = i; // Remember the start of this token
+ inWhitespace = false;
+ }
+ }
+
+ if (i == (length - 1)) { // Adjustment for when we reach the end of the expression
+ i++;
+ }
+
+ if (!(haveChar || inWhitespace) || (i == length)) {
+ // We've reached the end of a token. Copy it into a new char array
+ token[field] = new char[i - startIndex];
+ System.arraycopy(expr, startIndex, token[field], 0, i - startIndex);
+ inWhitespace = true;
+ field++;
+ }
+ }
+
+ if (field < NUMBER_OF_CRON_FIELDS) {
+ throw new ParseException("Unexpected end of expression while parsing \"" + expression + "\". Cron expressions require 5 separate fields.", length);
+ }
+
+ // OK, we've broken the string up into the 5 cron fields, now lets add
+ // each field to their lookup table.
+ for (field = 0; field < NUMBER_OF_CRON_FIELDS; field++) {
+ startIndex = 0;
+
+ boolean inDelimiter = true;
+
+ // We add each comma-delimited element separately.
+ int elementLength = token[field].length;
+
+ for (int i = 0; i < elementLength; i++) {
+ boolean haveElement = token[field][i] != ',';
+
+ if (haveElement) {
+ // We have a character from an element in the token
+ if (inDelimiter) {
+ startIndex = i;
+ inDelimiter = false;
+ }
+ }
+
+ if (i == (elementLength - 1)) { // Adjustment for when we reach the end of the token
+ i++;
+ }
+
+ if (!(haveElement || inDelimiter) || (i == elementLength)) {
+ // We've reached the end of an element. Copy it into a new char array
+ char[] element = new char[i - startIndex];
+ System.arraycopy(token[field], startIndex, element, 0, i - startIndex);
+
+ // Add the element to our datastructure.
+ storeExpressionValues(element, field);
+
+ inDelimiter = true;
+ }
+ }
+
+ if (lookup[field] == 0) {
+ throw new ParseException("Token " + new String(token[field]) + " contains no valid entries for this field.", 0);
+ }
+ }
+
+ // Remove any months that will never be valid
+ switch (lookupMin[DAY_OF_MONTH]) {
+ case 31:
+ lookup[MONTH] &= (0xFFF - 0x528); // Binary 010100101000 - the months that have 30 days
+ case 30:
+ lookup[MONTH] &= (0xFFF - 0x2); // Binary 000000000010 - February
+
+ if (lookup[MONTH] == 0) {
+ throw new ParseException("The cron expression \"" + expression + "\" will never match any months - the day of month field is out of range.", 0);
+ }
+ }
+
+ // Check that we don't have both a day of month and a day of week field.
+ if ((lookup[DAY_OF_MONTH] != Long.MAX_VALUE) && (lookup[DAY_OF_WEEK] != Long.MAX_VALUE)) {
+ throw new ParseException("The cron expression \"" + expression + "\" is invalid. Having both a day-of-month and day-of-week field is not supported.", 0);
+ }
+ } catch (Exception e) {
+ if (e instanceof ParseException) {
+ throw (ParseException) e;
+ } else {
+ throw new ParseException("Illegal cron expression format (" + e.toString() + ")", 0);
+ }
+ }
+ }
+
+ /**
+ * Stores the values for the supplied cron element into the specified field.
+ *
+ * @param element The cron element to store. A cron element is a single component
+ * of a cron expression. For example, the complete set of elements for the cron expression
+ * <code>30 0,6,12,18 * * *</code> would be <code>{"30", "0", "6", "12", "18", "*", "*", "*"}</code>.
+ * @param field The field that this expression belongs to. Valid values are {@link #MINUTE},
+ * {@link #HOUR}, {@link #DAY_OF_MONTH}, {@link #MONTH} and {@link #DAY_OF_WEEK}.
+ *
+ * @throws ParseException if there was a problem parsing the supplied element.
+ */
+ private void storeExpressionValues(char[] element, int field) throws ParseException {
+ int i = 0;
+
+ int start = -99;
+ int end = -99;
+ int interval = -1;
+ boolean wantValue = true;
+ boolean haveInterval = false;
+
+ while ((interval < 0) && (i < element.length)) {
+ char ch = element[i++];
+
+ // Handle the wildcard character - it can only ever occur at the start of an element
+ if ((i == 1) && (ch == '*')) {
+ // Handle the special case where we have '*' and nothing else
+ if (i >= element.length) {
+ addToLookup(-1, -1, field, 1);
+ return;
+ }
+
+ start = -1;
+ end = -1;
+ wantValue = false;
+ continue;
+ }
+
+ if (wantValue) {
+ // Handle any numbers
+ if ((ch >= '0') && (ch <= '9')) {
+ ValueSet vs = getValue(ch - '0', element, i);
+
+ if (start == -99) {
+ start = vs.value;
+ } else if (!haveInterval) {
+ end = vs.value;
+ } else {
+ if (end == -99) {
+ end = MAX_VALUE[field];
+ }
+
+ interval = vs.value;
+ }
+
+ i = vs.pos;
+ wantValue = false;
+ continue;
+ }
+
+ if (!haveInterval && (end == -99)) {
+ // Handle any months that have been supplied as words
+ if (field == MONTH) {
+ if (start == -99) {
+ start = getMonthVal(ch, element, i++);
+ } else {
+ end = getMonthVal(ch, element, i++);
+ }
+
+ wantValue = false;
+
+ // Skip past the rest of the month name
+ while (++i < element.length) {
+ int c = element[i] | 0x20;
+
+ if ((c < 'a') || (c > 'z')) {
+ break;
+ }
+ }
+
+ continue;
+ } else if (field == DAY_OF_WEEK) {
+ if (start == -99) {
+ start = getDayOfWeekVal(ch, element, i++);
+ } else {
+ end = getDayOfWeekVal(ch, element, i++);
+ }
+
+ wantValue = false;
+
+ // Skip past the rest of the day name
+ while (++i < element.length) {
+ int c = element[i] | 0x20;
+
+ if ((c < 'a') || (c > 'z')) {
+ break;
+ }
+ }
+
+ continue;
+ }
+ }
+ } else {
+ // Handle the range character. A range character is only valid if we have a start but no end value
+ if ((ch == '-') && (start != -99) && (end == -99)) {
+ wantValue = true;
+ continue;
+ }
+
+ // Handle an interval. An interval is valid as long as we have a start value
+ if ((ch == '/') && (start != -99)) {
+ wantValue = true;
+ haveInterval = true;
+ continue;
+ }
+ }
+
+ throw makeParseException("Invalid character encountered while parsing element", element, i);
+ }
+
+ if (element.length > i) {
+ throw makeParseException("Extraneous characters found while parsing element", element, i);
+ }
+
+ if (end == -99) {
+ end = start;
+ }
+
+ if (interval < 0) {
+ interval = 1;
+ }
+
+ addToLookup(start, end, field, interval);
+ }
+
+ /**
+ * Extracts a numerical value from inside a character array.
+ *
+ * @param value The value of the first character
+ * @param element The character array we're extracting the value from
+ * @param i The index into the array of the next character to process
+ *
+ * @return the new index and the extracted value
+ */
+ private ValueSet getValue(int value, char[] element, int i) {
+ ValueSet result = new ValueSet();
+ result.value = value;
+
+ if (i >= element.length) {
+ result.pos = i;
+ return result;
+ }
+
+ char ch = element[i];
+
+ while ((ch >= '0') && (ch <= '9')) {
+ result.value = (result.value * 10) + (ch - '0');
+
+ if (++i >= element.length) {
+ break;
+ }
+
+ ch = element[i];
+ }
+
+ result.pos = i;
+
+ return result;
+ }
+
+ /**
+ * Adds a group of valid values to the lookup table for the specified field. This method
+ * handles ranges that increase in arbitrary step sizes. It is also possible to add a single
+ * value by specifying a range with the same start and end values.
+ *
+ * @param start The starting value for the range. Supplying a value that is less than zero
+ * will cause the minimum allowable value for the specified field to be used as the start value.
+ * @param end The maximum value that can be added (ie the upper bound). If the step size is
+ * greater than one, this maximum value may not necessarily end up being added. Supplying a
+ * value that is less than zero will cause the maximum allowable value for the specified field
+ * to be used as the upper bound.
+ * @param field The field that the values should be added to.
+ * @param interval Specifies the step size for the range. Any values less than one will be
+ * treated as a single step interval.
+ */
+ private void addToLookup(int start, int end, int field, int interval) throws ParseException {
+ // deal with the supplied range
+ if (start == end) {
+ if (start < 0) {
+ // We're setting the entire range of values
+ start = lookupMin[field] = MIN_VALUE[field];
+ end = lookupMax[field] = MAX_VALUE[field];
+
+ if (interval <= 1) {
+ lookup[field] = Long.MAX_VALUE;
+ return;
+ }
+ } else {
+ // We're only setting a single value - check that it is in range
+ if (start < MIN_VALUE[field]) {
+ throw new ParseException("Value " + start + " in field " + field + " is lower than the minimum allowable value for this field (min=" + MIN_VALUE[field] + ")", 0);
+ } else if (start > MAX_VALUE[field]) {
+ throw new ParseException("Value " + start + " in field " + field + " is higher than the maximum allowable value for this field (max=" + MAX_VALUE[field] + ")", 0);
+ }
+ }
+ } else {
+ // For ranges, if the start is bigger than the end value then swap them over
+ if (start > end) {
+ end ^= start;
+ start ^= end;
+ end ^= start;
+ }
+
+ if (start < 0) {
+ start = MIN_VALUE[field];
+ } else if (start < MIN_VALUE[field]) {
+ throw new ParseException("Value " + start + " in field " + field + " is lower than the minimum allowable value for this field (min=" + MIN_VALUE[field] + ")", 0);
+ }
+
+ if (end < 0) {
+ end = MAX_VALUE[field];
+ } else if (end > MAX_VALUE[field]) {
+ throw new ParseException("Value " + end + " in field " + field + " is higher than the maximum allowable value for this field (max=" + MAX_VALUE[field] + ")", 0);
+ }
+ }
+
+ if (interval < 1) {
+ interval = 1;
+ }
+
+ int i = start - MIN_VALUE[field];
+
+ // Populate the lookup table by setting all the bits corresponding to the valid field values
+ for (i = start - MIN_VALUE[field]; i <= (end - MIN_VALUE[field]);
+ i += interval) {
+ lookup[field] |= (1L << i);
+ }
+
+ // Keep track of the lowest and highest values that have been added to this field so far
+ if (lookupMin[field] > start) {
+ lookupMin[field] = start;
+ }
+
+ i += (MIN_VALUE[field] - interval);
+
+ if (lookupMax[field] < i) {
+ lookupMax[field] = i;
+ }
+ }
+
+ /**
+ * Indicates if a year is a leap year or not.
+ *
+ * @param year The year to check
+ *
+ * @return <code>true</code> if the year is a leap year, <code>false</code> otherwise.
+ */
+ private boolean isLeapYear(int year) {
+ return (((year % 4) == 0) && ((year % 100) != 0)) || ((year % 400) == 0);
+ }
+
+ /**
+ * Calculate the day of the week. Sunday = 0, Monday = 1, ... , Saturday = 6. The formula
+ * used is an optimized version of Zeller's Congruence.
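+ *
+ * For example, <code>dayOfWeek(1, 1, 2000)</code> returns 6, since 1 January 2000 was a Saturday.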
+ *
+ * @param day The day of the month (1-31)
+ * @param month The month (1 - 12)
+ * @param year The year
+ * @return The day of the week (Sunday = 0, ..., Saturday = 6)
+ */
+ private int dayOfWeek(int day, int month, int year) {
+ day += ((month < 3) ? year-- : (year - 2));
+ return ((((23 * month) / 9) + day + 4 + (year / 4)) - (year / 100) + (year / 400)) % 7;
+ }
+
+ /**
+ * Retrieves the number of days in the supplied month, taking into account leap years.
+ * If the month value is outside the range MIN_VALUE[MONTH] - MAX_VALUE[MONTH]
+ * then the year will be adjusted accordingly and the correct number of days will still
+ * be returned.
+ *
+ * @param month The month of interest.
+ * @param year The year we are checking.
+ *
+ * @return The number of days in the month.
+ */
+ private int numberOfDaysInMonth(int month, int year) {
+ while (month < 1) {
+ month += 12;
+ year--;
+ }
+
+ while (month > 12) {
+ month -= 12;
+ year++;
+ }
+
+ if (month == 2) {
+ return isLeapYear(year) ? 29 : 28;
+ } else {
+ return DAYS_IN_MONTH[month - 1];
+ }
+ }
+
+ /**
+ * Quickly retrieves the day of week value (Sun = 0, ... Sat = 6) that corresponds to the
+ * day name that is specified in the character array. Only the first 3 characters are taken
+ * into account; the rest are ignored.
+ *
+ * @param ch1 The first character of the day name
+ * @param element The character array
+ * @param i The index to start looking at
+ * @return The day of week value
+ */
+ private int getDayOfWeekVal(char ch1, char[] element, int i) throws ParseException {
+ if ((i + 1) >= element.length) {
+ throw makeParseException("Unexpected end of element encountered while parsing a day name", element, i);
+ }
+
+ int ch2 = element[i] | 0x20;
+ int ch3 = element[i + 1] | 0x20;
+
+ switch (ch1 | 0x20) {
+ case 's': // Sunday, Saturday
+
+ if ((ch2 == 'u') && (ch3 == 'n')) {
+ return 0;
+ }
+
+ if ((ch2 == 'a') && (ch3 == 't')) {
+ return 6;
+ }
+
+ break;
+ case 'm': // Monday
+
+ if ((ch2 == 'o') && (ch3 == 'n')) {
+ return 1;
+ }
+
+ break;
+ case 't': // Tuesday, Thursday
+
+ if ((ch2 == 'u') && (ch3 == 'e')) {
+ return 2;
+ }
+
+ if ((ch2 == 'h') && (ch3 == 'u')) {
+ return 4;
+ }
+
+ break;
+ case 'w': // Wednesday
+
+ if ((ch2 == 'e') && (ch3 == 'd')) {
+ return 3;
+ }
+
+ break;
+ case 'f': // Friday
+
+ if ((ch2 == 'r') && (ch3 == 'i')) {
+ return 5;
+ }
+
+ break;
+ }
+
+ throw makeParseException("Unexpected character while parsing a day name", element, i - 1);
+ }
+
+ /**
+ * Quickly retrieves the month value (Jan = 1, ..., Dec = 12) that corresponds to the month
+ * name that is specified in the character array. Only the first 3 characters are taken
+ * into account; the rest are ignored.
+ *
+ * @param ch1 The first character of the month name
+ * @param element The character array
+ * @param i The index to start looking at
+ * @return The month value
+ */
+ private int getMonthVal(char ch1, char[] element, int i) throws ParseException {
+ if ((i + 1) >= element.length) {
+ throw makeParseException("Unexpected end of element encountered while parsing a month name", element, i);
+ }
+
+ int ch2 = element[i] | 0x20;
+ int ch3 = element[i + 1] | 0x20;
+
+ switch (ch1 | 0x20) {
+ case 'j': // January, June, July
+
+ if ((ch2 == 'a') && (ch3 == 'n')) {
+ return 1;
+ }
+
+ if (ch2 == 'u') {
+ if (ch3 == 'n') {
+ return 6;
+ }
+
+ if (ch3 == 'l') {
+ return 7;
+ }
+ }
+
+ break;
+ case 'f': // February
+
+ if ((ch2 == 'e') && (ch3 == 'b')) {
+ return 2;
+ }
+
+ break;
+ case 'm': // March, May
+
+ if (ch2 == 'a') {
+ if (ch3 == 'r') {
+ return 3;
+ }
+
+ if (ch3 == 'y') {
+ return 5;
+ }
+ }
+
+ break;
+ case 'a': // April, August
+
+ if ((ch2 == 'p') && (ch3 == 'r')) {
+ return 4;
+ }
+
+ if ((ch2 == 'u') && (ch3 == 'g')) {
+ return 8;
+ }
+
+ break;
+ case 's': // September
+
+ if ((ch2 == 'e') && (ch3 == 'p')) {
+ return 9;
+ }
+
+ break;
+ case 'o': // October
+
+ if ((ch2 == 'c') && (ch3 == 't')) {
+ return 10;
+ }
+
+ break;
+ case 'n': // November
+
+ if ((ch2 == 'o') && (ch3 == 'v')) {
+ return 11;
+ }
+
+ break;
+ case 'd': // December
+
+ if ((ch2 == 'e') && (ch3 == 'c')) {
+ return 12;
+ }
+
+ break;
+ }
+
+ throw makeParseException("Unexpected character while parsing a month name", element, i - 1);
+ }
+
+ /**
+ * Recreates the original human-readable cron expression based on the internal
+ * datastructure values.
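+ *
+ * For example, if the parser was constructed with <code>"0 2-4 * * mon"</code>, this
+ * method would return <code>"0 2,3,4 * * 1"</code>.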
+ *
+ * @return A cron expression that corresponds to the current state of the
+ * internal data structure.
+ */
+ public String getExpressionSummary() {
+ StringBuffer buf = new StringBuffer();
+
+ buf.append(getExpressionSetSummary(MINUTE)).append(' ');
+ buf.append(getExpressionSetSummary(HOUR)).append(' ');
+ buf.append(getExpressionSetSummary(DAY_OF_MONTH)).append(' ');
+ buf.append(getExpressionSetSummary(MONTH)).append(' ');
+ buf.append(getExpressionSetSummary(DAY_OF_WEEK));
+
+ return buf.toString();
+ }
+
+ /**
+ * Converts the internal datastructure that holds a particular cron field into
+ * a human-readable list of values of the field's contents. For example, if the
+ * <code>DAY_OF_WEEK</code> field was submitted that had Sunday and Monday specified,
+ * the string <code>0,1</code> would be returned.
+ *
+ * If the field contains all possible values, <code>*</code> will be returned.
+ *
+ * @param field The field.
+ *
+ * @return A human-readable string representation of the field's contents.
+ */
+ private String getExpressionSetSummary(int field) {
+ if (lookup[field] == Long.MAX_VALUE) {
+ return "*";
+ }
+
+ StringBuffer buf = new StringBuffer();
+
+ boolean first = true;
+
+ for (int i = MIN_VALUE[field]; i <= MAX_VALUE[field]; i++) {
+ if ((lookup[field] & (1L << (i - MIN_VALUE[field]))) != 0) {
+ if (!first) {
+ buf.append(",");
+ } else {
+ first = false;
+ }
+
+ buf.append(String.valueOf(i));
+ }
+ }
+
+ return buf.toString();
+ }
+
+ /**
+ * Makes a <code>ParseException</code>. The exception message is constructed by
+ * taking the given message parameter and appending the supplied character data
+ * to the end of it. For example, if msg == <code>"Invalid character
+ * encountered"</code> and data == <code>{'A','g','u','s','t'}</code>, the resultant
+ * error message would be <code>"Invalid character encountered [Agust]"</code>.
+ *
+ * @param msg The error message
+ * @param data The character data that is appended to the message
+ * @param offset The offset into the data where the error was encountered.
+ *
+ * @return a newly created <code>ParseException</code> object.
+ */
+ private ParseException makeParseException(String msg, char[] data, int offset) {
+ char[] buf = new char[msg.length() + data.length + 3];
+ int msgLen = msg.length();
+ System.arraycopy(msg.toCharArray(), 0, buf, 0, msgLen);
+ buf[msgLen] = ' ';
+ buf[msgLen + 1] = '[';
+ System.arraycopy(data, 0, buf, msgLen + 2, data.length);
+ buf[buf.length - 1] = ']';
+ return new ParseException(new String(buf), offset);
+ }
+}
+
+
+class ValueSet {
+ public int pos;
+ public int value;
+}
diff --git a/src/java/com/opensymphony/oscache/util/StringUtil.java b/src/java/com/opensymphony/oscache/util/StringUtil.java
new file mode 100644
index 0000000..275e565
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/util/StringUtil.java
@@ -0,0 +1,67 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.util;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Provides common utility methods for handling strings.
+ *
+ * @author Chris Miller
+ */
+public class StringUtil {
+
+ private StringUtil() {
+ }
+
+ /**
+ * Splits a string into substrings based on the supplied delimiter
+ * character. Each extracted substring will be trimmed of leading
+ * and trailing whitespace.
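+ *
+ * For example, <code>split("cacheA, cacheB,cacheC", ',')</code> returns a list containing
+ * <code>"cacheA"</code>, <code>"cacheB"</code> and <code>"cacheC"</code>.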
+ *
+ * @param str The string to split
+ * @param delimiter The character that delimits the string
+ * @return A List containing the resultant substrings
+ */
+ public static final List split(String str, char delimiter) {
+ // return no groups if we have an empty string
+ if ((str == null) || "".equals(str)) {
+ return new ArrayList();
+ }
+
+ ArrayList parts = new ArrayList();
+ int currentIndex;
+ int previousIndex = 0;
+
+ while ((currentIndex = str.indexOf(delimiter, previousIndex)) > 0) {
+ String part = str.substring(previousIndex, currentIndex).trim();
+ parts.add(part);
+ previousIndex = currentIndex + 1;
+ }
+
+ parts.add(str.substring(previousIndex, str.length()).trim());
+
+ return parts;
+ }
+
+ /**
+ * @param s the string to be checked
+ * @return true if the string parameter contains at least one character
+ */
+ public static final boolean hasLength(String s) {
+ return (s != null) && (s.length() > 0);
+ }
+
+ /**
+ * @param s the string to be checked
+ * @return true if the string parameter is null or doesn't contain any characters
+ * @since 2.4
+ */
+ public static final boolean isEmpty(String s) {
+ return (s == null) || (s.length() == 0);
+ }
+
+}
diff --git a/src/java/com/opensymphony/oscache/util/package.html b/src/java/com/opensymphony/oscache/util/package.html
new file mode 100644
index 0000000..518a289
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/util/package.html
@@ -0,0 +1,33 @@
+
+
+
+
+
+
+
+Provides utility classes that perform fairly general-purpose functions and are required
+by OSCache.
+
+
+Package Specification
+
+Related Documentation
+
+
+For overviews, tutorials, examples, guides, and tool documentation, please see:
+
+
+
+
+
+
\ No newline at end of file
diff --git a/src/java/com/opensymphony/oscache/web/CacheContextListener.java b/src/java/com/opensymphony/oscache/web/CacheContextListener.java
new file mode 100644
index 0000000..fe55281
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/CacheContextListener.java
@@ -0,0 +1,40 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web;
+
+import javax.servlet.ServletContext;
+import javax.servlet.ServletContextEvent;
+import javax.servlet.ServletContextListener;
+
+/**
+ * Class for a clean startup and shutdown of the ServletCacheAdministrator and its application scoped cache.
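+ *
+ * A sketch of the <code>web.xml</code> entry that registers this listener (standard
+ * Servlet 2.3+ listener syntax):
+ * <pre>
+ * &lt;listener&gt;
+ *     &lt;listener-class&gt;com.opensymphony.oscache.web.CacheContextListener&lt;/listener-class&gt;
+ * &lt;/listener&gt;
+ * </pre>
+ *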
+ * @author Chris Miller
+ */
+public class CacheContextListener implements ServletContextListener {
+
+ /**
+ * This notification occurs when the webapp is ready to process requests.
+ * We use this hook to cleanly start up the {@link ServletCacheAdministrator}
+ * and create the application scope cache (which will consequentially
+ * and create the application scope cache (which will consequently
+ * initialize any listeners configured for it that implement <code>LifecycleAware</code>).
+ * As of Servlet 2.4, this is guaranteed to be called before any Servlet.init()
+ * methods.
+ */
+ public void contextInitialized(ServletContextEvent servletContextEvent) {
+ ServletContext context = servletContextEvent.getServletContext();
+ ServletCacheAdministrator.getInstance(context);
+ }
+
+ /**
+ * This notification occurs when the servlet context is about to be shut down.
+ * We use this hook to cleanly shut down the cache.
+ */
+ public void contextDestroyed(ServletContextEvent servletContextEvent) {
+ ServletContext context = servletContextEvent.getServletContext();
+ ServletCacheAdministrator.destroyInstance(context);
+ }
+
+}
\ No newline at end of file
diff --git a/src/java/com/opensymphony/oscache/web/ServletCache.java b/src/java/com/opensymphony/oscache/web/ServletCache.java
new file mode 100644
index 0000000..482d697
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/ServletCache.java
@@ -0,0 +1,118 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web;
+
+import com.opensymphony.oscache.base.Cache;
+import com.opensymphony.oscache.base.CacheEntry;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import java.io.Serializable;
+
+import javax.servlet.http.HttpSessionBindingEvent;
+import javax.servlet.http.HttpSessionBindingListener;
+
+/**
+ * A simple extension of Cache that implements a session binding listener,
+ * and deletes it's entries when unbound
+ * and deletes its entries when unbound
+ * @author Mike Cannon-Brookes
+ * @author Todd Gochenour
+ * @author Francois Beauregard
+ * @version $Revision$
+ */
+public final class ServletCache extends Cache implements HttpSessionBindingListener, Serializable {
+ private static transient final Log log = LogFactory.getLog(ServletCache.class);
+
+ /**
+ * The admin for this cache
+ */
+ private ServletCacheAdministrator admin;
+
+ /**
+ * The scope of that cache.
+ */
+ private int scope;
+
+ /**
+ * Create a new ServletCache
+ *
+ * @param admin The ServletCacheAdministrator to administer this ServletCache.
+ * @param scope The scope of all entries in this hashmap
+ */
+ public ServletCache(ServletCacheAdministrator admin, int scope) {
+ super(admin.isMemoryCaching(), admin.isUnlimitedDiskCache(), admin.isOverflowPersistence());
+ setScope(scope);
+ this.admin = admin;
+ }
+
+ /**
+ * Create a new Cache
+ *
+ * @param admin The CacheAdministrator to administer this Cache.
+ * @param algorithmClass The class that implements the cache algorithm
+ * @param limit The maximum cache size in number of entries
+ * @param scope The cache scope
+ */
+ public ServletCache(ServletCacheAdministrator admin, String algorithmClass, int limit, int scope) {
+ super(admin.isMemoryCaching(), admin.isUnlimitedDiskCache(), admin.isOverflowPersistence(), admin.isBlocking(), algorithmClass, limit);
+ setScope(scope);
+ this.admin = admin;
+ }
+
+ /**
+ * Get the cache scope
+ *
+ * @return The cache scope
+ */
+ public int getScope() {
+ return scope;
+ }
+
+ private void setScope(int scope) {
+ this.scope = scope;
+ }
+
+ /**
+ * When this Cache is bound to the session, do nothing.
+ *
+ * @param event The SessionBindingEvent.
+ */
+ public void valueBound(HttpSessionBindingEvent event) {
+ }
+
+ /**
+ * When the user's session ends, all listeners are finalized and the
+ * session cache directory is deleted from disk.
+ *
+ * @param event The event that triggered this unbinding.
+ */
+ public void valueUnbound(HttpSessionBindingEvent event) {
+ if (log.isInfoEnabled()) {
+ // CACHE-229: don't access the session's id, because this can throw an IllegalStateException
+ log.info("[Cache] Unbound from session " + event.getSession() + " using name " + event.getName());
+ }
+
+ admin.finalizeListeners(this);
+ clear();
+ }
+
+ /**
+ * Indicates whether or not the cache entry is stale. This overrides the
+ * {@link Cache#isStale(CacheEntry, int, String)} method to take into account any
+ * flushing that may have been applied to the scope that this cache belongs to.
+ *
+ * @param cacheEntry The cache entry to test the freshness of.
+ * @param refreshPeriod The maximum allowable age of the entry, in seconds.
+ * @param cronExpiry A cron expression that defines fixed expiry dates and/or
+ * times for this cache entry.
+ *
+ * @return <code>true</code> if the entry is stale, <code>false</code> otherwise.
+ */
+ protected boolean isStale(CacheEntry cacheEntry, int refreshPeriod, String cronExpiry) {
+ return super.isStale(cacheEntry, refreshPeriod, cronExpiry) || admin.isScopeFlushed(cacheEntry, scope);
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/web/ServletCacheAdministrator.java b/src/java/com/opensymphony/oscache/web/ServletCacheAdministrator.java
new file mode 100644
index 0000000..713a64d
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/ServletCacheAdministrator.java
@@ -0,0 +1,794 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web;
+
+import com.opensymphony.oscache.base.*;
+import com.opensymphony.oscache.base.events.ScopeEvent;
+import com.opensymphony.oscache.base.events.ScopeEventListener;
+import com.opensymphony.oscache.base.events.ScopeEventType;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import java.io.Serializable;
+
+import java.util.*;
+
+import javax.servlet.ServletContext;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpSession;
+import javax.servlet.jsp.PageContext;
+
+/**
+ * A ServletCacheAdministrator creates, flushes and administers the cache.
+ *
+ * This is a "servlet Singleton". This means it's not a Singleton in the traditional sense
+ * (one that is stored in a static instance); it's a Singleton _per web app context_.
+ *
+ * Once created it manages the cache path on disk through the oscache.properties
+ * file, and also keeps track of the flush times.
+ *
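+ * A brief usage sketch from servlet code (the scope constant comes from
+ * <code>javax.servlet.jsp.PageContext</code>):
+ * <pre>
+ * ServletCacheAdministrator admin = ServletCacheAdministrator.getInstance(getServletContext());
+ * Cache cache = admin.getCache(request, PageContext.APPLICATION_SCOPE);
+ * </pre>
+ *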
+ * @author Mike Cannon-Brookes
+ * @author Todd Gochenour
+ * @author Francois Beauregard
+ * @author Alain Bergevin
+ * @author Chris Miller
+ * @version $Revision$
+ */
+public class ServletCacheAdministrator extends AbstractCacheAdministrator implements Serializable {
+ private static final transient Log log = LogFactory.getLog(ServletCacheAdministrator.class);
+
+ /**
+ * Constants for properties read/written from/to file
+ */
+ private final static String CACHE_USE_HOST_DOMAIN_KEY = "cache.use.host.domain.in.key";
+ private final static String CACHE_KEY_KEY = "cache.key";
+
+ /**
+ * The default cache key that is used to store the cache in context.
+ */
+ private final static String DEFAULT_CACHE_KEY = "__oscache_cache";
+
+ /**
+ * Constants for scope's name
+ */
+ public final static String SESSION_SCOPE_NAME = "session";
+ public final static String APPLICATION_SCOPE_NAME = "application";
+
+ /**
+ * The suffix added to the cache key under which a
+ * ServletCacheAdministrator is stored in the ServletContext
+ */
+ private final static String CACHE_ADMINISTRATOR_KEY_SUFFIX = "_admin";
+
+ /**
+ * The key under which an array of all ServletCacheAdministrator objects
+ * will be stored in the ServletContext
+ */
+ private final static String CACHE_ADMINISTRATORS_KEY = "__oscache_admins";
+
+ /**
+ * Key used to store the current scope in the configuration. This is a hack
+ * to let the scope information get passed through to the DiskPersistenceListener,
+ * and will be removed in a future release.
+ */
+ public final static String HASH_KEY_SCOPE = "scope";
+
+ /**
+ * Key used to store the current session ID in the configuration. This is a hack
+ * to let the scope information get passed through to the DiskPersistenceListener,
+ * and will be removed in a future release.
+ */
+ public final static String HASH_KEY_SESSION_ID = "sessionId";
+
+ /**
+ * Key used to store the servlet container temporary directory in the configuration.
+ * This is a hack to let the scope information get passed through to the
+ * DiskPersistenceListener, and will be removed in a future release.
+ */
+ public final static String HASH_KEY_CONTEXT_TMPDIR = "context.tempdir";
+
+ /**
+ * The string to use as a file separator.
+ */
+ private final static String FILE_SEPARATOR = "/";
+
+ /**
+ * The character to use as a file separator.
+ */
+ private final static char FILE_SEPARATOR_CHAR = FILE_SEPARATOR.charAt(0);
+
+ /**
+ * Constant for Key generation.
+ */
+ private final static short AVERAGE_KEY_LENGTH = 30;
+
+ /**
+ * Usable caracters for key generation
+ * Usable characters for key generation
+ private static final String m_strBase64Chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
+
+ /**
+ * Map containing the flush times of different scopes
+ */
+ private Map flushTimes;
+
+ /**
+ * Required so we can look up the app scope cache without forcing a session creation.
+ */
+ private transient ServletContext context;
+
+ /**
+ * Key to use for storing and retrieving Object in contexts (Servlet, session).
+ */
+ private String cacheKey;
+
+ /**
+ * Set property cache.use.host.domain.in.key=true to add domain information to key
+ * generation for hosting multiple sites.
+ */
+ private boolean useHostDomainInKey = false;
+
+ /**
+ * Create the cache administrator.
+ *
+ * This will reset all the flush times and load the properties file.
+ */
+ private ServletCacheAdministrator(ServletContext context, Properties p) {
+ super(p);
+ config.set(HASH_KEY_CONTEXT_TMPDIR, context.getAttribute("javax.servlet.context.tempdir"));
+
+ flushTimes = new HashMap();
+ initHostDomainInKey();
+ this.context = context;
+ }
+
+ /**
+ * Obtain an instance of the CacheAdministrator
+ *
+ * @param context The ServletContext that this CacheAdministrator is a Singleton under
+ * @return Returns the CacheAdministrator instance for this context
+ */
+ public static ServletCacheAdministrator getInstance(ServletContext context) {
+ return getInstance(context, null);
+ }
+
+ /**
+ * Obtain an instance of the CacheAdministrator for the specified key
+ *
+ * @param context The ServletContext that this CacheAdministrator is a Singleton under
+ * @param key the cache key or admin cache key of the CacheAdministrator wanted
+ * @return Returns the CacheAdministrator instance for this context, or null if no
+ * CacheAdministrator exists with the key supplied
+ */
+ public static ServletCacheAdministrator getInstanceFromKey(ServletContext context, String key) {
+ // Note: we do not bother to check whether the key is null because it must not be.
+ if (!key.endsWith(CACHE_ADMINISTRATOR_KEY_SUFFIX)) {
+ key = key + CACHE_ADMINISTRATOR_KEY_SUFFIX;
+ }
+ return (ServletCacheAdministrator) context.getAttribute(key);
+ }
+
+ /**
+ * Obtain an instance of the CacheAdministrator
+ *
+ * @param context The ServletContext that this CacheAdministrator is a Singleton under
+ * @param p the properties to use for the cache if the cache administrator has not been
+ * created yet. Once the administrator has been created, the properties parameter is
+ * ignored for all future invocations. If a null value is passed in, then the properties
+ * are loaded from the oscache.properties file in the classpath.
+ * @return Returns the CacheAdministrator instance for this context
+ */
+ public synchronized static ServletCacheAdministrator getInstance(ServletContext context, Properties p)
+ {
+ String adminKey = null;
+ if (p!= null) {
+ adminKey = p.getProperty(CACHE_KEY_KEY);
+ }
+ if (adminKey == null) {
+ adminKey = DEFAULT_CACHE_KEY;
+ }
+ adminKey += CACHE_ADMINISTRATOR_KEY_SUFFIX;
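+        // adminKey now ends with "_admin", matching the suffix expected by getInstanceFromKey().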
+
+ ServletCacheAdministrator admin = (ServletCacheAdministrator) context.getAttribute(adminKey);
+
+ // First time we need to create the administrator and store it in the
+ // servlet context
+ if (admin == null) {
+ admin = new ServletCacheAdministrator(context, p);
+ Map admins = (Map) context.getAttribute(CACHE_ADMINISTRATORS_KEY);
+ if (admins == null) {
+ admins = new HashMap();
+ }
+ admins.put(adminKey, admin);
+ context.setAttribute(CACHE_ADMINISTRATORS_KEY, admins);
+ context.setAttribute(adminKey, admin);
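+            // Keeping both the per-key attribute and the shared map allows destroyInstance()
+            // to locate and shut down every administrator later.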
+
+ if (log.isInfoEnabled()) {
+ log.info("Created new instance of ServletCacheAdministrator with key "+adminKey);
+ }
+
+ admin.getAppScopeCache(context);
+ }
+
+ if (admin.context == null) {
+ admin.context = context;
+ }
+
+ return admin;
+ }
+
+ /**
+ * Shuts down all servlet cache administrators. This should usually only
+ * be called when the controlling application shuts down.
+ */
+ public static void destroyInstance(ServletContext context)
+ {
+ ServletCacheAdministrator admin;
+ Map admins = (Map) context.getAttribute(CACHE_ADMINISTRATORS_KEY);
+ if (admins != null)
+ {
+ Set keys = admins.keySet();
+ Iterator it = keys.iterator();
+ while (it.hasNext())
+ {
+ String adminKey = (String) it.next();
+ admin = (ServletCacheAdministrator) admins.get( adminKey );
+ if (admin != null)
+ {
+ // Finalize the application scope cache
+ Cache cache = (Cache) context.getAttribute(admin.getCacheKey());
+ if (cache != null) {
+ admin.finalizeListeners(cache);
+ context.removeAttribute(admin.getCacheKey());
+ context.removeAttribute(adminKey);
+ cache = null;
+ if (log.isInfoEnabled()) {
+ log.info("Shut down the ServletCacheAdministrator "+adminKey);
+ }
+ }
+ admin = null;
+ }
+ }
+ context.removeAttribute(CACHE_ADMINISTRATORS_KEY);
+ }
+ }
+
+
+ /**
+ * Grabs the cache for the specified scope
+ *
+ * @param request The current request
+ * @param scope The scope of this cache (PageContext.APPLICATION_SCOPE
+     * or PageContext.SESSION_SCOPE)
+ * @return The cache
+ */
+ public Cache getCache(HttpServletRequest request, int scope) {
+ if (scope == PageContext.APPLICATION_SCOPE) {
+ return getAppScopeCache(context);
+ }
+
+ if (scope == PageContext.SESSION_SCOPE) {
+ return getSessionScopeCache(request.getSession(true));
+ }
+
+ throw new RuntimeException("The supplied scope value of " + scope + " is invalid. Acceptable values are PageContext.APPLICATION_SCOPE and PageContext.SESSION_SCOPE");
+ }
+
+ /**
+ * A convenience method to retrieve the application scope cache
+     *
+ * @param context the current ServletContext
+ * @return the application scope cache. If none is present, one will
+ * be created.
+ */
+ public Cache getAppScopeCache(ServletContext context) {
+ Cache cache;
+ Object obj = context.getAttribute(getCacheKey());
+
+ if ((obj == null) || !(obj instanceof Cache)) {
+ if (log.isInfoEnabled()) {
+ log.info("Created new application-scoped cache at key: " + getCacheKey());
+ }
+
+ cache = createCache(PageContext.APPLICATION_SCOPE, null);
+ context.setAttribute(getCacheKey(), cache);
+ } else {
+ cache = (Cache) obj;
+ }
+
+ return cache;
+ }
+
+ /**
+ * A convenience method to retrieve the session scope cache
+ *
+ * @param session the current HttpSession
+ * @return the session scope cache for this session. If none is present,
+ * one will be created.
+ */
+ public Cache getSessionScopeCache(HttpSession session) {
+ Cache cache;
+ Object obj = session.getAttribute(getCacheKey());
+
+ if ((obj == null) || !(obj instanceof Cache)) {
+ if (log.isInfoEnabled()) {
+ log.info("Created new session-scoped cache in session " + session.getId() + " at key: " + getCacheKey());
+ }
+
+ cache = createCache(PageContext.SESSION_SCOPE, session.getId());
+ session.setAttribute(getCacheKey(), cache);
+ } else {
+ cache = (Cache) obj;
+ }
+
+ return cache;
+ }
+
+ /**
+ * Get the cache key from the properties. Set it to a default value if it
+ * is not present in the properties
+ *
+ * @return The cache.key property or the DEFAULT_CACHE_KEY
+ */
+ public String getCacheKey() {
+ if (cacheKey == null) {
+ cacheKey = getProperty(CACHE_KEY_KEY);
+
+ if (cacheKey == null) {
+ cacheKey = DEFAULT_CACHE_KEY;
+ }
+ }
+
+ return cacheKey;
+ }
+
+ /**
+ * Set the flush time for a specific scope to a specific time
+ *
+ * @param date The time to flush the scope
+ * @param scope The scope to be flushed
+ */
+ public void setFlushTime(Date date, int scope) {
+ if (log.isInfoEnabled()) {
+ log.info("Flushing scope " + scope + " at " + date);
+ }
+
+ synchronized (flushTimes) {
+ if (date != null) {
+ // Trigger a SCOPE_FLUSHED event
+ dispatchScopeEvent(ScopeEventType.SCOPE_FLUSHED, scope, date, null);
+ flushTimes.put(new Integer(scope), date);
+ } else {
+ logError("setFlushTime called with a null date.");
+ throw new IllegalArgumentException("setFlushTime called with a null date.");
+ }
+ }
+ }
+
+ /**
+ * Set the flush time for a specific scope to the current time.
+ *
+ * @param scope The scope to be flushed
+ */
+ public void setFlushTime(int scope) {
+ setFlushTime(new Date(), scope);
+ }
+
+ /**
+ * Get the flush time for a particular scope.
+ *
+ * @param scope The scope to get the flush time for.
+ * @return A date representing the time this scope was last flushed.
+ * Returns null if it has never been flushed.
+ */
+ public Date getFlushTime(int scope) {
+ synchronized (flushTimes) {
+ return (Date) flushTimes.get(new Integer(scope));
+ }
+ }
+
+ /**
+ * Retrieve an item from the cache
+ *
+ * @param scope The cache scope
+ * @param request The servlet request
+ * @param key The key of the object to retrieve
+ * @param refreshPeriod The time interval specifying if an entry needs refresh
+ * @return The requested object
+ * @throws NeedsRefreshException
+ */
+ public Object getFromCache(int scope, HttpServletRequest request, String key, int refreshPeriod) throws NeedsRefreshException {
+ Cache cache = getCache(request, scope);
+ key = this.generateEntryKey(key, request, scope);
+ return cache.getFromCache(key, refreshPeriod);
+ }
+
+ /**
+ * Checks if the given scope was flushed more recently than the CacheEntry provided.
+ * Used to determine whether to refresh the particular CacheEntry.
+ *
+ * @param cacheEntry The cache entry which we're seeing whether to refresh
+ * @param scope The scope we're checking
+ *
+ * @return Whether or not the scope has been flushed more recently than this cache entry was updated.
+ */
+ public boolean isScopeFlushed(CacheEntry cacheEntry, int scope) {
+ Date flushDateTime = getFlushTime(scope);
+
+ if (flushDateTime != null) {
+ long lastUpdate = cacheEntry.getLastUpdate();
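+            // The entry is considered stale if the scope was flushed at or after the entry's last update.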
+ return (flushDateTime.getTime() >= lastUpdate);
+ } else {
+ return false;
+ }
+ }
+
+ /**
+ * Register a listener for Cache Map events.
+ *
+ * @param listener The object that listens to events.
+ */
+ public void addScopeEventListener(ScopeEventListener listener) {
+ listenerList.add(ScopeEventListener.class, listener);
+ }
+
+ /**
+ * Cancels a pending cache update. This should only be called by a thread
+ * that received a {@link NeedsRefreshException} and was unable to generate
+ * some new cache content.
+ *
+ * @param scope The cache scope
+ * @param request The servlet request
+ * @param key The cache entry key to cancel the update of.
+ */
+ public void cancelUpdate(int scope, HttpServletRequest request, String key) {
+ Cache cache = getCache(request, scope);
+ key = this.generateEntryKey(key, request, scope);
+ cache.cancelUpdate(key);
+ }
+
+ /**
+ * Flush all scopes at a particular time
+ *
+ * @param date The time to flush the scope
+ */
+ public void flushAll(Date date) {
+ synchronized (flushTimes) {
+ setFlushTime(date, PageContext.APPLICATION_SCOPE);
+ setFlushTime(date, PageContext.SESSION_SCOPE);
+ setFlushTime(date, PageContext.REQUEST_SCOPE);
+ setFlushTime(date, PageContext.PAGE_SCOPE);
+ }
+
+ // Trigger a flushAll event
+ dispatchScopeEvent(ScopeEventType.ALL_SCOPES_FLUSHED, -1, date, null);
+ }
+
+ /**
+ * Flush all scopes instantly.
+ */
+ public void flushAll() {
+ flushAll(new Date());
+ }
+
+ /**
+ * Generates a cache entry key.
+ *
+     * If the string key is not specified, the HTTP request URI and query string are used.
+     * Operating systems with a filename length limit below 255 characters, or with
+     * case-insensitive filenames, may have issues with key generation where
+     * two distinct pages map to the same key.
+     *
+     * POST requests (which have no distinguishing
+     * query string) may also generate identical keys for what are actually different pages.
+ * In these cases, specify an explicit key attribute for the CacheTag.
+ *
+ * @param key The key entered by the user
+ * @param request The current request
+ * @param scope The scope this cache entry is under
+ * @return The generated cache key
+ */
+ public String generateEntryKey(String key, HttpServletRequest request, int scope) {
+ return generateEntryKey(key, request, scope, null, null);
+ }
+
+ /**
+ * Generates a cache entry key.
+ *
+     * If the string key is not specified, the HTTP request URI and query string are used.
+     * Operating systems with a filename length limit below 255 characters, or with
+     * case-insensitive filenames, may have issues with key generation where
+     * two distinct pages map to the same key.
+     *
+     * POST requests (which have no distinguishing
+     * query string) may also generate identical keys for what are actually different pages.
+ * In these cases, specify an explicit key attribute for the CacheTag.
+ *
+ * @param key The key entered by the user
+ * @param request The current request
+ * @param scope The scope this cache entry is under
+ * @param language The ISO-639 language code to distinguish different pages in application scope
+ * @return The generated cache key
+ */
+ public String generateEntryKey(String key, HttpServletRequest request, int scope, String language) {
+ return generateEntryKey(key, request, scope, language, null);
+ }
+
+ /**
+ * Generates a cache entry key.
+ *
+     * If the string key is not specified, the HTTP request URI and query string are used.
+     * Operating systems with a filename length limit below 255 characters, or with
+     * case-insensitive filenames, may have issues with key generation where
+     * two distinct pages map to the same key.
+     *
+     * POST requests (which have no distinguishing
+     * query string) may also generate identical keys for what are actually different pages.
+ * In these cases, specify an explicit key attribute for the CacheTag.
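+     *
+     * For illustration only: with no explicit key, the generated key is roughly
+     * "/language/hostname/request-uri_METHOD_" followed by a hash of the sorted query
+     * string; the language and hostname segments appear only when supplied or enabled.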
+ *
+ * @param key The key entered by the user
+ * @param request The current request
+ * @param scope The scope this cache entry is under
+ * @param language The ISO-639 language code to distinguish different pages in application scope
+ * @param suffix The ability to put a suffix at the end of the key
+ * @return The generated cache key
+ */
+ public String generateEntryKey(String key, HttpServletRequest request, int scope, String language, String suffix) {
+        // Buffer used for building the cache entry key.
+ StringBuffer cBuffer = new StringBuffer(AVERAGE_KEY_LENGTH);
+
+ // Append the language if available
+ if (language != null) {
+ cBuffer.append(FILE_SEPARATOR).append(language);
+ }
+
+ // Servers for multiple host domains need this distinction in the key
+ if (useHostDomainInKey) {
+ cBuffer.append(FILE_SEPARATOR).append(request.getServerName());
+ }
+
+ if (key != null) {
+ cBuffer.append(FILE_SEPARATOR).append(key);
+ } else {
+ String generatedKey = request.getRequestURI();
+
+ if (generatedKey.charAt(0) != FILE_SEPARATOR_CHAR) {
+ cBuffer.append(FILE_SEPARATOR_CHAR);
+ }
+
+ cBuffer.append(generatedKey);
+ cBuffer.append("_").append(request.getMethod()).append("_");
+
+ generatedKey = getSortedQueryString(request);
+
+ if (generatedKey != null) {
+ try {
+ java.security.MessageDigest digest = java.security.MessageDigest.getInstance("MD5");
+ byte[] b = digest.digest(generatedKey.getBytes());
+ cBuffer.append('_');
+
+                    // Base64 encoding can produce slash characters, which are unwanted in keys.
+ cBuffer.append(toBase64(b).replace('/', '_'));
+ } catch (Exception e) {
+ // Ignore query string
+ }
+ }
+ }
+
+ // Do we want a suffix
+ if ((suffix != null) && (suffix.length() > 0)) {
+ cBuffer.append(suffix);
+ }
+
+ return cBuffer.toString();
+ }
+
+ /**
+ * Creates a string that contains all of the request parameters and their
+ * values in a single string. This is very similar to
+     * HttpServletRequest.getQueryString(), except the parameters are
+     * sorted by name, and if there is a jsessionid parameter it is
+     * filtered out.
+     * If the request has no parameters, this method returns null.
+ */
+ protected String getSortedQueryString(HttpServletRequest request) {
+ Map paramMap = request.getParameterMap();
+
+ if (paramMap.isEmpty()) {
+ return null;
+ }
+
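+        // Wrapping the parameter map in a TreeMap orders entries by parameter name,
+        // so equivalent requests produce identical query strings (and cache keys).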
+ Set paramSet = new TreeMap(paramMap).entrySet();
+
+ StringBuffer buf = new StringBuffer();
+
+ boolean first = true;
+
+ for (Iterator it = paramSet.iterator(); it.hasNext();) {
+ Map.Entry entry = (Map.Entry) it.next();
+ String[] values = (String[]) entry.getValue();
+
+ for (int i = 0; i < values.length; i++) {
+ String key = (String) entry.getKey();
+
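+                // Skip the jsessionid parameter; the length check is just a cheap pre-test
+                // before the string comparison.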
+ if ((key.length() != 10) || !"jsessionid".equals(key)) {
+ if (first) {
+ first = false;
+ } else {
+ buf.append('&');
+ }
+
+ buf.append(key).append('=').append(values[i]);
+ }
+ }
+ }
+
+ // We get a 0 length buffer if the only parameter was a jsessionid
+ if (buf.length() == 0) {
+ return null;
+ } else {
+ return buf.toString();
+ }
+ }
+
+ /**
+ * Log error messages to commons logging.
+ *
+ * @param message Message to log.
+ */
+ public void logError(String message) {
+ log.error("[oscache]: " + message);
+ }
+
+ /**
+ * Put an object in the cache. This should only be called by a thread
+ * that received a {@link NeedsRefreshException}. Using session scope
+     * the thread has to ensure that the session wasn't invalidated in
+ * the meantime. CacheTag and CacheFilter guarantee that the same
+ * cache is used in cancelUpdate and getFromCache.
+ *
+ * @param scope The cache scope
+ * @param request The servlet request
+ * @param key The object key
+ * @param content The object to add
+ */
+ public void putInCache(int scope, HttpServletRequest request, String key, Object content) {
+ putInCache(scope, request, key, content, null);
+ }
+
+ /**
+ * Put an object in the cache. This should only be called by a thread
+ * that received a {@link NeedsRefreshException}. Using session scope
+     * the thread has to ensure that the session wasn't invalidated in
+ * the meantime. CacheTag and CacheFilter guarantee that the same
+ * cache is used in cancelUpdate and getFromCache.
+ *
+ * @param scope The cache scope
+ * @param request The servlet request
+ * @param key The object key
+ * @param content The object to add
+ * @param policy The refresh policy
+ */
+ public void putInCache(int scope, HttpServletRequest request, String key, Object content, EntryRefreshPolicy policy) {
+ Cache cache = getCache(request, scope);
+ key = this.generateEntryKey(key, request, scope);
+ cache.putInCache(key, content, policy);
+ }
+
+ /**
+ * Sets the cache capacity (number of items). If the cache contains
+     * more than capacity items, then items will be removed
+     * to bring the cache back down to the new size.
+ *
+ * @param scope The cache scope
+ * @param request The servlet request
+ * @param capacity The new capacity
+ */
+ public void setCacheCapacity(int scope, HttpServletRequest request, int capacity) {
+ setCacheCapacity(capacity);
+ getCache(request, scope).setCapacity(capacity);
+ }
+
+ /**
+ * Unregister a listener for Cache Map events.
+ *
+ * @param listener The object that currently listens to events.
+ */
+ public void removeScopeEventListener(ScopeEventListener listener) {
+ listenerList.remove(ScopeEventListener.class, listener);
+ }
+
+ /**
+ * Finalizes all the listeners that are associated with the given cache object
+ */
+ protected void finalizeListeners(Cache cache) {
+ super.finalizeListeners(cache);
+ }
+
+ /**
+ * Convert a byte array into a Base64 string (as used in mime formats)
+ */
+ private static String toBase64(byte[] aValue) {
+ int byte1;
+ int byte2;
+ int byte3;
+ int iByteLen = aValue.length;
+ StringBuffer tt = new StringBuffer();
+
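+        // Process the input three bytes at a time, emitting four base64 characters per group
+        // and padding the final group with '=' when fewer than three bytes remain.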
+ for (int i = 0; i < iByteLen; i += 3) {
+ boolean bByte2 = (i + 1) < iByteLen;
+ boolean bByte3 = (i + 2) < iByteLen;
+ byte1 = aValue[i] & 0xFF;
+ byte2 = (bByte2) ? (aValue[i + 1] & 0xFF) : 0;
+ byte3 = (bByte3) ? (aValue[i + 2] & 0xFF) : 0;
+
+ tt.append(m_strBase64Chars.charAt(byte1 / 4));
+ tt.append(m_strBase64Chars.charAt((byte2 / 16) + ((byte1 & 0x3) * 16)));
+ tt.append(((bByte2) ? m_strBase64Chars.charAt((byte3 / 64) + ((byte2 & 0xF) * 4)) : '='));
+ tt.append(((bByte3) ? m_strBase64Chars.charAt(byte3 & 0x3F) : '='));
+ }
+
+ return tt.toString();
+ }
+
+ /**
+ * Create a cache
+ *
+ * @param scope The cache scope
+     * @param sessionId The sessionId for which the cache will be created
+ * @return A new cache
+ */
+ private ServletCache createCache(int scope, String sessionId) {
+ ServletCache newCache = new ServletCache(this, algorithmClass, cacheCapacity, scope);
+
+ // TODO - Fix me please!
+ // Hack! This is nasty - if two sessions are created within a short
+ // space of time it is possible they will end up with duplicate
+ // session IDs being passed to the DiskPersistenceListener!...
+ config.set(HASH_KEY_SCOPE, "" + scope);
+ config.set(HASH_KEY_SESSION_ID, sessionId);
+
+ newCache = (ServletCache) configureStandardListeners(newCache);
+
+ return newCache;
+ }
+
+ /**
+ * Dispatch a scope event to all registered listeners.
+ *
+ * @param eventType The type of event
+ * @param scope Scope that was flushed (Does not apply for FLUSH_ALL event)
+ * @param date Date of flushing
+ * @param origin The origin of the event
+ */
+ private void dispatchScopeEvent(ScopeEventType eventType, int scope, Date date, String origin) {
+ // Create the event
+ ScopeEvent event = new ScopeEvent(eventType, scope, date, origin);
+
+ // Guaranteed to return a non-null array
+ Object[] listeners = listenerList.getListenerList();
+
+ // Process the listeners last to first, notifying
+ // those that are interested in this event
+ for (int i = listeners.length - 2; i >= 0; i -= 2) {
+ if (listeners[i+1] instanceof ScopeEventListener) {
+ ((ScopeEventListener) listeners[i + 1]).scopeFlushed(event);
+ }
+ }
+ }
+
+ /**
+ * Set property cache.use.host.domain.in.key=true to add domain information to key
+ * generation for hosting multiple sites
+ */
+ private void initHostDomainInKey() {
+ String propStr = getProperty(CACHE_USE_HOST_DOMAIN_KEY);
+
+ useHostDomainInKey = "true".equalsIgnoreCase(propStr);
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/web/WebEntryRefreshPolicy.java b/src/java/com/opensymphony/oscache/web/WebEntryRefreshPolicy.java
new file mode 100644
index 0000000..8e030b0
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/WebEntryRefreshPolicy.java
@@ -0,0 +1,35 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web;
+
+import com.opensymphony.oscache.base.EntryRefreshPolicy;
+
+/**
+ * Interface to implement an entry refresh policy.
+ * Specify the name of the implementing class using the refreshpolicyclass
+ * attribute of the cache tag. If any additional parameters are required,
+ * they should be supplied using the refreshpolicyparam attribute.
+ *
+ * For example:
+ *
+ * <cache:cache key="mykey"
+ * refreshpolicyclass="com.mycompany.cache.policy.MyRefreshPolicy"
+ * refreshpolicyparam="...additional data...">
+ My cached content
+ * </cache:cache>
+ *
+ *
+ * @version $Revision$
+ * @author Francois Beauregard
+ */
+public interface WebEntryRefreshPolicy extends EntryRefreshPolicy {
+ /**
+ * Initializes the refresh policy.
+ *
+ * @param key The cache key that is being checked.
+ * @param param Any optional parameters that were supplied
+ */
+ public void init(String key, String param);
+}
diff --git a/src/java/com/opensymphony/oscache/web/filter/CacheFilter.java b/src/java/com/opensymphony/oscache/web/filter/CacheFilter.java
new file mode 100644
index 0000000..fb990e7
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/filter/CacheFilter.java
@@ -0,0 +1,823 @@
+/*
+ * Copyright (c) 2002-2009 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web.filter;
+
+import com.opensymphony.oscache.base.Cache;
+import com.opensymphony.oscache.base.Config;
+import com.opensymphony.oscache.base.EntryRefreshPolicy;
+import com.opensymphony.oscache.base.NeedsRefreshException;
+import com.opensymphony.oscache.util.ClassLoaderUtil;
+import com.opensymphony.oscache.util.StringUtil;
+import com.opensymphony.oscache.web.ServletCacheAdministrator;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Properties;
+
+import javax.servlet.*;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import javax.servlet.jsp.PageContext;
+
+/**
+ * CacheFilter is a filter that allows for server-side caching of post-processed servlet content.
+ *
+ * It also gives great programmatic control over refreshing, flushing and updating the cache.
+ *
+ * @author Serge Knystautas
+ * @author Mike Cannon-Brookes
+ * @author Lars Torunski
+ * @version $Revision$
+ */
+public class CacheFilter implements Filter, ICacheKeyProvider, ICacheGroupsProvider {
+ // Header
+ public static final String HEADER_LAST_MODIFIED = "Last-Modified";
+ public static final String HEADER_CONTENT_TYPE = "Content-Type";
+ public static final String HEADER_CONTENT_ENCODING = "Content-Encoding";
+ public static final String HEADER_EXPIRES = "Expires";
+ public static final String HEADER_IF_MODIFIED_SINCE = "If-Modified-Since";
+ public static final String HEADER_CACHE_CONTROL = "Cache-Control";
+ public static final String HEADER_ACCEPT_ENCODING = "Accept-Encoding";
+ public static final String HEADER_ETAG = "ETag";
+ public static final String HEADER_CONTENT_DISPOSITION = "Content-Disposition";
+
+ // Fragment parameter
+ public static final int FRAGMENT_AUTODETECT = -1;
+ public static final int FRAGMENT_NO = 0;
+ public static final int FRAGMENT_YES = 1;
+
+ // No cache parameter
+ public static final int NOCACHE_OFF = 0;
+ public static final int NOCACHE_SESSION_ID_IN_URL = 1;
+
+ // Last Modified parameter
+ public static final long LAST_MODIFIED_OFF = 0;
+ public static final long LAST_MODIFIED_ON = 1;
+ public static final long LAST_MODIFIED_INITIAL = -1;
+
+ // Expires parameter
+ public static final long EXPIRES_OFF = 0;
+ public static final long EXPIRES_ON = 1;
+ public static final long EXPIRES_TIME = -1;
+
+ // ETag parameter
+ public static final int ETAG_OFF = 0;
+ public static final int ETAG_WEAK = 1;
+ //public static final int ETAG_STRONG = 2;
+
+ // Cache Control
+ public static final long MAX_AGE_NO_INIT = Long.MIN_VALUE;
+ public static final long MAX_AGE_TIME = Long.MAX_VALUE;
+
+ // request attribute to avoid reentrance
+ private final static String REQUEST_FILTERED = "__oscache_filtered__";
+ private String requestFiltered;
+
+ // the policy for the expires header
+ private EntryRefreshPolicy expiresRefreshPolicy;
+
+ // the logger
+ private final Log log = LogFactory.getLog(this.getClass());
+
+ // filter variables
+ private FilterConfig config;
+ private ServletCacheAdministrator admin = null;
+ private int cacheScope = PageContext.APPLICATION_SCOPE; // filter scope - default is APPLICATION
+ private int fragment = FRAGMENT_AUTODETECT; // defines if this filter handles fragments of a page - default is auto detect
+ private int time = 60 * 60; // time before cache should be refreshed - default one hour (in seconds)
+ private String cron = null; // A cron expression that determines when this cached content will expire - default is null
+ private int nocache = NOCACHE_OFF; // defines special no cache option for the requests - default is off
+    private long lastModified = LAST_MODIFIED_INITIAL; // defines if the last-modified-header will be sent - default is initial setting
+ private long expires = EXPIRES_ON; // defines if the expires-header will be sent - default is on
+ private int etag = ETAG_WEAK; // defines the type of the etag header - default is weak
+ private long cacheControlMaxAge = -60; // defines which max-age in Cache-Control to be set - default is 60 seconds for max-age
+    private ICacheKeyProvider cacheKeyProvider = this; // the provider of the cache key - default is the CacheFilter itself
+    private ICacheGroupsProvider cacheGroupsProvider = this; // the provider of the cache groups - default is the CacheFilter itself
+ private List disableCacheOnMethods = null; // caching can be disabled by defining the http methods - default is off
+
+ /**
+ * Filter clean-up
+ */
+ public void destroy() {
+ //Not much to do...
+ }
+
+ /**
+ * The doFilter call caches the response by wrapping the HttpServletResponse
+ * object so that the output stream can be caught. This works by splitting off the output
+ * stream into two with the {@link SplitServletOutputStream} class. One stream gets written
+ * out to the response as normal, the other is fed into a byte array inside a {@link ResponseContent}
+ * object.
+ *
+ * @param request The servlet request
+ * @param response The servlet response
+ * @param chain The filter chain
+     * @throws ServletException
+     * @throws IOException
+ */
+ public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws ServletException, IOException {
+ if (log.isInfoEnabled()) {
+ log.info("OSCache: filter in scope " + cacheScope);
+ }
+
+ // avoid reentrance (CACHE-128) and check if request is cacheable
+ if (isFilteredBefore(request) || !isCacheableInternal(request)) {
+ chain.doFilter(request, response);
+ return;
+ }
+ request.setAttribute(requestFiltered, Boolean.TRUE);
+
+ HttpServletRequest httpRequest = (HttpServletRequest) request;
+
+ // checks if the response will be a fragment of a page
+ boolean fragmentRequest = isFragment(httpRequest);
+
+ // avoid useless session creation for application scope pages (CACHE-129)
+ Cache cache;
+ if (cacheScope == PageContext.SESSION_SCOPE) {
+ cache = admin.getSessionScopeCache(httpRequest.getSession(true));
+ } else {
+ cache = admin.getAppScopeCache(config.getServletContext());
+ }
+
+ // generate the cache entry key
+ String key = cacheKeyProvider.createCacheKey(httpRequest, admin, cache);
+
+ try {
+ ResponseContent respContent = (ResponseContent) cache.getFromCache(key, time, cron);
+
+ if (log.isInfoEnabled()) {
+ log.info("OSCache: Using cached entry for " + key);
+ }
+
+ boolean acceptsGZip = false;
+ if ((!fragmentRequest) && (lastModified != LAST_MODIFIED_OFF)) {
+ long clientLastModified = httpRequest.getDateHeader(HEADER_IF_MODIFIED_SINCE); // will return -1 if no header...
+
+ // only reply with SC_NOT_MODIFIED
+ // if the client has already the newest page and the response isn't a fragment in a page
+ if ((clientLastModified != -1) && (clientLastModified >= respContent.getLastModified())) {
+ ((HttpServletResponse) response).setStatus(HttpServletResponse.SC_NOT_MODIFIED);
+ return;
+ }
+
+ acceptsGZip = respContent.isContentGZiped() && acceptsGZipEncoding(httpRequest);
+ }
+
+ respContent.writeTo(response, fragmentRequest, acceptsGZip);
+ // acceptsGZip is used for performance reasons above; use the following line for CACHE-49
+ // respContent.writeTo(response, fragmentRequest, acceptsGZipEncoding(httpRequest));
+ } catch (NeedsRefreshException nre) {
+ boolean updateSucceeded = false;
+
+ try {
+ if (log.isInfoEnabled()) {
+ log.info("OSCache: New cache entry, cache stale or cache scope flushed for " + key);
+ }
+
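+                // Wrap the response so the generated content is captured for caching
+                // while still being streamed to the client.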
+ CacheHttpServletResponseWrapper cacheResponse = new CacheHttpServletResponseWrapper((HttpServletResponse) response, fragmentRequest, time * 1000L, lastModified, expires, cacheControlMaxAge, etag);
+ chain.doFilter(request, cacheResponse);
+ cacheResponse.flushBuffer();
+
+ // Only cache if the response is cacheable
+ if (isCacheableInternal(cacheResponse)) {
+ // get the cache groups of the content
+ String[] groups = cacheGroupsProvider.createCacheGroups(httpRequest, admin, cache);
+ // Store as the cache content the result of the response
+ cache.putInCache(key, cacheResponse.getContent(), groups, expiresRefreshPolicy, null);
+ updateSucceeded = true;
+ if (log.isInfoEnabled()) {
+ log.info("OSCache: New entry added to the cache with key " + key);
+ }
+ }
+ } finally {
+ if (!updateSucceeded) {
+ cache.cancelUpdate(key);
+ }
+ }
+ }
+ }
+
+ /**
+ * Initialize the filter. This retrieves a {@link ServletCacheAdministrator}
+ * instance and configures the filter based on any initialization parameters.
+ * The supported initialization parameters are:
+ *
+ *
+ * - oscache-properties-file - the properties file that contains the OSCache configuration
+ * options to be used by the Cache that this Filter should use.
+ *
+ * @param filterConfig The filter configuration
+ */
+ public void init(FilterConfig filterConfig) {
+ // Get whatever settings we want...
+ config = filterConfig;
+
+ log.info("OSCache: Initializing CacheFilter with filter name " + config.getFilterName());
+
+ // setting the request filter to avoid reentrance with the same filter
+ requestFiltered = REQUEST_FILTERED + config.getFilterName();
+ log.info("Request filter attribute is " + requestFiltered);
+
+ // filter Properties file
+ Properties props = null;
+ try {
+ String propertiesfile = config.getInitParameter("oscache-properties-file");
+
+ if (propertiesfile != null && propertiesfile.length() > 0) {
+ props = Config.loadProperties(propertiesfile, "CacheFilter with filter name '" + config.getFilterName()+ "'");
+ }
+ } catch (Exception e) {
+ log.info("OSCache: Init parameter 'oscache-properties-file' not set, using default.");
+ }
+ admin = ServletCacheAdministrator.getInstance(config.getServletContext(), props);
+
+ // filter parameter time
+ String timeParam = config.getInitParameter("time");
+ if (timeParam != null) {
+ try {
+ setTime(Integer.parseInt(timeParam));
+ } catch (NumberFormatException nfe) {
+ log.error("OSCache: Unexpected value for the init parameter 'time', defaulting to one hour. Message=" + nfe.getMessage());
+ }
+ }
+
+ // filter parameter scope
+ String scopeParam = config.getInitParameter("scope");
+ if (scopeParam != null) {
+ if ("session".equalsIgnoreCase(scopeParam)) {
+ setCacheScope(PageContext.SESSION_SCOPE);
+ } else if ("application".equalsIgnoreCase(scopeParam)) {
+ setCacheScope(PageContext.APPLICATION_SCOPE);
+ } else {
+ log.error("OSCache: Wrong value '" + scopeParam + "' for init parameter 'scope', defaulting to 'application'.");
+ }
+
+ }
+
+ // filter parameter cron
+ setCron(config.getInitParameter("cron"));
+
+ // filter parameter fragment
+ String fragmentParam = config.getInitParameter("fragment");
+ if (fragmentParam != null) {
+ if ("no".equalsIgnoreCase(fragmentParam)) {
+ setFragment(FRAGMENT_NO);
+ } else if ("yes".equalsIgnoreCase(fragmentParam)) {
+ setFragment(FRAGMENT_YES);
+ } else if ("auto".equalsIgnoreCase(fragmentParam)) {
+ setFragment(FRAGMENT_AUTODETECT);
+ } else {
+ log.error("OSCache: Wrong value '" + fragmentParam + "' for init parameter 'fragment', defaulting to 'auto detect'.");
+ }
+ }
+
+ // filter parameter nocache
+ String nocacheParam = config.getInitParameter("nocache");
+ if (nocacheParam != null) {
+ if ("off".equalsIgnoreCase(nocacheParam)) {
+ nocache = NOCACHE_OFF;
+ } else if ("sessionIdInURL".equalsIgnoreCase(nocacheParam)) {
+ nocache = NOCACHE_SESSION_ID_IN_URL;
+ } else {
+ log.error("OSCache: Wrong value '" + nocacheParam + "' for init parameter 'nocache', defaulting to 'off'.");
+ }
+ }
+
+ // filter parameter last modified
+ String lastModifiedParam = config.getInitParameter("lastModified");
+ if (lastModifiedParam != null) {
+ if ("off".equalsIgnoreCase(lastModifiedParam)) {
+ lastModified = LAST_MODIFIED_OFF;
+ } else if ("on".equalsIgnoreCase(lastModifiedParam)) {
+ lastModified = LAST_MODIFIED_ON;
+ } else if ("initial".equalsIgnoreCase(lastModifiedParam)) {
+ lastModified = LAST_MODIFIED_INITIAL;
+ } else {
+ log.error("OSCache: Wrong value '" + lastModifiedParam + "' for init parameter 'lastModified', defaulting to 'initial'.");
+ }
+ }
+
+ // filter parameter expires
+ String expiresParam = config.getInitParameter("expires");
+ if (expiresParam != null) {
+ if ("off".equalsIgnoreCase(expiresParam)) {
+ setExpires(EXPIRES_OFF);
+ } else if ("on".equalsIgnoreCase(expiresParam)) {
+ setExpires(EXPIRES_ON);
+ } else if ("time".equalsIgnoreCase(expiresParam)) {
+ setExpires(EXPIRES_TIME);
+ } else {
+ log.error("OSCache: Wrong value '" + expiresParam + "' for init parameter 'expires', defaulting to 'on'.");
+ }
+ }
+
+        // filter parameter etag
+ String etagParam = config.getInitParameter("etag");
+ if (etagParam != null) {
+ if ("off".equalsIgnoreCase(etagParam)) {
+ setETag(ETAG_OFF);
+ } else if ("weak".equalsIgnoreCase(etagParam)) {
+ setETag(ETAG_WEAK);
+ } else {
+ log.error("OSCache: Wrong value '" + etagParam + "' for init parameter 'etag', defaulting to 'weak'.");
+ }
+ }
+
+ // filter parameter Cache-Control
+ String cacheControlMaxAgeParam = config.getInitParameter("max-age");
+ if (cacheControlMaxAgeParam != null) {
+ if (cacheControlMaxAgeParam.equalsIgnoreCase("no init")) {
+ setCacheControlMaxAge(MAX_AGE_NO_INIT);
+ } else if (cacheControlMaxAgeParam.equalsIgnoreCase("time")) {
+ setCacheControlMaxAge(MAX_AGE_TIME);
+ } else {
+ try {
+ setCacheControlMaxAge(Long.parseLong(cacheControlMaxAgeParam));
+ } catch (NumberFormatException nfe) {
+ log.error("OSCache: Unexpected value for the init parameter 'max-age', defaulting to '60'. Message=" + nfe.getMessage());
+ }
+ }
+ }
+
+ // filter parameter ICacheKeyProvider
+ ICacheKeyProvider cacheKeyProviderParam = (ICacheKeyProvider)instantiateFromInitParam("ICacheKeyProvider", ICacheKeyProvider.class, this.getClass().getName());
+ if (cacheKeyProviderParam != null) {
+ setCacheKeyProvider(cacheKeyProviderParam);
+ }
+
+ // filter parameter ICacheGroupsProvider
+ ICacheGroupsProvider cacheGroupsProviderParam = (ICacheGroupsProvider)instantiateFromInitParam("ICacheGroupsProvider", ICacheGroupsProvider.class, this.getClass().getName());
+ if (cacheGroupsProviderParam != null) {
+ setCacheGroupsProvider(cacheGroupsProviderParam);
+ }
+
+ // filter parameter EntryRefreshPolicy
+ EntryRefreshPolicy expiresRefreshPolicyParam = (EntryRefreshPolicy)instantiateFromInitParam("EntryRefreshPolicy", EntryRefreshPolicy.class, ExpiresRefreshPolicy.class.getName());
+ if (expiresRefreshPolicyParam != null) {
+ setExpiresRefreshPolicy(expiresRefreshPolicyParam);
+ } else {
+ // setting the refresh period for this cache filter
+ setExpiresRefreshPolicy(new ExpiresRefreshPolicy(time));
+ }
+
+        // filter parameter disableCacheOnMethods
+ String disableCacheOnMethodsParam = config.getInitParameter("disableCacheOnMethods");
+ if (StringUtil.hasLength(disableCacheOnMethodsParam)) {
+ disableCacheOnMethods = StringUtil.split(disableCacheOnMethodsParam, ',');
+ // log.error("OSCache: Wrong value '" + disableCacheOnMethodsParam + "' for init parameter 'disableCacheOnMethods', defaulting to 'null'.");
+ }
+
+ }
+
+ private Object instantiateFromInitParam(String classInitParam, Class interfaceClass, String defaultObjectName) {
+ String className = config.getInitParameter(classInitParam);
+ if (className != null) {
+ try {
+ Class clazz = ClassLoaderUtil.loadClass(className, this.getClass());
+ if (!interfaceClass.isAssignableFrom(clazz)) {
+ log.error("OSCache: Specified class '" + className + "' does not implement" + interfaceClass.getName() + ". Using default " + defaultObjectName + ".");
+ return null;
+ } else {
+ return clazz.newInstance();
+ }
+ } catch (ClassNotFoundException e) {
+ log.error("OSCache: Class '" + className + "' not found. Defaulting to " + defaultObjectName + ".", e);
+ } catch (InstantiationException e) {
+ log.error("OSCache: Class '" + className + "' could not be instantiated because it is not a concrete class. Using default object " + defaultObjectName + ".", e);
+ } catch (IllegalAccessException e) {
+ log.error("OSCache: Class '"+ className+ "' could not be instantiated because it is not public. Using default object " + defaultObjectName + ".", e);
+ }
+ }
+ return null;
+ }
+
+ /**
+ * {@link ICacheKeyProvider}
+ * @see com.opensymphony.oscache.web.filter.ICacheKeyProvider#createCacheKey(javax.servlet.http.HttpServletRequest, ServletCacheAdministrator, Cache)
+ */
+ public String createCacheKey(HttpServletRequest httpRequest, ServletCacheAdministrator scAdmin, Cache cache) {
+ return scAdmin.generateEntryKey(null, httpRequest, cacheScope);
+ }
+
+ /**
+ * {@link ICacheGroupsProvider}
+ * @see com.opensymphony.oscache.web.filter.ICacheGroupsProvider#createCacheGroups(javax.servlet.http.HttpServletRequest, ServletCacheAdministrator, Cache)
+ */
+ public String[] createCacheGroups(HttpServletRequest httpRequest, ServletCacheAdministrator scAdmin, Cache cache) {
+ return null;
+ }
+
+ /**
+ * Checks if the request is a fragment in a page.
+ *
+ * According to Java Servlet API 2.2 (8.2.1 Dispatching Requests, Included
+ * Request Parameters), when a servlet is being used from within an include,
+     * the attribute javax.servlet.include.request_uri is set.
+     * According to Java Servlet API 2.3, servlets obtained
+     * via the getNamedDispatcher method are an exception to this.
+ *
+ * @param request the to be handled request
+ * @return true if the request is a fragment in a page
+ */
+ public boolean isFragment(HttpServletRequest request) {
+ if (fragment == FRAGMENT_AUTODETECT) {
+ return request.getAttribute("javax.servlet.include.request_uri") != null;
+ } else {
+            return fragment != FRAGMENT_NO;
+ }
+ }
+
+ /**
+     * Checks if the request was filtered before, which
+     * guarantees that the filter is executed only once per request. You
+     * can override this method to define a more specific
+     * behavior.
+ *
+ * @param request checks if the request was filtered before.
+ * @return true if it is the first execution
+ */
+ public boolean isFilteredBefore(ServletRequest request) {
+ return request.getAttribute(requestFiltered) != null;
+ }
+
+ /*
+ * isCacheableInternal guarantees that the log information is correct.
+ *
+ * @param request The servlet request
+ * @return Returns a boolean indicating if the request can be cached or not.
+ */
+ private final boolean isCacheableInternal(ServletRequest request) {
+ final boolean cacheable = isCacheable(request);
+
+ if (log.isDebugEnabled()) {
+ log.debug("OSCache: the request " + ((cacheable) ? "is" : "is not") + " cachable.");
+ }
+
+ return cacheable;
+ }
+
+ /**
+ * isCacheable is a method allowing a subclass to decide if a request is
+ * cacheable or not.
+ *
+ * @param request The servlet request
+ * @return Returns a boolean indicating if the request can be cached or not.
+ */
+ public boolean isCacheable(ServletRequest request) {
+ boolean cacheable = request instanceof HttpServletRequest;
+
+ if (cacheable) {
+ HttpServletRequest requestHttp = (HttpServletRequest) request;
+ // CACHE-272 don't cache special http request methods
+ if ((disableCacheOnMethods != null) && (disableCacheOnMethods.contains(requestHttp.getMethod()))) {
+ return false;
+ }
+ if (nocache == NOCACHE_SESSION_ID_IN_URL) { // don't cache requests if session id is in the URL
+ cacheable = !requestHttp.isRequestedSessionIdFromURL();
+ }
+ }
+
+ return cacheable;
+ }
+
+ /*
+ * isCacheableInternal guarantees that the log information is correct.
+ *
+ * @param cacheResponse the HTTP servlet response
+ * @return Returns a boolean indicating if the response can be cached or not.
+ */
+ private final boolean isCacheableInternal(CacheHttpServletResponseWrapper cacheResponse) {
+ final boolean cacheable = isCacheable(cacheResponse);
+
+ if (log.isDebugEnabled()) {
+ log.debug("OSCache: the response " + ((cacheable) ? "is" : "is not") + " cacheable.");
+ }
+
+ return cacheable;
+ }
+
+ /**
+     * isCacheable is a method allowing a subclass to decide if a response is
+ * cacheable or not.
+ *
+ * @param cacheResponse the HTTP servlet response
+ * @return Returns a boolean indicating if the response can be cached or not.
+ */
+ public boolean isCacheable(CacheHttpServletResponseWrapper cacheResponse) {
+ // TODO implement CACHE-137 here
+ // Only cache if the response was 200
+ return cacheResponse.getStatus() == HttpServletResponse.SC_OK;
+ }
+
+ /**
+ * Check if the client browser support gzip compression.
+ *
+ * @param request the http request
+ * @return true if client browser supports GZIP
+ */
+ public boolean acceptsGZipEncoding(HttpServletRequest request) {
+ String acceptEncoding = request.getHeader(HEADER_ACCEPT_ENCODING);
+ return (acceptEncoding != null) && (acceptEncoding.indexOf("gzip") != -1);
+ }
+
+ // ---------------------------------
+ // --- getter and setter methods ---
+ // ---------------------------------
+
+ /**
+ * @return the max-age of the cache control
+ * @since 2.4
+ */
+ public long getCacheControlMaxAge() {
+ if ((cacheControlMaxAge == MAX_AGE_NO_INIT) || (cacheControlMaxAge == MAX_AGE_TIME)) {
+ return cacheControlMaxAge;
+ }
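+        // Constant max-age values are stored negated internally (see setCacheControlMaxAge),
+        // so negate again to return the positive number of seconds.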
+ return - cacheControlMaxAge;
+ }
+
+ /**
+ * max-age - defines the cache control response header max-age. Acceptable values are
+     * MAX_AGE_NO_INIT to leave the max-age cache control uninitialized,
+     * MAX_AGE_TIME to base the max-age on the time parameter and the creation time of the content (expiration timestamp minus current timestamp), and
+     * a positive integer constant in seconds to be set in every response; the default value is 60.
+ *
+ * @param cacheControlMaxAge the cacheControlMaxAge to set
+ * @since 2.4
+ */
+ public void setCacheControlMaxAge(long cacheControlMaxAge) {
+ if ((cacheControlMaxAge == MAX_AGE_NO_INIT) || (cacheControlMaxAge == MAX_AGE_TIME)) {
+ this.cacheControlMaxAge = cacheControlMaxAge;
+ } else if (cacheControlMaxAge >= 0) {
+ // declare the cache control as a constant
+ // TODO check if this value can be stored as a positive long without changing it
+ this.cacheControlMaxAge = - cacheControlMaxAge;
+ } else {
+ log.warn("OSCache: 'max-age' must be at least a positive integer, defaulting to '60'. ");
+ this.cacheControlMaxAge = -60;
+ }
+ }
+
+ /**
+ * @return the cacheGroupsProvider
+ * @since 2.4
+ */
+ public ICacheGroupsProvider getCacheGroupsProvider() {
+ return cacheGroupsProvider;
+ }
+
+ /**
+     * ICacheGroupsProvider - Class implementing the interface ICacheGroupsProvider.
+     * A developer can implement a method which provides cache groups based on the request,
+     * the servlet cache administrator and the cache. The parameter must not be null.
+ *
+ * @param cacheGroupsProvider the cacheGroupsProvider to set
+ * @since 2.4
+ */
+ public void setCacheGroupsProvider(ICacheGroupsProvider cacheGroupsProvider) {
+ if (cacheGroupsProvider == null) throw new IllegalArgumentException("The ICacheGroupsProvider is null.");
+ this.cacheGroupsProvider = cacheGroupsProvider;
+ }
+
+ /**
+ * @return the cacheKeyProvider
+ * @since 2.4
+ */
+ public ICacheKeyProvider getCacheKeyProvider() {
+ return cacheKeyProvider;
+ }
+
+ /**
+     * ICacheKeyProvider - Class implementing the interface ICacheKeyProvider.
+     * A developer can implement a method which provides cache keys based on the request,
+     * the servlet cache administrator and the cache. The parameter must not be null.
+ *
+ * @param cacheKeyProvider the cacheKeyProvider to set
+ * @since 2.4
+ */
+ public void setCacheKeyProvider(ICacheKeyProvider cacheKeyProvider) {
+ if (cacheKeyProvider == null) throw new IllegalArgumentException("The ICacheKeyProvider is null.");
+ this.cacheKeyProvider = cacheKeyProvider;
+ }
+
+ /**
+ * Returns PageContext.APPLICATION_SCOPE or PageContext.SESSION_SCOPE.
+ * @return the cache scope
+ * @since 2.4
+ */
+ public int getCacheScope() {
+ return cacheScope;
+ }
+
+ /**
+ * scope - the default scope to cache content. Acceptable values
+     * are PageContext.APPLICATION_SCOPE (default) and PageContext.SESSION_SCOPE.
+ *
+ * @param cacheScope the cacheScope to set
+ * @since 2.4
+ */
+ public void setCacheScope(int cacheScope) {
+ if ((cacheScope != PageContext.APPLICATION_SCOPE) && (cacheScope != PageContext.SESSION_SCOPE))
+ throw new IllegalArgumentException("Acceptable values for cache scope are PageContext.APPLICATION_SCOPE or PageContext.SESSION_SCOPE");
+ this.cacheScope = cacheScope;
+ }
+
+ /**
+ * @return the cron
+ * @since 2.4
+ */
+ public String getCron() {
+ return cron;
+ }
+
+ /**
+ * cron - defines an expression that determines when the page content will expire.
+ * This allows content to be expired at particular dates and/or times, rather than once
+ * a cache entry reaches a certain age.
+ *
+ * @param cron the cron to set
+ * @since 2.4
+ */
+ public void setCron(String cron) {
+ this.cron = cron;
+ }
+
+ /**
+ * @return the expires header
+ * @since 2.4
+ */
+ public long getExpires() {
+ return expires;
+ }
+
+ /**
+ * expires - defines if the expires header will be sent in the response. Acceptable values are
+     * EXPIRES_OFF to never send the header, even if it is set in the filter chain,
+     * EXPIRES_ON (default) to send it if it is set in the filter chain, and
+     * EXPIRES_TIME to initialize the expires information based on the time parameter and the creation time of the content.
+ *
+ * @param expires the expires to set
+ * @since 2.4
+ */
+ public void setExpires(long expires) {
+ if ((expires < EXPIRES_TIME) || (expires > EXPIRES_ON)) throw new IllegalArgumentException("Expires value out of range.");
+ this.expires = expires;
+ }
+
+ /**
+ * @return the etag
+ * @since 2.4.2
+ */
+ public int getETag() {
+ return etag;
+ }
+
+ /**
+ * etag - defines if the Entity tag (ETag) HTTP header is sent in the response. Acceptable values are
+     * ETAG_OFF to never send the header, even if it is set in the filter chain, and
+     * ETAG_WEAK (default) to generate a weak ETag by concatenating the content length and the last modified time in milliseconds.
+ *
+ * @param etag the etag to set
+ * @since 2.4.2
+ */
+ public void setETag(int etag) {
+ if ((etag < ETAG_OFF) || (etag > ETAG_WEAK)) throw new IllegalArgumentException("ETag value out of range.");
+ this.etag = etag;
+ }
+
+ /**
+ * @return the expiresRefreshPolicy
+ * @since 2.4
+ */
+ public EntryRefreshPolicy getExpiresRefreshPolicy() {
+ return expiresRefreshPolicy;
+ }
+
+ /**
+     * EntryRefreshPolicy - Class implementing the interface EntryRefreshPolicy.
+ * A developer can implement a class which provides a different custom cache invalidation policy for a specific cache entry.
+ * If not specified, the default policy is timed entry expiry as specified with the time parameter described above.
+ *
+ * @param expiresRefreshPolicy the expiresRefreshPolicy to set
+ * @since 2.4
+ */
+ public void setExpiresRefreshPolicy(EntryRefreshPolicy expiresRefreshPolicy) {
+ if (expiresRefreshPolicy == null) throw new IllegalArgumentException("The EntryRefreshPolicy is null.");
+ this.expiresRefreshPolicy = expiresRefreshPolicy;
+ }
+
+ /**
+ * @return the fragment
+ * @since 2.4
+ */
+ public int getFragment() {
+ return fragment;
+ }
+
+ /**
+ * fragment - defines if this filter handles fragments of a page. Acceptable values
+     * are FRAGMENT_AUTODETECT (default) for auto detection, FRAGMENT_NO and FRAGMENT_YES.
+ *
+ * @param fragment the fragment to set
+ * @since 2.4
+ */
+ public void setFragment(int fragment) {
+ if ((fragment < FRAGMENT_AUTODETECT) || (fragment > FRAGMENT_YES)) throw new IllegalArgumentException("Fragment value out of range.");
+ this.fragment = fragment;
+ }
+
+ /**
+ * @return the lastModified
+ * @since 2.4
+ */
+ public long getLastModified() {
+ return lastModified;
+ }
+
+ /**
+ * lastModified - defines if the last modified header will be sent in the response. Acceptable values are
+     * LAST_MODIFIED_OFF to never send the header, even if it is set in the filter chain,
+     * LAST_MODIFIED_ON to send it if it is set in the filter chain, and
+     * LAST_MODIFIED_INITIAL (default) to set the last modified information based on the current time, with later changes allowed.
+ *
+ * @param lastModified the lastModified to set
+ * @since 2.4
+ */
+ public void setLastModified(long lastModified) {
+ if ((lastModified < LAST_MODIFIED_INITIAL) || (lastModified > LAST_MODIFIED_ON)) throw new IllegalArgumentException("LastModified value out of range.");
+ this.lastModified = lastModified;
+ }
+
+ /**
+ * @return the nocache
+ * @since 2.4
+ */
+ public int getNocache() {
+ return nocache;
+ }
+
+ /**
+ * nocache - defines which objects shouldn't be cached. Acceptable values
+     * are NOCACHE_OFF (default) and NOCACHE_SESSION_ID_IN_URL, which disables caching when the session id is
+     * contained in the URL.
+ *
+ * @param nocache the nocache to set
+ * @since 2.4
+ */
+ public void setNocache(int nocache) {
+ if ((nocache < NOCACHE_OFF) || (nocache > NOCACHE_SESSION_ID_IN_URL)) throw new IllegalArgumentException("Nocache value out of range.");
+ this.nocache = nocache;
+ }
+
+ /**
+ * @return the time
+ * @since 2.4
+ */
+ public int getTime() {
+ return time;
+ }
+
+ /**
+ * time - the default time (in seconds) to cache content for. The default
+ * value is 3600 seconds (one hour). Specifying -1 (indefinite expiry) as the cache
+     * time will ensure the content does not become stale until it is either explicitly
+ * flushed or the expires refresh policy causes the entry to expire.
+ *
+ * @param time the time to set
+ * @since 2.4
+ */
+ public void setTime(int time) {
+ this.time = time;
+ // check if ExpiresRefreshPolicy has to be reset
+ if (expiresRefreshPolicy instanceof ExpiresRefreshPolicy) {
+ ((ExpiresRefreshPolicy) expiresRefreshPolicy).setRefreshPeriod(time);
+ }
+ }
+
+ /**
+ * @link http://java.sun.com/j2ee/sdk_1.3/techdocs/api/javax/servlet/http/HttpServletRequest.html#getMethod()
+     * @return the list of HTTP method names for which caching should be disabled
+ * @since 2.4
+ */
+ public List getDisableCacheOnMethods() {
+ return disableCacheOnMethods;
+ }
+
+ /**
+     * disableCacheOnMethods - Defines the HTTP method names for which caching should be disabled.
+     * The default value is null, which caches all requests regardless of the method name.
+     * @link http://java.sun.com/j2ee/sdk_1.3/techdocs/api/javax/servlet/http/HttpServletRequest.html#getMethod()
+     * @param disableCacheOnMethods the list of HTTP method names for which caching should be disabled
+ * @since 2.4
+ */
+ public void setDisableCacheOnMethods(List disableCacheOnMethods) {
+ this.disableCacheOnMethods = disableCacheOnMethods;
+ }
+
+ // TODO: check if getter/setter for oscache-properties-file is possible
+
+}
diff --git a/src/java/com/opensymphony/oscache/web/filter/CacheHttpServletResponseWrapper.java b/src/java/com/opensymphony/oscache/web/filter/CacheHttpServletResponseWrapper.java
new file mode 100644
index 0000000..25e9dde
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/filter/CacheHttpServletResponseWrapper.java
@@ -0,0 +1,439 @@
+/*
+ * Copyright (c) 2002-2008 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web.filter;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import java.io.IOException;
+import java.io.OutputStreamWriter;
+import java.io.PrintWriter;
+
+import java.util.Locale;
+
+import javax.servlet.ServletOutputStream;
+import javax.servlet.http.HttpServletResponse;
+import javax.servlet.http.HttpServletResponseWrapper;
+
+/**
+ * CacheServletResponse is a serialized representation of a response
+ *
+ * @author Serge Knystautas
+ * @version $Revision$
+ */
+public class CacheHttpServletResponseWrapper extends HttpServletResponseWrapper {
+
+ private final Log log = LogFactory.getLog(this.getClass());
+
+ /**
+ * We cache the printWriter so we can maintain a single instance
+ * of it no matter how many times it is requested.
+ */
+ private PrintWriter cachedWriter = null;
+ private ResponseContent result = null;
+ private SplitServletOutputStream cacheOut = null;
+ private boolean fragment = false;
+ private int status = SC_OK;
+ private long expires = CacheFilter.EXPIRES_ON;
+ private long lastModified = CacheFilter.LAST_MODIFIED_INITIAL;
+ private long cacheControl = -60;
+ private int etagOption = CacheFilter.ETAG_WEAK;
+ private String etag = null;
+
+ /**
+ * Constructor
+ *
+ * @param response The servlet response
+ */
+ public CacheHttpServletResponseWrapper(HttpServletResponse response) {
+        this(response, false, Long.MAX_VALUE, CacheFilter.LAST_MODIFIED_INITIAL, CacheFilter.EXPIRES_ON, -60, CacheFilter.ETAG_WEAK);
+ }
+
+ /**
+ * Constructor
+ *
+ * @param response The servlet response
+     * @param fragment true if the response indicates that it is a fragment of a page
+     * @param time the refresh time in millis
+     * @param lastModified defines if the last modified header will be sent, @see CacheFilter
+     * @param expires defines if the expires header will be sent, @see CacheFilter
+     * @param cacheControl defines if the cache control header will be sent, @see CacheFilter
+     * @param etagOption defines if the ETag header will be sent, @see CacheFilter
+ */
+ public CacheHttpServletResponseWrapper(HttpServletResponse response, boolean fragment, long time, long lastModified, long expires, long cacheControl, int etagOption) {
+ super(response);
+ result = new ResponseContent();
+ this.fragment = fragment;
+ this.expires = expires;
+ this.lastModified = lastModified;
+ this.cacheControl = cacheControl;
+ this.etagOption = etagOption;
+
+ // only set initial values for last modified and expires, when a complete page is cached
+ if (!fragment) {
+ // setting a default last modified value based on object creation and remove the millis
+ if (lastModified == CacheFilter.LAST_MODIFIED_INITIAL) {
+ long current = System.currentTimeMillis();
+ current = current - (current % 1000);
+ result.setLastModified(current);
+ super.setDateHeader(CacheFilter.HEADER_LAST_MODIFIED, result.getLastModified());
+ }
+ // setting the expires value
+ if (expires == CacheFilter.EXPIRES_TIME) {
+ result.setExpires(result.getLastModified() + time);
+ super.setDateHeader(CacheFilter.HEADER_EXPIRES, result.getExpires());
+ }
+ // setting the cache control with max-age
+ if (this.cacheControl == CacheFilter.MAX_AGE_TIME) {
+ // set the count down
+ long maxAge = System.currentTimeMillis();
+ maxAge = maxAge - (maxAge % 1000) + time;
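+                // maxAge now holds an absolute expiry timestamp (current time rounded down to the
+                // second plus the refresh time), while the header itself carries the relative max-age.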
+ result.setMaxAge(maxAge);
+ super.addHeader(CacheFilter.HEADER_CACHE_CONTROL, "max-age=" + time / 1000);
+ } else if (this.cacheControl != CacheFilter.MAX_AGE_NO_INIT) {
+ result.setMaxAge(this.cacheControl);
+ super.addHeader(CacheFilter.HEADER_CACHE_CONTROL, "max-age=" + (-this.cacheControl));
+ } else if (this.cacheControl == CacheFilter.MAX_AGE_NO_INIT ) {
+ result.setMaxAge(this.cacheControl);
+ }
+ }
+ }
+
+ /**
+ * Get a response content
+ *
+ * @return The content
+ */
+ public ResponseContent getContent() {
+ // Flush the buffer
+ try {
+ flush();
+ } catch (IOException ignore) {
+ }
+
+ // Create the byte array
+ result.commit();
+
+ // Return the result from this response
+ return result;
+ }
+
+ /**
+ * Set the content type
+ *
+ * @param value The content type
+ */
+ public void setContentType(String value) {
+ if (log.isDebugEnabled()) {
+ log.debug("ContentType: " + value);
+ }
+
+ super.setContentType(value);
+ result.setContentType(value);
+ }
+
+ /**
+ * Set the date of a header
+ *
+ * @param name The header name
+ * @param value The date
+ */
+ public void setDateHeader(String name, long value) {
+ if (log.isDebugEnabled()) {
+ log.debug("dateheader: " + name + ": " + value);
+ }
+
+ // only set the last modified value, if a complete page is cached
+ if ((lastModified != CacheFilter.LAST_MODIFIED_OFF) && (CacheFilter.HEADER_LAST_MODIFIED.equalsIgnoreCase(name))) {
+ if (!fragment) {
+ result.setLastModified(value);
+ } // TODO should we return now by fragments to avoid putting the header to the response?
+ }
+
+ // implement RFC 2616 14.21 Expires (without max-age)
+ if ((expires != CacheFilter.EXPIRES_OFF) && (CacheFilter.HEADER_EXPIRES.equalsIgnoreCase(name))) {
+ if (!fragment) {
+ result.setExpires(value);
+ } // TODO should we return now by fragments to avoid putting the header to the response?
+ }
+
+ super.setDateHeader(name, value);
+ }
+
+ /**
+ * Add the date of a header
+ *
+ * @param name The header name
+ * @param value The date
+ */
+ public void addDateHeader(String name, long value) {
+ if (log.isDebugEnabled()) {
+ log.debug("dateheader: " + name + ": " + value);
+ }
+
+ // only set the last modified value, if a complete page is cached
+ if ((lastModified != CacheFilter.LAST_MODIFIED_OFF) && (CacheFilter.HEADER_LAST_MODIFIED.equalsIgnoreCase(name))) {
+ if (!fragment) {
+ result.setLastModified(value);
+ } // TODO should we return now by fragments to avoid putting the header to the response?
+ }
+
+ // implement RFC 2616 14.21 Expires (without max-age)
+ if ((expires != CacheFilter.EXPIRES_OFF) && (CacheFilter.HEADER_EXPIRES.equalsIgnoreCase(name))) {
+ if (!fragment) {
+ result.setExpires(value);
+ } // TODO should we return now by fragments to avoid putting the header to the response?
+ }
+
+ super.addDateHeader(name, value);
+ }
+
+ /**
+ * Set a header field
+ *
+ * @param name The header name
+ * @param value The header value
+ */
+ public void setHeader(String name, String value) {
+ if (log.isDebugEnabled()) {
+ log.debug("header: " + name + ": " + value);
+ }
+
+ if (CacheFilter.HEADER_CONTENT_TYPE.equalsIgnoreCase(name)) {
+ result.setContentType(value);
+ }
+
+ if (CacheFilter.HEADER_CONTENT_ENCODING.equalsIgnoreCase(name)) {
+ result.setContentEncoding(value);
+ }
+
+ if (CacheFilter.HEADER_ETAG.equalsIgnoreCase(name)) {
+ result.setETag(value);
+ }
+
+ if (CacheFilter.HEADER_CONTENT_DISPOSITION.equalsIgnoreCase(name)) {
+ result.setContentDisposition(value);
+ }
+
+ super.setHeader(name, value);
+ }
+
+ /**
+ * Add a header field
+ *
+ * @param name The header name
+ * @param value The header value
+ */
+ public void addHeader(String name, String value) {
+ if (log.isDebugEnabled()) {
+ log.debug("header: " + name + ": " + value);
+ }
+
+ if (CacheFilter.HEADER_CONTENT_TYPE.equalsIgnoreCase(name)) {
+ result.setContentType(value);
+ }
+
+ if (CacheFilter.HEADER_CONTENT_ENCODING.equalsIgnoreCase(name)) {
+ result.setContentEncoding(value);
+ }
+
+ if (CacheFilter.HEADER_ETAG.equalsIgnoreCase(name)) {
+ result.setETag(value);
+ }
+
+ if (CacheFilter.HEADER_CONTENT_DISPOSITION.equalsIgnoreCase(name)) {
+ result.setContentDisposition(value);
+ }
+
+ super.addHeader(name, value);
+ }
+
+ /**
+ * Set the int value of the header
+ *
+ * @param name The header name
+ * @param value The int value
+ */
+ public void setIntHeader(String name, int value) {
+ if (log.isDebugEnabled()) {
+ log.debug("intheader: " + name + ": " + value);
+ }
+
+ super.setIntHeader(name, value);
+ }
+
+ /**
+ * We override this so we can catch the response status. Only
+ * responses with a status of 200 (SC_OK) will
+ * be cached.
+ */
+ public void setStatus(int status) {
+ super.setStatus(status);
+ this.status = status;
+ }
+
+ /**
+ * We override this so we can catch the response status. Only
+ * responses with a status of 200 (SC_OK) will
+ * be cached.
+ */
+ public void sendError(int status, String string) throws IOException {
+ super.sendError(status, string);
+ this.status = status;
+ }
+
+ /**
+ * We override this so we can catch the response status. Only
+ * responses with a status of 200 (SC_OK) will
+ * be cached.
+ */
+ public void sendError(int status) throws IOException {
+ super.sendError(status);
+ this.status = status;
+ }
+
+ /**
+ * We override this so we can catch the response status. Only
+ * responses with a status of 200 (SC_OK) will
+ * be cached.
+ */
+ public void setStatus(int status, String string) {
+ super.setStatus(status, string);
+ this.status = status;
+ }
+
+ /**
+ * We override this so we can catch the response status. Only
+ * responses with a status of 200 (SC_OK) will
+ * be cached.
+ */
+ public void sendRedirect(String location) throws IOException {
+ this.status = SC_MOVED_TEMPORARILY;
+ super.sendRedirect(location);
+ }
+
+ /**
+ * Retrieves the captured HttpResponse status.
+ */
+ public int getStatus() {
+ return status;
+ }
+
+ /**
+ * Set the locale
+ *
+ * @param value The locale
+ */
+ public void setLocale(Locale value) {
+ super.setLocale(value);
+ result.setLocale(value);
+ }
+
+ /**
+ * Get an output stream
+ *
+ * @throws IOException
+ */
+ public ServletOutputStream getOutputStream() throws IOException {
+ // Pass this faked servlet output stream that captures what is sent
+ if (cacheOut == null) {
+ cacheOut = new SplitServletOutputStream(result.getOutputStream(), super.getOutputStream());
+ }
+
+ return cacheOut;
+ }
+
+ /**
+ * Get a print writer
+ *
+ * @throws IOException
+ */
+ public PrintWriter getWriter() throws IOException {
+ if (cachedWriter == null) {
+ String encoding = getCharacterEncoding();
+ if (encoding != null) {
+ cachedWriter = new PrintWriter(new OutputStreamWriter(getOutputStream(), encoding));
+ } else { // using the default character encoding
+ cachedWriter = new PrintWriter(new OutputStreamWriter(getOutputStream()));
+ }
+ }
+
+ return cachedWriter;
+ }
+
+ /**
+ * Flushes all streams.
+ * @throws IOException
+ */
+ private void flush() throws IOException {
+ if (cacheOut != null) {
+ cacheOut.flush();
+ }
+
+ if (cachedWriter != null) {
+ cachedWriter.flush();
+ }
+ }
+
+ public void flushBuffer() throws IOException {
+ // The weak ETag is content size + lastModified
+ if (etag == null) {
+ if (etagOption == CacheFilter.ETAG_WEAK) {
+ etag = "W/\"" + result.getSize() + "-" + result.getLastModified() + "\"";
+ result.setETag(etag);
+ }
+ }
+ super.flushBuffer();
+ flush();
+ }
+
+ /**
+ * Returns a boolean indicating if the response has been committed.
+ * A committed response has already had its status code and headers written.
+ *
+ * @see javax.servlet.ServletResponseWrapper#isCommitted()
+ */
+ public boolean isCommitted() {
+ return super.isCommitted(); // || (result.getOutputStream() == null);
+ }
+
+ /**
+ * Clears any data that exists in the buffer as well as the status code and headers.
+ * If the response has been committed, this method throws an IllegalStateException.
+ * @see javax.servlet.ServletResponseWrapper#reset()
+ */
+ public void reset() {
+ log.info("CacheHttpServletResponseWrapper:reset()");
+ super.reset();
+ /*
+ cachedWriter = null;
+ result = new ResponseContent();
+ cacheOut = null;
+ fragment = false;
+ status = SC_OK;
+ expires = CacheFilter.EXPIRES_ON;
+ lastModified = CacheFilter.LAST_MODIFIED_INITIAL;
+ cacheControl = -60;
+ etag = null;
+ // time ?
+ */
+ }
+
+ /**
+ * Clears the content of the underlying buffer in the response without clearing headers or status code.
+ * If the response has been committed, this method throws an IllegalStateException.
+ * @see javax.servlet.ServletResponseWrapper#resetBuffer()
+ */
+ public void resetBuffer() {
+ log.info("CacheHttpServletResponseWrapper:resetBuffer()");
+ super.resetBuffer();
+ /*
+ //cachedWriter = null;
+ result = new ResponseContent();
+ //cacheOut = null;
+ //fragment = false;
+ */
+ }
+}
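For orientation, the following is a minimal sketch of how a filter could use this wrapper to capture a response for caching. It is illustrative only: the class name, the one-hour refresh value and the way the snapshot would be stored are assumptions, not the actual CacheFilter implementation.

import java.io.IOException;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.opensymphony.oscache.web.filter.CacheFilter;
import com.opensymphony.oscache.web.filter.CacheHttpServletResponseWrapper;
import com.opensymphony.oscache.web.filter.ResponseContent;

public class WrapperUsageSketch {

    // Render the rest of the chain into the wrapper, then decide whether the
    // captured bytes are worth caching (only 200 OK responses are).
    public ResponseContent captureResponse(HttpServletRequest request,
                                           HttpServletResponse response,
                                           FilterChain chain)
            throws IOException, ServletException {
        long oneHourMillis = 60L * 60L * 1000L;   // illustrative refresh time
        CacheHttpServletResponseWrapper wrapper = new CacheHttpServletResponseWrapper(
                response, false, oneHourMillis,
                CacheFilter.LAST_MODIFIED_INITIAL, CacheFilter.EXPIRES_ON,
                -60, CacheFilter.ETAG_WEAK);

        chain.doFilter(request, wrapper);   // downstream servlets/JSPs write into the wrapper
        wrapper.flushBuffer();              // also computes the weak ETag if enabled

        if (wrapper.getStatus() == HttpServletResponse.SC_OK) {
            return wrapper.getContent();    // committed snapshot, ready to put in the cache
        }
        return null;                        // errors and redirects are not cached
    }
}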
diff --git a/src/java/com/opensymphony/oscache/web/filter/ExpiresRefreshPolicy.java b/src/java/com/opensymphony/oscache/web/filter/ExpiresRefreshPolicy.java
new file mode 100644
index 0000000..d75179a
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/filter/ExpiresRefreshPolicy.java
@@ -0,0 +1,75 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web.filter;
+
+import com.opensymphony.oscache.base.CacheEntry;
+import com.opensymphony.oscache.base.EntryRefreshPolicy;
+import com.opensymphony.oscache.base.NeedsRefreshException;
+
+/**
+ * Checks if a cache filter entry has expired.
+ * This is useful when an expires header is used in the response.
+ *
+ * @version $Revision$
+ * @author Lars Torunski
+ */
+public class ExpiresRefreshPolicy implements EntryRefreshPolicy {
+
+ /** the refresh period (in milliseconds) of a certain cache filter*/
+ private long refreshPeriod;
+
+ /**
+ * Constructor ExpiresRefreshPolicy.
+ *
+ * @param refreshPeriod the refresh period in seconds
+ */
+ public ExpiresRefreshPolicy(int refreshPeriod) {
+ this.refreshPeriod = refreshPeriod * 1000L;
+ }
+
+ /**
+ * Indicates whether the supplied CacheEntry needs to be refreshed.
+ * This will be called when retrieving an entry from the cache - if this method
+ * returns true then a NeedsRefreshException will be thrown.
+ *
+ * @param entry The cache entry to check.
+ * @return true if the content needs refreshing, false otherwise.
+ *
+ * @see NeedsRefreshException
+ * @see CacheEntry
+ */
+ public boolean needsRefresh(CacheEntry entry) {
+
+ long currentTimeMillis = System.currentTimeMillis();
+
+ if ((refreshPeriod >= 0) && (currentTimeMillis >= (entry.getLastUpdate() + refreshPeriod))) {
+ return true;
+ } else if (entry.getContent() instanceof ResponseContent) {
+ ResponseContent responseContent = (ResponseContent) entry.getContent();
+ return currentTimeMillis >= responseContent.getExpires();
+ } else {
+ return false;
+ }
+
+ }
+
+ /**
+ * @return the refreshPeriod in seconds
+ * @since 2.4
+ */
+ public long getRefreshPeriod() {
+ return refreshPeriod / 1000;
+ }
+
+ /**
+ * @param refreshPeriod the refresh period in seconds
+ * @since 2.4
+ */
+ public void setRefreshPeriod(long refreshPeriod) {
+ this.refreshPeriod = refreshPeriod * 1000L;
+ }
+
+}
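A hedged sketch of using this policy directly with the Cache API. The CacheFilter normally attaches the policy itself; the key, the 600-second timeout and the sketch class are illustrative assumptions.

import com.opensymphony.oscache.base.Cache;
import com.opensymphony.oscache.base.NeedsRefreshException;
import com.opensymphony.oscache.web.filter.ExpiresRefreshPolicy;
import com.opensymphony.oscache.web.filter.ResponseContent;

public class ExpiresPolicySketch {

    // Store a captured page so it goes stale after 600 seconds, or earlier
    // if the Expires value recorded in the ResponseContent has passed.
    public void store(Cache cache, String key, ResponseContent page) {
        cache.putInCache(key, page, new ExpiresRefreshPolicy(600));
    }

    // Fetch it back; on a miss or a stale entry, release the update lock and
    // let the caller rebuild the content.
    public ResponseContent fetch(Cache cache, String key) {
        try {
            return (ResponseContent) cache.getFromCache(key);
        } catch (NeedsRefreshException nre) {
            cache.cancelUpdate(key);
            return null;
        }
    }
}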
diff --git a/src/java/com/opensymphony/oscache/web/filter/ICacheGroupsProvider.java b/src/java/com/opensymphony/oscache/web/filter/ICacheGroupsProvider.java
new file mode 100644
index 0000000..002cefe
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/filter/ICacheGroupsProvider.java
@@ -0,0 +1,33 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web.filter;
+
+import javax.servlet.http.HttpServletRequest;
+
+import com.opensymphony.oscache.base.Cache;
+import com.opensymphony.oscache.web.ServletCacheAdministrator;
+
+/**
+ * Provider interface for cache groups creation in CacheFilter. A developer can implement a method which provides
+ * cache groups based on the request, the servlet cache administrator and cache.
+ *
+ * JIRA issue: http://jira.opensymphony.com/browse/CACHE-195
+ *
+ * @author Lars Torunski
+ * @version $Revision$
+ */
+public interface ICacheGroupsProvider {
+
+ /**
+ * Creates the cache groups for the CacheFilter.
+ *
+ * @param httpRequest the http request.
+ * @param scAdmin the ServletCacheAdministrator of the cache
+ * @param cache the cache of the ServletCacheAdministrator
+ * @return the cache groups
+ */
+ public String[] createCacheGroups(HttpServletRequest httpRequest, ServletCacheAdministrator scAdmin, Cache cache);
+
+}
\ No newline at end of file
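An illustrative implementation of this interface (not part of OSCache; the grouping scheme is an assumption): every page goes into a global group plus a per-section group derived from the servlet path, so whole sections of a site can be flushed with Cache.flushGroup(...).

import javax.servlet.http.HttpServletRequest;

import com.opensymphony.oscache.base.Cache;
import com.opensymphony.oscache.web.ServletCacheAdministrator;
import com.opensymphony.oscache.web.filter.ICacheGroupsProvider;

public class PathCacheGroupsProvider implements ICacheGroupsProvider {

    public String[] createCacheGroups(HttpServletRequest httpRequest,
                                      ServletCacheAdministrator scAdmin, Cache cache) {
        // e.g. "/news/latest.jsp" -> section "news"
        String path = httpRequest.getServletPath();
        int secondSlash = path.indexOf('/', 1);
        String section = (secondSlash > 0) ? path.substring(1, secondSlash) : "root";

        // flushing "allPages" empties the whole site cache,
        // flushing "section-news" empties just that section
        return new String[] {"allPages", "section-" + section};
    }
}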
diff --git a/src/java/com/opensymphony/oscache/web/filter/ICacheKeyProvider.java b/src/java/com/opensymphony/oscache/web/filter/ICacheKeyProvider.java
new file mode 100644
index 0000000..5e350ba
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/filter/ICacheKeyProvider.java
@@ -0,0 +1,33 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web.filter;
+
+import javax.servlet.http.HttpServletRequest;
+
+import com.opensymphony.oscache.base.Cache;
+import com.opensymphony.oscache.web.ServletCacheAdministrator;
+
+/**
+ * Provider interface for cache key creation. A developer can implement a method which provides
+ * cache keys based on the request, the servlet cache administrator and cache.
+ *
+ * JIRA issue: http://jira.opensymphony.com/browse/CACHE-179
+ *
+ * @author Lars Torunski
+ * @version $Revision$
+ */
+public interface ICacheKeyProvider {
+
+ /**
+ * Creates the cache key for the CacheFilter.
+ *
+ * @param httpRequest the http request.
+ * @param scAdmin the ServletCacheAdministrator of the cache
+ * @param cache the cache of the ServletCacheAdministrator
+ * @return the cache key
+ */
+ public String createCacheKey(HttpServletRequest httpRequest, ServletCacheAdministrator scAdmin, Cache cache);
+
+}
\ No newline at end of file
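Similarly, a possible ICacheKeyProvider implementation (illustrative only) that keys entries on the request URI plus the query string, so differently parameterised requests are cached separately.

import javax.servlet.http.HttpServletRequest;

import com.opensymphony.oscache.base.Cache;
import com.opensymphony.oscache.web.ServletCacheAdministrator;
import com.opensymphony.oscache.web.filter.ICacheKeyProvider;

public class UriCacheKeyProvider implements ICacheKeyProvider {

    public String createCacheKey(HttpServletRequest httpRequest,
                                 ServletCacheAdministrator scAdmin, Cache cache) {
        StringBuffer key = new StringBuffer(httpRequest.getRequestURI());

        // keep "/page.jsp?id=1" and "/page.jsp?id=2" apart in the cache
        if (httpRequest.getQueryString() != null) {
            key.append('?').append(httpRequest.getQueryString());
        }
        return key.toString();
    }
}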
diff --git a/src/java/com/opensymphony/oscache/web/filter/ResponseContent.java b/src/java/com/opensymphony/oscache/web/filter/ResponseContent.java
new file mode 100644
index 0000000..48501f6
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/filter/ResponseContent.java
@@ -0,0 +1,265 @@
+/*
+ * Copyright (c) 2002-2009 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web.filter;
+
+import java.io.*;
+
+import java.util.Locale;
+import java.util.zip.GZIPInputStream;
+
+import javax.servlet.ServletResponse;
+import javax.servlet.http.HttpServletResponse;
+
+/**
+ * Holds the servlet response in a byte array so that it can be held
+ * in the cache (and, since this class is serializable, optionally
+ * persisted to disk).
+ *
+ * @version $Revision$
+ * @author Serge Knystautas
+ */
+public class ResponseContent implements Serializable {
+ private transient ByteArrayOutputStream bout = new ByteArrayOutputStream(1000);
+ private Locale locale = null;
+ private String contentEncoding = null;
+ private String contentType = null;
+ private byte[] content = null;
+ private long expires = Long.MAX_VALUE;
+ private long lastModified = -1;
+ private long maxAge = -60;
+ private String etag = null;
+ private String contentDisposition = null;
+
+ public String getContentType() {
+ return contentType;
+ }
+
+ /**
+ * Set the content type. We capture this so that when we serve this
+ * data from cache, we can set the correct content type on the response.
+ */
+ public void setContentType(String value) {
+ contentType = value;
+ }
+
+ public long getLastModified() {
+ return lastModified;
+ }
+
+ public void setLastModified(long value) {
+ lastModified = value;
+ }
+
+ public String getContentEncoding() {
+ return contentEncoding;
+ }
+
+ public void setContentEncoding(String contentEncoding) {
+ this.contentEncoding = contentEncoding;
+ }
+
+ public String getETag() {
+ return etag;
+ }
+
+ public void setETag(String etag) {
+ this.etag = etag;
+ }
+
+ public String getContentDisposition() {
+ return contentDisposition;
+ }
+
+ public void setContentDisposition(String contentDisposition) {
+ this.contentDisposition = contentDisposition;
+ }
+
+ /**
+ * Set the Locale. We capture this so that when we serve this data from
+ * cache, we can set the correct locale on the response.
+ */
+ public void setLocale(Locale value) {
+ locale = value;
+ }
+
+ /**
+ * @return the expires date and time in milliseconds when the content will be stale
+ */
+ public long getExpires() {
+ return expires;
+ }
+
+ /**
+ * Sets the expires date and time in milliseconds.
+ * @param value time in milliseconds when the content will expire
+ */
+ public void setExpires(long value) {
+ expires = value;
+ }
+
+ /**
+ * Returns the max age of the content in milliseconds. If both the expires header and
+ * cache control are enabled, the two values will be equal.
+ * @return the max age of the content in milliseconds; -1 means max-age is disabled
+ */
+ public long getMaxAge() {
+ return maxAge;
+ }
+
+ /**
+ * Sets the max age date and time in milliseconds. If the parameter is -1, the max-age parameter
+ * won't be set by default in the Cache-Control header.
+ * @param value the max age in milliseconds; -1 disables the max-age directive in the Cache-Control header
+ */
+ public void setMaxAge(long value) {
+ maxAge = value;
+ }
+
+ /**
+ * Get an output stream. This is used by the {@link SplitServletOutputStream}
+ * to capture the original (uncached) response into a byte array.
+ * @return the original (uncached) response, returns null if response is already committed.
+ */
+ public OutputStream getOutputStream() {
+ return bout;
+ }
+
+ /**
+ * Gets the size of this cached content.
+ *
+ * @return The size of the content, in bytes. If no content
+ * exists, this method returns -1.
+ */
+ public int getSize() {
+ return (content != null) ? content.length : (-1);
+ }
+
+ /**
+ * Called once the response has been written in its entirety. This
+ * method commits the response output stream by converting the output
+ * stream into a byte array.
+ */
+ public void commit() {
+ if (bout != null) {
+ content = bout.toByteArray();
+ bout = null;
+ }
+ }
+
+ /**
+ * Writes this cached data out to the supplied ServletResponse.
+ *
+ * @param response The servlet response to output the cached content to.
+ * @throws IOException
+ */
+ public void writeTo(ServletResponse response) throws IOException {
+ writeTo(response, false, false);
+ }
+
+ /**
+ * Writes this cached data out to the supplied ServletResponse.
+ *
+ * @param response The servlet response to output the cached content to.
+ * @param fragment is true if this content a fragment or part of a page
+ * @param acceptsGZip is true if client browser supports gzip compression
+ * @throws IOException
+ */
+ public void writeTo(ServletResponse response, boolean fragment, boolean acceptsGZip) throws IOException {
+ //Send the content type and data to this response
+ if (contentType != null) {
+ response.setContentType(contentType);
+ }
+
+ if (fragment) {
+ // Don't support gzip compression if the content is a fragment of a page
+ acceptsGZip = false;
+ } else {
+ // add special headers for a complete page
+ if (response instanceof HttpServletResponse) {
+ HttpServletResponse httpResponse = (HttpServletResponse) response;
+
+ // add the last modified header
+ if (lastModified != -1) {
+ httpResponse.setDateHeader(CacheFilter.HEADER_LAST_MODIFIED, lastModified);
+ }
+
+ // add the etag header
+ if (etag != null) {
+ httpResponse.addHeader(CacheFilter.HEADER_ETAG, etag);
+ }
+
+ // add the content disposition header
+ if(contentDisposition != null) {
+ httpResponse.addHeader(CacheFilter.HEADER_CONTENT_DISPOSITION, contentDisposition);
+ }
+
+ // add the expires header
+ if (expires != Long.MAX_VALUE) {
+ httpResponse.setDateHeader(CacheFilter.HEADER_EXPIRES, expires);
+ }
+
+ // add the cache-control header for max-age
+ if (maxAge == CacheFilter.MAX_AGE_NO_INIT || maxAge == CacheFilter.MAX_AGE_TIME) {
+ // do nothing
+ } else if (maxAge > 0) { // set max-age based on life time
+ long currentMaxAge = maxAge / 1000 - System.currentTimeMillis() / 1000;
+ if (currentMaxAge < 0) {
+ currentMaxAge = 0;
+ }
+ httpResponse.addHeader(CacheFilter.HEADER_CACHE_CONTROL, "max-age=" + currentMaxAge);
+ } else {
+ httpResponse.addHeader(CacheFilter.HEADER_CACHE_CONTROL, "max-age=" + (-maxAge));
+ }
+
+ }
+ }
+
+ if (locale != null) {
+ response.setLocale(locale);
+ }
+
+ OutputStream out = new BufferedOutputStream(response.getOutputStream());
+
+ if (isContentGZiped()) {
+ if (acceptsGZip) {
+ ((HttpServletResponse) response).addHeader(CacheFilter.HEADER_CONTENT_ENCODING, "gzip");
+ response.setContentLength(content.length);
+ out.write(content);
+ } else {
+ // client doesn't support, so we have to uncompress it
+ ByteArrayInputStream bais = new ByteArrayInputStream(content);
+ GZIPInputStream zis = new GZIPInputStream(bais);
+
+ ByteArrayOutputStream baos = new ByteArrayOutputStream();
+ int numBytesRead = 0;
+ byte[] tempBytes = new byte[4196];
+
+ while ((numBytesRead = zis.read(tempBytes, 0, tempBytes.length)) != -1) {
+ baos.write(tempBytes, 0, numBytesRead);
+ }
+
+ byte[] result = baos.toByteArray();
+
+ response.setContentLength(result.length);
+ out.write(result);
+ }
+ } else {
+ // the content isn't compressed
+ // regardless if the client browser supports gzip we will just return the content
+ response.setContentLength(content.length);
+ out.write(content);
+ }
+ out.flush();
+ }
+
+
+ /**
+ * @return true if the content is GZIP compressed
+ */
+ public boolean isContentGZiped() {
+ return "gzip".equals(contentEncoding);
+ }
+
+}
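To complete the picture, a hedged sketch of the cache-hit path: replaying a previously stored ResponseContent into the live response. The real CacheFilter additionally handles conditional requests (If-Modified-Since, ETag matching); this sketch only covers the gzip negotiation shown in writeTo above.

import java.io.IOException;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.opensymphony.oscache.web.filter.ResponseContent;

public class ReplaySketch {

    public void replay(ResponseContent cached, HttpServletRequest request,
                       HttpServletResponse response) throws IOException {
        // only send gzipped bytes if the client advertised support for them
        String acceptEncoding = request.getHeader("Accept-Encoding");
        boolean acceptsGZip = (acceptEncoding != null) && (acceptEncoding.indexOf("gzip") != -1);

        // fragment = false, so the cached Last-Modified, Expires, ETag and
        // Cache-Control headers are written to the response as well
        cached.writeTo(response, false, acceptsGZip);
    }
}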
diff --git a/src/java/com/opensymphony/oscache/web/filter/SplitServletOutputStream.java b/src/java/com/opensymphony/oscache/web/filter/SplitServletOutputStream.java
new file mode 100644
index 0000000..90050a2
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/filter/SplitServletOutputStream.java
@@ -0,0 +1,94 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web.filter;
+
+import java.io.IOException;
+import java.io.OutputStream;
+
+import javax.servlet.ServletOutputStream;
+
+/**
+ * Extends the base ServletOutputStream class so that
+ * the stream can be captured as it gets written. This is achieved
+ * by overriding the write() methods and outputting
+ * the data to two streams - the original stream and a secondary stream
+ * that is designed to capture the written data.
+ *
+ * @version $Revision$
+ * @author Serge Knystautas
+ */
+public class SplitServletOutputStream extends ServletOutputStream {
+ OutputStream captureStream = null;
+ OutputStream passThroughStream = null;
+
+ /**
+ * Constructs a split output stream that both captures and passes through
+ * the servlet response.
+ *
+ * @param captureStream The stream that will be used to capture the data.
+ * @param passThroughStream The pass-through ServletOutputStream
+ * that will write the response to the client as originally intended.
+ */
+ public SplitServletOutputStream(OutputStream captureStream, OutputStream passThroughStream) {
+ this.captureStream = captureStream;
+ this.passThroughStream = passThroughStream;
+ }
+
+ /**
+ * Writes the incoming data to both the output streams.
+ *
+ * @param value The int data to write.
+ * @throws IOException
+ */
+ public void write(int value) throws IOException {
+ captureStream.write(value);
+ passThroughStream.write(value);
+ }
+
+ /**
+ * Writes the incoming data to both the output streams.
+ *
+ * @param value The bytes to write to the streams.
+ * @throws IOException
+ */
+ public void write(byte[] value) throws IOException {
+ captureStream.write(value);
+ passThroughStream.write(value);
+ }
+
+ /**
+ * Writes the incoming data to both the output streams.
+ *
+ * @param b The bytes to write out to the streams.
+ * @param off The offset into the byte data where writing should begin.
+ * @param len The number of bytes to write.
+ * @throws IOException
+ */
+ public void write(byte[] b, int off, int len) throws IOException {
+ captureStream.write(b, off, len);
+ passThroughStream.write(b, off, len);
+ }
+
+ /**
+ * Flushes both the output streams.
+ * @throws IOException
+ */
+ public void flush() throws IOException {
+ super.flush();
+ captureStream.flush(); //why not?
+ passThroughStream.flush();
+ }
+
+ /**
+ * Closes both the output streams.
+ * @throws IOException
+ */
+ public void close() throws IOException {
+ super.close();
+ captureStream.close();
+ passThroughStream.close();
+ }
+
+}
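A small, assumed usage example: whatever is written through the split stream reaches the client and a capture buffer at the same time, which is how the response wrapper above snapshots the body. The class and method names are illustrative.

import java.io.ByteArrayOutputStream;
import java.io.IOException;

import javax.servlet.http.HttpServletResponse;

import com.opensymphony.oscache.web.filter.SplitServletOutputStream;

public class SplitStreamSketch {

    public byte[] writeAndCapture(HttpServletResponse response, String html) throws IOException {
        ByteArrayOutputStream capture = new ByteArrayOutputStream();
        SplitServletOutputStream split =
                new SplitServletOutputStream(capture, response.getOutputStream());

        split.write(html.getBytes());   // goes to the client AND into 'capture'
        split.flush();

        return capture.toByteArray();   // snapshot of exactly what was sent
    }
}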
diff --git a/src/java/com/opensymphony/oscache/web/filter/package.html b/src/java/com/opensymphony/oscache/web/filter/package.html
new file mode 100644
index 0000000..38dcfcb
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/filter/package.html
@@ -0,0 +1,33 @@
+
+
+
+
+
+
+
+Provides the caching filter (and its support classes) that allows HTTP responses
+to be cached by OSCache.
+
+
+Package Specification
+
+Related Documentation
+
+
+For overviews, tutorials, examples, guides, and tool documentation, please see:
+
+
+
+
+
+
\ No newline at end of file
diff --git a/src/java/com/opensymphony/oscache/web/package.html b/src/java/com/opensymphony/oscache/web/package.html
new file mode 100644
index 0000000..e1760cb
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/package.html
@@ -0,0 +1,31 @@
+
+
+
+
+
+
+
+Provides classes and interfaces that make up the base of OSCache's web application support.
+
+
+Package Specification
+
+Related Documentation
+
+
+For overviews, tutorials, examples, guides, and tool documentation, please see:
+
+
+
+
+
+
\ No newline at end of file
diff --git a/src/java/com/opensymphony/oscache/web/tag/CacheTag.java b/src/java/com/opensymphony/oscache/web/tag/CacheTag.java
new file mode 100644
index 0000000..6a66618
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/tag/CacheTag.java
@@ -0,0 +1,824 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web.tag;
+
+import com.opensymphony.oscache.base.Cache;
+import com.opensymphony.oscache.base.NeedsRefreshException;
+import com.opensymphony.oscache.util.StringUtil;
+import com.opensymphony.oscache.web.ServletCacheAdministrator;
+import com.opensymphony.oscache.web.WebEntryRefreshPolicy;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import java.io.IOException;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.jsp.JspTagException;
+import javax.servlet.jsp.PageContext;
+import javax.servlet.jsp.tagext.BodyTagSupport;
+import javax.servlet.jsp.tagext.TryCatchFinally;
+
+/**
+ * CacheTag is a tag that allows for server-side caching of post-processed JSP content.
+ *
+ * It also gives great programmatic control over refreshing, flushing and updating the cache.
+ *
+ * Usage Example:
+ *
+ * <%@ taglib uri="oscache" prefix="cache" %>
+ * <cache:cache key="mycache"
+ * scope="application"
+ * refresh="false"
+ * time="30">
+ * jsp content here... refreshed every 30 seconds
+ * </cache:cache>
+ *
+ *
+ * @author Mike Cannon-Brookes
+ * @author Todd Gochenour
+ * @author Francois Beauregard
+ * @author Alain Bergevin
+ * @version $Revision$
+ */
+public class CacheTag extends BodyTagSupport implements TryCatchFinally {
+ /**
+ * Constants for time computation
+ */
+ private final static int SECOND = 1;
+ private final static int MINUTE = 60 * SECOND;
+ private final static int HOUR = 60 * MINUTE;
+ private final static int DAY = 24 * HOUR;
+ private final static int WEEK = 7 * DAY;
+ private final static int MONTH = 30 * DAY;
+ private final static int YEAR = 365 * DAY;
+
+ /**
+ * The key under which the tag counter will be stored in the request
+ */
+ private final static String CACHE_TAG_COUNTER_KEY = "__oscache_tag_counter";
+
+ /**
+ * Constants for refresh time
+ */
+ final static private int ONE_MINUTE = 60;
+ final static private int ONE_HOUR = 60 * ONE_MINUTE;
+ final static private int DEFAULT_TIMEOUT = ONE_HOUR;
+ private static transient Log log = LogFactory.getLog(CacheTag.class);
+
+ /**
+ * Cache modes
+ */
+ final static private int SILENT_MODE = 1;
+
+ /**
+ * A flag to indicate whether a NeedsRefreshException was thrown and
+ * the update needs to be cancelled
+ */
+ boolean cancelUpdateRequired = false;
+ private Cache cache = null;
+
+ /**
+ * If no groups are specified, the cached content does not get put into any groups
+ */
+ private List groups = null;
+ private ServletCacheAdministrator admin = null;
+
+ /**
+ * The actual key to use. This is generated based on the supplied key, scope etc.
+ */
+ private String actualKey = null;
+
+ /**
+ * The content that was retrieved from cache
+ */
+ private String content = null;
+
+ /**
+ * The cron expression that is used to expire cache entries at specific dates and/or times.
+ */
+ private String cron = null;
+
+ /**
+ * if cache key is null, the request URI is used
+ */
+ private String key = null;
+
+ /**
+ * The ISO-639 language code to distinguish different pages in application scope
+ */
+ private String language = null;
+
+ /**
+ * Class used to handle the refresh policy logic
+ */
+ private String refreshPolicyClass = null;
+
+ /**
+ * Parameters that will be passed to the init method of the
+ * refresh policy instance.
+ */
+ private String refreshPolicyParam = null;
+
+ /**
+ * Whether the cache should be refreshed instantly
+ */
+ private boolean refresh = false;
+
+ /**
+ * used for subtags to tell this tag that we should use the cached version
+ */
+ private boolean useBody = true;
+
+ /**
+ * The cache mode. Valid values are SILENT_MODE
+ */
+ private int mode = 0;
+
+ /**
+ * The cache scope to use
+ */
+ private int scope = PageContext.APPLICATION_SCOPE;
+
+ /**
+ * time (in seconds) before cache should be refreshed
+ */
+ private int time = DEFAULT_TIMEOUT;
+
+ /**
+ * Set the time this cache entry will be cached for. A date and/or time in
+ * either ISO-8601 format or a simple format can be specified. The acceptable
+ * syntax for the simple format can be any one of the following:
+ *
+ *
+ * - 0 (seconds)
+ * - 0s (seconds)
+ * - 0m (minutes)
+ * - 0h (hours)
+ * - 0d (days)
+ * - 0w (weeks)
+ *
+ * @param duration The duration to cache this content (using either the simple
+ * or the ISO-8601 format). Passing in a duration of zero will turn off the
+ * caching, while a negative value will result in the cached content never
+ * expiring (ie, the cached content will always be served as long as it is
+ * present).
+ */
+ public void setDuration(String duration) {
+ try {
+ // Try Simple Date Format Duration first because it's faster
+ this.time = parseDuration(duration);
+ } catch (Exception ex) {
+ if (log.isDebugEnabled()) {
+ log.debug("Failed parsing simple duration format '" + duration + "' (" + ex.getMessage() + "). Trying ISO-8601 format...");
+ }
+
+ try {
+ // Try ISO-8601 Duration
+ this.time = parseISO_8601_Duration(duration);
+ } catch (Exception ex1) {
+ // An invalid duration entered, not much impact.
+ // The default timeout will be used
+ log.warn("The requested cache duration '" + duration + "' is invalid (" + ex1.getMessage() + "). Reverting to the default timeout");
+ this.time = DEFAULT_TIMEOUT;
+ }
+ }
+ }
+
+ /**
+ * Sets the cron expression that should be used to expire content at specific
+ * dates and/or times.
+ */
+ public void setCron(String cron) {
+ this.cron = cron;
+ }
+
+ /**
+ * Sets the groups for this cache entry. Any existing groups will
+ * be replaced.
+ *
+ * @param groups A comma-delimited list of groups that the cache entry belongs to.
+ */
+ public void setGroups(String groups) {
+ // FIXME: ArrayList doesn't avoid duplicates
+ this.groups = StringUtil.split(groups, ',');
+ }
+
+ /**
+ * Adds to the groups for this cache entry.
+ *
+ * @param group A group to which the cache entry should belong.
+ */
+ void addGroup(String group) {
+ if (groups == null) {
+ // FIXME: ArrayList doesn't avoid duplicates
+ groups = new ArrayList();
+ }
+
+ groups.add(group);
+ }
+
+ /**
+ * Adds a comma-delimited list of groups that the cache entry belongs to.
+ *
+ * @param groupsString A comma-delimited list of additional groups that the cache entry belongs to.
+ */
+ void addGroups(String groupsString) {
+ if (groups == null) {
+ // FIXME: ArrayList doesn't avoid duplicates
+ groups = new ArrayList();
+ }
+
+ groups.addAll(StringUtil.split(groupsString, ','));
+ }
+
+ /**
+ * Set the key for this cache entry.
+ *
+ * @param key The key for this cache entry.
+ */
+ public void setKey(String key) {
+ this.key = key;
+ }
+
+ /**
+ * Set the ISO-639 language code to distinguish different pages in application scope
+ *
+ * @param language The language code for this cache entry.
+ */
+ public void setLanguage(String language) {
+ this.language = language;
+ }
+
+ /**
+ * This method allows the user to programmatically decide whether the cached
+ * content should be refreshed immediately.
+ *
+ * @param refresh Whether or not to refresh this cache entry immediately.
+ */
+ public void setRefresh(boolean refresh) {
+ this.refresh = refresh;
+ }
+
+ /**
+ * Setting this to true prevents the cache from writing any output
+ * to the response; however, the JSP content is still cached as normal.
+ * @param mode The cache mode to use.
+ */
+ public void setMode(String mode) {
+ if ("silent".equalsIgnoreCase(mode)) {
+ this.mode = SILENT_MODE;
+ } else {
+ this.mode = 0;
+ }
+ }
+
+ /**
+ * Class used to handle the refresh policy logic
+ */
+ public void setRefreshpolicyclass(String refreshPolicyClass) {
+ this.refreshPolicyClass = refreshPolicyClass;
+ }
+
+ /**
+ * Parameters that will be passed to the init method of the
+ * refresh policy instance.
+ */
+ public void setRefreshpolicyparam(String refreshPolicyParam) {
+ this.refreshPolicyParam = refreshPolicyParam;
+ }
+
+ // ----------- setMethods ------------------------------------------------------
+
+ /**
+ * Set the scope of this cache.
+ *
+ * @param scope The scope of this cache. Either "application" (default) or "session".
+ */
+ public void setScope(String scope) {
+ if (scope.equalsIgnoreCase(ServletCacheAdministrator.SESSION_SCOPE_NAME)) {
+ this.scope = PageContext.SESSION_SCOPE;
+ } else {
+ this.scope = PageContext.APPLICATION_SCOPE;
+ }
+ }
+
+ /**
+ * Set the time this cache entry will be cached for (in seconds)
+ *
+ * @param time The time to cache this content (in seconds). Passing in
+ * a time of zero will turn off the caching. A negative value for the
+ * time will result in the cached content never expiring (ie, the cached
+ * content will always be served if it is present)
+ */
+ public void setTime(int time) {
+ this.time = time;
+ }
+
+ /**
+ * This controls whether or not the body of the tag is evaluated or used.
+ *
+ * It is most often called by the <UseCached /> tag to tell this tag to
+ * use the cached content.
+ *
+ * @see UseCachedTag
+ * @param useBody Whether or not to use the cached content.
+ */
+ public void setUseBody(boolean useBody) {
+ if (log.isDebugEnabled()) {
+ log.debug(": Set useBody to " + useBody);
+ }
+
+ this.useBody = useBody;
+ }
+
+ /**
+ * After the cache body, either update the cache, serve new cached content or
+ * indicate an error.
+ *
+ * @throws JspTagException The standard exception thrown.
+ * @return The standard BodyTag return.
+ */
+ public int doAfterBody() throws JspTagException {
+ String body = null;
+
+ try {
+ // if we have a body, and we have not been told to use the cached version
+ if ((bodyContent != null) && (useBody || (time == 0)) && ((body = bodyContent.getString()) != null)) {
+ if ((time != 0) || (refreshPolicyClass != null)) {
+ // Instantiate custom refresh policy if needed
+ WebEntryRefreshPolicy policy = null;
+
+ if (refreshPolicyClass != null) {
+ try {
+ policy = (WebEntryRefreshPolicy) Class.forName(refreshPolicyClass).newInstance();
+ policy.init(actualKey, refreshPolicyParam);
+ } catch (Exception e) {
+ if (log.isInfoEnabled()) {
+ log.info(": Problem instantiating or initializing refresh policy : " + refreshPolicyClass);
+ }
+ }
+ }
+
+ if (log.isDebugEnabled()) {
+ log.debug(": Updating cache entry with new content : " + actualKey);
+ }
+
+ cancelUpdateRequired = false;
+
+ if ((groups == null) || groups.isEmpty()) {
+ cache.putInCache(actualKey, body, policy);
+ } else {
+ String[] groupArray = new String[groups.size()];
+ groups.toArray(groupArray);
+ cache.putInCache(actualKey, body, groupArray, policy, null);
+ }
+ }
+ }
+ // otherwise if we have been told to use the cached content and we have cached content
+ else {
+ if (!useBody && (content != null)) {
+ if (log.isInfoEnabled()) {
+ log.info(": Using cached version as instructed, useBody = false : " + actualKey);
+ }
+
+ body = content;
+ }
+ // either the cached entry is blank and a subtag has said don't useBody, or body is null
+ else {
+ if (log.isInfoEnabled()) {
+ log.info(": Missing cached content : " + actualKey);
+ }
+
+ body = "Missing cached content";
+ }
+ }
+
+ // Only display anything if we're not running in silent mode
+ if (mode != SILENT_MODE) {
+ bodyContent.clearBody();
+ bodyContent.write(body);
+ bodyContent.writeOut(bodyContent.getEnclosingWriter());
+ }
+ } catch (java.io.IOException e) {
+ throw new JspTagException("IO Error: " + e.getMessage());
+ }
+
+ return SKIP_BODY;
+ }
+
+ public void doCatch(Throwable throwable) throws Throwable {
+ throw throwable;
+ }
+
+ /**
+ * The end tag - clean up variables used.
+ *
+ * @throws JspTagException The standard exception thrown.
+ * @return The standard BodyTag return.
+ */
+ public int doEndTag() throws JspTagException {
+ return EVAL_PAGE;
+ }
+
+ public void doFinally() {
+ if (cancelUpdateRequired && (actualKey != null)) {
+ cache.cancelUpdate(actualKey);
+ }
+
+ // reset all states, CACHE-144
+ groups = null;
+ scope = PageContext.APPLICATION_SCOPE;
+ cron = null;
+ key = null;
+ language = null;
+ refreshPolicyClass = null;
+ refreshPolicyParam = null;
+ time = DEFAULT_TIMEOUT;
+ refresh = false;
+ mode = 0;
+ }
+
+ /**
+ * The start of the tag.
+ *
+ * Grabs the administrator, the cache, the specific cache entry, then decides
+ * whether to refresh.
+ *
+ * If no refresh is needed, this serves the cached content directly.
+ *
+ * @throws JspTagException The standard exception thrown.
+ * @return The standard doStartTag() return.
+ */
+ public int doStartTag() throws JspTagException {
+ cancelUpdateRequired = false;
+ useBody = true;
+ content = null;
+
+ // We can only skip the body if the cache has the data
+ int returnCode = EVAL_BODY_BUFFERED;
+
+ if (admin == null) {
+ admin = ServletCacheAdministrator.getInstance(pageContext.getServletContext());
+ }
+
+ // Retrieve the cache
+ if (scope == PageContext.SESSION_SCOPE) {
+ cache = admin.getSessionScopeCache(((HttpServletRequest) pageContext.getRequest()).getSession(true));
+ } else {
+ cache = admin.getAppScopeCache(pageContext.getServletContext());
+ }
+
+ // This allows to have multiple cache tags on a single page without
+ // having to specify keys. However, nested cache tags are not supported.
+ // In that case you would have to supply a key.
+ String suffix = null;
+
+ if (key == null) {
+ synchronized (pageContext.getRequest()) {
+ Object o = pageContext.getRequest().getAttribute(CACHE_TAG_COUNTER_KEY);
+
+ if (o == null) {
+ suffix = "1";
+ } else {
+ suffix = Integer.toString(Integer.parseInt((String) o) + 1);
+ }
+ }
+
+ pageContext.getRequest().setAttribute(CACHE_TAG_COUNTER_KEY, suffix);
+ }
+
+ // Generate the actual cache key
+ actualKey = admin.generateEntryKey(key, (HttpServletRequest) pageContext.getRequest(), scope, language, suffix);
+
+ /*
+ if
+ - refresh is not set,
+ - the cacheEntry itself does not need to be refreshed before 'time' and
+ - the administrator has not had the cache entry's scope flushed
+
+ send out the cached version!
+ */
+ try {
+ if (refresh) {
+ // Force a refresh
+ content = (String) cache.getFromCache(actualKey, 0, cron);
+ } else {
+ // Use the specified refresh period
+ content = (String) cache.getFromCache(actualKey, time, cron);
+ }
+
+ try {
+ if (log.isDebugEnabled()) {
+ log.debug(": Using cached entry : " + actualKey);
+ }
+
+ // Ensure that the cache returns the data correctly. Else re-evaluate the body
+ if ((content != null)) {
+ if (mode != SILENT_MODE) {
+ pageContext.getOut().write(content);
+ }
+
+ returnCode = SKIP_BODY;
+ }
+ } catch (IOException e) {
+ throw new JspTagException("IO Exception: " + e.getMessage());
+ }
+ } catch (NeedsRefreshException nre) {
+ cancelUpdateRequired = true;
+ content = (String) nre.getCacheContent();
+ }
+
+ if (returnCode == EVAL_BODY_BUFFERED) {
+ if (log.isDebugEnabled()) {
+ log.debug(": Cached content not used: New cache entry, cache stale or scope flushed : " + actualKey);
+ }
+ }
+
+ return returnCode;
+ }
+
+ /**
+ * Convert a simple duration format string to seconds.
+ * Acceptable formats are:
+ *
+ * - 0s (seconds)
+ * - 0m (minutes)
+ * - 0h (hours)
+ * - 0d (days)
+ * - 0w (weeks)
+ *
+ * @param duration The simple duration string to parse
+ * @return The value in seconds
+ */
+ private int parseDuration(String duration) {
+ int time = 0;
+
+ //Detect if the factor is specified
+ try {
+ time = Integer.parseInt(duration);
+ } catch (Exception ex) {
+ //Extract the number and adjust it with the time factor
+ for (int i = 0; i < duration.length(); i++) {
+ if (!Character.isDigit(duration.charAt(i))) {
+ time = Integer.parseInt(duration.substring(0, i));
+
+ switch ((int) duration.charAt(i)) {
+ case (int) 's':
+ time *= SECOND;
+ break;
+ case (int) 'm':
+ time *= MINUTE;
+ break;
+ case (int) 'h':
+ time *= HOUR;
+ break;
+ case (int) 'd':
+ time *= DAY;
+ break;
+ case (int) 'w':
+ time *= WEEK;
+ break;
+ default:
+ //no defined use as is
+ }
+
+ break;
+ }
+
+ // if
+ }
+
+ // for
+ }
+
+ // catch
+ return time;
+ }
+
+ /**
+ * Parse an ISO-8601 format duration and return its value in seconds
+ *
+ * @param duration The ISO-8601 date
+ * @return The equivalent number of seconds
+ * @throws Exception
+ */
+ private int parseISO_8601_Duration(String duration) throws Exception {
+ int years = 0;
+ int months = 0;
+ int days = 0;
+ int hours = 0;
+ int mins = 0;
+ int secs = 0;
+
+ // If there is a negative sign, it must be first
+ // If it is present, we will ignore it
+ int index = duration.indexOf("-");
+
+ if (index > 0) {
+ throw new Exception("Invalid duration (- must be at the beginning)");
+ }
+
+ // First character must be P
+ String workValue = duration.substring(index + 1);
+
+ if (workValue.charAt(0) != 'P') {
+ throw new Exception("Invalid duration (P must be at the beginning)");
+ }
+
+ // Must contain a value
+ workValue = workValue.substring(1);
+
+ if (workValue.length() == 0) {
+ throw new Exception("Invalid duration (nothing specified)");
+ }
+
+ // Check if there is a T
+ index = workValue.indexOf('T');
+
+ String timeString = "";
+
+ if (index > 0) {
+ timeString = workValue.substring(index + 1);
+
+ // Time cannot be empty
+ if (timeString.equals("")) {
+ throw new Exception("Invalid duration (T with no time)");
+ }
+
+ workValue = workValue.substring(0, index);
+ } else if (index == 0) {
+ timeString = workValue.substring(1);
+ workValue = "";
+ }
+
+ if (!workValue.equals("")) {
+ validateDateFormat(workValue);
+
+ int yearIndex = workValue.indexOf('Y');
+ int monthIndex = workValue.indexOf('M');
+ int dayIndex = workValue.indexOf('D');
+
+ if ((yearIndex != -1) && (monthIndex != -1) && (yearIndex > monthIndex)) {
+ throw new Exception("Invalid duration (Date part not properly specified)");
+ }
+
+ if ((yearIndex != -1) && (dayIndex != -1) && (yearIndex > dayIndex)) {
+ throw new Exception("Invalid duration (Date part not properly specified)");
+ }
+
+ if ((dayIndex != -1) && (monthIndex != -1) && (monthIndex > dayIndex)) {
+ throw new Exception("Invalid duration (Date part not properly specified)");
+ }
+
+ if (yearIndex >= 0) {
+ years = (new Integer(workValue.substring(0, yearIndex))).intValue();
+ }
+
+ if (monthIndex >= 0) {
+ months = (new Integer(workValue.substring(yearIndex + 1, monthIndex))).intValue();
+ }
+
+ if (dayIndex >= 0) {
+ if (monthIndex >= 0) {
+ days = (new Integer(workValue.substring(monthIndex + 1, dayIndex))).intValue();
+ } else {
+ if (yearIndex >= 0) {
+ days = (new Integer(workValue.substring(yearIndex + 1, dayIndex))).intValue();
+ } else {
+ days = (new Integer(workValue.substring(0, dayIndex))).intValue();
+ }
+ }
+ }
+ }
+
+ if (!timeString.equals("")) {
+ validateHourFormat(timeString);
+
+ int hourIndex = timeString.indexOf('H');
+ int minuteIndex = timeString.indexOf('M');
+ int secondIndex = timeString.indexOf('S');
+
+ if ((hourIndex != -1) && (minuteIndex != -1) && (hourIndex > minuteIndex)) {
+ throw new Exception("Invalid duration (Time part not properly specified)");
+ }
+
+ if ((hourIndex != -1) && (secondIndex != -1) && (hourIndex > secondIndex)) {
+ throw new Exception("Invalid duration (Time part not properly specified)");
+ }
+
+ if ((secondIndex != -1) && (minuteIndex != -1) && (minuteIndex > secondIndex)) {
+ throw new Exception("Invalid duration (Time part not properly specified)");
+ }
+
+ if (hourIndex >= 0) {
+ hours = (new Integer(timeString.substring(0, hourIndex))).intValue();
+ }
+
+ if (minuteIndex >= 0) {
+ mins = (new Integer(timeString.substring(hourIndex + 1, minuteIndex))).intValue();
+ }
+
+ if (secondIndex >= 0) {
+ if (timeString.length() != (secondIndex + 1)) {
+ throw new Exception("Invalid duration (Time part not properly specified)");
+ }
+
+ if (minuteIndex >= 0) {
+ timeString = timeString.substring(minuteIndex + 1, timeString.length() - 1);
+ } else {
+ if (hourIndex >= 0) {
+ timeString = timeString.substring(hourIndex + 1, timeString.length() - 1);
+ } else {
+ timeString = timeString.substring(0, timeString.length() - 1);
+ }
+ }
+
+ if (timeString.indexOf('.') == (timeString.length() - 1)) {
+ throw new Exception("Invalid duration (Time part not properly specified)");
+ }
+
+ secs = (new Double(timeString)).intValue();
+ }
+ }
+
+ // Compute Value
+ return secs + (mins * MINUTE) + (hours * HOUR) + (days * DAY) + (months * MONTH) + (years * YEAR);
+ }
+
+ /**
+ * Validate the basic date format
+ *
+ * @param basicDate The string to validate
+ * @throws Exception
+ */
+ private void validateDateFormat(String basicDate) throws Exception {
+ int yearCounter = 0;
+ int monthCounter = 0;
+ int dayCounter = 0;
+
+ for (int counter = 0; counter < basicDate.length(); counter++) {
+ // Check if there are any characters other than Y, M, D and digits
+ if (!Character.isDigit(basicDate.charAt(counter)) && (basicDate.charAt(counter) != 'Y') && (basicDate.charAt(counter) != 'M') && (basicDate.charAt(counter) != 'D')) {
+ throw new Exception("Invalid duration (Date part not properly specified)");
+ }
+
+ // Check if any of the allowed characters is present more than once
+ if (basicDate.charAt(counter) == 'Y') {
+ yearCounter++;
+ }
+
+ if (basicDate.charAt(counter) == 'M') {
+ monthCounter++;
+ }
+
+ if (basicDate.charAt(counter) == 'D') {
+ dayCounter++;
+ }
+ }
+
+ if ((yearCounter > 1) || (monthCounter > 1) || (dayCounter > 1)) {
+ throw new Exception("Invalid duration (Date part not properly specified)");
+ }
+ }
+
+ /**
+ * Validate the basic hour format
+ *
+ * @param basicHour The string to validate
+ * @throws Exception
+ */
+ private void validateHourFormat(String basicHour) throws Exception {
+ int minuteCounter = 0;
+ int secondCounter = 0;
+ int hourCounter = 0;
+
+ for (int counter = 0; counter < basicHour.length(); counter++) {
+ if (!Character.isDigit(basicHour.charAt(counter)) && (basicHour.charAt(counter) != 'H') && (basicHour.charAt(counter) != 'M') && (basicHour.charAt(counter) != 'S') && (basicHour.charAt(counter) != '.')) {
+ throw new Exception("Invalid duration (Time part not properly specified)");
+ }
+
+ if (basicHour.charAt(counter) == 'H') {
+ hourCounter++;
+ }
+
+ if (basicHour.charAt(counter) == 'M') {
+ minuteCounter++;
+ }
+
+ if (basicHour.charAt(counter) == 'S') {
+ secondCounter++;
+ }
+ }
+
+ if ((hourCounter > 1) || (minuteCounter > 1) || (secondCounter > 1)) {
+ throw new Exception("Invalid duration (Time part not properly specified)");
+ }
+ }
+}
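As a worked example of the duration handling in setDuration() above (the arithmetic follows directly from the constants defined in this class): duration="30m" is handled by the simple-format branch as 30 * 60 = 1800 seconds; duration="P1DT12H" falls through to the ISO-8601 branch and yields 1 * 86400 + 12 * 3600 = 129600 seconds; and an unparseable value such as duration="soon" logs a warning and reverts to the one-hour default timeout.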
diff --git a/src/java/com/opensymphony/oscache/web/tag/FlushTag.java b/src/java/com/opensymphony/oscache/web/tag/FlushTag.java
new file mode 100644
index 0000000..8032a8d
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/tag/FlushTag.java
@@ -0,0 +1,171 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web.tag;
+
+import com.opensymphony.oscache.base.Cache;
+import com.opensymphony.oscache.web.ServletCacheAdministrator;
+
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.jsp.JspTagException;
+import javax.servlet.jsp.PageContext;
+import javax.servlet.jsp.tagext.TagSupport;
+
+/**
+ * FlushTag flushes caches created with <cache>.
+ *
+ * This tag provides programmatic control over when caches are flushed,
+ * and can flush all caches at once.
+ *
+ * Usage Examples:
+ *
+ * <%@ taglib uri="oscache" prefix="cache" %>
+ * <cache:flush scope="application" />
+ * <cache:flush scope="session" key="foobar" />
+ *
+ *
+ * Note: If no scope is provided (or scope is null), it will flush
+ * all caches globally - use with care!
+ *
+ * Flushing is done by setting an appropriate application level time,
+ * which <cache> always looks at before retrieving the cache.
+ * If this 'flush time' is > that cache's last update, it will refresh
+ * the cache.
+ *
+ * As such, caches are not actually 'flushed'; they are all marked
+ * to be refreshed at their next access. This way the old content
+ * remains available if the refresh fails.
+ *
+ * @author Mike Cannon-Brookes
+ * @author Chris Miller
+ * @version $Revision$
+ */
+public class FlushTag extends TagSupport {
+ ServletCacheAdministrator admin = null;
+
+ /**
+ * A cache group.
+ * If specified, all content in that group will be flushed
+ */
+ String group = null;
+
+ /**
+ * Tag key.
+ */
+ String key = null;
+
+ /**
+ * If a pattern value is specified, all keys that contain the pattern are flushed.
+ */
+ String pattern = null;
+ String scope = null;
+ int cacheScope = -1;
+
+ /**
+ * The ISO-639 language code to distinguish different pages in application scope.
+ */
+ private String language = null;
+
+ /**
+ * The group to be flushed.
+ * If specified, all cached content in the group will be flushed.
+ *
+ * @param group The name of the group to flush.
+ */
+ public void setGroup(String group) {
+ this.group = group;
+ }
+
+ /**
+ * The key to be flushed.
+ * If specified, only one cache entry will be flushed.
+ *
+ * @param value The key of the specific entry to flush.
+ */
+ public void setKey(String value) {
+ this.key = value;
+ }
+
+ /**
+ * Set the ISO-639 language code to distinguish different pages in application scope.
+ *
+ * @param value The language code for this cache entry.
+ */
+ public void setLanguage(String value) {
+ this.language = value;
+ }
+
+ /**
+ * The key pattern to be flushed.
+ * If specified, all entries that contain the pattern will be flushed.
+ * @param value The pattern; all entries whose keys contain it will be flushed.
+ */
+ public void setPattern(String value) {
+ this.pattern = value;
+ }
+
+ /**
+ * Set the scope of this flush.
+ *
+ * @param value The scope - either "application" (default) or "session".
+ */
+ public void setScope(String value) {
+ if (value != null) {
+ if (value.equalsIgnoreCase(ServletCacheAdministrator.SESSION_SCOPE_NAME)) {
+ cacheScope = PageContext.SESSION_SCOPE;
+ } else if (value.equalsIgnoreCase(ServletCacheAdministrator.APPLICATION_SCOPE_NAME)) {
+ cacheScope = PageContext.APPLICATION_SCOPE;
+ }
+ }
+ }
+
+ /**
+ * Process the start of the tag.
+ *
+ * @throws JspTagException The standard tag exception thrown.
+ * @return The standard Tag return.
+ */
+ public int doStartTag() throws JspTagException {
+ if (admin == null) {
+ admin = ServletCacheAdministrator.getInstance(pageContext.getServletContext());
+ }
+
+ if (group != null) // We're flushing a group
+ {
+ if (cacheScope >= 0) {
+ Cache cache = admin.getCache((HttpServletRequest) pageContext.getRequest(), cacheScope);
+ cache.flushGroup(group);
+ } else {
+ throw new JspTagException("A cache group was specified for flushing, but the scope wasn't supplied or was invalid");
+ }
+ } else if (pattern != null) // We're flushing keys which contain the pattern
+ {
+ if (cacheScope >= 0) {
+ Cache cache = admin.getCache((HttpServletRequest) pageContext.getRequest(), cacheScope);
+ cache.flushPattern(pattern);
+ } else {
+ throw new JspTagException("A pattern was specified for flushing, but the scope wasn't supplied or was invalid");
+ }
+ } else if (key == null) // we're flushing a whole scope
+ {
+ if (cacheScope >= 0) {
+ admin.setFlushTime(cacheScope);
+ } else {
+ admin.flushAll();
+ }
+ } else // we're flushing just one key
+ {
+ if (cacheScope >= 0) {
+ String actualKey = admin.generateEntryKey(key, (HttpServletRequest) pageContext.getRequest(), cacheScope, language);
+
+ Cache cache = admin.getCache((HttpServletRequest) pageContext.getRequest(), cacheScope);
+ cache.flushEntry(actualKey);
+ } else {
+ throw new JspTagException("A cache key was specified for flushing, but the scope wasn't supplied or was invalid");
+ }
+ }
+
+ return SKIP_BODY;
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/web/tag/GroupTag.java b/src/java/com/opensymphony/oscache/web/tag/GroupTag.java
new file mode 100644
index 0000000..db1ae3b
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/tag/GroupTag.java
@@ -0,0 +1,32 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web.tag;
+
+import javax.servlet.jsp.JspTagException;
+import javax.servlet.jsp.tagext.TagSupport;
+
+/**
+ * GroupTag is a tag that adds a group to an ancestor CacheTag's groups.
+ *
+ * @author Robert van der Vliet
+ */
+public class GroupTag extends TagSupport {
+ private Object group = null;
+
+ public int doStartTag() throws JspTagException {
+ CacheTag ancestorCacheTag = (CacheTag) TagSupport.findAncestorWithClass(this, CacheTag.class);
+
+ if (ancestorCacheTag == null) {
+ throw new JspTagException("GroupTag cannot be used from outside a CacheTag");
+ }
+
+ ancestorCacheTag.addGroup(String.valueOf(group));
+ return EVAL_BODY_INCLUDE;
+ }
+
+ public void setGroup(Object group) {
+ this.group = group;
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/web/tag/GroupsTag.java b/src/java/com/opensymphony/oscache/web/tag/GroupsTag.java
new file mode 100644
index 0000000..0a8ffaa
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/tag/GroupsTag.java
@@ -0,0 +1,33 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web.tag;
+
+import javax.servlet.jsp.JspTagException;
+import javax.servlet.jsp.tagext.TagSupport;
+
+/**
+ * GroupsTag is a tag that adds a comma-delimited list of groups to an ancestor CacheTag's groups.
+ *
+ * @author Lars Torunski
+ */
+public class GroupsTag extends TagSupport {
+ private Object groups = null;
+
+ public int doStartTag() throws JspTagException {
+ CacheTag ancestorCacheTag = (CacheTag) TagSupport.findAncestorWithClass(this, CacheTag.class);
+
+ if (ancestorCacheTag == null) {
+ throw new JspTagException("GroupsTag cannot be used from outside a CacheTag");
+ }
+
+ ancestorCacheTag.addGroups(String.valueOf(groups));
+
+ return EVAL_BODY_INCLUDE;
+ }
+
+ public void setGroups(Object groups) {
+ this.groups = groups;
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/web/tag/UseCachedTag.java b/src/java/com/opensymphony/oscache/web/tag/UseCachedTag.java
new file mode 100644
index 0000000..594cc36
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/tag/UseCachedTag.java
@@ -0,0 +1,60 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web.tag;
+
+import javax.servlet.jsp.JspTagException;
+import javax.servlet.jsp.tagext.TagSupport;
+
+/**
+ * UseCachedTag is a tag that tells a <cache> tag to reuse the cached body.
+ *
+ * Usage Example:
+ *
+ * <%@ taglib uri="oscache" prefix="cache" %>
+ * <cache:cache key="mycache" scope="application">
+ * if (reuse cached)
+ * <cache:usecached />
+ * else
+ * some other logic
+ * </cache:cache>
+ *
+ *
+ * Note this is very useful with try / catch blocks
+ * so that you can still serve old cached data if an
+ * exception occurs, e.g. your database goes down.
+ *
+ * @author Mike Cannon-Brookes
+ * @version $Revision$
+ */
+public class UseCachedTag extends TagSupport {
+ boolean use = true;
+
+ /**
+ * Set the decision to use the body content of the ancestor <cache> or not.
+ *
+ * @param value Whether or not to use the body content. Default is true.
+ */
+ public void setUse(boolean value) {
+ this.use = value;
+ }
+
+ /**
+ * The start tag.
+ *
+ * @throws JspTagException The standard tag exception thrown.
+ * @return The standard Tag return.
+ */
+ public int doStartTag() throws JspTagException {
+ CacheTag cacheTag = (CacheTag) TagSupport.findAncestorWithClass(this, CacheTag.class);
+
+ if (cacheTag == null) {
+ throw new JspTagException("A UseCached tag must be nested within a Cache tag");
+ }
+
+ cacheTag.setUseBody(!use);
+
+ return SKIP_BODY;
+ }
+}
diff --git a/src/java/com/opensymphony/oscache/web/tag/package.html b/src/java/com/opensymphony/oscache/web/tag/package.html
new file mode 100644
index 0000000..509aec6
--- /dev/null
+++ b/src/java/com/opensymphony/oscache/web/tag/package.html
@@ -0,0 +1,33 @@
+
+
+
+
+
+
+
+Provides the tag libraries that allow OSCache to be accessed via JSP custom tags for
+caching portions of JSP pages.
+
+
+Package Specification
+
+Related Documentation
+
+
+For overviews, tutorials, examples, guides, and tool documentation, please see:
+
+
+
+
+
+
\ No newline at end of file
diff --git a/src/java/overview.html b/src/java/overview.html
new file mode 100644
index 0000000..3be1bd6
--- /dev/null
+++ b/src/java/overview.html
@@ -0,0 +1,6 @@
+
+
+This document is the API specification for OSCache.
+
+
\ No newline at end of file
diff --git a/src/test/java/com/opensymphony/oscache/base/DummyAlwayRefreshEntryPolicy.java b/src/test/java/com/opensymphony/oscache/base/DummyAlwayRefreshEntryPolicy.java
new file mode 100644
index 0000000..305a8cb
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/DummyAlwayRefreshEntryPolicy.java
@@ -0,0 +1,30 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base;
+
+
+/**
+ * This is a dummy implementation of an EntryRefreshPolicy. It just
+ * illustrates how to implement one.
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Francois Beauregard
+ */
+public final class DummyAlwayRefreshEntryPolicy implements EntryRefreshPolicy {
+ /**
+ * Dummy implementation of an entry refresh policy. A real implementation
+ * would apply some logic to determine whether this entry needs to be
+ * refreshed, for example by calling a bean, checking some files, or
+ * managing the expiry time manually.
+ *
+ * @param entry The entry for which to determine if a refresh is needed
+ * @return True if the entry needs to be refreshed, false otherwise
+ */
+ public boolean needsRefresh(CacheEntry entry) {
+ return true;
+ }
+}
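
A slightly more realistic policy than the dummy above could base the refresh decision on the entry's age. The sketch below is not part of this commit; the class name is made up, and it relies only on EntryRefreshPolicy.needsRefresh(CacheEntry) and CacheEntry.getLastUpdate(), both exercised elsewhere in this patch.

import com.opensymphony.oscache.base.CacheEntry;
import com.opensymphony.oscache.base.EntryRefreshPolicy;

public class MaxAgeRefreshPolicy implements EntryRefreshPolicy {
    private final long maxAgeMillis;

    public MaxAgeRefreshPolicy(long maxAgeMillis) {
        this.maxAgeMillis = maxAgeMillis;
    }

    public boolean needsRefresh(CacheEntry entry) {
        // The entry is considered stale once it is older than the configured maximum age.
        return (System.currentTimeMillis() - entry.getLastUpdate()) > maxAgeMillis;
    }
}
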
diff --git a/src/test/java/com/opensymphony/oscache/base/GroupConcurrencyProblemTestCase.java b/src/test/java/com/opensymphony/oscache/base/GroupConcurrencyProblemTestCase.java
new file mode 100644
index 0000000..86b982e
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/GroupConcurrencyProblemTestCase.java
@@ -0,0 +1,59 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base;
+
+import com.opensymphony.oscache.general.GeneralCacheAdministrator;
+
+import junit.framework.TestCase;
+
+/**
+ * Test that hammers the cache from many threads, each repeatedly putting an entry
+ * into a shared group and flushing that group, to expose group-related concurrency problems.
+ *
+ * @author $author$
+ * @version $Revision$
+ */
+public class GroupConcurrencyProblemTestCase extends TestCase {
+ private static GeneralCacheAdministrator cache = new GeneralCacheAdministrator();
+
+ public static void main(String[] args) {
+ System.out.println("START");
+
+ // Create some clients and start them running.
+ for (int i = 0; i < 100; i++) {
+ System.out.println("Creating thread: " + i);
+
+ new Client(i, cache).start();
+ }
+
+ System.out.println("END");
+ }
+}
+
+
+/* Inner class to hammer away at the cache. */
+class Client extends Thread {
+ private static final int MAX_ITERATIONS = 1000;
+ private GeneralCacheAdministrator cache;
+ private int id;
+
+ public Client(int newId, GeneralCacheAdministrator newCache) {
+ super();
+ id = newId;
+ cache = newCache;
+ }
+
+ public void run() {
+ for (int i = 0; i < MAX_ITERATIONS; i++) {
+ /* Put an entry from this Client into the shared group.
+ */
+ cache.putInCache(Integer.toString(id), "Some interesting data", new String[] {
+ "GLOBAL_GROUP"
+ });
+
+ // Flush that group.
+ cache.flushGroup("GLOBAL_GROUP");
+ }
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/base/TestAbstractCacheAdministrator.java b/src/test/java/com/opensymphony/oscache/base/TestAbstractCacheAdministrator.java
new file mode 100644
index 0000000..da4e497
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/TestAbstractCacheAdministrator.java
@@ -0,0 +1,95 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base;
+
+import junit.framework.TestCase;
+
+/**
+ * Test class for the AbstractCacheAdministrator class. It tests some of the
+ * public methods of the admin. Others cannot be tested because they depend
+ * on the property file used for the tests; since that file changes, the
+ * values of some parameters cannot be asserted
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public abstract class TestAbstractCacheAdministrator extends TestCase {
+ // Constants used in the tests
+ //private final String CACHE_PATH_PROP = "cache.path";
+ //private final String CONTENT = "Content for the abstract cache admin test";
+ //private final String ENTRY_KEY = "Test Abstract Admin Key";
+ private final String INVALID_PROP_NAME = "INVALID_PROP_NAME";
+ //private final String TEST_LOG = "test log";
+
+ /**
+ * Constructor for this test class.
+ *
+ * @param str Test name (required by JUnit)
+ */
+ protected TestAbstractCacheAdministrator(String str) {
+ super(str);
+ }
+
+ /**
+ * Cannot be tested since CacheContents is an interface
+ */
+ public void testCacheContents() {
+ }
+
+ /**
+ * We cannot test this method because the value depends on the property
+ */
+ public void testGetCachePath() {
+ }
+
+ /**
+ * Validate that the properties retrieved by the admin are the same as the ones
+ * specified in the property file. The cache path and memory cache are not
+ * tested since they change between tests
+ */
+ public void testGetProperty() {
+ // Check if all the default properties are OK
+ assertNull(getAdmin().getProperty(INVALID_PROP_NAME));
+ assertNull(getAdmin().getProperty(""));
+
+ try {
+ assertNull(getAdmin().getProperty(null));
+ fail("NullPointerException expected (property Key null).");
+ } catch (Exception e) {
+ }
+ }
+
+ /**
+ * We cannot test this method because the value depends on the property
+ */
+ public void testIsFileCaching() {
+ }
+
+ /**
+ * We cannot test this method because the value depends on the property
+ */
+ public void testIsMemoryCaching() {
+ }
+
+ /**
+ * Perform a call to the log method. Unfortunately, there is no way to check
+ * whether the logging is done correctly; we only invoke it
+ */
+ public void testLog() {
+ // Invoke the log
+ // The other log method is not tested since it calls the same as we do
+ //TODO
+
+ /*getAdmin().log(TEST_LOG, System.out);
+ getAdmin().log("", System.out);
+ getAdmin().log(null, System.out);
+ getAdmin().log(TEST_LOG, null);
+ */
+ }
+
+ // Abstract method that returns an instance of an admin
+ protected abstract AbstractCacheAdministrator getAdmin();
+}
diff --git a/src/test/java/com/opensymphony/oscache/base/TestCache.java b/src/test/java/com/opensymphony/oscache/base/TestCache.java
new file mode 100644
index 0000000..ad07cb4
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/TestCache.java
@@ -0,0 +1,275 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base;
+
+import java.util.Properties;
+
+import com.opensymphony.oscache.general.GeneralCacheAdministrator;
+
+import junit.framework.Assert;
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Test the public methods of the Cache class
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public class TestCache extends TestCase {
+ // Static variables required thru all the tests
+ private static Cache map = null;
+ private final String CONTENT = "Content for the cache test";
+
+ // Constants needed thru all the tests
+ private final String ENTRY_KEY = "Test cache key";
+ private final int NO_REFRESH_NEEDED = CacheEntry.INDEFINITE_EXPIRY;
+ private final int REFRESH_NEEDED = 0;
+
+ /**
+ * Class constructor.
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestCache(String str) {
+ super(str);
+ }
+
+ /**
+ * This method is invoked before each testXXXX method of the
+ * class. It sets up the variables required for each test.
+ */
+ public void setUp() {
+ // At first invocation, create a new Cache
+ if (map == null) {
+ GeneralCacheAdministrator admin = new GeneralCacheAdministrator();
+ map = admin.getCache();
+ assertNotNull(map);
+ }
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The test suite for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestCache.class);
+ }
+
+ /**
+ * Verify that items may still be flushed by key pattern
+ */
+ public void testFlushPattern() {
+ // Try to flush with a bad pattern and ensure that our data is still there
+ map.putInCache(ENTRY_KEY, CONTENT);
+ map.flushPattern(ENTRY_KEY + "do not flush");
+ getBackContent(map, CONTENT, NO_REFRESH_NEEDED, false);
+
+ // Flush our map for real
+ map.flushPattern(ENTRY_KEY.substring(1, 2));
+ getBackContent(map, CONTENT, NO_REFRESH_NEEDED, true);
+
+ // Check invalid values
+ map.flushPattern("");
+ map.flushPattern(null);
+ }
+
+ /**
+ * Tests that adding a very large number of keys, enough to trigger cache overflows, does not cause a memory leak
+ * @throws Exception
+ */
+ public void testBug174CacheOverflow() throws Exception {
+
+ Properties p = new Properties();
+ p.setProperty(AbstractCacheAdministrator.CACHE_ALGORITHM_KEY, "com.opensymphony.oscache.base.algorithm.LRUCache");
+ p.setProperty(AbstractCacheAdministrator.CACHE_CAPACITY_KEY, "100");
+ GeneralCacheAdministrator admin = new GeneralCacheAdministrator(p);
+
+ int cacheCapacity = 100;
+ int maxAddedCacheEntries = cacheCapacity*10;
+ String baseCacheKey= "baseKey";
+ String cacheValue ="same_value";
+
+ admin.setCacheCapacity(cacheCapacity);
+
+ Cache cache = admin.getCache();
+
+ //Add lots of different keys to trigger cache overflow
+ for (int keyIndex = 0; keyIndex < maxAddedCacheEntries; keyIndex++) {
+ cache.putInCache(baseCacheKey + keyIndex, cacheValue);
+ }
+ }
+
+ /**
+ * Utility method that tries to get an entry back from the cache and checks
+ * whether a NeedsRefreshException is thrown as expected.
+ *
+ * @param map The Cache in which the data is stored
+ * @param content The content expected to be retrieved
+ * @param refresh Time interval to determine if the cache object needs refresh
+ * @param exceptionExpected Specify if a NeedsRefreshException is expected
+ */
+ private void getBackContent(Cache map, Object content, int refresh, boolean exceptionExpected) {
+ try {
+ assertEquals(content, map.getFromCache(ENTRY_KEY, refresh));
+
+ if (exceptionExpected) {
+ fail("NeedsRefreshException should have been thrown!");
+ }
+ } catch (NeedsRefreshException nre) {
+ map.cancelUpdate(ENTRY_KEY);
+
+ if (!exceptionExpected) {
+ fail("NeedsRefreshException shouldn't have been thrown!");
+ }
+ }
+ }
+}
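
The programmatic configuration used in testBug174CacheOverflow above has a declarative counterpart. Assuming CACHE_ALGORITHM_KEY and CACHE_CAPACITY_KEY map to the standard cache.algorithm and cache.capacity properties, roughly the same setup in cache.properties would be:

cache.algorithm=com.opensymphony.oscache.base.algorithm.LRUCache
cache.capacity=100
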
diff --git a/src/test/java/com/opensymphony/oscache/base/TestCacheEntry.java b/src/test/java/com/opensymphony/oscache/base/TestCacheEntry.java
new file mode 100644
index 0000000..d63b02f
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/TestCacheEntry.java
@@ -0,0 +1,136 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Test the public methods of the CacheEntry class
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public class TestCacheEntry extends TestCase {
+ // Static variables required thru the tests
+ static CacheEntry entry = null;
+ static long beforeCreation = 0;
+ static long afterCreation = 0;
+ private final String CONTENT = "Content for the cache entry test";
+
+ // Constants used thru the tests
+ private final String ENTRY_KEY = "Test cache entry key";
+ private final int NO_REFRESH_NEEDED = 1000000;
+ private final int REFRESH_NEEDED = 0;
+
+ /**
+ * Class constructor
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestCacheEntry(String str) {
+ super(str);
+ }
+
+ /**
+ * This method is invoked before each testXXXX method of the
+ * class. It sets up the variables required for each test.
+ */
+ public void setUp() {
+ // At first invocation, create a cache entry object
+ if (entry == null) {
+ // Log the time before and after to verify the creation time
+ // in one of the tests
+ beforeCreation = System.currentTimeMillis();
+
+ entry = new CacheEntry(ENTRY_KEY);
+ afterCreation = System.currentTimeMillis();
+ }
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The test suite for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestCacheEntry.class);
+ }
+
+ /**
+ * Verify the flush
+ */
+ public void testFlush() {
+ // Set the content so it shouldn't need refresh
+ entry.setContent(CONTENT);
+ assertTrue(!entry.needsRefresh(NO_REFRESH_NEEDED));
+
+ // Flush the entry. It should now need a refresh
+ entry.flush();
+ assertTrue(entry.needsRefresh(NO_REFRESH_NEEDED));
+ }
+
+ /**
+ * Verify that the creation time is correct
+ */
+ public void testGetCreated() {
+ assertBetweenOrEquals(beforeCreation, entry.getCreated(), afterCreation);
+ }
+
+ /**
+ * Retrieve the item created by the setup
+ */
+ public void testGetKey() {
+ assertTrue(entry.getKey().equals(ENTRY_KEY));
+ }
+
+ /**
+ * Verify that the last modification time is between the time before and
+ * after the alteration of the item
+ */
+ public void testGetLastUpdate() {
+ // Set the content again, then ensure that the update time is between our timestamps
+ long before = System.currentTimeMillis();
+ entry.setContent(CONTENT);
+
+ long after = System.currentTimeMillis();
+ assertBetweenOrEquals(before, entry.getLastUpdate(), after);
+ }
+
+ /**
+ * Verify that the "freshness detection" function properly
+ */
+ public void testNeedsRefresh() {
+ // Set the entry content so it shouldn't need refresh
+ // Invoke needsRefresh with no delay, so it should return true.
+ // Then invoke it with a big delay, so it should return false
+ assertTrue(entry.needsRefresh(REFRESH_NEEDED));
+ assertTrue(!entry.needsRefresh(NO_REFRESH_NEEDED));
+ }
+
+ /**
+ * Set the content of the item created by setup and then retrieve it and
+ * validate it
+ */
+ public void testSetGetContent() {
+ entry.setContent(CONTENT);
+ assertTrue(CONTENT.equals(entry.getContent()));
+
+ // Ensure that nulls are allowed
+ entry.setContent(null);
+ assertNull(entry.getContent());
+ }
+
+ /**
+ * Ensure that a value is between two others. Since the execution may be
+ * very fast, equal values are also considered to be between
+ */
+ private void assertBetweenOrEquals(long first, long between, long last) {
+ assertTrue(between >= first);
+ assertTrue(between <= last);
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/base/TestCompleteBase.java b/src/test/java/com/opensymphony/oscache/base/TestCompleteBase.java
new file mode 100644
index 0000000..919adb7
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/TestCompleteBase.java
@@ -0,0 +1,63 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base;
+
+import com.opensymphony.oscache.base.algorithm.TestCompleteAlgorithm;
+import com.opensymphony.oscache.base.events.TestCompleteEvents;
+import com.opensymphony.oscache.util.TestFastCronParser;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Test class for the com.opensymphony.oscache.base package.
+ * It invokes all the test suites of all the other classes of the package.
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestCompleteBase extends TestCase {
+ /**
+ * Constructor for the osCache project main test program
+ */
+ public TestCompleteBase(String str) {
+ super(str);
+ }
+
+ /**
+ * Main method which is called to perform the tests
+ *
+ * @param args Arguments received
+ */
+ public static void main(String[] args) {
+ // Run the test suite
+ junit.swingui.TestRunner testRunner = new junit.swingui.TestRunner();
+ testRunner.setLoading(false);
+
+ String[] args2 = {TestCompleteBase.class.getName()};
+ testRunner.start(args2);
+ }
+
+ /**
+ * Test suite required to test this project
+ *
+ * @return suite The test suite
+ */
+ public static Test suite() {
+ // Add all the tests suite of all the project classes
+ TestSuite suite = new TestSuite("Test all base cache modules");
+ suite.addTest(TestFastCronParser.suite());
+ suite.addTest(TestCacheEntry.suite());
+ suite.addTest(TestCache.suite());
+ suite.addTest(TestConcurrency.suite());
+ suite.addTest(TestConcurrency2.suite());
+ suite.addTest(TestCompleteAlgorithm.suite());
+ suite.addTest(TestCompleteEvents.suite());
+
+ return suite;
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/base/TestConcurrency.java b/src/test/java/com/opensymphony/oscache/base/TestConcurrency.java
new file mode 100644
index 0000000..64f06a6
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/TestConcurrency.java
@@ -0,0 +1,489 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base;
+
+import com.opensymphony.oscache.general.GeneralCacheAdministrator;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import java.util.BitSet;
+import java.util.Properties;
+
+/**
+ * Test the Cache class for any concurrency problems
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Chris Miller
+ */
+public class TestConcurrency extends TestCase {
+ private static transient final Log log = LogFactory.getLog(GeneralCacheAdministrator.class); //TestConcurrency.class
+
+ // Static variables required thru all the tests
+ private static GeneralCacheAdministrator admin = null;
+
+ // Constants needed in the tests
+ private final String KEY = "key";
+ private final String VALUE = "This is some content";
+ private final int ITERATION_COUNT = 5; //500;
+ private final int THREAD_COUNT = 6; //600;
+ private final int UNIQUE_KEYS = 1013;
+
+ /**
+ * Class constructor.
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestConcurrency(String str) {
+ super(str);
+ }
+
+ /**
+ * This method is invoked before each testXXXX method of the
+ * class. It sets up the variables required for each test.
+ */
+ public void setUp() {
+ // At first invocation, create a new Cache
+ if (admin == null) {
+ Properties config = new Properties();
+ config.setProperty(AbstractCacheAdministrator.CACHE_CAPACITY_KEY, "70");
+ config.setProperty(AbstractCacheAdministrator.CACHE_BLOCKING_KEY, "false");
+ admin = new GeneralCacheAdministrator(config);
+ assertNotNull(admin);
+ }
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The test suite for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestConcurrency.class);
+ }
+
+ /**
+ * Check that the cache handles simultaneous attempts to access a
+ * new cache entry correctly
+ */
+ public void testNewEntry() {
+ String key = "new";
+
+ try {
+ admin.getFromCache(key, -1);
+ fail("NeedsRefreshException should have been thrown");
+ } catch (NeedsRefreshException nre) {
+ // Fire off another couple of threads to get the same cache entry
+ GetEntry getEntry = new GetEntry(key, VALUE, -1, false);
+ Thread thread = new Thread(getEntry);
+ thread.start();
+ getEntry = new GetEntry(key, VALUE, -1, false);
+ thread = new Thread(getEntry);
+ thread.start();
+
+ // OK, those threads should now be blocked waiting for the new cache
+ // entry to appear. Sleep for a bit to simulate the time taken to
+ // build the cache entry
+ try {
+ Thread.sleep(500);
+ } catch (InterruptedException ie) {
+ }
+
+ // Putting the entry in the cache should unblock the previous threads
+ admin.putInCache(key, VALUE);
+ }
+ }
+
+ /**
+ * Check that the cache handles simultaneous attempts to access a
+ * new cache entry correctly
+ */
+ public void testNewEntryCancel() {
+ String key = "newCancel";
+ String NEW_VALUE = VALUE + "...";
+
+ try {
+ admin.getFromCache(key, -1);
+ fail("NeedsRefreshException should have been thrown");
+ } catch (NeedsRefreshException nre) {
+ // Fire off another thread to get the same cache entry
+ GetEntry getEntry = new GetEntry(key, NEW_VALUE, -1, true);
+ Thread thread = new Thread(getEntry);
+ thread.start();
+
+ // The above thread will be blocked waiting for the new content
+ try {
+ Thread.sleep(500);
+ } catch (InterruptedException ie) {
+ }
+
+ // Now cancel the update (eg because an exception occurred while building the content).
+ // This will unblock the other thread and it will receive a NeedsRefreshException.
+ admin.cancelUpdate(key);
+
+ // Wait a bit for the other thread to update the cache
+ try {
+ Thread.sleep(500);
+ } catch (InterruptedException ie) {
+ }
+
+ try {
+ Object newValue = admin.getFromCache(key, -1);
+ assertEquals(NEW_VALUE, newValue);
+ } catch (NeedsRefreshException e) {
+ admin.cancelUpdate(key);
+ fail("A NeedsRefreshException should not have been thrown");
+ }
+ }
+ }
+
+ /**
+ * Verify that we can concurrently access the cache without problems
+ */
+ public void testPut() {
+ Thread[] thread = new Thread[THREAD_COUNT];
+
+ for (int idx = 0; idx < THREAD_COUNT; idx++) {
+ OSGeneralTest runner = new OSGeneralTest();
+ thread[idx] = new Thread(runner);
+ thread[idx].start();
+ }
+
+ boolean stillAlive;
+
+ do {
+ try {
+ Thread.sleep(100);
+ } catch (InterruptedException e) {
+ // do nothing
+ }
+
+ stillAlive = false;
+
+ int i = 0;
+
+ while ((i < thread.length) && !stillAlive) {
+ stillAlive |= thread[i++].isAlive();
+ }
+ } while (stillAlive);
+ }
+
+ /**
+ * Check that the cache handles simultaneous attempts to access a
+ * stale cache entry correctly
+ */
+ public void testStaleEntry() {
+ String key = "stale";
+ assertFalse("The cache should not be in blocking mode for this test.", admin.isBlocking());
+
+ admin.putInCache(key, VALUE);
+
+ try {
+ // This should throw a NeedsRefreshException since the refresh
+ // period is 0
+ admin.getFromCache(key, 0);
+ fail("NeedsRefreshException should have been thrown");
+ } catch (NeedsRefreshException nre) {
+ // Fire off another thread to get the same cache entry.
+ // Since blocking mode is currently disabled we should
+ // immediately get back the stale entry
+ GetEntry getEntry = new GetEntry(key, VALUE, 0, false);
+ Thread thread = new Thread(getEntry);
+ thread.start();
+
+ // Sleep for a bit to simulate the time taken to build the cache entry
+ try {
+ Thread.sleep(200);
+ } catch (InterruptedException ie) {
+ }
+
+ // Putting the entry in the cache should mean that threads now retrieve
+ // the updated entry
+ String newValue = "New value";
+ admin.putInCache(key, newValue);
+
+ getEntry = new GetEntry(key, newValue, -1, false);
+ thread = new Thread(getEntry);
+ thread.start();
+
+ try {
+ Object fromCache = admin.getFromCache(key, -1);
+ assertEquals(newValue, fromCache);
+ } catch (NeedsRefreshException e) {
+ admin.cancelUpdate(key);
+ fail("Should not have received a NeedsRefreshException");
+ }
+
+ // Give the GetEntry thread a chance to finish
+ try {
+ Thread.sleep(200);
+ } catch (InterruptedException ie) {
+ }
+ }
+ }
+
+ /**
+ * A test for the updating of a stale entry when CACHE.BLOCKING = TRUE
+ */
+ public void testStaleEntryBlocking() {
+ // A test for the case where oscache.blocking = true
+ admin.destroy();
+
+ Properties p = new Properties();
+ p.setProperty(AbstractCacheAdministrator.CACHE_BLOCKING_KEY, "true");
+ admin = new GeneralCacheAdministrator(p);
+
+ assertTrue("The cache should be in blocking mode for this test.", admin.isBlocking());
+
+ // Use a unique key in case these test entries are being persisted
+ String key = "blocking";
+ String NEW_VALUE = VALUE + " abc";
+ admin.putInCache(key, VALUE);
+
+ try {
+ // Force a NeedsRefreshException
+ admin.getFromCache(key, 0);
+ fail("NeedsRefreshException should have been thrown");
+ } catch (NeedsRefreshException nre) {
+ // Fire off another thread to get the same cache entry.
+ // Since blocking mode is enabled this thread should block
+ // until the entry has been updated.
+ GetEntry getEntry = new GetEntry(key, NEW_VALUE, 0, false);
+ Thread thread = new Thread(getEntry);
+ thread.start();
+
+ // Sleep for a bit to simulate the time taken to build the cache entry
+ try {
+ Thread.sleep(200);
+ } catch (InterruptedException ie) {
+ }
+
+ // Putting the entry in the cache should mean that threads now retrieve
+ // the updated entry
+ admin.putInCache(key, NEW_VALUE);
+
+ getEntry = new GetEntry(key, NEW_VALUE, -1, false);
+ thread = new Thread(getEntry);
+ thread.start();
+
+ try {
+ Object fromCache = admin.getFromCache(key, -1);
+ assertEquals(NEW_VALUE, fromCache);
+ } catch (NeedsRefreshException e) {
+ admin.cancelUpdate(key);
+ fail("Should not have received a NeedsRefreshException");
+ }
+ }
+ }
+
+ /**
+ * Checks whether the cache handles simultaneous attempts to access a
+ * stale cache entry correctly when the blocking mode is enabled.
+ *
+ * Basically N threads concurrently try to access the same stale cache entry, and each cancels its update.
+ * Each thread repeats this operation M times. The test is successful if, after some time, all threads are properly released.
+ */
+ public void testConcurrentStaleGets() {
+ GeneralCacheAdministrator staticAdmin = admin;
+ admin = new GeneralCacheAdministrator(); //avoid polluting other test cases
+
+ try {
+ // A test for the case where oscache.blocking = true
+ //admin.destroy();
+ Properties p = new Properties();
+ p.setProperty(AbstractCacheAdministrator.CACHE_BLOCKING_KEY, "true");
+ admin = new GeneralCacheAdministrator(p);
+
+ assertTrue("The cache should be in blocking mode for this test.", admin.isBlocking());
+
+ int nbThreads = 50;
+ int retryByThreads = 10000;
+
+ String key = "new";
+
+ //First put a value
+ admin.putInCache(key, VALUE);
+
+ try {
+ //Then test without concurrency that it is reported as stale when time-to-live is zero
+ admin.getFromCache(key, 0);
+ fail("NeedsRefreshException should have been thrown");
+ } catch (NeedsRefreshException nre) {
+ //OK, this is what is expected; we can release the update
+ admin.cancelUpdate(key);
+ }
+
+ //Then ask N threads to concurrently try to access this stale resource and each should receive a NeedsRefreshException, and cancel the update
+ Thread[] spawnedThreads = new Thread[nbThreads];
+ BitSet successfullThreadTerminations = new BitSet(nbThreads); //Track which thread successfully terminated
+
+ for (int threadIndex = 0; threadIndex < nbThreads; threadIndex++) {
+ GetStaleEntryAndCancelUpdate getEntry = new GetStaleEntryAndCancelUpdate(key, 0, retryByThreads, threadIndex, successfullThreadTerminations);
+ Thread thread = new Thread(getEntry);
+ spawnedThreads[threadIndex] = thread;
+ thread.start();
+ }
+
+ // OK, those threads should now repeatedly be blocked waiting for the new cache
+ // entry to appear. Wait for all of them to terminate
+ long maxWaitingSeconds = 100;
+ int maxWaitForEachThread = 5;
+ long waitStartTime = System.currentTimeMillis();
+
+ boolean atLeastOneThreadRunning = false;
+
+ while ((System.currentTimeMillis() - waitStartTime) < (maxWaitingSeconds * 1000)) {
+ atLeastOneThreadRunning = false;
+
+ //Wait a bit between each step to avoid consuming all CPU and preventing other threads from running.
+ try {
+ Thread.sleep(500);
+ } catch (InterruptedException ie) {
+ }
+
+ //check whether all threads are done.
+ for (int threadIndex = 0; threadIndex < nbThreads;
+ threadIndex++) {
+ Thread inspectedThread = spawnedThreads[threadIndex];
+
+ try {
+ inspectedThread.join(maxWaitForEachThread * 1000L);
+ } catch (InterruptedException e) {
+ fail("Thread #" + threadIndex + " was interrupted");
+ }
+
+ if (inspectedThread.isAlive()) {
+ atLeastOneThreadRunning = true;
+ log.error("Thread #" + threadIndex + " did not complete within [" + ((System.currentTimeMillis() - waitStartTime) / 1000) + "] s ");
+ }
+ }
+
+ if (!atLeastOneThreadRunning) {
+ break; //while loop, test success.
+ }
+ }
+
+ assertTrue("at least one thread did not complete within [" + ((System.currentTimeMillis() - waitStartTime) / 1000) + "] s ", !atLeastOneThreadRunning);
+
+ for (int threadIndex = 0; threadIndex < nbThreads; threadIndex++) {
+ assertTrue("thread [" + threadIndex + "] did not successfully complete. ", successfullThreadTerminations.get(threadIndex));
+ }
+ } finally {
+ admin = staticAdmin;
+
+ //Avoid polluting other test cases
+ }
+ }
+
+ private class GetEntry implements Runnable {
+ String key;
+ String value;
+ boolean expectNRE;
+ int time;
+
+ GetEntry(String key, String value, int time, boolean expectNRE) {
+ this.key = key;
+ this.value = value;
+ this.time = time;
+ this.expectNRE = expectNRE;
+ }
+
+ public void run() {
+ try {
+ // Get from the cache
+ Object fromCache = admin.getFromCache(key, time);
+ assertEquals(value, fromCache);
+ } catch (NeedsRefreshException nre) {
+ if (!expectNRE) {
+ admin.cancelUpdate(key);
+ fail("Thread should have blocked until a new cache entry was ready");
+ } else {
+ // Put a new piece of content into the cache
+ admin.putInCache(key, value);
+ }
+ }
+ }
+ }
+
+ /**
+ * Basically requests a stale entry, expects to receive a NeedsRefreshException, and always cancels the update.
+ */
+ private class GetStaleEntryAndCancelUpdate implements Runnable {
+ String key;
+ int retries;
+ int time;
+ private final BitSet successfullThreadTerminations;
+ private final int threadIndex;
+
+ GetStaleEntryAndCancelUpdate(String key, int time, int retries, int threadIndex, BitSet successfullThreadTerminations) {
+ this.key = key;
+ this.time = time;
+ this.retries = retries;
+ this.threadIndex = threadIndex;
+ this.successfullThreadTerminations = successfullThreadTerminations;
+ }
+
+ public void run() {
+ for (int retryIndex = 0; retryIndex < retries; retryIndex++) {
+ try {
+ // Get from the cache
+ Object fromCache = admin.getFromCache(key, time);
+ assertNull("Thread index [" + retryIndex + "] expected stale request [" + retryIndex + "] to be received, got [" + fromCache + "]", fromCache);
+ } catch (NeedsRefreshException nre) {
+ try {
+ admin.cancelUpdate(key);
+ } catch (Throwable t) {
+ log.error("Thread index [" + retryIndex + "]: Unexpectedly caught exception [" + t + "]", t);
+ fail("Thread index [" + retryIndex + "] : Unexpectedly caught exception [" + t + "]");
+ }
+ } catch (Throwable t) {
+ log.error("Thread index [" + retryIndex + "] : Unexpectedly caught exception [" + t + "]", t);
+ fail("Thread index [" + retryIndex + "] : Unexpectedly caught exception [" + t + "]");
+ }
+ }
+
+ //Once we successfully terminate, we update the corresponding bit to let the Junit know we succeeded.
+ synchronized (successfullThreadTerminations) {
+ successfullThreadTerminations.set(threadIndex);
+ }
+ }
+ }
+
+ private class OSGeneralTest implements Runnable {
+ public void doit(int i) {
+ int refreshPeriod = 500 /*millis*/;
+ String key = KEY + (i % UNIQUE_KEYS);
+ admin.putInCache(key, VALUE);
+
+ try {
+ // Get from the cache
+ admin.getFromCache(KEY, refreshPeriod);
+ } catch (NeedsRefreshException nre) {
+ // Get the value
+ // Store in the cache
+ admin.putInCache(KEY, VALUE);
+ }
+
+ // Flush occasionally
+ if ((i % (UNIQUE_KEYS + 1)) == 0) {
+ admin.getCache().flushEntry(key);
+ }
+ }
+
+ public void run() {
+ int start = (int) (Math.random() * UNIQUE_KEYS);
+ System.out.print(start + " ");
+
+ for (int i = start; i < (start + ITERATION_COUNT); i++) {
+ doit(i);
+ }
+ }
+ }
+}
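
The blocking behaviour that testStaleEntryBlocking and testConcurrentStaleGets switch on programmatically can also be enabled declaratively. Assuming CACHE_BLOCKING_KEY maps to the standard cache.blocking property, the corresponding cache.properties entry would be:

cache.blocking=true
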
diff --git a/src/test/java/com/opensymphony/oscache/base/TestConcurrency2.java b/src/test/java/com/opensymphony/oscache/base/TestConcurrency2.java
new file mode 100644
index 0000000..0b8b82e
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/TestConcurrency2.java
@@ -0,0 +1,480 @@
+/*
+ * Copyright (c) 2002-2007 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base;
+
+import com.opensymphony.oscache.general.GeneralCacheAdministrator;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+import net.sourceforge.groboutils.junit.v1.MultiThreadedTestRunner;
+import net.sourceforge.groboutils.junit.v1.TestRunnable;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import java.util.Properties;
+
+/**
+ * Test the Cache class for any concurrency problems
+ *
+ * $Id: TestConcurrency.java 404 2007-02-24 10:21:00Z larst $
+ * @version $Revision: 404 $
+ */
+public class TestConcurrency2 extends TestCase {
+
+ private static transient final Log log = LogFactory.getLog(GeneralCacheAdministrator.class); //TestConcurrency2.class
+
+ // Static variables required thru all the tests
+ private static GeneralCacheAdministrator admin = null;
+
+ // Constants needed in the tests
+ private final String KEY = "key";
+ private final String VALUE = "This is some content";
+ private final int ITERATION_COUNT = 1000;
+ private final int THREAD_COUNT = 3;
+ private final int UNIQUE_KEYS = 1013;
+
+ /**
+ * Class constructor.
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestConcurrency2(String str) {
+ super(str);
+ }
+
+ /**
+ * This method is invoked before each testXXXX method of the
+ * class. It sets up the variables required for each test.
+ */
+ public void setUp() {
+ // At first invocation, create a new Cache
+ if (admin == null) {
+ Properties config = new Properties();
+ config.setProperty(AbstractCacheAdministrator.CACHE_CAPACITY_KEY, "70");
+ config.setProperty(AbstractCacheAdministrator.CACHE_BLOCKING_KEY, "false");
+ admin = new GeneralCacheAdministrator(config);
+ assertNotNull(admin);
+ }
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The test suite for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestConcurrency2.class);
+ }
+
+ /**
+ * Check that the cache handles simultaneous attempts to access a
+ * new cache entry correctly
+ */
+ public void testNewEntry() {
+ String key = "new";
+
+ try {
+ admin.getFromCache(key, -1);
+ fail("NeedsRefreshException should have been thrown");
+ } catch (NeedsRefreshException nre) {
+ // Fire off another couple of threads to get the same cache entry
+ GetEntry getEntry1 = new GetEntry(key, VALUE, -1, false);
+ GetEntry getEntry2 = new GetEntry(key, VALUE, -1, false);
+
+ // OK, those threads should be blocked waiting for the new cache
+ // entry to appear. Sleep for a bit to simulate the time taken to
+ // build the cache entry
+ PutInCache putInCache = new PutInCache(key, VALUE, 500);
+
+ // pass that instance to the MTTR
+ TestRunnable[] trs = {getEntry1, getEntry2, putInCache};
+ MultiThreadedTestRunner mttr = new MultiThreadedTestRunner(trs);
+
+ // kickstarts the MTTR & fires off threads
+ try {
+ mttr.runTestRunnables(5000);
+ } catch (Throwable e) {
+ fail("Thread should have blocked until a new cache entry was ready");
+ }
+ }
+ }
+
+ /**
+ * Check that the cache handles simultaneous attempts to access a
+ * new cache entry correctly
+ */
+ public void testNewEntryCancel() {
+ final String key = "newCancel";
+ final String NEW_VALUE = VALUE + "...";
+
+ try {
+ admin.getFromCache(key, -1);
+ fail("NeedsRefreshException should have been thrown");
+ } catch (NeedsRefreshException nre) {
+ // Fire off another thread to get the same cache entry
+ // We can't use GroboUtils here, because joining functionality is missing
+ GetEntrySimple getEntry = new GetEntrySimple(key, NEW_VALUE, CacheEntry.INDEFINITE_EXPIRY, true);
+ Thread thread = new Thread(getEntry);
+ thread.start();
+
+ // The above thread will be blocked waiting for the new content
+ try {
+ Thread.sleep(500);
+ } catch (InterruptedException ie) {
+ }
+
+ // Now cancel the update (eg because an exception occurred while building the content).
+ // This will unblock the other thread and it will receive a NeedsRefreshException.
+ admin.cancelUpdate(key);
+
+ // Wait a bit for the other thread to update the cache
+ try {
+ Thread.sleep(500);
+ } catch (InterruptedException ie) {
+ }
+
+ try {
+ Object newValue = admin.getFromCache(key, CacheEntry.INDEFINITE_EXPIRY);
+ assertEquals(NEW_VALUE, newValue);
+ } catch (NeedsRefreshException e) {
+ admin.cancelUpdate(key);
+ e.printStackTrace();
+ fail("A NeedsRefreshException should not have been thrown. content=" + e.getCacheContent() + ", "+e.getMessage());
+ }
+ }
+ }
+
+ /**
+ * Verify that we can concurrently access the cache without problems
+ */
+ public void testPut() {
+ Thread[] thread = new Thread[THREAD_COUNT];
+
+ for (int idx = 0; idx < THREAD_COUNT; idx++) {
+ OSGeneralTest runner = new OSGeneralTest();
+ thread[idx] = new Thread(runner);
+ thread[idx].start();
+ }
+
+ boolean stillAlive;
+
+ do {
+ try {
+ Thread.sleep(100);
+ } catch (InterruptedException e) {
+ // do nothing
+ }
+
+ stillAlive = false;
+
+ int i = 0;
+
+ while ((i < thread.length) && !stillAlive) {
+ stillAlive |= thread[i++].isAlive();
+ }
+ } while (stillAlive);
+ }
+
+ /**
+ * Check that the cache handles simultaneous attempts to access a
+ * stale cache entry correctly
+ */
+ public void testStaleEntry() {
+ String key = "stale";
+ assertFalse("The cache should not be in blocking mode for this test.", admin.isBlocking());
+
+ admin.putInCache(key, VALUE);
+
+ try {
+ // This should throw a NeedsRefreshException since the refresh
+ // period is 0
+ admin.getFromCache(key, 0);
+ fail("NeedsRefreshException should have been thrown");
+ } catch (NeedsRefreshException nre) {
+ // Fire off another thread to get the same cache entry.
+ // Since blocking mode is currently disabled we should
+ // immediately get back the stale entry
+ GetEntry getEntry = new GetEntry(key, VALUE, 0, false);
+ Thread thread = new Thread(getEntry);
+ thread.start();
+
+ // Sleep for a bit to simulate the time taken to build the cache entry
+ try {
+ Thread.sleep(200);
+ } catch (InterruptedException ie) {
+ }
+
+ // Putting the entry in the cache should mean that threads now retrieve
+ // the updated entry
+ String newValue = "New value";
+ admin.putInCache(key, newValue);
+
+ getEntry = new GetEntry(key, newValue, -1, false);
+ thread = new Thread(getEntry);
+ thread.start();
+
+ try {
+ Object fromCache = admin.getFromCache(key, -1);
+ assertEquals(newValue, fromCache);
+ } catch (NeedsRefreshException e) {
+ admin.cancelUpdate(key);
+ fail("Should not have received a NeedsRefreshException");
+ }
+
+ // Give the GetEntry thread a chance to finish
+ try {
+ Thread.sleep(200);
+ } catch (InterruptedException ie) {
+ }
+ }
+ }
+
+ /**
+ * A test for the updating of a stale entry when CACHE.BLOCKING = TRUE
+ */
+ public void testStaleEntryBlocking() {
+ // A test for the case where oscache.blocking = true
+ admin.destroy();
+
+ Properties p = new Properties();
+ p.setProperty(AbstractCacheAdministrator.CACHE_BLOCKING_KEY, "true");
+ admin = new GeneralCacheAdministrator(p);
+
+ assertTrue("The cache should be in blocking mode for this test.", admin.isBlocking());
+
+ // Use a unique key in case these test entries are being persisted
+ String key = "blocking";
+ String NEW_VALUE = VALUE + " abc";
+ admin.putInCache(key, VALUE);
+
+ try {
+ // Force a NeedsRefreshException
+ admin.getFromCache(key, 0);
+ fail("NeedsRefreshException should have been thrown");
+ } catch (NeedsRefreshException nre) {
+ // Fire off another thread to get the same cache entry.
+ // Since blocking mode is enabled this thread should block
+ // until the entry has been updated.
+ GetEntry getEntry = new GetEntry(key, NEW_VALUE, 0, false);
+ Thread thread = new Thread(getEntry);
+ thread.start();
+
+ // Sleep for a bit to simulate the time taken to build the cache entry
+ try {
+ Thread.sleep(20);
+ } catch (InterruptedException ie) {
+ }
+
+ // Putting the entry in the cache should mean that threads now retrieve
+ // the updated entry
+ admin.putInCache(key, NEW_VALUE);
+
+ getEntry = new GetEntry(key, NEW_VALUE, -1, false);
+ thread = new Thread(getEntry);
+ thread.start();
+
+ try {
+ Object fromCache = admin.getFromCache(key, -1);
+ assertEquals(NEW_VALUE, fromCache);
+ } catch (NeedsRefreshException e) {
+ admin.cancelUpdate(key);
+ fail("Should not have received a NeedsRefreshException");
+ }
+ }
+ }
+
+ private static final int RETRY_BY_THREADS = 100000;
+ private static final int NB_THREADS = 4;
+
+ /**
+ * Checks whether the cache handles simultaneous attempts to access a
+ * stale cache entry correctly when the blocking mode is enabled.
+ *
+ * Basically N threads concurrently try to access the same stale cache entry, and each cancels its update.
+ * Each thread repeats this operation M times. The test is successful if, after some time, all threads are properly released.
+ */
+ public void testConcurrentStaleGets() {
+ GeneralCacheAdministrator staticAdmin = admin;
+ //admin = new GeneralCacheAdministrator(); //avoid polluting other test cases
+
+ try {
+ // A test for the case where oscache.blocking = true
+ //admin.destroy();
+ Properties p = new Properties();
+ p.setProperty(AbstractCacheAdministrator.CACHE_BLOCKING_KEY, "true");
+ admin = new GeneralCacheAdministrator(p);
+
+ assertTrue("The cache should be in blocking mode for this test.", admin.isBlocking());
+
+ String key = "new";
+
+ //First put a value
+ admin.putInCache(key, VALUE);
+
+ try {
+ //Then test without concurrency that it is reported as stale when time-to-live is zero
+ admin.getFromCache(key, 0);
+ fail("NeedsRefreshException should have been thrown");
+ } catch (NeedsRefreshException nre) {
+ //OK, this is what is expected; we can release the update
+ admin.cancelUpdate(key);
+ }
+
+ //Then ask N threads to concurrently try to access this stale resource and each should receive a NeedsRefreshException, and cancel the update
+ TestRunnable[] spawnedThreads = new TestRunnable[NB_THREADS];
+
+ for (int threadIndex = 0; threadIndex < NB_THREADS; threadIndex++) {
+ spawnedThreads[threadIndex] = new GetStaleEntryAndCancelUpdate(key, 0, RETRY_BY_THREADS);
+ }
+ MultiThreadedTestRunner mttr = new MultiThreadedTestRunner(spawnedThreads);
+
+ //kickstarts the MTTR & fires off threads
+ try {
+ mttr.runTestRunnables(120 * 1000);
+ } catch (Throwable e) {
+ fail("at least one thread did not complete");
+ e.printStackTrace();
+ }
+
+ } finally {
+ // avoid polluting other test cases
+ admin = staticAdmin;
+ }
+ }
+
+ private class GetEntry extends TestRunnable {
+ String key;
+ String value;
+ boolean expectNRE;
+ int time;
+
+ GetEntry(String key, String value, int time, boolean expectNRE) {
+ this.key = key;
+ this.value = value;
+ this.time = time;
+ this.expectNRE = expectNRE;
+ }
+
+ public void runTest() {
+ try {
+ // Get from the cache
+ Object fromCache = admin.getFromCache(key, time);
+ assertEquals(value, fromCache);
+ } catch (NeedsRefreshException nre) {
+ if (!expectNRE) {
+ admin.cancelUpdate(key);
+ fail("Thread should have blocked until a new cache entry was ready");
+ } else {
+ // Put a new piece of content into the cache
+ admin.putInCache(key, value);
+ }
+ }
+ }
+ }
+
+ private class GetEntrySimple extends GetEntry {
+ GetEntrySimple(String key, String value, int time, boolean expectNRE) {
+ super(key, value, time, expectNRE);
+ }
+
+ public void run() {
+ runTest();
+ }
+
+ }
+
+ private class PutInCache extends TestRunnable {
+
+ String key;
+ String value;
+ long wait;
+
+ PutInCache(String key, String value, long wait) {
+ this.key = key;
+ this.value = value;
+ this.wait = wait;
+ }
+
+ public void runTest() {
+ try {
+ Thread.sleep(wait);
+ } catch (InterruptedException ie) {
+ fail("PutInCache thread shouldn't be interrupted.");
+ }
+ admin.putInCache(key, value);
+ }
+ }
+
+ /**
+ * Basically requests a stale entry, expects to receive a NeedsRefreshException, and always cancels the update.
+ */
+ private class GetStaleEntryAndCancelUpdate extends TestRunnable {
+ String key;
+ int retries;
+ int time;
+
+ GetStaleEntryAndCancelUpdate(String key, int time, int retries) {
+ this.key = key;
+ this.time = time;
+ this.retries = retries;
+ }
+
+ public void runTest() {
+ for (int retryIndex = 0; retryIndex < retries; retryIndex++) {
+ try {
+ // Get from the cache
+ Object fromCache = admin.getFromCache(key, time);
+ assertNull("Thread index [" + retryIndex + "] expected stale request [" + retryIndex + "] to be received, got [" + fromCache + "]", fromCache);
+ } catch (NeedsRefreshException nre) {
+ try {
+ admin.cancelUpdate(key);
+ } catch (Throwable t) {
+ log.error("Thread index [" + retryIndex + "]: Unexpectedly caught exception [" + t + "]", t);
+ fail("Thread index [" + retryIndex + "] : Unexpectedly caught exception [" + t + "]");
+ }
+ } catch (Throwable t) {
+ log.error("Thread index [" + retryIndex + "] : Unexpectedly caught exception [" + t + "]", t);
+ fail("Thread index [" + retryIndex + "] : Unexpectedly caught exception [" + t + "]");
+ }
+ }
+ }
+ }
+
+ private class OSGeneralTest extends TestRunnable {
+ public void doit(int i) {
+ int refreshPeriod = 500 /*millis*/;
+ String key = KEY + (i % UNIQUE_KEYS);
+ admin.putInCache(key, VALUE);
+
+ try {
+ // Get from the cache
+ admin.getFromCache(KEY, refreshPeriod);
+ } catch (NeedsRefreshException nre) {
+ // Get the value
+ // Store in the cache
+ admin.putInCache(KEY, VALUE);
+ }
+
+ // Flush occasionally
+ if ((i % (UNIQUE_KEYS + 1)) == 0) {
+ admin.getCache().flushEntry(key);
+ }
+ }
+
+ public void runTest() {
+ int start = (int) (Math.random() * UNIQUE_KEYS);
+ System.out.print(start + " ");
+
+ for (int i = start; i < (start + ITERATION_COUNT); i++) {
+ doit(i);
+ }
+ }
+ }
+
+
+}
diff --git a/src/test/java/com/opensymphony/oscache/base/algorithm/TestAbstractCache.java b/src/test/java/com/opensymphony/oscache/base/algorithm/TestAbstractCache.java
new file mode 100644
index 0000000..af9e3d3
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/algorithm/TestAbstractCache.java
@@ -0,0 +1,251 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.algorithm;
+
+import java.util.Enumeration;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Set;
+
+import com.opensymphony.oscache.base.CacheEntry;
+import com.opensymphony.oscache.base.Config;
+import com.opensymphony.oscache.base.persistence.CachePersistenceException;
+import com.opensymphony.oscache.base.persistence.PersistenceListener;
+
+import junit.framework.TestCase;
+
+/**
+ * Test class for the AbstractCache class. It tests all public methods of
+ * the AbstractCache and asserts the results. It is designed to run under JUnit.
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public abstract class TestAbstractCache extends TestCase {
+ /**
+ * Invalid cache capacity
+ */
+ protected final int INVALID_MAX_ENTRIES = 0;
+
+ /**
+ * Cache capacity
+ */
+ protected final int MAX_ENTRIES = 3;
+
+ /**
+ * Constructor
+ *
+ * @param str The test name (required by JUnit)
+ */
+ protected TestAbstractCache(String str) {
+ super(str);
+ }
+
+ /**
+ * Test the method that verify if the cache contains a specific key
+ */
+ public abstract void testContainsKey();
+
+ /**
+ * Test the get from the cache
+ */
+ public abstract void testGet();
+
+ /**
+ * Test the capacity setting
+ */
+ public void testGetSetMaxEntries() {
+ getCache().setMaxEntries(MAX_ENTRIES);
+ assertEquals(MAX_ENTRIES, getCache().getMaxEntries());
+
+ // Specify an invalid capacity
+ try {
+ getCache().setMaxEntries(INVALID_MAX_ENTRIES);
+ fail("Cache capacity set with an invalid argument");
+ } catch (Exception e) {
+ // This is what we expect
+ }
+ }
+
+ /**
+ * Test the setting of the memory cache
+ */
+ public void testGetSetMemoryCache() {
+ getCache().setMemoryCaching(true);
+ assertTrue(getCache().isMemoryCaching());
+ }
+
+ /**
+ * Test the iterator retrieval
+ */
+ public abstract void testIterator();
+
+ /**
+ * Test the put into the cache
+ */
+ public abstract void testPut();
+
+ /**
+ * Test the remove from the cache
+ */
+ public abstract void testRemove();
+
+ /**
+ * Test the specific details about the cache algorithm
+ */
+ public abstract void testRemoveItem();
+
+ /**
+ * Test the PersistenceListener setter. Since the persistence listener is
+ * an interface, just call the setter with null
+ */
+ public void testSetPersistenceListener() {
+ getCache().setPersistenceListener(null);
+ }
+
+ // Abstract method that returns an instance of an admin
+ protected abstract AbstractConcurrentReadCache getCache();
+
+ /**
+ * Test that groups are correctly updated on puts and removes
+ * See CACHE-188 and maybe CACHE-244
+ */
+ public void testGroups() {
+ String KEY = "testkey";
+ String KEY2 = "testkey2";
+ String GROUP_NAME = "group1";
+ CacheEntry entry = new CacheEntry(KEY, null);
+ entry.setContent("testvalue");
+ entry.setGroups(new String[] {GROUP_NAME});
+ getCache().put(KEY, entry);
+
+ Map m = getCache().getGroupsForReading();
+ assertNotNull("group must exist", m.get(GROUP_NAME));
+ try {
+ Set group = (Set)m.get(GROUP_NAME);
+ assertEquals(1, group.size());
+ Object keyFromGroup = group.iterator().next();
+ assertEquals(KEY, keyFromGroup);
+ } catch (ClassCastException e) {
+ fail("group should have been a java.util.Set but is a " +
+ m.get(GROUP_NAME).getClass().getName());
+ }
+
+ assertNotNull(getCache().remove(KEY));
+
+ m = getCache().getGroupsForReading();
+ assertNull("group should have been deleted (see CACHE-188)", m.get(GROUP_NAME));
+ getCache().clear();
+
+ // Test if persistence options are correctly considered for groups
+ try {
+ PersistenceListener listener = new MockPersistenceListener();
+ getCache().setPersistenceListener(listener);
+ getCache().setOverflowPersistence(false);
+ getCache().put(KEY, entry);
+ assertTrue(listener.isStored(KEY));
+ Set group = listener.retrieveGroup(GROUP_NAME);
+ assertNotNull(group);
+ assertTrue(group.contains(KEY));
+
+ getCache().remove(KEY);
+ assertFalse(listener.isStored(KEY));
+ getCache().clear();
+
+ // test overflow persistence
+ getCache().setOverflowPersistence(true);
+ getCache().setMaxEntries(1);
+ getCache().put(KEY, entry);
+ assertFalse(listener.isStored(KEY));
+ // is it correct that the group is persisted, even when we use overflow only?
+ // assertFalse(listener.isGroupStored(GROUP_NAME));
+
+ CacheEntry entry2 = new CacheEntry(KEY2);
+ entry2.setContent("testvalue");
+ entry2.setGroups(new String[] {GROUP_NAME});
+ getCache().put(KEY2, entry2);
+ // oldest must have been persisted to disk:
+ assertTrue(listener.isStored(KEY));
+ assertFalse(listener.isStored(KEY2));
+ assertNotNull(getCache().get(KEY2));
+ } catch (CachePersistenceException e) {
+ e.printStackTrace();
+ fail("Excpetion was thrown");
+ }
+ }
+
+ public void testMisc() {
+ getCache().clear();
+ assertTrue(getCache().capacity() > 0);
+
+ final String KEY = "testkeymisc";
+ final String CONTENT = "testkeymisc";
+
+ CacheEntry entry = new CacheEntry(KEY, null);
+ entry.setContent(CONTENT);
+
+ if (getCache().contains(entry) == false) {
+ getCache().put(KEY, entry);
+ }
+ assertTrue(getCache().contains(entry));
+
+ CacheEntry entry2 = new CacheEntry(KEY+"2", null);
+ entry.setContent(CONTENT+"2");
+ getCache().put(entry2.getKey(), entry2);
+
+ Enumeration enumeration = getCache().elements();
+ assertTrue(enumeration.hasMoreElements());
+ while (enumeration.hasMoreElements()) enumeration.nextElement();
+ }
+
+
+ private static class MockPersistenceListener implements PersistenceListener {
+
+ private Map entries = new HashMap();
+ private Map groups = new HashMap();
+
+ public void clear() throws CachePersistenceException {
+ entries.clear();
+ groups.clear();
+ }
+
+ public PersistenceListener configure(Config config) {
+ return this;
+ }
+
+ public boolean isGroupStored(String groupName) throws CachePersistenceException {
+ return groups.containsKey(groupName);
+ }
+
+ public boolean isStored(String key) throws CachePersistenceException {
+ return entries.containsKey(key);
+ }
+
+ public void remove(String key) throws CachePersistenceException {
+ entries.remove(key);
+ }
+
+ public void removeGroup(String groupName) throws CachePersistenceException {
+ groups.remove(groupName);
+ }
+
+ public Object retrieve(String key) throws CachePersistenceException {
+ return entries.get(key);
+ }
+
+ public Set retrieveGroup(String groupName) throws CachePersistenceException {
+ return (Set)groups.get(groupName);
+ }
+
+ public void store(String key, Object obj) throws CachePersistenceException {
+ entries.put(key, obj);
+ }
+
+ public void storeGroup(String groupName, Set group) throws CachePersistenceException {
+ groups.put(groupName, group);
+ }
+ }
+}
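
The persistence hooks exercised in testGroups above can be wired directly onto the underlying cache. The fragment below is not part of this commit; it assumes an AbstractConcurrentReadCache instance named cache and reuses the MockPersistenceListener defined in this test file, calling only the setPersistenceListener, setOverflowPersistence and setMaxEntries methods the test itself uses.

// Route entries to the listener only when they are evicted on overflow.
PersistenceListener listener = new MockPersistenceListener();
cache.setPersistenceListener(listener);
cache.setOverflowPersistence(true);
cache.setMaxEntries(100);
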
diff --git a/src/test/java/com/opensymphony/oscache/base/algorithm/TestCompleteAlgorithm.java b/src/test/java/com/opensymphony/oscache/base/algorithm/TestCompleteAlgorithm.java
new file mode 100644
index 0000000..316cb2c
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/algorithm/TestCompleteAlgorithm.java
@@ -0,0 +1,56 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.algorithm;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Test class for the com.opensymphony.oscache.base.algorithm package.
+ * It invokes all the test suites of all the other classes of the package,
+ * except abstract ones because they are tested via final ones.
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestCompleteAlgorithm extends TestCase {
+ /**
+ * Constructor for the oscache project main test program
+ */
+ public TestCompleteAlgorithm(String str) {
+ super(str);
+ }
+
+ /**
+ * Main method which is called to perform the tests
+ *
+ * @param args Arguments received
+ */
+ public static void main(String[] args) {
+ // Run the test suite
+ junit.swingui.TestRunner testRunner = new junit.swingui.TestRunner();
+ testRunner.setLoading(false);
+
+ String[] args2 = {TestCompleteAlgorithm.class.getName()};
+ testRunner.start(args2);
+ }
+
+ /**
+ * Test suite required to test this project
+ *
+ * @return suite The test suite
+ */
+ public static Test suite() {
+ // Add all the tests suite of all the project classes
+ TestSuite suite = new TestSuite("Test all base algorithm cache modules");
+ suite.addTest(TestFIFOCache.suite());
+ suite.addTest(TestLRUCache.suite());
+ suite.addTest(TestUnlimitedCache.suite());
+
+ return suite;
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/base/algorithm/TestFIFOCache.java b/src/test/java/com/opensymphony/oscache/base/algorithm/TestFIFOCache.java
new file mode 100644
index 0000000..4dcee12
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/algorithm/TestFIFOCache.java
@@ -0,0 +1,73 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.algorithm;
+
+import junit.framework.Test;
+import junit.framework.TestSuite;
+
+/**
+ * Test class for the FIFOCache class. It tests that the algorithm reacts as
+ * expected when entries are removed
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestFIFOCache extends TestQueueCache {
+ /**
+ * FIFO Cache object
+ */
+ private static FIFOCache cache = null;
+
+ /**
+ * Constructor
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestFIFOCache(String str) {
+ super(str);
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The test suite for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestFIFOCache.class);
+ }
+
+ /**
+ * Abstract method used by the TestAbstractCache class
+ *
+ * @return A cache instance
+ */
+ public AbstractConcurrentReadCache getCache() {
+ return cache;
+ }
+
+ /**
+ * This method is invoked before each testXXXX method of the
+ * class. It sets up the variables required for each test.
+ */
+ public void setUp() {
+ // Create a cache instance on first invocation
+ if (cache == null) {
+ cache = new FIFOCache();
+ assertNotNull(cache);
+ }
+ }
+
+ /**
+ * Test the cache algorithm
+ */
+ public void testRemoveItem() {
+ // Add 2 elements in the cache and ensure that the one to remove is the first
+ // inserted
+ cache.itemPut(KEY);
+ cache.itemPut(KEY + 1);
+ assertTrue(KEY.equals(cache.removeItem()));
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/base/algorithm/TestLRUCache.java b/src/test/java/com/opensymphony/oscache/base/algorithm/TestLRUCache.java
new file mode 100644
index 0000000..1995a7a
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/algorithm/TestLRUCache.java
@@ -0,0 +1,80 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.algorithm;
+
+import junit.framework.Test;
+import junit.framework.TestSuite;
+
+/**
+ * Test class for the LRUCache class. It only tests that the algorithm reacts as
+ * expected when entries are removed. All the other tests related to the LRU
+ * algorithm are in the TestNonQueueCache class, since those tests are shared
+ * with the TestUnlimitedCache class.
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestLRUCache extends TestQueueCache {
+ /**
+ * LRU Cache object
+ */
+ private static LRUCache cache = null;
+
+ /**
+ * Constructor
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestLRUCache(String str) {
+ super(str);
+ }
+
+ /**
+ * This methods returns the name of this test class to JUnit
+ *
+ * @return The test for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestLRUCache.class);
+ }
+
+ /**
+ * Abstract method used by the TestAbstractCache class
+ *
+ * @return A cache instance
+ */
+ public AbstractConcurrentReadCache getCache() {
+ return cache;
+ }
+
+ /**
+ * This method is invoked before each testXXXX method of the
+ * class. It sets up the variables required for each test.
+ */
+ public void setUp() {
+ // Create a cache instance on first invocation
+ if (cache == null) {
+ cache = new LRUCache();
+ assertNotNull(cache);
+ }
+ }
+
+ /**
+ * Test the cache algorithm
+ */
+ public void testRemoveItem() {
+ // Add 3 elements
+ cache.itemPut(KEY);
+ cache.itemPut(KEY + 1);
+ cache.itemPut(KEY + 2);
+
+ // Touch the first element so that it becomes the most recently used
+ cache.itemRetrieved(KEY);
+
+ // The least recently used item is key + 1
+ assertTrue((KEY + 1).equals(cache.removeItem()));
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/base/algorithm/TestQueueCache.java b/src/test/java/com/opensymphony/oscache/base/algorithm/TestQueueCache.java
new file mode 100644
index 0000000..9c6c960
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/algorithm/TestQueueCache.java
@@ -0,0 +1,229 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.algorithm;
+
+import com.opensymphony.oscache.base.Config;
+import com.opensymphony.oscache.base.persistence.PersistenceListener;
+import com.opensymphony.oscache.plugins.diskpersistence.DiskPersistenceListener;
+import com.opensymphony.oscache.plugins.diskpersistence.TestDiskPersistenceListener;
+
+import java.util.Iterator;
+import java.util.Properties;
+
+/**
+ * Test class for the QueueCache class, which is the base class for FIFO
+ * and LIFO algorithm classes. All the public methods of QueueCache are tested
+ * here.
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public abstract class TestQueueCache extends TestAbstractCache {
+ /**
+ * Entry content
+ */
+ protected final String CONTENT = "Test Queue Cache content";
+
+ /**
+ * Entry key
+ */
+ protected final String KEY = "Test Queue Cache key";
+
+ /**
+ * Constructor
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestQueueCache(String str) {
+ super(str);
+ }
+
+ /**
+ * Test the specific algorithms
+ */
+ public abstract void testRemoveItem();
+
+ /**
+ * Test the clear
+ */
+ public void testClear() {
+ getCache().clear();
+ assertEquals(0, getCache().size());
+ }
+
+ /**
+ * Test the ContainsKey method
+ */
+ public void testContainsKey() {
+ getCache().put(KEY, CONTENT);
+ assertTrue(getCache().containsKey(KEY));
+ getCache().clear();
+ }
+
+ /**
+ * Test the get method
+ */
+ public void testGet() {
+ // Add an entry and verify that it is there
+ getCache().put(KEY, CONTENT);
+ assertTrue(getCache().get(KEY).equals(CONTENT));
+
+ // Call with invalid parameters
+ try {
+ getCache().get(null);
+ fail("Get called with null parameters!");
+ } catch (Exception e) { /* This is what we expect */
+ }
+
+ getCache().clear();
+ }
+
+ /**
+ * Test the getter and setter for the max entries
+ */
+ public void testGetSetMaxEntries() {
+ // Fill the cache, then reduce the capacity by one and assert that
+ // an element has been removed
+ for (int count = 0; count < MAX_ENTRIES; count++) {
+ getCache().put(KEY + count, CONTENT + count);
+ }
+
+ assertEquals(MAX_ENTRIES, getCache().size());
+ getCache().setMaxEntries(MAX_ENTRIES - 1);
+ assertEquals(MAX_ENTRIES - 1, getCache().getMaxEntries());
+ assertEquals(MAX_ENTRIES - 1, getCache().size());
+
+ // Specify an invalid capacity
+ try {
+ getCache().setMaxEntries(INVALID_MAX_ENTRIES);
+ fail("Cache capacity set with an invalid argument");
+ } catch (Exception e) {
+ // This is what we expect
+ }
+
+ getCache().clear();
+ }
+
+ /**
+ * Test the iterator
+ */
+ public void testIterator() {
+ // Verify that the iterator returns exactly as many elements as the cache contains, and no more
+ int nbEntries = getCache().size();
+ Iterator iterator = getCache().entrySet().iterator();
+ assertNotNull(iterator);
+
+ for (int count = 0; count < nbEntries; count++) {
+ assertNotNull(iterator.next());
+ }
+
+ assertTrue(!iterator.hasNext());
+ }
+
+ /**
+ * Test the put method
+ */
+ public void testPut() {
+ // Put elements in cache
+ for (int count = 0; count < MAX_ENTRIES; count++) {
+ getCache().put(KEY + count, CONTENT + count);
+ }
+
+ // Call with invalid parameters
+ try {
+ getCache().put(null, null);
+ fail("Put called with null parameters!");
+ } catch (Exception e) { /* This is what we expect */
+ }
+
+ getCache().clear();
+ }
+
+ /**
+ * Test the put method with overflow parameter set
+ */
+ public void testPutOverflow() {
+ // Create a listener
+ PersistenceListener listener = new DiskPersistenceListener();
+
+ Properties p = new Properties();
+ p.setProperty("cache.path", TestDiskPersistenceListener.CACHEDIR);
+ p.setProperty("cache.memory", "true");
+ p.setProperty("cache.persistence.overflow.only", "true");
+ p.setProperty("cache.persistence.class", "com.opensymphony.oscache.plugins.diskpersistence.DiskPersistenceListener");
+ listener.configure(new Config(p));
+ getCache().setPersistenceListener(listener);
+ getCache().clear();
+ getCache().setMaxEntries(MAX_ENTRIES);
+ getCache().setOverflowPersistence(true);
+
+ if (getCache() instanceof UnlimitedCache) {
+ return; // nothing to test since memory will never overflow.
+ }
+
+ // Put elements in cache
+ for (int count = 0; count <= MAX_ENTRIES; count++) {
+ getCache().put(KEY + count, CONTENT + count);
+ }
+
+ try {
+ int numPersisted = 0;
+
+ // Check that number of elements persisted == 1 if it is an overflow cache or all
+ // if it is not overflow and writes every time.
+ for (int count = 0; count <= MAX_ENTRIES; count++) {
+ if (getCache().getPersistenceListener().isStored(KEY + count)) {
+ numPersisted++;
+ }
+ }
+
+ if (getCache().isOverflowPersistence()) {
+ assertTrue("Only 1 element should have been persisted ", numPersisted == 1);
+ } else {
+ assertTrue("All elements should have been persisted ", numPersisted == (MAX_ENTRIES + 1));
+ }
+ } catch (Exception e) {
+ fail();
+ }
+
+ getCache().clear();
+ }
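+
+ /*
+ * For reference, a roughly equivalent setup in oscache.properties would look like the
+ * sketch below. Only property names that appear in the test code above are used; the
+ * cache path is just an example value.
+ *
+ * cache.memory=true
+ * cache.persistence.overflow.only=true
+ * cache.persistence.class=com.opensymphony.oscache.plugins.diskpersistence.DiskPersistenceListener
+ * cache.path=/tmp/oscache
+ */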
+
+ /**
+ * Test if bug CACHE-255 disappeared.
+ */
+ public void testBugCache255() {
+ if (!getCache().isMemoryCaching()) {
+ return; // nothing to test since memory won't be used.
+ }
+ if (getCache() instanceof UnlimitedCache) {
+ return; // nothing to test since memory will never overflow.
+ }
+
+ // fill up the cache
+ for (int count = 0; count < MAX_ENTRIES; count++) {
+ getCache().put(KEY + count, CONTENT + count);
+ }
+
+ // Adding one more entry forces an eviction; put() should return the evicted entry's content (CACHE-255)
+ Object oldValue = getCache().put(KEY + MAX_ENTRIES, CONTENT + MAX_ENTRIES);
+
+ assertEquals("Evicted object content should be the same", CONTENT + "0", oldValue);
+
+ getCache().clear();
+ }
+
+ /**
+ * Test the remove from cache
+ */
+ public void testRemove() {
+ getCache().put(KEY, CONTENT);
+
+ // Remove the object and assert the return
+ assertNotNull(getCache().remove(KEY));
+ getCache().clear();
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/base/algorithm/TestUnlimitedCache.java b/src/test/java/com/opensymphony/oscache/base/algorithm/TestUnlimitedCache.java
new file mode 100644
index 0000000..9ce2a2b
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/algorithm/TestUnlimitedCache.java
@@ -0,0 +1,92 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.algorithm;
+
+import junit.framework.Test;
+import junit.framework.TestSuite;
+
+/**
+ * Test class for the Unlimited cache algorithm. Most of the tests are done
+ * in the TestNonQueueCache class, so only algorithm specific tests are done
+ * here. Since this is an unlimited cache, there's not much to test about
+ * the algorithm.
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestUnlimitedCache extends TestQueueCache {
+ /**
+ * Unlimited Cache object
+ */
+ private static UnlimitedCache cache = null;
+
+ /**
+ * Constructor
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestUnlimitedCache(String str) {
+ super(str);
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The test for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestUnlimitedCache.class);
+ }
+
+ /**
+ * Abstract method used by the TestAbstractCache class
+ *
+ * @return A cache instance
+ */
+ public AbstractConcurrentReadCache getCache() {
+ return cache;
+ }
+
+ /**
+ * This method is invoked before each testXXXX method of the
+ * class. It sets up the variables required for each test.
+ */
+ public void setUp() {
+ // Create a cache instance on first invocation
+ if (cache == null) {
+ cache = new UnlimitedCache();
+ assertNotNull(cache);
+ }
+ }
+
+ /**
+ * Test the getter and setter for the max entries. It overrides the TestQueueCache
+ * version since changing the maximum number of entries should have no effect on an
+ * unlimited cache
+ */
+ public void testGetSetMaxEntries() {
+ // Check that the max entries cannot be changed
+ int entryCount = getCache().getMaxEntries();
+ getCache().setMaxEntries(entryCount - 1);
+ assertEquals(entryCount, getCache().getMaxEntries());
+ }
+
+ /**
+ * Test the cache algorithm
+ */
+ public void testRemoveItem() {
+ // Add an item, and ensure that it is not removable
+ cache.itemPut(KEY);
+ assertNull(cache.removeItem());
+ }
+
+ /**
+ * Test that groups are correctly updated on puts and removes
+ */
+ public void testGroups() {
+ // Test not possible here: the max entries of an unlimited cache cannot be reduced for this test
+ }
+
+}
diff --git a/src/test/java/com/opensymphony/oscache/base/events/TestCacheEntryEvent.java b/src/test/java/com/opensymphony/oscache/base/events/TestCacheEntryEvent.java
new file mode 100644
index 0000000..ee5005e
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/events/TestCacheEntryEvent.java
@@ -0,0 +1,75 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+import com.opensymphony.oscache.base.Cache;
+import com.opensymphony.oscache.base.CacheEntry;
+import com.opensymphony.oscache.general.GeneralCacheAdministrator;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * This is the test class for the CacheEntryEvent class. It checks that the
+ * public methods are working properly
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestCacheEntryEvent extends TestCase {
+ /**
+ * Constants required for the test
+ */
+ private final String KEY = "Test cache entry event key";
+ private final String KEY_2 = "Test cache entry event key 2";
+
+ /**
+ * Constructor
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestCacheEntryEvent(String str) {
+ super(str);
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The test for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestCacheEntryEvent.class);
+ }
+
+ /**
+ * Test the CacheEntryEvent class
+ */
+ public void testCacheEntryEvent() {
+ // Create all the required objects
+ GeneralCacheAdministrator admin = new GeneralCacheAdministrator();
+ Cache map = new Cache(admin.isMemoryCaching(), admin.isUnlimitedDiskCache(), admin.isOverflowPersistence());
+
+ // test with key
+ CacheEntry entry = new CacheEntry(KEY);
+ CacheEntryEvent event = new CacheEntryEvent(map, entry, null);
+
+ // Get back the values and assert them
+ assertEquals(event.getEntry(), entry);
+ assertEquals(event.getKey(), KEY);
+ assertEquals(event.getMap(), map);
+ assertNull(event.getOrigin());
+
+ CacheEntry entry2 = new CacheEntry(KEY_2);
+ CacheEntryEvent event2 = new CacheEntryEvent(map, entry2);
+
+ // Get back the values and assert them
+ assertEquals(event2.getEntry(), entry2);
+ assertEquals(event2.getKey(), KEY_2);
+ assertEquals(event2.getMap(), map);
+ assertNull(event2.getOrigin());
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/base/events/TestCacheGroupEvent.java b/src/test/java/com/opensymphony/oscache/base/events/TestCacheGroupEvent.java
new file mode 100644
index 0000000..6afde3d
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/events/TestCacheGroupEvent.java
@@ -0,0 +1,71 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+import com.opensymphony.oscache.base.Cache;
+import com.opensymphony.oscache.general.GeneralCacheAdministrator;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * This is the test class for the CacheGroupEvent class. It checks that the
+ * public methods are working properly
+ *
+ * $Id: TestCacheEntryEvent.java 385 2006-10-07 06:57:10Z larst $
+ * @version $Revision: 385 $
+ * @author Lars Torunski
+ */
+public final class TestCacheGroupEvent extends TestCase {
+
+ /**
+ * Constants required for the test
+ */
+ private final String TEST_GROUP = "testGroup";
+
+ /**
+ * Constructor
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestCacheGroupEvent(String str) {
+ super(str);
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The test for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestCacheGroupEvent.class);
+ }
+
+ /**
+ * Test the CacheGroupEvent class
+ */
+ public void testCacheGroupEvent() {
+ // Create all the required objects
+ GeneralCacheAdministrator admin = new GeneralCacheAdministrator();
+ Cache map = new Cache(admin.isMemoryCaching(), admin.isUnlimitedDiskCache(), admin.isOverflowPersistence());
+
+ // three parameters
+ CacheGroupEvent event = new CacheGroupEvent(map, TEST_GROUP, null);
+
+ // Get back the values and assert them
+ assertEquals(event.getMap(), map);
+ assertEquals(event.getGroup(), TEST_GROUP);
+ assertNull(event.getOrigin());
+
+ // two parameters
+ CacheGroupEvent event2 = new CacheGroupEvent(map, TEST_GROUP);
+
+ // Get back the values and assert them
+ assertEquals(event2.getMap(), map);
+ assertEquals(event2.getGroup(), TEST_GROUP);
+ assertNull(event2.getOrigin());
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/base/events/TestCacheMapAccessEvent.java b/src/test/java/com/opensymphony/oscache/base/events/TestCacheMapAccessEvent.java
new file mode 100644
index 0000000..40dab5e
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/events/TestCacheMapAccessEvent.java
@@ -0,0 +1,53 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+import com.opensymphony.oscache.base.CacheEntry;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * This is the test class for the CacheMapAccessEvent class. It checks that the
+ * public methods are working properly
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestCacheMapAccessEvent extends TestCase {
+ private final String KEY = "Test cache map access event key";
+
+ /**
+ * Constructor
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestCacheMapAccessEvent(String str) {
+ super(str);
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The test for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestCacheMapAccessEvent.class);
+ }
+
+ /**
+ * Test the CacheMapAccessEvent class
+ */
+ public void testCacheMapAccessEvent() {
+ // Create an object and check the parameters
+ CacheEntry entry = new CacheEntry(KEY);
+ CacheMapAccessEvent event = new CacheMapAccessEvent(CacheMapAccessEventType.HIT, entry);
+ assertEquals(event.getCacheEntry(), entry);
+ assertEquals(event.getCacheEntryKey(), KEY);
+ assertEquals(event.getEventType(), CacheMapAccessEventType.HIT);
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/base/events/TestCachePatternEvent.java b/src/test/java/com/opensymphony/oscache/base/events/TestCachePatternEvent.java
new file mode 100644
index 0000000..3f73f3e
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/events/TestCachePatternEvent.java
@@ -0,0 +1,71 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+import com.opensymphony.oscache.base.Cache;
+import com.opensymphony.oscache.general.GeneralCacheAdministrator;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * This is the test class for the CachePatternEvent class. It checks that the
+ * public methods are working properly
+ *
+ * $Id: TestCacheEntryEvent.java 385 2006-10-07 06:57:10Z larst $
+ * @version $Revision: 385 $
+ * @author Lars Torunski
+ */
+public final class TestCachePatternEvent extends TestCase {
+
+ /**
+ * Constants required for the test
+ */
+ private final String TEST_PATTERN = "testPattern";
+
+ /**
+ * Constructor
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestCachePatternEvent(String str) {
+ super(str);
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The test for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestCachePatternEvent.class);
+ }
+
+ /**
+ * Test the CachePatternEvent class
+ */
+ public void testCachePatternEvent() {
+ // Create all the required objects
+ GeneralCacheAdministrator admin = new GeneralCacheAdministrator();
+ Cache map = new Cache(admin.isMemoryCaching(), admin.isUnlimitedDiskCache(), admin.isOverflowPersistence());
+
+ // three parameters
+ CachePatternEvent event = new CachePatternEvent(map, TEST_PATTERN, null);
+
+ // Get back the values and assert them
+ assertEquals(event.getMap(), map);
+ assertEquals(event.getPattern(), TEST_PATTERN);
+ assertNull(event.getOrigin());
+
+ // two parameters
+ CachePatternEvent event2 = new CachePatternEvent(map, TEST_PATTERN);
+
+ // Get back the values and assert them
+ assertEquals(event2.getMap(), map);
+ assertEquals(event2.getPattern(), TEST_PATTERN);
+ assertNull(event2.getOrigin());
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/base/events/TestCachewideEvent.java b/src/test/java/com/opensymphony/oscache/base/events/TestCachewideEvent.java
new file mode 100644
index 0000000..b83706b
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/events/TestCachewideEvent.java
@@ -0,0 +1,57 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+import java.util.Date;
+
+import com.opensymphony.oscache.general.GeneralCacheAdministrator;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * This is the test class for the CachewideEvent class. It checks that the
+ * public methods are working properly
+ *
+ * $Id: TestCacheEntryEvent.java 385 2006-10-07 06:57:10Z larst $
+ * @version $Revision: 385 $
+ * @author Lars Torunski
+ */
+public final class TestCachewideEvent extends TestCase {
+
+ /**
+ * Constructor
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestCachewideEvent(String str) {
+ super(str);
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The test for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestCachewideEvent.class);
+ }
+
+ /**
+ * Test the CachewideEvent class
+ */
+ public void testCachewideEvent() {
+ // Create all the required objects
+ GeneralCacheAdministrator admin = new GeneralCacheAdministrator();
+
+ Date date = new Date();
+ CachewideEvent event = new CachewideEvent(admin.getCache(), date, null);
+
+ // Get back the values and assert them
+ assertEquals(event.getDate(), date);
+ assertEquals(event.getCache(), admin.getCache());
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/base/events/TestCompleteEvents.java b/src/test/java/com/opensymphony/oscache/base/events/TestCompleteEvents.java
new file mode 100644
index 0000000..5fa0afb
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/events/TestCompleteEvents.java
@@ -0,0 +1,58 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Test class for the com.opensymphony.oscache.base.events package.
+ * It invokes all the test suites of all the other classes of the package.
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestCompleteEvents extends TestCase {
+ /**
+ * Constructor for the oscache module main test program
+ */
+ public TestCompleteEvents(String str) {
+ super(str);
+ }
+
+ /**
+ * Main method which is called to perform the tests
+ *
+ * @param args Arguments received
+ */
+ public static void main(String[] args) {
+ // Run the test suite
+ junit.swingui.TestRunner testRunner = new junit.swingui.TestRunner();
+ testRunner.setLoading(false);
+
+ String[] args2 = {TestCompleteEvents.class.getName()};
+ testRunner.start(args2);
+ }
+
+ /**
+ * Test suite required to test this project
+ *
+ * @return suite The test suite
+ */
+ public static Test suite() {
+ // Add all the test suites of all the project classes
+ TestSuite suite = new TestSuite("Test all base cache modules");
+ suite.addTest(TestCacheEntryEvent.suite());
+ suite.addTest(TestCacheMapAccessEvent.suite());
+ suite.addTest(TestScopeEvent.suite());
+ suite.addTest(TestCachewideEvent.suite());
+ suite.addTest(TestCachePatternEvent.suite());
+ suite.addTest(TestCacheGroupEvent.suite());
+
+ return suite;
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/base/events/TestScopeEvent.java b/src/test/java/com/opensymphony/oscache/base/events/TestScopeEvent.java
new file mode 100644
index 0000000..3ce7196
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/base/events/TestScopeEvent.java
@@ -0,0 +1,59 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.base.events;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+import java.util.Date;
+
+/**
+ * This is the test class for the ScopeEvent class. It checks that the
+ * public methods are working properly
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestScopeEvent extends TestCase {
+ private final int SCOPE = 3;
+
+ /**
+ * Constructor
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestScopeEvent(String str) {
+ super(str);
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The name of this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestScopeEvent.class);
+ }
+
+ /**
+ * Test the ScopeEvent class
+ */
+ public void testScopeEvent() {
+ Date date = new Date();
+
+ // Create an object and check the parameters
+ ScopeEvent event = new ScopeEvent(ScopeEventType.ALL_SCOPES_FLUSHED, SCOPE, date, null);
+ assertEquals(event.getEventType(), ScopeEventType.ALL_SCOPES_FLUSHED);
+ assertEquals(event.getScope(), SCOPE);
+ assertTrue(event.getDate().equals(date));
+
+ event = new ScopeEvent(ScopeEventType.SCOPE_FLUSHED, SCOPE, date, null);
+ assertEquals(event.getEventType(), ScopeEventType.SCOPE_FLUSHED);
+ assertEquals(event.getScope(), SCOPE);
+ assertTrue(event.getDate().equals(date));
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/extra/TestCacheEntryEventListenerImpl.java b/src/test/java/com/opensymphony/oscache/extra/TestCacheEntryEventListenerImpl.java
new file mode 100644
index 0000000..e329e7c
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/extra/TestCacheEntryEventListenerImpl.java
@@ -0,0 +1,90 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.extra;
+
+import java.util.Date;
+
+import com.opensymphony.oscache.base.Cache;
+import com.opensymphony.oscache.base.CacheEntry;
+import com.opensymphony.oscache.base.events.CacheEntryEvent;
+import com.opensymphony.oscache.base.events.CacheGroupEvent;
+import com.opensymphony.oscache.base.events.CachePatternEvent;
+import com.opensymphony.oscache.base.events.CachewideEvent;
+import com.opensymphony.oscache.general.GeneralCacheAdministrator;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Test the cache entry event listener implementation
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public class TestCacheEntryEventListenerImpl extends TestCase {
+ /**
+ * Key used for this test
+ */
+ private final String KEY = "Test Cache Entry Event Listener Impl Key";
+
+ /**
+ * Constructor
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestCacheEntryEventListenerImpl(String str) {
+ super(str);
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The name of this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestCacheEntryEventListenerImpl.class);
+ }
+
+ /**
+ * Test the basic implementation
+ */
+ public void testCacheEntryEventListenerImpl() {
+ // Construct the objects required for the tests
+ CacheEntry entry = new CacheEntry(KEY);
+ GeneralCacheAdministrator admin = new GeneralCacheAdministrator();
+ Cache cache = new Cache(admin.isMemoryCaching(), admin.isUnlimitedDiskCache(), admin.isOverflowPersistence());
+ CacheEntryEvent event = new CacheEntryEvent(cache, entry, null);
+ CacheEntryEventListenerImpl listener = new CacheEntryEventListenerImpl();
+
+ // Assert the counters
+ assertEquals(listener.getEntryAddedCount(), 0);
+ assertEquals(listener.getEntryFlushedCount(), 0);
+ assertEquals(listener.getEntryRemovedCount(), 0);
+ assertEquals(listener.getEntryUpdatedCount(), 0);
+ assertEquals(listener.getGroupFlushedCount(), 0);
+ assertEquals(listener.getPatternFlushedCount(), 0);
+ assertEquals(listener.getCacheFlushedCount(), 0);
+
+ // Generate an event of each type
+ listener.cacheEntryAdded(event);
+ listener.cacheEntryFlushed(event);
+ listener.cacheEntryRemoved(event);
+ listener.cacheEntryUpdated(event);
+
+ listener.cacheFlushed(new CachewideEvent(cache, new Date(), null));
+ listener.cacheGroupFlushed(new CacheGroupEvent(cache, "testGroup", null));
+ listener.cachePatternFlushed(new CachePatternEvent(cache, "testPattern", null));
+
+ // Assert the counters
+ assertEquals(listener.getEntryAddedCount(), 1);
+ assertEquals(listener.getEntryFlushedCount(), 1);
+ assertEquals(listener.getEntryRemovedCount(), 1);
+ assertEquals(listener.getEntryUpdatedCount(), 1);
+ assertEquals(listener.getGroupFlushedCount(), 1);
+ assertEquals(listener.getPatternFlushedCount(), 1);
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/extra/TestCacheMapAccessEventListenerImpl.java b/src/test/java/com/opensymphony/oscache/extra/TestCacheMapAccessEventListenerImpl.java
new file mode 100644
index 0000000..0640826
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/extra/TestCacheMapAccessEventListenerImpl.java
@@ -0,0 +1,71 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.extra;
+
+import com.opensymphony.oscache.base.CacheEntry;
+import com.opensymphony.oscache.base.events.CacheMapAccessEvent;
+import com.opensymphony.oscache.base.events.CacheMapAccessEventType;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Test the cache map access event listener implementation
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public class TestCacheMapAccessEventListenerImpl extends TestCase {
+ /**
+ * Key used for this test
+ */
+ private final String KEY = "Test Cache Map Access Event Listener Impl Key";
+
+ /**
+ * Constructor
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestCacheMapAccessEventListenerImpl(String str) {
+ super(str);
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The name of this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestCacheMapAccessEventListenerImpl.class);
+ }
+
+ /**
+ * Test the basic implementation of the listener
+ */
+ public void testCacheMapAccessEventListenerImpl() {
+ // Build objects required for the tests
+ CacheEntry entry = new CacheEntry(KEY);
+ CacheMapAccessEventListenerImpl listener = new CacheMapAccessEventListenerImpl();
+
+ // Generate events
+ listener.accessed(new CacheMapAccessEvent(CacheMapAccessEventType.HIT, entry));
+ listener.accessed(new CacheMapAccessEvent(CacheMapAccessEventType.HIT, entry));
+ listener.accessed(new CacheMapAccessEvent(CacheMapAccessEventType.STALE_HIT, entry));
+ listener.accessed(new CacheMapAccessEvent(CacheMapAccessEventType.MISS, entry));
+
+ // Assert the counters
+ assertEquals(listener.getHitCount(), 2);
+ assertEquals(listener.getStaleHitCount(), 1);
+ assertEquals(listener.getMissCount(), 1);
+
+ // Reset the counts
+ listener.reset();
+ assertEquals(listener.getHitCount(), 0);
+ assertEquals(listener.getStaleHitCount(), 0);
+ assertEquals(listener.getMissCount(), 0);
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/extra/TestCompleteExtra.java b/src/test/java/com/opensymphony/oscache/extra/TestCompleteExtra.java
new file mode 100644
index 0000000..328ceb8
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/extra/TestCompleteExtra.java
@@ -0,0 +1,56 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.extra;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Test class for the com.opensymphony.oscache.extra package.
+ * It invokes all the test suites of all the other classes of the package.
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestCompleteExtra extends TestCase {
+ /**
+ * Constructor for the osCache Cache Extra package main test program
+ */
+ public TestCompleteExtra(String str) {
+ super(str);
+ }
+
+ /**
+ * Main method which is called to perform the tests
+ *
+ * @param args Arguments received
+ */
+ public static void main(String[] args) {
+ // Run the test suite
+ junit.swingui.TestRunner testRunner = new junit.swingui.TestRunner();
+ testRunner.setLoading(false);
+
+ String[] args2 = {TestCompleteExtra.class.getName()};
+ testRunner.start(args2);
+ }
+
+ /**
+ * Test suite required to test this project
+ *
+ * @return suite The test suite
+ */
+ public static Test suite() {
+ // Add all the test suites of all the project classes
+ TestSuite suite = new TestSuite("Test all extra cache modules");
+ suite.addTest(TestCacheEntryEventListenerImpl.suite());
+ suite.addTest(TestCacheMapAccessEventListenerImpl.suite());
+ suite.addTest(TestScopeEventListenerImpl.suite());
+ suite.addTest(TestStatisticListenerImpl.suite());
+
+ return suite;
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/extra/TestScopeEventListenerImpl.java b/src/test/java/com/opensymphony/oscache/extra/TestScopeEventListenerImpl.java
new file mode 100644
index 0000000..947387b
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/extra/TestScopeEventListenerImpl.java
@@ -0,0 +1,62 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.extra;
+
+import com.opensymphony.oscache.base.events.ScopeEvent;
+import com.opensymphony.oscache.base.events.ScopeEventType;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+import java.util.Date;
+
+/**
+ * Test the scope event listener implementation
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public class TestScopeEventListenerImpl extends TestCase {
+ private static final int PAGE_SCOPE = 1;
+
+ /**
+ * Constructor
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestScopeEventListenerImpl(String str) {
+ super(str);
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The name of this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestScopeEventListenerImpl.class);
+ }
+
+ /**
+ * Test the basic implementation of this listener
+ */
+ public void testScopeEventListenerImpl() {
+ // Construct the object we need for the test
+ ScopeEventListenerImpl listener = new ScopeEventListenerImpl();
+
+ // Generates events
+ listener.scopeFlushed(new ScopeEvent(ScopeEventType.ALL_SCOPES_FLUSHED, PAGE_SCOPE, new Date()));
+ listener.scopeFlushed(new ScopeEvent(ScopeEventType.SCOPE_FLUSHED, PAGE_SCOPE, new Date()));
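+
+ // The ALL_SCOPES_FLUSHED event should increment each per-scope counter once, while the
+ // SCOPE_FLUSHED event should only increment the page scope counter, which explains the
+ // expected values asserted below (page = 2, the other scopes = 1, total = 5)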
+
+ // Assert the counters
+ assertEquals(listener.getApplicationScopeFlushCount(), 1);
+ assertEquals(listener.getPageScopeFlushCount(), 2);
+ assertEquals(listener.getRequestScopeFlushCount(), 1);
+ assertEquals(listener.getSessionScopeFlushCount(), 1);
+ assertEquals(listener.getTotalScopeFlushCount(), 5);
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/extra/TestStatisticListenerImpl.java b/src/test/java/com/opensymphony/oscache/extra/TestStatisticListenerImpl.java
new file mode 100644
index 0000000..8cf36f4
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/extra/TestStatisticListenerImpl.java
@@ -0,0 +1,98 @@
+/*
+ * Copyright (c) 2002-2007 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.extra;
+
+import java.util.Date;
+
+import com.opensymphony.oscache.base.Cache;
+import com.opensymphony.oscache.base.CacheEntry;
+import com.opensymphony.oscache.base.events.CacheEntryEvent;
+import com.opensymphony.oscache.base.events.CacheGroupEvent;
+import com.opensymphony.oscache.base.events.CachePatternEvent;
+import com.opensymphony.oscache.base.events.CachewideEvent;
+import com.opensymphony.oscache.base.events.ScopeEvent;
+import com.opensymphony.oscache.base.events.ScopeEventType;
+import com.opensymphony.oscache.general.GeneralCacheAdministrator;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Test the statistic listener implementation
+ *
+ * $Id: TestCacheEntryEventListenerImpl.java 254 2005-06-17 05:07:38Z dres $
+ * @version $Revision: 254 $
+ */
+public class TestStatisticListenerImpl extends TestCase {
+
+ private static final int PAGE_SCOPE = 1;
+
+ /**
+ * Key used for this test
+ */
+ private final String KEY = "Test Statistikc Listener Impl Key";
+
+ /**
+ * Constructor
+ *
+ * @param str The test name (required by JUnit)
+ */
+ public TestStatisticListenerImpl(String str) {
+ super(str);
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The name of this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestStatisticListenerImpl.class);
+ }
+
+ /**
+ * Test the basic implementation
+ */
+ public void testStatisticListenerImpl() {
+ // Construct the objects required for the tests
+ CacheEntry entry = new CacheEntry(KEY);
+ GeneralCacheAdministrator admin = new GeneralCacheAdministrator();
+ Cache cache = new Cache(admin.isMemoryCaching(), admin.isUnlimitedDiskCache(), admin.isOverflowPersistence());
+ CacheEntryEvent event = new CacheEntryEvent(cache, entry, null);
+ StatisticListenerImpl listener = new StatisticListenerImpl();
+
+ // Assert the counters
+ assertEquals(listener.getEntriesAdded(), 0);
+ assertEquals(listener.getFlushCount(), 0);
+ assertEquals(listener.getEntriesRemoved(), 0);
+ assertEquals(listener.getEntriesUpdated(), 0);
+ assertEquals(listener.getHitCount(), 0);
+ assertEquals(listener.getHitCountSum(), 0);
+ assertEquals(listener.getMissCount(), 0);
+ assertEquals(listener.getMissCountSum(), 0);
+ assertEquals(listener.getStaleHitCount(), 0);
+ assertEquals(listener.getStaleHitCountSum(), 0);
+
+ // Generate an event of each type
+ listener.cacheEntryAdded(event);
+ listener.cacheEntryFlushed(event);
+ listener.cacheEntryRemoved(event);
+ listener.cacheEntryUpdated(event);
+
+ listener.scopeFlushed(new ScopeEvent(ScopeEventType.ALL_SCOPES_FLUSHED, PAGE_SCOPE, new Date()));
+ listener.scopeFlushed(new ScopeEvent(ScopeEventType.SCOPE_FLUSHED, PAGE_SCOPE, new Date()));
+
+ listener.cacheFlushed(new CachewideEvent(cache, new Date(), null));
+ listener.cacheGroupFlushed(new CacheGroupEvent(cache, "testGroup"));
+ listener.cachePatternFlushed(new CachePatternEvent(cache, "testPattern"));
+
+ // Assert the counters
+ assertEquals(listener.getEntriesAdded(), 1);
+ assertEquals(listener.getFlushCount(), 6);
+ assertEquals(listener.getEntriesRemoved(), 1);
+ assertEquals(listener.getEntriesUpdated(), 1);
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/general/TestCompleteGeneral.java b/src/test/java/com/opensymphony/oscache/general/TestCompleteGeneral.java
new file mode 100644
index 0000000..b073e0b
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/general/TestCompleteGeneral.java
@@ -0,0 +1,54 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.general;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Test class for the com.opensymphony.oscache.general package.
+ * It invokes all the test suites of all the other classes of the package.
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestCompleteGeneral extends TestCase {
+ /**
+ * Constructor for the osCache Cache project main test program
+ */
+ public TestCompleteGeneral(String str) {
+ super(str);
+ }
+
+ /**
+ * Main method which is called to perform the tests
+ *
+ * @param args Arguments received
+ */
+ public static void main(String[] args) {
+ // Run the test suite
+ junit.swingui.TestRunner testRunner = new junit.swingui.TestRunner();
+ testRunner.setLoading(false);
+
+ String[] args2 = {TestCompleteGeneral.class.getName()};
+ testRunner.start(args2);
+ }
+
+ /**
+ * Test suite required to test this project
+ *
+ * @return suite The test suite
+ */
+ public static Test suite() {
+ // Add all the test suites of all the project classes
+ TestSuite suite = new TestSuite("Test all General cache package");
+ suite.addTest(TestGeneralCacheAdministrator.suite());
+ suite.addTest(TestConcurrent.suite());
+
+ return suite;
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/general/TestConcurrent.java b/src/test/java/com/opensymphony/oscache/general/TestConcurrent.java
new file mode 100644
index 0000000..8aca866
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/general/TestConcurrent.java
@@ -0,0 +1,126 @@
+/*
+ * Copyright (c) 2002-2007 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.general;
+
+import java.util.Properties;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import com.opensymphony.oscache.base.AbstractCacheAdministrator;
+import com.opensymphony.oscache.base.NeedsRefreshException;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Testing concurrent API accesses.
+ *
+ * @author $Author: larst $
+ * @version $Revision: 385 $
+ */
+public class TestConcurrent extends TestCase {
+
+ private static transient final Log log = LogFactory.getLog(GeneralCacheAdministrator.class); //TestConcurrency.class
+
+ // Static instance of a cache administrator
+ private GeneralCacheAdministrator admin = null;
+
+ // Constants needed in the tests
+ private final String KEY = "ConcurrentKey";
+ private final String VALUE = "ConcurrentContent";
+ private static final int THREAD_COUNT = 5;
+ private static final int CACHE_SIZE_THREAD = 2000;
+ private static final int CACHE_SIZE = THREAD_COUNT * CACHE_SIZE_THREAD;
+
+ public TestConcurrent(String str) {
+ super(str);
+ }
+
+ /**
+ * This method returns the test suite for this class to JUnit
+ *
+ * @return The name of this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestConcurrent.class);
+ }
+
+ /**
+ * This method is invoked before each testXXXX method of the
+ * class. It sets up the variables required for each test.
+ */
+ public void setUp() {
+ // At first invocation, create a new Cache
+ if (admin == null) {
+ Properties config = new Properties();
+ config.setProperty(AbstractCacheAdministrator.CACHE_CAPACITY_KEY, Integer.toString(CACHE_SIZE));
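+ // The capacity is sized so that every thread's entries fit in the cache; the size
+ // assertion at the end of testConcurrentCreation10000 relies on no evictions occurring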
+ admin = new GeneralCacheAdministrator(config);
+ assertNotNull(admin);
+ log.info("Cache Size = " + admin.getCache().getSize());
+ }
+ }
+
+ /**
+ * Tests concurrent accesses.
+ * @see http://jira.opensymphony.com/browse/CACHE-279
+ */
+ public void testConcurrentCreation10000() {
+ Thread[] thread = new Thread[THREAD_COUNT];
+
+ log.info("Ramping threads...");
+ for (int idx = 0; idx < THREAD_COUNT; idx++) {
+ CreationTest runner = new CreationTest(idx);
+ thread[idx] = new Thread(runner);
+ thread[idx].start();
+ }
+
+ log.info("Waiting....");
+ boolean stillAlive;
+ do {
+ try {
+ Thread.sleep(200);
+ } catch (InterruptedException e) {
+ // do nothing
+ }
+
+ stillAlive = false;
+ for (int i = 0; i < thread.length; i++) {
+ stillAlive |= thread[i].isAlive();
+ }
+ } while (stillAlive);
+ log.info("All threads finished. Cache Size = " + admin.getCache().getSize());
+
+ assertTrue("Unexpected amount of objects in the cache: " + admin.getCache().getSize(), CACHE_SIZE == admin.getCache().getSize());
+ }
+
+ private class CreationTest implements Runnable {
+
+ private String prefixKey;
+
+ public CreationTest(int idx) {
+ prefixKey = KEY + "_" + Integer.toString(idx) + "_";
+ Thread.currentThread().setName("CreationTest-"+idx);
+ log.info(Thread.currentThread().getName() + " is running...");
+ }
+
+ public void run() {
+ for (int i = 0; i < CACHE_SIZE_THREAD; i++) {
+ String key = prefixKey + Integer.toString(i);
+ try {
+ // Get from the cache
+ admin.getFromCache(key);
+ } catch (NeedsRefreshException nre) {
+ // The entry is missing or stale: store a value, which also
+ // completes the pending update signalled by the exception
+ admin.putInCache(key, VALUE);
+ }
+ }
+ log.info(Thread.currentThread().getName() + " finished.");
+ }
+ }
+
+}
diff --git a/src/test/java/com/opensymphony/oscache/general/TestGeneralCacheAdministrator.java b/src/test/java/com/opensymphony/oscache/general/TestGeneralCacheAdministrator.java
new file mode 100644
index 0000000..a2bb351
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/general/TestGeneralCacheAdministrator.java
@@ -0,0 +1,400 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.general;
+
+import java.util.Date;
+
+import com.opensymphony.oscache.base.*;
+import com.opensymphony.oscache.extra.CacheEntryEventListenerImpl;
+import com.opensymphony.oscache.extra.CacheMapAccessEventListenerImpl;
+
+import junit.framework.Test;
+import junit.framework.TestSuite;
+
+/**
+ * Test all the public methods of the GeneralCacheAdministrator class. Since
+ * this class extends the TestAbstractCacheAdministrator class, the
+ * AbstractCacheAdministrator is tested when invoking this class.
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public class TestGeneralCacheAdministrator extends TestAbstractCacheAdministrator {
+ // Constants used throughout the tests
+ private static final String KEY = "Test General Cache Admin Key";
+ private static final int NO_REFRESH_NEEDED = CacheEntry.INDEFINITE_EXPIRY;
+ private static final int REFRESH_NEEDED = 0;
+ private static final String CONTENT = "Content for the general cache admin test";
+ private static final String WILL_NOT_FLUSH_PATTERN = "This key won't flush";
+ private static final String GROUP1 = "group1";
+ private static final String GROUP2 = "group2";
+ private static final String GROUP3 = "group3";
+
+ // Constants for listener counters
+ private static final int NB_CACHE_HITS = 7;
+ private static final int NB_CACHE_STALE_HITS = 7;
+ private static final int NB_CACHE_MISSED = 1;
+ private static final int NB_ADD = 7;
+ private static final int NB_UPDATED = 2;
+ private static final int NB_FLUSH = 3;
+ private static final int NB_REMOVED = 0;
+ private static final int NB_GROUP_FLUSH = 2;
+ private static final int NB_PATTERN_FLUSH = 1;
+
+ // Static instance of a cache administrator
+ static GeneralCacheAdministrator admin = null;
+
+ // Declare the listeners
+ private CacheEntryEventListenerImpl cacheEntryEventListener = null;
+ private CacheMapAccessEventListenerImpl cacheMapAccessEventListener = null;
+
+ /**
+ * Class constructor
+ *
+ * @param str Test name (required by JUnit)
+ */
+ public TestGeneralCacheAdministrator(String str) {
+ super(str);
+ }
+
+ /**
+ * Test suite required to test this project
+ *
+ * @return suite The test suite
+ */
+ public static Test suite() {
+ return new TestSuite(TestGeneralCacheAdministrator.class);
+ }
+
+ /**
+ * Abstract method used by the TestAbstractCacheAdministrator class
+ *
+ * @return An administrator instance
+ */
+ public AbstractCacheAdministrator getAdmin() {
+ return admin;
+ }
+
+ /**
+ * This method is invoked before each testXXXX method of the
+ * class. It sets up the variables required for each test.
+ */
+ public void setUp() {
+ // Create a new administrator and the listeners for each test
+ admin = new GeneralCacheAdministrator();
+ assertNotNull(admin);
+ cacheEntryEventListener = new CacheEntryEventListenerImpl();
+ cacheMapAccessEventListener = new CacheMapAccessEventListenerImpl();
+
+ // Register the listeners on the cache map
+ admin.getCache().addCacheEventListener(cacheEntryEventListener);
+ admin.getCache().addCacheEventListener(cacheMapAccessEventListener);
+ }
+
+ /**
+ * Validate the CacheEntryEventListener's data
+ */
+ public void testCacheEntryEventListenerCounters() {
+ populate();
+ assertEquals(NB_ADD, cacheEntryEventListener.getEntryAddedCount());
+ assertEquals(NB_REMOVED, cacheEntryEventListener.getEntryRemovedCount());
+ assertEquals(NB_UPDATED, cacheEntryEventListener.getEntryUpdatedCount());
+ assertEquals(NB_GROUP_FLUSH, cacheEntryEventListener.getGroupFlushedCount());
+ assertEquals(NB_PATTERN_FLUSH, cacheEntryEventListener.getPatternFlushedCount());
+ assertEquals(NB_FLUSH, cacheEntryEventListener.getEntryFlushedCount());
+ }
+
+ /**
+ * Validate the CacheEntryEventListener's data
+ */
+ public void testCacheMapAccessEventListenerCounters() {
+ populate();
+
+ int missCount = cacheMapAccessEventListener.getMissCount();
+
+ if (NB_CACHE_MISSED != missCount) {
+ fail("We expected " + NB_CACHE_MISSED + " misses but got " + missCount + "." + " This is probably due to existing disk cache, delete it and re-run" + " the test");
+ }
+
+ assertEquals(NB_CACHE_HITS, cacheMapAccessEventListener.getHitCount());
+ assertEquals(NB_CACHE_STALE_HITS, cacheMapAccessEventListener.getStaleHitCount());
+ }
+
+ /**
+ * Ensure that item may be flushed by key pattern
+ */
+ public void testFlushPattern() {
+ // Put some content in cache
+ admin.putInCache(KEY, CONTENT);
+
+ // Call flush pattern with parameters that must NOT flush our object
+ admin.flushPattern(WILL_NOT_FLUSH_PATTERN);
+ admin.flushPattern("");
+ admin.flushPattern(null);
+
+ // Ensure that our object is not gone
+ assertNotNull(checkObj(KEY, NO_REFRESH_NEEDED, false));
+
+ // This time we flush it for real
+ admin.flushPattern(KEY.substring(1, 2));
+ assertNotNull(checkObj(KEY, NO_REFRESH_NEEDED, true));
+ }
+
+ /**
+ * Ensure that item may be flushed by the entry itself
+ */
+ public void testFlushEntry() {
+ // Put some content in cache
+ admin.putInCache(KEY, CONTENT);
+
+ // Call flushEntry with a key that must NOT flush our object
+ admin.flushEntry(WILL_NOT_FLUSH_PATTERN);
+
+ // Ensure that our object is not gone
+ assertNotNull(checkObj(KEY, NO_REFRESH_NEEDED, false));
+
+ // This time we flush it for real
+ admin.flushEntry(KEY);
+ assertNotNull(checkObj(KEY, NO_REFRESH_NEEDED, true));
+ }
+
+ /**
+ * Ensure that item may be flushed by flush all
+ */
+ public void testFlushAll() {
+ // Put some content in cache
+ admin.putInCache(KEY, CONTENT);
+
+ // Ensure that our object is not gone
+ assertNotNull(checkObj(KEY, NO_REFRESH_NEEDED, false));
+
+ // This time we flush it for real
+ admin.flushAll();
+ assertNotNull(checkObj(KEY, NO_REFRESH_NEEDED, true));
+ }
+
+ /**
+ * Ensure that the cache groupings work correctly
+ */
+ public void testGroups() {
+ // Flush a non-existent group - should be OK and will still fire a GROUP_FLUSHED event
+ admin.flushGroup(GROUP1);
+
+ // Add some items to various group combinations
+ admin.putInCache("1", "item 1"); // No groups
+ admin.putInCache("2", "item 2", new String[] {GROUP1}); // Just group 1
+ admin.putInCache("3", "item 3", new String[] {GROUP2}); // Just group 2
+ admin.putInCache("4", "item 4", new String[] {GROUP1, GROUP2}); // groups 1 & 2
+ admin.putInCache("5", "item 5", new String[] {GROUP1, GROUP2, GROUP3}); // groups 1,2 & 3
+
+ admin.flushGroup(GROUP3); // This should flush item 5 only
+ assertNotNull(checkObj("5", NO_REFRESH_NEEDED, true));
+ assertNotNull(checkObj("4", NO_REFRESH_NEEDED, false));
+
+ admin.flushGroup(GROUP2); // This should flush items 3 and 4
+ assertNotNull(checkObj("1", NO_REFRESH_NEEDED, false));
+ assertNotNull(checkObj("2", NO_REFRESH_NEEDED, false));
+ assertNotNull(checkObj("3", NO_REFRESH_NEEDED, true));
+ assertNotNull(checkObj("4", NO_REFRESH_NEEDED, true));
+
+ admin.flushGroup(GROUP1); // Flushes item 2
+ assertNotNull(checkObj("1", NO_REFRESH_NEEDED, false));
+ assertNotNull(checkObj("2", NO_REFRESH_NEEDED, true));
+
+ // Test if regrouping a cache entry works
+ admin.putInCache("A", "ABC", new String[] {"A"});
+ admin.putInCache("A", "ABC", new String[] {"A", "B"});
+ admin.putInCache("B", "DEF", new String[] {"B"});
+ admin.flushGroup("B");
+ assertNotNull(checkObj("A", NO_REFRESH_NEEDED, true));
+ }
+
+ /**
+ * Test the main cache functionalities, which are storing and retrieving objects
+ * from it
+ */
+ public void testPutInCacheAndGetFromCache() {
+ // Put some item in cache and get it back right away. It should not need
+ // to be refreshed
+ admin.putInCache(KEY, CONTENT);
+
+ String cacheContent = (String) checkObj(KEY, NO_REFRESH_NEEDED, false);
+ assertTrue(CONTENT.equals(cacheContent));
+
+ // Get the item back again and expect a refresh
+ cacheContent = (String) checkObj(KEY, REFRESH_NEEDED, true);
+ assertTrue(CONTENT.equals(cacheContent));
+
+ // Call the put in cache with invalid values
+ invalidPutInCacheArgument(null, null);
+ admin.putInCache(KEY, null); // This will still update the cache - cached items can be null
+
+ // Call the getFromCache with invalid values
+ invalidGetFromCacheArgument(null, 0);
+
+ // Try to retrieve the values
+ assertNull(checkObj(KEY, NO_REFRESH_NEEDED, false));
+
+ // Try to retrieve an item that is not in the cache
+ Object obj = checkObj("Not in cache", NO_REFRESH_NEEDED, true);
+ assertNull(obj);
+ }
+
+ /**
+ * Test the main cache functionality: storing objects in the cache and
+ * retrieving them
+ */
+ public void testPutInCacheAndGetFromCacheWithPolicy() {
+ String key = "policy";
+
+ // We put content in the cache and get it back
+ admin.putInCache(key, CONTENT, new DummyAlwayRefreshEntryPolicy());
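+ // The refresh policy passed above decides when an entry is stale; DummyAlwayRefreshEntryPolicy
+ // is assumed to be a test helper that always reports the entry as stale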
+
+ // Should get a refresh
+ try {
+ admin.getFromCache(key, -1);
+ fail("Should have got a refresh.");
+ } catch (NeedsRefreshException nre) {
+ admin.cancelUpdate(key);
+ }
+ }
+
+ protected void tearDown() throws Exception {
+ if (admin != null) {
+ admin.getCache().removeCacheEventListener(cacheEntryEventListener);
+ admin.getCache().removeCacheEventListener(cacheMapAccessEventListener);
+ }
+ }
+
+
+ /**
+ * Bug CACHE-241
+ */
+ public void testFlushDateTomorrow() {
+ GeneralCacheAdministrator cacheAdmin = new GeneralCacheAdministrator(null);
+
+ cacheAdmin.putInCache("key1", "key1value");
+
+ try {
+ assertNotNull(cacheAdmin.getFromCache("key1"));
+ } catch (NeedsRefreshException e1) {
+ fail("Previous cache key1 doesn't exsits in GCA for the test!");
+ }
+
+ cacheAdmin.flushAll(new Date(System.currentTimeMillis() + 5000)); // flush in 5 sec.
+ try {
+ cacheAdmin.getFromCache("key1");
+ } catch (NeedsRefreshException e) {
+ cacheAdmin.cancelUpdate("key1");
+ fail("NRE is thrown, but key will expire in 5s."); // it fails here
+ }
+ }
+
+
+ /**
+ * Utility method that tries to get an item from the cache and verifies
+ * that everything goes as expected
+ *
+ * @param key The item key
+ * @param refresh The refresh period used to decide whether the item needs a refresh
+ * @param exceptionExpected Specify if we expect a NeedsRefreshException
+ */
+ private Object checkObj(String key, int refresh, boolean exceptionExpected) {
+ // Cache content
+ Object content = null;
+
+ try {
+ // try to find an object
+ content = admin.getFromCache(key, refresh);
+
+ if (exceptionExpected) {
+ fail("Expected NeedsRefreshException!");
+ }
+ } catch (NeedsRefreshException nre) {
+ admin.cancelUpdate(key);
+
+ if (!exceptionExpected) {
+ fail("Did not expected NeedsRefreshException!");
+ }
+
+ // Return the cache content from the exception
+ content = nre.getCacheContent();
+ }
+
+ return content;
+ }
+
+ /**
+ * Method that tries to retrieve data from the cache with invalid arguments
+ *
+ * @param key The cache item key
+ * @param refresh The refresh period used to decide whether the item needs a refresh
+ */
+ private void invalidGetFromCacheArgument(String key, int refresh) {
+ try {
+ // Try to get the data from the cache
+ admin.getFromCache(key, refresh);
+ fail("getFromCache did NOT throw an IllegalArgumentException");
+ } catch (IllegalArgumentException ipe) {
+ // This is what we expect
+ } catch (NeedsRefreshException nre) {
+ admin.cancelUpdate(key);
+
+ // Ignore this one
+ }
+ }
+
+ /**
+ * Method that tries to insert data into the cache with invalid arguments
+ *
+ * @param key The cache item key
+ * @param content The content of the cache item
+ */
+ private void invalidPutInCacheArgument(String key, Object content) {
+ try {
+ // Try to put this data in the cache
+ admin.putInCache(key, content);
+ fail("putInCache did NOT throw an IllegalArgumentException");
+ } catch (IllegalArgumentException ipe) {
+ // This is what we expect
+ }
+ }
+
+ private void populate() {
+ for (int i = 0; i < 7; i++) {
+ String[] groups = ((i & 1) == 0) ? new String[] {GROUP1, GROUP2} : new String[] {
+ GROUP3
+ };
+ admin.putInCache(KEY + i, CONTENT + i, groups);
+ }
+
+ //register one miss.
+ checkObj("Not in cache", NO_REFRESH_NEEDED, true);
+
+ //register 7 hits
+ for (int i = 0; i < 7; i++) {
+ try {
+ admin.getFromCache(KEY + i, NO_REFRESH_NEEDED);
+ } catch (NeedsRefreshException e) {
+ admin.cancelUpdate(KEY + i);
+ }
+ }
+
+ for (int i = 0; i < 7; i++) {
+ try {
+ admin.getFromCache(KEY + i, 0);
+ } catch (NeedsRefreshException e) {
+ admin.cancelUpdate(KEY + i);
+ }
+ }
+
+ admin.putInCache(KEY + 1, CONTENT);
+ admin.putInCache(KEY + 2, CONTENT);
+ admin.flushPattern("blahblah");
+ admin.flushGroup(GROUP1);
+ admin.flushGroup(GROUP2);
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/plugins/clustersupport/BaseTestBroadcastingListener.java b/src/test/java/com/opensymphony/oscache/plugins/clustersupport/BaseTestBroadcastingListener.java
new file mode 100644
index 0000000..0108641
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/plugins/clustersupport/BaseTestBroadcastingListener.java
@@ -0,0 +1,117 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.plugins.clustersupport;
+
+import com.opensymphony.oscache.base.*;
+
+import junit.framework.TestCase;
+
+import java.util.Date;
+
+/**
+ * A base class that provides the framework for testing a cluster listener
+ * implementation.
+ *
+ * @author Chris Miller
+ */
+public abstract class BaseTestBroadcastingListener extends TestCase {
+ /**
+ * The broadcasting listener used for the tests
+ */
+ protected static AbstractBroadcastingListener listener = null;
+
+ /**
+ * A cache instance to use for the tests
+ */
+ protected static Cache cache = null;
+
+ /**
+ * The number of tests in this class. This is used to keep
+ * track of how many tests remain; once we reach zero we shut
+ * down the broadcasting listener.
+ */
+ int testsRemaining = 0;
+
+ /**
+ * Cache group
+ */
+ private final String GROUP = "test group";
+
+ /**
+ * Object key
+ */
+ private final String KEY = "Test clustersupport persistence listener key";
+
+ public BaseTestBroadcastingListener(String str) {
+ super(str);
+ }
+
+ /**
+ * Tests the listener by causing the cache to fire off all its
+ * events
+ */
+ public void testListener() {
+ CacheEntry entry = new CacheEntry(KEY, null);
+
+ cache.putInCache(KEY, entry);
+ cache.putInCache(KEY, entry, new String[] {GROUP});
+ cache.flushEntry(KEY);
+ cache.flushGroup(GROUP);
+ cache.flushAll(new Date());
+
+ // Note that the remove event is not called since it's not exposed.
+ }
+
+ /**
+     * This method is invoked before each testXXXX method of the
+     * class. It sets up the broadcasting listener required for each test.
+ */
+ public void setUp() {
+ // At first invocation, create a listener
+ if (listener == null) {
+ testsRemaining = countTestCases(); // This seems to always return 1 even if there are multiple tests?
+
+ listener = getListener();
+ assertNotNull(listener);
+
+ cache = new Cache(true, false, false);
+ assertNotNull(cache);
+
+ try {
+ listener.initialize(cache, getConfig());
+ } catch (InitializationException e) {
+ fail(e.getMessage());
+ }
+
+ cache.addCacheEventListener(listener);
+ }
+ }
+
+ /**
+ * Once all the tests are complete this will shut down the broadcasting listener.
+ */
+ protected void tearDown() throws Exception {
+ if (--testsRemaining == 0) {
+ try {
+ listener.finialize();
+ listener = null;
+ } catch (FinalizationException e) {
+ fail(e.getMessage());
+ }
+ }
+ }
+
+ /**
+ * Child classes implement this to return the broadcasting listener instance
+ * that will be tested.
+ */
+ abstract AbstractBroadcastingListener getListener();
+
+ /**
+     * Child classes implement this to return the configuration for their listener.
+     *
+     * @return The configuration used to initialize the listener
+ */
+ abstract Config getConfig();
+}
diff --git a/src/test/java/com/opensymphony/oscache/plugins/clustersupport/ListenForClusterTests.java b/src/test/java/com/opensymphony/oscache/plugins/clustersupport/ListenForClusterTests.java
new file mode 100644
index 0000000..0e760f4
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/plugins/clustersupport/ListenForClusterTests.java
@@ -0,0 +1,115 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.plugins.clustersupport;
+
+import com.opensymphony.oscache.base.Cache;
+import com.opensymphony.oscache.base.Config;
+import com.opensymphony.oscache.base.FinalizationException;
+import com.opensymphony.oscache.base.InitializationException;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+
+/**
+ * This should be used in conjunction with the cluster test cases. Run this
+ * program to set up listeners for the various clustering implementations so
+ * you can see that the test messages are being received correctly.
+ *
+ * A shutdown hook is installed so the listeners can be shut down cleanly.
+ *
+ * @author Chris Miller
+ */
+public final class ListenForClusterTests {
+ ArrayList listeners = new ArrayList();
+ Cache cache;
+
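+    /**
+     * Registers a shutdown hook and then blocks forever, polling until the
+     * process is interrupted (CTRL-C), so incoming cluster messages can be observed.
+     */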
+ private void mainLoop() {
+ Thread shutdownHook = new ShutdownHookThread("");
+ Runtime.getRuntime().addShutdownHook(shutdownHook);
+ System.out.println();
+ System.out.println("------------------------------------------------");
+ System.out.println("Waiting for cluster messages... (CTRL-C to exit)");
+ System.out.println("------------------------------------------------");
+
+ while (true) {
+ try {
+ Thread.sleep(250);
+ } catch (InterruptedException ie) {
+ }
+ }
+ }
+
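+    /**
+     * Creates and initializes the JavaGroups and JMS broadcasting listeners and
+     * attaches them to a local cache instance.
+     */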
+ private void initListeners() {
+ BaseTestBroadcastingListener testcase = null;
+ AbstractBroadcastingListener listener;
+ Cache cache = new Cache(true, false, false);
+
+ // Add the JavaGroups listener
+ try {
+ testcase = new TestJavaGroupsBroadcastingListener("JavaGroups");
+ listener = testcase.getListener();
+ listener.initialize(cache, testcase.getConfig());
+ cache.addCacheEventListener(listener);
+ listeners.add(listener);
+ } catch (InitializationException e) {
+ System.out.println("The JavaGroups listener could not be initialized: " + e);
+ }
+
+ // Add the JMS listener
+ try {
+ testcase = new TestJMSBroadcastingListener("JMS");
+ listener = testcase.getListener();
+
+ Config config = testcase.getConfig();
+ config.set("cache.cluster.jms.node.name", "cacheNode2");
+
+ listener.initialize(cache, config);
+ cache.addCacheEventListener(listener);
+ listeners.add(listener);
+ } catch (InitializationException e) {
+ System.out.println("The JMS listener could not be initialized: " + e);
+ }
+ }
+
+ /**
+ * Starts up the cluster listeners.
+ */
+ public static void main(String[] args) {
+ ListenForClusterTests listen = new ListenForClusterTests();
+
+ listen.initListeners();
+
+ listen.mainLoop();
+ }
+
+ /**
+ * Inner class that handles the shutdown event
+ */
+ class ShutdownHookThread extends Thread {
+ protected String message;
+
+ public ShutdownHookThread(String message) {
+ this.message = message;
+ }
+
+ /**
+ * This is executed when the application is forcibly shutdown (via
+ * CTRL-C etc). Any configured listeners are shut down here.
+ */
+ public void run() {
+ System.out.println("Shutting down the cluster listeners...");
+
+ for (Iterator it = listeners.iterator(); it.hasNext();) {
+ try {
+ ((AbstractBroadcastingListener) it.next()).finialize();
+ } catch (FinalizationException e) {
+ System.out.println("The listener could not be shut down cleanly: " + e);
+ }
+ }
+
+ System.out.println("Shutdown complete.");
+ }
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/plugins/clustersupport/TestCompleteClustering.java b/src/test/java/com/opensymphony/oscache/plugins/clustersupport/TestCompleteClustering.java
new file mode 100644
index 0000000..7d00be3
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/plugins/clustersupport/TestCompleteClustering.java
@@ -0,0 +1,53 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.plugins.clustersupport;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Test class for the com.opensymphony.oscache.plugins.clustersupport package.
+ * It invokes all the test suites of all the other classes of the package.
+ *
+ * @author Chris Miller
+ */
+public final class TestCompleteClustering extends TestCase {
+ /**
+ * Constructor for the osCache project main test program
+ */
+ public TestCompleteClustering(String str) {
+ super(str);
+ }
+
+ /**
+ * Main method which is called to perform the tests
+ *
+ * @param args Arguments received
+ */
+ public static void main(String[] args) {
+ // Run the test suite
+ junit.swingui.TestRunner testRunner = new junit.swingui.TestRunner();
+ testRunner.setLoading(false);
+
+ String[] args2 = {TestCompleteClustering.class.getName()};
+ testRunner.start(args2);
+ }
+
+ /**
+ * Test suite required to test this project
+ *
+ * @return suite The test suite
+ */
+ public static Test suite() {
+        // Add all the test suites of the project classes
+ TestSuite suite = new TestSuite("Test all OSCache clustering");
+ suite.addTest(TestJavaGroupsBroadcastingListener.suite());
+ suite.addTest(TestJMSBroadcastingListener.suite());
+ suite.addTest(TestJMS10BroadcastingListener.suite());
+
+ return suite;
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/plugins/clustersupport/TestJMS10BroadcastingListener.java b/src/test/java/com/opensymphony/oscache/plugins/clustersupport/TestJMS10BroadcastingListener.java
new file mode 100644
index 0000000..0f40f3a
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/plugins/clustersupport/TestJMS10BroadcastingListener.java
@@ -0,0 +1,58 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.plugins.clustersupport;
+
+import com.opensymphony.oscache.base.Config;
+
+import junit.framework.Test;
+import junit.framework.TestSuite;
+
+/**
+ * Test all the public methods of the broadcasting listener and assert the
+ * return values
+ *
+ * @author Chris Miller
+ */
+public final class TestJMS10BroadcastingListener extends BaseTestBroadcastingListener {
+ public TestJMS10BroadcastingListener(String str) {
+ super(str);
+ }
+
+ /**
+     * This method returns the test suite for this class to JUnit.
+     *
+     * @return The test suite for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestJMS10BroadcastingListener.class);
+ }
+
+ /**
+     * Returns a configured JMS10BroadcastingListener instance
+     * for testing.
+ */
+ public AbstractBroadcastingListener getListener() {
+ return new JMS10BroadcastingListener();
+ }
+
+ /**
+ * Return the configuration for the JMS listener
+ */
+ Config getConfig() {
+ Config config = new Config();
+
+        // A "jndi.properties" application resource file needs to be present on the classpath
+        // and contain the following parameters:
+ // config.set(Context.INITIAL_CONTEXT_FACTORY, "com.evermind.server.ApplicationClientInitialContextFactory");
+ // config.set(Context.PROVIDER_URL, "ormi://localhost:23791/");
+ // config.set(Context.SECURITY_PRINCIPAL, "admin");
+ // config.set(Context.SECURITY_CREDENTIALS, "xxxxxx");
+ config.set("cache.cluster.jms.topic.factory", "java:comp/env/jms/TopicConnectionFactory");
+ config.set("cache.cluster.jms.topic.name", "java:comp/env/jms/OSCacheTopic");
+ config.set("cache.cluster.jms.node.name", "cacheNode1");
+
+ return config;
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/plugins/clustersupport/TestJMSBroadcastingListener.java b/src/test/java/com/opensymphony/oscache/plugins/clustersupport/TestJMSBroadcastingListener.java
new file mode 100644
index 0000000..5a63b6c
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/plugins/clustersupport/TestJMSBroadcastingListener.java
@@ -0,0 +1,58 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.plugins.clustersupport;
+
+import com.opensymphony.oscache.base.Config;
+
+import junit.framework.Test;
+import junit.framework.TestSuite;
+
+/**
+ * Test all the public methods of the broadcasting listener and assert the
+ * return values
+ *
+ * @author Chris Miller
+ */
+public final class TestJMSBroadcastingListener extends BaseTestBroadcastingListener {
+ public TestJMSBroadcastingListener(String str) {
+ super(str);
+ }
+
+ /**
+     * This method returns the test suite for this class to JUnit.
+     *
+     * @return The test suite for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestJMSBroadcastingListener.class);
+ }
+
+ /**
+     * Returns a configured JMSBroadcastingListener instance
+     * for testing.
+ */
+ public AbstractBroadcastingListener getListener() {
+ return new JMSBroadcastingListener();
+ }
+
+ /**
+ * Return the configuration for the JMS listener
+ */
+ Config getConfig() {
+ Config config = new Config();
+
+        // A "jndi.properties" application resource file needs to be present on the classpath
+        // and contain the following parameters:
+ // config.set(Context.INITIAL_CONTEXT_FACTORY, "com.evermind.server.ApplicationClientInitialContextFactory");
+ // config.set(Context.PROVIDER_URL, "ormi://localhost:23791/");
+ // config.set(Context.SECURITY_PRINCIPAL, "admin");
+ // config.set(Context.SECURITY_CREDENTIALS, "xxxxxx");
+ config.set("cache.cluster.jms.topic.factory", "java:comp/env/jms/TopicConnectionFactory");
+ config.set("cache.cluster.jms.topic.name", "java:comp/env/jms/OSCacheTopic");
+ config.set("cache.cluster.jms.node.name", "cacheNode1");
+
+ return config;
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/plugins/clustersupport/TestJavaGroupsBroadcastingListener.java b/src/test/java/com/opensymphony/oscache/plugins/clustersupport/TestJavaGroupsBroadcastingListener.java
new file mode 100644
index 0000000..5de10bb
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/plugins/clustersupport/TestJavaGroupsBroadcastingListener.java
@@ -0,0 +1,52 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.plugins.clustersupport;
+
+import com.opensymphony.oscache.base.Config;
+
+import junit.framework.Test;
+import junit.framework.TestSuite;
+
+/**
+ * Test all the public methods of the broadcasting listener and assert the
+ * return values
+ *
+ * @author Chris Miller
+ */
+public final class TestJavaGroupsBroadcastingListener extends BaseTestBroadcastingListener {
+ public TestJavaGroupsBroadcastingListener(String str) {
+ super(str);
+ }
+
+ /**
+     * This method returns the test suite for this class to JUnit.
+     *
+     * @return The test suite for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestJavaGroupsBroadcastingListener.class);
+ }
+
+ /**
+ * Returns a configured JavaGroupsBroadcastingListener instance
+ * for testing.
+ */
+ public AbstractBroadcastingListener getListener() {
+ return new JavaGroupsBroadcastingListener();
+ }
+
+ /**
+ * Get the configuration for this listener
+ */
+ public Config getConfig() {
+ Config config = new Config();
+
+ // Just specify the IP and leave the rest of the settings at
+ // default values.
+ config.set("cache.cluster.multicast.ip", "231.12.21.132");
+
+ return config;
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/plugins/diskpersistence/TestCompleteDiskPersistence.java b/src/test/java/com/opensymphony/oscache/plugins/diskpersistence/TestCompleteDiskPersistence.java
new file mode 100644
index 0000000..065a73a
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/plugins/diskpersistence/TestCompleteDiskPersistence.java
@@ -0,0 +1,55 @@
+/*
+ * Copyright (c) 2002-2007 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.plugins.diskpersistence;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Test class for the com.opensymphony.oscache.plugins.diskpersistence package.
+ * It invokes all the test suites of all the other classes of the package.
+ *
+ * $Id: TestCompleteDiskPersistence.java 254 2005-06-17 05:07:38Z dres $
+ * @version $Revision: 254 $
+ * @author Lars Torunski
+ */
+public final class TestCompleteDiskPersistence extends TestCase {
+ /**
+ * Constructor for the osCache Cache Extra package main test program
+ */
+ public TestCompleteDiskPersistence(String str) {
+ super(str);
+ }
+
+ /**
+ * Main method which is called to perform the tests
+ *
+ * @param args Arguments received
+ */
+ public static void main(String[] args) {
+ // Run the test suite
+ junit.swingui.TestRunner testRunner = new junit.swingui.TestRunner();
+ testRunner.setLoading(false);
+
+ String[] args2 = {TestCompleteDiskPersistence.class.getName()};
+ testRunner.start(args2);
+ }
+
+ /**
+ * Test suite required to test this project
+ *
+ * @return suite The test suite
+ */
+ public static Test suite() {
+ // Add all the test suites of all the project classes
+ TestSuite suite = new TestSuite("Test all diskpersistence plugins");
+ suite.addTest(TestDiskPersistenceListener.suite());
+ suite.addTest(TestHashDiskPersistenceListener.suite());
+ //suite.addTest(TestUnSerializable.suite());
+
+ return suite;
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/plugins/diskpersistence/TestDiskPersistenceListener.java b/src/test/java/com/opensymphony/oscache/plugins/diskpersistence/TestDiskPersistenceListener.java
new file mode 100644
index 0000000..e676e29
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/plugins/diskpersistence/TestDiskPersistenceListener.java
@@ -0,0 +1,223 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.plugins.diskpersistence;
+
+import com.opensymphony.oscache.base.CacheEntry;
+import com.opensymphony.oscache.base.Config;
+import com.opensymphony.oscache.base.persistence.CachePersistenceException;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+import java.io.File;
+import java.io.FilenameFilter;
+
+import java.util.HashSet;
+import java.util.Properties;
+import java.util.Set;
+
+/**
+ * Test all the public methods of the disk persistence listener and assert the
+ * return values
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestDiskPersistenceListener extends TestCase {
+ /**
+ * Cache dir to persist to
+ */
+ public static final String CACHEDIR = "/tmp/diskcache";
+
+ /**
+     * The persistence listener used for the tests
+ */
+ private DiskPersistenceListener listener = null;
+
+ /**
+ * Object content
+ */
+ private final String CONTENT = "Disk persistance content";
+
+ /**
+ * Cache group
+ */
+ private final String GROUP = "test group";
+
+ /**
+ * Object key
+ */
+ private final String KEY = "Test disk persistance listener key";
+ private CacheFileFilter cacheFileFilter = new CacheFileFilter();
+
+ public TestDiskPersistenceListener(String str) {
+ super(str);
+ }
+
+ /**
+     * This method returns the test suite for this class to JUnit.
+     *
+     * @return The test suite for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestDiskPersistenceListener.class);
+ }
+
+ /**
+     * This method is invoked before each testXXXX method of the
+     * class. It sets up the variables required for each test.
+ */
+ public void setUp() {
+        // Create a new listener for each test
+ listener = new DiskPersistenceListener();
+
+ Properties p = new Properties();
+ p.setProperty("cache.path", CACHEDIR);
+ p.setProperty("cache.memory", "false");
+ p.setProperty("cache.persistence.class", "com.opensymphony.oscache.plugins.diskpersistence.DiskPersistenceListener");
+ listener.configure(new Config(p));
+ }
+
+ /**
+ * Test the cache directory removal
+ */
+ public void testClear() {
+        // Create a new element since we removed it in the last test
+ testStoreRetrieve();
+
+ // Remove the directory, and assert that we have no more entry
+ try {
+ listener.clear();
+ assertTrue(!listener.isStored(KEY));
+ } catch (CachePersistenceException cpe) {
+ cpe.printStackTrace();
+ fail("Exception thrown in test clear!");
+ }
+ }
+
+ /**
+     * Test that the previously created file exists
+ */
+ public void testIsStored() {
+ try {
+ listener.store(KEY, CONTENT);
+
+ // Retrieve the previously created file
+ assertTrue(listener.isStored(KEY));
+
+ // Check that the fake key returns false
+ assertTrue(!listener.isStored(KEY + "fake"));
+ } catch (Exception e) {
+ e.printStackTrace();
+ fail("testIsStored raised an exception");
+ }
+ }
+
+ /**
+ * Test the cache removal
+ */
+ public void testRemove() {
+        // Create an entry if it doesn't exist
+ try {
+ if (!listener.isStored(KEY)) {
+ listener.store(KEY, CONTENT);
+ }
+
+ // Remove the previously created file
+ listener.remove(KEY);
+ } catch (CachePersistenceException cpe) {
+ cpe.printStackTrace();
+ fail("Exception thrown in test remove!");
+ }
+ }
+
+ /**
+     * Force a CachePersistenceException to reach 100% coverage in the unit test
+ */
+ public void testCachePersistenceException() {
+ try {
+ for (int i = 0; i < 2; i++) {
+ if (i == 1) throw new CachePersistenceException("test");
+ }
+ fail("CachePersistenceException not thrown!");
+ } catch (CachePersistenceException cpe) {
+ // ignore
+ }
+ try {
+ for (int i = 0; i < 2; i++) {
+ if (i == 1) throw new CachePersistenceException();
+ }
+ fail("CachePersistenceException not thrown!");
+ } catch (CachePersistenceException cpe) {
+ // ignore
+ }
+ }
+
+ /**
+ * Test the disk store and retrieve
+ */
+ public void testStoreRetrieve() {
+ // Create a cache entry and store it
+ CacheEntry entry = new CacheEntry(KEY);
+ entry.setContent(CONTENT);
+
+ try {
+ listener.store(KEY, entry);
+
+ // Retrieve our entry and validate the values
+ CacheEntry newEntry = (CacheEntry) listener.retrieve(KEY);
+ assertTrue(entry.getContent().equals(newEntry.getContent()));
+ assertEquals(entry.getCreated(), newEntry.getCreated());
+ assertTrue(entry.getKey().equals(newEntry.getKey()));
+
+ // Try to retrieve a non-existent object
+ assertNull(listener.retrieve("doesn't exist"));
+ } catch (Exception ex) {
+ ex.printStackTrace();
+ fail("Exception raised!");
+ }
+ }
+
+ /**
+ * Test the storing and retrieving of groups
+ */
+ public void testStoreRetrieveGroups() {
+ // Store a group
+ Set groupSet = new HashSet();
+ groupSet.add("1");
+ groupSet.add("2");
+
+ try {
+ listener.storeGroup(GROUP, groupSet);
+
+ // Retrieve it and validate its contents
+ groupSet = listener.retrieveGroup(GROUP);
+ assertNotNull(groupSet);
+
+ assertTrue(groupSet.contains("1"));
+ assertTrue(groupSet.contains("2"));
+ assertFalse(groupSet.contains("3"));
+
+ // Try to retrieve a non-existent group
+ assertNull(listener.retrieveGroup("abc"));
+ } catch (Exception ex) {
+ ex.printStackTrace();
+ fail("Exception raised!");
+ }
+ }
+
+ protected void tearDown() throws Exception {
+ listener.clear();
+ assertTrue("Cache not cleared", new File(CACHEDIR).list(cacheFileFilter).length == 0);
+ }
+
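+    /**
+     * Accepts every file except the special __groups__ file, so the tearDown
+     * check only counts real cache entries left on disk.
+     */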
+ private static class CacheFileFilter implements FilenameFilter {
+ public boolean accept(File dir, String name) {
+ return !"__groups__".equals(name);
+ }
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/plugins/diskpersistence/TestHashDiskPersistenceListener.java b/src/test/java/com/opensymphony/oscache/plugins/diskpersistence/TestHashDiskPersistenceListener.java
new file mode 100644
index 0000000..6b1860a
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/plugins/diskpersistence/TestHashDiskPersistenceListener.java
@@ -0,0 +1,220 @@
+/*
+ * Copyright (c) 2002-2007 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.plugins.diskpersistence;
+
+import com.opensymphony.oscache.base.CacheEntry;
+import com.opensymphony.oscache.base.Config;
+import com.opensymphony.oscache.base.persistence.CachePersistenceException;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+import java.io.File;
+import java.io.FilenameFilter;
+
+import java.util.HashSet;
+import java.util.Properties;
+import java.util.Set;
+
+/**
+ * Test all the public methods of the disk persistence listener and assert the
+ * return values
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestHashDiskPersistenceListener extends TestCase {
+ /**
+ * The persistance listener used for the tests
+     * The persistence listener used for the tests
+ private HashDiskPersistenceListener listener = null;
+
+ /**
+ * Object content
+ */
+ private final String CONTENT = "Disk persistance content";
+
+ /**
+ * Cache group
+ */
+ private final String GROUP = "test group";
+
+ /**
+ * Object key
+ */
+ private final String KEY = "Test disk persistance listener key";
+ private CacheFileFilter cacheFileFilter = new CacheFileFilter();
+
+ public TestHashDiskPersistenceListener(String str) {
+ super(str);
+ }
+
+ /**
+     * This method returns the test suite for this class to JUnit.
+     *
+     * @return The test suite for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestHashDiskPersistenceListener.class);
+ }
+
+ /**
+     * This method is invoked before each testXXXX method of the
+     * class. It sets up the variables required for each test.
+ */
+ public void setUp() {
+        // Create a new listener for each test
+ listener = new HashDiskPersistenceListener();
+
+ Properties p = new Properties();
+ p.setProperty("cache.path", TestDiskPersistenceListener.CACHEDIR);
+ p.setProperty("cache.memory", "false");
+ p.setProperty("cache.persistence.class", "com.opensymphony.oscache.plugins.diskpersistence.HashDiskPersistenceListener");
+ p.setProperty("cache.persistence.disk.hash.algorithm", "MD5");
+ listener.configure(new Config(p));
+ }
+
+ /**
+ * Test the cache directory removal
+ */
+ public void testClear() {
+        // Create a new element since we removed it in the last test
+ testStoreRetrieve();
+
+ // Remove the directory, and assert that we have no more entry
+ try {
+ listener.clear();
+ assertTrue(!listener.isStored(KEY));
+ } catch (CachePersistenceException cpe) {
+ cpe.printStackTrace();
+ fail("Exception thrown in test clear!");
+ }
+ }
+
+ /**
+     * Test that the previously created file exists
+ */
+ public void testIsStored() {
+ try {
+ listener.store(KEY, CONTENT);
+
+ // Retrieve the previously created file
+ assertTrue(listener.isStored(KEY));
+
+ // Check that the fake key returns false
+ assertTrue(!listener.isStored(KEY + "fake"));
+ } catch (Exception e) {
+ e.printStackTrace();
+ fail("testIsStored raised an exception");
+ }
+ }
+
+ /**
+ * Test the cache removal
+ */
+ public void testRemove() {
+        // Create an entry if it doesn't exist
+ try {
+ if (!listener.isStored(KEY)) {
+ listener.store(KEY, CONTENT);
+ }
+
+ // Remove the previously created file
+ listener.remove(KEY);
+ } catch (CachePersistenceException cpe) {
+ cpe.printStackTrace();
+ fail("Exception thrown in test remove!");
+ }
+ }
+
+ /**
+ * Test the disk store and retrieve
+ */
+ public void testStoreRetrieve() {
+ // Create a cache entry and store it
+ CacheEntry entry = new CacheEntry(KEY);
+ entry.setContent(CONTENT);
+
+ try {
+ listener.store(KEY, entry);
+
+ // Retrieve our entry and validate the values
+ CacheEntry newEntry = (CacheEntry) listener.retrieve(KEY);
+ assertTrue(entry.getContent().equals(newEntry.getContent()));
+ assertEquals(entry.getCreated(), newEntry.getCreated());
+ assertTrue(entry.getKey().equals(newEntry.getKey()));
+
+ // Try to retrieve a non-existent object
+ assertNull(listener.retrieve("doesn't exist"));
+ } catch (Exception ex) {
+ ex.printStackTrace();
+ fail("Exception raised!");
+ }
+ }
+
+ /**
+ * Test the storing and retrieving of groups
+ */
+ public void testStoreRetrieveGroups() {
+ // Store a group
+ Set groupSet = new HashSet();
+ groupSet.add("1");
+ groupSet.add("2");
+
+ try {
+ listener.storeGroup(GROUP, groupSet);
+
+ // Retrieve it and validate its contents
+ groupSet = listener.retrieveGroup(GROUP);
+ assertNotNull(groupSet);
+
+ assertTrue(groupSet.contains("1"));
+ assertTrue(groupSet.contains("2"));
+ assertFalse(groupSet.contains("3"));
+
+ // Try to retrieve a non-existent group
+ assertNull(listener.retrieveGroup("abc"));
+ } catch (Exception ex) {
+ ex.printStackTrace();
+ fail("Exception raised!");
+ }
+ }
+
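+    // Byte arrays of different lengths and contents; the test below asserts that
+    // byteArrayToHexString never maps two of them to the same string.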
+ private static final byte[] BYTES_1 = {0x00};
+ private static final byte[] BYTES_2 = {0x00, 0x00};
+ private static final byte[] BYTES_3 = {0x00, 0x00, 0x00};
+ private static final byte[] BYTES_4 = {0x01};
+
+ /**
+ * Test against bug issue CACHE-288.
+ */
+ public void testByteArrayToHexString() {
+ assertFalse("ByteArrayToHexStrings 1 and 2 shouldn't be equal",
+ HashDiskPersistenceListener.byteArrayToHexString(BYTES_1).
+ equals(HashDiskPersistenceListener.byteArrayToHexString(BYTES_2)));
+ assertFalse("ByteArrayToHexStrings 1 and 3 shouldn't be equal",
+ HashDiskPersistenceListener.byteArrayToHexString(BYTES_1).
+ equals(HashDiskPersistenceListener.byteArrayToHexString(BYTES_3)));
+ assertFalse("ByteArrayToHexStrings 1 and 4 shouldn't be equal",
+ HashDiskPersistenceListener.byteArrayToHexString(BYTES_1).
+ equals(HashDiskPersistenceListener.byteArrayToHexString(BYTES_4)));
+ assertFalse("ByteArrayToHexStrings 1 and 4 shouldn't be equal",
+ HashDiskPersistenceListener.byteArrayToHexString(BYTES_1).
+ equals(HashDiskPersistenceListener.byteArrayToHexString(BYTES_4)));
+ }
+
+ protected void tearDown() throws Exception {
+ listener.clear();
+ assertTrue("Cache not cleared", new File(TestDiskPersistenceListener.CACHEDIR).list(cacheFileFilter).length == 0);
+ }
+
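+    /**
+     * Ignores the special __groups__ file when counting what is left in the
+     * cache directory after each test.
+     */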
+ private static class CacheFileFilter implements FilenameFilter {
+ public boolean accept(File dir, String name) {
+ return !"__groups__".equals(name);
+ }
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/plugins/diskpersistence/TestUnSerializable.java b/src/test/java/com/opensymphony/oscache/plugins/diskpersistence/TestUnSerializable.java
new file mode 100644
index 0000000..a82844b
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/plugins/diskpersistence/TestUnSerializable.java
@@ -0,0 +1,89 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+/*
+ * Created on Mar 11, 2005
+ *
+ * TODO To change the template for this generated file go to
+ * Window - Preferences - Java - Code Style - Code Templates
+ */
+package com.opensymphony.oscache.plugins.diskpersistence;
+
+import com.opensymphony.oscache.general.GeneralCacheAdministrator;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+import java.io.File;
+
+/**
+ * Tests that cache entries whose values are not Serializable are never
+ * written to the disk persistence overflow store.
+ *
+ * @author admin
+ */
+public class TestUnSerializable extends TestCase {
+ final String CACHE_DIRECTORY_PATH = TestDiskPersistenceListener.CACHEDIR + "/application";
+ GeneralCacheAdministrator cache;
+
+ /* (non-Javadoc)
+ * @see junit.framework.TestCase#setUp()
+ */
+ protected void setUp() throws Exception {
+ // TODO Auto-generated method stub
+ super.setUp();
+
+ java.util.Properties properties = new java.util.Properties();
+ properties.setProperty("cache.path", TestDiskPersistenceListener.CACHEDIR);
+ properties.setProperty("cache.persistence.class", "com.opensymphony.oscache.plugins.diskpersistence.DiskPersistenceListener");
+ properties.setProperty("cache.persistence.overflow.only", "true");
+
+ // properties.setProperty("cache.memory", "false");
+ properties.setProperty("cache.capacity", "2");
+ properties.setProperty("cache.unlimited.disk", "false");
+ cache = new GeneralCacheAdministrator(properties);
+ cache.getCache().getPersistenceListener().clear();
+ }
+
+ /* (non-Javadoc)
+ * @see junit.framework.TestCase#tearDown()
+ */
+ protected void tearDown() throws Exception {
+ // TODO Auto-generated method stub
+ super.tearDown();
+ }
+
+ public void testNotSerializableObject() throws Exception {
+ cache.putInCache("1", new UnSerializable());
+ cache.putInCache("2", new UnSerializable());
+ assertTrue(isDirectoryEmpty(CACHE_DIRECTORY_PATH));
+ cache.putInCache("3", new UnSerializable());
+ cache.putInCache("4", new UnSerializable());
+ assertTrue(isDirectoryEmpty(CACHE_DIRECTORY_PATH));
+ cache.flushAll();
+ }
+
+ /**
+     * @param filePath The directory to check
+     * @return true if the directory does not exist or contains no files
+ */
+ private boolean isDirectoryEmpty(String filePath) {
+ File file = new File(filePath);
+ return !file.exists() || (file.list().length == 0);
+ }
+
+ /**
+     * This method returns the test suite for this class to JUnit.
+     *
+     * @return The test suite for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestUnSerializable.class);
+ }
+
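+    // Deliberately does not implement Serializable, so it can never be written
+    // to the disk overflow store by the persistence listener.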
+ public static class UnSerializable {
+ int asdfasdfasdf = 234;
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/util/TestFastCronParser.java b/src/test/java/com/opensymphony/oscache/util/TestFastCronParser.java
new file mode 100644
index 0000000..32275fc
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/util/TestFastCronParser.java
@@ -0,0 +1,314 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.util;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+import java.text.ParseException;
+import java.text.SimpleDateFormat;
+
+import java.util.*;
+
+/**
+ *
+ * @author Chris Miller
+ * @author $Author$
+ * @version $Revision$
+ */
+public class TestFastCronParser extends TestCase {
+ public TestFastCronParser(String str) {
+ super(str);
+ }
+
+ /**
+     * This method returns the test suite for this class to JUnit.
+     *
+     * @return The test suite for this class
+ */
+ public static Test suite() {
+ return new TestSuite(TestFastCronParser.class);
+ }
+
+ /**
+ * Tests to see if the cron class can calculate the previous matching
+ * time correctly in various circumstances
+ */
+ public void testEvaluations() {
+ // Minute tests
+ cronCall("01/01/2003 0:00", "45 * * * *", "31/12/2002 23:45", false);
+ cronCall("01/01/2003 0:00", "45-47,48,49 * * * *", "31/12/2002 23:49", false);
+ cronCall("01/01/2003 0:00", "2/5 * * * *", "31/12/2002 23:57", false);
+
+ // Hour tests
+ cronCall("20/12/2003 10:00", "* 3/4 * * *", "20/12/2003 07:59", false);
+ cronCall("20/12/2003 0:00", "* 3 * * *", "19/12/2003 03:59", false);
+
+ // Day of month tests
+ cronCall("07/01/2003 0:00", "30 * 1 * *", "01/01/2003 23:30", false);
+ cronCall("01/01/2003 0:00", "10 * 22 * *", "22/12/2002 23:10", false);
+ cronCall("01/01/2003 0:00", "30 23 19 * *", "19/12/2002 23:30", false);
+ cronCall("01/01/2003 0:00", "30 23 21 * *", "21/12/2002 23:30", false);
+ cronCall("01/01/2003 0:01", "* * 21 * *", "21/12/2002 23:59", false);
+ cronCall("10/07/2003 0:00", "* * 30,31 * *", "30/06/2003 23:59", false);
+
+ // Test month rollovers for months with 28,29,30 and 31 days
+ cronCall("01/03/2002 0:11", "* * * 2 *", "28/02/2002 23:59", false);
+ cronCall("01/03/2004 0:44", "* * * 2 *", "29/02/2004 23:59", false);
+ cronCall("01/04/2002 0:00", "* * * 3 *", "31/03/2002 23:59", false);
+ cronCall("01/05/2002 0:00", "* * * 4 *", "30/04/2002 23:59", false);
+
+ // Other month tests (including year rollover)
+ cronCall("01/01/2003 5:00", "10 * * 6 *", "30/06/2002 23:10", false);
+ cronCall("01/01/2003 5:00", "10 * * February,April-Jun *", "30/06/2002 23:10", false);
+ cronCall("01/01/2003 0:00", "0 12 1 6 *", "01/06/2002 12:00", false);
+ cronCall("11/09/1988 14:23", "* 12 1 6 *", "01/06/1988 12:59", false);
+ cronCall("11/03/1988 14:23", "* 12 1 6 *", "01/06/1987 12:59", false);
+ cronCall("11/03/1988 14:23", "* 2,4-8,15 * 6 *", "30/06/1987 15:59", false);
+ cronCall("11/03/1988 14:23", "20 * * january,FeB,Mar,april,May,JuNE,July,Augu,SEPT-October,Nov,DECEM *", "11/03/1988 14:20", false);
+
+ // Day of week tests
+ cronCall("26/06/2003 10:00", "30 6 * * 0", "22/06/2003 06:30", false);
+ cronCall("26/06/2003 10:00", "30 6 * * sunday", "22/06/2003 06:30", false);
+ cronCall("26/06/2003 10:00", "30 6 * * SUNDAY", "22/06/2003 06:30", false);
+ cronCall("23/06/2003 0:00", "1 12 * * 2", "17/06/2003 12:01", false);
+ cronCall("23/06/2003 0:00", "* * * * 3,0,4", "22/06/2003 23:59", false);
+ cronCall("23/06/2003 0:00", "* * * * 5", "20/06/2003 23:59", false);
+ cronCall("02/06/2003 18:30", "0 12 * * 2", "27/05/2003 12:00", false);
+ cronCall("02/06/2003 18:30", "0 12 * * Tue,Thurs-Sat,2", "31/05/2003 12:00", false);
+ cronCall("02/06/2003 18:30", "0 12 * * Mon-tuesday,wed,THURS-FRiday,Sat-SUNDAY", "02/06/2003 12:00", false);
+
+ // Leap year tests
+ cronCall("01/03/2003 12:00", "1 12 * * *", "28/02/2003 12:01", false); // non-leap year
+ cronCall("01/03/2004 12:00", "1 12 * * *", "29/02/2004 12:01", false); // leap year
+ cronCall("01/03/2003 12:00", "1 23 * * 0", "23/02/2003 23:01", false); // non-leap year
+ cronCall("01/03/2004 12:00", "1 23 * * 0", "29/02/2004 23:01", false); // leap year
+ cronCall("01/03/2003 12:00", "* * 29 2 *", "29/02/2000 23:59", false); // Find the previous leap-day
+ cronCall("01/02/2003 12:00", "* * 29 2 *", "29/02/2000 23:59", false); // Find the previous leap-day
+ cronCall("01/02/2004 12:00", "* * 29 2 *", "29/02/2000 23:59", false); // Find the previous leap-day
+
+ // Interval and range tests
+ cronCall("20/12/2003 10:00", "* */4 * * *", "20/12/2003 08:59", false);
+ cronCall("20/12/2003 10:00", "* 3/2 * * *", "20/12/2003 09:59", false);
+ cronCall("20/12/2003 10:00", "1-30/5 10-20/3 * jan-aug/2 *", "31/07/2003 19:26", false);
+ cronCall("20/12/2003 10:00", "20-25,27-30/2 10/8 * * *", "19/12/2003 18:29", false);
+ }
+
+ /**
+ * Tests a range of invalid cron expressions
+ */
+ public void testInvalidExpressionParsing() {
+ FastCronParser parser = new FastCronParser();
+
+ try {
+ parser.setCronExpression(null);
+ fail("An IllegalArgumentException should have been thrown");
+ } catch (IllegalArgumentException e) {
+ // Expected
+ } catch (ParseException e) {
+ fail("Expected an IllegalArgumentException but received a ParseException instead");
+ }
+
+ /**
+ * Not enough tokens
+ */
+ cronCall("01/01/2003 0:00", "", "", true);
+ cronCall("01/01/2003 0:00", "8 * 8/1 *", "", true);
+
+ /**
+ * Invalid syntax
+ */
+ cronCall("01/01/2003 0:00", "* invalid * * *", "", true);
+ cronCall("01/01/2003 0:00", "* -1 * * *", "", true);
+ cronCall("01/01/2003 0:00", "* * 20 * 0", "", true);
+ cronCall("01/01/2003 0:00", "* * 5-6-7 * *", "", true);
+ cronCall("01/01/2003 0:00", "* * 5/6-7 * *", "", true);
+ cronCall("01/01/2003 0:00", "* * 5-* * *", "", true);
+ cronCall("01/01/2003 0:00", "* * 5-6* * *", "", true);
+ cronCall("01/01/2003 0:00", "* * * * Mo", "", true);
+ cronCall("01/01/2003 0:00", "* * * jxxx *", "", true);
+ cronCall("01/01/2003 0:00", "* * * juxx *", "", true);
+ cronCall("01/01/2003 0:00", "* * * fbr *", "", true);
+ cronCall("01/01/2003 0:00", "* * * mch *", "", true);
+ cronCall("01/01/2003 0:00", "* * * mAh *", "", true);
+ cronCall("01/01/2003 0:00", "* * * arl *", "", true);
+ cronCall("01/01/2003 0:00", "* * * Spteber *", "", true);
+ cronCall("01/01/2003 0:00", "* * * otber *", "", true);
+ cronCall("01/01/2003 0:00", "* * * nvemtber *", "", true);
+ cronCall("01/01/2003 0:00", "* * * Dcmber *", "", true);
+ cronCall("01/01/2003 0:00", "* * * * mnday", "", true);
+ cronCall("01/01/2003 0:00", "* * * * tsdeday", "", true);
+ cronCall("01/01/2003 0:00", "* * * * wdnesday", "", true);
+ cronCall("01/01/2003 0:00", "* * * * frday", "", true);
+ cronCall("01/01/2003 0:00", "* * * * sdhdatr", "", true);
+
+ /**
+ * Values out of range
+ */
+ cronCall("01/01/2003 0:00", "* * 0 * *", "", true);
+ cronCall("01/01/2003 0:00", "* 50 * * *", "", true);
+ cronCall("01/01/2003 0:00", "* * * 1-20 *", "", true);
+ cronCall("01/01/2003 0:00", "* * 0-20 * *", "", true);
+ cronCall("01/01/2003 0:00", "* * 1-40 * *", "", true);
+ cronCall("01/01/2003 0:00", "* * * 1 8", "", true);
+ cronCall("01/01/2003 0:00", "* * 0/3 * *", "", true);
+ cronCall("01/01/2003 0:00", "* * 30 2 *", "", true); // 30th Feb doesn't ever exist!
+ cronCall("01/01/2003 0:00", "* * 31 4 *", "", true); // 31st April doesn't ever exist!
+ }
+
+ /**
+ * This tests the performance of the cron parsing engine. Note that it may take
+     * a couple of minutes to run - by default this test is disabled. Comment out the
+     * return statement at the start of this method to enable the
+     * benchmarking.
+ */
+ public void testPerformance() {
+ if (true) {
+            return; // Comment out this line to benchmark the cron parser
+ }
+
+ SimpleDateFormat sdf = new SimpleDateFormat("dd/MM/yyyy HH:mm");
+ Date date = null;
+
+ try {
+ date = sdf.parse("21/01/2003 16:27");
+ } catch (ParseException e) {
+ fail("Failed to parse date. Please check your unit test code!");
+ }
+
+ Calendar calendar = Calendar.getInstance(TimeZone.getTimeZone("GMT"));
+ calendar.setTime(date);
+
+ long baseTime = calendar.getTimeInMillis();
+
+ long time = 0;
+
+ try {
+ // Give HotSpot a chance to warm up
+ iterate("28 17 22 02 *", baseTime, time, 10000, true);
+
+ // Number of iterations to test
+ int count = 1000000;
+
+ // Test the best-case scenario
+ long bestCaseTime = iterate("* * * * *", baseTime, time, count, true);
+ System.out.println("Best case with parsing took " + bestCaseTime + "ms for " + count + " iterations. (" + (bestCaseTime / (float) count) + "ms per call)");
+
+ // Test a near worst-case scenario
+ long worstCaseTime = iterate("0-59,0-13,2,3,4,5 17-19 22-23,22,23 2,3 *", baseTime, time, count, true);
+ System.out.println("Worst case with parsing took " + worstCaseTime + "ms for " + count + " iterations. (" + (worstCaseTime / (float) count) + "ms per call)");
+
+ // Test the best-case scenario without parsing the expression on each iteration
+ bestCaseTime = iterate("* * * * *", baseTime, time, count, false);
+ System.out.println("Best case without parsing took " + bestCaseTime + "ms for " + count + " iterations. (" + (bestCaseTime / (float) count) + "ms per call)");
+
+ // Test a near worst-case scenario without parsing the expression on each iteration
+ worstCaseTime = iterate("0-59,0-13,2,3,4,5 17-19 22-23,22,23 2,3 *", baseTime, time, count, false);
+ System.out.println("Worst case without parsing took " + worstCaseTime + "ms for " + count + " iterations. (" + (worstCaseTime / (float) count) + "ms per call)");
+ } catch (ParseException e) {
+ }
+ }
+
+ /**
+ * Tests that a range of valid cron expressions get parsed correctly.
+ */
+ public void testValidExpressionParsing() {
+ FastCronParser parser;
+
+ // Check the default constructor
+ parser = new FastCronParser();
+ assertNull(parser.getCronExpression());
+
+ try {
+ parser = new FastCronParser("* * * * *");
+ assertEquals("* * * * *", parser.getCronExpression()); // Should be the same as what we gave it
+ assertEquals("* * * * *", parser.getExpressionSummary());
+
+ parser.setCronExpression("0 * * * *");
+ assertEquals("0 * * * *", parser.getCronExpression()); // Should be the same as what we gave it
+ assertEquals("0 * * * *", parser.getExpressionSummary());
+
+ parser.setCronExpression("5 10 * * 1,4,6");
+ assertEquals("5 10 * * 1,4,6", parser.getExpressionSummary());
+
+ parser.setCronExpression("0,5-20,4-15,24-27 0 * 2-4,5,6-3 *"); // Overlapping ranges, backwards ranges
+ assertEquals("0,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,24,25,26,27 0 * 2,3,4,5,6 *", parser.getExpressionSummary());
+ } catch (ParseException e) {
+ e.printStackTrace();
+ fail("Cron expression should have been valid: " + e);
+ }
+ }
+
+ /**
+ * Makes a call to the FastCronParser.
+ *
+     * @param dateStr The date string to use as the base date. The format must be
+     * "dd/MM/yyyy HH:mm".
+     * @param cronExpr The cron expression to test.
+     * @param result The expected result. This should be a date in the same format
+     * as dateStr.
+     * @param expectException Pass in true if the {@link FastCronParser} is
+     * expected to throw a ParseException.
+ */
+ private void cronCall(String dateStr, String cronExpr, String result, boolean expectException) {
+ SimpleDateFormat sdf = new SimpleDateFormat("dd/MM/yyyy HH:mm");
+ Date date = null;
+
+ try {
+ date = sdf.parse(dateStr);
+ } catch (ParseException e) {
+ fail("Failed to parse date " + dateStr + ". Please check your unit test code!");
+ }
+
+ Calendar calendar = Calendar.getInstance();
+ calendar.setTime(date);
+
+ long baseTime = calendar.getTimeInMillis();
+ FastCronParser parser = null;
+
+ try {
+ parser = new FastCronParser(cronExpr);
+
+ if (expectException) {
+ fail("Should have received a ParseException while parsing " + cronExpr);
+ }
+
+ long time = parser.getTimeBefore(baseTime);
+ assertEquals(result, sdf.format(new Date(time)));
+ } catch (ParseException e) {
+ if (!expectException) {
+ fail("Unexpected ParseException while parsing " + cronExpr + ": " + e);
+ }
+ }
+ }
+
+ /**
+     * Used by the benchmark. Runs count iterations of getTimeBefore() against
+     * the supplied cron expression, optionally re-parsing it each time, and
+     * returns the elapsed time in milliseconds.
+ */
+ private long iterate(String cronExpr, long baseTime, long time, int count, boolean withParse) throws ParseException {
+ long startTime = System.currentTimeMillis();
+
+ if (withParse) {
+ FastCronParser parser = new FastCronParser();
+
+ for (int i = 0; i < count; i++) {
+ parser.setCronExpression(cronExpr);
+ time = parser.getTimeBefore(baseTime);
+ }
+ } else {
+ FastCronParser parser = new FastCronParser(cronExpr);
+
+ for (int i = 0; i < count; i++) {
+ time = parser.getTimeBefore(baseTime);
+ }
+ }
+
+ long endTime = System.currentTimeMillis();
+ long duration = (endTime - startTime);
+ duration += (time - time); // Use the time variable to prevent it getting optimized away
+ return duration;
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/web/CheckDeployment.java b/src/test/java/com/opensymphony/oscache/web/CheckDeployment.java
new file mode 100644
index 0000000..8da2dc4
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/web/CheckDeployment.java
@@ -0,0 +1,46 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+
+import java.net.ConnectException;
+import java.net.URL;
+import java.net.URLConnection;
+
+/**
+ * User: hani
+ * Date: Jun 12, 2003
+ * Time: 3:34:20 PM
+ */
+public class CheckDeployment {
+ public static void main(String[] args) {
+ if (args.length == 0) {
+ throw new IllegalArgumentException("No url specified to check");
+ }
+
+ try {
+ if (!args[0].endsWith("/")) {
+ args[0] = args[0] + "/";
+ }
+
+ URL url = new URL(args[0] + "oscache.txt");
+ URLConnection c = url.openConnection();
+ c.getInputStream();
+ System.exit(0);
+ } catch (java.net.MalformedURLException e) {
+ System.out.println("Invalid url for oscache webapp:" + args[0]);
+ } catch (ConnectException ex) {
+ System.out.println("Error connecting to server at '" + args[0] + "', ensure that the webserver for the oscache example application is running");
+ } catch (FileNotFoundException e) {
+ System.out.println("Error connecting to webapp at '" + args[0] + "', ensure that the example-war app is deployed correctly at the specified url");
+ } catch (IOException e) {
+ System.out.println("Error connecting to webapp at '" + args[0] + "', ensure that the example-war app is deployed correctly at the specified url");
+ }
+
+ System.exit(1);
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/web/TestCompleteWeb.java b/src/test/java/com/opensymphony/oscache/web/TestCompleteWeb.java
new file mode 100644
index 0000000..7379df1
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/web/TestCompleteWeb.java
@@ -0,0 +1,55 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Test class for the com.opensymphony.oscache.web package.
+ * It invokes all the test suites of all the other classes of the package.
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestCompleteWeb extends TestCase {
+ /**
+ * Constructor for the osCache project main test program
+ */
+ public TestCompleteWeb(String str) {
+ super(str);
+ }
+
+ /**
+ * Main method which is called to perform the tests
+ *
+ * @param args Arguments received
+ */
+ public static void main(String[] args) {
+ // Run the test suite
+ junit.swingui.TestRunner testRunner = new junit.swingui.TestRunner();
+ testRunner.setLoading(false);
+
+ String[] args2 = {TestCompleteWeb.class.getName()};
+ testRunner.start(args2);
+ }
+
+ /**
+ * Test suite required to test this project
+ *
+ * @return suite The test suite
+ */
+ public static Test suite() {
+        // Add all the test suites of the project classes
+ TestSuite suite = new TestSuite("Test all osCache web");
+ suite.addTest(TestOscacheJsp.suite());
+ suite.addTest(TestOscacheServlet.suite());
+ suite.addTest(TestOscacheFilter.suite());
+
+ return suite;
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/web/TestLoadCompleteWeb.java b/src/test/java/com/opensymphony/oscache/web/TestLoadCompleteWeb.java
new file mode 100644
index 0000000..b3913ba
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/web/TestLoadCompleteWeb.java
@@ -0,0 +1,79 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web;
+
+import com.clarkware.junitperf.LoadTest;
+import com.clarkware.junitperf.RandomTimer;
+
+import junit.extensions.RepeatedTest;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Test class for the com.opensymphony.oscache.web package.
+ * It invokes all the test suites of all the other classes of the package.
+ * The test methods will be invoked with many users and iterations to simulate
+ * load on the web application
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestLoadCompleteWeb extends TestCase {
+ /**
+ * Constructor for the osCache Cache project main test program
+ */
+ public TestLoadCompleteWeb(String str) {
+ super(str);
+ }
+
+ /**
+ * Main method which is called to perform the tests
+ *
+ * @param args Arguments received
+ */
+ public static void main(String[] args) {
+ // Run the test suite
+ junit.swingui.TestRunner testRunner = new junit.swingui.TestRunner();
+ testRunner.setLoading(false);
+
+ String[] args2 = {TestLoadCompleteWeb.class.getName()};
+ testRunner.start(args2);
+ }
+
+ /**
+ * Test suite required to test this project
+ *
+ * @return suite The test suite
+ */
+ public static Test suite() {
+ final int clientThreads = 10; // Simulate 10 client threads
+ final int iterations = 20; // Simulate each user doing 20 iterations
+
+ TestSuite suite = new TestSuite("Test all osCache web");
+
+        // Ramp up a new thread every 300 ms (+-100 ms) until the total number of threads is reached
+ RandomTimer tm = new RandomTimer(300, 100);
+
+ // JSP
+ Test repeatedTest = new RepeatedTest(new TestOscacheJsp("testOscacheBasicForLoad"), iterations);
+ Test loadTest = new LoadTest(repeatedTest, clientThreads, tm);
+ suite.addTest(loadTest);
+
+ // Servlet
+ repeatedTest = new RepeatedTest(new TestOscacheServlet("testOscacheServletBasicForLoad"), iterations);
+ loadTest = new LoadTest(repeatedTest, clientThreads, tm);
+ suite.addTest(loadTest);
+
+ // Filter
+ repeatedTest = new RepeatedTest(new TestOscacheFilter("testOscacheFilterBasicForLoad"), iterations);
+ loadTest = new LoadTest(repeatedTest, clientThreads, tm);
+ suite.addTest(loadTest);
+
+ return suite;
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/web/TestOscacheFilter.java b/src/test/java/com/opensymphony/oscache/web/TestOscacheFilter.java
new file mode 100644
index 0000000..31f60ca
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/web/TestOscacheFilter.java
@@ -0,0 +1,215 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web;
+
+import com.meterware.httpunit.WebConversation;
+import com.meterware.httpunit.WebResponse;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Tests the caching filter distributed with the package.
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Chris Miller
+ */
+public final class TestOscacheFilter extends TestCase {
+ // The instance of a webconversation to invoke pages
+ WebConversation wc = null;
+ private final String BASE_PAGE = "filter/filterTest.jsp";
+
+ // Constants definition
+ private final String BASE_URL_SYSTEM_PRP = "test.web.baseURL";
+ private final String PARAM_1 = "abc=123";
+ private final String PARAM_2 = "xyz=321";
+ private final String SESSION_ID = "jsessionid=12345678";
+ // Constants definition to access OscacheServlet
+ private final String SERVLET_URL = "cacheServlet/?";
+ private final String FORCE_REFRESH = "forceRefresh=true&";
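+    // Requesting SERVLET_URL with forceRefresh=true is how flushCache() clears
+    // previously cached content between the filter tests.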
+
+
+ /**
+ * Constructor required by JUnit
+ *
+ * @param str Test name
+ */
+ public TestOscacheFilter(String str) {
+ super(str);
+ }
+
+ /**
+ * Returns the test suite for the test class
+ *
+ * @return Test suite for the class
+ */
+ public static Test suite() {
+ return new TestSuite(TestOscacheFilter.class);
+ }
+
+ /**
+     * Setup method called before each testXXXX method of the class
+ */
+ public void setUp() {
+ // Create a web conversation to invoke our filter
+ if (wc == null) {
+ wc = new WebConversation();
+ }
+ compileJSP(constructURL(BASE_PAGE));
+ }
+
+ /**
+ * Test the OSCache filter
+ */
+ public void testOscacheFilter() {
+ String baseUrl = constructURL(BASE_PAGE);
+
+ // Flush the cache to avoid getting refreshed content from previous tests
+ flushCache();
+
+ // Call the page for the second time
+ String stringResponse = invokeURL(baseUrl, 200);
+
+ // Connect again, we should have the same content
+ String newResponse = invokeURL(baseUrl, 0);
+ assertTrue("new response " + newResponse + " should be the same to " + stringResponse, stringResponse.equals(newResponse));
+
+ // Try again with a session ID this time. The session ID should get filtered
+ // out of the cache key so the content should be the same
+ newResponse = invokeURL(baseUrl + "?" + SESSION_ID, 200);
+ assertTrue("new response by a session id request " + newResponse + " should be the same to " + stringResponse, stringResponse.equals(newResponse));
+
+ // Connect again with extra params, the content should be different
+ newResponse = invokeURL(baseUrl + "?" + PARAM_1 + "&" + PARAM_2, 500);
+ assertFalse("new response " + newResponse + " expected it to be different to last one.", stringResponse.equals(newResponse));
+
+ stringResponse = newResponse;
+
+ // Connect again with the parameters in a different order. We should still
+ // get the same content.
+ newResponse = invokeURL(baseUrl + "?" + PARAM_2 + "&" + PARAM_1, 0);
+ assertTrue("order of parameters shouldn't change the response", stringResponse.equals(newResponse));
+
+ // Connect again with the same parameters, but throw the session ID into
+ // the mix again. The content should remain the same.
+ newResponse = invokeURL(baseUrl + "?" + SESSION_ID + "&" + PARAM_1 + "&" + PARAM_2, 0);
+ assertTrue("a session id shouldn't change the response either", stringResponse.equals(newResponse));
+ }
+
+ /**
+ * Test the OSCache filter with fast requests
+ */
+ public void testOSCacheFilterFast() {
+ String baseUrl = constructURL(BASE_PAGE);
+
+ for (int i = 0; i < 10; i++) {
+ // Flush the cache to avoid getting refreshed content from previous tests
+ flushCache();
+ // build the url
+ String url = baseUrl + "?i=" + i;
+ String response = invokeURL(url, 100);
+ for (int j = 0; j < 5; j++) {
+ String newResponse = invokeURL(url, 100);
+ assertTrue("Fast: new response (i="+i+",j="+j+") " + newResponse + " should be the same to " + response, response.equals(newResponse));
+ }
+ }
+ }
+
+ /**
+ * Test the cache module using a filter and basic load
+ */
+ public void testOscacheFilterBasicForLoad() {
+ String baseUrl = constructURL(BASE_PAGE);
+
+ for (int i = 0; i < 5; i++) {
+ String stringResponse = invokeURL(baseUrl, 0);
+
+ // Check we received something slightly sane
+ assertTrue(stringResponse.indexOf("Current Time") > 0);
+ }
+ }
+
+ /**
+ * Compile a JSP page by invoking it. We compile the page first to avoid
+ * the compilation delay when testing since the time is a crucial factor
+ *
+ * @param URL The JSP url to invoke
+ */
+ private void compileJSP(String URL) {
+ try {
+ // Invoke the URL
+ wc.getResponse(URL);
+ } catch (Exception ex) {
+ ex.printStackTrace();
+ fail("Exception raised!!");
+ }
+ }
+
+ /**
+     * Flushes the cache to avoid receiving content from previous tests
+ */
+ private void flushCache() {
+ String flushUrl = constructURL(SERVLET_URL + FORCE_REFRESH);
+
+ String stringResponse = invokeURL(flushUrl, 0);
+
+ assertTrue("Flushing the cache failed!", stringResponse.indexOf("This is some cache content") > 0);
+
+        // make sure the flush time is not equal to the last update time of a new entry
+ try {
+ Thread.sleep(5);
+ } catch (InterruptedException ignore) {
+ }
+ }
+
+ /**
+     * Reads the base URL from the test.web.baseURL system property and
+     * appends the given URL to it.
+     *
+     * @param url URL to append to the base.
+ * @return Complete URL
+ */
+ private String constructURL(String url) {
+ String base = System.getProperty(BASE_URL_SYSTEM_PRP);
+ String constructedUrl = null;
+
+ if (base != null) {
+ if (!base.endsWith("/")) {
+ base = base + "/";
+ }
+
+ constructedUrl = base + url;
+ } else {
+ fail("System property test.web.baseURL needs to be set to the proper server to use.");
+ }
+
+ return constructedUrl;
+ }
+
+ /**
+ * Utility method to request a URL and then sleep some time before returning
+ *
+ * @param url The URL of the page to invoke
+ * @param sleepTime The time to sleep before returning
+     * @return The text value of the response (HTML code)
+ */
+ private String invokeURL(String url, int sleepTime) {
+ try {
+ // Invoke the JSP and wait the specified sleepTime
+ WebResponse resp = wc.getResponse(url);
+ Thread.sleep(sleepTime);
+
+ return resp.getText();
+ } catch (Exception ex) {
+ ex.printStackTrace();
+ fail("Exception raised!!");
+
+ return null;
+ }
+ }
+
+}
diff --git a/src/test/java/com/opensymphony/oscache/web/TestOscacheJsp.java b/src/test/java/com/opensymphony/oscache/web/TestOscacheJsp.java
new file mode 100644
index 0000000..1aa5a71
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/web/TestOscacheJsp.java
@@ -0,0 +1,208 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web;
+
+import com.meterware.httpunit.WebConversation;
+import com.meterware.httpunit.WebResponse;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Tests the JSPs distributed with the package. It checks that the
+ * cache integration is working correctly.
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestOscacheJsp extends TestCase {
+ // The instance of a webconversation to invoke pages
+ WebConversation wc = null;
+ private final String APPLICATION_SCOPE = "scope=application&";
+
+ // Constants definition
+ private final String BASE_URL_SYSTEM_PRP = "test.web.baseURL";
+ private final String FIRST_PAGE = "oscacheTest.jsp?";
+ private final String FORCE_CACHE_USE = "forcecacheuse=yes&";
+ private final String FORCE_REFRESH = "refresh=true";
+ //private final String PAGE_SCOPE = "scope=page&";
+ //private final String REQUEST_SCOPE = "scope=request&";
+ private final String SECOND_PAGE = "oscacheTestMultipleTagNoKey.jsp?";
+ private final String SESSION_SCOPE = "scope=session&";
+ private final int CACHE_TAG_EXPIRATION = 2000;
+ private final int HALF_CACHE_TAG_EXPIRATION = CACHE_TAG_EXPIRATION / 2;
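+
+    // NOTE: these delays assume the cache tags in the test JSPs expire roughly every
+    // CACHE_TAG_EXPIRATION milliseconds (2 seconds): sleeping for half that period
+    // should return cached content, while sleeping for longer should force a refresh.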
+
+ /**
+ * Constructor required by JUnit
+ *
+ * @param str Test name
+ */
+ public TestOscacheJsp(String str) {
+ super(str);
+ }
+
+ /**
+ * Returns the test suite for the test class
+ *
+ * @return Test suite for the class
+ */
+ public static Test suite() {
+ return new TestSuite(TestOscacheJsp.class);
+ }
+
+ /**
+ * Setup method called before each testXXXX of the class
+ */
+ public void setUp() {
+ // Create a web conversation to invoke our JSP
+ if (wc == null) {
+ wc = new WebConversation();
+ }
+ }
+
+ /**
+ * Test the cache module under load
+ */
+ public void testOscacheBasicForLoad() {
+ String baseUrl = constructURL(FIRST_PAGE);
+
+ // Connect to the JSP using the application scope
+ String stringResponse = invokeJSP(baseUrl, CACHE_TAG_EXPIRATION);
+
+ // Assert that a page was properly generated.
+ // This does not ensure that the cache is working properly.
+        // It does, however, ensure that no exception or other unexpected problem occurred
+ assertTrue(stringResponse.indexOf("This is some cache content") > 0);
+
+        // Invoke the JSP page containing 2 cache tags
+ baseUrl = constructURL(SECOND_PAGE);
+
+ // Connect to the JSP using the application scope
+ stringResponse = invokeJSP(baseUrl, CACHE_TAG_EXPIRATION);
+
+ // Assert that a page was properly generated.
+ // This does not ensure that the cache is working properly.
+        // It does, however, ensure that no exception or other unexpected problem occurred
+ assertTrue(stringResponse.indexOf("This is some cache content") > 0);
+ }
+
+ /**
+ * Test the cache module using a JSP
+ */
+ public void testOscacheJsp() {
+ String baseUrl = constructURL(FIRST_PAGE);
+
+ // Connect to a session scope to allow the JSP compilation
+ compileJSP(baseUrl + SESSION_SCOPE);
+
+ // Connect to the JSP using the application scope
+ String stringResponse = invokeJSP(baseUrl, HALF_CACHE_TAG_EXPIRATION);
+
+        // Connect again; we should get the same content since it only expires
+        // every 2 seconds
+ assertTrue(stringResponse.equals(invokeJSP(baseUrl, HALF_CACHE_TAG_EXPIRATION)));
+
+ // Connect again, the content should be different
+ String newResponse = invokeJSP(baseUrl, CACHE_TAG_EXPIRATION + (CACHE_TAG_EXPIRATION / 4));
+ assertTrue(!stringResponse.equals(newResponse));
+ stringResponse = newResponse;
+
+ // Connect again, but request the cache content so no refresh should occur
+ assertTrue(stringResponse.equals(invokeJSP(baseUrl, FORCE_CACHE_USE, 0)));
+
+ // Connect again, the content should have changed
+ newResponse = invokeJSP(baseUrl, HALF_CACHE_TAG_EXPIRATION);
+ assertTrue(!stringResponse.equals(newResponse));
+ stringResponse = newResponse;
+
+ // Connect for the last time, force the cache
+ // refresh so the content should have changed
+ assertTrue(!stringResponse.equals(invokeJSP(baseUrl, FORCE_REFRESH, 0)));
+
+        // Invoke the JSP page containing 2 cache tags
+ baseUrl = constructURL(SECOND_PAGE);
+ compileJSP(baseUrl + SESSION_SCOPE);
+ stringResponse = invokeJSP(baseUrl, CACHE_TAG_EXPIRATION);
+
+        // Invoke the same page and check that the content is identical
+ assertTrue(stringResponse.equals(invokeJSP(baseUrl, CACHE_TAG_EXPIRATION)));
+ }
+
+ /**
+     * Compiles a JSP page by invoking it. The page is compiled up front to avoid
+     * the compilation delay during the tests, since timing is a crucial factor.
+ *
+ * @param URL The JSP url to invoke
+ */
+ private void compileJSP(String URL) {
+ try {
+ // Invoke the JSP
+ wc.getResponse(URL);
+ } catch (Exception ex) {
+ ex.printStackTrace();
+ fail("Exception raised!!");
+ }
+ }
+
+ /**
+     * Reads the base URL from the test.web.baseURL system property and
+     * appends the given URL to it.
+     *
+     * @param Url URL to append to the base.
+ * @return Complete URL
+ */
+ private String constructURL(String Url) {
+ String base = System.getProperty(BASE_URL_SYSTEM_PRP);
+ String constructedUrl = null;
+
+ if (base != null) {
+ if (!base.endsWith("/")) {
+ base = base + "/";
+ }
+
+ constructedUrl = base + Url;
+ } else {
+ fail("System property test.web.baseURL needs to be set to the proper server to use.");
+ }
+
+ return constructedUrl;
+ }
+
+ /**
+ * Utility method to invoke a JSP page and then sleep some time before returning
+ *
+ * @param baseUrl The URL of the JSP to invoke
+     * @param sleepTime The time to sleep before returning
+     * @return The text value of the response (HTML code)
+ */
+ private String invokeJSP(String baseUrl, int sleepTime) {
+ return invokeJSP(baseUrl, "", sleepTime);
+ }
+
+ /**
+ * Utility method to invoke a JSP page and then sleep some time before returning
+ *
+ * @param baseUrl The URL of the JSP to invoke
+ * @param URLparam The URL parameters of the JSP to invoke
+ * @param sleepTime The time to sleep before returning
+     * @return The text value of the response (HTML code)
+ */
+ private String invokeJSP(String baseUrl, String URLparam, int sleepTime) {
+ try {
+ // Invoke the JSP and wait the specified sleepTime
+ WebResponse resp = wc.getResponse(baseUrl + APPLICATION_SCOPE + URLparam);
+ Thread.sleep(sleepTime);
+
+ return resp.getText();
+ } catch (Exception ex) {
+ ex.printStackTrace();
+ fail("Exception raised!!");
+
+ return null;
+ }
+ }
+}
diff --git a/src/test/java/com/opensymphony/oscache/web/TestOscacheServlet.java b/src/test/java/com/opensymphony/oscache/web/TestOscacheServlet.java
new file mode 100644
index 0000000..da6870f
--- /dev/null
+++ b/src/test/java/com/opensymphony/oscache/web/TestOscacheServlet.java
@@ -0,0 +1,194 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web;
+
+import com.meterware.httpunit.WebConversation;
+import com.meterware.httpunit.WebResponse;
+
+import junit.framework.Test;
+import junit.framework.TestCase;
+import junit.framework.TestSuite;
+
+/**
+ * Tests the OscacheServlet distributed with the package. It checks that the
+ * cache integration is OK.
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Alain Bergevin
+ */
+public final class TestOscacheServlet extends TestCase {
+
+ // The instance of a webconversation to invoke pages
+ static WebConversation wc = null;
+ private final String APPLICATION_SCOPE = "scope=application&";
+
+ // Constants definition
+ private final String BASE_URL_SYSTEM_PRP = "test.web.baseURL";
+ private final String FORCE_CACHE_USE = "forcecacheuse=yes&";
+ private final String FORCE_REFRESH = "forceRefresh=true&";
+ private final String KEY = "key=ServletKeyItem&";
+ private final String REFRESH_PERIOD = "refreshPeriod=";
+ private final String SERVLET_URL = "/cacheServlet/?";
+ private final int NO_REFRESH_WANTED = 2000;
+ private final int REFRESH_WANTED = 0;
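+
+    // NOTE: the value is passed to the servlet as its refreshPeriod parameter (in seconds);
+    // 0 marks the cached entry as stale on every request, while 2000 keeps it fresh
+    // for the whole duration of the test.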
+
+ /**
+ * Constructor required by JUnit
+ *
+ * @param str Test name
+ */
+ public TestOscacheServlet(String str) {
+ super(str);
+ }
+
+ /**
+ * Returns the test suite for the test class
+ *
+ * @return Test suite for the class
+ */
+ public static Test suite() {
+ return new TestSuite(TestOscacheServlet.class);
+ }
+
+ /**
+ * This method is invoked before each testXXXX methods of the
+ * class. It set ups the variables required for each tests.
+ */
+ public void setUp() {
+ // Create a web conversation on first run
+ if (wc == null) {
+ wc = new WebConversation();
+ }
+ }
+
+ /**
+ * Test the cache module using a servlet
+ */
+ public void testOscacheServlet() {
+ // Make a first call just to initialize the servlet
+ String newResponse = invokeServlet(NO_REFRESH_WANTED);
+
+ // Connect to the servlet using the application scope
+ String previousReponse = invokeServlet(NO_REFRESH_WANTED);
+
+        // Call again and verify that the content hasn't changed
+        newResponse = invokeServlet(NO_REFRESH_WANTED);
+        assertTrue("new response " + newResponse + " should be the same as " + previousReponse, previousReponse.equals(newResponse));
+
+        // Call again and verify that the content is updated
+        newResponse = invokeServlet(REFRESH_WANTED);
+        assertFalse("new response " + newResponse + " expected it to be different from the last one.", previousReponse.equals(newResponse));
+ previousReponse = newResponse;
+
+        // Call with a zero refresh period so the content would normally be refreshed,
+        // but it will not be since we ask to use the item already in the cache
+        newResponse = invokeServlet(REFRESH_WANTED, FORCE_CACHE_USE);
+        assertTrue("new response " + newResponse + " should be the same as " + previousReponse, previousReponse.equals(newResponse));
+
+        // Call with a long refresh period so the item would not need a refresh,
+        // but we force one anyway
+        newResponse = invokeServlet(NO_REFRESH_WANTED, FORCE_REFRESH);
+        assertFalse("new response " + newResponse + " expected it to be different from the last one.", previousReponse.equals(newResponse));
+
+ // Verify that the cache key and the cache entry are present in the output and
+ // that their values are correct
+ assertTrue("response '" + previousReponse + "' does not contain oscache string", previousReponse.indexOf("oscache") != -1);
+
+ assertTrue("response '" + previousReponse + "' does not contain /Test_key string", previousReponse.indexOf("/Test_key") != -1);
+ }
+
+ /**
+ * Test the cache module using a servlet and basic load
+ */
+ public void testOscacheServletBasicForLoad() {
+ // Call Servlet
+ String stringResponse = invokeServlet(NO_REFRESH_WANTED);
+
+ // Assert that a page was properly generated.
+ // This does not ensure that the cache is working properly.
+        // It does, however, ensure that no exception or other unexpected problem occurred
+ assertTrue(stringResponse.indexOf("This is some cache content") > 0);
+
+ // Call again
+ stringResponse = invokeServlet(REFRESH_WANTED);
+
+        // Same check as above
+ assertTrue(stringResponse.indexOf("This is some cache content") > 0);
+
+ // Call again
+ stringResponse = invokeServlet(REFRESH_WANTED, FORCE_CACHE_USE);
+
+        // Same check as above
+ assertTrue(stringResponse.indexOf("This is some cache content") > 0);
+
+ // Call again
+ stringResponse = invokeServlet(NO_REFRESH_WANTED, FORCE_REFRESH);
+
+        // Same check as above
+ assertTrue(stringResponse.indexOf("This is some cache content") > 0);
+ }
+
+ /**
+     * Reads the base URL from the test.web.baseURL system property and
+     * appends the given URL to it.
+     *
+     * @param Url URL to append to the base.
+ * @return Complete URL
+ */
+ private String constructURL(String Url) {
+ String base = System.getProperty(BASE_URL_SYSTEM_PRP);
+ String constructedUrl = null;
+
+ if (base != null) {
+ if (base.endsWith("/")) {
+ base = base.substring(0, base.length() - 1);
+ }
+
+ constructedUrl = base + Url;
+ } else {
+ fail("System property test.web.baseURL needs to be set to the proper server to use.");
+ }
+
+ return constructedUrl;
+ }
+
+ /**
+ * Utility method to invoke a servlet
+ *
+     * @param refresh The refresh period (in seconds) used to decide whether the item needs a refresh
+ * @return The HTML page returned by the servlet
+ */
+ private String invokeServlet(int refresh) {
+ // Invoke the servlet
+ return invokeServlet(refresh, "");
+ }
+
+ /**
+ * Utility method to invoke a servlet
+ *
+     * @param refresh The refresh period (in seconds) used to decide whether the item needs a refresh
+     * @param URL Additional URL parameters to append to the servlet request
+ * @return The HTML page returned by the servlet
+ */
+ private String invokeServlet(int refresh, String URL) {
+        // Wait 10 ms so that System.currentTimeMillis() in OscacheServlet returns a different value
+ try {
+ Thread.sleep(10);
+ } catch (InterruptedException ignore) {
+ }
+
+ // Invoke the servlet
+ try {
+ String request = constructURL(SERVLET_URL) + APPLICATION_SCOPE + KEY + REFRESH_PERIOD + refresh + "&" + URL;
+ WebResponse resp = wc.getResponse(request);
+ return resp.getText();
+ } catch (Exception ex) {
+ ex.printStackTrace();
+ fail("Exception raised! " + ex.getMessage());
+ return "";
+ }
+ }
+}
diff --git a/src/test/java/oscacheDiskAndMemory.properties b/src/test/java/oscacheDiskAndMemory.properties
new file mode 100644
index 0000000..4facba1
--- /dev/null
+++ b/src/test/java/oscacheDiskAndMemory.properties
@@ -0,0 +1,11 @@
+# CACHE IN MEMORY
+cache.memory=true
+
+# CACHE SIZE
+cache.capacity=100
+
+# CACHE PERSISTENCE CLASS
+cache.persistence.class=com.opensymphony.oscache.plugins.diskpersistence.DiskPersistenceListener
+
+# CACHE DIRECTORY
+cache.path=/tmp/cachetagscache
diff --git a/src/test/java/oscacheDiskOnly.properties b/src/test/java/oscacheDiskOnly.properties
new file mode 100644
index 0000000..4f0e071
--- /dev/null
+++ b/src/test/java/oscacheDiskOnly.properties
@@ -0,0 +1,8 @@
+# CACHE IN MEMORY
+cache.memory=false
+
+# CACHE PERSISTENCE CLASS
+cache.persistence.class=com.opensymphony.oscache.plugins.diskpersistence.DiskPersistenceListener
+
+# CACHE DIRECTORY
+cache.path=/tmp/cachetagscache
diff --git a/src/test/java/oscacheDiskOnlyHash.properties b/src/test/java/oscacheDiskOnlyHash.properties
new file mode 100644
index 0000000..b0e8711
--- /dev/null
+++ b/src/test/java/oscacheDiskOnlyHash.properties
@@ -0,0 +1,8 @@
+# CACHE IN MEMORY
+cache.memory=false
+
+# CACHE PERSISTENCE CLASS
+cache.persistence.class=com.opensymphony.oscache.plugins.diskpersistence.HashDiskPersistenceListener
+
+# CACHE DIRECTORY
+cache.path=/tmp/cachetagscache
diff --git a/src/test/java/oscacheMemoryAndOverflowToDisk.properties b/src/test/java/oscacheMemoryAndOverflowToDisk.properties
new file mode 100644
index 0000000..6d7946e
--- /dev/null
+++ b/src/test/java/oscacheMemoryAndOverflowToDisk.properties
@@ -0,0 +1,11 @@
+# CACHE IN MEMORY
+cache.memory=true
+
+# CACHE PERSISTENCE CLASS
+cache.persistence.class=com.opensymphony.oscache.plugins.diskpersistence.DiskPersistenceListener
+
+# CACHE DIRECTORY
+cache.path=/tmp/cachetagscache
+
+# CACHE OVERFLOW
+cache.persistence.overflow.only=true
diff --git a/src/test/java/oscacheMemoryOnly.properties b/src/test/java/oscacheMemoryOnly.properties
new file mode 100644
index 0000000..26f54a5
--- /dev/null
+++ b/src/test/java/oscacheMemoryOnly.properties
@@ -0,0 +1,8 @@
+# CACHE IN MEMORY
+cache.memory=true
+
+# CACHE LISTENERS
+cache.event.listeners=com.opensymphony.oscache.extra.StatisticListenerImpl
+
+# CACHE SIZE
+cache.capacity=1000
\ No newline at end of file
diff --git a/src/webapp/WEB-INF/classes/com/opensymphony/oscache/web/OscacheServlet.java b/src/webapp/WEB-INF/classes/com/opensymphony/oscache/web/OscacheServlet.java
new file mode 100644
index 0000000..87829d0
--- /dev/null
+++ b/src/webapp/WEB-INF/classes/com/opensymphony/oscache/web/OscacheServlet.java
@@ -0,0 +1,130 @@
+/*
+ * Copyright (c) 2002-2003 by OpenSymphony
+ * All rights reserved.
+ */
+package com.opensymphony.oscache.web;
+
+import com.opensymphony.oscache.base.NeedsRefreshException;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+
+import javax.servlet.ServletConfig;
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServlet;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import javax.servlet.jsp.PageContext;
+
+/**
+ * Servlet used to test the web portion of OSCache. It performs the operations
+ * received as request parameters.
+ *
+ * $Id$
+ * @version $Revision$
+ * @author Francois Beauregard
+ * @author Alain Bergevin
+ */
+public class OscacheServlet extends HttpServlet {
+ /** Output content type */
+ private static final String CONTENT_TYPE = "text/html";
+
+ /** Clean up resources */
+ public void destroy() {
+ }
+
+ /**
+ * Process the HTTP Get request
+ *
+ * @param request The HTTP request
+ * @param response The servlet response
+ * @throws ServletException
+ * @throws IOException
+ */
+ public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
+ boolean varForceRefresh = false;
+ int refreshPeriod = 0;
+ int scope = PageContext.APPLICATION_SCOPE;
+ String forceCacheUse = null;
+ String key = null;
+
+ // Cache item
+ Long item;
+
+ // Get the admin
+ ServletCacheAdministrator admin = ServletCacheAdministrator.getInstance(getServletContext());
+
+ // Translate parameters
+ try {
+ String paramValue = request.getParameter("forceRefresh");
+
+ if ((paramValue != null) && (paramValue.length() > 0)) {
+ varForceRefresh = Boolean.valueOf(paramValue).booleanValue();
+ }
+
+ paramValue = request.getParameter("scope");
+
+ if ((paramValue != null) && (paramValue.length() > 0)) {
+ scope = getScope(paramValue);
+ }
+
+ paramValue = request.getParameter("refreshPeriod");
+
+ if ((paramValue != null) && (paramValue.length() > 0)) {
+ refreshPeriod = Integer.valueOf(paramValue).intValue();
+ }
+
+ forceCacheUse = request.getParameter("forcecacheuse");
+ key = request.getParameter("key");
+ } catch (Exception e) {
+ getServletContext().log("Error while retrieving the servlet parameters: " + e.toString());
+ }
+
+ // Check if all the items should be flushed
+ if (varForceRefresh) {
+ admin.flushAll();
+ }
+
+ try {
+ // Get the data from the cache
+ item = (Long) admin.getFromCache(scope, request, key, refreshPeriod);
+ } catch (NeedsRefreshException nre) {
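+            // Once getFromCache() has thrown NeedsRefreshException, this thread is responsible
+            // for the entry: it must either put new content in the cache or cancel the update,
+            // otherwise other threads may end up blocking on this entry.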
+ // Check if we want to force the use of an item already in cache
+ if ("yes".equals(forceCacheUse)) {
+ admin.cancelUpdate(scope, request, key);
+ item = (Long) nre.getCacheContent();
+ } else {
+ item = new Long(System.currentTimeMillis());
+ admin.putInCache(scope, request, key, item);
+ }
+ }
+
+ // Generate the output
+ response.setContentType(CONTENT_TYPE);
+
+ PrintWriter out = response.getWriter();
+ out.println("");
+ out.println("
OscacheServlet ");
+ out.println("");
+ out.println("This is some cache content : " + item.toString() + "
");
+ out.println("Cache key: " + admin.getCacheKey() + "
");
+ out.println("Entry key: " + admin.generateEntryKey("Test_key", request, scope) + "
");
+ out.println("");
+ }
+
+    /** Initialize global variables. */
+ public void init(ServletConfig config) throws ServletException {
+ super.init(config);
+ }
+
+ /**
+     * Returns the scope constant corresponding to its string name
+ */
+ private int getScope(String value) {
+ if ((value != null) && (value.equalsIgnoreCase("session"))) {
+ return PageContext.SESSION_SCOPE;
+ } else {
+ return PageContext.APPLICATION_SCOPE;
+ }
+ }
+}
diff --git a/src/webapp/WEB-INF/classes/oscache-cachefilter-disableCacheOnMethods.properties b/src/webapp/WEB-INF/classes/oscache-cachefilter-disableCacheOnMethods.properties
new file mode 100644
index 0000000..7656c37
--- /dev/null
+++ b/src/webapp/WEB-INF/classes/oscache-cachefilter-disableCacheOnMethods.properties
@@ -0,0 +1,8 @@
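+# NOTE: this file is referenced by the CacheFilter-disableCacheOnMethods filter
+# declared in src/webapp/WEB-INF/web.xml via its oscache-properties-file init-param.
+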
+# CACHE KEY
+cache.key=__oscache_cachefilter_disableCacheOnMethods
+
+# CACHE LISTENERS
+cache.event.listeners=com.opensymphony.oscache.extra.ScopeEventListenerImpl
+
+# CACHE SIZE
+cache.capacity=10
diff --git a/src/webapp/WEB-INF/classes/oscache.properties b/src/webapp/WEB-INF/classes/oscache.properties
new file mode 100644
index 0000000..5f3bf9b
--- /dev/null
+++ b/src/webapp/WEB-INF/classes/oscache.properties
@@ -0,0 +1,140 @@
+# CACHE IN MEMORY
+#
+# If you want to disable memory caching, just uncomment this line.
+#
+# cache.memory=false
+
+
+# CACHE KEY
+#
+# This is the key that will be used to store the cache in the application
+# and session scope.
+#
+# If you want to set the cache key to anything other than the default
+# uncomment this line and change the cache.key
+#
+# cache.key=__oscache_cache
+
+
+# USE HOST DOMAIN NAME IN KEY
+#
+# Servers for multiple host domains may wish to add host name info to
+# the generation of the key. If this is true, then uncomment the
+# following line.
+#
+# cache.use.host.domain.in.key=true
+
+
+# CACHE LISTENERS
+#
+# These hook OSCache events and perform various actions such as logging
+# cache hits and misses, or broadcasting to other cache instances across a cluster.
+# See the documentation for further information.
+#
+# cache.event.listeners=com.opensymphony.oscache.plugins.clustersupport.JMSBroadcastingListener, \
+# com.opensymphony.oscache.extra.CacheEntryEventListenerImpl, \
+# com.opensymphony.oscache.extra.CacheMapAccessEventListenerImpl, \
+# com.opensymphony.oscache.extra.ScopeEventListenerImpl
+
+
+# CACHE PERSISTENCE CLASS
+#
+# Specify the class to use for persistence. If you use the supplied DiskPersistenceListener,
+# don't forget to supply the cache.path property to specify the location of the cache
+# directory.
+#
+# If a persistence class is not specified, OSCache will use memory caching only.
+#
+# cache.persistence.class=com.opensymphony.oscache.plugins.diskpersistence.DiskPersistenceListener
+
+# CACHE OVERFLOW PERSISTENCE
+# Whether to use the persistent cache only when entries overflow from memory. The default value is false,
+# which means the persistent cache will be used at all times for every entry; true is the recommended setting.
+#
+# cache.persistence.overflow.only=true
+
+
+# CACHE DIRECTORY
+#
+# This is the directory on disk where caches will be stored by the DiskPersistenceListener.
+# It will be created if it doesn't already exist. Remember that OSCache must have
+# write permission to this directory.
+#
+# Note: for Windows machines, the backslash needs to be escaped,
+# i.e. Windows:
+# cache.path=c:\\myapp\\cache
+# or *ix:
+# cache.path=/opt/myapp/cache
+#
+# cache.path=c:\\app\\cache
+
+
+# CACHE ALGORITHM
+#
+# Default cache algorithm to use. Note that in order to use an algorithm
+# the cache size must also be specified. If the cache size is not specified,
+# the UnlimitedCache algorithm will be used.
+#
+# cache.algorithm=com.opensymphony.oscache.base.algorithm.LRUCache
+# cache.algorithm=com.opensymphony.oscache.base.algorithm.FIFOCache
+# cache.algorithm=com.opensymphony.oscache.base.algorithm.UnlimitedCache
+
+# THREAD BLOCKING BEHAVIOR
+#
+# When a request is made for a stale cache entry, it is possible that another thread is already
+# in the process of rebuilding that entry. This setting specifies how OSCache handles the
+# subsequent 'non-building' threads. The default behaviour (cache.blocking=false) is to serve
+# the old content to subsequent threads until the cache entry has been updated. This provides
+# the best performance (at the cost of serving slightly stale data). When blocking is enabled,
+# threads will instead block until the new cache entry is ready to be served. Once the new entry
+# is put in the cache the blocked threads will be restarted and given the new entry.
+# Note that even if blocking is disabled, when there is no stale data available to be served
+# threads will block until the data is added to the cache by the thread that is responsible
+# for building the data.
+#
+# cache.blocking=false
+
+
+# CACHE SIZE
+#
+# Default cache size in number of items. If a size is specified but not
+# an algorithm, the cache algorithm used will be LRUCache.
+#
+cache.capacity=1000
+
+
+# CACHE UNLIMITED DISK
+# Use unlimited disk cache or not. The default value is false, which means
+# the disk cache will be limited in size to the value specified by cache.capacity.
+#
+# cache.unlimited.disk=false
+
+
+# JMS CLUSTER PROPERTIES
+#
+# Configuration properties for JMS clustering. See the clustering documentation
+# for more information on these settings.
+#
+#cache.cluster.jms.topic.factory=java:comp/env/jms/TopicConnectionFactory
+#cache.cluster.jms.topic.name=java:comp/env/jms/OSCacheTopic
+#cache.cluster.jms.node.name=node1
+
+
+# JAVAGROUPS CLUSTER PROPERTIES
+#
+# Configuration properties for the JavaGroups clustering. Only one of these
+# should be specified. Default values (as shown below) will be used if neither
+# property is set. See the clustering documentation and the JavaGroups project
+# (www.javagroups.com) for more information on these settings.
+#
+#cache.cluster.properties=UDP(mcast_addr=231.12.21.132;mcast_port=45566;ip_ttl=32;\
+#mcast_send_buf_size=150000;mcast_recv_buf_size=80000):\
+#PING(timeout=2000;num_initial_members=3):\
+#MERGE2(min_interval=5000;max_interval=10000):\
+#FD_SOCK:VERIFY_SUSPECT(timeout=1500):\
+#pbcast.NAKACK(gc_lag=50;retransmit_timeout=300,600,1200,2400,4800;max_xmit_size=8192):\
+#UNICAST(timeout=300,600,1200,2400):\
+#pbcast.STABLE(desired_avg_gossip=20000):\
+#FRAG(frag_size=8096;down_thread=false;up_thread=false):\
+#pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=true)
+#cache.cluster.multicast.ip=231.12.21.132
diff --git a/src/webapp/WEB-INF/web.xml b/src/webapp/WEB-INF/web.xml
new file mode 100644
index 0000000..b3a47ef
--- /dev/null
+++ b/src/webapp/WEB-INF/web.xml
@@ -0,0 +1,60 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<web-app>
+
+  <display-name>OSCache</display-name>
+
+  <filter>
+    <filter-name>CacheFilter</filter-name>
+    <filter-class>com.opensymphony.oscache.web.filter.CacheFilter</filter-class>
+  </filter>
+
+  <filter>
+    <filter-name>CacheFilter-disableCacheOnMethods</filter-name>
+    <filter-class>com.opensymphony.oscache.web.filter.CacheFilter</filter-class>
+    <init-param>
+      <param-name>time</param-name>
+      <param-value>60</param-value>
+    </init-param>
+    <init-param>
+      <param-name>disableCacheOnMethods</param-name>
+      <param-value>POST,PUT,DELETE</param-value>
+    </init-param>
+    <init-param>
+      <param-name>oscache-properties-file</param-name>
+      <param-value>/oscache-cachefilter-disableCacheOnMethods.properties</param-value>
+    </init-param>
+  </filter>
+
+  <filter-mapping>
+    <filter-name>CacheFilter</filter-name>
+    <url-pattern>/filter/*</url-pattern>
+  </filter-mapping>
+
+  <filter-mapping>
+    <filter-name>CacheFilter-disableCacheOnMethods</filter-name>
+    <url-pattern>/filter2/*</url-pattern>
+  </filter-mapping>
+
+  <listener>
+    <listener-class>com.opensymphony.oscache.web.CacheContextListener</listener-class>
+  </listener>
+
+  <servlet>
+    <servlet-name>OSCacheServlet</servlet-name>
+    <servlet-class>com.opensymphony.oscache.web.OscacheServlet</servlet-class>
+    <load-on-startup>1</load-on-startup>
+  </servlet>
+
+  <servlet-mapping>
+    <servlet-name>OSCacheServlet</servlet-name>
+    <url-pattern>/cacheServlet/*</url-pattern>
+  </servlet-mapping>
+
+  <session-config>
+    <session-timeout>10</session-timeout>
+  </session-config>
+
+</web-app>
diff --git a/src/webapp/cachetest.jsp b/src/webapp/cachetest.jsp
new file mode 100644
index 0000000..fae276d
--- /dev/null
+++ b/src/webapp/cachetest.jsp
@@ -0,0 +1,65 @@
+<%@ page import="java.util.*" %>
+<%@ taglib uri="http://www.opensymphony.com/oscache" prefix="cache" %>
+
+<%
+String scope = "application";
+if (request.getParameter("scope") != null)
+{
+ scope = request.getParameter("scope");
+}
+%>
+
+Test Page
+
+
+
+
+Back to index
+
+
+
+<% Date start = new Date(); %> Start Time: <%= start %>
+
+ <%-- Note that we have to supply a cache key otherwise the 'refresh' parameter
+ causes the refreshed page to end up with a different cache key! --%>
+
+ Cache Time: <%= new Date() %>
+ <% try { %>
+ Inside try block.
+ <%
+ Thread.sleep(1000L); // Kill some time
+ if ((new Date()).getTime() % 5 == 0)
+ {
+ System.out.println("THROWING EXCEPTION....");
+ throw new Exception("ack!");
+ }
+ %>
+
+
+ <% }
+ catch (Exception e)
+ {
+ %>
+ Using cached content:
+ <%
+ }
+ %>
+
+
+End Time: <%= new Date() %>
+
+Running Time: <%= (new Date()).getTime() - start.getTime() %> ms.
+
\ No newline at end of file
diff --git a/src/webapp/cronTest.jsp b/src/webapp/cronTest.jsp
new file mode 100644
index 0000000..10d724d
--- /dev/null
+++ b/src/webapp/cronTest.jsp
@@ -0,0 +1,172 @@
+<%@ page import="java.util.*" %>
+<%@ taglib uri="http://www.opensymphony.com/oscache" prefix="cache" %>
+
+<%
+String scope = "application";
+if (request.getParameter("scope") != null)
+{
+ scope = request.getParameter("scope");
+}
+
+boolean refresh = false;
+if (request.getParameter("refresh") != null)
+{
+ refresh = true;
+}
+
+boolean forceCacheUse = false;
+if (request.getParameter("forceCacheUse") != null)
+{
+ forceCacheUse = true ;
+}
+%>
+
+Cron Test Page
+
+
+
+
+Back to index
+The cached content for the current day of the week should expire every minute.
+Try setting your system clock to a couple of minutes before midnight and watch what
+happens when you refresh the page as the day rolls over.
+
+Time this page was last refreshed: <%= new Date() %>
+
+
+
+ Cache Time: <%= new Date() %>
+
+ This is some cache content (expires according to the cron expression "* * * * Sunday")
+ <%
+ if (forceCacheUse)
+ {
+ %>
+
+ <%
+ }
+ %>
+
+
+
+
+ Cache Time: <%= new Date() %>
+
+ This is some cache content (expires according to the cron expression "* * * * Monday")
+ <%
+ if (forceCacheUse)
+ {
+ %>
+
+ <%
+ }
+ %>
+
+
+
+
+ Cache Time: <%= new Date() %>
+
+ This is some cache content (expires according to the cron expression "* * * * Tuesday")
+ <%
+ if (forceCacheUse)
+ {
+ %>
+
+ <%
+ }
+ %>
+
+
+
+
+ Cache Time: <%= new Date() %>
+
+ This is some cache content (expires according to the cron expression "* * * * Wednesday")
+ <%
+ if (forceCacheUse)
+ {
+ %>
+
+ <%
+ }
+ %>
+
+
+
+
+ Cache Time: <%= new Date() %>
+
+ This is some cache content (expires according to the cron expression "* * * * Thursday")
+ <%
+ if (forceCacheUse)
+ {
+ %>
+
+ <%
+ }
+ %>
+
+
+
+
+ Cache Time: <%= new Date() %>
+
+ This is some cache content (expires according to the cron expression "* * * * Friday")
+ <%
+ if (forceCacheUse)
+ {
+ %>
+
+ <%
+ }
+ %>
+
+
+
+
+ Cache Time: <%= new Date() %>
+
+ This is some cache content (expires according to the cron expression "* * * * Saturday")
+ <%
+ if (forceCacheUse)
+ {
+ %>
+
+ <%
+ }
+ %>
+
+
+
+