This storage service is a drop-in replacement for the standard Shibboleth IdP storage service. It provides session persistence and sharing across multiple servers by persisting session data in a database. It is designed to continue operating even when the database server is unavailable; in that case, the user is simply asked to log in again if their session does not exist on the node servicing the request. All session-related data must be serializable by the JVM. The storage service does not persist login context information to the database, so you must use session affinity during the login process to keep the user's session on a single server in the cluster.
The service uses Hibernate as its data access layer. It has been tested with MySQL and Oracle, but should work with other RDBMS products. You may need to modify the included index and trigger statements, which are written for MySQL. The table structure itself is generated during the Maven build as target/db-storage-service-tables.sql; the default settings are for MySQL 5.x. If you need another RDBMS, edit the src/main/conf/hibernate.properties file to reflect the proper dialect. You may configure Hibernate database access at any level that works with your container: Hibernate searches the class path for a hibernate.properties file to decide how it accesses the database. That properties file can contain the direct database access information (server, user, password) or can be configured to use a container-managed connection. Refer to the Hibernate website for detailed information on configuring Hibernate.
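As an illustration, a minimal hibernate.properties for a direct MySQL connection might look like the following; the host, database name, and credentials below are placeholders, not values from the project:

```properties
# Direct JDBC connection details (all values illustrative)
hibernate.connection.driver_class=com.mysql.jdbc.Driver
hibernate.connection.url=jdbc:mysql://dbserver.example.org:3306/shibboleth
hibernate.connection.username=idp
hibernate.connection.password=changeit
# The dialect must match your RDBMS; the default build targets MySQL 5.x
hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
```

For another RDBMS, swap the driver class, JDBC URL, and dialect accordingly (for example, an Oracle dialect for Oracle).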
Only use of the storage service API is replaced by this service. Artifact resolution and attribute query resolution for a transient identifier, regardless of SAML version, rely on a separate "session cache" API and are not made cluster-safe simply by using this storage service.
The complete project is available from BitBucket Code Project.
- Download the project code.
- At the command line, use mvn package to build the JAR and SQL.
- Use the db-storage-service-tables.sql file to create the table in your RDBMS.
- Use the src/main/conf/indexes.sql and src/main/conf/triggers.sql files to add indexes and triggers to the table, respectively. Note that these are MySQL-specific. There is also a contributed oracle.sql file there, which should create the proper Oracle versions of the tables/indexes/triggers.
- Add a hibernate.properties file to your web container somewhere on the class path. There is an example in the conf directory. It may contain the direct information to talk to the database or point to a container-managed connection; refer to the Hibernate website for more information. If you are using Apache Tomcat, a good place to put this is $TOMCAT_HOME/lib.
- Place the jar file db-storage-service-x.y.z.jar in the Shibboleth installation lib directory. This is the subdirectory of the location containing the install.sh/install.bat files that build the Shibboleth WAR file. You will also need the Hibernate and related JARs referenced by the project in the Shibboleth installation lib directory. Finally, you must have a database driver for your RDBMS installed in the web container. Ensure that no spurious or old JARs (SLF4J is the most likely offender) are in the class path or WAR file.
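If you prefer the container-managed connection mentioned above, hibernate.properties can instead point Hibernate at a JNDI data source defined in your web container; the JNDI name below is an assumption for illustration:

```properties
# Container-managed connection via a JNDI data source (name is illustrative)
hibernate.connection.datasource=java:comp/env/jdbc/shibboleth
hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
```

With this approach, the container owns pooling and credentials, and the properties file carries no passwords.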
Modify the Shibboleth web.xml file to load the filter for the storage service:
<!-- Add filter for storage service -->
<!-- DB version -->
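As a sketch of what the filter declaration looks like, the snippet below follows the standard servlet filter pattern. The filter class name shown is an assumption for illustration only; use the actual filter class shipped with the project:

```xml
<!-- Add filter for storage service -->
<!-- DB version; the filter-class below is illustrative, not the project's actual class name -->
<filter>
    <filter-name>DbStorageServiceFilter</filter-name>
    <filter-class>net.clareitysecurity.shibboleth.storage.DbStorageServiceFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>DbStorageServiceFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```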
Modify your Shibboleth internal.xml to load the new storage service:
<!-- The Clareity DB based storage service. You must disable all other storage services to use this one -->
<bean id="shibboleth.StorageService" class="net.clareitysecurity.shibboleth.storage.DbStorageService">
    <constructor-arg value="shibboleth" /> <!-- this value separates systems -->
    <!-- optional argument to not run the storage cleanup thread
    <constructor-arg value="false" />
    -->
</bean>
- Rebuild and redeploy your Shibboleth WAR file.
Optional. To get logging messages from the storage service, add a logger to your logging.xml file. The logger category shown here (the storage service package) is illustrative:
<logger name="net.clareitysecurity.shibboleth.storage">
    <level value="DEBUG" />
    <appender-ref ref="IDP_PROCESS" />
</logger>
A few additional notes:
- The constructor argument for the storage service separates session management into logical divisions. If you run multiple unrelated IdP instances, you can give each a different value here to keep their sessions separate: an instance using "foo" as its key will not share sessions with an instance using "bar". You may also simply point them at different databases.
- If an IdP is stopped, any sessions in memory on that IdP are lost and can be orphaned in the database table. To clean up these orphaned rows, the storage service starts a cleanup thread. In some web containers (JBoss, for example), that thread is not allowed access to the database. To prevent the logging of an ERROR in those situations, enable the second constructor argument in internal.xml and set it to false.
- The filter is necessary for the storage service to know that it needs to persist updated data to the database.
- The service assumes all session data is Serializable. If your setup has data that is not, this will likely fail in strange and interesting ways.
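To make the serializability requirement concrete, the sketch below round-trips a hypothetical custom session attribute through standard Java serialization, the same JVM mechanism the storage service relies on to persist session data. The CustomPrincipal class is invented for illustration and is not part of the project:

```java
import java.io.*;

// Illustration only: any custom object stored in the IdP session must
// implement java.io.Serializable (and all of its fields must be
// serializable too), or the DB storage service cannot persist it.
public class SessionDataCheck {

    // Hypothetical session attribute class, not part of the project.
    static class CustomPrincipal implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        CustomPrincipal(String name) { this.name = name; }
    }

    // Round-trip an object through Java serialization, analogous to what
    // must happen when session data is written to and read from the database.
    static Object roundTrip(Object o) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        CustomPrincipal p = (CustomPrincipal) roundTrip(new CustomPrincipal("alice"));
        System.out.println("round-trip name: " + p.name);
    }
}
```

If any object in the session graph does not implement Serializable, the write to the database will throw a NotSerializableException rather than fail silently.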
Please direct all usage/support questions to the Shibboleth Users mailing list.
This code has been contributed under the Apache 2.0 license by Clareity Security.