The Shibboleth IdP V4 software has reached its End of Life and is no longer supported. This documentation is available for historical purposes only. See the IDP5 wiki space for current documentation on the supported version.

StorageConfiguration

File(s): conf/global.xml, conf/idp.properties

Format: Native Spring

Overview

The IdP provides a number of general-purpose storage facilities that can be used by core subsystems like session management and consent. Broadly speaking, there are two kinds of storage plugins: client-side and server-side. Client-side plugins have the advantage of requiring no additional software or configuration and make clustering very robust and simple, but they only support a subset of use cases. Server-side plugins (aside from the simple case of storing data in memory) support all use cases, but require additional software and configuration, and usually create additional points of failure in a clustered deployment.

You may wish to review the SameSite topic, as it may have implications for your storage options and/or the need to address SameSite issues based on your chosen options.

The IdP ships with 3 preconfigured org.opensaml.storage.StorageService beans:

  • shibboleth.ClientSessionStorageService (of type ClientStorageService)

    • Stores data in a browser session cookie or HTML local storage

  • shibboleth.ClientPersistentStorageService (of type ClientStorageService)

    • Stores data in a long-lived browser cookie or HTML local storage

  • shibboleth.StorageService (of type MemoryStorageService)

    • Stores data in server memory, does not survive restarts and is not replicated across a cluster

There are additional storage service plugins included in the software (JPA, memcached) but they are not predefined. Using them requires defining beans yourself and setting various properties to point to them.

By default, the shibboleth.ClientSessionStorageService bean, which stores data in the client, is used to store IdP session data, but that can be modified via the idp.properties file:

Example changing IdP session storage
idp.session.StorageService = shibboleth.StorageService

There are additional properties that can be used to change how other data is stored on a per-use-case basis, but note that some components can't rely on client-side storage options; the more specific documentation for those features addresses this.
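For example, consent storage could be redirected to a server-side store by pointing the relevant property at a suitable bean. The bean ID below (shibboleth.JPAStorageService) is purely illustrative and would have to be defined by you, e.g. in global.xml:

Example changing consent storage (illustrative bean ID)
idp.consent.StorageService = shibboleth.JPAStorageService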

Reference

  • idp.storage.cleanupInterval (Duration, default PT10M): Interval of the background thread sweeping server-side storage for expired records

  • idp.storage.htmlLocalStorage (Boolean, default false): Whether to use HTML Local Storage (if available) instead of cookies

  • idp.storage.clientSessionStorageName (String, default shib_idp_session_ss): Name of cookie or HTML storage key used by the default per-session instance of the client storage service

  • idp.storage.clientPersistentStorageName (String, default shib_idp_persistent_ss): Name of cookie or HTML storage key used by the default persistent instance of the client storage service

  • idp.session.StorageService (Bean ID of a StorageService, default shibboleth.ClientSessionStorageService): Storage back-end to use for IdP sessions, authentication results, and optionally tracking of SP usage for logout

  • idp.consent.StorageService (Bean ID of a StorageService, default shibboleth.ClientPersistentStorageService): Storage back-end to use for consent and terms-of-use records

  • idp.replayCache.StorageService (Bean ID of a StorageService, default shibboleth.StorageService): Storage back-end to use for message replay checking (must be server-side)

  • idp.replayCache.strict (Boolean, default true): Whether storage errors during replay checks should be treated as a replay

  • idp.artifact.StorageService (Bean ID of a StorageService, default shibboleth.StorageService): Storage back-end to use for short-lived SAML Artifact mappings (must be server-side)

  • idp.cas.StorageService (Bean ID of a StorageService, default shibboleth.StorageService): Storage back-end to use for CAS ticket mappings (must be server-side)

The following beans (most of which are internal to the system) can be used in various properties to control which storage instances are used for specific purposes. You can also define your own beans (e.g., in global.xml).

  • shibboleth.StorageService (MemoryStorageService): Default server-side storage, stores data in memory

  • shibboleth.ClientSessionStorageService (ClientStorageService): Default client-side storage for per-session data, stores data in session cookies or HTML local storage

  • shibboleth.ClientPersistentStorageService (ClientStorageService): Default client-side storage for long-lived data, stores data in persistent cookies or HTML local storage

  • shibboleth.ClientStorageServices (List<StorageService>): Enumeration of the ClientStorageService plugins in use, ensures proper load/save of data

Storage Implementations

There are three functionally-complete implementations of the storage interface supplied with the software.

ClientStorageService

The ClientStorageService is an advanced, and highly recommended, option that includes support for HTML Local Storage along with cookies as a fallback or alternative.

Local Storage support is enabled by default for new installations, but note that it requires JavaScript be enabled, because reading and writing to the client requires an explicit page be rendered. When JavaScript is enabled, the additional page appears quickly as a short-lived interstitial with a message about the loading or saving of the data to the client. In practice, it has no material impact on the user experience.

This feature is controlled by the idp.storage.htmlLocalStorage property.
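For example, to enable it explicitly in conf/idp.properties (this is already the default for new installs):

Example enabling HTML Local Storage
idp.storage.htmlLocalStorage = true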

No configuration is required, but you may want to change the look and feel of the templates that are displayed to the client while data is being read or written. These pages don't require any user interaction as long as JavaScript is enabled, but they tend to be visible at least briefly, particularly the first time through. They're somewhat similar to the templates displayed when SAML messages are handed off to the browser.

Much of that look is obviously controlled by style sheets and message properties, but the "visible" portions are in views/client-storage (placed there so that your changes are not lost on upgrades).

As to why you would use Local Storage, there are really two main reasons:

  • Logout

  • Consent

The main reason for this feature is to enable the IdP's session manager to track and index the sessions created with SPs, information that does not fit reliably in a cookie. Without it, the single-logout feature is unusable with client-side sessions, since the IdP doesn't know which SPs to communicate with. Enabling Local Storage is necessary, but not sufficient, to allow at least some form of single logout to work without moving session storage to the server: you also need to ensure that two additional session management properties (idp.session.trackSPSessions and idp.session.secondaryServiceIndex) are enabled, as shown below; both are also on by default in new installs. There are two properties because the latter is more of a SAML-specific need that may not extend to other protocols in the future.
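A minimal sketch of the relevant idp.properties settings (these reflect the defaults for new installs; upgraded configurations may need to set them explicitly):

Example enabling SP session tracking for logout
idp.session.trackSPSessions = true
idp.session.secondaryServiceIndex = true
idp.storage.htmlLocalStorage = true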

The consent feature is very limited when cookies are used because the number of records it can store is extremely small. If Local Storage is available, that limit is essentially ignored. If you're comfortable with tracking consent per-device, this is a much more practical way to deploy it at most sites than with a database. Of course, many deployers are not comfortable with per-device consent, but those same deployers may become a lot more comfortable with it after enough database connection failures due to the nearly universally poor quality of JDBC networking code.

JPAStorageService

The JPA storage facility uses Hibernate ORM to search and persist records, with a relational database providing the storage. Example schemas are shown below.

Whatever you do, you MUST ensure the context and id columns are handled and compared case-sensitively. That is a requirement of the API that will be using the database, and it is frequently NOT the default behavior of databases such as MySQL.

MySQL
CREATE TABLE `StorageRecords` (
  `context` varchar(255) NOT NULL,
  `id` varchar(255) NOT NULL,
  `expires` bigint(20) DEFAULT NULL,
  `value` longtext NOT NULL,
  `version` bigint(20) NOT NULL,
  PRIMARY KEY (`context`,`id`)
);
PostgreSQL or H2
CREATE TABLE storagerecords (
  context varchar(255) NOT NULL,
  id varchar(255) NOT NULL,
  expires bigint DEFAULT NULL,
  value text NOT NULL,
  version bigint NOT NULL,
  PRIMARY KEY (context, id)
);
Oracle
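No Oracle example is reproduced here, but an equivalent schema might look like the following sketch (assuming VARCHAR2/NUMBER/CLOB column mappings; verify case-sensitive handling and adjust types to local conventions):

CREATE TABLE StorageRecords (
  context varchar2(255) NOT NULL,
  id varchar2(255) NOT NULL,
  expires number(19) DEFAULT NULL,
  value clob NOT NULL,
  version number(19) NOT NULL,
  PRIMARY KEY (context, id)
);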

To configure this service, you must provide Spring bean configuration for the JPAStorageService that includes the driver, URL, and credentials for your database. You are also required to provide a jar containing the JDBC driver for your particular database. In addition, we recommend the use of a DataSource that provides connection pooling, which may require installing an additional library as well.

The following libraries, among others, provide connection pooling functionality: HikariCP, Apache Commons DBCP, Tomcat DBCP2, and the Tomcat JDBC Pool.

The DB-specific examples below demonstrate the use of HikariCP (class="com.zaxxer.hikari.HikariDataSource", p:jdbcUrl="..." in the DataSource bean); a sketch of that pattern appears after this list. When using another connection pool implementation, change the class and properties appropriately, e.g.:

  • Apache Commons: class="org.apache.commons.dbcp.BasicDataSource", p:url="..."

  • Tomcat DBCP2: class="org.apache.tomcat.dbcp.dbcp2.BasicDataSource", p:url="..."

  • Tomcat JDBC Pool: class="org.apache.tomcat.jdbc.pool.DataSource", p:url="..."
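The following is a sketch of such a DataSource bean using HikariCP; the bean ID, driver, JDBC URL, credentials, and pool size are placeholders to adapt (the PostgreSQL driver is shown purely as an illustration):

Pooled DataSource sketch (HikariCP)
<!-- Sketch only: replace driver, URL, and credentials with your own values -->
<bean id="shibboleth.JPAStorageService.DataSource"
      class="com.zaxxer.hikari.HikariDataSource"
      p:driverClassName="org.postgresql.Driver"
      p:jdbcUrl="jdbc:postgresql://localhost:5432/shibboleth"
      p:username="shibboleth"
      p:password="CHANGEME"
      p:maximumPoolSize="20" />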

Installation

Place the driver jar and the connection pooling jar in edit-webapp/WEB-INF/lib, then execute bin/build.sh or bin/build.bat as appropriate for your environment.

The following configuration should be placed in conf/global.xml:

DB-independent Configuration
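The original example is not reproduced here; the following is a sketch of the general shape such a configuration takes, assuming Hibernate as the JPA provider, a separately defined DataSource bean (see the pooling sketch above), and illustrative bean IDs:

JPAStorageService sketch for conf/global.xml
<!-- Sketch only: a JPA-backed StorageService wired to an EntityManagerFactory -->
<bean id="shibboleth.JPAStorageService"
      class="org.opensaml.storage.impl.JPAStorageService"
      p:cleanupInterval="%{idp.storage.cleanupInterval:PT10M}">
    <constructor-arg ref="shibboleth.JPAStorageService.EntityManagerFactory" />
</bean>

<bean id="shibboleth.JPAStorageService.EntityManagerFactory"
      class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
      p:packagesToScan="org.opensaml.storage.impl"
      p:dataSource-ref="shibboleth.JPAStorageService.DataSource">
    <!-- Adjust the database setting to match your back-end -->
    <property name="jpaVendorAdapter">
        <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"
              p:generateDdl="false"
              p:database="POSTGRESQL" />
    </property>
</bean>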

The specific examples that follow should NOT be assumed to be functional; they are the product of different sources and varying amounts of testing (including none), and may not be current. Drivers are updated frequently, and JDBC and database bugs appear and disappear with regularity. When in doubt, grab a newer driver when problems appear.

Postgres Configuration
MySQL Configuration
Oracle Configuration
H2 Configuration

Further Configuration

The LocalContainerEntityManagerFactoryBean contains more configuration options and is designed to eliminate the need for a persistence.xml file. If you need to alter the database schema, you can deploy a custom mapping file that overrides column names and types.

ORM Mapping

Place your custom orm.xml file at edit-webapp/WEB-INF/classes/META-INF/orm.xml, then rebuild your war. While you can configure a custom name and path for this file, it must be located on your web application classpath; file system paths are not supported.

Postgres LOB Concerns

SWITCH identified an issue with the Postgres JDBC driver and the storage of LOBs related to the default mapping: deployers can experience data loss when the Postgres vacuumlo command is run. It is recommended that a custom orm.xml file be used to override the value type:

Postgres ORM
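The referenced mapping is not reproduced here; a sketch of such an override might look like the following, assuming the persisted attribute on org.opensaml.storage.impl.JPAStorageRecord is named value (confirm the details against the SWITCH documentation before relying on this):

Postgres ORM override sketch (orm.xml)
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch only: maps the value column to a plain text type instead of a LOB -->
<entity-mappings xmlns="http://xmlns.jcp.org/xml/ns/persistence/orm" version="2.1">
    <entity class="org.opensaml.storage.impl.JPAStorageRecord">
        <attributes>
            <basic name="value">
                <column name="value" nullable="false" column-definition="text"/>
            </basic>
        </attributes>
    </entity>
</entity-mappings>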

See the SWITCH installation docs for more details.

MemcachedStorageService

Requirements: memcached v1.4.14 or later

The memcached-based storage facility is built on the spymemcached library, which has a number of compelling features for HA deployments:

  • Optimized IO for high throughput

  • Memcached failover facility

  • Stable hashing algorithm supports memcached pool resizing

The failover facility merits further discussion. Failover is enabled by specifying multiple memcached hosts and failureMode="Redistribute". When the client encounters an unreachable host in redistribute mode, it temporarily removes the unavailable host from the pool of available hosts; any keys that hash to the unavailable host are written to or retrieved from a backup host. The high-level effect of this behavior on the IdP session management service is that a node failure causes the loss of some IdP sessions, which users would experience as an unexpected request to authenticate. The session management service itself, however, remains fully functional during a host failure/recovery event. Also note that this behavior requires no state sharing (i.e. repcache) between memcached nodes.

Bear in mind that different storage use cases have different failover properties. While the replay cache would be similarly unaffected, the artifact map failing to retrieve a previously stored artifact mapping would result in a failed login at the service to which the artifact was sent.

The suggested architecture for HA deployments that wish to use memcached is for every IdP node to run a memcached service, with the Java process running the IdP software connecting to the memcached service on every node.

The following configuration example assumes this architecture and should be placed in conf/global.xml.

MemcachedStorageService Configuration
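The original example is not reproduced here; the following sketch shows the general shape, assuming a constructor taking a spymemcached client plus an operation timeout in seconds, and the spymemcached MemcachedClientFactoryBean (host names, timeout, and bean ID are illustrative):

MemcachedStorageService sketch for conf/global.xml
<!-- Sketch only: memcached-backed StorageService using spymemcached -->
<bean id="shibboleth.MemcachedStorageService"
      class="org.opensaml.storage.impl.memcached.MemcachedStorageService">
    <!-- The memcached client, listing the memcached service on every IdP node -->
    <constructor-arg>
        <bean class="net.spy.memcached.spring.MemcachedClientFactoryBean"
              p:servers="idp-1.example.org:11211,idp-2.example.org:11211"
              p:protocol="BINARY"
              p:locatorType="CONSISTENT"
              p:failureMode="Redistribute"
              p:daemon="true" />
    </constructor-arg>
    <!-- Operation timeout in seconds -->
    <constructor-arg value="2" />
</bean>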

Once a MemcachedStorageService bean has been defined as above, it can be used with subsystems that require a StorageService component. The following configuration snippet from conf/idp.properties indicates how to use memcached for session storage.

Memcached for IdP Sessions
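Assuming the bean defined above was given the ID shibboleth.MemcachedStorageService:

idp.session.StorageService = shibboleth.MemcachedStorageService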