
This document is an informal input to the IdP V3 design discussions, not part of the process itself.  My suspicion is that to date most of the Shibboleth team have not spent much if any time with the MDA, and know little about it other than could be deduced from the name.  The purpose of this document is to give everyone enough understanding of the MDA code and the philosophy behind it to be able to make reasoned judgements about whether the MDA framework might be used as a component of the metadata handling design for the V3 IdP.

MDA Backgrounder

"Metadata Aggregator"

The name would tend to give you the impression that "the metadata aggregator" is:

  • about metadata

  • about aggregating

Unfortunately, the name is quite misleading and in fact neither of these impressions is really true.  The MDA product is really a generic processing framework for items of arbitrary data, which happens to provide features that are very useful for processing SAML 2.0 metadata, amongst other things.  It comes with a set of processing stages which can be used for building metadata aggregators (again, amongst other things) but is intended to be extended by the creation of new stage implementations.

Items

Data to be manipulated by the MDA framework is encapsulated by an Item<T>.  You can access the underlying object through an unwrap() method.

As well as the underlying object, implementations of Item also provide a bucket of item metadata.  This is used for things that you want to carry alongside the object without having to make provision for them within the object representation.  You access the bucket through a getItemMetadata() method which returns a map from a subclass of ItemMetadata to an append-only list of instances of that subclass.  This is obviously ideal for tracking what has happened to an Item, but can also be used to stash things like entity names, detected error conditions and the like.
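The shape of this API can be modelled in a few lines.  Here is a conceptual sketch in Python (the real framework is Java, and the class and method names here are only illustrative, not the actual net.shibboleth.metadata signatures):

```python
from collections import defaultdict

class Item:
    """Conceptual model of an MDA Item: a wrapped object plus an
    item-metadata bucket mapping a metadata class to an append-only
    list of instances of that class.  Illustration only."""
    def __init__(self, wrapped):
        self._wrapped = wrapped
        self._item_metadata = defaultdict(list)

    def unwrap(self):
        # Access the underlying wrapped object.
        return self._wrapped

    def add_item_metadata(self, meta):
        # Append-only: entries accumulate, recording the item's history.
        self._item_metadata[type(meta)].append(meta)

    def get_item_metadata(self, cls):
        # All recorded instances of the given metadata class (possibly none).
        return self._item_metadata[cls]
```

Anything that happens to the item (a detected error, an extracted entity name) can then simply be appended to the bucket without touching the wrapped object.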

One common pattern is to extract some critical information from the wrapped object into item metadata, then use that in subsequent processing stages.  As well as cache-like performance benefits, note that anything operating just on the item metadata can be agnostic about the type of the wrapped item.  For example, a stage which performs whitelisting or blacklisting of entities by name can be written to operate against names in the item metadata; such a stage will work on items representing metadata of any underlying type.
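To make the point concrete, here is a hypothetical sketch of such a blacklist stage in Python: it consults only the item-metadata bucket, never the wrapped object, so it would work unchanged for items of any underlying type.  All names here are illustrative, not the real MDA classes:

```python
class EntityName(str):
    """Illustrative item-metadata class recording an entity's name."""

class SimpleItem:
    """Minimal stand-in for an item: a wrapped object of any type,
    plus a metadata bucket holding extracted entity names."""
    def __init__(self, wrapped, names):
        self.wrapped = wrapped
        self.item_metadata = {EntityName: [EntityName(n) for n in names]}

def blacklist_by_name(items, blacklist):
    """Drop any item whose recorded entity name is on the blacklist.
    Note: the wrapped object's type is never inspected."""
    return [i for i in items
            if not any(n in blacklist for n in i.item_metadata[EntityName])]
```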

Stages and Pipelines

Manipulation of items is performed by stages, which are arranged into sequences in pipelines.

Each pipeline instance has a generic type Pipeline<ItemType extends Item<?>>, which is to say it is specialised to operate on items of a particular underlying type.  All you can do with a pipeline once constructed is to execute it on a Collection<ItemType>.  You can start out with a collection containing the item or items you want to process, or start with an empty collection and include stages at the start of the pipeline which gather the appropriate item collection.  You can even combine these approaches.

When the pipeline has finished executing, the result is in the same collection object.  If your pipeline was being executed for its side-effects (e.g., to write a serialized document into a file) then there might be no result as such.

Each stage in the pipeline is called in turn to process the collection.  Many stages simply iterate across the collection performing the same operation on each item, but it is also possible for stages to add or remove items from the collection, or to replace its contents with an entirely new collection.  For example, EntitiesDescriptorAssemblerStage forms a new item wrapping an <EntitiesDescriptor> whose contents are derived from the items in the collection.  At the end of the stage's execution, that new item replaces all of the original items in the collection.
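The execution model described above can be sketched in a few lines of Python.  This is a conceptual illustration, not the MDA API; the second stage is loosely modelled on the EntitiesDescriptorAssemblerStage behaviour just described:

```python
def execute_pipeline(stages, collection):
    """A pipeline is just an ordered sequence of stages, each handed
    the whole collection in turn; results end up in that same collection."""
    for stage in stages:
        stage(collection)
    return collection

def uppercase_each(collection):
    # Typical per-item stage: the same operation applied to every item.
    collection[:] = [s.upper() for s in collection]

def assemble(collection):
    # A collection-level stage: one new item, built from all the items,
    # replaces the entire contents of the collection.
    merged = "<EntitiesDescriptor>" + "".join(collection) + "</EntitiesDescriptor>"
    collection[:] = [merged]
```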

Pipeline stages can be written to invoke other pipelines:

  • Metadata aggregation can be performed by writing a pipeline for each metadata source; the "main" pipeline can then use PipelineMergeStage to invoke those source-specific pipelines and merge the results into the main pipeline's collection using a specified strategy.

  • Multiple output collections can be created from a single input collection by branching the main pipeline using PipelineDemultiplexerStage.

  • Structures approximating if-then-else constructs can be built using SplitMergeStage to break the collection into sub-collections by some criteria, execute a branch pipeline on each sub-collection and then combine the results.
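The merge pattern in the first bullet can be sketched as follows; this is a conceptual Python model of the PipelineMergeStage idea (the real stage and its merge strategies live in the Java codebase, and the names here are illustrative):

```python
def merge_stage(source_pipelines, strategy):
    """Build a stage that runs each source pipeline on a fresh, empty
    collection and folds the results into the main collection using a
    pluggable merge strategy."""
    def stage(collection):
        for pipeline in source_pipelines:
            source_items = []
            for s in pipeline:
                s(source_items)
            strategy(collection, source_items)
    return stage

def dedup_merge(target, new_items):
    # One possible strategy: the first occurrence of an item wins.
    for item in new_items:
        if item not in target:
            target.append(item)
```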

If you want to see an extreme example of this kind of thing, see my blog post about the UK federation metadata system.

Error Handling

Another important use of item metadata is in error handling.  The general pattern is for checking and error handling to be separated.  Stages which perform checking signal problems by adding instances of subclasses of StatusMetadata to an item's metadata: an error results in an ErrorStatus being added, a warning results in a WarningStatus, and so forth.  Handling any errors or warnings is then left to downstream stages or to the pipeline's caller, as appropriate.

Checks for document-global conditions such as signature validity tend to be performed early in a pipeline, and such errors normally result in processing being terminated using ItemMetadataTerminationStage.  Most checks, however, are performed on the metadata for individual entities, which means that a failure of something like schema validation for one entity need not affect the processing of other entities.  Instead, the entities with detected errors can be removed from the collection using ItemMetadataFilterStage.
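The check-then-filter separation can be sketched as two independent stages; this is a conceptual Python illustration (the stand-in "schema check" and the class names are invented for the example, not real MDA code):

```python
class ErrorStatus:
    """Illustrative status metadata recorded by a checking stage."""
    def __init__(self, message):
        self.message = message

def schema_check_stage(collection):
    # Checking stage: only *records* problems, touching nothing else.
    # (Stand-in check: flag items that do not even look like XML.)
    for item in collection:
        if not item["dom"].startswith("<"):
            item["metadata"].append(ErrorStatus("does not look like XML"))

def filter_errored_stage(collection):
    # Separate handling stage: drop any item carrying an ErrorStatus.
    collection[:] = [i for i in collection
                     if not any(isinstance(m, ErrorStatus) for m in i["metadata"])]
```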

Even logging of conditions is configurable, through StatusMetadataLoggingStage.

Application to the IdP

Here are my personal opinions on some of the design issues for IdP V3 as they relate to possible use of the aggregator framework.

Engine, not Provider

It's clear that the metadata aggregator framework in its current form can't just be plugged in to the IdP as a metadata resolver or provider in general.  However, a lot of the things that need to be done by such a provider are available as stages within the existing codebase: signature verification, validUntil processing, entity whitelisting and blacklisting, and so on.

I think the most obvious way of making use of the aggregator framework in the IdP would be to use it as the configurable processing engine within each metadata provider.  The provider would be responsible for acquiring the initial DOM document (in most cases) but would hand off the processing of that document to an MDA pipeline.  The result of that pipeline would be a collection of the entities represented, with appropriate item metadata annotations for things like entity name and cache durations.  That collection of entity representations in DOM form would probably need to be indexed by the provider to allow querying at least by entity name, and conversion to XMLObject structures representing the entity would only be necessary for those entities actually queried in such a way.  The provider would also need to be responsible for requesting a new document and running the pipeline again if any of the active entity representations expired.  For this and other purposes, the provider would need to pick selected item metadata out of the item during conversion; I think some of that (such as global trust root contexts) might need to be available to callers as well (see the hierarchy section below).

I think Chad and I saw this kind of pattern as useful because the MDA framework was designed to be extremely extensible.  It's relatively easy to gin up something, for example, that blacklists any entity with an entityID containing "http://iay.org.uk" (e.g., using an XPathFilteringStage).  Similarly, inserting a fixed Irish flag logo into any entity whose MDRPI says it's from the Irish registrar but which doesn't already have an MDUI logo defined is a pretty simple application of XSLTransformationStage.
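The effect of that kind of entityID blacklist can be sketched without a real XPath engine; the following Python fragment is only an illustration of the behaviour (the real stage would evaluate a configured XPath expression against each item's DOM):

```python
import xml.etree.ElementTree as ET

def blacklist_entity_id_containing(collection, fragment):
    """Drop any entity whose @entityID contains the given substring.
    Items here are XML strings for simplicity of illustration."""
    def entity_id(xml_text):
        return ET.fromstring(xml_text).get("entityID", "")
    collection[:] = [x for x in collection if fragment not in entity_id(x)]
```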

I might even have some hope that, because the pluggability can be at the level of quick XSLT hacks in many cases, we would see some community contributions for common problems.

No, you caught me, I'm joking.

Hierarchy

The aggregator framework is based around processing collections, with the assumption that each member of such a collection will normally be the metadata for one entity.  The implementation of SAML 2.0 processing in terms of Item<Element> with DOM documents for each item means that handling more structured metadata involving <EntitiesDescriptor> is possible, but I don't believe it is in general advisable where it can be avoided.

One big influence on my view here is that I think the future needs to see everything moving away from aggregates in general, towards the metadata query protocol and similar mechanisms in which entities are much more stand-alone descriptions.  I don't think the hierarchy has been particularly useful in the past beyond a single level, and indeed nested <EntitiesDescriptor> elements seem to be handled incorrectly by almost all software other than Shibboleth and simpleSAMLphp.  I certainly don't think we should be looking for new uses for document hierarchies at this point.

Even outside the context of the use of the MDA framework, as a general thing I think any API we have which depends on EntitiesDescriptor should be regarded as suspect.  If we're crawling the object tree outside EntityDescriptor, iterating across the contents of an EntitiesDescriptor or even synthesising an EntitiesDescriptor as a way of saying Collection<EntityDescriptor>, I think we should look at alternatives.  If we can move away from seeing EntitiesDescriptor as a first class entity, we'll be improving things for the future.

The normal pattern of use with the MDA framework is to take in a document which may contain any structure of <EntitiesDescriptor> elements, perform validation of the signature and other document-level attributes such as validUntil, and then reduce the single Item to a flat collection of items representing the individual <EntityDescriptor> elements.  This is a lossy transformation in two ways: the explicit structure is discarded, and some specific information attached to that structure (the @Name attributes and any extensions attached to the <EntitiesDescriptor> elements) is lost as well.

Obviously we should always be nervous about any lossy transformation, but in this case I suggest that we need to look not at whether the transformation is lossy, but at what use we have for the information that is being lost.  If we do that, we can work to retain the uses we care about while moving away from the need to preserve the hierarchical structure as such.  I do not believe that there is any need to be able to reconstruct the hierarchy in which the information originally arrived.

In general, we would do this by attaching the information supporting the use case to the individual entity items as item metadata.  This can be done either as we decompose the <EntitiesDescriptor> elements, or in an explicit stage that amalgamates such information and "pushes down" to the individual entities' DOM nodes just before that decomposition.  The latter is only an option for things that can be represented as SAML 2.0 metadata within an <EntityDescriptor>, however, and in general I think the former is the superior option.

Discarding the <EntitiesDescriptor> elements means that a metadata resolver's clients can't walk the tree up from the returned EntityDescriptor object to find contextual information.  Instead, I'd suggest that we either need to make appropriate contextual information available as part of EntityDescriptor (derived not from the DOM content of the item but from the item metadata) or have the metadata resolver return an object which wraps both an EntityDescriptor (derived from the DOM content of the item) and the contextual information (derived from the item metadata).  I'd personally probably lean towards the latter, as it starts to break the assumption that all metadata is necessarily based around SAML 2.0 <EntityDescriptor>s.

I think we're really only talking about two use cases in practice: the possibly hierarchical attachment of trust root information through <KeyAuthority> extensions, and attribute release to groups of entities named by the <EntitiesDescriptor> elements they are hierarchically contained within.

Trust Anchors

The V2 IdP makes use of trust roots made available as <KeyAuthority> extensions within role descriptors, entity descriptors, and all enclosing <EntitiesDescriptor>s up to the document root.  In practice, I suspect that almost everyone who runs PKIX just has one blob of trust root information at the top level, and that almost no-one uses trust roots within any nested scopes.  Such levels of sophistication are pretty much guaranteed to be Shibboleth only, for one thing.

What I'd suggest is converting each <KeyAuthority> (or perhaps each <KeyAuthority>/<KeyInfo>) found as an extension into ItemMetadata, and appending all that are in scope for a particular entity to its item metadata bucket.  I don't believe that the ordering of such trust roots within the <EntitiesDescriptor> tree affects the semantics, as the PKIX trust engine will try each one in turn until one works; I'd note, though, that it would be relatively straightforward to preserve the order in which they appear as one descends or ascends the tree, as the item metadata bucket values are ordered lists.
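The push-down during decomposition might look something like this; the tree representation and names below are invented purely for illustration, but the sketch shows how ancestor ordering (outermost first) is naturally preserved in the per-entity lists:

```python
def decompose(node, inherited=()):
    """Walk an <EntitiesDescriptor>-like tree, yielding one record per
    entity with every trust anchor in scope for it, outermost first."""
    anchors = tuple(inherited) + tuple(node.get("key_authorities", ()))
    for entity in node.get("entities", ()):
        # Each entity gets its own ordered copy of the in-scope anchors.
        yield {"entity": entity, "trust_anchors": list(anchors)}
    for child in node.get("children", ()):
        yield from decompose(child, anchors)
```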

Group Membership

The V2 IdP has rather funky relying party semantics, which we were planning on changing to some extent anyway.  In V2, you can designate a relying party by name and that will match against:

  • The @entityID of an entity,

  • The @Name of any <EntitiesDescriptor> hierarchically containing an entity.

In each case, said entity will be regarded as matching the name.  These are really two separate mechanisms, and as I recall we had already decided that they should be separate match functions, with the @entityID one being the normal case and a new matching function being available to get the @Name-based behaviour.

As with trust anchors, I'd propose that we collect @Name values as we decompose any <EntitiesDescriptor> and attach those as ItemMetadata to each entity in scope of each name.  As with the trust anchor case, I don't believe that ordering is significant (a value will match or not match irrespective of the ordering of the names as one ascends or descends the tree) but we can preserve the ordering if anyone can think of a counter-example.
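Sketched in the same illustrative style as above (the tree shape and names are invented for the example): once every enclosing @Name has been flattened onto each entity during decomposition, the "is this entity in group G?" question is answered from that flat set alone, with no hierarchy left in sight:

```python
def collect_group_names(node, inherited=frozenset()):
    """Walk an <EntitiesDescriptor>-like tree, yielding each entity
    together with the @Name of every group enclosing it."""
    names = inherited | ({node["name"]} if node.get("name") else set())
    for entity in node.get("entities", ()):
        yield entity, names
    for child in node.get("children", ()):
        yield from collect_group_names(child, names)

def in_group(memberships, entity, group):
    # Membership is now a flat lookup; the hierarchy is gone.
    return group in memberships.get(entity, set())
```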

I've called this section "group membership" because this second part of the V2 behaviour is conceptually really about an entity's membership in a named group; the hierarchy is just the way that membership in the group is expressed.  (It's also clearly a pretty bad way of expressing flexible group membership, as you're constrained to a set of groups which can be expressed as a hierarchy.)

I think entity group membership is an important concept, not particularly restricted to SAML metadata and certainly not in principle dependent on being designated indirectly through an <EntitiesDescriptor> hierarchy (you might acquire group membership information from a different source than the actual entity descriptions).  I'd therefore see it as particularly positive if this kind of entity context information was raised up a level from being part of EntityDescriptor.

It may also be worth thinking more in general about entity group membership.  I think there are a couple of different use cases for the concept.

The first use case arises in both the IdP and the SP, where you are looking at an entity for which the metadata is available, and want to know whether it's a member of a named group for policy reasons.  This part of the puzzle seems like a question you can ask an EntityDescriptor.

The second use case is probably restricted to the SP, when you might want to iterate across the members of a named group for discovery purposes.  You probably want the actual metadata for all the group members in this case, so in the IdP V3 API this would look like having an additional criterion type for metadata resolvers.

What I'm wondering is whether there is a third use case which requires one to be able to resolve an entity group name to a list of entity names, by which I mean that the metadata for each entity is not what one would need at the time.  There doesn't seem to be a place to put such an API as things stand, although some of the mechanisms one can think of for designating group membership (e.g., <AffiliationDescriptor>) happen to work that way.
