The Shibboleth V2 IdP and SP software have reached End of Life and are no longer supported. This documentation is available for historical purposes only. See the IDP v4 and SP v3 wiki spaces for current documentation on the supported versions.

IdPLinuxNonRoot

Configuring Linux To Run a Servlet Container as Non-Root

Running a servlet container without Apache in front usually requires special setup when it needs to bind to ports < 1024 while still running as a non-root user (unless you are using Debian or Ubuntu). Some containers include tools to assist with this, the Linux kernel has features that can enable it, and another option is to rely on port mapping.

Port mapping

In this approach the Java servlet container listens on a high port and the local packet filter is used to forward requests from standard ports (80, 443) to those high ports.

One caution regarding this approach: it will cause your IdP to fail if the port mapping software is stopped. Normally stopping a firewall doesn't prevent existing services from running, but this approach changes that. Take care that any administrative staff are well aware of this change.
One way to deal with this issue is generating the iptables (or firewalld) rules dynamically on each start of the container, which is easy with systemd. See also below (POSIX capabilities) for other examples using this technique.
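As a sketch of that technique, assuming the container runs as the tomcat.service systemd unit and uses the same addresses and ports as the example rules below, a systemd drop-in file (the file name here is hypothetical) could add the DNAT rules on each start and remove them again on stop:

```ini
# /etc/systemd/system/tomcat.service.d/portmap.conf (hypothetical name)
[Service]
# ExecStartPre/ExecStopPost run as root when PermissionsStartOnly is set,
# even though the service itself runs as the tomcat user
PermissionsStartOnly=true
ExecStartPre=/sbin/iptables -t nat -A PREROUTING -p tcp -m tcp --dport 80  -j DNAT --to-destination 10.0.0.10:8080
ExecStartPre=/sbin/iptables -t nat -A PREROUTING -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.0.0.10:8443
# "-" prefix: ignore errors if the rules are already gone on stop
ExecStopPost=-/sbin/iptables -t nat -D PREROUTING -p tcp -m tcp --dport 80  -j DNAT --to-destination 10.0.0.10:8080
ExecStopPost=-/sbin/iptables -t nat -D PREROUTING -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.0.0.10:8443
```

Run systemctl daemon-reload after creating the drop-in so systemd picks it up.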

Requirements

  • Linux kernel that supports iptables and NAT
  • IP addresses and port numbers of the servlet listeners

Configuration Changes

  • For non-Red Hat Linux installations modify /etc/rc.d/rc.local to include the following lines:

    /sbin/iptables -t nat -A PREROUTING -p tcp -m tcp --dport 80  -j DNAT --to-destination 10.0.0.10:8080
    /sbin/iptables -t nat -A PREROUTING -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.0.0.10:8443
    
  • For Red Hat Linux installations using iptables (Red Hat 6 and earlier by default) modify the nat section of the /etc/sysconfig/iptables to include the following lines:

    *nat
    :PREROUTING ACCEPT [4332:289016]
    :POSTROUTING ACCEPT [43:2689]
    :OUTPUT ACCEPT [43:2689]
    -A PREROUTING -p tcp -m tcp --dport 80  -j DNAT --to-destination 10.0.0.10:8080
    -A PREROUTING -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.0.0.10:8443
    COMMIT
    

    Note the changes are only the addition of the DNAT lines in the nat section.

  • For Red Hat Linux installations using firewalld (Red Hat 7 and later by default, unless you specifically switched back to iptables), issue the following commands as root:

    firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toport=8080 --permanent
    firewall-cmd --zone=public --add-forward-port=port=443:proto=tcp:toport=8443 --permanent
    firewall-cmd --zone=public --add-port=80/tcp --permanent
    firewall-cmd --zone=public --add-port=443/tcp --permanent
    firewall-cmd --reload
  • Add iptables rules to non-Red Hat Linux installations by running the iptables commands by hand.
  • Restart iptables on Red Hat with the /etc/init.d/iptables script.

Authbind

Debian, Ubuntu and derivatives come with the authbind utility and integrate it with their Tomcat packages by default (it only needs to be enabled). Others can download and build the software from source code (or try an unofficial RPM spec file), of course, but that also requires changed Tomcat startup scripts. The latest changes to authbind were made in 2012 (as of August 2016), so it's not a particularly fast-moving target (i.e., no need to frequently update and rebuild the software, it's very stable).

Check the IdPLinuxNonRootDebianUbuntu page for configuration details.

Apache Commons daemon JSVC

On systems lacking authbind (e.g. Red Hat, CentOS and friends) Apache Commons Daemon (jsvc) can be used to manage Tomcat. Instead of the complex layering of shell scripts commonly used in Tomcat distributions (and in GNU/Linux distributions' packaged Tomcat), with systemd one can override the packaged service file to use jsvc instead (or alternatively create another service/unit file and start that instead of tomcat). jsvc starts as root – and hence can open ports < 1024 – but runs Tomcat and the JVM as an unprivileged user (set to "tomcat" in the jsvc command below).

Requirements

  • Apache Commons daemon (jsvc), RHEL/CentOS: apache-commons-daemon-jsvc package
  • Systemd (other init systems are left as an exercise to the reader)

Configuration changes

The examples below assume a CentOS 7 system and use of the packaged Tomcat 7 (no Tomcat 8 is available for CentOS 7) and OpenJDK 8 software. Configuration parameters can be read from a file (here CentOS's default config file /etc/tomcat/tomcat.conf is used, but you can also supply your own, with arbitrary NAME=value assignments) and so can be tuned independently of the systemd service file.

 Example systemd service file for jsvc and Tomcat on CentOS 7
yum install tomcat apache-commons-daemon-jsvc java-1.8.0-openjdk-headless

The following unit file tries to mirror the default Tomcat startup scripts from CentOS 7's tomcat package as closely as possible, only using jsvc instead, thereby avoiding the low port issue. Adjust memory ("-Xmx1g" used in the example below) as needed/recommended. Copy content below into this new file: /etc/systemd/system/tomcat.service

[Unit]
Description=Apache Tomcat Web Application Container JSVC wrapper
After=syslog.target network.target
[Service]
Type=simple
PIDFile=/var/run/tomcat.pid
EnvironmentFile=/etc/tomcat/tomcat.conf
Environment=CATALINA_PID=/var/run/tomcat.pid
Environment=CATALINA_OPTS="-Xmx1g"
Environment=ERRFILE=/var/log/tomcat/catalina.out
Environment=OUTFILE=/var/log/tomcat/catalina.out
ExecStart=/usr/bin/jsvc \
            -server \
            -nodetach \
            -Djava.awt.headless=true \
            -cwd ${CATALINA_HOME} \
            -Dcatalina.home=${CATALINA_HOME} \
            -Dcatalina.base=${CATALINA_HOME} \
            -Djava.io.tmpdir=${CATALINA_TMPDIR} \
            -Djava.util.logging.config.file=${CATALINA_HOME}/conf/logging.properties \
            -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager \
            -cp /usr/share/java/commons-daemon.jar:${CATALINA_HOME}/bin/bootstrap.jar:${CATALINA_HOME}/bin/tomcat-juli.jar \
            -user tomcat \
            -java-home ${JAVA_HOME} \
            -pidfile ${CATALINA_PID} \
            -errfile ${ERRFILE} \
            -outfile ${OUTFILE} \
            $CATALINA_OPTS \
            org.apache.catalina.startup.Bootstrap
ExecStop=/usr/bin/jsvc \
            -pidfile /var/run/tomcat.pid \
            -stop \
            org.apache.catalina.startup.Bootstrap
[Install]
WantedBy=multi-user.target

Note that with Type=forking the systemd journal contains messages like this:

tomcat.service: Supervising process <pid> which is not our child. We'll most likely not notice when it exits.

We therefore changed to Type=simple and also added the "-nodetach" parameter to jsvc, which is how systemd seems to prefer things.

systemctl daemon-reload
systemctl enable tomcat
systemctl start tomcat
systemctl status tomcat

 

Linux capabilities

POSIX capabilities can be used to address the problem head-on, by allowing unprivileged users to bind to privileged ports. The java binary used to run the servlet container needs to be granted this capability.

Requirements

  • Linux kernel that supports capabilities (from 2.6.24 on, if compiled in)
  • Systemd for automated management of the capabilities on service start/stop
    (Alternatives are possible using the Kernel's inotify feature and tools like incron, to ensure changes are re-applied after each Java upgrade)

Configuration Changes

The examples below assume a CentOS 7 system and use of the packaged Tomcat 7 (no Tomcat 8 is available for CentOS 7) and OpenJDK 8 software. First try everything manually on the command line (possibly adjusting the paths; this is on AMD64 using all defaults):

yum install tomcat java-1.8.0-openjdk-headless libcap && systemctl enable tomcat
setcap cap_net_bind_service=+ep /usr/lib/jvm/jre/bin/java
echo /usr/lib/jvm/jre/lib/amd64/jli/ > /etc/ld.so.conf.d/java.conf
ldconfig

Then set your HTTP connector to port="443" or "80" in /etc/tomcat/server.xml and restart Tomcat. Check the status output, listening processes and the process list to make sure it's the unprivileged tomcat user that's running Java here.
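For reference, the connector in /etc/tomcat/server.xml might then look like this (a sketch only; keep the other attributes of your existing connector):

```xml
<!-- Plain HTTP connector bound directly to port 80; binding works for the
     tomcat user only because the java binary was granted
     cap_net_bind_service with setcap above -->
<Connector port="80" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="443" />
```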

systemctl restart tomcat
systemctl status tomcat
netstat -ltnp  # what process listens where
ps auxww       # what user does that process run as

Only continue with automating/productionalizing this approach once you have this working.

The setcap command needs to be reapplied after every Java package upgrade. While that could be done automatically we'll use a slightly amended systemd unit file to make that process simple and reliable, without resorting to additional tools.

Copy the content of the systemd unit file (included below, expand to view) into a new file /etc/systemd/system/tomcat.service or alternatively run

systemctl edit --full tomcat

which spawns an editor with the current unit file. Add the lines between BEGIN and END to the existing unit file, in the same place (after EnvironmentFile and before ExecStart). JAVA_HOME should come from one of the referenced EnvironmentFiles, but if, e.g., your system is not of the "amd64" architecture you may need to change the paths, both for the java binary and for lib/amd64/jli/libjli.so.

 Example systemd service file for Java with Capabilities and Tomcat on CentOS 7
# Systemd unit file for default tomcat
# 
# To create clones of this service:
# DO NOTHING, use tomcat@.service instead.
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target
[Service]
Type=simple
EnvironmentFile=/etc/tomcat/tomcat.conf
Environment="NAME="
EnvironmentFile=-/etc/sysconfig/tomcat
### BEGIN
PermissionsStartOnly=true
ExecStartPre=/usr/sbin/setcap cap_net_bind_service=+ep ${JAVA_HOME}/bin/java
ExecStartPre=/bin/sh -c "/bin/echo ${JAVA_HOME}/lib/amd64/jli/ > /etc/ld.so.conf.d/java.conf"
ExecStartPre=/sbin/ldconfig
ExecStopPost=-/usr/sbin/setcap -r ${JAVA_HOME}/bin/java
ExecStopPost=-/bin/rm -f /etc/ld.so.conf.d/java.conf
ExecStopPost=-/sbin/ldconfig
### END
ExecStart=/usr/libexec/tomcat/server start
ExecStop=/usr/libexec/tomcat/server stop
SuccessExitStatus=143
User=tomcat
Group=tomcat

[Install]
WantedBy=multi-user.target

Save, exit the editor and run:

systemctl daemon-reload
systemctl restart tomcat
systemctl status tomcat

and verify that everything is still working (Java listening on a low port, java process being run as Tomcat).

While Tomcat is still running verify that the capabilities are there on the Java binary:

# getcap /usr/lib/jvm/jre/bin/java
/usr/lib/jvm/jre/bin/java = cap_net_bind_service+ep

After you stop Tomcat again they should be gone (to be added dynamically again on each start):

systemctl stop tomcat
getcap /usr/lib/jvm/jre/bin/java

Proxying

Running a separate server process only to proxy all requests to the servlet container is not ideal. At the very least it means that two servers/services must be correctly configured, integrated and running in order for one web server/service to function (i.e., doubling the potential for service disruptions). None of the approaches introduced above suffer from these issues. Additionally, proxying may introduce performance or timeout problems, problems with proper virtualization of requests, with getting the correct client IP addresses logged, security issues with HTTP request headers set by an external web server / load balancer (which may be forged), etc.

Apache httpd and Tomcat

The example given here assumes use of AJP between Tomcat and httpd. TLS is offloaded to httpd. Tomcat only listens on port 8009 on the loopback interface (inaccessible to the outside world), httpd listens on port 443 for TLS (and maybe also 8443 for backchannel support).

  • In Tomcat's server.xml only configure an AJP Connector on the loopback interface and disable all other Connectors:

    <Connector port="8009" address="127.0.0.1"
      enableLookups="false" redirectPort="443"
      protocol="AJP/1.3" maxPostSize="100000" />
  • In Apache httpd with mod_ssl configure a VirtualHost with TLS (not covered here) and proxy requests to the idp context to the servlet container (adjust to taste if your IDP runs in a different context):

    <Proxy ajp://127.0.0.1:8009/idp/*>
      Require all granted
    </Proxy>
    ProxyPass /idp/ ajp://127.0.0.1:8009/idp/
  • If you offer support for a backchannel, repeat the same on the backchannel 8443 vhost (using the IDP's backchannel key pair, not the web server's TLS one), making sure httpd does not get in the way of the IDP application's evaluation of client certificates (SSLVerifyClient optional_no_ca):

    <Location /idp>
      Require all granted
      SSLOptions      +StdEnvVars +ExportCertData
      SSLVerifyClient optional_no_ca
      SSLVerifyDepth  10
    </Location>
    <Proxy ajp://localhost:8009/idp/*>
        Require all granted
    </Proxy>
    ProxyPass /idp/ ajp://localhost:8009/idp/

Webservers without AJP support

One example for this would be use of plain HTTP between the servlet container and the web server (e.g. Nginx, Lighttpd, Pound, etc.). TLS is offloaded to/terminated at the web server. The web server listens on port 443 and proxies all requests to the container, which is listening on port e.g. 8000 with a plain HTTP connector. The container must not be directly accessible from the outside world (e.g. by only listening on the loopback interface).
For backchannel support httpd may also listen on port 8443, proxying requests to the same container port (e.g. 8000) – but possibly a different container port is needed (e.g. 8080) to tell the ports apart in the IDP application. Either way, the key pair must differ for the backchannel port (i.e., the IDP's backchannel credentials, not web server TLS keys), as usual.
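Nginx is one such web server; a minimal sketch of a TLS-terminating front-channel vhost proxying to a plain HTTP connector on port 8000 might look like this (the host name and certificate paths are assumptions):

```nginx
server {
    listen 443 ssl;
    server_name idp.example.org;                      # assumed host name
    ssl_certificate     /etc/pki/tls/certs/idp.crt;   # assumed paths
    ssl_certificate_key /etc/pki/tls/private/idp.key;

    location /idp/ {
        # container listening on the loopback interface only
        proxy_pass http://127.0.0.1:8000;
        # pass the original client address and host on to the container
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}
```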

The container may need additional configuration to correctly virtualize the scheme (https vs. http), port (443 vs. 8000) and possibly host name. E.g. for Apache Tomcat those are the scheme, proxyPort and proxyName attributes on the Connector, respectively.
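A sketch of such a virtualized connector for Tomcat (the host name is an assumption; secure="true" additionally marks proxied requests as having arrived over TLS):

```xml
<!-- Plain HTTP connector on the loopback interface; scheme, proxyName and
     proxyPort virtualize the externally visible https://idp.example.org:443 -->
<Connector port="8000" protocol="HTTP/1.1" address="127.0.0.1"
           scheme="https" secure="true"
           proxyName="idp.example.org" proxyPort="443" />
```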

Load balancer, TLS-offloading appliance

Most of the considerations as in section "Webservers without AJP support" apply here as well. In addition to those:

  • Since the load balancer will likely be on a different host/machine/system pay attention to properly virtualize the scheme, port and hostname in the container.
  • For backchannel support it may be easier (or necessary) to configure TLS with the IDP's backchannel key pair on the container itself (e.g. with a HTTPS Connector on port 8443) and pass through port 8443 from the load balancer unmolested.
  • Depending on the technologies involved (HTTP proxying, SNAT, etc.) the container may need additional configuration to get the correct information about the HTTP User Agent's IP address from the loadbalancer.
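For Tomcat, one way to recover the client address – assuming the load balancer sets the X-Forwarded-For header – is the RemoteIpValve, added to server.xml (the internalProxies address is an assumption; set it to your load balancer's internal address):

```xml
<!-- Rewrite the request's remote address and scheme from headers set by
     trusted proxies, so logs and the application see the real client -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       internalProxies="10\.0\.0\.1"
       remoteIpHeader="X-Forwarded-For"
       protocolHeader="X-Forwarded-Proto" />
```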

Systemd-socket-proxyd

Systemd as included in e.g. RHEL/CentOS 7 (from systemd version 209 on) also includes a proxy server that inherits a socket activated by systemd (e.g. port 443, opened with root privileges), connects to a configured server port (e.g. the container listening on port 8443 with a HTTPS connector for web browser TLS use) and bi-directionally forwards all bits.

  • This does not make use of TLS offloading, so the container will perform TLS itself (e.g. on port 9443 for web browser/front channel requests), though it will listen only on the loopback interface (Tomcat: address="127.0.0.1")
  • The container needs to be properly virtualized, with the externally visible (logical) scheme and port. For Tomcat e.g. scheme="https" proxyPort="443" on a Connector port="9443".
  • For backchannel requests you'd forgo use of the systemd proxy and directly expose another HTTPS connector on the container, e.g. on port 8443, listening on all interfaces (or a specific external one), with the IDP's backchannel key pair, of course.
  • All proxy startup and tear down happens dynamically in the container's systemd unit file, so no additional processes need to be managed and monitored.
  • But those external server/proxy processes still need to exist and bits will still need to be passed between 2 servers, which may incur some of the general proxying issues.

Examples TBD.
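Pending complete examples, a minimal sketch of such a setup might look like the following pair of unit files (the unit names and the 9443 backend port are assumptions):

```ini
# /etc/systemd/system/proxy-to-tomcat.socket (hypothetical name)
[Socket]
# opened by systemd as root, so the low port is no problem
ListenStream=443

[Install]
WantedBy=sockets.target

# /etc/systemd/system/proxy-to-tomcat.service
[Unit]
Requires=tomcat.service
After=tomcat.service

[Service]
# forward the inherited port-443 socket to the container's HTTPS connector
ExecStart=/usr/lib/systemd/systemd-socket-proxyd 127.0.0.1:9443
```

Enable with systemctl enable --now proxy-to-tomcat.socket; the service is then started on demand when the first connection arrives.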