DIANA Docker Swarm Stacks

Derek Merck
University of Florida and Shands Hospital
Gainesville, FL


Admin Services

A base stack that provides the Portainer and Traefik services and their shared networks.


$ source sample.env
$ docker stack deploy -c admin/admin-stack.yml admin


  • Traefik: http://host:8080
  • Portainer: http://host/portainer


Other stacks can attach to the Traefik network by declaring it as an external network and adding the appropriate service labels. Assuming the admin stack was deployed with the name `admin`, the default network name is `admin_default`:

    networks:
      admin_default:
        external: true
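As a sketch, a hypothetical service routed through Traefik might look like the following (the network name `admin_default` and the Traefik 1.x-style labels are assumptions; match them to your admin stack configuration):

```yaml
# Sketch only: service name, image, and label values are illustrative.
version: "3.2"

services:
  myservice:
    image: nginx
    networks:
      - admin_default            # assumed name of the Traefik overlay network
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.frontend.rule=PathPrefix: /myservice"
        - "traefik.port=80"

networks:
  admin_default:
    external: true               # network is created by the admin stack
```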

Backend Data Services

Provides Postgres database, Redis, and Splunk indexing services, and allocates persistent storage for each.


$ source sample.env
$ docker stack deploy -c admin/admin-stack.yml admin
$ docker stack deploy -c backend-data/postgres-service.yml data
$ docker stack deploy -c backend-data/redis-service.yml data
$ docker stack deploy -c backend-data/splunk-service.yml data


  • Postgres: tcp://postgres:5432
  • Redis: tcp://redis:6379
  • Splunk: http://host/splunk
  • Splunk: http://splunk:{8088,8089}

DICOM Services

Provides Orthanc DICOM nodes and ingress gateways.


$ source sample.env
$ docker stack deploy -c dicom-node/archive-stack.yml dicom


  • Orthanc: http://host/archive
  • Orthanc: http://host/incoming

DIANA Worker Services

Mock PACS Stack

Provides an Orthanc DICOM node and fills it continuously with simulated DICOM headers generated by DIANA’s MockSite daemon.

By default, the Orthanc node is exposed at:

  • Orthanc: http://host/mock-pacs
  • Orthanc: dcm:MOCKPACS@orthanc-mock:4242


$ source sample.env
$ docker stack deploy -c admin/admin-stack.yml admin
$ docker stack deploy -c diana-workers/mock-stack.yml mock

Diana Watcher

See also the Remote Embedded Diana Watcher stack for Raspberry Pi and Balena deployments.


Installing Docker-CE on RHEL

Follow the CentOS guide and update `container-selinux` (see https://nickjanetakis.com/blog/docker-tip-39-installing-docker-ce-on-redhat-rhel-7x).

RHEL hosts behind firewalls can sometimes benefit from access to the CentOS yum repos as well.

Setup a Swarm

$ docker swarm init --advertise-addr <ip_addr>
$ ssh host2
> docker swarm join ... etc

Tag unique nodes for the scheduler

The storage node will be assigned the database backend.
Any bridge nodes will be assigned DICOM ingress, routing, and bridging services, because modalities typically authorize endpoint access by specific IP address.
$ docker node update --label-add storage=true host1   # mounts mass storage
$ docker node update --label-add bridge=true host2    # registered IP address for DICOM receipt
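With these labels in place, a stack file can pin a service to the right node via a placement constraint. A minimal sketch, assuming the Postgres backend is the service to be pinned (service, image, and volume names are illustrative; match them to your stack files):

```yaml
# Sketch: schedule the database backend only on the node labeled storage=true,
# so its data volume stays on the host that mounts mass storage.
services:
  postgres:
    image: postgres:12
    volumes:
      - postgres_data:/var/lib/postgresql/data
    deploy:
      placement:
        constraints:
          - node.labels.storage == true

volumes:
  postgres_data:
```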


Grant the orthanc database user superuser privileges so that it can create the trigram (pg_trgm) extension.
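A minimal sketch, assuming the database role is named `orthanc` (match your sample.env) and that trigram support comes from the standard pg_trgm extension:

```sql
-- Run as the postgres superuser, e.g.:
--   docker exec -it <postgres_container> psql -U postgres
ALTER ROLE orthanc SUPERUSER;

-- Alternatively, install the extension directly as the superuser,
-- so the orthanc role does not need elevated privileges:
-- CREATE EXTENSION IF NOT EXISTS pg_trgm;
```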