background-image: url(assets/hes-so.jpg), url(assets/MSE.png)
background-position: left top, right 20px top 20px
background-size: 25%, 30%
class: center, middle

# Singularity Containers

### Dr. Alexander Kashev
ScITS, Universität Bern

#### Advanced Cloud
HES-SO

Please open this in your browser to follow along: [`https://bit.ly/2Jrh566`](https://bit.ly/2Jrh566)

---

# Agenda

1. The problem we're solving
2. Docker vs Singularity
3. Installing and testing Singularity
4. Modifying containers
5. Writing a Singularity definition file
6. Runtime options
7. Cloud resources
8. Extra credits (if time allows)

---

class: center, middle

# The problem we're solving

## A quick refresher on VMs and Containers

---

# The problem we're solving

* Software depends on more than its own code: the OS it runs on, the libraries it uses and its environment all matter.

--

* Reproducing those conditions on an actual production system is not easy, and for a larger collection of software the requirements may even be contradictory.

--

* In addition, unrelated pieces of software can benefit from isolation from each other, reducing the cost of errors and the attack surface.

---

# What would we want in a solution?

* **A turnkey solution**

  A recipe that can build a working instance of your software, reliably and fast.

* **BYOE: Bring Your Own Environment**

  A way to capture the prerequisites and environment together with the software.

* **Mitigate security risks**

  Provide a measure of isolation between the software running on a system. No security is perfect, but some is better than none.

---

# Solution: Virtual Machines?

* **The BYOE principle is fully realized**

  Whatever your environment is, you can package it fully, OS and everything.

--

* **Security risks are truly minimized**

  The very narrow and secured bridge between the guest and the host leaves little opportunity for a bad actor to break out of isolation.

--

* **Easy to precisely measure out resources**

  The contained application, together with its OS, has restricted access to hardware: you measure out its disk, memory and allotted CPU.

---

# Virtual Machines: the not so good parts

* **Operational overhead**

  For every piece of software, the full underlying OS has to be run, and corresponding resources allocated.

* **Setup overhead**

  Starting and stopping a virtual machine is not very fast, and/or requires saving its state. Changing the allocated resources can be hard too.

* **Hardware availability**

  The isolation between the host and the guest can hinder access to specialized hardware on the host system.

---

# Solution: Containers?

* **Lower operational overhead**

  You don't need to run a whole second OS to run an application.

--

* **Lower startup overhead**

  Setup and teardown of a container is much less costly.

--

* **More hardware flexibility**

  You don't have to dedicate a set portion of memory to your VM well in advance, or contain your files in a fixed-size filesystem.

  Also, the level of isolation is up to you. You may present devices on the system directly to containers if needed.

---

# Containers: the not so good parts

* **Kernel compatibility**

  The kernel is shared between the host and the container, so there may be some incompatibilities.

--

* **Security concerns**

  The isolation is thinner than in the VM case, and the kernel of the host OS is directly exposed.

--

* **Linux on Linux**

  Containers are inherently a Linux technology. You need a Linux host (or a Linux VM) to run containers, and only Linux software can run.

---

class: center, middle

# Docker vs Singularity

## Why did another technology emerge?

---

# Docker

* Docker has been around since 2013 and has grown to be the gold standard of container technology.

* A huge number of tools has been built around Docker to build, run, orchestrate and integrate Docker containers.
* Many cloud service providers can directly integrate Docker containers. Docker claims a 26× improvement in resource efficiency at cloud scale, compared to VMs.

* The Docker approach encourages splitting software into microservice chunks that can be portably used as needed.

---

# Docker: concerns

* Docker's model of layers/images/volumes/metadata is rather complex, and it is not always transparent about how those are stored.

* Container isolation features require superuser privileges; Docker has a persistent daemon running with those privileges, and most container operations require root.

Both of those issues make Docker undesirable in applications where you don't wholly own the computing resource, primarily HPC environments.

Out of those concerns, and out of the scientific community, came Singularity.

---

# Singularity

Singularity was created in 2015 as an HPC-friendly alternative to Docker. It is still in rapid development.

--

* It's usually straightforward to convert a Docker container to a Singularity image. This gives users access to a vast library of containers.

--

* Singularity uses a monolithic, image-file-based approach instead of dynamically overlaid layers. You build a single file on one system and simply copy it over or archive it.

  This addresses the "complex storage" issue with Docker.

---

# Singularity and root privileges

The privilege problem was a concern from the ground up, to make Singularity acceptable for academic clusters.

--

* Addressed by having a `setuid`-enabled binary that accomplishes container startup and drops privileges as soon as possible.

--

* Privilege elevation inside a container is impossible: the `setuid` mechanism is disabled inside the container, so to be root inside, you have to be root outside.

--

* Users don't need explicit root access to operate containers (at least after the initial build).

---

# Singularity workflow overview

.minusmargin[
  _(workflow diagram)_
]

1. Interactively develop steps to construct a container.
2. Describe the steps in a recipe.
3. Build an immutable container on your own machine.
4. Deploy this container in the production environment.

---

# Singularity niche

When is Singularity useful over Docker?

* The major use case was, and still is, .highlight[shared systems]: systems where unprivileged users need the ability to run containers.

  However, an admin still needs to install Singularity for it to function.

* Singularity is also useful as an alternative to Docker. If you have admin privileges on the host, Singularity can do more than in unprivileged mode.

  It doesn't have the same level of ecosystem around it, but it is currently gaining features such as an OCI runtime interface, native Kubernetes integration and its own cloud services.

---

# Singularity "sales pitch"

Quoting from Singularity Admin documentation:

> _Untrusted users (those who don’t have root access and aren’t getting it) can run untrusted containers (those that have not been vetted by admins) safely._

This won over quite a few academic users; for a sampling:

[`https://www.Sylabs.io/singularity/whos-using-singularity/`](https://www.Sylabs.io/singularity/whos-using-singularity/)

---

# Singularity versions

There are two major branches of Singularity:

* 2.x branch (currently at 2.6.1): the legacy branch with no active development, but still deployed in places.

* 3.x branch (currently at 3.1.0): the actively developed branch, with most of the code completely rewritten in Go.

--

Due to the freshness of the code and the new Go dependency, 3.x adoption has been slow. This course covers the 3.1.x branch.

--

Singularity aims to be backwards-compatible: containers built with earlier versions should work with newer ones.

---

class: center, middle

# Working with Singularity

## Installation and basic use

---

# Installing Singularity

Installing Singularity from source is probably the preferred way, as it's still a relatively new piece of software.

--

Instructions at: https://github.com/Sylabs/singularity/blob/master/INSTALL.md

A Go compiler (>= 1.11.1) is required as a **build** dependency. It is not required to run the compiled software.

--

If you want to try installing Singularity on your Linux system, follow the build instructions. You will need root access!
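A condensed sketch of those steps (see `INSTALL.md` for the authoritative version, including setting up the Go workspace; the paths here are illustrative):

```
$ git clone https://github.com/sylabs/singularity.git
$ cd singularity
$ ./mconfig
$ make -C ./builddir
$ sudo make -C ./builddir install
```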
---

# Using Singularity

After installation, you should now have `singularity` available from the shell.

```
user@host:~$ singularity version
3.1.0
```

--

The general format of Singularity commands is:

```
singularity [<global options>] <command> [<command options>] [<arguments>]
```

Use `singularity help [<command>]` to check built-in help.

You can find the configuration of Singularity under `/usr/local/etc/singularity` if you used the default prefix.

---

# Container images

A Singularity image is, for practical purposes, a filesystem tree that will be presented to the applications running inside it.

--

A Docker container is built with a series of _layers_ that are stacked upon each other to form the filesystem. Layers are collections of updates to files, and the stack must be inspected to find the latest version of each file.

Singularity collapses those into a single, portable file.

--

A container needs to be bootstrapped with a base operating system before further modifications can be made.

---

# Pulling Docker images

The simplest way of obtaining a working Singularity image is to pull and convert it from Docker Hub. Let's try it with CentOS 6:

```
user@host:~$ singularity pull docker://centos:6
```

This will download the layers of the Docker container to your machine and assemble them into an image.

The result will be stored as `centos_6.sif`.

---

# Pulling Docker images

```
user@host:~$ singularity pull docker://centos:6
WARNING: Authentication token file not found : Only pulls of public images will succeed
INFO: Starting build...
Getting image source signatures
Copying blob sha256:ff50d722b38227ec8f2bbf0cdbce428b66745077c173d8117d91376128fa532e
 66.60 MiB / 66.60 MiB [====================================================] 7s
Copying config sha256:5d1ece75fd80b4dd0e4b2d78a1cfebbabad9eb3b5bf48c4e1ba7f9dd28c2789e
 1.51 KiB / 1.51 KiB [======================================================] 0s
Writing manifest to image destination
Storing signatures
INFO: Creating SIF file...
INFO: Build complete: centos_6.sif
```

Note that this .highlight[does not require `sudo` or Docker]!

---

# Entering shell in the container

To test our freshly-created container, we can invoke an interactive shell to explore it with .highlight[`shell`]:

```
user@host:~$ singularity shell centos_6.sif
Singularity centos_6.sif:~>
```

At this point, you're within the environment of the container.

--

We can verify we're "running" CentOS:

```
Singularity centos_6.sif:~> cat /etc/centos-release
CentOS release 6.9 (Final)
```

---

# User/group within the container

Inside the container, we are the same user:

```
Singularity centos_6.sif:~> whoami
user
Singularity centos_6.sif:~> exit
user@host:~$ whoami
user
```

We will also have the same groups. That way, if any host resources are mounted in the container, we'll have the same access privileges.

---

# Root within the container

If we launched `singularity` with `sudo`, we would be `root` inside the container.

```
user@host:~$ sudo singularity shell centos_6.sif
Singularity centos_6.sif:~> whoami
root
```

--

**Most importantly:** the `setuid` mechanism will not work within the container.

Once launched as non-root, no command can elevate your privileges. (The host's root user can override this, but it is the default.)

---

# Default mounts

In addition to the container filesystem, by default:

* the user's home folder,
* `/tmp`,
* `/dev`,
* the folder we've invoked Singularity from

are accessible inside the container.

--

The idea is to provide minimal friction when working with software inside the container: no need for extra mounts to access data or store preferences.

It is possible to override this default behavior.
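For example, a sketch using a 3.x runtime flag (see `singularity help shell` for the full list; `--no-home` skips the home mount):

```
user@host:~$ cd /tmp
user@host:/tmp$ singularity shell --no-home centos_6.sif
Singularity centos_6.sif:/tmp> ls ~
[..the home folder is not mounted..]
```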
---

# Default mounts

```
user@host:~$ singularity shell centos_6.sif
Singularity centos_6.sif:~> ls ~
[..lists home folder..]
Singularity centos_6.sif:~> touch ~/test_container
Singularity centos_6.sif:~> exit
user@host:~$ ls ~/test_container
/home/user/test_container
```

The current working directory inside the container is the same as outside at launch time.

---

# Running a command directly

Besides the interactive shell, we can execute any command inside the container directly with .highlight[`exec`]:

```
user@host:~$ singularity exec centos_6.sif cat /etc/centos-release
CentOS release 6.9 (Final)
user@host:~$ singularity exec centos_6.sif python --version
Python 2.6.6
user@host:~$ python --version
Python 2.7.15rc1
```

---

class: center, middle

# Modifying containers

## Let's make our own

---

# Modifying the container

Let's try to install some software in the container.

```
user@host:~$ singularity shell centos_6.sif
Singularity centos_6.sif:~> fortune
bash: fortune: command not found
```

`fortune` is not part of the base image. Let's try installing it.

--

```
Singularity centos_6.sif:~> exit
user@host:~$ sudo singularity shell centos_6.sif
Singularity centos_6.sif:~> whoami
root
Singularity centos_6.sif:~> yum -y --enablerepo=extras install epel-release
[...]
[Errno 30] Read-only file system: '/var/lib/rpm/.rpm.lock'
[...]
```

Despite having root, we can't write to the filesystem.

---

# Images and overlays

Singularity image files are read-only squashfs filesystems.

Singularity can use an .highlight[overlay]: a layer on top of the image that holds changes to it.

--

Overlays can be persistent (stored in a folder) or temporary. Singularity 2.x used a temporary overlay by default; in 3.x, you request one with `--writable-tmpfs`.

```
user@host:~$ sudo singularity shell --writable-tmpfs centos_6.sif
Singularity centos_6.sif:~> touch /test
Singularity centos_6.sif:~> ls /test
/test
```

```
user@host:~$ mkdir persistent_overlay
user@host:~$ sudo singularity shell --overlay persistent_overlay centos_6.sif
Singularity centos_6.sif:~> touch /test
Singularity centos_6.sif:~> ls /test
/test
```

---

# Sandbox containers

A more conventional way to write to a container is to use the .highlight[sandbox] format, which is just a filesystem tree stored in a folder.

```
$ sudo singularity build --sandbox centos-writable docker://centos:6
$ ls centos-writable/
bin  dev  environment  etc  home  lib  lib64  lost+found  media  mnt  opt
proc  root  sbin  selinux  singularity  srv  sys  tmp  usr  var
```

Working with sandbox containers requires root.

--

Passing `--writable` to `shell` or `exec` will now enable changes:

```
$ sudo singularity shell --writable centos-writable
Singularity centos-writable:~> touch /test
Singularity centos-writable:~> ls /test
/test
Singularity centos-writable:~> exit
$ ls centos-writable/test
centos-writable/test
```

---

# Writing to a container, finally

We should now be able to enter it **in writable mode** and install software:

```
user@host:~$ sudo singularity shell --writable centos-writable
Singularity centos-writable:~> yum -y --enablerepo=extras install epel-release
[...]
Singularity centos-writable:~> yum -y install fortune-mod
[...]
Singularity centos-writable:~> exit
user@host:~$ singularity exec centos-writable fortune
[some long-awaited wisdom of a fortune cookie]
```
---

# Default run script

A container can have a "default" command which is run without specifying it.

Inside the container, it's `/singularity`. Let's try modifying it:

```
user@host:~$ sudo nano centos-writable/singularity
```

By default you'll see a sizeable shell script:

```bash
#!/bin/sh
OCI_ENTRYPOINT=''
OCI_CMD='"/bin/bash"'
CMDLINE_ARGS=""
# [...]
```

---

# Custom run script

We installed `fortune`, so let's use that instead:

```bash
#!/bin/sh
exec /usr/bin/fortune "$@"
```

Now we can invoke it with .highlight[`run`]:

```
user@host:~$ singularity run centos-writable
[..some wisdom or humor..]
```

---

# Converting to final container

One way to produce a "final" container is to convert it from the sandbox version:

```
user@host:~$ sudo singularity build fortune.sif centos-writable
[...]
```

Now we can test our container:

```
user@host:~$ singularity run fortune.sif
[..some more wisdom..]
```

---

# Running a container directly

Note that the container file is executable:

```
user@host:~$ ls -lh fortune.sif
-rwxr-xr-x 1 user user 105M Mar 15 13:37 fortune.sif
```

If we run it directly, it's the same as invoking `run`:

```
user@host:~$ ./fortune.sif
[..a cracking joke..]
```

This does require `singularity` to be installed on the host, however, and is just a convenience.

---

class: center, middle

# Container definition files

## Making the container reproducible

---

# Making the container reproducible

Instead of taking some base image and making changes to it by hand, we want to make this build process reproducible.

This is achieved with .highlight[definition files], historically also called "recipes".

Let's try to retrace our steps to obtain a fortune-telling CentOS.

---

# Bootstrapping

The definition file starts with a header section. The key part of it is the `Bootstrap:` configuration, which defines how we obtain the "base" image.

There are multiple types of bootstrap methods:

* pulling an image from a cloud service such as `docker`
* using `yum`/`debootstrap` on the host system to bootstrap a similar one
* using `localimage` to base it off another image on your computer

We'll be using the Docker method.

```
Bootstrap: docker
From: centos:6
```

---

# Setting up the container

There are two sections for setup commands (essentially shell scripts):

1. .highlight[`%setup`] for commands to be executed .highlight[outside the container].

   You can use `$SINGULARITY_ROOTFS` to access the container's filesystem, as it is mounted on the host during the build.

2. .highlight[`%post`] for commands to be executed .highlight[inside] the container.

   This is a good place to set up the OS, such as installing packages.

---

# Setting up the container

Let's save the name of the build host and install `fortune`:

```
Bootstrap: docker
From: centos:6

%setup
  hostname -f > $SINGULARITY_ROOTFS/etc/build_host

%post
  yum -y --enablerepo=extras install epel-release
  yum -y install fortune-mod
```

---

# Adding files to the container

An additional section, .highlight[`%files`], allows you to copy files or folders to the container.

We won't be using it here, but the format is very similar to `cp`, with sources being outside and the final destination being inside the container:

```
%files
  some/file /some/other/file
  some/path/
  some/directory some/path/
```

Note that this happens **after** `%post`. If you need the files earlier, copy them manually in `%setup`.

---

# Setting up the environment

You can specify a script to be sourced when something is run in the container. This goes in the .highlight[`%environment`] section. Treat it like `.bash_profile`.

```
%environment
  export HELLO=World
```

Note that, by default, the host environment variables are passed to the container. To disable this, use `-e` when running the container.
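A quick sketch of that pass-through and the `-e` switch (the `GREETING` variable here is hypothetical):

```
user@host:~$ export GREETING=hi
user@host:~$ singularity exec fortune.sif sh -c 'echo $GREETING'
hi
user@host:~$ singularity exec -e fortune.sif sh -c 'echo $GREETING'
[..empty: the host variable was not passed along..]
```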
---

# Setting up the runscript

The runscript (`/singularity`) is specified in the `%runscript` section.

Let's use the file we saved in `%setup` and run `fortune`:

```
%runscript
  read host < /etc/build_host
  echo "Hello, $HELLO! Fortune Teller, built by $host"
  exec /usr/bin/fortune "$@"
```

---

# Testing the built image

You can specify commands to be run at the end of the build process inside the container to perform sanity checks.

Use the `%test` section for this:

```
%test
  test -f /etc/build_host
  test -x /usr/bin/fortune
```

All commands must return successfully or the build will fail.

---

# The whole definition file

```
Bootstrap: docker
From: centos:6

%setup
  hostname -f > $SINGULARITY_ROOTFS/etc/build_host

%post
  yum -y --enablerepo=extras install epel-release
  yum -y install fortune-mod

%environment
  export HELLO="World"

%runscript
  read host < /etc/build_host
  echo "Hello, $HELLO! Fortune Teller, built by $host"
  exec /usr/bin/fortune "$@"

%test
  test -f /etc/build_host
  test -x /usr/bin/fortune
```

---

# Building a container from definition

To build a container from a definition file, we invoke `build`:

```
user@host:~$ rm fortune.sif
user@host:~$ sudo singularity build fortune.sif fortune.def
[...]
```

Now `fortune.sif` is ready for use:

```
user@host:~$ ./fortune.sif
[..witty quote..]
```

Note that `build`, unlike `pull`, .highlight[requires sudo].

---

# Inspecting a built container

The container has some metadata you can read:

```
user@host:~$ singularity inspect fortune.sif
{
    "org.label-schema.usage.singularity.deffile.bootstrap": "docker",
    "vendor": "CentOS",
    "name": "CentOS Base Image",
[...]
```

You can inspect the original definition file:

```
user@host:~$ singularity inspect -d fortune.sif
Bootstrap: docker
From: centos:6

%setup
  hostname -f > $SINGULARITY_ROOTFS/etc/build_host
[...]
```

See `singularity help inspect` for more options, and `/.singularity.d/` inside the container to see how it's all stored.

---

class: center, middle

# Runtime options

## Fine-tuning container execution

---

# Host resources

A container can have more host resources exposed. To provide access to more directories, one can specify bind options at runtime with `-B`:

```
$ singularity run -B source[:destination[:mode]] container.sif
```

where .highlight[source] is the path on the host, .highlight[destination] is the path in the container (if different) and .highlight[mode] is optionally `ro` if you don't want to give write access.

Of course, more than one bind can be specified. Note that you can't specify this configuration in the container!

System administrators may specify binds that apply to all containers (e.g. `/scratch`).
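For example (the host path here is hypothetical), exposing a dataset read-only under `/data` inside the container:

```
$ singularity shell -B /cluster/datasets/run42:/data:ro fortune.sif
Singularity fortune.sif:~> ls /data
[..dataset contents..]
```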
---

# Host resources

Additionally, devices on the host can be exposed, e.g. the GPU; but you need to make sure that the guest has the appropriate drivers. One solution is to bind the host's drivers into the container.

For Nvidia CUDA applications specifically, Singularity supports the `--nv` flag, which looks for specific libraries on the host and binds them into the container.

-----

OpenMPI should also work, provided the libraries on the host and in the container are sufficiently close.

If set up correctly, it should work normally with `mpirun`:

```
$ mpirun -np 20 singularity run mpi_job.sif
```

---

# Network

Historically, Singularity defaulted to no network isolation, with an option of full isolation.

With 3.x, Singularity implements in-between options through the Container Network Interface (CNI):

[`https://github.com/containernetworking/cni`](https://github.com/containernetworking/cni)

--

Port remapping example:

```
$ sudo singularity instance start --writable-tmpfs \
    --net --network-args "portmap=8080:80/tcp" docker://nginx web2
$ sudo singularity exec instance://web2 nginx
$ curl localhost:8080
```

This requires root, which is a common limitation of containerization technology at the moment.

---

# Fuller isolation

By default, a container is allowed a lot of "windows" into the host system (dictated by Singularity configuration).

For an untrusted container, you can further restrict this with options like `--contain` and `--containall`.

In this case, you have to manually define where standard binds like the home folder or `/tmp` point.

See `singularity help run` for more information.

---

# Distributing the container

Using the container after creation on another Linux machine is simple: you just copy the image file there.

Note that you can't run the image file on a host without Singularity installed!

--

This approach makes it easy to deploy images on clusters with shared network storage.

You can easily integrate Singularity with the usual scheduler scripts (e.g. Slurm) if Singularity is installed on all nodes; see the sketch below.
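A minimal sketch of such a job script (assuming a Slurm cluster; the image path is hypothetical):

```
#!/bin/bash
#SBATCH --job-name=fortune
#SBATCH --ntasks=1

# The compute node only needs Singularity installed;
# the image itself sits on shared storage.
singularity run /shared/containers/fortune.sif
```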
---

class: center, middle

# Cloud services

## Current and upcoming ecosystem

---

# Using Singularity Hub

Singularity Hub allows you to cloud-build your containers from definition files, which you can then simply `pull` on a target host.

.center[[`https://singularity-hub.org/`](https://singularity-hub.org/)]

This requires a GitHub repository with a `Singularity` definition file.

After creating an account and connecting it to your GitHub account, you can select a repository and branches to be built.

Afterwards, you can pull the result:

```
user@host:~$ singularity pull shub://kav2k/fortune
[...]
user@host:~$ ./fortune_latest.sif
Hello, World! Fortune Teller, built by shub-builder-1450-kav2k-fortune-[...]
```

---

# Singularity Hub quirks

* Singularity Hub is not a registry like Docker Hub: you can't "push" an image there; it can only be built on their side.

* Singularity Hub is not an official Sylabs project; it's an academic non-profit project by other developers.

* Singularity Hub runs a modified version of Singularity 2.4, making some newer build-time features unavailable (but not runtime features).

* There are no paid plans. Users are allowed a single private project.

---

# Sylabs cloud offering

Starting with Singularity 3.0, the company behind Singularity aims to provide a range of cloud services to improve the Singularity user experience.

* **Container Library** as a counterpart for Docker Hub, serving as an official image repository.

* **Remote Builder** service to allow unprivileged users to build containers in the cloud.

* **KeyStore** service to enable container signature verification.

Most of them are, as of now, still in the public alpha stage.

---

# Sylabs Container Library

The Container Library is the Singularity counterpart to Docker Hub: a cloud registry for both public and private containers.

.center[[`https://cloud.Sylabs.io/library`](https://cloud.Sylabs.io/library)]

The Library allows direct upload of pre-built (and signed) containers, unlike Singularity Hub.

```
$ singularity push my.sif library://user/collection/my.sif:latest
$ singularity pull library://user/collection/my.sif:latest
```

As of March 2019, it's still in Alpha status; the eventual plan is a freemium model (pay for private images, pay for builder hours).

---

# Sylabs Remote Builder

Building a container from a recipe requires `sudo`, imposing the need for separate container-build infrastructure.

Sylabs provides a remote builder service that can build an image from a recipe file, then temporarily host it in the Cloud Library to be downloaded.

```
user@host:~$ singularity build --remote output.sif fortune.def
searching for available build agent......INFO: Starting build...
[...]
user@host:~$ ./output.sif
Hello, World! Fortune Teller, built by ip-10-10-30-146.ec2.internal
[..yet again, a funny quote..]
```

Caveat: all resources for a remote build must be accessible by the build node (i.e. over the internet).

---

# Signing containers and Sylabs Keystore

To ensure the safety of containers, the SIF format allows them to be cryptographically signed.

```
user@host:~$ singularity sign output.sif
user@host:~$ singularity verify output.sif
```

This alone provides assurance of integrity (the container has not been modified).

For authentication, Sylabs provides a .highlight[keyserver] called Keystore, which can be used to check signatures of keys not locally available.

```
user@host:~$ singularity keys push <key fingerprint>
user@host2:~$ singularity verify output.sif
```
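Signing presupposes a local PGP keypair; as a sketch, one can be generated with the same `keys` subcommand group used above:

```
user@host:~$ singularity keys newpair
[..interactive prompts for name, e-mail and passphrase..]
```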
---

# Sylabs commercial offering

Both the Container Library and Remote Builder are currently in a free testing period. However, in the future they will move to a freemium model.

There will also be on-premise versions of both services (which are not open source).

Besides that, Sylabs offers Singularity PRO: a priority-supported version of Singularity with ready-built packages.

Pricing is "upon request", and is either based on the number of hosts or is site-wide.

---

class: center, middle

# "Extra credit" topics

---

# Docker and Singularity

Instead of writing a Singularity definition file, you may write a `Dockerfile`, build a Docker container and convert that.

Pros:

* More portable: for some, using Docker or some other container solution is preferable.
* Easier private hosting: there is no mature private registry tech for Singularity.

Cons:

* Black box: Singularity understands less about the build process, in terms of container metadata.
* Complexity: an extra tool to learn if you don't know Docker yet.

Advice on Docker compatibility: [Best Practices (from 2.6 docs)](https://www.Sylabs.io/guides/2.6/user-guide/singularity_and_docker.html#best-practices)

---

# Docker -> Singularity

If you have a Docker image you want to convert to Singularity, you have at least 4 options:

1. Upload the image to a Docker Registry (such as Docker Hub) and `pull`/`Bootstrap` from there.
2. Use a private Docker registry to not rely on external services.
3. Directly pull from a local Docker daemon cache.
4. Use an intermediate format, as generated by `docker save`.

The last two are sketched below.
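A sketch of options 3 and 4, using the `docker-daemon:` and `docker-archive:` sources of Singularity 3.x (the image name here is hypothetical):

```
# Option 3: convert straight from the local Docker daemon's cache
$ sudo singularity build myimage.sif docker-daemon://myimage:latest

# Option 4: go through a tar archive produced by `docker save`
$ docker save -o myimage.tar myimage:latest
$ sudo singularity build myimage.sif docker-archive://myimage.tar
```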
---

# Singularity Instances

Running daemon-like persistent services with Singularity (such as a web server) can conveniently be done with the concept of instances.

A `%startscript` section of the definition file describes what service to launch; the running service is then managed with `instance` commands:

```
$ singularity instance start nginx.sif web
$ singularity instance list
INSTANCE NAME    PID      CONTAINER IMAGE
web              790      /home/mibauer/nginx.sif
$ singularity instance stop web
```

While an instance is running, the standard commands like `shell` and `exec` work with the `instance://` namespace.

---

# Reducing container size

Using traditional Linux distributions, even in minimal configurations, can still be overkill for running a single application.

One can reduce container size by clearing various artifacts of the build process, such as package manager caches.

Alternatively, one can use minimal Linux distributions, such as Alpine Linux, as a base for containers, though compatibility needs extra testing.

```
$ ll -h
-rwxr-xr-x 1 user group  66M Jun 25 15:04 centos_6.sif*
-rwxr-xr-x 1 user group 2.0M Jun 25 16:08 alpine.sif*
```

---

# SCI-F

One of the approaches for building scientific pipelines is bundling several tools in a single "toolset" container.

SCI-F is a proposed standard for discovering and managing tools within such modular containers.

A definition file can have several per-app sections, e.g.:

```
%appenv foo
  BEST_GUY=foo
  export BEST_GUY

%appenv bar
  BEST_GUY=bar
  export BEST_GUY

%apprun foo
  echo The best guy is $BEST_GUY

%apprun bar
  echo The best guy is $BEST_GUY
```

---

# SCI-F

You can then discover the apps bundled and run them:

```
$ singularity apps foobar.simg
bar
foo
$ singularity run --app bar foobar.simg
The best guy is bar
```

More sections can be made app-specific, including providing a `help` description:

```
$ singularity help --app fortune moo.simg
fortune is the best app
```

---

# Singularity Checks

A container check is a utility script that can verify a container.

Example uses:

* Making sure no leftover artifacts from the build process remain (e.g. root's bash history)
* Testing for common vulnerabilities
* Custom checks for your specific environment

```
$ singularity check --tag clean ubuntu.img
```

---

# Reproducibility going forward

Pinning a specific version of a base image makes it more probable that, in the future, building the same recipe will be impossible.

Singularity allows for easy storage of the resulting containers, and is good at providing backwards compatibility. This provides archival capability (but containers can be large).

--

But a "frozen" container can run into other compatibility problems down the line, especially if it needs some host-container interaction.

For example, the compiled software inside is no longer optimized for newer hardware architectures.

--

Bottom line: containers are not a silver bullet to solve reproducibility problems, but they help.

---

# Further reading

* Singularity User Guide: [https://www.Sylabs.io/guides/3.0/user-guide/](https://www.Sylabs.io/guides/3.0/user-guide/)

* Singularity Admin Guide: [https://www.Sylabs.io/guides/3.0/admin-guide/](https://www.Sylabs.io/guides/3.0/admin-guide/)

* Singularity White Paper: [link](https://www.Sylabs.io/wp-content/uploads/2019/01/Sylabs_Whitepaper_High_performance_server_v3.pdf)

* _Extra credit:_ [https://rootlesscontaine.rs/](https://rootlesscontaine.rs/)

---

class: center, middle

# Questions?