For clarification upfront: the “I” stands for Intershop, and with this series of articles I want to elaborate a little on our recent adventures into the latest and biggest hype within the IT scene, which undoubtedly is Docker, the most successful containerization approach known so far. In this second part I’m going to show you how you can deploy a simple artifact repository with an automated Nginx proxy. I will also talk a little about the use of the artifact repository within the Intershop continuous integration system.
Some context: Artifact Repositories
A very important and central part of our continuous integration / continuous delivery system is a thing called an “artifact repository”. It is the place where built artifacts are stored and where snapshots and releases are promoted between development and general availability using a mechanism called “staging”. Customers also receive accounts for accessing the artifact repository and are thus able to retrieve the binary (or in some cases source) artifacts of the Intershop Commerce Management product line-up they have paid for. Normally they include the remote repository at Intershop in their own artifact repository by a mechanism called proxying, that is, they connect their own artifact repository to the remote repository at Intershop by creating a so-called “proxy repository”. While logging in to the UI of https://repo.intershop.de to browse the artifacts is not permitted, it is absolutely normal for customers to access their artifacts programmatically, e.g. by running the Gradle build or deploy process against their own repository. Resolving dependencies against their own customer repository leads to actual downloads of the artifacts through the proxy to their site, where they are available as a copy from that point on.
Such an artifact repository can be Sonatype Nexus, which is what we use, but you can safely replace it with other solutions such as JFrog Artifactory, from the company that also provides the well-known Bintray service.
Long story short: an artifact repository is an integral part of your build and deployment pipeline. So how are you going to install it? Let’s see how a dockerized approach can be beneficial here.
Dockerizing Nexus
Sonatype itself provides the sources for dockerizing Nexus on GitHub. Since the images are automatically built and published to Docker Hub, you can access and run the container right away. Note that if the image has not yet been pulled to your local registry when you docker run it, it will be pulled automatically:
$ docker run -d -p 8081:8081 -e CONTEXT_PATH --name nexus sonatype/nexus:oss
Unable to find image 'sonatype/nexus:oss' locally
oss: Pulling from sonatype/nexus
8d30e94188e7: Already exists
2c62663e7918: Pull complete
fb8598ce803f: Pull complete
8e96d05c3e3d: Pull complete
a19aa6d0a6a3: Pull complete
Digest: sha256:a0c203b5d113e848a6c5477dd6c0c199ba7250fdee5e514532d0104cf44163a7
Status: Downloaded newer image for sonatype/nexus:oss
ef9002940031f60738a7a058200bc1181b7578dc292c6753c04ad6a68cd92e14
Note that by setting -p 8081:8081 you expose port 8081 to your local system. The -e CONTEXT_PATH is used here to unset the context path and therefore omit the /nexus part of the URL, so the server answers at http://localhost:8081 instead of http://localhost:8081/nexus.
You can then simply follow the Docker logs of the starting Nexus container and watch for a line showing that your server is up and running.
$ docker logs -f nexus
--- snipped ---
2016-09-27 14:30:02,058+0000 INFO [main] *SYSTEM org.sonatype.nexus.bootstrap.jetty.JettyServer - Started
After this, point your browser to http://localhost:8081 and have fun with your artifact repository. As you can see, it is very easy and very fast to get a running artifact repository by deploying it as a Docker container.
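If you prefer the command line to a browser, a quick check with curl works just as well. This is only a sketch; the status resource path is an assumption based on the Nexus 2 OSS image used above:

# the front page should answer with HTTP 200
$ curl -I http://localhost:8081/

# the Nexus 2 REST status resource reports version and state (path assumed for Nexus 2 OSS)
$ curl -s http://localhost:8081/service/local/status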
A note on persistence
You might already have noticed that this notoriously transient Nexus container incorporates the data storage, which makes handling persistent data somewhat quirky. You could execute “docker commit” on a running container to create a backup of the whole container, but this is not state of the art. Normally you want to split up the application and the application data. For containers, the mechanism for this is called Docker volumes. I won’t cover that topic in depth here, but if you read the Docker documentation you’ll find out that there are two main possibilities to handle data. As a prerequisite for handling volumes you want a VOLUME declaration in your Dockerfile, which exposes certain file paths to be mounted as a separate volume. In the Dockerfile of Sonatype’s Nexus you’ll find something like this:
ENV SONATYPE_WORK /sonatype-work
...
VOLUME ${SONATYPE_WORK}
This enables you to manage the Docker data slightly better. Typically the Nexus artifacts are placed in the sonatype-work directory. A first approach to persistence is to use a so-called “data volume container”. You can create and reference a data volume container as shown below.
$ docker run -d --name nexus-data sonatype/nexus echo "data-only container for Nexus"
$ docker run -d -p 8081:8081 --name nexus --volumes-from nexus-data sonatype/nexus
A second possibility to manage the data is to mount a directory of the host system into the /sonatype-work directory where the data resides. This way you can keep the data locally on the host’s filesystem or possibly on host-mounted cloud storage. To mount a host directory into the container, you do the following:
$ mkdir /some/local/dir/nexus-data && chown -R 200 /some/local/dir/nexus-data
$ docker run -d -p 8081:8081 --name nexus -v /some/local/dir/nexus-data:/sonatype-work sonatype/nexus
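A quick way to convince yourself that the data really survives the container (just a sketch reusing the names from above): throw the Nexus container away and recreate it against the same host directory.

# remove the container; the repository data stays in /some/local/dir/nexus-data
$ docker rm -f nexus

# a fresh container picks up the existing data again
$ docker run -d -p 8081:8081 --name nexus -v /some/local/dir/nexus-data:/sonatype-work sonatype/nexus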
To read more about backup/migration strategies for this, follow the documentation here.
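If you went with the data volume container from the first approach, a common generic Docker pattern for backups (only a sketch, not specific to Nexus) is to run a throwaway container that mounts the same volumes and archives them to the host:

# archive the /sonatype-work volume of the nexus-data container into the current host directory
$ docker run --rm --volumes-from nexus-data -v $(pwd):/backup ubuntu tar czf /backup/nexus-data-backup.tar.gz /sonatype-work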
An automated Nginx proxy configuration templater
Now, you possibly don’t want to expose the Nexus to your network, or even more seriously to the public internet, without security measures. Of course you will need to secure your Nexus by replacing the standard administrator password and by being more restrictive about who can access or publish artifacts. This configuration can also be automated, because Nexus offers a rich REST API, but I don’t want to go into details about that here.
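Just to sketch the idea without going into details: a single call against the Nexus REST API could look like the following. The users_changepw resource and the default admin credentials are assumptions based on a stock Nexus 2 OSS installation, so double-check them against the API documentation of your version.

# change the default admin password via the REST API (endpoint and payload assumed for Nexus 2)
$ curl -u admin:admin123 -H "Content-Type: application/json" -X POST http://localhost:8081/service/local/users_changepw -d '{"data":{"userId":"admin","oldPassword":"admin123","newPassword":"s3cr3t"}}'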
Let’s say your Nexus is secured by the means above, and now you want to expose it through an Nginx proxy to the world, or to your company network, respectively. You can do this in a dockerized manner as well. Jason Wilder’s project nginx-proxy (also available on GitHub) provides a very convenient way to set up such a proxy automatically.
Some background: the project uses docker-gen (GitHub), a file generator that renders templates using Docker container metadata. It looks up the Docker containers on a given host, inspects them, and generates matching configuration for different services, ranging from Logstash configuration over Nginx configuration up to configurations for service discovery implementations. nginx-proxy simply uses docker-gen to produce its Nginx reverse proxy configuration.
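To get a feel for what docker-gen does on its own, an invocation along the lines of its README looks like the sketch below; the template path and the reload command are illustrative, not taken from the nginx-proxy image:

# render an Nginx config from container metadata and reload Nginx whenever containers change
$ docker-gen -watch -notify "nginx -s reload" templates/nginx.tmpl /etc/nginx/conf.d/default.conf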
We’re interested in putting an Nginx proxy in front of our Nexus. To get this to work, we need to kill the current Nexus container and restart it with a different environment configuration:
$ docker kill nexus && docker rm nexus   # kill and remove old nexus container
$ docker run -d -e VIRTUAL_HOST=localhost -e VIRTUAL_PORT=80 -e VIRTUAL_PROTO=http -e CONTEXT_PATH --name nexus --expose 8081 sonatype/nexus:oss
These VIRTUAL_* environment variables will be picked up by the Nginx proxy and lead to an automatic proxy configuration. Most important is VIRTUAL_HOST, which sets the server_name for the Nginx configuration. In the end, this simple configuration makes your Nexus visible at http://localhost, with requests automatically forwarded to the Nexus Docker container IP at the exposed port 8081.
To finalize this, start nginx-proxy:
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro --name nginx-proxy jwilder/nginx-proxy
Point your browser to http://localhost and see how your Nexus is available through an Nginx proxy. If you like, you can watch the nginx-proxy error/request logs by following the Docker logs again:
$ docker logs -f nginx-proxy
nginx.1 | localhost 172.17.0.1 - - [27/Sep/2016:16:15:52 +0000] "GET /service/local/outreach/welcome/analytics.js HTTP/1.1" 200 4540 "http://localhost/service/local/outreach/welcome/?version=2.14.0-01&versionMm=2.14&edition=OSS&usertype=anonymous" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.92 Safari/537.36"
nginx.1 | localhost 172.17.0.1 - - [27/Sep/2016:16:15:52 +0000] "GET /service/local/outreach/welcome/nexusSpaces.css HTTP/1.1" 200 1081 "http://localhost/service/local/outreach/welcome/?version=2.14.0-01&versionMm=2.14&edition=OSS&usertype=anonymous" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.92 Safari/537.36"
nginx.1 | localhost 172.17.0.1 - - [27/Sep/2016:16:15:52 +0000] "GET /service/local/outreach/welcome/nexus.js HTTP/1.1" 200 3022 "http://localhost/service/local/outreach/welcome/?version=2.14.0-01&versionMm=2.14&edition=OSS&usertype=anonymous" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.92 Safari/537.36"
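You can also check from the command line that requests really travel through the proxy, and peek at the configuration nginx-proxy generated. The config path below is where the jwilder/nginx-proxy image typically writes its generated file, so treat it as an assumption:

# request the Nexus front page through the proxy on port 80
$ curl -I http://localhost/

# show the server block generated for VIRTUAL_HOST=localhost (path assumed for the jwilder/nginx-proxy image)
$ docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf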
Further improvements
It is easy to see that this approach leaves room for a wide range of improvements. First, the Docker infrastructure itself could easily be set up automatically using a configuration management tool like Ansible. Ansible has a low footprint on both the clients and the executing host, needing only SSH running on the clients and some Python libraries on the executing host. Ansible is being adopted more and more at Intershop to speed up the installation of infrastructure. In our case this would, besides the Docker provisioning, also cover the startup of the containers in an Ansible YAML script.

Another important step would be to configure the Nexus artifact repository securely, so that it can be run on the internet, which can be necessary when you use a cloud approach. You will want SSL encryption for communicating with the repository, and therefore also some certificate handling; the nginx-proxy approach shown here offers options to make use of SSL certificates. Besides securing the communication, you want to do some configuration of the Nexus itself: switch off anonymous access, create your repositories automatically, create specific reader/publisher users and assign them the according permissions. All of this can be automated using REST, which in turn can easily be leveraged again by e.g. Ansible.
This second article about Docker playgrounds does not cover these topics yet, so keep an eye on this blog for further publications in the “containers and I” series.
Summary
We explained the role of an artifact repository in Intershop’s continuous delivery pipeline, showed a way to provision Sonatype Nexus using Docker, and talked about methods of applying persistence. We then explained how an automated Nginx proxy configuration templating mechanism can ease the provisioning of a web-based proxy in front of the Nexus artifact repository.