What is the best / most correct way to configure automatic deployment of a Docker container to a Swarm cluster?

Good day.
I have set up a test environment for automatic deployment of containers to the cluster, and there is a question about how best to configure it. The TeamCity agent checks out the source code, runs the tests, builds the container, and pushes it to the registry. The next question is: how do I pick it up on the cluster?

Option 1. Standard docker tooling:
export DOCKER_HOST=swarm-master:4000
docker run -d -p 1234:1234 --name my_application --label registry my_application/my_application:${version}


Option 2. Docker-compose:
export DOCKER_HOST=swarm-master:4000
docker-compose up my_application


Option 1 is good because it works. But stopping services becomes a problem when there is more than one, and the deployment configuration ends up living in TeamCity rather than in the project.
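For Option 1, one way around the stop/replace problem is a small wrapper that removes the previous container before starting the new one. This is only a sketch under the question's assumptions (container name my_application, port 1234); the `${DOCKER:-docker}` indirection is purely illustrative, so the function can be dry-run with DOCKER=echo:

```shell
# Hypothetical redeploy helper for Option 1. Assumes DOCKER_HOST already
# points at the Swarm master (export DOCKER_HOST=swarm-master:4000).
# ${DOCKER:-docker} allows a dry run via DOCKER=echo.
deploy() {
  name=$1    # container name, e.g. my_application
  image=$2   # image reference, e.g. my_application/my_application:${version}

  # Stop and remove the previous container, ignoring errors if none exists.
  ${DOCKER:-docker} stop "$name" 2>/dev/null || true
  ${DOCKER:-docker} rm "$name" 2>/dev/null || true

  # Start the new version.
  ${DOCKER:-docker} run -d -p 1234:1234 --name "$name" "$image"
}
```

Usage from the build step would then be a single call: `deploy my_application my_application/my_application:"${version}"`.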

Option 2 is good in that the deployment configuration lives in the project itself, but there is a problem with launching it. From the TC agent the service does not start (Error response from daemon: failed to parse pool request for address space "GlobalDefault" pool "" subpool "": cannot find GlobalDefault address space (most likely the backing datastore is not configured)). From a machine inside the cluster, however, it runs fine. Where should I dig? Should I connect to the cluster master, copy the docker-compose.yml file there, and run it locally? Or should docker-compose be able to work with a remote cluster directly?

UPD:
The problem with failed to parse pool request for address space "GlobalDefault" has been solved: I had to add --cluster-store=consul://consul-host:8500 --cluster-advertise=eth0:2375 to the configuration of every daemon in the cluster.
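For reference, the two daemon options mentioned above, written as they might appear on a dockerd command line (consul-host:8500 and eth0 are taken from the update; how you pass the flags to the daemon depends on the init system):

```shell
# On every Docker daemon in the cluster (standalone Swarm needs a shared
# key-value store for overlay networking):
dockerd --cluster-store=consul://consul-host:8500 \
        --cluster-advertise=eth0:2375
```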

The application now comes up, but a problem with Reschedule has surfaced:
restart: unless-stopped
labels:
  com.docker.swarm.reschedule-policies: "[\"on-node-failure\"]"

When a node failed, the container was successfully moved to another node, but it did not start...
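For context, the snippet above would sit in a compose file roughly like this (a sketch only: standalone Swarm's rescheduling requires compose file format 2, and the image and port are taken from Option 1 of the question):

```yaml
version: "2"
services:
  my_application:
    image: my_application/my_application:${version}
    ports:
      - "1234:1234"
    restart: unless-stopped
    labels:
      com.docker.swarm.reschedule-policies: '["on-node-failure"]'
```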

UPD2:
The problem with Reschedule turned out to be that I had put the Swarm node and the Consul node on the same machine. If you shut down a machine that does not host Consul, the service comes up normally...
July 9th 19 at 10:43
2 answers
July 9th 19 at 10:45
July 9th 19 at 10:47
In the end the problem was solved with a self-written deployment script. It also solved the zero-downtime problem by bringing up the new container and cutting down the old one in sequence.
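The answer does not include the script itself, but the approach it describes (bring up the new container, then retire the old one) could look roughly like this. Every name here is a placeholder, and `${DOCKER:-docker}` is only an indirection so the sketch can be dry-run with DOCKER=echo:

```shell
# Hypothetical zero-downtime redeploy sketch: start the new container first,
# take down the old one afterwards. Port publishing and load-balancer
# handling are deliberately omitted.
redeploy() {
  name=$1    # service name, e.g. my_application
  image=$2   # new image reference
  old="${name}-old"

  # Keep the running container alive under a temporary name.
  ${DOCKER:-docker} rename "$name" "$old" 2>/dev/null || true

  # Bring up the new version under the canonical name.
  ${DOCKER:-docker} run -d --name "$name" "$image"

  # Only then retire the previous container.
  ${DOCKER:-docker} stop "$old" 2>/dev/null || true
  ${DOCKER:-docker} rm "$old" 2>/dev/null || true
}
```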

Find more questions by tags: Continuous delivery, Docker