Below are some considerations to take into account when running IPFS Cluster peers using Docker.
If you want to run one of the `ipfs/ipfs-cluster` Docker containers, it is important to know that:
- The container does not include `go-ipfs`, and you should run the IPFS daemon separately, for example, using the `ipfs/go-ipfs` Docker container. The `ipfs_connector/ipfshttp/node_multiaddress` configuration value will need to be adjusted accordingly to be able to reach the IPFS API. This value supports DNS addresses (`/dns4/ipfs1/tcp/5001`) and is set from the `IPFS_API` environment variable when starting the container, if no previous configuration exists.
- The container uses `/data/ipfs-cluster` as the IPFS Cluster configuration path. We recommend mounting this folder as a means to provide custom configurations and/or data persistence for your peers. This is usually achieved by passing a volume mount (`-v <your-local-path>:/data/ipfs-cluster`) to `docker run`.
- When no configuration exists, one is generated on the first start. In that case:
  - `api/restapi/http_listen_multiaddress` will be set to listen on all interfaces, so the REST API is reachable from outside the container.
  - `ipfs_connector/ipfshttp/proxy_listen_multiaddress` will be set to listen on all interfaces as well.
  - `ipfs_connector/ipfshttp/node_multiaddress` will be set to the value of the `IPFS_API` environment variable.
- If you use `--net=host`, you will need to set `$IPFS_API` or make sure the configuration has the correct `node_multiaddress`.
Make sure you read the Configuration documentation for more information on how to configure IPFS Cluster.
The simplest way to run a cluster peer is `docker run ipfs/ipfs-cluster`. By default, the container runs the `ipfs-cluster-service` daemon.
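As a sketch of the points above, the following commands run an IPFS daemon and a cluster peer on the same Docker network, with the configuration folder mounted for persistence. The network and container names (`cluster-net`, `ipfs1`, `cluster1`) and the local data path are illustrative choices, not fixed by the images:

```shell
# Create a user-defined network so containers can reach each other by name (DNS).
docker network create cluster-net

# Run the IPFS daemon separately (the cluster image does not include go-ipfs).
docker run -d --name ipfs1 --network cluster-net ipfs/go-ipfs

# Run the cluster peer, pointing it at the IPFS API via a DNS multiaddress
# and mounting a local folder over /data/ipfs-cluster for persistence.
docker run -d --name cluster1 --network cluster-net \
  -e IPFS_API=/dns4/ipfs1/tcp/5001 \
  -v "$PWD/cluster1-data":/data/ipfs-cluster \
  ipfs/ipfs-cluster
```

Since `IPFS_API` is only applied when no previous configuration exists, edit the generated configuration in the mounted folder if you later need to change the IPFS API address.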
We also provide an example `docker-compose.yml` which launches an IPFS Cluster formed by two cluster peers and two IPFS daemons.
The first cluster peer is launched first and acts as a bootstrapper. The second peer is bootstrapped against the first one during its first boot. During the first launch, configurations are automatically generated and persisted for subsequent launches in the `./compose` folder.
Only the IPFS swarm port (tcp `4101`) and the IPFS Cluster API port (tcp `9194`) are exposed outside of the containers.
This compose file is provided as an example of how to set up a multi-peer cluster using Docker containers.
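A typical session with the example compose file might look as follows. This is a sketch assuming the `docker-compose.yml` has been downloaded to the current directory and that `ipfs-cluster-ctl` is installed on the host:

```shell
# Start the two cluster peers and two IPFS daemons in the background.
docker-compose up -d

# After the first start, the generated configurations appear under ./compose.
ls ./compose

# Query the second peer through its exposed Cluster API port (tcp 9194).
ipfs-cluster-ctl --host /ip4/127.0.0.1/tcp/9194 peers ls
```

`ipfs-cluster-ctl --host` takes a multiaddress, which is why the loopback address is written as `/ip4/127.0.0.1/tcp/9194` rather than `127.0.0.1:9194`.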