Recap of the Docker philosophy:
package the application and its environment into a single image.
Need: data persistence.
When a MySQL container is deleted, its data is gone. Need: MySQL data stored on the host!
Containers can share data with one another, and data produced inside a Docker container can be synchronized to the host.
This is the volume technique: directory mounting, i.e. mounting a directory inside the container onto the Linux host!
Summary: volumes give containers persistence and synchronization, and containers can also share data with each other!
# Method 1: mount directly on the command line with -v
docker run -it -v <host dir>:<container dir>
# Test
[root@iZ2vc20ehn0q0ihrgccmd2Z home]# docker run -it -v /home/dockerTest:/home centos /bin/bash
[root@140970ec5880 /]# ls
bin dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var
[root@iZ2vc20ehn0q0ihrgccmd2Z ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
140970ec5880 centos "/bin/bash" 55 seconds ago Up 54 seconds stupefied_albattani
[root@iZ2vc20ehn0q0ihrgccmd2Z ~]# docker inspect 140970ec5880
"Mounts": [
    {
        "Type": "bind",
        "Source": "/home/dockerTest",
        "Destination": "/home",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
],
Test again:
1. Stop the container: exit
2. On the host, edit a file inside the bound path: vim xxx
3. Restart the container: docker start <container id>
4. Attach to the container: docker attach <container id>
5. Check the file content: cat xxx
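The verification flow above needs a running Docker daemon, but the underlying idea, two paths referring to the same directory, can be sketched without Docker. Below is an emulation with a symlink (a real bind mount is done by the kernel, not a symlink; the paths here are made-up stand-ins):

```shell
# Emulation of the bind-mount idea: the "host" path and the "container" path
# refer to the same directory, so an edit through one is visible through the other.
host_dir=$(mktemp -d)                     # stands in for /home/dockerTest on the host
container_view="/tmp/container_view_$$"   # stands in for /home inside the container
ln -s "$host_dir" "$container_view"

echo "edited on the host" > "$host_dir/xxx"   # step 2: host edits the file
content=$(cat "$container_view/xxx")          # step 5: "container" reads it back
echo "$content"

rm -f "$container_view"; rm -rf "$host_dir"
```

Whichever side writes, the other side sees the change, which is exactly the container/host synchronization the notes describe.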
docker pull mysql
# Run the container with data mounts; when installing/starting MySQL a root password must be configured!!!
# Official example: docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
# Flags:
-d      run in the background
-p      port mapping
-v      volume mount
-e      environment variable
--name  container name
[root@iZ2vc20ehn0q0ihrgccmd2Z home]# docker run -d -p 3310:3306 -v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root --name mysql-xm mysql:latest
8731e0276daa6f965f754710b1dce3fc3e0a668fd6efa3e6f74b14f5044dc7b7
# After it starts, connect with a third-party client from your own machine to test
# Anonymous mount
-v <container path>
docker run -d -P --name nginx01 -v /etc/nginx nginx
# List all volumes
docker volume ls
# Note that -v was given only the container path here, no host path!
# Named mount
docker run -d -P --name nginx02 -v juming-nginx:/etc/nginx nginx
# i.e. -v <volume name>:<container path>
# Inspect this volume
[root@iZ2vc20ehn0q0ihrgccmd2Z ~]# docker volume inspect juming-nginx
[
    {
        "CreatedAt": "2021-04-02T10:12:57+08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/juming-nginx/_data",
        "Name": "juming-nginx",
        "Options": null,
        "Scope": "local"
    }
]
When no directory is specified, every container volume ends up under /var/lib/docker/volumes/xxx/_data.
Named mounts make a volume easy to find, so named mounts are used most of the time.
# How to tell whether a mount is named, anonymous, or bound to a specific path:
-v <container path>                 # anonymous
-v <volume name>:<container path>   # named
-v /<host path>:<container path>    # bind to a specific path
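The three forms can be told apart from the text before the first colon. The helper below is a hypothetical classifier, not part of Docker, that mimics the rule: a leading `/` before a colon means a host path (bind), any other name before a colon is a volume name (named), and no colon means only a container path was given (anonymous):

```shell
# Hypothetical helper that mimics how Docker classifies a -v spec.
classify_mount() {
  case "$1" in
    /*:*) echo "bind"      ;;  # -v /host/path:/container/path
    *:*)  echo "named"     ;;  # -v volname:/container/path
    *)    echo "anonymous" ;;  # -v /container/path
  esac
}

classify_mount "/etc/nginx"               # anonymous
classify_mount "juming-nginx:/etc/nginx"  # named
classify_mount "/home/nginx:/etc/nginx"   # bind
```

A trailing `:ro`/`:rw` (covered below) adds one more colon but does not change which category the spec falls into.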
Extension:
# Append :ro or :rw to the container path to change the access mode
ro  readonly   # read-only
rw  readwrite  # read-write
# Once a mode is set, the container is restricted in what it can do with the mounted content!
docker run -d -P --name nginx02 -v juming-nginx:/etc/nginx:ro nginx
docker run -d -P --name nginx02 -v juming-nginx:/etc/nginx:rw nginx
# With ro, the path can only be written from the host; the container cannot modify it!
A Dockerfile is the build file used to construct a Docker image: a command script!
Running the script produces an image. Images are built layer by layer; the script is a series of commands, and each command adds one layer.
Method 2: mount via VOLUME in a Dockerfile
# Create a dockerfile (any name works; Dockerfile is the convention)
# Instructions in the file are UPPERCASE
FROM centos
VOLUME ["volume1","volume2"]
CMD echo "-----end-------"
CMD /bin/bash
# Each instruction here is one layer of the image
Command: docker build -f /home/docker-test-volume/dockerfile -t xm-centos:1.0 .
# -f /home/docker-test-volume/dockerfile   path to the build file
# -t xm-centos:1.0                         name:tag
# Hands-on
[root@iZ2vc20ehn0q0ihrgccmd2Z docker-test-volume]# vim dockerfile
[root@iZ2vc20ehn0q0ihrgccmd2Z docker-test-volume]# cat dockerfile
FROM centos
VOLUME ["volume1","volume2"]
CMD echo "-----end-------"
CMD /bin/bash
[root@iZ2vc20ehn0q0ihrgccmd2Z docker-test-volume]# docker build -f /home/docker-test-volume/dockerfile -t xm-centos .
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM centos
latest: Pulling from library/centos
7a0437f04f83: Already exists
Digest: sha256:5528e8b1b1719d34604c87e11dcd1c0a20bedf46e83b5632cdeac91b8c04efc1
Status: Downloaded newer image for centos:latest
---> 300e315adb2f
Step 2/4 : VOLUME ["volume1","volume2"]
---> Running in 403d3cdbc135
Removing intermediate container 403d3cdbc135
---> 702f06c938d5
Step 3/4 : CMD echo "-----end-------"
---> Running in a4d0e1f29b07
Removing intermediate container a4d0e1f29b07
---> e6db3186edb9
Step 4/4 : CMD /bin/bash
---> Running in 874797a783f2
Removing intermediate container 874797a783f2
---> a33178ea66fa
Successfully built a33178ea66fa
Successfully tagged xm-centos:latest
[root@iZ2vc20ehn0q0ihrgccmd2Z docker-test-volume]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
xm-centos latest a33178ea66fa 2 minutes ago 209MB
Each of these volumes necessarily has a synchronized directory on the host.
Check the mount paths:
docker inspect <container id>
"Mounts": [
    {
        "Type": "volume",
        "Name": "96701dcb312e2bb90d70dbf208ab883df971929a040c56db11cba90599c9e517",
        "Source": "/var/lib/docker/volumes/96701dcb312e2bb90d70dbf208ab883df971929a040c56db11cba90599c9e517/_data",
        "Destination": "volume2",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    },
    {
        "Type": "volume",
        "Name": "e914414f2f7206c6950d92382e821ab4a84f9b75a877ead16ec70c7024f481da",
        "Source": "/var/lib/docker/volumes/e914414f2f7206c6950d92382e821ab4a84f9b75a877ead16ec70c7024f481da/_data",
        "Destination": "volume1",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
],
Synchronizing data between multiple MySQL containers!
# Start 3 containers to test
[root@iZ2vc20ehn0q0ihrgccmd2Z /]# docker run -it --name docker01 xm-centos:1.0
[root@iZ2vc20ehn0q0ihrgccmd2Z /]# docker run -it --name docker02 --volumes-from docker01 xm-centos:1.0
docker02 mounts the volumes of docker01,
which gives shared file data: even if docker01 is deleted, the data still exists in docker02.
Sharing data between multiple MySQL containers:
docker run -d -p 3310:3306 -v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root --name mysql-xm01 mysql:latest
docker run -d -p 3311:3306 -e MYSQL_ROOT_PASSWORD=root --name mysql-xm02 --volumes-from mysql-xm01 mysql:latest
# Now the two containers share the same data!
Conclusion:
Data-volume containers pass configuration between containers; a data-volume container's lifecycle lasts until no container uses it any more.
But once data is persisted to the host, the local copy is never deleted!
A dockerfile is the file used to build a Docker image!
Build steps:
1. Write a dockerfile
2. docker build it into an image
3. docker run the image
4. docker push the image (Docker Hub, Alibaba Cloud image repository!)
Basics:
1. Every instruction must be uppercase
2. Instructions execute in order from top to bottom
3. # marks a comment
4. Every instruction creates and commits a new image layer!
Dockerfile: the build file that defines the steps
Docker image: built from the Dockerfile; the final product that gets published and run
Docker container: an image running and providing a service
FROM        # base image
MAINTAINER  # image author: name + email
RUN         # commands to run while building
ADD         # add files, e.g. a tomcat tarball; archives are extracted automatically
WORKDIR     # working directory of the image
VOLUME      # directory to mount
EXPOSE      # declare the exposed port
CMD         # command to run when the container starts; only the last CMD takes effect, and it can be replaced by run arguments
ENTRYPOINT  # command to run when the container starts; run arguments are appended to it
ONBUILD     # instructions triggered when another Dockerfile builds FROM this image
COPY        # like ADD, but just copies files into the image
ENV         # set environment variables at build time
# Write the Dockerfile
FROM centos
MAINTAINER xuanmeng<496806621@qq.com>
ENV MYPATH /usr/local
WORKDIR $MYPATH
RUN yum -y install vim
RUN yum -y install net-tools
EXPOSE 80
CMD echo $MYPATH
CMD echo "---end---"
CMD /bin/bash
# Build the image from the file
# docker build -f <dockerfile path> -t <image name>:<tag> .
[root@iZ2vc20ehn0q0ihrgccmd2Z dockerfile]# docker build -f mydockerfile-centos -t mycentos:1.0 .
Difference between CMD and ENTRYPOINT
# CMD
FROM centos
CMD ["ls","-a"]
After building, run it:
e.g. docker build -f xxx -t centos-cmd:1.0 .
docker run <image id from the build>
docker run <image id from the build> -l    # intent: run ls -al
Error! The extra -l replaces the whole CMD, and -l alone is not a command.
# ENTRYPOINT
FROM centos
ENTRYPOINT ["ls","-a"]
After building, run it:
e.g. docker build -f xxx -t centos-entrypoint:1.0 .
docker run <image id from the build>
docker run <image id from the build> -l    # intent: run ls -al
--- this works: the -l is appended after ENTRYPOINT, giving ls -a -l
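The two behaviors can be emulated in plain shell. This is a sketch of the assumed semantics only, not how Docker is implemented: extra `docker run` arguments replace the CMD entirely, but are appended after the ENTRYPOINT.

```shell
# Sketch of how Docker assembles the container command (assumed semantics).
entrypoint="ls -a"; cmd="ls -a"; run_args="-l"

# CMD image: run args REPLACE the CMD -> "-l" alone, which is not a command.
cmd_result="${run_args:-$cmd}"
echo "CMD image runs:        $cmd_result"

# ENTRYPOINT image: run args are APPENDED -> "ls -a -l".
entry_result="$entrypoint${run_args:+ $run_args}"
echo "ENTRYPOINT image runs: $entry_result"
```

With no extra run arguments both images would run `ls -a`; the difference only shows once arguments are passed.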
FROM centos
MAINTAINER xm<496806621@qq.com>
COPY readme.txt /usr/local/readme.txt
ADD apache-tomcat-9.0.44.tar.gz /usr/local/
ADD jdk-linux-x64.tar.gz /usr/local/
RUN yum -y install vim
ENV MYPATH /usr/local
WORKDIR $MYPATH
ENV JAVA_HOME /usr/local/jdk1.8.0_131
ENV CLASSPATH $JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.44
ENV CATALINA_BASE /usr/local/apache-tomcat-9.0.44
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin
EXPOSE 8080
CMD /usr/local/apache-tomcat-9.0.44/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.44/logs/catalina.out
3. Build the image
# docker build -t mydiytomcat .   or   docker build -f Dockerfile -t mydiytomcat .
Note: a file named Dockerfile is picked up by default, so -f can be omitted
Check with docker images after a successful build
Start the container with the webapps directory mounted:
# docker run -d -p 9090:8080 --name xmtomcat -v /home/xuanmeng/build/tomcat/webapps:/usr/local/apache-tomcat-9.0.44/webapps mydiytomcat
Under the local path /home/xuanmeng/build/tomcat/webapps, create a test directory (a hello-world webapp containing index.html and web.xml)
web.xml
-----------------------------
<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.4"
xmlns="http://java.sun.com/xml/ns/j2ee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee
http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
</web-app>
----------------------------
index.html
----------------------------
<!doctype html>
<html lang='en'>
<head>
<meta charset='utf-8' />
<title>Hello World</title>
</head>
<body>
<p>Hello World!!!</p>
</body>
</html>
----------------------------
1. Log in to Docker Hub (register an account first): docker login -u <user> -p <password>
[root@iZ2vc20ehn0q0ihrgccmd2Z ~]# docker login -u xuanmengno3
Login Succeeded
2. Tag a version: docker tag <image id> <name>:<tag>
[root@iZ2vc20ehn0q0ihrgccmd2Z ~]# docker tag 63e3b750838d xm/mydiytomcat:1.0
3. Push it
[root@iZ2vc20ehn0q0ihrgccmd2Z ~]# docker push xm/mydiytomcat:1.0
Publishing to Alibaba Cloud:
1. Log in to Alibaba Cloud
2. Find the Container Registry service
3. Create a namespace (create an instance first)
4. Create an image repository (local repository)
5. Follow the push commands shown in the Alibaba Cloud console
docker save -o xxx.tar <image name>
# Hands-on
[root@iZ2vc20ehn0q0ihrgccmd2Z home]# docker save -o my_Tomcat_v1.0.tar xuanmengno3/tomcat:1.0
[root@iZ2vc20ehn0q0ihrgccmd2Z home]# ls
dockerfile docker-test-volume my_Tomcat_v1.0.tar testbbb.java xuan
dockerTest mysql testaaa.java www xuanmeng
[root@iZ2vc20ehn0q0ihrgccmd2Z home]# docker load -i my_Tomcat_v1.0.tar
Loaded image: xuanmengno3/tomcat:1.0
Clear the whole environment:
docker rm -f $(docker ps -aq)       # remove all containers
docker rmi -f $(docker images -aq)  # remove all images
Check with ip addr: the host shows three networks (lo, eth0, docker0)
# Question: how does Docker handle container network access?
[root@iZ2vc20ehn0q0ihrgccmd2Z home]# docker run -d -p 9090:8080 xuanmengno3/tomcat:1.0
526f90665c454e29890e48244324f5c872235d40bdde7633891a9165b4ff85fc
[root@iZ2vc20ehn0q0ihrgccmd2Z home]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
526f90665c45 xuanmengno3/tomcat:1.0 "/bin/sh -c '/usr/lo…" 3 seconds ago Up 2 seconds 0.0.0.0:9090->8080/tcp jovial_shamir
[root@iZ2vc20ehn0q0ihrgccmd2Z home]# docker exec -it 526f90665c45 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
# Question: can the Linux host ping the inside of the container?
[root@iZ2vc20ehn0q0ihrgccmd2Z home]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.075 ms
# Yes, it pings
# Notice that container network interfaces come in pairs
# veth-pair: a pair of virtual device interfaces, always created in pairs; one end attaches to the protocol stack, and the two ends connect to each other
# Because of this property, a veth-pair acts as a bridge between virtual network devices
# OpenStack, Docker container links, and OVS links all use the veth-pair technique
Conclusion: tomcat01 and tomcat02 share the same "router", docker0.
Unless a network is specified, every container is routed through docker0, and Docker assigns it an available IP by default.
Docker uses Linux bridging;
all network interfaces in Docker are virtual, and virtual interfaces forward efficiently!
As soon as a container is deleted, its veth-pair disappears with it!
Consider a scenario: we write a microservice with database url=ip:xxx, and the database IP changes while the project keeps running. We would like to handle this by addressing containers by name. Can we?
[root@iZ2vc20ehn0q0ihrgccmd2Z home]# docker exec -it tomcat03 ping tomcat01
ping: tomcat01: Name or service not known
# How can we solve this?
# Use --link to connect them by name
[root@iZ2vc20ehn0q0ihrgccmd2Z home]# docker run -d --name tomcat04 --link tomcat01 xuanmengno3/tomcat:1.0
33cb668c415632beeef9753a6f0a4b079988a695b40fc9321cfd4cebc8b29a1e
[root@iZ2vc20ehn0q0ihrgccmd2Z home]# docker exec tomcat04 ping tomcat01
PING tomcat01 (172.17.0.2) 56(84) bytes of data.
64 bytes from tomcat01 (172.17.0.2): icmp_seq=1 ttl=64 time=0.093 ms
# Can it ping back the other way?
No, it cannot: the link is one-directional.
Check the hosts configuration of tomcat04:
[root@iZ2vc20ehn0q0ihrgccmd2Z home]# docker exec -it tomcat04 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 tomcat01 708d9752f8bd
172.17.0.4 33cb668c4156
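The hosts file above is the whole trick: --link just writes an extra line into the container's /etc/hosts, and name resolution is then a plain hosts-file lookup. That lookup can be emulated locally (the file contents mirror the output above; `resolve` is a made-up helper):

```shell
# Emulate the hosts-file lookup that makes `ping tomcat01` work after --link.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
127.0.0.1 localhost
172.17.0.2 tomcat01 708d9752f8bd
172.17.0.4 33cb668c4156
EOF

# Look up the IP for a hostname in a hosts-format file (first match wins).
resolve() {
  awk -v name="$2" '{ for (i = 2; i <= NF; i++) if ($i == name) { print $1; exit } }' "$1"
}

ip=$(resolve "$hosts_file" tomcat01)
echo "$ip"    # 172.17.0.2
rm -f "$hosts_file"
```

Since tomcat01's own /etc/hosts never received an entry for tomcat04, the same lookup fails in the reverse direction, which is why --link is one-way.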
--link is clumsy and no longer recommended.
List all Docker networks:
docker network ls
Network modes:
bridge: bridged via docker0 (the default; custom networks also use bridge mode)
none: no network configured
host: share the host's network
container: join another container's network stack (rarely used, very limited)
Test:
# A plain docker run implies --net bridge, which is docker0
docker run -d -P --name tomcat01 tomcat
docker run -d -P --name tomcat01 --net bridge tomcat   # equivalent to the line above
# docker0 limitation: by default containers cannot be reached by domain name; --link can patch that!
Custom network: docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
# We can define our own network:
# --driver bridge          bridge mode
# --subnet 192.168.0.0/16  subnet
# --gateway 192.168.0.1    gateway
[root@iZ2vc20ehn0q0ihrgccmd2Z ~]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
247a1d9458211a580250abc160ff69379712c95567e182528d3ec85463ec81fa
[root@iZ2vc20ehn0q0ihrgccmd2Z ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
60f27d4417f0 bridge bridge local
ae7307b414ae host host local
247a1d945821 mynet bridge local
e15f5c232f9f none null local
# Create 2 containers on the custom network [mynet]
[root@iZ2vc20ehn0q0ihrgccmd2Z ~]# docker run -d -P --name tomcat-net01 --net mynet xuanmengno3/tomcat:1.0
720b9fc1d4fbc24560f66429c20ed3609f2f87d083c1207c3a0784531441ed7b
[root@iZ2vc20ehn0q0ihrgccmd2Z ~]# docker run -d -P --name tomcat-net02 --net mynet xuanmengno3/tomcat:1.0
4351888f34ef0e856be41e7f22102151020390c7a5eb936df1a211bbeae91d11
[root@iZ2vc20ehn0q0ihrgccmd2Z ~]# docker network inspect mynet
[
{
"Name": "mynet",
"Id": "247a1d9458211a580250abc160ff69379712c95567e182528d3ec85463ec81fa",
"Created": "2021-04-06T12:52:38.205019805+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.0.0/16",
"Gateway": "192.168.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"4351888f34ef0e856be41e7f22102151020390c7a5eb936df1a211bbeae91d11": {
"Name": "tomcat-net02",
"EndpointID": "1d1b5263c11aa360434b2c5801d98b8c78aa270550349c4caf77f7fc7c04b4e4",
"MacAddress": "02:42:c0:a8:00:03",
"IPv4Address": "192.168.0.3/16",
"IPv6Address": ""
},
"720b9fc1d4fbc24560f66429c20ed3609f2f87d083c1207c3a0784531441ed7b": {
"Name": "tomcat-net01",
"EndpointID": "f0236019e68ae599483d34c7f48e1d26587247987535bc8562cb9058530cef37",
"MacAddress": "02:42:c0:a8:00:02",
"IPv4Address": "192.168.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Test: ping by IP, or ping by name
# ping by IP
[root@iZ2vc20ehn0q0ihrgccmd2Z ~]# docker exec -it tomcat-net01 ping 192.168.0.3
PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.101 ms
64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.066 ms
# ping by name
[root@iZ2vc20ehn0q0ihrgccmd2Z ~]# docker exec -it tomcat-net01 ping tomcat-net02
PING tomcat-net02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat-net02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from tomcat-net02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.068 ms
For custom networks Docker maintains the name-to-IP mapping for us; this is the recommended way to use Docker networking!
Benefits:
redis - different clusters on different networks keep each cluster safe and healthy!
mysql - different clusters on different networks keep each cluster safe and healthy!
docker network connect mynet tomcat01
connects a container on the docker0 network into mynet.
# Test: add tomcat01 to the mynet network
[root@iZ2vc20ehn0q0ihrgccmd2Z ~]# docker network connect mynet tomcat01
# tomcat01 can now ping tomcat-net01 in the mynet network directly
# One container, two IP addresses -- like an Alibaba Cloud server with a public IP and a private IP
# tomcat01 connects fine
[root@iZ2vc20ehn0q0ihrgccmd2Z ~]# docker exec -it tomcat01 ping tomcat-net01
PING tomcat-net01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.071 ms
64 bytes from tomcat-net01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.071 ms
# tomcat03 (not connected to mynet) cannot reach it
[root@iZ2vc20ehn0q0ihrgccmd2Z ~]# docker exec -it tomcat03 ping tomcat-net01
ping: tomcat-net01: Name or service not known
# Create the network
docker network create redisnet --subnet 172.38.0.0/16
# Generate the configs for 6 redis nodes with a script
for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
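The same loop can be dry-run safely into a temporary directory first to check what it generates (the temp dir below is a stand-in for /mydata/redis):

```shell
# Dry run of the config-generation loop into a temp dir instead of /mydata/redis.
base=$(mktemp -d)
for port in $(seq 1 6); do
  mkdir -p "$base/node-${port}/conf"
  cat > "$base/node-${port}/conf/redis.conf" <<EOF
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done

ls "$base"                                                 # node-1 .. node-6
grep cluster-announce-ip "$base/node-3/conf/redis.conf"    # cluster-announce-ip 172.38.0.13
```

Every node listens on 6379 inside its container; only the announce IP (172.38.0.11-.16) differs per node, matching the --ip each container is given below.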
docker run -p 6371:6379 -p 16371:16379 --name redis01 \
-v /mydata/redis/node-1/data:/data \
-v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
-d --net redisnet --ip 172.38.0.11 redis:5.0.12-alpine3.13 redis-server /etc/redis/redis.conf
# ... repeat for redis02 through redis05, bumping the ports (6372-6375, 16372-16375), node dirs and IPs (.12-.15) ...
docker run -p 6376:6379 -p 16376:16379 --name redis06 \
-v /mydata/redis/node-6/data:/data \
-v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
-d --net redisnet --ip 172.38.0.16 redis:5.0.12-alpine3.13 redis-server /etc/redis/redis.conf
After creating all 6, check:
[root@iZ2vc20ehn0q0ihrgccmd2Z redis]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3736c9518e9 redis:5.0.12-alpine3.13 "docker-entrypoint.s…" 3 seconds ago Up 2 seconds 0.0.0.0:6376->6379/tcp, 0.0.0.0:16376->16379/tcp redis06
786da300a9c7 redis:5.0.12-alpine3.13 "docker-entrypoint.s…" 24 seconds ago Up 24 seconds 0.0.0.0:6375->6379/tcp, 0.0.0.0:16375->16379/tcp redis05
779acf01de16 redis:5.0.12-alpine3.13 "docker-entrypoint.s…" 50 seconds ago Up 49 seconds 0.0.0.0:6374->6379/tcp, 0.0.0.0:16374->16379/tcp redis04
83aa9bee60f5 redis:5.0.12-alpine3.13 "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:6373->6379/tcp, 0.0.0.0:16373->16379/tcp redis03
831d2f770342 redis:5.0.12-alpine3.13 "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:6372->6379/tcp, 0.0.0.0:16372->16379/tcp redis02
cf883fdb0fc4 redis:5.0.12-alpine3.13 "docker-entrypoint.s…" 3 minutes ago Up 3 minutes 0.0.0.0:6371->6379/tcp, 0.0.0.0:16371->16379/tcp redis01
Enter a redis node:
[root@iZ2vc20ehn0q0ihrgccmd2Z redis]# docker exec -it redis01 /bin/sh
/data # ls
appendonly.aof nodes.conf   # appendonly.aof = persistence, nodes.conf = cluster node info
Create the cluster
# Command
redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: bbdb614d654ec8c139e774ca6803f3068e409955 172.38.0.11:6379
slots:[0-5460] (5461 slots) master
M: 0e0a506e5f25abb810488e3bd43959bc6776d939 172.38.0.12:6379
slots:[5461-10922] (5462 slots) master
M: c9062088c5344926880c342c9d3198ad7a15cc97 172.38.0.13:6379
slots:[10923-16383] (5461 slots) master
S: cd8a6522a8eb6c43c0022b8b0298338febe5b998 172.38.0.14:6379
replicates c9062088c5344926880c342c9d3198ad7a15cc97
S: 0c5726e5ffbfb9a9179e84635d2ef1b695407dd8 172.38.0.15:6379
replicates bbdb614d654ec8c139e774ca6803f3068e409955
S: 8d3f81848d366efd57e305599c2f51673a0a1d32 172.38.0.16:6379
replicates 0e0a506e5f25abb810488e3bd43959bc6776d939
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: bbdb614d654ec8c139e774ca6803f3068e409955 172.38.0.11:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 0c5726e5ffbfb9a9179e84635d2ef1b695407dd8 172.38.0.15:6379
slots: (0 slots) slave
replicates bbdb614d654ec8c139e774ca6803f3068e409955
S: 8d3f81848d366efd57e305599c2f51673a0a1d32 172.38.0.16:6379
slots: (0 slots) slave
replicates 0e0a506e5f25abb810488e3bd43959bc6776d939
M: c9062088c5344926880c342c9d3198ad7a15cc97 172.38.0.13:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: 0e0a506e5f25abb810488e3bd43959bc6776d939 172.38.0.12:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: cd8a6522a8eb6c43c0022b8b0298338febe5b998 172.38.0.14:6379
slots: (0 slots) slave
replicates c9062088c5344926880c342c9d3198ad7a15cc97
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# Connect in cluster mode
redis-cli -c
cluster info    # cluster information
cluster nodes   # node information
# set an arbitrary value
127.0.0.1:6379> set a b
-> Redirected to slot [15495] located at 172.38.0.13:6379
OK
# get a retrieves the value
172.38.0.13:6379> get a
"b"
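The redirect to slot 15495 above is deterministic: Redis Cluster hashes the key with CRC16 (XMODEM polynomial 0x1021) modulo 16384. A pure-bash sketch of that slot calculation (it ignores {hash tag} handling, which real Redis also applies):

```shell
# Compute the Redis Cluster slot of a key: CRC16-XMODEM(key) mod 16384.
crc16_slot() {
  local key=$1 crc=0 i b byte
  for ((i = 0; i < ${#key}; i++)); do
    printf -v byte '%d' "'${key:i:1}"      # character -> ASCII code
    crc=$(( (crc ^ (byte << 8)) & 0xFFFF ))
    for ((b = 0; b < 8; b++)); do          # shift out 8 bits with poly 0x1021
      if (( crc & 0x8000 )); then
        crc=$(( ((crc << 1) ^ 0x1021) & 0xFFFF ))
      else
        crc=$(( (crc << 1) & 0xFFFF ))
      fi
    done
  done
  echo $(( crc % 16384 ))
}

crc16_slot a    # 15495
```

Slot 15495 falls in the 10923-16383 range owned by the master at 172.38.0.13, which is why the client was redirected there.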
# Stop redis03, the master at .13
# Reconnect and observe: the .13 master is marked fail, the .14 slave has been promoted to master, and the value is now served from .14
Building a Spring Boot project image
Package the application as a jar, then write the Dockerfile:
# Dockerfile
FROM java:8
COPY *.jar /app.jar
CMD ["--server.port=8080"]
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]
Build the image:
docker build -t xm .
[root@iZ2vc20ehn0q0ihrgccmd2Z idea]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@iZ2vc20ehn0q0ihrgccmd2Z idea]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
xm latest 6ffa490bf813 19 seconds ago 660MB
Run it and test
[root@iZ2vc20ehn0q0ihrgccmd2Z idea]# docker run -d -P --name sp-web xm
[root@iZ2vc20ehn0q0ihrgccmd2Z idea]# curl localhost:49155/hello
this is xuanmeng!
From now on, with Docker, what we deliver to others is simply an image!