Rancher: Experiments and Study Notes
2017-11-20 · Author: k8w

Environment > Stack > Service > Container

Inside a container, other containers can be reached in any of the following ways:

  1. Container name, e.g. ping wordpress_wordpress_1
  2. Service name, e.g. ping wordpress
  3. ServiceName.StackName, e.g. ping wordpress.myblog
  4. Service link, e.g. link the MySQL service under the name db, then ping db works
  5. Service alias, same behavior as a service link

A service can run N containers. When a service is accessed by its service name, Rancher's built-in DNS service returns a randomized list of the IPs of that service's healthy containers.
When a container is reached through its service name rather than a published port, all of its ports are accessible (see the sanity-check sketch after the quoted documentation below).
The official documentation explains this as follows:

DNS Service
Rancher provides an infrastructure service for a distributed DNS service by using its own lightweight DNS server coupled with a highly available control plane. Each healthy container is automatically added to the DNS service when linked to another service or added to a Service Alias. When queried by the service name, the DNS service returns a randomized list of IP addresses of the healthy containers implementing that service.

  • By default, all services within the same stack are added to the DNS service without requiring explicit service links, which can be set under Service Links in a service.
  • You can resolve containers within the same stacks by the service names.
  • If you need a custom DNS name for your service, that is different from your service name, you will be required to set a link to get the custom DNS name.
  • Links are still required if a Service Alias is used.
  • To make services resolvable that are in different stacks, you can use service_name.stack_name; explicit links, which can be set under Service Links in a service, are not required.
    Because Rancher’s overlay networking provides each container with a distinct IP address, you do not need to deal with port mappings and do not need to handle situations like duplicated services listening on different ports. As a result, a simple DNS service is adequate for handling service discovery.
    Learn more about the internal DNS service for Cattle environments and Kubernetes environments.
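
As a quick sanity check of the resolution rules above, the following commands can be run from a shell inside any container. This is a sketch using the example names from the list; 169.254.169.250 is Rancher's internal DNS address, quoted in the forum reply further down.

ping -c 1 wordpress_wordpress_1   # 1. container name
ping -c 1 wordpress               # 2. service name, same stack
ping -c 1 wordpress.myblog        # 3. ServiceName.StackName, works across stacks
ping -c 1 db                      # 4./5. service link or alias
# A port that was never published to the host is still reachable on the overlay network:
curl -s -o /dev/null -w '%{http_code}\n' http://wordpress.myblog:80/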

How the "randomized DNS IP list" is consumed

Experiments suggest it does appear to be random.
DNS answers are cached for a short time; if a container becomes unavailable, another one should be picked automatically.
To some extent this provides load balancing for internal services.
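
This can be observed by querying the internal DNS repeatedly; a sketch using the example names above (the 1-second TTL is stated in the forum reply quoted below):

for i in 1 2 3; do
    dig +short wordpress.myblog @169.254.169.250   # the IP order should differ between runs
done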

A load-balancing scheme for web services

Take a WordPress service whose service port is 80, for example.
Create a service named wordpress and do not publish any ports.
Start N containers in that service.
Then create a load balancer with a service link to wordpress, and set up port forwarding from an external port to internal port 80 (a compose sketch follows below).
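
The same stack, expressed roughly as compose files. This is a sketch only: the rancher/lb-service-haproxy image and the lb_config syntax apply to Rancher v1.2+ Cattle load balancers, and the scale values are arbitrary.

# docker-compose.yml
version: '2'
services:
  wordpress:
    image: wordpress
    # no "ports:" entry - the service stays internal to the overlay network
  lb:
    image: rancher/lb-service-haproxy
    ports:
      - 80:80   # the externally published port

# rancher-compose.yml
version: '2'
services:
  wordpress:
    scale: 3    # the N containers
  lb:
    scale: 1
    lb_config:
      port_rules:
        - source_port: 80
          target_port: 80
          service: wordpress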

Note

It is not yet confirmed whether, once a machine has connected to one container, subsequent requests all go to the same container because of DNS caching. If so, nginx cannot be used for this kind of load balancing: once a single machine has forwarded a request for the service name, the DNS answer is cached there, and later requests are not spread across containers.

A reply from the Rancher forums

Depending on the client behavior, the answer is somewhere between "does not balance properly at all" and "works ok as a simple way to coarsely balance requests".
The DNS server will return an answer with all of the IPs of the (healthy) backend containers in random order each time it is asked, with a TTL of 1 second.
But the client is free to do whatever it likes with this information. Some will take just the first answer, others pick randomly from the list, or not so randomly. Some will respect the TTL, others will cache the value for some amount of time they decide, or forever, or until a request fails, etc.
Nginx specifically will resolve names exactly once on startup by default. To make it do anything else you have to give it a resolver. The Rancher DNS service is always 169.254.169.250 on any host.

Nginx forwarding to Rancher

  1. Configure a service in Rancher.
  2. In nginx, proxy_pass to the service through a variable, so that nginx re-resolves the name at request time instead of only once at startup (see the forum reply above).
  3. nginx configuration:

# Rancher's internal DNS; re-check the answer every 5 seconds
resolver 169.254.169.250 valid=5s ipv6=off;

# Using a variable forces nginx to consult the resolver per request
set $backend http://ServiceName.StackName:3000;

location /api/ {
  rewrite ^/api/(.*) /$1 break;
  proxy_pass $backend;
}

Fixing broken cross-host container networking (IPsec)

  1. Check /etc/resolv.conf on the agent machine.
  2. If it contains the offending entries (highlighted in red in the original screenshot), comment them out by prepending a semicolon.
  3. Under Host, deactivate all hosts, then delete them.
  4. Run the cleanup script below on each agent host (source: http://niusmallnan.com/2017/01/19/optimize-rancher-k8s-in-china/).
  5. Re-add the hosts.
#!/bin/bash

# WARNING: this removes ALL containers!
docker rm -f $(docker ps -qa)

rm -rf /var/etcd/
# Unmount anything under /var/lib/kubelet before deleting it
for m in $(tac /proc/mounts | awk '{print $2}' | grep /var/lib/kubelet); do
    umount $m || true
done
rm -rf /var/lib/kubelet/
# Same for Rancher's state directory
for m in $(tac /proc/mounts | awk '{print $2}' | grep /var/lib/rancher); do
    umount $m || true
done
rm -rf /var/lib/rancher/

rm -rf /run/kubernetes/

# WARNING: this removes ALL volumes!
docker volume rm $(docker volume ls -q)

# Confirm everything is gone
docker ps -a
docker volume ls
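
After the hosts are re-added, cross-host overlay connectivity can be spot-checked. A sketch: the label attaches a plain Docker container to Rancher's managed network, and 10.42.0.2 is a placeholder for a container IP on another host.

docker run --rm --label io.rancher.container.network=true \
    alpine ping -c 3 10.42.0.2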

A reply from Rancher expert Niu Xiaonan:

Barring surprises it is the same issue; have him show the logs and the DNS configuration and that will settle it.

This is exactly the question just answered in the group chat.

Normally, when Go code uses the net package to resolve a service name, there are two options:
1. cgo-based resolution, i.e. calling the system's glibc, which relies on /etc/resolv.conf
2. Go's own built-in resolver
In most cases Go defaults to its own resolver, but certain special addresses and corner cases fall back to cgo.
The /etc/resolv.conf configuration on Alibaba Cloud hosts happens to make the cgo path fail (mostly with timeouts), at which point the program crashes across the board.
Build with CGO_ENABLED=0 to force the pure-Go resolver everywhere, and the problem goes away.
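
For reference, the build-time switch plus Go's documented run-time alternative (myapp is a placeholder binary name):

# Build with cgo disabled, forcing the pure-Go resolver:
CGO_ENABLED=0 go build -o myapp .

# Or force the pure-Go resolver at run time, without rebuilding:
GODEBUG=netdns=go ./myapp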

Using and mounting NFS

Mounting

mount -t nfs 10.0.10.11:/data/nfs /nfs

### Mount automatically at boot
echo "192.168.0.123:/rancher/nfs/ /nfs nfs rw,async,hard,intr 0 0" >> /etc/fstab
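
To verify the fstab entry without rebooting (this assumes the NFS client tools, e.g. nfs-utils on CentOS or nfs-common on Debian/Ubuntu, are installed):

mount -a      # mount everything listed in /etc/fstab
df -h /nfs    # confirm the share is mounted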
(End of article)