ansible all -m shell -a "yum install iptables-services -y"
Edit the rules
[root@k8s-w1 ~]# cat /etc/sysconfig/iptables
# sample configuration for iptables service
# you can edit this manually or use system-config-firewall
# please do not ask us to add additional ports/services to this default configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
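The default file above only sets ACCEPT policies and contains no rules. If specific ports later need to be opened explicitly, lines of the following shape could be added before COMMIT; the ports here are purely illustrative and not part of the original setup:

# illustrative only: keep established connections and open the kube-apiserver port
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 6443 -j ACCEPT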
Distribute the rules to the remaining servers
It seems this step actually isn't needed anymore.
ansible all -m copy -a "src=/etc/sysconfig/iptables dest=/etc/sysconfig/iptables"
ansible all -m shell -a "systemctl enable iptables.service --now"
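As an optional sanity check (these verification commands are my addition, not part of the original notes), confirm the service is active and the rules are loaded on every host:

# optional check: service state and currently loaded rules
ansible all -m shell -a "systemctl is-active iptables"
ansible all -m shell -a "iptables -S"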
Optimize the configuration
ansible all -m shell -a "ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime"
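A quick way to confirm the timezone change took effect on every host (an added check, not in the original notes):

# should report CST +0800 after the symlink change
ansible all -m shell -a "date +'%Z %z'"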
[root@k8s-m1 ~]# kubectl get node -A
NAME     STATUS     ROLES           AGE   VERSION
k8s-m1   NotReady   control-plane   42s   v1.25.7
k8s-w1   NotReady   <none>          5s    v1.25.7
k8s-w2   NotReady   <none>          5s    v1.25.7
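In this first listing the worker nodes still report ROLES as <none>; the worker role shown in the next listing is just a node label. The original labeling command isn't shown, but a step along these lines would produce that result:

# assumed step: add the worker role label so ROLES shows "worker"
kubectl label node k8s-w1 node-role.kubernetes.io/worker=
kubectl label node k8s-w2 node-role.kubernetes.io/worker=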
[root@k8s-m1 ~]# kubectl get node -A
NAME     STATUS     ROLES           AGE     VERSION
k8s-m1   NotReady   control-plane   4m32s   v1.25.7
k8s-w1   NotReady   worker          3m55s   v1.25.7
k8s-w2   NotReady   worker          3m55s   v1.25.7
Install helm
sealos run labring/helm:v3.12.3
Verify
[root@k8s-m1 ~]# helm version
version.BuildInfo{Version:"v3.12.3", GitCommit:"3a31588ad33fe3b89af5a2a54ee1d25bfe6eaa5e", GitTreeState:"clean", GoVersion:"go1.20.7"}
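The Cilium status dump below was taken from inside a cilium-agent pod (hence the "Defaulted container" line at the top). A command along these lines would produce it; the ds/cilium reference is an assumption, adjust it to the actual pod or DaemonSet name in your cluster:

# assumed invocation: exec into the cilium agent and print verbose status
kubectl -n kube-system exec ds/cilium -- cilium status --verbose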
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.25 (v1.25.7) [linux/amd64]
Kubernetes APIs:         ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    Strict   [enp6s18 192.168.6.32 (Direct Routing), kube-ipvs0 10.96.0.1]
Host firewall:           Disabled
CNI Chaining:            none
Cilium:                  Ok   1.14.3 (v1.14.3-252a99ef)
NodeMonitor:             Listening for events on 4 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok
IPAM:                    IPv4: 4/254 allocated from 10.0.1.0/24,
Allocated addresses:
  10.0.1.123 (kube-system/hubble-relay-767765f86f-n4bhp)
  10.0.1.176 (router)
  10.0.1.226 (health)
  10.0.1.234 (kube-system/hubble-ui-5f5599766f-6hnww)
IPv4 BIG TCP:            Disabled
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled
Host Routing:            Legacy
Masquerading:            BPF [enp6s18, kube-ipvs0] 172.26.131.117/32 [IPv4: Enabled, IPv6: Disabled]
Clock Source for BPF:    ktime
Controller Status:       29/29 healthy
  Name                                  Last success   Last error   Count   Message
  cilium-health-ep                      5s ago         never        0       no error
  dns-garbage-collector-job             11s ago        never        0       no error
  endpoint-1658-regeneration-recovery   never          never        0       no error
  endpoint-221-regeneration-recovery    never          never        0       no error
  endpoint-2703-regeneration-recovery   never          never        0       no error
  endpoint-3248-regeneration-recovery   never          never        0       no error
  endpoint-gc                           11s ago        never        0       no error
  ipcache-inject-labels                 6s ago         5m9s ago     0       no error
  k8s-heartbeat                         11s ago        never        0       no error
  link-cache                            6s ago         never        0       no error
  metricsmap-bpf-prom-sync              6s ago         never        0       no error
  neighbor-table-refresh                6s ago         never        0       no error
  resolve-identity-1658                 5s ago         never        0       no error
  resolve-identity-221                  6s ago         never        0       no error
  resolve-identity-2703                 4s ago         never        0       no error
  resolve-identity-3248                 5s ago         never        0       no error
  sync-host-ips                         6s ago         never        0       no error
  sync-lb-maps-with-k8s-services        5m6s ago       never        0       no error
  sync-policymap-1658                   5m2s ago       never        0       no error
  sync-policymap-221                    5m2s ago       never        0       no error
  sync-policymap-2703                   5m2s ago       never        0       no error
  sync-policymap-3248                   5m2s ago       never        0       no error
  sync-to-k8s-ciliumendpoint (1658)     4s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (221)      6s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (2703)     4s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (3248)     5s ago         never        0       no error
  sync-utime                            6s ago         never        0       no error
  template-dir-watcher                  never          never        0       no error
  write-cni-file                        5m11s ago      never        0       no error
Proxy Status:            OK, ip 10.0.1.176, 0 redirects active on ports 10000-20000, Envoy: embedded
Global Identity Range:   min 256, max 65535
Hubble:                  Ok   Current/Max Flows: 976/4095 (23.83%), Flows/s: 3.07   Metrics: Ok
KubeProxyReplacement Details:
  Status:                 Strict
  Socket LB:              Enabled
  Socket LB Tracing:      Enabled
  Socket LB Coverage:     Full
  Devices:                enp6s18 192.168.6.32 (Direct Routing), kube-ipvs0 10.96.0.1
  Mode:                   SNAT
  Backend Selection:      Random
  Session Affinity:       Enabled
  Graceful Termination:   Enabled
  NAT46/64 Support:       Disabled
  XDP Acceleration:       Disabled
  Services:
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767)
  - LoadBalancer:   Enabled
  - externalIPs:    Enabled
  - HostPort:       Enabled
BPF Maps:   dynamic sizing: on (ratio: 0.002500)
  Name                          Size
  Auth                          524288
  Non-TCP connection tracking   67837
  TCP connection tracking       135674
  Endpoint policy               65535
  IP cache                      512000
  IPv4 masquerading agent       16384
  IPv6 masquerading agent       16384
  IPv4 fragmentation            8192
  IPv4 service                  65536
  IPv6 service                  65536
  IPv4 service backend          65536
  IPv6 service backend          65536
  IPv4 service reverse NAT      65536
  IPv6 service reverse NAT      65536
  Metrics                       1024
  NAT                           135674
  Neighbor table                135674
  Global policy                 16384
  Session affinity              65536
  Sock reverse NAT              67837
  Tunnel                        65536
Encryption:              Disabled
Cluster health:          3/3 reachable   (2023-10-23T15:05:37Z)
  Name                            IP             Node        Endpoints
  kubernetes/k8s-w2 (localhost)   192.168.6.32   reachable   reachable
  kubernetes/k8s-m1               192.168.6.21   reachable   reachable
  kubernetes/k8s-w1               192.168.6.31   reachable   reachable
[root@k8s-m1 ~]# kubectl get sc -A
NAME                   PROVISIONER                                      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client (default)   cluster.local/nfs-subdir-external-provisioner    Delete          Immediate           true                   20s
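To confirm the default nfs-client StorageClass actually provisions volumes, a throwaway PVC can be applied as a quick test; the claim name below is made up for this check and is not part of the original notes:

# hypothetical smoke test: the claim should reach the Bound state via nfs-client
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc nfs-test-claim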