
Life in Flow

Embrace your dreams and unlock your full potential at every stage of life.



dictionary

System

RockyLinux 9: Allow root Login over SSH

How to allow the root user to log in over SSH on Rocky Linux 9:

1. Edit the SSH configuration file

vi /etc/ssh/sshd_config (press i to enter insert mode)

Find the following line:

#PermitRootLogin prohibit-password

Change it to:

PermitRootLogin yes

Press Esc, then type :wq to save and exit.

2. Restart the SSH service

systemctl restart sshd

The root user can now log in remotely over SSH.
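The edit above can also be done non-interactively with sed; a minimal sketch, demonstrated on a scratch copy (run the same sed against /etc/ssh/sshd_config and restart sshd to apply it for real):

```shell
# Demonstrate the substitution on a scratch copy of the config line.
# For the real file: sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
cfg=$(mktemp)
printf '#PermitRootLogin prohibit-password\n' > "$cfg"

# Uncomment the directive (if commented) and force it to "yes"
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' "$cfg"
grep '^PermitRootLogin' "$cfg"
```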

Boot Without a GUI: Set the Default Target to the Command Line

Run on the system:

## 1) Set the default boot target to the command line
sudo systemctl set-default multi-user.target

## 2) Switch to the command line immediately (no reboot needed)
sudo systemctl isolate multi-user.target

## 3) Optional: disable the graphical login manager (more thorough)
sudo systemctl disable --now gdm

## 4) To restore the graphical interface (reverse the steps)
sudo systemctl set-default graphical.target
sudo systemctl enable --now gdm

PowerShell: Configure a Clash Proxy

### Edit $PROFILE and append the proxy variables
notepad $PROFILE

$env:HTTP_PROXY  = "http://127.0.0.1:7899"
$env:HTTPS_PROXY = "http://127.0.0.1:7899"
$env:NO_PROXY    = "localhost,127.0.0.1"

RockyLinux 9: Docker Image Incompatibility (SELinux)

https://www.sujx.net/2023/07/10/RockyLinux-Container/index.html

Firewalld

# Start
systemctl start firewalld

# Check status
systemctl status firewalld

# Disable (do not start at boot)
systemctl disable firewalld

# Stop
systemctl stop firewalld

SyncTime

# Install the NTP service
yum install ntp

# Enable the service at boot
systemctl enable ntpd

# Start the service
systemctl start ntpd

# Set the timezone
timedatectl set-timezone Asia/Shanghai

# Enable NTP synchronization
timedatectl set-ntp yes

# Check peer synchronization status
ntpq -p



###  crontab
[root@master tmp]# vi /tmp/synctime.sh
#!/bin/bash
systemctl restart ntpd
timedatectl set-timezone Asia/Shanghai
timedatectl set-ntp yes
ntpq -p

[root@master tmp]# crontab -e
* * * * * /tmp/synctime.sh

Partition


SOCKS5 Proxy

[root@DevOps ~]# vim /etc/profile
[root@DevOps ~]# source /etc/profile
export ALL_PROXY="socks5://192.168.10.88:10808"
export https_proxy="http://192.168.10.88:10809"
export http_proxy="http://192.168.10.88:10809"

containerd: systemd Proxy Configuration

  1. Create a proxy drop-in for containerd (run on k8s-node01)
mkdir -p /etc/systemd/system/containerd.service.d

cat >/etc/systemd/system/containerd.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://192.168.10.88:7899"
Environment="HTTPS_PROXY=http://192.168.10.88:7899"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,169.254.0.0/16,.cluster.local,.svc,.svc.cluster.local,k8s-master01,k8s-master02,k8s-master03,k8s-node01,k8s-node02"
EOF
  2. Reload and restart containerd
systemctl daemon-reload
systemctl restart containerd
  3. Verify containerd picked up the proxy environment
systemctl show containerd --property=Environment

You should see HTTP_PROXY/HTTPS_PROXY/NO_PROXY in the output.

  4. Trigger an image pull to verify

Pick either one:

  • Use crictl (recommended)
crictl pull nginx:latest
  • Or let the Pod be recreated
kubectl -n default delete pod <your-nginx-pod-name>
kubectl -n default describe pod <new-pod-name> | egrep -i "pull|image|error"

Configure the Proxy for the k3s Service Too

If you are using k3s, it bundles its own containerd.

In some environments the component that actually manages containerd is k3s.service (or the kubelet is managed by another unit). To avoid the mismatch of "containerd is configured, but k3s actually does the pulling/managing", it is recommended to add the same drop-in there as well (the two do not conflict):

  1. Configure a proxy for k3s (if the k3s service exists)
systemctl status k3s 2>/dev/null | head

If k3s exists:

mkdir -p /etc/systemd/system/k3s.service.d

cat >/etc/systemd/system/k3s.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://192.168.10.88:7899"
Environment="HTTPS_PROXY=http://192.168.10.88:7899"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,169.254.0.0/16,.cluster.local,.svc,.svc.cluster.local,k8s-master01,k8s-master02,k8s-master03,k8s-node01,k8s-node02"
EOF

systemctl daemon-reload
systemctl restart k3s

Notes (to avoid hitting this again later)

  • NO_PROXY must include:
    • the Pod/Service CIDRs (already present here: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, etc.)
    • the cluster domains (already present here: .cluster.local, .svc, ...
  • The proxy address must be reachable from node01: already verified working.
  • If you still see ImagePullBackOff:
    • most likely only the shell environment was changed, not the systemd unit (which is exactly what this fixes)

Verification results to report back (just these two)

Run them and paste the output:

systemctl show containerd --property=Environment
crictl pull nginx:latest

Status Summary

  • Progress: root cause confirmed (node01 cannot reach docker.io directly, and containerd had no proxy configured).
  • Next step: configure the systemd proxy per "Plan A" and verify that crictl pull succeeds; once it does, the ImagePullBackOff issue can be considered resolved.

Deploy K3s

### Pre-deployment preparation
sudo swapoff -a
sudo sed -i.bak '/ swap / s/^/#/' /etc/fstab

sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes.conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
EOF

sudo sysctl --system


### Install k3s (single-node all-in-one; includes flannel + traefik by default)
curl -sfL https://get.k3s.io | sudo sh -
# Or download the script locally first and then run it (easier to see the details)

curl -sfL https://get.k3s.io -o install-k3s.sh
export CURL_OPTIONS="--http1.1"
sudo sh install-k3s.sh --debug


### Verify it is healthy
sudo kubectl get nodes -o wide
NAME             STATUS   ROLES           AGE   VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
orangepi5ultra   Ready    control-plane   40s   v1.34.3+k3s1   192.168.10.68   <none>        Orange Pi 1.0.0 Jammy   6.1.43-rockchip-rk3588   containerd://2.1.5-k3s1

### Configure the proxy for the k3s service as well
mkdir -p /etc/systemd/system/k3s.service.d

cat >/etc/systemd/system/k3s.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://192.168.10.88:7899"
Environment="HTTPS_PROXY=http://192.168.10.88:7899"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,169.254.0.0/16,.cluster.local,.svc,.svc.cluster.local,k8s-master01,k8s-master02,k8s-master03,k8s-node01,k8s-node02"
EOF

systemctl daemon-reload
systemctl restart k3s

### Check status
root@orangepi5ultra:~# sudo kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   coredns-7f496c8d7d-ssfn2                  1/1     Running     0          9m17s
kube-system   helm-install-traefik-crd-9hqqd            0/1     Completed   0          68s
kube-system   helm-install-traefik-gjfm9                0/1     Completed   2          68s
kube-system   local-path-provisioner-578895bd58-mwq75   1/1     Running     0          9m17s
kube-system   metrics-server-7b9c9c4b9c-tvllc           1/1     Running     0          9m17s
kube-system   svclb-traefik-1861089c-tk462              2/2     Running     0          20s
kube-system   traefik-6f5f87584-d5bj2                   1/1     Running     0          20s

### Restart k3s so containerd regenerates its config and pulls images
sudo systemctl restart k3s

### kubeconfig file path
cat /etc/rancher/k3s/k3s.yaml

Install MetalLB on K3s

Your application stays in Progressing because the Ingress has no ADDRESS (the LoadBalancer is pending), so Argo CD marks it as Progressing. The standard fix is to install MetalLB so the ingress-nginx service gets an EXTERNAL-IP, or to customize the Ingress health rule in Argo CD.

### Roughly probe whether addresses are already in use (optional)
for i in {200..220}; do ping -c 1 -W 1 192.168.10.$i >/dev/null && echo "in-use: $i"; done

### Install MetalLB
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm install metallb metallb/metallb -n metallb-system --create-namespace

### Watch the pods come up (same whether installed via Helm or the official manifest)
# Normally you should see at least:
# controller Running
# speaker Running (one exists even on a single node)
kubectl get pods -n metallb-system -w

### Configure the address pool: IPAddressPool + L2Advertisement (the key step)
# Create the YAML (you can copy-paste and run it directly); change 192.168.10.200-192.168.10.220 to a free range on your own network
cat <<'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.10.200-192.168.10.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
EOF


### Verify the resources were created
kubectl get ipaddresspool,l2advertisement -n metallb-system
NAME                                    AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
ipaddresspool.metallb.io/default-pool   true          false             ["192.168.10.200-192.168.10.220"]

NAME                                    IPADDRESSPOOLS     IPADDRESSPOOL SELECTORS   INTERFACES
l2advertisement.metallb.io/default-l2   ["default-pool"]


### Let the ingress-nginx LoadBalancer get an EXTERNAL-IP
# The ingress-nginx service is of type LoadBalancer, so MetalLB assigns it an address automatically.
# EXTERNAL-IP changes from <pending> to something like 192.168.10.201
root@orangepi5ultra:~# kubectl get svc -n ingress-nginx ingress-nginx-controller -o wide
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE   SELECTOR
ingress-nginx-controller   LoadBalancer   10.43.19.119   192.168.10.201   80:30336/TCP,443:30275/TCP   27m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

### Access pattern change (closer to production)
curl -H "Host: demo.test.com" "http://192.168.10.201/receiveapi"

containerd: Delete a Local Image

crictl rmi nginx:latest

proxy-on

### Append to the end of the file
vi ~/.bashrc 


# ============================================
# K8s master node proxy configuration
# ============================================
export HTTP_PROXY=http://192.168.10.88:7899
export HTTPS_PROXY=http://192.168.10.88:7899

# NO_PROXY (complete list)
export NO_PROXY="localhost,127.0.0.1,\
10.0.0.0/8,10.96.0.0/12,10.96.0.10,\
172.16.0.0/12,172.16.0.0/16,\
192.168.0.0/16,\
169.254.0.0/16,\
.cluster.local,.svc,.svc.cluster.local,\
kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,\
k8s-master01,k8s-node01,k8s-node02,\
192.168.10.231,192.168.10.232,192.168.10.233"

# Compatibility (lowercase variants)
export http_proxy=$HTTP_PROXY
export https_proxy=$HTTPS_PROXY
export no_proxy=$NO_PROXY

# Convenience aliases
alias proxy-status='echo "HTTP_PROXY=$HTTP_PROXY"; echo "NO_PROXY=$NO_PROXY"'
alias proxy-test='curl -I https://www.google.com 2>&1 | head -5 && kubectl get nodes'
alias proxy-off='unset HTTP_PROXY HTTPS_PROXY http_proxy https_proxy NO_PROXY no_proxy && echo "✅ Proxy disabled"'
alias proxy-on='source ~/.bashrc && echo "✅ Proxy enabled"'
alias curl-k8s='curl --noproxy "*"'

# Confirm the configuration
echo "✅ Proxy configured: $HTTP_PROXY"
echo "✅ NO_PROXY includes: $(echo $NO_PROXY | cut -d',' -f1-5)..."


### Apply immediately
source ~/.bashrc

Yum 2 Aliyun

### Back up the existing yum repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo_bak

### Create a new yum repository config file
vi /etc/yum.repos.d/CentOS-Base.repo
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the 
# remarked out baseurl= line instead.
#
#
 
[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=https://mirrors.aliyun.com/centos-vault/7.9.2009/os/$basearch/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/centos-vault/RPM-GPG-KEY-CentOS-7
 
#released updates 
[updates]
name=CentOS-$releasever - Updates - mirrors.aliyun.com
failovermethod=priority
baseurl=https://mirrors.aliyun.com/centos-vault/7.9.2009/updates/$basearch/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/centos-vault/RPM-GPG-KEY-CentOS-7
 
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=https://mirrors.aliyun.com/centos-vault/7.9.2009/extras/$basearch/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/centos-vault/RPM-GPG-KEY-CentOS-7
 
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - mirrors.aliyun.com
failovermethod=priority
baseurl=https://mirrors.aliyun.com/centos-vault/7.9.2009/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://mirrors.aliyun.com/centos-vault/RPM-GPG-KEY-CentOS-7
 
#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib - mirrors.aliyun.com
failovermethod=priority
baseurl=https://mirrors.aliyun.com/centos-vault/7.9.2009/contrib/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://mirrors.aliyun.com/centos-vault/RPM-GPG-KEY-CentOS-7

### Clear the cache and rebuild the metadata cache
yum clean all && yum makecache

Kubernetes yum Repository

# Add the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Refresh the yum cache
yum clean all
yum makecache

# List available versions
yum list kubeadm --showduplicates | grep 1.22

# Install a specific version of kubeadm
yum install -y kubeadm-1.22.17-0 kubelet-1.22.17-0 kubectl-1.22.17-0

# If you need to downgrade, use the downgrade command
yum downgrade -y kubeadm-1.22.17-0 kubelet-1.22.17-0 kubectl-1.22.17-0

# Check the installed version
kubeadm version

NFS Provisioner: Persistence for a Multi-Node kubeadm Cluster

Use one machine as the NFS server, install the NFS Subdir External Provisioner in K8s to create PVs dynamically, then make it the default StorageClass.


First confirm three things (pick one node as the NFS server):

  • [NFS server IP]: e.g. 192.168.10.231 (preferably the master or a separate, stable machine)
  • [Export directory]: e.g. /srv/nfs/k8s
  • [Client nodes]: all three of your nodes (231/232/233) must be able to reach this IP

NFS dynamic provisioning then supplies persistent volumes for the Loki PVCs.

### Install and configure NFS on the NFS server (RockyLinux)
# Install and start (run on k8s-master01)
sudo dnf -y install nfs-utils
sudo systemctl enable --now nfs-server

### Create the export directory
sudo mkdir -p /srv/nfs/k8s
sudo chmod 777 /srv/nfs/k8s

### Configure /etc/exports
echo "/srv/nfs/k8s 192.168.10.0/24(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports
sudo exportfs -rav

### Open the firewall (if firewalld is enabled)
sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --reload

### Install the NFS client on all K8s nodes (master + workers)
Run on every node:
sudo dnf -y install nfs-utils

### Verify the nodes can mount the NFS export (strongly recommended)
Test on any one node (replace the IP with your NFS server's):
sudo mkdir -p /mnt/testnfs
sudo mount -t nfs <NFS_SERVER_IP>:/srv/nfs/k8s /mnt/testnfs
df -h | grep testnfs
sudo umount /mnt/testnfs

### Install the NFS dynamic provisioner + StorageClass in Kubernetes
# Recommended: the community-maintained nfs-subdir-external-provisioner
# Install (replace the NFS server IP/path with yours)
# storageClass.defaultClass=true: make it the default SC (Loki PVCs without an explicit SC will use it automatically)
# reclaimPolicy=Delete: deleting a PVC deletes its subdirectory (more convenient; change to Retain to keep the data)

kubectl create ns nfs-provisioner

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

helm repo update

helm upgrade --install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  -n nfs-provisioner \
  --set nfs.server=192.168.10.231 \
  --set nfs.path=/srv/nfs/k8s \
  --set storageClass.name=nfs-client \
  --set storageClass.defaultClass=true \
  --set storageClass.reclaimPolicy=Delete

### Check
# kubectl get sc should list nfs-client, marked (default)
[root@k8s-master01 ~]# kubectl get sc
NAME                   PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client (default)   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   9m21s

# The provisioner Pod should be Running
[root@k8s-master01 ~]# kubectl -n nfs-provisioner get pods -o wide
NAME                                               READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
nfs-subdir-external-provisioner-65f6486bd6-ch5zq   1/1     Running   0          9m59s   172.16.85.212   k8s-node01   <none>           <none>
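To confirm dynamic provisioning end to end, a hypothetical test PVC (the name test-nfs-claim is illustrative) can be applied; it should go Bound on its own:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

Apply it with kubectl apply -f, check that kubectl get pvc test-nfs-claim reports Bound, then delete it; with reclaimPolicy=Delete the backing subdirectory on the NFS export is removed as well.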

Maven

https://blog.csdn.net/aaxzsuj/article/details/130524829

SpringBoot

Application entry class

package net.xdclass;

import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
@MapperScan("net.xdclass.mapper")
public class UserApplication {
    public static void main(String[] args) {
        SpringApplication.run(UserApplication.class, args);
    }
}

application.yml

server:
  port: 9001
  
spring:
  application:
    name: xdclass-user-service

  # Datasource configuration
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://192.168.10.21:3307/xdclass_user?useUnicode=true&characterEncoding=utf-8&useSSL=false&serverTimezone=Asia/Shanghai
    username: root
    password: abc1024.pub

# MyBatis-Plus: print SQL statements to stdout
mybatis-plus:
  configuration:
    log-impl: org.apache.ibatis.logging.stdout.StdOutImpl

# Log level: ERROR/WARN/INFO/DEBUG; by default only INFO and above is shown
logging:
  level:
    root: INFO

Docker

Install

# Install and start Docker
yum install docker-io -y
systemctl start docker

# Check the installation
docker info

# Manage the Docker daemon
systemctl start docker     # start the Docker daemon
systemctl stop docker      # stop the Docker daemon
systemctl restart docker   # restart the Docker daemon


# Configure registry mirrors
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": [
    "https://docker.m.daocloud.io",
    "https://noohub.ru",
    "https://huecker.io",
    "https://dockerhub.timeweb.cloud",
    "https://proxy.1panel.live",
    "https://docker.1panel.top",
    "https://docker.1ms.run",
    "https://docker.ketches.cn",
    "https://05f073ad3c0010ea0f4bc00b7105ec20.mirror.swr.myhuaweicloud.com",
    "https://mirror.ccs.tencentyun.com",
    "https://0dj0t5fb.mirror.aliyuncs.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://6kx4zyno.mirror.aliyuncs.com",
    "https://registry.docker-cn.com",
    "https://akchsmlh.mirror.aliyuncs.com",
    "https://hub-mirror.c.163.com",
    "https://docker.hpcloud.cloud",
    "https://docker.unsee.tech",
    "http://mirrors.ustc.edu.cn",
    "https://docker.chenby.cn",
    "http://mirror.azure.cn",
    "https://dockerpull.org",
    "https://dockerhub.icu",
    "https://hub.rat.dev"
  ]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

# Check the settings
docker info

daemon.json

https://patzer0.com/archives/configure-docker-registry-mirrors-with-mirrors-available-in-cn-mainland

[root@Flink ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://dockerpull.com"]
}

sudo systemctl daemon-reload
sudo systemctl restart docker

Upgrade

Upgrade guide: https://blog.csdn.net/u011990675/article/details/141320931

Problem hit when starting old containers after the upgrade: Error response from daemon: unknown or invalid runtime name: docker-runc

Fix: https://blog.csdn.net/weixin_40918145/article/details/133855258

MongoDB

Config file

net:
    port: 27017
    bindIp: "0.0.0.0"

storage:
    dbPath: "/data/db"

security:
    authorization: enabled

Command

docker run -it -d --name mongo \
-p 27017:27017 \
--net mynet \
--ip 172.18.0.8 \
-v /root/mongo:/etc/mongo \
-v /root/mongo/data/db:/data/db \
-m 400m --privileged=true \
-e MONGO_INITDB_ROOT_USERNAME=admin \
-e MONGO_INITDB_ROOT_PASSWORD=abc123456 \
-e TZ=Asia/Shanghai \
docker.io/mongo --config /etc/mongo/mongod.conf

Redis

Config file

bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
logfile ""
databases 12
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
requirepass abc123456

Command

docker run -it -d --name redis -m 200m \
-p 6379:6379 --privileged=true \
--net mynet --ip 172.18.0.9 \
-v /root/redis/conf:/usr/local/etc/redis \
-e TZ=Asia/Shanghai redis:6.0.10 \
redis-server /usr/local/etc/redis/redis.conf

RabbitMQ

Command

docker run -it -d --name mq \
--net mynet --ip 172.18.0.11 \
-p 5672:5672 -m 500m \
-e TZ=Asia/Shanghai --privileged=true \
rabbitmq

Minio

Open a browser at http://127.0.0.1:9001/login and fill in the login details to reach the web console. root abc123

Directories

docker load < Minio.tar.gz
mkdir /root/minio
mkdir /root/minio/data
chmod -R 777 /root/minio/data

命令

docker run -it -d --name minio \
-p 9000:9000 -p 9001:9001 \
-v /root/minio/data:/data \
-e TZ=Asia/Shanghai --privileged=true \
--env MINIO_ROOT_USER="root" \
--env MINIO_ROOT_PASSWORD="abc123456" \
-e MINIO_SKIP_CLIENT="yes" \
bitnami/minio:latest


### Latest version
docker run -it -d --name minio -m 400m \
-p 9000:9000 -p 9001:9001 \
-v /data/minio/data:/data \
-e TZ=Asia/Shanghai --privileged=true \
--env MINIO_ROOT_USER="root" \
--env MINIO_ROOT_PASSWORD="abc123456" \
bitnami/minio:latest

http://192.168.10.21:9001/login

Nacos

http://localhost:8848/nacos/
nacos
nacos

docker run -it -d -p 8848:8848 --env MODE=standalone \
--net mynet --ip 172.18.0.12 -e TZ=Asia/Shanghai \
--name nacos nacos/nacos-server

### new
docker run -d \
-e NACOS_AUTH_ENABLE=true \
-e MODE=standalone \
-e JVM_XMS=128m \
-e JVM_XMX=128m \
-e JVM_XMN=128m \
-p 8848:8848 \
-e SPRING_DATASOURCE_PLATFORM=mysql \
-e MYSQL_SERVICE_HOST=192.168.10.58 \
-e MYSQL_SERVICE_PORT=3306 \
-e MYSQL_SERVICE_USER=root \
-e MYSQL_SERVICE_PASSWORD=abc1024.pub \
-e MYSQL_SERVICE_DB_NAME=nacos_config \
-e MYSQL_SERVICE_DB_PARAM='characterEncoding=utf8&connectTimeout=10000&socketTimeout=30000&autoReconnect=true&useSSL=false' \
--restart=always \
--privileged=true \
-v /home/data/nacos/logs:/home/nacos/logs \
--name xdclass_nacos_auth \
nacos/nacos-server:2.0.2

SQL scripts; database name nacos_config

Sentinel

Open http://localhost:8858/#/login in a browser; both the username and password are sentinel.

docker run -it -d --name sentinel \
-p 8719:8719 -p 8858:8858 \
--net mynet --ip 172.18.0.13 \
-e TZ=Asia/Shanghai -m 600m \
bladex/sentinel-dashboard

MySQL

docker run \
    -p 3306:3306 \
    -e MYSQL_ROOT_PASSWORD=123456 \
    -v /home/data/mysql/conf:/etc/mysql/conf.d \
    -v /home/data/mysql/data:/var/lib/mysql:rw \
    -v /home/data/mysql/my.cnf:/etc/mysql/my.cnf \
    --name mysql \
    --restart=always \
    -d mysql:8.0.22

docker run \
    -p 3306:3306 \
    -e MYSQL_ROOT_PASSWORD=123456 \
    --name mysql \
    -d mysql:8.0.22

webssh

docker run -d --name webssh -p 5032:5032 --restart always lihaixin/webssh2:ssh

MyBatis-Plus Generator

Dependencies

<dependency>
        <groupId>com.baomidou</groupId>
        <artifactId>mybatis-plus-generator</artifactId>
        <version>3.4.1</version>
    </dependency>
    <!-- velocity -->
    <dependency>
        <groupId>org.apache.velocity</groupId>
        <artifactId>velocity-engine-core</artifactId>
        <version>2.0</version>
    </dependency>
    <!-- code generator dependencies end -->

Code (remember to change the lines marked TODO)

package net.xdclass.db;

import com.baomidou.mybatisplus.annotation.DbType;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.generator.AutoGenerator;
import com.baomidou.mybatisplus.generator.config.DataSourceConfig;
import com.baomidou.mybatisplus.generator.config.GlobalConfig;
import com.baomidou.mybatisplus.generator.config.PackageConfig;
import com.baomidou.mybatisplus.generator.config.StrategyConfig;
import com.baomidou.mybatisplus.generator.config.rules.DateType;
import com.baomidou.mybatisplus.generator.config.rules.NamingStrategy;

public class MyBatisPlusGenerator {

    public static void main(String[] args) {
        //1. Global configuration
        GlobalConfig config = new GlobalConfig();
        // Enable ActiveRecord mode
        config.setActiveRecord(true)
                // Author
                .setAuthor("soulboy")
                // Output path; prefer an absolute path (Windows paths differ)
                //TODO  TODO  TODO  TODO
                .setOutputDir("C:\\Users\\chao1\\Desktop\\demo\\src\\main\\java")
                // Overwrite existing files
                .setFileOverride(true)
                // Primary key strategy
                .setIdType(IdType.AUTO)

                .setDateType(DateType.ONLY_DATE)
                // Service interface naming; by default generated names start with I
                .setServiceName("%sService")

                // Entity class name suffix
                .setEntityName("%sDO")

                // Generate a basic resultMap
                .setBaseResultMap(true)

                // Disable ActiveRecord mode (overrides the earlier setting)
                .setActiveRecord(false)

                // Generate a basic SQL column list fragment
                .setBaseColumnList(true);

        //2. Datasource configuration
        DataSourceConfig dsConfig = new DataSourceConfig();
        // Database type
        dsConfig.setDbType(DbType.MYSQL)
                .setDriverName("com.mysql.cj.jdbc.Driver")
                //TODO  TODO  TODO  TODO
                .setUrl("jdbc:mysql://192.168.10.21:3307/xdclass_user?useSSL=false")
                .setUsername("root")
                .setPassword("abc1024.pub");

        //3. Strategy configuration
        StrategyConfig stConfig = new StrategyConfig();

        // Global capitalized naming
        stConfig.setCapitalMode(true)
                // Table-to-entity naming strategy
                .setNaming(NamingStrategy.underline_to_camel)

                // Use Lombok
                .setEntityLombokModel(true)

                // Use @RestController-style controllers
                .setRestControllerStyle(true)

                // Tables to generate; multiple tables can be listed
                //TODO  TODO  TODO  TODO
                .setInclude("user","address");

        //4. Package configuration
        PackageConfig pkConfig = new PackageConfig();
        pkConfig.setParent("net.xdclass")
                .setMapper("mapper")
                .setService("service")
                .setController("controller")
                .setEntity("model")
                .setXml("mapper");

        //5. Wire the configs together
        AutoGenerator ag = new AutoGenerator();
        ag.setGlobalConfig(config)
                .setDataSource(dsConfig)
                .setStrategy(stConfig)
                .setPackageInfo(pkConfig);

        //6. Run the generator
        ag.execute();
        System.out.println("=======  Done: code generation finished  ========");
    }
}

SwaggerConfiguration

Dependencies

<!-- Swagger UI API docs dependency -->
       <dependency>
           <groupId>io.springfox</groupId>
           <artifactId>springfox-boot-starter</artifactId>
           <version>3.0.0</version>
       </dependency>

SwaggerConfiguration

package net.xdclass.config;

import lombok.Data;
import org.springframework.context.annotation.Bean;
import org.springframework.http.HttpMethod;
import org.springframework.stereotype.Component;
import springfox.documentation.builders.*;
import springfox.documentation.oas.annotations.EnableOpenApi;
import springfox.documentation.schema.ScalarType;
import springfox.documentation.service.*;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;

import java.util.ArrayList;
import java.util.List;

@Component
@EnableOpenApi
@Data
public class SwaggerConfiguration {

    /**
     * API docs for consumer-facing endpoints
     *
     * @return
     */
    @Bean
    public Docket webApiDoc() {

        return new Docket(DocumentationType.OAS_30)
                .groupName("User-facing API docs")
                .pathMapping("/")
                // Whether Swagger is enabled; false disables it (control via a property, disable in production)
                .enable(true)
                // API doc metadata
                .apiInfo(apiInfo())
                // Select which endpoints are published in the Swagger doc
                .select()
                .apis(RequestHandlerSelectors.basePackage("net.xdclass"))
                // Match request paths into this group
                .paths(PathSelectors.ant("/api/**"))
                .build()
                // Swagger 3.0-style configuration
                .globalRequestParameters(getGlobalRequestParameters())
                .globalResponses(HttpMethod.GET, getGlobalResponseMessage())
                .globalResponses(HttpMethod.POST, getGlobalResponseMessage());
    }
    }


    /**
     * Generate global request parameters; multiple parameters are supported
     * (can carry token information)
     * @return
     */
    private List<RequestParameter> getGlobalRequestParameters() {
        List<RequestParameter> parameters = new ArrayList<>();
        parameters.add(new RequestParameterBuilder()
                .name("token")
                .description("login token")
                .in(ParameterType.HEADER)
                .query(q -> q.model(m -> m.scalarModel(ScalarType.STRING)))
                .required(false)
                .build());

//        parameters.add(new RequestParameterBuilder()
//                .name("version")
//                .description("version number")
//                .required(true)
//                .in(ParameterType.HEADER)
//                .query(q -> q.model(m -> m.scalarModel(ScalarType.STRING)))
//                .required(false)
//                .build());

        return parameters;
    }

    /**
     * Generate common response messages
     *
     * @return
     */
    private List<Response> getGlobalResponseMessage() {
        List<Response> responseList = new ArrayList<>();
        responseList.add(new ResponseBuilder().code("4xx").description("Request error; check the code and msg fields").build());
        return responseList;
    }

    /**
     * API doc metadata
     * @return
     */
    private ApiInfo apiInfo() {
        return new ApiInfoBuilder()
                .title("1024 e-commerce platform")
                .description("Microservice API documentation")
                .contact(new Contact("soulboy", "abc1024.pub", "410686931@qq.com"))
                .version("v1.0")
                .build();
    }
}

AddressController

package net.xdclass.controller;

import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiParam;
import net.xdclass.service.AddressService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;

import org.springframework.web.bind.annotation.RestController;
/**
 * <p>
 * E-commerce: company shipping/receiving address front controller
 * </p>
 *
 * @author soulboy
 * @since 2023-10-21
 */
@Api(tags = "Shipping address API")
@RestController
@RequestMapping("/api/address/v1")
public class AddressController {
    @Autowired
    AddressService addressService;
    @ApiOperation("Find address details by id")
    @GetMapping("find/{address_id}")
    public Object detail(@ApiParam(value = "address id",required = true)
                             @PathVariable("address_id") long addressId){
        return addressService.detail(addressId);
    }
}

Access URL

http://192.168.10.88:9001/swagger-ui/index.html#/

Git

git add ./*

git commit -m "init2"

git push -u origin "master"

Hyper-V

### Disable
bcdedit /set hypervisorlaunchtype off

### Enable
bcdedit /set hypervisorlaunchtype auto

Docker Image Builds: Maven Plugin Configuration

### Add a global property to the aggregator pom
        <docker.image.prefix>xdclass-cloud</docker.image.prefix>

### Add to every microservice (remember to change the service name)
 <build>
        <finalName>alibaba-cloud-user</finalName>

        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>

                <!-- Required; otherwise the built image cannot find the startup jar -->
                <executions>
                    <execution>
                        <goals>
                            <goal>repackage</goal>
                        </goals>
                    </execution>
                </executions>

                <configuration>
                    <fork>true</fork>
                    <addResources>true</addResources>

                </configuration>
            </plugin>

            <plugin>
                <groupId>com.spotify</groupId>
                <artifactId>dockerfile-maven-plugin</artifactId>
                <version>1.4.10</version>
                <configuration>

                    <repository>${docker.image.prefix}/${project.artifactId}</repository>

                    <buildArgs>
                        <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
                    </buildArgs>
                </configuration>
            </plugin>

        </plugins>

    </build>

Dockerfile

### Dockerfile contents
#FROM  adoptopenjdk/openjdk11:ubi
FROM  adoptopenjdk/openjdk11:jre11u-nightly
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

### mvn build commands
# Step 1: run at the top level
mvn clean install

# Step 2: change into the submodule's pom directory
mvn install -Dmaven.test.skip=true dockerfile:build

front-end

cnpm

npm install -g cnpm --registry=https://registry.npmmirror.com

JDK8

[root@ecs-8yZb5 ~]# mkdir -pv /usr/local/software
[root@MiWiFi-R3P-srv ~]# cd /usr/local/software/
[root@MiWiFi-R3P-srv software]# tar -zxvf jdk-8u181-linux-x64.tar.gz
[root@MiWiFi-R3P-srv software]# mv jdk1.8.0_181 jdk8
[root@MiWiFi-R3P-srv software]# vim /etc/profile
JAVA_HOME=/usr/local/software/jdk8
CLASSPATH=$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin
export PATH JAVA_HOME CLASSPATH
[root@MiWiFi-R3P-srv software]# source /etc/profile
[root@MiWiFi-R3P-srv software]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

System

DiskMount

### Check the device names and UUIDs of the current mounts
[root@iZbp19if2e8jvz5vlw7343Z ~]# blkid
/dev/vda1: UUID="c8b5b2da-5565-4dc1-b002-2a8b07573e22" TYPE="ext4" 
/dev/vdb1: UUID="9669d5ae-04db-4502-9a2a-6ec1d312ff3e" TYPE="ext4" PARTLABEL="Linux" PARTUUID="39c4d643-e085-4290-9ef5-f27cf7ee1ef8"

### List all mount points on the system
[root@iZbp19if2e8jvz5vlw7343Z ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G   16K   16G   1% /dev/shm
tmpfs            16G  672K   16G   1% /run
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda1        99G   41G   54G  44% /
tmpfs           3.1G     0  3.1G   0% /run/user/1002
/dev/vdb1       148G   73G   69G  52% /mysqlbin
tmpfs           3.1G     0  3.1G   0% /run/user/0

### Mount automatically at boot
[root@iZbp19if2e8jvz5vlw7343Z ~]# cat /etc/fstab 
UUID=c8b5b2da-5565-4dc1-b002-2a8b07573e22 /                       ext4    defaults        1 1
/dev/vdb1       /mysqlbin       ext4    defaults        0       2

rsyslogd (Log Management Service)

rsyslogd is the core system logging daemon on CentOS 7.
It is the successor to syslog, with more features and better performance, and is the standard log management service on modern Linux systems.

Main functions:

  • System log processing
    • collects system logs and application logs
    • handles locally generated log messages
    • can receive logs from remote systems
  • Log classification and storage
    • classifies logs by facility and severity
    • stores logs under /var/log by default
    • can write different log types to different files

Main characteristics:

  • High-performance design
  • Modular architecture
  • TCP/UDP support
  • Log filtering
  • Log forwarding
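The facility/severity classification above is driven by selector rules of the form facility.severity followed by an action; a minimal fragment matching the CentOS 7 defaults:

```
# Selector format: facility.severity    action
# All authentication messages go to /var/log/secure
authpriv.*                                  /var/log/secure
# Mail logs; the "-" prefix means asynchronous (buffered) writes
mail.*                                      -/var/log/maillog
# Everything at info or above, except mail/authpriv/cron
*.info;mail.none;authpriv.none;cron.none    /var/log/messages
```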

Common configuration files:

  • Main config file: /etc/rsyslog.conf
  • Drop-in config directory: /etc/rsyslog.d/
# Check rsyslogd status
systemctl status rsyslog

# Restart the service
systemctl restart rsyslog

# Start the service
systemctl start rsyslog

# Stop the service
systemctl stop rsyslog

Log size cannot be capped directly in /etc/rsyslog.conf; the recommended approach is logrotate, which provides a complete log rotation solution. Steps:

### Make sure logrotate is installed
logrotate --version

### Config file: /etc/logrotate.d/rsyslog
/var/log/messages {
    daily          # rotate once a day
    rotate 7       # keep the last 7 archives (7 days)
    dateext        # use the date as the filename suffix
    dateformat -%Y%m%d  # date format
    compress       # compress old logs
    missingok      # don't raise an error if the log is missing
    notifempty     # skip rotation when the file is empty
    create 0600 root root  # permissions and owner for the new log file
    postrotate
        /bin/kill -HUP $(cat /var/run/syslogd.pid 2>/dev/null) 2>/dev/null || true
    endscript
}

### The same config with comments stripped: /etc/logrotate.d/rsyslog
/var/log/messages {
    su root root
    daily
    rotate 7
    dateext
    dateformat -%Y%m%d
    compress
    missingok
    notifempty
    create 0600 root root
    postrotate
        /bin/kill -HUP $(cat /var/run/syslogd.pid 2>/dev/null) 2>/dev/null || true
    endscript
}

### Dry-run to test the configuration
logrotate -d /etc/logrotate.d/rsyslog


### Force an immediate rotation
logrotate -f /etc/logrotate.d/rsyslog

To apply the same rules to multiple log files:

/var/log/messages /var/log/secure /var/log/maillog /var/log/cron {
    su root root
    daily
    rotate 7
    dateext
    dateformat -%Y%m%d
    compress
    missingok
    notifempty
    create 0600 root root
    postrotate
        /bin/kill -HUP $(cat /var/run/syslogd.pid 2>/dev/null) 2>/dev/null || true
    endscript
}

Example rotated file names:

messages-20240301.gz
messages-20240302.gz
messages-20240303.gz
...
messages-20240307.gz
messages  # current log file
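The suffix in those names comes from dateext plus dateformat -%Y%m%d, which is ordinary strftime formatting; a quick sketch (GNU date assumed):

```shell
# Reproduce the archive name logrotate builds with "dateformat -%Y%m%d"
suffix=$(date -u -d '2024-03-01' '+-%Y%m%d')
echo "messages${suffix}.gz"   # messages-20240301.gz
```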

Cron job: run logrotate at 00:01 every day

1 0 * * * /usr/sbin/logrotate /etc/logrotate.d/rsyslog

SuYing666

Latest addresses
Address 1: https://suying59.xyz
Address 2: https://suying92.xyz

Other addresses (require a VPN to open)
Address 1: https://sy66a88.com
Address 2: https://sy77a12.com
Address 3: https://suying818.xyz
Address 4: https://suying810.net
Address 5: https://suying811.com
Address 6: https://suying828.com
Address 7: https://suying200.org
Address 8: https://suying82.com

Author: Soulboy