Modifying the KindEditor editor: replacing the old embed tag with the HTML5-capable video tag

KindEditor is a decent WYSIWYG editor, but it does not seem to have been updated in recent years. HTML5 is now mainstream, and I happen to have a project that needs the editor to play MP4 video. Given how capable the HTML5 video tag is, I decided to replace the embed markup in our system with video. The steps are as follows:

1. At line 296, find the line:

embed : ['id', 'class', 'src', 'width', 'height', 'type', 'loop', 'autostart', 'quality', '.width', '.height', 'align', 'allowscriptaccess'],

and add the following line directly below it:

video : ['id', 'class', 'src', 'width', 'height', 'type', 'loop', 'autostart', 'quality', '.width', '.height', 'align', 'allowscriptaccess','controls'],


2. At lines 893-895, find the code block:

if (/\.(swf|flv)(\?|$)/i.test(src)) {
    return 'application/x-shockwave-flash';
}

and add the following below it:

if (/\.(mp4|mp5)(\?|$)/i.test(src)) {
    return 'video/mp4';
}

3. Then, at lines 901-903, find:

if (/flash/i.test(type)) {
    return 'ke-flash';
}

and add below it:

if (/video/i.test(type)) {
    return 'ke-video';
}


4. At line 917 you will find the line function _mediaImg(blankPath, attrs) {. Add the following code just above it:

function _mediaVideo(attrs) {
    var html = '<video ';
    _each(attrs, function(key, val) {
        html += key + '="' + val + '" ';
    });
    html += ' controls="controls" />';
    return html;
}

5. At line 955, directly below the line K.mediaEmbed = _mediaEmbed;, add: K.mediaVideo = _mediaVideo;

That's it: when we upload a video now, the editor references it with a video tag instead of the old embed tag. One problem remains, though: after uploading, the editor area appears blank (the upload actually succeeded, and switching to source mode shows the markup). Debugging in Chrome shows the issue is purely CSS: the embed placeholder is styled by the img.ke-media rule, while our new ke-video class has no rule at all.

So we just extend that style. Find the code at line 3528: 'img.ke-media {',

and change it to 'img.ke-media,img.ke-video {',

which gives ke-video exactly the same styling as ke-media. Save the file, clear the browser cache (or press Ctrl+F5), upload a video again, and the problem is solved.
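
For reference, with the _mediaVideo function above an uploaded MP4 ends up in the editor's HTML source looking roughly like this (the path and the exact attribute set are illustrative; they depend on what the upload plugin passes in):

<video src="/attached/video/demo.mp4" width="550" height="400"  controls="controls" />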

Before we begin

  • Someone pulled the wrong power cable, the VMs were force-powered-off, and after booting the cluster was dead
  • This post records how it was handled
  • The power loss corrupted the etcd snapshot data and there was no backup, so there is basically no clean fix
  • You could ask a professional DBA to look at the data and see whether recovery is possible
  • The workaround in this post is to delete part of the files in the etcd data directory.
  • The cluster can then start, but everything that was deployed is lost, including the CNI; the cluster's built-in DNS component is gone as well.
  • If my understanding falls short anywhere, corrections are welcome
  • Whether in production or in test, back up the etcd of your k8s cluster. Back up etcd. Back up etcd. Important things are worth saying three times (a minimal backup sketch follows this list).
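
A minimal backup sketch, assuming etcdctl v3 is available on the master node and the kubeadm default certificate paths (the same paths that appear in the etcd.yaml manifest later in this post); the backup target path is illustrative:

# run on the master; adjust endpoint, certificate paths and target path to your cluster
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backups/etcd-$(date +%F).db
# verify the snapshot afterwards
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-$(date +%F).db --write-out=table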

I wanted only to try to live in accord with the promptings which came from my true self. Why was that so very difficult? ------ Hermann Hesse, Demian


Current state of the cluster

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get nodes
The connection to the server 192.168.26.81:6443 was refused - did you specify the right host or port?

Restart docker and kubelet and try to bring things up

┌──[root@vms81.liruilongs.github.io]-[~]
└─$systemctl restart docker
┌──[root@vms81.liruilongs.github.io]-[~]
└─$systemctl restart kubelet.service

Still no luck. Check the kubelet logs on the master node:

┌──[root@vms81.liruilongs.github.io]-[~]
└─$journalctl  -u kubelet.service -f
1月 19 09:32:06 vms81.liruilongs.github.io kubelet[11344]: E0119 09:32:06.703418   11344 kubelet.go:2407] "Error getting node" err="node \"vms81.liruilongs.github.io\" not found"
1月 19 09:32:06 vms81.liruilongs.github.io kubelet[11344]: E0119 09:32:06.804201   11344 kubelet.go:2407] "Error getting node" err="node \"vms81.liruilongs.github.io\" not found"
1月 19 09:32:06 vms81.liruilongs.github.io kubelet[11344]: E0119 09:32:06.905156   11344 kubelet.go:2407] "Error getting node" err="node \"vms81.liruilongs.github.io\" not found"
1月 19 09:32:07 vms81.liruilongs.github.io kubelet[11344]: E0119 09:32:07.005487   11344 kubelet.go:2407] "Error getting node" err="node \"vms81.liruilongs.github.io\" not found"
1月 19 09:32:07 vms81.liruilongs.github.io kubelet[11344]: E0119 09:32:07.105648   11344 kubelet.go:2407] "Error getting node" err="node \"vms81.liruilongs.github.io\" not found"
1月 19 09:32:07 vms81.liruilongs.github.io kubelet[11344]: E0119 09:32:07.186066   11344 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://192.168.26.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/vms81.liruilongs.github.io?timeout=10s": dial tcp 192.168.26.81:6443: connect: connection refused
1月 19 09:32:07 vms81.liruilongs.github.io kubelet[11344]: E0119 09:32:07.205785   11344 kubelet.go:2407] "Error getting node" err="node \"vms81.liruilongs.github.io\" not found"

Use docker to see which pod containers currently exist

┌──[root@vms81.liruilongs.github.io]-[~]
└─$docker ps
CONTAINER ID   IMAGE                                               COMMAND                  CREATED          STATUS              PORTS     NAMES
d9d6471ce936   b51ddc1014b0                                        "kube-scheduler --au…"   17 minutes ago   Up 17 minutes                 k8s_kube-scheduler_kube-scheduler-vms81.liruilongs.github.io_kube-system_e1b874bfdef201d69db10b200b8f47d5_14
010c1b8c30c6   5425bcbd23c5                                        "kube-controller-man…"   17 minutes ago   Up 17 minutes                 k8s_kube-controller-manager_kube-controller-manager-vms81.liruilongs.github.io_kube-system_49b7654103f80170bfe29d034f806256_15
7e215924a1dd   registry.aliyuncs.com/google_containers/pause:3.5   "/pause"                 18 minutes ago   Up About a minute             k8s_POD_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_7
f557435d150e   registry.aliyuncs.com/google_containers/pause:3.5   "/pause"                 18 minutes ago   Up 18 minutes                 k8s_POD_kube-scheduler-vms81.liruilongs.github.io_kube-system_e1b874bfdef201d69db10b200b8f47d5_7
5deaffbc555a   registry.aliyuncs.com/google_containers/pause:3.5   "/pause"                 18 minutes ago   Up 18 minutes                 k8s_POD_kube-controller-manager-vms81.liruilongs.github.io_kube-system_49b7654103f80170bfe29d034f806256_7
a418c2ce33f2   registry.aliyuncs.com/google_containers/pause:3.5   "/pause"                 18 minutes ago   Up 18 minutes                 k8s_POD_kube-apiserver-vms81.liruilongs.github.io_kube-system_a35cb37b6c90c72f607936b33161eefe_6

etcd is not running, and neither is the apiserver.

┌──[root@vms81.liruilongs.github.io]-[~]
└─$docker ps -a | grep etcd
b5e18722315b   004811815584                                        "etcd --advertise-cl…"   5 minutes ago    Exited (2) About a minute ago             k8s_etcd_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_19
7e215924a1dd   registry.aliyuncs.com/google_containers/pause:3.5   "/pause"                 21 minutes ago   Up 4 minutes                              k8s_POD_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_7

Try restarting etcd

┌──[root@vms81.liruilongs.github.io]-[~]
└─$docker restart b5e18722315b
b5e18722315b

Check whether it stays up

┌──[root@vms81.liruilongs.github.io]-[~]
└─$docker ps -a | grep etcd
b5e18722315b   004811815584                                        "etcd --advertise-cl…"   5 minutes ago    Exited (2) About a minute ago             k8s_etcd_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_19
7e215924a1dd   registry.aliyuncs.com/google_containers/pause:3.5   "/pause"                 21 minutes ago   Up 4 minutes                              k8s_POD_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_7
┌──[root@vms81.liruilongs.github.io]-[~]
└─$docker logs b5e18722315b

Take a look at the etcd logs

┌──[root@vms81.liruilongs.github.io]-[~]
└─$docker logs 8a53cbc545e4
..................................................
{"level":"info","ts":"2023-01-19T01:34:24.332Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"5.557212ms"}
{"level":"warn","ts":"2023-01-19T01:34:24.332Z","caller":"wal/util.go:90","msg":"ignored file in WAL directory","path":"0000000000000014-0000000000185aba.wal.broken"}
{"level":"info","ts":"2023-01-19T01:34:24.770Z","caller":"etcdserver/server.go:508","msg":"recovered v2 store from snapshot","snapshot-index":26912747,"snapshot-size":"42 kB"}
{"level":"warn","ts":"2023-01-19T01:34:24.771Z","caller":"snap/db.go:88","msg":"failed to find [SNAPSHOT-INDEX].snap.db","snapshot-index":26912747,"snapshot-file-path":"/var/lib/etcd/member/snap/00000000019aa7eb.snap.db","error":"snap: snapshot file doesn't exist"}
{"level":"panic","ts":"2023-01-19T01:43:31.738Z","caller":"etcdserver/server.go:515","msg":"failed to recover v3 backend from snapshot","error":"failed to find database snapshot file (snap: snapshot file doesn't exist)","stacktrace":"go.etcd.io/etcd/server/v3/etcdserver.NewServer\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdserver/server.go:515\ngo.etcd.io/etcd/server/v3/embed.StartEtcd\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/embed/etcd.go:244\ngo.etcd.io/etcd/server/v3/etcdmain.startEtcd\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdmain/etcd.go:227\ngo.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdmain/etcd.go:122\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdmain/main.go:40\nmain.main\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/main.go:32\nruntime.main\n\t/home/remote/sbatsche/.gvm/gos/go1.16.3/src/runtime/proc.go:225"}
panic: failed to recover v3 backend from snapshot

goroutine 1 [running]:
go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc000114600, 0xc000588240, 0x1, 0x1)
        /home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/go.uber.org/zap@v1.17.0/zapcore/entry.go:234 +0x58d
go.uber.org/zap.(*Logger).Panic(0xc000080960, 0x122e2fc, 0x2a, 0xc000588240, 0x1, 0x1)
        /home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/go.uber.org/zap@v1.17.0/logger.go:227 +0x85
go.etcd.io/etcd/server/v3/etcdserver.NewServer(0x7ffe54af1e25, 0x1a, 0x0, 0x0, 0x0, 0x0, 0xc0004cf830, 0x1, 0x1, 0xc0004cfa70, ...)
        /tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdserver/server.go:515 +0x1656
go.etcd.io/etcd/server/v3/embed.StartEtcd(0xc0000ee000, 0xc0000ee600, 0x0, 0x0)
        /tmp/etcd-release-3.5.0/etcd/release/etcd/server/embed/etcd.go:244 +0xef8
go.etcd.io/etcd/server/v3/etcdmain.startEtcd(0xc0000ee000, 0x1202a6f, 0x6, 0xc000428401, 0x2)
        /tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdmain/etcd.go:227 +0x32
go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2(0xc00003a120, 0x12, 0x12)
        /tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdmain/etcd.go:122 +0x257a
go.etcd.io/etcd/server/v3/etcdmain.Main(0xc00003a120, 0x12, 0x12)
        /tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdmain/main.go:40 +0x11f
main.main()
        /tmp/etcd-release-3.5.0/etcd/release/etcd/server/main.go:32 +0x45

"msg":"failed to recover v3 backend from snapshot","error":"failed to find database snapshot file (snap: snapshot file doesn't exist)","

In plain words: etcd failed to recover its v3 backend from a snapshot because the database snapshot file it expected does not exist. The power outage corrupted the data files; etcd wants to recover from the snapshot, but there is no usable snapshot to recover from.

Unfortunately there is no backup here, so there is basically no way to repair this properly; in the end the cluster has to be rebuilt with kubeadm.
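
For completeness: if a snapshot backup had existed, the restore path would look roughly like this (a sketch only, assuming etcdctl v3; the member name and peer URL are taken from the etcd.yaml shown further below, and the snapshot path is illustrative):

# only possible when a snapshot file actually exists
# first move the damaged data directory out of the way, e.g. mv /var/lib/etcd /var/lib/etcd.bak
ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd-2023-01-18.db \
  --name=vms81.liruilongs.github.io \
  --initial-cluster=vms81.liruilongs.github.io=https://192.168.26.81:2380 \
  --initial-advertise-peer-urls=https://192.168.26.81:2380 \
  --data-dir=/var/lib/etcd
# then move etcd.yaml out of /etc/kubernetes/manifests and back to restart the static pod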

Some remedial measures

If you want to get the cluster to start by some other means, for example to recover some of its current configuration, you can try the approach below. Be warned: on my cluster this approach lost all pod data, and I still had to reset the cluster in the end.

If you do go this route, make sure you back up the etcd data files before deleting anything.

On the master, etcd runs as a static pod, so let's check its YAML manifest to see where the data directory is configured.

┌──[root@vms81.liruilongs.github.io]-[~]
└─$cd /etc/kubernetes/manifests/
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$ls
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

The flag we care about is --data-dir=/var/lib/etcd:

┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$cat etcd.yaml | grep -e "--"
    - --advertise-client-urls=https://192.168.26.81:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://192.168.26.81:2380
    - --initial-cluster=vms81.liruilongs.github.io=https://192.168.26.81:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://192.168.26.81:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://192.168.26.81:2380
    - --name=vms81.liruilongs.github.io
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

Here are the corresponding data files. You can try to repair them, or, if you just want the cluster to start quickly, back everything up and then delete the snapshot and WAL files as shown below.

┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd/member]
└─$tree
.
├── snap
│   ├── 0000000000000058-00000000019a0ba7.snap
│   ├── 0000000000000058-00000000019a32b8.snap
│   ├── 0000000000000058-00000000019a59c9.snap
│   ├── 0000000000000058-00000000019a80da.snap
│   ├── 0000000000000058-00000000019aa7eb.snap
│   └── db
└── wal
    ├── 0000000000000014-0000000000185aba.wal.broken
    ├── 0000000000000142-0000000001963c0e.wal
    ├── 0000000000000143-0000000001977bbe.wal
    ├── 0000000000000144-0000000001986aa6.wal
    ├── 0000000000000145-0000000001995ef6.wal
    ├── 0000000000000146-00000000019a544d.wal
    └── 1.tmp

2 directories, 13 files

Back up the data files first

┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
└─$ls
member
┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
└─$tar -cvf member.tar member/
member/
member/snap/
member/snap/db
member/snap/0000000000000058-00000000019a0ba7.snap
member/snap/0000000000000058-00000000019a32b8.snap
member/snap/0000000000000058-00000000019a59c9.snap
member/snap/0000000000000058-00000000019a80da.snap
member/snap/0000000000000058-00000000019aa7eb.snap
member/wal/
member/wal/0000000000000142-0000000001963c0e.wal
member/wal/0000000000000144-0000000001986aa6.wal
member/wal/0000000000000014-0000000000185aba.wal.broken
member/wal/0000000000000145-0000000001995ef6.wal
member/wal/0000000000000146-00000000019a544d.wal
member/wal/1.tmp
member/wal/0000000000000143-0000000001977bbe.wal
┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
└─$ls
member  member.tar
┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
└─$mv member.tar  /tmp/
┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
└─$
┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
└─$rm -rf  member/snap/*.snap
┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
└─$rm -rf  member/wal/*.wal
┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
└─$

Then restart the etcd container in docker, or restart the kubelet.

┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
└─$docker ps -a | grep etcd
a3b97cb34d9b   004811815584                                        "etcd --advertise-cl…"   2 minutes ago   Exited (2) 2 minutes ago              k8s_etcd_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_45
7e215924a1dd   registry.aliyuncs.com/google_containers/pause:3.5   "/pause"                 3 hours ago     Up 2 hours                            k8s_POD_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_7
┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
└─$docker start a3b97cb34d9b
a3b97cb34d9b
┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
└─$docker ps -a | grep etcd
e1fc068247af   004811815584                                        "etcd --advertise-cl…"   3 seconds ago   Up 2 seconds                          k8s_etcd_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_46
a3b97cb34d9b   004811815584                                        "etcd --advertise-cl…"   3 minutes ago   Exited (2) 3 seconds ago              k8s_etcd_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_45
7e215924a1dd   registry.aliyuncs.com/google_containers/pause:3.5   "/pause"                 3 hours ago     Up 2 hours                            k8s_POD_etcd-vms81.liruilongs.github.io_kube-system_1502584f9ab841720212d4341d723ba2_7
┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
└─$

Check the node status

┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
└─$kubectl get nodes
NAME                          STATUS   ROLES    AGE   VERSION
vms155.liruilongs.github.io   Ready    <none>   76s   v1.22.2
vms81.liruilongs.github.io    Ready    <none>   76s   v1.22.2
vms82.liruilongs.github.io    Ready    <none>   76s   v1.22.2
vms83.liruilongs.github.io    Ready    <none>   76s   v1.22.2
┌──[root@vms81.liruilongs.github.io]-[/var/lib/etcd]
└─$

List all pods currently in the cluster.

┌──[root@vms81.liruilongs.github.io]-[~/ansible/kubevirt]
└─$kubectl get pods -A
NAME                                                 READY   STATUS    RESTARTS         AGE
etcd-vms81.liruilongs.github.io                      1/1     Running   48 (3h35m ago)   3h53m
kube-apiserver-vms81.liruilongs.github.io            1/1     Running   48 (3h35m ago)   3h51m
kube-controller-manager-vms81.liruilongs.github.io   1/1     Running   17 (3h35m ago)   3h51m
kube-scheduler-vms81.liruilongs.github.io            1/1     Running   16 (3h35m ago)   3h52m

All the network-related pods are gone, and the cluster's DNS component is not running either, so the network would have to be set up again, which is a hassle. Oddly, when the network components are down you would normally expect all nodes to show NotReady, so something feels off here... For time reasons I needed a working cluster for experiments, so in the end I reset it with kubeadm and reapplied the network plugin:

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl apply -f calico.yaml
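
For reference, a rough outline of the kubeadm reset path (a sketch only; the init flags must match how the cluster was originally built, and every worker has to rejoin afterwards):

# on the control-plane node
kubeadm reset -f
kubeadm init --image-repository registry.aliyuncs.com/google_containers \
             --pod-network-cidr=10.244.0.0/16   # CIDR is illustrative, match it to your CNI
mkdir -p $HOME/.kube && cp /etc/kubernetes/admin.conf $HOME/.kube/config
kubectl apply -f calico.yaml                    # reinstall the CNI
# then run the printed 'kubeadm join ...' command on every worker node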

References


https://github.com/etcd-io/etcd/issues/11949

1. Preface

XXL-JOB is an excellent home-grown open-source distributed task scheduling platform. It comes with its own scheduling and registry center and offers a rich set of scheduling and blocking strategies, all managed through a web UI, which makes it very convenient to use.

Being a domestic project, it is quick to pick up, and its source code is well written. As a scheduling platform it makes heavy use of threads, which makes it especially worth studying.

XXL-JOB consists of two modules, the scheduling center and the executor. For the details, let's quote the official documentation:

  • Scheduling module (scheduling center):
    Responsible for managing scheduling information and issuing scheduling requests according to the configured schedule; it carries no business code itself. Decoupling the scheduler from the tasks improves the availability and stability of the system, and scheduler performance is no longer limited by the task modules;
    it supports visual, simple and dynamic management of scheduling information, including job creation, update, deletion, GLUE development and job alerting, with all of these operations taking effect in real time; it also supports monitoring of scheduling results and execution logs, as well as executor failover.
  • Executor module (executor):
    Responsible for receiving scheduling requests and executing the job logic. The task module can focus purely on executing tasks, which makes development and maintenance simpler and more efficient;
    it receives execution requests, termination requests, log requests and so on from the scheduling center.


In XXL-JOB the scheduling module and the task module are fully decoupled. When the scheduling module triggers a job, it parses the job's parameters and makes a remote call to the corresponding remote executor service. The calling model is similar to RPC: the scheduling center acts as the calling proxy, and the executors provide the remote services.

Let's look at how it is used in a Spring Boot environment, starting with the executor configuration:

    @Bean
    public XxlJobSpringExecutor xxlJobExecutor() {
        logger.info(">>>>>>>>>>> xxl-job config init.");
        XxlJobSpringExecutor xxlJobSpringExecutor = new XxlJobSpringExecutor();
        //scheduling center address(es)
        xxlJobSpringExecutor.setAdminAddresses(adminAddresses);
        //executor AppName
        xxlJobSpringExecutor.setAppname(appname);
        //executor registration address, usually left empty
        xxlJobSpringExecutor.setAddress(address);
        //executor IP [optional]: empty means auto-detect
        xxlJobSpringExecutor.setIp(ip);
        //executor port
        xxlJobSpringExecutor.setPort(port);
        //access token shared with the scheduling center
        xxlJobSpringExecutor.setAccessToken(accessToken);
        //directory for the executor's job log files
        xxlJobSpringExecutor.setLogPath(logPath);
        //how many days to keep the log files
        xxlJobSpringExecutor.setLogRetentionDays(logRetentionDays);

        return xxlJobSpringExecutor;
    }
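
The fields referenced above (adminAddresses, appname, port and so on) are normally injected from configuration. A typical application.properties might look like the following; the property names follow the official xxl-job-executor-sample project, so adjust them to however your own project wires these values in:

    ### scheduling center address(es), comma separated
    xxl.job.admin.addresses=http://127.0.0.1:8080/xxl-job-admin
    ### executor registration
    xxl.job.executor.appname=xxl-job-executor-sample
    xxl.job.executor.address=
    xxl.job.executor.ip=
    xxl.job.executor.port=9999
    xxl.job.accessToken=
    ### executor log files
    xxl.job.executor.logpath=/data/applogs/xxl-job/jobhandler
    xxl.job.executor.logretentiondays=30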

XXL-JOB supports several job execution modes; today we will look at the simplest one, the Bean mode. For example:

    /**
     * 1. Simple job example (Bean mode)
     */
    @XxlJob("demoJobHandler")
    public void demoJobHandler() throws Exception {
        XxlJobHelper.log("XXL-JOB, Hello World.");

        for (int i = 0; i < 5; i++) {
            XxlJobHelper.log("beat at:" + i);
            TimeUnit.SECONDS.sleep(2);
        }
        // default success
    }

With a little configuration in the scheduling center, this piece of code will be scheduled and executed according to whatever policy we choose. Almost magic, right? Let's first look at the explanation from the official docs:

Principle: each Bean-mode task is a Spring bean instance maintained in the Spring container of the "executor" project. The task class needs the @JobHandler(value="name") annotation, because the "executor" identifies tasks in the Spring container by this annotation. The task class needs to implement the common IJobHandler interface and put its logic in the execute method, because when the "executor" receives a scheduling request from the scheduling center it will call IJobHandler's execute method to run the task logic.

What you learn on paper always feels shallow; real understanding only comes from doing. So today's task is to follow that passage and take a broad look at how the source code implements it.

2. XxlJobSpringExecutor

As the name suggests, XxlJobSpringExecutor is the adapter class XXL-JOB provides for Spring-based applications. Let's first look at its structure.


XxlJobSpringExecutor extends XxlJobExecutor and, since it runs in a Spring environment, also implements several built-in Spring interfaces that together make the executor module work. I won't go through what each interface does; that is easy to look up.

Let's look at the initialization method afterSingletonsInstantiated:

    // start
    @Override
    public void afterSingletonsInstantiated() {

        //register every job handler
        initJobHandlerMethodRepository(applicationContext);

        // refresh GlueFactory
        GlueFactory.refreshInstance(1);

        // super start
        try {
            super.start();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

The main flow looks fairly simple: first every JobHandler is registered, then the executor is started. GlueFactory.refreshInstance(1) is used for a different, Groovy-based execution mode that is outside the scope of this analysis, so we will skip it. Let's continue with how a JobHandler gets registered.

 private void initJobHandlerMethodRepository(ApplicationContext applicationContext) {
        if (applicationContext == null) {
            return;
        }
        // iterate over all beans and collect every method annotated with @XxlJob
        String[] beanDefinitionNames = applicationContext.getBeanNamesForType(Object.class, false, true);
        for (String beanDefinitionName : beanDefinitionNames) {
            Object bean = applicationContext.getBean(beanDefinitionName);

            Map<Method, XxlJob> annotatedMethods = null;   // referred to :org.springframework.context.event.EventListenerMethodProcessor.processBean
            try {
                annotatedMethods = MethodIntrospector.selectMethods(bean.getClass(),
                        new MethodIntrospector.MetadataLookup<XxlJob>() {
                            @Override
                            public XxlJob inspect(Method method) {
                                return AnnotatedElementUtils.findMergedAnnotation(method, XxlJob.class);
                            }
                        });
            } catch (Throwable ex) {
                logger.error("xxl-job method-jobhandler resolve error for bean[" + beanDefinitionName + "].", ex);
            }
            if (annotatedMethods==null || annotatedMethods.isEmpty()) {
                continue;
            }
            //for each @XxlJob method, register the executeMethod together with the initMethod/destroyMethod named in the annotation
            for (Map.Entry<Method, XxlJob> methodXxlJobEntry : annotatedMethods.entrySet()) {
                Method executeMethod = methodXxlJobEntry.getKey();
                XxlJob xxlJob = methodXxlJobEntry.getValue();
                if (xxlJob == null) {
                    continue;
                }

                String name = xxlJob.value();
                if (name.trim().length() == 0) {
                    throw new RuntimeException("xxl-job method-jobhandler name invalid, for[" + bean.getClass() + "#" + executeMethod.getName() + "] .");
                }
                if (loadJobHandler(name) != null) {
                    throw new RuntimeException("xxl-job jobhandler[" + name + "] naming conflicts.");
                }

                executeMethod.setAccessible(true);

                // init and destroy
                Method initMethod = null;
                Method destroyMethod = null;

                if (xxlJob.init().trim().length() > 0) {
                    try {
                        initMethod = bean.getClass().getDeclaredMethod(xxlJob.init());
                        initMethod.setAccessible(true);
                    } catch (NoSuchMethodException e) {
                        throw new RuntimeException("xxl-job method-jobhandler initMethod invalid, for[" + bean.getClass() + "#" + executeMethod.getName() + "] .");
                    }
                }
                if (xxlJob.destroy().trim().length() > 0) {
                    try {
                        destroyMethod = bean.getClass().getDeclaredMethod(xxlJob.destroy());
                        destroyMethod.setAccessible(true);
                    } catch (NoSuchMethodException e) {
                        throw new RuntimeException("xxl-job method-jobhandler destroyMethod invalid, for[" + bean.getClass() + "#" + executeMethod.getName() + "] .");
                    }
                }

                // register the jobhandler
                registJobHandler(name, new MethodJobHandler(bean, executeMethod, initMethod, destroyMethod));
            }
        }

    }

Because XxlJobSpringExecutor implements ApplicationContextAware, it can use the applicationContext to obtain every bean in the container. MethodIntrospector then filters out all methods carrying the @XxlJob annotation, and finally each executeMethod, together with the initMethod and destroyMethod named in the annotation, is registered into jobHandlerRepository, a thread-safe ConcurrentMap. MethodJobHandler is a template class implementing the IJobHandler interface whose main job is to invoke the corresponding method via reflection. At this point the earlier statement, that the task class needs the @JobHandler(value="name") annotation because the "executor" identifies tasks in the Spring container by that annotation, makes sense.

public class MethodJobHandler extends IJobHandler {
    ....
    public MethodJobHandler(Object target, Method method, Method initMethod, Method destroyMethod) {
        this.target = target;
        this.method = method;

        this.initMethod = initMethod;
        this.destroyMethod = destroyMethod;
    }

    @Override
    public void execute() throws Exception {
        Class<?>[] paramTypes = method.getParameterTypes();
        if (paramTypes.length > 0) {
            method.invoke(target, new Object[paramTypes.length]);       // method-param can not be primitive-types
        } else {
            method.invoke(target);
        }
    }
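
The jobHandlerRepository mentioned above lives in XxlJobExecutor and is essentially just a name-to-handler map. A simplified sketch of that registry (the real source may differ in detail):

    // inside XxlJobExecutor (simplified sketch, not the verbatim source)
    private static ConcurrentMap<String, IJobHandler> jobHandlerRepository =
            new ConcurrentHashMap<String, IJobHandler>();

    public static IJobHandler registJobHandler(String name, IJobHandler jobHandler) {
        logger.info(">>>>>>>>>>> xxl-job register jobhandler success, name:{}", name);
        return jobHandlerRepository.put(name, jobHandler);
    }

    public static IJobHandler loadJobHandler(String name) {
        return jobHandlerRepository.get(name);
    }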

3. The embedded executor server: initEmbedServer

With JobHandler registration covered, the next step is starting the executor module itself. Here is the start method:

    public void start() throws Exception {

        // initialize the log path
        XxlJobFileAppender.initLogPath(logPath);

        // initialize adminBizList (the scheduling center clients)
        initAdminBizList(adminAddresses, accessToken);

        // start the log file cleanup thread
        JobLogFileCleanThread.getInstance().start(logRetentionDays);

        // start the callback thread that reports execution results back to the scheduling center
        TriggerCallbackThread.getInstance().start();

        // start the embedded server
        initEmbedServer(address, ip, port, appname, accessToken);
    }

We won't dwell on the first few steps (feel free to explore them yourself) and go straight into initEmbedServer to see how the embedded server starts and how the executor registers itself with the scheduling center.

    private void initEmbedServer(String address, String ip, int port, String appname, String accessToken) throws Exception {
        ...
        // start
        embedServer = new EmbedServer();
        embedServer.start(address, port, appname, accessToken);
    }

    public void start(final String address, final int port, final String appname, final String accessToken) {
        ...
        // start the netty server
        ServerBootstrap bootstrap = new ServerBootstrap();
        bootstrap.group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    public void initChannel(SocketChannel channel) throws Exception {
                        channel.pipeline()
                                .addLast(new IdleStateHandler(0, 0, 30 * 3, TimeUnit.SECONDS))  // beat 3N, close if idle
                                .addLast(new HttpServerCodec())
                                .addLast(new HttpObjectAggregator(5 * 1024 * 1024))  // merge request & reponse to FULL
                                .addLast(new EmbedHttpServerHandler(executorBiz, accessToken, bizThreadPool));
                    }
                })
                .childOption(ChannelOption.SO_KEEPALIVE, true);

        // bind
        ChannelFuture future = bootstrap.bind(port).sync();

        logger.info(">>>>>>>>>>> xxl-job remoting server start success, nettype = {}, port = {}", EmbedServer.class, port);

        // register with the scheduling center
        startRegistry(appname, address);
        ...
    }

The executor module needs to be reachable over the network, otherwise the scheduling center could not call it, so it embeds a netty server for communication. Once the server is up, the executor formally sends a registration request to the scheduling center. Here is the registration code:

    RegistryParam registryParam = new RegistryParam(RegistryConfig.RegistType.EXECUTOR.name(), appname, address);
    for (AdminBiz adminBiz: XxlJobExecutor.getAdminBizList()) {
        try {
            //send the registration request
            ReturnT<String> registryResult = adminBiz.registry(registryParam);
            if (registryResult!=null && ReturnT.SUCCESS_CODE == registryResult.getCode()) {
                registryResult = ReturnT.SUCCESS;
                logger.debug(">>>>>>>>>>> xxl-job registry success, registryParam:{}, registryResult:{}", new Object[]{registryParam, registryResult});
                break;
            } else {
                logger.info(">>>>>>>>>>> xxl-job registry fail, registryParam:{}, registryResult:{}", new Object[]{registryParam, registryResult});
            }
        } catch (Exception e) {
            logger.info(">>>>>>>>>>> xxl-job registry error, registryParam:{}", registryParam, e);
        }
    }

    @Override
    public ReturnT<String> registry(RegistryParam registryParam) {
        return XxlJobRemotingUtil.postBody(addressUrl + "api/registry", accessToken, timeout, registryParam, String.class);
    }

XxlJobRemotingUtil.postBody is simply an HTTP POST helper following XXL-JOB's own RESTful convention. Besides registration it also carries de-registration requests, callback requests and so on; for brevity I won't show them all. When the scheduling center receives such a request it performs the corresponding database handling:

        // services mapping
        if ("callback".equals(uri)) {
            List<HandleCallbackParam> callbackParamList = GsonTool.fromJson(data, List.class, HandleCallbackParam.class);
            return adminBiz.callback(callbackParamList);
        } else if ("registry".equals(uri)) {
            RegistryParam registryParam = GsonTool.fromJson(data, RegistryParam.class);
            return adminBiz.registry(registryParam);
        } else if ("registryRemove".equals(uri)) {
            RegistryParam registryParam = GsonTool.fromJson(data, RegistryParam.class);
            return adminBiz.registryRemove(registryParam);
        } else {
            return new ReturnT<String>(ReturnT.FAIL_CODE, "invalid request, uri-mapping("+ uri +") not found.");
        }

At this point we have a rough picture of the whole registration flow. Likewise, when the scheduling center sends a request to our executor, for example a request to trigger a job, it is the same kind of HTTP request, sent to the embedded netty server analysed above. Only the calling method is shown here:

    @Override
    public ReturnT<String> run(TriggerParam triggerParam) {
        return XxlJobRemotingUtil.postBody(addressUrl + "run", accessToken, timeout, triggerParam, String.class);
    }

When the executor module receives such a request, it looks up the jobHandler we registered earlier and runs the corresponding method: the executor puts the request into an asynchronous execution queue, responds to the scheduling center immediately, and runs the method asynchronously, as sketched below. That completes a rough walk-through of the registration and execution flow.
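
To make that hand-off concrete, here is a heavily simplified illustration of the idea (not the actual XXL-JOB source, which uses a per-job JobThread): the request is queued and acknowledged immediately, and a worker thread later looks up the registered handler and invokes it.

    // simplified illustration of the async hand-off, not the real XXL-JOB code
    public class AsyncTriggerSketch {
        private final BlockingQueue<TriggerParam> triggerQueue = new LinkedBlockingQueue<TriggerParam>();

        // called on the netty IO thread when a "run" request arrives
        public ReturnT<String> run(TriggerParam triggerParam) {
            triggerQueue.offer(triggerParam);   // enqueue only, never block the IO thread
            return ReturnT.SUCCESS;             // answer the scheduling center right away
        }

        // executed by a dedicated worker thread
        public void workLoop() throws Exception {
            while (!Thread.currentThread().isInterrupted()) {
                TriggerParam param = triggerQueue.poll(3, TimeUnit.SECONDS);
                if (param == null) {
                    continue;
                }
                IJobHandler handler = XxlJobExecutor.loadJobHandler(param.getExecutorHandler());
                if (handler != null) {
                    handler.execute();          // ends up calling the @XxlJob method via reflection
                }
            }
        }
    }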

4. Closing

Of course, XXL-JOB has many more features than I can cover here; this post is only meant to introduce the most basic parts and, hopefully, spark your interest. If you are curious, go read the relevant code yourself. All in all it is an excellent domestic open-source project and deserves credit, and I hope we will see more and more good open-source frameworks from China.

If you found this article helpful, feel free to share it and follow for more.