Common Docker Services
Contents
- 1 Prerequisites
- 2 Create the manager node and worker nodes
- 3 Create the network
- 4 Docker network as a replacement for --link
- 5 Create etcd
- 6 MySQL database cluster
- 7 phpmyadmin
- 8 docker service create
- 9 phpserver
- 10 phpserver7
- 11 mediawiki
- 12 mediawiki sincere
- 13 kityminder
- 14 mantis
- 15 ZenTao
- 16 eclipse che
- 17 maven
- 18 docker-mailserver
- 19 RabbitMQ
- 20 rocketmq
- 21 activemq
- 22 postgresql
- 23 redis
- 24 gitlab
- 25 zookeeper
- 26 kafka
- 27 Jenkins
- 28 ELK
- 29 skywalking
- 30 MongoDB
- 31 zipkin
- 32 tomcat
- 33 oracle
- 34 mycollab
- 35 jira
- 36 fecru
- 37 kodexplorer
- 38 sonarqube
- 39 orangescrum (failed)
- 40 onlyoffice
- 41 sqlserver
- 42 ceph
- 43 cachecloud
- 44 Installation
- 45 vpn
- 46 tuleap
- 47 minio
- 48 lychee
- 49 neo4j
- 50 aliyun-ddns-cli
- 51 nifi
- 52 sentinel
- 53 nacos
- 54 zentao
- 55 Memcached
- 56 flink
- 57 eureka
- 58 appollo
- 59 maxkey
- 60 nextcloud
- 61 ppdocr
- 62 Davinci
- 63 metabase
- 64 clickhouse
- 65 hadoop
- 66 melody
- 67 xxl-job
- 68 nginx
- 69 alist
- 70 doccano
- 71 Troubleshooting
Prerequisites
- Docker is already installed
- Swarm is used for cluster management
- Kubernetes is not used because it is too complex; without hands-on production experience I am not studying it further for now. For a broader comparison of orchestration tools, see Swarm、Fleet、Kubernetes、Mesos - 编排工具对比分析
- Because this is a test setup, every service is installed with --mode global ("run the service task on every available node in the cluster"). In production you can use --replicas 3 to scale the number of replicas instead
- After each host reboot Swarm re-creates the services in a fresh environment and their data is lost, so critical data must be mounted externally
apt-get -y upgrade
apt-get install -y inetutils-ping telnet net-tools vim dnsutils wget
nslookup app-kafka-v1-2.app-kafka.cafmfbiqa.svc.cluster.local
netstat -lntp
Create the manager node and worker nodes
docker swarm init --advertise-addr 192.168.1.3
docker swarm join \
--token SWMTKN-1-59mrqeyld5khbaaw2ms24wnh9uvyrl05ik6vnmfr2qwhhc6y6x-2amqoic3looh793eexqjhcbc3 \
192.168.1.3:2377
docker swarm join-token manager
docker node ls
docker swarm leave
If the node itself is a manager:
docker swarm leave --force
cloud.ling2.cn
Swarm initialized: current node (8wjh2i8fi40u9gg41nipxofws) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-4vjcoz8zmnadnroo3xoehngjfsdrbv9i6u9c7leg2ficnir2ep-789yvc3m7dyx2gwxuciwwpfov 192.168.0.220:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
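The join command above can be assembled from the token that `docker swarm join-token worker -q` prints on a manager; a minimal sketch with a placeholder token:

```shell
# Placeholder token; on a real manager obtain it with:
#   docker swarm join-token worker -q
TOKEN=SWMTKN-1-xxxx
MANAGER_ADDR=192.168.0.220:2377
JOIN_CMD="docker swarm join --token $TOKEN $MANAGER_ADDR"
echo "$JOIN_CMD"   # run the printed command on each worker node
```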
Create the network
docker network create --driver overlay ling-network
By default, a network created with docker network create -d overlay NET can only be used by swarm services; to let standalone containers use it as well, add the --attachable option
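A sketch of the attachable variant (the network name follows this document; only the extra flag differs from the create command above):

```shell
# Overlay network that standalone `docker run` containers can also join,
# not just swarm services:
ATTACHABLE="docker network create --driver overlay --attachable ling-network"
echo "$ATTACHABLE"
```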
Docker network as a replacement for --link
docker run -d -p 3306:3306 --name wp-mysql --network wp-net --network-alias mysql -e MYSQL_ROOT_PASSWORD=123 mysql
- docker run: start a container
- -d: run in the background (detached)
- -p 3306:3306: map container port 3306 to port 3306 on the host
- --name wp-mysql: name the container wp-mysql
- --network wp-net: attach the container to the wp-net network
- --network-alias mysql: give the container the alias mysql on the wp-net network
- -e MYSQL_ROOT_PASSWORD=123: initialize the database root password to 123
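As a sketch of how the alias is consumed, a second container on wp-net reaches the database simply as host mysql ("wordpress" here is only an example image; WORDPRESS_DB_HOST is that image's documented variable):

```shell
# The app never needs the MySQL container's IP or --link; the network
# alias "mysql" resolves inside wp-net.
APP_CMD="docker run -d --network wp-net -e WORDPRESS_DB_HOST=mysql:3306 wordpress"
echo "$APP_CMD"
```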
Create etcd
Before every install you must run
curl -w "\n" 'https://discovery.etcd.io/new?size=1'
to obtain a fresh discovery URL, otherwise etcd cannot be installed.
docker service create \
--name etcd \
--mode=global \
--network ling-network \
-p 2379:2379 \
-p 2380:2380 \
-p 4001:4001 \
-p 7001:7001 \
elcolio/etcd:latest \
-name etcd \
-discovery=https://discovery.etcd.io/d1568ff86169e87cd44242247c39ab76
MySQL database cluster
The data-disk mount problem is solved; for more, see Docker_镜像制作#Docker mysql集群 (Linux restore/install; skipping master-slave replication errors and ignoring multiple tables)
For case-sensitivity issues, see Ling-cloud#mysql
show global variables like '%lower_case%';
The wiki MySQL instance is not exposed to the public network
lingserver
tee /alidata/dockerdata/mysqldata/mysql-cloud/conf/docker.cnf <<-'EOF'
[mysqld]
skip-host-cache
skip-name-resolve
lower_case_table_names=1
EOF
docker run --name mysql --restart=always -p 3307:3306 \
-v /alidata/dockerdata/mysqldata/mysql-cloud/data/:/var/lib/mysql \
-v /alidata/dockerdata/mysqldata/mysql-cloud/conf/:/etc/mysql/conf.d \
-e MYSQL_ROOT_PASSWORD=Wb19831010! -d mysql
backup
docker run --name mysql --restart=always -v /alidata/dockerdata/mysqldata/mysql-cloud:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=Wb19831010! -d mysql:5.7.18
docker run --name mysqldev --restart=always -p 3306:3306 -v /alidata/dockerdata/mysqldata/mysql-dev:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root -d mysql
docker run --name mysql-sincere --restart=always -p 3307:3306 -v /alidata/dockerdata/mysqldata/mysql-sincere:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=Wb19831010! -d mysql
docker run --name mysql-idp --restart=always -p 3307:3306 -v /alidata/dockerdata/mysqldata/mysql-idp:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=Wb19831010! -d mysql:5.7.18
The setting here has no effect; using CHANGE REPLICATION FILTER REPLICATE_WILD_IGNORE_TABLE = ('wikipro.wiki_objectcache','wordpress.wp_options'); does work
ecs
tee /alidata/dockerdata/mysqldata/ecs_master.cnf <<-'EOF'
[mysqld]
log-bin = mysql-bin
server-id = 1
expire-logs-days = 100
binlog-ignore-db = mysql
log-slave-updates
auto-increment-increment = 2
auto-increment-offset = 1
#slave-skip-errors=all
lower_case_table_names=1
replicate-ignore-table=wikipro.wiki_objectcache,wordpress.wp_options
sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION'
EOF
surface
tee /alidata/dockerdata/mysqldata/ecs_slave.cnf <<-'EOF'
[mysqld]
log-bin = mysql-bin
server-id = 2
expire-logs-days = 100
binlog-ignore-db = mysql
log-slave-updates
auto-increment-increment = 2
auto-increment-offset = 2
#slave-skip-errors=all
replicate-ignore-table=wikipro.wiki_objectcache,wordpress.wp_options
sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION'
EOF
ecs
docker run --name mysql --restart=always -p 3306:3306 \
-v /alidata/dockerdata/mysqldata/mysql-cloud:/var/lib/mysql \
-v /alidata/dockerdata/mysqldata/ecs_master.cnf:/etc/mysql/conf.d/config-file.cnf \
-e MYSQL_ROOT_PASSWORD=Wb19831010! -d mysql
surface
docker run --name mysql --restart=always -p 3306:3306 \
-v /alidata/dockerdata/mysqldata/mysql-cloud:/var/lib/mysql \
-v /alidata/dockerdata/mysqldata/ecs_slave.cnf:/etc/mysql/conf.d/config-file.cnf \
-e MYSQL_ROOT_PASSWORD=Wb19831010! -d mysql
For MySQL master-master (dual-master) replication, see Docker_镜像制作#Docker mysql集群
SHOW MASTER status;
CHANGE MASTER TO MASTER_HOST='www.ling2.cn',MASTER_USER='root',MASTER_PASSWORD='Wb19831010!',MASTER_LOG_FILE='mysql-bin.000006',MASTER_LOG_POS=585055;
CHANGE REPLICATION FILTER REPLICATE_WILD_IGNORE_TABLE = ('wikipro.wiki_objectcache','wordpress.wp_options');
STOP SLAVE;
RESET SLAVE;
START SLAVE;
SHOW SLAVE STATUS;
docker service create \
--name mysql-galera \
--mode=global \
-p 3306:3306 \
--mount type=bind,source=/alidata/dockerdata/mysqldata/mysql-master-slave-master,target=/var/lib/mysql \
--network ling-network \
--env MYSQL_ROOT_PASSWORD=Wb19831010! \
--env DISCOVERY_SERVICE=10.0.0.2:2379 \
--env XTRABACKUP_PASSWORD=Wb19831010! \
--env CLUSTER_NAME=galera perconalab/percona-xtradb-cluster:5.7
latest is broken; only 5.7 works
Changing the maximum number of connections
(Method 1) Temporarily change the maximum number of connections
1. Enter the mysql container
docker exec -it mysql /bin/bash
2. Log in with the password
mysql -u root -p
Enter the password
3. Check the log setting
show global variables like 'log';
4. Check the maximum wait time
show global variables like 'wait_timeout';
5. Check the maximum number of connections
show variables like 'max_connections';
6. Temporarily raise the maximum number of connections
set global max_connections=3000;
7. Verify the maximum number of connections
show variables like 'max_connections';
8. Check the maximum wait time
show global variables like 'wait_timeout';
9. Temporarily set the wait time
set global interactive_timeout=500;
10. Check the current number of connections
show status like 'Threads%';
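The `set global` values above are lost on restart; a sketch of making them permanent via a drop-in file in the mounted conf directory (written to /tmp here for illustration — the path used elsewhere in this document is /alidata/dockerdata/mysqldata/mysql-cloud/conf/):

```shell
# Drop-in config fragment; MySQL reads every *.cnf under /etc/mysql/conf.d
tee /tmp/max-connections.cnf <<-'EOF'
[mysqld]
max_connections=3000
wait_timeout=500
interactive_timeout=500
EOF
```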
Changing the password, etc.
docker exec -it mysql /bin/bash
mysql -u root -p
(enter the password)
alter user 'root' identified by 'xxxxxx';
flush privileges;
OPTIMIZE TABLE `base_reader_book_chapter`;
OPTIMIZE TABLE `base_reader_book`;
OPTIMIZE TABLE `base_fileitemchangehis`;
Case sensitivity
docker exec -it mysql bash
cd /etc/mysql/conf.d
tee /etc/mysql/conf.d/docker.cnf <<-'EOF'
[mysqld]
skip-host-cache
skip-name-resolve
lower_case_table_names=1
EOF
phpmyadmin
https://hub.docker.com/r/phpmyadmin/phpmyadmin
docker run --name phpmyadmin -d --link mysql -e PMA_HOST="mysql" -p 6061:80 phpmyadmin/phpmyadmin
docker run --name phpmyadmin -d -p 6081:80 -e PMA_ABSOLUTE_URI=http://172.16.97.26:6080/phpmyadmin/ phpmyadmin/phpmyadmin
See Ariba#docker
docker service create
short | option | explanation | default |
---|---|---|---|
 | --constraint value | Placement constraints | (default []) |
 | --container-label value | Container labels | (default []) |
 | --endpoint-mode string | Endpoint mode (vip or dnsrr) | |
-e | --env value | Set environment variables | (default []) |
 | --help | Print usage | |
-l | --label value | Service labels | (default []) |
 | --limit-cpu value | Limit CPUs | (default 0.000) |
 | --limit-memory value | Limit Memory | (default 0 B) |
 | --log-driver string | Logging driver for service | |
 | --log-opt value | Logging driver options | (default []) |
 | --mode string | Service mode (replicated or global) | (default "replicated") |
 | --mount value | Attach a mount to the service | |
 | --name string | Service name | |
 | --network value | Network attachments | (default []) |
-p | --publish value | Publish a port as a node port | (default []) |
 | --replicas value | Number of tasks | (default none) |
 | --reserve-cpu value | Reserve CPUs | (default 0.000) |
 | --reserve-memory value | Reserve Memory | (default 0 B) |
 | --restart-condition string | Restart when condition is met (none, on-failure, or any) | |
 | --restart-delay value | Delay between restart attempts | (default none) |
 | --restart-max-attempts value | Maximum number of restarts before giving up | (default none) |
 | --restart-window value | Window used to evaluate the restart policy | (default none) |
 | --stop-grace-period value | Time to wait before force killing a container | (default none) |
 | --update-delay duration | Delay between updates | |
 | --update-failure-action string | Action on update failure (pause or continue) | |
 | --update-parallelism uint | Maximum number of tasks updated simultaneously (0 to update all at once) | (default 1) |
-u | --user string | Username or UID | |
 | --with-registry-auth | Send registry authentication details to swarm agents | |
-w | --workdir string | Working directory inside the container | |
phpserver
Supports Mantis
cd /alidata/dockerdata/phpserver
docker build -t docker.ling2.cn/phpserver .
docker login --username=admin docker.ling2.cn
(password: admin123)
docker push docker.ling2.cn/phpserver
tee /alidata/dockerdata/phpserver/Dockerfile <<-'EOF'
FROM php:5.6-apache
MAINTAINER kev <noreply@easypi.info>
RUN a2enmod rewrite
RUN set -xe \
&& apt-get update \
&& apt-get install -y libpng12-dev libjpeg-dev libpq-dev libxml2-dev \
&& docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
&& docker-php-ext-install gd mbstring mysql mysqli pgsql soap \
&& rm -rf /var/lib/apt/lists/*
RUN set -xe \
&& ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
&& echo 'date.timezone = "Asia/Shanghai"' > /usr/local/etc/php/php.ini
EOF
tee /alidata/dockerdata/phpserver/Dockerfile <<-'EOF'
FROM php:5.6-apache
MAINTAINER kev <noreply@easypi.info>
RUN a2enmod rewrite
RUN set -xe \
&& apt-get update \
&& apt-get install -y libpng12-dev libjpeg-dev libpq-dev libxml2-dev \
&& docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
&& docker-php-ext-install gd mbstring mysql mysqli pgsql soap \
&& apt-get install -y wget git unzip nginx fontconfig-config fonts-dejavu-core \
php5-fpm php5-common php5-json php5-cli php5-mysql \
php5-gd php5-imagick php5-mcrypt php5-readline psmisc ssl-cert \
ufw php-pear libgd-tools libmcrypt-dev mcrypt mysql-server mysql-client \
&& rm -rf /var/lib/apt/lists/*
RUN php5enmod mcrypt
RUN set -xe \
&& ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
&& echo 'date.timezone = "Asia/Shanghai"' > /usr/local/etc/php/php.ini
EOF
phpserver7
Supports Mantis
cd /alidata/dockerdata/phpserver7
docker build -t docker.ling2.cn/phpserver7 .
docker login --username=admin docker.ling2.cn
(password: admin123)
docker push docker.ling2.cn/phpserver7
tee /alidata/dockerdata/phpserver7/Dockerfile <<-'EOF'
FROM php:7.2-apache
MAINTAINER kev <noreply@easypi.info>
# Note: the php5-* packages, mcrypt, and ext/mysql from the PHP 5 recipe do
# not exist for PHP 7.2 and were dropped; libpng12-dev was renamed libpng-dev
RUN a2enmod rewrite
RUN set -xe \
&& apt-get update \
&& apt-get install -y libpng-dev libjpeg-dev libpq-dev libxml2-dev \
wget git unzip psmisc ssl-cert php-pear libgd-tools \
&& docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
&& docker-php-ext-install gd mbstring mysqli pdo_mysql pgsql soap \
&& rm -rf /var/lib/apt/lists/*
RUN set -xe \
&& ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
&& echo 'date.timezone = "Asia/Shanghai"' > /usr/local/etc/php/php.ini
EOF
mediawiki
mediawiki docker
docker run --name mediawiki -it -p 73:80 --link mysql:mysql -d mediawiki
mediawiki.yml
# MediaWiki with MariaDB
#
# Access via "http://localhost:8080"
# (or "http://$(docker-machine ip):8080" if using docker-machine)
version: '3'
services:
mediawiki:
image: mediawiki
restart: always
ports:
- 73:80
links:
- database
volumes:
- /alidata/dockerdata/mediawiki/images:/var/www/html/images
#- /alidata/dockerdata/mediawiki/LocalSettings.php:/var/www/html/LocalSettings.php
#- /alidata/dockerdata/wordpress:/var/www/html/wordpress
# After initial setup, download LocalSettings.php to the same directory as
# this yaml and uncomment the following line and use compose to restart
# the mediawiki service
# - ./LocalSettings.php:/var/www/html/LocalSettings.php
database:
image: mariadb
restart: always
environment:
# @see https://phabricator.wikimedia.org/source/mediawiki/browse/master/includes/DefaultSettings.php
MYSQL_DATABASE: my_wiki
MYSQL_USER: wikiuser
MYSQL_PASSWORD: example
MYSQL_RANDOM_ROOT_PASSWORD: 'yes'
mediawiki old
- Edit php.ini to raise the upload size limits; see [1]
- LocalSettings.php
$wgUploadSizeWarning = 2147483647;
$wgMaxUploadSize = 2147483647;
- PHP.INI
memory_limit = 2048M
post_max_size = 2048M
upload_max_filesize = 2048M
- copy php.ini
docker cp mediawiki:/usr/local/etc/php/php.ini /alidata/dockerdata/mediawiki/php/php.ini
- Update or download the latest MediaWiki release
cd /alidata/dockerdata/mediawiki
wget https://releases.wikimedia.org/mediawiki/1.28/mediawiki-1.28.2.tar.gz
tar -xzvf mediawiki-1.28.2.tar.gz
- Start the container
docker run --name mediawiki -it -p 73:80 \
-v /alidata/dockerdata/mediawiki/mediawiki-1.28.2:/var/www/html \
-v /alidata/dockerdata/mediawiki/images:/var/www/html/images \
-v /alidata/dockerdata/mediawiki/php/php.ini:/usr/local/etc/php/php.ini \
-v /alidata/dockerdata/mediawiki/LocalSettings.php:/var/www/html/LocalSettings.php \
--link mysql:mysql -d docker.ling2.cn/phpserver
sudo docker run --name mediawiki --restart=always \
-v /alidata/dockerdata/mediawiki/LocalSettings.php:/usr/share/nginx/html/LocalSettings.php:ro \
-v /alidata/dockerdata/mediawiki/images:/usr/share/nginx/html/images \
-v /alidata/dockerdata/mediawiki/extensions:/tmp/extensions \
-p 73:80 -d --link mysql:db simplyintricate/mediawiki
chown -R www-data:www-data /usr/share/nginx/html
docker run --name mediawiki \
--restart=always \
--link mysql:mysql \
-v /alidata/dockerdata/mediawiki:/data:rw \
-e MEDIAWIKI_SITE_SERVER=https://wiki.ling2.win \
-e MEDIAWIKI_SITE_NAME=ling \
-e MEDIAWIKI_SITE_LANG=zh-cn \
-e MEDIAWIKI_ADMIN_USER=bo.wang \
-e MEDIAWIKI_ADMIN_PASS=Wb19831010! \
-e MEDIAWIKI_UPDATE=true \
-e MEDIAWIKI_DB_TYPE=mysql \
-e MEDIAWIKI_DB_HOST=mysql \
-e MEDIAWIKI_DB_PORT=3306 \
-e MEDIAWIKI_DB_USER=root \
-e MEDIAWIKI_DB_PASSWORD=Wb19831010! \
-e MEDIAWIKI_DB_NAME=wikipro \
-p 73:80 -d wikimedia/mediawiki
-e MEDIAWIKI_SITE_SERVER= (required; set this to the server host, including the protocol and port if necessary, like http://my-wiki:8080; configures $wgServer)
-e MEDIAWIKI_SITE_NAME= (defaults to MediaWiki; configures $wgSitename)
-e MEDIAWIKI_SITE_LANG= (defaults to en; configures $wgLanguageCode)
-e MEDIAWIKI_ADMIN_USER= (defaults to admin; configures the default administrator username)
-e MEDIAWIKI_ADMIN_PASS= (defaults to rosebud; configures the default administrator password)
-e MEDIAWIKI_UPDATE=true (defaults to false; runs php maintenance/update.php)
-e MEDIAWIKI_SLEEP= (defaults to 0; delays startup of the container, useful when using Docker Compose)
-e MEDIAWIKI_DB_TYPE=... (defaults to mysql, but can also be postgres)
-e MEDIAWIKI_DB_HOST=... (defaults to the address of the linked database container)
-e MEDIAWIKI_DB_PORT=... (defaults to the port of the linked database container, or to the default for the specified db type)
-e MEDIAWIKI_DB_USER=... (defaults to root or postgres based on the db type being mysql or postgres, respectively)
-e MEDIAWIKI_DB_PASSWORD=... (defaults to the password of the linked database container)
-e MEDIAWIKI_DB_NAME=... (defaults to mediawiki)
-e MEDIAWIKI_DB_SCHEMA=... (defaults to mediawiki; applies only when using postgres)
The data-disk mount problem is not solved; this is a temporary workaround
Note: enter the container to fix ownership and permissions
cd /alidata/dockerdata
git clone https://github.com/kalcaddle/KODExplorer.git
chmod -Rf 777 ./KODExplorer/*
chown -R www-data:www-data /var/www/html/images
chown -R www-data:www-data /alidata/dockerdata/mediawiki/images
chown -R www-data:www-data /alidata/dockerdata/KODExplorer
docker run --name mediawiki \
--restart=always \
--link mysql:mysql \
-v /alidata/dockerdata/mediawiki/images:/var/www/html/images \
-v /alidata/dockerdata/mediawiki/LocalSettings.php:/var/www/html/LocalSettings.php \
-v /alidata/dockerdata/wordpress:/var/www/html/wordpress \
-v /alidata/dockerdata/KODExplorer:/var/www/html/explorer \
-p 73:80 -d synctree/mediawiki
docker service create \
--name mediawiki \
--mode=global \
--network ling-network \
-p 73:80 \
-e MEDIAWIKI_DB_HOST=mysql-galera \
-e MEDIAWIKI_DB_USER=root \
-e MEDIAWIKI_DB_PASSWORD=Wb19831010! \
-e MEDIAWIKI_DB_NAME=wikipro synctree/mediawiki:latest
docker-compose.yml
mediawiki:
  image: 'synctree/mediawiki'
  ports:
    - "73:80"
  links:
    - mysql:mysql
  volumes:
    - /alidata/dockerdata/mediawiki/images:/var/www/html/images
    - /alidata/dockerdata/mediawiki/LocalSettings.php:/var/www/html/LocalSettings.php
    - /alidata/dockerdata/wordpress:/var/www/html/wordpress
mysql:
  image: "mysql"
  expose:
    - "3306"
  environment:
    - MYSQL_ROOT_PASSWORD=defaultpass
docker-compose up -d
mediawiki sincere
- Edit php.ini to raise the upload size limits; see [4]
- LocalSettings.php
$wgUploadSizeWarning = 2147483647;
$wgMaxUploadSize = 2147483647;
- PHP.INI
memory_limit = 2048M
post_max_size = 2048M
upload_max_filesize = 2048M
- copy php.ini
cp /alidata/dockerdata/mediawiki/php/php.ini /alidata/dockerdata/mediawiki-sincere/php/php.ini
- Start the containers
docker run --name mysql-sincere --restart=always -v /alidata/dockerdata/mysqldata/mysql-cloud:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=Wb19831010! -d mysql
- Update or download the latest MediaWiki release
cd /alidata/dockerdata/mediawiki-sincere
wget https://releases.wikimedia.org/mediawiki/1.31/mediawiki-1.31.4.tar.gz
tar -xzvf mediawiki-1.31.4.tar.gz
docker run --name mediawiki-sincere -it -p 973:80 \
-v /alidata/dockerdata/mediawiki-sincere/mediawiki-1.31.4:/var/www/html \
-v /alidata/dockerdata/mediawiki-sincere/images:/var/www/html/images \
-v /alidata/dockerdata/mediawiki-sincere/php/php.ini:/usr/local/etc/php/php.ini \
-v /alidata/dockerdata/mediawiki-sincere/LocalSettings.php:/var/www/html/LocalSettings.php \
--link mysql-sincere:mysql -d docker.ling2.cn/phpserver7
docker run --name mediawiki-sincere -it -p 973:80 \
-v /alidata/dockerdata/mediawiki-sincere/images:/var/www/html/images \
-v /alidata/dockerdata/mediawiki-sincere/php/php.ini:/usr/local/etc/php/php.ini \
-v /alidata/dockerdata/mediawiki-sincere/LocalSettings.php:/var/www/html/LocalSettings.php \
--link mysql-sincere:mysql -d mediawiki
kityminder
git clone https://github.com/fex-team/kityminder.git
git submodule init
git submodule update
cnpm -g install grunt
npm install
bower install
Once the dependencies are installed, build with grunt:
grunt
Copy the contents of the dist directory to wherever you need them
docker run --name kityminder -it -p 57:80 -v /alidata/dockerdata/kityminder:/var/www/html -d docker.ling2.cn/phpserver
mantis
docker run --name mantis -it -p 69:80 -v /alidata/dockerdata/mantis:/var/www/html --link mysql:mysql -d docker.ling2.cn/phpserver
docker run --name mantisbt -it -p 68:80 --link mysql:mysql -d vimagick/mantisbt
username: administrator
password: root
bo.wang/111111
docker cp mantisbt:/var/www/html /alidata/dockerdata/mantis
docker run --name mantisbt -it -p 68:80 -v /alidata/dockerdata/mantis:/var/www/html --link mysql:mysql -d xlrl/mantisbt
ZenTao
https://hub.docker.com/r/idoop/zentao
mkdir -p /alidata/dockerdata/zbox && \
docker run -d -p 1080:80 -p 3307:3306 \
-e ADMINER_USER="root" -e ADMINER_PASSWD="Philips@1234" \
-e BIND_ADDRESS="false" \
-v /alidata/dockerdata/zbox/:/opt/zbox/ \
--add-host smtp.exmail.qq.com:163.177.90.125 \
--name zentao \
idoop/zentao:latest
rm -rf /alidata/dockerdata/zbox/app/zentao/tmp/backup/*
echo > /alidata/dockerdata/zbox/logs/apache_access.log
eclipse che
Not sure why, but --name che cannot be used, and running eclipse/che directly does not work either; eclipse/che-cl is required. However, if you do not want your /instance and /backup folders to be children, you can set them individually with separate overrides.
docker run -it --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock \
-v /alidata/dockerdata/eclipseche/data:/data \
-v /alidata/dockerdata/eclipseche/instance:/data/instance \
-v /alidata/dockerdata/eclipseche/backup:/data/backup \
-v /alidata/server/maven_repository:/home/user/.m2 \
-v /alidata/server/maven_repository/settings_che.xml:/home/user/.m2/settings.xml \
-e CHE_HOST=192.169.1.11 \
eclipse/che start
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock \
-v /alidata/dockerdata/eclipseche/data:/data \
-v /alidata/server/maven_repository:/home/user/.m2 \
-v /alidata/server/maven_repository/settings_che.xml:/home/user/.m2/settings.xml \
-e CHE_HOST=192.168.1.3 eclipse/che start
docker cp /alidata/server/maven_repository che:/home/user/.m2
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v /alidata/dockerdata/eclipseche/data:/data eclipse/che:nightly start
Use a temporary container for installation: add --rm, and do not use che as the container name
docker run -it --rm -v /alidata/dockerdata/eclipseche/data:/data -v /var/run/docker.sock:/var/run/docker.sock \
-e CHE_HOST=che.ling2.win \
eclipse/che:latest start --fast
Pass -e CHE_HOST=<your-ip-or-host> and -e CHE_PORT=<port> as needed
On Synology use
-e CHE_USER=admin:admin
or
chmod -R 777 /alidata/dockerdata/eclipseche
Port | Service | Notes |
---|---|---|
8080 | Tomcat Port | |
8000 | Server Debug Port | Users developing Che extensions and custom assemblies would use this debug port to connect a remote debugger to che server. |
32768-65535 | Docker and Che Agents | Users who launch servers in their workspace bind to ephemeral ports in this range. This range can be limited. |
- Stop Che
docker run <DOCKER_OPTIONS> eclipse/che stop
- Restart Che
docker run <DOCKER_OPTIONS> eclipse/che restart
- Run a specific version of Che
docker run <DOCKER_OPTIONS> eclipse/che:<version> start
- Get help
docker run eclipse/che
- If boot2docker on Windows, mount a subdir of `%userprofile%` to `:/data`. For example:
docker run <DOCKER_OPTIONS> -v /c/Users/tyler/che:/data eclipse/che start
- If Che will be accessed from other machines add your server's external IP
docker run <DOCKER_OPTIONS> -e CHE_HOST=<your-ip> eclipse/che start
maven
References: 试用 Nexus OSS 3.0 的docker仓库1, 试用 Nexus OSS 3.0 的docker仓库2, 使用 Nexus 搭建 Docker 仓库
- Create a data volume to persist the repository data, so it survives container removal and re-creation
docker volume create --name nexus-data
- Start Nexus
docker run -d -p 8081:8081 --name nexus -v nexus-data:/nexus-data sonatype/nexus3
Installation
mkdir -p /alidata/dockerdata/nexus/nexus-data && chown -R 200 /alidata/dockerdata/nexus/nexus-data
docker run -d -p 74:8081 -p 79:79 -p 8889:8889 --name nexus -v /alidata/dockerdata/nexus/nexus-data:/nexus-data sonatype/nexus3
- The default Nexus account is admin/admin123. Change the password after the first start, and create a dedicated developer account with read and browse permissions on all repositories
docker cp nexus:/opt/sonatype/nexus/lib/support/nexus-orient-console.jar /alidata/dockerdata/nexus
docker start nexus
docker exec -it nexus bash
java -jar /opt/sonatype/nexus/lib/support/nexus-orient-console.jar
CONNECT plocal:/nexus-data/db/component admin admin
: Cannot open local storage '/nexus-data/db/config' with mode=rw
https://www.cnblogs.com/rongfengliang/archive/2004/01/13/10728632.html
The database write-ahead logs are corrupt.
Make a backup of $install-dir/sonatype-work/nexus3/db/component, then remove the files with extension .wal from that directory and try starting up again.
If the issue still persists, clearing all .wal files under the db directory resolves it; this method worked like a charm.
cd /alidata/dockerdata/nexus/nexus-data/db
find *.wal
rm -rf *.wal
Create xxxx-releases and xxxxx-snapshots
- Configure Maven. Here xxxx-releases and xxxxx-snapshots are the newly created hosted Maven repositories, configured to allow deployment
Repositories come in three types, with the following meanings:
Type | Description |
---|---|
hosted | Local storage; serves as a private repository, like the official one |
proxy | Proxies another repository |
group | Combines multiple repositories behind a single URL |
<server>
  <id>ling-releases</id>
  <username>admin</username>
  <password>admin123</password>
</server>
<server>
  <id>ling-snapshots</id>
  <username>admin</username>
  <password>admin123</password>
</server>
<distributionManagement>
  <repository>
    <id>ling-releases</id>
    <name>ling-releases</name>
    <url>http://nexus.ling2.cn/repository/ling-releases/</url>
  </repository>
  <snapshotRepository>
    <id>ling-snapshots</id>
    <name>ling-snapshots</name>
    <url>http://nexus.ling2.cn/repository/ling-snapshots/</url>
  </snapshotRepository>
</distributionManagement>
The id configured here must match the server id in settings.xml. After that, run mvn deploy in the project directory to publish the jar to the private repository: versions ending in -SNAPSHOT are published to xxxxx-snapshots automatically, and the rest go to xxxxx-releases
- Upload a third-party jar from the command line
mvn -X deploy:deploy-file -DgroupId=com.chengf -DartifactId=test -Dversion=1.0.0 -Dpackaging=jar -Dfile=test.jar -DrepositoryId=ling-releases -Durl=http://surface.ling2.cn:74/repository/ling-releases/
The repositoryId here must match the server id in settings.xml
Create a Docker registry
For the Docker registry, see https://blog.csdn.net/liumiaocn/article/details/62891201
http://nexus.ling2.cn/repository/docker/
If you get "server gave HTTP response to HTTPS client":
vi /etc/docker/daemon.json
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://3ue1wki2.mirror.aliyuncs.com"],
"graph":"/data/tools/docker",
"insecure-registries": [
"127.0.0.1:8889","192.168.74.129:8889","192.168.121.226"
],
"disable-legacy-registry": true
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{"insecure-registries": ["192.168.121.226","127.0.0.1:8889","192.168.74.129:8889"],"graph":"/data/tools/docker","registry-mirrors": ["https://3ue1wki2.mirror.aliyuncs.com"]}
EOF
Synology: ps -ef|grep docker
/var/packages/Docker/etc/dockerd.json
sudo tee /var/packages/Docker/etc/dockerd.json <<-'EOF'
{
"data-root": "/var/packages/Docker/target/docker",
"log-driver": "db",
"registry-mirrors": ["https://3ue1wki2.mirror.aliyuncs.com"],
"insecure-registries": [
"127.0.0.1:8889"
],
"disable-legacy-registry": true,
"storage-driver": "btrfs"
}
EOF
docker-mailserver
mkdir /alidata/dockerdata/mail/docker-mailserver
touch docker-compose.yml
vi docker-compose.yml
version: '2'
services:
mail:
image: tvial/docker-mailserver
hostname: mailserver
domainname: ling2.com
container_name: mailserver
ports:
- "25:25"
- "143:143"
- "587:587"
- "993:993"
volumes:
- maildata:/var/mail
- mailstate:/var/mail-state
- ./config/:/tmp/docker-mailserver/
environment:
- ENABLE_SPAMASSASSIN=1
- ENABLE_CLAMAV=1
- ENABLE_FAIL2BAN=1
- ENABLE_POSTGREY=1
- ONE_DIR=1
- DMS_DEBUG=0
cap_add:
- NET_ADMIN
volumes:
maildata:
driver: local
mailstate:
driver: local
mkdir -p config
touch config/postfix-accounts.cf
Generate the username and password:
docker run --rm \
-e MAIL_USER=admin \
-e MAIL_PASS=admin123 \
-ti tvial/docker-mailserver:latest \
/bin/sh -c 'echo "$MAIL_USER|$(doveadm pw -s SHA512-CRYPT -u $MAIL_USER -p $MAIL_PASS)"' >> config/postfix-accounts.cf
docker-compose up -d mail
For installation, see docker-compose and Docker Compose
RabbitMQ
https://hub.docker.com/_/rabbitmq
References: Rabbitmq集群高可用测试; docker环境下的RabbitMQ部署,Spring AMQP使用; Spring Boot中使用RabbitMQ
vi /etc/hosts
127.0.0.1 cloud
install
docker run -d --hostname www.ling2.win --name rabbitmq --restart=always \
-p 78:15672 -p 5672:5672 -p 4369:4369 -p 25672:25672 -p 15671:15671 -p 15672:15672 \
-e RABBITMQ_ERLANG_COOKIE='ling2' -e RABBITMQ_DEFAULT_USER=root -e RABBITMQ_DEFAULT_PASS=Wb19831010! \
rabbitmq:3-management
link
docker run --name some-app --link some-rabbit:rabbit -d application-that-uses-rabbitmq
bash
docker run -it --rm --link some-rabbit:my-rabbit -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3 bash
docker run -it --rm --link some-rabbit:my-rabbit -e RABBITMQ_ERLANG_COOKIE='secret cookie here' -e RABBITMQ_NODENAME=rabbit@my-rabbit rabbitmq:3 bash
-e RABBITMQ_DEFAULT_VHOST=my_vhost
-e RABBITMQ_ERLANG_COOKIE: especially useful for clustering, but also for remote/cross-container administration via rabbitmqctl
hostname
One of the important things to note about RabbitMQ is that it stores data based on what it calls the "Node Name", which defaults to the hostname. What this means for usage in Docker is that we should specify -h/--hostname explicitly for each daemon so that we don't get a random hostname and can keep track of our data:
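A sketch of pinning the node name via the hostname, per the note above (the names are placeholders):

```shell
# Without -h, a random container hostname becomes the RabbitMQ node name,
# so the on-disk data no longer matches after the container is re-created.
RABBIT_CMD="docker run -d -h my-rabbit --name some-rabbit rabbitmq:3"
echo "$RABBIT_CMD"
```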
rocketmq
https://hub.docker.com/r/foxiswho/rocketmq
https://github.com/foxiswho/docker-rocketmq/blob/master/rmq/docker-compose.yml
git clone https://github.com/foxiswho/docker-rocketmq.git
cd docker-rocketmq
cd rmq
chmod +x start.sh
./start.sh
docker-compose up -d
docker-compose down
mkdir -p /alidata/dockerdata/rocketmq/rocketmqdata/logs
mkdir -p /alidata/dockerdata/rocketmq/rocketmqdata/store
mkdir -p /alidata/dockerdata/rocketmq/rocketmqdata/conf
chmod -R 777 /alidata/dockerdata/rocketmq/rocketmqdata/logs
chmod -R 777 /alidata/dockerdata/rocketmq/rocketmqdata/store
chmod -R 777 /alidata/dockerdata/rocketmq/rocketmqdata/conf
docker run -d \
--name rmqnamesrv \
-e "JAVA_OPT_EXT=-Xms512M -Xmx512M -Xmn128m" \
-p 9876:9876 \
foxiswho/rocketmq:4.8.0 \
sh mqnamesrv
docker run -d -v /alidata/dockerdata/rocketmq/rocketmqdata/logs:/home/rocketmq/logs -v /alidata/dockerdata/rocketmq/rocketmqdata/store:/home/rocketmq/store \
-v /alidata/dockerdata/rocketmq/rocketmqdata/conf:/home/rocketmq/conf \
--name rmqbroker \
--link rmqnamesrv:rmqnamesrv \
-e "NAMESRV_ADDR=rmqnamesrv:9876" \
-e "JAVA_OPT_EXT=-Xms512M -Xmx512M -Xmn128m" \
-p 10911:10911 -p 10912:10912 -p 10909:10909 \
foxiswho/rocketmq:4.8.0 \
sh mqbroker -c /home/rocketmq/conf/broker.conf
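The broker command above reads /home/rocketmq/conf/broker.conf from the mounted conf directory; a minimal sketch of that file (all values, especially brokerIP1, are assumptions — written to /tmp here for illustration):

```shell
tee /tmp/broker.conf <<-'EOF'
brokerClusterName = DefaultCluster
brokerName = broker-a
brokerId = 0
# brokerIP1 must be an address that producers/consumers can reach (assumed)
brokerIP1 = 192.168.1.3
brokerRole = ASYNC_MASTER
flushDiskType = ASYNC_FLUSH
EOF
```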
docker run --name rmqconsole --link rmqnamesrv:namesrv \
-e "JAVA_OPTS=-Drocketmq.namesrv.addr=rmqserver:9876 -Dcom.rocketmq.sendMessageWithVIPChannel=false" \
-p 8180:8080 -t styletang/rocketmq-console-ng
activemq
docker run -d --name activemq1 -p 61616:61616 -p 8161:8161 webcenter/activemq
docker run -d --name activemq2 -p 61617:61616 -p 8162:8161 webcenter/activemq
postgresql
docker run --name=postgresql -d -p 5432:5432 --restart=always \
-v /alidata/dockerdata/postgresql/data:/var/lib/postgresql \
--env 'DB_EXTENSION=pg_trgm' \
--env 'PG_PASSWORD=Wb19831010!' \
sameersbn/postgresql
docker run --name=pgadmin4 -d -p 5433:80 --restart=always \
-e 'PGADMIN_DEFAULT_EMAIL=admin@daixiaole.net' \
-e 'PGADMIN_DEFAULT_PASSWORD=Wb19831010!' \
dpage/pgadmin4
docker cp pgadmin4:/var/lib/pgadmin/storage/admin_daixiaole.net/ling-cloud-admin_2021_11_13 /alidata/dockerdata/postgresql/barkup/ling-cloud-admin_2021_11_13
docker exec -it postgresql bash
psql -U postgres
alter user postgres with password 'Wb19831010!';
(credentials: postgres/Wb19831010!)
https://hub.docker.com/_/postgres
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
If the DB_USER and DB_PASS variables are specified along with the DB_NAME variable, then the user specified in DB_USER will be granted access to all the databases listed in DB_NAME. Note that if the user and/or databases do not exist, they will be created.
docker run --name postgresql -itd --restart always \
--env 'DB_USER=dbuser' --env 'DB_PASS=dbuserpass' \
--env 'DB_NAME=dbname1,dbname2' \
sameersbn/postgresql:10-1
Synology postgresql
https://www.iots.vip/post/navicat-connect-synology-postgresql.html
docker run --name=postgresql -d -p 5433:5432 --restart=always \
  -v /alidata/dockerdata/postgresql/data:/var/lib/postgresql \
  --env 'DB_EXTENSION=pg_trgm' \
  --env 'PG_PASSWORD=Wb19831010!' \
  sameersbn/postgresql
docker run --name=pgadmin4 -d -p 5434:80 --restart=always \
  -e 'PGADMIN_DEFAULT_EMAIL=admin@daixiaole.net' \
  -e 'PGADMIN_DEFAULT_PASSWORD=Wb19831010!' \
  dpage/pgadmin4
Migrating from ECS to RDS
ECS PostgreSQL --> dump --> local PostgreSQL --> local MySQL --> dump --> RDS
Boot the app locally to create the tables --> migrate the small tables with a tool --> export from ECS PostgreSQL to CSV (with header; strip the dates) --> download --> upload at https://dms.aliyun.com/new
Boot the app locally to create the tables --> migrate the small tables with a tool --> export from ECS PostgreSQL to CSV (with header; strip the dates) --> download --> import the file into RDS with IntelliJ (check the column mapping)
--> import the data into local PostgreSQL --> copy the tables to RDS with IntelliJ
docker cp pgadmin4:/var/lib/pgadmin/storage/admin_daixiaole.net/book_20211214.csv /alidata/dockerdata/postgresql/barkup/2021_11_14/book_20211214.csv
docker cp pgadmin4:/var/lib/pgadmin/storage/admin_daixiaole.net/book_chapter_20211214.csv /alidata/dockerdata/postgresql/barkup/2021_11_14/book_chapter_20211214.csv
tar -zcvf /alidata/dockerdata/postgresql/barkup/2021_11_14.tar.gz /alidata/dockerdata/postgresql/barkup/2021_11_14
tar -zxvf /alidata/dockerdata/postgresql/barkup/2021_11_14.tar.gz -C /
redis
[7] Configuring Redis data persistence --- APPEND ONLY MODE
docker run --name redis --restart=always -v /alidata/dockerdata/redis/data:/data -p 6379:6379 -d redis redis-server --appendonly yes
docker run --name redis --restart=always -v /alidata/dockerdata/redis/data:/data -p 6379:6379 -d redis redis-server --appendonly yes --requirepass Wb19831010!
docker run --name redis --restart=always -p 6379:6379 -d redis redis-server --appendonly yes
docker run --name redis --restart=always -p 6379:6379 -d redis redis-server --appendonly yes --requirepass Wb19831010!
docker run --name some-app --link some-redis:redis -d application-that-uses-redis
docker run -it --link some-redis:redis --rm redis redis-cli -h redis -p 6379
docker run --name ling-cloud-redis --restart=always -v /alidata/dockerdata/redis/data:/data -p 6379:6379 -d redis redis-server --appendonly yes
docker cp ling-cloud-redis:/usr/local/etc/redis/redis.conf /alidata/dockerdata/redis/data
docker run --name ling-cloud-redis --restart=always -v /alidata/dockerdata/redis/data:/data -v /alidata/dockerdata/redis/data/redis.conf:/usr/local/etc/redis/redis.conf -p 6379:6379 -d redis redis-server --appendonly yes
docker run --name ling-cloud-redis --restart=always --network ling-cloud --network-alias ling-cloud-redis -v /alidata/dockerdata/redis/data:/data -p 6379:6379 -d redis redis-server /data/redis.conf
docker run --name ling-cloud-redis --restart=always --network ling-cloud --network-alias ling-cloud-redis -v /alidata/dockerdata/redis/data:/data -p 6379:6379 -d redis redis-server --appendonly yes
docker run --name ling-cloud-redis --restart=always --network ling-cloud --network-alias ling-cloud-redis -v /alidata/dockerdata/redis/data:/data -p 6379:6379 -d redis redis-server --appendonly yes --requirepass Wb19831010!
docker exec -it redis bash
redis-cli -h 127.0.0.1 -p 6379 -a Kpmg@1234!
info clients
config set timeout 600
config set maxclients 100000
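The `config set` tweaks above are runtime-only and the two run variants above differ only in `--requirepass`. A hedged sketch (the `redis_cli_args` helper is ours, not part of redis) that builds the right redis-cli arguments for either variant:

```shell
# Build redis-cli connection arguments, adding -a only when a password
# was set via --requirepass, so the same script works with both variants.
redis_cli_args() {
  # $1 = host, $2 = port, $3 = password (may be empty)
  if [ -n "$3" ]; then
    printf '%s' "-h $1 -p $2 -a $3"
  else
    printf '%s' "-h $1 -p $2"
  fi
}
# e.g.: docker exec redis redis-cli $(redis_cli_args 127.0.0.1 6379 'Wb19831010!') CONFIG GET appendonly
```

To persist the timeout/maxclients settings across restarts, put them in a mounted redis.conf instead, as in the `redis-server /data/redis.conf` variant above.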
gitlab
docker-gitlab [8] Basic GitLab maintenance and usage
docker run --name='gitlab-ce' -d \
-p 1022:22 -p 70:80 \
-v /alidata/dockerdata/gitlab/config:/etc/gitlab \
-v /alidata/dockerdata/gitlab/logs:/var/log/gitlab \
-v /alidata/dockerdata/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce
docker run --name='gitlab' -d \
--link redis:redisio \
--link postgresql:postgresql \
-v /alidata/dockerdata/gitlab/data:/home/git/data \
-p 10022:22 -p 70:80 \
-e 'DB_ADAPTER=postgresql' -e 'DB_HOST=postgresql' \
-e 'DB_NAME=gitlab' \
-e 'DB_USER=postgres' -e 'DB_PASS=Wb19831010!' \
-e 'GITLAB_PORT=80' \
-e 'GITLAB_SSH_PORT=10022' \
-e 'GITLAB_HOST=ling.ling2.cn' \
-e 'GITLAB_SIGNUP=true' \
-e 'GITLAB_SECRETS_DB_KEY_BASE=long-and-random-alpha-numeric-string' \
-e 'GITLAB_SECRETS_SECRET_KEY_BASE=long-and-random-alpha-numeric-string' \
-e 'GITLAB_SECRETS_OTP_KEY_BASE=long-and-random-alpha-numeric-string' \
sameersbn/gitlab
docker run --name='gitlab' -d \
--link redis:redisio \
--link mysql:mysql \
-v /alidata/dockerdata/gitlab/data:/home/git/data \
-p 10022:22 -p 70:80 \
-e 'DB_ADAPTER=mysql2' -e 'DB_HOST=mysql' \
-e 'DB_NAME=gitlab' \
-e 'DB_USER=root' -e 'DB_PASS=Wb19831010!' \
-e 'GITLAB_PORT=80' \
-e 'GITLAB_SSH_PORT=10022' \
-e 'GITLAB_HOST=ling.ling2.cn' \
-e 'GITLAB_SIGNUP=true' \
-e 'GITLAB_SECRETS_DB_KEY_BASE=long-and-random-alpha-numeric-string' \
-e 'GITLAB_SECRETS_SECRET_KEY_BASE=long-and-random-alpha-numeric-string' \
-e 'GITLAB_SECRETS_OTP_KEY_BASE=long-and-random-alpha-numeric-string' \
sameersbn/gitlab
-e 'GITLAB_EMAIL=102010cncger@sina.com' \
-e 'GITLAB_BACKUPS=daily' \
-e 'GITLAB_HOST=surface.ling2.cn:10080' \
-e 'GITLAB_SIGNUP=true' \
-e 'GITLAB_GRAVATAR_ENABLED=false' \
Manual backup
docker stop gitlab
docker rm gitlab
docker run --name='gitlab' -it --rm \
  <your settings> \
sameersbn/gitlab:7.11.2 app:rake gitlab:backup:create
This creates a backup under /home/username/opt/gitlab/data/backups. Check the owner of the files with ls -la /home/username/opt/gitlab/data/backups; you will need that when migrating. For a migration, this backup file is the only thing you need to keep.
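A hedged sketch for locating the backups on the host: the `backup_dir` helper is ours, and GITLAB_DATA should match whichever data volume your run command mounts (this page uses both /alidata/dockerdata/gitlab/data and /home/username/opt/gitlab/data).

```shell
# Backups land under <data volume>/backups on the host.
GITLAB_DATA=/alidata/dockerdata/gitlab/data
backup_dir() { printf '%s/backups' "$1"; }
if [ -d "$(backup_dir "$GITLAB_DATA")" ]; then
  ls -la "$(backup_dir "$GITLAB_DATA")"   # note the file owner; it matters when migrating
fi
```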
Restore a backup
Make sure your GitLab version matches the version the backup was taken from.
docker run --name='gitlab' -it --rm \
  <your settings> \
sameersbn/gitlab:7.11.2 app:rake gitlab:backup:restore
This ensures the version after the migration matches the one before.
Migration
On the target machine, finish the preparation work and start GitLab once. Then put the previously backed-up file under /home/username/opt/gitlab/data/backups, make sure its ownership and permissions are correct, and restore the backup.
Upgrade
Back up first in case the upgrade goes wrong; then docker pull the new version and run it again.
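The upgrade steps can be sketched as below; the tag value is an example, substitute the version you are upgrading to, and the block is guarded so it does nothing when docker or the container is absent.

```shell
# Hedged upgrade sketch for sameersbn/gitlab: back up first (see the
# backup commands above), then recreate the container from the new image.
NEW_TAG=7.11.2
if command -v docker >/dev/null 2>&1 && docker ps -a --format '{{.Names}}' | grep -qx gitlab; then
  docker pull sameersbn/gitlab:"$NEW_TAG"
  docker stop gitlab && docker rm gitlab
  # now repeat the original `docker run --name='gitlab' ...` with the new tag
fi
```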
102010cncger@sina.com/Wb191010610109
zookeeper
Microservices on Docker (5): deploying zookeeper and kafka under Docker; docker-zookeeper; zkui
docker run -d --name zookeeper --restart=always --publish 2181:2181 \
  -v /etc/localtime:/etc/localtime \
  -v /alidata/dockerdata/zookeeper/zoo.cfg:/conf/zoo.cfg \
  zookeeper
Using --link
docker run --name some-app --link some-zookeeper:zookeeper -d application-that-uses-zookeeper
Client connection
docker run -it --rm --link some-zookeeper:zookeeper zookeeper zkCli.sh -server zookeeper
docker run -d --name zkui -p 64:9090 -e ZKUI_ZK_SERVER=www.ling2.cn:2181 qnib/zkui
docker run -d --name zkui --restart=always -p 64:9090 --link zookeeper:zookeeper -e ZKUI_ZK_SERVER=zookeeper:2181 qnib/zkui
admin/admin for read/write access; to change the password provide ZKUI_ADMIN_PW=pass
user/user for read-only access; to change the password provide ZKUI_USER_PW=pass
docker-compose.yml
version: '2'
services:
zoo1:
image: zookeeper
restart: always
ports:
- 2181:2181
environment:
ZOO_MY_ID: 1
ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
zoo2:
image: zookeeper
restart: always
ports:
- 2182:2181
environment:
ZOO_MY_ID: 2
ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
zoo3:
image: zookeeper
restart: always
ports:
- 2183:2181
environment:
ZOO_MY_ID: 3
ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
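After `docker-compose up -d`, each member of the ensemble above answers ZooKeeper's four-letter-word commands on its mapped client port; `srvr` reports the node's mode, so one port should answer `leader` and the other two `follower`. A hedged sketch (`zk_parse_mode` is our helper; `nc` and the port list are assumptions matching the compose file):

```shell
# Extract the "Mode:" line from a srvr response.
zk_parse_mode() { awk '/^Mode:/{print $2}'; }
for port in 2181 2182 2183; do
  if command -v nc >/dev/null 2>&1; then
    printf 'port %s: %s\n' "$port" "$(echo srvr | nc -w 2 localhost "$port" | zk_parse_mode)"
  fi
done
```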
kafka
Microservices on Docker (5): deploying zookeeper and kafka under Docker; Microservices on Docker (6): unified log output to the kafka middleware; Building microservices with Spring Cloud (7): message bus (continued: Kafka)
https://hub.docker.com/r/wurstmeister/kafka/
https://github.com/wurstmeister/kafka-docker
https://hub.docker.com/r/landoop/kafka-connect-ui
https://blog.csdn.net/weixin_38251332/article/details/105638535
docker kafka docker-compose.yml
kafka-web-console
kafka-manager
See Kafka for more.
ecs
docker run -d --name kafka -p 9092:9092 -p 9094:9094 --link zookeeper \
  -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=INSIDE://:9092,OUTSIDE://ling.ling2.cn:9094 \
  -e KAFKA_LISTENERS=INSIDE://:9092,OUTSIDE://:9094 \
  -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT \
  -e KAFKA_INTER_BROKER_LISTENER_NAME=INSIDE \
  -t wurstmeister/kafka
For hosts with less than 1 GB of memory, also add -v /alidata/dockerdata/kafka/bin/kafka-server-start.sh:/opt/kafka/bin/kafka-server-start.sh \ to mount the patched start script below (it lowers the default heap to -Xmx256M -Xms128M):
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
if [ $# -lt 1 ];
then
echo "USAGE: $0 [-daemon] server.properties [--override property=value]*"
exit 1
fi
base_dir=$(dirname $0)
if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"
fi
EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}
COMMAND=$1
case $COMMAND in
-daemon)
EXTRA_ARGS="-daemon "$EXTRA_ARGS
shift
;;
*)
;;
esac
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"
surface
docker run -d --name kafka --restart=always --publish 9092:9092 \
  --link zookeeper \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_ADVERTISED_HOST_NAME=192.168.1.11 \
  -e KAFKA_ADVERTISED_PORT=9092 \
  wurstmeister/kafka
docker run -d --name kafka --restart=always --publish 9092:9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=106.14.12.142:2181 \
  -e KAFKA_ADVERTISED_HOST_NAME=192.168.1.3 \
  -e KAFKA_ADVERTISED_PORT=9092 \
  wurstmeister/kafka
docker run --rm -it -p 8000:8000 -e "CONNECT_URL=http://192.168.74.129:8083" landoop/kafka-connect-ui
docker cp kafka:/opt/kafka/config /alidata/dockerdata/kafka
docker cp kafka:/opt/kafka/bin /alidata/dockerdata/kafka
docker exec -it kafka /bin/bash
cd /opt/kafka/bin
docker run -it --name kafkamanager --restart=always --link zookeeper -p 72:9000 -e ZK_HOSTS="zookeeper:2181" -e APPLICATION_SECRET=111111 sheepkiller/kafka-manager
docker run -it --name kafkamanager --restart=always -p 72:9000 -e ZK_HOSTS="106.14.12.142:2181" -e APPLICATION_SECRET=111111 sheepkiller/kafka-manager
Create a Topic named "test" with one partition and one replica. Once created, you can list the current Topics with kafka-topics.sh --list --zookeeper zookeeper:2181.
kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic test
List the current topics.
kafka-topics.sh --list --zookeeper zookeeper:2181
kafka-console-producer starts Kafka's command-line message producer. Once it is running, every line typed into the console is sent as a single message. Type a few lines; since there is no consumer yet, the messages stay in the "test" Topic until a consumer reads them.
kafka-console-producer.sh --broker-list localhost:9092 --topic test
Type a few test messages.
kafka-console-consumer starts Kafka's command-line message consumer. Right after startup it prints the messages sent earlier from the producer. Reopen the producer, send more messages, and watch them appear on the consumer side to see Kafka's basic message flow.
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
This receives the messages sent by the producer.
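The produce/consume round trip above can be scripted as one smoke test run inside the broker container. A hedged sketch: the container name "kafka" and the zookeeper address match the run commands on this page, and the block is guarded so it is a no-op when docker or the container is absent.

```shell
# Create a topic, push one message, read it back.
TOPIC=smoke-test
if command -v docker >/dev/null 2>&1 && docker ps --format '{{.Names}}' | grep -qx kafka; then
  docker exec kafka kafka-topics.sh --create --zookeeper zookeeper:2181 \
    --replication-factor 1 --partitions 1 --topic "$TOPIC"
  echo hello | docker exec -i kafka kafka-console-producer.sh \
    --broker-list localhost:9092 --topic "$TOPIC"
  # --max-messages 1 makes the consumer exit after reading the message back
  docker exec kafka kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic "$TOPIC" --from-beginning --max-messages 1
fi
```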
efak
https://hub.docker.com/r/ydockerp/efak
docker run -it --name efak -p 8049:8048 -v /alidata/dockerdata/kafka-eagle/system-config.properties:/opt/kafka-eagle/conf/system-config.properties ydockerp/efak
######################################
# multi zookeeper & kafka cluster list
# Settings prefixed with 'kafka.eagle.' will be deprecated, use 'efak.' instead
######################################
efak.zk.cluster.alias=cluster1
cluster1.zk.list=192.168.31.2:2181
######################################
# zookeeper enable acl
######################################
cluster1.zk.acl.enable=false
cluster1.zk.acl.schema=digest
#cluster1.zk.acl.username=test
#cluster1.zk.acl.password=test123
######################################
# broker size online list
######################################
cluster1.efak.broker.size=20
######################################
# zk client thread limit
######################################
kafka.zk.limit.size=32
######################################
# EFAK webui port
######################################
efak.webui.port=8048
######################################
# kafka jmx acl and ssl authenticate
######################################
cluster1.efak.jmx.acl=false
cluster1.efak.jmx.user=keadmin
cluster1.efak.jmx.password=keadmin123
cluster1.efak.jmx.ssl=false
cluster1.efak.jmx.truststore.location=/data/ssl/certificates/kafka.truststore
cluster1.efak.jmx.truststore.password=ke123456
######################################
# kafka offset storage
######################################
cluster1.efak.offset.storage=kafka
######################################
# kafka jmx uri
######################################
cluster1.efak.jmx.uri=service:jmx:rmi:///jndi/rmi://%s/jmxrmi
######################################
# kafka metrics, 15 days by default
######################################
efak.metrics.charts=true
efak.metrics.retain=15
######################################
# kafka sql topic records max
######################################
efak.sql.topic.records.max=5000
efak.sql.topic.preview.records.max=10
######################################
# delete kafka topic token
######################################
efak.topic.token=keadmin
######################################
# kafka sasl authenticate
######################################
cluster1.efak.sasl.enable=false
cluster1.efak.sasl.protocol=SASL_PLAINTEXT
cluster1.efak.sasl.mechanism=SCRAM-SHA-256
cluster1.efak.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="kafka" password="kafka-eagle";
cluster1.efak.sasl.client.id=
cluster1.efak.blacklist.topics=
cluster1.efak.sasl.cgroup.enable=false
cluster1.efak.sasl.cgroup.topics=
######################################
# kafka sqlite jdbc driver address
######################################
efak.driver=org.sqlite.JDBC
efak.url=jdbc:sqlite:/hadoop/kafka-eagle/db/ke.db
efak.username=root
efak.password=www.kafka-eagle.org
######################################
# kafka mysql jdbc driver address
######################################
#efak.driver=com.mysql.cj.jdbc.Driver
#efak.url=jdbc:mysql://192.168.56.10:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
#efak.username=root
#efak.password=root
Jenkins
Jenkins official site; docker-jenkins; On Docker volume permission management (1); fixing the Jenkins "reverse proxy setup is broken" warning
https://mirrors.aliyun.com/jenkins/updates/2.32/update-center.json
See Wrong volume permissions, Jenkins installation, and the automated deployment script reference.
mkdir -p /alidata/dockerdata/jenkins/data
mkdir -p /alidata/server/maven_repository
chown -R 1000 /alidata/dockerdata/jenkins/data
chown -R 1000 /alidata/server/maven_repository
docker run --name jenkins --restart=always -p 71:8080 -p 50000:50000 -v /alidata/dockerdata/jenkins/data:/var/jenkins_home \
  -v /alidata/server/maven_repository:/var/jenkins_home/.m2 \
  -v /alidata/server/maven3.5.0:/alidata/server/maven3.5.0 \
  -v /alidata/server/java:/alidata/server/java \
  -d jenkins
https://github.com/jenkinsci/docker/blob/master/README.md
docker run --name jenkins --restart=always -p 71:8080 -p 50000:50000 -v /alidata/dockerdata/jenkins/data:/var/jenkins_home \
  -v /alidata/server/maven_repository:/alidata/server/maven_repository \
  -v /alidata/server/maven3.5.0:/alidata/server/maven3.5.0 \
  -v /alidata/server/java:/alidata/server/java \
  -d jenkins/jenkins:lts
docker run --name jenkins --restart=always --privileged -p 71:8080 -p 50000:50000 -v /alidata/dockerdata/jenkins/data:/var/jenkins_home -d jenkins:2.32.3
cat /var/jenkins_home/secrets/initialAdminPassword
FROM jenkins
COPY plugins.txt /usr/share/jenkins/ref/
docker cp jenkins:/var/jenkins_home/secrets/initialAdminPassword /alidata/dockerdata/jenkins
admin/Wb191010610109
bo.wang/Wb191010610109
chmod -R 755 /alidata/dockerdata/jenkins/data
ELK
ELK is short for elasticsearch, logstash, and kibana; the three tools each play a role and together provide log aggregation and analysis.
- logstash: the log collection pipeline that feeds logs into elasticsearch
- elasticsearch: stores the logs and provides Lucene-based full-text search
- kibana: the UI for browsing log data and building statistics and charts
The logs are already written to kafka topics, so all that remains is to use logstash to import the data from kafka into elasticsearch.
Migration: https://www.cnblogs.com/rianley/p/14355721.html
elasticsearch
https://blog.csdn.net/weixin_42697074/article/details/103815945
https://hub.docker.com/_/elasticsearch
Start a minimal instance first
sudo docker run -id \
--restart=always \
--name=elasticsearch \
-p 9290:9200 \
-p 9390:9300 \
-v /etc/localtime:/etc/localtime \
-e ES_JAVA_OPTS="-Xms300m -Xmx500m" \
-e "discovery.type=single-node" \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
elasticsearch:7.17.0
docker run -id --name=cerebro -e CEREBRO_PORT=1360 -p 1360:1360 lmenezes/cerebro
https://github.com/appbaseio/dejavu#docker-installation
docker run -p 1358:1358 --name=dejavu -d appbaseio/dejavu
http.port: 9290
http.cors.allow-origin: http://localhost:1358,http://127.0.0.1:1358,http://qunhui.ling2.cn:1358
http.cors.enabled: true
http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
http.cors.allow-credentials: true
Copy the config files out
sudo docker cp elasticsearch:/usr/share/elasticsearch/config/ /alidata/dockerdata/elasticsearch/config/
sudo docker cp elasticsearch:/usr/share/elasticsearch/data/ /alidata/dockerdata/elasticsearch/data/
sudo docker cp elasticsearch:/usr/share/elasticsearch/logs/ /alidata/dockerdata/elasticsearch/logs/
sudo docker cp elasticsearch:/usr/share/elasticsearch/plugins/ /alidata/dockerdata/elasticsearch/plugins/
docker rm -f elasticsearch
Start
sudo docker run -tid \
--restart=always \
--name=elasticsearch \
-p 9290:9200 \
-p 9390:9300 \
-v /alidata/dockerdata/elasticsearch/config/:/usr/share/elasticsearch/config/ \
-v /alidata/dockerdata/elasticsearch/data/:/usr/share/elasticsearch/data/ \
-v /alidata/dockerdata/elasticsearch/logs/:/usr/share/elasticsearch/logs/ \
-v /alidata/dockerdata/elasticsearch/plugins/:/usr/share/elasticsearch/plugins/ \
-v /etc/localtime:/etc/localtime \
-e ES_JAVA_OPTS="-Xms300m -Xmx500m" \
-e "discovery.type=single-node" \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
elasticsearch:7.17.0
sudo docker run -tid \
--restart=always \
--name=elasticsearch \
-p 9290:9200 \
-p 9390:9300 \
-v /alidata/dockerdata/elasticsearch/config/:/usr/share/elasticsearch/config/ \
-v /alidata/dockerdata/elasticsearch/data/:/usr/share/elasticsearch/data/ \
-v /alidata/dockerdata/elasticsearch/logs/:/usr/share/elasticsearch/logs/ \
-v /alidata/dockerdata/elasticsearch/plugins/:/usr/share/elasticsearch/plugins/ \
-v /etc/localtime:/etc/localtime \
-e ES_JAVA_OPTS="-Xms500m -Xmx1500m" \
-e "discovery.type=single-node" \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
elasticsearch:7.17.0
docker run -tid \
--restart=always \
--name=elasticsearch \
-p 9200:9200 \
-p 9300:9300 \
-v /alidata/dockerdata/elasticsearch/config/:/usr/share/elasticsearch/config/ \
-v /alidata/dockerdata/elasticsearch/data/:/usr/share/elasticsearch/data/ \
-v /alidata/dockerdata/elasticsearch/logs/:/usr/share/elasticsearch/logs/ \
-v /alidata/dockerdata/elasticsearch/plugins/:/usr/share/elasticsearch/plugins/ \
-v /etc/localtime:/etc/localtime \
-e ES_JAVA_OPTS="-Xms500m -Xmx1500m" \
-e "discovery.type=single-node" \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
elasticsearch:7.17.0
To pin the container address on a user-defined network, optionally add: --ip 172.170.0.15 \
</source>
ling server
mv /alidata/dockerdata/elasticsearch /home/alidata/dockerdata
ln -s /home/alidata/dockerdata/elasticsearch /alidata/dockerdata/elasticsearch
docker exec -it elasticsearch bash
more config/elasticsearch.yml
cluster.name: "docker-cluster"
xpack.security.enabled: true
network.host: 0.0.0.0
discovery.type: single-node
docker run -tid \
--restart=always \
--name=elasticsearch \
-p 9200:9200 \
-p 9300:9300 \
-v /alidata/dockerdata/elasticsearch/config/:/usr/share/elasticsearch/config/ \
-v /alidata/dockerdata/elasticsearch/data/:/usr/share/elasticsearch/data/ \
-v /alidata/dockerdata/elasticsearch/logs/:/usr/share/elasticsearch/logs/ \
-v /alidata/dockerdata/elasticsearch/plugins/:/usr/share/elasticsearch/plugins/ \
-v /etc/localtime:/etc/localtime \
-e ES_JAVA_OPTS="-Xms500m -Xmx1500m" \
-e "discovery.type=single-node" \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
elasticsearch:7.17.0
docker exec -it elasticsearch bash
./bin/elasticsearch-setup-passwords interactive
Wb19831010!
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
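A hedged way to confirm the new credentials took effect: request cluster health with the `elastic` user. The `es_url` helper is ours, the password is the one set interactively above, and the block only fires when something answers on localhost:9200.

```shell
# Build the health-check URL and query it with basic auth.
es_url() { printf 'http://%s:%s/_cluster/health' "$1" "$2"; }
if command -v curl >/dev/null 2>&1 && curl -s -o /dev/null --max-time 2 "$(es_url localhost 9200)"; then
  curl -s -u elastic:'Wb19831010!' "$(es_url localhost 9200)?pretty"
fi
```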
docker logs -f elasticsearch
sudo docker exec -it elasticsearch bash
elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.17.0/elasticsearch-analysis-ik-7.17.0.zip
elasticsearch-plugin install http://wiki.ling2.cn/images/elasticsearch-analysis-ik-7.17.0.zip
elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-pinyin/releases/download/v7.17.0/elasticsearch-analysis-pinyin-7.17.0.zip
elasticsearch-plugin install http://wiki.ling2.cn/images/elasticsearch-analysis-pinyin-7.17.0.zip
elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-stconvert/releases/download/v7.17.0/elasticsearch-analysis-stconvert-7.17.0.zip
elasticsearch-plugin install http://wiki.ling2.cn/images/elasticsearch-analysis-stconvert-7.17.0.zip
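After installing the plugins, restart the container and verify they loaded. A hedged sketch: `analyze_body` is our helper for the `_analyze` request payload, and the credentials match the xpack setup above (drop `-u` if security is disabled).

```shell
# List installed plugins, then exercise the ik analyzer.
analyze_body() { printf '{"analyzer":"%s","text":"%s"}' "$1" "$2"; }
if command -v curl >/dev/null 2>&1 && curl -s -o /dev/null --max-time 2 http://localhost:9200; then
  curl -s -u elastic:'Wb19831010!' http://localhost:9200/_cat/plugins
  curl -s -u elastic:'Wb19831010!' -H 'Content-Type: application/json' \
    -X POST http://localhost:9200/_analyze -d "$(analyze_body ik_max_word 中华人民共和国)"
fi
```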
<source>
sudo docker run -p 9800:9800 -d --link elasticsearch:elasticsearch --name=elasticsearch_hd containerize/elastichd
sudo docker run -dit --restart=always --link elasticsearch:elasticsearch --name=hadoop-elasticsearch-head -p 19100:9100 -v /etc/localtime:/etc/localtime -e TZ='Asia/Shanghai' -e LANG="en_US.UTF-8" mobz/elasticsearch-head:5
docker run -d --name elasticsearch \
  --volume /alidata/dockerdata/elasticsearch/data:/usr/share/elasticsearch/data \
  --publish 9200:9200 \
  --publish 9300:9300 \
  elasticsearch
sudo docker run -d --name elastic-hq -p 5000:5000 --restart always elastichq/elasticsearch-hq
Default username/password: elastic/changeme
Change the password
curl -XPUT -u elastic -H 'Content-Type: application/json' 'http://IP:9200/_xpack/security/user/elastic/_password' -d '{ "password" : "newpassword" }'
If startup fails with "max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]", edit the config file (vim /etc/sysctl.conf) and add vm.max_map_count=655360. Save, run sysctl -p, then restart the elasticsearch container.
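The fix runs on the Docker host, not in the container. A hedged sketch that only bumps the value when it is actually too low (`needs_bump` is our helper; the writes need root, hence the guards):

```shell
# Raise vm.max_map_count to Elasticsearch's minimum when needed.
required=262144
needs_bump() { [ "$1" -lt "$2" ]; }   # current < required ?
current=$(sysctl -n vm.max_map_count 2>/dev/null || echo 0)
if needs_bump "$current" "$required"; then
  { echo "vm.max_map_count=$required" >> /etc/sysctl.conf; } 2>/dev/null || true  # survive reboot
  sysctl -w vm.max_map_count="$required" 2>/dev/null || true                      # apply now
fi
```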
logstash
mkdir -p /alidata/dockerdata/logstash/config-dir
cd /alidata/dockerdata/logstash/config-dir
touch logstash.conf
tee /alidata/dockerdata/logstash/config-dir/logstash.conf <<-'EOF'
input {
kafka {
bootstrap_servers => "kafka:9092"
group_id => "logstash"
topics => ["ling-cloud-config","ling-cloud-register","springCloudBusInput","springCloudBusOutput"]
}
}
filter {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:severity}\s+\[%{DATA:service},%{DATA:trace},%{DATA:span},%{DATA:exportable}\]\s+%{DATA:pid}---\s+\[%{DATA:thread}\]\s+%{DATA:class}\s+:\s+%{GREEDYDATA:rest}" }
}
}
output {
stdout {}
elasticsearch {
hosts => ["elasticsearch:9200"]
}
}
EOF
mkdir -p /alidata/dockerdata/logstash
cd /alidata/dockerdata/logstash
touch Dockerfile
FROM logstash
CMD ["-f", "/config-dir/logstash.conf"]
docker build -t docker.ling2.cn/logstash .
docker run -d --name logstash \
  --volume /alidata/dockerdata/logstash/config-dir:/config-dir \
  --link kafka \
  --link elasticsearch \
  docker.ling2.cn/logstash
kibana
https://hub.docker.com/_/kibana/
docker run -d --name kibana \
  --link elasticsearch \
  --publish 75:5601 \
  kibana:5.0
docker rm -f kibana
docker run -d --name kibana \
  --link elasticsearch \
  --publish 75:5601 \
  kibana:7.17.0
docker exec -it kibana /bin/bash
more /usr/share/kibana/config/kibana.yml
tee /usr/share/kibana/config/kibana.yml <<-'EOF'
#
# ** THIS IS AN AUTO-GENERATED FILE **
#
# Default Kibana configuration for docker target
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
# via --link, kibana reaches elasticsearch on the container port (9200), not the host-mapped 9290
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
EOF
docker restart kibana
docker logs -f kibana
The latest Kibana requires a newer Elasticsearch than the one installed here, so stay on the matching 7.17.0 tag.
skywalking
docker run --name skywalking -d -p 1234:1234 -p 11800:11800 -p 12800:12800 --restart always -v /etc/localtime:/etc/localtime --link elasticsearch:elasticsearch -e SW_STORAGE=elasticsearch -e SW_STORAGE_ES_CLUSTER_NODES=elasticsearch:9200 apache/skywalking-oap-server
sudo docker run --name skywalking-ui -d -p 52:8080 -v /etc/localtime:/etc/localtime --link skywalking:skywalking -e SW_OAP_ADDRESS=skywalking:12800 --restart always apache/skywalking-ui
MongoDB
mongo; mongo-express; MongoDB Chinese community
docker run -d --name mongodb --restart=always --volume /alidata/dockerdata/mongodb/data:/data/db \
  --publish 27017:27017 \
  mongo \
  --storageEngine wiredTiger
docker exec -it mongodb mongo admin
db.createUser({ user: 'wiki', pwd: '123456', roles: [ { role: "userAdminAnyDatabase", db: "admin" } ] });
(userAdminAnyDatabase can only be granted on the admin database.)
connecting to: admin
> db.createUser({ user: 'jsmith', pwd: 'some-initial-password', roles: [ { role: "userAdminAnyDatabase", db: "admin" } ] });
Successfully added user: { "user" : "jsmith", "roles" : [ { "role" : "userAdminAnyDatabase", "db" : "admin" } ] }
docker run -it -d --name mongo-express --restart=always --link mongodb:mongo \
  --publish 76:8081 \
  mongo-express
-e ME_CONFIG_OPTIONS_EDITORTHEME="ambiance" \
-e ME_CONFIG_BASICAUTH_USERNAME="user" \
-e ME_CONFIG_BASICAUTH_PASSWORD="fairly long password" \
Configuration: environment variables are passed to the run command to configure a mongo-express container.
Name | Default | Description
--------------------------------|-----------------|------------
ME_CONFIG_BASICAUTH_USERNAME | '' | mongo-express web username
ME_CONFIG_BASICAUTH_PASSWORD | '' | mongo-express web password
ME_CONFIG_MONGODB_ENABLE_ADMIN | 'true' | Enable admin access to all databases. Send strings: `"true"` or `"false"`
ME_CONFIG_MONGODB_ADMINUSERNAME | '' | MongoDB admin username
ME_CONFIG_MONGODB_ADMINPASSWORD | '' | MongoDB admin password
ME_CONFIG_MONGODB_PORT | 27017 | MongoDB port
ME_CONFIG_MONGODB_SERVER | 'mongo' | MongoDB container name. Use comma delimited list of host names for replica sets.
ME_CONFIG_OPTIONS_EDITORTHEME | 'default' | mongo-express editor color theme, [more here](http://codemirror.net/demo/theme.html)
ME_CONFIG_REQUEST_SIZE | '100kb' | Maximum payload size. CRUD operations above this size will fail in [body-parser](https://www.npmjs.com/package/body-parser).
ME_CONFIG_SITE_BASEURL | '/' | Set the baseUrl to ease mounting at a subdirectory. Remember to include a leading and trailing slash.
ME_CONFIG_SITE_COOKIESECRET | 'cookiesecret' | String used by [cookie-parser middleware](https://www.npmjs.com/package/cookie-parser) to sign cookies.
ME_CONFIG_SITE_SESSIONSECRET | 'sessionsecret' | String used to sign the session ID cookie by [express-session middleware](https://www.npmjs.com/package/express-session).
ME_CONFIG_SITE_SSL_ENABLED | 'false' | Enable SSL.
ME_CONFIG_SITE_SSL_CRT_PATH | '' | SSL certificate file.
ME_CONFIG_SITE_SSL_KEY_PATH | '' | SSL key file.
The following are only needed if ME_CONFIG_MONGODB_ENABLE_ADMIN is "false"
Name | Default | Description
--------------------------------|-----------------|------------
ME_CONFIG_MONGODB_AUTH_DATABASE | 'db' | Database name
ME_CONFIG_MONGODB_AUTH_USERNAME | 'admin' | Database username
ME_CONFIG_MONGODB_AUTH_PASSWORD | 'pass' | Database password
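Putting the table to use: when MongoDB has authentication enabled (the createUser call above), pass the admin credentials to mongo-express, and protect the web UI itself with the basic-auth pair. A hedged sketch; the credential values are illustrative and the block is guarded so it is a no-op without docker.

```shell
# mongo-express against an auth-enabled MongoDB (see the env table above).
DB_USER=wiki
DB_PASS=123456
if command -v docker >/dev/null 2>&1; then
  docker run -it -d --name mongo-express --restart=always --link mongodb:mongo \
    --publish 76:8081 \
    -e ME_CONFIG_MONGODB_ADMINUSERNAME="$DB_USER" \
    -e ME_CONFIG_MONGODB_ADMINPASSWORD="$DB_PASS" \
    -e ME_CONFIG_BASICAUTH_USERNAME="user" \
    -e ME_CONFIG_BASICAUTH_PASSWORD="fairly long password" \
    mongo-express
fi
```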
zipkin
docker-zipkin; see zipkin for more.
docker run --name zipkin -d -p 63:9411 openzipkin/zipkin
tomcat
Building a custom image
tomcat-9.0-doc; maven-tomcat; Accessing Host Manager in Tomcat 9
cd /alidata/dockerdata/tomcat/images
docker build -t docker.ling2.cn/tomcat9 .
docker login --username=admin docker.ling2.cn
password admin123
docker push docker.ling2.cn/tomcat9
tomcat-users.xml
- manager-gui — Access to the HTML interface.
- manager-status — Access to the "Server Status" page only.
- manager-script — Access to the tools-friendly plain text interface that is described in this document, and to the "Server Status" page.
- manager-jmx — Access to JMX proxy interface and to the "Server Status" page.
The HTML interface is protected against CSRF but the text and JMX interfaces are not. To maintain the CSRF protection:
users with the manager-gui role should not be granted either the manager-script or manager-jmx roles.
if the text or jmx interfaces are accessed through a browser (e.g. for testing since these interfaces are intended for tools not humans) then the browser must be closed afterwards to terminate the session.
The roles command has been removed from the Manager application since it did not work with the default configuration and most Realms do not support providing a list of roles.
- admin-gui - allows access to the HTML GUI and the status pages
- admin-script - allows access to the text interface and the status pages
The HTML interface is protected against CSRF but the text interface is not. To maintain the CSRF protection:
users with the admin-gui role should not be granted the admin-script role.
if the text interface is accessed through a browser (e.g. for testing, since this interface is intended for tools, not humans) then the browser must be closed afterwards to terminate the session.
<?xml version="1.0" encoding="UTF-8"?>
<tomcat-users xmlns="http://tomcat.apache.org/xml"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
version="1.0">
<role rolename="admin"/>
<role rolename="manager-script"/>
<role rolename="manager-gui"/>
<role rolename="manager-jmx"/>
<role rolename="manager-status"/>
<role rolename="admin-gui"/>
<role rolename="admin-script"/>
<user username="admin" password="admin" roles="manager-gui,manager-script,manager-jmx,manager-status,admin-gui,admin-script"/>
</tomcat-users>
settings.xml
Edit settings.xml
<servers>
<server>
<id>tomcat</id>
<username>admin</username>
<password>admin</password>
</server>
</servers>
context.xml
Edit $TOMCAT_HOME/webapps/manager/META-INF/context.xml; see Tomcat目录结构及配置文件说明#manager远程访问
<Context privileged="true" antiResourceLocking="false"
docBase="${catalina.home}/webapps/manager">
<Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="^.*$" />
</Context>
Dockerfile
cd /alidata/dockerdata/tomcat/images
docker build -t docker.ling2.cn/tomcat9 .
FROM tomcat:9.0
MAINTAINER "bo.wang <102010cncger@sina.com>"
ADD conf/settings.xml /usr/local/tomcat/conf/
ADD conf/tomcat-users.xml /usr/local/tomcat/conf/
ADD webapps/manager/META-INF/context.xml /usr/local/tomcat/webapps/manager/META-INF/context.xml
ADD webapps/manager/META-INF/context.xml /usr/local/tomcat/webapps/host-manager/META-INF/context.xml
opencron-server
oschina-opencron
docker run --name opencron-server --link mysql:mysql -it -d --restart=always -p 68:8080 \
  -v /alidata/dockerdata/tomcat/opencron/opencronserver:/usr/local/tomcat/webapps/ROOT docker.ling2.cn/tomcat9
opencron/111111
sh startup.sh -P67 -p111111
opencron has two parts: opencron-server, a web-based central scheduling console, and opencron-agent, installed on every machine whose jobs should be managed centrally. Once opencron-agent is installed and running on a server, that machine can be added directly in opencron-server.
Opencron-agent installation steps; Opencron-server deployment steps
oracle
Installing from a custom-built image
See [9]; this must run on the overlay2 or btrfs storage driver.
mkdir -p /alidata/dockerdata/oracle/data_11g
mkdir -p /alidata/dockerdata/oracle/data_12c
cd /alidata/dockerdata/oracle
git clone https://github.com/oracle/docker-images.git
cd /alidata/dockerdata/oracle/docker-images/OracleDatabase/dockerfiles/12.2.0.1
wget http://download.oracle.com/otn/linux/oracle12c/122010/linuxx64_12201_database.zip?AuthParam=1493427346_e460f7ba1589fda383df41cc4cc85601
Or
mv /alidata/soft/linuxx64_12201_database.zip /alidata/dockerdata/oracle/docker-images/OracleDatabase/dockerfiles/12.2.0.1
cd /alidata/dockerdata/oracle/docker-images/OracleDatabase/dockerfiles
./buildDockerImage.sh -h
Or
cd /alidata/dockerdata/oracle/docker-images/OracleDatabase/dockerfiles/12.2.0.1
cp Dockerfile.ee Dockerfile
docker build -t oracle/database:12.2.0.1-ee .
File cleanup
mv /alidata/dockerdata/oracle/docker-images/OracleDatabase/dockerfiles/12.2.0.1/linuxx64_12201_database.zip /alidata/soft/
Start
docker run --name oracle \
-p 1521:1521 -p 5500:5500 \
-e ORACLE_SID=orcl \
-e ORACLE_PDB=orcl \
-e ORACLE_PWD=Wb19831010! \
-e ORACLE_CHARACTERSET=AL32UTF8 \
-v /alidata/dockerdata/oracle/data_12c:/opt/oracle/oradata \
oracle/database:12.2.0.1-ee
Parameters:
--name: The name of the container (default: auto generated)
-p: The port mapping of the host port to the container port.
Two ports are exposed: 1521 (Oracle Listener), 5500 (OEM Express)
-e ORACLE_SID: The Oracle Database SID that should be used (default: ORCLCDB)
-e ORACLE_PDB: The Oracle Database PDB name that should be used (default: ORCLPDB1)
-e ORACLE_PWD: The Oracle Database SYS, SYSTEM and PDB_ADMIN password (default: auto generated)
-e ORACLE_CHARACTERSET:
The character set to use when creating the database (default: AL32UTF8)
-v The data volume to use for the database.
Has to be owned by the Unix user "oracle" or set appropriately.
If omitted the database will not be persisted over container recreation.
Usage
sqlplus sys/<your password>@//localhost:1521/<your SID> as sysdba
sqlplus system/<your password>@//localhost:1521/<your SID>
sqlplus pdbadmin/<your password>@//localhost:1521/<Your PDB name>
https://localhost:5500/em/
- Reset password
docker exec <container name> ./setPassword.sh <your password>
- sqlplus
docker run --rm -ti oracle/database:12.2.0.1-ee sqlplus pdbadmin/<yourpassword>@//<db-container-ip>:1521/ORCLPDB1
docker exec -ti <container name> sqlplus pdbadmin@ORCLPDB1
Install using the sath89/oracle-12c-style image (quay.io/maksymbilenko/oracle-12c)
参考[10] https://www.oracle.com/database/technologies/instant-client/downloads.html
mkdir -p /alidata/dockerdata/oracle/data_12c
mkdir -p /alidata/backup/oracle
chmod 777 /alidata/backup/oracle
chmod 777 /alidata/dockerdata/oracle/data_12c
docker pull quay.io/maksymbilenko/oracle-12c
docker run --name oracle_12c --restart=always -d -p 65:8080 -p 1521:1521 -v /alidata/dockerdata/oracle/data_12c:/u01/app/oracle -v /alidata/backup/oracle:/orcl_dp -e DBCA_TOTAL_MEMORY=1024 quay.io/maksymbilenko/oracle-12c
-e WEB_CONSOLE=false
- Connect database with following setting:
hostname: localhost
port: 1521
sid: xe
service name: xe.oracle.docker
username: system
password: oracle
jdbc:oracle:thin:@101.132.191.22:1521:xe
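The thin-driver URL above is just host, port, and SID glued together; a tiny sketch (values are the ones from this section, the `build_jdbc_url` helper name is made up here) that assembles it:

```shell
# Build an Oracle thin-driver JDBC URL from its parts.
build_jdbc_url() {
  host="$1"; port="$2"; sid="$3"
  echo "jdbc:oracle:thin:@${host}:${port}:${sid}"
}

build_jdbc_url 101.132.191.22 1521 xe   # prints jdbc:oracle:thin:@101.132.191.22:1521:xe
```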
- To connect using sqlplus:
sqlplus system/oracle@//192.168.1.3:1521/xe.oracle.docker
- Password for SYS & SYSTEM:
oracle
- Connect to Oracle Application Express web management console with following settings:
http://localhost:65/apex
workspace: INTERNAL
user: ADMIN
password: 0Racle$
- Connect to Oracle Enterprise Management console with following settings:
http://localhost:65/em
user: sys
password: oracle
connect as sysdba: true
- tns
surface =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.3)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = xe.oracle.docker)
    )
  )
mkdir -p /alidata/dockerdata/oracle/data_11g
mkdir -p /alidata/backup/oracle
chmod 777 /alidata/backup/oracle
chmod 777 /alidata/dockerdata/oracle/data_11g
docker pull quay.io/maksymbilenko/oracle-xe-11g
docker run --name oracle_11g --restart=always -d -p 65:8080 -p 1521:1521 -v /alidata/dockerdata/oracle/data_11g:/u01/app/oracle -v /alidata/backup/oracle:/orcl_dp -e DBCA_TOTAL_MEMORY=1024 quay.io/maksymbilenko/oracle-xe-11g
hostname: localhost
port: 1521
sid: xe
username: system
password: oracle
Password for SYS & SYSTEM:
oracle
- Connect to Oracle Application Express web management console with following settings:
http://localhost:8080/apex workspace: INTERNAL user: ADMIN password: oracle
- 导入导出
mkdir -p /alidata/backup/oracle
chmod 777 /alidata/backup/oracle
docker exec -it oracle bash
-- then, inside the container, in sqlplus:
drop directory orcl_dp;
create directory orcl_dp as '/orcl_dp';
grant read, write on directory orcl_dp to public;
impdp system/oracle@//127.0.0.1:1521/xe.oracle.docker directory=orcl_dp remap_schema=channel:agent dumpfile=CHANNEL.DMP logfile=channel.log
impdp system/oracle@//127.0.0.1:1521/xe.oracle.docker directory=orcl_dp remap_schema=agent:channel dumpfile=AGENT.DMP logfile=agent.log
impdp system/oracle@//127.0.0.1:1521/xe.oracle.docker directory=orcl_dp dumpfile=YBTRAIN.DMP logfile=YBTRAIN.log
exit;
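The imports above differ only in their remap and dump file; a small dry-run helper (the `impdp_cmd` name is made up here) that prints the impdp command line it would run, using the same connect string as above:

```shell
# Print the impdp command for a dump file, with an optional schema remap (dry run).
impdp_cmd() {
  dumpfile="$1"; logfile="$2"; remap="$3"
  cmd="impdp system/oracle@//127.0.0.1:1521/xe.oracle.docker directory=orcl_dp"
  if [ -n "$remap" ]; then cmd="$cmd remap_schema=$remap"; fi
  echo "$cmd dumpfile=$dumpfile logfile=$logfile"
}

impdp_cmd CHANNEL.DMP channel.log channel:agent
impdp_cmd YBTRAIN.DMP YBTRAIN.log
```

Replace `echo` inside the function with `eval` (or pipe the output to sh) once the printed command looks right.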
CREATE SMALLFILE TABLESPACE "TMS"
DATAFILE '/u01/app/oracle/TMS.dbf'
SIZE 100M AUTOEXTEND ON NEXT 2048K MAXSIZE UNLIMITED
LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
create user TMS identified by TMS default tablespace TMS;
grant connect,resource to TMS;
grant dba to TMS;
mycollab
docker run --name mycollab --link mysql -d -p 62:80 -v /alidata/dockerdata/mycollab/mycollab.properties:/opt/mycollab/MyCollab-5.3.4/conf/mycollab.properties bibbox/mycollab
docker cp mycollab:/opt/mycollab/MyCollab-5.3.4/conf/mycollab.properties /alidata/dockerdata/mycollab/mycollab.properties
jira
docker create --restart=no --name "jira-container" \
  --publish "8080:8080" \
  --volume "hostpath:/var/atlassian/jira" \
  --env "CATALINA_OPTS=" \
  cptactionhank/atlassian-jira-software:latest
docker create --restart=no --name "confluence-container" \
  --publish "8090:8090" \
  --volume "hostpath:/var/atlassian/confluence" \
  --env "CATALINA_OPTS=" \
  cptactionhank/atlassian-confluence:latest
fecru
- crucible
docker run -d -p 62:8080 -v /alidata/dockerdata/crucible/data:/atlassian/data/crucible --name crucible \
  mswinarski/atlassian-crucible:latest
docker run -d -p 62:8080 -v /alidata/dockerdata/crucible/data:/atlassian/data/crucible --name crucible --link mysql:mysql \
  -e 'FECRU_CONFIGURE_ADMIN_PASSWORD=Wb191010610109' \
  -e 'FECRU_CONFIGURE_DB_TYPE=mysql' \
  -e 'FECRU_CONFIGURE_DB_HOST=mysql' \
  -e 'FECRU_CONFIGURE_DB_PORT=3306' \
  -e 'FECRU_CONFIGURE_DB_USER=root' \
  -e 'FECRU_CONFIGURE_DB_PASSWORD=Wb19831010!' \
  mswinarski/atlassian-crucible:latest
- fisheye
docker run -d -p 61:8080 -v /alidata/dockerdata/fisheye/data:/atlassian/data/fisheye --name fisheye \
  mswinarski/atlassian-fisheye:latest
docker run -d -p 61:8080 -v /alidata/dockerdata/fisheye/data:/atlassian/data/fisheye --name fisheye --link mysql:mysql \
  -e 'FECRU_CONFIGURE_ADMIN_PASSWORD=Wb191010610109' \
  -e 'FECRU_CONFIGURE_DB_TYPE=mysql' \
  -e 'FECRU_CONFIGURE_DB_HOST=mysql' \
  -e 'FECRU_CONFIGURE_DB_PORT=3306' \
  -e 'FECRU_CONFIGURE_DB_USER=root' \
  -e 'FECRU_CONFIGURE_DB_PASSWORD=Wb19831010!' \
  mswinarski/atlassian-fisheye:latest
Supported environment variables:
- FECRU_CONFIGURE_DB_TYPE
- FECRU_CONFIGURE_DB_HOST
- FECRU_CONFIGURE_DB_PORT
- FECRU_CONFIGURE_DB_USER
- FECRU_CONFIGURE_DB_PASSWORD
- FECRU_CONFIGURE_ADMIN_PASSWORD
- FECRU_CONFIGURE_LICENSE_FISHEYE
- FECRU_CONFIGURE_LICENSE_CRUCIBLE
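Rather than repeating -e flags, the FECRU_CONFIGURE_* variables above can be collected into an env file and passed with docker's --env-file option (the file name and values here are placeholders):

```shell
# Collect the FECRU settings into an env file, then: docker run --env-file fecru.env ...
cat > fecru.env <<'EOF'
FECRU_CONFIGURE_DB_TYPE=mysql
FECRU_CONFIGURE_DB_HOST=mysql
FECRU_CONFIGURE_DB_PORT=3306
FECRU_CONFIGURE_DB_USER=root
EOF

grep -c '^FECRU_CONFIGURE_' fecru.env   # prints 4
```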
kodexplorer
- Install from source
git clone https://github.com/kalcaddle/KODExplorer.git
chmod -Rf 777 ./KODExplorer/*
- Install via download
wget https://github.com/kalcaddle/KODExplorer/archive/master.zip
unzip master.zip
chmod -Rf 777 ./*
cd /alidata/dockerdata
git clone https://github.com/kalcaddle/KODExplorer.git
chmod -Rf 777 ./KODExplorer/*
docker run --name explorer --restart=always -it -p 62:80 -v /alidata/dockerdata/KODExplorer:/var/www/html --link mysql:mysql -d docker.ling2.cn/phpserver
Manual update procedure:
Method 1: download the update package and unpack it over the kod installation directory.
Method 2: download the update package, upload it into the data directory under the program directory, then click "Auto update" on the update page.
sonarqube
mkdir -p /alidata/dockerdata/sonarqube
chmod -R 777 /alidata/dockerdata/sonarqube
docker run -d --name sonarqube \
  --link postgresql:postgresql \
  -p 61:9000 \
  -v /alidata/dockerdata/sonarqube/conf:/opt/sonarqube/conf \
  -v /alidata/dockerdata/sonarqube/data:/opt/sonarqube/data \
  -v /alidata/dockerdata/sonarqube/extensions:/opt/sonarqube/extensions \
  -v /alidata/dockerdata/sonarqube/plugins:/opt/sonarqube/lib/bundled-plugins \
  -e SONARQUBE_JDBC_USERNAME=postgres \
  -e SONARQUBE_JDBC_PASSWORD=Wb19831010! \
  -e SONARQUBE_JDBC_URL=jdbc:postgresql://postgresql/sonar \
  sonarqube
admin/admin
orangescrum_失败
docker run -it -p 60:80 --name=orangescrum orangescrum/official /bin/bash
sh start.sh
cd /alidata/dockerdata
git clone https://github.com/Orangescrum/orangescrum.git
chmod -R 0777 /alidata/dockerdata/orangescrum/app/Config
chmod -R 0777 /alidata/dockerdata/orangescrum/app/tmp
chmod -R 0777 /alidata/dockerdata/orangescrum/app/webroot
docker run --name orangescrum -it -p 60:80 -v /alidata/dockerdata/orangescrum:/var/www/html --link mysql:mysql -d docker.ling2.cn/phpserver
[22] adm.demorangescrum@gmail.com admin123
- Extract the archive. Upload the extracted folder(orangescrum-master) to your working directory.
- Provide proper write permission to "app/Config", "app/tmp" and "app/webroot" folders and their sub-folders.
e.g. chmod -R 0777 app/Config; chmod -R 0777 app/tmp; chmod -R 0777 app/webroot. You can change the write permission of the "app/Config" folder after the installation procedure is completed.
- Create a new MySQL database named "orangescrum"(`utf8_unicode_ci` collation).
- Get the database.sql file from the root directory and import that to your database.
- Locate your `app` directory and edit the following files:
- `app/Config/database.php` - We have already updated the database name as "Orangescrum" which you can change at any point. In order to change it, just create a database using any name and update that name as database in DATABASE_CONFIG section. And also you can set a password for your Mysql login which you will have to update in the same page as password. [Required]
- `app/Config/constants.php` - Provide your valid SMTP_UNAME and SMTP_PWORD. For SMTP email sending you can use(Only one at a time) either Gmail or Sendgrid or Mandrill. By default we are assuming that you are using Gmail, so Gmail SMTP configuration section is uncommented. If you are using Sendgrid or Mandrill just comment out the Gmail section and uncomment the Sendgrid or Mandrill configuration section as per your requirement. [Required]
- `app/Config/constants.php` - Update the FROM_EMAIL_NOTIFY and SUPPORT_EMAIL [Required]
- Run the application as http://your-site.com/ from your browser and start using Orangescrum
For more information please visit below link:
http://www.orangescrum.org/general-installation-guide
onlyoffice
Aliyun backup · Yueshang onlyoffice install · works on CentOS 8
https://hub.docker.com/r/onlyoffice/communityserver https://hub.docker.com/r/onlyoffice/communityserver https://hub.docker.com/r/onlyoffice/mailserver https://hub.docker.com/r/onlyoffice/appserver
[23] Mail ports: 25, 109, 110, 143, 465, 587, 993, 995 [24] [25]
Cannot install on surface: because of the router, the mail server cannot be verified back to itself.
Installing Community Server integrated with Document and Mail Servers
Community Server is a part of ONLYOFFICE Community Edition that comprises also Document Server and Mail Server. To install them, follow these easy steps:
STEP 1: Create the 'onlyoffice' network
docker network create --driver bridge onlyoffice
After that launch the containers on it using the 'docker run --net onlyoffice' option.
STEP 2: Install Document Server(surface)
sudo docker cp /alidata/backup/fonts onlyoffice-document-server:/usr/share/
systemctl stop postfix systemctl disable postfix
docker run --net onlyoffice -i -t -d --name onlyoffice-document-server -p 59:80 \
  -v /alidata/dockerdata/onlyoffice/DocumentServer/data:/var/www/onlyoffice/Data \
  -v /alidata/dockerdata/onlyoffice/DocumentServer/logs:/var/log/onlyoffice \
  -v /alidata/backup/fonts:/usr/share/fonts/ \
  -e POSTGRESQL_SERVER_HOST=192.168.31.2 \
  -e POSTGRESQL_SERVER_PORT=5433 \
  -e POSTGRESQL_SERVER_DB_NAME=onlyoffice \
  -e POSTGRESQL_SERVER_USER=postgres \
  -e POSTGRESQL_SERVER_PASS=Wb19831010! \
  -e 'RABBITMQ_SERVER_URL=amqp://root:Wb19831010!@192.168.31.2:5672' \
  -e REDIS_SERVER_HOST=192.168.31.2 \
  -e REDIS_SERVER_PORT=6379 \
  -e SSL_CERTIFICATE_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.crt \
  -e SSL_KEY_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.key \
  onlyoffice/documentserver
docker run --net onlyoffice -i -t -d --name onlyoffice-document-server -p 59:80 \
  -v /alidata/dockerdata/onlyoffice/DocumentServer/data:/var/www/onlyoffice/Data \
  -v /alidata/dockerdata/onlyoffice/DocumentServer/logs:/var/log/onlyoffice \
  onlyoffice/documentserver
--link postgresql:postgresql --link rabbitmq:rabbitmq --link redis:redis \
  -e POSTGRESQL_SERVER_HOST=postgresql \
  -e POSTGRESQL_SERVER_PORT=5433 \
  -e POSTGRESQL_SERVER_DB_NAME=onlyoffice_document \
  -e POSTGRESQL_SERVER_USER=root \
  -e POSTGRESQL_SERVER_PASS='Wb19831010!' \
  -e RABBITMQ_SERVER_URL='amqp://rabbitmq:5672/' \
  -e REDIS_SERVER_HOST=redis \
  -e REDIS_SERVER_PORT=6379 \
80/tcp, 443/tcp
docker run -i -t -d -p 80:80 --restart=always -v d:/dockerdata/onlyoffice/DocumentServer/logs:/var/log/onlyoffice -v d:/dockerdata/onlyoffice/DocumentServer/data:/var/www/onlyoffice/Data -v d:/dockerdata/onlyoffice/DocumentServer/lib:/var/lib/onlyoffice -v d:/dockerdata/onlyoffice/DocumentServer/db:/var/lib/postgresql onlyoffice/documentserver
docker run -i -t -d -p 59:80 --restart=always \
  -v /alidata/dockerdata/onlyoffice/DocumentServer/logs:/var/log/onlyoffice \
  -v /alidata/dockerdata/onlyoffice/DocumentServer/data:/var/www/onlyoffice/Data \
  -v /alidata/dockerdata/onlyoffice/DocumentServer/lib:/var/lib/onlyoffice \
  -v /alidata/dockerdata/onlyoffice/DocumentServer/db:/var/lib/postgresql \
  onlyoffice/documentserver
- ONLYOFFICE_HTTPS_HSTS_ENABLED: Advanced configuration option for turning off the HSTS configuration. Applicable only when SSL is in use. Defaults to true.
- ONLYOFFICE_HTTPS_HSTS_MAXAGE: Advanced configuration option for setting the HSTS max-age in the onlyoffice nginx vHost configuration. Applicable only when SSL is in use. Defaults to 31536000.
- SSL_CERTIFICATE_PATH: The path to the SSL certificate to use. Defaults to /var/www/onlyoffice/Data/certs/onlyoffice.crt.
- SSL_KEY_PATH: The path to the SSL certificate's private key. Defaults to /var/www/onlyoffice/Data/certs/onlyoffice.key.
- SSL_DHPARAM_PATH: The path to the Diffie-Hellman parameter. Defaults to /var/www/onlyoffice/Data/certs/dhparam.pem.
- SSL_VERIFY_CLIENT: Enable verification of client certificates using the CA_CERTIFICATES_PATH file. Defaults to false
- POSTGRESQL_SERVER_HOST: The IP address or the name of the host where the PostgreSQL server is running.
- POSTGRESQL_SERVER_PORT: The PostgreSQL server port number.
- POSTGRESQL_SERVER_DB_NAME: The name of a PostgreSQL database to be created on the image startup.
- POSTGRESQL_SERVER_USER: The new user name with superuser permissions for the PostgreSQL account.
- POSTGRESQL_SERVER_PASS: The password set for the PostgreSQL account.
- RABBITMQ_SERVER_URL: The AMQP URL to connect to RabbitMQ server.
- REDIS_SERVER_HOST: The IP address or the name of the host where the Redis server is running.
- REDIS_SERVER_PORT: The Redis server port number.
- NGINX_WORKER_PROCESSES: Defines the number of nginx worker processes.
- NGINX_WORKER_CONNECTIONS: Sets the maximum number of simultaneous connections that can be opened by a nginx worker process.
STEP 3: Install Mail Server(ecs)
An external database cannot be used for now [27]
docker run --net onlyoffice --privileged -i -t -d --name onlyoffice-mail-server \
  -p 25:25 -p 143:143 -p 587:587 -p 58:8081 -p 444:443 -p 3308:3306 \
  -v /alidata/dockerdata/onlyoffice/MailServer/data:/var/vmail \
  -v /alidata/dockerdata/onlyoffice/MailServer/data/certs:/etc/pki/tls/mailserver \
  -v /alidata/dockerdata/onlyoffice/MailServer/logs:/var/log \
  -v /alidata/dockerdata/onlyoffice/MailServer/mysql:/var/lib/mysql \
  -h ling2.cn \
  onlyoffice/mailserver
0.0.0.0:25->25/tcp, 0.0.0.0:143->143/tcp, 3306/tcp, 0.0.0.0:587->587/tcp, 8081/tcp
From the onlyoffice-mail-server logs. Where yourdomain.com is your mail server hostname.
Your domain that will be used for maintaining correspondence must be valid and configured for this machine (i.e. it should have the appropriate A record in the DNS settings that points your domain name to the IP address of the machine where Mail Server is installed).
In the command above, the "yourdomain.com" parameter must be understood as a service domain for Mail Server. It is usually specified in the MX record of the domain that will be used for maintaining correspondence. As a rule, the "yourdomain.com" looks like mx1.onlyoffice.com
STEP 4: Install Community Server(surface)
[28] Initialization writes a fairly large amount of data; using the database on www.ling2.cn is not recommended.
Because the home machine can only expose port 443, a certificate is required; otherwise the document server pages embedded in the HTTPS page use an http address.
cd /alidata/dockerdata/onlyoffice/CommunityServer/data/certs
openssl genrsa -out onlyoffice.key 2048
openssl req -new -key onlyoffice.key -out onlyoffice.csr
openssl x509 -req -days 365 -in onlyoffice.csr -signkey onlyoffice.key -out onlyoffice.crt
openssl dhparam -out dhparam.pem 2048
chmod 400 onlyoffice.key
docker run --name mysql3309 --restart=always -p 3309:3306 -v /alidata/dockerdata/onlyoffice/CommunityServer/mysql_bark:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=Wb19831010! -d mysql:5.5.55
docker run --net onlyoffice -i -t -d --name onlyoffice-community-server \
  -p 60:80 -p 5222:5222 -p 1443:443 -p 3309:3306 \
  -v /alidata/dockerdata/onlyoffice/CommunityServer/data:/var/www/onlyoffice/Data \
  -v /alidata/dockerdata/onlyoffice/CommunityServer/mysql:/var/lib/mysql \
  -v /alidata/dockerdata/onlyoffice/CommunityServer/logs:/var/log/onlyoffice \
  -v /alidata/dockerdata/onlyoffice/DocumentServer/data:/var/www/onlyoffice/DocumentServerData \
  -e DOCUMENT_SERVER_PORT_80_TCP_ADDR=onlyoffice-document-server \
  -e MAIL_SERVER_API_PORT=58 \
  -e MAIL_SERVER_API_HOST=192.168.31.2 \
  -e MAIL_SERVER_DB_HOST=192.168.31.2 \
  -e MAIL_SERVER_DB_PORT=3308 \
  -e REDIS_SERVER_HOST=192.168.31.2 \
  -e REDIS_SERVER_CACHEPORT=6379 \
  -e MYSQL_SERVER_HOST=192.168.31.2 \
  -e MYSQL_SERVER_PORT=3306 \
  -e MYSQL_SERVER_DB_NAME=onlyoffice \
  -e MYSQL_SERVER_USER=root \
  -e MYSQL_SERVER_PASS=Wb19831010! \
  -e SSL_CERTIFICATE_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.crt \
  -e SSL_KEY_PATH=/var/www/onlyoffice/Data/certs/onlyoffice.key \
  onlyoffice/communityserver
docker run --net onlyoffice -i -t -d --name onlyoffice-community-server \
  -p 60:80 -p 5222:5222 -p 1443:443 -p 3309:3306 \
  -v /alidata/dockerdata/onlyoffice/CommunityServer/data:/var/www/onlyoffice/Data \
  -v /alidata/dockerdata/onlyoffice/CommunityServer/mysql:/var/lib/mysql \
  -v /alidata/dockerdata/onlyoffice/CommunityServer/logs:/var/log/onlyoffice \
  -v /alidata/dockerdata/onlyoffice/DocumentServer/data:/var/www/onlyoffice/DocumentServerData \
  -e DOCUMENT_SERVER_PORT_80_TCP_ADDR=onlyoffice-document-server \
  -e MAIL_SERVER_DB_HOST=onlyoffice-mail-server \
  -e MYSQL_SERVER_HOST=192.168.31.2 \
  -e MYSQL_SERVER_PORT=3306 \
  -e MYSQL_SERVER_DB_NAME=onlyoffice \
  -e MYSQL_SERVER_USER=root \
  -e MYSQL_SERVER_PASS=Wb19831010! \
  -e REDIS_SERVER_HOST=192.168.31.2 \
  -e REDIS_SERVER_CACHEPORT=6379 \
  onlyoffice/communityserver
docker run --net onlyoffice -i -t -d --name onlyoffice-community-server \
  -p 60:80 -p 5222:5222 -p 1443:443 -p 3309:3306 \
  -v /alidata/dockerdata/onlyoffice/CommunityServer/data:/var/www/onlyoffice/Data \
  -v /alidata/dockerdata/onlyoffice/CommunityServer/logs:/var/log/onlyoffice \
  -v /alidata/dockerdata/onlyoffice/DocumentServer/data:/var/www/onlyoffice/DocumentServerData \
  -e DOCUMENT_SERVER_PORT_80_TCP_ADDR=onlyoffice-document-server \
  -e MAIL_SERVER_API_PORT=58 \
  -e MAIL_SERVER_API_HOST=www.ling2.cn \
  -e MAIL_SERVER_DB_HOST=www.ling2.cn \
  -e MAIL_SERVER_DB_PORT=3308 \
  -e MYSQL_SERVER_HOST=192.168.1.3 \
  -e MYSQL_SERVER_PORT=3306 \
  -e MYSQL_SERVER_DB_NAME=onlyoffice \
  -e MYSQL_SERVER_USER=root \
  -e MYSQL_SERVER_PASS=Wb19831010! \
  -e REDIS_SERVER_HOST=192.168.1.3 \
  -e REDIS_SERVER_CACHEPORT=6379 \
  onlyoffice/communityserver
0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 3306/tcp, 5280/tcp, 9865-9866/tcp, 9871/tcp, 9882/tcp, 0.0.0.0:5222->5222/tcp, 9888/tcp
MAIL_SERVER_API_PORT=${MAIL_SERVER_API_PORT:-${MAIL_SERVER_PORT_8081_TCP_PORT:-8081}};
MAIL_SERVER_API_HOST=${MAIL_SERVER_API_HOST:-${MAIL_SERVER_PORT_8081_TCP_ADDR}};
MAIL_SERVER_DB_HOST=${MAIL_SERVER_DB_HOST:-${MAIL_SERVER_PORT_3306_TCP_ADDR}};
MAIL_SERVER_DB_PORT=${MAIL_SERVER_DB_PORT:-${MAIL_SERVER_PORT_3306_TCP_PORT:-3306}};
MAIL_SERVER_DB_NAME=${MAIL_SERVER_DB_NAME:-"onlyoffice_mailserver"};
MAIL_SERVER_DB_USER=${MAIL_SERVER_DB_USER:-"mail_admin"};
MAIL_SERVER_DB_PASS=${MAIL_SERVER_DB_PASS:-"Isadmin123"};
REDIS_SERVER_HOST=${REDIS_SERVER_PORT_3306_TCP_ADDR:-${REDIS_SERVER_HOST}};
REDIS_SERVER_CACHEPORT=${REDIS_SERVER_PORT_3306_TCP_PORT:-${REDIS_SERVER_CACHEPORT:-"6379"}};
REDIS_SERVER_PASSWORD=${REDIS_SERVER_PASSWORD:-""};
REDIS_SERVER_SSL=${REDIS_SERVER_SSL:-"false"};
REDIS_SERVER_DATABASE=${REDIS_SERVER_DATABASE:-"0"};
REDIS_SERVER_CONNECT_TIMEOUT=${REDIS_SERVER_CONNECT_TIMEOUT:-"5000"};
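The entrypoint excerpts above rely on shell default expansion; a quick standalone illustration of how `${VAR:-default}` resolves, using MAIL_SERVER_DB_PORT as the example:

```shell
# ${VAR:-default} keeps an existing value and falls back otherwise,
# which is how the entrypoint derives MAIL_SERVER_DB_PORT.
unset MAIL_SERVER_DB_PORT MAIL_SERVER_PORT_3306_TCP_PORT
echo "${MAIL_SERVER_DB_PORT:-${MAIL_SERVER_PORT_3306_TCP_PORT:-3306}}"   # prints 3306

MAIL_SERVER_DB_PORT=3308
echo "${MAIL_SERVER_DB_PORT:-${MAIL_SERVER_PORT_3306_TCP_PORT:-3306}}"   # prints 3308
```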
-e DOCUMENT_SERVER_PROTOCOL=https \
-e SSL_DHPARAM_PATH=/var/www/onlyoffice/Data/certs/dhparam.pem \
Problem: the document server path is http under https
if [ ${DOCUMENT_SERVER_HOST} ]; then
DOCUMENT_SERVER_ENABLED=true;
DOCUMENT_SERVER_API_URL="${DOCUMENT_SERVER_PROTOCOL}://${DOCUMENT_SERVER_HOST}${DOCUMENT_SERVER_API_URL}";
elif [ ${DOCUMENT_SERVER_PORT_80_TCP_ADDR} ]; then
DOCUMENT_SERVER_ENABLED=true;
DOCUMENT_SERVER_HOST=${DOCUMENT_SERVER_PORT_80_TCP_ADDR};
fi
{"DbConnection" : "Server=www.ling2.cn;Database=onlyoffice_mailserver;User ID=mail_admin;Password=Isadmin123;Pooling=True;Character Set=utf8;AutoEnlist=false", "Api":{"Protocol":"http", "Server":"180.154.185.200", "Port":"8081","Version":"v1","Token":""}}
For outgoing connections you need to expose the following ports:
- 80 for HTTP
- 443 for HTTPS
- 5222 for XMPP-compatible instant messaging client (for ONLYOFFICE Talk correct work)
Additional ports to be exposed for the mail client correct work:
- 25 for SMTP
- 465 for SMTPS
- 143 for IMAP
- 993 for IMAPS
- 110 for POP3
- 995 for POP3S
Zhangjiakou
admin@ling2.cn/Wb191010610109
https://helpcenter.onlyoffice.com/server/docker/document/docker-installation.aspx
mkdir -p /app/onlyoffice
cd /app/onlyoffice
sudo mkdir -p "/app/onlyoffice/mysql/conf.d";
sudo mkdir -p "/app/onlyoffice/mysql/data";
sudo mkdir -p "/app/onlyoffice/mysql/initdb";
sudo mkdir -p "/app/onlyoffice/mysql/logs";
chown 999:999 /app/onlyoffice/mysql/logs;
sudo mkdir -p "/app/onlyoffice/CommunityServer/data";
sudo mkdir -p "/app/onlyoffice/CommunityServer/logs";
sudo mkdir -p "/app/onlyoffice/DocumentServer/data";
sudo mkdir -p "/app/onlyoffice/DocumentServer/logs";
sudo mkdir -p "/app/onlyoffice/MailServer/data/certs";
sudo mkdir -p "/app/onlyoffice/MailServer/logs";
openssl genrsa -out onlyoffice.key 2048
openssl req -new -key onlyoffice.key -out onlyoffice.csr
openssl x509 -req -days 365 -in onlyoffice.csr -signkey onlyoffice.key -out onlyoffice.crt
openssl dhparam -out dhparam.pem 2048
mkdir -p /app/onlyoffice/DocumentServer/data/certs
cp onlyoffice.key /app/onlyoffice/DocumentServer/data/certs/
cp onlyoffice.crt /app/onlyoffice/DocumentServer/data/certs/
cp dhparam.pem /app/onlyoffice/DocumentServer/data/certs/
chmod 400 /app/onlyoffice/DocumentServer/data/certs/onlyoffice.key
mkdir -p /app/onlyoffice/CommunityServer/data/certs
cp onlyoffice.key /app/onlyoffice/CommunityServer/data/certs/
cp onlyoffice.crt /app/onlyoffice/CommunityServer/data/certs/
cp dhparam.pem /app/onlyoffice/CommunityServer/data/certs/
chmod 400 /app/onlyoffice/CommunityServer/data/certs/onlyoffice.key
sudo docker network create --driver bridge onlyoffice
echo "[mysqld]
sql_mode = 'NO_ENGINE_SUBSTITUTION'
max_connections = 1000
max_allowed_packet = 1048576000
group_concat_max_len = 2048
log-error = /var/log/mysql/error.log" > /app/onlyoffice/mysql/conf.d/onlyoffice.cnf
sudo chmod 0644 /app/onlyoffice/mysql/conf.d/onlyoffice.cnf
echo "CREATE USER 'onlyoffice_user'@'localhost' IDENTIFIED BY 'onlyoffice_pass';
CREATE USER 'mail_admin'@'localhost' IDENTIFIED BY 'Isadmin123';
GRANT ALL PRIVILEGES ON * . * TO 'root'@'%' IDENTIFIED BY 'my-secret-pw';
GRANT ALL PRIVILEGES ON * . * TO 'onlyoffice_user'@'%' IDENTIFIED BY 'onlyoffice_pass';
GRANT ALL PRIVILEGES ON * . * TO 'mail_admin'@'%' IDENTIFIED BY 'Isadmin123';
FLUSH PRIVILEGES;" > /app/onlyoffice/mysql/initdb/setup.sql
https
sudo docker run --net onlyoffice -i -t -d --restart=always --name onlyoffice-mysql-server -p 3307:3306 \
-v /app/onlyoffice/mysql/conf.d:/etc/mysql/conf.d \
-v /app/onlyoffice/mysql/data:/var/lib/mysql \
-v /app/onlyoffice/mysql/initdb:/docker-entrypoint-initdb.d \
-v /app/onlyoffice/mysql/logs:/var/log/mysql \
-e MYSQL_ROOT_PASSWORD=my-secret-pw \
-e MYSQL_DATABASE=onlyoffice \
mysql:5.7
sudo docker run --net onlyoffice -i -t -d --restart=always --name onlyoffice-document-server -p 59:443 \
-v /app/onlyoffice/DocumentServer/logs:/var/log/onlyoffice \
-v /app/onlyoffice/DocumentServer/data:/var/www/onlyoffice/Data \
-v /app/onlyoffice/DocumentServer/lib:/var/lib/onlyoffice \
-v /app/onlyoffice/DocumentServer/db:/var/lib/postgresql \
onlyoffice/documentserver
sudo docker run --init --net onlyoffice --privileged -i -t -d --restart=always --name onlyoffice-mail-server -p 25:25 -p 143:143 -p 587:587 \
-e MYSQL_SERVER=onlyoffice-mysql-server \
-e MYSQL_SERVER_PORT=3306 \
-e MYSQL_ROOT_USER=root \
-e MYSQL_ROOT_PASSWD=my-secret-pw \
-e MYSQL_SERVER_DB_NAME=onlyoffice_mailserver \
-v /app/onlyoffice/MailServer/data:/var/vmail \
-v /app/onlyoffice/MailServer/data/certs:/etc/pki/tls/mailserver \
-v /app/onlyoffice/MailServer/logs:/var/log \
-h yourdomain.com \
onlyoffice/mailserver
Where ${MAIL_SERVER_IP} is the IP address for Mail Server. You can easily get it using the command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' onlyoffice-mail-server
sudo docker run --net onlyoffice -i -t -d --restart=always --name onlyoffice-community-server -p 60:80 -p 1443:443 -p 5222:5222 \
-e MYSQL_SERVER_ROOT_PASSWORD=my-secret-pw \
-e MYSQL_SERVER_DB_NAME=onlyoffice \
-e MYSQL_SERVER_HOST=onlyoffice-mysql-server \
-e MYSQL_SERVER_USER=onlyoffice_user \
-e MYSQL_SERVER_PASS=onlyoffice_pass \
-e DOCUMENT_SERVER_PORT_80_TCP_ADDR=onlyoffice-document-server \
-e MAIL_SERVER_API_HOST=${MAIL_SERVER_IP} \
-e MAIL_SERVER_DB_HOST=onlyoffice-mysql-server \
-e MAIL_SERVER_DB_NAME=onlyoffice_mailserver \
-e MAIL_SERVER_DB_PORT=3306 \
-e MAIL_SERVER_DB_USER=root \
-e MAIL_SERVER_DB_PASS=my-secret-pw \
-v /app/onlyoffice/CommunityServer/data:/var/www/onlyoffice/Data \
-v /app/onlyoffice/CommunityServer/logs:/var/log/onlyoffice \
onlyoffice/communityserver
http
sudo docker run --net onlyoffice -i -t -d --restart=always --name onlyoffice-mysql-server -p 3307:3306 \
-v /app/onlyoffice/mysql/conf.d:/etc/mysql/conf.d \
-v /app/onlyoffice/mysql/data:/var/lib/mysql \
-v /app/onlyoffice/mysql/initdb:/docker-entrypoint-initdb.d \
-v /app/onlyoffice/mysql/logs:/var/log/mysql \
-e MYSQL_ROOT_PASSWORD=my-secret-pw \
-e MYSQL_DATABASE=onlyoffice \
mysql:5.7
sudo docker run --net onlyoffice -i -t -d --restart=always --name onlyoffice-document-server -p 59:80 \
-v /app/onlyoffice/DocumentServer/logs:/var/log/onlyoffice \
-v /app/onlyoffice/DocumentServer/data:/var/www/onlyoffice/Data \
-v /app/onlyoffice/DocumentServer/lib:/var/lib/onlyoffice \
-v /app/onlyoffice/DocumentServer/db:/var/lib/postgresql \
onlyoffice/documentserver
sudo docker run --init --net onlyoffice --privileged -i -t -d --restart=always --name onlyoffice-mail-server -p 25:25 -p 143:143 -p 587:587 \
-e MYSQL_SERVER=onlyoffice-mysql-server \
-e MYSQL_SERVER_PORT=3306 \
-e MYSQL_ROOT_USER=root \
-e MYSQL_ROOT_PASSWD=my-secret-pw \
-e MYSQL_SERVER_DB_NAME=onlyoffice_mailserver \
-v /app/onlyoffice/MailServer/data:/var/vmail \
-v /app/onlyoffice/MailServer/data/certs:/etc/pki/tls/mailserver \
-v /app/onlyoffice/MailServer/logs:/var/log \
-h yourdomain.com \
onlyoffice/mailserver
Where ${MAIL_SERVER_IP} is the IP address for Mail Server. You can easily get it using the command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' onlyoffice-mail-server
sudo docker run --net onlyoffice -i -t -d --restart=always --name onlyoffice-community-server -p 60:80 -p 1443:443 -p 5222:5222 \
-e MYSQL_SERVER_ROOT_PASSWORD=my-secret-pw \
-e MYSQL_SERVER_DB_NAME=onlyoffice \
-e MYSQL_SERVER_HOST=onlyoffice-mysql-server \
-e MYSQL_SERVER_USER=onlyoffice_user \
-e MYSQL_SERVER_PASS=onlyoffice_pass \
-e DOCUMENT_SERVER_PORT_80_TCP_ADDR=onlyoffice-document-server \
-e MAIL_SERVER_API_HOST=${MAIL_SERVER_IP} \
-e MAIL_SERVER_DB_HOST=onlyoffice-mysql-server \
-e MAIL_SERVER_DB_NAME=onlyoffice_mailserver \
-e MAIL_SERVER_DB_PORT=3306 \
-e MAIL_SERVER_DB_USER=root \
-e MAIL_SERVER_DB_PASS=my-secret-pw \
-v /app/onlyoffice/CommunityServer/data:/var/www/onlyoffice/Data \
-v /app/onlyoffice/CommunityServer/logs:/var/log/onlyoffice \
onlyoffice/communityserver
dai3
sudo docker run --net onlyoffice -i -t -d --restart=always --name onlyoffice-mysql-server -p 3306:3306 \
-v /app/onlyoffice/mysql/conf.d:/etc/mysql/conf.d \
-v /app/onlyoffice/mysql/data:/var/lib/mysql \
-v /app/onlyoffice/mysql/initdb:/docker-entrypoint-initdb.d \
-v /app/onlyoffice/mysql/logs:/var/log/mysql \
-e MYSQL_ROOT_PASSWORD=my-secret-pw \
-e MYSQL_DATABASE=onlyoffice \
mysql:5.7
sudo docker run --net onlyoffice -i -t -d --restart=always --name onlyoffice-document-server -p 11034:80 \
-v /app/onlyoffice/DocumentServer/logs:/var/log/onlyoffice \
-v /app/onlyoffice/DocumentServer/data:/var/www/onlyoffice/Data \
-v /app/onlyoffice/DocumentServer/lib:/var/lib/onlyoffice \
-v /app/onlyoffice/DocumentServer/db:/var/lib/postgresql \
onlyoffice/documentserver
sudo docker run --init --net onlyoffice --privileged -i -t -d --restart=always --name onlyoffice-mail-server \
-e MYSQL_SERVER=onlyoffice-mysql-server \
-e MYSQL_SERVER_PORT=3306 \
-e MYSQL_ROOT_USER=root \
-e MYSQL_ROOT_PASSWD=my-secret-pw \
-e MYSQL_SERVER_DB_NAME=onlyoffice_mailserver \
-v /app/onlyoffice/MailServer/data:/var/vmail \
-v /app/onlyoffice/MailServer/data/certs:/etc/pki/tls/mailserver \
-v /app/onlyoffice/MailServer/logs:/var/log \
-h yourdomain.com \
onlyoffice/mailserver
Where ${MAIL_SERVER_IP} is the IP address for Mail Server. You can easily get it using the command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' onlyoffice-mail-server
sudo docker run --net onlyoffice -i -t -d --restart=always --name onlyoffice-community-server -p 11035:80 \
-e MYSQL_SERVER_ROOT_PASSWORD=my-secret-pw \
-e MYSQL_SERVER_DB_NAME=onlyoffice \
-e MYSQL_SERVER_HOST=onlyoffice-mysql-server \
-e MYSQL_SERVER_USER=onlyoffice_user \
-e MYSQL_SERVER_PASS=onlyoffice_pass \
-e DOCUMENT_SERVER_PORT_80_TCP_ADDR=onlyoffice-document-server \
-e MAIL_SERVER_API_HOST=172.19.0.4 \
-e MAIL_SERVER_DB_HOST=onlyoffice-mysql-server \
-e MAIL_SERVER_DB_NAME=onlyoffice_mailserver \
-e MAIL_SERVER_DB_PORT=3306 \
-e MAIL_SERVER_DB_USER=root \
-e MAIL_SERVER_DB_PASS=my-secret-pw \
-v /app/onlyoffice/CommunityServer/data:/var/www/onlyoffice/Data \
-v /app/onlyoffice/CommunityServer/logs:/var/log/onlyoffice \
onlyoffice/communityserver
sqlserver
https://hub.docker.com/r/microsoft/mssql-server-linux
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Wb19831010!' -p 1433:1433 -d -v /alidata/dockerdata/mssql/data:/var/opt/mssql -v /alidata/dockerdata/mssql/sqlfile:/sqlfile --name mssql-server-linux microsoft/mssql-server-linux
docker exec -it mssql-server-linux /bin/bash
CREATE DATABASE tax_basic
GO
CREATE DATABASE tax_core
GO
CREATE DATABASE tax_doc
GO
CREATE DATABASE tax_ptc
GO
docker exec -it mssql-server-linux /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Dtt@123456789 -d tax_basic -i /sqlfile/tax-basic.sql
docker exec -it mssql-server-linux /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Dtt@123456789 -d tax_core -i /sqlfile/tax-core.sql
docker exec -it mssql-server-linux /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Dtt@123456789 -d tax_doc -i /sqlfile/tax-doc.sql
docker exec -it mssql-server-linux /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Dtt@123456789 -d tax_ptc -i /sqlfile/tax-ptc.sql
Parameter reference: -S server address, -U username, -P password, -d database name, -i script file path
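The four imports above differ only in the database name, from which the script path can be derived; a dry-run loop that prints each command (swap echo for the real invocation once it looks right):

```shell
# Dry run: print one sqlcmd import per database (tax_basic -> /sqlfile/tax-basic.sql).
for db in tax_basic tax_core tax_doc tax_ptc; do
  script="/sqlfile/$(echo "$db" | tr _ -).sql"
  echo docker exec -it mssql-server-linux /opt/mssql-tools/bin/sqlcmd \
    -S localhost -U sa -P 'Dtt@123456789' -d "$db" -i "$script"
done
```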
sqlcmd -S . -U sa -P Dtt@123456789 -d tax_basic -i /sqlfile/tax-basic.sql
EXECUTE sp_password NULL,'Dtt@123456789','sa'
onlyoffice troubleshooting
Garbled Chinese characters
Copy the Windows fonts into Linux, then install Chinese fonts on the server:
apt-get install fonts-arphic-ukai
Then run:
documentserver-generate-allfonts.sh
First install the Chinese fonts, then:
cd /usr/share/fonts
tar cv * | docker exec -i onlyoffice-document-server tar x -C /usr/share/fonts/
docker exec onlyoffice-document-server documentserver-generate-allfonts.sh
Then clear cache browser and logon again.
apt-get install fonts-arphic-ukai
cd /usr/share/fonts/
tar cv * | docker exec -i XXX tar x -C /usr/share/fonts/
docker exec XXX /var/www/onlyoffice/documentserver/Tools/GenerateAllFonts.sh
where XXX is the container ID
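The `tar c | tar x` pipe used above is a generic way to copy a whole tree (into the container via `docker exec -i`); the same idiom between two local directories, as a sketch:

```shell
# Copy a directory tree with a tar pipe, the same trick used for the fonts above.
src=$(mktemp -d); dst=$(mktemp -d)
echo demo > "$src/demo.ttf"
(cd "$src" && tar c .) | tar x -C "$dst"
cat "$dst/demo.ttf"   # prints demo
```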
Files cannot be opened
The HTTPS address needs to be configured, as follows
File: Onlyoffice file can't open.PNG
Configure https://onlyoffice.ling2.win, configure /etc/hosts on the community server, then set the address in the location above
ceph
cachecloud
https://hub.docker.com/r/wolferhua/cachecloud
https://github.com/sohutv/cachecloud/wiki
Download the environment for the corresponding release version (e.g. 1.2) and extract it under /opt; the extracted files are as follows
tar xzvf cachecloud-bin-1.2.tar.gz
- cachecloud-open-web-1.0-SNAPSHOT.war: the cachecloud war package
- cachecloud.sql: database schema; the default database name is cache_cloud and can be changed
- jdbc.properties: JDBC database configuration; configure it yourself
- start.sh: startup script
- stop.sh: stop script
- logs: directory for log files
- The default port is 8585; change server.port in start.sh to reset it
- Access: http://127.0.0.1:9999/manage/login (9999 is the tomcat port; see web.port in online.properties and local.properties from section 3)
- If the page loads, log in with username admin / password admin; you will be redirected to the application list:
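Changing the listen port means editing start.sh before launching. A sketch using sed — note the start.sh content below is fabricated for illustration (the real script ships in the release tarball and may pass the port differently):

```shell
#!/bin/bash
# Rewrite server.port in start.sh with sed, then verify the change.
set -e
work=$(mktemp -d)
# Fabricated stand-in for the real start.sh from the release tarball:
echo 'java -jar cachecloud-open-web-1.0-SNAPSHOT.war --server.port=8585 &' > "$work/start.sh"
sed -i 's/--server\.port=[0-9]*/--server.port=9999/' "$work/start.sh"
grep -- '--server.port=9999' "$work/start.sh"
```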
Install
docker run \
--name l2tp-ipsec-vpn-server \
--env-file ./vpn.env \
-p 500:500/udp \
-p 4500:4500/udp \
-v /lib/modules:/lib/modules:ro \
-d --privileged \
fcojean/l2tp-ipsec-vpn-server
vpn
https://hub.docker.com/r/fcojean/l2tp-ipsec-vpn-server/
https://github.com/hwdsl2/setup-ipsec-vpn/blob/master/docs/clients-xauth.md
https://github.com/hwdsl2/setup-ipsec-vpn/blob/master/docs/clients.md#linux
docker run \
--name l2tp-ipsec-vpn-server \
--env-file ./vpn.env \
-p 500:500/udp \
-p 4500:4500/udp \
-v /lib/modules:/lib/modules:ro \
-d --privileged \
fcojean/l2tp-ipsec-vpn-server
tuleap
https://hub.docker.com/r/enalean/tuleap-aio/
https://blog.csdn.net/Aria_Miazzy/article/details/85415606
mkdir -p /alidata/dockerdata/tuleap
chmod -R 777 /alidata/dockerdata/tuleap
docker run --detach --name tuleap \
-p 56:80 -p 55:443 -p 54:22 \
--env DEFAULT_DOMAIN=tuleap.ling2.cn \
--env ORG_NAME="Ling2" \
-v /alidata/dockerdata/tuleap/data:/data \
jariasl/tuleap
docker volume create --name tuleap-data
docker run -ti -e VIRTUAL_HOST=tuleap.ling2.cn -p 56:80 -p 55:443 -p 54:22 -d --name tuleap -v tuleap-data:/data enalean/tuleap-aio
docker run -ti -p 1031:80 -p 1032:443 -p 1033:22 -d --name tuleap -v tuleap-data:/data enalean/tuleap-aio
docker exec tuleap /bin/bash -c "cat /root/.tuleap_passwd"
server.ling2.cn
Mysql user (root) : f3rjYeFVVhSW4VE
Codendiadm unix & DB (codendiadm): JkLR8H9OvZStils
Libnss-mysql DB user (dbauthuser): p7CBf6tabXg2NXP
Site admin password (admin): OVgZ4VNE50nXvBe
sudo docker volume rm tuleap-data
Mysql user (root) : RKspvjd1Lc9wbVr
Codendiadm unix & DB (codendiadm): ZYkYXkuXirHjR5t
Mailman siteadmin: NImpIAihHnfv5x3
Libnss-mysql DB user (dbauthuser): KK8Jdu0BEjetogX
Site admin password (admin): 5EAnAASq1qr3433
minio
https://minio.io/
https://docs.minio.io/
https://blog.csdn.net/dingjs520/article/details/79111050
To create a Minio container with persistent storage, map a local directory on the host OS to the config directory ~/.minio and export the /data directory. To do so, run the following command
docker run -p 9000:9000 --name minio -v /mnt/data:/data -v /mnt/config:/root/.minio minio/minio server /data
To override Minio's auto-generated keys, pass the access and secret keys as environment variables. Minio accepts regular strings as access and secret keys.
docker run -p 9000:9000 --name minio -d \
-e "MINIO_ACCESS_KEY=ling" \
-e "MINIO_SECRET_KEY=123456789" \
-v /alidata/dockerdata/miniocloud/data:/data \
-v /alidata/dockerdata/miniocloud/config:/root/.minio \
minio/minio server /data
Mapping the port as 53:9000 does not work; the reason is unknown.
lychee
The problems were ultimately caused by insufficient disk space
https://hub.docker.com/r/linuxserver/lychee
docker run -it -d \
--name=fgzx99 \
-e PUID=1000 \
-e PGID=1000 \
-e TZ=Asia/Shanghai \
-p 99:80 \
--link mysql:mysql \
-v /alidata/dockerdata/fgzx99/config:/config \
-v /alidata/dockerdata/fgzx99/pictures:/pictures \
--restart unless-stopped \
linuxserver/lychee
https://hub.docker.com/r/kdelfour/lychee-docker/
http://blog.liyuans.com/archives/Lychee.html <s>
mkdir -p /alidata/dockerdata/fgzx99/uploads
mkdir -p /alidata/dockerdata/fgzx99/data
mkdir -p /alidata/dockerdata/fgzx99/mysql
chmod -R 777 /alidata/dockerdata/fgzx99/uploads
chmod -R 777 /alidata/dockerdata/fgzx99/data
chmod -R 777 /alidata/dockerdata/fgzx99/mysql
docker run -it -d -p 99:80 --name fgzx99 -v /alidata/dockerdata/fgzx99/uploads:/uploads/ -v /alidata/dockerdata/fgzx99/data/:/data/ -v /alidata/dockerdata/fgzx99/mysql/:/mysql/ kdelfour/lychee-docker
docker exec -it fgzx99 bash
chmod -R 750 /uploads/ /data/
docker run -it -d -p 99:80 --name fgzx99 kdelfour/lychee-docker
docker run -it -d -p 99:80 --name fgzx99 -v /alidata/dockerdata/fgzx99/uploads:/uploads/ kdelfour/lychee-docker
url: localhost
database name: lychee
user name: lychee
user password: lychee
</s>
https://gitee.com/ling-cloud/Lychee.git
1. Requirements
Everything you need is a web-server with PHP 5.5 or later and a MySQL-Database.
The following PHP extensions must be activated:
session, exif, mbstring, gd, mysqli, json, zip
To use Lychee without restrictions, we recommend increasing the values of the following properties in php.ini:
max_execution_time = 200
post_max_size = 100M
upload_max_size = 100M
upload_max_filesize = 20M
memory_limit = 256M
You might also take a look at Issue #106 if you are using nginx or in the FAQ if you are using CGI or FastCGI.
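The recommended values can be checked mechanically. A small sketch that parses a php.ini — the ini fragment below is a sample; in practice point `ini` at the real file (e.g. /etc/php5/fpm/php.ini):

```shell
#!/bin/bash
# Look up settings in a php.ini and compare them to the recommended values.
ini=$(mktemp)
cat > "$ini" <<'EOF'
max_execution_time = 200
post_max_size = 100M
upload_max_filesize = 20M
memory_limit = 256M
EOF
# get KEY -> prints the value with surrounding spaces stripped
get() { awk -F'=' -v k="$1" '$1 ~ k { gsub(/ /, "", $2); print $2 }' "$ini"; }
test "$(get max_execution_time)" = 200 && echo "max_execution_time ok"
```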
2. Download
The easiest way to download Lychee is with git:
git clone https://github.com/electerious/Lychee.git
You can also use the direct download.
3. Permissions
Change the permissions of uploads/, data/ and all their subfolders. Sufficient read/write privileges are required.
chmod -R 750 uploads/ data/
cd /alidata/dockerdata/fgzx99
git clone https://gitee.com/ling-cloud/Lychee.git
Start docker
mkdir -p /alidata/dockerdata/fgzx99/uploads
mkdir -p /alidata/dockerdata/fgzx99/data
mkdir -p /alidata/dockerdata/fgzx99/mysql
mkdir -p /alidata/temp/fgzx99/tmp
chmod -R 777 /alidata/dockerdata/fgzx99/uploads
chmod -R 777 /alidata/dockerdata/fgzx99/data
chmod -R 777 /alidata/dockerdata/fgzx99/mysql
chmod -R 777 /alidata/temp/fgzx99/tmp
docker run -it -d -p 99:80 --name fgzx99 kdelfour/lychee-docker
docker cp fgzx99:/etc/php5/fpm/php.ini /alidata/dockerdata/fgzx99/php/php.ini
docker rm -f fgzx99
docker run --name fgzx99 -it -p 99:80 \
-v /alidata/dockerdata/fgzx99/Lychee:/var/www/html \
-v /alidata/dockerdata/fgzx99/uploads:/uploads/ \
-v /alidata/dockerdata/fgzx99/data:/data/ \
-v /alidata/dockerdata/fgzx99/php/php.ini:/usr/local/etc/php/php.ini \
-v /alidata/temp/fgzx99/tmp:/tmp \
--link mysql:mysql -d docker.ling2.cn/phpserver
neo4j
https://hub.docker.com/_/neo4j/
https://blog.csdn.net/GraphWay/article/details/107320038
neo4j 4.2.9
rm -rf /alidata/dockerdata/neo4j/neo4j429
docker rm -f neo4j429
docker rm -f neo4j429_copy
docker run -it -d --name neo4j429 \
--publish=7475:7474 \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j:4.2.9
docker logs -f neo4j429
docker cp neo4j429:/var/lib/neo4j /alidata/dockerdata/neo4j/neo4j429
docker rm -f neo4j429
rm -rf /alidata/dockerdata/neo4j/neo4j429/logs
rm -rf /alidata/dockerdata/neo4j/neo4j429/data
docker run -it -d --name neo4j429 \
--volume=/alidata/dockerdata/neo4j/neo4j429/conf:/var/lib/neo4j/conf \
--volume=/alidata/dockerdata/neo4j/neo4j429/data:/var/lib/neo4j/data \
--volume=/alidata/dockerdata/neo4j/neo4j429/import:/var/lib/neo4j/import \
--volume=/alidata/dockerdata/neo4j/neo4j429/metrics:/var/lib/neo4j/metrics \
--volume=/alidata/dockerdata/neo4j/neo4j429/plugins:/var/lib/neo4j/plugins \
--volume=/alidata/dockerdata/neo4j/neo4j429/ssl:/var/lib/neo4j/ssl \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j:4.2.9
docker logs -f neo4j429
docker stop neo4j429
vi /alidata/dockerdata/neo4j/neo4j429/conf/neo4j.conf
docker run -it -d --name neo4j429_copy \
--volume=/alidata/dockerdata/neo4j/neo4j429/conf:/var/lib/neo4j/conf \
--volume=/alidata/dockerdata/neo4j/neo4j429/data:/var/lib/neo4j/data \
--volume=/alidata/dockerdata/neo4j/data-sincere_2021_11_07_backup/import:/var/lib/neo4j/import \
--volume=/alidata/dockerdata/neo4j/neo4j429/metrics:/var/lib/neo4j/metrics \
--volume=/alidata/dockerdata/neo4j/neo4j429/plugins:/var/lib/neo4j/plugins \
--volume=/alidata/dockerdata/neo4j/neo4j429/ssl:/var/lib/neo4j/ssl \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j:4.2.9 /bin/bash
docker exec -it neo4j429_copy bash
cd /var/lib/neo4j/bin
neo4j-admin load --from=/var/lib/neo4j/import/graph409.db.dump --database=graph.db --force
neo4j-admin dump --database=graph.db --to=/var/lib/neo4j/import/graph419.db.dump
neo4j 4.1.9
rm -rf /alidata/dockerdata/neo4j/neo4j419
docker rm -f neo4j419
docker rm -f neo4j419_copy
docker run -it -d --name neo4j419 \
--publish=7475:7474 \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j:4.1.9
docker logs -f neo4j419
docker cp neo4j419:/var/lib/neo4j /alidata/dockerdata/neo4j/neo4j419
docker rm -f neo4j419
rm -rf /alidata/dockerdata/neo4j/neo4j419/logs
rm -rf /alidata/dockerdata/neo4j/neo4j419/data
docker run -it -d --restart=always --name neo4j419 \
--publish=7474:7474 --publish=7687:7687 \
--volume=/alidata/dockerdata/neo4j/neo4j419/conf:/var/lib/neo4j/conf \
--volume=/alidata/dockerdata/neo4j/neo4j419/data:/data \
--volume=/alidata/dockerdata/neo4j/neo4j419/import:/var/lib/neo4j/import \
--volume=/alidata/dockerdata/neo4j/neo4j419/metrics:/var/lib/neo4j/metrics \
--volume=/alidata/dockerdata/neo4j/neo4j419/plugins:/var/lib/neo4j/plugins \
--volume=/alidata/dockerdata/neo4j/neo4j419/ssl:/var/lib/neo4j/ssl \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j:4.1.9
docker logs -f neo4j419
docker stop neo4j419
vi /alidata/dockerdata/neo4j/neo4j419/conf/neo4j.conf
docker run -it -d --name neo4j419_copy \
--volume=/alidata/dockerdata/neo4j/neo4j419/conf:/var/lib/neo4j/conf \
--volume=/alidata/dockerdata/neo4j/neo4j419/data:/var/lib/neo4j/data \
--volume=/alidata/dockerdata/neo4j/neo4j419_barkup/import:/var/lib/neo4j/import \
--volume=/alidata/dockerdata/neo4j/neo4j419/metrics:/var/lib/neo4j/metrics \
--volume=/alidata/dockerdata/neo4j/neo4j419/plugins:/var/lib/neo4j/plugins \
--volume=/alidata/dockerdata/neo4j/neo4j419/ssl:/var/lib/neo4j/ssl \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j:4.1.9 /bin/bash
docker exec -it neo4j419_copy bash
cd /var/lib/neo4j/bin
neo4j-admin load --from=/var/lib/neo4j/import/graph419.db.dump --database=zhongyi --force
neo4j-admin load --from=/var/lib/neo4j/import/graph409.db.dump --database=zhongyi --force
neo4j-admin dump --database=zhongyi --to=/var/lib/neo4j/import/graph419-zhongyi-2022-03-24.db.dump
neo4j-future
docker run -it -d --name neo4j_future \
--publish=7475:7474 \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j:4.1.9
docker logs -f neo4j_future
docker cp neo4j_future:/var/lib/neo4j /alidata/dockerdata/neo4j/neo4j_future
docker rm -f neo4j_future
rm -rf /alidata/dockerdata/neo4j/neo4j_future/logs
rm -rf /alidata/dockerdata/neo4j/neo4j_future/data
docker run -it -d --name neo4j_future \
--publish=7475:7474 --publish=7688:7687 \
--volume=/alidata/dockerdata/neo4j/neo4j_future/conf:/var/lib/neo4j/conf \
--volume=/alidata/dockerdata/neo4j/neo4j_future/data:/data \
--volume=/alidata/dockerdata/neo4j/neo4j_future/import:/var/lib/neo4j/import \
--volume=/alidata/dockerdata/neo4j/neo4j_future/metrics:/var/lib/neo4j/metrics \
--volume=/alidata/dockerdata/neo4j/neo4j_future/plugins:/var/lib/neo4j/plugins \
--volume=/alidata/dockerdata/neo4j/neo4j_future/ssl:/var/lib/neo4j/ssl \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j:4.1.9
docker logs -f neo4j_future
docker stop neo4j_future
vi /alidata/dockerdata/neo4j/neo4j_future/conf/neo4j.conf
dbms.default_database=future
cp -r /alidata/dockerdata/neo4j/neo4j419/data/databases/future /alidata/dockerdata/neo4j/neo4j_future/data/databases
docker start neo4j_future
neo4j 4.3.6
docker run -it -d --name neo4j_sincere \
--publish=7474:7474 --publish=7687:7687 \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j
docker cp neo4j_sincere:/var/lib/neo4j /alidata/dockerdata/neo4j4/data-sincere
cd /alidata/dockerdata/neo4j4/data-sincere/neo4j
mv * ../
docker run -it -d --name neo4j_sincere \
--publish=7474:7474 --publish=7687:7687 \
--volume=/alidata/dockerdata/neo4j4/data-sincere/conf:/var/lib/neo4j/conf \
--volume=/alidata/dockerdata/neo4j4/data-sincere/data:/data \
--volume=/alidata/dockerdata/neo4j4/data-sincere/import:/var/lib/neo4j/import \
--volume=/alidata/dockerdata/neo4j4/data-sincere/metrics:/var/lib/neo4j/metrics \
--volume=/alidata/dockerdata/neo4j4/data-sincere/plugins:/var/lib/neo4j/plugins \
--volume=/alidata/dockerdata/neo4j4/data-sincere/ssl:/var/lib/neo4j/ssl \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j
neo4j 3.5.13
neo4j stop
neo4j-admin dump --database=graph.db --to=/var/lib/neo4j/import/barkup20210729.dump
neo4j-admin load --from=/home/robot/Neoj_data/graph.db.dump --database=graph.db --force  # data import
neo4j start
docker run -it -d --name neo4j_sincere \
--publish=7474:7474 --publish=7687:7687 \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j
docker cp neo4j_sincere:/var/lib/neo4j /alidata/dockerdata/neo4j/data-sincere
docker run -it -d --name neo4j_sincere \
--publish=7474:7474 --publish=7687:7687 \
--volume=/alidata/dockerdata/neo4j/data-sincere/conf:/var/lib/neo4j/conf \
--volume=/alidata/dockerdata/neo4j/data-sincere/data:/var/lib/neo4j/data \
--volume=/alidata/dockerdata/neo4j/data-sincere/import:/var/lib/neo4j/import \
--volume=/alidata/dockerdata/neo4j/data-sincere/metrics:/var/lib/neo4j/metrics \
--volume=/alidata/dockerdata/neo4j/data-sincere/plugins:/var/lib/neo4j/plugins \
--volume=/alidata/dockerdata/neo4j/data-sincere/ssl:/var/lib/neo4j/ssl \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j
Set --env NEO4J_AUTH=neo4j/<password> in your run directive. Alternatively, disable authentication by specifying --env NEO4J_AUTH=none instead.
/conf /data /import /logs /metrics /plugins /ssl
docker run -it -d --name neo4j_build \
--publish=7475:7474 --publish=7688:7687 \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j:3.5.13
docker cp neo4j_build:/var/lib/neo4j /alidata/dockerdata/neo4j/data-build
cp -r /alidata/dockerdata/neo4j/data-sincere /alidata/dockerdata/neo4j/data-build
docker run -it -d --name neo4j_build \
--publish=7475:7474 --publish=7688:7687 \
--volume=/alidata/dockerdata/neo4j/data-build/conf:/var/lib/neo4j/conf \
--volume=/alidata/dockerdata/neo4j/data-build/data:/var/lib/neo4j/data \
--volume=/alidata/dockerdata/neo4j/data-build/import:/var/lib/neo4j/import \
--volume=/alidata/dockerdata/neo4j/data-build/metrics:/var/lib/neo4j/metrics \
--volume=/alidata/dockerdata/neo4j/data-build/plugins:/var/lib/neo4j/plugins \
--volume=/alidata/dockerdata/neo4j/data-build/ssl:/var/lib/neo4j/ssl \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j:3.5.13
docker 4.1.9 backup and upgrade
rm -rf /alidata/dockerdata/neo4j/neo4j419
docker rm -f neo4j419
docker rm -f neo4j419_copy
docker run -it -d --name neo4j419 \
--publish=7475:7474 \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j:4.1.9
docker logs -f neo4j419
docker cp neo4j419:/var/lib/neo4j /alidata/dockerdata/neo4j/neo4j419
docker rm -f neo4j419
rm -rf /alidata/dockerdata/neo4j/neo4j419/logs
rm -rf /alidata/dockerdata/neo4j/neo4j419/data
docker run -it -d --name neo4j419 \
--volume=/alidata/dockerdata/neo4j/neo4j419/conf:/var/lib/neo4j/conf \
--volume=/alidata/dockerdata/neo4j/neo4j419/data:/var/lib/neo4j/data \
--volume=/alidata/dockerdata/neo4j/neo4j419/import:/var/lib/neo4j/import \
--volume=/alidata/dockerdata/neo4j/neo4j419/metrics:/var/lib/neo4j/metrics \
--volume=/alidata/dockerdata/neo4j/neo4j419/plugins:/var/lib/neo4j/plugins \
--volume=/alidata/dockerdata/neo4j/neo4j419/ssl:/var/lib/neo4j/ssl \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j:4.1.9
docker logs -f neo4j419
docker stop neo4j419
vi /alidata/dockerdata/neo4j/neo4j419/conf/neo4j.conf
docker run -it -d --name neo4j419_copy \
--volume=/alidata/dockerdata/neo4j/neo4j419/conf:/var/lib/neo4j/conf \
--volume=/alidata/dockerdata/neo4j/neo4j419/data:/var/lib/neo4j/data \
--volume=/alidata/dockerdata/neo4j/data-sincere_2021_11_07_backup/import:/var/lib/neo4j/import \
--volume=/alidata/dockerdata/neo4j/neo4j419/metrics:/var/lib/neo4j/metrics \
--volume=/alidata/dockerdata/neo4j/neo4j419/plugins:/var/lib/neo4j/plugins \
--volume=/alidata/dockerdata/neo4j/neo4j419/ssl:/var/lib/neo4j/ssl \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j:4.1.9 /bin/bash
docker exec -it neo4j419_copy bash
cd /var/lib/neo4j/bin
neo4j-admin load --from=/var/lib/neo4j/import/graph409.db.dump --database=graph.db --force
neo4j-admin dump --database=graph.db --to=/var/lib/neo4j/import/graph419.db.dump
Docker backup and upgrade
https://neo4j.com/docs/upgrade-migration-guide/current/supported-paths/
https://blog.csdn.net/wang0112233/article/details/118682953
docker run -it -d --name neo4j_sincere_copy \
--publish=7444:7474 --publish=7647:7687 \
--volume=/alidata/dockerdata/neo4j/data-sincere/conf:/var/lib/neo4j/conf \
--volume=/alidata/dockerdata/neo4j/data-sincere/data:/var/lib/neo4j/data \
--volume=/alidata/dockerdata/neo4j/data-sincere/import:/var/lib/neo4j/import \
--volume=/alidata/dockerdata/neo4j/data-sincere/metrics:/var/lib/neo4j/metrics \
--volume=/alidata/dockerdata/neo4j/data-sincere/plugins:/var/lib/neo4j/plugins \
--volume=/alidata/dockerdata/neo4j/data-sincere/ssl:/var/lib/neo4j/ssl \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j:4.0 /bin/bash
dbms.allow_upgrade=true
docker stop neo4j_sincere
docker exec -it neo4j_sincere_copy /bin/bash
rm -rf /alidata/dockerdata/neo4j/data-sincere
cp data-sincere-3.5.12/import/graph.db.dump data-sincere/import/
/var/lib/neo4j/bin/neo4j-admin backup --backup-dir=/var/lib/neo4j/import --name=graphdbbackup
/var/lib/neo4j/bin/neo4j-admin restore --from=/var/lib/neo4j/import/graphdbbackup --database=graphdbbackup
cd /var/lib/neo4j/bin
neo4j-admin dump --database=graph.db --to=/var/lib/neo4j/import/graph.db.dump
neo4j-admin load --from=/var/lib/neo4j/import/graph.db.dump --database=graph.db --force
cd /alidata/soft
wget https://www.rarlab.com/rar/rarlinux-x64-6.0.2.tar.gz --no-check-certificate
tar zxvf rarlinux-x64-6.0.2.tar.gz
cd rar
make
make install
rar a graph.db.dump.rar graph.db.dump
rar x graph.db.dump.rar
tar -zcvf graph.db.dump.tar.gz graph.db.dump
tar -zxvf graph.db.dump.tar.gz
Routine backup
docker stop neo4j419
docker run -it -d --name neo4j419_copy \
--volume=/alidata/dockerdata/neo4j/neo4j419/conf:/var/lib/neo4j/conf \
--volume=/alidata/dockerdata/neo4j/neo4j419/data:/var/lib/neo4j/data \
--volume=/alidata/dockerdata/neo4j/data-sincere_2021_11_07_backup/import:/var/lib/neo4j/import \
--volume=/alidata/dockerdata/neo4j/neo4j419/metrics:/var/lib/neo4j/metrics \
--volume=/alidata/dockerdata/neo4j/neo4j419/plugins:/var/lib/neo4j/plugins \
--volume=/alidata/dockerdata/neo4j/neo4j419/ssl:/var/lib/neo4j/ssl \
--env NEO4J_AUTH=neo4j/Wb19831010! \
neo4j:4.1.9 /bin/bash
docker exec -it neo4j419_copy bash
cd /var/lib/neo4j/bin
neo4j-admin load --from=/var/lib/neo4j/import/zhongyi-20220602.db.dump --database=zhongyi --force
neo4j-admin dump --database=zhongyi --to=/var/lib/neo4j/import/zhongyi-20220602.db.dump
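Dated dump names like zhongyi-20220602.db.dump can be generated instead of typed by hand. A sketch — the dump command is printed rather than executed, since executing it needs the neo4j419_copy container to be running:

```shell
#!/bin/bash
# Build a dated dump filename (e.g. zhongyi-20220602.db.dump) and print the
# corresponding neo4j-admin dump invocation.
db=zhongyi
dump_name() { printf '%s-%s.db.dump' "$1" "$(date +%Y%m%d)"; }
echo "docker exec neo4j419_copy bin/neo4j-admin dump --database=$db --to=/var/lib/neo4j/import/$(dump_name "$db")"
```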
To perform recovery please start database and perform clean shutdown
Start a neo4j of the same version, e.g. neo4j:3.5.13
Shut down the database, open a terminal, and run bin/neo4j console; after startup, hit Ctrl+C to shut the database down cleanly, then run the respective neo4j-admin dump
aliyun-ddns-cli
https://hub.docker.com/r/chenhw2/aliyun-ddns-cli/
sudo docker run -d \
--name aliyun-ddns-cli \
-e "AKID=LTAI5tAggHUfg27oLnNtbevT" \
-e "AKSCT=m7u5AaNcoJY8s6cex7WO7c725zDO5m" \
-e "DOMAIN=qunhui.ling2.cn" \
-e "REDO=600" \
chenhw2/aliyun-ddns-cli
sudo docker run -d \
--name aliyun-ddns-cli-city \
-e "AKID=LTAI5tAggHUfg27oLnNtbevT" \
-e "AKSCT=m7u5AaNcoJY8s6cex7WO7c725zDO5m" \
-e "DOMAIN=qunhui.ling.city" \
-e "REDO=600" \
chenhw2/aliyun-ddns-cli
https://registry.hub.docker.com/r/sanjusss/aliyun-ddns/
docker run -d --restart=always --net=host \
--name aliyun-ddns \
-e "AKID=LTAI5tAggHUfg27oLnNtbevT" \
-e "AKSCT=m7u5AaNcoJY8s6cex7WO7c725zDO5m" \
-e "DOMAIN=laojia.ling2.cn" \
-e "REDO=300" \
-e "TTL=600" \
-e "TIMEZONE=8.0" \
-e "CNIPV4=false" \
-e "TYPE=A,AAAA" \
-e "CHECKLOCAL=host" \
sanjusss/aliyun-ddns
sudo docker run -d --restart=always \
--name aliyun-ddns-lingserver \
-e "AKID=LTAI5tAggHUfg27oLnNtbevT" \
-e "AKSCT=m7u5AaNcoJY8s6cex7WO7c725zDO5m" \
-e "DOMAIN=ling.ling2.cn" \
-e "REDO=600" \
chenhw2/aliyun-ddns-cli
nifi
rm -rf /alidata/dockerdata/nifi
docker run --name nifi -e NIFI_BASE_DIR=/opt/nifi -p 54:8080 -d apache/nifi:latest
docker cp nifi:/opt/nifi /alidata/dockerdata/nifi
docker rm -f nifi
chmod -R 777 /alidata/dockerdata/nifi
docker run --name nifi \
-e NIFI_BASE_DIR=/opt/nifi \
-v /alidata/dockerdata/nifi:/opt/nifi \
-v /alidata/dockerdata/nifi/driver:/driver \
-p 8443:8443 \
-p 54:8080 \
-d \
apache/nifi:latest
docker run --name nifi \
-e NIFI_BASE_DIR=/opt/nifi \
-v /alidata/dockerdata/nifi/driver:/driver \
-p 8443:8443 \
-p 54:8080 \
-d \
apache/nifi:latest
https://hub.docker.com/r/apache/nifi
docker run --rm --entrypoint /bin/bash apache/nifi:1.12.1 -c 'env | grep NIFI'
NIFI_HOME=/opt/nifi/nifi-current
NIFI_LOG_DIR=/opt/nifi/nifi-current/logs
NIFI_TOOLKIT_HOME=/opt/nifi/nifi-toolkit-current
NIFI_PID_DIR=/opt/nifi/nifi-current/run
NIFI_BASE_DIR=/opt/nifi
docker run --name nifi \
-e NIFI_BASE_DIR=/opt/nifi \
-v /alidata/dockerdata/nifi/driver:/driver \
-p 54:8080 \
-d \
apache/nifi:latest
docker run --name nifi \
-v /alidata/dockerdata/nifi/driver:/driver \
-d -p 54:8443 \
-e SINGLE_USER_CREDENTIALS_USERNAME=admin \
-e NIFI_WEB_HTTPS_PORT='8443' \
-e NIFI_BASE_DIR=/opt/nifi \
-e SINGLE_USER_CREDENTIALS_PASSWORD=Wb19831010! \
apache/nifi:latest
docker run --name nifi \
-v /alidata/dockerdata/nifi/driver:/driver \
-d -p 54:8080 -p 8443:8443 \
-e SINGLE_USER_CREDENTIALS_USERNAME=admin \
-e NIFI_BASE_DIR=/opt/nifi \
-e NIFI_WEB_HTTPS_PORT='8443' \
-e SINGLE_USER_CREDENTIALS_PASSWORD=Wb19831010! \
apache/nifi:latest
docker exec -it nifi bash
sh nifi.sh set-single-user-credentials admin xxxxxxxx
sentinel
docker run --name sentinel -d -p 8858:8858 bladex/sentinel-dashboard
Mapping -p 77:8858 does not work for unknown reasons; only an identical host:container port mapping works
nacos
https://nacos.io/zh-cn/docs/quick-start-docker.html
https://hub.docker.com/r/nacos/nacos-server
https://github.com/alibaba/nacos/blob/master/config/src/main/resources/META-INF/nacos-db.sql
docker run -d -it -e JVM_XMS=256m -e JVM_XMX=256m -e MODE=standalone -e SPRING_DATASOURCE_PLATFORM=mysql -e MYSQL_SERVICE_HOST=<database internal IP> -e MYSQL_SERVICE_PORT=3306 -e MYSQL_SERVICE_USER=<database username> -e MYSQL_SERVICE_PASSWORD=<database password> -e MYSQL_SERVICE_DB_NAME=<database name> -p 8848:8848 -p 9848:9848 --restart=always --name nacos <image ID>
docker run -d -it -e MODE=standalone -e SPRING_DATASOURCE_PLATFORM=mysql -e MYSQL_SERVICE_HOST=rm-8vbe87b5295dz08zhxo.mysql.zhangbei.rds.aliyuncs.com -e MYSQL_SERVICE_PORT=3306 -e MYSQL_SERVICE_USER=lingcloud -e MYSQL_SERVICE_PASSWORD=Wb19831010! -e MYSQL_SERVICE_DB_NAME=nacos2 -p 8848:8848 -p 9848:9848 -p 9849:9849 --restart=always --name nacos2 nacos/nacos-server:v2.1.0
docker run -d -it -e MODE=standalone -e SPRING_DATASOURCE_PLATFORM=mysql -e MYSQL_SERVICE_HOST=192.168.31.13 -e MYSQL_SERVICE_PORT=3307 -e MYSQL_SERVICE_USER=lingcloud -e MYSQL_SERVICE_PASSWORD=Wb19831010! -e MYSQL_SERVICE_DB_NAME=nacos2 -p 8848:8848 -p 9848:9848 -p 9849:9849 --restart=always --name nacos2 nacos/nacos-server:v2.1.0
Create the database that stores the nacos configuration
create database nacos_config
Initialize the database by importing the init file nacos-db.sql
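The two database steps can be scripted. A hedged sketch that only prints the mysql commands (`<db-host>` is a placeholder; nacos-db.sql is assumed to have been downloaded from the schema link above):

```shell
#!/bin/bash
# Print (not execute) the MySQL commands that create and initialize the
# nacos config database.
print_init_cmds() {
  local db=nacos_config
  echo "mysql -h <db-host> -u root -p -e \"CREATE DATABASE $db DEFAULT CHARACTER SET utf8mb4;\""
  echo "mysql -h <db-host> -u root -p $db < nacos-db.sql"
}
print_init_cmds
```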
docker run -d \
-e PREFER_HOST_MODE=hostname \
-e MODE=standalone \
-e SPRING_DATASOURCE_PLATFORM=mysql \
-e MYSQL_MASTER_SERVICE_HOST=<database IP> \
-e MYSQL_MASTER_SERVICE_PORT=<database port> \
-e MYSQL_MASTER_SERVICE_USER=<username> \
-e MYSQL_MASTER_SERVICE_PASSWORD=<password> \
-e MYSQL_MASTER_SERVICE_DB_NAME=<database name> \
-e MYSQL_SLAVE_SERVICE_HOST=<slave database IP> \
-p 8848:8848 \
--name nacos-sa-mysql \
--restart=always \
nacos/nacos-server
docker run -d \
-e MODE=standalone \
-e SPRING_DATASOURCE_PLATFORM=mysql \
-e MYSQL_MASTER_SERVICE_HOST=mysql \
-e MYSQL_MASTER_SERVICE_PORT=3306 \
-e MYSQL_MASTER_SERVICE_USER=root \
-e MYSQL_MASTER_SERVICE_PASSWORD=root \
-e MYSQL_MASTER_SERVICE_DB_NAME=nacos \
-p 8848:8848 \
--link mysql:mysql \
--name nacos \
--restart=always \
nacos/nacos-server:1.4.0
docker run -d \
-e MODE=standalone \
-e SPRING_DATASOURCE_PLATFORM=mysql \
-e MYSQL_SERVICE_HOST=mysql \
-e MYSQL_SERVICE_PORT=3306 \
-e MYSQL_SERVICE_USER=root \
-e MYSQL_SERVICE_PASSWORD=root \
-e MYSQL_SERVICE_DB_NAME=nacos \
-p 8848:8848 \
--link mysql:mysql \
--name nacos \
--restart=always \
nacos/nacos-server:1.4.0
Version 1.4.0, standalone mode, error: code:503, msg: server is DOWN now, please try again later
https://github.com/alibaba/nacos/issues/4210
Version 1.4.0 introduced jraft, which records the cluster addresses from the previous start. If the machine's IP changes after a restart, the recorded addresses become invalid and leader election fails. With SofaJRaft, standalone mode also runs as a node: the flow is the same as a cluster, so a leader must be elected before the service is available.
Delete the protocol folder under the data directory
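Concretely, the cleanup is just removing data/protocol before restarting nacos. A sketch — a temporary stand-in directory is used here in place of the real nacos data dir (set NACOS_DATA to the actual path in practice):

```shell
#!/bin/bash
# Remove the stale jraft state (the data/protocol folder) so nacos re-elects
# a leader on the next start.
set -e
NACOS_DATA=${NACOS_DATA:-$(mktemp -d)}
mkdir -p "$NACOS_DATA/protocol/raft"   # simulate the stale state
rm -rf "$NACOS_DATA/protocol"
test ! -e "$NACOS_DATA/protocol" && echo "protocol dir removed"
```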
zentao
https://hub.docker.com/r/idoop/zentao
mkdir -p /alidata/dockerdata/zbox && \
docker run -d -p 83:80 -p 3310:3306 \
-e ADMINER_USER="root" -e ADMINER_PASSWD="Wb19831010!" \
-e BIND_ADDRESS="false" \
-v /alidata/dockerdata/zbox/:/opt/zbox/ \
--name zentao \
idoop/zentao:latest
--add-host mail.ling2.cn:163.177.90.125 \
Memcached
Installing and using Memcached on Windows
docker pull memcached
docker run --restart=always -it -d -p 11211:11211 --name memcache memcached
The version shown via stats on 11 is 1.5.13
docker run --restart=always -it -d -p 11211:11211 --name memcache memcached:1.5.14
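To confirm which version is actually running, the `stats` output can be parsed. A sketch using a captured sample line — a live check would pipe the command through nc, e.g. `printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211`:

```shell
#!/bin/bash
# Extract the server version from memcached `stats` output.
# A captured sample line is parsed here so the demo works offline.
sample='STAT version 1.5.13'
version=$(echo "$sample" | awk '$2 == "version" { print $3 }')
echo "$version"
```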
flink
Pull the flink image.
docker pull flink:1.10.0-scala_2.12
Write docker-compose.yml. Note: port 8081 (among others) will be used, so make sure 8081 is free beforehand.
version: "2.1"
services:
  jobmanager:
    image: flink
    expose:
      - "6123"
    ports:
      - "8081:8081"
    command: jobmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
  taskmanager:
    image: flink
    expose:
      - "6121"
      - "6122"
    depends_on:
      - jobmanager
    command: taskmanager
    links:
      - "jobmanager:jobmanager"
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
Create and run the flink containers
docker-compose up
eureka
docker run -dit --name eureka -p 8761:8761 springcloud/eureka
appollo
http://39.98.126.59:8070 apollo/admin
https://github.com/apolloconfig/apollo/tree/master/scripts/sql
create schema apolloconfigdb collate utf8_general_ci
create schema apolloportaldb collate utf8_general_ci
https://hub.docker.com/r/apolloconfig/apollo-portal
mkdir -p /alidata/dockerdata/apollo && cd /alidata/dockerdata/apollo
git clone https://github.com/apolloconfig/apollo-build-scripts.git
vi demo.sh
Configure the database and the settings below
-Deureka.instance.instance-id=39.98.126.59:8080 -Deureka.instance.prefer-ip-address=true -Deureka.instance.ip-address=39.98.126.59
./demo.sh start
docker rm -f apollo-configservice
docker rm -f apollo-adminservice
docker rm -f apollo-portal
docker run -d \
--name apollo-configservice \
-p 8080:8080 \
-v /alidata/dockerdata/apollo/logs:/opt/logs \
-e SPRING_DATASOURCE_URL="jdbc:mysql://172.18.117.211:3306/apolloconfigdb?characterEncoding=utf8" \
-e SPRING_DATASOURCE_USERNAME=root \
-e SPRING_DATASOURCE_PASSWORD=Kpmg@1234! \
apolloconfig/apollo-configservice
docker run -d \
--name apollo-adminservice \
-p 8090:8090 \
-v /alidata/dockerdata/apollo/logs:/opt/logs \
-e SPRING_DATASOURCE_URL="jdbc:mysql://172.18.117.211:3306/apolloconfigdb?characterEncoding=utf8" \
-e SPRING_DATASOURCE_USERNAME=root \
-e SPRING_DATASOURCE_PASSWORD=Kpmg@1234! \
-e spring_jpa_database-platform=org.hibernate.dialect.MySQLDialect \
apolloconfig/apollo-adminservice
docker run -d \
--name apollo-portal \
-p 8070:8070 \
-v /alidata/dockerdata/apollo/logs:/opt/logs \
-e SPRING_DATASOURCE_URL="jdbc:mysql://172.18.117.211:3306/apolloportaldb?characterEncoding=utf8" \
-e SPRING_DATASOURCE_USERNAME=root \
-e SPRING_DATASOURCE_PASSWORD=Kpmg@1234! \
-e APOLLO_PORTAL_ENVS=dev \
-e DEV_META=http://172.18.117.211:8080 \
apolloconfig/apollo-portal
maxkey
nextcloud
https://hub.docker.com/_/nextcloud
https://apps.nextcloud.com/apps/files_mindmap
https://apps.nextcloud.com/apps/drawio
https://apps.nextcloud.com/apps/onlyoffice
https://www.dgpyy.com/archives/6/
mkdir -p /alidata/nextcloud/html
mkdir -p /alidata/nextcloud/apps
mkdir -p /alidata/nextcloud/config
mkdir -p /alidata/nextcloud/data
mkdir -p /alidata/nextcloud/themes
chmod 777 /alidata/nextcloud
docker run -i -t -d --name nextcloud -p 64:80 \
-v /alidata/nextcloud/html:/var/www/html \
-v /alidata/nextcloud/apps:/var/www/html/custom_apps \
-v /alidata/nextcloud/config:/var/www/html/config \
-v /alidata/nextcloud/data:/var/www/html/data \
-v /alidata/nextcloud/themes:/var/www/html/themes \
nextcloud:22.1.0
ppdocr
https://hub.docker.com/r/paddlecloud/paddleocr
docker run --name ppocr -v $PWD:/mnt -p 8888:8888 --shm-size=32g paddlecloud/paddleocr:2.5-cpu-latest
https://hub.docker.com/r/987846/paddleocr
docker run --name ppdocr -p 8866:8866 -d 987846/paddleocr:latest
https://hub.docker.com/r/paddlepaddle/cloud
https://hub.docker.com/r/paddlepaddle/serving https://github.com/PaddlePaddle/Serving/blob/v0.9.0/doc/Docker_Images_CN.md
docker pull registry.baidubce.com/paddlepaddle/serving:0.9.0-cuda11.2-cudnn8-runtime
docker run --name ppdocr -p 9293:9293 -d registry.baidubce.com/paddlepaddle/serving:0.9.0-cuda11.2-cudnn8-runtime
https://my.oschina.net/u/4067628/blog/5550191
# Version: 1.0.0
FROM hub.baidubce.com/paddlepaddle/paddle:latest-gpu-cuda9.0-cudnn7-dev
# PaddleOCR base on Python3.7
RUN pip3.7 install --upgrade pip -i https://mirrors.aliyun.com/pypi/simple
RUN python3.7 -m pip install paddlepaddle==1.7.2 -i https://mirrors.aliyun.com/pypi/simple
RUN pip3.7 install paddlehub --upgrade -i https://mirrors.aliyun.com/pypi/simple
RUN mkdir -p /home && cd /home
RUN git clone https://gitee.com/PaddlePaddle/PaddleOCR
RUN cd /home/PaddleOCR && pip3.7 install -r requirments.txt -i https://mirrors.aliyun.com/pypi/simple
RUN mkdir -p /home/PaddleOCR/inference
ADD https://paddleocr.bj.bcebos.com/20-09-22/mobile/rec/ch_ppocr_mobile_v1.1_rec_infer.tar /home/PaddleOCR/inference
RUN tar xf /home/PaddleOCR/inference/ch_ppocr_mobile_v1.1_rec_infer.tar -C /home/PaddleOCR/inference
ADD https://paddleocr.bj.bcebos.com/20-09-22/mobile/det/ch_ppocr_mobile_v1.1_det_infer.tar /home/PaddleOCR/inference
RUN tar xf /home/PaddleOCR/inference/ch_ppocr_mobile_v1.1_det_infer.tar -C /home/PaddleOCR/inference
ADD https://paddleocr.bj.bcebos.com/20-09-22/cls/ch_ppocr_mobile_v1.1_cls_infer.tar /home/PaddleOCR/inference
RUN tar xf /home/PaddleOCR/inference/ch_ppocr_mobile_v1.1_cls_infer.tar -C /home/PaddleOCR/inference
RUN cd /home/PaddleOCR &&export PYTHONPATH=/home/PaddleOCR && hub install deploy/hubserving/ocr_system/
EXPOSE 8866
WORKDIR /home/PaddleOCR
CMD ["/bin/bash","-c","export PYTHONPATH=/home/PaddleOCR && hub serving start -m ocr_system"]
https://blog.csdn.net/weixin_43272781/article/details/114034077
# Version: 2.0.0
FROM registry.baidubce.com/paddlepaddle/paddle:2.0.0
# PaddleOCR base on Python3.7
RUN pip3.7 install --upgrade pip -i https://pypi.tuna.tsinghua.edu.cn/simple
RUN pip3.7 install paddlehub --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
RUN git clone https://gitee.com/PaddlePaddle/PaddleOCR.git /PaddleOCR
WORKDIR /PaddleOCR
RUN pip3.7 install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
RUN mkdir -p /PaddleOCR/inference/
# Download ocr detect model(light version). if you want to change normal version, you can change ch_ppocr_mobile_v1.1_det_infer to ch_ppocr_server_v1.1_det_infer, also remember change det_model_dir in deploy/hubserving/ocr_system/params.py)
ADD https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_infer.tar /PaddleOCR/inference/
RUN tar xf /PaddleOCR/inference/ch_ppocr_mobile_v2.0_det_infer.tar -C /PaddleOCR/inference/
# Download direction classifier(light version). If you want to change normal version, you can change ch_ppocr_mobile_v1.1_cls_infer to ch_ppocr_mobile_v1.1_cls_infer, also remember change cls_model_dir in deploy/hubserving/ocr_system/params.py)
ADD https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar /PaddleOCR/inference/
RUN tar xf /PaddleOCR/inference/ch_ppocr_mobile_v2.0_cls_infer.tar -C /PaddleOCR/inference/
# Download ocr recognition model(light version). If you want to change normal version, you can change ch_ppocr_mobile_v1.1_rec_infer to ch_ppocr_server_v1.1_rec_infer, also remember change rec_model_dir in deploy/hubserving/ocr_system/params.py)
ADD https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar /PaddleOCR/inference/
RUN tar xf /PaddleOCR/inference/ch_ppocr_mobile_v2.0_rec_infer.tar -C /PaddleOCR/inference/
RUN hub install deploy/hubserving/ocr_system/
RUN hub install deploy/hubserving/ocr_cls/
RUN hub install deploy/hubserving/ocr_det/
RUN hub install deploy/hubserving/ocr_rec/
EXPOSE 8866
CMD ["/bin/bash","-c","hub serving start --modules ocr_system ocr_cls ocr_det ocr_rec -p 8866 "]
Davinci
docker run -p 58081:8080 --name=davinci -e SPRING_DATASOURCE_URL="jdbc:mysql://rm-8vbe87b5295dz08zhxo.mysql.zhangbei.rds.aliyuncs.com:3306/davinci?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&allowMultiQueries=true" \
-e SPRING_DATASOURCE_USERNAME="lingcloud" -e SPRING_DATASOURCE_PASSWORD="Wb19831010!" \
-e SPRING_MAIL_HOST="smtp.163.com" -e SPRING_MAIL_PORT="465" -e SPRING_MAIL_PROPERTIES_MAIL_SMTP_SSL_ENABLE="true" \
-e SPRING_MAIL_USERNAME="xxxxxx@163.com" -e SPRING_MAIL_PASSWORD="xxxxxxx" \
-e SPRING_MAIL_NICKNAME="davinci_sys" \
edp963/davinci
metabase
clickhouse
https://www.jianshu.com/p/921a0d82c7b8
https://hub.docker.com/r/antrea/clickhouse-server
https://hub.docker.com/r/antrea/clickhouse-operator
2.2.1 Pull the ClickHouse Docker images
docker pull yandex/clickhouse-server
docker pull yandex/clickhouse-client
2.2.2 Start the server
- Start with the defaults:
docker run -d --name [container-name] --ulimit nofile=262144:262144 yandex/clickhouse-server
- To use a dedicated data directory (clickhouse-test-server is just an example name; use any name):
mkdir /work/clickhouse/clickhouse_test_db ## create the data directory
- Started as below, only ClickHouse's default port 9000 is reachable from outside, so the server can only be accessed through clickhouse-client:
docker run -d --name clickhouse-test-server --ulimit nofile=262144:262144 --volume=/work/clickhouse/clickhouse_test_db:/var/lib/clickhouse yandex/clickhouse-server
2.3 Start and connect to clickhouse-server
2.3.1 Start clickhouse-client via Docker
docker run -it --rm --link clickhouse-test-server:clickhouse-server yandex/clickhouse-client --host clickhouse-server
3. Common client parameters
clickhouse-client
--host, -h        server host name, default localhost
--port            connection port, default 9000
--user, -u        user name, default "default"
--password        password, empty by default
--query, -q       query to execute in non-interactive mode
--database, -d    current database, default "default"
--multiline, -m   allow multi-line statements (by default Enter ends the statement)
--format, -f      output results in the given format, e.g. CSV (comma-separated)
--time, -t        print query execution time in non-interactive mode
--stacktrace      print a stack trace when an exception occurs
--config-file     configuration file name
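Combining the flags above, a single query can be run non-interactively; this sketch reuses the clickhouse-test-server container started earlier (an assumption):

```shell
# One-off query against the linked server; falls back to a message if the
# container is not running or docker is unavailable
RESULT=$(docker run --rm --link clickhouse-test-server:clickhouse-server \
  yandex/clickhouse-client --host clickhouse-server \
  --query "SELECT version()" --format CSV 2>/dev/null \
  || echo "clickhouse-test-server is not running")
echo "$RESULT"
```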
hadoop
https://hub.docker.com/u/bde2020
melody
https://github.com/foamzou/melody
docker run -d --name melody -p 5567:5566 -v /alidata/dockerdata/melody-profile:/app/backend/.profile foamzou/melody:latest
xxl-job
https://www.xuxueli.com/xxl-job/#2.1%20%E5%88%9D%E5%A7%8B%E5%8C%96%E2%80%9C%E8%B0%83%E5%BA%A6%E6%95%B0%E6%8D%AE%E5%BA%93%E2%80%9D
mkdir -p /alidata/dockerdata/xxljob/applogs
docker run -p 3080:8080 -v /alidata/dockerdata/xxljob/applogs:/data/applogs --name xxl-job-admin -e PARAMS="--spring.datasource.url=jdbc:mysql://rm-8vbe87b5295dz08zhxo.mysql.zhangbei.rds.aliyuncs.com:3306/xxl_job --spring.datasource.username=lingcloud --spring.datasource.password=Wb19831010! --xxl.job.accessToken=ling" -d xuxueli/xxl-job-admin:2.3.0
docker build -t xuxueli/xxl-job-admin-dm ./xxl-job-admin
mkdir -p /alidata/dockerdata/xxljob/applogs-dm
docker rm -f xxl-job-admin-dm
docker run -p 3081:8081 -v /alidata/dockerdata/xxljob/applogs-dm:/data/applogs --name xxl-job-admin-dm -e PARAMS="--spring.datasource.url=jdbc:dm://ling.ling2.cn:5236/XXL_JOB?zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=utf-8 --spring.datasource.username=XXL_JOB --spring.datasource.password=Wb19831010! " -d xuxueli/xxl-job-admin-dm
docker logs -f xxl-job-admin
http://qunhui.ling2.cn:5000/xxl-job-admin/
nginx
mkdir -p /alidata/dockerdata/nginx
docker run --name tmp-nginx-container -d nginx
docker cp tmp-nginx-container:/etc/nginx/nginx.conf /alidata/dockerdata/nginx/nginx.conf
docker rm -f tmp-nginx-container
docker run --name nginx_kpmg --restart=always -p 6080:80 -v /alidata/dockerdata/nginx/html:/usr/share/nginx/html:ro -v /alidata/dockerdata/nginx/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
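After editing the mounted nginx.conf, it can be syntax-checked in a throwaway container before restarting; a sketch, assuming the same host path as above:

```shell
# Validate the config with `nginx -t` without touching the running container
if docker run --rm \
     -v /alidata/dockerdata/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
     nginx nginx -t 2>/dev/null; then
  STATUS="config ok"
else
  STATUS="config test failed or docker unavailable"
fi
echo "$STATUS"
```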
alist
docker exec -it alist ./alist -password
docker run -d --restart=always -v /alidata/dockerdata/alist:/opt/alist/data -p 5244:5244 --name="alist" xhofe/alist:latest
doccano
https://doccano.github.io/doccano/
docker run -d --name doccano \
  -e "ADMIN_USERNAME=admin" \
  -e "ADMIN_EMAIL=admin@ling2.com" \
  -e "ADMIN_PASSWORD=Wb19831010" \
  -p 8002:8000 doccano/doccano
docker cp doccano:/data /alidata/dockerdata/doccano
docker rm -f doccano
chmod 777 -R /alidata/dockerdata/doccano/data
docker run -d --name doccano \
  -e "ADMIN_USERNAME=admin" \
  -e "ADMIN_EMAIL=admin@ling2.com" \
  -e "ADMIN_PASSWORD=Wb19831010" \
  -e "DATABASE_URL=postgres://postgres:Wb19831010@qunhui.ling2.cn:5433/doccano?sslmode=disable" \
  -v /alidata/dockerdata/doccano/data:/data \
  -p 8002:8000 doccano/doccano
source /root/anaconda3/bin/activate
pip install doccano
doccano init
doccano createuser --username ling --password Wb19831010!
doccano webserver --port 8000
doccano task
http://www.ling2.cn:8000/
#### Annotating data
We recommend the annotation platform doccano for data labeling. This example also wires annotation into training: data exported from doccano can be converted by the doccano.py script into the format the model expects, for a seamless hand-off. To that end, annotate your data on doccano following the rules below:

**Step 1. Install doccano locally** (do not run this inside AI Studio; tested locally with python=3.8)
``$ pip install doccano``
**Step 2. Initialize the database and account** (the username and password can be replaced with custom values)
``$ doccano init``
``
$ doccano createuser --username my_admin_name --password my_password
``
**Step 3. Start doccano**
- In one terminal, start the doccano web server and keep it running
``
$ doccano webserver --port 8000
``
- In another terminal, start the doccano task queue
``
$ doccano task
``
**Step 4. Annotate entities and relations in doccano**
- Open a browser (Chrome recommended), type ``http://0.0.0.0:8000/`` into the address bar, and press Enter to reach the following page.

- Log in. Click ``LOGIN`` in the top-right corner and enter the username and password set in **Step 2**.
- Create a project. Click ``CREATE`` in the top-left corner to reach the following page.
- Select sequence labeling (``Sequence Labeling``)
- Fill in the project name (``Project name``) and other required fields
- Check allow overlapping entities (``Allow overlapping entity``) and relation labeling (``Use relation labeling``)
- After creation, the video on the project home page explains the seven steps from data import to export in detail.


- Set up labels. In the Labels tab, click ``Actions``, then ``Create Label`` to add labels manually or ``Import Labels`` to import them from a file.
- At the top, Span is for entity labels and Relation for relation labels; they must be set up separately.

- Import data. In the Datasets tab, click ``Actions``, then ``Import Dataset`` to import text data from a file.
- Pick the format that matches your data file, following the examples shown under File format.
- After a successful import you are taken to the dataset list.
- Annotate data. Click the ``Annotate`` button at the far right of each record to start. The Label Types switch on the right of the annotation page toggles between entity and relation labels.
- Entity annotation: select text with the mouse to mark an entity.
- Relation annotation: first click the relation label to apply, then click the corresponding head and tail entities in turn.
- Export data. In the Datasets tab, click ``Actions``, then ``Export Dataset`` to export the annotated data.
<br>
#### Converting annotated data into the format UIE training needs
- Save the annotated data exported from doccano under the ``./data/`` directory. For the voice expense-report information-extraction scenario, the pre-annotated [data](https://paddlenlp.bj.bcebos.com/datasets/erniekit/speech-cmd-analysis/audio-expense-account.jsonl) can be downloaded directly.
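The conversion itself is a single call to PaddleNLP's doccano.py script; the file name and split ratios below are illustrative assumptions (check the per-task annotation doc for the exact flags):

```shell
# Hypothetical invocation of PaddleNLP's doccano.py converter; run inside
# PaddleNLP/model_zoo/uie with the doccano export saved under ./data/
python doccano.py \
  --doccano_file ./data/audio-expense-account.jsonl \
  --task_type ext \
  --save_dir ./data \
  --splits 0.8 0.1 0.1 \
  || echo "doccano.py not found: run this inside PaddleNLP/model_zoo/uie"
```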
#### Annotation documentation for each task
https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/doccano.md
Troubleshooting
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
The following steps resolved it:
chmod 777 /alidata/backup/oracle
docker stop oracle
docker start oracle
drop directory orcl_dp;
create directory orcl_dp as '/orcl_dp';
grant read, write on directory orcl_dp to public;
Error opening log file '/gc.log': Permission denied
- Create an Elasticsearch container without the directory mapping
- Copy the log directory to the host
docker cp elasticsearch:/usr/share/elasticsearch/logs/ /alidata/dockerdata/elasticsearch/
- Recreate the Elasticsearch container with the directory mapping
FORBIDDEN/12/index read-only / allow delete (api)]
curl -XPUT -H "Content-Type: application/json" http://127.0.0.1:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
PUT _settings
{
  "index.blocks.read_only_allow_delete": "false"
}