Official documentation:
Method for initializing indexes at startup:
https://blog.csdn.net/Xc_xdd/article/details/113547403
Kibana's Dockerfile:

ARG ES_VERSION=7.4.0
FROM kibana:${ES_VERSION}
MAINTAINER Alay<[email protected]>
EXPOSE 5601
WORKDIR /
CMD ["/usr/local/bin/kibana-docker"]
Elasticsearch's Dockerfile:

ARG ES_VERSION=7.4.0
FROM elasticsearch:${ES_VERSION}
MAINTAINER Alay<[email protected]>
# Set the container (Alpine) time zone
ENV TIMEZONE Asia/Shanghai
RUN ln -snf /usr/share/zoneinfo/$TIMEZONE /etc/localtime && echo $TIMEZONE > /etc/timezone
# Use a UTF-8 locale so Chinese text displays correctly
ENV LANG C.UTF-8
EXPOSE 9200 9300
# Note: at run time the VOLUME instruction below must stay on a single line,
# otherwise the build errors; it is only wrapped elsewhere for readability.
VOLUME ["/usr/share/elasticsearch/data", "/usr/share/elasticsearch/logs", "/usr/share/elasticsearch/plugins", "/usr/share/elasticsearch/config"]
CMD ["elasticsearch"]
Environment variable file: .env

################################# elasticsearch ##############################
# ES version
ES_VERSION=7.6.2
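docker-compose reads this .env file and substitutes `${ES_VERSION}` wherever it appears in the compose file. A throwaway sketch of the effect (compose does its own parsing, but for a simple KEY=VALUE file the result is the same as sourcing it in a shell; the /tmp path is only for the demo):

```shell
#!/bin/sh
# Write a throwaway .env matching the one above.
cat > /tmp/demo.env <<'EOF'
ES_VERSION=7.6.2
EOF

# Load it and expand an image tag the way compose would.
. /tmp/demo.env
echo "image tag: behelpful-es:${ES_VERSION}"
```
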
docker-compose.yml
version: '3.0'
services:
  # Elasticsearch service
  behelpful-es:
    env_file: .env
    build:
      context: ./search/elasticsearch
      dockerfile: Dockerfile
      args:
        # Must match the ARG name declared in the Dockerfile (ES_VERSION)
        ES_VERSION: ${ES_VERSION}
    image: behelpful-es:${ES_VERSION}
    container_name: behelpful-es
    restart: always
    # Environment variables (a few extra defaults are configured here as placeholders;
    # adjust later as needed)
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - transport.tcp.port=9300
      - transport.host=0.0.0.0
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - bootstrap.memory_lock=true
      - discovery.type=single-node
      - cluster.name=behelpful-Search
      - node.name=search-master
      - network.host=0.0.0.0
    # File handle / memory lock limits
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      # Plugin mount
      - ./search/elasticsearch/plugins:/usr/share/elasticsearch/plugins
      # Log mount
      - ./search/elasticsearch/logs:/usr/share/elasticsearch/logs
      # Data mount
      - ./search/elasticsearch/data:/usr/share/elasticsearch/data
    networks:
      - behelpful

  # Kibana service: https://www.elastic.co/guide/cn/kibana/current/settings.html
  behelpful-kibana:
    env_file: .env
    image: kibana:${ES_VERSION}
    container_name: behelpful-kibana
    restart: always
    environment:
      - SERVER_NAME=behelpful-kibana
      # Kibana 7.x uses ELASTISEARCH_HOSTS... correction: ELASTICSEARCH_HOSTS
      # (ELASTICSEARCH_URL was the 6.x setting name)
      - ELASTICSEARCH_HOSTS=http://behelpful-es:9200
    ports:
      - "5601:5601"
    networks:
      - behelpful

# Custom bridge network "behelpful"
networks:
  behelpful:
    # Not created automatically at startup; create the bridge "behelpful"
    # manually beforehand
    external: true
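Before bringing the stack up, two prerequisites matter: the external bridge network must already exist, and the kernel setting `vm.max_map_count` must be at least 262144 or Elasticsearch's bootstrap checks fail. A pre-flight sketch (assumes a Linux host; the Docker commands at the end are shown commented because they need a Docker daemon):

```shell
#!/bin/sh
# Elasticsearch refuses to start in production mode when vm.max_map_count
# is below 262144.
required=262144
current=$(cat /proc/sys/vm/max_map_count 2>/dev/null || echo 0)
if [ "$current" -lt "$required" ]; then
    echo "vm.max_map_count=$current is too low; run: sudo sysctl -w vm.max_map_count=$required"
else
    echo "vm.max_map_count=$current is sufficient"
fi

# Then create the external bridge and start the services:
#   docker network create behelpful
#   docker-compose up -d --build
```
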
Cluster deployment of ES (untested; interested readers can try it out):
version: '3.0'
services:
  # Elasticsearch master node
  search-master:
    env_file: .env
    build:
      context: ./search/elasticsearch
      dockerfile: Dockerfile
      args:
        # Must match the ARG name declared in the Dockerfile (ES_VERSION)
        ES_VERSION: ${ES_VERSION}
    image: behelpful-es:${ES_VERSION}
    container_name: search-master
    restart: always
    # Environment variables (a few extra defaults are configured here as placeholders;
    # adjust later as needed)
    environment:
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - transport.tcp.port=9300
      - transport.host=0.0.0.0
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - bootstrap.memory_lock=true
      - cluster.name=behelpful-Search
      - node.name=search-master
      - discovery.seed_hosts=search-node1,search-node2
      - cluster.initial_master_nodes=search-master,search-node1,search-node2
      - network.host=0.0.0.0
    # File handle / memory lock limits
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      # Plugin mount
      - ./search/elasticsearch/plugins:/usr/share/elasticsearch/plugins
      # Log mount
      - ./search/elasticsearch/logs:/usr/share/elasticsearch/logs
      # Data mount
      - ./search/elasticsearch/data:/usr/share/elasticsearch/data
    networks:
      - behelpful

  search-node1:
    env_file: .env
    image: behelpful-es:${ES_VERSION}
    container_name: search-node1
    restart: always
    environment:
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - transport.tcp.port=9300
      - transport.host=0.0.0.0
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - bootstrap.memory_lock=true
      - cluster.name=behelpful-Search
      - node.name=search-node1
      - discovery.seed_hosts=search-master,search-node2
      - cluster.initial_master_nodes=search-master,search-node1,search-node2
      - network.host=0.0.0.0
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      # NOTE: in practice each node needs its own host directories;
      # two nodes cannot share the same data directory
      - ./search/elasticsearch/plugins:/usr/share/elasticsearch/plugins
      - ./search/elasticsearch/logs:/usr/share/elasticsearch/logs
      - ./search/elasticsearch/data:/usr/share/elasticsearch/data
    networks:
      - behelpful

  search-node2:
    env_file: .env
    image: behelpful-es:${ES_VERSION}
    container_name: search-node2
    restart: always
    environment:
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - transport.tcp.port=9300
      - transport.host=0.0.0.0
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - bootstrap.memory_lock=true
      - cluster.name=behelpful-Search
      - node.name=search-node2
      - discovery.seed_hosts=search-master,search-node1
      - cluster.initial_master_nodes=search-master,search-node1,search-node2
      - network.host=0.0.0.0
      # Enable ES passwords (note: in a real cluster these security settings
      # must be set on every node, not just this one)
      - xpack.security.enabled=true
      - xpack.license.self_generated.type=basic
      - xpack.security.transport.ssl.enabled=true
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - ./search/elasticsearch/plugins:/usr/share/elasticsearch/plugins
      - ./search/elasticsearch/logs:/usr/share/elasticsearch/logs
      - ./search/elasticsearch/data:/usr/share/elasticsearch/data
    networks:
      - behelpful

# Custom bridge network "behelpful"
networks:
  behelpful:
    # Not created automatically at startup; create the bridge "behelpful"
    # manually beforehand (with "external: true", no driver should be specified here)
    external: true
Testing after deployment:
From inside the Linux host it already works: curl http://127.0.0.1:9200
[root@chxlay ~]# curl http://127.0.0.1:9200
{
  "name" : "bS0fjGz",                          # name of the started ES node
  "cluster_name" : "elasticsearch",            # cluster name
  "cluster_uuid" : "j8tyQZbAQC6zrr41vOYHQA",   # cluster UUID
  "version" : {                                # ES version information
    "number" : "6.3.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "eb782d0",
    "build_date" : "2018-06-29T21:59:26.107521Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

(Sample output from an earlier 6.3.1 install; a 7.6.2 deployment reports its own version.)
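When scripting against this endpoint, the interesting fields can be pulled out of the response with plain POSIX tools. A sketch run here against a copy of the sample response above; in real use, replace the literal with `response=$(curl -s http://127.0.0.1:9200)`:

```shell
#!/bin/sh
# Sample response (normally fetched with: response=$(curl -s http://127.0.0.1:9200))
response='{
  "name" : "bS0fjGz",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "6.3.1" },
  "tagline" : "You Know, for Search"
}'

# Extract a string field by key with sed (first match wins).
json_field() {
    printf '%s\n' "$response" | sed -n "s/.*\"$1\" : \"\([^\"]*\)\".*/\1/p" | head -n 1
}

node_name=$(json_field name)
cluster=$(json_field cluster_name)
es_version=$(json_field number)
echo "node=$node_name cluster=$cluster version=$es_version"
```
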
Overview of the relevant properties in the ES config file: elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Cluster name
cluster.name: elastic_cluster
#
# ------------------------------------ Node ------------------------------------
#
# Node name (the node's own name within the cluster, e.g. master, slave1, slave2, ...)
node.name: node-1
#
# Add custom attributes to the node:
#node.attr.rack: r1
#
# This node may be elected master (node.master) and may also hold data (node.data);
# both default to true
node.master: true
node.data: true
#
# ----------------------------------- Paths ------------------------------------
#
# Data directory; defaults to $ES_HOME/data
#path.data: /path/to/data
#
# Log directory; customizable, but best left at the default
# (customizing it easily causes errors)
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Whether to lock memory; the following two are usually set to false
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
# Use the machine's own IP (0.0.0.0 or the internal IP, not the public IP;
# on Alibaba Cloud, add firewall rules allowing access to ports 9200 and 9300)
network.host: 192.168.42.111
#
# HTTP port for external access; defaults to 9200, no need to change it unless required
http.port: 9200
#
# Port 9300 carries cluster (transport) traffic
transport.tcp.port: 9300
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when a new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
# Initial node list for the cluster (the first is the default master); for a
# standalone machine this can be left unset
discovery.zen.ping.unicast.hosts: ["192.168.1.1", "192.168.1.2"]
#
# Prevent "split brain" by configuring the majority of nodes
# (total number of master-eligible nodes / 2 + 1); defaults to 1.
# Not needed for a single-server setup.
discovery.zen.minimum_master_nodes: 1
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#action.destructive_requires_name: true
#
# Allow port 9200 to be accessed externally, e.g. by the head plugin connecting to ES
http.cors.enabled: true
http.cors.allow-origin: "*"
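The split-brain rule above (master-eligible nodes / 2 + 1, integer division) is easy to compute; for the three-node cluster in the compose example it gives 2:

```shell
#!/bin/sh
# Quorum needed to avoid split brain: master_eligible / 2 + 1.
master_eligible=3
quorum=$((master_eligible / 2 + 1))
echo "minimum_master_nodes for $master_eligible master-eligible nodes: $quorum"
```
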
# Enable ES passwords (append the following at the end)
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
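One caveat not spelled out in the original text: once `xpack.security.transport.ssl.enabled: true` is set, each node must also be told where its certificate lives, otherwise it refuses to start. A sketch of the extra elasticsearch.yml lines, assuming the elastic-certificates.p12 generated in the steps below is copied into the config directory:

```yaml
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
```

If the .p12 file was protected with a password, that password must additionally be stored with `bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password` (and the matching `truststore.secure_password`).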
Taking a Docker container deployment as an example:
1. Enter the container; the certificate-generation scripts are in the bin directory:
[root@363474ada611 elasticsearch]# ll bin/
total 19604
-rwxr-xr-x. 1 elasticsearch root     1915 Mar 26  2020 elasticsearch
-rwxr-xr-x. 1 elasticsearch root      491 Mar 26  2020 elasticsearch-certgen
-rwxr-xr-x. 1 elasticsearch root      483 Mar 26  2020 elasticsearch-certutil          # generates certificates/keys
-rwxr-xr-x. 1 elasticsearch root      982 Mar 26  2020 elasticsearch-cli
-rwxr-xr-x. 1 elasticsearch root      433 Mar 26  2020 elasticsearch-croneval
-rwxr-xr-x. 1 elasticsearch root     4316 Mar 26  2020 elasticsearch-env
-rwxr-xr-x. 1 elasticsearch root     1828 Mar 26  2020 elasticsearch-env-from-file
-rwxr-xr-x. 1 elasticsearch root      121 Mar 26  2020 elasticsearch-keystore
-rwxr-xr-x. 1 elasticsearch root      440 Mar 26  2020 elasticsearch-migrate
-rwxr-xr-x. 1 elasticsearch root      126 Mar 26  2020 elasticsearch-node
-rwxr-xr-x. 1 elasticsearch root      172 Mar 26  2020 elasticsearch-plugin
-rwxr-xr-x. 1 elasticsearch root      431 Mar 26  2020 elasticsearch-saml-metadata
-rwxr-xr-x. 1 elasticsearch root      438 Mar 26  2020 elasticsearch-setup-passwords   # sets passwords
-rwxr-xr-x. 1 elasticsearch root      118 Mar 26  2020 elasticsearch-shard
-rwxr-xr-x. 1 elasticsearch root      427 Mar 26  2020 elasticsearch-sql-cli
-rwxr-xr-x. 1 elasticsearch root 19986912 Mar 26  2020 elasticsearch-sql-cli-7.6.2.jar
-rwxr-xr-x. 1 elasticsearch root      426 Mar 26  2020 elasticsearch-syskeygen
-rwxr-xr-x. 1 elasticsearch root      426 Mar 26  2020 elasticsearch-users             # user management
-rwxr-xr-x. 1 elasticsearch root      346 Mar 26  2020 x-pack-env
-rwxr-xr-x. 1 elasticsearch root      354 Mar 26  2020 x-pack-security-env
-rwxr-xr-x. 1 elasticsearch root      353 Mar 26  2020 x-pack-watcher-env
First run the elasticsearch-certutil executable to generate the CA file:
[root@6bebc53a88ac bin]# elasticsearch-certutil ca
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.

Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority

By default the 'ca' mode produces a single PKCS#12 output file which holds:
    * The CA certificate
    * The CA's private key

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key

# If you do not want a custom file name, just press Enter; the default
# elastic-stack-ca.p12 is fine
Please enter the desired output file [elastic-stack-ca.p12]:
# Enter the password you want to set, then press Enter
Enter password for elastic-stack-ca.p12 :
Then run: elasticsearch-certutil cert --ca elastic-stack-ca.p12
[root@1d8bdbc07715 bin]# elasticsearch-certutil cert --ca elastic-stack-ca.p12
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.
…… (a large amount of verbose output omitted here) ……
then the output will be a zip file containing individual certificate/key files

Enter password for CA (elastic-stack-ca.p12) :                     # the password set in the previous step
Please enter the desired output file [elastic-certificates.p12]:   # custom certificate file name; the default is fine
Enter password for elastic-certificates.p12 :                      # password for this certificate file; the same one as before is fine

Certificates written to /usr/share/elasticsearch/elastic-certificates.p12

This file should be properly secured as it contains the private key for your instance.
This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy this '.p12' file
to the relevant configuration directory and then follow the SSL configuration
instructions in the product guide.
For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.
After the above completes, the generated certificate can be seen at /usr/share/elasticsearch/elastic-certificates.p12:

[root@1d8bdbc07715 elasticsearch]# ls
LICENSE.txt  NOTICE.txt  README.asciidoc  bin  config  data  elastic-certificates.p12  elastic-stack-ca.p12  jdk  lib  logs  modules  plugins
Set the passwords (in the bin directory). Only the built-in system users can be configured here; personal user passwords must be added afterwards through Kibana, logged in as a system user such as elastic.

[root@9a6eebe7a6d7 bin]# elasticsearch-setup-passwords -h    # show the command's help

Run it as follows:
[root@368f57c255c9 bin]# elasticsearch-setup-passwords interactive   # "interactive" prompts for custom passwords; "auto" generates random ones
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
# You can give every user the same password (easier to remember and manage)
# or set them separately; decide based on your own situation
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
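Once passwords are set, anonymous requests get a 401 and every call needs credentials. A sketch of what the HTTP Basic auth header looks like (the password here is a placeholder; with curl you would normally just pass `-u elastic:<password>`):

```shell
#!/bin/sh
# Placeholder credentials -- replace with the password chosen above.
user=elastic
pass=changeme

# HTTP Basic auth is just base64("user:password").
token=$(printf '%s:%s' "$user" "$pass" | base64)
echo "Authorization: Basic $token"

# Equivalent curl calls (commented out; they need a running cluster):
#   curl -u "$user:$pass" http://127.0.0.1:9200
#   curl -H "Authorization: Basic $token" http://127.0.0.1:9200
```
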
Installing the IK analyzer plugin:
Looking at the ES directory layout, analyzer plugins go under plugins (one directory per plugin; do not drop loose plugin files in there directly — each plugin must live in its own directory).
Restart ES to load the IK analyzer, then test it in Kibana as follows:
GET _analyze
{
  "text": "我是中国人",
  "analyzer": "ik_max_word"
}
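The same request can also be issued from the shell. A sketch that builds the JSON body and shows the equivalent curl form (host and port assume the single-node compose setup above; the curl line needs a running cluster, so it is shown commented):

```shell
#!/bin/sh
# Build the _analyze request body.
text="我是中国人"
body=$(printf '{ "text": "%s", "analyzer": "ik_max_word" }' "$text")
echo "$body"

# With a running cluster (add -u elastic:<password> once security is enabled):
#   curl -s -H 'Content-Type: application/json' \
#        -X GET http://127.0.0.1:9200/_analyze -d "$body"
```
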
Installing the head plugin: this one is not orchestrated with docker-compose but run directly with docker, since I personally rarely use it.
1. Pull the image:
docker pull mobz/elasticsearch-head:5
2. Run the container:
docker run -d --name es_admin -p 9100:9100 mobz/elasticsearch-head:5
Alternatively, download the archive and run it after unpacking:
wget https://github.com/mobz/elasticsearch-head/archive/elasticsearch-head-master.zip
You can also fetch it with git (install git first: yum install git):
unzip elasticsearch-head-master.zip
Visualization tool: cerebro. It is available for Windows, Linux, and (on Linux) Docker.
1. Windows: download and unpack, then double-click bin/cerebro.bat to use it.
2. Linux: download and unpack, configure as needed, then run:
nohup bin/cerebro >> logs/application.log 2>&1 &
3. Docker:
docker search cerebro
docker pull lmenezes/cerebro
docker run --name cerebro -d -p 9000:9000 lmenezes/cerebro