Elasticsearch + Logstash and log retrieval using Java code
- 2021-08-21 20:47:27
- OfStack
To keep project logs from being leaked, Kibana is not used for data display.
1. Environment preparation
1.1 Creating an ordinary user
# Create a user
useradd queylog
# Set a password
passwd queylog
# Grant sudo privileges
# Find the sudoers file location
whereis sudoers
# Make the file writable
chmod -v u+w /etc/sudoers
# Edit the file
vi /etc/sudoers
# Revoke write permission afterwards
chmod -v u-w /etc/sudoers
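The edit step above typically adds a single line granting the new user sudo rights; a minimal example (illustrative, adjust to your own policy):

```
# Grant the queylog user full sudo rights (illustrative)
queylog ALL=(ALL) ALL
```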
# The first time you use sudo there will be a prompt
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
User creation is complete.
1.2 Installing jdk
su queylog
cd /home/queylog
# Extract jdk-8u191-linux-x64.tar.gz
tar -zxvf jdk-8u191-linux-x64.tar.gz
sudo mv jdk1.8.0_191 /opt/jdk1.8
# Edit /etc/profile
vi /etc/profile
export JAVA_HOME=/opt/jdk1.8
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
# Reload the configuration file
source /etc/profile
# Check the jdk version
java -version
1.3 Firewall Settings
# Allow the specified IP
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="172.16.110.55" accept'
# Reload
firewall-cmd --reload
2. Install elasticsearch
2.1 elasticsearch Configuration
Note: elasticsearch must be started by an ordinary user, or it will report an error
su queylog
cd /home/queylog
# Extract elasticsearch-6.5.4.tar.gz
tar -zxvf elasticsearch-6.5.4.tar.gz
sudo mv elasticsearch-6.5.4 /opt/elasticsearch
# Edit the es configuration file
vi /opt/elasticsearch/config/elasticsearch.yml
# Set the es cluster name
cluster.name: elastic
# Set the bind address
network.host: 192.168.8.224
# Set the service port
http.port: 9200
# Switch to the root user
su root
# Edit /etc/security/limits.conf and add the following
vi /etc/security/limits.conf
* hard nofile 655360
* soft nofile 131072
* hard nproc 4096
* soft nproc 2048
# Edit /etc/sysctl.conf and add the following:
vi /etc/sysctl.conf
vm.max_map_count=655360
fs.file-max=655360
# After saving, reload:
sysctl -p
# Switch back to the ordinary user
su queylog
# Start elasticsearch
/opt/elasticsearch/bin/elasticsearch
# Test
curl http://192.168.8.224:9200
# The console will print
{
"name" : "L_dA6oi",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "eS7yP6fVTvC8KMhLutOz6w",
"version" : {
"number" : "6.5.4",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "d2ef93d",
"build_date" : "2018-12-17T21:17:40.758843Z",
"build_snapshot" : false,
"lucene_version" : "7.5.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
2.2 Managing elasticsearch as a service
# Switch to the root user
su root
# Write the service unit file
vi /usr/lib/systemd/system/elasticsearch.service
[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target
[Service]
Environment=ES_HOME=/opt/elasticsearch
Environment=ES_PATH_CONF=/opt/elasticsearch/config
Environment=PID_DIR=/opt/elasticsearch/config
EnvironmentFile=/etc/sysconfig/elasticsearch
WorkingDirectory=/opt/elasticsearch
User=queylog
Group=queylog
ExecStart=/opt/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid
# StandardOutput is configured to redirect to journalctl since
# some error messages may be logged in standard output before
# elasticsearch logging system is initialized. Elasticsearch
# stores its logs in /var/log/elasticsearch and does not use
# journalctl by default. If you also want to enable journalctl
# logging, you can simply remove the "quiet" option from ExecStart.
StandardOutput=journal
StandardError=inherit
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536
# Specifies the maximum number of process
LimitNPROC=4096
# Specifies the maximum size of virtual memory
LimitAS=infinity
# Specifies the maximum file size
LimitFSIZE=infinity
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0
# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM
# Send the signal only to the JVM rather than its control group
KillMode=process
# Java process is never killed
SendSIGKILL=no
# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target
vi /etc/sysconfig/elasticsearch
#######################
# Elasticsearch #
#######################
# Elasticsearch home directory
ES_HOME=/opt/elasticsearch
# Elasticsearch Java path
JAVA_HOME=/opt/jdk1.8
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/jre/lib
# Elasticsearch configuration directory
ES_PATH_CONF=/opt/elasticsearch/config
# Elasticsearch PID directory
PID_DIR=/opt/elasticsearch/config
#############################
# Elasticsearch Service #
#############################
# SysV init.d
# The number of seconds to wait before checking if elasticsearch started successfully as a daemon process
ES_STARTUP_SLEEP_TIME=5
################################
# Elasticsearch Properties #
################################
# Specifies the maximum file descriptor number that can be opened by this process
# When using Systemd, this setting is ignored and the LimitNOFILE defined in
# /usr/lib/systemd/system/elasticsearch.service takes precedence
#MAX_OPEN_FILES=65536
# The maximum number of bytes of memory that may be locked into RAM
# Set to "unlimited" if you use the 'bootstrap.memory_lock: true' option
# in elasticsearch.yml.
# When using Systemd, LimitMEMLOCK must be set in a unit file such as
# /etc/systemd/system/elasticsearch.service.d/override.conf.
#MAX_LOCKED_MEMORY=unlimited
# Maximum number of VMA (Virtual Memory Areas) a process can own
# When using Systemd, this setting is ignored and the 'vm.max_map_count'
# property is set at boot time in /usr/lib/sysctl.d/elasticsearch.conf
#MAX_MAP_COUNT=262144
# Reload service
systemctl daemon-reload
# Switch to the ordinary user
su queylog
# Start elasticsearch
sudo systemctl start elasticsearch
# Set boot self-startup
sudo systemctl enable elasticsearch
3. Install logstash
3.1. logstash Configuration
su queylog
cd /home/queylog
# Extract logstash-6.5.4.tar.gz
tar -zxvf logstash-6.5.4.tar.gz
sudo mv logstash-6.5.4 /opt/logstash
# Edit the logstash configuration file
vi /opt/logstash/config/logstash.yml
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.elasticsearch.url: ["http://192.168.8.224:9200"]
# Create logstash.conf in the bin directory
vi /opt/logstash/bin/logstash.conf
input {
# Use a file as the input source
file {
# Log file path
path => "/opt/tomcat/logs/catalina.out"
start_position => "beginning" # (end, beginning)
type=> "isp"
}
}
#filter {
# Define the data format and parse logs with regular expressions (filter and collect logs according to actual needs)
#grok {
# match => { "message" => "%{IPV4:clientIP}|%{GREEDYDATA:request}|%{NUMBER:duration}"}
#}
# Type conversion of data as needed
#mutate { convert => { "duration" => "integer" }}
#}
# Defining Output
output {
elasticsearch {
hosts => "192.168.8.224:9200" # Elasticsearch default port
index => "ind"
document_type => "isp"
}
}
# Grant ownership to the user
chown queylog:queylog /opt/logstash
# Start logstash
/opt/logstash/bin/logstash -f /opt/logstash/bin/logstash.conf
# After logstash starts, check in es whether the index has been created
curl http://192.168.8.224:9200/_cat/indices
4. Java code
First, a note on an exception that can occur when integrating ElasticSearch and Redis in Spring Boot.
After looking into it, the cause can be summarized as follows.
Cause analysis: Netty is used elsewhere in the program (for example by Redis), which initializes the number of available processors before the Elasticsearch transport client is instantiated. When the transport client is instantiated it tries to set the processor count again; because Netty has already been initialized elsewhere, it guards against this, and the instantiation fails with the IllegalStateException seen.
Solutions
In the SpringBoot startup class, add:
System.setProperty("es.set.netty.runtime.available.processors", "false");
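For placement, the call must run before any Elasticsearch transport client is created; a minimal sketch of the startup class follows (the SpringApplication.run call is commented out here so the snippet compiles without Spring on the classpath):

```java
public class CbeiIspApplication {
    public static void main(String[] args) {
        // Must run before any Elasticsearch transport client is instantiated,
        // otherwise Netty's processor count gets initialized twice
        System.setProperty("es.set.netty.runtime.available.processors", "false");
        // SpringApplication.run(CbeiIspApplication.class, args); // normal Spring Boot startup
    }
}
```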
4.1. Introducing pom dependency
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
4.2. Modify the configuration file
spring.data.elasticsearch.cluster-name=elastic
# The REST API uses port 9200
# The Java transport client uses port 9300
spring.data.elasticsearch.cluster-nodes=192.168.8.224:9300
4.3. Corresponding interfaces and implementation classes
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
@Document(indexName = "ind", type = "isp")
public class Bean {
    @Field
    private String message;

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }

    @Override
    public String toString() {
        return "Bean{" +
                "message='" + message + '\'' +
                '}';
    }
}
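The IElasticSearchService autowired in the test class below is not shown in the article. As an illustrative, dependency-free alternative, the same "ind" index can also be queried directly through the 9200 REST endpoint; the class name, method signature, and query shape here are assumptions, not the article's actual service:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hypothetical helper that searches the "ind" index over the REST port (9200)
public class LogSearchClient {

    // Build a match query on the "message" field; page is 1-based
    static String buildQuery(String keyword, int page, int size) {
        return "{\"from\":" + ((page - 1) * size) + ",\"size\":" + size
                + ",\"query\":{\"match\":{\"message\":\"" + keyword + "\"}}}";
    }

    // POST the query to /ind/_search and return the raw JSON response
    static String search(String host, String keyword, int page, int size) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://" + host + "/ind/_search").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(buildQuery(keyword, page, size).getBytes(StandardCharsets.UTF_8));
        }
        StringBuilder sb = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) sb.append(line);
        }
        return sb.toString();
    }
}
```

Calling `LogSearchClient.search("192.168.8.224:9200", "Exception", 1, 10)` would then return the raw search response for the first page of ten hits.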
import com.alibaba.fastjson.JSON;
import java.util.Map;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
@RunWith(SpringRunner.class)
@SpringBootTest(classes = CbeiIspApplication.class)
public class ElasticSearchServiceTest {
    private static Logger logger = LoggerFactory.getLogger(ElasticSearchServiceTest.class);
    @Autowired
    private IElasticSearchService elasticSearchService;
    @Test
    public void getLog() {
        try {
            // Search the index for "Exception", page 1, 10 hits per page
            Map<String, Object> search = elasticSearchService.search("Exception", 1, 10);
            logger.info(JSON.toJSONString(search));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
That is all for today. This article only briefly introduces the use of elasticsearch and logstash; if there are any inadequacies, please comment and point them out ~