[[beats-getting-started]]
== Getting started

+A regular _Beats setup_ consists of:

-The _Beats platform_ together with Elasticsearch and Kibana form an _open source
-application monitoring solution_.
-
- * _Beats shippers_ to collect the data. You should install these on
+ * Beats shippers to collect the data. You should install these on
your servers so that they capture the data.
- * _Elasticsearch_ for storage and indexing.
- * _Kibana_ for the UI.
+ * Elasticsearch for storage and indexing. <<elasticsearch-installation>>
+ * Optionally, Logstash. <<logstash-installation>>
+ * Kibana for the UI. <<kibana-installation>>
+ * Kibana dashboards with ready-made widgets that you can customize. <<load-kibana-dashboards>>

-For now, you can just install Elasticsearch and Kibana on a single VM or even
+NOTE: For now, you can just install Elasticsearch and Kibana on a single VM or even
on your laptop. The only condition is that this machine is accessible from the
servers you want to monitor. As you add more shippers and your traffic grows, you
will want to replace the single Elasticsearch instance with a cluster. You will
probably also want to automate the installation process. But for now, let's
just do the fun part.

+[[elasticsearch-installation]]
=== Elasticsearch installation

http://www.elasticsearch.org/[Elasticsearch] is a distributed real-time
@@ -84,58 +85,9 @@ curl http://127.0.0.1:9200
}
----------------------------------------------------------------------

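If that request answers, Elasticsearch is running. As an optional extra check, and assuming Elasticsearch is still listening on the default `127.0.0.1:9200` used above, you can also query the cluster health endpoint:

[source,shell]
----------------------------------------------------------------------
# Optional extra check: ask Elasticsearch for its cluster health.
# On a fresh single node the status is usually green; yellow is also fine
# once indices with replicas exist.
curl 'http://127.0.0.1:9200/_cluster/health?pretty'
----------------------------------------------------------------------

A `status` of `green` or `yellow` is fine for a single-node getting-started setup; `red` means something is wrong.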
-=== Beat configuration
-
-Set the IP address and port where the shipper can find the Elasticsearch
-installation:
-
-[source,yaml]
-----------------------------------------------------------------------
-output:
-
-  elasticsearch:
-    # Uncomment out this option if you want to output to Elasticsearch. The
-    # default is false.
-    enabled: true
-
-    # Set the host and port where to find Elasticsearch.
-    host: 192.168.1.42
-    port: 9200
-
-    # Comment this option if you don't want to store the topology in
-    # Elasticsearch. The default is false.
-    # This option makes sense only for Packetbeat
-    save_topology: true
-----------------------------------------------------------------------
-
-Before starting the shipper, you should also load an
-http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-templates.html[index
-template], which is different for every Beat.
-
-The recommended template file is installed by the Beats Platform packages. Load it with the
-following command:
-
-deb or rpm:
-
-[source,shell]
-----------------------------------------------------------------------
-curl -XPUT 'http://localhost:9200/_template/packetbeat' -d@/etc/packetbeat/packetbeat.template.json
-----------------------------------------------------------------------
-
-mac:
-
-[source,shell]
-----------------------------------------------------------------------
-cd beats-1.0.0-beta3-darwin
-curl -XPUT 'http://localhost:9200/_template/packetbeat' -d@packetbeat.template.json
-----------------------------------------------------------------------
-
-where `localhost:9200` is the IP and port where Elasticsearch is listening on
-Replace `packetbeat` with the beat name that you are running.

-
-[[logstash]]
-=== Using Logstash
+[[logstash-installation]]
+=== Insert data into Elasticsearch via Logstash

The simplest architecture for the Beats platform setup consists of the Beats shippers, Elasticsearch and Kibana.
This is nice and easy to get started with and enough for networks with small traffic. It also uses the
@@ -213,53 +165,7 @@ output {
}
------------------------------------------------------------------------------

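Before restarting Logstash with this configuration, it can be worth validating the file first. The sketch below assumes Logstash 1.5 or 2.x, where `--configtest` checks a configuration and exits without starting the pipeline (newer releases renamed the flag to `--config.test_and_exit`), and uses `beats-to-es.conf` only as a placeholder for wherever you saved the configuration above:

[source,shell]
----------------------------------------------------------------------
# Validate the Logstash configuration file and exit; no events are processed.
# Replace beats-to-es.conf with the path to your own configuration file.
bin/logstash --configtest -f beats-to-es.conf
----------------------------------------------------------------------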
-
-=== Starting the Beat shipper
-
-
-There is an _init.d_ script for each Beat that can be used for starting and stopping the Beat.
-In the following examples, packetbeat is used as an example, but it can be any Beat.
-
-deb:
-
-[source,shell]
-----------------------------------------------------------------------
-sudo /etc/init.d/packetbeat start
-----------------------------------------------------------------------
-
-rpm:
-
-[source,shell]
-----------------------------------------------------------------------
-sudo /etc/init.d/packetbeat start
-----------------------------------------------------------------------
-
-mac:
-
-[source,shell]
-----------------------------------------------------------------------
-sudo ./packetbeat -e -c packetbeat.yml -d "publish"
-----------------------------------------------------------------------
-
-Packetbeat is now ready to capture data from your network traffic. You can test
-that it works by creating a simple HTTP request. For example:
-
-[source,shell]
-----------------------------------------------------------------------
-curl http://www.elastic.co/ > /dev/null
-----------------------------------------------------------------------
-
-Now check that the data is present in Elasticsearch with the following command:
-
-[source,shell]
-----------------------------------------------------------------------
-curl -XGET 'http://localhost:9200/packetbeat-*/_search?pretty'
-----------------------------------------------------------------------
-
-Make sure to replace `localhost:9200` with the address of your Elasticsearch
-instance. It should return data about the HTTP transaction you just created.
-
-
+[[kibana-installation]]
=== Kibana installation

https://www.elastic.co/products/kibana[Kibana] is a visualization application
@@ -304,6 +210,7 @@ interface.
You can learn more about Kibana in the
http://www.elastic.co/guide/en/kibana/current/index.html[Kibana User Guide].

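If you want to confirm from the command line that Kibana is up before opening it in the browser, a plain HTTP request is enough. This assumes Kibana runs locally on its default port, 5601:

[source,shell]
----------------------------------------------------------------------
# Print only the final HTTP status code; 200 means Kibana is serving its UI.
curl -sL -o /dev/null -w "%{http_code}\n" http://localhost:5601/
----------------------------------------------------------------------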
+[[load-kibana-dashboards]]
=== Load Kibana dashboards

Kibana has a large set of visualization types which you can combine to create