Community:Best Practice For Configuring Syslog Input
From Splunk Wiki
Please always consult the latest Splunk documentation on this topic to ensure you have up-to-date information.
(Or, working around the pitfalls of UDP.)
Direct UDP input offers higher performance than reading files from disk. However, because UDP is a "best effort" protocol, messages can be silently dropped when the network is congested or suffers a hiccup. Using UDP for syslog is therefore unreliable, offers no delivery guarantee, and is not recommended if you are concerned about data loss or security, or if you require the data for compliance.
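For contrast, this is what a direct UDP syslog input looks like in inputs.conf (port 514 is the conventional syslog port; the settings shown are illustrative) — the practices below are alternatives to this configuration:

```ini
# inputs.conf -- direct UDP syslog input (discouraged; shown for contrast)
[udp://514]
sourcetype = syslog
connection_host = ip
```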
Here are the recommended best practices for configuring your syslog:
1. Write to a file and configure Splunk to monitor that file
The best practice is to have your syslog daemon write to a file that Splunk monitors. This protects against data loss while Splunk is down, and it also lets you re-add the data if you ever have to clean your index.
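For example, if your syslog daemon writes incoming messages under /var/log/remote-syslog (a directory chosen here purely for illustration — match it to your own syslog configuration), a monitor stanza in inputs.conf could pick the files up:

```ini
# inputs.conf -- monitor files written by the local syslog daemon
# (the directory path is illustrative)
[monitor:///var/log/remote-syslog]
sourcetype = syslog
disabled = false
```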
2. Configure a heavyweight forwarder on each of your remote hosts
Forwarders can be configured on your remote hosts to forward data to an indexer. Splunk forwarders give you the added benefit of data buffering. Should the indexer go down, a properly configured forwarder will buffer the data and send it on when the indexer becomes available.
The following queue settings will need to be properly configured on the forwarder (in outputs.conf) to ensure data does not get lost in the event that the indexer is not available:
maxQueueSize = <integer>
* The maximum number of queued events (queue size) on the forwarding server.
* Defaults to 100,000 events.
usePersistentQueue = <true/false>
* If set to true and the queue is full, write events to disk.
* The directory is specified with 'persistentQueuePath'.
* Defaults to false.
persistentQueuePath = <absolute_path_that_must_exist>
* Specifies a directory for the storage of persistent queues if usePersistentQueue is set to true.
* The path specified must be an absolute path and must exist.
* Splunk must have sufficient access rights to create files in the specified directory.
* Defaults to $SPLUNK_HOME/var/run/splunk/persistent_tcp_queue.
maxPersistentQueueSizeInMegs = <integer>
* The maximum size, in megabytes, of the disk file where the persistent queue stores its events.
* Defaults to 1000.
Splunk will block if it hits either the maxQueueSize limit or the maxPersistentQueueSizeInMegs limit. Calculate appropriate values based on a combination of factors: which limit you want to trigger blocking, how much memory is available on the system, and how much disk space is available.
Setting maxQueueSize to a value greater than 100,000 should be calculated carefully based on the average size of your syslog events and how much memory you can spare. Allowing for 3-4 times the memory required per actual event size, a 250-byte event requires about 1 KB of memory. Here are some sample calculations:
The default maxQueueSize requires approximately 100 MB of memory: 1 KB (per event) x 100,000 events (default maxQueueSize) = 100 MB memory
A maxQueueSize of 1,000,000 events will require 1 GB of memory: 1 KB (per event) x 1,000,000 (maxQueueSize) = 1 GB
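Putting the settings together, a forwarder's outputs.conf stanza might look like the following sketch. The indexer host, port, and queue path are illustrative; the queue values shown are the defaults discussed above:

```ini
# outputs.conf on the forwarder -- buffer in memory, spill to disk when full
[tcpout:primary_indexers]
server = indexer.example.com:9997
maxQueueSize = 100000
usePersistentQueue = true
persistentQueuePath = /opt/splunk/var/run/splunk/persistent_tcp_queue
maxPersistentQueueSizeInMegs = 1000
```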
Note: Using persistentQueue settings in version 4.0 and later is not recommended except for very specific use cases involving syslog. Contact Support for assistance.
3. Use syslog-ng
If installing heavyweight forwarders is not an option, upgrade to syslog-ng and use a TCP connection. This at least allows the syslog server to buffer events for a period of time if the indexer goes down. However, it does not help when you have to clean the index and re-add the archived data.
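As a sketch, a syslog-ng client could forward its local log stream over TCP like this. The destination IP and port are illustrative, and the exact source driver (system(), unix-stream(), etc.) depends on your syslog-ng version and platform:

```conf
# syslog-ng.conf sketch -- forward local syslog over TCP
# (destination address and port are illustrative)
source s_local { system(); internal(); };
destination d_tcp { tcp("10.0.0.5" port(9514)); };
log { source(s_local); destination(d_tcp); };
```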