From Splunk Wiki
Best practices for configuring syslog input
(Or, working around the pitfalls of UDP.)
Direct UDP input offers higher performance than reading files from disk. However, because UDP is a "best effort" protocol, messages can be silently dropped when the network is congested or briefly interrupted. Using UDP for syslog is therefore not reliable, not guaranteed, and not recommended if you are concerned about potential data loss or security, or if you require the data for compliance.
Here are the recommended best practices for configuring your syslog input:
1. Write to a file and configure Splunk to monitor that file
The best practice is to have your syslog daemon write to a file and configure Splunk to monitor that file. This protects against data loss while Splunk is down, and it also lets you add the data again if you ever have to clean your index for some reason.
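As a sketch, the monitor stanza in inputs.conf might look like the following. The directory path and sourcetype here are example values, not part of the original recommendation; adjust them to wherever your syslog daemon actually writes its files.

```ini
# inputs.conf on the Splunk instance doing the monitoring
# (example path and sourcetype -- adjust to your environment)
[monitor:///var/log/syslog-collected/]
sourcetype = syslog
disabled = false
```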
2. Configure a lightweight forwarder on each of your remote hosts
Lightweight forwarders can be installed on your remote hosts to forward data to an indexer. Splunk forwarders give you the added benefit of data buffering: should the indexer go down, a properly configured forwarder will buffer the data and send it on when the indexer becomes available again.
The following queue settings need to be properly configured on the forwarder (in outputs.conf) to ensure data is not lost if the indexer becomes unavailable:
maxQueueSize = <integer>
- The maximum number of queued events (queue size) on the forwarding server.
- Defaults to 100,000 events.
usePersistentQueue = <true/false>
- If set to true and the queue is full, write events to the disk.
- Directory is specified with 'persistentQueuePath'.
- Defaults to false.
maxPersistentQueueSizeInMegs = <integer>
- The maximum size in megabytes of the disk file where the persistent queue stores its events.
- Defaults to 1000.
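Putting those settings together, a forwarder's outputs.conf stanza might look something like this. The indexer host, port, queue path, and sizes are illustrative placeholders rather than recommended values; the settings themselves are the ones described above.

```ini
# outputs.conf on the forwarder (illustrative values)
[tcpout:primary_indexers]
server = indexer.example.com:9997

# in-memory queue: number of events held if the indexer is unreachable
maxQueueSize = 100000

# spill events to disk once the in-memory queue is full
usePersistentQueue = true
persistentQueuePath = /opt/splunkforwarder/var/run/persistent_queue
maxPersistentQueueSizeInMegs = 1000
```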
3. Use syslog-ng
If installing lightweight forwarders is not an option, then upgrade to syslog-ng and use a TCP connection. This at least allows the syslog server to buffer events for a period of time if the indexer goes down. However, it does not help in the case where you have to clean the index and get the archival data back.
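As a rough sketch, a syslog-ng configuration that receives syslog over the network, writes it to a local file (which Splunk can monitor), and relays it over TCP might look like this. The paths, hostname, and ports are placeholders, and the exact syntax varies between syslog-ng versions.

```ini
# syslog-ng.conf fragment (illustrative; check the syntax for your syslog-ng version)
source s_net { udp(ip(0.0.0.0) port(514)); };

# write to disk so the events survive an indexer outage and can be re-added later
destination d_file { file("/var/log/syslog-collected/$HOST.log"); };

# relay to the Splunk indexer over TCP
destination d_splunk { tcp("indexer.example.com" port(514)); };

log { source(s_net); destination(d_file); destination(d_splunk); };
```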