From Splunk Wiki
Estimate the size of your Splunk index and associated data
This topic discusses a method for calculating storage needs for your Splunk deployment.
When Splunk indexes your data, the resulting data falls into two basic categories: the compressed raw data that is persisted and the indexes that point to this data. With a little experimentation, you can estimate how much disk space you will need.
Typically, the compressed, persisted data that Splunk extracts from your data inputs amounts to approximately 10% of the raw data that comes into Splunk. The indexes created to access this data can be anywhere from 10% to 110% of the incoming data. This value is strongly affected by how many unique terms occur in your data. Depending on the characteristics of your data, you might want to tune your segmentation settings. For an introduction to how segmentation works and how it affects your index size, you can also watch the video on segmentation by one of Splunk's lead developers.
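As a back-of-the-envelope sketch of this arithmetic (the daily volume and the 50% index ratio below are illustrative assumptions, not measured values; measure your own ratios as described in this topic):

```shell
# Rough sizing estimate using the rule-of-thumb ratios above.
# daily_mb and idx_pct are assumptions for illustration only.
daily_mb=10240   # assumed incoming raw data: 10 GB/day
raw_pct=10       # compressed rawdata: roughly 10% of incoming data
idx_pct=50       # tsidx files: 10%-110% of incoming data; assume 50% here
days=30          # retention window to estimate for

# Disk consumed per day is the sum of both components.
per_day=$(( daily_mb * (raw_pct + idx_pct) / 100 ))
echo "Estimated disk per day: ${per_day} MB"
echo "Estimated disk for ${days} days: $(( per_day * days )) MB"
```

Once you have measured the real ratios for a sample of your data, substitute them for the assumed percentages.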
The best way to get an idea of your index size is to experiment: install a copy of Splunk somewhere, index a representative sample of your data, and then check the sizes of the resulting directories.
Once you've indexed your sample:
1. Go to a bucket directory under your index's database directory; the rawdata directory and the *.tsidx files live there.
2. Run du -sh rawdata to determine how large the compressed persisted raw data is. This is the persisted data to which the items in the index point. Typically, its size is about 10% of the size of the sample data set you indexed.
3. Run du -ch *.tsidx and look at the last total line to see the size of the index.
4. Add the two values together. This is the total size of the index and associated data for the sample you have indexed. You can now use this to extrapolate the size requirements of your Splunk index and rawdata directories over time.
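The steps above can be sketched as a small POSIX shell function. The function name and its argument are illustrative; point it at one bucket directory under your index's database directory:

```shell
# estimate_bucket: sum the compressed rawdata and the .tsidx files
# inside one index bucket directory, reporting sizes in KB.
# The function name is a local convention, not a Splunk tool.
estimate_bucket() {
    bucket=$1
    # Step 2: size of the compressed, persisted raw data.
    raw_kb=$(du -sk "$bucket/rawdata" | awk '{print $1}')
    # Step 3: combined size of the index files (last "total" line of du -c).
    idx_kb=$(du -ck "$bucket"/*.tsidx | awk '/total$/ {print $1}')
    # Step 4: add the two values together.
    echo "rawdata: ${raw_kb} KB"
    echo "tsidx:   ${idx_kb} KB"
    echo "total:   $(( raw_kb + idx_kb )) KB"
}
```

Running this over each bucket of your test index and summing the totals gives the same figure as the manual steps.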