Community:Limiting Splunk Memory Linux ControlGroups

From Splunk Wiki


Historically, Linux systems provided no strong tools to constrain memory used by processes in aggregate (by user, by program name, or otherwise). In recent years, Linux has gained features to do precisely this.

Limiting Splunk Memory with Linux Control Groups

While the memory-constraining features of control groups can be managed many ways, one way is to use the cgroup tools. These are available on RHEL 6 or RHEL 7 (CentOS 6 & 7) in the libcgroup and libcgroup-tools packages, which may be installed with 'yum install libcgroup' and 'yum install libcgroup-tools'. On Debian or Ubuntu, a similar result may be achieved with: 'apt-get install cgroup-tools'.

This simple recipe limits memory in aggregate by user, so it is effective if you run Splunk as a dedicated user such as 'splunk', which is recommended. If you need to bind to low ports, you can do this via Linux capabilities (not covered here -- see the Linux documentation).

Ensure the memory cgroup filesystem is available

(Not needed on RHEL 7 / CentOS 7, where systemd already sets up a memory cgroup hierarchy.) On older systems, or on a Debian system set up without systemd for example, have cgconfig handle mounting it with a stanza such as:

In /etc/cgconfig.conf:

mount {
    memory = /sys/fs/cgroup/memory;
}
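With the mount stanza in place, you can check that the kernel knows about the memory controller and that the hierarchy is mounted where cgconfig.conf expects it (the path below matches the stanza above):

```shell
# Is the memory controller compiled into this kernel?
grep -w memory /proc/cgroups

# Is the hierarchy mounted at the expected path?
ls /sys/fs/cgroup/memory 2>/dev/null || echo "memory cgroup not mounted yet"
```

If the second command prints the "not mounted" message, start cgconfig (see below) or mount it manually before proceeding.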

Create a group to limit the memory

Next we define a group to limit Splunk memory in a particular way:

In /etc/cgconfig.conf:

group splunk_limited_memory {
    memory {
        memory.limit_in_bytes = 20G;
        # this order is required
        memory.memsw.limit_in_bytes = 21G;
    }
}

This limits the total memory usable by all such processes to 20 GB, and also allows them 1 GB of swap (memory.memsw.limit_in_bytes counts memory plus swap, which is why it must be set second and set higher). This type of configuration would make sense on a system that has, say, 22 GB of real memory.
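cgconfig accepts the 'G' suffix, but the kernel stores and reports plain byte counts in memory.limit_in_bytes and memory.memsw.limit_in_bytes. A quick sketch of the values you should see reflected back:

```shell
# 20G and 21G as the kernel will report them (1G = 1024^3 bytes)
mem_limit=$((20 * 1024 * 1024 * 1024))    # memory.limit_in_bytes
memsw_limit=$((21 * 1024 * 1024 * 1024))  # memory.memsw.limit_in_bytes

echo "$mem_limit $memsw_limit"
# -> 21474836480 22548578304
```

After cgconfig starts, 'cat /sys/fs/cgroup/memory/splunk_limited_memory/memory.limit_in_bytes' should show the first number.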

Place splunk processes within the group

Here we rely on the splunk user, ensuring all of Splunk's programs are kept within the group and thus within the limit.

We create a rule for cgrulesengd to automatically place our processes in this group.

In /etc/cgrules.conf:

 splunk    memory     splunk_limited_memory

This causes all processes owned by the splunk user to be limited to the 20 GB maximum described in cgconfig.conf.
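You can confirm which cgroup a given process actually landed in by reading its /proc entry. Once cgred has placed a splunk-owned process, its memory line should end in /splunk_limited_memory; shown here for the current shell for illustration:

```shell
# Each line is hierarchy-id:controllers:path; look for the 'memory'
# controller's path. For a placed splunk process, expect
# .../splunk_limited_memory on that line.
cat /proc/self/cgroup
```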

Bring up the services

On RHEL 7 or CentOS 7, you could ensure the services are started by rebooting, or you can issue two commands:

First: 'systemctl start cgconfig' to create our group (this reads and applies cgconfig.conf).
Second: 'systemctl start cgred' to launch cgrulesengd, which enforces the placement of the 'splunk' user's processes.

Processes are moved into the correct group via PAM, so you will need a fresh 'splunk' user login after these rules are in effect for them to be applied.

Of course best practice is to ensure your system starts up as you want by actually restarting it.
