Community:ERs

About this page

This page is to help members of the community know about each other's enhancement requests (ERs). Make a note of your ER's case number and its rough functionality. Others in the community may have similar requirements and can more easily file their own ERs as "me too" references to yours, or use yours as a baseline for describing their own slightly different use case.

Case 253490 - Time-frame Forward and Back buttons

When viewing any particular time frame, I would like "forward" and "back" buttons to move to the immediately following or preceding time period of the same length. I can't find anything that does this, nor any way to enable a feature like it.

Specifically, suppose I'm poking around and I have drilled down to a specific time frame. Let's say the entire day of "July 12th, 2015." I would like a button that would move me one time-frame forward to July 13th, and another that moves me back to July 11th. Similarly, suppose I have drilled down to 10:00 to 11:00 AM July 12th, at that point the forward/back button would take me to the periods 11:00 AM to 12:00 noon, or 9:00 to 10:00 AM. Much simpler and easier than zooming out and back in again.
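
In interval terms, a rough sketch of the intended behavior (my notation, not a proposed syntax):

forward: [earliest, latest)  ->  [latest, latest + (latest - earliest))
back:    [earliest, latest)  ->  [earliest - (latest - earliest), earliest)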

Case 195705 - Make DB Connect respect SQL Aliasing

If you create an input with SQL, aliasing works when the alias is the only defined name (a la "SELECT '1' AS MyConstant, ..."), but doesn't work for SQL fields that already have a name (like "SELECT MyOldFieldName AS MyNewFieldName ...").

A workaround exists: apply a function to the field that effectively does nothing to it, but that's sort of a pain. For example:

SELECT timestamp(MyDateTimeColumn) AS MyField, max(MyIntColumn) AS MyIntField, ...

And this technique doesn't work for the rising column, which apparently must not be aliased in any way or else it breaks: using the alias means DB Connect "can't find the field", and not using the alias breaks the rising behavior.


Case 194254 - props.conf field "rename"

An enhancement request to allow for props.conf to perform search-time field renames instead of just field aliases.

Our environment sometimes has us searching against indexers we don't manage directly, and attempting to conform their indexed data to CIM fields at search time results, in some cases, in a rather massive list of fields with a lot of redundant values.

Just being able to perform a rename as part of the props.conf for that source would be a nice improvement.
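
As a rough illustration (the FIELDALIAS line is existing functionality; the rename setting name is purely hypothetical):

[their_sourcetype]
FIELDALIAS-cim_src = src_ip AS src
# hypothetical setting requested by this ER - a search-time rename that drops the original field:
FIELDRENAME-cim_src = src_ip AS src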

Case 194218 - SSL Configuration Enhancement.

The steps and docs for setting up non-default (i.e. "proper") SSL and deploying it to a forwarder so that it forwards with SSL are extremely convoluted and involved.

At the least, a wrapper should be created that asks for a handful of fields and then creates all the appropriate .pem and .csr files. This could be a script or a "real program" - I don't think the exact form matters as much as that *something* is done. Preferably at install time: "Would you like to configure custom certs for your Splunk installation to use with forwarders and other Splunk components?"
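
For reference, this is roughly the set of openssl steps such a wrapper would automate (the file names are just placeholders):

# create a private CA
openssl genrsa -out myCAPrivateKey.key 2048
openssl req -new -x509 -key myCAPrivateKey.key -out myCACertificate.pem -days 3650
# create a server key and CSR, then sign the CSR with the CA
openssl genrsa -out myServerPrivateKey.key 2048
openssl req -new -key myServerPrivateKey.key -out myServerCertificate.csr
openssl x509 -req -in myServerCertificate.csr -CA myCACertificate.pem -CAkey myCAPrivateKey.key -CAcreateserial -out myServerCertificate.pem -days 1095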

There should be a bootstrap method for the UF at installation - not to specify a set of ssl config files, which requires yet some other way to distribute said files*, but to tell the UF that during the install it should ask the deployment server for the right files to pull into the system to make it work. msiexec /i blah /RETRIEVESSLCONFIG=FromDeploymentServer

While it's probably a pipe dream, I see no reason why this can't all leverage (on Windows) an Active Directory domain, which can do all this stuff. But let's just make it easier for everyone before we go using the platform-specific stuff. :)

  • I believe someone once mentioned that doing this as a separate step lets you confirm no shenanigans have been played with your DS or whatever, but I contend that at install time you are trusting all of this anyway. You just copy the files into place and go on with life, and probably don't bother to confirm them - OR there's a system in place that confirms this all worked properly anyway. Either way, the risk seems minimal, and if it's too high for you, using this method is not a requirement; just don't set the above-mentioned option.

Case 193187 - "Replay" command.

Ability to run an historical search in past-time "Real time". Two pieces to this:

First, imagine an RT search. Events scroll in on the right, scroll off on the left. The chart updates every few seconds with new data, old data being dumped. "replay" would run that same search, at that same rate (one second per second) over historical data, so you could start an RT search that started yesterday at 10 PM and it would, second by second, show you what came in starting at 10:00 PM yesterday over the next time period. So at now+2 minutes it would be showing events coming in at 10:02 PM yesterday.

So, " | replay earliest=-1h@h" would start an hour ago, just like a current-real-time only from an hour ago.

Second is a way to speed up (or potentially slow down) the replay rate. This may be sort of achievable now with summary indexes and stuff, but that's neither very clean nor versatile, and not quite what's desired.

Imagine " | replay timecompression=10 earliest=-1h@h"

In that case, the "movie" of activity from an hour ago would start, then play back the "stuff that came in" at 10x the normal speed, showing 10 seconds of activity each second, potentially limited by system performance. So, in the same example as above, starting that search over yesterday at 10 PM, at now + two minutes the replayed logs would be at 10:20 PM instead of 10:02 PM.

We could imagine "timecompression=0.1" to play back stuff slower than normal speed, like if you wanted to actually watch 10 seconds of, say, an attack at a rate slow enough to see what's going on. This, IMO, could almost be more useful.

Case 193094 - "Search Proxy"

Our environment has multiple different contracted entities all running independent instances of Splunk for their datacenter + our security collection requirements. We do not centrally manage those instances; we only have the right to peer with them from a central search head.

This does create a couple of potential issues we'd like to mitigate:

 1.  Requires a firewall rule change on both ends any time a new indexer is added or moves
 2.  Creates the potential for an abusive user at the central site to use the admin role improperly and affect the other entity's indexers (possibly even deleting data)

So we thought, if a search head (where roles can be applied) had a way for another search head to spawn a search job through it (adjusting the bundles in flight to apply role restrictions on the receiving end of the search as it passes through the proxy), we could potentially have a less risky implementation. Though I can imagine the technical trickery required just saying it out loud.

Case 192250 - Native support for common multifactor auth solutions

An enhancement request to improve native Splunk support for integrating with various authentication / authorization solutions (or combinations).

Two in particular would be:

1. RSA SecurID (via RADIUS, etc...) for authentication, then authorization against LDAP/Active Directory (to determine group membership, etc...)
2. CAC/PIV support via Windows integrated authentication, or the ability to challenge for a PIV/CAC cert + PIN, etc...

The first one is a higher priority, as we currently must use multifactor to meet federal requirements and have a bit of a kludgey solution in place that forces us to decentralize account management in less desirable ways.

Case 192247 - Sharing or locally storing accelerated data model data

An enhancement request to enable sharing of accelerated data model information between disparate search heads. The changes in 6.0 + ESS 3.0 - moving acceleration data back to the indexers rather than keeping local summaries on the search heads - place a much bigger load on indexer storage that only helps the ES search head.

An alternative would be a way to store data model acceleration output locally on the ES search head, which was spec'd for the high disk I/O required under ES 2.0.

Case 189711 - Ability to create an "underlay" in timecharts and similar things.

When looking at a timechart, it would be very useful to change the background chart color in certain portions of the timechart based on criteria in your search. This could be used to, say, delineate maintenance windows by changing the background color during those periods so that they stand out. For that particular use case, if odd behavior is noticed during such a period, it will be far easier to determine what actions, if any, to take.

One mechanism that may work would be an added setting for the timechart command which accepts some sort of color value and uses it to set the background pixel color on any addressable vertical pixel stack. This would allow arbitrary changes to background colors.

... | eval maintenance_color=if(_time>=1410275071 AND _time<1410361471,"#FFCCCC",0) | timechart count x-axis-bg-color=maintenance_color

The above example would, on any chart that contained the time period mentioned, have the period between the times shown as some sort of pink or light red (#ffcccc) and leave the rest of the chart the default background color (0).

One could imagine far more complex logic to determine multiple background colors using "case()" or other functions.

An enhancement may be to allow setting transparency of said color, either globally or as "x-axis-bg-transparency".


Cases 64159, 98041 - send diag to splunk.com directly from an indexer

When Splunk support asks for a diag, the resulting files can often be very large. Especially for a distributed Splunk install, it can be burdensome for the admin to:

  1. collect a diag on each node (10-60 min)
  2. download this collection of diags to your laptop (1-10 min)
  3. upload each one manually to the Splunk support portal (10-60 min)

This ER would allow for a Splunk CLI command to create and upload diags to a case directly, in one shot, using a syntax similar to:

$ splunk cmd senddiag --caseno=12345
Splunk.com userid: myusername
Splunk.com password: mypassword
Attaching <diag_name> to support case 12345 .. DONE
$ 

This ER would improve the administrator experience and greatly reduce the amount of time expended in gathering diagnostic details for support.

The implementation should also support use of a proxy server -- HTTP[S] or (perhaps) SOCKS -- using either command-line args or the well-known proxy environment variables. Being able to do proxy authentication would also be necessary for some customer environments.
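
For illustration only (these flags and the environment-variable handling are hypothetical, just like the command itself):

$ https_proxy=http://proxy.example.com:8080 splunk cmd senddiag --caseno=12345
$ splunk cmd senddiag --caseno=12345 --proxy=proxy.example.com:8080 --proxy-user=myusername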

Case 146189 - include "max_depth" option for monitor inputs

When setting a monitor on a directory with a large number of files or very deep recursion, it can create high I/O load on a server by looking at files that may not be relevant. To exclude these deep directories I currently have to create multiple monitor stanzas where a single one would do if we were able to limit how deep it looks. For example, on HP-UX, [monitor:///var/adm] can pick up many files under /var/adm/sw and its subdirectories that are mostly binary and don't match the default whitelist in the Splunk_TA_nix default inputs.conf.
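
A minimal sketch of the requested option (the setting name is hypothetical and does not exist today):

[monitor:///var/adm]
# hypothetical: stop recursing more than 2 directory levels below /var/adm
max_depth = 2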

Case 169607 - Inputs.conf option to not cross filesystem boundaries

Splunk's existing monitor / tailing code has several options for controlling what files to index (or not). These work great in most cases, but I think one more is warranted. The monitor stanzas need to support a configuration element for whether or not to recurse into a different filesystem. This would be similar to how du, tar, and other Unix commands work.

The primary advantage here would be an easy way to keep a monitor stanza from recursing through a giant NFS mount or other such deep filesystem tree that is mounted in a subdirectory of a subdirectory of a monitored directory.
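
A minimal sketch, again with a hypothetical setting name (analogous to "du -x" or "tar --one-file-system"):

[monitor:///var]
# hypothetical: do not descend into directories that live on a different filesystem
crossFilesystemBoundaries = false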

Case 147958 - deterministic order for calculated fields / allow calculated fields to refer to each other

As of current versions of Splunk, calculated fields have no way to support dependencies among them. That is, one calculated field cannot be based on the value of another.

The ability to provide determinism in the order of evaluation of calculated fields, and therefore allow one calculated field to depend on the value of another, would make some operations much easier to perform.

The request would be for an alternate syntax for calculated fields that lets you specify a precedence / priority on when calculated fields are computed. Something like:

EVAL-1-foo = bar+2
EVAL-2-baz = if(foo > 11,"OK","BAD")
EVAL-xxx = coalesce(status,baz)

In this suggestion, any *numbered* EVAL-${SEQ}-${NAME} expressions would run in a defined order based on their integer ${SEQ}. After all defined-order EVALs are run, all un-numbered EVALs run nondeterministically just like existing functionality.


Case 140686 - Logical intersection of props stanzas

I would like to be able to match on more than one key in a props.conf stanza. For example:

[thegreatestsourcetype] && [host::thebesthost]

And have it only apply to those streams which are both the greatest sourcetype and the best host.

Case 124412 - Deployment clients should have a persistent storage

Currently, when an app is pushed from the deployment server to a client, the app on the client is overwritten.

This can sometimes be problematic if there is a particular file that is created and updated by the app on a per host basis and we do not want to have this file overwritten.

This ER requests a method by which certain files can be preserved, unchanged, across deployment server updates.
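
Purely as a hypothetical sketch of how this could look in serverclass.conf (the setting does not exist; only the stanza format is standard):

[serverClass:allForwarders:app:my_custom_app]
# hypothetical: files the client keeps untouched when the app is re-deployed
preserveOnClient = local/per_host_state.conf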

Case 128241 - Windows Universal Forwarders: $SPLUNK_HOME not set; can't run btool

Splunk universal forwarders do not set a system-wide $SPLUNK_HOME variable on Windows.

This causes issues for things like btool, which refuse to run, complaining that $SPLUNK_HOME isn't set.

We can manually set it with the "setx" command and btool works.
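
For example, from an elevated prompt (assuming the default UF install path; adjust as needed), after which new console sessions pick it up:

C:\> setx SPLUNK_HOME "C:\Program Files\SplunkUniversalForwarder" /M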

Case 122363 - Single URI for the most recent version's changelog

Just need to have a pointer set up. There is already one for known issues:

http://answers.splunk.com/answers/90264/uri-for-latest-changelog

Better flexibility to have 2 or more "groupby" fields but still use a "splitby" field

Best explained here: http://answers.splunk.com/answers/119215/whats-a-good-way-to-basically-end-up-with-more-than-1-group-by-field-in-the-chart-command?page=1&focusedAnswerId=119218#119218

Splunk 6 took some steps backwards in terms of how you step back and forward in search history

In previous versions of Splunk, in the main search UI, you could leave your cursor in the search bar and, as you typed, hitting Return would dispatch the search, and hitting Ctrl-Z or Ctrl-Y would step you backwards or forwards through your edit history in the textbox. You could thus very rapidly and intuitively back out of a mistake by hitting Ctrl-Z Return, and you could step back *many* edits very quickly and with instantaneous feedback by doing Ctrl-Z-Z-Z-Z-Z-Z Return. (By instantaneous I mean literally instantly, because the Splunk code wasn't involved at all in the Ctrl-Z; it's just the textbox control doing it itself.)

There were back button implementations where this nice behavior was preserved despite the fact that the location bar was always kept up to date with your most recent dispatched search. (Specifically, both the Sideview Utils back button implementation and the core Splunk one preserved this nice behavior.)

As of Splunk 6, hitting Return reloads the whole page, and as a side effect this blows away the undo/redo history of the control. So now Ctrl-Z will only take you back as far as the most recent dispatch, and then you have to figure out that you need to use your browser's back button. Worse, the back button takes a few seconds to reload everything, so what used to be a nearly instantaneous process is now very slow and broken up. You have to hit the back button a few times, wait for it to re-render the contents of the search bar, realize you went too far or not far enough, and hit the back/forward button a few more times.

Case 133059 - Ability to limit an index/volume by remaining free space

It would be very nice if there were a corresponding config option to maxVolumeDataSizeMB, something like minVolumeSizeFreeMB (or maybe a percentage). Instead of calculating the total used size of the volume in question, it would use all of the disk except for what it was told to leave free. Obviously, if it were done as a percentage it doesn't matter whether it's "used" or "free".
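
A minimal sketch in indexes.conf terms (maxVolumeDataSizeMB already exists; the min-free setting is the hypothetical one requested here):

[volume:hotwarm]
path = /splunk/hotwarm
maxVolumeDataSizeMB = 500000
# hypothetical: always leave at least 50 GB free on the underlying filesystem
minVolumeSizeFreeMB = 51200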
