Have you ever been in a situation where you needed to mass-edit a large number of knowledge objects on a search head cluster? Any Splunk admin who has ever had to redirect data to a new index knows how painful this can be. Today, I'm going to show you the easy way to do it, without even having to restart Splunk!
Here are the steps:
Just a few notes on settings that everyone should be thinking about when creating custom sourcetypes or technology add-ons in Splunk...
Do you have these configurations in props.conf?
More Data Parsing...
ANNOTATE_PUNCT = false (if you don't need the punct field)
TZ = (if it's not part of the timestamp in your data)
CHARSET = UTF-8 (usually)
NO_BINARY_CHECK = true
Check out Splunk's documentation on props.conf for help with these settings.
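Pulled together, a minimal sketch of a props.conf stanza using these settings might look like the following; the sourcetype name and TZ value are placeholders for illustration, so adjust them to match your own data:

[my:custom:sourcetype]
# Skip punct generation if your users don't search on it
ANNOTATE_PUNCT = false
# Only set TZ when the timestamp in the raw data doesn't include one
TZ = UTC
CHARSET = UTF-8
NO_BINARY_CHECK = true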
Are you extracting fields for your users at data on-boarding? You should be! Splunk tends to grow organically, and if your data isn't well-groomed when you bring it on, it may never be. Set up your users for success by identifying the fields they need and getting them extracted when you on-board their data.
Be sure to use either an EXTRACT in props.conf, or a REPORT in props.conf with a corresponding REGEX/FORMAT stanza in transforms.conf.
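As a rough sketch (the sourcetype, field names, and regexes below are made up for illustration), an inline extraction looks like this in props.conf:

[my:custom:sourcetype]
EXTRACT-user_action = user=(?<user>\S+)\s+action=(?<action>\S+)

The equivalent as a REPORT, split between props.conf and transforms.conf:

# props.conf
[my:custom:sourcetype]
REPORT-user_action = extract_user_action

# transforms.conf
[extract_user_action]
REGEX = user=(\S+)\s+action=(\S+)
FORMAT = user::$1 action::$2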
For CIM compliance, use this as a guide: http://docs.splunk.com/Documentation/CIM/4.12.0/User/Howtousethesereferencetables
Or, consider using the Splunk Add-on Builder.
A word on community-built/third-party apps and add-ons...
I feel like security is an often overlooked part of being a Splunk Engineer. This blog post is all about the importance of securing Splunk and the systems that it runs on. In addition to following the Securing Splunk guide in Splunk Docs, here are some other best practices you should be thinking about...
I recently upgraded a Splunk cluster from v6.5.2 to v7.0.1. There was one thing that wasn't covered in the release notes. After upgrading my first host (master node), I couldn't execute CLI commands. Splunk threw the following error:
$ splunk enable maintenance-mode
Couldn't complete HTTP request: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
Splunk Support admitted that they have some SSL bugs in the new release, and that this was one of them. To work around this, you can make the following edits in server.conf:
[sslConfig]
sslVersions = *,-ssl2
sslVersionsForClient = *,-ssl2
cipherSuite = TLSv1+HIGH:TLSv1.2+HIGH:@STRENGTH
Once this is done, restart Splunk and try the CLI again. You should be back in business.
I had to update server.conf on most of my Splunk server hosts (master node, search heads, deployers, deployment server, license master, etc.), but for some reason not on my indexers. I'm not sure why, since my indexers and search heads run the same OS and had the same OpenSSL package installed. Hopefully this helps anyone out there with a similar issue.
Do you use Heavy Forwarders in your organization? Perhaps you have one installed on your syslog server, or on a dozen syslog servers? Chances are that your host field is already being used to identify which host generated any particular event, which is exactly what it was designed to do. But, what if you need to identify where that data is coming from? That's where indexed fields can help out.
I like to call this indexed field "splunk_forwarder" because it's not one of the fields Splunk uses by default (e.g., splunk_server), and it's easy to remember.
First, we'll create a fields.conf file on our search head(s) to tell Splunk about our indexed field:
[splunk_forwarder]
INDEXED = true
Next, we'll add an inputs.conf file to our heavy forwarder that creates the new field along with its value:
# Placing this under [default] applies the field to every input on this forwarder; you could also scope it to a specific input stanza
[default]
_meta = splunk_forwarder::myforwarderhostname
This configuration will create a new indexed field called "splunk_forwarder" and will set its value to whatever you put after the double colons. In this case, it will be assigned a value of "myforwarderhostname". I typically use the hostname of the heavy forwarder, but you could also use the IP address, FQDN, etc.
Finally, restart Splunk on your heavy forwarder and search head(s). Any new data that gets indexed will automatically have your new splunk_forwarder field!
Now, you can run cool searches like this one to quickly see which forwarders are sending what data to Splunk:
| tstats count where splunk_forwarder=* index=* by splunk_forwarder sourcetype index | stats values(index) as index values(sourcetype) as sourcetype sum(count) as count by splunk_forwarder
New to Splunk? This is a list of learning resources that I've curated for new Splunk users over the years. Feel free to share this with your fellow Splunkers!