Just a few notes on settings that everyone should be thinking about when creating custom sourcetypes or technology add-ons in Splunk...
Do you have these configurations in props.conf?
More Data Parsing...
ANNOTATE_PUNCT = false (if you don't need the punct field)
TZ = (if it's not part of the timestamp in your data)
CHARSET = UTF-8 (usually)
NO_BINARY_CHECK = true
Check out Splunk's documentation on props.conf for help with these settings.
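Putting those together, a props.conf stanza for a hypothetical custom sourcetype might look like this (the sourcetype name and TZ value are placeholders; adjust them for your data):

```
[my_custom_sourcetype]
# Skip generating the punct field if your users don't need it
ANNOTATE_PUNCT = false
# Set explicitly when the timezone isn't part of the raw timestamp
TZ = UTC
CHARSET = UTF-8
NO_BINARY_CHECK = true
```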
Are you extracting fields for your users at data on-boarding? You should be! Splunk tends to grow organically, and if your data isn't well-groomed when you bring it on, it may never be. Set up your users for success by identifying the fields they need and extracting them when you on-board their data.
Be sure to use either an EXTRACT in props.conf, or a REPORT in props.conf with a corresponding REGEX/FORMAT in transforms.conf.
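As a sketch, here's what both approaches look like for a hypothetical sourcetype extracting a "user" field (all names here are placeholders):

```
# props.conf -- option 1: inline extraction
[my_custom_sourcetype]
EXTRACT-user = user=(?<user>\S+)

# props.conf -- option 2: reference a transforms.conf stanza instead
# REPORT-user = extract_user

# transforms.conf -- referenced by option 2
[extract_user]
REGEX = user=(\S+)
FORMAT = user::$1
```

REPORT is worth the extra stanza when you want to reuse one extraction across multiple sourcetypes.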
For CIM compliance, use this as a guide: http://docs.splunk.com/Documentation/CIM/4.12.0/User/Howtousethesereferencetables
Or, consider using the Splunk Add-on Builder
A word on community-built/3rd-party apps and add-ons...
I feel like security is an often overlooked part of being a Splunk Engineer. This blog post is all about the importance of securing Splunk and the systems that it runs on. In addition to following the Securing Splunk guide in Splunk Docs, here are some other best practices you should be thinking about...
I recently upgraded a Splunk cluster from v6.5.2 to v7.0.1. There was one thing that wasn't covered in the release notes. After upgrading my first host (master node), I couldn't execute CLI commands. Splunk threw the following error:
$ splunk enable maintenance-mode
Couldn't complete HTTP request: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
Splunk Support admitted that they have some SSL bugs in the new release, and that this was one of them. To work around it, you can make the following edits under the [sslConfig] stanza in server.conf:
sslVersions = *,-ssl2
sslVersionsForClient = *,-ssl2
cipherSuite = TLSv1+HIGH:TLSv1.2+HIGH:@STRENGTH
Once this is done, restart Splunk and try the CLI again. You should be back in business.
I had to update server.conf on most of my Splunk server hosts (master node, search heads, deployers, deployment server, license master, etc.) but, for some reason, not on my indexers. I'm not sure why, as both my indexers and search heads run the same OS and had the same OpenSSL package installed. Hopefully this helps anyone out there with a similar issue.
Do you use Heavy Forwarders in your organization? Perhaps you have one installed on your syslog server, or on a dozen syslog servers? Chances are that your host field is already being used to identify which host generated any particular event, which is exactly what it was designed to do.
But, what if you need to identify where that data is coming from? That's where indexed fields can help out.
I like to call my indexed field "splunk_forwarder". It's not one of the fields Splunk uses by default (e.g. splunk_server), and it's easy to remember.
First, we'll create a props.conf file to tell Splunk that the new field we are going to create should apply to every host that this forwarder collects data from:
TRANSFORMS-create_splunk_forwarder_field = create_splunk_forwarder_field
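Note that the TRANSFORMS line has to live inside a props.conf stanza. A sketch, assuming you want it applied to all hosts ([host::*] here is my assumption; you could scope it to a source or sourcetype instead):

```
# props.conf on the heavy forwarder
# [host::*] matches events from every host -- narrow this if needed
[host::*]
TRANSFORMS-create_splunk_forwarder_field = create_splunk_forwarder_field
```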
Next, we'll create a transforms.conf file to actually create the new field along with its value.
[create_splunk_forwarder_field]
REGEX = .+
FORMAT = splunk_forwarder::"myforwarderhostname"
WRITE_META = true
This configuration will create a new indexed field called "splunk_forwarder" and set its value to whatever you put in the quotation marks. I typically use the hostname of the heavy forwarder, but you could also use the IP address, FQDN, etc. Note that the stanza name in transforms.conf must match the name referenced by the TRANSFORMS line in props.conf.
Now that you have your configuration, there are two ways to deploy it. The first is to create it locally under $SPLUNK_HOME/etc/system/local. This option is ideal if you're only applying it in a couple of places and you aren't using a configuration management system (e.g. Ansible, Puppet, Chef, Salt). The other method is to deploy it using a Splunk Deployment Server (DS). If you are using a DS, make sure you create an app to hold your props.conf and transforms.conf files under $SPLUNK_HOME/etc/deployment-apps/<yourapp>/local/.
Next, you need to deploy a fields.conf file to both your search heads and indexers so the new field can be searched properly. The stanza name is the name of the field itself, so it should look like this:
[splunk_forwarder]
INDEXED = true
Finally, restart Splunk on your heavy forwarder. Any new data that gets indexed will automatically have your new splunk_forwarder field!
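Once new events come in, you can verify the field from a search head. Since it's an indexed field, you can also filter on it efficiently with the :: syntax (the forwarder name below is a placeholder):

```
index=* splunk_forwarder::myforwarderhostname
| stats count by host
```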
Bonus: If you don't want to deploy props.conf and transforms.conf, you can also accomplish this via the fields.conf deployment combined with the following inputs.conf configuration on each of your heavy forwarders:
_meta = splunk_forwarder::myforwarderhostname
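For the bonus approach, the _meta setting goes inside an inputs.conf stanza. Placing it under [default] (my assumption here) applies it to every input on that forwarder:

```
# inputs.conf on the heavy forwarder
# [default] applies _meta to all inputs; you could instead set it
# per-input under a specific [monitor://...] or [tcp://...] stanza
[default]
_meta = splunk_forwarder::myforwarderhostname
```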
New to Splunk? This is a list of learning resources that I've curated for new Splunk users over the years. Feel free to share this with your fellow Splunkers!