Do you use Heavy Forwarders in your organization? Perhaps you have one installed on your syslog server, or on a dozen syslog servers? Chances are that your host field is already being used to identify which host generated any particular event, which is exactly what it was designed to do.
But what if you also need to identify which forwarder that data came through? That's where indexed fields can help.
I like to call my indexed field "splunk_forwarder". It isn't one of the fields Splunk uses by default (e.g. splunk_server), and it's easy to remember.
First, we'll create a props.conf file to tell Splunk that the new field should apply to every host this forwarder collects data from. A [host::*] stanza matches all hosts:

[host::*]
TRANSFORMS-create_splunk_forwarder_field = create_splunk_forwarder_field
Next, we'll create a transforms.conf file to actually create the new field along with its value.
[create_splunk_forwarder_field]
# The stanza name must match the TRANSFORMS value in props.conf
REGEX = .+
FORMAT = splunk_forwarder::"myforwarderhostname"
WRITE_META = true
This configuration will create a new indexed field called "splunk_forwarder" and set its value to whatever you put inside the quotation marks. I typically use the hostname of the heavy forwarder, but you could also use the IP address, FQDN, etc.
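Once data is flowing, the field works like any other in a search. A quick sketch, assuming the placeholder value from above and an index named main:

index=main splunk_forwarder="myforwarderhostname" | stats count by host

This gives you a per-host event count for everything that came through that particular forwarder.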
Now that you have your configuration, there are two ways to deploy it. The first is to create it locally under $SPLUNK_HOME/etc/system/local. This option is ideal if you're only applying it in a couple of places and you aren't using a configuration management system (e.g. Ansible, Puppet, Chef, Salt). The other method is to deploy it using a Splunk Deployment Server (DS). If you are using a DS, make sure you create an app to hold your props.conf and transforms.conf files under $SPLUNK_HOME/etc/deployment-apps/<yourapp>/local/, and map it with a serverclass.conf entry as sketched below.
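If you go the DS route, you'll also need a serverclass.conf entry to map the app to your heavy forwarders. A minimal sketch, assuming a server class named heavy_forwarders and the <yourapp> placeholder from above:

[serverClass:heavy_forwarders]
whitelist.0 = myforwarderhostname

[serverClass:heavy_forwarders:app:<yourapp>]
restartSplunkd = true

The restartSplunkd setting saves you a manual restart after the app lands on the forwarder.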
Finally, restart Splunk on your heavy forwarder. Any new data that gets indexed will automatically have your new splunk_forwarder field! Keep in mind that indexed fields are only written at index time, so events indexed before the change won't have it.
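If you restart from the command line, it's the usual:

$SPLUNK_HOME/bin/splunk restart

And since splunk_forwarder is a true indexed field, you can verify it with tstats, which only works against indexed fields:

| tstats count where index=* by splunk_forwarder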