Zscaler Splunk Deployment Guide
Terms and Acronyms
Acronym Definition
API Application Programming Interface
CA Central Authority (Zscaler)
CIM Common Information Model (Splunk-defined data model)
CSV Comma-Separated Values
DLP Data Loss Prevention
DNS Domain Name System
DPD Dead Peer Detection (RFC 3706)
GRE Generic Routing Encapsulation (RFC 2890)
ICMP Internet Control Message Protocol
IKE Internet Key Exchange (RFC 2409)
IPS Intrusion Prevention System
IPSec Internet Protocol Security (RFC 2411)
LSS Log Streaming Service
NSS Nanolog Streaming Service
NOC Network Operations Centre
PFS Perfect Forward Secrecy
PSK Pre-Shared Key
SaaS Software as a Service
SIEM Security Information and Event Management
SOAR Security Orchestration, Automation, and Response
SOC Security Operations Centre
SSL Secure Sockets Layer (RFC 6101)
TCP Input Method of ingesting data into Splunk over a TCP connection
TLS Transport Layer Security
VDI Virtual Desktop Infrastructure
XFF X-Forwarded-For (RFC 7239)
ZDX Zscaler Digital Experience (Zscaler)
ZIA Zscaler Internet Access (Zscaler)
ZPA Zscaler Private Access (Zscaler)
ZPC Zscaler Posture Control (Zscaler)
Zscaler Overview
Zscaler (NASDAQ: ZS) enables the world’s leading organizations to securely transform their networks and applications for
a mobile and cloud-first world. Zscaler Internet Access (ZIA) and Zscaler Private Access (ZPA) services create fast, secure
connections between users and applications, regardless of device, location, or network. Zscaler delivers its services 100%
in the cloud and offers the simplicity, enhanced security, and improved user experience that traditional appliances or
hybrid solutions can’t match. Used in more than 185 countries, Zscaler operates a massive, global cloud security platform
that protects thousands of enterprises and government agencies from cyberattacks and data loss. To learn more, see
Zscaler's website.
Splunk Overview
Splunk (NASDAQ: SPLK) is a world leader in data analytics, security incident management, orchestration and automation.
Zscaler traffic, status and access logs provide a rich and voluminous source of data for ingesting into the Splunk platform.
You can then use this information to enrich other data sources and generate interesting events related to business
services and technology operations. To learn more, refer to Splunk's website.
Audience
This guide is for network administrators, endpoint and IT administrators, and security analysts responsible for deploying,
monitoring, and managing enterprise security systems. This document is also for those interested in learning the details
of how Zscaler and Splunk interact, and it provides guidance for integrating Zscaler and Splunk.
Notice that appendices are provided for those needing a foundational exposure to Splunk and NSS as it relates to this
integration. For additional product and company resources, see:
• Zscaler Resources
• Splunk Resources
• Appendix F: Requesting Zscaler Support
Software Versions
This document was authored using the latest versions of ZIA, ZPA, and Splunk Cloud.
If you have created searches, reports, dashboards, or other useful functionality that could be used with the app, submit
them for inclusion into the next version of the Zscaler Splunk App:
• Email: [email protected]
• From the ZIA Admin Portal, go to Zscaler Community Products > Cloud Reporting and Management.
If you are using this guide to implement a solution at a government agency, some of the content might be different
for your deployment. Efforts are made throughout the guide to note where government agencies might
need different parameters or input. If you have questions, contact your Zscaler Account team.
ZIA Overview
ZIA is a secure internet and web gateway delivered as a service from the cloud. Think of it as a secure internet onramp—
all you do is make Zscaler your next hop to the internet via one of the following methods:
• Setting up a tunnel (GRE or IPSec) to the closest Zscaler data center (for offices).
• Forwarding traffic via our lightweight Zscaler Client Connector or PAC file (for mobile employees).
No matter where users connect—a coffee shop in Milan, a hotel in Hong Kong, or a VDI instance in South Korea—they get
identical protection. ZIA sits between your users and the internet and inspects every transaction inline across multiple
security techniques (even within SSL).
You get full protection from web and internet threats. The Zscaler cloud platform supports Cloud Firewall, IPS,
Sandboxing, DLP, and Isolation, allowing you to start with the services you need now and activate others as your needs grow.
ZPA Overview
ZPA is a cloud service that provides secure remote access to internal applications running in the cloud or the data center,
using a Zero Trust framework. With ZPA, applications are never exposed to the internet, making them completely invisible
to unauthorized users. The service enables the applications to connect to users via inside-out connectivity rather than
extending the network to them.
ZPA provides a simple, secure, and effective way to access internal applications. Access is based on policies created by
the IT administrator within the ZPA Admin Portal and hosted within the Zscaler cloud. On each user device, software
called Zscaler Client Connector is installed. Zscaler Client Connector ensures the user’s device posture and extends a
secure microtunnel out to the Zscaler cloud when a user attempts to access an internal application.
Zscaler Resources
The following table contains links to Zscaler resources based on general topic areas.
The following table contains links to Zscaler resources for government agencies.
Splunk Resources
The following table contains links to Splunk support resources.
Application Architecture
Zscaler’s integration with Splunk follows Splunk’s well-defined app framework. A Splunk app is designed
specifically to be installed and run in a Splunk environment. The app is separated into two discrete parts: the technical
add-on and the Zscaler Splunk App.
The app takes advantage of several technologies in order to ingest data from Zscaler, which consists of log streams
generated from customer environments, and can also retrieve data using Zscaler APIs. The following diagram shows these
various interfaces.
Data Models
Zscaler and Splunk joint customers require Zscaler logging data to be in a format that is compatible with Splunk’s
Common Information Model (CIM) data model. The Zscaler Technical Add-On maps all Zscaler NSS fields into
CIM-compatible types, as well as tagging all events that are relevant to specific CIM data models.
NSS virtual machines attach to the Zscaler cloud via outbound connections and receive encrypted and tokenized logs
to stream into customer log collection and SIEM platforms. The following table describes the various log streams.
• NSS Feed Output Format: Web Logs (government agencies, see NSS Feed Output Format: Web Logs).
• Adding NSS Feeds for Tunnel Logs (government agencies, see Adding NSS Feeds for Tunnel Logs).
• Adding NSS Feeds for Alerts (government agencies, see Adding NSS Feeds for Alerts).
There is a dedicated Splunk event type for each of these log streams, detailed in the Source Types section.
For details on all possible fields and formats, see:
• NSS Feed Output Format: Firewall Logs (government agencies, see NSS Feed Output Format: Firewall Logs).
• NSS Feed Output Format: DNS Logs (government agencies, see NSS Feed Output Format: DNS Logs).
• Adding NSS Feeds for Alerts (government agencies, see Adding NSS Feeds for Alerts).
These log streams have a dedicated Splunk event type, detailed in the Source Types section.
Zscaler APIs
Zscaler runs a number of open APIs for customer use, which include read and write functions. The current Splunk
integration focuses on read functions for Zscaler Sandbox detonation reports and Zscaler Admin audit logs. Full
specifications for the Zscaler API are found in the API Reference (government agencies, see API Reference).
Splunk makes use of these APIs via Splunk modular inputs. Both Sandbox and audit logs have dedicated Splunk event
types and are detailed in the Source Types section.
SOAR has existing write integrations to the Zscaler API; details of these integrations are not in scope for this
document.
Python SDK
The Splunk App contains several scripts that interface with the Zscaler API, including a fork of a private SDK used by a
number of Zscaler technology partners. An unofficial version of the original SDK is located at the Zscaler Python SDK
GitHub repository.
The raw scripts and SDK are found in the bin/ directory of the Technical Add-On.
Sandbox
The Zscaler Sandbox is used by customers to detonate unknown file samples and determine whether there is malicious
behavior.
When the Sandbox analyzes files, the end user recipient might be quarantined or allowed to download the file. The
outcome is determined by customer-specific Sandbox policies. The latest policy constructs are found in Configuring the
Sandbox Policy (government agencies, see Configuring the Sandbox Policy).
Sandbox detonation results are significant to customers because a malicious verdict indicates a possibly compromised
user or risky user behavior that could jeopardize business. As such, Zscaler offers full Sandbox reporting as a product
feature and includes the capability to pull detailed sandbox post-detonation reports via API calls. Zscaler’s Splunk
technical add-on ingests these events, and the Zscaler Splunk App produces a number of derived reports.
Via correlation, Splunk ES can find a notable event, generate a response action, and engage a SOAR platform such as
Splunk SOAR. Note that SOAR has existing read and write integrations to the Zscaler API, but details of these
integrations are not in scope for this document.
Audit Logs
An audit log is generated as administrators access the Zscaler console and make changes within the console. Zscaler
makes these events available via the Zscaler API because they often must be archived outside of Zscaler. You can
configure the Splunk Technical Add-On to ingest these logs.
When configured, the modular input tracks the state of the most recent log retrieval, then requests the delta for any logs
generated since the last successful retrieval.
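The state tracking described above can be sketched as follows. This is an illustrative Python sketch, not the Add-On's actual code: `fetch_audit_logs` stands in for the Zscaler audit-log API call, and the checkpoint file name is hypothetical.

```python
import json
import os
import time

CHECKPOINT_FILE = "zscaler_audit.checkpoint"  # hypothetical state file

def load_checkpoint(default_start):
    """Return the timestamp of the last successful retrieval, or a default."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)["last_time"]
    return default_start

def save_checkpoint(last_time):
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump({"last_time": last_time}, f)

def collect(fetch_audit_logs, now=None):
    """Request only the delta of logs generated since the last successful run."""
    if now is None:
        now = int(time.time() * 1000)  # epoch milliseconds
    start = load_checkpoint(default_start=now - 3600 * 1000)  # first run: last hour
    events = fetch_audit_logs(start_time=start, end_time=now)
    save_checkpoint(now)  # advance state only after a successful fetch
    return events
```

On each scheduled run, the input asks only for the window between the stored checkpoint and the current time, so no audit events are fetched twice.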
The Add-On is a requirement for the Zscaler Splunk App because the app takes advantage of many configurations and
components defined in the Add-On.
Source Types
The following source types are defined in the Zscaler Technical Add-On, and cover the current possible inputs. Actual use
of the source types might vary depending on the bundle and features to which the Zscaler customer subscribed.
There are no pre-configured data inputs. Data inputs must be configured by the Splunk Admin according to the Network
Inputs and Modular Inputs sections. Splunk’s best practice is to not permit the definition of network inputs in a Splunk
app.
Macros
Splunk macros are shortcuts for frequently used sets of search commands. The following search macros are defined in
the Zscaler Technical Add-On and are used extensively throughout the Add-On and App. Zscaler suggests that any
additional searches and reports created by Splunk admins and operators leverage these macros.
You might need to modify these macros depending on your Splunk configuration. The Macro Modification section
contains more information.
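As an illustration of macro usage, a macro is invoked in SPL by wrapping its name in backticks. For example, a simple search counting web transactions per user with the z-web macro defined by the Add-On (the search itself is illustrative, not from this guide):

```
`z-web` | stats count by user
```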
Splunk CIM
Zscaler implemented the Splunk CIM to integrate tightly with Splunk Enterprise Security. The Zscaler Technical Add-On
defines all the necessary field aliases and event tags to be compatible with Splunk’s CIM.
Modular Inputs
Zscaler’s Technical Add-On takes advantage of Splunk’s modular inputs to connect to Zscaler’s APIs for Sandbox and
admin logs. You can configure each API separately, and multiple instances can be created if you need to ingest
logs from multiple Zscaler tenants.
The modular inputs are written in Python and are engineered for compatibility with Splunk Cloud (although full Splunk
Cloud validation hasn’t occurred). Modular inputs use Zscaler and Splunk SDKs. The Zscaler SDK simplifies access to
Zscaler APIs, and the Splunk SDK secures API keys and passwords, and leverages Splunk search and state-tracking.
All modular input files are in the bin/ directory of the Technical Add-On.
Dependencies
The Zscaler Splunk app is dependent on Zscaler’s Technical Add-On (mandatory).
User Interface
The Splunk App is the visual component of Zscaler’s Splunk integration. Other CIM-compatible Splunk tools or apps also
visualize Zscaler data, but the app leverages a number of fields that are not part of the Splunk CIM. The following is a
series of screenshots from the Splunk App.
The Zscaler Splunk App can serve as a useful base for you to create your own Zscaler-oriented searches, reports, and
dashboards.
Access Control
Threat Prevention
Private Access
Zscaler Configuration
You must configure Zscaler to send data into Splunk. Follow Zscaler’s existing documentation to set up the base
configuration of NSS, LSS, and API access. The relevant reference links are:
Output Strings
If you copy and paste the following outputs, remove any spaces between the fields when configuring an NSS
feed in the ZIA Admin Portal. Removing all spaces allows you to save your NSS feed configuration successfully.
The Splunk App uses fields not included in the base output fields. Configure each of your LSS and NSS feeds as follows:
NSS Web
%d{yy}-%02d{mth}-%02d{dd}-%02d{hh}:%02d{mm}:%02d{ss}\treason=%s{reason}\
tevent_id=%d{recordid}\tmd5=%s{bamd5}\tprotocol=%s{proto}\taction=%s{action}\
ttransactionsize=%d{totalsize}\tresponsesize=%d{respsize}\trequestsize=%d{reqsize}\
turlcategory=%s{urlcat}\tserverip=%s{sip}\tclienttranstime=%d{ctime}\
trequestmethod=%s{reqmethod}\trefererURL=%s{ereferer}\tuseragent=%s{ua}\tproduct=NSS\
tlocation=%s{location}\tClientIP=%s{cip}\tstatus=%s{respcode}\tuser=%s{login}\
turl=%s{eurl}\tvendor=Zscaler\thostname=%s{ehost}\tclientpublicIP=%s{cintip}\
tthreatcategory=%s{malwarecat}\tthreatname=%s{threatname}\tfiletype=%s{filetype}\
tappname=%s{appname}\tpagerisk=%d{riskscore}\tdepartment=%s{dept}\
turlsupercategory=%s{urlsupercat}\tappclass=%s{appclass}\tdlpengine=%s{dlpeng}\
tssldecrypted=%s{ssldecrypted}\turlclass=%s{urlclass}\tthreatclass=%s{malwareclass}\
tdlpdictionaries=%s{dlpdict}\tfileclass=%s{fileclass}\tbwthrottle=%s{bwthrottle}\
tservertranstime=%d{stime}\tcontenttype=%s{contenttype}\
tunscannabletype=%s{unscannabletype}\tdevicehostname=%s{devicehostname}\tdeviceowner=%s{deviceowner}\n
NSS Tunnel Sample
%s{datetime}\tRecordtype=%s{tunnelactionname}\ttunneltype=%s{tunneltype}\
tuser=%s{vpncredentialname}\tlocation=%s{locationname}\tsourceip=%s{sourceip}\
tdestinationip=%s{destvip}\tsourceport=%d{srcport}\ttxbytes=%lu{txbytes}\
trxbytes=%lu{rxbytes}\tdpdrec=%d{dpdrec}\trecordid=%d{recordid}\n
IKE Phase 1
%s{datetime}\tRecordtype=%s{tunnelactionname}\ttunneltype=IPSEC_IKEV%d{ikeversion}\
tuser=%s{vpncredentialname}\tlocation=%s{locationname}\tsourceip=%s{sourceip}\
tdestinationip=%s{destvip}\tsourceport=%d{srcport}\tdestinationport=%d{dstport}\
tlifetime=%d{lifetime}\tikeversion=%d{ikeversion}\tspi_in=%lu{spi_in}\tspi_out=%lu{spi_out}\
talgo=%s{algo}\tauthentication=%s{authentication}\tauthtype=%s{authtype}\
trecordid=%d{recordid}\n
IKE Phase 2
%s{datetime}\tRecordtype=%s{tunnelactionname}\ttunneltype=IPSEC_IKEV%d{ikeversion}\
tuser=%s{vpncredentialname}\tlocation=%s{locationname}\
tsourceip=%s{sourceip}\tdestinationip=%s{destvip}\tsourceport=%d{srcport}\
tsourceportstart=%d{srcportstart}\tdestinationportstart=%d{destportstart}\
tsrcipstart=%s{srcipstart}\tsrcipend=%s{srcipend}\tdestinationipstart=%s{destipstart}\
tdestinationipend=%s{destipend}\tlifetime=%d{lifetime}\tikeversion=%d{ikeversion}\
tlifebytes=%d{lifebytes}\tspi=%d{spi}\talgo=%s{algo}\tauthentication=%s{authentication}\
tauthtype=%s{authtype}\tprotocol=%s{protocol}\ttunnelprotocol=%s{tunnelprotocol}\
tpolicydirection=%s{policydirection}\trecordid=%d{recordid}\n
Tunnel Event
%s{datetime}\tRecordtype=%s{tunnelactionname}\ttunneltype=%s{tunneltype}\
tuser=%s{vpncredentialname}\tlocation=%s{locationname}\tsourceip=%s{sourceip}\
tdestinationip=%s{destvip}\tsourceport=%d{srcport}\tevent=%s{event}\
teventreason=%s{eventreason}\trecordid=%d{recordid}\n
NSS CFW
datetime=%s{time}\tuser=%s{login}\tdepartment=%s{dept}\tlocationname=%s{location}\
tcdport=%d{cdport}\tcsport=%d{csport}\tsdport=%d{sdport}\tssport=%d{ssport}\
tcsip=%s{csip}\tcdip=%s{cdip}\tssip=%s{ssip}\tsdip=%s{sdip}\ttsip=%s{tsip}\
ttunsport=%d{tsport}\ttuntype=%s{ttype}\taction=%s{action}\tdnat=%s{dnat}\
tstateful=%s{stateful}\taggregate=%s{aggregate}\tnwsvc=%s{nwsvc}\tnwapp=%s{nwapp}\
tproto=%s{ipproto}\tipcat=%s{ipcat}\tdestcountry=%s{destcountry}\
tavgduration=%d{avgduration}\trulelabel=%s{rulelabel}\tinbytes=%ld{inbytes}\
toutbytes=%ld{outbytes}\tduration=%d{duration}\tdurationms=%d{durationms}\
tnumsessions=%d{numsessions}\tipsrulelabel=%s{ipsrulelabel}\tthreatcat=%s{threatcat}\
tthreatname=%s{threatname}\tdeviceowner=%s{deviceowner}\tdevicehostname=%s{devicehostname}\n
NSS DNS
datetime=%s{time}\tuser=%s{login}\tdepartment=%s{dept}\tlocation=%s{location}\
treqaction=%s{reqaction}\tresaction=%s{resaction}\treqrulelabel=%s{reqrulelabel}\
tresrulelabel=%s{resrulelabel}\tdns_reqtype=%s{reqtype}\tdns_req=%s{req}\
tdns_resp=%s{res}\tsrv_dport=%d{sport}\tdurationms=%d{durationms}\tclt_sip=%s{cip}\
tsrv_dip=%s{sip}\tcategory=%s{domcat}\tdeviceowner=%s{deviceowner}\tdevicehostname=%s{devicehostname}\n
NSS Alert
Admin Audit
\{ "sourcetype" : "zscalernss-audit", "event" :\{"time":"%s{time}","recordid":"%d{recordid}",
"action":"%s{action}","category":"%s{category}","subcategory":"%s{subcategory}",
"resource":"%s{resource}","interface":"%s{interface}","adminid":"%s{adminid}",
"clientip":"%s{clientip}","result":"%s{result}","errorcode":"%s{errorcode}",
"auditlogtype":"%s{auditlogtype}","preaction":%s{preaction},"postaction":%s{postaction}\}\}
All ZPA (LSS) logs
All ZPA log types use the default JSON log format, available from the drop-down menu in the Logging section of the ZPA Admin Portal.
Splunk Configuration
Prior to installing the App and Technical Add-on, Splunk architects or designers must determine where to install each
component. These decisions can affect the overall Splunk design and enterprise change controls when implementing
Zscaler Logs and APIs into Splunk.
Search Head
The Zscaler Splunk App only needs to be installed on a Splunk search head. The app does not need any forwarding or
index-time execution.
If taking advantage of Zscaler’s Sandbox APIs, install the Zscaler Technical Add-On on a search head because the app
leverages saved Splunk Searches and Alerts to find any files pending execution in the Zscaler sandbox.
Network Inputs
Zscaler NSS and LSS streams are typically sent to Splunk via network inputs. This is usually Splunk’s built-in TCP input,
but it can also be the HTTP Event Collector (HEC) if you are using Cloud NSS.
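For example, a TCP network input for an NSS web feed might look like the following inputs.conf stanza. This is a hypothetical sketch; match the port, sourcetype, and index to your own NSS feed configuration.

```
# inputs.conf -- hypothetical TCP input for an NSS web feed
[tcp://514]
sourcetype = zscalernss-web
index = zscaler
connection_host = ip
```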
Example Configuration
Note that the UEBA entry is an artifact of a non-Zscaler app and is not relevant to the Zscaler configuration.
Modular Inputs
Zscaler APIs are addressed via Splunk modular inputs. These can be seen, set, and configured in the TA’s setup page,
and there is a specific configuration for each input type. Splunk best practice uses a Global Account for the API user,
password, and key, and a setup screen when adding each input.
Take care, when defining the interval, that you stay within your API rate limits. For more information, see API Rate Limit
Summary (government agencies, see API Rate Limit Summary).
Macro Modification
Your preexisting Splunk environment might use an index name different from what Zscaler’s Splunk App and Technical
Add-On expect. In this case, modify macros.conf (or create a local/macros.conf) and override index=zscaler to match
the index name used within your Splunk environment.
For example, if you use the name zscalerlogs, you can change each macro definition as follows:
1. Change your Zscaler log stream configurations to match what the app is expecting.
2. Define local field aliases to align to what the app is expecting.
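A minimal sketch of such an override, assuming a custom index named zscalerlogs and following the macro stanzas defined in the Technical Add-On; only the macros you need to change go in the local file:

```
# local/macros.conf -- hypothetical override when Zscaler data is in index "zscalerlogs"
[z-index]
definition = index=zscalerlogs
iseval = 0

[z-web]
definition = index=zscalerlogs sourcetype="zscalernss-web"
iseval = 0
```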
[Zscaler_DNS]
search = (sourcetype=zscalernss-dns)
[Zscaler_Proxy_General]
search = (sourcetype=zscalernss-web)
[Zscaler_Proxy_DLP]
search = (sourcetype=zscalernss-web ruletype="DLP")
[Zscaler_ZPA]
search = (sourcetype=zscalerlss-zpa-app) OR (sourcetype=zscalerlss-zpa-auth) OR
(sourcetype=zscalerlss-zpa-connector)
[Zscaler_Proxy_Malware]
search = (sourcetype="zscalernss-web" threatname!="None")
[Zscaler_Sandbox]
search = (sourcetype=zscalerapi-zia-sandbox)
[Zscaler_Audit]
search = (sourcetype=zscalerapi-zia-audit)
[Zscaler_CFW]
search = (sourcetype=zscalernss-fw)
[eventtype=Zscaler_DNS]
dns = enabled
network = enabled
resolution = enabled
[eventtype=Zscaler_CFW]
communicate = enabled
network = enabled
[eventtype=Zscaler_Proxy_General]
communicate = enabled
end = enabled
network = enabled
performance = enabled
proxy = enabled
session = enabled
start = enabled
web = enabled
[eventtype=Zscaler_Proxy_Malware]
attack = enabled
ids = enabled
malware = enabled
[eventtype=Zscaler_Proxy_DLP]
dlp = enabled
incident = enabled
[eventtype=Zscaler_ZPA]
authentication = enabled
communicate = enabled
end = enabled
network = enabled
performance = enabled
session = enabled
start = enabled
vpn = enabled
[zscalernss-alerts]
pulldown_type = 1
category = Network & Security
description = Zscaler NSS System Alerts
[zscalernss-dns]
EVAL-vendor_product = Zscaler_ZIA_Firewall
FIELDALIAS-clt_sip_as_src = clt_sip AS src
FIELDALIAS-clt_sip_as_src_ip = clt_sip AS src_ip
FIELDALIAS-dns_req_as_query = dns_req AS query
FIELDALIAS-dns_reqtype_as_record_type = dns_reqtype AS record_type
FIELDALIAS-dns_resp_as_answer = dns_resp AS answer
FIELDALIAS-durationms_as_response_time = durationms AS response_time
FIELDALIAS-srv_dip_as_dest = srv_dip AS dest
FIELDALIAS-srv_dip_as_dest_ip = srv_dip AS dest_ip
FIELDALIAS-srv_dport_as_dest_port = srv_dport AS dest_port
pulldown_type = 1
category = Network & Security
description = Zscaler DNS Control Logs
[zscalernss-web]
EVAL-action = lower(action)
EVAL-app = Zscaler
EVAL-dlp_type = "Inline Gateway"
EVAL-duration = clienttranstime + servertranstime
EVAL-dvc = "Zscaler Cloud Proxy"
EVAL-dvc_zone = "Cloud Proxy"
EVAL-vendor_product = "Zscaler_ZIA_Proxy"
FIELDALIAS-ClientIP_as_src = ClientIP AS src
FIELDALIAS-ClientIP_as_src_ip = ClientIP AS src_ip
FIELDALIAS-aob_gen_zscalernss_web_alias_1 = protocol AS transport
FIELDALIAS-aob_gen_zscalernss_web_alias_2 = user AS src_user
FIELDALIAS-aob_gen_zscalernss_web_alias_3 = dlpengine AS severity
FIELDALIAS-aob_gen_zscalernss_web_alias_4 = threatname AS signature
FIELDALIAS-aob_gen_zscalernss_web_alias_5 = contenttype AS http_content_type
FIELDALIAS-aob_gen_zscalernss_web_alias_6 = hostname AS dest
FIELDALIAS-clientpublicIP_as_src_translated_ip = clientpublicIP AS src_translated_ip
FIELDALIAS-clienttranstime_as_response_time = clienttranstime AS response_time
[zscalerlss-zpa-app]
EVAL-app = Zscaler
EVAL-vendor_product = Zscaler_ZPA
FIELDALIAS-aob_gen_zscalerlss_zpa_app_alias_1 = ServerIP AS dest_ip
FIELDALIAS-aob_gen_zscalerlss_zpa_app_alias_2 = ClientPublicIP AS src_ip
FIELDALIAS-aob_gen_zscalerlss_zpa_app_alias_4 = Application AS app
FIELDALIAS-aob_gen_zscalerlss_zpa_app_alias_5 = ServicePort AS dest_port
FIELDALIAS-aob_gen_zscalerlss_zpa_app_alias_6 = ConnectorPort AS src_port
FIELDALIAS-aob_gen_zscalerlss_zpa_app_alias_7 = Host AS dest
SHOULD_LINEMERGE = 0
category = Network & Security
description = Zscaler ZPA App Logs
pulldown_type = 1
[zscalerlss-zpa-auth]
EVAL-app = Zscaler
FIELDALIAS-aob_gen_zscalerlss_zpa_auth_alias_1 = Username AS user
FIELDALIAS-aob_gen_zscalerlss_zpa_auth_alias_3 = PublicIP AS src
[zscalerlss-zpa-connector]
EVAL-app = Zscaler
FIELDALIAS-aob_gen_zscalerlss_zpa_connector_alias_1 = Application AS app
FIELDALIAS-aob_gen_zscalerlss_zpa_connector_alias_2 = ServicePort AS dest_port
FIELDALIAS-aob_gen_zscalerlss_zpa_connector_alias_3 = ConnectorPort AS src_port
FIELDALIAS-aob_gen_zscalerlss_zpa_connector_alias_4 = Host AS dest
SHOULD_LINEMERGE = 0
category = Network & Security
description = Zscaler ZPA Connector Logs
pulldown_type = 1
[zscalernss-fw]
EVAL-action = if(like(action, "%Allow%"), "allowed", action)
EVAL-app = Zscaler
EVAL-bytes = inbytes + outbytes
EVAL-vendor_product = Zscaler_ZIA_Firewall
FIELDALIAS-cdip_as_dest_ip = cdip AS dest_ip
FIELDALIAS-cdport_as_dest_port = cdport AS dest_port
FIELDALIAS-csip_as_src = csip AS src
FIELDALIAS-csip_as_src_ip = csip AS src_ip
FIELDALIAS-csport_as_src_port = csport AS src_port
FIELDALIAS-csport_as_src_translated_port = csport AS src_translated_port
FIELDALIAS-inbytes_as_bytes_in = inbytes AS bytes_in
FIELDALIAS-locationname_as_src_zone = locationname AS src_zone
FIELDALIAS-outbytes_as_bytes_out = outbytes AS bytes_out
FIELDALIAS-proto_as_protocol = proto AS protocol
FIELDALIAS-proto_as_transport = proto AS transport
FIELDALIAS-sdip_as_dest = sdip AS dest
[zscalerapi-zia-sandbox]
TRUNCATE=0
category = Network & Security
description = Zscaler Sandbox detonation reports
pulldown_type = 1
FIELDALIAS-class_category = "Full Details.Classification.Category" AS class_category
FIELDALIAS-class_detect_mal = "Full Details.Classification.DetectedMalware" AS class_detect_mal
FIELDALIAS-class_score = "Full Details.Classification.Score" AS class_score
FIELDALIAS-class_type = "Full Details.Classification.Type" AS class_type
FIELDALIAS-exploit_risk = "Full Details.Exploit{}.Risk" AS exploit_risk
FIELDALIAS-exploit_sig = "Full Details.Exploit{}.Signature" AS exploit_sig
FIELDALIAS-exploit_sig_source = "Full Details.Exploit{}.SignatureSources{}" AS exploit_sig_source
[zscalerlss-zpa-bba]
EVAL-app = Zscaler
EVAL-vendor_product = Zscaler_ZPA
FIELDALIAS-aob_gen_zscalerlss_zpa_app_alias_1 = ServerIP AS dest_ip
FIELDALIAS-aob_gen_zscalerlss_zpa_app_alias_2 = ClientPublicIP AS src_ip
FIELDALIAS-aob_gen_zscalerlss_zpa_app_alias_4 = Application AS app
FIELDALIAS-aob_gen_zscalerlss_zpa_app_alias_5 = ServicePort AS dest_port
FIELDALIAS-aob_gen_zscalerlss_zpa_app_alias_6 = ConnectorPort AS src_port
FIELDALIAS-aob_gen_zscalerlss_zpa_app_alias_7 = Host AS dest
SHOULD_LINEMERGE = 0
category = Network & Security
description = Zscaler ZPA Browser Access Logs
pulldown_type = 1
[z-dns]
definition = index=zscaler sourcetype="zscalernss-dns"
iseval = 0
[z-fw]
definition = index=zscaler sourcetype="zscalernss-fw"
iseval = 0
[z-web]
definition = index=zscaler sourcetype="zscalernss-web"
iseval = 0
[z-sandbox]
definition = index=zscaler sourcetype="zscalerapi-zia-sandbox"
iseval = 0
[z-audit]
definition = index=zscaler sourcetype="zscalerapi-zia-audit"
iseval = 0
[z-index]
definition = index=zscaler
iseval = 0
[z-zpa]
definition = index=zscaler (sourcetype="zscalerlss-zpa-app" OR sourcetype="zscalerlss-zpa-auth" OR sourcetype="zscalerlss-zpa-connector")
iseval = 0
[z-zpa-app]
definition = index=zscaler sourcetype="zscalerlss-zpa-app"
iseval = 0
[z-zpa-auth]
definition = index=zscaler sourcetype="zscalerlss-zpa-auth"
iseval = 0
[z-zpa-con]
definition = index=zscaler sourcetype="zscalerlss-zpa-connector"
iseval = 0
[z-webuser-list]
definition = tstats prestats=false local=false summariesonly=true count from datamodel=Web where nodename=Web.Proxy by Web.user | rename Web.user AS user
iseval = 0
[z-zpauser-list]
definition = tstats count AS "Count of VPN" from datamodel=Network_Sessions where (nodename = All_Sessions.VPN) groupby All_Sessions.user prestats=true | stats dedup_splitvals=t count AS "Count of VPN" by All_Sessions.user | sort limit=100 All_Sessions.user | fields - _span | rename All_Sessions.user AS user | fillnull "Count of VPN" | fields user, "Count of VPN"
iseval = 0
Deploy NSS
• NSS Deployment Guide for Microsoft Azure (government agencies, see NSS Deployment Guide for Microsoft
Azure).
• NSS Deployment Guide for Amazon Web Services (government agencies, see NSS Deployment Guide for Amazon
Web Services).
• NSS Deployment Guide for VMware vSphere (government agencies, see NSS Deployment Guide for VMware
vSphere).
• Configuring Advanced NSS Settings (government agencies, see Configuring Advanced NSS Settings).
• Troubleshooting Deployed NSS Servers (government agencies, see Troubleshooting Deployed NSS Servers).
Splunk Enterprise manages indexes to facilitate flexible searching and fast data retrieval, eventually archiving them
according to a user-configurable schedule.
After logging in to Splunk, go to Settings > Indexes > New Index.
In the New Index dialog, enter zscaler (without quotes, case sensitive) and click Save.
The Add Data wizard is displayed. This step configures Splunk to listen on TCP port 514. NSS supports only TCP, but
you can configure the destination port. Most administrators use port 514 because it is the default port for UDP-based syslog.
After configuring the SIEM port, click Next.
For example, Windows event logs, NSS web logs, and NSS Firewall logs are all source types.
If multiple web NSS servers send logs to the same Splunk instance, the servers all belong to the same source type, but
each of these servers constitutes an independent source.
Splunk apps use sources and source types to extract knowledge from the data they index. Enter zscaler to display all
possible Zscaler-specific source types. Select the option based on the kind of Zscaler logs sent to Splunk.
If a particular panel is not populated, click the Search icon next to it. This shows the query that the panel is running
behind the scenes to help with troubleshooting.
Cloud NSS is a cloud-to-cloud log streaming service that allows you to stream logs directly from the ZIA cloud into a
supported cloud-based SIEM, without the need to deploy an NSS VM for web or Firewall. The service supports all ZIA log
types: web, SaaS security, tunnel, Firewall, and DNS.
When you subscribe to the service, you can configure cloud NSS feeds for each log type to an HTTPS API-based log
collector hosted on your cloud SIEM. Rather than deploying, managing, and monitoring on-premises NSS VMs, you can
simply configure an HTTPS API feed that pushes logs using HTTP POST from the Zscaler cloud service into an HTTPS API
endpoint on the SIEM. For the Splunk cloud, this is the HEC input.
You can subscribe to Cloud NSS, which allows direct cloud-to-cloud log streaming for all types of ZIA logs into a Splunk
instance.
• Understanding Nanolog Streaming Service (government agencies, see Understanding Nanolog Streaming Service).
• About Cloud NSS Feeds (government agencies, see About Cloud NSS Feeds).
• Add NSS Feeds (government agencies, see Add NSS Feeds).
• Adding Cloud NSS Feeds for Web Logs (government agencies, see Adding Cloud NSS Feeds for Web Logs).
The Splunk HEC sends data and application events to a Splunk deployment over HTTPS. HEC uses a token-based
authentication model. You can generate a token and then configure a logging library or HTTP client with the token to
send data to HEC in a specific format. The HEC token created in the following steps must be pasted later into
the ZIA Admin Portal. While the HEC token is required in this deployment, you can also optionally restrict the
public source IPs that are allowed to send logs to your Splunk Cloud stack. Contact Splunk support to employ any
IP-level allowlists.
You can install Zscaler Splunk App on your Splunk cloud tenant.
Contact the Splunk Cloud support team to get the Zscaler Technical Add-On (TA) installed in your Splunk Cloud
tenant.
Because the Splunk App for Zscaler looks for data written at index zscaler by default, setting index=zscaler allows
you to use the Splunk App for Zscaler out of the box.
Zscaler does not have a specific recommendation for Max raw data size, Searchable time, or Dynamic Data Storage. These
values depend entirely on your setup, amount of logs, cost associated with storage in Splunk cloud, etc., and vary from
customer to customer. For more information regarding these settings, refer to the Splunk documentation.
The Data inputs dialog is displayed. Click the option to Add new input.
Do not enable indexer acknowledgment. Provide a token name. Leave the rest of the options at their default settings
and click Next.
The following example sends ZIA Web logs to the Splunk cloud. Thus, the source type selected in this example is
zscalernss-web. Change the source type to match the log type that you want to ingest (for example, zscalernss-fw,
zscalernss-dns, etc.).
From the Review dialog, confirm the settings and click Submit.
The token is now deploying. It might take a few minutes for the token to be deployed in Splunk cloud.
The 32-character HEC token is shown on this screen. Make a note of this token for use in the ZIA Admin Portal later. In
Splunk, HEC tokens are tied to different source types (Zscaler’s source types: web, Firewall, DNS, etc.).
Create separate HEC tokens for each of the Zscaler log source types. For example, create one HEC token used
only for zscalernss-web, a separate HEC token used only by zscalernss-fw, another just for
zscalernss-dns, and so on. This allows HEC tokens to be scaled, renewed, and invalidated in the future, if needed,
without affecting other Zscaler source types.
The endpoint portion is always /services/collector; the /services/collector/raw endpoint does not come
into play. Note the complete API URL corresponding to your Splunk cloud instance.
Figure 51. Determine the Splunk Cloud API endpoint to send logs to
Configure Splunk Cloud to Fetch Zscaler Audit Logs and Sandbox Events
Previously, the Zscaler Splunk TA needed to be installed on Splunk Inputs Data Manager (IDM). IDM was a Splunk instance
within a Splunk Cloud Stack that set up and configured modular and scripted inputs. As a part of a stack, IDM is managed
by Splunk. IDM is a unique instance, meaning that it exists independently and separately from a search head, and does
not belong to a search or indexing cluster. To use IDM, contact Splunk support.
Now, Splunk cloud prefers the Victoria Experience, which removes the need for an IDM. You can install most apps on
Splunk cloud directly instead of contacting Splunk support.
If using IDM, a Zscaler username, password, and API credentials are configured on the Splunk TA installed on IDM. This
initiates API calls from the Splunk cloud to Zscaler to fetch audit logs and Sandbox reports.
If using the newer Victoria stack, complete the following steps on the same Splunk cloud instance on which the Zscaler
Splunk app is installed (instead of IDM).
You must also request that the Splunk cloud support team enable "Scheduled search" capabilities on the IDM. This
setting is disabled in Splunk cloud by default. The IDM must be peered with the indexing tier so that indexed data
can be searched.
Second, the account running the Zscaler TA (likely sc_admin or splunk-system-user) must have Splunk
capabilities to:
• Run saved searches.
• Output a lookup.
Finally, the equivalent saved search on the IDM must be enabled and scheduled to run.
You must contact Splunk cloud support team to get Zscaler Technical Add-On (TA) installed in your Splunk cloud tenant.
Zscaler Splunk App doesn’t need to be installed on IDM.
Add Zscaler Account Used by Splunk IDM to Make API Calls to ZIA
Go to Configuration > Account > Add.
Fill in the Zscaler credentials pertinent to your ZIA tenant and save the settings by clicking Add.
Figure 61. Confirm that both Inputs are enabled in Splunk IDM
The API URL is a Splunk URL dependent on your Splunk cloud stack.
Add "?auto_extract_timestamp=true" at the end of the Splunk cloud API endpoint. For example, if your Splunk
API URL is:
https://http-inputs-partnerstack05.splunkcloud.com:443/services/collector
change it to:
https://http-inputs-partnerstack05.splunkcloud.com:443/services/collector?auto_extract_timestamp=true
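If you build these endpoint URLs programmatically, the query-string change above is a simple edit. The following is a small sketch using only the Python standard library; the stack URL is a placeholder:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def add_auto_extract_timestamp(url: str) -> str:
    """Append auto_extract_timestamp=true to an HEC endpoint URL."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query["auto_extract_timestamp"] = "true"
    return urlunsplit(parts._replace(query=urlencode(query)))

base = "https://http-inputs-example.splunkcloud.com:443/services/collector"
print(add_auto_extract_timestamp(base))
# https://http-inputs-example.splunkcloud.com:443/services/collector?auto_extract_timestamp=true
```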
The authorization header contains the relevant Splunk HEC token created in previous steps.
In the Add Cloud NSS Feed dialog, Key1 is "Authorization" and Value1 is the HEC token in the format "Splunk XXX-XXX-XXX"
(replace XXX with the actual HEC token value).
Select JSON as the Feed Output Type from the drop-down menu. Add ,\" (comma, backslash, double quote) to the
Feed Escape Character list. After filling in the required parameters, click Save.
When you create a web feed, you must set the Feed Output Type to Custom and then paste the following code
text into the Feed Output Format:
The API URL is a Splunk URL, dependent on your Splunk cloud stack. For example:
https://http-inputs-partnerstack05.splunkcloud.com:443/services/collector
becomes:
https://http-inputs-partnerstack05.splunkcloud.com:443/services/collector?auto_extract_timestamp=true
The authorization header contains the relevant Splunk HEC token created in previous steps.
1. In the Add Cloud NSS Feed dialog, Key1 is "Authorization" and Value1 is the HEC token in the format "Splunk XXX-XXX-XXX"
(replace XXX with the actual HEC token value).
2. Select the Feed Output Type of JSON from the drop-down menu. Add ,\" (comma, backslash, double quote) to
the Feed Escape Character list. In the Feed Output Format, change the "sourcetype" to "zscalernss-fw".
3. After filling in the required parameters, click Save.
Make sure to edit the feed output format to "zscalernss-dns", "zscalernss-tunnel", etc. Refer to the table in the Source
Types section for a list of source types.
After the connectivity is verified, the Connectivity Test column changes from Validation Pending to Validation
Successful.
If you see a particular panel not populated, click the magnifying glass next to it. This shows you the query that the panel is
running behind the scenes, which helps with troubleshooting.
SOAR components
In the example, ZIA NSS logs are streamed to Splunk (SIEM). SOAR talks to the ZIA tenant as well as the Splunk instance to
which NSS logs are being sent.
A custom threat feed (IOC type: malicious domains) that the customer subscribes to is ingested into Splunk ES (which is
part of Splunk). Splunk ES then looks for an overlap between domains on the threat feed and instances of those domains
being accessed via ZIA in the past (over an adjustable interval). If it finds an overlap, Splunk ES creates a notable
event and sends it to SOAR.
SOAR then checks whether Zscaler currently classifies the domain as malicious. If it does, SOAR triggers a search
over the NSS logs consumed by Splunk to examine historical data.
If Zscaler doesn't classify the domain as malicious, SOAR adds the domain to your ZIA disallowed list and then examines
historical data (with an adjustable time range) to find which users have accessed the domain, by triggering a search
over the NSS logs consumed by Splunk.
SOAR then sends an email to the network admin detailing which users were exposed to these domains, along with
relevant timestamps.
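The playbook flow described above can be sketched in Python. All helper objects below are hypothetical stand-ins for SOAR actions and Zscaler/Splunk API calls, not a real SOAR SDK:

```python
# Sketch of the threat-hunting playbook logic. The zia, splunk, and notify
# helpers are hypothetical stand-ins for SOAR actions / API calls.

def handle_notable_domain(domain, zia, splunk, notify):
    """Triage one domain taken from a Splunk ES notable event."""
    if zia.classify(domain) != "MALICIOUS":
        # Zscaler doesn't flag the domain yet: add it to the ZIA disallow list.
        zia.add_to_disallow_list(domain)
    # Either way, hunt through the NSS logs already indexed in Splunk
    # over an adjustable historical time range.
    hits = splunk.search_nss_logs(domain)
    if hits:
        notify(domain, hits)  # email the network admin (users + timestamps)
    return hits
```

The branch mirrors the narrative: already-malicious domains go straight to the historical search, while unclassified domains are blocked first and then searched.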
The following steps enable a SOAR instance to communicate with Splunk and Zscaler, and configure a sample
playbook that is used to automate threat hunting. This sample playbook is just an example of what is
achievable by leveraging SOAR abilities with Zscaler's APIs. You can build your own playbooks to implement your
custom use cases.
Configuring SOAR
The following steps assume that you have admin access to the SOAR instance.
Labels limit a playbook so that it triggers actions only on specific kinds of events.
1. Go to Administration > Event Settings > Label Settings and then + Label.
2. Name it "from_correlation_splunk_search".
1. Go to Administration > Users and create a new automation user with the following settings.
Fill out Asset Settings with your pertinent ZIA tenant details.
After filling all the details, click Save and then click Test Connectivity.
Fill out Asset Settings with your pertinent Splunk details. Make sure that communication from SOAR to Splunk on port
8089 is permitted by the network.
Under Ingest Settings, set the Polling Interval per your operational needs. This guide sets it to 1 minute.
This playbook runs a correlation search between known malicious IPs and domains and your ZIA logs. If a malicious IP or
domain is found in these logs, the playbook checks whether that IP or domain is already on your Zscaler disallow list.
If it is not on the disallow list, SOAR checks how Zscaler classifies the IP or domain. If Zscaler classifies it as
"Unknown," SOAR updates Zscaler's disallow list via an API call.
Also change Operates on to the label that was created earlier in the drop-down menu and click Save.
Configuring Splunk
The following sections describe how to configure Splunk.
Type Threat in the search box and select the Type as Correlation Search.
Splunk ES automatically creates notable events based on correlation searches. Add an action to forward artifacts
related to such events to your SOAR setup.
Go to the newly installed SOAR Splunk App and then click Create Server.
Populate the Authorization Configuration by pasting the Authorization token content copied in earlier steps and click
Save.
Figure 106. Verify that the notable events are being forwarded by Splunk to SOAR
Posture Control is part of Zscaler for Workloads, a comprehensive cloud security solution for any application running on
any service in any cloud.
2. Click Add to enter a new cloud storage integration, which is used as a location to store alerts.
3. Name the integration and select Amazon S3 Bucket as the Cloud Storage.
4. Enter the AWS S3 Bucket Name to which ZPC should push alerts.
5. Click Copy the S3 Bucket policy and log into the AWS S3 portal.
6. Paste the bucket policy into the permissions of the bucket. The bucket policy looks similar to the following example.
7. Return to the ZPC Admin Portal and click Test Connection. The connection test must succeed before moving to the
next step.
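For orientation, a bucket policy of this kind generally takes the shape below. This is a hedged illustration only; the account ID, principal ARN, and bucket name are placeholders, and the authoritative policy is the one copied from the ZPC Admin Portal:

```python
import json

# Placeholder values -- the real principal ARN and bucket name come from the
# ZPC Admin Portal's "Copy the S3 Bucket policy" step, not from this sketch.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowZPCAlertWrites",
            "Effect": "Allow",
            # Hypothetical principal standing in for the ZPC service account.
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-zpc-alerts-bucket/*",
        }
    ],
}
print(json.dumps(bucket_policy, indent=2))
```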
Configuring AWS
ZPC writes alerts to this S3 bucket in AWS, and Splunk reaches out to this S3 bucket to pull down the alerts written to this
bucket.
To create an Identity and Access Management (IAM) user and assign permissions to that user in AWS to allow listing of S3
buckets:
2. Provide a username and select Access Key. You can download the access key, which is used later by Splunk to pull
contents of this S3 bucket into Splunk.
3. Create and attach an IAM policy to the user. This policy allows the Splunk user account to see all the buckets
available when configuring Splunk. Create a similar policy in AWS and attach it to the user, as in the following
example.
4. Click Next.
5. After the user is created, create Access Keys to enable programmatic access using those credentials. Splunk uses the
credentials to contact S3. Create and download an Access Key and corresponding Secret Access Key for this user.
6. Edit the permissions of the bucket so that the user can read from it. You need to change the
Principal and Resource sections to match your accounts and usernames. The end result is the addition of a stanza in
the bucket permissions pertaining to the username that Splunk uses to pull down the alerts from the S3 bucket.
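Taken together, the permissions from steps 3 through 6 amount to an IAM policy along the following lines. This is an illustrative sketch, not Zscaler's exact policy; the bucket name is a placeholder:

```python
import json

# Placeholder bucket name -- adjust the Resource ARNs to your environment.
iam_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Lets the Splunk account enumerate buckets in the input wizard.
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
            "Resource": "*",
        },
        {
            # Lets Splunk list and read objects from the alerts bucket.
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::example-zpc-alerts-bucket",
                "arn:aws:s3:::example-zpc-alerts-bucket/*",
            ],
        },
    ],
}
print(json.dumps(iam_policy, indent=2))
```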
Configuring Splunk
Configure Splunk to read from the S3 bucket.
1. On your Splunk instance, install the Splunk Add-on for AWS. This allows you to configure Splunk to ingest the alerts
from S3.
2. Select the Splunk Add-on for AWS and then Account under the Configuration tab.
3. Click Add.
4. Add the user created earlier.
5. Enter the Username, Key ID, and Secret Key.
6. Create a generic S3 input from the Splunk App by going to Inputs > Create New Input > Custom Data Type >
Generic S3.
7. Provide a Name.
8. Select the AWS username created earlier.
9. Select the name of the bucket used in the previous sections.
10. In Source Type, enter zscaler-posturecontrol-alerts and for index, enter zscaler.
11. Click Add.
12. Go back to the Zscaler Splunk app and select the Posture Control tab. As alerts get pushed out by ZPC, the
corresponding Splunk dashboards are populated.
Figure 125. Collecting details to open support case with Zscaler TAC
3. Now that you have your company ID, you can open a support ticket. Go to Dashboard > Support > Submit a Ticket.