I would like to share my configuration for setting up an NGINX Ingress for the Splunk Web UI, the Splunk HTTP Event Collector (HEC), and the splunkd REST API. Initially I configured backend-protocol: "HTTP", which led to a 502 Bad Gateway error: all three Splunk ports speak TLS in this deployment (splunkd and HEC do so by default, and the Web UI because SPLUNK_HTTP_ENABLESSL is set to true below), so the controller has to talk HTTPS to the backends. Be sure to use backend-protocol: "HTTPS".
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-splunk
  namespace: splunk
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" # HTTP causes 502 Bad Gateway
    nginx.ingress.kubernetes.io/proxy-body-size: "200m"   # allow uploads of up to 200 MB
spec:
  ingressClassName: nginx
  rules:
    - host: "splunkweb.dev"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: splunk-service
                port:
                  number: 8000
    - host: "hec.dev"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: splunk-service
                port:
                  number: 8088
    - host: "splunkd.dev"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: splunk-service
                port:
                  number: 8089
The Ingress backend is the corresponding Splunk service in the cluster, which maps to the container ports of the Splunk pod.
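A quick way to verify that the controller has admitted the Ingress and to see which address it answers on (everything here lives in the splunk namespace):

kubectl -n splunk get ingress ingress-splunk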
Make sure the local DNS points to the IP address of the Ingress controller. In my case, it was just my localhost, so I added splunkweb.dev, hec.dev, and splunkd.dev to my local hosts file and pointed them to 127.0.0.1.
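For reference, the matching hosts entry is a single line (on Windows the file is C:\Windows\System32\drivers\etc\hosts; if you run your client inside WSL 2, add it to /etc/hosts there as well):

127.0.0.1  splunkweb.dev hec.dev splunkd.dev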
It took me almost two full days to track down the cause of the 502 Bad Gateway response.
My setup is a Windows 11 machine running WSL 2. I have installed Rancher Desktop 1.13.1, disabled Traefik, and installed the NGINX Ingress controller.
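For anyone reproducing the setup, a standard Helm install of the controller looks roughly like this (adjust the namespace and release name to your environment):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace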
The NGINX logs clearly showed the 502 errors and indicated that the backend actively refused the connection, but they did not say why.
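For anyone debugging the same thing, the controller logs are where those 502 lines show up; with the default Helm install above the deployment is called ingress-nginx-controller:

kubectl -n ingress-nginx logs deployment/ingress-nginx-controller --tail=100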
This is the corresponding Service configuration:
---
apiVersion: v1
kind: Service
metadata:
  name: splunk-service
  labels:
    app: my-splunk-app
    component: splunk
spec:
  type: ClusterIP
  selector:
    app: my-splunk-app
    component: splunk
  ports:
    - protocol: TCP
      name: port-8000
      port: 8000
      targetPort: 8000
    - protocol: TCP
      name: port-8088
      port: 8088
      targetPort: 8088
    - protocol: TCP
      name: port-8089
      port: 8089
      targetPort: 8089
    - protocol: TCP
      name: port-9997
      port: 9997
      targetPort: 9997
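If the Service selector and the pod labels do not match, NGINX has nothing to proxy to, so it is worth checking that the Service actually has endpoints (again assuming the splunk namespace):

kubectl -n splunk get endpoints splunk-service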
The Service maps to the container ports of the Splunk Deployment:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: splunk
  labels:
    app: my-splunk-app
    component: splunk
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-splunk-app
      component: splunk
  template:
    metadata:
      labels:
        app: my-splunk-app
        component: splunk
    spec:
      securityContext:
        runAsUser: 0
        runAsGroup: 0
        fsGroup: 0
      containers:
        - name: splunk
          image: {{ printf "%s:%s" .Values.splunk.image .Values.splunk.version }}
          env:
            - name: SPLUNK_START_ARGS
              value: --accept-license
            - name: SPLUNK_PASSWORD
              value: {{ .Values.splunk.password }}
            - name: SPLUNK_HEC_TOKEN
              value: {{ .Values.splunk.hecToken }}
            - name: SPLUNK_HEC_PORT
              value: '8088'
            - name: SPLUNK_HTTP_ENABLESSL
              value: 'true'
          ports:
            - containerPort: 8000 # Web UI
              protocol: TCP
            - containerPort: 8088 # default HEC port
              protocol: TCP
            - containerPort: 8089 # local splunkd management port
              protocol: TCP
            - containerPort: 9997 # forwarding and receiving
              protocol: TCP
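The Deployment is rendered from a Helm chart; the referenced values look roughly like this (everything below is a placeholder, use your own image, version, password, and token):

splunk:
  image: splunk/splunk                                # placeholder, e.g. the official image
  version: "9.1.2"                                    # placeholder version tag
  password: "change-me"                               # placeholder, do not commit real secrets
  hecToken: "00000000-0000-0000-0000-000000000000"    # placeholder HEC token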
After many attempts, adding the backend-protocol: "HTTPS" annotation to the ingress template finally fixed the error. I hope this helps anyone who encounters a 502 Bad Gateway response from the NGINX Ingress in front of Splunk services, as I was unable to find any helpful information in Splunk's documentation.
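A quick smoke test through the Ingress confirms the whole chain. Since the Ingress above has no tls: section, plain HTTP on port 80 works on the client side; replace the token with your own:

curl -s http://hec.dev/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello via the nginx ingress"}'

If everything is wired up correctly, Splunk answers with {"text":"Success","code":0}.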
Any suggestions for improving the setup for scalability and performance are very welcome.
Cheers, Loop