Troubleshooting NGINX Ingress Controller: A Step-by-Step Guide

Learn to troubleshoot your NGINX Ingress controller with this guide. Identify common issues and resolve them step-by-step.

Patrick Londa
Oct 12, 2022

If you are troubleshooting an issue with your Ingress and you know it isn’t an infrastructure issue, then you likely have an issue with your Ingress controller.

In this guide, we’re going to outline how you can identify the specific issue impacting your NGINX Ingress controller.


Common Issues for NGINX Ingress Controllers

Here are some of the common causes for why an NGINX Ingress controller may not be working properly:

  • Misconfigured RBAC or a missing default server TLS Secret
  • Invalid annotation values
  • Invalid VirtualServer or VirtualServerRoute
  • Invalid policy
  • Invalid ConfigMap key values
  • Unhealthy backend pods or a misconfigured backend service

To debug your particular scenario, there are several steps to gathering useful information. These next steps assume that your Ingress controller is deployed in the namespace nginx-ingress and <nginx-ingress-pod> is the name of one of the pods.
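
If you are not sure of the pod name, you can list the pods in the namespace first and pick one of the controller pods:

kubectl get pods -n nginx-ingress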

Issue: Misconfigured RBAC or a Missing Default Server TLS Secret

Check Ingress Controller Logs

When you run into issues, checking the logs is often the first step to getting more information. By doing this, you’ll be able to see if the Ingress controller is failing to start, which could mean you have misconfigured RBAC or a missing default server TLS Secret.

To check the logs for both the Ingress controller software and the NGINX access and error logs, you can run this command:

kubectl logs <nginx-ingress-pod> -n nginx-ingress
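
If the controller pod is crash-looping (for example, because of missing RBAC permissions), the current container may not have produced much output yet. In that case, the logs of the previous container instance are often more useful:

kubectl logs <nginx-ingress-pod> -n nginx-ingress --previous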

You can control the level of detail in the Ingress controller software logs with the -v command-line argument (-v=1 is the least detailed, -v=4 the most detailed). The verbosity of the NGINX logs is adjusted separately, by configuring the corresponding ConfigMap keys.
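
To confirm which verbosity level the controller is currently running with, you can inspect the container arguments. This is a quick sketch that assumes the controller is the first container in the pod:

kubectl get pod <nginx-ingress-pod> -n nginx-ingress -o jsonpath='{.spec.containers[0].args}'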

If the logs show that the Ingress controller is starting correctly, then you can move on to the next likely cause: invalid values of annotations.

Issue: Invalid Annotation Values

To check if this is your issue, you should review the controller logs, then check the events of your Ingress resource to see whether your NGINX configuration was successfully applied.

Check Events of an Ingress Resource

You can check the events of an Ingress resource by running the following command:

kubectl describe ing <ingress-resource-name>
Name:             <ingress-resource-name>
Namespace:        default
. . .
Events:
  Type    Reason          Age   From                      Message
  ----    ------          ----  ----                      -------
  Normal  AddedOrUpdated  12s   nginx-ingress-controller  Configuration for default/<ingress-resource-name> was added or updated

If you see a Normal event with the reason AddedOrUpdated, it means that the configuration was successfully applied. If the resource was rejected instead, you will typically see a Warning event whose message explains what is invalid.

Check Generated Configuration

The Ingress controller generates an NGINX configuration file for each Ingress/VirtualServer resource and stores it in the /etc/nginx/conf.d folder; these files are included from the main configuration file, /etc/nginx/nginx.conf. You can verify that your annotation values were translated correctly by checking the generated configuration. To print the main configuration file, run the following command:

kubectl exec <nginx-ingress-pod> -n nginx-ingress -- cat /etc/nginx/nginx.conf
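
The command above prints the main configuration file. The per-resource files generated for each Ingress or VirtualServer live in /etc/nginx/conf.d, so you can list that folder and print the file that corresponds to your resource (the file name below is a placeholder):

kubectl exec <nginx-ingress-pod> -n nginx-ingress -- ls /etc/nginx/conf.d
kubectl exec <nginx-ingress-pod> -n nginx-ingress -- cat /etc/nginx/conf.d/<generated-config-file>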

If you have verified that your annotations have the correct values, then you should next check whether you have an invalid VirtualServer or VirtualServerRoute.

Issue: Invalid VirtualServer or VirtualServerRoute

Next, check whether NGINX successfully applied the configuration to your VirtualServer or VirtualServerRoute resources. To check if this is your issue, review the controller logs and the generated configuration file, and check the events for the VirtualServer and VirtualServerRoute resources.

Check Events of VirtualServer and VirtualServerRoute Resources

You can check events for a VirtualServer resource with this command:

kubectl describe vs <VS-resource-name>
. . .
Events:
  Type    Reason          Age   From                      Message
  ----    ------          ----  ----                      -------
  Normal  AddedOrUpdated  16s   nginx-ingress-controller  Configuration for default/<VS-resource-name> was added or updated

You can check events for a VirtualServerRoute resource with this command:

kubectl describe vsr <VSR-resource-name>
. . .
Events:
  Type     Reason                 Age   From                      Message
  ----     ------                 ----  ----                      -------
  Normal   AddedOrUpdated         1m    nginx-ingress-controller  Configuration for default/<VSR-resource-name> was added or updated

Similar to checking the events of an Ingress Resource, you are looking for Normal events with the reason AddedOrUpdated to signal that the configuration was successfully applied.
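
For a quick cluster-wide overview before describing individual resources, you can also list the custom resources directly; recent versions of the Ingress controller report a State column (for example Valid or Invalid) in this output, though the exact columns may vary by version:

kubectl get vs,vsr --all-namespaces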

Issue: Invalid Policy

If you have checked the logs, the generated configuration file, and the events of your VirtualServers and the problem persists, the cause may be an invalid Policy.

Check Events of a Policy Resource

You can check the events of a Policy resource using the following command:

kubectl describe pol <policy-resource-name>
. . .
Events:
  Type    Reason          Age   From                      Message
  ----    ------          ----  ----                      -------
  Normal  AddedOrUpdated  11s   nginx-ingress-controller  Policy default/<policy-resource-name> was added or updated

If you see Normal events with the reason AddedOrUpdated, it means that the Policy resource was successfully accepted by the Ingress controller. You will need to check the VirtualServer events that reference that policy to know if the policy was actually applied.
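
To find out which VirtualServer resources reference the policy, one rough approach is to print each VirtualServer together with its spec-level policies list. This is a sketch only: policies attached at the route level will not show up in this query.

kubectl get vs --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\t"}{.spec.policies}{"\n"}{end}'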

Issue: Invalid ConfigMap Key Values

So far, we’ve narrowed down the list of potential issues by verifying which steps are operating as expected. If the configuration has not been applied, and you have checked the other causes, then it is likely an issue with the values for the ConfigMap keys.

To check this, you can use the logs, the generated configuration file, and also the events of the ConfigMap.
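
Before looking at events, it can also help to dump the ConfigMap itself and review the keys and values you have set:

kubectl get configmap <configmap-name> -n nginx-ingress -o yaml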

Check Events of the ConfigMap Resource

You can check the events of the ConfigMap using the following command:

kubectl describe configmap <configmap-name> -n nginx-ingress
Name:         <configmap-name>
Namespace:    nginx-ingress
Labels:       <none>
. . .
Events:
  Type    Reason   Age                From                      Message
  ----    ------   ----               ----                      -------
  Normal  Updated  11s (x2 over 26m)  nginx-ingress-controller  Configuration from nginx-ingress/<configmap-name> was updated

With this, and the other methods described above, you should be able to gather all the information you need to determine why the configuration is not being applied correctly.

What if your Ingress controller is starting successfully and the configuration is applied correctly, but you are getting unexpected responses back? You might have unhealthy backend pods or an issue with your backend service.

Issue: Unhealthy Backend Pods or a Misconfigured Backend Service

You can check for backend pod or service issues by reviewing the logs, the generated configuration file, checking the live activity dashboard, and running NGINX in debug mode.
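
A quick supplementary check, not specific to NGINX, is to confirm that the backend Service actually has healthy endpoints behind it; the names below are placeholders for your own backend Service and namespace. If the Endpoints list is empty, the Service selector does not match any ready pods.

kubectl get endpoints <backend-service-name> -n <backend-namespace>
kubectl describe service <backend-service-name> -n <backend-namespace>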

We’ve already covered the commands for the first two methods, so let’s move on to the live activity dashboard.

Check the Live Activity Monitoring Dashboard

The live activity monitoring dashboard is available with NGINX Plus. It is enabled and served on port 8080 by default, as long as the -nginx-status command-line argument is not set to false. Run the following command to forward connections from port 8080 on your local machine to port 8080 of your NGINX Plus Ingress Controller pod:

kubectl port-forward <nginx-plus-ingress-pod> 8080:8080 --namespace=nginx-ingress
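
With the port-forward running, the dashboard should be reachable in your browser; for the NGINX Plus Ingress Controller the default path is typically http://127.0.0.1:8080/dashboard.html.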

You can also read about more customization options for the dashboard.

Run NGINX in Debug Mode

While it’s rare that the issue is due to a bug in the NGINX code itself, the debug logs can help you rule it out. You can enable them with these two steps (a sketch of both follows below):

1. Set error-log-level to debug in the ConfigMap.

2. Use the -nginx-debug command-line argument when running the Ingress controller.
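
A minimal sketch of both steps, assuming the default nginx-ingress namespace and using placeholder names for the ConfigMap and Deployment:

kubectl patch configmap <configmap-name> -n nginx-ingress --type merge -p '{"data":{"error-log-level":"debug"}}'
kubectl edit deployment <nginx-ingress-deployment> -n nginx-ingress

In the editor, add -nginx-debug to the controller container's args and save; the pods will be re-created with debug logging enabled.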

Troubleshooting Your NGINX Ingress Controller with Blink

Troubleshooting an NGINX configuration issue requires checking for multiple potential causes and running various commands to get more information.

There’s an easier and faster way to manage your Kubernetes troubleshooting.

With Blink, you can automate this process and run all these steps with a simple click. You can also customize and expose your automation as a self-service app that your team members can use. Instead of fielding a support ticket, you can enable other developers to solve their own NGINX problem, guided by your automation.

Get started with Blink and streamline your troubleshooting today.
