To generate verbose logs and get version and configuration information, follow these procedures.
The Kubernetes integration does not produce debug-level logs by default. If you are doing a more in-depth investigation on your own or with New Relic Support, you can enable verbose mode.
Caution
Verbose mode significantly increases the amount of info sent to log files. Temporarily enable this mode only for troubleshooting purposes, and reset the log level when finished.
To get verbose logging details using Helm and the nri-bundle chart, it is enough to set newrelic-infrastructure.verboseLog in your values.yaml. Notice that you can enable verbose logs in all subcharts by setting global.verboseLog instead:

# To enable verboseLog for newrelic-infrastructure only
# newrelic-infrastructure:
#   verboseLog: true

# To enable verboseLog for all sub-charts
# global:
#   verboseLog: true

Once you have edited the file, upgrade the solution with the following command:
$helm upgrade <RELEASE_NAME> newrelic/nri-bundle \
    --namespace <NEWRELIC_NAMESPACE> \
    -f values-newrelic.yaml \
    [--version fixed-chart-version]

Then wait some time, collect the logs, revert the change in the values.yaml file, and upgrade again.
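If you only need the change temporarily and prefer not to edit values.yaml, the same override can also be passed on the command line. This is a sketch using standard Helm flags; remember to set the value back to false and upgrade again when you're done:

$helm upgrade <RELEASE_NAME> newrelic/nri-bundle \
    --namespace <NEWRELIC_NAMESPACE> \
    --reuse-values \
    --set newrelic-infrastructure.verboseLog=true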
To get verbose logging details for an integration manually using a manifest file, you need to set two different environment variables:
- Set NRIA_VERBOSE="1" for all agent and forwarder containers, for example below the NRIA_LICENSE_KEY environment variable. This environment variable enables agent verbose logs (see the sketch after this list).
- Set NRI_KUBERNETES_VERBOSE="true" in all components of the integration. This environment variable enables verbose logs of the integration.
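As a reference, this is a minimal, hypothetical fragment of how those variables could look in the container specs of the manifest; the container names and the secret reference are illustrative only and will differ in your actual manifest:

containers:
  - name: agent                        # forwarder/agent container: set NRIA_VERBOSE here
    env:
      - name: NRIA_LICENSE_KEY
        valueFrom:
          secretKeyRef:
            name: newrelic-license     # hypothetical secret name
            key: licenseKey
      - name: NRIA_VERBOSE             # enables agent verbose logs
        value: "1"
  - name: kubelet                      # integration container: repeat for every component
    env:
      - name: NRI_KUBERNETES_VERBOSE   # enables verbose logs of the integration
        value: "true"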
Once you have edited the manifest, upgrade the solution with the following command. Then wait some time, collect the logs, revert the change in the manifest, and apply the manifest again.
$kubectl apply -f your_newrelic_k8s.yaml -n <NEWRELIC_NAMESPACE>
There are three different components of the integration, in charge of scraping ksm, controlplane, and kubelet data. In each instance, two containers are running: one scraping the data and one forwarding it. The agent in the kubelet component is also in charge of scraping node data and running integrations.
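To see how those pods and containers are laid out in your own cluster, you can list each pod together with the names of its containers (only the namespace placeholder is assumed here):

$kubectl get pods -n <NEWRELIC_NAMESPACE> -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'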
To fetch logs, get the name of the pods you want to retrieve logs from:
$kubectl get pods -n <NEWRELIC_NAMESPACE>
Retrieve the logs:
$kubectl logs <POD_NAME> --all-containers --prefix
If you are interested in the logs of the previous execution, add the --previous flag. If you are interested in the logs of just one of the containers, you can omit the --all-containers --prefix flags and specify the container with the --container option.
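For example, combining both options to get the previous logs of a single container (placeholders follow the same convention as the commands above):

$kubectl logs <POD_NAME> -n <NEWRELIC_NAMESPACE> --container <CONTAINER_NAME> --previous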
Important
To have the best experience with our Kubernetes integration, we advise you to run the latest version of the integration. Check what changes got introduced with version 3.
For the Kubernetes integration, the infrastructure agent is distributed as a Docker image that contains the infrastructure agent and the Kubernetes integration. The Docker image is tagged with a version, and the infrastructure agent also has its own version.
When the agent is successfully sending information to New Relic, you can retrieve the versions of the infrastructure agent for Kubernetes (the Docker image) you are running in your clusters by using the following NRQL query:
FROM K8sContainerSample SELECT uniqueCount(entityId) WHERE containerName LIKE 'agent' FACET clusterName, containerImage
If the agent isn't reporting any data:
Get the versions of the New Relic integration for Kubernetes that you are running in a cluster using kubectl:
$kubectl get pods --all-namespaces -l app.kubernetes.io/name=newrelic-infrastructure -o jsonpath="{.items..spec..containers..image}"
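The jsonpath output prints every image on a single line; if you prefer one image per line, you can pipe the same command through tr (a convenience only, not part of the integration):

$kubectl get pods --all-namespaces -l app.kubernetes.io/name=newrelic-infrastructure -o jsonpath="{.items..spec..containers..image}" | tr ' ' '\n'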
To retrieve the version of kube-state-metrics running on your clusters, run the following NRQL query:
FROM K8sContainerSample SELECT uniqueCount(entityId) WHERE containerName LIKE '%kube-state-metrics%' FACET clusterName, containerImage
Integration Version 2 specific commands
For the Kubernetes integration, the infrastructure agent adds a log entry only in the event of an error. Most common errors are displayed in the standard (non-verbose) logs. If you are doing a more in-depth investigation on your own or with New Relic Support, you can enable verbose mode.
Caution
Verbose mode significantly increases the amount of info sent to log files. Temporarily enable this mode only for troubleshooting purposes, and reset the log level when finished.
To get verbose logging details for an integration using a manifest file:
1. Enable verbose logging: In the deployment file, set the value of NRIA_VERBOSE to 1 (a minimal fragment is sketched after this procedure).
2. Apply the modified configuration by running:
   $kubectl apply -f your_newrelic_k8s.yaml
3. Leave verbose mode on for a few minutes, or until you feel enough activity has occurred.
4. Disable verbose mode: Set the NRIA_VERBOSE value back to 0.
5. Apply the restored configuration by running:
   $kubectl apply -f your_newrelic_k8s.yaml
6. Get a list of nodes in the environment:
   $kubectl get nodes
7. Get a list of infrastructure and kube-state-metrics pods:
   $kubectl get pods --all-namespaces -o wide | egrep 'newrelic|kube-state-metrics'
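For step 1, the change in the deployment file is just the value of the environment variable; a minimal sketch, assuming the agent container spec already has (or gains) an env entry for it:

env:
  - name: NRIA_VERBOSE   # set back to "0" when you're done troubleshooting
    value: "1"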
For the Kubernetes integration, the infrastructure agent adds a log entry only in the event of an error. Most common errors are displayed in the standard (non-verbose) logs. If you're doing a more in-depth investigation on your own or with New Relic Support, you can enable verbose mode.
Caution
Verbose mode significantly increases the amount of info sent to log files. Temporarily enable this mode only for troubleshooting purposes, and reset the log level when finished.
To get verbose logging details for an integration using Helm:
- Enable verbose logging:
  $helm upgrade -n <namespace> --reuse-values newrelic-bundle --set newrelic-infrastructure.verboseLog=true newrelic/nri-bundle
- Leave verbose mode on for a few minutes, or until enough activity has occurred.
- When you have the information you need, disable verbose logging:
  $helm upgrade -n <namespace> --reuse-values newrelic-bundle --set newrelic-infrastructure.verboseLog=false newrelic/nri-bundle
- Follow the steps to restore your configuration from step 5 in the section, Get verbose logs for installations using a manifest file.
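To confirm whether the verbose override is currently applied to the release, you can inspect the user-supplied values with a standard Helm command (release name and namespace placeholders as above):

$helm get values newrelic-bundle -n <namespace>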
To get the logs from pods connecting to kube-state-metrics:
Get the nodes that kube-state-metrics is running on:
$kubectl get pods --all-namespaces -o wide | grep kube-state-metrics

Look for output similar to this:

kube-system   kube-state-metrics-5c6f5cb9b5-pclhh   2/2   Running   4   4d   172.17.0.3   minikube

Get the New Relic pods that are running on the same node as kube-state-metrics:

$kubectl describe node minikube | grep newrelic-infra

Look for output similar to this:

default   newrelic-infra-5wcv6   100m (5%)   0 (0%)   100Mi (5%)   100Mi (5%)

Retrieve the logs for the nodes by running:

$kubectl logs newrelic-infra-5wcv6
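If you prefer to grab the node name in one step instead of reading the wide output, something like this should work, assuming kube-state-metrics carries the common app.kubernetes.io/name label (adjust the selector if your deployment is labelled differently):

$kubectl get pods --all-namespaces -l app.kubernetes.io/name=kube-state-metrics -o jsonpath="{.items[0].spec.nodeName}"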
To get the logs from a pod running on a master node:
Get the nodes that are labelled as master:
$kubectl get nodes -l node-role.kubernetes.io/master=""

Or,

$kubectl get nodes -l kubernetes.io/role="master"

Look for output similar to this:

NAME                         STATUS   ROLES    AGE   VERSION
ip-10-42-24-4.ec2.internal   Ready    master   42d   v1.14.8

Get the New Relic pods that are running on one of the nodes returned in the previous step:

$kubectl get pods --field-selector spec.nodeName=ip-10-42-24-4.ec2.internal -l name=newrelic-infra --all-namespaces

Look for output similar to this:

newrelic-infra-whvzt

Retrieve the logs for the nodes by running:

$kubectl logs newrelic-infra-whvzt
For troubleshooting help, see Not seeing data or Error messages.