Dynatrace / Kubeshark / Openshift compatibility #1467
Hi @alongir, I'm unable to reproduce the issue on a simple non-STS, single-zone (2-replica) ROSA (https://aws.amazon.com/es/rosa/) managed OpenShift cluster with the versions below, on which I installed the latest Dynatrace and then Kubeshark:

```
% oc version
Client Version: 4.14.3
Kustomize Version: v5.0.1
Server Version: 4.12.19
Kubernetes Version: v1.25.8+37a9a08
```

As you can see, all Kubeshark and Dynatrace pods are running successfully, without any errors:

```
% oc get pods --all-namespaces | grep -E 'kubeshark|dynatrace'
default     kubeshark-front-647bcc7f66-nfr2j        1/1   Running   0              73m
default     kubeshark-hub-6f68c99d8-xtd8l           1/1   Running   0              73m
default     kubeshark-worker-daemon-set-6pkhm       2/2   Running   2 (46m ago)    73m
default     kubeshark-worker-daemon-set-7qhz4       2/2   Running   1 (42m ago)    73m
default     kubeshark-worker-daemon-set-k5fwz       2/2   Running   1 (26m ago)    68m
default     kubeshark-worker-daemon-set-nmntq       2/2   Running   0              73m
default     kubeshark-worker-daemon-set-pbxjj       2/2   Running   4 (9m28s ago)  73m
default     kubeshark-worker-daemon-set-vbkd7       2/2   Running   1 (70m ago)    73m
default     kubeshark-worker-daemon-set-wd999       2/2   Running   0              73m
dynatrace   dynatrace-operator-77fdbcb56-vb8tp      1/1   Running   0              89m
dynatrace   dynatrace-webhook-5dd6dcc547-5dvmh      1/1   Running   0              89m
dynatrace   dynatrace-webhook-5dd6dcc547-6btc7      1/1   Running   0              89m
dynatrace   my-dynatrace-openshift-activegate-0     1/1   Running   0              87m
dynatrace   my-dynatrace-openshift-oneagent-4qjpj   1/1   Running   0              87m
dynatrace   my-dynatrace-openshift-oneagent-cxbxz   1/1   Running   0              87m
dynatrace   my-dynatrace-openshift-oneagent-glchh   1/1   Running   0              87m
dynatrace   my-dynatrace-openshift-oneagent-m25dj   1/1   Running   0              87m
dynatrace   my-dynatrace-openshift-oneagent-vf46k   1/1   Running   0              87m
```

I even double-checked that both the Dynatrace and Kubeshark agents are running on the worker node:

```
% oc get nodes
NAME                                            STATUS   ROLES                 AGE     VERSION
ip-10-0-162-155.eu-central-1.compute.internal   Ready    control-plane,master  3h57m   v1.25.8+37a9a08
ip-10-0-183-189.eu-central-1.compute.internal   Ready    control-plane,master  3h56m   v1.25.8+37a9a08
ip-10-0-187-152.eu-central-1.compute.internal   Ready    infra,worker          3h27m   v1.25.8+37a9a08
ip-10-0-202-225.eu-central-1.compute.internal   Ready    infra,worker          3h26m   v1.25.8+37a9a08
ip-10-0-211-103.eu-central-1.compute.internal   Ready    worker                3h40m   v1.25.8+37a9a08
ip-10-0-213-122.eu-central-1.compute.internal   Ready    worker                3h41m   v1.25.8+37a9a08
ip-10-0-215-169.eu-central-1.compute.internal   Ready    control-plane,master  3h57m   v1.25.8+37a9a08
```

```
% oc debug node/ip-10-0-213-122.eu-central-1.compute.internal
Temporary namespace openshift-debug-7rbmz is created for debugging node...
Starting pod/ip-10-0-213-122eu-central-1computeinternal-debug-tvqkk ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.213.122
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# pgrep -l '\bworker|dynatrace'
30186 dynatrace-opera
107890 worker
sh-4.4#
```

Could you please give me a contact for the person who reported the issue, so I can try to figure out more details on my own?
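The readiness check above (grepping `oc get pods` output) can be scripted so it can run in CI or on a schedule. Below is a minimal sketch (a hypothetical helper, not part of Kubeshark or Dynatrace tooling) that parses the `READY` column, flags any pod that is not fully ready, and exits non-zero if one is found. In practice you would pipe `oc get pods --all-namespaces | grep -E 'kubeshark|dynatrace'` into it.

```shell
#!/bin/sh
# check_ready.sh (hypothetical helper): reads `oc get pods --all-namespaces`
# style output on stdin. Column 3 is READY ("ready/total", e.g. "1/2");
# print each pod whose ready count is below its total, and exit 1 if any.
awk '{
  split($3, r, "/")            # r[1] = ready containers, r[2] = total
  if (r[1] != r[2]) {
    print "NOT READY:", $2     # $2 = pod name
    bad = 1
  }
} END { exit bad }'
```

For example, `oc get pods --all-namespaces | grep -E 'kubeshark|dynatrace' | sh check_ready.sh` prints nothing and exits 0 when every pod in the listing above is fully ready.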
Update: Exclusion of anything inside a specific namespace, as described here https://docs.dynatrace.com/docs/shortlink/annotate#exclude-specific-namespaces-from-being-monitored, does not work on a vanilla Kubernetes cluster either (EKS in our case, used for testing), despite the evidence below that the configuration described above was applied to the cluster (by …
However, as can be seen in the screenshot below, "Deep monitoring" on the Kubeshark …
Next step(s): I've provided everything above today, along with a link on …
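For reference, the exclusion mechanism in the linked Dynatrace docs boils down to annotating the workloads (or the namespace) so the operator skips OneAgent injection. A rough sketch of the pod-level variant is shown below; the `oneagent.dynatrace.com/inject` annotation key is an assumption taken from the Dynatrace operator documentation, and it only applies in certain deployment modes, so verify the exact key and applicability against the linked page for your operator version.

```yaml
# Hedged sketch: exclude a Kubeshark workload from Dynatrace OneAgent
# injection. The annotation key is assumed from the Dynatrace operator
# docs and may differ per operator version / deployment mode.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubeshark-hub
  namespace: default
spec:
  template:
    metadata:
      annotations:
        oneagent.dynatrace.com/inject: "false"
```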
I got confirmation from Dynatrace support that pod/process exclusion from monitoring (neither per-pod nor for a whole namespace) does not work on the Dynatrace …
So if the customer uses the 1st one, there is no way to configure any exclusion for them, unless they are ready to switch to the 2nd one. That switch is actually quite easy and straightforward as well, just a matter of running a few commands, which I'm going to figure out, test and provide a little later today. Of course, it does not cause the loss of any existing monitoring data or features.
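The two deployment modes discussed here are selected in the DynaKube custom resource of the Dynatrace operator. As a hedged sketch (the `classicFullStack` and `cloudNativeFullStack` field names come from the dynatrace-operator DynaKube CRD; the `apiUrl` placeholder and exact API version should be checked against your operator version), switching modes amounts to swapping the `oneAgent` section:

```yaml
# Hedged sketch of a DynaKube spec fragment. The resource name matches the
# pods seen above; field names are assumed from the dynatrace-operator CRD
# and should be verified for your operator version.
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: my-dynatrace-openshift
  namespace: dynatrace
spec:
  apiUrl: https://<environment-id>.live.dynatrace.com/api
  oneAgent:
    # classicFullStack: {}       # mode 1: Classic Full Stack
    cloudNativeFullStack: {}     # mode 2: Cloud-Native Full Stack
```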
Finally, unfortunately, I was unable to get the 2nd deployment mode above (Cloud-Native Full Stack) deployed, connected and configured on either OpenShift or a vanilla EKS Kubernetes cluster. In both cases it succeeds if …
I've contacted their support about this issue and will confirm whether it works once they provide me with a solution.
A problem was reported when Kubeshark was installed on an OpenShift cluster where Dynatrace was also installed.
The environment where the problem was reported: