!QMMeAXdkLarsXxuGRg:matrix.org

Logging Operator

185 Members
7 Servers



14 Dec 2023
@_discord_888751691606937651:t2bot.iosteveizzle Hey, I used the logging-operator-logging Helm chart for configuring the logging object. Is there also an OCI replacement for this chart? Or is this chart deprecated/not recommended? I could not find anything about that one... 15:09:34
@_discord_484298787846881280:t2bot.iopepov Hey, i used a logging-operator-logging 15:23:17
15 Dec 2023
@_discord_1185318657115570236:t2bot.iohaxodon_35576 joined the room.20:33:30
17 Dec 2023
@wrenix:chaos.fyiWrenIX* Logging-operator-logging is legacy; it is part of the logging-operator chart in the OCI repo (the relevant keys sit under logging in values.yaml - logging.enabled is the important one :) )23:58:38
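For reference, a minimal sketch of what WrenIX describes: the OCI logging-operator chart can create the Logging resource itself, replacing the legacy logging-operator-logging chart. Key names below are taken from the discussion and are illustrative; verify them against the chart's own values.yaml for your version.

```yaml
# values.yaml fragment for the logging-operator Helm chart (illustrative)
logging:
  # When enabled, the chart creates the Logging custom resource itself,
  # so the separate logging-operator-logging chart is no longer needed
  enabled: true
  # Further Logging spec fields (fluentd, fluentbit, ...) nest under here
  fluentd: {}
  fluentbit: {}
```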
18 Dec 2023
@_discord_181745896105574400:t2bot.iozadkiel#8960 Hey there 🙂 We have some extremely verbose pods in some namespaces without flows/outputs, we don't want them to go in the logging pipeline as it adds Gigs of useless load. Any idea how to tackle that? 16:06:55
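One way to keep such pods out of the pipeline entirely is to exclude their log files at the collector, so fluentbit never tails them. This is a sketch only: the namespace names are placeholders, and the inputTail/Exclude_Path field names should be checked against the Logging CRD reference for your operator version.

```yaml
# Illustrative: exclude noisy namespaces at the fluentbit tail input,
# before logs ever enter the fluentd pipeline
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging
spec:
  controlNamespace: logging
  fluentbit:
    inputTail:
      # Container log files are named <pod>_<namespace>_<container>-<id>.log,
      # so a namespace can be matched in the glob (namespaces are hypothetical)
      Exclude_Path: "/var/log/containers/*_noisy-namespace-1_*.log,/var/log/containers/*_noisy-namespace-2_*.log"
  fluentd: {}
```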
19 Dec 2023
@_discord_888751691606937651:t2bot.iosteveizzle Hey, I have a problem, and your bug tracker says I should first contact you via chat to rule out config errors:

Multiple times I have observed that when I make simple changes to the Logging CR, fluentd breaks afterwards with the same error and stops working:

.....
2023-12-19 09:06:17 +0000 [warn]: #0 [main_forward] unexpected error before accepting TLS connection by OpenSSL addr="10.194.147.129" host="10.194.147.129" port=37090 error_class=OpenSSL::SSL::SSLError error="SSL_accept returned=1 errno=0 peeraddr=10.194.147.129:37090 state=error: certificate verify failed (self-signed certificate in certificate chain)"
2023-12-19 09:06:17 +0000 [warn]: #0 [main_forward] unexpected error before accepting TLS connection by OpenSSL addr="10.194.147.89" host="10.194.147.89" port=41520 error_class=OpenSSL::SSL::SSLError error="SSL_accept returned=1 errno=0 peeraddr=10.194.147.89:41520 state=error: certificate verify failed (self-signed certificate in certificate chain)"
2023-12-19 09:06:18 +0000 [warn]: #0 [main_forward] unexpected error before accepting TLS connection by OpenSSL addr="10.194.147.197" host="10.194.147.197" port=36424 error_class=OpenSSL::SSL::SSLError error="SSL_accept returned=1 errno=0 peeraddr=10.194.147.197:36424 state=error: certificate verify failed (self-signed certificate in certificate chain)"
.....

This time I only changed bufferVolumeResources!

When I do a simple rollout restart of the fluentd StatefulSet, it simply works again... Any ideas?
09:31:23
@_discord_306521119928877056:t2bot.iooverorion TLS error (self-signed certificate) 09:48:52
@_discord_1186669588516192286:t2bot.iotru64guru_78072 joined the room.14:01:44
@_discord_945803954254647316:t2bot.iosdesbure joined the room.15:05:05
@_discord_374133986944876545:t2bot.iopafchuimort joined the room.18:14:18
21 Dec 2023
@_discord_271774181295652864:t2bot.ioquantumcat40 Hi, we are using the Logging Operator that is bundled with Rancher. We have an issue where the Logging Operator sends to ELK for a few days, but then it suddenly stops working. Restarting the fluentd root container seems to get things running again. We also get this error constantly:
 
2023-12-21 15:16:47 +0000 [error]: #0 [clusterflow:cattle-logging-system:cluster-flow:clusteroutput:cattle-logging-system:group-elasticsearch] Could not bulk insert to Data Stream: group-kubernetes-dev {"took"=>4325, "errors"=>true, "items"=>[{"create"=>{"_index"=>".ds-group-kubernetes-dev-2023.12.20-000018", "_id"=>"GWvzsdffnrCtI94CR-yF", "status"=>400, "error"=>{"type"=>"document_parsing_exception", "reason"=>"[1:308] object mapping for [kubernetes.labels.app] tried to parse field [app] as object, but found a concrete value"}}}, {"create"=>{"_index"=>".ds-group-kubernetes-dev-2023.12.20-000018", "_id"=>"GmvzjsdffnrCtI94CR-yF", "status"=>400, "error"=>{"type"=>"document_parsing_exception", "reason"=>"[1:361] object mapping for [kubernetes.labels.app] tried to parse field [app] as object, but found a concrete value"}}}, {"create"=>{"_index"=>".ds-group-kubernetes-dev-2023.12.20-000018", "_id"=>"G2vzsdfdfrCtI94CR-yF", "status"=>400, "error"=>{"type"=>"document_parsing_exception", "reason"=>"[1:308] object mapping for [kubernetes.labels.app] tried to parse field [app] as object, but found a concrete value"}}}, {"create"=>{"_index"=>".ds-group-kubernetes-dev-2023.12.20-000018", "_id"=>"HGvzjsdfffff4CR-yF", "status"=>400, "error"=>{"type"=>"document_parsing_exception", "reason"=>"[1:361] object mapping for [kubernetes.labels.app] tried to parse field [app] as object, but found a concrete value"}}}]}
15:27:39
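The document_parsing_exception above typically means some pods carry an `app` label as a plain string while others carry `app.kubernetes.io/*` labels that Elasticsearch expands into a nested `app` object, so the same field gets two incompatible mappings. One common workaround is to de-dot label keys before they reach Elasticsearch. This is a sketch: the filter name comes from the Logging operator's Flow filter list and should be verified for your version, and the output name is taken from the error message above.

```yaml
# Illustrative: replace dots in label keys so app.kubernetes.io/name
# becomes app-kubernetes-io/name instead of a nested object under `app`
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: dedot-labels
  namespace: cattle-logging-system
spec:
  filters:
    - dedot:
        de_dot_separator: "-"
        de_dot_nested: true   # also process nested fields such as labels
  match:
    - select: {}
  globalOutputRefs:
    - group-elasticsearch
```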
22 Dec 2023
@_discord_1187656340957581356:t2bot.iodipak_49549 joined the room.07:22:37
@_discord_989385913601773569:t2bot.iodipak140 joined the room.07:26:42
@_discord_989385913601773569:t2bot.iodipak140 Hi, for K8s version 1.25 and above, PodSecurityPolicy in API version "policy/v1beta1" has been removed, so I am getting the following error; this has also meant that my current logs have stopped flowing. 07:28:08
@_discord_989385913601773569:t2bot.iodipak140 dipaksisodiya@Dipaks-MacBook-Air logging % helm upgrade --install --wait --create-namespace --namespace logging logging-operator oci://ghcr.io/kube-logging/helm-charts/logging-operator --version=4.5.0
Pulled: ghcr.io/kube-logging/helm-charts/logging-operator:4.5.0
Digest: sha256:4339f18a2d4ea2e2dff8ced57137340b5eef3f0fd22f793280f406c5e22cc087
Error: UPGRADE FAILED: unable to build kubernetes objects from current release manifest: resource mapping not found for name: "psp.logging-operator" namespace: "logging" from "": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
ensure CRDs are installed first
07:28:51
@_discord_349665824086294539:t2bot.ioplaymtl * Hey dipak140 , i solved this error with a documentation from rancher: https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/pod-security-standards#cleaning-up-releases-after-a-kubernetes-v125-upgrade 11:42:57
@_discord_989385913601773569:t2bot.iodipak140 Hey playmtl It worked! Thanks for the quick response! 12:08:26
26 Dec 2023
@_discord_1176596701780381747:t2bot.ioblagi_65247 joined the room.22:31:38
@_discord_1176596701780381747:t2bot.ioblagi_65247 Please don't hold it against me if this question seems a bit naive, but I've been looking for an answer for quite some time and just can't seem to find it anywhere.
I need to install an EFK stack, but I want Fluentd to send logs directly to Elasticsearch, instead of creating extra copies on disk like, for example, in /var/lib/docker.
I'm looking for something more along the lines of /var/lib/containerd/{pod_id}/logs or /var/log/containers/{pod_id}/log.

Otherwise, the logs are taking up too much space on my disk. Also, I'm using a storageClass and not an emptydir, so I need the logs on Elasticsearch and stored on an external disk, or rather, storageClass. Is this something kube-logging can handle?

Thank you very much!
22:49:36
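With the Logging operator, fluentbit tails the existing node-level files under /var/log/containers and forwards them to fluentd, which ships straight to Elasticsearch; no additional full copies are kept beyond fluentd's (bounded) buffer. A sketch of the Elasticsearch output follows; host, namespace, and buffer sizes are placeholders to adapt.

```yaml
# Illustrative: ship logs directly to Elasticsearch with a bounded disk buffer
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: elasticsearch
  namespace: logging
spec:
  elasticsearch:
    host: elasticsearch-master.elastic.svc.cluster.local  # placeholder service
    port: 9200
    scheme: https
    ssl_verify: false
    buffer:
      # Cap the local buffer so logs cannot fill the backing volume
      total_limit_size: 2GB
      flush_interval: 10s
```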
27 Dec 2023
@_discord_484298787846881280:t2bot.iopepov Please don't hold it against me if this 15:04:00
@_discord_484298787846881280:t2bot.iopepov * Hey! I hope everyone had / is having a wonderful Christmas! 🎄

📚 Docs are now updated and available for 4.5: https://kube-logging.dev/docs/whats-new/

🙏 Huge thanks for everyone involved!
16:23:59
3 Jan 2024
@_discord_626991723595300885:t2bot.iosunc363587351 joined the room.05:18:03
@_discord_1167395460399501362:t2bot.iowhok8s_08944 joined the room.10:19:36
@_discord_713124101169872981:t2bot.iowhok8s joined the room.10:27:32
@_discord_344992579026419714:t2bot.ioeudyptes_mosleyi joined the room.15:11:43
4 Jan 2024
@_discord_1192578866053664828:t2bot.iocelestial_dolphin_45229 joined the room.21:23:32
5 Jan 2024
@_discord_505787041024442369:t2bot.iosg2566 the message content is not JSON, so you need to parse your content. You need 2 parsers (these are just dummy values)
1. /^(?<field_1>[^ ]*) (?<timestamp>[^ ]*) (?<field_2>[^ ]*) (?<field_3>[^ ]*) APP-METRIC (?<body>.*)....
2. JSON parser for the <body> field
21:19:55
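sg2566's two-step parse can be expressed as two chained parser filters in a Flow: a regexp parser that pulls the line apart, then a JSON parser applied to the extracted body field. As in the message above, the field names and expression are dummy values; the filter spec should be checked against the Logging operator's parser filter reference.

```yaml
# Illustrative: regexp parse of the whole line, then JSON parse of `body`
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: app-metric-parse
spec:
  filters:
    - parser:
        key_name: message        # the raw log line
        reserve_data: true       # keep the other record fields
        parse:
          type: regexp
          expression: '/^(?<field_1>[^ ]*) (?<timestamp>[^ ]*) (?<field_2>[^ ]*) (?<field_3>[^ ]*) APP-METRIC (?<body>.*)$/'
    - parser:
        key_name: body           # decode the captured body as JSON
        reserve_data: true
        parse:
          type: json
  localOutputRefs:
    - my-output                  # hypothetical output name
```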


