Sender | Message | Time |
---|---|---|
4 Sep 2023 | ||
orionhungary joined the room. | 12:10:59 | |
5 Sep 2023 | ||
cyrusmc_22944 | Question regarding the OpenSearch output. It appears it does not support the options required for configuring against an AWS OpenSearch cluster (such as the endpoint and credentials) | 16:15:02
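For context, this is roughly the shape of an OpenSearch Output the operator's CRD does expose: host, port, scheme, and basic-auth credentials pulled from a Secret. The AWS-specific options the question refers to (IAM/SigV4 request signing for Amazon OpenSearch Service) are the gap. All names, the endpoint, and the Secret below are hypothetical; this is a sketch based on the documented Output fields, not a tested config.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: opensearch-out        # hypothetical name
  namespace: logging          # hypothetical namespace
spec:
  opensearch:
    host: search-mydomain.eu-west-1.es.amazonaws.com  # hypothetical AWS endpoint
    port: 443
    scheme: https
    user: admin               # basic auth is exposed; IAM/SigV4 signing is not
    password:
      valueFrom:
        secretKeyRef:
          name: opensearch-credentials   # hypothetical Secret
          key: password
```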
6 Sep 2023 | ||
f.stoeber joined the room. | 14:53:18 | |
f.stoeber | Hello, I am looking for some advice. I am currently debugging a large logging system set up with the logging-operator. We have around 180 Flow+Output configs in this environment. We have been experiencing stability issues for a long time. We activated buffers for the "big log creators", but that did not improve the situation. Nevertheless, we have a workaround in place, so the situation is not dramatic at the moment, though it bothers us, as we are also losing some logs. I think the fluentds cannot forward the large volume of logs at the moment, so I am currently trying to activate a multi-worker setup for them. To do that, I changed the following configs:
- fluentd.rootDir=/buffers
- fluentd.workers=2
Unfortunately, after activating these settings, the fluentd processes die with exit code 137 (I did a tail on the log file in the Pod), and the Pod crashes immediately after that with exit code 2 (I think caused by the crashed process). These settings work without a problem on a development cluster, and I cannot find a hint in the logs. I am fairly sure it is not a memory problem, as I have already raised the memory limit of the fluentd Pods to 8GB, and the issue occurs while initializing all the fluentd plugins (flows/outputs). Does anybody have additional ideas? Maybe I am just missing a configuration flag or something like that. | 15:12:46
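For readers following along, the two settings mentioned map to fields on the Logging resource (or the matching Helm values). A minimal sketch, assuming the 4.x Logging CRD; the resource name and control namespace are hypothetical, while rootDir, workers, and the memory limit mirror the message:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: example-logging        # hypothetical name
spec:
  controlNamespace: logging    # hypothetical control namespace
  fluentd:
    rootDir: /buffers          # buffer paths live under this root, per worker
    workers: 2                 # enables fluentd multi-worker mode
    resources:
      limits:
        memory: 8Gi            # the limit mentioned in the message
```

As a side note, exit code 137 is a SIGKILL, which on Kubernetes most often means the OOM killer, although the message argues the memory limit had already been raised.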
7 Sep 2023 | ||
pepov | Multiworker setup | 10:10:08 |
genofire joined the room. | 10:21:29 | |
12 Sep 2023 | ||
cnfan_01187 joined the room. | 19:18:05 | |
15 Sep 2023 | ||
dazzling_peacock_02931 joined the room. | 13:37:09 | |
18 Sep 2023 | ||
gdzcorp | Hey - any idea why we'd be getting
when trying to configure the logging operator output to Kinesis Firehose? It has been a supported plugin since logging operator 3.5 from what I can see. I don't see it in the CRD spec either - https://github.com/kube-logging/logging-operator/blob/release-4.3/config/crd/bases/logging.banzaicloud.io_outputs.yaml | 10:03:01
WrenIX | You are correct, the Golang CRD types do not have this entry: https://github.com/kube-logging/logging-operator/blob/4779b50c5e20c618f743432840bce143aa7e5ad7/pkg/sdk/logging/api/v1beta1/output_types.go#L49 | 10:04:49
WrenIX | My suggestion would be to create an issue and maybe a PR to add such a line in the Golang types (and regenerate the CRD YAML) | 10:07:03
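To illustrate the suggestion: once such a field is added to output_types.go and the CRD YAML is regenerated, an Output could reference it roughly like this. The kinesisFirehose key and its parameters are hypothetical here, modeled on the fluent-plugin-kinesis parameter names; they are not in the linked release-4.3 CRD.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: firehose-out                 # hypothetical name
spec:
  kinesisFirehose:                   # hypothetical: not present in the release-4.3 CRD
    delivery_stream_name: my-stream  # assumed fluent-plugin-kinesis parameter name
    region: us-east-1                # assumed parameter name
```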
19 Sep 2023 | ||
kristof_63275 joined the room. | 13:34:58 | |
20 Sep 2023 | ||
gholie | Is there a best practice for logFlows? Right now I have a defaultFlow that should catch everything, and I leave it up to app teams if they want a custom flow. In our cluster each app team has its own namespace; should I set up flows per namespace instead of a defaultFlow? | 08:01:09
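Since Flow resources are namespaced and only match logs from pods in their own namespace, the per-namespace option described in the question would mean one Flow per team namespace, each pointing at a shared ClusterOutput. A sketch under that assumption; all names are hypothetical:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: team-a-flow          # hypothetical name
  namespace: team-a          # hypothetical app-team namespace
spec:
  match:
    - select: {}             # everything produced in this namespace
  globalOutputRefs:
    - default-output         # hypothetical ClusterOutput
```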
gholie | And what is the way to update CRDs when installing the operator via Helm charts? I'm noticing that the Helm tracking annotations are not set up, which leads to issues when updating (helm erroring out on install with:
This is on version 4.2.1; I have not tested 4.3 yet | 08:34:47
gholie | createCustomResource is set to true in the chart | 08:36:06 |
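For reference, the tracking metadata that Helm 3 checks before adopting an already-existing resource looks like the following; when CRDs were created outside the release, these missing keys are what the install error complains about. The release name and namespace below are hypothetical:

```yaml
# Metadata Helm 3 expects on a resource it is asked to manage
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: logging-operator   # hypothetical release name
    meta.helm.sh/release-namespace: logging       # hypothetical namespace
```

Note also that Helm does not upgrade anything shipped in a chart's crds/ directory, which is a common reason CRDs fall behind the operator version.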
pepov | Is there a best practice for logFlows | 08:43:47 |
genofire | We have also created ClusterFlows, so that a pod just needs a special label to get its logs parsed as JSON or logfmt (and teams do not need to create new flows if they do nothing special), and we provide a ClusterOutput | 08:46:49
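A sketch of the pattern described: a cluster-wide flow that selects pods by a label and parses their logs as JSON, routed to a shared ClusterOutput. The label key, resource names, and parser settings are assumptions based on the logging-operator filter docs, not the poster's actual config:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: json-logs              # hypothetical name
  namespace: logging           # ClusterFlows live in the control namespace
spec:
  match:
    - select:
        labels:
          log-format: json     # hypothetical opt-in label
  filters:
    - parser:
        remove_key_name_field: true
        reserve_data: true
        parse:
          type: json
  globalOutputRefs:
    - default-output           # hypothetical ClusterOutput
```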
sagikazarmark | Hey folks! I'm happy to announce that the Logging Operator has been accepted by the CNCF as a Sandbox project. We would like to thank you for your support. We couldn't have done it without you! More details and updates: https://github.com/orgs/kube-logging/discussions/1485 | 10:22:25 |
WrenIX | Any idea when and where it will appear in the landscape https://landscape.cncf.io/ ? | 10:36:17
sg2566 | After the onboarding it will be in the Observability and Analysis - Logging section | 11:12:11
pepov | And what is the way to update CRDs when | 12:29:26 |
22 Sep 2023 | ||
karvik.kimalane joined the room. | 18:07:24 |