
Logging Operator

183 Members
5 Servers



16 Nov 2023
jwitrick_91242 joined the room. 17:46:53
jwitrick_91242: hey all, not sure what the overall process is, but I created a PR in logging-operator: https://github.com/kube-logging/logging-operator/pull/1582 17:49:47
17 Nov 2023
pepov: Adding Opensearch field endpoint to supp... 06:51:07
xinity77: hey folks, since we run fluentbit as a DaemonSet, how would it be possible to offer some HA for the central fluentd instance? 09:07:50
pepov: hey folks, since we run fluentbit a… 09:18:38
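For context on what HA for the central fluentd can look like: the operator runs fluentd as a StatefulSet, and its Logging CRD exposes a scaling section. A minimal sketch, assuming the scaling.replicas field from the kube-logging docs; names here are illustrative, not from this conversation:

```
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging
spec:
  controlNamespace: logging
  fluentbit: {}
  fluentd:
    # Multiple fluentd replicas so a single pod failure does not
    # stop central forwarding; each replica keeps its own buffer.
    scaling:
      replicas: 2
```

The intent is that losing one fluentd pod degrades rather than halts log delivery.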
21 Nov 2023
xinity77: is there any workaround to avoid modifying the sysctl configuration, on GKE for example, when using the logging operator? 10:05:27
pepov: is there any workaround to avoid… 13:18:45
ivoriesablaze joined the room. 19:07:00
22 Nov 2023
xinity77: how do I efficiently debug a flow? I'd like to make sure a configured flow is correctly grabbing logs; any hints on this? 11:14:43
pepov: how do I efficiently debug a flow? I'd… 12:35:46
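One way to verify that a flow actually matches logs, sketched from the operator's documented stdout filter (not from pepov's truncated reply above); the namespace, labels, and output name are hypothetical:

```
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: debug-flow
  namespace: my-app        # hypothetical namespace
spec:
  filters:
    # Echo every record this flow matches to fluentd's stdout,
    # so `kubectl logs` on the fluentd pod shows what is grabbed.
    - stdout: {}
  match:
    - select:
        labels:
          app: my-app      # hypothetical selector
  localOutputRefs:
    - my-output            # hypothetical Output name
```

Tail the fluentd pod with kubectl logs, then remove the filter once the flow is confirmed.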
24 Nov 2023
nakof_75462 joined the room. 14:24:47
25 Nov 2023
arta.eth joined the room. 03:25:25
quantumcat40 joined the room. 13:47:17
quantumcat40: We currently have a standalone fluentbit deployment in Kubernetes; it picks up application metrics with parse rules and sends them directly to InfluxDB. We want to use the logging operator but don't see a supported output for InfluxDB. Is this possible, and if so, would anyone have an example? 13:49:38
WrenIX: Fluentd generates Prometheus metrics based on the [flow-filter](https://kube-logging.dev/docs/configuration/plugins/filters/prometheus/) (and the default fluentbit metrics exist); you could configure your InfluxDB to scrape them. 22:36:12
WrenIX: https://docs.influxdata.com/influxdb/v2/write-data/no-code/scrape-data/manage-scrapers/ 22:39:39
WrenIX: Or use Telegraf for that. 22:39:55
WrenIX: https://github.com/influxdata/telegraf/blob/master/plugins/inputs/prometheus/README.md 22:40:33
WrenIX: PS: maybe you also want to take a look at prometheus-operator.dev; it has many benefits inside Kubernetes (e.g. the Helm chart of the logging-operator can deploy a ServiceMonitor to do that job in the Kubernetes operator-pattern way). 22:49:14
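For reference, a minimal sketch of the flow-filter WrenIX links above: a Flow with the prometheus filter that counts matched records and exposes them on fluentd's Prometheus metrics endpoint, which InfluxDB's scraper or Telegraf could then collect. The metric name and selector are illustrative:

```
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: app-metrics
spec:
  filters:
    # Count every record passing through this flow and expose
    # it as a counter on fluentd's metrics endpoint.
    - prometheus:
        metrics:
          - name: app_log_records_total
            type: counter
            desc: Records matched by this flow
  match:
    - select:
        labels:
          app: my-app   # hypothetical selector
  localOutputRefs:
    - my-output         # hypothetical Output name
```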
27 Nov 2023
ivoriesablaze: Hello, I have a Rancher instance with Logging Operator set up, sending to an Elasticsearch server. When I tail /fluentd/log/out in the fluentd pod, I get error="400 - Rejected by Elasticsearch" location=nil. In the clusteroutput, I have

```
spec:
  elasticsearch:
    buffer:
      timekey: 1m
      timekey_use_utc: true
      timekey_wait: 30s
    host: ---------------
    include_timestamp: true
    index_name: ----------
    log_es_400_reason: true
    port: 9200
```

Despite the fact that I have log_es_400_reason: true, I still get the same message without a reason. I'll add that at least some of the logs do come through to Elasticsearch. Does anyone know what the next step I should take is? 14:04:17
pepov: Hello, I have a Rancher instance with… 15:25:14
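Not from pepov's truncated reply, but one frequent cause of a blanket 400 - Rejected by Elasticsearch against Elasticsearch 8.x is the deprecated _type field that fluent-plugin-elasticsearch still sends by default; the plugin's suppress_type_name option is exposed by the operator's elasticsearch output and is worth trying. The host and index below are placeholders:

```
spec:
  elasticsearch:
    host: es.example.com     # placeholder
    port: 9200
    index_name: my-index     # placeholder
    log_es_400_reason: true
    # Elasticsearch 8.x rejects requests that still carry _type;
    # suppressing it is a common fix for otherwise silent 400s.
    suppress_type_name: true
```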
gingimli: Hey, is there a way to remove or override the command that gets applied to the fluent-bit Pods?

```
containers:
- command:
  - /fluent-bit/bin/fluent-bit
  - -c
  - /fluent-bit/etc-operator/fluent-bit.conf
```
21:03:57
28 Nov 2023
pepov: Hey, is there a way to remove or… 09:30:55
29 Nov 2023
gamba47 joined the room. 13:59:58


