
#ceph

59 Members
10 Servers



24 Aug 2023
@ifiguero:matrix.org ifiguero There is a dedicated switch and NIC for Ceph, currently with the "normal" MTU. Before setting up Ceph I did some testing and there was no noticeable improvement for different MTU sizes; 1500/3000/5000 all gave similar-ish transfer speeds with iperf. But maybe I'm missing something. 16:32:25
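(For reference, a minimal sketch of how such an MTU comparison might be run with iperf3; the interface name and peer address below are placeholders, not taken from the conversation:)

    ip link set dev eth0 mtu 9000    # placeholder NIC; repeat the test for 1500/3000/5000/9000
    iperf3 -s                        # on the receiving node
    iperf3 -c 10.0.0.2 -t 30         # on the sending node; placeholder peer address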
@issoumachankla:matrix.org IsaBang
In reply to @ronny:synapse.dst.fjit.no

I would guess everything is possible. Also, the performance would be absolutely abysmal. Probably single-digit IOPS. Making stone tablets by hand may have better throughput.

This is definitely far into "hell no!" territory.

Thanks for that answer

What happens if I have a 10gig network between 2 servers in the same server room, and the last one in another location (with the cursed VPN latency)?

16:58:55
@ronny:synapse.dst.fjit.no sep All hosts must have ack'd the write, so all writes will be as slow as the VPN latency. All reads come from the primary OSD, so 1/3 of reads are as slow as the VPN. Just don't 17:41:48
@ronny:synapse.dst.fjit.no sep Ceph is good with many hosts. 3 is already cutting to the bone. 17:42:11
@issoumachankla:matrix.org IsaBang
In reply to @ronny:synapse.dst.fjit.no
All hosts must have ack'd the write, so all writes will be as slow as the VPN latency. All reads come from the primary OSD, so 1/3 of reads are as slow as the VPN.

Just don't
interesting
17:42:58
@ronny:synapse.dst.fjit.no sep Latency is the Achilles heel of distributed storage systems. Keeping it low is the single most important thing for performance. 17:45:36
27 Aug 2023
@kron4eg:matrix.org kron4eg changed their profile picture. 09:47:32
29 Aug 2023
@ifiguero:matrix.org ifiguero I haven't found any direct reference to this, but I was playing with CephFS creation and noticed that each CephFS requires a dedicated MDS, and that each node can run a single MDS. So you could have at most a single CephFS per node (minus one, because you must have a standby MDS node). 16:41:00
@ifiguero:matrix.org ifiguero Am I wrong about this? 23:52:19
@ifiguero:matrix.org ifiguero Because I never found any direct reference about this, but if I create the same number of CephFS filesystems as nodes, it gives a warning that the cluster lacks an MDS. 23:53:09
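(A minimal sketch of the experiment described above, assuming a cephadm-managed cluster; the filesystem and host names are placeholders:)

    ceph fs volume create fs1
    ceph fs volume create fs2
    ceph orch apply mds fs1 --placement="1 host1"   # one MDS daemon for this filesystem
    ceph fs status                                  # shows active and standby MDS daemons
    ceph health detail                              # warns if no standby MDS is available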
1 Sep 2023
@ifiguero:matrix.org ifiguero Now I'm stuck on a replay 17:09:33
@ifiguero:matrix.org ifiguero Why doesn't it provide progress for the task? 17:09:48
@ifiguero:matrix.org ifiguero How can you know if it will take an hour or a day? 17:10:12
@ifiguero:matrix.org ifiguero Well, it looks like it was stuck; it wasn't actually replaying anything. 18:07:42
@ifiguero:matrix.org ifiguero It was a network outage, and it seems that left the cluster in a deadlock of sorts. 18:08:06
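(A few commands that might help inspect an MDS stuck in replay; the daemon name is a placeholder:)

    ceph fs status                  # shows the MDS state, e.g. up:replay
    ceph health detail              # lists MDS-related warnings
    ceph daemon mds.host1 status    # run on the MDS host itself; placeholder daemon name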
4 Sep 2023
@dharjee:matrix.org dthpulse hi guys, I have a latest Pacific Ceph cluster with 25 OSDs (same size), 2 pools with replica size 3, and I'm trying to find the cause of why the OSDs aren't utilized evenly. Upmap is set to max deviation 1 and the balancer is enabled, but some OSDs' space utilization is in the range of 25% to 85%. I was thinking about increasing the PG num from 1024 to 2048, but not sure if this can help ... maybe you have some other suggestion? Thx 12:39:10
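(A minimal sketch of the checks and the pg_num change being floated above; the pool name is a placeholder and this is not a recommendation:)

    ceph osd df tree                                          # per-OSD space utilization
    ceph balancer status
    ceph config set mgr mgr/balancer/upmap_max_deviation 1   # the max deviation mentioned above
    ceph osd pool set mypool pg_num 2048                      # placeholder pool name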
15 Sep 2023
@wsgalaxy:matrix.org Chen Yuanrun joined the room. 03:49:46
23 Sep 2023
@webibowa:matrix.org joined the room. 17:26:27
@webibowa:matrix.org left the room. 17:26:36
8 Oct 2023
@issoumachankla:matrix.org IsaBang hello guys, some people here say that it is impossible to run a monitor and OSD with latency. I used: tc qdisc change dev enp4s0f0 root netem rate 600mbit delay 25ms and I got a solid 100 Mbit/s with rsync 02:37:54
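(The full netem setup that test implies, assuming no root qdisc existed beforehand; the peer address is a placeholder:)

    tc qdisc add dev enp4s0f0 root netem rate 600mbit delay 25ms   # 'add' instead of 'change' on a clean interface
    ping -c 5 10.0.0.2                                             # placeholder peer; confirm the added ~25 ms delay
    tc qdisc del dev enp4s0f0 root                                 # remove the emulated latency afterwards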
30 Oct 2023
@dharjee:matrix.org dthpulse The Ceph channel here looks dead ... 11:39:17
@ifiguero:matrix.org ifiguero heartbeat works alright 12:44:56
31 Oct 2023
@dharjee:matrix.org dthpulse
In reply to @ifiguero:matrix.org
heartbeat works alright
yep, I meant the community here is not responding; compared to the IRC Ceph channel, it looks dead
08:47:55
@ifiguero:matrix.org ifiguero I imagine that since the bridge went down, it happens like that 12:35:15
8 Nov 2023
@ser:sergevictor.eu ser(ial) changed their display name from Serge Victor to ser. 04:43:37
17 Nov 2023
@ronny:synapse.dst.fjit.no sep hello all. I have a cluster with 100% full pools, but the OSDs are not full, probably due to unbalanced hosts. I have reweighted some OSDs, but even with some free space things are still stuck. radosgw-admin gives no output. Is there some trigger that needs to happen for radosgw-admin to start functioning again? 19:15:11
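(Some commands that might help narrow this down; the reweight-by-utilization step is a common approach for unbalanced OSDs, not something confirmed in the thread:)

    ceph df               # pool usage and MAX AVAIL per pool
    ceph osd df tree      # per-OSD and per-host utilization
    ceph health detail    # look for POOL_FULL / OSD_FULL / OSD_NEARFULL flags
    ceph osd reweight-by-utilization 110   # cautiously move PGs off the fullest OSDs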
9 Dec 2023
@pcfe:pcfe.net pcfe joined the room. 15:10:27
11 Jan 2024
@capt_bolazzles:matrix.org left the room. 10:47:24
26 Jan 2024
@ser:sergevictor.eu ser(ial) changed their display name from ser to Dr Serge Victor. 09:28:28
@ser:sergevictor.eu ser(ial) changed their display name from Dr Serge Victor to ser(ial). 12:13:20


