
#ceph

23 Jul 2023
@ronny:synapse.dst.fjit.nosep the pools do not have a size. they are the size of the data you put into them. 18:47:14
@ronny:synapse.dst.fjit.nosep you can probably use quotas to make an artificial limit. 18:48:43
@ronny:synapse.dst.fjit.nosep but if you need more space, just add osd's or add nodes 18:49:12
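For reference, the quota ronny mentions is set with the standard ceph CLI; the pool name and sizes below are examples:

  # cap a pool at roughly 100 GiB of stored data (pool name is an example)
  ceph osd pool set-quota mypool max_bytes 107374182400
  # or cap it by object count
  ceph osd pool set-quota mypool max_objects 1000000
  # inspect the current quota
  ceph osd pool get-quota mypool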
@ifiguero:matrix.orgifiguero That is how I was approaching the storage so far. But then I hit the PG limit (not sure what the limit was, nor how it was computed). I can understand that an osd can only hold so many PGs, so if you over-provision it could lead to starvation. 18:59:51
@ifiguero:matrix.orgifiguero So I'm trying to get to a sensible compromise, so I don't hurt performance. 19:00:39
@ronny:synapse.dst.fjit.nosep Ceph has a configurable limit of 300 pg per osd 19:01:01
@ronny:synapse.dst.fjit.nosep So make pools have fewer pg's, make fewer pools, or add osd's 19:01:33
@ronny:synapse.dst.fjit.nosep https://old.ceph.com/pgcalc/ 19:01:58
@ronny:synapse.dst.fjit.nosep Use the calc for help 19:02:04
@ronny:synapse.dst.fjit.nosep But with autotuning and pg changes on live pools, this has become a lost art :p 19:02:34
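The per-OSD limit ronny refers to is governed by mon_max_pg_per_osd (the exact default varies by release); a quick sketch for inspecting it and the current PG spread, with an example pool name and pg_num:

  ceph config get mon mon_max_pg_per_osd   # current per-OSD PG limit
  ceph osd df                              # the PGS column shows PGs placed on each OSD
  ceph osd pool set mypool pg_num 64       # shrink an over-provisioned pool (example value)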
@ifiguero:matrix.orgifiguero I figured. I'll use autoscale and %data 19:04:54
@ifiguero:matrix.orgifiguero But most likely I will decrease the default pg number 19:05:12
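If going the autoscaler route, the relevant commands look roughly like this (pool name and ratio are examples):

  ceph osd pool autoscale-status                   # shows SIZE, RATIO, PG_NUM and the suggested NEW PG_NUM
  ceph osd pool set mypool pg_autoscale_mode on    # let the autoscaler manage pg_num for the pool
  ceph osd pool set mypool target_size_ratio 0.2   # hint at the pool's expected share of cluster capacity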
26 Jul 2023
@capt_bolazzles:matrix.org@capt_bolazzles:matrix.org joined the room.05:05:34
2 Aug 2023
@heaven_devil:matrix.orgheaven_devil joined the room.15:00:24
7 Aug 2023
@dharjee:matrix.orgdthpulse Hey guys, I had a conversation with a colleague of mine about data movement while draining an OSD (reweight to 0). He says that data can then be moved even to another failure domain, which I consider bullshit, because this would break data redundancy. If I reweight an OSD to 0, data from the draining OSD is moved to buckets in the same failure domain as the drained OSD. Am I right? Asking because he made me confused :D THX 12:47:31
@ronny:synapse.dst.fjit.nosep if you have 10 hosts and hosts are the failure domain, and you drain an osd on one node, objects on that osd would move to another osd on the same node, or to one of the other nodes, but not to the osd's on the hosts that hold the 2 other replicas of the placement group (assuming a replicated pool and size=3). if you lost so many hosts that there are no hosts left except the 2 with data on, you would get the pg undersized error. 16:29:15
@ronny:synapse.dst.fjit.nosep so yes, the object can move to another failure domain, but not to a failure domain that would compromise redundancy 16:29:48
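A sketch of the drain-and-verify workflow being discussed; osd.12 and replicated_rule are placeholders:

  ceph osd reweight 12 0                     # drain the OSD by setting its reweight to 0
  ceph osd crush rule dump replicated_rule   # confirm the failure domain ("type": "host", for example)
  ceph pg ls-by-osd osd.12                   # PGs still mapped to the draining OSD
  ceph -s                                    # watch for undersized/degraded PGs while data moves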
13 Aug 2023
@daniel:dlq.sedaniel joined the room.08:28:45
@daniel:dlq.sedaniel Hello folks, how do I verify that I have RocksDB rather than LevelDB before upgrading to 17.2.x? 08:30:30
@daniel:dlq.sedaniel
In reply to @daniel:dlq.se: "Hello folks, how do I verify that I have RocksDB rather than LevelDB before upgrading to 17.2.x?"
Nvm, I found it in /var/lib/ceph/<uuid>/mon.<x>/kv_backend
08:36:09
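Spelled out, daniel's check looks like this; the first path matches cephadm/containerized deployments, the second is the usual path for package-based installs (paths are deployment-dependent):

  cat /var/lib/ceph/<uuid>/mon.<x>/kv_backend            # cephadm layout
  cat /var/lib/ceph/mon/ceph-$(hostname -s)/kv_backend   # package-based layout (assumed)
  ceph osd metadata 0 | grep kv_backend                  # OSDs also report their key-value backend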
23 Aug 2023
@issoumachankla:matrix.orgIsaBang joined the room.23:32:02
@issoumachankla:matrix.orgIsaBang Hello folks, is it possible to connect osd's through a vpn? 23:32:46
@issoumachankla:matrix.orgIsaBang How does ceph perform with 50ms latency? 23:33:05
@issoumachankla:matrix.orgIsaBang Does ceph understand that to reach content it needs to use a slow link, and so dispatch data to the correct osd? 23:34:08
24 Aug 2023
@ronny:synapse.dst.fjit.nosep I would guess everything is possible. But the performance would be absolutely abysmal. Probably single digit iops. Making stone tablets by hand may have better throughput. This is definitely far into "hell no!" territory. 05:15:47
@ronny:synapse.dst.fjit.nosep Now, having 2 clusters, one on each end of the vpn, and using async replication would probably work nicely. Depending a bit on your use case 05:20:16
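For the two-cluster approach, RBD mirroring is the usual async-replication mechanism for block workloads (RGW multisite covers object storage); a minimal sketch, assuming the rbd-mirror daemon and peer bootstrap are already set up, with example pool/image names:

  rbd mirror pool enable rbd image               # per-image mirroring mode on the pool (run on both clusters)
  rbd mirror image enable rbd/myimage snapshot   # snapshot-based mirroring for one image
  rbd mirror pool status rbd                     # check replication health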
@kron4eg:matrix.orgkron4eg made my day 😂 10:35:15
@ifiguero:matrix.orgifiguero Is there any fine tuning you could do with a different MTU on the Ceph network stack, so you avoid fragmentation at the network layer? 12:35:03
@ronny:synapse.dst.fjit.nosep I do run jumbo frames on the cluster network (osd to osd), since I control all those hosts and networks. But I run a regular MTU on the public side, because I do not control everything end to end on the clients. 12:46:27
@ronny:synapse.dst.fjit.nosep If your clients do not directly access the cluster, i.e. you use the rados gateway or nfs for clients, having jumbo frames on the public side up to the gateway would probably be beneficial. But if you do not control all switches, routers, clients and servers, it is probably not worth the hassle. 12:50:20
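A quick way to apply and validate jumbo frames on the cluster-network interface; the interface name and peer address are examples, and every switch port in the path must also allow the larger MTU:

  ip link set dev eth1 mtu 9000      # raise the MTU on the cluster-network NIC
  ping -M do -s 8972 -c 3 10.0.1.2   # 8972 = 9000 minus 28 bytes of IP/ICMP headers; -M do forbids fragmentation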
