#ceph

14 Jun 2023
@dudeastic:matrix.orgdudeTook some convincing and commits to loki though :-)09:59:26
@dudeastic:matrix.orgdudeI'm not sure that I should use S3, it might be better just running Loki on a single HDD instead. Less overhead.10:00:37
19 Jun 2023
@dudeastic:matrix.orgdude grin: did we ever move forward with your ceph-issue? 09:09:29
@grin:grin.hugrin
In reply to @dudeastic:matrix.org
grin: did we ever move forward with your ceph-issue?
I have sheepishly spent the recent days in the Slovenian mountains, so not yet. :-)
09:10:18
@dudeastic:matrix.orgdude
In reply to @grin:grin.hu
I have sheepishly spent the recent days in the Slovenian mountains, so not yet. :-)
That sounds awesome!
09:11:28
@grin:grin.hugrinIt indeed was. As for our topic: since I lowered the thread number it seems to be stable, so at least it didn't fall apart while I was away. I'll fiddle with it in a few days and try to get some results.09:12:52
19 Jul 2023
@herhel:matrix.orgherhel left the room.11:19:33
22 Jul 2023
@ifiguero:matrix.orgifiguero joined the room.23:11:44
23 Jul 2023
@ifiguero:matrix.orgifiguero

Been trying to learn the basics. I have 3 nodes, each with 2 x 4TB HDD and 2 x 480GB SSD.

So far I was using Proxmox to set it up, but now I want to dive deeper into managing users and pools.

02:00:14
@ifiguero:matrix.orgifigueroSomehow I made too many pools and it warned me about creating too many. So I think I need to tune each pool. I don't know if there is some fundamental limit on pool creation, or if it's just that the default values aren't good parameters for the pools.02:01:26
@ifiguero:matrix.orgifiguero So I've read a bit about how the optimal PG count is computed, but I still feel like I'm missing something to better define a layout for the pools. 02:02:27
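If it is the PG warning rather than a literal pool-count limit, a minimal check (standard Ceph CLI; exact option defaults vary by release) would be:

ceph health detail                       # shows the exact warning text, e.g. "too many PGs per OSD"
ceph osd df                              # the PGS column shows how many PGs each OSD currently holds
ceph config get mon mon_max_pg_per_osd   # the per-OSD limit the warning is checked against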
@ronny:synapse.dst.fjit.nosepToo many pools? Or too many pool placement groups?07:15:43
@ifiguero:matrix.orgifiguero

Well, that is why I got into researching it better. I understand I need to define the pools better instead of relying on the defaults.

So far Proxmox creates RBD pools with 128 PGs to store the VM images

18:12:33
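To see what Proxmox actually created, something like this works; "vm-pool" below is just a stand-in for whatever the pool is called:

ceph osd pool ls detail            # every pool with its pg_num, replica size and crush rule
ceph osd pool get vm-pool pg_num   # pg_num of a single pool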
@ifiguero:matrix.orgifiguero Now in the current cluster I have CRUSH rules for magnetic and SSD, so in practice there are 6 OSDs in each of them (HDD and SSD). 18:13:40
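For reference, device-class based rules like that are usually created and attached roughly like this (rule and pool names here are examples only):

ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd
ceph osd pool set vm-hdd crush_rule replicated_hdd   # 'vm-hdd' is a stand-in pool name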
@ifiguero:matrix.orgifiguero Now I'm still a bit curious whether PGs have a fixed size, or if they are just an abstraction to distribute the space among the OSDs. 18:15:35
@ifiguero:matrix.orgifiguero Because if I have 6 OSDs with 32 PGs per pool I should have a good distribution of data among the nodes. But I'm unsure whether that would be insufficient depending on the number of files stored. 18:16:58
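PGs have no fixed size; they are buckets of objects, and the usual rule of thumb aims for roughly 100 PGs per OSD after replication. A rough back-of-the-envelope for a setup like this, assuming replica 3 and 6 OSDs per device class:

# total PG budget per device class ≈ (OSDs x ~100) / replica_size
#   6 x 100 / 3 = 200 PGs to split across all pools on that class
# so one 128-PG pool plus a couple of 32-PG pools already uses most of the budget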
@ifiguero:matrix.orgifiguero

Like some pools will be used to store big files, backups and disk images, while others will store tiny images from databases or web content.

Is there any drawback to using fewer PGs?

18:18:31
@ronny:synapse.dst.fjit.nosepThink of it as an internal weighting between pools. More PGs = better balance, but more resource use18:19:44
@ronny:synapse.dst.fjit.nosepNowadays there is an autoscaler that can semi- or fully automatically handle it for you18:20:20
@ronny:synapse.dst.fjit.nosepAuto-tuning the number of placement groups 18:21:06
@ronny:synapse.dst.fjit.nosephttps://docs.ceph.com/en/latest/rados/operations/placement-groups/18:21:45
@ifiguero:matrix.orgifiguero

Yes, I've been looking into it too, because it was turned off. And even if I turn it on,

ceph osd pool autoscale-status

doesn't output anything. I'm assuming I need to activate autoscaling per pool after it was turned on system-wide

18:22:52
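That is the usual fix: the autoscaler module has to be active and each pool has its own autoscale mode. A sketch, with "vm-pool" as a stand-in name (on recent releases the module is always on, so only the pool/default settings matter):

ceph mgr module enable pg_autoscaler                           # only needed on older (Nautilus-era) releases
ceph osd pool set vm-pool pg_autoscale_mode on                 # enable for an existing pool
ceph config set global osd_pool_default_pg_autoscale_mode on   # default for newly created pools
ceph osd pool autoscale-status                                 # should now list every pool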
@ifiguero:matrix.orgifiguero I'll probably define maximum sizes for the pools so the autoscaler can distribute PGs better. 18:23:36
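The expected size is given to the autoscaler per pool, either as an absolute target or as a ratio; pool names and values below are purely illustrative:

ceph osd pool set vm-pool target_size_bytes 1T         # expect roughly 1 TB of data in this pool
ceph osd pool set backup-pool target_size_ratio 0.5    # expected share of capacity relative to other pools with a ratio set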
@ifiguero:matrix.orgifigueroAnother thing: I noticed that CephFS requires one pool for metadata and one pool for data. I'm trying to figure out whether it's better to make independent pools for different uses or just make one big fat lake and manage access on a sub-directory basis18:26:10
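Both layouts work; a single filesystem with per-directory client capabilities is often enough instead of a pool per use case. A sketch with hypothetical names:

ceph osd pool create cephfs_metadata
ceph osd pool create cephfs_data
ceph fs new tank cephfs_metadata cephfs_data
ceph fs authorize tank client.backups /backups rw   # restrict this client to one subtree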
@ifiguero:matrix.orgifiguero Because if I make too many pools I'll run into the too-many-PGs thing again, I assume. 18:26:57
@ifiguero:matrix.orgifiguero Like I have 13TB (44TB raw storage, but on replica 3). Say I want to reserve half (6TB) in 1TB pools for different use cases, but leave room for future needs. 18:29:58
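Reserving capacity per use case can be approximated with pool quotas on top of the autoscaler target sizes; names and sizes below are illustrative:

ceph osd pool set-quota backup-pool max_bytes 1099511627776   # hard cap at 1 TiB, given in bytes
ceph osd pool get-quota backup-pool                           # confirm the quota
ceph df                                                       # per-pool usage overview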
