!jKjSXaWmTyztUvZzSD:matrix.org

Gluster

21 Members · 6 Servers
Gluster - A distributed, software-defined storage system



Sender | Message | Time
7 Jan 2023
@kshlm:matrix.org (kshlm) set a profile picture. 09:11:49
10 Feb 2023
@joseki:matrix.org left the room. 02:01:48
4 Mar 2023
@fiveseven:matrix.org (Dale Gribble): there's no way to guarantee that 18:46:51
30 Mar 2023
@cloufisz:matrix.org (Cloufish) joined the room. 10:00:48
@cloufisz:matrix.org (Cloufish): Is there any chance I can get 10:01:03
@cloufisz:matrix.org (Cloufish) *: Is there any chance I can get help regarding the Ansible Gluster module? 10:01:20
@cloufisz:matrix.org (Cloufish)

I have the following playbook that uses these tasks:

- name: Create a trusted storage pool
  become: true
  become_method: sudo
  gluster.gluster.gluster_peer:
    state: present
    nodes: "{{ groups['gluster_nodes']|join(',') }}"
    force: true
  run_once: true

- name: Configure gluster volume
  become: true
  gluster_volume:
    state: present
    name: "{{ gluster_volume_name }}"
    brick: "{{ gluster_volume_path }}/brick"
    replicas: 3
    cluster: "{{ groups['gluster_nodes']|join(',') }}"
    host: "{{ ansible_default_ipv4.address }}"
    transport: tcp
    force: true
  run_once: true

In the first task I use run_once: true because the peer probing (from the Getting Started with GlusterFS guide) only needs to be executed on one server.

But then I get these errors:

TASK [gluster : Create a trusted storage pool] *************************************************************************************************************************************
task path: /home/cloufish/Projects/Ansible-HomeLab/playbooks/swarm-deploy/roles/gluster/tasks/init.yml:18
Thursday 30 March 2023  13:46:08 +0200 (0:00:00.393)       0:00:52.875 ******** 
Thursday 30 March 2023  13:46:08 +0200 (0:00:00.393)       0:00:52.875 ******** 
Using module file /home/cloufish/.ansible/collections/ansible_collections/gluster/gluster/plugins/modules/gluster_peer.py
Pipelining is enabled.
<192.168.1.130> ESTABLISH SSH CONNECTION FOR USER: swarm
<192.168.1.130> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/cloufish/.ssh/ansible_ed25519"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="swarm"' -o ConnectTimeout=10 -o 'ControlPath="/home/cloufish/.ansible/cp/d0d2c75c4d"' 192.168.1.130 '/bin/sh -c '"'"'sudo -H -S -n  -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-zgiexvddsndnmqmwmnnceezddhyzbaok ; /usr/bin/python3'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<192.168.1.130> (0, b'\n{"msg": "", "changed": true, "invocation": {"module_args": {"state": "present", "nodes": ["192.168.1.130", "192.168.1.131", "192.168.1.132"], "force": true}}}\n', b'/tmp/ansible_gluster.gluster.gluster_peer_payload_kak0pi8f/ansible_gluster.gluster.gluster_peer_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_peer.py:173: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.\n')
changed: [192.168.1.130] => {
    "changed": true,
    "invocation": {
        "module_args": {
            "force": true,
            "nodes": [
                "192.168.1.130",
                "192.168.1.131",
                "192.168.1.132"
            ],
            "state": "present"
        }
    },
    "msg": ""
}

TASK [gluster : Configure gluster volume] ******************************************************************************************************************************************
task path: /home/cloufish/Projects/Ansible-HomeLab/playbooks/swarm-deploy/roles/gluster/tasks/init.yml:51
Thursday 30 March 2023  13:46:18 +0200 (0:00:09.766)       0:01:02.642 ******** 
Thursday 30 March 2023  13:46:18 +0200 (0:00:09.766)       0:01:02.641 ******** 
redirecting (type: modules) ansible.builtin.gluster_volume to gluster.gluster.gluster_volume
redirecting (type: modules) ansible.builtin.gluster_volume to gluster.gluster.gluster_volume
Using module file /home/cloufish/.ansible/collections/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py
Pipelining is enabled.
<192.168.1.130> ESTABLISH SSH CONNECTION FOR USER: swarm
<192.168.1.130> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/cloufish/.ssh/ansible_ed25519"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="swarm"' -o ConnectTimeout=10 -o 'ControlPath="/home/cloufish/.ansible/cp/d0d2c75c4d"' 192.168.1.130 '/bin/sh -c '"'"'sudo -H -S -n  -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-vugsfyzcqhtvnnozksgwtubktbitydys ; /usr/bin/python3'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<192.168.1.130> (1, b'\n{"exception": "NoneType: None\\n", "failed": true, "msg": "error running gluster (/usr/sbin/gluster --mode=script volume create gfs replica 3 transport tcp 192.168.1.130:/glusterfs/bricks/brick 192.168.1.131:/glusterfs/bricks/brick 192.168.1.132:/glusterfs/bricks/brick force) command (rc=1): volume create: gfs: failed: Host 192.168.1.131 is not in \'Peer in Cluster\' state\\n", "invocation": {"module_args": {"state": "present", "name": "gfs", "brick": "/glusterfs/bricks/brick", "replicas": 3, "cluster": ["192.168.1.130", "192.168.1.131", "192.168.1.132"], "host": "192.168.1.130", "transport": "tcp", "force": true, "bricks": "/glusterfs/bricks/brick", "start_on_create": true, "rebalance": false, "options": {}, "stripes": null, "arbiters": null, "disperses": null, "redundancies": null, "quota": null, "directory": null}}}\n', b'')
<192.168.1.130> Failed to connect to the host via ssh: 
The full traceback is:
NoneType: None
fatal: [192.168.1.130]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "arbiters": null,
            "brick": "/glusterfs/bricks/brick",
            "bricks": "/glusterfs/bricks/brick",
            "cluster": [
                "192.168.1.130",
                "192.168.1.131",
                "192.168.1.132"
            ],
            "directory": null,
            "disperses": null,
            "force": true,
            "host": "192.168.1.130",
            "name": "gfs",
            "options": {},
            "quota": null,
            "rebalance": false,
            "redundancies": null,
            "replicas": 3,
            "start_on_create": true,
            "state": "present",
            "stripes": null,
            "transport": "tcp"
        }
    },
    "msg": "error running gluster (/usr/sbin/gluster --mode=script volume create gfs replica 3 transport tcp 192.168.1.130:/glusterfs/bricks/brick 192.168.1.131:/glusterfs/bricks/brick 192.168.1.132:/glusterfs/bricks/brick force) command (rc=1): volume create: gfs: failed: Host 192.168.1.131 is not in 'Peer in Cluster' state\n"
}
10:04:09
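
The error suggests the volume-create task runs before all probed peers have settled into the 'Peer in Cluster' state. One option (a rough sketch, not part of the playbook above) is to poll gluster peer status between the two tasks until no peer is still in 'Accepted peer request'; it assumes the same become settings and gluster_nodes group, and the retry values are illustrative:

- name: Wait until all peers are in 'Peer in Cluster' state
  become: true
  ansible.builtin.command: gluster peer status
  register: peer_status
  # Keep polling while any peer is still stuck in 'Accepted peer request'
  until: "'Accepted peer request' not in peer_status.stdout"
  retries: 12
  delay: 5
  changed_when: false
  run_once: true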
@cloufisz:matrix.org (Cloufish)

So the hosts are not in the 'Peer in Cluster' state, but rather 'Accepted peer request (Connected)'.

sudo gluster peer status:

Number of Peers: 2

Hostname: 192.168.1.131
Uuid: acfcf746-1416-4ec3-a1c6-79929016e924
State: Accepted peer request (Connected)

Hostname: 192.168.1.132
Uuid: eab67f82-6c2a-4c2a-b000-ba822a0ce59f
State: Accepted peer request (Connected)

10:05:33
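
Note that each node can report a different view of the pool while the handshake is incomplete (as the later gluster2 output shows), so it can help to collect gluster peer status from every node rather than only the one doing the probing. A minimal sketch, with illustrative task names and no run_once so every host reports:

- name: Collect peer status from every node for comparison
  become: true
  ansible.builtin.command: gluster peer status
  register: peer_view
  changed_when: false

- name: Show each node's view of the pool
  ansible.builtin.debug:
    var: peer_view.stdout_lines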
@cloufisz:matrix.org (Cloufish)

I really... really can't understand all of this :( and have been trying to set this up for the past 4 days.

Could someone at least give me a hint where to search for the root of the issue?

10:06:32
@cloufisz:matrix.org (Cloufish) *

I really... really can't understand all of this :( and have been trying to set this up for the past 4 days.
The partial solution here -> https://community.ops.io/jmarhee/fixing-glusterfs-not-in-peer-in-cluster-state-error-70l
says to simply change the state to 3, but I want to have this working without manually editing the file.

Could someone at least give me a hint where to search for the root of the issue?

10:08:05
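
The linked workaround hand-edits the peer's state file under /var/lib/glusterd/peers/ and sets state=3. One alternative that is easier to automate is restarting glusterd on the affected nodes, which often lets the peer handshake complete; a minimal sketch, assuming glusterd is managed as a system service:

- name: Restart glusterd so the peer handshake can complete
  become: true
  ansible.builtin.service:
    name: glusterd
    state: restarted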
@cloufisz:matrix.org (Cloufish): OKAY I THINK I REALIZED 10:26:30
@cloufisz:matrix.org (Cloufish)

It's a DNS issue... sorry!!! 🙏
gluster@gluster2:~$ sudo gluster peer status

Number of Peers: 1

Hostname: traefik.docker.local
Uuid: b931b8ba-a743-4236-a69b-373fe1b157f5
State: Accepted peer request (Disconnected)
10:27:25
@cloufisz:matrix.org (Cloufish): I think I fixed it 10:27:33
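
Since the root cause turned out to be name resolution (a peer's address resolved to traefik.docker.local on gluster2), a pre-flight check can surface mismatched DNS or /etc/hosts entries before any peer probing. A rough sketch, assuming the same gluster_nodes group; the use of getent here is illustrative, not part of the original playbook:

- name: Check how each peer address resolves on this node
  ansible.builtin.command: "getent hosts {{ item }}"
  loop: "{{ groups['gluster_nodes'] }}"
  register: peer_resolution
  changed_when: false
  failed_when: false

- name: Show the resolution results
  ansible.builtin.debug:
    msg: "{{ item.item }} resolves to: {{ item.stdout }}"
  loop: "{{ peer_resolution.results }}"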
20 Apr 2023
@aryak:projectsegfau.lt changed their profile picture. 17:10:01
30 Apr 2023
@imlostlmao:matrix.org (imlostlmao) joined the room. 08:14:20
14 May 2023
@ralph:fx45.in joined the room. 05:48:25
20 May 2023
@anoopcs:matrix.org (Anoop C S) changed their profile picture. 04:16:37
@ralph:fx45.in left the room. 22:38:04
5 Jun 2023
@aryak:projectsegfau.lt left the room. 11:45:54
24 Jul 2023
@ufm:ufm.lol (Fyodor Ustinov) joined the room. 18:53:42
@ufm:twinkle.lol (ufm) joined the room. 19:02:20
27 Jul 2023
@anoopcs:matrix.org (Anoop C S) changed their profile picture. 07:06:15
19 Sep 2023
@crystal-void:matrix.org left the room. 17:40:03
31 Mar 2024
@kshlm:matrix.org (kshlm) changed their display name from kshlm to kshlm (Old). 04:13:56
@kshlm:matrix.org (kshlm) invited @kshlm:matrix.kshlm.xyz. 04:15:30
@kshlm:matrix.kshlm.xyz joined the room. 04:17:24
@kshlm:matrix.org (kshlm) changed room power levels. 04:17:45
@kshlm:matrix.org (kshlm) changed their display name from kshlm (Old) to kshlm. 04:32:59
@kshlm:matrix.kshlm.xyz left the room. 04:36:23


