Cloufish (Cloufish) | I have the following playbook that uses these tasks:
- name: Create a trusted storage pool
  become: true
  become_method: sudo
  gluster.gluster.gluster_peer:
    state: present
    nodes: "{{ groups['gluster_nodes'] | join(',') }}"
    force: true
  run_once: true

- name: Configure gluster volume
  become: true
  gluster_volume:
    state: present
    name: "{{ gluster_volume_name }}"
    brick: "{{ gluster_volume_path }}/brick"
    replicas: 3
    cluster: "{{ groups['gluster_nodes'] | join(',') }}"
    host: "{{ ansible_default_ipv4.address }}"
    transport: tcp
    force: true
  run_once: true
In the first task I use run_once: true because peer probing (per the GlusterFS Getting Started guide) only needs to be executed on one server.
But then I get these errors:
TASK [gluster : Create a trusted storage pool] *************************************************************************************************************************************
task path: /home/cloufish/Projects/Ansible-HomeLab/playbooks/swarm-deploy/roles/gluster/tasks/init.yml:18
Thursday 30 March 2023 13:46:08 +0200 (0:00:00.393) 0:00:52.875 ********
Using module file /home/cloufish/.ansible/collections/ansible_collections/gluster/gluster/plugins/modules/gluster_peer.py
Pipelining is enabled.
<192.168.1.130> ESTABLISH SSH CONNECTION FOR USER: swarm
<192.168.1.130> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/cloufish/.ssh/ansible_ed25519"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="swarm"' -o ConnectTimeout=10 -o 'ControlPath="/home/cloufish/.ansible/cp/d0d2c75c4d"' 192.168.1.130 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-zgiexvddsndnmqmwmnnceezddhyzbaok ; /usr/bin/python3'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<192.168.1.130> (0, b'\n{"msg": "", "changed": true, "invocation": {"module_args": {"state": "present", "nodes": ["192.168.1.130", "192.168.1.131", "192.168.1.132"], "force": true}}}\n', b'/tmp/ansible_gluster.gluster.gluster_peer_payload_kak0pi8f/ansible_gluster.gluster.gluster_peer_payload.zip/ansible_collections/gluster/gluster/plugins/modules/gluster_peer.py:173: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.\n')
changed: [192.168.1.130] => {
"changed": true,
"invocation": {
"module_args": {
"force": true,
"nodes": [
"192.168.1.130",
"192.168.1.131",
"192.168.1.132"
],
"state": "present"
}
},
"msg": ""
}
TASK [gluster : Configure gluster volume] ******************************************************************************************************************************************
task path: /home/cloufish/Projects/Ansible-HomeLab/playbooks/swarm-deploy/roles/gluster/tasks/init.yml:51
Thursday 30 March 2023 13:46:18 +0200 (0:00:09.766) 0:01:02.642 ********
redirecting (type: modules) ansible.builtin.gluster_volume to gluster.gluster.gluster_volume
Using module file /home/cloufish/.ansible/collections/ansible_collections/gluster/gluster/plugins/modules/gluster_volume.py
Pipelining is enabled.
<192.168.1.130> ESTABLISH SSH CONNECTION FOR USER: swarm
<192.168.1.130> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/home/cloufish/.ssh/ansible_ed25519"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="swarm"' -o ConnectTimeout=10 -o 'ControlPath="/home/cloufish/.ansible/cp/d0d2c75c4d"' 192.168.1.130 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-vugsfyzcqhtvnnozksgwtubktbitydys ; /usr/bin/python3'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<192.168.1.130> (1, b'\n{"exception": "NoneType: None\\n", "failed": true, "msg": "error running gluster (/usr/sbin/gluster --mode=script volume create gfs replica 3 transport tcp 192.168.1.130:/glusterfs/bricks/brick 192.168.1.131:/glusterfs/bricks/brick 192.168.1.132:/glusterfs/bricks/brick force) command (rc=1): volume create: gfs: failed: Host 192.168.1.131 is not in \'Peer in Cluster\' state\\n", "invocation": {"module_args": {"state": "present", "name": "gfs", "brick": "/glusterfs/bricks/brick", "replicas": 3, "cluster": ["192.168.1.130", "192.168.1.131", "192.168.1.132"], "host": "192.168.1.130", "transport": "tcp", "force": true, "bricks": "/glusterfs/bricks/brick", "start_on_create": true, "rebalance": false, "options": {}, "stripes": null, "arbiters": null, "disperses": null, "redundancies": null, "quota": null, "directory": null}}}\n', b'')
<192.168.1.130> Failed to connect to the host via ssh:
The full traceback is:
NoneType: None
fatal: [192.168.1.130]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"arbiters": null,
"brick": "/glusterfs/bricks/brick",
"bricks": "/glusterfs/bricks/brick",
"cluster": [
"192.168.1.130",
"192.168.1.131",
"192.168.1.132"
],
"directory": null,
"disperses": null,
"force": true,
"host": "192.168.1.130",
"name": "gfs",
"options": {},
"quota": null,
"rebalance": false,
"redundancies": null,
"replicas": 3,
"start_on_create": true,
"state": "present",
"stripes": null,
"transport": "tcp"
}
},
"msg": "error running gluster (/usr/sbin/gluster --mode=script volume create gfs replica 3 transport tcp 192.168.1.130:/glusterfs/bricks/brick 192.168.1.131:/glusterfs/bricks/brick 192.168.1.132:/glusterfs/bricks/brick force) command (rc=1): volume create: gfs: failed: Host 192.168.1.131 is not in 'Peer in Cluster' state\n"
}
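The "Host 192.168.1.131 is not in 'Peer in Cluster' state" failure suggests a race: gluster_peer returns as soon as the probes are issued, but the volume-create runs before the peers have finished joining. One possible fix is to poll peer status between the two tasks. This is a sketch, not a tested solution; it assumes the gluster CLI is installed on the target and that every host in gluster_nodes except the one running the command should appear as a peer:

```yaml
# Hypothetical wait task between the peer task and the volume task.
# Polls `gluster peer status` until every other node reports
# "Peer in Cluster", then lets the volume-create proceed.
- name: Wait for all peers to reach 'Peer in Cluster' state
  become: true
  ansible.builtin.command: gluster peer status
  register: peer_status
  until: peer_status.stdout.count('Peer in Cluster') == (groups['gluster_nodes'] | length) - 1
  retries: 10
  delay: 5
  changed_when: false
  run_once: true
```

The `until`/`retries`/`delay` keywords are standard Ansible task retry logic; the count comparison uses `length - 1` because a node does not list itself in its own peer status output.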
| 10:04:09 |