
Thanks for the offer. I bit the bullet and attempted a source build of the 3.3 version and managed to get a pair of appliances replicating in no time. Here's what I did (apols if this is the wrong place for this stuff; I always keep notes of what I do and hope someone else will find them useful):

Installing GlusterFS 3.3 from source on Turnkey Linux Appliance (Ubuntu 10.04)
==============================================================================

 1. Log in via SSH as "root".
 2. Stay in the home folder:

      cd ~
    
 3. Download the GlusterFS 3.3 source package:

      wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/glusterfs-3.3.0...
    
 4. Unpack the downloaded package:

      tar -xvzf ./glusterfs-3.3.0.tar.gz
    
 5. Change to the package directory:

      cd glusterfs-3.3.0
    
 6. Install package dependencies:

      apt-get update
      apt-get install gcc flex bison libreadline5-dev
    
 7. Run the configuration utility:

      ./configure

        GlusterFS configure summary
        ===========================
        FUSE client        : yes
        Infiniband verbs   : no
        epoll IO multiplex : yes
        argp-standalone    : no
        fusermount         : no
        readline           : yes
        georeplication     : yes
       
 8. Build GlusterFS:

      make          # put the kettle on for a nice cup of tea
      make install
    
 9. Make sure the shared library can be found:

    echo "include /usr/local/lib" >> /etc/ld.so.conf
    ldconfig
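
    To confirm the linker can now find the GlusterFS libraries, this quick check should do it:

      ldconfig -p | grep -i gluster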
   
10. Verify the installed version:

        glusterfs --version

        glusterfs 3.3.0 built on Jun  8 2012 21:34:47

        Repository revision: git://git.gluster.com/glusterfs.git
        Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
        GlusterFS comes with ABSOLUTELY NO WARRANTY.
        You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

11. Use Webmin to open the following ports in the firewall (** harden later **); an iptables alternative is sketched after the list:

        tcp    111
        udp    111
        tcp    24007:24011
        tcp    38465:38485
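
    If you'd rather not use Webmin, something like this with iptables should open the same ports (a rough sketch; adapt to your own firewall rules):

        iptables -A INPUT -p tcp --dport 111 -j ACCEPT
        iptables -A INPUT -p udp --dport 111 -j ACCEPT
        iptables -A INPUT -p tcp --dport 24007:24011 -j ACCEPT
        iptables -A INPUT -p tcp --dport 38465:38485 -j ACCEPT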
       
12. Start GlusterFS daemon:

        service glusterd start
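
    A quick check that the daemon is actually up:

        ps aux | grep glusterd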
       
Configuring GlusterFS 3.3 for two server file system replication
================================================================

1. Perform the GlusterFS source installation procedure above on each Turnkey Linux node appliance.
2. Make sure each node appliance can resolve the other by DNS (or use /etc/hosts; see the sketch below):
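
   If you don't have local DNS, adding entries to /etc/hosts on each appliance works too (the names and addresses below are placeholders; substitute your own):

         192.168.1.101   server1.yourdomain
         192.168.1.102   server2.yourdomain
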
3. Add servers to trusted storage pool:

         From server1.yourdomain:
         
                 gluster peer probe server2.yourdomain
                 Probe successful
                
         From server2.yourdomain:
         
                 gluster peer probe server1.yourdomain
                 Probe successful

4. Confirm peers can now see each other:

         From server1.yourdomain:

                 gluster peer status

                 Number of Peers: 1

                 Hostname: server2.yourdomain
                 Uuid: df3811cc-3593-48e0-ac59-d82338543327
                 State: Peer in Cluster (Connected)

         From server2.yourdomain:

                 gluster peer status

                 Number of Peers: 1

                 Hostname: server1.yourdomain
                 Uuid: 47619cc6-eba2-4bae-a0ad-17b745150c2d
                 State: Peer in Cluster (Connected)

5. Create replicated volumes:

            From server1.yourdomain:
           
                 gluster volume create your-volume-name replica 2 transport tcp server1.yourdomain:/exp1 server2.yourdomain:/exp2

                 Creation of volume your-volume-name has been successful. Please start the volume to access data.
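
            If the create step complains that a brick path doesn't exist, just create the directories first on the matching server:

                 mkdir -p /exp1    # on server1.yourdomain
                 mkdir -p /exp2    # on server2.yourdomain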
                 
6. Start the volume:

            From server1.yourdomain:
           
                 gluster volume start your-volume-name
                   
                 Starting volume your-volume-name has been successful

7. Display volume information:

            From server1.yourdomain:
           
                 gluster volume info your-volume-name
                 
                 Volume Name: your-volume-name
                 Type: Replicate
                 Volume ID: b9ff3770-53d9-4209-9df6-c0006ade6dde
                 Status: Started
                 Number of Bricks: 1 x 2 = 2
                 Transport-type: tcp
                 Bricks:
                 Brick1: server1.yourdomain:/exp1
                 Brick2: server2.yourdomain:/exp2

8. Load the FUSE loadable kernel module (LKM) into the Linux kernel on each node:

          From server1.yourdomain:
         
               modprobe fuse
               dmesg | grep -i fuse
               fuse init (API version 7.13)
               
          From server2.yourdomain:
         
               modprobe fuse
               dmesg | grep -i fuse
               fuse init (API version 7.13)
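
          To have fuse loaded again after a reboot, the usual Ubuntu way is to add it to /etc/modules:

               echo "fuse" >> /etc/modules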


9. Mount the volume on each server node:

            From server1.yourdomain:
           
                 mkdir /mnt/glusterfs
                 mount -t glusterfs server1.yourdomain:/your-volume-name /mnt/glusterfs
                 
            From server2.yourdomain:
           
                 mkdir /mnt/glusterfs
                 mount -t glusterfs server2.yourdomain:/your-volume-name /mnt/glusterfs                

10. Test the replication:

          From server1.yourdomain:
         
             touch /mnt/glusterfs/hello.world
            
          From server2.yourdomain:
         
               ls -l /mnt/glusterfs
               
               total 1
               -rw-r--r-- 1 root root    0 Jun  8 22:48 hello.world
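
          You can also peek at the brick directories themselves to confirm the file landed on both sides (just don't write to the bricks directly; always go through the mount):

           From server1.yourdomain:

                ls -l /exp1

           From server2.yourdomain:

                ls -l /exp2

          hello.world should appear in both listings.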
                 
11. To do... (sketches for the last two below)

         - Harden firewall
         - More testing
         - Add additional nodes (using snapshots)
         - Autostart daemons
         - Automount glusterfs
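
    For the last two items, something along these lines should work (untested sketches using standard Debian/Ubuntu mechanisms):

        # autostart: step 12 above suggests /etc/init.d/glusterd is already in
        # place, so registering it for the default runlevels should be enough
        update-rc.d glusterd defaults

        # automount: add the volume to /etc/fstab on each server;
        # _netdev delays the mount until networking is up
        echo "server1.yourdomain:/your-volume-name /mnt/glusterfs glusterfs defaults,_netdev 0 0" >> /etc/fstab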

More reading: http://www.gluster.org/community/documentation/index.php/Main_Page

Enjoy!