- lsblk to find all disks
zpool create <name> raidz /dev/sd<x> ... cache /dev/sd<y> log /dev/sd<z>
zfs create tank/appdata
zfs set mountpoint=/appdata tank/appdata
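For example, a sketch of the zpool create above with hypothetical device names (use the disks lsblk actually shows):
# example only -- device names are placeholders
zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd cache /dev/sde log /dev/sdf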
# Install common drivers
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt-get install ubuntu-drivers-common
sudo ubuntu-drivers autoinstall
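To preview which drivers would be installed before running autoinstall:
ubuntu-drivers devices   # lists detected hardware and the recommended driver for each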
- Ubuntu 18.04+ uses netplan to configure network interfaces. Using netplan is straightforward: create a YAML describing your network interfaces and then run netplan apply
- gimli configuration: /etc/netplan/50-cloud-init.yaml
netplan apply
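A minimal sketch of such a YAML with a static address; the file name, interface name, and addresses are placeholders:
# /etc/netplan/01-example.yaml -- example only, adjust for your hardware and network
network:
  version: 2
  ethernets:
    enp1s0f0:
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]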
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
# Answer Yes when prompted
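Enabling it should leave /etc/apt/apt.conf.d/20auto-upgrades looking roughly like this:
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";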
On Ubuntu the system waits at boot for every defined interface to come up, which can slow things down. For every interface that isn't needed, add the optional flag, for example:
network:
  version: 2
  ethernets:
    enp1s0f0:
      dhcp4: true
    enp1s0f1:
      dhcp4: true
      optional: true
Also good to lower the systemd stop timeout so shutdowns don't hang on stuck services
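A sketch of one way to do that (the 30 s value is just an example):
# in /etc/systemd/system.conf
DefaultTimeoutStopSec=30s
# then re-exec systemd (or reboot) for the change to take effect
sudo systemctl daemon-reexec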
zfs create -V 10T tank/zblock0
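The zvol shows up as a block device under /dev/zvol/ and can be formatted like any other disk, for example:
# filesystem and mountpoint are examples
mkfs.ext4 /dev/zvol/tank/zblock0
mkdir -p /mnt/zblock0
mount /dev/zvol/tank/zblock0 /mnt/zblock0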
- Set up NFS with Ceph; follow the instructions here: https://docs.ceph.com/en/latest/cephfs/fs-nfs-exports/#create-cephfs-export
- All nodes need to be configured to use NFS 4.1. Create the file:
/etc/nfsmount.conf
inside add:
[ NFSMount_Global_Options ]
Defaultvers=4.1
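To confirm a client actually negotiated 4.1 after mounting:
nfsstat -m   # look for vers=4.1 in the mount options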
- NFS has to be configured to convert IDs to numbers. This can be done by creating a file with the following contents:
vi nfs.config
NFSV4 {
    Allow_Numeric_Owners = true;
    Only_Numeric_Owners = true;
}
and then applying that file with (see https://docs.ceph.com/en/octopus/cephfs/fs-nfs-exports/#set-customized-nfs-ganesha-configuration):
ceph nfs cluster config set nfs-cluster -i nfs.config
ceph nfs export create cephfs cephfs nfs-cluster /cephfs
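A hedged example of mounting that export from a client; gimli is assumed here to be one of the hosts running the NFS-Ganesha service:
sudo mkdir -p /mnt/cephfs
sudo mount -t nfs -o vers=4.1 gimli:/cephfs /mnt/cephfs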
NFS-Ganesha stores its config in RADOS in its own pool; here are some common commands:
rados -p nfs-ganesha ls --all
rados -p nfs-ganesha get -n nfs-cluster <objectname> <filetooutputto> --all
ceph nfs cluster update <clusterid> "<placement>"
for example:
ceph nfs cluster update nfs-cluster "3 gimli,helium,lithium"