
Introduction
NFS stands for Network File System, a distributed filesystem protocol that enables clients to mount remote filesystems over the network as if they were local. In this guide we'll set up an NFS server on CentOS 7.
Update and Install Necessary packages
yum update -y
yum -y install nfs-utils mdadm
Configure disk(s) to be used as NFS storage
In our case we'll be using two block devices, /dev/sdb and /dev/sdc.
Run the lsblk command to view the list of available devices. We'll merge the two block devices into a RAID 0 array, which offers no redundancy and is used here for test purposes only. In a production environment, consider a RAID configuration that ensures disk redundancy, e.g. RAID 1, RAID 6, or RAID 10 (see the sketch after the listing below).
[root@cs-nfs-dev ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0             2:0    1    4K  0 disk
sda             8:0    0   16G  0 disk
├─sda1          8:1    0    1G  0 part /boot
└─sda2          8:2    0   15G  0 part
  ├─cl-root   253:0    0 13.4G  0 lvm  /
  └─cl-swap   253:1    0  1.6G  0 lvm  [SWAP]
sdb             8:16   0  100G  0 disk
sdc             8:32   0  100G  0 disk
sr0            11:0    1 1024M  0 rom
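For comparison, here is a minimal sketch of creating a redundant array instead. It assumes four equally sized partitions; /dev/sdd1 and /dev/sde1 are hypothetical devices that do not exist in this setup:

# RAID 10 needs at least four member devices; it mirrors and stripes,
# so the array survives the loss of one disk per mirror pair.
mdadm --create /dev/md0 --level=raid10 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1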
Convert the disks to GPT and create Partitions
To make it easier, we'll write a short bash script to perform this action on our behalf. Converting the disks to GPT removes the four-primary-partition limit enforced by MBR.
Create a file and name it mklabel.sh
touch mklabel.sh
Add the following to the file. Edit the device names to suit your environment.
CAUTION: DO NOT INCLUDE THE BOOT DISK HERE!
for i in sdb sdc; do
  # Convert disk to gpt
  parted --script /dev/$i "mklabel gpt"
  # Create primary partition on the entire block device
  parted --script /dev/$i "mkpart primary 0% 100%"
  # Set raid flag to make the kernel aware of the raid partition
  parted --script /dev/$i "set 1 raid on"
done
Make the file executable
chmod +x mklabel.sh
Run the script, then run lsblk again to confirm the new partitions.
[root@cs-nfs-dev ~]# ./mklabel.sh
[root@cs-nfs-dev ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0             2:0    1    4K  0 disk
sda             8:0    0   16G  0 disk
├─sda1          8:1    0    1G  0 part /boot
└─sda2          8:2    0   15G  0 part
  ├─cl-root   253:0    0 13.4G  0 lvm  /
  └─cl-swap   253:1    0  1.6G  0 lvm  [SWAP]
sdb             8:16   0  100G  0 disk
└─sdb1          8:17   0  100G  0 part
sdc             8:32   0  100G  0 disk
└─sdc1          8:33   0  100G  0 part
sr0            11:0    1 1024M  0 rom
Configure the devices as a RAID 0 array using mdadm
[root@cs-nfs-dev ~]# mdadm --create /dev/md0 --level=raid0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
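Arrays created this way are not recorded anywhere by default, so the device may reassemble under a different name (e.g. /dev/md127) after a reboot. A minimal way to pin the name on CentOS 7:

# Capture the array definition so it reassembles as /dev/md0 at boot
mdadm --detail --scan >> /etc/mdadm.conf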
Run lsblk
[root@cs-nfs-dev ~]# lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
fd0             2:0    1     4K  0 disk
sda             8:0    0    16G  0 disk
├─sda1          8:1    0     1G  0 part  /boot
└─sda2          8:2    0    15G  0 part
  ├─cl-root   253:0    0  13.4G  0 lvm   /
  └─cl-swap   253:1    0   1.6G  0 lvm   [SWAP]
sdb             8:16   0   100G  0 disk
└─sdb1          8:17   0   100G  0 part
  └─md0         9:0    0 199.9G  0 raid0
sdc             8:32   0   100G  0 disk
└─sdc1          8:33   0   100G  0 part
  └─md0         9:0    0 199.9G  0 raid0
sr0            11:0    1  1024M  0 rom
As shown, the two devices are now members of a single RAID 0 array, md0. You can also inspect the array directly, as sketched below.
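A quick way to confirm the array is assembled and healthy (standard mdadm tooling, nothing specific to this setup):

# The kernel's view of all md arrays and their status
cat /proc/mdstat

# Detailed view of a single array
mdadm --detail /dev/md0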
Create a file system on the md0 device. In our case we'll use XFS.
[root@cs-nfs-dev ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=3274624 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=52393984, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=25584, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Create a directory to mount the file system
mkdir /mnt/nfs
Update /etc/fstab so the md0 device is mounted on the NFS directory at boot.
echo "/dev/md0 /mnt/nfs xfs defaults 0 0" >> /etc/fstab
Mount the file system
[root@cs-nfs-dev ~]# mount -av
/        : ignored
/boot    : already mounted
swap     : ignored
mount: /mnt/nfs does not contain SELinux labels.
   You just mounted an file system that supports labels which does
   not contain labels, onto an SELinux box. It is likely that confined
   applications will generate AVC messages and not be allowed access to
   this file system. For more details see restorecon(8) and mount(8).
/mnt/nfs : successfully mounted
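The warning above means the fresh XFS filesystem carries no SELinux labels yet. The message itself points to restorecon(8); applying the default file contexts is one way to address it:

# Apply the default SELinux context to the new mount point, recursively
restorecon -Rv /mnt/nfs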
Run the df command to confirm the mount.
[root@cs-nfs-dev ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root   14G  1.4G   13G  10% /
devtmpfs             910M     0  910M   0% /dev
tmpfs                920M     0  920M   0% /dev/shm
tmpfs                920M  8.5M  912M   1% /run
tmpfs                920M     0  920M   0% /sys/fs/cgroup
/dev/sda1           1014M  215M  800M  22% /boot
tmpfs                184M     0  184M   0% /run/user/0
/dev/md0             200G   33M  200G   1% /mnt/nfs
NFS Exports
The next step is to add the exports.
echo "/mnt/nfs/ 192.168.50.41(rw,async,no_root_squash,no_subtree_check)" >> /etc/exports
In this case, the IP 192.168.50.41 is our NFS client that will mount the NFS storage. You can also use an asterisk (*) if you need any client to mount the storage, as in the sketch below.
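For reference, the same export opened to a whole subnet or to any host; 192.168.50.0/24 is a placeholder network, adjust it to yours:

# /etc/exports: export to an entire subnet
/mnt/nfs/ 192.168.50.0/24(rw,async,no_root_squash,no_subtree_check)

# /etc/exports: export to any client (least restrictive)
/mnt/nfs/ *(rw,async,no_root_squash,no_subtree_check)

Note that no_root_squash lets a client's root user act as root on the export; drop it unless you specifically need that behavior.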
Start the NFS server
systemctl start nfs-server.service
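This starts the service for the current session only. If you want the server to come back up after a reboot, also enable it:

# Start nfs-server automatically at boot
systemctl enable nfs-server.service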
Run the exportfs command
[root@cs-nfs-dev ~]# exportfs -av
exporting 192.168.50.41:/mnt/nfs
Add services to the firewall and configure SELinux
Before concluding, note that RHEL/CentOS ships with a strict firewall policy. Any service that needs to be reached from outside the host must be allowed through the firewall.
Set SELinux to permissive
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config; setenforce 0;
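Switching to permissive mode is the blunt approach. If you'd rather keep SELinux enforcing, the stock policy on CentOS 7 ships booleans for NFS exports; a sketch of that alternative:

# Allow NFS to export any filesystem read/write while staying in enforcing mode
setsebool -P nfs_export_all_rw on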
Add services to the firewall and reload
[root@cs-nfs-dev ~]# firewall-cmd --permanent --zone=public --add-service=nfs
success
[root@cs-nfs-dev ~]# firewall-cmd --permanent --zone=public --add-service=mountd
success
[root@cs-nfs-dev ~]# firewall-cmd --permanent --zone=public --add-service=rpc-bind
success
[root@cs-nfs-dev ~]# firewall-cmd --reload
success
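You can confirm the rules took effect with the same tool:

# List everything currently allowed in the public zone
firewall-cmd --zone=public --list-services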
The NFS server is now ready to be used.
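As a quick sanity check from a client machine, you can query the export list remotely. showmount ships with nfs-utils; the server address below is a placeholder, since the server's own IP isn't shown in this guide:

# List exports offered by the NFS server (replace with your server's IP)
showmount -e 192.168.50.40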
Next we'll look at how to configure the NFS client on CentOS 7.