[CentOS 8] Disk Management | Building RAID

Linux

by mp.jamong 2020. 7. 31. 09:04


The previous post covered the concepts behind RAID; this post explains how to build each RAID level on CentOS 8.

 

 

Previous post: [CentOS 8] Disk Management | RAID Concepts — RAID combines multiple hard disks into what behaves like a single disk, improving reliability and performance while keeping costs down. (mpjamong.tistory.com)

Building RAID requires multiple disks attached to the Linux system. Most lab environments use VMware or VirtualBox for this; setting that up will be covered in a separate post.

 

 

1. Disk Management | RAID Build Prerequisites

  • Disk count: 13
  • Disk size: 2 GB each
  • Role of each disk
    - Linear RAID: 2 disks needed. /dev/sdb, /dev/sdc
    - RAID 0: 2 disks needed. /dev/sdd, /dev/sde
    - RAID 1: 2 disks needed. /dev/sdf, /dev/sdg
    - RAID 5: 3 disks needed. /dev/sdh, /dev/sdi, /dev/sdj
    - RAID 10: 4 disks needed. /dev/sdk, /dev/sdl, /dev/sdm, /dev/sdn

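The plan above can be encoded and sanity-checked before any disk is touched. A minimal sketch (the names `RAID_PLAN`, `MIN_DISKS`, and `check_plan` are illustrative, not from this post) that verifies each level meets its minimum disk count and that the totals add up to 13:

```python
# Disk plan for this walkthrough: 13 x 2 GB disks, /dev/sdb through /dev/sdn.
RAID_PLAN = {
    "linear": ["/dev/sdb", "/dev/sdc"],
    "raid0":  ["/dev/sdd", "/dev/sde"],
    "raid1":  ["/dev/sdf", "/dev/sdg"],
    "raid5":  ["/dev/sdh", "/dev/sdi", "/dev/sdj"],
    "raid10": ["/dev/sdk", "/dev/sdl", "/dev/sdm", "/dev/sdn"],
}

# Minimum disk count per level; passing fewer devices to mdadm --create
# is the most common cause of creation errors.
MIN_DISKS = {"linear": 2, "raid0": 2, "raid1": 2, "raid5": 3, "raid10": 4}

def check_plan(plan):
    """True if every level has at least its minimum number of disks."""
    return all(len(devs) >= MIN_DISKS[level] for level, devs in plan.items())
```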
2. Disk Management | Creating a Partition on Each RAID Disk

  • The partitioning procedure below is identical for every disk; only the device name passed to fdisk changes each time
    - Example: the RAID builds require 13 disks, so create a partition on all 13 of them
       # fdisk /dev/sdb
       # fdisk /dev/sdc
       # fdisk /dev/sdd
                  .
                  .
                  .
       # fdisk /dev/sdn
# Create a partition on each disk
[root@magicpipe ~]# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x9fc3be8c.

# Create a new partition
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)

# Choose a primary partition
Select (default p): p

# Select partition number 1
Partition number (1-4, default 1): 1
# First sector (press Enter to accept the default)
First sector (2048-4194303, default 2048): ENTER
# Last sector (press Enter to accept the default)
Last sector, +sectors or +size{K,M,G,T,P} (2048-4194303, default 4194303): ENTER

Created a new partition 1 of type 'Linux' and of size 2 GiB.

# Change the partition type
Command (m for help): t
Selected partition 1
# Enter the code for the 'Linux raid autodetect' type (fd)
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.

# Review the settings
Command (m for help): p
Disk /dev/sdb: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x9fc3be8c

Device     Boot Start     End Sectors Size Id Type
/dev/sdb1        2048 4194303 4192256   2G fd Linux raid autodetect

# Write the changes to disk
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
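Typing the same answers into fdisk thirteen times is tedious and error-prone. The keystrokes above (n, p, 1, Enter, Enter, t, fd, w) can instead be fed to fdisk on stdin in a loop. This is a sketch not shown in the original session; `DISKS` and `DRY_RUN` are illustrative names, and the dry-run guard should stay on until the device list has been double-checked, since fdisk will happily overwrite the wrong disk.

```shell
#!/bin/sh
# Script the interactive fdisk dialogue for every disk in the plan.
# DRY_RUN=1 only prints what would happen; set it to 0 to actually partition.
DISKS="sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn"
DRY_RUN=1
PLANNED=""

for d in $DISKS; do
    if [ "$DRY_RUN" = 1 ]; then
        PLANNED="$PLANNED /dev/$d"
        echo "would partition /dev/$d"
    else
        # n, p, 1, Enter, Enter (defaults), t, fd, w -- same answers as above.
        printf 'n\np\n1\n\n\nt\nfd\nw\n' | fdisk "/dev/$d"
    fi
done
```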

 

 

3. Disk Management | The mdadm Command

  • 'mdadm' is the command used on CentOS to create and manage RAID devices
  • Option | create
    - Description: creates a RAID array on the specified devices (disks)
    - Example: --create /dev/md0 : create the array as device md0
  • Option | level
    - Description: RAID level to use (0 = RAID 0, 1 = RAID 1, 5 = RAID 5, 10 = RAID 10, linear = Linear RAID)
    - Example: --level=linear
  • Option | raid-devices
    - Description: number of member disks and the partitions to use
    - Caution: keep each level's minimum disk count in mind (the most common cause of creation errors)
    - Example: --raid-devices=2 /dev/sdb1 /dev/sdc1
  • Option | stop
    - Description: stops a RAID array
    - Example: mdadm --stop /dev/md0
  • Option | run
    - Description: restarts a stopped RAID array
    - Example: mdadm --run /dev/md0
  • Option | detail
    - Description: shows detailed information about an array
    - Example: --detail /dev/md9

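The minimum-disk caution matters because each level also trades capacity differently. As a rough sanity check (a sketch, not part of mdadm; the function name is illustrative and metadata overhead is ignored), the usable size of each array built in this post can be computed from its level and member count:

```python
def usable_gib(level, disks, size_gib=2):
    """Approximate usable capacity in GiB for an array of equal-size disks."""
    if level in ("linear", "raid0"):
        return disks * size_gib           # all space holds data
    if level == "raid1":
        return size_gib                   # every disk is a full mirror
    if level == "raid5":
        return (disks - 1) * size_gib     # one disk's worth goes to parity
    if level == "raid10":
        return disks // 2 * size_gib      # mirrored pairs, then striped
    raise ValueError(f"unknown level: {level}")
```

With 2 GB disks this gives 4 GiB for the linear, RAID 0, RAID 5, and RAID 10 arrays and 2 GiB for RAID 1, which matches the mkfs.ext4 block counts in the sessions below (1047040 vs. 523520 4k blocks).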
 

4. Disk Management | Building a Linear RAID

# Create the Linear RAID
[root@magicpipe ~]# mdadm --create /dev/md9 --level=linear --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md9 started.

# Verify the Linear RAID
[root@magicpipe ~]# mdadm --detail --scan
ARRAY /dev/md9 metadata=1.2 name=magicpipe:9 UUID=3f1ede01:7c44eb6e:1b344c28:5171fc34

# Format and mount the Linear RAID volume
[root@magicpipe ~]# mkfs.ext4 /dev/md9
mke2fs 1.45.4 (23-Sep-2019)
Creating filesystem with 1047040 4k blocks and 262144 inodes
Filesystem UUID: b98bd080-558d-46d3-92e4-8d7714eed851
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

[root@magicpipe ~]# mkdir /raid_linear
[root@magicpipe ~]# mount /dev/md9 /raid_linear

[root@magicpipe ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             382M     0  382M   0% /dev
tmpfs                410M     0  410M   0% /dev/shm
tmpfs                410M  6.4M  404M   2% /run
tmpfs                410M     0  410M   0% /sys/fs/cgroup
/dev/mapper/cl-root   27G  5.0G   22G  19% /
/dev/sda1            976M  244M  666M  27% /boot
tmpfs                 82M  1.2M   81M   2% /run/user/42
tmpfs                 82M  4.0K   82M   1% /run/user/1000
/dev/md9             3.9G   16M  3.7G   1% /raid_linear

# Register the Linear RAID mount in /etc/fstab
[root@magicpipe ~]# vi /etc/fstab
# /etc/fstab
# Created by anaconda on Sat Jun 27 11:16:20 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=a1dec119-5afb-4119-ac5e-17d1176b3d64 /boot ext4    defaults        1 2
/dev/mapper/cl-swap     swap                    swap    defaults        0 0
/dev/md9                /raid_linear            ext4    defaults        1 2
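One step the transcript does not show: without a persisted configuration, md device numbering can change at the next boot and break the fstab entry. The usual fix is to append the scan output to /etc/mdadm.conf with `mdadm --detail --scan >> /etc/mdadm.conf` (some setups also rebuild the initramfs afterwards). The sketch below only dissects what that scan line contains, using the sample output captured above rather than querying a live system; `SCAN` is sample data.

```shell
#!/bin/sh
# On the live system: mdadm --detail --scan >> /etc/mdadm.conf
# Here we parse the scan line captured earlier to show what gets stored.
SCAN='ARRAY /dev/md9 metadata=1.2 name=magicpipe:9 UUID=3f1ede01:7c44eb6e:1b344c28:5171fc34'

DEVICE=$(echo "$SCAN" | awk '{print $2}')   # the md device node
UUID=${SCAN##*UUID=}                        # the UUID mdadm matches members on
echo "device=$DEVICE uuid=$UUID"
```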

 

 

5. Disk Management | Building RAID 0

# Create the RAID 0 array
[root@magicpipe ~]# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdd1 /dev/sde1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

# Verify the RAID 0 array
[root@magicpipe ~]# mdadm --detail --scan
ARRAY /dev/md9 metadata=1.2 name=magicpipe:9 UUID=3f1ede01:7c44eb6e:1b344c28:5171fc34
ARRAY /dev/md0 metadata=1.2 name=magicpipe:0 UUID=3e3b0172:48948474:c9367f3d:365746ba

# Format and mount the RAID 0 volume
[root@magicpipe ~]# mkfs.ext4 /dev/md0
mke2fs 1.45.4 (23-Sep-2019)
Creating filesystem with 1047040 4k blocks and 262144 inodes
Filesystem UUID: ba628060-116f-46e5-9b9f-07444233c512
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

[root@magicpipe ~]# mkdir /raid_0
[root@magicpipe ~]# mount /dev/md0 /raid_0

# Register the RAID 0 mount in /etc/fstab
[root@magicpipe ~]# vi /etc/fstab
# /etc/fstab
# Created by anaconda on Sat Jun 27 11:16:20 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=a1dec119-5afb-4119-ac5e-17d1176b3d64 /boot ext4    defaults        1 2
/dev/mapper/cl-swap     swap                    swap    defaults        0 0
/dev/md9                /raid_linear            ext4    defaults        1 2
/dev/md0                /raid_0                 ext4    defaults        1 2

 

 

6. Disk Management | Building RAID 1

# Create the RAID 1 array
[root@magicpipe ~]# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdf1 /dev/sdg1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

# Verify the RAID 1 array
[root@magicpipe ~]# mdadm --detail --scan
ARRAY /dev/md9 metadata=1.2 name=magicpipe:9 UUID=3f1ede01:7c44eb6e:1b344c28:5171fc34
ARRAY /dev/md0 metadata=1.2 name=magicpipe:0 UUID=3e3b0172:48948474:c9367f3d:365746ba
ARRAY /dev/md1 metadata=1.2 name=magicpipe:1 UUID=981f868b:ff2e320a:41c7d206:f6a85369

# Format and mount the RAID 1 volume
[root@magicpipe ~]# mkfs.ext4 /dev/md1
mke2fs 1.45.4 (23-Sep-2019)
Creating filesystem with 523520 4k blocks and 131072 inodes
Filesystem UUID: e39278c8-31ae-429c-b0e2-ad019d959d2f
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

[root@magicpipe ~]# mkdir /raid_1
[root@magicpipe ~]# mount /dev/md1 /raid_1

# Register the RAID 1 mount in /etc/fstab
[root@magicpipe ~]# vi /etc/fstab
# /etc/fstab
# Created by anaconda on Sat Jun 27 11:16:20 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=a1dec119-5afb-4119-ac5e-17d1176b3d64 /boot ext4    defaults        1 2
/dev/mapper/cl-swap     swap                    swap    defaults        0 0
/dev/md9                /raid_linear            ext4    defaults        1 2
/dev/md0                /raid_0                 ext4    defaults        1 2
/dev/md1                /raid_1                 ext4    defaults        1 2

 

 

7. Disk Management | Building RAID 5

# Create the RAID 5 array
[root@magicpipe ~]# mdadm --create /dev/md5 --level=5 --raid-devices=3 /dev/sdh1 /dev/sdi1 /dev/sdj1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.

# Verify the RAID 5 array
[root@magicpipe ~]# mdadm --detail --scan
ARRAY /dev/md9 metadata=1.2 name=magicpipe:9 UUID=3f1ede01:7c44eb6e:1b344c28:5171fc34
ARRAY /dev/md0 metadata=1.2 name=magicpipe:0 UUID=3e3b0172:48948474:c9367f3d:365746ba
ARRAY /dev/md1 metadata=1.2 name=magicpipe:1 UUID=981f868b:ff2e320a:41c7d206:f6a85369
ARRAY /dev/md5 metadata=1.2 spares=1 name=magicpipe:5 UUID=5742527a:54cd71b9:66b76345:a3517f91

# Format and mount the RAID 5 volume
[root@magicpipe ~]# mkfs.ext4 /dev/md5
mke2fs 1.45.4 (23-Sep-2019)
Creating filesystem with 1047040 4k blocks and 262144 inodes
Filesystem UUID: d57c1f99-0d07-489e-8975-0de1cf988ff4
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

[root@magicpipe ~]# mkdir /raid_5
[root@magicpipe ~]# mount /dev/md5 /raid_5

# Register the RAID 5 mount in /etc/fstab
[root@magicpipe ~]# vi /etc/fstab

# /etc/fstab
# Created by anaconda on Sat Jun 27 11:16:20 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=a1dec119-5afb-4119-ac5e-17d1176b3d64 /boot ext4    defaults        1 2
/dev/mapper/cl-swap     swap                    swap    defaults        0 0
/dev/md9                /raid_linear            ext4    defaults        1 2
/dev/md0                /raid_0                 ext4    defaults        1 2
/dev/md1                /raid_1                 ext4    defaults        1 2
/dev/md5                /raid_5                 ext4    defaults        1 2

 

 

8. Disk Management | Building RAID 10

# Create the RAID 10 array
[root@magicpipe ~]# mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.

# Verify the RAID 10 array
[root@magicpipe ~]# mdadm --detail --scan
ARRAY /dev/md/5 metadata=1.2 name=magicpipe:5 UUID=5742527a:54cd71b9:66b76345:a3517f91
ARRAY /dev/md/9 metadata=1.2 name=magicpipe:9 UUID=3f1ede01:7c44eb6e:1b344c28:5171fc34
ARRAY /dev/md/0 metadata=1.2 name=magicpipe:0 UUID=3e3b0172:48948474:c9367f3d:365746ba
ARRAY /dev/md/1 metadata=1.2 name=magicpipe:1 UUID=981f868b:ff2e320a:41c7d206:f6a85369
ARRAY /dev/md10 metadata=1.2 name=magicpipe:10 UUID=3e07ec07:d61b9ee8:ffd22cd4:b8013b5e

# Format and mount the RAID 10 volume
[root@magicpipe ~]# mkfs.ext4 /dev/md10
mke2fs 1.45.4 (23-Sep-2019)
Creating filesystem with 1047040 4k blocks and 262144 inodes
Filesystem UUID: af607450-3850-4bbf-8530-6068cbb30105
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

[root@magicpipe ~]# mkdir /raid_10
[root@magicpipe ~]# mount /dev/md10 /raid_10


# Register the RAID 10 mount in /etc/fstab
[root@magicpipe ~]# vi /etc/fstab
# /etc/fstab
# Created by anaconda on Sat Jun 27 11:16:20 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=a1dec119-5afb-4119-ac5e-17d1176b3d64 /boot ext4    defaults        1 2
/dev/mapper/cl-swap     swap                    swap    defaults        0 0
/dev/md9                /raid_linear            ext4    defaults        1 2
/dev/md0                /raid_0                 ext4    defaults        1 2
/dev/md1                /raid_1                 ext4    defaults        1 2
/dev/md5                /raid_5                 ext4    defaults        1 2
/dev/md10               /raid_10                ext4    defaults        1 2

 

 

9. Disk Management | Verifying the RAID Arrays

# Check the disks in use by each RAID array
[root@magicpipe ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md10 : active raid10 sdn1[3] sdm1[2] sdl1[1] sdk1[0]
      4188160 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

md1 : active raid1 sdg1[1] sdf1[0]
      2094080 blocks super 1.2 [2/2] [UU]

md0 : active raid0 sde1[1] sdd1[0]
      4188160 blocks super 1.2 512k chunks

md9 : active linear sdb1[0] sdc1[1]
      4188160 blocks super 1.2 0k rounding

md5 : active raid5 sdj1[3] sdh1[0] sdi1[1]
      4188160 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: &lt;none&gt;

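The /proc/mdstat text above is easy to read by eye, and also easy to parse when you want to check array health from a script. A small sketch (names are illustrative) that extracts each array's level and member devices from the captured output; `MDSTAT` here is the sample text from this session, not a live read of /proc/mdstat:

```python
import re

# Sample of the /proc/mdstat output shown above (detail lines are skipped).
MDSTAT = """\
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md10 : active raid10 sdn1[3] sdm1[2] sdl1[1] sdk1[0]
      4188160 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md1 : active raid1 sdg1[1] sdf1[0]
md0 : active raid0 sde1[1] sdd1[0]
md9 : active linear sdb1[0] sdc1[1]
md5 : active raid5 sdj1[3] sdh1[0] sdi1[1]
"""

def parse_mdstat(text):
    """Map each md device to its (level, [member partitions])."""
    arrays = {}
    for line in text.splitlines():
        m = re.match(r"(md\d+) : active (\S+) (.+)", line)
        if m:
            name, level, members = m.groups()
            # "sdn1[3]" -> "sdn1": drop the role-number suffix.
            arrays[name] = (level, [d.split("[")[0] for d in members.split()])
    return arrays
```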
 

 
