
RAID

RAID Arrays & Controllers

Hard Drive Read/Write Benchmark Test: 40 GB, 40 GB, 160 GB, 160 GB
Hard Drive Read/Write Benchmark Test: 750 GB, 320 GB, 250 GB, 80 GB
RAID 0: 750+320-250-160-160
RAID 0: 750+80-40-40
RAID 0: 320-250-160-160
RAID 0: 80-40-40
RAID 0: 750-320-320

Physical Storage Configuration


Appendix


Common Buses and their Maximum Bandwidth
Slot         | Clock   | Number of Bits | Data per Clock Cycle | Bandwidth
PCI          | 33 MHz  | 32             | 1                    | 133 MB/s
PCI-X 66     | 66 MHz  | 64             | 1                    | 533 MB/s
PCI-X 133    | 133 MHz | 64             | 1                    | 1,066 MB/s
PCI-X 266    | 133 MHz | 64             | 2                    | 2,132 MB/s
PCI-X 533    | 133 MHz | 64             | 4                    | 4,266 MB/s
AGP x1       | 66 MHz  | 32             | 1                    | 266 MB/s
AGP x2       | 66 MHz  | 32             | 2                    | 533 MB/s
AGP x4       | 66 MHz  | 32             | 4                    | 1,066 MB/s
AGP x8       | 66 MHz  | 32             | 8                    | 2,133 MB/s
PCIe 1.0 x1  | 2.5 GHz | 1              | 1                    | 250 MB/s
PCIe 1.0 x4  | 2.5 GHz | 4              | 1                    | 1,000 MB/s
PCIe 1.0 x8  | 2.5 GHz | 8              | 1                    | 2,000 MB/s
PCIe 1.0 x16 | 2.5 GHz | 16             | 1                    | 4,000 MB/s
PCIe 2.0 x1  | 5 GHz   | 1              | 1                    | 500 MB/s
PCIe 2.0 x4  | 5 GHz   | 4              | 1                    | 2,000 MB/s
PCIe 2.0 x8  | 5 GHz   | 8              | 1                    | 4,000 MB/s
PCIe 2.0 x16 | 5 GHz   | 16             | 1                    | 8,000 MB/s
PCIe 3.0 x1  | 8 GHz   | 1              | 1                    | 1,000 MB/s
PCIe 3.0 x4  | 8 GHz   | 4              | 1                    | 4,000 MB/s
PCIe 3.0 x8  | 8 GHz   | 8              | 1                    | 8,000 MB/s
PCIe 3.0 x16 | 8 GHz   | 16             | 1                    | 16,000 MB/s
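The figures in this table are simply clock rate x width x transfers per clock, divided by 8 bits per byte. As a quick sanity check (bc used here purely as a calculator), the PCI-X 266 row works out to:

echo "133.33 * 64 * 2 / 8" | bc -l   # ~2,133 MB/s, matching the PCI-X 266 row above (rounding aside)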

The AGP interface provided a dedicated connection to the North Bridge, bypassing the traffic generated by the South Bridge and the peripherals connected to it.

 


PCIe Bandwidth
Revision | Encoding  | Clock   | Bandwidth (x1)
1.0      | 8b/10b    | 2.5 GHz | 250 MB/s
2.0      | 8b/10b    | 5 GHz   | 500 MB/s
3.0      | 128b/130b | 8 GHz   | 1 GB/s
4.0      | 128b/130b | 16 GHz  | 2 GB/s
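These per-lane figures already account for the line encoding, which is why the usable bandwidth is lower than the raw signalling rate suggests. Two quick checks (bc again used only as a calculator):

echo "2.5 * 1000 * 8 / 10 / 8" | bc -l    # PCIe 1.0: 2.5 GT/s with 8b/10b  -> 250 MB/s per lane
echo "8 * 1000 * 128 / 130 / 8" | bc -l   # PCIe 3.0: 8 GT/s with 128b/130b -> ~985 MB/s, listed as 1 GB/s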

 

Maximum HyperTransport Frequency and Bandwidth
HyperTransport version | Year | Max. HT frequency | Max. link width | Max. aggregate bandwidth (bidirectional) | Max. unidirectional bandwidth at 16-bit | Max. unidirectional bandwidth at 32-bit
1.0                    | 2001 | 800 MHz           | 32-bit          | 12.8 GB/s                                | 3.2 GB/s                                | 6.4 GB/s
1.1                    | 2002 | 800 MHz           | 32-bit          | 12.8 GB/s                                | 3.2 GB/s                                | 6.4 GB/s
2.0                    | 2004 | 1.4 GHz           | 32-bit          | 22.4 GB/s                                | 5.6 GB/s                                | 11.2 GB/s
3.0                    | 2006 | 2.6 GHz           | 32-bit          | 41.6 GB/s                                | 10.4 GB/s                               | 20.8 GB/s
3.1                    | 2008 | 3.2 GHz           | 32-bit          | 51.2 GB/s                                | 12.8 GB/s                               | 25.6 GB/s
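These numbers follow from clock x 2 (DDR) x link width. Reproducing the HyperTransport 3.1 row as a check:

echo "3.2 * 2 * 16 / 8" | bc -l       # 12.8 GB/s unidirectional on a 16-bit link
echo "3.2 * 2 * 32 / 8" | bc -l       # 25.6 GB/s unidirectional on a 32-bit link
echo "3.2 * 2 * 32 * 2 / 8" | bc -l   # 51.2 GB/s aggregate over both directions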

 

Other Buses and Their Maximum Bandwidth
Name                      | Data bandwidth
eSATA                     | 600 MB/s
eSATAp                    | 300 MB/s
SATA revision 3.2         | 1.97 GB/s
SATA revision 3.0         | 600 MB/s
SATA revision 2.0         | 300 MB/s
SATA revision 1.0         | 150 MB/s
PATA (IDE) 133            | 133.3 MB/s
SAS-3                     | 1.2 GB/s
SAS-2                     | 600 MB/s
SAS 300                   | 300 MB/s
SAS 150                   | 150 MB/s
IEEE 1394 (FireWire) 3200 | 393 MB/s
IEEE 1394 (FireWire) 800  | 98.25 MB/s
IEEE 1394 (FireWire) 400  | 49.13 MB/s
USB 3.1                   | 1.21 GB/s
USB 3.0                   | 400 MB/s or more (excluding protocol overhead, flow control, and framing)
USB 2.0                   | 35 MB/s
USB 1.1                   | 1.5 MB/s
SCSI Ultra-320            | 320 MB/s
10GFC Fibre Channel       | 1.195 GB/s
4GFC Fibre Channel        | 398 MB/s
InfiniBand Quad Rate      | 0.98 GB/s
Thunderbolt               | 1.22 GB/s
Thunderbolt 2             | 2.44 GB/s
Thunderbolt 3             | 4.88 GB/s
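As a practical aside (not part of the table above), on Linux you can check what speed a device actually negotiated, assuming the pciutils and usbutils packages are installed; the exact output wording varies by kernel version:

sudo lspci -vv | grep -E "LnkCap|LnkSta"   # PCIe: advertised vs. negotiated link speed and width
dmesg | grep -i "SATA link up"             # SATA: negotiated link speed, e.g. "SATA link up 6.0 Gbps"
lsusb -t                                   # USB: per-device speeds in Mbit/s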

mdadm

Create RAID 0 (level=0) array:

mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb2
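The new array still needs a filesystem and a mount point before it can be used. A minimal sketch, where ext4 and /mnt/raid0 are just example choices:

mkfs.ext4 /dev/md0          # put a filesystem on the new array
mkdir -p /mnt/raid0         # example mount point
mount /dev/md0 /mnt/raid0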

If you are creating a RAID 1 array, you can first replicate the partition table of the first drive onto the second:

sfdisk -d /dev/sda | sfdisk /dev/sdb
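With the partition tables matching, the RAID 1 array itself is created the same way as above; a sketch with illustrative device names:

mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1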

Get RAID array information:

mdadm --detail /dev/md0

Delete RAID 0 array:

mdadm --stop /dev/md0
mdadm --remove /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb2

If the array cannot be destroyed cleanly, zero the first sector of the drive (note that this also wipes the partition table):

sudo dd if=/dev/zero of=/dev/sda bs=512 count=1

As an aside, you can extract the Master Boot Record of a drive using this:

dd if=/dev/sda of=mbr.bin bs=512 count=1
od -xa mbr.bin

Stop RAID arrays:

mdadm --stop /dev/md1
mdadm --stop /dev/md0

Start RAID arrays:

mdadm --assemble --scan

Save the current array configuration so the arrays are reassembled automatically at boot:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
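After assembling (or at any time), /proc/mdstat shows which arrays are running and whether a resync is in progress:

cat /proc/mdstat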

Add disk (/dev/sdc1) to RAID array md0:

mdadm --add /dev/md0 /dev/sdc1

Remove a disk (/dev/sda1) from RAID array md0 by marking it as failed and then removing it:

mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
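Once a replacement disk is in place it can be added back with --add as shown above, and the rebuild followed from /proc/mdstat; device name here is illustrative:

mdadm --add /dev/md0 /dev/sdc1   # add the replacement disk
watch cat /proc/mdstat           # follow the rebuild progress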

hdparm

Test hard drive read speed:

sudo hdparm -t /dev/sda

Test hard drive read speed by skipping the first 500 GB of data:

hdparm -t --direct --offset 500 /dev/sda

Get hard drive information:

sudo hdparm -I /dev/sda

Run hdparm reading directly from the disk, bypassing the page cache the kernel keeps to speed up data delivery:

hdparm -t --direct /dev/sda

All these settings will be forgotten on reboot. To make these changes permanent, edit /etc/hdparm.conf on Debian systems.
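A minimal sketch of such an entry, assuming the stock Debian /etc/hdparm.conf layout and using the write-cache setting as an example:

# /etc/hdparm.conf -- options in a per-device block are re-applied at boot
/dev/sda {
    write_cache = on
}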

Tuning: Force the drive to deliver data from, say, 20 sectors at once

hdparm -m20 /dev/sda

Check: The maximum number of sectors the drive can deliver at once (MaxMultSect in the output)

hdparm -i /dev/sda

Tuning: Force drive to read 256 sectors in advance of the next read request

hdparm -a256 /dev/sda

Tuning: Enable 32-bit I/O transfers between the operating system and the drive controller (mode 3 adds a sync sequence that some chipsets require)

hdparm -c3 /dev/hda

Tuning: Enable multiword DMA mode 2

hdparm -X34 -d1 -u1 /dev/hda

Tuning: Enable write-back caching (the hard drive first stores the data to be written in a buffer before starting to write it)

hdparm -W1 /dev/sda

Tuning: All of the above in one command (replace each x with the value chosen for your drive)

hdparm -cx -dx -ux -mxx -Xxx /dev/hda
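Substituting the values used in the individual steps above gives a concrete, purely illustrative invocation:

hdparm -c3 -d1 -u1 -m20 -X34 /dev/hda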

Measure hard drive temperature (the -H option is only supported by some drives, mostly Hitachi):

sudo hdparm -H /dev/sdf

Smartmontools: Hard Drive Health

Install Smartmontools

sudo apt-get install smartmontools
sudo apt-get install gsmartcontrol

Make sure SMART is enabled on drive /dev/sda:

sudo smartctl -s on /dev/sda

Initiate a long self-test (it runs in the background and can take several hours):

sudo smartctl -t long /dev/sda

View the results:

sudo smartctl -l selftest /dev/sda

For IDE drives:

sudo smartctl -a /dev/sda

For SATA drives (older versions of smartctl need the explicit -d ata):

sudo smartctl -a -d ata /dev/sda
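For a quicker look than the full -a report, the overall health verdict and the attribute table can also be requested separately:

sudo smartctl -H /dev/sda   # overall health self-assessment (PASSED/FAILED)
sudo smartctl -A /dev/sda   # attribute table: reallocated sectors, temperature, etc.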

Highways and Their Speed Limits

Available Highways

HyperTransport comes in four versions (1.x, 2.0, 3.0, and 3.1), which run from 200 MHz to 3.2 GHz. It is also a DDR or "double data rate" connection, meaning it sends data on both the rising and falling edges of the clock signal, which allows a maximum data rate of 6400 MT/s when running at 3.2 GHz. The operating frequency is negotiated automatically with the motherboard chipset (North Bridge).

The Bottleneck

Peripherals (USB, PCI devices, hard drives, etc.) are connected to the South Bridge, which is connected to the North Bridge, which in turn is connected to the CPU.

The PCI Express lanes available on the North Bridge chip are used for video cards.

A PCI Express link supports full-duplex (simultaneous send and receive) communication between its two endpoints.

PCIe scales linearly, that is, x8 provides double the bandwidth of x4.

PCI Express buses receive their own clock signals. This eliminates their dependence on the front-side bus for timing.

A x1 connection, the smallest PCIe connection, has one lane made up of four wires. It carries one bit per cycle in each direction. A x2 link contains eight wires and transmits two bits at once, a x4 link transmits four bits, and so on.

High-end PCI Express controllers usually provide more than 16 lanes, allowing the motherboard manufacturer either to provide more PCI Express x16 slots for video cards or to connect other slots and devices directly to the North Bridge chip or CPU.

Reference Section