Store 2.0
STABILITY Store 2.0
Suspicion: the three SATA expansion cards occasionally crash the entire system.
Cards:
http://www.sybausa.com/productInfo.php?iid=537
Syba SY-PEX40008 4-port SATA II PCI-e Software RAID Controller Card--Bundle with Low Profile Bracket, SIL3124 Chipset. These are the identical cards that Backblaze still uses (Pod 2.0 AND Pod 3.0!). They hang off three PCI-E 1x slots (the small port).
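A quick sanity check that all three controllers are actually visible on the bus (not part of the original notes, just a standard lspci query):
# three SiI 3124 entries should show up
lspci | grep -i "silicon image"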
The RAID stays intact (luckily!), since the cards then block access completely.
The Arch log files contain no entries about the crashes at all!!!
The remote log via syslogd (on Myth) shows as the last entry:
mdadm: sending ioctl 1261 to a partition (buggy entry, but harmless)
sata_sil24: IRQ status == 0xffffffff, PCI fault or device removal
sata_sil24:
https://ata.wiki.kernel.org/index.php/Sata_sil24
Spurious interrupts are expected on SiI3124 suffering from IRQ loss erratum on PCI-X
PATCH?
http://old.nabble.com/-PATCH-06-13--sata_sil24%3A-implement-loss-of-completion-interrupt-on-PCI-X-errta-fix-p3799674.html
Thread about accessing SATA drives attached to the SIL3124 chip
http://www.linuxquestions.org/questions/linux-kernel-70/how-to-access-sata-drives-attached-to-sii3124-719408/
Test?
http://marc.info/?l=linux-ide&m=127228317404771&w=2
Opening and mounting the Raid after boot
To prevent auto-assembly at boot, the config file /etc/mdadm.conf must be empty (or at least fully commented out) and "MDADM_SCAN=no" must be set in /etc/sysconfig/mdadm.
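As a sketch of what that looks like (assuming the SUSE-style /etc/sysconfig/mdadm that the path suggests; the ARRAY lines are placeholders):
# /etc/mdadm.conf -- leave everything commented out so nothing is auto-assembled
# ARRAY /dev/md125 UUID=...
# ARRAY /dev/md126 UUID=...
# /etc/sysconfig/mdadm
MDADM_SCAN=no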
1.) Check that all disks are present:
/root/bin/diskserial_sort2.sh
There must currently be 17 disks. The basis is the file disknum.txt under /root/bin.
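A minimal count check could look like this (hypothetical sketch; the real logic lives in diskserial_sort2.sh / disknum.txt):
expected=17
found=$(lsblk -dn -o NAME | grep -c '^sd')   # count sd* block devices, no partitions
echo "found $found of $expected disks"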
2.) Find and assemble the Raids (no autostart):
mdadm --assemble --scan
3.) Cryptsetup:
cryptsetup luksOpen /dev/md125 cr_md125
4.) Mount:
mount /dev/mapper/cr_md125 /data
Closing would be:
cryptsetup luksClose cr_md125
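The whole open sequence can also be chained into one small helper; a sketch using only the commands above (not one of the existing scripts in /root/bin):
#!/bin/bash
# Sketch: assemble, decrypt and mount DATA after a reboot
set -e
/root/bin/diskserial_sort2.sh              # 1.) check that all disks are present
mdadm --assemble --scan                    # 2.) assemble the Raids (no autostart)
cryptsetup luksOpen /dev/md125 cr_md125    # 3.) unlock
mount /dev/mapper/cr_md125 /data           # 4.) mount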
JD2
java -Xmx512m -jar /home/gagi/jd2/JDownloader.jar
or, in /home/gagi/jd2/:
./JDownloader2
VNC
dergagi.selfhost.bz:5901
Hard disk layout
3000GB Hitachi Deskstar 5K3000 HDS5C3030ALA630 CoolSpin 32MB 3.5" (8.9cm) SATA 6Gb/s
3000GB Western Digital WD30EZRX 3TB internal hard disk (8.9 cm (3.5 inch), 5400 rpm, 2 ms, 64MB cache, SATA III)
Problem with WD disks and LCC (load cycle count)
http://idle3-tools.sourceforge.net/
http://koitsu.wordpress.com/2012/05/30/wd30ezrx-and-aggressive-head-parking/
Get idle3 timer raw value
idle3ctl -g /dev/sdh
Disable idle3 timer:
idle3ctl -d /dev/sdh
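To cover all WD drives in one go, a hedged loop sketch (the model check via smartctl is an assumption, and the drives need a power cycle before the change takes effect):
for d in /dev/sd? /dev/sd??; do
    [ -b "$d" ] || continue
    # only WD drives understand the idle3 timer
    if smartctl -i "$d" | grep -q "WDC"; then
        idle3ctl -d "$d"
    fi
done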
Read out the serial with:
udevadm info --query=all --name=/dev/sdi | grep ID_SERIAL_SHORT
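Looping over all drives gives a quick serial overview (a sketch along the lines of diskserial_sort2.sh, not the script itself):
for d in /dev/sd? /dev/sd??; do
    [ -b "$d" ] || continue
    echo -n "$d  "
    udevadm info --query=all --name="$d" | grep ID_SERIAL_SHORT
done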
Serial of the 160GB system disk:
JC0150HT0J7TPC
Serials of the data disks
00 : 00000000000000 (1TB System, Samsung)
works
01 : 234BGY0GS (3TB Toshiba) data 014
02 : 13V9WK9AS (3TB Toshiba) data2 105
03 :
04 : MJ1311YNG3Y4SA (3TB Toshiba) data 005
05 : MJ1311YNG3SYKA (3TB Toshiba) data 001
works now too, Molex contact problem fixed
06 : MJ1311YNG4J48A (3TB Toshiba) data 015
07 :
08 :
09 : MJ1311YNG3UUPA (3TB Toshiba) data 012
10 : MJ0351YNGA02YA (3TB Toshiba) data2 102
works
11 : MJ1311YNG3SAMA (3TB) data 013
12 :
13 : MJ1311YNG09EDA (3TB) data2 104
14 : WD-WCC7K7VCRJZT (3TB WD Red) HOTSPARE1
15 : MCE9215Q0AUYTW (3TB Toshiba) data2 100
works
16 : WD-WCAWZ2279670 (3TB WD) data 016
17 : MJ1311YNG3SSLA (3TB Toshiba) data 010
18 :
19 : MJ1311YNG3LTRA (3TB Toshiba) data 003
20 : MJ1311YNG3RM5A (3TB Toshiba) data 008
works now too, Molex contact problem fixed
21 : WD-WCC7K5THL257 (3TB WD Red) data2 103
22 :
23 : WD-WMC4N0L7H2HV (3TB WD) data2 107
24 : WD-WCC7K2JL6DSL (3TB WD Red) HOTSPARE2
25 :
works
26 : WD-WMC4N0L359DL (3TB WD) data2 106
27 :
28 : WD-WCC4N1ZY6VVY (3TB WD Red) data2 110
29 : WD-WCC4N6EVNFZ5 (3TB WD Red) data2 109
30 :
works
31 : 234BGY0GS (3TB Toshiba) data 014
32 :
33 : MJ1311YNG3WZVA (3TB Toshiba) data 000
34 : MJ1311YNG3Y4SA (3TB Toshiba) data 005
35 : MJ1311YNG3SYKA (3TB Toshiba) data 001
works
36 :
37 : WD-WCC4N0RC60LS (3TB WD Red) data 006
38 : 13V9WK9AS (3TB Whatev) data2 105
39 : MJ1311YNG3RZTA (3TB Toshiba) data 002
40 :
works
41 : MJ1311YNG3LTRA (3TB Toshiba) data 003
42 :
43 : MJ1311YNG38VGA (3TB Toshiba) data 004
44 :
45 :
TOTAL: 33 (18 + 11 + 3 hot spares + one system disk) out of 46 possible
Raid build command
in the mdadm screen
mdadm --create /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=15 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdp1
Re-Create 2014-01-31:
NEW CORRECT RE-CREATE COMMAND from mdadm-git:
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdb1:1024 /dev/sdd1:1024 /dev/sdf1:1024 /dev/sdg1:1024 /dev/sdh1:1024 /dev/sdc1:1024 /dev/sdt1:1024 /dev/sdn1:1024 /dev/sdo1:1024 /dev/sdq1:1024 /dev/sdm1:1024 /dev/sdp1:1024 /dev/sdu1:1024 /dev/sdv1:1024 /dev/sda1:1024 /dev/sds1:1024 /dev/sdl1:1024 /dev/sdw1:1024
Re-Create 2014-05-23: NEW CORRECT RE-CREATE COMMAND from mdadm-git in /builds/mdadm/:
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdb1:1024 /dev/sdd1:1024 /dev/sdf1:1024 /dev/sdg1:1024 /dev/sdh1:1024 /dev/sdc1:1024 /dev/sdt1:1024 /dev/sdm1:1024 /dev/sdn1:1024 /dev/sdp1:1024 /dev/sdl1:1024 /dev/sdo1:1024 /dev/sdu1:1024 /dev/sdv1:1024 /dev/sda1:1024 /dev/sds1:1024 /dev/sdk1:1024 /dev/sdw1:1024
Re-Create 2014-07-10: NEW CORRECT RE-CREATE COMMAND from mdadm-git in /builds/mdadm/:
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdb1:1024 /dev/sdd1:1024 /dev/sdf1:1024 /dev/sdg1:1024 /dev/sdh1:1024 /dev/sdc1:1024 /dev/sdt1:1024 /dev/sdn1:1024 /dev/sdo1:1024 /dev/sdq1:1024 /dev/sdm1:1024 /dev/sdp1:1024 /dev/sdu1:1024 /dev/sdv1:1024 /dev/sda1:1024 /dev/sds1:1024 /dev/sdl1:1024 /dev/sdw1:1024
Second Raid build command DATA2
in the mdadm screen
mdadm --create /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=4 /dev/sdv1 /dev/sdw1 /dev/sdj1 /dev/sdh1
Encrypt with special parameters for hardware encryption:
cryptsetup -v luksFormat --cipher aes-cbc-essiv:sha256 --key-size 256 /dev/md126
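Whether the CPU really accelerates this cipher can be verified beforehand (standard checks, not from the original notes):
grep -m1 -o aes /proc/cpuinfo    # AES-NI flag present?
cryptsetup benchmark             # compare aes-cbc 256b throughput against the others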
Open:
cryptsetup luksOpen /dev/md126 cr_md126
Create an XFS filesystem on it:
mkfs.xfs /dev/mapper/cr_md126
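Mounting then works the same way as for DATA; /data2 is the mount point used further down in these notes:
mount /dev/mapper/cr_md126 /data2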
Attempted Recreate 2014-10-12 after a botched grow from 4 to 5 RAID devices, crap:
./mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=4 /dev/sdy1:1024 /dev/sdq1:1024 /dev/sdi1:1024 /dev/sdj1:1024
Real Recreate 2014-10-12:
mdadm --create /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdq1 /dev/sdi1 /dev/sdj1 /dev/sdx1
Third Raid Recreate DATA and DATA2 2015-01-20
./mdadm --create /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdk1:1024 /dev/sdx1:1024
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdm1:1024 /dev/sdo1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdp1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdf1:1024 /dev/sdd1:1024 /dev/sdh1:1024 /dev/sde1:1024 /dev/sdg1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdk1:1024 /dev/sdq1:1024 /dev/sdc1:1024 /dev/sdv1:1024
Build WITHOUT a spare, i.e. 18 devices, then add the spare
Fourth Raid Recreate DATA 2015-02-01
in /root/bin
./diskserial2.sh
to see the disk order in the Raid. Probably also more stable without hddtemp etc.
mdadm from /builds/mdadm
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdo1:1024 /dev/sdd1:1024 /dev/sdk1:1024 /dev/sdg1:1024 /dev/sdp1:1024 /dev/sdb1:1024 /dev/sdr1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sdj1:1024 /dev/sde1:1024 /dev/sdg1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdm1:1024 /dev/sdq1:1024 /dev/sdc1:1024 /dev/sdv1:1024
It probably also works with regular mdadm
mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdo1 /dev/sdd1 /dev/sdk1 /dev/sdg1 /dev/sdp1 /dev/sdb1 /dev/sdr1 /dev/sdf1 /dev/sdh1 /dev/sdj1 /dev/sde1 /dev/sdg1 /dev/sds1 /dev/sdu1 /dev/sdm1 /dev/sdq1 /dev/sdc1 /dev/sdv1
Build WITHOUT a spare, i.e. 18 devices, then add the spare
PROBLEM:
mdadm: failed to open /dev/sdg1 after earlier success - aborting
Fifth Raid Recreate DATA 2015-02-28
in /root/bin
./diskserial2.sh
to see the disk order in the Raid.
mdadm from /builds/mdadm
dqelcpfhmakgsunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdd1:1024 /dev/sdq1:1024 /dev/sde1:1024 /dev/sdl1:1024 /dev/sdc1:1024 /dev/sdp1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sdm1:1024 /dev/sda1:1024 /dev/sdk1:1024 /dev/sdg1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
Build WITHOUT a spare, i.e. 18 devices, then add the spare
Sixth Raid Recreate DATA and DATA2 2015-03-11
3 new but identical SiI SATA RAID PCIe 1x cards with firmware 2.6.18
in /root/bin
./diskserial2.sh
to see the disk order in the Raid.
DATA2 with regular mdadm
ybtxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdb1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
dqemcpfhialgsunrkv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdd1:1024 /dev/sdq1:1024 /dev/sde1:1024 /dev/sdm1:1024 /dev/sdc1:1024 /dev/sdp1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sdi1:1024 /dev/sda1:1024 /dev/sdl1:1024 /dev/sdg1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdk1:1024 /dev/sdv1:1024
Build WITHOUT a spare, i.e. 18 devices, then add the spare
Seventh Raid Recreate DATA and DATA2 2015-03-15
3 new but identical SiI SATA RAID PCIe 1x cards with firmware 2.6.18
in /root/bin
./diskserial2.sh
to see the disk order in the Raid.
DATA2 with regular mdadm
ybixw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdb1 /dev/sdi1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
eqjdcprtgalshunfkv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdq1:1024 /dev/sdj1:1024 /dev/sdd1:1024 /dev/sdc1:1024 /dev/sdp1:1024 /dev/sdr1:1024 /dev/sdt1:1024 /dev/sdg1:1024 /dev/sda1:1024 /dev/sdl1:1024 /dev/sds1:1024 /dev/sdh1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdf1:1024 /dev/sdk1:1024 /dev/sdv1:1024
Build WITHOUT a spare, i.e. 18 devices, then add the spare
Eighth Raid Recreate DATA and DATA2 2015-03-21
DATA2 with regular mdadm
ybtxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdb1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
eqfldpgicakhsunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdq1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdd1:1024 /dev/sdp1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdc1:1024 /dev/sda1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
Ninth Raid Recreate DATA and DATA2 2015-04-22
DATA2 with regular mdadm
yctxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdc1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
eqfldpgimbkhsunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdq1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdd1:1024 /dev/sdp1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdm1:1024 /dev/sdb1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
Tenth Raid Recreate DATA and DATA2 2015-04-24
DATA2 with regular mdadm
yctxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdc1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
eqfldpgimbkhsunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdq1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdd1:1024 /dev/sdp1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdm1:1024 /dev/sdb1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
Eleventh Raid Recreate DATA and DATA2 2015-04-26
DATA2 with regular mdadm
ybtxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdb1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
eqfmcpgidalhsunrkv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdq1:1024 /dev/sdf1:1024 /dev/sdm1:1024 /dev/sdc1:1024 /dev/sdp1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdd1:1024 /dev/sda1:1024 /dev/sdl1:1024 /dev/sdh1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdk1:1024 /dev/sdv1:1024
12. Raid Recreate DATA and DATA2 2015-05-06
DATA2 with regular mdadm
ybtxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdb1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
dqel cpfh makg sunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdd1:1024 /dev/sdq1:1024 /dev/sde1:1024 /dev/sdl1:1024 /dev/sdc1:1024 /dev/sdp1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sdm1:1024 /dev/sda1:1024 /dev/sdk1:1024 /dev/sdg1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
smartctl timeouts added 2015-05-07
/etc/udev/rules.d/ http://article.gmane.org/gmane.linux.raid/48238/match=smartctl+timeouts
The smartctl-timeouts scripts fix commonly mismatching defaults on drives that have no error recovery timeout configured, which has often led to data loss.
To test, extract the files to /etc/udev/rules.d/ and reboot. For me, the rules somehow had no effect without rebooting.
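The core of such rules is giving each drive a sane error recovery timeout and/or raising the kernel's command timeout; a hedged sketch of what a rule of this kind typically does (file name and values are examples, not the actual contents of the downloaded files):
# /etc/udev/rules.d/60-smartctl-timeouts.rules (sketch)
# set a 7 s SCT error recovery timeout on the drive, 180 s command timeout in the kernel
ACTION=="add", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", KERNEL=="sd*", \
  RUN+="/bin/sh -c 'smartctl -l scterc,70,70 /dev/%k; echo 180 > /sys/block/%k/device/timeout'"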
13. Raid Recreate DATA and DATA2 2015-05-11
DATA2 with regular mdadm
xcswv
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdx1 /dev/sdc1 /dev/sds1 /dev/sdw1 /dev/sdv1
DATA with mdadm from /builds/mdadm
epfl dogi bakh rtmqju
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdp1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdd1:1024 /dev/sdo1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sda1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sdr1:1024 /dev/sdt1:1024 /dev/sdm1:1024 /dev/sdq1:1024 /dev/sdj1:1024 /dev/sdu1:1024
14. Raid Recreate DATA and DATA2 2015-05-14
DATA2 with regular mdadm
xbswv
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdx1 /dev/sdb1 /dev/sds1 /dev/sdw1 /dev/sdv1
DATA with mdadm from /builds/mdadm
dpek cofh lajg rtmqiu
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdd1:1024 /dev/sdp1:1024 /dev/sde1:1024 /dev/sdk1:1024 /dev/sdc1:1024 /dev/sdo1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sdl1:1024 /dev/sda1:1024 /dev/sdj1:1024 /dev/sdg1:1024 /dev/sdr1:1024 /dev/sdt1:1024 /dev/sdm1:1024 /dev/sdq1:1024 /dev/sdi1:1024 /dev/sdu1:1024
15. Raid Recreate DATA and DATA2 2015-05-20
DATA2 with regular mdadm
yctxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdc1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
eqfl dpgi mbkh sunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdq1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdd1:1024 /dev/sdp1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdm1:1024 /dev/sdb1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
16. Raid Recreate DATA and DATA2 2015-05-21
DATA2 with regular mdadm
yctxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdc1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
eqfb 4 dpgi 8 jalh 12 sunrkv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdq1:1024 /dev/sdf1:1024 /dev/sdb1:1024 /dev/sdd1:1024 /dev/sdp1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdj1:1024 /dev/sda1:1024 /dev/sdl1:1024 /dev/sdh1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdk1:1024 /dev/sdv1:1024
17. Raid Recreate DATA and DATA2 2015-05-22
DATA2 with regular mdadm
ydtxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdd1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
cqfl 4 epgi 8 mbkh 12 sunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdc1:1024 /dev/sdq1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sde1:1024 /dev/sdp1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdm1:1024 /dev/sdb1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
18. Raid Recreate DATA and DATA2 2015-05-23
DATA2 with regular mdadm
ydtxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdd1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
fqga 4 ephj 8 mcli 12 sunrkv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdf1:1024 /dev/sdq1:1024 /dev/sdg1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdp1:1024 /dev/sdh1:1024 /dev/sdj1:1024 /dev/sdm1:1024 /dev/sdc1:1024 /dev/sdl1:1024 /dev/sdi1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdk1:1024 /dev/sdv1:1024
19. Raid Recreate DATA and DATA2 2015-05-24
DATA2 with regular mdadm
yctxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdc1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
fqgm 4 ephj 8 dbli 12 sunrkv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdf1:1024 /dev/sdq1:1024 /dev/sdg1:1024 /dev/sdm1:1024 /dev/sde1:1024 /dev/sdp1:1024 /dev/sdh1:1024 /dev/sdj1:1024 /dev/sdd1:1024 /dev/sdb1:1024 /dev/sdl1:1024 /dev/sdi1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdk1:1024 /dev/sdv1:1024
20. Raid Recreate DATA and DATA2 2015-05-31
DATA2 with regular mdadm
ydtxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdd1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
eqfl 4 bpgi 8 mckh 12 sunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdq1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdb1:1024 /dev/sdp1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdm1:1024 /dev/sdc1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
21. Raid Recreate DATA and DATA2 2015-06-06
DATA2 with regular mdadm
ydtxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdd1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
eqfl 4 bpgi 8 mckh 12 sunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdq1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdb1:1024 /dev/sdp1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdm1:1024 /dev/sdc1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
22. Raid Recreate DATA and DATA2 2015-06-09
DATA2 with regular mdadm
yctxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdc1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
eqfl 4 dpgi 8 mbkh 12 sunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdq1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdd1:1024 /dev/sdp1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdm1:1024 /dev/sdb1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
23. Raid Recreate DATA and DATA2 2015-06-11
DATA2 with regular mdadm
yctxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdc1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
eqfl 4 dpgi 8 mbkh 12 sunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdq1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdd1:1024 /dev/sdp1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdm1:1024 /dev/sdb1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
24. Raid Recreate DATA and DATA2 2015-06-12
DATA2 with regular mdadm
yctxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdc1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
eqfl 4 dpgi 8 mbkh 12 sunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdq1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdd1:1024 /dev/sdp1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdm1:1024 /dev/sdb1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
25. Raid Recreate DATA and DATA2 2015-06-20; lately this somehow only happens when the door was opened/moved
DATA2 with regular mdadm
ydtxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdd1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
eqfl 4 bpgi 8 mckh 12 sunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdq1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdb1:1024 /dev/sdp1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdm1:1024 /dev/sdc1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
26. Raid Recreate DATA and DATA2 2015-06-27; lately this somehow only happens when the door was opened/moved
DATA2 with regular mdadm
ydtxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdd1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
cqfl 4 epgi 8 mbkh 12 sunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdc1:1024 /dev/sdq1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sde1:1024 /dev/sdp1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdm1:1024 /dev/sdb1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
27. Raid Recreate DATA and DATA2 2015-06-28
DATA2 with regular mdadm
ybtxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdb1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
dqel 4 cpfh 8 makg 12 sunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdd1:1024 /dev/sdq1:1024 /dev/sde1:1024 /dev/sdl1:1024 /dev/sdc1:1024 /dev/sdp1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sdm1:1024 /dev/sda1:1024 /dev/sdk1:1024 /dev/sdg1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
28. Raid Recreate DATA and DATA2 2015-07-14
DATA2 with regular mdadm
ydtxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdd1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
fqgm 4 ephj 8 bcli 12 sunrkv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdf1:1024 /dev/sdq1:1024 /dev/sdg1:1024 /dev/sdm1:1024 /dev/sde1:1024 /dev/sdp1:1024 /dev/sdh1:1024 /dev/sdj1:1024 /dev/sdb1:1024 /dev/sdc1:1024 /dev/sdl1:1024 /dev/sdi1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdk1:1024 /dev/sdv1:1024
29. Raid Recreate DATA and DATA2 2015-07-19
DATA2 with regular mdadm
xbswv
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdx1 /dev/sdb1 /dev/sds1 /dev/sdw1 /dev/sdv1
DATA with mdadm from /builds/mdadm
dpek 4 cofh 8 lajg 12 rtmqiu
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdd1:1024 /dev/sdp1:1024 /dev/sde1:1024 /dev/sdk1:1024 /dev/sdc1:1024 /dev/sdo1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sdl1:1024 /dev/sda1:1024 /dev/sdj1:1024 /dev/sdg1:1024 /dev/sdr1:1024 /dev/sdt1:1024 /dev/sdm1:1024 /dev/sdq1:1024 /dev/sdi1:1024 /dev/sdu1:1024
30. Raid Recreate DATA and DATA2 2015-07-26
DATA2 with regular mdadm
yctxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdc1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
eqfl 4 dpgi 8 mbkh 12 sunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdq1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdd1:1024 /dev/sdp1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdm1:1024 /dev/sdb1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
31. Raid Recreate DATA and DATA2 2015-07-27
DATA2 with regular mdadm
ybtxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdb1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
dqek 4 cpfh 8 lajg 12 sunriv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdd1:1024 /dev/sdq1:1024 /dev/sde1:1024 /dev/sdk1:1024 /dev/sdc1:1024 /dev/sdp1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sdl1:1024 /dev/sda1:1024 /dev/sdj1:1024 /dev/sdg1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdi1:1024 /dev/sdv1:1024
32. Raid Recreate DATA and DATA2 2015-08-09
DATA2 with regular mdadm
xbswv
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdx1 /dev/sdb1 /dev/sds1 /dev/sdw1 /dev/sdv1
DATA with mdadm from /builds/mdadm
epfl 4 dogi 8 cakh 12 rtmqju
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdp1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdd1:1024 /dev/sdo1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdc1:1024 /dev/sda1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sdr1:1024 /dev/sdt1:1024 /dev/sdm1:1024 /dev/sdq1:1024 /dev/sdj1:1024 /dev/sdu1:1024
33. Raid Recreate DATA and DATA2 2015-08-20
DATA2 with regular mdadm
ybtxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdb1 /dev/sdt1 /dev/sdx1 /dev/sdw1
DATA with mdadm from /builds/mdadm
dqel 4 cpfh 8 makg 12 sunrjv
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdd1:1024 /dev/sdq1:1024 /dev/sde1:1024 /dev/sdl1:1024 /dev/sdc1:1024 /dev/sdp1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sdm1:1024 /dev/sda1:1024 /dev/sdk1:1024 /dev/sdg1:1024 /dev/sds1:1024 /dev/sdu1:1024 /dev/sdn1:1024 /dev/sdr1:1024 /dev/sdj1:1024 /dev/sdv1:1024
34. Raid Recreate DATA2 ONLY 2015-09-01 AFTER PSU SWAP
DATA2 with regular mdadm
ybtxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdb1 /dev/sdt1 /dev/sdx1 /dev/sdw1
35. Raid Recreate DATA2 ONLY 2015-09-01 AFTER PSU SWAP, after an interrupted DATA2 Raid6 grow
The grow was: mdadm --grow --raid-devices=6 /dev/md126 --backup-file=/home/gagi/mda126backup_20150911
DATA2 with regular mdadm
ydtxw
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=5 /dev/sdy1 /dev/sdd1 /dev/sdt1 /dev/sdx1 /dev/sdw1
sdo1 was then the newly added device
Attempt with --update=revert-reshape
"--assemble --update=revert-reshape" can be used to undo a reshape that has just been started but isn't really wanted. This is very new and while it passes basic tests it cannot be guaranteed.
mdadm --assemble --update=revert-reshape /dev/md126 /dev/sdy1 /dev/sdd1 /dev/sdt1 /dev/sdx1 /dev/sdw1 /dev/sdo1 --backup-file=/home/gagi/mda126backup_20150911
The md126 reshape starts up again normally as Raid6 and has also resumed (at about 1.5%) :-) In this state md126 can be decrypted and mounted, the data is there. So everything looks good. The reshape will take about another 3400 minutes, i.e. roughly 2.36 days ;-)
36. Raid Recreate DATA and DATA2 2016-01-19 after a new disk was inserted while the system was running :-(
DATA2 with regular mdadm
aa cvz 4 yqo
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=7 /dev/sdaa1 /dev/sdc1 /dev/sdv1 /dev/sdz1 /dev/sdy1 /dev/sdq1 /dev/sdo1
DATA with mdadm from /builds/mdadm
esfl 4 drgi 8 mbkh 12 uwptjx
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sds1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdd1:1024 /dev/sdr1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdm1:1024 /dev/sdb1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sdu1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdt1:1024 /dev/sdj1:1024 /dev/sdx1:1024
37. Raid Recreate DATA and DATA2 2016-07-07 after the Ultrastar indexing
DATA2 with regular mdadm
ab bw aa 4 zrpo 8
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=8 /dev/sdab1 /dev/sdb1 /dev/sdw1 /dev/sdaa1 /dev/sdz1 /dev/sdr1 /dev/sdp1 /dev/sdo1
DATA with mdadm from /builds/mdadm
dtfm 4 csgi 8 nalh 12 vxquky
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdd1:1024 /dev/sdt1:1024 /dev/sdf1:1024 /dev/sdm1:1024 /dev/sdc1:1024 /dev/sds1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdn1:1024 /dev/sda1:1024 /dev/sdl1:1024 /dev/sdh1:1024 /dev/sdv1:1024 /dev/sdx1:1024 /dev/sdq1:1024 /dev/sdu1:1024 /dev/sdk1:1024 /dev/sdy1:1024
38. Raid Recreate DATA and DATA2 2016-07-07 after the Ultrastar indexing
DATA2 with regular mdadm
aa bvz 4 yqon 8
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=8 /dev/sdaa1 /dev/sdb1 /dev/sdv1 /dev/sdz1 /dev/sdy1 /dev/sdq1 /dev/sdo1 /dev/sdn1
DATA with mdadm from /builds/mdadm
dsfl 4 crgi 8 makh 12 uwptjx
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdd1:1024 /dev/sds1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdc1:1024 /dev/sdr1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdm1:1024 /dev/sda1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sdu1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdt1:1024 /dev/sdj1:1024 /dev/sdx1:1024
39. Raid Recreate DATA and DATA2 2016-07-23
DATA2 with regular mdadm
ab bw aa 4 zrpo 8
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=8 /dev/sdab1 /dev/sdb1 /dev/sdw1 /dev/sdaa1 /dev/sdz1 /dev/sdr1 /dev/sdp1 /dev/sdo1
DATA with mdadm from /builds/mdadm
dtfl 4 csgi 8 makh 12 vxqujy
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdd1:1024 /dev/sdt1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdc1:1024 /dev/sds1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdm1:1024 /dev/sda1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sdv1:1024 /dev/sdx1:1024 /dev/sdq1:1024 /dev/sdu1:1024 /dev/sdj1:1024 /dev/sdy1:1024
40. Raid Recreate DATA and DATA2 2016-07-24
DATA2 with regular mdadm
ab cw aa 4 zrpo 8
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=8 /dev/sdab1 /dev/sdc1 /dev/sdw1 /dev/sdaa1 /dev/sdz1 /dev/sdr1 /dev/sdp1 /dev/sdo1
DATA with mdadm from /builds/mdadm
etgm 4 dshj 8 nbli 12 vxquky
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sde1:1024 /dev/sdt1:1024 /dev/sdg1:1024 /dev/sdm1:1024 /dev/sdd1:1024 /dev/sds1:1024 /dev/sdh1:1024 /dev/sdj1:1024 /dev/sdn1:1024 /dev/sdb1:1024 /dev/sdl1:1024 /dev/sdi1:1024 /dev/sdv1:1024 /dev/sdx1:1024 /dev/sdq1:1024 /dev/sdu1:1024 /dev/sdk1:1024 /dev/sdy1:1024
41. Raid Recreate DATA and DATA2 2016-07-26
DATA2 with regular mdadm
aa bvz 4 yqon 8
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=8 /dev/sdaa1 /dev/sdb1 /dev/sdv1 /dev/sdz1 /dev/sdy1 /dev/sdq1 /dev/sdo1 /dev/sdn1
DATA with mdadm from /builds/mdadm
dsfl 4 crgi 8 makh 12 uwptjx
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdd1:1024 /dev/sds1:1024 /dev/sdf1:1024 /dev/sdl1:1024 /dev/sdc1:1024 /dev/sdr1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdm1:1024 /dev/sda1:1024 /dev/sdk1:1024 /dev/sdh1:1024 /dev/sdu1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdt1:1024 /dev/sdj1:1024 /dev/sdx1:1024
42. Raid Recreate DATA2 2016-08-11 SATA CARDS REARRANGED, NEW SCRIPT
DATA2 with regular mdadm
mvhl 4 kcsn 8
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=8 /dev/sdm1 /dev/sdv1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1
43. Raid Recreate DATA2 ONLY 2016-08-18 after an interrupted DATA2 Raid6 grow
The grow was: mdadm --grow --raid-devices=9 /dev/md126 --backup-file=/home/gagi/mda126backup
Then at about 45% the system stalled (the room was hot). Reboot -> md126 with 9 disks and clean, but read-only and no longer growing
DATA2 with regular mdadm
mvhl 4 kcsn
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=8 /dev/sdm1 /dev/sdv1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1
So only with 8 devices -> MISTAKE! and a stupid idea. cryptsetup luksOpen still works correctly, but filesystem errors when mounting
The newly added device was sdz1
Attempt with --update=revert-reshape
"--assemble --update=revert-reshape" can be used to undo a reshape that has just been started but isn't really wanted. This is very new and while it passes basic tests it cannot be guaranteed.
mdadm --assemble --update=revert-reshape /dev/md126 /dev/sdm1 /dev/sdv1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdz1
Doesn't work, the 9th and new device sdz1 has a different superblock!
cryptsetup luksClose cr_md125
Create again with 9, even though the 9th device was surely not yet properly grown into the Raid6. But then the filesystem might fit -> mount -> data is there
lugk 4 jbrm 8 y
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=9 /dev/sdl1 /dev/sdu1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sdr1 /dev/sdm1 /dev/sdy1
mount: mounting /dev/mapper/cr_md126 on /data2 failed: Structure needs cleaning
Better, but something is still odd. -> XFS check
xfs_repair /dev/mapper/cr_md126
Metadata corruption detected at xfs_bmbt block 0x60a0a30f8/0x1000
Metadata corruption detected at xfs_bmbt block 0x60a0a30f8/0x1000
bad magic # 0xc76d91a7 in inode 32617927112 (data fork) bmbt block 6322787567
bad data fork in inode 32617927112
cleared inode 32617927112
Then many more errors and quite a bit in lost+found (above all a lot of the new Singstar song archive, but luckily there is still a backup of that on the 750GB ToughDrive). BUT: md126 can be decrypted with crypt AND then also mounted -> so most of the data is back!
stop again and then once more
mvhl 4 kcsn 8 z
mdadm --assemble --update=revert-reshape /dev/md126 /dev/sdm1 /dev/sdv1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdz1
Doesn't work! :-(
As it is, mounting does work, but apparently a lot of the data is corrupt! :-(((
stop md126
build with only the original 8 devices
mvhl 4 kcsn
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=8 /dev/sdm1 /dev/sdv1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1
xfs_repair
Lots of errors -> uh oh :-(( -> damn, everything gone except for 3TB of lost+found
IMPORTANT: THE WHOLE INTERRUPTED GROW PROCESS WOULD PROBABLY HAVE SIMPLY CONTINUED NORMALLY AFTER MOUNTING (!!!)
Once more with 9, but with the 9th missing
mvhl 4 kcsn 8 z
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=9 /dev/sdm1 /dev/sdv1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 missing
-> opens and gets mounted -> quite a bit is back -> xfs_repair (3x) -> then no more errors detected -> a lot of the old data is OK again, newer data (from about 2015-09-19 on) mostly corrupt :-(
Added the 9th device back in -> it gets re-integrated
44. Raid ReSync stop DATA 2016-11-01
After a hang, DATA (still clean) wanted to resync, which then got stuck again after some time (several hours, at about 50%) without any log entry. This also did not work over several attempts (about 6 of them).
Stop a running resync:
echo frozen > /sys/block/md125/md/sync_action
Mark the stopped resync as complete:
echo none > /sys/block/md125/md/resync_start
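Whether the array then reports clean again can be checked with the usual status commands (just for completeness, not from the original notes):
cat /proc/mdstat
mdadm --detail /dev/md125 | grep -i state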
Afterwards also an xfs_repair
xfs_repair /dev/mapper/cr_md125
Disabled the cron job for the monthly auto-sync
45. Raid Recreate DATA and DATA2 2016-11-08
DATA2 with regular mdadm
mwhl 4 kcsn 8 aa
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=9 /dev/sdm1 /dev/sdw1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdaa1
DATA with mdadm from /builds/mdadm
yezq 4 xdu ac 8 rvp ab 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdy1:1024 /dev/sde1:1024 /dev/sdz1:1024 /dev/sdq1:1024 /dev/sdx1:1024 /dev/sdd1:1024 /dev/sdu1:1024 /dev/sdac1:1024 /dev/sdr1:1024 /dev/sdv1:1024 /dev/sdp1:1024 /dev/sdab1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
46. Raid Recreate DATA and DATA2 2016-12-05
DATA2 with regular mdadm
lwgk 4 jbsn 8 aa
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=9 /dev/sdl1 /dev/sdw1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sds1 /dev/sdn1 /dev/sdaa1
DATA with mdadm from /builds/mdadm
ydzq 4 xcu ac 8 rvp ab 12 fhaeoi
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdy1:1024 /dev/sdd1:1024 /dev/sdz1:1024 /dev/sdq1:1024 /dev/sdx1:1024 /dev/sdc1:1024 /dev/sdu1:1024 /dev/sdac1:1024 /dev/sdr1:1024 /dev/sdv1:1024 /dev/sdp1:1024 /dev/sdab1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdo1:1024 /dev/sdi1:1024
47. Raid Recreate DATA and DATA2 2017-02-07
DATA2 with regular mdadm
lwgk 4 jbrm 8 aa
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=9 /dev/sdl1 /dev/sdw1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sdr1 /dev/sdm1 /dev/sdaa1
DATA with mdadm from /builds/mdadm
ydzp 4 xcu ac 8 qvo ab 12 fhaeni
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdy1:1024 /dev/sdd1:1024 /dev/sdz1:1024 /dev/sdp1:1024 /dev/sdx1:1024 /dev/sdc1:1024 /dev/sdu1:1024 /dev/sdac1:1024 /dev/sdq1:1024 /dev/sdv1:1024 /dev/sdo1:1024 /dev/sdab1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdn1:1024 /dev/sdi1:1024
48. Raid Recreate DATA and DATA2 2017-02-24
DATA2 with regular mdadm
mwhl 4 kcsn 8 aa
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=9 /dev/sdm1 /dev/sdw1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdaa1
DATA with mdadm from /builds/mdadm
yezq 4 xdu ac 8 rvp ab 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdy1:1024 /dev/sde1:1024 /dev/sdz1:1024 /dev/sdq1:1024 /dev/sdx1:1024 /dev/sdd1:1024 /dev/sdu1:1024 /dev/sdac1:1024 /dev/sdr1:1024 /dev/sdv1:1024 /dev/sdp1:1024 /dev/sdab1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
49. Raid Recreate DATA 2017-03-24
In the morning the server was completely off. Then booted it up in the afternoon, and 2 disks "got stuck" in the process -> DATA with 2 missing disks, clean but degraded
DATA with mdadm from /builds/mdadm
yezq 4 xdu ac 8 rvp ab 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdy1:1024 /dev/sde1:1024 /dev/sdz1:1024 /dev/sdq1:1024 /dev/sdx1:1024 /dev/sdd1:1024 /dev/sdu1:1024 /dev/sdac1:1024 /dev/sdr1:1024 /dev/sdv1:1024 /dev/sdp1:1024 /dev/sdab1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
50. Raid Recreate DATA and DATA2 2017-04-22
DATA2 with regular mdadm
lwgk 4 jbsn 8 aa
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=9 /dev/sdl1 /dev/sdw1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sds1 /dev/sdn1 /dev/sdaa1
DATA with mdadm from /builds/mdadm
ydzq 4 xcu ac 8 rvp ab 12 fhaeoi
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdy1:1024 /dev/sdd1:1024 /dev/sdz1:1024 /dev/sdq1:1024 /dev/sdx1:1024 /dev/sdc1:1024 /dev/sdu1:1024 /dev/sdac1:1024 /dev/sdr1:1024 /dev/sdv1:1024 /dev/sdp1:1024 /dev/sdab1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdo1:1024 /dev/sdi1:1024
51. Raid Recreate DATA and DATA2 2017-05-09 after UPS alarm and shutdown
29 disks in total
DATA2 with regular mdadm
lvgk 4 jbrm 8 z
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=9 /dev/sdl1 /dev/sdv1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sdr1 /dev/sdm1 /dev/sdz1
DATA with mdadm from /builds/mdadm
xdyp 4 wct ab 8 quo aa 12 fhaeni
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdx1:1024 /dev/sdd1:1024 /dev/sdy1:1024 /dev/sdp1:1024 /dev/sdw1:1024 /dev/sdc1:1024 /dev/sdt1:1024 /dev/sdab1:1024 /dev/sdq1:1024 /dev/sdu1:1024 /dev/sdo1:1024 /dev/sdaa1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdn1:1024 /dev/sdi1:1024
52. Raid Recreate DATA and DATA2 2017-05-12
29 disks in total
DATA2 with regular mdadm
lwgk 4 jbrm 8 aa
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=9 /dev/sdl1 /dev/sdw1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sdr1 /dev/sdm1 /dev/sdaa1
DATA with mdadm from /builds/mdadm
ydzp 4 xcu ac 8 qvo ab 12 fhaeni
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdy1:1024 /dev/sdd1:1024 /dev/sdz1:1024 /dev/sdp1:1024 /dev/sdx1:1024 /dev/sdc1:1024 /dev/sdu1:1024 /dev/sdac1:1024 /dev/sdq1:1024 /dev/sdv1:1024 /dev/sdo1:1024 /dev/sdab1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdn1:1024 /dev/sdi1:1024
53. Raid Recreate DATA and DATA2 2017-05-13
29 disks in total
DATA2 with regular mdadm
mwhl 4 kcsn 8 aa
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=9 /dev/sdm1 /dev/sdw1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdaa1
DATA with mdadm from /builds/mdadm
yezq 4 xdu ac 8 rvp ab 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdy1:1024 /dev/sde1:1024 /dev/sdz1:1024 /dev/sdq1:1024 /dev/sdx1:1024 /dev/sdd1:1024 /dev/sdu1:1024 /dev/sdac1:1024 /dev/sdr1:1024 /dev/sdv1:1024 /dev/sdp1:1024 /dev/sdab1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
54. Raid Recreate DATA2 2017-05-19 after Milan
29 disks in total
DATA2 with regular mdadm
mwhl 4 kcsn 8 aa
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=9 /dev/sdm1 /dev/sdw1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdaa1
55. Raid Recreate DATA and DATA2 2017-05-21
29 disks in total
DATA2 with regular mdadm
mwhl 4 kcsn 8 aa
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=9 /dev/sdm1 /dev/sdw1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdaa1
DATA with mdadm from /builds/mdadm
yezq 4 xdu ac 8 rvp ab 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdy1:1024 /dev/sde1:1024 /dev/sdz1:1024 /dev/sdq1:1024 /dev/sdx1:1024 /dev/sdd1:1024 /dev/sdu1:1024 /dev/sdac1:1024 /dev/sdr1:1024 /dev/sdv1:1024 /dev/sdp1:1024 /dev/sdab1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
56. Raid Recreate DATA and DATA2 2017-05-22
29 disks in total
DATA2 with regular mdadm
mwhl 4 kcsn 8 aa
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=9 /dev/sdm1 /dev/sdw1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdaa1
DATA with mdadm from /builds/mdadm
yezq 4 xdu ac 8 rvp ab 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdy1:1024 /dev/sde1:1024 /dev/sdz1:1024 /dev/sdq1:1024 /dev/sdx1:1024 /dev/sdd1:1024 /dev/sdu1:1024 /dev/sdac1:1024 /dev/sdr1:1024 /dev/sdv1:1024 /dev/sdp1:1024 /dev/sdab1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
57. Raid Recreate DATA and DATA2 2017-05-26 after RöKo
29 disks in total
DATA2 with regular mdadm
mwhl 4 kcsn 8 aa
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=9 /dev/sdm1 /dev/sdw1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdaa1
DATA with mdadm from /builds/mdadm
yezq 4 xdu ac 8 rvp ab 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdy1:1024 /dev/sde1:1024 /dev/sdz1:1024 /dev/sdq1:1024 /dev/sdx1:1024 /dev/sdd1:1024 /dev/sdu1:1024 /dev/sdac1:1024 /dev/sdr1:1024 /dev/sdv1:1024 /dev/sdp1:1024 /dev/sdab1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
58. Raid Recreate DATA and DATA2 2017-07-07 after a gaming session
29 disks in total
DATA2 with regular mdadm
lvgk 4 jbrm 8 z
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=9 /dev/sdl1 /dev/sdv1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sdr1 /dev/sdm1 /dev/sdz1
DATA with mdadm from /builds/mdadm
xdyp 4 wct ab 8 quo aa 12 fhaeni
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdx1:1024 /dev/sdd1:1024 /dev/sdy1:1024 /dev/sdp1:1024 /dev/sdw1:1024 /dev/sdc1:1024 /dev/sdt1:1024 /dev/sdab1:1024 /dev/sdq1:1024 /dev/sdu1:1024 /dev/sdo1:1024 /dev/sdaa1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdn1:1024 /dev/sdi1:1024
59. Raid Recreate DATA and DATA2 2017-08-11
30 disks in total
DATA2 with regular mdadm
mxhl 4 kcsn 8 ab u
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdm1 /dev/sdx1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdab1 /dev/sdu1
DATA with mdadm from /builds/mdadm
ze aa q 4 ydv ad 8 rwp ac 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdz1:1024 /dev/sde1:1024 /dev/sdaa1:1024 /dev/sdq1:1024 /dev/sdy1:1024 /dev/sdd1:1024 /dev/sdv1:1024 /dev/sdad1:1024 /dev/sdr1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdac1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
60. Raid Recreate DATA and DATA2 2017-08-13
30 disks in total
DATA2 with regular mdadm
mxhl 4 kcsn 8 ab u
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdm1 /dev/sdx1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdab1 /dev/sdu1
DATA with mdadm from /builds/mdadm
ze aa q 4 ydv ad 8 rwp ac 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdz1:1024 /dev/sde1:1024 /dev/sdaa1:1024 /dev/sdq1:1024 /dev/sdy1:1024 /dev/sdd1:1024 /dev/sdv1:1024 /dev/sdad1:1024 /dev/sdr1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdac1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
61. Raid Recreate DATA and DATA2 2017-08-16
30 disks in total
DATA2 with regular mdadm
lxgk 4 jbsn 8 ab u
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdl1 /dev/sdx1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sds1 /dev/sdn1 /dev/sdab1 /dev/sdu1
DATA with mdadm from /builds/mdadm
zd aa q 4 ycv ad 8 rwp ac 12 fhaeoi
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdz1:1024 /dev/sdd1:1024 /dev/sdaa1:1024 /dev/sdq1:1024 /dev/sdy1:1024 /dev/sdc1:1024 /dev/sdv1:1024 /dev/sdad1:1024 /dev/sdr1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdac1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdo1:1024 /dev/sdi1:1024
62. Raid Recreate DATA and DATA2 2017-09-10 after the Berlin trip
30 disks in total
DATA2 with regular mdadm
mxhl 4 kcsn 8 ab u
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdm1 /dev/sdx1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdab1 /dev/sdu1
DATA with mdadm from /builds/mdadm
ze aa q 4 ydv ad 8 rwp ac 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdz1:1024 /dev/sde1:1024 /dev/sdaa1:1024 /dev/sdq1:1024 /dev/sdy1:1024 /dev/sdd1:1024 /dev/sdv1:1024 /dev/sdad1:1024 /dev/sdr1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdac1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
63. Raid Recreate DATA and DATA2 2017-09-11 after Berlin trip 2
30 disks in total
DATA2 with regular mdadm
lwgk 4 jbrm 8 aa t
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdl1 /dev/sdw1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sdr1 /dev/sdm1 /dev/sdaa1 /dev/sdt1
DATA with mdadm from /builds/mdadm
ydzp 4 xcu ac 8 qvo ab 12 fhaeni
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdy1:1024 /dev/sdd1:1024 /dev/sdz1:1024 /dev/sdp1:1024 /dev/sdx1:1024 /dev/sdc1:1024 /dev/sdu1:1024 /dev/sdac1:1024 /dev/sdq1:1024 /dev/sdv1:1024 /dev/sdo1:1024 /dev/sdab1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdn1:1024 /dev/sdi1:1024
64. Raid Recreate DATA and DATA2 2017-10-15, already with the new UPS
30 disks in total
DATA2 with regular mdadm
mxhl 4 kcsn 8 ab u
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdm1 /dev/sdx1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdab1 /dev/sdu1
DATA with mdadm from /builds/mdadm
ze aa q 4 ydv ad 8 rwp ac 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdz1:1024 /dev/sde1:1024 /dev/sdaa1:1024 /dev/sdq1:1024 /dev/sdy1:1024 /dev/sdd1:1024 /dev/sdv1:1024 /dev/sdad1:1024 /dev/sdr1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdac1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
65. Raid Recreate DATA and DATA2 2017-11-05
30 disks in total
DATA2 with regular mdadm
mxhl 4 kcsn 8 ab u
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdm1 /dev/sdx1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdab1 /dev/sdu1
DATA with mdadm from /builds/mdadm
ze aa q 4 ydv ad 8 rwp ac 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdz1:1024 /dev/sde1:1024 /dev/sdaa1:1024 /dev/sdq1:1024 /dev/sdy1:1024 /dev/sdd1:1024 /dev/sdv1:1024 /dev/sdad1:1024 /dev/sdr1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdac1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
66. Raid Recreate DATA and DATA2 2017-11-07
30 disks in total
DATA2 with regular mdadm
lwgk 4 jbrm 8 aa t
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdl1 /dev/sdw1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sdr1 /dev/sdm1 /dev/sdaa1 /dev/sdt1
DATA with mdadm from /builds/mdadm
ydzp 4 xcu ac 8 qvo ab 12 fhaeni
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdy1:1024 /dev/sdd1:1024 /dev/sdz1:1024 /dev/sdp1:1024 /dev/sdx1:1024 /dev/sdc1:1024 /dev/sdu1:1024 /dev/sdac1:1024 /dev/sdq1:1024 /dev/sdv1:1024 /dev/sdo1:1024 /dev/sdab1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdn1:1024 /dev/sdi1:1024
67. Raid Recreate DATA and DATA2 2017-11-13
30 disks in total
DATA2 with regular mdadm
mxhl 4 kcsn 8 ab u
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdm1 /dev/sdx1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdab1 /dev/sdu1
DATA with mdadm from /builds/mdadm
ze aa q 4 ydv ad 8 rwp ac 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdz1:1024 /dev/sde1:1024 /dev/sdaa1:1024 /dev/sdq1:1024 /dev/sdy1:1024 /dev/sdd1:1024 /dev/sdv1:1024 /dev/sdad1:1024 /dev/sdr1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdac1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
68. Raid Recreate DATA and DATA2 2017-11-19
30 disks in total
DATA2 with regular mdadm
lxgk 4 jbsn 8 ab u
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdl1 /dev/sdx1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sds1 /dev/sdn1 /dev/sdab1 /dev/sdu1
DATA with mdadm from /builds/mdadm
zd aa q 4 ycv ad 8 rwp ac 12 fhaeoi
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdz1:1024 /dev/sdd1:1024 /dev/sdaa1:1024 /dev/sdq1:1024 /dev/sdy1:1024 /dev/sdc1:1024 /dev/sdv1:1024 /dev/sdad1:1024 /dev/sdr1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdac1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdo1:1024 /dev/sdi1:1024
69. Raid Recreate DATA and DATA2 2017-12-07
30 disks in total
DATA2 with regular mdadm
lxgk 4 jbsn 8 ab u
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdl1 /dev/sdx1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sds1 /dev/sdn1 /dev/sdab1 /dev/sdu1
DATA with mdadm from /builds/mdadm
zd aa q 4 ycv ad 8 rwp ac 12 fhaeoi
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdz1:1024 /dev/sdd1:1024 /dev/sdaa1:1024 /dev/sdq1:1024 /dev/sdy1:1024 /dev/sdc1:1024 /dev/sdv1:1024 /dev/sdad1:1024 /dev/sdr1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdac1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdo1:1024 /dev/sdi1:1024
70. Raid Recreate DATA and DATA2 2017-12-22
30 disks in total
DATA2 with the standard mdadm
lwgk 4 jbrm 8 aa t
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdl1 /dev/sdw1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sdr1 /dev/sdm1 /dev/sdaa1 /dev/sdt1
DATA with the mdadm build from /builds/mdadm
ydzp 4 xcu ac 8 qvo ab 12 fhaeni
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdy1:1024 /dev/sdd1:1024 /dev/sdz1:1024 /dev/sdp1:1024 /dev/sdx1:1024 /dev/sdc1:1024 /dev/sdu1:1024 /dev/sdac1:1024 /dev/sdq1:1024 /dev/sdv1:1024 /dev/sdo1:1024 /dev/sdab1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdn1:1024 /dev/sdi1:1024
71. Raid Recreate DATA and DATA2 2017-12-27
30 disks in total
DATA2 with the standard mdadm
mxhl 4 kdsn 8 ab u
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdm1 /dev/sdx1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdd1 /dev/sds1 /dev/sdn1 /dev/sdab1 /dev/sdu1
DATA with the mdadm build from /builds/mdadm
zc aa q 4 yev ad 8 rwp ac 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdz1:1024 /dev/sdc1:1024 /dev/sdaa1:1024 /dev/sdq1:1024 /dev/sdy1:1024 /dev/sde1:1024 /dev/sdv1:1024 /dev/sdad1:1024 /dev/sdr1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdac1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
72. Raid Recreate DATA and DATA2 2018-01-04
30 disks in total
DATA2 with the standard mdadm
lxgk 4 jbsn 8 ab u
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdl1 /dev/sdx1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sds1 /dev/sdn1 /dev/sdab1 /dev/sdu1
DATA with the mdadm build from /builds/mdadm
zd aa q 4 ycv ad 8 rwp ac 12 fhaeoi
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdz1:1024 /dev/sdd1:1024 /dev/sdaa1:1024 /dev/sdq1:1024 /dev/sdy1:1024 /dev/sdc1:1024 /dev/sdv1:1024 /dev/sdad1:1024 /dev/sdr1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdac1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdo1:1024 /dev/sdi1:1024
73. Raid Recreate DATA and DATA2 2018-01-06 after the 40er gaming session
30 disks in total
DATA2 with the standard mdadm
lxgk 4 jbsn 8 ab u
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdl1 /dev/sdx1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sds1 /dev/sdn1 /dev/sdab1 /dev/sdu1
DATA with the mdadm build from /builds/mdadm
zd aa q 4 ycv ad 8 rwp ac 12 fhaeoi
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdz1:1024 /dev/sdd1:1024 /dev/sdaa1:1024 /dev/sdq1:1024 /dev/sdy1:1024 /dev/sdc1:1024 /dev/sdv1:1024 /dev/sdad1:1024 /dev/sdr1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdac1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdo1:1024 /dev/sdi1:1024
74. Raid Recreate DATA and DATA2 2018-01-15 after Alex's visit, closet door closed
30 disks in total
DATA2 with the standard mdadm
mxhl 4 kcsn 8 ab u
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdm1 /dev/sdx1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdab1 /dev/sdu1
DATA with the mdadm build from /builds/mdadm
ze aa q 4 ydv ad 8 rwp ac 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdz1:1024 /dev/sde1:1024 /dev/sdaa1:1024 /dev/sdq1:1024 /dev/sdy1:1024 /dev/sdd1:1024 /dev/sdv1:1024 /dev/sdad1:1024 /dev/sdr1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdac1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
75. Raid Recreate DATA and DATA2 2018-01-19
30 disks in total
DATA2 with the standard mdadm
lxgk 4 jbrm 8 ab t
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdl1 /dev/sdx1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sdr1 /dev/sdm1 /dev/sdab1 /dev/sdt1
DATA with the mdadm build from /builds/mdadm
zd aa p 4 ycv ad 8 qwo ac 12 fhaeni
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdz1:1024 /dev/sdd1:1024 /dev/sdaa1:1024 /dev/sdp1:1024 /dev/sdy1:1024 /dev/sdc1:1024 /dev/sdv1:1024 /dev/sdad1:1024 /dev/sdq1:1024 /dev/sdw1:1024 /dev/sdo1:1024 /dev/sdac1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdn1:1024 /dev/sdi1:1024
76. Raid Recreate DATA and DATA2 2018-01-21
30 disks in total
DATA2 with the standard mdadm
mxhl 4 kcsn 8 ab u
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdm1 /dev/sdx1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdab1 /dev/sdu1
DATA with the mdadm build from /builds/mdadm
ze aa q 4 ydv ad 8 rwp ac 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdz1:1024 /dev/sde1:1024 /dev/sdaa1:1024 /dev/sdq1:1024 /dev/sdy1:1024 /dev/sdd1:1024 /dev/sdv1:1024 /dev/sdad1:1024 /dev/sdr1:1024 /dev/sdw1:1024 /dev/sdp1:1024 /dev/sdac1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
77. Raid Recreate DATA and DATA2 2018-01-24
30 disks in total
DATA2 with the standard mdadm
lwgk 4 jbrm 8 aa t
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=10 /dev/sdl1 /dev/sdw1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sdr1 /dev/sdm1 /dev/sdaa1 /dev/sdt1
DATA with the mdadm build from /builds/mdadm
ydzp 4 xcu ac 8 qvo ab 12 fhaeni
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdy1:1024 /dev/sdd1:1024 /dev/sdz1:1024 /dev/sdp1:1024 /dev/sdx1:1024 /dev/sdc1:1024 /dev/sdu1:1024 /dev/sdac1:1024 /dev/sdq1:1024 /dev/sdv1:1024 /dev/sdo1:1024 /dev/sdab1:1024 /dev/sdf1:1024 /dev/sdh1:1024 /dev/sda1:1024 /dev/sde1:1024 /dev/sdn1:1024 /dev/sdi1:1024
78. Raid Recreate DATA and DATA2 2018-02-28
32 disks in total
DATA2 with the standard mdadm
myhl 4 kcsn 8 ac ut
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=11 /dev/sdm1 /dev/sdy1 /dev/sdh1 /dev/sdl1 /dev/sdk1 /dev/sdc1 /dev/sds1 /dev/sdn1 /dev/sdac1 /dev/sdu1 /dev/sdt1
DATA with the mdadm build from /builds/mdadm
aa e ab q 4 zdv ae 8 rxp ad 12 gibfoj
./mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 /dev/sdaa1:1024 /dev/sde1:1024 /dev/sdab1:1024 /dev/sdq1:1024 /dev/sdz1:1024 /dev/sdd1:1024 /dev/sdv1:1024 /dev/sdae1:1024 /dev/sdr1:1024 /dev/sdx1:1024 /dev/sdp1:1024 /dev/sdad1:1024 /dev/sdg1:1024 /dev/sdi1:1024 /dev/sdb1:1024 /dev/sdf1:1024 /dev/sdo1:1024 /dev/sdj1:1024
79. Raid Recreate DATA and DATA2 2018-03-13 Arch->Debian
31 disks in total
DATA2 with the standard mdadm
lxgk 4 jbrm 8 ab ts
mdadm --create --assume-clean /dev/md126 --chunk=64 --level=raid6 --layout=ls --raid-devices=11 /dev/sdl1 /dev/sdx1 /dev/sdg1 /dev/sdk1 /dev/sdj1 /dev/sdb1 /dev/sdr1 /dev/sdm1 /dev/sdab1 /dev/sdt1 /dev/sds1
DATA now works the same way with the standard mdadm as well, no longer with the build from /builds/mdadm
zd aa p 4 ycu ad 8 qwo ac 12 fhaeni
mdadm --create --assume-clean /dev/md125 --chunk=64 --level=raid6 --layout=ls --raid-devices=18 --data-offset=1024 /dev/sdz1 /dev/sdd1 /dev/sdaa1 /dev/sdp1 /dev/sdy1 /dev/sdc1 /dev/sdu1 /dev/sdad1 /dev/sdq1 /dev/sdw1 /dev/sdo1 /dev/sdac1 /dev/sdf1 /dev/sdh1 /dev/sda1 /dev/sde1 /dev/sdn1 /dev/sdi1
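To double-check that the recreated DATA array picked up the intended data offset before opening the LUKS container, the superblock of any member can be inspected (the device name is just an example):
mdadm --examine /dev/sdz1 | grep -Ei 'data offset|raid level|array uuid'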
Set up the spare group
Write the current config to mdadm.conf
mdadm -D -s >> /etc/mdadm.conf
Add the spare-group entry
nano /etc/mdadm.conf
At the very bottom, append spare-group=shared to the ARRAY line:
ARRAY /dev/md/126 metadata=1.2 spares=1 name=store2:126 UUID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx spare-group=shared
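For the shared hot spare to actually migrate between DATA and DATA2, both ARRAY lines need the same spare-group value and mdadm has to be running in monitor mode; a minimal sketch (UUIDs are placeholders, the md/125 name just follows the pattern of the existing line):
ARRAY /dev/md/125 metadata=1.2 name=store2:125 UUID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx spare-group=shared
ARRAY /dev/md/126 metadata=1.2 spares=1 name=store2:126 UUID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx spare-group=shared
# spares only move between arrays while the monitor is running:
mdadm --monitor --scan --daemonise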
RAID build/resync status
cat /proc/mdstat
Automatically refreshed every second:
watch -n 1 cat /proc/mdstat
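For a per-array view (state, failed/spare devices, rebuild or reshape progress), assuming the array names used above:
mdadm --detail /dev/md125
mdadm --detail /dev/md126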
Encrypting by hand (without YaST2)
Encrypt:
cryptsetup -v --key-size 256 luksFormat /dev/md125
With special parameters for hardware encryption:
cryptsetup -v luksFormat --cipher aes-cbc-essiv:sha256 --key-size 256 /dev/md125
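To check whether the chosen cipher is actually accelerated on this CPU, a quick comparison (needs a reasonably recent cryptsetup) is:
cryptsetup benchmark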
Open:
cryptsetup luksOpen /dev/md125 cr_md125
Put a filesystem on it:
mkfs.xfs /dev/mapper/cr_md125
store2:~ # mkfs.xfs /dev/mapper/cr_md125
meta-data=/dev/mapper/cr_md125 isize=256 agcount=36, agsize=268435424 blks
= sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=9523357168, imaxpct=5
= sunit=16 swidth=208 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=521728, version=2
= sectsz=512 sunit=16 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Status:
cryptsetup luksDump /dev/md125
Grow
Prepare the hard drive
Open gdisk with the first hard drive:
$ gdisk /dev/sda
and type the following commands at the prompt:
Add a new partition: n
Select the default partition number: Enter
Use the default for the first sector: Enter
For sda1 and sda2 type the appropriate size in MB (i.e. +100MB and +2048M). For sda3 just hit Enter to select the remainder of the disk.
Select Linux RAID as the partition type: fd00
Write the table to disk and exit: w
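A non-interactive equivalent with sgdisk, as a sketch (the device name is a placeholder, the whole disk becomes one Linux RAID partition, and --zap-all wipes the existing partition table):
sgdisk --zap-all /dev/sdX
sgdisk -n 1:0:0 -t 1:fd00 /dev/sdX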
Bad Blocks
Start a screen session
screen -S bb
badblocks -vs -o sdy-badblock-test /dev/sdy
verbose, show progress, output file (log). badblocks only scans for bad blocks; it does not destroy any data.
Detach
Ctrl-a d = detach
Reattach
screen -r bb
OR attach to the session again (multi-attach)
screen -x bb
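When several new disks arrive at once, one detached screen session per disk keeps the long badblocks runs independent of each other; a sketch with example device names:
for d in sdy sdz; do
    screen -dmS bb-$d badblocks -vs -o /root/$d-badblock-test /dev/$d
done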
If necessary, remove the hot spare from the RAID
mdadm --remove /dev/md125 /dev/sdn1
Add the device to the RAID
mdadm --add /dev/md126 /dev/sdt1
Reshape the RAID onto the additional device (takes about 3 full days)
mdadm --grow --raid-devices=11 /dev/md126 --backup-file=/home/gagi/mda126backup
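The reshape progress shows up in /proc/mdstat; raising the stripe cache sometimes speeds it up noticeably (value is in pages per device and costs RAM; md126 as above):
watch -n 60 cat /proc/mdstat
echo 8192 > /sys/block/md126/md/stripe_cache_size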
Set Faulty
mdadm --manage --set-faulty /dev/md126 /dev/sdj1
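Put together, the usual sequence for swapping a failing member for a fresh disk, reusing the commands above (device names are examples):
mdadm --manage --set-faulty /dev/md126 /dev/sdj1
mdadm --remove /dev/md126 /dev/sdj1
mdadm --add /dev/md126 /dev/sdt1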
To see who/what is currently accessing it:
lsof /data
Stop the Samba service:
rcsmb stop
# or, with systemd:
systemctl stop smbd
Unmount:
umount /data
XFS (Data)
Check XFS (unmounted)
xfs_repair -n -o bhash=1024 /dev/mapper/cr_md125
Usually simply:
xfs_repair /dev/mapper/cr_md126
Grow the crypt container
cryptsetup --verbose resize cr_md125
Mount:
mount /dev/mapper/cr_md125 /data
Grow XFS
xfs_growfs /data
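To confirm the filesystem really picked up the new space:
df -h /data
xfs_info /data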
Check XFS (unmounted)
xfs_repair -n -o bhash=1024 /dev/mapper/cr_md125
Mount read-only:
mount -o ro /dev/mapper/cr_md125 /data
Samba share
There's a bug in Samba in openSuse 11.4. Here's the workaround:
Go to Yast --> AppArmor --> Control Panel (on) --> Configure Profile Modes --> usr.sbin.smbd = complain
Go to Yast --> System --> Runlevels --> smb = on + nmb = on
Reboot
Direct network connection Store1 <-> Store 2.0
You can also take a look at what is in /etc/udev/rules.d/70-persistent-net... (or whatever the file is called).
That is where the MAC address of a NIC is bound to an interface name (eth0, eth1, ...).
You can also delete or move the file; it is recreated on the next boot.
Things sometimes get mixed up there, e.g. after a kernel update or BIOS update.
WORKS! Own subnet for the direct link (192.168.2.100 and 192.168.2.102)
Fast-Copy
1.) Receiver (Store2.0)
cd <target directory>
netcat -l -p 4323 | gunzip | cpio -i -d -m
2.) Sender (Store)
cd <source directory>
find . -type f | cpio -o | gzip -1 | netcat 192.168.2.102 4323
# cpio -o (copy-out) writes the archive to stdout so it can be piped
1.) Receiver (Store2.0)
socat tcp4-listen:4323 stdout | tar xvpf - /data/eBooks
2.) Sender (Store)
tar cvf - /data/eBooks | socat stdin tcp4:192.168.2.102:4323
Test with a progress display when the total data size is known:
1.) Receiver (Store2.0)
cd <target directory>
socat tcp4-listen:4323 stdout | pv -s 93G | tar xvpf -
2.) Sender (Store)
cd <source directory>
tar cvf - * | pv -s 93G | socat stdin tcp4:192.168.2.102:4323
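A quick plausibility check after the transfer, comparing file counts on both ends (directory placeholders as above):
find <source directory> -type f | wc -l
find <target directory> -type f | wc -l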
dd if=/dev/sdl | bar -s 1.5T | dd of=/dev/sdw
FileBot Renamer Linux
filebot -rename -get-subtitles -non-strict /data/Downloads/Fertig/ --output /data/Downloads/Fertig/FileBot/ --format "{n}/Season {s}/{n}.{s00e00}.{t}" --db TheTVDB
filebot -get-missing-subtitles -non-strict -r --lang en /data/Downloads/Fertig/FileBot/
filebot -script fn:replace --conflict override --def "e=.eng.srt" "r=.srt" /data/Downloads/Fertig/FileBot/
RemoteDesktop ArchLinux Client
E.g. on Busfahrer: rdesktop -g 1440x900 -P -z -x l -r sound:off -u gagi 192.168.1.149
Backplane rotation for fault diagnosis
Initial state with 24 disks, 2013-11-12
/dev/sdx -> 00 : 00000000000000 (1TB) 31°C
/dev/sdp -> 01 : WD-WCC070299387 (3TB WD) 31°C
/dev/sdq -> 03 : MJ1311YNG3SSLA (3TB) 33°C
/dev/sdr -> 05 : MJ1311YNG3NZ3A (3TB) 32°C
/dev/sds -> 07 : MJ1311YNG4J48A (3TB) 32°C
/dev/sdt -> 09 : MJ1311YNG3UUPA (3TB) 33°C
/dev/sdu -> 11 : MJ1311YNG3SAMA (3TB) 32°C
/dev/sdv -> 13 : MJ1311YNG3SU1A (3TB) 34°C
/dev/sdw -> 15 : MCE9215Q0AUYTW (3TB Toshiba new) 31°C
/dev/sdh -> 16 : MJ0351YNGA02YA (3TB) not in use, bb-check 2013-08-28 37°C
/dev/sdi -> 18 : MJ1311YNG3Y4SA (3TB) 40°C
/dev/sdj -> 20 : WD-WCAWZ1881335 (3TB WD) hot spare 38°C
/dev/sdk -> 22 : WD-WCAWZ2279670 (3TB WD) 41°C
/dev/sdl -> 24 : MJ1311YNG25Z6A (3TB) 39°C
/dev/sdm -> 26 : MJ1311YNG3RM5A (3TB) 39°C
/dev/sdn -> 28 : MJ1311YNG3NT5A (3TB) 40°C
/dev/sdo -> 30 : MCM9215Q0B9LSY (3TB Toshiba new) 38°C
/dev/sda -> 31 : 234BGY0GS (3TB Toshiba new) 40°C
/dev/sdb -> 33 : MJ1311YNG3WZVA (3TB) 43°C
/dev/sdc -> 35 : MJ1311YNG3SYKA (3TB) 42°C
/dev/sdd -> 37 : WD-WCC070198169 (3TB WD) 41°C
/dev/sde -> 39 : MJ1311YNG3RZTA (3TB) 39°C
/dev/sdf -> 41 : MJ1311YNG3LTRA (3TB) 39°C
/dev/sdg -> 43 : MJ1311YNG38VGA (3TB) 39°C
24 disks found in total.
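The mapping above can be regenerated at any time; this is not the original helper script, just a one-liner sketch using lsblk:
lsblk -d -n -o NAME,SERIAL,SIZE /dev/sd?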
Crashes
ArchLinux 2011-09-09
Sep 9 18:20:04 localhost kernel: [156439.479947] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO Sep 9 18:20:04 localhost kernel: [156439.480035] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO Sep 9 18:20:04 localhost kernel: [156439.486612] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO Sep 9 18:20:04 localhost kernel: [156439.503656] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO Sep 9 18:20:04 localhost kernel: [156439.504562] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO Sep 9 18:34:11 localhost -- MARK -- Sep 9 18:42:42 localhost kernel: [157797.911330] r8169: eth0: link up Sep 9 18:54:11 localhost -- MARK -- Sep 9 19:14:11 localhost -- MARK -- Sep 9 19:34:11 localhost -- MARK -- Sep 9 19:54:11 localhost -- MARK -- Sep 9 20:14:11 localhost -- MARK -- Sep 9 20:27:32 localhost kernel: [164086.971566] r8169: eth0: link up Sep 9 20:27:42 localhost kernel: [164097.580071] r8169: eth0: link up Sep 9 20:27:50 localhost kernel: [164105.391755] r8169: eth0: link up Sep 9 20:27:51 localhost kernel: [164106.272019] r8169: eth0: link up Sep 9 20:28:12 localhost kernel: [164127.150062] r8169: eth0: link up Sep 9 20:28:22 localhost kernel: [164137.941304] r8169: eth0: link up Sep 9 20:28:33 localhost kernel: [164148.890097] r8169: eth0: link up Sep 9 20:28:38 localhost kernel: [164153.080536] r8169: eth0: link up Sep 9 20:28:58 localhost kernel: [164173.790064] r8169: eth0: link up Sep 9 20:42:19 localhost kernel: [ 0.000000] Initializing cgroup subsys cpuset Sep 9 20:42:19 localhost kernel: [ 0.000000] Initializing cgroup subsys cpu Sep 9 20:42:19 localhost kernel: [ 0.000000] Linux version 2.6.32-lts (tobias@T-POWA-LX) (gcc version 4.6.1 20110819 (prerelease) (GCC) ) #1 SMP Tue Aug 30 08:59:44 CEST 2011 Sep 9 20:42:19 localhost kernel: [ 0.000000] Command line: root=/dev/disk/by-uuid/ba47ea9a-c24c-4dc6-a9a2-ca3b442bdbfc ro vga=0x31B Sep 9 20:42:19 localhost kernel: [ 0.000000] KERNEL supported cpus: Sep 9 20:42:19 localhost kernel: [ 0.000000] Intel GenuineIntel Sep 9 20:42:19 localhost kernel: [ 0.000000] AMD AuthenticAMD Sep 9 20:42:19 localhost kernel: [ 0.000000] Centaur CentaurHauls
OpenSuse 2011-09-26
Sep 26 23:15:59 store2 su: (to nobody) root on none Sep 26 23:17:17 su: last message repeated 2 times Sep 26 23:25:23 store2 smartd[4617]: Device: /dev/sdc [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 166 to 162 Sep 26 23:25:26 store2 smartd[4617]: Device: /dev/sde [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 171 to 166 Sep 26 23:25:29 store2 smartd[4617]: Device: /dev/sdg [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 171 to 166 Sep 26 23:25:36 store2 smartd[4617]: Device: /dev/sdk [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 171 to 166 Sep 26 23:25:37 store2 smartd[4617]: Device: /dev/sdl [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 181 to 187 Sep 26 23:55:22 store2 smartd[4617]: Device: /dev/sdb [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 100 to 96 Sep 26 23:55:23 store2 smartd[4617]: Device: /dev/sdc [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 162 to 166 Sep 26 23:55:26 store2 smartd[4617]: Device: /dev/sde [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 100 to 99 Sep 26 23:55:26 store2 smartd[4617]: Device: /dev/sde [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 166 to 171 Sep 26 23:55:29 store2 smartd[4617]: Device: /dev/sdg [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 166 to 171 Sep 26 23:55:32 store2 smartd[4617]: Device: /dev/sdi [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 100 to 98 Sep 27 00:55:26 store2 kernel: imklog 5.6.5, log source = /proc/kmsg started.
OpenSuse2011-09-27
Sep 27 16:35:17 store2 smbd[29588]: [2011/09/27 16:35:17.391212, 0] param/loadparm.c:8445(check_usershare_stat) Sep 27 16:35:17 store2 smbd[29588]: check_usershare_stat: file /var/lib/samba/usershares/ owned by uid 0 is not a regular file Sep 27 16:44:06 store2 smbd[29163]: [2011/09/27 16:44:06.795153, 0] lib/util_sock.c:474(read_fd_with_timeout) Sep 27 16:44:06 store2 smbd[29163]: [2011/09/27 16:44:06.795341, 0] lib/util_sock.c:1441(get_peer_addr_internal) Sep 27 16:44:06 store2 smbd[29597]: [2011/09/27 16:44:06.795323, 0] lib/util_sock.c:474(read_fd_with_timeout) Sep 27 16:44:06 store2 smbd[29163]: getpeername failed. Error was Der Socket ist nicht verbunden Sep 27 16:44:06 store2 smbd[29592]: [2011/09/27 16:44:06.795368, 0] lib/util_sock.c:474(read_fd_with_timeout) Sep 27 16:44:06 store2 smbd[29163]: read_fd_with_timeout: client 0.0.0.0 read error = Die Verbindung wurde vom Kommunikationspartner zurückgesetzt. Sep 27 16:44:06 store2 smbd[29597]: [2011/09/27 16:44:06.795422, 0] lib/util_sock.c:1441(get_peer_addr_internal) Sep 27 16:44:06 store2 smbd[29597]: getpeername failed. Error was Der Socket ist nicht verbunden Sep 27 16:44:06 store2 smbd[29597]: read_fd_with_timeout: client 0.0.0.0 read error = Die Verbindung wurde vom Kommunikationspartner zurückgesetzt. Sep 27 16:44:06 store2 smbd[29592]: [2011/09/27 16:44:06.795468, 0] lib/util_sock.c:1441(get_peer_addr_internal) Sep 27 16:44:06 store2 smbd[29592]: getpeername failed. Error was Der Socket ist nicht verbunden Sep 27 16:44:06 store2 smbd[29592]: read_fd_with_timeout: client 0.0.0.0 read error = Die Verbindung wurde vom Kommunikationspartner zurückgesetzt. Sep 27 16:45:42 store2 smbd[29585]: [2011/09/27 16:45:42.499038, 0] lib/util_sock.c:474(read_fd_with_timeout) Sep 27 16:45:42 store2 smbd[29593]: [2011/09/27 16:45:42.499082, 0] lib/util_sock.c:474(read_fd_with_timeout) Sep 27 16:45:42 store2 smbd[29593]: [2011/09/27 16:45:42.499174, 0] lib/util_sock.c:1441(get_peer_addr_internal) Sep 27 16:45:42 store2 smbd[29585]: [2011/09/27 16:45:42.499174, 0] lib/util_sock.c:1441(get_peer_addr_internal) Sep 27 16:45:42 store2 smbd[29593]: getpeername failed. Error was Der Socket ist nicht verbunden Sep 27 16:45:42 store2 smbd[29585]: getpeername failed. Error was Der Socket ist nicht verbunden Sep 27 16:45:42 store2 smbd[29593]: read_fd_with_timeout: client 0.0.0.0 read error = Die Verbindung wurde vom Kommunikationspartner zurückgesetzt. Sep 27 16:45:42 store2 smbd[29585]: read_fd_with_timeout: client 0.0.0.0 read error = Die Verbindung wurde vom Kommunikationspartner zurückgesetzt. Sep 27 19:35:14 store2 kernel: imklog 5.6.5, log source = /proc/kmsg started.
OpenSuse 2011-09-29
During a heavy copy job from Store
Sep 29 23:16:19 su: last message repeated 2 times Sep 29 23:28:41 store2 smartd[4624]: Device: /dev/sdb [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 157 to 162 Sep 29 23:28:44 store2 smartd[4624]: Device: /dev/sdd [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 153 to 157 Sep 29 23:28:49 store2 smartd[4624]: Device: /dev/sdh [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 100 to 99 Sep 29 23:28:53 store2 smartd[4624]: Device: /dev/sdk [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 162 to 157 Sep 29 23:28:57 store2 smartd[4624]: Device: /dev/sdo [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 176 to 181 Sep 29 23:58:44 store2 smartd[4624]: Device: /dev/sdd [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 100 to 99 Sep 29 23:58:49 store2 smartd[4624]: Device: /dev/sdh [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 99 to 100 Sep 29 23:58:53 store2 smartd[4624]: Device: /dev/sdk [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 157 to 162 Sep 29 23:58:57 store2 smartd[4624]: Device: /dev/sdn [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 187 to 193 Sep 29 23:58:58 store2 smartd[4624]: Device: /dev/sdo [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 181 to 176 Sep 29 23:59:02 store2 smartd[4624]: Device: /dev/sdq [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 176 to 181 Sep 30 00:28:41 store2 smartd[4624]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 187 to 193 Sep 30 00:28:43 store2 smartd[4624]: Device: /dev/sdd [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 99 to 100 Sep 30 00:28:49 store2 smartd[4624]: Device: /dev/sdh [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 100 to 99 Sep 30 00:28:58 store2 smartd[4624]: Device: /dev/sdo [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 176 to 181 Sep 30 00:58:47 store2 smartd[4624]: Device: /dev/sdf [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 100 to 99 Sep 30 00:58:49 store2 smartd[4624]: Device: /dev/sdh [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 99 to 100 Sep 30 00:58:59 store2 smartd[4624]: Device: /dev/sdp [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 100 to 99 Sep 30 01:28:47 store2 smartd[4624]: Device: /dev/sdf [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 99 to 100 Sep 30 01:28:47 store2 smartd[4624]: Device: /dev/sdf [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 157 to 162 Sep 30 01:28:50 store2 smartd[4624]: Device: /dev/sdi [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 100 to 99 Sep 30 01:58:47 store2 smartd[4624]: Device: /dev/sdf [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 162 to 157 Sep 30 01:59:00 store2 smartd[4624]: Device: /dev/sdp [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 99 to 100 Sep 30 02:28:45 store2 smartd[4624]: Device: /dev/sdd [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 100 to 99 Sep 30 02:28:46 store2 smartd[4624]: Device: /dev/sde [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 100 to 99 Sep 30 02:28:46 store2 smartd[4624]: Device: /dev/sde [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 157 to 162 Sep 30 02:28:48 store2 smartd[4624]: Device: 
/dev/sdf [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 100 to 99 Sep 30 02:28:52 store2 smartd[4624]: Device: /dev/sdi [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 99 to 100 Sep 30 02:58:45 store2 smartd[4624]: Device: /dev/sdd [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 99 to 100 Sep 30 02:58:46 store2 smartd[4624]: Device: /dev/sde [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 99 to 100 Sep 30 02:58:46 store2 smartd[4624]: Device: /dev/sde [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 162 to 157 Sep 30 02:58:47 store2 smartd[4624]: Device: /dev/sdf [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 99 to 100 Sep 30 02:58:49 store2 smartd[4624]: Device: /dev/sdh [SAT], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 100 to 99 Sep 30 09:39:22 store2 kernel: imklog 5.6.5, log source = /proc/kmsg started.
What you are seeing are the Normalized Attribute values changing.
For example when the Raw_Read_Error_Rate changed from 99 to 100, the increase in Normalized value from 99 to 100 means that the disk now thinks it is a bit LESS likely to fail than before, because this Normalized value is moving further above the (low) Threshold value.
ArchLinux 2011-10-17
Oct 17 21:21:35 localhost smartd[1941]: Device: /dev/sdc [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 181 to 176 Oct 17 21:21:37 localhost smartd[1941]: Device: /dev/sde [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 181 to 176 Oct 17 21:21:45 localhost smartd[1941]: Device: /dev/sdm [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 200 to 206 Oct 17 21:30:03 localhost -- MARK -- Oct 17 21:50:03 localhost -- MARK -- Oct 17 21:51:37 localhost smartd[1941]: Device: /dev/sde [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 176 to 181 Oct 17 21:51:41 localhost smartd[1941]: Device: /dev/sdi [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 181 to 187 Oct 17 22:10:03 localhost -- MARK -- Oct 17 22:21:34 localhost smartd[1941]: Device: /dev/sdc [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 176 to 181 Oct 17 22:21:47 localhost smartd[1941]: Device: /dev/sdo [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 206 to 214 Oct 17 22:21:49 localhost smartd[1941]: Device: /dev/sdq [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 206 to 214 Oct 17 22:30:03 localhost -- MARK -- Oct 17 22:50:03 localhost -- MARK -- Oct 17 23:11:18 localhost kernel: [ 0.000000] Initializing cgroup subsys cpuset Oct 17 23:11:18 localhost kernel: [ 0.000000] Initializing cgroup subsys cpu
ArchLinux 2011-11-06
Nov 6 12:39:05 localhost -- MARK -- Nov 6 12:42:18 localhost smartd[1927]: Device: /dev/sdi [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 193 to 200 Nov 6 12:42:20 localhost smartd[1927]: Device: /dev/sdj [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 193 to 187 Nov 6 12:42:24 localhost smartd[1927]: Device: /dev/sdn [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 222 to 214 Nov 6 12:42:25 localhost smartd[1927]: Device: /dev/sdo [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 230 to 222 Nov 6 12:42:26 localhost smartd[1927]: Device: /dev/sdp [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 214 to 222 Nov 6 12:59:05 localhost -- MARK -- Nov 6 14:29:21 localhost kernel: [ 0.000000] Initializing cgroup subsys cpuset Nov 6 14:29:21 localhost kernel: [ 0.000000] Initializing cgroup subsys cpu Nov 6 14:29:21 localhost kernel: [ 0.000000] Linux version 3.0-ARCH (tobias@T-POWA-LX) (gcc version 4.6.1 20110819 (prerelease) (GCC) ) #1 SMP PREEMPT Wed Oct$ Nov 6 14:29:21 localhost kernel: [ 0.000000] Command line: root=/dev/disk/by-id/ata-Hitachi_HCS5C1016CLA382_JC0150HT0J7TPC-part3 ro Nov 6 14:29:21 localhost kernel: [ 0.000000] BIOS-provided physical RAM map:
JUST LIKE THAT, with no apparent trigger!
ArchLinux 2011-11-21
A system update (including a new kernel) had been run beforehand, but the machine had not been rebooted yet.
Nov 21 09:30:27 localhost smartd[2208]: Device: /dev/sdj [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 200 to 206 Nov 21 09:30:30 localhost smartd[2208]: Device: /dev/sdl [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 240 to 250 Nov 21 09:30:31 localhost smartd[2208]: Device: /dev/sdm [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 250 to 240 Nov 21 09:30:35 localhost smartd[2208]: Device: /dev/sdp [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 230 to 240 Nov 21 09:43:32 localhost kernel: [1280595.864622] ------------[ cut here ]------------ Nov 21 09:43:32 localhost kernel: [1280595.864636] WARNING: at drivers/gpu/drm/i915/i915_irq.c:649 ironlake_irq_handler+0x1102/0x1110 [i915]() Nov 21 09:43:32 localhost kernel: [1280595.864638] Hardware name: H61M-S2V-B3 Nov 21 09:43:32 localhost kernel: [1280595.864639] Missed a PM interrupt Nov 21 09:43:32 localhost kernel: [1280595.864640] Modules linked in: xfs sha256_generic dm_crypt dm_mod raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx md_mod coretemp nfsd exportfs nfs lockd fscache auth_rpcgss nfs_acl sunrpc ipv6 ext2 sr_mod cdrom snd_hda_codec_realtek usb_storage uas sg evdev snd_hda_intel snd_hda_codec iTCO_wdt snd_hwdep snd_pcm snd_timer i915 snd drm_kms_helper drm pcspkr i2c_algo_bit r8169 ppdev i2c_i801 shp chp parport_pc intel_agp i2c_core pci_hotplug parport intel_gtt mei(C) soundcore snd_page_alloc processor button mii iTCO_vendor_support video aesni_intel cryptd aes_x86_64 aes_generic ext4 mbcache jbd2 crc16 usbhid hid sd_mod sata_sil24 ahci libahci libata scsi_mod ehci_hcd usbcore Nov 21 09:43:32 localhost kernel: [1280595.864674] Pid: 0, comm: swapper Tainted: G C 3.0-ARCH #1 Nov 21 09:43:32 localhost kernel: [1280595.864675] Call Trace: Nov 21 09:43:32 localhost kernel: [1280595.864676] <IRQ> [<ffffffff8105c76f>] warn_slowpath_common+0x7f/0xc0 Nov 21 09:43:32 localhost kernel: [1280595.864684] [<ffffffff8105c866>] warn_slowpath_fmt+0x46/0x50 Nov 21 09:43:32 localhost kernel: [1280595.864688] [<ffffffff81078f7d>] ? queue_work+0x5d/0x70 Nov 21 09:43:32 localhost kernel: [1280595.864693] [<ffffffffa0235a22>] ironlake_irq_handler+0x1102/0x1110 [i915] Nov 21 09:43:32 localhost kernel: [1280595.864696] [<ffffffff812a4bc5>] ? dma_issue_pending_all+0x95/0xa0 Nov 21 09:43:32 localhost kernel: [1280595.864699] [<ffffffff81333db1>] ? net_rx_action+0x131/0x300 Nov 21 09:43:32 localhost kernel: [1280595.864702] [<ffffffff810bf835>] handle_irq_event_percpu+0x75/0x2a0 Nov 21 09:43:32 localhost kernel: [1280595.864705] [<ffffffff810bfaa5>] handle_irq_event+0x45/0x70 Nov 21 09:43:32 localhost kernel: [1280595.864707] [<ffffffff810c21af>] handle_edge_irq+0x6f/0x120 Nov 21 09:43:32 localhost kernel: [1280595.864710] [<ffffffff8100d9f2>] handle_irq+0x22/0x40 Nov 21 09:43:32 localhost kernel: [1280595.864712] [<ffffffff813f66aa>] do_IRQ+0x5a/0xe0 Nov 21 09:43:32 localhost kernel: [1280595.864715] [<ffffffff813f4393>] common_interrupt+0x13/0x13 Nov 21 09:43:32 localhost kernel: [1280595.864716] <EOI> [<ffffffff81273cdb>] ? intel_idle+0xcb/0x120 Nov 21 09:43:32 localhost kernel: [1280595.864720] [<ffffffff81273cbd>] ? 
intel_idle+0xad/0x120 Nov 21 09:43:32 localhost kernel: [1280595.864723] [<ffffffff81313d9d>] cpuidle_idle_call+0x9d/0x350 Nov 21 09:43:32 localhost kernel: [1280595.864726] [<ffffffff8100a21a>] cpu_idle+0xba/0x100 Nov 21 09:43:32 localhost kernel: [1280595.864729] [<ffffffff813d1eb2>] rest_init+0x96/0xa4 Nov 21 09:43:32 localhost kernel: [1280595.864731] [<ffffffff81748c23>] start_kernel+0x3de/0x3eb Nov 21 09:43:32 localhost kernel: [1280595.864733] [<ffffffff81748347>] x86_64_start_reservations+0x132/0x136 Nov 21 09:43:32 localhost kernel: [1280595.864735] [<ffffffff81748140>] ? early_idt_handlers+0x140/0x140 Nov 21 09:43:32 localhost kernel: [1280595.864737] [<ffffffff8174844d>] x86_64_start_kernel+0x102/0x111 Nov 21 09:43:32 localhost kernel: [1280595.864738] ---[ end trace 01037f4ec3ec4ee5 ]--- Nov 21 10:00:16 localhost smartd[2208]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 206 to 200 Nov 21 10:00:18 localhost smartd[2208]: Device: /dev/sdc [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 200 to 193 Nov 21 10:00:19 localhost smartd[2208]: Device: /dev/sdd [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 206 to 200 Nov 21 10:00:23 localhost smartd[2208]: Device: /dev/sdg [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 206 to 200 Nov 21 10:00:29 localhost smartd[2208]: Device: /dev/sdl [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 250 to 240 Nov 21 10:00:30 localhost smartd[2208]: Device: /dev/sdm [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 240 to 250 Nov 21 10:00:33 localhost smartd[2208]: Device: /dev/sdp [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 240 to 230 Nov 21 10:00:34 localhost smartd[2208]: Device: /dev/sdq [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 240 to 250 Nov 21 11:52:01 localhost kernel: [ 0.000000] Initializing cgroup subsys cpuset Nov 21 11:52:01 localhost kernel: [ 0.000000] Initializing cgroup subsys cpu