Problem with XFS
I use an Etegro NAS-400 server with an Areca ARC-1280 controller running Open-E NAS-XSR Enterprise, hosting about 15 terabytes of video and audio data. It is configured as hardware RAID-6 with one logical volume, lv00.
After about a year of trouble-free use this volume was lost. Hardware RAID checks show no errors, and the logical volume is still visible under /setup/disk manager/, but it shows up nowhere else.
The dmesg output points to XFS corruption:
Starting XFS recovery on filesystem: dm-3 (logdev: internal)
Filesystem "dm-3": corrupt dinode 42856, (btree extents). Unmount and run xfs_repair.
Filesystem "dm-3": XFS internal error xfs_bmap_read_extents(1) at line 4386 of file fs/xfs/xfs_bmap.c. Caller 0xc0290e91
 <c02681af> xfs_bmap_read_extents+0x42f/0x560
 <c0290e91> xfs_iread_extents+0x71/0xf0
 <c0290e91> xfs_iread_extents+0x71/0xf0
 <c026ae42> xfs_bunmapi+0x1162/0x1200
 <c014dd98> get_page_from_freelist+0x98/0xc0
 <c014de17> __alloc_pages+0x57/0x320
 <c016653f> kmem_getpages+0x7f/0xc0
 <c01674e0> cache_grow+0x140/0x1a0
 <c016760a> cache_alloc_refill+0xca/0x210
 <c02ac49d> xfs_trans_log_inode+0x2d/0x60
 <c029181a> xfs_itruncate_finish+0x1fa/0x460
 <c02b2e4f> xfs_inactive+0x54f/0x5c0
 <c02c370f> xfs_fs_clear_inode+0xaf/0xc0
 <c0186ff6> clear_inode+0xe6/0x150
 <c0187fa2> generic_delete_inode+0x142/0x150
 <c0188193> iput+0x63/0x90
 <c02a2ac3> xlog_recover_process_iunlinks+0x343/0x3f0
 <c02a3e4c> xlog_recover_finish+0x9c/0xd0
 <c029ac78> xfs_log_mount_finish+0x48/0x50
 <c02a56f4> xfs_mountfs+0x924/0xf30
 <c02c4e70> icmn_err+0x90/0xb0
 <c0286ee7> xfs_fs_cmn_err+0x27/0x30
 <c0295f57> xfs_ioinit+0x27/0x50
 <c02ae1ff> xfs_mount+0x2ff/0x520
 <c02c4283> vfs_mount+0x43/0x50
 <c02c4283> vfs_mount+0x43/0x50
 <c02c409a> xfs_fs_fill_super+0x9a/0x200
 <c02e4597> snprintf+0x27/0x30
 <c01ad014> disk_name+0xb4/0xc0
 <c01734df> sb_set_blocksize+0x1f/0x50
 <c0172e26> get_sb_bdev+0x106/0x150
 <c014de17> __alloc_pages+0x57/0x320
 <c02c4230> xfs_fs_get_sb+0x30/0x40
 <c02c4000> xfs_fs_fill_super+0x0/0x200
 <c01730c0> do_kern_mount+0xa0/0x160
 <c018b367> do_new_mount+0x77/0xc0
 <c018b9ef> do_mount+0x1bf/0x220
 <c018b7d3> copy_mount_options+0x63/0xc0
 <c018bd9f> sys_mount+0x9f/0xe0
 <c0102e67> syscall_call+0x7/0xb
Could you advise what I can do in this situation? The Open-E Console Extended Tools have a "Repair filesystem on LV" option - is it safe to try, or could I lose the data for good?
15TB in one logical volume - that's a lot...
Do a filesystem repair - it's an option in the console. It should be ALT+CTRL+X, then Repair filesystem on LV (a rough command-line equivalent is sketched below).
Do you have a backup???
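For what it's worth, the console's "Repair filesystem on LV" option presumably just wraps xfs_repair, which is also what the dmesg output tells you to run. If you can get a shell on the box, a cautious sequence would look roughly like the lines below; the device path /dev/dm-3 is only a guess taken from the dmesg messages, so substitute whatever block device actually backs lv00 on your system:

    # make sure nothing has the volume mounted
    umount /dev/dm-3

    # dry run: -n reports problems but writes nothing, so it is safe to try first
    xfs_repair -n /dev/dm-3

    # if the report looks reasonable, run the real repair
    xfs_repair /dev/dm-3

If xfs_repair refuses to run because the log is dirty, the -L option will zero the log, but that can discard recent metadata updates - treat it as a last resort and take an image of the volume first if you possibly can.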