How to Build an All-NVMe Synology NAS

Goal: a Synology DSM system running entirely on NVMe SSDs.

In the stock Synology system, M.2 NVMe drives can only be used as storage pools, are not supported as the target of the initial system installation, and even using an M.2 storage pool requires specific models. That last restriction is easy to get around these days; the available options are:

  • Manually create the M.2 storage pool, link: Github
  • Manually modify the M.2 drive's attributes, then create the pool from the Storage Manager page
  • Patch libhwcontrol.so, then create the pool from the Storage Manager page

So at the moment, the missing piece for an all-NVMe Synology is installing the system directly onto an M.2 NVMe drive in the first place. This post gives a rough walkthrough of how to achieve that.

Exploring the process

The extra environment preparation for the virtual Synology is as follows:

  • Create a virtual NVMe SSD image (qemu-img create -f raw nvme0.raw 64G)
  • Add the NVMe arguments to the VM (-drive file=nvme0.raw,if=none,format=raw,id=nvme0 -device nvme,drive=nvme0,serial=nvme0)

The test environment for this post uses two such virtual disks; a full command line is sketched below.
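
A minimal QEMU invocation putting these pieces together might look like the following sketch; the memory/CPU values are arbitrary, and the loader disk, network, and any other options are whatever your existing virtual DSM setup already uses.

  # create two raw images to act as NVMe SSDs
  qemu-img create -f raw nvme0.raw 64G
  qemu-img create -f raw nvme1.raw 64G

  # attach both images to the DSM VM as NVMe devices
  # (boot loader disk, network, etc. omitted -- reuse your existing options)
  qemu-system-x86_64 -m 4G -smp 4 \
    -drive file=nvme0.raw,if=none,format=raw,id=nvme0 -device nvme,drive=nvme0,serial=nvme0 \
    -drive file=nvme1.raw,if=none,format=raw,id=nvme1 -device nvme,drive=nvme1,serial=nvme1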

1. The installation page

Following the steps above, with only virtual NVMe SSDs attached, the installation page shows a "no disks found" message like the screenshot below. Looking at the network requests on the right of the screenshot and going through all the asynchronous requests, the second-to-last one contains disk-related information, so that is where we dig in.

sa6400-nvme-install-no-disk.png

2. Into the ramdisk

Inspecting the get_state.cgi output

get_state.cgi is a CGI program proxied by nginx; in the ramdisk this file is actually a shell script. Let's look at its output first; line 4 says outright that there is no disk:

  1. {
  2. "success": true,
  3. "data": {
  4. "has_disk": false,
  5. "dsinfo": {
  6. "product": "Synology NAS",
  7. "model": "SA6400",
  8. "internet_ok": "false",
  9. "internet_install_ok": false,
  10. "internet_migrate_ok": true,
  11. "internet_reinstall_ok": true,
  12. "internet_install_version": "",
  13. "internet_migrate_version": "DSM 7.1.1-42962",
  14. "internet_reinstall_version": "DSM 7.1.1-42962",
  15. "ip_addr": "192.168.3.172",
  16. "mac_addr": "",
  17. "serial": "0000XXXBN4YYY",
  18. "build_num": 42962,
  19. "build_ver": "7.1.1-42962",
  20. "is_installing": false,
  21. "clean_all_partition_disks": "",
  22. "buildin_storage": false,
  23. "disk_size_enough": true,
  24. "disk_count": 0,
  25. "support_dual_head": "",
  26. "unique_rd": "epyc7002",
  27. "update_hcl_status": "success",
  28. "incompatible_disks": null,
  29. "syno_incompatible_disks": "",
  30. "missing_system_disks": "",
  31. "root_on_isolated_disk": "",
  32. "disabled_port_disks": "",
  33. "ssd_cache_status": "",
  34. "sas_frimware_upgrade_fail": false,
  35. "unidentified": false,
  36. "status": "",
  37. "hostname": "SynologyNAS"
  38. }
  39. }
  40. }

Now let's look at the code in get_state.cgi that is responsible for "has_disk": false:

  1. partition="$(/usr/syno/bin/synodiskport -installable_disk_list)"
  2. SupportBuildinStorage="$(/bin/get_key_value /etc.defaults/synoinfo.conf support_buildin_storage)"
  3. if [ "xyes" != "x${SupportBuildinStorage}" ]; then
  4. buildin_storage='false'
  5. if [ ! -z "$partition" ];then
  6. has_disk='true'
  7. else
  8. has_disk='false'
  9. fi
  10. disk_count=`echo $partition | wc -w`
  11. else
  12. buildin_storage='true'
  13. has_disk='true'
  14. disk_name=$(basename ${buildin_storage_node})
  15. disk_size=$(cat /sys/block/${disk_name}/size)
  16. if [ "0" -eq "${disk_size}" ]; then
  17. has_disk='false'
  18. elif [ "${min_buildin_storage_size}" -gt "${disk_size}" ]; then
  19. disk_size_enough='false'
  20. fi
  21. fi

To get has_disk='true', the two places we can work with are the assignments on lines 6 and 13. The test SA6400's storage is not built-in, so line 6 is the one to go for, which means the partition variable checked on line 5 must be non-empty; partition is the output of running /usr/syno/bin/synodiskport -installable_disk_list.
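
The same check is easy to reproduce by hand from a shell in the ramdisk; both values the script depends on can be printed directly:

  # the two inputs get_state.cgi looks at
  /bin/get_key_value /etc.defaults/synoinfo.conf support_buildin_storage
  /usr/syno/bin/synodiskport -installable_disk_list   # empty on a stock build with only NVMe disks attached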

Inspecting synodiskport -installable_disk_list

Running synodiskport -installable_disk_list directly returns nothing, so I tried a small shell wrapper that walks /sys/block and collects the SATA and NVMe disks:

  #!/bin/sh
  if [ "$1" = "-installable_disk_list" ]; then
      # list nvme* and sata* block devices on a single line
      disks=$(ls /sys/block | grep -E '^(nvme|sata)' | xargs)
      echo " $disks"
  else
      # everything else falls through to the original binary
      /path/to/old/synodiskport "$@"
  fi
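
To actually use it, the wrapper has to sit in place of the original binary inside the ramdisk while the real one stays available for every other subcommand; one way to wire that up (the .orig suffix and wrapper path are just conventions for this sketch):

  mv /usr/syno/bin/synodiskport /usr/syno/bin/synodiskport.orig
  # install the wrapper above as the new synodiskport,
  # with /path/to/old/synodiskport pointing at synodiskport.orig
  cp /tmp/synodiskport-wrapper.sh /usr/syno/bin/synodiskport
  chmod 755 /usr/syno/bin/synodiskport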

After refreshing the page, the disks are now detected. But after uploading the .pat file and attempting the installation, the browser's get_install_progress.cgi request reports an error:

  {
      "success": false,
      "data": {},
      "errinfo": {
          "sec": "error",
          "key": "error_mkfs",
          "line": 35
      }
  }

Let's look at the related logs:

  messages:Apr 12 2227 install.cgi: ninstaller.c:1167 SYSTEM_NOT_INSTALLED: Raid but md0 not exist
  messages:Apr 12 2227 install.cgi: ninstaller.c:1235 SYSTEM_NOT_INSTALLED: Not SynoParitition and Not Recoverable
  messages:Apr 12 2227 install.cgi: ninstaller.c:1142(FillUpgradeVolumeInfo): gszUpgradeVolDev = /dev/md0
  messages:Apr 12 2227 install.cgi: ninstaller.c:1143(FillUpgradeVolumeInfo): gszUpgradeVolMnt = /tmpData
  messages:Apr 12 2227 install.cgi: ninstaller.c:1245 gblSupportRaid: 1, gSysStatus: 3, gblCreateDataVol: 0, gblSystemRecoverable: 0
  messages:Apr 12 2227 install.cgi: ninstaller.c:1699 CreateDataVol=[0]
  messages:Apr 12 2227 install.cgi: ninstaller.c:158 umount partition /tmpData
  messages:Apr 12 2227 install.cgi: ninstaller.c:162 Fail to execute [/bin/umount -f /tmpData > /dev/null 2>&1]
  messages:Apr 12 2227 install.cgi: ninstaller.c:1710 installer cmd=[/usr/syno/sbin/installer.sh -r >> /tmp/installer_sh.log 2>&1]
  messages:Apr 12 2227 install.cgi: ninstaller.c:1715 szCmd=[/usr/syno/sbin/installer.sh -r >> /tmp/installer_sh.log 2>&1], retv=[1]
  messages:Apr 12 2227 install.cgi: ninstaller.c:1739 retv=[1]
  messages:Apr 12 2227 install.cgi: ninstaller.c:1740(ErrFHOSTDoFdiskFormat) retv=[1]

Next, /tmp/installer_sh.log:

  Check new disk...
  umount: can't unmount /volume1: Invalid argument
  raidtool destroy 0
  Not found /dev/md0
  raidtool destroy 1
  Not found /dev/md1
  [CREATE] Raidtool initsys
  [CREATE][failed] Raidtool initsys

/usr/syno/sbin/installer.sh is failing; reading the script, the error comes from running /sbin/raidtool initsys.
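
The failing step can also be run by hand from the ramdisk shell, which is quicker than going through the whole installer each time. Note that this is the same (re)initialisation the installer performs, so only do it on a disposable test VM:

  /sbin/raidtool initsys
  echo "raidtool initsys exited with $?"
  tail /tmp/installer_sh.log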

Let's take a closer look at the /usr/syno/bin/synodiskport and /sbin/raidtool commands:

  ls -alh /usr/syno/bin/ /sbin/raidtool | grep -E '(synodisk|raidtool)'
  lrwxrwxrwx 1 root root 19 Apr 12 22:48 /sbin/raidtool -> /usr/syno/bin/scemd
  -rwx------ 1 root root 180 Apr 12 22:17 synodiskport

Both of these resolve to scemd. Anyone who has spent time on Synology boot loaders will recognise scemd: much like busybox, it is a multi-call binary that decides what to do based on the command it was invoked as.
So the next step is to reverse engineer the scemd binary and look at the actual disk-discovery logic.

3. Reverse engineering

We need to reverse the logic in scemd that searches for installable disks. I captured a lot of output with bpftrace and used it to annotate the code bit by bit. The bpftrace script used:

  uretprobe:"/usr/syno/bin/synodiskport":0xABCCC {
      printf("is_support_nvme return: %d\n", retval);
  }
  uretprobe:"/usr/syno/bin/synodiskport":0x5AA09 {
      printf("is_support_local_only_dev return: %d\n", retval);
  }
  uretprobe:"/usr/syno/bin/synodiskport":0x6FFF0 {
      printf("enumerate_disks return: %d\n", retval);
  }
  uretprobe:"/usr/syno/bin/synodiskport":0x5A7BF {
      printf("support_dual_head return: %d\n", retval);
  }
  uprobe:"/usr/syno/bin/synodiskport":0x591CF {
      printf("list_insert, string: %s\n", str(arg1));
  }
  uprobe:"/usr/syno/bin/synodiskport":0x6FC50 {
      printf("enumerate_disks_with_type, type: %d\n", arg0);
  }
  uretprobe:"/usr/syno/bin/synodiskport":0x6FC50 {
      printf("enumerate_disks_with_type, return: %d\n", retval);
  }
  uprobe:"/usr/syno/bin/synodiskport":0x6F580 {
      printf("SynoDiskPathGlobAndPortCheck, disk type: %d\n", *(uint64 *)arg1);
  }
  uretprobe:"/usr/syno/bin/synodiskport":0x6F580 {
      printf("SynoDiskPathGlobAndPortCheck, return: %d\n", retval);
  }
  uprobe:"/usr/syno/bin/synodiskport":0x75390 {
      printf("disk_maybe_blocked, disk name: %s\n", str(arg0));
  }
  uretprobe:"/usr/syno/bin/synodiskport":0x75390 {
      printf("disk_maybe_blocked, return: %d\n", retval);
  }
  uretprobe:"/usr/syno/bin/synodiskport":0x70A70 {
      printf("get_disk_type_by_name, return: %d\n", retval);
  }
  uprobe:"/usr/syno/bin/synodiskport":0xF370 {
      printf("strstr, string: %s, sub str: %s\n", str(arg0), str(arg1));
  }
  uretprobe:"/usr/syno/bin/synodiskport":0xF370 {
      printf("strstr, return: %s\n", str(retval));
  }
  uprobe:"/usr/syno/bin/synodiskport":0x94FD0 {
      printf("nvme_dev_port_check, name: %s\n", str(arg0));
  }
  uretprobe:"/usr/syno/bin/synodiskport":0x94FD0 {
      printf("nvme_dev_port_check, return: %d\n", retval);
  }
  uprobe:"/usr/syno/bin/synodiskport":0x98900 {
      printf("sata_dev_port_check, name: %s\n", str(arg0));
  }
  uretprobe:"/usr/syno/bin/synodiskport":0x98900 {
      printf("sata_dev_port_check, return: %d\n", retval);
  }
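
The offsets above are of course specific to this particular scemd build. To rerun the trace, save the probes to a file and trigger the code paths from another shell (the file name is arbitrary, and bpftrace plus a uprobe-capable kernel must be available on the box):

  # terminal 1: attach the probes
  bpftrace ./trace_synodiskport.bt

  # terminal 2: exercise the traced code paths
  /usr/syno/bin/synodiskport -installable_disk_list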

I'll skip the blow-by-blow debugging; a lot of time simply went into reading the pseudocode.

Here is the key pseudocode from IDA:

  1. __int64 __fastcall SYNODiskPathGlobAndPortCheck(
  2. __int64 glob_list,
  3. _DWORD *disk_type,
  4. int check_type,
  5. _QWORD *disk_list)
  6. {
  7. bool should_check_type; // r14
  8. int index; // ebp
  9. char **gl_pathv; // r15
  10. const char *v7; // rax
  11. const char *v8; // r13
  12. int v9; // eax
  13. __int64 v10; // r13
  14. char *v11; // rax
  15. __int64 disk_name; // r15
  16. int tmp_disk_type; // eax
  17. unsigned int v14; // ebx
  18. glob64_t pglob; // [rsp+10h] [rbp-88h] BYREF
  19. unsigned __int64 v18; // [rsp+58h] [rbp-40h]
  20. v18 = __readfsqword(0x28u);
  21. memset(&pglob, 0, sizeof(pglob));
  22. if ( check_type <= 0 && disk_type
  23. || ((unsigned __int8)check_type & (disk_type == 0LL)) != 0
  24. || !disk_list
  25. || !*disk_list
  26. || !glob_list )
  27. {
  28. v14 = -1;
  29. __syslog_chk(3LL, 1LL, "%s:%d Bad parameter", "external/external_disk_port_enum.c", 42LL);
  30. gl_pathv = pglob.gl_pathv;
  31. goto LABEL_29;
  32. }
  33. should_check_type = disk_type != 0LL && check_type > 0;
  34. if ( *(int *)(glob_list + 4) <= 0 )
  35. return 0;
  36. index = 0;
  37. while ( 1 )
  38. {
  39. v7 = (const char *)list_get(glob_list, index);
  40. memset(&pglob, 0, sizeof(pglob));
  41. v8 = v7;
  42. // a return value of 0 means a match was found
  43. v9 = glob64(v7, 8, 0LL, &pglob);
  44. if ( v9 )
  45. break;
  46. gl_pathv = pglob.gl_pathv;
  47. if ( pglob.gl_pathc )
  48. {
  49. v10 = 0LL;
  50. while ( 2 )
  51. {
  52. v11 = strrchr(gl_pathv[v10], '/'); // find the last '/'
  53. if ( !v11 )
  54. goto LABEL_21;
  55. disk_name = (__int64)(v11 + 1);
  56. tmp_disk_type = get_disk_type_by_name(v11 + 1);
  57. if ( should_check_type )
  58. {
  59. if ( tmp_disk_type == *disk_type )
  60. goto LABEL_19;
  61. }
  62. else if ( tmp_disk_type != 10 )
  63. {
  64. LABEL_19:
  65. list_insert((__int64)disk_list, disk_name);
  66. }
  67. gl_pathv = pglob.gl_pathv;
  68. LABEL_21:
  69. if ( pglob.gl_pathc <= ++v10 )
  70. break;
  71. continue;
  72. }
  73. }
  74. LABEL_12:
  75. if ( gl_pathv )
  76. globfree64(&pglob);
  77. if ( *(_DWORD *)(glob_list + 4) <= ++index )
  78. {
  79. gl_pathv = pglob.gl_pathv;
  80. v14 = 0;
  81. goto LABEL_29;
  82. }
  83. }
  84. if ( v9 == 2 )
  85. {
  86. __syslog_chk(3LL, 1LL, "%s:%d read error :%s", "external/external_disk_port_enum.c", 58LL, v8);
  87. goto LABEL_27;
  88. }
  89. if ( v9 != 1 )
  90. {
  91. gl_pathv = pglob.gl_pathv;
  92. if ( v9 != 3 )
  93. goto LABEL_28;
  94. goto LABEL_12;
  95. }
  96. __syslog_chk(
  97. 3LL,
  98. 1LL,
  99. "%s:%d out of memory to alloc glob function when looking for:%s",
  100. "external/external_disk_port_enum.c",
  101. 60LL,
  102. v8);
  103. LABEL_27:
  104. gl_pathv = pglob.gl_pathv;
  105. LABEL_28:
  106. v14 = -1;
  107. LABEL_29:
  108. if ( gl_pathv )
  109. globfree64(&pglob);
  110. return v14;
  111. }

The logic in SYNODiskPathGlobAndPortCheck globs for disks and then checks each one against the requested disk type, only adding it to the list when the type matches (the tmp_disk_type == *disk_type comparison on line 59). The default SATA disk type is 1, and that type check is what has to go so the installable disks can be found. After inverting that comparison, rebuilding the loader, and booting again, the install page finds the disks, the installation completes successfully, and the system moves on to the reboot.
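
For reference, inverting a comparison like this usually boils down to flipping a single conditional-jump opcode in the binary (e.g. JZ 0x74 becomes JNZ 0x75). A minimal sketch with dd; the file offset and byte here are purely illustrative and must be taken from your own disassembly:

  cp scemd scemd.patched
  # hypothetical offset/byte: overwrite the jump that guards the type check
  printf '\x75' | dd of=scemd.patched bs=1 seek=$((0x6F6AB)) conv=notrunc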

4. Still booting into the ramdisk after installation

The installation succeeds, but after the reboot the box still comes up in ramdisk mode. The logs show the following error:

  System volume is assembled with SSD Cache only, please remove SSD Cache and then reboot

This check is done in /linuxrc.syno.impl and means the system volume must not be assembled from SSD-cache disks only. The code:

  SupportSSDCache=`/bin/get_key_value /etc.defaults/synoinfo.conf support_ssd_cache`
  if [ "$SupportDualhead" != "yes" ] && [ "${SupportSSDCache}" = "yes" ] && [ -d "/sys/block/md0" ]; then
      WithInternal=0
      has_md0disk=0
      # check if any disk is INTERNAL, otherwise return fail
      for path in /sys/block/md0/md/dev-*; do
          [ -e "$path" ] || continue
          disk="$(basename "$path" | cut -c 5-)"
          [ -z "$disk" ] && continue
          has_md0disk=1
          PortType=`/usr/syno/bin/synodiskport -portcheck "${disk}"`
          if [ "${PortType}" = "SAS" ] || [ "${PortType}" = "SATA" ] || [ "${PortType}" = "SYS" ]; then
              WithInternal=1
          fi
      done
      # has raid0 and not composed by internal disk
      if [ "$has_md0disk" = 1 ] && [ ${WithInternal} -eq 0 ]; then
          echo "System volume is assembled with SSD Cache only, please remove SSD Cache and then reboot" >> /var/log/messages
          Exit 8 "System volume is assembled with SSD Cache only"
      fi
  fi
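
On a box whose md0 is built only from NVMe members, the loop never sees a SAS/SATA/SYS port type, so WithInternal stays 0. This is easy to confirm manually (nvme0n1 below stands for whichever member disk md0 actually reports):

  ls /sys/block/md0/md/ | grep '^dev-'
  /usr/syno/bin/synodiskport -portcheck nvme0n1   # reports something other than SAS/SATA/SYS for an NVMe disk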

That makes the fix easy: while building the loader, just patch it away with sed -i 's/WithInternal=0/WithInternal=1/' ${RAMDISK_PATH}/linuxrc.syno.impl

5. Booting into the system

After rebuilding the loader, the system finally boots normally, but a new problem appears: the NVMe disks are nowhere to be found in Storage Manager, so the exploration continues. After adding a SATA disk everything works, so the disk enumeration probably still has trouble when there are no SATA disks. The logs show the corresponding error:

  2023-04-12T2225+08:00 TestSA6400 scemd[17874]: disk/disk_info_enum.c:297 cann't find enumlist_det, try to diskInfoEnum failed
  2023-04-12T2225+08:00 TestSA6400 scemd[17874]: disk/shared_disk_info_enum.c:84 Failed to allocate list in SharedDiskInfoEnum, errno=0x900.

SharedDiskInfoEnum looks like an actual function name, but it has certainly been stripped from the binary, so we search for the string keywords instead.

The scemd in an installed system is much smaller than the one in the ramdisk because it is dynamically linked. After searching around, the relevant functions turn out to live in libhwcontrol.so.1. Applying the same change as in scemd doubles the disk count; further analysis suggests that disk type 1 now returns the 3 NVMe disks and disk type 7 returns the same disks again. So here we instead skip the check that no disks of type 1 were found, i.e. the v1 < 0 test on line 22 of the pseudocode below.
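
Unlike the stripped scemd, a shared library still exports its public symbols, so both the string keyword and the function can be located quickly; a sketch, run on a workstation against a copy of the library pulled from the box (on the test system it sits under /usr/lib, which may differ between DSM versions):

  # the syslog string seen above points into disk_info_enum.c
  strings -t x libhwcontrol.so.1 | grep disk_info_enum.c
  # the enumeration function itself is an exported symbol
  nm -D --defined-only libhwcontrol.so.1 | grep -i diskinfoenum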

The pseudocode in IDA:

  1. __int64 __fastcall SLIBDiskInfoEnumToCache(__int64 a1)
  2. {
  3. int v1; // r12d
  4. int v2; // r13d
  5. int v3; // r14d
  6. int v4; // r15d
  7. int v5; // eax
  8. int v6; // ebp
  9. FILE *v7; // rbx
  10. _QWORD *v8; // rbp
  11. unsigned int v9; // ebx
  12. int v11; // [rsp+Ch] [rbp-4Ch]
  13. void *ptr[9]; // [rsp+10h] [rbp-48h] BYREF
  14. ptr[1] = (void *)__readfsqword(0x28u);
  15. ptr[0] = 0LL;
  16. v1 = enumerate_disks_by_type((__int64)ptr, 1LL, a1);
  17. v2 = enumerate_disks_by_type((__int64)ptr, 3LL, a1);
  18. v3 = enumerate_disks_by_type((__int64)ptr, 7LL, a1);
  19. v4 = enumerate_disks_by_type((__int64)ptr, 11LL, a1);
  20. v11 = enumerate_disks_by_type((__int64)ptr, 4LL, a1);
  21. v5 = enumerate_disks_by_type((__int64)ptr, 2LL, a1);
  22. if ( v1 < 0 || v2 < 0 || v3 < 0 || v4 < 0 || v11 < 0 || (v6 = v5, v5 < 0) )
  23. {
  24. v9 = -1;
  25. }
  26. else
  27. {
  28. v7 = fopen64("/tmp/enumlist_det.tmp", "wb");
  29. if ( v7 )
  30. {
  31. v8 = ptr[0];
  32. if ( ptr[0] )
  33. {
  34. do
  35. {
  36. if ( !*v8 )
  37. break;
  38. sub_40AE0(v7);
  39. v8 = (_QWORD *)v8[1];
  40. }
  41. while ( v8 );
  42. }
  43. fclose(v7);
  44. v9 = rename("/tmp/enumlist_det.tmp", "/tmp/enumlist_det");
  45. if ( v9 )
  46. {
  47. v9 = 0;
  48. __syslog_chk(
  49. 4LL,
  50. 1LL,
  51. "%s:%d Failed to rename %s into %s.",
  52. "disk/disk_info_enum.c",
  53. 456LL,
  54. "/tmp/enumlist_det.tmp",
  55. "/tmp/enumlist_det");
  56. }
  57. }
  58. else
  59. {
  60. v9 = v6 + v11 + v4 + v3 + v1 + v2;
  61. __syslog_chk(3LL, 1LL, "%s:%d fail to save enumlist, device is busy....\n", "disk/disk_info_enum.c", 441LL);
  62. }
  63. }
  64. DiskInfoEnumFree(ptr[0]);
  65. return v9;
  66. }

After updating libhwcontrol.so.1 once more and rebooting, the disks show up correctly in Storage Manager.
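
Swapping the patched library in is straightforward; a sketch, assuming the library sits at /usr/lib/libhwcontrol.so.1 as on the test system (keep a backup, since a broken copy breaks disk enumeration for everything that uses the library):

  cp /usr/lib/libhwcontrol.so.1 /usr/lib/libhwcontrol.so.1.bak
  cp /tmp/libhwcontrol.so.1.patched /usr/lib/libhwcontrol.so.1   # hypothetical location of the patched copy
  reboot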

6. No RAID types on the Create Storage Pool page

sa6400-nvme-install-no-raid-type.png

Back in Storage Manager, choosing to create a storage pool shows an empty RAID type list. Since the individual disks are already visible at this point, this is most likely a front-end issue. After some searching, the following code turns up in storage_panel.js:

  1. isCacheTray() {
  2. return "cache" === this.portType
  3. }
  4. raidTypeStore() {
  5. if (SYNO.SDS.StorageUtils.isSingleBay() && (!this.isNeedSelectSource || "internal" === this.selectDiskSource))
  6. return [{
  7. label: this.T("volume", "volume_type_basic"),
  8. value: "basic"
  9. }];
  10. let e = []
  11. , t = 0
  12. , s = {}
  13. , i = (e,t,s)=>{
  14. this.raidTypeSupportTable[s].support && e && t.push({
  15. label: this.raidTypeSupportTable[s].label,
  16. value: s
  17. })
  18. }
  19. ;
  20. for (let e of this.disks) {
  21. if ("disabled" === e.portType || e.isCacheTray())
  22. continue;
  23. let t, i = e.container;
  24. if ("number" != typeof s[i.order]) {
  25. if ("ebox" === i.type) {
  26. if (SYNO.SDS.StorageUtils.supportSas && this.env.AHAInfo)
  27. t = this.env.AHAInfo.enclosures[i.order - 1].max_disk;
  28. else if (t = SYNO.SDS.StorageUtils.GetEboxBayNumber(i.str),
  29. 0 === t)
  30. continue
  31. } else
  32. t = +this.D("maxdisks", "1");
  33. s[i.order] = t
  34. }
  35. }
  36. for (let[e,i] of Object.entries(s))
  37. SYNO.SDS.StorageUtils.isSupportRaidCross() ? t += i : t = Math.max(t, i);
  38. return SYNO.SDS.StorageUtils.supportRaidGroup || !SYNO.SDS.StorageUtils.isSupportSHR() || "pool_type_multi_volume" !== this.poolType || this.S("ha_running") || (i(1 <= t, e, "shr"),
  39. i(4 <= t, e, "shr_2")),
  40. i(2 <= t, e, "raid_1"),
  41. i(3 <= t, e, "raid_5"),
  42. i(4 <= t, e, "raid_6"),
  43. i(4 <= t, e, "raid_10"),
  44. i(1 <= t, e, "basic"),
  45. i(1 <= t, e, "raid_linear"),
  46. i(2 <= t, e, "raid_0"),
  47. i(SYNO.SDS.StorageUtils.supportDiffRaid && 3 <= t, e, "raid_f1"),
  48. e
  49. }

The check on line 21, if ("disabled" === e.portType || e.isCacheTray()), skips every SSD-cache tray. Since we have no SATA disks, everything gets skipped and there are no disks left to derive the RAID types from, so we simply drop the e.isCacheTray() part of the check.
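
Because the shipped file is minified, the simplest way to apply this is a textual replacement. A sketch; the exact location of storage_panel.js varies by DSM version, so it is located with find first:

  js=$(find / -name storage_panel.js 2>/dev/null | head -n 1)
  cp "$js" "$js.bak"
  # same replacement as listed in the summary below
  sed -i 's/e\.portType||e\.isCacheTray()/e.portType/g' "$js"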

After a hard refresh of the browser cache, NVMe storage pools can finally be created freely.

Screenshots of the result:

sa6400-nvme-install-raid-type.png

sa6400-nvme-install-storage-manager.png

Summary of the changes

Changes to the ramdisk

  1. Replace scemd in the ramdisk to skip the disk type check; the function can be found via the keyword: external_disk_port_enum
  2. Patch /linuxrc.syno.impl: sed -i 's/WithInternal=0/WithInternal=1/' ${RAMDISK_PATH}/linuxrc.syno.impl

Changes after the system is installed

  1. Replace libhwcontrol.so.1 in the installed system to skip the check for SATA disks; the relevant function is SLIBDiskInfoEnumToCache
  2. Patch storage_panel.js: e.portType||e.isCacheTray() -> e.portType, so that the correct RAID types are loaded for the disks