Accelerating SATA Disks with SSDs: the flashcache Approach

Overview

SATA disks are usually slow, too slow for many applications, while all-SSD storage remains expensive. Combining the two kinds of disks is a hot topic in the industry, and several mature solutions exist.

A previous article discussed caching strategies in general; here we pick the widely used flashcache to accelerate SATA disks with an SSD.

flashcache

Source: https://github.com/facebookarchive/flashcache
Reference: http://www.yangguanjun.com/2018/01/30/lvm-with-cache/

Installing on Ubuntu

The test machine runs "Ubuntu 16.04.2 LTS". Its kernel is recent enough that building and installing flashcache is straightforward.

System information

# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial
# uname -r
4.4.0-62-generic

Build and install

# apt install -y git make gcc
# git clone git@github.com:facebookarchive/flashcache.git
# cd flashcache/
# make
# make install
# modprobe flashcache

# cat /proc/flashcache/flashcache_version
Flashcache Version : flashcache-3.1.1
git commit: 1.0-248-g437afbfe233e

# flashcache_
flashcache_create flashcache_destroy flashcache_load flashcache_setioctl

Installing on CentOS

The test machine runs "CentOS Linux release 7.3.1611". Its kernel is older, and installing flashcache ran into a number of problems.

System information

# lsb_release -a
...
Distributor ID: CentOS
Description: CentOS Linux release 7.3.1611 (Core)
Release: 7.3.1611
Codename: Core

# uname -a
Linux xs732 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Build and install

# yum install -y git make gcc 
# git clone git@github.com:facebookarchive/flashcache.git
# cd flashcache
# make
...
/root/yangguanjun/flashcache/src/flashcache_conf.c:1230:11: error: too many arguments to function ‘wait_on_bit_lock’
flashcache_wait_schedule, TASK_UNINTERRUPTIBLE);
...

This is a known issue: https://github.com/facebookarchive/flashcache/issues/191

Upgrading the kernel

We first tried upgrading the CentOS kernel with yum, but building flashcache hit the same error:

# yum update
# uname -a
Linux server0 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

With no luck there, we decided to upgrade the CentOS kernel to the latest mainline release, 4.15.

Reference: http://www.jiagoumi.com/work/1167.html

Import the elrepo GPG key, then install the elrepo yum repository:
# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

List the available kernel packages:
# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

kernel-lt.x86_64 4.4.115-1.el7.elrepo elrepo-kernel
kernel-lt-devel.x86_64 4.4.115-1.el7.elrepo elrepo-kernel
...
kernel-ml.x86_64 4.15.1-1.el7.elrepo elrepo-kernel
kernel-ml-devel.x86_64 4.15.1-1.el7.elrepo elrepo-kernel
...

Install the latest mainline stable kernel:
# yum -y --enablerepo=elrepo-kernel install kernel-ml.x86_64 kernel-ml-devel.x86_64

Check the kernel boot-entry order:
# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
CentOS Linux (4.15.1-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-514.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-836c8e52784b401db086b015b56e2fba) 7 (Core)

Change the default boot kernel:
# vim /etc/default/grub
...
GRUB_DEFAULT=saved    # change this to GRUB_DEFAULT=0
...

Regenerate the grub configuration:
# grub2-mkconfig -o /boot/grub2/grub.cfg

Reboot the machine:
# reboot -nf
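The GRUB_DEFAULT edit above can also be scripted with sed. A minimal sketch that operates on a sample copy rather than the real /etc/default/grub, so it is safe to try:

```shell
# Create a sample grub defaults file, then force the first menu entry (index 0).
# Point the sed at /etc/default/grub for real use.
printf 'GRUB_TIMEOUT=5\nGRUB_DEFAULT=saved\n' > grub.sample
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=0/' grub.sample
grep '^GRUB_DEFAULT=' grub.sample    # prints: GRUB_DEFAULT=0
```

Remember to run grub2-mkconfig afterwards, as above, or the change has no effect.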
# uname -r
4.15.1-1.el7.elrepo.x86_64

# make
make -C src KERNEL_TREE=/usr/src/kernels/4.15.1-1.el7.elrepo.x86_64 PWD=/root/flashcache/src all
...
/root/flashcache/src/flashcache_main.c: In function ‘dm_io_async_bvec_pl’:
/root/flashcache/src/flashcache_main.c:119:6: error: ‘struct dm_io_request’ has no member named ‘bi_rw’
iorq.bi_rw = rw;
^
/root/flashcache/src/flashcache_main.c: In function ‘dm_io_async_bvec’:
/root/flashcache/src/flashcache_main.c:143:6: error: ‘struct dm_io_request’ has no member named ‘bi_rw’
iorq.bi_rw = rw;
...

Disappointingly, even the latest 4.15 kernel cannot build flashcache.

The Ubuntu machine where the build succeeded runs kernel 4.4.0-generic, so we decided to try a 4.4 kernel on CentOS as well. The steps are similar to the above, with these differences:

Upgrade the kernel again, this time choosing the long-term kernel-lt* packages:
# yum -y --enablerepo=elrepo-kernel install kernel-lt.x86_64 kernel-lt-devel.x86_64

Then reboot the machine.

# uname -r
4.4.115-1.el7.elrepo.x86_64

Building and installing flashcache then works without a hitch:

# cd flashcache
# make
# make install
# flashcache_
flashcache_create flashcache_destroy flashcache_load flashcache_setioctl

Using flashcache

Disk information

# fdisk -l | grep dev
...
Disk /dev/vdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Disk /dev/vdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
Disk /dev/vdd: 53.7 GB, 53687091200 bytes, 104857600 sectors

We use three disks here: vdb, vdc, and vdd.

vdb and vdc are capacity (SATA-class) disks, while vdd is a performance (SSD-class) disk; the experiment uses vdd to accelerate vdb and vdc through flashcache.

Usage steps

Several commands ship with flashcache:

# flashcache_create
Usage: flashcache_create [-v] [-p back|thru|around] [-w] [-b block size] [-m md block size] [-s cache size] [-a associativity] cachedev ssd_devname disk_devname
Usage : flashcache_create Cache Mode back|thru|around is required argument
Usage : flashcache_create Default units for -b, -m, -s are sectors, or specify in k/M/G. Default associativity is 512.

# flashcache_destroy
Usage: flashcache_destroy ssd_devname

# flashcache_load
Usage: flashcache_load ssd_devname [cachedev]

# flashcache_setioctl
Usage: flashcache_setioctl (-c | -a | -r) (-b pid |-w pid) ssd_devname
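As the flashcache_create usage says, the -b, -m, and -s arguments default to 512-byte sectors but also accept k/M/G suffixes. A quick sanity check of the sector arithmetic for a 25 GiB cache (the size of our SSD partitions below):

```shell
# 25 GiB expressed in 512-byte sectors.
GIB=25
SECTORS=$((GIB * 1024 * 1024 * 1024 / 512))
echo "$SECTORS"    # prints: 52428800
# Equivalent -s spellings (not executed here; devices are the ones from this post):
#   flashcache_create -p back -s ${SECTORS} fcache-dev1 /dev/vdd1 /dev/vdb1
#   flashcache_create -p back -s ${GIB}G    fcache-dev1 /dev/vdd1 /dev/vdb1
```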

flashcache implements its device mapping and caching on top of the device mapper, so dmsetup can also operate on flashcache devices:

# dmsetup
Usage:

dmsetup
[--version] [-h|--help [-c|-C|--columns]]
[-v|--verbose [-v|--verbose ...]] [-f|--force]
[--checks] [--manglename {none|hex|auto}]
[-r|--readonly] [--noopencount] [--noflush] [--nolockfs] [--inactive]
[--udevcookie <cookie>] [--noudevrules] [--noudevsync] [--verifyudev]
[-y|--yes] [--readahead {[+]<sectors>|auto|none}] [--retry]
[-c|-C|--columns] [-o <fields>] [-O|--sort <sort_fields>]
[-S|--select <selection>] [--nameprefixes] [--noheadings]
[--separator <separator>]

help [-c|-C|--columns]
create <dev_name>
[-j|--major <major> -m|--minor <minor>]
[-U|--uid <uid>] [-G|--gid <gid>] [-M|--mode <octal_mode>]
[-u|uuid <uuid>] [--addnodeonresume|--addnodeoncreate]
[--readahead {[+]<sectors>|auto|none}]
[-n|--notable|--table {<table>|<table_file>}]
remove [--deferred] [-f|--force] [--retry] <device>
remove_all [-f|--force]
suspend [--noflush] [--nolockfs] <device>
resume [--noflush] [--nolockfs] <device>
[--addnodeonresume|--addnodeoncreate]
[--readahead {[+]<sectors>|auto|none}]
load <device> [<table>|<table_file>]
clear <device>
reload <device> [<table>|<table_file>]
wipe_table [-f|--force] [--noflush] [--nolockfs] <device>
rename <device> [--setuuid] <new_name_or_uuid>
message <device> <sector> <message>
ls [--target <target_type>] [--exec <command>] [-o <options>] [--tree]
info [<device>]
deps [-o <options>] [<device>]
stats <command> [<options>] [<devices>]
status [<device>] [--noflush] [--target <target_type>]
table [<device>] [--target <target_type>] [--showkeys]
wait <device> [<event_nr>] [--noflush]
mknodes [<device>]
mangle [<device>]
udevcreatecookie
udevreleasecookie [<cookie>]
udevflags <cookie>
udevcomplete <cookie>
udevcomplete_all [<age_in_minutes>]
udevcookies
targets
version
setgeometry <device> <cyl> <head> <sect> <start>
splitname <device> [<subsystem>]

<device> may be device name or -u <uuid> or -j <major> -m <minor>
<mangling_mode> is one of 'none', 'auto' and 'hex'.
<fields> are comma-separated. Use 'help -c' for list.
Table_file contents may be supplied on stdin.
Options are: devno, devname, blkdevname.
Tree specific options are: ascii, utf, vt100; compact, inverted, notrunc;
blkdevname, [no]device, active, open, rw and uuid.

Disk planning

Because flashcache pairs one ssd_device with one sata_device, we need to plan ahead: which SSDs will act as caches, and which SATA disks get a flashcache in front of them.

Here we have one SSD and two SATA disks, so we split the SSD into two partitions.

# fdisk /dev/vdd
...
# fdisk -l /dev/vdd
Disk /dev/vdd: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xa8f11a03

Device Boot Start End Blocks Id System
/dev/vdd1 2048 52430847 26214400 83 Linux
/dev/vdd2 52430848 104857599 26213376 83 Linux
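As a quick sanity check, the sector numbers in that listing line up exactly (the values below are copied from the fdisk output):

```shell
# Verify the vdd partition layout printed by fdisk above.
TOTAL=104857600                       # total sectors on /dev/vdd
P1_START=2048;     P1_END=52430847    # /dev/vdd1
P2_START=52430848; P2_END=104857599   # /dev/vdd2
test $((P1_END + 1)) -eq "$P2_START" && echo "partitions are contiguous"
echo "vdd1 1K-blocks: $(( (P1_END - P1_START + 1) / 2 ))"   # 26214400, as reported
echo "vdd2 1K-blocks: $(( (P2_END - P2_START + 1) / 2 ))"   # 26213376, as reported
```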

Cache modes

flashcache supports three cache modes:

  • Writeback : writes land in the cache first and the block's metadata is marked dirty; the data is written to the backing device later, not immediately
  • Writethrough : writes go to the cache and to the backing device at the same time; the write only completes once the backing device write finishes
  • Writearound : writes bypass the cache and go straight to the backing device, so the SSD serves as a read cache only

The differences between the three cache modes:

[figure: flashcache-cachemodes]
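The three modes map to flashcache_create's -p argument as back, thru, and around. A tiny helper makes the mapping explicit (the function name is ours, purely for illustration):

```shell
# Map a human-readable cache mode name to flashcache_create's -p flag.
mode_flag() {
  case "$1" in
    writeback)    echo back   ;;
    writethrough) echo thru   ;;
    writearound)  echo around ;;
    *)            return 1    ;;
  esac
}
mode_flag writethrough    # prints: thru
# e.g. flashcache_create -p "$(mode_flag writeback)" fcache-dev1 /dev/vdd1 /dev/vdb1
```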

Creating flashcache devices

Backing device is a partition:
# flashcache_create -p back fcache-dev1 /dev/vdd1 /dev/vdb1
cachedev fcache-dev1, ssd_devname /dev/vdd1, disk_devname /dev/vdb1 cache mode WRITE_BACK
block_size 8, md_block_size 8, cache_size 0
Flashcache metadata will use 137MB of your 3951MB main memory

# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
vdb 253:16 0 100G 0 disk
└─vdb1 253:17 0 100G 0 part
└─fcache-dev1 252:0 0 100G 0 dm
vdd 253:48 0 50G 0 disk
├─vdd1 253:49 0 25G 0 part
│ └─fcache-dev1 252:0 0 100G 0 dm
└─vdd2 253:50 0 25G 0 part

Backing device is a whole disk:
# flashcache_create -p back fcache-dev2 /dev/vdd2 /dev/vdc
cachedev fcache-dev2, ssd_devname /dev/vdd2, disk_devname /dev/vdc cache mode WRITE_BACK
block_size 8, md_block_size 8, cache_size 0
Flashcache metadata will use 137MB of your 3951MB main memory

# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
vdb 253:16 0 100G 0 disk
└─vdb1 253:17 0 100G 0 part
└─fcache-dev1 252:0 0 100G 0 dm
vdc 253:32 0 100G 0 disk
└─fcache-dev2 252:1 0 100G 0 dm
vdd 253:48 0 50G 0 disk
├─vdd1 253:49 0 25G 0 part
│ └─fcache-dev1 252:0 0 100G 0 dm
└─vdd2 253:50 0 25G 0 part
└─fcache-dev2 252:1 0 100G 0 dm

Inspecting flashcache devices

# ll /dev/mapper/fcache-dev*
lrwxrwxrwx. 1 root root 7 Feb 11 11:14 /dev/mapper/fcache-dev1 -> ../dm-0
lrwxrwxrwx. 1 root root 7 Feb 11 11:15 /dev/mapper/fcache-dev2 -> ../dm-1

# dmsetup table
fcache-dev2: 0 209715200 flashcache conf:
ssd dev (/dev/vdd2), disk dev (/dev/vdc) cache mode(WRITE_BACK)
capacity(25498M), associativity(512), data block size(4K) metadata block size(4096b)
disk assoc(0K)
skip sequential thresh(0K)
total blocks(6527488), cached blocks(259), cache percent(0)
dirty blocks(0), dirty percent(0)
nr_queued(0)
Size Hist: 4096:1497
fcache-dev1: 0 209713152 flashcache conf:
ssd dev (/dev/vdd1), disk dev (/dev/vdb1) cache mode(WRITE_BACK)
capacity(25498M), associativity(512), data block size(4K) metadata block size(4096b)
disk assoc(0K)
skip sequential thresh(0K)
total blocks(6527488), cached blocks(259), cache percent(0)
dirty blocks(0), dirty percent(0)
nr_queued(0)
Size Hist: 4096:1497

Using a flashcache device

Once created, a flashcache device can be used like any ordinary block device:

# mkfs.ext4 /dev/mapper/fcache-dev1
# mount /dev/mapper/fcache-dev1 /mnt/
# mount | grep mnt
/dev/mapper/fcache-dev1 on /mnt type ext4 (rw,relatime,seclabel,data=ordered)
# umount /mnt/
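One caveat before adding the new filesystem to /etc/fstab: the /dev/mapper/fcache-dev1 node only exists after the flashcache module and mapping have been loaded (see the reboot section below), so an unconditional fstab entry can hang the boot. A sketch using the standard nofail mount option:

```
# /etc/fstab sketch; nofail lets the boot continue if the mapping is absent.
/dev/mapper/fcache-dev1  /mnt  ext4  defaults,nofail  0 0
```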

Checking the flashcache cache status

After the device has been in use for a while, the cache statistics can be inspected as follows and used as a basis for tuning the flashcache policy:

# dmsetup status fcache-dev1
0 209713152 flashcache stats:
reads(1063), writes(422824)
read hits(786), read hit percent(73)
write hits(291) write hit percent(0)
dirty write hits(44) dirty write hit percent(0)
replacement(0), write replacement(7392)
write invalidates(0), read invalidates(2)
pending enqueues(2), pending inval(2)
metadata dirties(422499), metadata cleans(112914)
metadata batch(531237) metadata ssd writes(4432)
cleanings(112914) fallow cleanings(0)
no room(31) front merge(3042) back merge(109407)
force_clean_block(0)
disk reads(277), disk writes(112945) ssd reads(113700) ssd writes(427505)
uncached reads(2), uncached writes(31), uncached IO requeue(0)
disk read errors(0), disk write errors(0) ssd read errors(0) ssd write errors(0)
uncached sequential reads(0), uncached sequential writes(0)
pid_adds(0), pid_dels(0), pid_drops(0) pid_expiry(0)
lru hot blocks(3263744), lru warm blocks(3263744)
lru promotions(0), lru demotions(0)
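As a sanity check, the percentages reported above can be recomputed from the raw counters. Here the sample output is embedded as a string so the snippet is self-contained:

```shell
# First two stats lines, copied verbatim from the dmsetup status output above.
stats='reads(1063), writes(422824)
read hits(786), read hit percent(73)'
# Pull out the raw counters with sed.
reads=$(printf '%s\n' "$stats" | sed -n 's/^reads(\([0-9]*\)).*/\1/p')
hits=$(printf '%s\n' "$stats"  | sed -n 's/^read hits(\([0-9]*\)).*/\1/p')
# Integer percentage, matching "read hit percent(73)".
echo "read hit percent: $(( hits * 100 / reads ))"    # prints: read hit percent: 73
```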

Other flashcache device information

# ls /proc/flashcache/vdd1+vdb1/
flashcache_errors flashcache_iosize_hist flashcache_pidlists flashcache_stats
# cat /proc/flashcache/vdd1+vdb1/flashcache_errors
disk_read_errors=0 disk_write_errors=0 ssd_read_errors=0 ssd_write_errors=0 memory_alloc_errors=0
# cat /proc/flashcache/vdd1+vdb1/flashcache_pidlists
Blacklist:
Whitelist:

The flashcache settings are also exposed through sysctl:

# sysctl -a | grep flashcache
dev.flashcache.vdd1+vdb1.cache_all = 1
dev.flashcache.vdd1+vdb1.clean_on_read_miss = 0
dev.flashcache.vdd1+vdb1.clean_on_write_miss = 0
dev.flashcache.vdd1+vdb1.dirty_thresh_pct = 20
dev.flashcache.vdd1+vdb1.do_pid_expiry = 0
dev.flashcache.vdd1+vdb1.do_sync = 0
dev.flashcache.vdd1+vdb1.fallow_clean_speed = 2
dev.flashcache.vdd1+vdb1.fallow_delay = 900
dev.flashcache.vdd1+vdb1.fast_remove = 0
dev.flashcache.vdd1+vdb1.io_latency_hist = 0
dev.flashcache.vdd1+vdb1.lru_hot_pct = 75
dev.flashcache.vdd1+vdb1.lru_promote_thresh = 2
dev.flashcache.vdd1+vdb1.max_clean_ios_set = 2
dev.flashcache.vdd1+vdb1.max_clean_ios_total = 4
dev.flashcache.vdd1+vdb1.max_pids = 100
dev.flashcache.vdd1+vdb1.new_style_write_merge = 0
dev.flashcache.vdd1+vdb1.pid_expiry_secs = 60
dev.flashcache.vdd1+vdb1.reclaim_policy = 0
dev.flashcache.vdd1+vdb1.skip_seq_thresh_kb = 0
dev.flashcache.vdd1+vdb1.stop_sync = 0
dev.flashcache.vdd1+vdb1.zero_stats = 0

# cd /proc/sys/dev/flashcache/vdd1+vdb1/
# ls
cache_all dirty_thresh_pct fallow_clean_speed io_latency_hist max_clean_ios_set new_style_write_merge skip_seq_thresh_kb
clean_on_read_miss do_pid_expiry fallow_delay lru_hot_pct max_clean_ios_total pid_expiry_secs stop_sync
clean_on_write_miss do_sync fast_remove lru_promote_thresh max_pids reclaim_policy zero_stats

These knobs live under /proc/sys, so they should be adjustable at runtime with sysctl -w (for example, sysctl -w dev.flashcache.vdd1+vdb1.dirty_thresh_pct=30), though we have not verified every parameter.

Removing a flashcache device

# dmsetup info /dev/dm-0
Name: fcache-dev1
State: ACTIVE
Read Ahead: 8192
Tables present: LIVE
Open count: 0
Event number: 0
Major, minor: 252, 0
Number of targets: 1

# dmsetup remove /dev/dm-0
# flashcache_destroy /dev/vdd1
flashcache_destroy: Destroying Flashcache found on /dev/vdd1. Any data will be lost !!

Problems encountered

Error removing a device

While testing, removing a device mapper device failed with "Device or resource busy":

# dmsetup remove /dev/dm-1
device-mapper: remove ioctl on fcache-dev2 failed: Device or resource busy
Command failed

# dmsetup info /dev/dm-1
Name: fcache-dev2
State: ACTIVE
Read Ahead: 256
Tables present: LIVE
Open count: 1
Event number: 0
Major, minor: 252, 1
Number of targets: 1

# dmsetup table
fcache-dev2: 0 209715200 flashcache conf:
ssd dev (/dev/vdd2), disk dev (/dev/vdc) cache mode(WRITE_BACK)
capacity(25498M), associativity(512), data block size(4K) metadata block size(4096b)
disk assoc(0K)
skip sequential thresh(0K)
total blocks(6527488), cached blocks(259), cache percent(0)
dirty blocks(0), dirty percent(0)
nr_queued(0)
Size Hist: 1024:2 4096:471740

# lsof /dev/vdd2
# lsof /dev/vdc
# lsof /dev/dm-1

We then tried remove with the --force flag:

# dmsetup remove /dev/dm-1 --force
device-mapper: remove ioctl on fcache-dev2 failed: Device or resource busy
Command failed
# dmsetup table
fcache-dev2: 0 209715200 error

# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
vdb 253:16 0 100G 0 disk
└─vdb1 253:17 0 100G 0 part
vdc 253:32 0 100G 0 disk
vdd 253:48 0 50G 0 disk
├─vdd1 253:49 0 25G 0 part
└─vdd2 253:50 0 25G 0 part
fcache-dev2 252:1 0 100G 0 dm ## why is it still here???

We tried many more commands afterwards but could not remove the device completely; only a reboot restored a clean state. (For a WRITE_BACK cache it may help to flush dirty blocks first by setting the do_sync sysctl listed above before removing, but we did not get to retry that.)

Creation error 1

Creating a flashcache device failed with "Valid Flashcache already exists on…":

# flashcache_create -p back fcache-dev1 /dev/vdd1 /dev/vdb1
cachedev fcache-dev1, ssd_devname /dev/vdd1, disk_devname /dev/vdb1 cache mode WRITE_BACK
block_size 8, md_block_size 8, cache_size 0
flashcache_create: Valid Flashcache already exists on /dev/vdd1
flashcache_create: Use flashcache_destroy first and then create again /dev/vdd1

This error is clear enough: as the message suggests, first destroy the existing flashcache data on the cache device.

# flashcache_destroy /dev/vdd1
flashcache_destroy: Destroying Flashcache found on /dev/vdd1. Any data will be lost !!

# flashcache_create -p back fcache-dev1 /dev/vdd1 /dev/vdb1
cachedev fcache-dev1, ssd_devname /dev/vdd1, disk_devname /dev/vdb1 cache mode WRITE_BACK
block_size 8, md_block_size 8, cache_size 0
Flashcache metadata will use 137MB of your 3951MB main memory

Creation error 2

After repeated operations during testing, creating a flashcache device failed with "Device or resource busy":

# flashcache_create -p back fcache-dev2 /dev/vdd2 /dev/vdc
cachedev fcache-dev2, ssd_devname /dev/vdd2, disk_devname /dev/vdc cache mode WRITE_BACK
block_size 8, md_block_size 8, cache_size 0
Flashcache metadata will use 137MB of your 3951MB main memory
device-mapper: reload ioctl on fcache-dev2 failed: Device or resource busy
Command failed
echo 0 209715200 flashcache /dev/vdc /dev/vdd2 fcache-dev2 1 2 8 0 512 0 0 8 | dmsetup create fcache-dev2 failed

We tried destroying any leftover flashcache data on the SSD, but creation still failed:

# flashcache_destroy /dev/vdd2
flashcache_destroy: No valid Flashcache found on /dev/vdd2

Only after another reboot did creating the flashcache work again, which is rather puzzling.

Devices disappear after a reboot

After rebooting, the previously created flashcache devices are gone.

This is because flashcache does not reload its devices automatically after a reboot; they must be loaded manually:

# modprobe flashcache
# flashcache_load
Usage: flashcache_load ssd_devname [cachedev]
git commit:
# flashcache_load /dev/vdd1

# dmsetup info
Name: fcache-dev1
State: ACTIVE
Read Ahead: 8192
Tables present: LIVE
Open count: 0
Event number: 0
Major, minor: 252, 0
Number of targets: 1

To have flashcache and its devices load automatically after a reboot, add the relevant commands to "/etc/rc.d/rc.local".

Adding a service under /etc/rc.d/init.d/ is another good option.
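A minimal rc.local sketch, using the device and cache names from this post (note that on CentOS 7, /etc/rc.d/rc.local must also be made executable with chmod +x before it runs at boot):

```shell
#!/bin/bash
# Load the flashcache module and re-attach the cache mappings at boot.
modprobe flashcache
flashcache_load /dev/vdd1 fcache-dev1
flashcache_load /dev/vdd2 fcache-dev2
```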
