系统压力测试工具-stress-ng
2020-10-18 19:49:36 阿炯
stress-ng是stress的加强版,功能更加完善。它可以充分测试Linux、BSD等类Unix服务器以获得高负载,并监控压力下CPU、内存、I/O和磁盘读写能力的状况,也可覆盖Socket处理能力、进程/线程创建和终止、上下文切换等。采用C语言开发并在GPLv2协议下授权。CentOS 7 的EPEL源包含2个压力测试工具,一个是标准的stress,另一个是其升级版stress-ng;后者完全兼容stress,并在此基础上增加了几百个参数。
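以CentOS 7与Debian/Ubuntu为例,可按下面的方式安装(假定软件源中的包名即为 stress 与 stress-ng,仅作参考):
CentOS 7(需先启用EPEL):
# yum install -y epel-release
# yum install -y stress stress-ng
Debian/Ubuntu:
# apt-get install -y stress stress-ng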
stress : It is a simple workload generator for POSIX systems. It imposes a configurable amount of CPU, memory, I/O, and disk stress on the system. It is not a benchmark, but rather a tool designed to impose a specified load on given subsystems.
stress-ng : It is an updated version of stress tool and it will stress test a server for the following features:
CPU compute
Cache thrashing
Drive stress
I/O syncs
VM stress
Socket stressing
Context switching
Process creation and termination
It includes over 60 different stress tests, over 50 CPU specific stress tests that exercise floating point, integer, bit manipulation and control flow, over 20 virtual memory stress tests.
stress-ng will stress test a computer system in various selectable ways. It was designed to exercise various physical subsystems of a computer as well as the various operating system kernel interfaces. Stress-ng features:
* over 240 stress tests
* 78 CPU specific stress tests that exercise floating point, integer, bit manipulation and control flow
* over 20 virtual memory stress tests
* portable: builds on Linux, Solaris, *BSD, Minix, Android, MacOS X, Debian Hurd, Haiku, Windows Subsystem for Linux and SunOs/Dilos with gcc, clang, tcc and pcc.
stress-ng was originally intended to make a machine work hard and trip hardware issues such as thermal overruns as well as operating system bugs that only occur when a system is being thrashed hard. Use stress-ng with caution as some of the tests can make a system run hot on poorly designed hardware and also can cause excessive system thrashing which may be difficult to stop.
stress-ng can also measure test throughput rates; this can be useful to observe performance changes across different operating system releases or types of hardware. However, it has never been intended to be used as a precise benchmark test suite, so do NOT use it in this manner.
Running stress-ng with root privileges will adjust out of memory settings on Linux systems to make the stressors unkillable in low memory situations, so use this judiciously. With the appropriate privilege, stress-ng can allow the ionice class and ionice levels to be adjusted, again, this should be used with care.
To build, the following libraries will ensure a fully functional stress-ng build (note: libattr is not required for more recent distro releases).
Deb系:libaio-dev libapparmor-dev libattr1-dev libbsd-dev libcap-dev libgcrypt11-dev libipsec-mb-dev libjudy-dev libkeyutils-dev libsctp-dev libatomic1 zlib1g-dev
Rpm系:libaio-devel libattr-devel libbsd-devel libcap-devel libgcrypt-devel judy-devel keyutils-libs-devel lksctp-tools-devel libatomic zlib-devel
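若要从源码编译,可参考下面的步骤(以Deb系为例,依赖包名见上文列表,此处仅列出常用的几个,步骤仅为示意):
# apt-get install -y build-essential libaio-dev libbsd-dev libcap-dev libkeyutils-dev libsctp-dev zlib1g-dev
# git clone https://github.com/ColinIanKing/stress-ng
# cd stress-ng && make && make install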
不同类型压测资源的worker数量:
cpu_workers、vm_workers、hdd_workers
每个worker的磁盘或内存使用量:
bytes_per_hdd_worker、bytes_per_vm_worker
stress -c 2 -i 1 -m 1 --vm-bytes 128M -t 10s
Where,
-c 2 : Spawn two workers spinning on sqrt()
-i 1 : Spawn one worker spinning on sync()
-m 1 : Spawn one worker spinning on malloc()/free()
--vm-bytes 128M : Malloc 128MB per vm worker (default is 256MB)
-t 10s : Timeout after ten seconds
-v : Be verbose
是否一定要在root权限下运行stress-ng,手册页中有所表述:
Running stress-ng with root privileges will adjust out of memory settings on Linux systems to make the stressors unkillable in low memory situations, so use this judiciously. With the appropriate privilege, stress-ng can allow the ionice class and ionice levels to be adjusted, again, this should be used with care. However, some options do require root privilege to alter various /sys interface controls. See the stress-ng man page for more info.
stress-ng完全兼容stress,常用选项有:
-i:表示调用 sync()
--hdd:表示读写临时文件
--timeout:表示超时时间,即压测时间
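例如,下面的命令组合这几个选项对IO做60秒压测(--hdd-bytes用来限制每个worker写入临时文件的大小,为可选项):
stress-ng -i 2 --hdd 1 --hdd-bytes 1G --timeout 60s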
stress-ng的主要参数:
-c N:运行N个worker对CPU施压
--cpu-method all:每个worker依次迭代使用30多种不同的压力算法,包括pi、crc16、fft等一系列专项算例
--taskset:将压力绑定到指定的CPU核心上(参数为CPU列表)
-d N:运行N个worker进行HDD write/unlink测试
-i N:运行N个worker进行IO(sync)测试
示例:运行8 cpu, 4 fork, 4 io, 2 vm 10小时
# stress-ng --cpu 8 --cpu-method all --io 4 --vm 2 --vm-bytes 128M --fork 4 --timeout 36000s
stress-ng: info: [20829] dispatching hogs: 8 cpu, 4 fork, 4 io, 2 vm
在stress的基础上通过几百个参数组合,可以产生各种复杂的压力,兼容stress的参数,比如:
产生2个worker做圆周率算法压力:
stress-ng -c 2 --cpu-method pi
产生2个worker,依次迭代使用30多种不同的压力算法,包括pi, crc16, fft等等。
stress-ng -c 2 --cpu-method all
产生2个worker调用socket相关函数产生压力
stress-ng --sock 2
产生2个worker读取tsc产生压力
stress-ng --tsc 2
除了能够产生不同类型的压力,stress-ng还可以将压力指定到特定的cpu上,比如下面的命令将压力指定到cpu 0,2,3,6:
stress-ng --sock 4 --taskset 0,2-3,6
下面提供一些使用上的场景
场景一:CPU 密集型进程(使用CPU的进程)
使用2颗CPU
# stress --cpu 2 --timeout 600
# w|uptime
# mpstat -P ALL 5 1
# pidstat -u 5
1.通过w或uptime可以观察到,系统平均负载很高,通过mpstat观察到2个CPU使用率很高,平均负载也很高,而iowait为0,说明进程是CPU密集型的;
2.是由进程使用CPU密集导致系统平均负载变高、CPU使用率变高;
3.可以通过pidstat查看是哪个进程导致CPU使用率较高。
场景二:I/O 密集型进程(等待IO的进程)
对IO进行压测(使用stress观测到的iowait指标可能为0,所以使用stress-ng)
# stress-ng -i 4 --hdd 1 --timeout 600
# w|uptime
# mpstat -P ALL 5
1.可以通过uptime观察到,系统平均负载很高,通过mpstat观察到CPU使用很低,iowait很高,一直在等待IO处理,说明此进程是IO密集型的;
2.是由进程频繁的进行IO操作,导致系统平均负载很高而CPU使用率不高的情况。
场景三:大量进程的场景(等待CPU的进程>进程间会争抢CPU)
模拟16个进程,本机是4核心的机器。
# stress -c 16 --timeout 600
# w|uptime
# mpstat -P ALL 5
1.通过uptime观察到系统平均负载很高,通过mpstat观察到CPU使用率也很高,iowait为0,说明此进程是CPU密集型的,或者在争用CPU的运算资源;
2.通过pidstat -u观察到wait指标很高,则说明进程间存在CPU争用的情况,可以判断系统中存在大量的进程在等待使用CPU;
3.大量的进程,超出了CPU的计算能力,导致的系统的平均负载很高。
场景四:单进程多线程(大量线程造成上下文切换,从而造成系统负载升高)
模拟10个线程,对系统进行基准测试
# sysbench --threads=10 --time=300 threads run
可以看到1分钟的系统平均负载在升高
可以看到sys(内核态)对CPU的使用率比较高,iowait几乎为0(说明瓶颈不在IO)
# mpstat -P ALL 5
可以看到进程级别的上下文切换并不多(pidstat默认只统计进程的上下文切换)
# pidstat -w 3
可以看到存在大量的非自愿上下文切换(表示线程间争用引起的上下文切换,造成系统负载升高)
# pidstat -w -t 3
与stress的比较
stress常用选项:
-c,--cpu:代表进程个数(每个进程会占用一个cpu,当超出cpu个数时,进程间会互相争用cpu)
-t,--timeout:测试时长(超出这个时间后自动退出)
-i,--io:表示调用sync(),它表示通过系统调用 sync() 来模拟 I/O 的问题;
但这种方法实际上并不可靠,因为sync()的本意是刷新内存缓冲区的数据到磁盘中,以确保同步。如果缓冲区内本来就没多少数据,那读写到磁盘中的数据也就不多,也就没法产生 I/O 压力。这一点对在使用 SSD 磁盘的环境中尤为明显,很可能你的 iowait 总是 0,却单纯因为大量的系统调用,导致了系统CPU使用率 sys 升高。这种情况,推荐使用 stress-ng 来代替 stress。
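可以用下面的方式做个对比验证(示意):
终端1:
# stress -i 2 --timeout 120
终端2:
# mpstat -P ALL 5 1
在SSD环境下往往会看到iowait接近0而sys升高;将终端1的命令换成 stress-ng --hdd 1 --timeout 120 后,iowait才会明显升高。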
stress参数和用法都比较简单:
-c 2 : 生成2个worker循环调用sqrt()产生cpu压力
-i 1 : 生成1个worker循环调用sync()产生io压力
-m 1 : 生成1个worker循环调用malloc()/free()产生内存压力
由于stress的压力模型非常简单,所以无法模拟任何复杂的场景;在stress压测过程中,如果用top命令去观察,会发现所有的cpu压力都在用户态,内核态没有任何压力。
运行2个CPU worker、9个io worker和8个vm worker约40分钟,CPU占用率稳定在20%左右:
# stress -c 2 -i 9 -m 8 --verbose
stress: info: [5200] dispatching hogs: 2 cpu, 9 io, 8 vm, 0 hdd
再列举几个压测场景:
CPU密集型场景:
stress-ng --cpu 6 --timeout 300
该命令会尽量占满6个CPU核
IO密集型场景:
stress-ng -i 6 --hdd 1 --timeout 300
该命令会开启1个worker不停的读写临时文件,同时启动6个workers不停的调用sync系统调用提交缓存
进程密集型场景:
(( proc_cnt = `nproc`*10 )); stress-ng --cpu $proc_cnt --pthread 1 --timeout 300
该命令会启动N*10个进程,在只有N个核的系统上,会产生大量的进程切换,模拟进程间竞争CPU的场景
线程密集型场景:
stress-ng --cpu `nproc` --pthread 1024 --timeout 300
该命令会在N个CPU核的系统上,产生N个进程,每个进程1024个线程,模拟线程间竞争CPU的场景
stress-ng: RAM testing
To fill 80% of the free memory
There are many memory based stressors in stress-ng:
stress-ng --class memory?
class 'memory' stressors: atomic bsearch context full heapsort hsearch lockbus lsearch malloc matrix membarrier memcpy memfd memrate memthrash mergesort mincore null numa oom-pipe pipe qsort radixsort remap resources rmap stack stackmmap str stream tlb-shootdown tmpfs tsearch vm vm-rw wcs zero zlib
Alternatively, one can also use VM based stressors too:
stress-ng --class vm?
class 'vm' stressors: bigheap brk madvise malloc mlock mmap mmapfork mmapmany mremap msync shm shm-sysv stack stackmmap tmpfs userfaultfd vm vm-rw vm-splice
stress-ng is a workload generator that simulates cpu/mem/io/hdd stress on POSIX systems. This call should do the trick on Linux < 3.14:
stress-ng --vm-bytes $(awk '/MemFree/{printf "%d\n", $2 * 0.8;}' < /proc/meminfo)k --vm-keep -m 1
For Linux >= 3.14, you may use MemAvailable instead to estimate available memory for new processes without swapping:
stress-ng --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.8;}' < /proc/meminfo)k --vm-keep -m 1
Adapt the /proc/meminfo call with free(1)/vm_stat(1)/etc. if you need it portable.
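在较新的procps环境下,也可以用free输出中的available列做类似估算(列的位置随free版本而异,下面命令仅为示意):
stress-ng --vm-bytes $(free -m | awk '/^Mem/{printf "%d", $7*0.8}')m --vm-keep -m 1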
So to run 1 vm stressor that uses 75% of memory using all the vm stressors with verification for 10 minutes with verbose mode enabled, use:
stress-ng --vm 1 --vm-bytes 75% --vm-method all --verify -t 10m -v
stress-ng --vm 1 --vm-bytes 1.5g --vm-method all --verify -t 10m -v
注意:1.5g会被解析成1g(小数部分被忽略),两者效果相同,因此若要指定1.5GB应使用1536m。
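即上面的第二条命令应写成:
stress-ng --vm 1 --vm-bytes 1536m --vm-method all --verify -t 10m -v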
其它常用样例:
stress-ng --all 4 --timeout 5m
run 4 instances of all the stressors for 5 minutes.
stress-ng --random 64
run 64 stressors that are randomly chosen from all the available stressors.
To run 2 instances of all the stressors for 10 minutes:
stress-ng --all 2 --timeout 10m
To run 128 stressors that are randomly chosen from all the available stressors:
stress-ng --random 128
stress-ng --vm 8 --vm-bytes 80% -t 1h
run 8 virtual memory stressors that combined use 80% of the available memory for 1 hour. Thus each stressor uses 10% of the available memory.
stress-ng --cpu 4 --io 2 --vm 1 --vm-bytes 1G --timeout 60s
runs for 60 seconds with 4 cpu stressors, 2 io stressors and 1 vm stressor using 1GB of virtual memory.
stress-ng --iomix 2 --iomix-bytes 10% -t 10m
runs 2 instances of the mixed I/O stressors using a total of 10% of the available file system space for 10 minutes. Each stressor will use 5% of the available file system space.
stress-ng --cyclic 1 --cyclic-dist 2500 --cyclic-method clock_ns --cyclic-prio 100 --cyclic-sleep 10000 --hdd 0 -t 1m
measures real time scheduling latencies created by the hdd stressor. This uses the high resolution nanosecond clock to measure latencies during sleeps of 10,000 nanoseconds. At the end of 1 minute of stressing, the latency distribution with 2500 ns intervals will be displayed.
NOTE: this must be run with super user privileges to enable the real time scheduling to get accurate measurements.
stress-ng --cpu 8 --cpu-ops 800000
runs 8 cpu stressors and stops after 800000 bogo operations.
stress-ng --sequential 2 --timeout 2m --metrics
run 2 simultaneous instances of all the stressors sequentially one by one, each for 2 minutes and summarise with performance metrics at the end.
stress-ng --cpu 4 --cpu-method fft --cpu-ops 10000 --metrics-brief
run 4 FFT cpu stressors, stop after 10000 bogo operations and produce a summary just for the FFT results.
stress-ng --cpu 0 --cpu-method all -t 1h
run cpu stressors on all online CPUs working through all the available CPU stressors for 1 hour.
stress-ng --cpu 64 --cpu-method all --verify -t 10m --metrics-brief
run 64 instances of all the different cpu stressors and verify that the computations are correct for 10 minutes with a bogo operations summary at the end.
stress-ng --sequential 0 -t 10m
run all the stressors one by one for 10 minutes, with the number of instances of each stressor matching the number of online CPUs.
stress-ng --sequential 8 --class io -t 5m --times
run all the stressors in the io class one by one for 5 minutes each, with 8 instances of each stressor running concurrently and show overall time utilisation statistics at the end of the run.
stress-ng --all 0 --maximize --aggressive
run all the stressors (1 instance of each per CPU) simultaneously, maximize the settings (memory sizes, file allocations, etc.) and select the most demanding/aggressive options.
stress-ng --random 32 -x numa,hdd,key
run 32 randomly selected stressors and exclude the numa, hdd and key stressors
stress-ng --sequential 4 --class vm --exclude bigheap,brk,stack
run 4 instances of the VM stressors one after each other, excluding the bigheap, brk and stack stressors
stress-ng --taskset 0,2-3 --cpu 3
run 3 instances of the CPU stressor and pin them to CPUs 0, 2 and 3.
start N workers exercising the CPU by sequentially working through all the different CPU stress methods:
stress-ng --cpu 4 --timeout 60s --metrics-brief
For disk stress, start N workers continually writing, reading and removing temporary files:
stress-ng --hdd 2 --timeout 60s --metrics-brief
One can pass the --io N option to the stress-ng command to commit buffer cache to disk:
stress-ng --hdd 2 --io 2 --timeout 60s --metrics-brief
Use mmap N bytes per vm worker, the default is 256MB. One can specify the size as % of total available memory or in units of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g:
stress-ng --vm 2 --vm-bytes 1G --timeout 60s
The --vm 2 will start N workers (2 workers) continuously calling mmap/munmap and writing to the allocated memory. Note that this can cause systems to trip the kernel OOM killer on Linux systems if not enough physical memory and swap is not available.
Putting it all together
To run for 60 seconds with 4 cpu stressors, 2 io stressors and 1 vm stressor using 1GB of virtual memory, enter:
stress-ng --cpu 4 --io 2 --vm 1 --vm-bytes 1G --timeout 60s --metrics-brief
To run 4 simultaneous instances of all the stressors sequentially one by one, each for 6 minutes and summarise with performance metrics at the end:
stress-ng --sequential 4 --timeout 6m --metrics
To run 2 FFT cpu stressors, stop after 5000 bogo operations and produce a summary just for the FFT results:
stress-ng --cpu 2 --cpu-method fft --cpu-ops 5000 --metrics-brief
To run cpu stressors on all online CPUs working through all the available CPU stressors for 2 hours:
stress-ng --cpu 0 --cpu-method all -t 2h
To run 64 instances of all the different cpu stressors and verify that the computations are correct for 5 minutes with a bogo operations summary at the end:
stress-ng --cpu 64 --cpu-method all --verify -t 5m --metrics-brief
To run all the stressors one by one for 5 minutes, with the number of instances of each stressor matching the number of online CPUs:
stress-ng --sequential 0 -t 5m
To run all the stressors in the io class one by one for 1 minute each, with 8 instances of each stressor running concurrently and show overall time utilisation statistics at the end of the run:
stress-ng --sequential 8 --class io -t 1m --times
案例分析
案例分析1
工具:stress(系统压力测试工具)和sysstat(监控分析系统性能的工具)
需要开启多个终端,部分终端用来运行实例模拟高负载,部分终端用于运行监测程序进行观测。这里模拟多种系统压力的施加。
1、模拟高CPU密集的进程
stress -c 4 #运行4个高CPU进程
在一个终端运行stress,另外的终端用于监视系统负载以及其他性能
top监视的情况:
# top
top - 14:33:58 up 29 days, 23:28, 4 users, load average: 4.28, 1.42, 0.54
Tasks: 118 total, 6 running, 111 sleeping, 1 stopped, 0 zombie
%Cpu(s): 99.7 us, 0.2 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 8008684 total, 5493028 free, 195948 used, 2319708 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 7506152 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
可以看到平均负载逐步上升接近4,并且可以看到有4个CPU使用率接近100%的进程,平均CPU使用率几乎达到100%,几乎所有的时间都消耗在用户态。
mpstat监视情况:
# mpstat -P ALL 5 1
Linux 3.10.0-1062.18.1.el7.x86_64 (freeoa) 07/22/2020 _x86_64_ (4 CPU)
02:32:47 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
02:32:52 PM all 99.74 0.00 0.19 0.00 0.00 0.06 0.00 0.00 0.00 0.00
02:32:52 PM 0 99.44 0.00 0.28 0.00 0.00 0.28 0.00 0.00 0.00 0.00
02:32:52 PM 1 100.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
02:32:52 PM 2 100.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
02:32:52 PM 3 99.42 0.00 0.29 0.00 0.00 0.29 0.00 0.00 0.00 0.00
Average: CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
Average: all 99.74 0.00 0.19 0.00 0.00 0.06 0.00 0.00 0.00 0.00
Average: 0 99.44 0.00 0.28 0.00 0.00 0.28 0.00 0.00 0.00 0.00
Average: 1 100.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: 2 100.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: 3 99.42 0.00 0.29 0.00 0.00 0.29 0.00 0.00 0.00 0.00
通过mpstat可以看到每个cpu的情况,可知每个CPU的使用率都接近100%。
pidstat监视的情况:
# pidstat 1 1
Linux 3.10.0-1062.18.1.el7.x86_64 (freeoa) 07/22/2020 _x86_64_ (4 CPU)
02:37:07 PM UID PID %usr %system %guest %CPU CPU Command
02:37:08 PM 0 3126 99.01 0.00 0.00 99.01 1 stress
02:37:08 PM 0 3127 100.00 0.00 0.00 100.00 2 stress
02:37:08 PM 0 3128 98.02 0.00 0.00 98.02 3 stress
02:37:08 PM 0 3129 99.01 0.00 0.00 99.01 0 stress
02:37:08 PM UID PID %usr %system %guest %CPU CPU Command
02:37:09 PM 0 3126 99.01 0.00 0.00 99.01 1 stress
02:37:09 PM 0 3127 98.02 0.00 0.00 98.02 2 stress
02:37:09 PM 0 3128 100.00 0.00 0.00 100.00 3 stress
02:37:09 PM 0 3129 99.01 0.00 0.00 99.01 0 stress
02:37:09 PM 0 3608 0.99 0.00 0.00 0.99 0 barad_agent
02:37:09 PM 0 4242 0.00 0.99 0.00 0.99 1 pidstat
可以看到CPU被跑的满满的,并且可以看到是哪些进程在占据CPU。通过top和pidstat都可以找到到底是哪些进程在使CPU繁忙,因此找到根源后便可以去找更细的原因。
2、模拟IO密集型的进程
stress -i 3 #运行3个高IO进程
在一个终端运行stress,另外的终端观察系统负载以及其他性能,top监视的情况:
# top
top - 11:37:16 up 29 days, 20:31, 4 users, load average: 3.00, 3.21, 2.85
Tasks: 108 total, 3 running, 104 sleeping, 1 stopped, 0 zombie
%Cpu(s): 0.2 us, 49.0 sy, 0.0 ni, 49.9 id, 0.9 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 8008684 total, 5492672 free, 187984 used, 2328028 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 7514116 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
可以看到系统负载接近3,但是CPU利用率并没有那么高,并且可以看到 D 状态 (不可中断状态)。
mpstat监视的情况:
# mpstat -P ALL 5 1
Linux 3.10.0-1062.18.1.el7.x86_64 (freeoa) 07/22/2020 _x86_64_ (4 CPU)
11:31:29 AM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
11:31:34 AM all 0.05 0.00 52.10 1.05 0.00 0.00 0.00 0.00 0.00 46.79
11:31:34 AM 0 0.00 0.00 96.01 2.00 0.00 0.00 0.00 0.00 0.00 2.00
11:31:34 AM 1 0.20 0.00 39.40 1.00 0.00 0.00 0.00 0.00 0.00 59.40
11:31:34 AM 2 0.20 0.00 41.20 0.80 0.00 0.00 0.00 0.00 0.00 57.80
11:31:34 AM 3 0.20 0.00 31.34 0.60 0.00 0.00 0.00 0.00 0.00 67.86
Average: CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
Average: all 0.05 0.00 52.10 1.05 0.00 0.00 0.00 0.00 0.00 46.79
Average: 0 0.00 0.00 96.01 2.00 0.00 0.00 0.00 0.00 0.00 2.00
Average: 1 0.20 0.00 39.40 1.00 0.00 0.00 0.00 0.00 0.00 59.40
Average: 2 0.20 0.00 41.20 0.80 0.00 0.00 0.00 0.00 0.00 57.80
Average: 3 0.20 0.00 31.34 0.60 0.00 0.00 0.00 0.00 0.00 67.86
可以看到iowait值非常高,说明IO才是当前系统负载高的主要原因;同时可以看到主要时间消耗在系统调用(内核态)上,因为IO需要经过系统调用,用户态几乎不占时间。
pidstat查看相关状态:
# pidstat 1 1
Linux 3.10.0-1062.18.1.el7.x86_64 (freeoa) 07/22/2020 _x86_64_ (4 CPU)
02:39:56 PM UID PID %usr %system %guest %CPU CPU Command
02:39:57 PM 0 4719 0.00 61.00 0.00 61.00 0 stress
02:39:57 PM 0 4720 0.00 81.00 0.00 81.00 1 stress
02:39:57 PM 0 4721 0.00 50.00 0.00 50.00 2 stress
02:39:57 PM 0 4926 0.00 1.00 0.00 1.00 3 pidstat
02:39:57 PM UID PID %usr %system %guest %CPU CPU Command
02:39:58 PM 0 4719 0.00 59.00 0.00 59.00 0 stress
02:39:58 PM 0 4720 0.00 42.00 0.00 42.00 3 stress
02:39:58 PM 0 4721 0.00 94.00 0.00 94.00 2 stress
02:39:58 PM 0 32473 0.00 1.00 0.00 1.00 3 YDService
可以看到具体的进程占据CPU和IO的情况,通过top或者pidstat可以找到具体是哪个进程在频繁IO,从而定位问题原因。
上下文切换
Linux 是一个多任务操作系统,它支持远大于 CPU 数量的任务同时运行,这是通过频繁的上下文切换、将CPU轮流分配给不同任务从而实现的。每个进程运行时,CPU都需要知道进程已经运行到了哪里以及当前的各种状态,因此系统事先设置好 CPU 寄存器和程序计数器。CPU 上下文切换,就是先把前一个任务的 CPU 上下文(CPU 寄存器和程序计数器)保存起来,然后加载新任务的上下文到这些寄存器和程序计数器,最后再跳转到程序计数器所指的新位置,运行新任务,而保存下来的上下文,会存储在系统内核中,并在任务重新调度执行时再次加载进来。
进程上下文切换是消耗时间的,平均每次上下文切换都需要几十纳秒到数微秒的 CPU 时间,因此如果进程上下文切换次数过多,就会导致 CPU 将大量时间耗费在寄存器、内核栈以及虚拟内存等资源的保存和恢复上,进而大大缩短了真正运行进程的时间,实际上有效的CPU运行时间大大减少(可以认为上下文切换对用户来说是在做无用功)。
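举个粗略的估算例子(假设每次切换耗时约3微秒):若系统每秒发生70万次上下文切换,则每秒约有 700000 × 3µs ≈ 2.1 秒的CPU时间花在切换本身上,在4核机器上相当于超过一半的CPU算力被切换开销消耗掉。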
上下文切换的时机:
1.根据调度策略,CPU时间被划分为时间片,当前进程的时间片耗尽时,需要切换到其它进程;
2.进程在系统资源不足时,会在获取到足够资源之前被挂起;
3.进程通过sleep等函数主动将自己挂起;
4.当有优先级更高的进程需要运行时,为保证高优先级进程的运行,当前进程会被挂起(被抢占);
5.当发生硬件中断时,CPU上的进程会被中断挂起,转而执行内核中的中断服务程序。
现代操作系统中,线程是调度的基本单位,而进程则是资源拥有的基本单位,因此也会发生线程切换。如果是同一进程内的线程切换,由于大部分资源是共享的,因此不需要保存,只需保存寄存器等不共享的数据,这时候的线程切换更轻量、更快;如果不是同一进程内的线程切换,就等同于进程切换,开销稍大。
查看上下文切换:vmstat命令可以看到系统整体的context switches次数:
# vmstat 2
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
3 0 0 5492032 220452 2105940 0 0 0 5 2 1 0 0 100 0 0
3 0 0 5492412 220452 2105952 0 0 0 369 3267 2204 0 50 48 2 0
2 0 0 5492340 220452 2105968 0 0 0 342 3427 2477 0 49 50 1 0
cs:每秒上下文切换的次数
in:每秒中断的次数
r:就绪队列的长度,即正在运行和等待 CPU 的进程数
b:处于不可中断睡眠状态的进程数
可以通过pidstat查看每个进程的上下文切换情况:
# pidstat -w
Linux 3.10.0-1062.18.1.el7.x86_64 (freeoa) 07/22/2020 _x86_64_ (4 CPU)
03:10:50 PM UID PID cswch/s nvcswch/s Command
03:10:50 PM 0 1 1.10 0.00 systemd
03:10:50 PM 0 2 0.00 0.00 kthreadd
03:10:50 PM 0 4 0.00 0.00 kworker/0:0H
03:10:50 PM 0 6 0.07 0.00 ksoftirqd/0
03:10:50 PM 0 7 0.34 0.00 migration/0
03:10:50 PM 0 8 0.00 0.00 rcu_bh
03:10:50 PM 0 9 14.76 0.00 rcu_sched
cswch:表示每秒自愿上下文切换的次数,是指进程无法获取所需资源而导致的上下文切换;
nvcswch:表示每秒非自愿上下文切换的次数,指进程由于时间片已到等原因,被系统强制调度而发生的上下文切换。
案例分析2
工具:sysbench(一个多线程的基准测试工具)和sysstat(监控分析系统性能的工具)模拟系统多线程调度的瓶颈:
# 20个线程运行,模拟多线程切换的问题
$ sysbench --threads=20 threads run
在另一个终端用vmstat查看系统的上下文切换次数:
# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
5 0 0 5483872 220568 2114760 0 0 0 5 3 2 0 0 100 0 0
5 0 0 5483788 220568 2114760 0 0 0 0 24004 733269 35 56 9 0 0
5 0 0 5483804 220568 2114760 0 0 0 80 33083 688786 33 55 12 0 0
5 0 0 5483828 220568 2114760 0 0 0 0 21859 760155 32 58 9 0 0
6 0 0 5483912 220568 2114764 0 0 0 0 31601 794251 33 55 12 0 0
5 0 0 5483912 220568 2114764 0 0 0 0 22575 671252 35 56 9 0 0
可以看到每秒的上下文切换次数达到了70万次左右,这会大大影响系统性能;就绪队列中的进程数量也明显提升,已经高于CPU数量;us和sy使用率较高,加起来接近100%;同时in的数值非常高,说明每秒的中断次数非常多。用pidstat查看具体的情况(-t可以显示出更具体的线程切换次数):
# pidstat -wt -u 1
Linux 3.10.0-1062.18.1.el7.x86_64 (freeoa) 07/22/2020 _x86_64_ (4 CPU)
03:41:38 PM UID TGID TID %usr %system %guest %CPU CPU Command
03:41:39 PM 0 3612 - 0.98 0.00 0.00 0.98 2 barad_agent
03:41:39 PM 0 18524 - 100.00 100.00 0.00 100.00 2 sysbench
03:41:39 PM 0 - 18530 3.92 8.82 0.00 12.75 3 |__sysbench
03:41:39 PM 0 - 18531 7.84 12.75 0.00 20.59 0 |__sysbench
03:41:39 PM 0 - 18532 7.84 11.76 0.00 19.61 0 |__sysbench
....
03:41:38 PM UID TGID TID cswch/s nvcswch/s Command
03:41:39 PM 0 1 - 0.98 0.00 systemd
03:41:39 PM 0 - 18539 10184.31 38460.78 |__sysbench
03:41:39 PM 0 - 18540 9807.84 31880.39 |__sysbench
03:41:39 PM 0 - 18541 8456.86 23916.67 |__sysbench
03:41:39 PM 0 - 18542 8710.78 25382.35 |__sysbench
03:41:39 PM 0 - 18543 9375.49 29080.39 |__sysbench
03:41:39 PM 0 - 18544 11208.82 31827.45 |__sysbench
03:41:39 PM 0 18555 - 0.98 1.96 pidstat
...
可以看到sysbench的CPU占用率达到了100%,并且几乎占据了所有的usr和sys时间;也能看到sysbench的进程中存在着大量的自愿上下文切换和非自愿上下文切换。
查看中断情况:watch -d cat /proc/interrupts
可以看到LOC和RES值非常高,LOC是计时器中断,RES是Rescheduling interrupts,也就是调度中断,因此可以基本确定,中断的产生主要是因为频繁的调度,也就是任务过多引起过多上下文切换导致的。
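若只关心这两类中断,可以只过滤出LOC和RES两行来观察(示意):
# watch -d 'grep -E "LOC|RES" /proc/interrupts'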
不可中断进程过多
僵尸进程,表示进程已经退出,但它的父进程还没有回收子进程占用的资源。正常情况下,当一个进程创建了子进程后,它应该通过系统调用 wait() 或者 waitpid() 等待子进程结束,回收子进程的资源。通常来说,僵尸进程持续的时间都比较短,在父进程回收它的资源后就会消亡;或者在父进程退出后,由 init 进程回收后也会消亡。但是如果父进程没有处理子进程的终止,还一直保持运行状态,那么子进程就会一直处于僵尸状态。大量的僵尸进程会用尽 PID 进程号,导致新进程不能创建,所以这种情况一定要避免。
不可中断状态,表示进程正在跟硬件交互,为了保护进程数据和硬件的一致性,系统不允许其他进程或中断打断这个进程。当 iowait 升高时,进程很可能因为得不到硬件的响应,而长时间处于不可中断状态。进程长时间处于不可中断状态,通常表示系统有 I/O 性能问题。
通常可以用top命令和ps命令查看系统的进程状态:
# top
top - 11:43:46 up 19:21, 2 users, load average: 81.48, 35.56, 13.78
Tasks: 258 total, 2 running, 253 sleeping, 1 stopped, 2 zombie
%Cpu(s): 0.2 us, 0.3 sy, 0.0 ni, 34.6 id, 64.8 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 16165976 total, 8499940 free, 6695244 used, 970792 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 9338500 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
32016 root 20 0 7076 6232 808 R 1.0 0.0 3:35.52 sap1002
26849 root 20 0 70040 65528 44 D 0.3 0.4 0:00.02 app
17626 root 20 0 0 0 0 S 0.3 0.0 0:00.06 kworker/6:2
32018 root 20 0 23336 8680 1164 S 0.3 0.1 0:21.21 sap1004
32031 root 20 0 45228 26664 5832 S 0.3 0.2 1:16.57 sap1009
1 root 20 0 53128 4336 2488 S 0.0 0.0 0:08.35 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
可以看到S列即为进程状态,常见的包括 R:运行状态,S:可中断睡眠状态,D:不可中断状态,Z:僵尸状态。
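也可以直接用ps筛选出处于D或Z状态的进程(示意):
# ps -eo pid,ppid,stat,comm | awk '$3 ~ /^[DZ]/'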
这是一个高IO的实例:
# ps aux | grep /app
root 26564 0.0 0.0 4500 564 pts/0 Ss+ 11:39 0:00 /app -d /dev/vdb1
root 26622 0.0 0.4 70040 65528 pts/0 D+ 11:39 0:00 /app -d /dev/vdb1
root 26623 0.0 0.4 70040 65528 pts/0 D+ 11:39 0:00 /app -d /dev/vdb1
root 26629 0.0 0.4 70040 65528 pts/0 D+ 11:39 0:00 /app -d /dev/vdb1
root 26630 0.0 0.4 70040 65528 pts/0 D+ 11:39 0:00 /app -d /dev/vdb1
....
# top
top - 11:43:46 up 19:21, 2 users, load average: 81.48, 35.56, 13.78
Tasks: 258 total, 2 running, 253 sleeping, 1 stopped, 2 zombie
%Cpu(s): 0.2 us, 0.3 sy, 0.0 ni, 34.6 id, 64.8 wa, 0.0 hi, 0.0 si, 0.0 st
查看top可以知道平均负载极高,但是CPU利用率很低,iowait很高,说明大概率是IO导致了如此高的系统负载。在终端中运行 dstat 命令,观察 CPU 和 I/O 的使用情况:
# dstat 1 10
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read writ| recv send| in out | int csw
0 0 100 0 0 0| 682k 48k| 0 0 | 0 0 | 797 808
0 0 68 32 0 0| 130M 20k| 54B 146B| 0 0 |1040 827
0 0 75 25 0 0| 130M 0 | 96B 860B| 0 0 |1022 789
0 0 75 25 0 0| 130M 0 | 331B 894B| 0 0 |1071 856
0 0 72 28 0 0| 130M 24k| 54B 42B| 0 0 |1057 823
0 0 63 37 0 0| 130M 0 | 146B 388B| 0 0 |1036 789
0 0 63 37 0 0| 130M 0 | 96B 700B| 0 0 |1043 798
0 0 62 37 0 0| 130M 932k| 54B 42B| 0 0 |1033 797
0 0 62 37 0 0| 130M 0 | 96B 388B| 0 0 |1033 798
0 0 67 33 0 0| 130M 20k|1064B 7858B| 0 0 |1054 843
1 0 62 37 0 0| 130M 0 | 54B 42B| 0 0 |1074 789
可以看到 iowait 升高(wai)时,磁盘的读请求(read)都会很大。这说明 iowait 的升高跟磁盘的读请求有关,很可能就是磁盘读导致的。因此就需要找一找是哪些进程在频繁read,用top查找:
top - 14:44:34 up 2:45, 2 users, load average: 43.34, 15.43, 5.63
Tasks: 212 total, 1 running, 209 sleeping, 0 stopped, 2 zombie
%Cpu(s): 0.0 us, 0.2 sy, 0.0 ni, 59.0 id, 40.8 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 16165976 total, 11210260 free, 4318284 used, 637432 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 11722680 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1160 root 20 0 70040 65524 44 D 0.3 0.4 0:00.01 app
1166 root 20 0 70040 65524 44 D 0.3 0.4 0:00.01 app
1315 root 20 0 70040 65524 44 D 0.3 0.4 0:00.01 app
7852 root 20 0 38008 19720 1168 S 0.3 0.1 0:02.06 secu-tcs-agent
9365 root 20 0 7208 6288 804 S 0.3 0.0 0:26.86 sap1002
9381 root 20 0 22612 4276 3700 S 0.3 0.0 0:04.14 sap1007
可以看到平均负载非常高,但是CPU使用率几乎为0,而有着大量的iowait,并且看到有很多进程处于D(不可中断)状态,因此大概率就是这些进程在进行磁盘读。从top看到1160处于D状态,因此查看是否是该进程的原因:
# pidstat -d -p 1160 1 3
Linux 3.10.107-1-tlinux2_kvm_guest-0049 (centos) 07/23/20 _x86_64_ (8 CPU)
14:48:42 UID PID kB_rd/s kB_wr/s kB_ccwr/s Command
14:48:43 0 1160 0.00 0.00 0.00 app
14:48:44 0 1160 0.00 0.00 0.00 app
14:48:45 0 1160 0.00 0.00 0.00 app
Average: 0 1160 0.00 0.00 0.00 app
显然并不是,因为读写都是0,同理发现其他几个也是这样的情况。直接pidstat查看所有的进程情况来分析:
# pidstat -d 1 5
Linux 3.10.107-1-tlinux2_kvm_guest-0049 (centos) 07/23/20 _x86_64_ (8 CPU)
14:54:33 UID PID kB_rd/s kB_wr/s kB_ccwr/s Command
14:54:34 0 3204 503.50 0.00 0.00 app
14:54:34 0 3216 520.50 0.00 0.00 app
14:54:34 0 3331 16128.00 0.00 0.00 app
14:54:34 0 3332 1024.00 0.00 0.00 app
14:54:34 0 3337 16128.00 0.00 0.00 app
14:54:34 0 3338 16128.00 0.00 0.00 app
14:54:34 0 3344 16128.00 0.00 0.00 app
发现确实是app进程在运行,并且占据了非常大的read。用strace看一下3204进程的系统调用情况:
# strace -p 3204
strace: attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted
显示没有权限,很不科学,已经是root了,那么看一下这个进程的状态:
# ps aux | grep 3204
root 3204 0.0 0.0 0 0 pts/0 Z+ 14:53 0:00 [app] <defunct>
发现变成了僵尸状态。用perf top分析问题所在,找到app后进入其中查看,展开调用栈分析:发现进程正在通过系统调用 sys_read() 读取数据,并且从 new_sync_read 和 blkdev_direct_IO 能看出,进程正在对磁盘进行直接读,也就是绕过了系统缓存,每个读请求都会直接从磁盘读取。分析源码,发现:
open(disk, O_RDONLY|O_DIRECT|O_LARGEFILE, 0755)
O_DIRECT表示绕过系统缓存直接读写磁盘,删掉该选项后重新运行,发现iowait降到很低,问题由此定位并解决。
这个例子中磁盘 I/O 导致了 iowait 升高,不过 iowait 高不一定代表 I/O 有性能瓶颈。当系统中只有 I/O 类型的进程在运行时,iowait 也会很高,但实际上,磁盘的读写远没有达到性能瓶颈的程度。因此碰到 iowait 升高时,需要先用 dstat、pidstat 等工具,确认是不是磁盘 I/O 的问题,然后再找是哪些进程导致了 I/O。等待 I/O 的进程一般是不可中断状态,所以用 ps 命令找到的 D 状态(即不可中断状态)的进程,多为可疑进程。然后用strace分析,或者用 perf 工具,来分析系统的 CPU 时钟事件,找到问题的原因。
最新版本:
项目主页:
https://github.com/ColinIanKing/stress-ng
https://kernel.ubuntu.com/~cking/stress-ng/