**Retired project**. This had its uses for some of my more esoteric systems (ones that only had busybox, for example), but I've since moved to an infrastructure where everything supports [[https://collectd.org/|collectd]].

====== Load logger ======
A simple bash script to log the date/time and system load to a file every minute. I wanted something that didn't rely on having any other programs installed (though it does rely on the /proc/ filesystem and some basic utilities like free, ps, awk and bc).

It saves data in the following format: Server ID, Date/Time, CPU usage (%), CPU max, Memory Total (MB), Memory Used (MB), Memory Free (MB), Memory Shared (MB), Memory Buffered/Cached (MB), Memory Available (MB), 1 minute load average, 5 minute load average, 15 minute load average, process count, network usage (one column per interface in the format [interface name, received bytes, transmitted bytes]).
<code bash>
# $ID (the server ID, first column of the output) is set further up the script
CPU_COUNT=$(grep -c '^processor' /proc/cpuinfo)
MAX_CPU=$((CPU_COUNT * 100))
# bash's built-in printf can format the current date/time itself
printf -v date '%(%Y-%m-%d %H:%M:%S)T' -1
# sum the %CPU column across every process
CPU_USAGE=$(ps -ax -h -o pcpu | paste -sd+ | bc)
# total, used, free, shared, buff/cache and available memory in MB
MEM_USAGE=$(free -m | grep 'Mem:' | awk -v OFS="," '{print $2,$3,$4,$5,$6,$7}')
# 1/5/15 minute load averages plus the total process count
LOAD_AVG=$(awk -v OFS="," '{split($4,arr,"/"); print $1,$2,$3,arr[2]}' /proc/loadavg)
# one [interface, received bytes, transmitted bytes] group per interface
NET_USAGE=$(awk -v ORS="," 'NR>2{print $1,$2,$9}' /proc/net/dev)
OUTPUT=$ID','$date','$CPU_USAGE','$MAX_CPU','$MEM_USAGE','$LOAD_AVG','$NET_USAGE
# trim that last awkward comma
echo "$OUTPUT" | awk '{gsub(/,$/,""); print $0}'
</code>
Example output:
</code>
You can log this to a file using cron:
<code bash>
* * * * * /home/seven/bin/save_load.sh >> /home/seven/data/load.log
</code>

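With data accumulating in the log, individual columns can be pulled back out with awk. A minimal sketch, assuming the CSV layout described above (the sample line and its values are made up purely for illustration):

<code bash>
# One made-up sample line in the format described above
line='1,2020-04-19 18:29:00,12.5,400,7977,1024,512,64,2048,6000,0.10,0.20,0.30,150,eth0 1000 2000'
# Split on commas: $1 = server ID, $2 = date/time, $3 = CPU usage (%)
echo "$line" | awk -F',' '{print $2, $3}'
# prints: 2020-04-19 18:29:00 12.5
</code>

To run this over the real log, point awk at the file instead, e.g. ''awk -F',' '{print $2, $3}' /home/seven/data/load.log''.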
===== Central logging =====
Alternatively, you can send this information to a central location. I wrote a quick script that saves it to a database, then used curl to POST the data to the server. I swapped the echo line in the script above for something like this:
<code bash>
curl --request POST "https://myserver/" --data-urlencode "data=$OUTPUT"
</code>
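Because ''--data-urlencode'' percent-encodes the payload, the server side has to decode it before saving anything to the database. The receiving script isn't shown here; as a rough sketch, the decode step can even be done in bash itself (the function name and sample string below are my own illustration, not from the original script):

<code bash>
# Percent-decode a URL-encoded string (bash-specific).
# '+' means space in form encoding; each %NN is rewritten to \xNN,
# which printf's %b format then expands into the raw byte.
urldecode() {
    local encoded="${1//+/ }"
    printf '%b' "${encoded//%/\\x}"
}

urldecode 'load%3D0.52%2C0.48%2C0.51'
# prints: load=0.52,0.48,0.51
</code>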