You’ve mastered the fundamentals, from hosting your panel to running your website, and now it’s time to elevate your skills into the advanced realm of Linux commands, where true server management expertise resides. These commands are the secret weapons every DevOps engineer aspires to wield: potent, enigmatic, and invaluable when your servers run into unexpected trouble.
In this post of our series on important commands, we will explore advanced Linux commands that not only enhance your capabilities as a DevOps engineer and CloudPanel user but also position you as the go-to expert within your team. Ready to harness the power of these commands and impress your colleagues with your newfound prowess? Let’s dive in!
1. uptime and top (System Performance Monitoring)
Use case: Monitor system load and identify performance bottlenecks.
- uptime: Displays the system’s uptime and average load.
uptime
07:32:24 up 144 days, 14:15, 3 users, load average: 1.11, 0.59, 0.45
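On systems with procps-ng, uptime also accepts a couple of convenience flags worth knowing:

```shell
# Pretty-print the uptime in human-readable form
uptime -p
# Show the timestamp the system last booted
uptime -s
```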
- w: Shows the system uptime, load average, and detailed information about currently logged-in users (user name, session start time, current process, etc.). Add -h to suppress the header line.
w
07:32:24 up 144 days, 14:15, 3 users, load average: 1.11, 0.59, 0.45
user1 pts/1 2023-09-30 07:32 0.00s 0.50s 0.05s w
user2 pts/2 2023-09-29 12:21 10:22m 0.04s 0.03s bash
- top: Provides real-time monitoring of processes and resource usage.
top -u username
top -d 5 # Refresh every 5 seconds
top -o %MEM
# top -p <PID1>,<PID2>,<PID3>
top -p 1234
# Monitoring CPU Cores Separately
top -1
# Logging Top Output to CSV
top -b -n 1 | sed '1,7d' | awk '{print $1","$9","$10","$12}' > top_output.csv
# monitor processes with CPU usage above 90%
top -b -n 1 | awk '$9 > 90 {print $1, $2, $9, $12}'
Interactive shortcuts inside top:
- P : Sort by CPU usage (default).
- V : Enable the tree view.
- M : Sort by memory usage.
- T : Sort by running time of the process.
- N : Sort by process ID (PID).
- < and > : Move the sorting column left or right.
- k : Kill a process. Enter the PID of the process and confirm by pressing ENTER.
Advanced usage with top and crontab
- Send an email alert when any process exceeds 90% CPU (runs every 5 minutes):
*/5 * * * * top -b -n 1 | awk '$9 > 90 {print $1, $12}' | mail -s "High CPU Usage Alert" youremail@example.com
Running top on Remote Servers
ssh user@remote_host "top -b -n 1" > top_output_remote.txt
2. cron (Job Scheduling)
Use case: Automate recurring tasks, such as backups, system cleanups, or regular monitoring tasks.
Cron schedule syntax helper
* * * * * command_to_be_executed
# | | | | |
# | | | | |__ Day of the week (0 - 6) (Sunday = 0 or 7)
# | | | |____ Month (1 - 12)
# | | |______ Day of the month (1 - 31)
# | |________ Hour (0 - 23)
# |__________ Minute (0 - 59)
# Runs at 00, 15, 30, and 45 minutes of every hour
0,15,30,45 * * * * /path/to/command
# Runs at half past each hour, from 9:30 AM through 5:30 PM, Monday through Friday
30 9-17 * * 1-5 /path/to/command
# Runs at midnight on January 1st and June 1st every year
0 0 1 1,6 * /path/to/command
# Runs at midnight on the 1st, 11th, 21st, and 31st of each month (the */10 step restarts every month, so the interval is not exactly 10 days)
0 0 */10 * * /path/to/command
# Run command2 only if command1 succeeds
* * * * * /path/to/command1 && /path/to/command2
# Run command2 only if command1 fails
* * * * * /path/to/command1 || /path/to/command2
# Run two commands sequentially
* * * * * /path/to/command1; /path/to/command2
Cron also supports special scheduling strings:
- @reboot : Run once after the system reboots.
- @hourly : Run once every hour.
- @daily : Run once every day (midnight).
- @weekly : Run once every week (midnight on Sunday).
- @monthly : Run once a month (midnight on the 1st).
- @yearly or @annually : Run once every year (midnight on January 1st).
@reboot /path/to/command # Runs once after every reboot
@daily /path/to/command # Runs every day at midnight
Running Cron Jobs with Reduced Priority
# Run with reduced priority (nice level 10)
* * * * * nice -n 10 /path/to/command
# Run with reduced I/O priority
* * * * * ionice -c2 -n7 /path/to/command
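To make a job genuinely conditional on system load, one approach (an illustrative sketch; the threshold of 4 and the command path are placeholders) is to gate it on the 1-minute load average from /proc/loadavg:

```shell
# Crontab entry: run the job only when the 1-minute load average is below 4
* * * * * awk '{exit !($1 < 4)}' /proc/loadavg && /path/to/command
```

awk exits with status 0 only when the first field of /proc/loadavg is under the threshold, so the && chain skips the command on a busy system.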
Assigning Variables in a Crontab
TZ="America/New_York"
0 2 * * * /path/to/command # Runs at 2:00 AM New York time
MY_VAR="my_value"
* * * * * /path/to/command $MY_VAR
Logging
# Append output to logfile
* * * * * /path/to/command >> /path/to/logfile 2>&1
# Send the output of this cron job to the specified email
MAILTO="admin@example.com"
* * * * * /path/to/command
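As an alternative to log files, cron output can be routed to syslog with logger; the tag mycronjob below is just an example name:

```shell
# Crontab entry: tag the job's output in syslog instead of writing to a file
* * * * * /path/to/command 2>&1 | logger -t mycronjob
```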
3. journalctl (Systemd Logs)
Use case: Analyze and troubleshoot systemd-managed services and their logs.
These advanced journalctl commands can greatly enhance your ability to manage logs, troubleshoot systems, and optimize performance analysis on Linux systems.
- Unit Filtering: Focus on specific system services.
- Time-Based Filtering: Filter logs by custom time ranges.
- Priority Filtering: Focus on errors, warnings, or critical logs.
- Remote Logging: Access logs from remote machines.
- Log Rotation: Manage log sizes with vacuum commands.
- Export and Output Formats: Export logs in formats like JSON or verbose.
- Kernel Logs: Isolate kernel logs or boot-specific logs.
# follow logs in real time (similar to tail -f)
journalctl -f
# Follow logs for a specific unit
journalctl -u nginx.service -f
# Filter logs by a specific time range using --since and --until
journalctl --since "YYYY-MM-DD HH:MM:SS" --until "YYYY-MM-DD HH:MM:SS"
journalctl --since "2024-09-15" --until "2024-09-16"
# Shows logs for the nginx service from the last hour
journalctl -u nginx.service --since "1 hour ago"
Filtering by Priority
journalctl supports filtering logs by priority levels (severity). The priority levels range from 0 (highest) to 7 (lowest):
- 0 : Emergency
- 1 : Alert
- 2 : Critical
- 3 : Error
- 4 : Warning
- 5 : Notice
- 6 : Info
- 7 : Debug
journalctl -p <priority_level>
# View only errors and higher severity
journalctl -p err
# View warnings and above
journalctl -p warning
Filtering by UID or PID
Filter logs by User ID (UID) or Process ID (PID) to see logs generated by a specific user or process:
journalctl _UID=1000
journalctl _PID=12345
Displaying Logs in Reverse Order
By default, journalctl displays logs from oldest to newest. You can reverse this order to show the most recent logs first.
journalctl -r
Search logs for specific keywords or patterns using -g or --grep:
journalctl -g "search_term"
journalctl -g "failed"
journalctl -g "error|failure"
Limiting the Number of Log Lines
journalctl -n <number_of_lines>
journalctl -n 50
Show Logs for a Specific Boot
Each system boot is assigned a unique identifier (boot ID), and you can filter logs to view messages from a specific boot session.
journalctl -b <boot_id>
# View logs from the current boot
journalctl -b
# View logs from the previous boot
journalctl -b -1
# List boot IDs to choose from
journalctl --list-boots
Display Kernel Logs Only
You can use -k to show only kernel logs (equivalent to dmesg).
# show only kernel logs
journalctl -k
# View kernel logs from the last boot
journalctl -k -b
Export Logs and Output Formats
journalctl supports several output formats via -o/--output:
- short : Default output.
- short-iso : Uses ISO 8601 timestamps.
- short-monotonic : Adds monotonic timestamps.
- verbose : Detailed information for each log entry.
- json : Output in JSON format.
- json-pretty : JSON with indentation.
- json-sse : JSON formatted as Server-Sent Events (SSE).
- cat : Prints only the message field, one per line.
# Export logs in journal format:
journalctl --output=export > /path/to/logfile.journal
# Export logs in JSON format:
journalctl -o json > /path/to/logfile.json
# Export logs in short-monotonic format
journalctl -o short-monotonic
# Display logs with ISO 8601 timestamps
journalctl -o short-iso
# Display logs in verbose mode
journalctl -o verbose
Log Rotation and Managing Disk Space
You can manage the size of the logs and configure journal log rotation to avoid excessive disk usage.
# Limit the size of journal logs
journalctl --vacuum-size=1G
# Limit logs by time
journalctl --vacuum-time=2weeks
Accessing Remote Logs
journalctl can retrieve logs from a remote machine running systemd if both systems are configured for remote logging.
# View logs from a container or VM registered with systemd-machined
journalctl -M remote_machine_name
# List machines available for log retrieval
machinectl list
Viewing Logs for Specific Executables
# filter logs for a specific executable using _COMM
journalctl _COMM=<executable_name>
journalctl _COMM=sshd
4. sysctl (Kernel Parameter Tuning)
Use case: Advanced system tuning with sysctl can dramatically improve performance, security, and stability, especially on high-performance or production systems.
Caution: For advanced Linux users only; misconfiguration may destabilize the system.
- Networking Tuning: Improve TCP/IP performance for high-traffic servers.
- Memory Management: Adjust swappiness, cache ratios, and virtual memory parameters.
- Security Hardening: Disable IP source routing, enable SYN cookies, and adjust routing filters.
- System Optimization: Tune file descriptor limits, disk I/O, and virtual memory for specific workloads.
- Debugging: Enable kernel logging and fine-tune kernel behavior for troubleshooting.
Changing Kernel Parameters at Runtime
Note: To change a parameter temporarily (until the next reboot), use the -w option:
# Viewing All Kernel Parameters
sysctl -a
# To change a parameter temporarily (until the next reboot)
sysctl -w <parameter>=<value>
# Enable IP forwarding
sysctl -w net.ipv4.ip_forward=1
# Increase maximum number of file descriptors
sysctl -w fs.file-max=100000
Persisting Kernel Parameter Changes
To make kernel parameter changes permanent (i.e., persistent across reboots), add them to the /etc/sysctl.conf file or place them in a .conf file inside /etc/sysctl.d/
Example: Add the following line to /etc/sysctl.conf to enable IP forwarding permanently:
net.ipv4.ip_forward = 1
After making changes, reload the configuration:
sysctl -p # Reload the config
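As a sketch of the drop-in approach (the file name 99-custom.conf is an arbitrary example), you can keep custom settings out of /etc/sysctl.conf entirely:

```shell
# Write the setting to a drop-in file (requires root)
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-custom.conf
# Reload every configured sysctl file, including /etc/sysctl.d/*
sudo sysctl --system
```

Drop-in files survive package upgrades that might overwrite /etc/sysctl.conf, which is why many distributions prefer them.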
Network Tuning with sysctl
You can fine-tune networking parameters, which can significantly affect performance for high-traffic servers.
# Adjust TCP connection backlog
sysctl -w net.core.somaxconn=1024
# Set maximum number of allowed open ports:
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
# Disable ICMP redirects (for security)
sysctl -w net.ipv4.conf.all.accept_redirects=0
sysctl -w net.ipv4.conf.default.accept_redirects=0
Memory Management Tuning and I/O Performance
These parameters optimize memory management and caching behavior:
# Adjust swappiness to prefer RAM over swap (0 = avoid swapping, 100 = swap aggressively)
sysctl -w vm.swappiness=10
# Increase the amount of dirty cache (data not yet written to disk)
sysctl -w vm.dirty_ratio=40
# Change the dirty_background_ratio to control when background processes start writing to disk
sysctl -w vm.dirty_background_ratio=10
Security Hardening with sysctl
Security-related kernel parameters can be adjusted using sysctl to harden the system against common attack vectors.
# Disable IP source routing (for security)
sysctl -w net.ipv4.conf.all.accept_source_route=0
# Enable SYN cookies to protect against SYN flood attacks:
sysctl -w net.ipv4.tcp_syncookies=1
# Disable IP forwarding to prevent the system from acting as a router
sysctl -w net.ipv4.ip_forward=0
# Enable protection against spoofed IP addresses
sysctl -w net.ipv4.conf.all.rp_filter=1
sysctl -w net.ipv4.conf.default.rp_filter=1
To mitigate denial-of-service (DoS) attacks and other malicious traffic, you can limit the backlog of pending connections.
# Cap the listen backlog for incoming connections:
sysctl -w net.core.somaxconn=1024
Increasing File Descriptors
For high-performance applications, you may need to raise the limit on open file descriptors.
# Increase file descriptor limit:
sysctl -w fs.file-max=200000
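Note that fs.file-max is the kernel-wide ceiling; individual processes are also bound by per-process limits (ulimit, /etc/security/limits.conf). A quick way to inspect both:

```shell
# Kernel-wide maximum number of open file handles
cat /proc/sys/fs/file-max
# Per-process soft limit for the current shell
ulimit -n
```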
Applying Configuration Changes from a Custom File
If you want to apply settings from a custom file instead of /etc/sysctl.conf, use the -p option followed by the file path.
sysctl -p /path/to/custom_sysctl.conf
Optimizing Server Performance Using sysctl
You can fine-tune kernel parameters to optimize a Linux server for specific workloads such as web hosting, databases, or virtual machines.
# Optimize TCP window size for high-latency networks
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
# Enable aggressive caching for file systems
sysctl -w vm.dirty_ratio=60
Clearing or Flushing Cache
Flush the filesystem cache and free up system memory.
# Flush dirty pages to disk first, then clear pagecache, dentries, and inodes (requires root)
sync
sysctl -w vm.drop_caches=3