Table of Contents
- Introduction
- Fundamental Concepts
- Installation
- Basic Usage
- Complete Parameter Reference
- Common Use Cases
- Advanced Usage
- Interpreting Results
- Troubleshooting
- Practical Tips
- Summary
Introduction
iperf3 is a network performance measurement tool designed to measure the maximum achievable bandwidth on IP networks supporting TCP, UDP, and SCTP protocols. As a complete reimplementation of the original iperf, iperf3 offers cleaner code architecture and improved performance characteristics.
Key Features
- TCP and UDP protocol testing support
- Bidirectional simultaneous testing capability
- Bandwidth, jitter, and packet loss measurement
- Multiple parallel streams for concurrent testing
- JSON format output for programmatic processing
- SCTP protocol support (system-dependent)
Fundamental Concepts
Operating Modes
- Server Mode: Listens on a specified port awaiting client connections
- Client Mode: Initiates connections to the server and conducts tests
Test Directions
- Forward Testing: Client transmits data to server (default behavior)
- Reverse Testing: Server transmits data to client (using the -R parameter)
- Bidirectional Testing: Simultaneous transmission in both directions (using the --bidir parameter)
Installation
Ubuntu/Debian
sudo apt-get update
sudo apt-get install iperf3
CentOS/RHEL
sudo yum install iperf3
# or
sudo dnf install iperf3
macOS
brew install iperf3
Building from Source
git clone https://github.com/esnet/iperf.git
cd iperf
./configure
make
sudo make install
Basic Usage
Starting the Server
iperf3 -s
Client Connection Test
iperf3 -c <server_ip_address>
Example
# Server side
iperf3 -s
# Client side
iperf3 -c 192.168.1.100
Complete Parameter Reference
General Parameters
-p, --port <port_number>
Specifies the server port number (default: 5201)
# Server
iperf3 -s -p 5555
# Client
iperf3 -c 192.168.1.100 -p 5555
-f, --format [kmgtKMGT]
Specifies the output format (bandwidth units)
- k/K: Kbits/Kbytes per second
- m/M: Mbits/Mbytes per second
- g/G: Gbits/Gbytes per second
- t/T: Tbits/Tbytes per second
iperf3 -c 192.168.1.100 -f m # Display in Mbits/s
iperf3 -c 192.168.1.100 -f M # Display in MBytes/s
-i, --interval <seconds>
Sets the reporting interval (default: 1 second)
iperf3 -c 192.168.1.100 -i 2 # Report every 2 seconds
-V, --verbose
Outputs detailed information
iperf3 -s -V
-J, --json
Outputs results in JSON format
iperf3 -c 192.168.1.100 -J
-d, --debug
Outputs debugging information
iperf3 -s -d
-v, --version
Displays version information
iperf3 -v
-h, --help
Displays help information
iperf3 -h
Server-Specific Parameters
-s, --server
Runs in server mode
iperf3 -s
-D, --daemon
Runs the server as a daemon process
iperf3 -s -D
-1, --one-off
Accepts only one client connection before terminating
iperf3 -s -1
--server-bitrate-limit <rate>
Sets a server-side ceiling on the total test bitrate; tests that request or exceed this rate are aborted
iperf3 -s --server-bitrate-limit 10M
Client-Specific Parameters
-c, --client <host>
Specifies the server address and runs in client mode
iperf3 -c 192.168.1.100
iperf3 -c example.com
-t, --time <seconds>
Sets the test duration (default: 10 seconds)
iperf3 -c 192.168.1.100 -t 30 # Test for 30 seconds
-n, --bytes <byte_count>
Specifies the total amount of data to transfer (overrides time parameter)
iperf3 -c 192.168.1.100 -n 100M # Transfer 100MB
iperf3 -c 192.168.1.100 -n 1G # Transfer 1GB
-k, --blockcount <block_count>
Specifies the number of data blocks to transfer
iperf3 -c 192.168.1.100 -k 1000
-b, --bandwidth <rate>
Sets the target bandwidth (UDP default: 1Mbps, TCP: unlimited)
iperf3 -c 192.168.1.100 -u -b 100M # UDP at 100Mbps
iperf3 -c 192.168.1.100 -b 50M # TCP limited to 50Mbps
-l, --length <bytes>
Sets the read/write buffer size (default TCP: 128 KB; UDP: derived from the path MTU, typically 1460 bytes)
iperf3 -c 192.168.1.100 -l 1M
-P, --parallel <number_of_streams>
Sets the number of parallel client streams
iperf3 -c 192.168.1.100 -P 4 # 4 parallel streams
-R, --reverse
Reverse testing (server sends data to client)
iperf3 -c 192.168.1.100 -R
--bidir
Bidirectional simultaneous testing
iperf3 -c 192.168.1.100 --bidir
-w, --window <size>
Sets the TCP window size
iperf3 -c 192.168.1.100 -w 512K
-M, --set-mss <bytes>
Sets the TCP maximum segment size (MSS)
iperf3 -c 192.168.1.100 -M 1400
-N, --no-delay
Sets TCP no-delay option, disabling Nagle’s algorithm
iperf3 -c 192.168.1.100 -N
-4, --version4
Forces IPv4 usage
iperf3 -c 192.168.1.100 -4
-6, --version6
Forces IPv6 usage
iperf3 -c fe80::1 -6
-S, --tos <value>
Sets the IP Type of Service (TOS) field
iperf3 -c 192.168.1.100 -S 0x10
-Z, --zerocopy
Uses zero-copy method (sendfile)
iperf3 -c 192.168.1.100 -Z
-O, --omit <seconds>
Omits the first N seconds of data (for warm-up)
iperf3 -c 192.168.1.100 -O 3 # Omit first 3 seconds
-T, --title <title>
Sets a title prefix for the test
iperf3 -c 192.168.1.100 -T "Test_1"
-C, --congestion <algorithm>
Sets the TCP congestion control algorithm
iperf3 -c 192.168.1.100 -C cubic
iperf3 -c 192.168.1.100 -C bbr
--get-server-output
Retrieves server-side output
iperf3 -c 192.168.1.100 --get-server-output
--logfile <file>
Saves output to a file
iperf3 -c 192.168.1.100 --logfile test.log
UDP-Related Parameters
-u, --udp
Uses UDP protocol (default: TCP)
iperf3 -c 192.168.1.100 -u
-b, --bandwidth <rate>
Sets target bandwidth for UDP testing (default: 1Mbps)
iperf3 -c 192.168.1.100 -u -b 100M
-l, --length <bytes>
UDP packet size (default derived from the path MTU, typically 1460 bytes)
iperf3 -c 192.168.1.100 -u -l 1400
SCTP-Related Parameters
--sctp
Uses SCTP protocol (requires system support)
iperf3 -c 192.168.1.100 --sctp
Authentication Parameters
--username <username>
Client-side authentication username (the password is prompted for, or read from the IPERF3_PASSWORD environment variable)
iperf3 -c 192.168.1.100 --username testuser --rsa-public-key-path public.key
--rsa-public-key-path <path>
Client specifies the RSA public key path (used to protect the credentials sent to the server)
iperf3 -c 192.168.1.100 --username testuser --rsa-public-key-path public.key
--rsa-private-key-path <path>
Server specifies the RSA private key path (used together with --authorized-users-path)
iperf3 -s --rsa-private-key-path private.key --authorized-users-path credentials.csv
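Setting up authentication requires an RSA key pair plus a server-side credentials file supplied with --authorized-users-path. A minimal sketch using standard OpenSSL commands (file names are illustrative; the exact credentials-file format is described in the iperf3 manual):
# Generate the key pair (server keeps the private key, clients receive the public key)
openssl genrsa -out private.key 2048
openssl rsa -in private.key -outform PEM -pubout -out public.key
# Server side
iperf3 -s --rsa-private-key-path private.key --authorized-users-path credentials.csv
# Client side (password supplied via environment variable instead of an interactive prompt)
IPERF3_PASSWORD=secret iperf3 -c 192.168.1.100 --username testuser --rsa-public-key-path public.key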
Binding Parameters
-B, --bind <address>
Binds to a specific local address (i.e., the interface associated with that address)
# Bind to a specific local IP
iperf3 -c 192.168.1.100 -B 192.168.1.50
# Newer releases (iperf3 3.10+) can bind to an interface by name with --bind-dev
iperf3 -c 192.168.1.100 --bind-dev eth0
--cport <port>
Specifies the local port used by the client
iperf3 -c 192.168.1.100 --cport 6000
Common Use Cases
1. Basic Bandwidth Testing
TCP Bandwidth Test (default 10 seconds)
# Server
iperf3 -s
# Client
iperf3 -c 192.168.1.100
UDP Bandwidth Test
# Client
iperf3 -c 192.168.1.100 -u -b 100M
2. Extended Duration Testing
60-second Test
iperf3 -c 192.168.1.100 -t 60
Fixed Data Volume Transfer
iperf3 -c 192.168.1.100 -n 1G
3. Multi-threaded Concurrent Testing
4 Parallel TCP Connections
iperf3 -c 192.168.1.100 -P 4
8 Parallel UDP Connections
iperf3 -c 192.168.1.100 -u -b 50M -P 8
4. Bidirectional Testing
Reverse Test (server sends data)
iperf3 -c 192.168.1.100 -R
Simultaneous Bidirectional Test
iperf3 -c 192.168.1.100 --bidir
5. UDP Jitter and Packet Loss Testing
Standard UDP Test
iperf3 -c 192.168.1.100 -u -b 100M -t 30
Output will include:
- Bandwidth
- Jitter
- Packet loss rate
6. Test Parameter Adjustment
Set Buffer Size
iperf3 -c 192.168.1.100 -l 256K
Set TCP Window Size
iperf3 -c 192.168.1.100 -w 1M
Disable Nagle’s Algorithm
iperf3 -c 192.168.1.100 -N
7. Rate-Limited Testing
Limit Transmission Rate to 50Mbps
iperf3 -c 192.168.1.100 -b 50M
8. JSON Format Output
Save as JSON for Programmatic Processing
iperf3 -c 192.168.1.100 -J > result.json
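The JSON output can then be parsed with standard tools such as jq. As a sketch, the end-of-test throughput for a TCP run is normally found under end.sum_sent and end.sum_received (values in bits per second):
jq '.end.sum_sent.bits_per_second, .end.sum_received.bits_per_second' result.json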
9. Specify Source Address or Interface
Bind to a Specific Local Address (or, on iperf3 3.10+, an interface via --bind-dev)
iperf3 -c 192.168.1.100 -B 192.168.1.50
10. Comprehensive Production Environment Testing
Integrated Test Script
#!/bin/bash
SERVER="192.168.1.100"
echo "=== TCP Single-Stream Test ==="
iperf3 -c $SERVER -t 30 -i 5
echo "=== TCP Multi-Stream Test (4 streams) ==="
iperf3 -c $SERVER -P 4 -t 30 -i 5
echo "=== UDP Test (100Mbps) ==="
iperf3 -c $SERVER -u -b 100M -t 30 -i 5
echo "=== Reverse TCP Test ==="
iperf3 -c $SERVER -R -t 30 -i 5
echo "=== Bidirectional Test ==="
iperf3 -c $SERVER --bidir -t 30 -i 5
Advanced Usage
1. Different Congestion Control Algorithms
Test BBR Algorithm Performance
iperf3 -c 192.168.1.100 -C bbr -t 60
Test Cubic Algorithm
iperf3 -c 192.168.1.100 -C cubic -t 60
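On Linux, the algorithms accepted by -C depend on what the kernel has loaded. A quick check, plus loading the BBR module if it is missing, could look like this:
# List available congestion control algorithms
sysctl net.ipv4.tcp_available_congestion_control
# Load BBR if it is not listed
sudo modprobe tcp_bbr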
2. Omit Initial Data (Warm-up Period)
Omit First 5 Seconds
iperf3 -c 192.168.1.100 -t 30 -O 5
3. Quality of Service (QoS) Configuration
Set DSCP Value
iperf3 -c 192.168.1.100 -S 0x28 # AF11
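Note that the value passed to -S is the full TOS byte, i.e. the DSCP value shifted left by two bits. A couple of additional common mappings for reference:
iperf3 -c 192.168.1.100 -S 0x68 # AF31 (DSCP 26)
iperf3 -c 192.168.1.100 -S 0xB8 # EF (DSCP 46)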
4. IPv6 Testing
IPv6 Server
iperf3 -s -6
IPv6 Client
iperf3 -c fe80::1%eth0 -6
5. Zero-Copy Transmission
Enable Zero-Copy for Improved Performance
iperf3 -c 192.168.1.100 -Z
6. Multiple Simultaneous Clients
Client 1
iperf3 -c 192.168.1.100 -p 5201 -t 60 &
Client 2
iperf3 -c 192.168.1.100 -p 5202 -t 60 &
Note: Server must run multiple instances or use different ports.
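For reference, the matching server side could be two listeners on separate ports, for example run as daemons:
iperf3 -s -p 5201 -D
iperf3 -s -p 5202 -D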
7. Retrieve Server-Side Output
iperf3 -c 192.168.1.100 --get-server-output
8. Daemon Mode
Run Server as Background Daemon
iperf3 -s -D
View Running iperf3 Processes
ps aux | grep iperf3
Terminate Daemon
killall iperf3
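If you prefer a targeted shutdown over killall, iperf3 can record its process ID with -I/--pidfile; a small sketch (the path is illustrative):
iperf3 -s -D -I /var/run/iperf3.pid
kill $(cat /var/run/iperf3.pid)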
Interpreting Results
TCP Test Results Example
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.10 GBytes 941 Mbits/sec 0 sender
[ 5] 0.00-10.04 sec 1.10 GBytes 937 Mbits/sec receiver
Key Metrics Explained:
- Interval: Time interval
- Transfer: Amount of data transferred
- Bitrate: Bandwidth (throughput)
- Retr: TCP retransmission count (lower is better)
- Cwnd: TCP congestion window size (shown in the client's per-interval output)
UDP Test Results Example
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-10.00 sec 114 MBytes 95.4 Mbits/sec 0.015 ms 0/81570 (0%)
Key Metrics Explained:
- Jitter: Variation in packet arrival times (lower is better)
- Lost/Total: Lost packets / Total packets
- Packet Loss Rate: Percentage (0% is ideal)
Performance Evaluation Standards
TCP Performance:
- Excellent: Near theoretical bandwidth, Retr < 10
- Good: Achieves 80% of theoretical bandwidth
- Fair: Achieves 60% of theoretical bandwidth
- Poor: < 50% of theoretical bandwidth or excessive Retr
UDP Performance:
- Excellent: Packet loss < 0.1%, Jitter < 5ms
- Good: Packet loss < 1%, Jitter < 10ms
- Fair: Packet loss < 3%, Jitter < 30ms
- Poor: Packet loss > 5% or Jitter > 50ms
Troubleshooting
1. Connection Refused
Problem: connect failed: Connection refused
Solutions:
# Check if server is running
ps aux | grep iperf3
# Check if port is listening
netstat -tuln | grep 5201
# Check firewall
sudo firewall-cmd --list-all # CentOS/RHEL
sudo ufw status # Ubuntu
# Open port
sudo firewall-cmd --add-port=5201/tcp --permanent # CentOS/RHEL
sudo firewall-cmd --reload # CentOS/RHEL: apply the change
sudo ufw allow 5201/tcp # Ubuntu
2. Unstable Test Results
Possible Causes and Solutions:
- Network Congestion
# Test with lower bandwidth
iperf3 -c 192.168.1.100 -b 50M
- Too Many Concurrent Connections
# Reduce parallel streams
iperf3 -c 192.168.1.100 -P 2
- Test Duration Too Short
# Increase test duration and omit initial seconds
iperf3 -c 192.168.1.100 -t 60 -O 5
3. Severe UDP Packet Loss
Solutions:
# Reduce transmission rate
iperf3 -c 192.168.1.100 -u -b 50M
# Decrease packet size
iperf3 -c 192.168.1.100 -u -b 100M -l 1000
4. Bandwidth Significantly Below Expected
Checklist:
- Network Rate Limiting
# Check for QoS restrictions
tc qdisc show
- CPU Load
# Monitor CPU utilization
top
htop
- MTU Configuration
# Check MTU
ip link show
# Adjust MTU (if necessary)
sudo ip link set eth0 mtu 9000
- TCP Window Size
# Increase TCP window
iperf3 -c 192.168.1.100 -w 2M
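Also note that the window requested with -w is capped by the kernel's socket-buffer limits. On Linux you can inspect and, if needed, raise them (the values below are illustrative):
# Check current limits
sysctl net.core.rmem_max net.core.wmem_max
# Raise them temporarily (persist via /etc/sysctl.conf if required)
sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.core.wmem_max=16777216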
5. Version Incompatibility
Problem: Different client and server versions causing issues
Solutions:
# Check version
iperf3 -v
# Standardize versions (upgrade or downgrade)
sudo apt-get install iperf3=<version_number>
6. Permission Issues
Problem: Cannot bind to privileged ports (< 1024)
Solutions:
# Use sudo
sudo iperf3 -s -p 80
# Or use non-privileged port
iperf3 -s -p 5201
7. IPv6 Connection Issues
Solutions:
# Ensure IPv6 is specified
iperf3 -s -6
iperf3 -c fe80::1%eth0 -6
# Check IPv6 configuration
ip -6 addr show
Practical Tips
1. Quick Network Diagnostic Script
#!/bin/bash
# quick_network_test.sh
SERVER=$1
if [ -z "$SERVER" ]; then
echo "Usage: $0 <server_ip>"
exit 1
fi
echo "Testing connection to $SERVER..."
echo ""
echo "1. TCP Test (10s)"
iperf3 -c $SERVER -t 10 -f m
echo ""
echo "2. UDP Test (100Mbps)"
iperf3 -c $SERVER -u -b 100M -t 10
echo ""
echo "3. Parallel TCP (4 streams)"
iperf3 -c $SERVER -P 4 -t 10 -f m
echo "Test completed!"
2. Performance Benchmark Suite
#!/bin/bash
# benchmark.sh
SERVER=$1
DURATION=30
OUTPUT_DIR="iperf_results"
mkdir -p $OUTPUT_DIR
echo "Starting comprehensive network benchmark..."
# TCP tests
iperf3 -c $SERVER -t $DURATION -J > $OUTPUT_DIR/tcp_single.json
iperf3 -c $SERVER -t $DURATION -P 4 -J > $OUTPUT_DIR/tcp_parallel.json
iperf3 -c $SERVER -t $DURATION -R -J > $OUTPUT_DIR/tcp_reverse.json
# UDP tests
iperf3 -c $SERVER -u -b 100M -t $DURATION -J > $OUTPUT_DIR/udp_100m.json
iperf3 -c $SERVER -u -b 500M -t $DURATION -J > $OUTPUT_DIR/udp_500m.json
echo "Benchmark completed! Results saved to $OUTPUT_DIR/"
3. Network Quality Monitor
#!/bin/bash
# monitor_network.sh
SERVER=$1
INTERVAL=300 # 5 minutes
while true; do
TIMESTAMP=$(date +"%Y-%m-%d %H:%M:%S")
echo "[$TIMESTAMP] Testing..."
RESULT=$(iperf3 -c $SERVER -t 10 -f m | grep "sender")
echo "[$TIMESTAMP] $RESULT" >> network_monitor.log
sleep $INTERVAL
done
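One way to run the monitor unattended in the background (assuming it is saved as monitor_network.sh):
chmod +x monitor_network.sh
nohup ./monitor_network.sh 192.168.1.100 >/dev/null 2>&1 &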
Summary
iperf3 is a powerful network testing tool that, through appropriate use of various parameters, enables:
- Network Bandwidth Measurement - TCP/UDP maximum throughput assessment
- Network Quality Evaluation - Packet loss rate and jitter analysis
- Network Problem Diagnosis - Bottleneck identification and performance optimization
- Network Configuration Verification - QoS, routing, and firewall testing
- Benchmarking - Network equipment performance evaluation
Usage Recommendations:
- Select appropriate test durations (30-60 seconds recommended)
- Conduct multiple tests and calculate averages
- Consider current network load conditions
- Combine with other tools (ping, traceroute, mtr) for comprehensive analysis
- Document test conditions and environmental factors
Important Considerations:
- Testing consumes significant bandwidth; avoid impacting production environments
- Configure firewalls and security groups appropriately
- Maintain version consistency between server and client
- Set reasonable bandwidth values for UDP testing
By mastering iperf3, you can effectively understand, monitor, and optimize network performance across diverse environments and use cases.