This knowledge base article explains how to troubleshoot connectivity between two servers in similar Class A or Class B IP ranges when the servers sit in different data centers. A common trigger is a configuration error, such as a typo in the IP space, that makes one server treat the other as a local-network neighbor when it is actually reached across the internet. The article covers verifying network configurations, using diagnostic tools such as MTR, interpreting CIDR notation, and resolving cross-data center routing problems. Reference CIDR charts for IPv4 and IPv6 are included for clarity.
CIDR Reference Charts
IPv4 CIDR Chart
CIDR Notation | Subnet Mask | Addresses per Subnet | Example Range
---|---|---|---
/8 | 255.0.0.0 | 16,777,216 | 10.0.0.0 - 10.255.255.255 (Class A)
/16 | 255.255.0.0 | 65,536 | 172.16.0.0 - 172.16.255.255 (Class B)
/24 | 255.255.255.0 | 256 | 192.168.1.0 - 192.168.1.255
/30 | 255.255.255.252 | 4 | 192.168.1.0 - 192.168.1.3

Usable hosts are two fewer than the address count shown, since the network and broadcast addresses are reserved (e.g., a /24 has 254 usable hosts).
IPv6 CIDR Chart
CIDR Notation | Prefix Length | Addresses per Subnet | Example Range (Abbreviated)
---|---|---|---
/32 | 32 bits | 2^96 | 2001:db8::/32
/48 | 48 bits | 2^80 | 2001:db8:1::/48
/64 | 64 bits | 2^64 | 2001:db8:1:1::/64
/128 | 128 bits | 1 | 2001:db8:1:1::1/128
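The address counts in both charts follow directly from the prefix length: an IPv4 /n prefix leaves 32 - n host bits, and an IPv6 /n prefix leaves 128 - n. A quick sanity check in Python (purely illustrative):

```python
# Addresses in a subnet = 2^(address_bits - prefix_length).
def addresses(prefix_len, address_bits=32):
    return 2 ** (address_bits - prefix_len)

# IPv4 rows from the chart above
print(addresses(8))    # 16777216
print(addresses(16))   # 65536
print(addresses(24))   # 256
print(addresses(30))   # 4

# IPv6 rows (128-bit addresses)
print(addresses(32, 128) == 2 ** 96)   # True
print(addresses(64, 128) == 2 ** 64)   # True
print(addresses(128, 128))             # 1
```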
Understanding Class A and Class B Ranges
- Class A (IPv4): first octet 1-126, default mask /8; each network holds 16,777,216 addresses (about 16.7 million). Example: 10.0.0.0/8.
- Class B (IPv4): first octet 128-191, default mask /16; each network holds 65,536 addresses. Example: 172.16.0.0/16.
- IPv6: Uses CIDR exclusively, with no direct class equivalents. Common prefixes like /32, /48, or /64 define network boundaries.
When two servers are in similar Class A or Class B ranges (e.g., 10.1.0.0/16 and 10.2.0.0/16) but located in different data centers, connectivity relies on proper routing across the internet or a private interconnect. A common issue is a misconfiguration, such as a typo in the IP space, where one server assumes the other is on the same local network (e.g., same /16 subnet) when it is actually in a different network routed externally.
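The on-link assumption described above can be reproduced with Python's standard `ipaddress` module, using this article's example addresses:

```python
import ipaddress

correct = ipaddress.ip_interface("10.1.1.10/16")   # intended configuration
typo    = ipaddress.ip_interface("10.1.1.10/8")    # fat-fingered prefix length
remote  = ipaddress.ip_address("10.2.1.20")        # server in the other data center

# With the correct /16, 10.2.1.20 falls outside the local subnet,
# so the kernel forwards packets to the default gateway.
print(remote in correct.network)   # False

# With the /8 typo, all of 10.0.0.0/8 looks on-link, so the server
# ARPs for 10.2.1.20 on the local segment and never uses the gateway.
print(remote in typo.network)      # True
```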
Troubleshooting Steps
1. Verify IP and CIDR Configuration
- Check IP Addresses and Subnets: Ensure both servers are correctly configured for their respective subnets. For example:
  - Server 1 (Data Center A): 10.1.1.10/16
  - Server 2 (Data Center B): 10.2.1.20/16
  - If Server 1 is misconfigured with a typo (e.g., 10.1.1.10/8 instead of /16), it may assume Server 2 is local, causing packets to be sent to the wrong gateway or dropped.
- Command: On each server, run `ip addr show`. Look for the interface (e.g., eth0) and confirm the IP address and CIDR notation (e.g., `inet 10.1.1.10/16`).
- Fix Typo in IP Space:
  - If a typo is detected (e.g., /8 instead of /16), correct the subnet mask: `sudo ip addr replace 10.1.1.10/16 dev eth0`
  - Verify the configuration with `ip addr show`.
- CIDR Mismatch Across Data Centers: Confirm the CIDR ranges align with the network topology. Servers in different data centers typically require distinct subnets (e.g., 10.1.0.0/16 vs. 10.2.0.0/16) with proper inter-data center routing.
2. Test Basic Connectivity with Ping
- Command: From Server 1, ping Server 2: `ping 10.2.1.20`
- Expected Output: Successful replies indicate basic connectivity. If there is no response:
  - Check if ICMP is blocked by firewalls in either data center or by intermediate ISPs.
  - Verify the target server is online.
  - If a typo in the IP space caused a local routing assumption, packets may never reach the correct gateway, resulting in timeouts.
3. Use MTR for Detailed Path Analysis
MTR (My Traceroute) combines ping and traceroute to identify packet loss and latency at each hop, which is critical when traffic crosses data centers over the internet.
- Install MTR:
  - On Ubuntu/Debian: `sudo apt-get install mtr`
  - On CentOS/RHEL: `sudo yum install mtr`
- Run MTR: `mtr 10.2.1.20`
- Interpret MTR Output:
  - Loss%: Packet loss at a hop (anything above 0% suggests issues at that router, possibly in the data center or ISP network).
  - Snt: Number of packets sent.
  - Last/Avg/Best/Wrst: Latency metrics in milliseconds. High latency or variance may indicate internet routing issues or congestion.
  - Example Output:

        Host            Loss%  Snt  Last   Avg  Best  Wrst
        1. 10.1.1.1      0.0%   10   1.2   1.3   1.1   1.5
        2. 203.0.113.1   0.0%   10  10.5  10.7   9.8  11.2
        3. 10.2.1.20     0.0%   10  15.1  15.3  14.9  16.0

  - If loss occurs at a hop outside the local network (e.g., an ISP router), the issue may be external. Contact the data center or ISP.
  - If MTR shows packets stopping at the local gateway (e.g., 10.1.1.1), a misconfigured subnet mask (e.g., /8 instead of /16) may be causing the server to assume a local route.
- IPv6 MTR: `mtr -6 2001:db8:1:1::20`. Ensure IPv6 routing is enabled and check for similar misconfigurations in IPv6 prefixes.
4. Check Routing Tables
- Command: Verify routes on both servers with `ip route`.
- Expected Output:

        default via 10.1.1.1 dev eth0
        10.1.0.0/16 dev eth0 proto kernel scope link src 10.1.1.10

  Ensure the destination IP (e.g., 10.2.1.20) is reachable via the correct gateway. If the routing table incorrectly assumes the destination is local due to a typo (e.g., an on-link 10.0.0.0/8 entry), add the correct route: `sudo ip route add 10.2.0.0/16 via 10.1.1.1`
- For IPv6, run `ip -6 route` and confirm the IPv6 destination is routable and not incorrectly assumed to be local.
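Why the /8 typo changes the route lookup: the kernel selects the most specific matching prefix, and a bogus on-link 10.0.0.0/8 entry is more specific than the default route. A simplified longest-prefix match in Python (a sketch, not the kernel's actual FIB algorithm; the route entries mirror this article's examples):

```python
import ipaddress

def lookup(dest, routes):
    """Return the (prefix, next-hop) route with the longest matching prefix."""
    dest = ipaddress.ip_address(dest)
    matches = [(net, via) for net, via in routes
               if dest in ipaddress.ip_network(net)]
    return max(matches, key=lambda r: ipaddress.ip_network(r[0]).prefixlen)

# Healthy table: 10.2.1.20 misses the on-link /16 and takes the default gateway.
good = [("0.0.0.0/0", "via 10.1.1.1"), ("10.1.0.0/16", "on-link")]
print(lookup("10.2.1.20", good))   # ('0.0.0.0/0', 'via 10.1.1.1')

# With the /8 typo, the bogus on-link entry wins and the gateway is bypassed.
bad = [("0.0.0.0/0", "via 10.1.1.1"), ("10.0.0.0/8", "on-link")]
print(lookup("10.2.1.20", bad))    # ('10.0.0.0/8', 'on-link')
```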
5. Inspect Firewall and Security Rules
- Check Firewall Rules:
  - For iptables: `sudo iptables -L -v -n`
  - For firewalld: `firewall-cmd --list-all`
  - For IPv6: `sudo ip6tables -L -v -n`
  - Ensure no rules block traffic between the servers' IPs or required ports in either data center.
- Security Groups (Cloud): If using AWS, Azure, or GCP, verify security group rules allow traffic on required ports (e.g., TCP/UDP) for both ingress and egress across data centers.
- Inter-Data Center Firewalls: Check for provider-specific firewalls or DDoS protection that may filter traffic between data centers.
6. Verify CIDR and Subnet Overlap
- Issue: Overlapping or misconfigured CIDR ranges can cause routing issues, especially with a typo in the IP space. For example:
  - Server 1: 10.1.1.10/16 (correct; Data Center A covers 10.1.0.0 - 10.1.255.255)
  - Server 2: 10.2.1.20/16 (correct; Data Center B covers 10.2.0.0 - 10.2.255.255)
  - Misconfiguration: Server 1 configured as 10.1.1.10/8 assumes 10.2.1.20 is local, bypassing the gateway.
- Solution: Use a subnet calculator or manually verify the ranges. Correct the CIDR notation to match the actual network topology and ensure proper routing to the remote data center.
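Overlap can also be verified programmatically rather than by eye, again using this article's example subnets:

```python
import ipaddress

dc_a = ipaddress.ip_network("10.1.0.0/16")   # Data Center A
dc_b = ipaddress.ip_network("10.2.0.0/16")   # Data Center B
typo = ipaddress.ip_network("10.0.0.0/8")    # what the /8 typo implies

# Correctly configured subnets are disjoint, so routing between them is unambiguous.
print(dc_a.overlaps(dc_b))   # False

# The typo'd /8 swallows both data centers' ranges, which is why the
# misconfigured server treats the remote subnet as local.
print(typo.overlaps(dc_a), typo.overlaps(dc_b))   # True True
```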
7. Verify Inter-Data Center Routing
- Cross-Data Center Connectivity: Since the servers are in different data centers, confirm that routing between them is properly configured:
  - Check if a VPN, MPLS, or direct connect (e.g., AWS Direct Connect, Azure ExpressRoute) is used.
  - Verify with the network team or provider that routes between 10.1.0.0/16 and 10.2.0.0/16 are correctly set up.
- Internet Routing: If traffic traverses the public internet:
  - Ensure public IPs or NAT configurations are correct.
  - Use MTR to identify external hops and potential ISP issues.
  - Check for asymmetric routing (packets taking different paths to/from the servers), which can cause issues if firewalls are stateful.
8. Additional Tools
- tcpdump: Capture packets to inspect traffic: `sudo tcpdump -i eth0 host 10.2.1.20`. Look for sent/received packets or drops. If packets are sent but no replies return, the issue may be at the remote data center or an intermediate router.
- ss (or the older netstat): Check open ports with `ss -tuln` and ensure the target service (e.g., SSH on port 22) is listening on Server 2.
- nslookup/dig: If using DNS, verify name resolution with `dig server2.example.com` and ensure it resolves to the correct IP (10.2.1.20).
9. Common Issues and Fixes
- IP Space Typo: If a server assumes a local network due to a typo (e.g., /8 instead of /16), correct the CIDR: `sudo ip addr replace 10.1.1.10/16 dev eth0`
- Firewall Blocking: Temporarily disable firewalls for testing with `sudo ufw disable` or `sudo iptables -F`, and re-enable them once testing is complete.
- Routing Misconfiguration: Add missing routes: `sudo ip route add 10.2.0.0/16 via 10.1.1.1`
- MTU Issues: Test for MTU mismatches, common in cross-data center setups: `ping -s 1472 -M do 10.2.1.20`. If it fails, lower the MTU: `sudo ip link set eth0 mtu 1400`
- IPv6 Issues: Check whether IPv6 is disabled with `sysctl net.ipv6.conf.all.disable_ipv6`. If it reports 1, enable IPv6: `sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0`
- NAT/Internet Gateway Issues: If traffic is routed over the internet, verify NAT rules or public IP mappings in both data centers.
- Inter-Data Center Latency: High latency between data centers may indicate ISP or routing issues. Use MTR to pinpoint problematic hops and escalate to the data center provider or ISP.
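The 1472-byte payload in the MTU test above is not arbitrary: with the don't-fragment flag set, the ICMP payload plus the 8-byte ICMP header and the 20-byte IPv4 header must fit within the interface MTU. The arithmetic:

```python
IPV4_HEADER = 20   # bytes, without IP options
ICMP_HEADER = 8    # bytes, ICMP echo request header

def max_ping_payload(mtu):
    """Largest ICMP echo payload that fits in one unfragmented IPv4 packet."""
    return mtu - IPV4_HEADER - ICMP_HEADER

print(max_ping_payload(1500))   # 1472 -> matches `ping -s 1472 -M do`
print(max_ping_payload(1400))   # 1372 -> retest with this size after lowering the MTU
```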
Conclusion
Connectivity issues between servers in different data centers, especially when caused by a typo in the IP space (e.g., incorrect CIDR notation), can be resolved by verifying IP configurations, correcting subnet masks, and using tools like MTR to diagnose network paths. Ensure proper routing between data centers, check firewall and security group settings, and confirm inter-data center connectivity (e.g., VPN or direct connect). For persistent issues, provide MTR output and routing details to your network administrator, data center provider, or ISP for further investigation.