Agent-to-target probing measures end-to-end network connectivity. Agents send packets to targets and measure the response, with each hop along the way forwarding the packets toward the final destination. This lets the viewer quickly spot changes in trends and find the affected parts of the network.
Each packet is sent once, and every hop on the path forwards it toward the target. The cumulative time the hops take to forward the packet to the next hop determines the latency between the agent and the target.
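As a sketch of how per-probe measurements become reported metrics, the following hypothetical helper (not the agent's actual code) summarizes one minute of round-trip times into latency, jitter, and loss:

```python
import statistics

def summarize_probes(rtts_ms):
    """Summarize one minute of probe results.

    rtts_ms: per-probe round-trip times in milliseconds; None marks a
    probe that never received a response (counted as loss).
    """
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    latency = statistics.mean(received) if received else None
    # One common jitter definition: mean absolute difference between
    # consecutive RTT samples (RFC 3550 defines a smoothed variant).
    diffs = [abs(b - a) for a, b in zip(received, received[1:])]
    jitter = statistics.mean(diffs) if diffs else 0.0
    return {"latency_ms": latency, "jitter_ms": jitter, "loss_pct": loss_pct}
```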
The following article describes the types of metrics provided by agents.
| | Latency | Jitter | Loss | Timeout | Out of Order Packets |
|---|---|---|---|---|---|
| ICMP | ✓ | ✓ | ✓ | ✓ | ✓ |
| UDP | ✓ | ✓ | ✓ | ✓ | ✓ |
HTTP and speed test targets do not report packet loss. This changes how metrics are displayed in the Dashboard:
HTTP availability refers to consistent and reliable access to web resources through the Hypertext Transfer Protocol (HTTP). In the context of websites and web services, availability is a crucial aspect of providing a seamless user experience. Monitoring HTTP timing shows performance bottlenecks in client-to-server or server-to-server communications.
Service Experience Insights monitors the network quality from the agent to the host server but does not provide application monitoring.
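A minimal illustration of timing an HTTP request end to end; the product's own probing is more granular (DNS, connect, TLS, first byte), and `http_response_time` is a hypothetical helper, not part of any product API:

```python
import time
import urllib.request

def http_response_time(url, timeout=10):
    """Time a full HTTP request/response cycle in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()                      # drain the body
        status = resp.status
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return status, elapsed_ms
```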
HTTP Request Response Time
HTTP Availability
Red indicates when static and cloud agents were disconnected
Gray indicates when mobile agents were disconnected. Gray is used for mobile agents because they are expected to be offline when the host PC is not in use
Path discovery is an optional setting in Probing Distributions
Once enabled, agents send synthetic path discovery traffic to probe all available paths or hops between the agent and the target
Click on a plot line to reveal the IP, ASN, ISP, and latency for each hop
Path Discovery Methodology
Traceroute Concept
ICMP Traceroute
An ICMP echo request message is sent to the target host with an incrementing TTL.
UDP Traceroute
TCP Traceroute
TCP Traceroute (Privileged)
Configure firewalls to allow traceroute
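The incrementing-TTL idea behind all of the traceroute variants above can be shown with a toy simulation; no real packets are sent, and the hop addresses are illustrative:

```python
def discover_path(hops, max_ttl=30):
    """Toy model of incrementing-TTL path discovery.

    hops: ordered router/host addresses between agent and target,
    target last. A probe sent with TTL=n expires at hop n, which
    answers with an ICMP Time Exceeded; the target itself replies
    normally, ending discovery.
    """
    discovered = []
    for ttl in range(1, max_ttl + 1):
        if ttl > len(hops):          # probe outlived the whole path
            break
        responder = hops[ttl - 1]    # hop where the TTL reaches zero
        discovered.append(responder)
        if responder == hops[-1]:    # reached the target: normal reply
            break
    return discovered
```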
See Bandwidth Estimation to learn more about the bandwidth consumed by speed tests.
Example of on-demand Speed Tests
Agent data collection and aggregation
| Metrics | Probing interval: 60 seconds | Probing interval: 1 second |
|---|---|---|
| Loss | 1 | 1 |
| RTT | 1 | 1 |
| Test values per minute | 2 | 120 |
Test values for each minute are saved to the agent’s local memory and aggregated into metrics values.
Every 60 seconds, the test results for each metric are aggregated into five metric values:
| Metrics | Max | Mean | Median | 95th Percentile | 99th Percentile |
|---|---|---|---|---|---|
| Latency | 1 | 1 | 1 | 1 | 1 |
| Jitter | 1 | 1 | 1 | 1 | 1 |
| Loss | 1 | 1 | 1 | 1 | 1 |
| Metric Values Per Minute | 3 | 3 | 3 | 3 | 3 |
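The per-minute aggregation into the five metric values can be sketched with the standard library; this is an illustration, not the agent's implementation:

```python
import statistics

def aggregate(test_values):
    """Collapse one minute of test values for a single metric
    (latency, jitter, or loss) into five metric values."""
    pct = statistics.quantiles(test_values, n=100, method="inclusive")
    return {
        "max": max(test_values),
        "mean": statistics.mean(test_values),
        "median": statistics.median(test_values),
        "p95": pct[94],   # 95th percentile
        "p99": pct[98],   # 99th percentile
    }
```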
| Metric type | Shortest/Most Frequent | Longest/Least Frequent |
|---|---|---|
| ICMP | 1 second | 10 minutes |
| UDP | 100 ms | 10 minutes |
| HTTP | 30 seconds | 10 minutes |
| Speed Test | Defined hourly probing intervals | 1, 6, 12, or 24 hours |
Saving metrics values to the time series database
Metrics values in the time series database
| Age of data | Alignment Period |
|---|---|
| 1 minute to 14 days | 1 minute |
| 15 to 28 days | 3 minutes |
| 29 to 42 days | 5 minutes |
| 43 to 184 days | 1 hour |
| 185 to 366 days | 3 hours |
| 367 to 732 days | 6 hours |
| 733 to 1463 days | 12 hours |
| ≥ 1464 days | 1 day |
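The age-to-alignment mapping above can be encoded as a simple lookup; a sketch assuming inclusive day boundaries as listed in the table:

```python
from datetime import timedelta

# (maximum age in days, alignment period) pairs from the table above
_ALIGNMENT = [
    (14,   timedelta(minutes=1)),
    (28,   timedelta(minutes=3)),
    (42,   timedelta(minutes=5)),
    (184,  timedelta(hours=1)),
    (366,  timedelta(hours=3)),
    (732,  timedelta(hours=6)),
    (1463, timedelta(hours=12)),
]

def alignment_period(age_days):
    """Return the time-series alignment period for data of a given age."""
    for max_age, period in _ALIGNMENT:
        if age_days <= max_age:
            return period
    return timedelta(days=1)   # ≥ 1464 days
```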
Data retention