When it comes to end-user experience, peering has many advantages over IP transit-only designs. Better latency, less packet loss, and higher throughput all mean that your services work better and your users are happier.
While buying IP transit is a best-effort service, with no guarantee that data is delivered or that delivery meets any quality-of-service target, peering gives you direct control over the paths your traffic takes. In this third article in our “reasons to peer” series, we look at how peering lowers latency.
The shorter the trip, the better the latency
Latency is the delay between a user’s action and the response to that action from a website or an application – in networking terms, the total time it takes for a data packet to make a round trip. It is measured in milliseconds, and Internet quality depends heavily on it. For example, even a 2-second delay in a website’s loading time is enough to increase the bounce rate by more than 100%!
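To make the round-trip idea concrete, here is a minimal Python sketch that times one small packet’s round trip over TCP. It uses a local echo server as a stand-in for a remote endpoint (the host, port, and payload are illustrative, not part of the article):

```python
import socket
import threading
import time

def echo_server(server: socket.socket) -> None:
    """Accept one connection and echo a single small packet back."""
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(64))

def measure_rtt_ms(host: str, port: int) -> float:
    """Return the round-trip time, in milliseconds, for one small packet."""
    start = time.perf_counter()
    with socket.create_connection((host, port)) as s:
        s.sendall(b"ping")
        s.recv(64)  # block until the echoed reply comes back
    return (time.perf_counter() - start) * 1000

# Stand-in "remote" endpoint: a local echo server on an ephemeral port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

rtt = measure_rtt_ms("127.0.0.1", server.getsockname()[1])
print(f"round-trip time: {rtt:.2f} ms")
```

On a loopback interface this reports a fraction of a millisecond; over a real Internet path, every extra network the packet traverses adds to this number, which is exactly what peering trims away.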
Peering is the process by which two Internet networks connect and exchange traffic directly, delivering it to each other’s customers without paying a third party to carry that traffic across the Internet for them. The routing protocol that makes peering between ISPs possible is the Border Gateway Protocol (BGP), an open standard that benefits all ISPs.
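In practice, a peering session is just a BGP neighbor configured on each side. Below is a minimal FRR-style sketch of one side of such a session; the AS numbers (64500, 64501), IP addresses, and prefix are hypothetical documentation values, not a real configuration:

```
! One side of a hypothetical BGP peering session (FRR-style syntax).
! AS numbers and addresses are documentation examples only.
router bgp 64500
 neighbor 203.0.113.2 remote-as 64501
 neighbor 203.0.113.2 description peering-partner-at-IXP
 address-family ipv4 unicast
  network 198.51.100.0/24
 exit-address-family
```

Once both sides bring the session up, each network announces its own customers’ prefixes to the other, and traffic between them flows over the direct link instead of a transit provider.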
Below are the top five ways to get the maximum benefit from peering.