[Dev Blog] Riot Direct & Improving Ping in Europe
Riot’s latest dev blog is all about networking/routing issues in Europe and how those are being solved via Riot Direct:
Back in January we updated you on our progress with Riot Direct, an initiative we’d undertaken to establish a dedicated network highway for League of Legends traffic in North America and Europe. With that service now live and humming along, we’d like to share some of the reductions in latency we’re seeing as a result. And we’ll also discuss how we got here in more depth, so you can better understand why we saw a need for a project like Riot Direct and the issues we hoped to address with it.
To fill in this context, we spoke with Peyton Maynard-Koran, a technical director at Riot with over 20 years of experience in the telecoms industry, product owner of Riot Direct and a certifiable Wukong fanboy. “Wukong’s always been my boy!” he says, whipping out a Monkey King phone case and directing us to a framed Wukong illustration his best friend commissioned for him. When he’s not putting the ‘hell’ in helicopter by cycloning into the middle of the enemy team, then, he’s figuring out new and creative ways to improve your connection to the League of Legends servers.
We certainly had improvements we wanted to make. Here’s an EUW heatmap of the in-game latency levels experienced by players across Europe just prior to the activation of the Riot Direct network (green = 0-65ms, orange = 66-100ms, red = 101ms or more).
It’s not easy being green for many in southern Europe, but here’s what the EUW situation looks like post-launch:
The picture looks somewhat different in EUNE, as we’ve only recently begun the legal and contract work with central and eastern European providers. We know the Vienna point of presence will help with the latency and stability of these connections, but we still want to expand our own infrastructure further east and south. At this point the EUNE results show room for improvement, but we’re optimistic about the enhancements we’ll be delivering to this region in the months ahead.
League of Legends is an online game (OMG SPOILER!). Meaning: regardless of how enjoyable the game is from a design standpoint, if the delivery mechanism for that experience fails to perform, League of Legends ceases to be fun. Nobody enjoys lagging out, being forced to look on helplessly as your champion jumps about erratically on-screen, wondering if you’ll be staring at a death recap screen when the choppy internet waters settle.
When we opened the Amsterdam data centre last June, we purchased a huge amount of bandwidth and transit traffic from major ISPs and backbone providers (Level 3, GTT, Telia, Hibernia, etc). We assumed that having all these network connections would normalise the player experience and we’d be able to find the fastest path to players. Yet game traffic still wasn’t getting where we wanted it to go as fast as we wanted it to get there. For example, a number of German players, despite being located near our data centre, were playing with over 120ms ping. Making matters worse, ping remained highly variable and there was no obvious way to fix it.
We hadn’t taken full control of our situation. If a link failed on a network outside of ours, it still had a negative effect on League players, even though it had nothing to do with us. Under the old model, Maynard-Koran explains, “we could never maintain or influence the way routes were being populated throughout the network, the way that traffic was actually moving, unless we built our own network.”
“That’s why we built out this network in Europe. Instead of traffic taking the most preferred path to our data centre, it was bouncing all over the continent. By having our own infrastructure in place, we can not only take in the traffic and force it to come on to our network, but we can also force it to take the exact same route back. What that does for us is it creates an environment where the player gets a very symmetrical path and all these routers are removed.”
To understand why ISPs sending League traffic on a convoluted journey through numerous routers poses difficulties, it’s important to realise that online-gaming traffic looks different from most other internet traffic. Standard web traffic – movies, music, cat pics, etc – travels in 1500-byte packets. League traffic, on the other hand, involves a rapid stream of updates, but each message is relatively small – 55 bytes. In terms of our network traffic profile, we’re much more akin to an investment bank doing high-frequency trading than a Facebook or a Google that’s concerned primarily with raw bandwidth.
Routers are sized by how many packet headers they can process, because reading headers is fundamentally what routers do. So, in the case of League traffic, we’re asking a router that’s accustomed to processing a single 1500-byte packet in a given amount of time to process 27x the number of packet headers in the same window. The more router waypoints in our game’s journey from data centre to player, the greater the risk of router overload, resulting in dropped packets and a head-desking play experience.
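The 27x figure above is just packet-size arithmetic on the byte counts mentioned earlier; here’s a quick sanity check of it (a rough sketch, not production numbers):

```python
# Rough arithmetic behind the "27x" figure: to move the same number
# of bytes, a stream of small game packets forces a router to parse
# roughly 27x as many headers as MTU-sized web packets would.
WEB_PACKET_BYTES = 1500   # typical full-size packet (movies, music, cat pics)
GAME_PACKET_BYTES = 55    # approximate League update packet (from the post)

header_ratio = WEB_PACKET_BYTES / GAME_PACKET_BYTES
print(f"~{header_ratio:.0f}x more headers per byte for game traffic")
```

In other words, a router rated comfortably for bulk web traffic can still choke on a gaming workload, because its bottleneck is headers per second, not bytes per second.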
The internet is just a bunch of computers talking to one another: you connect to a device and it takes you down a specific path. The problem we have as a game company is that the internet uses all of those paths, which creates a fluctuating quality of experience for the player. To solve this predicament, we investigate the options and decide which path we like best, then we rent a special lane on that path and set it up so that all League of Legends game traffic travels along it.
We do that by putting a router at the edge of a given PoP (point of presence) and peering directly with a regional ISP. We create a sort of off-lane: as soon as the game traffic is about to head out onto the regular internet, it goes to our router instead. And then we’ve already set up this special path that gets players’ inputs to the data centre in the fastest possible way. Like an LCS jungler rotating between neutral-monster camps, we want our game traffic to take the most economical route possible, every time.
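To make the principle concrete, here’s a minimal Python sketch of deliberate path selection as described above. The candidate paths and their numbers are entirely hypothetical; the point is simply that Riot Direct picks a path by measured quality rather than accepting whatever default the public internet chooses:

```python
# Hypothetical sketch: among candidate paths to the data centre,
# prefer the lowest-latency one, breaking ties on router-hop count.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    latency_ms: float
    hops: int

candidates = [
    Path("public internet (default)", 120.0, 18),
    Path("peered PoP, Amsterdam",      22.0,  4),
    Path("peered PoP, Frankfurt",      30.0,  5),
]

# Lowest latency wins; fewer hops breaks ties.
best = min(candidates, key=lambda p: (p.latency_ms, p.hops))
print(f"Route game traffic via: {best.name} ({best.latency_ms} ms)")
```

Fewer hops also means fewer routers that can drop those small, header-heavy game packets, which is why hop count makes a sensible tie-breaker here.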
Alas, even the most elegantly designed technology can fall over from time to time, so we ensured that if a PoP goes down, the system automatically reverts to standard internet routing and your connection to the game server won’t be interrupted. And as an additional failsafe, our worldwide team will be supporting Riot Direct 24/7, with software alerting us anytime there’s a hiccup in the quality of players’ experience. Stability is hugely important to us, and you deserve to keep hard-carrying on that late-game Tristana even if a PoP between you and the game server blinks offline 50 minutes into your match.
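The failover described above boils down to a simple priority scheme. A hedged sketch (the function name and route labels are illustrative, not Riot’s actual code):

```python
# Illustrative failover logic: use the dedicated path while its PoP
# is healthy; silently fall back to the public internet if it isn't,
# so the player's session survives without a reconnect.
def choose_route(pop_is_up: bool) -> str:
    if pop_is_up:
        return "riot-direct"      # dedicated low-latency lane
    return "public-internet"      # automatic fallback path

print(choose_route(True))   # riot-direct
print(choose_route(False))  # public-internet
```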
“At the outset, we thought that this was a 2-3 year project based on how slowly the telecom companies move,” says Maynard-Koran. “For example, you’ll have to have your order in 90 days before you expect service, and you have to have a long history with them. Luckily our team has a lot of connections within the industry, so we were able to fast-forward that. On top of that, we thought that getting infrastructure built out would take longer as well, but we were able to fast-track that too. So we got NA finished in under a year, and if you think about EU, we started in probably November of last year, and we expect to launch the initial phase in late August.”
When Maynard-Koran says ‘finished’, that means we’ve built the infrastructure and we’ve peered with ISPs that cover more than 50% of our players. We’re going to continue to add more ISPs and we’re going to continue to have to do things like route balancing to make sure data is going in and out of the right entry points, but we think we can get up to 80% coverage by the end of this year.
Even though the methodology for Riot Direct remains the same whether we’re building out our North American or European infrastructure, there were unique considerations to expanding our European coverage. Europe is ultra-connected, with even the smallest member countries having three or four ISPs. We found cities with as many as 15 ISPs.
North America has roughly 75 ISPs, but the market is dominated by a handful of companies: Comcast accounts for about 25% of players, with AT&T and Verizon combining for another 40%, followed by Time Warner Cable and Charter. So if the Riot Direct team hit 10 companies in America, they could cover 80% of players. In Europe, getting to 80% would probably take around 40 companies.
That said, the path to expanding European coverage is a little easier. “There are public exchange policies [in EU] that drive a faster path to getting connected,” says Maynard-Koran, “whereas in NA we’ve even gotten to the point where we’ve had to make legal challenges to get connected with one particular ISP. We’re dealing with that right now with a company in Canada that just refuses to connect with us.” In other words, if we need to fight to make sure players can enjoy a better League connection, we’re happy to throw down.
With Riot Direct now switched on, plans are already in motion to make it even better. We’re already looking at adding additional points of presence in Portugal (Lisbon or Porto), Italy (Rome), Greece (Athens) and potentially in Poland as well. We’re continuing to pursue new peering relationships with ISPs so their League data can travel via our super-duper-mega-highway.
We’re also developing software solutions to handle dynamic route manipulation, shepherding the way data comes into our entry points so that it takes the lowest-latency, symmetrical path and travels through the lowest number of routers off our network. But it also gives us the ability to manipulate traffic, so if we’re getting a DDoS event or if we see a route that’s bad, we can move it to a different entry point or move it to a different route without the player even noticing and without a reconnect happening.
“ISPs get lazy because they have a tonne of routes they have to manage,” says Maynard-Koran. “They leave default metrics on, like making the oldest-aging border-gateway-protocol route the preferred one. For them to go through that by hand is hard. We look at being able to do that via software, and we think that software will start to take over the way the networks work. And being able to do that route manipulation is the first piece. It will be the first time that we’re able to manipulate routes based on something that’s not a router, which is actually a huge step in the industry and it’s something we’re working towards. That means that software programmers can finally run the internet!”
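To illustrate the contrast Maynard-Koran is drawing (with made-up numbers): BGP’s default tie-breaking tends to keep the oldest established route, whereas software-driven selection can instead prefer whichever route currently measures fastest:

```python
# Hypothetical routes to the same destination. BGP's default
# tie-break keeps the longest-established route; software-driven
# selection can pick the one with the best measured latency.
routes = [
    {"via": "peer-A", "age_s": 90000, "measured_ms": 45},
    {"via": "peer-B", "age_s": 1200,  "measured_ms": 19},
]

default_pick  = max(routes, key=lambda r: r["age_s"])        # oldest wins
software_pick = min(routes, key=lambda r: r["measured_ms"])  # fastest wins

print(f"default BGP tie-break: {default_pick['via']}, "
      f"latency-aware software: {software_pick['via']}")
```

The older route is stabler by one definition, but a latency-aware controller notices that the newer peering session is currently more than twice as fast and steers traffic there instead.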
We’re hopeful that with the launch of Riot Direct, you’re enjoying a stable, low-latency gameplay experience. Please send along feedback and let us know if you’re able to detect any improvement. Our team will continue to monitor the situation closely and optimise the network to reach our goal of 100% of European players clocking in below 65ms ping. And remember, if you are experiencing connectivity issues, you can download our network diagnostic tool and submit results to help us identify problems and new places to focus our attention. Now let’s get back into game. Wukong insta-lock, anyone?
If you have any questions, feel free to ask me at @NoL_Chefo or e-mail me at firstname.lastname@example.org.