Blizzard on Recent Diablo 2 Server Outages
Since the launch of Diablo II: Resurrected, we have been experiencing multiple server issues, and we wanted to provide some transparency around what is causing these issues and the steps we have taken so far to address them. We also want to give you some insight into how we're moving forward.
tl;dr: Our server outages have not been caused by a singular issue; we are solving each problem as it arises, with both mitigating solves and longer-term architectural changes. A small number of players have experienced character progression loss; moving forward, any loss due to a server crash should be limited to several minutes. This is not a complete solve to us, and we are continuing to work on this issue. Our team, with the help of others at Blizzard, is working to bring the game experience to a place that feels good for everyone.
We're going to get a little bit into the weeds here with some engineering specifics, but we hope that overall this helps you understand why these outages have been occurring and what we've been doing to address each instance, as well as how we're investigating the overall root cause. Let's start at the beginning.
The problem(s) with the servers:
Before we talk about the problems, we'll briefly give you some context as to how our server databases work. First, there's our global database, which exists as the single source of truth for all your character information and progress. As you can imagine, that's a big task for one database, and it wouldn't cope on its own. So to alleviate load and latency on our global database, each region (NA, EU, and Asia) has individual databases that also store your character's information and progress, and your region's database will periodically write to the global one. Most of your in-game actions are performed against this regional database because it's faster, and your character is locked there to maintain the integrity of the individual character record. The global database also has a back-up in case the main one fails.
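The write-back flow described above can be sketched in a few lines of code. This is only an illustrative sketch: the class and method names are hypothetical, not Blizzard's actual implementation. It shows the three ideas in the paragraph: a character is locked to one regional database, in-game actions write to the regional store, and changed records are periodically flushed to the global source of truth.

```python
class GlobalDatabase:
    """Single source of truth for all character records."""
    def __init__(self):
        self.records = {}

    def write(self, char_id, record):
        self.records[char_id] = dict(record)


class RegionalDatabase:
    """Serves in-game writes locally, then syncs to the global DB."""
    def __init__(self, global_db):
        self.global_db = global_db
        self.records = {}
        self.locks = set()   # characters locked to this region
        self.dirty = set()   # characters changed since last flush

    def lock_character(self, char_id):
        # A character is locked to one regional DB to keep its
        # individual record consistent.
        if char_id in self.locks:
            return False
        self.locks.add(char_id)
        return True

    def update_character(self, char_id, **progress):
        # Most in-game actions hit the regional DB because it's faster.
        self.records.setdefault(char_id, {}).update(progress)
        self.dirty.add(char_id)

    def flush(self):
        # Called periodically: only records changed since the last
        # flush are written back to the global database.
        for char_id in self.dirty:
            self.global_db.write(char_id, self.records[char_id])
        self.dirty.clear()
```

In this model, a crash of the regional server between flushes would lose at most one flush interval of progress, which is why the flush period bounds how much progression can be lost.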
With that in mind, to explain what's been going on, we'll be focusing on the downtimes experienced from Saturday, October 9, to now.
On Saturday morning Pacific time, we suffered a global outage due to a sudden, significant surge in traffic. This was a new threshold that our servers had not experienced at all, not even at launch. This was exacerbated by an update we had rolled out the previous day intended to enhance performance around game creation; these two factors combined overloaded our global database, causing it to time out. We decided to roll back that Friday update we'd previously deployed, hoping that would ease the load on the servers leading into Sunday while also giving us the space to investigate deeper into the root cause.
On Sunday, though, it became clear what we'd done on Saturday wasn't enough: we saw an even higher increase in traffic, causing us to hit another outage. Our game servers were observing the disconnect from the database and immediately attempted to reconnect, repeatedly, which meant the database never had time to catch up on the work we had completed because it was too busy handling a continuous stream of connection attempts by game servers. During this time, we also saw we could make configuration improvements to our database event logging, which is necessary to restore a healthy state in case of database failure, so we completed those, and undertook further root cause analysis.