Maramma Down as of this post

To make room for Eden, they had to shut down a server. Sorry Maramma

1 Like

Same issue

Same issue here

Just gotta say… as a software engineer with a few decades of experience, the issues these devs have had have left me completely speechless. I would probably scream at the top of my lungs if I looked under the hood.

4 Likes

I mean, prolly an IT issue. They just didn't bother to allot resources for the demand.

Great experience for new and returning players, gotta say…

4 Likes

Almost queued up for another M10 before the crash, that would've sucked

RIP Reekwater, invasion crash.

We were in an M10 about 2 mins from finishing it, RIP shards, a chest, and a Chardis drop :cry:

1 Like

It's not the issues, it's the "solution" and the lack of awareness. I use AWS in my systems, like these guys. If I had a server down like this at peak time, someone would be crying… I really wonder sometimes if these guys are using EC2s instead of other, more modern solutions.

I find it odd that most games don't have DR plans. If it's down, they've gotta fix that server before it comes back up. Even if you can't move the sessions for whatever reason, you'd think they would just spin up a DR server2 while they work on server1.
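Not saying this is how their stack actually works, but as a rough sketch of the "spin up server2 while you fix server1" idea, assuming plain EC2 instances and made-up instance IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical instance IDs -- stand-ins for the live world server and its DR twin.
PRIMARY_ID = "i-0primaryworldserver"
STANDBY_ID = "i-0standbyworldserver"

def primary_is_healthy() -> bool:
    """Return True if the primary instance reports an ok status check."""
    resp = ec2.describe_instance_status(InstanceIds=[PRIMARY_ID])
    statuses = resp.get("InstanceStatuses", [])
    return bool(statuses) and statuses[0]["InstanceStatus"]["Status"] == "ok"

def fail_over_to_standby() -> None:
    """Bring the standby world server up so players can keep playing
    while the primary is being repaired."""
    ec2.start_instances(InstanceIds=[STANDBY_ID])
    ec2.get_waiter("instance_running").wait(InstanceIds=[STANDBY_ID])
    # At this point you'd repoint DNS / the server list at the standby.

if __name__ == "__main__":
    if not primary_is_healthy():
        fail_over_to_standby()
```

Obviously a real world server has sessions and state you'd have to drain or replicate, but the "keep a warm spare you can start" part is the cheap bit.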

This is crap. With all the money they have, they can't even get this right.

On nwdb it’s changed from “unknown (253)” to maintenance so I assume it’s being worked on right now.

It depends how the system is really set up. The "server" might not actually be down; the program that runs the game could have crashed, and they don't have a health check in place somewhere. I would really love to see under the hood of this system to see how it was engineered. Is it using EC2/static servers, or is it set up on ECS? How much of "AWS" are they really using?
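No idea whether they're on EC2 or ECS, but on the "no health check" point, here's a minimal sketch of an external watchdog, assuming a hypothetical /health endpoint on the world server process:

```python
import time
import urllib.error
import urllib.request

# Hypothetical endpoint -- assumes the world server exposes a /health route.
HEALTH_URL = "http://maramma.example.internal:8080/health"
CHECK_INTERVAL_SECONDS = 30
FAILURES_BEFORE_ALERT = 3

def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

def watch() -> None:
    """Poll the endpoint and report once it has failed several checks in a row."""
    consecutive_failures = 0
    while True:
        if is_healthy(HEALTH_URL):
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= FAILURES_BEFORE_ALERT:
                # In a real setup this would restart the process or page someone.
                print("world server looks down, time to restart it")
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    watch()
```

On ECS you'd get roughly this for free from the container health check plus the service scheduler replacing the task; the point is that *something* has to notice the game process died, even when the box itself is fine.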

A dev responded in this thread here: [Maramma down please help]
It's being worked on.

1 Like

I just spoke with tech support: it crashed, they're working on it, back up soon.

Soon™, but yeah, as long as they're aware and working on it, no worries.

By "soon" they mean tomorrow, GGs.

Can y'all fix Maramma? OMG! Been waiting 1 hour plus. Fix…

Here's an idea… give all servers failover/redundant/resilient backups. That way maintenance can be done with less downtime, and unknown crashes have mirrored data to compare against. So let's say your backbone network can't handle the traffic, which with so many containers/servers having queue issues is possible (a buffer overflow now becomes a DoS). Invest in your product before it's done for good.
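For what it's worth, the "mirrored data to compare" part doesn't have to be fancy. A toy sketch, assuming a hypothetical save_state helper and made-up paths, is just writing every state snapshot to two places so there's something to diff after a crash:

```python
import json
from pathlib import Path

# Hypothetical paths -- stand-ins for a primary store and its mirror.
PRIMARY_DIR = Path("/var/game/state/primary")
MIRROR_DIR = Path("/var/game/state/mirror")

def save_state(snapshot: dict, name: str) -> None:
    """Write the same snapshot to the primary store and to the mirror,
    so after a crash there are two copies to compare."""
    payload = json.dumps(snapshot, indent=2)
    for directory in (PRIMARY_DIR, MIRROR_DIR):
        directory.mkdir(parents=True, exist_ok=True)
        (directory / f"{name}.json").write_text(payload)

# Example: snapshot of one world's session count.
save_state({"world": "Maramma", "players_online": 1800}, "maramma-snapshot")
```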