This incident has been resolved. Our European Servers are fully operational.
Mar 9, 05:25 EST
We are continuing to monitor the servers. API calls are now fully responsive, with no remaining performance degradation.
Mar 9, 05:19 EST
The DNS servers are now responsive, and our Load Balancer is draining queued requests, dropping stale ones in favor of new traffic. New API calls are being handled normally, and we will continue to monitor the results.
Mar 9, 04:46 EST
We have identified a DNS issue that caused server requests to queue, severely delaying responses to API requests.
The API microservices waited longer than anticipated for DNS resolution, causing requests to time out with a 504 Gateway Timeout HTTP error.
We have deployed an additional DNS server. The change may take up to 30 minutes to take effect and for requests to normalize; we will be monitoring DNS resolution as it recovers.
Mar 9, 04:29 EST
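The failure mode described in the update above, where a stalled DNS lookup blocks a request until it ends in a Gateway Timeout, can be mitigated by putting a deadline on resolution so requests fail fast instead of queueing. A minimal sketch (hypothetical illustration with injectable resolvers, not the actual service code):

```python
import concurrent.futures
import socket
import time

# Hypothetical sketch: bound DNS resolution with a deadline so a stalled
# resolver produces a fast 504 instead of an ever-growing request queue.

_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def resolve_with_deadline(host, deadline_s, resolver=socket.gethostbyname):
    """Resolve `host`, returning (http_status, payload).

    Returns (200, address) on success, or (504, message) if the
    resolver does not answer within `deadline_s` seconds.
    """
    future = _pool.submit(resolver, host)
    try:
        return 200, future.result(timeout=deadline_s)
    except concurrent.futures.TimeoutError:
        future.cancel()  # best effort; the lookup thread may keep running
        return 504, "Gateway Timeout: upstream DNS did not respond"

def fast_resolver(host):
    # Stand-in for a healthy DNS server.
    return "127.0.0.1"

def slow_resolver(host):
    # Stand-in for a stalled DNS server.
    time.sleep(1.0)
    return "203.0.113.1"
```

With a healthy resolver the call returns 200 and the address; with a stalled one it returns 504 within the deadline instead of holding the request open.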
We are investigating an issue with the European Servers that is causing response delays and Gateway Timeouts on certain requests.
Mar 9, 04:03 EST