Monitoring - Azure is prioritising its resources due to increased demand caused by COVID-19. This affects our service in the European regions. Read more about Azure's strategy here:
https://azure.microsoft.com/en-us/blog/update-2-on-microsoft-cloud-services-continuity/ and https://aka.ms/CloudCovidResponse
Mar 30, 04:42 UTC
Update - Azure is aware of the issue and is working on it; we are waiting for a fix.
Mar 27, 13:59 UTC
Identified - Unfortunately, we can't reliably add or remove nodes or change instance types on Azure due to circumstances beyond our control. We have had several long-running discussions with Azure about this, but no clear solution is in sight at the moment.

Please use one of the other migration strategies outlined here: https://www.cloudamqp.com/docs/cluster_migration.html (a sketch of the federation-based approach is included below).

Sorry for any inconvenience this may cause.
Mar 25, 14:07 UTC
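As an illustration of the federation-based migration option in the linked guide, below is a minimal sketch using the RabbitMQ management HTTP API. All host names, credentials, vhosts and queue patterns are hypothetical placeholders, and the exact steps for your plan are in the documentation above; this is not a drop-in script.

```python
import requests

# Hypothetical placeholders -- use the values from your own old and new clusters.
NEW_CLUSTER_API = "https://new-cluster.example.com/api"    # management API of the new (downstream) cluster
API_AUTH = ("user", "password")                            # management credentials for the new cluster
VHOST = "myvhost"                                          # vhost name (URL-encode it, e.g. %2F for "/")
OLD_CLUSTER_URI = "amqps://user:password@old-cluster.example.com/myvhost"

# The federation plugin must be enabled on the new cluster first.

# 1. Define a federation upstream on the new cluster that points at the old cluster.
requests.put(
    f"{NEW_CLUSTER_API}/parameters/federation-upstream/{VHOST}/old-cluster",
    json={"value": {"uri": OLD_CLUSTER_URI}},
    auth=API_AUTH,
).raise_for_status()

# 2. Add a policy so that matching queues on the new cluster federate from that upstream.
requests.put(
    f"{NEW_CLUSTER_API}/policies/{VHOST}/federate-queues",
    json={
        "pattern": ".*",
        "apply-to": "queues",
        "definition": {"federation-upstream-set": "all"},
    },
    auth=API_AUTH,
).raise_for_status()
```

Once consumers have drained the old queues through the new cluster, the policy and upstream can be removed and clients repointed entirely to the new cluster, as described in the linked documentation.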
Shared servers Operational
Dedicated servers Operational
Backend Operational
Heroku Operational
AWS route53 Operational
Help Scout Email Processing Operational
AWS ec2-ap-northeast-1 Operational
AWS ec2-ap-northeast-2 Operational
AWS ec2-ap-southeast-1 Operational
AWS ec2-ap-southeast-2 Operational
AWS ec2-ca-central-1 Operational
AWS ec2-eu-central-1 Operational
AWS ec2-eu-west-1 Operational
AWS ec2-eu-west-2 Operational
AWS ec2-eu-west-3 Operational
AWS ec2-sa-east-1 Operational
AWS ec2-us-east-1 Operational
AWS ec2-us-east-2 Operational
AWS ec2-us-gov-west-1 Operational
AWS ec2-us-west-1 Operational
AWS ec2-us-west-2 Operational
Scheduled Maintenance
We will upgrade RabbitMQ and Erlang versions on four shared servers.

For minimal disturbance, make sure your clients are configured to automatically reconnect on connection loss (see the sketch below this notice). Please do a full reload (cmd-shift-r) if the management interface is not accessible after the upgrade has been completed.

If you can't allow any scheduled downtime, we recommend moving to a new dedicated cluster via queue federation, as explained here: https://www.cloudamqp.com/blog/2015-07-08-migrate-between-plans-rabbitmq-queue-federation.html
Posted on Mar 26, 22:12 UTC
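As a companion to the reconnect advice in the notice above, here is a minimal sketch of a consumer that reconnects automatically after a connection loss, assuming a Python client using the pika library; the AMQP URL and queue name are placeholders, and production code should add logging and proper error handling.

```python
import time

import pika

# Placeholder URL -- use the AMQP URL from your CloudAMQP console.
AMQP_URL = "amqps://user:password@host.example.com/vhost"
QUEUE = "my-queue"  # placeholder queue name


def handle_message(channel, method, properties, body):
    """Process one message and acknowledge it."""
    print("received:", body)
    channel.basic_ack(delivery_tag=method.delivery_tag)


def consume_forever():
    """Consume from QUEUE and reconnect if the broker restarts, e.g. during an upgrade."""
    while True:
        try:
            connection = pika.BlockingConnection(pika.URLParameters(AMQP_URL))
            channel = connection.channel()
            channel.queue_declare(queue=QUEUE, durable=True)
            channel.basic_consume(queue=QUEUE, on_message_callback=handle_message)
            channel.start_consuming()
        except pika.exceptions.AMQPConnectionError:
            # Connection lost or refused: back off briefly, then reconnect.
            time.sleep(5)


if __name__ == "__main__":
    consume_forever()
```

Most client libraries offer an equivalent, for example automatic connection recovery in the Java client, so the same idea applies regardless of language.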
Past Incidents
Apr 1, 2020

No incidents reported today.

Mar 31, 2020

No incidents reported.

Mar 30, 2020
Resolved - This incident has been resolved.
Mar 30, 14:04 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 30, 14:00 UTC
Identified - We are currently investigating this issue.
Mar 30, 13:51 UTC
Mar 29, 2020
Completed - The scheduled maintenance has been completed.
Mar 29, 11:12 UTC
Verifying - Verification is currently underway for the maintenance items.
Mar 29, 10:26 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 29, 10:15 UTC
Scheduled - We will be undergoing immediate maintenance on our control plane. This will block instance modifications and cause intermittent API errors.

Running instances will not be affected.
Mar 29, 10:12 UTC
Mar 28, 2020
Resolved - This incident has been resolved.
Mar 28, 13:37 UTC
Monitoring - The error has been identified and resolved; we are currently monitoring the results.
Mar 28, 13:19 UTC
Investigating - We are currently investigating. This does not affect the RabbitMQ servers directly, but the control plane is currently down.
Mar 28, 13:03 UTC
Mar 26, 2020
Completed - The scheduled maintenance has been completed.
Mar 26, 22:06 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 26, 21:33 UTC
Scheduled - We will upgrade RabbitMQ and Erlang versions on four shared servers.

For minimal disturbance, make sure your clients are configured to automatically reconnect on connection loss. Please do a full reload (cmd-shift-r) if the management interface is not accessible after the upgrade has been completed.

If you can't allow any scheduled downtime, we recommend moving to a new dedicated cluster via queue federation, as explained here: https://www.cloudamqp.com/blog/2015-07-08-migrate-between-plans-rabbitmq-queue-federation.html
Mar 20, 01:50 UTC
Mar 24, 2020

No incidents reported.

Mar 23, 2020
Resolved - This incident has been resolved.
Mar 23, 11:16 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 23, 10:35 UTC
Investigating - We're currently experiencing issues with metrics and alarm delivery; this affects all integrations.
Mar 23, 10:12 UTC
Mar 22, 2020

No incidents reported.

Mar 21, 2020

No incidents reported.

Mar 20, 2020
Completed - Shared server 'tiger' has been upgraded, but unfortunately we could not save previously stored messages.
Mar 20, 01:45 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 20, 00:41 UTC
Update - We will be undergoing scheduled maintenance during this time.
Mar 20, 00:40 UTC
Scheduled - We are still working on the upgrade of shared server 'tiger'.
Mar 20, 00:40 UTC
Completed - The scheduled maintenance has been completed.
Mar 20, 00:30 UTC
Update - Shared server 'sidewinder' has been upgraded. We are still working on 'tiger'.
Mar 19, 23:44 UTC
Update - Shared servers 'salamander' and 'reindeer' have been upgraded. We are working on 'tiger' and 'sidewinder'.
Mar 19, 23:31 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 19, 22:30 UTC
Scheduled - We will upgrade RabbitMQ and Erlang versions on four shared servers.

For minimal disturbance, make sure your clients are configured to automatically reconnect on connection loss. Please do a full reload (cmd-shift-r) if the management interface is not accessible after the upgrade has been completed.

If you can't allow any scheduled downtime, we recommend moving to a new dedicated cluster via queue federation, as explained here: https://www.cloudamqp.com/blog/2015-07-08-migrate-between-plans-rabbitmq-queue-federation.html
Mar 12, 23:07 UTC
Mar 19, 2020
Resolved - The server is back online. Unfortunately, we could not save the disk, so you might experience message loss.
Mar 19, 13:06 UTC
Identified - We have identified a faulty disk on shared server 'turtle' and are working on fixing it.
Mar 19, 11:58 UTC
Mar 18, 2020

No incidents reported.