At 12:54 we noticed the platform responding slower than usual, and started investigating the issue.
At 12:56, more resources were provisioned for the platform to handle incoming traffic. This improved response times, but did not resolve the underlying issue.
At 13:01, investigation revealed an unusual traffic pattern, which caused request congestion and long response times on the platform.
At 13:03, the issue was resolved by diverting traffic, and we saw full recovery in the following minutes.
Between 12:54 and 13:04 the platform responded significantly slower than usual, impacting webinars and the time it took for audience members to join a webinar. In response to this incident, we will review the unusual traffic pattern and optimise the affected parts of the platform to ensure smooth operation in the future.