Planned Work: Birmingham TE.

Posted: August 19th, 2020 at 15:45 by Iain Beveridge

Thursday 3rd September 2020, 21:00 – Friday 4th September 2020, 05:00

Due to ongoing improvements to the power infrastructure within the BT POPs, Birmingham TE requires both the ATS and UPS to be refreshed.

Therefore, BT will be undertaking maintenance work within the above window, which will require the power feeds to be moved whilst an ATS and UPS are swapped within the rack. Although the core network router is not expected to experience any loss of service, customers connecting into the adjacent equipment will experience a short period of downtime.

  1. Naz says:

    Work is now commencing

  2. Naz says:

    Work has been completed

Emergency Work: Peterborough BT POP.

Posted: August 18th, 2020 at 17:27 by Iain Beveridge

Wednesday 19th August 2020, 00:00 – 03:00

After a power check within Peterborough POP, BT identified an issue with part of the power infrastructure within one of our racks.

Therefore, BT will be undertaking emergency maintenance work in the above window whilst a UPS is swapped within the rack. The core network router is not in this rack and is not expected to experience any loss of service. Customers with circuits terminating in the Peterborough BT POP should be considered at risk only, as the work should be completed with no disruption.

  1. Naz says:

    Work is now commencing

  2. Naz says:

    Work is now completed

Lon50/Telecity-hex89.core device outage

Posted: August 18th, 2020 at 05:20 by Adam Heath

Our co-location provider for this site has advised of a power outage which is affecting multiple floors within the London Harbour Exchange building. Investigations are being carried out to determine the root cause. At the time of this post the building has been evacuated due to the fire alarm triggering.

More information will be provided as and when we have updates from our co-location provider.

  1. ljordan says:

    Further updates are pending, but TalkTalk are currently investigating. As soon as more information is made available we will advise further.

  2. ljordan says:

    We are still awaiting further updates from the suppliers on this outage. The London building also remains under evacuation. Engineers are on site and will investigate as soon as possible. As soon as we have any news, we will advise further.

  3. ljordan says:

    Equinix have advised that multiple core links and 36 network-to-network interface ports are experiencing an outage, causing business customers connected at the London Harbour Exchange buildings to experience a total loss of service. The outage start time was 04:23 (18/08). A full impact assessment is still ongoing.

    Latest Update:
    Equinix have advised that the fire alarm was triggered by the failure of an output static switch on their Galaxy UPS system. This has resulted in a loss of power for multiple customers, and Equinix IBX engineers are working to resolve the issue.

  4. mpurcell says:

    Latest Update:

    Engineers from Equinix continue to work on restoring power at the data centre in order to restore service.

    Escalations have been made to the suppliers who hand over services to us at this location, with a view to obtaining more detail and timescales from Equinix. Equinix IBX site staff report that IBX engineers and the specialist vendor have begun restoring services to customers by migrating to newly installed and commissioned infrastructure. IBX engineers continue to work towards restoring service to all customers.

    Root cause has not currently been identified.

    Next update expected at 10:00.

  5. mpurcell says:

    An update is due shortly. We will post here as soon as it is received.

  6. mpurcell says:

    Latest Update:

    TalkTalk support teams have confirmed that services remain impacted, but IBX staff are being allowed back on site and have confirmed that the fire detection system has been fully restored. The Galaxy UPS failure remains an ongoing incident and is currently under investigation.

  7. mpurcell says:

    Latest Update:

    Equinix engineers have advised that their IBX team have begun restoring power to affected devices. Unfortunately, at present there is no estimated resolution time.

    We will continue to provide updates as soon as they come through.

  8. mpurcell says:

    Latest Update:

    Equinix IBX staff have reported that some services have been reinstated, and they continue to work towards restoring services to all customers by migrating to the newly installed and commissioned infrastructure. TalkTalk network teams have also confirmed that some third-party services are coming back online.

  9. mpurcell says:

    Latest Update:

    Equinix IBX site staff report that services have been restored for several more customers, and IBX engineers continue to work towards restoring services to all customers by migrating to the newly installed and commissioned infrastructure.

  10. mpurcell says:

    Apologies, no further updates at present. As soon as one comes through we will update here.

  11. mpurcell says:

    Latest Update:

    Equinix IBX site staff report that services have been restored to more customers, and increasing numbers of those affected are now operational along with the majority of Equinix Network Services. IBX engineers continue to work towards restoring services to all customers by migrating to the newly installed and commissioned infrastructure.

    No root cause has yet been identified.

  12. mpurcell says:

    Latest Update:

    Internally we are beginning to see an increasing number of our services coming back online.

  13. mpurcell says:

    Latest Update:

    TalkTalk report that service was restored at 15:36 to the TalkTalk switch in HEX on the legacy TUK network.

    IBX Engineers continue to work towards restoring services to all customers by migrating to the newly installed and commissioned infrastructure and estimate full restoration by or before 21:00 GMT/BST.

  14. mpurcell says:

    Additional to last:

    Equinix have advised that electrical work is being carried out at the data centre, whereby services (under-floor sockets) require migration to a new distribution board because one has failed. Eight floors require this work; floors one and two have been completed. As a result, we still consider the services to be at risk despite them currently being restored. Further outages may be seen until the expected completion time of 21:00 tonight.

  15. ljordan says:

    Latest Update:
    Equinix IBX site staff report that restoration of services continues. The remaining affected customers are in the process of being migrated to the newly installed and commissioned infrastructure. A revised resolution time has not yet been provided.

  16. ljordan says:

    Equinix have advised that all services are now restored. All customers are now fed from the new infrastructure and are at full redundancy. The UPS failure was deemed to be the cause. The incident will now be closed as resolved, as all systems are available with no further issues reported. Monitoring will remain in place to ensure stability.

Emergency Work: Glasgow POP

Posted: August 17th, 2020 at 14:04 by Iain Beveridge

Tuesday 18th August 2020, 00:00 – 03:00

After a power check within Glasgow POP, BT identified an issue with part of the power infrastructure within one of our racks.

Therefore, BT will be undertaking emergency maintenance work in the above window, which will require the power feeds to be moved whilst an ATS and UPS are swapped within the rack. The core network router is not in this rack and is not expected to experience any loss of service; customers with BT EAD circuits terminating in Glasgow should be considered at risk.

  1. Naz says:

    Work is now commencing

  2. Naz says:

    Work is now completed

Incident: BT Edinburgh

Posted: August 12th, 2020 at 08:57 by David Brewis

A major incident has been declared by BT for an exchange in Edinburgh impacting broadband and ethernet services.

The loss of service is the result of water ingress into the location. The emergency command centre is on site and the majority of the water has been removed, with engineers ensuring it is safe to enter so that investigations can continue and services can be restored as soon as possible.

Further updates will be provided.

  1. David Brewis says:

    Engineers remain on site with a plan in place, working to restore service. Equipment continues to remove the remaining water, with spare hardware and cabling on site and en route.

    Next update to follow by 13:30

  2. David Brewis says:

    Water has been removed from the building and clean-up is in progress. Cables have been reconnected as part of the restoration steps, and testing is being carried out.

    Work will continue on site to ensure the location is secure and there is no further risk.

    Next update to follow by 15:30

  3. David Brewis says:

    Sessions appear to have returned from around 13:30, and we will continue to monitor our network equipment and the progress of clean-up activities.

    No further updates will follow unless further disruption is caused by these activities, as the incident is now resolved. If you experience a loss of service, please reboot your equipment and, if service does not return, call the helpdesk.

Telehouse Migrations

Posted: August 11th, 2020 at 14:19 by Iain Beveridge

From Wednesday 12th August 2020, 19:00 – 02:00, nightly until Friday 14th August 2020.

Due to the Telehouse Metro data centre closing, there is an ongoing project to migrate all Vodafone-provided services to a new handoff location at Equinix Slough.

A maintenance window is scheduled nightly from 19:00 until 02:00 the following morning; customers will experience a single 30-minute outage whilst their individual circuit’s configuration is migrated and tested.

If you require any additional information, please contact the Service Desk at Service.desk@cityfibre.com

Incident: Virgin Media leased lines

Posted: August 10th, 2020 at 17:14 by Jonathan Clarke

We are currently seeing a number of Virgin Media leased line circuits down. We are in correspondence with our suppliers for diagnostics.

Further updates will be provided when available.

  1. Jonathan Clarke says:

    We can see that the circuits have now reconnected. We would advise rebooting equipment if the issue remains. Diagnostics are still being performed to ascertain why this drop occurred.

    Further updates will be provided when available.

  2. Jonathan Clarke says:

    Following further diagnostics and reports, we can see the circuits have reconnected; however, some circuits are unable to break out.

    We are continuing to investigate this and we will update you again when we have any further information.

  3. Jonathan Clarke says:

    Our suppliers have confirmed that they are suffering packet loss on the link that serves the circuits with breakout problems.

    This is being investigated by Virgin Media and we are now awaiting an update from them.

    Further updates will be provided when available. We appreciate your patience regarding the fault.

  4. Jonathan Clarke says:

    Our suppliers have confirmed that an engineer attended site and resolved the packet loss issue. They are monitoring the connection but believe this to be resolved.

    We are now awaiting restoration confirmation, at which point we will update you. We apologise for the inconvenience caused to you and your customers.

  5. Chris McDonald says:

    Virgin Media have confirmed that service is fully restored. There was a card failure and their engineer has been to site to replace the faulty card.

Planned Maintenance: Santander, Milton Keynes MK9 1AA Network Diversion

Posted: August 4th, 2020 at 15:44 by Iain Beveridge

13/08/2020 22:00 – 14/08/2020 06:59

During the above window, please be aware that we are carrying out some essential planned maintenance at Milton Keynes MK9 1AA.

The work is a Network Diversion and customers will experience an outage during the maintenance window.

Telehouse Circuit Migration

Posted: July 28th, 2020 at 15:45 by Iain Beveridge

From Friday 31st July 2020, 19:00 – 02:00, nightly until Wednesday 12th August 2020.

Due to the Telehouse Metro data centre closing, there is an ongoing project to migrate all Vodafone-provided services to a new handoff location at Equinix Slough.

A maintenance window is scheduled nightly from 19:00 until 02:00 the following morning; customers will experience a single 30-minute outage whilst their individual circuit’s configuration is migrated and tested.

If you require any additional information, please contact the Service Desk at Service.desk@cityfibre.com

Incident: Vodafone leased lines

Posted: July 24th, 2020 at 16:19 by David Labouchardiere

We are currently seeing a number of Vodafone leased line circuits down. Initial diagnostics indicate the circuits are related to a single Vodafone handoff. Engineers are investigating and engaging with our supplier. Further updates will be provided when available.

  1. Jonathan Clarke says:

    Suppliers have advised that diagnostics show their management kit is unreachable, which is impacting services. Vodafone have proactively escalated to Level 1 to expedite the fault resolution and restoration of the services.

    Further updates will be provided when available.

  2. Jonathan Clarke says:

    Suppliers have advised that a field engineer has been dispatched to investigate; the ETA for the engineer is 18:30.

    Further updates will be provided when available.

  3. Jonathan Clarke says:

    Suppliers have advised that the field engineer recently dispatched to investigate has been delayed; the ETA is now 19:30.

    We apologise for the delay and we will give further updates when provided.

  4. Jonathan Clarke says:

    Suppliers have advised that the field engineer has arrived on site to complete investigations.

    Further updates will be provided when available.

  5. Jonathan Clarke says:

    Suppliers advise investigations are still ongoing and we are awaiting further updates.

    Thank you for your patience.

  6. Jonathan Clarke says:

    Suppliers advise that the field engineer has been investigating with a second-line engineer. We have been advised that their equipment is unable to pass traffic and they suspect a line card issue. The supplier’s transmission team has been engaged and a resolution is being attempted.

  7. Jonathan Clarke says:

    Suppliers advise that the transmission team has ordered a spare line card for replacement. The ETA for the spare card to arrive on site is 60 minutes. We are proactively monitoring for further updates. When a further update is obtained, we will update you again.

    We appreciate your patience regarding the incident.

  8. Jonathan Clarke says:

    Engineers are continuing to work on this incident, we are awaiting a further update. Once obtained, we will update you again.

  9. Jonathan Clarke says:

    Suppliers have replaced the line card and confirmed that the issue is resolved. We are now seeing our managed routers back online.

    If you have any further issues, please reboot the equipment and, if the issue persists, please call us.

    We appreciate your patience and apologise for any inconvenience caused.