Carl Henriksen, CEO at cybersecurity specialist OryxAlign, explains why visibility across operational systems is becoming a security and engineering challenge, and how getting it wrong can compromise both cyber resilience and operational continuity.

Operational technology (OT) networks run quietly across live facilities. They support building management systems, HVAC infrastructure, access control and other connected services that keep estates operating smoothly. Yet behind that day-to-day reliability, facilities teams struggle to gain asset visibility without affecting live operations. Traditional IT visibility tools can destabilise sensitive environments, turning a security exercise into unexpected disruption.

The IET’s Code of Practice for Cyber Security in the Built Environment makes clear that the growing technical complexity of built assets, and their increasing dependence on information and communications technologies, are creating new vulnerabilities. That matters in facilities management because many of the systems that support continuity were never designed with modern security visibility in mind.

But simply bolting traditional IT visibility tools, designed for servers and endpoints, onto live operational environments can, and often does, create instability or degraded performance. To protect cyber and operational resilience, facilities teams need an engineering-led approach to visibility that respects uptime, occupied spaces and operational services.

The visibility paradox

Traditional IT security tools often rely on active scanning or inline inspection, methods that can create latency in fragile control systems if they are used carelessly.

Take building management systems coordinating HVAC and plant equipment across an occupied site. These systems rely on real-time communications to maintain conditions, support safe operation and keep services running as expected. Unexpected scans or intrusive network testing can introduce delays or disrupt those communications, which in turn can affect performance or occupant experience.

Paradoxically, organisations cannot secure what they cannot see. Yet attempting to observe these environments using conventional IT methods can destabilise the very systems they are trying to protect.

As NIST notes in its Guide to Operational Technology Security, OT security needs to address “unique performance, reliability, and safety requirements”, which is one reason conventional IT-style approaches need to be used carefully in live operational settings.

Passive monitoring resolves this paradox. By observing network traffic through engineered SPAN or TAP connections, it gives operators a way to understand communications without interacting directly with sensitive devices. That makes it better suited to fragile environments where active scanning may introduce operational risk.

In live facilities, these approaches should be designed and validated before they are introduced. Passive monitoring across connected building systems can support asset discovery and exposure analysis without adding traffic to operational services, helping replace incomplete manual inventories with a clearer picture of building-related assets and communications.
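To make the idea concrete, the sketch below shows how flow records already captured from a SPAN or TAP mirror port might be turned into a passive asset inventory. It is a minimal illustration, not any vendor's implementation: the flow records, IP addresses and the small port-to-protocol map are hypothetical, chosen only because BACnet/IP and Modbus/TCP ports are common hints that a device belongs to a building system. Crucially, no packets are sent; every entry is derived from traffic the mirror has already seen.

```python
from collections import defaultdict

# Hypothetical map of well-known OT ports to protocols. Seeing one of
# these ports in mirrored traffic hints at a building-system device.
OT_PORTS = {47808: "BACnet/IP", 502: "Modbus/TCP", 102: "S7comm"}

def build_inventory(flows):
    """Derive a passive asset inventory from observed flow records.

    Each record is (source IP, destination IP, destination port),
    as might be exported from a SPAN/TAP capture. Nothing is probed:
    the inventory only reflects traffic that was already on the wire.
    """
    inventory = defaultdict(lambda: {"talks_to": set(), "protocols": set()})
    for src, dst, dport in flows:
        inventory[src]["talks_to"].add(dst)
        if dport in OT_PORTS:
            # Both ends of an OT conversation are worth recording.
            inventory[src]["protocols"].add(OT_PORTS[dport])
            inventory[dst]["protocols"].add(OT_PORTS[dport])
    return dict(inventory)

# Illustrative records: a BMS controller talking BACnet to two HVAC
# units, and a workstation reaching a PLC over Modbus/TCP.
flows = [
    ("10.0.1.10", "10.0.1.20", 47808),
    ("10.0.1.10", "10.0.1.21", 47808),
    ("10.0.2.5", "10.0.1.20", 502),
]
inventory = build_inventory(flows)
for ip, info in sorted(inventory.items()):
    print(ip, sorted(info["protocols"]), sorted(info["talks_to"]))
```

Even a toy version like this shows why passive discovery tends to beat manual inventories: the asset list grows automatically as devices communicate, and it records who talks to whom, which matters for the segmentation work that follows.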

Taking steps towards scalable resilience

Visibility alone does not reduce risk unless it informs how networks are structured and governed. In many estates, building systems that manage heating, cooling, access control and other services are connected to the wider business network without enough planning or separation.

That creates unnecessary exposure. Once a business network is compromised, attackers can move more easily towards critical operational systems. Legacy devices often cannot support modern security agents or deep packet inspection, which leaves them particularly exposed when networks are merged without clear boundaries.

The next step is turning visibility into controlled, resilient infrastructure. As the NCSC notes in its OT guidance, organisations should ensure that “any connectivity between their OT environments and their wider enterprise networks or the Internet is managed securely”. It adds that its design principles are intended to help architects and designers produce “secure and resilient systems”.

The progression is straightforward. Organisations first need to identify connected assets and communication flows so they understand how systems behave under normal conditions. Segmentation can then be introduced through methods such as VLANs and network isolation to separate domains according to operational importance or trust level. Continuous monitoring then helps ensure those boundaries remain effective over time.
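The same passively observed flow records can later validate that segmentation boundaries hold in practice. The sketch below, under stated assumptions (the zone map, zone names and allow-list are illustrative, not a real estate's design), flags any observed conversation that crosses a boundary the design does not permit, which is the essence of the continuous-monitoring step:

```python
# Hypothetical trust zones taken from a segmentation design:
# building systems on one VLAN, corporate devices on another.
ZONES = {
    "10.0.1.10": "bms",        # building management server
    "10.0.1.20": "bms",        # HVAC controller
    "10.0.2.5": "corporate",   # office workstation
}

# Which (source zone, destination zone) pairs the design permits.
# Here, traffic may only stay within its own zone.
ALLOWED = {("bms", "bms"), ("corporate", "corporate")}

def boundary_violations(flows):
    """Flag observed flows that cross an unapproved zone boundary.

    Unknown addresses map to an "unknown" zone so that unexpected
    devices are surfaced rather than silently ignored.
    """
    violations = []
    for src, dst, dport in flows:
        pair = (ZONES.get(src, "unknown"), ZONES.get(dst, "unknown"))
        if pair not in ALLOWED:
            violations.append((src, dst, dport, pair))
    return violations

flows = [
    ("10.0.1.10", "10.0.1.20", 47808),  # within the BMS zone: fine
    ("10.0.2.5", "10.0.1.20", 502),     # corporate reaching a controller
]
print(boundary_violations(flows))
```

Run continuously against live mirror traffic rather than a fixed list, a check like this turns the segmentation design from a one-off diagram into something that is verified every time a device communicates.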

In live facilities, network security should be embedded into the architecture from the outset. Facilities teams looking to strengthen resilience across live environments should start by asking not just whether they can see their estates, but whether they can do so safely and continuously.