In attendance:
Sam Norledge, Head of Smart Buildings @ LMG
Trevor Pope-Ellis, Regional Lead @ Neeve
Brahm Lategan, Digital Buildings Lead @ Buro Happold
Rakesh Chohan, Head of Property @ Rolls-Royce SMR
Stuart Humber, Associate Partner @ Foster + Partners
James Spires, Head of Net Zero Estates & Infrastructure @ Mitie
Allan Fourie, Technology Lead – Property Management @ JLL
At the latest Smart Buildings Magazine round table, hosted by JLL at its Canary Wharf offices, the topic of AI was on the agenda.
AI is obviously the technology on everyone's lips, and Allan opened the debate by observing that everyone is jumping on the AI bandwagon, but asking: what are we actually doing with it? How well can we deploy it, and how well can we integrate it with our existing systems? It always comes down to one thing, he argued, and that's data: how old is your data, and how clean is it? He then asked whether organisations trust companies like JLL to look at their data.
James added that more needs to be done around standards, and that this will especially be the case with controls. He argued that AI has a big role to play in accelerating that process at a fraction of the cost.
Brahm pointed out that, at the moment, we can't trust AI to control a building: it could make ridiculous changes, and there are real consequences of failure.
Sam said there is some interesting research coming out of UCL at the moment on neurosymbolic AI, where the laws of physics and established mathematical principles are applied to a large language model, building in hard constraints such as "you cannot exceed this pressure in this particular pipe".
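To make that idea concrete, here is a minimal illustrative sketch (not the UCL research itself — the limits and names are assumptions) of a hard constraint layer that vets any setpoint an AI model proposes before it reaches the building controls:

```python
# Hypothetical safe operating envelope for a pipe (values are assumptions)
MIN_PIPE_PRESSURE_BAR = 1.0
MAX_PIPE_PRESSURE_BAR = 6.0

def clamp_to_envelope(proposed_pressure_bar: float) -> float:
    """Clamp an AI-proposed setpoint so it can never leave the physical envelope."""
    # Never pass an out-of-envelope command to the plant
    return max(MIN_PIPE_PRESSURE_BAR,
               min(proposed_pressure_bar, MAX_PIPE_PRESSURE_BAR))

print(clamp_to_envelope(9.2))  # → 6.0 (clamped to the maximum)
print(clamp_to_envelope(3.5))  # → 3.5 (already within the envelope)
```

The point of the pattern is that the constraint sits outside the model: however confident the AI is, its output is checked against physics before it can act.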
Stuart said that Foster + Partners is doing a lot of work with AI agents, giving an agent access to operations data and letting it analyse and draw conclusions, but within really narrow guidelines. Instead of telling the agent to look at all the data and report back, they give it specific tools for analysing the data and permit it to look only at a particular data set.
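That "narrow guidelines" approach can be sketched as a whitelist gate around the agent's tools and data; the names below are illustrative assumptions, not Foster + Partners' actual implementation:

```python
# The agent may only call whitelisted tools on whitelisted data sets
ALLOWED_DATASETS = {"hvac_energy_2024"}  # hypothetical data set name

def mean_consumption(readings):
    """One approved analysis tool: average energy consumption."""
    return sum(readings) / len(readings)

ALLOWED_TOOLS = {"mean_consumption": mean_consumption}

def run_agent_tool(tool_name, dataset_name, data):
    """Gate every agent request through the whitelist before executing it."""
    if dataset_name not in ALLOWED_DATASETS:
        raise PermissionError(f"Agent may not read dataset: {dataset_name}")
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Agent may not call tool: {tool_name}")
    return ALLOWED_TOOLS[tool_name](data)

print(run_agent_tool("mean_consumption", "hvac_energy_2024", [10.0, 14.0]))  # → 12.0
```

Any request outside the whitelist fails loudly rather than silently widening the agent's reach, which is the design choice the participants were describing.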
Rakesh added that Rolls-Royce SMR is rolling out AI slowly, with low-level integration at the moment. All agreed that a strategy needs to be in place for AI to work effectively.
This theme of control and caution continued throughout the discussion, with participants agreeing that while AI presents significant opportunity, it must be implemented within clearly defined boundaries. Rather than fully autonomous systems, the current consensus is that AI should act as a support tool—augmenting human decision-making rather than replacing it.
A major challenge highlighted was the quality and structure of existing data. Much of the built environment still relies on legacy systems, with data often fragmented across multiple platforms and inconsistent in format. This creates a significant barrier to adoption, as AI systems are only as effective as the data they rely on. While there was optimism that AI could help address this through automation and data mapping, the industry still faces a substantial task in standardising information at scale.
Trevor mentioned that trust is a key issue — not just in AI itself, but in how data is managed and shared. As building systems become more connected, concerns around data ownership, security and transparency are becoming increasingly prominent. The question of whether organisations are willing to share their data, and with whom, remains a critical factor in unlocking the full potential of AI.
The conversation also explored the risks associated with deploying AI in live environments. Participants shared examples of systems making incorrect decisions, as well as wider concerns around cybersecurity and system vulnerabilities. In some cases, building systems have already been subject to breaches and ransomware attacks, underlining the importance of robust safeguards.
For high-risk sectors in particular, a more cautious approach is being adopted. Rakesh explained how AI is being introduced within a structured, risk-assessed framework, starting with lower-risk applications before expanding into more critical areas. This phased approach allows organisations to build confidence, test outcomes and develop governance processes before scaling up.
However, while large organisations may have the resources to implement such strategies, the discussion highlighted that risk often sits further down the supply chain. Smaller contractors and operators may not have the same level of oversight or controls, increasing the likelihood of data misuse or security gaps. Ensuring consistency across all levels of the supply chain was identified as one of the biggest challenges facing the industry.
Commercial models were also brought into question. Current procurement structures often prioritise cost over long-term value, which can limit investment in digital innovation. Without a shift in how projects are funded and delivered, participants suggested that progress in AI adoption could remain constrained.
Alongside this, the industry is facing a growing skills gap. As experienced engineers retire, there is a risk that valuable knowledge is lost, while newer entrants to the workforce may be more reliant on AI tools without fully understanding the underlying systems. This creates a need for stronger training, governance and oversight to ensure that technology is used effectively and responsibly.
Despite these challenges, there was broad agreement that AI is already influencing the sector, even if indirectly. From automating analysis to supporting operational decisions, its role is expanding rapidly. However, participants cautioned that the pace of adoption must be matched by appropriate controls, warning that the industry may otherwise face significant failures that could prompt stricter regulation.
Ultimately, the roundtable highlighted an industry at a turning point. AI has the potential to connect building systems, improve efficiency and unlock new insights, but its success will depend on strong foundations. Clean, structured data, clear governance frameworks and a balanced approach to risk will be essential.
For now, the message was clear: AI is a powerful tool, but one that must be applied carefully, with human oversight remaining firmly at the centre of decision-making.