Smart buildings are becoming sophisticated enough that we’re finally allowed to ask awkward questions. Not “can it optimise my HVAC?” or “will the dashboard impress leadership?”, but something slightly more philosophical: what happens when the intelligence we’ve embedded starts smoothing reality in ways we don’t fully see? Not maliciously, not Skynet-in-a-plant-room, just… systemically.

Most optimisation tools today do exactly what we ask of them. They reduce energy, stabilise comfort bands, and keep maintenance alerts to a polite minimum. But as these systems become more autonomous, a “what if” emerges: what if a model learns to hit the KPI without fully delivering the outcome?

Picture a building where AI manages comfort and energy. A drifting sensor? The optimiser might compensate, maintaining its perfect curve. An unusually high number of fault alerts? Thresholds could be recalibrated automatically to reduce noise. None of this is dishonest. It’s the machine fulfilling its objective: meet the target. Economists would simply shrug and point to Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure.
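The drifting-sensor case is easy to sketch. In this purely illustrative simulation (hypothetical numbers throughout), a simple controller trusts a sensor whose reading drifts upward over time. The dashboard value it reports stays pinned to the setpoint, while the real zone temperature quietly walks away from it:

```python
# Illustrative sketch, hypothetical numbers: a controller that trusts a
# drifting sensor keeps its *reported* temperature on target while the
# real zone temperature diverges from the setpoint.

SETPOINT = 21.0        # degrees C
DRIFT_PER_STEP = 0.02  # sensor drift accumulating each control step

actual = 21.0   # true zone temperature
drift = 0.0     # accumulated sensor error
reported = []   # what the dashboard sees
actuals = []    # what the occupants feel

for step in range(100):
    drift += DRIFT_PER_STEP
    measured = actual + drift
    # Proportional correction toward the setpoint, based on the
    # measurement the controller believes to be true.
    actual += 0.5 * (SETPOINT - measured)
    reported.append(actual + drift)
    actuals.append(actual)

print(f"final reported: {reported[-1]:.2f} C")  # hovers near 21
print(f"final actual:   {actuals[-1]:.2f} C")   # roughly 2 degrees low
```

The KPI curve is flawless, the outcome is not, and nothing in the loop is lying: the controller is doing exactly what it was asked to do.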

This isn’t a conspiracy, it’s design logic. We’ve built systems that optimise relentlessly, yet we’ve rarely built the governance to understand how they do it. Algorithms adjust themselves. Models retrain in the background. Dashboards display the version of reality they’re fed. And as we outsource more operational decision-making to machines, our visibility into those decisions becomes more tenuous.

A more grounded challenge sits alongside this: the energy cost of intelligence. We talk a lot about how AI reduces building energy consumption. What we talk about less is the energy required to run the optimisation engines themselves. Yes, inference is far lighter than training, but even modest edge models require computational effort. Multiply this across hundreds of systems, thousands of buildings, and millions of automated micro-decisions, and a sensible question surfaces:

Are we saving more than we’re spending?

For many buildings, optimisation models raise efficiency by 5 to 20 percent. That’s meaningful. But as the industry gravitates toward heavier AI stacks, embedded digital twins, and real-time pattern recognition, the supporting infrastructure isn’t free. Local servers, cloud calls, model retraining cycles, and storage for accelerating volumes of sensor data all consume energy. It’s possible that, without careful design, the carbon cost of digital intelligence could nibble into the very savings it promises.
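The question is answerable with arithmetic, even if the inputs are uncertain. A back-of-envelope sketch, using entirely hypothetical figures for one building (every number below is an assumption, not a measurement):

```python
# Back-of-envelope sketch with hypothetical numbers: does the energy the
# optimisation stack consumes eat into the savings it delivers?

ANNUAL_BUILDING_KWH = 1_500_000  # assumed annual energy use of one building
SAVINGS_FRACTION = 0.10          # assumed 10% saving, within the 5-20% range

# Assumed annual digital overhead attributable to this building
EDGE_COMPUTE_KWH = 2_000         # local inference servers and gateways
CLOUD_KWH = 5_000                # cloud calls, retraining cycles, storage

gross_saving = ANNUAL_BUILDING_KWH * SAVINGS_FRACTION
digital_load = EDGE_COMPUTE_KWH + CLOUD_KWH
net_saving = gross_saving - digital_load

print(f"gross saving: {gross_saving:,.0f} kWh")
print(f"digital load: {digital_load:,.0f} kWh")
print(f"net saving:   {net_saving:,.0f} kWh "
      f"({digital_load / gross_saving:.0%} of the saving consumed)")
```

Under these assumptions the overhead stays small, but the point is that the ratio should be calculated per portfolio rather than assumed away; heavier stacks, or thinner savings, shift it quickly.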

This isn’t an argument against AI. It’s an argument for transparency. Buildings should be able to show not only their performance outcomes but the “digital load” required to achieve them. The invisible energy overhead of optimisation will eventually need to be treated like any other operational cost: measured, monitored, and justified.

The governance gap is still wide. Few owners ask for independent verification of AI behaviour. Fewer still require audit trails for automated control decisions. As machine learning takes root in core operations, we’ll need standards that mirror those in other high-trust industries. Finance doesn’t rely on models without oversight. Healthcare doesn’t automate without explainability. Buildings shouldn’t either.
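What would an audit trail for automated control decisions actually look like? A minimal sketch (the field names are illustrative, not a standard): each entry records what the model saw, what it did, and a hash linking it to the previous entry, so after-the-fact tampering is detectable.

```python
# Minimal sketch of a tamper-evident audit trail for automated control
# decisions. Field names are illustrative assumptions, not a standard.

import hashlib
import json
import time

def append_decision(log, model_version, inputs, decision):
    prev_hash = log[-1]["entry_hash"] if log else "genesis"
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,      # sensor readings the decision was based on
        "decision": decision,  # the control action actually taken
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    # Recompute every hash and check the chain end to end.
    prev = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

log = []
append_decision(log, "hvac-opt-2.3", {"zone_temp_c": 22.4}, {"damper_pct": 35})
append_decision(log, "hvac-opt-2.3", {"zone_temp_c": 22.1}, {"damper_pct": 30})
print(verify(log))  # True for an untampered log
```

Nothing here is exotic; it is the kind of plumbing finance and healthcare already take for granted, applied to a plant room.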

None of this diminishes the value of smart systems; it just puts them on firmer ground. Engineers and operators shouldn’t become spectators, marvelling at flawless dashboards. Their role becomes more analytical: question, verify, calibrate, challenge. When the data looks too neat, assume the machine has worked a little too hard to neaten it.

The intelligent building of the next decade won’t be the one with the most automation. It’ll be the one that can explain its automation. A system that shows its workings. A building whose digital brain is auditable, interpretable, and energy-accountable.

Because if buildings are going to think for themselves, even a little, we need to make sure we still know when they’re telling the truth, and how much it costs them to tell it.

In Dr Marson’s monthly column, he’ll be chronicling his thoughts and opinions on the latest developments, trends, and challenges in the Smart Buildings industry and the wider world of construction. Whether you're a seasoned pro or just starting out, you're sure to find something of interest here.

Something to share? Contact the author: column@matthewmarson.com

About the author:

Matthew Marson is an experienced leader, working at the intersection of technology, sustainability, and the built environment. He was recognised by the Royal Academy of Engineering as Young Engineer of the Year for his contributions to the global Smart Buildings industry. Having worked on some of the world’s leading smart buildings and cities projects, Matthew is a keynote speaker at international industry events related to emerging technology, net zero design and lessons from projects. He is author of The Smart Building Advantage and is published in a variety of journals, earning a doctorate in Smart Buildings.