Sensor technology is not new to building automation. Millions of office buildings around the world are already equipped with sensor-based systems designed to conserve energy, performing simple tasks such as automatically turning the lights on and off when someone enters or leaves a room. These tasks are enabled by arrays of passive infrared (PIR) sensors, simple photocells, and CO2 sensors deployed throughout the office building.

This, however, is just the beginning of the smart building revolution. A truly smart building will know how the office space is being used at every moment: how many people are in each room, how long the line is in the dining room, where there is a free desk, and many other aspects of building usage. This awareness will translate into a more cost-effective and comfortable working environment for the building’s occupants. To achieve this goal, the next generation of sensors will need to be much smarter: able to capture and analyze far richer data, enabling the execution of more sophisticated tasks that go well beyond energy consumption management.

The Demand for On-Board Analytics

The Internet of Things (IoT) revolution introduces a new paradigm to building automation: a decentralized architecture in which a great deal of analytics processing is done at the edge (the sensor unit) instead of in the cloud or on a central server. This computing approach, often called “edge computing” or “fog computing,” provides real-time intelligence and greater control agility while offloading heavy communications traffic from the network. New developments in computing technology give us cheap, energy-efficient embedded processors that are well suited to such data processing, affording the newfound ability to run the analytics inside the sensor unit itself. With this approach, only the final summary of the analysis needs to be sent over the network -- a far smaller volume of data, which also allows shorter response times. This capability sets the foundation for the next generation of rich data-driven smart sensors.
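To make the idea concrete, here is a minimal Python sketch of such an edge-analytics loop. The room name, topic string, and the count_people and publish functions are hypothetical stand-ins for the on-board model and the network layer (e.g., an MQTT client):

    import json
    import time

    def count_people(frame):
        # Stand-in for the on-board analytics: a real sensor would run a
        # person-detection model over the raw image here.
        return 3

    def publish(topic, payload):
        # Stand-in for the network layer (e.g., an MQTT publish); here we
        # simply print the outgoing message.
        print(topic, payload)

    def sensor_loop(camera):
        for frame in camera:
            count = count_people(frame)  # the heavy analysis stays on the edge
            summary = {"room": "4.02", "occupancy": count, "ts": time.time()}
            publish("building/occupancy", json.dumps(summary))

    # A raw VGA frame is roughly 0.9 MB; the JSON summary above is under
    # 100 bytes -- only the summary ever crosses the network.
    sensor_loop(camera=[object()] * 3)  # three dummy "frames" for illustration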

Now think about what we can do when we pair image sensors with this processing power in a network environment. We can save energy by counting exactly, in real time, how many people are in a room, and use that information to adjust the room’s ventilation. We can see which meeting rooms are free, or where there is an unoccupied workstation. We can collect statistics on how the office space is used so that we can better suit it to our needs. In cases of emergency, we can validate that the space has been evacuated, and if someone falls or gets hurt, we can issue an alert.

Deep Learning and the Smart Sensor

When assessing the opportunities and challenges inherent to rich data analysis, it’s important to understand the contrasts between data-driven systems and conventional rule-based systems, the latter of which assign the ‘heavy lifting’ of rule creation and modification to human programmers. In virtually every rich data domain, data-driven systems have beaten rule-based systems (the so-called “expert systems”) hands down. Rule-based systems exhibit inferior performance, and have proven slower to adapt to new types of data (for example, from an upgraded sensor, or from a new sensor providing previously untapped data) or to a changing domain (for example, a new style of furniture or new lighting conditions). Rule-based systems are supposedly easier to analyze, but even this advantage becomes moot as the system evolves and patches of rules are layered upon each other to account for myriad new exceptions, often yielding a hard-to-decipher “spaghetti code” of rules.

Data-driven Machine Learning systems are excellent tools for rich data analysis, particularly when cameras are employed at the sensing layer. With these systems, the burden of defining effective rules is lifted from the human experts and transferred to the algorithm. Humans are tasked only with defining the features of the raw data that they believe hold the relevant information. Once the features have been defined, the rules (or formulas) that use these features are learned automatically by the algorithm. For this to work, the algorithm needs access to a multitude of data samples labeled with the desired outcomes, so that it can properly adapt itself. Once the rules are deployed, the sensor repeatedly runs a two-stage process: first, the human-defined features are computed from the raw sensor data (the computationally heavy part); then, the learned rules are applied to the features to perform the task at hand.
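The following Python sketch illustrates this two-stage structure on synthetic data, using HOG (histogram of oriented gradients) as the human-defined feature and logistic regression as the learned rule; the image shapes and labels are invented purely for illustration:

    import numpy as np
    from skimage.feature import hog                    # human-defined feature
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Offline: learn the "rules" from a multitude of labeled samples.
    images = rng.random((200, 64, 64))                 # stand-in labeled frames
    labels = rng.integers(0, 2, 200)                   # 1 = person, 0 = empty
    features = np.array([hog(img) for img in images])  # features chosen by humans
    rule = LogisticRegression(max_iter=1000).fit(features, labels)

    # On the sensor: stage one computes the features (the heavy part),
    # stage two applies the learned rule.
    new_frame = rng.random((64, 64))
    x = hog(new_frame)
    print(rule.predict(x.reshape(1, -1)))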

Deep Learning is an advanced new approach to Machine Learning in which even the burden of defining features is lifted from the human engineers. With Deep Learning, the algorithm learns an end-to-end computation -- from the raw sensor data all the way to the final output. In this model, the algorithm must figure out for itself what the correct features are and how to compute them. The result is a much deeper level of computation -- much more complex, and therefore much more effective, than any rule or formula used by traditional Machine Learning. This computation is typically performed by a neural network: a complex computational circuit with millions of parameters that the algorithm tunes until it zeroes in on the right function.
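A minimal PyTorch sketch of such an end-to-end network is shown below; the layer sizes and the two-class output (“person present” vs. “empty”) are arbitrary choices for illustration:

    import torch
    import torch.nn as nn

    # Raw pixels in, final prediction out: no hand-crafted features anywhere.
    # The convolutional layers learn the features themselves during training.
    net = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 2),      # two outputs: person present / empty
    )

    frame = torch.rand(1, 1, 64, 64)     # a dummy 64x64 grayscale frame
    logits = net(frame)                  # one pass from raw data to output
    print(logits.shape)                  # torch.Size([1, 2])
    print(sum(p.numel() for p in net.parameters()))  # count of tunable parameters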

Deep Learning has become the state-of-the-art approach in numerous Machine Learning domains, especially Computer Vision, Speech Recognition, and Natural Language Processing. Simply put, machines are better than humans at identifying the most informative features of the data, and when you take humans out of the equation, system performance improves dramatically. We have seen this to be the case in Building Automation as well, as Deep Learning tools have significantly improved our algorithm’s ability to detect people and their exact location and movement in a room.

Whereas in the rule-based system world (and even with traditional Machine Learning) the system engineer needed exhaustive information about the domain in order to build a good system, in the Deep Learning world this is no longer necessary. In this era of the IoT, where new kinds of data are becoming available at a rapid clip, Deep Learning allows us to iterate faster on new data sources and use them to our best advantage without requiring intimate knowledge of them.

In the Deep Learning domain, the engineer’s main focus is to define the architecture of the neural network. The network needs to be large enough to have the capacity to tune itself to a useful computation, but simple enough that the computation time does not exceed the allocated time limit. Once the architecture has been defined, it stays fixed while its parameters are tuned. Optimizing the parameters of a neural network can take days or even weeks on the strongest machines, but the computation itself -- from raw inputs to output -- takes a fraction of a second, and it takes exactly the same amount of time at the end of the training process as it did at the beginning.
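The sketch below (again PyTorch, with synthetic data) shows this division of labor: the architecture is defined once and never changes, while the optimizer tunes only its parameters. Because inference is the same fixed sequence of operations, its cost is identical before and after training:

    import torch
    import torch.nn as nn

    # The architecture, fixed up front by the engineer.
    net = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 32),
                        nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.SGD(net.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    frames = torch.rand(256, 1, 64, 64)        # synthetic labeled samples
    labels = torch.randint(0, 2, (256,))

    # Training tunes the parameters; the architecture never changes.
    for epoch in range(5):
        opt.zero_grad()
        loss = loss_fn(net(frames), labels)
        loss.backward()
        opt.step()

    # Inference runs the same fixed computation whether the parameters are
    # freshly initialized or fully trained -- same operations, same cost.
    with torch.no_grad():
        prediction = net(frames[:1]).argmax(dim=1)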

Deep Learning is therefore a great advantage for a real-time system like a smart sensor, because it enables significantly enhanced scalability and flexibility. For any given time budget, we can tailor a neural network that fills that budget as completely as possible, and thus make sure we are fully utilizing our processing power. If our computational budget increases and we have more time to run the calculation, we can swap in a larger (and presumably more accurate) network that fully utilizes the new budget.
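As a rough sketch of how such budget fitting might look, the Python below defines a family of networks that differ only in width, times a forward pass for each, and keeps the largest one that still meets a hypothetical per-frame budget:

    import time
    import torch
    import torch.nn as nn

    def make_net(width):
        # One family of architectures; `width` scales the channel count.
        return nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(width * 64 * 64, 2),
        )

    def latency_ms(net, trials=20):
        frame = torch.rand(1, 1, 64, 64)
        with torch.no_grad():
            net(frame)                         # warm-up pass
            start = time.perf_counter()
            for _ in range(trials):
                net(frame)
        return (time.perf_counter() - start) / trials * 1000

    BUDGET_MS = 50.0                           # hypothetical per-frame budget
    best = None
    for width in (8, 16, 32, 64):
        candidate = make_net(width)
        if latency_ms(candidate) <= BUDGET_MS:
            best = candidate                   # largest network still in budget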

Another advantage of neural networks is that they are extremely portable. Software libraries make it very easy to build and customize a neural network, allowing us to run the same network on different types of devices -- just copy the parameters over and that’s it. In our research process, we can iterate quickly using GPUs, and then immediately see how our network behaves when deployed on embedded processors.
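For instance, with PyTorch the parameters can be saved on the GPU development machine and loaded unchanged on the embedded target; the file name and tiny network below are illustrative only:

    import torch
    import torch.nn as nn

    def build_net():
        # The same architecture definition is shared by both environments.
        return nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 32),
                             nn.ReLU(), nn.Linear(32, 2))

    # On the development machine, after GPU training: save only the parameters.
    net = build_net()
    torch.save(net.state_dict(), "occupancy_net.pt")

    # On the embedded device: rebuild the network and copy the parameters over.
    edge_net = build_net()
    edge_net.load_state_dict(torch.load("occupancy_net.pt", map_location="cpu"))
    edge_net.eval()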

All of these properties of Deep Learning afford us great agility in developing next-generation smart sensors for advanced Building Automation. We can respond quickly to new types of data, easily adapt to new scenarios, and fully utilize computational resources as they become available. Compared to traditional Machine Learning, the superior performance of Deep Learning enables us to achieve new levels of sensing and analytical intelligence with the most cost-effective, energy-efficient embedded processors.

The CogniPoint Solution

PointGrab is the first and only solution provider to bring all of the aforementioned attributes together into a single platform, applying superior Deep Learning technology to the Building Automation ecosystem, where opportunities to gather data are abundant but efficient, real-time analytics have to date been lacking. PointGrab is shaping the future of Building Automation and occupancy intelligence by embedding state-of-the-art computer vision into IoT devices to support powerful edge analytics. The advanced technology underlying PointGrab’s CogniPoint embedded-analytics sensors introduces an innovative fusion of proprietary Deep Learning and object-tracking algorithms optimized for low-power ARM processors.

With the introduction of its CogniPoint edge-analytics sensing solution, PointGrab is fulfilling its mission of enabling truly intelligent buildings: understanding occupants’ behavior and space utilization with unprecedented granularity and adaptability.