Hey friends,
This week we have a thought-provoking, future-looking guest essay from Drew DePriest, Director of Real Estate Operations Technology at McKesson.
Drew has been on a bit of a rant for the last few months, trying to get everyone to cut the fluff around AI for buildings. It’s not that AI isn’t important to Buyers like Drew, it’s that we have a lot of hard work to do before it can be as impactful in Facilities Operations as the salespeople say it is.
This message resonates deeply with us!
Enjoy!
—James Dice, CEO of Nexus Labs
P.S. We’re gathering Buyers, including Drew, at NexusCon in September. If you work for a building owner and you want to hang with others like you, join us. Hit reply and we’ll get you registered.
By Drew DePriest, Director, Real Estate Operations Technology, McKesson
If you engaged at all with the autonomous buildings market in the last 12 months, you no doubt noticed a significant uptick in hype around “AI for Facilities,” coinciding with the sudden rise of generative AI à la ChatGPT.
Let’s all take a breath and a giant step back—we’re not there yet. Collectively, we have an incredible amount of work to do to deploy and maintain this vision of the AI panacea at scale. For clarity, this thesis focuses on the broader “system of systems” use case that encompasses facility operations, not the individual use cases (e.g., lease abstraction) where machine learning (ML) and generative AI can apply today.
See the cubic graph above. I believe any future path of AI (or similarly significant advances in technology) for facilities will develop along three distinct dimensions: Breadth, Depth, and Scale.
Let’s go deeper into each.
Breadth
This dimension is the number of unique, disparate systems (HVAC, LMS, EMS, PACS, CMMS, Occupancy, Finance, Lease Admin, Calendar, HR, IWMS, DR, Utility, Weather, etc.) fed into your AI/ML model. In the visual, I intend point A to represent one system (HVAC, for example) and point W to represent all systems with any measurability across a facility.
In order to be AI/ML-ready, each system will need to be tagged to a common ontology and keyed to a globally unique primary key. This degree of data governance, especially across an entire portfolio, can be quite difficult, time-consuming, and expensive to achieve. I also assume such an effort must stay within the bounds of an enterprise OT cybersecurity model and regional data compliance regulations.
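To make “tagged to a common ontology and keyed to a globally unique primary key” concrete, here is a minimal sketch using Python’s rdflib and the Brick ontology—one illustrative approach, not a prescribed implementation. The building namespace and point names are hypothetical:

```python
# Minimal sketch: tag one HVAC asset and one point to the Brick ontology.
# The globally unique primary key doubles as each entity's URI.
from rdflib import Graph, Namespace, RDF

BRICK = Namespace("https://brickschema.org/schema/Brick#")
BLDG = Namespace("urn:example/bldg1#")  # hypothetical building namespace

g = Graph()
g.bind("brick", BRICK)

ahu = BLDG["AHU-01"]       # hypothetical asset ID
sat = BLDG["AHU-01.SAT"]   # hypothetical point ID

g.add((ahu, RDF.type, BRICK.Air_Handling_Unit))
g.add((sat, RDF.type, BRICK.Supply_Air_Temperature_Sensor))
g.add((ahu, BRICK.hasPoint, sat))

print(g.serialize(format="turtle"))
```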
Depth
This dimension is how far down the stack of each system your algorithm ventures, in both level of detail and controllability. For example, within the HVAC domain, point A would represent a high-level, singular use case like chilled water plant optimization. Point C could include every air and hydronic system within a given building.
Scale
The final dimension is the number of facilities in which you deploy a common set of upstream layers, including any use cases powered by AI/ML. Points A, C, W, and Y represent a single facility, whereas B, D, X, and Z imply an entire global portfolio (assumed to be 100+ locations). Generally speaking, a very high degree of variability exists in system design, controls programming, and operational performance across an entire portfolio.
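If it helps to see the mapping explicitly, here are the cube’s labeled vertices as a simple Python dictionary, reconstructed from the dimension descriptions above (the coordinate ordering is my own framing):

```python
# The cube's eight labeled vertices, reconstructed from this essay.
# Coordinates are (breadth, depth, scale): False = one system /
# high-level use case / one facility; True = all systems / full
# stack / global portfolio.
VERTICES = {
    "A": (False, False, False),  # one system, one use case, one facility
    "B": (False, False, True),   # same use case duplicated across a portfolio
    "C": (False, True,  False),  # full depth in one system, one facility
    "D": (False, True,  True),   # full depth at portfolio scale
    "W": (True,  False, False),  # full breadth, high-level, one facility
    "X": (True,  False, True),   # full breadth at portfolio scale
    "Y": (True,  True,  False),  # full depth AND breadth, one facility
    "Z": (True,  True,  True),   # full depth, breadth, and scale
}
```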
The points captured on the graph indicate the general ability to deploy and manage a solution as successfully as any IT service; I use the term “operable” to describe that state. If you follow the ITIL framework for IT Service Management, “operability” means structured processes, support mechanisms, and standard practices for introducing change.
In short, reaching a vertex of the graph means more than just executing a project. It’s part of a comprehensive program covering technology, people, and processes.
Now: Current State (A and B)
Today, many AI/ML vendors on the market can successfully deploy and maintain a stable solution for a single, broad use case—and many already have. Some of them can push simple solutions to scale—imagine a cloud-connected thermostat operating rooftop units across a portfolio of retail locations. It’s a simple use case duplicated across hundreds of similar locations.
Next: Full Depth or Breadth (C or W)
Consider Full Depth to include a tightly integrated, full stack within one type of system—I expect this to target HVAC first. Common use cases might include:
Consider Full Breadth to expand from a single system (HVAC) to an aggregation of every system that encapsulates the operational, business, and financial data describing the performance of a building, in every sense of the word.
This will very likely require a central data repository, an expansive data governance program, implementation of a common ontology (RECore, Brick, BDNS, etc), and relational graph technologies. In essence, this is the point where a deployment must expand into a true independent data layer (IDL). Common use cases might include:
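As one illustration of what graph technologies on an IDL enable, the sketch below (building on the earlier Brick tagging example; all entity names hypothetical) asks a portfolio-agnostic question via SPARQL—find every AHU’s supply air temperature sensor by ontology class rather than by site-specific point naming:

```python
# Toy sketch of querying an IDL's graph layer with rdflib + SPARQL.
from rdflib import Graph, Namespace, RDF

BRICK = Namespace("https://brickschema.org/schema/Brick#")
BLDG = Namespace("urn:example/bldg1#")  # hypothetical namespace

g = Graph()
for ahu_id, sensor_id in [("AHU-01", "AHU-01.SAT"), ("AHU-02", "AHU-02.SAT")]:
    ahu, sensor = BLDG[ahu_id], BLDG[sensor_id]
    g.add((ahu, RDF.type, BRICK.Air_Handling_Unit))
    g.add((sensor, RDF.type, BRICK.Supply_Air_Temperature_Sensor))
    g.add((ahu, BRICK.hasPoint, sensor))

query = """
PREFIX brick: <https://brickschema.org/schema/Brick#>
SELECT ?ahu ?sensor WHERE {
  ?ahu a brick:Air_Handling_Unit ;
       brick:hasPoint ?sensor .
  ?sensor a brick:Supply_Air_Temperature_Sensor .
}
"""
for row in g.query(query):
    print(row.ahu, row.sensor)
```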
Later: Full Depth or Breadth at Scale (D, X) or Full Depth AND Breadth (Y)
Perhaps the most common of the “Later” scenarios, repeatedly deploying a full-depth use case across multiple buildings in a portfolio (D), could include very specific outcomes. Imagine the same RTU-controlling smart thermostat use case from the Current State (B), extended deeper within the HVAC system.
An example: model predictive control (MPC) that considers terminal unit behavior, air sources, and cooling sources as part of its overall modeling and writeback control. You could also start to see pattern recognition algorithms processing historical service tickets or work orders to suggest potential root causes.
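MPC is beyond a quick sketch, but the second half—pattern recognition over historical tickets—can be illustrated with a toy example. This is not any vendor’s product: it clusters fabricated work-order text with TF-IDF and k-means (scikit-learn) so recurring fault themes surface for a human to interpret:

```python
# Toy sketch: group free-text work orders into recurring fault themes.
# Ticket text below is fabricated for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tickets = [
    "AHU-3 supply fan tripped on overload",
    "Hot calls on floor 4, VAV damper stuck closed",
    "Chiller 1 low refrigerant alarm",
    "VAV box damper actuator failed, zone overheating",
    "Supply fan VFD fault on AHU-3",
    "CH-1 refrigerant leak suspected",
]

# Vectorize ticket text, then cluster into three candidate themes.
X = TfidfVectorizer(stop_words="english").fit_transform(tickets)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for label, text in sorted(zip(labels, tickets)):
    print(label, text)
```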
Those who reach full breadth at scale will likely take two to five years to design, plan, and deploy an IDL across their portfolio. I view a low-depth IDL as one where multiple systems all push into a central data repository. They might deploy some flavor of ontology (Brick, RECore, etc), but will not have yet built any graph-based pattern-recognition applications on top of the IDL.
And lastly, we’re starting to see more owners make plans for the multi-year effort required to achieve full depth and breadth at a single facility. Commonly tied to a construction project (with CapEx earmarked) for a headquarters or other flagship location, this deployment will realize the full value of the integrated “system of systems,” the IDL, and the predictive algorithms running on top of it all.
Goal: Full Depth AND Breadth AND Scale (Z)
For owners, reaching Z represents the ultimate goal for full portfolio optimization. When you can extend the full breadth and depth of a single facility (Y) to your entire fleet, you unlock multiple additional use cases that scale affords. Amongst others, this list could include:
I also assume that reaching Z includes developing and deploying end-to-end service management best practices, often found within enterprise IT and models like ITIL. Continuous support for incident response, change management, testing and production, and more would all be documented, resourced, and executed.
Building the Tesseract: Repeat All Three Dimensions Across Market Majority
I refer to this final concept as the tesseract because it takes the three-dimensional Breadth/Depth/Scale cube and moves it along a fourth dimension: “market ubiquity.”
In order to truly consider autonomous facilities technology as viable, vendors must be able to deploy full breadth and depth at scale for a majority of the total available market. As an analogy, consider the electric vehicle (EV) market—while not yet at market ubiquity, the growing number of EV manufacturers has progressed well beyond early adopters to a more common base of buyers.
tl;dr
While I certainly celebrate the early accomplishments of pattern recognition and optimization algorithms within our industry, I caution that reality does not yet come anywhere near matching the hype.
Historically, we’ve had strong, oft-celebrated examples of products or projects that reach points A and even C and W. In my nearly 20 years of experience in this space, these innovative projects tend to result from pull by a savvy owner or end user, not a push by service providers.
In that same vein, I believe the next five to ten years of expansion into “Next” and eventually “Later” stages will require owners and end users to pull harder. That means significant planning, budgeting, and focus on long-term operational support for data governance, ontology adoption, and cloud architecture necessary to make “IDL at portfolio scale” a consistent reality.
Discuss this with me
If you’re a Nexus Pro member, let’s discuss this on Nexus Connect!