For those of you that are sick of supervisory control, you’re in luck, because this is the last installment of the series. Part 3 of 3.
If you’re still into it, you’re also in luck, because this is my favorite installment yet.
And besides...
If you missed parts one and two, you should start there. In part 3, let’s talk about strategy. Let’s talk about practical implementation that can be done today. Given what we’ve covered previously, how can building owners approach this strategically?
There are five parts to the strategy:
This approach isn’t the easiest way, but doing things right never is.¹
Let’s dig in.
Whew, that’s a mouthful.
As we’ve discussed, many of the problems solved by ASC are caused by underperforming control systems. Adding a controls overlay without addressing some of the underlying issues will make life more difficult for the owner and operator, not less.
They are already swimming upstream. It’s unclear who is responsible for their setpoints, schedules, and sequences (the three S’s). Their BAS standards and construction methods have proven inadequate, both for optimization and for opening up their systems so others can optimize. As a result, too much data is locked up in proprietary systems. Their IT team is often left out of the process, leading to siloed BAS networks, installation of superfluous BAS hardware, cybersecurity concerns, and unreliable or impossible cloud communication.
To take control—pun intended—it starts with design. I think the best long-term design is to decouple the system: two-way communication between a standardized, owner-owned supervisory layer and a commoditized, fully open field controller layer. Something like this…
Image credit: Altura Associates
Once the two layers are decoupled and open, the owner has the power to truly own the three S’s and the optimization of their systems. And as shown in the graphic, all smart building systems can adopt this architecture to extend ASC into new use cases.
To make this real, I’m partnering with my friend Matt Schwartz at Altura Associates to explain exactly how they do it with the hope that others can adopt this model. We’ll publish an explainer essay, podcast, and an open source, fully-annotated BAS specification. Coming soon.
To be clear, starting here doesn’t necessarily need to hold an owner back from doing anything below. But if they don’t define where they’re going, every future BAS project will only put them further behind and make everything below more difficult.
Although this series is focused on the supervisory layer, the system as a whole is only as good as the field layer it’s sitting on top of and the people who keep it maintained. Faulty I/Os in the underlying controllers or an errant operator override can derail the whole thing. Just as we talked about enabling technologies in part 1, we also need to be talking about the enabling practices that keep the system working. There are two enabling practices I want to focus on today: Training and FDD.
Before ASC came along, there was already a growing gap between the sophistication of control sequences and the average building operator or manager’s understanding of them. If we start deploying more sophisticated strategies, we can’t expect them to be maintained by people who don’t understand them. Understanding starts with training. As Terry Herr said on the Nexus podcast, optimal start algorithms are a great example:
“I can't tell you how hard it is to communicate that to a typical operator who's used to a schedule that says 6:00 AM, that's exactly when my fan is going to start at 6:00 AM. When you tell them that, no, now set your schedule for when you want it to be up to temperature. He goes, ‘when's it going to start?’ Well, I said, it'll vary every morning. Well, they don't like that. Right? They don't like that. They want to know it's going to start at this time.”
And when we talk about AI or model predictive control, that’s even more out of reach than the simple stuff. Managers, operators, technicians, and even vendors need training on how the building will run with ASC.
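The optimal start idea Terry describes—scheduling when you want the zone to be at temperature, and letting the start time vary each morning—can be sketched in code. This is a simplified illustration; the recovery-rate model and learning rate here are my own assumptions, not any vendor’s actual algorithm:

```python
from datetime import datetime, timedelta

class OptimalStart:
    """Learn how long the building needs to warm up, then vary the
    fan start time so the zone hits setpoint at occupancy, not before."""

    def __init__(self, minutes_per_degree: float = 15.0):
        # Assumed initial recovery rate: minutes of runtime per degree
        # of recovery needed. Real systems learn this from history.
        self.minutes_per_degree = minutes_per_degree

    def start_time(self, occupied_at: datetime,
                   zone_temp: float, setpoint: float) -> datetime:
        """Answer the operator's question, 'when is it going to start?'
        It varies every morning with how far the zone has drifted."""
        recovery = max(setpoint - zone_temp, 0.0)
        lead = timedelta(minutes=recovery * self.minutes_per_degree)
        return occupied_at - lead

    def learn(self, actual_minutes: float, recovery_degrees: float,
              alpha: float = 0.2) -> None:
        """Blend the observed recovery rate into the running estimate."""
        if recovery_degrees > 0:
            observed = actual_minutes / recovery_degrees
            self.minutes_per_degree += alpha * (observed - self.minutes_per_degree)

opt = OptimalStart()
occupancy = datetime(2021, 3, 1, 8, 0)   # want 70°F by 8:00 AM
print(opt.start_time(occupancy, zone_temp=62.0, setpoint=70.0))  # cold morning: earlier start
print(opt.start_time(occupancy, zone_temp=68.0, setpoint=70.0))  # mild morning: later start
```

The point of the sketch is exactly what trips up operators: the schedule now holds the occupancy time, and the fan start falls out of the math.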
As Terry also said on the podcast, the shift towards autonomy allows the operator to stop actively operating and start maintaining. Maintenance keeps sensors and actuators working properly, and by far the best way to do that is with fault detection and diagnostics.
It might seem weird that I’m including FDD as an enabling practice. Obviously, FDD is a tool, not a practice. But integrating FDD into operations processes is the key practice. This practice (sometimes delivered as MBCx by an external service provider) is just starting to get its legs in our industry. It’s primed and ready to scale. Just because we have new technologies doesn’t mean we should forget about this.
Note that FDD may need to be reconsidered just slightly in light of ASC. The focus can shift away from detecting issues with the three S’s and towards faults that clean up and enable ASC.
FDD and ASC can work hand in hand and shouldn’t fight each other. Some examples:
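To make “faults that enable ASC” concrete, here’s a minimal, hypothetical sketch of one such rule: a stuck-sensor check that flags a zone temperature sensor no supervisory algorithm can safely optimize against. The threshold is an illustrative assumption, not from any specific FDD tool:

```python
def flag_stuck_sensor(readings: list[float],
                      min_variation: float = 0.2) -> bool:
    """Flag a temperature sensor that barely moves over a trend window.
    A flatlined sensor feeds bad data to any supervisory algorithm,
    so catching it first is what 'enables' ASC."""
    if len(readings) < 2:
        return False
    return (max(readings) - min(readings)) < min_variation

# A zone sensor reporting an identical value all afternoon is suspect:
print(flag_stuck_sensor([72.1, 72.1, 72.1, 72.1]))  # True
# Normal diurnal drift passes:
print(flag_stuck_sensor([70.2, 71.0, 72.4, 71.8]))  # False
```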
As we discussed previously, there are really two stages of ASC (1.0 and 2.0). 1.0 is all about doing the simple things well—like setpoints, sequences, and schedules, the three S’s. 2.0 is all about letting the algorithm learn the building, make predictions about the future, and actively manage loads while also considering (and valuing) trade-offs and constraints.
As the numbering implies, if you’re doing 2.0 before you do 1.0, you’re probably putting the cart before the horse. Here’s why:
If I were a building owner, I would start with ASC 1.0 and optimize the three S’s locally as much as possible. For a lot of buildings, ASC 1.0 may be all the supervisory sophistication they need. For those that need more, these local strategies provide a fallback for when the cloud loses connection.
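A hedged sketch of what that fallback looks like in practice: the supervisory layer uses the cloud-optimized setpoint while it’s fresh, and reverts to a simple locally computed reset when the connection drops. The reset curve and staleness threshold below are illustrative assumptions:

```python
import time
from typing import Optional

STALE_AFTER_S = 900  # assume cloud values older than 15 minutes are stale

def local_sat_reset(outdoor_temp: float) -> float:
    """Simple local supply-air-temperature reset: warmer SAT when it's
    cold outside, cooler when it's hot. Linear between two assumed
    anchors: 65°F SAT at 30°F OAT down to 55°F SAT at 70°F OAT."""
    frac = min(max((outdoor_temp - 30.0) / 40.0, 0.0), 1.0)
    return 65.0 - 10.0 * frac

def supervisory_setpoint(cloud_value: Optional[float],
                         cloud_timestamp: Optional[float],
                         outdoor_temp: float,
                         now: Optional[float] = None) -> float:
    """Use the cloud-optimized setpoint if it's fresh; otherwise fall
    back to the local ASC 1.0 reset so the building keeps running."""
    now = time.time() if now is None else now
    fresh = (cloud_value is not None and cloud_timestamp is not None
             and now - cloud_timestamp < STALE_AFTER_S)
    return cloud_value if fresh else local_sat_reset(outdoor_temp)
```

The design choice is the point: the building never depends on the cloud to stay comfortable, which is exactly why 1.0 should come before 2.0.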
Let’s pause here and acknowledge that if a building owner is doing everything above, they’re killin’ it. They have their BAS under control. Their O&M team works hand in hand with their technology rather than fighting it. They own their data, and they’re using analytics to make use of it. They’re probably in the top 1% of building performance, capturing 80% or better of the savings and using FDD to keep those savings in place. Their systems are physically ready for autonomy, since they’ve used FDD to find faulty sensors, valves, dampers, etc.
Another way to think about this stop on the journey is through the lens of the all-star game. If you’re the manager of the all-star team, you have the core positions filled at this point. You’ve got truly open and commoditized field controllers. You’ve got open supervisory software in place to manage all of those controllers. You’ve got analytics. The O&M staff isn’t left out—you know how important they are to the squad.
ASC 2.0 can now be evaluated on whether it improves the team or not. It can also be evaluated on the skills the ASC 2.0 vendor brings to the table, what capabilities and components are included in their stack, and which of those are unique and not covered by other positions on the team. Again, ASC 2.0’s special sauce is algorithms. The benefits of those algorithms can be compared with the SaaS fees needed to use them.
Since the O&M staff is on the team, they can help set the constraints of those algorithms, acknowledging the inherent tradeoffs between the different goals of the building. Since they’re bought in, they view the algorithms as their copilot, as Jean-Simon said on the podcast. Then they’re far less likely to turn them off.
Finally, since ASC 2.0 vendors are providing advanced algorithms, it will be tempting to also let them perform measurement and verification (M&V) of the energy and demand savings. It’s just one more algorithm, right?
My take is that M&V can and should be open, easily manageable, accessible, and automated (or semi-automated) with software. It should also be independently verified. LBNL and EVO have teamed up to offer a free, independent algorithm testing service that does just that.
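As a hedged illustration of what “open and automated” can mean, here’s a toy weather-normalized baseline regression in the spirit of open M&V methods. It’s a bare ordinary-least-squares model on daily data, not the LBNL/EVO methodology itself; real methods add change points, occupancy terms, and uncertainty bounds:

```python
def fit_baseline(oat: list[float], kwh: list[float]) -> tuple[float, float]:
    """Ordinary least squares: daily energy as a linear function of
    outdoor air temperature during the baseline period."""
    n = len(oat)
    mean_x = sum(oat) / n
    mean_y = sum(kwh) / n
    sxx = sum((x - mean_x) ** 2 for x in oat)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(oat, kwh))
    slope = sxy / sxx
    return mean_y - slope * mean_x, slope  # (intercept, slope)

def avoided_energy(model: tuple[float, float],
                   post_oat: list[float], post_kwh: list[float]) -> float:
    """Savings = what the baseline predicts for post-period weather,
    minus what was actually metered."""
    intercept, slope = model
    predicted = sum(intercept + slope * x for x in post_oat)
    return predicted - sum(post_kwh)

# Toy data: hotter days use more energy in the baseline year...
model = fit_baseline([60, 70, 80, 90], [100, 120, 140, 160])
# ...and the same weather uses less after the ASC project.
print(avoided_energy(model, [60, 70, 80, 90], [90, 105, 125, 140]))  # prints 60.0
```

Because every line of this is inspectable, anyone on the team can verify the savings math, which is exactly the property a proprietary black-box M&V algorithm lacks.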
I consider M&V a separate position on the all-star team. And in no way should proprietary M&V algorithms (that haven’t been independently verified) play into the pricing models of any players on the all-star team. That’s not going to end well…
Ok friends, now I’ve said all I have to say on advanced supervisory control (for now). Thank you for not canceling your subscriptions as I’ve explored this topic over the last month or so. To conclude, let’s summarize:
As always, I’d love to hear your response. Hit reply or let us know your thoughts in the comments.
¹ Remember, if you like it easy, you can always just sprinkle some machine learning on it.