Article · 30 min read
James Dice

The BAS architecture of the future

August 13, 2020

Welcome to part two of a three-part interview series with Matt Schwartz of Altura Associates on fixing today’s building automation system. In case you missed it, part one was on why the BAS industry is broken. Part two is about what we can do about it. If you haven’t read part one, start there!

To get part three in your inbox, make sure you’re signed up for Nexus emails. To learn more about Nexus and subscribe, start here.

Enjoy.

James Dice: Welcome back, Matt! Let’s start Part Two by getting nerdy with the old BAS architecture. Then, we’ll go through your new architecture and discuss the implications of that shift.

I’ve written previously about how the architecture of the BAS hasn’t changed much since before I was born. Can you explain the old architecture?

Matt Schwartz: Nerdy is my strong suit, James. If you’ll allow me a few generalizations here, the legacy BAS architecture can be summed up as multiple layers of controllers stitched together via different, BAS-specific communications protocols. At the lowest level, you have Field Controllers, which handle your sensors and command signals down at the device level. Above that, you have the Supervisory Building Controller, which aggregates the data from the field controllers and typically houses graphics, scheduling and other global functions. Large-scale campuses regularly have a centralized graphic server to help visualize the status of all of the devices.

For years these controller networks have existed as data islands with no industry communications standards. Without built-in ethernet and without standards, each manufacturer was left to develop their own serial string networks and protocols, and their dominant focus was on low bandwidth communications between controllers. I'm sure you've heard acronyms like N2, P1, P2, CCN, COM3, COM4 and the list goes on.

In a time when owners, operators, and the recent community of innovative data scientists weren’t asking for visibility into the BAS and internal IT departments were either unaware of or not engaged with the BAS, this architecture made sense.

The legacy architecture typically looks something like this:

As early as 1995, with ASHRAE Standard 135 (BACnet), efforts were made to promote standardization and interoperability. However, the legacy networks carry a lot of inertia and are expensive to rip out and replace, and thus the old models have survived and staved off enormous advances in technological innovation.

JD: As an outsider, it’s amazing that the architecture has largely stayed the same for the last 30 years. However, I understand that things are starting to change. What’s causing disruption these days?

MS: It’s a confluence of long-running pain points bumping into each other, but I would argue the primary driver and hottest spark is owner demand to extract and use data from their BAS. There is also pressure from cybersecurity/IT stakeholders and analytics software innovators, but the owners write the checks. And owners are increasingly demanding open, interoperable networks.

These owners are listening to their end-users, operators, designers, consultants, data scientists, and technology providers, who are showing them potential business value that is unleashed by access to real building data.

As an example of just one building owner who wants to use their data, here is an email I received from Kaiser Permanente on the matter:

“If a professional is made aware of a data pool that can inform a better service outcome, and is specifically asked to look at data – by anyone, especially the owner - then fails to do so, that’s actual negligence.

Let’s say we have a design engineer who is told – by anyone -  “don’t use old drawings and assumptions for entering supply water temperature sizing of a coil, use trends of the actual system”.  If that design engineer fails to look at the trends, uses the 42 degrees (because that’s what’s on the old drawings), then winds up with a system failure (short of capacity) because the real supply temperature is more like 50+, that’s real negligence.”

Efficiently accomplishing this type of data-sharing at scale, not just for a single engineer, requires a secure, network-connected, and interoperable BAS architecture. It’s not enough to simply specify a BACnet-compliant system. Allow me to repeat. It’s not enough to simply specify a BACnet-compliant system.

It’s about time for another auto analogy. Think about a future with many different autonomous vehicles on the roads. Simply requiring BAS products to use BACnet would be analogous to requiring that the autonomous vehicles broadcast their data using a common protocol but failing to require standard highway markings (think of this like network requirements), neglecting to coordinate how the vehicles will react to each other in space (standardizing sequences of operations), and leaving the owner to figure out if their system is being hacked (failing to adopt the latest cyber tech). Sure, the vehicles would be capable of communicating to each other, but we haven’t solved the bigger picture challenges of security and functionality.

Add to this widespread craving for data the fact that all of the other pain points discussed in part one still exist, plus a sharpened focus on IT security as a result of hacks into BAS systems, and finally throw in some spirited entrepreneurs… and whammo! Disruption.

JD: Fascinating. Building owners seeing the value of their data is huge. And I love the auto analogies… keep ‘em coming!

Okay, so with those changes, what’s the new BAS architecture you’re designing with your clients?

MS: The architecture we’re designing and implementing with our clients is based on an extensive assessment of the owner’s goals, resources, and business processes. It flips the conventional process on its head - instead of leading with the technology or with a particular product solution, we build up a set of requirements, standards, and acceptance criteria and then allow competition from the industry to best meet those needs.

As we discussed in our first conversation, this approach works best when we are able to build a strong partnership with the client’s IT team. Many organizations and their buildings already have robust ethernet and virtualized server infrastructure to support open, interoperable BAS architectures. And the rapid increase in ethernet-based field controllers in the BAS industry feeds directly into this design. When we bridge the gap between Facilities and IT, we unlock enormous cost savings and value for the owners.

It looks like this:

There are three key elements to this design.

The Field Layer (in the building)

This is the hardware layer, built to be the rock in the building: the layer that can be confidently relied on to carry on critical operations and temperature control, even during network outages or disruptions at the supervisory layer.

Here’s my checklist for a successful field layer: 

  • Interoperable communications (BACnet/IP for today)
  • Ethernet-based controllers patched directly into the owner’s IT-provided cabling and network gear
  • Critical programming to ensure stand-alone building control in the event of network failure
  • Temperature control device loops and fallback values (see the control-loop sketch after this list)
  • Network communication loss detection logic
  • Preferably a product with no sales territories and great service providers with a reputation of delivering exceptional customer service (Yes, this is a real thing)
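
To make the local-loop and fallback items above concrete, here is a minimal sketch, in Python, of the kind of logic a field controller carries on its own. Real field controllers are programmed in vendor-specific block or line programming; the gains, setpoints, and function names below are illustrative placeholders, not any particular product’s API.

```python
# Minimal sketch of stand-alone temperature control with a local fallback
# setpoint. Gains and setpoints are illustrative placeholders only.

FALLBACK_SETPOINT_F = 72.0   # assumed safe occupied setpoint, deg F
KP, KI = 8.0, 0.02           # illustrative PI gains


def active_setpoint(supervisory_setpoint_f):
    """Use the supervisory setpoint when one is present, else the local fallback."""
    return supervisory_setpoint_f if supervisory_setpoint_f is not None else FALLBACK_SETPOINT_F


def pi_heating_output(zone_temp_f, setpoint_f, integral, dt_s=60.0):
    """One PI iteration; returns (heating output 0-100%, updated integral term)."""
    error = setpoint_f - zone_temp_f
    integral = max(0.0, min(100.0, integral + KI * error * dt_s))
    output = max(0.0, min(100.0, KP * error + integral))
    return output, integral


# Example: the network is down, so the loop runs against the fallback setpoint.
out, i_term = pi_heating_output(zone_temp_f=68.0, setpoint_f=active_setpoint(None), integral=0.0)
print(round(out, 1))   # -> 36.8 (4 degF error: KP*4 + KI*4*60)
```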

The Advanced Supervisory Control (ASC¹) Layer (typically in the data center)

This is the most innovative layer in the architecture, and I often refer to it as the Optimization Layer. Here, we consolidate the many building controllers or supervisory devices from the legacy model into just a few virtual servers. This consolidation simplifies licensing, logic programming and maintenance efforts, while boosting security, performance and standardization.

Standardization of programs and alarms at this layer prevents the situation where, project after project, these items are undefined and built from scratch, leading to dysfunction and nuisance alarms.

Here’s my checklist for the virtual servers at the ASC layer: 

  • Standardized point naming, user account setup, and alarming (a point-naming check is sketched after this list)
  • Standardized database organization (i.e. folder structure and layout)
  • Standardized optimization programming for scheduling, setpoints, and high-performance sequences
  • Server support, disaster recovery, high availability, and backup services provided by the IT department
  • Interoperable (BACnet/IP) communication southbound and an authenticated, encrypted TCP-based protocol northbound
  • Scheduled operating system maintenance and vulnerability scans
  • Regulation of custom software and tools that prohibit a multi-vendor environment
  • Owner controlled licensing and software maintenance requirements
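
To show how “standardized point naming” can be enforced rather than merely requested, here is a small sketch that checks point names against a hypothetical SITE-BLDG-EQUIP-POINT convention. The pattern is an assumed example for illustration, not a published standard or any vendor’s rule.

```python
import re

# Hypothetical convention: SITE-BLDG-EQUIP##-POINT, e.g. "HQ-B1-AHU01-SAT".
POINT_NAME_PATTERN = re.compile(r"^[A-Z0-9]+-[A-Z0-9]+-[A-Z]+[0-9]{2}-[A-Z]{2,5}$")


def non_conforming_points(point_names):
    """Return the point names that violate the assumed naming convention."""
    return [name for name in point_names if not POINT_NAME_PATTERN.match(name)]


if __name__ == "__main__":
    sample = ["HQ-B1-AHU01-SAT", "HQ-B1-AHU01-DSP", "ahu1 supply temp"]
    print(non_conforming_points(sample))   # -> ['ahu1 supply temp']
```

A check like this can run as an acceptance step before a new building’s database is merged into the optimization layer.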

The Enterprise Layer (AKA Central Graphics and Analytics Layer; also in the data center)

This is the most interactive layer, where we get the visualizations, energy use analytics, smart alarms, and artificial intelligence action. The magic of this layer is that it is made possible by the strict data standards that are enforced in the optimization layer, which enable reliable import and tagging of the data for use in sophisticated analytics software.

We also move any programming of the sequences of operation traditionally put in a graphical front end down to the optimization layer, which lets the connection between this server and the optimization layer stay largely at rest. This relationship is critical in delivering a high-performance network, since any centralization effort has the highest potential to tax the entire network stack.

Here’s my checklist for the Enterprise Layer:

  • Authenticated, encrypted, TCP-based data transfer only (see the TLS sketch after this list)
  • Standardized alarm and BAS interface reporting tools
  • Standardized graphic templates
  • Standardized database organization (i.e. folder structure and layout)
  • Regulation of custom software and tools that prohibit a multi-vendor environment
  • Owner controlled licensing and software maintenance requirements
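
As a rough picture of what “authenticated and encrypted TCP-based data transfer” means in practice, here is a minimal Python sketch that opens a TLS connection with certificate verification. The hostname and CA bundle path are placeholders; a real deployment would use whatever transport and certificates the owner’s IT security team specifies.

```python
import socket
import ssl

# Placeholders for illustration only.
ENTERPRISE_HOST = "graphics.example.org"
CA_BUNDLE = "/etc/pki/enterprise-ca.pem"   # assumed owner-managed CA bundle

# Require server authentication and a modern TLS version.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.load_verify_locations(CA_BUNDLE)

with socket.create_connection((ENTERPRISE_HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=ENTERPRISE_HOST) as tls:
        print(tls.version())   # e.g. 'TLSv1.3'
```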

I just realized I may not have mentioned the opportunity to standardize your graphics here. If I may express an opinion, BAS projects and implementations spend entirely too much time on custom graphics rather than on the actual function and operation of the BAS.

Custom graphics are nearly impossible to implement well at scale. The challenge is the word custom. Meaning one-off. Meaning one provider responsible for software, the images, the know-how around the deployment of custom objects. This can completely defeat the concept of an open front end, not to mention create dependency and compatibility issues down the road when versions of custom objects are no longer supported.

JD: Thanks for walking us through that. It’s a huge shift from the status quo! And the checklists will be very useful for part three when we talk about how to implement this anywhere.

Let’s spend the rest of this conversation on the implications of this shift. Starting at the edge, it seems like this opens up and commoditizes the field layer. What are the advantages of that?

MS: This chunk...

Yes, a truly open field layer is incredibly powerful because the bulk of the hardware and fixed cost lives here. So the goal here is to build an architecture without dependency on any one specific product AND to leverage Ethernet-based controls to simplify and, in most cases, remove the supervisory building controller layer.

Time for another car analogy! Imagine if, for the last 40 years, cars were built such that they only worked on just a single brand of gasoline. You would be locked into visiting that same service station for the life of the car to keep it going, and you’d have to buy a new car if you wanted to use a different service station. Now, imagine if one day a company created a new car design that could run on any brand of gas and offered to license that technology to any auto manufacturer in the world. That, James, is the capability of this open, interoperable BAS architecture! Not only does it bring freedom to the users, it allows all of the manufacturers to play.

We’ve now demonstrated this new Field Layer with several clients. Facilities managers and construction departments evaluated multiple bids against a robust standard and were able to focus on exceptional customer service and expertise. In one case, the team even chose to deploy two different field controllers in their new facility. The sky did not fall!

I am not saying you should put every field controller under the sun in your facilities, either. However, if you consider a campus environment with a mix of laboratory and classroom buildings, the ability to use premium controller A in the lab building and more economical controller B in the classroom building, while preserving seamless communications, can be hugely valuable. The open model unlocks that flexibility.

No longer is there a physical dependency between my field control layer and the building control layer due to serial networks and the need for that local aggregator. In my opinion, this level of flexibility is the underpinning concept that gave birth to the term IoT. Plug your edge/field control into the network and integrate up into your standardized architecture—completely optimized and interface-able.

JD: That sounds like a game-changer for delivering better service and a better retrofit experience for building owners. And while some contractors who are used to the lockdown model might view it as bad for business, it actually seems like a win-win to me.

Let me pause on one key detail. If you remove the supervisory controllers or virtualize them, how can we rely on the equipment to run if we lose connectivity with the server?

MS: This is usually the first question that comes up, and the solution is quite simple. The Field Controllers are programmed to default to a safe, basic operating condition on any loss of network connectivity. The way we implement this is to have the optimization server regularly ping the field controllers to let them know it is online and communicating. If the field controller misses that ping a couple of times, it automatically falls back to a safe occupied condition.

The optimization logic will be out of order until the issue is resolved, but the building will run in local automatic mode using its distributed logic. While the initial perception is that consolidating the supervisory functions into virtual servers that may not be in the building increases the risk of system downtime, we are finding that this architecture actually improves the reliability of basic system operation.
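
For readers who want to see the mechanics, here is a minimal sketch of the heartbeat-and-fallback pattern Matt describes, assuming a hypothetical controller runtime; real implementations live in the field controller’s own programming environment, and the timeout and setpoint values are placeholders.

```python
import time

HEARTBEAT_TIMEOUT_S = 300.0          # assumed: fall back after ~5 minutes of silence
OCCUPIED_FALLBACK_SETPOINT_F = 72.0  # assumed safe occupied setpoint, deg F


class FieldController:
    """Hypothetical field controller keeping a watchdog on the optimization server."""

    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.supervisory_setpoint = None   # written by the optimization server

    def on_heartbeat(self, setpoint_f):
        """Called whenever the optimization server pings this controller."""
        self.last_heartbeat = time.monotonic()
        self.supervisory_setpoint = setpoint_f

    def active_setpoint(self):
        """Follow the server while it is online; otherwise run the local fallback."""
        heartbeat_missed = time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S
        if heartbeat_missed or self.supervisory_setpoint is None:
            return OCCUPIED_FALLBACK_SETPOINT_F   # safe occupied condition
        return self.supervisory_setpoint
```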

JD: Got it, thanks. Let’s move up to the supervisory layer. Virtualizing supervisory controllers seems like a great way to simplify things for the O&M staff and a great way to reduce the first cost of BAS installs. What are the benefits of removing all that hardware?

MS: How about a before and after as a refresher from part one?

Before...

After...

You nailed it. The idea here is to save money and get back to focusing on the operation of the BAS. Get your BAS team out of the server/software/hardware maintenance business. Essentially, you are consolidating what would have been (for a large portfolio) hundreds of non-IT-supportable embedded servers and PCs into a handful of virtual machines supported by IT.

I cannot tell you how many BAS departments have recently been forced to become software maintenance experts in order to support field devices with embedded operating systems. As with any operating system, these devices require upgrades, security hardening, and patches as new versions are released. IT cannot support these devices because they use proprietary software tools for device management.

Clearly, one option is to have these devices serviced by the appropriate vendor and to maintain a service agreement that achieves your security and reliability requirements. The question becomes: do you want to pay to have all of those devices managed separately from the many other network devices managed by your organization, or do you want to bring that functionality into your core IT environment, where you can benefit from economies of scale and common security standards?

I’ve run the cost models on this many times for folks to demonstrate the savings. Here are some high-level numbers on the first cost savings (I’m trying to use all conservative figures here so the comment board doesn't light up on this one!):

A building controller usually costs somewhere between $3,000 and $6,000 upfront. For a project that requires 10 controllers, you are looking at $30,000-$60,000, excluding installation labor. For argument’s sake, let’s call it $50,000 installed.

Standing up virtual machines with your IT department typically requires only labor and software licensing. In my experience, the licensing cost covering a project of this size (and more) will be $15,000-$20,000. And the IT department may charge anywhere from $0-$5,000 to set everything up. So you are looking at $25,000 max up front - a 50% savings already!

A caveat on the virtual server approach is that maybe you have a good reason to locate the optimization server on site. Guess what, this still works and saves money! Work with IT to put it on a robust industrial PC or a blade server in the building, and you can still consolidate the proprietary hardware and capture significant savings. IT focuses on the software and security while the BAS team works to make building operations better.

Let’s also look at annual operating costs of these scenarios:

For the building controllers, we can assume you will spend an hour updating each device twice a year (a poor frequency by any IT security standard, by the way). If we apply this to 10 devices at a labor rate of $140/hr, you are looking at $2,800 annually. This may not sound so bad, but $2,800 annually can get you virtual infrastructure with 100x the horsepower needed to run the same software at larger scale, backed up and supported with a disaster recovery plan, with resources you can scale as your organization grows.
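
Just to pin down the arithmetic, here is a small sketch that reproduces the first-cost and annual-cost comparison using the figures quoted above. The dollar amounts are Matt’s rough, conservative estimates, not fixed prices.

```python
# Rough cost comparison using the figures quoted above (illustrative only).
NUM_CONTROLLERS = 10

# Hardware-heavy approach
controllers_installed = 50_000               # ~$3k-$6k each for 10, plus install labor
hours_per_update, updates_per_year = 1, 2
labor_rate_per_hr = 140
hardware_annual_updates = NUM_CONTROLLERS * hours_per_update * updates_per_year * labor_rate_per_hr

# Virtualized approach
licensing = 20_000                           # upper end of the $15k-$20k range
it_setup = 5_000                             # upper end of the $0-$5k range
virtual_first_cost = licensing + it_setup

print(f"First cost: hardware ${controllers_installed:,} vs. virtual ${virtual_first_cost:,}")
print(f"Annual labor just to update the hardware controllers: ${hardware_annual_updates:,}")
# -> First cost: hardware $50,000 vs. virtual $25,000
# -> Annual labor just to update the hardware controllers: $2,800
```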

I haven’t put a value on cyber security in this equation yet, either. Distributed devices are more often ignored and unpatched for long periods of time, which makes them a major cyber risk. Virtualizing into an optimization layer forces attention from IT, minimizes network access points and ensures no security balls are dropped.

Standardization becomes much easier in a virtualized optimization layer as well. Settling on open software at the virtual layer makes it possible to define your own standard optimization sequences, point naming standards, alarm configurations, etc. It makes them much easier to deploy across multiple buildings and lets BAS operators become familiar with one set of management tools for modifying these programs.

Decoupling the optimization programs from the field level also greatly simplifies the coding at the field layer, adding resiliency and reliability. Move the programs that operators frequently need to modify up to your optimization layer, and the field layer becomes darn near touchless if well deployed and thoroughly commissioned. It’s a great day when BAS operators don’t need to know how to use all of the specialized field controller software tools of varying versions and complexity and can focus on building performance using a common application with standardized tools and programs. This is invaluable for any organization that wants to really take ownership of their BAS systems.

JD: Wow, that’s compelling.

You mentioned inertia at the beginning of this conversation. I’d imagine those that are making great money selling hardware have just a little resistance to that shift. What would you say to them?

MS: I’d say you have a critical role and plenty of business opportunities in this new architecture as well. Shift your focus to enhancing your field controllers. There is much more to be done in that space. For example, most field controllers still don’t have live programming applications built-in and require a separately licensed software for management and configuration. Not having all the tools and applications an owner needs to 100% own and manage their devices parallels what we see in the farming industry with Right to Repair. Owners of large computerized farm machinery are not provided the software to repair their equipment.

The manufacturers who are producing owner-friendly field hardware with open programming interfaces will have major advantages moving forward. We ought to be able to log into these devices directly and set them up entirely through a browser. No special software or licensing required. I already see this direction catching on with the Distech product line, which appears to be furthest ahead of the curve with their Eclypse line and no anti-competition sales territories that I know of. I must also point out that this endorsement is 100% unbiased, as I have no professional agreements or commitments with Distech. I simply see their market approach catching on in this new owner-driven sales environment.

The other untapped market is the enterprise graphics software and ASC/Optimization layers. Manufacturers are still refusing to take cues from what Niagara 4 has done to define what open BAS software looks like, and to follow suit or put their own spin on it.

I just realized I’ve also failed to plug Tridium’s Niagara 4 so far in our conversations. I’m not good at plugs. However, we can’t avoid the reality that in today’s BAS market, Niagara 4 is thus far the only software that can deliver 100% of the recommendations and requirements of this architecture. This always makes me feel strange, as I naturally remain agnostic and compare equals, but today there is no equal to challenge Niagara’s long-running position as the open BAS leader.

Right now, Niagara 4 is the only software on the market built first for interoperability and strong cybersecurity. Most enterprise software still consolidates graphics and optimization into one application, and most vendors have purpose-built their software to work only with their own controllers, even on a BACnet network. For most vertically integrated BAS providers, interoperability still feels like a forced feature that is not prioritized.

I’ll get off the soapbox now.

JD: Fair enough. I see that as a nice challenge for everyone else besides Distech and Niagara!

Continuing up the stack, I love the concept of opening up, centralizing, and standardizing optimization. One of my struggles has always been that no one seems to be taking true responsibility for that key function.

What are all the standard optimization functions that happen at that layer?

MS: Picture for reference.

Glad you asked. Let me list the common ones.

  • Standardized point naming - still the bane of our BAS existence!
  • Scheduling - Holiday and zone-based
  • Setpoint reset logic of all flavors - Duct static, supply air temperature, chilled water temperature and pressure and the list goes on
  • Occupancy-based optimization between interoperable devices like lighting and HVAC controls - airflow setback, after-hours HVAC operations, etc.
  • Global temperature setpoint management
  • Weather services feeding global economizer controls

The types of programs that belong in this layer, I’ll say again, are not critical to the building's functions (yes, even scheduling) when a good fallback strategy is used locally. These are the programs that institutions can standardize on to ensure all of their facilities operate in a manageable and predictable manner. We recommend following industry standards wherever possible, such as ASHRAE Guideline 36. This makes it easy to set expectations with contractors. Frankly, it takes all the guesswork out of the installers’ scope. We work with organizations to build templates for the most common optimization programs; these are handed to the installing contractor to ensure consistency across projects.
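
As one concrete example of the kind of standardized program that lives in this layer, here is a simplified sketch of a Guideline 36-style trim-and-respond reset for duct static pressure. The parameter values are placeholders, and a real deployment would follow the owner’s template and the guideline itself rather than this illustration.

```python
# Simplified trim-and-respond reset for a duct static pressure setpoint,
# in the spirit of ASHRAE Guideline 36. All parameters are placeholders.

SP_MIN, SP_MAX = 0.5, 1.5        # inH2O, allowed setpoint range
TRIM, RESPOND = -0.04, 0.06      # inH2O adjustment per reset interval
IGNORED_REQUESTS = 2             # ignore the first N zone pressure requests


def reset_static_pressure(current_sp, zone_requests):
    """One reset interval: trim when few zones ask for more air, respond when many do."""
    if zone_requests <= IGNORED_REQUESTS:
        new_sp = current_sp + TRIM
    else:
        new_sp = current_sp + RESPOND * (zone_requests - IGNORED_REQUESTS)
    return max(SP_MIN, min(SP_MAX, new_sp))


# Example: the setpoint trims down while zones are satisfied, then steps up on requests.
sp = 1.0
for requests in [0, 0, 5, 1]:
    sp = reset_static_pressure(sp, requests)
    print(round(sp, 2))   # -> 0.96, 0.92, 1.1, 1.06
```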

As a commissioning authority, I can tell you how powerful this concept is at mitigating the risk that the building does not perform as intended. It frees contractors and programmers up to spend their time ensuring the system operates properly rather than writing complex optimization code. This is a huge win in the plan and spec world where the numbers are tight. Contractors might even stand a chance at making a profit on a job where the building runs well 🙌.

Another enhancement the open optimization layer unlocks is that anyone with skills in the open software can write, modify and enhance these programs. We are an example of this. We are building scientists and energy experts writing energy optimization code directly in the controls software. There is efficiency and value in having the experts best suited and most incentivized around a particular outcome deliver the functionality and be accountable for it. The few BAS technicians in the world who can write this code are inundated with projects and seldom have the time, incentive or budget to fully implement and test the optimization functions.

JD: That’s so cool. And it seems like that would make it very simple to plug in new applications like analytics, advanced supervisory control, etc. That brings us full circle, because it essentially frees up the owner’s data. Am I right?

MS: Nice pivot to the top layer, James 😉

This is exactly right. Having a standardized optimization layer not only allows competition and interoperability downstream, but it also creates the perfect data pool for the top layer of central graphics servers and other emerging technologies to plug into. This delivers the flexibility to plug in whatever business applications work best for your organization rather than having to work within a defined ecosystem of providers and services.

If you want to efficiently integrate your building data, at low cost, with new analytics, asset management, digital twin, BIM, and work order management tools (insert any other emerging tools out there), you must have a secure, organized, QC’d data source. The lower layers of the new BAS architecture serve the data up ready for action. Owners want secure, enterprise-grade communication protocols that are TCP-based, encrypted, and password protected, feeding a central database that can be accessed securely by a wide range of applications.

Put it this way, James: if you are rinsing and repeating the verticalized BAS architecture of yesterday, you are waving goodbye to many of these emerging technologies.

JD: Preach!

Okay, wrapping up here, with this new architecture, what’s left of OT? Is OT dead? What are the responsibilities of the IT folks?

MS: As I have stated previously, I am struggling to see the lasting value in the devices and tools that are referred to as OT. There is clearly a divide between IT and BAS that is recognized and discussed widely in our industry. This is not an issue that will be solved by building technology, new devices, or BAS-centric practices intended to keep these worlds decoupled. It WILL be solved by having the difficult conversations with the IT and BAS thought leaders and implementers who have unfortunately struggled to work together historically. I think a lot of us have been there to witness inter-organizational dysfunction between the BAS and IT departments. I still see brand-new buildings constructed where the BAS receives a dedicated physical network, eliminating any chance that IT can support any component of it. The architecture we are offering here shares existing IT infrastructure in a safe and secure way, minimizing upfront and long-term cost and complexity while leveraging the organization's IT department to the fullest.

Candidly, it also frustrates me because the brilliant minds in the OT space are exactly the minds we need in the room with IT to demystify BAS tech while listening and identifying the existing IT mechanisms that accomplish the needs of BAS.

Let’s start collaborating with our IT partners who have solved these matters for us already.

JD: Awesome, Matt. Well, thanks for the education once again. I’m really excited to see the reaction to this one.

In part three, let’s make this even more real for folks. It’ll be a how-to guide, complete with a sample specification for the BAS of the future. Any last words for now?

MS: Well, first, keep the feedback coming! Second, I also have to say that this architecture is no longer a hope or dream of mine; it is a demonstrated future that we must choose to pursue as an industry. All of the tools are available today, and we are actively demonstrating how well this works with major clients across the board. The hard part is breaking down the wall and working together.

---

¹ If the ASC acronym is new to you, check out the three-part series on advanced supervisory control, starting here.

Ready for Part 3? Check out Implementing The BAS Architecture of the Future

P.S. Matt and I will be discussing this architecture and fielding questions from Nexus Pro members at our August member gathering. To get the invite and recording, sign up for Nexus Pro.
