Happy Thursday!
Welcome to this week’s deep dive exclusively for Nexus Pro members. It’s an honor to have you here. This deep dive is a follow-up to my recent conversation with Terry Herr, president of Intellimation. I learned a lot from this conversation and want to share my takeaways and the full transcript with you below.
In case you missed it in your inbox, you can find the audio or video here:
Nexus site | Apple Podcasts | Spotify | YouTube | Add to other podcast apps
Enjoy!
—James
Disclaimer: James is a researcher at the National Renewable Energy Laboratory (NREL). All opinions expressed via Nexus emails, podcasts, or the website belong solely to James. No resources from NREL are used to support Nexus. NREL does not endorse or support any aspect of Nexus.
This conversation was a lot of fun for me. I kept wishing I had met Terry a long, long time ago. He's one of the originals of building analytics!
I think he gave a master class on several cutting-edge topics that analytics providers will get a lot out of.
The first is the open data layer and using VOLTTRON to implement it. This strategy is somewhat at odds with the approach Nick and Alex from KGS advocate. I respect everyone's opinions here, understand the tension, and see both sides. I'm excited to see how the debate evolves.
The second is advanced supervisory control. I don't know anyone who's further along in thinking about it and piloting it than Terry. Also, I realized during this episode that analytics and controls are really two sides of the same coin. I don't think everyone is thinking about it that way. More on that to come.
Finally, Terry and I think about analytics in the same way: it's a tool in the toolbox. The rest of the tools are: open protocols, data modeling, operator training, commissioning, and optimized sequences.
I'm excited to see your responses to this one.
[00:10:56]
Even in the building automation world, we always wanted to be as vendor neutral as one can be in that space. One of the aspects of the building automation world I don't like is that the distribution of products is somewhat controlled. You can't just buy and use whatever you want; you have to sign up, and you have to be an authorized distributor. And so we always try to be, you know, client-focused, you know, solution-focused and not product-focused. Because products evolve, right? That means, you know, what might be the best today isn't going to be the best 5 or 10 years from now.
So even in the BAS world, I mean, we still do some building automation, and we still use Delta mostly, but sometimes it's not the right fit, you know what I mean? So we'll use other products.
[00:12:21]
One of the things that we realized very early on is that the data acquisition piece is troublesome, right? It's the first step, and it's not an easy step. And we wanted to find a neutral, we'll call it a trending appliance, right? Because all of these products leverage trended data, and if you look at the trend capabilities of a typical BAS, they're weak, right? And they vary dramatically.
If you walk into a typical building today on the BAS, it's all over the place. Sometimes they're trending 5% of the points, sometimes they're trending 85% of points, but most of the time it's fairly minor. And many of them couldn't trend at the interval that you need to do good fault detection. And so we really looked for some way that we could get data out of buildings easily.
[00:20:04]
James Dice: Let's stay on the independent data layer real quick. So, let's just kind of close that out, because it was one of the things that I wanted to talk to you about. So with you guys installing that and then having many different options of where you can then plug analytics into it, one of the things that I wanted to ask you about is how you tag and model that data, given that all of the different analytics platforms that you might plug into it have their own data models.
So how do you think about that?
Terry Herr: Well, you know, I guess our view on the value of the middleware layer is that it's sort of akin to what BACnet is versus a proprietary comm bus. If you have a standard data path, then basically you don't have vendor lock-in. And our industry has a bad reputation for what I call the lock-them-and-loot-them business strategy, where, you know, once the vendor gets a foot in the door, they're very hard to get out. Now, what makes that work is this tagging standard. Prior to a tagging standard, in fact, all of the FDD products that we've used really required a proprietary mapping every time, right?
You had to do a mapping process, and they had their own data model. And we're in the transition right now between that and a standard data model. I was hoping there would be one; you know, probably almost a year and a half ago now, Brick and Haystack and ASHRAE put out a press release that indicated they were going to work together to have one standard. Prior to that, it seemed like we were going to have maybe three, and that was not going to be good for the industry. I don't know that I've seen them work together a lot, but I've certainly seen pressure from Brick, and I think the pressure from Brick is probably what prompted Haystack 4.0. So I don't think any of them are perfect, but we think that a standard is ideal, and it's what makes this plug and play work.
And our approach at the moment is to actually tag with Haystack. Although we're working with somebody right now that thinks we can cross tag, so we can tag a dataset with both Haystack and Brick, because they do overlap a fair amount, right? And then we house those tags in whatever database we're using. Right now we're housing them in Crate, but if we transition to Timescale, we'll house them there. And then we'll have a Haystack API. So the other part of the standard is having a standard way to pull the data out, and right now the Haystack API is the best thing that we've got. In fact, for what we have now, we actually wrote a Haystack API connector. And so the theory is that if we do the data acquisition and have the data in a neutral database, then the clients can use more than one.
They can transition easily. They can switch gears if there's a product, you know, three or five years from now that they want to change to; they don't have to redo the entire data acquisition and tagging layer. That all stays. And it's all open source. The beautiful thing about VOLTTRON and those databases is that, so far, everything is open source. So it didn't really cost anything. There was no SaaS model to it, for the entire neutral data layer.
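To make the cross-tagging idea concrete, here's a minimal sketch (in Python, since that's VOLTTRON's language) of a single point record carrying both Haystack marker tags and a Brick class, stored alongside the time series. The identifiers and the tag-to-class pairing are my own illustrative assumptions, not an official alignment between the two standards, and real Haystack JSON encodes markers differently than plain booleans.

```python
# Hypothetical point record combining Haystack tags and a Brick class.
# Tag choices are illustrative, not a normative Haystack/Brick mapping.
point = {
    "id": "ahu1-dat",                      # made-up point id
    "dis": "AHU-1 Discharge Air Temp",
    "haystack": ["point", "sensor", "discharge", "air", "temp"],  # marker tags
    "unit": "°F",
    "brick": "brick:Discharge_Air_Temperature_Sensor",  # rough Brick equivalent
    "equipRef": "ahu1",
}

def to_haystack_row(p):
    """Flatten the record into the tag/value shape a Haystack read might return."""
    row = {tag: True for tag in p["haystack"]}  # markers (simplified encoding)
    row.update(id=p["id"], dis=p["dis"], unit=p["unit"], equipRef=p["equipRef"])
    return row

print(to_haystack_row(point))
```

The point is that the tags live with the data in the neutral store, so any FDD product that speaks Haystack can discover the point without a proprietary mapping exercise.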
[00:26:43]
I agree with your comment. One of the things about SkySpark is that, from our perspective, I still think it's probably the industry-leading product, but it definitely has a very heavy engineering component, right? The implementation of it is very time consuming, engineering wise. And I consider that to be sort of the opposite of, say, a KGS, where, you know, everything is done. You're not writing any of your own rules. You're not even doing the data acquisition, or you're not doing the tagging piece. Everything is done. So it's a plug and play product.
SkySpark, again, is the industry leader, but I joke to people that it's kind of like half done, right? They built the back end, but there's no rules, right? The user interface is weak. So in the early days of SkySpark, most people overlaid it with something else. Again, people have been successful with it, but it's very different than a KGS.
And I consider CopperTree to be somewhere in the middle, right? There you can write your own rules if you want. They've made it easier though. They've abstracted the rule writing, so it's a little bit easier to use than SkySpark, but to me one of the cool things about their product is they have a rules library that all the partners share. And so you're not reinventing the basic NIST rules, every integrator, you know, over and over and over again, which is, I think, a weak part of SkySpark, where every integrator is rewriting the standard, you know, 80% of the rules that are the same thing for everybody.
[00:28:51]
What I mean by that is, and this sort of goes to Nick from KGS's point, that we're not at a place where, if one person tags it with one standard, every vendor can automatically consume that information. And part of that is because, you know, the standards aren't perfected yet, but we're certainly getting a lot closer.
And it's funny, you know, Nick tells the story that the first site that they did with Haystack tags, they basically had to start over again, because there was a lot of discrepancy in the way that you would tag with Haystack. There was nothing to force everybody to tag the same way. And so part of that is, you know, the tagging model, and then the training on how to actually tag. Or having a standard tagging tool. If everybody used the same tagging tool, right, then everything would be tagged the same, even within Haystack.
And we're not quite there, but I think we're getting a heck of a lot closer. Haystack 4.0 closes the gap on a bunch of their weaknesses in the past. And there are a number of tagging tools out there now. In fact, we've been sort of hunting for the ideal, what I'll call auto tagging tool for some time and demoing lots of products, because that's been, you know, obviously an obstacle to deploying this type of software. But I think we're at a point where, if Haystack is defined well enough and the method of deploying it is defined well, the software vendors can then create the right connection.
I mean, in the perfect world, they'd be able to basically, with the Haystack protocol, connect to any database that's tagged well, and there would be no onboarding, right? It would automatically work. That's the perfect world. I don't think we're quite there, but I would say we're probably 80% there.
[00:30:33]
James Dice: And these tagging tools, I'm not that familiar with these tools that are popping up in this area. Is this a place where it's like a startup company that's popping up, providing these tagging tools, or is VOLTTRON heading that direction to provide that? I'm assuming what you're talking about is like machine learning algorithms that add the tags automatically based on a point name and, and the data that that point is associated with, that kind of thing. Is that right?
Terry Herr: That's right. And you know, we've tagged by hand for years, and it's time consuming and painful. And it's because our industry didn't have a point naming standard. If we had one of those years ago, it wouldn't be quite so hard to add tags. But you can imagine, you know, every site you go into, they name things somewhat differently. So, yeah, you know, to me the Holy Grail was some kind of auto tagging. And when I say that, I mean some machine learning that uses all the metadata you can pull from a BAS. So metadata meaning the point name, which is probably always the most important, but you've got the unit of measure, the actual present value and the trended value. You've got typically the description field. You've got the point type field, right? So BACnet gives you a number of properties that, when you scrape it, you know, give you hopefully enough information to decipher what that point is and add the tags accordingly.
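To make the auto-tagging idea concrete, here's a minimal sketch of the machine learning approach Terry describes: concatenate the metadata a BACnet scrape gives you and train a text classifier to predict a tag. The point names, labels, and tiny training set are invented for illustration; a real tool would train on thousands of labeled points and predict full tag sets, not a single label.

```python
# Toy auto-tagger: classify BAS points from scraped BACnet metadata.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def features(p):
    # Concatenate whatever metadata the scrape returns into one string.
    return " ".join([p["name"], p["desc"], p["units"], p["type"]])

train = [  # (point metadata, label) pairs; entirely made up
    ({"name": "AHU1.DA-T", "desc": "discharge air temp", "units": "degF", "type": "AI"}, "discharge-air-temp"),
    ({"name": "AHU2.SF-S", "desc": "supply fan status", "units": "", "type": "BI"}, "fan-status"),
    ({"name": "VAV101.ZN-T", "desc": "zone temp", "units": "degF", "type": "AI"}, "zone-temp"),
    ({"name": "AHU1.SF-C", "desc": "supply fan command", "units": "", "type": "BO"}, "fan-command"),
]

# Character n-grams cope with cryptic, site-specific point naming conventions.
model = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                      LogisticRegression(max_iter=1000))
model.fit([features(p) for p, _ in train], [label for _, label in train])

# A point named differently on another site; the metadata still gives it away.
new_point = {"name": "RTU3.DAT", "desc": "disch air temperature", "units": "degF", "type": "AI"}
print(model.predict([features(new_point)])[0])  # expected: discharge-air-temp
```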
The work in this area has been going on for some time. Again, you know, Mike Brambley at PNNL has done some early work on auto tagging. United Technologies Research Center, the Carrier folks, actually funded one of the early tools that we used; it was a project that their research center built, probably four years ago now, with some DOE funds.
So it's open source and we got it. And it was a good start, but it wasn't finished. And then BUENO has one. Pretty much anybody who's an analytics company is going to build a tagging tool, right? Just to make their life easier, but it isn't going to be a standalone one. There's a company called Onboard.io; they are a startup, probably two years old now, I think two guys from KGS and one guy from Opower, and they initially started up specifically to build an auto tagging tool, so I know they have something. We worked with them in the early days.
There's another company that we are testing out right now, which so far we think is probably the best thing we've seen. It's actually a company called Kinetic Buildings. They're out of Philly here, so they're actually a local company, a PhD that came out of Drexel who actually has an entire analytics platform as well, which we're also testing, but we're really more excited about his auto tagging tool. We want to be able to use that sort of independently if we can. So again, I know CopperTree has built, you know, some auto tagging capabilities as well. I don't know where that is at the moment, but almost everybody that has a product has to have something like that. We're just looking for something that is standalone, right? So it's not married to a particular product.
[00:33:52]
James Dice: What are you seeing as far as the types of supervisory control that analytics firms are wanting to add? I know ASO, you mentioned, is one of them. I'm seeing that and some other applications as well. But what are you excited about in that area?
Terry Herr: Well, we definitely think that, you know, what we call optimization, I think KGS calls opportunities for optimization, right? Their software will actually find those types of things. That's very much what AIRCx does as well. So basically, you know, things like optimal start, which is algorithms that have been around for 30 years but get deployed very infrequently; they should be deployed, you know, in every building. Temperature and pressure resets for VAV air handling units. We see that a little more often, maybe in 15% of the buildings that we walk into where it's applicable, but not in the other 85%. So, you know, that's an optimization. Chiller plant optimization is where, to me, this work has been going on for a long time, right? That's where it really started, in chiller plants, because if you're going to optimize one thing inside a building, if it's got a chiller plant, that's what you want to hit first. So that's fairly developed.
In fact, there are a number of companies that specialize in this, with products like Plant Pro, Tech Works, Optimum Energy. They all basically give you a Niagara JACE, right? With their optimization algorithms on them. You drop that in the building, you map across the data points for the chiller plant, and it takes supervisory control over from the BAS. So we think you can do that both on the chiller plant and on the rest of the building's HVAC equipment, so on the air side as well. And we see that sort of as the next move.
[00:35:58]
So in my view, first of all, I don't think you can take that to the cloud entirely. I think that is going to have to stay at the edge. So we've always used a heftier data collection box. So, you know, we're running on an industrial NUC, because we know that, you know, intelligent demand control and optimization layers can be placed there.
A lot of easy optimization is literally writing to set points, which BASs are designed for anyhow. So you can put those algorithms into an edge box, or into the same VOLTTRON box that we have there, and just run those algorithms there, and it's just going to write to set points. It's going to write to the condenser water set point. It's going to write to the chilled water set point. The VAV air handlers, it'll do the optimal start. So my view at the moment is that we're going to have this edge device that does optimization in the building, and then the cloud is still going to do some advanced algorithms, right?
It's going to basically do the modeling, probably feed back some of the set points. I don't think you can run it all in the cloud. I mean, you could, but I think trying to control a building from the cloud is-, I'm not sure that we're there yet in terms of the quality of the connections, right?
So I can see a day where the PID loops and basic control are done at the controller level, right, at the equipment controller level, which is where all controls are done now. And the optimizations, the basic algorithms or the advanced algorithms, are running at the edge on a bit of a, you know, better PC, and then heavy machine learning, model building, and advanced fault detection are done in the cloud. We think that's probably the progression.
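Here's a minimal sketch of what that edge-resident setpoint writing can look like in VOLTTRON. The RPC method names (`request_new_schedule` and `set_point` on the `platform.actuator` agent) follow VOLTTRON's documented actuator interface, but the device topic, point name, and setpoint value are hypothetical; treat this as an illustration of the pattern, not production code.

```python
# Sketch of an edge optimization agent writing a supervisory setpoint through
# VOLTTRON's actuator agent; the BAS keeps running its own PID loops.
from datetime import datetime, timedelta
from volttron.platform.vip.agent import Agent

class EdgeOptimizer(Agent):
    def write_chw_setpoint(self, new_setpoint=44.0):
        device = "campus/building1/chiller_plant"  # hypothetical device topic
        start = datetime.now()
        # Reserve the device for a short window so competing writes don't collide.
        request = [[device,
                    start.strftime("%Y-%m-%d %H:%M:%S"),
                    (start + timedelta(minutes=5)).strftime("%Y-%m-%d %H:%M:%S")]]
        self.vip.rpc.call("platform.actuator", "request_new_schedule",
                          self.core.identity, "chw-reset", "LOW",
                          request).get(timeout=10)
        # Write the optimized chilled water setpoint to the BAS point.
        self.vip.rpc.call("platform.actuator", "set_point", self.core.identity,
                          device + "/chilled_water_setpoint",
                          new_setpoint).get(timeout=10)
```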
[00:44:15]
If you look at the adoption curves for EIS, you know, if you look at these acronyms, right? You've got BAS, which has been around, you know, 35 years. You've got EIS, which is pretty well developed as well. FDD, I think we're on the mass adoption curve there. And the ASO piece is sort of the last piece. Well, I won't say the last piece, but it's the next piece. And we're at the very early stages there, and there's a lot of room. So we're pretty excited about that.
And then you merge in this whole DERMS concept, right? Distributed energy resource management. Because now you've got solar and probably batteries coming, and active demand grid connectivity. So now you need that component. So that's why you need this edge device that can sort of manage all of that as well. We see ASO and DERMS as sort of the next wave of technology to hit buildings.
[00:51:38]
The problem with it is that training is hard. You know, most of our industry in the past has done some sort of in-place training where they sit down with the operator or operators and they show them how to use it one time, maybe twice, right? And then they leave.
And that's typically not going to stick. It's like anything: if you sit down in a 40-hour training class for a week, you learn all this and jam it in your head, and then you don't use every piece of it, and, you know, within the next two or three weeks, 80% of it is sort of gone.
So to me, the training has to be online, on-demand, so they can go back to it, right? They can look at it again. They can look at it when they need it, or while they're actually doing it. It's just a different way of actually training. There's a term for it; "embedded training," I think, is what sticks in my head. But we really think that's the only way that you're going to be able to get operators up to speed on this.
And again, you've got to get them engaged. It's not even just training. If they're part of the process the whole time, I think they feel better about it, right? As opposed to, you know, some contractor coming in, implementing something, and then, you know, doing some basic training for them and turning it over to them. I think that's just not an ideal approach. So given the fact that a lot of this stuff gets sold above the operator, right, it gets sold to somebody that's, you know, the operator's boss, they really need to engage the operators early on. We even tell our clients who we're selling this to, you know, we really like to sit down with the operators, get their opinion, get their feedback. Because first of all, some of them really know the buildings well, and you just want their involvement in it. So I think that just makes them feel better about the process.
[00:10:56]
Part of what I see happening with operators is a transition from actually operating to troubleshooting and repair. Because to me, in a perfect world, when you get the automation right, you do have an automated, self-driving building, and so you don't really need an operator. You need an operator maybe to change schedules and change set points, but the key is going to be keeping the sensors, the actuators, and the mechanical equipment working. So it really goes to troubleshooting and repair of those, so that you get it back to being automatic.
And again, that is just training, right? I mean, let's face it, with fault detection, even fault detection and diagnostics, the diagnostics only works to a point, right? It can't see everything. You still need somebody, oftentimes, to go out with a meter and some tools and look at it, right, and see what the problem is. That's gotta be part of the training, because getting that building back operational again, or back under automatic control, requires that those actuators and sensors all work.
What did you think about these highlights? Let us know in the comments.
Note: transcript was created using an imperfect machine learning tool and lightly edited by a human (so you can get the gist). Please forgive errors!
James Dice: [00:00:00] Hello, friends. Welcome to Nexus, a smart buildings technology podcast for smart humans. I'm your host, James Dice. If we haven't met before, I write a weekly newsletter on the same topic. It's also called Nexus. Each week I share what I've learned, my opinions, and what I'm excited about in the quickly evolving world of intelligent buildings. Readers have called Nexus the best way to stay up to date on the future of this industry without all the marketing fluff. You can check it out and subscribe at nexus.substack.com or click the link in the show notes.
Since starting the Nexus newsletter, many of you have reached out to me wanting to talk shop, and we have. After a few weeks of those wonderful conversations, I realized I needed to record and share them with our growing community. So here we are. The Nexus podcast is born. This is our chance to explore and learn with the brightest in our industry together.
One more quick note before we get to this week's episode. I'm a researcher at the National Renewable Energy Laboratory, otherwise known as NREL. All opinions expressed on this podcast belong solely to me or the guest. No resources from NREL are used to support Nexus, and NREL does not endorse or support any aspect of Nexus.
Alright. Episode 9 is a conversation with Terry Herr, President of Intellimation, a building controls and analytics technology and service provider. Terry is one of the originals in the world of analytics, and I was excited about picking his brain for the first time. Our conversation covers the past, present, and future of analytics, and one of the hot items for the future is advanced supervisory control, which Terry calls optimization. We do a deep dive on that and much, much more.
This episode of the podcast is directly funded by listeners like you who have joined the Nexus Pro membership community. You can find info on how to join and support the podcast at nexus.substack.com. You'll also find the show notes, which have links to Intellimation's website and Terry's LinkedIn page.
Without further ado, please enjoy Nexus Podcast Episode 9.
Hello, Terry. Welcome to the show.
Terry Herr: [00:02:11] Thanks. Glad to be here.
James Dice: [00:02:13] Yeah, why don't you give us a little background on yourself and on Intellimation, your company.
Terry Herr: [00:02:19] Right. Well, I've been doing this a long time. I actually started my career as an electrician, right out of high school. And I really cut my teeth on controls working for a contractor who was doing wiring for companies like Honeywell and Johnson Controls. So I did a bunch of that. Then I worked for several years at Three Mile Island, in their startup and test division, and from a controls perspective, it doesn't get much more complicated than that.
And then, in my fourth year of apprenticeship, I decided I really didn't want to do this my entire career. So I started taking college courses at a local college, and they didn't have an engineering degree program, so I ended up with a degree in physics. I really wanted an electrical engineering one, but again, they didn't have it. I worked on it off and on and finished my degree, and pretty much out of college founded a company called Knights Electric, which was actually the first company. And we focused on doing control wiring; Siemens was our first client actually. So Siemens, Johnson Controls, Honeywell.
This is back in the early nineties, and it was great timing to do control wiring because everything was moving from pneumatics to DDC. And much of those large branch offices had their own pneumatics guys, but they didn't have their own electricians. So yeah, it was great timing. We grew that pretty quickly, all in sort of south central PA, maybe southeastern PA.
And I think it was around 1995. Most of our work, again, was for the branch offices, but I was doing some control wiring for a very small systems integrator who was doing controls for that company out of Texas, CSI, one of the early controls companies bought by Schneider. And I thought, wow. He was like a two-man operation, so I got the idea: if he can do turnkey controls, then we can. And we started looking for a product.
This is around 95, so it was right around the era that Johnson Controls and Honeywell were getting pressure from the independent controls systems integrators, plus smaller products, right, smaller manufacturers. And so they were looking to not lose market share. So both Honeywell and Johnson Controls started their independent distribution around that timeframe. And Johnson had a program called ABCS, authorized building control specialist. So in 95, we signed up with Johnson Controls to do turnkey controls.
Again, a fairly good move for us, because we were new to doing it, and having that brand name was really helpful. And that was really the founding of Intellimation. I kept the old name, and there were two separate companies: one doing control wiring for Johnson, Siemens, Honeywell, and the other one doing turnkey controls with Johnson Controls.
That lasted for a few years until the Siemens and the Honeywells, you know, it's hard to have a company that competes with them, so they basically stopped using us for wiring, and we ended up really specializing in doing turnkey controls. So we were doing Johnson Controls through maybe the early two thousands, when the open protocols were starting to really gain ground, right? LON and BACnet. We were doing a lot of military work, and LON was the preferred open protocol for the military bases.
So Johnson, and all the big guys, the Honeywells, Johnsons, Siemens, they were not too fond of open protocols, right? They owned the market at that point, and open protocols were going to hurt them. So they weren't very fast to embrace that. We started looking for other products. We repped, I think, Circon for a while; that's a LON product. Then Distech, again, another LON product. Eventually, we dropped the Johnson product line because, again, they were just slow to open systems in general.
And we had Distech, and then we picked up Delta Controls as a BACnet product. Because back in that era, LON and BACnet had what they would call the protocol war going on, right? In fact, I tell people now that if I'd had to make a bet back in, you know, 2005 to 2008, I would have bet that LON was going to win the protocol wars, but that didn't happen.
James Dice: [00:06:36] Really? Hmm.
Terry Herr: [00:06:36] Yeah. We thought that in the early days it was a better, more interoperable protocol. More complicated, but anyhow, that didn't happen. They pretty much fell off the rails, and BACnet came on strong, and pretty much everything's BACnet today, so.
James Dice: [00:06:51] Wow. That's a fascinating history. All of the, up until this entire time you've been talking is all before I graduated from college. So this is all, this is fascinating.
Terry Herr: [00:07:00] I saw that. I looked at your history some too, and I was like, you're one of the young guns, I guess, in the space. Right?
James Dice: [00:07:07] Yeah, definitely. Well, okay, so BACnet kind of took over, and then what was the rest of the history?
Terry Herr: [00:07:13] So we've been a Delta Controls rep now for some time. And again, I would say 70% of our business was straight sticks, you know, systems integration, putting controls in new and existing buildings. We were doing probably 30% for ESCOs. So, you know, the ESCO market in Pennsylvania has been strong in the schools, so we were working for, you know, Ameresco, NORESCO, doing the controls portions of those projects. And right around 2005, and this was my foray into fault detection analytics, I actually saw a demo of Cimetrics. There were just a couple of companies back then; if you look at that era, to me, 2003 to 2005 was really the start of when FDD got going.
There was some R&D, you know, lots of R&D out of NIST. NIST did some very early work. PNNL, you know, Mike Brambley and Srinivas, were doing some research on fault detection. And Cimetrics, well, there were really three sort of early products, in my view. And I started tracking them in 2005, but the Cimetrics demo was a turning point for me.
Being a controls guy and seeing, you know, what fault detection can do, I was convinced at that point that this is, you know, this is the future. But you know, it was very expensive. In 2005, fault detection was expensive, and honestly, I didn't have a client that could afford Cimetrics back then. They were mostly focused on very large central plants or very large facilities. So-
James Dice: [00:08:46] What made you think it was the future?
Terry Herr: [00:08:48] Well, I could see that every BAS could benefit dramatically from having fault detection running, right? To me, it was alarming on steroids. You know, every system has problems, faults, and so having that just seemed to me, you know, an excellent add-on. So I was pretty enamored with it. But again, I didn't really have anybody who could afford it. The products were really young then, so I really didn't do much about it initially, just sort of watched the market. I think I started a spreadsheet back then tracking the products in the space, and there really were only three to begin with, at least that I was aware of: Cimetrics, PACRAT, and Interval Data Systems. Well, Cimetrics is still around. In fact, you know, Jim Lee there, to me, is one of the grandfathers of the space, and, you know, he's somebody you should get on the podcast next.
James Dice: [00:09:40] Cool.
Terry Herr: [00:09:40] And then I guess we really didn't, like I said, I didn't do anything about it for, awhile until we started looking at products. I think one of the early products that we looked at was KGS. I actually had one of the original partners there who's actually not with them anymore, that traveled down to Philadelphia and gave us a demo. That was probably 2010, maybe 2011, probably the early days for KGS.
There was also another product called SiEnergy, out of California, one of the early products, and this was probably 2012, maybe 2013, we actually signed up with them.
James Dice: [00:10:15] Oh you did? Okay.
Terry Herr: [00:10:16] Yeah. Never did a project, because they-, I don't actually know what happened with them. They got bought and sold, and eventually just flamed out. Actually, I think there's a product called Flywheel BI, I don't know if you've seen that one, that's the leftover from SiEnergy. The company's changed a bit. They seem like they do more CMMS now than fault detection, but they're still out there.
James Dice: [00:10:39] Interesting. Alright. Cool. So you guys, I mean, the way I understand how you've approached analytics is you've-, you said you signed up for them, but you've always been sort of independent and repping many different analytics companies, right? Is that how you've approached it?
Terry Herr: [00:10:55] Yeah. And even in the building automation world, we always wanted to be as vendor neutral as one can be in that space. One of the aspects of the building automation world I don't like is that the distribution of products is somewhat controlled. You can't just buy and use whatever you want; you have to sign up, and you have to be an authorized distributor. And so we always try to be, you know, client-focused, you know, solution-focused and not product-focused. Because products evolve, right? That means, you know, what might be the best today isn't going to be the best 5 or 10 years from now.
So even in the BAS world, I mean, we still do some building automation, and we still use Delta mostly, but sometimes it's not the right fit, you know what I mean? So we'll use other products.
You know, my transition, just to date this, was about 2014, when I felt the products had gotten better and the pricing for fault detection had come down enough that it was nearly going to be mainstream. So that was the year we really transitioned entirely to focus on what I'll call energy retrofit work, leveraging the new analytics platforms in lieu of doing construction or renovations. We still do a little bit of that, but it's very little, whereas, you know, prior to this, that was our bread and butter work.
James Dice: [00:12:15] Okay.
Terry Herr: [00:12:16] Yeah, so now in 2014, we started doing that type of work, energy retrofit work. And one of the things that we realized very early on is that the data acquisition piece is troublesome, right? It's the first step, and it's not an easy step. And we wanted to find a neutral, we'll call it a trending appliance, right? Because all of these products leverage trended data, and if you look at the trend capabilities of a typical BAS, they're weak, right? And they vary dramatically.
If you walk into a typical building today on the BAS, it's all over the place. Sometimes they're trending 5% of the points, sometimes they're trending 85% of points, but most of the time it's fairly minor. And many of them couldn't trend at the interval that you need to do good fault detection. And so we really looked for some way that we could get data out of buildings easily. And I think actually we landed on, I don't know if you remember, Building Robotics, which is now Comfy, but in the early days they were Building Robotics. They had a product called Trender. I don't know if you, if you recall-
James Dice: [00:13:22] I used it once. Yep. I did the same exact thing as you. What year would that have been? For us it was probably 2015. I remember talking to them and starting off with that Trender box.
Terry Herr: [00:13:34] Yeah. Yeah. I think I can take credit for that product to some extent, because I had landed on their sMAP protocol from, I think, reading one of their research articles from when they were at Berkeley. And when I read it, I'm like, well, that protocol, you know, we could use that. And so I reached out to those guys because they both came out of Berkeley. One of them is no longer with Comfy. The two founders, they're both Berkeley PhDs; I can't think of their names now. But you know, I reached out and said, look, we think there's a market for a trending device based on sMAP, and we'd like to try it. I think they sent a free box and we tested it, and, you know, it worked.
So they decided they were going to build a product, and they called it Trender. And I'm sure we were their first, you know, to buy it. In fact, for most clients, they would sell the box, and, you know, it was a SaaS model. For us, we actually bought the platform, so we ran it ourselves, and we bought the boxes from them, but we ran our own sMAP server.
James Dice: [00:14:38] Ah, okay. Cool.
Terry Herr: [00:14:41] And that worked, I mean, it, it worked fine. I can tell you there's still one-, well, I know one of the clients that we did a lot of data acquisition for in the early days, we put one at the Empire State Building, and it's still there. It's still there and still collecting data.
James Dice: [00:14:58] Wow, that's really cool.
Terry Herr: [00:14:58] So, so if you used that, if you recall, Trender, they basically, I think the venture capitalist for Comfy squashed Trender at some point in time, and that was our, our transition to VOLTTRON. We were looking for another solution to replace Trender. And I had known about VOLTTRON, just from again, following the research and, yeah, so that was sort of our, our transition.
James Dice: [00:15:23] Got it. And so, you were no longer independent from the standpoint of your business model, but now you're independent from-, kind of the open data layer was now there so that you could then use any analytics platform you wanted to, using VOLTTRON. Is that why you guys kind of headed that direction?
Terry Herr: [00:15:40] So yeah. Even with Trender, right? It was a neutral data acquisition layer, and VOLTTRON sort of was even better in that it-, Trender had to use their own database and VOLTTRON was built to be sort of modular. So I think even like today, it has connectors for five or six different databases. When we first started using it, we were using it with Mongo, then we transitioned to Crate. So it's the same VOLTTRON box, but you can use multiple different databases. So we transitioned to Crate, and now we're about to transition actually to a TimescaleDB, same VOLTTRON trending device, you know, multiple databases. We even have some at the moment that are pushing data to two different ones. So we're pretty excited about the potential of VOLTTRON in general, and certainly, you know, a neutral data acquisition layer. We think that's a good move for the industry in general.
And I know one of your LinkedIn threads sort of gets into that topic.
James Dice: [00:16:44] Yeah. Yeah. I was just going to say, yeah, for those listeners that haven't heard of VOLTTRON, it's an open source project out of the Pacific Northwest National Lab. And anyone can download it on GitHub and use it on their projects. And it sounds like what you guys do is it's some sort of gateway device that's on site, and then you also have another instance in the cloud that they're then linked to each other so that then you can push the data to any cloud database.
Is that, is that how I understand it?
Terry Herr: [00:17:13] Yes. We always have an edge piece of hardware. We use an industrial Intel NUC, but it'll run on almost anything. You know, consider this a little plug here for VOLTTRON, but we like the way it's built in a modular fashion. It has multiple drivers, right? It's got a BACnet driver, of course (it uses BACpypes), and it's got an oBIX driver that gets us the Niagara platform and anything that's BACnet. That covers a lot of ground. You know, it runs on Ubuntu. It's written in Python, which is a very popular language now.
It uses a message bus, you know, a pub/sub message bus, RabbitMQ, to communicate. It's got built-in encryption, so we consider it to be a good platform. Presently, you know, most of our application for it is doing passive trending, but it does have what they call an actuator agent. So it can write, and we see that as really future looking.
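For a sense of what that modular, message-bus design looks like in practice, here's a minimal sketch of a VOLTTRON agent subscribing to the data the platform driver publishes. The decorator and callback signature follow VOLTTRON's agent API, but the topic path and point names are hypothetical.

```python
# Minimal VOLTTRON agent listening to driver scrapes on the pub/sub bus.
from volttron.platform.vip.agent import Agent, PubSub

class TrendListener(Agent):
    # The platform driver publishes each device's full scrape on a
    # "devices/.../all" topic as [values, metadata].
    @PubSub.subscribe("pubsub", "devices/campus/building1/ahu1/all")
    def on_device_data(self, peer, sender, bus, topic, headers, message):
        values, meta = message[0], message[1]
        # A historian agent would persist this to Crate/Timescale; we just log it.
        print(topic, values.get("DischargeAirTemp"), meta.get("DischargeAirTemp"))
```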
We have done a few pilots with an agent, or an application, that PNNL built called Intelligent Load Control, where it does, in fact, write back. So we started piloting that two summers ago. Have you heard of that application at all?
James Dice: [00:18:30] Yeah, yeah, I've seen it. I've been tracking VOLTTRON, it sounds like, throughout most of its history, but I've never used it. And I've been tracking it lately for this grid-interactive, efficient building type of strategy. It just seems like it lends itself really well to that, certainly in exactly the way you just described it.
And I wrote about it in the newsletter a couple of weeks back, and I'll definitely link to that for everyone in the show notes so they can catch up on this. But so you guys have done some pilots, testing that out, it sounds like?
Terry Herr: [00:19:04] We did, in three buildings, two summers ago, and we hope to do some more pilots this summer; that was the game plan working with PNNL. We had two clients that were willing to let us pilot it on three buildings each, and actually not just ILC, but another application that they call AIRCx, which stands for Automated Indication of Retuning Measures. And we're pretty excited about that application as well, because that gets into this topic of ASO, right? What DOE is calling ASO, automatic system optimization. You know, fault detection is really great, but optimization, we think, is the other half of the energy savings equation. And so you have to add that into it. And AIRCx is an application that will allow us to do that.
James Dice: [00:19:54] Yeah, I hadn't heard of that before. That's interesting, AIRCx. And yeah PNNL is certainly known for these types of retuning strategies, so I'll definitely check that out.
But let's stay on the independent data layer real quick. So, let's just kind of close that out, because it was one of the things that I wanted to talk to you about. So with you guys installing that and then having many different options of where you can then plug analytics into it, one of the things that I wanted to ask you about is how you tag and model that data, given that all of the different analytics platforms that you might plug into it have their own data models.
So how do you think about that?
Terry Herr: [00:20:35] Well, you know, I guess our view on the value of the middleware layer is that it's sort of akin to what BACnet is versus a proprietary comm bus. If you have a standard data path, then basically you don't have vendor lock-in. And our industry has a bad reputation for what I call the lock-them-and-loot-them business strategy, where, you know, once the vendor gets a foot in the door, they're very hard to get out. Now, what makes that work is this tagging standard. Prior to a tagging standard, in fact, all of the FDD products that we've used really required a proprietary mapping every time, right?
You had to do a mapping process, and they had their own data model. And we're in the transition right now between that and a standard data model. I was hoping there would be one; you know, probably almost a year and a half ago now, Brick and Haystack and ASHRAE put out a press release that indicated they were going to work together to have one standard. Prior to that, it seemed like we were going to have maybe three, and that was not going to be good for the industry. I don't know that I've seen them work together a lot, but I've certainly seen pressure from Brick, and I think the pressure from Brick is probably what prompted Haystack 4.0. So I don't think any of them are perfect, but we think that a standard is ideal, and it's what makes this plug and play work.
And our approach at the moment is to actually tag with Haystack. Although we're working with somebody right now that thinks we can cross tag, so we can tag a dataset with both Haystack and Brick, because they do overlap a fair amount, right? And then we house those tags in whatever database we're using. Right now we're housing them in Crate, but if we transition to Timescale, we'll house them there. And then we'll have a Haystack API. So the other part of the standard is having a standard way to pull the data out, and right now the Haystack API is the best thing that we've got. In fact, for what we have now, we actually wrote a Haystack API connector. And so the theory is that if we do the data acquisition and have the data in a neutral database, then the clients can use more than one.
They can transition easily. They can switch gears if there's a product, you know, three or five years from now that they want to change to; they don't have to redo the entire data acquisition and tagging layer. That all stays. And it's all open source. The beautiful thing about VOLTTRON and those databases is that, so far, everything is open source. So it didn't really cost anything. There was no SaaS model to it, for the entire neutral data layer.
James Dice: [00:23:29] Got it. Okay. So let me try to summarize kind of your last 10 years of fault detection. You were basically an early adopter of all these products. It sounds like you started with SiEnergy, well, you signed up as a partner but didn't quite use them. You used KGS, you used others.
And you realized that, hey, there's this part that's still proprietary, and we can basically, as the integrator, come in and install this neutral, open middleware layer, or what I sometimes call an open data layer. You tried out another package, but then you found VOLTTRON, and you've been using VOLTTRON ever since. And that allows you the flexibility to send data to or pull data from any application, whether it be, you know, KGS or SkySpark or whatever. And that provides your clients with the flexibility to try out analytics packages or switch, and it really drops the integration costs as well essentially, right?
Terry Herr: [00:24:31] I would agree with you. It ought to drop it. I mean, one of the things that we do very well is the data acquisition piece. You know, in the early part of our business, we really weren't chasing data acquisition separately. We were just doing it for ourselves, but we realized there was actually a market for that.
So we've done data acquisition for a number of software companies that had software of some sorts but didn't know how to get data out of a BAS. In fact, many of the FDD products aren't very good at getting data out of a BAS. So we think that data acquisition is almost a separate solution.
I mean, we did it in the early days for CopperTree, a company called Cortex Building Intelligence, Ecorithm, another FDD product, all, you know, just doing data acquisition for them. We just did the data acquisition using Trender or VOLTTRON, and, you know, it was their client. So again, that's a tricky part of it. We think that middleware layer does make a lot of sense. And in fact, we had a client very recently where we were able to quote them the data acquisition and then give them three different options, right? In this case, it was SkySpark, KGS, and CopperTree. And we gave them what we thought were the pros and cons of those three products and let them choose. They were a fairly sophisticated client, so they wanted to understand, you know, that they had options and what those options were and what we recommended. But we, you know, we let them make the final choice of what product they wanted to use. And we think that's a pretty good model for larger clients.
James Dice: [00:26:06] Yeah, I mean, definitely this model is obviously different than a lot of firms like yours and a lot of firms that I kind of grew up in, right? Where you have guys on your staff that are very skilled in one platform or another. The one that comes to mind for me is SkySpark. Once you learn that whole ecosystem, you're a little bit biased in a way, because your guys have spent a long time learning it, and they are obviously very skilled at it. And so, that could be a little bit of a bias that works its way in. But this model where you're kind of, like, basically totally independent is pretty novel from my standpoint.
Terry Herr: [00:26:43] I agree with your comment. One of the things about SkySpark is that, from our perspective, I still think it's probably the industry-leading product, but it definitely has a very heavy engineering component, right? The implementation of it is very time consuming, engineering wise. And I consider that to be sort of the opposite of, say, a KGS, where, you know, everything is done. You're not writing any of your own rules. You're not even doing the data acquisition, or you're not doing the tagging piece. Everything is done. So it's a plug and play product.
SkySpark, again, is the industry leader, but I joke to people that it's kind of like half done, right? They built the back end, but there's no rules, right? The user interface is weak. So in the early days of SkySpark, most people overlaid it with something else. Again, people have been successful with it, but it's very different than a KGS.
And I consider CopperTree to be somewhere in the middle, right? There you can write your own rules if you want. They've made it easier though. They've abstracted the rule writing, so it's a little bit easier to use than SkySpark, but to me one of the cool things about their product is they have a rules library that all the partners share. And so you're not reinventing the basic NIST rules, every integrator, you know, over and over and over again, which is, I think, a weak part of SkySpark, where every integrator is rewriting the standard, you know, 80% of the rules that are the same thing for everybody.
James Dice: [00:28:08] Mhm. Cool. So yeah, just to hit that point again: there are a lot of SkySpark vendors that use SkySpark as the data layer, right? This is what I've done in the past; they use it as the backend, whereas you guys are using VOLTTRON, and it opens up some more flexibility. So that's really interesting.
Okay, so just to kind of finalize our independent data layer or middleware layer discussion, you mentioned that the data layer, or the modeling is in transition right now. So you said we're in transition to a standard data model. So what are the limitations currently when you guys are setting up this open middleware layer right now with the standard data models?
Terry Herr: [00:28:51] And what I mean by that is, and this sort of goes to Nick from KGS's point, that we're not at a place where, if one person tags it with one standard, every vendor can automatically consume that information. And part of that is because, you know, the standards aren't perfected yet, but we're certainly getting a lot closer.
And it's funny, you know, Nick tells the story that the first site that they did with Haystack tags, they basically had to start over again, because there was a lot of discrepancy in the way that you would tag with Haystack. There was nothing to force everybody to tag the same way. And so part of that is, you know, the tagging model, and then the training on how to actually tag. Or having a standard tagging tool. If everybody used the same tagging tool, right, then everything would be tagged the same, even within Haystack.
And we're not quite there, but I think we're getting a heck of a lot closer. Haystack 4.0 closes the gap on a bunch of their weaknesses in the past. And there are a number of tagging tools out there now. In fact, we've been sort of hunting for the ideal, what I'll call auto tagging tool for some time and demoing lots of products, because that's been, you know, obviously an obstacle to deploying this type of software. But I think we're at a point where, if Haystack is defined well enough and the method of deploying it is defined well, the software vendors can then create the right connection.
I mean, in the perfect world, they'd be able to basically, with the Haystack protocol, connect to any database that's tagged well, and there would be no onboarding, right? It would automatically work. That's the perfect world. I don't think we're quite there, but I would say we're probably 80% there.
James Dice: [00:30:33] Okay. And these tagging tools, I'm not that familiar with these tools that are popping up in this area. Is this a place where it's like a startup company that's popping up, providing these tagging tools, or is VOLTTRON heading that direction to provide that? I'm assuming what you're talking about is like machine learning algorithms that add the tags automatically based on a point name and, and the data that that point is associated with, that kind of thing. Is that right?
Terry Herr: [00:30:57] That's right. And you know, we've tagged by hand for years, and it's time consuming and painful. And it's because our industry didn't have a point naming standard. If we had one of those years ago, it wouldn't be quite so hard to add tags. But you can imagine, you know, every site you go into, they name things somewhat differently. So, yeah, you know, to me the Holy Grail was some kind of auto tagging. And when I say that, I mean some machine learning that uses all the metadata you can pull from a BAS. So metadata meaning the point name, which is probably always the most important, but you've got the unit of measure, the actual present value and the trended value. You've got typically the description field. You've got the point type field, right? So BACnet gives you a number of properties that, when you scrape it, you know, give you hopefully enough information to decipher what that point is and add the tags accordingly.
The work in this area has been going on for some time. Again, you know, Mike Brambley at PNNL has done some early work on auto tagging. United Technologies Research Center, the Carrier folks, actually funded one of the early tools that we used; it was a project that their research center built, probably four years ago now, with some DOE funds.
So it's open source and we got it. And it was a good start, but it wasn't finished. And then BUENO has one. Pretty much anybody who's an analytics company is going to build a tagging tool, right? Just to make their life easier, but it isn't going to be a standalone one. There's a company called Onboard.io; they are a startup, probably two years old now, I think two guys from KGS and one guy from Opower, and they initially started up specifically to build an auto tagging tool, so I know they have something. We worked with them in the early days.
There's another company that we are testing out right now, which so far we think is probably the best thing we've seen. It's actually a company called Kinetic Buildings. They're out of Philly here, so they're actually a local company, a PhD that came out of Drexel who actually has an entire analytics platform as well, which we're also testing, but we're really more excited about his auto tagging tool. We want to be able to use that sort of independently if we can. So again, I know CopperTree has built, you know, some auto tagging capabilities as well. I don't know where that is at the moment, but almost everybody that has a product has to have something like that. We're just looking for something that is standalone, right? So it's not married to a particular product.
James Dice: [00:33:32] Right, okay. Cool. So I guess I'd like to kind of transition now to supervisory control. So I'm writing this deep dive for Nexus members right now about what I see as kind of the next phase in analytics, right? It seems like you're in agreement with that, so, kind of what are you seeing as far as the types of supervisory control that analytics firms are wanting to add? I know ASO, you mentioned, is one of them. I'm seeing that and some other applications as well. But what are you excited about in that area?
Terry Herr: [00:34:05] Well, we definitely think that, you know, what we call optimization, I think KGS calls opportunities for optimization, right? Their software will actually find those types of things. That's very much what AIRCx does as well. So basically, you know, things like optimal start, which is algorithms that have been around for 30 years but get deployed very infrequently; they should be deployed, you know, in every building. Temperature and pressure resets for VAV air handling units. We see that a little more often, maybe in 15% of the buildings that we walk into where it's applicable, but not in the other 85%. So, you know, that's an optimization. Chiller plant optimization is where, to me, this work has been going on for a long time, right? That's where it really started, in chiller plants, because if you're going to optimize one thing inside a building, if it's got a chiller plant, that's what you want to hit first. So that's fairly developed.
In fact, there are a number of companies that specialize in this, with products like Plant Pro, Tech Works, Optimum Energy. They all basically give you a Niagara JACE, right? With their optimization algorithms on them. You drop that in the building, you map across the data points for the chiller plant, and it takes supervisory control over from the BAS. So we think you can do that both on the chiller plant and on the rest of the building's HVAC equipment, so on the air side as well. And we see that sort of as the next move.
And it's interesting, we saw a demo from the Brainbox AI guys, and you know, that's what they're doing. And in fact, in the demo, I basically determined that, again, it looks like they are lobotomizing the BAS. They're literally taking control, which is exactly what, you know, if you look at a Plant Pro product, it's exactly what that does. It takes over the chiller plant control from the BAS. So in my view, first of all, I don't think you can take that to the cloud entirely. I think that is going to have to stay at the edge. So we've always used a heftier data collection box. So, you know, we're running on an industrial NUC, because we know that, you know, intelligent demand control and optimization layers can be placed there.
A lot of easy optimization is literally writing to set points, which BASs are designed for anyhow. So you can put those algorithms into an edge box, or into the same VOLTTRON box that we have there, and just run them there, and it's just going to write to set points. It's going to write to the, to the condenser water set point. It's going to write to the chilled water set point. The VAV air handlers, it'll do the optimal start. So we think, at least in my view at the moment, that we're going to have this edge device that does optimization in the building, and then the cloud is still going to do some advanced algorithms, right?
It's going to basically do the modeling, probably feed back some of the set points. I don't think you can run it all in the cloud. I mean, you could, but trying to control a building from the cloud, I'm not sure that we're there yet in terms of the quality of the connections, right?
So I can see a day where the PID loops and basic control are done at the controller level, right, at the equipment controller level, where all controls are done now. The optimizations, the basic algorithms or the advanced algorithms, are running at the edge on a bit of a better PC. And then heavy machine learning, model building, advanced fault detection is done in the cloud. We think that's probably the progression.
James Dice: [00:37:44] And the cloud is then just basically pushing down the model to the edge controller, essentially?
Terry Herr: [00:37:51] Well, you could see, like, I'll give you an example of that. So right now, a lot of BASs can do optimal start, but their algorithms are somewhat basic. You know, one that we've used basically learns the recovery rate of the zone. It looks at the last three days' recovery rates and averages them, then uses the schedule and says, okay, I know my recovery rate from the last three days, I know it's gotta be up to temperature at eight o'clock, and the temperature has drifted, you know, eight degrees. So I do some math. But if you had the cloud, where you could look at that over any temperature range, right, from the last year instead of the last three days, and look at the weather forecast, you could have a better, more accurate optimal start time. I mean, how much better? I think the basic one gets you to 80%, and the cloud one gets the other 20.
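[Editor's note: Terry's description maps to a few lines of arithmetic. Here's a minimal sketch of that basic optimal start calculation; the recovery rates, temperatures, and times are hypothetical. The cloud version he describes would replace the three-day average with a model fit over a year of data plus the weather forecast.]

```python
# Basic optimal start: back-calculate a fan start time from the zone's
# learned recovery rate and this morning's temperature drift.
from datetime import datetime, timedelta

def optimal_start(recovery_rates_f_per_hr, zone_temp_f, occupied_setpoint_f,
                  occupied_time):
    """Return the start time needed to hit setpoint by occupancy."""
    # Learned recovery rate: average of the last three days (deg F per hour).
    rate = sum(recovery_rates_f_per_hr) / len(recovery_rates_f_per_hr)
    drift = max(occupied_setpoint_f - zone_temp_f, 0)  # degrees to recover
    warmup = timedelta(hours=drift / rate)
    return occupied_time - warmup

# Zone drifted 8 F overnight; recent mornings recovered about 2.5 F/hr.
start = optimal_start([2.4, 2.6, 2.5], zone_temp_f=62, occupied_setpoint_f=70,
                      occupied_time=datetime(2020, 5, 11, 8, 0))
print(start)  # about 4:48 AM, and it will vary every morning
```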
James Dice: [00:38:44] Yeah, and something I've heard from skeptics of this new sort of supervisory control is, you know, I've heard engineers say, well, if the BAS had all the right supervisory control sequences anyway, you know, the savings would be minimal. And my answer to that has been kind of bullish for this new type of supervisory control, because I haven't seen a lot of BASs that have, you know, optimized sequences programmed.
And I think if you have a solution that's specifically designed for that, it kinda makes up some of that ground. There's just so much opportunity for optimizing BASs right now.
Terry Herr: [00:39:18] Absolutely. In fact, I would tell people we think the fault detection maybe gets 40% and the optimization gets the other 60% of the savings that are possible in a typical building. In fact, our present strategy for optimization is that we will reprogram the base BAS, if we can, if it's a product we're familiar with and can program, and we've gotten self-sufficient in all of the major products, Siemens, Johnson, Niagara. So when we go to optimization presently, we have two options. We'll do it in the base BAS, or we'll add an edge device. We always put a BACnet router controller there anyhow, and we can put the algorithms there. It's a pretty hefty machine, so we'll put those algorithms there.
We think in the future we'll probably run them on the VOLTTRON box. And the only reason we would do that instead of reprogramming the base BAS is that it's modular and it's plug and play. We can put the algorithms in one place. I mean, even these folks that do chiller plant optimization, if they took their algorithms and went to the BAS vendor and said, here, reprogram your base BAS to do all of this, it could probably do it.
But they'd have to do that with a different vendor, different BAS at every chiller plant. Instead, put the algorithm into a single box one time, and now that's plug and play. You can drop it in and go, and that's the reason to not reprogram all these BASs out there.
James Dice: [00:40:42] Yeah, 100%.
Terry Herr: [00:40:44] And I would say that I kind of agree. I'm big on the 80-20 rule. I do think that you get 80% of the savings out of doing 20% of the algorithms, the basic ones. The stuff requiring advanced algorithms or machine learning or AI gets you the other 20. I'm not saying we shouldn't get that other 20, but we have a lot of time, and there's a lot of savings we can get first, like the easy savings, right?
James Dice: [00:41:09] Totally. Yeah. It's a crawl, walk, run thing for me. I've been using the words crawl, walk, run quite a bit lately with some of our clients. Cool.
So what are some of the challenges with this new wave of supervisory controls that you're seeing from the field? One of them that I've heard is that, you know, there's a lot of pride going on. So it could be pride in your existing control sequences, and that might be the design engineer that has the pride there. But then the most proud I've seen is the building operator who takes a lot of, a lot of pride in maybe running his building by hand, and if we're putting the robot in his place, he feels a little threatened and skeptical.
Terry Herr: [00:41:47] Yeah, we're getting into the operator training part and the operator coordination part. And I agree that we have seen operators feel threatened by the automation or this overlay. Some of it's even the fact that other people can see how their building is operating, right? So even just with the fault detection piece, we sometimes see operators, and BAS vendors, a little worried about that. So that's just an obstacle we're going to have to overcome.
And your point about operators sort of manually operating it, we see that a lot. In fact, we think that operator overrides are, I don't know, a reasonable percentage of the problems out there, right? Where even if they knew what they were doing when they overrode it, they forgot about the override, right? And then it's still in place, you know, months later when the season changes. And sometimes the overrides are really in place because they simply didn't understand the sequence of operation, or didn't think it was happening fast enough. And part of it is because the building may not have operated right from the start.
We see a lot. I mean, we retro commission a lot of buildings and we see errors, you know, poor sequences of operation, poor original programming, poor commissioning. So it's not unusual to have a building that's not operating right, even though it was built three years ago and in theory commissioned, yeah.
I tell people typically the sequence of operation, I say typically, so 80% of the time, the sequence of operation is probably one that was cut and pasted from a different job, an old job and then reused. So it wasn't really that well thought out to begin with. And then it gets programmed by a technician sitting on a bucket in the mechanical room, and maybe he gets it right. Maybe he gets it somewhat right. And then, and then it gets commissioned by essentially turning things on, and if nothing blows up, you're done and you're walking out the door. And that's an extreme example, but, you know, it's, there's a lot wrong with the typical BAS implementation in most buildings.
James Dice: [00:43:43] Yeah. I like to say, I like to say every building's different, but every control spec's the same.
Terry Herr: [00:43:48] Yeah. That's a good one. Yeah. Yeah. So I would hope-, in fact, in the early days of doing FDD, I kept telling all the vendors, you know, you really have to do optimization, and they'd go, well, the owner doesn't want us to, right? They want us to be passive, because the BAS is controlling it. And I said, but it's not doing a very good job. I mean, it's just not. So you could certainly do better.
So I think that, you know, that is the next move. If you look at the adoption curves, you know, if you look at these acronyms, right? So you got BAS, which has been around, you know, 35 years. You got the EIS, which is pretty well developed as well. FDD, I think we're on the mass adoption curve there. And the ASO piece is sort of the last piece, and I won't say the last piece, but it's the next piece. And we're at the very early stages there, and there's a lot of room. So we're pretty excited about that.
And then you merge in this whole DERMS concept, right? Distributed energy resource management. Because now you've got solar and probably batteries coming, and active demand grid connectivity. So now you need that component, and that's why you need this edge device that can sort of manage all of that as well. We see ASO and DERMS as sort of the next wave of technology to hit buildings.
James Dice: [00:45:10] Yeah, just like you're saying, I have it in three buckets in the essay that I'm writing right now. It's traditional supervisory control, just done better, right, and more consistently. And then it's ASO, and like the old ASO was just like a trim and respond sequence, and now it's turning into more learning, prediction and optimization, right. And then that third bucket is what I, what the DOE calls GEB or grid-interactive efficient buildings, where you're basically taking all of the systems in and using the same learning prediction optimization to then cycle loads in the right way, according to what the grid needs at the time.
So, cool. Yeah, I think that's, that's exactly how I'm seeing it as well. So, so one of the things that kind of scares me, and this is the kind of the thesis of my essay, is that I'm seeing a lot of these supervisory control solutions, but also these startups that are coming in and saying, we're going to plug this in and you don't need to think about it very much. It's just a box that you plug in and it's super easy to install. And the reason I'm worried about it is because it doesn't speak to the history of our industry, which is where we've been trying to break down these silos and we've been trying to look at things strategically this whole time. In other words, it's not just technology, it's also the processes. It's also the people that are tied to the technology. And one of the things that worries me about that solution is that they're making it sound like it's an LED light bulb and that you just plug it in and you're good to go. Right. Everything's optimized.
Whereas I think what I'm hearing from you and what I've seen from your comments on LinkedIn is that it's FDD that plays a role, it's supervisory control that plays a role, but then we can't forget about the operator and the engagement of the operator and the education of the operator or else the whole thing breaks down.
Is that how you're thinking about it too?
Terry Herr: [00:47:04] Well, first of all, I agree that this, the idea that the software can just solve everything, I think is unrealistic and they're minimizing the complexity of all the wires and sensors and actuators that are in the field. Even the ability to simply write to a point and release it isn't seamless. Right? I mean, it just isn't. We've done it. It's not, it's complicated. Sometimes it doesn't take, you're not writing at the right level or you can't release it correctly.
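[Editor's note: the write-and-release mechanics Terry is describing usually come down to BACnet's 16-slot priority array. The sketch below simulates those semantics in plain Python rather than calling a real BACnet stack; the point name and priority numbers are illustrative.]

```python
# Simulated BACnet commandable point: slot 0 = priority 1 (highest).
class CommandablePoint:
    def __init__(self, relinquish_default):
        self.priority_array = [None] * 16
        self.relinquish_default = relinquish_default

    def write(self, value, priority):
        self.priority_array[priority - 1] = value

    def release(self, priority):
        # "Releasing" is writing NULL at the same priority you wrote at.
        self.priority_array[priority - 1] = None

    @property
    def present_value(self):
        # Highest-priority non-null entry wins; else the relinquish default.
        for value in self.priority_array:
            if value is not None:
                return value
        return self.relinquish_default

chw_setpoint = CommandablePoint(relinquish_default=44.0)
chw_setpoint.write(46.0, priority=10)    # supervisory optimization write
print(chw_setpoint.present_value)        # 46.0
chw_setpoint.write(42.0, priority=8)     # an override at a higher-priority slot
print(chw_setpoint.present_value)        # 42.0 -- our write no longer "takes"
chw_setpoint.release(priority=10)        # releasing our slot changes nothing
chw_setpoint.release(priority=8)         # until the override is released too
print(chw_setpoint.present_value)        # 44.0, back to the relinquish default
```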
So then, if you already have issues with the operators, where they don't understand, you know, a sequence of op that's already running in their BAS, and you start doing more sophisticated stuff, they don't understand that either. I mean, optimal start, I can't tell you how hard it is to communicate that to a typical operator who's used to a schedule that says 6:00 AM, that's exactly when my fan is going to start, at 6:00 AM. When you tell them, no, now set your schedule for when you want it to be up to temperature, he goes, well, when's it going to start? And I said, it'll vary every morning. They don't, they don't like that, right? They want to know it's going to start at this time.
So then you start to talk about, you know, models and machine learning and all that. So we definitely need to bring operators up to speed. Otherwise they can just derail the whole, the whole thing, right? They'll just, I don't want to say sabotage, but they're not going to be on board with it. So I think we really have to bring them on board. There's an important human aspect of this, the training piece. We're giving them all of this cool technology, cool tools that are pretty complicated. So they really need to understand it.
And if you look at the work that PNNL started, right, this retuning training, which they have available online. And then a number of other organizations have taken that and sort of moved it forward. I think BOMA has done work with PNNL's retuning, and I know CUNY, City University of New York, has an entire building performance lab that goes out and trains operators on retuning. And a lot of that training is training them how to actually look at trend logs, right? Look at the data, understand what is going wrong. It is still human-in-the-loop fault detection, but even our engineers who are running fault detection rules, we still like to see it. We still like to go, okay, well, I see I got this fault. Let me look at the multi-trend and sort of validate or verify.
It's, it's really helpful for most humans to simply understand what is going on, right? When you get to AI or some sophisticated machine learning, it's really hard to understand even for engineers, right? I mean, you know, and we're even getting, that's getting worse. So, so training the operator is going to be important.
We actually, I think I mentioned to you, have a project that we got funded, CUNY and Intellimation, to build what I call an online, on-demand training platform for advanced building operations. So it's focused on all of this new technology that we're giving them: how to use it, what it does. And we have a project that's funded over the next three years to sort of build that out. We're using our largest client as sort of the place to try it out. They've got 85 operators, you know what I mean? So I think it will be a good, a good start.
We'll be looking for industry input on some of that. I mean, it's definitely going to be helpful. So, anybody out there who's sort of interested in, you know, contributing to the project, certainly reach out.
James Dice: [00:50:39] Yeah. And I'll, I have a link that you sent on LinkedIn that I can put in the show notes for that.
So am I right in thinking about how, like if we think about this conversation as kind of the past, present, and future of building analytics, it's mostly left out the building operator. Maybe except for the monitoring based commissioning type of solutions, but I think even then, they've really been, you know, mostly implemented by a third party consultant who's the, you know, the commissioning agent that kind of hands the building operator a list of things they need to fix. And I think I'll speak for a lot of those people in that they've been a little frustrated by the lack of fixing after they hand them the list, right?
So how are you thinking about how we move forward, besides these training videos? How can software vendors, and then service providers in this industry, think about including training in their solutions?
Terry Herr: [00:51:37] Well, people do. The problem with it is that training is hard. You know, most of our industry in the past has done some sort of in-place training where they sit down with the operator or operators and they show them how to use it one time, maybe twice, right? And then they leave.
And that's typically not going to stick. It's like anybody, if you sit down at a 40 hour training class for like a week, and you learn all this and you jam it in your head, and then you don't use every piece of that, and then, you know, within the next two or three weeks, you know, 80% of it is sort of gone.
So to me, the training has to be, it has to be online, on-demand, so they can go back to it, right? They can look at it again. They can look at it when they need it or while they're actually doing it. It's just a different way of actually training. There's a term for it, embedded training, I think, that sticks in my head. But we really think that's the only way you're going to be able to get operators up to speed on this.
And again, you've got to get them engaged. It's not even just training. If they're part of the process the whole time, I think they feel better about it, right? As opposed to, you know, some contractor comes in, implements something, and then, you know, does some basic training for them and turns it over to them. I think that's just not, not an ideal approach. So given the fact that a lot of this stuff gets sold above the operator, right, it gets sold by somebody that's, you know, the operator's boss, they really need to engage the operators early on. We even tell our clients who we're selling that to, you know, we really like to sit down with the operators, get their opinion, get their feedback. Because first of all, some of them really know the buildings well, and you just want their involvement in it. So I think that just makes them feel better about the process.
James Dice: [00:53:22] Cool. Okay. So a couple more questions before we kind of wrap up. Where do work orders and the CMMS fit into all of this, from the fault detection standpoint, but also, how does it fit with what you're doing with VOLTTRON? And then obviously the last part of what we talked about was the building operators, and the connection between the fault and the building operator would be some sort of work order.
So how are you thinking about that part of the stack?
Terry Herr: [00:53:52] I do think that asset management and computerized maintenance management is a great fit and an add-on to all this technology, and it's not in enough of them already. So I don't know if that means you have to add that feature and function set into existing products, or, since there's a lot of software in that space already, probably just, you know, add an API connection to it. But there's no question the workflow ought to be: all right, I see a fault or I find a fault, I validate the fault, I build a work order, you know, it gets fixed, and then it gets closed. So there's no question that's an important part of it.
It's something we haven't worked directly with, but it's certainly on my transition list. So after ASO comes, you know, CMMS or asset management; it definitely is a part that has to be added in. Some of the products have a little bit of that already starting, right? They're adding some of those feature sets in, but it's not a full-blown one.
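[Editor's note: as a sketch, the workflow Terry describes (find, validate, work order, fix, close) is simple to express in code; the hard part is the integration. Everything below, including the stand-in CMMS client, is hypothetical.]

```python
# Fault-to-work-order lifecycle: detect -> validate -> work order -> closed.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Status(Enum):
    DETECTED = auto()
    VALIDATED = auto()
    WORK_ORDER_OPEN = auto()
    CLOSED = auto()

class FakeCMMS:
    """Stand-in for a real CMMS API client (the integration Terry describes)."""
    def __init__(self):
        self._next_id = 1000
    def create_work_order(self, equipment, summary):
        self._next_id += 1
        return f"WO-{self._next_id}"
    def close_work_order(self, work_order_id):
        pass  # a real client would update the work order's status remotely

@dataclass
class Fault:
    equipment: str
    rule: str
    status: Status = Status.DETECTED
    work_order_id: Optional[str] = None

def handle_fault(fault, cmms):
    fault.status = Status.VALIDATED  # e.g., an engineer checks the multi-trend
    fault.work_order_id = cmms.create_work_order(fault.equipment, fault.rule)
    fault.status = Status.WORK_ORDER_OPEN
    # ...a technician fixes the issue in the field...
    cmms.close_work_order(fault.work_order_id)
    fault.status = Status.CLOSED
    return fault

print(handle_fault(Fault("AHU-2", "Simultaneous heating and cooling"), FakeCMMS()))
```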
James Dice: [00:54:46] Yeah. CopperTree has something that's really lightweight. Obviously BuildingIQ has it as part of their stack. But if you look at a lot of larger building owners and property managers, they have some sort of CMMS already. So to me, it's something that we need to think about from an integration standpoint.
And what I've seen is that if you look at, there's a whole separate data model problem inside of those platforms, right? So, everyone's calling the air handlers something different depending on what platform it's in. And, all the fields are filled out differently. So that's a, that's a whole nother challenge. And I like how you just phrased it in terms of like, okay, ASO is kind of the next phase, and then maybe the CMMS is the next phase, because I agree it's something that it's not easy to implement yet.
Terry Herr: [00:55:32] Also, part of what I see happening with operators is a transition from actually operating to troubleshooting and repair. Because to me, in a perfect world, when you get the automation right, you do have an automated, self-driving building, and so they don't really need an operator. You need an operator maybe to change schedules and change set points, but the key is going to be keeping the sensors and the actuators and the mechanical equipment working. So it really goes to troubleshooting and repair of that, so that you get it back to being automatic.
And again, that is just training, right? I mean, let's face it, with the fault detection, even fault detection and diagnostics, the diagnostics only work to a point, right? They can't see everything. You still need somebody, oftentimes, to go out with a meter and some tools and look at it, right, and see what the problem is. That's gotta be part of the training, because getting that building back operational again, or back under automatic control, requires that those actuators and sensors all work.
James Dice: [00:56:38] Yeah. And that's where one of my clients likes to just throw the word condition based maintenance in, and I think that's in this next phase as well. It's like how do we automate what can be automated and then let the analytics also help us with the actual physical maintenance of our equipment.
So, cool. Okay, so as we kind of wrap up here, it's, what, May 11th, and we're still in this global pandemic, weird times for our industry. How are you thinking about how these types of technologies can help with kind of reoccupying buildings, but also managing them in the face of an uncontrolled, at this point, virus?
Terry Herr: [00:57:18] Yeah. We have a little bit of exposure to that, but not a lot. We are helping one of our clients shut down buildings but maintain humidity and temperature levels within an expanded range. And of course, we're going to be saddled with then starting them back up at some point, right? And I know ASHRAE has some guidelines with regard to changes in how you might want to operate a building given the pandemic issue, you know, more outside air, which isn't going to help the energy usage much. But we know that there's a bunch of mechanical contractors out there putting, you know, MERV 13 filters in, and ultraviolet lights in air handling units. We see that happening. So it's, it's definitely gonna make things interesting.
We also think that energy management is probably going to get even more important, because, you know, this economy is going to be struggling a little more, right? So I would hope that people are going to be interested more than ever in sort of saving as much money as they can on their energy spend. And I think there's plenty of opportunity to save money in most commercial buildings with an ROI that's under two years. This technology, to me, really should be deployed. I mean, we're probably at a saturation rate of, I don't know, 5% or so, and it probably needs to be 95%.

So I think we'll have, over the next couple of years, you know, a nice growth path.
James Dice: [00:58:43] Cool. Yeah, and I guess to kind of wrap a bow on all of this: we talk about all these different waves, and we talk about a 10-plus-year history, but it really feels to me like the history has been one of early, early adopters, and now we're just getting to the early adopters from the standpoint of, like, the entire commercial buildings fleet. Right?
Cool. Well, this has been awesome. I think it's a good time to cut out, but I definitely want to dive deeper into some of this stuff in a future episode. Thanks so much for your time, and I can't wait to connect over this stuff in the future.
Terry Herr: [00:59:17] Yeah. You're welcome, James. And now keep up the good work on LinkedIn getting those interesting topic threads rolling.
James Dice: [00:59:24] Absolutely. Yep. Yep. Definitely plan on it. All right. Well, have a good day and have a good week.
All right, friends. Thanks for listening to this episode of the Nexus podcast. For more episodes like this and to get the weekly Nexus newsletter, please subscribe at nexus.substack.com. You can find the show notes of this conversation there as well. As always, please reach out on LinkedIn with any thoughts on this episode.
I'd love to hear from you. Have a great day.
What did you think about these highlights? Let us know in the comments.
Note: transcript was created using an imperfect machine learning tool and lightly edited by a human (so you can get the gist). Please forgive errors!
James Dice: [00:00:00] Hello, friends. Welcome to Nexus, a smart buildings technology podcast for smart humans. I'm your host, James Dice. If we haven't met before, I write a weekly newsletter on the same topic. It's also called Nexus. Each week I share what I've learned, my opinions, and what I'm excited about in the quickly evolving world of intelligent buildings. Readers have called Nexus the best way to stay up to date on the future of this industry without all the marketing fluff. You can check it out and subscribe at nexus.substack.com or click the link in the show notes.
Since starting the Nexus newsletter, many of you have reached out to me wanting to talk shop, and we have. After a few weeks of those wonderful conversations, I realized I needed to record and share them with our growing community. So here we are. The Nexus podcast is born. This is our chance to explore and learn with the brightest in our industry together.
One more quick note before we get to this week's episode. I'm a researcher at the National Renewable Energy Laboratory, otherwise known as NREL. All opinions expressed on this podcast belong solely to me or the guest. No resources from NREL are used to support Nexus, and NREL does not endorse or support any aspect of Nexus.
Alright. Episode 9 is a conversation with Terry Herr, President of Intellimation, a building controls and analytics technology and service provider. Terry is one of the originals in the world of analytics, and I was excited about picking his brain for the first time. Our conversation covers the past, present, and future of analytics, and one of the hot items for the future is advanced supervisory control, which Terry calls optimization. We do a deep dive on that and much, much more.
This episode of the podcast is directly funded by listeners like you who have joined the Nexus Pro membership community. You can find info on how to join and support the podcast at nexus.substack.com. You'll also find the show notes, which have links to Intellimation's website and Terry's LinkedIn page.
Without further ado, please enjoy Nexus Podcast Episode 9.
Hello, Terry. Welcome to the show.
Terry Herr: [00:02:11] Thanks. Glad to be here.
James Dice: [00:02:13] Yeah, why don't you give us a little background on yourself and on Intellimation, your company.
Terry Herr: [00:02:19] Right. Well, I've been doing this a long time. I actually started my career as an electrician, right out of high school. And I really cut my teeth on controls working for a contractor who was doing wiring for companies like Honeywell and Johnson Controls. So I did a bunch of that. Then I worked for several years at Three Mile Island, in their startup and test division, and from a controls perspective, it doesn't get much more complicated than that.
And then, in my fourth-year apprenticeship, I decided I really didn't want to do this my entire career. So I started taking college courses at a local college, and they didn't have an engineering degree program, so I ended up with a degree in physics. I really wanted an electrical engineering one, but again, they didn't have it. So I worked on and off, finished my degree, and, pretty much out of college, founded a company called Knights Electric, which was actually the first company. And we focused on doing control wiring; Siemens was our first client, actually. So Siemens, Johnson Controls, Honeywell.
This is back in the early nineties, and it was great timing to do control wiring because everything was moving from pneumatics to DDC. And much of those large branch offices had their own pneumatics guys, but they didn't have their own electricians. So yeah, it was great timing. We grew that pretty quickly, all in sort of south central PA, maybe southeastern PA.
And I think it was around like 95, 1995, I was doing some control wiring for a very small systems integrator. Most of our work, again, was for the branch offices, but we started doing some wiring for a small integrator who was doing controls with-, it's that company out of Texas, CSI, one of the early controls companies, bought by Schneider. And I thought, wow. He was like a two-man operation, so I got the idea: if he can do turnkey controls, then we can. And we started looking for a product.
This is around 95, so it was right around the era that Johnson Controls and Honeywell were getting pressure from the independent controls systems integrators, plus smaller products, right, smaller manufacturers. And so they were looking to not lose market share. So both Honeywell and Johnson Controls started their independent distribution around that timeframe. And Johnson had a program called ABCS, authorized building control specialist. So in 95, we signed up with Johnson Controls to do turnkey controls.
Again, a fairly good, good move for us, because we were new to doing it, but having that brand name was really helpful. And that was really the founding of Intellimation. I kept the old name, and there were two separate companies: one doing control wiring for Johnson, Siemens, Honeywell, and the other one doing turnkey controls with Johnson Controls.
That lasted for a few years, until the Siemens and the Honeywells-, you know, it's hard to sort of have a company that competes with them. So they basically stopped using us for wiring, and we ended up just really specializing in doing turnkey controls. So we were doing controls with Johnson Controls up through maybe the early two thousands, when the open protocols were starting to really gain some ground, right? LON and BACnet. We were doing a lot of military work, and LON was the preferred open protocol for the military bases.
So Johnson, and like all the big guys, the Honeywells, Johnsons, Siemens, they were not too fond of open protocols, right? They, they owned the market at that point, and open protocols were going to hurt them. So they weren't very, very fast to embrace them. We started looking for other products. We repped, I think next was Circon for a while. That's a LON product. Then Distech, again, another LON product. Eventually we dropped the Johnson product line, because again, they were just slow to open systems in general.
And we had Distech, and then we picked up Delta Controls as a BACnet product. Because back in that era, LON and BACnet had what they would call the protocol war going on, right? In fact, I tell people now that if I'd had to make a bet back in, you know, 2005 to 2008, I would have bet that LON was going to win the protocol wars, but that didn't happen.
James Dice: [00:06:36] Really? Hmm.
Terry Herr: [00:06:36] Yeah. We thought that in the early days it was a better, more interoperable protocol. More complicated, but anyhow, that didn't happen. They pretty much fell off the rails, and BACnet came on strong, and pretty much everything's BACnet today, so.
James Dice: [00:06:51] Wow. That's a fascinating history. All of the, up until this entire time you've been talking is all before I graduated from college. So this is all, this is fascinating.
Terry Herr: [00:07:00] I figured that. I looked at your history some too, and I was like, you're one of the young guns, I guess, in the space. Right?
James Dice: [00:07:07] Yeah, definitely. Well, okay, so BACnet kind of took over, and then what was the rest of the history?
Terry Herr: [00:07:13] So we've been doing, we've been a Delta Controls rep now for some time. And again, I would say 70% of our business was straight sticks, you know, systems integration, putting controls in new and existing buildings. We were doing probably 30% for ESCOs. So, you know, the ESCO market in Pennsylvania has been strong in the schools, so we're working for, you know, Ameresco, NORESCO, doing the controls portions of those projects. And right around 2005, and this was my foray into fault detection analytics, I actually saw a demo of Cimetrics. And this was-, there were a couple of companies; if you look at that era, to me, 2003 to 2005 was really the start of when FDD got going.
There was some R&D, you know, lots of R&D out of NIST. NIST did some very early work. PNNL, you know, Mike Brambley and Srinivas were doing some research on fault detection. And Cimetrics, there's really three, in my view, three sort of early products. And I started tracking them in 2005, but the Cimetrics demo was a turning point for me.
Being a controls guy and seeing, you know, what fault detection can do, I was convinced at that point that this is, you know, this is the future. But you know, it was very expensive. In 2005, fault detection was expensive, and honestly, I didn't have a client that could afford Cimetrics back then. They were mostly focused on very large central plants or very large facilities. So-
James Dice: [00:08:46] What made you think it was the future?
Terry Herr: [00:08:48] Well, it was-, I could see that every BAS could benefit dramatically from, from having fault detection running, right? It was like, to me, it was alarming on steroids. You know, every system has problems, faults, and so having that just seemed to me, you know, an excellent add-on. So I was pretty enamored with it. But again, didn't really have anybody who could afford it. The products were really young then, so I really didn't do much about it, at least initially, just sort of watched the market. I think I started a spreadsheet back then on sort of tracking the products in the space, and there really were only three to begin with, at least that I was aware of: Cimetrics, PACRAT, and Interval Data Systems. Well, Cimetrics is still around. In fact, you know, Jim Lee there, to me, is one of the grandfathers of the space, and, you know, he's somebody you should get on the podcast next.
James Dice: [00:09:40] Cool.
Terry Herr: [00:09:40] And then I guess we really didn't, like I said, I didn't do anything about it for a while, until we started looking at products. I think one of the early products that we looked at was KGS. One of the original partners there, who's actually not with them anymore, traveled down to Philadelphia and gave us a demo. That was probably 2010, maybe 2011, probably the early days for KGS.
There was also another product called SiEnergy, out of California, one of the early products, and actually, this was probably 2012, maybe 13, we actually signed up with them.
James Dice: [00:10:15] Oh you did? Okay.
Terry Herr: [00:10:16] Yeah. Never did a project, because they, they-, I don't actually know what happened with them. They got bought and sold, and eventually just flamed out. Actually, I think there's a product called Flywheel BI, I don't know if you've seen that one, that's left over from SiEnergy. The company's changed a bit. They seem like they do more CMMS now than fault detection, but they're still out there.
James Dice: [00:10:39] Interesting. Alright. Cool. So you guys, I mean, the way I understand how you've approached analytics is you've-, you said you signed up for them, but you've always been sort of independent and repping many different analytics companies, right? Is that how you've approached it?
Terry Herr: [00:10:55] Yeah. And even in the building automation world, we always wanted to be as vendor neutral as one can be in that space. The building automation world is, you know, one of the aspects of it I don't like is that the distribution of products is somewhat controlled. You can't just buy and use whatever you want, you have to sign up, and you have to be an authorized distributor. And so we always try to be, you know, client-focused, you know, solution-focused and not product focused. Because products, products evolve, right? That means, you know, what might be the best today isn't going to be the best 5 or 10 years from now.
So even in the BAS world, I mean, we still do some building automation, and we still use Delta mostly, but sometimes it's not the right fit, you know what I mean? So we'll use other products.
You know, my transition, just to get to this: it was about 2014 when I felt the product development and the pricing for fault detection had come to the point, the products had gotten better and the prices had come down enough, that it was nearly mainstream. So that was the year we really transitioned entirely to focus on what I'll call energy retrofit work, leveraging the new analytics platforms in lieu of doing construction or renovations. We still do a little bit of that, but it's very little, whereas prior to this, that was our bread and butter work.
James Dice: [00:12:15] Okay.
Terry Herr: [00:12:16] Yeah, so now in 2014, we started doing that type of work, energy retrofit work. And one of the things that we realized very early on is the data acquisition piece is troublesome, right? It's the first step and it's not an easy step. And we wanted to find a neutral, we'll call it trending appliance, right? Because all of these products leverage trended data, and if you look at the trend capabilities of a typical BAS, they're, they're weak, right? And they vary dramatically.
If you walk into a typical building today on the BAS, it's all over the place. Sometimes they're trending 5% of the points, sometimes they're trending 85% of points, but most of the time it's fairly minor. And many of them couldn't trend at the interval that you need to do good fault detection. And so we really looked for some way that we could get data out of buildings easily. And I think actually we landed on, I don't know if you remember, Building Robotics, which is now Comfy, but in the early days they were Building Robotics. They had a product called Trender. I don't know if you, if you recall-
James Dice: [00:13:22] I used it once. Yep. I did the same exact thing as you. What year would that have been? For us it was probably 2015. I remember talking to them and starting off with that Trender box.
Terry Herr: [00:13:34] Yeah. Yeah. I think I can take credit for that product to some extent, because I had landed on their sMAP protocol from, I think, reading one of their research articles from when they were at Berkeley. And when I read it, I'm like, well, that protocol, you know, we could use that. So I reached out to those guys, because they both came out of Berkeley. One of them is no longer with Comfy. The two founders, they're both Berkeley PhDs, I can't think of their names now. But I reached out and said, look, we think there's a market for a trending device based on sMAP, and we'd like to try it. I think they sent a free box and we tested it, and, you know, it worked.
So they decided they were going to build a product, and they called it Trender. And I'm sure we were their first, you know, to buy it. In fact, for most clients, they would sell the box, and, you know, it was a SaaS model. For us, we actually bought the platform, so we ran it ourselves. We bought the boxes from them, but we ran our own sMAP server.
James Dice: [00:14:38] Ah, okay. Cool.
Terry Herr: [00:14:41] And that worked. I mean, it worked fine. I can tell you, one of the clients that we did a lot of data acquisition for in the early days, we put one at the Empire State Building, and it's still there, still collecting data.
James Dice: [00:14:58] Wow, that's really cool.
Terry Herr: [00:14:58] So if you used that, if you recall Trender, I think the venture capitalists for Comfy squashed Trender at some point in time, and that was our transition to VOLTTRON. We were looking for another solution to replace Trender, and I had known about VOLTTRON just from, again, following the research. Yeah, so that was sort of our transition.
James Dice: [00:15:23] Got it. And so, you were no longer independent from the standpoint of your business model, but now you're independent from-, kind of the open data layer was now there so that you could then use any analytics platform you wanted to, using VOLTTRON. Is that why you guys kind of headed that direction?
Terry Herr: [00:15:40] So yeah. Even with Trender, right? It was a neutral data acquisition layer, and VOLTTRON was even better in that Trender had to use their own database, and VOLTTRON was built to be modular. So I think even today, it has connectors for five or six different databases. When we first started using it, we were using it with Mongo, then we transitioned to Crate. So it's the same VOLTTRON box, but you can use multiple different databases. We transitioned to Crate, and now we're about to transition to TimescaleDB. Same VOLTTRON trending device, you know, multiple databases. We even have some at the moment that are pushing data to two different ones. So we're pretty excited about the potential of VOLTTRON in general, and certainly, you know, a neutral data acquisition layer. We think that's a good move for the industry in general.
And I know one of your LinkedIn threads sort of got into that topic.
James Dice: [00:16:44] Yeah. Yeah. I was just going to say, for those listeners who haven't heard of VOLTTRON, it's an open source project out of the Pacific Northwest National Lab, and anyone can download it on GitHub and use it on their projects. And it sounds like what you guys do is, there's some sort of gateway device that's on site, and then you also have another instance in the cloud, and they're linked to each other so that you can push the data to any cloud database.
Is that, is that how I understand it?
Terry Herr: [00:17:13] Yes. We always have an edge piece of hardware. We use an industrial Intel NUC, but it'll run on almost anything. You know, consider this a little plug here for VOLTTRON, but we like the way it's built in a modular fashion. It has multiple drivers, right? It's got a BACnet driver, of course, which uses BACpypes, and it's got an oBIX driver that gets us the Niagara platform. Between that and anything that's BACnet, that covers a lot of ground. You know, it runs on Ubuntu. It's written in Python, which is a very popular language now.
It uses a message bus, you know, a pubsub message bus, RabbitMQ, to communicate. It's got built-in encryption. So we consider it to be a good platform. Presently, most of our application for it is doing passive trending, but it does have what they call an actuator agent. So it can write, and we see that as really future-looking.
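For readers who want to picture how that modular, pubsub-based trending works, here's a minimal sketch of a VOLTTRON agent that listens for scraped device data on the message bus. The campus/building/device topic and point names are hypothetical; the Agent and PubSub pattern follows VOLTTRON's documented agent framework.

```python
from volttron.platform.vip.agent import Agent, PubSub


class TrendListener(Agent):
    """Logs every scrape the platform driver publishes for one AHU."""

    # The platform driver publishes each scrape to an 'all' topic per device.
    @PubSub.subscribe('pubsub', 'devices/campus1/building1/ahu1/all')
    def on_scrape(self, peer, sender, bus, topic, headers, message):
        # 'all' publishes arrive as [ {point: value}, {point: metadata} ]
        values, meta = message[0], message[1]
        for point, value in values.items():
            print(f"{headers.get('Date')} {topic} {point} = {value}")
```

A historian agent subscribing to the same topics is what lets the identical box push data to Mongo, Crate, or TimescaleDB, as Terry describes above.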
We have done a few pilots with an agent, an application that PNNL built called Intelligent Load Control, where it does, in fact, write back. So we started piloting that two summers ago. Have you heard of that application at all?
James Dice: [00:18:30] Yeah, yeah, I've seen it. I've been tracking VOLTTRON, it sounds like, throughout most of its history, but I've never used it. And I've been tracking it lately for this grid-interactive efficient building type of strategy. It just seems like it lends itself really well to that, exactly the way you just described it.
I wrote about it in the newsletter a couple of weeks back, and I'll definitely link to that for everyone in the show notes so they can catch up on this. But so you guys have done some pilots, testing that out, it sounds like?
Terry Herr: [00:19:04] We did, in three buildings two summers ago, and we hope to do some more pilots this summer; that was the game plan working with PNNL. We had two clients that were willing to let us pilot it on three buildings each. And actually not just ILC, but another application they call AIRCx, which stands for Automated Indication of Retuning Measures. We're pretty excited about that application as well, because it gets into this topic of ASO, right? What DOE is calling ASO, automated system optimization. Fault detection is really great, but optimization, we think, is the other half of the energy savings equation, so you have to add that into it. And AIRCx is an application that will allow us to do that.
James Dice: [00:19:54] Yeah, I hadn't heard of that before. That's interesting, AIRCx. And yeah PNNL is certainly known for these types of retuning strategies, so I'll definitely check that out.
But let's, let's stay on the independent data layer real quick. So, let's just kind of close that out cause it was one of the things that I wanted to talk to you about. So with you guys installing that and then having many different options of where you can then plug analytics into it, one of the things that I wanted to ask you about is how you tag and model that data, given that all of the different analytics platforms that you might plug into it, they all have their own data models.
So how do you think about that?
Terry Herr: [00:20:35] Well, you know, our view on the value of the middleware layer is sort of akin to what BACnet is versus a proprietary comm bus. If you have a standard data path, then basically you don't have vendor lock-in. And our industry has a bad reputation for what I call the lock-them-and-loot-them business strategy, where, you know, once the vendor gets a foot in the door, they're very hard to get out. Now, what makes that work is a tagging standard. Prior to a tagging standard, pretty much all of the FDD products that we've used required a proprietary mapping every time, right?
You had to do a mapping process, and they had their own data model. And we're in the transition right now between that and a standard data model. Probably almost a year and a half ago now, Brick and Haystack and ASHRAE put out a press release that indicated they were going to work together on one standard. Prior to that, it seemed like we were going to have maybe three, and that was not going to be good for the industry. I don't know that I've seen them work together a lot, but I've certainly seen pressure from Brick, and I think the pressure from Brick is probably what prompted Haystack 4.0. I don't think any of them are perfect, but we think that a standard is ideal, and it's what makes this plug and play work.
And our approach at the moment is to actually tag with Haystack, although we're working with somebody right now who thinks we can cross-tag, so we can tag a dataset with both Haystack and Brick, because they do overlap a fair amount, right? And then we house those tags in whatever database we're using. Right now we're housing them in Crate, but if we transition to Timescale, we'll house them there. And then we'll have a Haystack API. The other part of the standard is having a standard way to pull the data out, and right now the Haystack API is the best thing that we've got. For what we have now, we actually wrote a Haystack API connector. And so the theory is that if we do the data acquisition and have the data in a neutral database, then the clients can use more than one.
They can transition easily. They can switch gears if there's a product three or five years from now that they want to change to; they don't have to redo the entire data acquisition and tagging layer. That all stays. And it's all open source. The beautiful thing about VOLTTRON and those databases is that, so far, everything is open source. So it didn't really cost anything; there was no SaaS model to the entire neutral data layer.
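To make the cross-tagging idea concrete, here's roughly what one point record might look like carrying both vocabularies. The identifiers and the exact Brick class are illustrative assumptions, not Intellimation's actual schema; real Haystack records are typically exchanged as Zinc or JSON over the Haystack API.

```python
# Hypothetical record for one discharge air temperature sensor,
# carrying Haystack marker tags plus a Brick class annotation.
dat_point = {
    "id": "p:ahu1-dat",
    "dis": "AHU-1 Discharge Air Temp",
    # Haystack marker tags (True means the marker is present)
    "point": True,
    "sensor": True,
    "discharge": True,
    "air": True,
    "temp": True,
    "unit": "°F",
    "equipRef": "e:ahu1",
    # Cross-tag: a roughly equivalent Brick class for the same point
    "brickClass": "brick:Supply_Air_Temperature_Sensor",
}
```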
James Dice: [00:23:29] Got it. Okay. So if I try to summarize your last 10 years of fault detection, let me try: you started out with all these early products. It sounds like you signed up as a partner with SiEnergy but didn't quite use them. You used KGS, you used others.
And you realized that, hey, there's this part that's still proprietary, and we can, as the integrator, come in and install this neutral, open middleware layer, or what I sometimes call an open data layer. You tried out another package, but then you found VOLTTRON, and you've been using VOLTTRON ever since. And that allows you the flexibility to send data to or pull data from any application, whether it be, you know, KGS or SkySpark or whatever. That provides your clients with the flexibility to try out analytics packages or switch, and it really drops the integration costs as well, essentially, right?
Terry Herr: [00:24:31] I would agree with you. It ought to drop it. I mean, one of the things that we do very well is the data acquisition piece. You know, in the early part of our business, we really weren't chasing data acquisition separately. We were just doing it for ourselves, but we realized there was actually a market for that.
So we've done data acquisition for a number of software companies that had software of some sort but didn't know how to get data out of a BAS. In fact, many of the FDD products aren't very good at getting data out of a BAS. So we think that data acquisition is almost a separate solution.
I mean, we did it in the early days for CopperTree, a company called Cortex Building Intelligence, and Ecorithm, another FDD product, all, you know, just doing data acquisition for them. We just did the data acquisition using Trender or VOLTTRON, and, you know, it was their client. So again, that's a tricky part of it. We think that middleware layer does make a lot of sense. And in fact, we had a client very recently where we were able to quote them the data acquisition and then give them three different options, right? In this case, it was SkySpark, KGS, and CopperTree. And we gave them what we thought were the pros and cons of those three products and let them choose. They were a fairly sophisticated client, so they wanted to understand, you know, that they had options, what those options were, and what we recommended. But we let them make the final choice of what product they wanted to use. And we think that's a pretty good model for larger clients.
James Dice: [00:26:06] Yeah, I mean, definitely this model is obviously different than a lot of firms like yours, and a lot of firms that I kind of grew up in, right? Where you have guys on your staff who are very skilled in one platform or another. The one that comes to mind for me is SkySpark. Once you learn that whole ecosystem, you're a little bit biased in a way, because your guys have spent a long time learning it, and they are obviously very skilled at it. So that could be a little bit of a bias that works its way in. But this model where you're basically totally independent is pretty novel from my standpoint.
Terry Herr: [00:26:43] I agree with your comment that, one of the things about SkySpark is that, from our perspective, I still think it's probably the industry leading product, but it definitely has a very heavy engineering component, right? The implementation of it is, is very time consuming, engineering wise. And I consider that to be sort of the opposite of say a KGS, where you know, everything is done. You're not writing any of your own rules. You're not even doing the data acquisition, or you're not doing the tagging piece. Everything is done. So it's a, it's a plug and play product.
SkySpark, again, is the industry leader, but I joke to people that it's kind of like half done, right? They built the back end, but there are no rules, right? The user interface is weak. So in the early days of SkySpark, most people overlaid it with something else. Again, they've been successful with it, but it's very different than a KGS.
And I consider CopperTree to be somewhere in the middle, right? There you can write your own rules if you want. They've made it easier though. They've abstracted the rule writing, so it's a little bit easier to use than SkySpark, but to me one of the cool things about their product is they have a rules library that all the partners share. And so you're not reinventing the basic NIST rules, every integrator, you know, over and over and over again, which is, I think, a weak part of SkySpark, where every integrator is rewriting the standard, you know, 80% of the rules that are the same thing for everybody.
James Dice: [00:28:08] Mhm. Cool. So yeah, just to kind of hit that point again: there are a lot of SkySpark vendors that use SkySpark as the data layer, right? And this is what I've done in the past: they use that as the backend, whereas you guys are using VOLTTRON, and it opens up some more flexibility. So that's really interesting.
Okay, so just to kind of finalize our independent data layer or middleware layer discussion, you mentioned that the data layer, or the modeling is in transition right now. So you said we're in transition to a standard data model. So what are the limitations currently when you guys are setting up this open middleware layer right now with the standard data models?
Terry Herr: [00:28:51] What I mean by that, and this is sort of, you know, Nick from KGS's point, is that we're not at a place where, if one person tags it with one standard, every vendor can automatically consume that information. Part of that is because, you know, the standards aren't perfected yet, but we're certainly getting a lot closer.
And it's funny, you know, Nick tells the story that the first site they did with Haystack tags, they basically had to start over again, because there was a lot of discrepancy in the way that you would tag with Haystack. There was nothing to force everybody to tag the same way. So part of that is, you know, the tagging model, and then the training on how to actually tag. Or having a standard tagging tool: if everybody used the same tagging tool, right, then everything would be tagged the same, even within Haystack.
And we're not quite there, but I think we're getting a heck of a lot closer. Haystack 4.0 closes the gap on a bunch of the weaknesses of the past. And there are a number of tagging tools out there now. In fact, we've been hunting for the ideal, what I'll call auto-tagging tool for some time and demoing lots of products, because that's been, you know, obviously an obstacle to deploying this type of software. But I think we're at a point where, if Haystack is defined well enough and the method of deploying it is defined well, the software vendors can then create the right connection.
I mean, in the perfect world, they'd be able to use the Haystack protocol to connect to any database that's tagged well, and there would be no onboarding, right? It would automatically work. That's the perfect world. I don't think we're quite there, but I would say we're probably 80% there.
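As a concrete picture of that no-onboarding ideal, here's a short sketch of a client pulling tagged points over a Haystack-style read endpoint. The server URL and the response handling are assumptions for illustration; the read operation and filter syntax come from the Project Haystack HTTP API.

```python
import requests

# Ask a hypothetical Haystack server for every discharge air temp point.
resp = requests.get(
    "https://example-idl.local/api/read",
    params={"filter": "point and discharge and air and temp"},
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()

# Haystack JSON responses come back as a grid holding a list of rows.
for row in resp.json().get("rows", []):
    print(row.get("id"), row.get("dis"))
```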
James Dice: [00:30:33] Okay. And these tagging tools, I'm not that familiar with the tools that are popping up in this area. Is this a place where it's like a startup company popping up to provide these tagging tools, or is VOLTTRON heading that direction to provide that? I'm assuming what you're talking about is machine learning algorithms that add the tags automatically based on a point name and the data that that point is associated with, that kind of thing. Is that right?
Terry Herr: [00:30:57] That's right. And you know, we've tagged by hand for years, and it's time consuming and painful. And it's because our industry didn't have a point naming standard. If we had one of those years ago, it wouldn't be quite so hard to add tags. But you can imagine, you know, every site you go into, they name things somewhat differently. So yeah, to me the Holy Grail was some kind of auto-tagging. And when I say that, I mean some machine learning that uses all the metadata you can pull from a BAS. Metadata meaning the point name, which is probably always the most important, but you've also got the unit of measure, the actual present value and the trended values, typically the description field, and the point type field, right? So BACnet gives you a number of properties that, when you scrape it, hopefully give you enough information to decipher what that point is and add the tags accordingly.
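To show the shape of that idea, here's a toy auto-tagging sketch: a text classifier trained on the BACnet properties Terry lists (name, description, units, object type) to predict a point's tag. The training rows are invented, and a real tool would use thousands of labeled points and richer features, so treat this as a sketch of the technique, not any vendor's product.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each sample concatenates scraped BACnet metadata:
# point name, description, units, and object type.
train = [
    ("AHU1_DAT discharge air temp degF analogInput", "discharge-air-temp"),
    ("AHU2-DA-T disch air temperature degF analogInput", "discharge-air-temp"),
    ("AHU1_SF_S supply fan status None binaryInput", "fan-status"),
    ("RTU3 SFAN STAT run status None binaryInput", "fan-status"),
    ("VAV12_ZNT zone temp degF analogInput", "zone-temp"),
    ("VAV07-SPACE-TEMP room temperature degF analogInput", "zone-temp"),
]
X, y = zip(*train)

# Character n-grams cope with the inconsistent abbreviations
# (DAT, DA-T, DISCH...) that a point naming standard would have avoided.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(X, y)

# Suggest a tag for a point the model has never seen.
print(model.predict(["AHU4 DISCH AIR TMP degF analogInput"]))
```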
The work in this area has been going on for some time. Again, you know, Mike Brambley at PNNL has done some early work on auto-tagging. United Technologies Research Center, the Carrier folks, actually funded one of the early tools that we used: a project that their research center built, probably four years ago now, with some DOE funds.
So it's open source and we got it. And it was a good start, but it wasn't finished. And then BUENO has one. Pretty much anybody who's an analytics company is going to build a tagging tool, right? Just to make their life easier. But it isn't going to be a standalone one. There's a company called Onboard.io; they're a startup, probably two years old now, I think two guys from KGS and one guy from Opower, and they initially started up specifically to build an auto-tagging tool, so I know they have something. We worked with them in the early days.
There's another company that we are testing out right now, which so far we think is probably the best thing we've seen. It's a company called Kinetic Buildings, out of Philly here, so they're actually a local company: a PhD who came out of Drexel, who actually has an entire analytics platform as well, which we're also testing. But we're really more excited about his auto-tagging tool; we want to be able to use that independently if we can. And again, I know CopperTree has built, you know, some auto-tagging capabilities as well. I don't know where it is at the moment, but almost everybody that has a product has to have something like that. We're just looking for something that's standalone, right? So it's not married to a particular product.
James Dice: [00:33:32] Right, okay. Cool. So I guess I'd like to kind of transition now to supervisory control. So I'm writing this deep dive for Nexus members right now about what I see as kind of the next phase in analytics, right? It seems like you're in agreement with that, so, kind of what are you seeing as far as the types of supervisory control that analytics firms are wanting to add? I know ASO, you mentioned, is one of them. I'm seeing that and some other applications as well. But what are you excited about in that area?
Terry Herr: [00:34:05] Well, we definitely think that, you know, what we call optimization, I think KGS calls opportunities for optimization, right? Their software will actually find those types of things. That's very much what AIRCx does as well. So basically, you know, things like optimal start, which is algorithms that have been around for 30 years but get deployed very infrequently; they should be deployed, you know, in every building. Temperature and pressure resets for VAV air handling units: we see that a little more often, maybe in 15% of the buildings that we walk into where it's applicable, but not in the other 85%. So, you know, that's an optimization. Chiller plant optimization is where, to me, this work has been going on for a long time, right? That's where it really started, in chiller plants, because if you're going to optimize one thing inside a building, if it's got a chiller plant, that's what you want to hit first. So that's fairly developed.
In fact, there are a number of companies that specialize in this, with products like Plant Pro, Tech Works, Optimum Energy. They all basically give you a Niagara JACE, right, with their optimization algorithms on it. You drop that in the building, you map across the data points for the chiller plant, and it takes supervisory control over from the BAS. So we think you can do that both on the chiller plant and on the rest of the building's HVAC equipment, so on the air side as well. And we see that as the next move.
And it's interesting, we saw a demo from the Brainbox AI guys, and you know, that's what they're doing. In fact, in the demo, I basically determined that, again, it looks like you are lobotomizing the BAS. They're literally taking control, which is exactly what, you know, a Plant Pro product does: it takes over the chiller plant control from the BAS. So in my view, first of all, I don't think you can take that to the cloud entirely. I think that is going to have to stay at the edge. So we've always used a heftier data collection box. You know, we're running on an industrial NUC because we know that, you know, intelligent demand control and optimization layers can be placed there.
A lot of easy optimization is literally writing to setpoints, which BASs are designed for anyhow. So you can put those algorithms into an edge box, or into the same VOLTTRON box that we have there, and just run them there, and it's just going to write to setpoints. It's going to write to the condenser water setpoint. It's going to write to the chilled water setpoint. For the VAV air handlers, it'll do the optimal start. So at least my view at the moment is that we're going to have this edge device that does optimization in the building, and then the cloud is still going to do some advanced algorithms, right?
It's going to basically do the modeling, probably feed back some of the setpoints. I don't think you can run it all in the cloud. I mean, you could, but trying to control a building from the cloud, I'm not sure that we're there yet in terms of the quality of the connections, right?
So I can see a day where the PID loops and basic control are done at the controller level, right, at the equipment controller level, where all controls are done now. The optimizations, the basic algorithms or the advanced algorithms, are running at the edge on a bit of a, you know, better PC. And then heavy machine learning, model building, and advanced fault detection are done in the cloud. We think that's probably the progression.
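Since so much of that easy optimization boils down to an edge agent writing supervisory setpoints, here's a hedged sketch of what that can look like through VOLTTRON's actuator agent. The topic and the 46-degree value are hypothetical, and a production agent would first reserve the device with the actuator's request_new_schedule call, which is omitted here for brevity.

```python
from volttron.platform.vip.agent import Agent, Core


class ResetWriter(Agent):
    """Writes one supervisory setpoint via the platform actuator agent."""

    @Core.receiver('onstart')
    def onstart(self, sender, **kwargs):
        # Ask the actuator agent (identity 'platform.actuator') to write
        # a new chilled water setpoint on our behalf.
        self.vip.rpc.call(
            'platform.actuator',
            'set_point',
            self.core.identity,  # requester id
            'campus1/building1/chiller_plant/ChilledWaterSetPoint',
            46.0,                # hypothetical new CHW setpoint, degF
        ).get(timeout=10)
```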
James Dice: [00:37:44] And the cloud is then just basically pushing down the model to the edge controller, essentially?
Terry Herr: [00:37:51] Well, I'll give you an example of that. So right now, a lot of BASs can do optimal start, but their algorithms are somewhat basic. You know, one that we've used basically learns the recovery rate of the zone. It looks at the last three days' recovery rates, averages them, and then uses the schedule: okay, now that I know my recovery rate for the last three days, I know it's got to be up to temperature at eight o'clock, and the temperature has drifted, you know, eight degrees, so I do some math. But if you had the cloud, where you could look at that over any temperature range, right, from the last year instead of the last three days, and look at the weather forecast, you could have a better, more accurate optimal start time. How much better? I think the basic one gets you to 80%, and the cloud one gets you the other 20.
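For readers who want the arithmetic spelled out, here's a minimal sketch of that basic optimal-start logic, with invented numbers matching Terry's example of an eight-degree overnight drift and an 8:00 AM occupancy target.

```python
from datetime import datetime, timedelta


def optimal_start_time(recent_rates_degF_per_hr, setpoint_degF,
                       zone_temp_degF, occupied_at):
    """When to start equipment so the zone recovers by occupancy."""
    # Average the last few mornings' observed recovery rates.
    avg_rate = sum(recent_rates_degF_per_hr) / len(recent_rates_degF_per_hr)
    drift = max(setpoint_degF - zone_temp_degF, 0.0)  # degrees to recover
    return occupied_at - timedelta(hours=drift / avg_rate)


start = optimal_start_time(
    recent_rates_degF_per_hr=[2.4, 2.0, 2.2],  # last three days, degF/hr
    setpoint_degF=70.0,
    zone_temp_degF=62.0,                       # drifted 8 degF overnight
    occupied_at=datetime(2020, 5, 11, 8, 0),   # up to temperature at 8:00
)
print(start)  # roughly 4:22 AM for these numbers
```

The cloud version Terry describes would replace the three-day average with a model fit over a year of data plus the weather forecast; the structure of the calculation stays the same.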
James Dice: [00:38:44] Yeah, and something, I've heard from skeptics for this new sort of supervisory control is, you know, I've heard engineers say, well, if the BAS had all the right supervisory control sequences anyway, you know, the savings would be minimal. And my answer to that has been kind of bullish for this new type of supervisory control because I haven't seen a lot of BASs that have, you know, optimized sequences programmed.
And I think if you have a solution that's specifically designed for that, it kinda makes up some of that ground. There's just so much opportunity for optimizing BASs right now.
Terry Herr: [00:39:18] Absolutely. In fact, I tell people we think the fault detection maybe gets 40% and the optimization gets the other 60% of the savings that are possible in a typical building. Our present strategy for optimization is, we will reprogram the root BAS if we can, if it's a product we're familiar with and can program, and we've gotten self-sufficient in all of the major products: Siemens, Johnson, Niagara. So when we go to optimization presently, we have two options. We'll do it in the base BAS, or we'll add an edge device. We always put a BACnet router controller there anyhow, and it's a pretty hefty machine, so we can put the algorithms there.
We think in the future we'll probably run them on the VOLTTRON box. And the only reason that we would do that instead of reprogram the base BAS is that it's modular and it's plug and play. We can put the algorithms in one place. I mean, even these folks that do chiller plant optimization, if they took their algorithms and they went to the BAS vendor and said, here, reprogram your root BAS to do all of this, it could probably do it.
But they'd have to do that with a different vendor, different BAS at every chiller plant. Instead, put the algorithm into a single box one time, and now that's plug and play. You can drop it in and go, and that's the reason to not reprogram all these BASs out there.
James Dice: [00:40:42] Yeah, 100%.
Terry Herr: [00:40:44] And I would say that I kind of agree. I'm big on the 80-20 rule. I do think that you get 80% of the savings out of doing 20% of the basic algorithms. Getting the rest requires advanced algorithms or machine learning or AI, and that gets you the other 20. I'm not saying we shouldn't get that other 20, but we have a lot of time, and there's a lot of savings we can get first, like the easy savings, right?
James Dice: [00:41:09] Totally. Yeah. It's a crawl, walk, run thing for me. I've been using the words crawl, walk, run quite a bit lately with some of our clients. Cool.
So what are some of the challenges with this new wave of supervisory controls that you're seeing from the field? One of them that I've heard is that, you know, there's a lot of pride going on. So it could be pride in your existing control sequences, and that might be the design engineer that has the pride there. But then the most proud I've seen is the building operator who takes a lot of, a lot of pride in maybe running his building by hand, and if we're putting the robot in his place, he feels a little threatened and skeptical.
Terry Herr: [00:41:47] Yeah, we're getting into the operator training part and the operator coordination part. And I agree that we have seen operators feel threatened by the automation or this overlay. Some of it's even the fact that other people can see how their building is operating, right? So even with just the fault detection piece, we sometimes see operators, and BAS vendors, a little worried about that. So that's just an obstacle we're going to have to overcome.
And your point about operators manually operating it, we see that a lot. In fact, we think that operator overrides are, I don't know, a reasonable percentage of the problems out there, right? Where even if they knew what they were doing when they overrode it, they forgot about the override, and then it's still in place, you know, months later when the season changes. And sometimes the overrides are really in place because they simply didn't understand the sequence of operation, or didn't think it was happening fast enough. And part of it is because the building may have not operated right from the start.
We see that a lot. I mean, we retro-commission a lot of buildings, and we see errors, you know, poor sequences of operation, poor original programming, poor commissioning. So it's not unusual to have a building that's not operating right, even though it was built three years ago and, in theory, commissioned, yeah.
I tell people, typically, and I say typically, so 80% of the time, the sequence of operation is probably one that was cut and pasted from a different job, an old job, and then reused. So it wasn't really that well thought out to begin with. And then it gets programmed by a technician sitting on a bucket in the mechanical room, and maybe he gets it right, maybe he gets it somewhat right. And then it gets commissioned by essentially turning things on, and if nothing blows up, you're done and you're walking out the door. That's an extreme example, but, you know, there's a lot wrong with the typical BAS implementation in most buildings.
James Dice: [00:43:43] Yeah. I like to say every building's different, but every control spec's the same.
Terry Herr: [00:43:48] Yeah. That's a good one. Yeah. In fact, in the early days of doing FDD, I kept telling all the vendors, you know, you really have to do optimization, and they'd go, well, the owner doesn't want us to, right? They want us to be passive, because the BAS is controlling it. And I'm like, but why? It's not doing a very good job. I mean, it's just not. So you could certainly do better.
So I think that, you know, that is the next move. If you look at the adoption curves, if you look at these acronyms, right: you've got BAS, which has been around, you know, 35 years. You've got EIS, which is pretty well developed as well. FDD, I think we're on the mass adoption curve there. And the ASO piece is sort of the last piece. Well, I won't say the last piece, but it's the next piece. We're at the very early stages there, and there's a lot of room. So we're pretty excited about that.
And then you merge in this whole DERMS concept, right? Distributed energy resource management. Because now you've got solar and probably batteries coming, and active demand grid connectivity. So now you need that component. That's why you need this edge device that can manage all of that as well. We see ASO and DERMS as sort of the next wave of technology to hit buildings.
James Dice: [00:45:10] Yeah, just like you're saying, I have it in three buckets in the essay that I'm writing right now. It's traditional supervisory control, just done better, right, and more consistently. And then it's ASO, and the old ASO was just, like, a trim and respond sequence, and now it's turning into more learning, prediction, and optimization, right. And then that third bucket is what the DOE calls GEB, or grid-interactive efficient buildings, where you're basically taking all of the systems in and using the same learning, prediction, and optimization to then cycle loads in the right way, according to what the grid needs at the time.
So, cool. Yeah, I think that's, that's exactly how I'm seeing it as well. So, so one of the things that kind of scares me, and this is the kind of the thesis of my essay, is that I'm seeing a lot of these supervisory control solutions, but also these startups that are coming in and saying, we're going to plug this in and you don't need to think about it very much. It's just a box that you plug in and it's super easy to install. And the reason I'm worried about it is because it doesn't speak to the history of our industry, which is where we've been trying to break down these silos and we've been trying to look at things strategically this whole time. In other words, it's not just technology, it's also the processes. It's also the people that are tied to the technology. And one of the things that worries me about that solution is that they're making it sound like it's an LED light bulb and that you just plug it in and you're good to go. Right. Everything's optimized.
Whereas I think what I'm hearing from you and what I've seen from your comments on LinkedIn is that it's FDD that plays a role, it's supervisory control that plays a role, but then we can't forget about the operator and the engagement of the operator and the education of the operator or else the whole thing breaks down.
Is that how you're thinking about it too?
Terry Herr: [00:47:04] Well, first of all, I agree that this idea that the software can just solve everything is unrealistic, and they're minimizing the complexity of all the wires and sensors and actuators that are in the field. Even the ability to simply write to a point and release it isn't seamless, right? I mean, it just isn't. We've done it. It's complicated. Sometimes it doesn't take, or you're not writing at the right level, or you can't release it correctly.
And then, if they already have issues with the operators, where they don't understand, you know, a sequence of operations that's already running in their BAS, and you start doing more sophisticated stuff, they won't understand that either. I mean, optimal start: I can't tell you how hard it is to communicate that to a typical operator who's used to a schedule that says 6:00 AM, that's exactly when my fan is going to start. When you tell him, no, now set your schedule for when you want it to be up to temperature, he goes, and when's it going to start? Well, I said, it'll vary every morning. They don't like that, right? They don't like that. They want to know it's going to start at this time.
And then you start to talk about, you know, models and machine learning and all that. So we definitely need to bring operators up to speed. Otherwise they can just derail the whole thing, right? I don't want to say sabotage, but they're not going to be on board with it. So I think we really have to bring them on board. There's an important human aspect of this, the training piece. We're giving them all of this cool technology, these cool tools, that are pretty complicated. So they really need to understand that.
And look at the work that PNNL started, right, this retuning training, which they have available online. A number of other organizations have taken that and moved it forward. I think BOMA has done work with PNNL's retuning, and I know CUNY, the City University of New York, has an entire building performance lab that goes out and trains operators on retuning. And a lot of that training is training them how to actually look at trend logs, right? Look at the data, understand what is going wrong. It is still human-in-the-loop fault detection, but even our engineers who are running fault detection rules still like to see it. We still like to go, okay, I see I got this fault, let me look at the multi-trend and validate or verify it.
It's really helpful for most humans to simply understand what is going on, right? When you get to AI or some sophisticated machine learning, it's really hard to understand, even for engineers, and that's getting worse. So training the operator is going to be important.
We actually, I think I mentioned this to you, have a project that we got funded, CUNY and Intellimation, to build what I call an online, on-demand training platform for advanced building operations. So it's focused on all of this new technology that we're giving them: how to use it well, what it does. And the project is funded over the next three years to build that out. We're using our largest client as sort of the place to try it out; they've got 85 operators, you know what I mean? So I think it will be a good start.
We'll be looking for industry input on some of that. I mean, it's definitely going to be helpful. So, anybody out there who's sort of interested in, you know, contributing to the project, certainly reach out.
James Dice: [00:50:39] Yeah. And I'll, I have a link that you sent on LinkedIn that I can put in the show notes for that.
So am I right in thinking about how, like if we think about this conversation as kind of the past, present, and future of building analytics, it's mostly left out the building operator. Maybe except for the monitoring based commissioning type of solutions, but I think even then, they've really been, you know, mostly implemented by a third party consultant who's the, you know, the commissioning agent that kind of hands the building operator a list of things they need to fix. And I think I'll speak for a lot of those people in that they've been a little frustrated by the lack of fixing after they hand them the list, right?
So how are you thinking about how we move forward, besides these training videos? How can software vendors and service providers in this industry think about including training in their solutions?
Terry Herr: [00:51:37] Well, people do. The problem with it is that training is hard. You know, most of our industry in the past has done some sort of in-place training where they sit down with the operator or operators and they show them how to use it one time, maybe twice, right? And then they leave.
And that's typically not going to stick. It's like anything: if you sit down in a 40-hour training class for a week, and you learn all this and you jam it in your head, and then you don't use every piece of it, then, you know, within the next two or three weeks, 80% of it is gone.
So to me, the training has to be online, on demand, so they can go back to it, right? They can look at it again. They can look at it when they need it, or while they're actually doing it. It's just a different way of training. There's a term for it; embedded training, I think, is the one that sticks in my head. But we really think that's the only way you're going to be able to get operators up to speed on this.
And again, you've got to get them engaged. It's not even just training. If they're part of the process the whole time, I think they feel better about it, right? As opposed to, you know, some contractor comes in, implements something, does some basic training, and turns it over to them. That's just not an ideal approach. Given the fact that a lot of this stuff gets sold above the operator, right, it gets sold to somebody who's the operator's boss, they really need to engage the operators early on. We even tell our clients who we're selling to, you know, we really like to sit down with the operators, get their opinion, get their feedback. First of all, some of them really know the buildings well, and you just want their involvement in it. I think that just makes them feel better about the process.
James Dice: [00:53:22] Cool. Okay. So a couple more questions before we wrap up. Where do work orders and the CMMS fit into all of this, either from the fault detection standpoint, or with what you're doing with VOLTTRON? And then, obviously, the last part of what we talked about was the building operators, and the connection between the fault and the building operator would be some sort of work order.
So how are you thinking about that part of the stack?
Terry Herr: [00:53:52] I do think that asset management and computerized maintenance management is a great fit and an add-on to all this technology, and it's not in enough of these products already. So I don't know if that means you have to add that feature and function set into existing products, or, since there's a lot of software in that space already, probably just, you know, add an API connection to it. But there's no question the workflow ought to be: all right, I see a fault or I find a fault, I validate the fault, I build a work order, it gets fixed, and then it gets closed. So there's no question that's an important part of it.
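As a sketch of that fault-to-work-order handoff, here's what an API connection between an FDD platform and a CMMS might look like. The endpoint and payload are entirely hypothetical; every CMMS has its own REST schema, which is exactly why a connector is needed.

```python
import requests

# A validated fault coming out of the FDD layer (illustrative fields).
fault = {
    "equip": "AHU-1",
    "rule": "simultaneous-heating-and-cooling",
    "validated": True,
}

# Push it into a hypothetical CMMS as an open work order.
if fault["validated"]:
    requests.post(
        "https://example-cmms.local/api/workorders",
        json={
            "asset": fault["equip"],
            "summary": f"FDD fault: {fault['rule']}",
            "status": "open",  # closed later, once the fix is verified
        },
        timeout=10,
    ).raise_for_status()
```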
It's something we haven't worked directly with, but it's certainly on my transition path. So after ASO comes, you know, CMMS or asset management; it definitely is a part that has to be added in. Some of the products have a little bit of that already, right? They're adding some of those feature sets in, but it's not a full-blown one.
James Dice: [00:54:46] Yeah. CopperTree has something that's really lightweight. Obviously BuildingIQ has it as part of their stack. But if you look at a lot of larger building owners and property managers, they have some sort of CMMS already. So to me, it's something that we need to think about from an integration standpoint.
And what I've seen is that if you look at, there's a whole separate data model problem inside of those platforms, right? So, everyone's calling the air handlers something different depending on what platform it's in. And, all the fields are filled out differently. So that's a, that's a whole nother challenge. And I like how you just phrased it in terms of like, okay, ASO is kind of the next phase, and then maybe the CMMS is the next phase, because I agree it's something that it's not easy to implement yet.
Terry Herr: [00:55:32] Also, part of what I see happening with operators is a transition from actually operating to troubleshooting and repair. Because to me, in a perfect world, when you get the automation right, you do have an automated self-driving building. And so they don't really need an operator. You need an operator maybe to change schedules and change set points, but the key is going to be keeping the sensors and the actuators working and mechanical equipment working. So it really goes to troubleshooting and repair of that so that you get it back to being automatic.
And again, that is just training, right? I mean, let's face it, even with fault detection and diagnostics, the diagnostics only work to a point, right? They can't see everything. You still need somebody, oftentimes, to go out with a meter and some tools and look at it, right, and see what the problem is. That's got to be part of the training, because getting that building back operational, or back under automatic control, requires that those actuators and sensors all work.
James Dice: [00:56:38] Yeah. And that's where one of my clients likes to just throw the word condition based maintenance in, and I think that's in this next phase as well. It's like how do we automate what can be automated and then let the analytics also help us with the actual physical maintenance of our equipment.
So, cool. Okay, so as we kind of wrap up here, it's, what, May 11th, and we're still in this global pandemic. Weird times for our industry. How are you thinking about how these types of technologies can help with reoccupying buildings, but also managing them in the face of a virus that's uncontrolled at this point?
Terry Herr: [00:57:18] Yeah. We have a little bit of exposure to that, but not a lot. We are helping one of our clients shut down buildings but maintain humidity and temperature levels within an expanded range. And of course, we're going to be saddled with starting them back up at some point, right? I know ASHRAE has some guidelines with regard to how you might want to operate a building differently given the pandemic, you know, more outside air, which isn't going to help the energy usage much. And we know there are a bunch of mechanical contractors out there putting in, you know, MERV 13 filters and ultraviolet lights in air handling units. We see that happening. So it's definitely going to make things interesting.
We also think that energy management is going to get even more important, because, you know, this economy is going to be struggling a little more, right? So I would hope that people are going to be interested more than ever in saving as much money as they can on their energy spend. And I think there's plenty of opportunity to save money in most commercial buildings with an ROI that's sub two years. This technology really should be deployed. I mean, we're probably at a saturation rate of, I don't know, 5% or so, and it probably needs to be 95%.
So I think we'll have over the next couple of years, you know, a nice growth path.
James Dice: [00:58:43] Cool. Yeah, and I guess to kind of wrap a bow on all of this: we talk about all these different waves, and we talk about a 10-plus year history, but it really feels to me like the history has been one of early, early adopters, and now we're just getting to the early adopters from the standpoint of the entire commercial buildings fleet, right?
Cool. Well, this has been awesome. I think it's a good time to cut out, but I definitely want to dive deeper into some of this stuff in a future episode. Thanks so much for your time, and I can't wait to connect over this stuff in the future.
Terry Herr: [00:59:17] Yeah, you're welcome, James. Keep up the good work on LinkedIn, getting those interesting topic threads rolling.
James Dice: [00:59:24] Absolutely. Yep. Yep. Definitely plan on it. All right. Well, have a good day and have a good week.
All right, friends. Thanks for listening to this episode of the Nexus podcast. For more episodes like this and to get the weekly Nexus newsletter, please subscribe at dot com. You can find the show notes of this conversation there as well. As always, please reach out on LinkedIn with any thoughts on this episode.
I'd love to hear from you. Have a great day.