The August Nexus Pro Members' Subject Matter Expert (SME) Workshop broke edge computing into four critical components that finally make this complex subject something we can all understand.
Pro Members can watch the full recording, view the slides, and read the transcript here.
Cory Clarke is trained as an architect. He doesn’t call himself an IT expert, but he definitely is an IT expert, and he's gonna show us some cool stuff around edge cloud computing today. Not a whole lot of architects can go deep on that topic. So thank you, Cory, for volunteering, and for teaching us some stuff.
I work for View.
I think most people know us for our smart glass. You know, the cool stuff: it's nanotechnology that goes from basically fully transparent to 100% opaque.
I actually work for the side of the company that doesn't do the glass.
You can think of the glass as if we were Tesla: the glass is our car. I deal with the batteries, solar, charging, and all the other stuff.
I deal with the stuff that supports the glass, and part of that is edge compute.
The stuff I'm going to be talking about, View uses every day; it's what we use to run all of our glass.
These are products that we built for the glass and now sell separately. Our edge compute alone is deployed at around 800 buildings across the US and Australia.
And you know, I was trained as an architect, but I fell off the wagon very early; I've basically been working in software development for the last 25 years.
I'm particularly excited about this edge compute, edge cloud capability because it really changes the way applications can work with buildings. It can be truly transformative.
To get started, let's look at what we've seen with cloud transformation in general: from fully on site, to fully remote, to landing in the middle.
We’ve seen this pattern before.
Retail: In the early days of the internet everything was brick and mortar, everything was local and on prem; you shopped for books in a store. Then it all moved to the cloud, and now things have corrected back to hybrid experiences. People are finding these omnichannel, middle-ground experiences like Amazon Go stores.
IT: Originally everything was on prem. Then came data centers and the move fully to the cloud. Now we're seeing hybrid cloud offerings, like AWS Outposts, where you can basically take Amazon and put it on prem.
Work: We all worked very much on site, moved very quickly (and very painfully) to fully remote, and now people are interested in hybrid work experiences.
We're now seeing this trend in the real estate industry. It has been a little slow to adopt some of the cloud pieces, but there has been a big push recently into cloud and virtualization. And we're starting to see our customers trying to find this middle ground between OT on prem and OT in the cloud.
This is where edge cloud lives.
Edge cloud promises the benefits of both full cloud (transparency, ease of management, ease of deployment) and on prem (low latency and resiliency).
This is the middle ground: that Goldilocks sweet spot of not too big, not too small; not too cloud, not too on prem.
What do I mean by edge cloud? There are a few things I'm going to dive into, four components that really make edge cloud possible: compute at the edge, containerization, clustering, and connectivity.
The first, compute at the edge, isn't anything new. Everyone's got PCs somewhere in their basement running their access control, their BMS, etc. The benefits of having on-prem compute: you're able to process confidential data; you have the option of air gapping, so it's secure because it's on prem; you have the low latency you want; and you have higher resiliency. You don't have to worry about your building's internet connection going down and losing functionality.
Now, a device like this, which is similar to what we use in most of our installations, is a couple thousand dollars, so you can get a lot of them. They're fanless, they're high powered, and while they don't have a GPU, you can run pretty much anything on them. They're very affordable. It's become a lot easier to have a lot more compute at the edge than a decade ago, when you needed to buy a massive blade server and fill a whole rack. These things are the size of a book.
Compute at the edge is just having the ability to process data and run applications. The thing that really starts to make edge cloud possible is this idea of containerization.
Traditionally in buildings, if you have an application, your vendor installs a box: hardware with some OS running on it, running their application. Each vendor puts in another device, so you end up with a device for your energy management, a device for your sensors, a device for access control, a device for building automation, and so on. You'd end up with a rack full of all these various pieces of hardware.
Over the last decade or so I've seen kind of a trend towards virtualization.
People don't want all these devices; it's too much hardware. Instead they run one device with virtual machines. You'll have one box, the same hardware, the same operating system, but then you'll have a management layer (a hypervisor) running separate environments. Inside of that you have virtual machines, each with its own copy of the operating system, and then the apps on top.
It's nice because instead of having a bunch of devices, you have one device, maybe with a little more capacity, running virtual machines. They're segmented so they can't talk to each other unless you want them to, so they're very secure. And you're able to have centralized management and control.
The downside is that the operating system is duplicated multiple times, so it takes up a lot of RAM and a lot of disk space. The virtual machines each carry more overhead, and they take longer to start up. It's like booting up any computer: the OS starts, and only then can the applications start.
Container architecture came about around 2013 or 2014. It's not fundamentally different from virtual machine architecture, but there's one big difference: the operating system. You have the same relative structure: a management layer, something that makes a box allowing you to run multiple virtual spaces inside your device.
But these containers don't have their own operating system. A container is just a safe box that's segmented and cordoned off so that apps in one box can't talk to the next; they're all secure. The management layer acts as a proxy to the operating system, so everything is standardized: if an app needs to talk to the file system, it talks in a standard way regardless of whether it's running on a Linux machine, a Windows machine, or a Mac. It doesn't matter; the way containers work, everything's abstracted.
It still has the same security as the virtual architecture, with added simplicity: when you're writing an app, you don't need to worry about the layer below. It doesn't have all the overhead, because your container can be exactly the size you need for your app, and it takes seconds to start instead of minutes. You also get this abstraction of hardware and OS, so you can move applications from one device to another in a matter of seconds. That gives you a lot of portability and rapid deployment: you can basically send a message, deploy an app, and turn it on.
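To make that "send a message and turn it on" flow concrete, here's a minimal sketch using the Docker SDK for Python, assuming a Docker engine is already running on the edge device; the registry and image names are hypothetical stand-ins.

```python
# Minimal sketch: pull and start a containerized app on an edge device.
# Assumes the Docker engine is running locally; image name is hypothetical.
import docker

client = docker.from_env()  # connect to the local Docker engine

# Pull the app image; with layer caching this takes seconds, not minutes.
client.images.pull("registry.example.com/energy-monitor", tag="1.4.2")

# Start it. Containers share the host kernel, so startup is near-instant.
container = client.containers.run(
    "registry.example.com/energy-monitor:1.4.2",
    detach=True,                        # run in the background
    name="energy-monitor",
    restart_policy={"Name": "always"},  # come back up after a reboot
)
print(container.short_id, container.status)
```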
You still need segregation from a security standpoint, so you can make sure these apps don't talk to anything they shouldn't. That ensures data doesn't leak between containers. Also, in that management layer, you can restrict access to different things: you can say this app only has access to a particular network or a particular file system, and be very granular about control.
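Here's one hedged sketch of that kind of granular restriction, again with the Docker SDK; the network name, image, and paths are made up for illustration.

```python
# Sketch: run a container that can only reach an isolated internal network,
# with a read-only filesystem and capped resources. Names are hypothetical.
import docker

client = docker.from_env()

# An "internal" network has no route off the host: containers on it can talk
# to each other, but not to the wider building network or the internet.
client.networks.create("bms-net", driver="bridge", internal=True)

client.containers.run(
    "registry.example.com/bms-gateway:2.0",
    detach=True,
    network="bms-net",        # only the BMS network, nothing else
    read_only=True,           # the app can't write to its own filesystem
    mem_limit="256m",         # cap RAM
    nano_cpus=500_000_000,    # cap at half a CPU core
    volumes={"/srv/bms-data": {"bind": "/data", "mode": "ro"}},  # read-only mount
)
```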
Portability is huge. You can move an app from container to container, and it doesn't matter where that container is, since it's hardware agnostic. It could be running on prem, but it could also be running in the cloud, as long as there's a little box for it to run in.
Right-sizing and scaling are huge too. If you suddenly need a bigger computer or a bigger device, you can start up a bigger device and move all the apps over.
The next idea is clustering, because if you have these containers but your computer dies, what happens? You want some of that resiliency, and that's where clustering comes in.
You'll have multiple devices, the same hardware, the same operating system, and you can actually have the same apps running on all of them. Put them together into a cluster, and the apps run across it. From the outside, it looks like one big device. Inside, it's two, three, four devices; you can add more, you can remove them, and they all just work as one.
That's really where it starts to get cloud-like, because you don't really know how many devices you have. If one dies, you start up another one. The apps just migrate back and forth and move around.
The apps themselves are running on these pieces of hardware, but you'll have a storage layer that's independent. That gives you redundancy: if one of the machines suddenly dies, another one can start up, and because the apps read their data from the storage layer, nothing is lost in the process.
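As a sketch of that independent storage layer in Kubernetes terms (the orchestrator discussed just below), an app claims storage from the cluster rather than writing to any one machine's disk. This assumes a storage backend that supports shared access; all names are illustrative.

```python
# Sketch: claim cluster storage that outlives any single node. Assumes a
# storage backend supporting ReadWriteMany; names are illustrative.
from kubernetes import client, config

config.load_kube_config()  # use local kubeconfig credentials

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "bms-history"},
    "spec": {
        "accessModes": ["ReadWriteMany"],              # mountable from any node
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```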
The cluster also makes upgrades easy, because you can take one machine offline, upgrade it, put it back into the cluster, take another one out, upgrade it, and so on. You never have to take the whole system down to upgrade.
The other key is this management layer. The main one used by most people is called Kubernetes. It deploys those apps and makes sure they're always running; if one goes down, it restarts it. If you need to spin up another piece of hardware, you turn it on and the management layer pushes out all the applications and makes sure all the dependencies are there. It's this orchestration piece on top that allows you to move apps in and out.
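Here's a hedged sketch of that "deploy it and keep it running" idea through the Kubernetes Python client; the app name and image are hypothetical. Declaring three replicas tells Kubernetes to keep three copies alive across the cluster and reschedule any that die.

```python
# Sketch: ask Kubernetes to keep 3 copies of a (hypothetical) app running
# across the cluster, restarting or rescheduling them if a device dies.
from kubernetes import client, config

config.load_kube_config()

labels = {"app": "bms-gateway"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="bms-gateway"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired copies; Kubernetes maintains this count
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="bms-gateway",
                    image="registry.example.com/bms-gateway:2.0",
                )
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```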
The orchestration layer is also really valuable for testing. There's a process called a canary deployment (like the canary in a coal mine): you can deploy just one instance of a new version of an app into a giant cluster, watch it, and see if anything goes bad before rolling it out everywhere.
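Continuing the hypothetical example above, a minimal canary is just a second deployment running one copy of the new version; because it shares the same "app" label, a Service in front will send it a small slice of the traffic.

```python
# Sketch: a one-replica "canary" of the new version, running alongside the
# stable deployment above. Names and version tags are hypothetical.
from kubernetes import client, config

config.load_kube_config()

canary_labels = {"app": "bms-gateway", "track": "canary"}
canary = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="bms-gateway-canary"),
    spec=client.V1DeploymentSpec(
        replicas=1,  # just one instance: the canary
        selector=client.V1LabelSelector(match_labels=canary_labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=canary_labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="bms-gateway",
                    image="registry.example.com/bms-gateway:2.1-rc1",  # new version
                )
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=canary)
```

If the canary misbehaves, you delete that one deployment and the stable replicas carry on untouched.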
You get a lot of resiliency out of clusters. Just as you don't really know how many computers are in the cloud, it's the same here: any single device can go down, and the cluster as a whole generally stays up.
You're also able to do load balancing: you can move applications around and scale them across the cluster, up and down, based on load and demand. That's mostly automated by the management layer. It watches each app, sees how much memory and CPU it's using, and automatically starts up another instance on another device, reallocating capacity between them. So if you have spare devices just lying in wait, or spare capacity, the management layer redistributes apps across them.
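In Kubernetes, that watch-and-scale behavior can be declared with a horizontal pod autoscaler. A hedged sketch, again with hypothetical names, assuming a recent client with the autoscaling/v2 API and a metrics server running in the cluster:

```python
# Sketch: autoscale the hypothetical bms-gateway app between 2 and 6 copies,
# targeting ~70% average CPU. Assumes the cluster runs a metrics server.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V2HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="bms-gateway"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="bms-gateway"
        ),
        min_replicas=2,
        max_replicas=6,
        metrics=[client.V2MetricSpec(
            type="Resource",
            resource=client.V2ResourceMetricSource(
                name="cpu",
                target=client.V2MetricTarget(
                    type="Utilization", average_utilization=70
                ),
            ),
        )],
    ),
)
client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```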
There's no downtime for upgrades because you can just pull some of the devices out of the cluster, upgrade them and put them back in again.
The last piece is connectivity.
Everybody has some degree of connectivity between their building and the cloud, but there are ways you can set it up to optimize for edge cloud and for resiliency.
In the building you have your cluster of three or more of these devices, talking downstream to the BMS, lighting, HVAC, and so on. Then you have your cloud, where you can also have containerized apps running.
You can set up what's called a software-defined network between the two. It's kind of like a VPN, but with more granular control: you could say this particular cluster only has access to these particular pieces on the other side, or this container only has access to the BMS network. It's much more controlled. It also basically makes the cloud and the building look like parts of one consistent, unified network, so an application running in the cloud and an application running in the building don't look any different. With your orchestration layer, you can even choose to deploy the same app in the cloud, at the edge, or running across both.
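That "this container only has access to the BMS network" idea can be written down as a Kubernetes network policy. A hedged sketch with a made-up subnet, assuming the cluster's network plugin (Calico, Cilium, and the like) enforces policies:

```python
# Sketch: restrict the hypothetical bms-gateway pods so their only egress
# is the BMS subnet. Subnet and labels are made up; enforcement depends on
# the cluster's network plugin.
from kubernetes import client, config

config.load_kube_config()

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="bms-gateway-egress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "bms-gateway"}),
        policy_types=["Egress"],
        egress=[client.V1NetworkPolicyEgressRule(
            to=[client.V1NetworkPolicyPeer(
                ip_block=client.V1IPBlock(cidr="10.20.0.0/24")  # hypothetical BMS subnet
            )]
        )],
    ),
)
client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="default", body=policy
)
```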
This setup makes offsite backup easy, and obviously centralized management, because you can manage things from the cloud and reach all of your buildings, not just one. And this ability to move things between edge and cloud really lets you optimize.
The cloud tends to be a little more expensive and a little higher latency, so you could choose to run an app entirely in the building, or a little in the building and a little in the cloud. For every single application and every single use case, it lets you dial up and dial down how much cloud and how much on prem, to optimize for latency, cost, performance, resiliency, whatever you're shooting for.