by Raka Mahesa | Published March 6, 2018
Over the past decade, cloud computing has taken the world by storm. As companies realized how much they could save by using the cloud, they looked to cut costs even further by applying cloud computing to other aspects of their business. And as cloud technology has improved, more and more types of services have become available through the cloud.
As more capabilities have moved into the cloud, the cloud’s limitations and drawbacks have become more evident. Users are finding out that the current architecture of cloud computing doesn’t fit some types of projects and situations. In particular, they have discovered that the Internet of Things (IoT)—which is experiencing huge growth—doesn’t always work that well with cloud computing. Let’s talk a bit about that.
So, what does IoT have to do with cloud computing? A lot, actually, because many parts of IoT are powered by cloud computing. It might be a simple case of controlling your thermostat remotely on your phone by connecting to the cloud. Or you might use cloud storage to store the footage recorded by a network of security cameras. All in all, IoT and cloud computing usually complement each other quite well.
However, as the network grows and more smart devices get connected, problems start to crop up. One such problem is the massive amount of data generated by all of those connected devices. For example, in a network of security cameras, the system can upload a huge amount of video data to the server every second. In such a system, the cameras themselves have no storage component, so all of that video has to be stored on the main server.
Another problem that cloud computing often doesn't handle well is time delay. Because data can be processed only in the cloud, there is always some delay between recording the data and getting results from it. This kind of delay can be problematic in a time-critical system. Imagine a self-driving car equipped with sensors to check its surroundings. When the car is driving at 50 km/h, getting the analysis of its environment after two seconds is pointless, because the car will be in a different position by then.
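A quick back-of-the-envelope calculation, using the speed and delay from the example above, shows just how far the car moves while waiting for the cloud:

```python
# Distance a vehicle travels while waiting for a cloud round trip.
# Speed and delay are taken from the example in the text.
speed_kmh = 50
delay_s = 2

speed_ms = speed_kmh * 1000 / 3600   # convert km/h to m/s
distance_m = speed_ms * delay_s

print(f"{distance_m:.1f} m traveled before the result arrives")
# prints "27.8 m traveled before the result arrives"
```

Nearly 28 meters, several car lengths, is far too much ground to cover before knowing what the sensors saw.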
The core issue of these problems lies in the centralized nature of a cloud computing architecture. After all, only the central nodes of the network have the capability to store and process data. To combat this problem, network designers are proposing architectures where the computing power is distributed more evenly around the network. These architectures push the processing capability out to the edge of the network, closer to the source of the data. Such techniques are called fog computing and mist computing.
The quickest way to understand these two architectures, and how they differ from each other, is to look at clouds, fog, and mist in the everyday, meteorological sense. In real life, a cloud is a dense mass of condensed water hanging high in the sky, far from the ground. Fog, on the other hand, is a thinner layer of condensed water located below the clouds, and mist is the thin layer of floating water droplets found right at ground level.
You can see from that analogy how cloud computing is similar to an actual cloud, where great computing power is located far away from human activities. Fog computing takes place beneath the cloud in a layer whose infrastructure connects end devices with the central server. And, finally, mist computing takes place on the ground, where it is the light computing power located at the very edge of the network, at the level of the sensor and actuator devices.
(Another term that is frequently used is edge computing. Unfortunately, this term seems to mean different things to different people. Some use edge computing as a synonym for mist computing, while others consider it equivalent to fog computing. For the purposes of this discussion, I will avoid using the term edge computing to avoid confusion.)
Let’s be clear about one thing: Cloud, fog, and mist computing are complementary to each other. They are meant to work together and not against one another. Each has its own advantages and disadvantages; so, by using all of them together, we can play to each of their strengths and minimize their weaknesses.
That said, you don’t have to use all of them at the same time. For example, let’s say you’re building an automated farm watering and monitoring system in a remote area somewhere. With no Internet connection, you may as well forgo the cloud aspect and simply store the data in a computer on the premises.
As stated earlier, fog computing is the paradigm of putting computing capability in the connection between the device sensors and the cloud server. This capability is usually put into a device that acts as a gateway, connecting all of the sensors and managing connectivity with the cloud. The gateway device tends to have decent computing power and data storage, so it can handle data received from multiple sensors.
Fog computing is a great fit for a project that needs to process data from multiple sensors as well as one where minimizing latency is critical. Based on those characteristics, you can see that autonomous vehicles are well suited for fog computing.
An autonomous car relies on multiple sensors in order to get a complete reading of its surroundings. However, having all that data processed at the cloud is a no-go, because the car needs the result of that data as quickly as possible. Having a gateway device that can process the data right away is crucial to having the car work properly. In addition, the gateway device must be able to filter and find the relevant data that needs to be sent to the cloud for further analysis, reducing the amount of bandwidth required.
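The filter-and-forward pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the sensor readings, the anomaly threshold, and the function names are all invented for the example, not taken from any real gateway framework.

```python
# Hypothetical fog-gateway sketch: process sensor readings locally and
# forward only the anomalous ones to the cloud, cutting upload bandwidth.
ANOMALY_THRESHOLD = 80.0  # illustrative threshold, not from the article

def process_locally(readings):
    """Act on readings at the gateway and keep only the anomalies."""
    return [r for r in readings if r["value"] > ANOMALY_THRESHOLD]

def forward_to_cloud(anomalies):
    """Placeholder for an upload; here we just count what would be sent."""
    return len(anomalies)

readings = [
    {"sensor": "lidar-1", "value": 42.0},
    {"sensor": "lidar-2", "value": 97.5},  # anomalous reading
    {"sensor": "radar-1", "value": 61.3},
]

anomalies = process_locally(readings)
sent = forward_to_cloud(anomalies)
print(f"Uploaded {sent} of {len(readings)} readings")
# prints "Uploaded 1 of 3 readings"
```

The point of the sketch is the shape of the data flow: every reading is handled locally with low latency, but only a fraction of it ever consumes uplink bandwidth.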
Of course, it’s not all rainbows and sunshine with fog computing. The biggest problem with fog computing is that it relies solely on the gateway device for the whole system to function, which means the gateway can be a single point of failure. Having an issue like this in a system like an autonomous vehicle can be really dangerous, so it’s important to put a backup or a redundancy mechanism into place.
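One common redundancy mechanism is simple failover: keep a backup gateway on standby and route to it when the primary fails. The sketch below is a toy illustration of that idea, with hypothetical `Gateway` objects standing in for real hardware:

```python
# Sketch of a failover wrapper for a fog gateway: if the primary gateway
# fails, the request is retried on a backup. All names are illustrative.
class GatewayDown(Exception):
    pass

class Gateway:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def handle(self, data):
        if not self.healthy:
            raise GatewayDown(self.name)
        return f"{self.name} processed {data}"

def handle_with_failover(gateways, data):
    """Try each gateway in priority order; fail only if all are down."""
    for gw in gateways:
        try:
            return gw.handle(data)
        except GatewayDown:
            continue  # this gateway is down; try the next one
    raise RuntimeError("all gateways down")

primary = Gateway("primary", healthy=False)  # simulate a failure
backup = Gateway("backup")
print(handle_with_failover([primary, backup], "frame-001"))
# prints "backup processed frame-001"
```

In a real deployment the failure detection would involve heartbeats and timeouts rather than exceptions, but the priority-ordered fallback is the same.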
Mist computing is all about putting computing power at the very edge of the network, on the sensor devices themselves. This computing power usually comes in the form of microchips or microcontrollers embedded in the device, so its processing capability is much more limited.
You might wonder whether having computational power on the sensors is even necessary. After all, the job of these sensors is solely to record data from the environment, right? Well, these sensors usually have another job, too: transferring the data they've recorded to storage elsewhere in the network. Transmitting data uses far more battery power than an equivalent amount of on-board computation. So, with computing power on the sensor, the data can be processed, preconditioned, and optimized before being sent. The resulting data is much smaller, consuming less power in the transfer.
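A sketch of that preconditioning step: instead of transmitting every raw sample, the sensor averages them into per-window summaries and sends only those. The window size and the fake sample data are illustrative assumptions, not values from the article.

```python
# Mist-computing sketch: summarize raw samples on the sensor before
# transmitting, so far fewer bytes cross the battery-hungry radio link.
raw_samples = [20.0 + (i % 7) * 0.1 for i in range(1000)]  # fake sensor data

WINDOW = 100  # illustrative window size: average every 100 samples
summaries = [
    sum(raw_samples[i:i + WINDOW]) / WINDOW
    for i in range(0, len(raw_samples), WINDOW)
]

print(f"{len(raw_samples)} samples reduced to {len(summaries)} summaries")
# prints "1000 samples reduced to 10 summaries"
```

A 100x reduction in transmitted values translates directly into radio airtime saved, which on a battery-powered sensor is the dominant energy cost.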
The main downside of mist computing is complexity. Not only are mist computing systems usually application specific, but the sensors involved are often heterogeneous, which makes implementing a solution more complicated. In addition, the processing power available in a mist computing architecture is limited, adding even more constraints to any possible solution.
One final aspect of fog and mist computing is security. Using fog or mist computing can enhance a system's data security: because data is processed locally before being sent to the remote server, any sensitive data can be removed or encrypted first, reducing the system's exposure to security threats.
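The "remove sensitive data first" half of that idea can be sketched as a simple redaction step on the gateway. The record and field names below are purely illustrative; a real system would also encrypt what remains before upload.

```python
# Sketch of local preprocessing for privacy: strip sensitive fields from a
# record before it leaves the gateway. Field names are purely illustrative.
SENSITIVE_FIELDS = {"patient_name", "gps_trace"}

def redact(record):
    """Return a copy of the record without the sensitive fields."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

record = {
    "device_id": "cam-7",
    "patient_name": "J. Doe",
    "gps_trace": [(0.0, 0.0)],
    "temperature": 36.6,
}

safe = redact(record)
print(sorted(safe))  # prints "['device_id', 'temperature']"
```

Because the redaction happens before any network hop, the sensitive fields never exist outside the local device, shrinking the attack surface that the cloud side has to defend.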
Even with all of the benefits of the other architectures, cloud computing offers the largest amount of computing capability. Earlier, I explained how autonomous vehicles are a really good fit for fog computing, but don’t forget that all of that data needs to be analyzed and used to improve the performance of the vehicle. Since analyzing the data requires a huge amount of computational power and isn’t a time-sensitive operation, what better way to do it than using cloud computing?
So, cloud, fog, and mist computing all have their own strengths and weaknesses. As more and more devices are connected to the Internet, using all of these paradigms correctly will be key to ensuring that our systems and applications are able to scale alongside our growing network of devices.