
IoT vs. Edge Computing: What’s the difference?

The terms “Internet of Things” (IoT) and “Edge Computing” seem to be everywhere these days. What do these terms really mean, and why do I think Edge Computing is replacing IoT? Read on to find out! This article will define these terms and also discuss why you might want to migrate your enterprise application code away from your data centers and out closer to your real-world data sources.

Internet of Things

Let’s begin our comparison of these two environments by getting a clearer understanding of what IoT devices generally can, cannot, and usually do. We’ll also look at what differentiates IoT hardware and software from that used in other technology platforms — what are its strengths and weaknesses?

IoT devices

Not all computers are capable of being IoT devices. Computers must be internet-capable to be considered for IoT applications, and many small computers are just not. For example, the two large Arduino computers at the top of Figure 1 contain no networking hardware on their main boards. They require additional hardware add-ons (called “shields”) before they can be used as IoT devices. The two Arduinos shown here have Ethernet shields attached (their top layers) enabling them to use wired network connections.

IoT devices usually have special-purpose sensors or special-purpose actuators attached. They are usually designed for a specific function.

Figure 1: These are IoT devices. None of these are suitable for edge computing.


IoT devices like the Arduino computers shown above, and the smaller Espressif devices shown below them in Figure 1, are all powerful little computers. Generally though, they are capable of performing just a single task. The computers above are all programmed by completely erasing their onboard permanent “flash” storage, and overwriting that with a new software image — a process called “flashing”. This paradigm is common in similar devices made by other IoT device manufacturers as well.

IoT software

On a technical level, IoT device software is able to run just a single execution thread or at most a few threads within a single running context. There is no real operating system (OS) on any of the machines shown above. Instead, the software program you run on them does everything! One of the main reasons that devices like this do not have operating systems is that they do not include the memory management hardware that is required by modern operating systems to enable the isolation of multiple execution contexts (or processes). So they are only capable of supporting one execution context. This is both limiting and empowering.

This single-minded OS-free design enables the IoT software to use the processor very efficiently (since there is zero OS overhead). It also enables very precise timing of the operations that the program performs because other processes are not competing for the processor’s attention. Software on IoT devices like these can perform “real-time” actions, allowing for extremely precise timing for its responses under all conditions. This design also often enables the IoT device to use a very small amount of power. For example, all of the Espressif devices shown above are capable of “sleeping” periodically to save power. When sleeping, they only use a few millionths of an ampere! This enables these devices to be powered for months or years on small batteries.
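The battery-life claim above comes down to simple duty-cycle arithmetic. The sketch below estimates runtime for a device that sleeps most of the time; all of the numbers (battery capacity, active and sleep currents, wake interval) are illustrative assumptions, not measurements of any particular device.

```python
# Rough battery-life estimate for a duty-cycled IoT sensor node.
# All numbers are illustrative assumptions, not measured values.

def battery_life_days(capacity_mah, active_ma, sleep_ua, awake_s, period_s):
    """Average current draw -> runtime in days for a duty-cycled device."""
    sleep_ma = sleep_ua / 1000.0          # convert microamps to milliamps
    duty = awake_s / period_s             # fraction of time spent awake
    avg_ma = active_ma * duty + sleep_ma * (1.0 - duty)
    return capacity_mah / avg_ma / 24.0

# Example: a 2000 mAh cell, 80 mA while awake, 10 uA while asleep,
# waking for 1 second every 5 minutes to take and send a reading.
print(round(battery_life_days(2000, 80, 10, 1, 300)))  # -> 301 (days)
```

Even with these rough numbers, sleeping between readings stretches a battery from about a day of continuous operation to the better part of a year, which is why microamp-level sleep currents matter so much.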

IoT devices are also generally less vulnerable to well-known software vulnerabilities, simply because there is no OS running well-known software. The software used on IoT devices is mostly proprietary one-off code (though it may include well-known software libraries too). Although well-known exploits may not present much risk, there have nevertheless been many cases where entire fleets of IoT devices have been commandeered by hackers and used for their own purposes against the wishes of their owners.

IoT devices are usually difficult to update. Typically, software updates require physical access for re-flashing, so deploying a bug fix to a fleet of devices in the field can be an expensive undertaking. There are exceptions that enable remote re-flashing (often called “Over-The-Air,” or OTA, updates), but this is a cumbersome and risky process. It requires more than twice as much flash storage as your software uses: a new software image is written into the redundant bank of storage while your existing software manages the process from the other bank. When the new image has been fully written, the machine is rebooted to pick up the new version. If anything goes wrong during this process, the device is likely to be left in an unusable state. These software update challenges are major disadvantages for remote deployments.
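The dual-bank OTA process described above can be sketched as a toy simulation. This is a minimal sketch of the general "A/B bank" scheme, not any vendor's actual OTA implementation; the bank names and the SHA-256 verification step are illustrative assumptions.

```python
# Toy simulation of a dual-bank ("A/B") over-the-air update.
# The running firmware writes the new image into the inactive bank,
# verifies it, and only then switches over.
import hashlib

class DualBankDevice:
    def __init__(self, firmware: bytes):
        self.banks = {"A": firmware, "B": b""}
        self.active = "A"

    def ota_update(self, image: bytes, expected_sha256: str) -> bool:
        """Write the new image to the inactive bank, verify, then swap."""
        inactive = "B" if self.active == "A" else "A"
        self.banks[inactive] = image            # old firmware keeps running
        if hashlib.sha256(image).hexdigest() != expected_sha256:
            self.banks[inactive] = b""          # corrupt download: keep old bank
            return False
        self.active = inactive                  # "reboot" into the new bank
        return True

new_fw = b"firmware v2"
dev = DualBankDevice(b"firmware v1")
ok = dev.ota_update(new_fw, hashlib.sha256(new_fw).hexdigest())
print(ok, dev.active)  # prints: True B
```

The key property is that a failed verification leaves the old bank, and therefore the device, untouched; real devices face the harder problem of surviving a power loss mid-write, which this sketch does not model.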

Data processing

IoT devices usually aren’t able to do much data processing locally. They instead send their data over the Internet to cloud computers where more resources are available to do more sophisticated analysis. They are essentially simple sensors and actuators that are Internet-accessible. They send sensor data upward to a data center somewhere and accept commands flowing downward. This tends to limit the sampling resolution and sampling frequency of the sensor data, and it usually introduces significant delay between an event occurring and any action being taken on the device. For example, IoT cameras tend to send only relatively low-resolution images to the cloud at relatively low frame rates.

Recently, exceptions to that rule have begun to appear. The device near the middle of the bottom row in Figure 1, marked “ESP EYE,” is one such device. This tiny IoT device contains a small camera and it runs a neural network program to perform object detection locally on the device. It can then consume most of the generated data locally and send much lower volumes of data with higher information content upward to the cloud. This device is a perfect segue into the topic of edge computing.
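The bandwidth advantage of on-device inference is easy to see with back-of-the-envelope numbers. The figures below (compressed frame size, size of a small detection record, frame rate) are illustrative assumptions, not specifications of the ESP EYE.

```python
# Rough illustration of the bandwidth savings from on-device inference:
# streaming compressed frames vs. sending a small detection record per frame.
frame_bytes = 30_000      # ~30 kB per compressed camera frame (assumed)
detection_bytes = 200     # ~200 B per JSON-style detection message (assumed)
fps = 10                  # frames analyzed per second (assumed)

raw_kbps = frame_bytes * fps * 8 / 1000        # stream every frame upward
inferred_kbps = detection_bytes * fps * 8 / 1000  # send only detections

print(raw_kbps, inferred_kbps, raw_kbps / inferred_kbps)
# prints: 2400.0 16.0 150.0
```

Under these assumptions, local inference cuts upstream traffic by two orders of magnitude while the data that is sent carries far more information per byte.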

Getting started with IoT development

If you are interested in exploring IoT in more depth, review the “Getting started with IoT development” learning path.

Edge computing

Let’s face it, IoT is getting old. More than a decade ago, IoT devices overtook the number of humans on the Internet. A lot has happened over that time. Those minimalist IoT devices of yesteryear are very different from the tiny Internet-capable hardware of today.

Edge computing devices

Year after year, small and inexpensive computers have been getting more powerful. All of the computers shown in Figure 2 are small, inexpensive, 64-bit, fully-featured Linux-based edge computers. All of them except the top-middle and bottom-right machines cost just $60 USD or less. The smallest one retails for $10. Also, the two on the right have powerful NVIDIA GPUs. That’s significant, as I’ll explain soon.

Figure 2: A few edge computing devices


These small edge devices bring considerable processing horsepower right out to the farthest reaches of the Internet. They often have sensors and actuators directly attached to them, so they can take real world actions in response to real world events with sub-millisecond latencies — something that is simply impossible when data must first cross a network. Since an edge computer can process data locally, its sensors (such as cameras) could collect samples (such as images or frames) at a higher resolution, and at a higher frequency (such as frame rate) than would be possible if the data had to be sent to the cloud for processing. In general, this is what distinguishes edge computing from IoT. Edge computing is about placing computational resources as close as possible to the source of the data and where the actions need to occur.

Sometimes this means placing the computing power in the same physical device as the sensors and actuators, but sometimes it just means getting it closer to those endpoints. For example, as part of their 5G rollouts, many telecommunications providers are creating large multi-access edge computing (MEC) data centers at the far edges of their telco networks. Other companies will rent facilities in these MEC data centers to get closer to their customers, achieve extremely low service latencies, and enable new business models. The edge computers provided in these MEC facilities are large computing clusters such as those found in the data centers of large corporations and public cloud providers. Edge computing therefore spans a broad spectrum, from large facilities like these MECs and the regional outposts of governments and corporations, down to the control centers in factories or warehouses, and all the way out to the small stand-alone machines shown in Figure 2.

The capabilities of the current generation of small edge machines can be quite surprising if you have not paid attention over the last few years. The bottom right machine, for example, has a 6-core 64-bit CPU, 8GB of RAM, and an NVIDIA Volta GPU. It is capable of 21 trillion operations per second (21 TOPS). It can run visual object detection algorithms like ResNet-50 (a convolutional neural network 50 layers deep) at over 1000 frames per second (that is, it can simultaneously process 50 different video streams at about 20 frames per second each). It is also capable of performing high quality speech-to-text, natural language processing, language translation, and text-to-speech entirely on the device (no remote services required). This particular device retails at $400 USD and is about the size of a deck of playing cards.
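The throughput figures quoted above can be sanity-checked with simple arithmetic. The per-frame compute budget derived here is just the device's quoted peak divided by the claimed frame rate; real sustained throughput depends on precision, memory bandwidth, and utilization, which this sketch ignores.

```python
# Sanity-check of the throughput figures quoted above.
tops = 21e12        # quoted device peak: 21 trillion operations per second
total_fps = 1000    # quoted aggregate object-detection rate
streams = 50        # quoted number of simultaneous video streams

per_stream_fps = total_fps / streams    # frames/s available per stream
ops_per_frame = tops / total_fps        # compute budget per frame at peak

print(per_stream_fps)           # prints: 20.0
print(ops_per_frame / 1e9)      # prints: 21.0  (GOPs available per frame)
```

A budget of roughly 21 billion operations per frame is comfortably above the few billion operations a single ResNet-50 inference needs, so the quoted numbers are internally consistent.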

Edge computing device software

While IoT devices tend to run single-purpose, single-process software (like the earliest computers), edge computers run real, modern operating systems, usually based upon the Linux kernel. This means that edge computers can effortlessly multitask, manage multiple applications simultaneously, support a wide range of popular networking protocols, and so on. In general, edge computers today can run almost anything that a server computer in a data center can run. This enables corporations to move appropriate types of application code out of their data centers and into the field.

Edge computers often run multiple applications simultaneously: their modern operating systems enable shared access to hardware resources, and they often allow multiple tenants to run applications on the same hardware. Their powerful hardware can perform extremely complex tasks, like machine inferencing, so they can analyze data locally and send lower volume, higher value data to the data center.

Of course, one of the most significant revolutions in data center computing has been the move from virtual machines (VMs) to Linux containers (especially through tools like Docker or Kubernetes). Containers have most of the same advantages as VMs, but they are dramatically more efficient, enabling faster processing and 10 to 100 times the application density on the same physical hardware. Edge computers that run Linux are usually able to run these containers. Containerized enterprise applications can therefore be deployed with ease out into the field, while still being managed centrally. By running the applications nearer to their data sources, they can sample data at higher resolutions, sample more frequently, reduce data transmission costs, and also be more immune to losses of network connectivity.

Since edge machines have real operating systems, providing many services to applications, intelligent edge agents can be deployed on them to autonomously secure and manage their software (for example, using Open Horizon). Edge machines can also be grouped into clusters to provide physical hardware redundancy, load sharing, and fault tolerance, often using one of the popular Kubernetes software distributions.

In general, it is much easier to keep edge machine software up-to-date than it is to keep IoT software up-to-date. It is also easy to completely change the purpose of an edge computer on the fly, or to spontaneously add new applications to an existing edge computer in the field.

Edge data processing

What data processing is actually done on edge computers? Let’s take a look at some specific examples.

  • Visual inferencing. Edge computers are often fitted with high resolution cameras. They can consume streams of video from these cameras and perform machine inferencing on this data, right there on the edge machine. That inferencing can be as simple as detecting the people within view, or it may be more complex: some edge applications can detect whether someone is wearing a properly fitted mask, whether they have an elevated body temperature, or whether they are maintaining a sufficient distance from others. Visual inferencing has also been used on edge computers to precisely detect the positions of particular ships in ports, to monitor for safety violations in factories, to detect anomalies on production lines, and much more.

  • Anomaly detection. Edge computers play a large role in many industrial plants and factories. They can see and hear things that human senses are unable to detect. They can infer many types of problems as well as (or sometimes even better than) the most skilled human operators. For example, edge computers can precisely monitor the power consumption of electric motors and take immediate action without human intervention. Edge computers can also listen for anomalous sounds from complex equipment, and alert human operators when things appear to be awry.
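The power-monitoring example above can be sketched as a simple rolling-baseline detector: flag any reading that deviates sharply from recent history. This is a minimal illustrative sketch; the window size, threshold, and readings are all assumed values, and real deployments would use far more sophisticated models.

```python
# Minimal sketch of edge-side anomaly detection on motor power readings:
# flag values that drift far (in standard deviations) from a rolling baseline.
from collections import deque
from statistics import mean, stdev

class PowerMonitor:
    def __init__(self, window=20, z_threshold=3.0):
        self.readings = deque(maxlen=window)   # rolling history of readings
        self.z_threshold = z_threshold

    def is_anomalous(self, watts: float) -> bool:
        """Return True if the reading deviates sharply from recent history."""
        if len(self.readings) >= 2:
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(watts - mu) / sigma > self.z_threshold:
                return True    # outlier: don't fold it into the baseline
        self.readings.append(watts)
        return False

monitor = PowerMonitor()
normal = [100 + (i % 5) for i in range(30)]    # steady load, small ripple
spikes = [w for w in normal if monitor.is_anomalous(w)]
print(len(spikes), monitor.is_anomalous(500.0))  # prints: 0 True
```

Because the check runs locally, the device can cut power to a failing motor within milliseconds rather than waiting on a round trip to the cloud.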

  • Environment monitoring. Edge computers can monitor for hazardous conditions like particulate pollution or poisonous gases, and rapidly act (within milliseconds) to correct or mitigate the problem while notifying the appropriate authorities.

  • Multi-access edge computing (MEC). Computers in telco MEC facilities can provide the services that used to be provided in large corporate data centers or public clouds – but with much lower latencies, since they are physically much closer to customers. Since the edge computers in these MECs are the same kinds of server-class machines found in large data centers, they really can run any data center workload.

Other common examples include worker safety in the field, applications in remote mines, automotive applications, road and rail monitoring, and real-time traffic monitoring.


IoT devices are essentially simple sensors and actuators that are Internet-accessible. They do very little local processing. They send data to larger computers in a remote data center, and they receive commands from those remote computers. They are generally incapable of making intelligent decisions or taking autonomous actions. They simply do what they are told to do, and this makes their centralized control systems an attractive target for hackers.

Small computer hardware has evolved since the IoT era. Edge computers of ever-increasing computational power will continue to migrate ever closer to their data sources.

Edge software will also continue to evolve. Applications that previously had to be run in centralized data centers will migrate closer to the edge, often using Linux technology. This migration will enable more data to be examined and faster responses to occur. Decision making will move closer to the places where the actions must occur. Technologies from cloud native computing, like Linux containers, will also migrate out to these powerful edge computers. Ultimately, new applications and new business models will arise from the opportunities presented by edge computing.