For this pre:Invent, AWS made some major announcements that focus on streamlining the integration of AWS IoT Services and Software in various environments. The overarching theme of facilitating adoption seems to be foreshadowing a meaningful shift in the way we do IoT on AWS. Without further ado, let’s take a look.
So far, there have been two primary mechanisms for provisioning AWS IoT devices. The first is to create an identity associated with device credentials (i.e., a certificate and private key) that are loaded onto the device. In most instances this is a one-by-one procedure, which can become cumbersome to manage. The other mechanism is Just-in-Time Provisioning, in which credentials are loaded onto the device up front, but the identity, device configuration, and permissions are determined automatically when the device first connects.
Neither of these options addresses the need to procure device credentials when the device is first configured, whether on the manufacturing line or at the installation site.
AWS addresses this with Fleet Provisioning, which gives the device a “provisioning claim” it can use to request a certificate and private key as soon as it needs to begin operating. This removes a layer of complexity by generating credentials on demand. That said, a team with sufficient AWS and credentials-management experience could implement a similar solution in a reasonable amount of time if needed.
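To make the workflow concrete, here is a minimal sketch of the reserved MQTT topics a device uses during Fleet Provisioning: it first requests a certificate and key pair, then registers itself against a provisioning template using the ownership token it received. The template name and parameters below are hypothetical placeholders; in practice the device connects with its provisioning-claim credentials and also subscribes to the corresponding `accepted`/`rejected` response topics.

```python
import json

# Reserved MQTT topic for requesting a new certificate and key pair
# (Fleet Provisioning's CreateKeysAndCertificate operation, JSON variant).
CREATE_KEYS_TOPIC = "$aws/certificates/create/json"

def register_thing_request(template_name, ownership_token, parameters):
    """Build the topic and payload for the RegisterThing operation, using the
    certificate ownership token returned by CreateKeysAndCertificate."""
    topic = f"$aws/provisioning-templates/{template_name}/provision/json"
    payload = json.dumps({
        "certificateOwnershipToken": ownership_token,
        # Template parameters, e.g. a serial number the template maps
        # to a thing name (hypothetical values).
        "parameters": parameters,
    })
    return topic, payload
```

The device publishes an empty payload to `CREATE_KEYS_TOPIC`, receives the new credentials plus an ownership token on the matching `accepted` topic, and then publishes the `register_thing_request` payload to complete provisioning.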
Up until now you were limited to a single endpoint under the AWS IoT domain assigned to you—but no more. Custom endpoints now allow you to keep your domain names consistent across applications, and they enable backwards compatibility with previous endpoint naming patterns. Notice I say endpoints—plural—because you can now use multiple instead of being limited to just one! Furthermore, the authentication mechanism can be customized for each endpoint.
Custom endpoints are long overdue, and when combined with custom authentication, will prove very compelling for customers still struggling to integrate their devices with AWS IoT. Gone are the days when devices had to conform to a single endpoint and a small set of authentication options.
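Custom endpoints are created through the AWS IoT `CreateDomainConfiguration` API. A minimal sketch of assembling that request with boto3, assuming a hypothetical `iot.example.com` domain, ACM certificate ARNs, and a custom authorizer named `MyAuthorizer`:

```python
def domain_configuration_params(config_name, domain_name,
                                server_cert_arn, validation_cert_arn,
                                authorizer_name=None):
    """Assemble keyword arguments for boto3's
    iot.create_domain_configuration(**params)."""
    params = {
        "domainConfigurationName": config_name,
        "domainName": domain_name,                   # e.g. iot.example.com
        "serverCertificateArns": [server_cert_arn],  # ACM cert presented over TLS
        "validationCertificateArn": validation_cert_arn,
    }
    if authorizer_name:
        # Attach a custom authorizer to this particular endpoint.
        params["authorizerConfig"] = {
            "defaultAuthorizerName": authorizer_name,
            "allowAuthorizerOverride": True,
        }
    return params
```

You would then call `boto3.client("iot").create_domain_configuration(**params)` once per endpoint, which is how each endpoint ends up with its own authentication configuration.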
Many IoT devices connecting to AWS IoT sit behind some type of firewall, which makes remote device management particularly difficult. With Secure Tunneling, customers can establish a tunnel managed by AWS IoT and communicate with a device through the same connection the device already maintains with AWS IoT.
One specific use case I’ve seen multiple times is IoT Devices sitting behind a cellular service carrier NAT gateway. Although the devices can communicate out to AWS IoT, it is hard to connect back to the devices for debugging and diagnostics unless the connection happens within the carrier’s private network. Secure Tunneling could provide a simple and effective mechanism for establishing this essential bi-directional communication.
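A tunnel is opened through the `iotsecuretunneling` service's `OpenTunnel` API. A minimal sketch of building that request with boto3, assuming a hypothetical thing named `field-gateway-01` that is only reachable through AWS IoT:

```python
def open_tunnel_params(thing_name, service="SSH", lifetime_minutes=60):
    """Assemble keyword arguments for boto3's
    iotsecuretunneling.open_tunnel(**params). The response carries a
    source and a destination access token for the local proxies on
    each side of the tunnel."""
    return {
        "description": f"debug tunnel to {thing_name}",
        "destinationConfig": {
            "thingName": thing_name,  # device behind the NAT/firewall
            "services": [service],    # e.g. SSH for shell-based diagnostics
        },
        "timeoutConfig": {"maxLifetimeTimeoutMinutes": lifetime_minutes},
    }
```

The operator side runs a local proxy with the source token while the device runs one with the destination token, giving the bi-directional channel described above without any inbound firewall rules.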
Support for local Lambda functions is already provided as a means to run local user-defined code on Greengrass Core devices. As an extension of this, Docker containers are now also an option for deploying local applications through the Greengrass service.
This will add another degree of flexibility for Greengrass development. All comparisons between Function-as-a-Service and Containers aside, containers provide the ability to have an accurate representation of deployed application behavior in the local environment, which can lead to a more seamless development experience for many teams.
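Greengrass deploys these applications as a Docker Compose file, so an existing containerized workload can be pushed to the edge largely as-is. A minimal, hypothetical example of the kind of compose file you would deploy (image name and environment variable are placeholders):

```yaml
version: "3.3"
services:
  telemetry-app:
    image: "myrepo/telemetry-app:1.0"  # hypothetical application image
    restart: always
    environment:
      - LOG_LEVEL=info
```

Because the same compose file can run on a developer workstation, the behavior you test locally is the behavior you deploy to the Core device.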
One of the primary goals of IoT edge devices is to aggregate data and make decisions locally before relaying that data to the backend. To accomplish this, data stream management has thus far been delegated to the custom applications running on the Greengrass Core device.
The new Stream Manager for Greengrass Core drastically reduces the amount of custom work required to handle these data streams by adding a standardized framework that includes features such as read and write mechanisms, data retention policies, automatic exports, and many others!
Because of the tremendous value it adds to any edge application, learning how to leverage Greengrass Stream Manager will be an essential skill for Edge Device application developers.
Above: An example Greengrass Stream Manager application
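On a Greengrass Core, the Stream Manager SDK expresses these settings with typed classes (`MessageStreamDefinition`, `StrategyOnFull`, `ExportDefinition`, `KinesisConfig`) passed to `StreamManagerClient.create_message_stream()`. Since that client only runs on-device, here is a plain-data sketch of an equivalent stream definition—all names and sizes are illustrative assumptions:

```python
def telemetry_stream_definition(stream_name, kinesis_stream_name,
                                max_size_bytes=256 * 1024 * 1024):
    """A local stream capped at max_size_bytes that overwrites its
    oldest data when full and exports automatically to an AWS Kinesis
    data stream (mirrors the Stream Manager SDK's typed definition)."""
    return {
        "name": stream_name,
        "max_size": max_size_bytes,                  # local retention policy
        "strategy_on_full": "OverwriteOldestData",   # drop oldest, keep newest
        "export_definition": {
            "kinesis": [{
                "identifier": "KinesisExport",       # hypothetical export id
                "kinesis_stream_name": kinesis_stream_name,
            }],
        },
    }
```

With a definition like this registered once, application code reduces to appending messages to the stream and letting Stream Manager handle retention and cloud export.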
Today, voice assistants are prevalent in smartphones and personal computers and can also be bought as standalone units with a sensible amount of memory and processing power (minimum requirements have thus far been 100MB of RAM and an ARM Cortex-A class processor). Smaller devices, such as those found in light switches or smart scales, were often too resource-constrained to run the Alexa Voice Service (AVS).
This week’s announcement removes that limitation by offloading most of the necessary compute to AWS, enabling AVS on systems with as little as 1MB of RAM running on ARM Cortex-M microcontrollers. This means we are now one step closer to telling our toaster to stop exactly when we think the bread has reached its optimal golden deliciousness…
If AWS plays this right, I believe this will be a game changer for the Internet of Things. Being able to verbally instruct a physical object instead of going through a computer intermediary is a colossal paradigm shift.
AWS IoT Services and Software are now even more accessible from all types of devices—whether they are restricted by hardware or network resources, have intermittent connections, or require legacy support. Because of that, the way we implement IoT ecosystems might just be on its way to becoming dramatically more streamlined. I get a feeling we’ll get much more on this theme next week! See you at re:Invent…
Want to learn more about how we choose and leverage the best IoT services for the job? Learn more here.