Cloud Architecture Design for IoT devices: microfluidic rheometers and machine learning

We want to design a cloud architecture to collect, process and respond to requests from IoT devices. These requests will come from devices in the same network (for instance, a hospital) and will provide raw data to be processed by a pre-trained model. The main idea of the cloud platform is to store all the measurements in one place so that the machine learning model can be improved easily.

The goal is to build a system that is robust and scalable, i.e. we need an infrastructure that can recover from failure and that can provide service to multiple devices.

Simplified overview of the architecture

DISCLAIMER: the following architecture has never been implemented nor deployed. It was thought of as a proof of concept.

Let's dive into the project

Functionalities

The cloud platform is designed to support our hybrid deployment, the IoT device measurements and the machine learning models. Therefore, the cloud platform should serve not only the IoT devices but also potential users and clients. The cloud platform's main functionalities are:

  • The IoT device layer: a Raspberry Pi and an ESP32 combined as the hardware, connected to AWS IoT Core.
  • A connectivity layer in IoT Core that manages the communication between the IoT device (RPi) and the cloud platform via MQTT, with individual authentication.
  • A pre-processing layer that saves data in a data lake and the corresponding databases and prepares it for subsequent processing (a minimal sketch follows this list).
  • A machine learning and processing layer that generates the results.
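
As a minimal sketch of that pre-processing step, the snippet below stores an incoming measurement in the S3 data lake and indexes it in a DynamoDB table using boto3. The bucket name, table name and record fields are assumptions made for illustration, not part of the original design.

```python
import json
import uuid
from datetime import datetime, timezone

import boto3

# Hypothetical resource names -- assumptions for illustration only.
DATA_LAKE_BUCKET = "rheometer-data-lake"
MEASUREMENTS_TABLE = "rheometer-measurements"

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table(MEASUREMENTS_TABLE)


def store_measurement(device_id: str, payload: dict) -> str:
    """Persist a raw measurement in the data lake and index it in DynamoDB."""
    measurement_id = str(uuid.uuid4())
    timestamp = datetime.now(timezone.utc).isoformat()
    key = f"raw/{device_id}/{timestamp}_{measurement_id}.json"

    # The raw message goes to the S3 data lake, so it can later be reused
    # to retrain and improve the machine learning model.
    s3.put_object(Bucket=DATA_LAKE_BUCKET, Key=key, Body=json.dumps(payload))

    # A lightweight record goes to DynamoDB for fast per-device lookups.
    table.put_item(
        Item={
            "device_id": device_id,
            "measurement_id": measurement_id,
            "timestamp": timestamp,
            "s3_key": key,
        }
    )
    return measurement_id
```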

Hybrid deployment and IoT Device

To provide this service, one option is to expose the model as a web endpoint deployed with gunicorn and MLflow. The app or service could be hosted on Heroku, a cloud platform that works well for Python applications. However, we want to keep everything with the same cloud provider to simplify maintenance.
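
As a rough sketch of that option (not taken from the original work), the model could be loaded with mlflow.pyfunc inside a small Flask app and served with gunicorn; the model URI, route and field handling are assumptions.

```python
# app.py -- minimal sketch of the model-as-a-web-endpoint option.
import mlflow.pyfunc
import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder model URI; assumes a model previously logged/registered with MLflow.
model = mlflow.pyfunc.load_model("models:/viscosity-model/Production")


@app.route("/predict", methods=["POST"])
def predict():
    # The request body carries the raw measurement(s) as JSON records.
    records = request.get_json()
    predictions = model.predict(pd.DataFrame(records))
    # Assumes a regression-style model returning one numeric value per record.
    return jsonify(predictions=[float(p) for p in predictions])


# Served with gunicorn, e.g.:  gunicorn -w 2 -b 0.0.0.0:8000 app:app
```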

On the other hand, we can also expose the models that process the requests as serverless functions. Serverless technologies free us from maintaining servers: the cloud platform is responsible for provisioning them, scaling out to match demand and managing load balancers according to the requests. The main concern with serverless technologies is start-up (cold-start) latency, because it can impact the customer experience.
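
A hypothetical sketch of the serverless variant: an AWS Lambda handler that receives the measurement through API Gateway, runs it through a model bundled with the function and returns the result. The model file and feature names are assumptions.

```python
# lambda_function.py -- hypothetical serverless counterpart of the web endpoint.
import json

import joblib

# Loaded once per container (outside the handler) so warm invocations reuse it.
# The model file name is an assumption for this sketch.
model = joblib.load("viscosity_model.joblib")


def handler(event, context):
    # With an API Gateway proxy integration, the request body arrives as a JSON string.
    measurement = json.loads(event["body"])

    # Hypothetical feature layout; the real pre-processing would live here.
    features = [[measurement["shear_rate"], measurement["temperature"]]]
    prediction = model.predict(features)[0]

    return {
        "statusCode": 200,
        "body": json.dumps({"viscosity": float(prediction)}),
    }
```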

Deployment of a new device

Each device needs to be loaded with the required firmware and the corresponding certificates to guarantee the security and confidentiality of communications. Then, the device needs to be connected to the facility's WiFi; the wireless credentials are the only device-specific settings that need to be configured to connect to the internet.

Once it is connected to the internet, a unique authentication certificate is downloaded through the MQTT connection service. From this point, with the certificate saved and an internet connection available, the device is ready to perform blood viscosity measurements, send them to the cloud platform for processing and receive the model results.
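
A minimal device-side sketch of that last step, assuming the Raspberry Pi publishes over MQTT with the paho-mqtt client (1.x API) and the certificates provisioned above; the endpoint, topic, file names and payload fields are placeholders.

```python
# publish_measurement.py -- sketch of the device-side publish to AWS IoT Core.
import json
import ssl

import paho.mqtt.client as mqtt

# Placeholders: the account-specific AWS IoT endpoint and the certificate files.
ENDPOINT = "xxxxxxxxxxxxxx-ats.iot.eu-west-1.amazonaws.com"
TOPIC = "rheometer/device-001/measurements"

# paho-mqtt 1.x style; with paho-mqtt 2.x, pass mqtt.CallbackAPIVersion.VERSION2 first.
client = mqtt.Client(client_id="device-001")
client.tls_set(
    ca_certs="AmazonRootCA1.pem",
    certfile="device-001-certificate.pem.crt",
    keyfile="device-001-private.pem.key",
    tls_version=ssl.PROTOCOL_TLSv1_2,
)

client.connect(ENDPOINT, port=8883)
client.loop_start()

# Illustrative payload; the real message would carry the raw viscosity measurement.
measurement = {"device_id": "device-001", "shear_rate": 120.0, "raw_signal": 4.1}
info = client.publish(TOPIC, json.dumps(measurement), qos=1)
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```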

Architecture Design

The architecture presented here relies on many services hosted and managed by AWS, which also makes it fairly expensive. Moreover, given the number of services, it is advisable to use Terraform to deploy them: Terraform allows writing the infrastructure as code using declarative configuration files.

Cloud Architecture Design

SIMPLIFIED (AND CHEAPER) ARCHITECTURE DESIGN

This simplified architecture avoids some of the AWS-hosted services by relying on trigger functionality around a single data lake (Amazon S3) and on custom code developed to process the messages received from the IoT devices.

Simplified Cloud Architecture Design
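
As a sketch of that trigger-based flow (names are illustrative), an AWS Lambda function subscribed to object-created events on the data lake bucket could load each new message and run the custom processing code:

```python
# process_message.py -- hypothetical Lambda triggered by new objects in the data lake.
import json

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    # An S3 event notification may batch several records; process each new object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        measurement = json.loads(body)

        # The custom processing (cleaning, feature extraction, calling the model)
        # would go here; for this sketch we only log the payload.
        print(f"Processing {key} from {bucket}: {measurement}")
```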

Services

AWS IoT Core

Supports device connections that use the MQTT protocol, allowing mutual authentication and encryption at all endpoints so that data is never exchanged between the devices and IoT Core without a verified identity.

AWS API Gateway

A service that acts as the entry point to the AWS cloud, managing network traffic and directing requests to specific resources based on the endpoint requested.

Amazon Cognito

Provides direct sign-in with a username and password through user pools and identity pools, along with authentication, authorization and user management for the application backend.

AWS Lambda

A serverless compute service that executes code in response to triggers and events.

Amazon S3

A cloud object storage service that, in our architecture, is used as a data lake for all the messages and logs.

Amazon DynamoDB

A fully managed NoSQL database.

Amazon Aurora

A relational database engine for high performance and scalability.

AWS SageMaker

A fully managed machine learning service for building and training machine learning models quickly and deploying them directly into a production environment.
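
As a hedged illustration of that build-train-deploy cycle with the SageMaker Python SDK (the training script, S3 path, IAM role and instance types are placeholders, not part of the original work):

```python
# train_and_deploy.py -- sketch of a SageMaker build/train/deploy cycle.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

# Train a scikit-learn model defined in an assumed training script (train.py).
estimator = SKLearn(
    entry_point="train.py",
    framework_version="1.2-1",
    instance_type="ml.m5.large",
    role=role,
    sagemaker_session=session,
)
estimator.fit({"train": "s3://rheometer-data-lake/training/"})

# Deploy the trained model behind a real-time inference endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
result = predictor.predict([[120.0, 37.0]])  # illustrative feature vector
```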

If you want to know more about this work, extend it or collaborate

Contact me

josep.ferre@fmc.ub.edu