In manufacturing, we see an increasing trend of solution providers applying cloud-native application development principles (aka modern app dev) to their environments, splitting existing monolithic solutions into smaller, modular systems.

The motivation is to benefit from a wide range of deployment scenarios, from SaaS in a public cloud managed by the solution provider, to local on-premises deployments, down to the edge. This places Kubernetes front and center as the universal platform for all these different manufacturing deployment options. Red Hat OpenShift in manufacturing use cases helps bring the consistency of operations at scale needed in edge deployments, not only for the deployment and management of infrastructure, but also of the applications built and run on it. To better understand how the manufacturing industry is evolving in its use of new technologies like AI/ML intelligent applications, let’s take a look at what the manufacturing edge is, and how single node OpenShift, the latest topology option added in version 4.9, can help.

The Manufacturing Edge

Where is the manufacturing edge? As with the majority of edge computing scenarios, this can vary by organization or use case. 

When we talk about edge computing in the context of a manufacturing plant, the “datacenter” could be a server rack (or a few racks) located in a corner of the body shop or an adjacent room. In a large automotive OEM plant, by contrast, it could look more like a small datacenter with a couple of rooms, emergency power supplies and so on. The workloads deployed at these locations are manufacturing execution systems (MES) or more general manufacturing operations management software. This software is used to plan production sequences, material flow, machine maintenance, quality control and so forth.

The plant datacenter plays an important role in the sense that all IT functions required to keep production up and running are usually located there. There is a trend to move more and more functions into public cloud services, but only functions that are not time-critical (e.g. workforce planning). Especially for bigger plants, the risk and cost of machine downtime due to a wide area network failure is too high - hence the need for edge computing.

As noted, when it comes to the different environments where you could find computing power in these remote locations, there is no one-size-fits-all scenario, so flexibility is key.

Figure 1 gives a deeper look at what is usually found in a manufacturing plant.

Figure 1. What's usually found at a manufacturing plant

The far edge on the left-hand side is a device, sensor or Programmable Logic Controller (PLC) integrated into a machine. These are usually very small devices with very limited computing power, but with low-latency control capabilities.

On the next level, there are servers located on the shop floor, inside or close to the assembly line or cell. Industrial PCs (IPCs) are used to run Human Machine Interfaces (HMIs) or Supervisory Control and Data Acquisition (SCADA) systems. This hardware is ruggedized to withstand the harsh conditions on the shop floor.

All of the aforementioned equipment is usually considered operational technology (OT), because it is crucial for the operation of the factory, plant or other manufacturing site. We are seeing OT converge with IT, gathering and processing sensor data to make better decisions and drive manufacturing operations efficiency while reducing costs and risk. Using standard IT technology also helps reduce cost and appliance sprawl.

This is a big shift, as the OT and IT spaces have traditionally been strictly separated. But it provides great new opportunities, especially when you can gather and process this data at the individual plant or production line to help catch potential manufacturing problems as soon as they emerge.

Why single node OpenShift in manufacturing?

Single node OpenShift is a full-blown Kubernetes distribution, but running on a single node. It joins OpenShift's other edge topologies, including three-node clusters and remote worker nodes, to provide manufacturing organizations with the flexibility to choose the right hardware footprint size and capabilities based on their edge environment. For the rest of this blog, we’re going to focus on single node OpenShift.

From a manufacturing perspective, we believe that single node OpenShift makes sense at the line server or small datacenter level.

Consider, for example, HMI and SCADA systems that are containerized or virtualized and deployed to assembly cells on the shop floor. Each assembly cell should be as independent as possible from the other cells, and in this environment high availability is not needed: the assembly cell itself defines the failure domain, and as there are multiple cells, it is not that severe if one fails.

A good example in the manufacturing space is Manufacturing Execution Systems (MES) or Manufacturing Operations Management (MOM) software, where providers are modularizing, containerizing and cloud-enabling their solutions.

The same holds true for other traditional shop floor solutions like HMI/SCADA, but also for recent Industry 4.0 add-ons like digital twins, device/asset management, IoT gateways, predictive maintenance and so on.

If the solutions follow modern app dev principles, you typically end up with a number of stateless microservices, but also stateful backing services. This is too big to run on a small edge device, but not big or critical enough to call for a three-node HA cluster, making single node OpenShift a great fit.

Also, you want to leverage Kubernetes orchestration capabilities, e.g. rolling update deployments, to add new features and functions without downtime - a much-needed capability, even on a single node cluster. The motivation here is that app/service updates, especially in an agile environment, are much more frequent than OS security updates to the platform.
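To illustrate the principle, here is a minimal Python sketch of what a rolling update does. It is not the actual Kubernetes controller logic, just a simulation: replicas are replaced in small batches so that serving capacity never drops to zero.

```python
# Minimal sketch of the rolling update idea -- NOT the real Kubernetes
# controller implementation, just an illustration of the principle.
def rolling_update(replicas, new_version, max_unavailable=1):
    """Replace replicas at most `max_unavailable` at a time and track
    the minimum number of serving replicas seen along the way."""
    pods = list(replicas)
    min_available = len(pods)
    for i in range(0, len(pods), max_unavailable):
        batch = range(i, min(i + max_unavailable, len(pods)))
        for j in batch:
            pods[j] = None  # old pod terminated
        min_available = min(min_available, sum(p is not None for p in pods))
        for j in batch:
            pods[j] = new_version  # new pod up and ready before moving on
    return pods, min_available

pods, min_available = rolling_update(["v1", "v1", "v1"], "v2")
print(pods, min_available)  # with 3 replicas, at least 2 stay available
```

Kubernetes exposes the same knob as the `maxUnavailable` field of a Deployment's rolling update strategy; the simulation just makes the availability guarantee explicit.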

Another capability that manufacturers may want at the edge is Kubernetes Operators. For example, they may want to use Operators to install and run stateful backing services (databases, event streaming or messaging services). This is fully supported on single node OpenShift and makes it much easier to run a message broker to receive MQTT data from the shop floor.
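As a sketch of what consuming that shop floor data might look like, the snippet below decodes a JSON sensor reading published on an MQTT-style topic. The topic layout and payload shape are hypothetical - real deployments will differ - and in practice you would subscribe through an MQTT client library (such as paho-mqtt) connected to the broker; the sketch sticks to the standard library so it stays self-contained.

```python
import json

# Hypothetical topic layout ("plant/line/cell/metric") and payload
# format -- illustrative only; a real shop floor will define its own.
def decode_sensor_message(topic: str, payload: bytes) -> dict:
    """Decode a JSON sensor reading from an MQTT-style topic/payload pair."""
    plant, line, cell, metric = topic.split("/")
    reading = json.loads(payload)
    return {
        "plant": plant,
        "line": line,
        "cell": cell,
        "metric": metric,
        "value": reading["value"],
        "timestamp": reading["ts"],
    }

msg = decode_sensor_message(
    "plant1/line3/cell7/vibration",
    b'{"value": 4.2, "ts": "2022-01-01T08:00:00Z"}',
)
print(msg["metric"], msg["value"])  # vibration 4.2
```

In a real setup, this function would be the message callback of the MQTT subscriber, with the broker itself installed and managed by an Operator on the single node OpenShift cluster.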

Last but definitely not least, the combination of single node OpenShift with Red Hat Advanced Cluster Management provides a single point of administration from which you can easily manage all remote locations and deployments, if needed.

Seeing is believing

To make this more tangible, take a look at the Red Hat validated pattern for industrial edge, as shown in figure 2. The validated pattern is a ready-to-deploy implementation of machine condition monitoring, which collects data at the machine level and uses ML inference at the factory level to detect critical machine conditions. The machine learning is done centrally in the cloud. The on-premises part in the factory brings a couple of components, like the frontend dashboard, MQTT message broker and ML-based anomaly detection (figure 2).
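To give a feel for the factory-level inference step, here is a deliberately simplified stand-in: the validated pattern uses a model trained centrally in the cloud, but a rolling z-score over recent sensor values shows the same shape of edge-side anomaly detection. All names and thresholds here are illustrative assumptions, not part of the pattern.

```python
import statistics
from collections import deque

# Illustrative stand-in for the ML inference step: flag a reading as
# anomalous if it deviates from the recent rolling window by more than
# `threshold` standard deviations. Thresholds/window sizes are made up.
class AnomalyDetector:
    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def is_anomalous(self, value: float) -> bool:
        anomalous = False
        if len(self.history) >= 5:  # wait for a few samples before judging
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

detector = AnomalyDetector(window=10)
for v in [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0]:
    detector.is_anomalous(v)      # steady vibration readings: no alarm
print(detector.is_anomalous(9.0))  # a sudden spike is flagged: True
```

In the validated pattern, this logic would sit behind the MQTT message broker on the factory cluster, consuming the machine-level data stream.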

But in a small factory or line station (aka the edge), all of this could run on a single node - a perfect use case for single node OpenShift! The same deployment and management principles (e.g. GitOps) can be used, whether it is a small single node OpenShift deployment or a full-blown high availability cluster.

In fact, the recently updated version 2.0 of the validated pattern includes this deployment option and tests for it.

Summary

Single node OpenShift is a welcome addition to the long list of OpenShift deployment options. It enables new use cases for small and resource-constrained solutions, while still providing the well-proven Kubernetes orchestration and management capabilities. Go check it out - we’ve documented how you can try single node OpenShift at console.redhat.com!


About the author

Daniel Fröhlich works as a Global Principal Solution Architect Industry 4.0 at Red Hat. He considers himself a catalyst to bring together the necessary resources (people, technology, methods) to make mission-critical projects a success. Fröhlich has more than 25 years of experience in IT. In the past years, Daniel has been focusing on hybrid cloud and container technologies in the industrial space.
