The classic pattern: a vendor advises a manufacturer on a holistic factory transformation. The engagement opens with a concept spanning all production lines, a cloud platform, a data lake, a central analytics team. Twelve months in, the concept is complete; not a single machine is connected.
The reverse sequence works better: one machine, three to five sensors, one concrete use case — often reduction of unplanned downtime or compliance evidence toward customers. When that first block delivers measurable results in three months, a second and third block emerge organically.
The underlying rule: demonstrate value before scaling. Anything else produces presentations, not outcomes.
An IIoT system typically has three layers that should be separated deliberately:
Sensor and control layer (OT). Physical sensors, PLCs, and machine controls sit here. Protocols are industry-native: OPC UA, Modbus, Profinet. The OT layer is not reachable from the public internet, and should not be.
Edge layer. A small industrial PC or a Raspberry Pi Compute Module in an industrial enclosure sits between OT and IT. It reads data over OPC UA or Modbus, filters, aggregates, and forwards via MQTT to the IT layer. This layer solves latency and bandwidth: not every sensor reading belongs in the cloud, only the condensed indicators.
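The filter-and-aggregate step of the edge layer can be sketched in a few lines. This is a minimal sketch, not production code: the machine ID, sample values, and topic name are invented, and in a real deployment the readings would come from an OPC UA client (for example the `asyncua` library) while the condensed payload would go out through an MQTT client (for example `paho-mqtt`).

```python
import json
import statistics

def condense(readings, machine_id):
    """Reduce a window of raw sensor readings to the condensed
    indicators worth forwarding upstream: count, min, max, mean."""
    return {
        "machine": machine_id,  # hypothetical machine ID
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(statistics.mean(readings), 2),
    }

# A real edge loop would poll the PLC over OPC UA and publish the
# condensed window via MQTT, roughly:
#   mqtt_client.publish("factory/press01/temperature", payload)
# Here only the condensing step runs, on sample temperature values.
window = condense([71.2, 71.4, 70.9, 72.1], "press01")
payload = json.dumps(window)
print(payload)
```

The point of the sketch: four raw values become one small JSON message, which is exactly the bandwidth reduction the layer exists for.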
Application layer (IT). Data lands in a time-series database (InfluxDB, TimescaleDB), visualizes in a dashboard (Grafana), and feeds analytics and alerting. This layer ideally resides in an EU region on a reliable cloud provider.
Three-layer separation is not academic elegance. It solves practical problems: cloud outage does not stop production, sensor failure isolates cleanly, scaling touches one layer at a time.
OPC UA is the European de facto standard for communication between machines and edge devices. Modern controllers — Siemens S7-1500, Beckhoff TwinCAT, Rexroth ctrlX — speak OPC UA natively. Older PLCs are reachable through OPC UA gateways or PLC adapters.
MQTT is the standard for edge-to-cloud transport. It is lightweight, resilient over unreliable connections, and scales from a handful to millions of messages per second. Brokers like EMQX, HiveMQ, or Mosquitto cover mid-market demand without trouble.
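MQTT organizes data in slash-separated topic hierarchies, and subscribers filter with the wildcards `+` (exactly one level) and `#` (all remaining levels). A simplified matcher illustrates the semantics; the topic scheme `site/machine/metric` is an assumption for the example, and edge cases such as `$`-prefixed system topics are ignored.

```python
def topic_matches(pattern, topic):
    """Match an MQTT subscription pattern against a concrete topic.
    '+' matches exactly one level, '#' matches all remaining levels."""
    p_segs, t_segs = pattern.split("/"), topic.split("/")
    for i, seg in enumerate(p_segs):
        if seg == "#":
            return True
        if i >= len(t_segs) or seg not in ("+", t_segs[i]):
            return False
    return len(p_segs) == len(t_segs)

# Hypothetical topic scheme: site/machine/metric
print(topic_matches("factory/+/temperature", "factory/press01/temperature"))
print(topic_matches("factory/#", "factory/press01/vibration"))
```

A dashboard can subscribe to `factory/+/temperature` for one metric across all machines, while an archiver takes `factory/#` wholesale; the broker does the routing.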
The OPC UA + MQTT combination is also the stack for which personnel are easiest to recruit. Training is available at any technical university; documentation is deep; community support is vendor-independent.
The initial entry does not require specialized industrial servers. An industrial PC with passive cooling (Beckhoff CX, Siemens IPC, B&R APC) or an industrial Raspberry Pi Compute Module in a DIN-rail housing covers the edge layer.
Typical selection criteria: passive (fanless) cooling, DIN-rail or cabinet mounting, an extended operating-temperature range, and a robust industrial power supply.
For a first pilot, simpler hardware often suffices — what matters is having a migration concept to industrial-grade gear when scaling arrives.
A realistic productive pilot flow:
Weeks 1–2: definition of the guiding use case (typically unplanned-downtime reduction or OEE measurement), machine selection, data-point clarification.
Weeks 3–4: hardware procurement, OPC UA connection to the selected machine, edge configuration.
Weeks 5–8: dashboard build, data pipeline stand-up, parallel-run testing.
Weeks 9–10: integration with existing alarm and shift systems, operator training.
Weeks 11–12: production cutover, first analysis, decision on next step.
After twelve weeks you do not have "Industry 4.0". You have a concrete decision based on real data. That is the difference from a two-year concept.
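The OEE use case named in the plan is arithmetically simple: OEE is the product of availability, performance, and quality. A sketch with hypothetical shift numbers:

```python
def oee(planned_min, downtime_min, ideal_cycle_s, total_units, good_units):
    """OEE = availability x performance x quality."""
    run_min = planned_min - downtime_min
    availability = run_min / planned_min           # share of planned time actually running
    performance = (ideal_cycle_s * total_units) / (run_min * 60)  # actual vs. ideal output
    quality = good_units / total_units             # first-pass yield
    return availability * performance * quality

# Hypothetical shift: 480 planned minutes, 60 minutes unplanned downtime,
# 30 s ideal cycle time, 700 parts produced, 665 of them good.
print(round(oee(480, 60, 30, 700, 665), 2))  # ~0.69
```

The value of measuring this continuously is not the number itself but the trend and the attribution: the three factors tell you whether to attack downtime, speed losses, or scrap first.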
Our model: a complimentary online scoping session to clarify the use case. Then a work contract with fixed price for a defined pilot scope (hardware integration, edge software, dashboard, training). Source code and usage rights to the customer at acceptance. Fifteen days of free post-delivery bug-fixing; optional maintenance thereafter. For downstream integration of machine data into ERP and business-management systems, or for later analytics with artificial intelligence and automation, the architecture extends modularly.
Production data stays in EU regions or — where the customer prefers — on-premises. The edge stack is openly documented, so a later partner change or internal take-over is feasible. No vendor lock-in by design.
If a pilot is on the agenda for the next six months, start with the use-case question: which decision should you be able to make on data in six months that you cannot make today? Book a complimentary online scoping session — we sketch a realistic pilot scope. The related read on AI integration in B2B products covers the natural follow-on step once the pipeline is stable, and the piece on custom ERP versus SAP for manufacturers addresses how the collected data flows into the back-office.
Do we need a data scientist for this? Not for a first pilot. Standardized dashboards and threshold-based alerting cover the entry demand. A data scientist earns their seat when you move into the prediction layer — predictive maintenance in the narrow sense.
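Threshold-based alerting of the kind that covers the entry demand can be as small as a lookup against per-metric limits. A sketch with invented metric names and limit values:

```python
def check_thresholds(reading, limits):
    """Compare one condensed reading against per-metric (low, high)
    limits; return the list of breached metrics."""
    alerts = []
    for name, value in reading.items():
        lo, hi = limits.get(name, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            alerts.append((name, value, (lo, hi)))
    return alerts

# Hypothetical limits for a press: temperature and vibration velocity.
limits = {"temp_c": (0, 80), "vib_mm_s": (0, 4.5)}
print(check_thresholds({"temp_c": 95.0, "vib_mm_s": 2.1}, limits))
```

In practice this logic lives in the alerting layer (Grafana alert rules do the same job declaratively); no model training is involved, which is the point.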
How do we connect older machines without native OPC UA? Either via retrofit sensors (external temperature, vibration, or current measurement) or via OPC UA gateways that translate older protocols. For many purposes, external sensors suffice — the point is not to understand the machine, but to measure its behavior.
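Retrofit vibration or current signals are typically condensed on the edge to a root-mean-square (RMS) value per sample window; velocity RMS, for example, is the severity indicator used in vibration standards such as ISO 10816. A minimal sketch with invented sample values:

```python
import math

def rms(samples):
    """Root-mean-square of a sample window: the usual condensed
    indicator for vibration and motor-current retrofit sensors."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

# Hypothetical vibration-velocity samples in mm/s from a retrofit sensor.
print(round(rms([2.1, 2.4, 1.9, 2.6, 2.2]), 2))
```

A rising RMS trend on an otherwise opaque machine is often all the first use case needs: behavior is measured without touching the controller.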
Which cloud provider is the right choice? AWS, Azure, Hetzner Cloud, and OVHcloud are all appropriate. The choice turns less on technology than on billing model, data-residency preference, and existing infrastructure. We typically recommend Hetzner Falkenstein or AWS Frankfurt for European customers with GDPR preference.
What does the first pilot cost? Hardware sits in the low four figures per machine. The meaningful effort lies in configuration, integration, and dashboard development. Concrete numbers come from the scoping session; flat estimates without use-case knowledge are not reliable.
How do we avoid vendor lock-in? By committing to open protocols (OPC UA, MQTT) and by building edge software in-house or on open-source foundations. Proprietary edge suites from large vendors are fast to adopt and expensive to replace — the decision is made on day one, not in year five.
© 2026 D'Cloud Software & Digital Agency. All rights reserved.