Introduction: The Visual Gateway to Critical Operations
In industrial control systems (ICS) and operational technology (OT) environments, the Human-Machine Interface (HMI) has always been the cognitive bridge between silicon logic and steel reality. Traditionally, HMIs served as passive dashboards—displaying sensor values, alarms, and process states for human supervision.
Today, that paradigm has fundamentally shifted.
As industrial networks move beyond the human loop toward autonomous AI agents, self-optimizing control systems, and predictive operations, the HMI has evolved into something far more powerful—and far more dangerous if compromised. The integration of Generative AI (GenAI) and spatial computing technologies (AR/VR/MR) is transforming HMIs into immersive, contextual, and decision-shaping systems.
But this evolution introduces a new and under-acknowledged threat domain:
The Visual Attack Surface — where the primary target is no longer the PLC logic, the historian, or even the data itself, but the operator’s perception of reality.
In this new landscape, seeing is no longer believing.
1. The Vulnerable Visual: GenAI and Telemetry Spoofing
Generative AI’s ability to synthesize hyper-realistic images, time-series data, and video streams introduces a direct and asymmetric threat to industrial monitoring environments. Unlike traditional data tampering, which often leaves statistical or temporal anomalies, GenAI enables deception that is statistically plausible, generated in real time, and shaped to match what a human operator expects to see.
1.1 Fabricated Normalcy
In a Visual Hijacking scenario, an attacker overlays a legitimate-looking telemetry feed onto the HMI while a physical attack unfolds unseen:
Pressure gauges remain “green” while a vessel is being over-pressurized
Temperature trends show gradual stability while thermal runaway begins
Video feeds are replaced with AI-generated “last known safe” frames
The result is not system failure but operator paralysis: the system appears healthy until, catastrophically, it is not.
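One possible counter-measure is to cross-check readings that physics couples together, so a gauge that stays "green" while an independently verified sensor climbs gets flagged. The Python sketch below is illustrative only: the sensor pairing, the constant-volume ideal-gas approximation, the baseline values, and the 15% tolerance are assumptions, not real calibration data.

```python
# One possible cross-check: readings that physics couples together must agree.
# The sensor pairing, the constant-volume ideal-gas approximation, the baselines,
# and the 15% tolerance are illustrative assumptions, not real calibration data.
def pressure_consistent_with_temperature(pressure_bar: float,
                                         temperature_c: float,
                                         baseline_p_bar: float = 5.0,
                                         baseline_t_c: float = 80.0,
                                         tolerance: float = 0.15) -> bool:
    """At roughly constant volume, pressure should scale with absolute temperature."""
    expected_p = baseline_p_bar * (temperature_c + 273.15) / (baseline_t_c + 273.15)
    return abs(pressure_bar - expected_p) / expected_p <= tolerance

# A pressure display frozen at its baseline while an independently verified
# thermocouple reports thermal runaway fails the consistency check.
print(pressure_consistent_with_temperature(5.0, 80.0))   # True: readings agree
print(pressure_consistent_with_temperature(5.0, 160.0))  # False: displayed value is suspect
```

The point is not the toy formula but the principle: the HMI should refuse to present a fabricated "normal" as coherent when physically coupled measurements contradict it.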
1.2 Phantom Alerts and Behavioral Manipulation
Conversely, attackers may weaponize false urgency:
High-confidence alarms generated at precisely the wrong moment
AI-crafted alerts mimicking known failure patterns
Contextual voice or text explanations urging “immediate corrective action”
The goal is not denial of service, but coercive control—tricking the operator into executing a harmful command under pressure.
1.3 Predictive Manipulation
As GenAI models are increasingly used to forecast system health and recommend maintenance actions, a new failure mode emerges:
Subverted training data produces maliciously “reasonable” predictions
Maintenance schedules are altered to induce fatigue, stress, or component failure
AI-recommended actions gradually erode safety margins without triggering alarms
This is long-con deception, optimized for delayed physical impact.
2. The “Confused Deputy” 2.0: When Interfaces Become Malicious Proxies
The classic Confused Deputy Problem arises when a trusted, privileged component is tricked by a less privileged party into misusing its authority on that party’s behalf. In modern OT environments, the HMI itself becomes the deputy.
2.1 Delegated Authority Gone Wrong
Modern HMIs are no longer passive displays. They:
Pre-validate commands
Translate human intent into machine-specific actions
Invoke AI-based “optimization” or “safety refinement” layers
If a GenAI component within the HMI is compromised, a legitimate operator instruction
“Reduce output slightly”
can be silently transformed into:
“Throttle cooling below safe minimums.”
The operator’s intent is intact. The execution is not.
2.2 Audience-Scoping Failure in Industrial Commands
In cloud security, audience-scoped tokens (for example, the aud claim in a JWT) ensure that a credential is accepted only by its intended recipient. HMIs lack equivalent rigor.
A secure HMI must guarantee that:
The command displayed to the operator
The command validated by the HMI
The command received by the PLC
are cryptographically identical and context-locked.
Without this, the operator authorizes one action while the system executes another—turning trust into liability.
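One way to approximate that guarantee is for the HMI to sign an envelope binding the exact text shown to the operator to its intended recipient and context, and for the PLC gateway to verify that envelope before execution. The Python sketch below uses a single shared HMAC key for brevity; the envelope fields, the key-provisioning story, and the 30-second freshness window are illustrative assumptions, not a reference design.

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared key, provisioned out-of-band to the HMI and the PLC gateway.
# A real deployment would use per-device asymmetric keys, not one shared secret.
SHARED_KEY = b"provisioned-out-of-band"

def sign_command(displayed_text: str, target_device: str, session_id: str) -> dict:
    """HMI side: bind the exact text the operator approved to its intended recipient."""
    envelope = {
        "displayed_text": displayed_text,  # exactly what was rendered to the operator
        "audience": target_device,         # the only device allowed to execute it
        "session_id": session_id,
        "issued_at": int(time.time()),
    }
    payload = json.dumps(envelope, sort_keys=True).encode()
    envelope["mac"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return envelope

def verify_command(envelope: dict, my_device_id: str, max_age_s: int = 30) -> bool:
    """PLC-gateway side: same bytes, same audience, still fresh."""
    body = {k: v for k, v in envelope.items() if k != "mac"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(envelope.get("mac", ""), expected):
        return False  # command text or context was altered after display
    if body["audience"] != my_device_id:
        return False  # audience-scoping failure: wrong recipient
    if time.time() - body["issued_at"] > max_age_s:
        return False  # stale envelope, possible replay
    return True
```

Because the MAC covers the exact rendered text, any "optimization" layer that silently rewrites the command invalidates the envelope, and the mismatch is caught before the PLC acts.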
3. Spatial Computing and the Risk of Visual Injection
Augmented and mixed reality promise dramatic efficiency gains in remote maintenance, training, and situational awareness. They also introduce spatial deception as a new attack vector.
3.1 Visual Displacement Attacks
In AR-assisted operations, digital overlays define what the operator believes they are touching:
Labels, arrows, and highlights guide physical interaction
Instructions are spatially anchored to real-world assets
A malicious overlay can subtly shift these anchors. The operator believes they are interacting with Breaker A—but physically toggles Breaker B. No malware touches the PLC. The damage is entirely cognitive.
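A defensive check worth sketching: treat the overlay’s spatial anchor as untrusted and compare it against a trusted registry of surveyed asset positions before rendering guidance. The asset IDs, coordinates, and 10 cm tolerance below are purely illustrative assumptions.

```python
import math

# Hypothetical surveyed positions for physical assets (site coordinates, metres).
# In practice this registry would be signed and loaded from a trusted source.
ASSET_REGISTRY = {
    "breaker_A": (12.40, 3.10, 1.55),
    "breaker_B": (12.40, 3.85, 1.55),
}

def anchor_is_trustworthy(asset_id: str, overlay_anchor: tuple,
                          tolerance_m: float = 0.10) -> bool:
    """Reject an AR overlay whose anchor has drifted from the asset's surveyed position."""
    surveyed = ASSET_REGISTRY.get(asset_id)
    if surveyed is None:
        return False  # unknown asset: never render guidance for it
    drift = math.dist(surveyed, overlay_anchor)
    return drift <= tolerance_m

# Example: an overlay claiming to label breaker_A but anchored where breaker_B sits.
print(anchor_is_trustworthy("breaker_A", (12.40, 3.85, 1.55)))  # False: displaced overlay
```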
3.2 Cognitive Overload as a Weapon
Attackers can deploy rogue AI agents to overwhelm an operator’s perceptual bandwidth:
Flooding AR displays with low-priority diagnostics
Introducing dense, irrelevant overlays during emergencies
Masking physical danger indicators such as smoke, vibration, or sparks
This is not hacking the machine—it is denying the human’s ability to perceive risk.
4. Mitigation: Strengthening the Bars of the Digital Cage
Securing next-generation HMIs requires abandoning the assumption that what is rendered is real.
4.1 Hardware Interlocks – The Final Authority
Physical Safety Instrumented Systems (SIS) must remain sovereign:
Independent of HMI software
Immune to AI-driven overrides
Capable of enforcing hard safety limits
The “big red button” must never be virtual.
4.2 Digital Twin Command Validation
Before execution, high-risk commands should be:
Simulated within a real-time digital twin
Evaluated against safety and physics constraints
Blocked automatically if violations emerge
This creates a pre-execution reality check, immune to visual deception.
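A minimal sketch of that gate, assuming a toy first-order thermal model in place of a real plant twin; the state variables, coefficients, and safety limits below are illustrative only.

```python
# Illustrative safety envelope and toy twin dynamics; a real deployment would use
# a validated plant model, not this sketch.
SAFETY_LIMITS = {"pressure_bar": (0.0, 12.0), "temperature_c": (-10.0, 180.0)}

def simulate(twin_state: dict, command: dict, horizon_s: int = 600, dt_s: float = 1.0) -> dict:
    """Roll a crude first-order model forward under the proposed command."""
    state = dict(twin_state)
    cooling = command.get("cooling_flow_pct", state["cooling_flow_pct"])
    for _ in range(int(horizon_s / dt_s)):
        heat_in = state["load_pct"] * 0.02   # heat added by the process
        heat_out = cooling * 0.015           # heat removed by the cooling loop
        state["temperature_c"] += (heat_in - heat_out) * dt_s
        state["pressure_bar"] = 1.0 + 0.05 * max(state["temperature_c"], 0.0)
    return state

def command_is_safe(twin_state: dict, command: dict) -> bool:
    """Block the command automatically if the predicted trajectory leaves the envelope."""
    predicted = simulate(twin_state, command)
    return all(lo <= predicted[k] <= hi for k, (lo, hi) in SAFETY_LIMITS.items())

# Example: an "optimization" that throttles cooling below safe minimums is rejected.
state = {"temperature_c": 80.0, "pressure_bar": 5.0, "load_pct": 60.0, "cooling_flow_pct": 70.0}
print(command_is_safe(state, {"cooling_flow_pct": 10.0}))  # False under this toy model
```

The validation happens on predicted plant behavior, not on anything the HMI renders, which is what makes it resistant to visual deception.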
4.3 Zero Trust for the Visual Layer
Telemetry must be treated as untrusted until verified:
Sensor-level cryptographic signing
End-to-end integrity validation
Provenance indicators embedded directly into the UI
If the HMI cannot verify the data, it must visibly declare uncertainty.
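As a sketch of what sensor-level signing and visible provenance could look like, the example below uses Ed25519 from the third-party cryptography package; the sensor ID, trust-store layout, and the VERIFIED/UNVERIFIED labels are assumptions for illustration, and key distribution is out of scope.

```python
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In reality the private key lives in the sensor or signing gateway, never in the HMI.
sensor_key = ed25519.Ed25519PrivateKey.generate()
SENSOR_PUBKEYS = {"PT-101": sensor_key.public_key()}  # the HMI's trust store

def publish_reading(sensor_id: str, value: float) -> dict:
    """Sensor side: sign the reading at the source."""
    payload = json.dumps({"sensor": sensor_id, "value": value,
                          "ts": time.time()}, sort_keys=True).encode()
    return {"payload": payload, "sig": sensor_key.sign(payload)}

def render_reading(msg: dict) -> str:
    """HMI side: label provenance explicitly instead of silently trusting the data."""
    data = json.loads(msg["payload"])
    pubkey = SENSOR_PUBKEYS.get(data["sensor"])
    try:
        if pubkey is None:
            raise InvalidSignature()
        pubkey.verify(msg["sig"], msg["payload"])
        tag = "VERIFIED"
    except InvalidSignature:
        tag = "UNVERIFIED - treat as suspect"
    return f'{data["sensor"]}: {data["value"]} [{tag}]'

print(render_reading(publish_reading("PT-101", 6.2)))
```

The essential behavior is the fallback: a reading that fails verification is still shown, but with its uncertainty declared on screen rather than hidden.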
4.4 Human-in-the-Loop Guardrails
AI may handle complexity, but humans must retain irreducible authority:
Manual overrides that bypass AI mediation
Out-of-band verification channels
Clear, auditable command provenance
The human should supervise the AI—not the other way around.
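One lightweight way to make command provenance auditable is a hash-chained log recording who approved which exact command over which channel, so any later edit or deletion breaks the chain. The class and field names below are illustrative; a production system would also anchor the chain to an external, write-once store.

```python
import hashlib
import json
import time

class ProvenanceLog:
    """Minimal sketch of an append-only, hash-chained command provenance log."""

    def __init__(self):
        self._entries = []
        self._prev_hash = "0" * 64

    def record(self, operator: str, command: str, channel: str) -> dict:
        entry = {
            "operator": operator,
            "command": command,   # the exact text the operator approved
            "channel": channel,   # e.g. "console" or "out-of-band phone confirm"
            "ts": time.time(),
            "prev": self._prev_hash,
        }
        entry_hash = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = entry_hash
        self._prev_hash = entry_hash
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Detect any after-the-fact edit or deletion in the chain."""
        prev = "0" * 64
        for e in self._entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = ProvenanceLog()
log.record("op-7", "Reduce output to 85%", "console")
print(log.verify())  # True until any entry is tampered with
```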