
One Display. Total Situational Awareness.

Depth: 26.80m
Water Temp: 10.6°C
Heading: 352° N
GPS Position: +28.3421°, −14.8917°
Dive Timer: 00:14:22 · Survey Active
RS-232: CONNECTED · HDMI IN: 1080p@60 · LATENCY: <2ms · OVERLAY: ACTIVE

Real-time overlay: the HDMI video input fused with custom input data into a single output signal.

Sub-2ms Latency
Deterministic real-time compositing. No buffering, no PC dependency.
📡
Multi-Source Input
RS-232 DB-9, USB, Ethernet — parse any structured data stream natively.
🔧
No-Code Config
Map data fields to widgets via GUI. Deploy overlays without writing code.
📺
Single HDMI Out
Replace multi-screen setups with one clean, fused display output.
Target Verticals
Built for mission-critical environments
🏥
Surgical / OR
Cataract & microscope procedures — fuse phaco data, vitals, and video into one annotated surgical feed.
🌊
ROV / Marine
Overlay NMEA telemetry, depth, heading, and sonar data on live underwater video.
🔭
Industrial Inspection
Annotate machine vision feeds with sensor metrics, pass/fail markers, and timestamps.
✈️
Aerospace / UAV
Real-time flight telemetry and payload data overlaid on camera downlinks.
🔬
R&D Laboratories
Instrument data streams synchronized and overlaid on microscope or test-bench video.
🎓
Training & Simulation
Record and replay annotated video sessions for procedural training and review.
Page 1 of 6 — Home
Hardware Platform
The VitalOverlay
VO-1 Engine

A compact, standalone FPGA-based overlay compositor. Ingests live HDMI and structured data, renders configurable widget layers in real time, and outputs a single synchronized fused display — no PC required.

Latency: <2ms · Resolution: 4K · PC Required: None · Widget Layers: 16
Download Datasheet
VitalOverlay VO-1 Hardware
Technical Specifications
Video I/O
Video Input: HDMI 1.4 (up to 4K@30)
Video Output: HDMI 1.4 Fused
Resolutions: 720p, 1080p, 4K
Frame Rates: 24 / 30 / 60 fps
Overlay Latency: <2ms
Color Space: ARGB 32-bit
Data Interfaces
Serial: RS-232 · True DB-9
Baud Rate: 1200 – 115200 bps
USB: USB 2.0 Host
Ethernet: 10/100 RJ-45
Protocols: CSV, NMEA, ASCII
Field Mapping: GUI Config Tool
Hardware
Processing: FPGA (deterministic)
Form Factor: Compact · fanless option
Power: 12V DC
Operating Temp: 0°C – 55°C
PC Required: No
OS Required: None
Overlay Engine
Widget Types: Text, Metrics, Graphs, Icons
Overlay Layers: Up to 16 simultaneous
Config Method: GUI / JSON
Timestamps: UTC / NTP sync
Rendering: ARGB compositing
Recording: Optional USB capture
Page 2 of 6 — Product
01 / 06
Surgical Operating Room
Primary Market · Cataract & Microscopy
Surgical procedures like cataract surgery involve a microscope feed, phacoemulsification data, irrigation/aspiration metrics, and vitals — each on a separate screen. VitalOverlay composites them into one annotated surgical feed, reducing cognitive load and streamlining the procedure.
Phaco Data
IOP Monitoring
Vitals Overlay
RS-232 Instruments
02 / 06
ROV / Marine Telemetry
Underwater Vehicles · Survey Ops
ROVs generate rich NMEA telemetry — depth, heading, pitch, roll, thrust — alongside live camera feeds. VitalOverlay parses NMEA streams directly and renders a customizable HUD on top of the video, eliminating the dedicated telemetry monitor.
NMEA 0183
Depth / Heading
GPS Position
Thrust Metrics
Depth: 26.8m · ↑ 0.00 m/s
Dive Timer: 00:14:22 · Survey Active
Heading: 352° N
Water Temp: 10.6°C · ↑ Warming
GPS Position: LAT +28.342183° · LON −14.891746° · FIX: 3D LOCK · SATS: 8
ROV-01 · Live Telemetry · Signal Strong
03 / 06
Industrial Inspection
Machine Vision · QA / QC Lines
Production line cameras capture defect imagery while PLC sensors stream dimensional and thermal data. VitalOverlay overlays pass/fail markers, tolerance readings, and timestamps in real time.
Pass/Fail Markers
PLC Serial Data
Thermal Values
Timestamp Burn-in
Line 4 · Station 7 · Vision Active
Dim. Tolerance: 0.12mm (Spec: ±0.20 mm) · PASS
Surface Defect: 0.3% (Threshold: 1.0%) · CLEAR
Zone A ✓ · Zone B !
Pass Rate: 98.4% · Defects/Hr: 3 · Cycle Time: 4.2 s · Status: RUNNING
04 / 06
Aerospace / UAV
Flight Systems · Payload Operators
UAV operators and payload specialists receive separate telemetry feeds from flight systems, gimbals, and cameras. VitalOverlay composites altitude, heading, GPS, and gimbal data directly onto the camera downlink.
GPS / AGL
Flight Telemetry
Gimbal Data
RS-232 Downlink
UAV-01 · Overlay Active · 09:47 UTC
Altitude AGL: 124m · ↑ +1.2 m/s
Ground Speed: 8.4 m/s · HDG: 247°
Gimbal: Pitch −12° · Yaw +4°
Battery: 78% · ≈ 14 min remaining
GPS: 51.5074°N 0.1278°W · Sats: 12 · Mode: Loiter
05 / 06
R&D Laboratories
Research Instruments · Test Benches
Lab instruments output continuous data streams alongside camera-captured experiments. VitalOverlay overlays measurement values, timestamps, and event markers directly onto microscope or test-bench video for clean, annotated captures.
Instrument Streams
Event Markers
Time-sync
Microscope Video
06 / 06
Training & Simulation
Procedural Training · Skills Labs
Annotated video feeds with embedded data make ideal training material. VitalOverlay can record the fully composited output for post-session review, enabling trainees and instructors to replay annotated procedural video with all instrument data intact.
Annotated Capture
Session Replay
Timestamp Markers
USB Recording
Page 3 of 6 — Use Cases
Deterministic.
Real-time.
No Compromises.

Precise FPGA pipeline — predictable, auditable, and certifiable. Every frame processed identically, every time.

VitalOverlay VO-1 System Architecture
Processing Pipeline
01
📥
Ingest
HDMI video and serial/network data streams captured simultaneously into dedicated hardware buffers.
Hardware DMA
02
🔍
Parse
Structured data (CSV, NMEA, ASCII) parsed in hardware. Fields identified and mapped to named channels.
CSV · NMEA · ASCII
03
🎛️
Map
Parsed channels bound to overlay widgets via GUI configuration tool. No coding required.
JSON Config
04
🖼️
Composite
ARGB overlay plane rendered frame-synchronous with incoming video. Widgets draw atop live feed.
ARGB Plane
05
📤
Output
Fused HDMI stream output with end-to-end latency under 2ms. Clean and recorder-compatible.
<2ms · HDMI 1.4
FPGA-Based, Not CPU
Unlike software-based overlays running on a PC, VitalOverlay uses an FPGA to process video and data in dedicated hardware logic. Frame-deterministic timing, zero jitter, and no OS scheduling delays that cause dropped frames or latency spikes.
Deterministic Timing
Structured Data Parsers
The VO-1 includes native parsers for CSV, NMEA 0183, and generic delimited ASCII formats. Fields are identified by position or label and mapped to widget instances, and custom delimiters are configurable without firmware changes.
No Firmware Change Required
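As a concrete illustration of position-based field mapping, here is a minimal Python sketch. The field names, positions, and record format are invented for this example; the VO-1's actual parser runs in FPGA logic, not software.

```python
# Hypothetical field map: channel name -> field position in the record.
# These names and positions are illustrative only, not the VO-1's schema.
FIELD_MAP = {"depth_m": 1, "heading_deg": 2, "temp_c": 3}

def parse_record(line: str, delimiter: str = ",") -> dict:
    """Split one delimited ASCII record and map positions to named channels."""
    fields = line.strip().split(delimiter)
    return {name: fields[pos]
            for name, pos in FIELD_MAP.items()
            if pos < len(fields)}

# A sample telemetry record: id, depth, heading, water temperature.
channels = parse_record("ROV01,26.80,352,10.6")
print(channels)  # {'depth_m': '26.80', 'heading_deg': '352', 'temp_c': '10.6'}
```

Changing the delimiter or the field map is a configuration change rather than a code change, which is the point of the "no firmware change required" claim above.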
ARGB Overlay Compositing
Widgets rendered onto a transparent ARGB plane composited per-pixel with live video in hardware. Per-pixel alpha allows partial transparency, enabling clean HUD elements that don't obscure the underlying video content.
32-bit ARGB
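The per-pixel blend the compositor performs can be written as out = α·fg + (1 − α)·bg for each color channel. A one-pixel Python sketch of that arithmetic (the hardware applies it to every pixel of every frame; this only shows the formula):

```python
def blend_pixel(fg_argb, bg_rgb):
    """Alpha-blend one (A, R, G, B) overlay pixel over one (R, G, B) video pixel."""
    alpha = fg_argb[0] / 255.0  # 8-bit alpha channel -> 0.0..1.0
    return tuple(round(alpha * f + (1 - alpha) * b)
                 for f, b in zip(fg_argb[1:], bg_rgb))

# Half-transparent white HUD element over black video -> mid grey:
print(blend_pixel((128, 255, 255, 255), (0, 0, 0)))  # (128, 128, 128)
# Fully opaque widget pixel simply replaces the video pixel:
print(blend_pixel((255, 10, 200, 30), (5, 5, 5)))    # (10, 200, 30)
```

An alpha of zero leaves the video pixel untouched, which is what makes non-obstructive HUD elements possible.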
No-Code Configuration
The VitalOverlay Config Tool (desktop app) provides a drag-and-drop canvas where operators map data fields to widget positions, set display formats, thresholds, and color rules. Configs stored as JSON and deployed over USB.
Desktop Config App
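The page does not publish the VO-1's config schema, but a JSON profile of this kind might plausibly look like the sketch below. Every key name here is invented for illustration, not the actual format:

```json
{
  "profile": "cataract_default",
  "input": {
    "serial": { "interface": "RS-232", "baud": 115200, "protocol": "CSV" }
  },
  "widgets": [
    { "type": "metric", "field": "iop_mmhg", "label": "IOP",
      "x": 40, "y": 40,
      "threshold": { "warn_above": 30, "color": "#FFB000" } },
    { "type": "graph", "field": "vacuum_mmhg", "label": "Vacuum",
      "x": 40, "y": 160 }
  ]
}
```

Storing layout, field bindings, and threshold rules as declarative data is what makes USB deployment and power-on restore straightforward.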
Page 4 of 6 — Technology
What We Believe
Our commitment to every customer.
VitalOverlay is more than a device — it is a long-term collaboration. We work alongside our customers to ensure the system fits their workflow perfectly, not the other way around.
🤝
True Partnership
We don't hand you a box and walk away. From initial scoping through deployment and beyond, VitalOverlay engineers work directly with your team — understanding your environment, your constraints, and your goals. Your success is our metric.
Direct Engineering Access · Dedicated Onboarding · Ongoing Support
Dependable Service
Every deployment is treated as mission-critical. We provide responsive technical support, proactive firmware updates, and guaranteed response SLAs — because downtime in your field is never acceptable. Built-in reliability, backed by people who care.
Fast Response SLAs · Firmware Updates · Remote Diagnostics
🎨
Full Customization
Every organization has its own visual language. We support custom data inputs, bespoke overlay layouts, and branded on-screen graphics — including your own logos, icons, color schemes, and HUD elements — so the overlay feels native to your workflow, not generic.
Custom Data Inputs · Branded Logos & Icons · Custom HUD Layouts · Bespoke Color Schemes
Our Story
Built by
engineers
who hated
extra screens.

VitalOverlay started with an observation in an operating room. During a cataract surgery, a surgeon had to glance between the microscope feed, the phacoemulsification machine display, and a separate vitals monitor. Three screens. Three cognitive context switches. Every single procedure.

We asked: what if all of that lived in one display — without rewriting any software, without replacing any equipment, and without relying on a PC that could crash at the worst moment?

The answer is the VO-1: a deterministic, hardware-based overlay compositor that ingests your existing video and instrument data, and outputs one clean, annotated feed. Built for environments where reliability is not optional.

Our design philosophy is simple: deterministic over probabilistic. Hardware over software. Standalone over PC-dependent. Configurable without code.

Page 5 of 6 — About
Let's talk about
your workflow.
📍
Location: San Jose, California, United States
Page 6 of 6 — Contact
VitalOverlay Research & Insights
Industry Deep Dives.
Hardware-layer display overlay across surgical, maritime, industrial, and defence sectors
Surgical Series
The Operating Room Still Has a Display Problem.
April 2026 · 8 min read · Surgical / OR
During a typical cataract surgery, a surgeon's eyes travel between four separate displays. The information exists — it just isn't in the right place.
Maritime Series
One Screen Below the Surface. ROV Display Overlay.
April 2026 · 7 min read · ROV / Marine
An ROV pilot managing a subsea inspection holds vehicle video, NMEA telemetry, and sonar returns on three separate screens. Display overlay removes the reconstruction step.
Industrial Series
Machine Vision Without Data Blindness.
April 2026 · 7 min read · Industrial Inspection
A quality engineer watching a production line manages a machine vision camera feed on one monitor and a PLC sensor data readout on another. Display overlay closes that gap.
Defence Series
Decision Superiority Starts with Situational Awareness.
April 2026 · 10 min read · Defence / Military
In high-tempo operations the decisive advantage is not firepower — it is the speed and accuracy of decision-making. Display fragmentation is an enemy of decision superiority.
Training Series
The Most Effective Teaching Tool Was Missing the Data.
April 2026 · 7 min read · Training & Simulation
Surgical training videos show what the expert did — but not what the instruments were doing. Annotated overlay recordings make the complete expert picture visible to trainees.
R&D Series
When the Data and the Image Live on Different Screens.
April 2026 · 7 min read · R&D Laboratory
Microscope feeds and instrument data on separate displays force post-experiment synchronisation. Hardware overlay composites both in real time — producing annotated scientific records instantly.
Aerospace Series
One Display to Rule the Sky. UAV Telemetry Overlay.
April 2026 · 7 min read · Aerospace / UAV
UAV payload operators manage video on one screen and telemetry on another. Display overlay fuses altitude, GPS, heading, and airspeed directly into the video feed — no GCS changes required.
VitalOverlay Blog · Surgical Series

The Operating Room Still Has a
Display Problem.
Here's How to Fix It.

April 2026 · 8 min read · Surgical / OR

During a typical cataract surgery, a surgeon's eyes travel between a microscope eyepiece, a phacoemulsification machine display, a patient vitals monitor, and sometimes a fourth screen showing irrigation and aspiration flow. That's four cognitive context switches — every single procedure, all day long. The information exists. It just isn't in the right place.

The Problem Nobody Talks About

The modern operating room is extraordinarily well-equipped. Phacoemulsification machines from Alcon, Johnson & Johnson Vision, and Bausch + Lomb stream real-time data on intraocular pressure, ultrasound power, vacuum levels, and fluid dynamics. Anaesthesia systems monitor heart rate, SpO2, and blood pressure continuously. Surgical microscopes and endoscopes produce stunning high-definition video feeds.

Every piece of information a surgeon needs is being generated, in real time, somewhere in that room. The problem is that it lives on four separate displays, each requiring a deliberate shift of attention to read.

This isn't a minor inconvenience. In high-stakes, time-sensitive procedures, every unnecessary eye movement is a moment of divided attention. Every glance away from the surgical field is a cognitive break in the most demanding environment in medicine.

"The data I need is right there — I just have to look away from my patient to see it. That's the problem we've always accepted as normal."

Why Ophthalmology Feels It First

Cataract surgery is the most commonly performed elective surgical procedure in the world — over 3 million procedures annually in the United States alone. It is also, paradoxically, one of the most data-intensive. A phacoemulsification surgeon simultaneously manages:

  • Phaco power — ultrasound energy delivered to emulsify the lens (typically 0–100%, modulated in real time)
  • Vacuum levels — aspiration pressure holding the lens fragment (measured in mmHg)
  • Irrigation and aspiration flow — fluid balance critical to maintaining anterior chamber stability
  • Intraocular pressure (IOP) — fluctuations can cause serious intraoperative complications
  • Patient vitals — heart rate, SpO2, and blood pressure, streamed from the anaesthesia workstation
  • Procedure timer — total ultrasound time is a key outcome metric

All of this while operating through a surgical microscope at sub-millimetre precision, in a field roughly the size of a thumbnail.

The conventional solution has been to place the phaco console display within peripheral vision of the surgeon. In practice, this means a machine screen off to the side, partially obstructed by drapes and equipment, requiring a full head turn to read clearly. Scrub technicians and circulating nurses manage the settings they can see; the surgeon manages the incision they can see. Nobody has a single, unified view of the procedure.

What surgical video overlay changes

VitalOverlay's VO-1 engine ingests the endoscope or microscope video feed and the phaco machine's serial data stream simultaneously. It composites them into a single HDMI output — the surgical video, with phaco parameters, IOP, vitals, and procedure timer overlaid directly on the image in configurable, non-obstructive widget panels.

No software changes to the phaco machine. No PC in the signal path. No buffering. The overlay is generated in hardware, deterministically, with frame-accurate timing.

Other Surgical Areas Where This Matters

Ophthalmology may feel the display fragmentation most acutely, but it is far from alone. Any surgical discipline that combines a video feed with real-time instrument data is a candidate for display overlay — and that list is longer than most people expect.

🔬
Vitreoretinal Surgery
Pars plana vitrectomy involves real-time infusion pressure management, cutter speed, and light intensity alongside the retinal video feed. Surgeons operating on detachments, macular holes, and proliferative diabetic retinopathy benefit from having infusion and instrument parameters visible within the same display as the retinal image — particularly during critical membrane-peeling steps where eyes cannot leave the field.
Infusion Pressure · Cutter Speed · Light Intensity
🫀
Cardiac & Vascular Surgery
Endovascular procedures and minimally invasive cardiac interventions are conducted almost entirely under fluoroscopic or endoscopic guidance. Haemodynamic data — arterial pressure, cardiac output, oxygen saturation — streams continuously from perfusion and monitoring systems. Having these parameters visible as an overlay on the procedural video reduces the "heads-up/heads-down" switching that currently characterises these procedures.
Haemodynamics · Fluoro Overlay · Perfusion Data
🦷
Oral & Maxillofacial Surgery
Endoscopic-assisted jaw reconstruction, implant placement, and craniofacial procedures increasingly involve camera-guided navigation alongside planning system data. Overlay of navigation coordinates, implant positioning metrics, and patient identifiers on the intraoral camera feed improves verification without requiring secondary screen consultation.
Navigation Data · Implant Metrics · Intraoral Camera
🧠
Neurosurgery
Fluorescence-guided tumour resection, electrophysiology monitoring during spine procedures, and endoscopic approaches to skull base pathology all generate continuous data streams alongside the operative video. Neuromonitoring alerts — SSEP, MEP changes — currently require a separate technician and a separate screen. Bringing these into the operative field as a configurable overlay is a meaningful safety and workflow enhancement.
Neuromonitoring · Fluorescence · Electrophysiology
🩺
Laparoscopic & Robotic Surgery
General, gynaecological, and urological laparoscopic procedures are already screen-driven — the surgeon operates by watching a monitor. Adding patient vitals, insufflation pressure, and CO₂ flow as non-obstructive overlays on the laparoscopic feed is a natural extension that requires no change to the laparoscopic tower, the insufflator, or the anaesthesia cart.
Insufflation Pressure · CO₂ Flow · Patient Vitals

Who Benefits — and How

A display change in the OR is never just a technology decision. It affects every person in the room and every stakeholder in the purchasing chain.

Surgeon
Eyes stay on the patient
When phaco parameters and vitals are visible within the surgical field, the surgeon's gaze stays where it belongs — on the patient. This is most meaningful during the most demanding phases of a procedure: nucleus cracking, cortex removal, and IOL insertion in cataract surgery.
Scrub Nurse / Scrub Tech
Shared situational awareness
With overlay, both surgeon and scrub tech see the same data simultaneously. Verbal confirmation of settings becomes verifiable rather than assumed. This reduces communication burden during critical steps.
Circulating Nurse
Faster anomaly detection
When vitals and machine parameters appear on the main surgical display, the circulator's attention is drawn to the same anomalies the surgeon sees — without requiring a separate journey to the monitoring station.
Anaesthesiologist
Procedural context without interruption
Having procedure context visible without requiring verbal updates from the surgeon reduces disruptive communication during sensitive steps. Phaco power use signals which phase of cataract surgery is underway.
OR Manager
Workflow and throughput
For a high-volume cataract centre performing 20–40 cases per day, even a 2–3 minute reduction in average procedure time represents meaningful capacity gain. VitalOverlay requires no OR renovation or downtime.
Surgical Resident / Fellow
Accelerated learning
When parameters are visible on the same screen as the surgical view, the correlation is immediate and intuitive. Recorded cases with overlay data become the most effective teaching material available.

The Technology Argument for Hardware-Layer Overlay

Software-based overlay approaches — applications that capture video, process it through a PC, and re-output it — exist, and they fail in the OR for a predictable set of reasons. A PC in the signal chain introduces buffering latency (typically 3–8 frames), adds a potential failure point in a sterile environment, requires OS maintenance and update cycles, and occupies valuable cart space.
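The arithmetic behind that buffering figure is worth making explicit. At 60 fps a frame lasts 1000/60 ≈ 16.7 ms, so a 3-8 frame software pipeline adds roughly 50-133 ms of delay, against a sub-2 ms hardware path:

```python
FPS = 60
frame_ms = 1000 / FPS       # ~16.67 ms per frame at 60 fps
sw_low = 3 * frame_ms       # 3-frame buffer: 50.0 ms
sw_high = 8 * frame_ms      # 8-frame buffer: ~133.3 ms
hw_ms = 2.0                 # hardware overlay worst case (<2 ms)

print(f"software pipeline: {sw_low:.0f}-{sw_high:.0f} ms")  # 50-133 ms
print(f"hardware overlay: <{hw_ms:.0f} ms, under one frame")
```

In other words, the hardware path finishes well inside a single frame period, while a software pipeline is several frames behind before the image even reaches the display.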

VitalOverlay's VO-1 is an FPGA-based hardware compositor. The overlay is generated deterministically in silicon — no operating system, no software stack, no buffers. It accepts the surgical video input and produces a composited HDMI output in a single processing cycle. The result is a device that behaves like a cable: it either works or it doesn't, and when it works, it works the same way every time.

The overlay is generated in hardware, deterministically, with frame-accurate timing. No OS. No buffers. It behaves like a cable — and in the OR, that reliability is the entire value proposition.

What Implementation Actually Looks Like

VitalOverlay connects between the existing video source (microscope camera output or endoscope tower) and the existing display. The phaco machine's RS-232 or USB data port — already present on every major commercial phaco system — feeds into the VO-1's instrument input. The VO-1 outputs a single HDMI signal carrying the composited feed.

Configuration is done once, by a technician or biomedical engineer, using VitalOverlay's no-code GUI: drag the IOP widget here, the vacuum gauge there, the vitals bar along the bottom. Save the profile. The system applies it every time the device is powered on.

The Display Problem Has a Simple Answer

The operating room has not had a display problem because the information didn't exist. It has had a display problem because the information was fragmented across equipment that was never designed to talk to each other, in an environment where the cost of divided attention is measured in patient outcomes.

Surgical video overlay is not a sophisticated technology. It is a simple idea that has been waiting for the right hardware to make it viable without compromise. That hardware exists now.

See It in Your OR

VitalOverlay works with your existing phaco system, microscope, and displays. No software changes. No equipment replacement. Request a demo or an engineering sample for your facility.

VitalOverlay Blog · Maritime Series

One Screen Below
the Surface.
ROV Display Overlay.

April 2026 · 7 min read · ROV / Marine

An ROV pilot managing a subsea inspection holds the vehicle's video feed on one monitor, the NMEA telemetry — depth, heading, GPS position, thrust — on a second, and sonar returns on a third. Three screens, three different data sources, and the pilot is expected to build a unified picture of what the vehicle is doing fifty metres below the surface. Display overlay removes the reconstruction step entirely.

The ROV Pilot's Attention Problem

Remotely operated vehicle operations are fundamentally a display-management task. The pilot cannot feel the water, cannot sense the current, and has no proprioceptive feedback from the vehicle. Every piece of information about the ROV's state — its orientation, its depth, its thrust output, its distance from the seabed — arrives through a display. When that information is fragmented across multiple screens, the pilot's situational awareness depends entirely on their ability to mentally integrate what they are seeing across all of them simultaneously.

For routine operations in calm, well-mapped environments, experienced pilots manage this integration automatically. For operations in strong currents, near complex infrastructure, or in low-visibility conditions, the cognitive overhead of cross-referencing three screens is a meaningful performance constraint. Reaction time suffers. Precision suffers. The margin for error narrows exactly when the environment demands the most.

"In a strong current near a jacket leg, I need my eyes on the video. Every time I glance at the telemetry screen, the vehicle moves. Display overlay means I see both at the same time."

What Changes with a Single Overlaid Display

VitalOverlay's VO-1 engine accepts the ROV's camera output over HDMI and parses its NMEA 0183 telemetry stream directly over RS-232 or USB. It composites depth, heading, GPS coordinates, speed through water, and thrust metrics onto the video feed as configurable, non-obstructive widget panels — and outputs a single HDMI signal to the pilot's primary monitor.

The pilot's eyes stay on the video. The telemetry is there, in the same frame, updated at the same rate as the vehicle's sensors report it. No head movement. No context switch. No mental integration required.

What the VO-1 overlays on ROV video

Depth (m / ft), heading (degrees), GPS position (lat/lon), altitude above seabed, speed through water, thrust vector indicators, navigation bearing to waypoint, surface vessel heading and position, and sonar returns where a video-compatible sonar output is available.

All parameters are parsed from the vehicle's existing NMEA 0183 or proprietary telemetry stream. No modifications to the ROV's control system or software are required.
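For readers unfamiliar with NMEA 0183, the parsing step is easy to sketch: each sentence carries comma-delimited fields and a checksum equal to the XOR of every character between `$` and `*`. A minimal Python illustration using a depth-below-transducer (DBT) sentence, following the protocol's published layout (this is not the VO-1's internal implementation, which runs in hardware):

```python
def nmea_checksum_ok(sentence: str) -> bool:
    """Validate an NMEA 0183 sentence: XOR of all chars between '$' and '*'."""
    body, _, given = sentence.strip().lstrip("$").partition("*")
    calc = 0
    for ch in body:
        calc ^= ord(ch)
    return given.upper() == f"{calc:02X}"

def parse_dbt_depth_m(sentence: str) -> float:
    """Return depth in metres from a $--DBT sentence (metres are field 3)."""
    if not nmea_checksum_ok(sentence):
        raise ValueError("bad NMEA checksum")
    fields = sentence.split("*")[0].split(",")
    return float(fields[3])

depth = parse_dbt_depth_m("$SDDBT,8.1,f,2.4,M,1.3,F*0B")
print(depth)  # 2.4
```

The checksum step matters offshore: a corrupted serial frame should be dropped, not rendered as a plausible-looking depth.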

Beyond the Pilot: Where ROV Display Overlay Creates Value

🌊
Offshore Oil & Gas Inspection
Subsea pipeline and riser inspection requires continuous correlation between video observations and positional data. Inspectors logging anomalies need GPS coordinates, depth, and heading visible alongside the camera feed to record accurate defect locations. With overlay, all of this information appears in the same frame — simplifying anomaly reporting and eliminating the need to cross-reference separate telemetry logs after the dive.
Pipeline Inspection · GPS Position · Depth Logging
Naval Mine Countermeasures
Mine countermeasure ROV operations require precise positional awareness in environments where GPS accuracy degrades and acoustic positioning systems provide the primary navigation reference. Overlaying USBL position, vehicle heading, and depth onto the camera feed gives the operator a single unified display for both identification and prosecution tasks.
USBL Position · Acoustic Nav · Mine Countermeasures
🔍
Offshore Wind & Renewable Infrastructure
Monopile, transition piece, and cable inspection for offshore wind farms combines high-definition video with structural survey data. Overlay of water column depth, current speed and direction, and proximity measurements allows inspection engineers to correlate visual observations with environmental conditions at the time of inspection.
Monopile Inspection · Current Data · Proximity Sensors
🐟
Scientific & Research Surveys
Marine biology and geological survey ROV operations require annotated video records. When depth, temperature, salinity, and GPS position are overlaid on the camera feed and recorded simultaneously, the resulting footage is immediately usable for scientific reporting without requiring post-processing synchronisation of separate data logs and video files.
CTD Data · Geo-referenced Video · Scientific Survey

Who Benefits

ROV Pilot
Eyes on the vehicle, always
The pilot's reaction time improves when depth, heading, and altitude are visible within the primary video feed. Critical avoidance manoeuvres near infrastructure happen faster when the pilot doesn't need to glance at a secondary screen to confirm vehicle state.
Offshore Supervisor
One screen tells the whole story
Supervisors observing ROV operations from the control van want to see what the pilot sees — with all the context. A single overlaid feed means supervisors and clients share the same situational picture as the pilot without requiring a separate telemetry display.
Inspection Engineer
Annotated records without post-processing
Inspection engineers who receive overlaid video recordings have GPS position, depth, and heading embedded in every frame. Anomaly location reporting becomes immediate and accurate. There is no need to synchronise video timestamps against separate telemetry logs.
ROV Manufacturer / Integrator
A differentiated system offering
ROV manufacturers and system integrators who include VitalOverlay as part of their control system package offer a meaningfully better operator experience without redesigning their software stack. The VO-1 adds capability without adding integration risk.

The Technical Case for Hardware-Layer Overlay

Software-based telemetry overlay solutions exist for ROV operations. They typically run on a PC inserted into the video chain, introduce variable latency from operating system scheduling, and require maintenance and update cycles that are incompatible with offshore deployment rhythms. When a vessel is on location in the North Sea and the overlay software crashes, there is no IT department to call.

The VO-1 is an FPGA-based hardware compositor. It has no operating system, no software stack to crash, and no buffers to introduce latency. It either produces the overlaid output or it doesn't — and its behaviour is identical on day one and day five hundred. For offshore applications where equipment reliability is a contractual requirement, this is the only architecture that makes sense.

Built for the Water. Ready for Your ROV.

VitalOverlay works with any ROV system that outputs HDMI video and NMEA telemetry. No modifications to your existing system required.

VitalOverlay Blog · Industrial Series

Machine Vision Without
Data Blindness.

April 2026 · 7 min read · Industrial Inspection

A quality engineer watching a production line manages a machine vision camera feed on one monitor and a PLC sensor data readout on another. When a defect appears on the camera, the dimensional and thermal readings that explain it are on a different screen. By the time the correlation is made, the part is three stations downstream. Display overlay closes that gap — putting every data source in the same frame, at the moment it matters.

The Production Line's Visibility Gap

Modern manufacturing lines generate more data than any previous generation of production technology. Vision systems detect surface defects at micrometre resolution. PLC networks stream dimensional tolerances, temperature gradients, torque measurements, and cycle times in real time. Automated inspection stations flag anomalies faster than any human observer could.

The problem is not a shortage of data. It is a shortage of contextualised data — information presented at the right time, in the right place, to the person who needs to act on it. When the camera feed and the sensor data live on separate displays, correlation is manual, slow, and dependent on operator experience.

For high-throughput lines where decisions must be made in seconds, the display gap is a throughput bottleneck.

"Our QA operator could see the defect on the camera or the tolerance deviation in the PLC readout — never both at once. By the time they correlated the two, the part was gone."

What Overlay Looks Like on a Production Floor

VitalOverlay's VO-1 connects between the machine vision camera's video output and the operator's existing monitor. It simultaneously parses the PLC serial data stream — RS-232, USB, or Ethernet — and composites pass/fail markers, dimensional readings, thermal values, and production timestamps directly onto the camera feed. The operator sees one display: the part, annotated with every parameter that matters for the accept/reject decision, updated in real time.

Typical parameters overlaid on industrial inspection video

Pass/fail status, dimensional tolerance deviation (mm/µm), surface temperature (°C), cycle time (ms), production count, timestamp burn-in, torque or force readings, barcode/QR scan results, and custom threshold alerts from any PLC data channel.

Configurable widget placement means the overlay is adapted to each station's specific display requirements without writing code — drag and place via the VO-1's no-code GUI.
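A threshold alert of this kind is just a rule evaluated per reading. A minimal sketch, assuming a symmetric dimensional spec (the function and names are invented for illustration, not part of the VO-1's configuration language):

```python
def tolerance_marker(deviation_mm: float, spec_mm: float) -> str:
    """Return the overlay marker text for a dimensional reading."""
    return "PASS" if abs(deviation_mm) <= spec_mm else "FAIL"

print(tolerance_marker(0.12, 0.20))   # PASS  (within a ±0.20 mm spec)
print(tolerance_marker(-0.27, 0.20))  # FAIL  (outside the spec window)
```

In the VO-1 this kind of rule is attached to a widget via the GUI, so the accept/reject logic lives in configuration rather than in station-specific code.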

Industries Where This Changes the Equation

🏭
Automotive Assembly & Stamping
Body panel stamping, weld inspection, and final assembly verification combine high-resolution camera feeds with dimensional measurement systems and torque monitoring. Overlay of dimensional deviation, weld penetration depth, and torque confirmation on the camera feed allows line operators to make accept/reject decisions from a single display.
Dimensional Gauging · Weld Inspection · Torque Monitoring
💊
Pharmaceutical Packaging
Blister pack and bottle inspection lines must verify fill level, cap seal integrity, label placement, and barcode legibility simultaneously. Overlaying barcode scan results, fill level readings, and environmental control parameters onto the camera feed creates an inspection record that satisfies 21 CFR Part 11 requirements without requiring a separate data acquisition system.
Fill Level · Seal Integrity · 21 CFR Part 11
🔬
Semiconductor & Electronics
PCB inspection, solder joint verification, and wafer defect analysis require sub-millimetre camera resolution alongside electrical continuity and dimensional measurements. Overlaying measurement results on the inspection image eliminates the post-inspection step of correlating visual records against measurement logs.
PCB Inspection · Solder Verification · Electrical Test Data
🛢️
Pipeline & Infrastructure Inspection
In-line inspection tools and crawler systems combine video with ultrasonic wall thickness measurements, magnetic flux leakage readings, and GPS position data. When these streams are overlaid on the video feed, inspection engineers reviewing footage can immediately identify corrosion hotspots with their exact location and severity data in the same frame.
Wall Thickness · MFL Data · GPS Position

Who Benefits

QA / QC Engineer
Immediate correlation, faster decisions
QA engineers who see dimensional data and camera evidence in the same frame make accept/reject decisions faster and with greater confidence. Defect pattern recognition becomes intuitive rather than investigative.
Production / Line Manager
Annotated records for root cause analysis
When the overlaid feed is recorded, every frame of inspection video carries its corresponding data annotation. Root cause analysis after a quality escape becomes a matter of reviewing footage that already contains the relevant parameters.
Machine Operator
Earlier warning, less scrap
Operators whose station monitor shows both the camera feed and the process parameters see trend deviations earlier. A temperature reading trending toward the upper control limit is visible at the same moment as the corresponding visual output.
Plant / Facility Manager
No system integration project required
VitalOverlay requires no integration with the factory's MES, ERP, or SCADA system. It sits between existing equipment and the existing monitor. Installation takes hours, not months. There is no software project, no IT approval cycle.
Industrial Equipment Sales
A workflow upgrade, not just a component
Sales representatives who offer VitalOverlay as part of a solution package shift the conversation from hardware specifications to production outcomes. "This system shows your operator everything in one place" resonates at every level of the purchasing hierarchy.
Quality Auditor
Traceability built into every frame
In regulated manufacturing environments — pharmaceutical, automotive, aerospace — inspection records must demonstrate that measurement data and visual evidence correspond to the same moment in time. An overlaid and recorded feed provides this traceability by design.

One Display. Zero Gaps.

VitalOverlay works with your existing machine vision cameras and PLC systems. No software integration. No downtime. Request an engineering sample for your production line.

VitalOverlay Blog · Defence Series

Decision Superiority Starts with Situational Awareness.

April 2026 · 10 min read · Defence / Military

In high-tempo operations, the decisive advantage is not firepower — it is the speed and accuracy of decision-making. Warfighters and operators who can process a complete situational picture faster, with less cognitive friction, execute better decisions under pressure. When intelligence, surveillance, and reconnaissance feeds, telemetry, navigation data, and communications are distributed across multiple screens, that picture must be assembled manually, under fire, in real time. Display overlay eliminates the assembly step — delivering decision-ready information to the operator the moment it is generated.

The OODA Loop and the Display Problem

Colonel John Boyd's Observe–Orient–Decide–Act framework remains the dominant model for understanding how individuals and organisations compete under adversarial pressure. The side that cycles through the OODA loop faster — that observes reality more completely, orients to it more accurately, decides more rapidly, and acts more decisively — wins.

The modern warfighter's Observe phase has a structural bottleneck that neither doctrine nor training has fully addressed: fragmented displays. A UAV ground control station operator simultaneously manages a video downlink on one screen, flight telemetry on a second, and a tactical map on a third. Every screen that requires a deliberate attention shift inserts latency into the Observe phase, often hundreds of milliseconds per shift. In contested environments where threat timelines are measured in seconds, this latency has operational consequences.

"Decision superiority isn't just about having better data. It's about having the right data, in the right place, at the moment of decision. Display fragmentation is an enemy of decision superiority."

How Display Overlay Supports Decision Superiority

VitalOverlay's VO-1 engine addresses the display fragmentation problem at the hardware layer — the most reliable, lowest-latency, and most deployment-appropriate layer for defence applications. The VO-1 accepts a video feed from any camera or sensor system over HDMI and simultaneously parses telemetry, navigation, and situational data from RS-232, USB, or Ethernet inputs. It composites them into a single, annotated HDMI output with no operating system, no software stack, and no buffering latency.

The result is a single display that gives the operator the complete tactical picture — video, telemetry, navigation, and sensor data — without requiring a secondary screen or a cognitive integration step. The operator's Observe phase is compressed. The OODA loop accelerates.
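The compositing operation itself is conceptually simple. The sketch below shows the standard per-pixel "over" blend for a single widget layer, assuming 8-bit ARGB channels with straight (non-premultiplied) alpha over opaque video; it illustrates the arithmetic, not the VO-1's FPGA implementation:

```python
# Per-pixel "over" blend of one ARGB widget pixel onto an opaque video
# pixel. Straight alpha and 8-bit channels are illustrative assumptions.
def blend_over(widget: tuple, video: tuple) -> tuple:
    """widget, video: (A, R, G, B) tuples with 0-255 channels."""
    a = widget[0] / 255.0
    r, g, b = (round(a * w + (1.0 - a) * v)
               for w, v in zip(widget[1:], video[1:]))
    return (255, r, g, b)  # the fused output pixel is fully opaque

# 50%-alpha white text pixel over black video -> mid grey
print(blend_over((128, 255, 255, 255), (255, 0, 0, 0)))  # (255, 128, 128, 128)
```

A stack of widget layers is the same operation folded repeatedly, each layer blended over the result of the layers beneath it, performed per pixel as each frame passes through.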

Data streams the VO-1 composites for defence applications

ISR video downlink (EO/IR), UAV telemetry (altitude, heading, GPS, airspeed, fuel state), gimbal orientation, NMEA navigation data, tactical symbology from RS-232 or USB interfaces, range and bearing to points of interest, communications status indicators, and mission timer.

Compatible with MAVLink, STANAG 4609, NMEA 0183, and custom proprietary telemetry formats. No modifications to existing avionics, sensors, or GCS software required.
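Of the formats listed, NMEA 0183 is simple enough to show what parsing a structured stream involves. Each sentence is framed as $<talker><type>,<fields>*<checksum>, where the checksum is the XOR of every character between $ and *. A minimal validate-and-extract sketch (not the VO-1's internal parser):

```python
# Validate an NMEA 0183 sentence and pull the true heading from $--HDT.
# Sketch only; a production parser handles many sentence types.
from functools import reduce

def nmea_checksum_ok(sentence: str) -> bool:
    body, _, given = sentence.strip().lstrip("$").partition("*")
    calc = reduce(lambda acc, ch: acc ^ ord(ch), body, 0)  # XOR of body chars
    return given.upper() == f"{calc:02X}"

def parse_hdt(sentence: str) -> float:
    """Return true heading in degrees from an HDT sentence."""
    if not nmea_checksum_ok(sentence):
        raise ValueError("bad NMEA checksum")
    return float(sentence.split("*")[0].split(",")[1])

print(parse_hdt("$GPHDT,352.0,T*31"))  # 352.0
```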

Defence Applications

🎯
UAS Ground Control Station Operations
Group 1 through Group 5 UAS operators and payload specialists in GCS environments manage video downlinks, telemetry streams, and navigation data across multiple displays. Overlay of flight parameters — altitude AGL, heading, GPS position, airspeed, fuel state — on the EO/IR video downlink eliminates the cross-screen attention switching that currently characterises payload operator workflows.
UAS/SUAS Operations · ISR Collection · GCS Display Fusion · STANAG 4609
🛥️
Naval & Maritime Operations
Naval operations centres and bridge environments manage multiple sensor feeds simultaneously — surface radar, sonar, EO cameras, AIS tracks, and NMEA navigation data. ROV operators conducting mine countermeasures, hull inspection, or underwater survey manage acoustic positioning, vehicle telemetry, and camera feeds across fragmented displays. VitalOverlay composites these into a unified situational awareness display.
Mine Countermeasures · Naval SAR · ROV Operations · NMEA Navigation
📡
C2 & Joint Operations
Command and control environments require a common operating picture. Display overlay supports COP delivery at the operator level: individual stations receive a fused display that combines sensor video with positional and status data. In austere forward operating environments where display real estate and power are constrained, a single VO-1 unit replacing a multi-monitor stack delivers meaningful capability in a SWaP-compliant form factor.
C2 Display · Common Operating Picture · SWaP Reduction · Forward Deployment
🎓
Military Training & Mission Rehearsal
Live, virtual, and constructive (LVC) training environments benefit from annotated recording of operator performance under simulated mission conditions. When GCS training sessions are recorded with telemetry overlay, instructors can review the exact moment where situational awareness broke down — with flight parameters, sensor feeds, and navigation data all visible in the same frame.
LVC Training · Mission Rehearsal · Operator Certification · SA Training
🔫
Close Air Support & Fires Coordination
Joint terminal attack controllers and fire support coordinators managing CAS missions correlate sensor feeds, target location data, and aircraft position information in time-critical environments. When target area video and aircraft telemetry are displayed on a single overlaid feed, the JTAC's cognitive load in correlating sensor observations with geographic coordinates is reduced.
CAS Coordination · JTAC Operations · Target Geolocation · Fires Integration

The SWaP Argument for Hardware-Layer Compositing

Size, weight, and power constraints govern every decision in deployed defence systems. A PC inserted into the video chain to perform software-based overlay adds weight, requires cooling, introduces operating system reliability risks, and demands a power budget that forward operating environments often cannot spare. In vehicle integration, aircraft installation, and shipborne applications, these constraints are contractual, not advisory.

The VO-1's FPGA-based architecture addresses each of these constraints directly. It has no operating system to maintain or update. It has no moving parts and no active cooling requirement. Its power consumption is a fraction of a PC's. And its behaviour is deterministic — it performs identically on day one of a deployment and day three hundred, in temperatures from -20°C to +70°C, without scheduled maintenance.

"The VO-1 has no operating system. No moving parts. No buffers. It either works or it doesn't — and it works the same way in a GCS tent at -20°C as it does in a lab. That's what military deployment reliability looks like."

Who Benefits

Warfighter / Operator
Faster Observe, faster OODA cycle
The operator whose display shows sensor video and telemetry in the same frame observes the complete tactical picture without constructing it manually across multiple screens. Decision latency decreases. In contested, time-compressed environments, this is a survivability and mission effectiveness advantage.
Squadron / Unit Commander
Shared SA without communications overhead
When all operators in a team see the same fused display, verbal coordination overhead drops. Commanders who previously relied on radio call-outs to maintain shared situational awareness gain that awareness passively through the display.
Program Manager / S6
Capability insertion without system redesign
VitalOverlay delivers display fusion capability without requiring modifications to existing avionics, GCS software, sensor systems, or communication architecture. Fielding timelines are measured in days, not programme cycles. No CDRL. No software qualification.
Defence Contractor / Integrator
SBIR-validated, transition-ready technology
VitalOverlay is pursuing AFWERX SBIR and Navy ONR SBIR funding for UAV GCS and maritime ROV applications. For prime contractors evaluating display fusion components, an SBIR-backed supplier provides government-validated technology and a clear Phase III transition pathway.
SOCOM / Special Operations
Deployable, ruggedised, zero-dependency
The VO-1 requires no network, no PC, no cloud connectivity, and no software licence management. It boots in seconds, configures in minutes, and operates continuously without maintenance. For austere environments where equipment failure is not an option, hardware-layer simplicity is the operational requirement.
Training Command
Realistic cognitive loading in simulation
Training environments that use overlaid sensor feeds and telemetry produce trainees who are already accustomed to reading a fused display when they reach operational units. The cognitive skill transfers directly to operational performance.

Positioning VitalOverlay in the Defence Acquisition Ecosystem

VitalOverlay addresses the display fragmentation problem at a layer of the system architecture that has historically received little dedicated investment: the final metre of the information chain, between the data and the operator's eyes. Billions of dollars are invested annually in sensor quality, data link reliability, and processing power. The display layer — the point where all of that investment is translated into operator action — has been treated as a commodity. It is not a commodity.

For programme offices evaluating display solutions, for system integrators building GCS and ISR platforms, and for warfighters who need to see everything at once: the answer is not more screens. It is one display that shows all of them.

One Display. Complete Situational Awareness.

VitalOverlay is pursuing AFWERX SBIR and Navy ONR SBIR funding for UAV GCS and maritime applications. Contact us to discuss programme fit, engineering samples, or SBIR co-development opportunities.

VitalOverlay Blog · Training Series

The Most Effective Teaching Tool Was Missing the Data.

April 2026 · 7 min read · Training & Simulation

A surgical resident reviewing a recorded cataract procedure sees exactly what the attending surgeon saw through the microscope — but not what the attending was managing. The phaco parameters, the IOP readings, the vacuum and aspiration metrics that drove every decision in that procedure are on a separate machine log, if they were saved at all. The video without the data is half the lesson. Display overlay ensures trainees see the complete picture that experts navigate.

The Missing Dimension in Procedural Training

Procedural training in surgery, aviation, military operations, and industrial environments has benefited enormously from video-based review. Recorded procedures allow trainees to study technique, timing, and decision-making from expert practitioners. Simulation platforms allow deliberate practice in controlled environments. Debriefing sessions identify mistakes and correct misconceptions before they become habits.

What video-based training has never captured cleanly is the data dimension of expert performance. An experienced phacoemulsification surgeon does not just see through the microscope — they simultaneously monitor six instrument parameters and adjust their technique in response to real-time feedback from the machine. A UAV pilot does not just watch the video downlink — they manage altitude, heading, airspeed, and battery state continuously. An industrial operator does not just observe the production line — they track process parameters against control limits and intervene before they are breached.

Training that shows the procedure without the data shows the outcome of expert attention management without explaining how that attention management works. Display overlay makes the data dimension visible — and recordable — for the first time.

"We can show residents exactly what was happening on the phaco machine at the moment the surge occurred. The video and the data, in the same frame. That's when the teaching moment becomes real."

How Annotated Recording Changes Training

VitalOverlay's VO-1 composites the procedural video feed with real-time instrument data and outputs the annotated signal to both the primary display and a USB recording device simultaneously. The recorded file contains video frames where every instrument parameter is visible at the exact moment it was measured — creating a training record that is fundamentally more informative than video alone.

What annotated training recordings contain

For surgical training: phaco power, vacuum, IOP, I/A flow, patient vitals, procedure timer, and event markers at critical steps. For UAV/defence training: altitude, heading, GPS position, fuel/battery, and engagement parameters. For industrial training: machine parameters, process readings, and quality metrics — all embedded in the video at frame-accurate timing.

Recordings are made via USB directly from the VO-1. No additional recording hardware or software is required.
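Frame-accurate timing is, at bottom, simple arithmetic: at a fixed frame rate, every frame index maps deterministically to an elapsed time. The sketch below shows that mapping; it illustrates the determinism behind frame-accurate annotation, not the VO-1's implementation:

```python
# Map a frame index at a fixed frame rate to an mm:ss.mmm timestamp.
# Illustrates why a parameter embedded in a frame has an exact time.
def frame_to_timestamp(frame_index: int, fps: int = 60) -> str:
    total_ms = round(frame_index * 1000 / fps)
    minutes, rem = divmod(total_ms, 60_000)
    seconds, ms = divmod(rem, 1000)
    return f"{minutes:02d}:{seconds:02d}.{ms:03d}"

print(frame_to_timestamp(51_842))  # frame 51842 at 60 fps -> 14:24.033
```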

Training Applications by Sector

🏥
Surgical Education & Residency Training
Ophthalmic, laparoscopic, and endoscopic surgical training programmes benefit most directly from annotated procedure recordings. When a teaching case is recorded with phaco parameters, vitals, and procedure timer overlaid, the resulting video is usable for structured debriefing — "at this moment, vacuum was at 280 and IOP dropped — what should you have done?" becomes a question with visible evidence. Simulation centres can replay annotated cases to illustrate specific technique decisions in their full instrument context.
Surgical Simulation · Phaco Training · Debriefing Records
✈️
Aviation & UAV Operator Training
Ground control station operator training benefits from recorded missions where the video downlink and all telemetry parameters are visible in a single annotated feed. Instructor pilots reviewing trainee performance can identify the exact moments where altitude management, payload positioning, or situational awareness broke down — with the telemetry evidence embedded in the video, rather than requiring separate data export and timestamp correlation.
GCS Training · Payload Operator · Mission Debrief
🔧
Industrial Operator & Maintenance Training
Equipment operator training in manufacturing, energy, and process industries benefits from procedure recordings that show machine parameters alongside the visual state of the process. Trainees learning to operate complex machinery — CNC equipment, chemical reactors, offshore platforms — can review annotated recordings of expert operator sessions where the correlation between process parameters and expert decision-making is visible frame by frame.
Operator Certification · Process Training · Maintenance Procedure
🎓
Medical Device Sales Training
Medical device sales representatives training on complex surgical equipment need to understand not just how the device works but how surgeons interpret its outputs under procedure pressure. Annotated procedure recordings where phaco parameters, IOP, and surgical video are visible simultaneously give sales teams a visceral understanding of the data environment their customers navigate — improving clinical conversations and product demonstrations significantly.
Clinical Training · Product Demonstration · Customer Education

Who Benefits

Surgical Educator / Attending
Teaching cases that teach completely
An attending reviewing a recorded trainee case with overlay data can identify the exact moment where a parameter deviation preceded a complication — and show the trainee the evidence in the same frame as the procedure. The teaching moment becomes specific, evidence-based, and immediately reproducible.
Resident / Fellow / Trainee
Expert decision-making made visible
Trainees reviewing annotated expert recordings can observe not just what the expert did, but what the machine was telling them at the moment they did it. The correlation between instrument feedback and technique adjustment — the core of expert performance — becomes legible in a way it never was from video alone.
Simulation Centre Manager
Richer curriculum, same equipment
Simulation centres that integrate VitalOverlay into their recording workflow produce a library of annotated cases usable for structured curriculum delivery — without replacing any existing simulation hardware. The VO-1 connects between the existing simulation camera and the existing recording system.
Medical Education Department
Accreditation-ready documentation
Annotated procedure recordings where instrument parameters are visible alongside video provide stronger documentation of trainee competency assessment than video alone — particularly for accreditation reviews that require evidence of structured, data-informed surgical training.

Record the Procedure. Capture the Data. Teach the Complete Picture.

VitalOverlay records annotated video directly to USB from your existing procedure setup. No additional recording hardware or software required.

VitalOverlay Blog · R&D Series

When the Data and the Image Live on Different Screens.

April 2026 · 7 min read · R&D Laboratory

A researcher running a live-cell imaging experiment watches the microscope feed on one monitor while the fluorescence intensity readings, temperature, CO₂ levels, and incubation parameters stream on a second from the environmental control system. When something interesting happens in the image, correlating it with the instrument state at that exact moment requires reviewing two separate records. Display overlay synchronises the image and the data before the experiment even ends.

The Laboratory's Synchronisation Problem

Research laboratories are arguably the most data-rich environments in the world. Every instrument generates a continuous stream of measurements. Environmental chambers monitor temperature, humidity, and gas composition to parts-per-million precision. Microscopes, spectrometers, and imaging systems produce multi-dimensional visual data at frame rates that would have been impossible a decade ago. Data acquisition systems log everything.

The problem is that the visual record and the instrument record exist in parallel, not together. When a researcher observes a morphological change in a live-cell culture, they note the timestamp and later correlate it with the instrument log to determine what the environmental conditions were at that moment. This post-hoc synchronisation is standard practice in virtually every laboratory — not because it is the best approach, but because no one has provided a better one.

Display overlay is the better approach. When instrument parameters are visible in the same frame as the experimental image, the correlation is instantaneous, automatic, and embedded in the visual record itself.

"We spent two hours after every experiment matching video timestamps to instrument logs. With overlay, the data is already in the image. The analysis starts the moment we stop recording."

What Overlay Produces in a Laboratory Context

VitalOverlay's VO-1 engine accepts the microscope camera's or imaging system's HDMI output and simultaneously parses instrument data from RS-232, USB, or Ethernet interfaces — interfaces that virtually every laboratory instrument already provides. It composites measurement values, timestamps, and event markers onto the imaging feed in real time. When the output is recorded via USB, the result is an annotated video record where every frame contains the instrument state at the moment the image was captured.

Parameters overlaid on laboratory imaging feeds

Temperature (°C), CO₂ concentration (%), humidity (% RH), pH, dissolved oxygen, fluorescence intensity channels, flow rate (µL/min), pressure, elapsed experiment time, event markers from user input or automated triggers, and custom data channels from any serial-output instrument.

Multi-channel fluorescence parameters can be assigned to separate overlay widgets — allowing simultaneous display of GFP, RFP, and DAPI channel intensities alongside the composite image.
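Conceptually, a no-code overlay template is just a data structure that maps instrument channels to screen positions and display formats. The sketch below is illustrative only; the channel names, coordinates, and format strings are assumptions, not the VO-1's actual configuration schema:

```python
# A hypothetical overlay template: each widget binds one instrument data
# channel to a screen position and a display format string.
TEMPLATE = [
    {"channel": "temp_c",  "pos": (40, 40),  "fmt": "Temp {:.1f} °C"},
    {"channel": "co2_pct", "pos": (40, 80),  "fmt": "CO₂ {:.1f} %"},
    {"channel": "gfp",     "pos": (40, 120), "fmt": "GFP {:d} a.u."},
]

def render_labels(reading: dict) -> list:
    """Turn one instrument reading into positioned overlay label strings."""
    return [(w["pos"], w["fmt"].format(reading[w["channel"]]))
            for w in TEMPLATE]

print(render_labels({"temp_c": 37.0, "co2_pct": 5.0, "gfp": 812})[0])
# ((40, 40), 'Temp 37.0 °C')
```

Because the template is pure data, it can be standardised per instrument without touching any code, which is the point of a GUI-configured overlay.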

Laboratory Applications

🦠
Live Cell Imaging
Time-lapse fluorescence microscopy of live cell cultures correlates morphological changes with environmental conditions. When temperature, CO₂, and fluorescence intensity are overlaid on the microscope feed, every frame of a multi-hour experiment carries the complete environmental context. Researchers identifying the precise moment of a cellular event — mitosis, apoptosis, protein translocation — have the instrument state at that moment embedded in the image, not in a separate log to be matched later.
Fluorescence Intensity · Environmental Control · Time-lapse
⚗️
Microfluidics & Lab-on-Chip
Microfluidic experiments require precise correlation between flow rate, pressure, reagent concentration, and the visual state of the chip. When flow and pressure readings from syringe pumps and pressure sensors are overlaid on the microscope image of the channel, researchers can identify flow anomalies — clogging, bubble formation, delamination — in the context of the exact instrument conditions that caused them, in real time rather than during post-experiment analysis.
Flow Rate · Pressure · Channel Imaging
🧪
Materials Characterisation
Materials testing laboratories combine optical or electron microscopy with mechanical load, thermal, or electrical measurements. In-situ tensile testing under a microscope requires correlation between the visual record of crack initiation and propagation with the load-displacement data from the testing machine. Overlay of load, displacement, and stress values on the microscope feed produces a testing record that is immediately usable for failure analysis — no post-processing synchronisation required.
Load-Displacement · Crack Propagation · In-situ Testing
🔭
Astronomical & Physical Observation
Telescope and spectrometer setups in physics and astronomy research combine image captures with instrument parameters — seeing conditions, atmospheric dispersion, tracking accuracy, and detector temperature. Overlaying these parameters on the observational image creates records that include the instrumental context at capture time — valuable for understanding artefacts, validating calibration, and comparing observations across sessions.
Seeing Conditions · Detector Temp · Tracking Data

Who Benefits

Principal Investigator
Grant-ready data from day one
Annotated experimental records — where instrument parameters are embedded in every frame of imaging data — are immediately usable for publication figures and grant progress reports. The PI spends less time managing data synchronisation and more time interpreting results.
Graduate Student / Postdoc
Less time correlating, more time discovering
The post-experiment data synchronisation task — matching video timestamps against instrument logs — is a routine part of most researchers' workflow. Overlay eliminates this step entirely. The annotated record is produced during the experiment, not reconstructed after it.
Lab Manager / Facility Director
Standardised protocols across instruments
VitalOverlay's no-code configuration GUI allows lab managers to create standardised overlay templates for each instrument setup — ensuring that all researchers using a given microscope or imaging platform produce consistently annotated records, regardless of individual data management practices.
Scientific Equipment Supplier
Workflow enhancement, not just hardware
Instrument manufacturers and distributors who position VitalOverlay alongside their imaging systems offer a solution that improves not just the quality of data captured but the efficiency of the research workflow — resonating strongly in academic and pharmaceutical R&D procurement conversations.

Your Data and Your Images. Together.

VitalOverlay works with any laboratory instrument that outputs HDMI video and RS-232, USB, or Ethernet data. No software integration. No IT approval. Request a sample unit for your lab.

VitalOverlay Blog · Aerospace Series

One Display to Rule the Sky. UAV Telemetry Overlay.

April 2026 · 7 min read · Aerospace / UAV

A UAV payload operator in a ground control station manages the aircraft's video downlink on one display and flight telemetry — altitude, heading, GPS, airspeed — on another. The pilot watches a third screen showing flight instruments and navigation. Two operators, three screens, three separate pictures of what one aircraft is doing. Display overlay fuses all of it into one feed — giving every operator the full picture without the cognitive overhead of building it themselves.

The Ground Control Station's Fragmentation Problem

Unmanned aerial systems have solved the problem of getting data off the aircraft. Modern UAVs stream high-definition video from EO/IR cameras, real-time telemetry from flight management systems, gimbal orientation, GPS coordinates, airspeed, altitude above ground level, and fuel or battery state simultaneously. The uplink and downlink architecture works. The display architecture does not.

Ground control stations in commercial, civil, and defence applications are almost universally built around fragmented display setups. Video goes to one monitor; telemetry goes to another. Payload operators who need both — who need to correlate what the camera sees with where the aircraft is and how it is oriented — must build that correlation in their heads, across two screens, under operational time pressure.

This is not a minor ergonomic inconvenience. In ISR operations, the latency between observation and position confirmation affects target identification accuracy. In precision agriculture, it affects the correlation between crop anomaly imagery and GPS-referenced field coordinates. In search and rescue, it affects response time. The display gap has a real cost in every one of these applications.

"The payload operator is watching the video. The pilot is watching the instruments. Nobody is watching both at once. Overlay gives both operators the same complete picture."

What Changes with Telemetry Overlaid on Video

VitalOverlay's VO-1 engine accepts the UAV's video downlink over HDMI and parses its telemetry stream — typically MAVLink, STANAG 4609, or a proprietary RS-232 format — from the GCS's data output. It composites altitude, heading, GPS coordinates, airspeed, gimbal bearing, ground speed, and battery or fuel state onto the video feed as configurable widget panels. A single HDMI output carries the fully annotated feed to the payload operator's primary monitor.

The pilot retains their dedicated flight instrument display. The payload operator now has the full situational picture — video with all flight parameters visible — without requiring a dual-monitor setup or a second operator managing telemetry.

Telemetry parameters overlaid on UAV video

Altitude AGL/MSL, heading (magnetic and true), GPS position (lat/lon decimal degrees or MGRS), ground speed, airspeed, gimbal azimuth and elevation, battery/fuel percentage, range from home, mission waypoint progress, and custom data channels from payload sensors.

Compatible with MAVLink, STANAG 4609, custom RS-232 telemetry streams, and USB data interfaces from major GCS platforms.
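Navigation displays commonly show position as degrees and decimal minutes (the NMEA native form) rather than raw decimal degrees, and the conversion is small. A formatting sketch using an example fix; the function name and layout are illustrative, and the VO-1's actual display formats are configured in its GUI:

```python
# Format a decimal-degrees fix as degrees and decimal minutes for an
# on-screen widget. Layout is an illustrative assumption.
def fmt_ddm(lat: float, lon: float) -> str:
    def one(value: float, pos: str, neg: str) -> str:
        hemi = pos if value >= 0 else neg
        value = abs(value)
        deg = int(value)
        minutes = (value - deg) * 60.0  # fractional degrees -> minutes
        return f"{deg}°{minutes:06.3f}'{hemi}"
    return f"{one(lat, 'N', 'S')}  {one(lon, 'E', 'W')}"

print(fmt_ddm(28.3421, -14.8917))  # 28°20.526'N  14°53.502'W
```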

Applications Across the UAV Spectrum

🎯
ISR & Persistent Surveillance
Intelligence, surveillance, and reconnaissance operations require continuous correlation between video observation and geographic position. Analysts who can see GPS coordinates, heading, and altitude in the same frame as the EO/IR image can geo-reference observations in real time — eliminating the post-mission step of correlating video timestamps against telemetry logs to determine where an observed event occurred.
EO/IR Overlay · GPS Geo-reference · STANAG 4609
🌾
Precision Agriculture
Agricultural UAV operations combine multispectral and RGB camera feeds with GPS-referenced field maps. Operators identifying crop anomalies — stress zones, irrigation failures, pest damage — need the camera observation and the field coordinate visible simultaneously to generate actionable prescription maps. Overlay of GPS position and altitude on the camera feed allows real-time annotation of field areas without separate GIS software open alongside the video player.
Multispectral Overlay · GPS Field Reference · NDVI Data
🔍
Infrastructure & Linear Asset Inspection
Pipeline, powerline, and railway inspection UAV operations generate kilometre-scale video records that must be correlated with GPS position for maintenance reporting. When position data is overlaid directly on inspection footage, every frame carries its geographic reference — simplifying defect location reporting and eliminating the post-processing step of matching video timestamps to GPS logs.
Pipeline InspectionGPS ReferencePowerline Survey
🆘
Search & Rescue
SAR UAV operators work under extreme time pressure in environments where GPS coordinates must be transmitted to ground teams immediately upon visual contact with a subject. When coordinates are visible in the same frame as the camera feed, the operator can relay position information the instant they see it — without switching attention to a telemetry screen, reading the coordinates, and then returning to the video to confirm the subject is still in frame.
Real-time GPSSubject TrackingCoordinate Relay
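The relay step above often involves one small conversion: the overlay shows decimal degrees, while many ground teams expect degrees-minutes-seconds. An illustrative stdlib sketch of that formatting (not a VO-1 feature — the unit displays the coordinates; the operator or GCS tooling handles the relay):

```python
def dd_to_dms(lat: float, lon: float) -> str:
    """Format decimal-degree coordinates as DMS for voice relay."""
    def one(value: float, pos: str, neg: str) -> str:
        hemi = pos if value >= 0 else neg
        value = abs(value)
        d = int(value)                       # whole degrees
        m = int((value - d) * 60)            # whole minutes
        s = (value - d - m / 60) * 3600      # remaining seconds
        return f"{d}\u00b0{m:02d}'{s:04.1f}\"{hemi}"
    return one(lat, "N", "S") + " " + one(lon, "E", "W")
```

For example, the position shown on the demo display, +28.3421 / -14.8917, reads out as 28°20'31.6"N 14°53'30.1"W.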

Who Benefits

Payload Operator
Complete situational awareness, one screen
The payload operator's primary cognitive task is managing what the camera sees. When flight parameters are visible within the video frame, the operator can maintain full situational awareness — knowing exactly where the aircraft is and how it is oriented — without breaking attention from the camera feed to consult a telemetry display.
UAV Pilot / GCS Operator
Reduced verbal coordination overhead
In two-operator GCS configurations, pilots and payload operators communicate constantly to coordinate aircraft positioning with camera observation. When the payload operator's display includes flight telemetry, the frequency of "what's our altitude?" and "which direction are we heading?" communications drops significantly.
Mission Commander / Flight Supervisor
Shared picture, shared accountability
A supervisor monitoring operations from a secondary display wants to see what the payload operator sees — with full context. A single overlaid feed allows supervisors to share the complete operational picture without requiring a dedicated telemetry display at the supervisor station.
UAS System Integrator
Differentiated GCS capability
System integrators who include VitalOverlay in GCS builds offer a demonstrably superior operator experience. The VO-1 sits between the video downlink output and the payload operator's monitor — no changes to the GCS software stack, no integration with the flight management system, no additional certification burden.

One Display Above and Below.

VitalOverlay works with any UAV system outputting HDMI video and MAVLink, STANAG 4609, or RS-232 telemetry. No GCS software changes required.