Version: Reality 5.7

Multi-Channel Broadcast Graphics Pipeline

Under Construction

This section is currently under construction and will be updated soon. Thank you for your patience!

In a typical broadcast graphics environment, multiple visual elements such as a lower third, a ticker, a bug, or even a butterfly animation are rendered simultaneously on the screen. These elements are composited as overlay graphics, often stacked on top of each other within a single graphics pipeline. This approach represents a conventional broadcast pipeline, where all visual layers are managed together and output through a shared channel.

Overview

Reality Hub 2.1 now supports independent channel creation inside the Nodegraph, allowing each graphics channel to be controlled through Lino, its unified playout system.

Core Concept of Multi-Channel Architecture

Multi-Channel graphics architecture allows you to create and manage each channel independently within the Nodegraph. Each channel operates as a self-contained graphics output path, corresponding to dedicated SDI outputs on the device.

For example:

  • When a graphic is sent to Channel 1, the signal is output through SDI 1–2 (AJA Output).
  • When sent to Channel 2, it is output through SDI 3–4.

Each channel therefore represents a unique SDI Out pair, enabling flexible routing and synchronization with external playout systems. The same channel mapping can be utilized on the playout side for coordinated on-air control.
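Conceptually, the channel-to-SDI relationship behaves like a simple lookup table. The TypeScript sketch below illustrates the idea only; the SdiPair type and resolveOutput function are hypothetical names, not part of the Reality Hub API.

```ts
// Illustrative mapping only: type and function names are assumptions,
// not part of the Reality Hub API.
type SdiPair = { ports: [number, number]; label: string };

const channelOutputs: Record<number, SdiPair> = {
  1: { ports: [1, 2], label: "AJA Output SDI 1-2" }, // Channel 1 -> SDI 1-2
  2: { ports: [3, 4], label: "AJA Output SDI 3-4" }, // Channel 2 -> SDI 3-4
};

// Resolve the physical SDI pair a graphics channel is wired to.
function resolveOutput(channel: number): SdiPair {
  const out = channelOutputs[channel];
  if (out === undefined) {
    throw new Error(`No SDI output mapped for channel ${channel}`);
  }
  return out;
}

console.log(resolveOutput(1).label); // -> "AJA Output SDI 1-2"
```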

Pipeline Functionality

This pipeline is designed exclusively for output management. It defines how graphics are transmitted to the hardware outputs, rather than how they are layered internally. All rendering, composition, and synchronization processes occur before the signal reaches the output stage. Compared to a Multi-Layer graphics system, the Multi-Channel Pipeline provides a clear operational advantage: it allows broadcast elements to be managed independently, maintaining clean separation between content types and reducing layer complexity.

Typical use cases include:

  • Assigning lower thirds to one channel.
  • Sending full-screen graphics to another.
  • Routing logos (bugs), transition animations, or bumpers to dedicated channels.

This separation enables precise control over on-air graphics, simplifies error recovery, and improves integration with automation systems or master control units (MCUs).
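The separation above can be pictured as a routing table that pins each content type to its own channel. This is a minimal sketch with assumed names (GraphicKind, channelFor); actual channel assignments are defined in the Nodegraph, not in code.

```ts
// Illustrative only: a routing table that keeps each content type
// on its own dedicated channel.
type GraphicKind = "lower-third" | "fullscreen" | "bug" | "bumper";

const channelFor: Record<GraphicKind, number> = {
  "lower-third": 1, // real-time data overlays
  "fullscreen": 2,  // full-screen graphics
  "bug": 3,         // logos and branding
  "bumper": 3,      // transition animations share the branding channel here
};

// Taking a fullscreen on channel 2 never touches the lower third on channel 1.
function channelOf(kind: GraphicKind): number {
  return channelFor[kind];
}
```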

Practical Example

In a live broadcast workflow, such as a news production environment, the operator may configure:

  • Channel 1: Lower thirds and tickers (real-time data overlays).
  • Channel 2: Full-screen graphics and video transitions.
  • Channel 3: Logo bugs and branding animations.

Each channel can then be triggered independently by the playout automation system, allowing smooth transitions between program states without interrupting other graphics. Multiple rundown items, even those sourced from different channels, can also be grouped and executed with a single click.
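The sketch below shows what independent triggering and grouped execution might look like from an automation script. RundownItem, trigger, and executeGroup are illustrative names, not a documented Reality Hub interface; the point is that each item fires on its own channel, and a group fans out in a single call.

```ts
// Hypothetical automation hooks: RundownItem, trigger, and executeGroup are
// illustrative names, not a documented Reality Hub interface.
interface RundownItem {
  channel: number;
  template: string;
  payload?: Record<string, unknown>;
}

// Each item is taken on its own channel; other channels keep playing untouched.
async function trigger(item: RundownItem): Promise<void> {
  console.log(`Channel ${item.channel}: take ${item.template}`);
}

// Grouped execution: items sourced from different channels fire together
// with a single call, mirroring the one-click rundown grouping described above.
async function executeGroup(items: RundownItem[]): Promise<void> {
  await Promise.all(items.map(trigger));
}

executeGroup([
  { channel: 1, template: "LowerThird", payload: { name: "Jane Doe" } },
  { channel: 2, template: "FullscreenStats" },
]).catch(console.error);
```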