This Knowledge Base article explains the sensor structure, operation, behavior, and configuration of the Triton2 4K linescan camera, including triggering.
1. Sensor and Color Architecture
The Triton2 4K linescan camera is built on the Gpixel GL3504 sensor. This sensor contains:
- 4x 7 µm pixel lines (2K mode). On the color variant, three of these lines carry RGB (truecolor) filters:
  - One line with Red
  - One line with Green
  - One line with Blue
  - The fourth 7 µm line has no color filter (clear).
- 2x 3.5 µm pixel lines (4K mode), arranged with a GGRB Bayer pattern for color.

This multi‑line architecture, combined with different pixel sizes and color filter layouts, is the basis for:
- Pixel size modes
- Line count modes
- Available pixel formats (line scan)
- Available color interpolation algorithm(s)
- How many lines and exposures each frame comprises
2. Pixel Size Modes, Line Count Modes, and Pixel Formats
The camera exposes different PixelSizeMode and LineCountMode combinations, each enabling a specific set of PixelFormats.
PixelSizeMode selects which physical pixel rows on the GL3504 sensor are used:
- Size3500nm
  - Uses the two 3.5 µm pixel lines (“4K mode”).
  - Enables up to 4096‑pixel image width and color formats based on the 3.5 µm Bayer pattern.
- Size7000nm
  - Uses the four 7 µm pixel lines (“2K mode”).
  - Enables up to 2048‑pixel image width and color formats based on the 7 µm RGB + clear lines.
In other words, PixelSizeMode directly controls which physical sensor lines participate in acquisition, and therefore the native resolution and pixel size.
LineCountMode selects how many of the active sensor lines are combined in the ISP to form each output line:
- LineCountMode = 1
  - A single sensor line is used per output line.
  - Only monochrome formats are available; no color reconstruction is performed.
- LineCountMode > 1 (2 or 4, depending on PixelSizeMode)
  - Multiple sensor lines are combined per output line.
  - Enables color‑capable configurations (CFA and RGB / YCbCr / YUV formats) and affects how many physical lines contribute to each exposure.
Together, PixelSizeMode and LineCountMode determine which PixelFormat options are available, as summarized in the table below.
| PixelSizeMode | LineCountMode | PixelFormats |
|---|---|---|
| Size3500nm | 1 | Mono8, Mono10, Mono10p, Mono10Packed, Mono12, Mono12p, Mono12Packed, Mono16 |
| Size3500nm | 2 | CFA_RBGG8, CFA_RBGG10, CFA_RBGG10p, CFA_RBGG10Packed, CFA_RBGG12, CFA_RBGG12p, CFA_RBGG12Packed, CFA_RBGG16, RGB8, BGR8, RGBY8, BGRY8, YCbCr8_CbYCr, YUV422_8, YUV422_8_UYVY, YCbCr411_8, YUV411_8_UYYVYY |
| Size7000nm | 1 | Mono8, Mono10, Mono10p, Mono10Packed, Mono12, Mono12p, Mono12Packed, Mono16 |
| Size7000nm | 2 | CFA2by1_WB8, CFA2by1_WB10, CFA2by1_WB10p, CFA2by1_WB10Packed, CFA2by1_WB12, CFA2by1_WB12p, CFA2by1_WB12Packed, CFA2by1_WB16, CFA4by1_WBRG8, CFA4by1_WBRG10, CFA4by1_WBRG10p, CFA4by1_WBRG10Packed, CFA4by1_WBRG12, CFA4by1_WBRG12p, CFA4by1_WBRG12Packed, CFA4by1_WBRG16 |
| Size7000nm | 4 | RGB8, BGR8, RGBY8, BGRY8, YCbCr8_CbYCr, YUV422_8, YUV422_8_UYVY, YCbCr411_8, YUV411_8_UYYVYY |
The CFA_… pixel formats are raw sensor data formats that preserve the per‑pixel color filter pattern (no ISP color interpolation / demosaicing applied yet). Examples:
- CFA_RBGG8 = 8‑bit raw data in an R‑B‑G‑G tile pattern on the 3.5 µm (4K) lines.
- CFA2by1_WB8 = 2×1 white/blue CFA tile, 8‑bit.
- CFA4by1_WBRG8 = 4×1 white‑blue‑red‑green tile, 8‑bit, used with the 7 µm (2K) mode.

As the 2by1 / 4by1 notation suggests, some CFA formats deliver raw data that is 2 or 4 lines high per exposure.
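For host‑side configuration validation, the mode/format table can be mirrored as a simple lookup. This is a sketch: the format strings are copied from the table above and are not guaranteed to match any SDK enum values.

```python
# Bit-depth suffixes shared by the Mono and CFA format families in the table.
BIT_VARIANTS = ["8", "10", "10p", "10Packed", "12", "12p", "12Packed", "16"]

MONO_FORMATS = [f"Mono{b}" for b in BIT_VARIANTS]
PROCESSED_COLOR_FORMATS = ["RGB8", "BGR8", "RGBY8", "BGRY8", "YCbCr8_CbYCr",
                           "YUV422_8", "YUV422_8_UYVY", "YCbCr411_8",
                           "YUV411_8_UYYVYY"]
CFA_3500 = [f"CFA_RBGG{b}" for b in BIT_VARIANTS]
CFA_7000 = ([f"CFA2by1_WB{b}" for b in BIT_VARIANTS]
            + [f"CFA4by1_WBRG{b}" for b in BIT_VARIANTS])

# (PixelSizeMode, LineCountMode) -> allowed PixelFormats, per the table above.
ALLOWED_FORMATS = {
    ("Size3500nm", 1): MONO_FORMATS,
    ("Size3500nm", 2): CFA_3500 + PROCESSED_COLOR_FORMATS,
    ("Size7000nm", 1): MONO_FORMATS,
    ("Size7000nm", 2): CFA_7000,
    ("Size7000nm", 4): PROCESSED_COLOR_FORMATS,
}

def is_format_allowed(pixel_size_mode, line_count_mode, pixel_format):
    """True if the table permits this mode/format combination."""
    return pixel_format in ALLOWED_FORMATS.get((pixel_size_mode, line_count_mode), [])
```

Checking a combination before applying it avoids GenICam write errors at runtime, e.g. `is_format_allowed("Size3500nm", 1, "RGB8")` is False because LineCountMode = 1 is monochrome only.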
Key implications:
- PixelSizeMode directly determines whether you are using the 3.5 µm lines (Size3500nm) or 7 µm lines (Size7000nm) and thus which physical sensor lines participate in acquisition.
- LineCountMode = 1
  - Monochrome only: only Mono* pixel formats can be used; no color formats are available.
- LineCountMode > 1
  - Color‑capable configurations (CFA or RGB / YCbCr / YUV), depending on PixelSizeMode and line count.
3. Acquisition Rates, Frames, and Exposures
3.1 AcquisitionFrameRate vs AcquisitionLineRate
The Triton2 linescan exposes both AcquisitionFrameRate and AcquisitionLineRate controls, but only one can be actively controlled at a time.
- Only one of these can be True at any given time:
  - AcquisitionFrameRateEnable
  - AcquisitionLineRateEnable
- The two rates are related by AcquisitionLineRate = AcquisitionFrameRate × Height, where Height is the configured number of output lines per frame (the Height node).
- When one of the CFA_ pixel formats is used, the relationship becomes AcquisitionLineRate = AcquisitionFrameRate × ImagerHeight.
- When AcquisitionFrameRateEnable = True → AcquisitionLineRateEnable is read‑only.
- When AcquisitionLineRateEnable = True → AcquisitionFrameRateEnable is read‑only.
- When both are False, the camera is in full auto mode with respect to acquisition rate:
  - It maximizes the sensor‑supported line rate.
  - It maximizes bandwidth usage within hardware and interface limits.
  - It automatically drives acquisition to the highest sustainable line rate.
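The rate relationship above can be sketched as plain arithmetic (no SDK calls; the `cfa` flag selects the CFA_ variant of the equation):

```python
def implied_line_rate(frame_rate_hz, height, imager_height=None, cfa=False):
    """AcquisitionLineRate implied by a given AcquisitionFrameRate.

    Standard case: AcquisitionLineRate = AcquisitionFrameRate x Height.
    CFA_ pixel formats: AcquisitionLineRate = AcquisitionFrameRate x ImagerHeight.
    """
    lines_per_frame = imager_height if cfa else height
    return frame_rate_hz * lines_per_frame
```

For example, a 10 Hz frame rate with Height = 2000 implies a 20 kHz line rate.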
3.2 Relationship Between Exposures, ImagerHeight, and Output Lines
For line‑scan frames, it is useful to think in terms of how many exposures are taken per frame and how many output lines each exposure produces.
- A frame is an image of Height lines (as configured by the Height node).
- The sensor uses ImagerHeight sensor lines to build each frame.
- The core relationships are:
  - Exposures / Frame = ImagerHeight
  - Output lines / Exposure = Height / ImagerHeight

Where:
- ImagerHeight – the number of sensor lines that participate in constructing the frame (for example, 1, 2, or 4 physical lines, depending on mode).
- Exposures / Frame – how many exposures are needed to build one frame of Height lines.
- Output lines / Exposure – how many output lines are generated from each exposure.
These relationships are especially useful when:
- Planning trigger or encoder timing for a required number of lines per object.
- Understanding how multi‑line sensor readout and interpolation map onto the final line count and frame structure.
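The relations above can be transcribed directly. This is a sketch that assumes Height is an integer multiple of ImagerHeight:

```python
def frame_structure(height, imager_height):
    """Transcription of the relationships in Section 3.2.

    Returns (exposures_per_frame, output_lines_per_exposure) for a frame
    of `height` output lines built from `imager_height` sensor lines.
    """
    # Exposures / Frame = ImagerHeight
    exposures_per_frame = imager_height
    # Output lines / Exposure = Height / ImagerHeight
    output_lines_per_exposure = height // imager_height
    return exposures_per_frame, output_lines_per_exposure
```

This is convenient when sizing encoder pulse counts: the product of the two returned values always equals the configured Height.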
4. Line Rate and Line Count Configuration
4.1. Maximum Line Rate and LineCountMode
To reach the maximum line rate, the camera must minimize data per exposure.
- Set LineCountMode = LineCount1 for the highest possible line rate.
- Only a single sensor line is read per exposure.
- This maximizes throughput and line frequency.
On the color camera, this has an important consequence:
- With LineCountMode = LineCount1:
- Color information is not reconstructed.
- Only mono pixel formats are available; color formats are disabled.
4.2. 4K vs 2K Line Rate (Color)
For non-CFA color pixel formats, the maximum line rate is the same in 4K and 2K modes. In both cases, the total data per output line (lines × interpolation) is similar, so the bandwidth limit — and thus the line‑rate ceiling — is effectively identical.
| PixelSizeMode | LineCountMode | Max Width | Bandwidth required per line | AcquisitionLineRate (with 10% DeviceLinkThroughputReserve) |
|---|---|---|---|---|
| Size3500nm | 2 | 4096 | 2 × 4096 × bit depth | 12348.8806 Hz |
| Size7000nm | 4 (processed color formats use all 4 logical lines) | 2048 | 4 × 2048 × bit depth | 12348.8806 Hz |
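The bandwidth ceiling can be estimated as below. This is a sketch: `link_bps` is a placeholder for the usable link throughput (the 1 Gbit/s used in the example is illustrative, not the Triton2's actual figure), and real cameras apply further sensor/ISP limits on top of the interface limit.

```python
def max_line_rate_hz(width_px, lines_per_output, bits_per_pixel,
                     link_bps, reserve=0.10):
    """Upper bound on line rate from interface bandwidth alone.

    bits_per_line follows the 'lines x width x bit depth' column above;
    link_bps is derated by the DeviceLinkThroughputReserve fraction.
    """
    bits_per_line = lines_per_output * width_px * bits_per_pixel
    return link_bps * (1.0 - reserve) / bits_per_line
```

Because 4 × 2048 equals 2 × 4096, the 4K and 2K color configurations hit the same ceiling, which is the point made in Section 4.2.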
5. Exposure
5.1. Maximum Exposure Time vs Line Rate
For a given AcquisitionLineRate, the maximum exposure time is constrained by the time between lines:
- General rule: Max ExposureTime ≈ 1,000,000 / AcquisitionLineRate
- in microseconds
- minus a small margin to account for sensor overhead and timing guard bands
Example:
- If AcquisitionLineRate = 20,000 lines/s
- Max ExposureTime ≈ 1,000,000 / 20,000 ≈ 50 µs (practical max, slightly lower).
The API will clamp ExposureTime if you try to exceed the limit.
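The rule of thumb above fits in a small helper. The guard margin is a placeholder for sensor overhead and timing guard bands, not a documented value:

```python
def max_exposure_us(line_rate_hz, guard_us=0.0):
    """Approximate ExposureTime ceiling: line period minus a guard margin.

    The line period is 1,000,000 / AcquisitionLineRate in microseconds.
    The camera clamps ExposureTime to its real internal limit regardless.
    """
    return 1_000_000.0 / line_rate_hz - guard_us
```

At 20,000 lines/s this gives the ~50 µs ceiling from the example above (slightly less in practice once the guard margin is subtracted).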
5.2. Individual ExposureTime Per Line
The camera offers a feature where you can set individual exposure times for different lines:
- The default is a global exposure time across all active lines.
- Per‑line exposure control gives more granular control over which parts of the sensor are exposed, and for how long. The available ExposureTimeSelector options depend on the mode:
| PixelSizeMode | LineCountMode | ExposureTimeSelector options |
|---|---|---|
| Size3500nm | LineCount1 | Line0EvenPixels, Line0OddPixels |
| Size3500nm | LineCount2 | Line0EvenPixels, Line0OddPixels, Line1EvenPixels, Line1OddPixels |
| Size7000nm | LineCount1 | Line0 |
| Size7000nm | LineCount2 | Line0, Line1 |
| Size7000nm | LineCount4 | Line0, Line1, Line2, Line3 |
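The selector availability can be expressed as a lookup mirroring the table above (a sketch; names are copied from the table, not from any SDK enum):

```python
# ExposureTimeSelector options per (PixelSizeMode, LineCountMode),
# copied from the table above.
EXPOSURE_TIME_SELECTORS = {
    ("Size3500nm", "LineCount1"): ["Line0EvenPixels", "Line0OddPixels"],
    ("Size3500nm", "LineCount2"): ["Line0EvenPixels", "Line0OddPixels",
                                   "Line1EvenPixels", "Line1OddPixels"],
    ("Size7000nm", "LineCount1"): ["Line0"],
    ("Size7000nm", "LineCount2"): ["Line0", "Line1"],
    ("Size7000nm", "LineCount4"): ["Line0", "Line1", "Line2", "Line3"],
}

def selectors_for(pixel_size_mode, line_count_mode):
    """Valid ExposureTimeSelector values for the given mode combination."""
    return EXPOSURE_TIME_SELECTORS.get((pixel_size_mode, line_count_mode), [])
```

Note that the 3.5 µm modes split each line into even and odd pixels, while the 7 µm modes expose whole lines.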
6. Color Interpolation Algorithms
The camera uses different color interpolation strategies depending on the mode. In 4K operation with PixelSizeMode = Size3500nm and a color pixel format, the ISP can operate in either TrueColor or FullDefinition mode, selectable via the ColorInterpolationMode node.
The ColorInterpolationMode node becomes available when:
- PixelSizeMode = Size3500nm, and
- The selected PixelFormat uses the RGB pixels in the ISP pipeline (for example, RGB8, BGR8, YCbCr8, YCbCr8_CbYCr, YUV422_8, YUV422_8_UYVY, YCbCr411_8, YUV411_8_UYYVYY, and also mono formats on color cameras where interpolated RGB is converted to mono).
6.1. True Color Algorithm
In 4K operation, the 3.5 µm lines have a Bayer‑like pattern and the firmware can use a TrueColor algorithm to reconstruct color.
- In TrueColor mode:
- The ISP combines RGB information from the relevant lines.
- The effective image width is halved (4K → 2K effective color resolution), because pairs of sensor pixels are combined to form each output pixel.
- Conceptually, this is similar to horizontal binning:
- The original 4K information is effectively “squeezed” into a 2K‑wide image.
- Objects will appear slightly compressed horizontally, similar to what you would see with 2× binning in the horizontal direction.
- Color is produced without requiring extra exposures per frame:
- If Height = N lines, then N exposures are used to produce the color image.
- Rationale:
- The TrueColor approach allows clean combination of Red, Green, and Blue pixels when two lines are exposed simultaneously.
6.2. Full Definition Algorithm
- In FullDefinition mode:
- The ISP outputs a full 4K‑width color image (no 2K width reduction).
- In contrast to TrueColor:
- TrueColor produces 2K‑wide images (with the horizontal “squished” appearance described above).
- FullDefinition preserves the full 4K width, with no horizontal squeezing.
- This is achieved by using an extra exposure per frame to correctly align the color information between the two 3.5 µm lines:
- If Height = N lines, the camera requires N + 1 exposures to produce the N‑line color image.
- Use TrueColor when:
- You do not need full 4K horizontal color resolution, and
- You want to avoid any increase in the number of exposures per frame.
- Use FullDefinition when:
- You need maximum horizontal color resolution (4K) from the 3.5 µm lines, and
- You can accommodate the additional exposure requirement in your timing and encoder setup.
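The exposure‑count rules for the two modes reduce to a one‑line difference (a sketch; mode names follow the ColorInterpolationMode node):

```python
def exposures_required(height, interpolation_mode):
    """Exposures per frame in 4K color (Size3500nm), per Section 6.

    TrueColor:      N exposures for N output lines (2K-wide result).
    FullDefinition: N + 1 exposures for N output lines (full 4K width).
    """
    if interpolation_mode == "TrueColor":
        return height
    if interpolation_mode == "FullDefinition":
        return height + 1
    raise ValueError(f"unknown ColorInterpolationMode: {interpolation_mode}")
```

The extra FullDefinition exposure matters when planning encoder timing, since one more line trigger per frame must be budgeted.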
7. Triggering and Encoder
The camera can be integrated with external triggers and encoders for line‑synchronous acquisition. Two trigger concepts are important:
- FrameStart – defines when a frame (set of lines) begins and ends.
- LineStart – defines when each individual line is exposed.
The exact behavior depends on which signals are enabled and how they are driven. There are three ways of triggering the camera.
7.1. FrameStart Off, LineStart On
- FrameStart: disabled
- LineStart: driven by encoder or external trigger
In this mode:
- Each LineStart pulse causes the camera to expose and read a line.
- The camera automatically groups lines into frames.
- A frame is completed and an image is delivered when one of the following occurs:
- Enough lines have been received to fill the configured Height, or
- An encoder timeout expires (no new LineStart pulses for a defined interval), or
- A buffer expiry timer (BufferExpiryMS) expires.
This is useful when you want continuous line‑synchronous acquisition based purely on the encoder, without having to manage frame boundaries explicitly.
7.2. FrameStart On (Level), LineStart On
- FrameStart: enabled, level‑sensitive
- LineStart: driven by encoder or external trigger
In this mode:
- A frame is active only while FrameStart is high.
- LineStart pulses are accepted only when FrameStart is high.
- You should:
- Assert FrameStart high to begin a frame.
- Keep FrameStart high long enough to cover all encoder pulses needed to reach the target Height.
- De‑assert FrameStart to end the frame.
This mode is useful when an external controller (PLC, motion controller, etc.) needs explicit control over when a frame starts and ends (for example, when a product enters and leaves the field of view).
7.3. FrameStart On (Pulse), LineStart On
- FrameStart: enabled, edge/pulse‑sensitive
- LineStart: driven by encoder or external trigger
In this mode:
- A FrameStart pulse begins a new frame.
- The camera then counts LineStart pulses until the configured Height is reached.
- Once the requested number of lines have been acquired:
- Any additional LineStart pulses are ignored until the next valid FrameStart pulse.
- A new frame will not start until another FrameStart is received.
This is useful when you want a fixed number of lines per frame and want to prevent extra encoder pulses from creating partial or over‑length frames.
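The pulse‑mode gating described above can be illustrated with a small event simulation. This is a behavioral sketch of the rules in this section, not camera firmware:

```python
def simulate_pulse_mode(events, height):
    """Group LineStart pulses into frames of exactly `height` lines.

    `events` is a sequence of "FrameStart" / "LineStart" strings in
    arrival order. LineStart pulses outside an open frame, or beyond
    `height` lines, are ignored until the next FrameStart, mirroring
    the behavior described above.
    """
    frames = []
    lines_in_frame = None            # None means no frame is open
    for event in events:
        if event == "FrameStart":
            if lines_in_frame is None:
                lines_in_frame = 0   # open a new frame
        elif event == "LineStart" and lines_in_frame is not None:
            lines_in_frame += 1
            if lines_in_frame == height:
                frames.append(height)
                lines_in_frame = None  # frame done; ignore extra pulses
    return frames
```

With Height = 3, five LineStart pulses after one FrameStart still yield exactly one 3‑line frame; the two surplus pulses are discarded.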
You may find additional instructions on using the Encoder module in the KB article: Using the TRI02KA Triton Linescan.
8. How Incomplete Images Work with the TRT04KG
An image (frame) is flagged incomplete when the host cannot assemble a full GVSP block for that frame.
Incomplete images can occur for many reasons, but with linescan cameras they are most commonly seen in line‑trigger (line/encoder) modes.
An image is constructed and returned when the SDK has received the configured Height number of lines. It will be marked incomplete if, for example:
- The line trigger stops before Height lines are received → BufferExpiryMS expires.
- Acquisition stops mid‑frame → the GVSP block is aborted.
- The line trigger rate is very slow → InterPacketExpiryMS expires.
InterPacketExpiryMS and BufferExpiryMS are calculated automatically based on the reported AcquisitionFrameRate node, which in turn depends on Height, PixelFormat, LineCountMode, and PixelSizeMode.
It is also normal for the final image of a sequence to be flagged incomplete, because line triggers (and therefore image lines) rarely stop exactly at the end of a frame boundary.