App Note: Add RGB Color to Helios 3D Point Cloud

Introduction

This document describes how to display a color point cloud by overlaying point cloud data from the Helios ToF camera onto RGB data from a Triton color camera. The app note is based on OpenCV’s “Camera Calibration and 3D Reconstruction” documentation.

The following is a brief summary of our steps:

  • Place or mount the cameras together
  • Calibrate the Triton camera’s lens intrinsics and distortion coefficients (pinhole model)
  • Calibrate the Triton and Helios together using the OpenCV solvePnP() function
  • Use the OpenCV projectPoints() function to find the position in the Triton RGB image of each 3D point measured with the Helios

Prerequisites
  • Helios/Helios2 3D ToF camera (Pixel format: Coord3D_ABCY16 – 4-ch point cloud XYZ + Intensity, 16 bits per channel)
  • Helios firmware version v1.18.18.3 or later
  • NA or EU Helios power supply (P/N: M8P-US1 / M8P-EU1)
  • Triton Color Camera (Minimum 1.6 MP, Recommended 3.2 MP, Pixel format: RGB8)
  • Lens for the Triton camera (This example uses an S-mount (M12) lens, focal length: 6mm, f/2.4)
  • C-mount to S-mount adapter (P/N: ADA-C-S, only needed if using S-mount lens)
  • Optional: Helios tripod mount adapter (P/N: ATL-TR) & mount between Triton and Helios (download link below)
  • 2-CH PCIe GigE Vision PoE+ Card (P/N: PCIE-POE2, For Triton power and data)
  • 2 x M12 Ethernet CAT6 cables (P/N: CAB-MR-##)
  • Or, purchase our “3D+RGB IP67 Kit (Helios2 & Triton 3.2MP Kit)”, which comes with everything you need to create color 3D point clouds
  • Windows or Linux PC with OpenCV software installed

Placement of Cameras

We recommend placing the Triton and Helios cameras as close together as possible to reduce occlusion between their views. This can be achieved by mounting the Triton camera on the side or on top of the Helios camera.

Please contact LUCID Support if your Helios requires mounting holes on the top.

[Figure: Helios and Triton placement, shown with the Helios + Triton mount]

Download Mount File: 3D Print Your Own Triton + Helios Mount

If you wish to mount the Triton camera to the Helios camera as shown in the image above, please download our Triton to Helios mount for 3D printing (121kB, STEP).

Calibration

Before the data from the two cameras can be combined, we must first calibrate the lens on the Triton color camera to find its optical center and focal length (intrinsics) and its lens distortion coefficients (pinhole model). You can print your own calibration target, or download our calibration target here (15kB, PDF, 8.5 x 11 in).

Before calibrating the Triton camera, you must focus its lens. Place the target at your application’s working distance and adjust the Triton’s lens until the calibration target is in focus.

Calibrating the Triton camera requires grabbing several images of the calibration chart at different positions within the camera’s field of view. At least 3 images are required, but 4 to 8 images are typically used for a better-quality calibration. The process is described in detail in this tutorial: (OpenCV camera calibration tutorial)

Successfully following the steps of the tutorial and calling the OpenCV calibrateCamera() function will give a 3×3 matrix of the camera’s intrinsics (variable cameraMatrix) and a 5-element vector of distortion coefficients (variable distCoeffs). These two outputs will be used in the joint Helios-Triton calibration in the following steps.
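
To make this step concrete, below is a minimal sketch of the intrinsic calibration, assuming a 9×6 inner-corner chessboard with 25 mm squares and a set of in-focus mono images already grabbed from the Triton. The helper name CalibrateTritonLens and the pattern dimensions are our own choices, not part of the tutorial; adjust them to your target:

#include <cstdio>
#include <vector>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>

// Sketch: calibrate the Triton lens from several mono chessboard images.
// Assumes a 9x6 inner-corner chessboard with 25 mm squares - adjust for your target.
bool CalibrateTritonLens(const std::vector<cv::Mat>& mono_images, cv::Mat& cameraMatrix, cv::Mat& distCoeffs)
{
	const cv::Size pattern_size(9, 6);
	const float square_mm = 25.0f;

	// The same planar grid of 3D points (Z = 0) is reused for every view
	std::vector<cv::Point3f> object_grid;
	for (int r = 0; r < pattern_size.height; ++r)
		for (int c = 0; c < pattern_size.width; ++c)
			object_grid.push_back(cv::Point3f(c * square_mm, r * square_mm, 0.0f));

	std::vector<std::vector<cv::Point3f>> object_points;
	std::vector<std::vector<cv::Point2f>> image_points;

	for (const cv::Mat& img : mono_images)
	{
		std::vector<cv::Point2f> corners;
		if (cv::findChessboardCorners(img, pattern_size, corners))
		{
			// Refine corner locations to sub-pixel accuracy
			cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
				cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
			object_points.push_back(object_grid);
			image_points.push_back(corners);
		}
	}

	if (image_points.size() < 3)
		return false; // calibrateCamera() needs at least 3 good views

	std::vector<cv::Mat> rvecs, tvecs;
	double rms = cv::calibrateCamera(object_points, image_points, mono_images[0].size(),
		cameraMatrix, distCoeffs, rvecs, tvecs);
	printf("Calibration RMS reprojection error: %.3f px\n", rms);
	return true;
}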

For additional information on calibration and 3D reconstruction using the pinhole camera model, please visit the OpenCV documentation here: OpenCV Doc: Camera Calibration and 3D Reconstruction.

We could also determine the intrinsic parameters of the Helios using this method. However, it is not necessary for this application note, since the intrinsic calibration is already applied to the Helios point cloud data.

Find the Cameras' Orientation Relative to Each Other

Our method overlays color data by reading the 3D points ABC (XYZ) from the Helios and projecting them directly onto the Triton color (RGB) image. This requires first solving for the orientation (rotation and translation) of the Helios coordinate system relative to the Triton’s native coordinate space. This step can be achieved using the OpenCV function solvePnP().

Solving for the orientation of the Helios relative to the Triton requires a single image of the calibration target from each camera. Place the calibration target near the center of both cameras’ fields of view, at the same distance you will be imaging in your application. Make sure not to move the calibration target or the cameras between grabbing the Helios image and grabbing the Triton image.

Grab an image from the Helios and extract the 3D data (XYZ coordinates) and intensity data into OpenCV matrices:

// Get Helios XYZ data and intensity data into OpenCV matrices:
void GetHeliosImage(Arena::IDevice* pHeliosDevice, cv::Mat& intensity_image, cv::Mat& xyz_mm)
{
	// Set the pixel format to Coord3D_ABCY16 (4-channel XYZ + intensity, 16 bits per channel)
	Arena::SetNodeValue<gcstring>(pHeliosDevice->GetNodeMap(), "PixelFormat", "Coord3D_ABCY16");

 	// Read the scale factor and offsets to convert from unsigned 16-bit values 
 	// in the Coord3D_ABCY16 pixel format to coordinates in mm
 	GenApi::INodeMap* node_map = pHeliosDevice->GetNodeMap();
 	double xyz_scale_mm = Arena::GetNodeValue<double>(node_map, "Scan3dCoordinateScale");
	Arena::SetNodeValue<gcstring>(node_map, "Scan3dCoordinateSelector", "CoordinateA");
	double x_offset_mm = Arena::GetNodeValue<double>(node_map, "Scan3dCoordinateOffset");
	Arena::SetNodeValue<gcstring>(node_map, "Scan3dCoordinateSelector", "CoordinateB");
	double y_offset_mm = Arena::GetNodeValue<double>(node_map, "Scan3dCoordinateOffset");
	Arena::SetNodeValue<gcstring>(node_map, "Scan3dCoordinateSelector", "CoordinateC");
	double z_offset_mm = Arena::GetNodeValue<double>(node_map, "Scan3dCoordinateOffset");

	pHeliosDevice->StartStream();
	Arena::IImage* image = pHeliosDevice->GetImage(2000);

	size_t height, width;
	height = image->GetHeight();
	width = image->GetWidth();

	xyz_mm = cv::Mat((int)height, (int)width, CV_32FC3);
	intensity_image = cv::Mat((int)height, (int)width, CV_16UC1);

	const uint16_t* input_data;
	input_data = (uint16_t*)image->GetData();

	for (unsigned int ir = 0; ir < height; ++ir)
	{
		for (unsigned int ic = 0; ic < width; ++ic)
		{
			// Get unsigned 16 bit values for X,Y,Z coordinates
			ushort x_u16 = input_data[0];
			ushort y_u16 = input_data[1];
			ushort z_u16 = input_data[2];

			// Convert 16-bit X,Y,Z to float values in mm
			xyz_mm.at<cv::Vec3f>(ir, ic)[0] = (float)(x_u16 * xyz_scale_mm + x_offset_mm);
			xyz_mm.at<cv::Vec3f>(ir, ic)[1] = (float)(y_u16 * xyz_scale_mm + y_offset_mm);
			xyz_mm.at<cv::Vec3f>(ir, ic)[2] = (float)(z_u16 * xyz_scale_mm + z_offset_mm);

			intensity_image.at<ushort>(ir, ic) = input_data[3]; // Intensity value

			input_data += 4;
		}
	}

	// Return the image buffer to the acquisition engine, then stop the stream
	pHeliosDevice->RequeueBuffer(image);
	pHeliosDevice->StopStream();

	// Optional: Show the Helios intensity image
	//cv::imshow("HLS Intensity", intensity_image);
	//cv::waitKey(0);
}

Grab an image from the Triton, which can be done with the following function:

void GetTritonRGBImage(Arena::IDevice* pDeviceTriton, cv::Mat& triton_rgb)
{
	Arena::SetNodeValue<gcstring>(pDeviceTriton->GetNodeMap(), "PixelFormat", "RGB8");

	pDeviceTriton->StartStream();
	Arena::IImage* image = pDeviceTriton->GetImage(2000);
	size_t height, width;
	height = image->GetHeight();
	width = image->GetWidth();

	triton_rgb = cv::Mat((int)height, (int)width, CV_8UC3);

	// Copy image data from Arena image buffer to opencv Mat buffer
	memcpy(triton_rgb.data, image->GetData(), height * width * 3);

	// Return the image buffer to the acquisition engine, then stop the stream
	pDeviceTriton->RequeueBuffer(image);
	pDeviceTriton->StopStream();
}
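
The two grab functions above expect already-created Arena devices. For context, a rough sketch of the surrounding setup is shown below. It assumes the Helios and Triton are the only two cameras found during enumeration (indices 0 and 1); in practice, select devices by model name or serial number:

#include "ArenaApi.h"
#include <opencv2/core.hpp>
#include <vector>

int main()
{
	// Open the Arena system and enumerate connected cameras
	Arena::ISystem* pSystem = Arena::OpenSystem();
	pSystem->UpdateDevices(1000);
	std::vector<Arena::DeviceInfo> device_infos = pSystem->GetDevices();

	// Assumption: index 0 is the Helios and index 1 is the Triton
	Arena::IDevice* pHeliosDevice = pSystem->CreateDevice(device_infos[0]);
	Arena::IDevice* pTritonDevice = pSystem->CreateDevice(device_infos[1]);

	// GetHeliosImage() and GetTritonRGBImage() are the functions defined above
	cv::Mat intensity_image, xyz_mm, triton_rgb;
	GetHeliosImage(pHeliosDevice, intensity_image, xyz_mm);
	GetTritonRGBImage(pTritonDevice, triton_rgb);

	// ... circle finding and solvePnP() go here (see below) ...

	pSystem->DestroyDevice(pHeliosDevice);
	pSystem->DestroyDevice(pTritonDevice);
	Arena::CloseSystem(pSystem);
	return 0;
}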

The following function finds circles in an image of the calibration chart; it will be used to find matching points between the Helios intensity image and the Triton image:

#include "opencv2\calib3d.hpp"

// Index the circles in the calibration target:
void LucidIndexCalibrationCircles(const cv::Mat& image_in, std::vector<cv::Point2f>& grid_centers)
{
	cv::SimpleBlobDetector::Params bright_params;
	bright_params.filterByColor = true;
	bright_params.blobColor = 255; // white circles in the calibration target
	bright_params.thresholdStep = 2;
	bright_params.minArea = 10.0;  // Min/max area can be adjusted based on size of dots in image
	bright_params.maxArea = 1000.0;
	cv::Ptr<cv::SimpleBlobDetector> blob_detector = cv::SimpleBlobDetector::create(bright_params);

	// pattern_size(num_cols, num_rows)
	//       num_cols: number of columns (number of circles in a row) of the calibration target viewed by the camera
	//       num_rows: number of rows (number of circles in a column) of the calibration target viewed by the camera
	// Specify according to the orientation of the calibration target
	cv::Size pattern_size(5, 4);

	// Find max value in input image
	double min_value, max_value;
	cv::minMaxIdx(image_in, &min_value, &max_value);

	// Scale image to 8-bit, using full 8-bit range
	cv::Mat image_8bit;
	image_in.convertTo(image_8bit, CV_8U, 255.0 / max_value);

	bool is_found = cv::findCirclesGrid(image_8bit, pattern_size, grid_centers, cv::CALIB_CB_SYMMETRIC_GRID, blob_detector);
	printf("Found %d circle centers.\n", grid_centers.size());
	
	// Display image with the found grid (optional)
	cv::Mat image_bgr;
	cv::cvtColor(image_8bit, image_bgr, cv::COLOR_GRAY2BGR);
	cv::drawChessboardCorners(image_bgr, pattern_size, cv::Mat(grid_centers), is_found);
	cv::namedWindow("Detected circles", cv::WINDOW_NORMAL);
	cv::imshow("Detected circles", image_bgr);	
	cv::waitKey(0);
}

Once you have grabbed images of the calibration chart with the Helios and Triton, the following code can be used to find the dots in each image and call solvePnP() to find the rotation and translation from the Helios coordinate system to the Triton’s native coordinate system:

	// Find circles in Helios and Triton images and solve for position and pose of Helios relative to Triton with solvePNP()
	// Inputs: image of calibration target from Helios (intensity_image), image of calibration target from Triton (triton_rgb), and triton lens calibration results (cameraMatrix, distCoeffs)

	std::vector<cv::Point2f> grid_centers_1, grid_centers_2;

	// convert Triton image to mono for dot finding
	cv::Mat triton_mono_image;
	cv::cvtColor(triton_rgb, triton_mono_image, cv::COLOR_RGB2GRAY);

	// Find corresponding points in intensity images
	LucidIndexCalibrationCircles(intensity_image, grid_centers_1);
	LucidIndexCalibrationCircles(triton_mono_image, grid_centers_2);

	/* Get XYZ values from the Helios 3D image that match 2D locations on the triton (corresponding points) */
	std::vector<cv::Point3f> target_points_3d_mm;
	std::vector<cv::Point2f> target_points_3d_pix;
	std::vector<cv::Point2f> target_points_2d_pix;
	unsigned int total_points = 0;

	// Note: both images must contain the full grid, so that
	// grid_centers_1.size() == grid_centers_2.size()
	for (size_t ind = 0; ind < grid_centers_2.size(); ind++)
	{
		unsigned int c1 = (unsigned int)round(grid_centers_1[ind].x);
		unsigned int r1 = (unsigned int)round(grid_centers_1[ind].y);
		unsigned int c2 = (unsigned int)round(grid_centers_2[ind].x);
		unsigned int r2 = (unsigned int)round(grid_centers_2[ind].y);

		float pt_x = xyz_mm.at<cv::Vec3f>(r1, c1)[0];
		float pt_y = xyz_mm.at<cv::Vec3f>(r1, c1)[1];
		float pt_z = xyz_mm.at<cv::Vec3f>(r1, c1)[2];

		cv::Point3f xyz_pt1(pt_x, pt_y, pt_z);
		std::cout << "xyz_pt1: " << xyz_pt1 << std::endl;

		target_points_3d_mm.push_back(xyz_pt1);
		target_points_3d_pix.push_back(grid_centers_1[ind]);
		target_points_2d_pix.push_back(grid_centers_2[ind]);
		total_points++;
	}

	cv::Mat rotation_vector, translation_vector;

	bool success = cv::solvePnP(target_points_3d_mm,
		target_points_2d_pix,
		cameraMatrix,
		distCoeffs,
		rotation_vector,
		translation_vector);

The output of the above code block is two vectors, rotation_vector and translation_vector, each with three elements. Together they completely describe the transformation from Helios 3D coordinates to the Triton camera’s native coordinate system. This completes the calibration of the system.
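
Since this calibration only needs to be performed once for a fixed camera mounting, it is convenient to save the results. The sketch below (file name and key names are our own choices) stores everything with OpenCV’s FileStorage, and also shows cv::Rodrigues() for converting the rotation vector into a 3×3 rotation matrix if your pipeline needs one:

	// Optional: convert the 3-element rotation vector to a 3x3 rotation matrix
	cv::Mat rotation_matrix;
	cv::Rodrigues(rotation_vector, rotation_matrix);

	// Save all calibration results so the target is not needed again
	cv::FileStorage fs("helios_triton_calibration.yml", cv::FileStorage::WRITE);
	fs << "cameraMatrix" << cameraMatrix;
	fs << "distCoeffs" << distCoeffs;
	fs << "rotation_vector" << rotation_vector;
	fs << "translation_vector" << translation_vector;
	fs.release();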

The orientation of the two cameras must match our recommendation in “Placement of Cameras”, because the OpenCV function indexes the circles in the grid from top left to bottom right. If you need to use different camera orientations, you will need to reorder the points returned by LucidIndexCalibrationCircles(), as sketched below.
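
As a sketch of one specific case (not a general solution): if one camera is mounted rotated 180° relative to the other, its grid is indexed in the opposite order, and reversing one of the point lists before the correspondence loop restores the matching:

	// Hypothetical example: one camera rotated 180 degrees relative to the other,
	// so its circle grid is detected in reverse order (requires <algorithm>)
	std::reverse(grid_centers_1.begin(), grid_centers_1.end());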

For additional information on these steps, see the solvePnP() documentation below, taken from opencv.org (Click here for original OpenCV link):

C++: bool solvePnP(InputArray objectPoints, InputArray imagePoints, InputArray cameraMatrix, InputArray distCoeffs, OutputArray rvec, OutputArray tvec, bool useExtrinsicGuess=false, int flags=ITERATIVE )
Python: cv2.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess[, flags]]]]) → retval, rvec, tvec
C: void cvFindExtrinsicCameraParams2(const CvMat* object_points, const CvMat* image_points, const CvMat* camera_matrix, const CvMat* distortion_coeffs, CvMat* rotation_vector, CvMat* translation_vector, int use_extrinsic_guess=0 )
Python: cv.FindExtrinsicCameraParams2(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec, useExtrinsicGuess=0) → None
Parameters:
  • objectPoints – Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3f> can be also passed here.
  • imagePoints – Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2f> can be also passed here.
  • cameraMatrix – Input camera matrix A = [f_x, 0, c_x; 0, f_y, c_y; 0, 0, 1].
  • distCoeffs – Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
  • rvec – Output rotation vector (see Rodrigues() ) that, together with tvec , brings points from the model coordinate system to the camera coordinate system.
  • tvec – Output translation vector.
  • useExtrinsicGuess – If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
  • flags – Method for solving a PnP problem:
    • CV_ITERATIVE Iterative method is based on Levenberg-Marquardt optimization. In this case the function finds such a pose that minimizes reprojection error, that is the sum of squared distances between the observed projections imagePoints and the projected (using projectPoints() ) objectPoints .
    • CV_P3P Method is based on the paper of X.S. Gao, X.-R. Hou, J. Tang, H.-F. Chang “Complete Solution Classification for the Perspective-Three-Point Problem”. In this case the function requires exactly four object and image points.
    • CV_EPNP Method has been introduced by F.Moreno-Noguer, V.Lepetit and P.Fua in the paper “EPnP: Efficient Perspective-n-Point Camera Pose Estimation”.

The function estimates the object pose given a set of object points, their corresponding image projections, as well as the camera matrix and the distortion coefficients.

Note: An example of how to use solvePNP for planar augmented reality can be found at opencv_source_code/samples/python2/plane_ar.py

Project 3D Points to RGB Image

With the system calibrated, we can now remove the calibration target from the scene and grab new images with the Helios and Triton cameras, using the calibration result to find the RGB color for each 3D point measured with the Helios. Based on the output of solvePnP we can project the 3D points measured by the Helios onto the RGB camera image using the OpenCV function projectPoints.

Grab a Helios image with the GetHeliosImage() function (output: xyz_mm) and a Triton RGB image with the GetTritonRGBImage() function (output: triton_rgb). The following code shows how to project the Helios XYZ points onto the Triton image, giving a (row,col) position for each 3D point. We can then sample the Triton image at that (row,col) position to find each 3D point’s RGB value.

	// Convert the Helios xyz values from the 640x480 image to an Nx1 matrix to feed into projectPoints
	int N = xyz_mm.rows * xyz_mm.cols;
	cv::Mat xyz_points = xyz_mm.reshape(3, N);

	// Use projectPoints to find the position in the Triton image (row,col) of each Helios 3d point
	cv::Mat triton_image_positions;
	projectPoints(xyz_points, rotation_vector, translation_vector, cameraMatrix, distCoeffs, triton_image_positions);

	// Finally, loop through the set of points and access the Triton RGB image at the positions
	// calculated by projectPoints to find the RGB value of each 3D point
	for (int i = 0; i < N; i++)
	{
		// Use signed ints: casting a negative float to unsigned is undefined behavior
		int triton_col = (int)std::round(triton_image_positions.at<cv::Vec2f>(i)[0]);
		int triton_row = (int)std::round(triton_image_positions.at<cv::Vec2f>(i)[1]);

		// Check that the calculated (row,col) position falls within the Triton image
		if (triton_row >= 0 && triton_col >= 0 && triton_row < triton_rgb.rows && triton_col < triton_rgb.cols)
		{
			// Access the Triton rgb image at the (row,col) position to get the RGB values matching the Helios XYZ point
			uchar R = triton_rgb.at<cv::Vec3b>(triton_row, triton_col)[0];
			uchar G = triton_rgb.at<cv::Vec3b>(triton_row, triton_col)[1];
			uchar B = triton_rgb.at<cv::Vec3b>(triton_row, triton_col)[2];

			float X = xyz_points.at<cv::Vec3f>(i)[0];
			float Y = xyz_points.at<cv::Vec3f>(i)[1];
			float Z = xyz_points.at<cv::Vec3f>(i)[2];

			// Now you have the RGB values of a measured 3D Point at location (X,Y,Z).
			// Depending on your application you can do different things with these values,
			// for example, feed them into a point cloud rendering engine to view a 3D RGB image.
		}
	}
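
As one concrete example of what to do with these values, the sketch below (our own helper, not part of the pipeline above) writes the colored points to an ASCII PLY file that most point cloud viewers can open. Fill the two vectors inside the projection loop for every point that passes the bounds check:

#include <cstdio>
#include <vector>
#include <opencv2/core.hpp>

// Sketch: write XYZ (in mm) + RGB points to an ASCII PLY file
void WriteColoredPly(const char* path,
	const std::vector<cv::Point3f>& points,
	const std::vector<cv::Vec3b>& colors)
{
	FILE* f = fopen(path, "w");
	if (!f)
		return;

	// PLY header: one vertex element with position and color properties
	fprintf(f, "ply\nformat ascii 1.0\n");
	fprintf(f, "element vertex %zu\n", points.size());
	fprintf(f, "property float x\nproperty float y\nproperty float z\n");
	fprintf(f, "property uchar red\nproperty uchar green\nproperty uchar blue\n");
	fprintf(f, "end_header\n");

	for (size_t i = 0; i < points.size(); ++i)
	{
		fprintf(f, "%f %f %f %u %u %u\n",
			points[i].x, points[i].y, points[i].z,
			(unsigned)colors[i][0], (unsigned)colors[i][1], (unsigned)colors[i][2]);
	}
	fclose(f);
}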

For additional information on this step, see the projectPoints() documentation below, taken from opencv.org (Click here for original OpenCV link):

C++: void projectPoints(InputArray objectPoints, InputArray rvec, InputArray tvec, InputArray cameraMatrix, InputArray distCoeffs, OutputArray imagePoints, OutputArray jacobian=noArray(), double aspectRatio=0 )
Python: cv2.projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs[, imagePoints[, jacobian[, aspectRatio]]]) → imagePoints, jacobian
C: void cvProjectPoints2(const CvMat* object_points, const CvMat* rotation_vector, const CvMat* translation_vector, const CvMat* camera_matrix, const CvMat* distortion_coeffs, CvMat* image_points, CvMat* dpdrot=NULL, CvMat* dpdt=NULL, CvMat* dpdf=NULL, CvMat* dpdc=NULL, CvMat* dpddist=NULL, double aspect_ratio=0 )
Python: cv.ProjectPoints2(objectPoints, rvec, tvec, cameraMatrix, distCoeffs, imagePoints, dpdrot=None, dpdt=None, dpdf=None, dpdc=None, dpddist=None) → None
Parameters:
  • objectPoints – Array of object points, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel (or vector<Point3f> ), where N is the number of points in the view.
  • rvec – Rotation vector. See Rodrigues() for details.
  • tvec – Translation vector.
  • cameraMatrix – Camera matrix A = [f_x, 0, c_x; 0, f_y, c_y; 0, 0, 1].
  • distCoeffs – Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
  • imagePoints – Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, or vector<Point2f> .
  • jacobian – Optional output 2Nx(10+<numDistCoeffs>) jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients. In the old interface different components of the jacobian are returned via different output parameters.
  • aspectRatio – Optional “fixed aspect ratio” parameter. If the parameter is not 0, the function assumes that the aspect ratio (fx/fy) is fixed and correspondingly adjusts the jacobian matrix.

The function computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters. Optionally, the function computes Jacobians - matrices of partial derivatives of image points coordinates (as functions of all the input parameters) with respect to the particular parameters, intrinsic and/or extrinsic. The Jacobians are used during the global optimization in calibrateCamera(), solvePnP(), and stereoCalibrate() . The function itself can also be used to compute a re-projection error given the current intrinsic and extrinsic parameters.

Note: By setting rvec=tvec=(0,0,0) or by setting cameraMatrix to a 3x3 identity matrix, or by passing zero distortion coefficients, you can get various useful partial cases of the function. This means that you can compute the distorted coordinates for a sparse set of points or apply a perspective transformation (and also compute the derivatives) in the ideal zero-distortion setup.
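
As the documentation above notes, projectPoints() can also be used to check the quality of the joint calibration by reprojecting the solvePnP() input points and measuring the pixel error. A minimal sketch, reusing the variable names from the earlier code blocks (requires <cmath>):

	// Reproject the 3D calibration points and compare against the detected 2D circle centers
	std::vector<cv::Point2f> reprojected;
	cv::projectPoints(target_points_3d_mm, rotation_vector, translation_vector,
		cameraMatrix, distCoeffs, reprojected);

	double total_error_px = 0.0;
	for (size_t i = 0; i < reprojected.size(); ++i)
	{
		double dx = reprojected[i].x - target_points_2d_pix[i].x;
		double dy = reprojected[i].y - target_points_2d_pix[i].y;
		total_error_px += std::sqrt(dx * dx + dy * dy);
	}
	printf("Mean reprojection error: %.2f px\n", total_error_px / reprojected.size());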

Conclusion

The overall procedure for creating a color point cloud is based on OpenCV’s “Camera Calibration and 3D Reconstruction” documentation. After the intrinsic and pinhole calibration, we use solvePnP to find the rotation and translation between the cameras. Once we know the rotation and translation, we can use projectPoints to overlay the color data onto the points.

[Figure: Color overlay example. Left: Normal Helios 3D point cloud. Right: Triton color data overlaid on the 3D point cloud]

[Figure: Triton + Helios setup]