
Matrix Studio User Manual


Table of Contents

  1. Prerequisites
  2. Certificate Installation
  3. Docker Image Setup
  4. Web Interface Guide
  5. Data Introduction
  6. Troubleshooting

1. Prerequisites

System Requirements

  • Operating System: Linux
  • Docker
  • GPU: Required (NVIDIA GPU recommended)
  • Memory: Minimum 4GB RAM (8GB recommended)
  • Storage: 2GB available space

Network Requirements

  • HTTPS access required for web interface

2. Certificate Installation

Step-by-Step Guide

2.1 Download Certificate File

Download the certificate file to your local machine.

```bash
wget https://huggingface.co/datasets/genrobot2025/cert/resolve/main/arnold.crt
```

2.2 Install the Certificate

```bash
sudo mkdir -p /usr/local/share/ca-certificates/extra
sudo cp arnold.crt /usr/local/share/ca-certificates/extra/
sudo update-ca-certificates
```
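The steps above follow the Debian/Ubuntu `ca-certificates` layout. As a minimal sketch, a small helper can confirm the copy succeeded before you rely on the certificate (the `cert_installed` helper is ours, not part of any official tooling):

```shell
# Hypothetical helper: report whether a certificate file is present in a
# ca-certificates directory. Prints "present" or "missing".
cert_installed() {
  cert_dir="$1"
  cert_name="$2"
  if [ -f "$cert_dir/$cert_name" ]; then
    echo "present"
  else
    echo "missing"
  fi
}

# Check the default Debian/Ubuntu location used above:
cert_installed /usr/local/share/ca-certificates/extra arnold.crt
```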

3. Docker Image Setup

3.1 Pull Docker Image

We publish the image to both Docker Hub and Volcanic Cloud; pull from whichever registry is faster for your network:

```bash
# Latest version on Docker Hub
docker pull genrobot/matrix-studio:0.2.13

# Latest version on Volcanic Cloud
docker pull imagepublic.genrobotai.com/genrobot/matrix-studio:0.2.13
```
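If you script the pull, a small helper can build the fully qualified image reference from a registry prefix. This is an illustrative convenience function, not part of the product:

```shell
# Build a fully qualified image reference. An empty registry means Docker Hub.
image_ref() {
  registry="$1"   # e.g. "" or "imagepublic.genrobotai.com"
  name="$2"       # e.g. "genrobot/matrix-studio"
  tag="$3"        # e.g. "0.2.13"
  if [ -n "$registry" ]; then
    echo "$registry/$name:$tag"
  else
    echo "$name:$tag"
  fi
}

image_ref "" genrobot/matrix-studio 0.2.13   # -> genrobot/matrix-studio:0.2.13
# docker pull "$(image_ref imagepublic.genrobotai.com genrobot/matrix-studio 0.2.13)"
```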

3.2 Get the Start Scripts

Script 1: Studio start script

```bash
wget https://huggingface.co/datasets/genrobot2025/studio/resolve/main/start_studio.sh
```

Script 2: Ground-truth SDK start script

```bash
wget https://huggingface.co/datasets/genrobot2025/studio/resolve/main/start_studio_sdk.sh
```

3.3 Launch Studio

3.3.1 Prerequisites

  1. Create a data directory in any location of your choice. This directory will store your recorded MCAP files.
  2. Place the start_studio.sh script in the same directory as (next to, not inside) the data folder.
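The layout described above can be sketched as follows. The base directory path here is only an example; any location works:

```shell
# Create an example working directory with a data/ folder next to the script.
make_layout() {
  base="$1"
  mkdir -p "$base/data"   # recorded MCAP files go here
  # Place start_studio.sh in $base, next to data/ (not inside it), e.g.:
  #   wget -P "$base" https://huggingface.co/datasets/genrobot2025/studio/resolve/main/start_studio.sh
}

make_layout "$HOME/matrix-studio"
ls "$HOME/matrix-studio"
```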


3.3.2 Starting the Studio

Run the script with --help to view all configuration options:

```bash
bash start_studio.sh --help
```


Method 1: Localhost Mode (Default)

```bash
bash start_studio.sh --image-name [docker_image]:[tag]
```

The Studio automatically runs in localhost mode. Access the web interface at:

```
http://localhost:5500
```

This mode supports local browser access to the data processing platform. You can specify a custom Docker image and tag via --image-name, or modify the default image in the start_studio.sh script.
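After launch, one way to confirm the web interface is responding is a quick probe (assumes `curl` is available; the `check_url` wrapper is our own convenience function, not part of the start script):

```shell
# Print "reachable" if the URL answers within 5 seconds, else "not reachable".
check_url() {
  if curl -fsS -m 5 "$1" >/dev/null 2>&1; then
    echo "reachable"
  else
    echo "not reachable"
  fi
}

# check_url http://localhost:5500
```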

Method 2: HTTPS Mode (Network Access)

```bash
bash start_studio.sh --image-name [docker_image]:[tag] --server-ip [host_ip] --server-https true
```

  1. Starts the backend with HTTPS enabled.
  2. Access the web interface at:

```
https://[host_ip]:5501
```

  3. This mode allows other computers on the same network to connect to the platform using the specified IP address.
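To find the host IP to pass as --server-ip, on Linux you can take the first address reported by `hostname -I` (the `first_ip` helper below is ours, introduced for illustration):

```shell
# Pick the first whitespace-separated field, e.g. from `hostname -I`.
first_ip() {
  awk '{print $1; exit}'
}

# HOST_IP=$(hostname -I | first_ip)
# bash start_studio.sh --image-name genrobot/matrix-studio:0.2.13 \
#   --server-ip "$HOST_IP" --server-https true
echo "192.168.1.23 172.17.0.1" | first_ip   # -> 192.168.1.23
```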

Method 3: Use the Ground-Truth Algorithm in Docker

  1. Enter the Docker container:

```bash
bash start_studio_sdk.sh --image-name [docker_image]:[tag]
```

  2. Run the VIO processing script:

Single-hand:

```bash
bash /app/scripts/process_mcap_inner.sh [input_mcap_path] --output-dir [output_dir] --device-version [version]
```

Example:

```bash
bash /app/scripts/process_mcap_inner.sh /app/data/test.mcap --output-dir /app/data/output --device-version v2
```

Dual-hand:

```bash
bash /app/scripts/process_mcap_inner.sh [left_device_path] [right_device_path] --output-dir [output_dir] --device-version [version]
```

Example:

```bash
bash /app/scripts/process_mcap_inner.sh /app/data/left.mcap /app/data/right.mcap --output-dir /app/data/output --device-version v4
```

• --output-dir is optional. By default, results are saved to /app/data/output.
• --device-version is optional and defaults to v2.
• The results include the processed MCAP files and corresponding JSON files; use the JSON files to verify whether VIO succeeded.
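This manual does not document the schema of the result JSON, so as a minimal check you can pretty-print it and confirm it parses (assumes python3 is available inside the container or on the host; the `show_vio_result` wrapper is illustrative):

```shell
# Pretty-print a VIO result file; fails loudly if the JSON is malformed.
show_vio_result() {
  python3 -m json.tool "$1"
}

# show_vio_result /app/data/output/vio_result.json
```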

To stop the Studio:

```bash
docker stop matrix-studio
```
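If you script the shutdown, you can confirm the container is really gone by filtering the output of `docker ps` (the `is_running` helper is our own sketch, not part of the product):

```shell
# Read container names on stdin; print "running" if NAME is among them.
is_running() {
  if grep -qx "$1"; then echo "running"; else echo "stopped"; fi
}

# docker ps --format '{{.Names}}' | is_running matrix-studio
printf 'matrix-studio\n' | is_running matrix-studio   # -> running
```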

4. Web Interface Guide

4.1 User Login


When you open the URL in your browser, you will be prompted for a username and password. These credentials are sent along with your order confirmation; please contact our sales team to obtain them.

4.2 Main Dashboard

Core functional modules:
• Data Viz - Data visualization, exploration, and interactive chart analysis
• Sensor Calibration - Multi-source sensor parameter calibration
• Ground Truth Production - High-quality annotation and dataset production

4.3 Data Page

The Data Page is the main interface for data processing, offering the following functionalities:
1. Display all MCAP data in the local data directory, including:
• Token: A unique identifier for each dataset.
• FileName: The MCAP data filename. Clicking it opens the Monitor for data visualization.
• VioState: The processing status of the ground truth.
• Action: Run the VIO trajectory-recovery algorithm on single or multiple selected datasets.
2. Run the VIO algorithm (single/batch) - the main workflow:
• Select one or more datasets.
• Click the "Batch Generate Trajectory" button.
• Enter a name for the processing task.
• (Optional) Select whether to use the URDF solver (feature in development).
• Select the device type (default: DAS gripper).
• Select the version type (confirm the DAS version with after-sales personnel if needed).
• Select the task type: single (single-arm) or dual (dual-arm).
• Click OK to start ground-truth data processing.
3. Ground-truth processing:
• Monitor progress via the VioState column, or check the detailed status on the GroundTruth Details Page.
• More detailed features will be rolled out via OTA updates.

4. View and handle data:
• Open a specific data visualization from the GroundTruth Details Page.
• Processed data is saved in the data/output/[task_name] directory.
• The file vio_result.json contains the paths of the successfully processed data.

4.4 GroundTruth Page

The GroundTruth Page primarily displays the processed ground truth data, which mainly consists of trajectory data. This page supports the following functionalities:
1. View Ground Truth Generation Status & Details: Check the status and reasons for ground truth generation to help identify and troubleshoot issues.
2. Visualize ground truth in the Monitor: Click the TruthValue to open the Monitor for data visualization.

• After opening the Monitor, you can view the original information via the /robot0/vio/eef_pose topic (for specific operations, refer to section 4.5).
• In the 3D Panel, you can select the robot trajectory to view the recovered trajectory ground truth.
• If the URDF Solver mode was used, you will see the Franka robot arm (with future support for custom URDF uploads) and its end-effector trajectory.

4.5 Data viz Features

4.5.1 Main Dashboard

Functions:
• Open Local Files - Open and load local data files into the system.
• Monitor - Click the Monitor button to visualize sensor data.

4.5.2 Monitor Instructions

1. Topic Selection & Data Type Configuration:

Functions:
• Topic Display Type Selection - Choose from text, image, icon, 3D model, and other visualization formats.
• Flexible Layout Adjustment - Add new panels to the right or bottom, delete them, or reposition them.
• Topic Selection by Name - Filter and select data streams by topic name.

2. 3D Visualization Panel:

Functions:
• URDF Model Rendering - Display and manipulate robot URDF models.
• Ground Truth Trajectory Visualization - Visualize reference trajectories and paths.
• Visualization Tools - Viewpoint control and measurement tools.

3. Bag File Detail Interface:

Functions:
• MCAP Data Summary - Channel lists, message counts, and duration statistics.
• Topic Timing - Frame-rate analysis with per-topic timestamp visualization.

5. Data Introduction

This section provides detailed documentation for the data recorded in the MCAP format, including the data structure, topic list, coordinate frame definitions, and data conversion tools.
1. Data Format: MCAP
The system uses the MCAP file format (https://mcap.dev/) as the primary container for all recorded sensor data and internal states.
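Outside the Studio, the mcap CLI from mcap.dev can summarize a recording. This assumes you have installed the CLI separately; it is not stated to be bundled with the Studio image, and the wrapper function below is our own sketch:

```shell
# Summarize an MCAP file if the mcap CLI is available; otherwise say so.
mcap_summary() {
  if command -v mcap >/dev/null 2>&1; then
    mcap info "$1"
  else
    echo "mcap CLI not installed"
  fi
}

# mcap_summary /app/data/test.mcap
```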
2. Topic List
Main topics:

| Topic | Description | Frequency |
| --- | --- | --- |
| /robot0/sensor/camera0/compressed | Mid camera image stream, H.264 encoded | 30 Hz |
| /robot0/sensor/camera1/compressed | Left camera image stream, H.264 encoded | 30 Hz |
| /robot0/sensor/camera2/compressed | Right camera image stream, H.264 encoded | 30 Hz |
| /robot0/sensor/imu | IMU info | 200 Hz |
| /robot0/sensor/magnetic_encoder | Gripper distance (m) | 50 Hz |
| /robot0/vio/eef_pose | VIO end-effector pose | 30 Hz |
| /robot0/vio/relative_eef_pose | Relative VIO end-effector pose | 30 Hz |
| /robot0/sim/robot_info | URDF robot info: end-effector pose, joints | 30 Hz |
| /robot0/sensor/camera0/camera_info | Calibration info | 30 Hz |
| /robot0/sensor/tactile_left | Tactile info | 50 Hz |
| /robot0/sensor/tactile_right | Tactile info | 50 Hz |

3. Coordinate Frames
The system adheres to the right-hand rule convention for all coordinate frames.

World Frame Definition:
• Origin: Optical center of the middle camera (camera0).
• Z-axis (blue): The anti-gravity direction.
• X-axis (red): The camera's optical axis projected onto the horizontal plane (the plane orthogonal to the Z-axis).
• Y-axis (green): Orthogonal to both the X and Z axes, uniquely determined by the right-hand rule, pointing to the left.

Local Frame Definition:
• Origin: The optical center of the central camera (camera0).
• X-axis (red): The direction of movement, perpendicular to camera0.
• Y-axis (green): Perpendicular to the X-axis, uniquely determined by the right-hand rule, pointing to the left.
• Z-axis (blue): Perpendicular to the X-Y plane, pointing upwards.

The /robot0/vio/eef_pose topic records the pose of the DAS-Gripper in the world coordinate system, as shown in the "Tilted state" diagram below.
The /robot0/vio/relative_eef_pose topic records pose changes relative to time 0, using the DAS-Gripper's pose at time 0 as the origin of the world coordinate system, as shown in the "Aligned state" diagram below.
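Reading the two topics together: if T(t) is the world-frame pose from /robot0/vio/eef_pose, the relative topic corresponds, under the usual relative-pose convention (our assumption; the exact formula is not stated in this manual), to:

```latex
T_{\mathrm{rel}}(t) = T(0)^{-1}\, T(t), \qquad T_{\mathrm{rel}}(0) = I
```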

[Diagram: Tilted state vs. Aligned state axes]

Due to DAS-Gripper version updates, the following are the coordinate system definitions for different versions:

Version Update changelog(including URDF): https://zcnma1sv5kma.feishu.cn/wiki/CKpbwye45iOlrckukIPc5hKCndh

DAS-Gripper V3 / V4 Diagram:

World frame: [diagram]

Local frame: [diagram]

DAS-Gripper V2 Diagram:

World frame: [diagram]

Local frame: [diagram]

4. Data Convert SDK
This toolkit converts the native MCAP data into other popular research and development formats:

  1. Decode MCAP files
  2. Convert MCAP files to HDF5 (.h5)
  3. Convert ground-truth results (vio_result.json files) to HDF5 (.h5)

Link: https://github.com/genrobot-ai/das-datakit

6. Troubleshooting

  1. The 3D interface fails to display when opening the Monitor in the Chrome browser.
    Solution: Open chrome://flags in the browser, search for "WebGL", and enable the related features.

GenRobot AI Documentation