Getting started
https://ubicoders.github.io/vrobots-book/
1. Ubicoders Virtual Robots Simulation
The simulation is designed primarily for testing control, sensor fusion, and data collection for autonomous navigation. It streams multi-robot data in real time to Python scripts.
2. What this is "NOT"
This simulation does not support custom robot design or imports from CAD.
3. Supported virtual robots
- Multirotor
- Car
- Omni-directional rover
- Helicopter
See the Supported Virtual Robots chapter for more details.
4. Installation and hello-world example
Please follow the next page.
⚙️ Installation
💻 Supported Operating Systems
- ✅ Windows 11
- ✅ Ubuntu 22.04+
- ⬜ macOS (not supported yet)
🖥️ Recommended Hardware
- RAM: 32 GB
- GPU: NVIDIA RTX Series (recommended for best performance)
📦 Download
Prebuilt packages are available at: https://www.ubicoders.com/virtualrobots
Setup - Windows
Just run "virtual_robots.exe"
Setup - Ubuntu
sudo apt install xdg-utils -y
sudo chmod +x ./virtual_robots.x86_64
./virtual_robots.x86_64
or
# run "virtual_robots.x86_64" by double clicking it.
Setup - WSL
Install Vulkan (WSL graphics bridge)
sudo apt update
sudo apt install xdg-utils -y
sudo apt install mesa-vulkan-drivers vulkan-tools -y
Check Vulkan
vulkaninfo | grep "Vulkan Instance Version"
vkcube
sudo chmod +x ./virtual_robots.x86_64
Run
nohup ./virtual_robots.x86_64 -force-vulkan > output.log 2>&1 &
🐍 Python Client (all platforms)
Install the Python IPC client:
pip install ubicoders-vrobots-ipc
concept
Background
Ubicoders' Virtual Robots simulation is based on the publish-subscribe pattern.
Default publishers
Each virtual robot in the sim publishes topics such as
- vr/[system_id]/states
- vr/[system_id]/cam/left/720p
Default subscribers
By default, the sim subscribes to command inputs such as
- vr/[system_id]/cmd
Service request
A service is a one-time request-response transport.
The sim has a default service server at
- vr/service
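To make the pattern concrete, here is a minimal sketch using plain Zenoh (the library the Python client is built on; see the Pure Zenoh chapter) to subscribe to the default states topic of robot 0 and print how many bytes each sample carries. The payload is a FlatBuffers blob; decoding it is what the ubicoders-vrobots-ipc client does for you in the following chapters, so this sketch only confirms that data is arriving.

import time
import zenoh

# Open a Zenoh session and subscribe to the sim's default states topic (sysId 0).
conf = zenoh.Config()
session = zenoh.open(conf)

def on_states(sample):
    # The payload is a FlatBuffers-encoded states message; just report its size here.
    print(f"{sample.key_expr}: {len(sample.payload.to_bytes())} bytes")

sub = session.declare_subscriber("vr/0/states", on_states)

try:
    while True:
        time.sleep(1)  # keep the subscriber alive
except KeyboardInterrupt:
    pass
session.close()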
hello-world
Assumption
The terminal has access to the Python environment, i.e.
pip install ubicoders-vrobots-ipc
List the topics from Virtual Robots
Try this in the terminal.
list-topics
This will output something like the following.
Listing topics...
- [i] vr/0/cams/down/720p
- [i] vr/0/cams/left/720p
- [i] vr/0/cams/right/720p
- [z] vr/0/states
These are the topics currently published by the sim.
hello-states
Intro
This script assumes the simulation is running.
hello_vrobots
hello_states.py
from ubicoders_vrobots_ipc import VRobotNodeBase, vrobot_client_runner

class VRobotNode(VRobotNodeBase):
    def __init__(self, sysId: int = 0):
        super().__init__(sysId)

    # ======================================================
    # Like an Arduino script: setup() runs once, update() is the loop
    # Setup once here
    def setup(self):
        pass

    # This is called every iteration.
    def update(self):
        if self.read_new_states():
            ts = self.state.timestamp  # unix time in millis
            pos = self.state.linPos
            elapsed_sec = (ts - self.first_ts) / 1000.0  # in seconds
            print(f"State t={elapsed_sec:.3f} pos=({pos.x:.2f},{pos.y:.2f},{pos.z:.2f})")
    # ======================================================

if __name__ == "__main__":
    vrobot_client_runner([VRobotNode(sysId=0)])
hello-image
Intro
This script assumes the simulation is running.
hello_image
hello_image.py
Additional pip package
This example uses OpenCV.
pip install opencv-python
from ubicoders_vrobots_ipc import VRobotNodeBase, vrobot_client_runner, ImageResolution
import cv2

class VRobotNode(VRobotNodeBase):
    def __init__(self, sysId: int = 0):
        super().__init__(sysId)

    # ======================================================
    # Like an Arduino script: setup() runs once, update() is the loop
    # Setup once here
    def setup(self):
        cv2.namedWindow("CamLeft", cv2.WINDOW_NORMAL)  # make resizable window
        self.register_img_subscriber("left", ImageResolution.P720)

    # This is called every iteration.
    def update(self):
        if self.read_new_states():
            ts = self.state.timestamp  # unix time in millis
            pos = self.state.linPos
            elapsed_sec: float = (ts - self.first_ts) / 1000.0  # in seconds
            print(f"State t={elapsed_sec:.3f} pos=({pos.x:.2f},{pos.y:.2f},{pos.z:.2f})")
        if self.read_new_image("left"):
            img = self.imgStates["left"].image_data
            elapsed_sec: float = (self.imgStates["left"].ts - self.first_ts) / 1000.0  # in seconds
            print(f"Image Left t={elapsed_sec:.3f} size=({img.shape[1]}x{img.shape[0]}x{img.shape[2]})")
            cv2.imshow("CamLeft", img)
            cv2.waitKey(1)
    # ======================================================

if __name__ == "__main__":
    vrobot_client_runner([VRobotNode(sysId=0)])
hello-control
hello_control.py
from ubicoders_vrobots_ipc import VRobotNodeBase, vrobot_client_runner

class VRobotNode(VRobotNodeBase):
    def __init__(self, sysId: int = 0):
        super().__init__(sysId)

    # ======================================================
    # Like an Arduino script: setup() runs once, update() is the loop
    # Setup once here
    def setup(self):
        pass

    # This is called every iteration.
    def update(self):
        if self.read_new_states():
            ts = self.state.timestamp  # unix time in millis
            pos = self.state.linPos
            elapsed_sec = (ts - self.first_ts) / 1000.0  # in seconds
            print(f"State t={elapsed_sec:.3f} pos=({pos.x:.2f},{pos.y:.2f},{pos.z:.2f})")
            # Do some COOL control here and publish
            cool_control_result = [1501, 1501, 1501, 1501]
            self.update_cmd_multirotor(cool_control_result)
    # ======================================================

if __name__ == "__main__":
    vrobot_client_runner([VRobotNode(sysId=0)])
hello-service
Intro
One can request a service from the sim. For instance, the script below creates a new virtual robot in the sim.
hello_service.py
from ubicoders_vrobots_ipc import req_srv_mission
from ubicoders_vrobots_msgs import Vec3MsgT, MissionMsgT, VRSceneObjectT, SrvSimParamsMsgT
if __name__ == "__main__":
    pos = Vec3MsgT()
    pos.x, pos.y, pos.z = 53.6, 180.5, -0.3

    vrobot = VRSceneObjectT()
    vrobot.objectType = "multirotor"
    vrobot.position = pos

    mission = MissionMsgT()
    mission.vrobots = [vrobot]
    mission.newMission = False

    req_srv_mission(mission)
hello-rtg
Real Time Graph (RTG)
run realtime graph
rtg-sub -c 1
plot 3 axes
rtg-sub -c 3
options
rtg-sub --help
options:
-h, --help show this help message and exit
-c CHANNELS, --channels CHANNELS
Number of subplots (channels) [default: 3]
-f FPS, --fps FPS Qt redraw rate [default: 50] # recommend to keep it default
-b BUFFER_SECS, --buffer-secs BUFFER_SECS
Time window length in seconds [default: 6.0]
-t TOPIC, --topic TOPIC
Zenoh topic to subscribe to [default: vr/0/rtg]
--background BACKGROUND
Plot background color [default: 'k'] # recommended colors 'w' or 'k'
how to plot
The main script needs to publish the realtime values to plot. Keep rtg-sub running in one terminal, then open a new terminal and run hello_states.py after adding the three lines shown below.
updated "hello_states.py"
from ubicoders_vrobots_ipc import VRobotNodeBase, vrobot_client_runner
#===================================
# add this line
from ubicoders_vrobots_ipc import RTGPub
#===================================

class VRobotNode(VRobotNodeBase):
    def __init__(self, sysId: int = 0):
        super().__init__(sysId)

    def setup(self):
        #===================================
        # add this line
        self.rtg = RTGPub(topic_name=f"vr/{self.sysId}/rtg")
        #===================================

    def update(self):
        if self.read_new_states():
            ts = self.state.timestamp
            pos = self.state.linPos
            elapsed_sec = (ts - self.first_ts) / 1000.0
            print(f"State t={elapsed_sec} pos=({pos.x:.2f},{pos.y:.2f},{pos.z:.2f})")
            #===================================
            # add this line
            self.rtg.publish(elapsed_sec, [pos.x, pos.y, pos.z])
            #===================================

if __name__ == "__main__":
    vrobot_client_runner([VRobotNode(sysId=0)])
Virtual Robot Node Template (Arduino-style)
This template mirrors the Arduino programming model—where setup() runs once and loop() runs every cycle—but implemented in Python for a virtual robot.
- setup(): one-time initialization.
- update(): called each iteration (the "loop").
Timing: The simulation backend tops out around 50 Hz, so the loop sleep is set to 0.02 seconds (20 ms).
Core Class: VRobotNode
Implements the Arduino-like lifecycle using VRobotNodeBase.
from ubicoders_vrobots_ipc.vrobot_node import VRobotNodeBase, vrobot_client_runner

class VRobotNode(VRobotNodeBase):
    def __init__(self, sysId: int = 0):
        super().__init__(sysId)

    # ======================================================
    # Arduino-like structure: setup() and update() (loop)
    # Called once at the beginning
    def setup(self):
        pass

    # Called every iteration
    def update(self):
        if self.read_new_states():
            ts = self.state.timestamp  # Unix time in millis
            pos = self.state.linPos
            elapsed_sec = (ts - self.first_ts) / 1000.0  # seconds
            print(f"State t={elapsed_sec:.3f}s pos=({pos.x:.3f},{pos.y:.3f},{pos.z:.3f})")
    # ======================================================

if __name__ == "__main__":
    vrobot_client_runner([VRobotNode(sysId=0)])
Main Loop: vrobot_client_runner()
Accepts a list of nodes for multi-robot (fleet/swarm) control.
import time
from typing import Any, List
from ubicoders_vrobots_ipc.vrobot_client import VRobotClient  # adjust import if needed

def vrobot_client_runner(vrobot_nodes: List[Any]):
    vrclient = VRobotClient()
    for node in vrobot_nodes:
        vrclient.add_vrobot_node(node)
    try:
        while True:
            vrclient.update()
            time.sleep(0.02)  # ~50 Hz (0.02 s = 20 ms)
    finally:
        print("Shutting down...")
        vrclient.shutdown()
Why 0.02 s? The game engine behind the simulator typically won’t exceed 50 Hz in the best case. Keeping the sleep at 0.02 seconds keeps your node in sync and avoids wasted CPU.
Example: One-Time Setup
Use setup() to create reusable state and register subscriptions only once.
import cv2
from ubicoders_vrobots_ipc.vrobot_node import VRobotNodeBase

class VRobotNode(VRobotNodeBase):
    def __init__(self, sysId: int = 0):
        super().__init__(sysId)

    def setup(self):
        # Example goal for a rover (x, y, z)
        self.position_setpoint = [1.0, 2.0, 3.0]
        # Configure a resizable OpenCV window for the left camera feed
        cv2.namedWindow("CamLeft", cv2.WINDOW_NORMAL)
        cv2.resizeWindow("CamLeft", 640, 360)
        # Subscribe once to the left camera stream published by this vrobot
        self.register_img_subscriber("left")

    def update(self):
        if self.read_new_states():
            # Control logic can reference self.position_setpoint here
            pass
Multi-Robot (Fleet) Example
Instantiate one node per robot with distinct sysIds. Note that this does not spawn the robots in the sim; to spawn them, use the mission service (see ch1/hello_service.html). It is therefore recommended to run the control script after your mission service request has created the robots. A spawn-then-control sketch follows the fleet example below.
if __name__ == "__main__":
    nodes = [
        VRobotNode(sysId=0),
        VRobotNode(sysId=1),
        VRobotNode(sysId=2),
    ]
    vrobot_client_runner(nodes)
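If the extra robots do not exist in the scene yet, you can spawn them first with the mission service and then start the fleet nodes. Below is a minimal sketch under that assumption; it mirrors the hello-service example (objectType "multirotor", one VRSceneObjectT per robot). The spawn coordinates are illustrative, and whether sysIds are assigned in spawn order is an assumption to verify against your scene.

from ubicoders_vrobots_ipc import req_srv_mission
from ubicoders_vrobots_msgs import Vec3MsgT, MissionMsgT, VRSceneObjectT

def spawn_multirotors(positions):
    # Build one scene object per requested robot.
    vrobots = []
    for (x, y, z) in positions:
        pos = Vec3MsgT()
        pos.x, pos.y, pos.z = x, y, z
        vrobot = VRSceneObjectT()
        vrobot.objectType = "multirotor"
        vrobot.position = pos
        vrobots.append(vrobot)

    mission = MissionMsgT()
    mission.vrobots = vrobots
    mission.newMission = False  # keep whatever is already in the scene
    req_srv_mission(mission)

if __name__ == "__main__":
    # Spawn three multirotors, then run the fleet control script above.
    spawn_multirotors([(53.6, 180.5, -0.3), (55.6, 180.5, -0.3), (57.6, 180.5, -0.3)])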
Where do states and images come from?
The vrobot state and image data arrive on topics published through Zenoh and Iceoryx2: Zenoh transports the robot states, while Iceoryx2 shared memory carries the image data published by the sim.
Listing topics...
- [i] vr/0/cams/down/720p
- [i] vr/0/cams/left/720p
- [i] vr/0/cams/right/720p
- [z] vr/0/states
Put simply, each VRobotNode subscribes to the data specific to its sysId, since the topics published by the sim follow a naming convention such as "vr/[sysId]/states".
Key Points
- Lifecycle: setup() once → update() repeatedly.
- Fleet-ready: Pass multiple nodes to vrobot_client_runner.
- Timing: Use a 0.02 s sleep (~50 Hz) for stable sim performance.
- Setup scope: Put one-time work (targets, subscriptions, UI windows) in setup().
Publishing robot data
more docs coming soon
Publishing Command Data in VRobotNode
Publishing command (cmd) data to control a robot is enabled through predefined publishers in VRobotNodeBase, which use FlatBuffers for structured messaging.
In the "hello-control" example, the robot receives commands because the node builds and publishes messages in the expected format.
Example: Sending Multirotor Commands
from ubicoders_vrobots_ipc.vrobot_node import VRobotNodeBase, vrobot_client_runner
import time
from typing import List
...

class VRobotNode(VRobotNodeBase):
    def __init__(self, sysId: int = 0):
        super().__init__(sysId)
        ...

    def update(self):
        ...
        cool_control_result = [1600, 1600, 1600, 1600]
        self.update_cmd_multirotor(cool_control_result)

if __name__ == "__main__":
    vrobot_client_runner([VRobotNode(sysId=0)])
Command Message Pipeline
The publishing flow follows three steps:
(1) Build command message → (2) Serialize with FlatBuffers → (3) Publish over Zenoh.
...
def update_cmd_multirotor(self, pwm: List[int]):
    self.cmdMsgT: CommandMsgT = CommandMsgT()
    self.cmdMsgT.timestamp = time.time() * 1e3
    self.cmdMsgT.cmdId = VROBOTS_CMDS.SET_PWM
    self.cmdMsgT.sysId = self.sysId
    self.cmdMsgT.intArr = [int(pwm[i]) for i in range(4)]
    self.build_and_publish_cmd()

def build_and_publish_cmd(self):
    builder = flatbuffers.Builder(512)
    os = self.cmdMsgT.Pack(builder)
    builder.Finish(os, bytes("CMD0", "utf-8"))
    payload = builder.Output()
    self.publish_cmd(payload)

def publish_cmd(self, cmd_data: bytes):
    self.zenoh_node.publish(f"vr/{self.sysId}/cmd", cmd_data)
...
Key Points
- Correct cmdId is essential: self.cmdMsgT.cmdId = VROBOTS_CMDS.SET_PWM tells the simulator that this is a multirotor PWM command. If the cmdId is wrong, the simulator ignores the input.
- System-specific expectations: Each robot type (multirotor, rover, etc.) expects a different command format. Even if sysId is correct, commands must match the expected type.
- Timestamping: The message includes a Unix timestamp in milliseconds to synchronize actions.
- Publishing channel: Commands are published to vr/[sysId]/cmd.
Command List
List of available commands.
multirotor-only command
def update_cmd_multirotor(self, pwm: List[int])
heli-only command
def update_cmd_heli(self, force: float)
car-only command
def update_cmd_car(self, torque, brake, steer)
omr-only command
def update_cmd_omrover(self, actuators: List[float])
For all virtual robots.
""" override the body frame force and torque """
def update_cmd_set_force_torque_body(
self,
fx: float,
fy: float,
fz: float,
tx: float,
ty: float,
tz: float,
)
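As an illustration of the force/torque override, the sketch below applies a constant body-frame vertical force roughly equal to the multirotor's weight (default mass 1 kg and g = 9.81 m/s², per the Supported Virtual Robots chapter) and no torque. The sign of fz depends on the simulator's body-frame convention, which is not documented here, so treat the negative value as an assumption to verify.

from ubicoders_vrobots_ipc import VRobotNodeBase, vrobot_client_runner

class VRobotNode(VRobotNodeBase):
    def __init__(self, sysId: int = 0):
        super().__init__(sysId)

    def setup(self):
        # Default multirotor mass (kg) and gravity (m/s^2) from the spec.
        self.mass = 1.0
        self.g = 9.81

    def update(self):
        if self.read_new_states():
            # Pure vertical force equal to the weight; zero torque.
            # NOTE: the sign convention (+z up vs. down) is an assumption.
            fz = -self.mass * self.g
            self.update_cmd_set_force_torque_body(0.0, 0.0, fz, 0.0, 0.0, 0.0)

if __name__ == "__main__":
    vrobot_client_runner([VRobotNode(sysId=0)])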
Publishing RTG Data
You can publish real-time graph (RTG) data from a node, and visualize it with the separate rtg-sub Python app.
For rtg-sub to parse and plot correctly, publish samples in the expected tuple format:
(ts_in_seconds, [var1, var2, ...])
- ts_in_seconds: floating-point seconds (not milliseconds).
- Variables: a list of numeric values (one or more series).
RTG Publisher in a Node
...
import time
from ubicoders_vrobots_ipc.rtg_pub import RTGPub
...

class VRobotNode(VRobotNodeBase):
    def __init__(self, sysId: int = 0):
        super().__init__(sysId)
        ...
        # Initialize RTG publisher (choose a topic/name that rtg-sub subscribes to)
        self.rtg = RTGPub(topic=f"vr/{self.sysId}/rtg/pose")

    def setup(self):
        ...
        self.first_ts = time.time()  # seconds

    def update(self):
        ...
        if self.read_new_states():
            ts = self.state.timestamp  # ms
            pos = self.state.linPos
            elapsed_sec = (ts - self.first_ts * 1e3) / 1000.0  # convert -> seconds
            # Publish RTG sample as (t_seconds, [values...])
            self.rtg.publish(elapsed_sec, [pos.x, pos.y, pos.z])
...
rtg-sub Subscriber App (Concept)
...
# Inside the rtg-sub app callback/loop
# t: float (seconds), values: List[float]
window.add_sample(float(t), values)
...
Tips
- Ensure time is in seconds before publishing (convert from ms if needed).
- Keep the variables in a list, even for a single series: [value].
- Use stable, descriptive topics (e.g., vr/<sysId>/rtg/<signal>).
Publishing Custom Data
Beyond simulation control, you can publish any arbitrary data on any topic name.
This can be completely independent of the VRobot simulator and is useful for custom messaging, debugging, or integrating with other apps.
Example: Custom Node
from ubicoders_vrobots_ipc.node_zenoh import ZenohNode
from ubicoders_vrobots_ipc.vrobot_node import vrobot_client_runner

class MyOwnVRNode():
    def __init__(self):
        self.setup()

    # Following the convention of VRobotNodeBase: setup + update
    def setup(self):
        # Use a negative sysId to avoid conflicts with real vrobot nodes
        self.zenoh_node = ZenohNode(-123)
        # Create a publisher for a custom topic
        self.zenoh_node.create_publisher("my/own/topic")

    def update(self):
        # Publish arbitrary payload (raw bytes)
        self.zenoh_node.publish("my/own/topic", b"hello from my own node")

if __name__ == "__main__":
    vrobot_client_runner([MyOwnVRNode()])
Verifying with Topic Listing
Run list-topics in another terminal:
Listing topics...
...
- [z] my/own/topic
Key Points
- Unique sysId: Use a negative (or otherwise unused) sysId for standalone publishers to avoid conflicts with simulation nodes.
- Custom topics: You can name topics however you like (e.g.,
"my/own/topic"). - Independent publishing: This mechanism works outside the VRobot simulator, making it suitable for general data pipelines or external monitoring.
Subscribing robot data
This chapter discusses states and image data.
Subscribing robot data
Reading the states
from ubicoders_vrobots_ipc.vrobot_node import VRobotNodeBase, vrobot_client_runner

class VRobotNode(VRobotNodeBase):
    def __init__(self, sysId: int = 0):
        super().__init__(sysId)

    # ======================================================
    # Like an Arduino script: setup() runs once, update() is the loop
    # Setup once here
    def setup(self):
        pass

    # This is called every iteration.
    def update(self):
        if self.read_new_states():
            ts = self.state.timestamp
            pos = self.state.linPos
            elapsed_sec = (ts - self.first_ts) / 1000.0
            print(f"State t={elapsed_sec} pos=({pos.x:.2f},{pos.y:.2f},{pos.z:.2f})")
    # ======================================================

if __name__ == "__main__":
    vrobot_client_runner([VRobotNode(sysId=0)])
Once state data is available in the update() loop, one can read the state values listed below.
For instance, state.linPos is of type Vec3(), which has x, y, and z as member variables. Therefore, to read the x, y, z position, one can simply access
x = self.state.linPos.x
y = self.state.linPos.y
z = self.state.linPos.z
available states
name = string
sysId = int
timestamp = float
linAcc = Vec3()
linVel = Vec3()
linPos = Vec3()
altitude = float
angAcc = Vec3()
angVel = Vec3()
euler = Vec3()
eulerDot = Vec3()
quaternion = Vec4()
pwm = []
actuators = []
force = Vec3()
torque = Vec3()
accelerometer = Vec3()
gyroscope = Vec3()
magnetometer = Vec3()
mass = float
cg = Vec3()
moi3x1 = Vec3()
extraProps = []
collisions = []
class Vec4(Vec4MsgT):
    def __init__(self):
        self.x = 0
        self.y = 0
        self.z = 0
        self.w = 0

class Vec3(Vec3MsgT):
    def __init__(self):
        self.x = 0
        self.y = 0
        self.z = 0
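As a small worked example of reading these fields, the sketch below derives the speed magnitude from linVel (m/s) and prints the yaw angle from euler (degrees, per the Common Units chapter). Treating euler.z as yaw is an assumption; adjust it to the simulator's convention.

import math
from ubicoders_vrobots_ipc.vrobot_node import VRobotNodeBase, vrobot_client_runner

class VRobotNode(VRobotNodeBase):
    def __init__(self, sysId: int = 0):
        super().__init__(sysId)

    def setup(self):
        pass

    def update(self):
        if self.read_new_states():
            vel = self.state.linVel   # m/s
            eul = self.state.euler    # degrees
            speed = math.sqrt(vel.x**2 + vel.y**2 + vel.z**2)
            # Assumption: euler.z is yaw; change if your convention differs.
            print(f"speed={speed:.2f} m/s  yaw={eul.z:.1f} deg")

if __name__ == "__main__":
    vrobot_client_runner([VRobotNode(sysId=0)])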
Subscribing to Images in VRobotNode
To process camera feeds from the simulator, a VRobotNode must explicitly register image subscribers.
This is done using:
self.register_img_subscriber("left")
Topic Naming Convention
The VRobots simulator publishes image data in the format:
vr/[sysId]/cams/[side]/720p
- sysId → Robot system ID (e.g., 0, 1, 2).
- side → Camera identifier (e.g., left, right, down).
Example published topics:
[i] vr/0/cams/down/720p
[i] vr/0/cams/left/720p
[i] vr/0/cams/right/720p
[z] vr/0/states
Each robot may have a different set of cameras. For instance, the "multirotor" currently provides three: left, right, and down.
When registering subscribers, you must use the correct camera side name.
Example: Image Subscriber with OpenCV
from ubicoders_vrobots_ipc.vrobot_node import VRobotNodeBase, vrobot_client_runner
import cv2

class VRobotNode(VRobotNodeBase):
    def __init__(self, sysId: int = 0):
        super().__init__(sysId)

    # ======================================================
    # Arduino-like lifecycle: setup() and update()
    # One-time setup
    def setup(self):
        # OpenCV window setup
        cv2.namedWindow("CamLeft", cv2.WINDOW_NORMAL)  # resizable window
        cv2.resizeWindow("CamLeft", 640, 360)
        # Subscribe to left camera stream
        self.register_img_subscriber("left")

    # Called every iteration
    def update(self):
        # State updates
        if self.read_new_states():
            ts = self.state.timestamp  # Unix time (ms)
            pos = self.state.linPos
            elapsed_sec: float = (ts - self.first_ts) / 1000.0
            print(f"State t={elapsed_sec:.3f} pos=({pos.x:.2f},{pos.y:.2f},{pos.z:.2f})")
        # Image updates
        if self.read_new_image("left"):
            img = self.imgStates["left"].image_data
            elapsed_sec: float = (self.imgStates["left"].ts - self.first_ts) / 1000.0
            print(f"Image Left t={elapsed_sec:.3f} size=({img.shape[1]}x{img.shape[0]}x{img.shape[2]})")
            # Display live feed
            cv2.imshow("CamLeft", img)
            cv2.waitKey(1)
    # ======================================================

if __name__ == "__main__":
    vrobot_client_runner([VRobotNode(sysId=0)])
Key Points
- Subscription naming: Match the correct camera side (left, right, down).
- One-time registration: Call register_img_subscriber() in setup().
- Live display: Use OpenCV (imshow, waitKey) to show images in real time.
- Integration: Image reading (read_new_image()) and state reading (read_new_states()) can run in parallel each iteration.
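Because the simulator targets data collection for autonomous navigation, a common next step is writing frames to disk. The sketch below extends the image example to save each new left-camera frame with its timestamp in the filename; the output directory and naming scheme are arbitrary choices.

import os
import cv2
from ubicoders_vrobots_ipc.vrobot_node import VRobotNodeBase, vrobot_client_runner

class VRobotNode(VRobotNodeBase):
    def __init__(self, sysId: int = 0):
        super().__init__(sysId)

    def setup(self):
        # Output folder for the recorded dataset (illustrative path)
        self.out_dir = "dataset/left"
        os.makedirs(self.out_dir, exist_ok=True)
        self.register_img_subscriber("left")

    def update(self):
        if self.read_new_image("left"):
            img = self.imgStates["left"].image_data
            ts_ms = self.imgStates["left"].ts  # Unix time in ms
            cv2.imwrite(os.path.join(self.out_dir, f"{int(ts_ms)}.png"), img)

if __name__ == "__main__":
    vrobot_client_runner([VRobotNode(sysId=0)])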
Subscribing to Custom Data
In addition to publishing, you can subscribe to any arbitrary topic with a custom node.
This works independently of the VRobot simulator and lets you process or monitor custom data streams.
Example: Custom Subscriber Node
from ubicoders_vrobots_ipc.node_zenoh import ZenohNode
from ubicoders_vrobots_ipc.vrobot_node import vrobot_client_runner

class MyOwnVRNode():
    def __init__(self):
        self.setup()

    # Listener callback function for incoming data
    def listener_callback(self, sample):
        topic: str = sample.key_expr
        payload: bytes = sample.payload.to_bytes()
        print(f"Received data: {payload.decode('utf-8')}")

    # Ensure clean shutdown (avoid freezing terminal on Ctrl+C)
    def shutdown(self):
        self.zenoh_node.shutdown()

    # Follow the convention of VRobotNodeBase - setup and update
    def setup(self):
        # Use a negative sysId to avoid conflicts with vrobot nodes
        self.zenoh_node = ZenohNode(-123)
        # Subscribe to a custom topic
        self.zenoh_node.create_subscriber("my/own/topic", self.listener_callback)

    def update(self):
        # Main loop – perform custom logic if needed
        pass

if __name__ == "__main__":
    vrobot_client_runner([MyOwnVRNode()])
Key Points
- Custom subscription: You can subscribe to any topic (e.g., "my/own/topic").
- Callback-driven: Define a listener_callback function to handle received messages.
- Clean exit: Implement shutdown() to gracefully close the Zenoh node and prevent terminal freezes.
- Independent of simulator: Works even if no simulation is running, making it suitable for standalone message handling.
Pure Zenoh
As noted, the vrobot simulator and the vrobot Python scripts are built on Zenoh. One can always use pure Zenoh, completely independently of vrobots.
z_pub.py
import time
import zenoh

# Configure Zenoh session
conf = zenoh.Config()
session = zenoh.open(conf)

# Declare a publisher on a key expression
key_expr = "demo/example"
publisher = session.declare_publisher(key_expr)

print(f"Publishing on '{key_expr}' ... Press Ctrl+C to stop.")
try:
    i = 0
    while True:
        msg = f"Hello Zenoh! {i}"
        publisher.put(msg)
        print(f">>> Published: {msg}")
        i += 1
        time.sleep(1)
except KeyboardInterrupt:
    pass

# Clean up
session.close()
z_sub.py
import time
import zenoh

# Configure Zenoh session
conf = zenoh.Config()
session = zenoh.open(conf)

# Callback for received messages
def listener(sample):
    print(f"<<< Received [{sample.key_expr}]: '{sample.payload.to_bytes().decode('utf-8')}'")

# Subscribe to the same key expression
key_expr = "demo/example"
sub = session.declare_subscriber(key_expr, listener)

print(f"Subscribed on '{key_expr}' ... Press Ctrl+C to stop.")
try:
    while True:
        time.sleep(1)  # Keep the script alive without busy-waiting
except KeyboardInterrupt:
    pass

# Clean up
session.close()
In-game services
There are two types of services currently:
- in-game console service
- Python scripts to request services
in-game terminal
open in-game console:
ctrl + `
list all commands:
all-commands
help: help
more docs coming soon.
python scripts
1. Reset a vrobot
req_srv_reset(sysId:int)
from ubicoders_vrobots_ipc import req_srv_reset

if __name__ == "__main__":
    req_srv_reset(sysId=0)
2. Reset all vrobots
req_srv_reset_all()
from ubicoders_vrobots_ipc import req_srv_reset_all

if __name__ == "__main__":
    req_srv_reset_all()
3. Set property of a vrobot
from ubicoders_vrobots_ipc import req_srv_physical_property
from ubicoders_vrobots_msgs.S007_srv_vrobotphysicalpropertymsg_generated import Vec3MsgT, SrvVRobotPhysicalPropertyMsgT
import time

if __name__ == "__main__":
    prop = SrvVRobotPhysicalPropertyMsgT()
    prop.timestamp = time.time() * 1e3
    prop.setMass = 3
    prop.setMoi3x1 = [1, 1, 1]
    req_srv_physical_property(prop)
4. Mission of the stage.
create a new vrobot
from ubicoders_vrobots_ipc import req_srv_mission
from ubicoders_vrobots_msgs.M100_mission_generated import Vec3MsgT, MissionMsgT, VRSceneObjectT

if __name__ == "__main__":
    pos = Vec3MsgT()
    pos.x, pos.y, pos.z = 53.6, 180.5, -0.3

    vrobot = VRSceneObjectT()
    vrobot.objectType = "multirotor"  # "heli", "car", "omrover"
    vrobot.position = pos

    mission = MissionMsgT()
    mission.vrobots = [vrobot]
    mission.newMission = False  # True = delete all vrobots currently instantiated.

    req_srv_mission(mission)
Mission service message definition in FlatBuffers:
include "vectors.fbs";
table VRSceneObject{
object_type:string;
name:string;
scale:float;
// position, rotation, etc
position:Vec3Msg;
eul:Vec3Msg;
angvel:Vec3Msg;
linvel:Vec3Msg;
}
table Quest{
id:uint;
name:string;
type:string; // collision, move nvr object, move vrobot, etc
target:VRSceneObject;
is_completed:bool=false;
}
table MissionMsg{
id:uint;
name:string;
timestamp:double;
// new mission or continue from previous
new_mission:bool=false;
// scene name
main_scene:string;
other_scenes:[string];
// main camera position, rotation
main_camera_position:Vec3Msg;
main_camera_eul:Vec3Msg;
clear_nvr_objects:bool=false;
nvr_objects:[VRSceneObject];
clear_vrobots:bool=false;
vrobots:[VRSceneObject];
quests:[Quest];
}
root_type MissionMsg;
file_identifier "M100";
More docs coming soon.
Supported Virtual Robots
Currently Supported Robots
- Multirotor
- Car
- Omrover
- Heli
Common Units
Actuators
pwm
Integer representing microseconds. The range is 1100–2000 µs, which is a typical operation range of ESCs in real drones.
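When generating PWM commands in a controller, it is worth clamping to this range before calling update_cmd_multirotor(). A small helper is sketched below; the function name and mapping are illustrative, not part of the library.

from typing import List

PWM_MIN, PWM_MAX = 1100, 2000  # microseconds, typical ESC range

def clamp_pwm(values: List[float]) -> List[int]:
    """Clamp and round controller outputs to the valid ESC range."""
    return [int(min(max(v, PWM_MIN), PWM_MAX)) for v in values]

# Example: a controller output slightly out of range
print(clamp_pwm([1050.7, 1500.0, 2100.2, 1800.5]))  # -> [1100, 1500, 2000, 1800]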
Sensors
- accelerometer
  Raw accelerometer values for the x, y, and z axes.
  - 1 g (9.81 m/s²) corresponds to 16384.
  - Conversion to m/s²: (accelerometer * 9.81) / 16384
- gyroscope
  - 250 deg/s corresponds to 32768.
  - Conversion to rad/s: (gyroscope * 250 / 32768) * (π / 180)
- magnetometer
  Values are measured in microtesla (µT).
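The conversions above translate directly into code. A small sketch applying them to the raw readings available in self.state (the constant names are mine; the scale factors come from the list above):

import math

ACC_LSB_PER_G = 16384.0   # raw accelerometer counts per 1 g
GYRO_FS_DEG_S = 250.0     # full-scale rate that maps to 32768 counts
GYRO_LSB = 32768.0

def acc_raw_to_ms2(raw: float) -> float:
    """Accelerometer counts -> m/s^2."""
    return raw * 9.81 / ACC_LSB_PER_G

def gyro_raw_to_rads(raw: float) -> float:
    """Gyroscope counts -> rad/s."""
    return (raw * GYRO_FS_DEG_S / GYRO_LSB) * (math.pi / 180.0)

# Example: a stationary, level robot should read ~1 g on one axis
print(acc_raw_to_ms2(16384))    # -> 9.81 m/s^2
print(gyro_raw_to_rads(32768))  # -> ~4.363 rad/s (i.e. 250 deg/s)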
States
- states.angVel — in radians per second (rad/s)
- states.euler — in degrees (°)
- states.linVel — in meters per second (m/s)
- states.linPos — in meters (m)
multirotor
Physical Properties
- Default Mass: 1 kg
- Default Moment of Inertia (MOI): eye(3) * 1
- Per Motor Thrust: 1 kgf
- Gravitational Acceleration: 9.81 m/s²
Camera Intrinsics:
360p
- fx = 811.93
- fy = 811.93
- cx = 320
- cy = 180
720p
- fx = 1623.86
- fy = 1623.86
- cx = 640
- cy = 360
1080p
- fx = 2435.78
- fy = 2435.78
- cx = 960
- cy = 540
Stereo baseline:
0.2 m ( ~ 0.66 ft)
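These intrinsics follow the standard pinhole model, so at 720p the camera matrix is K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]], and the stereo pair gives depth from disparity as z = fx * baseline / disparity. A minimal sketch (the 32.5 px disparity is a made-up example value):

# Multirotor, 720p left/right cameras (values from the table above)
fx, fy, cx, cy = 1623.86, 1623.86, 640.0, 360.0
baseline_m = 0.2

# Standard pinhole camera matrix K
K = [[fx, 0.0, cx],
     [0.0, fy, cy],
     [0.0, 0.0, 1.0]]

def depth_from_disparity(disparity_px: float) -> float:
    """Pinhole stereo: z = fx * B / d, in meters."""
    return fx * baseline_m / disparity_px

print(K)
print(f"{depth_from_disparity(32.5):.2f} m")  # ~10 m for a 32.5 px disparity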
car
Currently, the max thrust and mass values are not realistic.
- default mass: 2000 kg
- gravitational acceleration: 9.81 m/s^2
Camera Intrinsics:
360p
- fx = 811.93
- fy = 811.93
- cx = 320
- cy = 180
720p
- fx = 1623.86
- fy = 1623.86
- cx = 640
- cy = 360
1080p
- fx = 2435.78
- fy = 2435.78
- cx = 960
- cy = 540
Stereo baseline:
2 m ( ~ 6.6 ft)
omni-directional rover
Currently, the max thrust and mass values are not realistic.
- default mass: 100 kg
- gravitational acceleration: 9.81 m/s^2
Camera Intrinsics:
360p
- fx = 811.93
- fy = 811.93
- cx = 320
- cy = 180
720p
- fx = 1623.86
- fy = 1623.86
- cx = 640
- cy = 360
1080p
- fx = 2435.78
- fy = 2435.78
- cx = 960
- cy = 540
Stereo baseline:
2 m ( ~ 6.6 ft)
helicopter
Currently, the max thrust and mass values are not realistic.
- default mass: 1800 kg
- gravitational acceleration: 9.81 m/s^2
Camera Intrinsics:
360p
- fx = 811.93
- fy = 811.93
- cx = 320
- cy = 180
720p
- fx = 1623.86
- fy = 1623.86
- cx = 640
- cy = 360
1080p
- fx = 2435.78
- fy = 2435.78
- cx = 960
- cy = 540
Stereo baseline:
2 m ( ~ 6.6 ft)