> Introduction
As robots move out of caged factory cells and into shared workspaces alongside humans, safety transitions from a hardware feature into a software discipline. A collaborative robot (cobot) must perceive human presence, reason about proximity risks in real time, and degrade its behavior gracefully — slowing, stopping, or re-routing — before any potential contact occurs.
In this guide we build a complete ROS2-based safety layer: detecting humans with a depth camera, defining configurable safety zones, enforcing speed limits through a ROS2 safety node, and validating the system against ISO/TS 15066 speed-and-separation monitoring guidelines.
> HRI Safety Standards Overview
Before writing a line of code, understand the regulatory landscape. Non-compliance can invalidate insurance and create legal liability. The two most relevant standards for collaborative robots are:
Key Standards
- ISO 10218-1/2 (2011): Safety requirements for industrial robots. Defines four collaborative modes: Safety Rated Monitored Stop, Hand Guiding, Speed and Separation Monitoring (SSM), and Power and Force Limiting (PFL).
- ISO/TS 15066 (2016): Technical specification for collaborative robots. Provides formulas for minimum protective distance in SSM mode and maximum biomechanical limits for PFL.
- IEC 61508 / IEC 62061: Functional safety standards for safety-related control systems. Software implementing safety functions must achieve at least SIL 2 if it is the sole safety measure.
SSM Minimum Protective Distance Formula
# ISO/TS 15066 Equation 3 — Minimum Protective Distance
# S_p = (v_h + v_r) * (t_r + t_b) + C + Z_d + Z_r
#
# where:
# v_h = max human approach speed (typically 1.6 m/s per ISO)
# v_r = current robot TCP speed
# t_r = reaction time (sensor + processor latency)
# t_b = robot brake stopping time
# C = intrusion depth before detection (sensor uncertainty)
# Z_d = position uncertainty of detection system
# Z_r = robot positional uncertainty
def minimum_separation(v_robot_mps,
t_reaction_s=0.1,
t_brake_s=0.15,
C=0.1,
Z_d=0.05,
Z_r=0.02,
v_human=1.6):
S_p = (v_human + v_robot_mps) * (t_reaction_s + t_brake_s)
S_p += C + Z_d + Z_r
return S_p # meters
# Example: robot moving at 0.5 m/s
print(f"Min separation: {minimum_separation(0.5):.3f} m")
# => Min separation: 0.695 m
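Planners often need the inverse question: given the separation actually available, how fast may the robot move? Rearranging the same formula gives that limit directly. A sketch using the same default parameters; `max_safe_speed` is an illustrative helper, not a function defined by ISO/TS 15066:

```python
# Rearranging S_p = (v_h + v_r)(t_r + t_b) + C + Z_d + Z_r for v_r
def max_safe_speed(separation_m,
                   t_reaction_s=0.1,
                   t_brake_s=0.15,
                   C=0.1,
                   Z_d=0.05,
                   Z_r=0.02,
                   v_human=1.6):
    """Largest robot TCP speed (m/s) whose protective distance still
    fits inside the given separation; clamped at zero."""
    v = (separation_m - C - Z_d - Z_r) / (t_reaction_s + t_brake_s) - v_human
    return max(v, 0.0)

# Example: 1.0 m of clearance permits roughly 1.72 m/s
print(f"Max safe speed: {max_safe_speed(1.0):.2f} m/s")
```

If the result is zero, no robot speed satisfies the protective distance and the robot must hold a monitored stop until the human retreats.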
> Human Detection with Depth Sensing
Depth cameras provide range information that is largely independent of ambient lighting. We combine skeleton detection (MediaPipe or YOLOv8-pose) with depth data to track the 3D positions of human body parts in real time.
Human Tracking Node
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, CameraInfo
from geometry_msgs.msg import Pose, PoseArray
from cv_bridge import CvBridge
import mediapipe as mp
import numpy as np

class HumanTrackingNode(Node):
    def __init__(self):
        super().__init__('human_tracker')
        self.bridge = CvBridge()
        self.mp_pose = mp.solutions.pose
        self.pose = self.mp_pose.Pose(
            min_detection_confidence=0.7,
            min_tracking_confidence=0.5
        )
        self.K = None  # camera intrinsics, filled in by info_cb
        self.info_sub = self.create_subscription(
            CameraInfo, '/camera/color/camera_info', self.info_cb, 10)
        self.rgb_sub = self.create_subscription(
            Image, '/camera/color/image_raw', self.rgb_cb, 10)
        self.depth_sub = self.create_subscription(
            Image, '/camera/aligned_depth_to_color/image_raw',
            self.depth_cb, 10)
        self.human_pub = self.create_publisher(
            PoseArray, '/humans/poses_3d', 10)
        self.latest_depth = None

    def info_cb(self, msg):
        # CameraInfo.k is the row-major 3x3 intrinsic matrix
        self.K = np.array(msg.k).reshape(3, 3)

    def depth_cb(self, msg):
        self.latest_depth = self.bridge.imgmsg_to_cv2(msg, '16UC1')

    def rgb_cb(self, msg):
        if self.latest_depth is None or self.K is None:
            return
        frame = self.bridge.imgmsg_to_cv2(msg, 'rgb8')
        results = self.pose.process(frame)
        if not results.pose_landmarks:
            return
        h, w = frame.shape[:2]
        poses = PoseArray()
        poses.header = msg.header
        for lm in results.pose_landmarks.landmark:
            px = int(lm.x * w)
            py = int(lm.y * h)
            if 0 <= px < w and 0 <= py < h:
                depth_mm = float(self.latest_depth[py, px])
                if depth_mm > 0:
                    # Back-project the pixel into 3D camera coordinates
                    z = depth_mm / 1000.0
                    x = (px - self.K[0, 2]) * z / self.K[0, 0]
                    y = (py - self.K[1, 2]) * z / self.K[1, 1]
                    p = Pose()
                    p.position.x, p.position.y, p.position.z = x, y, z
                    poses.poses.append(p)
        self.human_pub.publish(poses)
> Safety Zone Architecture
Safety zones are concentric regions around the robot that trigger different response levels. A three-zone model is common in industrial deployments:
Zone 1: Warning
Radius ~2.0 m. Human detected. Robot reduces speed to 50% of nominal. Visual/audio alert issued.
Zone 2: Slow
Radius ~1.0 m. Human close. Robot reduces speed to 25% and activates joint torque monitoring.
Zone 3: Stop
Radius ~0.5 m. Human in immediate proximity. Robot executes safety-rated monitored stop.
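The zone radii and speed factors are good candidates for ROS2 parameters rather than hard-coded constants, so they can be retuned per deployment and recorded in the risk assessment. A sketch of a parameter file (node and parameter names are illustrative):

```yaml
safety_filter:
  ros__parameters:
    zone_radius_warning_m: 2.0
    zone_radius_slow_m: 1.0
    zone_radius_stop_m: 0.5
    speed_factor_warning: 0.50
    speed_factor_slow: 0.25
    human_topic_timeout_s: 0.5
```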
Zone Check Implementation
from enum import Enum
import numpy as np
class SafetyZone(Enum):
CLEAR = 0
WARNING = 1
SLOW = 2
STOP = 3
ZONE_RADII = {
SafetyZone.STOP: 0.5,
SafetyZone.SLOW: 1.0,
SafetyZone.WARNING: 2.0,
}
SPEED_FACTORS = {
SafetyZone.CLEAR: 1.00,
SafetyZone.WARNING: 0.50,
SafetyZone.SLOW: 0.25,
SafetyZone.STOP: 0.00,
}
def classify_zone(human_positions_3d, robot_pos_3d=np.zeros(3)):
"""Return the most restrictive safety zone given human positions."""
if len(human_positions_3d) == 0:
return SafetyZone.CLEAR
min_dist = min(
np.linalg.norm(h - robot_pos_3d)
for h in human_positions_3d
)
for zone in [SafetyZone.STOP, SafetyZone.SLOW, SafetyZone.WARNING]:
if min_dist <= ZONE_RADII[zone]:
return zone
return SafetyZone.CLEAR
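One practical wrinkle: a person standing exactly on a boundary makes `classify_zone` chatter between two zones as the measured distance jitters. A small hysteresis latch fixes this: escalation toward STOP is immediate, but relaxing to a less restrictive zone requires the distance to clear the old boundary by a margin. This wrapper is an illustrative sketch (the class and margin value are not from the original design); it works with the `SafetyZone` members and `ZONE_RADII` dict defined above, since Enum members are valid dict keys:

```python
class ZoneLatch:
    """Hysteresis latch over zone classification.

    Escalation (smaller radius) takes effect immediately; de-escalation
    requires the distance to exceed the boundary by margin_m.
    """
    def __init__(self, zone_radii, clear_zone, margin_m=0.1):
        # Most restrictive zone = smallest radius, checked first
        self._zones = sorted(zone_radii.items(), key=lambda kv: kv[1])
        self._clear = clear_zone
        self._margin = margin_m
        self.zone = clear_zone

    def _classify(self, dist, slack=0.0):
        for zone, radius in self._zones:
            if dist <= radius + slack:
                return zone
        return self._clear

    def _radius(self, zone):
        return dict(self._zones).get(zone, float('inf'))

    def update(self, min_dist_m):
        raw = self._classify(min_dist_m)
        if self._radius(raw) < self._radius(self.zone):
            self.zone = raw  # more restrictive: escalate immediately
        elif self._classify(min_dist_m, self._margin) != self.zone:
            self.zone = raw  # relax only once past the margin
        return self.zone
```

With a 0.1 m margin, a person hovering at 1.05 m keeps the robot in SLOW rather than flickering between SLOW and WARNING.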
> ROS2 Safety Node Implementation
The safety node sits between the navigation stack (or teleoperation) and the robot hardware. It intercepts velocity commands, scales them based on the current safety zone, and publishes a safety status topic that the rest of the system can monitor.
safety_filter_node.py
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist, PoseArray
from std_msgs.msg import String
import numpy as np

# Zone model from the previous section (module name assumed)
from safety_zones import SafetyZone, SPEED_FACTORS, classify_zone
class SafetyFilterNode(Node):
def __init__(self):
super().__init__('safety_filter')
self.current_zone = SafetyZone.CLEAR
self.human_positions = []
# Subscribe to upstream velocity commands and human poses
self.vel_sub = self.create_subscription(
Twist, '/cmd_vel_raw', self.vel_callback, 10)
self.human_sub = self.create_subscription(
PoseArray, '/humans/poses_3d', self.human_callback, 10)
# Publish filtered velocity and safety status
self.vel_pub = self.create_publisher(Twist, '/cmd_vel', 10)
self.status_pub = self.create_publisher(String, '/safety/status', 10)
# Watchdog: stop if human pose topic goes silent
self.last_human_msg_time = self.get_clock().now()
self.create_timer(0.1, self.watchdog_check)
def human_callback(self, msg: PoseArray):
self.last_human_msg_time = self.get_clock().now()
pts = [np.array([p.position.x, p.position.y, p.position.z])
for p in msg.poses]
self.human_positions = pts
self.current_zone = classify_zone(pts)
status_msg = String()
status_msg.data = self.current_zone.name
self.status_pub.publish(status_msg)
def vel_callback(self, msg: Twist):
factor = SPEED_FACTORS[self.current_zone]
filtered = Twist()
filtered.linear.x = msg.linear.x * factor
filtered.linear.y = msg.linear.y * factor
filtered.angular.z = msg.angular.z * factor
self.vel_pub.publish(filtered)
    def watchdog_check(self):
        elapsed = (self.get_clock().now() -
                   self.last_human_msg_time).nanoseconds * 1e-9
        if elapsed > 0.5:
            # Sensor timeout: treat as STOP and actively command zero
            # velocity, since no further cmd_vel may arrive to be filtered
            self.current_zone = SafetyZone.STOP
            self.vel_pub.publish(Twist())
            self.get_logger().warn(
                f'Human sensor timeout ({elapsed:.2f}s) — forcing STOP')
def main():
    rclpy.init()
    node = SafetyFilterNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
> Collaborative Robot Programming
Beyond reactive safety, good cobot programming designs workflows where human and robot tasks are interleaved in a way that minimises risk from the start — rather than bolting safety on afterwards.
MoveIt2 with Collision-Aware Planning
from moveit.planning import MoveItPy
from moveit.core.robot_state import RobotState
moveit = MoveItPy(node_name="moveit_py_safety")
arm = moveit.get_planning_component("arm")
# Add human bounding box as collision object
from moveit_msgs.msg import CollisionObject
from shape_msgs.msg import SolidPrimitive
from geometry_msgs.msg import Pose
def add_human_collision_box(planning_scene, pos_x, pos_y, pos_z):
co = CollisionObject()
co.id = "human_zone"
co.header.frame_id = "base_link"
box = SolidPrimitive(type=SolidPrimitive.BOX,
dimensions=[0.6, 0.6, 2.0]) # 60x60x200cm
pose = Pose()
pose.position.x = pos_x
pose.position.y = pos_y
pose.position.z = pos_z + 1.0 # center at hip height
co.primitives = [box]
co.primitive_poses = [pose]
co.operation = CollisionObject.ADD
planning_scene.apply_collision_object(co)
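Independently of the planner, the same box can be checked analytically, for example to pre-filter candidate waypoints before handing them to MoveIt. A minimal sketch, assuming points and box center are expressed in the same frame (`base_link`); the function name is illustrative:

```python
import numpy as np

def point_in_human_box(point, box_center, box_dims=(0.6, 0.6, 2.0)):
    """Axis-aligned inclusion test matching the human collision box."""
    half = np.asarray(box_dims, dtype=float) / 2.0
    diff = np.abs(np.asarray(point, dtype=float) - np.asarray(box_center, dtype=float))
    return bool(np.all(diff <= half))
```

Rejecting waypoints early keeps expensive replanning calls for the cases that genuinely need them.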
Handover Task Pattern
class HandoverStateMachine:
"""
Safe object handover sequence:
IDLE → APPROACH_HANDOVER_POSE → WAIT_HUMAN_GRASP → RELEASE → RETREAT
"""
def on_enter_WAIT_HUMAN_GRASP(self):
# Enable compliant mode: low stiffness at wrist
self.set_joint_compliance(stiffness=50.0)
        # Signal the safety node to switch into hand-guiding mode
        self.publish_mode("HANDGUIDE")
# Wait for force threshold to indicate human has grasped
self.wait_for_force_event(threshold_N=5.0, timeout_s=10.0)
def on_enter_RELEASE(self):
self.open_gripper()
self.publish_mode("NORMAL")
def on_enter_RETREAT(self):
# Move back to home while staying in SSM mode
self.plan_and_execute_cartesian_retreat(distance_m=0.3)
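The `wait_for_force_event` step hides a detail worth making explicit: a single noisy spike on the wrist force sensor should not count as a grasp. A debounced check over consecutive samples is one way to handle this (an illustrative sketch; sample-count and threshold values are assumptions, not from the original design):

```python
def grasp_detected(force_samples_n, threshold_n=5.0, min_consecutive=5):
    """Return True once min_consecutive successive force samples (N)
    meet or exceed the threshold; a lone spike resets nothing."""
    run = 0
    for f in force_samples_n:
        run = run + 1 if f >= threshold_n else 0
        if run >= min_consecutive:
            return True
    return False
```

At a 100 Hz force stream, five consecutive samples corresponds to a 50 ms sustained pull, short enough to feel responsive but long enough to reject sensor noise.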
> Testing and Validation
Safety systems require deterministic validation. Unit tests catch logic errors; integration tests with a dummy human (mannequin or staff wearing a retroreflective vest) verify the full pipeline end-to-end. Hardware-in-the-loop testing should cover edge cases like partial occlusion, rapid approach, and sensor dropouts.
Unit Tests for Zone Classification
import pytest
import numpy as np

# Zone model from the earlier section (module name assumed)
from safety_zones import SafetyZone, SPEED_FACTORS, classify_zone
def test_clear_zone():
positions = [np.array([5.0, 0.0, 0.0])]
assert classify_zone(positions) == SafetyZone.CLEAR
def test_warning_zone():
positions = [np.array([1.8, 0.0, 0.0])]
assert classify_zone(positions) == SafetyZone.WARNING
def test_stop_zone():
positions = [np.array([0.3, 0.0, 0.0])]
assert classify_zone(positions) == SafetyZone.STOP
def test_multiple_humans_most_restrictive():
# One person at 3 m (CLEAR), one at 0.4 m (STOP)
positions = [np.array([3.0, 0.0, 0.0]),
np.array([0.4, 0.0, 0.0])]
assert classify_zone(positions) == SafetyZone.STOP
def test_speed_factor_stop():
assert SPEED_FACTORS[SafetyZone.STOP] == 0.0
@pytest.mark.parametrize("zone,factor", [
(SafetyZone.CLEAR, 1.00),
(SafetyZone.WARNING, 0.50),
(SafetyZone.SLOW, 0.25),
(SafetyZone.STOP, 0.00),
])
def test_speed_factors(zone, factor):
assert SPEED_FACTORS[zone] == factor
Validation Checklist
- [ ] Sensor latency measured and within the t_r budget for all lighting conditions
- [ ] Robot braking distance measured at maximum operating speed
- [ ] Watchdog forces STOP within its configured timeout (500 ms in the example node) of sensor topic silence
- [ ] Speed factor 0.00 results in exactly zero velocity commands at the hardware level
- [ ] False-positive rate of the human detector < 0.1% in the production environment
- [ ] Safety node CPU usage < 5% so it cannot be starved by other processes
- [ ] Documented risk assessment filed with the site safety officer
> Conclusion
Safe human-robot interaction is not a checkbox — it is an ongoing engineering discipline. The stack described here (depth-based human tracking → zone classification → velocity filtering → watchdog) provides a solid, testable foundation that can be audited against ISO/TS 15066 requirements.
As robot capabilities grow, safety systems must grow with them. Invest in thorough testing, maintain detailed failure mode analyses, and keep human operators informed of the robot's internal state — trust is built through transparency.
Build Responsibly!
The full ROS2 package with the safety filter node, zone classifier, test suite, and a sample Gazebo world with a moving human actor is available on GitHub. Always have a qualified safety engineer review your risk assessment before deploying alongside real humans.