Autonomous systems, deep reinforcement learning, and robotics simulations — each project a step toward intelligent machines.
Clearpath Jackal robot trained to navigate a warehouse using Soft Actor-Critic with Hindsight Experience Replay. 89-ray LiDAR sensing, 1M timestep training on NVIDIA Isaac Lab. Achieved 100% goal-reach rate with zero collisions.
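The key trick in Hindsight Experience Replay is goal relabeling: failed episodes are reused by pretending the state actually reached was the goal all along. A minimal sketch of the "final" relabeling strategy, assuming a sparse distance-based reward; all names here are illustrative, not the project's actual code:

```python
# Hindsight Experience Replay: relabel a failed episode with the goal it
# actually achieved, so the sparse reward still provides learning signal.
# Hypothetical helper names; the real project trains SAC in Isaac Lab.

def sparse_reward(achieved, goal, eps=0.05):
    """Sparse goal-conditioned reward: 0 on success, -1 otherwise."""
    dist = sum((a - g) ** 2 for a, g in zip(achieved, goal)) ** 0.5
    return 0.0 if dist < eps else -1.0

def her_relabel(episode):
    """Relabel every transition with the episode's final achieved state
    as the goal, recomputing rewards ("final" strategy). Returns the
    extra transitions to add to the replay buffer."""
    final_goal = episode[-1]["achieved"]
    relabeled = []
    for t in episode:
        relabeled.append({
            "obs": t["obs"],
            "action": t["action"],
            "achieved": t["achieved"],
            "goal": final_goal,
            "reward": sparse_reward(t["achieved"], final_goal),
        })
    return relabeled
```

With relabeling, the last transition of every episode is a success by construction, which is what makes sparse-reward goal reaching tractable.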
Real-time 2D LiDAR-based simultaneous localization and mapping. 72-beam scanner builds an occupancy map while the robot explores an unknown environment.
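The mapping half of this comes down to a per-beam update: cells the beam passes through get a "free" log-odds decrement, the endpoint cell gets an "occupied" increment. A sketch with illustrative log-odds constants, using integer Bresenham traversal (hypothetical helpers, not the project's code):

```python
# Occupancy-grid update for one LiDAR ray: cells along the beam are
# marked free, the endpoint cell occupied, via log-odds accumulation.

def bresenham(x0, y0, x1, y1):
    """Integer grid cells on the line from (x0, y0) to (x1, y1), inclusive."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells

def update_grid(grid, robot, hit, l_free=-0.4, l_occ=0.85):
    """Apply a log-odds update along one beam: traversed cells get
    l_free, the hit cell gets l_occ. grid: {(x, y): log_odds}."""
    cells = bresenham(*robot, *hit)
    for (x, y) in cells[:-1]:
        grid[(x, y)] = grid.get((x, y), 0.0) + l_free
    grid[hit] = grid.get(hit, 0.0) + l_occ
    return grid
```

Repeating this for each of the 72 beams per scan, pose by pose, is what grows the occupancy map as the robot explores.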
Heuristic-based optimal path planning on grid maps with diagonal movement. Visualizes the full search frontier, shortest-path extraction, and robot traversal in real time.
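The core loop can be sketched as A*-style search over an 8-connected grid, with an octile-distance heuristic so diagonal steps stay admissible. A minimal illustrative version, not the project's visualized implementation:

```python
# A* on a 2D grid with 8-connected (diagonal) movement and an
# octile-distance heuristic. grid: 0 = free, 1 = wall.
import heapq
import math

def astar(grid, start, goal):
    """Returns the shortest path as a list of (row, col), or None."""
    rows, cols = len(grid), len(grid[0])

    def h(a, b):  # octile distance: admissible for 8-connected grids
        dr, dc = abs(a[0] - b[0]), abs(a[1] - b[1])
        return max(dr, dc) + (math.sqrt(2) - 1) * min(dr, dc)

    open_heap = [(h(start, goal), 0.0, start)]
    g = {start: 0.0}
    parent = {}
    while open_heap:
        _, gc, node = heapq.heappop(open_heap)
        if node == goal:
            path = [node]
            while node in parent:
                node = parent[node]
                path.append(node)
            return path[::-1]
        if gc > g.get(node, math.inf):
            continue  # stale heap entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                r, c = node[0] + dr, node[1] + dc
                if not (0 <= r < rows and 0 <= c < cols) or grid[r][c]:
                    continue
                step = math.sqrt(2) if dr and dc else 1.0
                ng = gc + step
                if ng < g.get((r, c), math.inf):
                    g[(r, c)] = ng
                    parent[(r, c)] = node
                    heapq.heappush(open_heap, (ng + h((r, c), goal), ng, (r, c)))
    return None
```

The "search frontier" the project visualizes corresponds to the contents of `open_heap` at each expansion.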
PPO-trained locomotion policy for ANYmal-D legged robot using NVIDIA Isaac Lab. 12-DOF trot gait learned from scratch — flat and rough terrain traversal.
Two-robot team performing collaborative map merging in ROS2. Each robot independently builds a local occupancy grid; a merge node fuses them into a unified global map using pose-graph optimization and occupancy grid alignment.
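The fusion step itself is simple once the alignment is known (the hard part — estimating the relative pose via pose-graph optimization — is omitted here). A sketch assuming a known integer offset between the two grids, with log-odds cells combined by addition, as if the two maps were independent evidence; names are illustrative:

```python
# Occupancy-grid fusion step for two-robot map merging. Assumes the
# relative offset between robot B's grid and robot A's frame is already
# known (in the project it comes from pose-graph optimization).

def merge_grids(map_a, map_b, offset_b):
    """map_a, map_b: {(x, y): log_odds}. offset_b: (dx, dy) placing
    B's grid in A's frame. Independent evidence sums in log-odds."""
    merged = dict(map_a)
    ox, oy = offset_b
    for (x, y), lo in map_b.items():
        cell = (x + ox, y + oy)
        merged[cell] = merged.get(cell, 0.0) + lo
    return merged
```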
Proximal Policy Optimization agent learning soft-landing on a lunar surface. Trained with stable-baselines3 — reward curve shows convergence from random exploration to precision control.
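One core ingredient behind that convergence curve is generalized advantage estimation (GAE), which stable-baselines3 computes internally. A pure-Python sketch of the backward recursion, with the library's typical default gamma/lambda values:

```python
# Generalized advantage estimation, the advantage signal PPO trains on.
# Illustrative sketch; stable-baselines3 implements this internally.

def gae(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Advantages for one rollout.
    rewards[t], values[t] for t = 0..T-1; last_value bootstraps V(s_T)."""
    advantages = [0.0] * len(rewards)
    next_value = last_value
    running = 0.0
    for t in reversed(range(len(rewards))):
        # TD error at step t, then exponentially-weighted accumulation
        delta = rewards[t] + gamma * next_value - values[t]
        running = delta + gamma * lam * running
        advantages[t] = running
        next_value = values[t]
    return advantages
```

With `gamma = lam = 1` and a zero value function this degenerates to plain returns-to-go, which is a handy sanity check.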
Analytical inverse kinematics solver for a 6-DOF robotic arm. Visualizes joint angles, workspace reachability, and end-effector trajectory tracking for pick-and-place operations.
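The closed-form idea is easiest to see on a 2-link planar analogue (the full 6-DOF solver extends it, typically via wrist decoupling). A sketch of the elbow-down solution from the law of cosines, with forward kinematics to verify it; not the project's actual solver:

```python
# Analytical IK for a 2-link planar arm, elbow-down branch.
import math

def ik_2link(x, y, l1, l2):
    """Joint angles (theta1, theta2) reaching (x, y).
    Raises ValueError if the target is outside the workspace."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)                      # elbow-down: theta2 >= 0
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

def fk_2link(theta1, theta2, l1, l2):
    """Forward kinematics, used to verify an IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

The reachability check (`c2` outside [-1, 1]) is exactly the workspace boundary the project visualizes: an annulus between |l1 - l2| and l1 + l2.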
50-agent swarm implementing Reynolds' three rules: separation, alignment, and cohesion. Emergent collective behavior navigates around obstacles while maintaining formation.
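The three rules fit in one update loop. A minimal 2D sketch; the gains and neighbor radius are illustrative, not the project's tuned values, and obstacle avoidance is omitted:

```python
# One boids tick: separation, alignment, cohesion over local neighbors.

def boids_step(positions, velocities, radius=5.0,
               w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """positions/velocities: lists of [x, y]. Returns updated lists."""
    new_vel = []
    for i, (p, v) in enumerate(zip(positions, velocities)):
        sep = [0.0, 0.0]; ali = [0.0, 0.0]; coh = [0.0, 0.0]; n = 0
        for j, (q, u) in enumerate(zip(positions, velocities)):
            if i == j:
                continue
            dx, dy = q[0] - p[0], q[1] - p[1]
            if dx * dx + dy * dy > radius * radius:
                continue
            n += 1
            sep[0] -= dx; sep[1] -= dy      # separation: steer away
            ali[0] += u[0]; ali[1] += u[1]  # alignment: match velocity
            coh[0] += dx; coh[1] += dy      # cohesion: toward local center
        if n:
            vx = v[0] + w_sep * sep[0] + w_ali * (ali[0] / n - v[0]) + w_coh * coh[0] / n
            vy = v[1] + w_sep * sep[1] + w_ali * (ali[1] / n - v[1]) + w_coh * coh[1] / n
        else:
            vx, vy = v
        new_vel.append([vx, vy])
    new_pos = [[p[0] + u[0], p[1] + u[1]] for p, u in zip(positions, new_vel)]
    return new_pos, new_vel
```

Nothing in the loop is global: each agent reacts only to neighbors within `radius`, which is why flocking here is genuinely emergent rather than scripted.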
Differential drive robot following a figure-eight reference trajectory using a tuned PID controller. Shows real-time error plots and heading correction alongside the path.
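The heading-correction part reduces to a discrete PID law on the tracking error. A minimal sketch; the gains below are illustrative placeholders, not the tuned values from the project:

```python
# Discrete PID controller: proportional + integral + derivative terms
# on the tracking error, sampled at a fixed timestep dt.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        """One control step: returns the actuation command."""
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

Driving a simple integrator plant with this loop shows the behavior the project's error plots capture: the error decays toward zero while the integral term removes steady-state offset.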
Side-by-side comparison of RRT* (asymptotically optimal sampling-based planner) vs A* on identical maps. Shows path cost convergence and rewiring behavior as tree nodes increase.
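A compact RRT* on an empty unit square shows the two things that distinguish it from A*: random sampling and rewiring. Everything here is an illustrative sketch (parameters, goal region, no obstacles), not the project's implementation:

```python
# RRT*: sample, steer, choose the cheapest parent among near nodes,
# then rewire near nodes through the new node when that is cheaper.
import math
import random

def rrt_star(start, goal, iters=1000, step=0.1, radius=0.25, seed=0):
    rng = random.Random(seed)
    nodes, parent, cost = [start], {0: None}, {0: 0.0}

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    for _ in range(iters):
        sample = (rng.random(), rng.random())
        nearest = min(range(len(nodes)), key=lambda i: dist(nodes[i], sample))
        d = dist(nodes[nearest], sample)
        if d == 0:
            continue
        t = min(1.0, step / d)  # steer at most `step` toward the sample
        new = (nodes[nearest][0] + t * (sample[0] - nodes[nearest][0]),
               nodes[nearest][1] + t * (sample[1] - nodes[nearest][1]))
        near = [i for i in range(len(nodes)) if dist(nodes[i], new) < radius]
        best = min(near, key=lambda i: cost[i] + dist(nodes[i], new))
        idx = len(nodes)
        nodes.append(new)
        parent[idx] = best
        cost[idx] = cost[best] + dist(nodes[best], new)
        for i in near:  # rewire: reroute near nodes through the new node
            c = cost[idx] + dist(new, nodes[i])
            if c < cost[i]:
                j = parent[idx]  # skip ancestors of idx to avoid cycles
                while j is not None and j != i:
                    j = parent[j]
                if j is None:
                    parent[i], cost[i] = idx, c
    in_goal = [i for i in range(len(nodes)) if dist(nodes[i], goal) < 0.1]
    if not in_goal:
        return None, math.inf
    best = min(in_goal, key=lambda i: cost[i])
    path, i = [], best
    while i is not None:
        path.append(nodes[i])
        i = parent[i]
    return path[::-1], cost[best]
```

The rewiring step is what A* has no analogue for, and it is why RRT* path cost keeps improving as more nodes are added: early, crooked branches get re-parented through later, cheaper ones.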