
Robotics & Automation Series Part 13: Mobile Robotics

February 13, 2026 • Wasil Zafar • 40 min read

Take robots beyond the factory — wheeled and legged locomotion, differential drive, holonomic platforms, autonomous vehicles, drones (UAVs), marine robotics, and field deployment strategies.

Table of Contents

  1. Wheeled Robots
  2. Legged Robots
  3. Autonomous Vehicles
  4. Drones & Maritime
  5. Mobile Robot Planner Tool
  6. Exercises & Challenges
  7. Conclusion & Next Steps

Wheeled Robots

Series Overview: This is Part 13 of our 18-part Robotics & Automation Series. Mobile robotics extends robot capabilities beyond fixed workstations into dynamic, unstructured environments.

Robotics & Automation Mastery

Your 18-step learning path • Currently on Step 13

  1. Introduction to Robotics: History, types, DOF, architectures, mechatronics, ethics
  2. Sensors & Perception Systems: Encoders, IMUs, LiDAR, cameras, sensor fusion, Kalman filters, SLAM
  3. Actuators & Motion Control: DC/servo/stepper motors, hydraulics, drivers, gear systems
  4. Kinematics (Forward & Inverse): DH parameters, transformations, Jacobians, workspace analysis
  5. Dynamics & Robot Modeling: Newton-Euler, Lagrangian, inertia, friction, contact modeling
  6. Control Systems & PID: PID tuning, state-space, LQR, MPC, adaptive & robust control
  7. Embedded Systems & Microcontrollers: Arduino, STM32, RTOS, PWM, serial protocols, FPGA
  8. Robot Operating Systems (ROS): ROS2, nodes, topics, Gazebo, URDF, navigation stacks
  9. Computer Vision for Robotics: Calibration, stereo vision, object recognition, visual SLAM
  10. AI Integration & Autonomous Systems: ML, reinforcement learning, path planning, swarm robotics
  11. Human-Robot Interaction (HRI): Cobots, gesture/voice control, safety standards, social robotics
  12. Industrial Robotics & Automation: PLC, SCADA, Industry 4.0, smart factories, digital twins
  13. Mobile Robotics: Wheeled/legged robots, autonomous vehicles, drones, marine robotics (you are here)
  14. Safety, Reliability & Compliance: Functional safety, redundancy, ISO standards, cybersecurity
  15. Advanced & Emerging Robotics: Soft robotics, bio-inspired, surgical, space, nano-robotics
  16. Systems Integration & Deployment: HW/SW co-design, testing, field deployment, lifecycle
  17. Robotics Business & Strategy: Startups, product-market fit, manufacturing, go-to-market
  18. Complete Robotics System Project: Autonomous rover, pick-and-place arm, delivery robot, swarm sim

Mobile robots must solve a fundamental problem that fixed industrial robots never face: where am I, and how do I get there? The answer depends on the locomotion system. Wheels are efficient on flat surfaces, legs handle rough terrain, and each configuration has unique kinematic constraints.

Shopping Cart Analogy: A differential drive robot (two independently driven wheels) is like pushing a shopping cart — slight speed differences between left and right turn the cart naturally. An omnidirectional robot is like an office chair on caster wheels — it can slide in any direction without turning first. An Ackermann robot is like a car — the front wheels steer while the rear wheels drive.

The differential drive is the most common mobile robot configuration. Two drive wheels with independent motors produce motion and steering through speed differences. The kinematics are elegantly simple:

import math

class DifferentialDriveRobot:
    """Differential drive mobile robot kinematics"""
    
    def __init__(self, wheel_radius, wheel_base):
        self.R = wheel_radius       # meters
        self.L = wheel_base         # distance between wheels (meters)
        self.x = 0.0                # position x (meters)
        self.y = 0.0                # position y (meters)
        self.theta = 0.0            # heading (radians)
        self.path = [(0, 0)]
    
    def forward_kinematics(self, omega_left, omega_right, dt):
        """
        Given wheel angular velocities (rad/s), compute new pose.
        v = R * (ωR + ωL) / 2      — linear velocity
        ω = R * (ωR - ωL) / L      — angular velocity
        """
        v_left = self.R * omega_left
        v_right = self.R * omega_right
        
        v = (v_right + v_left) / 2.0       # linear velocity
        omega = (v_right - v_left) / self.L # angular velocity
        
        # Update pose using ICC (Instantaneous Center of Curvature)
        if abs(omega) < 1e-6:  # Straight line
            self.x += v * math.cos(self.theta) * dt
            self.y += v * math.sin(self.theta) * dt
        else:
            # Arc motion
            r = v / omega  # turning radius
            self.x += r * (math.sin(self.theta + omega * dt) - math.sin(self.theta))
            self.y -= r * (math.cos(self.theta + omega * dt) - math.cos(self.theta))
            self.theta += omega * dt
        
        self.theta = self.theta % (2 * math.pi)
        self.path.append((round(self.x, 3), round(self.y, 3)))
        
        return {'x': round(self.x, 3), 'y': round(self.y, 3), 
                'theta_deg': round(math.degrees(self.theta), 1),
                'v': round(v, 3), 'omega': round(omega, 3)}
    
    def inverse_kinematics(self, v_desired, omega_desired):
        """Given desired v and ω, compute wheel speeds"""
        v_right = v_desired + (omega_desired * self.L / 2)
        v_left = v_desired - (omega_desired * self.L / 2)
        
        omega_right = v_right / self.R
        omega_left = v_left / self.R
        
        return {'omega_left': round(omega_left, 3), 
                'omega_right': round(omega_right, 3)}

# Create robot: 5cm wheel radius, 30cm wheelbase
robot = DifferentialDriveRobot(wheel_radius=0.05, wheel_base=0.30)

# Drive forward for 2 seconds (both wheels same speed)
print("=== Differential Drive Simulation ===\n")
print("Stage 1: Drive straight (2s)")
for t in range(20):
    pose = robot.forward_kinematics(omega_left=10, omega_right=10, dt=0.1)

print(f"  Position: ({pose['x']}, {pose['y']}) m, Heading: {pose['theta_deg']}°")

# Turn right (left wheel faster)
print("\nStage 2: Turn right (1s)")
for t in range(10):
    pose = robot.forward_kinematics(omega_left=10, omega_right=5, dt=0.1)

print(f"  Position: ({pose['x']}, {pose['y']}) m, Heading: {pose['theta_deg']}°")

# Inverse kinematics: what wheel speeds for v=0.5 m/s, ω=1 rad/s?
ik = robot.inverse_kinematics(v_desired=0.5, omega_desired=1.0)
print(f"\nInverse Kinematics (v=0.5 m/s, ω=1.0 rad/s):")
print(f"  Left wheel: {ik['omega_left']} rad/s, Right wheel: {ik['omega_right']} rad/s")

Ackermann Steering

Ackermann steering is the geometry used in cars and car-like robots. The inner wheel turns more sharply than the outer wheel, ensuring both wheels trace concentric circles around the same turn center — preventing tire scrubbing.

import math

class AckermannRobot:
    """Car-like robot with Ackermann steering geometry"""
    
    def __init__(self, wheelbase, track_width):
        self.L = wheelbase          # front-to-rear axle distance
        self.T = track_width        # distance between left-right wheels
        self.x = 0.0
        self.y = 0.0
        self.theta = 0.0            # heading
        self.max_steer = math.radians(35)  # max steering angle
    
    def compute_steering(self, desired_radius):
        """
        Ackermann geometry: compute inner and outer wheel angles
        for a desired turning radius.
        """
        if abs(desired_radius) < self.L:
            print("  Warning: Turning radius too small!")
            desired_radius = self.L * 1.5
        
        # Average steering angle
        delta_avg = math.atan(self.L / desired_radius)
        
        # Inner and outer wheel angles (Ackermann correction)
        delta_inner = math.atan(self.L / (desired_radius - self.T / 2))
        delta_outer = math.atan(self.L / (desired_radius + self.T / 2))
        
        return {
            'avg_angle_deg': round(math.degrees(delta_avg), 1),
            'inner_angle_deg': round(math.degrees(delta_inner), 1),
            'outer_angle_deg': round(math.degrees(delta_outer), 1),
            'turn_radius': round(desired_radius, 2)
        }
    
    def bicycle_model_update(self, speed, steering_angle, dt):
        """Simplified bicycle model for motion prediction"""
        steering_angle = max(-self.max_steer, min(self.max_steer, steering_angle))
        
        self.x += speed * math.cos(self.theta) * dt
        self.y += speed * math.sin(self.theta) * dt
        self.theta += (speed / self.L) * math.tan(steering_angle) * dt
        
        return {'x': round(self.x, 3), 'y': round(self.y, 3),
                'theta_deg': round(math.degrees(self.theta), 1)}

# Create car-like robot (wheelbase=2.5m, track=1.5m)
car = AckermannRobot(wheelbase=2.5, track_width=1.5)

# Compute steering for different turning radii
print("=== Ackermann Steering Geometry ===\n")
for radius in [5, 10, 20, 50]:
    result = car.compute_steering(radius)
    print(f"Radius {radius}m: Inner={result['inner_angle_deg']}°, "
          f"Outer={result['outer_angle_deg']}°, Avg={result['avg_angle_deg']}°")

# Simulate driving in a curve
print("\n=== Bicycle Model Simulation ===")
car2 = AckermannRobot(wheelbase=2.5, track_width=1.5)
steer = math.radians(15)  # 15° steering

for t in range(10):
    pose = car2.bicycle_model_update(speed=5.0, steering_angle=steer, dt=0.5)
    if t % 3 == 0:
        print(f"  t={t*0.5:.1f}s: ({pose['x']}, {pose['y']}) heading={pose['theta_deg']}°")

Holonomic & Omnidirectional Platforms

A holonomic robot can move in any direction at any time without needing to turn first. This requires special wheels — either Mecanum wheels (angled rollers at 45°) or omni wheels (perpendicular rollers). Holonomic platforms are ideal for warehouses, where robots need to navigate tight aisles.

Locomotion Comparison

Type | DOF | Terrain | Maneuverability | Example
Differential Drive | 2 (v, ω) | Flat/indoor | Good (spin in place) | iRobot Roomba, TurtleBot
Ackermann | 2 (v, δ) | Roads/outdoor | Low (min turn radius) | Cars, Waymo, GPS tractors
Mecanum (4-wheel) | 3 (vx, vy, ω) | Flat/indoor | Excellent (strafe) | KUKA KMP, warehouse AGVs
Tracked | 2 (v, ω) | Rough/outdoor | Medium | Bomb disposal, mining
Quadruped | 12+ joints | Any terrain | Very high | Boston Dynamics Spot
Bipedal | 6+ per leg | Human spaces | High (stairs) | Atlas, Digit, Optimus
import math

class MecanumRobot:
    """4-wheel Mecanum drive (holonomic platform)"""
    
    def __init__(self, wheel_radius, lx, ly):
        self.R = wheel_radius       # wheel radius
        self.lx = lx                # half-distance between front/rear axles
        self.ly = ly                # half-distance between left/right wheels
        self.x = 0.0
        self.y = 0.0
        self.theta = 0.0
    
    def inverse_kinematics(self, vx, vy, omega):
        """
        Compute individual wheel speeds from desired body velocity.
        Mecanum inverse kinematics (45° roller angle):
        ω1 = (vx - vy - (lx+ly)ω) / R   (front-left)
        ω2 = (vx + vy + (lx+ly)ω) / R   (front-right)
        ω3 = (vx + vy - (lx+ly)ω) / R   (rear-left)
        ω4 = (vx - vy + (lx+ly)ω) / R   (rear-right)
        """
        k = self.lx + self.ly
        w1 = (vx - vy - k * omega) / self.R  # Front-Left
        w2 = (vx + vy + k * omega) / self.R  # Front-Right
        w3 = (vx + vy - k * omega) / self.R  # Rear-Left
        w4 = (vx - vy + k * omega) / self.R  # Rear-Right
        
        return [round(w, 2) for w in [w1, w2, w3, w4]]
    
    def update(self, vx, vy, omega, dt):
        """Update pose from body-frame velocities"""
        # Transform body velocities to world frame
        cos_t = math.cos(self.theta)
        sin_t = math.sin(self.theta)
        
        self.x += (vx * cos_t - vy * sin_t) * dt
        self.y += (vx * sin_t + vy * cos_t) * dt
        self.theta += omega * dt
        
        return {'x': round(self.x, 3), 'y': round(self.y, 3),
                'theta_deg': round(math.degrees(self.theta), 1)}

# Create Mecanum robot
mec = MecanumRobot(wheel_radius=0.05, lx=0.15, ly=0.20)

# Movement commands
print("=== Mecanum Drive: Omnidirectional Motion ===\n")
commands = [
    ('Forward',      0.5,  0.0,  0.0),
    ('Strafe Right',  0.0,  0.5,  0.0),
    ('Diagonal',     0.5,  0.5,  0.0),
    ('Spin in place', 0.0,  0.0,  1.0),
    ('Forward+Turn', 0.5,  0.0,  0.5),
]

for name, vx, vy, omega in commands:
    wheels = mec.inverse_kinematics(vx, vy, omega)
    print(f"{name:15s} (vx={vx}, vy={vy}, ω={omega})")
    print(f"  Wheels [FL, FR, RL, RR]: {wheels} rad/s\n")

Legged Robots

Legged robots sacrifice the efficiency of wheels for the versatility of legs. They can step over obstacles, climb stairs, traverse rubble, and navigate terrain that would immobilize any wheeled platform. The trade-off is dramatically increased mechanical and computational complexity.

Stability Rule: A statically stable robot can balance without moving (like a table). A dynamically stable robot must keep moving to stay upright (like a bicycle). Quadrupeds can be statically stable with three legs on the ground; bipeds spend much of each stride with too small a support polygon to be statically stable, so they require active balancing.

Quadruped Robots

Quadruped robots like Boston Dynamics' Spot have become commercially viable for inspection, construction monitoring, and hazardous environments. Their stability comes from keeping at least three feet on the ground at any time in slower gaits.

import math

class QuadrupedGait:
    """Quadruped gait pattern generator"""
    
    # Standard gait patterns (phase offsets for each leg)
    GAITS = {
        'walk': {
            'description': 'Slow, statically stable — 3 legs always grounded',
            'phases': [0.0, 0.5, 0.75, 0.25],  # FL, FR, RL, RR
            'duty_factor': 0.75,  # % of cycle each foot is on ground
            'speed': 'slow'
        },
        'trot': {
            'description': 'Diagonal pairs move together — fastest stable gait',
            'phases': [0.0, 0.5, 0.5, 0.0],
            'duty_factor': 0.5,
            'speed': 'medium'
        },
        'pace': {
            'description': 'Same-side pairs move together — lateral rocking',
            'phases': [0.0, 0.5, 0.0, 0.5],
            'duty_factor': 0.5,
            'speed': 'medium'
        },
        'gallop': {
            'description': 'All legs different phase — fastest but dynamically unstable',
            'phases': [0.0, 0.1, 0.5, 0.6],
            'duty_factor': 0.3,
            'speed': 'fast'
        }
    }
    
    def __init__(self):
        self.leg_names = ['Front-Left', 'Front-Right', 'Rear-Left', 'Rear-Right']
    
    def generate_gait(self, gait_name, num_steps=8):
        """Generate gait timing diagram"""
        gait = self.GAITS[gait_name]
        print(f"\nGait: {gait_name.upper()} — {gait['description']}")
        print(f"Duty Factor: {gait['duty_factor']*100:.0f}% | Speed: {gait['speed']}")
        print(f"\nTiming Diagram (â–ˆ=stance/ground, â–‘=swing/air):")
        
        for i, leg in enumerate(self.leg_names):
            phase = gait['phases'][i]
            duty = gait['duty_factor']
            
            # Generate timing pattern
            pattern = ''
            for step in range(num_steps * 4):
                t = (step / (num_steps * 4) + phase) % 1.0
                if t < duty:
                    pattern += '█'
                else:
                    pattern += '░'
            
            print(f"  {leg:12s} |{pattern}|")
        
        # Calculate stability
        support_count = []
        for step in range(num_steps * 4):
            legs_on_ground = 0
            for i in range(4):
                t = (step / (num_steps * 4) + gait['phases'][i]) % 1.0
                if t < gait['duty_factor']:
                    legs_on_ground += 1
            support_count.append(legs_on_ground)
        
        min_support = min(support_count)
        stability = 'Static' if min_support >= 3 else 'Dynamic'
        print(f"\n  Min support legs: {min_support} → {stability} stability")

# Demonstrate all gaits
planner = QuadrupedGait()
for gait in ['walk', 'trot', 'pace', 'gallop']:
    planner.generate_gait(gait)

Case Study: Boston Dynamics Spot


Spot by Boston Dynamics is the first commercially successful quadruped robot. Key specifications and real-world deployments:

  • Specs: 12 DOF (3 per leg), 14 kg payload, 5.2 km/h top speed, 90-minute battery, IP54 ingress protection
  • Perception: 5 stereo cameras (360° vision), IMU, joint encoders — can climb 30° slopes and navigate stairs
  • Applications: Nuclear facility inspection (Sellafield, UK), construction progress monitoring (Pomerleau), oil rig inspection (Aker BP), Mars analog research (NASA JPL)
  • SDK: Python/gRPC API allows custom autonomy behaviors, waypoint navigation, and third-party sensor integration

Key Lesson: Spot succeeded commercially because Boston Dynamics built a platform, not a product. The robot's value comes from the ecosystem of payloads and applications built on top.

Gait Planning & Stability

Gait planning determines when each foot lifts and lands, while footstep planning determines where feet are placed. The key constraint is maintaining the Zero Moment Point (ZMP) within the support polygon — the convex hull of all ground contact points.

Tightrope Analogy: The ZMP is like a tightrope walker's center of gravity. As long as it stays over the rope (support polygon), the walker is stable. When it shifts outside, the walker falls. Bipedal robots constantly manage ZMP by adjusting foot placement, hip sway, and arm counterbalance — exactly like a human walking.
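To make the support-polygon constraint concrete, here is a minimal static-stability check (a sketch, not a full ZMP computation). It tests whether a ground-projected point, such as the center of mass in the static case, lies inside the convex polygon of contact points, which are assumed to be listed in counter-clockwise order.

def inside_support_polygon(point, contacts):
    """True if a ground-projected point lies inside the convex support
    polygon. Assumes contact points are ordered counter-clockwise."""
    px, py = point
    n = len(contacts)
    for i in range(n):
        x1, y1 = contacts[i]
        x2, y2 = contacts[(i + 1) % n]
        # Cross product: negative means the point is to the right of
        # this edge, i.e. outside a counter-clockwise polygon
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True

# Walk gait: three grounded feet form a triangular support polygon (CCW)
tripod = [(0.3, 0.2), (-0.3, 0.2), (-0.3, -0.2)]
print(inside_support_polygon((0.0, 0.1), tripod))   # True  -> statically stable
print(inside_support_polygon((0.2, -0.1), tripod))  # False -> robot would tip

In a trot only a diagonal pair of feet is grounded, so the polygon degenerates to a line segment and the robot must be stabilized dynamically — exactly what the gait generator's minimum-support count indicated.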

Autonomous Vehicles

Autonomous vehicles (AVs) are the highest-profile application of mobile robotics. The SAE J3016 standard defines six levels of driving automation:

SAE Automation Levels

Level | Name | Driving Task | Example
L0 | No Automation | Human does everything | Classic cars
L1 | Driver Assistance | Steering OR acceleration assist | Adaptive cruise control
L2 | Partial Automation | Steering AND acceleration assist | Tesla Autopilot, GM Super Cruise
L3 | Conditional Automation | System drives in specific conditions | Mercedes Drive Pilot (highway)
L4 | High Automation | System drives in geo-fenced area | Waymo, Cruise (urban robotaxi)
L5 | Full Automation | System drives everywhere | Not yet achieved

Sensor Suites

Autonomous vehicles fuse multiple sensor modalities because no single sensor is sufficient for all conditions. The industry debate between camera-only (Tesla's approach) and LiDAR + camera fusion (Waymo, Cruise) remains unresolved.

AV Sensor Comparison

Sensor | Range | Resolution | Weather | Cost | 3D Info
Camera | 200m+ | Very high | Poor (rain/glare) | $50-200 | Inferred (mono) / stereo
LiDAR | 200m+ | High (128 ch) | Moderate | $500-10K | Direct point cloud
Radar | 300m+ | Low | Excellent | $100-500 | Range + velocity
Ultrasonic | 5m | Very low | Good | $5-20 | Range only

Localization & HD Mapping

Localization answers "where am I?" with centimeter accuracy. AVs combine GPS (which alone is only accurate to 1-2 meters) with HD maps, LiDAR matching, visual odometry, and IMU dead-reckoning for robust positioning.

import math
import random

class ParticleFilterLocalizer:
    """Monte Carlo Localization (particle filter) for mobile robots"""
    
    def __init__(self, num_particles, map_bounds):
        self.num_particles = num_particles
        self.map_bounds = map_bounds  # (x_min, x_max, y_min, y_max)
        
        # Initialize particles uniformly
        self.particles = []
        for _ in range(num_particles):
            p = {
                'x': random.uniform(map_bounds[0], map_bounds[1]),
                'y': random.uniform(map_bounds[2], map_bounds[3]),
                'theta': random.uniform(0, 2 * math.pi),
                'weight': 1.0 / num_particles
            }
            self.particles.append(p)
    
    def predict(self, v, omega, dt, noise_v=0.1, noise_omega=0.05):
        """Motion model: move particles forward with noise"""
        for p in self.particles:
            v_noisy = v + random.gauss(0, noise_v)
            omega_noisy = omega + random.gauss(0, noise_omega)
            
            p['x'] += v_noisy * math.cos(p['theta']) * dt
            p['y'] += v_noisy * math.sin(p['theta']) * dt
            p['theta'] += omega_noisy * dt
    
    def update(self, measurement, landmarks):
        """
        Sensor model: weight particles based on how well they
        explain the observed distances to landmarks.
        """
        for p in self.particles:
            weight = 1.0
            for i, (lx, ly) in enumerate(landmarks):
                expected_dist = math.hypot(lx - p['x'], ly - p['y'])
                observed_dist = measurement[i]
                
                # Gaussian likelihood
                sigma = 0.5  # sensor noise (meters)
                diff = expected_dist - observed_dist
                likelihood = math.exp(-0.5 * (diff / sigma) ** 2)
                weight *= max(likelihood, 1e-10)
            
            p['weight'] = weight
        
        # Normalize weights
        total = sum(p['weight'] for p in self.particles)
        for p in self.particles:
            p['weight'] /= total
    
    def resample(self):
        """Low-variance resampling"""
        new_particles = []
        r = random.uniform(0, 1.0 / self.num_particles)
        c = self.particles[0]['weight']
        i = 0
        
        for m in range(self.num_particles):
            u = r + m / self.num_particles
            while c < u:
                i += 1
                c += self.particles[i]['weight']
            
            new_p = dict(self.particles[i])
            new_p['weight'] = 1.0 / self.num_particles
            new_particles.append(new_p)
        
        self.particles = new_particles
    
    def estimate(self):
        """Weighted average of particles = best pose estimate"""
        x = sum(p['x'] * p['weight'] for p in self.particles)
        y = sum(p['y'] * p['weight'] for p in self.particles)
        
        sin_t = sum(math.sin(p['theta']) * p['weight'] for p in self.particles)
        cos_t = sum(math.cos(p['theta']) * p['weight'] for p in self.particles)
        theta = math.atan2(sin_t, cos_t)
        
        return {'x': round(x, 2), 'y': round(y, 2), 
                'theta_deg': round(math.degrees(theta), 1)}

# Simulate particle filter localization
landmarks = [(0, 10), (10, 0), (10, 10), (5, 5)]  # Known landmark positions
true_pos = {'x': 3.0, 'y': 4.0, 'theta': 0.5}

# Create localizer with 500 particles in 15x15 map
localizer = ParticleFilterLocalizer(500, (0, 15, 0, 15))

# Simulate range measurements from true position to landmarks
measurements = [math.hypot(lx - true_pos['x'], ly - true_pos['y']) + random.gauss(0, 0.3)
                for lx, ly in landmarks]

print("=== Particle Filter Localization ===\n")
print(f"True Position: ({true_pos['x']}, {true_pos['y']})")
print(f"Landmarks: {landmarks}")
print(f"Range Measurements: {[round(m, 2) for m in measurements]}\n")

# Run prediction → update → resample cycle
# (the true robot is stationary, so prediction only injects exploration noise)
for iteration in range(5):
    localizer.predict(v=0.0, omega=0.0, dt=0.1)
    localizer.update(measurements, landmarks)
    localizer.resample()
    est = localizer.estimate()
    error = math.hypot(est['x'] - true_pos['x'], est['y'] - true_pos['y'])
    print(f"Iteration {iteration+1}: est=({est['x']}, {est['y']}), error={error:.2f}m")

Drones & Maritime Robotics

Unmanned Aerial Vehicles (UAVs) have transformed industries from agriculture to cinematography. Multirotor drones achieve flight through differential thrust — varying the speed of 4+ propellers to control roll, pitch, yaw, and altitude. Fixed-wing drones trade hovering capability for endurance and speed.

import math

class QuadcopterDynamics:
    """Simplified quadcopter flight dynamics model"""
    
    def __init__(self, mass_kg, arm_length_m):
        self.mass = mass_kg
        self.arm = arm_length_m
        self.g = 9.81               # gravity
        self.hover_thrust = mass_kg * self.g  # total thrust to hover
        
        # State: position, velocity, attitude
        self.z = 0.0                # altitude (m)
        self.vz = 0.0              # vertical velocity (m/s)
        self.roll = 0.0            # roll angle (rad)
        self.pitch = 0.0           # pitch angle (rad)
        self.yaw = 0.0             # yaw angle (rad)
    
    def motor_mixing(self, throttle, roll_cmd, pitch_cmd, yaw_cmd):
        """
        Convert pilot commands to individual motor speeds.
        Quadcopter X configuration:
            M1 (front-left, CCW)   M2 (front-right, CW)
            M3 (rear-left, CW)    M4 (rear-right, CCW)
        """
        m1 = throttle + roll_cmd + pitch_cmd - yaw_cmd
        m2 = throttle - roll_cmd + pitch_cmd + yaw_cmd
        m3 = throttle + roll_cmd - pitch_cmd + yaw_cmd
        m4 = throttle - roll_cmd - pitch_cmd - yaw_cmd
        
        # Clamp to [0, 1]
        motors = [max(0, min(1, m)) for m in [m1, m2, m3, m4]]
        return motors
    
    def update(self, motors, dt):
        """Simple altitude dynamics"""
        total_thrust = sum(motors) * self.hover_thrust / 2.0
        net_force = total_thrust - self.mass * self.g
        az = net_force / self.mass
        
        self.vz += az * dt
        self.z += self.vz * dt
        self.z = max(0, self.z)  # ground constraint
        
        return {
            'altitude': round(self.z, 2),
            'vz': round(self.vz, 2),
            'motors': [round(m, 3) for m in motors]
        }

# Create quadcopter (1.5kg, 25cm arm)
quad = QuadcopterDynamics(mass_kg=1.5, arm_length_m=0.25)

print("=== Quadcopter Motor Mixing ===\n")

commands = [
    ('Hover',       0.5,  0.0,  0.0,  0.0),
    ('Climb',       0.7,  0.0,  0.0,  0.0),
    ('Roll Right',  0.5,  0.1,  0.0,  0.0),
    ('Pitch Fwd',   0.5,  0.0, -0.1,  0.0),
    ('Yaw CW',      0.5,  0.0,  0.0, -0.1),
]

for name, throttle, roll, pitch, yaw in commands:
    motors = quad.motor_mixing(throttle, roll, pitch, yaw)
    print(f"{name:12s}: Motors [M1={motors[0]:.2f}, M2={motors[1]:.2f}, "
          f"M3={motors[2]:.2f}, M4={motors[3]:.2f}]")

# Simulate takeoff
print("\n=== Takeoff Simulation ===")
quad2 = QuadcopterDynamics(mass_kg=1.5, arm_length_m=0.25)

for t in range(20):
    throttle = 0.65 if t < 10 else 0.50  # climb then hover
    motors = quad2.motor_mixing(throttle, 0, 0, 0)
    state = quad2.update(motors, dt=0.1)
    if t % 4 == 0:
        print(f"  t={t*0.1:.1f}s: alt={state['altitude']}m, vz={state['vz']}m/s")

Marine & Underwater Robotics

Autonomous Underwater Vehicles (AUVs) and Remotely Operated Vehicles (ROVs) face unique challenges: GPS doesn't work underwater (radio waves barely penetrate water), pressure increases ~1 atmosphere per 10 meters, and saltwater corrodes electronics. Navigation relies on acoustic transponders, inertial navigation, and Doppler velocity logs.

Underwater Robotics Comparison

Type | Tethered | Depth | Duration | Use Case
ROV | Yes (power + data) | 6,000m+ | Days (powered from ship) | Deep-sea inspection, oil & gas
AUV | No (battery) | 6,000m | Hours (10-24h typical) | Seafloor mapping, mine hunting
Glider | No (buoyancy) | 1,000m | Months (low energy) | Ocean monitoring, climate data
ASV | No (surface) | Surface only | Days-weeks (solar) | Bathymetry, environmental monitoring
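Two of the navigation aids mentioned above are simple to sketch. The snippet below estimates depth from a pressure sensor (roughly one atmosphere per 10 meters, assuming a constant seawater density of ~1025 kg/m³) and dead-reckons horizontal position from DVL body-frame velocities plus an external heading source. The names and numbers are illustrative; real vehicles fuse these signals in an inertial navigation filter.

import math

def depth_from_pressure(p_abs_kpa, rho=1025.0, g=9.81, p_surface_kpa=101.325):
    """Depth in meters from absolute pressure (kPa), assuming
    constant seawater density -- roughly 1 atm per 10 m."""
    return (p_abs_kpa - p_surface_kpa) * 1000.0 / (rho * g)

class DVLDeadReckoner:
    """Integrate DVL body-frame velocities with an external heading
    (e.g., a gyrocompass) into a north/east position estimate."""
    
    def __init__(self):
        self.north = 0.0
        self.east = 0.0
    
    def step(self, v_fwd, v_stbd, heading_rad, dt):
        # Rotate body-frame velocity into the north/east frame
        self.north += (v_fwd * math.cos(heading_rad) - v_stbd * math.sin(heading_rad)) * dt
        self.east += (v_fwd * math.sin(heading_rad) + v_stbd * math.cos(heading_rad)) * dt
        return round(self.north, 1), round(self.east, 1)

# 2 atm of gauge pressure should read close to 20 m of depth
print(f"Depth at {101.325 + 202.65:.1f} kPa absolute: "
      f"{depth_from_pressure(101.325 + 202.65):.1f} m")

# One minute at 1.5 m/s on a 045-degree heading
dr = DVLDeadReckoner()
for _ in range(60):
    pos = dr.step(v_fwd=1.5, v_stbd=0.0, heading_rad=math.radians(45), dt=1.0)
print(f"Dead-reckoned position after 60 s: north={pos[0]} m, east={pos[1]} m")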

Case Study: Ocean Infinity's Armada Fleet


Ocean Infinity operates the world's largest fleet of marine robots — autonomous surface vessels (ASVs) and AUVs that conduct offshore surveys with minimal crew. Their Armada fleet uses:

  • Uncrewed Surface Vessels (USVs): 78m ships that can operate fully autonomously or be remotely piloted from shore control centers
  • AUV Swarms: Multiple AUVs deployed from USVs simultaneously to map the seafloor — covering more area in one expedition than traditional ships cover in months
  • Impact: Reduced CO₂ emissions by 90% vs. conventional survey vessels, while dramatically reducing human risk at sea

Field Deployment Strategies

Deploying mobile robots in the real world requires solving problems that don't exist in the lab: GPS denial (indoors, urban canyons), dynamic obstacles (pedestrians, vehicles), weather (rain, fog, dust), and communication loss (network dead zones).

Field Deployment Checklist: Before deploying any autonomous mobile robot, verify:

  1. Geofencing boundaries set
  2. E-stop accessible to operators
  3. Communication failsafe behavior defined (stop? return home? hold position?)
  4. Battery management with safe return margin (never below 20%)
  5. Obstacle detection redundancy (at least 2 independent sensor modalities)
  6. Regulatory compliance (FAA Part 107 for drones, local autonomous vehicle regulations)
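As a sketch of items 3 and 4, the function below chooses a failsafe action from battery level, link status, and distance home. The thresholds and the linear drain model are illustrative placeholders to be tuned per platform, not recommended values.

def failsafe_action(battery_pct, link_ok, distance_home_m,
                    return_speed_mps=2.0, drain_pct_per_min=1.0,
                    reserve_pct=20.0):
    """Pick a failsafe action; all thresholds are illustrative."""
    # Battery needed to get home at return speed, keeping the reserve intact
    minutes_home = distance_home_m / return_speed_mps / 60.0
    pct_needed = minutes_home * drain_pct_per_min + reserve_pct
    
    if not link_ok:
        return 'HOLD_POSITION'   # comms lost: stop and wait (policy-dependent)
    if battery_pct <= pct_needed:
        return 'RETURN_HOME'     # just enough margin left to get back
    return 'CONTINUE_MISSION'

for batt, link, dist in [(80, True, 500), (22, True, 400), (60, False, 300)]:
    print(f"battery={batt}%, link={link}, home={dist}m -> "
          f"{failsafe_action(batt, link, dist)}")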

Mobile Robot Planner Tool

Use this tool to design a mobile robot platform — document your locomotion type, sensor suite, navigation strategy, and deployment environment, then download the design as Word, Excel, or PDF.


Exercises & Challenges

Exercise 1: Pure Pursuit Controller

Difficulty: Intermediate

Implement a pure pursuit path-following controller for the DifferentialDriveRobot. Given a list of waypoints, the controller should: (1) find the lookahead point on the path, (2) compute the steering arc to that point, (3) output left/right wheel velocities. Test with a figure-8 path.

Exercise 2: Multi-Robot Coordination

Difficulty: Advanced

Create a fleet management system for 3 MecanumRobot instances in a warehouse. Implement: (1) centralized task assignment (which robot picks which order?), (2) collision avoidance (robots must maintain 1m separation), (3) traffic management at aisle intersections (priority queuing). Simulate 10 delivery tasks.

Exercise 3: Quadcopter PID Altitude Controller

Difficulty: Intermediate

Add a PID controller to the QuadcopterDynamics class that maintains a target altitude. The controller should output throttle commands to reach and hold a target altitude of 5 meters. Plot the altitude response over time. Tune Kp, Ki, Kd to minimize overshoot and settling time.

Conclusion & Next Steps

Mobile robotics unlocks the full potential of autonomous systems — taking robots beyond fixed installations into the dynamic, unstructured real world. You've explored wheeled locomotion from differential drive to holonomic Mecanum platforms, legged robots with quadruped gait planning and stability analysis, autonomous vehicles with sensor fusion and particle filter localization, and aerial and marine robots that extend robotics into sky and sea.

The common thread across all mobile platforms is the sense-plan-act loop: perceive the environment (sensors + localization), plan a path (navigation), and execute motion (locomotion control). Mastering these fundamentals prepares you for any mobile robotics domain.

Next in the Series

In Part 14: Safety, Reliability & Compliance, we'll ensure robots operate safely — functional safety analysis, redundancy, fault detection, ISO standards, and cybersecurity for robotic systems.