Emergency team gathers around a robotic dog with glowing blue accents and a silver body, with a neon-lit cityscape visible through the doorway.

Texas A&M Students Build AI-Powered Robotic Dog for Emergency Response

A team of Texas A&M engineering students has turned a robotic dog into a cutting-edge AI assistant for emergency responders, combining memory-driven navigation with a custom multimodal large language model.

The project was led by engineering technology master’s student Sandun Vitharana and interdisciplinary engineering doctoral student Sanjaya Mallikarachchi, who together built a prototype that never forgets where it’s been and what it’s seen.

At the core of the robot dog is a memory-based system that records previously traveled paths, allowing the machine to reuse routes and cut down repeated exploration. This efficiency is critical in search-and-rescue missions, especially in unmapped areas and GPS-denied environments, the release said.
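
As a rough illustration of the idea, not the team's implementation, a route memory can be kept as a graph of traveled waypoints and searched before any new exploration begins. Every name in the sketch below (PathMemory, known_route, the waypoint labels) is hypothetical:

```python
# A minimal sketch of memory-driven route reuse over a simple waypoint graph.
# Names and structure are assumptions for illustration, not the Texas A&M code.
import heapq
from collections import defaultdict

class PathMemory:
    """Records traveled edges between waypoints so routes can be reused."""
    def __init__(self):
        self.graph = defaultdict(dict)  # waypoint -> {neighbor: traversal cost}

    def record_edge(self, a, b, cost):
        # Store both directions: a traversed corridor is reusable either way.
        self.graph[a][b] = cost
        self.graph[b][a] = cost

    def known_route(self, start, goal):
        """Dijkstra over remembered edges; returns a path or None."""
        dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
        while pq:
            d, node = heapq.heappop(pq)
            if node == goal:  # reconstruct the remembered route
                path = [goal]
                while path[-1] != start:
                    path.append(prev[path[-1]])
                return path[::-1]
            if d > dist.get(node, float("inf")):
                continue
            for nbr, cost in self.graph[node].items():
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr], prev[nbr] = nd, node
                    heapq.heappush(pq, (nd, nbr))
        return None  # goal not in memory: fall back to exploration

memory = PathMemory()
memory.record_edge("entry", "corridor_1", 4.0)
memory.record_edge("corridor_1", "stairwell", 2.5)
route = memory.known_route("entry", "stairwell")  # reused, no re-exploration
```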

The dog also understands voice commands and uses AI-driven camera input for path planning and object identification. It responds quickly to avoid collisions and handles high-level planning by using the custom MLLM to analyze its current view and decide how best to proceed, the release noted.
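
To make the MLLM planning step concrete, here is a hedged sketch of querying a multimodal model with the current camera frame. The mllm.complete() call, prompt format, and action vocabulary are all assumptions for illustration; the article does not describe the team's actual interface:

```python
# Hypothetical sketch: asking an on-board multimodal LLM for next actions
# given the current camera view. The model API is assumed, not real.
from dataclasses import dataclass

ACTIONS = {"forward", "turn_left", "turn_right", "stop", "backtrack"}

@dataclass
class PlanStep:
    action: str
    reason: str

def plan_from_view(mllm, frame_jpeg: bytes, mission: str) -> list[PlanStep]:
    """Ask the MLLM to propose the next few actions given the current view."""
    prompt = (
        f"Mission: {mission}\n"
        "Given the attached camera frame, list the next 3 actions "
        f"(one of {sorted(ACTIONS)}) with a one-line reason each, "
        "formatted as 'action: reason'."
    )
    reply = mllm.complete(prompt=prompt, image=frame_jpeg)  # hypothetical API
    steps = []
    for line in reply.splitlines():
        action, _, reason = line.partition(":")
        if action.strip() in ACTIONS:  # keep only actions the robot can execute
            steps.append(PlanStep(action.strip(), reason.strip()))
    return steps
```

Constraining the model to a fixed action vocabulary is one common way to keep free-form language output safely executable on hardware.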

The system interprets visual inputs and generates routing decisions, integrating environmental image capture, high-level reasoning, and path optimization within a hybrid control architecture that supports both strategic planning and real-time adjustments.
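
As an illustration only, a hybrid architecture of this kind is often structured as a fast reactive layer that can pre-empt a slower deliberative planner. The sketch below assumes hypothetical robot and planner interfaces; the release does not describe the team's control code:

```python
# A minimal hybrid control loop: the reactive layer runs every tick and
# always wins on safety; the deliberative (MLLM-driven) layer fills the
# action queue. All class and method names here are illustrative.
import time

SAFE_DISTANCE_M = 0.5

def control_loop(robot, planner, hz: float = 20.0):
    plan = []  # queue of high-level actions from the deliberative layer
    period = 1.0 / hz
    while robot.active():
        # Reactive layer: cheap sensor check on every iteration.
        if robot.min_obstacle_distance() < SAFE_DISTANCE_M:
            robot.stop()
            plan.clear()          # stale plan: force re-planning
        elif not plan:
            # Deliberative layer: slower, consults the MLLM on demand.
            plan = planner.plan_from_view(robot.camera_frame())
        else:
            robot.execute(plan.pop(0))
        time.sleep(period)
```

Keeping the safety check outside the planner means a stale plan can never drive the robot into a newly detected obstacle.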

The robot’s behavior is described as human-like: per the release, it pairs reactive behaviors, such as quick collision avoidance, with deliberative, thoughtful decision-making driven by the custom MLLM.

Robotic dog stepping forward over debris in a search-and-rescue mission, with a voice-command interface and a distorted map overlay.

Vitharana said, “Some academic and commercial systems have integrated language or vision models into robotics; however, we haven’t seen an approach that leverages MLLM-based memory navigation in the structured way we describe, especially with custom pseudocode guiding decision logic.”
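
The release does not reproduce that pseudocode. Purely as a hypothetical illustration, memory-first decision logic of the kind described might be ordered like this, with every name below invented:

```python
# Illustrative ordering only: check route memory first, fall back to
# MLLM-guided exploration, and hold if neither yields a safe plan.
def next_action(goal, memory, mllm, robot):
    route = memory.known_route(robot.position(), goal)
    if route:
        return ("follow_memorized_route", route)   # reuse, skip exploration
    steps = mllm.plan(robot.camera_frame(), goal)  # deliberative fallback
    if steps:
        return ("explore", steps)
    return ("hold", [])  # no safe plan: wait for operator input
```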

Mallikarachchi added in the release, “Moving forward, this kind of control structure will likely become a common standard for human-like robots.”

Dr. Isuru Godage, assistant professor in the Department of Engineering Technology and Industrial Distribution and project advisor, explained, “The core of our vision is deploying MLLM at the edge, which gives our robotic dog the immediate, high-level situational awareness and emotional intelligence previously impossible. This allows the system to bridge the interaction gap between humans and machines seamlessly. Our goal is to ensure this technology is not just a tool, but a truly empathetic partner, making it the most sophisticated and first responder-ready system for any unmapped environment.”

The university noted that the robot dog’s use could extend beyond disaster response, improving efficiency in hospitals, warehouses, and other large facilities. Its advanced navigation could also help people with visual impairments, explore minefields, or perform reconnaissance in hazardous areas.

The project received support from the National Science Foundation. The team combined the MLLM idea with voice commands to build a natural, intuitive system that shows how vision, memory, and language can come together interactively, the release said.
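
To make the voice-command idea concrete, here is a toy sketch of mapping a transcript to a navigation goal. It assumes an upstream speech-to-text stage, and the grammar and place names are invented for illustration; the release does not describe the team's interface:

```python
# Toy command parser: turn a recognized utterance into a navigation intent.
import re

KNOWN_PLACES = {"entry", "stairwell", "corridor_1"}

def parse_command(transcript: str) -> dict | None:
    """'go to the stairwell' -> {'intent': 'navigate', 'goal': 'stairwell'}"""
    text = transcript.lower()
    m = re.search(r"go to (?:the )?(\w+)", text)
    if m and m.group(1) in KNOWN_PLACES:
        return {"intent": "navigate", "goal": m.group(1)}
    if "stop" in text:
        return {"intent": "stop"}
    return None  # unrecognized: ask the operator to rephrase

print(parse_command("Go to the stairwell"))
# {'intent': 'navigate', 'goal': 'stairwell'}
```

A production system would more likely hand the transcript to the MLLM itself rather than a regex grammar, but the intent-plus-goal output shape is the same.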

A pair of robotic dogs were demonstrated climbing concrete obstacles, showcasing their advanced navigation capabilities. Photo credit: Logan Jinks/Texas A&M University College of Engineering.

More details about the robot can be found on Texas A&M’s website.

Key Takeaways

  • The robot dog remembers paths, uses voice commands, and relies on a custom multimodal LLM for navigation.
  • Its memory-driven system reduces repeated exploration, a vital feature for search-and-rescue in GPS-denied zones.
  • The project, backed by the NSF, demonstrates a human-like decision process that the team expects to become a standard for future human-like robots.

The Texas A&M team’s robotic dog represents a significant step toward integrating AI, vision, memory, and language into a single platform that can assist emergency responders and operate in other complex environments.

