Human-centered robotics for well-being

We’re building companion robots aimed at improving quality of life and care consistency for our users. Built for trials, documentation, and compliance.

Meet Vilu

Vilu is our base platform, demonstrated live in a variety of real-world environments and built for structured pilots. Features shown are under development and may vary by pilot configuration.

Physical

  • Height: 95 cm

  • Weight: 12 kg

  • Arm Payload: 0.5 kg

  • Shoulder Length: 48 cm

  • Bottom Chassis: 35 cm x 38 cm

Performance

  • Battery Life: 3-4 hours

  • Wheels: 80 mm Mecanum

  • Speed: 1.5 m/s

  • CPU: NVIDIA Jetson Orin Nano

Mobility

  • Max Incline: 10° Ramp

  • Ground Clearance: 2.5 cm

  • Min Hallway Width: 72 cm

  • Emergency Stop: Physical Button

Autonomy

Our robot autonomously maps and navigates domestic environments using a combination of sensors and custom algorithms.

Perception

LiDAR + Depth Camera + IMU Sensor Fusion

By fusing these sensors, Vilu autonomously navigates a domestic environment: it avoids obstacles, plans routes, and responds to navigation commands.

Navigation

Real-Time Mapping and Localization

Obstacle Detection and Safe Path Planning

Navigates to Custom Points on Command (sketched below)
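
The navigation stack itself is not documented on this page, so purely as an illustration, here is a minimal sketch of sending a "go to this point" command, assuming a ROS 2 / Nav2 setup; the action name, frame, and coordinates are placeholders, not details of Vilu's software.

```python
# Minimal sketch: send a "navigate to this point" goal to a Nav2 action server.
# Assumption: a ROS 2 / Nav2 stack (not stated on this page); names are placeholders.
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from nav2_msgs.action import NavigateToPose
from geometry_msgs.msg import PoseStamped


class GoToPoint(Node):
    def __init__(self):
        super().__init__('go_to_point')
        # Nav2 exposes navigation as the 'navigate_to_pose' action.
        self._client = ActionClient(self, NavigateToPose, 'navigate_to_pose')

    def send_goal(self, x: float, y: float):
        goal = NavigateToPose.Goal()
        pose = PoseStamped()
        pose.header.frame_id = 'map'       # placeholder map frame
        pose.pose.position.x = x
        pose.pose.position.y = y
        pose.pose.orientation.w = 1.0      # no rotation requested
        goal.pose = pose
        self._client.wait_for_server()
        return self._client.send_goal_async(goal)


def main():
    rclpy.init()
    node = GoToPoint()
    node.send_goal(2.0, 1.5)               # hypothetical "kitchen" waypoint
    rclpy.spin(node)                       # keep the node alive while the goal executes


if __name__ == '__main__':
    main()
```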

Voice & Expression

Vilu uses multimodal feedback—voice, a dynamic face, and LED lighting—to make interactions intuitive and to signal what the robot is doing.

Visual Communication (LED + Face)

  • Status LED indicates listening / speaking / processing / alert (sketched below)

  • Dynamic face with eye gaze and mouth animation

  • Configurable “care-friendly” expressions (neutral, encouraging, attentive)
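
As a purely illustrative sketch of the status-LED idea above, one way to map interaction states to colours is shown below; the four states come from this page, but every colour value and the set_led() hook are assumptions.

```python
# Hypothetical sketch: map the interaction states listed above to LED colours.
# The states mirror this page; the RGB values and set_led() hook are assumed.
from enum import Enum


class State(Enum):
    LISTENING = 'listening'
    SPEAKING = 'speaking'
    PROCESSING = 'processing'
    ALERT = 'alert'


# One colour per state so users can read the robot's status at a glance.
STATE_COLOURS = {
    State.LISTENING: (0, 120, 255),    # blue
    State.SPEAKING: (0, 200, 80),      # green
    State.PROCESSING: (255, 180, 0),   # amber
    State.ALERT: (255, 0, 0),          # red
}


def set_led(rgb):
    """Placeholder for the real LED driver call."""
    print('LED ->', rgb)


def show_state(state: State) -> None:
    set_led(STATE_COLOURS[state])


show_state(State.LISTENING)
```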

Speech Interface

  • Custom voices (tone, speed, language) configurable per pilot

  • On-device speech recognition using NVIDIA Riva STT (sketched below)

  • Noise-robust listening with push-to-talk / wake-word
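
Since NVIDIA Riva is named above for speech-to-text, here is a minimal sketch of offline transcription with the nvidia-riva-client Python package, assuming a Riva server on localhost:50051 and a 16 kHz mono WAV clip; the actual pilot integration may differ.

```python
# Minimal sketch: transcribe a short WAV clip with a locally running Riva ASR server.
# Assumptions: nvidia-riva-client is installed, Riva is serving on localhost:50051,
# and command.wav is 16 kHz mono PCM (all placeholders, not pilot specifics).
import riva.client

auth = riva.client.Auth(uri='localhost:50051')
asr = riva.client.ASRService(auth)

config = riva.client.RecognitionConfig(
    encoding=riva.client.AudioEncoding.LINEAR_PCM,
    sample_rate_hertz=16000,
    language_code='en-US',
    max_alternatives=1,
    enable_automatic_punctuation=True,
)

with open('command.wav', 'rb') as fh:
    audio_bytes = fh.read()

response = asr.offline_recognize(audio_bytes, config)
for result in response.results:
    print(result.alternatives[0].transcript)
```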

Gesture & Assistance

Functional Arm (4-DOF | 0.5 kg payload)

  • Designed for simple, low-risk interactions (pointing, greeting, handing lightweight items)

  • Force- and speed-limited motion for indoor operation near people

  • Supports repeatable pilot tasks (guided routines + physical cues)

Expressive Arm (5-DOF | Gesture Engine)

  • A dedicated arm for nonverbal communication: greeting, encouragement, acknowledgement, and other gestures (sketched below)

  • Mood / state signaling synchronized with voice + face

  • Helps users interpret intent quickly, which is useful in noisy rooms or for people with hearing impairments
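
To make the gesture-engine idea concrete, here is a purely hypothetical sketch of mapping named gestures to 5-DOF joint-angle waypoints; none of the names, angles, or the send_joint_angles callback come from this page.

```python
# Hypothetical sketch of a gesture engine: named gestures map to sequences of
# 5-DOF joint-angle waypoints (degrees) with a dwell time per pose (seconds).
# All names, angles, and the send_joint_angles callback are illustrative only.
from dataclasses import dataclass
from typing import Callable, Sequence
import time


@dataclass
class Waypoint:
    joints: Sequence[float]   # 5 joint angles in degrees
    hold_s: float             # time to dwell at this pose


# A tiny gesture library keyed by intent; real poses would be tuned per arm.
GESTURES = {
    'greet': [
        Waypoint((0, 45, 90, 0, 0), 0.5),
        Waypoint((0, 45, 90, 20, 0), 0.3),    # small "wave" of the wrist
        Waypoint((0, 45, 90, -20, 0), 0.3),
        Waypoint((0, 0, 0, 0, 0), 0.5),       # return to rest
    ],
    'acknowledge': [
        Waypoint((0, 30, 60, 0, 0), 0.4),     # brief "nod" of the arm
        Waypoint((0, 0, 0, 0, 0), 0.4),
    ],
}


def play_gesture(name: str, send_joint_angles: Callable[[Sequence[float]], None]) -> None:
    """Step through a gesture's waypoints, pushing each pose to the arm driver."""
    for wp in GESTURES[name]:
        send_joint_angles(wp.joints)   # the hardware interface here is an assumption
        time.sleep(wp.hold_s)


# Example: print poses instead of driving real hardware.
play_gesture('greet', lambda joints: print('pose ->', joints))
```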

Contact Us

Interested in working together? Share a few details and we will be in touch shortly. We can’t wait to hear from you!