Vision and Technological Philosophy

Robotin aims to build a decentralized embodied-intelligence data platform that collects the data crucial for embodied intelligence training. This includes both user-generated visual data collected directly through the Robotin app and large-scale teleoperated robot demonstrations that capture real manipulation behaviors. We believe that data harvested from ordinary households, rather than from centralized and costly collection pipelines that cannot scale to the diversity of real household environments, is the true foundation of future embodied intelligence development.

Embodied Intelligence and VLA

The future of intelligence lies not only in seeing and speaking but in doing and solving within the physical world. This is the era of Embodied Intelligence, a paradigm shift that requires a new type of data. Unlike conventional computer vision datasets focused on static classification, embodied AI systems must learn intricate perception-action loops to navigate, understand, and interact with dynamic environments. These perception-action datasets can originate from everyday household visuals as well as from high-quality robot manipulation trajectories teleoperated by users. Training such systems demands an unprecedented volume of multimodal, high-fidelity data that captures real-world interactions. Today, however, this data is scarce, fragmented, and expensive to collect, making it the biggest bottleneck for the evolution of Vision-Language-Action (VLA) models in domains like household robotics.

The Robotin Network is a DePIN project designed to tackle this data problem head-on. We combine user-generated perception data with centralized robot hardware that is remotely operated by contributors, forming a unified, scalable data engine for embodied intelligence. We are building a community-driven, decentralized network for collecting and generating high-quality embodied AI data. Our project establishes a data pipeline specifically for indoor household domains, enabling the training of VLA models that can generalize across complex navigation and manipulation tasks. By creating a scalable and cost-effective data infrastructure, we empower the development of robust and general-purpose robots for the real world.

Open and Collaborative Ecosystem

The Robotin ecosystem is highly transparent and open to everyone. Participation does not depend on owning specialized hardware — users can contribute through the app or by remotely operating shared robot systems hosted by Robotin. When the ecosystem reaches maturity, anyone can design, build, or customize hardware devices using our open-source framework. Participants are free to join and leave the network as they wish, and they can help collect, store, and process data without needing special permission.

1. Community and Governance

Our strength comes from our diverse community, which includes data providers, developers, hardware makers, and more. Together, they drive innovation and ensure the ecosystem continues to grow.

While we are a decentralized network, the Robotin Foundation oversees the project. The Foundation manages software and hardware development, ensures data quality, and guides network operations. We use Robotin Governance Proposals (RGPs) to let the community participate in decision-making, ensuring a transparent and fair process.

2. Privacy and Data Security

Privacy and security are at the heart of the Robotin project. We protect user privacy by anonymizing all data. For example, our devices and systems can precisely identify human facial features and other personal information, then automatically blur them. This allows us to create valuable data while fully protecting user privacy.
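The detect-then-blur step above can be sketched as follows. This is a minimal, self-contained illustration, not Robotin's actual implementation: it assumes a detector has already produced face bounding boxes, and shows only the blurring stage, applied to a grayscale frame represented as nested lists.

```python
def box_blur_region(image, box, radius=1):
    """Blur the pixels inside box=(x, y, w, h) with a simple box filter.
    `image` is a list of rows of grayscale ints; modified in place."""
    x, y, w, h = box
    src = [row[:] for row in image]  # snapshot, so the blur reads original values
    for j in range(y, y + h):
        for i in range(x, x + w):
            # Average the pixel with its neighbors inside the image bounds.
            neighbors = [
                src[jj][ii]
                for jj in range(max(0, j - radius), min(len(src), j + radius + 1))
                for ii in range(max(0, i - radius), min(len(src[0]), i + radius + 1))
            ]
            image[j][i] = sum(neighbors) // len(neighbors)
    return image


def anonymize(image, face_boxes):
    """Blur every detected face region so the frame carries no identifiable faces."""
    for box in face_boxes:
        box_blur_region(image, box)
    return image


# Example: a 4x4 frame with a bright "face" in the top-left 2x2 block.
frame = [
    [255, 255, 0, 0],
    [255, 255, 0, 0],
    [0,   0,   0, 0],
    [0,   0,   0, 0],
]
anonymize(frame, [(0, 0, 2, 2)])  # the face region is smeared into its surroundings
```

A production pipeline would use a learned face detector and a stronger blur (e.g. Gaussian or pixelation) on full-color frames, but the structure is the same: detect regions containing personal information, then irreversibly degrade those regions before the data is stored or shared.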

In the long term, Robotin aims to enable autonomous embodied robots to operate inside real households, where every interaction — from dirt detection to object organization — continuously generates large-scale embodied intelligence data. As more devices and robot platforms integrate into the network, Robotin will evolve into a global infrastructure layer for real-world physical AI.
