Coalescent Mobile Robotics builds robots that assist supermarket staff by moving trolleys. Specifically, our robots can move, dock trolleys, move docked trolleys, and undock trolleys.
Supermarkets change over time, as sections and products are rearranged. In addition, they contain many dynamic objects, such as trolleys, other robots, and people.
Our robots need to understand such dynamic environments for localization and obstacle avoidance, among other tasks. Because collecting and annotating enough real data to train and validate our robots for the tasks we want them to learn would be prohibitively expensive, we need to build a synthetic dataset.
We want to automatically generate high-fidelity virtual scenes of supermarkets based on objects found in the real dataset, e.g., shelves, products, trolleys, staff, and customers. These scenes must be generated using a Domain Specific Language (DSL), which must support both deterministic and randomized elements.
The DSL describes the scene, which is then rendered using the camera’s parameters and pose. From each rendered scene, the solution shall provide segmentation, depth, and pose information for every object in the scene.
Design a DSL that describes a supermarket scene; the DSL may also make use of vector graphics to do so. It will start as a small proposal and evolve continuously throughout the project, gaining additional features and improvements.
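As an illustration only, a first version of the DSL could be embedded in Python (the types and method names below are hypothetical), mixing deterministic placements with seeded randomized ones so that each scene is reproducible:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Placement:
    """Pose of one object instance in the scene (metres, radians)."""
    model: str          # name of a 3D model in the asset dataset
    x: float
    y: float
    yaw: float = 0.0

@dataclass
class Scene:
    seed: int = 0
    objects: list = field(default_factory=list)

    def place(self, model, x, y, yaw=0.0):
        """Deterministic element: one object at a fixed pose."""
        self.objects.append(Placement(model, x, y, yaw))

    def scatter(self, model, count, x_range, y_range):
        """Randomized element: `count` instances at seeded random poses."""
        rng = random.Random(self.seed)
        for _ in range(count):
            self.objects.append(Placement(
                model,
                rng.uniform(*x_range),
                rng.uniform(*y_range),
                rng.uniform(0.0, 6.283),
            ))

# A tiny scene: one fixed shelf plus three randomly placed trolleys.
scene = Scene(seed=42)
scene.place("shelf_a", x=0.0, y=0.0)
scene.scatter("trolley", count=3, x_range=(1, 9), y_range=(1, 5))
```

Because the random placements are driven by the scene's seed, the same description always expands to the same scene, which keeps samples reproducible.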
Create a 3D model dataset from commercial, open-source, and other available repositories, containing the main objects present in the supermarket. Like the DSL, the 3D model dataset will evolve throughout the project, starting with a small subset of models and growing as the project progresses.
A DSL parser that generates a supermarket scene description containing all the objects in the scene, their poses, and any other information necessary to generate the 3D scene (e.g., lights and materials). The scene description can be a YAML file or another custom DSL and shall contain the full description of the scene.
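A minimal sketch of such a parser, assuming a hypothetical line-oriented syntax (the `place` and `light` commands are invented for illustration), could turn the DSL text into a dictionary that is trivially dumped to YAML:

```python
# Hypothetical DSL input, one command per line, e.g.:
#   place shelf_a 0.0 0.0 0.0    # model name, then x y z
#   light ceiling 4.0 2.0 3.0

def parse_scene(text):
    """Parse DSL text into a scene-description dict (objects + lights)."""
    scene = {"objects": [], "lights": []}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()   # strip comments and blanks
        if not line:
            continue
        cmd, name, *nums = line.split()
        x, y, z = (float(n) for n in nums)
        entry = {"model": name, "pose": {"x": x, "y": y, "z": z}}
        if cmd == "place":
            scene["objects"].append(entry)
        elif cmd == "light":
            scene["lights"].append(entry)
        else:
            raise ValueError(f"unknown command: {cmd}")
    return scene
```

The resulting dict could be serialized with a YAML library to obtain the intermediate scene-description file consumed by the scene generator.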
A scene generator that uses the scene description to produce an actual 3D scene that can be visualized using software. The scene generator shall be written for the specific software used, such as Unity 3D, Blender, or Unreal Engine (the choice will be made during the execution of the project). For example, the generator could be a Python script using the Blender Python API or a C# project using Unity 3D.
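Since the target software is still undecided, the generator could be kept engine-agnostic behind a thin backend interface. The sketch below is one possible shape under that assumption; the stand-in backend marks where a Blender or Unity implementation would slot in:

```python
def build_scene(description, backend):
    """Instantiate every object in a scene description through a
    rendering backend. The backend interface is an assumption: a real
    one would wrap the chosen engine (e.g. the Blender Python API)."""
    handles = []
    for obj in description["objects"]:
        p = obj["pose"]
        handles.append(backend.import_model(obj["model"], p["x"], p["y"], p["z"]))
    return handles

class RecordingBackend:
    """Stand-in backend that records calls instead of rendering; a
    Blender backend would import the model and set its location here."""
    def __init__(self):
        self.calls = []

    def import_model(self, model, x, y, z):
        self.calls.append((model, x, y, z))
        return len(self.calls) - 1   # handle to the created object
```

Keeping the engine behind an interface also lets the parser and generator be tested without the 3D software installed.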
A sample generator that, given a scene and the camera pose and parameters in the scene description, generates the ground-truth data described above (segmentation, depth, and pose information for each object).
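The ground-truth side can be illustrated with a pinhole camera model. This sketch (all parameter names are illustrative) only projects object centres; the real sample generator would rasterise full segmentation masks and depth maps:

```python
def sample_ground_truth(objects, fx, fy, cx, cy, width, height):
    """Given object centres already expressed in the camera frame
    (x right, y down, z forward), return per-object ground truth:
    pixel position, depth, and an object id usable as a segmentation
    label. `fx, fy, cx, cy` are the camera intrinsics."""
    samples = []
    for obj_id, (x, y, z) in objects:
        if z <= 0:
            continue                      # behind the camera
        u = fx * x / z + cx               # pinhole projection
        v = fy * y / z + cy
        if 0 <= u < width and 0 <= v < height:
            samples.append({"id": obj_id, "pixel": (u, v), "depth": z})
    return samples
```

An object sitting on the optical axis two metres ahead projects to the principal point with depth 2.0, while objects behind the camera or outside the image are discarded.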
By producing many distinct virtual samples, one can generate an entire synthetic dataset to develop and, in the case of supervised learning, train computer vision algorithms.
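For instance, assigning one random seed per sample (the sampler below is hypothetical) yields a dataset that is both varied and reproducible:

```python
import random

def make_scene_description(seed, num_trolleys=3):
    """Hypothetical sampler: one seeded scene description per call, so
    each sample is distinct yet exactly regenerable from its seed."""
    rng = random.Random(seed)
    return {
        "seed": seed,
        "trolleys": [
            {"x": rng.uniform(0, 10), "y": rng.uniform(0, 6)}
            for _ in range(num_trolleys)
        ],
    }

# One synthetic dataset: 100 distinct, reproducible scene descriptions.
dataset = [make_scene_description(seed) for seed in range(100)]
```

Storing the seed alongside each sample means any training example can be regenerated later, e.g. at a higher resolution or with extra annotations.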