Virtual Reality for Robotics
Programmer(s): Ankur Kohli, Ammar Iqbal, Basit Akram & Naveed Manzoor Afridi
M.Sc Robotics Engineering
University of Genoa (UniGe), Italy
Supervisor: Prof. Gianni Viardo Vercelli
This project provides a concise overview of photogrammetry and 3D model reconstruction. Photogrammetry is a technique that uses photographs to extract 3D geometric information, while 3D model reconstruction involves converting raw data into a digital representation. These technologies have diverse applications in fields such as architecture, archaeology, virtual reality, robotics, and manufacturing. Challenges include complex scenes, lighting variations, and algorithm scalability. Integration with other technologies offers further opportunities. Photogrammetry and 3D model reconstruction have the potential to revolutionize industries and enable new possibilities in various fields. RealityCapture, the photogrammetry software used in this project, is a state-of-the-art solution and currently one of the fastest on the market, improving workflow efficiency and letting users focus on their targets.
Photogrammetry and 3D model reconstruction face several challenges that hinder their widespread adoption and optimal performance. These challenges include complex scenes, lighting variations, occlusions, image noise, algorithm scalability, and integration with other technologies. Overcoming these challenges is crucial to improving the accuracy, efficiency, and applicability of photogrammetry and 3D model reconstruction in various industries and fields. Addressing these issues will enable more accurate and detailed 3D models, enhance their usability in real-world scenarios, and open up new possibilities for applications such as architectural design, cultural heritage preservation, virtual reality experiences, robotics, and industrial manufacturing. Therefore, research and development efforts should focus on developing robust algorithms, enhancing data processing techniques, and exploring innovative integration approaches to tackle these challenges and unlock the full potential of photogrammetry and 3D model reconstruction.
In the context of Virtual Reality (VR) for Robotics, the problem lies in the limitations and challenges faced by photogrammetry and 3D model reconstruction techniques when applied to create accurate and detailed 3D models for virtual environments used in robotics applications.
One key challenge is the need for highly precise and real-time 3D models that can be seamlessly integrated with virtual reality environments. The accuracy of the reconstructed models is crucial for tasks such as object recognition, motion planning, and robot manipulation. However, the existing photogrammetry and 3D model reconstruction techniques may not always provide the required level of accuracy and real-time performance necessary for robotics in VR.
Another challenge is the ability to handle complex scenes and dynamic environments. Robotics applications often involve objects or scenes with complex geometries, occlusions, and changing lighting conditions. Photogrammetry and 3D model reconstruction algorithms need to effectively handle these challenges and provide reliable and up-to-date 3D models that accurately represent the environment.
Moreover, the scalability of the reconstruction process is critical. Robotics applications often require real-time updates of the 3D models to accommodate dynamic changes in the environment. The ability to efficiently process large datasets and update the 3D models in real-time is crucial for providing an immersive and interactive experience in VR for robotics.
Additionally, the integration of photogrammetry and 3D model reconstruction with other technologies used in robotics, such as sensor fusion or real-time tracking systems, is essential. Seamless integration can enhance the accuracy and reliability of the virtual environment, enabling better interaction and decision-making for robotic systems.
Addressing these challenges in photogrammetry and 3D model reconstruction for VR in robotics is crucial for creating immersive and realistic virtual environments that can accurately simulate real-world scenarios. By developing improved algorithms, handling complex scenes and dynamic environments, ensuring scalability, and integrating with other technologies, photogrammetry and 3D model reconstruction can enhance the capabilities of VR in robotics and enable more sophisticated and efficient robotic applications in virtual environments.
To address the challenges in photogrammetry and 3D model reconstruction for virtual reality (VR) in robotics, the utilization of Reality Capture technology offers a promising solution. Reality Capture combines photogrammetry, laser scanning, and structured light scanning techniques to create highly accurate and detailed 3D models of real-world objects and scenes.
By employing Reality Capture, real-time reconstruction of 3D models can be achieved, providing up-to-date and precise representations of the environment for VR robotics applications. The advanced algorithms and processing capabilities of Reality Capture enable efficient and accurate reconstruction, ensuring the virtual environment accurately mirrors the real world.
One advantage of Reality Capture is its ability to handle complex scenes and dynamic environments. With its integration of multiple scanning technologies, it can effectively capture and reconstruct objects with intricate geometries, occlusions, and changing lighting conditions. This capability enhances the accuracy and realism of the virtual environment for robotics tasks, enabling more reliable object recognition, motion planning, and robot manipulation.
Furthermore, Reality Capture offers scalable processing capabilities to handle large datasets and enable real-time updates of 3D models. The parallel processing and distributed computing capabilities of the technology ensure efficient data processing, allowing for seamless integration with VR robotics applications. Real-time updates of the 3D models facilitate an immersive and interactive experience, enabling robots to interact with the virtual environment in real-time and respond to dynamic changes.
The integration of Reality Capture with other technologies used in robotics, such as sensor fusion and tracking systems, further enhances the capabilities of VR robotics. By fusing data from different sensors and incorporating real-time tracking information, the accuracy and reliability of the virtual environment are significantly improved, enabling precise localization, object tracking, and interaction between the virtual objects and the robotic system.
Overall, the utilization of Reality Capture technology provides an effective solution to the challenges faced in photogrammetry and 3D model reconstruction for VR in robotics. With its real-time reconstruction capabilities, handling of complex scenes, scalability, and integration with other technologies, Reality Capture enhances the accuracy, realism, and usability of the virtual environment for robotics applications, opening up new possibilities for advanced and efficient robotic tasks in virtual reality settings.
Photogrammetry is a technique of using photographs to generate 3D models or maps of objects and landscapes. This technique involves capturing multiple images of an object or a scene from different angles and using software to combine them into a 3D model. The resulting 3D model can be used for a variety of applications, including virtual reality, robotics, and more.
One of the most exciting applications of photogrammetry is in virtual reality for robotics. By using photogrammetry to create 3D models of real-world objects and environments, it is possible to create immersive virtual reality simulations that can be used to train and test robots. This technology can be used to train robots to perform tasks in a safe and controlled environment, without the risk of damaging real-world equipment or injuring people.
Additionally, photogrammetry can be used to create 3D models of existing equipment or environments, which can be used to design and test new robotics systems. For example, if a company is designing a new robot to operate in a factory environment, it can use photogrammetry to create a 3D model of the factory, which can then be used to simulate the robot's movements and test its effectiveness before the robot is actually built and deployed.
Overall, photogrammetry and 3D model reconstruction have a lot of potential for virtual reality and robotics applications. As the technology continues to improve, we can expect to see more and more innovative uses for these techniques in the field of robotics.
How to turn 2D images into 3D assets with a simple workflow
Note: the figure above is from The Ultimate Guide to 3D Reconstruction with Photogrammetry [5].
3D Data Acquisition: 3D data acquisition refers to the process of capturing information about the three-dimensional shape, surface, and/or appearance of an object or scene using specialized equipment or techniques. This can be done using a variety of methods, such as structured light, laser scanning, photogrammetry, and computer vision, among others. The captured data can then be used for a variety of purposes, such as computer graphics, scientific visualization, measurement, and inspection.
3D Data Processing: 3D data processing refers to the process of using computer software to analyze and manipulate 3D data captured by a reality capture device, such as a laser scanner or photogrammetry camera. This can include tasks such as cleaning and filtering the data, aligning and merging multiple scans or images, and creating a 3D model or point cloud of the captured environment. The resulting 3D data can be used for a variety of purposes, including building and construction, asset management, heritage preservation, and more.
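To make these processing steps concrete, here is a minimal Python sketch of point-cloud cleaning, assuming the Open3D library; the file names and filter parameters are illustrative, not part of the project's actual pipeline:

```python
import open3d as o3d

# Assumption: a raw scan exported as a PLY point cloud.
pcd = o3d.io.read_point_cloud("raw_scan.ply")

# Downsample to a uniform density, then drop statistical outliers
# (points whose mean neighbor distance deviates strongly from the norm).
pcd = pcd.voxel_down_sample(voxel_size=0.005)
pcd, kept_indices = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

o3d.io.write_point_cloud("clean_scan.ply", pcd)
print(f"kept {len(kept_indices)} points after filtering")
```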
Photogrammetry is a technique for creating three-dimensional models and maps from two-dimensional images. It involves taking a series of photographs from different angles and using specialized software such as RealityCapture to process the images and generate a 3D model or map, which can then be brought into platforms such as Unreal Engine.
One of the key challenges in photogrammetry is accurately aligning the images, which is known as image registration. This can be difficult due to factors such as variations in lighting, camera position, and image distortion. To address these challenges, researchers have developed techniques such as feature-based alignment and structure-from-motion.
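As an illustration of feature-based alignment, the hedged sketch below matches SIFT features between two overlapping photos with OpenCV; the image paths and the 0.75 ratio-test threshold are assumptions:

```python
import cv2

# Assumptions: two overlapping photos of the same object.
img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# k-nearest-neighbour matching with Lowe's ratio test to reject ambiguous matches.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences for registration")
```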
Another important aspect of photogrammetry is the creation of accurate and detailed 3D models. This can be achieved through the use of techniques such as multi-view stereo, which involves comparing multiple images of the same scene to generate a 3D point cloud. Other techniques, such as lidar and structured light, can also be used to create high-resolution 3D models.
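Building on such correspondences, the following hedged sketch shows the two-view geometry that underlies multi-view stereo pipelines: estimate the essential matrix, recover the relative camera pose, and triangulate a sparse point cloud. The intrinsics matrix K is an assumption; real pipelines obtain it from calibration:

```python
import numpy as np
import cv2

def two_view_reconstruction(pts1: np.ndarray, pts2: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Essential matrix -> relative pose -> triangulated 3D points.

    pts1, pts2: Nx2 float arrays of matched pixel coordinates (e.g., from the
    SIFT matching sketch above). K: 3x3 intrinsics matrix, an assumption here;
    real pipelines obtain it from calibration or EXIF metadata.
    """
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at the origin
    P2 = K @ np.hstack([R, t])                          # second camera, relative pose
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T                     # dehomogenize to Nx3 points
```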
Recent advances in machine learning have also led to the development of neural network-based methods for photogrammetry. These methods can learn to align images and create 3D models from large datasets, and have the potential to significantly improve the accuracy and speed of photogrammetry.
Overall, photogrammetry and 3D model reconstruction are active areas of research with numerous practical applications, including mapping, architectural documentation, and heritage preservation.
Also, the state of the art in photogrammetry and 3D model reconstruction in terms of virtual reality (VR) for robotics has seen significant progress. Here are some key advancements in the field:
Real-Time Reconstruction: Real-time reconstruction techniques have been developed to generate 3D models in real-time or near real-time, enabling immediate integration with VR environments for robotics. These techniques leverage parallel processing, efficient algorithms, and optimized data structures to achieve fast reconstruction speeds, facilitating real-time interactions between robots and the virtual environment.
Sensor Fusion and Calibration: Integration of photogrammetry with other sensors, such as depth sensors or inertial measurement units (IMUs), allows for sensor fusion and calibration. This integration improves the accuracy of 3D reconstructions by combining visual data with depth or motion information, enabling more precise localization and object tracking in VR for robotics.
Dynamic Scene Handling: Advanced algorithms have been developed to handle dynamic scenes in VR for robotics. These algorithms can detect and track moving objects, handle changing lighting conditions, and accurately update the 3D models in real-time. This capability is essential for maintaining the accuracy and realism of the virtual environment and ensuring seamless interactions between robots and dynamic virtual objects.
Multi-Modal Data Integration: Integration of multiple data modalities, such as RGB images, depth maps, and point cloud data, enhances the fidelity of 3D model reconstructions. By combining data from different sensors, algorithms can leverage the strengths of each modality to improve accuracy, handle occlusions, and capture fine details in the virtual environment for robotics.
Interactive User Interfaces: User interfaces and tools have been developed to facilitate user interaction and manipulation within the VR environment for robotics. These interfaces allow users to directly interact with virtual objects, perform object manipulation tasks, and provide haptic feedback, enhancing the immersive experience and enabling intuitive control of robotic systems in VR scenarios.
Deep Learning and AI Techniques: Deep learning and AI techniques have been applied to various aspects of photogrammetry and 3D model reconstruction in VR for robotics. Convolutional neural networks (CNNs) and generative models have been utilized for feature extraction, object recognition, and scene understanding, improving the efficiency and accuracy of reconstruction algorithms.
Cloud-Based Solutions: Cloud-based photogrammetry services have emerged, offering scalable and on-demand computational resources for 3D model reconstruction in VR for robotics. These services allow users to upload their data to the cloud, where the reconstruction process is performed, enabling faster processing, large-scale reconstructions, and accessibility for users with limited computational resources.
It's important to note that the field of photogrammetry and 3D model reconstruction in VR for robotics is rapidly evolving. Ongoing research continues to push the boundaries of accuracy, real-time performance, and integration capabilities, with the aim of providing more realistic and interactive VR environments for robotics applications.
Numerous tools can be used to accomplish this task; in this project we rely on RealityCapture and Unreal Engine.
Reality Capture is a leading software tool used in the field of photogrammetry and 3D model reconstruction. It offers advanced capabilities for capturing and processing images to create highly detailed and accurate 3D models of real-world objects and scenes. Here are some key features and functionalities of Reality Capture:
- Image-based Reconstruction: Reality Capture uses a collection of images taken from different viewpoints to reconstruct 3D models. It employs advanced algorithms to align and stitch these images together, extracting geometric and textural information to create a 3D representation of the scene.
- Multiple Input Sources: The software supports various input sources, including standard RGB images, laser scans, and structured light scans. It can seamlessly integrate data from different sources to create a comprehensive and detailed 3D model.
- Automatic Alignment and Calibration: Reality Capture automates the alignment and calibration process, making it easier to handle large datasets. The software identifies common features in the images and automatically aligns them to create an accurate representation of the scene.
- Dense Point Cloud Generation: By leveraging multi-view stereo algorithms, Reality Capture generates dense point clouds, capturing intricate details and providing a solid foundation for the 3D model reconstruction process.
- High-Quality Texturing: The software enables high-quality texturing of the reconstructed 3D models by applying the original images' textures onto the model's surface. This results in realistic and visually appealing representations of the captured scene.
- Mesh Generation and Optimization: Reality Capture generates optimized meshes from the dense point clouds, producing a continuous and watertight 3D surface. The software employs various optimization techniques to refine the mesh and enhance its visual quality.
- Export and Integration: The 3D models created in Reality Capture can be exported in various file formats, including industry-standard formats such as OBJ, FBX, and PLY. This allows for seamless integration with other software tools and platforms used in 3D visualization, virtual reality, and robotics.
- Scalability and Performance: Reality Capture is designed to handle large-scale datasets efficiently. It leverages advanced processing techniques and can utilize multiple computing resources, such as GPUs, to accelerate the reconstruction process and achieve high-performance results.
- User-Friendly Interface: The software provides an intuitive and user-friendly interface, making it accessible to both experts and beginners. It offers a range of automated tools and guided workflows that streamline the reconstruction process and make it easier to achieve high-quality results.
Overall, Reality Capture is a powerful and versatile tool for photogrammetry and 3D model reconstruction. Its advanced capabilities, automation features, and scalability make it a popular choice among professionals in industries such as architecture, archaeology, gaming, visual effects, and robotics.
Unreal Engine is a popular and powerful real-time 3D engine and development platform created by Epic Games. It provides a comprehensive suite of tools and features for creating interactive and immersive experiences across various platforms, including video games, virtual reality (VR), augmented reality (AR), architectural visualization, and more. Here are some key aspects and features of Unreal Engine:
- Real-time Rendering: Unreal Engine offers advanced real-time rendering capabilities, allowing developers to create stunning and realistic visuals. It supports high-quality materials, lighting, shadows, and post-processing effects to achieve lifelike graphics.
- Blueprints Visual Scripting: Unreal Engine includes a visual scripting system called Blueprints, which enables non-programmers to create gameplay mechanics, interactive elements, and complex behaviors using a node-based interface. It provides a user-friendly way to create game logic without writing code.
- Content Creation Tools: The engine provides a range of powerful tools for content creation, including a robust editor for designing levels and environments, a material editor for creating and editing materials, a particle system for visual effects, and a physics engine for realistic simulations.
- Cross-platform Development: Unreal Engine supports cross-platform development, allowing developers to create applications for PC, consoles, mobile devices, VR headsets, and AR platforms. It provides built-in support for major platforms, simplifying the process of deploying projects to different devices.
- Marketplace and Community: Unreal Engine has a thriving marketplace where developers can access a wide range of assets, including 3D models, materials, animations, and plugins, to enhance their projects. The engine also has a strong community of developers who actively share knowledge, provide support, and contribute to the growth of the engine.
- Performance and Optimization: Unreal Engine offers various optimization features and techniques to ensure high-performance execution. This includes tools for profiling and debugging, GPU and CPU optimization, and efficient memory management.
- Industry Adoption: Unreal Engine is widely adopted in the gaming industry and beyond. It has been used to develop highly acclaimed games, VR experiences, animated films, architectural visualizations, and training simulations. Its versatility and feature set make it a popular choice for a wide range of projects.
Unreal Engine continues to evolve and introduce new features with regular updates, empowering developers to create cutting-edge and immersive experiences across multiple industries.
For this project we used Unreal Engine version 4.25.4.
The project of Photogrammetry and 3D Model Reconstruction in terms of Virtual Reality for Robotics aims to leverage the power of photogrammetry and 3D model reconstruction techniques in the context of virtual reality (VR) to enhance robotics applications. The project involves capturing images of the real-world environment using cameras or other imaging devices and using photogrammetry algorithms to reconstruct a detailed and accurate 3D model of the scene.
The 3D model, along with the captured images, is then integrated into a virtual reality environment, creating a realistic and immersive virtual representation of the physical space. This virtual environment serves as a simulation platform where robotic systems can be tested, trained, and interacted with.
The project focuses on developing advanced techniques to handle the challenges specific to robotics applications, such as real-time reconstruction, dynamic scene handling, and accurate object tracking. It explores the integration of multiple sensors, such as RGB cameras, depth sensors, and inertial measurement units (IMUs), to improve the accuracy and reliability of the 3D reconstructions.
In addition, the project aims to develop interactive user interfaces and control mechanisms within the virtual reality environment, enabling users to manipulate virtual objects, control robotic systems, and receive haptic feedback. The integration of robotics frameworks, such as robot simulators or control systems, with the virtual environment is also a key aspect of the project.
The ultimate goal of the project is to create a seamless integration between the physical and virtual worlds, allowing robotic systems to perceive and interact with the virtual environment in a realistic and intuitive manner. This integration opens up possibilities for various robotics applications, including robot training, task planning, autonomous navigation, and human-robot interaction.
Through this project, the potential of photogrammetry and 3D model reconstruction in the context of virtual reality is harnessed to advance the field of robotics and pave the way for more sophisticated and immersive robotic applications in diverse industries.
- First, we will go through the data acquisition setup and the data processing steps, showing the workflow with a paid option (RealityCapture) and open-source software (Meshroom).
Step 1: Data Acquisition: The necessary equipment is easy to come by. Only a camera is needed (your smartphone works), and a tripod can be used for more stability and control, but it is possible to work without one. The camera used is a Canon EOS 50D with a Canon Ultrasonic lens. The photos are saved in JPG format at 4752x3168 pixels. Before starting the acquisition, we take a few photos to determine which settings should be used. In our experiments, we set the camera parameters as follows:
- Focal length: a fixed focal length of 28 mm, giving a good balance between distance to the subject and workable area;
- ISO: 400, which is a good value to start with, but if your scene appears too dark, you can push this value up to 1600 (at your own risk 😁);
- Aperture: f/3.5, which lets enough light in at the cost of a limited depth of field; in our case, as we focus only on a tiny object, this is fine;
- Shutter speed: 1/8 s (125 ms), which our tripod allows thanks to the added stability; without a tripod, anything slower than 1/200 s is very tricky;
- White balance: 4000 K, adapted to our scene's ambient lighting.
The above parameters remain the same throughout the acquisition process.
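As a quick sanity check on these settings, the standard ISO-adjusted exposure-value formula EV = log2(N²/t) − log2(S/100) can be evaluated in a few lines of Python; the "dim indoor" reading in the comment is only indicative:

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: int) -> float:
    """ISO-adjusted exposure value: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# Settings used above: f/3.5, 1/8 s, ISO 400.
ev = exposure_value(3.5, 1 / 8, 400)
print(f"EV = {ev:.2f}")  # ~4.61, i.e. fairly dim, indoor-level lighting
```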
The acquisition strategy in itself is straightforward. We place the camera on the tripod and adjust the height and the angle so that the object completely fits in the field of view. In this case, as the object is relatively low, we can give a downward angle to the camera so that we can also shoot the top part. We use autofocus to ensure the photos won’t be blurry, then swap to manual mode.
We then move the tripod along a circular trajectory all around the object. We need to remain at a constant distance from the object so that the focus stays right. We check the photos as we go. Once we have completed the first circle around the object, we shorten the tripod's legs and reduce the downward angle of the camera. We adjust the focus and then swap to manual mode again. We take a few more photos all around the subject.
Finally, we remove the camera from the tripod and take a vertical photo of the complete object. To do this, we adjust the focus again. Then, we take detailed photos to ensure that the texture will be fine on all parts of the object. The idea is basically to take enough photos to allow a complete and detailed reconstruction, while keeping in mind that an unnecessarily large number of inputs will only slow down and complicate the process. Ideally, for such an object, you have 25 pictures (8 horizontal positions multiplied by three height layers, plus one on top).
Step 2: Photogrammetry Processing:
The photos are first sorted to eliminate images of bad quality (blurry, out of frame, wrong exposure) that would only disturb the next steps.
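This sorting can be partly automated. A common heuristic, sketched below, flags blurry frames via the variance of the Laplacian; the folder name and threshold are assumptions to tune per camera and scene:

```python
import cv2
from pathlib import Path

BLUR_THRESHOLD = 100.0  # assumption: tune per camera/scene

def is_sharp(image_path: str, threshold: float = BLUR_THRESHOLD) -> bool:
    """Flag an image as sharp if the variance of its Laplacian exceeds a threshold."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var() > threshold

for photo in sorted(Path("photos").glob("*.jpg")):  # hypothetical folder
    if not is_sharp(str(photo)):
        print(f"discard (blurry): {photo.name}")
```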
Now that the test bench is set up, let us dive into the details of 3D photogrammetry processing through a workflow with RealityCapture.
Option: Reality Capture
The choice of RealityCapture is pretty straightforward: we can do everything for free, as long as nothing is exported, so there is nothing better for showcasing state-of-the-art professional software. For this project, however, we used a licensed copy of RealityCapture provided by Prof. Gianni Viardo Vercelli and Prof. Saverio Iacono.
You can download RealityCapture directly via the link provided.
The first step is to create a new project and import our approximately 1,000 images. This is done by drag-and-drop or by clicking on the "Inputs" icon, as shown in the figure below:
Data Import
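RealityCapture also ships with a command-line interface, so the import-align-reconstruct loop can in principle be scripted. The sketch below is a rough illustration only: the executable path is an assumption, and the exact flag names should be verified against the RealityCapture CLI documentation for your version before use:

```python
import subprocess

# Assumptions: installation path and flag spellings; verify against the
# RealityCapture CLI docs for your version before running.
RC_EXE = r"C:\Program Files\Capturing Reality\RealityCapture\RealityCapture.exe"

subprocess.run([
    RC_EXE,
    "-addFolder", r"C:\project\photos",  # import all images in a folder
    "-align",                            # run alignment (default settings)
    "-calculateNormalModel",             # Normal-detail reconstruction
    "-calculateTexture",                 # texture the mesh
    "-save", r"C:\project\scan.rcproj",  # save the project
    "-quit",
], check=True)
```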
Then, we start the alignment with the first icon in the “Alignment” tab. For this project, we keep the default parameters.
Alignment
For our own dataset of approximately 1,000 images, we changed the alignment settings before starting the process: we set the "Number of features detected per image" and "Number of features detected per mpx" (megapixel) parameters to 100k and set the "Image overlap" to "High".
Alignment Settings
Given the low number of inputs, the alignment is done in only 30 seconds. After this step, we have a component that consists of a point cloud and the estimated positions of the cameras.
We can check that all inputs are aligned and see that the positions of 48 cameras out of 48 are estimated. We also check the alignment report. This report provides the alignment duration, the number of aligned inputs, the point count, and the mean and maximal reprojection error in pixels. It is also possible to retrieve the alignment settings that were used.
Alignment Report
Next, we configure the reconstruction settings before building the model. We start a reconstruction in Normal detail with default parameters, except for the "Maximal vertex count by part", which is set to 3M. In "Normal detail", the "Image downscale" value is 2 by default.
Reconstruction Settings
Lastly, a visual check of the point cloud is done. If everything is fine, we can proceed to the next step, the reconstruction. We keep the default parameters, except for the “Image downscale” factor, which is set to 2.
The reconstruction is started by clicking on the "Normal detail reconstruction" icon in the "Reconstruction" tab. This step takes only five minutes, resulting in a mesh of around 1 million triangular facets. With the "Advanced" selection tool, we select the marginal and large triangles, then filter the selection. We also adjust the reconstruction box to eliminate the parts that belong to the furniture on which the object was placed ("Box" selection tool).
Reconstruction button (Left), Filter Selection tool (Middle) and the possible filtering options (right)
Now, we proceed with the smoothing step. The smoothing is applied to the surfaces and not the edges; 100 iterations of the algorithm are performed, and the weight is set to 0.5. Smoothing finishes in 10 minutes.
Smoothing Tool Settings
These parameters ensure efficient smoothing without losing much detail.
After the aforementioned steps, we build the model; it contains about 141.5 million triangles, the maximum, as shown below.
141.5 million triangles generated on our model
Finally, we simplified the model down to 5 million triangles, as shown in the figures below.
Scale Down Triangles: Step 1
The figure above shows Scale Down Triangles: Step 1, and the figure below shows Scale Down Triangles: Step 2.
Scale Down Triangles: Step 2
Scale Down Triangles: Step 3
The figure above shows the final step for scaling down the triangles in the model.
Another tool that may be used at this point is the Simplify Tool. It can simplify the model to a fixed number of triangles (type: "Absolute") or retain a certain percentage of triangles.
Simplify Tool Settings
The last step is to texture the mesh. To do this, we keep the default parameters and click on the “Texture” button in the “Reconstruction” tab. By default, a downscale factor of 2 is used for texturing.
Texture Menu
In less than an hour (including the acquisition and processing time), we managed to obtain a 3D model of the object that is complete and has high fidelity to the physical object. The maximal reprojection error was set to be lower than 2 pixels, and we obtained a mean error of 0.46 pixels. Certain areas of the model have a better texture than others. In this case, since we have the object at hand and an acquisition setup ready, texture problems can easily be solved by taking complementary photos of the parts with missing information. This shows that 3D photogrammetry with RealityCapture can be very efficient and relatively easy to work with without being an expert. We kept the default parameters and got a satisfying result. Obviously, for more extensive or complex projects, it is essential to understand and adapt the settings of each task to get optimal results.
Importing models from RealityCapture into Unreal Engine involves several steps. Here's a general overview of the process (a scripted alternative is sketched after the list):
- Model Export from Reality Capture:
- In Reality Capture, ensure that your 3D model is complete and processed.
- Export the model in a compatible file format. Unreal Engine supports various formats such as FBX, OBJ, and Alembic.
- Specify the desired export settings, including scale, vertex color preservation, and material information.
- Prepare Unreal Engine Project:
- Launch Unreal Engine and open your project.
- Create a new folder within the Content Browser to store your imported models.
- Import the Model into Unreal Engine:
- In the Content Browser, navigate to the folder where you want to import the model.
- Right-click and select "Import" to open the import dialog.
- Browse for the exported model file and select it.
- Configure the import settings based on your requirements, such as scale, materials, and LOD (Level of Detail) options.
- Click "Import" to begin the import process.
- Configure Material and Texture:
- After the import is complete, locate the imported model in the Content Browser.
- Double-click on the model to open the Static Mesh Editor.
- In the Static Mesh Editor, you can assign materials and textures to the model.
- Create or import the materials and textures you want to use, and then apply them to the corresponding surfaces of the model.
- Place the Model in the Scene:
- Drag and drop the imported model from the Content Browser into the desired location in the scene or level.
- Adjust its position, rotation, and scale as needed.
- Adjust Collision and LODs:
- In the Static Mesh Editor, you can define the collision properties of the model.
- Add or modify collision primitives to accurately represent the shape of the model for collision detection.
- Set up LODs (Level of Detail) if needed, to optimize performance by displaying simplified versions of the model at different distances.
- Test and Optimize:
- Launch the game or simulation within Unreal Engine to test the imported model in the virtual environment.
- Check for any issues, such as incorrect scaling, misplaced textures, or performance concerns.
- Optimize the model if necessary by simplifying geometry, reducing texture resolution, or adjusting LOD settings to ensure smooth performance.
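As referenced above, the import can also be scripted with Unreal's Python editor scripting (available in UE 4.25 via the Python Editor Script Plugin). Here is a minimal sketch; the FBX path and the /Game/Photogrammetry content folder are assumptions:

```python
import unreal

# Assumptions: the FBX exported from RealityCapture lives at this path,
# and the project has a /Game/Photogrammetry content folder.
task = unreal.AssetImportTask()
task.filename = r"C:\exports\scanned_object.fbx"
task.destination_path = "/Game/Photogrammetry"
task.automated = True  # suppress the interactive import dialog
task.save = True       # save the imported asset to disk

options = unreal.FbxImportUI()
options.import_mesh = True
options.import_materials = True
options.import_textures = True
task.options = options

unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
```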
The figure below shows the model imported from RealityCapture into Unreal Engine.
Imported model from Reality Capture to Unreal Engine
In Unreal Engine, we created a drone and navigated it within the same platform. We also imported the reconstructed model obtained from RealityCapture into Unreal Engine to serve as the environment for drone navigation.
Creating a drone in Unreal Engine using Blueprints involves several steps. Here's a detailed breakdown of the process:
- Set up the Project:
- Launch Unreal Engine and create a new Blueprint project.
- Choose a project template or start with a blank project.
- Specify the project settings, including project name and directory.
- Import Drone Assets:
- Find or create 3D models for the drone's body, rotors, and any other components.
- Import the models into Unreal Engine by using the "Import" option in the Content Browser.
- Ensure the models are properly scaled and positioned for later use.
Drone Blueprint
The figure above shows the drone Blueprint.
- Create the Drone Blueprint:
- Open the Blueprint Editor by double-clicking the Blueprint in the Content Browser, or create one by right-clicking and selecting "Create Blueprint Class."
- Choose the desired parent class for the drone. You can start with a Pawn or Character class, depending on your requirements.
- In the Blueprint Editor, you will see the Construction Script, Event Graph, and other sections.
The figure below shows the drone skeleton.
Drone Skeleton
Next, we attach a camera to our drone, as shown in the figure below.
Camera to Drone Skeleton
Drone Activation
Here, we activate our drone, as shown in the figure above; later, we deactivate it, as shown below.
Drone Deactivation
Next, we attach a "Get Altitude Meters" function to the drone, as shown in the figure below.
Drone: Get Altitude Meters (pure, const)
Finally, we add a "Get Distance Meters" function, as shown in the figure below.
Drone: Get Distance Meters 0 (pure, const)
- Design the Drone's Behavior (an engine-agnostic sketch of the movement update appears after this list):
- Use the Construction Script to set up the initial positioning and attachment of the drone components.
- In the Event Graph, add nodes and script the desired behavior for the drone.
- Use input events (e.g., keyboard or gamepad inputs) to control the drone's movement, such as changing its location, rotation, or velocity.
- Implement logic for drone actions like taking off, landing, hovering, and rotating.
- Add collision detection and response to avoid obstacles or trigger specific events.
- Add Drone Physics:
- Enable physics simulation for the drone by enabling the "Simulate Physics" option in the Details panel.
- Configure the drone's collision properties, such as collision channels, collision responses, and physical materials.
- Adjust the drone's mass, drag, and other physical parameters to mimic realistic flight dynamics.
- Use constraints or physics joints to connect the drone's components, such as the rotors to the body.
- Implement Camera and View:
- Add a camera component to the drone Blueprint to simulate the drone's perspective.
- Configure the camera's position, rotation, field of view, and any other desired settings.
- Set up camera controls to allow the player to switch between different camera views or perspectives.
- Test and Refine:
- Compile and save the drone Blueprint.
- Place an instance of the drone Blueprint in the game world or level.
- Launch the game or simulation to test the drone's behavior and controls.
- Iterate on the design, making adjustments and refinements as needed.
- Test the drone in different scenarios and environments to ensure its functionality and performance.
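As referenced in the behavior step above, the core movement logic is engine-agnostic. Below is a minimal Python sketch of the input-to-acceleration-to-velocity-to-position update; all gains and limits are illustrative assumptions, and in Unreal this logic would be expressed as Blueprint nodes on Event Tick:

```python
from dataclasses import dataclass

@dataclass
class DroneState:
    x: float = 0.0   # position (m)
    y: float = 0.0
    z: float = 0.0
    vx: float = 0.0  # velocity (m/s)
    vy: float = 0.0
    vz: float = 0.0

# Illustrative gains (assumptions); in Unreal this would live in Event Tick.
THRUST_ACCEL = 4.0  # m/s^2 per unit of stick input
DRAG = 0.8          # simple linear drag coefficient

def update(state: DroneState, stick_x: float, stick_y: float,
           throttle: float, dt: float) -> None:
    """One tick: input -> acceleration -> velocity -> position."""
    state.vx += (THRUST_ACCEL * stick_x - DRAG * state.vx) * dt
    state.vy += (THRUST_ACCEL * stick_y - DRAG * state.vy) * dt
    state.vz += (THRUST_ACCEL * throttle - DRAG * state.vz) * dt
    state.x += state.vx * dt
    state.y += state.vy * dt
    state.z = max(0.0, state.z + state.vz * dt)  # clamp to the ground plane

drone = DroneState()
for _ in range(120):  # two seconds at 60 Hz
    update(drone, stick_x=0.0, stick_y=0.0, throttle=1.0, dt=1 / 60)
print(f"altitude after a two-second climb: {drone.z:.2f} m")
```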
Moreover, we set up the environment Blueprint, as shown in the figure below.
Environment setup Blueprint
Remember that this is a general outline of the process, and specific implementation details may vary based on your project requirements and the level of complexity you wish to achieve. The Blueprint system in Unreal Engine offers flexibility, allowing you to customize the drone's behavior and interactions according to your specific needs.
To create a forest environment in Unreal Engine, you'll start by setting up the project and creating the terrain using the engine's terrain tools. Then, you'll place vegetation such as trees and bushes using foliage-painting tools. Apply realistic materials to the terrain and foliage assets to add detail and variation. Set up appropriate lighting and atmospheric effects to enhance the visual appeal. Add sound effects to create an immersive auditory experience. The figure below shows the forest environment in Unreal Engine.
Forest Environment Angle1
Include additional details like rocks and streams, and incorporate interactive elements for engagement. Test and optimize the scene to ensure optimal performance. By following these steps, you can quickly create a captivating and realistic forest environment in Unreal Engine. The figure below shows the forest environment viewed from a different angle in Unreal Engine.
Forest Environment Angle2
In conclusion, creating a forest environment in Unreal Engine involves setting up the project, designing the terrain, placing vegetation, applying realistic materials, setting up lighting and atmospheric effects, adding sound effects, incorporating small details, and optimizing the scene for performance. By following these steps, you can create a visually stunning and immersive forest environment in Unreal Engine that transports users into a realistic and captivating virtual forest setting.
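Parts of this set dressing can also be scripted with Unreal's editor Python API rather than painted by hand. The sketch below spawns plain static-mesh actors as a stand-in for the foliage tool; the /Game/Foliage/SM_PineTree asset path and the scatter extents are hypothetical:

```python
import random
import unreal

# Assumptions: a tree static mesh exists at this content path, and the
# scatter extents are illustrative values in centimeters.
TREE_ASSET = "/Game/Foliage/SM_PineTree"
tree = unreal.EditorAssetLibrary.load_asset(TREE_ASSET)

random.seed(42)
for _ in range(50):
    location = unreal.Vector(random.uniform(-5000, 5000),
                             random.uniform(-5000, 5000),
                             0.0)
    rotation = unreal.Rotator(0.0, 0.0, random.uniform(0, 360))  # random yaw
    unreal.EditorLevelLibrary.spawn_actor_from_object(tree, location, rotation)
```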
The result of the project on Photogrammetry and 3D Model Reconstruction in terms of Virtual Reality for Robotics is a highly immersive and realistic virtual environment that integrates photogrammetry-based 3D models with virtual reality technology. This integration brings several benefits and outcomes to the field of robotics:
- Accurate and Detailed Virtual Representations: The project enables the creation of precise and detailed 3D models of real-world objects and scenes using photogrammetry techniques. These models capture intricate details and provide an accurate representation of the physical environment. When integrated into the virtual reality environment, they create a highly immersive experience for the users.
- Realistic Training and Simulation: The resulting virtual reality environment allows for realistic training and simulation of robotic systems. Users can interact with virtual representations of robots and perform various tasks, such as manipulating objects, navigating through the virtual space, or testing different scenarios. This provides a safe and cost-effective way to train robot operators, test algorithms, and evaluate system performance.
- Enhanced Visualization and Understanding: The combination of photogrammetry-based 3D models and virtual reality technology offers improved visualization and understanding of the physical environment for robotics applications. Users can explore and analyze the virtual representation of the environment, gaining insights into the spatial layout, object relationships, and potential obstacles. This aids in planning and decision-making processes for robotics tasks.
- Seamless Human-Robot Interaction: The project's outcome facilitates seamless human-robot interaction within the virtual reality environment. Users can intuitively control and communicate with virtual robots, providing commands or performing actions through natural gestures or user interfaces. This enables the development and testing of human-robot interaction algorithms and enhances the overall user experience.
- Robust System Development and Evaluation: The virtual reality environment resulting from the project serves as a valuable platform for the development and evaluation of robotic systems. Researchers and developers can test and refine algorithms, control strategies, and sensor configurations within the virtual environment, prior to real-world implementation. This helps in identifying and addressing potential issues or limitations before deployment.
- Collaborative Robotics Research: The virtual reality environment allows for collaborative research and development in the field of robotics. Multiple users can connect to the same virtual space simultaneously, enabling collaboration, knowledge sharing, and joint experimentation. This fosters interdisciplinary collaboration and accelerates innovation in robotics.
The figure below shows the Content Browser folder obtained at the end of our project.
Content Browser Folder
The figure below depicts the game starting in Unreal Engine.
Game Start
The figure below highlights the drone's camera view, showing the model we imported from RealityCapture.
Drone: Camera View of 3D Model from Reality Capture
The figure below shows the environment from the drone's camera.
Drone Camera: Environment View
Overall, the project's result combines the power of photogrammetry and 3D model reconstruction with virtual reality technology to enhance robotics applications. The outcome is a realistic and immersive virtual environment that facilitates training, simulation, visualization, and interaction with robotic systems. It opens up new possibilities for research, development, and deployment of robotics technologies in various domains, including manufacturing, healthcare, exploration, and more.
Below are the video links of our final project:
For the complete project report, click here.
In conclusion, the project on Photogrammetry and 3D Model Reconstruction in terms of Virtual Reality for Robotics has demonstrated the potential of combining these technologies to advance the field of robotics. By leveraging photogrammetry techniques to create accurate 3D models and integrating them into a virtual reality environment, several significant outcomes have been achieved.
The project has enabled the creation of highly immersive and realistic virtual environments, where robotics applications can be simulated, tested, and trained. The accurate and detailed 3D models reconstructed through photogrammetry techniques have provided a precise representation of the physical environment, enhancing visualization and understanding.
The resulting virtual reality environment has proven to be a valuable platform for realistic training and simulation of robotic systems. It has offered a safe and cost-effective way to train robot operators, test algorithms, and evaluate system performance, reducing the need for physical prototypes and potential risks associated with real-world testing.
Furthermore, the seamless human-robot interaction within the virtual environment has opened up opportunities for intuitive control and communication with virtual robots. This has facilitated the development and testing of human-robot interaction algorithms and enhanced the overall user experience.
The project's outcomes have also supported robust system development and evaluation, providing researchers and developers with a platform to refine algorithms, control strategies, and sensor configurations before real-world implementation. It has helped identify potential issues and limitations, leading to more reliable and efficient robotic systems.
Additionally, the virtual reality environment has encouraged collaborative research and development, allowing multiple users to connect and collaborate within the same virtual space. This collaborative aspect has accelerated innovation in robotics and fostered interdisciplinary collaboration.
Overall, the project's conclusion highlights the potential of combining photogrammetry and 3D model reconstruction with virtual reality for robotics applications. The achieved outcomes pave the way for advancements in training, simulation, visualization, and human-robot interaction, leading to more sophisticated and efficient robotics systems across various industries.
[1] RealityCapture - Photogrammetry Software — Reality Capture.
[2] RealityCapture - Tutorial.
[3] Unreal Engine - Tutorial.
[4] Unreal Engine | The most powerful real-time 3D creation tool.
[5] Florent Poux. The Ultimate Guide to 3D Reconstruction with Photogrammetry. [Accessed 08-Jan-2023].
[6] David Novotny, Georgia Gkioxari, and Shubham Tulsiani. Pushing state-of-the-art in 3D content understanding. February 18, 2019.
[7] Georgios Kordelas, Juan Agapito, Jesús Vegas, and Petros Daras. State-of-the-art algorithms for complete 3D model reconstruction. September 2010.
[8] Malgorzata Kujawinska, Robert Sitnik, Michal Pawlowski, Piotr Garbat, and Marek Wegiel. Three-dimensional data acquisition and processing for virtual reality applications. Proceedings of SPIE - The International Society for Optical Engineering, 4778, June 2002.
[9] Michael Zollhöfer, Patrick Stotko, Andreas Görlitz, Christian Theobalt, Matthias Nießner, Reinhard Klein, and Andreas Kolb. State of the art on 3D reconstruction with RGB-D cameras. Computer Graphics Forum, 37(2):625–652, 2018.