Good first contributions
Looking for a place to start? Here is a list of projects you are welcome to contribute to:
Legend: good first contribution | ⭐ special | challenging
https://github.com/alicevision/meshroom/issues?q=is%3Aissue+is%3Aopen+label%3A%22feature+request%22
- contribute to the https://github.com/alicevision/meshroom-manual (proofreading, feedback, writing your own chapters, ...). Open a new issue in meshroom-manual to discuss the details
- create high-quality illustrations for the documentation or website
- contribute to the sensor database
- (once the documentation is finalized: translation)
- share your own ideas
#461 (pull request: https://github.com/alicevision/meshroom/pull/867)
Manual addition of EXIF data within Meshroom would be a good first contribution, with options for selecting one, several, or all images to apply the data to. https://github.com/alicevision/meshroom/issues/642
Maybe add support for ExifTool?
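As a rough illustration (not Meshroom code), here is a minimal sketch of writing camera metadata into selected JPEGs using the third-party piexif package; the function name and the chosen tags are assumptions.

```python
# Hypothetical helper, assuming the third-party "piexif" package is installed.
import piexif

def add_exif(image_paths, make, model, focal_length_mm):
    """Write Make/Model/FocalLength EXIF tags into each JPEG in image_paths."""
    for path in image_paths:
        exif_dict = piexif.load(path)
        exif_dict["0th"][piexif.ImageIFD.Make] = make.encode()
        exif_dict["0th"][piexif.ImageIFD.Model] = model.encode()
        # FocalLength is stored as a rational: (numerator, denominator)
        exif_dict["Exif"][piexif.ExifIFD.FocalLength] = (int(focal_length_mm * 100), 100)
        piexif.insert(piexif.dump(exif_dict), path)

# Apply to one, several, or all images selected in the UI:
add_exif(["IMG_0001.jpg", "IMG_0002.jpg"], "Canon", "EOS 70D", 35.0)
```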
Show a popup when a corrupted image is detected. https://github.com/alicevision/meshroom/issues/630 https://github.com/alicevision/meshroom/wiki/Images-cannot-be-imported https://github.com/alicevision/meshroom/pull/635
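A minimal sketch of such a pre-import check, assuming Pillow is available (Meshroom itself relies on OpenImageIO, so this is only illustrative):

```python
from PIL import Image

def find_corrupted(image_paths):
    """Return the subset of image_paths that cannot be decoded."""
    corrupted = []
    for path in image_paths:
        try:
            with Image.open(path) as img:
                img.verify()  # raises if the file is truncated or not a valid image
        except Exception:
            corrupted.append(path)
    return corrupted

bad = find_corrupted(["IMG_0001.jpg", "IMG_0002.jpg"])
if bad:
    print("Could not import:", bad)  # a real UI would list these files in a popup
```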
Notification sounds for success and error. Details: #268
Change absolute paths to relative paths within Meshroom. Same for the input and output paths of the nodes. #472
https://github.com/alicevision/meshroom/issues/864 Details: #473
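The core of the conversion could be as simple as the sketch below; how the project traverses its file attributes is an assumption, not the actual Meshroom graph API.

```python
import os

def to_relative(path, project_file):
    """Express an absolute file path relative to the directory of the .mg project file."""
    project_dir = os.path.dirname(os.path.abspath(project_file))
    return os.path.relpath(os.path.abspath(path), start=project_dir)

print(to_relative("/data/scan/images/IMG_0001.jpg", "/data/scan/project.mg"))
# -> "images/IMG_0001.jpg"
```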
hide "advanced" values for choice param in Meshroom. Example to hide: sift_ocv This could work well with alicevision/meshroom#676 (comment)
We could use the existing Lensfun database to get distortion parameters for over 1000 lenses. The database is licensed under CC BY-SA 3.0 and could be integrated similarly to the sensor database. https://github.com/lensfun/lensfun https://wilson.bronger.org/lensfun_coverage.html (dropped in 2016: https://github.com/alicevision/AliceVision/issues/10)
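A quick sketch of a database lookup via the third-party lensfunpy bindings (the camera and lens names are placeholders):

```python
import lensfunpy

db = lensfunpy.Database()
cam = db.find_cameras("NIKON CORPORATION", "NIKON D3S")[0]
lens = db.find_lenses(cam, "Nikon", "Nikkor 28mm f/2.8D AF")[0]

# Lensfun ships calibrated distortion models per focal length, which could
# seed the camera intrinsics instead of estimating distortion from scratch.
print(cam.crop_factor, lens.min_focal, lens.max_focal)
```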
WIP https://github.com/alicevision/meshroom/issues/774
Meshroom has its own remeshing/simplification nodes; however, Instant Meshes is more specialized (a possible wrapper node is sketched after the links below).
https://github.com/wjakob/instant-meshes
Also interesting: http://bakemyscan.org/ https://github.com/norgeotloic/BakeMyScan
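One integration path is a thin CommandLineNode wrapper around the Instant Meshes binary, sketched below; the binary name, flags, and parameter set are assumptions and would need to be checked against the actual CLI.

```python
# Hypothetical Meshroom plugin node; verify the flags against the Instant Meshes CLI.
from meshroom.core import desc

class InstantMeshes(desc.CommandLineNode):
    commandLine = 'InstantMeshes {inputValue} -o {outputValue} -f {faceCountValue}'

    inputs = [
        desc.File(name='input', label='Input Mesh',
                  description='Mesh to remesh (OBJ/PLY).', value='', uid=[0]),
        desc.IntParam(name='faceCount', label='Face Count',
                      description='Target face count of the remeshed output.',
                      value=20000, range=(1000, 2000000, 1000), uid=[0]),
    ]
    outputs = [
        desc.File(name='output', label='Output Mesh',
                  description='Remeshed mesh.',
                  value=desc.Node.internalFolder + 'mesh.obj', uid=[]),
    ]
```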
(and remaining processing time, WIP: https://github.com/alicevision/meshroom/pull/842) Details: #426 https://github.com/alicevision/meshroom/pull/778
https://github.com/alicevision/meshroom/issues/1354
RAM, CPU, GPU usage
Add overheat protection for local computations (pause computation for cooldown)
IMPLEMENTED https://github.com/alicevision/meshroom/pull/712
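For the overheat-protection idea above, a rough sketch of a monitoring loop using psutil; the thresholds and the pause/resume hooks are assumptions, not existing Meshroom API.

```python
import psutil

PAUSE_AT, RESUME_AT = 90.0, 75.0  # degrees Celsius

def hottest_sensor():
    temps = psutil.sensors_temperatures()  # not available on every platform
    readings = [t.current for entries in temps.values() for t in entries]
    return max(readings) if readings else 0.0

def throttle(is_paused, pause, resume):
    """Call periodically; pause local computation above PAUSE_AT, resume below RESUME_AT."""
    t = hottest_sensor()
    if not is_paused and t >= PAUSE_AT:
        pause()   # e.g. suspend the worker process of the running chunk
        return True
    if is_paused and t <= RESUME_AT:
        resume()
        return False
    return is_paused
```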
- view of an image with matched features visible/highlighted #514
Show which cameras have contributed to a single point #512
When adding mp4 files to Meshroom, the KeyframeSelection node is added to the graph. Once the node has been processed, the images are not automatically added as input. It should also be possible to use Augment Reconstruction in some cases. Needs to be compatible with external computation. https://github.com/alicevision/meshroom/issues/232#issuecomment-540688506
#494 #450 View, correct, and create tie points https://groups.google.com/forum/#!topic/alicevision/DoVpPYzCQz0
(WIP, internal) It would be great to improve edges on meshes.
https://github.com/manhofer/Line3Dpp (MPL2) ⭐ (Many thanks to manhofer for changing the license)
https://github.com/abignoli/EdgeGraph3D (GNU) newer
https://github.com/ySalaun/LineSfM (MPL2&others)
Reconstruction based on optical flow (a minimal optical flow example follows the links below):
https://github.com/alicevision/meshroom/issues/232#issuecomment-593305597
http://lmi.bwh.harvard.edu/papers/pdfs/gunnar/farnebackICPR00.pdf
https://vision.in.tum.de/research/optical_flow_estimation
https://www.mia.uni-saarland.de/Publications/schneevoigt-gcpr14.pdf
https://github.com/ipa-mah/3D-Reconstruction-using-Dense-Optical-Flow
https://github.com/jswulff/pcaflow
https://github.com/menandro/Reconflow
https://github.com/limjiayi/RECONSTRUCT
https://github.com/menandro/opensor
https://github.com/menandro/sor
https://www.cvl.iis.u-tokyo.ac.jp/data/uploads/papers/Menandro_OpticalFlow_WACV2018.pdf
https://docs.opencv.org/3.4/d4/dee/tutorial_optical_flow.html
https://nanonets.com/blog/optical-flow/
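As a starting point, the dense flow itself is easy to compute with OpenCV (Farneback's method, as in the paper linked above); turning flow fields between consecutive frames into a 3D reconstruction is the actual research task.

```python
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow between two consecutive frames (Farneback).
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean displacement (px):", float(np.mean(magnitude)))
```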
Show images as 2D planes in the 3D Viewer; project images from the camera position onto the object. #358
Combine point clouds using PCL https://github.com/alicevision/AliceVision/pull/425
Point cloud density, similar to https://www.cloudcompare.org/doc/wiki/index.php?title=Density https://github.com/CloudCompare/CloudCompare. PCL could also be used: http://www.pcl-users.org/Extract-Point-Density-td4045275.html
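A minimal sketch of a radius-based density measure (as CloudCompare's Density tool computes), using a k-d tree from SciPy; the radius and input format are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

points = np.loadtxt("sparse_cloud.xyz")[:, :3]  # assumed Nx3 XYZ text export
tree = cKDTree(points)
radius = 0.05  # depends on the scene scale

# Number of neighbours within `radius` around each point (excluding the point itself).
counts = np.array([len(idx) - 1 for idx in tree.query_ball_point(points, r=radius)])
density = counts / ((4.0 / 3.0) * np.pi * radius ** 3)  # points per unit volume
print("median density:", float(np.median(density)))
```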
Provide a node to compute a dense point cloud from the sparse point cloud, or implement another solution for dense reconstruction
- also provided by some SLAM tools https://github.com/alicevision/meshroom/issues/266
Implementation as part of COLMAP https://github.com/mitjap/pwmvs https://demuc.de/papers/schoenberger2016mvs.pdf
Provide a tool to correct the white balance of the images for a better final texture, similar to https://lesterbanks.com/2018/12/easycorrect-nuke-gizmo-adjust-images/
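As one possible approach, a simple gray-world correction is sketched below; a production node would rather work in linear color space on the raw/EXR inputs.

```python
import cv2
import numpy as np

img = cv2.imread("IMG_0001.jpg").astype(np.float32)
b, g, r = cv2.split(img)
gray = (b.mean() + g.mean() + r.mean()) / 3.0  # gray-world assumption

balanced = cv2.merge([
    np.clip(b * gray / b.mean(), 0, 255),
    np.clip(g * gray / g.mean(), 0, 255),
    np.clip(r * gray / r.mean(), 0, 255),
]).astype(np.uint8)
cv2.imwrite("IMG_0001_wb.jpg", balanced)
```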
Similar to the 2D features viewer, but in 3D (comparable to the depth map viewer).
https://github.com/alicevision/meshroom/issues/256
https://github.com/alicevision/meshroom/issues/545
https://github.com/rogermm14/rec3D http://openaccess.thecvf.com/content_cvpr_2016_workshops/w14/papers/Muratov_3DCapture_3D_Reconstruction_CVPR_2016_paper.pdf
Gyroscope sensor data logger for mobile devices (txt/csv) Example apps: https://github.com/sztyler/sensordatacollector https://github.com/tyrex-team/senslogs https://github.com/e-lab/VideoSensors
OpenVR-Tracking-Example: A small C++ example on how to access OpenVR tracking data and controller states using the IVRInput system https://github.com/Omnifinity/OpenVR-Tracking-Example
vive-diy-position-sensor: Code & schematics for position tracking sensor using HTC Vive's Lighthouse system and a Teensy board. https://github.com/ashtuchkin/vive-diy-position-sensor
HTC Vive Tracker Node for ROS https://github.com/moon-wreckers/vive_tracker
WIP https://vimeo.com/411076799
https://github.com/alicevision/meshroom/issues/928
https://en.wikipedia.org/wiki/Photometric_stereo
#246 https://github.com/natowi/orthoimage_software_collection/blob/master/README.md
Get camera position and path from video footage, as an alternative to CCTags but without markers. Examples: https://www.youtube.com/watch?v=F3OFzsaPtvI https://www.youtube.com/watch?v=2YnIMfw6bJY https://github.com/uzh-rpg/rpg_svo
(WIP)
Built-in client for Meshroom, similar to Photoscan #357
Use GPS for camera positions and model scaling. Support GPS EXIF tags, including pitch/roll/yaw.
https://github.com/openMVG/openMVG/issues/547
https://groups.google.com/forum/#!topic/alicevision/Pd7kwKxfbes
convert using https://proj.org/about.html https://proj4.org
GDAL + CGAL
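A sketch of the coordinate conversion with pyproj (WGS84 lat/lon/alt from the EXIF GPS tags into metric ECEF coordinates); a UTM zone or a local tangent plane would work just as well.

```python
from pyproj import Transformer

# WGS84 geodetic (EPSG:4979) -> WGS84 geocentric / ECEF (EPSG:4978), in metres.
to_ecef = Transformer.from_crs("EPSG:4979", "EPSG:4978", always_xy=True)

lon, lat, alt = 2.3522, 48.8566, 35.0  # placeholder values read from EXIF GPS tags
x, y, z = to_ecef.transform(lon, lat, alt)
print(x, y, z)  # metric positions usable as priors for camera placement and scaling
```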
There are several software libraries for monocular SLAM that can reconstruct 3D models in real-time while tracking the camera's position. https://github.com/raulmur/ORB_SLAM2 https://github.com/alicevision/meshroom/issues/232
https://github.com/ethz-asl/maplab
#188 WIP & first DEV version: https://github.com/alicevision/meshroom/issues/566#issuecomment-527822623
DeepLab-based masking (challenging) https://groups.google.com/forum/#!topic/alicevision/GvX1rDMI3oQ
Software list: https://github.com/natowi/masking_tools/blob/master/README.md
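A minimal sketch of DeepLab-based mask generation with torchvision's pretrained DeepLabV3; a Meshroom integration would wrap something like this in a node that writes one mask per input image.

```python
import numpy as np
import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.segmentation.deeplabv3_resnet50(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("IMG_0001.jpg").convert("RGB")
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"][0]
labels = out.argmax(0).numpy()

mask = (labels != 0).astype(np.uint8) * 255  # class 0 is background in the VOC label set
Image.fromarray(mask).save("IMG_0001_mask.png")
```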
(hipcl or https://github.com/hughperkins/coriander) https://www.iwocl.org/wp-content/uploads/iwocl2017-hugh-perkins-cuda-cl.pdf
https://groups.google.com/forum/#!topic/alicevision/HQhqtJjGaQ0 https://groups.google.com/forum/#!searchin/alicevision/project|sort:date/alicevision/CS0Os345kio/APLjJWkDEAAJ
Use neural networks to create depth maps https://github.com/alicevision/meshroom/issues/528#
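For experimentation, a pretrained monocular network such as MiDaS can already produce (relative, non-metric) depth maps via torch.hub, as sketched below; this is only one of several candidate models.

```python
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("IMG_0001.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    prediction = midas(transform(img))

depth = prediction.squeeze().cpu().numpy()  # relative inverse depth, not metric
depth_vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("depth.png", depth_vis)
```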
GeoDesc as a replacement for SIFT
LiXudong found a new algorithm, GeoDesc, that could replace SIFT. GitHub: https://github.com/lzx551402/geodesc
https://groups.google.com/forum/#!topic/alicevision/_5Eo6hqLBS8 There (on GitHub) is an open-source SfM system that is more efficient than the state of the art and surpasses the accuracy of OpenMVG (...) at the same time. The reconstruction system, named i23dMVS, ranks in the top 10 on the Tanks and Temples dataset. Here is the link to the project: https://github.com/AIBluefisher/EGSfM Since GraphSfM is partially based on an early version of OpenMVG and licensed under BSD 3-Clause, it should be possible to include this approach in Meshroom to accelerate large-scale reconstructions.
Polarimetric Multi-View Stereo http://alumni.media.mit.edu/~shiboxin/files/Cui_CVPR17.pdf