awesome-Monocular-Depth-Estimation

A curated collection of monocular depth estimation papers. Thanks.

- Recent papers 

Statistics: :fire: code is available & stars >= 100  |  :star: citations >= 50


2021

  • [IJCV] Unsupervised Scale-consistent Depth Learning from Video [code]Github stars (Extended version of NeurIPS 2019) [Project webpage] :fire:
  • [CVPR] AdaBins: Depth Estimation Using Adaptive Bins [code]Github stars :fire:
  • [CVPR] PLADE-Net: Towards Pixel-Level Accuracy for Self-Supervised Single-View Depth Estimation with Neural Positional Encoding and Distilled Matting Loss
  • [CVPR] Sparse Auxiliary Networks for Unified Monocular Depth Prediction and Completion
  • [CVPR] Single Image Depth Prediction with Wavelet Decomposition [code]Github stars :fire:
  • [CVPR] The Temporal Opportunist: Self-Supervised Multi-Frame Monocular Depth [code]Github stars :fire:
  • [TIP] Unsupervised Monocular Depth Estimation via Recursive Stereo Distillation
  • [TIP] MLDA-Net: Multi-Level Dual Attention-Based Network for Self-Supervised Monocular Depth Estimation
  • [TIP] Joint Depth and Defocus Estimation From a Single Image Using Physical Consistency
  • [TPAMI] Deep Depth from Uncalibrated Small Motion Clip
  • [TPAMI] DENAO: Monocular Depth Estimation Network With Auxiliary Optical Flow
  • [CVPR] Vision Transformers for Dense Prediction [code]Github stars :fire: (see the inference sketch after this list)
  • [CVPR] S3: Learnable Sparse Signal Superdensity for Guided Depth Estimation
  • [CVPR] Robust Consistent Video Depth Estimation [code]Github stars
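
Several of the 2021 entries ship pretrained models. As one concrete example, the DPT model from Vision Transformers for Dense Prediction can be run on a single image through torch.hub. The sketch below assumes the weights are still published via the intel-isl/MiDaS hub entry point and that "example.jpg" is a placeholder path; treat it as a minimal illustration rather than the authors' reference pipeline.

```python
import cv2
import torch

# Assumption: DPT weights are distributed through the intel-isl/MiDaS torch.hub entry.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.hub.load("intel-isl/MiDaS", "DPT_Large").to(device).eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform

img = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)  # placeholder image path
with torch.no_grad():
    prediction = model(transform(img).to(device))  # (1, H', W') relative inverse depth
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze(1)

inverse_depth = prediction[0].cpu().numpy()  # larger values = closer surfaces
```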

2020

  • [CVPR] 3D Packing for Self-Supervised Monocular Depth Estimation [code]Github stars :fire::star: (see the smoothness-loss sketch after this list)
  • [CVPR] Self-supervised Monocular Trained Depth Estimation using Self-attention and Discrete Disparity Volume [code]Github stars
  • [CVPR] DeFeat-Net: General Monocular Depth via Simultaneous Unsupervised Representation Learning [code]Github stars
  • [CVPR] The Edge of Depth: Explicit Constraints between Segmentation and Depth [code]Github stars
  • [CVPR] Towards Better Generalization: Joint Depth-Pose Learning without PoseNet [code]Github stars :fire:
  • [CVPR] Online Depth Learning against Forgetting in Monocular Videos
  • [CVPR] On the uncertainty of self-supervised monocular depth estimation [code]Github stars :fire:
  • [ECCV] S3Net: Semantic-Aware Self-supervised Depth Estimation with Monocular Videos and Synthetic Data
  • [ECCV] Unsupervised Monocular Depth Estimation for Night-time Images using Adversarial Domain Feature Adaptation
  • [ECCV] Guiding Monocular Depth Estimation Using Depth-Attention Volume
  • [ECCV] P2Net: Patch-match and Plane-regularization for Unsupervised Indoor Depth Estimation [code]Github stars
  • [ECCV] Self-Supervised Monocular Depth Estimation: Solving the Dynamic Object Problem by Semantic Guidance [code]Github stars :fire:
  • [ECCV] Feature-metric Loss for Self-supervised Learning of Depth and Egomotion [code]Github stars :fire:
  • [ECCV] Multi-Loss Rebalancing Algorithm for Monocular Depth Estimation
  • [ECCV] Improving Monocular Depth Estimation by Leveraging Structural Awareness and Complementary Datasets
  • [ECCV] CLIFFNet for Monocular Depth Estimation with Hierarchical Embedding Loss
  • [TIP] Adversarial Learning for Joint Optimization of Depth and Ego-Motion
  • [TIP] An Object Context Integrated Network for Joint Learning of Depth and Optical Flow
  • [TPAMI] Confidence Propagation through CNNs for Guided Sparse Depth Regression
  • [TPAMI] Learning Depth with Convolutional Spatial Propagation Network
  • [TPAMI] Progressive Fusion for Unsupervised Binocular Depth Estimation Using Cycled Networks
  • [TPAMI] Semi-Supervised Adversarial Monocular Depth Estimation
  • [TPAMI] Unsupervised Domain Adaptation for Depth Prediction from Images
  • [NeurIPS] Forget About the LiDAR: Self-Supervised Depth Estimators with MED Probability Volumes [code]Github stars
  • [NeurIPS] Targeted Adversarial Perturbations for Monocular Depth Prediction [code]Github stars
  • [ECCV] Pseudo RGB-D for Self-Improving Monocular SLAM and Depth Prediction
  • [ICLR] DeepV2D: Video to Depth with Differentiable Structure from Motion
  • [ICLR] Semantically-Guided Representation Learning for Self-Supervised Monocular Depth
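
Most of the self-supervised 2020 entries above (e.g. 3D Packing, Feature-metric Loss) regularize the predicted disparity with an edge-aware smoothness term inherited from earlier self-supervised work. The PyTorch sketch below shows the commonly used form on mean-normalized disparity; it is a generic illustration of that regularizer, not any single paper's exact loss.

```python
import torch

def edge_aware_smoothness(disp, img):
    """Edge-aware first-order smoothness on mean-normalized disparity.

    disp: (B, 1, H, W) predicted disparity, img: (B, 3, H, W) color image.
    """
    disp = disp / (disp.mean(dim=(2, 3), keepdim=True) + 1e-7)

    grad_disp_x = (disp[:, :, :, :-1] - disp[:, :, :, 1:]).abs()
    grad_disp_y = (disp[:, :, :-1, :] - disp[:, :, 1:, :]).abs()

    grad_img_x = (img[:, :, :, :-1] - img[:, :, :, 1:]).abs().mean(1, keepdim=True)
    grad_img_y = (img[:, :, :-1, :] - img[:, :, 1:, :]).abs().mean(1, keepdim=True)

    # Down-weight disparity gradients where the image itself has strong edges,
    # so depth discontinuities are only penalized in textureless regions.
    grad_disp_x = grad_disp_x * torch.exp(-grad_img_x)
    grad_disp_y = grad_disp_y * torch.exp(-grad_img_y)

    return grad_disp_x.mean() + grad_disp_y.mean()
```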

2019

  • [ICCV] Digging Into Self-Supervised Monocular Depth Estimation [code]Github stars :fire::star: (its photometric reprojection loss is sketched after this list)
  • [ICCV] Depth from Videos in the Wild: Unsupervised Monocular Depth Learning from Unknown Cameras [code]Github stars :fire::star:
  • [ICCV] Self-Supervised Monocular Depth Hints [code]Github stars :fire:
  • [CVPR] Recurrent Neural Network for (Un-)supervised Learning of Monocular Video Visual Odometry and Depth
  • [CVPR] Neural RGB→D Sensing: Depth and Uncertainty from a Video Camera
  • [CVPRW] Unsupervised Monocular Depth and Ego-motion Learning with Structure and Semantics [code]
  • [3DV] Enhancing Self-Supervised Monocular Depth Estimation with Traditional Visual Odometry
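
The self-supervised entries above, most prominently Digging Into Self-Supervised Monocular Depth Estimation (Monodepth2), train by warping neighboring frames into the target view and penalizing the photometric error, taken as a per-pixel minimum over the candidate source views. The sketch below is a minimal PyTorch rendering of that loss with the usual SSIM/L1 mix (alpha = 0.85); the simplified 3x3 average-pool SSIM is an assumption of this sketch, not a faithful copy of any released code.

```python
import torch
import torch.nn.functional as F

def ssim_dissimilarity(x, y):
    """Simplified SSIM dissimilarity over 3x3 windows (inputs assumed in [0, 1])."""
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    mu_x = F.avg_pool2d(x, 3, 1, padding=1)
    mu_y = F.avg_pool2d(y, 3, 1, padding=1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, padding=1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, padding=1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, padding=1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return torch.clamp((1 - num / den) / 2, 0, 1)

def photometric_error(pred, target, alpha=0.85):
    """Alpha-weighted mix of SSIM dissimilarity and L1, per pixel: (B, 1, H, W)."""
    l1 = (pred - target).abs().mean(1, keepdim=True)
    ssim_term = ssim_dissimilarity(pred, target).mean(1, keepdim=True)
    return alpha * ssim_term + (1 - alpha) * l1

def min_reprojection_loss(reprojected_views, target):
    """Per-pixel minimum over warped source views, then the spatial mean."""
    errors = torch.cat([photometric_error(p, target) for p in reprojected_views], dim=1)
    return errors.min(dim=1, keepdim=True)[0].mean()
```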

2018

  • [CVPR] Deep Ordinal Regression Network for Monocular Depth Estimation [code]Github stars (its spacing-increasing depth discretization is sketched after this list)
  • [CVPR] Structured Attention Guided Convolutional Neural Fields for Monocular Depth Estimation [code]Github stars
  • [ECCV] Look Deeper into Depth: Monocular Depth Estimation with Semantic Booster and Attention-Driven Loss
  • [ECCV] Supervising the new with the old: learning SFM from SFM
  • [CVPR] Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction
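
Deep Ordinal Regression Network (DORN) above casts depth estimation as ordinal classification over depth bins whose edges are uniform in log-depth (spacing-increasing discretization, SID). The NumPy sketch below reproduces the commonly cited form of that discretization; the default range of 1 to 80 meters and 80 bins are illustrative assumptions, not the paper's exact configuration for every dataset.

```python
import numpy as np

def sid_thresholds(alpha=1.0, beta=80.0, num_bins=80):
    """Spacing-increasing discretization: bin edges t_0..t_K uniform in log-depth."""
    i = np.arange(num_bins + 1)
    return np.exp(np.log(alpha) + np.log(beta / alpha) * i / num_bins)

def depth_to_bin(depth, alpha=1.0, beta=80.0, num_bins=80):
    """Map metric depth (meters) to its ordinal bin index under SID."""
    depth = np.clip(depth, alpha, beta)
    idx = np.floor(num_bins * np.log(depth / alpha) / np.log(beta / alpha))
    return np.minimum(idx, num_bins - 1).astype(int)  # keep depth == beta in the last bin
```

With these illustrative defaults, bins near the camera are narrow (2 m falls in bin 12 of 80, a bin roughly 0.1 m wide) while distant depths share much wider bins, matching the larger relative error tolerated at long range.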

2017

License

BSD 3-Clause License

Copyright (c) 2021, 刘朋. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  • Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
