I am a PhD student in the Computer Vision Group at TUM, supervised by Prof. Daniel Cremers, and currently an Autopilot intern at Tesla in Palo Alto. I received a Master's degree in Mathematics for Science and Engineering and a Bachelor's degree in Engineering Science from the Technical University of Munich. I have studied at the Technical University of Munich (TUM), the Swiss Federal Institute of Technology Zürich (ETH), the University of Notre Dame in South Bend (ND), and École Polytechnique in Paris (X). I wrote my Master's thesis at Artisense and did an internship in BMW's autonomous driving division.
My interests include, among others, Computer Vision, Machine Learning, and Autonomous Driving. I enjoy passing on knowledge and have worked as a teaching assistant for close to ten courses, e.g., The Evolution of Motion Estimation and Real-time 3D Reconstruction; Computer Vision II: Multiple View Geometry; and Numerical Treatment of Ordinary Differential Equations. In my free time, I like climbing.
You can send an email to lukas.koestler@tum.de or follow me on Twitter.
Masked Event Modeling: Self-Supervised Pretraining for Event Cameras
Simon Klenk*, David Bonello*, Lukas Koestler*, Daniel Cremers. arXiv 2022.
Masked Event Modeling (MEM) is a self-supervised, BERT-inspired pretraining framework for unlabeled events from any event camera recording. The method outperforms the state of the art on N-ImageNet, N-Cars, and N-Caltech101, increasing the object classification accuracy on N-ImageNet by 7.96%.
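The general recipe behind BERT-style masked pretraining, transferred to event data, can be sketched as follows. This is only a minimal illustration of the idea, not the paper's released implementation; the histogram input format, the patch size, and the `encoder`/`decoder` modules are placeholder assumptions.

```python
import torch

def masked_event_pretraining_step(event_histogram, encoder, decoder, mask_ratio=0.75):
    """One self-supervised step: mask event patches, then reconstruct them.

    event_histogram: (B, C, H, W) tensor of binned events (illustrative input format).
    encoder/decoder: any modules mapping (B, N, D) -> (B, N, D), e.g. small transformers.
    """
    B, C, H, W = event_histogram.shape
    patch = 16
    # Split the histogram into non-overlapping patches: (B, N, C * patch * patch).
    patches = event_histogram.unfold(2, patch, patch).unfold(3, patch, patch)
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch * patch)

    # Randomly mask a large fraction of the patches (zeroed here for simplicity).
    mask = torch.rand(B, patches.shape[1], device=patches.device) < mask_ratio
    visible = patches.masked_fill(mask.unsqueeze(-1), 0.0)

    # Encode the visible content, decode all patches, and penalize the
    # reconstruction error only on the masked patches.
    reconstruction = decoder(encoder(visible))
    return ((reconstruction - patches) ** 2)[mask].mean()
```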
E-NeRF: Neural Radiance Fields from a Moving Event Camera
Simon Klenk, Lukas Koestler, Davide Scaramuzza, Daniel Cremers. RA-L 2023.
E-NeRF shows how to estimate a neural radiance field (NeRF) from a single moving event camera or from an event camera in combination with a standard camera. The proposed method can recover NeRFs during very fast motion and in high dynamic range conditions, where frame-based approaches fail.
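For background, the standard event-generation model (general event-camera background, not the paper's full formulation) states that a pixel u fires an event with polarity p whenever its log-brightness change since the last event at that pixel reaches the contrast threshold C; roughly speaking, a radiance field can then be supervised by comparing differences of rendered log-brightness at pairs of event timestamps against the accumulated event polarities.

```latex
% Standard event-generation model: an event (u, t, p) is triggered when
\log I(\mathbf{u}, t) - \log I(\mathbf{u}, t - \Delta t) = p\, C, \qquad p \in \{-1, +1\},
% where \Delta t is the time since the last event at pixel u and C is the contrast threshold.
```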
Neural Implicit Representations for Physical Parameter Inference from a Single Video
Florian Hofherr, Lukas Koestler, Florian Bernard, Daniel Cremers. WACV 2023.
This work combines neural implicit representations for appearance modeling with neural ODEs for modeling physical phenomena to obtain a dynamic scene representation that can be identified directly from visual observations. The embedded neural ODE has a known parametric form that allows for the identification of interpretable physical parameters.
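The basic pattern of identifying an interpretable physical parameter through a differentiable ODE rollout can be sketched as follows. This is a deliberately simplified example (a pendulum with a learnable length, supervised directly on angles), not the paper's model, which couples the ODE with a neural implicit appearance representation and fits it to a video.

```python
import torch

g = 9.81  # gravitational acceleration

def simulate(length, theta0=0.3, omega0=0.0, dt=0.01, steps=200):
    """Differentiable explicit-Euler rollout of a frictionless pendulum."""
    theta = torch.as_tensor(theta0)
    omega = torch.as_tensor(omega0)
    trajectory = []
    for _ in range(steps):
        omega = omega - (g / length) * torch.sin(theta) * dt
        theta = theta + omega * dt
        trajectory.append(theta)
    return torch.stack(trajectory)

observed = simulate(torch.tensor(0.8))           # synthetic "measurements" (true length 0.8 m)
length = torch.tensor(0.5, requires_grad=True)   # unknown, interpretable physical parameter
optimizer = torch.optim.Adam([length], lr=1e-2)
for _ in range(500):
    optimizer.zero_grad()
    loss = ((simulate(length) - observed) ** 2).mean()
    loss.backward()
    optimizer.step()
print(f"identified pendulum length: {length.item():.3f} m (true: 0.8 m)")
```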
Intrinsic Neural Fields: Learning Functions on Manifolds
Lukas Koestler*, Daniel Grittner*, Michael Moeller, Daniel Cremers, Zorah Lähner. ECCV 2022.
Intrinsic neural fields are a novel and versatile representation for functions on manifolds. They combine the advantages of neural fields with the spectral properties of the Laplace-Beltrami operator.
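The core construction can be summarized as follows (a paraphrase of the idea; the precise architecture and the number of eigenfunctions k are left out): instead of a Euclidean positional encoding, the field evaluates the first k eigenfunctions of the Laplace-Beltrami operator of the manifold at the query point and feeds them into an MLP.

```latex
% Intrinsic embedding via Laplace-Beltrami eigenfunctions \varphi_i on the manifold M:
\Delta_M \varphi_i = \lambda_i \varphi_i, \qquad
f_\theta(p) = \mathrm{MLP}_\theta\bigl(\varphi_1(p), \varphi_2(p), \ldots, \varphi_k(p)\bigr)
\quad \text{for } p \in M .
```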
The Probabilistic Normal Epipolar Constraint for Frame-To-Frame Rotation Optimization under Uncertain Feature Positions
Dominik Muhle*, Lukas Koestler*, Nikolaus Demmel, Florian Bernard, Daniel Cremers. CVPR 2022.
The probabilistic normal epipolar constraint (PNEC) extends the NEC by Kneip et al. by accounting for anisotropic and inhomogeneous uncertainties in the feature positions, which yields more accurate rotation estimates.
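As a rough sketch of the underlying energies (a hedged paraphrase, not the paper's exact derivation): for a correspondence with bearing vectors f_i and f_i' and relative pose (R, t), the epipolar-plane normal n_i = f_i × (R f_i') must be orthogonal to t; the NEC penalizes the algebraic residuals t^T n_i, while the PNEC additionally weights each residual by its variance, obtained by propagating the feature-position covariance into Σ_{n_i}.

```latex
E_{\mathrm{NEC}}(R, t)  = \sum_i \bigl(t^\top n_i\bigr)^2, \qquad
E_{\mathrm{PNEC}}(R, t) = \sum_i \frac{\bigl(t^\top n_i\bigr)^2}{t^\top \Sigma_{n_i} t},
\qquad n_i = f_i \times \bigl(R f_i'\bigr).
```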
TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo
Lukas Koestler*, Nan Yang*, Niclas Zeller, Daniel Cremers. CoRL 2021 and 3DV 2021 Best Demo Award.
TANDEM combines photometric tracking and deep multi-view stereo depth estimation into a monocular dense SLAM algorithm. Using depth maps rendered from the incrementally-built TSDF model improves tracking robustness.
Learning 3D Vehicle Detection without 3D Bounding Box Labels
Lukas Koestler, Nan Yang, Rui Wang, Daniel Cremers. GCPR 2020.
By predicting object meshes and employing differentiable rendering, we define loss functions based on depth maps, segmentation masks, and ego- and object-motion, which are generated by pre-trained, off-the-shelf networks.