Engin Oksuz

Robotics Software Engineer specializing in autonomous systems, perception, and computer vision. Passionate about developing cutting-edge solutions for UGV/UAV applications through sensor fusion, SLAM, and deep learning technologies.


I am a Robotics Software Engineer at HAVELSAN - Robotics and Autonomous Systems, where I develop autonomous systems for Unmanned Ground Vehicles (UGV) and Unmanned Aerial Vehicles (UAV). My work focuses on perception systems, sensor fusion, SLAM, and deep learning applications for autonomous navigation.

Research Interests

My research interests include autonomous navigation, deep reinforcement learning, sensor fusion, computer vision, and developing robust systems for GNSS-denied environments. I am passionate about bridging the gap between research and practical applications in robotics and autonomous systems.

Professional Experience

Currently, I lead the development of UGV BARKAN, an offroad autonomous ground vehicle system.

  • Perception & Computer Vision: Architecting PyTorch-based segmentation systems with SuperGradients, implementing multi-modal sensor fusion (LiDAR-camera), and deploying state-of-the-art deep learning models (DINOv2, SAM, RF-DETR) for real-time object detection and scene understanding in challenging offroad environments
  • Sensor Fusion & Localization: Leading LiDAR-inertial odometry (LIO) SLAM and SuperOdometry implementations with IMU-LiDAR fusion, and designing 3D point cloud processing pipelines using PCL and Open3D. Developed a novel camera-LiDAR-IMU fusion framework with traversability prediction, optimized specifically for offroad terrain analysis
  • System Architecture & Optimization: Designing scalable ROS/ROS2 architectures for autonomous vehicles, optimizing deep learning inference pipelines for NVIDIA Jetson platforms, and implementing efficient sensor processing workflows
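As a small illustration of the LiDAR-camera fusion step described above, the sketch below projects 3D LiDAR points into a camera image with a standard pinhole model. This is a minimal, generic example: the extrinsic transform and intrinsic matrix are placeholder values, not the calibration of any system mentioned here.

```python
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K):
    """Project 3D LiDAR points (N, 3) into pixel coordinates.

    points: (N, 3) array of XYZ points in the LiDAR frame.
    T_cam_lidar: (4, 4) extrinsic transform, LiDAR frame -> camera frame.
    K: (3, 3) camera intrinsic matrix.
    Returns (pixels, depths) for points in front of the camera.
    """
    # Homogeneous coordinates, then transform into the camera frame
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points with positive depth (in front of the camera)
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Pinhole projection: apply intrinsics, then divide by depth
    uv = (K @ pts_cam.T).T
    pixels = uv[:, :2] / uv[:, 2:3]
    return pixels, pts_cam[:, 2]
```

With an identity extrinsic and a focal length of 500 px, a point at (0, 0, 2) in the LiDAR frame lands at the principal point of the image; points behind the camera are discarded before projection.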

I also work on Toyota Corolla Autonomous Vehicle projects, focusing on multi-sensor architecture design and Autoware framework customization for advanced sensing, perception, and path-planning algorithms.

Additionally, I develop UAV systems for GNSS-denied navigation using deep learning-based visual localization algorithms that extract and match features from satellite imagery, achieving robust positioning in GPS-challenging environments.
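The core idea here, registering an onboard view against georeferenced satellite imagery, can be sketched with classical template matching. The snippet below uses zero-mean normalized cross-correlation as a stand-in for the learned feature extraction and matching described above; the function name and image sizes are purely illustrative, not part of the actual system.

```python
import numpy as np

def locate_in_satellite(uav_patch, sat_img):
    """Exhaustively slide the UAV patch over a satellite tile and
    return the (row, col) offset with the best zero-mean NCC score.

    uav_patch: 2D grayscale array (the onboard view).
    sat_img: 2D grayscale array (the satellite tile), at least as large.
    """
    ph, pw = uav_patch.shape
    patch = uav_patch.astype(float)
    patch = patch - patch.mean()

    best_score, best_offset = -np.inf, (0, 0)
    for r in range(sat_img.shape[0] - ph + 1):
        for c in range(sat_img.shape[1] - pw + 1):
            win = sat_img[r:r + ph, c:c + pw].astype(float)
            win = win - win.mean()
            # Normalized correlation: 1.0 means a perfect match
            denom = np.linalg.norm(win) * np.linalg.norm(patch)
            score = float(np.sum(win * patch) / denom) if denom > 0 else 0.0
            if score > best_score:
                best_score, best_offset = score, (r, c)
    return best_offset, best_score
```

In practice this brute-force search is far too slow and brittle for real imagery, which is exactly why learned features and robust matching are used instead; the sketch only conveys the registration idea.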

I actively follow the latest developments in autonomous systems research, regularly reviewing papers on perception algorithms, sensor fusion techniques, and system architectures from conferences like CVPR, ICRA, and IROS. This allows me to integrate cutting-edge approaches into production systems and stay at the forefront of technological advancements in robotics and autonomous navigation.

Technical Skills

Programming: C++, Python, Bash
Frameworks: PyTorch, OpenCV, PCL, Open3D, SuperGradients, DINOv2, SAM, RF-DETR, YOLO, Autoware.AI/Universe, MAVROS
Tools: Docker, ROS1/ROS2, ONNX Runtime, TensorRT, NVIDIA Jetson, Ubuntu Linux, Git, GitHub, GitLab
Specializations: Semantic Segmentation, Object Detection, Sensor Fusion, SLAM, Collision Avoidance, Terrain Analysis
Sensors: LiDAR (Velodyne, Robosense), Cameras (Lucid, ZED), IMU/GPS (Applanix, SBG), RADAR (SmartMicro)
Hardware Platforms: NVIDIA Jetson AGX/NX, Neousys Nuvo
Soft Skills: Research, Technical Communication, Leadership, Team Collaboration


selected publications

  1. A Comparative Performance Analysis of FFT Based and DWT Based Systems for OFDM Systems
    Ali Özen, Engin Oksuz
    In 24th Signal Processing and Communication Application Conference (SIU), 2016
  2. An Investigation Effects over Jammer Signal Excision of Different Spreading Sequences Employed in Spread Spectrum Communication
    Ali Özen, Engin Oksuz
    Journal for New Generation Sciences, 2016
  3. A frequency domain channel equalizer for discrete Wavelet Transform based OFDM systems
    Ali Özen, Engin Oksuz
    Journal for New Generation Sciences, 2016