BrightAnnotate

I led a team of 7+ engineers in developing a semi-automatic annotation engine for the automatic labelling of objects in an autonomous navigation setting.

The main objective was the research and development of detection and segmentation models that push the state of the art in model accuracy.

The annotation engine can be used to generate the labelled datasets needed for developing the perception stack of our Level 3 autonomous vehicle prototype. I utilized diverse deep learning models depending on the type of input data, whether fused camera and Lidar data or camera-only data. The project was delivered across multiple components, namely: Road Model, Traffic Participants, Traffic Control, and Static Environment Detection. I also worked on Lidar and camera synchronization and interpolation for accurate detection of objects. Technically, I was mainly responsible for laying out the software architecture and integrating the components into the ROS environment, developing the module that associates vehicles to lanes, and providing technical leadership on the models used across all components.
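The synchronization idea can be sketched as interpolating the slower sensor's measurements to each camera frame's timestamp. This is a minimal illustration only: the function names are my own, and a real pipeline would interpolate full poses or point-cloud transforms (with slerp for rotations), not the scalar values used here to keep the sketch short.

```python
from bisect import bisect_left

def interpolate_at(t, stamps, values):
    """Linearly interpolate a scalar measurement at time t.

    `stamps` must be sorted ascending; `values` are the matching samples.
    Times outside the sampled range clamp to the nearest endpoint.
    """
    i = bisect_left(stamps, t)
    if i == 0:
        return values[0]
    if i == len(stamps):
        return values[-1]
    t0, t1 = stamps[i - 1], stamps[i]
    w = (t - t0) / (t1 - t0)
    return values[i - 1] * (1 - w) + values[i] * w

def sync_to_camera(cam_stamps, lidar_stamps, lidar_values):
    """Estimate a lidar measurement at each camera frame time by interpolation."""
    return [interpolate_at(t, lidar_stamps, lidar_values) for t in cam_stamps]
```

In a ROS environment the same effect is often achieved with approximate-time message synchronization; explicit interpolation like the above becomes useful when timestamps between sensors never coincide exactly.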


BuildingNet: Learning to Label 3D Buildings

project page

The main objective of this project was to automatically segment 3D models of buildings into their labelled parts. I joined this new project at an early stage and worked on filtering and cleaning the candidate 3D building models to be added to the dataset.


Afterwards, once the models were ready, I was responsible for preparing the selected 3D models for upload to MTurk, where they were labelled by manual annotators. I was later in charge of investigating the performance of multi-view methods on the classification task.

Later, we moved on to the segmentation problem, and I re-implemented Caffe models in PyTorch to analyse and evaluate multi-view models on segmenting 3D building models; the results are reported in the paper. I was involved in all aspects of this project, from data cleaning, filtering, and ground-truth label curation to model training and evaluation. The work was published at ICCV.
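Multi-view segmentation methods of this kind typically predict labels in each rendered 2D view and then fuse the per-view predictions back onto the 3D surface. The fusion step can be sketched as a majority vote; the data format below is my own simplification, not the actual BuildingNet pipeline, which projects network outputs onto mesh primitives.

```python
from collections import Counter

def aggregate_view_labels(view_predictions):
    """Fuse per-view label predictions for each 3D primitive by majority vote.

    `view_predictions` maps a primitive id to the list of labels predicted
    for it by the views in which it is visible (hypothetical format).
    Primitives visible in no view are left unlabelled.
    """
    fused = {}
    for prim_id, labels in view_predictions.items():
        if labels:
            fused[prim_id] = Counter(labels).most_common(1)[0][0]
    return fused
```

Confidence-weighted averaging of per-view class scores is a common alternative to the hard vote shown here.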



For more information refer to the project page.

Neural Contours: Learning to Draw Lines from 3D Shapes

The main contribution of the project was a model for converting 3D shapes to sketches. We needed to provide training data for the learning process in the form of contour points, which guided the learning of how to sketch 3D shapes. I was in charge of generating contour points using a legacy C++ codebase, parts of which I refactored for this application. I also tuned the code's parameters to yield the contour points best suited for the deep learning model. Finally, I created the webpage for evaluating the automatically drawn lines against the manual ones. This work was published at CVPR.


For more information refer to the project page.

Valeo Mid Range Radar Sensor

Before starting my master's in 2018, I worked on a Lane Change Assist ECU at Valeo as an embedded software engineer. I was mainly responsible for developing memory management modules and integrating the bootloader software. Later, I was in charge of maintaining the diagnostics and communication stacks and was deeply involved in software architecture decisions. By my second year at Valeo, I was technically mentoring a software team at a sister site during its early setup phase.