# Race Detection with RetinaNet

This project fine-tunes a pretrained RetinaNet model to detect and classify faces by race: it draws a bounding box around each face and labels it as Mongoloid, Negroid, or Caucasian. RetinaNet is a state-of-the-art one-stage object detector. If you want to get a basic intuition about it, check https://github.com/zuruoke/Race-Detection-with-RetinaNet — I bet it's pretty intuitive.

The dataset contains faces of different races aggregated from LFW and Flickr. Each object (face) in an image is labelled as one of:

- Caucasian: people of American and European descent, also known as whites
- Mongoloid: people of Asian descent, especially East Asian
- Negroid: people of African descent or Black Americans

The workflow for this project is as follows:

- Use labelImg to annotate (label and specify the bounding-box coordinates of) every object in each image, in Pascal VOC format
- Run a Python script to convert the Pascal VOC annotations (XML) to CSV, which is the format RetinaNet expects
- Load the pretrained RetinaNet from keras-retinanet along with all its dependencies, and navigate to the main file directory
- Train the pretrained RetinaNet with a specified backbone (I used ResNet50) and save the learned parameters after each epoch
- Convert the saved model to an inference model to test on unseen data

All of this was done using TensorFlow 2.x.

## Motivation

Conventionally, neural networks (NNs) are fed images with different shapes, textures, and dissimilar features, and they do well at learning each mapping or vector space relative to the others. That is, NNs are very good at learning mappings of a nuanced distribution. But what about feeding a neural network images of the same shape with broadly similar features? How will the network learn this less nuanced distribution?
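The training and conversion steps of the workflow can be run through keras-retinanet's command-line tools. The snippet below is a sketch, not the exact commands used here: the annotation/class file names, epoch count, and snapshot paths are assumptions.

```shell
# Train with a ResNet50 backbone on CSV annotations;
# a snapshot of the weights is saved after each epoch.
retinanet-train --backbone resnet50 --epochs 50 \
    --snapshot-path snapshots \
    csv annotations.csv classes.csv

# Convert a training snapshot into an inference model
# (adds the box-decoding and NMS layers) for testing on unseen data.
retinanet-convert-model snapshots/resnet50_csv_50.h5 inference_model.h5
```

Here `classes.csv` maps each class name to an integer id (e.g. `Caucasian,0`), and the converted `inference_model.h5` is the one you load for prediction.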
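The Pascal VOC → CSV conversion step in the workflow above can be sketched like this. It is a minimal stand-alone script, assuming keras-retinanet's CSV annotation format of one row per box (`path,x1,y1,x2,y2,class_name`); the directory layout and file names are illustrative, not taken from this repository.

```python
import csv
import os
import xml.etree.ElementTree as ET


def voc_to_csv(xml_dir, csv_path):
    """Convert a directory of Pascal VOC XML annotation files into a
    single CSV with rows of the form: path,x1,y1,x2,y2,class_name."""
    rows = []
    for name in sorted(os.listdir(xml_dir)):
        if not name.endswith(".xml"):
            continue
        root = ET.parse(os.path.join(xml_dir, name)).getroot()
        filename = root.findtext("filename")
        # One VOC file can hold several <object> entries (several faces).
        for obj in root.iter("object"):
            box = obj.find("bndbox")
            rows.append([
                filename,
                box.findtext("xmin"), box.findtext("ymin"),
                box.findtext("xmax"), box.findtext("ymax"),
                obj.findtext("name"),  # the class label, e.g. "Caucasian"
            ])
    with open(csv_path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
    return rows
```

For example, `voc_to_csv("annotations/", "annotations.csv")` would turn every labelImg-produced XML file in `annotations/` into one CSV ready for training.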