Abstract:
Distraction is among the most common causes of traffic accidents. Although many traffic signs on the road contribute to safety, variable message signs (VMSs) demand special attention from the driver, which itself becomes a source of distraction. Advanced driver assistance systems (ADAS) perceive the environment and assist the driver for comfort or safety. This project aims to develop a prototype VMS reading system using machine learning techniques, which have so far seen little use in this area. The assistant consists of two parts: one that detects the sign on the road and another that extracts its text and converts it to speech. For the first part, a set of images was labeled in PASCAL VOC format through manual annotation, web scraping, and data augmentation. With this dataset, the VMS detection model was trained: a RetinaNet with a ResNet50 backbone pretrained on the COCO dataset. In the reading stage, the images were first preprocessed and binarized to achieve the best possible quality. Finally, the text was extracted with the Tesseract OCR engine, version 4.0, and the speech was...