Detection and Recognition of Multi-language Traffic Sign Context by Intelligent Driver Assistance Systems

Document Type: Research Paper

Authors

1 Department of Mechanical Engineering, Pardis Branch, Islamic Azad University

2 Ph.D. Candidate, Mechanical Engineering Department, Shahid Rajaee Teacher Training University, Tehran, Iran

Abstract

This paper concerns the design of a new intelligent driver assistance system based on the detection of traffic signs with Persian text. The primary aim of the system is to help drivers choose their path more precisely with regard to traffic signs. To achieve this goal, a new framework implementing fuzzy logic is used to detect traffic signs in videos captured from a vehicle traveling along a highway. Implementing fuzzy logic in smart systems increases their inference and reasoning capabilities, which results in better decision making under real-time conditions. To detect the text on road signs, a combination of the Canny edge detector and Maximally Stable Extremal Regions (MSER) is used. The MSER algorithm detects regions of an image that differ in properties, such as color or brightness, from their surrounding regions, while the multi-stage Canny algorithm detects a wide range of edges in the acquired images. A morphological mask operator then joins the individual characters for the final stage of text detection in traffic signs. Finally, the detected text is recognized using MATLAB Optical Character Recognition (OCR). The overall accuracy of this new framework in detecting and recognizing text is 90.6%.
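The detection pipeline summarized above (MSER regions, Canny edges, morphological joining of characters, then OCR) can be sketched in MATLAB. This is a minimal illustration only, assuming the Image Processing and Computer Vision Toolboxes; the input file name, the MSER region-area range, and the structuring-element sizes are placeholder assumptions, not the authors' actual settings.

```matlab
% Read one captured video frame and convert it to grayscale.
I = rgb2gray(imread('frame.png'));                 % placeholder input frame

% 1) MSER: find stable regions that differ in intensity from surroundings.
[mserRegions, mserCC] = detectMSERFeatures(I, ...
    'RegionAreaRange', [200 8000]);                % assumed area range
mserMask = false(size(I));
mserMask(vertcat(mserCC.PixelIdxList{:})) = true;

% 2) Canny: multi-stage edge detection over the whole frame.
edgeMask = edge(I, 'canny');

% Keep only MSER regions supported by nearby Canny edges.
textMask = mserMask & imdilate(edgeMask, strel('disk', 1));

% 3) Morphological closing joins individual characters into word blobs.
textMask = imclose(textMask, strel('rectangle', [5 25]));  % assumed mask size

% 4) Run OCR on each candidate text region.
stats = regionprops(textMask, 'BoundingBox');
for k = 1:numel(stats)
    result = ocr(I, stats(k).BoundingBox);
    fprintf('Detected text: %s\n', strtrim(result.Text));
end
```

Note that recognizing Persian script in practice requires OCR language data trained for Persian; MATLAB's `ocr` function accepts a `'Language'` argument for supplying such trained data.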

Keywords


[1] Chen, D., and Odobez, J. M., “Text Detection and Recognition in Images and Video Frames”, Pattern Recognition, Vol. 37, No. 3, pp. 595-608, (2004).
 
[2] Lee, C. W., Jung, K., and Kim, H. J., “Automatic Text Detection and Removal in Video Sequences”, Pattern Recognition Letters, Vol. 24, No. 15, pp. 2607-2623, (2003).
 
[3] Myers, G., Bolles, R., Luong, Q.T., and Herson, J., “Recognition of Text in 3-D Scenes”, Proceedings of the 4th Symposium on Document Image Understanding Technology, Columbia, MD, pp. 23-25, (2001).
 
[4] Chang , S. L.,   Chen, L. S.,  Chung, Y. C.,   and Chen, S. W.,  “Automatic License Plate Recognition”, IEEE Transaction on Intelligent Transportation Systems, Vol. 5, No. 1, pp. 42-53, (2004).
 
[5] Veeraraghavan, H.,  Masoud, O., and Papanikolopoulos, N. P., “Computer Vision Algorithms for Intersection Monitoring”, Transaction on  Intelligent Transportation Systems, Vol. 4, No. 2, pp. 78-89, (2003).
 
[6] Vicen-Bueno, R., Gil-Pita, R., Jarabo-Amores, M.P., and L´opez-Ferreras, F., “Complexity Reduction in Neural Networks Applied to Traffic Sign Recognition”, Proceedings of the 13th European Signal Processing Conference, Antalya, Turkey, September 4-8, (2005).
 
[7] Vicen-Bueno, R., Gil-Pita, R., Rosa-Zurera, M., Utrilla-Manso, M., and Lopez-Ferreras, F., “Multilayer Perceptrons Applied to Traffic Sign Recognition Tasks”, International Work-Conference on Artificial Neural Networks, IWANN 2005: Computational Intelligence and Bioinspired Systems, pp 865-872, (2005).
 
[8] Loy, G., “Fast Shape-based Road Sign Detection for a Driver Assistance System”, IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS(, Sendai, Japan, pp. 70–75, (2004).
 
[9] Paulo, C., and Correia, P., “Automatic Detection and Classification of Traffic Signs”, Eighth International Workshop on, Santorini, Greece, June (2007).
 
[10] Gavrila, D., “Traffic Sign Recognition Revisited”, in DAGM-Symposium, Germany, pp. 86–93, (1999).
 
[11] Brki´c, K., Pinz, A., and ˇSegvi´c, S., “Traffic Sign Detection as A Component of an Automated Traffic Infrastructure Inventory System”, Stainz, Austria, May) 2009).
 
[12] Chen, X., Yang, J., Zhang, J., and Waibel, A., “Automatic Detection of Signs with Affine Transformation”, Proc. Workshop Application Computer Vision (WACV), Orlando, FL, pp. 32–36, (2002).
 
[13] Clark, P., and Mirmehdi, M., “Estimating the Orientation and Recovery of Text Planes in a Single Image”, Proc. 12th British Machine Vision Conf., U.K, Guildford, pp. 421–430, (2001).
 
[14] Chen, X., Yang, J., Zhang, J., and Waibel, A., “Automatic Detection and Recognition of Signs from Natural Scenes”,IEEE Trans. Image Process, Vol. 13, No. 1, pp. 87–99, (2004).
 
[15] Jain, A. K., and Yu, B., “Automatic Text Location in Images and Video Frames”, Pattern Recognition, Vol. 31, No. 12, pp. 2055–2076, (1998).
 
[16] Young, I.T., Gerbrands, J.J., and Van Vlient, L.J., “Fundamentals of Image Processing”, Printed in the Delft University of Technology, Netherlands, (1998).
 
[17] Oruklu, E., Pesty, D., Neveux, J., and Guebey, J.E., “Real-time Traffic Sign Detection and Recognition for in-car driver Assistance Systems”,IEEE 55th International Midwest Symposium on Circuits and Systems (MWSCAS), Boise, ID, USA, pp. 976-979, (2012).
 
[18] Fleyeh, H., “Traffic Signs Recognition by Fuzzy Sets”,IEEE Intelligent Vehicles Symposium, Eindhoven University of Technology, Netherlands, pp. 422-427, (2008).
 
[19] Blackledge, J., “Digital Image Processing Mathematical and Computational Methods”, Horwood Publishing, ISBN: 1-898563-49-7, (2005).
 
[20] Solanki, D.S., and Dixit, G., “Traffic Sign Detection using Feature Based Method”, International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 5, pp. 340-346, (2015).
 
[21] Tuytelaars, T., and Mikolajczyk, K., “Local Invariant Feature Detectors: A Survey”, Foundation and Trends in Computer Graphics and Vision, Vol. 3, No. 3, pp. 177-280, (2007).
 
[22] Chen, H., Tsai, S., Schroth, G., Chen, D., Grzeszczuk, R., and Girod, B., “Robust Text Detection in Netural Image with Edge-enhanced Maximally Stable Extremal Regions”, 18th IEEE International Conference on Image Processing, Brussels, pp. 2609-2612, (2011),
 
[23] Epshtein, B.,  Ofek, E., and Wexler, Y., "Detecting Text in Natural Scenes with Stroke width Transform”, IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, pp. 2963-2970, (2010).