Application of Deep Learning To Develop An Autonomous Vehicle
Mohamed Salman, Nitte Meenakshi Institute of Technology,
R Manoj, Nitte Meenakshi Institute of Technology,
Rudresh M, Nitte Meenakshi Institute of Technology,
Mohamed Jaffar Niyaz, Nitte Meenakshi Institute of Technology,
Dr. Nalini N, Professor, Nitte Meenakshi Institute of Technology
Abstract:
Travelling by road is the most common mode of day-to-day transport, but it is also the leading cause of death among all modes of transport. Road accidents are most often a consequence of driver
errors that stem from the casual nature of driving and recklessness. Self-driving cars would effectively
remove the human factors behind road accidents and help bring down the number of fatalities that take place
every day.
The rise of technologies like Artificial Intelligence, Machine Learning and Image Processing has propelled the development of autonomous vehicles. Self-driving vehicles, also known as autonomous or "driverless" cars, are vehicles that never require human intervention to take control of their operation.
Self-driving cars combine sensors and software to control, navigate, and drive the vehicle. The advancements
in the fields of computer vision and artificial intelligence have prompted engineers to use the deep learning
approach and move away from the conventional robotics approach.
This paper demonstrates a working model of a self-driving car that is capable of driving itself in any
environment that it has been trained on. It is also capable of detecting sign boards and traffic signals, and
making appropriate decisions. The car uses a fusion of hardware and software. A camera module mounted
over the top of the car captures images of the environment and efficient algorithms help the car in making
appropriate decisions which keep it centred on the road lanes while also obeying traffic rules.
Thus, the paper proposes an end-to-end solution for an autonomously driven vehicle.
Keywords - Artificial Intelligence, Deep Learning, Self-driving car, Raspberry Pi, Arduino, OpenCV
1. INTRODUCTION
1.1. BACKGROUND:
The World Health Organization launched its Global Status Report on Road Safety for the year 2018[11] and it
stated that the number of annual road traffic deaths had reached a staggering 1.35 million. According to the
Ministry of Road Transport and Highways [12], close to 150,000 deaths take place in India each year due to road accidents; India alone contributes more than 10% of annual road-accident deaths worldwide. Needless to say, these accidents take place more often than not due to human negligence. The root causes of many road accidents are speeding, drunk driving, and the use of cellular devices while driving. This has been the motivation behind the development of autonomously controlled cars. If technology can remove the human factor behind the steering wheel of automobiles, the number of accidents could come down dramatically.
1.5. OBJECTIVES:
1. Build a fully autonomous driving vehicle.
2. Work with Deep Learning and Artificial Neural Networks to train a self-driving car.
3. Achieve optimal results in self-driving car performance.
2. RELATED WORK
T. Do, M. Duong, Q. Dang and M. Le [1] describe how they developed a self-driving car powered by a Raspberry Pi, a miniature version of a full-scale model. They were able to achieve high accuracy using a CNN model, but noted that camera latency slows down the entire system, an issue rooted in the hardware. K. Bimbraw [2] reviews the past technologies that led to the rising interest in autonomous vehicles, surveys the current technologies that govern this field, and observes certain trends in future technologies. He outlines how features such as adaptive cruise control (ACC), introduced by Volvo, brought a new dimension to the technology. According to T. Okuyama, T. Gonsalves and J. Upadhay [3], simulation results show that an autonomous car can learn to drive in simplified environments that resemble real-world scenarios. Learning was performed using a Deep Q Network, which estimates Q values (expected rewards) for the actions available to the self-driving car.
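The Q-value update that a Deep Q Network approximates can be sketched in its tabular form; the states, actions and rewards below are illustrative placeholders, not the ones used in [3]:

```python
# Tabular Q-learning update -- the rule a Deep Q Network approximates
# with a neural net. States, actions and rewards are illustrative.
from collections import defaultdict

ACTIONS = ["left", "straight", "right"]

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Bellman update: move Q(s,a) toward reward + discounted best future value."""
    best_next = max(Q[next_state][a] for a in ACTIONS)
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Toy episode: staying on the lane yields +1, drifting off yields -10.
Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
q_update(Q, "centered", "straight", 1.0, "centered")
q_update(Q, "centered", "left", -10.0, "off_road")
print(Q["centered"]["straight"])  # 0.1 after one positive update
print(Q["centered"]["left"])      # -1.0 after one negative update
```

In a DQN the table `Q` is replaced by a network that maps a camera frame to one Q value per action, but the update target is the same.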
Many distance determination algorithms have been proposed and developed in the last decade [4- 6]. Active
detection systems are widely implemented commercially in vehicles today because of their immunity to
changing ambient light conditions; however, the cost is usually higher than passive systems because they
involve transmitters and receivers. A complete distance measurement system includes two steps: vehicle
detection and distance calculation. Motion-based and appearance-based methods are the two main approaches to vehicle detection [6]. We first apply a combination of the Histogram of Oriented Gradients (HOG) descriptor and Support Vector Machine (SVM) classification for vehicle detection, because this combination has shown promise in many previous works [7].
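The core of the HOG descriptor is the orientation-voting step, which can be sketched in plain Python. A real detector would use an optimized HOG implementation (e.g. OpenCV's) over many cells plus a trained SVM, so this is only an illustration of the idea:

```python
import math

def hog_cell_histogram(cell, bins=9):
    """Histogram of Oriented Gradients for one cell (a 2D list of intensities).
    Each interior pixel's gradient magnitude votes into an orientation bin
    spanning 0-180 degrees, as in the HOG descriptor paired with an SVM."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # horizontal central difference
            gy = cell[y + 1][x] - cell[y - 1][x]   # vertical central difference
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang // (180.0 / bins)) % bins] += mag
    return hist

# A vertical edge: all gradient energy lands in the first (0-20 degree) bin,
# since the gradient points horizontally across the edge.
edge = [[0, 0, 255, 255]] * 4
h = hog_cell_histogram(edge)
print(h[0] > 0 and all(v == 0 for v in h[1:]))  # True
```

The concatenated, block-normalized histograms of all cells form the feature vector that the SVM classifies as vehicle or non-vehicle.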
R. Kulkarni, S. Dhavalikar and S. Bangar [8] describe applications of AI to the detection of traffic lights, sign boards, unclear lane markings and more, obstacles that are overcome using technologies like Deep Learning. The authors propose a model built on a deep neural network that can recognize traffic lights, using the Faster R-CNN Inception V2 model in TensorFlow for transfer learning. J. Kim, Y. Koo and S. Kim [9] propose a Moving Object Detection (MOD) method that combines recognition, ID tracking, detection and classification with sensor fusion to obtain local and global position estimates, pose estimates, and the velocity of surrounding objects in real time at over 15 fps. A Darknet-based deep learning method and a modified detector are used to obtain the local position estimate. M. V. Smolyakov, A. I. Frolov, V. N. Volkov and I. V. Stelmashchuk [10] demonstrate the possibility of training deep neural networks to predict the steering angle from images generated by an emulator, which can easily produce any desired number of images of the vehicle's movement. This approach allows the car to move in automatic mode. Their work explores various CNN architectures in order to obtain good results with a minimum number of parameters.
3. HARDWARE AND SOFTWARE COMPONENTS
2. Pi Camera:
The Pi Camera module is a portable, lightweight camera designed for the Raspberry Pi. It communicates with the Pi using the MIPI Camera Serial Interface (CSI) protocol and is connected through the CSI flex cable. In this project, the camera captures images of the surroundings for further image processing.
3. Arduino Uno:
Arduino is an open-source platform used for building electronics projects. It consists of both a microcontroller board and an IDE (Integrated Development Environment), in which code can be written and uploaded onto the microcontroller [17]. The board used in this project is the Arduino Uno, which controls the actuators of the car.
2. Keras:
Keras is a high-level, open-source Neural Network library written in Python and built on the TensorFlow backend. It supports multiple back-end engines and is designed to be user-friendly, modular and extensible, enabling fast experimentation [19].
4. OpenCV:
OpenCV is an open-source computer vision library that performs image processing on images and videos and supports tasks like facial recognition. It supports languages such as C, C++, Python and Java, and operating systems including Windows, Linux, macOS, iOS and Android. Written in optimized C/C++, the library can take advantage of multi-core processing [22].
5. IMPLEMENTATION
Proper functioning of the car depends on the proper arrangement of the hardware components, as displayed in Figure 1, and on an efficient algorithm that performs the given tasks in the right manner. The car has two main controlling boards: an open-source microcontroller board, the Arduino Uno, and the latest member of the Pi family of computers, the Raspberry Pi 4. The Arduino board interfaces the DC motors with an L298N H-bridge motor driver. The Raspberry Pi interfaces the camera and communicates between the artificial neural network model and the Arduino.
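The paper does not specify the wire format between the Pi and the Arduino, but the communication layer could be sketched as a simple single-byte command scheme; the command names and bytes below are hypothetical, and in a real setup pyserial's `Serial.write()` would carry these bytes to the Arduino:

```python
# Hypothetical single-byte command protocol between the Pi and the Arduino.
# The Arduino sketch would switch on the received byte to drive the L298N.
COMMANDS = {"forward": b"F", "left": b"L", "right": b"R", "stop": b"S"}

def encode_command(direction: str) -> bytes:
    """Translate the neural network's predicted direction into a command byte."""
    try:
        return COMMANDS[direction]
    except KeyError:
        return COMMANDS["stop"]  # fail safe: an unknown prediction stops the car

print(encode_command("left"))     # b'L'
print(encode_command("unknown"))  # b'S'
```

Defaulting to the stop byte on an unrecognized prediction is a deliberate fail-safe choice: a stationary car is the safest response to ambiguous input.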
2. For distance measurement and obstacle detection, an HC-SR04 ultrasonic sensor was placed at the front of the car. The sensor is connected to the Raspberry Pi, determines the distance to an obstacle, and provides accurate results, as shown in Figure 6. It also takes the surface condition into consideration. The car functions normally when there is no obstacle in front of the sensor, and stops when the distance between itself and the obstacle falls to or below a predefined threshold.
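The distance calculation behind the HC-SR04 is simple: the sensor reports the width of the echo pulse, and the distance follows from the speed of sound, halved because the pulse travels out and back. A minimal sketch of this logic, with the 20 cm threshold chosen purely for illustration:

```python
# HC-SR04 geometry: distance = (echo pulse time x speed of sound) / 2,
# because the ultrasonic burst travels to the obstacle and back.
SPEED_OF_SOUND_CM_S = 34300  # ~343 m/s in air at room temperature

def echo_to_distance_cm(pulse_seconds: float) -> float:
    return pulse_seconds * SPEED_OF_SOUND_CM_S / 2

def should_stop(pulse_seconds: float, threshold_cm: float = 20.0) -> bool:
    """Stop when the obstacle is at or inside the threshold distance."""
    return echo_to_distance_cm(pulse_seconds) <= threshold_cm

d = echo_to_distance_cm(0.001)   # a 1 ms round trip
print(17.1 < d < 17.2)           # True (about 17.15 cm)
print(should_stop(0.001))        # True: inside the 20 cm threshold
```

On the actual car, the pulse width would come from timing the sensor's echo pin via the Pi's GPIO; only the arithmetic is shown here.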
Figure 9a: Detection of 'Red' light Figure 9b: Detection of 'Green' light.
Figure 10a: Application Login Page Figure 10b: Home Page
The car can also be controlled remotely via an application named 'Infernus'. A glimpse of the application is shown above (Figures 10a, 10b). It is a desktop/Android application with a set of controls for the car, built using the Kivy framework in Python. Kivy [23] is an open-source Python library for the rapid development of applications with innovative user interfaces. The application has a login page so that only the owner can access it. After login, the home page opens, showing the camera stream and a set of controls for manual operation of the car, including starting and stopping it. A voice-command feature has also been included: the speech recognition, based on Natural Language Processing, uses the 'SpeechRecognition' Python library, which converts spoken words into their equivalent textual form via Google's Speech APIs.
6.1. CONCLUSION:
Through this paper, we have presented a novel approach to building a scaled-down model of a self-driving car. A brief overview of the requirements was specified, and the implementation of the model was discussed. The role of emergent technologies like Machine Learning, Deep Learning, Image Processing and the Internet of Things in the development of a self-driving car was studied. Through the fusion of hardware and software, a successful model was developed, and it functioned as expected in the test scenario. Thus, the model was successfully designed, implemented and tested.
The project completed all the objectives discussed in Section 1.
The paper aimed to produce the following results, which are important from an algorithmic point of view:
1. Implement a system for a remote-controlled car that is automated and can make intuitive decisions for
itself.
2. Combine the different hardware components with the software and the neural network configurations.
3. Illustrate the use of Deep Learning concepts in the field of mobility and transport.
7. REFERENCES
[1] T. Do, M. Duong, Q. Dang and M. Le, "Real-Time Self-Driving Car Navigation Using Deep Neural
Network," 2018 4th International Conference on Green Technology and Sustainable Development (GTSD), Ho
Chi Minh City, 2018, pp. 7-12.
[2] K. Bimbraw, "Autonomous cars: Past, present and future a review of the developments in the last century,
the present scenario and the expected future of autonomous vehicle technology," 2015 12th International
Conference on Informatics in Control, Automation and Robotics (ICINCO), Colmar, 2015, pp. 191-198.
[3] T. Okuyama, T. Gonsalves and J. Upadhay, "Autonomous Driving System based on Deep Q Learning,"
2018 International Conference on Intelligent Autonomous Systems (ICoIAS), Singapore, 2018, pp. 201-205.
[4] Z. Sun et al., "On-road vehicle detection: A review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 694-711, 2006.
[5] D. O. Cualain et al., "Distance detection systems for the automotive environment: A review," Irish Signals and Systems Conference, 2007.
[6] S. Sivaraman and M. M. Trivedi, "Looking at vehicles on the road: A survey of vision-based vehicle detection, tracking, and behavior analysis," IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 4, pp. 1773-1795, 2013.
[7] H. Tehrani Niknejad, et al., “On-road multivehicle tracking using deformable object model and particle filter
with improved likelihood estimation,” IEEE Trans. Intell. Transp. Syst., vol. 13, no. 2, pp. 748–758, Jun. 2012.
[8] R. Kulkarni, S. Dhavalikar and S. Bangar, "Traffic Light Detection and Recognition for Self Driving Cars
Using Deep Learning," 2018 Fourth International Conference on Computing Communication Control and
Automation (ICCUBEA), Pune, India, 2018, pp. 1-4.
[9] J. Kim, Y. Koo and S. Kim, "MOD: Multi-camera Based Local Position Estimation for Moving Objects
Detection," 2018 IEEE International Conference on Big Data and Smart Computing (BigComp), Shanghai,
2018, pp. 642-643.
[10] M. V. Smolyakov, A. I. Frolov, V. N. Volkov and I. V. Stelmashchuk, "Self-Driving Car Steering Angle
Prediction Based On Deep Neural Network An Example Of CarND Udacity Simulator," 2018 IEEE 12th
International Conference on Application of Information and Communication Technologies (AICT), Almaty,
Kazakhstan, 2018, pp. 1-5.
[11] https://www.who.int/publications-detail/global-status-report-on-road-safety-2018
[12] https://www.prsindia.org/policy/vital-stats/overview-road-accidents-india
[13] https://www.titlemax.com/resources/history-of-the-autonomous-car/
[15] https://www.hongkiat.com/blog/pi-operating-systems/
[16] https://www.raspberrypi.org/documentation/hardware/camera/
[17] https://en.wikipedia.org/wiki/Arduino
[18] https://en.wikipedia.org/wiki/TensorFlow
[19] https://en.wikipedia.org/wiki/Keras
[20] https://en.wikipedia.org/wiki/Natural_language_processing
[21] https://randomnerdtutorials.com/complete-guide-for-ultrasonic-sensor-hc-sr04/
[22] https://opencv.org/
[23] https://kivy.org/#home
[24] https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_orb/py_orb.html
[25] https://en.wikipedia.org/wiki/Cascading_classifiers