
AN EFFECTIVE IMPLEMENTATION OF AN

AUTONOMOUS ATTENDANCE SYSTEM USING A

CONVOLUTIONAL NEURAL NETWORK

ABSTRACT

Attendance marking is a routine activity used to keep track of the daily
presence of students in academic institutions at all grades. Traditional
approaches to marking attendance are manual. These approaches are
accurate and leave no chance of fake attendance, but they are
time-consuming and laborious for large numbers of students. To overcome
the drawbacks of manual systems, automated systems have been developed
using radio-frequency identification (RFID) scanning, fingerprint
scanning, face recognition, and iris-scanning-based biometric systems.
Each system has its pros and cons. Moreover, all of these systems suffer
from the limitation that human intervention is needed to mark attendance
one student at a time. To overcome the limitations of existing manual
and automated attendance systems, in this work we propose a robust and
efficient attendance marking system that works from a single group image
using face detection and recognition algorithms. In this system, a group
image of all the students sitting in a classroom is captured by a
high-resolution camera mounted at a fixed location. Next, the face
images are extracted from the group image using a face detection
algorithm, followed by recognition using a convolutional neural network
trained on a face database of the students. We tested our system on
different types of group images and databases. Our experimental results
show that the proposed framework outperforms other attendance marking
systems in terms of efficiency, ease of use, and ease of implementation.
The proposed system is an autonomous attendance system that requires
little human-machine interaction, making it easy to incorporate into a
smart classroom. This project thus implements a robust and efficient
face-detection-based attendance system using a deep learning algorithm,
with face recognition used to identify each student.
CHAPTER 1

1.1 INTRODUCTION
Attendance marking is a regular activity in institutions and
industries. Attendance is considered an important factor for both
students and teachers in educational organizations. Managing student
attendance in the classroom is a tedious job. Attendance systems fall
into two broad categories, i.e., manual and automated attendance
systems. Among manual attendance systems, the most common is the
roll-call method, in which a teacher marks attendance by calling out
the names of the students one by one. This method is extremely
outdated; with a large number of students in a class it can take more
than 10 minutes each day, and it offers the greatest opportunity for
proxy attendance marking. The second method is signing attendance on
a register or attendance sheet. It is the most time-consuming method,
and it can easily be manipulated and forged if left unsupervised. It is
therefore important to develop an automated attendance system that marks
attendance efficiently without any human intervention. Face recognition
is the most viable basis for such a system: it is considered the least
intrusive method of identification, images can be captured from a
distance, it is cost-effective, it leaves no chance of marking proxy
attendance, and it is a user-friendly yet reliable method. In this
paper, we develop an automated attendance system that records students'
attendance from camera video through face detection and recognition.

IMAGE PROCESSOR

An image processor performs image acquisition, storage,
preprocessing, segmentation, representation, recognition, and
interpretation, and finally displays or records the resulting image. The
following block diagram gives the fundamental sequence involved in an
image processing system.

Problem domain -> Image acquisition -> Preprocessing -> Segmentation ->
Representation & description -> Recognition & interpretation -> Result
(all stages guided by a common knowledge base)

FIG 1.2 BLOCK DIAGRAM OF FUNDAMENTAL SEQUENCE INVOLVED IN AN IMAGE
PROCESSING SYSTEM

As detailed in the diagram, the first step in the process is image
acquisition by an imaging sensor in conjunction with a digitizer to
digitize the image. The next step is preprocessing, where the
image is improved before being fed as input to the other processes.
Preprocessing typically deals with enhancing, removing noise, isolating
regions, etc. Segmentation partitions an image into its constituent parts
or objects. The output of segmentation is usually raw pixel data, which
consists of either the boundary of the region or the pixels in the region
themselves. Representation is the process of transforming the raw pixel
data into a form useful for subsequent processing by the computer.
Description deals with extracting features that are basic in differentiating
one class of objects from another. Recognition assigns a label to an
object based on the information provided by its descriptors.
Interpretation involves assigning meaning to an ensemble of recognized
objects. The knowledge about a problem domain is incorporated into the
knowledge base. The knowledge base guides the operation of each
processing module and also controls the interaction between the
modules. Not all modules need be necessarily present for a specific
function. The composition of the image processing system depends on
its application. The frame rate of the image processor is normally around
25 frames per second.
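The stage sequence described above can be sketched as a simple processing pipeline. The function bodies below are illustrative placeholders, not part of any real system; only the stage order follows the diagram.

```python
import numpy as np

# Illustrative pipeline: each stage is a placeholder for a real algorithm.
def acquire():
    # image acquisition (here: a synthetic 8-bit image)
    return np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)

def preprocess(img):
    # enhancement / noise removal (here: just scale to [0, 1])
    return img.astype(np.float32) / 255.0

def segment(img):
    # partition the image into foreground / background
    return img > img.mean()

def represent(mask):
    # turn raw pixel data into a small feature vector
    return np.array([mask.mean(), float(mask.sum())], dtype=np.float32)

def recognize(features):
    # assign a label based on the descriptors
    return "object" if features[0] > 0.5 else "background"

def run_pipeline():
    return recognize(represent(segment(preprocess(acquire()))))

print(run_pipeline())
```

Interpretation and the knowledge base would sit above this chain, deciding what the recognized labels mean in the problem domain.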

IMAGE PREPROCESSING:
In the preprocessing stage, the input image may vary in size,
contain noise, and use different colour combinations. These parameters
need to be adjusted to the requirements of the process. Image noise is
most apparent in image regions with a low signal level, such as shadow
regions or underexposed images. There are many types of noise, such as
salt-and-pepper noise and film grain; these are removed using filtering
algorithms. Among the several filters available, the Wiener filter is
used. In the preprocessing module, the acquired image is processed so
that the subsequent steps yield correct output. Preprocessing should be
applied to every image so that better results can be obtained.
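The report names the Wiener filter; as a minimal self-contained illustration of noise filtering, the sketch below instead uses a hand-rolled 3x3 median filter, which is the standard remedy for the salt-and-pepper noise mentioned above.

```python
import numpy as np

def median_filter3(img):
    """Remove salt-and-pepper noise with a 3x3 median filter (edges padded)."""
    padded = np.pad(img, 1, mode="edge")
    # stack the 9 shifted views of the image and take the per-pixel median
    windows = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                        for r in range(3) for c in range(3)])
    return np.median(windows, axis=0).astype(img.dtype)

# a flat gray image corrupted with one "salt" and one "pepper" pixel
noisy = np.full((5, 5), 128, dtype=np.uint8)
noisy[1, 1] = 255   # salt
noisy[3, 3] = 0     # pepper

clean = median_filter3(noisy)
print(clean[1, 1], clean[3, 3])  # both restored to 128
```

A Wiener filter would instead adapt to the local signal-to-noise ratio, but the overall role in the pipeline is the same: clean the image before feature extraction.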
FEATURE EXTRACTION:
Statistics is the study of the collection, organization, analysis, and
interpretation of data, including the planning of data collection in
terms of the design of surveys and experiments. The statistical features
of an image include:
• Mean
• Variance
• Skewness
• Standard deviation
Texture is analysed using the gray-level co-occurrence matrix (GLCM),
also known as the gray-level spatial dependence matrix: a statistical
method of examining texture that considers the spatial relationship of
pixels. A typical compromise is 16 gray levels and a window of 30 or 50
pixels on each side.
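A minimal GLCM computation can be written directly in NumPy. The sketch below is a hand-rolled illustration (not a library API): it counts horizontally adjacent gray-level pairs, normalizes the counts to probabilities, and derives the contrast texture feature.

```python
import numpy as np

def glcm_horizontal(img, levels):
    """Co-occurrence counts for pixel pairs one step to the right."""
    glcm = np.zeros((levels, levels), dtype=np.float64)
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[a, b] += 1
    return glcm / glcm.sum()          # normalize to joint probabilities

# tiny 4-level test image
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm_horizontal(img, levels=4)

# contrast = sum over (i, j) of (i - j)^2 * P(i, j)
i, j = np.indices(P.shape)
contrast = ((i - j) ** 2 * P).sum()
print(round(contrast, 3))  # -> 0.583
```

In practice the same matrix would be computed over each 30- or 50-pixel window at 16 gray levels, and features such as contrast, energy, and homogeneity collected per window.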
CLASSIFICATION
In order to classify a set of data into different classes or categories,
the relationship between the data and the classes into which they are
classified must be well understood. To achieve this by computer, the
computer must be trained; training is key to the success of
classification. Features are attributes of the data elements on the
basis of which the elements are assigned to various classes. The image
classifier performs the role of a discriminant, discriminating one class
against the others:
1) In the multiclass case, the discriminant value is highest for one
class and lower for the other classes.
2) In the two-class case, the discriminant value is positive for one
class and negative for the other.
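The two-class rule above can be made concrete with a toy linear discriminant; the weights here are made up for illustration and are not from the report.

```python
import numpy as np

# Hypothetical linear discriminant g(x) = w . x + b:
# positive for class "A", negative for class "B" (two-class rule).
w = np.array([1.0, -1.0])
b = 0.0

def discriminant(x):
    return float(np.dot(w, x) + b)

def classify(x):
    return "A" if discriminant(x) > 0 else "B"

print(classify(np.array([2.0, 1.0])))   # g = 1  -> "A"
print(classify(np.array([1.0, 3.0])))   # g = -2 -> "B"
```

In the multiclass case one such discriminant is trained per class and the class with the highest value wins; a CNN's final layer works the same way, with one output score per student.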
CHAPTER 2

LITERATURE SURVEY

2.1 TITLE: Video surveillance systems-current status and future trends


AUTHOR: Vassilios Tsakanikas, Tasos Dagiuklas
YEAR: 2019

DESCRIPTION: This survey attempts to document the present status of
video surveillance systems. The main components of a surveillance
system are presented and studied thoroughly. Algorithms for image
enhancement, object detection, object tracking, object recognition, and
item re-identification are presented. The most common modalities
utilized by surveillance systems are discussed, putting emphasis on
video in terms of available resolutions and new imaging approaches,
like High Dynamic Range video. The most important features and
analytics are presented, along with the most common approaches for
image/video quality enhancement. Distributed computational
infrastructures are discussed (Cloud, Fog and Edge Computing),
describing the advantages and disadvantages of each approach. The most
important deep learning algorithms are presented, along with the smart
analytics that they utilize. Augmented reality and the role it can play
in a surveillance system are reported, just before discussing the
challenges and the future trends of surveillance.
2.2 TITLE: Automated Attendance Marking and Management System
by Facial Recognition Using Histogram
AUTHOR: Jenif D Souza W S Jothi S Chandrasekar A
YEAR: 2019
DESCRIPTION: In this work, an automated attendance marking and
management system is proposed using face detection and recognition
algorithms. Identification of human faces by the unique characteristics
or features of their faces is known as face recognition. Currently,
face recognition is the fastest-growing such technology. Instead of
using the traditional methods, this proposed system aims to develop an
automated system that records the attendance of the students who are
present during lecture hours by using facial recognition technology.
The main objective of this work is to make the attendance marking and
management system fully automatic, simple, and easy. In this work the
facial recognition is done by image processing techniques. The processed
image is matched against the existing stored records, and attendance is
then marked in the database correspondingly. Compared to the existing
traditional attendance marking system, this system reduces people's
workload and also saves time. The proposed system is implemented with
four modules: Image Capturing, Segmentation of Group Photo and Face
Detection, Face Comparison and Recognition, and Updating of Attendance
in the Database.

2.3 TITLE: Automatic Attendance Management System Using Face Detection
AUTHOR: E. Varadharajan, R. Dharani, S. Jeevitha, B. Kavinmathi,
S. Hemalatha
YEAR: 2020
DESCRIPTION: Attendance marking in a classroom during a lecture is
not only an onerous task but also a time-consuming one. Due to the
unusually high number of students present during a lecture, there will
always be a probability of proxy attendance. Attendance marking with
conventional methods has been an area of challenge. The growing need
for efficient and automatic techniques of marking attendance is a
growing challenge in the area of face recognition. In recent years, the
problem of automatic attendance marking has been widely addressed
through the use of standard biometrics like fingerprints and Radio
Frequency Identification (RFID) tags. However, these techniques lack
the element of reliability. In this proposed project an automated
attendance marking and management system is proposed by making use of
face detection and recognition algorithms. Instead of using the
conventional methods, this proposed system aims to develop an automated
system that records the students' attendance by using facial
recognition technology. The main objective of this work is to make the
attendance marking and management system efficient, time-saving,
simple, and easy. Here faces will be recognized using face recognition
algorithms.

2.4 TITLE: Attendance Management System Using Hybrid Face


Recognition Techniques

AUTHOR: Nazare Kanchan Jayant Surekha Borra

YEAR: 2019

DESCRIPTION: Attendance recording of students in an academic
organization plays a vital role in judging students' performance. As
the manual labor involved in this process is time-consuming, an
automated Attendance Management System (AMS) based on face detection
and face recognition techniques is proposed in this paper. The system
employs a modified Viola-Jones algorithm for face detection, and an
alignment-free partial face recognition algorithm for face recognition.
After successful recognition of a student, the system automatically
updates the attendance in an Excel sheet. The proposed system improves
the performance of existing attendance management systems by
eliminating manual calling, marking, and entry of attendance in
institutional websites.
2.5 TITLE: Smart Attendance System
AUTHOR: Swarnendu Ghosh, Mohammed Shafi KP, Neeraj Mogal,
Prabhu Kalyan Nayak, Biswajeet Champaty

YEAR: 2020

DESCRIPTION: The current study delineates the design and
development of a smart attendance system for students in schools or
colleges for optimum utilization of teaching-learning time. The
proposed device is a biometric attendance recorder that uses a
fingerprint sensor in conjunction with an Arduino UNO. The device
stores the fingerprint impressions of all the faculty and students of
an institute through the process of enrolment. During attendance, the
registering fingerprint of each student is matched against the enrolled
database. In case of a match, the name of the student is registered in
the device and sent wirelessly to an in-lab-made Android application
through the Bluetooth protocol service. The Android application is
accessible only by authorized personnel, to monitor student attendance
and to share it for academic records. The device is highly secure, as
it can only be activated by fingerprint recognition of the concerned
authorized personnel (faculty). The device is low-cost, robust,
portable, and user-friendly. Being handy and cheap gives it an edge
over the products currently available in the market. The device saves
time in class, thus increasing the valuable teaching-learning time of
teachers and students and giving them greater opportunity to teach and
learn, respectively.

2.6 TITLE: Automated Attendance Management and Reporting System


using Face Recognition

AUTHOR: S. Aravindh, R. Athira, M. J. Jeevitha

YEAR: 2020

DESCRIPTION: Maintaining attendance is a difficult process if it is
done manually. A smart, automatic attendance management system can be
implemented using various forms of biometrics, and face recognition is
one among them. By using this technique, the difficulty of fake
attendance and proxies can be solved. Previous face recognition-based
attendance systems had some disadvantages, such as problems with light
intensity and head pose. Therefore, to overcome these issues, various
techniques such as illumination-invariant processing, the Viola-Jones
algorithm, and Principal Component Analysis are used. The main steps in
this system are detecting the faces and recognizing them. After this,
the detected faces are compared by cross-checking with the database of
students' faces. This smart system will be an efficient way to maintain
the attendance and records of students. In a classroom with a large
number of students, it is a very tedious and time-consuming task to
take attendance manually. Therefore, we can implement an effective
system which will mark the attendance of students automatically by
recognizing their faces.

2.7 TITLE: Class Attendance Management System using Facial


Recognition
AUTHOR: Clyde Gomes Sagar Chanchal Tanmay Desai

YEAR: 2020

DESCRIPTION: Automatic Face Recognition (AFR) has created a
revolution in this changing world and has ensured greater safety of our
data. Smart attendance using face recognition comes in handy in
day-to-day activities. It helps reduce the amount of paper and the
effort of taking manual attendance. It is a process which uses
students' faces to recognize them, by means of face biometrics and
other facial features. The face is captured, stored in memory, and
processed to recognize the student using various algorithms and
techniques. In our attendance system, the computer is able to recognize
a student whose data has been stored and marks the attendance of that
student. Various algorithms and techniques have been used to improve
the performance of face recognition. The concept we are using here is
OpenCV. We are also using a Raspberry Pi and a camera module to take
images and store them in a database. This way the attendance will be
automated.

2.8 TITLE: Face Recognition Based Attendance Management System


Using Raspberry Pi
AUTHOR: Mohd Abdul Muqeet

YEAR: 2019

DESCRIPTION: Our paper involves both student attendance and faculty
attendance. Student attendance is marked by face recognition, with face
detection and recognition performed on a Raspberry Pi. Images of the
students present in the class are captured only when the camera is
connected to the Raspberry Pi's USB port. The captured images are
matched against the stored images; the face of each student is
recognized, and attendance is given accordingly for that subject class.
This process is carried out for every class, and students are given
attendance accordingly. Faculty attendance is also monitored with this
project: a unique RFID card is given to each faculty member, and when a
faculty member enters the classroom and swipes the RFID card,
attendance is marked with date and time. An ESP8266 is used along with
an OLED display to show the faculty attendance. Attendance can be
marked at any time without any human intervention.

2.9 TITLE: A Smart Attendance System based on Machine learning


AUTHOR: Harish M; Chethan P; Prajna N Holla K; Syed Abdul
Azeem

YEAR: 2019

DESCRIPTION: Taking attendance is an important step to monitor the


activities of a student and to ensure the eligibility of the student to
complete the course. Despite technological advancements, most of the
educational institutes still use the old register system. In this paper, we
propose a new way to take attendance of students in a classroom, which
is efficient, less time consuming and which can be done using devices
that are readily available with people in today's day and age such as
smartphones, laptops/desktops. In the proposed model, the power of
Machine learning and versatility of Google Drive have been put to good
use to build a smart attendance system with Face Recognition technique.
2.10 TITLE: Student Smart Attendance Through Face Recognition
using Machine Learning Algorithm
AUTHOR: Nandhini R, Kumar P

YEAR: 2020

DESCRIPTION: In today's competitive world, with very little
classroom time and increasing working hours, lecturers may need tools
that can help them manage precious class hours efficiently. Instead of
focusing on teaching, lecturers are stuck completing formal duties,
like taking attendance and maintaining the attendance record of each
student. Manual attendance marking unnecessarily consumes classroom
time, whereas smart attendance through face recognition techniques
helps save the lecturer's classroom time. Attendance marking through
face recognition can be applied in the classroom by capturing an image
of the students via an installed camera. Then, through the Haar cascade
algorithm and an MTCNN model, the face region is taken as the region of
interest, the face of each student is enclosed in a bounding box, and
finally attendance is marked in the database based on their presence
using a Decision Tree algorithm.

CHAPTER 3

3.1 EXISTING SYSTEM

The use of image processing in attendance systems has led to
various automatic attendance systems based on thumbprint scanning, iris
scanning, and face detection. Fingerprint-scan-based attendance systems
were the first biometric attendance systems. Each student has a unique
fingerprint that is scanned to mark attendance. Iris-scan-based
attendance systems eliminated the possibility of the proxy attendance
that fingerprint cards allowed; such a system scans the iris pattern of
students to mark attendance. Face recognition is also widely used to
identify people in a large crowd or scene, and attendance marking using
face recognition is already used in institutions. Face recognition,
fingerprint, and iris-scanning based attendance systems all suffer from
the limitation that they require more human-machine interaction: only
one student can mark attendance at a time, so it is not feasible to use
them in classrooms where many students have to mark their attendance.

DISADVANTAGES

• Less accuracy

• Less sensitivity

• Extracts only a reduced number of features

• Time consuming

3.2 PROPOSED SYSTEM

In this work, we propose an attendance system based on the
principles of face detection and recognition, so that the system can
take multiple attendances through a single input, increasing efficiency
while leaving no room for proxy attendance. The system starts by taking
a group image of the class from live video captured through a CCTV
camera, and then the faces are detected. The proposed system uses a
deep convolutional neural network (DCNN) algorithm for face recognition
from the group image. In the first step, face data is collected from
each user using OpenCV packages; more than 1000 images were collected
per user. The collected data is preprocessed in four important steps:
grayscale conversion, resizing, normalization, and augmentation. A CNN
architecture is implemented and trained on the processed data. The
trained model is stored and deployed in the automated face detection
block, where it identifies the trained faces accurately. If a face is
identified, the attendance is recorded: once a student is recognized,
the system marks him/her present. The process is repeated a few times
to increase system efficiency, and the final results are recorded in an
Excel file. This automated attendance system saves the students'
precious study time, as it runs in the background and needs little to
no interaction from the teachers or the students.
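The recognition-to-record step can be sketched as follows. Here `recognize_faces` is a hypothetical stand-in for the trained CNN, and the roster names are made up for illustration; only the marking logic (repeat over frames, mark matched names present, write the result out) follows the description above.

```python
import csv
import io
from datetime import date

ROSTER = ["alice", "bob", "carol"]          # hypothetical enrolled students

def recognize_faces(frame):
    """Stand-in for the trained CNN: returns names matched in the frame."""
    return ["alice", "carol"]               # pretend these faces matched

def mark_attendance(frames):
    present = set()
    for frame in frames:                    # repeated over several frames
        present.update(recognize_faces(frame))
    return {name: ("present" if name in present else "absent")
            for name in ROSTER}

def to_csv(record):
    """Write the attendance record in a spreadsheet-compatible format."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["date", "name", "status"])
    for name, status in sorted(record.items()):
        writer.writerow([date.today().isoformat(), name, status])
    return buf.getvalue()

record = mark_attendance(frames=[None, None])
print(record["bob"])    # -> absent
```

Repeating recognition over several frames, as the text describes, makes a single missed detection harmless: a student only needs to be matched in one frame to be marked present.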

BLOCK DIAGRAM

Training: Camera for live video capture -> Video-to-frames conversion ->
Image preprocessing -> Create image database -> CNN training ->
Register database

Testing: Live test video -> Video-to-frames conversion -> CNN features ->
Attendance system
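The four preprocessing steps named above (grayscale conversion, resizing, normalization, augmentation) can be sketched in NumPy. The helper names are illustrative, not from the report's code; in practice OpenCV routines such as `cv2.cvtColor` and `cv2.resize` would replace them.

```python
import numpy as np

def to_grayscale(rgb):
    # luminance-weighted grayscale conversion
    return rgb @ np.array([0.299, 0.587, 0.114])

def resize_nearest(img, h, w):
    # nearest-neighbour resize (a stand-in for cv2.resize)
    rows = np.arange(h) * img.shape[0] // h
    cols = np.arange(w) * img.shape[1] // w
    return img[rows][:, cols]

def normalize(img):
    # scale pixel values into [0, 1] for CNN training
    return img / 255.0

def augment(img):
    # horizontal flip as a minimal augmentation example
    return [img, img[:, ::-1]]

rgb = np.random.default_rng(1).integers(0, 256, (64, 48, 3)).astype(float)
gray = to_grayscale(rgb)
small = resize_nearest(gray, 32, 32)
batch = augment(normalize(small))
print(small.shape, len(batch))  # -> (32, 32) 2
```

Each captured face would pass through this chain before entering the training database, so the CNN always sees fixed-size, normalized inputs.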

ADVANTAGE

• The main advantage of this system is that attendance is marked
on the server, which is highly secure, so no one can mark the
attendance of another student

• Time saving

• Ease in maintaining attendance.

• Reduced paper work.

• Automatically operated and accurate.

• Reliable and user friendly

APPLICATION

 To verify identities in Government organizations.


 Enterprises.
 Attendance in Schools and colleges.
 To detect fake entries at international borders.
 Industries

CHAPTER 4

SOFTWARE REQUIREMENT

H/W SYSTEM CONFIGURATION:-

• Processor - INTEL

• RAM - 4 GB (min)

• Hard Disk - 20 GB

S/W SYSTEM CONFIGURATION:-

• Operating System : Windows 7 or 8


• Software : Python Idle
SOFTWARE ENVIRONMENT

Python Technology:

Python is an interpreted, high-level, general-purpose programming


language. It supports multiple programming paradigms, including
procedural, object-oriented, and functional programming. Python is
often described as a "batteries included" language due to its
comprehensive standard library.

Python Programing Language:

Python is a multi-paradigm programming language. Object-oriented
programming and structured programming are fully supported, and many of
its features support functional programming and aspect-oriented
programming (including via metaprogramming and metaobjects (magic
methods)). Many other paradigms are supported via extensions, including
design by contract and logic programming.

Python offers a wide range of features, including:

 Easy to Learn and Use


 Expressive Language
 Interpreted Language
 Cross-platform Language
 Free and Open Source
 Object-Oriented Language
 Extensible
 Large Standard Library
 GUI Programming Support
 Integrated

Python uses dynamic typing and a combination of reference counting


and a cycle-detecting garbage collector for memory management. It also
features dynamic name resolution (late binding), which binds method
and variable names during program execution.

Rather than having all of its functionality built into its core, Python
was designed to be highly extensible. This compact modularity has made
it particularly popular as a means of adding programmable interfaces to
existing applications. Van Rossum's vision of a small core language with
a large standard library and easily extensible interpreter stemmed from
his frustrations with ABC, which espoused the opposite approach.

Python is meant to be an easily readable language. Its formatting is


visually uncluttered, and it often uses English keywords where other
languages use punctuation. Unlike many other languages, it does not use
curly brackets to delimit blocks, and semicolons after statements are
optional. It has fewer syntactic exceptions and special cases than C or
Pascal.

Python strives for a simpler, less-cluttered syntax and grammar


while giving developers a choice in their coding methodology. In
contrast to Perl's "there is more than one way to do it" motto, Python
embraces a "there should be one and preferably only one obvious way to
do it" design philosophy. Alex Martelli, a Fellow at the Python Software
Foundation and Python book author, writes that "To describe something
as 'clever' is not considered a compliment in the Python culture."
Python's developers strive to avoid premature optimization, and reject
patches to non-critical parts of the Python reference implementation that
would offer marginal increases in speed at the cost of clarity. When
speed is important, a Python programmer can move time-critical
functions to extension modules written in languages such as C, or use
PyPy, a just-in-time compiler. Cython is also available, which
translates a Python script into C and makes direct C-level API calls
into the Python interpreter.

An important goal of Python's developers is keeping it fun to


use. This is reflected in the language's name, a tribute to the British
comedy group Monty Python, and in occasionally playful approaches to
tutorials and reference materials, such as examples that refer to spam
and eggs (from a famous Monty Python sketch) instead of the standard
foo and bar.

Python uses duck typing and has typed objects but untyped
variable names. Type constraints are not checked at compile time;
rather, operations on an object may fail, signifying that the given object
is not of a suitable type. Despite being dynamically typed, Python is
strongly typed, forbidding operations that are not well-defined (for
example, adding a number to a string) rather than silently attempting to
make sense of them.
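The behaviour described above is easy to observe directly: names can be rebound to objects of different types, but an ill-defined operation such as adding a number to a string raises a TypeError instead of being coerced silently.

```python
# Python objects are typed even though variable names are not.
x = 3          # x currently names an int
x = "three"    # the same name can later refer to a str (dynamic typing)

# Strong typing: ill-defined operations fail loudly rather than guessing.
try:
    result = 1 + "2"
except TypeError as exc:
    result = f"refused: {exc}"

print(result)   # -> refused: unsupported operand type(s) for +: 'int' and 'str'
```

Compare this with a weakly typed language, where `1 + "2"` might silently produce `"12"` or `3` depending on coercion rules.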
The Python Platform:

The platform module in Python is used to access the underlying
platform's data: the hardware, operating system, and interpreter
version information of the machine where the program is running.

There are four functions for getting information about the current
Python interpreter. python_version() and python_version_tuple() return
different forms of the interpreter version with major, minor, and patch
level components. python_compiler() reports on the compiler used to
build the interpreter. And python_build() gives a version string for the
build of the interpreter.

platform.platform() returns a string containing a general-purpose
platform identifier. The function accepts two optional Boolean
arguments. If aliased is true, the names in the return value are
converted from a formal name to their more common form. When terse is
true, it returns a minimal value with some parts dropped.
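All of the functions described above live in the standard-library platform module:

```python
import platform

# Interpreter version in two forms
print(platform.python_version())        # e.g. "3.11.4"
print(platform.python_version_tuple())  # e.g. ('3', '11', '4')

# Compiler and build used for this interpreter
print(platform.python_compiler())
print(platform.python_build())

# General-purpose platform identifier; terse=True drops some parts
print(platform.platform())
print(platform.platform(aliased=True, terse=True))
```

The exact strings differ per machine, which is the point: the module lets a program adapt to whatever platform it finds itself on.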

What does python technology do?

Python is quite popular among programmers, but the practice


shows that business owners are also Python development believers and
for good reason. Software developers love it for its straightforward
syntax and reputation as one of the easiest programming languages to
learn. Business owners or CTOs appreciate the fact that there’s a
framework for pretty much anything – from web apps to machine
learning.

Moreover, it is not just a language but more a technology platform


that has come together through a gigantic collaboration from thousands
of individual professional developers forming a huge and peculiar
community of aficionados.

So what are the tangible benefits the language brings to those who
decided to use it as a core technology? Below you will find just some of
those reasons.
PRODUCTIVITY AND SPEED

It is a widespread theory within development circles that developing


Python applications is approximately up to 10 times faster than
developing the same application in Java or C/C++. The impressive
benefit in terms of time saving can be explained by the clean object-
oriented design, enhanced process control capabilities, and strong
integration and text processing capacities. Moreover, its own unit testing
framework contributes substantially to its speed and productivity.

PYTHON IS POPULAR FOR WEB APPS

Web development shows no signs of slowing down, so technologies for


rapid and productive web development still prevail within the market.
Along with JavaScript and Ruby, Python, with its most popular web
framework Django, has great support for building web apps and is rather
popular within the web development community.

OPEN-SOURCE AND FRIENDLY COMMUNITY

As stated on the official website, it is developed under an OSI-approved


open source license, making it freely usable and distributable.
Additionally, the development is driven by the community, actively
participating and organizing conference, meet-ups, hackathons, etc.
fostering friendliness and knowledge-sharing.

PYTHON IS QUICK TO LEARN

It is said that the language is relatively simple so you can get pretty
quick results without actually wasting too much time on constant
improvements and digging into the complex engineering insights of the
technology. Even though Python programmers are really in high demand
these days, its friendliness and attractiveness only help to increase
number of those eager to master this programming language.

BROAD APPLICATION

It is used for the broadest spectrum of activities and applications for


nearly all possible industries. It ranges from simple automation tasks to
gaming, web development, and even complex enterprise systems. These
are the areas where this technology is still the king with no or little
competence:

 Machine learning as it has a plethora of libraries implementing


machine learning algorithms.
 Web development as it provides back end for a website or an app.
 Cloud computing as Python is also known to be among one of the
most popular cloud-enabled languages even used by Google in
numerous enterprise-level software apps.
 Scripting.
 Desktop GUI applications.

Python compiler

The Python compiler package is a tool for analyzing Python source


code and generating Python bytecode. The compiler contains libraries to
generate an abstract syntax tree from Python source code and to generate
Python bytecode from the tree.

The compiler package is a Python source to bytecode translator


written in Python. It uses the built-in parser and standard parser module
to generate a concrete syntax tree. This tree is used to generate an
abstract syntax tree (AST) and then Python bytecode.

The full functionality of the package duplicates the built-in
compiler provided with the Python interpreter, and it is intended to match
its behavior almost exactly. Why implement another compiler that does
the same thing? The package is useful for a variety of purposes: it can be
modified more easily than the built-in compiler, and the AST it generates
is useful for analyzing Python source code.

The basic interface

The top level of the package defines four functions. If you import
compiler, you will get these functions and a collection of modules
contained in the package.

compiler.parse(buf)

Returns an abstract syntax tree for the Python source code in buf. The
function raises SyntaxError if there is an error in the source code. The
return value is a compiler.ast.Module instance that contains the tree.

compiler.parseFile(path)

Returns an abstract syntax tree for the Python source code in the file
specified by path. It is equivalent to parse(open(path).read()).
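The legacy compiler package existed only in Python 2 and was removed in Python 3, where the built-in ast module plays the same role. As a rough sketch of the same interface using ast (an assumption for illustration, not part of the original package):

```python
import ast

# Parse a string of Python source into an abstract syntax tree,
# roughly analogous to compiler.parse(buf).
source = "x = 1 + 2"
tree = ast.parse(source)
print(type(tree).__name__)        # Module -- the root of the tree

# A syntactically invalid buffer raises SyntaxError, as compiler.parse did.
try:
    ast.parse("def f(:")
except SyntaxError:
    print("SyntaxError raised")
```

ast.parse(open(path).read()) gives the file-based equivalent of compiler.parseFile(path).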

LIMITATIONS

There are some problems with the error checking of the compiler
package. The interpreter detects syntax errors in two distinct phases.
One set of errors is detected by the interpreter's parser, the other set by
the compiler. The compiler package relies on the interpreter's parser, so
it gets the first phase of error checking for free. It implements the second
phase itself, and that implementation is incomplete. For example, the
compiler package does not raise an error if a name appears more than
once in an argument list: def f(x, x): ...

A future version of the compiler should fix these problems.

PYTHON ABSTRACT SYNTAX

The compiler.ast module defines an abstract syntax for Python. In
the abstract syntax tree, each node represents a syntactic construct. The
root of the tree is a Module object.

The abstract syntax offers a higher level interface to parsed Python
source code. The parser module and the compiler written in C for the
Python interpreter use a concrete syntax tree. The concrete syntax is tied
closely to the grammar description used for the Python parser. Instead of
a single node for a construct, there are often several levels of nested
nodes that are introduced by Python's precedence rules.

The abstract syntax tree is created by the compiler.transformer
module. The transformer relies on the built-in Python parser to generate
a concrete syntax tree. It generates an abstract syntax tree from the
concrete tree.
The transformer module was created by Greg Stein and Bill Tutt
for an experimental Python-to-C compiler. The current version contains
a number of modifications and improvements, but the basic form of the
abstract syntax and of the transformer are due to Stein and Tutt.

AST NODES

The compiler.ast module is generated from a text file that describes
each node type and its elements. Each node type is represented as a class
that inherits from the abstract base class compiler.ast.Node and defines a
set of named attributes for child nodes.

class compiler.ast.Node

Node instances are created automatically by the parser
generator. The recommended interface for specific Node instances is to
use the public attributes to access child nodes. A public attribute may be
bound to a single node or to a sequence of nodes, depending on the Node
type. For example, the bases attribute of the Class node is bound to a list
of base class nodes, and the doc attribute is bound to a single node.

Each Node instance has a lineno attribute which may be None.
All Node objects offer the following methods:

getChildren()

Returns a flattened list of the child nodes and objects in the order they
occur. Specifically, the order of the nodes is the order in which they
appear in the Python grammar. Not all of the children are Node
instances. The names of functions and classes, for example, are plain
strings.

getChildNodes()

Returns a flattened list of the child nodes in the order they occur. This
method is like getChildren(), except that it only returns those children
that are Node instances.
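In the modern ast module, the analogous helpers are ast.iter_fields() and ast.iter_child_nodes(); the sketch below uses ast rather than the legacy compiler.ast, purely for illustration:

```python
import ast

tree = ast.parse("class C(Base): pass")
cls = tree.body[0]                      # the ClassDef node

# iter_fields yields (name, value) pairs; not all values are AST nodes --
# the class name, for example, is a plain string.
fields = dict(ast.iter_fields(cls))
print(fields["name"])                   # C

# iter_child_nodes yields only the child nodes themselves,
# much like getChildNodes() on the legacy compiler.ast classes.
children = [type(n).__name__ for n in ast.iter_child_nodes(cls)]
print(children)                         # ['Name', 'Pass']
```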

The While node has three attributes: test, body, and else_. (If the
natural name for an attribute is also a Python reserved word, it can’t be
used as an attribute name. An underscore is appended to the word to
make it a legal identifier, hence else_ instead of else.)

The if statement is more complicated because it can include several
tests.

The If node only defines two attributes: tests and else_. The tests
attribute is a sequence of (test expression, consequent body) pairs. There
is one pair for each if/elif clause. The first element of the pair is the test
expression. The second element is a Stmt node that contains the code to
execute if the test is true.

The getChildren() method of If returns a flat list of child nodes. If
there are three if/elif clauses and no else clause, then getChildren() will
return a list of six elements: the first test expression, the first Stmt, the
second test expression, etc.

The following table lists each of the Node subclasses defined in
compiler.ast and each of the public attributes available on their
instances. The values of most of the attributes are themselves Node
instances or sequences of instances. When the value is something other
than an instance, the type is noted in the comment. The attributes are
listed in the order in which they are returned by getChildren() and
getChildNodes().

DEVELOPMENT ENVIRONMENTS

Most Python implementations (including CPython) include a read–eval–
print loop (REPL), permitting them to function as a command line
interpreter for which the user enters statements sequentially and receives
results immediately.

Other shells, including IDLE and IPython, add further abilities such as
auto-completion, session state retention and syntax highlighting.
IMPLEMENTATIONS

Reference implementation

CPython is the reference implementation of Python. It is written in
C, meeting the C89 standard with several select C99 features. It
compiles Python programs into an intermediate bytecode which is then
executed by its virtual machine. CPython is distributed with a large
standard library written in a mixture of C and native Python. It is
available for many platforms, including Windows and most modern
Unix-like systems. Platform portability was one of its earliest priorities.

Other implementations

PyPy is a fast, compliant interpreter of Python 2.7 and 3.5. Its just-in-
time compiler brings a significant speed improvement over CPython but
several libraries written in C cannot be used with it.

Stackless Python is a significant fork of CPython that implements
microthreads; it does not use the C memory stack, thus allowing
massively concurrent programs. PyPy also has a stackless version.

MicroPython and CircuitPython are Python 3 variants optimized for
microcontrollers, including the Lego Mindstorms EV3.

RustPython is a Python 3 interpreter written in Rust.

Unsupported implementations
Other just-in-time Python compilers have been developed, but are now
unsupported:

Google began a project named Unladen Swallow in 2009, with the aim
of speeding up the Python interpreter five-fold by using LLVM, and
of improving its multithreading ability to scale to thousands of cores,
while ordinary implementations suffer from the global interpreter lock.

Psyco is a just-in-time specialising compiler that integrates with
CPython and transforms bytecode to machine code at runtime. The
emitted code is specialized for certain data types and is faster than
standard Python code.

In 2005, Nokia released a Python interpreter for the Series 60 mobile
phones named PyS60. It includes many of the modules from the
CPython implementation and some additional modules to integrate with
the Symbian operating system. The project has been kept up to date to
run on all variants of the S60 platform, and several third-party modules
are available. The Nokia N900 also supports Python with GTK widget
libraries, enabling programs to be written and run on the target device.

Cross-compilers to other languages

There are several compilers to high-level object languages, with either
unrestricted Python, a restricted subset of Python, or a language similar
to Python as the source language:

 Jython enables the use of the Java class library from a Python
program.
 IronPython follows a similar approach in order to run Python
programs on the .NET Common Language Runtime.
 The RPython language can be compiled to C, and is used to build
the PyPy interpreter of Python.
 Pyjs compiles Python to JavaScript.
 Cython compiles Python to C and C++.
 Numba uses LLVM to compile Python to machine code.
 Pythran compiles Python to C++.
 Somewhat dated Pyrex (latest release in 2010) and Shed Skin
(latest release in 2013) compile to C and C++ respectively.
 Google's Grumpy compiles Python to Go.
 MyHDL compiles Python to VHDL.
 Nuitka compiles Python into C++.
PERFORMANCE
A performance comparison of various Python implementations on
a non-numerical (combinatorial) workload was presented at EuroSciPy
'13.

API DOCUMENTATION GENERATORS

Python API documentation generators include:

 Sphinx
 Epydoc
 HeaderDoc
 Pydoc

USES

Python has been successfully embedded in many software products as a
scripting language, including in finite element method software such as
Abaqus, 3D parametric modelers like FreeCAD, 3D animation packages
such as 3ds Max, Blender, Cinema 4D, Lightwave, Houdini, Maya,
modo, MotionBuilder and Softimage, the visual effects compositor Nuke,
2D imaging programs like GIMP, Inkscape, Scribus and Paint Shop Pro,
and musical notation programs like scorewriters and capella. GNU
Debugger uses Python as a pretty printer to show complex structures
such as C++ containers. Esri promotes Python as the best choice for
writing scripts in ArcGIS. It has also been used in several video games,
and has been adopted as the first of the three available programming
languages in Google App Engine, the other two being Java and Go.

Python is commonly used in artificial intelligence projects with the help
of libraries like TensorFlow, Keras and Scikit-learn. As a scripting
language with modular architecture, simple syntax and rich text
processing tools, Python is often used for natural language processing.

Many operating systems include Python as a standard component. It
ships with most Linux distributions, AmigaOS 4, FreeBSD (as a
package), NetBSD, OpenBSD (as a package) and macOS, and can be
used from the command line (terminal). Many Linux distributions use
installers written in Python: Ubuntu uses the Ubiquity installer, while
Red Hat Linux and Fedora use the Anaconda installer. Gentoo Linux
uses Python in its package management system, Portage.

Python is used extensively in the information security industry,
including in exploit development.

Most of the Sugar software for the One Laptop per Child XO, now
developed at Sugar Labs, is written in Python. The Raspberry Pi single-
board computer project has adopted Python as its main user-
programming language.

LibreOffice includes Python, and intends to replace Java with Python.
Its Python Scripting Provider has been a core feature since Version 4.0,
released on 7 February 2013.

PANDAS

In computer programming, pandas is a software library written for
the Python programming language for data manipulation and analysis. In
particular, it offers data structures and operations for manipulating
numerical tables and time series. It is free software released under the
three-clause BSD license. The name is derived from the term "panel
data", an econometrics term for data sets that include observations over
multiple time periods for the same individuals.
Library features

 DataFrame object for data manipulation with integrated indexing.
 Tools for reading and writing data between in-memory data
structures and different file formats.
 Data alignment and integrated handling of missing data.
 Reshaping and pivoting of data sets.
 Label-based slicing, fancy indexing, and subsetting of large data
sets.
 Data structure column insertion and deletion.
 Group-by engine allowing split-apply-combine operations on data
sets.
 Data set merging and joining.
 Hierarchical axis indexing to work with high-dimensional data in a
lower-dimensional data structure.
 Time series functionality: date range generation and frequency
conversion, moving window statistics, moving window linear
regressions, date shifting and lagging.
 Data filtration.
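A few of the features above can be sketched in a short session; the column names and values below are invented purely for illustration:

```python
import pandas as pd

# DataFrame object with integrated label-based indexing.
df = pd.DataFrame(
    {"city": ["A", "A", "B"], "sales": [10, None, 30]},
    index=["r1", "r2", "r3"],
)

# Integrated handling of missing data: replace the missing value with 0.
df["sales"] = df["sales"].fillna(0)

# Label-based slicing.
print(df.loc["r1", "sales"])          # 10.0

# Group-by engine: split-apply-combine.
totals = df.groupby("city")["sales"].sum()
print(totals["A"], totals["B"])       # 10.0 30.0
```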

CSV READER

CSV (Comma Separated Values) is a simple file format used
to store tabular data, such as a spreadsheet or database. A CSV file
stores tabular data (numbers and text) in plain text. Each line of the
file is a data record. Each record consists of one or more fields,
separated by commas. The use of the comma as a field separator is
the source of the name for this file format.

For working with CSV files in Python, there is a built-in module
called csv.
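A minimal sketch of reading records with the built-in csv module; the column names and rows are made up for illustration, and an in-memory buffer stands in for a real file:

```python
import csv
import io

# A CSV "file" in memory: each line is a record, fields separated by commas.
data = io.StringIO("name,marks\nAlice,90\nBob,85\n")

# DictReader maps each record to a dictionary keyed by the header row.
rows = list(csv.DictReader(data))
print(rows[0]["name"], rows[0]["marks"])   # Alice 90
```

Reading from a real file works the same way with open("marks.csv", newline="") in place of the StringIO buffer.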
CHAPTER-6

PROCESSOR

The processor is a chip or a logical circuit that responds to and
processes the basic instructions that drive a particular computer. The main
functions of the processor are fetching, decoding, executing, and writing
back the results of an instruction. The processor is also called the
brain of any system that incorporates computers, laptops, smartphones,
embedded systems, etc. The ALU (Arithmetic Logic Unit) and
CU (Control Unit) are the two parts of the processor. The Arithmetic
Logic Unit performs all mathematical operations such as additions,
multiplications, subtractions, divisions, etc., while the control unit works
like a traffic controller, managing the commands and the order of the
instructions. The processor also communicates with the other components,
namely input/output devices and memory/storage devices.

GENERAL PURPOSE PROCESSOR

There are five types of general-purpose processors: the
microcontroller, microprocessor, embedded processor, DSP and media
processor.

MICROPROCESSOR

The general-purpose processors are represented by the
microprocessor in embedded systems. There are different varieties of
microprocessors available in the market from different companies. The
microprocessor is a general-purpose processor that consists of a
control unit, an ALU, a bunch of registers (also called scratchpad
registers), control registers and status registers. There may be an on-chip
memory and some interfaces for communicating with the external world,
such as interrupt lines, other lines for the memory, and ports. The ports
are often called programmable ports, meaning we can program each
port to act either as an input or as an output. The general-purpose
processors are shown in the table below.

Basic Components of Processor

 The ALU (arithmetic logic unit) executes all arithmetic and logic
operations.
 The FPU (Floating Point Unit), also called the "math coprocessor",
helps perform mathematical calculations.
 Registers store instructions and data; they supply operands to the
ALU and save the output of operations.
 Cache memory saves time by avoiding repeated trips to main
memory for frequently used data.
Primary CPU Processor Operations are

 Fetch – obtains the instruction from the main memory unit
(RAM).
 Decode – converts the instruction into a form the other
components of the CPU can act on so further operations can
continue; this operation is performed by the decoder.
 Execute – performs the operation, activating every component
of the CPU needed to carry out the instruction.
 Write-Back – after the operation executes, its result is written
back.
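The fetch–decode–execute–write-back cycle can be sketched as a toy interpreter; the three-instruction machine below is invented purely for illustration:

```python
# A toy CPU: 'program' holds (opcode, operand) instructions,
# 'acc' is an accumulator register, 'pc' is the program counter.
program = [("LOAD", 5), ("ADD", 3), ("STORE", 0)]
registers = {"acc": 0, "pc": 0}
ram = [0]

while registers["pc"] < len(program):
    instr = program[registers["pc"]]          # fetch the next instruction
    opcode, operand = instr                   # decode it
    if opcode == "LOAD":                      # execute
        result = operand
    elif opcode == "ADD":
        result = registers["acc"] + operand
    elif opcode == "STORE":
        ram[operand] = registers["acc"]
        result = registers["acc"]
    registers["acc"] = result                 # write back the result
    registers["pc"] += 1

print(registers["acc"], ram[0])               # 8 8
```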

TYPES OF PROCESSOR

Here, we will discuss the different types of CPU (processors)
used in computers. If you are wondering how many types of CPU there
are, the short answer is five.

SINGLE CORE PROCESSOR

Single-core CPUs were used in the traditional type of computers.
Those CPUs could perform only one operation at a time, so they were
not well suited to multitasking. They degraded the overall performance
of the computer system when running multiple programs at the same
time.
In a single-core CPU, a FIFO (first in, first out) model is used:
operations go to the CPU for processing in priority order, and the
remaining operations wait until the first operation is completed.

DUAL CORE PROCESSOR


A dual-core processor contains two cores linked with each other
on a single IC (integrated circuit). Each core has its own local cache
and controller, so they can perform difficult operations more quickly
than a single-core CPU.
Examples of dual-core processors include the Intel Core Duo, the
AMD X2, and the dual-core PowerPC G5.

MULTI CORE PROCESSOR


A multi-core processor is designed with several processing
units ("cores") on one chip, and every core is able to perform its tasks
independently. For example, if you are doing multiple activities
at the same time, such as using WhatsApp and playing a game, one core
handles the WhatsApp activities while another core manages other work
such as the game.
QUAD CORE PROCESSOR
A quad-core processor is a high-power CPU in which four different
processor cores are combined into one processor. Every core is
capable of executing and processing instructions on its own without
relying on the other cores. Quad-core processors can execute massive
numbers of instructions at a time without long waiting queues.
A quad-core CPU helps enhance the processing power of a computer
system, but its performance depends on the other computing
components in use.

OCTA CORE PROCESSOR


An octa-core processor is designed with a multiprocessor
architecture, and its design produces higher processing speed. An octa-
core processor is well suited to multitasking and boosts the
efficiency of your CPU. These types of processors are mostly used in
smartphones.

WEB CAM

A webcam is a video camera that feeds or streams its image in real
time to or through a computer to a computer network. When "captured" by
the computer, the video stream may be saved, viewed or sent on to other
networks via systems such as the Internet, or emailed as an attachment.
When sent to a remote location, the video stream may be saved, viewed
or sent onward there.

Unlike an IP camera (which connects using Ethernet or Wi-Fi), a
webcam is generally connected by a USB cable, or similar cable, or built
into computer hardware, such as laptops. The term "webcam" (a clipped
compound) may also be used in its original sense of a video camera
connected to the Web continuously for an indefinite time, rather than for
a particular session, generally supplying a view for anyone who visits its
web page over the Internet.

Video calling and videoconferencing

Webcams can be added to instant messaging and text chat services
such as AOL Instant Messenger, and to VoIP services such as Skype;
one-to-one live video communication over the Internet has now reached
millions of mainstream PC users worldwide. Improved video quality has helped
webcams encroach on traditional video conferencing systems. New
features such as automatic lighting controls, real-time enhancements
(retouching, wrinkle smoothing and vertical stretch), automatic face
tracking and autofocus, assist users by providing substantial ease-of-use,
further increasing the popularity of webcams. Webcam features and
performance can vary by program, computer operating system, and also
by the computer's processor capabilities. Video calling support has also
been added to several popular instant messaging programs.

Video security

Webcams can be used as security cameras. Software is available to
allow PC-connected cameras to watch for movement and sound,
recording both when they are detected. These recordings can then be
saved to the computer, e-mailed, or uploaded to the Internet. In one well-
publicised case, a computer e-mailed images of the burglar during the
theft of the computer, enabling the owner to give police a clear picture
of the burglar's face even after the computer had been stolen.
Unauthorized access of webcams can present significant privacy issues
(see "Privacy" section below).

Video clips and stills

Webcams can be used to take video clips and still pictures. Various
software tools in wide use can be employed for this, such as Pic Master
(for use with Windows operating systems), Photo Booth (Mac), or
Cheese (with Unix systems). For a more complete list see Comparison
of webcam software.

Input control devices

Special software can use the video stream from a webcam to assist
or enhance a user's control of applications and games. Video features,
including faces, shapes, models and colors can be observed and tracked
to produce a corresponding form of control. For example, the position of
a single light source can be tracked and used to emulate a mouse pointer;
a head-mounted light would enable hands-free computing and would
greatly improve computer accessibility.

This can be applied to games, providing additional control,
improved interactivity and immersiveness. FreeTrack is a free webcam
motion-tracking application for Microsoft Windows that can track a
special head-mounted model in up to six degrees of freedom and output
data to mouse, keyboard, joystick and FreeTrack-supported games. By
removing the IR filter of the webcam, IR LEDs can be used, which have
the advantage of being invisible to the naked eye, removing a distraction
from the user. TrackIR is a commercial version of this technology. The
EyeToy for the PlayStation 2, PlayStation Eye for the PlayStation 3,
and the Xbox Live Vision camera and Kinect motion sensor for the
Xbox 360 are color digital cameras that have been used as control
input devices by some games. Small webcam-based PC games are
available as either standalone executables or inside web browser
windows using Adobe Flash.

Astrophotography

With very-low-light capability, a few specific models of webcam
are very popular with astronomers and astrophotographers for
photographing the night sky. Mostly, these are manual-focus cameras
and contain an old CCD array instead of the comparatively newer
CMOS array. The lenses of the cameras are removed, and the cameras
are then attached to telescopes to record images, video, or both. In newer
techniques, video of a very faint object is taken for a couple of seconds,
and then all the frames of the video are "stacked" together to obtain a
still image of respectable contrast.

OPENCV

OpenCV (Open Source Computer Vision Library) was initiated by
enthusiast coders in 1999 to incorporate image processing into a wide
variety of coding languages. It has C++, C, and Python interfaces
running on Windows, Linux, Android and Mac.

Officially launched in 1999, the OpenCV project was initially an
Intel Research initiative to advance CPU-intensive applications, part of a
series of projects including real-time ray tracing and 3D display walls.
The main contributors to the project included a number of optimization
experts in Intel Russia, as well as Intel's Performance Library Team.
The project's goals were to disseminate vision knowledge by providing a
common infrastructure that developers could build on, so that code
would be more readily readable and transferable, and to advance
vision-based commercial applications by making portable,
performance-optimized code available for free – with a license that did
not require code to be open or free itself. The first alpha version of
OpenCV was released to the public at the IEEE Conference on
Computer Vision and Pattern Recognition in 2000, and five betas were
released between 2001 and 2005. The first 1.0 version was released in
2006. A version 1.1 "pre-release" was released in October 2008. The
second major release of OpenCV was in October 2009. OpenCV 2
includes major changes to the C++ interface, aiming at easier, more
type-safe patterns, new functions, and better implementations of
existing ones in terms of performance (especially on multi-core
systems). Official releases now occur every six months and
development is now done by an independent Russian team supported by
commercial corporations. In August 2012, support for OpenCV was
taken over by a non-profit foundation, OpenCV.org, which maintains a
developer and user site. In May 2016, Intel signed an agreement to
acquire Itseez, a leading developer of OpenCV.[10] In July 2020,
OpenCV announced and began a Kickstarter campaign for the OpenCV
AI Kit, a series of hardware modules and additions to OpenCV
supporting Spatial AI.

YOLO

YOLO is an algorithm that uses neural networks to provide real-
time object detection. Object detection is a task in computer vision
that involves detecting various objects in digital images or videos.
The YOLO framework (You Only Look Once) deals with object
detection in a different way: it takes the entire image in a single instance
and predicts the bounding box coordinates and class probabilities for
those boxes. The biggest advantage of using YOLO is its superb speed –
it is incredibly fast and can process 45 frames per second. YOLO also
understands generalized object representation. It is one of the best
algorithms for object detection and has shown performance comparable
to the R-CNN algorithms. In the upcoming sections, we will learn about
the different techniques used in the YOLO algorithm. Object detection
is one of the classical problems in computer vision, where you work to
recognize what and where: specifically, what objects are inside a given
image and where they are in the image. The problem of object detection
is more complex than classification, which can also recognize objects
but does not indicate where the object is located in the image. In
addition, classification does not work on images containing more than
one object. YOLO uses a totally different approach: it is a clever
convolutional neural network (CNN) for doing object detection in real
time. The algorithm applies a single neural network to the full image,
then divides the image into regions and predicts bounding boxes and
probabilities for each region. These bounding boxes are weighted by the
predicted probabilities.

YOLO is popular because it achieves high accuracy while also
being able to run in real time. The algorithm "only looks once" at the
image in the sense that it requires only one forward propagation pass
through the neural network to make predictions. After non-max
suppression (which makes sure the object detection algorithm only
detects each object once), it then outputs the recognized objects together
with their bounding boxes. With YOLO, a single CNN simultaneously
predicts multiple bounding boxes and class probabilities for those boxes.
YOLO trains on full images and directly optimizes detection
performance. This model has a number of benefits over other object
detection methods: YOLO is extremely fast; YOLO sees the entire
image during training and test time, so it implicitly encodes contextual
information about classes as well as their appearance; and YOLO learns
generalizable representations of objects, so that when trained on natural
images and tested on artwork, the algorithm outperforms other top
detection methods. You Only Look Once (YOLO) is a network that uses
Deep Learning (DL) algorithms for object detection. YOLO performs
object detection by classifying certain objects within the image and
determining where they are located in it. For example, if you input an
image of a herd of sheep into a YOLO network, it will generate an
output of a vector of bounding boxes for each individual sheep and
classify each as such.

HOW YOLO IMPROVES OVER PREVIOUS OBJECT DETECTION
METHODS

Previous object detection methods like Region-based Convolutional
Neural Networks (R-CNN), including variations such as Fast R-CNN,
performed object detection tasks in a multi-step pipeline. R-CNN
focuses on specific regions within the image and trains each individual
component separately.

This process requires the R-CNN to classify 2000 regions per image,
which makes it very time-consuming (47 seconds per individual test
image). Thus, it cannot be implemented in real time. Additionally, R-
CNN uses a fixed selective search algorithm, which means no learning
process occurs during this stage, so the network might generate an
inferior region proposal.

This makes object detection networks such as R-CNN harder to
optimize and slower compared to YOLO. YOLO is much faster (45
frames per second) and easier to optimize than previous algorithms, as
it uses only one neural network to run all components of the task.

To gain a better understanding of what YOLO is, we first have to
explore its architecture and algorithm.

YOLO Architecture: Structure Design and Algorithm Operation

A YOLO network consists of three main parts: first, the algorithm,
also known as the predictions vector; second, the network; third, the
loss functions.

The YOLO Algorithm

Once you input an image into a YOLO algorithm, it splits
the image into an SxS grid that it uses to predict whether a specific
bounding box contains the object (or parts of it) and then uses this
information to predict a class for the object.

Before we can go into detail and explain how the algorithm
functions, we need to understand how the algorithm builds and
specifies each bounding box. The YOLO algorithm uses four
components and an additional value to predict an output.

1. The center of a bounding box (bx, by)
2. Width (bw)
3. Height (bh)
4. The class of the object (c)

The final predicted value is the confidence (pc). It represents the
probability that an object exists within the bounding box. The (x, y)
coordinates represent the center of the bounding box. Typically,
most of the bounding boxes will not contain an object, so we need the
pc prediction. We can use a process called non-max suppression
to remove unnecessary boxes that have a low probability of containing
an object, as well as those that share large areas with other boxes.
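Non-max suppression can be sketched in plain Python; the boxes below are (x1, y1, x2, y2, confidence) tuples invented for illustration:

```python
def iou(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, iou_threshold=0.5):
    """Keep the highest-confidence box, drop boxes that overlap it too much."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    while boxes:
        best = boxes.pop(0)
        kept.append(best)
        boxes = [b for b in boxes if iou(best[:4], b[:4]) < iou_threshold]
    return kept

# Two near-duplicate detections of one object, plus a distinct one.
detections = [(0, 0, 10, 10, 0.9), (1, 1, 10, 10, 0.8), (50, 50, 60, 60, 0.7)]
print(len(non_max_suppression(detections)))   # 2
```

In a real YOLO pipeline the boxes with confidence (pc) below a threshold would be discarded before this step.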

THE NETWORK

A YOLO network is structured like a regular CNN; it contains
convolution and max-pooling layers followed by two fully connected
layers.

The Loss Function

We only want one of the bounding boxes to be responsible for the
object within the image, since the YOLO algorithm predicts multiple
bounding boxes for each grid cell. To achieve this, we use the loss
function to compute the loss for each true positive. To make the loss
function more efficient, we select the bounding box with the highest
Intersection over Union (IoU) with the ground truth. This method
improves predictions by producing specialized bounding boxes, which
improves predictions for certain aspect ratios and sizes.

YOLO V3
YOLO V3 is an incremental upgrade over YOLO V2, which uses
another variant of Darknet. The YOLO V3 architecture consists of 53
layers trained on ImageNet and another 53 tasked with object detection,
which amounts to 106 layers. While this has dramatically improved the
accuracy of the network, it has also reduced the speed from 45 fps to 30
fps.

DARKNET

Darknet is a small, awesome open-source neural network framework
written in C. It is fast, slim and friendly to use. In particular, if you are
interested in a fast and small classifier, you should try Tiny Darknet.
I am using it in conjunction with a Bayesian network in order to classify
a huge number of images (using my slow PC) in a fast and reliable way
(maybe I will explain more in another post). You should also try
Nightmare, the counterpart of Deep Dream. As in a previous post, I
wanted to analyze the community of projects surrounding Darknet. In
this exploration I am using the fork action in order to obtain a network;
as a second step I am using a community detection method and some
metrics in order to obtain interesting clusters.

CNN ARCHITECTURE

This work describes image classification using a deep neural network
combined with HOG feature extraction and a K-means segmentation
algorithm, classified through an SVM classifier for higher accuracy.
The proposed system has the following advantages:

1) The proposed CNN method reduces the number of preprocessing steps.

2) The extra shape features extracted by the HOG algorithm provide
better accuracy.

3) The SVM classifier reduces the complexity of the work and improves
the robustness of the system.
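As a rough illustration of the HOG step, the sketch below computes simplified per-cell orientation histograms in NumPy (no block normalization). A real pipeline would more likely use skimage.feature.hog and feed the resulting vectors to an SVM such as sklearn.svm.SVC; the cell size and bin count here are illustrative defaults, not values from this report.

```python
import numpy as np

def hog_features(image, cell=8, bins=9):
    """Simplified HOG: per-cell histograms of gradient orientations,
    weighted by gradient magnitude (no block normalization)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation
    h, w = image.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            m = mag[y:y + cell, x:x + cell].ravel()
            a = ang[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)
```

The resulting feature vector (here 4 cells x 9 bins = 36 values for a 16x16 image) is what would be concatenated with the CNN features and passed to the SVM.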

A. Deep neural network

A complete 2-D convolutional neural network consists of an image
input layer, convolution layers, ReLU layers, max-pooling layers,
fully connected layers, a softmax layer, and a classification layer.
A detailed description of each layer follows.

(1) Image input layer: defines the pixel size of the input image from
which the network learns features.

(2) Convolution layer: extracts features from the image supplied by
the input layer. It consists of one or more kernels (filters) with
different weights; depending on the weights associated with each
filter, different features of the image are extracted.

(3) Pooling layer: applies down-sampling to the convolved feature
maps after the non-linearity, reducing the dimensions of the feature
maps.

(4) Fully connected layer: connects the outputs of the preceding
blocks and maps them to the 26 image classes; the predicted class is
chosen from the resulting class scores.
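The individual layer operations described above can be sketched in plain NumPy. This is an illustrative single-channel sketch of each building block, not the trained network used in this work.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def relu(x):
    """Element-wise non-linearity."""
    return np.maximum(x, 0)

def maxpool2d(x, size=2):
    """Down-sample by taking the maximum over non-overlapping windows."""
    h, w = x.shape
    h, w = h - h % size, w - w % size  # trim to a multiple of the pool size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    """Turn class scores into probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()
```

A forward pass through one block is then `maxpool2d(relu(conv2d(image, kernel)))`, with `softmax` applied to the final class scores.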

CNN ARCHITECTURE
CHAPTER 6

SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the
process of trying to discover every conceivable fault or weakness in a
work product. It provides a way to check the functionality of
components, sub-assemblies, assemblies, and/or a finished product. It
is the process of exercising software with the intent of ensuring that
the software system meets its requirements and user expectations and
does not fail in an unacceptable manner. There are various types of
tests, each of which addresses a specific testing requirement.

6.1 TYPES OF TESTS


6.1.1 Unit testing
Unit testing involves the design of test cases that validate that the
internal program logic is functioning properly and that program inputs
produce valid outputs. All decision branches and internal code flow
should be validated. It is the testing of individual software units of
the application, done after the completion of an individual unit and
before integration. This is structural testing that relies on
knowledge of the unit's construction and is invasive. Unit tests
perform basic tests at the component level and test a specific
business process, application, and/or system configuration. Unit tests
ensure that each unique path of a business process performs accurately
to the documented specifications and contains clearly defined inputs
and expected results.
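As a concrete illustration, the sketch below unit-tests a hypothetical attendance helper with Python's built-in unittest module. The mark_attendance function and its behavior are assumptions made for the example, not code from this project.

```python
import unittest

def mark_attendance(recognized, roster):
    """Hypothetical helper: map each roster name to 'Present'/'Absent'
    depending on whether face recognition found that student."""
    return {name: ("Present" if name in recognized else "Absent")
            for name in roster}

class TestMarkAttendance(unittest.TestCase):
    def test_present_and_absent(self):
        result = mark_attendance({"alice"}, ["alice", "bob"])
        self.assertEqual(result["alice"], "Present")
        self.assertEqual(result["bob"], "Absent")

    def test_empty_recognition(self):
        # No faces recognized: every student is marked absent.
        self.assertEqual(mark_attendance(set(), ["alice"]),
                         {"alice": "Absent"})
```

Such tests are run with `python -m unittest`, one test case per unique path through the unit.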

6.1.2 Integration testing


Integration tests are designed to test integrated software
components to determine whether they actually run as one program.
Testing is event driven and is more concerned with the basic outcome
of screens or fields. Integration tests demonstrate that although the
components were individually satisfactory, as shown by successful unit
testing, the combination of components is correct and consistent.
Integration testing is specifically aimed at exposing the problems
that arise from the combination of components.

6.1.3 Functional test


Functional tests provide systematic demonstrations that functions
tested are available as specified by the business and technical
requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input: identified classes of valid input must be accepted.

Invalid Input: identified classes of invalid input must be rejected.

Functions: identified functions must be exercised.

Output: identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests are focused on
requirements, key functions, and special test cases. In addition,
systematic coverage pertaining to identifying business process flows,
data fields, predefined processes, and successive processes must be
considered for testing. Before functional testing is complete,
additional tests are identified and the effective value of current
tests is determined.

6.1.4 System Test


System testing ensures that the entire integrated software system
meets requirements. It tests a configuration to ensure known and
predictable results. An example of system testing is the configuration
oriented system integration test. System testing is based on process
descriptions and flows, emphasizing pre-driven process links and
integration points.

6.1.5 White Box Testing


White box testing is testing in which the software tester has
knowledge of the inner workings, structure, and language of the
software, or at least its purpose. It is used to test areas that
cannot be reached from a black-box level.
6.1.6 Black Box Testing
Black box testing is testing the software without any knowledge of
the inner workings, structure, or language of the module being tested.
Black box tests, like most other kinds of tests, must be written from
a definitive source document, such as a specification or requirements
document. It is testing in which the software under test is treated as
a black box: you cannot "see" into it. The test provides inputs and
responds to outputs without considering how the software works.

6.2 Unit Testing:

Unit testing is usually conducted as part of a combined code and


unit test phase of the software lifecycle, although it is not uncommon for
coding and unit testing to be conducted as two distinct phases.

6.2.1 Test strategy and approach


Field testing will be performed manually and functional tests will
be written in detail.

6.2.2 Test objectives

 All field entries must work properly.


 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.
6.2.3 Features to be tested

 Verify that the entries are of the correct format


 No duplicate entries should be allowed
 All links should take the user to the correct page.
6.3 Integration Testing
Software integration testing is the incremental integration testing
of two or more integrated software components on a single platform to
produce failures caused by interface defects.

The task of the integration test is to check that components or


software applications, e.g. components in a software system or – one
step up – software applications at the company level – interact without
error.

Test Results: All the test cases mentioned above passed successfully.
No defects encountered.

6.4 Acceptance Testing


User Acceptance Testing is a critical phase of any project and
requires significant participation by the end user. It also ensures that the
system meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully.
No defects encountered.
CHAPTER 7

UML DIAGRAM

UML stands for Unified Modeling Language. UML is a


standardized general-purpose modeling language in the field of object-
oriented software engineering. The standard is managed, and was
created by, the Object Management Group.
The goal is for UML to become a common language for creating
models of object-oriented computer software. In its current form, UML
comprises two major components: a meta-model and a notation. In the
future, some form of method or process may also be added to, or
associated with, UML.
The Unified Modeling Language is a standard language for
specifying, visualizing, constructing, and documenting the artifacts
of software systems, as well as for business modeling and other
non-software systems.
The UML represents a collection of best engineering practices that
have proven successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented
software and the software development process. The UML uses mostly
graphical notations to express the design of software projects.
GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling language
so that they can develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the
core concepts.
3. Be independent of particular programming languages and
development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of OO tools market.
6. Support higher level development concepts such as collaborations,
frameworks, patterns and components.
7. Integrate best practices.
USE CASE DIAGRAM:
A use case diagram in the Unified Modeling Language (UML) is a
type of behavioral diagram defined by and created from a Use-case
analysis. Its purpose is to present a graphical overview of the
functionality provided by a system in terms of actors, their goals
(represented as use cases), and any dependencies between those use
cases. The main purpose of a use case diagram is to show what system
functions are performed for which actor. Roles of the actors in the
system can be depicted.
CLASS DIAGRAM:

In software engineering, a class diagram in the Unified Modeling


Language (UML) is a type of static structure diagram that describes the
structure of a system by showing the system's classes, their attributes,
operations (or methods), and the relationships among the classes. It
explains which class contains information.

SEQUENCE DIAGRAM:

A sequence diagram in Unified Modelling Language (UML) is a kind of


interaction diagram that shows how processes operate with one another
and in what order. It is a construct of a Message Sequence Chart.
Sequence diagrams are sometimes called event diagrams, event
scenarios, and timing diagrams.
DEPLOYMENT:

Component diagrams describe the components of a system, and
deployment diagrams show how those components are deployed on
hardware. UML is mainly designed to focus on the software artifacts of
a system; these two diagrams, however, are special diagrams used to
focus on software and hardware components.

[Diagram: USER → ATTENDANCE SYSTEM → PRESENT OR NOT]
DATA FLOW DIAGRAM:

1. The DFD is also called a bubble chart. It is a simple graphical
formalism that can be used to represent a system in terms of the
input data to the system, the various processing carried out on
this data, and the output data generated by the system.
2. The data flow diagram (DFD) is one of the most important
modeling tools. It is used to model the system components: the
system process, the data used by the process, the external
entities that interact with the system, and the information flows
in the system.
3. The DFD shows how information moves through the system and how
it is modified by a series of transformations. It is a graphical
technique that depicts information flow and the transformations
that are applied as data moves from input to output.
4. A DFD may be used to represent a system at any level of
abstraction and may be partitioned into levels that represent
increasing information flow and functional detail.
DFD DIAGRAM

[DFD: USER → REGISTER → CAMERA → TRAINING OF IMAGE → FRAME MATCHING → PRESENT OR NOT]
CHAPTER 8

RESULTS

DATASET CREATION

In this module, the dataset for each student is created using
OpenCV-Python. One thousand images were collected from each student to
create the dataset.

DATA COLLECTION

FACE EXTRACTION

OUTPUT
AUTOMATED ATTENDANCE ENTRY

EXPECTED RESULT

The expected result of the proposed system is to mark attendance for
the students in the class automatically from a single group video. The
attendance is finally recorded in an Excel sheet based on the presence
or absence of each student, as detected by the above model using face
recognition.
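The final marking step can be sketched with Python's standard csv module (a .csv file opens directly in Excel); the column layout here is an assumption for illustration.

```python
import csv
from datetime import date

def write_attendance(roster, present, path="attendance.csv"):
    """Write one row per student: name, date, Present/Absent."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Name", "Date", "Status"])
        for name in roster:
            writer.writerow([name, date.today().isoformat(),
                             "Present" if name in present else "Absent"])

# Example: Alice was recognized in the group video, Bob was not.
write_attendance(["Alice", "Bob"], {"Alice"})
```

The set of recognized names would come from the face-recognition stage; everyone on the roster who was not recognized is marked absent.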
CHAPTER 9

9.1 CONCLUSION

The Automated Attendance System has been envisioned to reduce the
drawbacks of the traditional (manual) system. This attendance system
demonstrates the use of image-processing techniques in the classroom.
The proposed automated attendance system using face recognition is a
practical model for marking the attendance of students in a classroom.
The system also helps overcome the chances of proxies and fake
attendance. In the modern world, a large number of biometric systems
are available; however, facial recognition turns out to be a viable
option because of its high accuracy along with minimal human
intervention. The system is aimed at providing a significant level of
security. It can not only help with attendance marking but also
improve the goodwill of an institution.
9.2 REFERENCES

[1] P. Cocca, F. Marciano, and M. Alberti, "Video surveillance
systems to enhance occupational safety: A case study," Saf. Sci., 2016.

[2] M. L. Garcia, Vulnerability Assessment of Physical Protection
Systems. Oxford, U.K.: Heinemann, 2006.

[3] M. P. J. Ashby, "The value of CCTV surveillance cameras as an
investigative tool: An empirical analysis," Eur. J. Criminal Policy
Res., 2017.

[4] B. C. Welsh, D. P. Farrington, and S. A. Taheri, "Effectiveness
and social costs of public area surveillance for crime prevention," 2015.

[5] The Effectiveness of Public Space CCTV: A Review of Recent
Published Evidence Regarding the Impact of CCTV on Crime, Police
Community Saf. Directorate, Scottish Government, Edinburgh, U.K., 2009.

[6] W. Hu, T. Tan, L. Wang, and S. Maybank, "A survey on visual
surveillance of object motion and behaviors," IEEE Trans. Syst., Man,
Cybern. C, Appl. Rev., 2004.

[7] P. L. Venetianer and H. Deng, "Performance evaluation of an
intelligent video surveillance system: A case study," Comput. Vis.
Image Understand., Nov. 2010.

[8] V. Tsakanikas and T. Dagiuklas, "Video surveillance systems:
Current status and future trends," Comput. Electr. Eng., Aug. 2018.

[9] L. Patino, T. Nawaz, T. Cane, and J. Ferryman, "PETS 2017:
Dataset and challenge," in Proc. IEEE Conf. Comput. Vis. Pattern
Recognit. Workshops (CVPRW), Honolulu, HI, USA, Jul. 2017.

[10] G. Awad, A. Butt, J. Fiscus, D. Joy, A. Delgado, M. Michel,
A. F. Smeaton, Y. Graham, W. Kraaij, G. Quénot, M. Eskevich,
R. Ordelman, G. J. F. Jones, and B. Huet, "TRECVID 2017: Evaluating
ad-hoc and instance video search, events detection, video captioning,
and hyperlinking," 2018.
