
PROJECT REPORT

Object Tracking with Color Detection

Image Processing
J Component

By:
16BCE0400 - Ayush Maria

Submitted to Prof. Suresh Kumar N, SCOPE

Slot: G1

School of Computer Science and Engineering

Fall Semester 2018 – 2019

14th November, 2018


Abstract

Real-time object detection and tracking is a vast, vibrant, yet inconclusive and
complex area of computer vision. Its increased utilization in surveillance,
security tracking systems, and many other applications has propelled
researchers to continuously devise more efficient and competitive algorithms.
However, problems emerge when implementing object detection and tracking in
real time: tracking in dynamic environments, the expensive computation needed
to achieve real-time performance, and multi-camera multi-object tracking all
make the task strenuously difficult. Although many methods and techniques have
been developed, this literature review discusses some well-known and basic
methods of object detection and tracking, together with their general
applications and results.

Introduction

Object detection is the process of finding instances of real-world objects such as
faces, bicycles, and buildings in images or videos. Object detection algorithms
typically use extracted features and learning algorithms to recognize instances of
an object category. It is commonly used in applications such as image retrieval,
security, surveillance, and advanced driver assistance systems (ADAS).

A color detection algorithm identifies pixels in an image that match a specified
color or color range. The color of the detected pixels can then be changed to
distinguish them from the rest of the image.

This project follows a workflow that takes a Simulink model of a color detection
algorithm from simulation to deployment on a device:

- Simulate the color detection model.
- Deploy the color detection model and use the Video Display block to show
  simulated data on the Android device.
- Modify the color detection model to use real-time images from the Camera
  Block.
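The pixel-matching step at the heart of this workflow can be sketched in a few lines. The following Python/NumPy snippet is an illustrative sketch of the idea (not the Simulink model itself): it marks every pixel whose RGB values fall inside a given range and repaints them to distinguish them from the rest of the image.

```python
import numpy as np

def highlight_color(img, low, high, mark=(255, 0, 0)):
    """Repaint every pixel whose RGB values fall inside [low, high]."""
    low, high = np.array(low), np.array(high)
    # Boolean mask: True where all three channels lie inside the range.
    mask = np.all((img >= low) & (img <= high), axis=-1)
    out = img.copy()
    out[mask] = mark  # recolor detected pixels to make them stand out
    return out, mask

# Tiny 2x2 test image: two reddish pixels, one green, one blue.
img = np.array([[[200, 30, 30], [10, 200, 10]],
                [[10, 10, 200], [220, 40, 20]]], dtype=np.uint8)
out, mask = highlight_color(img, low=(150, 0, 0), high=(255, 80, 80))
print(mask.sum())  # number of pixels matching the red range
```

The same mask can then drive the tracking stage, e.g. by taking the centroid of the matching pixel coordinates.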

Literature Survey
2013
December 2013
Real-time object detection and Tracking
Abstract:
Real-time object detection and tracking is a vast, vibrant, yet inconclusive and
complex area of computer vision. Its increased utilization in surveillance,
security tracking systems, and many other applications has propelled
researchers to continuously devise more efficient and competitive algorithms.
However, problems emerge when implementing object detection and tracking in
real time: tracking in dynamic environments, the expensive computation needed
to achieve real-time performance, and multi-camera multi-object tracking all
make the task strenuously difficult. Although many methods and techniques have
been developed, this literature review discusses some well-known and basic
methods of object detection and tracking, together with their general
applications and results.
http://ieeexplore.ieee.org/document/6731341/?reload=true
June 2013
To Detect and Track Moving Object for Surveillance System
Abstract: Recently, there has been increased need for and interest in "video analysis",
the analysis of a video sequence to determine relatively moving objects, such as
vehicles, and different behaviors of people. For example, this can be used in a CCTV
network to detect and track abnormal behavior of people or vehicles. The proposed
system can be applied in home and business surveillance to detect and track moving
objects, and also to differentiate whether the detected objects are vehicles or
human beings. A video surveillance system must detect and track moving objects
robustly against disturbances such as birds, trees, and environmental changes like
different weather conditions, so the proposed method uses color background
modeling with a sensitivity parameter (delta) to remove noise and to detect and
track moving objects easily. A feature extraction method is used in object
recognition, blob labelling is used for grouping moving objects, and morphological
operations such as dilation and erosion are used to remove noise. Finally, the
experiments show that the proposed method is robust against environmental
disturbances, with a speed suitable for real-time surveillance.
Keywords: surveillance, distributed system, background detection, morphology, group
tracking, Haar cascade feature extraction.
https://www.ijircce.com/upload/2013/june/23_To Detect.pdf
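The background-modeling idea with a sensitivity parameter can be sketched as follows. This is a hypothetical single-channel simplification (a static background frame and a threshold `delta`), not the paper's full color background model:

```python
import numpy as np

def moving_object_mask(frame, background, delta=25):
    """Mark pixels whose intensity differs from the background by more than delta."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > delta  # True where a moving object (or noise) is present

background = np.full((4, 4), 100, dtype=np.uint8)  # uniform static background
frame = background.copy()
frame[1:3, 1:3] = 180  # a small "object" enters the scene
mask = moving_object_mask(frame, background, delta=25)
print(mask.sum())  # -> 4 foreground pixels
```

Raising `delta` suppresses more noise (e.g. flicker from leaves) at the cost of missing low-contrast objects, which is exactly the sensitivity trade-off the abstract describes.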
2014
January 2014
Object Detection and Tracking in Real Time Video Based on Color
Abstract: In this paper we present a framework for efficient detection and tracking of an
object in real time using color. A tracking algorithm is developed to analyze whether the
motion of the object is chaotic or not. Automatic tracking of the object is done in several
stages: image capturing, image processing, time-series extraction, and analysis. We
propose a Euclidean filter, which has the advantage that it can be used on dynamic
images. We convert the color image to grayscale, because it is easier to process a gray
image in a single channel instead of multiple colors, and gray images require less
processing time. The paper also discusses the requirements of any searching task and
presents a fast method for detecting the presence of known multi-colored objects in a
scene.
Keywords: color-based detection, Euclidean filtering, gray scaling, contour tracking
http://www.ijerd.com/paper/vol10-issue6/Version_4/F1063337.pdf
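The Euclidean filter mentioned in this abstract can be read as thresholding the Euclidean distance between each pixel and a reference color; the sketch below is one assumed interpretation, with an illustrative distance threshold:

```python
import numpy as np

def euclidean_color_filter(img, ref, max_dist=60.0):
    """Keep pixels whose RGB Euclidean distance to the reference color is small."""
    diff = img.astype(np.float64) - np.array(ref, dtype=np.float64)
    dist = np.sqrt((diff ** 2).sum(axis=-1))  # per-pixel distance to ref color
    return dist < max_dist

img = np.array([[[250, 10, 10], [0, 255, 0]]], dtype=np.uint8)
mask = euclidean_color_filter(img, ref=(255, 0, 0))
print(mask.tolist())  # the near-red pixel matches, the green one does not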
June 2014
Multi-Target Tracking Using Color Information
Abstract: For security purposes, it is a prerequisite to track multiple targets efficiently.
Most current implementations use the Kalman filter and color information independently. The
proposed method combines an extended Kalman filter and color information for tracking
multiple objects under high occlusion. Tracking begins with object detection; the
background model used to segment foreground from background is a spatio-temporal
Gaussian mixture model (STGMM). Tracking then consists of two steps: independent
object tracking and occluded object tracking. For independent object tracking the method
exploits the extended Kalman filter, whereas for occluded object tracking, color
information is used. The system was tested in a real-world application and successful
results were obtained.
Keywords: EKF, multi-target tracking, STGMM, tracking using color information.
http://www.ijcce.org/papers/283-E008.pdf
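The role of the Kalman filter in such trackers is to predict each object's next position and correct it with the measurement. A minimal one-dimensional, constant-velocity linear Kalman filter (far simpler than the paper's EKF, with illustrative noise values) looks like this:

```python
import numpy as np

def kalman_step(x, P, z, q=1e-3, r=0.5, dt=1.0):
    """One predict/update cycle for state x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
    H = np.array([[1.0, 0.0]])             # we only measure position
    Q = q * np.eye(2)                      # process noise covariance
    R = np.array([[r]])                    # measurement noise covariance
    # Predict the next state from the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement z.
    y = z - (H @ x)                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track an object moving about one pixel per frame with noisy measurements.
x, P = np.array([0.0, 0.0]), np.eye(2)
for z in [1.0, 2.1, 2.9, 4.0, 5.05]:
    x, P = kalman_step(x, P, np.array([z]))
print(x[0])  # estimated position, close to the last measurements
```

When the object is occluded and no measurement arrives, only the predict step runs, which is where the color information in the paper takes over to re-associate the track.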
2015
April 2015
Real-time object detection and Tracking using color feature and motion
Abstract: This paper introduces a technique for automating the methodology of detecting
and tracking objects using color features and motion. Video tracking is the process of
locating a moving object over time using a camera. The main aim of video tracking is to
associate target objects in consecutive video frames. The association can be especially
troublesome when the objects are moving fast relative to the frame rate. Another situation
that increases the complexity of the problem is when the tracked object changes
orientation over time. For these circumstances, video tracking frameworks typically utilize
a motion model which describes how the image of the target may change for different
possible movements of the object. In this paper an algorithm is developed to track
real-time moving objects across different frames of a video using color features and
motion.
http://ieeexplore.ieee.org/document/7322705/
May 2015
BASIC GEOMETRIC SHAPE AND PRIMARY COLOUR DETECTION USING IMAGE
PROCESSING ON MATLAB
Abstract: This paper gives an approach to identifying basic geometric shapes and primary
RGB colors in a two-dimensional image using image processing techniques in MATLAB.
The basic shapes included are the square, circle, triangle, and rectangle. The algorithm
involves converting the RGB image to a grayscale image and then to a black-and-white
image, achieved by thresholding. The area of the minimum bounding rectangle is
calculated irrespective of the angle of rotation of the object; the ratio of this area to the
area of the object is computed and compared with predefined ratios to determine the
shape of the object. The dominant color pixels present help determine the color of the
object. Practical applications include reducing the manual labor used in industries to
segregate products and providing real-time vision to robots.
Keywords: MATLAB, RGB image, bounding rectangle, shape and color detection.
http://esatjournals.net/ijret/2015v04/i05/IJRET20150405094.pdf
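The area-ratio idea above has a simple geometric basis: for an ideal circle, the object area divided by the area of its minimum bounding rectangle is pi/4 (about 0.785), for a square or rectangle it is 1, and for a triangle it is about 0.5. A small sketch with illustrative tolerance values (the paper's exact thresholds are not given):

```python
import math

def classify_shape(object_area, bounding_rect_area):
    """Classify a shape by the ratio of its area to its minimum bounding rectangle."""
    ratio = object_area / bounding_rect_area
    if ratio > 0.9:
        return "square/rectangle"      # fills its bounding box almost completely
    if abs(ratio - math.pi / 4) < 0.08:
        return "circle"                # covers pi/4 of the box
    if abs(ratio - 0.5) < 0.08:
        return "triangle"              # covers half of the box
    return "unknown"

# A circle of radius 10 inside its 20x20 bounding box:
print(classify_shape(math.pi * 10**2, 20 * 20))  # -> circle
```

Because both areas scale the same way under rotation of the minimum bounding rectangle, the ratio is rotation-invariant, which is the property the abstract relies on.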
2016
Real-time detection and tracking for moving objects based on
computer vision method.
Abstract: In this paper, we propose a method for establishing 3D images and tracking
objects in a planar space. The system is automated. First, dynamic environment images
are grabbed from two CCDs and digitalized by the image grabber; digital image
processing techniques then turn them into 3D environment images. The next step is to
recognize objects and locate them respectively, which achieves the function of object
tracking. Finally, the results of both simulations and practical experiments demonstrate
that the proposed method is feasible.
Keywords: object tracking, image color analysis, digital images, gray-scale, image
recognition, computer vision
http://ieeexplore.ieee.org/abstract/document/7935072/?reload=true
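Recovering 3D positions from two cameras rests on the standard stereo relation depth = focal length x baseline / disparity. A sketch under assumed calibration values (the paper's actual camera setup is not specified here):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by two parallel cameras, from its pixel disparity."""
    return focal_px * baseline_m / disparity_px

# Assumed calibration: 800 px focal length, 10 cm baseline between the CCDs.
# A point with 40 px disparity then lies 2 m from the cameras.
print(stereo_depth(800, 0.10, 40))  # -> 2.0
```

Points that are closer produce larger disparity between the two CCD images, so locating the same object in both views is enough to estimate its 3D position.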
Prediction of Object Detection, Recognition and Identification [DRI] Ranges at Color
Scene Images, Based on Quantifying Human Color Contrast Perception
Abstract: We propose a novel approach to predict, for a specified color imaging system and
for objects with known characteristics, their detection, recognition, and identification (DRI)
ranges in a colored dynamic scene, based on quantifying human color contrast perception.
The method uses the well-established L*a*b* 3D color space, whose nonlinear relations are
intended to mimic the nonlinear response of the human eye. The metric of the L*a*b* color
space is such that the Euclidean distance between any two colors in this space is
approximately proportional to the color contrast as perceived by the human eye and brain.
A consequence of this metric is that the color contrast of any two points is always greater
than or equal to their equivalent gray-scale contrast. This matches our sense that, looking
at a colored image, the contrast is superior to the gray-scale contrast of the same image.
Yet color loss by scattering at very long ranges should be considered as well.
The color contrast derived from the distance between the colored object pixels and the
nearby colored background pixels, as computed from the L*a*b* color space metric, is
expressed in terms of gray-scale contrast; this contrast replaces the original standard
gray-scale contrast component of the image. As expected, the resulting DRI ranges are, in
most cases, larger than those predicted by the standard gray-scale image. Upon further
elaboration and validation, this method may be combined with future versions of the
well-accepted TRM codes for DRI predictions. Consistent prediction of DRI ranges implies
a careful evaluation of the object and background color contrast reduction along the range.
Clearly, additional processing to reconstruct the true colors of the objects and background,
and hence the color contrast along the range, will further increase the DRI ranges.
Keywords: color contrast, color perception, DRI ranges, L*a*b* color space
https://www.researchgate.net/profile/Ephi_Pinsky/publication/309278159_Prediction_of_object_detection_recognition_and_identification_DRI_ranges_at_color_scene_images_based_on_quantifying_human_color_contrast_perception/links/581680ea08aedc7d896762
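The contrast computation described in the abstract can be sketched in Python. The sRGB-to-L*a*b* conversion below uses the standard D65 formulas, and the contrast is the CIE76 delta-E, i.e. the Euclidean distance in L*a*b* space:

```python
def srgb_to_lab(r, g, b):
    """Convert an sRGB color (0-255 per channel) to CIE L*a*b* under D65."""
    # 1. Undo the sRGB gamma curve.
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # 2. Linear RGB -> XYZ (standard sRGB matrix).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # 3. XYZ -> L*a*b* relative to the D65 white point.
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e76(c1, c2):
    """CIE76 color contrast: Euclidean distance in L*a*b* space."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

# Contrast between a red object pixel and a green background pixel.
contrast = delta_e76(srgb_to_lab(255, 0, 0), srgb_to_lab(0, 128, 0))
print(round(contrast, 1))
```

Because the gray-scale contrast depends only on the L* difference, while delta-E adds the a* and b* terms, the color contrast is always greater than or equal to its gray-scale equivalent, exactly as the abstract states.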

Implementation
MAIN
clc
clear all
close all

% Set up the acquisition object.
% old - vidobj = videoinput('winvideo');
% old - start(vidobj)
cam = webcam(1);
previous_center = 0;
numFrames = 250;
tic

for i = 1:numFrames
    snap = snapshot(cam);
    % Mirror the frame horizontally so on-screen motion matches hand motion.
    snap = flip(snap, 2);
    colored_pixels = scan_for_color(snap, 1);
    [m, n] = size(colored_pixels);
    if m < 5
        % Too few matching pixels: treat the object as absent.
        imshow(snap);
        previous_center = 0;
        hold on
        t = 'NO RED OBJECT IN FRAME';
        text(640, 360, t, 'HorizontalAlignment', 'center', 'Color', 'red');
        hold on
    else
        % colored_pixels  % remove semicolon for debugging
        previous_center = mark_color(snap, colored_pixels, previous_center);
    end
end

elapsedTime = toc

% Compute the time per frame and effective frame rate.
timePerFrame = elapsedTime / numFrames
effectiveFrameRate = 1 / timePerFrame

% Release the camera (old - stop(vidobj)).
clear cam

MARK COLOR
clc
clear all
close all

% Offline variant of MAIN: track the colored object in a recorded video
% instead of the live webcam feed.
% old - vidobj = videoinput('winvideo');
% old - start(vidobj)
% cam = webcam(1);
previous_center = 0;
numFrames = 0;
tic
v = VideoReader('/Users/akshaygugale/Downloads/video1.mp4');
while hasFrame(v)
    snap = readFrame(v);
    numFrames = numFrames + 1;
    colored_pixels = scan_for_color(snap, 1);
    [m, n] = size(colored_pixels);
    if m < 5
        % Too few matching pixels: treat the object as absent.
        imshow(snap);
        previous_center = 0;
        hold on
        t = 'NO RED OBJECT IN FRAME';
        text(640, 360, t, 'HorizontalAlignment', 'center', 'Color', 'red');
        hold on
    else
        imshow(snap);
        % colored_pixels  % remove semicolon for debugging
        previous_center = mark_color(snap, colored_pixels, previous_center);
    end
end

elapsedTime = toc

% Compute the time per frame and effective frame rate.
timePerFrame = elapsedTime / numFrames
effectiveFrameRate = 1 / timePerFrame

SCAN
% updated with choosing colors R, G, B as color_option
% color_option: 1 --> RED, 2 --> GREEN, 3 --> BLUE
function [co] = scan_for_color(i1, color_option)
%%
% Each row is [from_H to_H above_S above_V]; every color gets two rows so
% that hues which wrap around 0 (such as red) can use two ranges.
hsv_R1 = [0.944 1 0.75 0.30];
hsv_R2 = [0 0.0222 0.75 0.30];
hsv_G1 = [0.222 0.416 0.5 0.30]; % temporary values; find actual values
hsv_G2 = [0 0 0 0];
hsv_B1 = [0.4722 0.75 0.75 0.30];
hsv_B2 = [0 0 0 0];
colors = [hsv_R1; hsv_R2; hsv_G1; hsv_G2; hsv_B1; hsv_B2];
% Each color occupies two consecutive rows of the table.
r = 2*color_option - 1;
[m, n, o] = size(i1);
% Convert to HSV values.
HSV = rgb2hsv(i1);
H = HSV(:,:,1);
S = HSV(:,:,2);
V = HSV(:,:,3);
%%
% co --> list of colored points
co = [];
p = 1;
%%
for k = 1:m
    for l = 1:n
        if (H(k,l) > colors(r,1) && H(k,l) <= colors(r,2) && ...
                S(k,l) > colors(r,3) && V(k,l) >= colors(r,4))
            % Add the point to the list of matching pixels.
            co(p,1) = l;
            co(p,2) = k;
            p = p + 1;
        elseif (H(k,l) > colors(r+1,1) && H(k,l) <= colors(r+1,2) && ...
                S(k,l) > colors(r+1,3) && V(k,l) >= colors(r+1,4))
            co(p,1) = l;
            co(p,2) = k;
            p = p + 1;
        end
    end
end

end
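For reference, the same red-pixel scan can be expressed as a short Python port using the standard library's colorsys module. It applies the same thresholds (two hue ranges to handle the wrap-around at 0); the only assumed change is an inclusive lower bound on hue so that a pure-red hue of exactly 0 is matched:

```python
import colorsys
import numpy as np

# HSV thresholds for red, as in scan_for_color: (from_H, to_H, above_S, above_V).
RED_RANGES = [(0.944, 1.0, 0.75, 0.30), (0.0, 0.0222, 0.75, 0.30)]

def scan_for_red(img):
    """Return (column, row) coordinates of pixels whose HSV values are red."""
    h, w, _ = img.shape
    points = []
    for k in range(h):
        for l in range(w):
            r, g, b = img[k, l] / 255.0
            hh, ss, vv = colorsys.rgb_to_hsv(r, g, b)
            for lo, hi, s_min, v_min in RED_RANGES:
                # Inclusive lower bound so a pure-red hue of exactly 0 matches.
                if lo <= hh <= hi and ss > s_min and vv >= v_min:
                    points.append((l, k))  # (column, row), as in the MATLAB code
                    break
    return points

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (255, 0, 0)   # pure red -> detected
img[1, 1] = (0, 255, 0)   # green   -> ignored
print(scan_for_red(img))  # -> [(0, 0)]
```

Note that with the MATLAB code's strict `H > 0` bound, a pixel whose hue is exactly 0 would fall through both ranges; the inclusive bound here avoids that edge case.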

Output
cam =
webcam with properties:
Name: 'FaceTime HD Camera'
Resolution: '1280x720'
AvailableResolutions: {'1280x720'}
elapsedTime =
25.0439
timePerFrame =
1.2522
effectiveFrameRate =
0.7986

Fig 1: When no RED object is present in the frame of the camera

Fig 2: When a single RED object is present in the frame of the camera

Fig 3: When a single RED object is present and has changed its position with respect to the last frame
a blue line indicates the shift in the position

Fig 4: Testing tracking using a different object

Limitations:
- The color range values must be entered manually.
- When two objects of the target color are in the frame at once, the system
  cannot distinguish between them and treats them as one.
- The tracking is only between two consecutive frames and is limited to linear
  tracking.

Future Applications:
- The tracking can be used to detect signals, track cars, etc., in driverless
  cars.
- Integrating the movement tracking across frames with a mouse input driver
  can provide a hands-free way of input.
- It can be used in industries to place objects (screws, bolts, packages, etc.)
  at specific places that are marked by color.

Conclusion

In this project, an object tracking system based on color detection was designed
and implemented in MATLAB. Frames captured from a webcam (or a recorded video)
are converted to the HSV color space, pixels falling within the configured hue,
saturation, and value ranges are collected, and the center of the detected region
is marked in each frame; a line drawn between the centers of consecutive frames
indicates the shift in the object's position. The experiments show that a single
red object can be detected and tracked in real time, with an effective frame rate
of about 0.8 frames per second on the test machine.

The system can be improved in several directions: the color ranges, which
currently must be entered manually, could be calibrated automatically; labelling
of connected regions could separate multiple objects of the same color; and a
motion model could extend the tracking beyond the linear, two-frame case. With
these extensions, the approach could support the applications discussed above,
such as driver assistance, hands-free input, and color-guided placement of parts
in industry.
