IVA Lab Manual for Image Analytics

The document is a lab manual for the Image and Video Analytics Laboratory for the academic year 2024-2025, outlining the vision, mission, program outcomes, and specific experiments to be conducted. It includes detailed algorithms and example programs for various image processing tasks such as T-pyramid computation, quad tree representation, geometric transformations, object detection, and facial recognition. The manual serves as a guide for students to document their practical work and is intended for submission during university practical examinations.

Uploaded by

sanjayvmyth
Copyright
© All Rights Reserved

DEPARTMENT OF

ELECTRONICS AND COMMUNICATION ENGINEERING

ACADEMIC YEAR: 2024 - 2025

ODD SEMESTER

LAB MANUAL
(REGULATION-2021)

CCS349 – IMAGE AND VIDEO ANALYTICS LABORATORY


RECORD NOTEBOOK

REGISTER NUMBER .……………………..……….

Certified that this is a bonafide record of practical work done by Mr./Ms. ................................

of the ............................................ Semester………………………………………... Branch during the

Academic year ….......................................... in the……………………………………………………….

Laboratory.

Submitted for the University Practical Examination held on……………….

STAFF-IN-CHARGE HEAD OF THE DEPARTMENT

INTERNAL EXAMINER EXTERNAL EXAMINER


Vision and Mission of the Institute

VISION

To develop globally competitive Electronics and Communication engineers to solve real-time problems in industry and society.

MISSION

● To provide solid fundamental knowledge and technical skills through effective teaching and learning methodologies
● To provide a conducive environment through collaborations with industry and academia
● To inculcate learning of emerging technologies leading to lifelong learning
● To enable students to imbibe ethical and enterprising characteristics to become socially
responsible engineers

PROGRAM OUTCOMES (POs)

PO1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems.

PO2. Problem analysis: Identify, formulate, review research literature, and analyze
complex engineering problems reaching substantiated conclusions using first
principles of mathematics, natural sciences, and engineering sciences.

PO3. Design/development of solutions: Design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for the public health and safety, and the cultural, societal, and environmental considerations.

PO4. Conduct investigations of complex problems: Use research-based knowledge and research methods including design of experiments, analysis and interpretation of data, and synthesis of the information to provide valid conclusions.

PO5. Modern tool usage: Create, select, and apply appropriate techniques, resources,
and modern engineering and IT tools including prediction and modeling to complex
engineering activities with an understanding of the limitations.

PO6. The engineer and society: Apply reasoning informed by the contextual
knowledge to assess societal, health, safety, legal and cultural issues and the
consequent responsibilities relevant to the professional engineering practice.

PO7. Environment and sustainability: Understand the impact of the professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for, sustainable development.

PO8. Ethics: Apply ethical principles and commit to professional ethics and
responsibilities and norms of the engineering practice.
PO9. Individual and team work: Function effectively as an individual, and as a
member or leader in diverse teams, and in multidisciplinary settings.

PO10. Communication: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.

PO11. Project management and finance: Demonstrate knowledge and understanding of the engineering and management principles and apply these to one's own work, as a member and leader in a team, to manage projects and in multidisciplinary environments.

PO12. Life-long learning: Recognize the need for, and have the preparation and ability
to engage in independent and life-long learning in the broadest context of
technological change.

Program Specific Outcomes (PSOs)

PSO1: Design and test modern electronic systems by adapting emerging technologies.

PSO2: Design and formulate solutions for industrial requirements using communication, networking, signal processing techniques, embedded systems and VLSI techniques.

PSO3: Develop solutions required in multidisciplinary engineering fields.

Program Educational Outcomes (PEOs)

PEO 1: Technical Expertise: Attain professional careers and personal development in industry / higher studies / research / entrepreneurship.

PEO 2: Life-long learning: Sustain to develop their knowledge and skills throughout their career.

PEO 3: Ethical Knowledge: Exhibit professionalism, ethical attitude, communication skills, teamwork
and adaptation to current trends.
LIST OF EXPERIMENTS

SL.NO   DATE   NAME OF EXPERIMENT                                      PAGE NO   MARKS   SIGN

1              T-Pyramid of an image

2              Quad Tree

3              Geometric Transformation of image

4              Object Detection and Recognition

5              Facial Detection and Recognition

6              Vehicle Counting and Tracking in a video

7              AI Face, Body and Hand Pose Estimation with MediaPipe

Exp No:1
Write a program that computes the T-pyramid of an image
Date:

Aim:
To write a program that computes the T-pyramid of an image

Algorithm:
Step 1: Import the necessary libraries: cv2 for image processing and cv2_imshow (or matplotlib.pyplot) for display.
Step 2: Read the input image in grayscale using cv2.imread.
Step 3: Initialize the pyramid with the original image as level 0.
Step 4: Repeatedly apply cv2.pyrDown to halve the resolution, appending each result as the next pyramid level.
Step 5: Display every level of the pyramid.
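The pyramid construction above can also be sketched without OpenCV. A minimal NumPy-only version, where a 2x2 block average stands in for cv2.pyrDown (which additionally applies Gaussian smoothing before subsampling):

```python
import numpy as np

def halve(img):
    """Downsample a grayscale image by averaging non-overlapping 2x2 blocks."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2   # trim odd edges
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def t_pyramid(img, levels):
    """Return [level 0 (full resolution), level 1, ...] with `levels` entries."""
    pyramid = [img]
    for _ in range(levels - 1):
        img = halve(img)
        pyramid.append(img)
    return pyramid

img = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 "image"
pyr = t_pyramid(img, 3)
print([p.shape for p in pyr])   # each level is half the previous size
```

Running this on the toy 8x8 array produces levels of shape (8, 8), (4, 4) and (2, 2), mirroring what the OpenCV program below does with a real image.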

Program:
import cv2
import numpy as np
from google.colab.patches import cv2_imshow

def compute_t_pyramid(image_path, levels):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    if img is None:
        print(f"Error: Unable to load image from {image_path}")
        return

    cv2_imshow(img)

    pyramid = [img]
    for i in range(levels - 1):
        img = cv2.pyrDown(img)
        pyramid.append(img)
        cv2_imshow(img)

    return pyramid

# Example usage
image_path = '/content/input.jpg'   # replace with the path to your image
pyramid_levels = 6
result_t_pyramid = compute_t_pyramid(image_path, pyramid_levels)
Output:

Result:

Thus the program that computes the T-pyramid of an image has been executed and verified successfully.
Exp No:2
Write a program that derives the quad tree representation of an image using the
Date: homogeneity criterion of equal intensity.

Aim:
To write a program that derives the quad tree representation of an image using the Homogeneity criterion
of equal intensity.

Algorithm:
Step 1: Read and Split the Image
Step 2: Concatenate the Quadrants
Step 3: Check Image Uniformity and Construct Quad tree
Step 4: Calculate Mean Colors
Step 5: Create and Display Quad tree
Step 6: End of Algorithm
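The homogeneity test in Step 3 is the heart of the quad tree: a block becomes a leaf only if every pixel has the same value, otherwise it splits into four quadrants. A tiny self-contained sketch that computes the resulting tree depth on a toy array:

```python
import numpy as np

def quadtree_depth(img):
    """Depth of the quad tree for a square 2^n x 2^n array under the
    equal-intensity homogeneity criterion: a block is a leaf iff all its
    pixels share one value, otherwise it splits into four quadrants."""
    if (img == img[0, 0]).all():
        return 1                                  # homogeneous block -> leaf
    h, w = img.shape[0] // 2, img.shape[1] // 2
    quadrants = [img[:h, :w], img[:h, w:], img[h:, :w], img[h:, w:]]
    return 1 + max(quadtree_depth(q) for q in quadrants)

flat = np.zeros((4, 4), dtype=int)    # uniform image -> a single leaf
mixed = flat.copy()
mixed[3, 3] = 9                       # one odd pixel forces splits down to 1x1
print(quadtree_depth(flat), quadtree_depth(mixed))   # 1 3
```

A single non-homogeneous pixel forces the tree to subdivide all the way down around it, which is exactly why the full program below renders progressively finer detail as the requested level increases.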

Program:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np

img = mpimg.imread('/content/scenery_1.jpg')
plt.imshow(img)
plt.show()

from operator import add
from functools import reduce

def split4(image):
    half_split = np.array_split(image, 2)
    res = map(lambda x: np.array_split(x, 2, axis=1), half_split)
    return reduce(add, res)

split_img = split4(img)
split_img[0].shape

def concatenate4(north_west, north_east, south_west, south_east):
    top = np.concatenate((north_west, north_east), axis=1)
    bottom = np.concatenate((south_west, south_east), axis=1)
    return np.concatenate((top, bottom), axis=0)

full_img = concatenate4(split_img[0], split_img[1], split_img[2], split_img[3])

plt.imshow(full_img)
plt.show()

def checkEqual(myList):
    first = myList[0]
    return all((x == first).all() for x in myList)

class QuadTree:

    def insert(self, img, level=0):
        self.level = level
        self.mean = calculate_mean(img).astype(int)
        self.resolution = (img.shape[0], img.shape[1])
        self.final = True

        if not checkEqual(img):
            split_img = split4(img)

            self.final = False
            self.north_west = QuadTree().insert(split_img[0], level + 1)
            self.north_east = QuadTree().insert(split_img[1], level + 1)
            self.south_west = QuadTree().insert(split_img[2], level + 1)
            self.south_east = QuadTree().insert(split_img[3], level + 1)

        return self

    def get_image(self, level):
        if self.final or self.level == level:
            return np.tile(self.mean, (self.resolution[0], self.resolution[1], 1))

        return concatenate4(
            self.north_west.get_image(level),
            self.north_east.get_image(level),
            self.south_west.get_image(level),
            self.south_east.get_image(level))

def calculate_mean(img):
    return np.mean(img, axis=(0, 1))

means = np.array(list(map(lambda x: calculate_mean(x), split_img))).astype(int).reshape(2, 2, 3)
print(means)
plt.imshow(means)
plt.show()

quadtree = QuadTree().insert(img)

plt.imshow(quadtree.get_image(1))
plt.show()
plt.imshow(quadtree.get_image(3))
plt.show()
plt.imshow(quadtree.get_image(7))
plt.show()
plt.imshow(quadtree.get_image(10))
plt.show()
Output:

Result:

Thus the program that derives the quad tree representation of an image using the homogeneity criterion
of equal intensity has been executed and verified successfully.
Exp No: 03 Develop programs for the following geometric transforms: (a) Rotation (b)
Change of scale (c) Skewing (d) Affine transform calculated from three pairs
Date: of corresponding points (e) Bilinear transform calculated from four pairs of
corresponding points.

Aim:
To write the python program for the geometric transforms: (a) Rotation (b) Change of scale (c)
Skewing (d) Affine transform calculated from three pairs of corresponding points (e) Bilinear transform
calculated from four pairs of corresponding points.
Algorithm:
Step 1: Import the necessary libraries: cv2 for image processing and matplotlib.pyplot for visualization.
Step 2: Read the input image using cv2.imread.
Step 3: Build the required transformation: cv2.getRotationMatrix2D for rotation, cv2.resize for change of scale and skewing, cv2.getAffineTransform from three pairs of corresponding points, and cv2.getPerspectiveTransform from four pairs of corresponding points.
Step 4: Apply the transform with cv2.warpAffine or cv2.warpPerspective.
Step 5: Display the original and transformed images.
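An affine transform from three point pairs is just a small linear system: each pair constrains the 2x3 matrix M so that [x', y'] = M @ [x, y, 1]. A NumPy-only sketch of what cv2.getAffineTransform solves, using the same point pairs as the affine example below:

```python
import numpy as np

def affine_from_3pts(src, dst):
    """Solve for the 2x3 affine matrix M with [x', y'] = M @ [x, y, 1],
    given three source/destination point pairs."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((3, 1))])   # 3x3 system: rows are [x, y, 1]
    M = np.linalg.solve(A, dst).T           # solve once per output coordinate
    return M                                # shape (2, 3)

src = [[50, 50], [200, 50], [50, 200]]
dst = [[10, 100], [200, 50], [100, 250]]
M = affine_from_3pts(src, dst)
# The matrix maps each source point exactly onto its destination:
print(M @ np.array([50, 50, 1.0]))   # ~ [10, 100]
```

Three non-collinear pairs determine the six unknowns of M exactly; four pairs would over-determine an affine map, which is why the bilinear/perspective case needs the 3x3 projective matrix instead.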

Program:
a) Rotation
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt
image = cv.imread("/content/input.jpg")   # replace with your image path
h, w = image.shape[:2]
rotation_matrix = cv.getRotationMatrix2D((w / 2, h / 2), -180, 0.5)
rotated_image = cv.warpAffine(image, rotation_matrix, (w, h))
plt.imshow(cv.cvtColor(rotated_image, cv.COLOR_BGR2RGB))
plt.title("Rotation")
plt.show()
Output:
b) Change of scale
import cv2
from google.colab.patches import cv2_imshow
img = cv2.imread('/content/input.jpg')   # replace with your image path
res = cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_CUBIC)
cv2_imshow(res)
Output:

c) Skewing
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt
image = cv.imread("/content/input.jpg")   # replace with your image path
fig, ax = plt.subplots(1, 3, figsize=(16, 8))
# resize with unequal target dimensions to skew the aspect ratio
image_scaled_3 = cv.resize(image, (200, 400), interpolation=cv.INTER_AREA)
ax[2].imshow(cv.cvtColor(image_scaled_3, cv.COLOR_BGR2RGB))
ax[2].set_title("Skewed Interpolation Scale")
plt.show()

Output:
d) Affine transformation
import cv2
import numpy as np
from google.colab.patches import cv2_imshow
img = cv2.imread('/content/input.jpg')   # replace with your image path
rows, cols, ch = img.shape
pts1 = np.float32([[50, 50], [200, 50], [50, 200]])
pts2 = np.float32([[10, 100], [200, 50], [100, 250]])
M = cv2.getAffineTransform(pts1, pts2)
dst = cv2.warpAffine(img, M, (cols, rows))
cv2_imshow(dst)

Output:

e) Bilinear transform

import numpy as np
import cv2
from google.colab.patches import cv2_imshow

def bilinear_transform(src_points, dest_points, img):
    # Calculate the transform matrix from four pairs of corresponding points
    M = cv2.getPerspectiveTransform(src_points, dest_points)

    # Apply the transform to the image
    warped_img = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))

    return warped_img

# Load an example image
img = cv2.imread('/content/input.jpg')   # replace with your image path

# Example corresponding points (the four corners of the source image)
src_points = np.float32([[0, 0], [img.shape[1] - 1, 0], [img.shape[1] - 1, img.shape[0] - 1], [0, img.shape[0] - 1]])
dest_points = np.float32([[0, 0], [300, 0], [300, 300], [0, 300]])

# Perform the transform
result_bilinear = bilinear_transform(src_points, dest_points, img)

# Display the result
cv2_imshow(result_bilinear)

Output:

Result:
Thus the programs for the following geometric transforms: (a) Rotation (b) Change of scale (c) Skewing (d) Affine transform calculated from three pairs of corresponding points (e) Bilinear transform calculated from four pairs of corresponding points have been executed and verified successfully.
Exp No: 04
Develop a program to implement Object Detection and Recognition
Date:

Aim:
To develop a program to implement Object Detection and Recognition.
Algorithm:
Step 1: Install Required Libraries: Ensure necessary libraries like OpenCV, TensorFlow, or PyTorch are
installed.
Step 2: Load Pre-trained Model: Utilize a pre-trained model (e.g., YOLO, SSD, or Faster R-CNN) for object
detection.
Step 3: Capture or Load Image/Video: Obtain input by capturing from a camera or loading an image/video
file.
Step 4: Object Detection: Use the pre-trained model to detect objects in the input.
Step 5: Display Results: Show the original input alongside the detected objects, possibly with labeled class
names and confidence scores.
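Detectors such as those in Step 2 suppress duplicate boxes using Intersection-over-Union (IoU) between candidate detections. A minimal, library-free sketch of the computation (the [x1, y1, x2, y2] corner format is an assumption, though it is the convention YOLO-style tools use):

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping by half share a third of their union
print(iou([0, 0, 10, 10], [5, 0, 15, 10]))   # 0.333...
```

During non-maximum suppression, any box whose IoU with a higher-confidence box exceeds a threshold (commonly around 0.45) is discarded, which is what keeps the output to one box per object.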

Program:
import torch
# Check PyTorch GPU availability
if torch.cuda.is_available():
    print(f"GPU Name: {torch.cuda.get_device_name(0)}")
    print(f"GPU Is Available: {torch.cuda.is_available()}")
else:
    print("GPU is not available.")

from google.colab import drive
drive.mount('/content/gdrive')

%cd /content/gdrive/MyDrive/YOLOv9

!git clone https://github.com/WongKinYiu/yolov9.git

%cd yolov9

!pip install -r requirements.txt

# Download pre-trained weights (release URLs and asset names may change; check the repository's releases page)
!wget -P /mydrive/yolov9 https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-c.pt
!wget -P /mydrive/yolov9 https://github.com/WongKinYiu/yolov9/releases/download/v0.1/gelan-c.pt

# Replace image1.jpg / image2.jpg with your own test images
!python detect.py --weights /mydrive/yolov9/yolov9-c.pt --source /content/gdrive/MyDrive/YOLOv9/yolov9/image1.jpg --device 0

!python detect.py --weights /mydrive/yolov9/gelan-c.pt --source /content/gdrive/MyDrive/YOLOv9/yolov9/image2.jpg --device 0

from IPython.display import Image
Image(filename="/content/gdrive/MyDrive/YOLOv9/yolov9/runs/detect/exp2/image1.jpg", width=1000)
Image(filename="/content/gdrive/MyDrive/YOLOv9/yolov9/runs/detect/exp3/image2.jpg", width=1000)

Output:

Result:
Thus the program to implement Object Detection and Recognition has been executed and verified successfully.
Exp No: 05 Develop a program for Facial Detection and Recognition

Date:

Aim:
To develop a program for Facial Detection and Recognition
Algorithm:
Step 1: Import the necessary libraries: face_recognition, numpy, PIL.Image, PIL.ImageDraw, and IPython.display.
Step 2: Load sample images of known individuals and learn their face encodings.
Step 3: Create arrays of known face encodings and their corresponding names.
Step 4: Load an image with unknown faces.
Step 5: Find all the faces and face encodings in the unknown image using face_recognition.face_locations
and face_recognition.face_encodings.
Step 6: Compare the unknown face encodings with the known face encodings to identify matches.
Step 7: Draw bounding boxes around the identified faces and display the result using the [Link]
module.
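Steps 5 and 6 reduce to nearest-neighbour matching in encoding space. A toy sketch with made-up 3-D "encodings" (real face_recognition encodings are 128-D) that mirrors the face_distance-then-threshold logic used in the program below:

```python
import numpy as np

# Toy stand-ins for learned 128-D face encodings
known_encodings = np.array([[0.1, 0.2, 0.3],    # "Bezos"
                            [0.9, 0.8, 0.7]])   # "Elon Musk"
known_names = ["Bezos", "Elon Musk"]

def identify(encoding, tolerance=0.6):
    """Return the name of the closest known encoding, or 'Unknown' if
    even the best match is farther than `tolerance` (the default
    threshold face_recognition.compare_faces uses)."""
    distances = np.linalg.norm(known_encodings - encoding, axis=1)
    best = int(np.argmin(distances))
    return known_names[best] if distances[best] <= tolerance else "Unknown"

print(identify(np.array([0.12, 0.18, 0.31])))   # close to the first vector
print(identify(np.array([5.0, 5.0, 5.0])))      # far from everything
```

The first query lands well inside the tolerance of the first known encoding and is labelled "Bezos"; the second exceeds the tolerance for every known face and falls back to "Unknown".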

Program:
from PIL import Image, ImageDraw
from IPython.display import display

# The program will be finding faces in the example below
pil_im = Image.open('/content/Bezos and Elon Musk.jpg')
display(pil_im)

!pip install face_recognition

import face_recognition
import numpy as np
from PIL import Image, ImageDraw
from IPython.display import display

# This is an example of running face recognition on a single image
# and drawing a box around each person that was identified.

# Load a sample picture and learn how to recognize it.
obama_image = face_recognition.load_image_file("/content/Bezos.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]

# Load a second sample picture and learn how to recognize it.
biden_image = face_recognition.load_image_file("/content/Elon Musk.jpg")
biden_face_encoding = face_recognition.face_encodings(biden_image)[0]

# Create arrays of known face encodings and their names
known_face_encodings = [
    obama_face_encoding,
    biden_face_encoding
]
known_face_names = [
    "Bezos",
    "Elon Musk"
]
print('Learned encoding for', len(known_face_encodings), 'images.')

len(face_recognition.face_encodings(biden_image)[0])

# Load an image with an unknown face
unknown_image = face_recognition.load_image_file("/content/Bezos and Elon Musk.jpg")

# Find all the faces and face encodings in the unknown image
face_locations = face_recognition.face_locations(unknown_image)
face_encodings = face_recognition.face_encodings(unknown_image, face_locations)

# Convert the image to a PIL-format image so that we can draw on top of it with the Pillow library
pil_image = Image.fromarray(unknown_image)
# Create a Pillow ImageDraw Draw instance to draw with
draw = ImageDraw.Draw(pil_image)

# Loop through each face found in the unknown image
for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
    # See if the face is a match for the known face(s)
    matches = face_recognition.compare_faces(known_face_encodings, face_encoding)

    name = "Unknown"

    # Or instead, use the known face with the smallest distance to the new face
    face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
    best_match_index = np.argmin(face_distances)
    if matches[best_match_index]:
        name = known_face_names[best_match_index]

    # Draw a box around the face using the Pillow module
    draw.rectangle(((left, top), (right, bottom)), outline=(0, 0, 255))

    # Draw a label with a name below the face
    text_width, text_height = draw.textsize(name)
    draw.rectangle(((left, bottom - text_height - 10), (right, bottom)), fill=(0, 0, 255), outline=(0, 0, 255))
    draw.text((left + 6, bottom - text_height - 5), name, fill=(255, 255, 255, 255))

# Remove the drawing library from memory as per the Pillow docs
del draw

# Display the resulting image
display(pil_image)
Output:

Result:
Thus the program for Facial Detection and Recognition has been executed and verified successfully.
Exp no: 06 Develop a program for Vehicle Counting and Tracking in a video

Date:

Aim :
To Develop a program for Vehicle Counting and Tracking in a video.

Algorithm:

Step 1: Apply image preprocessing techniques to enhance the video frames before feeding them into the detector.
Step 2: Use a pre-trained model like YOLO to detect vehicles in each frame.
Step 3: Implement an object tracking algorithm like ByteTrack to track the detected vehicles across frames.
Step 4: Define a virtual counting line (LineZone) and check whether the bounding boxes of detected vehicles intersect with this line.
Step 5: Increment the vehicle count when a vehicle crosses the line from one side to the other, ensuring that each vehicle is counted only once.
Step 6: Apply filters to remove false positives or duplicates in the count.
Step 7: Display the count on the video frames or store the data for further analysis.
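The LineZone logic of Steps 4 and 5 boils down to watching which side of the counting line each tracked centroid is on and counting sign changes, once per track ID. A minimal sketch for a horizontal counting line (the track IDs and centroid trajectories are made up):

```python
def side(y, line_y):
    """-1 if the centroid is above the counting line, +1 if below."""
    return -1 if y < line_y else 1

def count_crossings(tracks, line_y):
    """tracks: {track_id: [y0, y1, ...]} centroid y-coordinates per frame.
    A vehicle is counted once, when its centroid first changes side."""
    count, counted = 0, set()
    for tid, ys in tracks.items():
        for prev, cur in zip(ys, ys[1:]):
            if tid not in counted and side(prev, line_y) != side(cur, line_y):
                count += 1
                counted.add(tid)        # each vehicle counted only once
    return count

# Track 1 crosses downward, track 3 crosses upward, track 2 never crosses
tracks = {1: [100, 140, 160], 2: [90, 95, 99], 3: [200, 140, 100]}
print(count_crossings(tracks, line_y=150))   # 2
```

Keying the `counted` set on the tracker ID is what prevents a vehicle that lingers near the line from being counted on every frame, which is the role ByteTrack's persistent IDs play in the full program.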

Program:

import ultralytics
ultralytics.checks()

from ultralytics import YOLO

from tqdm.notebook import tqdm
# supervision exports these at the top level; module paths vary by version
from supervision import Point
from supervision import VideoSink, VideoInfo
from supervision import plot_image
from supervision import Detections
from supervision import BoxAnnotator
from supervision import Color, ColorPalette
from supervision import ByteTrack
from supervision import get_video_frames_generator
from supervision import LineZone, LineZoneAnnotator

BASE_DIR = '/content'   # adjust to your working directory
%cd {BASE_DIR}
# Download the sample video from Google Drive
!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1pz68D1Gsx80MoPg-_q-IbEdESEmyVLm-' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1pz68D1Gsx80MoPg-_q-IbEdESEmyVLm-" -O vehicle-counting.mp4 && rm -rf /tmp/cookies.txt

video_path = f"{BASE_DIR}/vehicle-counting.mp4"

model = YOLO("yolov8x.pt")   # any pre-trained YOLO detection checkpoint

model.fuse()

label_map = model.model.names

!yolo task=detect mode=predict source={video_path}

byte_tracker = ByteTrack()
result_path = f"{BASE_DIR}/vehicle-counting_result1.mp4"

LINE_START = Point(50, 1500)
LINE_END = Point(3840 - 50, 1500)

line_counter = LineZone(start=LINE_START, end=LINE_END)
line_annotator = LineZoneAnnotator(thickness=4, text_thickness=4, text_scale=2)

generator = get_video_frames_generator(video_path)

box_annotator = BoxAnnotator(color=ColorPalette.default(), thickness=4, text_thickness=4, text_scale=2)

video_info = VideoInfo.from_video_path(video_path)
with VideoSink(result_path, video_info) as sink:
    for frame in tqdm(generator, total=video_info.total_frames):

        results = model(frame)[0]
        detections = Detections.from_ultralytics(results)

        detections = byte_tracker.update_with_detections(detections=detections)
        labels = [f"{label_map[class_id]} {confidence:0.2f} -track_id:{tracker_id}"
                  for _, _, confidence, class_id, tracker_id in detections]

        line_counter.trigger(detections=detections)

        annotated_frame = box_annotator.annotate(scene=frame, detections=detections, labels=labels)
        line_annotator.annotate(frame=annotated_frame, line_counter=line_counter)

        sink.write_frame(annotated_frame)
        # show_frame_in_notebook(annotated_frame, (10, 10))
Output:

Result:
Thus the program for Vehicle Counting and Tracking in a video has been executed and verified successfully.
Exp no: 07 AI Face Body and Hand Pose Estimation With MediaPipe

Date:

Aim:
To write a program AI Face Body and Hand Pose Estimation With MediaPipe.

Algorithm:

Step 1: Set up the MediaPipe Holistic model with the desired confidence levels for detection and tracking.

Step 2: Use OpenCV to capture the video feed from the webcam.

Step 3: Convert the frame from BGR to RGB color space as MediaPipe requires.

Step 4: Process the frame with the Holistic model to get the landmarks.

Step 5: Use the drawing utilities provided by MediaPipe to draw the landmarks on the frame.

Step 6: Show the frame with the drawn landmarks in a window

Step 7: Allow the user to exit the loop and close the application by pressing a specific key (e.g., ‘q’)
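MediaPipe returns landmarks with x and y normalized to [0, 1] relative to the frame, so drawing them (Step 5) requires mapping back to pixel coordinates, which the drawing utilities do internally. A one-function sketch with a hypothetical landmark value:

```python
def to_pixel(x_norm, y_norm, frame_w, frame_h):
    """Map a MediaPipe normalized landmark (x, y in [0, 1]) to pixel coords."""
    return int(x_norm * frame_w), int(y_norm * frame_h)

# Hypothetical landmark near the upper centre of a 640x480 webcam frame
px, py = to_pixel(0.5, 0.25, 640, 480)
print(px, py)   # 320 120
```

Keeping landmarks normalized makes the model output independent of capture resolution; the same landmark maps correctly whether the webcam delivers 640x480 or 1920x1080 frames.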

Program:

import mediapipe as mp
import cv2

mp_drawing = mp.solutions.drawing_utils
mp_holistic = mp.solutions.holistic
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow('Raw Webcam Feed', frame)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5) as holistic:

    while cap.isOpened():
        ret, frame = cap.read()

        image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

        results = holistic.process(image)
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

        # Draw face landmarks
        mp_drawing.draw_landmarks(image, results.face_landmarks,
                                  mp_holistic.FACEMESH_TESSELATION)

        # Right hand
        mp_drawing.draw_landmarks(image, results.right_hand_landmarks,
                                  mp_holistic.HAND_CONNECTIONS)

        # Left Hand
        mp_drawing.draw_landmarks(image, results.left_hand_landmarks,
                                  mp_holistic.HAND_CONNECTIONS)

        # Pose Detections
        mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_holistic.POSE_CONNECTIONS)

        cv2.imshow('Raw Webcam Feed', image)

        if cv2.waitKey(10) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()

mp_holistic.POSE_CONNECTIONS
mp_drawing.DrawingSpec(color=(0, 0, 255), thickness=2, circle_radius=2)
mp_drawing.draw_landmarks?

cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ret, frame = cap.read()

        image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

        results = holistic.process(image)

        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

        # 1. Draw face landmarks
        mp_drawing.draw_landmarks(image, results.face_landmarks,
                                  mp_holistic.FACEMESH_TESSELATION,
                                  mp_drawing.DrawingSpec(color=(80, 110, 10), thickness=1, circle_radius=1),
                                  mp_drawing.DrawingSpec(color=(80, 256, 121), thickness=1, circle_radius=1)
                                  )
        # 2. Right hand
        mp_drawing.draw_landmarks(image, results.right_hand_landmarks,
                                  mp_holistic.HAND_CONNECTIONS,
                                  mp_drawing.DrawingSpec(color=(92, 22, 10), thickness=2, circle_radius=4),
                                  mp_drawing.DrawingSpec(color=(92, 44, 121), thickness=2, circle_radius=2)
                                  )

        # 3. Left Hand
        mp_drawing.draw_landmarks(image, results.left_hand_landmarks,
                                  mp_holistic.HAND_CONNECTIONS,
                                  mp_drawing.DrawingSpec(color=(135, 42, 88), thickness=2, circle_radius=4),
                                  mp_drawing.DrawingSpec(color=(135, 67, 241), thickness=2, circle_radius=2)
                                  )

        # 4. Pose Detections
        mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_holistic.POSE_CONNECTIONS,
                                  mp_drawing.DrawingSpec(color=(226, 133, 71), thickness=2, circle_radius=4),
                                  mp_drawing.DrawingSpec(color=(224, 77, 220), thickness=2, circle_radius=2)
                                  )
        cv2.imshow('Raw Webcam Feed', image)

        if cv2.waitKey(10) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()

Output:

1. Left hand and Face detection


2. Right hand

3. Pose Detections

Result:
Thus the program for AI Face, Body and Hand Pose Estimation with MediaPipe has been executed and verified successfully.
