
CN111260928B - Method and device for detecting a vehicle failing to yield to pedestrians - Google Patents

Method and device for detecting a vehicle failing to yield to pedestrians

Info

Publication number
CN111260928B
CN111260928B (application CN201811450320.1A)
Authority
CN
China
Prior art keywords
target
evidence
pedestrian
graph
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811450320.1A
Other languages
Chinese (zh)
Other versions
CN111260928A (en)
Inventor
张伟良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201811450320.1A priority Critical patent/CN111260928B/en
Publication of CN111260928A publication Critical patent/CN111260928A/en
Application granted granted Critical
Publication of CN111260928B publication Critical patent/CN111260928B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of traffic monitoring, and provides a method and a device for detecting a vehicle that fails to yield to pedestrians. The method comprises the following steps: when a vehicle is detected driving into a first preset area, a first evidence graph and a second evidence graph are obtained; a first network model graph is generated from the first evidence graph, and target positioning is performed on it to obtain a first coordinate set; a second network model graph is generated from the second evidence graph, and target positioning is performed on it to obtain a second coordinate set; target traveling information for the target pedestrian and the target vehicle is calculated from the first and second coordinate sets; and the target traveling information is compared with preset traveling information, the target vehicle being judged to have failed to yield to the pedestrian when the target traveling information matches the preset traveling information. Compared with the prior art, the invention reduces the workload of law enforcement personnel and achieves a better enforcement effect.

Description

Method and device for detecting a vehicle failing to yield to pedestrians
Technical Field
The embodiment of the invention relates to the technical field of traffic monitoring, in particular to a method and a device for detecting a vehicle that fails to yield to pedestrians.
Background
With the continuous improvement of living standards, the number of motor vehicles in use increases year by year, the conflicts among people, vehicles and roads become increasingly prominent, and the need for orderly participation in traffic is urgent. According to the road traffic safety law, pedestrians have priority in the zebra crossing area, and motor vehicles driving nearby must stop and give way to them. Unfortunately, in reality many vehicles neither decelerate nor give way in front of the zebra crossing, and the resulting conflict between vehicles and pedestrians easily causes traffic accidents.
In the prior art, to deter vehicles from failing to yield to pedestrians, traffic police have adopted various measures, such as on-site enforcement, three-dimensional zebra crossings and speed bumps, but all of these consume manpower and material resources, increase the workload of law enforcement personnel, and enforce the rule poorly.
Disclosure of Invention
The embodiment of the invention aims to provide a method and a device for detecting a vehicle that fails to yield to pedestrians, so as to solve the problems in the prior art that the workload of law enforcement personnel is too heavy and the enforcement effect is poor.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
In a first aspect, an embodiment of the present invention provides a method for detecting a vehicle failing to yield to pedestrians, including: when it is detected that a vehicle enters a first preset area, obtaining a first evidence graph and a second evidence graph, both of which comprise a target pedestrian and a target vehicle; generating a first network model graph from the first evidence graph and performing target positioning on it to obtain a first coordinate set, wherein the first coordinate set comprises the target vehicle coordinates and target pedestrian coordinates in the first evidence graph; generating a second network model graph from the second evidence graph and performing target positioning on it to obtain a second coordinate set, wherein the second coordinate set comprises the target vehicle coordinates and target pedestrian coordinates in the second evidence graph; calculating target traveling information of the target pedestrian and the target vehicle from the first and second coordinate sets; and comparing the target traveling information with preset traveling information, and judging that the target vehicle has failed to yield to the pedestrian when the target traveling information matches the preset traveling information.
In a second aspect, an embodiment of the present invention provides a device for detecting a vehicle failing to yield to pedestrians, including: an evidence graph acquisition module, configured to obtain a first evidence graph and a second evidence graph when a vehicle is detected driving into a first preset area, both graphs comprising a target pedestrian and a target vehicle; a first coordinate set extraction module, configured to generate a first network model graph from the first evidence graph and perform target positioning on it to obtain a first coordinate set comprising the target vehicle coordinates and target pedestrian coordinates in the first evidence graph; a second coordinate set extraction module, configured to generate a second network model graph from the second evidence graph and perform target positioning on it to obtain a second coordinate set comprising the target vehicle coordinates and target pedestrian coordinates in the second evidence graph; a traveling information calculation module, configured to calculate target traveling information of the target pedestrian and the target vehicle from the first and second coordinate sets; and a failure-to-yield judging module, configured to compare the target traveling information with preset traveling information and judge that the target vehicle has failed to yield to the pedestrian when the two match.
Compared with the prior art, the method and device for detecting a vehicle failing to yield to pedestrians provided by the embodiment of the invention first acquire, when the vehicle drives into the first preset area, a first evidence graph and a second evidence graph that comprise a target pedestrian and a target vehicle; then locate the coordinates of the target vehicle and the target pedestrian in the first evidence graph through the first network model graph, and their coordinates in the second evidence graph through the second network model graph; and finally derive the target traveling information of the target pedestrian and the target vehicle from these two sets of coordinates, judging that the target vehicle has failed to yield to the pedestrian when the target traveling information matches the preset traveling information. Compared with the prior art, this method can quickly and accurately detect whether a vehicle fails to yield to pedestrians, reducing the workload of law enforcement personnel and achieving a better enforcement effect.
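The overall flow above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the `Box` layout, the displacement-based travel information, and the threshold in `fails_to_yield` are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """A detection frame in evidence-map coordinates: top-left corner plus size."""
    x: float
    y: float
    w: float
    h: float

    def center(self):
        return (self.x + self.w / 2, self.y + self.h / 2)

def travel_info(first_set, second_set):
    """Displacement of the target pedestrian and target vehicle between the
    two evidence maps. Each set maps 'pedestrian'/'vehicle' to a Box."""
    info = {}
    for key in ("pedestrian", "vehicle"):
        (x1, y1) = first_set[key].center()
        (x2, y2) = second_set[key].center()
        info[key] = (x2 - x1, y2 - y1)
    return info

def fails_to_yield(info, min_vehicle_advance=5.0, pedestrian_on_crossing=True):
    """Illustrative decision rule: the vehicle keeps advancing while a
    pedestrian is on the crossing. The threshold is a hypothetical value."""
    dx, dy = info["vehicle"]
    return pedestrian_on_crossing and (dx**2 + dy**2) ** 0.5 > min_vehicle_advance
```

In this sketch the "preset traveling information" is reduced to a single advance threshold; the patent leaves the exact comparison open.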
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from them without inventive effort.
Fig. 1 is a diagram showing a connection relationship of an image pickup apparatus according to an embodiment of the present invention.
Fig. 2 is a block diagram schematically illustrating an image pickup apparatus according to an embodiment of the present invention.
Fig. 3 is a flowchart illustrating a method for detecting a vehicle failing to yield to pedestrians according to an embodiment of the present invention.
Fig. 4 shows a schematic structural diagram of a shooting area of a camera provided by an embodiment of the present invention.
Fig. 5 shows an example of a first evidence graph provided by an embodiment of the present invention.
Fig. 6 shows an example of a first network model diagram provided by an embodiment of the present invention.
Fig. 7 is a flowchart illustrating sub-steps of step S2 shown in fig. 3.
Fig. 8 is a flowchart illustrating sub-steps of sub-step S21 shown in fig. 7.
Fig. 9 shows stitching combinations in which the first scaled map occupies 1/4 of the model map, according to an embodiment of the present invention.

Fig. 10 shows stitching combinations in which the first scaled map occupies more than 1/4 of the model map, according to an embodiment of the present invention.
Fig. 11 shows an example of a first network model diagram including a coordinate system according to an embodiment of the present invention.
Fig. 12 is a schematic diagram illustrating a splicing of a first network model diagram according to an embodiment of the present invention.
Fig. 13 is a flowchart illustrating sub-steps of sub-step S24 shown in fig. 7.
Fig. 14 is a flowchart illustrating sub-steps of step S4 shown in fig. 3.
Fig. 15 is a block diagram schematically illustrating a device for detecting a vehicle failing to yield to pedestrians according to an embodiment of the present invention.
Fig. 16 is a block diagram illustrating a first coordinate set extraction module according to an embodiment of the present invention.
Fig. 17 is a block diagram illustrating a travel information calculation module according to an embodiment of the present invention.
Icon: 100-camera device; 101-processor; 102-memory; 103-bus; 104-communication interface; 105-display screen; 106-camera; 200-failure-to-yield detection device; 201-evidence graph acquisition module; 202-first coordinate set extraction module; 221-first network graph extraction unit; 222-first target extraction unit; 223-network coordinate set acquisition unit; 224-first coordinate set location unit; 203-second coordinate set extraction module; 204-traveling information calculation module; 241-first calculation unit; 242-second calculation unit; 243-third calculation unit; 205-failure-to-yield judging module; 206-failure-to-yield evidence acquisition module; 300-server.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, the method for detecting a vehicle failing to yield to pedestrians according to the embodiment of the present invention is applied to the camera device 100, and the camera device 100 is in communication connection with the server 300.
Referring to fig. 2, fig. 2 is a block diagram illustrating a camera device 100 according to an embodiment of the present invention, where the camera device 100 includes a processor 101, a memory 102, a bus 103, a communication interface 104, a display screen 105, and a camera 106. The processor 101, the memory 102, the communication interface 104, the display 105 and the camera 106 are connected by the bus 103, and the processor 101 is configured to execute an executable module, such as a computer program, stored in the memory 102.
The processor 101 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the failure-to-yield detection method may be performed by integrated hardware logic circuits or software instructions in the processor 101. The processor 101 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The Memory 102 may comprise a high-speed Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory.
The bus 103 may be an ISA (Industry Standard architecture) bus, a PCI (peripheral Component interconnect) bus, an EISA (extended Industry Standard architecture) bus, or the like. Only one bi-directional arrow is shown in fig. 2, but this does not indicate only one bus 103 or one type of bus 103.
The camera device 100 establishes a communication connection with the server 300 through at least one communication interface 104 (which may be wired or wireless). The memory 102 is used to store programs such as the failure-to-yield detection device 200, which includes at least one software functional module that may be stored in the memory 102 in the form of software or firmware, or solidified in the Operating System (OS) of the camera device 100. Upon receiving the execution instruction, the processor 101 executes the program to implement the failure-to-yield detection method.
The display screen 105 is used to display an image, which may be the result of some processing by the processor 101. The display screen 105 may be a touch display screen, a display screen without interactive functionality, or the like. The display screen 105 can display the first evidence graph, the second evidence graph, the first network model graph and the second network model graph.
The camera 106 is used to take pictures and send them to the processor 101 for processing via the bus 103 or to the memory 102 for storage.
First embodiment
Referring to fig. 3, fig. 3 is a flowchart illustrating a method for detecting a vehicle failing to yield to pedestrians according to an embodiment of the invention. The method comprises the following steps:
step S1, when it is detected that a vehicle enters a first preset area, a first evidence graph and a second evidence graph are obtained, wherein the first evidence graph and the second evidence graph both comprise a target pedestrian and a target vehicle.
Referring to fig. 4, the first preset area may be the area before the zebra crossing within the shooting area of the camera 106. Referring to fig. 5, the first and second evidence graphs may each be a picture containing the target vehicle, the target pedestrian, the zebra crossing area, other vehicles and other pedestrians. The target vehicle may be a motor vehicle such as an automobile, tram, electric bicycle or motorcycle, and the target pedestrian may be a pedestrian located in or near the zebra crossing area; the other vehicles and pedestrians in the two evidence graphs may be the same or different, which is not limited in this embodiment. As one embodiment, taking into account factors such as the target vehicle stopping halfway, the best snapshot point for the second evidence graph is when the target vehicle fully occupies the zebra crossing area. Both evidence graphs can be captured in real time by the camera 106.
Step S2, generating a first network model map according to the first evidence map, and performing target positioning on the first network model map to obtain a first coordinate set, where the first coordinate set includes coordinates of a target vehicle and coordinates of a target pedestrian in the first evidence map.
Referring to fig. 6, the first network model map may be an image obtained by scaling, segmenting and stitching the first evidence map, and the first coordinate set contains the coordinates of the target vehicle and the target pedestrian in the first evidence map. Generating the first network model map and performing target positioning on it can be understood as follows: the first evidence map is enlarged or reduced, the required parts are segmented and then stitched to obtain the first network model map; the first network model map is input into a trained neural network for target detection, which detects the target pedestrian and the target vehicle and determines their coordinates in the first network model map; those coordinates are then mapped back into the first evidence map to obtain the first coordinate set.
The following explanation will be given by taking the first evidence map including the target vehicle and the target pedestrian (pedestrian a) as an example.
The first evidence map comprises the target vehicle and a target pedestrian (pedestrian A) who is in the zebra crossing region. Taking the first evidence map as the original image, it is scaled to obtain two scaled maps of different sizes; the zebra crossing region is segmented out of the larger map and stitched with the smaller map to form the first network model map. After target detection is performed on the first network model map, detection frames for pedestrian A and the target vehicle are obtained. The coordinates of pedestrian A in the first network model map, determined from the larger map, can be represented as (15, 3, 1, 2), where (15, 3) is the upper-left corner of the detection frame and (1, 2) is its length and width; likewise, the target vehicle can be represented as (5, 3, 1, 1), where (5, 3) is the upper-left corner of the detection frame and (1, 1) is its length and width. The coordinates of pedestrian A and of the target vehicle are then located back into the first evidence map to obtain the first coordinate set.
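Locating a detection frame back into the evidence map amounts to undoing the paste offset applied when the tile was stitched into the model map, restoring the crop offset of the tile within the scaled map, and multiplying back by the preset scale. A minimal sketch; the parameter names are illustrative, not the patent's notation:

```python
def locate_back(box, scale_x, scale_y, crop_offset=(0, 0), paste_offset=(0, 0)):
    """Map a detection frame (x, y, w, h) from the network model map back
    into the evidence map:
        evidence = (model - paste_offset + crop_offset) * scale
    paste_offset: where the tile containing the box was pasted in the model map.
    crop_offset:  where that tile was cut from inside the scaled evidence map.
    scale_x/y:    the preset scaling factors (a, b) used for that tile."""
    x, y, w, h = box
    ex = (x - paste_offset[0] + crop_offset[0]) * scale_x
    ey = (y - paste_offset[1] + crop_offset[1]) * scale_y
    return (ex, ey, w * scale_x, h * scale_y)
```

For a box found inside the smaller map A1 (no crop, no paste offset), only the scale factors apply; for a box found inside B0_1 or B0_2, both offsets matter.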
Referring to fig. 7, step S2 may further include the following sub-steps:
and a substep S21, processing the first evidence graph according to preset parameters to obtain a first network model graph.
In the embodiment of the present invention, the preset parameters may include a first preset ratio and a second preset ratio, the first preset ratio may be a ratio set by a user for performing size scaling on the first evidence graph, the second preset ratio may be a ratio set by the user for performing size scaling on the first evidence graph, and the first preset ratio and the second preset ratio may be different ratios, for example, the first preset ratio may be a1 and b1, and the second preset ratio may be a0 and b 0. The first evidence graph includes a first preset region and a second preset region, where the second preset region may be a zebra crossing region and a region behind the zebra crossing region in the shooting region of the camera 106. The step of processing the first evidence graph according to the preset parameters to obtain the first network model graph can be understood as performing first zooming on the first evidence graph to obtain a first zoomed graph, performing second zooming on the first evidence graph to obtain a second zoomed graph, segmenting an image containing the zebra crossing region in the second zoomed graph, and splicing the segmented image with the first zoomed graph to obtain the first network model graph.
Referring to fig. 8, the sub-step S21 may further include the following sub-steps:
and a substep S211, scaling the first evidence graph according to a first preset proportion to obtain a first scaled graph.
In the embodiment of the present invention, the first scaled map may be a picture obtained by reducing or enlarging the first evidence map according to a first preset scale. Since the pictures taken by different cameras 106 differ in size, in order to generate a first network model map with fixed length and width, the first evidence map is reduced according to the first preset scale when it is large and enlarged according to that scale when it is small. Specifically, taking reduction as an example: if the first preset scale is (a1, b1) and the image size of the first evidence map is M × N, the size of the obtained first scaled map is (M/a1) × (N/b1).
And a substep S212, scaling the first evidence graph according to a second preset proportion to obtain a second scaled graph.
In this embodiment of the present invention, the second scaled map may be a picture obtained by reducing or enlarging the first evidence map according to a second preset scale. Since the pictures taken by different cameras 106 differ in size, in order to generate a first network model map with fixed length and width, the first evidence map is reduced according to the second preset scale when it is large and enlarged according to that scale when it is small. Specifically, taking reduction as an example: if the second preset scale is (a0, b0) and the image size of the first evidence map is M × N, the size of the obtained second scaled map is (M/a0) × (N/b0).
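The size arithmetic in sub-steps S211 and S212 is identical and can be expressed in one helper. The (M, N)/(a, b) tuple convention here is just for illustration:

```python
def scaled_size(evidence_size, ratio):
    """Size of a scaled map given the evidence-map size (M, N) and a
    preset ratio (a, b): the scaled map is (M/a) x (N/b). A ratio
    component > 1 reduces the image; < 1 enlarges it."""
    (M, N), (a, b) = evidence_size, ratio
    return (M / a, N / b)
```

For example, a 1920 × 1080 evidence map with ratio (2, 2) yields a 960 × 540 first scaled map, while a smaller ratio (a0 < a1) yields the larger second scaled map.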
It should be noted that the size of the second scaling diagram is larger than that of the first scaling diagram, and in other embodiments of the present invention, the execution order of the sub-step S211 and the sub-step S212 may be exchanged, or the sub-step S211 and the sub-step S212 may be executed simultaneously.
And a substep S213, segmenting the second zoom map according to the zebra crossing region and the first zoom map to obtain a plurality of segmentation maps.
In the embodiment of the present invention, the segmentation map may be an image obtained by segmenting the second zoom map at least once. The step of obtaining a plurality of segmentation maps by segmenting the second zoom map according to the zebra crossing regions and the first zoom map may be understood as first obtaining a first segmentation map including the zebra crossing regions and a second segmentation map not including the zebra crossing regions by segmenting the second zoom map according to the zebra crossing regions, and then obtaining a first sub-segmentation map and a second sub-segmentation map by segmenting the first segmentation map according to the first zoom map, where a length of the first zoom map is equal to a difference between a length of the first sub-segmentation map and a length of the second sub-segmentation map. The plurality of segmentation maps may be composed of a first sub-segmentation map, a second sub-segmentation map and a second segmentation map of the first segmentation map.
And a substep S214 of screening out at least one target segmentation map containing the zebra crossing region from the plurality of segmentation maps, and splicing the at least one target segmentation map with the first zoom map to obtain a first network model map.
In the embodiment of the present invention, a target segmentation map is a segmentation map containing the zebra crossing region. Screening at least one target segmentation map out of the plurality of segmentation maps can be understood as filtering the segmentation maps obtained in sub-step S213 for those containing the zebra crossing region; for example, if the first and second sub-segmentation maps both contain zebra crossing regions, at least one target segmentation map is obtained. Since the length and width of the first network model map are fixed, the at least one target segmentation map and the first scaled map must be stitched sensibly so that together they form a model map of fixed size. For example, with the first scaled map A1, the first sub-segmentation map B0_1 and the second sub-segmentation map B0_2: if the first scaled map occupies 1/4 of the model map, the stitching combinations shown in fig. 9 are available; if it occupies more than 1/4, the combinations shown in fig. 10 are available. In the resulting first network model map, the zebra crossing regions contained in the two sub-segmentation maps are clearer, so pedestrians in those regions are easier to identify.
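One concrete 1/4 stitching combination, the one walked through at the end of this section (A1 in the top-left quarter, B0_1 in the top-right quarter, B0_2 across the lower half), can be written down as paste offsets inside the m × n model map. A minimal sketch; the dictionary keys and integer pixel offsets are illustrative:

```python
def quarter_layout(m, n):
    """Paste offsets (left, top) for the 1/4 stitching combination:
    A1   (m/2 x n/2) at the top-left,
    B0_1 (m/2 x n/2) at the top-right,
    B0_2 (m   x n/2) across the lower half of the m x n model map."""
    return {
        "A1":   (0, 0),
        "B0_1": (m // 2, 0),
        "B0_2": (0, n // 2),
    }
```

These offsets are exactly what a `locate_back`-style inverse mapping would subtract before rescaling a detection frame into evidence-map coordinates.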
A complete splicing process is described below. It is only one splicing manner of the embodiment of the present invention; the present invention may also adopt other splicing manners, which are not limited herein.
The size of the first evidence graph is M × N, the size of the first network model graph is m × n, the first preset ratio is (a1, b1), the second preset ratio is (a0, b0), the size of the first scaled map A1 is (M/a1) × (N/b1), and the size of the second scaled map A0 is (M/a0) × (N/b0). The size of the first scaled map A1 is smaller than that of the second scaled map A0; when the first scaled map occupies 1/4 of the first network model graph, a1 = 2M/m and b1 = 2N/n.
Referring to fig. 11 and 12, a coordinate system with the top left vertex of the image as the origin of coordinates is established.
The first network model map m × n is divided into 4 parts. To ensure that the spliced content exactly fills m × n, the length of the second scaled map A0 must not exceed 3 times the length of the first scaled map A1, i.e. M/a0 ≤ 3 × M/a1, so a0 ≥ a1/3 = 2M/(3m); and since a0/b0 = a1/b1, b0 = a0 × b1/a1. It can be understood that the smaller a0 is, the larger the second scaled map A0 is, and the larger the corresponding target pedestrians appear, which benefits detection.
The zebra crossing region in the second scaled map A0 is segmented, and the splicing with the first scaled map A1 is completed. Assuming the coordinates of the lower-left corner of the zebra crossing region in the second scaled map A0 are (0, y0), the height of the cut is kept consistent with the height of the image A1, i.e. N/b1 = n/2, and the cut image is B0. Since the zebra crossing region B0 occupies 3/4 of the spliced image, it needs to be cut into two parts B0_1 and B0_2: the part of B0 of width m/2, taken from left to right, fills the upper-right region, and the part of B0 of width m, taken from right to left, fills the lower half. The left and right cut filling areas of B0 may overlap, in which case a fusion operation of the detection frames needs to be performed on the overlapping areas.
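The cutting-and-filling layout just described can be sketched as a set of rectangles. The function name and the (x, y, w, h) tuple convention are illustrative assumptions, not part of the embodiment:

```python
def stitch_layout(m, n):
    """Rectangles (x, y, w, h) of the first network model map (m x n)
    for the 1/4-ratio splicing: the first scaled map A1 fills the
    top-left quarter, B0_1 (left cut of the zebra strip) the top-right
    quarter, and B0_2 (right cut of the zebra strip) the lower half."""
    a1 = (0, 0, m // 2, n // 2)        # first scaled map A1
    b0_1 = (m // 2, 0, m // 2, n // 2)  # upper-right region
    b0_2 = (0, n // 2, m, n // 2)       # lower half
    return a1, b0_1, b0_2
```

The three rectangles tile the m × n model map exactly, which is why the constraint on a0 derived above is needed.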
And a substep S22, performing target detection on the first network model diagram to obtain a first target in the first network model diagram, wherein the first target comprises a target vehicle and a target pedestrian.
In an embodiment of the present invention, the first target may be the target vehicle and the target pedestrian in the first network model map. The first network model map may be input into a preset convolutional neural network for target detection, or target detection may be performed through a deformable part model, or other target detection manners may be used, which is not limited herein. Performing target detection on the first network model map to obtain the first target may be understood as inputting the first network model map into a target detection model to obtain the target vehicle and the target pedestrian in the first network model map.
In addition, the target detection of the target pedestrian is performed in at least one target segmentation map, and the target detection of the target vehicle is performed in the first zoom map. The detection of different targets (target pedestrians and target vehicles) is carried out in different areas in the first network model graph, so that the detection efficiency can be improved, the target pedestrians are detected in a clear target segmentation graph, the detection rate can be improved, and the time required by target detection is reduced.
And a substep S23, obtaining coordinates of the first target in the first network model map to obtain a first network coordinate set.
In an embodiment of the present invention, the first network coordinate set may be the coordinates of the target vehicle and the target pedestrian in the first network model map. The first target has corresponding coordinates in the first network model map. Since the first target includes the target vehicle and the target pedestrian, each of them occupies a rectangular region in the first network model map, and to locate that rectangular region accurately, the coordinates can be represented by one vertex of the rectangular region plus its length and width. For example, the coordinates of the target vehicle may be (x, y, w, h), where x is the abscissa of the top-left vertex of the rectangular region, y is the ordinate of that vertex, w is the length of the rectangular region, and h is the width of the rectangular region. The coordinates of the target pedestrian and the target vehicle in the first network model map are acquired to form the first network coordinate set.
And a substep S24, positioning the first network coordinate set into the first evidence graph according to preset parameters to obtain a first coordinate set.
In the embodiment of the present invention, the preset parameter may be a first preset ratio and a second preset ratio. The first set of coordinates may be coordinates of the target vehicle and the target pedestrian in the first evidence map. After the first network coordinate set in the first network model map is obtained in sub-step S23, the coordinates need to be recovered and located back in the first evidence map for subsequent comparison. Positioning the first network coordinate set into the first evidence graph according to preset parameters to obtain a first coordinate set, wherein firstly, a target vehicle coordinate in the first evidence graph is obtained according to the target vehicle coordinate in the first network model graph and a first preset proportion; and then, obtaining the target pedestrian coordinate in the first evidence graph according to the target pedestrian coordinate in the first network model graph and a second preset proportion, wherein the target vehicle coordinate and the target pedestrian coordinate form a first coordinate set.
Referring to fig. 13, the sub-step S24 may further include the following sub-steps:
and a substep S241 of obtaining the coordinates of the target vehicle in the first evidence map according to the coordinates of the target vehicle in the first network model map and a first preset proportion.
In the embodiment of the present invention, the target detection of the target vehicle in the first network model map is performed in the first scaled map, and the first scaled map is obtained by scaling the first evidence graph according to the first preset ratio, so the coordinates of the target vehicle in the first evidence graph can be obtained from its coordinates in the first network model map and the first preset ratio. For example, the first network model map is shown in fig. 11 and 12, the coordinates of the target vehicle in the first network model map are (x1, y1, w1, h1), the first preset ratio is (a1, b1), and the coordinates of the target vehicle in the first evidence graph are (X1, Y1, W1, H1). Then, according to formula 1: X1 = x1 × a1, Y1 = y1 × b1, W1 = w1 × a1, H1 = h1 × b1, the coordinates of the target vehicle in the first evidence graph may be calculated.
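Formula 1 amounts to a per-axis rescaling, which can be sketched as follows (the function name is an assumption for illustration):

```python
def vehicle_to_evidence(box, a1, b1):
    """Map a vehicle box (x1, y1, w1, h1) detected in the first scaled
    map back into the first evidence graph per formula 1: horizontal
    quantities are multiplied by a1, vertical quantities by b1."""
    x1, y1, w1, h1 = box
    return (x1 * a1, y1 * b1, w1 * a1, h1 * b1)
```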
And a substep S242, obtaining a target pedestrian coordinate in the first evidence graph according to the target pedestrian coordinate in the first network model graph and a second preset proportion, wherein the target vehicle coordinate and the target pedestrian coordinate form a first coordinate set.
In the embodiment of the present invention, the target detection of the target pedestrian in the first network model map is performed in at least one target segmentation map, and the at least one target segmentation map is obtained by scaling the first evidence map according to the second preset proportion, so that the target pedestrian coordinate in the first evidence map can be obtained according to the target pedestrian coordinate in the first network model map and the second preset proportion.
For example, the size of the first network model map is m × n, the size of the first evidence graph is M × N, the coordinates of the target pedestrian in the first network model map are (x2, y2, w2, h2), the second preset ratio is (a0, b0), and the coordinates of the target pedestrian in the first evidence graph are (X2, Y2, W2, H2). If the target pedestrian is detected in B0_2, then according to formula 2: X2 = (x2 - m/2) × a0, Y2 = (y2 + y0 - n/2) × b0, W2 = w2 × a0, H2 = h2 × b0; if the target pedestrian is detected in B0_1, then according to formula 3: X2 = (x2 + N/b0 - m) × a0, Y2 = (y2 - n + y0) × b0, W2 = w2 × a0, H2 = h2 × b0. In this way the coordinates of the target pedestrian in the first evidence graph are calculated.
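Formulas 2 and 3 can be sketched together as one helper; the function name and the flag selecting between the B0_1 and B0_2 cases are assumptions for illustration:

```python
def pedestrian_to_evidence(box, a0, b0, m, n, N, y0, in_b0_2):
    """Map a pedestrian box (x2, y2, w2, h2) from the first network
    model map back into the first evidence graph.  Formula 2 applies
    to detections in the lower half (B0_2), formula 3 to detections
    in the upper-right cut (B0_1)."""
    x2, y2, w2, h2 = box
    if in_b0_2:  # formula 2
        X2 = (x2 - m / 2) * a0
        Y2 = (y2 + y0 - n / 2) * b0
    else:        # formula 3
        X2 = (x2 + N / b0 - m) * a0
        Y2 = (y2 - n + y0) * b0
    return (X2, Y2, w2 * a0, h2 * b0)
```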
It should be noted that the coordinates of the target vehicle and the coordinates of the target pedestrian form a first coordinate set, and when the coordinates of the target pedestrian in the first evidence graph and the coordinates of the target vehicle in the first evidence graph are obtained, the first coordinate set is obtained.
And step S3, generating a second network model diagram according to the second evidence diagram, and performing target positioning on the second network model diagram to obtain a second coordinate set, wherein the second coordinate set comprises the target vehicle coordinates and the target pedestrian coordinates in the second evidence diagram.
In the embodiment of the present invention, the second network model map may be an image obtained by performing multi-scale scaling, segmentation, and splicing on the second evidence graph, and the second coordinate set may be the coordinates of the target vehicle and the target pedestrian in the second evidence graph. Generating the second network model map according to the second evidence graph and performing target positioning on it to obtain the second coordinate set may be understood as follows: first, the second evidence graph is subjected to multi-scale scaling, segmentation, and splicing according to the preset parameters to generate the second network model map; secondly, target detection is performed on the second network model map to obtain the target vehicle and the target pedestrian in it, namely the second target; then, the coordinates of the second target in the second network model map are obtained to form a second network coordinate set; finally, the second network coordinate set is located back into the second evidence graph according to the preset parameters to obtain the second coordinate set. The specific method of obtaining the second coordinate set is the same as that of obtaining the first coordinate set, and is not described here again.
And step S4, calculating the first coordinate set and the second coordinate set to obtain target traveling information of the target pedestrian and the target vehicle.
In the embodiment of the present invention, the target traveling information may include a distance between the target vehicle and the target pedestrian, a traveling direction of the target pedestrian, and a traveling direction of the target vehicle. The first coordinate set comprises a target vehicle coordinate and a target pedestrian coordinate in the first evidence image, the second coordinate set comprises a target vehicle coordinate and a target pedestrian coordinate in the second evidence image, and the traveling information of the target pedestrian and the target vehicle can be obtained through the first coordinate set and the second coordinate set. Calculating the coordinates of the target vehicle and the coordinates of the target pedestrian in the second evidence graph to obtain the distance between the target vehicle and the target pedestrian; calculating the coordinates of the target pedestrian in the first evidence image and the coordinates of the target pedestrian in the second evidence image to obtain the advancing direction of the target pedestrian; and calculating the coordinates of the target vehicle in the first evidence graph and the coordinates of the target vehicle in the second evidence graph to obtain the traveling direction of the target vehicle. It should be noted that, when there is a traveling direction of the target pedestrian, the target pedestrian moves; when the target vehicle has a traveling direction, the target vehicle moves.
Referring to fig. 14, step S4 may further include the following sub-steps:
and a substep S41, calculating the coordinates of the target vehicle and the coordinates of the target pedestrian in the second evidence map to obtain the distance between the target vehicle and the target pedestrian.
In an embodiment of the present invention, the target vehicle may be an automobile, and the target pedestrian may be any pedestrian in the zebra crossing region. The lateral distance between the target vehicle and the target pedestrian may be understood as the lateral distance between the target vehicle and each pedestrian in the zebra crossing region. For example, if the target vehicle is O and the zebra crossing region contains pedestrian A, pedestrian B, and pedestrian C, the distances between the target vehicle and the target pedestrians are the lengths of OA, OB, and OC projected onto the horizontal direction. Taking the lateral distance between one pedestrian and the target vehicle as an example, let the coordinates of the target vehicle be (X′1, Y′1, W′1, H′1) and the coordinates of the target pedestrian be (X′2, Y′2, W′2, H′2). When the target pedestrian is on the right side of the target vehicle, the lateral distance between them is X′2 - (X′1 + W′1); when the target pedestrian is on the left side of the target vehicle, the lateral distance is X′1 - (X′2 + W′2). For example, if the coordinates of the target vehicle are (10, -4, 2, 2) and the coordinates of the target pedestrian are (5, -2, 1, 2), the target pedestrian is on the left of the target vehicle, and the lateral distance between them is 10 - (5 + 1) = 4.
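The left/right case analysis above can be sketched as follows (the function name and tuple convention are assumptions for illustration):

```python
def lateral_distance(vehicle, pedestrian):
    """Horizontal gap between a vehicle box and a pedestrian box,
    each given as (x, y, w, h) with x increasing to the right."""
    xv, _, wv, _ = vehicle
    xp, _, wp, _ = pedestrian
    if xp >= xv + wv:          # pedestrian to the right of the vehicle
        return xp - (xv + wv)
    return xv - (xp + wp)      # pedestrian to the left of the vehicle
```

With the worked example from the text, `lateral_distance((10, -4, 2, 2), (5, -2, 1, 2))` reproduces the lateral distance of 4.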
And a substep S42, calculating the coordinates of the target pedestrian in the first evidence graph and the coordinates of the target pedestrian in the second evidence graph to obtain the traveling direction of the target pedestrian.
In the embodiment of the present invention, the moving distance and the traveling direction of the target pedestrian may be understood as the displacement of the same pedestrian (for example, pedestrian A) between the first evidence graph and the second evidence graph: the magnitude of the displacement is the moving distance, and the direction of the displacement is the traveling direction. A description is given below taking a single pedestrian as the target pedestrian. If the coordinates of the target pedestrian in the first evidence graph are (X2, Y2, W2, H2) and its coordinates in the second evidence graph are (X′2, Y′2, W′2, H′2), then the moving distance of the target pedestrian may be computed according to formula 4:

d = sqrt((X′2 - X2)^2 + (Y′2 - Y2)^2)

and the traveling direction of the target pedestrian may be obtained from the displacement vector according to formula 5:

(X′2 - X2, Y′2 - Y2).
for example, if the coordinates of the pedestrian a in the first evidence chart are (5, -2, 1,2) and the coordinates of the pedestrian a in the second evidence chart are (7, -2, 1,2), it can be found that the moving distance of the pedestrian a is 2, the pedestrian a moves, and the traveling direction is eastward.
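Assuming the moving distance is the Euclidean magnitude of the displacement and the traveling direction its angle (the original formulas appear only as images, so this exact form is an assumption consistent with the worked example above), a sketch:

```python
import math

def displacement(box_first, box_second):
    """Displacement of one target between the first and second evidence
    graphs, computed from the top-left coordinates of its two boxes:
    returns (moving distance, direction angle in degrees)."""
    dx = box_second[0] - box_first[0]
    dy = box_second[1] - box_first[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))
```

For pedestrian A above, `displacement((5, -2, 1, 2), (7, -2, 1, 2))` gives a moving distance of 2.0 along the positive x axis.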
And a substep S43, calculating the coordinates of the target vehicle in the first evidence graph and the coordinates of the target vehicle in the second evidence graph to obtain the traveling direction of the target vehicle.
In the embodiment of the present invention, the target vehicle is displaced between the first evidence graph and the second evidence graph, and the traveling direction of the target vehicle is its moving direction. If the coordinates of the target vehicle in the first evidence graph are (X1, Y1, W1, H1) and its coordinates in the second evidence graph are (X′1, Y′1, W′1, H′1), then the moving distance of the target vehicle may be computed according to formula 6:

d = sqrt((X′1 - X1)^2 + (Y′1 - Y1)^2)

and the traveling direction of the target vehicle may be obtained from the displacement vector according to formula 7:

(X′1 - X1, Y′1 - Y1).
for example, if the coordinates of the target vehicle in the first evidence chart are (10, -4, 2, 2) and the coordinates in the second evidence chart are (10, -3, 2, 2), then it can be obtained that the moving distance of the target vehicle is 1, the target vehicle has moved, and the traveling direction is north.
It should be noted that, in other embodiments of the present invention, the execution sequence of the sub-step S41, the sub-step S42, and the sub-step S43 may be exchanged, or the sub-step S41, the sub-step S42, and the sub-step S43 may be executed at the same time.
And step S5, comparing the target traveling information with the preset traveling information, and determining that the target vehicle fails to yield to the pedestrian when the target traveling information matches the preset traveling information.
In the embodiment of the present invention, the preset traveling information may include a first preset distance and the like. Comparing the target traveling information with the preset traveling information and determining that the target vehicle fails to yield to the pedestrian when they match may be understood as follows: the distance between the target vehicle and the target pedestrian is compared with the first preset distance, the traveling direction of the target pedestrian is compared with the traveling direction of the target vehicle, and when the distance between the target vehicle and the target pedestrian is less than the first preset distance and the traveling direction of the target pedestrian intersects with the traveling direction of the target vehicle, it is determined that the target vehicle fails to yield to the pedestrian. The first preset distance is a safe lateral distance between the target pedestrian and the target vehicle, set by the user.
When the lateral distance between the target vehicle and any target pedestrian is less than the first preset distance, the target vehicle is considered closer than the preset safe lateral distance; when the lateral distances between the target vehicle and all target pedestrians are greater than or equal to the first preset distance, the target vehicle is considered far away from all target pedestrians, and no failure to yield occurs. When the target pedestrian has no traveling direction, the target pedestrian is considered not to be moving and to be waiting for the target vehicle to pass, so the target vehicle does not fail to yield; when the target vehicle has no traveling direction, the target vehicle is considered not to be moving and to be waiting for the target pedestrian to pass, so it likewise does not fail to yield. When the traveling direction of the target pedestrian intersects with the traveling direction of the target vehicle, an unsafe situation may arise if both continue to travel; when the two traveling directions do not intersect, the target vehicle is considered not to be failing to yield.
When the target traveling information matches the preset traveling information, the target vehicle is determined to have failed to yield to the pedestrian. Comparing the distance between the target vehicle and the target pedestrian with the first preset distance, and comparing the traveling direction of the target pedestrian with the traveling direction of the target vehicle, constitutes a double verification of whether the target vehicle fails to yield, which improves the accuracy of the determination.
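The double verification of step S5 can be sketched as a single predicate; the names, and the representation of "has a traveling direction" as a non-None angle, are assumptions for illustration:

```python
def fails_to_yield(lat_dist, preset_dist, ped_dir, veh_dir, paths_cross):
    """Double verification (a sketch): the vehicle is judged not to
    yield only when it is closer to some pedestrian than the preset
    lateral distance AND both targets are moving (direction not None)
    AND their traveling directions intersect."""
    return (lat_dist < preset_dist
            and ped_dir is not None
            and veh_dir is not None
            and paths_cross)
```

Any single condition failing (vehicle far enough away, either party stationary, or non-intersecting paths) means no violation is recorded.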
Step S6, obtaining a third evidence graph of the target vehicle in the second preset area, and storing the first evidence graph, the second evidence graph, and the third evidence graph as evidence that the target vehicle fails to yield to the pedestrian.
In the embodiment of the present invention, the third evidence graph may be an image of the target vehicle after it has entered the zebra crossing area. After step S5 determines that the target vehicle fails to yield to the pedestrian, the target vehicle needs to be tracked subsequently to check its situation after entering the zebra crossing area (i.e. the second preset area), so a third evidence graph in which the vehicle is located in the second preset area is obtained. The first evidence graph, the second evidence graph, and the third evidence graph are stored as evidence that the target vehicle fails to yield to the pedestrian, and may be transmitted to the server 300 through the communication interface 104.
Compared with the prior art, the embodiment of the invention has the following advantages:
firstly, the first network model map is used to locate the coordinates of the target vehicle and the target pedestrian in the first evidence graph, and the second network model map is used to locate their coordinates in the second evidence graph; from these two sets of coordinates, whether the vehicle fails to yield to pedestrians is detected quickly and accurately, which reduces the workload of law enforcement officers.
Secondly, different targets (target pedestrians and target vehicles) are detected in different areas in the first network model graph, so that the detection efficiency can be improved, the target pedestrians are detected in a clear target segmentation graph, the detection rate can be improved, and the time required by target detection is shortened.
Finally, whether the target vehicle fails to yield to pedestrians is determined through double verification, which improves the accuracy of the determination.
Second embodiment
Referring to fig. 15, fig. 15 is a block diagram illustrating a device 200 for detecting a vehicle failing to yield to pedestrians according to an embodiment of the invention. The detection device 200 includes an evidence graph acquisition module 201, a first coordinate set extraction module 202, a second coordinate set extraction module 203, a traveling information calculation module 204, a failure-to-yield determination module 205, and a failure-to-yield evidence acquisition module 206.
The evidence graph acquiring module 201 is configured to acquire a first evidence graph and a second evidence graph when it is detected that a vehicle enters a first preset region, where the first evidence graph and the second evidence graph both include a target pedestrian and a target vehicle.
The first coordinate set extraction module 202 is configured to generate a first network model map according to the first evidence map, and perform target positioning on the first network model map to obtain a first coordinate set, where the first coordinate set includes coordinates of a target vehicle and coordinates of a target pedestrian in the first evidence map.
Referring to fig. 16, the first coordinate set extraction module 202 may include a first network map extraction unit 221, a first target extraction unit 222, a network coordinate set acquisition unit 223, and a first coordinate set positioning unit 224, where the first network map extraction unit 221 is configured to process the first evidence map according to preset parameters to obtain a first network model map; a first target extraction unit 222, configured to perform target detection on the first network model map to obtain a first target in the first network model map, where the first target includes a target vehicle and a target pedestrian; a network coordinate set obtaining unit 223, configured to obtain coordinates of the first target in the first network model map, so as to obtain a first network coordinate set; the first coordinate set positioning unit 224 is configured to position the first network coordinate set into the first evidence graph according to a preset parameter, so as to obtain a first coordinate set.
In this embodiment of the present invention, the first network map extracting unit 221 is specifically configured to: zooming the first evidence graph according to a first preset proportion to obtain a first zoomed graph; zooming the first evidence graph according to a second preset proportion to obtain a second zoomed graph; dividing the second zoom map according to the zebra crossing area and the first zoom map to obtain a plurality of division maps; screening at least one target segmentation graph containing the zebra crossing region in the multiple segmentation graphs, and splicing the at least one target segmentation graph and the first zoom graph to obtain a first network model graph.
In this embodiment of the present invention, the first coordinate set positioning unit 224 is specifically configured to: obtaining a target vehicle coordinate in a first evidence graph according to the target vehicle coordinate in the first network model graph and a first preset proportion; and obtaining the target pedestrian coordinate in the first evidence graph according to the target pedestrian coordinate in the first network model graph and the second preset proportion, wherein the target vehicle coordinate and the target pedestrian coordinate form a first coordinate set.
And the second coordinate set extraction module 203 is configured to generate a second network model map according to the second evidence map, and perform target positioning on the second network model map to obtain a second coordinate set, where the second coordinate set includes coordinates of a target vehicle and coordinates of a target pedestrian in the second evidence map.
And the traveling information calculation module 204 is configured to calculate the first coordinate set and the second coordinate set to obtain target traveling information of the target pedestrian and the target vehicle.
In an embodiment of the present invention, please refer to fig. 17, the traveling information calculating module 204 may include a first calculating unit 241, a second calculating unit 242, and a third calculating unit 243, where the first calculating unit 241 is configured to calculate coordinates of a target vehicle and coordinates of a target pedestrian in the second evidence map to obtain a distance between the target vehicle and the target pedestrian; the second calculating unit 242 is configured to calculate coordinates of the target pedestrian in the first evidence graph and coordinates of the target pedestrian in the second evidence graph to obtain a traveling direction of the target pedestrian; the third calculating unit 243 is configured to calculate coordinates of the target vehicle in the first evidence map and coordinates of the target vehicle in the second evidence map to obtain a traveling direction of the target vehicle.
The failure-to-yield determination module 205 is configured to compare the target traveling information with the preset traveling information, and determine that the target vehicle fails to yield to the pedestrian when the target traveling information matches the preset traveling information.
The failure-to-yield evidence acquisition module 206 is configured to obtain a third evidence graph of the target vehicle located in the second preset area, and store the first evidence graph, the second evidence graph, and the third evidence graph as evidence that the target vehicle fails to yield to pedestrians.
In summary, the present invention provides a method and a device for detecting a vehicle failing to yield to pedestrians, wherein the method includes: when it is detected that a vehicle enters a first preset area, obtaining a first evidence graph and a second evidence graph, both of which include a target pedestrian and a target vehicle; generating a first network model map according to the first evidence graph, and performing target positioning on the first network model map to obtain a first coordinate set, wherein the first coordinate set includes the target vehicle coordinates and the target pedestrian coordinates in the first evidence graph; generating a second network model map according to the second evidence graph, and performing target positioning on the second network model map to obtain a second coordinate set, wherein the second coordinate set includes the target vehicle coordinates and the target pedestrian coordinates in the second evidence graph; calculating the first coordinate set and the second coordinate set to obtain target traveling information of the target pedestrian and the target vehicle; and comparing the target traveling information with preset traveling information, and determining that the target vehicle fails to yield to the pedestrian when the target traveling information matches the preset traveling information. The method can quickly and accurately detect whether a vehicle fails to yield to pedestrians, so as to reduce the workload of law enforcement officers and achieve a better law enforcement effect.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.

Claims (7)

1. A method for detecting a vehicle failing to yield to a pedestrian, the method comprising:
when it is detected that a vehicle enters a first preset area, acquiring a first evidence graph and a second evidence graph, wherein the first evidence graph and the second evidence graph each comprise a target pedestrian and a target vehicle, and the first evidence graph comprises a zebra crossing region;
scaling the first evidence graph according to a first preset proportion to obtain a first zoom map;
scaling the first evidence graph according to a second preset proportion to obtain a second zoom map;
segmenting the second zoom map according to the zebra crossing region and the first zoom map to obtain a plurality of segmentation maps;
selecting, from the plurality of segmentation maps, at least one target segmentation map containing the zebra crossing region, and stitching the at least one target segmentation map with the first zoom map to obtain a first network model map;
performing target positioning on the first network model map to obtain a first coordinate set, wherein the first coordinate set comprises target vehicle coordinates and target pedestrian coordinates in the first evidence graph, the target vehicle coordinates in the first coordinate set being obtained by performing target detection on the first zoom map in the first network model map, and the target pedestrian coordinates in the first coordinate set being obtained by performing target detection on the target segmentation map in the first network model map;
generating a second network model map according to the second evidence graph, and performing target positioning on the second network model map to obtain a second coordinate set, wherein the second coordinate set comprises target vehicle coordinates and target pedestrian coordinates in the second evidence graph;
calculating the first coordinate set and the second coordinate set to obtain target traveling information of the target pedestrian and the target vehicle; and
comparing the target traveling information with preset traveling information, and determining that the target vehicle has failed to yield to the pedestrian when the distance between the target vehicle and the target pedestrian is smaller than a first preset distance and the traveling direction of the target pedestrian intersects the traveling direction of the target vehicle.
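One plausible way to evaluate the "traveling directions intersect" condition in claim 1 is to test whether the two travel rays cross ahead of both targets; the parameterization below (positions plus displacement vectors) is an editorial assumption for illustration, not the patented test.

```python
def paths_cross(p_pos, p_dir, v_pos, v_dir, eps=1e-9):
    """Return True if the pedestrian ray (p_pos + t * p_dir) and the
    vehicle ray (v_pos + s * v_dir) intersect for some t, s >= 0,
    i.e. the two traveling directions cross ahead of both targets."""
    dx, dy = v_pos[0] - p_pos[0], v_pos[1] - p_pos[1]
    # Determinant of the 2x2 system t * p_dir - s * v_dir = (dx, dy)
    det = v_dir[0] * p_dir[1] - p_dir[0] * v_dir[1]
    if abs(det) < eps:
        return False  # parallel travel directions never cross
    t = (v_dir[0] * dy - v_dir[1] * dx) / det  # pedestrian parameter
    s = (p_dir[0] * dy - p_dir[1] * dx) / det  # vehicle parameter
    return t >= 0 and s >= 0  # crossing point lies ahead of both targets
```

For example, a pedestrian at the origin walking in +y and a vehicle at (-5, 3) driving in +x cross at (0, 3); if the vehicle instead drives in -x, the rays diverge and the condition fails.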
2. The method of claim 1, wherein the step of performing target positioning on the first network model map to obtain a first coordinate set comprises:
performing target detection on the first network model map to obtain a first target in the first network model map, wherein the first target comprises the target vehicle and the target pedestrian;
obtaining coordinates of the first target in the first network model map to obtain a first network coordinate set; and
positioning the first network coordinate set into the first evidence graph according to preset parameters to obtain the first coordinate set, wherein the preset parameters comprise the first preset proportion and the second preset proportion.
3. The method of claim 2, wherein the first network coordinate set comprises target vehicle coordinates and target pedestrian coordinates in the first network model map, and the step of positioning the first network coordinate set into the first evidence graph according to preset parameters to obtain the first coordinate set comprises:
obtaining the target vehicle coordinates in the first evidence graph according to the target vehicle coordinates in the first network model map and the first preset proportion; and
obtaining the target pedestrian coordinates in the first evidence graph according to the target pedestrian coordinates in the first network model map and the second preset proportion, wherein the target vehicle coordinates and the target pedestrian coordinates form the first coordinate set.
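The back-projection in claim 3 amounts to dividing each coordinate detected in the scaled map by the proportion at which the evidence graph was scaled; a minimal sketch, in which the proportion values (0.5 for the vehicle's zoom map, 0.25 for the pedestrian's segmentation map) are illustrative assumptions:

```python
def to_evidence_coords(network_coords, proportion):
    """Map a coordinate detected in a scaled network model map back into
    the original evidence graph by undoing the preset scaling proportion."""
    x, y = network_coords
    return (x / proportion, y / proportion)

# Vehicle detected at (320, 180) in a map scaled by the first proportion 0.5,
# pedestrian detected at (96, 64) in a segment scaled by the second, 0.25:
vehicle = to_evidence_coords((320, 180), 0.5)
pedestrian = to_evidence_coords((96, 64), 0.25)
```

Together the two back-projected points form the first coordinate set, expressed in the full-resolution evidence graph.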
4. The method of claim 1, wherein the target traveling information includes a distance between the target vehicle and the target pedestrian, a traveling direction of the target pedestrian, and a traveling direction of the target vehicle, and the step of calculating the first coordinate set and the second coordinate set to obtain the target traveling information of the target pedestrian and the target vehicle comprises:
calculating the target vehicle coordinates and the target pedestrian coordinates in the second evidence graph to obtain the distance between the target vehicle and the target pedestrian;
calculating the target pedestrian coordinates in the first evidence graph and the target pedestrian coordinates in the second evidence graph to obtain the traveling direction of the target pedestrian; and
calculating the target vehicle coordinates in the first evidence graph and the target vehicle coordinates in the second evidence graph to obtain the traveling direction of the target vehicle.
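The three computations in claim 4 can be sketched directly: the distance from the second-graph coordinates, and each traveling direction as the displacement between the two evidence graphs. The tuple layout ((vehicle_xy, pedestrian_xy) per coordinate set) is an editorial assumption for illustration.

```python
import math

def target_travel_info(first_set, second_set):
    """Derive claim 4's target traveling information from the two
    coordinate sets, each given as ((vx, vy), (px, py))."""
    (vx1, vy1), (px1, py1) = first_set
    (vx2, vy2), (px2, py2) = second_set
    # Distance between vehicle and pedestrian in the second evidence graph
    distance = math.hypot(vx2 - px2, vy2 - py2)
    # Traveling directions as displacement vectors across the two graphs
    pedestrian_dir = (px2 - px1, py2 - py1)
    vehicle_dir = (vx2 - vx1, vy2 - vy1)
    return distance, pedestrian_dir, vehicle_dir
```

The resulting triple is what would then be compared against the preset traveling information (first preset distance and the direction-intersection condition).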
5. The method of claim 1, wherein after the step of determining that the target vehicle has failed to yield to the pedestrian, the method further comprises:
acquiring a third evidence graph of the target vehicle in a second preset area, and storing the first evidence graph, the second evidence graph and the third evidence graph as evidence that the target vehicle failed to yield to the pedestrian.
6. An apparatus for detecting a vehicle failing to yield to a pedestrian, the apparatus comprising:
an evidence graph acquiring module, configured to acquire a first evidence graph and a second evidence graph when it is detected that a vehicle enters a first preset area, wherein the first evidence graph and the second evidence graph each comprise a target pedestrian and a target vehicle, and the first evidence graph comprises a zebra crossing region;
a first coordinate set extraction module, configured to: scale the first evidence graph according to a first preset proportion to obtain a first zoom map; scale the first evidence graph according to a second preset proportion to obtain a second zoom map; segment the second zoom map according to the zebra crossing region and the first zoom map to obtain a plurality of segmentation maps; select, from the plurality of segmentation maps, at least one target segmentation map containing the zebra crossing region, and stitch the at least one target segmentation map with the first zoom map to obtain a first network model map; and perform target positioning on the first network model map to obtain a first coordinate set, wherein the first coordinate set comprises target vehicle coordinates and target pedestrian coordinates in the first evidence graph, the target vehicle coordinates in the first coordinate set being obtained by performing target detection on the first zoom map in the first network model map, and the target pedestrian coordinates in the first coordinate set being obtained by performing target detection on the target segmentation map in the first network model map;
a second coordinate set extraction module, configured to generate a second network model map according to the second evidence graph and perform target positioning on the second network model map to obtain a second coordinate set, wherein the second coordinate set comprises target vehicle coordinates and target pedestrian coordinates in the second evidence graph;
a traveling information calculation module, configured to calculate the first coordinate set and the second coordinate set to obtain target traveling information of the target pedestrian and the target vehicle; and
a violation determination module, configured to compare the target traveling information with preset traveling information, and determine that the target vehicle has failed to yield to the pedestrian when the distance between the target vehicle and the target pedestrian is smaller than a first preset distance and the traveling direction of the target pedestrian intersects the traveling direction of the target vehicle.
7. The apparatus of claim 6, wherein the first coordinate set extraction module comprises:
a first target extraction unit, configured to perform target detection on the first network model map to obtain a first target in the first network model map, wherein the first target comprises the target vehicle and the target pedestrian;
a network coordinate set obtaining unit, configured to obtain coordinates of the first target in the first network model map to obtain a first network coordinate set; and
a first coordinate set positioning unit, configured to position the first network coordinate set into the first evidence graph according to preset parameters to obtain the first coordinate set, wherein the preset parameters comprise the first preset proportion and the second preset proportion.
CN201811450320.1A 2018-11-30 2018-11-30 Method and device for detecting pedestrian without giving way to vehicle Active CN111260928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811450320.1A CN111260928B (en) 2018-11-30 2018-11-30 Method and device for detecting pedestrian without giving way to vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811450320.1A CN111260928B (en) 2018-11-30 2018-11-30 Method and device for detecting pedestrian without giving way to vehicle

Publications (2)

Publication Number Publication Date
CN111260928A CN111260928A (en) 2020-06-09
CN111260928B true CN111260928B (en) 2021-07-20

Family

ID=70953606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811450320.1A Active CN111260928B (en) 2018-11-30 2018-11-30 Method and device for detecting pedestrian without giving way to vehicle

Country Status (1)

Country Link
CN (1) CN111260928B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115083208B (en) * 2022-07-20 2023-02-03 深圳市城市交通规划设计研究中心股份有限公司 Human-vehicle conflict early warning method, early warning analysis method, electronic device and storage medium
CN116189445A (en) * 2023-03-08 2023-05-30 以萨技术股份有限公司 Vehicle behavior determination method and device and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU164432U1 (en) * 2015-12-25 2016-08-27 Общество С Ограниченной Ответственностью "Технологии Распознавания" DEVICE FOR AUTOMATIC PHOTOVIDEO FIXATION OF VIOLATIONS DO NOT GIVE ADVANTAGES TO THE PEDESTRIAN AT THE UNRESOLVED PEDESTRIAN TRANSITION
CN106373430A (en) * 2016-08-26 2017-02-01 华南理工大学 Intersection pass early warning method based on computer vision
CN106503627A (en) * 2016-09-30 2017-03-15 西安翔迅科技有限责任公司 A kind of vehicle based on video analysis avoids pedestrian detection method
CN107730906A (en) * 2017-07-11 2018-02-23 银江股份有限公司 Zebra stripes vehicle does not give precedence to the vision detection system of pedestrian behavior


Also Published As

Publication number Publication date
CN111260928A (en) 2020-06-09

Similar Documents

Publication Publication Date Title
US11847917B2 (en) Fixation generation for machine learning
EP3376432B1 (en) Method and device to generate virtual lane
US10055652B2 (en) Pedestrian detection and motion prediction with rear-facing camera
CN104350510B (en) For the method and system for distinguishing the foreground object of image and background model
US20180211117A1 (en) On-demand artificial intelligence and roadway stewardship system
CN111369831A (en) Road driving danger early warning method, device and equipment
CN104376297A (en) Detection method and device for linear indication signs on road
JP6630521B2 (en) Danger determination method, danger determination device, danger output device, and danger determination system
JP2018081545A (en) Image data extraction device and image data extraction method
US20180033297A1 (en) Method and apparatus for determining split lane traffic conditions utilizing both multimedia data and probe data
CN111178119A (en) Intersection state detection method and device, electronic equipment and vehicle
CN111260928B (en) Method and device for detecting pedestrian without giving way to vehicle
CN107529659A (en) Seatbelt wearing detection method, device and electronic equipment
Phatchuay et al. The System Vehicle of Application Detector for Categorize Type
CN114664085A (en) Dangerous road section reminding method and device, electronic equipment and medium
Kotha et al. Potsense: Pothole detection on indian roads using smartphone sensors
Alpar et al. Intelligent collision warning using license plate segmentation
CN114771576A (en) Behavior data processing method, control method of automatic driving vehicle and automatic driving vehicle
CN111160132B (en) Method and device for determining lane where obstacle is located, electronic equipment and storage medium
Athree et al. Vision-based automatic warning system to prevent dangerous and illegal vehicle overtaking
JP6185327B2 (en) Vehicle rear side warning device, vehicle rear side warning method, and other vehicle distance detection device
JP7610480B2 (en) Vehicle control device
Hovorushchenko et al. Road Accident Prevention System
Said et al. AI-Based Helmet Violation Detection for Traffic Management System.
Gowtham et al. An investigation approach used for pattern classification and recognition of an emergency vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant