CN111179305B - Object position estimation method and object position estimation device thereof - Google Patents
- Publication number
- CN111179305B CN111179305B CN201811344830.0A CN201811344830A CN111179305B CN 111179305 B CN111179305 B CN 111179305B CN 201811344830 A CN201811344830 A CN 201811344830A CN 111179305 B CN111179305 B CN 111179305B
- Authority
- CN
- China
- Prior art keywords
- coordinate value
- later
- previous
- picture
- position estimation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/04—Synchronising
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an object position estimation method with a timestamp alignment function, applied in an object position estimation device. The method includes: obtaining a plurality of first frames from a first camera; setting a first preset time point; selecting, from the plurality of first frames, the first previous frame and the first later frame closest to the first preset time point; obtaining a first previous coordinate value of an object in the first previous frame and a first later coordinate value of the object in the first later frame; and computing, from the first previous coordinate value and the first later coordinate value, a first estimated coordinate value of the object at the first preset time point. The invention overcomes the metadata timestamp gap between different cameras and effectively improves the accuracy of object tracking.
Description
Technical Field
The present invention relates to an object position estimation method and an object position estimation device, and in particular to an object position estimation method with a timestamp alignment function and an object position estimation device thereof.
Background Art
With the advancement of technology, multi-camera surveillance systems are widely deployed over large areas and environments. Because the field of view of a single camera cannot cover the entire monitored area, a traditional multi-camera surveillance system points each camera at a different region; when tracking an object, its movement trajectory within the monitored area can then be determined from the images of the different cameras. To obtain a precise trajectory, a traditional multi-camera system performs time synchronization for all cameras, ensuring that each camera captures a frame at the required time point. However, the capture time and capture rate of each camera are affected by its hardware configuration and network environment, so gaps remain between the metadata timestamps of different cameras. The larger the timestamp gap, the lower the accuracy of object tracking. How to overcome this shortcoming and design an object tracking method that accurately aligns the object metadata of multiple cameras is therefore a key development goal of the surveillance industry.
Summary of the Invention
The invention relates to an object position estimation method with a timestamp alignment function and an object position estimation device thereof.
The invention further discloses an object position estimation method with a timestamp alignment function, which includes obtaining a plurality of first frames from a first camera, setting a first preset time point, selecting from the plurality of first frames the first previous frame and the first later frame closest to the first preset time point, obtaining a first previous coordinate value of an object in the first previous frame and a first later coordinate value of the object in the first later frame, and computing a first estimated coordinate value of the object at the first preset time point from the first previous coordinate value and the first later coordinate value.
The invention also discloses an object position estimation device with a timestamp alignment function, which includes a receiver and a processor. The receiver obtains the frames produced by cameras. The processor, electrically connected to the receiver, obtains a plurality of first frames from a first camera, sets a first preset time point, selects from the plurality of first frames the first previous frame and the first later frame closest to the first preset time point, obtains a first previous coordinate value of an object in the first previous frame and a first later coordinate value of the object in the first later frame, and computes a first estimated coordinate value of the object at the first preset time point from the first previous coordinate value and the first later coordinate value.
The fields of view of the multiple cameras of the invention may partially overlap or not overlap at all, and their time synchronization is completed in advance. The object position estimation method and device of the invention set several preset time points at a specific frequency; after obtaining one piece of object metadata from the frame before and one from the frame after each preset time point for each camera, interpolation or another computation function is applied to estimate the object metadata at that preset time point, yielding time-aligned object metadata. The invention overcomes the metadata timestamp gap between different cameras and effectively improves the accuracy of object tracking.
Brief Description of the Drawings
FIG. 1 is a functional block diagram of an object position estimation device according to an embodiment of the invention.
FIG. 2 is a flow chart of an object position estimation method according to an embodiment of the invention.
FIG. 3 is a schematic diagram of frames captured by the cameras in different time sequences according to an embodiment of the invention.
FIG. 4 is a schematic diagram of the object position estimation device and the cameras according to an embodiment of the invention.
FIG. 5 and FIG. 6 are schematic diagrams of converting previous and later coordinate values into estimated coordinate values according to an embodiment of the invention.
FIG. 7 is a schematic diagram of a stitched frame according to an embodiment of the invention.
The reference numerals are explained as follows:
10 Object position estimation device
12 Receiver
14 Processor
16 First camera
18 Second camera
Is1 First frames
Is2 Second frames
Ip1 First previous frame
In1 First later frame
Ip2 Second previous frame
In2 Second later frame
Ip3 Third previous frame
In3 Third later frame
Ip4 Fourth previous frame
In4 Fourth later frame
Iu Unused frame
I’, I” Stitched frames
O, O’ Objects
Cp1 First previous coordinate value
Cn1 First later coordinate value
Ce1 First estimated coordinate value
Cp2 Second previous coordinate value
Cn2 Second later coordinate value
Ce2 Second estimated coordinate value
T1 First preset time point
T2 Second preset time point
S200, S202, S204, S206, S208, S210, S212, S214 Steps
Detailed Description
Please refer to FIG. 1, a functional block diagram of an object position estimation device 10 according to an embodiment of the invention. When multiple cameras capture the same monitored area from different viewing angles, their operating systems have already completed time synchronization, yet a slight time difference remains between the individual frames each camera captures. The object position estimation device 10 has a timestamp alignment function for aligning the object metadata in the frames captured by the multiple cameras, improving object tracking accuracy. The object position estimation device 10 may be a server that collects the frame data of multiple cameras for timestamp alignment; it may also be a camera with special functions that receives the surveillance frames of other cameras and performs timestamp alignment between those frames and its own.
Please refer to FIG. 1 to FIG. 4. FIG. 2 is a flow chart of the object position estimation method according to an embodiment of the invention, FIG. 3 is a schematic diagram of frames captured by the cameras in different time sequences, and FIG. 4 is a schematic diagram of the object position estimation device 10 and the cameras. The object position estimation device 10 may include a receiver 12 and a processor 14 electrically connected to each other. The receiver 12 obtains the frames produced by the other cameras, and the object position estimation method executed by the processor 14 is applicable to the object position estimation device 10 shown in FIG. 1. Although the first camera 16 and the second camera 18 have completed time synchronization, their capture rates in frames per second (FPS) vary over time because of differences in system performance and network connection quality, as shown in FIG. 3.
In the object position estimation method, step S200 is executed first: the processor 14 determines the timestamp alignment frequency; for example, when the capture rate of the first camera 16 and the second camera 18 is 60 FPS, the timestamp alignment frequency may be 60 FPS, 30 FPS, or another value. Next, in steps S202 and S204, the processor 14 obtains, through the receiver 12, a plurality of first frames Is1 from the first camera 16 and a plurality of second frames Is2 from the second camera 18, and sets a first preset time point T1 according to the timestamp alignment frequency. Step S206 is then executed to find, among the first frames Is1, the first previous frame Ip1 and the first later frame In1 closest to the first preset time point T1, and to find, among the second frames Is2, the second previous frame Ip2 and the second later frame In2 closest to the first preset time point T1.
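As an illustration of the frame selection in step S206, the frames bracketing a preset time point can be located by a binary search over each camera's sorted frame timestamps. This sketch is ours, not part of the patent; the function and variable names are assumptions.

```python
from bisect import bisect_left

def bracket_frames(timestamps, t_preset):
    """Given one camera's sorted frame timestamps, return the indices of
    a frame before t_preset and the first frame at or after it.
    Illustrative sketch only; names are not from the patent."""
    i = bisect_left(timestamps, t_preset)
    if i == 0 or i == len(timestamps):
        return None  # t_preset falls outside the captured range
    return i - 1, i

# A camera whose FPS drifts: frames captured at these timestamps (seconds).
ts = [0.000, 0.017, 0.034, 0.052, 0.068]
print(bracket_frames(ts, 0.040))  # (2, 3): the frames at 0.034 s and 0.052 s
```

Each camera's frame list is searched independently, which mirrors how Ip1/In1 and Ip2/In2 are chosen separately from Is1 and Is2.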
Next, step S208 is executed: object tracking technology is used to analyze the objects O and O’ in the previous frames Ip1, Ip2 and the later frames In1, In2, obtaining the first previous coordinate value Cp1 of the object O in the first previous frame Ip1 and its first later coordinate value Cn1 in the first later frame In1, as well as the second previous coordinate value Cp2 of the object O’ in the second previous frame Ip2 and its second later coordinate value Cn2 in the second later frame In2. In step S210, the first estimated coordinate value Ce1 of the object O at the first preset time point T1 is computed from the first previous coordinate value Cp1 and the first later coordinate value Cn1, and the second estimated coordinate value Ce2 of the object O’ at the first preset time point T1 is computed from the second previous coordinate value Cp2 and the second later coordinate value Cn2.
After the first estimated coordinate value Ce1 and the second estimated coordinate value Ce2 are obtained, steps S212 and S214 are executed: one frame is selected from the first frames Is1 and one from the second frames Is2, and the two are stitched into a stitched frame I’, on which the first estimated coordinate value Ce1 of the object O and the second estimated coordinate value Ce2 of the object O’ are displayed. In this way, the object position estimation device 10 can render, on the stitched frame I’, the trajectories of the objects O and O’ tracked respectively by the two cameras 16 and 18, for the user to observe or for other computing applications. This embodiment preferably stitches the first previous frame Ip1 with the second later frame In2, but practical applications are not limited thereto.
In particular, the first estimated coordinate value Ce1 is a predicted quantity of the object O at the first preset time point T1, and the second estimated coordinate value Ce2 is a predicted quantity of the object O’ at the first preset time point T1; the two are therefore preferably regarded as simultaneous. Even if a slight error exists in practice, the object position estimation method of the invention reduces it to a minimum, so the two values can still be defined as belonging to the same time.
Please refer to FIG. 3 to FIG. 7. FIG. 5 and FIG. 6 are schematic diagrams of converting previous and later coordinate values into estimated coordinate values, and FIG. 7 is a schematic diagram of a stitched frame according to an embodiment of the invention. The object position estimation method of the invention may use interpolation to obtain the first estimated coordinate value Ce1(x3, y3) between the first previous coordinate value Cp1(x1, y1) and the first later coordinate value Cn1(x2, y2). With the first previous frame Ip1 captured at time point Tp1 and the first later frame In1 captured at time point Tn1, the coordinate values are x3 = x1 + (x2 - x1) * (T1 - Tp1) / (Tn1 - Tp1) and y3 = y1 + (y2 - y1) * (T1 - Tp1) / (Tn1 - Tp1). The second previous coordinate value Cp2 and the second later coordinate value Cn2 are converted into the second estimated coordinate value Ce2 in the same way as the first estimated coordinate value Ce1, so the description is not repeated. Moreover, the computation of the estimated coordinate values is not limited to interpolation; other computation functions may also be used.
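The interpolation above can be sketched directly in code. This is an illustrative Python rendering of x3 = x1 + (x2 - x1) * (T1 - Tp1) / (Tn1 - Tp1), and likewise for y3; the helper name is ours, not the patent's.

```python
def interpolate_position(cp, cn, tp, tn, t):
    """Linearly interpolate an object's (x, y) position at time t, given
    its coordinates cp at the previous frame's time tp and cn at the later
    frame's time tn (tp <= t <= tn). Illustrative sketch only."""
    ratio = (t - tp) / (tn - tp)
    x = cp[0] + (cn[0] - cp[0]) * ratio
    y = cp[1] + (cn[1] - cp[1]) * ratio
    return (x, y)

# Object at (0, 0) in the previous frame (t = 0.0 s) and (8, 4) in the
# later frame (t = 2.0 s); estimate its position at the preset time 0.5 s.
print(interpolate_position((0.0, 0.0), (8.0, 4.0), 0.0, 2.0, 0.5))  # (2.0, 1.0)
```

The same routine applies unchanged to Cp2/Cn2 for the second camera, which is why the patent does not repeat the derivation for Ce2.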
The object position estimation method may further set one or more preset time points later than the first preset time point T1 and obtain the estimated coordinate values of the objects O and O’ at those time points. Taking the second preset time point T2 as an example, T2 may be determined according to the timestamp alignment frequency of step S200. Among the first frames Is1, the third previous frame Ip3 and the third later frame In3 are the frames closest to T2 before and after it. The position of the object O in the third previous frame Ip3 is a third previous coordinate value (not shown in the drawings), and its position in the third later frame In3 is a third later coordinate value (not shown in the drawings); interpolation or another computation function can generate a third estimated coordinate value from them (not shown in the drawings). Likewise, the previous and later coordinate values of the object O’ in the fourth previous frame Ip4 and the fourth later frame In4, the frames of the second frames Is2 closest to T2, can be converted into another estimated coordinate value. The third previous frame Ip3 can be stitched with the fourth later frame In4 into a stitched frame I”, on which the third estimated coordinate value of the object O and the other estimated coordinate value of the object O’ are displayed.
If one or more unused frames Iu exist between the first later frame In1 and the third previous frame Ip3, the object position estimation method of the invention may simply discard the data of the object O in the unused frames Iu; that is, the coordinate values of the object O in the unused frames Iu are not used in computing the third estimated coordinate value. An unused frame Iu is defined as any frame between the two preset time points T1 and T2 that is not used for object position estimation. Alternatively, the unused frame Iu may be combined with the third previous frame Ip3 and the third later frame In3 to compute the estimated coordinate value of the object O at the second preset time point T2 with a specific computation function; for example, the unused frame Iu is given a lower weight while the third previous frame Ip3 and the third later frame In3 are given higher weights, and all three frames are used to compute the estimated coordinate value. As yet another option, the method may analyze and compare the unused frame Iu with the third previous frame Ip3 and choose one of the two to pair with the third later frame In3 for object position estimation.
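One possible realization of the weighting variant described above is a weighted average of the object's observed positions, in which the bracketing frames Ip3 and In3 receive high weights and the unused frame Iu a low one. The patent names no specific function, so the weights and names below are illustrative assumptions.

```python
def weighted_estimate(samples, weights):
    """Weighted average of several (x, y) observations of the same object.
    Illustrative only: the patent describes weighting the unused frame
    lower than the bracketing frames but specifies no formula."""
    total = sum(weights)
    x = sum(w * s[0] for s, w in zip(samples, weights)) / total
    y = sum(w * s[1] for s, w in zip(samples, weights)) / total
    return (x, y)

# Positions of object O in Ip3, the unused frame Iu, and In3 (assumed data);
# Ip3 and In3 each weighted 9, the unused frame only 2.
print(weighted_estimate([(2.0, 2.0), (3.0, 1.0), (4.0, 4.0)], [9, 2, 9]))  # (3.0, 2.8)
```

Setting the unused frame's weight to zero recovers the discard-the-frame behavior of the first option, so the two variants can share one code path.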
In an embodiment of the invention, if the object position estimation device 10 is a server electrically connected to multiple cameras, it can set its own timestamp alignment frequency and stably compute the estimated coordinate values at each preset time point. If the object position estimation device 10 is a camera that receives the frame data of other cameras, it may interleave frame capture with object position estimation, for example computing the estimated coordinate value of one preset time point before moving on to the next, so the intervals between preset time points may differ slightly. The timestamp alignment frequency and the preset time points are determined by the hardware of the object position estimation device 10 and its computing performance; many variations are possible and are not described individually here.
In summary, the fields of view of the multiple cameras may partially overlap or not overlap at all, and their time synchronization is completed in advance. The object position estimation method and device of the invention set several preset time points at a specific frequency; after obtaining one piece of object metadata from the frame before and one from the frame after each preset time point for each camera, interpolation or another computation function is applied to estimate the object metadata at that preset time point, yielding time-aligned object metadata. Compared with the prior art, the invention overcomes the metadata timestamp gap between different cameras and effectively improves the accuracy of object tracking.
The above are only preferred embodiments of the invention and are not intended to limit it; various modifications and changes will be apparent to those skilled in the art. Any modification, equivalent substitution, or improvement made within the spirit and principles of the invention shall fall within its scope of protection.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811344830.0A CN111179305B (en) | 2018-11-13 | 2018-11-13 | Object position estimation method and object position estimation device thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111179305A CN111179305A (en) | 2020-05-19 |
CN111179305B true CN111179305B (en) | 2023-11-14 |
Family
ID=70649977
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811344830.0A Active CN111179305B (en) | 2018-11-13 | 2018-11-13 | Object position estimation method and object position estimation device thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111179305B (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7450735B1 (en) * | 2003-10-16 | 2008-11-11 | University Of Central Florida Research Foundation, Inc. | Tracking across multiple cameras with disjoint views |
CN101344965A (en) * | 2008-09-04 | 2009-01-14 | 上海交通大学 | Tracking system based on binocular camera |
CN101808230A (en) * | 2009-02-16 | 2010-08-18 | 杭州恒生数字设备科技有限公司 | Unified coordinate system used for digital video monitoring |
CN103716594A (en) * | 2014-01-08 | 2014-04-09 | 深圳英飞拓科技股份有限公司 | Panorama splicing linkage method and device based on moving target detecting |
CN104063867A (en) * | 2014-06-27 | 2014-09-24 | 浙江宇视科技有限公司 | Multi-camera video synchronization method and multi-camera video synchronization device |
CN104318588A (en) * | 2014-11-04 | 2015-01-28 | 北京邮电大学 | Multi-video-camera target tracking method based on position perception and distinguish appearance model |
CN104766291A (en) * | 2014-01-02 | 2015-07-08 | 株式会社理光 | Method and system for calibrating multiple cameras |
KR20160014413A (en) * | 2014-07-29 | 2016-02-11 | 주식회사 일리시스 | The Apparatus and Method for Tracking Objects Based on Multiple Overhead Cameras and a Site Map |
JP2016099941A (en) * | 2014-11-26 | 2016-05-30 | 日本放送協会 | System and program for estimating position of object |
CN105828045A (en) * | 2016-05-12 | 2016-08-03 | 浙江宇视科技有限公司 | Method and device for tracking target by using spatial information |
CN106023139A (en) * | 2016-05-05 | 2016-10-12 | 北京圣威特科技有限公司 | Indoor tracking and positioning method based on multiple cameras and system |
CN107343165A (en) * | 2016-04-29 | 2017-11-10 | 杭州海康威视数字技术股份有限公司 | A kind of monitoring method, equipment and system |
CN107613159A (en) * | 2017-10-12 | 2018-01-19 | 北京工业职业技术学院 | Image time calibration method and system |
CN108111818A (en) * | 2017-12-25 | 2018-06-01 | 北京航空航天大学 | Moving target active perception method and apparatus based on multiple-camera collaboration |
CN108734739A (en) * | 2017-04-25 | 2018-11-02 | 北京三星通信技术研究有限公司 | The method and device generated for time unifying calibration, event mark, database |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8335345B2 (en) * | 2007-03-05 | 2012-12-18 | Sportvision, Inc. | Tracking an object with multiple asynchronous cameras |
US10423164B2 (en) * | 2015-04-10 | 2019-09-24 | Robert Bosch Gmbh | Object position measurement with automotive camera using vehicle motion data |
Non-Patent Citations (1)
Title |
---|
目标跟踪技术综述;高文 等;《中国光学》;20140630;第7卷(第3期);第365-375页 * |
Also Published As
Publication number | Publication date |
---|---|
CN111179305A (en) | 2020-05-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020094091A1 (en) | Image capturing method, monitoring camera, and monitoring system | |
US10867166B2 (en) | Image processing apparatus, image processing system, and image processing method | |
US9277165B2 (en) | Video surveillance system and method using IP-based networks | |
US20110043691A1 (en) | Method for synchronizing video streams | |
US20140354826A1 (en) | Reference and non-reference video quality evaluation | |
US9215357B2 (en) | Depth estimation based on interpolation of inverse focus statistics | |
Sun et al. | VU: Edge computing-enabled video usefulness detection and its application in large-scale video surveillance systems | |
CN113837979B (en) | Live image synthesis method, device, terminal equipment and readable storage medium | |
US10593059B1 (en) | Object location estimating method with timestamp alignment function and related object location estimating device | |
CN112183431A (en) | Real-time pedestrian number statistical method and device, camera and server | |
WO2023169281A1 (en) | Image registration method and apparatus, storage medium, and electronic device | |
US20120008038A1 (en) | Assisting focusing method for face block | |
US20160255382A1 (en) | System and method to estimate end-to-end video frame delays | |
CN111179305B (en) | Object position estimation method and object position estimation device thereof | |
WO2016192467A1 (en) | Method and device for playing videos | |
TWI718437B (en) | Object location estimating method with timestamp alignment function and related object location estimating device | |
KR101597095B1 (en) | Processing monitoring data in a monitoring system | |
CN112565630B (en) | A video frame synchronization method for video splicing | |
JP7431514B2 (en) | Method and system for measuring quality of video call service in real time | |
CN109598276A (en) | Image processing apparatus and method and monitoring system | |
TWI423170B (en) | A method for tracing motion of object in multi-frame | |
JP3747230B2 (en) | Video analysis system | |
US10425460B2 (en) | Marking objects of interest in a streaming video | |
JP4333255B2 (en) | Monitoring device | |
TWI502979B (en) | Method of image motion estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||