JP2004013725A - Method and program for position detection image processing, and computer readable recording medium recording program for position detection image processing - Google Patents
- Publication number
- JP2004013725A (application JP2002169034A)
- Authority
- JP
- Japan
- Prior art keywords
- position detection
- luminance value
- value
- image processing
- gravity
- Prior art date
- 2002-06-10
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
- Closed-Circuit Television Systems (AREA)
- Image Analysis (AREA)
Abstract
Description
[0001]
TECHNICAL FIELD OF THE INVENTION
The present invention relates to a position detection image processing method for obtaining a center of gravity of an object to be measured, a position detection image processing program, and a computer-readable recording medium storing the position detection image processing program.
[0002]
[Prior art]
Conventionally, in position detection image processing methods used, for example, to measure the color shift among red (R), green (G), and blue (B) on a cathode ray tube such as a CRT (Cathode Ray Tube) display, a captured image of the object to be measured is converted into digital information composed of pixels having horizontal and vertical coordinate values and multi-tone luminance values, and the center of gravity of the object is obtained by calculating the center of gravity in the horizontal and vertical directions over the coordinate region of the pixels whose luminance values are larger than a preset lower threshold.
[0003]
In this center-of-gravity calculation, the sum over all pixels of the product of each pixel's luminance value and coordinate value is divided by the sum of the luminance values of all pixels.
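In symbols (our own notation, not from the patent), with Bi the luminance value of pixel i, xi its coordinate, and the sums taken over the pixels in the region above the lower threshold described in paragraph [0002], the conventional discrete centroid is:

```latex
X_g = \frac{\sum_i x_i B_i}{\sum_i B_i}
```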
[0004]
[Problems to be solved by the invention]
However, in the above-described position detection image processing method, the luminance value is discrete information given pixel by pixel. Therefore, when noise mixed into the image, for example during image capture, causes the luminance value of a contour pixel of the object, whose luminance is relatively close to the lower threshold, to drop below that threshold, the center-of-gravity position calculated from the individual pixel luminance values fluctuates relatively largely. As a result, the pixel resolution must be increased in order to improve the accuracy of detecting the center-of-gravity position of the object.
[0005]
The present invention has been made in view of these points, and an object of the present invention is to provide a position detection image processing method, a position detection image processing program, and a computer-readable recording medium storing such a program, that can improve the accuracy of position detection of an object to be measured without increasing the pixel resolution.
[0006]
[Means for Solving the Problems]
The present invention converts an image obtained by capturing an object to be measured into digital information composed of pixels having horizontal and vertical coordinate values and multi-tone luminance values, interpolates the luminance values between mutually adjacent pixels, either in a grid pattern in the horizontal and vertical directions or in a comb-like manner in one of the horizontal and vertical directions, to convert them into continuous information, and calculates the center of gravity of this continuous information to obtain the center of gravity of the object in the image. By interpolating the luminance values of the captured image between adjacent pixels in this way and calculating the center of gravity of the object from the resulting continuous information, the influence that noise mixed into the image has on the luminance values can be reduced, and the accuracy of position detection can be improved.
[0007]
BEST MODE FOR CARRYING OUT THE INVENTION
An embodiment of the position detection image processing method according to the present invention will be described below with reference to FIGS. 1 and 2.
[0008]
First, as shown in FIG. 1, an object to be measured is imaged by, for example, a CCD camera or a digital camera, and an image G of the object to be measured is obtained (step 1).
[0009]
Next, as shown in FIG. 2, the acquired image G is converted into digital information composed of pixels P having horizontal and vertical coordinate values, that is, an x coordinate x and a y coordinate y, and a multi-tone luminance value B having 0 and at least two further levels, for example 256 integer levels including 0 (step 2). The image G is composed of, for example, a total of 49 substantially rectangular pixels P, seven in the horizontal direction (x0 to x6) and seven in the vertical direction (y0 to y6). The luminance value B is converted for each of the red (R), green (G), and blue (B) color light components of the image G.
[0010]
Thereafter, the luminance values B between mutually adjacent pixels P are linearly interpolated along the horizontal direction for each y coordinate y, that is, interpolated in a comb-like manner in the horizontal direction, so that the luminance values B are converted into a continuous luminance value f(x) as continuous information (step 3). Here, let x1 and B1 be the x coordinate and the luminance value of an arbitrary pixel P, and let B2 be the luminance value of the pixel P that has the same y coordinate y and whose x coordinate is (x1 + 1).
(Equation 1)

f(x) = (B2 - B1)·(x - x1) + B1,  for x1 ≤ x ≤ x1 + 1

then gives the continuous luminance value f(x) on that interval.
[0011]
Then, the center of gravity of the object to be measured in the image G is obtained in the coordinate region where the continuous luminance value f(x) obtained in step 3 is larger than a preset lower threshold (step 4). At this time, let Xg be the x coordinate of the center-of-gravity position, and let x2 and x3 be, for example, the x coordinates of the intersections of the continuous luminance value f(x) with the lower threshold.
(Equation 2)

Xg = ( ∫ x·f(x) dx ) / ( ∫ f(x) dx ),  with both integrals taken from x2 to x3,

is then used to calculate the coordinate Xg of the center-of-gravity position. Because the continuous luminance value f(x) was obtained by linear interpolation, the denominator and the numerator of Equation 2 can be computed segment by segment: over a range x0 ≤ x ≤ x0 + 1, with f(x0) = B0 and f(x0 + 1) = B1, the two integrals are, respectively,

(Equation 3)

∫ f(x) dx = (B0 + B1) / 2

(Equation 4)

∫ x·f(x) dx = x0·(B0 + B1) / 2 + (B0 + 2·B1) / 6

so the integrals between adjacent pixels are easily calculated.
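As a concrete illustration, a minimal sketch of this comb-like interpolation and segment-by-segment centroid calculation is given below. It is our own code, not code from the patent; the function name, the NumPy usage, and the handling of a row that never exceeds the threshold are assumptions.

```python
import numpy as np

def continuous_centroid_1d(B, T1):
    """Centroid Xg of one pixel row using piecewise-linear (comb-like)
    interpolation of the luminance values and the closed-form segment
    integrals of Equations 1-4 (a sketch; names and interface are ours).

    B  : luminance values at integer x coordinates 0, 1, ..., N-1
    T1 : lower threshold; only the region where f(x) > T1 contributes
    """
    B = np.asarray(B, dtype=float)
    num = 0.0   # accumulates the numerator   of Equation 2:  integral of x*f(x)
    den = 0.0   # accumulates the denominator of Equation 2:  integral of f(x)

    for x0 in range(len(B) - 1):
        b0, b1 = B[x0], B[x0 + 1]
        if b0 <= T1 and b1 <= T1:          # segment entirely at or below the threshold
            continue
        slope = b1 - b0                    # Equation 1: f(x) = b0 + slope*(x - x0)
        lo, hi = float(x0), float(x0 + 1)
        if min(b0, b1) <= T1:              # segment crosses the threshold
            xc = x0 + (T1 - b0) / slope    # crossing point where f(xc) = T1
            lo, hi = (xc, hi) if b0 <= T1 else (lo, xc)
        f_lo = b0 + slope * (lo - x0)
        f_hi = b0 + slope * (hi - x0)
        # Exact integrals of the linear segment over [lo, hi]
        den += 0.5 * (f_lo + f_hi) * (hi - lo)                    # Equation 3 applied to [lo, hi]
        num += slope * (hi**3 - lo**3) / 3.0 \
             + (b0 - slope * x0) * (hi**2 - lo**2) / 2.0          # Equation 4 applied to [lo, hi]

    return float(num / den) if den > 0 else None

# Example with the row used in the worked example below:
# continuous_centroid_1d([10, 14, 18, 20, 18, 14, 10], T1=13)  -> approximately 3.0
```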
[0012]
Further, by calculating the coordinate Xg of the center-of-gravity position for each of the red (R), green (G), and blue (B) color light components of the image G, the centers of gravity of the red (R), green (G), and blue (B) components are obtained, and the color shift between them is measured.
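The per-channel use just described might look like the following (an illustrative sketch that reuses the continuous_centroid_1d function from the listing above; the channel values are invented for illustration and are not taken from the patent):

```python
# Illustrative only: per-channel centroids along one horizontal line and the
# resulting horizontal misregistration between two channels.
rows = {
    "R": [9, 13, 19, 21, 18, 13, 9],      # invented luminance values
    "G": [10, 14, 18, 20, 18, 14, 10],
    "B": [10, 13, 17, 20, 19, 15, 10],
}
T1 = 13
centroids = {c: continuous_centroid_1d(B, T1) for c, B in rows.items()}
shift_r_vs_b = centroids["R"] - centroids["B"]   # horizontal R-vs-B color shift
```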
[0013]
As described above, according to this embodiment, the luminance values B of the image G obtained by capturing the object to be measured are interpolated in a comb-like manner in the horizontal direction between mutually adjacent pixels P and converted into the continuous luminance value f(x), and the coordinate Xg of the center-of-gravity position of the object is calculated from this continuous luminance value f(x). As a result, even when noise is mixed into the image G during image capture and the luminance value B of a pixel P near the contour of the object changes, the influence of the noise on the continuous luminance value f(x) can be reduced, and the accuracy of detecting the position of the object in the image G can be improved without increasing the pixel resolution or enlarging the area of the object.
[0014]
Further, because the luminance values B between the pixels P are interpolated linearly, the denominator and the numerator of Equation 2 can easily be calculated with Equations 3 and 4, respectively, so the computation time of an arithmetic unit such as a computer can be reduced and the processing can be speeded up.
[0015]
In the above embodiment, as shown in FIG. 3, when the luminance distribution of the continuous luminance value f(x) obtained in step 3 is not uniform, continuous luminance values f(x) larger than a preset upper threshold T2, which is set larger than the lower threshold T1, may be replaced with the upper threshold T2. In this case, the influence on the center-of-gravity calculation of the object caused by the continuous luminance value f(x) becoming non-uniform due to relatively large noise mixed into the image G, for example during image capture, can also be reduced, and the accuracy of detecting the position of the object in the image G can be improved further without increasing the pixel resolution.
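A minimal sketch of this upper-threshold variant follows (our own code and naming; dense sampling of the interpolated values stands in for the closed-form segment integrals):

```python
import numpy as np

def continuous_centroid_clamped(x, B, T1, T2, dx=1e-4):
    """Centroid with the optional upper threshold T2: interpolated luminance
    values larger than T2 are replaced by T2 before the centroid is taken."""
    xs = np.arange(x[0], x[-1] + dx, dx)
    fs = np.minimum(np.interp(xs, x, B), T2)   # clamp f(x) at the upper threshold
    m = fs > T1                                # keep only the region above the lower threshold
    return float((xs[m] * fs[m]).sum() / fs[m].sum())
```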
[0016]
Although the interpolation of the luminance values B between the pixels P is performed as linear interpolation to simplify and speed up the calculation, curve interpolation may be used when the accuracy of detecting the coordinate Xg of the center-of-gravity position is to be improved further.
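One possible form of such curve interpolation is a cubic spline through the pixel values, sketched below with SciPy (our own choice; the patent does not specify a particular curve):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def continuous_centroid_spline(x, B, T1, dx=1e-4):
    """Curve-interpolation variant: a cubic spline instead of straight line
    segments; dense sampling replaces the simple closed-form integrals of
    Equations 3 and 4, which no longer apply."""
    f = CubicSpline(x, B)
    xs = np.arange(x[0], x[-1] + dx, dx)
    fs = f(xs)
    m = fs > T1
    return float((xs[m] * fs[m]).sum() / fs[m].sum())
```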
[0017]
Further, when interpolating the luminance values B between the pixels P, the interpolation may be performed in a comb-like manner along the vertical direction of the image G, that is, along the y coordinate y, for each x coordinate x. Alternatively, the luminance values B between the pixels P may be interpolated in a grid pattern in both the horizontal and vertical directions of the image G, and the coordinates of the center-of-gravity position may be obtained in both the horizontal and vertical directions, thereby improving the accuracy of the center-of-gravity calculation of the object in the image G even further.
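A minimal sketch of this two-dimensional grid variant (our own code; separable bilinear interpolation and dense sampling are assumptions made for brevity):

```python
import numpy as np

def continuous_centroid_2d(B, T1, dx=0.01):
    """Two-dimensional ("grid") variant: bilinear interpolation of the pixel
    luminances, then the centroid (Xg, Yg) of the region above the lower
    threshold T1. B is a 2-D array indexed as B[y, x]."""
    B = np.asarray(B, dtype=float)
    ny, nx = B.shape
    xs = np.arange(0, nx - 1 + dx, dx)
    ys = np.arange(0, ny - 1 + dx, dx)
    # Separable bilinear interpolation: first along x for every pixel row...
    rows = np.array([np.interp(xs, np.arange(nx), B[j]) for j in range(ny)])
    # ...then along y for every interpolated column.
    grid = np.array([np.interp(ys, np.arange(ny), rows[:, i]) for i in range(len(xs))]).T
    Y, X = np.meshgrid(ys, xs, indexing="ij")
    m = grid > T1
    w = grid[m]
    return float((X[m] * w).sum() / w.sum()), float((Y[m] * w).sum() / w.sum())
```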
[0018]
The present invention can be used not only for measuring the color misregistration of red (R), green (G), and blue (B) but also for detecting the position of the center of gravity of a general object to be measured.
[0019]
Furthermore, by causing a computer to execute each step of the above embodiment as a position detection image processing program, the position of the object to be measured can be detected automatically with improved accuracy without increasing the pixel resolution.
[0020]
Further, by recording this position detection image processing program on a recording medium such as an optical disk or a magnetic disk in a computer-readable form, the program can be run on other computers as well, and the same operations and effects as the steps of the above embodiment can be obtained.
[0021]
[Example]
Hereinafter, a specific example of calculating the center of gravity of the object to be measured by the position detection image processing method of the above embodiment will be described with reference to FIGS. 4 to 7.
[0022]
First, the coordinate Xg of the center-of-gravity position of the image G shown in FIG. 2 is calculated by the conventional position detection image processing method. With the lower threshold T1 set to 13, and with the part of the image G shown in FIG. 2 converted into the discrete graph shown in FIG. 6, the coordinate Xg of the center-of-gravity position is

(Equation 5)

Xg = (1·14 + 2·18 + 3·20 + 4·18 + 5·14) / (14 + 18 + 20 + 18 + 14) = 252 / 84 = 3.0.

Here, as shown in FIG. 7, for example, when the luminance value B of the pixel P at x coordinate 1 changes from 14 to 12 due to the influence of noise, the coordinate Xg of the center-of-gravity position becomes
(Equation 6)

Xg = (2·18 + 3·20 + 4·18 + 5·14) / (18 + 20 + 18 + 14) = 238 / 70 = 3.4.
And the amount of change in the coordinate Xg of the center of gravity is 0.4.
[0023]
On the other hand, the center of gravity is now calculated by the position detection image processing method of the above embodiment.
[0024]
First, as shown in FIG. 4, a part of the image G shown in FIG. 2 is converted into a continuous graph. From Equation 1, the continuous luminance value f(x) is f(x) = 4x + 10 in the region where the x coordinate is 0 to 1. Calculating similarly, f(x) = 4x + 10 where x is 1 to 2, f(x) = 2x + 14 where x is 2 to 3, f(x) = -2x + 26 where x is 3 to 4, f(x) = -4x + 34 where x is 4 to 5, and f(x) = -4x + 34 where x is 5 to 6.
[0025]
The x coordinates of the intersections of the continuous luminance value f(x) with the lower threshold T1 are 0.75 and 5.25, respectively. When the coordinate Xg of the center-of-gravity position is calculated with Equations 2, 3, and 4 in the region where the x coordinate is 0.75 to 5.25,
(Equation 7)

Xg = 230.25 / 76.75 = 3.0

is obtained. Here, for example, as shown in FIG. 5, when the luminance value B of the pixel P at x coordinate 1 changes from 14 to 12 due to the influence of noise, the continuous luminance value f(x) in the region where the x coordinate is 1 to 2 changes, by Equation 1, to f(x) = 6x + 6. With the lower threshold T1 of 13, the x coordinates of the intersections of the continuous luminance value f(x) with the lower threshold T1 then become 1.167 and 5.25, respectively.
[0026]
Then, when the coordinate Xg of the center-of-gravity position of the object is calculated with Equations 2, 3, and 4 in the region where the x coordinate is 1.167 to 5.25,

(Equation 8)

Xg ≈ 223.699 / 70.292 ≈ 3.182

is obtained, and the amount of change in the coordinate Xg of the center-of-gravity position is suppressed to 0.182.
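The figures in this example can be reproduced numerically with the short cross-check below (our own code, not from the patent; the row of luminance values 10, 14, 18, 20, 18, 14, 10 is read off the piecewise expressions for f(x) given in paragraph [0024]):

```python
import numpy as np

x = np.arange(7)                                     # pixel x coordinates 0..6
B_clean = np.array([10, 14, 18, 20, 18, 14, 10.0])   # row implied by f(x) above
B_noisy = B_clean.copy()
B_noisy[1] = 12                                      # noise: pixel x=1 drops 14 -> 12
T1 = 13

def discrete_centroid(B):
    m = B > T1                                       # conventional method: whole pixels only
    return float((x[m] * B[m]).sum() / B[m].sum())

def continuous_centroid(B, dx=1e-4):
    xs = np.arange(0.0, 6.0 + dx, dx)
    fs = np.interp(xs, x, B)                         # piecewise-linear f(x), Equation 1
    m = fs > T1
    return float((xs[m] * fs[m]).sum() / fs[m].sum())  # Riemann-sum form of Equation 2

print(discrete_centroid(B_clean), discrete_centroid(B_noisy))      # 3.0 -> 3.4   (change 0.4)
print(continuous_centroid(B_clean), continuous_centroid(B_noisy))  # ~3.000 -> ~3.182 (change ~0.182)
```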
[0027]
As this example shows, using the position detection image processing method of the above embodiment reduces the influence of noise compared with the conventional position detection image processing method, and improves the accuracy of detecting the coordinate Xg of the center-of-gravity position of the object to be measured.
[0028]
[Effect of the invention]
According to the present invention, the luminance values of an image obtained by capturing an object to be measured are interpolated between mutually adjacent pixels, either in a grid pattern in the horizontal and vertical directions or in a comb-like manner in one of the horizontal and vertical directions, and converted into continuous information, and the center of gravity of the object is calculated from this continuous information. Consequently, even when noise is mixed into the image, the influence of the noise on the luminance values can be reduced, so the accuracy of position detection of the object can be improved without increasing the pixel resolution.
[Brief description of the drawings]
FIG. 1 is a flowchart illustrating an embodiment of a position detection image processing method according to the present invention.
FIG. 2 is an explanatory diagram showing an image in the position detection image processing method according to the first embodiment.
FIG. 3 is a graph showing an example of a luminance value interpolated between pixels of an image in another example of the above-described position detection image processing method.
FIG. 4 is a graph showing an example of a luminance value interpolated between pixels of an image in the position detection image processing method.
FIG. 5 is a graph showing another example of a luminance value interpolated between pixels of an image in the position detection image processing method according to the first embodiment.
FIG. 6 is a graph showing an example of a luminance value of each pixel of an image in a conventional position detection image processing method.
FIG. 7 is a graph showing another example of the luminance value of each pixel of the image in the position detection image processing method.
[Explanation of symbols]
B: luminance value
G: image
P: pixel
T1: lower threshold
T2: upper threshold
x: x coordinate (coordinate value)
y: y coordinate (coordinate value)
Claims (11)

1. A position detection image processing method comprising: capturing an image of an object to be measured; converting the captured image into digital information composed of pixels having horizontal and vertical coordinate values and multi-tone luminance values; interpolating the luminance values between mutually adjacent pixels, either in a grid pattern in the horizontal and vertical directions or in a comb-like manner in one of the horizontal and vertical directions, to convert them into continuous information; and calculating the center of gravity of this continuous information to obtain the center of gravity of the object in the image.

2. The position detection image processing method according to claim 1, wherein, when the center of gravity of the object in the image is obtained, the center of gravity is calculated in the coordinate region of the continuous information whose value is larger than a preset lower threshold.

3. The position detection image processing method according to claim 2, wherein, when the luminance values between mutually adjacent pixels are interpolated and converted into continuous information, continuous information larger than an upper threshold, which is set larger than the preset lower threshold, is replaced with the upper threshold.

4. The position detection image processing method according to any one of claims 1 to 3, wherein, when the center of gravity of the object in the image is obtained, the integral, over the coordinate region of the continuous information whose value is larger than a preset lower threshold, of the product of the luminance value at each point and the coordinate value of that point is divided by the integral of the luminance value at each point over the same region.

5. The position detection image processing method according to any one of claims 1 to 4, wherein, when the luminance values between mutually adjacent pixels are interpolated and converted into continuous information, the luminance values between the pixels are calculated by linear interpolation.

6. A position detection image processing program for causing a computer to execute: a step of capturing an image of an object to be measured; a step of converting the captured image into digital information composed of pixels having horizontal and vertical coordinate values and multi-tone luminance values; a step of interpolating the luminance values between mutually adjacent pixels, either in a grid pattern in the horizontal and vertical directions or in a comb-like manner in one of the horizontal and vertical directions, to convert them into continuous information; and a step of calculating the center of gravity of this continuous information to obtain the center of gravity of the object in the image.

7. The position detection image processing program according to claim 6, wherein, in the step of obtaining the center of gravity of the object in the image, the center of gravity is calculated in the coordinate region of the continuous information whose value is larger than a preset lower threshold.

8. The position detection image processing program according to claim 7, wherein, in the step of interpolating the luminance values between mutually adjacent pixels and converting them into continuous information, continuous information larger than an upper threshold, which is set larger than the preset lower threshold, is replaced with the upper threshold.

9. The position detection image processing program according to any one of claims 6 to 8, wherein, in the step of obtaining the center of gravity of the object in the image, the integral, over the coordinate region of the continuous information whose value is larger than a preset lower threshold, of the product of the luminance value at each point and the coordinate value of that point is divided by the integral of the luminance value at each point over the same region.

10. The position detection image processing program according to any one of claims 6 to 9, wherein, in the step of interpolating the luminance values between mutually adjacent pixels and converting them into continuous information, the luminance values between the pixels are calculated by linear interpolation.

11. A computer-readable recording medium on which the position detection image processing program according to any one of claims 6 to 10 is recorded in a computer-readable manner.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002169034A JP2004013725A (en) | 2002-06-10 | 2002-06-10 | Method and program for position detection image processing, and computer readable recording medium recording program for position detection image processing |
Publications (1)
Publication Number | Publication Date |
---|---|
JP2004013725A (en) | 2004-01-15 |
Family
ID=30435782
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2002169034A Pending JP2004013725A (en) | 2002-06-10 | 2002-06-10 | Method and program for position detection image processing, and computer readable recording medium recording program for position detection image processing |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP2004013725A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7499600B2 (en) | Method for characterizing a digital imaging system | |
JP6363863B2 (en) | Information processing apparatus and information processing method | |
JP5567922B2 (en) | Image processing apparatus and control method thereof | |
US20180176440A1 (en) | Structured-light-based exposure control method and exposure control apparatus | |
JP2018087732A (en) | Wire rope measurement device and method | |
JP2014092508A (en) | Test chart, and method of use for the same | |
JP7257231B2 (en) | MTF measuring device and its program | |
JP2004134861A (en) | Resolution evaluation method, resolution evaluation program, and optical apparatus | |
JP5446285B2 (en) | Image processing apparatus and image processing method | |
JP6742180B2 (en) | MTF measuring device and its program | |
TW202232936A (en) | Resolution measurement method, resolution measurement system, and program | |
JP6923915B2 (en) | A computer-readable recording medium that records measurement methods, measuring devices, and measurement programs that can simultaneously acquire the three-dimensional shape of a color object and color information. | |
JP2004013725A (en) | Method and program for position detection image processing, and computer readable recording medium recording program for position detection image processing | |
JP4534992B2 (en) | Pixel position acquisition method | |
JP4293191B2 (en) | Pixel position acquisition method, image processing apparatus, program for executing pixel position acquisition method on a computer, and computer-readable recording medium on which the program is recorded | |
JP2006125896A (en) | Flat panel display inspection device | |
JP2003331263A (en) | Camera image-real space coordinate association map generation method, camera image-real space coordinate association map generation device, program, and recording medium | |
JP2018116039A (en) | Mtf measurement device and program of the same | |
JP2020025224A (en) | Camera evaluation value measuring device and camera evaluation value measuring method | |
JP2001258054A (en) | Image quality inspection apparatus and luminance center of gravity detection apparatus used therefor | |
WO2016174701A1 (en) | Endoscopic device and method for measuring three-dimensional shape | |
JPH0552520A (en) | Digital image perimeter measurement device | |
JP2022046063A (en) | Three-dimensional shape measuring method and three-dimensional shape measuring device | |
CN118552413A (en) | Pixel precision improving method, system, equipment and medium | |
JPH05168052A (en) | Raster edge detecting method |