DESCRIPTION

AN IMAGE PICKUP APPARATUS AND AN OBJECT TO BE IMAGED
BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to an image pickup apparatus operable to pick up an image by intermittently emitting light to an object, and to the related techniques thereof.
2. Description of the Related Art

The following image extracting apparatus is described in a patent publication (Jpn. unexamined patent publication No. 10-222285). This image extracting apparatus comprises a first light receiving unit and a second light receiving unit, which perform their light receiving operations at different timings. A light emitting unit emits light when the first light receiving unit is in the light receiving state, and does not emit light when the second light receiving unit is in the light receiving state. The first light receiving unit therefore receives both light reflected from an object and outside light, while the second light receiving unit receives only outside light. A difference calculating unit then obtains the difference between the image picked up by the first light receiving unit and the image picked up by the second light receiving unit, and thereby extracts an image of the target object. In this case, the light emitting unit emits near-infrared light, and the first and second light receiving units receive only near-infrared light through a near-infrared light filter. In this way, it is possible to cut off outside light including visible light and far-infrared light.

However, this patent publication does not describe in detail the arrangement and mechanism of the optical system, including the light emitting unit, the first light receiving unit, the second light receiving unit and the near-infrared light filter. Depending on the mechanism and arrangement, the first light receiving unit might receive not only light reflected by the object but also direct light from the light emitting unit. This might adversely influence the extraction of the image of the object.
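The difference-based extraction scheme described in this publication can be sketched in code. The following is a minimal illustration only, assuming 8-bit grayscale frames held in NumPy arrays and a hypothetical noise threshold; it is not the implementation of the cited publication:

```python
import numpy as np

def extract_object(lit_frame: np.ndarray, unlit_frame: np.ndarray,
                   threshold: int = 16) -> np.ndarray:
    """Isolate the object by subtracting the outside-light-only frame.

    lit_frame   : frame taken while the light emitting unit is on
                  (reflected light + outside light)
    unlit_frame : frame taken while the light emitting unit is off
                  (outside light only)
    """
    # Signed subtraction so the outside-light component cancels;
    # negative results (pure noise) are clipped to zero.
    diff = lit_frame.astype(np.int16) - unlit_frame.astype(np.int16)
    diff = np.clip(diff, 0, 255).astype(np.uint8)
    # A small threshold suppresses residual sensor noise.
    return np.where(diff >= threshold, diff, 0)
```

Because the outside-light term appears in both frames, it cancels in the subtraction and only the reflected component survives.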
SUMMARY OF THE INVENTION
It is an object of the present invention to provide an image pickup apparatus which can avoid receiving light directly from a light emitting unit when imaging an object by intermittently emitting light to it, and the related techniques thereof.

In accordance with the first aspect of the present invention, an image pickup apparatus comprises: a plurality of light emitting elements each of which is operable to emit light at predetermined intervals; a filter operable to transmit light in a certain wavelength range; a lens unit operable to focus light; an imaging unit operable to receive, through said filter and said lens unit, light reflected from an object which is irradiated by said light emitting elements, and to generate an image signal of said object; a light leakage prevention unit having an aperture portion through which a light incident surface of said filter is exposed and around which a plurality of surrounding portions are located; and a light emitting element holder unit operable to hold said plurality of light emitting elements, wherein said filter is secured such that its light incident surface is exposed through said aperture portion of said light leakage prevention unit, wherein said light emitting elements are inserted respectively into holes which are opened in said surrounding portions to expose the light emitting surfaces of said light emitting elements, and are fixedly supported by said light emitting element holder unit, wherein said lens unit is secured such that its light incident surface, located at an end of said lens unit, faces the light emitting surface of said filter, and wherein said imaging unit is secured at a location on the optical axis of said lens unit.

In accordance with this configuration, the light leakage prevention unit and the infrared filter are separately provided.
In addition, since the light emitting elements are inserted into the respective insertion holes of the surrounding portions, the outer surface of the base end of the light emitting portion of each light emitting element is surrounded by the inner wall of the surrounding portion. Consequently, it is possible to avoid leakage of infrared light from the light emitting elements, and therefore to prevent the imaging unit from receiving infrared light directly from the light emitting elements. If the light leakage prevention unit and the infrared filter were formed as one piece, light emitted from the light emitting elements could enter the infrared filter directly via the light leakage prevention unit; since the imaging unit would receive light in this way, it would pick up unnecessary images. Furthermore, since the light leakage prevention unit and the infrared filter are separately provided, it is possible to replace only the filter with ease when a defect of the filter is found. Therefore, it is possible to reduce the cost.

In the above image pickup apparatus, said light leakage prevention unit is black at least in the inner walls of said surrounding portions. In accordance with this configuration, since the inside surfaces of the surrounding portions are black, the surrounding portions can absorb more light from the light emitting elements.
Consequently, it is possible to efficiently prevent the imaging unit from receiving light directly from the light emitting elements.

In the above image pickup apparatus, the hole of each surrounding portion of said light leakage prevention unit is made in the form of an inverted cone with the point cut off, through which the light emitting portion of said light emitting element is exposed. In accordance with this configuration, the lighting area of the light emitting element can be expanded as compared with the case where the shape of the surrounding portion is cylindrical.

The above image pickup apparatus comprises: an upper housing and a lower housing, wherein both said upper housing and said lower housing have a cutout area in the front side into which said light leakage prevention unit is fitted, wherein groove portions are formed along opposite sides of the cutout area, and wherein said upper housing and said lower housing are joined while side edges of said light leakage prevention unit are fitted into said groove portions. In accordance with this configuration, since the light leakage prevention unit is not firmly fixed, it is possible to conduct inspections such as operation tests before the image pickup apparatus is set in the housing. Therefore, when a defect is found, it is possible to analyze, repair and replace the parts with ease. On the other hand, if the light leakage prevention unit were firmly fixed, an inspection could not be performed until the image pickup apparatus was set in the housing. Therefore, troublesome work such as disassembly and analysis would be needed when a defect is found. In addition, it would be impossible to analyze the apparatus in the same state; even though one defect is fixed, another defect might be found after the image pickup apparatus is set back in the housing again.

The image pickup apparatus further comprises: a circuit board on which the imaging unit is mounted, wherein said lens unit is located above said circuit board in such a manner as to cover said imaging unit, wherein said light emitting element holder unit is supported between said circuit board and said light leakage prevention unit, and wherein said circuit board is fixed inside said upper housing and said lower housing in parallel with the surface of said light leakage prevention unit.

In accordance with the second aspect of the present invention, an object to be imaged by the image pickup apparatus of the first aspect comprises: a main body in the form of a regular polyhedron; and a retroreflective sheet operable to reflect light emitted from said light emitting elements, wherein said retroreflective sheet is attached to a surface of said main body. In accordance with this configuration, since the retroreflective sheet is attached to the surface of the regular polyhedron main body, the retroreflective sheet can be prepared in the form of a development of the regular polyhedron, or as a set of segments obtained by dividing such a development in units of the constituent polygon. Therefore, it is possible to cut these shapes out of a large retroreflective raw sheet while wasting as little of the raw sheet as possible.

In the above object to be imaged, the main body is a regular icosahedron.
In accordance with this configuration, since the retroreflective sheet is attached to the surface of the regular icosahedron main body, the retroreflective sheet can be prepared in the form of a development of the regular icosahedron, or as a set of segments obtained by dividing the development in units of the constituent polygon. Therefore, it is possible to cut the retroreflective sheet out of a large retroreflective raw sheet in these shapes while wasting as little of the raw sheet as possible.

In the above object to be imaged, the main body is covered with a plurality of said retroreflective sheet segments which are obtained by dividing the development of the regular polyhedron in units of the constituent polygon. In accordance with this configuration, it is easier to attach the retroreflective sheet to the main body.
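The economy of cutting the sheet as development segments can be checked with a short calculation. A regular icosahedron has 20 equilateral triangular faces, so dividing its development into five congruent parallelogram segments (as in the embodiment described later) gives four triangles per segment and covers the body exactly. The edge length below is arbitrary and chosen only for illustration:

```python
import math

FACES = 20      # faces of a regular icosahedron (equilateral triangles)
SEGMENTS = 5    # parallelogram segments cut from the development

triangles_per_segment = FACES // SEGMENTS  # 4 triangles per parallelogram

def face_area(edge: float) -> float:
    """Area of one equilateral triangular face with the given edge length."""
    return math.sqrt(3.0) / 4.0 * edge ** 2

def segment_area(edge: float) -> float:
    """Area of one parallelogram segment (four unit triangles)."""
    return triangles_per_segment * face_area(edge)

# The five segments together equal the surface area of the icosahedron,
# so they cover the body with no leftover material.
edge = 1.0
total_segments = SEGMENTS * segment_area(edge)
total_surface = FACES * face_area(edge)
```

Since each segment is a straight-sided parallelogram, the segments also nest on a flat raw sheet with little waste, which is the point made above.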
BRIEF DESCRIPTION OF THE DRAWINGS

The aforementioned and other features and objects of the present invention and the manner of attaining them will become more apparent, and the invention itself will be best understood, by reference to the following description of a preferred embodiment taken in conjunction with the accompanying drawings, wherein:

Fig. 1 is a view showing the overall configuration of a game system in accordance with the embodiment of the present invention.
Fig. 2A is a front view of the game apparatus of Fig. 1.
Fig. 2B is a plan view of the game apparatus of Fig. 1.
Fig. 3 is an exploded perspective view of an optical unit including the light leakage prevention member, the infrared filter and the infrared emitting diodes of Fig. 2A.
Fig. 4 is an exploded perspective view, from the back side, of the optical unit of Fig. 3.
Fig. 5 is a plan view of the light leakage prevention member of Fig. 3.
Fig. 6 is a cross-sectional view of Fig. 5 along the line B-B.
Fig. 7 is a cross-sectional view of Fig. 2B along the line A-A.
Fig. 8 is a perspective view of the lower housing of Fig. 2A.
Fig. 9 is a plan view of the object of Fig. 1.
Fig. 10A is a perspective view of the inner surface of the upper housing constructing the inner shell of Fig. 9.
Fig. 10B is a perspective view of the inner surface of the lower housing constructing the inner shell of Fig. 9.
Fig. 11A is a perspective view of the inner shell of Fig. 9.
Fig. 11B is a vertex-based plan view of the inner shell of Fig. 9.
Fig. 11C is a side-based plan view of the inner shell of Fig. 9.
Fig. 11D is a side-face-based plan view of the inner shell of Fig. 9.
Fig. 12 is a view showing an example of the retroreflective sheets attached on the surface of the inner shell of Fig. 9.
Fig. 13A is a perspective view of an upper housing constructing the outer shell of Fig. 9.
Fig. 13B is a perspective view of a lower housing constructing the outer shell of Fig. 9.
Fig. 14 is a cross-sectional view of Fig. 9 along the line C-C.
Fig. 15 is a view showing the electrical construction of the game apparatus of Fig. 1.
Fig. 16 is a diagram showing the configuration for inputting the pixel data from the image sensor to the high speed processor of Fig. 15, and an LED driver circuit.
Fig. 17 is a timing chart illustrating the process of inputting the pixel data from the image sensor to the high speed processor.
Fig. 18 is an enlarged view of a portion of the timing chart shown in Fig. 17.
Fig. 19 is a flowchart showing the overall process flow of the game apparatus of Fig. 1.
Fig. 20 is a flowchart showing the process flow of the sensor initial setting process performed in step S1 of Fig. 19.
Fig. 21 is a flowchart showing the process flow of the command transmitting process of step S31 of Fig. 20.
Fig. 22A is a timing diagram showing the register setting clock RCLK of Fig. 16.
Fig. 22B is a timing diagram showing the register data of Fig. 16.
Fig. 23 is a flowchart showing the process flow of the register setting process of step S33 of Fig. 20.
Fig. 24 is a flowchart showing the process flow of the imaging process of step S3 of Fig. 19.
Fig. 25 is a flowchart showing the process flow of the pixel data aggregation acquiring process of step S61 of Fig. 24.
Fig. 26 is a flowchart showing the process flow of the pixel data acquiring process of step S76 of Fig. 25.
Fig. 27 is a flowchart showing the process flow of the target area extracting process of step S5 of Fig. 19.
Fig. 28 is a flowchart showing the process flow of the target point extracting process of step S6 of Fig. 19.
Fig. 29 is a view showing an example of the game screen displayed by the game apparatus of Fig. 1.
DESCRIPTION OF THE PREFERRED EMBODIMENTS

In what follows, an embodiment of this invention will be explained in conjunction with the accompanying drawings. Meanwhile, like references indicate the same or functionally similar elements throughout the respective drawings, and redundant explanation is therefore not repeated.

Fig. 1 is a view showing the overall configuration of a game system in accordance with the embodiment of the present invention. As illustrated in Fig. 1, this game system comprises a game apparatus 1 including an image pickup unit, an object 3 and a television monitor 5. A direct current power voltage is applied to the game apparatus 1 by an AC adapter 11. Alternatively, instead of using the AC adapter 11, the game apparatus 1 can be supplied with the direct current power voltage from batteries (not shown). A screen 6 is provided on the front surface of the television monitor 5. The television monitor 5 and the game apparatus 1 are connected with each other by an AV cable 9.

The game apparatus 1 is provided with the built-in image pickup unit (described hereinafter), which picks up an image of the object 3. More specifically, four infrared emitting diodes 23 intermittently emit infrared light. The infrared light emitted from the infrared emitting diodes 23 is reflected by retroreflective sheets (described hereinafter) provided on the object 3, and the return light is input to an image sensor (described hereinafter) placed behind an infrared filter 21. In this way, the object 3 is intermittently imaged. Therefore, the game apparatus 1 can obtain intermittent image signals of the object 3 as it is moved by a player. The game apparatus 1 analyzes the image signals, and reflects the result in the game process.

Fig. 2A is a front view of the game apparatus 1 of Fig. 1. Fig. 2B is a plan view of the game apparatus 1 of Fig. 1. As illustrated in Fig. 2A, a light leakage prevention member 25 is secured between an upper housing 13 and a lower housing 15 so as to be nipped therebetween. The surface of the infrared filter 21 is exposed at the center part of the light leakage prevention member 25. In addition, the light emitting portions of the four infrared emitting diodes 23 are also exposed from the light leakage prevention member 25 around the infrared filter 21. As illustrated in Fig. 2B, a power supply lamp 19 (LED: light emitting diode), which emits light when a power switch 17 is turned on, is provided on the surface of the game apparatus 1.

Fig. 3 is an exploded perspective view of an optical unit including the light leakage prevention member 25, the infrared filter 21 and the infrared emitting diodes 23 of Fig. 2A. Fig. 4 is an exploded perspective view, from the back side, of the optical unit of Fig. 3. As illustrated in these drawings, the optical unit includes the light leakage prevention member 25, the infrared filter 21, the infrared emitting diodes 23, an LED holding member 57, a lens holder 29 and a unit base 27. A lens unit consists of the lens holder 29 and the unit base 27. There is a circular aperture 59 at the center of the light leakage prevention member 25. Four surrounding portions 43 are formed around the circular aperture 59. The surrounding portions 43, which are all cylindrical, surround the respective infrared emitting diodes 23. They stick out from four areas near the corners of the back surface of the light leakage prevention member 25.

Fig. 5 is a plan view of the light leakage prevention member 25 of Fig. 3. Fig. 6 is a cross-sectional view of Fig. 5 along the line B-B. As illustrated in Fig. 6, the infrared emitting diodes 23 are inserted into the respective surrounding portions 43 from the back side of the light leakage prevention member 25. The openings of the surrounding portions 43 widen toward the end, and penetrate from the back surface to the front surface of the light leakage prevention member 25.
Therefore, the light emitting portions of the infrared emitting diodes 23 are exposed through these widened openings. The irradiation range of each infrared emitting diode 23 is determined by the inclination of the widened end part of its opening.

Returning to Fig. 3 and Fig. 4, the infrared filter 21, which transmits only infrared light, is a disc-shaped filter. The infrared filter fits the shape of the aperture 59 of the light leakage prevention member 25. An aperture 61 is formed at the center of the LED holding member 57. The four infrared emitting diodes 23 are placed around the aperture 61. The cylindrical lens holder 29 is attached to the unit base 27.

Fig. 7 is a cross-sectional view of Fig. 2B along the line A-A. As illustrated in Fig. 7, a concave lens 31 is placed in the lens holder 29 so as to face the infrared filter
21 and parallel to the image sensor 37 attached to a circuit board 39. In addition, a convex lens 33 is also placed in the lens holder 29 so as to face, and be parallel to, the image sensor 37. There is a cavity 35 (an optical path) between the concave lens 31 and the convex lens 33. The infrared light transmitted through the infrared filter 21 passes through the concave lens 31, the cavity 35 and the convex lens 33, and is then detected by the image sensor 37. The image sensor 37 is a low-resolution CMOS image sensor (for example, 32 pixels × 32 pixels, gray scale). Alternatively, the image sensor 37 can be replaced by a higher resolution image sensor or another device such as a CCD. The unit base 27, to which the lens holder 29 is attached, is secured to the circuit board 39 using screws. Fixing parts
22 of the infrared filter 21 are welded to the surface of the LED holding member 57, so that the filter is firmly fixed to the LED holding member 57. The LED holding member 57, to which the infrared filter 21 is welded, is secured by being nipped between the light leakage prevention member 25 and the circuit board 39. In this case, the surface of the infrared filter 21 is exposed through the aperture 59 of the light leakage prevention member 25. In addition, the lens holder 29 is inserted into the aperture 61 of the LED holding member 57, so that the concave lens 31 comes close to the back side of the infrared filter 21. Supporting portions 28 sticking out from the back side of the LED holding member 57 touch the circuit board 39, whereby the LED holding member 57 is supported.

Since the outer surface of the base end of the light emitting portion of each infrared emitting diode 23 is surrounded by the corresponding surrounding portion 43 of the light leakage prevention member 25, leakage of infrared light can be avoided. Therefore, it is possible to avoid receiving infrared light directly from the infrared emitting diodes 23. In particular, the effect can be improved
by coloring the inner surfaces of the surrounding portions 43 of the light leakage prevention member 25 black (especially matte black). Alternatively, it is possible to color the entire surface of the light leakage prevention member 25 black.

Fig. 8 is a perspective view of the lower housing 15 of Fig. 2A. As illustrated in Fig. 7 and Fig. 8, both side ends of the light leakage prevention member 25 are held by holding portions 53 of the lower housing 15 and the upper housing 13. More specifically, the side ends of the light leakage prevention member 25 are inserted into the groove portions serving as the holding portions 53. In the same way, both side ends of the circuit board 39 are held by holding portions 55 of the lower housing 15 and the upper housing 13. More specifically, the side ends of the circuit board 39 are inserted into the groove portions serving as the holding portions 55.

Returning to Fig. 7, a circuit board 41 is secured by screwing screws into bosses sticking out from the inner surface of the upper housing 13. The power switch 17, the power supply lamp 19, a power supply terminal 47 and an AV terminal (not shown) are attached to the circuit board 41. The lower housing 15 is provided with a battery box 49 in which batteries 45 are placed. The batteries 45 can be replaced with new ones by opening a battery cover 51.

Fig. 9 is a plan view of the object 3 of Fig. 1. As illustrated in Fig. 9, the object 3 is provided with a grip 100 held by an operator. An ornamental member 106 and an outer shell 102 are attached to the top of the grip 100. The outer shell 102 is made of a transparent or translucent material, for example transparent or translucent ABS resin (acrylonitrile butadiene styrene resin). An inner shell 104 is attached inside the outer shell 102. Alternatively, the ornamental member 106 need not always be attached.

Fig.
10A is a perspective view of the inner surface of the upper housing 112 constructing the inner shell 104 of Fig. 9. Fig. 10B is a perspective view of the inner surface of the lower housing 110 constructing the inner shell 104 of Fig. 9. As illustrated in Fig. 10A, a boss 120 is formed at the center of the inner surface of the upper housing 112 of the inner shell 104. Three cylindrical bosses 116 are formed around the boss 120. As illustrated in Fig. 10B, a boss 118 is formed at the center of the inner surface of the lower housing 110 of the inner shell 104. Three columnar bosses 114 are formed around the boss 118. The bosses 114 of the lower housing 110 are inserted into the respective bosses 116 of the upper housing 112 in order to combine the two housings with each other. In this way, the regular icosahedron shaped inner shell 104 is formed.

Fig. 11A is a perspective view of the inner shell 104 of Fig. 9. Fig. 11B is a vertex-based plan view of the inner shell 104 of Fig. 9. Fig. 11C is a side-based plan view of the inner shell 104 of Fig. 9. Fig. 11D is a side-face-based plan view of the inner shell 104 of Fig. 9. The retroreflective sheets are attached on the surface of the regular icosahedron shaped inner shell 104.

Fig. 12 is a view showing an example of the retroreflective sheets attached on the surface of the inner shell 104 of Fig. 9.
As illustrated in Fig. 12, the retroreflective sheets are prepared in the form of five parallelograms obtained by dividing a development of a regular icosahedron. These retroreflective sheets 132 are attached to the entire surface of the inner shell 104. The five retroreflective sheets 132 are prepared by dividing the development of the regular icosahedron in this manner for the purpose of making it easy to attach the retroreflective sheets.

Fig. 13A is a perspective view of an upper housing 122 constructing the outer shell 102 of Fig. 9. Fig. 13B is a perspective view of a lower housing 124 constructing the outer shell 102 of Fig. 9. As illustrated in Fig. 13A, a boss 126 is formed at the center of the inner surface of the upper housing 122 constructing the outer shell. As illustrated in Fig. 13B, a boss 128 is formed at the center of the inner surface of the lower housing 124 constructing the outer shell. The inner shell 104 is nipped between the upper housing 122 and the lower housing 124 with these bosses 126 and 128 butting against it, as explained next.

Fig. 14 is a cross-sectional view of Fig. 9 along the line C-C. As illustrated in Fig. 14, an insertion hole is formed inside each of the boss 120 of the upper housing 112 and the boss 118 of the lower housing 110 of the inner shell 104. The boss 126 of the upper housing 122 constructing the outer shell 102 is inserted into the insertion hole of the boss 120 of the upper housing 112. The boss 128 of the lower housing 124 constructing the outer shell 102 is inserted into the insertion hole of the boss 118 of the lower housing 110. Then, a screw 130 is screwed in so as to pierce the lower housing 124 constructing the outer shell 102, the lower housing 110 constructing the inner shell 104, the upper housing 112 constructing the inner shell 104 and the upper housing 122 constructing the outer shell 102, in order to fix them all. In this way, the outer shell 102 and the inner shell 104 are firmly fixed. A columnar portion formed at the bottom part of the lower housing 124 of the outer shell 102 is inserted into a hole on top of the grip 100. Then, the lower housing 124 is fixed to the ornamental member 106 by screwing screws 108 in from outside. Incidentally, although the outer shell 102 is depicted as thicker than the inner shell 104, a thinner outer shell is more preferable, because a thinner shell transmits more infrared light, so that the retroreflective sheets 132 can be irradiated more strongly and more of the return light can be transmitted.

Fig. 15 is a view showing the electrical construction of the game apparatus 1 of Fig. 1. Referring to Fig. 15, the object 3 is lighted by the infrared emitting diodes 23, and reflects the light by means of the retroreflective sheets 132. The image sensor 37 receives the return light from the retroreflective sheets 132 and outputs an image signal of the retroreflective sheets 132. An A/D converter (not shown) in the high speed processor 575 converts the image signal, output as an analog signal from the image sensor 37, into digital data. The high speed processor 575 makes the infrared emitting diodes 23 flash intermittently to perform stroboscopic imaging. The high speed processor 575 includes various processors such as an arithmetic processor, a graphic processor, a sound processor and a DMA processor; the A/D converter for inputting an analog signal; and an input/output control circuit which receives input signals such as a key operation signal and an infrared signal and sends output signals to external devices, though these are not shown in the drawing.
The arithmetic processor performs the necessary calculations on the basis of a game program, and sends the results to the graphic processor and the sound processor, which perform the image processing and sound processing on the basis of the calculation results. The high speed processor 575 also comprises an internal memory, not shown in the drawing, which includes a ROM or a RAM (SRAM and/or DRAM). The RAM is used as a temporary memory, a working memory, a counter or register area (temporary data area), or a flag area.

A ROM 66 is connected with the high speed processor 575 through an external bus 36. The game program and image data are stored in the ROM 66 in advance. The high speed processor 575 detects the motion of the object 3 by processing the digital image signal input through the A/D converter from the image sensor 37. Then, the high speed processor 575 performs the calculation, graphic and sound processes, and outputs a video signal and an audio signal. The video signal is an image signal for displaying a game screen; the audio signal is a sound signal for game music and sound effects. By means of these signals, a game screen is displayed on the screen 6 of the television monitor 5, and the necessary sounds (sound effects and game music) are output from speakers (not shown). The high speed processor 575 works in response to a clock signal from an oscillation circuit 63.

Fig. 16 is a circuit diagram showing the configuration for inputting the pixel data from the image sensor 37 to the high speed processor 575 of Fig. 15, and an LED driver circuit. Fig. 17 is a timing chart illustrating the process of inputting the pixel data from the image sensor 37 to the high speed processor 575. Fig. 18 is an enlarged view of a portion of the timing chart shown in Fig. 17. Referring to Fig. 16, since the image sensor 37 outputs the pixel data D(X,Y) as an analog signal, it is input to the high speed processor 575 via an analog input port. The analog input port is connected with the A/D converter in the high speed processor 575. Therefore, the high speed processor 575 obtains the pixel data converted into digital data. The middle point of the above-mentioned analog pixel data D(X,Y) is determined on the basis of a reference voltage from the image sensor 37. The digital signals used to control the image sensor 37 are input to and output from the input/output ports of the high speed processor 575.
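The stroboscopic processing chain described above — flash, capture, difference, locate — can be sketched as one loop iteration. This is an illustrative model only; the frame shape (32 × 32, matching the sensor described earlier), the threshold, and the centroid-based position estimate are assumptions, not the processor's actual algorithm:

```python
import numpy as np

def track_object(lit: np.ndarray, unlit: np.ndarray, threshold: int = 16):
    """One iteration of stroboscopic tracking: subtract the frame taken
    with the diodes off from the frame taken with the diodes on, then
    return the centroid (x, y) of the bright retroreflected region,
    or None if no pixel exceeds the threshold."""
    diff = np.clip(lit.astype(np.int16) - unlit.astype(np.int16), 0, 255)
    ys, xs = np.nonzero(diff >= threshold)
    if xs.size == 0:
        return None
    # The centroid of the retroreflected blob stands in for the
    # position of the object 3 in sensor coordinates.
    return float(xs.mean()), float(ys.mean())
```

Running this once per frame pair yields the intermittent position samples that the game process can then consume.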
Each input/output port is a digital port operable to control input and output operations, and is connected with the input/output control circuit of the high speed processor 575. More specifically, a reset signal "reset" to reset the image sensor 37 is output from an input/output port of the high speed processor 575 and transmitted to the image sensor 37. A pixel data strobe signal "PDS" and a frame status flag signal "FSF" are output from the image sensor 37 to the input/output ports of the high speed processor 575.

As shown in Fig. 17, the pixel data strobe signal "PDS" is a strobe signal for reading each of the above-mentioned pixel data D(X,Y) from the image sensor 37. The frame status flag signal "FSF" indicates the state of the image sensor 37 and, as shown in Fig. 17, determines the exposure period of the image sensor 37. In other words, a low-level period of the frame status flag signal "FSF" as illustrated in Fig. 17 is the exposure period, and a high-level period is a non-exposure period. In addition, the high speed processor 575 outputs a command (or a command and data), as register data to be set in a control register (not shown) of the image sensor 37, via the input/output ports. Furthermore, the high speed processor 575 outputs a register setting clock "RCLK" which repeats a low-level period and a high-level period alternately. The register data and the register setting clock "RCLK" are sent to the image sensor 37.

The four infrared emitting diodes 23 are connected in parallel. These infrared emitting diodes 23 are turned on and off by the LED driver circuit 690. The LED driver circuit 690 receives the above-mentioned frame status flag signal "FSF", which is applied to a base terminal of a PNP transistor 686 via a differentiation circuit 685 consisting of a resistor 683 and a capacitor 684. In addition, the base terminal of the PNP transistor 686 is connected with a pull-up resistor 687, and is normally pulled up to a high level. When the frame status flag signal "FSF" becomes low level, the low-level signal "FSF" is input to the base terminal via the differentiation circuit 685. Therefore, the PNP transistor 686 is turned on only while the level of the flag signal "FSF" is low. A collector terminal of the PNP transistor 686 is grounded via resistors 680 and 689.
The connecting point of the resistances 680 and 689 is connected with a base terminal of a NPN transistor 681. A collector terminal of this NPN transistor 681 is connected to anodes of the infrared-emitting diodes 23 in common. An emitter terminal of the NPN transistor 681 is connected to a base terminal of a NPN transistor 682 directly. Moreover, a collector terminal of the NPN transistor 682 is connected to cathodes of the infrared-emitting diodes 23 in common. An emitter terminal of the
NPN transistor 682 is grounded via a resister 691. This LED driver circuit 690 turns on the infrared-emitting diodes 23 only when the LED control signal "LEDC" which is output from the input/output port of the high speed processor 575 is active (high-level) and also the level of the frame status flag signal
"FSF" from the image sensor 37 is low. As illustrated in Fig. 17, the PNP transistor 686 is turned on while the level of the frame status flag signal "FSF" is low (there is actually a time lag caused by the time constant of the differentiation circuit 685). Therefore, when the LED control signal "LEDC" illustrated in Fig. 17 is set to high level by the high speed processor 575, the electric potential at the base terminal of the NPN transistor 681 becomes high level. As a result, the transistor 681 is turned on. Then, when the transistor 681 is turned on, the transistor 682 is also turned on. Therefore, a current passes from a power supply Vcc through each infrared-emitting diode 23 and the transistor 682, and consequently the infrared-emitting diodes 23 flash as illustrated in Fig. 17. The image sensor 37 operates in response to a clock signal "SCLK" generated on the basis of the oscillation circuit 63, which also serves to send the clock signal to the high speed processor 575. The light leakage prevention member 25, the infrared filter 21, the LED holding member 57, the lens holder 29, the unit base 27, the infrared emitting diodes 23, the LED driver circuit 690, the image sensor 37 and the high speed processor 575 constitute the image pickup unit. Therefore, the game apparatus 1 can also be called an image pickup unit. The LED driver circuit 690 and the image sensor 37 are attached to the circuit board 39 of Fig. 7, and the high speed processor 575 is attached to the circuit board 41 of Fig. 7. Fig. 19 is a flowchart showing the overall process flow of the game apparatus 1 of Fig. 1. As illustrated in Fig. 19, the high speed processor 575 performs the initial setup of the system in step S1. In step S2, the high speed processor 575 updates an image signal to update an image displayed on the television monitor 5. Incidentally, the high speed processor 575 updates the displayed image every frame (television frame or video frame).
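The gating behaviour of the LED driver circuit 690 described above can be summarized in a minimal software model. This is only an illustrative sketch: the function name and boolean inputs are hypothetical, and the time lag introduced by the differentiation circuit 685 is ignored.

```python
def leds_lit(ledc_high: bool, fsf_low: bool) -> bool:
    """Model of the LED driver circuit 690: the infrared-emitting
    diodes 23 conduct only when the LED control signal "LEDC" is
    active (high) AND the frame status flag "FSF" is low, i.e.
    during the exposure period of the image sensor 37.
    (Hypothetical names; the differentiation-circuit time lag is
    not modelled.)"""
    return ledc_high and fsf_low
```

Because "FSF" is low exactly during the exposure period, this gating makes the diodes flash in synchrony with the exposure of the image sensor 37.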
In step S3, the high speed processor 575 drives the infrared emitting diodes 23, and performs the imaging process of the object 3. In step S4,
the high speed processor 575 determines whether or not the game is over. If the game is over, the high speed processor 575 finishes processing; otherwise the process proceeds to step S5. In step S5, the high speed processor 575 performs an extracting process of a target area of the object 3. In step S6, the high speed processor 575 extracts a target point of the object 3. In step S7, the high speed processor 575 performs a game process in accordance with the game program stored in the ROM 66. In step S8, the high speed processor 575 determines whether or not the variable "M" is smaller than a predetermined value "K". If "M" is equal to or greater than the predetermined value "K", the high speed processor 575 proceeds to step S9, assigns "0" to "M", and then proceeds to step S10. On the other hand, if the variable "M" is smaller than the predetermined value "K", the high speed processor 575 proceeds from step S8 to step S10. If it is "Yes" in step S10, i.e., the high speed processor 575 is waiting for the video system synchronous interrupt (i.e., there is no interrupt responsive to the video system synchronous signal), the process repeats step S10. On the other hand, if it is "No" in step S10, i.e., the high speed processor 575 gets out of the state of waiting for the video system synchronous interrupt (i.e., the high speed processor 575 is given the video system synchronous interrupt), the process proceeds to step S2. The sound process of step S11 is performed to output game music and sound effects when the high speed processor 575 is given a sound interrupt. Fig. 20 is a flowchart showing the process flow of the sensor initial setting process performed in step S1 of Fig. 19. As illustrated in Fig. 20, the high speed processor 575 sets a command "CONF" as setting data. This command "CONF" instructs the image sensor 37 to enter a setting mode in which it receives commands from the high speed processor 575.
In the next step S31, the high speed processor 575 performs a command transmitting process. Fig. 21 is a flowchart showing the process flow of the command transmitting process of step S31 of Fig. 20. As illustrated in Fig.
21, in step S40, the high speed processor 575 sets the setting data (in the case of step S31, the command "CONF") as register data (I/O ports). In step S41, the high speed processor 575 sets the register setting clock "RCLK" (I/O port) to low level. After waiting for
a predetermined period "t1" in step S42, in the next step S43, the high speed processor 575 sets the register setting clock "RCLK" to high level. Then, after waiting for a predetermined period "t2" in step S44, in the next step S45, the high speed processor 575 sets the register setting clock "RCLK" to low level. In this way, as illustrated in Fig. 22A and Fig. 22B, the high speed processor 575 performs the command (command or command + data) transmitting process by changing the register setting clock "RCLK" to low level, high level, and low level in turn while waiting for the predetermined periods "t1" and "t2". Returning to Fig. 20, in step S32, the high speed processor 575 sets a pixel mode and also sets an exposure period. In this embodiment, since the image sensor 37 is the CMOS image sensor which consists of 32 pixels x 32 pixels as mentioned above, the high speed processor 575 sets "0h", which indicates that it consists of 32 pixels x 32 pixels, to a pixel mode register whose setting address is "0". In step S33, the high speed processor 575 performs a register setting process. Fig. 23 is a flowchart showing the process flow of the register setting process of step S33 of Fig. 20. As illustrated in Fig. 23, the high speed processor 575 sets a command "MOV" + "address" as setting data in the first step S50. Then, in step S51, the high speed processor 575 transmits the setting data to the image sensor 37 by executing the command transmitting process as explained hereinbefore (refer to Fig. 21). In step S52, the high speed processor 575 sets a command "LD" + "data" as setting data. Then, in step S53, the high speed processor 575 transmits the setting data to the image sensor 37 by executing the command transmitting process explained in Fig. 21. In step S54, the high speed processor 575 sets a command "SET" as setting data.
Then, in step S55, the high speed processor 575 transmits the setting data to the image sensor 37 by executing the command transmitting process explained in Fig. 21. The command "MOV" instructs transmission of an address of the control register. The command "LD" instructs transmission of data. The command "SET" instructs setting the data to the address. Incidentally, if there are several control registers to set, the register setting process is executed repeatedly. Returning to Fig. 20, in step S34, the high speed processor
575 sets "1" (indicating an address of the low nibble of an exposure period setting register) as a setting address, and also sets the low nibble data "Fh" of "FFh" indicating the maximum exposure period. Then, in step S35, the high speed processor 575 executes the register setting process of Fig. 23. In a similar way, in step S36, the high speed processor 575 sets "2" (indicating an address of the high nibble of the exposure period setting register) as a setting address, and also sets the high nibble data "Fh" of "FFh" indicating the maximum exposure period. Then, in step S37, the high speed processor 575 executes the register setting process of Fig. 23. In step S38, the high speed processor 575 sets a command "RUN" which indicates an end of setting and also makes the image sensor
37 start outputting data. Then, the high speed processor 575 transmits it in step S39. In this way, the sensor setting process is performed. However, the example from Fig. 20 to Fig. 23 may be changed depending on the specification of the image sensor 37. Fig. 24 is a flowchart showing the process flow of imaging process of step S3 of Fig. 19. As illustrated in Fig. 24, the high speed processor 575 turns the infrared emitting diodes 23 on to pick up an image by means of the stroboscope in step S60. More specifically, the LED control signal as illustrated in Fig. 16 is set to high level. Then, the high speed processor 575 performs an acquiring process of a pixel data aggregation in step S61. In step
S62, the high speed processor 575 sets the LED control signal to low level to turn the infrared emitting diodes 23 off. In step
S63, the high speed processor 575 acquires a pixel data aggregation without light emitted from the infrared emitting diodes 23, in a similar way to step S61. Fig. 25 is a flowchart showing the process flow of the pixel data aggregation acquiring process of step S61 of Fig. 24. As illustrated in Fig. 25, in the first step S70, the high speed processor 575 sets "0" to "X" and "0" to "Y" as the element numbers of a pixel data array. In step S71, the high speed processor 575 checks the frame status flag signal FSF as transmitted from the image sensor 37. Then, the high speed processor 575 decides whether or not the rising edge of the frame status flag signal FSF is detected. In step S72, if the high speed processor 575 detects the rising edge of the frame status flag signal FSF, the process proceeds to step S73; otherwise it proceeds to step S71.
After that, in step S73, the high speed processor 575 checks the pixel strobe PDS as transmitted from the image sensor 37. Then, in step S74, the high speed processor 575 determines whether or not the rising edge of the pixel strobe PDS is detected. In step S74, if "No" is judged, the process proceeds to step S73. On the other hand, if "Yes" is judged, "0" is assigned to "X" in step S75. In step S76, the high speed processor 575 performs a pixel data acquiring process. Fig. 26 is a flowchart showing the process flow of the pixel data acquiring process of step S76 of Fig. 25. As illustrated in Fig. 26, in the first step S91, the high speed processor 575 instructs the A/D converter to start converting the analog pixel data into digital data. After that, in step S92, the high speed processor 575 checks the pixel strobe PDS as transmitted from the image sensor 37. Then, the high speed processor 575 determines whether or not the rising edge of the pixel strobe PDS is detected. In step S93, if "No" is judged, the process proceeds to step S92; otherwise it proceeds to step S94. In step S94, the high speed processor 575 obtains the digital pixel data (conversion value) from the A/D converter. Then, the high speed processor 575 stores the acquired pixel data in a temporary register (not shown) in step S95. After that, the process proceeds to step S77 of Fig. 25. In step S77, the high speed processor 575 assigns the pixel data stored in the temporary register to the pixel data array P[X][Y]. Then, the high speed processor 575 increments "X" in the next step S78. If X is smaller than "32", the process from step S76 to step S78 is repeatedly executed. If X = 32, i.e., the acquisition of pixel data reaches the end of the row, the high speed processor 575 increments "Y" in step S80. Then, the high speed processor 575 repeats the pixel data acquiring process from the head of the next row. In step S81, if Y = 32, i.e.
, the acquisition of pixel data reaches the end of the pixel data array P[X][Y], the process proceeds to step S62 of Fig. 24. Fig. 27 is a flowchart showing the process flow of the target area extracting process of step S5 of Fig. 19. As illustrated in Fig. 27, in step S100, the high speed processor 575 calculates the difference between the pixel data P[X][Y] with and without light emitted from the infrared-emitting diodes 23 to obtain differential data. In step S101, the high speed processor 575
assigns the differential data to the array Dif[X][Y]. In this embodiment, since the 32 pixel x 32 pixel image sensor 37 is used, X = 0 to 31 and Y = 0 to 31. In step S102, the high speed processor 575 compares an element of the array Dif[X][Y] with a predetermined threshold value "Th". In step S103, if the element of the array Dif[X][Y] exceeds the predetermined threshold value "Th", the process proceeds to step S104; otherwise it proceeds to step S105. In step S104, the high speed processor 575 increments a count value "c" by one to count the number of differential data (elements of the array Dif[X][Y]) exceeding the predetermined threshold value "Th". The high speed processor 575 repeats the process from step S102 to step S104 until all elements of the array Dif[X][Y] have been compared with the predetermined threshold value "Th". Fig. 28 is a flowchart showing the process flow of the target point extracting process of step S6 of Fig. 19. As illustrated in Fig. 28, in step S110, the high speed processor 575 scans all elements of the array Dif[X][Y], and detects the maximum value among all elements of the array Dif[X][Y]. The high speed processor 575 uses the X coordinate and Y coordinate of the maximum value as the coordinate (Xc, Yc) of the target point. In step S111, the high speed processor 575 increments the variable "M" by one. In step S112, the high speed processor 575 assigns the coordinates Xc and Yc to the arrays PX[M] and PY[M]. In step S113, the high speed processor 575 calculates a moving average (AX[M], AY[M]) of the target point (Xc, Yc) of the object 3. In step S114, the high speed processor
575 converts the average coordinate (AX[M], AY[M]) of the target point on the image sensor 37 into the coordinate (xc, yc) on the screen 6 of the television monitor 5. Returning to Fig. 19, the high speed processor 575 derives various state information of the object 3 in accordance with the information of the target area and the coordinate of the target point of the object 3, and performs the game process on the basis of the derived state information. The state information may be, for example, any one of speed information, movement direction information, movement distance information, velocity vector information, acceleration information, motion path information, area information and positional information, or a combination thereof.
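The core of the target area and target point extraction of Figs. 27 and 28 is a difference image followed by thresholding and an argmax. A minimal sketch, with hypothetical function and parameter names (the moving average of step S113 and the screen-coordinate conversion of step S114 are omitted):

```python
def extract_target(lit, unlit, threshold):
    """Sketch of Figs. 27/28: Dif[X][Y] is the difference between
    the frames with and without infrared illumination (steps
    S100/S101); elements above "Th" are counted (steps S102-S104);
    the brightest differential pixel gives the target point
    coordinate (Xc, Yc) (step S110)."""
    w, h = len(lit), len(lit[0])
    dif = [[lit[x][y] - unlit[x][y] for y in range(h)]
           for x in range(w)]                       # steps S100/S101
    count = sum(1 for x in range(w) for y in range(h)
                if dif[x][y] > threshold)           # steps S102-S104
    xc, yc = max(((x, y) for x in range(w) for y in range(h)),
                 key=lambda p: dif[p[0]][p[1]])     # step S110: argmax
    return count, (xc, yc)
```

Because ambient (outside) light contributes equally to both frames, it cancels in the subtraction, leaving only the light retroreflected from the object 3.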
Fig. 29 is a view showing an example of the game screen displayed by the game apparatus 1 of Fig. 1. The object 3 held by a player is imaged by the image pickup unit of the game apparatus 1. The high speed processor 575 calculates a coordinate of the object 3 on the screen 6, and a cursor 73 imitating the object 3 is displayed on the screen 6. The player operates the object 3 while looking at the cursor 73. Meanwhile, operating the object 3 does not mean pushing switches or operating analog sticks but moving the object 3 itself. The game screen shown in Fig. 29 is displayed and several parts of music are output when the game is started. Animal characters 65 are displayed on the game screen, and musical notation marks 70 move toward these animal characters 65. The musical notation marks 70 appear from the bottom of the screen 6 and move upward. The player operates the object 3 to move the cursor 73 to the animal character 65 at the timing when the musical notation mark 70 reaches the animal character 65. If the player fails to move the cursor 73 to the animal character 65 in time, one of the parts of music stops. If the player fails several times in a row, another one of the parts stops. On the other hand, if the player moves the cursor 73 in time, one of the stopped parts is started again. As has been discussed above, the light leakage prevention member 25 and the infrared filter 21 are separately provided in this embodiment of the invention. In addition, since the infrared light emitting diodes 23 are inserted into the respective insertion holes of the surrounding portions 43, the outer surface of the base end of the light emitting portion of each infrared light emitting diode 23 is surrounded by the surrounding portion 43. In this way, leakage of infrared light emitted from the infrared light emitting diodes 23 can be avoided. Therefore, it is possible to prevent the image sensor 37 from receiving infrared light directly from the infrared light emitting diodes 23.
If the light leakage prevention member 25 and the infrared filter 21 were formed as one piece, light emitted from the infrared light emitting diodes 23 would directly enter the infrared filter 21 via the light leakage prevention member 25. Since the image sensor 37 would receive the light in this way, the image sensor 37 would pick up an unnecessary image. If the image sensor 37 receives infrared light directly from the infrared light emitting diodes 23, the high speed processor
575 cannot extract the target point of the object 3 properly by the target point extracting process of step S6 of Fig. 19. This may adversely affect the game processing of step S7 performed on the basis of the location of the target point. Better game processing (information processing) can be realized by preventing the image sensor 37 from receiving light directly from the infrared light emitting diodes 23. In addition, since the light leakage prevention member 25 and the infrared filter 21 are separately provided, it is possible to replace only the infrared filter 21 with a new one if there is a problem (for example, a scratch) with the infrared filter 21. Therefore, it is possible to reduce the cost. In this embodiment, the hole (or opening) of each surrounding portion 43 of the light leakage prevention member 25 is made in the form of an inverted cone with the point cut off, through which the light emitting portion of the light emitting diode 23 is exposed (refer to Fig. 6). Therefore, the lighting area of the infrared light emitting diodes 23 can be expanded as compared to the case where the shape of the surrounding portion 43 is cylindrical. In this embodiment, a cutout portion is formed in the front face of each of the upper housing 13 and the lower housing 15, while the groove portions 53 are formed along opposite sides of the cutout portion. Then, the upper housing 13 and the lower housing 15 are joined while the side edges of the light leakage prevention member 25 are fitted into the groove portions 53 (refer to Fig. 7 and Fig. 8). Since the light leakage prevention member 25 is not firmly fixed as explained above, it is possible to perform inspections such as operation tests before setting the image pickup unit in the housing. Therefore, when a defect is found, it is possible to analyze, repair and replace the parts with ease.
On the other hand, if the light leakage prevention member were firmly fixed, inspection could not be performed until the image pickup unit is set in the housing. Therefore, troublesome work such as disassembly and analysis would be needed when a defect is found. In addition, it would not be possible to analyze the unit in the same state. Therefore, even though the defect is fixed, another defect might be found after setting the image pickup unit back in the housing again. Moreover, in this embodiment, the inner shell 104 of the object 3 is a regular icosahedron. The surface of the inner shell
104 is covered with the retroreflective sheet 132. Since the retroreflective sheet 132 is attached to the regular icosahedron shaped inner shell 104, the retroreflective sheet can be prepared in the form of a development of the regular icosahedron, or as a set of segments obtained by dividing a development of the regular icosahedron in units of the polygon (refer to Fig. 12). Therefore, it is possible to cut out the retroreflective sheets 132 in these shapes efficiently from a large retroreflective raw sheet while wasting as little of the raw sheet as possible. In addition, in this embodiment, five retroreflective sheets 132 are prepared, corresponding to five fragments obtained by dividing a development of the regular icosahedron in four-polygon units, and attached to the inner shell 104 (refer to Fig. 12). Therefore, it is easier to attach the retroreflective sheets 132 to the inner shell 104. Incidentally, the present invention is not limited to the above embodiment, and a variety of variations and modifications may be effected without departing from the spirit and scope thereof, as described in the following exemplary modifications. (1) In the above embodiment, the inner shell 104 is a regular icosahedron. However, it is not limited to the regular icosahedron, and any regular polyhedron can be used as the inner shell 104. Incidentally, it is not always necessary to provide the outer shell 102. (2) In the above embodiment, the image pickup apparatus is used as the game apparatus 1. However, it is possible to use it for other purposes. (3) While any appropriate processor can be used as the high speed processor 575, it is preferred to use the high speed processor in relation to which the applicant has filed patent applications. The details of this high speed processor are disclosed, for example, in Jpn. unexamined patent publication No.
10-307790 and U.S. Patent No. 6,070,205 corresponding thereto. The foregoing description of the embodiments has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form described, and obviously many modifications and variations are possible in light of the above teaching. The embodiment was chosen in order to explain most clearly the principles of the invention and its practical application, thereby to enable others
skilled in the art to utilize the invention most effectively in various embodiments and with various modifications as are suited to the particular use contemplated.