
HK1132034B - Instructor-led training simulation system and method of interacting with simulated infantry scenarios

Info

Publication number
HK1132034B
Authority
HK
Hong Kong
Prior art keywords
weapon
simulation
instructor
computer
controller
Application number
HK09109972.4A
Other languages
Chinese (zh)
Other versions
HK1132034A1 (en)
Inventor
D.A.斯莱顿
D.E.小纽康宝
E.A.普瑞茨
C.D.沃克
C.W.小卢茨
R.J.科比斯
C.M.莱德维斯
D.库珀
R.E.扬恩
S.梅斯达吉
Original Assignee
Dynamic Animation Systems, Inc.
Application filed by Dynamic Animation Systems, Inc.
Priority claimed from PCT/US2005/042659 (published as WO 2007/011418 A2)
Publication of HK1132034A1
Publication of HK1132034B


Description

Instructor-led training simulation system and method of interacting with simulated infantry scenarios
This application is related to, and claims priority from, U.S. provisional patent application No. 60/630,304, filed November 24, 2004, and U.S. provisional patent application No. 60/734,276, filed November 8, 2005, both of which are hereby incorporated by reference in their entirety.
This application contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent disclosure as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
Technical Field
The present invention relates to the field of instructor-based simulated training environments, and more particularly to providing a new interface to such environments.
Background
Armed forces worldwide rely on well-trained men and women to protect their countries from harm. Such training varies widely between different branches of the military, but until recently it has primarily involved one of two extremes: highly advanced simulations or actual, real-world training.
Several reasons contribute to this training divide. One is that the cost of developing a simulated training environment is typically far higher than that of real-world training. For example, based on statistics compiled in 2001, it costs the United States Army approximately $35,000 to train an infantry recruit using traditional training methods. Compared with the cost of developing and deploying an infantry simulator, which can easily run to tens of millions of dollars, traditional live training is generally the more cost-effective option. The exception is in the aviation and maritime domains, where each real-world aircraft or vessel can itself cost tens of millions of dollars and training a pilot can cost hundreds of thousands of dollars. In such cases, developing a simulator that allows entry-level pilots to gain experience without entering the cockpit of a real aircraft or the bridge of a ship has proven to be a much more cost-effective training method than risking the lives and safety of valuable instructors, trainees, and equipment.
Another reason for the training divide is that most infantry-related tasks require maneuvering. Unlike pilots, who sit in relatively static, fixed-size cockpits or on bridges, infantry and other soldiers must move over a wide area. For example, an infantry training exercise may involve securing a building in a city. If the simulation begins on the outskirts of the city, a recruit must be able to move through the city, find the right building, and enter and secure it. Such interaction has heretofore required awkward interfaces that tend to distract trainees and prevent them from becoming fully immersed in the simulation. Thus, for infantry recruits, traditional live training has traditionally been preferred.
While traditional, live, real-world training has traditionally been preferred for training infantry recruits, it has its drawbacks. For example, it is often difficult to reproduce the differences in environment, structures, and language encountered across different battlefields. A simulated training environment, by contrast, can easily expose the trainee to these differences.
Summary of The Invention
There is a need for a system and method that can train infantry and other recruits using a simulated environment while overcoming one or more limitations of the prior art.
One objective of the present invention is to provide a lane-based simulated training environment that is controllable by instructors.
It is another object of the present invention to provide a user interface device that an infantry soldier or other such trainee can use to easily navigate a large simulated terrain.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Technologies supporting the needs of national defense and civil defense forces have been developed. Specific areas of training addressed by this technology may include, but are not limited to, non-live-fire firearms training, situation-based deadly-force decision-making, driver training, and convoy training. While the exemplary embodiments described below are directed to software and hardware techniques in military and law enforcement training system environments, it will be apparent to those skilled in the art that such systems may be readily adapted to alternative environments such as, but not limited to, video games, civilian weapons training, paramilitary training, and the like. The technical building blocks described in the exemplary embodiments can be enhanced, combined, and configured in various ways to meet different sets of training requirements.
The system is preferably scalable, allowing multiple lanes to be coordinated with the simulation simultaneously so that multiple team members can practice tactics, techniques, and procedures, either individually or as a team. Such a configuration also allows multiple teams to train together at the same time, permitting force-on-force training: fire team against fire team, multiple fire teams against multiple fire teams, and any combination of team-on-team training. Using integrated simulation controls, a fire team leader or captain in a single firing lane may issue orders to other trainees during an exercise, for example through an interactive GUI or voice commands.
One embodiment of the invention comprises an infantry training simulation system including at least one firing lane, with at least one display arranged substantially at the end of the firing lane. A trainee using the simulation may carry at least one weapon, typically resembling an infantry weapon. To facilitate navigation and other interaction with the simulation, the weapon is preferably equipped with at least one controller. At least one computer is communicatively coupled to the display and the weapon; it monitors input from the at least one controller and modifies the training simulation shown on the display based on that input.
Another embodiment of the invention includes an infantry training simulation system comprising a plurality of firing lanes, wherein each firing lane has at least one display associated therewith. At least one computer is communicatively coupled to at least one of the plurality of displays and generates a training simulation for display by the attached at least one display. The embodiment preferably further comprises at least one instructor station, wherein the instructor station is communicatively coupled to the at least one computer, allowing an instructor to control at least one entity in the simulation. The trainee and/or instructor can interact with the simulation by various means, including by at least one weapon. Each weapon is preferably associated with a firing lane and each weapon is preferably communicatively coupled to the at least one computer so that the at least one computer can monitor the trainee and/or instructor as they interact with the weapon.
Yet another embodiment of the present invention includes a method of interacting with a simulated infantry scenario, comprising equipping a physical weapon with at least one controller; navigating in the simulation using the at least one controller; monitoring the simulation for at least one hostile target; and engaging the hostile target with the physical weapon.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
Brief Description of Drawings
FIG. 1 is a left side elevational view of a weapon equipped with two controls.
FIG. 2 is a left side elevational view of a weapon equipped with two controls.
FIG. 3 is a top view of a weapon equipped with two controls.
Fig. 4 is a side view of a controller mounting device for use on a barrel of a firearm.
FIG. 5 is a side view of a controller mount for use near the trigger of a weapon.
FIG. 6 is a left side plan view of a weapon equipped with two controls.
Fig. 7 is a right side plan view of a weapon equipped with a wireless transmitter.
Figure 8 is a detailed perspective view of the replacement controller and mounting device.
Figure 9 is a detailed perspective view of an alternative controller and mounting arrangement.
Figure 10 is a perspective view of a plurality of training lanes in use as part of a simulation.
Figure 11 is a perspective view of a single training lane.
Figure 12 is a perspective view of an embodiment of a training center.
Figure 13 is an alternative perspective view of an embodiment of the training center.
Figure 14 is a top view of an embodiment of a training center.
FIG. 15 is a screen shot of a user interface through which the controller input may be customized to each student's preferences.
Fig. 16 is a screen shot of an entity list for use by the instructor station.
Fig. 17 and 18 are screenshots of simulated infantry training scenes.
FIG. 19 is a screenshot illustrating an instructor's station display of an exemplary scene control interface.
FIG. 20 is a screenshot illustrating an instructor station display of an exemplary host scene control interface.
FIG. 21 is an alternative screenshot of an instructor station display illustrating an exemplary scene control interface and also illustrating an exemplary agent control interface.
FIG. 22 is an alternative screenshot of an instructor station display illustrating an exemplary scene control interface and also illustrating an exemplary student monitoring interface.
Fig. 23 is a flow chart illustrating an exemplary instructor-controlled agent engagement process.
FIG. 24 is a flow chart illustrating an exemplary weapon jam simulation process.
FIG. 25 is a flow chart illustrating an exemplary weapon jam clearing process.
Detailed description of the preferred embodiments
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings.
One aspect of the present invention provides a lane-based, instructor-guided infantry training simulator. In the embodiment shown in figs. 12-14, each trainee system 300 and 310 is designed to provide a single "channel" of visual simulation and can support a single firing lane. A standard 4:3 aspect-ratio image generated by a single visual channel provides a field of view (FOV) into the virtual environment of approximately 45 degrees horizontally and 34 degrees vertically. As in the trainee system 310, multiple channels may be combined side by side to create a larger composite FOV, such as 120 degrees horizontal by 34 degrees vertical. If a larger vertical FOV is desired, each system may also be configured with the 4:3 aspect ratio rotated, and other aspect ratios may be used.
At its most basic, embodiments of the present invention may be implemented using a computer system, simulated or modified weapons, and training lanes. All elements of this embodiment of the system software execute on a Windows-based PC, although it will be apparent to those skilled in the art that alternative operating systems may be substituted without departing from the spirit or scope of the invention. Each software component is easily configured and controlled using standard input devices such as a keyboard and mouse. Other input devices may also be used with the software components as desired, such as, but not limited to, game pads, joysticks, steering wheels, pedals, foot pads, light gloves, any Microsoft DirectInput-compatible USB device, and the like. While DirectInput is a preferred API for interfacing with such devices, it will be apparent to those skilled in the art that alternative interface means may be substituted without departing from the spirit or scope of the invention.
The PCs are preferably standard COTS gaming-capability machines, although as technology develops, such high-end machines may no longer be necessary. A typical, presently preferred computer configuration is as follows:
Pentium 4, 2.5 GHz or above
1 GB RAM
ATI Radeon 9800 XT 128 MB video card or above
40 GB hard disk drive
The software running on these PCs is preferably capable of operating in a stand-alone mode or in a collaborative, networked mode. In stand-alone mode, the student is the only user-controlled entity in the environment, and all other entities are controlled by the AI according to the scenario definition. In the collaborative, networked mode, each instance of the application software (such as, but not limited to, each individually controlled student PC on the network) represents a student-controlled entity. A student-controlled entity may be friendly or hostile, with its role and starting position set by the scenario. Using this capability, the following engagement scenarios may be trained:
one-man to programmable AI
Team-to-programmable AI
One to one
Team to team
As shown in figs. 10 and 11, a preferred trainee system may consist of at least one display system 200, such as a high-lumen compact projector secured to a sturdy base 205. The base 205 may also hold an associated Ethernet-based hit detection system 208, or the hit detection system 208 may be mounted separately. The combined projector/hit-detection base system may be attached to an industrial tripod so that it is easily adjusted and aligned. The display system 200 may preferably be configured for front or rear projection to accommodate any deployment scenario. If space in the facility allows, the projector and hit detection system are placed behind the projection screen, which lets the trainee move freely in front of the screen without the physical constraints of the projector, cables, and the like.
The hit detection system allows the trainee PC 230 or other computing device to determine when a shot has been fired from the weapon 100. When a firing mechanism associated with the weapon, such as trigger 110, is activated, a laser "fires" a pulse for each shot, which indicates to the software, via hit detection system 208, the location at which the shot entered the virtual environment space. A weapon-specific laser signature can identify individual shots fired from different weapons in the same lane, allowing multiple trainees, as shown in fig. 10, to train in a single lane.
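As a rough illustration of this pipeline, the sketch below converts a detected laser dot into a ray into the virtual environment. This is a hypothetical Python sketch, not the patented implementation: the names (Shot, screen_to_world_ray) and the pinhole-style mapping are assumptions. It assumes only that the hit detection camera reports the dot as normalized screen coordinates and that each visual channel covers the 45-by-34-degree FOV described above.

```python
import math
from dataclasses import dataclass

@dataclass
class Shot:
    weapon_id: int    # decoded from the weapon-specific laser signature
    screen_x: float   # normalized [0, 1] horizontal position of the laser dot
    screen_y: float   # normalized [0, 1] vertical position of the laser dot

def screen_to_world_ray(shot: Shot, h_fov_deg: float = 45.0,
                        v_fov_deg: float = 34.0):
    """Convert a detected laser dot into a unit ray direction in camera
    space, assuming one visual channel spans the stated FOV."""
    yaw = (shot.screen_x - 0.5) * math.radians(h_fov_deg)
    pitch = (0.5 - shot.screen_y) * math.radians(v_fov_deg)
    # Camera looks down +z; x is right, y is up.
    return (math.sin(yaw) * math.cos(pitch),
            math.sin(pitch),
            math.cos(yaw) * math.cos(pitch))
```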
Referring again to figs. 12-14, task training generally consists of a briefing on the task, task execution, and an after-action review. The training facility floor plans shown in figs. 12-14 facilitate such training. The training positions are preferably arranged as three separate stations, one for each respective training phase, to maximize student throughput. Each stage of the illustrated embodiment consists of an instructor station 320, an observer station 330, and one or more student stations, i.e., training lanes 300. While the illustrated embodiment is presently preferred, it will be apparent to those of ordinary skill in the art that an alternate number and arrangement of stations may be substituted without departing from the spirit or scope of the invention. As an example, an alternate trainee station may be used in place of the trainee station 300, such as a 120-degree firing lane 310 made up of multiple screens synchronized in time, imagery, shot detection, and the like.
In addition to the three stations, four separate software applications are preferably implemented across the various system components. Although the four software applications are described herein as separate entities, it will be apparent to those skilled in the art that the functionality of one or more applications may be combined together and one or more applications may be divided into multiple applications without departing from the spirit or scope of the present invention. The following is an overview of each application. A more detailed description of each will be presented below.
The first application is the student application, which presents real-time imagery to the student in the lane via the display system 200. The application also processes inputs from the hit detection system 208, from the weapon 100 (including inputs from controllers 115, 120, and 140, described below), and from a clip rack (described below), and propagates these inputs to the simulation server. In one embodiment, the inputs are propagated to the simulation server via the student station. In this embodiment, the trainee station preferably handles all inputs to the simulation from any trainee control device. As described below, using these control devices the trainee can fully interact with the 3D environment: fire weapons, throw grenades, climb platforms and ladders, climb ropes, and the like. In one embodiment, the instructor station controls an observer station, which can run the trainee application in a slave mode.
The second application is the instructor station application. The instructor station preferably serves as the simulation server, the network host, and the simulation console for task execution.
The third application is the scenario editor. This application allows course designers to define custom tactical situations using a simple point-and-click interface and a standard scripting language.
The last application is a level editor. The application is used to create an environment that consists of visible and invisible geometry, collision geometry, lighting information, special rendering pipeline information, and other features of the environment, objects, and actors in the simulation.
The trainee station preferably includes at least one physical or virtual weapon. Referring to fig. 10, while weapon 100 is shown as a machine gun, it should be apparent to those skilled in the art that alternate weapons, including other projectile weapons and non-lethal weapons such as stun guns and Taser guns, may be substituted without departing from the spirit or scope of the present invention. As shown in figs. 1-9, each weapon is preferably equipped with one or more manual intervention controllers 115 and 120, as well as a laser controller 150 and a laser 155. Such weapons may be the trainees' actual issued weapons, to which the manual intervention controllers 115 and 120 and the laser controller 150 and laser 155 are attached, or may be realistic simulated weapons with such components attached or embedded. Whether the weapon is real or simulated, it is preferably not physically tethered to any system component; instead, the weapon-mounted or embedded signature laser 155 identifies its fire to the system, as described above.
The preferred embodiment of the present invention allows the controllers 115 and 120 to be placed in positions convenient and comfortable for the trainee. The trainee can adjust controller positions based on arm length, hand size, and so on, using a plurality of set screws 117 and brackets 116, and/or by simply removing and rotating the joystick and button mechanisms for a left-handed configuration. Although the illustrated embodiment uses screws 117 to mount the controllers to the weapon, it should be apparent to those skilled in the art that alternative mounting means, including but not limited to double-sided tape or other adhesives, and rubber bands or other mechanical devices, may be substituted without departing from the spirit or scope of the invention.
In a preferred embodiment, the controllers 115 and 120 are implemented as conventional joysticks, with a direct press on the stick serving as an additional input. While a joystick is preferred, it will be apparent to those skilled in the art that alternative controller arrangements, including but not limited to a plurality of buttons or a trackball, may be substituted without departing from the spirit or scope of the invention.
Multiple controllers are preferably provided because they allow the trainee to traverse the simulated environment and adjust the viewing angle simultaneously. By way of example, and without intending to limit the invention, the controller 115 may be configured as a view controller. In this configuration, activating the controller 115 causes the display to change as if the trainee were turning or tilting their head. Conversely, with the controller 120 configured as a movement or navigation controller, activating the controller 120 causes the trainee's position within the simulation to change appropriately. The combination of these controllers allows the trainee, for example, to look to the left while backing up.
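The division of labor between the two controllers can be sketched as a per-frame update loop. The following Python is a minimal illustration under assumed names (TraineeAvatar, update) and assumed units; it is not taken from the patent. The view stick turns and tilts the view, while the movement stick translates the trainee relative to the current heading, which is what lets a trainee look left while backing up.

```python
import math

class TraineeAvatar:
    """Minimal sketch of combining a view controller (such as controller
    115) with a movement controller (such as controller 120)."""

    def __init__(self):
        self.x = self.y = 0.0   # position on the simulated terrain
        self.heading = 0.0      # radians; which way the trainee faces
        self.pitch = 0.0        # look up/down

    def update(self, view_stick, move_stick, dt, turn_rate=2.0, speed=1.5):
        # Each stick is an (x, y) deflection in [-1, 1].
        vx, vy = view_stick
        mx, my = move_stick
        # View controller: turn or tilt the head.
        self.heading += vx * turn_rate * dt
        self.pitch = max(-1.2, min(1.2, self.pitch + vy * turn_rate * dt))
        # Movement controller: advance/strafe relative to the heading,
        # so view direction and travel direction stay independent.
        self.x += (my * math.sin(self.heading) + mx * math.cos(self.heading)) * speed * dt
        self.y += (my * math.cos(self.heading) - mx * math.sin(self.heading)) * speed * dt
```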
The controls 115 and 120 are preferably located at or near a location where a trainee traditionally holds a weapon. In the embodiment shown in fig. 1-9, the controller 120 is located near the trigger 110 and the controller 115 is located on the barrel 125. In the illustrated embodiment, cables 130 communicatively couple these controllers to wireless controller 105. However, it should be apparent to those skilled in the art that alternative communication coupling means, including but not limited to short range ultrasonic or radio frequency communication, may be substituted without departing from the spirit or scope of the invention.
This approach provides highly realistic simulated weapon-engagement training. Converting a trainee's weapon into an indoor training weapon is a simple process of substituting a simulated barrel or a standard blank-firing adapter for the receiver or barrel and adding a laser 155 to indicate the point of fire. The weapon is then loaded with the appropriate special indoor blanks or standard blanks. Where the user does not wish to fire blanks, a realistic simulated weapon matching the form, fit, weight, and function of the real weapon may be used instead without departing from the spirit or scope of the present invention.
In addition to weapons equipped with controllers 115 and 120 for simulation inputs, standard push buttons may be used to control trainee simulation functions such as throwing a grenade, jumping, clearing a weapon jam, switching weapons, and the like. As shown by the input system 140 of figs. 8 and 9, each trainee can configure the layout and placement of these buttons to account for ergonomic differences and personal preferences. These buttons are preferably mounted on the weapon 100, but may alternatively be provided by a conventional keyboard or by an alternative input device, such as a plurality of buttons attached to or worn by the trainee.
By utilizing a multi-state input system, such as input system 140, multiple controls can be activated simultaneously and/or in a defined temporal sequence to form individual commands. This allows a much larger set of commands to be easily provided to each trainee. By way of example, and without intending to limit the invention, a given button sequence on the input system 140 may temporarily cause the input state machine to enter an external control mode in which the next command affects the entire group with which the trainee is associated, as in the sketch below.
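The sketch below illustrates one way such a multi-state input scheme could work. The button names, the squad-prefix behavior, and the timeout are illustrative assumptions, not details from the patent.

```python
import time

class InputStateMachine:
    """Hypothetical multi-state button input: a prefix button switches
    the machine into a squad-command mode, so that the next command
    addresses the trainee's whole group instead of the individual."""

    def __init__(self, mode_timeout=2.0):
        self.mode = "individual"
        self.mode_entered = 0.0
        self.mode_timeout = mode_timeout  # seconds before the prefix expires

    def press(self, button):
        now = time.monotonic()
        # Fall back to individual mode if the prefix has expired.
        if self.mode == "squad" and now - self.mode_entered > self.mode_timeout:
            self.mode = "individual"
        if button == "SQUAD_PREFIX":
            self.mode = "squad"
            self.mode_entered = now
            return None
        command = (self.mode, button)   # e.g. ("squad", "HOLD_POSITION")
        self.mode = "individual"        # the prefix applies to one command only
        return command
```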
In one embodiment, the trainee may freely define the functions represented by the various buttons on the input system 140 and the functions associated with each of the controllers 115 and 120 through a weapon control configuration screen such as that shown in FIG. 15. This screen allows the trainee to configure movements and view axis controls based on directional inputs received from an armed weapon, as well as set the "stiffness" of the configuration.
It is an object of the present invention to provide an immersive simulated environment in which a trainee can become more familiar with weapons, practice various techniques and tactics, and the like. This immersive environment is a collaborative virtual world that preferably supports various outdoor terrain types, such as cities, countryside, and city/village transitions, as well as various building interiors and exteriors and specially custom-built indoor sets. The user's view of the environment may be static or moving. A moving viewpoint simulates walking, running, driving, or other movement within the environment, and may be controlled directly by the user, scripted in the scenario, or controlled by a second user. When walking or running in this environment, building interiors may be explored by moving from room to room through hallways, around corners, and up and down stairs, ropes, ladders, and the like. Figs. 17 and 18 illustrate an exemplary Baghdad city virtual environment with damaged buildings.
Whether the viewpoint is static or moving, the software can place scenario-driven Artificial Intelligence (AI) entities 1800 throughout the immersive environment to provide situational engagement opportunities. An AI may represent a single entity or a group of entities and may exhibit innocent/non-combatant, armed/hostile, or other such behaviors. These behaviors are preferably programmable and can be combined and/or event-driven to synthesize complex behavior sequences. This technique differs from branching video scenarios in providing a far wider range of situations to train against. Further, it provides the ability to add variability to AI behavioral responses, so that the trainee learns to handle the situation rather than merely learning the scenario.
It is an object of the present invention to allow a trainee to train under various conditions and to allow an instructor to modify a given training scenario, so that the trainee learns to respond to events as they occur in the simulation rather than merely anticipating events from a familiar script. To this end, a preferred embodiment of the present invention includes a scenario editor and a level editor. The scenario editor allows an instructor, course developer, or other user to create new scenarios and modify existing ones. The user is preferably provided with at least two viewing modes: a free-flight camera mode and a locked camera view, which effectively provides a 2D orthographic view. The level editor allows course developers to create new environments.
Level editor users can import new terrain or geometry from sources such as, but not limited to, software that generates OpenFlight™ files and various other external software. In addition, geometry created in 3D Studio Max or other three-dimensional CAD or drawing tools may be imported, for example by using the Apex™ exporter or other similar tools. Using a mouse and keyboard or a navigation area, a user can move, fly, or otherwise navigate around the imported terrain and place objects into the scene. Objects can be placed by explicitly specifying a location (e.g., by mouse click), by using a painting function to quickly fill in objects such as forests or bushes, trash, or other urban debris, or by using a random placement function with a user-specified object density. Depending on the method used to render the terrain, the user may also specify the terrain texture, the tiling factor, and the detail texture to be used. The terrain may also have visual details placed upon it, such as bodies of water, roads, highway markings, and any other type of visual detail. Objects added to the environment using the level editor may be moved, rotated, and scaled, and their object-specific properties may be edited. The level editor is also used to generate or specify terrain and object collision meshes.
The scenario editor preferably includes an AI menu that allows the user to populate the environment with entities and specify their default behaviors. An opposing military entity may be given a task or goal, a skill level, a stealth level, and a set of human characteristics similar to those given to active participants. Non-combatant entities may be given, for example, a starting point, a count, a path, and a destination (i.e., an area in which they maneuver), or a location in which they linger while performing a specified action. Other functions include a trigger/event system for specifying complex scenario behaviors.
The scenario editor preferably also contains menu items that allow the user to specify particular attributes of particular objects, such as weapons (e.g., weapon type, effective range, agent pose, lethality, and damage/interaction with objects) and explosive devices (e.g., fireball size, kill range, injury range, and damage/interaction with objects). The scenario editor also supports assigning "health" to objects in the environment. Anything that interacts with a particular object can damage it (reduce its "health") depending on its speed, hardness, or other factors.
The destructible object system is tightly bound to the material system. If the user specifies that a sub-mesh of an object is "wood," wood properties are applied to that sub-mesh. The basic properties of a "wood" material may include particle effects, impact sounds, bullet holes, and burn marks, but may also include higher-level physical properties, such as brittleness, upon which the destructible object system can act.
The brittleness of a material, such as wood or glass, determines the impact force required to break an object to which the material is assigned. The failure point and the fracture path are determined on the fly, based on the position and magnitude of the applied contact force. For clarity, and without intending to limit the invention: in two dimensions, a fracture path may be thought of as a series of connected line segments with randomly perturbed directions. Implementing the brittleness simulation preferably involves splitting the volume comprising the original object and applying a pre-assigned texture to the newly created polygons; this texture represents the newly exposed interior of the object.
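The two-dimensional fracture-path idea lends itself to a compact sketch. The following Python generates connected line segments with a random perpendicular offset at each joint; the function name and parameter values are illustrative assumptions, not the patented algorithm.

```python
import random

def fracture_path(start, end, segments=8, jitter=0.15):
    """Return a jagged 2D fracture path from start to end as a list of
    points: straight-line interpolation plus a random perpendicular
    offset at each interior joint (parameters are illustrative)."""
    (x0, y0), (x1, y1) = start, end
    points = [start]
    for i in range(1, segments):
        t = i / segments
        offset = random.uniform(-jitter, jitter)
        # (dy, -dx) is perpendicular to the segment direction (dx, dy).
        points.append((x0 + (x1 - x0) * t + offset * (y1 - y0),
                       y0 + (y1 - y0) * t + offset * (x0 - x1)))
    points.append(end)
    return points
```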
The visual representation of people in the simulation is preferably composed of various body-type representations (male, female, child, etc.) combined with a customizable appearance that allows different face, skin, hair, and clothing styles. Using this combinatorial approach, the software provides a virtually limitless variety of human representations. Figure 18 shows four different entities firing at a trainee.
Entities use physical or virtual weapons to engage during the simulation. When a shot is registered in the simulated environment, an appropriate response is produced by applying damage to the scenery or to other entities. Visual special effects and physical reactions provide visible indications of the damage. The visual effects include explosions, blood splatter, dust bursts, debris bursts, sparks, wood chips (from trees), concrete bursts, bullet holes, and burn marks. Physical indications include body-movement reactions, flying debris from larger objects, and vehicle physics effects. Furthermore, a shot registered in the environment may also elicit a responsive behavior from an entity in the scene, according to its programmed behavior pattern for the given scenario.
The present invention preferably uses a base morale score, adjusted by various factors, to calculate a morale score for each entity or group of entities in the simulation. The morale score may affect the behavior of friendly and hostile AI entities. The score may also be used to determine whether an entity should be suppressed. The following factors related to the morale score are exemplary and are not an exhaustive list:
Fired at an enemy: +1
Hit an enemy: +3
Killed an enemy: +5
Friendly fired upon: -1
Friendly hit: -3
Friendly killed: -5
When morale is low, AI entities may tend to stay behind cover, peek out rather than expose themselves, engage less often and less accurately, adopt lower postures (such as crouching), panic (for example, fleeing without returning fire), and attempt to increase their distance from the threat. Based on the morale score and the number of bullets "fired at" an AI entity or group of AI entities, the simulation can determine whether the entity is suppressed (i.e., exhibits longer intervals behind cover), pinned (suppressed and not moving), broken (pinned and not returning fire), and the like.
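A minimal sketch of such a morale model appears below. The event weights mirror the exemplary list above, while the base score, thresholds, and near-miss counts are hypothetical assumptions (near misses are detected as described in the next paragraph).

```python
class Morale:
    """Additive morale model; event weights follow the exemplary list
    above, while the base score and thresholds are assumptions."""

    EVENTS = {
        "fired_at_enemy": +1, "hit_enemy": +3, "killed_enemy": +5,
        "friendly_fired_upon": -1, "friendly_hit": -3, "friendly_killed": -5,
    }

    def __init__(self, base_score=10):
        self.score = base_score
        self.near_misses = 0  # bullets perceived passing nearby

    def apply(self, event):
        self.score += self.EVENTS[event]

    def state(self):
        # Escalating reactions as morale drops and incoming fire mounts.
        if self.score <= -10 or self.near_misses > 20:
            return "broken"      # pinned + not returning fire
        if self.score <= -5 or self.near_misses > 10:
            return "pinned"      # suppressed + not moving
        if self.score <= 0 or self.near_misses > 5:
            return "suppressed"  # longer intervals behind cover
        return "normal"
```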
To detect shot proximity, a sphere is created around each entity, and a ray is cast through the bullet's entry point, exit point, and midpoint (the midpoint of the line connecting the entry and exit points on the sphere). If the ray passes through the sphere, the bullet passed nearby and the entity can perceive it, which in turn alters the entity's morale score.
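A standard ray-sphere test suffices for this check. The sketch below is a generic implementation with hypothetical names, not the patented code; it treats the bullet's path as a segment from entry point to exit point and reports whether it passes within the entity's perception sphere.

```python
def bullet_passed_near(entity_pos, radius, p_entry, p_exit):
    """Ray/sphere proximity test: does the segment from p_entry to
    p_exit pass within `radius` of the entity? Points are (x, y, z)."""
    dx, dy, dz = (p_exit[i] - p_entry[i] for i in range(3))
    length = (dx * dx + dy * dy + dz * dz) ** 0.5
    if length == 0.0:
        return False
    dx, dy, dz = dx / length, dy / length, dz / length
    # Vector from the ray origin to the sphere center.
    cx, cy, cz = (entity_pos[i] - p_entry[i] for i in range(3))
    # Distance along the ray to the point of closest approach.
    t = cx * dx + cy * dy + cz * dz
    closest_sq = (cx * cx + cy * cy + cz * cz) - t * t
    # A near miss only counts if the closest approach lies on the segment.
    return 0.0 <= t <= length and closest_sq <= radius * radius
```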
While scenarios can be designed for stand-alone training, training is preferably controlled from the instructor station. During the simulation, the instructor is presented with a user interface similar to that shown in figs. 19-22. The instructor station provides main scenario selection and overall simulation execution control. Scenario execution control commands may include, but are not limited to, start, stop, pause, and resume. The instructor is preferably given a free-flight camera view of the environment, allowing the scene to be viewed from any desired angle.
The instructor may also choose to play a role in the first person to improve the realism of the training. Using an object-oriented actor management and command system, an exemplary interface for which is shown in figs. 16 and 22, the distributed training application can use a multiplexed command router to send input from the instructor station to the appropriately selected distributed actor whenever the 3D environment widget has focus in the application. As shown in figs. 16 and 19, such distributed actors may comprise AI entities or trainees, and may be "possessed" (i.e., controlled), healed, and/or revived (i.e., re-instantiated) by the instructor. FIG. 23 illustrates a method by which an instructor may possess an entity.
In FIG. 23, the instructor selects the entity to possess (block 2315) and clicks the possess button (block 2320). Any keyboard commands or other inputs (block 2310) are then multiplexed to the appropriate entity controls (block 2330), ground vehicle controls (block 2335), or aircraft controls (block 2340).
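The multiplexing step of fig. 23 can be sketched as a small router object. The class and method names below are hypothetical, and the dispatch is local rather than networked, but the structure mirrors the blocks described above: possess an entity, then forward each input to entity, ground vehicle, or aircraft controls.

```python
class CommandRouter:
    """Hypothetical sketch of the multiplexed command router of FIG. 23:
    instructor input is forwarded to whichever distributed actor is
    currently possessed, using controls suited to the actor type."""

    def __init__(self):
        self.possessed = None   # (entity_id, kind) or None

    def possess(self, entity_id, kind):
        # kind selects the control scheme (blocks 2330/2335/2340).
        self.possessed = (entity_id, kind)

    def release(self):
        # Return control to the original AI or training participant.
        self.possessed = None

    def route(self, keyboard_command):
        if self.possessed is None:
            return None
        entity_id, kind = self.possessed
        handler = {
            "infantry": "entity_controls",
            "ground_vehicle": "ground_vehicle_controls",
            "aircraft": "aircraft_controls",
        }[kind]
        # A real system would dispatch this over the network.
        return (entity_id, handler, keyboard_command)
```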
Fig. 19 also shows a host scenario interface through which an instructor can jump instantly to the location of a given entity and select various trainee and/or simulation options. Fig. 22 shows the instructor's third-person view of a trainee, reached by clicking the trainee's name in the trainee list of fig. 19.
As shown in fig. 21, when the instructor possesses or controls an entity, the instructor's viewport and the HUD 1905 switch to match the first-person simulation perspective. Additional controls may also be presented to the instructor, including a compass, a friendly AI locator, a target locator, and health and posture indicators. The instructor can also control the entity from a third-person perspective via entity control 2100 and command the entity to act in a certain manner. Similarly, when the instructor is controlling an entity performing a task traditionally simulated from a third-person perspective, such as, but not limited to, operating a vehicle, the instructor's viewport may be transformed accordingly. Once the need for instructor participation at this level has passed, the instructor can return control of the entity to the original controlling AI or training participant.
During the scenario, an after-action review (AAR) log is compiled at the instructor station. The AAR information preferably includes, but is not limited to, the number of shots fired by each trainee, where those shots landed, lines of fire on the battlefield, reaction times, and other relevant data the instructor deems important. Using the AAR log, the instructor can play back the scenario on their local display and/or on the trainees' display systems to review performance. The playback system preferably supports a "play from here" approach from any given point in time in the AAR log file.
In one embodiment, the instructor can use a simple user interface, such as the scenario control 1900 of fig. 19, to indicate which communication channels are available, giving the instructor full control over the communication matrix. The instructor may create, enable, or disable a single channel, a team channel, or a global broadcast channel. The instructor can specify which channels are recorded in the AAR and can mark the AAR with bookmarks. The instructor can also record selected individuals, teams, or sub-teams. External communications, such as simulated calls for fire, may also be recorded.
While the instructor may insert a bookmark via the add-mark button of the scenario control interface 1900 or another such user interface element, certain events preferably trigger a bookmark automatically. These events include, but are not limited to, an enemy first entering a trainee's view, trainee deaths, enemy deaths, all explosions, trigger activations, and other scripted events. Particular evaluation triggers automatically log the event into the individual trainee's AAR statistics log.
Multi-channel hit detection is accomplished by sending a network packet whenever mouse-click input is received from a channel. The network packet contains the position and direction of the projected ray generated by the two-dimensional mouse click. The packet is processed by the main channel, and a hit is registered using the ray information. To avoid registering multiple hits where screen regions overlap between channels, the time of each hit is compared against the weapon's maximum firing rate; a hit that arrives too quickly is discarded.
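A sketch of this de-duplication rule follows. The names and the per-weapon bookkeeping are assumptions; the core idea is exactly the rule described above: discard any hit that arrives sooner after the previous accepted hit than the weapon's maximum firing rate allows.

```python
class HitArbiter:
    """De-duplicate hits reported by overlapping visual channels: any
    hit arriving faster than the weapon's maximum firing rate allows
    is treated as a duplicate and discarded."""

    def __init__(self, max_rounds_per_second):
        self.min_interval = 1.0 / max_rounds_per_second
        self.last_accepted = {}  # weapon_id -> timestamp of last hit

    def register(self, weapon_id, timestamp, ray_origin, ray_direction):
        last = self.last_accepted.get(weapon_id)
        if last is not None and timestamp - last < self.min_interval:
            return False  # same shot seen through an overlapping channel
        self.last_accepted[weapon_id] = timestamp
        # ...intersect (ray_origin, ray_direction) with the scene here...
        return True
```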
As shown in fig. 24, an embodiment of the present invention may simulate jamming of a weapon system based on the current clip 135, the cumulative effects of the rounds fired, and so on. In one embodiment, weapon 100 is equipped with a clip sensor that determines which clip 135 is currently in the weapon. In an alternative embodiment, the clips 135 are stored in a container or placed on a rack, and the rack or container can count the clips it contains; as each clip is removed, the clip currently in use is recorded. Based on the clip sensor input 2410, the clip and projectile are determined (block 2420) each time the weapon is fired (block 2415). This information is processed (block 2425) and stored in a database (block 2430) so that the trainee cannot "reuse" a previously consumed clip. Performance metrics associated with each projectile, such as, but not limited to, jam frequency and particulate fouling, are also accessed, and these metrics are used to determine the likelihood of a jam. A random number generator 2445 may be used to determine, based on that likelihood, whether a jam actually occurs.
By using RFID or other wireless or wired technologies, including special clip racks equipped to identify which clips 135 remain unused (and thereby which have been used), and by tracking in a database the clips, the rounds within each clip, and the types of those rounds (standard, tracer, custom, etc.), the simulation can statistically derive a probability for simulating a realistic, plausible weapon jam during the current firing cycle. The jam probability during any firing cycle is related to the type of round fired, the total number of shots fired during the period, the total number of shots fired since the last jam, the number of shots of each ammunition type fired during the period, the number of shots of each ammunition type fired since the last jam, and other items tracked by the simulation. Alternatively, a jam may be simulated on command from the instructor or as a predefined command in a control script.
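The statistical derivation might look like the following sketch. The functional form and the rate constants are invented for illustration; the patent describes only the inputs (shots fired per ammunition type, shots since the last jam, and so on) and the use of a random draw (block 2445).

```python
import random

def jam_probability(rounds_since_jam, rounds_this_cycle, ammo_stats):
    """Illustrative jam model: a per-round rate for each ammunition
    type plus small terms for accumulated fouling. ammo_stats maps
    ammo type -> (rounds fired, per-round jam rate)."""
    p = sum(fired * rate for fired, rate in ammo_stats.values())
    p += 0.0001 * rounds_since_jam    # fouling since the last jam
    p += 0.00005 * rounds_this_cycle  # wear within the firing cycle
    return min(p, 1.0)

def fire_round(state, ammo_type):
    """Update the tracked counts for one shot and roll for a jam
    (the random draw corresponds to block 2445)."""
    fired, rate = state["ammo_stats"][ammo_type]
    state["ammo_stats"][ammo_type] = (fired + 1, rate)
    state["rounds_since_jam"] += 1
    state["rounds_this_cycle"] += 1
    p = jam_probability(state["rounds_since_jam"],
                        state["rounds_this_cycle"],
                        state["ammo_stats"])
    if random.random() < p:
        state["rounds_since_jam"] = 0
        return True   # weapon jams; firing signal is blocked
    return False
```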
Once a weapon jam occurs, the firing signal to the equipped weapon is blocked, and the trainee must perform a clearing procedure to reactivate the weapon. The complexity of this procedure may range from pressing a single button on the weapon to a complex series of steps sensed by the simulation from a fully instrumented weapon, as shown in fig. 25. In the exemplary process of fig. 25, when a jam occurs (block 2500), the clip sensor 2510 is polled to determine whether the trainee has removed and checked the clip (block 2515). The time the clip spends out of the weapon can be monitored to force the trainee to perform the drill realistically. Depending on the type of jam, the clip check may be sufficient to clear the weapon, and the simulation continues as normal. If the weapon is still jammed, the chamber may be monitored via a chamber sensor (block 2520) to determine whether the trainee has checked the chamber (block 2525). As with the clip sensor, depending on the type of jam, the chamber check may be sufficient to clear the weapon, and the simulation continues as normal. If the weapon is still jammed, the system may monitor the load and recharge sensors (block 2530). If the trainee performs the appropriate steps, the weapon clears the jam. Although described above in terms of physical inspection, these steps may include visually inspecting the chamber, chambering a new round with the bolt, charging the weapon with the charging handle, and the like. Once the correct procedure has been performed, as monitored by the state machine used by the simulation, the system restores the firing signal, allowing the weapon to activate normally.
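The clearing sequence of fig. 25 is naturally expressed as a small state machine. The sketch below uses hypothetical sensor-event names; the ordering of steps follows the figure, and the jam type determines how far into the sequence the trainee must proceed before the firing signal is restored.

```python
class JamClearing:
    """Clearing drill as a state machine. Each jam type is cleared once
    the trainee has progressed through enough of the monitored steps;
    events arrive from the clip, chamber, and load/charge sensors."""

    STEPS = ["clip_checked", "chamber_checked", "reloaded_and_charged"]

    def __init__(self, jam_type):
        self.jam_type = jam_type     # index of the step that clears this jam
        self.step = 0                # next expected step in the drill
        self.firing_enabled = False  # firing signal blocked (block 2500)

    def sensor_event(self, event):
        # Ignore out-of-order events; the drill must be done in sequence.
        if not self.firing_enabled and event == self.STEPS[self.step]:
            if self.step >= self.jam_type:
                self.firing_enabled = True  # weapon activates normally
            else:
                self.step += 1              # still jammed; keep clearing
        return self.firing_enabled
```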
While the invention has been described in detail and with reference to specific embodiments thereof, it will be apparent to one skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope thereof. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.

Claims (42)

1. An infantry training simulation system comprising:
at least one firing lane, wherein the firing lane has a first end and a second end;
at least one display arranged near a second end of the firing lane;
at least one weapon, wherein the weapon is an infantry weapon;
at least one controller, wherein the at least one controller is mounted on the at least one weapon;
at least one computer, wherein the at least one computer is communicatively coupled to the at least one display and communicatively coupled to the at least one controller, and wherein the at least one computer monitors inputs from the at least one controller and modifies a training simulation displayed on the display based on the inputs;
at least one instructor station communicatively coupled to the at least one computer, the instructor station allowing an instructor to view the simulation without interacting with the simulation.
2. The system of claim 1, wherein the at least one controller is comprised of a plurality of controllers.
3. The system of claim 2, wherein at least one of the plurality of controllers is located near a trigger of the weapon.
4. The system of claim 2, wherein at least one of the plurality of controls is located on a barrel of the weapon.
5. The system of claim 1, wherein the at least one computer is communicatively coupled to the at least one controller via a wireless communication link.
6. The system of claim 1, wherein the instructor station allows an instructor to control at least one entity in the simulation.
7. The system of claim 6, wherein the entity is a student.
8. The system of claim 1, wherein the weapon is a traditional weapon.
9. The system of claim 1, wherein the weapon is configured to fire at least one simulated projectile.
10. The system of claim 9, wherein the at least one simulated projectile is stored in at least one clip.
11. The system of claim 10, wherein each of the at least one cartridge clips is configured with an identifier.
12. The system of claim 11, wherein the weapon is capable of determining a type of simulated projectile stored in a current clip in the weapon based on the identifier.
13. The system of claim 12, wherein the at least one weapon is capable of determining whether a clip has been previously used based on the identifier.
14. The system of claim 13, wherein the at least one weapon tracks the number of projectiles fired from the weapon.
15. The system of claim 11, wherein the at least one computer is capable of determining a type of simulated projectile stored in a current clip in the weapon based on the identifier.
16. The system of claim 15, wherein the at least one computer determines the type of simulated projectile stored in the current clip in the weapon by taking inventory of the clips associated with the trainee and identifying the clip missing from the list.
17. The system of claim 15, wherein the at least one computer is capable of determining whether a cartridge clip has been used based on the identifier.
18. The system of claim 1, wherein the weapon is capable of simulating a weapon jam.
19. The system of claim 18, wherein the weapon is capable of monitoring student interaction with the weapon to determine when a prescribed step of clearing a simulated weapon jam has been performed.
20. The system of claim 18, wherein the computer is capable of monitoring student interaction with the weapon to determine when a prescribed step of clearing a simulated weapon jam has been performed.
21. The system of claim 1, wherein the computer is capable of simulating a weapon jam.
22. The system of claim 21, wherein the computer is capable of monitoring student interaction with the weapon to determine when a prescribed step of clearing a simulated weapon jam has been performed.
23. An infantry training simulation system comprising:
a plurality of firing lanes, wherein each firing lane has at least one display associated therewith;
at least one computer, wherein the at least one computer is communicatively coupled to at least one of the plurality of displays, and wherein the at least one computer generates a training simulation for display by the at least one attached display;
at least one instructor station, wherein the instructor station is communicatively coupled to the at least one computer, and wherein the at least one instructor station allows an instructor to control at least one entity in the simulation;
at least one weapon, wherein each of the at least one weapon is associated with a firing lane, and wherein each of the at least one weapon is communicatively coupled to the at least one computer such that the at least one computer can monitor the trainee as he or she interacts with the weapon.
24. The system of claim 23, wherein each weapon has at least one controller associated therewith, the at least one controller at least allowing the trainee to navigate within the simulation.
25. The system of claim 24, wherein the at least one controller is located proximate to a trigger of the weapon.
26. The system of claim 24, wherein the at least one controller is located on a barrel of the weapon.
27. The system of claim 23, wherein a plurality of controls are associated with each weapon, the controls at least allowing the trainee to navigate within the simulation, and wherein at least one of the plurality of controls is located near a trigger of the weapon.
28. The system of claim 23, wherein a plurality of controls are associated with each weapon, the controls at least allowing the trainee to navigate within the simulation, and wherein at least one of the plurality of controls is located on a barrel of the weapon.
29. A method of interacting with a simulated infantry scene, comprising:
displaying the computer-generated simulation on a display;
equipping a physical weapon with at least one controller;
navigating in a simulation using the at least one controller;
monitoring the simulation for at least one hostile target;
engaging the hostile target with the physical weapon; and
allowing an instructor to view the simulation via an instructor station.
30. The method of claim 29, wherein the controller allows a trainee holding the weapon to navigate through the simulation.
31. The method of claim 30, wherein the weapon is a projectile weapon.
32. The method of claim 31, further comprising firing a projectile at the hostile target as part of the engaging step.
33. The method of claim 32, further comprising calculating the projectile trajectory through the simulated environment.
34. The method of claim 32, wherein the projectile firing is simulated by the weapon.
35. The method of claim 32, further comprising monitoring, by the weapon, a number of projectiles fired and simulating a weapon jam.
36. The method of claim 35, wherein the weapon jam is simulated at a frequency associated with real-world use of the physical weapon.
37. The method of claim 35, further comprising simulating for each weapon firing the depletion of a projectile from a clip associated with the weapon.
38. The method of claim 29, further comprising allowing the instructor to interact with the simulation.
39. The method of claim 38, wherein the interaction comprises altering an environmental characteristic.
40. The method of claim 39, wherein the environmental characteristics include wind, time of day, and lighting.
41. The method of claim 38 wherein the interacting comprises allowing the instructor to control at least one entity within the simulation.
42. The method of claim 41 wherein the at least one entity controllable by the instructor comprises at least one student.
HK09109972.4A 2004-11-24 2005-11-23 Instructor-led training simulation system and method of interacting with simulated infantry scenarios HK1132034B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US63030404P 2004-11-24 2004-11-24
US60/630,304 2004-11-24
US73427605P 2005-11-08 2005-11-08
US60/734,276 2005-11-08
PCT/US2005/042659 WO2007011418A2 (en) 2004-11-24 2005-11-23 Instructor-lead training environment and interfaces therewith

Publications (2)

Publication Number Publication Date
HK1132034A1 (en) 2010-02-12
HK1132034B (en) 2013-04-12

