MXPA06007922A - Vehicle video processing system - Google Patents
Vehicle video processing system
- Publication number
- MXPA06007922A
- Authority
- MX
- Mexico
- Prior art keywords
- video
- cameras
- vehicle
- processing unit
- subgroups
Abstract
Disclosed are various systems and methods for processing and displaying video in a vehicle. In one embodiment, a vehicle video system is provided that comprises a plurality of cameras mounted in a vehicle, each of the cameras generating a video image, the cameras including a plurality of visible light cameras and a plurality of night vision cameras. The vehicle video system also includes a plurality of monitors and a video processing unit, where each of the cameras and each of the monitors is electrically coupled to the video processing unit. The video processing unit is configured to select at least two subsets of the cameras from which output video images are obtained for display on the monitors.
Description
VEHICLE VIDEO PROCESSING SYSTEM

BACKGROUND OF THE INVENTION
The use of vision systems in commercial vehicles offers improved vision around a commercial vehicle. In certain situations, the available views are limited to a few selected cameras on a commercial vehicle, which do not give the operator complete awareness of the commercial vehicle's surroundings. Consequently, the operator may be limited when maneuvering the vehicle or performing other activities involving the commercial vehicle.

BRIEF DESCRIPTION OF THE VARIOUS VIEWS OF THE DRAWINGS
The invention will be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale. Also, in the drawings, the same reference numerals designate corresponding parts in the various views.
Figure 1 shows a block diagram of a vehicle employing a vehicle video system in accordance with one embodiment of the present invention;
Figure 2 shows a schematic block diagram of a video processing unit used as part of the vehicle video system of Figure 1 in accordance with one embodiment of the present invention;
Figure 3 illustrates a block diagram of a video image selector that is employed in the vehicle video system of Figure 1 in accordance with one embodiment of the present invention;
Figure 4 shows a schematic block diagram of a control processor employed in the video processing unit of Figure 2 in accordance with one embodiment of the present invention; and
Figures 5A-5D show flow charts illustrating an example of a control system executed by the control processor of Figure 4 in accordance with one embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION
With reference to Figure 1, a block diagram of a vehicle 100 in accordance with one embodiment of the present invention is shown. The vehicle 100 can be, for example, a commercial vehicle such as a truck, a truck with a trailer, or another commercial vehicle. The commercial vehicle may also be a general purpose vehicle used, for example, by law enforcement or other agencies for the purpose of obtaining visual information about the environment surrounding the commercial vehicle itself. In general, the vehicle includes a front part F, a rear part R, and sides S. To this end, the vehicle 100 includes a vehicle video system 101 having several cameras mounted on or inside the vehicle 100. Specifically, the cameras include several visible light cameras 103 and several night vision cameras 106. Alternatively, a single camera that includes both daytime vision capability and night vision capability can be used in place of one of the visible light cameras 103 and one of the night vision cameras 106. In addition, the vehicle video system 101 includes a video processing unit 109. Each of the cameras 103, 106 is electrically connected to the video processing unit 109, and each of the cameras 103, 106 generates a video image 111 which is applied to the video processing unit 109. In this regard, the video processing unit 109 includes several video inputs to facilitate the electrical connection with each of the cameras 103, 106. The vehicle video system 101 also includes several monitors 113. Each of the monitors 113 is also electrically connected to the video processing unit 109 through video output ports on the video processing unit 109.
The vehicle video system 101 further includes video image selectors 116 that can be hand-held devices or can be mounted in the commercial vehicle 100 in an appropriate manner. Each of the video image selectors 116 allows an operator to control the video displayed on a respective one of the monitors 113. Specifically, each of the video image selectors 116 is associated with a respective one of the monitors 113 and controls the video displayed thereon as will be described. Each of the video image selectors 116 may be connected to the video processing unit 109 via an appropriate vehicle data bus or through a direct electrical connection as will be described. In addition, the vehicle video system 101 includes audible alarms 119 connected to the video processing unit 109. In this regard, the audible alarms 119 sound upon detecting predefined conditions with respect to the vehicle video system 101 as will be described. Alternatively, the video processing unit 109 may generate visual alarms on the monitors 113 as will be described. Likewise, both audible alarms 119 and visual alarms, etc., can be used in combination. The cameras 103, 106 are mounted on the vehicle 100, for example, such that a field of view 123 of each of the cameras 103, 106 is oriented either in a substantially longitudinal direction 126 or in a substantially lateral direction 129 relative to the vehicle 100. In this regard, the longitudinal direction 126 is generally aligned with the direction of travel of the vehicle 100 when moving forward or in reverse. The lateral direction 129 is substantially orthogonal to the longitudinal direction 126. Some of the cameras 103, 106 are oriented so as to have a field of view 123 oriented in the substantially longitudinal direction 126 relative to the vehicle 100, while other cameras 103, 106 are oriented such that they have a field of view 123 oriented in the substantially lateral direction 129. In this regard, the cameras 103, 106 are provided such that they can generate video images 111 which show views of the surroundings of the vehicle 100. In one embodiment, the angle of the fields of view 123 of the cameras 103, 106 may differ according to their location and orientation relative to the vehicle 100. For example, the cameras 103, 106 that are oriented such that their field of view 123 faces forward in the longitudinal direction may have a field of view angle 123 that is smaller than the field of view angle 123 of the cameras 103, 106 facing rearward in the longitudinal direction. In a specific embodiment, the angle of the field of view 123 of said forward facing cameras 103, 106 is 12 degrees, and the angle of the field of view 123 of the rearward facing cameras 103, 106 is approximately 153 degrees, although the angles of the fields of view 123 of the forward and rearward facing cameras 103, 106 may differ from these values according to the desired vision capabilities of the vehicle video system 101. The video processing unit 109 is configured to select numerous subgroups of the cameras 103, 106 from which output video images 133 may be generated. In this regard, the video processing unit 109 generates at least two output video images 133 that are applied to the corresponding ones of the monitors 113. In one embodiment, a first output video image 133 incorporates one or more video images 111 generated by one or more of the cameras 103, 106 included in a first subgroup of the subgroups of the cameras 103, 106.
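By way of illustration only, the following sketch, which is not part of the original disclosure, shows one way control software might model the camera arrangement described above; all identifiers are hypothetical, and the lateral field of view angle is merely an assumption:

    /* Illustrative model of the cameras 103, 106 (hypothetical identifiers). */
    #include <stdbool.h>

    enum cam_type        { CAM_VISIBLE_LIGHT, CAM_NIGHT_VISION };
    enum cam_orientation { ORIENT_LONGITUDINAL, ORIENT_LATERAL };

    struct camera {
        const char          *label;          /* e.g. "LF", "LSF", "RR"           */
        enum cam_type        type;
        enum cam_orientation orientation;
        bool                 faces_forward;  /* meaningful for longitudinal cams */
        unsigned             fov_degrees;    /* 12 forward, ~153 rearward        */
    };

    static const struct camera cameras[] = {
        { "LF",  CAM_VISIBLE_LIGHT, ORIENT_LONGITUDINAL, true,  12  },
        { "LR",  CAM_VISIBLE_LIGHT, ORIENT_LONGITUDINAL, false, 153 },
        { "LSF", CAM_VISIBLE_LIGHT, ORIENT_LATERAL,      false, 90  }, /* lateral FOV assumed */
    };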
At the same time, a second output video image 133 incorporates one or more video images 111 generated by one or more corresponding cameras of the cameras 103, 106 included in a second subgroup of the subgroups of the cameras 103, 106. According to one embodiment of the present invention, the video processing unit 109 independently displays the first output video image 133 on a first one of the monitors 113 and the second output video image 133 on a second one of the monitors 113. In this regard, the output video image 133 displayed on either of the monitors 113 does not affect or govern the output video image 133 displayed on the other of the monitors 113. In addition, there may be more than two monitors 113 (not illustrated) and more than two output video images 133 (not illustrated) generated by the video processing unit 109, and so on. Each of the output video images 133 generated by the video processing unit 109 may incorporate one or more of the video images 111 generated by one or more corresponding cameras of the cameras 103, 106 in a respective subgroup of the subgroups of the cameras 103, 106. In this regard, a user may manipulate one of the video image selectors 116, which is configured to select which of the video images 111 from which of the cameras 103, 106 within a subgroup should be incorporated in a respective output video image 133 to be applied to a respective one of the monitors 113. The output video images 133 may incorporate a single one of the video images 111 or several of the video images 111 generated by cameras within a respective one of the subgroups. The cameras 103, 106 selected to be in one of the subgroups from which the output video images 133 are generated can be selected according to various characteristics. For example, a given subgroup of the cameras 103, 106 may include only visible light cameras 103 or only night vision cameras 106. In this regard, an operator may thus dictate that the output video images 133 incorporate video images 111 generated entirely by visible light cameras 103 or night vision cameras 106, depending on the nature of the environment surrounding the vehicle 100. Alternatively, a given selected subgroup of the cameras 103, 106 may include only cameras 103, 106 having a field of view oriented along the longitudinal direction 126 or oriented along the lateral direction 129. In this regard, an operator may then dictate that the output video images 133 display views directed only toward the front and the rear of the vehicle 100, or views directed toward the environment alongside the vehicle 100. The video processing unit 109 is also configured to detect movement within the field of view 123 of each of the cameras 103, 106 included within any of the subgroups of the cameras 103, 106. When motion is detected within the field of view of a respective one of the cameras 103, 106, the video processing unit 109 can generate an alarm that alerts the operators within the vehicle 100 of said movement. In this regard, the alarm may comprise, for example, the incorporation of a border, alarm text, or other symbol within the output video images 133 displayed on the monitors 113. The border, alarm text, or other symbol can be generated within the video images 111 incorporated within the output video image 133, for example, if the movement is detected in said video images 111. Alternatively, the alarms may comprise the audible alarms 119, or both a video image alarm and an audible alarm 119.
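Continuing the illustrative sketch above (and reusing its struct camera type), one hypothetical way to form a subgroup 165 by filtering on a single camera characteristic — here camera type, though orientation works the same way — is:

    /* A minimal sketch of forming a subgroup by camera type; the function
     * name and signature are invented for illustration. */
    #include <stddef.h>

    static size_t select_by_type(const struct camera *all, size_t n,
                                 enum cam_type wanted,
                                 const struct camera **out, size_t max)
    {
        size_t count = 0;
        for (size_t i = 0; i < n && count < max; i++)
            if (all[i].type == wanted)   /* e.g. only night vision cameras */
                out[count++] = &all[i];
        return count;                    /* number of cameras in the subgroup */
    }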
In certain situations, the output video image 133 viewed on a particular monitor 113 may not incorporate a video image 111 generated by one of the cameras 103, 106 that is included within a particular subgroup of the cameras 103, 106. The video processing unit 109 may also detect movement in the video image 111 excluded from the output video image 133. In this case, an alarm may be generated which informs the operator that motion was detected in a video image 111 generated by a camera 103, 106 that is not currently being viewed on one of the monitors 113. In this regard, operators are alerted to movement that they cannot see in any of the video images 111 incorporated in the output video images 133 displayed on the respective monitors 113. Such an alarm may differ in appearance, or may have a different sound, compared to an alarm caused by movement detected in a video image 111 incorporated in an output video image 133 displayed on a monitor 113. Thus, in accordance with one embodiment of the present invention, different alarms sound for movement detected within a video image 111 incorporated in an output video image 133 displayed on a monitor 113 and for movement detected within a video image 111 excluded from an output video image 133 displayed on a respective monitor 113. In additional embodiments, different alarms can be generated according to the place where the movement is detected in relation to the vehicle 100. Specifically, different alarms can be generated according to which of the video images 111 from the cameras 103, 106 the movement is detected in, which provides instantaneous information to an operator as to the location of the movement in relation to the vehicle 100 itself. In another embodiment, the video processing unit 109 can operate on a respective video image 111 from one of the cameras 103, 106 to generate a mirror image therefrom for purposes of displaying images from rearward facing cameras 103, 106 in a way that does not confuse an operator as to the orientation of the fields of view 123 of the respective cameras 103, 106. Referring to Figure 2, the video processing unit 109 is shown schematically in accordance with one embodiment of the present invention. The video processing unit 109 includes a control processor 153 and at least two video processors 156a and 156b. The control processor 153 is electrically connected to each of the video processors 156a and 156b to facilitate data communication between them. The control processor 153 may, for example, be a Motorola MC9S12DG128 microprocessor manufactured by Motorola Semiconductor of Austin, Texas. Each of the video processors 156a/156b may be, for example, an Averlogic AL700C video processor manufactured by Averlogic Technologies, Inc., of San Jose, California. The video processing unit 109 further comprises several video encoders 163. The output of each of the video encoders 163 is applied to several multiplexed inputs of the video processors 156a/156b. Each of the video encoders 163 performs the function of converting the video images 111 generated by the cameras 103, 106 in the form of an analog signal into a digital video signal which is recognizable by the video processors 156a/156b. Each of the video encoders 163 is associated with a respective corner of the vehicle 100 (Figure 1).
In this regard, two of the video encoders 163 are associated with the left front corner (LFC), two of the video encoders 163 are associated with the right front corner (RFC), two of the video encoders 163 are associated with the left rear corner (LRC), and the remaining two video encoders 163 are associated with the right rear corner (RRC) of the vehicle 100. Each of the video encoders 163 can be, for example, a Philips SAA7113H encoder manufactured by Philips Semiconductors of Eindhoven, The Netherlands. Each of the left front corner (LFC) video encoders 163 receives inputs from the left front (LF) cameras 103, 106 and the left-side front (LSF) cameras 103, 106. Likewise, the right front corner (RFC) video encoders 163 receive inputs from the right front
(RF) cameras 103, 106 and the right-side front (RSF) cameras 103, 106. The left rear corner (LRC) video encoders 163 receive inputs from the left rear (LR) cameras 103, 106 and the left-side rear (LSR) cameras 103, 106. Finally, the right rear corner (RRC) video encoders 163 receive inputs from the right rear (RR) cameras 103, 106 and the right-side rear (RSR) cameras 103, 106. The respective video inputs 111 in each of the video encoders 163 are multiplexed onto a single output which is applied to one of the video processors 156a, 156b. For example, a first one of the left front corner (LFC) video encoders 163 applies its output to the video processor 156a, and the remaining left front corner (LFC) video encoder 163 applies its output to the video processor 156b. Similarly, the outputs of the various pairs of video encoders 163 are applied to one of the video processors 156a and 156b. Finally, the encoders 163 facilitate the selection of the subgroup 165 of the video images 111 generated by the respective cameras of the cameras 103, 106 that are applied to the video processors 156a/156b to be incorporated into the output video images 133 as described above. In this regard, the control processor 153 is electrically connected to each of the encoders 163 and executes a control system that controls the operation of each of the encoders 163 in the selection of several of the video images 111 applied to the inputs of the video processors 156a, 156b, thereby selecting the subgroup of the cameras 103, 106 that generate the video images 111 that are incorporated into a respective one of the output video images 133. Since the video encoders 163 are grouped in pairs that receive identical inputs from four cameras as shown, and since each video encoder 163 within each pair provides its output to a separate one of the video processors 156a and 156b, the multiplexed inputs of the video processors 156a/156b can receive the same video images 111 generated by the various cameras 103, 106. In this regard, the video images 111 generated by any of the cameras 103, 106 can be applied to each of the video processors 156a, 156b. The video processors 156a/156b each generate the output video images 133 (Figure 1) that are applied to the monitors 113. In this regard, each video processor 156a, 156b is associated with a respective one of the monitors 113. Alternatively, the output of only one of the video processors 156a, 156b may be applied to multiple monitors 113 simultaneously using an appropriate intermediate circuit 164 to avoid overloading the outputs, and so on. In the generation of the various output video images 133, each of the video processors 156a/156b can perform various processing operations on the video images 111 received from the respective cameras of the cameras 103, 106. For example, each of the video processors 156a/156b may incorporate any number of video images 111 received from the selected cameras 103, 106 into a single output video image 133 applied to a respective one of the monitors 113. Likewise, each of the video processors 156a/156b includes a motion detection capability with respect to each of the video images 111 received from one of the selected cameras 103, 106.
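The encoder-to-corner pairing described above can be summarized, again purely as a hypothetical sketch with invented identifiers, as a static mapping table:

    /* Hypothetical static mapping of encoder pairs 163 to vehicle corners.
     * One encoder of each pair feeds video processor 156a, the other 156b. */
    enum corner { LFC, RFC, LRC, RRC };

    struct encoder_pair {
        enum corner corner;     /* corner of the vehicle 100              */
        const char *inputs[2];  /* camera labels multiplexed by the pair  */
    };

    static const struct encoder_pair encoder_map[] = {
        { LFC, { "LF", "LSF" } },  /* left front  + left-side front  */
        { RFC, { "RF", "RSF" } },  /* right front + right-side front */
        { LRC, { "LR", "LSR" } },  /* left rear   + left-side rear   */
        { RRC, { "RR", "RSR" } },  /* right rear  + right-side rear  */
    };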
Said motion detection can be performed, for example, by carrying out frame-to-frame comparisons to detect changes in the video images 111 over time, etcetera. Once movement is detected in a respective video image 111, the respective video processor 156a/156b can set a register to a predefined value which is then supplied to the control processor 153. The control processor 153 is then programmed, for example, to perform several tasks in reaction to the value in the register, for example to execute an alarm or take some other action, etcetera. Each of the video processors 156a/156b can perform a mirror image operation on any of the video images 111 received from one of the cameras 103, 106, thereby generating a mirror video image thereof. Said mirror image may be included in one of the output video images 133, as appropriate, for example, for viewing rearward directions on a respective monitor 113. Likewise, each of the video processors 156a/156b may effect a digital zoom function and a pan function in relation to one of the video images 111. For example, the digital zoom function may involve performing a 2X digital zoom or a higher magnification digital zoom. The pan function includes moving up, down, left, and right to make unseen portions of a zoomed video image 111 appear on a respective monitor 113. The zoom and pan functions are discussed in more detail below. In addition, each of the video processors 156a, 156b includes a memory in which various image templates are stored, such as icons, symbols, other images, or text, that can be overlaid on a respective output video image 133 displayed on a monitor 113 as directed by the control processor 153, and so on. Specific examples of images such as text that can be overlaid on a respective output video image 133 include, for example, information indicating from which camera a particular video image 111 shown within an output video image 133 originates. In addition, the control processor 153 includes inputs that facilitate an electrical connection of the video image selectors 116 directly to the control processor 153. Alternatively, the control processor 153 may be connected to a vehicle data bus 166 through a controller electronic communication unit (ECU) 168. In this regard, each of the video image selectors 116 may also be connected to the data bus 166 associated with the vehicle 100 and communicate with the control processor 153 through it. The vehicle data bus 166 can operate according to any of several vehicle data communication specifications, for example, SAE J1587, "Electronic Data Interchange Between Microcomputer Systems in Heavy-Duty Vehicle Applications" (February 2002); SAE J1939/71, "Vehicle Application Layer" (December 2003); or SAE J2497, "Power Line Carrier Communications for Commercial Vehicles" (October 2002), as promulgated by the Society of Automotive Engineers, the entire text of each of these standards being incorporated herein by reference. Since the control processor 153 may be directly connected to the vehicle data bus 166, it may receive information describing the general operational aspects of the vehicle 100 that is transmitted on the vehicle data bus 166. The control processor 153 may then be programmed to direct the video processors 156a/156b to overlay such information on one of the output video images 133.
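As an illustrative, non-authoritative software model of the frame-to-frame comparison attributed above to the video processors 156a/156b (in the actual system the comparison is performed inside the video processors themselves), motion might be flagged as follows, with the threshold an assumed tuning parameter:

    /* Sum absolute pixel differences between consecutive frames; report
     * motion when the assumed threshold is exceeded. */
    #include <stdint.h>
    #include <stddef.h>

    static int motion_detected(const uint8_t *prev, const uint8_t *cur,
                               size_t npixels, uint32_t threshold)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < npixels; i++)
            sum += (cur[i] > prev[i]) ? (uint32_t)(cur[i] - prev[i])
                                      : (uint32_t)(prev[i] - cur[i]);
        return sum > threshold;  /* nonzero -> set the register read by 153 */
    }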
Such information may include text or other images describing operational aspects of the vehicle 100 such as, for example, whether the vehicle 100 is moving, speed settings, engine information and diagnostics, other vehicle diagnostic information, and so on. In addition, the control processor 153 includes an alarm output that can be used to drive the audible alarms 119. Alternatively, there may be several audible alarms 119 connected to the control processor 153, beyond the two shown, that are used to indicate the various alarm conditions that can be detected with the video processing unit 109. Also, a single alarm can be activated in different ways to indicate different alarm conditions. For example, the audible alarms 119 may include a horn that can be driven to generate multiple different alarm sounds, and so on. Turning to Figure 3, a video image selector 116 in accordance with one embodiment of the present invention is shown. The video image selector 116 includes several buttons that perform several functions as will be described. The video image selector 116 is connected to the video processing unit 109 either by direct electrical connection or via the vehicle data bus 166 as described above. Assuming that the video image selector 116 is connected to the video processing unit 109 via the vehicle data bus 166, a controller electronic communication unit (ECU) 169 is used to connect the video image selector 116 to the data bus 166. In this regard, the controller ECU 169 receives signals from the video image selector 116 when the various buttons thereon are depressed, and the controller ECU 169 generates appropriate messages on the vehicle data bus 166 in accordance with the predefined protocol associated with the vehicle data bus as described above. Alternatively, when a video image selector 116 is directly connected to the video processing unit 109, electrical signals can be transmitted to the video processing unit 109 through the direct electrical connection as described above. The video image selector 116 includes several direction buttons 173 including, for example, a "left front" button LF, a "right front" button RF, a "left rear" button LR, and a "right rear" button RR. The direction buttons 173 allow a user to select a video image 111 from the left front, right front, left rear, or right rear camera 103, 106 (Figure 2) associated with such positions to be included as one of the output video images 133 on a respective monitor 113 associated with the video image selector 116. Likewise, the direction buttons 173 may be used for other purposes such as controlling the zoom and pan functions as they apply to a particular output video image 133 as will be described. In addition, the video image selector 116 includes a multi-view button 176 which directs the video processing unit 109 to generate an output video image 133 that includes 2, 3, 4, or more video images 111 from multiple cameras 103, 106 included in the subgroup 165 (Figure 2). For example, in one embodiment, video images 111 from four cameras 103, 106 are displayed in a single output video image 133 applied to the monitor 113. Said display is referred to herein as a "quad" view.
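Purely to illustrate the "quad" view just described, the following sketch composes four source frames into one output frame by two-to-one decimation; the 640x480 8-bit frame format is assumed, as no such format is specified in the patent:

    /* Copy every second pixel of each source frame into one quadrant of dst. */
    #include <stdint.h>

    #define W 640   /* assumed frame width  */
    #define H 480   /* assumed frame height */

    static void compose_quad(const uint8_t src[4][H][W], uint8_t dst[H][W])
    {
        for (int q = 0; q < 4; q++) {
            int ox = (q % 2) * (W / 2);   /* quadrant origin, x */
            int oy = (q / 2) * (H / 2);   /* quadrant origin, y */
            for (int y = 0; y < H / 2; y++)
                for (int x = 0; x < W / 2; x++)
                    dst[oy + y][ox + x] = src[q][y * 2][x * 2];
        }
    }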
In addition, the video image selector 116 includes a day/night button 179 which is used to control whether the subgroup 165 of video images 111 is generated by the visible light cameras 103 or the night vision cameras 106. In one embodiment, each of the output video images 133 generated by the video processing unit 109 is generated only by either visible light cameras 103 or night vision cameras 106. Likewise, the video image selector 116 includes a "forward-reverse/side-to-side" button 183. The forward-reverse/side-to-side button 183 is used to select as the subgroup 165 either the video images 111 generated by the cameras 103, 106 which face the longitudinal direction 126 (Figure 1) (that is, in a forward or reverse direction), or the video images 111 generated by the cameras 103, 106 which face the lateral direction 129 (Figure 1) (that is, in a lateral direction), relative to the vehicle 100. In addition, the forward-reverse/side-to-side button 183 can be used for other purposes that will be described later. In this regard, operators can usefully select between viewing areas at the front and rear of the vehicle 100, or on either side of the vehicle 100. When any of the buttons 173, 176, 179, 183 is pressed, the video image selector 116 provides a signal to the controller ECU 169, which in turn generates a message on the data bus 166 which is transmitted to and received by the control processor 153 (Figure 2) of the video processing unit 109. The control processor 153 then reacts accordingly. The messages generated on the data bus 166 by the controller ECU 169 include parameter identifiers that inform the control processor 153 of the video processor 156a/156b for which the message is intended. In this regard, each of the video image selectors 116 is associated with a respective one of the monitors 113 and, consequently, with a respective one of the video processors 156a/156b. Alternatively, the video image selector 116 may be directly connected to the video processing unit 109, and the video processing unit 109 may react to the signals received directly from the video image selector 116 which are generated when manipulating any of the buttons 173, 176, 179, 183. Turning now to Figure 4, a schematic block diagram is shown which provides an example of the control processor 153 in accordance with one embodiment of the present invention. In this regard, the control processor 153 is a processor circuit that includes a processor 193 and a memory 196, both of which are connected to a local interface 199. The local interface 199 may be, for example, a data bus with an accompanying control/address bus, as can be appreciated by those of ordinary skill in the art. An operating system 203 and a control system 206 are stored in the memory 196 and are executable by the processor 193. The control system 206 is executed by the processor 193 for the purpose of orchestrating the operation of the video processing unit 109 in response to various inputs from the video image selectors 116 (Figure 3) as will be described. In this regard, the control system 206 can facilitate communication with each of the encoders 163 (Figure 2) and the video processors 156a/156b (Figure 2). The memory 196 is defined herein as including both volatile and non-volatile memory and data storage components. Volatile components are those that do not retain data values when power is removed.
Non-volatile components are those that retain data upon loss of power. Thus, the memory 196 may comprise, for example, random access memory (RAM), read-only memory (ROM), hard disk drives, floppy disks accessed via an associated floppy disk drive, compact discs accessed via a compact disc drive, magnetic tapes accessed via an appropriate tape drive, and/or other memory components, or a combination of two or more of these memory components. In addition, the RAM may comprise, for example, static random access memory (SRAM), dynamic random access memory (DRAM), or magnetic random access memory (MRAM), and other such devices. The ROM may comprise, for example, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or other similar memory devices. In addition, the processor 193 may represent multiple processors and the memory 196 may represent multiple memories operating in parallel. In such a case, the local interface 199 may be an appropriate network that facilitates communication between any two of the various processors, between any processor and any of the memories, or between any two of the memories, etc. The processor 193 may be of electrical, optical, or molecular construction, or of some other construction, as can be appreciated by those of ordinary skill in the art. The operating system 203 is executed to control the allocation and use of hardware resources such as memory, processing time, and peripheral devices in the control processor 153. In this way, the operating system 203 serves as the foundation upon which applications such as the control system 206 depend, as is generally known to those of ordinary skill in the art. Turning to Figures 5A-5D, flow charts are shown which provide an example of the operation of the control system 206 in accordance with one embodiment of the present invention. Alternatively, the flow charts of Figures 5A-5D can be considered as illustrating steps of an example of a method implemented in the control processor 153 (Figure 2) to control the operation of the video processing unit 109 (Figure 2). The functionality of the control system 206 as illustrated by the flow charts of Figures 5A-5D can be implemented, for example, in an object-oriented design or in another programming architecture. Assuming that the functionality is implemented in an object-oriented design, each block represents functionality that can be implemented in one or several methods encapsulated in one or several objects. The control system 206 can be implemented using any of several programming languages such as C, C++, or other programming languages. Starting with step 223, the control system 206 initializes all registers and other aspects of the operation of the video processing unit 109. Then, in step 226, the control system 206 determines whether a quad command message or other multiple-video-image command message has been received from a respective video image selector 116 (Figure 3). In this regard, the quad message dictates that an output video image 133 (Figure 2) must be generated, for example, from the four video images 111 (Figure 2) that make up the subgroup 165 (Figure 2) from four respective cameras 103 or 106 (Figure 2). The quad message is generated by depressing or otherwise manipulating the multi-view button 176 (Figure 3).
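The message format on the vehicle data bus 166 is not specified beyond the use of parameter identifiers, but a hypothetical sketch of a selector message, and of the dispatch performed in steps 226 through 249 (described here and in the following paragraphs), might read:

    /* Invented message layout and dispatch; all names are illustrative. */
    #include <stdint.h>

    enum selector_button {
        BTN_LF, BTN_RF, BTN_LR, BTN_RR,  /* direction buttons 173        */
        BTN_MULTI_VIEW,                  /* multi-view button 176        */
        BTN_DAY_NIGHT,                   /* day/night button 179         */
        BTN_FWD_REV_SIDE                 /* fwd-reverse/side button 183  */
    };

    struct selector_msg {
        uint8_t              target_processor;  /* 0 -> 156a, 1 -> 156b */
        enum selector_button button;
    };

    /* One pass of the dispatch loop mirroring steps 226-249 of Figure 5A. */
    static void control_loop_step(const struct selector_msg *msg)
    {
        switch (msg->button) {
        case BTN_MULTI_VIEW:   /* step 226: quad message                     */
            /* display the quad view unless a pan is active (steps 229-236) */
            break;
        case BTN_LF: case BTN_RF: case BTN_LR: case BTN_RR:
            /* step 233: run process 239 (full view / zoom / pan)           */
            break;
        case BTN_DAY_NIGHT:    /* step 243: run process 246                  */
            break;
        case BTN_FWD_REV_SIDE: /* step 249: run process 253                  */
            break;
        }
    }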
Assuming that a quad message has been received from a respective one of the video image selectors 116 in step 226, the control system 206 advances to step 229 where it is determined whether a pan function is active in relation to the present output video image displayed on the respective monitor 113. While in a pan mode, the output video image 133 (Figure 2) includes only one of the video images 111 generated by a camera selected from the cameras 103, 106 in the subgroup 165. In this regard, the pan function is a processing function within each of the video processors 156a/156b. Assuming that a pan function within a respective one of the video processors 156a/156b is active, the control system 206 advances to step 235. Otherwise, the control system 206 advances to step 236 where the "quad" view is displayed on the specified monitor 113 by the video processing unit 109. In this regard, the control system 206 communicates with a respective one of the video processors 156a, 156b and instructs the video processor 156a, 156b to display an output video image 133 that incorporates the video images 111 from various ones of the cameras 103, 106 included in the subgroup 165. Then, the control system 206 advances to step 233 as shown. In step 233, the control system determines whether a direction button 173 (Figure 3) such as, for example, the left front button, right front button, left rear button, or right rear button has been manipulated, based on a message received from the respective video image selector 116. If so, the control system 206 proceeds to execute the process 239 which controls the full view, pan, and zoom functions as will be described. Otherwise, the control system 206 advances to step 243. In step 243, the control system 206 determines whether a day/night message has been received from the respective one of the video image selectors 116 to direct one of the video processors 156a, 156b to switch between applying the visible light cameras 103 or the night vision cameras 106 to the respective video processor 156a, 156b identified in the day/night message. If so, the control system 206 proceeds to execute the process 246 which controls the selection of the visible light cameras 103 or the night vision cameras 106 as the subgroup 165 of the cameras 103, 106. Otherwise, the control system 206 progresses to step 249. In step 249, the control system 206 determines whether a forward-reverse/side-to-side message has been received from a respective one of the video image selectors 116. If so, the control system 206 executes the process 253. Otherwise, the control system 206 returns to step 226. Referring next to Figure 5B, a flow chart of the process 239 is shown. Although the process 239 is described in relation to a left front (LF) camera 103, 106, the same logic applies to all of the cameras 103, 106. Starting with step 263, the process 239 determines whether the current output video image 133 incorporates a full view of one of the video images 111 generated by one of the cameras 103, 106 that is applied to the respective one of the monitors 113 (Figure 1). If the full view of the respective video image 111 is already incorporated as the output video image 133, then the process 239 proceeds to step 266. Otherwise, the process 239 proceeds to step 269.
In step 269, the process 239 instructs the respective video processor 156a, 156b identified in the respective message to generate the output video image 133 incorporating the full view of the respective video image 111 of the camera 103, 106 selected based on the direction button 173 pressed on the video image selector 116, as identified in the message received by the control processor 153. In this regard, the output video image 133 includes the video image 111 of the selected camera 103, 106 in a full view mode such that the entire monitor 113 displays the video image 111 from a respective one of the cameras 103, 106. Then, the process 239 ends as shown. Assuming that the process 239 has advanced to step 266, the full view of the video image 111 from the respective camera 103, 106 associated with the direction button 173 pressed on the video image selector 116 is already displayed on the respective monitor 113 associated with the respective video image selector 116. In this case, in step 266, the process 239 determines whether the zoom function in relation to the current full view displayed as the output video image 133 is activated. The zoom function effects a digital zoom in relation to the output video image 133 currently displayed on the respective monitor 113. If the zoom function is deactivated, then the process 239 proceeds to step 273 where the zoom function is activated in relation to the current output video image 133 displayed on the respective monitor 113. Then, the process 239 ends as shown. On the other hand, assuming that the zoom function is already active as determined in step 266, then in step 276 the process 239 determines whether a pan function relative to the current output video image 133 is active. In this regard, the pan function allows a user to pan around within the video image 111 from a respective one of the cameras 103, 106. If the pan function is active in step 276, then in step 279 the process 239 causes the current output video image 133 to pan in a selected direction based on the respective one of the direction buttons 173 (Figure 3) pressed on the video image selector 116. In this regard, the direction buttons 173 have several purposes, such as, for example, selecting a full view from a respective one of the cameras 103, 106 to be displayed as the output video image 133, activating a zoom function in relation to a currently displayed full view of a video image 111 within the output video image 133, or panning the output video image 133 in a selected direction. To pan the view in the various directions, in accordance with one embodiment, the direction buttons 173 control the pan function such that the left front LF and right front RF buttons 173 pan the view to the left and to the right, respectively. The left rear LR and right rear RR buttons 173 pan the view up and down, respectively. In addition, when in the pan mode, the multi-view button 176 may be depressed to pan to the center of the output video image 133. However, in step 276, if the pan function is disabled relative to the current output video image 133, then the process 239 proceeds to step 269 where the full view of the video image 111 from a respective camera 103, 106 is incorporated as the current output video image 133 to be displayed on the respective monitor 113.
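A non-authoritative model of the process 239 just described, with hypothetical state names, cycles through full view, zoom, and pan as follows:

    /* Sketch of process 239 (Figure 5B): a direction button press either
     * selects a full view, then enables zoom, then pans (steps 263-279). */
    #include <stdbool.h>

    struct view_state {
        int  camera;        /* which camera's full view is shown */
        bool zoom_active;   /* steps 266/273                     */
        bool pan_active;    /* step 276                          */
    };

    static void process_239(struct view_state *s, int requested_camera)
    {
        if (s->camera != requested_camera) {   /* step 263 -> 269          */
            s->camera      = requested_camera; /* show the full view       */
            s->zoom_active = false;            /* reset assumed            */
            s->pan_active  = false;
        } else if (!s->zoom_active) {          /* step 266 -> 273          */
            s->zoom_active = true;             /* enable the digital zoom  */
        } else if (s->pan_active) {            /* step 276 -> 279          */
            /* pan in the direction of the pressed button */
        } else {                               /* step 276 -> 269          */
            s->zoom_active = false;            /* back to the full view    */
        }
    }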
In this regard, pressing one of the direction buttons 173 may cause the display of a full view of one of the video images 111, the zooming of a currently displayed full view of a video image 111, or panning relative to a video image 111 displayed within a respective one of the output video images 133. The flow chart of Figure 5C describes, in general terms, the functions within the control system 206 that provide the switching between the use of the visible light cameras 103 (Figure 2) and the night vision cameras 106 (Figure 2) to generate the output video images 133 (Figure 2).
Specifically, the flow chart of Figure 5C describes how the control system 206 directs the various video encoders 163 to apply the video images 111 (Figure 2) generated by either the visible light cameras 103 or the night vision cameras 106 to the multiplexed inputs of a respective one of the video processors 156a/156b (Figure 2), according to the particular video image selector 116 manipulated. Beginning with step 303, the process 246 determines whether a pan function is active in relation to a particular full view of a video image 111 incorporated within an output video image 133 applied to a respective one of the monitors 113 by a respective one of the video processors 156a/156b. If so, the process 246 ends. In this regard, the control system 206 prevents the selection of the video images 111 of the visible light or night vision cameras 103, 106 as one of the subgroups 165 of video images 111 if a respective video processor 156a/156b is currently implementing a pan function relative to the generated output video image 133. Assuming that no pan function is active in step 303, the process 246 proceeds to step 306 where it is determined whether the video images 111 of the current subgroup 165 are generated by the night vision cameras 106. If so, the process 246 proceeds to step 309 where the video images 111 from the visible light cameras 103 are selected as the subgroup from which an output video image 133 is generated. The output video image 133 is generated in the same manner as previously described for the night vision cameras 106. Then, the process 246 ends as shown. On the other hand, if the video images 111 generated by the night vision cameras 106 are not currently selected as the subgroup of video images 111 applied to the multiplexed inputs of a respective video processor 156a, 156b, the process 246 advances to step 313, wherein the video images 111 of the respective night vision cameras 106 are applied to the multiplexed inputs of a respective one of the video processors 156a, 156b and a corresponding output video image 133 is generated. Then, the process 246 ends as shown. In this regard, it can be seen that pressing the day/night button 179 (Figure 3) toggles between the use of the visible light cameras 103 and the use of the night vision cameras 106 to generate the output video image 133 displayed on a respective one of the monitors 113. Turning now to Figure 5D, the process 253, executed in response to the receipt of the forward-reverse/side-to-side message generated by a manipulation of the forward-reverse/side-to-side button 183 (Figure 3), will now be discussed. It is understood that the discussion of the flow chart of Figure 5D is made with reference to a video image 111 coming from a left front (LF) camera 103, 106 incorporated within the output video image 133; the same applies in relation to the remaining cameras 103, 106. Starting with step 323, the process 253 determines whether the zoom function is activated in relation to a full view of a video image 111 generated by a left front (LF) / left-side front (LSF) camera 103, 106. If the zoom function is activated, then the process 253 proceeds to step 326. Otherwise, the process 253 proceeds to step 329 as shown.
In step 326, the process 253 determines whether a pan function is activated relative to the current output video image 133 applied to the respective one of the monitors 113. If so, the process 253 proceeds to step 333. Otherwise, the process 253 proceeds to step 336 as shown. In step 333, a zoom function is activated in relation to the current output video image 133 which includes the video image 111 generated by one of the left front LF or left-side front LSF cameras 103, 106. Then, the process 253 ends as shown. Assuming, however, that the pan function is not activated in step 326, then in step 336 the process 253 implements the pan function in relation to the current output video image 133 which incorporates the video image 111 generated by a respective left front LF or left-side front LSF camera 103, 106. Then, the process 253 ends as shown. Thus, the process 253 facilitates, for example, the activation and deactivation of the pan function in relation to a particular output video image 133 that incorporates the video image generated by a respective camera 103, 106 as described. If, however, the zoom function is not activated in step 323 relative to the current output video image 133, then the process 253 proceeds to step 329 where it is determined whether the video images 111 generated by the cameras 103, 106 facing a forward/reverse or longitudinal direction relative to the vehicle 100 (Figure 1) are currently selected as the subgroup 165 applied to the multiplexed inputs of a respective one of the video processors 156a, 156b, according to the respective video image selector 116, including the forward-reverse/side-to-side button 183 (Figure 3), that was manipulated to trigger the execution of the process 253. If the video images 111 generated by the cameras facing the longitudinal direction 126 are applied to the multiplexed inputs of the respective video processor 156a/156b as determined in step 329, then the process 253 proceeds to step 339. Otherwise, the process 253 advances to step 343. Assuming that the process 253 has advanced to step 339, the video images 111 generated by the cameras 103, 106 which face the lateral direction 129 are applied to the inputs of the respective video processor 156a/156b. Then the process 253 ends. Assuming that the process 253 has advanced to step 343, the process 253 manipulates the respective video encoders 163 to apply the video images 111 from the cameras 103, 106 facing the longitudinal direction 126 to the multiplexed inputs of the respective video processor 156a/156b. The corresponding output video image 133 therefore incorporates the video images 111 from the cameras 103, 106 facing the longitudinal direction 126. In this regard, a full view from only one of the cameras 103, 106, or a quad-type view that incorporates the video images 111 of multiple cameras 103, 106 oriented in the longitudinal direction 126, is supplied to the monitor 113. Then, the process 253 ends as shown. Furthermore, while Figures 5A-5D discuss the control of the video processing unit 109 using the specific buttons on the video image selector 116, it will be understood that the particular control configuration and logic discussed merely offer an example, and that other input and logic components can be used for the same purpose. Although the control system 206 (Figures 5A-5D) is described as being incorporated in software or code executed by general purpose hardware as discussed above, the control system 206 may alternatively be incorporated in dedicated hardware or in a combination of general purpose software/hardware and dedicated hardware. If incorporated in dedicated hardware, the control system 206 can be implemented as a state machine or circuit employing any of several technologies or a combination of several technologies. These technologies may include, but are not limited to, discrete logic circuits having logic gates to implement various logic functions upon application of one or more data signals, application-specific integrated circuits having appropriate logic gates, programmable gate arrays (PGA), field programmable gate arrays (FPGA), or other components, and so on. Such technologies are generally well known to those skilled in the art and, therefore, will not be described in detail here. The block diagrams and/or flow charts of Figures 5A-5D show the architecture, functionality, and operation of an implementation of the control system 206. If incorporated in software, each block may represent a module, segment, or portion of code comprising program instructions to implement the specified logical function(s). The program instructions can be incorporated in the form of source code comprising human-readable statements written in a programming language, or machine code comprising numerical instructions recognizable by a suitable execution system such as a processor in a computer system or other system. The machine code can be converted from the source code, etcetera. If incorporated in hardware, each block can represent a circuit or a number of interconnected circuits to implement the specified logical function(s).
Although the flow charts of Figures 5A-5D show a specific order of execution, it is understood that the order of execution may differ from that shown. For example, the order of execution of two or more blocks may be switched relative to the order shown. Thus, two or more blocks shown in succession in Figures 5A-5D may be executed concurrently or with partial concurrency. In addition, any number of counters, state variables, warning semaphores, or messages can be added to the logical flow described above in order to improve utility, accounting, or performance measurement, or to provide aids for detecting errors, and so on. It is understood that all such variations are within the scope of the present invention. Likewise, where the control system 206 comprises software or code, it can be incorporated in any computer-readable medium for use by, or in connection with, an instruction execution system such as a processor in a computer system or other system. In this sense, the logic may comprise, for example, statements including instructions and declarations that can be obtained from the computer-readable medium and executed by the instruction execution system. In the context of the present invention, a "computer-readable medium" can be any medium that can contain, store, or maintain the control system 206 for use by, or in connection with, the instruction execution system. The computer-readable medium can comprise any of several physical media such as electronic, magnetic, optical, electromagnetic, infrared, or semiconductor media. More specific examples of a suitable computer-readable medium would include, but are not limited to, magnetic tapes, magnetic floppy diskettes, magnetic hard drives, or compact discs. Also, the computer-readable medium may be a random access memory (RAM) including, for example, static random access memory (SRAM), dynamic random access memory (DRAM), or magnetic random access memory (MRAM). In addition, the computer-readable medium can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or another type of memory device. Even though the invention has been shown and described with respect to certain embodiments, it is clear that equivalents and modifications will occur to others skilled in the art upon reading and understanding the specification. The present invention includes all such equivalents and modifications and is limited only by the scope of the claims.
Claims (35)
CLAIMS
1. A vehicle video system, comprising: several cameras mounted on a vehicle, each of the cameras generating a video image, the cameras including several visible light cameras and several night vision cameras; a video processing unit, each of the cameras and each of the monitors being electrically connected to the video processing unit, the video processing unit being configured to select at least two subgroups of the cameras; the video processing unit generating a first output video image incorporating at least one of the video images generated by at least one of the cameras in a first subgroup; and the video processing unit generating a second output video image incorporating at least one of the video images generated by at least one of the cameras in a second subgroup.
2. The vehicle video system according to claim 1, further comprising several monitors mounted on the vehicle, each of the monitors being electrically connected to the video processing unit, wherein the video processing unit independently displays the first output video image on a first monitor and the second output video image on a second monitor.
3. The vehicle video system according to claim 1, further comprising several video image selectors electrically connected to the video processing unit, wherein a first video image selector is configured to select the at least one of the video images incorporated in the first output video image, and a second video image selector is configured to select the at least one of the video images incorporated in the second output video image.
4. The vehicle video system according to claim 1, further comprising several video image selectors electrically connected to the video processing unit, each of the video image selectors being configured to select one of the subgroups of the cameras.
5. The vehicle video system according to claim 4, wherein each of the video image selectors is electrically connected to the video processing unit through a vehicle data bus.
6. The vehicle video system according to claim 1, wherein at least one of the subgroups of the cameras further comprises only a number of the visible light cameras.
7. The vehicle video system according to claim 1, wherein at least one of the subgroups of the cameras further comprises only a number of the night vision cameras.
8. The vehicle video system according to claim 1, wherein at least one of the subgroups of the cameras further comprises only a number of the cameras having a field of view oriented in a longitudinal direction relative to the vehicle.
9. The vehicle video system according to claim 1, wherein at least one of the subgroups of the cameras further comprises only a number of the cameras having a field of view oriented in a lateral direction relative to the vehicle.
10. The vehicle video system according to claim 1, wherein the video processing unit generates the first output video image incorporating several of the video images generated by several corresponding cameras of the cameras in the first of the subgroups, and the video processing unit generates the second output video image incorporating several of the video images generated by a corresponding plurality of the cameras in the second of the subgroups.
11. The vehicle video system according to claim 1, wherein the video processing unit is configured to generate a mirror video image from at least one of the video images.
12. The vehicle video system according to claim 1, wherein the video processing unit is further configured to perform a zoom function with respect to at least one of the video images.

13. The vehicle video system according to claim 1, wherein the video processing unit is further configured to perform a panning function relative to at least one of the video images.

14. The vehicle video system according to claim 1, wherein the video processing unit is further configured to overlay an image on the first and second output video images.

15. The vehicle video system according to claim 1, wherein the video processing unit is further configured to overlay a quantity of text on the first and second output video images.

16. The vehicle video system according to claim 1, wherein the video processing unit is further configured to detect movement within the field of view of each of the cameras within the at least two subgroups.

17. The vehicle video system according to claim 16, wherein the video processing unit is further configured to generate an alarm when movement is detected in the field of view of any of the cameras within the at least two subgroups.

18. The vehicle video system according to claim 16, wherein the video processing unit is further configured to generate an alarm when movement is detected in a field of view of a first of the cameras within the at least two subgroups, the first of the cameras generating a video image not incorporated in either of the first and second output video images.

19. A method for controlling and displaying video in a vehicle, wherein several cameras and several monitors are mounted on the vehicle, the cameras including several visible light cameras and several night vision cameras, each of the cameras generating a video image, the method comprising the steps of: selecting at least two subgroups of the cameras; generating a first output video image incorporating at least one of the video images generated by at least one of the cameras in a first of the subgroups; generating a second output video image incorporating at least one of the video images generated by at least one of the cameras in a second of the subgroups; and independently displaying the first output video image on a first monitor and the second output video image on a second monitor.

20. The method according to claim 19, further comprising the steps of: selecting the at least one of the video images incorporated in the first output video image; and selecting the at least one of the video images incorporated in the second output video image.

21. The method according to claim 19, wherein the step of selecting at least two subgroups of the cameras further comprises the step of selecting one of the subgroups to include only a number of the visible light cameras.

22. The method according to claim 19, wherein the step of selecting at least two subgroups of the cameras further comprises the step of selecting one of the subgroups to include only a number of the night vision cameras.

23. The method according to claim 19, wherein the step of selecting at least two subgroups of the cameras further comprises the step of selecting one of the subgroups to include only a number of the cameras having a field of view oriented in a longitudinal direction relative to the vehicle.

24. The method according to claim 19, wherein the step of selecting at least two subgroups of the cameras further comprises the step of selecting one of the subgroups to include only a number of the cameras having a field of view oriented in a lateral direction relative to the vehicle.
25. The method according to claim 19, further comprising the steps of: generating the first output video image incorporating several of the video images generated by a corresponding number of the cameras in the first of the subgroups; and generating the second output video image incorporating several of the video images generated by a corresponding number of the cameras in the second of the subgroups.

26. The method according to claim 19, further comprising the step of generating a mirror video image from at least one of the video images.

27. The method according to claim 19, further comprising the step of performing a zoom function with respect to at least one of the video images.

28. The method according to claim 19, further comprising the step of performing a panning function relative to at least one of the video images.

29. The method according to claim 19, further comprising the step of overlaying an image on the first and second output video images.

30. The method according to claim 19, further comprising the step of overlaying a quantity of text on the first and second output video images.

31. The method according to claim 19, further comprising the step of detecting movement within the field of view of each of the cameras within the at least two subgroups.

32. The method according to claim 31, further comprising the step of generating an alarm when the movement is detected in the field of view of any of the cameras in the at least two subgroups.

33. The method according to claim 31, further comprising the step of generating an alarm when movement is detected in a field of view of a first of the cameras within the at least two subgroups, the first of the cameras generating a video image not incorporated in either of the first and second output video images.

34. A vehicle video system, comprising: several cameras mounted on a vehicle, each of the cameras generating a video image, the cameras including several visible light cameras and several night vision cameras; a video processing unit, each of the cameras and each of the monitors being electrically connected to the video processing unit; means within the video processing unit for selecting at least two subgroups of the cameras; means within the video processing unit for generating a first output video image incorporating at least one of the video images generated by at least one of the cameras in a first of the subgroups; and means within the video processing unit for generating a second output video image incorporating at least one of the video images generated by at least one of the cameras in a second of the subgroups.

35. The vehicle video system according to claim 34, further comprising: several monitors mounted on the vehicle, each of the monitors being electrically connected to the video processing unit; means for independently displaying the first output video image on a first monitor; and means for independently displaying the second output video image on a second monitor.

SUMMARY OF THE INVENTION

Several systems and methods for processing and displaying video in a vehicle are disclosed. In one embodiment, a vehicle video system (101) is provided comprising several cameras (103, 106) mounted on a vehicle (100), each of the cameras generating a video image, the cameras including several visible light cameras (103) and several night vision cameras (106). The vehicle video system (101) also includes a video processing unit (109), wherein each of the cameras and each of the monitors (113) is electrically connected to the video processing unit (109).
The video processing unit (109) is configured to select at least two subgroups of the cameras from which output video images are obtained for display on the monitors (113).
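Claims 16-18 and 31-33 recite motion detection and alarm generation within the selected camera subgroups but leave the algorithm unspecified. The sketch below illustrates one conventional way such a check could be realized (naive frame differencing over grayscale frames); the threshold value and all identifiers are assumptions for illustration, not anything disclosed by the patent.

```python
# Illustrative frame-differencing motion check; the patent claims motion
# detection within camera subgroups but does not specify an algorithm.
from typing import Callable, Dict, Iterable

import numpy as np

MOTION_THRESHOLD = 25.0  # hypothetical tuning value


def motion_detected(prev_frame: np.ndarray, curr_frame: np.ndarray) -> bool:
    # Flag motion when the mean absolute pixel difference exceeds a threshold.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(diff.mean()) > MOTION_THRESHOLD


def poll_cameras(
    frames: Dict[str, np.ndarray],       # current frame per camera id
    prev_frames: Dict[str, np.ndarray],  # previous frame per camera id
    monitored: Iterable[str],            # camera ids in the selected subgroups
    raise_alarm: Callable[[str], None],
) -> None:
    # Per claims 17-18, an alarm may fire for any monitored camera, even one
    # whose image is not incorporated in either output video image.
    for cam_id in monitored:
        prev, curr = prev_frames.get(cam_id), frames.get(cam_id)
        if prev is not None and curr is not None and motion_detected(prev, curr):
            raise_alarm(cam_id)
        if curr is not None:
            prev_frames[cam_id] = curr
```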
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US10787786 | 2004-02-26 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| MXPA06007922A true MXPA06007922A (en) | 2006-12-13 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20050190261A1 (en) | | Vehicle video processing system |
| JP5064601B2 (en) | | In-vehicle video display |
| JP2004151513A (en) | | Display unit and method |
| CN103577097B (en) | | Terminal and the method for sharing person's handwriting in the terminal |
| KR20180084042A (en) | | User interface for in-vehicle systems |
| CN106802794A (en) | | Method for switching theme, device, vehicle and system |
| US20130250097A1 (en) | | Method for displaying background screen in navigation device |
| CN112606764A (en) | | Multi-camera vision system for work vehicle |
| CN114371898B (en) | | Information display method, equipment, device and storage medium |
| US20150160840A1 (en) | | Display device and method of controlling the same |
| MXPA06007922A (en) | | Vehicle video processing system |
| MXPA04010636A (en) | | Adaptation of vision systems for commerical vehicles. |
| CN112106017B (en) | | Vehicle interaction method, device, system and readable storage medium |
| CN116095465B (en) | | Video recording method, device and storage medium |
| JP2007515728A (en) | | Control system for vehicle |
| JP2004276731A (en) | | Vehicle-mounted image display device |
| JP2020180466A (en) | | Work machine periphery monitoring system, and work machine periphery monitoring program |
| CN116112781B (en) | | Video recording method, device and storage medium |
| CN117014543B (en) | | Image display method and related device |
| CN104363435A (en) | | Tracking state indicating method and tracking state displaying device |
| CN114872628B (en) | | Method, device, computer equipment and medium for controlling streaming media rearview mirror |
| CN116095460B (en) | | Video recording method, device and storage medium |
| KR101163933B1 (en) | | Operation method for car multimedia display system |
| CN116132790B (en) | | Video recording method and related device |
| CN115484392B (en) | | Video shooting method and electronic equipment |