
CN121179977A - Control method, equipment and storage medium of central control screen - Google Patents

Control method, equipment and storage medium of central control screen

Info

Publication number
CN121179977A
Authority
CN
China
Prior art keywords
control
sensor
target
pixel area
tail
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202511715969.1A
Other languages
Chinese (zh)
Inventor
彭俊清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Teyes High And New Technology Co ltd
Original Assignee
Shenzhen Teyes High And New Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Shenzhen Teyes High And New Technology Co ltd
Priority to CN202511715969.1A
Publication of CN121179977A
Legal status: Pending


Landscapes

  • Navigation (AREA)

Abstract


This application discloses a control method, device, and storage medium for a central control screen, in the field of electronic digital data processing technology. The control method for the central control screen includes: determining a first pixel area of a first display control in response to an addition command for a rear-end image control, the first display control including a navigation control; rendering and displaying candidate pixel areas of the rear-end image control on the central control screen according to the first pixel area and the aspect ratio corresponding to the addition command; adjusting the acquisition angle of a rear-end vision sensor according to the aspect ratio in response to a trigger operation targeting a target candidate pixel area; and acquiring the video stream captured by the adjusted rear-end vision sensor and rendering and displaying the rear-end image control in the target candidate pixel area. This application achieves the technical effect of expanding the application scenarios of rear-end images.

Description

Control method, equipment and storage medium of central control screen
Technical Field
The present application relates to the field of electronic digital data processing technologies, and in particular, to a control method, device, and storage medium for a central control screen.
Background
In existing in-vehicle infotainment systems, the vehicle tail image is displayed only when the driver shifts into reverse gear: the central control screen then shows the vehicle tail image, or a top view of the vehicle tail, to assist the driver in reversing. In this scheme, however, the vehicle tail vision sensor serves only the reversing scenario and its viewing angle is narrow, so the display control on the central control screen must match the acquisition range of the sensor. As a result, the display control of the vehicle tail image must be close to square and rendered full-screen, which limits the usage scenarios of the vehicle tail vision sensor.
Disclosure of Invention
The main purpose of the present application is to provide a control method, device, and storage medium for a central control screen, aiming to solve the technical problem that the usage scenarios of the vehicle tail vision sensor are limited because the display control of the vehicle tail image must be close to square and rendered full-screen.
In order to achieve the above object, the present application provides a control method of a central control screen, the method including:
determining a first pixel area of a first display control in response to an adding instruction of the tail image control, wherein the first display control comprises a navigation control;
rendering and displaying candidate pixel areas of the tail image control on the central control screen according to the first pixel area and the aspect ratio corresponding to the adding instruction;
adjusting the acquisition viewing angle of the vehicle tail vision sensor according to the aspect ratio in response to a triggering operation for a target candidate pixel area;
and acquiring the video stream captured by the adjusted vehicle tail vision sensor, and rendering and displaying the tail image control in the target candidate pixel area.
In an embodiment, before the step of determining a first pixel area of a first display control in response to an adding instruction of the tail image control, the first display control comprising a navigation control, the method includes:
acquiring layout information of the central control screen in response to a control adding instruction;
determining candidate aspect ratios according to the layout information and the pixel size of the central control screen;
and generating the adding instruction in response to a click operation on a target candidate aspect ratio.
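As a rough illustration of how candidate aspect ratios might be derived from the layout information and the screen pixel size, the following Python sketch keeps only the standard ratios that still fit beside the occupied layout at a minimum usable height. All names, ratios, and thresholds are hypothetical, not taken from the patent:

```python
def candidate_aspect_ratios(screen_w, screen_h, occupied_w):
    # Free width left beside existing controls determines which standard
    # ratios can still be shown at a minimum usable control height.
    free_w = screen_w - occupied_w
    standards = {"4:3": 4 / 3, "16:9": 16 / 9, "21:9": 21 / 9}
    min_h = 300  # hypothetical minimum usable control height in pixels
    return [name for name, r in standards.items()
            if min(free_w / r, screen_h) >= min_h]

print(candidate_aspect_ratios(1920, 1080, 800))   # wide free area: all fit
print(candidate_aspect_ratios(1920, 1080, 1500))  # narrow free area
```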
In one embodiment, after the step of determining the candidate aspect ratio according to the layout information and the pixel size of the center control screen, the method includes:
displaying a sensor selection control, wherein the sensor selection control provides a vehicle tail vision sensor, a first rear-side sensor, and a second rear-side sensor;
determining a sensor combination in response to a triggering operation on a target sensor in the sensor selection control;
and updating the candidate aspect ratios according to the acquisition viewing angles of the sensor combination.
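A minimal sketch of updating the candidate aspect ratios from the combined acquisition viewing angle of a sensor combination; the linear ratio-to-field-of-view mapping below is an assumption for illustration only:

```python
def update_candidate_ratios(ratios, combined_fov_deg, fov_per_ratio=45.0):
    # Keep only the aspect ratios whose (assumed) required horizontal field
    # of view can be covered by the selected sensor combination.
    return [r for r in ratios if r * fov_per_ratio <= combined_fov_deg]

# Tail sensor alone (90 degrees, illustrative) cannot cover an ultra-wide ratio.
print(update_candidate_ratios([4 / 3, 16 / 9, 21 / 9], 90.0))
```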
In an embodiment, the step of determining a first pixel area of a first display control, the first display control comprising a navigation control, includes:
if the first display control comprises only the navigation control, acquiring the pixel coordinate range of the navigation control in the current layout of the central control screen, and determining the pixel coordinate range as the first pixel area;
if the first display control comprises the navigation control and at least one other application control, identifying the pixel areas occupied by the navigation control and each other application control, calculating the total pixel coverage according to the layout relations of the controls, and determining the total pixel coverage as the first pixel area.
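In the second case, the total pixel coverage could be computed as the union bounding box of the controls' occupied areas; a minimal sketch with illustrative coordinates:

```python
def total_pixel_coverage(controls):
    """Each control is (left, top, right, bottom) in screen pixels;
    returns the union bounding box covering all of them."""
    left = min(c[0] for c in controls)
    top = min(c[1] for c in controls)
    right = max(c[2] for c in controls)
    bottom = max(c[3] for c in controls)
    return (left, top, right, bottom)

navigation = (0, 0, 800, 600)       # hypothetical navigation control area
music_widget = (800, 0, 1280, 300)  # hypothetical other application control
print(total_pixel_coverage([navigation, music_widget]))  # (0, 0, 1280, 600)
```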
In an embodiment, the step of rendering and displaying candidate pixel areas of the tail image control on the central control screen according to the first pixel area and the aspect ratio corresponding to the adding instruction includes:
identifying the application type of each application control in the first display control, wherein the application types comprise core applications and non-core applications, the navigation control belongs to the core applications, and the other application controls are non-core applications;
applying a minimum-necessary scaling strategy to controls corresponding to core applications and an adaptive scaling strategy to controls corresponding to non-core applications, wherein the minimum-necessary scaling strategy is the smallest scaling that keeps the key information of the control fully displayed;
scaling the first pixel area according to the minimum-necessary scaling strategy and the adaptive scaling strategy to obtain the remaining layout space after scaling;
and planning, based on the size of the remaining layout space and the aspect ratio corresponding to the adding instruction, at least one placement area for the tail image control that satisfies the aspect ratio, determining the placement area as a candidate pixel area, and rendering and displaying it.
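One hedged way to realize the two scaling strategies: core controls never shrink below a per-control minimum readable scale, while non-core controls follow the full adaptive shrink factor. All control names and values are illustrative:

```python
def scale_controls(controls, shrink):
    """controls: name -> (width, height, is_core, min_scale).
    Core controls are clamped to their minimum-necessary scale;
    non-core controls adopt the adaptive shrink factor directly."""
    out = {}
    for name, (w, h, is_core, min_scale) in controls.items():
        s = max(shrink, min_scale) if is_core else shrink
        out[name] = (round(w * s), round(h * s))
    return out

layout = {
    "nav": (800, 600, True, 0.8),    # core: may shrink to 80% at most
    "music": (480, 300, False, 0.5), # non-core: follows the shrink factor
}
print(scale_controls(layout, 0.6))
```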
In an embodiment, the step of adjusting the acquisition viewing angle of the tail vision sensor according to the aspect ratio in response to a triggering operation for a target candidate pixel area includes:
acquiring the actual display aspect ratio of the target candidate pixel area in response to a triggering operation for the target candidate pixel area;
determining the adjustable acquisition viewing angle range based on the hardware parameters of the vehicle tail vision sensor, wherein the hardware parameters comprise a maximum horizontal viewing angle, a minimum horizontal viewing angle, and a viewing angle adjustment step;
calculating a target horizontal viewing angle that satisfies the actual display aspect ratio according to a picture proportion mapping between the actual display aspect ratio and the vehicle tail vision sensor, the mapping specifying that each increase of the actual display aspect ratio by a preset proportion value increases the target horizontal viewing angle by a preset angle value;
judging whether the target horizontal viewing angle falls within the adjustable acquisition viewing angle range;
if it falls within the range, sending a viewing angle adjustment instruction to the vehicle tail vision sensor to adjust the acquisition viewing angle to the target horizontal viewing angle;
and if it exceeds the range, adjusting the acquisition viewing angle of the vehicle tail vision sensor to the maximum horizontal viewing angle or the minimum horizontal viewing angle.
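The picture proportion mapping and range check might be sketched as follows. The base ratio, step values, and hardware limits are invented for illustration, and snapping to the hardware adjustment step is omitted for brevity:

```python
def target_horizontal_angle(display_ratio, base_ratio=4 / 3, base_angle=90.0,
                            ratio_step=0.1, angle_step=3.0,
                            min_angle=60.0, max_angle=120.0):
    # Linear mapping: each ratio_step increase over the base ratio adds
    # angle_step degrees to the target horizontal viewing angle.
    steps = (display_ratio - base_ratio) / ratio_step
    angle = base_angle + steps * angle_step
    # Out-of-range targets fall back to the hardware max/min, as above.
    return max(min_angle, min(max_angle, angle))

print(target_horizontal_angle(4 / 3))   # base ratio -> base angle
print(target_horizontal_angle(10.0))    # far too wide -> clamped to max
```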
In another embodiment, the step of adjusting the acquisition viewing angle of the tail vision sensor according to the aspect ratio in response to a triggering operation for a target candidate pixel area includes:
acquiring the pixel range of the target candidate pixel area and the corresponding target aspect ratio in response to a triggering operation for the target candidate pixel area;
acquiring the initial acquisition viewing angle, mounting position parameters, and hardware distortion coefficient of each of the selected vehicle tail vision sensor, first rear-side sensor, and second rear-side sensor;
determining the physical field of view to be covered by the stitched picture according to the pixel range and the target aspect ratio;
calculating, based on the physical field of view, the field-of-view partition each sensor needs to cover;
calculating the target horizontal viewing angles of the vehicle tail vision sensor, the first rear-side sensor, and the second rear-side sensor, based on the field-of-view partition and target aspect ratio of each sensor combined with its mounting position parameters;
checking whether each target horizontal viewing angle lies within the hardware adjustment range of the corresponding sensor;
if all lie within range, sending viewing angle adjustment instructions to the three sensors to synchronously adjust them to the corresponding target horizontal viewing angles, and calibrating the stitching parameters based on feature points in the overlapping areas;
if the target horizontal viewing angle of a sensor exceeds its hardware range, taking conformance of the core-field sensor's viewing angle to its target value as the optimization objective, setting the out-of-range rear-side sensor to its nearest hardware viewing angle limit, and compensating the distortion in the stitching overlap area algorithmically.
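The per-sensor range check with fallback clamping could look like this sketch (sensor names and ranges are hypothetical); sensors that had to be clamped are flagged for stitching distortion compensation:

```python
def clamp_to_hardware(targets, hw_ranges):
    """targets: {sensor: desired angle}; hw_ranges: {sensor: (min, max)}.
    Returns the applied angles plus the sensors that were clamped and
    therefore need extra distortion compensation in the stitch overlap."""
    applied, needs_compensation = {}, []
    for name, angle in targets.items():
        lo, hi = hw_ranges[name]
        clamped = max(lo, min(hi, angle))
        applied[name] = clamped
        if clamped != angle:
            needs_compensation.append(name)
    return applied, needs_compensation

ranges = {"tail": (60, 120), "left_rear": (60, 120)}
print(clamp_to_hardware({"tail": 110, "left_rear": 130}, ranges))
```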
In an embodiment, the step of acquiring the video stream captured by the adjusted tail vision sensor and rendering and displaying the tail image control in the target candidate pixel area includes:
synchronously acquiring the original video streams captured by the adjusted vehicle tail vision sensor, first rear-side sensor, and second rear-side sensor, and performing frame synchronization based on the timestamps of the sensors;
matching feature points in the overlapping areas of the three video streams, based on the adjusted acquisition viewing angles of the sensors and the preset overlapping areas, to generate a panoramic video stream covering the vehicle tail and both rear sides;
acquiring the pixel resolution, boundary coordinates, and target aspect ratio of the target candidate pixel area, and calculating the adaptation scale between the panoramic video stream and the target area, wherein if the original proportion of the panoramic video stream matches the target aspect ratio, the stream is scaled proportionally by resolution;
scaling the stitched panoramic video stream according to the adaptation scale to obtain adapted video frames, and determining the rendering position according to the boundary coordinates of the target candidate pixel area;
using the adapted video frames as the bottom-layer picture of the tail image control and performing layer fusion with the navigation control, so that the display layer of the tail image control does not occlude the key interaction elements of the navigation control;
and rendering and displaying the tail image control in the target candidate pixel area.
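A proportional, non-stretching fit of the panoramic frame into the target candidate pixel area might be computed as below (a sketch, not the patent's algorithm); when the ratios already match, this reduces to plain equal-ratio scaling:

```python
def fit_frame(src_w, src_h, dst_w, dst_h):
    """Scale a src_w x src_h frame to fit inside a dst_w x dst_h target
    area without changing its proportion, centering the result.
    Returns (x, y, out_w, out_h) relative to the target area."""
    scale = min(dst_w / src_w, dst_h / src_h)
    out_w, out_h = round(src_w * scale), round(src_h * scale)
    x = (dst_w - out_w) // 2
    y = (dst_h - out_h) // 2
    return x, y, out_w, out_h

print(fit_frame(1920, 1080, 960, 540))  # ratios match: plain equal-ratio scale
print(fit_frame(1600, 1200, 960, 540))  # ratios differ: centered, no stretch
```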
In addition, to achieve the above object, the present application further provides a control device of a central control screen, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program is configured to implement the steps of the control method of the central control screen described above.
In addition, to achieve the above object, the present application further provides a storage medium, which is a computer-readable storage medium storing a program that, when executed by a processor, implements the steps of the control method of the central control screen described above.
The present application provides a control method of a central control screen: determining a first pixel area of a first display control in response to an adding instruction of the vehicle tail image control; rendering and displaying candidate pixel areas of the vehicle tail image control on the central control screen according to the first pixel area and the aspect ratio corresponding to the adding instruction; adjusting the acquisition viewing angle of the vehicle tail vision sensor according to the aspect ratio in response to a triggering operation for a target candidate pixel area; and acquiring the video stream captured by the adjusted vehicle tail vision sensor and rendering and displaying the vehicle tail image control in the target candidate pixel area. By adjusting the acquisition viewing angle of the vehicle tail vision sensor according to the set aspect ratio and the existing display controls of the central control screen, the acquisition viewing angle matches the UI pixel size of the vehicle tail image control in equal proportion, so that the vehicle tail image control can remain on the central control screen as a long-term fixed control without affecting the overall UI visual experience, thereby expanding the usage scenarios of the vehicle tail image.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a first embodiment of the control method of the central control screen of the present application;
FIG. 2 is a schematic flow chart of a second embodiment of the control method of the central control screen of the present application;
FIG. 3 is a schematic flow chart of a third embodiment of the control method of the central control screen of the present application;
FIG. 4 is a schematic diagram of an embodiment of the present application before the vehicle tail image control is added;
FIG. 5 is a schematic diagram of an embodiment of the present application after the vehicle tail image control is added;
FIG. 6 is a schematic structural diagram of a control device of the central control screen in an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the technical solution of the present application and are not intended to limit the present application.
For a better understanding of the technical solution of the present application, the following detailed description will be given with reference to the drawings and the specific embodiments.
At present, the vehicle tail vision sensor in the related art serves only the reversing scenario and its viewing angle is narrow, so the display control UI of the central control screen must match the acquisition range of the sensor; the display control of the vehicle tail image must therefore be close to square and rendered full-screen, which limits the usage scenarios of the vehicle tail vision sensor.
The main solution of the present application is: determining a first pixel area of a first display control in response to an adding instruction of the vehicle tail image control, the first display control comprising a navigation control; rendering and displaying candidate pixel areas of the vehicle tail image control on the central control screen according to the first pixel area and the aspect ratio corresponding to the adding instruction; adjusting the acquisition viewing angle of the vehicle tail vision sensor according to the aspect ratio in response to a triggering operation for a target candidate pixel area; and acquiring the video stream captured by the adjusted vehicle tail vision sensor and rendering and displaying the vehicle tail image control in the target candidate pixel area. By adjusting the acquisition viewing angle of the vehicle tail vision sensor according to the set aspect ratio and the existing display controls of the central control screen, the acquisition viewing angle matches the UI pixel size of the vehicle tail image control in equal proportion, so that the vehicle tail image control can remain on the central control screen as a long-term fixed control without affecting the overall UI visual experience, thereby expanding the usage scenarios of the vehicle tail image.
It should be noted that the execution body of this embodiment may be a control device of the central control screen, or any computing device with data processing, network communication, and program execution capabilities capable of implementing the above functions, such as a tablet computer, personal computer, or mobile phone; this embodiment is not limited in this respect. This embodiment and the following embodiments are described with the control device of the central control screen as the execution body.
Based on this, a first embodiment of the present application provides a control method of a central control screen. Referring to FIG. 1, which is a schematic flow chart of the first embodiment of the control method of the central control screen of the present application, the method includes steps S10 to S40:
Step S10: determining a first pixel area of a first display control in response to an adding instruction of the vehicle tail image control, the first display control comprising a navigation control.
The vehicle tail image control is used to display a vehicle tail image, which may be a real-time vehicle tail picture or a preset image. The adding instruction is an operation instruction, issued by a user or the system, requesting that a new control be added to the interface; in this embodiment its target is the addition of a tail image control, which requires determining the first pixel area of the first display control. The first display control is a specific one of the display controls in the interface and carries content related to a specific function; in this embodiment it contains the navigation control, and its pixel area must be determined precisely to match the layout of the tail image control to be added. The first pixel area is the pixel coordinate range the control occupies in the interface, defining its display position and size; determining the exact pixel boundary of the first display control on the screen avoids display conflicts with the newly added tail image control. The navigation control provides navigation functions, such as direction indicators and path display components; it is the core component of the first display control, and determining the first pixel area must ensure that its display and functions are not affected by the addition of the tail image control.
In this embodiment, after receiving the operation instruction for adding the tail image control, the system accurately locates the pixel area occupied by the first display control on the interface, where the first display control includes a navigation control implementing the navigation function. Determining the first pixel area of the first display control facilitates subsequent processing based on it.
In a first possible implementation, step S10 may include: receiving the adding instruction of the tail image control and analyzing the interface layout priority carried in the instruction (for example, that the navigation control must not be occluded); locating the first display control containing the navigation control and retrieving its preset initial pixel area parameters; checking whether the initial pixel area conflicts with the preset display area of the tail image control; if not, directly determining the initial pixel area as the first pixel area, and if so, fine-tuning the initial parameters so that the final pixel area is determined only after the navigation control is fully displayed. By prioritizing the display of the core control, occlusion of the navigation control is avoided.
In a second possible implementation, step S10 may include: after responding to the adding instruction, starting an interface control scanning procedure, identifying all controls with navigation functions in the current interface, and screening out the target control preset as the first display control; capturing the actual display boundary of the first display control in real time via screen pixel mapping and recording the pixel coordinates of its upper-left and lower-right corners; verifying, using the sub-pixel area of the navigation control within the first display control, that the captured boundary fully covers the navigation control, and determining the coordinate range as the first pixel area once confirmed. Capturing the actual display boundary in real time makes the pixel area immune to deviations in preset parameters and yields a more accurate result.
In a third possible implementation, step S10 may include: after receiving the adding instruction, reading the interface layout configuration file stored in the system and extracting the pixel area rule corresponding to the first display control (for example, that the navigation control is fixed at the bottom of the screen); calculating the specific pixel range of the first display control from the current screen resolution and the layout rule; and sending a pixel area verification request to the system, determining the range as the first pixel area after confirming that the calculated area is not occupied by other loaded controls and that the navigation control area does not extend beyond it. Calculating from the configuration file and the resolution makes the method compatible with devices of different screen sizes and resolutions.
The above are merely three possible implementations of step S10 provided in this embodiment; the specific implementation of step S10 is not limited herein.
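The third implementation's rule-plus-resolution calculation could be sketched as follows, assuming the configuration file stores fractional coordinates (an assumption, not stated in the patent):

```python
def pixel_region_from_rule(rule, screen_w, screen_h):
    """rule: fractional (left, top, right, bottom) in [0, 1] from the
    layout configuration file; converts to pixel coordinates for the
    current screen resolution."""
    l, t, r, b = rule
    return (round(l * screen_w), round(t * screen_h),
            round(r * screen_w), round(b * screen_h))

# e.g. a navigation pane fixed to the left half of a 1920x720 screen
print(pixel_region_from_rule((0.0, 0.0, 0.5, 1.0), 1920, 720))  # (0, 0, 960, 720)
```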
Step S20: rendering and displaying candidate pixel areas of the tail image control on the central control screen according to the first pixel area and the aspect ratio corresponding to the adding instruction.
The aspect ratio is the ratio of the width to the height of an image or control; in this embodiment it is a preset parameter carried by the adding instruction that defines the proportion at which the tail image control is to be displayed, preventing the image from being stretched or deformed. The central control screen is an in-vehicle hardware device integrating display, interaction, and control functions, usually mounted at the console in the cockpit; it not only displays navigation, tail images, and reversing images, but also controls the air conditioner, audio system, vehicle settings, and so on. All operations related to pixel areas and control display take place within the display range of the central control screen. The candidate pixel areas are the pixel areas that satisfy the display requirements and are the alternatives for the final rendering area; in this embodiment they are the available display areas on the central control screen computed from the first pixel area and the preset aspect ratio. A candidate pixel area must satisfy two core conditions: its proportion must match the aspect ratio of the adding instruction, and it must not overlap the first pixel area, so that the navigation control is displayed normally.
In this embodiment, the determined first pixel area and the aspect ratio carried by the adding instruction are obtained, areas satisfying the two conditions (proportion matching the aspect ratio and no overlap with the first pixel area) are screened out of the available display space of the central control screen, and multiple candidate pixel areas are rendered and displayed for subsequent selection of the target area. Rendering multiple candidate pixel areas accommodates different user habits and scenario requirements.
In a first possible implementation, step S20 may include: extracting the preset aspect ratio of the tail image control from the adding instruction, analyzing the boundary coordinates of the first pixel area, and determining the available blank areas on the central control screen outside the first pixel area; traversing the available blank areas to compute all rectangular pixel ranges conforming to the preset aspect ratio and screening out the three largest rectangles as preliminary candidates; checking whether the preliminary candidate areas conflict with other fixed controls of the central control screen, such as the pixel areas of the status bar and shortcut buttons, and after eliminating conflicting areas, determining the remaining areas as the candidate pixel areas of the tail image control. Screening for the largest proportion-conforming areas lets the tail image control be displayed as large as possible, improving the visual experience.
In a second possible implementation, step S20 may include: preferentially delineating non-conflicting safe areas based on the location of the first pixel area; computing the minimum fitting area within the safe areas according to the aspect ratio of the adding instruction and scaling it proportionally to generate several candidate areas of different sizes but consistent proportion, such as large, medium, and small specifications; simulating the display effect in each candidate area, retaining areas where the stretching or compression of the preview image stays below a threshold, and finally determining these as the candidate pixel areas of the tail image control. Providing multi-size candidates adapts to display requirements in different scenarios.
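Generating proportion-consistent candidate areas in several sizes, as in the second implementation, might look like this sketch (the large/medium/small scale factors are illustrative):

```python
def candidate_regions(free_w, free_h, aspect):
    """Largest rectangle of the given aspect ratio (width/height) that fits
    inside the free area, plus medium and small variants at 75% and 50%."""
    if free_w / free_h >= aspect:
        h = free_h
        w = round(h * aspect)
    else:
        w = free_w
        h = round(w / aspect)
    return [(round(w * s), round(h * s)) for s in (1.0, 0.75, 0.5)]

# A 2:1 control inside a 1200x600 free area, in three specifications
print(candidate_regions(1200, 600, 2.0))  # [(1200, 600), (900, 450), (600, 300)]
```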
Step S30: adjusting the acquisition viewing angle of the vehicle tail vision sensor according to the aspect ratio in response to a triggering operation for the target candidate pixel area.
The triggering operation is a user interaction, or an automatic system action, that starts a specific function and signals the flow to advance; in this embodiment it is a confirmation of the target candidate pixel area, for example the user tapping a candidate area on the screen or the system automatically selecting the optimal one. The acquisition viewing angle is the angular range over which the vehicle tail vision sensor captures images, determining the width of the view and the content it covers; in this embodiment it is the angle at which the sensor actually captures the tail scene and needs to be adjusted according to the aspect ratio of the target candidate area. Adjusting the sensor's captured picture to fit the proportion of the target area exactly avoids stretching or cropping key information during display.
In this embodiment, when a triggering operation for the target candidate pixel area occurs, the acquisition viewing angle of the vehicle tail vision sensor is adjusted according to the aspect ratio of the adding instruction. This makes the proportion of the picture captured by the sensor match the target area exactly, preventing the picture from being stretched or cropped during subsequent display and preserving the integrity of the image information.
In a first possible implementation, step S30 may include: monitoring central control screen interaction events in real time and, upon detecting a user tap or long press on the target candidate pixel area, immediately locking the aspect ratio parameters of that area; sending a proportion adaptation instruction to the vehicle tail vision sensor carrying the preset viewing angle parameter corresponding to the aspect ratio, for example a 120-degree wide angle for 16:9 and a 90-degree standard angle for 4:3; after receiving the instruction, the sensor quickly switches to the preset viewing angle, starts capturing, and feeds back a viewing-angle-adjustment-complete signal to the central control screen, ensuring the picture is rendered to the target area in real time. Switching based on preset viewing angle parameters completes the adjustment quickly without complex computation, offering fast response for scenarios with strict real-time requirements.
In a second possible implementation, step S30 may support two triggering modes: the user manually confirms the target candidate pixel area, such as by clicking a confirm button, or the system triggers automatically according to the scene, such as on a reverse signal; the actual aspect ratio of the target area is extracted after triggering. Based on the exact aspect ratio, the optimal acquisition viewing angle is computed by a sensor control algorithm, avoiding the adaptation deviation of preset parameters. The sensor is controlled to fine-tune the viewing angle step by step while the acquired picture is returned to the central control screen in real time; the deviation between the picture proportion and the target area proportion is compared, and when the deviation is smaller than a threshold, the current viewing angle is locked and acquisition stabilizes. Computing the optimal viewing angle from the aspect ratio of the target candidate pixel region reduces stretching and cropping of the picture to the greatest extent and preserves the integrity of the image information.
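The stepwise fine-tuning loop of this second implementation can be sketched as follows. The `frame_ratio_at` callback, step size, threshold, and angle limits are all illustrative assumptions standing in for the real sensor control interface.

```python
def adjust_view_angle(target_ratio, frame_ratio_at, start_angle,
                      step=2.0, threshold=0.01,
                      min_angle=60.0, max_angle=150.0):
    """Fine-tune the viewing angle until the captured frame's aspect ratio
    is within `threshold` of the target candidate area's aspect ratio.

    frame_ratio_at(angle) is a hypothetical callback returning the aspect
    ratio of the frame the sensor captures at a given horizontal angle.
    """
    angle = start_angle
    while True:
        deviation = frame_ratio_at(angle) - target_ratio
        if abs(deviation) < threshold:
            return angle  # lock the current angle and keep acquiring
        # widen the angle if the frame is narrower than the target, else narrow
        angle += step if deviation < 0 else -step
        if not (min_angle <= angle <= max_angle):
            # hardware limit reached: clamp and accept the best achievable angle
            return max(min_angle, min(max_angle, angle))
```

A toy model where the frame ratio grows linearly with the angle converges to the angle whose frame matches a 16:9 target, mirroring the "compare deviation, lock when below threshold" loop in the text.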
And S40, acquiring a video stream acquired by the adjusted vehicle tail vision sensor, and rendering and displaying the vehicle tail image control in the target candidate pixel area.
Wherein the video stream is a data stream composed of a large number of continuous still images, which are sequentially transmitted and displayed at a fixed frame rate, e.g., 30 frames per second, to visually form a dynamic video effect. In this embodiment, the video stream is a sequence of images of a vehicle tail scene captured in real time after the vehicle tail vision sensor adjusts the acquisition viewing angle, and includes continuous images of the vehicle tail environment, such as a road surface, an obstacle, a vehicle behind, and the like. And transmitting the video stream to a central control screen, displaying the video stream frame by frame in a target candidate pixel area, and finally displaying a dynamic real-time image of the vehicle tail.
In this embodiment, an acquisition instruction is sent to the adjusted vehicle tail vision sensor, and the sensor continuously acquires vehicle tail scene images at the target viewing angle and a preset frame rate to form a video stream. The video stream is transmitted in real time to the central control screen host over the vehicle-mounted communication bus. The parameters of the locked target candidate pixel area are retrieved to locate the area's specific position in the central control screen display layer, and the received video stream is adapted in real time based on the aspect ratio of the target area. A vehicle tail image control is initialized, the adapted video stream is mapped to the control frame by frame, and the control is rendered in the target candidate pixel area to display the video stream in real time, while ensuring that the control's display level is lower than that of the navigation control. The video stream transmission state and the control display effect are continuously monitored; if frame loss, stuttering, or display offset is detected, repair operations such as reconnecting the sensor or recalibrating the area coordinates are triggered automatically, and when no video stream is received, a signal abnormality prompt is displayed in the target area. Because the video stream is adapted to the aspect ratio of the target area without changing its original proportion, information loss from stretching or cropping is avoided and the details of the vehicle tail scene remain clearly distinguishable.
In a first possible embodiment, step S40 may include sending a real-time acquisition command to the adjusted vehicle tail vision sensor, which acquires the video stream at a default resolution, e.g., 720P at 30 frames per second, and transmits it to the central control screen in a low-latency mode over the vehicle bus. The parameters of the target candidate pixel region are retrieved, and the received video stream is scaled in equal proportion to quickly match the region size. A lightweight rendering engine is started, the adapted video stream is mapped to the vehicle tail image control frame by frame, and the control is displayed directly in the target area. Transmission delay is monitored in real time; if the delay exceeds a threshold, for example 100 ms, the resolution of the video stream is automatically reduced, for example by switching to 480P, to prioritize display fluency. Low-delay transmission, lightweight rendering, and a simplified processing flow greatly shorten the time from acquisition to display, so the response speed is extremely high, which suits scenes with strict real-time requirements such as reversing.
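The latency safeguard in this embodiment can be sketched as a one-rung step-down along a resolution ladder. The ladder beyond the 720P/480P pair named in the text, and the function name, are illustrative assumptions.

```python
# Resolution ladder from highest to lowest; 720P and 480P come from the
# example above, 360P is an assumed final fallback rung.
RESOLUTION_LADDER = ["720P", "480P", "360P"]

def next_resolution(current, latency_ms, threshold_ms=100):
    """Return the resolution to use for upcoming frames given measured latency."""
    if latency_ms <= threshold_ms:
        return current  # latency acceptable: keep the current resolution
    idx = RESOLUTION_LADDER.index(current)
    # step down one rung, clamping at the lowest available resolution
    return RESOLUTION_LADDER[min(idx + 1, len(RESOLUTION_LADDER) - 1)]
```

Calling this on every latency sample implements the "if the delay exceeds a threshold, reduce the resolution" policy while never dropping below the lowest rung.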
In a second possible implementation, step S40 may include synchronously recording acquisition parameters, such as resolution and frame rate, while the sensor acquires the video stream, and transmitting the stream in a high-definition mode such as 1080P, where a data compression algorithm reduces bandwidth occupation while retaining picture detail. After the video stream is received, precise cropping and proportional calibration are performed in combination with the aspect ratio of the target candidate pixel area and the screen resolution, eliminating invalid information at the edges of the picture and ensuring that the core scene is presented completely in the target area. A high-definition rendering engine is started to optimize the image quality of the video stream, for example through noise reduction and contrast enhancement, and the optimized picture is rendered to the vehicle tail image control frame by frame while ensuring that the control's display level does not conflict with the navigation control. The adaptation deviation between the rendered picture and the target area is continuously compared, and if display offset or deformation occurs, the rendering parameters are fine-tuned in real time to keep the picture precisely aligned with the target area. High-definition acquisition, compressed transmission, and image quality optimization keep the picture clear and preserve the details of the vehicle tail scene, improving visual experience and judgment accuracy.
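The precise-cropping step of this implementation can be sketched as a centered crop to the target area's aspect ratio, discarding edge content instead of stretching. The centering choice and function name are illustrative assumptions; the text only requires that invalid edge information be eliminated.

```python
def center_crop_to_ratio(frame_w, frame_h, target_w, target_h):
    """Return (x, y, w, h) of the largest centered crop of a frame whose
    aspect ratio equals target_w:target_h. All values are in pixels."""
    target_ratio = target_w / target_h
    if frame_w / frame_h > target_ratio:
        # frame is wider than the target ratio: trim the left/right edges
        w = round(frame_h * target_ratio)
        h = frame_h
    else:
        # frame is taller than the target ratio: trim the top/bottom edges
        w = frame_w
        h = round(frame_w / target_ratio)
    return ((frame_w - w) // 2, (frame_h - h) // 2, w, h)
```

For a 1920×1080 frame displayed in a 4:3 target area, this keeps the central 1440×1080 region, so the core scene fills the target area without deformation.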
The embodiment provides a control method of a central control screen, which comprises the steps of determining a first pixel area of a first display control by responding to an adding instruction of a vehicle tail image control, wherein the first display control comprises a navigation control, rendering and displaying candidate pixel areas of the vehicle tail image control on the central control screen according to the first pixel area and an aspect ratio corresponding to the adding instruction, responding to a triggering operation aiming at a target candidate pixel area, adjusting an acquisition view angle of a vehicle tail visual sensor according to the aspect ratio, acquiring a video stream acquired by the adjusted vehicle tail visual sensor, and rendering and displaying the vehicle tail image control on the target candidate pixel area. The application adjusts the acquisition view angle of the vehicle tail vision sensor through the set aspect ratio and the existing display control of the central control screen, so that the acquisition view angle is matched with the UI pixel size of the vehicle tail image control in equal proportion, the vehicle tail image control can stay in the central control screen as a fixed control for a long time without affecting the overall UI vision experience, and the use scene of the vehicle tail image is expanded.
Based on the first embodiment, in a second embodiment of the present application, content that is the same as or similar to the first embodiment can be found in the description above and is not repeated here. On this basis, referring to fig. 2, fig. 2 is a schematic flowchart of a second embodiment of the control method of the central control screen in the present application. Before the step of determining, in response to an add instruction of the vehicle tail image control, a first pixel area of a first display control, the first display control including a navigation control, the method includes:
And step S01, responding to a control adding instruction, and acquiring layout information of the central control screen.
The layout information is a set of hardware parameters, control distribution, and display rules for the current interface of the central control screen. It comprises basic hardware parameters of the central control screen, distribution data of already loaded controls, interface display rules and constraints, and available display resource data. The basic hardware parameters include the screen resolution, such as 1920×1080 pixels, and the pixel density corresponding to the physical size of the screen, which together define the overall display range of the interface; they also include the effective boundary of the screen display area, preventing controls from being rendered into an invisible region. The distribution data of loaded controls includes the key information of each displayed control, namely its unique identifier, pixel area, and display level, such as the navigation control being at the top layer; it also includes the functional attributes of each control, such as whether it is a fixed control and whether it can be temporarily hidden — the navigation control is generally a fixed core control and cannot be blocked. The interface display rules and constraints include control layout priority rules, such as core function controls having higher priority than auxiliary function controls, and the minimum spacing requirement between controls; they also include reserved area rules — for example, if the bottom of the central control screen is a shortcut operation area, large auxiliary controls cannot be loaded there. The available display resource data comprises information on blank areas not occupied by existing controls, including the pixel range and shape of each blank area and whether it conforms to the display proportions of common controls.
In this embodiment, a control add instruction, for example an instruction to add the vehicle tail image control, is used as a signal to start the layout information acquisition flow. Core layout data of the central control screen is obtained, including the pixel areas of loaded controls, the screen resolution, the range of available blank areas, and the control display level rules. This clarifies the interface resource distribution of the central control screen, avoids collisions between the newly added control and existing controls, and provides data support for subsequently calculating candidate pixel areas and adapting display proportions.
Step S02, determining candidate length-width ratio according to the layout information and the pixel size of the central control screen.
Wherein the pixel size is expressed by the number of horizontal pixels×the number of vertical pixels, such as 1920×1080, and each pixel is the minimum unit of the screen display. For example, a pixel size of 1920×1080 means that there are 1920 pixels in the horizontal direction and 1080 pixels in the vertical direction of the screen, and the total number of pixels is the product of the two.
In this embodiment, the available blank area in the layout information is first associated with the pixel size of the central control screen to define the upper limit of usable space. Based on the width and height of the available space, common proportions such as 16:9 and 4:3 that fit the screen display logic and do not occlude core controls are generated preferentially — for example, when the available space is 1600 pixels wide and 900 pixels high. Extreme proportions, proportions with poor display effect, and proportions beyond hardware support, such as ratios the sensor cannot adapt to, are eliminated, and finally 3 to 5 optimal proportions are retained as the candidate aspect ratios. Adapting the candidate aspect ratios to both the hardware capability of the central control screen and the interface layout constraints ensures the feasibility and harmony of subsequent control display at the source.
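The candidate-ratio screening of step S02 can be sketched as filtering a list of common proportions against the available blank area. The common-ratio list, the minimum usable control width, and the "extreme ratio" bounds are illustrative assumptions; only the 16:9/4:3 examples, the elimination of extreme ratios, and the 3-5 candidate cap come from the text.

```python
# Common display proportions to screen; the set beyond 16:9 and 4:3 is assumed.
COMMON_RATIOS = [(16, 9), (4, 3), (3, 2), (21, 9), (1, 1)]

def candidate_aspect_ratios(avail_w, avail_h, min_width=320, max_candidates=5):
    """Return up to max_candidates ratios usable within the blank area."""
    candidates = []
    for w, h in COMMON_RATIOS:
        ratio = w / h
        if not (0.5 <= ratio <= 2.5):
            continue  # drop extreme ratios with poor display effect
        # widest control of this ratio that still fits inside the blank area
        best_w = min(avail_w, avail_h * ratio)
        if best_w >= min_width:  # must leave room for a usable control
            candidates.append((w, h))
    return candidates[:max_candidates]
```

With the 1600×900 example space, both 16:9 and 4:3 survive the screening; a tiny blank area yields no candidates, signalling that no control can be placed.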
In step S03, an add instruction is generated in response to a click operation on a target candidate aspect ratio.
In this embodiment, the central control screen displays a list of candidate aspect ratios and monitors user operation in real time; when it detects that the user clicks a certain candidate proportion, namely the target candidate aspect ratio, an instruction generation flow is triggered. The target aspect ratio selected by the user is taken as the core parameter and, combined with auxiliary information such as the central control screen identifier and an operation timestamp, packaged into a standardized add instruction. The integrity of the instruction parameters is checked and, once the check passes, the instruction is sent to the central control screen control system as the trigger signal for the subsequent steps. The click operation converts the user's preference for a display proportion into a specific instruction, ensuring that the following flow executes strictly according to the user's selection.
In this embodiment, the candidate aspect ratios are derived from the layout information and pixel size of the central control screen, ensuring that the proportions conform to the screen's hardware capability, avoiding control conflicts, and preventing subsequent display deformation and occlusion problems at the source. The coordination between the newly added control and the original layout of the central control screen is thereby greatly improved.
Based on any of the foregoing embodiments of the present application, a third embodiment of the present application provides a control method for the central control screen; content described in the foregoing embodiments is not repeated here. On this basis, referring to fig. 3, fig. 3 is a flowchart of a third embodiment of the control method of the central control screen according to the present application. After determining the candidate aspect ratios according to the layout information and the pixel size of the central control screen, the method includes:
Step S021, displaying a sensor selection control, wherein the sensor selection control displays a vehicle tail vision sensor, a first side rear sensor, and a second side rear sensor.
The sensor selection control is an interaction component on the central control screen dedicated to displaying the selectable vehicle-mounted image sensors. The user manually selects the sensor to be started; once selected, the system can invoke that sensor to collect images, providing a data source for the subsequent display of the control. The vehicle tail vision sensor is usually installed at the tail of the vehicle, such as on the trunk door handle or above the license plate; it is an image sensor, in most cases a camera, that specifically captures the scene directly behind the vehicle. It collects real-time pictures directly behind the vehicle tail, corresponding to the conventional reversing image, and is mainly used to observe the distance between the vehicle tail and obstacles and the condition of the road surface behind. The first side rear sensor is mounted at the rear of one side of the vehicle, such as near the left rear door handle or on the left rear fender, and is a dedicated side-rear image sensor. It captures the scene behind one side of the vehicle, such as the left rear lane and left rear obstacles, supplementing the blind area of the tail sensor's view, and is typically used in lateral parking and parallel-parking assistance scenarios. The second side rear sensor is identical in function and type to the first side rear sensor and differs only in installation position, typically at the rear of the other side of the vehicle, such as near the right rear door handle or on the right rear fender. It captures the scene behind the other side of the vehicle, such as the right rear lane and right rear obstacles, and cooperates with the first side rear sensor to achieve full coverage of the rear views on both sides of the vehicle.
In this embodiment, after the candidate aspect ratios are determined according to the layout information and the pixel size of the central control screen, a sensor selection control is displayed, where the sensor selection control includes the vehicle tail vision sensor, the first side rear sensor, and the second side rear sensor. The vehicle tail vision sensor is selected by default, while the first side rear sensor and the second side rear sensor are optional. Offering three types of sensors covers the rear views of the vehicle tail and both sides, meeting the image acquisition requirements of different scenes such as reversing and lateral parking.
Step S022, a sensor combination is determined in response to a triggering operation on a target sensor in the sensor selection control.
The sensor combination is a product of integrating sensors selected by a user by the system, and can be a single sensor, such as a vehicle tail vision sensor only, or a combination of a plurality of sensors, such as a vehicle tail vision sensor and a first side rear sensor. By combining the sensors at different positions, the visual field range of image acquisition is enlarged, blind areas are reduced, more comprehensive surrounding scenes of the vehicle are provided for users, and driving safety is improved.
In this embodiment, the system continuously monitors user interaction behavior and captures the triggering operation on a target sensor. The triggering state of each sensor is recorded, supporting both single and multiple selection by the user. Based on the trigger states, all sensor identifiers selected by the user, such as the vehicle tail vision sensor ID and the first side rear sensor ID, are extracted. Whether each selected sensor is in a normal working state is checked; if a sensor is abnormal, a prompt pops up and that sensor is excluded. The checked sensors are integrated into a sensor combination, whose form is determined by the user's selection. The key information of the sensor combination is synchronized to the subsequent modules, and the currently selected combination result is displayed on the central control screen. Supporting both single and multiple sensor selection lets the user customize the combination according to scene requirements and adapt to different use scenarios.
Step S023, updating the candidate aspect ratios according to the acquisition viewing angle of the sensor combination.
In this embodiment, the hardware parameters of each sensor in the sensor combination are retrieved, including each sensor's acquisition ratio, such as 16:9 for the tail sensor, and viewing angle, such as a 120° horizontal angle for the tail sensor. For a multi-sensor combination, the equivalent proportion of the stitched and fused pictures must be analyzed; for a single sensor, the native proportion is used directly. Based on the equivalent proportion of the combined viewing angles, the proportions with a high degree of match are screened from the original candidate aspect ratios. If a special proportion is formed after the sensors are stitched, that proportion is added to the candidate list, ensuring that the candidates cover the image characteristics of the combination. The candidate aspect ratios thus depend not only on the central control screen parameters but are also dynamically adjusted in combination with the acquisition viewing angle of the sensor combination, ensuring that the proportion matches the actually acquired picture completely and avoiding display deformation and loss of valid information.
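The equivalent-proportion analysis for a sensor combination can be sketched as follows, assuming a simple side-by-side stitching model in which widths add at a common normalized height. The stitching geometry is an illustrative assumption; the text only states that a multi-sensor combination yields an equivalent proportion to analyze.

```python
def equivalent_ratio(sensor_ratios):
    """Return the equivalent (w, h) aspect ratio of a sensor combination.

    sensor_ratios is a list of (w, h) native ratios. A single sensor keeps
    its native ratio; multiple sensors stitched side by side (assumed model)
    sum their widths after normalizing every picture to unit height.
    """
    if len(sensor_ratios) == 1:
        return sensor_ratios[0]  # single sensor: use the native proportion
    total_w = sum(w / h for w, h in sensor_ratios)  # widths at unit height
    return (total_w, 1.0)
```

Under this model, combining a 16:9 tail sensor with a 4:3 side sensor yields a wider-than-16:9 equivalent ratio, which is exactly the kind of "special proportion" the text says must be added to the candidate list.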
In this embodiment, dedicated adaptation ratios are provided for the different viewing angle characteristics of single or multiple sensors, meeting the display requirements of full scenes such as reversing and lateral parking. This avoids the situation in which a fixed proportion prevents the pictures of a certain sensor combination from being displayed completely.
Based on any of the foregoing embodiments of the present application, a fourth embodiment of the present application provides a control method for the central control screen; content described in the foregoing embodiments is not repeated here. On this basis, the step of determining a first pixel area of a first display control, the first display control including a navigation control, comprises the following steps:
Step S11, if the first display control only comprises the navigation control, acquiring a pixel coordinate range of the navigation control in the current layout of the central control screen, and determining the pixel coordinate range as a first pixel area.
In this embodiment, the system first identifies the content of the first display control, and after confirming that only the navigation control exists, the system invokes the position data of the navigation control in the layout information of the central control screen to obtain the complete pixel coordinate range of the navigation control in the screen coordinate system. And directly defining the acquired navigation control pixel coordinate range as a first pixel region, and recording the wide and high pixel values and boundary parameters of the region. The existing coordinate data of the navigation control is directly multiplexed, additional calculation is not needed, a first pixel area is quickly defined, and flow efficiency is improved.
Step S12, if the first display control comprises a navigation control and at least one other application control, identifying pixel occupied areas of the navigation control and each other application control, calculating total pixel coverage according to layout relations of the controls, and determining the total pixel coverage as a first pixel area.
In this embodiment, the system scans the first display control first, and identifies the navigation control and all other application controls therein. And respectively calling pixel occupation area data of each control, wherein the pixel occupation area data comprise respective boundary coordinates and wide and high pixel values, analyzing the layout relation of each control, and calculating the total pixel coverage area by taking the outermost boundary as a standard. And transversely taking the leftmost X coordinate and the rightmost X coordinate of all the controls, and longitudinally taking the uppermost Y coordinate and the bottommost Y coordinate of all the controls, wherein the formed rectangular area is the total pixel coverage area. And determining the calculated total pixel coverage as a first pixel area, and recording the complete coordinate parameters and the total width and height data of the first pixel area. And obtaining the accurate total pixel coverage area according to the layout relation of the controls by determining the controls contained in the first display control.
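The total-pixel-coverage calculation of step S12 can be sketched as the outermost bounding rectangle over all control regions, taking the leftmost/rightmost X and topmost/bottommost Y exactly as described. The (x, y, w, h) region representation is an illustrative assumption.

```python
def total_pixel_coverage(controls):
    """Return the (x, y, w, h) bounding rectangle covering every control.

    controls is a list of (x, y, w, h) pixel regions, e.g. the navigation
    control plus each other application control in the first display control.
    """
    left = min(x for x, y, w, h in controls)           # leftmost X
    top = min(y for x, y, w, h in controls)            # topmost Y
    right = max(x + w for x, y, w, h in controls)      # rightmost X
    bottom = max(y + h for x, y, w, h in controls)     # bottommost Y
    return (left, top, right - left, bottom - top)
```

The rectangle formed by these four extremes is the total pixel coverage determined as the first pixel area when controls other than the navigation control are present.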
In this embodiment, by determining the controls included in the first display control, the total coverage of all the controls is calculated, so that the subsequent new control is prevented from blocking the non-navigation core functions due to missing a certain application control.
Based on any of the foregoing embodiments of the present application, a fifth embodiment of the present application provides a control method for the central control screen; content described in the foregoing embodiments is not repeated here. On this basis, the step of rendering and displaying, on the central control screen, the candidate pixel areas of the vehicle tail image control according to the first pixel area and the aspect ratio corresponding to the add instruction comprises the following steps:
Step S21, identifying the application program type of each application program control in the first display control, wherein the application program type comprises a core application and a non-core application, the navigation control belongs to the core application, and the other application program controls are the non-core application.
The application program types classify the central control screen's application controls by functional importance and use-scene priority, dividing them into only two categories: core applications and non-core applications. This gives the control system a basis for layout planning, defining which controls must have their display guaranteed first and which can be adjusted flexibly. A core application is an application that directly serves driving safety and core driving needs — an indispensable key function during driving. It has the highest priority, must remain unobscured throughout, and its layout position is relatively fixed and may not be covered arbitrarily by other controls; the navigation control is an example. A non-core application is an auxiliary application for improving driving comfort and entertainment; it does not directly affect driving safety, and its function can be temporarily paused or its display position adjusted. Its priority is lower than that of a core application: it may be displayed provided the core application is not affected, and it can be hidden or moved when necessary to meet the layout requirements of other functions. Examples include music playback controls and weather display controls.
In this embodiment, the system scans all application program controls in the first display control and extracts the unique identifier of each control one by one, such as the control ID and function name. A control list is established to ensure that no control awaiting classification is omitted. A classification judgment is executed for each control: if the control identifier matches the navigation control, it is directly judged to be a core application; all other application program controls are uniformly judged to be non-core applications, with no additional complex verification. With the explicit rule that the navigation control is the core application and everything else is non-core, classification completes quickly without a complex algorithm, improving flow efficiency.
Step S22, applying a minimum necessary scaling strategy to the control corresponding to the core application and an adaptive scaling strategy to the controls corresponding to the non-core applications, wherein the minimum necessary scaling strategy is the minimum scaling that keeps the key information of the control fully displayed.
The minimum necessary scaling strategy is a scaling rule designed specifically for core applications: it scales only when necessary and by the minimum proportion, with the core aim of ensuring that the control's key information, such as the navigation route and distance markers, is displayed completely. The key information area of the core control is defined first, and that area is preferentially kept uncompressed and uncropped during scaling; only when the control's footprint exceeds the layout limit is it reduced by the minimum proportion that still displays the key information completely, preventing information from becoming blurred or lost through excessive scaling. The adaptive scaling strategy is a scaling rule for non-core applications that flexibly adjusts the control's scale according to the available display space of the central control screen to meet layout requirements. Rather than using a fixed scale, the system automatically calculates an appropriate scaling factor from the currently available space and may enlarge or reduce the control's overall size. Scaling targets fitting the whole control into the space and allows secondary information to be compressed appropriately, as long as basic operation is not affected.
In this embodiment, the application-type classification result from the preceding step is retrieved; the control corresponding to the core application is bound to the minimum necessary scaling strategy, and the controls corresponding to non-core applications are bound to the adaptive scaling strategy. For core application controls, scaling yields to the integrity of their key information, while the scaling of non-core application controls obeys the overall layout space requirements. Part of the display space of non-key areas is sacrificed so that core functions are not affected. The minimum necessary scaling strategy focuses on keeping the core application's key information complete, avoiding core function failures such as navigation loss caused by scaling, and thereby meets driving safety requirements.
Step S23, scaling the first pixel area according to the minimum necessary scaling strategy and the adaptive scaling strategy to obtain the residual layout space of the scaled first pixel area.
In this embodiment, minimum necessary scaling is executed on the core application control, reducing it by the minimum proportion that keeps its key information intact, and the core control's new footprint after scaling is recorded. Adaptive scaling is executed on the non-core application controls, flexibly adjusting their scale according to the overall layout constraints, and the scaled non-core controls' new footprints are recorded. It is ensured that the scaled controls do not overlap, that key information and basic operation areas remain usable, and that the whole still lies within the original first pixel area. The remaining layout space = total space of the first pixel region − scaled core control space − scaled non-core control space. The specific parameters of the remaining layout space, including boundary coordinates, width and height pixel values, and shape, are then determined. The differentiated scaling strategy releases redundant space, fully utilizing idle resources within the first pixel region and providing sufficient display space for the newly added control.
And step S24, planning at least one vehicle tail image control placement area meeting the aspect ratio requirement based on the size of the residual layout space and the aspect ratio corresponding to the adding instruction, determining the placement area as a candidate pixel area, and rendering and displaying.
In this embodiment, the core parameters of the remaining layout space are retrieved, including the space's width and height pixel values, boundary coordinates, and shape. The aspect ratio corresponding to the add instruction is extracted to determine the proportion constraint of the vehicle tail image control. Based on the target aspect ratio and the remaining space size, the maximum displayable size meeting the ratio requirement is calculated: matching follows the aspect ratio without exceeding the remaining space boundary, from which the optimal width and height of the control are derived. If the remaining space can accommodate several regions of the required proportion, at least one candidate region is planned. It is ensured that the region does not overlap the scaled region of any core or non-core control and that its boundaries lie within the remaining layout space. The planned placement area is defined as the candidate pixel area, and its complete coordinates, width and height data, and corresponding target aspect ratio are recorded. The effect of the candidate pixel area is then rendered and displayed in the remaining layout space of the central control screen. Planning the region from the actual remaining space and the proportion constraint avoids overlap with existing controls at the source and guarantees layout coordination.
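The size derivation of step S24 — the largest control that keeps the exact target ratio without exceeding the remaining-space boundary — can be sketched as follows. Representing the remaining space as a single rectangle is an illustrative simplification.

```python
def plan_placement(space_w, space_h, ratio_w, ratio_h):
    """Return the (w, h) in pixels of the largest placement area with aspect
    ratio ratio_w:ratio_h fitting inside a space_w x space_h rectangle."""
    ratio = ratio_w / ratio_h
    if space_w / space_h > ratio:
        h = space_h
        w = round(space_h * ratio)  # height-limited: derive width from ratio
    else:
        w = space_w
        h = round(space_w / ratio)  # width-limited: derive height from ratio
    return (w, h)
```

With an 800×600 remaining space and the 16:9 ratio from the add instruction, the planned area is 800×450, which respects the boundary while keeping the exact proportion.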
In this embodiment, by dividing applications into core and non-core applications and scaling them under the minimum necessary scaling strategy and the adaptive scaling strategy respectively, the tail image control fully utilizes idle space while satisfying the ratio requirement, improving the display effect.
Based on any of the foregoing embodiments of the present application, a sixth embodiment of the control method of the central control screen is provided; content already described in the foregoing embodiments is not repeated here. On this basis, the step of adjusting the acquisition viewing angle of the tail vision sensor according to the aspect ratio, in response to a trigger operation for a target candidate pixel area, includes the following steps:
Step S31, in response to a trigger operation for the target candidate pixel area, the actual display aspect ratio of the target candidate pixel area is acquired.
In this embodiment, the system continuously monitors user interaction with the candidate pixel areas on the central control screen, and locks onto a target candidate pixel area when a trigger operation for it is detected. The stored data of the target candidate pixel area is retrieved, its actually displayed width and height in pixels are extracted, its aspect ratio is calculated, and the result is normalized. Calculating from the actual pixel size of the area selected by the user, rather than from a theoretical proportion, avoids proportion deviations caused by space adaptation and guarantees that the control is rendered without deformation.
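Computing the actual display aspect ratio from the rendered pixel size is a one-liner; taking the normalization step to be simple rounding is an assumption here, since the application does not specify how the result is normalized:

```python
def normalized_aspect(width_px, height_px, digits=2):
    """Actual display aspect ratio from the rendered pixel size (step S31),
    rounded so it compares stably against preset mapping ratios."""
    return round(width_px / height_px, digits)

print(normalized_aspect(1280, 720))  # a 16:9 area -> 1.78
print(normalized_aspect(800, 600))   # a 4:3 area  -> 1.33
```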
Step S32, determining the acquisition viewing angle range to be adjusted based on hardware parameters of the tail vision sensor, where the hardware parameters include a maximum horizontal viewing angle, a minimum horizontal viewing angle, and a viewing angle adjustment step size.
The maximum horizontal viewing angle is the largest horizontal field of view the tail vision sensor can capture and is the upper boundary of viewing angle adjustment. Its unit is typically degrees; for example, 120° means the sensor can cover at most 60° to each side in the horizontal direction, and a scene beyond this range cannot be acquired. The minimum horizontal viewing angle is the smallest horizontal field of view at which the tail vision sensor can work stably and is the lower boundary of viewing angle adjustment. Its unit is also degrees; for example, 60° means the sensor can focus on at least 30° to each side in the horizontal direction, and going below this would cause excessive magnification, distortion, or unstable acquisition. The viewing angle adjustment step size is the fixed angle increment by which the sensor moves between the maximum and minimum horizontal viewing angles; the viewing angle change per adjustment determines the adjustment precision. Its unit is degrees; for example, 5° means the sensor viewing angle can only be adjusted in increments of 5°.
In this embodiment, the core hardware parameters of the tail vision sensor, namely the maximum horizontal viewing angle, the minimum horizontal viewing angle, and the viewing angle adjustment step size, are invoked. Taking the minimum horizontal viewing angle as the lower adjustment limit and the maximum horizontal viewing angle as the upper adjustment limit, the adjustable range of the sensor acquisition viewing angle is defined, for example 60°–120°, and the fixed angle increment per adjustment is defined, for example 5°. Adjustment instructions below the minimum or above the maximum viewing angle are treated as invalid, avoiding picture distortion or sensor damage from exceeding hardware capability. Defining the upper and lower adjustment limits from hardware parameters avoids invalid adjustments beyond the sensor's capability and guarantees acquisition stability.
Step S33, calculating a target horizontal viewing angle satisfying the actual display aspect ratio according to an image proportion mapping relation between the actual display aspect ratio and the tail vision sensor, where the image proportion mapping relation is that each time the actual display aspect ratio increases by a preset ratio value, the target horizontal viewing angle correspondingly increases by a preset angle value.
The image proportion mapping relation is a preset correspondence rule between the actual display aspect ratio and the horizontal viewing angle of the tail vision sensor: each time the actual display aspect ratio increases by a preset ratio value, the target horizontal viewing angle correspondingly increases by a preset angle value. The target horizontal viewing angle, calculated through this mapping relation, makes the picture acquired by the sensor match the actually displayed aspect ratio exactly, and is the viewing angle the sensor ultimately needs to adjust to. The target horizontal viewing angle should lie within the sensor's adjustable acquisition viewing angle range; if it falls outside that range, the boundary value is taken.
In this embodiment, the key data is invoked: the calculated actual display aspect ratio, the preset ratio value, the preset angle value, and the sensor's reference viewing angle, which defaults to the minimum horizontal viewing angle or a preset initial viewing angle such as 60°. Taking the reference ratio corresponding to the reference viewing angle as the baseline, for example 60° corresponding to the ratio 1.33 (i.e. 4:3), the difference between the actual aspect ratio and the reference ratio is calculated: ratio increment = actual aspect ratio − reference ratio, e.g. 1.78 − 1.33 = 0.45. Under the mapping relation, angle increment = (ratio increment ÷ preset ratio value) × preset angle value, e.g. 0.45 ÷ 0.2 × 10° = 22.5°. Target horizontal viewing angle = reference viewing angle + angle increment, e.g. 60° + 22.5° = 82.5°. Through this association of ratio and angle, the proportion of the sensor's acquired picture is kept fully consistent with the display area, avoiding stretching, black borders, or cropping.
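The worked example above (a 1.33 reference ratio at 60°, plus 10° for every 0.2 increase in ratio) maps directly to code; the defaults below mirror those example values and are otherwise assumptions:

```python
def target_horizontal_fov(actual_ratio, ref_ratio=1.33, ref_fov=60.0,
                          ratio_step=0.2, angle_step=10.0):
    """Map a display aspect ratio to a sensor horizontal viewing angle:
    every ratio_step increase in ratio adds angle_step degrees (step S33)."""
    ratio_inc = actual_ratio - ref_ratio          # e.g. 1.78 - 1.33 = 0.45
    return ref_fov + ratio_inc / ratio_step * angle_step

# a 16:9 area (ratio 1.78) against the 4:3 / 60 degree reference -> 82.5 degrees
fov = target_horizontal_fov(1.78)
```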
Step S34, based on the acquisition view angle range to be adjusted, judging whether the target horizontal view angle is within the acquisition view angle range.
In this embodiment, the target horizontal viewing angle calculated in the preceding step is extracted, e.g. 82.5°, along with the determined adjustable acquisition viewing angle range of the sensor, e.g. a minimum of 60° and a maximum of 120°. If minimum horizontal viewing angle ≤ target horizontal viewing angle ≤ maximum horizontal viewing angle, the target is judged to be within range. If the target horizontal viewing angle is below the minimum, it is judged below the lower limit; if above the maximum, above the upper limit. The judgment result and key data are recorded; in this example the target viewing angle of 82.5° lies within the 60°–120° range. Verification is completed by boundary comparison alone, with no complex calculation, so the response is fast and suits the real-time adjustment requirement.
Step S35, if the target horizontal viewing angle is within the acquisition viewing angle range, sending a viewing angle adjustment instruction to the tail vision sensor and controlling the tail vision sensor to adjust its acquisition viewing angle to the target horizontal viewing angle.
In this embodiment, when the target horizontal viewing angle is confirmed to be within the adjustable range, the instruction sending flow is triggered and the acquisition viewing angle is adjusted to the target horizontal viewing angle. By confirming the target is within range first, the horizontal viewing angle can be adjusted quickly.
Step S36, if the target horizontal viewing angle exceeds the acquisition viewing angle range, adjusting the acquisition viewing angle of the tail vision sensor to the maximum horizontal viewing angle or the minimum horizontal viewing angle.
In this embodiment, when the target horizontal viewing angle is judged out of range: if it is below the minimum horizontal viewing angle, the acquisition viewing angle of the tail vision sensor is adjusted to the minimum horizontal viewing angle; if it is above the maximum, it is adjusted to the maximum horizontal viewing angle. Adjusting the acquisition viewing angle to the boundary value satisfies the target viewing angle requirement to the greatest extent possible.
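Steps S34 through S36 amount to a boundary check with fallback to the nearest hardware limit. A minimal sketch, with the 60°–120° example range as defaults:

```python
def clamp_fov(target_deg, fov_min=60.0, fov_max=120.0):
    """Steps S34-S36: an in-range target passes through unchanged;
    an out-of-range target falls back to the nearest hardware boundary."""
    if target_deg < fov_min:
        return fov_min
    if target_deg > fov_max:
        return fov_max
    return target_deg

print(clamp_fov(82.5))   # in range -> 82.5
print(clamp_fov(40.0))   # below lower limit -> 60.0
print(clamp_fov(150.0))  # above upper limit -> 120.0
```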
In this embodiment, the acquisition viewing angle of the tail vision sensor is adjusted by determining the acquisition viewing angle range and the target horizontal viewing angle, satisfying the target viewing angle requirement and guaranteeing the picture adaptation effect.
Based on any of the foregoing embodiments of the present application, a seventh embodiment of the control method of the central control screen is provided; content already described in the foregoing embodiments is not repeated here. On this basis, the step of adjusting the acquisition viewing angle of the tail vision sensor according to the aspect ratio, in response to a trigger operation for a target candidate pixel area, includes the following steps:
In step S301, in response to a triggering operation for the target candidate pixel area, a pixel range and a corresponding target aspect ratio of the target candidate pixel area are acquired.
In this embodiment, the system continuously monitors user interaction with the candidate pixel areas on the central control screen, and locks onto a target candidate pixel area when a trigger operation for it is detected. The stored data of the target candidate pixel area is retrieved, its actually displayed width and height in pixels are extracted, its aspect ratio is calculated, and the result is normalized. Calculating from the actual pixel size of the area selected by the user, rather than from a theoretical proportion, avoids proportion deviations caused by space adaptation and guarantees that the control is rendered without deformation.
Step S302, based on the selected tail vision sensor, first rear-side sensor, and second rear-side sensor, respectively obtaining each sensor's initial acquisition viewing angle, installation position parameters, and hardware distortion coefficient.
The initial acquisition viewing angle is each sensor's default horizontal viewing angle and serves as the reference value for subsequent viewing angle adjustment. The installation position parameters are the physical installation coordinates of each sensor on the vehicle body, such as the rear-side sensor's height above the ground, its distance from the vehicle door, and its installation angle (e.g. horizontal tilt), which define the sensors' spatial relationship. The hardware distortion coefficient is the inherent distortion parameter of the sensor lens, used for subsequent picture distortion correction so that the stitched picture is not distorted.
In this embodiment, based on the sensor determination result, the target sensors are locked, and the initial acquisition viewing angle, installation position parameters, and hardware distortion coefficient of each sensor are acquired. Extracting these three key parameters satisfies the multiple requirements of viewing angle adaptation, spatial positioning, and distortion correction.
Step S303, determining the physical field of view to be covered by the stitched picture according to the pixel range and the target aspect ratio.
In this embodiment, the proportion constraint of the display side is confirmed from the pixel range and the target aspect ratio so as to match the spatial size. The installation position parameters of the corresponding sensors are retrieved, and the physical layout and overlapping areas of the sensors are determined. Combined with the initial acquisition viewing angles, the physical field of view acquired by each sensor alone is calculated, and the overlap relationship of the original fields is determined. According to the target aspect ratio requirement, the horizontal and vertical physical angles the stitched picture must cover are calculated. In the horizontal direction the fields of the three sensors must be integrated, blind zones eliminated, and coverage areas distributed proportionally. After rendering, the stitched physical field of view exactly fills the target pixel area, with neither wasted field of view nor insufficient display. The field partition each sensor must contribute is made explicit, providing a clear target for the subsequent fine adjustment of sensor viewing angles and the calibration of the picture overlapping areas.
Step S304, calculating the field-of-view partition each sensor must bear, based on the physical field of view.
In this embodiment, the total horizontal field of view and the vertical field of view are extracted. The field is decomposed on the principle that the horizontal direction is primary and the vertical direction secondary; in the vertical direction each sensor covers according to its installation depression angle by default. Basic fields are distributed according to the installation position parameters: the tail vision sensor, mounted centrally, is preferentially assigned the core area directly behind the vehicle body, while the first and second rear-side sensors, mounted symmetrically on the two sides of the vehicle body, are assigned the rear-side areas. It is ensured that no assigned field exceeds the adjustment range of its sensor; if a sensor's maximum viewing angle is insufficient, adjacent sensors supplement the remainder. To guarantee smooth stitching, overlapping areas of 5°–10° may be arranged at the field boundaries of adjacent sensors, avoiding stitching gaps. Distributing the fields by installation position and sensor capability keeps each sensor's partition within its controllable range and improves acquisition stability.
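One simple way to produce partitions with stitching seams is an equal split with widened boundaries. This is only an illustrative sketch: the embodiment assigns the tail sensor a wider central partition according to installation positions, which this equal-split version does not model:

```python
def partition_fov(total_start, total_end, n_sensors=3, overlap=5.0):
    """Split the stitched horizontal field [total_start, total_end] degrees
    into n contiguous partitions, extending each shared boundary by
    `overlap` degrees on both sides so adjacent sensors share a seam."""
    width = (total_end - total_start) / n_sensors
    parts = []
    for k in range(n_sensors):
        s = total_start + k * width - (overlap if k > 0 else 0.0)
        e = total_start + (k + 1) * width + (overlap if k < n_sensors - 1 else 0.0)
        parts.append((s, e))
    return parts

# a 250 degree total field split across the three rear sensors with 5 degree seams
parts = partition_fov(-125.0, 125.0)
```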
Step S305, calculating the target horizontal viewing angle of the tail vision sensor, the target horizontal viewing angle of the first rear-side sensor, and the target horizontal viewing angle of the second rear-side sensor based on each sensor's field-of-view partition and the target aspect ratio, combined with its installation position parameters.
In this embodiment, each sensor's field-of-view partition (horizontal start and end angles), the target aspect ratio, and the installation position parameters are invoked. The horizontal span of a partition is its end angle minus its start angle; for the tail sensor's field partition this is 65° − (−65°) = 130°. Calibrated against the target aspect ratio, the span is fine-tuned to match the display proportion, ensuring the picture is not stretched. The target horizontal viewing angle is then determined, taking the partition's horizontal span as the core and combining it with the installation center position; finally, target horizontal viewing angle = partition horizontal span, ensuring the partition directly behind is fully covered. For a rear-side sensor, the single-side partition span is −60° − (−125°) = 65°. Because the rear-side sensors are mounted on the sides of the vehicle body, the viewing angle must be adjusted by the installation horizontal tilt so that the partition is parallel to the vehicle side; if the installation tilt is 5°, the viewing angle span is corrected to 70°. The span is fine-tuned for wide-screen or standard proportion requirements; the final target horizontal viewing angle equals the corrected partition span, ensuring coverage of the rear-side partition while adapting to the overall display proportion. Combining the three factors of field partition, aspect ratio, and installation position parameters ensures that each sensor's viewing angle both covers its assigned area and adapts to the overall display requirement.
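The span arithmetic of step S305 (end angle minus start angle, plus a tilt correction for side-mounted sensors) can be checked with a few lines; treating the tilt as a simple additive widening is an assumption taken from the 5° → 70° example:

```python
def sensor_target_fov(start_deg, end_deg, mount_tilt_deg=0.0):
    """Target horizontal viewing angle of one sensor: the span of its
    assigned field partition, widened by the mounting tilt correction."""
    return (end_deg - start_deg) + mount_tilt_deg

rear = sensor_target_fov(-65.0, 65.0)          # tail sensor: 130 degrees
side = sensor_target_fov(-125.0, -60.0, 5.0)   # side sensor: 65 + 5 = 70 degrees
```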
In step S306, it is checked whether each target horizontal viewing angle is within the hardware adjustment range of the corresponding sensor.
In this embodiment, whether each sensor's target horizontal viewing angle lies within its adjustment range is checked one by one; any target beyond its range takes the boundary value. Multi-sensor verification is completed quickly by comparing against the range boundaries, suiting the real-time adjustment requirement.
Step S307, if all three sensors are within range, respectively sending viewing angle adjustment instructions to the three sensors, synchronously adjusting their viewing angles to the corresponding target horizontal viewing angles, and calibrating the stitching parameters based on feature points in the overlapping areas.
The feature points of an overlapping area are picture elements in the overlap of two adjacent sensors' acquisition fields that can be clearly identified, are not easily deformed, and appear in both sensors' pictures.
In this embodiment, if each target horizontal viewing angle is within the hardware adjustment range of its sensor, viewing angle adjustment instructions are sent to the three sensors. The real-time picture acquired by each adjusted sensor is retrieved, feature points in the overlapping areas are identified, and their pixel coordinates in each sensor's picture are recorded. Based on matching the feature points' coordinates across the sensor pictures, core parameters such as stitching offset, rotation angle, and scaling are calculated, calibrating the stitching parameters. This ensures the feature points of the overlapping areas align exactly after stitching, avoiding ghosting and misalignment. Calibrating with overlap-area feature points aligns the pictures at the pixel level, thoroughly eliminating stitching gaps and ghosting and improving the visual effect.
Step S308, if a sensor's target horizontal viewing angle exceeds its hardware range, taking the viewing angle of the core field area as the optimization target; the out-of-range rear-side sensor adopts the minimum hardware viewing angle, and distortion compensation of the stitching overlap area is optimized algorithmically.
In this embodiment, if any sensor's target horizontal viewing angle exceeds its hardware range, the target horizontal viewing angle of the tail vision sensor is treated as the core: it must fully match the calculated value, with no compromise correction. For a rear-side sensor beyond its hardware range, the minimum hardware viewing angle is taken directly. After the rear-side sensor's viewing angle is corrected, distortion deviation such as picture stretching or angle deviation may appear in the overlap between its covered field and the adjacent sensor. By comparing the pixel positions of the overlap-area feature points in the two sensors' pictures, indicators such as distortion offset and stretching coefficient are calculated. Based on these distortion indicators, the rear-side sensor's picture is corrected, adjusting the overlap area's pixel coordinates to align with the feature points of the tail sensor's picture. A feathering algorithm smooths the transition at the edge of the overlap area, eliminating stitching marks caused by the viewing angle correction and optimizing the distortion compensation of the stitching overlap area. This combined scheme of hardware boundary values and algorithmic compensation works around the sensor hardware limitation and achieves a usable stitching effect without changing hardware.
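Feathering, in its simplest form, is a linear crossfade across the overlap. A per-pixel sketch, assuming grayscale intensities and a blend weight `t` that runs from 0 at one sensor's edge of the overlap to 1 at the other's:

```python
def feather_blend(left_px, right_px, t):
    """Linear feathering across a stitching overlap: weight t in [0, 1]
    moves from fully the left sensor's pixel to fully the right's."""
    return (1.0 - t) * left_px + t * right_px

print(feather_blend(100, 200, 0.0))  # left edge: pure left-sensor pixel
print(feather_blend(100, 200, 0.5))  # seam center: even mix -> 150.0
```

Real pipelines feather per color channel and often use non-linear ramps, but the principle of a smooth weight transition across the seam is the same.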
In this embodiment, the accuracy of the core area's viewing angle is guaranteed first, satisfying the user's main use requirement and preventing the rear-side sensors' limitations from affecting the core function; and effort is focused on the key link of the overlapping area, precisely solving the stitching problems the viewing angle correction introduces.
Based on any of the foregoing embodiments of the present application, an eighth embodiment of the control method of the central control screen is provided; content already described in the foregoing embodiments is not repeated here. On this basis, the step of acquiring the video stream acquired by the adjusted tail vision sensor and rendering and displaying the tail image control in the target candidate pixel area includes the following steps:
Step S41, synchronously acquiring the original video streams respectively acquired by the adjusted tail vision sensor, the first rear-side sensor, and the second rear-side sensor, and performing frame synchronization based on each sensor's timestamps.
In this embodiment, the adjusted tail vision sensor, first rear-side sensor, and second rear-side sensor each acquire an original video stream synchronously, and a timestamp is extracted from each sensor's video frame data to ensure a consistent time reference. Each sensor's video frames are sorted by timestamp and a continuous frame-sequence index is generated, making it easy to look up the frame data for a given time point. Taking the tail vision sensor's frame timestamps as the reference, the frames with the closest timestamps are searched for in the frame sequences of the first and second rear-side sensors and determined as synchronized frame pairs. Timestamp-based frame synchronization ensures the multi-sensor pictures are acquired at the same time point, avoiding stitching misalignment caused by time differences.
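The nearest-timestamp pairing of step S41 can be sketched with a binary search over each sensor's sorted frame timestamps (millisecond integers here are an assumption):

```python
import bisect

def nearest_frame(timestamps, ref_ts):
    """Index of the frame whose timestamp is closest to ref_ts.
    timestamps must be sorted ascending, like the frame-sequence index."""
    i = bisect.bisect_left(timestamps, ref_ts)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
    return min(candidates, key=lambda j: abs(timestamps[j] - ref_ts))

side_ts = [0, 33, 66, 100]   # a side sensor's frame timestamps (ms)
print(nearest_frame(side_ts, 35))   # tail frame at 35 ms pairs with frame 1 (33 ms)
print(nearest_frame(side_ts, 120))  # past the end: pairs with the last frame
```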
Step S42, based on each sensor's adjusted acquisition viewing angle and the preset overlapping areas, matching and aligning the feature points in the overlapping areas of the three video streams to generate a panoramic video stream covering the vehicle tail and both rear sides.
In this embodiment, the three frame-synchronized original video streams, each sensor's final adjusted acquisition viewing angle, and the preset overlapping areas are invoked. From the sensor acquisition viewing angles, the physical field corresponding to each video stream is determined, the pixel coordinate intervals of the overlapping areas are marked in the video frames, and the target regions for feature point matching are delimited. Stable feature points are extracted from the overlapping area of each stream, and, taking the tail sensor's video stream as the reference, the overlap-area feature points are matched with those of the corresponding regions of the two rear-side sensors. The aligned three streams are stitched according to the preset physical field, the non-overlapping areas of the panoramic picture are filled in, full coverage of the field behind the vehicle tail and both rear sides is ensured, and a panoramic video stream covering the vehicle tail and both rear sides is generated. Aligning overlapping areas through feature point matching solves the problems of stitching misalignment and ghosting and improves the visual experience.
Step S43, obtaining the pixel resolution, boundary coordinates, and target aspect ratio of the target candidate pixel area, and calculating the adaptation ratio between the panoramic video stream and the target area, where if the original proportion of the panoramic video stream matches the target aspect ratio, the resolution is scaled proportionally.
In this embodiment, the pixel resolution, boundary coordinates, and target aspect ratio of the target candidate pixel area are acquired, and the original resolution and original aspect ratio of the panoramic video stream are extracted. Whether the original aspect ratio of the panoramic video stream equals the target aspect ratio is judged; if they match, the panoramic video stream is scaled proportionally by resolution. Calculating from the actual parameters of the target area and the video stream makes the scaled resolution match the target area exactly, giving a precise fill effect.
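The equal-ratio scaling branch of step S43 reduces to one uniform scale factor once the aspect ratios are confirmed to match. A sketch with an assumed tolerance for floating-point comparison:

```python
def fit_scale(src_w, src_h, dst_w, dst_h, tol=1e-3):
    """Uniform scale factor that maps the panoramic stream (src) onto the
    target candidate pixel area (dst), valid only when aspect ratios match."""
    if abs(src_w / src_h - dst_w / dst_h) > tol:
        raise ValueError("aspect ratios differ; equal-ratio scaling does not apply")
    return dst_w / src_w

# a 3840x1080 panoramic stream into a 1920x540 target area -> scale 0.5
scale = fit_scale(3840, 1080, 1920, 540)
```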
Step S44, scaling the stitched panoramic video stream according to the adaptation ratio to obtain adapted video frames, and determining the rendering position according to the boundary coordinates of the target candidate pixel area.
An adapted video frame is a single frame whose resolution and aspect ratio, after the panoramic video stream is scaled, exactly match the target candidate pixel area.
In this embodiment, the calculated adaptation ratio is invoked, the original proportion of the panoramic video stream is confirmed to match the target aspect ratio, and the stitched panoramic video stream is scaled frame by frame according to the scaling coefficient to obtain the adapted video frames. The specific coordinates of the upper-left and lower-right corners of the target candidate pixel area's boundary are extracted. Taking the boundary coordinates as the reference, the rendering range of the adapted video frame is determined: the frame's upper-left corner is aligned with the area's upper-left coordinates and its lower-right corner with the area's lower-right coordinates, ensuring no offset. Direct alignment to boundary coordinates makes the video frame fit the target area exactly, with no black borders and nothing beyond the boundary.
Step S45, taking the adapted video frames as the bottom-layer picture of the tail image control and performing hierarchical fusion with the navigation control, so that the display layer of the tail image control does not block the key interaction elements of the navigation control.
In this embodiment, the navigation control is set to high priority and the tail image control to low priority, so that complete display of the high-priority control is guaranteed first. The adapted video frames serve as the bottom-layer picture and are hierarchically fused with the navigation control, so that the tail image control's display layer does not block the navigation control's key interaction elements and visual recognition is not affected. Through this hierarchical fusion, key navigation operations are unaffected while the tail panoramic view is displayed in full, satisfying the dual requirements of the driving scenario.
And step S46, rendering and displaying the tail image control in the target candidate pixel area.
In this embodiment, the boundary coordinates and resolution of the target candidate pixel area are invoked to make the exact rendering range explicit, and parameters such as the tail image control's bottom-layer picture and layer configuration are acquired. The central control screen renders and displays the tail image control in the target candidate pixel area. The result is a complete display of the tail panoramic view with the navigation control's key operation areas remaining visible and interactive, so the interface as a whole meets the needs of the driving scenario and is convenient for the user while driving.
In this embodiment, the video stream acquired by the adjusted tail vision sensor is obtained and the tail image control is rendered and displayed in the target candidate pixel area. The video stream fully covers the core tail view with no key environmental information omitted; because it comes from precisely adjusted acquisition, environmental details are reproduced with high fidelity, helping the user judge distance and road conditions accurately.
For an exemplary embodiment, for helping to understand the implementation flow of the control method of the central control panel obtained by combining the above embodiments, please refer to fig. 4 and fig. 5, fig. 4 is a schematic diagram before adding the tail image control in the embodiment of the present application, and fig. 5 is a schematic diagram after adding the tail image control in the embodiment of the present application, specifically:
As shown in fig. 4, the controls displayed on the central control screen include a navigation control 100 and a music control 200, when an instruction for adding a tail image control is performed, the navigation control 100 and the music control 200 are determined to be first display controls, and a first pixel area of the first display controls is determined, wherein the navigation control 100 belongs to a core application, and the music control 200 is a non-core application. According to the aspect ratio corresponding to the pixel area and the adding instruction, a minimum necessary scaling strategy is adopted for the navigation control 100, a self-adaptive scaling strategy is adopted for the music control 200, a residual layout space of the first pixel area is obtained after scaling processing, at least one vehicle tail image control placement area meeting the aspect ratio requirement is planned to serve as a candidate pixel area based on the size of the residual layout space and the aspect ratio corresponding to the adding instruction, and the candidate pixel area of the vehicle tail image control is rendered and displayed in the central control screen. And then adjusting the acquisition visual angle of the visual sensor according to the corresponding aspect ratio. And after the video stream acquired by the regulated tail vision sensor is acquired, processing the video stream to obtain an adaptive video frame, and then taking the adaptive video frame as a bottom layer picture of a tail image control, carrying out hierarchical fusion with the navigation control 100 to ensure that the display hierarchy of the tail image control does not shield key interaction elements of the navigation control 100, then rendering and displaying the tail image control 300 in a target candidate pixel area, and rendering and displaying an image as shown in fig. 5, wherein the frame selection area is the added tail image control 300. 
It should be noted that the rear image shown in fig. 5 has a significantly wider viewing angle than a conventional reversing image and does not produce excessive distortion. The tail vision sensor selected in the present application can be arranged at the top of the vehicle tail in addition to the lower bumper; that is, a first tail vision sensor and a second tail vision sensor may both exist. The specific viewing-angle adjustment scheme and image stitching scheme are as described in the previous embodiment and are not repeated here.
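The viewing-angle adjustment referred to here (and detailed in claim 6) maps the displayed aspect ratio to a target horizontal viewing angle through a picture-ratio mapping relation, snaps it to the sensor's adjustment step, and falls back to a hardware limit when the target is out of range. A minimal sketch follows; the base aspect ratio, base angle, and the 0.1-ratio-to-5-degree mapping are illustrative assumptions, not values taken from the application.

```python
def target_horizontal_angle(display_aspect, base_aspect=4 / 3, base_angle=60.0,
                            ratio_step=0.1, angle_step=5.0,
                            min_angle=50.0, max_angle=120.0, adjust_step=1.0):
    """Picture-ratio mapping: every `ratio_step` increase of the displayed
    aspect ratio adds `angle_step` degrees of horizontal viewing angle.
    The result is snapped to the sensor's viewing-angle adjustment step
    and clamped to the sensor's hardware range [min_angle, max_angle]."""
    steps = (display_aspect - base_aspect) / ratio_step
    angle = base_angle + steps * angle_step
    # snap to the sensor's viewing-angle adjustment step length
    angle = round(angle / adjust_step) * adjust_step
    # an out-of-range target falls back to the nearest hardware limit
    return min(max(angle, min_angle), max_angle)
```

With these assumed parameters, a 16:9 target area yields a target angle of 82 degrees, while an extreme panoramic ratio saturates at the 120-degree hardware maximum.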
It should be noted that the foregoing examples are only for understanding the present application and do not limit the control method of the central control screen of the present application; further simple transformations based on this technical concept all fall within the protection scope of the present application.
The present application provides a control device of a central control screen, which comprises at least one processor and a memory in communication connection with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the control method of the central control screen in the first embodiment.
Referring now to FIG. 6, a schematic structural diagram of a control device of a central control screen suitable for implementing an embodiment of the present application is shown. The control device of the central control screen in the embodiment of the present application may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer and a vehicle-mounted terminal, and fixed terminals such as a digital TV and a desktop computer. The control device of the central control screen shown in fig. 6 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the control device of the central control screen may include a processing device 1001 (e.g., a central processing unit, a graphics processor, etc.), which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage device 1003 into a random access memory (RAM) 1004. Various programs and data required for the operation of the control device of the central control screen are also stored in the random access memory 1004. The processing device 1001, the read-only memory 1002, and the random access memory 1004 are connected to each other by a bus 1005. An input/output (I/O) interface 1006 is also connected to the bus 1005. In general, the following may be connected to the I/O interface 1006: an input device 1007 such as a touch screen, a touch pad, a keyboard, a mouse, an image sensor, a microphone, an accelerometer or a gyroscope; an output device 1008 including a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 1003 including a magnetic tape, a hard disk, etc.; and a communication device 1009. The communication device 1009 may allow the control device of the central control screen to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 shows a control device of a central control screen having various systems, it should be understood that not all of the illustrated systems are required to be implemented or provided; more or fewer systems may alternatively be implemented or provided.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 1009, installed from the storage device 1003, or installed from the ROM 1002. When the computer program is executed by the processing device 1001, the above-described functions defined in the method of the embodiment of the present application are performed.
By adopting the control method of the central control screen in the above embodiment, the control device of the central control screen provided by the present application can solve the technical problem that the use scenarios of the tail vision sensor are limited. Compared with the prior art, the beneficial effects of the control device of the central control screen provided by the present application are the same as those of the control method of the central control screen provided by the above embodiment, and the other technical features of the control device are the same as those disclosed in the method of the previous embodiment, which are not repeated here.
It is to be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the description of the above embodiments, particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely an illustrative embodiment of the present application, but the protection scope of the present application is not limited thereto; any variation or substitution that can easily be conceived by a person skilled in the art within the technical scope disclosed by the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application is subject to the protection scope of the claims.
The present application provides a computer-readable storage medium having computer-readable program instructions (i.e., a computer program) stored thereon for executing the control method of the center screen in the above-described embodiments.
The computer-readable storage medium provided by the present application may be, for example, a USB flash drive, but is not limited thereto, and may be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system or device. Program code embodied on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to electrical wiring, optical fiber cable, radio frequency (RF), and the like, or any suitable combination of the foregoing.
The computer readable storage medium may be included in the control device of the center screen or may exist alone without being incorporated in the control device of the center screen.
The computer-readable storage medium carries one or more programs. When the one or more programs are executed by the control device of the central control screen, the control device of the central control screen is caused to: determine, in response to an adding instruction of a tail image control, a first pixel area of a first display control, the first display control comprising a navigation control; render and display candidate pixel areas of the tail image control on the central control screen according to the first pixel area and the aspect ratio corresponding to the adding instruction; adjust, in response to a triggering operation for a target candidate pixel area, the acquisition viewing angle of the tail vision sensor according to the aspect ratio; and acquire the video stream collected by the adjusted tail vision sensor and render and display the tail image control in the target candidate pixel area.
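The last step above, adapting the acquired video stream to the target candidate pixel area (detailed in claim 8), can be sketched as follows. The function name and the fit-inside fallback for mismatched aspect ratios are assumptions of this sketch; the application only specifies equal-ratio scaling for the matching case.

```python
def adapt_frame_size(frame_w, frame_h, target_w, target_h, tol=1e-6):
    """Compute the adapted size of a panoramic video frame for a target
    candidate pixel region. If the frame's aspect ratio already matches
    the target's, a single equal-ratio scale to the target resolution
    suffices; otherwise the frame is scaled to fit inside the region
    (an assumption - the mismatch case is not specified in this excerpt)."""
    frame_aspect = frame_w / frame_h
    target_aspect = target_w / target_h
    if abs(frame_aspect - target_aspect) < tol:
        # aspect ratios consistent: scale equally by the resolution ratio
        return target_w, target_h
    # otherwise fit inside the region, preserving the frame's own ratio
    scale = min(target_w / frame_w, target_h / frame_h)
    return round(frame_w * scale), round(frame_h * scale)
```

For example, a 1920x1080 panoramic frame maps directly onto a 960x540 region, while an 800x600 region would receive an 800x450 frame whose rendering position is then determined by the region's boundary coordinates.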
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present application may be implemented in software or in hardware. Wherein the name of the module does not constitute a limitation of the unit itself in some cases.
The readable storage medium provided by the present application is a computer-readable storage medium storing computer-readable program instructions (i.e., a computer program) for executing the control method of the central control screen, and can solve the technical problem that the use scenarios of the tail vision sensor are limited. Compared with the prior art, the beneficial effects of the computer-readable storage medium provided by the present application are the same as those of the control method of the central control screen provided by the above embodiment, and are not repeated here.
The embodiment of the application provides a computer program product, which comprises a computer program, and the computer program realizes the steps of the control method of the central control screen when being executed by a processor.
The computer program product provided by the present application can solve the technical problem that the use scenarios of the tail vision sensor are limited. Compared with the prior art, the beneficial effects of the computer program product provided by the embodiment of the present application are the same as those of the control method of the central control screen provided by the above embodiment, and are not repeated here.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the application, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein, or any application, directly or indirectly, within the scope of the application.

Claims (10)

1. A control method of a central control screen, characterized by comprising the following steps:
Determining a first pixel area of a first display control in response to an adding instruction of a vehicle tail image control, wherein the first display control comprises a navigation control;
Rendering and displaying candidate pixel areas of the tail image control on a central control screen according to the first pixel areas and the length-width ratio corresponding to the adding instruction;
responding to triggering operation aiming at a target candidate pixel area, and adjusting an acquisition view angle of the vehicle tail vision sensor according to the aspect ratio;
and acquiring the video stream acquired by the adjusted vehicle tail vision sensor, and rendering and displaying a vehicle tail image control in the target candidate pixel area.
2. The control method of the central control screen according to claim 1, characterized in that before the step of determining, in response to the adding instruction of the tail image control, a first pixel area of a first display control, the first display control comprising a navigation control, the method comprises:
responding to a control adding instruction, and acquiring layout information of a central control screen;
determining candidate aspect ratios according to the layout information and the pixel size of the central control screen;
In response to a click operation of the target candidate aspect ratio, an add instruction is generated.
3. The method for controlling a center screen according to claim 2, wherein after the step of determining the candidate aspect ratio according to the layout information and the pixel size of the center screen, the method comprises:
displaying a sensor selection control, wherein the sensor selection control is provided with a vehicle tail vision sensor, a first side rear sensor and a second side rear sensor;
Determining a sensor combination in response to a triggering operation of a target sensor in the sensor selection control;
And updating the candidate aspect ratio according to the acquisition view angle of the sensor combination.
4. The method for controlling a center control panel according to claim 1, wherein the determining a first pixel area of a first display control, the first display control including a navigation control, includes:
If the first display control only comprises a navigation control, acquiring a pixel coordinate range of the navigation control in the current layout of the central control screen, and determining the pixel coordinate range as a first pixel area;
If the first display control comprises a navigation control and at least one other application control, identifying pixel occupied areas of the navigation control and each other application control, calculating total pixel coverage according to layout relations of the controls, and determining the total pixel coverage as a first pixel area.
5. The method for controlling a center control panel according to claim 1, wherein the step of rendering and displaying the candidate pixel area of the tail image control on the center control panel according to the first pixel area and the aspect ratio corresponding to the adding instruction comprises:
Identifying the application program type of each application program control in the first display control, wherein the application program type comprises a core application and a non-core application, the navigation control belongs to the core application, and other application program controls are non-core applications;
adopting a minimum necessary scaling strategy for the control corresponding to the core application and an adaptive scaling strategy for the control corresponding to the non-core application, wherein the minimum necessary scaling strategy is the minimum scaling that keeps the key information of the control completely displayed;
according to the minimum necessary scaling strategy and the self-adaptive scaling strategy, scaling the first pixel area to obtain a residual layout space of the scaled first pixel area;
And planning at least one vehicle tail image control placement area meeting the aspect ratio requirement based on the size of the residual layout space and the aspect ratio corresponding to the adding instruction, determining the placement area as a candidate pixel area, and rendering and displaying.
6. The control method of a center screen according to claim 1, wherein the step of adjusting an acquisition viewing angle of the tail vision sensor according to the aspect ratio in response to a trigger operation for a target candidate pixel region comprises:
acquiring an actual display aspect ratio of a target candidate pixel region in response to a trigger operation for the target candidate pixel region;
determining an acquisition visual angle range to be adjusted based on hardware parameters of the vehicle tail visual sensor, wherein the hardware parameters comprise a maximum horizontal visual angle, a minimum horizontal visual angle and a visual angle adjusting step length;
Calculating a target horizontal viewing angle meeting the actual display length-width ratio according to a picture proportion mapping relation between the actual display length-width ratio and a vehicle tail vision sensor, wherein the picture proportion mapping relation is that every time the actual display length-width ratio is increased by a preset proportion value, the target horizontal viewing angle is correspondingly increased by a preset angle value;
Judging whether the target horizontal viewing angle is in the acquisition viewing angle range or not based on the acquisition viewing angle range to be adjusted;
if the target horizontal viewing angle is within the acquisition viewing angle range, sending a viewing-angle adjustment instruction to the tail vision sensor, and controlling the tail vision sensor to adjust its acquisition viewing angle to the target horizontal viewing angle;
and if the target horizontal viewing angle exceeds the acquisition viewing angle range, adjusting the acquisition viewing angle of the tail vision sensor to the maximum horizontal viewing angle or the minimum horizontal viewing angle.
7. The control method of a center screen according to claim 1, wherein the step of adjusting an acquisition viewing angle of the tail vision sensor according to the aspect ratio in response to a trigger operation for a target candidate pixel region comprises:
responding to a triggering operation aiming at a target candidate pixel area, and acquiring a pixel range and a corresponding target aspect ratio of the target candidate pixel area;
based on the selected vehicle tail vision sensor, the first side rear sensor and the second side rear sensor, respectively acquiring initial acquisition visual angles, installation position parameters and hardware distortion coefficients of the sensors;
Determining a physical visual field range covered by the spliced picture according to the pixel range and the target length-width ratio;
calculating the field of view partition needed to be born by each sensor based on the physical field of view range;
based on the visual field partition and the target length-width ratio of each sensor, respectively calculating the target horizontal visual angle of the vehicle tail visual sensor, the target horizontal visual angle of the first side rear sensor and the target horizontal visual angle of the second side rear sensor by combining the installation position parameters of the sensors;
Checking whether each target horizontal visual angle is in a hardware adjusting range of a corresponding sensor;
if all the target horizontal viewing angles are within range, respectively sending viewing-angle adjustment instructions to the three sensors, synchronously adjusting them to the corresponding target horizontal viewing angles, and calibrating stitching parameters based on feature points of the overlapping areas;
if the target horizontal viewing angle of a sensor exceeds its hardware range, taking the conformity of the viewing angle of the sensor covering the core field-of-view area with its target value as the optimization objective, adopting the hardware viewing-angle limit for the out-of-range side-rear sensor, and compensating the distortion of the stitching overlap area through an algorithm.
8. The method for controlling a center control panel according to claim 1, wherein the step of obtaining the video stream collected by the adjusted tail vision sensor and rendering and displaying a tail image control in the target candidate pixel area comprises the steps of:
synchronously acquiring original video streams respectively acquired by the adjusted vehicle tail vision sensor, the first side rear sensor and the second side rear sensor, and carrying out frame synchronization processing based on time stamps of the sensors;
performing feature-point matching on the overlapping areas of the three video streams based on the adjusted acquisition viewing angles of the sensors and the preset overlapping areas, to generate a panoramic video stream covering the vehicle tail and both rear sides;
acquiring pixel resolution, boundary coordinates and a target length-width ratio of the target candidate pixel region, and calculating an adaptation proportion of the panoramic video stream and the target region, wherein if the original proportion of the panoramic video stream is consistent with the target length-width ratio, scaling is performed according to the resolution equal ratio;
Scaling the spliced panoramic video stream according to the adaptation proportion to obtain an adaptation video frame, and determining a rendering position according to boundary coordinates of a target candidate pixel region;
Taking the adaptive video frame as a bottom layer picture of the tail image control, and carrying out hierarchical fusion with the navigation control so that the display hierarchy of the tail image control does not shield key interaction elements of the navigation control;
and rendering and displaying a tail image control in the target candidate pixel area.
9. Control device of a central control panel, characterized in that it comprises a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program being configured to implement the steps of the control method of a central control panel according to any one of claims 1 to 8.
10. A storage medium, characterized in that the storage medium is a computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, realizes the steps of the control method of a center screen according to any one of claims 1 to 8.
CN202511715969.1A 2025-11-21 2025-11-21 Control method, equipment and storage medium of central control screen Pending CN121179977A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202511715969.1A CN121179977A (en) 2025-11-21 2025-11-21 Control method, equipment and storage medium of central control screen


Publications (1)

Publication Number Publication Date
CN121179977A true CN121179977A (en) 2025-12-23

Family

ID=98085413

Country Status (1)

Country Link
CN (1) CN121179977A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115460352A (en) * 2022-11-07 2022-12-09 摩尔线程智能科技(北京)有限责任公司 Vehicle-mounted video processing method, device, equipment, storage medium and program product
CN119620895A (en) * 2025-02-12 2025-03-14 深圳市天之眼高新科技有限公司 Vehicle-mounted central control screen control method, vehicle-mounted central control screen and storage medium
WO2025147819A1 (en) * 2024-01-08 2025-07-17 京东方科技集团股份有限公司 Display apparatus, display method, display system, and electronic device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination